Public clouds represent two changes. One is easy to appreciate: the physical location of data has moved from on-site infrastructure to the remote compute and storage of the public cloud providers. This is easy to visualize, and I think anyone with a rudimentary understanding of the cloud can grasp it. Indeed, it is hard to conceive of the cloud without having this shift primarily in mind.
But what often escapes consideration when moving to the public cloud is that it's not just where data resides that changes, but also how that data is managed. This is less immediately apparent, and many a cloud migration strategy has failed because it didn't take this into account. To ease these migrations, the concept of hybrid cloud infrastructure is becoming increasingly popular. Datera is a company looking to operate in that space.
Hybrid Clouds, Why Not Both?
The public cloud has certainly changed enterprise IT profoundly. It provides nearly limitless scale, impressive utilization, and a shift of capital investments to operational spending. However, it often fails to deliver enterprise-level performance consistently, and it can lack the fine-grained controls organizations have come to expect. Datera is building a cloud data management foundation for on-site clouds. Their goal is to make this layer autonomous and transparent to the organization, offering the agility of the public cloud with enterprise-class performance and control.
They provide what they call an Elastic Data Fabric: a high-performance, low-latency, automated, and flexible solution for enterprise workloads, delivered as elastic block storage for hybrid clouds. Part of the data management problem in the liminal stage between traditional and cloud infrastructure is the prevalence of data silos. Part of Datera's answer is a move away from cloud-specific data services in favor of ones that work across clouds.
Stitching the Data Fabric
Using this data fabric, Datera provides the same kind of scale and automation people have come to expect from the public cloud, but across all of an organization's infrastructure choices, with a single control plane. The system itself is a scale-out distributed storage solution. On-site, it is fairly hardware agnostic, able to accommodate hybrid and all-flash arrays simultaneously. Datera uses agents to "broadcast" the capabilities of each storage node, which gives the control plane the visibility it needs to pool that storage as a resource. The nodes are connected via an Ethernet-based fabric.
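To make the "broadcast" idea concrete, here is a minimal sketch of how agent-advertised capabilities might be pooled by a control plane. Everything here (the field names, media types, and pooling logic) is an illustrative assumption, not Datera's actual agent protocol:

```python
# A minimal sketch of capability "broadcasting" -- field names and
# pooling logic are illustrative assumptions, not Datera's protocol.
from dataclasses import dataclass

@dataclass
class NodeCapabilities:
    node_id: str
    media: str        # e.g. "all-flash" or "hybrid"
    capacity_gib: int
    max_iops: int

def pool_capacity(advertisements: list[NodeCapabilities]) -> dict:
    """Aggregate per-node advertisements into one pooled view,
    the way a control plane might before placing volumes."""
    return {
        "total_capacity_gib": sum(n.capacity_gib for n in advertisements),
        "total_iops": sum(n.max_iops for n in advertisements),
        "all_flash_nodes": [n.node_id for n in advertisements if n.media == "all-flash"],
        "hybrid_nodes": [n.node_id for n in advertisements if n.media == "hybrid"],
    }

if __name__ == "__main__":
    ads = [
        NodeCapabilities("node-1", "all-flash", 7680, 70_000),
        NodeCapabilities("node-2", "hybrid", 15360, 30_000),
    ]
    print(pool_capacity(ads))
```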
Both this on-site storage and public cloud storage resources can be managed through Datera's CloudOperations. This provides not just active monitoring, but also predictive analytics on performance and provisioning needs, including adding and deleting nodes. Datera has a fairly comprehensive list of data services available to storage within their hybrid cloud umbrella. Compression and thin provisioning are two of the most notable, but QoS, clones, snapshotting, and replication are all available.
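As a sketch of what attaching those data services at provisioning time could look like, consider a generic REST call. The endpoint, payload schema, and values below are hypothetical placeholders, not Datera's documented API:

```python
# Hypothetical REST call illustrating volume provisioning with data
# services attached; URL, payload schema, and token are assumptions.
import requests

API = "https://mgmt.example.local/api/v1"   # placeholder management endpoint
TOKEN = "..."                               # placeholder auth token

volume = {
    "name": "app-vol-01",
    "size_gib": 500,
    "thin_provisioned": True,     # allocate on write rather than up front
    "compression": True,
    "replication_factor": 3,      # copies kept across nodes
    "qos": {"max_iops": 10_000, "max_bandwidth_mibps": 250},
}

resp = requests.post(f"{API}/volumes", json=volume,
                     headers={"Authorization": f"Bearer {TOKEN}"}, timeout=10)
resp.raise_for_status()
print("created:", resp.json())
```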
Performance seems to be a highlight of Datera's solution. They're seeing up to 70,000 IOPS per node, with sub-millisecond latency at the high end on a 70/30 read/write split. This scales up to twenty nodes, where you could expect up to 1.4 million IOPS.
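That cluster maximum appears to be simple linear scaling of the per-node figure, which a quick back-of-the-envelope check confirms:

```python
# Sanity check: the quoted 1.4M IOPS is the per-node figure scaled
# linearly to twenty nodes (real clusters may scale less than linearly).
per_node_iops = 70_000
nodes = 20
print(f"{per_node_iops * nodes:,} IOPS")  # 1,400,000 IOPS
```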
There is no shortage of organizations looking to move toward the cloud. But the move is often hampered by considerable capital investments, regulatory requirements, and performance considerations. In these situations, the road to rolling your own cloud can seem intimidating. Datera's Elastic Data Fabric is the type of solution aimed at those organizations. It brings the agility and automation of the cloud without losing control or locking you into one public cloud provider. Not every company has the luxury of designing its infrastructure from scratch. Datera is designed for those working with a legacy of investments but a need to move to more agile infrastructure.
For an architectural overview and to see how Datera integrates with Kubernetes and OpenStack, check out their presentation from Storage Field Day.
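As a rough illustration of how such storage is typically consumed from Kubernetes, the sketch below requests a volume through a PersistentVolumeClaim using the official Kubernetes Python client. The "datera" StorageClass name is an assumed placeholder; the actual class name and driver details are covered in the presentation:

```python
# Sketch: claiming block storage from Kubernetes via a PVC. The "datera"
# StorageClass name is an assumption for illustration, not a documented name.
from kubernetes import client, config

config.load_kube_config()  # use the current kubeconfig context

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "datera-demo-claim"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "datera",  # assumed class backed by the data fabric
        "resources": {"requests": {"storage": "100Gi"}},
    },
}

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```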