
Virtualization and the High-Bandwidth Datacenter—How the Datacenter Landscape Is Changing

The times… they are a-changin’, right?! The view of the traditional datacenter is a-changin’ right along with them. My participation in #TechFieldDay sure drove that home.

The traditional datacenter is composed of servers, network, and storage. We have all seen major changes in server architectures (including newer processors, new instruction sets, faster RAM, PCIe, and blade architectures) and storage architectures (SAN/NAS functionality, SSDs/EFDs, caching improvements, replication, and storage tiering). These improvements have delivered major benefits to the users of these systems and have kept capital investment in IT moving along, since IT departments have been able to improve stability and performance because of them.

However, there has been a lack of mainstream improvement in network performance, and there is a newcomer to the datacenter, virtualization; together they stand to make a major change in how datacenters operate in the very near future.

The last major datacenter network change was the move from 10/100 to 1Gbps networking. I am sure there are some network guys who would love to debate this to no end (feel free to talk amongst yourselves, then). Server hardware began to include 1Gb NICs onboard, and the switch manufacturers dropped the price of 1Gb switching. At that point, almost anyone with some money and a datacenter could immediately realize the benefit of increasing network performance 10-fold.

Since then, though, the network has been lacking in performance gains. Technologies and techniques such as InfiniBand, port channels, and NIC teaming all boost performance in some fashion, but they are not commodity solutions like plugging an existing 1Gb NIC into a 1Gb switch port and having everything work faster. Fairly recently, however, 10Gb networking has emerged. The same pattern will follow as with the adoption of 1Gb networking… NICs onboard, switching costs drop, adoption ensues. Suddenly, 10Gb networking will penetrate the market.
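As a quick illustration of why those techniques never felt like plug-and-play upgrades, here is a rough sketch of the host-side configuration a Linux NIC team (bond) involves; the interface names (eth0, eth1, bond0) and the address are placeholders, and the switch side still needs a matching port channel configured:

    # create a bond that aggregates links with LACP (802.3ad)
    ip link add bond0 type bond mode 802.3ad

    # member NICs have to be down before they can be enslaved to the bond
    ip link set eth0 down
    ip link set eth1 down
    ip link set eth0 master bond0
    ip link set eth1 master bond0

    # bring the bond up and address it like a single interface
    ip link set bond0 up
    ip addr add 10.0.0.10/24 dev bond0

Compare that to the 1Gb transition, where swapping the NIC and the switch port was essentially the whole job.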

The new player in the datacenter is virtualization. It is common to think of virtualization as server virtualization (e.g., VMware vSphere, Microsoft Hyper-V, Citrix XenServer). However, virtualization is really just a layer of abstraction. In some situations, like server virtualization, this happens through the use of a hypervisor that abstracts the hardware from the software layer. The same concept of virtualization as abstraction, however, can be applied to other areas of the datacenter.

So, what can be abstracted and virtualized? Well… almost anything that has historically been tied to specific hardware. Categories include I/O virtualization (companies like Aprius and Xsigo), storage virtualization (Isilon/EMC, NetApp, Avere), and network virtualization (Cisco, Open vSwitch, VMware, and Citrix). Suddenly, what used to be specific to hardware has been lifted into a realm where that is no longer necessarily the case. Sure, PCIe cards are still tied to hardware for access… however, that hardware is not necessarily the physical server itself.

Higher bandwidth networking in the datacenter means that more and more data can be put on a network at any given time without impacting other operations. So, I/O functions like Fibre Channel storage and PCIe I/O are suddenly able to exist on the datacenter Ethernet network. Virtualization is becoming the mechanism to allow for the convergence of non-traditional data on the Ethernet network.

The combination of virtualization and high bandwidth networks is leading to a shift in the functions of the datacenter components. Now, we are seeing more and more intelligence placed at the service level and less and less functionality in the core datacenter components. Products are coming to market that remove functionality from the datacenter components and perform those functions themselves. SAN caches are being placed in the data path to intercept traffic to the primary SAN in order to increase performance. PCIe cards are being removed from hosts and shared across multiple hosts, over Ethernet. Network decisions (VLANs, for example) are now being pushed down to hypervisors like ESXi and Hyper-V… virtual switches are even extending across multiple hosts (the vDS, in vSphere environments). Some very powerful network switches are being virtualized and deployed in virtual server environments (see Cisco Nexus 1000v and Open vSwitch), moving away from the formerly closed hardware environments.
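To make that concrete, here is a rough sketch of how a network decision like a VLAN assignment can live on the hypervisor with Open vSwitch; the bridge, uplink, and VM interface names (br0, eth0, vnet0) and VLAN 10 are illustrative placeholders, not taken from any particular environment:

    # create a virtual switch (bridge) on the hypervisor
    ovs-vsctl add-br br0

    # attach the physical uplink NIC to the bridge
    ovs-vsctl add-port br0 eth0

    # attach a VM's virtual interface as an access port on VLAN 10
    ovs-vsctl add-port br0 vnet0 tag=10

    # show the resulting bridge, ports, and VLAN tags
    ovs-vsctl show

The point is that a decision that used to live only in a physical top-of-rack switch now lives in software on the host.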

The change in the functions of core datacenter components is really a mixed bag: partially a case of “what a great idea!” and partially “just because you can does not mean you should.” I feel that the virtualization of server hardware is leading to some pretty cool things… mostly, a chance to re-question how a service is provided. Vendors are rethinking whether an expansion needs to be a hardware device at all or can instead be a service provided over the network. This leads to server hardware becoming smaller, denser, and more cost-efficient (lower cooling and power needs with high performance).

Network virtualization is very much a middle ground for me. Removing network intelligence from network devices is not a good idea. Those devices are designed and built with the efficiency and logic needed to process data properly; removing that logic and placing it elsewhere is a step in the wrong direction. However, being able to extend network logic into areas where it was once unavailable is a major benefit to everyone. Projects and products like Open vSwitch are pushing virtual network switches to the forefront of the server virtualization world because they provide major network functionality at the virtual switch level, putting these “devices” on the same level as traditional hardware switches.

Storage virtualization is another middle-ground area for me. Abstracting the data on the storage device and allowing the device to logically place the data on tiers is amazing technology. It allows for higher performance on the most frequently used data, block- and file-level access to data on the same storage groups, file access via metadata, and so much more. The issue lies with the ever-so-precious data path. Storage admins are very hesitant to place a device in the middle of the storage path; suddenly, there is another variable that can impact the quality and consistency of corporate data. So, while a product like Avere is very cool, it sits inline, and people are wary of that.

All of the virtualized components listed above require meaningful Ethernet bandwidth to work properly. As the bandwidth available on the network increases, these new technologies are making their way into datacenters. We just need to ask ourselves, “just because we can, does that mean we should?”

About the author

Bill Hill
