Before the holidays, I posed a question on Google+ that generated quite a bit of interest and feedback. Now that the discussion has settled down a bit, I’d like to summarize the elements that remain unresolved before FCoE can become a truly world-class storage interconnect.
Setting the Stage
FCoE has been a controversial topic in both storage and networking, and for good reason. No one would deny that Ethernet, on its own, is a less-than-ideal transport for block storage I/O. “Porting” Fibre Channel to run on Ethernet networks has been a supreme technical challenge, and many companies and individuals have labored long and hard to make FCoE a reality.
Now that FCoE is specified in the standard and has been deployed in production environments, the question turns to its future. Will it take off and seize the mantle of dominance currently held by what I like to retroactively call “Fibre Channel over Fibre Channel?” Will the two coexist for the next decade, with FCoE mainly deployed in blade environments such as Cisco UCS? Or will FCoE ultimately fail to catch on, displaced by some other storage protocol like plain FC, iSCSI, NFS, or something entirely different?
The data center needs a flexible new protocol to meet the needs of virtual environments, and the convergence of storage and data networking makes a great deal of sense there. This was the root of my question, and I asked it in all earnestness.
My question: What elements remain unresolved to make FCoE truly world-class? What should the vendors be prioritizing? Here are the answers I received.
Technical Considerations
Link Aggregation on CNAs
Converged network adapters (CNAs) allow multiple protocols to share a single Ethernet connection, and some also include multiple ports that can be aggregated. In traditional Ethernet networks, link aggregation is a respectable approach to performance and availability. But storage networks have traditionally relied on host-based MPIO software, and the two features are mutually exclusive. The consensus seems to be to avoid link aggregation on CNAs used for storage traffic and to let MPIO handle load balancing and failover instead.
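To make the MPIO side of that trade-off concrete, here is a minimal sketch of a Linux dm-multipath configuration. It assumes the host has device-mapper-multipath installed and two CNA ports, each presenting its own path to the same LUNs; the settings are illustrative, not a recommendation for any particular array.

    # /etc/multipath.conf -- minimal illustrative settings
    defaults {
        user_friendly_names yes          # label devices mpatha, mpathb, ...
        path_grouping_policy multibus    # spread I/O across all available paths
        failback immediate               # return to a restored path as soon as it comes back
    }

With both ports zoned to the array, each shows up as an independent SCSI path, and it is MPIO rather than the Ethernet layer that decides how I/O is balanced and where it fails over.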
How Do You Handle Virtual Machine Mobility?
As I described recently, virtual machine mobility is a major technical challenge for existing networks. VMware’s proposal, VXLAN, seems to be gaining traction right now, but it is only a solution for data networking. How will FCoE SANs handle virtual machine mobility? This remains unresolved as far as I can tell, though Ethernet switch vendors have come up with their own answers. Brocade demonstrated just such a solution at Networking Field Day 2, and I know that others have answers as well. But will there be an interoperable industry solution?
How Should FCoE Be Implemented Over Longer Distances?
Fibre Channel has traditionally relied on routers and other protocols (FCIP and iFCP) to span distances, but FCoE raises the possibility of native traversal. While it is certainly possible to stretch FCoE across a distance, it is neither recommended nor supported: without TCP/IP or any routing mechanism, it’s simply a bad idea. But I imagine it won’t be long before vendors decide to give it a go anyway.
Implementation Considerations
Is TRILL Required for FCoE Networks?
This has been one of my own questions since the very beginning. Clearly, edge-only FCoE works just fine without TRILL. But as networks become more complicated and virtual machines move around, it seems an awfully good idea to have some protocol to relieve east-west traffic concerns. I feel much better with TRILL (or a similar Ethernet fabric technology) in a complicated FCoE network.
Should All Switches Be Full FC Forwarders?
There are a number of ways to implement FCoE on an Ethernet network, and not all of them involve building a full Fibre Channel stack in each switch. While many (including myself) assumed that FCoE implied Fibre Channel forwarding in every switch, this is clearly not the direction taken by vendors, at least initially. Perhaps the current “Ethernet forwarding” approach is only a stepping stone, or perhaps it will emerge as the dominant way FCoE is deployed.
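To make the “full FC forwarder” case concrete, here is an illustrative configuration sketch in the style of Cisco NX-OS, where the switch terminates FCoE on a virtual Fibre Channel interface and takes part in the fabric itself. The VLAN, VSAN, and interface numbers are invented for the example.

    ! Illustrative only: VLAN, VSAN, and interface numbers are hypothetical
    feature fcoe
    ! Map the FCoE VLAN to a VSAN
    vlan 100
      fcoe vsan 10
    ! Create a virtual Fibre Channel interface bound to a physical Ethernet port
    interface vfc110
      bind interface ethernet 1/10
      no shutdown
    ! Place the virtual interface in the VSAN so this switch provides the FC services for it
    vsan database
      vsan 10 interface vfc110

An “Ethernet forwarding” switch, by contrast, simply carries the FCoE frames along and leaves all of this state to a forwarder somewhere upstream.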
How Will OpenFCoE and LoM Be Used?
OpenFCoE is a software initiator that allows FCoE to run without a CNA. If it became popular, it wouldn’t be long before data center architects began looking at LAN on Motherboard (LoM) and even 10GBASE-T as potential SAN alternatives. Will this be the approach used in the long run? It could happen, but it’s certainly not here at the moment. Still, OpenFCoE is a real player, especially with Intel’s backing.
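As a rough sketch of what the software approach looks like on a Linux host, the steps below use the open-source fcoe-utils and lldpad tools. The interface name eth2 is a placeholder, and package, template, and service names vary somewhat by distribution.

    # Copy the per-interface template shipped with fcoe-utils (eth2 is hypothetical)
    cp /etc/fcoe/cfg-ethx /etc/fcoe/cfg-eth2
    # Enable DCB and the FCoE application priority on the NIC via lldpad
    service lldpad start
    dcbtool sc eth2 dcb on
    dcbtool sc eth2 app:fcoe e:1
    # Start the FCoE service and list the resulting instances
    service fcoe start
    fcoeadm -i

With a DCB-capable 10 GbE NIC, that is roughly all it takes to turn an ordinary Ethernet port into an FCoE initiator with no CNA in sight, which is exactly why LoM and 10GBASE-T enter the conversation.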
How Will Technologies like Zoning Interoperate?
Many networkers are just now beginning to see the true complexity of Fibre Channel SANs. Although interoperability of higher-level Fibre Channel functions between vendors has never been a priority in “FC over FC” SANs, Ethernet could change things. I would not be at all surprised to see a groundswell of customers demanding greater interoperability from FCoE than from FC, and zoning and VSANs are the likely first beachhead.
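For readers coming at this from the Ethernet side, here is a sketch of what zoning actually looks like, again in NX-OS style. The VSAN number, zone and zone set names, and WWPNs are all invented for illustration.

    ! Hypothetical VSAN, names, and WWPNs
    vsan database
      vsan 10
    ! A zone allows the listed initiator and target ports to see each other
    zone name esx01_to_array1 vsan 10
      member pwwn 20:00:00:25:b5:aa:00:01
      member pwwn 50:06:01:60:3c:e0:14:7a
    ! Zones are collected into a zone set, which must be activated to take effect
    zoneset name fabric_a vsan 10
      member esx01_to_array1
    zoneset activate name fabric_a vsan 10

Every vendor implements this same model, but the management interfaces and defaults differ enough that mixing switches within a single fabric is rarely attempted, and that is precisely the interoperability gap in question.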
The Big Question: When Will We See the “Killer App” For FCoE?
Just about everyone agreed that the real challenge for FCoE is market acceptance. Customers aren’t yet demanding FCoE, and vendors are finding it hard to articulate a compelling case to move from “tried-and-true” FC. Convergence, cost savings, and performance have all been put forth, but customers aren’t biting. Perhaps they just need a little time and a little more proof.
This post relies extensively on feedback from a number of people, including Ivan Pepelnjak, Tony Bourke, J Metz, Dmitri Kalintsev, Derick Winkworth, David Hardaker, Juan Lage, and Corey Hines.