FCoE IS about Rip’N’Replace (Just not your Storage)

There is a commonly held fallacy that FCoE provides a smooth migration path from your existing infrastructure to the new system. It’s true that if you look narrowly enough at your infrastructure, parts of your network don’t need upgrading. For example, your legacy FibreChannel switches, server cards and storage arrays don’t need replacing. The “Storage Stuff” gets to stay in place and ‘seamlessly interconnect’ with the new technology.

In the bigger picture, however, the impact of DCB means that your entire Data Networking infrastructure needs to be overhauled, and probably forklifted. And that’s exactly what Cisco wants you to do.

Although it’s possible to implement “patches of green” using the VCE model, where a cluster of compute power is developed brownfield in your existing data centre, you are not going to realise the benefits of a DCB infrastructure unless the entire network runs hardware capable of supporting it. You need to deliver end-to-end service guarantees in the Data Centre, and a network that doesn’t support the QoS and performance guarantees on every device isn’t going to work.
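
To make the end-to-end point concrete: a lossless class of service (PFC on the storage priority) only holds if every single hop between server and storage honours it; one legacy switch in the path and the guarantee is gone. The sketch below is a toy illustration of that logic in Python, with entirely hypothetical device names and capabilities, not a real tool or vendor API.

    # Toy model: an end-to-end lossless (PFC) guarantee exists only if
    # EVERY hop in the path supports DCB for the storage priority class.
    # Device names and capabilities are hypothetical, for illustration only.

    from typing import Dict, List

    # Hypothetical inventory: does each switch support PFC/ETS for the FCoE class?
    dcb_capable: Dict[str, bool] = {
        "access-1": True,   # new DCB-capable access switch
        "agg-1":    False,  # legacy aggregation switch, no PFC
        "core-1":   True,
    }

    def path_is_lossless(path: List[str]) -> bool:
        """The path is lossless only if every hop honours the lossless class."""
        return all(dcb_capable.get(hop, False) for hop in path)

    print(path_is_lossless(["access-1", "agg-1", "core-1"]))  # False: agg-1 breaks the guarantee
    print(path_is_lossless(["access-1", "core-1"]))           # True: every hop is DCB-capable

The point of the toy is simply that the guarantee is an AND across every device in the path, which is why a partial upgrade buys you nothing end to end.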

In a very real way, the technology emphasis is going to shift away from servers and storage and into the network layer for the coming year or two.

Storage/SCSI/FC over Ethernet is a great use case, and the Networking industry is using it to drive the first phase of migration to DCB and to spark a massive round of investment in networking infrastructure. Cisco is heavily marketing DCB to encourage a new round of spending on the data network that is long overdue, and Brocade is being forced to follow along behind it (and looks a lot like an also-ran in the process).

Time to look up, not down

It’s time for Storage Professionals to stop looking at where they are placing their feet and look at the road ahead. The future of Storage Networking is not FibreChannel forever, but some other technology yet to arrive. FCoE is THE transition technology, and it heralds the fact that Storage is now a maturing business, ready to actively integrate with the IT Team rather than continue as a silo in the corner with its own special needs.

But for the next couple of years, it looks like the network is going to take a lot of funding away from Storage as the upgrades begin.

About the author

Greg Ferro

Greg Ferro is the co-host of Packet Pushers. After surviving 25 years in Enterprise IT with only minor damage, he uses his networking expertise for good in the service of others by deep diving on technology and industry. His unique role as an inspirational cynicist brings a sense of fun, practicality and sheer talent to the world of data networking and its place in a world of clouds.

He blogs regularly at http://etherealmind.com and the podcasts are at http://packetpushers.net.

2 Comments

  • Very interesting as always. Just one thought: I've seen you bring up the “FCoE = Transition” argument a couple of times, but I'm not really sure what you are suggesting it's supposed to be transitioning *to*. Are you suggesting that eventually we're simply looking at DCB as its own solution and FC is merely one of many protocols that will flow over the Eth wire? Or am I missing something else?

  • Greg:

    Another passionate post, but–and I know you will be shocked about this–I will have to disagree on a number of fronts. So, first off, we are not looking to rip-and-replace anything. If you are happy with your Catalysts and MDS then more power to you–I could not be more pleased. Our Catalyst 6K family has been shipping for over 10 years and even the earliest MDS customer can upgrade their chassis to support 8G FC, so it's not like we have a track record of forcing forklift upgrades.

    Even if you are looking to play with 10GbE, then the best route may be to start off with a line card in your existing Cat 6500 or with one of the Catalyst or Nexus rack switch options–continue to leverage your existing investment until it is not meeting your requirements–for most of our customers, this occurs when their 10GbE density hits a certain level.

    The thing is, most customers are not happy with the status quo. Across the board, we see heightened interest in 10GbE in the access layer for a number of reasons: 1) server virtualization, 2) more complex workloads, and 3) unified fabric of some sort (FCoE or iSCSI or both).

    So, for most folks, the migration to 10GbE (which is the real transition) is going to involve new hardware regardless. They may be able to do this incrementally or they may need to build out new infrastructure depending on their long-term plans.

    With regards to unified fabric, it's a tool, not a goal–folks are going to unified fabric because it gets them something that they value (lower costs, better VMotion support, etc.). The transition, however, is not as binary as you represent. For FCoE, the first step is to deploy at the access layer. Will this require new access layer switches? Probably, but if you are making a commitment to 10GbE, then you are going there anyway. Do you have to rip-and-replace all your switches? Nope. From the access layer switch (ours at least), you can break out to your existing LAN and SAN infrastructure. And yes, you can deploy in patches of green in the access layer, so you can deploy FCoE rack switch by rack switch without having to touch your upstream LAN or SAN. Heck, you can even go port by port if you are unsure of the technology–you can have a server with an HBA and a CNA have one FC link back to your SAN and one FCoE link back to your SAN–neither your host nor your targets will care. That is the “transitional technology” part.

    And if you don't have an FC SAN and this is your first foray into any kind of networked storage, then iSCSI is likely a better direction for you–simpler and cheaper.

    At some point will you want to upgrade your core and agg switches–sure, but that decision is not going to be driven by FCoE or DCB, it is going to be driven by the fact that you need to scale your agg and core layers to support your 10GbE access layer. Again, it's something you can do on your own terms, when your needs dictate.

    As far as storage spend, I don't think it will play out that way. What will happen is that the storage team can shift their spending away from storage transport (fabric switches, line cards for their directors, HBAs, fiber) and towards things that they care about (updating their storage, new tools). We saw this play out when IP Telephony replaced the traditional PBX. We see similar indications today as customers are starting to deploy unified fabric more broadly–when 100% of your servers can now access your SAN (vs the currently typical 20-40%) it starts to make sense to consolidate your disk spend on your SAN.

    Regards,

    Omar

    Omar Sultan
    Cisco
