Syndicated

Cutting Yourself on the Double-Edged Sword

Yesterday I published a short post titled “I/O Virtualization and the Double-Edged Sword”. In that post, I discussed how Xsigo was criticizing FCoE for “not going far enough” in the realm of I/O virtualization. Unfortunately, I didn’t do a very good job of getting my point across, because the conversation rapidly turned into a debate over the merits of various interconnect technologies and why one might win over the other. While that is a great discussion to have–and I’m thrilled my site can help further it–it wasn’t really the key point behind my article. I/O virtualization was only the catalyst for the original post.

Let me see if I can more clearly articulate what I’m trying to say here. If you are a Twitter user and into virtualization or storage, then you are probably following either Chad Sakac of EMC (@sakacc on Twitter), Vaughn Stewart of NetApp (@vaughn_stewart on Twitter), or both. If so, you are probably very familiar with the extensive “discussions” that take place between the two of them. Both of them are very passionate about storage and virtualization, but they have differing viewpoints. Now, before I’m accused by NetApp of being an EMC bigot (which would be ridiculous given the coverage I’ve given NetApp) or accused by EMC of being a NetApp bigot (that, at least, might be understandable as I’m just now starting to learn EMC storage), let me say that I’m not endorsing either product. NetApp’s products and EMC’s products are different; each of them has strengths and weaknesses in different areas.

Now, ask yourself, “Why do these products have different strengths and weaknesses?” Do you know the answer? These products have different strengths and weaknesses because of the technology decisions each company made in the products’ development. NetApp chose one path, EMC chose another. For NetApp, that has created certain efficiencies, certain strengths–and corresponding weaknesses. Likewise, EMC’s technology decisions have resulted in their products having certain strengths and weaknesses. Neither of these products is perfect. For NetApp to claim that “their way is the right way” is ridiculous; their way is only one of many different ways to accomplish something. The same is true for EMC. And, by extension, the same is true for every other technology vendor on the planet.

You want more examples? Consider the architectural differences between VMware ESX/ESXi and Microsoft Hyper-V. The technology choices made by each company created inherent strengths and weaknesses in each product. VMware claims their choices are the best choices; Microsoft believes their architecture is the best. Clearly, neither product is perfect. Both products have their flaws.

The real key takeaway here is that no technology vendor has the right to throw rocks at another technology vendor. All technology vendors live in glass houses. For VMware to claim that Microsoft’s architecture is all wrong is, well, wrong. For EMC to say that NetApp’s technology choices are stupid would be wrong. For Xsigo to claim that FCoE is the wrong path for I/O virtualization is wrong (although, personally, I don’t consider FCoE an I/O virtualization technology, but that’s a different discussion for a different day). Why? Because every company has to make technology choices, and those technology choices will–by the very nature of technology–automatically create inherent differences, strengths, and weaknesses in the resulting product. And when you accept that truth (and it is a truth, I promise you), then you see why vendors should not engage in negative marketing. When a vendor engages in negative marketing about the competition, that vendor is simply inviting others to pick apart the flaws in their own products.

Of course, I’m not naive enough to believe that vendors will stop negative competitive marketing overnight. Still, I stand firm in the belief that those vendors that focus on the strengths of their products instead of the flaws of others’ products will move ahead. I’m certainly more likely to do business with them.

I’d be interested to hear what others have to say. Voice your position in the comments.

Disclosure: As you probably know, I work for a reseller who represents many different vendors and manufacturers. My words here are not endorsed by my employer, nor do I represent my employer in this area.

About the author

Scott Lowe

As a 20+ year veteran in the Information Technology field, I've done quite a few different things. I've worked as an instructor, a technical trainer and Microsoft Certified Trainer (MCT), systems administrator, IT manager, systems engineer, consultant, and Chief Technology Officer for a small start-up. I was also lucky enough to publish a few books on topics like VMware vSphere, OpenStack, and network automation. Currently, I work at VMware, Inc., focusing on cloud computing, open source, networking, and virtualization.

5 Comments

  • Scott,

    You make some good points, but considering that you’re writing about our post on FCoE http://www.xsigo.com/blog/?p=48 I have to say you don’t really get at our central thesis. Our intent was not to debate FCoE. That’s not the issue here. The issue is this: we believe that I/O virtualization is best implemented as an external, centrally managed function. And because the current implementations of FCoE employ I/O that remains internal to each server, they do not go far enough to get I/O technology where it needs to be.

    That was the point of our first post which stated that FCoE does not solve the fundamental I/O problem. And again the point of the second post http://www.xsigo.com/blog/?p=132 that made the distinction between external and internal I/O virtualization.

    To say this is a vendor-based comparison — and is therefore off-limits for debate — strikes me as a misguided approach. This criterion would effectively eliminate discussion of numerous emerging technologies. It would have eliminated discussion of WAN acceleration in the early days of Riverbed when they pioneered that field. And it would have eliminated discussion of de-duplication in the early days of Data Domain. By your definition, those debates would have been “vendor-based” and therefore seen as rock throwing.

    To paraphrase Wayne Gretzky, we need to skate to where the I/O puck needs to be. Server I/O must become as flexible and easily managed as the virtual machines it serves. We believe that an incremental change to a protocol does not by itself sufficiently advance the cause of management simplicity. To stifle that debate simply because this view is held by a vendor goes against the grain of all technology innovation.

    – Jon Toor

  • Jon,

    I have to say that you aren't getting the main point of my post. The main point of my post–both the original post as well as this post–was not to debate the merits of FCoE vs. InfiniBand. That's an entirely separate discussion.

    Rather, the point of my post was to call out that EVERY technology decision inherently limits the end result. Xsigo chose to use InfiniBand as the basis for their I/O virtualization technology. Fine, no problem; InfiniBand has certain advantages. It also has certain disadvantages. By the very nature of making the decision to use InfiniBand, the end result–the products that Xsigo creates–will inherently have certain advantages and disadvantages. The same goes for FCoE. By the very fact that the creators of FCoE made certain decisions, the end result inherently has certain advantages and disadvantages.

Now, to carry that a step further, these technology decisions were made for specific reasons. The decision to make FCoE completely compatible with existing FC fabrics creates some advantages (the compatibility being one of them), and it creates some disadvantages. But those choices were made for a reason and for a purpose. Likewise, the decisions Xsigo made were made for a reason. Every vendor has a reason, a purpose, a plan for their products, and the technology decisions help to drive that plan or purpose.

    Here's where all of this comes together. In my opinion, FCoE wasn't intended to be an I/O virtualization technology; it was intended to replace traditional Fibre Channel and get FC onto the Ethernet cost/benefit curve. Therefore, decisions were made BECAUSE OF THAT INTENDED PURPOSE. Likewise, Xsigo's solution was intended to be an I/O virtualization solution, not a replacement for Fibre Channel. Therefore, decisions were made BECAUSE OF THAT INTENDED PURPOSE. Those decisions were shaped by the purpose of the product and those decisions in turn shaped the final form of the product, including the inherent advantages and disadvantages of the product.

Now, for a vendor who makes a product expressly designed for one purpose to call out another vendor who makes a product intended for a different purpose because of the “deficiencies” of the second product in the first vendor's market is just plain wrong. You can't criticize FCoE for deficiencies in the I/O virtualization market, because FCoE and Xsigo's product are two different products, designed with two different purposes in mind, and shaped by their technology decisions as a result.

    The debate of InfiniBand vs. FCoE (which is really just a protocol running on top of Ethernet) is a separate issue entirely.

  • Hi Scott,

The one aspect that seems to get lost in many of these technology discussions is the business reasons that often drive the technology decisions you discuss above. Having worked for market leaders including Intel and Seagate, I can tell you that market share and market position drive many technology decisions, in order to better protect those market positions.

I can tell you that Intel was pursuing ARM-based processors for cellphones, but it was not the market leader and had a hard time differentiating its technology, because companies like TI and Qualcomm can license the same designs straight from ARM. Thus, Intel made the business decision to jettison the ARM technology (sold to Marvell) and focus on the Atom architecture. From a technology perspective, ARM processors are RISC-based and much more power efficient than any CISC-based technology; hence they dominate the portable device market (cellphones, etc.). Intel is positioning Atom to move down into the ARM market, leveraging its supplier relationships and market position to do so, even with an inferior technology. If Microsoft were ever to release Windows 7 for ARM, Intel would rapidly lose significant market share in the netbook and nettop market space.

So, when discussing technologies such as FCoE, let's be very clear about who is leading the charge, and the question of whether it is a better technology is irrelevant. The business reasons trump all. If the market for storage transport moves to Ethernet, there is only one winner; everyone else is playing defense.

Let's also be clear: large companies are rarely innovators, as their business models are optimized around driving profits, and few of them can afford to invest in truly innovative technologies that might hurt their own product lines. Start-ups competing in established markets, such as enterprise IT, find it nearly impossible to find enough air to breathe, as they are being smothered by the gorillas who resist change for fear of losing market share, having to spend money on new technology, or having to buy up-and-coming technology at inflated prices, just to hold on to what was already theirs.

So, my point is that many of these technology debates are not really about technology at all; they are really about espousing a technological justification for maintaining the status quo from a business perspective.

  • Scott,

The point of my post on the Xsigo blog was indeed exactly that FCoE doesn't provide the capabilities of I/O virtualization, and it is just a technology for consolidation of FC on Ethernet. As you said: “FCoE wasn't intended to be an I/O virtualization technology”. You know this, and I know this, but I keep encountering people who are confused about this and think that FCoE is actually an alternative to I/O virtualization, or that there's a large amount of overlap. I don't understand why you think it wasn't right for us to clarify this. We didn't say that FCoE is bad, just that it's the same old I/O architecture, it isn't the leap forward needed to modernize I/O, and I/O virtualization is where you get that leap forward.

    Your assertion that every technology has its advantages and disadvantages is a truism – no one can argue with that, but it doesn't say much. It's like saying that all decisions you make have consequences. Again, no one can argue with that, but it doesn't provide any enlightenment.

    Regards,

    Ariel
