Let's start with a bit of background so you can understand where each of these acronyms comes from.
The Ethernet that we know and understand is defined by the IEEE 802 standards in all their glorious detail and many variations, including wireless, LAN, MAN, WAN and so on. It has been stretched and abused in any number of ways over the last twenty years and continues to find new ways to adapt to new challenges. Ethernet is a Layer 2 frame technology; that is, it works at the Data Link Layer and is mostly perceived as a carrier for IP packets.
Ethernet Quality of Service and its deficiency
Ethernet 802.1p QoS tagging is well understood today. By using the three-bit Priority Code Point (PCP) field in the 802.1Q VLAN tag, we can provide a hop-by-hop QoS mechanism across a switched infrastructure.
But for the emerging data centre technologies, this QoS mechanism is not sufficient. In practical terms, 802.1p gives no guarantee that an Ethernet frame will not be dropped or delayed, and no guarantee of bandwidth; it offers only a form of prioritised best effort.
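To make that concrete, here is a minimal sketch (in Python, with a hypothetical `vlan_pcp` helper I made up for illustration) of where the 802.1p priority value actually lives in a tagged frame:

```python
import struct

def vlan_pcp(frame: bytes) -> int:
    """Extract the 802.1p priority (PCP) from an 802.1Q-tagged frame.

    Assumes a single VLAN tag: bytes 12-13 hold the TPID (0x8100) and
    bytes 14-15 hold the TCI, whose top three bits carry the priority.
    """
    tpid, tci = struct.unpack_from("!HH", frame, 12)
    if tpid != 0x8100:
        raise ValueError("frame is not 802.1Q tagged")
    return tci >> 13  # PCP: 0 (best effort) .. 7 (highest)

# Toy tagged frame: zeroed dst/src MACs, TPID 0x8100, TCI with PCP=5, VID=100
frame = bytes(12) + struct.pack("!HH", 0x8100, (5 << 13) | 100)
print(vlan_pcp(frame))  # 5
```

Three bits give you eight traffic classes per hop, and nothing more: no admission control, no reservation, just a relative priority marking.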
Storage transport and its “special requirements”
So Cisco (and it pretty much was only Cisco) decided that collapsing two networks into one was a sales opportunity. In many data centres today, we have one network using FibreChannel and another using IP/Ethernet. You can make a good marketing story around putting FibreChannel over that Ethernet network instead of running two networks. This is called “convergence”.
But storage networking is not designed to tolerate any significant loss or delay in the network. This is because the FibreChannel standards never implemented robust transport protocols and chose to continue with decades-old SCSI signalling and storage logic. Good for immediate sales, bad for long-term product vision.
Compare this with IP/Ethernet, which is designed to cope with latency, jitter and loss. This creates a head-to-head clash of requirements. Storage networks demand low latency, low loss and guaranteed bandwidth, and accept low scalability. Data networks tolerate variable latency, variable loss and variable bandwidth in exchange for high scalability.
So storage networks used quite specific hardware with very few ports to guarantee bandwidth capacity, built on low-latency, low-loss silicon designs that do not block or drop frames.
Converging Data and Storage
If we want to build a data centre network with these extra features, we need to implement a QoS and bandwidth-reservation signalling system similar in concept to RSVP for IP from a few years ago (notable because it was discarded as unworkable at large scale).
So the standards bodies have launched new initiatives. The IEEE Data Center Bridging group is working on the following standards:
* Priority-based flow control: IEEE 802.1Qbb
* Enhanced transmission selection: IEEE 802.1Qaz
* Congestion notification: IEEE 802.1Qau
* Data Center Bridging eXchange protocol (DCBX): runs over IEEE 802.1AB (LLDP); proposed by Intel
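To give a feel for what these standards add, here is a rough sketch, based on the published 802.1Qbb frame layout, of how a Priority-based Flow Control pause frame is assembled. The `pfc_frame` helper and its field choices are illustrative only, not taken from any vendor implementation:

```python
import struct

PFC_DST = bytes.fromhex("0180c2000001")   # reserved MAC Control multicast address
PFC_ETHERTYPE = 0x8808                    # MAC Control
PFC_OPCODE = 0x0101                       # Priority Flow Control

def pfc_frame(src_mac: bytes, pause_quanta) -> bytes:
    """Sketch a PFC (802.1Qbb) pause frame.

    pause_quanta maps a priority (0-7) to a pause time in 512-bit
    quanta; listed priorities are paused, all others are untouched.
    """
    enable_vector = 0
    times = [0] * 8
    for prio, quanta in pause_quanta.items():
        enable_vector |= 1 << prio        # set the per-priority enable bit
        times[prio] = quanta              # pause duration for that class
    payload = struct.pack("!HH8H", PFC_OPCODE, enable_vector, *times)
    return PFC_DST + src_mac + struct.pack("!H", PFC_ETHERTYPE) + payload

# Pause only priority 3 (a class that might carry FCoE) for the maximum time,
# leaving the other seven traffic classes flowing — unlike classic 802.3x
# PAUSE, which stops the whole link.
frame = pfc_frame(bytes(6), {3: 0xFFFF})
```

The per-priority enable vector is the whole point: it lets a switch create a lossless lane for storage traffic without pausing everything else on the wire.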
There is one other standard that is important. L2 Multipathing (L2MP) (which I have discussed here) is going to be a vital part of making scalable data centre networks. There are two competing standards for L2MP shortest path bridging: IEEE 802.1aq and IETF TRILL.
So which is correct: DCB, CEE or DCE ?
So let's get to the core of this article. Which of these three acronyms is the best one to use when conversing about these technologies?
DCE = Data Centre Ethernet = Cisco’s trademarked term for their implementation (trademarked on November 1st 2005)
CEE = Converged Enhanced Ethernet = IBM’s trademarked term for their implementation (trademarked April 18th 2007, 18 months after Cisco trademarked Cisco Data Center Ethernet)
DCB = Data Centre Bridging = name used by the IEEE standards group.
So this leads to a few points:
Data Centre Bridging is the most correct term for the ‘in progress’ IEEE Ethernet standards that will allow FCoE to be genuinely useful and scalable. This is the term that you SHOULD be using. Note, however, that for complete purity the IEEE DCB working group doesn’t really include the RBridges/L2MP standards, but in practice everyone will put them into the same basket, including the IETF TRILL standard for L2MP.
DCE is not necessarily Cisco proprietary; it was supposed to be a marketing name for Cisco’s attempt at pre-standard Ethernet and was trademarked to give the lawyers something to do. For example, Cisco had InterSwitch Links (ISL) for many years before IEEE 802.1Q trunking became the standard. In this context, DCE was meant to be a convenient handle for the marketing message before the standards became available. There has been considerable backlash at Cisco over the term, driven by the belief that Cisco was attempting to control the standards (because that is what Brocade would do, given the chance) or to create a non-standard version of DCB, and as a result they have largely stopped using it.
CEE is the term used by IBM for their marketing efforts and their attempt to spin a viable story for their professional services. This term is probably the most popular today, but it is just as proprietary and incorrect as using Cisco’s DCE.
What makes this even more puzzling is that IBM has no networking technology of its own to sell, since they resell other vendors’ equipment, so the term has no real meaning when IBM uses it.
So, yes, all of these terms are interchangeable: they encompass the same unfinished standards and have the same intent. However, CEE and DCE are outright marketing speak for IBM and Cisco, respectively, in an attempt to control the ‘conversation’. It allows some very detailed and complex standards work to be grossly simplified into a glib acronym that gets CIOs and CTOs to sign off faster and gets sales going. Nothing wrong with that, except for the attempt to co-opt the top position.
The correct term is…
Yes, the correct term is Data Centre Bridging, with the acronym DCB. I would strongly recommend that you stick with this term, which is truly part of the standards process, and let IBM and Cisco use their proprietary terms all by themselves.