As a class and in smaller groups, I’ve participated in several discussions trying to understand UCS connectivity and communication, both internally and externally, to the LAN and the SAN. This post summarizes several diagrams and drawings from whiteboards, my notes, and the bootcamp manual to explain which hardware communicates with which protocol, and how redundancy and failover work in Cisco’s Unified Computing System. If you are comparing UCS to other blade centers, some details mentioned here will jump out at you. I’ll conclude with some thoughts on these items.
Again I am using terminology and acronyms established in my post from day 1. Review that post if necessary.
The following diagram illustrates the current connectivity between the UCS blades, Fabric Extenders (FEX), and the Interconnects. For simplicity the diagram includes only a single chassis, a single half-height blade, and a single full-height blade, while still covering all scenarios. Duplicate the same connectivity for each blade inside the chassis, and add the connectivity of 2 more FEX for each additional chassis in the solution. As shown, the 2 Interconnects can manage up to 20 chassis with the 6140 model and up to 10 chassis with the 6120. (The absolute maximum number of chassis cannot be achieved in this design because 2 FCoE cables per FEX are being used to the Interconnects.)
Click on the image to see a larger version.
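The chassis-count arithmetic above is simple enough to sketch. A minimal illustration, assuming the port counts from this post (40 fixed ports on a 6140, 20 on a 6120) and that each chassis dedicates one FEX per Interconnect; `max_chassis` is my own helper, not anything in UCSM:

```python
def max_chassis(server_ports: int, cables_per_fex: int) -> int:
    """Chassis capacity of one Interconnect: each chassis has one FEX
    per Interconnect, and each FEX consumes cables_per_fex ports."""
    return server_ports // cables_per_fex

# 6140 (40 ports) and 6120 (20 ports) with the 2 FCoE cables per FEX
# shown in the diagram:
print(max_chassis(40, 2))  # 20 chassis
print(max_chassis(20, 2))  # 10 chassis
```

Note that this counts only FEX-facing ports; real designs also burn ports on northbound uplinks, which is part of why the theoretical maximum is never reached.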
New terms to understand before continuing:
- Northbound networking – any connectivity and communication to switches outside of the UCS solution. Port channels, allowed VLANs, and a matching native VLAN must exist on the next switch up; LACP will be configured automatically. For the SAN, the VSAN appears as an object in the Service Profile.
- Inbound networking – any connectivity and communication to the blade servers. Configured by UCSM, assigned via Service Profiles, and represented as vNIC or vHBA objects. Includes MACs, native VLAN, allowed VLANs, WWNNs, WWPNs, etc.
- End-host mode – the UCS Interconnects’ default operation. No MAC tables are maintained. Traffic is not switched northbound, but inbound traffic is switched – including both blade-to-blade connectivity and packets from outside UCS headed inbound.
- Pinning – automatic or manual assignment of ports. Happens both on the Interconnects and on the mezzanine cards. On the Interconnects, pinning determines which uplink carries each blade’s northbound traffic, since no switching or MAC tables are used for that traffic.
Yes, other switches for the LAN and SAN are still needed, since the Interconnects do not route or switch northbound, and FCoE adapters in a storage device cannot be connected directly to the 6100s.
There is an option to change the entire Interconnect to “switching mode”, but it is highly recommended not to do this.
Blades cannot communicate with each other inside the same chassis via the FEX. Local traffic must travel to the Interconnects first.
There is no multipathing provided from the blade hardware (mezzanine cards). Multipathing is only possible from the blade operating system.
On the Interconnects, only ports 1 through 8 are licensed by default. Ports 9 through 20 (6120) or 40 (6140) are licensed per port as needed.
Oplin mezzanine cards provide Ethernet only. Menlo and Palo cards provide both LAN and SAN connectivity.
The biggest misconception I’ve had about UCS (and it has been common among many people I have talked with) is where FCoE is used in the solution. In the current version of UCS, FCoE exists only between the mezzanine cards on the blades and the Interconnects. FCoE is not possible between the Interconnects and the northbound switches. As mentioned earlier, an FCoE adapter in a storage device cannot be connected directly to the Interconnects. This is possibly on the roadmap, but today’s UCS cannot do it.
Blade mezzanine card to FEX connectivity
Each mezzanine card has 2 ports, each capable of 10 GE. Half-height blades can hold one mezzanine card and full-height blades can hold two – or 4 ports, each capable of 10 GE.
Without 2 FEX in a chassis, only one mezzanine port will be active per card. This means failover is not possible for half-height blades, and is only possible in full-height blades if two mezzanine cards exist – 1 active port on each card.
Without 2 Interconnects, having 2 FEX is useless. You cannot connect both FEX to the same 6100.
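The failover rules above reduce to a small decision. Here is my own encoding of them as a sketch (the function name and parameters are mine, not a Cisco API): with 2 FEX each card has one active port to each FEX, and with a single FEX a blade needs two cards (full height) to get two active paths:

```python
def hardware_failover_possible(fex_count: int, mezz_cards: int) -> bool:
    """Can a blade survive a single path failure in hardware?

    - 2 FEX: each mezzanine card splits its ports across the FEX pair.
    - 1 FEX: only one port per card is active, so two cards are needed.
    """
    if fex_count >= 2:
        return True
    return mezz_cards >= 2

print(hardware_failover_possible(1, 1))  # False – half height, single FEX
print(hardware_failover_possible(1, 2))  # True  – full height, two cards
print(hardware_failover_possible(2, 1))  # True  – redundant FEX
```

Even when the hardware paths exist, remember the earlier point: actually using both paths (multipathing) still happens in the blade’s operating system, not the mezzanine card.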
Pinning also occurs between the mezzanine card and the internal ports of the FEX (inside the chassis). This assignment is automatic and depends on the number of cables between the FEX and the Interconnects. Only 1, 2, or 4 cables (ports) can be used, pinned as follows:
- 4 cables
- Blade 1 and 5 to port 1
- Blade 2 and 6 to port 2
- Blade 3 and 7 to port 3
- Blade 4 and 8 to port 4
- 2 cables
- odd numbered blades to port 1
- even numbered blades to port 2
- 1 cable
- all blades to single port
If you have 4 cables uplinked and one fails, UCS will have to re-pin the blades to a 2-cable configuration. Blades using ports 3 and 4 will temporarily lose connectivity.
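The pinning table above follows a simple round-robin in which blade slot 1 always lands on port 1. A minimal sketch of that rule, assuming 1-based blade slots 1–8 (`pin_blade` is my own illustration, not part of UCSM):

```python
def pin_blade(slot: int, active_links: int) -> int:
    """Return the FEX uplink (1-based) that blade slot 1-8 pins to.

    Round-robin consistent with the table above: slot 1 always lands
    on link 1. A FEX can only use 1, 2, or 4 uplinks.
    """
    if active_links not in (1, 2, 4):
        raise ValueError("a FEX can use only 1, 2, or 4 uplinks")
    return (slot - 1) % active_links + 1

# 4 cables: blades 1 and 5 share port 1, blades 2 and 6 share port 2, ...
print([pin_blade(s, 4) for s in range(1, 9)])  # [1, 2, 3, 4, 1, 2, 3, 4]
# A cable failure forces a re-pin down to the 2-cable layout:
print([pin_blade(s, 2) for s in range(1, 9)])  # [1, 2, 1, 2, 1, 2, 1, 2]
```

Comparing the two lists shows why a failure is disruptive: every blade that was pinned to port 3 or 4 moves, and drops traffic while the re-pin happens.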
The Part of Tens – UCS was built for virtualization
Blades cannot communicate inside the chassis – If the 10 GE between the FEX and the Interconnects is not enough bandwidth for an application, running ESX on the blades allows an affinity rule to keep VMs that need local connectivity together.
Operating system multipathing only – VMware vSphere to the rescue again.
Hardware high availability limitations – vSphere VMotion, DRS, and HA serve this purpose.
Bandwidth reduction from the Interconnects to the northbound switches – Virtualized servers managed by the same Interconnect domain, regardless of chassis location, should rarely have northbound needs. Until physical clients have 10 GE adapters, inbound network traffic will not be an issue. Some storage devices do already have FCoE adapters, however, and Cisco is aware of the need but maintains that current virtual server loads do not need that size pipe.