During the UCS Bootcamp in San Jose, Cisco made it clear that the value proposition of UCS is the Stateless Model. Unlike the traditional use of the Service Profile in server deployment (I covered the Opt-In Model earlier in this series), the Stateless Model allows the physical hardware to become generic and, since the operating system and applications reside on the SAN, a server personality can be duplicated and restarted from blade to blade.
“In Unified Computing system (UCS) the underlying hardware (or server) can be made completely transparent to the OS or applications that run over it. The kind of environment which an OS or application requires can be moved from one server to another or can be changed very easily. This is made possible by moving resources, such as MAC addresses, WWN values, IP addresses, UUID, firmware versions and even server BIOS, from one server to another at the time of deploying the server. This is accomplished by using the concept of Service profiles; which is like software definition of a server. The concept of stateless computing facilitates much greater scalability and can be used in conjunction with virtualization to achieve maximum data center utilization.”
One of the labs during the week showcased the Stateless Model in action, so what better way to help explain this feature than to walk through it again for all to understand?
The Stateless Model Lab Overview
Quoting the lab introduction, the purpose of the lab was to:
“.. demonstrate the statelessness by booting an OS off of a SAN LUN. The SAN connectivity and masking is specified by World Wide Names that are associated with the service profile. When your service profile moves from one blade to the next, you will be booting the exact same SAN based OS. No configuration outside of UCS will ever be required at this time.”
The following overview is of the UCSM configurations performed in the lab. Once again, this is not a “how to” but is instead intended to provide insight into the process and advantages of the UCS Stateless Model.
Create a Server Pool
Select multiple UCS blades to be in a pool. Almost as if the hardware were non-persistent virtual desktops and UCSM were the user, Service Profiles will be able to move between the hardware in a pool, allowing the OS and application to run on any pool member without any further setup. Cisco referred to these pools as “server farms.”
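The pool behavior can be modeled as a simple allocator. This is an illustrative sketch only, not the UCSM API; all class, method, and blade names here are hypothetical:

```python
# Illustrative model of a UCS server pool (NOT the real UCSM object model).
class ServerPool:
    def __init__(self, name, blades):
        self.name = name
        self.free = list(blades)   # blades awaiting a service profile
        self.assigned = {}         # profile name -> blade

    def associate(self, profile_name):
        """Hand the next available blade to a service profile."""
        if not self.free:
            raise RuntimeError("no free blades in pool " + self.name)
        blade = self.free.pop(0)
        self.assigned[profile_name] = blade
        return blade

    def disassociate(self, profile_name):
        """Return a blade to the pool; the profile is free to move."""
        blade = self.assigned.pop(profile_name)
        self.free.append(blade)
        return blade

# A profile is associated with the pool, not with specific hardware:
farm = ServerPool("esx-farm", ["chassis1/blade1", "chassis1/blade2"])
first = farm.associate("esx-host-01")
farm.disassociate("esx-host-01")
second = farm.associate("esx-host-01")  # may land on different hardware
```

The point of the model is the last four lines: the same profile name can reattach to whichever pool member is free, with no reconfiguration of the workload.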
Create WWPN and WWNN Pools for storage
SAN ports and initiators are also grouped as pools. When service profiles move to another blade, your FC fabric and storage see no change; no remapping is required.
Cisco called the SAN boot components configured in these pools the “triplet” because each boot entry includes:
- vHBA name
- WW port name of the target array
- LUN ID of the boot target
For multipathing, two triplets can be specified in the boot order. Remember that the mezzanine cards do not provide multipathing; the operating system is responsible for that instead.
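A boot policy with two triplets for multipathing can be sketched as follows. This is an illustrative model with made-up WWPN and LUN values, not the UCSM boot-policy object:

```python
# Sketch of a SAN boot order with two "triplets" (hypothetical values).
from collections import namedtuple

# One triplet: vHBA name, target array WWPN, and boot LUN ID.
BootTriplet = namedtuple("BootTriplet", ["vhba", "target_wwpn", "lun_id"])

boot_order = [
    BootTriplet("vhba-a", "50:06:01:60:44:60:1A:F0", 0),  # fabric A path
    BootTriplet("vhba-b", "50:06:01:68:44:60:1A:F0", 0),  # fabric B path
]

# UCS presents both paths to the same LUN; the OS multipathing driver,
# not the mezzanine card, handles failover between them.
for t in boot_order:
    print(f"boot via {t.vhba} -> target {t.target_wwpn} LUN {t.lun_id}")
```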
Create a MAC Pool for networking
Network interfaces are pooled as well. The MAC pool supplies the addresses for the vNICs used by the UCS blades in the pool. vNICs already have a native VLAN and allowed VLANs assigned, so the networking configuration remains seamless and mobile with the blades. This is matched to the configuration on the northbound switches already created by the network administrator.
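The same idea in sketch form: a MAC pool feeds the vNIC definitions, and the VLAN settings travel with the vNIC rather than with the hardware. Again, these class and field names are hypothetical, not the UCSM API:

```python
# Illustrative model of a MAC pool feeding a vNIC (hypothetical names).
class MacPool:
    def __init__(self, prefix, start, size):
        # Build a block of addresses, e.g. 00:25:B5:00:00:00 ... :07
        self.addrs = [f"{prefix}:{i:02X}" for i in range(start, start + size)]

    def allocate(self):
        return self.addrs.pop(0)

pool = MacPool("00:25:B5:00:00", 0, 8)

# VLAN settings are part of the vNIC definition, so the northbound
# switch configuration never has to change when a profile moves.
vnic = {
    "name": "eth0",
    "mac": pool.allocate(),
    "native_vlan": 100,
    "allowed_vlans": [100, 200, 300],
}
print(vnic["mac"])
```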
Create Service Profiles to use Pools
The steps from the lab explain the use of the pools created.
- Create service profile (explained with screen shots in Managing Blades with UCSM post)
- For the World Wide Node Name, choose the WWNN pool.
- For the vHBA, choose a WWN from the WWPN pool.
- For the vNIC, choose a MAC address from the MAC pool.
- Set the boot order.
- Associate your service profile by choosing the blade server pool, rather than a specific blade.
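The steps above can be sketched as assembling a profile from the pools. This is a conceptual model only; in practice UCSM does this through its GUI (or XML API), and every identifier below is a made-up example:

```python
# Sketch of building a service profile from identity pools (hypothetical).
def build_service_profile(name, wwnn_pool, wwpn_pool, mac_pool, server_pool):
    return {
        "name": name,
        "wwnn": wwnn_pool.pop(0),                       # node name from pool
        "vhba": {"name": "vhba-a", "wwpn": wwpn_pool.pop(0)},
        "vnic": {"name": "eth0", "mac": mac_pool.pop(0)},
        "boot_order": ["san"],
        "server_pool": server_pool,  # associate with a pool, not a blade
    }

wwnns = ["20:00:00:25:B5:00:00:01"]
wwpns = ["20:00:00:25:B5:AA:00:01"]
macs  = ["00:25:B5:00:00:01"]

sp = build_service_profile("esx-host-01", wwnns, wwpns, macs, "esx-farm")
print(sp["wwnn"], sp["vhba"]["wwpn"], sp["vnic"]["mac"])
```

Note the last field: the profile points at a server pool rather than a specific blade, which is what makes the hardware interchangeable.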
Creating the Service Profile and associating it with a blade triggers a PXE boot of the PnuOS on the blade where the profile is applied, after which the blade can access the assigned LUN(s). Obviously, the SAN administrator has pre-created the LUNs and configured the appropriate masking/zoning in order for this to work properly.
Clone the Service Profile
In UCSM the service profile can be right-clicked, where the administrator can choose to create a clone. Once again, the LUNs have been pre-masked/zoned, and since the cloning process assigns new values from the WWN, MAC, and server pools, the cloned service profile results in an independent blade running its own operating system and workload.
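What cloning implies can be sketched as: copy the configuration, then draw fresh identities from the pools. This is a hypothetical model, not the UCSM cloning implementation:

```python
# Sketch of service profile cloning (hypothetical model, not UCSM).
import copy

def clone_profile(profile, new_name, wwnn_pool, wwpn_pool, mac_pool):
    clone = copy.deepcopy(profile)     # same configuration...
    clone["name"] = new_name
    clone["wwnn"] = wwnn_pool.pop(0)           # ...but a new node WWN,
    clone["vhba"]["wwpn"] = wwpn_pool.pop(0)   # new port WWN,
    clone["vnic"]["mac"] = mac_pool.pop(0)     # and new MAC address.
    return clone

original = {
    "name": "esx-host-01",
    "wwnn": "20:00:00:25:B5:00:00:01",
    "vhba": {"name": "vhba-a", "wwpn": "20:00:00:25:B5:AA:00:01"},
    "vnic": {"name": "eth0", "mac": "00:25:B5:00:00:01"},
}

clone = clone_profile(original, "esx-host-02",
                      ["20:00:00:25:B5:00:00:02"],
                      ["20:00:00:25:B5:AA:00:02"],
                      ["00:25:B5:00:00:02"])
# With its own identities and its own pre-zoned boot LUN, the clone
# runs independently of the original.
```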
Matching the assigned configurations from the storage and network administrators is crucial, but once that is in place the UCS (server) admins handle all of the inbound connectivity setup. Thus, hardware mobility is enabled through pools.
Pooling servers that will be dedicated to similar functions, like ESX hosts for example, allows for workload mobility across reserved hardware. Cisco called this hardware availability, and stressed that this is not the same concept as high availability. If you recall from the previous posts that moving server workloads is a manual process requiring an OS shutdown, then it makes sense that virtualization is needed for true high availability scenarios.
In a non-virtualized example, consider a database running on a UCS blade pool. Since there are reserved blades in the pool, proactive monitoring could identify a developing hardware failure, and the database could be powered off and relocated to a reserve blade. Although many IT shops do this today with bladecenters, the advantage in UCS is that no SAN, network, or even OS and application changes need to be performed, so the new hardware is running the workload more quickly.
Just like the option to clone a service profile, a template can also be created. Templates come in two flavors: Initial templates and Updating templates. Initial templates seemed like the more common usage scenario, similar to deploying new virtual machines in vCenter, for example. Updating templates are a bit more complex: they remain linked to any Service Profiles created from them, and therefore changes to an Updating template impact blades in service. We did not cover these in detail in class, so I am still a little fuzzy on templates.
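The distinction as I understood it can be sketched roughly as follows. Treat this as a hedged, illustrative model (the class only covered templates briefly, and these classes are entirely hypothetical):

```python
# Rough model of Initial vs. Updating templates (hypothetical, not UCSM).
class Template:
    def __init__(self, config, updating=False):
        self.config = dict(config)
        self.updating = updating
        self.children = []

    def instantiate(self, name):
        if self.updating:
            # Updating template: the profile stays linked to the template.
            profile = {"name": name, "template": self}
            self.children.append(profile)
        else:
            # Initial template: a one-time copy, then fully independent.
            profile = {"name": name, **self.config}
        return profile

    def effective_config(self, profile):
        # Changes to an Updating template flow to its live profiles;
        # profiles from an Initial template keep their original copy.
        if self.updating:
            return self.config
        return {k: v for k, v in profile.items() if k != "name"}

initial = Template({"bios": "v1"}, updating=False)
p1 = initial.instantiate("sp1")
initial.config["bios"] = "v2"   # p1 keeps "v1": unaffected

updating = Template({"bios": "v1"}, updating=True)
p2 = updating.instantiate("sp2")
updating.config["bios"] = "v2"  # p2 now sees "v2": change propagates
```

This linkage is exactly why Updating templates can impact blades already in service.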
I’ve got one more post for this series in mind. I plan to address the questions I outlined in my first UCS for Dummies post and explain some of my final thoughts on how, where, and why UCS fits in the datacenter.