Continued coverage of vendor presentations from the GestaltIT.com Seattle Tech Field Day July 2010.
Presentation #3 was by F5 Networks at the F5 Technology Center. During the first half-hour or so, F5 gave the TFD delegates a product-line overview, and then kicked over into technical presentation mode. The first technical demonstration was of a long-distance VMotion, which the Packet Pushers podcast covered in some detail in Episode 2. In summary, F5 provides the ability to move a virtual host between ESX clusters living in two different data centers without the VMware administrator having to touch the F5 appliance.
So how does the long-distance VMotion work? In summary, VMware vCenter Orchestrator uses iControl, F5's API for instructing the device what to do, to coordinate moving a specific virtual machine from one data center to another. If you didn't catch that, there's some real magic going on here. F5 has partnered with VMware such that when the VMware admin goes into vCenter to move a virtual machine to a different cluster, vCenter tells the F5 to pull the host from the pool in one data center and add it to the pool in the other, all while maintaining TCP connection integrity. This allows you to do live migrations between data centers without killing all your clients, something very difficult to contemplate before. There's some additional "magic" here in that two F5 units (one in each data center) build an EtherIP tunnel between each other, running wide-area optimizations across it (WOM is an add-on module providing deduplication, compression, and TCP optimizations) to provide a contiguous layer 2 space between the data centers. In the live demo, F5 stated that they had tested this with latency as high as 300 ms, with as little as 10 Mbps of WAN connectivity.
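To make the order of operations concrete, here is a purely illustrative sketch. None of these proc names are real vCenter or iControl calls; each one is a hypothetical stand-in for a step the vCenter Orchestrator workflow drives, as described above.

```tcl
# Illustrative sketch only: these procs are hypothetical stand-ins for the
# vCenter Orchestrator and iControl calls described above, not real APIs.
proc icontrol {ltm args} { puts "iControl -> $ltm: $args" }
proc vcenter  {args}     { puts "vCenter  -> $args" }

# 1. Drain client traffic for the VM from the pool in data center A.
icontrol dcA-ltm disable-pool-member web_pool vm01:80

# 2. vCenter performs the vMotion across the EtherIP tunnel the two F5
#    units maintain between the data centers (WOM optimizes the link).
vcenter migrate vm01 from cluster-dcA to cluster-dcB

# 3. Enable the VM in the pool in data center B; established TCP
#    connections survive because layer 2 is contiguous across the tunnel.
icontrol dcB-ltm enable-pool-member web_pool vm01:80
```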
After the impressive live demo of long-distance VMotion, Joe Pruitt, Senior Strategic Architect, gave a talk on the F5 APIs: iControl (for remotely controlling an F5) and iRules (TCL scripts attached to a virtual server that react to data flowing through the F5, providing customizable behavior). I have written my own simple iRules, which have saved me lots of time compared to radically reconfiguring the F5 appliance to accomplish certain tasks.
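If you've never seen one, here's about the simplest real-world iRule there is (my example, not one of Joe's): attached to a plain-HTTP virtual server, it bounces every client over to HTTPS.

```tcl
# Minimal iRule: redirect all HTTP requests to HTTPS.
when HTTP_REQUEST {
    HTTP::respond 301 Location "https://[HTTP::host][HTTP::uri]"
}
```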
First up, Joe talked about iControl, which he described as a set of web services. iControl gives you the ability to automate manual tasks such as management and monitoring. Using PowerShell, Perl, Java, and other languages, you can call over 3,000 iControl methods to cause the LTM to perform various tasks. Joe demonstrated the extensibility iControl offers by creating an RSS feed of F5 appliance events and tweeting BIGIP status messages.
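As a rough sketch of what "a set of web services" means in practice, the TCL below posts a SOAP envelope for iControl's System::SystemInfo::get_version method to a BIG-IP. The hostname and credentials are placeholders, and exact header requirements may vary by version; real automation would use one of F5's language toolkits rather than raw SOAP.

```tcl
package require http
package require tls
package require base64   ;# tcllib, for the HTTP basic-auth header

# Placeholder host and credentials -- substitute your own.
set host  "bigip.example.com"
set creds [::base64::encode "admin:secret"]

::http::register https 443 ::tls::socket

# Minimal SOAP envelope calling System::SystemInfo::get_version.
set envelope {<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
  <SOAP-ENV:Body>
    <m:get_version xmlns:m="urn:iControl:System/SystemInfo"/>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>}

set token [::http::geturl "https://$host/iControl/iControlPortal.cgi" \
    -type text/xml -query $envelope \
    -headers [list Authorization "Basic $creds" \
                   SOAPAction "urn:iControl:System/SystemInfo"]]
puts [::http::data $token]   ;# raw SOAP response containing the version info
::http::cleanup $token
```

Swap the method and namespace and the same plumbing reaches any of those 3,000-plus methods, which is all an RSS feed or Twitter bot of BIGIP events really needs.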
After iControl, Joe moved on to a discussion of iRules. What are iRules? In short, iRules are a superset of TCL, allowing you to customize the behavior of an LTM as traffic flows into the box. There is an iRule Editor available for free, which includes syntax checking and highlighting. It’s a nice little tool I have used. Joe pointed out some coding examples, including a credit card scrubber that uses regex to match a potential number, and replaces that text with something obfuscated before returning the page to the HTTP client.
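For flavor, here's a stripped-down sketch of what such a scrubber can look like. The DevCentral original is considerably more robust; this version only buffers modest-sized responses with a Content-Length header and masks anything resembling a card number.

```tcl
when HTTP_RESPONSE {
    # Only buffer responses of a sane size; chunked responses with no
    # Content-Length pass through unscrubbed in this simplified sketch.
    if { [HTTP::header exists "Content-Length"] &&
         [HTTP::header "Content-Length"] > 0 &&
         [HTTP::header "Content-Length"] < 1048576 } {
        HTTP::collect [HTTP::header "Content-Length"]
    }
}
when HTTP_RESPONSE_DATA {
    # Crude pattern: 13-16 digits, optionally separated by spaces or dashes.
    set scrubbed [regsub -all {\d(?:[- ]?\d){12,15}} [HTTP::payload] \
        {XXXX-XXXX-XXXX-XXXX}]
    HTTP::payload replace 0 [HTTP::payload length] $scrubbed
    HTTP::release
}
```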
Joe referenced an important site that I believe to be a big part of the F5 platform’s appeal: DevCentral. DevCentral is an F5 community site where end users can contribute their own iRule and iControl code. I know that when I have an iRule to build, DevCentral is where I start.
The remaining demos involved the BIGIP Edge client, which includes WOM to improve the VPN user experience, and storage optimization using ARX.
Compellent presented to the Tech Field Day delegation about their automated storage solution, which they call “Fluid Data”. View Compellent’s introductory video.
The highlight here was automated tiered storage. Compellent’s solution is a Fibre Channel play, with lower-cost spindles as well, depending on the tier; their chassis (Storage Center) is disk agnostic. Fluid Data automatically takes aging data and dumps it to lower-cost spindles. Whether data is “old” is determined by frequency of access, coupled with metadata stats on what’s been touched. Customers can control what gets moved to a different tier and when, but by default there is an automated profile that works in most contexts.
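As a toy illustration of that default-profile idea (entirely my own sketch, not Compellent code), age-based tier demotion might look conceptually like this:

```tcl
# Toy model of age-based tier demotion -- my illustration, not Fluid Data.
# Hypothetical per-block metadata: id -> {tier daysSinceLastAccess}.
# blk1 is hot, blk2 has gone idle on tier 1, blk3 is cold on tier 2.
set blocks {
    blk1 {1 3}
    blk2 {1 45}
    blk3 {2 120}
}

proc demote {blocks {idleLimit 30} {lowestTier 3}} {
    set out {}
    dict for {id info} $blocks {
        lassign $info tier idleDays
        if {$tier < $lowestTier && $idleDays > $idleLimit} {
            incr tier   ;# age the block out to a cheaper tier
        }
        lappend out $id [list $tier $idleDays]
    }
    return $out
}

puts [demote $blocks]   ;# blk1 {1 3} blk2 {2 45} blk3 {3 120}
```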
The final Tech Field Day presentation was from NEC, on their HYDRAstor storage array. In an attempt to summarize the HYDRAstor product, I’d describe it as a linearly scalable, extremely high-performance architecture that provides incredible throughput and disk-based backup capabilities. NEC accomplishes this via global deduplication, among other techniques. The product also offers WAN optimization to keep arrays in sync for disaster recovery purposes. The array presents CIFS and NFS to the network, including leveraging CIFS 2.0 efficiencies.
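Deduplication is easy to demonstrate in miniature. The sketch below is my own toy, using fixed-size chunks and MD5 where HYDRAstor’s implementation is far more sophisticated, but it shows why two near-identical backup streams consume roughly the physical space of one:

```tcl
package require md5   ;# tcllib; a real system would use a stronger hash

array set store {}    ;# content hash -> chunk (the deduplicated pool)
set logical 0         ;# bytes written by clients
set physical 0        ;# bytes actually stored

proc ingest {data {chunkSize 4096}} {
    global store logical physical
    for {set i 0} {$i < [string length $data]} {incr i $chunkSize} {
        set chunk [string range $data $i [expr {$i + $chunkSize - 1}]]
        set key [::md5::md5 -hex $chunk]
        incr logical [string length $chunk]
        if {![info exists store($key)]} {   ;# store only never-seen chunks
            set store($key) $chunk
            incr physical [string length $chunk]
        }
    }
}

# Two nightly "backups" with identical content dedupe to one physical copy.
ingest [string repeat "monday-full-backup " 1000]
ingest [string repeat "monday-full-backup " 1000]
puts "logical: $logical bytes, physical: $physical bytes"
```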
When we went into the demo room, what I saw from a networking perspective was a rack full of storage, each storage shelf interconnected with multiple Gigabit Ethernet links. NEC-branded Ethernet switches were also used to connect the storage rack to the network. NEC can ship just a shelf or, depending on the scale of the system you’re buying, an entire rack. As I recall, there are interoperability guides for uplinking the rack to a Cisco or other environment.
All in all, NEC’s demonstration of the raw power of the array was very impressive. I should also add that while I’m not specifically focused on storage or backup technologies, the guys in the room who are seemed to like the HYDRAstor solution very much.