
How I Learned to Stop Worrying and Love Storage Automation

The first storage performance horseman is spindles: if you don’t have enough disk units, performance will suffer. I have been laying out storage on enterprise arrays since the dark ages, and one of the first lessons I learned was to allocate data to avoid hotspots. I remember spending hours back in the 1990s hunched over custom Excel spreadsheets trying to get my storage layout just right, balancing the workload across every available disk.

This is how we used to avoid hotspots in 1998: Carefully planning every detail of the storage layout.  

Each disk drive consists of a spindle of spinning platters with read/write heads that move back and forth. Each time you access a piece of data that’s not in cache, the drive moves its arm over the platter to reach the correct piece of data. Since each drive can only access one piece of data at a time, and since caches can only hold so much, tuning a system to minimize the number of requests per drive is essential.
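
To put rough numbers on that, here’s a quick back-of-the-envelope sketch in Python. Every figure in it (roughly 180 random IOPS per 15K drive, the sample workload, the cache hit rate) is an illustrative assumption of mine, not a measurement:

    import math

    # Rough spindle-count estimate; every figure here is an illustrative assumption.
    def spindles_needed(peak_iops, cache_hit_rate, iops_per_drive=180):
        """Estimate how many drives must share the I/O that misses cache."""
        miss_iops = peak_iops * (1.0 - cache_hit_rate)  # requests that actually reach the platters
        return math.ceil(miss_iops / iops_per_drive)

    # Example: a 12,000 IOPS peak with a 75% cache hit rate still wants 17 spindles,
    # even if the data would fit on two drives' worth of capacity.
    print(spindles_needed(12000, 0.75))  # -> 17

The point is that spindle count is driven by the I/O rate, not by capacity, which is exactly why layout had to be tuned so carefully.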

Manual storage array layout was an art, but we never fooled ourselves into thinking our designs were optimal. There were just too many intractable problems, so we had to compromise at every turn:

  • We usually had no performance data to base our layout decisions on, so we had to rely on guesses and rules of thumb
  • Workloads tend to change over time and manual layouts are painful to modify
  • The smallest unit of allocation was an entire LUN or drive, so even the best disk layout mixed hot and rarely-accessed data everywhere
  • Much of the allocated space was unused, so we used expensive disks to store nothing

One might think that, 10 years later, advances in technology would have solved these basic issues. But for many users of so-called modern mainstream enterprise storage systems, these problems remain.

Like all good systems administrators, I’m a natural control freak. I am uncomfortable letting the system manage itself, having been burned too many times by computers (well, software really) making stupid decisions. It’s analogous to the backlash against anti-lock brakes, traction control, and automatic transmissions among racing enthusiasts.

Do we allow technology to help us get better performance, or do we try to micro-manage everything? Photo by ClearInnerVision

But the time has come to let go. We don’t have to micro-manage storage anymore, and we have much to gain by letting the array do the work:

  • Just as traction control can manage each wheel independently, something a driver could never do, modern virtualized storage systems can allocate small “chunks” to the optimal drive type, creating a better layout than anyone could manage with LUNs
  • Dynamic optimization technology can move these chunks around, adapting as loads change (a toy sketch of the idea follows this list)
  • Thin provisioning can go a step further, not wasting drive capacity for unused space
  • Wide striping and post-RAID storage systems have a higher threshold before performance suffers due to spindle hotspots
  • Widespread availability of tiered storage, including advanced caches, solid state drives, high-performance SAS and FC, and cheap bulk disks, gives us many more options
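
To make the contrast with LUN-level layout concrete, here is that toy sketch of chunk-level tiering. It’s my own illustration, not any vendor’s actual algorithm, and the chunk counts and tier size are invented: rank small chunks by how hot they are, promote only the hottest to flash, and leave the cold majority on cheap disk.

    # Toy illustration of chunk-level tiering; not any vendor's actual algorithm.
    # Each LUN is carved into small chunks; only the hottest chunks earn flash.

    FAST_TIER_CHUNKS = 4  # assumed flash capacity, in chunks

    def place_chunks(access_counts):
        """Return (fast_tier, slow_tier) given per-chunk access counts for one LUN."""
        ranked = sorted(access_counts, key=access_counts.get, reverse=True)
        fast = set(ranked[:FAST_TIER_CHUNKS])   # hottest chunks promoted to SSD
        slow = set(ranked[FAST_TIER_CHUNKS:])   # everything else stays on cheap disk
        return fast, slow

    # One LUN, ten chunks: a handful are hot, the rest are nearly idle.
    # Re-run this as access counts change and chunks migrate between tiers automatically.
    counts = {f"chunk{i}": n for i, n in enumerate([900, 5, 850, 2, 3, 780, 1, 4, 910, 6])}
    fast, slow = place_chunks(counts)
    print(sorted(fast))   # the four hottest chunks go to flash
    print(sorted(slow))   # the cold majority stays on bulk disk

Do that with whole LUNs and one hot LUN drags gigabytes of cold data onto flash with it; do it with small chunks and the expensive tier holds only the data that earns its keep.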

As I mentioned, not all systems have these capabilities, and not all implementations are created equal. I’m concerned about misuse of thin provisioning, for example, but it’s hard to argue with its effectiveness in many circumstances. Find out how granular your system’s allocation is – some remain LUN-only, while others are much more effective, using tiny chunks.

These new storage automation technologies really become essential once high-dollar flash storage is added to the mix. If you’re paying 30 times more for a flash drive, you want to make sure you’re making the best possible use of it! Look at IBM’s recently announced SAN Volume Controller (SVC) and solid state drive (SSD) combination, for example: it will almost certainly have fine-grained thin provisioning of SSDs, and should be able to dynamically move data between flash and disk storage and even between different storage arrays, but I still have questions about how granular this capability will be. HDS can do similar things with their USP-V. NetApp’s V-Series NAS systems will do dynamic allocation, thin provisioning, and data deduplication to enable a better return on the flash drive investment. I’d love to see 3PAR, Compellent, Dell/EqualLogic, and HP/LeftHand apply their solid dynamic allocation tech to solid state storage as well!
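
A quick bit of arithmetic shows why. The prices and IOPS figures below are purely illustrative assumptions on my part (I’m just reusing the 30x cost multiple from above), but the shape of the result holds:

    # Back-of-the-envelope flash economics; every number below is an assumption for illustration.
    fc_drive_cost, fc_drive_iops = 1000, 180         # a 15K spindle, roughly
    ssd_cost, ssd_iops = 30 * fc_drive_cost, 10000   # "30 times more" per drive, far more IOPS

    print(round(fc_drive_cost / fc_drive_iops, 2))   # ~5.56 dollars per IOPS on spinning disk
    print(round(ssd_cost / ssd_iops, 2))             # 3.0 dollars per IOPS on flash

Per IOPS, flash can actually be the cheaper medium, but only if the I/O you send to it is the hot I/O. Park cold data on it and you’ve paid the 30x premium for nothing, which is exactly the problem these granular, automated placement technologies exist to solve.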

Then there’s the 800 lb gorilla: EMC. More enterprise SSD has probably been shipped out of Hopkinton than by every other vendor combined, and both the CX and DMX support (optional and expensive) “virtual provisioning” (a.k.a. thin provisioning) of flash storage. But EMC’s Optimizer is not widely used, and it only migrates entire LUNs based on user input – hardly the kind of dynamic, granular technology needed to make optimal use of all that flash storage. I’m sure the company is working on addressing this issue, though. Perhaps it will appear in the DMX-5 announcement we are all expecting this year?

About the author

Stephen Foskett

Stephen Foskett is an active participant in the world of enterprise information technology, currently focusing on enterprise storage, server virtualization, networking, and cloud computing. He organizes the popular Tech Field Day event series for Gestalt IT and runs Foskett Services. A long-time voice in the storage industry, Stephen has authored numerous articles for industry publications, and is a popular presenter at industry events. He can be found online at TechFieldDay.com, blog.FoskettS.net, and on Twitter at @SFoskett.
