
Hitachi’s (HDS) RAID 6

Hitachi (HDS) has been one of the pioneers in implementing RAID 6 in its storage products. I believe the necessity of RAID 6 at HDS was first recognized back in 2004 with the release of high-capacity disk drives, and they began implementing it in 2005 with the USPs and later in the TagmaStore Modular Storage products.

In upcoming posts, we will talk about RAID 6 technology and its usage by different OEMs like HDS, EMC and NetApp. If possible, I will try to write a final post comparing these OEMs and how each of them has leveraged RAID 6.

All OEMs tend to modify RAID technology in their microcode / software / firmware to better fit their product or to enhance it based on various factors like speed, rebuild times, etc. Prime examples are EMC's implementation of RAID S with Symmetrix products and NetApp's implementation of RAID DP with its products.

HDS’s Business Case

RAID 6 is available in Hitachi's USP, WMS and AMS disk arrays.

System and storage administrators are all very well versed with RAID 5 and have been using it as a standard RAID technology across servers and mid-tier storage. Storage disk arrays require some RAID configuration, for example RAID 1, RAID 1+0, RAID 3, RAID 5, RAID 6, RAID 10, RAID S, etc.

Hitachi products support RAID 0, RAID 1, RAID 5 and RAID 6.  

Please see my previous posts on RAID and the various RAID technologies:

http://www.storagenerve.com/2009/01/raid-technology-continued.html

http://www.storagenerve.com/2008/07/raid-types.html

RAID 5 has been common practice for the last 10 to 15 years. Drive sizes during these years varied from 4GB to 146GB SCSI or Fibre Channel disks (including sizes like 4.3GB, 9GB, 18GB, 36GB, 50GB, 73GB and 146GB). These days you seldom see drives of those sizes; customers are talking about disks that start at 300GB (FC or SATA) and go up to 1TB. Over the next 2 to 3 years, we will absolutely see disk sizes between 3TB and 4TB.

RAID 5 Abstract

Technology:  Striping Data with Distributed Parity, Block Interleaved Distributed Parity

Performance:  Medium

Overhead:  15% to 20%; with additional drives in the RAID group you can substantially bring down the overhead.

Minimum Number of Drives:  3

Data Loss:  With one drive failure, no data loss. With multiple drive failures in the same RAID group, data loss is imminent.

Advantages:  It has the highest read data transaction rate and a medium write data transaction rate. A low ratio of ECC (parity) disks to data disks translates to high efficiency, along with a good aggregate transfer rate.

Disadvantages:  Disk failure has a medium impact on throughput. It also has the most complex controller design. It is often difficult to rebuild in the event of a disk failure (as compared to RAID level 1), and the individual block data transfer rate is the same as a single disk.

RAID 5 relies on parity information to provide redundancy and fault tolerance, using independent data disks with distributed parity blocks. Each entire data block is written onto a data disk; parity for blocks in the same rank is generated on writes, recorded in a distributed location and checked on reads.
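To make the parity mechanism concrete, here is a minimal sketch (illustrative only, not Hitachi's microcode): the parity block is the XOR of the data blocks in a stripe, so any single lost block can be recomputed by XOR-ing the surviving blocks with the parity.

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Toy stripe: 4 data blocks of 8 bytes each (a real array uses much larger blocks)
data = [bytes([i] * 8) for i in (1, 2, 3, 4)]
parity = xor_blocks(data)              # P = D0 XOR D1 XOR D2 XOR D3

# Simulate losing D2 and rebuilding it from the survivors plus parity
survivors = [data[0], data[1], data[3], parity]
rebuilt_d2 = xor_blocks(survivors)
assert rebuilt_d2 == data[2]
```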

This would classify as one of the most popular RAID technologies of the past.

The rebuild time on drive sizes from 4.3GB to 146GB can be about 18 to 24 hours during production, and closer to 4 to 8 hours during off-production times. There is a risk associated with RAID 5 if any additional drive fails in the same RAID group.

Let’s say you have a single drive failure in your RAID 5 group. The vendor picks up the error using the call-home feature and dispatches an engineer to come onsite and replace the drive. It’s now 4 hours since the drive failed. You as a customer haven’t seen any performance impact yet. The drive is replaced and it will take 24 hours to completely rebuild from its partners in the same RAID group. So it’s really 28 to 30 hours since your initial drive failure. During this time, if you hit one more roadblock (a read/write hiccup or a bad sector) in the same RAID group, the data in the RAID group will be lost.

These days the normal drive size is at least 300GB. With FC and SATA the drive sizes come in variations of 250GB, 300GB, 450GB, 500GB, 750GB and, the latest addition, 1TB. With these larger SATA drives, rebuild times can run 30 to 45 hours, or in some cases even 100 hours. The window where things can really go wrong is now much larger. That is one of the reasons quite a few vendors have introduced RAID 6.
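As a rough back-of-the-envelope sketch (the 20 MB/s effective rebuild rate below is my assumption; real rates depend on array load, drive type and rebuild priority), rebuild time scales roughly linearly with drive capacity:

```python
def rebuild_hours(capacity_gb, rebuild_mb_per_s=20):
    """Rough rebuild-time estimate: capacity divided by effective rebuild throughput."""
    return capacity_gb * 1024 / rebuild_mb_per_s / 3600

for size in (146, 300, 500, 1000):
    print(f"{size} GB drive: ~{rebuild_hours(size):.0f} hours")
# 146 GB -> ~2 h, 1 TB -> ~14 h at 20 MB/s; heavy production load can stretch this several-fold
```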

RAID 6 Abstract

Technology:  Striping Data with Double Parity, Independent Data Disk with Double Parity

Performance:  Medium

Overhead:  20% to 30%; with additional drives you can bring down the overhead.

Minimum Number of Drives:  4

Data Loss:  No data loss with one drive failure, or even with two drive failures in the same RAID group.

Advantages:  RAID 6 is essentially an extension of RAID 5 which allows for additional fault tolerance by using a second independent distributed parity scheme (two-dimensional parity). Data is striped on a block level across a set of drives, just like in RAID 5, and a second set of parity is calculated and written across all the drives. RAID 6 provides extremely high data fault tolerance and can sustain multiple simultaneous drive failures, which typically makes it a perfect solution for mission-critical applications.

Disadvantages:  Poor write performance, in addition to requiring N+2 drives because of the two-dimensional parity scheme.

Note:  Hitachi does not recommend using RAID 6 with high-performance applications where extreme random writes are being performed; in some cases, the use of RAID 1 or RAID 1+0 is essential. There is a performance overhead associated with the use of RAID 6, which we will talk about later in the post.

Probability of Data Loss with RAID 5 and RAID 6

As you can see in the graph, the probability or percentage of exposure related to RAID 5 double failures is as much as 7.5%, while the chance of a triple failure in a RAID 6 configuration is 0%. As drive sizes increase, the usage of RAID 6 will become more prominent.
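As a toy model of that exposure (a sketch only, assuming independent failures and an illustrative 3% annual failure rate, which is my assumption rather than an HDS figure), here is the chance that one of the surviving drives in a RAID 5 group fails during the rebuild window:

```python
import math

def p_second_failure(surviving_drives, window_hours, annual_failure_rate=0.03):
    """Probability that at least one surviving drive fails during the rebuild window,
    assuming independent, exponentially distributed drive failures."""
    rate_per_hour = annual_failure_rate / (365 * 24)
    return 1 - math.exp(-rate_per_hour * window_hours * surviving_drives)

# 7+1 RAID 5 group, 30-hour exposure window from the scenario above
print(p_second_failure(surviving_drives=7, window_hours=30))
# ~0.07% from random failures alone; unrecoverable read errors during the rebuild
# push real-world exposure on large SATA drives much higher
```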

HDS’s Technology

Let’s take the diagram above as an example: we have 8 disk drives in a USP system.

D1, D2, D3, D4, D5 and D6 represent data blocks, and P1 and P2 are the dual parity.

The data blocks are followed by the parity blocks, and after the last parity drive the new data blocks start to write again. This sequential nature is where the big improvement in the performance of this technology comes from.
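For illustration, here is a simplified sketch of a rotating dual-parity layout over 8 drives; the exact placement in Hitachi's microcode may differ, this just shows six data blocks plus P and Q per stripe with the parity positions rotating from stripe to stripe.

```python
def raid6_layout(num_drives=8, stripes=4):
    """Print a rotating 6D+2P layout: P and Q shift one drive position per stripe."""
    for stripe in range(stripes):
        row = ["D"] * num_drives
        p_pos = (num_drives - 2 - stripe) % num_drives
        q_pos = (num_drives - 1 - stripe) % num_drives
        row[p_pos], row[q_pos] = "P", "Q"
        print(f"stripe {stripe}: {' '.join(row)}")

raid6_layout()
# stripe 0: D D D D D D P Q
# stripe 1: D D D D D P Q D
# ...
```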

To make things a bit more concrete and learn this technology, let’s introduce some mathematical formulas behind the implementation of RAID 6.

In the diagram above, D0, D1, D2, D3, D4 and D5 are the data blocks (stripes), P is the calibration data (first parity) and Q is the secondary parity.

Using a mathematical formula over the data stripes (D0, D1, D2, D3, D4 and D5) with XOR (exclusive OR), P (the calibration data) is generated.

P = D0 XOR D1 XOR D2 XOR D3 XOR D4 XOR D5

Q is generated by multiplying each data stripe (D0 through D5) by a coefficient and XOR-ing the products:

Q = A0 * D0 XOR A1 * D1 XOR A2 * D2 XOR A3 * D3 XOR A4 * D4 XOR A5 * D5

Typically, with one drive failure, P (the calibration data) is used to regenerate or rebuild the new drive; with two drive failures, both the P and Q data are used to rebuild the new drives.
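To make the two formulas concrete, here is a minimal sketch. It assumes, as Reed-Solomon style RAID 6 implementations typically do, that the coefficients A0 through A5 are distinct powers of a generator in GF(2^8); the actual coefficients used in Hitachi's microcode are not documented here.

```python
from functools import reduce

def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) using the 0x11D polynomial common in RAID 6 codes."""
    result = 0
    for _ in range(8):
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11D
    return result

def pq_parity(data_blocks):
    """Compute P (plain XOR) and Q (coefficient-weighted XOR) for one stripe.
    Each data block D_i is weighted by a distinct coefficient g^i, with g = 2."""
    p = bytes(reduce(lambda x, y: x ^ y, col) for col in zip(*data_blocks))
    q = bytearray()
    for col in zip(*data_blocks):
        acc, coeff = 0, 1
        for byte in col:
            acc ^= gf_mul(coeff, byte)
            coeff = gf_mul(coeff, 2)      # advance to the next coefficient g^(i+1)
        q.append(acc)
    return p, bytes(q)

# One stripe of six data blocks (D0..D5). With a single failed block, P alone
# is enough to rebuild it; with two failed blocks, the P and Q equations are
# solved together to recover both.
stripe = [bytes([i + 1] * 4) for i in range(6)]
p, q = pq_parity(stripe)
```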

Risk

Here is a nice chart that shows the risk associated with RAID 5 and RAID 6.


As time elapses after a drive failure on RAID 5 (time to respond plus rebuild time), the associated risk tends to increase.

With RAID 6 and a single drive failure, the associated risk tends to stay flat at 0%.

Here is the risk of data loss versus rebuild time for different RAID group sizes.

As you can see in both the graphs, the risk associated with RAID 6 is pretty much zero percent.

Overhead

As discussed earlier, there is additional overhead with the usage of RAID 6 versus RAID 5. But the risk associated with using RAID 5 is much higher than the overhead consumed by RAID 6. Here is a graph that shows the overhead associated with RAID 6.

As you see in the graph, the overhead with 6 data drives and 2 parity drives is only 25%. If you were running mirroring or some other variation of RAID, the overhead can be between 25% and 50%. So in short, even with 2 parity drives, the advantages of RAID 6 are considerably greater.
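That overhead number is simply the parity drives divided by the total drives in the group, so it shrinks as the RAID group grows; a quick sketch:

```python
def raid6_overhead(data_drives, parity_drives=2):
    """Capacity overhead: fraction of the RAID group consumed by parity."""
    return parity_drives / (data_drives + parity_drives)

for d in (2, 6, 14):
    print(f"{d}D+2P: {raid6_overhead(d):.0%} overhead")
# 2D+2P: 50%, 6D+2P: 25%, 14D+2P: 12% -- versus a flat 50% for mirroring
```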

From a performance standpoint, RAID 5 and RAID 6 are pretty similar for random read, sequential read and sequential write workloads. There is an added penalty for random write workloads because of the two-dimensional parity. Compared to RAID 5, RAID 6 takes a 33% performance hit on Hitachi with random write workloads.
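The random-write penalty comes from the read-modify-write cycle: a small RAID 5 write costs roughly 4 back-end I/Os (read old data, read old parity, write new data, write new parity), while RAID 6 costs roughly 6 because Q must also be read and rewritten. A rough sketch of the effect on front-end write IOPS (the per-drive IOPS figure below is an assumption):

```python
def random_write_iops(drives, iops_per_drive=180, penalty=6):
    """Approximate front-end random-write IOPS for a RAID group,
    dividing raw back-end IOPS by the write penalty (4 for RAID 5, 6 for RAID 6)."""
    return drives * iops_per_drive / penalty

raid5 = random_write_iops(8, penalty=4)
raid6 = random_write_iops(8, penalty=6)
print(raid5, raid6, f"{1 - raid6 / raid5:.0%} lower")   # RAID 6 lands roughly a third lower
```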

To sum up, if you are using high-capacity disk drives on your Hitachi systems and are looking to mitigate failures, it is highly recommended that you use RAID 6 on these systems.

RAID 6 is a great technology, maybe the technology of the present, but the future of RAID will go to a different place. Imagine you have a 20TB drive (a likely reality by 2012 with SATA): how long will it take to rebuild, and what is the risk of a triple fault with it?

Note: The graphs above have been obtained from two different documents — Hitachi’s RAID 6 Protection and Hitachi’s RAID 6 Storage Doc.

About the author

Devang Panchigar

With more than 7 Years of IT experience, Devang is currently the Director of Technology Solutions and IT Operations at Computer Data Source, Inc. Devang has held several positions in the past including Sr. Systems Engineer, Sr. Network Engineer, Technical Support Manager, Director of Storage Support & Operations. He has been responsible for creating and managing worldwide technical support teams, technology solutions team, operations management, service delivery, pre and post sales support, marketing and business planning. In his current role Devang oversees multiple aspects of the Technology Solutions Group that works with various Multinational and Fortune 500 companies providing them infrastructure services. Along with various industry certifications, Devang holds a Bachelor of Science from South Gujarat University, India and a Master of Science in Computer Science from North Carolina A&T State University.
