
NetApp Deduplication: An In-depth Look

There has been a lot of discussion lately about the NetApp deduplication technology, especially on Twitter. A lot of misinformation and FUD has been flying around, so I thought a blog entry that takes a close look at the technology was in order.

But first, a bit of disclosure: I currently work for a storage reseller that sells NetApp as well as other storage. The information in this blog posting is derived from NetApp documents, as well as my own personal experience with the technology at our customer sites. This posting is not intended to promote the technology as much as it is to explain it. The intent here is to provide information from an independent perspective. Those reading this blog post are, of course, free to interpret it the way they choose.

How NetApp writes data to disk.

First, let's talk about how the technology works. For those who aren't familiar with how a NetApp array stores data on disk, here's the key to understanding how NetApp approaches writes. NetApp stores data on disk using a simple file system called WAFL (Write Anywhere File Layout). The file system stores metadata that describes the data blocks: inodes point to indirect blocks, and indirect blocks point to the data blocks. One other thing that should be noted about the way NetApp writes data is that the controller will coalesce writes into full stripes whenever possible. Furthermore, the concept of updating a block in place is unknown in the NetApp world. Block updates are simply handled as new writes, and the pointers are moved to point to the new "updated" block.
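
To make the write-anywhere idea concrete, here is a tiny conceptual sketch in Python. It is not WAFL code, just an illustration of the behavior described above: an "update" allocates a fresh block and moves the pointer, leaving the old block untouched on disk.

```python
# Conceptual sketch of write-anywhere semantics -- not WAFL internals.
# An update never overwrites a block in place; it writes a new block
# and moves the pointer, so the old block can still back a snapshot.

class ToyWriteAnywhereVolume:
    def __init__(self):
        self.blocks = {}          # block_id -> data
        self.pointers = {}        # (file, offset) -> block_id
        self.next_id = 0

    def write(self, file, offset, data):
        block_id = self.next_id   # always allocate a fresh block
        self.next_id += 1
        self.blocks[block_id] = data
        old = self.pointers.get((file, offset))
        self.pointers[(file, offset)] = block_id
        return old                # the old block is never overwritten

vol = ToyWriteAnywhereVolume()
vol.write("lun0", 0, b"version 1")
old = vol.write("lun0", 0, b"version 2")   # "update" = new block + repoint
print(old in vol.blocks)                    # True: old data is still on disk
```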

How deduplication works.

First, it should be noted that NetApp deduplication operates at the volume level. In other words, all of the data within a single NetApp volume is a candidate for deduplication. This includes both file data and block (LUN) data stored within that NetApp volume. NetApp deduplication is a post-process that runs based on either a watermark for the volume or a schedule. For example, if the volume exceeds 80% of its capacity, a deduplication run can be started automatically. Or, a deduplication run can be started at a particular time of day, usually at a time when the user thinks the array will be less utilized.
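
As a simple illustration of the two triggers just described, here is a hypothetical check in Python. The 80% watermark and the 02:00 schedule are example values only, not ONTAP settings.

```python
# Toy illustration of the two dedup triggers described above:
# a capacity watermark and a time-of-day schedule.
# The names and thresholds here are hypothetical, not ONTAP configuration.

from datetime import datetime

WATERMARK = 0.80          # start a run when the volume passes 80% full
SCHEDULED_HOUR = 2        # or at 02:00, when load is expected to be low

def should_start_dedup(used_bytes, capacity_bytes, now=None):
    now = now or datetime.now()
    over_watermark = used_bytes / capacity_bytes >= WATERMARK
    scheduled = now.hour == SCHEDULED_HOUR
    return over_watermark or scheduled

print(should_start_dedup(850, 1000))   # True: 85% used exceeds the watermark
```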

The maximum sharing for a block is 255. This means that if there are 500 duplicate blocks, there will be 2 blocks actually stored, with half of the pointers pointing to the first block and half pointing to the second block. Note that this 255 maximum is separate from the 255 maximum for snapshots.
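
The arithmetic behind that example, sketched in Python for illustration:

```python
# How many physical blocks survive when duplicates exceed the sharing limit.
import math

MAX_SHARING = 255          # maximum references to a single physical block

def physical_blocks_needed(duplicate_blocks):
    # 500 identical blocks cannot all point at one copy; they are
    # spread across ceil(500 / 255) = 2 stored blocks.
    return math.ceil(duplicate_blocks / MAX_SHARING)

print(physical_blocks_needed(500))   # 2
```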

When deduplication runs for the first time on a NetApp volume with existing data, it scans the blocks in the volume and creates a fingerprint database, which contains a sorted list of all fingerprints for used blocks in the volume. After the fingerprint file is created, fingerprints are checked for duplicates and, when a match is found, a byte-by-byte comparison of the blocks is done first to make sure that the blocks are indeed identical. If they are found to be identical, the duplicate block's pointer is updated to the already existing data block, and the new (duplicate) data block is released. Releasing a duplicate data block entails updating the indirect inode that pointed to it, incrementing the block reference count for the already existing data block, and freeing the duplicate data block.
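
Here is a rough Python sketch of the flow just described: build a fingerprint index, and on a fingerprint match verify the blocks byte for byte before repointing and releasing the duplicate. The hash function and data structures are stand-ins for illustration, not ONTAP internals.

```python
# Illustrative sketch of fingerprint-based dedup with byte-for-byte
# verification. The fingerprint function and structures are stand-ins.

import hashlib

def fingerprint(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

def dedup_pass(blocks):
    """blocks: dict of block_id -> bytes. Returns (index, remap)."""
    index = {}    # fingerprint -> representative block_id
    remap = {}    # duplicate block_id -> surviving block_id
    for block_id, data in blocks.items():
        fp = fingerprint(data)
        survivor = index.get(fp)
        # A fingerprint match alone is not trusted: compare the bytes too.
        if survivor is not None and blocks[survivor] == data:
            remap[block_id] = survivor     # repoint, then free block_id
        else:
            index[fp] = block_id
    return index, remap

blocks = {1: b"A" * 4096, 2: b"B" * 4096, 3: b"A" * 4096}
_, remap = dedup_pass(blocks)
print(remap)   # {3: 1}: block 3 is a duplicate of block 1 and can be freed
```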

As new data is written to the deduplicated volume, a fingerprint is created for each new block and written to a change log file. When deduplication runs subsequently, the change log is sorted, its sorted fingerprints are merged with those in the fingerprint file, and then the deduplication processing occurs as described above. There are two change log files, so that while deduplication is running and merging the new blocks from one change log file into the fingerprint file, fingerprints for new data being written to the flexible volume are recorded in the second change log file. The roles of the two files are then reversed the next time deduplication is run. (For those familiar with Data ONTAP's usage of NVRAM, this is analogous to switching from one half to the other to create a consistency point.) Note that when deduplication is run on an empty volume, the fingerprint file is still created from the log file.
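
A toy model of the alternating change logs, again purely illustrative: new fingerprints always land in the active log while the other log is being merged into the fingerprint file, and the roles swap on the next run.

```python
# Toy model of the alternating change-log scheme described above.
# Not ONTAP code; it only illustrates the role swap between the two logs.

class ToyChangeLogs:
    def __init__(self):
        self.logs = [[], []]        # two change log files
        self.active = 0             # index of the log receiving new fingerprints
        self.fingerprint_file = []  # sorted list of known fingerprints

    def record_write(self, fp):
        self.logs[self.active].append(fp)

    def run_dedup(self):
        merging = self.active
        self.active = 1 - self.active        # new writes go to the other log
        merged = sorted(self.fingerprint_file + sorted(self.logs[merging]))
        self.fingerprint_file = merged
        self.logs[merging] = []              # the merged log is emptied

logs = ToyChangeLogs()
logs.record_write("fp-b")
logs.record_write("fp-a")
logs.run_dedup()                 # merges log 0; log 1 becomes active
logs.record_write("fp-c")        # lands in log 1 while the merge completes
print(logs.fingerprint_file)     # ['fp-a', 'fp-b']
```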

Performance of NetApp deduplication

There has been a lot of discussion about the performance of NetApp deduplication. In general, deduplication will consume CPU and memory on the controller. How much CPU will be utilized is very hard to determine ahead of time; in general you can expect to use from 0% to 15% of the CPU in most cases, but as much as 50% has been observed in some cases. The impact of deduplication on a host or application can vary significantly and depends on a number of different factors, including:

–       The application and the type of dataset being used
–       The data access pattern (for example, sequential versus random access; the size and pattern of the I/O)
–       The amount of duplicate data, the compressibility of the data, the amount of total data, and the average file size
–       The nature of the data layout in the volume
–       The amount of changed data between deduplication runs
–       The number of concurrent deduplication processes and compression scanners running
–       The number of volumes that have compression/deduplication enabled on the system
–       The hardware platform (the amount of CPU/memory in the system)
–       The amount of load on the system
–       Disk types (ATA/FC) and the RPM of the disks
–       The number of disk spindles in the aggregate

Deduplication is a low-priority process, so host I/O will take precedence over deduplication. However, all of the items above will affect the performance of the deduplication process itself. In general you can expect somewhere between 100 MB/sec and 200 MB/sec of deduplication throughput from a NetApp controller.
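
A quick back-of-the-envelope calculation using that throughput range, for illustration only:

```python
# Rough estimate of how long a dedup run might take, using the
# 100-200 MB/sec range quoted above. Purely illustrative numbers.

def dedup_hours(data_gib, mb_per_sec):
    return data_gib * 1024 / mb_per_sec / 3600

for rate in (100, 200):
    print(f"1 TiB at {rate} MB/sec is roughly {dedup_hours(1024, rate):.1f} hours")
# 1 TiB at 100 MB/sec is roughly 2.9 hours
# 1 TiB at 200 MB/sec is roughly 1.5 hours
```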

The effect of deduplication on the write performance of a system is very dependent on the model of controller and the amount of load that is being put on the system. For deduplicated volumes, if the load on a system is low (that is, for systems where CPU utilization is around 50% or lower), there is a negligible difference in performance when writing data to a deduplicated volume, and there is no noticeable impact on other applications running on the system. On heavily used systems, however, where the system is nearly saturated, the impact on write performance can be expected to be around 15% for most models of controllers.

Read performance of a deduplicated volume depends on the type of reads being performed. The impact on random reads is negligible. In early versions of Data ONTAP, the impact of deduplication was noticeable with heavy sequential read workloads. However, with version 7.3.1 and above, NetApp added something they call "intelligent cache" to ONTAP specifically to help with the performance of sequential reads on deduplicated volumes, and it mitigates the impact on sequential reads almost completely. Finally, with the addition of FlashCache cards to a controller, performance of deduplicated volumes can actually be better than that of non-deduplicated volumes.

Deduplication Interoperability with Snapshots.

Snapshots and their interoperability with deduplication have been a hotly debated topic on the internet lately. Snapshot copies lock blocks on disk, and those blocks cannot be freed until the Snapshot copy expires or is deleted. On any volume, once a Snapshot copy of data is made, any subsequent changes to that data temporarily require additional disk space, until the Snapshot copy is deleted or expires. This is true for deduplicated volumes as well as non-deduplicated volumes. Thus, the space savings from deduplication for any data held by a Snapshot copy taken prior to a deduplication run will not be realized until after that Snapshot copy expires or is deleted.
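
A toy illustration of why those savings are deferred: a duplicate block can only be released once no Snapshot copy still references it. The block numbers are examples only.

```python
# Toy model: dedup can free a duplicate block immediately only if no
# snapshot still locks it; otherwise the saving waits for the snapshot
# to expire or be deleted. Illustrative only.

def reclaimable(duplicate_blocks, snapshot_locked_blocks):
    """Split duplicates into blocks freeable now vs. deferred."""
    free_now = duplicate_blocks - snapshot_locked_blocks
    deferred = duplicate_blocks & snapshot_locked_blocks
    return free_now, deferred

duplicates = {10, 11, 12, 13}
locked_by_snapshot = {12, 13}
now, later = reclaimable(duplicates, locked_by_snapshot)
print(sorted(now))     # [10, 11] reclaimed immediately
print(sorted(later))   # [12, 13] reclaimed after the snapshot is deleted
```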

Some best practices to achieve the best space savings from deduplication-enabled volumes that contain Snapshot copies include:

–       Run deduplication before creating new Snapshot copies.
–       Limit the number of Snapshot copies you maintain.
–       If possible, reduce the retention duration of Snapshot copies.
–       Schedule deduplication only after significant new data has been written to the volume.
–       Configure appropriate reserve space for the Snapshot copies.

Some Application Best Practices

VMware

In general, VMware data deduplicates well, especially if a few best practices are followed when laying out the VMDK files. The following should be considered for VMware implementations:

–       Operating system data deduplicates very well, so you should stack as many OS instances onto the same volume as possible.
–       Keep VM swap files, pagefiles, and user and system temp directories on separate VMDK files.
–       Utilize FlashCache wherever possible to cache frequently accessed blocks (like those from the OS).
–       Always perform proper alignment of your VMs on the NetApp 4K block boundaries (a quick check is sketched after this list).
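
As a rough illustration of the alignment check mentioned in the last bullet, here is a minimal Python sketch; the offsets used are just examples.

```python
# Quick check that a guest partition's starting offset falls on a 4 KB
# boundary, as the alignment bullet above suggests. Example offsets only.

def is_4k_aligned(start_offset_bytes: int) -> bool:
    return start_offset_bytes % 4096 == 0

print(is_4k_aligned(32256))    # False: classic 63-sector offset (63 * 512)
print(is_4k_aligned(1048576))  # True: 1 MiB-aligned partition
```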

Microsoft Exchange

In general, deduplication provides little benefit for versions of Microsoft Exchange prior to Exchange 2010. Starting with Exchange 2010, Microsoft eliminated single instance storage, and deduplication can reclaim much of the additional space created by this change.

Backups (NDMP, SnapMirror and SnapVault)

The following are some best practices to consider for backups of deduplicated volumes:

–       Ensure deduplication operations initiate only after your backup completes.
–       Ensure deduplication operations on the destination volume complete prior to initiating the next backup.
–       If backing up data from multiple volumes to a single volume, you may achieve significant space savings from deduplication beyond the savings on the source volumes. This is because you are able to run deduplication on the destination volume, which could contain duplicate data from multiple source volumes.
–       If you are backing up data from your backup disk to tape, consider using SMTape to preserve the deduplication/compression savings. Utilizing NDMP to tape will not preserve the deduplication savings on tape.
–       Data compression can affect the throughput of your backups. The amount of impact depends upon the type of data, its compressibility, the storage system type, and the available resources on the destination storage system. It is important to test the effect on your environment before implementing in production.
–       If the application that you are using to perform backups already does compression, NetApp data compression will not add significant additional savings.

Conclusions

In general, NetApp deduplication can help drive down the TCO of your storage systems significantly, especially when combined with FlashCache in a VMware or virtual desktop environment. If best practices are followed carefully, the performance impact of deduplication is negligible, and the space savings for some applications can be considerable. Some careful planning and testing in the customer's environment are necessary to ensure that maximum advantage is taken of deduplication; however, the ability to schedule when the operations take place, combined with the ability to turn deduplication on and off, provides significant flexibility to tune the environment for a customer's particular application profile.

About the author

Joerg Hallbauer

I am a long-time data center denizen who currently focuses on storage and storage-related issues. I have worked on both sides of the fence, for vendors as well as being the guy who had to implement what they sold me. Finally, I've managed teams of UNIX, Windows, and storage admins.
