There are two fundamental ways to speed up a database server: more RAM and faster storage. Both are expensive, particularly if you have a large database to hold. The key with expensive resources is making sure you get value for the money you spend. You must always look at performance with a system view: the physical hardware sets the maximum possible performance, and each layer above must be optimized to attain it. Since I am not a database administrator or application developer, I usually focus on the physical and operating system layers. There are a few ways to raise that maximum possible performance, some more expensive than others.
Adding more RAM is a great way to accelerate a database server. The extra RAM is used by the operating system or database engine as a cache to accelerate disk access. This cache is handy if your storage is relatively slow for reads and your data is not updated frequently. Using this RAM as a write cache isn’t a good idea because RAM is volatile; if the server crashes, any writes still in RAM are lost. Some databases use RAM to hold the entire database, effectively turning your RAM into high-speed storage. Unfortunately, that RAM is still volatile, so you still need persistent storage fast enough to receive database writes and make them durable. The downsides of adding RAM are that it is expensive and that a single server can only hold so much. Sometimes multiple servers, each with lots of RAM, are clustered to provide database performance, making for a fast but expensive database platform.
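The value of a RAM read cache can be sketched with a back-of-the-envelope latency calculation. The latency figures below are assumed round numbers for illustration, not measurements from any particular server:

```python
# Back-of-the-envelope effect of a RAM cache on average read latency.
# Latency figures are illustrative assumptions, not measured values.

RAM_LATENCY_US = 0.1      # ~100 ns for a read served from RAM cache
SSD_LATENCY_US = 100.0    # ~100 us for a read that misses to flash

def avg_read_latency(hit_rate: float) -> float:
    """Average read latency in microseconds, given the fraction of
    reads served from the RAM cache rather than from storage."""
    return hit_rate * RAM_LATENCY_US + (1 - hit_rate) * SSD_LATENCY_US

# Extra RAM that lifts the cache hit rate from 90% to 99% cuts the
# average read latency by roughly 10x, because the SSD misses dominate:
print(avg_read_latency(0.90))   # ~10 us average
print(avg_read_latency(0.99))   # ~1.1 us average
```

The point of the sketch is that average read latency is dominated by the miss path, which is why adding cache RAM pays off so well for read-heavy, infrequently updated data.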
Eventually, your database needs to land on persistent storage. The faster data flows to and from the database, the faster the storage must be to keep up. If you cannot cache all the data in RAM, your storage will need to keep pace with both reads and writes. For the last few years, fast storage has meant local SSDs or an all-flash shared storage array. If you remember when all-flash arrays were new, flash storage needed to be carefully managed to get the best performance and longevity out of your investment. Many database engines treat SSDs as an extension of RAM, organizing data there before it is written permanently, which leads to large numbers of small writes. These small writes shorten the lifespan of the SSD and, as the drive fills, quickly degrade performance through garbage collection.
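The cost of those small writes can be illustrated with the textbook write-amplification model for greedy garbage collection. This is a standard approximation, not Pliops’ algorithm or any vendor’s actual firmware behavior; the formula and figures below are assumptions for illustration:

```python
# Illustrative SSD write-amplification model (a textbook approximation,
# not any specific drive's firmware). Small random writes force garbage
# collection to copy still-valid pages out of each erase block before
# erasing it, so the flash absorbs more writes than the host issued.

def write_amplification(valid_fraction: float) -> float:
    """Write amplification under a simple greedy-GC model, where each
    reclaimed erase block still holds `valid_fraction` of live pages.
    Only the free fraction of each block absorbs new host data; the
    valid pages must be copied forward, inflating total flash writes."""
    free_fraction = 1.0 - valid_fraction
    # (host writes + GC copy-forward writes) / host writes
    return 1.0 / free_fraction

# On a nearly full drive, where 90% of each reclaimed block is still
# valid, the flash rewrites roughly 10 pages per host page written;
# on a half-empty drive the factor is only 2:
print(write_amplification(0.9))   # ~10x
print(write_amplification(0.5))   # 2x
```

This is why a drive that performed well when new slows down as it fills, and why coalescing small writes into large sequential blocks, as the Pliops card described below does, sidesteps the problem.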
Local Storage Accelerator
We have GPUs to accelerate math-based workloads. We have smart NICs to accelerate network-intensive tasks. What about an accelerator for your storage? A hardware storage accelerator could be the key to unlocking more database performance; you might even view it as a database accelerator. Like GPUs, the first use cases involve local resources: making locally attached SSDs perform at full speed without eating up all your CPU time. It may surprise the virtualization team, but direct-attached storage is common in large database deployments, particularly with cloud application providers. Just like those all-flash arrays, the Pliops Storage Processor (PSP) knows how to get the best out of flash storage: it applies familiar flash-array features to local SSDs. The Pliops card has NVRAM to protect and organize data before writing it out as whole (large) blocks of compressed data. Because the data is reduced and written only in large chunks, the SSD retains its lifespan and performance without the constant garbage collection that small writes trigger. In some of its benchmark results, Pliops shows a single server with SSDs and a PSP delivering performance similar to an eight-server cluster running an in-memory configuration. A Pliops Storage Processor might be the most cost-effective way to improve your database performance.
To learn more about Pliops, visit their website and be sure to check them out at the upcoming Cloud Field Day 11!