The modern enterprise generates massive amounts of data every day. The first problem posed by this data explosion is processing it. Storing these astronomical volumes is less of an issue, since storage providers have so far done a decent job of keeping up with capacity demands. Yet storage today is under pressure. Why? Because in addition to capacity, these constantly growing datasets now need computing power to handle AI/ML and analytics workloads.
From this demand, computational storage was born. By bringing computation closer to storage, it makes it possible to process petabytes of data locally, where the data resides. Also known as “in-situ processing”, computational storage is a technology that makes a lot of sense in today’s data landscape.
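The core idea can be illustrated with a small sketch. The code below is not a real device API; it is a hypothetical model that contrasts the conventional approach, where every byte travels from the drive to the host before filtering, with the in-situ approach, where the drive itself runs the filter and only the results cross the storage interface.

```python
# Illustrative sketch only (hypothetical, not a real computational storage API):
# contrast host-side processing, which moves every record across the bus,
# with in-situ processing, where the drive filters and returns only hits.

def host_side_filter(records, predicate):
    """Conventional model: the host reads all records, then filters."""
    bytes_moved = sum(len(r) for r in records)  # everything crosses the bus
    hits = [r for r in records if predicate(r)]  # filtering happens on the host
    return hits, bytes_moved

def in_situ_filter(records, predicate):
    """Computational storage model: the drive filters in place."""
    hits = [r for r in records if predicate(r)]  # runs on the device itself
    bytes_moved = sum(len(r) for r in hits)  # only results cross the bus
    return hits, bytes_moved

if __name__ == "__main__":
    log = [b"ERROR disk 3", b"INFO ok", b"INFO ok", b"ERROR disk 7"]
    is_error = lambda rec: rec.startswith(b"ERROR")

    host_hits, host_bytes = host_side_filter(log, is_error)
    situ_hits, situ_bytes = in_situ_filter(log, is_error)

    assert host_hits == situ_hits  # same answer either way
    print(f"host-side moved {host_bytes} bytes; in-situ moved {situ_bytes} bytes")
```

Both paths return the same answer; the difference is how much data has to move. At petabyte scale, that reduction in data movement is precisely the value proposition of computational storage.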
Chin-Fah Heoh, a Field Day delegate of many years, has an interesting article on the subject, “Computational Storage embodies Data Velocity and Locality”, in which he discusses the significant advantages of computational storage and the value it brings to organizations. In his blog, he writes:
I have been earnestly observing the growth of Computational Storage for a number of years now. It was known by several previous names, and “in-situ data processing” is the one that stuck with me the most. The Computational Storage nomenclature became more cohesive when SNIA® put together the CMSI (Compute Memory Storage Initiative) some time back. Through this initiative, several standards bodies, the major technology players, and several SIGs (special interest groups) in SNIA® collaborated to advance the Computational Storage segment of the storage technology industry we know today.
Read the rest of his article, “Computational Storage embodies Data Velocity and Locality”, to learn more about his point of view.