Adam Banks at Ars Technica has a great writeup on how cheap RAM changes storage, and in effect system architecture. Cheap RAM shifts the ratio of RAM to storage in the database, allowing more data to be held in memory and reducing disk reads. The problem? Well, volatile memory is… volatile, not necessarily something you're looking for in your database. Snapshots and journalling can mitigate this, but they introduce computational overhead. Non-volatile memory may offer a better alternative, but its most common form today, the flash used in SSDs, has potential long-term reliability problems. Adam reviews some of the alternative NVRAM options coming to market and explores interesting implications for everything from business databases to the Exascale Computing Project.
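To make that overhead concrete, here's a minimal sketch (my illustration, not anything from the article; the `JournaledStore` name and design are hypothetical) of an in-memory key-value store that journals every write to an append-only log before applying it in memory. Reads run at RAM speed, but every write pays for a disk sync, which is the trade-off journalling imposes:

```python
import json
import os

class JournaledStore:
    """Toy in-memory key-value store with a write-ahead journal.

    Every write is appended (and fsync'd) to a log file before it is
    applied in memory, so the store can be rebuilt after a crash.
    The per-write sync is the overhead the article alludes to.
    """

    def __init__(self, journal_path="store.journal"):
        self.data = {}
        self.journal_path = journal_path
        self._replay()  # recover any state left by a previous run
        self.journal = open(journal_path, "a")

    def _replay(self):
        if not os.path.exists(self.journal_path):
            return
        with open(self.journal_path) as f:
            for line in f:
                entry = json.loads(line)
                self.data[entry["key"]] = entry["value"]

    def put(self, key, value):
        # Journal first, then mutate memory: write-ahead discipline.
        self.journal.write(json.dumps({"key": key, "value": value}) + "\n")
        self.journal.flush()
        os.fsync(self.journal.fileno())  # durability costs a disk sync
        self.data[key] = value

    def get(self, key):
        return self.data.get(key)  # reads are pure RAM speed


store = JournaledStore()
store.put("answer", 42)
print(store.get("answer"))  # 42, and it survives a restart
```

A production system would batch those syncs and periodically compact the journal into a snapshot to claw back throughput; that extra machinery is the computational overhead Banks points to, and it's exactly what NVRAM promises to make unnecessary.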
Ars Technica comments:
RAM (random access memory) is a component of every computer system, from tiny embedded controllers to enterprise servers. In the form of SRAM (static RAM) or DRAM (dynamic RAM), it’s where data is held temporarily while some kind of processor operates on it. But as the price of RAM falls, the model of shuttling data to and from big persistent storage and RAM may no longer hold.
RAM is highly susceptible to market fluctuations, but the long-term price trend is steadily downward. Historically, as recently as 2000 a gigabyte of memory cost over $1,000 (£800 in those days); today, it’s just under $5 (~£5). That opens up very different ways of thinking about system architecture.
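As a rough back-of-the-envelope check (my arithmetic, not the article's; the 17-year span assumes the piece dates from around 2017), a drop from over $1,000/GB to about $5/GB is roughly a 200× decline, meaning RAM prices halved about every two years:

```python
import math

price_2000 = 1000.0  # $/GB in 2000 (article's figure)
price_now = 5.0      # $/GB today (article's figure)
years = 17           # assumed span, 2000 to ~2017

decline = price_2000 / price_now      # ~200x cheaper
halving = years / math.log2(decline)  # ~2.2 years per halving
print(f"{decline:.0f}x cheaper, halving every {halving:.1f} years")
```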
Read more at: Thanks for the memory: How cheap RAM changes computing
Image Credit: John C. McCallum (data)