In some ways, StorMagic has an old-school approach to software-defined storage. Instead of a hyperconverged infrastructure approach that uses some of the same principles but ultimately locks you into very specific hardware, StorMagic is strictly software-only. Their goal is to provide software-abstracted storage functions that let organizations run on their hardware of choice. They see their market at the edge of the enterprise: remote locations for large organizations where installing and deploying specialized hardware isn't cost-effective or physically feasible.
After looking at some of the announcements from AWS re:Invent, the most interesting was the AWS Snowmobile, an insane 100PB SAN on wheels. This seems like the ultimate in sneakernet: enormous throughput, but very high latency. What if, instead of offering a big pile of storage with bad latency, you could simply use your own storage but distribute it with extremely low latency? ClearSky Data claims they can deliver this. I sat in on a product briefing to figure out how.
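To see why a truck full of disks is all throughput and no latency, here is a rough back-of-the-envelope sketch. Only the 100PB figure comes from the announcement; the three-day transit time and the 10 Gbps comparison link are purely illustrative assumptions.

```python
# Back-of-the-envelope: Snowmobile throughput vs. latency.
# Assumed numbers: 100 PB capacity (from the announcement),
# 3 days of transit (hypothetical), 10 Gbps WAN link for comparison.

PB = 10**15  # bytes in a petabyte (decimal)

capacity_bytes = 100 * PB
transit_seconds = 3 * 24 * 3600          # assumed 3-day drive
effective_gbps = capacity_bytes * 8 / transit_seconds / 1e9

wan_gbps = 10
wan_days = capacity_bytes * 8 / (wan_gbps * 1e9) / (24 * 3600)

print(f"Snowmobile effective throughput: ~{effective_gbps:,.0f} Gbps")
print(f"Same 100 PB over a {wan_gbps} Gbps link: ~{wan_days:,.0f} days")
print("Latency to the first byte: the full transit time, i.e. days, not milliseconds.")
```

With those assumed numbers the truck works out to a few thousand gigabits per second of effective bandwidth, versus years of transfer time over a single 10 Gbps link, which is exactly the throughput-versus-latency trade-off described above.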
What’s happening in Storage
StorPool’s storage distribution solutions
Dell EMC Isilon’s presentation at Tech Field Day
Matt That IT Guy tells us about ioFabric
J Metz answers the question: when should an administrator use a storage area network (SAN) and when should they use network-attached storage (NAS)?
A lot of vendors claim to have distributed storage. Certainly many of them will sell a solution marketed as distributed. The issue is that much of what is marketed as distributed relies on legacy implementations, built with standard storage needs in mind. Capacity, reliability, and speed aren't hard to find these days. You know what is really hard to do? True distributed storage. That's where StorPool comes in.
ioFabric would be a great name for a company that makes clothes with embedded LEDs. The kind of stuff you see someone wearing around a mall, even though they don’t sell those clothes at any of the stores. The person who wears ioFabric always seems to be there when you are, so you start to wonder. […]
Gabriel Chapman of Thankfully the RAID is Gone comments: Bear with me folks, as this is going to be a two-parter, and yes I ramble when I write, which is also how I speak. Like many people who work in the technology field, I’m a bit of a pack rat when it comes to old […]
Jeff Wilson of Agnostic Computing comments: Right. So a couple weeks back I teased the hardware specs of the new storage array I built for the Daisetta Lab at home. My idea was to combine all types of disks (rotational 3.5″ & 2.5″ drives, SSDs, mSATAs, hell, I considered USB) into one tight, well-built storage […]
The time has come to take sides on the core question of storage for virtual servers: Do you want storage intelligence to live in the hypervisor or the array? Most administrators are already lining up on one side or the other, unintentionally casting their vote while the rest flounder. But the storage industry must wake up and embrace the divide.
Recently I got a Celerra NX4 storage array to meet my organization's storage needs, or rather, to solve a specific problem we were having with storage out of the box: slow data performance across the network and Windows Update. I quickly found out, by doing some simple math comparing what exists today against the maximum available storage on the NX4 (~900GB), that this move to SAN storage would indeed be something with multiple phases (read: disk shelves).
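The "simple math" here is just a capacity-runway comparison. Below is a minimal sketch of that calculation; only the ~900GB NX4 ceiling comes from the post, and the current-usage and growth figures are hypothetical placeholders.

```python
# Rough capacity-runway math behind the NX4 sizing decision.
# Only the ~900 GB maximum usable capacity comes from the post;
# current usage and monthly growth are assumed placeholder values.

nx4_max_gb = 900          # approx. usable ceiling on the base NX4
current_usage_gb = 600    # assumed: what exists today
monthly_growth_gb = 50    # assumed: observed growth rate

headroom_gb = nx4_max_gb - current_usage_gb
months_of_runway = headroom_gb / monthly_growth_gb

print(f"Headroom on the base array: {headroom_gb} GB")
print(f"Runway at current growth: ~{months_of_runway:.0f} months")
# If the runway is shorter than the planned life of the array,
# the deployment has to be phased: add disk shelves as usage grows.
```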
I've been thinking a bit about benchmarking and benchmarketing; pretty much everyone agrees that SPC is a very poor representation of real-world storage performance, but at the moment it's the only thing that most of the market supports, with one key exception. So I thought I'd come up with my own; let me introduce the SSAC.