I generally think of file systems as fairly glacial creatures. It took me a while before I even realized that the file system was a separate component of an OS. For most people, it’s completely opaque. It wasn’t until I started learning Linux in college that I realized there was even the possibility of (gasp) choice in file systems.
Perhaps big players like Microsoft and Apple are a bit to blame. Until the release of APFS last year, they were both using file systems that were around two decades old in NTFS and HFS Plus. That isn’t to say they were stagnant: while NTFS last had a major version number bump with XP, Microsoft has added new features with each release of Windows. And the nature of a file system tends to lean away from frequent upgrades, as it’s tied up with data integrity and retrieval, something most people don’t want to have to think about, much less worry about going wrong because of a buggy update.
Still, the landscape of both storage and IT needs has changed significantly over the past decade. Even my beloved ZFS wasn’t designed in an age of NVMe and Big Data. Stefan Radtke, Technical Director at Qumulo, makes an argument for why their particular file system is designed for modern IT’s challenges. I’m no expert on their solution, so I won’t comment on their specific implementation. But it did get me thinking about what the needs of an enterprise file system in 2018 should be.
One interesting aspect I had never considered to be the domain of a file system is analytics. The ability to natively capture relevant data, without a separate component sitting inline or otherwise disrupting the flow of data, is huge. The real advantage would be to have these native analytics hooks in an open source FS, one that isn’t strictly tied to a service contract or particular hardware. Of course, given the surfeit of open source FS projects out there, I wouldn’t be surprised if something like this already exists, just waiting for adoption.
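To make the contrast concrete, here’s a minimal sketch (my own illustration, not anything from Qumulo or a specific FS) of the bolt-on alternative: an external script that walks a tree and tallies disk usage by extension. Every query re-reads all the metadata, which is exactly the per-query cost that a file system with native analytics hooks avoids by keeping aggregates up to date as writes happen.

```python
import os
from collections import Counter

def scan_usage(root):
    """Walk a directory tree and tally bytes per file extension.

    This is the brute-force, external approach to storage analytics:
    each run touches every file's metadata, so the cost grows with
    the number of files and the data is stale the moment it finishes.
    """
    usage = Counter()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            ext = os.path.splitext(name)[1] or "<none>"
            try:
                usage[ext] += os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                pass  # file vanished or is unreadable mid-scan; skip it
    return usage
```

A file system that maintained these aggregates internally could answer the same question in constant time, without a scan racing against live modifications.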
Stefan Radtke comments:
Read more at: Attributes of a Modern File Storage System