Enrico Signoretti of Juku writes:
Data grows (steadily… and exponentially) and nothing gets thrown away. As data piles up, the concept of the “data lake” has taken shape. Even systems built for big data are starting to feel this problem, and system architects are beginning to think differently about storage.
I’m going to take Hadoop as an example because it gives a good idea of a hyperconverged infrastructure, doesn’t it?
Today, most Hadoop clusters are built on top of HDFS (the Hadoop Distributed File System). HDFS’s characteristics make it much cheaper, more reliable, and more scalable than many other solutions but, at the same time, it’s limited by the cluster design itself.
A great look at the types of convergence (or lack thereof) in the market. Hyperconvergence isn’t for everyone. Read on to find out what may work best for you.
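For readers less familiar with the HDFS layer Signoretti mentions, here is a minimal sketch of what “built on top of HDFS” looks like from application code: the cluster exposes one distributed filesystem, and reliability comes from block replication across nodes. The NameNode address and the `/data/example.txt` path below are placeholders, not anything from the original article.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteExample {
    public static void main(String[] args) throws Exception {
        // Point the client at the cluster's NameNode (placeholder address).
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:8020");

        // One FileSystem handle represents the whole distributed store,
        // regardless of how many DataNodes sit behind it.
        FileSystem fs = FileSystem.get(conf);

        // Write a file; HDFS splits it into blocks and spreads them
        // across the cluster's DataNodes.
        Path file = new Path("/data/example.txt");
        try (FSDataOutputStream out = fs.create(file, true)) {
            out.writeUTF("hello, data lake");
        }

        // Replication is the source of HDFS's reliability: each block
        // is stored on multiple nodes (3 copies is the common default).
        fs.setReplication(file, (short) 3);

        fs.close();
    }
}
```

The same design choice that makes this cheap and scalable is the limitation the excerpt alludes to: storage capacity and compute are bound to the same cluster of nodes, so you can’t grow one without growing the other.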