Enrico Signoretti of Juku writes:
Data grows steadily, even exponentially, and nothing gets thrown away. As data piles up, the concept of the “data lake” has taken shape. Even systems created for big data are starting to feel this problem, and system architects are beginning to think differently about storage.
I’m going to take Hadoop as an example because it gives a good idea of a hyperconverged infrastructure, doesn’t it?
Today, most Hadoop clusters are built on top of HDFS (the Hadoop Distributed File System). HDFS characteristics make this file system cheaper, more reliable, and more scalable than many other solutions but, at the same time, it is limited by the cluster design itself.
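The trade-off Signoretti points at can be made concrete with a little arithmetic: HDFS gets its reliability by replicating every block (three copies by default), which eats raw capacity, and because storage and compute live on the same nodes, growing capacity means growing the whole cluster. A minimal sketch, assuming the HDFS default replication factor of 3 and purely hypothetical node and disk counts:

```python
# Sketch: usable capacity of an HDFS cluster under block replication.
# Assumes the HDFS default replication factor of 3; the node count and
# disk sizes below are hypothetical illustration values.

def usable_capacity_tb(nodes: int, disks_per_node: int,
                       disk_tb: float, replication: int = 3) -> float:
    """Raw capacity divided by the replication factor.

    Every HDFS block is stored `replication` times across the cluster,
    so usable space shrinks accordingly -- and adding storage means
    adding whole (compute + storage) nodes, which is the coupling to
    the cluster design that the quote refers to.
    """
    raw_tb = nodes * disks_per_node * disk_tb
    return raw_tb / replication

# A hypothetical 10-node cluster with 12 x 4 TB disks per node:
# 480 TB raw, only a third of it usable at 3x replication.
print(usable_capacity_tb(10, 12, 4.0))  # -> 160.0
```

This is also why separating storage from compute (the shift the article goes on to discuss) is attractive: you can grow capacity without buying CPU you don’t need, and vice versa.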
A great look at the types of convergence (or lack thereof) in the market. Hyperconvergence isn’t for everyone. Read on to find out what may work best for you.