This is post 3 of 14 in the series “FutureWAN 18 Tech Talks.” Sometimes the combination of individual parts creates something greater than the whole and brings us to a breaking point. A moment of true gestalt. Imagine if there were a self-driving electric car waiting for you on every corner. At this […]
Tech Talks are sponsored conversations between industry-leading analysts and influential companies. These posts explore pressing issues in IT, examine fascinating use cases, and facilitate larger conversations.
These posts are written by our network of independent IT influencers according to topics selected by the sponsoring companies. Gestalt IT and these writers are paid by the company indicated in the header and sidebar.
To start a Tech Talk series of your own, contact us.
The recent Meltdown and Spectre attacks illustrate the problematic nature of modern computing systems. While the earlier Rowhammer attack allowed one process running in a virtual environment to attack another process running on the same hardware, the Meltdown and Spectre attacks are of a completely different class, enabling a process to read large amounts of information from another process’ memory space. This is possible in part because, within each host, CPU makers have embraced a scale-up approach. But what would happen with a more scale-out approach to the architecture?
As the features of SD-WAN mature, a new wave of organizations is looking to implement SD-WAN technology. Organizations are finding the security capabilities of SD-WAN, combined with features like transport independence, intelligent traffic steering, and built-in redundancy, too compelling to ignore for their next WAN refresh.
Do enterprise organizations care what they’re plugging into so long as they get secure, reliable, fast, and cheap public WAN connectivity? In most cases, I don’t think they do. Whether it’s traditional MPLS terminating right at the branch or the latest SD-WAN device, what’s important isn’t the type of technology, but the business requirements the technology meets.
In this post, we look at how Congruity360 is establishing itself as a new player in the enterprise IT landscape by leveraging an impressive legacy of managed service offerings.
Congruity360 takes reusing legacy infrastructure very seriously, building its new Fall River, MA data center in a historic cotton mill in the city. The granite structure provided an ideal place to build their state-of-the-art facility. We were treated to a tour of the facility at its grand opening, and spoke to the mayor of Fall River, Jasiel Correia II, about what it means to the city.
In our first post, I gave an overview of Congruity360’s corporate history and portfolio. The company has an impressive suite of managed services for organizations to leverage, providing vital business needs with a minimum of IT hassle. At Storage Field Day last week, the company zeroed in on one particular offering, one that’s increasingly relevant to modern enterprises: data migration.
Russ White considers the challenges of using GPU clusters in high-performance computing. Beyond the software needed to take advantage of them, the other challenge lies in the interconnect. Ethernet is the default standard here, but it requires additional protocol overhead. Russ sees PCI Express as a much more efficient solution, and considers a PCIe switch from Liqid that can dynamically compose infrastructure.
At Commvault GO 2017, Stephen Foskett sat down with Congruity360 COO Mark Shirman. They discuss how the company’s engineering talent allows them to differentiate their emerging infrastructure-services business. This builds on their legacy storage reseller business, but has expanded to providing services across the data center, all delivered from their brand-new 200,000 sq ft data center in Fall River, MA.
If the name Congruity360 isn’t immediately familiar, don’t worry, you haven’t fallen out of the IT loop. The company was established in 2017 after a merger of Congruity and KNJ. Congruity itself was only established in 2016 after a prior merger between Rockland IT and MSDI. All this adds up to a company with a long legacy in data management, but without a lot of name recognition.