Ray Lucchesi of RayOnStorage Blog comments:
At the Google I/O conference this week, Google revealed (see Google supercharges machine learning tasks …) that it has been designing and operating its own processor chips to optimize machine learning. The new chip is called a Tensor Processing Unit (TPU). According to Google, the TPU delivers an order of magnitude better performance per watt for machine learning than off-the-shelf GPUs and CPUs. TensorFlow is Google’s open-sourced machine learning software.
When it comes to machine learning, specialized hardware is still a necessity for doing the really complicated things fast.
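To see why raw hardware throughput matters so much here, a back-of-the-envelope sketch helps: the dense matrix multiplies at the heart of neural networks (the kind of tensor operations a TPU accelerates) rack up billions of arithmetic operations per layer. The function below is a hypothetical illustration, not from Google's materials, using the standard 2·m·k·n operation count for multiplying an m×k matrix by a k×n matrix.

```python
def matmul_flops(m, k, n):
    # Each of the m*n output elements requires k multiplies and k adds,
    # so a dense (m x k) @ (k x n) multiply costs 2*m*k*n operations.
    return 2 * m * k * n

# A single 1024x1024 matrix multiply, a plausible layer size in a
# neural network, already costs over two billion operations:
flops = matmul_flops(1024, 1024, 1024)
print(flops)  # 2147483648 operations for one layer-sized multiply
```

Stack dozens of such layers, run them over millions of training examples, and the appeal of purpose-built silicon that executes these multiply-accumulates more efficiently than a general-purpose CPU or GPU becomes clear.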
Read more at: TPU and hardware vs. software innovation (round 3)