Ray Lucchesi of RayOnStorage Blog comments:
At the Google I/O conference this week, Google revealed (see Google supercharges machine learning tasks …) that it has been designing and operating its own processor chips to optimize machine learning. The new chip is called a Tensor Processing Unit (TPU). According to Google, the TPU delivers an order of magnitude better performance per watt for machine learning than what's achievable with off-the-shelf GPUs/CPUs. TensorFlow is Google's open-sourced machine learning software.
When it comes to machine learning, specialized hardware is still a necessity for doing some of the really complicated things fast.
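As a back-of-the-envelope illustration of what "an order of magnitude better performance per watt" means, here is a small sketch. The throughput and power numbers below are hypothetical placeholders, not Google's published TPU figures:

```python
# Hypothetical performance-per-watt comparison. The ops/sec and wattage
# figures are illustrative only -- NOT Google's published TPU numbers.

def perf_per_watt(ops_per_second, watts):
    """Operations per joule: how much compute each unit of energy buys."""
    return ops_per_second / watts

gpu = perf_per_watt(ops_per_second=1e12, watts=250)   # hypothetical GPU
tpu = perf_per_watt(ops_per_second=4e12, watts=100)   # hypothetical TPU

print(f"GPU: {gpu:.2e} ops/J")
print(f"TPU: {tpu:.2e} ops/J")
print(f"TPU advantage: {tpu / gpu:.0f}x")  # 10x = "an order of magnitude"
```

The point of the metric is that a chip can win on efficiency without winning on raw speed: halving power counts the same as doubling throughput.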
Read more at: TPU and hardware vs. software innovation (round 3)