Ray Lucchesi of RayOnStorage Blog comments:
At the Google I/O conference this week, Google revealed (see Google supercharges machine learning tasks …) that it has been designing and operating its own processor chips to optimize machine learning. The new chip is called a Tensor Processing Unit (TPU). According to Google, the TPU delivers an order of magnitude better power efficiency for machine learning than is achievable with off-the-shelf GPUs and CPUs. TensorFlow is Google's open-sourced machine learning software.
When it comes to machine learning, specialized hardware is still a necessity for doing the really complicated things fast.
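The workloads a TPU accelerates are, at their core, large dense tensor operations such as the matrix multiplies inside neural-network layers. As a rough illustration (not Google's implementation, and using plain NumPy rather than TensorFlow), here is the kind of fully connected layer computation that dominates inference; the function name `dense_layer` and the shapes are illustrative assumptions:

```python
import numpy as np

def dense_layer(x, w, b):
    """One fully connected layer: y = relu(x @ w + b).
    The matrix multiply x @ w is the tensor op that TPU-class
    hardware is built to run at high throughput per watt."""
    return np.maximum(x @ w + b, 0.0)

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))   # one input sample with 4 features
w = rng.normal(size=(4, 3))   # weight matrix: 4 inputs -> 3 outputs
b = np.zeros(3)               # bias vector

y = dense_layer(x, w, b)
print(y.shape)  # (1, 3)
```

On a CPU or GPU this multiply runs on general-purpose arithmetic units; a TPU dedicates silicon to exactly this pattern, which is where the claimed efficiency gain comes from.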
Read more at: TPU and hardware vs. software innovation (round 3)