Ray Lucchesi of RayOnStorage Blog comments:
At the Google I/O conference this week, Google revealed (see Google supercharges machine learning tasks …) that it has been designing and operating its own processor chips to accelerate machine learning. The new chip is called a Tensor Processing Unit (TPU). According to Google, the TPU delivers an order of magnitude better power efficiency for machine learning than what is achievable with off-the-shelf GPUs and CPUs. TensorFlow is Google's open-source machine learning software.
When it comes to machine learning, dedicated hardware is still a necessity for doing some of the really complicated things fast.
Read more at: TPU and hardware vs. software innovation (round 3)