Ray Lucchesi of RayOnStorage Blog comments:
At the Google I/O conference this week, Google revealed (see Google supercharges machine learning tasks …) that it has been designing and operating its own processor chips to optimize machine learning. The new chip is called a Tensor Processing Unit (TPU). According to Google, the TPU delivers an order of magnitude better performance per watt for machine learning than what's achievable with off-the-shelf GPUs and CPUs. TensorFlow is Google's open-sourced machine learning software.
When it comes to machine learning, dedicated hardware is still a necessity for doing some of the really complicated things fast.
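To make the point concrete, the workload a TPU accelerates is dominated by dense tensor operations such as matrix multiplication, the core of neural-network inference. A minimal plain-Python sketch of that operation (the matrices here are illustrative, not from the article):

```python
# Minimal sketch: the dense matrix multiply at the heart of neural-network
# inference -- the kind of tensor operation a TPU is built to accelerate
# in bulk, at far better performance per watt than a general-purpose CPU.
def matmul(a, b):
    """Multiply matrix a (m x k) by matrix b (k x n), returning m x n."""
    k = len(b)
    n = len(b[0])
    return [[sum(row[i] * b[i][j] for i in range(k)) for j in range(n)]
            for row in a]

weights = [[1, 2], [3, 4]]      # hypothetical 2x2 layer weights
inputs = [[5], [6]]             # a single input column vector
print(matmul(weights, inputs))  # -> [[17], [39]]
```

A real deployment would express this in TensorFlow, which compiles such operations down to the accelerator; the toy version above just shows the arithmetic being offloaded.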
Read more at: TPU and hardware vs. software innovation (round 3)