I’m not going to lie: when I saw the headline of this article, I thought it was going to argue that AMD is becoming a force in the ML/AI workload market. Instead, it makes the much more cogent case that FPGAs and ASICs will supplement the industry’s reliance on GPUs for AI processing. Organizations with high-volume, specialized workloads will be the quickest to supplant GPUs with more specialized silicon. But the general-purpose parallel processing capabilities of GPUs should keep them in use for AI workloads for the foreseeable future. It will be interesting to see what the lower costs and higher efficiencies of FPGAs and ASICs enable in this emerging field.
Kurt Marko comments:
Note that the entire market for AI acceleration hardware is expanding, so these new forms are not so much displacing GPUs, at least initially, as supplementing them.