Memory Consumption of Neural Networks

One of the biggest bottlenecks in deep neural networks is memory. A DRAM device holds only so much, and DNNs routinely push that limit. Dig deeper into the problem, though, and it becomes clear that a neural network's memory requirements differ at each stage of the data pipeline.
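To make the stage-to-stage difference concrete, here is a back-of-envelope sketch (not from the article; the layer widths, batch size, and optimizer choice are illustrative assumptions). Training must hold the weights, their gradients, optimizer state, and every layer's activations for backpropagation, while inference needs only the weights and the activations of the layer currently being computed:

```python
# Rough memory estimate for a small fully connected network.
# All sizes below are hypothetical, chosen only for illustration.

BYTES_PER_FLOAT = 4                # fp32
layers = [784, 4096, 4096, 10]     # hypothetical layer widths
batch = 256                        # hypothetical batch size

# Parameters: weight matrix plus bias vector per layer pair.
params = sum(a * b + b for a, b in zip(layers, layers[1:]))
# Activations: every layer's output, kept for backpropagation.
acts = sum(batch * n for n in layers[1:])

weights_mb = params * BYTES_PER_FLOAT / 2**20
# Assuming Adam: weights + gradients + two moment tensors = 4 copies,
# plus all stored activations.
train_mb = (params * 4 + acts) * BYTES_PER_FLOAT / 2**20
# Inference: weights plus one activation buffer (widest layer).
infer_mb = (params + batch * max(layers)) * BYTES_PER_FLOAT / 2**20

print(f"weights only:        {weights_mb:7.1f} MiB")
print(f"training footprint:  {train_mb:7.1f} MiB")
print(f"inference footprint: {infer_mb:7.1f} MiB")
```

Even for this toy network the training footprint is several times the inference footprint, which is why out-of-memory errors bite mostly during training.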

Image: Sulagna Saha (c) Gestalt IT

Frank Denneman, Chief Technologist at VMware, has an illuminating article on this, in which he explains the memory consumption of neural networks during the training and inference stages. In the article, titled “TRAINING VS INFERENCE – MEMORY CONSUMPTION BY NEURAL NETWORKS,” he writes,

What exactly happens when an input is presented to a neural network, and why do data scientists mainly struggle with out-of-memory errors?

Read the rest of his article, “TRAINING VS INFERENCE – MEMORY CONSUMPTION BY NEURAL NETWORKS,” to find the answer.

About the author

Sulagna Saha

Sulagna Saha is a writer at Gestalt IT where she covers all the latest in enterprise IT. She has written widely on a range of topics, including the hottest technologies in Cloud, AI, Security and more.

A writer by day and reader by night, Sulagna can be found busy with a book or browsing through a bookstore in her free time. She also likes cooking fancy things on leisurely weekends. Traveling and movies are other things high on her list of passions. Sulagna works out of the Gestalt IT office in Hudson, Ohio.