Artificial Intelligence (AI) is a rapidly growing field. In its “The State of AI in 2020” report, McKinsey found that 50% of the organizations surveyed had adopted AI in at least one business function. BrainChip developed the Akida Event-Domain Neural Processor to enable AI to run at an organization’s edge, a location AI typically couldn’t reach. BrainChip introduced and demonstrated the Akida solution at AI Field Day in November.
AI Decisions Come with Intense Requirements
A typical AI system requires thousands of data points to train its algorithm successfully, and a powerful system to process the model. AI does not naturally lend itself to edge computing. Sensors can reside at the edge, but they require network connectivity to stream data back to a central system for processing and decision making. AI at the edge therefore demands a balance of low power consumption and real-time performance, all while operating within tight constraints on memory capacity and bandwidth.
Neuromorphic Architecture Overcomes Edge AI Challenges
BrainChip developed the Akida Neural Processor with the edge in mind. The Akida AKD 1000 adopts an architecture modeled on the human brain, known as a neuromorphic architecture. Neuromorphic chips are designed to mimic the thought patterns and processing of the human brain. The AKD 1000 only consumes power when an event causes an electronic spike at a digital synapse; if there is no event, there is no computation. It reduces memory requirements by using only 1, 2, or 4 bits for weights and activations, significantly smaller than the 8, 16, or 32 bits typically required by other AI processors. Akida remains accurate despite the lower precision: in a head-to-head test using the same model, Akida’s accuracy was within 3% of a standard 32-bit AI processor while using significantly less power.
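To make the memory savings concrete, here is a minimal sketch in plain NumPy of symmetric 4-bit weight quantization. This is an illustrative assumption, not BrainChip’s actual quantization scheme; the function names and the scaling choice are hypothetical:

```python
import numpy as np

# Hypothetical illustration (not BrainChip's actual scheme): symmetric
# linear quantization of float32 weights down to 4-bit signed integers.
def quantize_4bit(weights: np.ndarray):
    # 4-bit signed range is [-8, 7]; scale so the largest weight maps into it.
    scale = np.abs(weights).max() / 7.0
    q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

weights = np.random.randn(1024, 1024).astype(np.float32)
q, scale = quantize_4bit(weights)

# float32 needs 32 bits per weight; packed 4-bit values need only 4.
print(f"float32 storage: {weights.size * 4 / 1024:.0f} KiB")
print(f"4-bit storage:   {weights.size * 0.5 / 1024:.0f} KiB")  # 8x smaller
print(f"max abs error:   {np.abs(weights - dequantize(q, scale)).max():.4f}")
```

The trade-off is exactly the one the accuracy comparison above describes: each weight loses precision, but storage and bandwidth shrink by 8x relative to float32.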
Bring Your Existing Toolset
BrainChip understands that one of the biggest challenges with neuromorphic chips is software and tooling. There are many digital neural network tools, but very few frameworks for neuromorphic chips. BrainChip’s Akida processors don’t require new tooling; you can use existing solutions such as TensorFlow. A CNN2SNN tool translates an existing Convolutional Neural Network (CNN) into a Spiking Neural Network (SNN), and the BrainChip hardware and runtime environment handle that translation, keeping you from having to program a spiking network directly. The conversion from CNN to SNN is hidden from the user, so there is nothing new to learn to leverage the system.
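As a rough sketch of that workflow: you build and train an ordinary Keras CNN, then quantize it and convert it for Akida. The `cnn2snn` import and the `quantize`/`convert` names and parameters below are assumptions based on this description of BrainChip’s tooling, so treat them as pseudocode to check against the official documentation:

```python
import tensorflow as tf
# cnn2snn ships with BrainChip's SDK; the quantize/convert names and
# parameters below are assumptions about its API, not verified here.
from cnn2snn import quantize, convert

# A small, ordinary Keras CNN -- nothing Akida-specific in its definition.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
# ... train with model.fit() as usual ...

# Quantize weights/activations to the low bit widths Akida uses, then
# convert the quantized CNN into an Akida-compatible spiking model.
model_q = quantize(model, weight_quantization=4, activ_quantization=4)
akida_model = convert(model_q)
```

The key point is that the developer-facing steps stay inside the TensorFlow ecosystem; the spiking conversion is a single call rather than a new programming model.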
One-Shot Learning
The easiest way to appreciate Akida’s power at the edge is to see it in action. BrainChip demonstrated this at AI Field Day with a camera attached to an Akida neural processor.
Anil Mankar first trained the model to recognize the camera’s background using a single data point. He then placed a toy tiger in front of the camera and once again trained the model with one shot. The team could turn the tiger toy upside down, point it at different angles, and even show a picture of a completely different tiger, and the Akida processor detected a tiger each time.
This sort of accurate inference usually requires hundreds of training samples with a conventional CNN, but BrainChip accomplishes it with a single shot. The demo went further: the team repeated the same one-shot training with an elephant toy. With just two shots, the Akida solution could tell that a tiger is not an elephant, no matter how either toy was oriented.
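BrainChip didn’t detail Akida’s internal learning rule during the demo, but the general idea behind one-shot classification can be sketched: extract an embedding for a single example, store it as a class prototype, and classify new inputs by similarity to the stored prototypes. Everything below, including the `OneShotClassifier` class and the placeholder `embed` function, is a hypothetical illustration rather than Akida’s implementation:

```python
import numpy as np

# Generic one-shot classification sketch (not Akida's learning rule):
# one example per class becomes a prototype; new inputs are classified
# by cosine similarity to the prototypes.
class OneShotClassifier:
    def __init__(self):
        self.prototypes: dict[str, np.ndarray] = {}

    def learn(self, label: str, embedding: np.ndarray) -> None:
        # "One shot": a single normalized example becomes the class prototype.
        self.prototypes[label] = embedding / np.linalg.norm(embedding)

    def predict(self, embedding: np.ndarray) -> str:
        e = embedding / np.linalg.norm(embedding)
        # Highest cosine similarity (dot product of unit vectors) wins.
        return max(self.prototypes, key=lambda lbl: float(self.prototypes[lbl] @ e))

# Placeholder feature extractor; a real system would use a trained backbone.
def embed(image: np.ndarray) -> np.ndarray:
    return image.reshape(-1)

clf = OneShotClassifier()
clf.learn("background", embed(np.random.rand(32, 32)))
clf.learn("tiger", embed(np.random.rand(32, 32)))
clf.learn("elephant", embed(np.random.rand(32, 32)))
print(clf.predict(embed(np.random.rand(32, 32))))
```

The sketch shows why a second shot is all it takes to separate the tiger from the elephant: each new class adds one prototype, and classification is just a comparison against the stored set.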
To learn more about BrainChip and how they solve issues with AI at the edge, check them out at AI Field Day.