Scores of applications nonetheless call for lightning-fast reactions. One of them is autonomous driving: if a ball rolls into the road, closely followed by a child, quick decisions are a matter of life and death. As a rule, the tasks that artificial intelligence is called upon to perform – detecting a child in the road, say – are broken down into a great many small computing operations, which are then processed one after another. A bottleneck. Then add another choke point to the mix: the data needed to calculate these numerous small steps has to be loaded onto the processor from a larger memory each time, and the relevant results saved back to the same place. However, there is a limit to how much data can be passed back and forth between the memory and the processor per unit of time.
Neuromorphic hardware: Goodbye, bottlenecks!
Neuromorphic hardware does not merely widen these bottlenecks – it avoids them in the first place. This is done by building special circuits for each computing operation (artificial neural networks mainly use three: addition, multiplication and a non-linear function) and duplicating them many times over. Where a classical chip would work through these operations one by one, the “mini-processors” execute them simultaneously. Moreover, in-memory computing (and near-memory computing) means that part of the data is stored directly where it is required. In other words, the data is processed either in the memory itself or in its immediate vicinity. Borrowed from biology, this concept is not only faster, but also much more energy-efficient, shifting the intelligence exactly to where it is needed – into the end device. This is also known as edge AI.
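The three core operations mentioned above – multiplication, addition and a non-linear function – can be sketched for a single artificial neuron. This is a minimal illustration in Python (all names and values are ours, not from any particular chip); on neuromorphic hardware, each of these steps would have its own dedicated circuit, replicated so that many neurons compute at once.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron, built from the three core operations."""
    # 1) Multiplication: weight each input signal
    products = [x * w for x, w in zip(inputs, weights)]
    # 2) Addition: sum the weighted inputs plus a bias
    total = sum(products) + bias
    # 3) Non-linear function: here a sigmoid activation
    return 1.0 / (1.0 + math.exp(-total))

# Example: three inputs, three weights, one bias
print(neuron([0.5, -1.2, 0.3], [0.8, 0.1, -0.4], 0.2))
```

A classical processor evaluates the multiplications in the list comprehension sequentially; neuromorphic hardware performs them in parallel circuits, which is where the speed-up comes from.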
Analog or digital?
There are three basic approaches to implementing neuromorphic hardware and representing the signals. The first question is: analog or digital? Despite being harder to implement, the analog approach is probably also the more efficient one. Experimental chips demonstrate that analog processing requires less energy – a crucial factor if the intelligence is to be shifted into the device. While the analog approach works directly with continuous signals (currents and voltages), the signals have to be discretized in time and value for the digital approach. This means an additional step. Digital implementations have the advantage that it is easier to compensate for any interference. However, innovative training methods are progressively making interference less of an issue in the analog scenario, allowing both approaches to remain relevant. With a focus on hardware/software co-design, researchers at Fraunhofer IIS give thought to the hardware when developing algorithms and vice versa.
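The extra step the digital approach requires – discretizing a continuous signal in both time and value – can be sketched as sampling followed by quantization. The following Python snippet is purely illustrative (the function names, sample rate and level count are our assumptions, not a description of any specific converter):

```python
import math

def discretize(signal, duration, sample_rate, levels, v_min=-1.0, v_max=1.0):
    """Discretize a continuous signal: sample it in time, quantize it in value."""
    n = int(duration * sample_rate)
    step = (v_max - v_min) / (levels - 1)   # spacing between quantization levels
    samples = []
    for i in range(n):
        t = i / sample_rate                 # discretization in time (sampling)
        v = signal(t)
        # discretization in value: snap to the nearest allowed level
        q = round((v - v_min) / step) * step + v_min
        samples.append(q)
    return samples

# A 1 Hz sine wave, sampled at 8 Hz and coarsely quantized to 4 levels
print(discretize(lambda t: math.sin(2 * math.pi * t), 1.0, 8, 4))
```

An analog circuit would process the sine wave directly as a voltage; the digital path pays for this conversion step but gains easier error correction.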
How about spiking?
Spiking neural networks are another option. Whereas signals in classical deep neural networks are emitted at regular intervals – providing a constant data stream composed of either digital or analog signals – energy-efficient, robust spiking neural networks rely on binary pulses. Rather than sending a continuous signal, the neurons “fire” short pulses, or spikes, at irregular intervals, transmitting pulse-modulated signals to the other neurons. Processing can still be analog or digital, however.
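The firing behavior described above can be illustrated with a leaky integrate-and-fire model, a common textbook abstraction of a spiking neuron (the parameter values below are illustrative choices of ours): the membrane potential integrates the input, leaks over time, and emits a binary spike whenever it crosses a threshold.

```python
def lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: returns a binary spike train."""
    potential = 0.0
    spikes = []
    for i_t in input_current:
        potential = leak * potential + i_t   # leaky integration of the input
        if potential >= threshold:
            spikes.append(1)                 # fire a short pulse (spike) ...
            potential = 0.0                  # ... and reset the potential
        else:
            spikes.append(0)                 # stay silent: no data transmitted
    return spikes

# A constant input produces spikes only every few time steps
print(lif_neuron([0.4] * 10))  # → [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

Between spikes, nothing is transmitted at all – which is exactly where the energy efficiency of this scheme comes from.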
Neuromorphic hardware: Perfectly customized
The exact configuration of the neuromorphic hardware has less to do with the application and more to do with the technical specifications. If the emphasis is on flexibility, digital circuits are ideal; if energy efficiency is key, it is worth considering the analog approach. As an example: if the input consists of analog structure-borne sound signals (from vibration measurements, say), it is advisable to try the analog approach first. In the case of digital cameras, where both the input and output signals are digital, the digital approach is the logical first step. Meanwhile, the research teams at Fraunhofer IIS have acquired formidable expertise in all three approaches – digital, analog and spiking neural networks – developing customized neuromorphic hardware that best meets the different requirements.