What is neuromorphic hardware?
When traditional computers (with von Neumann architectures) execute Deep Neural Networks (DNNs), they suffer from the von Neumann bottleneck: computing performance is limited by the rate at which data can be transferred between the external memory unit and the processing unit.
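As a rough illustration of this effect, the following Python sketch uses assumed, hypothetical figures for memory bandwidth and peak compute to show how the memory interface, rather than the arithmetic units, bounds the attainable MAC throughput of a fully-connected layer with no weight reuse:

```python
# Rough roofline-style illustration of the von Neumann bottleneck.
# All figures below are assumptions chosen for illustration only.

DRAM_BANDWIDTH   = 25.6e9  # assumed external memory bandwidth [bytes/s]
PEAK_MACS        = 2.0e12  # assumed peak throughput of the MAC units [MAC/s]
BYTES_PER_WEIGHT = 1       # assume 8-bit quantized weights

def attainable_macs_per_s(macs_per_byte: float) -> float:
    """Throughput is capped either by the compute units or by how fast
    operands can be streamed from the external memory."""
    return min(PEAK_MACS, macs_per_byte * DRAM_BANDWIDTH)

# A fully-connected layer without weight reuse performs one MAC per
# weight fetched, i.e. 1 MAC per byte at 8-bit precision.
macs_per_byte = 1 / BYTES_PER_WEIGHT
print(f"attainable: {attainable_macs_per_s(macs_per_byte):.2e} MAC/s, "
      f"compute peak: {PEAK_MACS:.2e} MAC/s")
# -> roughly 2.6e10 MAC/s attainable, almost two orders of magnitude
#    below the assumed compute peak: the memory interface dominates.
```

Under these assumptions the arithmetic units sit idle most of the time, which is precisely the transfer that co-locating memory and processing aims to remove from the inner loop.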
Neuromorphic hardware refers to brain-inspired computers or components that model biological or artificial neural networks, consisting of highly connected, parallel synthetic neurons and synapses. Current neural network architectures such as DNNs entail high computational complexity and power consumption. Hardware architectures that are efficient in terms of computational performance, power consumption and chip area are therefore a key element for the widespread deployment of neural networks in embedded devices.
High-performance neuromorphic hardware architectures tailored for efficient computation of neural networks apply massively parallel processing and co-location of memory and processing. With these approaches, the calculations required by complex neural networks can be performed faster and with less power than on von Neumann architectures. Since neural networks exhibit highly regular structures, massive parallelism can be exploited by replicating the same type of computational unit/cell. Architectural approaches known from parallel computing can therefore be applied:
- Single-Instruction Multiple Data (SIMD): multiple parallel processing elements (e.g. MAC units) perform the same operation on different pieces of distributed data simultaneously (data-level parallelism).
- Very Long Instruction Word (VLIW): several, not necessarily identical, instructions are executed in parallel (instruction-level parallelism).
- Systolic arrays: dataflow architectures based on a network of tightly-coupled homogeneous processing elements. Computations are performed in a pipelined manner by passing data through the systolic array, as shown in the sketch below.
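To make the systolic-array item more concrete, here is a minimal behavioural sketch in Python (a simulation, not a hardware description), assuming an output-stationary dataflow: each processing element (PE) accumulates one output of a matrix multiplication, while operands enter at the array edges skewed by one cycle per row/column and are passed only between neighbouring PEs. The function name `systolic_matmul` and the chosen dataflow are illustrative assumptions, not taken from any particular chip.

```python
import numpy as np

def systolic_matmul(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Cycle-by-cycle simulation of an output-stationary systolic array
    computing C = A @ B with one PE per output element C[i, j]."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m))
    a_reg = np.zeros((n, m))  # A value held by each PE, passed rightwards
    b_reg = np.zeros((n, m))  # B value held by each PE, passed downwards
    cycles = k + n + m - 2    # time until the last skewed operands meet
    for t in range(cycles):
        # Update PEs from bottom-right to top-left so every PE still sees
        # the value its neighbour held in the *previous* cycle.
        for i in reversed(range(n)):
            for j in reversed(range(m)):
                # Operands enter at the array edges, skewed by i resp. j
                # cycles, and otherwise arrive from the left/upper PE.
                a_in = a_reg[i, j - 1] if j > 0 else \
                    (A[i, t - i] if 0 <= t - i < k else 0.0)
                b_in = b_reg[i - 1, j] if i > 0 else \
                    (B[t - j, j] if 0 <= t - j < k else 0.0)
                C[i, j] += a_in * b_in   # local MAC, result stays in the PE
                a_reg[i, j] = a_in       # forward A to the right neighbour
                b_reg[i, j] = b_in       # forward B to the lower neighbour
    return C

# Check the pipelined result against a reference matrix multiplication.
A = np.arange(6, dtype=float).reshape(2, 3)
B = np.arange(12, dtype=float).reshape(3, 4)
assert np.allclose(systolic_matmul(A, B), A @ B)
```

In hardware, each PE would be a small MAC unit with local registers; no PE accesses a shared memory bus during the computation, which is what lets such arrays combine massive parallelism with co-located storage of operands.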