AI applications on edge devices can no longer rely on ever-faster CPUs (central processing units) and GPUs (graphics processing units): Moore's law is reaching its limits, and new ideas and architectures are required. To meet this need, neuromorphic hardware has been developed to enable highly efficient processing of sensor data on edge devices.
In general terms, neuromorphic hardware refers to hardware designed to run deep neural networks efficiently, with an architecture inspired by the human brain. Embedding AI directly on edge devices and processing data locally offers advantages over conventional computing architectures, among them lower latency, higher energy efficiency and better data protection. Neuromorphic hardware carries out many calculations in parallel, so it works more efficiently and delivers results faster.
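To illustrate the parallel, brain-inspired style of computation mentioned above, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron population, a model commonly used in neuromorphic computing. All neurons are updated in one vectorized step, loosely mirroring how neuromorphic hardware processes them simultaneously. The function name and parameter values are illustrative only and do not describe any specific chip:

```python
import numpy as np

def lif_step(v, input_current, leak=0.9, threshold=1.0):
    """One timestep of a leaky integrate-and-fire neuron population.

    Illustrative sketch: every neuron in the population is updated
    in a single vectorized operation rather than one at a time.
    """
    v = leak * v + input_current   # membrane potential leaks, then integrates input
    spikes = v >= threshold        # neurons crossing the threshold fire
    v = np.where(spikes, 0.0, v)   # fired neurons reset to zero
    return v, spikes

# Simulate a small population of four neurons receiving constant input
v = np.zeros(4)
currents = np.array([0.5, 0.2, 1.2, 0.0])
for _ in range(3):
    v, spikes = lif_step(v, currents)
```

Because the update is expressed over the whole population at once, the same description maps naturally onto hardware that evaluates all neurons in parallel rather than sequentially.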
We are developing highly efficient, customized integrated circuits for AI accelerator IPs that enable demanding applications. Our co-design framework lets customers benefit from shorter development times. In this way, we offer a solution that brings energy-efficient AI to edge devices with secure and rapid data processing. Possible use cases lie in domains such as audio technology, Industry 4.0, wearables and autonomous driving.