How it works
The mobile processing unit uses a convolutional neural network (CNN) to estimate the current position from a camera image. First, thousands of camera images and their corresponding positions are captured automatically. Using this data, a pre-trained network is fine-tuned to adapt it to the target environment. The trained network then runs on the target platforms to determine the positions of new images. To prevent the system from degrading over time, updated data is continuously collected and sent to a central processing unit, which retrains the network and distributes it back to the mobile processors.
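The core of the approach is the fine-tuning step: a pre-trained backbone produces image features, and a regression head is adapted to map those features to positions in the target environment. As a minimal, hedged sketch of that idea (not CNNLok's actual implementation), the example below stands in random fixed vectors for the backbone's feature outputs and fits only a linear head by least squares:

```python
import numpy as np

# Illustrative sketch of the fine-tuning step; all names are hypothetical.
# A pre-trained CNN backbone would map each camera image to a feature
# vector; random fixed vectors stand in for those backbone outputs here.
rng = np.random.default_rng(0)

n_images, n_features = 500, 64
features = rng.normal(size=(n_images, n_features))   # stand-in backbone outputs
true_w = rng.normal(size=(n_features, 2))            # hidden feature-to-position map
# Automatically captured training labels: noisy (x, y) positions.
positions = features @ true_w + 0.01 * rng.normal(size=(n_images, 2))

# "Fine-tune" only the regression head: solve a least-squares problem for
# a linear layer mapping features to 2-D positions in the target environment.
head, *_ = np.linalg.lstsq(features, positions, rcond=None)

# Determine the position of a new image from its backbone features.
new_features = rng.normal(size=(1, n_features))
predicted_xy = new_features @ head
print(predicted_xy.shape)  # (1, 2): one (x, y) estimate
```

In a real deployment the head (and often later backbone layers) would be trained by gradient descent in a deep-learning framework; the least-squares fit above only illustrates why relatively little target-environment data suffices when the backbone is already trained.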
System components of CNNLok
The mobile processing unit is typically either a simple smartphone or an ARM- or Intel-based single-board computer with a standard camera. The platform's flexibility enables many application scenarios that standard, infrastructure-based solutions cannot cover. Adapted motion models and dedicated pre-processing of the collected data allow new data to be fed into an existing positioning system. Because continuously learning about the environment is computationally intensive, the system must be connected to additional hardware via a network, a docking station, or similar means. Depending on how dynamic the area is, a central processor with powerful off-the-shelf deep-learning hardware, such as graphics cards or dedicated vector processors, may additionally be required. It takes over the duties of the mobile processing units at appropriate times, for example while they are being charged.
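The division of labour described above, mobile units collecting samples on the move and handing them to a central processor for retraining while docked, can be sketched as a simple handoff cycle. The class and method names below are illustrative assumptions, not CNNLok's actual API:

```python
# Hypothetical sketch of the collect / retrain / distribute cycle;
# names and structure are assumptions for illustration only.

class CentralProcessor:
    """Central unit with deep-learning hardware; accumulates data and retrains."""

    def __init__(self):
        self.dataset = []      # all (image, position) samples received so far
        self.version = 0       # version of the most recently trained network

    def ingest(self, samples):
        self.dataset.extend(samples)

    def retrain(self):
        # Placeholder for GPU-based further training on the grown dataset.
        self.version += 1
        return self.version


class MobileUnit:
    """Smartphone or single-board computer collecting data while positioning."""

    def __init__(self):
        self.model_version = 0
        self.buffer = []       # (image, position) samples collected on the move

    def collect(self, image, position):
        self.buffer.append((image, position))

    def dock(self, central):
        # While charging, hand collected samples to the central processor
        # and pull back the freshly retrained network.
        central.ingest(self.buffer)
        self.buffer.clear()
        self.model_version = central.retrain()


central = CentralProcessor()
unit = MobileUnit()
unit.collect("img_001.jpg", (2.5, 7.1))
unit.collect("img_002.jpg", (3.0, 7.4))
unit.dock(central)
print(unit.model_version, len(central.dataset))  # 1 2
```

The point of the cycle is that the expensive step (retraining) never runs on the mobile hardware; the mobile units only buffer data and swap in the updated network when connectivity or a docking station is available.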