Neural 3D Reconstruction and View Interpolation

Neural reconstruction methods such as NeRFs (Neural Radiance Fields) or Gaussian Splatting optimize a 3D representation such that its renderings resemble a set of input photos as closely as possible. In this way, they achieve high visual fidelity.

We have compared their quality with traditional photogrammetry methods. To this end, we alternate (flicker) between a rendering obtained from the 3D representation and the corresponding original photo. The method showing the smallest difference between rendering and photo is considered best.
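The flicker comparison itself is visual, but the same idea can be expressed numerically. The following sketch is purely illustrative (it is not the evaluation code used for our comparisons): it computes the mean absolute per-pixel difference between a rendering and the corresponding photo, assuming both are RGBA buffers of identical size. A smaller value indicates a more faithful reconstruction.

```typescript
// Illustrative sketch only: a numeric counterpart to the visual flicker test.
// Both inputs are assumed to be RGBA pixel buffers of identical dimensions,
// e.g. obtained via CanvasRenderingContext2D.getImageData(...).data.
function meanAbsoluteDifference(
  rendering: Uint8ClampedArray,
  photo: Uint8ClampedArray
): number {
  if (rendering.length !== photo.length) {
    throw new Error("Images must have identical dimensions");
  }
  let sum = 0;
  let count = 0;
  // Compare the R, G and B channels; skip the alpha channel (every 4th byte).
  for (let i = 0; i < rendering.length; i += 4) {
    for (let c = 0; c < 3; c++) {
      sum += Math.abs(rendering[i + c] - photo[i + c]);
      count++;
    }
  }
  return sum / count; // 0 = identical, 255 = maximally different
}
```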

The following video shows a corresponding example. The right side flickers between the original photo and the photogrammetry rendering. The background of the photogrammetry image is black because it could not be reconstructed. The left side flickers between the original photo and the neural reconstruction. Arrows highlight some notable differences.

The following example shows that neural methods reproduce fine details more faithfully.

Web-Rendering Using WebGPU

Our Mission

Unfortunately, these neural methods are not directly compatible with existing mesh-based workflows. Moreover, their rendering is more compute-intensive. We address this challenge by researching how to combine neural reconstruction with mesh-based representations.

We aim to create visual digital twins in 3D for complex objects. Our goal is to reproduce their real appearance as closely as possible. To this end, we reconstruct both shape and texture, including view-dependent appearance.


Our Technology

We have developed our own reconstruction pipeline that converts multiple input photos into a 3D representation based on neural radiance fields. To meet the real-time requirements of web applications, we then convert this representation into a mesh-based digital twin.

By leveraging our neural representation to capture view-dependent effects, our method can reproduce real-world objects more accurately than traditional photogrammetry methods.

The following video shows a corresponding example. The right side flickers between the photogrammetry rendering and an original photo taken from the same camera-array position. The left side flickers between our reconstruction and the same original photo. Arrows highlight some notable differences. The smaller the differences, the more faithfully the method reproduces the real object.

The following videos compare renderings of a virtual camera path around the reconstructed 3D model at different levels of detail.

The reconstructed objects can be viewed directly in the web browser. We have chosen WebGPU over WebGL as our rendering backend, which provides the following advantages (a minimal setup sketch follows the list):

  • Support for larger texture sizes, enabling high-resolution assets
  • Improved performance
  • New features (e.g., compute shaders)
  • Modern, low-level API for fine-grained control over our custom rendering pipeline
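To illustrate the last two points, the sketch below shows how a WebGPU device is requested in TypeScript and how the adapter limits relevant for large textures can be queried. It is a minimal, illustrative setup rather than our production pipeline, and it assumes the GPU* type declarations from the @webgpu/types package are available.

```typescript
// Minimal, illustrative WebGPU setup (requires the @webgpu/types declarations).
async function initWebGPU(): Promise<GPUDevice> {
  if (!("gpu" in navigator)) {
    // At the time of writing, this typically means a non-Chromium browser.
    throw new Error("WebGPU is not available in this browser");
  }
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) {
    throw new Error("No suitable GPU adapter found");
  }
  // The adapter reports its limits; large texture dimensions are what allow
  // high-resolution assets to be used.
  console.log("Max 2D texture size:", adapter.limits.maxTextureDimension2D);

  // Request a device. Compute shaders are part of the core WebGPU feature set,
  // so no extra extension is needed to use them.
  const device = await adapter.requestDevice();
  return device;
}
```

If navigator.gpu is missing or no adapter is returned, a renderer can show an informational message instead of failing silently.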

WebGPU currently requires a Chromium-based browser; support in other browsers is expected to follow soon. We have therefore opted to adopt WebGPU directly as an advanced web-rendering API and to demonstrate its capabilities.

You can try out an early prototype of our renderer, provided you are using a Chromium-based browser such as Microsoft Edge, Google Chrome, or Chromium. Our web renderer will download an asset of about 100 MB, which is then rendered on your GPU. Depending on your Internet connection, the download may take a few moments.
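As an illustration of what happens when the demo starts, the following hypothetical sketch downloads a large binary asset in the browser while reporting progress; the function, its parameters, and the URL handling are illustrative and do not reflect the actual asset format or location.

```typescript
// Hypothetical sketch: stream a large asset and report download progress.
async function downloadAsset(
  url: string,
  onProgress: (fraction: number) => void
): Promise<ArrayBuffer> {
  const response = await fetch(url);
  if (!response.ok || !response.body) {
    throw new Error(`Download failed with status ${response.status}`);
  }
  // Content-Length may be absent (e.g. with chunked transfer encoding).
  const total = Number(response.headers.get("Content-Length") ?? 0);
  const reader = response.body.getReader();
  const chunks: Uint8Array[] = [];
  let received = 0;
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    chunks.push(value);
    received += value.length;
    if (total > 0) onProgress(received / total);
  }
  // Concatenate the received chunks into a single buffer for the renderer.
  const buffer = new Uint8Array(received);
  let offset = 0;
  for (const chunk of chunks) {
    buffer.set(chunk, offset);
    offset += chunk.length;
  }
  return buffer.buffer;
}
```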

Web Rendering


Unity Plugin

In addition to our web renderer, we also have a plugin for Unity. If you want to know more about it, please contact us.

Contact

Contact Press / Media

Dr.-Ing. Joachim Keinert

Head of Group Computational Imaging

Fraunhofer-Institut für Integrierte Schaltungen IIS
Am Wolfsmantel 33
91058 Erlangen, Germany

Phone +49 9131 776-5152