One of the most frequently addressed questions at ECCV 2020 was: how can we represent 3D space in a way that suits learning-based methods? Point clouds, voxels, and meshes are some of the existing approaches, all with their pros (few) and cons (many). Implicit representation, on the other hand, is a relatively new idea in which the 3D geometry – and texture – is encoded as the decision boundary of a binary classifier. It’s implicit because you can’t access the information directly; you have to approximate it through sampling.
Implicit representation networks are queried with one 3D point at a time and output properties (e.g. color, occupancy) of the queried point. The more points you query, the more accurately you can reconstruct the geometry.
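To make the querying idea concrete, here is a minimal sketch in PyTorch: a small MLP that maps each 3D point to an occupancy probability, with the 0.5 level set playing the role of the decision boundary. All names, layer sizes, and the grid resolution are illustrative assumptions, not taken from any specific paper's code.

```python
import torch
import torch.nn as nn

# Illustrative occupancy network: a tiny MLP classifying 3D points as
# inside (occupancy near 1) or outside (near 0) the shape.
class OccupancyMLP(nn.Module):
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # logit; the 0.5-probability level set is the surface
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (N, 3) query points -> (N,) occupancy probabilities
        return torch.sigmoid(self.net(xyz)).squeeze(-1)

# Query a dense grid of points; a finer grid yields a finer reconstruction.
model = OccupancyMLP()
grid = torch.stack(
    torch.meshgrid(*[torch.linspace(-1, 1, 32)] * 3, indexing="ij"), dim=-1
).reshape(-1, 3)
occupancy = model(grid)  # (32**3,) probabilities, one per queried point
```

In practice one would run a surface-extraction step such as marching cubes over these sampled probabilities to recover an explicit mesh.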
We selected two exciting papers presented at ECCV 2020, both based on this idea.
Convolutional Occupancy Networks use implicit representation to perform 3D surface reconstruction from a sparse point cloud. Here, the authors show that their approach can reconstruct large scenes even when trained on small synthetic crops. NeRF, in contrast, uses implicit representation to synthesize novel views from a sparse set of input images with known camera poses. Each 3D point is mapped through a simple MLP to a volume density and a view-dependent color.
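As a rough sketch of that NeRF-style mapping (not the paper's actual architecture, which is deeper and applies a positional encoding to its inputs), a radiance field can be written as an MLP where density depends only on position, while color is additionally conditioned on the viewing direction:

```python
import torch
import torch.nn as nn

# Illustrative NeRF-style radiance field: 3D point + view direction
# -> volume density and view-dependent RGB color. Names and sizes are
# assumptions for the sketch, not the official implementation.
class TinyRadianceField(nn.Module):
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(hidden, 1)  # density: position only
        self.color_head = nn.Sequential(        # color: also sees the view direction
            nn.Linear(hidden + 3, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, xyz: torch.Tensor, view_dir: torch.Tensor):
        h = self.trunk(xyz)                                   # (N, hidden)
        sigma = torch.relu(self.sigma_head(h))                # (N, 1) non-negative density
        rgb = self.color_head(torch.cat([h, view_dir], -1))   # (N, 3) in [0, 1]
        return sigma, rgb
```

Novel views are then rendered by sampling such densities and colors along each camera ray and compositing them with volume rendering.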