Paper Reading

LEGO: Learning Edge with Geometry all at Once by Watching Videos

July 2019

tl;dr: Builds on SfM-Learner by adding multi-task learning of edges and surface normals, with self-consistency between the tasks. SOTA for static scenes.

Overall impression

The general idea is inherited from SfM-Learner: it also uses view synthesis as supervision. However, this work additionally predicts surface normals and edges, and adds quite a few losses.
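
For reference, below is a minimal PyTorch sketch of the view-synthesis supervision inherited from SfM-Learner: the source frame is inverse-warped into the target view using the predicted depth and relative pose, and the photometric difference serves as the loss. The function name, tensor layout, and the plain L1 photometric term are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def view_synthesis_loss(tgt_img, src_img, depth, pose, K):
    """Photometric loss by warping src_img into the target view.

    tgt_img, src_img: (B, 3, H, W) images
    depth:            (B, 1, H, W) predicted depth of the target view
    pose:             (B, 3, 4) relative camera pose [R | t], target -> source
    K:                (B, 3, 3) camera intrinsics
    """
    B, _, H, W = tgt_img.shape
    device = tgt_img.device

    # Pixel grid in homogeneous coordinates, shape (B, 3, H*W)
    ys, xs = torch.meshgrid(torch.arange(H, device=device, dtype=torch.float32),
                            torch.arange(W, device=device, dtype=torch.float32),
                            indexing="ij")
    ones = torch.ones_like(xs)
    pix = torch.stack([xs, ys, ones], dim=0).view(1, 3, -1).expand(B, -1, -1)

    # Back-project to 3D, transform to the source frame, re-project
    cam_pts = (torch.linalg.inv(K) @ pix) * depth.view(B, 1, -1)          # (B, 3, H*W)
    cam_pts = torch.cat([cam_pts, torch.ones(B, 1, H * W, device=device)], dim=1)
    src_pts = K @ (pose @ cam_pts)                                        # (B, 3, H*W)
    src_xy = src_pts[:, :2] / src_pts[:, 2:].clamp(min=1e-6)

    # Normalize to [-1, 1] for grid_sample and warp the source image
    grid = torch.stack([2 * src_xy[:, 0] / (W - 1) - 1,
                        2 * src_xy[:, 1] / (H - 1) - 1], dim=-1).view(B, H, W, 2)
    warped = F.grid_sample(src_img, grid, align_corners=True)

    # Simple L1 photometric loss (the paper adds further terms on top of this)
    return (warped - tgt_img).abs().mean()
```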

The key motivation of the work is that the estimated depth and surface normals are blurry and show discontinuities inside smooth surfaces. The paper therefore proposes a strong "as smooth as possible in 3D" (3D-ASAP) prior: all pixels should lie on the same planar surface if no edge exists in between.
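
A rough sketch of how such an edge-gated smoothness prior can be implemented: depth gradients are penalized only where the predicted edge probability is low, so discontinuities are allowed across edges. The paper's actual 3D-ASAP regularization operates on 3D planar surfaces and normals rather than on raw 2D depth gradients, so this is only an illustrative simplification with assumed tensor shapes.

```python
import torch

def edge_aware_smoothness(depth, edge_prob):
    """Penalize depth gradients only where no edge is predicted.

    depth:     (B, 1, H, W) predicted depth
    edge_prob: (B, 1, H, W) predicted edge probability in [0, 1]
    """
    # First-order differences of depth in x and y
    d_dx = (depth[:, :, :, 1:] - depth[:, :, :, :-1]).abs()
    d_dy = (depth[:, :, 1:, :] - depth[:, :, :-1, :]).abs()

    # Down-weight the penalty where an edge is likely,
    # so depth is free to jump across predicted edges
    w_x = 1.0 - edge_prob[:, :, :, 1:]
    w_y = 1.0 - edge_prob[:, :, 1:, :]

    return (w_x * d_dx).mean() + (w_y * d_dy).mean()
```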

It seems that many technical implementation details are needed to make this work. This paper is tightly coupled with the authors' previous work at AAAI 2018, Unsupervised Learning of Geometry with Edge-aware Depth-Normal Consistency, but the AAAI paper is not well written. The AAAI idea is basically to enforce depth-normal consistency, with edges obtained using classical CV methods instead of being jointly learned.
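
For intuition, a hedged sketch of depth-normal consistency: normals are derived from the predicted depth by back-projecting pixels to 3D and crossing the local tangent vectors, then compared against the predicted normals with a cosine distance. The neighborhood definition and weighting in the AAAI paper likely differ; the function name and shapes here are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def depth_normal_consistency(depth, normal, K):
    """Penalize disagreement between predicted normals and normals derived from depth.

    depth:  (B, 1, H, W) predicted depth
    normal: (B, 3, H, W) predicted surface normals
    K:      (B, 3, 3) camera intrinsics
    """
    B, _, H, W = depth.shape
    device = depth.device

    # Back-project every pixel to a 3D point
    ys, xs = torch.meshgrid(torch.arange(H, device=device, dtype=torch.float32),
                            torch.arange(W, device=device, dtype=torch.float32),
                            indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).view(1, 3, -1).expand(B, -1, -1)
    pts = ((torch.linalg.inv(K) @ pix) * depth.view(B, 1, -1)).view(B, 3, H, W)

    # Tangent vectors from neighboring 3D points; normal via cross product
    dx = pts[:, :, :-1, 1:] - pts[:, :, :-1, :-1]
    dy = pts[:, :, 1:, :-1] - pts[:, :, :-1, :-1]
    n_from_depth = F.normalize(torch.cross(dx, dy, dim=1), dim=1)

    # Cosine distance to the predicted normals (cropped to the same valid region)
    n_pred = F.normalize(normal[:, :, :-1, :-1], dim=1)
    return (1.0 - (n_from_depth * n_pred).sum(dim=1)).mean()
```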

This paper also assumes a static scene and does not handle occlusion and dis-occlusion well.

Key ideas

Technical details

Notes