This paper proposes a novel unsupervised learning framework for depth recovery and camera ego-motion estimation from monocular video. The framework exploits optical-flow (OF) properties to jointly train the depth and ego-motion models. Unlike existing unsupervised methods, our method extracts features from the optical flow rather than from the raw RGB images, thereby enhancing the unsupervised learning. In addition, we exploit a forward-backward consistency check on the optical flow to generate a mask of invalid regions in the image and, accordingly, exclude outlier regions such as occlusions and moving objects from the learning. Furthermore, in addition to using view synthesis as a supervisory signal, we impose ad...
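The forward-backward consistency check mentioned in this abstract can be sketched concretely. The snippet below is a minimal illustration, not the paper's implementation: it assumes dense forward and backward flow fields given as (H, W, 2) arrays of pixel displacements, and the function name `consistency_mask` and the thresholds `alpha`/`beta` are illustrative choices.

```python
# Minimal sketch (illustrative, not the paper's code): build an invalid-region mask
# from a forward/backward optical-flow consistency check.
import numpy as np

def consistency_mask(flow_fw, flow_bw, alpha=0.01, beta=0.5):
    """flow_fw maps frame t -> t+1, flow_bw maps t+1 -> t; both are (H, W, 2)
    arrays with channel 0 = x displacement, channel 1 = y displacement."""
    H, W, _ = flow_fw.shape
    ys, xs = np.mgrid[0:H, 0:W]

    # Follow the forward flow, then sample the backward flow at the target pixel
    # (nearest-neighbor sampling keeps the sketch simple).
    tx = np.clip(xs + flow_fw[..., 0], 0, W - 1).round().astype(int)
    ty = np.clip(ys + flow_fw[..., 1], 0, H - 1).round().astype(int)
    bw_at_target = flow_bw[ty, tx]

    # For valid pixels the round trip should roughly cancel: flow_fw + warped flow_bw ≈ 0.
    err = np.sum((flow_fw + bw_at_target) ** 2, axis=-1)
    mag = np.sum(flow_fw ** 2, axis=-1) + np.sum(bw_at_target ** 2, axis=-1)

    # Magnitude-dependent threshold: occluded pixels and moving objects tend to fail it.
    return err < alpha * mag + beta  # True = valid, False = masked out
```

In a training pipeline this check would normally use differentiable bilinear warping rather than rounding, and the resulting mask would gate the view-synthesis loss so that occluded or dynamic regions do not contribute gradients.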
Depth estimation from monocular video plays a crucial role in scene perception. The sig...
Estimating scene depth, predicting camera motion and localizing dynamic objects from monocular video...
Despite learning-based methods showing promising results in single-view depth estimation and visual ...
We present an occlusion-aware unsupervised neural network for jointly learning three low-level visio...
Learning to predict scene depth from RGB inputs is a challenging task both for indoor and outdoor ro...
Self-supervised monocular depth estimation enables robots to learn 3D perception from raw video stre...
Despite well-established baselines, learning of scene depth and ego-motion from monocular video rema...
In this paper we propose USegScene, a framework for semantically guided unsupervised learning of dep...
This paper deals with the scarcity of data for training optical...
Disentangling the sources of visual motion in a dynamic scene during self-movement or ego motion is ...
Whole understanding of the surroundings is paramount to autonomous systems. Recent works have...
The main topic of the present thesis is scene flow estimation in a monocular camera system. Scene fl...
We present a new method for self-supervised monocular depth estimation. Contemporary monocular depth...
Self-supervised monocular methods can efficiently learn depth information of weakly textured surface...
We introduce a way to learn to estimate a scene representation from a single image by predicting a l...