Self-supervised monocular depth estimation enables robots to learn 3D perception from raw video streams. This scalable approach leverages projective geometry and ego-motion to learn via view synthesis, assuming the world is mostly static. Dynamic scenes, which are common in autonomous driving and human-robot interaction, violate this assumption. Handling them therefore requires modeling dynamic objects explicitly, for instance by estimating pixel-wise 3D motion, i.e., scene flow. However, jointly learning depth and scene flow in a self-supervised manner is ill-posed, as there are infinitely many combinations of the two that project to the same 3D point. In this paper we propose DRAFT, a new method capable of jointly learning depth, optical flow, and scene flow ...
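As a minimal sketch of the static-world view-synthesis warp this paradigm relies on (not DRAFT's implementation; the function and variable names `warp_pixels`, `depth`, `K`, `T` are illustrative assumptions, and a pinhole camera model is assumed): each target pixel is back-projected using the predicted depth, transformed by the estimated ego-motion, and reprojected into the source frame.

```python
# Hypothetical sketch of the view-synthesis warp used by self-supervised
# monocular depth methods:  p_s ~ K (T (D_t(p_t) K^{-1} p_t)).
# Names and structure are illustrative, not any specific method's API.

import numpy as np

def warp_pixels(depth, K, T):
    """Reproject target-frame pixels into the source frame.

    depth: (H, W) predicted depth map for the target image
    K:     (3, 3) camera intrinsics
    T:     (4, 4) relative pose, target -> source (ego-motion)
    Returns (H, W, 2) source-frame pixel coordinates.
    """
    H, W = depth.shape
    # Homogeneous pixel grid over the target image.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # (3, H*W)

    # Back-project to 3D camera coordinates: X = D * K^{-1} p.
    cam = np.linalg.inv(K) @ pix * depth.reshape(1, -1)

    # Apply ego-motion, then project with the intrinsics.
    cam_h = np.vstack([cam, np.ones((1, cam.shape[1]))])   # (4, H*W)
    src = K @ (T @ cam_h)[:3]                              # (3, H*W)
    src = (src[:2] / np.clip(src[2:], 1e-6, None)).T       # (H*W, 2)
    return src.reshape(H, W, 2)
```

A photometric loss then compares the target image with the source image sampled at these warped coordinates. Pixels on independently moving objects violate this rigid warp, which is exactly why dynamic scenes call for an additional scene-flow term.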
Whole understanding of the surroundings is paramount to autonomous systems. Recent works have shown ...
With an unprecedented increase in the number of agents and systems that aim to navigate the real wor...
In this paper we propose USegScene, a framework for semantically guided unsupervised learning of dep...
Learning to predict scene depth from RGB inputs is a challenging task both for indoor and outdoor ro...
Self-supervised monocular methods can efficiently learn depth information of weakly textured surface...
Human visual perception is a powerful tool to let us interact with the world, interpreting depth usi...
Self-supervised monocular depth estimation has shown impressive results in static scenes. It relies ...
This paper proposes a novel unsupervised learning framework for depth recovery and camera ego-motion...
We introduce a way to learn to estimate a scene representation from a single image by predicting a l...
We present a new method for self-supervised monocular depth estimation. Contemporary monocular depth...
Estimating scene depth, predicting camera motion and localizing dynamic objects from monocular video...
Self-supervised monocular depth estimation networks are trained to predict scene depth using nearby ...
Despite well-established baselines, learning of scene depth and ego-motion from monocular video rema...