With recent advances in the field of autonomous driving, autonomous agents need to safely navigate around humans and other moving objects in unconstrained, highly dynamic environments. In this thesis, we demonstrate the feasibility of reconstructing dense depth, optical flow, and motion information from a neuromorphic imaging device called the Dynamic Vision Sensor (DVS). The DVS records only sparse, asynchronous events when lighting changes occur at camera pixels. Our work is the first monocular pipeline that generates dense depth and optical flow from sparse event data alone. To tackle this problem of reconstructing dense information from sparse input, we introduce the Evenly-Cascaded convolutional Network (ECN), a bio-inspired...
We live in a dynamic world, which is continuously in motion. Perceiving and interpreting the dynamic...
While the keypoint-based maps created by sparse monocular Simultaneous Localisation and Mapping (SLAM...
Event cameras are novel bio-inspired sensors which mimic the function of the human retina. Rather th...
Whole understanding of the surroundings is paramount to autonomous systems. Recent works have...
Motivated by the astonishing capabilities of natural intelligent agents and inspired by theories fro...
Visual Simultaneous Localization and Mapping (SLAM) is crucial for robot perception. Visual odometry...
We present the first event-based learning approach for motion segmentation in indoor scenes and the ...
Self-supervised monocular depth estimation enables robots to learn 3D perception from raw video stre...
Humans and most animals can run/fly and navigate efficiently through cluttered environments while av...
In this paper we propose USegScene, a framework for semantically guided unsupervised learning of dep...
This paper deals with the scarcity of data for training optical flow networks, highlighting the limi...
The motion of the world is inherently dependent on the spatial structure of the world and its geomet...
Estimating scene depth, predicting camera motion and localizing dynamic objects from monocular video...
In this work we train in an end-to-end manner a convolutional neural network (CNN) that jointly hand...
For self-driving vehicles, aerial drones, and autonomous robots to be successfully deployed in the r...