We present NeuSE, a novel Neural SE(3)-Equivariant Embedding for objects, and illustrate how it supports object SLAM for consistent spatial understanding with long-term scene changes. NeuSE is a set of latent object embeddings created from partial object observations. It serves as a compact point cloud surrogate for complete object models, encoding full shape information while transforming SE(3)-equivariantly in tandem with the object in the physical world. With NeuSE, relative frame transforms can be directly derived from inferred latent codes. Our proposed SLAM paradigm, using NeuSE for object shape and pose characterization, can operate independently or in conjunction with typical SLAM systems. It directly infers SE(3) camera pose constr...
The availability of real-time semantics greatly improves the core geometric functionality of SLAM sy...
We propose a dense neural simultaneous localization and mapping (SLAM) approach for monocular RGBD i...
Autonomous robots that interact with their environment require a detailed semantic scene model. For ...
We present ESLAM, an efficient implicit neural representation method for Simultaneous Localization a...
In this work, we explore the use of objects in Simultaneous Localization and Mapping in unseen world...
We present a novel 3D mapping method leveraging the recent progress in neural implicit representatio...
Neural implicit representations have recently demonstrated compelling results on dense Simultaneous ...
Efficient representation of articulated objects such as human bodies is an important problem in comp...
Object SLAM uses additional semantic information to detect and map objects in the scene, in order to...
We propose a new SLAM system that uses the semantic segmentation of objects an...
We present ObPose, an unsupervised object-centric inference and generation model which learns 3D-str...
Classical visual simultaneous localization and mapping (SLAM) algorithms usually assume the environm...
Applications in the field of augmented reality or robotics often require joint localisation and 6D p...
We present a unified and compact representation for object rendering, 3D reconstruction, and grasp p...