In this paper, we present a novel approach to incrementally learn an Abstract Model of an unknown environment, and show how an agent can reuse the learned model for tackling the Object Goal Navigation task. The Abstract Model is a finite state machine in which each state is an abstraction of a state of the environment, as perceived by the agent in a certain position and orientation. The perceptions are high-dimensional sensory data (e.g., RGB-D images), and the abstraction is reached by exploiting image segmentation and the Taskonomy model bank. The learning of the Abstract Model is accomplished by executing actions, observing the reached state, and updating the Abstract Model with the acquired information. The learned models are memorized ...
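The incremental learning loop described in this abstract (execute an action, observe the reached state, update the finite state machine) can be sketched as follows. This is a minimal illustration under assumptions: the class and method names (`AbstractModel`, `update`, `predict`) are hypothetical, and the string states stand in for the paper's actual abstractions, which are derived from image segmentation and Taskonomy features.

```python
class AbstractModel:
    """Sketch of an Abstract Model: a finite state machine whose states
    abstract the agent's perceptions and whose transitions are labeled
    by the actions the agent executes."""

    def __init__(self):
        # (abstract state, action) -> observed successor abstract state
        self.transitions = {}

    def update(self, state, action, next_state):
        # Record the transition just observed, growing the FSM incrementally.
        self.transitions[(state, action)] = next_state

    def predict(self, state, action):
        # Successor state if this transition was seen before, else None.
        return self.transitions.get((state, action))


# Toy usage: strings stand in for segmented / Taskonomy-encoded views.
model = AbstractModel()
model.update("kitchen_view_A", "turn_left", "kitchen_view_B")
model.update("kitchen_view_B", "move_forward", "hall_view_A")
print(model.predict("kitchen_view_A", "turn_left"))  # kitchen_view_B
```

A learned model of this shape can then be queried (or planned over) when the agent faces an Object Goal Navigation task in the same environment.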
As vision and language processing techniques have made great progress, mapless-visual navigation is ...
As deep reinforcement learning methods have made great progress in the visual navigation field, meta...
Agents (humans, mice, computers) need to constantly make decisions to survive and thrive in their e...
Can the intrinsic relation between an object and the room in which it is usually located help agents...
In this work, we present a memory-augmented approach for image-goal navigation...
Representation learning is a central topic in the field of deep learning. It aims at extracting usef...
If a robotic agent wants to exploit symbolic planning techniques to achieve some goal, it must be ab...
While artificial learning agents have demonstrated impressive capabilities, these successes are typi...
Existing work on deep reinforcement learning-based visual navigation mainly focuses on autonomous ag...
Autonomous agents embedded in a physical environment need the ability to recognize objects and their...
In this paper, we consider the problem of reinforcement learning in spatial tasks. These tasks have ...
Object-based approaches for learning action-conditioned dynamics have demonstrated promise for genera...
A key requirement for any agent that wishes to interact with the visual world is the ability to unde...
In the context of visual navigation, the capacity to map a novel environment is necessary for an age...
Learning to navigate in a realistic setting where an agent must rely solely on visual inputs is a ch...