The purpose of this thesis was to compare the performance of three different imitation learning algorithms trained with human experts under limited expert time. The central question was: "How should one implement imitation learning in a simulated car racing environment, using human experts, to achieve the best performance when access to the experts is limited?" We restricted the work to the three algorithms Behavior Cloning, DAGGER, and HG-DAGGER, and restricted the implementation to the car racing simulator TORCS. The agents all used the same type of feedforward neural network, taking as input the sensor data provided by TORCS. By comparing the performance of the different algorithms across different amounts of expert time, we can concl...
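To illustrate the kind of setup the abstract describes, below is a minimal sketch, not the thesis implementation, of a DAGGER-style loop with a feedforward policy over TORCS-like sensor vectors. The observation and action sizes, the network shape, and the rollout and expert-query functions are all assumptions; in the real experiments the states would come from TORCS and the labels from the human expert.

# Minimal DAGGER-style sketch (assumed interfaces, stand-in data).
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 29, 3          # assumed: TORCS-like sensor vector -> steer/throttle/brake

policy = nn.Sequential(            # one feedforward architecture shared by all algorithms
    nn.Linear(OBS_DIM, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, ACT_DIM), nn.Tanh(),
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def rollout(pi, steps=200):
    """Placeholder rollout: in practice, the states the policy visits in TORCS."""
    return torch.randn(steps, OBS_DIM)

def expert_label(states):
    """Placeholder for querying the human expert on the visited states."""
    return torch.tanh(states[:, :ACT_DIM])

dataset_x, dataset_y = [], []
for iteration in range(5):                       # each iteration consumes expert time
    states = rollout(policy)
    actions = expert_label(states)               # DAGGER: expert labels on-policy states
    dataset_x.append(states); dataset_y.append(actions)
    X, Y = torch.cat(dataset_x), torch.cat(dataset_y)
    for _ in range(50):                          # supervised fit on the aggregated dataset
        opt.zero_grad()
        loss = nn.functional.mse_loss(policy(X), Y)
        loss.backward(); opt.step()

Behavior Cloning corresponds to running only the supervised fit on a fixed set of expert demonstrations, while HG-DAGGER would instead let the expert take over control during rollouts and label only those intervention segments.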
Test Automation is becoming a more vital part of the software development cycle, as it aims to lower...
Imitation learning techniques aim to mimic human behavior in a given task. An agent (a learning mach...
In this paper we discuss how agents can learn to do things by imitating other agents. Especially we ...
The purpose of this thesis was to compare the performance of three different imitation learning algo...
End-to-end autonomous driving can be approached by finding a policy function that maps observation (...
Optimal control for multicopters is difficult in part due to the low processing power available, and...
The way characters move and behave in computer and video games are important factors in their believ...
Many existing imitation learning datasets are collected from multiple demonstrators, each with diffe...
ABSTRACT: Imitation learning is based on learning from the actions of an observed third party. One o...
Advances in robotics have resulted in increases both in the availability of robots and also their co...
Machine learning is an appealing and useful approach to creating vehicle control algorithms, both fo...
This paper examines the possibilities of faking human behavior with artificial intelligence in compu...
One way to approach end-to-end autonomous driving is to learn a policy that maps from a sensory inpu...
Imitation learning algorithms, such as AggreVaTe, have proven successful in solving many challenging...
Autonomous racing with scaled race cars has gained increasing attention as an effective approach for...