In this paper, we study transformers for text-based games. As a promising replacement for recurrent modules in Natural Language Processing (NLP) tasks, the transformer architecture can serve as a powerful state-representation generator for reinforcement learning. However, the vanilla transformer is neither effective nor efficient to learn with, owing to its huge number of weight parameters. Unlike existing research that encodes states using LSTMs or GRUs, we develop a novel lightweight transformer-based representation generator featuring reordered layer normalization, weight sharing, and block-wise aggregation. The experimental results show that our proposed model not only solves single games with far fewer interactions, but also achieves bet...
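To make the three architectural ideas concrete, below is a minimal sketch, not the authors' implementation, of a lightweight encoder that combines reordered (pre-) layer normalization, cross-layer weight sharing, and block-wise aggregation of per-layer outputs. All class names, hyperparameters (d_model, n_heads, n_layers, d_ff), and the mean-pooling choice are illustrative assumptions.

```python
# Sketch of a lightweight transformer state encoder for text observations.
# Assumptions (not from the paper): module names, dimensions, pooling scheme.
import torch
import torch.nn as nn


class PreNormBlock(nn.Module):
    """Transformer block with LayerNorm applied before attention/FFN (reordered LayerNorm)."""

    def __init__(self, d_model: int, n_heads: int, d_ff: int):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(d_model)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Normalize first, transform, then add the residual.
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        x = x + attn_out
        x = x + self.ffn(self.norm2(x))
        return x


class LightweightEncoder(nn.Module):
    """Reuses ONE shared block across layers and aggregates the per-block outputs."""

    def __init__(self, vocab_size: int, d_model: int = 128,
                 n_heads: int = 4, d_ff: int = 256, n_layers: int = 4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Weight sharing: a single block's parameters are reused at every layer.
        self.block = PreNormBlock(d_model, n_heads, d_ff)
        self.n_layers = n_layers
        # Learned coefficients for block-wise aggregation of pooled outputs.
        self.layer_weights = nn.Parameter(torch.ones(n_layers))

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        x = self.embed(tokens)                       # (batch, seq, d_model)
        pooled_per_block = []
        for _ in range(self.n_layers):
            x = self.block(x)                        # same parameters every pass
            pooled_per_block.append(x.mean(dim=1))   # (batch, d_model)
        stacked = torch.stack(pooled_per_block, 1)   # (batch, n_layers, d_model)
        weights = torch.softmax(self.layer_weights, dim=0)
        # Block-wise aggregation: weighted sum over every block's pooled output.
        return torch.einsum("l,bld->bd", weights, stacked)


if __name__ == "__main__":
    # Usage: encode a batch of tokenized game observations into state vectors.
    enc = LightweightEncoder(vocab_size=1000)
    obs = torch.randint(0, 1000, (2, 20))            # 2 observations, 20 tokens each
    state = enc(obs)                                 # (2, 128) state representations
    print(state.shape)
```

The resulting state vectors would feed a downstream RL policy or action scorer; sharing one block's weights and aggregating across blocks is what keeps the parameter count small relative to a vanilla multi-layer transformer.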