Much current research in AI and games is being devoted to Monte Carlo search (MCS) algorithms. While the quest for a single unified MCS algorithm that would perform well on all problems is of major interest for AI, practitioners often know in advance the problem they want to solve, and spend plenty of time exploiting this knowledge to customize their MCS algorithm in a problem-driven way. We propose an MCS algorithm discovery scheme to perform this in an automatic and reproducible way. We first introduce a grammar over MCS algorithms that enables inducing a rich space of candidate algorithms. Afterwards, we search in this space for the algorithm that performs best on average for a given distribution of training problems. We rely on multi-arme...
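The discovery scheme summarized above (a grammar that induces a space of candidate MCS algorithms, then a search for the candidate that performs best on average over a training distribution) can be sketched as follows. The toy grammar, the `evaluate` callback, and all parameter values here are illustrative assumptions, not the components used in the paper:

```python
import itertools

# Hypothetical toy grammar: a candidate MCS algorithm is a pair
# (selection_policy, rollout_budget). The real grammar is far richer.
SELECTION = ["uniform", "greedy", "ucb"]
ROLLOUTS = [10, 50, 100]

def candidate_algorithms():
    # Enumerate the (small) algorithm space induced by the toy grammar.
    return list(itertools.product(SELECTION, ROLLOUTS))

def discover(evaluate, problems):
    # Return the candidate with the best mean score over training problems.
    # `evaluate(algo, problem)` is an assumed user-supplied scoring function.
    return max(
        candidate_algorithms(),
        key=lambda algo: sum(evaluate(algo, p) for p in problems) / len(problems),
    )
```

In the paper the search over this space is driven by multi-armed bandit methods rather than the exhaustive enumeration shown here; enumeration just keeps the sketch self-contained.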
Classic methods such as A* and IDA* are a popular and successful choice for one-player games. Howeve...
We present a new exploration term, more efficient than classical UCT-like ex...
This paper introduces Monte Carlo *-Minimax Search (MCMS), a Monte Carlo search algorithm for turned...
Classic approaches to game AI require either a high quality of domain knowledge, or a long time to g...
Monte Carlo tree search (MCTS) is a probabilistic algorithm that uses lightweight random simulations...
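The core idea of guiding move selection with lightweight random simulations can be sketched in its simplest (flat, tree-less) form. The `simulate` rollout function is an assumed user-supplied callback that plays a move, finishes the game randomly, and returns a reward; it is not part of any cited system:

```python
def flat_monte_carlo(state, legal_moves, simulate, n_rollouts=100):
    # Score each legal move by the average reward of random rollouts,
    # then return the highest-scoring move.
    best_move, best_value = None, float("-inf")
    for move in legal_moves:
        value = sum(simulate(state, move) for _ in range(n_rollouts)) / n_rollouts
        if value > best_value:
            best_move, best_value = move, value
    return best_move
```

Full MCTS extends this by growing a search tree and reusing statistics across rollouts; the flat version above only illustrates the simulation-based evaluation step.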
Many enhancements for Monte Carlo tree search (MCTS) have been applied successfully in general game ...
Recently, Monte-Carlo Tree Search (MCTS) has advanced the field of computer Go substantially. In thi...
Abstract—The application of multi-armed bandit (MAB) algorithms was a critical step in the developm...
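The MAB machinery referenced here is typically instantiated with the UCB1 rule, as in UCT: each arm (move) is scored by its empirical mean reward plus an exploration bonus that shrinks with visit count. A minimal sketch, with illustrative function names and the conventional exploration constant:

```python
import math

def ucb1(mean_reward, visits, total_visits, c=math.sqrt(2)):
    # UCB1 score: exploitation term plus an exploration bonus.
    if visits == 0:
        return float("inf")  # unvisited arms are always tried first
    return mean_reward + c * math.sqrt(math.log(total_visits) / visits)

def select_arm(means, counts):
    # Return the index of the arm maximizing the UCB1 score.
    total = sum(counts)
    scores = [ucb1(m, n, total) for m, n in zip(means, counts)]
    return max(range(len(scores)), key=scores.__getitem__)
```

With equal visit counts the exploration bonuses cancel and the rule reduces to picking the best empirical mean; the bonus only matters when counts are unbalanced.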
Recent advances in bandit tools and techniques for sequential learning are steadily enabling new app...