In field experiments, researchers commonly allocate subjects to the different treatment conditions before the experiment starts. While this approach is intuitive, it ignores information gathered while the experiment is running. Drawing on methodological approaches from other scientific fields such as computer science and medicine, we propose a randomized adaptive allocation procedure for field experiments in organizational research based on a Bayesian multi-armed bandit algorithm. Using Monte Carlo simulations, we compare this approach with randomized controlled trials that use a fixed, balanced subject allocation. Our findings suggest that randomized adaptive allocation is more efficient in most settings. ...
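The abstract does not spell out the algorithm, so the following is a minimal sketch of the kind of comparison it describes, under stated assumptions: binary (success/failure) outcomes, Beta-Bernoulli Thompson Sampling as the Bayesian bandit rule, and total successes as the efficiency measure. The function names, the three treatment success rates, the sample size, and the replication count are all illustrative choices, not values taken from the paper.

```python
import numpy as np

def thompson_allocation(true_rates, n_subjects, rng):
    """Adaptive allocation via Beta-Bernoulli Thompson Sampling (an
    assumed instantiation of the paper's Bayesian bandit): each subject
    is assigned to the arm with the highest draw from its Beta posterior,
    and the posterior is updated after every observed outcome."""
    k = len(true_rates)
    successes = np.zeros(k)
    failures = np.zeros(k)
    total = 0
    for _ in range(n_subjects):
        # One sample per arm from Beta(1 + s, 1 + f), i.e. a uniform prior.
        theta = rng.beta(1 + successes, 1 + failures)
        arm = int(np.argmax(theta))
        outcome = rng.binomial(1, true_rates[arm])
        successes[arm] += outcome
        failures[arm] += 1 - outcome
        total += outcome
    return total

def balanced_allocation(true_rates, n_subjects, rng):
    """Fixed design: subjects are split evenly across arms up front,
    as in a conventional randomized controlled trial."""
    k = len(true_rates)
    arms = rng.permutation(np.repeat(np.arange(k), n_subjects // k))
    return rng.binomial(1, np.asarray(true_rates)[arms]).sum()

def monte_carlo(true_rates=(0.10, 0.15, 0.30), n_subjects=300,
                reps=2000, seed=1):
    """Monte Carlo comparison of the two designs on the same setting."""
    rng = np.random.default_rng(seed)
    ts = np.array([thompson_allocation(true_rates, n_subjects, rng)
                   for _ in range(reps)])
    rct = np.array([balanced_allocation(true_rates, n_subjects, rng)
                    for _ in range(reps)])
    print(f"mean successes  adaptive: {ts.mean():.1f}   "
          f"balanced: {rct.mean():.1f}")

if __name__ == "__main__":
    monte_carlo()
```

In this toy setting the adaptive design steers later subjects toward the better-performing treatment, so its mean success count exceeds the balanced design's, which mirrors the efficiency claim in the abstract; the paper's own simulations presumably vary effect sizes, arm counts, and sample sizes rather than fixing one configuration.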