
From clinical trials and public health to development economics and political science, randomized experiments stand out as one of the most reliable methodological tools, as they require the fewest assumptions to estimate causal effects. Adaptive experimental designs – where experimental subjects arrive sequentially and the probability of treatment assignment can depend on previously observed outcomes – are becoming an increasingly popular method for causal inference, as they offer the possibility of improved precision over their non-adaptive counterparts. However, even in simple settings (e.g. two treatments), the extent to which adaptive designs can improve precision is not well understood.

In this talk, I present my recent work on the problem of Adaptive Neyman Allocation, in which the experimenter seeks to construct an adaptive design that is nearly as efficient as the optimal (but infeasible) non-adaptive Neyman design, which has access to all potential outcomes and allocates treatment in proportion to the standard deviations of the potential outcomes. I will show that this experimental design problem is equivalent to an adversarial online convex optimization problem, suggesting that any solution must exhibit some amount of algorithmic sophistication. Next, I present Clip-OGD, an experimental design that combines the online gradient descent principle with a new time-varying probability-clipping technique. I will show that Clip-OGD attains the Neyman variance in large samples: the expected regret of the corresponding online optimization problem is bounded by O(\sqrt{T}), up to sub-polynomial factors. Even though the design is adaptive, we construct a consistent (conservative) estimator of the variance, which facilitates the construction of valid confidence intervals. Finally, we demonstrate the method on data collected from a microeconomic experiment.
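As a rough illustration of the idea, the following minimal Python sketch pairs an online-gradient-descent allocation rule with time-varying probability clipping and a Horvitz-Thompson estimate of the average treatment effect. The clipping schedule delta, the step sizes, and the gradient estimator below are illustrative placeholders chosen for readability, not the exact schedules and constants analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def clip_ogd_ate(outcomes, eta=0.01):
    """Sketch of an OGD-with-clipping adaptive design.

    outcomes: array of shape (T, 2) holding potential outcomes
    (Y_t(0), Y_t(1)); only the outcome of the assigned arm is "observed".
    Returns a Horvitz-Thompson estimate of the average treatment effect.
    """
    T = len(outcomes)
    p = 0.5                                   # initial treatment probability
    ht_terms = []
    for t in range(1, T + 1):
        delta = 0.5 * t ** (-0.25)            # time-varying clipping level (illustrative)
        p = min(max(p, delta), 1.0 - delta)   # keep p inside [delta, 1 - delta]
        z = rng.random() < p                  # randomized treatment assignment
        y = outcomes[t - 1, int(z)]           # observe only the assigned arm
        # Horvitz-Thompson contribution to the ATE estimate
        ht_terms.append(y / p if z else -y / (1.0 - p))
        # inverse-propensity-weighted gradient of the per-round variance
        # objective y(1)^2/p + y(0)^2/(1-p); unbiased given the assignment
        grad = -(y ** 2) / p ** 3 if z else (y ** 2) / (1.0 - p) ** 3
        p -= eta * t ** (-0.75) * grad        # gradient step with decaying rate
    return float(np.mean(ht_terms))

# toy example: the treated arm is noisier, so the Neyman design would
# assign more than half of the units to treatment
Y0 = rng.normal(1.0, 0.5, size=5000)
Y1 = rng.normal(2.0, 1.5, size=5000)
print(clip_ogd_ate(np.column_stack([Y0, Y1])))  # true ATE is 1.0
```

The clipping is what makes the rest work: every assignment probability stays bounded away from 0 and 1, so the inverse-propensity weights in the estimator remain well behaved even as the design adapts.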

Joint work with Jessica Dai and Paula Gradu. arXiv: https://arxiv.org/abs/2305.17187