# Research

Presented at the BITSS Annual Meeting in March 2024: Video

Under Review

How many experimental studies would have come to different conclusions had they been run on larger samples? I show how to estimate the expected number of statistically significant results that a set of experiments would have reported had their sample sizes all been counterfactually increased by a chosen factor. The deconvolution estimator is consistent and asymptotically normal. Unlike existing methods, my approach requires no assumptions about the distribution of true treatment effects of the interventions being studied other than continuity, and it adjusts for publication bias in the reported t-scores. An application to randomized controlled trials (RCTs) published in top economics journals finds that doubling every experiment's sample size would increase the power of two-sided t-tests by only 7.2 percentage points on average. I argue that this effect is small by showing that it is comparable to the effect for systematic replication projects in laboratory psychology, where previous studies enabled accurate power calculations ex ante. Both effects are smaller than for non-RCTs. This comparison suggests that RCTs are, on average, relatively insensitive to sample size increases. The policy implication is that grant-makers should generally fund more experiments rather than fewer, larger ones. Submission, Transparency R package, arXiv.
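To fix intuition for how two-sided power scales with sample size, here is a naive plug-in sketch. It is not the paper's deconvolution estimator: it treats an observed t-score as the true noncentrality parameter, which is exactly the noisy shortcut the paper avoids. The function name and interface are assumptions of this sketch; the only statistical fact used is that the noncentrality of a t-test scales with the square root of the sample size.

```python
from statistics import NormalDist

def power_after_scaling(t_obs, factor=2.0, alpha=0.05):
    """Plug-in power of a two-sided test after multiplying the sample
    size by `factor`, using a normal approximation to the t-test.

    Caveat: treating the observed t-score `t_obs` as the true
    noncentrality ignores estimation noise and publication bias,
    which the paper's deconvolution estimator corrects for.
    """
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2)       # two-sided critical value
    h = abs(t_obs) * factor ** 0.5      # noncentrality scales with sqrt(n)
    # P(reject) = P(Z > z - h) + P(Z < -z - h)
    return nd.cdf(-z + h) + nd.cdf(-z - h)
```

For a result just at the 5% threshold (t = 1.96), power is about 50% at the original sample size and roughly 79% after doubling; for a very large t-score, doubling barely moves power because it is already near one. This illustrates why the average gain from doubling can be modest when many studies are either far above or far below the threshold.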

Presented at: CEPR Development Economics Annual Symposium, Urban Economics Association Annual Meeting, and Microeconometrics Class of 2024 Conference in September 2024

We study the problem of estimating the average causal effect of treating every member of a population, as opposed to none, using an experiment that treats only some. This is the policy-relevant estimand when deciding whether to scale up an intervention based on the results of an RCT but differs from the usual average treatment effect in the presence of spillovers. We focus on settings where spillovers decay in magnitude with distance but have global support. Our first result provides the fastest rate at which any experimental design paired with any linear estimator can converge to this estimand. While the optimal rate is straightforward to achieve with IPW estimators, applied researchers almost exclusively use a different approach in practice: they regress each unit’s outcome on the fraction of its nearby neighbors who received treatment. We show that this linear regression converges to an undesirable weighted average of spillover effects. We derive a refined regression that removes the unwanted weighting and converges at the optimal rate. We justify the prevalence of regression-based strategies in practice by showing that linear regressions converge at a faster rate than IPW when treatment clusters are small. We apply our methods to an experimental evaluation of a large-scale cash transfer in rural Kenya and estimate an impact on annual household consumption expenditure that (we argue) is more robust, interpretable, and precise than the total effect reported by Egger et al. (2022). We also study the problem of bandwidth selection and find that a simple rule of thumb outperforms a minimax selection rule in practice. arXiv
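As a stylized illustration of the IPW benchmark, the following is a Horvitz–Thompson sketch under a simple "full exposure" mapping: a unit counts toward the all-treated arm only if it and all of its neighbors are treated, and toward the none-treated arm only if they are all untreated. The function name, the exposure definition, and the single probability argument `p` are assumptions for this sketch, not the authors' estimator.

```python
def ht_global_effect(y, d, neighbors, p):
    """Horvitz-Thompson contrast of all-treated vs none-treated means.

    y         : list of outcomes
    d         : list of 0/1 treatment indicators
    neighbors : dict mapping unit index -> iterable of neighbor indices
    p         : known design probability of each full-exposure event
                (assumed equal across units for simplicity)

    Stylized sketch under a 'full exposure' mapping; units whose
    neighborhoods mix treated and untreated members contribute nothing.
    """
    n = len(y)
    est = 0.0
    for i in range(n):
        block = [i] + list(neighbors[i])
        if all(d[j] == 1 for j in block):
            est += y[i] / (n * p)   # inverse-probability weight
        elif all(d[j] == 0 for j in block):
            est -= y[i] / (n * p)
    return est
```

Under cluster-level randomization, `p` would be the chance that a unit's whole neighborhood lands in a treated (or untreated) cluster. The contrast with the common practice described above is that this estimator targets the all-versus-none estimand directly, whereas regressing outcomes on the treated fraction of neighbors converges to a weighted average of spillover effects.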

Social Effects, Spillovers, and Scale-up of Teacher Training in Uganda: An RCT (with Vesall Nourani, Moustafa El-Kashlan, and Sara Tamayo)

While nearly half of Ugandan schoolchildren enter secondary school, fewer than 10% complete it. Low teaching quality may be a contributing factor. We study the effects and spillovers of training secondary school teachers in rural Uganda with an RCT. Teachers were randomly assigned to an innovative training program run by Kimanya-Ngeyo starting in November 2021, and training is ongoing in waves. Our design lets us study teacher-to-teacher spillovers over time: half of the treated schools were randomly assigned to train teachers in "cliques" (treated teachers who know each other well) and the other half in "anti-cliques" (treated teachers who do not know each other well). AEA Registration here.
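The clique versus anti-clique contrast can be illustrated with a toy selection routine. This is entirely hypothetical: the registration, not this sketch, defines the actual protocol, and the `friends` graph, function name, and exhaustive search below are assumptions made only to show the two arms' defining property (pairwise acquaintance versus pairwise non-acquaintance among treated teachers).

```python
from itertools import combinations

def pick_treated_teachers(friends, k, mode):
    """Toy illustration of the clique / anti-clique arms.

    friends : dict mapping teacher -> set of teachers they know well
              (assumed symmetric)
    k       : number of teachers to treat within the school
    mode    : 'clique' (all pairs know each other) or 'anti-clique'
              (no pair knows each other)

    Exhaustively searches for the first qualifying group; returns
    None if no such group exists. Illustrative only.
    """
    want = (mode == "clique")
    for group in combinations(sorted(friends), k):
        if all((b in friends[a]) == want
               for a, b in combinations(group, 2)):
            return list(group)
    return None
```

The point of the design is that spillovers should travel differently through the two arms: in clique schools, trained teachers can reinforce each other, while in anti-clique schools the training reaches more distinct parts of the school's social network.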