Speaker:
Xinwei Ma (Associate Professor of Economics, University of California San Diego)
Moderators:
(PKU School of Economics) 王一鸣, 巩爱博
Participating Faculty:
(PKU School of Economics) 刘蕴霆, 王熙, 王法, 李少然
(PKU National School of Development) 黄卓, 沈艳, 张俊妮
Time:
March 27, 2026 (Friday)
10:00-11:30
Venue (in person):
Conference Room 101
About the Speaker:
Xinwei Ma is an Associate Professor of Economics at the University of California San Diego. His research interests are interdisciplinary, focusing on developing robust statistical methodologies for economics, social sciences, and biomedical applications. Some specific research directions include kernel-based density estimation and falsification testing in regression discontinuity designs; semi-parametric estimation and inference with limited overlap; Mendelian randomization addressing weak instruments and the winner's curse; design and analysis of adaptive experiments; and language model-assisted inference and data routing.
Xinwei earned his Ph.D. in Economics from the University of Michigan in 2019. Prior to this, he received a Master of Finance from the University of Hong Kong, as well as a Master of Economics and a Bachelor of Science from Peking University. Xinwei serves as an Associate Editor for Econometric Theory, the Journal of Econometrics, and The Econometrics Journal. In 2025, he was elected a Fellow of the International Association for Applied Econometrics.
Abstract:
Randomized experiments, or randomized controlled trials (RCTs), are the gold standard for causal inference, yet cost and sample-size constraints limit their power. We introduce CALM (Causal Analysis leveraging Language Models), a statistical framework that integrates insights about RCTs generated by large language models (LLMs) with established causal estimators to increase precision while preserving statistical validity. In particular, CALM treats LLM-generated outputs as auxiliary prognostic information and corrects their potential bias via a heterogeneous calibration step that residualizes and optimally reweights the predictions. We prove that CALM remains consistent even when the LLM predictions are biased, and that it achieves efficiency gains over augmented inverse probability weighting (AIPW) estimators for a variety of causal effects. In addition, CALM includes a few-shot variant that aggregates predictions across randomly sampled demonstration sets; the resulting U-statistic-like predictor restores an i.i.d. structure and mitigates prompt-selection variability. Empirically, in simulations calibrated to a mobile-app depression RCT, CALM delivers lower variance than benchmark methods, is effective in both zero- and few-shot settings, and remains stable across prompt designs. Through principled use of LLMs to harness unstructured data and external knowledge learned during pretraining, CALM offers a practical path to more precise causal analyses in RCTs.
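The core idea in the abstract, that a possibly biased auxiliary prediction can be used as a prognostic covariate without compromising consistency if it is residualized and reweighted, can be illustrated with a small simulation. This is only an illustrative sketch of the general principle, not the authors' CALM implementation: the simulated data, the stand-in "LLM prediction" `m`, and the single-covariate, arm-specific regression adjustment are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)                        # latent prognostic signal
d = rng.binomial(1, 0.5, size=n)              # randomized treatment assignment
y = 1.0 * d + 2.0 * x + rng.normal(size=n)    # true average treatment effect = 1.0

# Stand-in for an LLM-generated outcome prediction: prognostic
# (correlated with y through x) but both biased and noisy.
m = 0.5 + 1.5 * x + rng.normal(scale=0.5, size=n)

y1, y0 = y[d == 1], y[d == 0]
n1, n0 = len(y1), len(y0)

# Benchmark: simple difference in means and its plug-in standard error.
dim = y1.mean() - y0.mean()
se_dim = np.sqrt(y1.var(ddof=1) / n1 + y0.var(ddof=1) / n0)

# Calibration sketch: center the prediction, then residualize the outcome
# on it within each arm with an arm-specific coefficient. Centering means
# any additive bias in m shifts both arms equally under randomization,
# so the adjusted estimator remains consistent even though m is biased.
mc = m - m.mean()
m1, m0 = mc[d == 1], mc[d == 0]
g1 = np.cov(y1, m1)[0, 1] / m1.var(ddof=1)
g0 = np.cov(y0, m0)[0, 1] / m0.var(ddof=1)
r1, r0 = y1 - g1 * m1, y0 - g0 * m0
adj = r1.mean() - r0.mean()
se_adj = np.sqrt(r1.var(ddof=1) / n1 + r0.var(ddof=1) / n0)
```

With an adjustment of this kind, variance shrinks roughly in proportion to the squared correlation between the prediction and the outcome, so `se_adj` comes out well below `se_dim` here. CALM's actual procedure is richer, with heterogeneous calibration paired with AIPW-type estimators and a U-statistic-like few-shot aggregator, none of which this sketch reproduces.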