About Me

Currently, I am a research scientist at Facebook Artificial Intelligence Research. I'm interested in understanding learning and decision-making (human and machine). I am also interested in causal inference, experimentation, active learning and causal discovery, which I view as important components for building truly smart decision-makers. Recently I've also become interested in using simple games as platforms to help build AI which can understand natural language.

I spent two years working on Facebook News Feed, where I helped start the data science team. We worked on, among other things, advanced experimentation techniques, understanding network effects and improving the way we measure various outcomes. This Slate piece does a great job of describing some of our work, including our human evaluator program. This Fortune article reports on our work on clickbait detection.

I'm a huge fan of data science for civic improvement and social good. Check out a cool project a bunch of us built at a hackathon sponsored by Bayes Impact.

My work has been published in a number of academic journals and I've written popular press articles for WIRED and The New York Times.

Before Facebook I was a joint post-doc with David Rand at the Human Cooperation Lab (Yale Psychology) and Martin Nowak at the Program for Evolutionary Dynamics (Harvard Biology). I completed my PhD in Economics at Harvard University under the watchful eyes of Al Roth, Drew Fudenberg, David Laibson and Uma Karmarkar.

Research

Working Papers

"In-Group favoritism caused by Pokemon Go and the use of machine learning for principled investigation of potential moderators" (with Dave Rand)
(Under review) [SSRN PDF]
Summary: A large body of laboratory-based research suggests that arbitrary group assignments (i.e. "minimal groups") can lead to in-group bias. We use the release of the popular augmented-reality game Pokemon Go to study this phenomenon in a hybrid lab-field experiment.
"Learning causal models from many experiments: meta-analysis with regularized instrumental variables" (with Dean Eckles)
Poster at the NIPS 2016 "What If?" Workshop (Under review) [Arxiv]
Summary: We show how to use modified IV methods to combine hundreds or thousands of randomized trials to learn generalizable knowledge.
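To make the idea concrete, here is a toy sketch in Python (a hypothetical illustration under simplified assumptions, not the paper's code or data): each experiment's random assignment serves as an instrument for an endogenous variable, and a ridge-penalized first stage pools the many weak instruments before a standard second-stage IV regression.

    import numpy as np

    # Toy setup: many small randomized experiments, each shifting an endogenous
    # variable X that in turn affects an outcome Y; U is an unobserved confounder.
    rng = np.random.default_rng(0)
    n_exp, n_per = 500, 50
    n = n_exp * n_per

    Z = np.zeros((n, n_exp))   # one instrument column per experiment
    X = np.zeros(n)
    Y = np.zeros(n)
    for e in range(n_exp):
        idx = slice(e * n_per, (e + 1) * n_per)
        z = rng.integers(0, 2, n_per)               # random assignment in experiment e
        u = rng.normal(size=n_per)                  # unobserved confounder
        alpha = rng.normal(1.0, 0.2)                # experiment-specific first-stage strength
        x = alpha * z + u + rng.normal(size=n_per)  # endogenous variable moved by assignment
        y = 2.0 * x - u + rng.normal(size=n_per)    # outcome; true causal effect of X is 2
        Z[idx, e], X[idx], Y[idx] = z, x, y

    # First stage: ridge-regularized regression of X on the many experiment instruments
    lam = 10.0
    beta_fs = np.linalg.solve(Z.T @ Z + lam * np.eye(n_exp), Z.T @ X)
    X_hat = Z @ beta_fs

    # Second stage: use the fitted first stage as the instrument for X
    effect = (X_hat @ Y) / (X_hat @ X)
    print(f"IV estimate of the effect of X on Y: {effect:.2f} (true value 2.0)")

With hundreds of experiments and only a few dozen units in each, the unregularized first stage is very noisy; the ridge penalty trades a little shrinkage bias for a large reduction in variance.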
"Paying (for) Attention: The Impact of Information Processing Costs on Bayesian Inference" (with Xiaosheng Mu and Scott Kominers)
(Under review) [Working Paper PDF]
Summary: What happens to learning if Bayesian updating is even slightly costly? It turns out things can get arbitrarily bad.
"Detecting heterogeneous treatment effects by combining observational and experimental data" (with Akos Lada)
[Arxiv]
Summary: We show how to use observational data to estimate unit-level causal effects. We show when and why this can make estimation of heterogeneous effects in randomized trials easier despite the fact that observational estimates are often biased by unobserved confounders.
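One way to picture the approach (a deliberately simplified, hypothetical Python sketch, not the paper's actual procedure): fit outcome models on confounded observational data, take their difference as a per-unit "effect proxy", and then use that single proxy as the moderator when analyzing a much smaller randomized trial.

    import numpy as np

    rng = np.random.default_rng(1)

    def simulate(n, randomized):
        x = rng.normal(size=(n, 2))                      # observed covariates
        u = rng.normal(size=n)                           # unobserved confounder
        tau = 1.0 + 2.0 * x[:, 0]                        # true heterogeneous effect
        if randomized:
            d = rng.integers(0, 2, n).astype(float)      # random assignment
        else:
            d = (x[:, 1] + u + rng.normal(size=n) > 0).astype(float)  # confounded take-up
        y = 2.0 * x[:, 1] + 3.0 * u + d * tau + rng.normal(size=n)
        return x, d, y

    def fit(features, y):
        return np.linalg.lstsq(features, y, rcond=None)[0]

    def design(x):
        return np.column_stack([np.ones(len(x)), x])

    # 1) Observational step: model outcomes separately for treated and untreated units
    xo, do, yo = simulate(50_000, randomized=False)
    b1 = fit(design(xo[do == 1]), yo[do == 1])
    b0 = fit(design(xo[do == 0]), yo[do == 0])

    # 2) Experimental step: the (biased) observational proxy becomes the single moderator
    xe, de, ye = simulate(2_000, randomized=True)
    proxy = design(xe) @ (b1 - b0)
    feats = np.column_stack([np.ones(len(ye)), de, de * proxy, proxy])
    coef = fit(feats, ye)
    print("coefficient on treatment * proxy:", round(coef[2], 2))  # positive: proxy tracks heterogeneity

The proxy's level can be badly biased by the unobserved confounder, but as long as it is correlated with the true unit-level effects it collapses the search for heterogeneity into a single dimension.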

2017

"Multi-Agent Cooperation and the Emergence of (Natural) Language" (with Angeliki Lazaridou and Marco Baroni)
Forthcoming at ICLR 2017 [Arxiv]
Summary: We study how machines can learn language from having to coordinate to solve simple problems rather than from masses of labeled or unlabeled data.

2016

"The Good, the Bad, and the Unflinchingly Selfish: Cooperative Decision-Making Can Be Predicted with High Accuracy Using Only Three Behavioral Types" (with Ziv Epstein and Dave Rand)
Proceedings of the 17th ACM Conference on Economics and Computation (EC2016) [SSRN PDF]
Summary: We have people play many dictator games with varying parameters and use machine learning to predict behavior in one game from behavior in the others. Willingness to cooperate is heterogeneous, but not very heterogeneous. However, demographics and standard personality psychology variables (e.g. the Big 5) are not good predictors of which "type" people are. This is further evidence that "cooperativeness" is a stable dimension on which people differ and is orthogonal to many others that are commonly studied.
"Using methods from machine learning to evaluate models of human choice under uncertainty" (with Jeff Naecker)
Forthcoming in Journal of Economic Behavior and Organization [Working Paper PDF]
Summary: We compare ML models to economic models of decisions under risk and ambiguity. Economic models do surprisingly well at predicting real lottery choices, though there is room for improvement for choices under ambiguity. ML provides a useful benchmark for judging whether formal behavioral models can be improved.
"Recency, Records and Recaps: Learning and Non-equilibrium Behavior in a Simple Decision Problem" (with Drew Fudenberg)
Proceedings of the 15th ACM Conference on Economics and Computation (EC2014), reprinted in Transactions on Economics and Computation (2016) [Working Paper PDF] [Online Supplement] [Journal Version]
Summary: Will learning lead individuals to behave optimally? Not always. We show that the human tendency to discount past information (recency bias) can lead to persistent sub-optimal decision-making in a simple game. Informational interventions or "recaps" can be used to mitigate the worst impacts of the bias.

2015

"When Punishment Doesn't Pay: 'Cold Glow' and Decisions to Punish" (with Aurelie Ouss)
Journal of Law and Economics (December 2015) [Working Paper PDF]
Summary: People's intuitions about punishment don't fit normative models from the economics of crime. What kinds of outcomes result when individuals set punishment levels in games? They're not great.
"Asymmetric Impacts of Favorable and Unfavorable Information on Decisions Under Ambiguity" (with Uma Karmarkar)
Management Science (2015) [Working Paper PDF] [Journal Version] [Data and Code]
Summary: When information supports a favorable outcome, it strongly increases valuation of an ambiguous financial prospect. However, when information supports an unfavorable outcome, it has significantly less impact. Unfavorable information decreases estimates of a good outcome occurring, but also reduces aversive uncertainty. These factors act in opposition, minimizing the effects of unfavorable information.
"Habits of Virtue: creating norms of cooperation and defection in the laboratory" (with David Rand)
Management Science (2015) [Working Paper PDF] [Journal Version] [Data and Code]
Summary: Why are some groups of people more cooperative than others? We argue that people internalize strategies that they learn are successful in their daily social interactions. Thus, interacting in environments where cooperation is advantageous (due to effective "rules of the game") causes individuals to adopt cooperation as their default behavior more generally. We show these spillover effects experimentally.

2014

"Cooperating with the future" (with Oliver Hauser, David Rand and Martin Nowak)
Nature (2014) [Published Version] [Data and Code]
Nature video summary available here (it's good)
Summary: The existence of pro-social preferences makes possible institutional designs that would not work in the standard model. We show how properly designed mechanisms that harness pro-social preferences can achieve good outcomes, while poorly designed mechanisms can lead to huge social welfare losses.
"Humans display a 'cooperative phenotype' that is domain general and temporally stable" (with Martin Nowak and David Rand)
Nature Communications (2014) [Working Paper PDF] [Journal Version] [Data and Code]
Summary: Is there a unified underlying trait that we can call "cooperativeness" or is each cooperation decision completely unique? Can we study this trait using simple stripped-down economic games? Is the propensity to cooperate related to other social behaviors, such as the propensity to punish non-cooperators, or to competitiveness?
"Why We Cooperate" (with Jillian Jordan and David Rand)
Jean Decety & Thalia Wheatley (eds.) “The Moral Brain: Multidisciplinary Perspectives” (2014) [Working Paper PDF]
Summary: Prosocial behavior is a key part of human life. Why do individuals pay costs to give benefits to others? We provide a brief survey to introduce non-specialists to the literature.
"Social Heuristics Shape Intuitive Cooperation" (with David Rand, Gordon Kraft-Todd, George Newman, Owen Wurzbacher, Martin Nowak and Joshua Greene)
Nature Communications (2014) [Working Paper PDF] [Journal Version] [Data and Code]
Summary: In a large meta-analysis we show that intuition favors cooperation while reflection leads to defection in one-shot cooperation games. We also show that previous experience with behavioral experiments can undermine the effectiveness of time pressure as a decision-making manipulation.
"How to Commit (If You Must): Commitment Contracts and the Dual-Self Model"
Journal of Economic Behavior and Organization (2014) [Working Paper PDF] [Journal Version]
Summary: This paper looks at three types of commitment technologies: carrot contracts (rewards for 'good' behavior), stick contracts (fines for 'bad' behavior) and binding commitment. Carrots turn out to work best in a Fudenberg-Levine-style dual-self model.

2012

"A Note on Proper Scoring Rules and Risk Aversion" (with Mikkel Plagborg-Moller)
Economics Letters (2012) [Ungated Working Paper] [Journal Version]
Summary: Proper scoring rules are incentive schemes to make individuals report their true beliefs. We show that risk aversion causes individuals to "compress" their belief ratios toward 1 and discuss potential mechanisms for solving the risk aversion problem.
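A tiny numerical sketch of the mechanism (hypothetical Python, not the paper's code): under the quadratic scoring rule S(r, x) = 1 - (x - r)^2 for a binary event, a risk-neutral agent's optimal report equals her true belief, while a risk-averse agent's optimal report is pulled toward 1/2, i.e., her reported belief ratio is pulled toward 1.

    import numpy as np

    def optimal_report(p, utility, grid=np.linspace(0.001, 0.999, 999)):
        # Expected utility of each candidate report r under the quadratic scoring
        # rule S(r, x) = 1 - (x - r)^2, when the event occurs with probability p.
        eu = p * utility(1 - (1 - grid) ** 2) + (1 - p) * utility(1 - grid ** 2)
        return grid[np.argmax(eu)]

    p = 0.8
    print(optimal_report(p, lambda w: w))           # risk neutral: reports ~0.80
    print(optimal_report(p, lambda w: np.sqrt(w)))  # risk averse (sqrt utility): report compressed toward 0.5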

Popular Press

"How Not To Drown In Numbers" (with Seth Stephens-Davidowitz)
New York Times Op-Ed [Web]
"Games Head to the Lab" (with David Rand)
WIRED UK "The World in 2014" [PDF]
"Small is Good When It Comes to Data Creation" (with David Rand)
WIRED UK "The World in 2013" [PDF]