Introduction to Field Experiments and Randomized Controlled Trials

[Image: painting of a girl holding a bottle]

Have you ever been curious about the methods researchers employ to determine causal relationships among various factors, ultimately leading to significant breakthroughs and progress in numerous fields? In this article, we offer an overview of field experimentation and its importance in discerning cause and effect relationships. We outline how randomized experiments represent an unbiased method for determining what works. Furthermore, we discuss key aspects of experiments, such as intervention, excludability, and non-interference. To illustrate these concepts, we present a hypothetical example of a randomized controlled trial evaluating the efficacy of an experimental drug called Covi-Mapp.

Why experiments?

Every day, we find ourselves faced with questions of cause and effect. Understanding the driving forces behind outcomes is crucial, whether the question concerns personal decisions like parenting strategies or organizational challenges such as effective advertising. This blog aims to provide a systematic introduction to experimentation, igniting enthusiasm for primary research and highlighting the many experimental applications and opportunities available.

The challenge for those who seek to answer causal questions convincingly is to develop a research methodology that doesn't require identifying or measuring every potential confounder. Because no planned design can rule out every systematic difference between treatment and control groups by deliberate matching, random assignment emerges as a powerful tool for minimizing bias. In the contentious world of causal claims, randomized experiments represent an unbiased method for determining what works. Random assignment means participants are assigned to different groups or conditions in a study purely by chance: in the simplest design, each participant has an equal chance of being assigned to the control group or the treatment group.
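As a minimal sketch in R (with hypothetical participant IDs, not from any real study), random assignment can be as simple as:

# Simple random assignment: each of 100 hypothetical participants has an
# equal chance of landing in the treatment group (1) or the control group (0).
set.seed(42)                                            # for reproducibility
participants <- data.frame(id = 1:100)
participants$treat <- sample(rep(c(0, 1), each = 50))   # exactly 50 treated, 50 control
table(participants$treat)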

Field experiments, or randomized studies conducted in real-world settings, can take many forms. While experiments on college campuses are often considered lab studies, certain experiments on campus – such as those examining club participation – may be regarded as field experiments, depending on the experimental design. Ultimately, whether a study is considered a field experiment hinges on the definition of "the field."

Researchers may employ two main scenarios for randomization. The first involves gathering study participants and randomizing them at the time of the experiment. The second capitalizes on naturally occurring randomizations, such as the Vietnam draft lottery. 

Intervention, Excludability, and Non-Interference

Three essential features of any experiment are intervention, excludability, and non-interference. In a general sense, the intervention refers to the treatment or action being tested in an experiment. The excludability principle is satisfied when the only difference between the experimental and control groups is the presence or absence of the intervention. The non-interference principle holds when the outcome of one participant in the study does not influence the outcomes of other participants. Together, these principles ensure that the experiment is designed to provide unbiased and reliable results, isolating the causal effect of the intervention under study.

Omitted Variables and Non-Compliance

Randomization also guards against omitted variable bias. Omitted variables are factors that influence the outcome but are not measured, or are difficult to measure. These unmeasured attributes, sometimes called confounding variables or unobserved heterogeneity, would otherwise need to be accounted for to guarantee accurate findings; random assignment balances them, in expectation, across the treatment and control groups.

Non-compliance can also complicate experiments. One-sided non-compliance occurs when individuals assigned to a treatment group don't receive the treatment (failure to treat), while two-sided non-compliance occurs when some subjects assigned to the treatment group go untreated or individuals assigned to the control group receive the treatment. Addressing these issues at the design level by implementing a blind or double-blind study can help mitigate potential biases.
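To see why non-compliance matters, here is a small simulation with invented numbers: patients are randomly assigned (z), but some assigned patients refuse the drug (one-sided non-compliance), and refusal is related to how sick they are. The comparison that follows the original assignment (intention-to-treat) stays interpretable, while a naive "as-treated" comparison is confounded:

# One-sided non-compliance: sicker patients assigned to treatment refuse it.
set.seed(7)
n <- 1000
z <- rbinom(n, 1, 0.5)                        # random assignment
sicker <- rbinom(n, 1, 0.3)                   # unmeasured baseline severity
d <- as.integer(z == 1 & sicker == 0)         # actual treatment received
y <- 1 - 0.5 * d + 2 * sicker + rnorm(n)      # outcome: treatment helps, sickness hurts
mean(y[z == 1]) - mean(y[z == 0])             # intention-to-treat comparison (follows assignment)
mean(y[d == 1]) - mean(y[d == 0])             # "as-treated" comparison: confounded by severity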

Achieving Precision through Covariate Balance

To ensure the control and treatment groups are similar in all relevant respects, particularly when the sample size (n) is small, it is essential to check covariate balance. Covariance measures the association between two variables, while a covariate is a factor that influences the outcome variable. By balancing covariates, we can more accurately isolate the effect of the treatment, leading to improved precision in our findings.

Fictional Example of Randomized Controlled Trial of Covi-Mapp for COVID-19 Management

Let's explore a fictional example to better understand experiments: a one-week randomized controlled trial of the experimental drug Covi-Mapp for managing COVID-19. The control group receives the standard care for COVID-19 patients, while the treatment group receives the standard care plus Covi-Mapp. The outcome of interest is whether patients have cough symptoms on day 7, since subsiding cough symptoms are an encouraging sign in COVID-19 recovery. We'll measure the presence of cough on day 0 and day 7, as well as temperature on day 0 and day 7. Gender is also recorded.

In this Covi-Mapp example, the intervention is the Covi-Mapp drug, the excludability principle is satisfied if the only difference in patient care between the groups is the drug administration, and the non-interference principle holds if one patient's outcome doesn't affect another's.

First, let's assume we have a dataset containing the relevant information for each patient, including cough status on day 0 and day 7, temperature on day 0 and day 7, treatment assignment, and gender. We'll read the data and explore the dataset:

library(data.table)
library(lmtest)      # coeftest()
library(sandwich)    # vcovHC()
library(stargazer)   # regression tables

d <- fread("../data/COVID_rct.csv")

names(d)

"temperature_day0"  "cough_day0"        "treat_drug"        "temperature_day7"  "cough_day7"        "male" 

Simple treatment effect of the experimental drug

Without any covariates, let's first look at the estimated effect of the treatment on the presence of cough on day 7. The estimated proportion of patients with a cough on day 7 in the control group (not receiving the experimental drug) is 0.847; in other words, about 84.7% of control patients are expected to have a cough on day 7. The estimated effect of the experimental drug is -0.238, meaning that, on average, receiving the drug reduces the proportion of patients with a cough on day 7 by about 23.8 percentage points relative to the control group.

covid_1 <- d[ , lm(cough_day7 ~ treat_drug)]

coeftest(covid_1, vcovHC)


                 Estimate Std. Error t value Pr(>|t|)    

(Intercept)       0.847458   0.047616  17.798  < 2e-16 ***

treat_drug       -0.237702   0.091459  -2.599  0.01079 *  

Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

We know that a patient's initial condition affects the final outcome: a patient who already has a cough and a fever on day 0 may fare worse by day 7 regardless of treatment. To better understand the treatment's effect, let's add these baseline covariates:

covid_2 <- d[ , lm(cough_day7 ~ treat_drug +

                   cough_day0 + temperature_day0)]

coeftest(covid_2, vcovHC)


                  Estimate Std. Error t value Pr(>|t|)   

(Intercept)      -19.469655   7.607812 -2.5592 0.012054 * 

treat_drug        -0.165537   0.081976 -2.0193 0.046242 * 

cough_day0         0.064557   0.178032  0.3626 0.717689   

temperature_day0   0.205548   0.078060  2.6332 0.009859 **

Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

The output shows the results of a linear regression model estimating the effect of the experimental drug (treat_drug) on the presence of cough on day 7, adjusting for cough and temperature on day 0. The experimental drug significantly reduces the presence of cough on day 7, by approximately 16.6 percentage points compared to the control group (p = 0.046). The presence of cough on day 0 does not significantly predict the presence of cough on day 7 (p = 0.718). A one-degree higher temperature on day 0 is associated with a 20.6 percentage-point increase in the probability of cough on day 7, and this effect is statistically significant (p = 0.010).

Should we add day-7 temperature as a covariate? No: temperature on day 7 could itself be affected by the treatment. It is a post-treatment variable, and conditioning on it can absorb part of the treatment effect, so the treatment may appear statistically insignificant even when it works. Including variables affected by the intervention undermines the value of the experiment.
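A small simulation (invented numbers, unrelated to the Covi-Mapp data) illustrates the point: when the treatment lowers day-7 temperature and temperature in turn drives cough, controlling for day-7 temperature absorbs part of the treatment effect.

# Conditioning on a post-treatment variable attenuates the estimated effect.
set.seed(123)
n <- 500
treat      <- rbinom(n, 1, 0.5)
temp_day7  <- 99 - 1.5 * treat + rnorm(n)                           # treatment lowers day-7 temperature
cough_day7 <- rbinom(n, 1, plogis(0.5 * (temp_day7 - 99) - treat))  # cough depends on both
coef(lm(cough_day7 ~ treat))["treat"]                               # total effect of treatment
coef(lm(cough_day7 ~ treat + temp_day7))["treat"]                   # attenuated after conditioning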

However, we'd like to investigate whether the treatment affects men and women differently. Since we collected gender as part of the study, we can check for a heterogeneous treatment effect (HTE) by interacting the treatment with the male indicator, this time using day-7 temperature as the outcome. The experimental drug has a marginally significant effect among female patients (male == 0), lowering day-7 temperature by approximately 0.23 degrees (p = 0.054).

covid_4 <- d[ , lm(temperature_day7 ~ treat_drug + treat_drug * male +
                   cough_day0 + temperature_day0)]

coeftest(covid_4, vcovHC)


t test of coefficients:


                  Estimate Std. Error  t value  Pr(>|t|)    

(Intercept)      48.712690  10.194000   4.7786 6.499e-06 ***

treat_drug       -0.230866   0.118272  -1.9520   0.05391 .  

male              3.085486   0.121773  25.3379 < 2.2e-16 ***

cough_day0        0.041131   0.194539   0.2114   0.83301    

temperature_day0  0.504797   0.104511   4.8301 5.287e-06 ***

treat_drug:male  -2.076686   0.198386 -10.4679 < 2.2e-16 ***

Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Which group, those coded as male == 0 or male == 1, has better health outcomes (lower day-7 temperature) in control? What about in treatment? How does this help to contextualize any heterogeneous treatment effect that might have been estimated?

stargazer is a popular R package that enables users to create well-formatted tables of statistical results. Here we use it to report the treatment effect on day-7 temperature separately for male and female patients:

covid_males <- d[male == 1, lm(temperature_day7 ~ treat_drug)]

covid_females <- d[male == 0, lm(temperature_day7 ~ treat_drug)]


stargazer(covid_males, covid_females,

          title = "",

          type = 'text',

          dep.var.caption = 'Outcome Variable:',

          dep.var.labels = c('Temperature on Day 7'),

          se = list(

            sqrt(diag(vcovHC(covid_males))),

            sqrt(diag(vcovHC(covid_females))))

          )


===============================================================

                                 Outcome Variable:             

                               Temperature on Day 7            

                              (1)                   (2)        

treat_drug                 -2.591***              -0.323*      

                            (0.220)               (0.174)      

Constant                  101.692***             98.487***     

                            (0.153)               (0.102)      

Observations                  37                    63         

R2                           0.798                 0.057       

Adjusted R2                  0.793                 0.041       

Residual Std. Error     0.669 (df = 35)       0.646 (df = 61)  

F Statistic         138.636*** (df = 1; 35) 3.660* (df = 1; 61)

===============================================================

Note:                               *p<0.1; **p<0.05; ***p<0.01

Looking at this regression report, we see that males in control have an average temperature of about 101.7, while females in control average about 98.5 (very nearly normal). So, in control, males are worse off. In treatment, males average roughly 101.7 - 2.59 = 99.1, which is closer to normal but still elevated. Females in treatment average roughly 98.5 - 0.32 = 98.2, slightly below a normal temperature. It appears the treatment has a stronger effect among male participants than among females because males are more sick at baseline.

In conclusion, experimentation offers a fascinating and valuable avenue for primary research, allowing us to address causal questions and enhance our understanding of the world around us. Covariate control helps to isolate the causal effect of the treatment on the outcome variable, ensuring that the observed effect is not driven by confounding factors. Proper control of covariates enhances the internal validity of the study and ensures that the estimated treatment effect is an accurate representation of the true causal relationship. By exploring and accounting for subgroups in the data, researchers can identify whether the treatment has different effects on different groups, such as men and women or younger and older individuals. This information can be critical for making informed policy decisions and developing targeted interventions that maximize the benefits for specific groups. The ongoing investigation of experimental methodologies and their potential applications represents a compelling and significant area of inquiry.

References

Gerber, A. S., & Green, D. P. (2012). Field Experiments: Design, Analysis, and Interpretation. W. W. Norton.

“DALL·E 2.” OpenAI, https://openai.com/product/dall-e-2

“Data Science 241. Experiments and Causal Inference.” UC Berkeley School of Information, https://www.ischool.berkeley.edu/courses/datasci/241

Field experiments, explained


A field experiment is a research method that uses some controlled elements of traditional lab experiments, but takes place in natural, real-world settings. This type of experiment can help scientists explore questions like: Why do people vote the way they do? Why do schools fail? Why are certain people hired less often or paid less money?

University of Chicago economists were early pioneers in the modern use of field experiments and conducted innovative research that impacts our everyday lives—from policymaking to marketing to farming and agriculture.  


Field experiments bridge the highly controlled lab environment and the messy real world. Social scientists have taken inspiration from traditional medical or physical science lab experiments. In a typical drug trial, for instance, participants are randomly assigned into two groups. The control group gets the placebo—a pill that has no effect. The treatment group will receive the new pill. The scientist can then compare the outcomes for each group.

A field experiment works similarly, just in the setting of real life.

It can be difficult to understand why a person chooses to buy one product over another or how effective a policy is when dozens of variables affect the choices we make each day. “That type of thinking, for centuries, caused economists to believe you can't do field experimentation in economics because the market is really messy,” said Prof. John List, a UChicago economist who has used field experiments to study everything from how people use Uber and Lyft to how to close the achievement gap in Chicago-area schools. “There are a lot of things that are simultaneously moving.”

The key to cleaning up the mess is randomization, or assigning participants randomly to either the control group or the treatment group. “The beauty of randomization is that each group has the same amount of bad stuff, or noise or dirt,” List said. “That gets differenced out if you have large enough samples.”

Though lab experiments are still common in the social sciences, field experiments are now often used by psychologists, sociologists and political scientists. They’ve also become an essential tool in the economist’s toolbox.  

Some issues are too big and too complex to study in a lab or on paper—that’s where field experiments come in.

In a laboratory setting, a researcher wants to control as many variables as possible. These experiments are excellent for testing new medications or measuring brain functions, but they aren’t always great for answering complex questions about attitudes or behavior.

Labs are highly artificial with relatively small sample sizes—it’s difficult to know if results will still apply in the real world. Also, people are aware they are being observed in a lab, which can alter their behavior. This phenomenon, sometimes called the Hawthorne effect, can affect results.

Traditional economics often uses theories or existing data to analyze problems. But, when a researcher wants to study if a policy will be effective or not, field experiments are a useful way to look at how results may play out in real life.

In 2019, UChicago economist Michael Kremer (then at Harvard) was awarded the Nobel Prize alongside Abhijit Banerjee and Esther Duflo of MIT for their groundbreaking work using field experiments to help reduce poverty . In the 1990s and 2000s, Kremer conducted several randomized controlled trials in Kenyan schools testing potential interventions to improve student performance. 

In the 1990s, Kremer worked alongside an NGO to figure out if buying students new textbooks made a difference in academic performance. Half the schools got new textbooks; the other half didn’t. The results were unexpected—textbooks had no impact.

“Things we think are common sense, sometimes they turn out to be right, sometimes they turn out to be wrong,” said Kremer on an episode of  the Big Brains podcast. “And things that we thought would have minimal impact or no impact turn out to have a big impact.”

In the early 2000s, Kremer returned to Kenya to study a school-based deworming program. He and a colleague found that providing deworming pills to all students reduced absenteeism by more than 25%. After the study, the program was scaled nationwide by the Kenyan government. From there it was picked up by multiple Indian states—and then by the Indian national government.

“Experiments are a way to get at causal impact, but they’re also much more than that,” Kremer said in  his Nobel Prize lecture . “They give the researcher a richer sense of context, promote broader collaboration and address specific practical problems.”    

Among many other things, field experiments can be used to:

Study bias and discrimination

A 2004 study published by UChicago economists Marianne Bertrand and Sendhil Mullainathan (then at MIT) examined racial discrimination in the labor market. They sent over 5,000 resumes to real job ads in Chicago and Boston. The resumes were exactly the same in all ways but one—the name at the top. Half the resumes bore white-sounding names like Emily Walsh or Greg Baker. The other half sported African American names like Lakisha Washington or Jamal Jones. The study found that applications with white-sounding names were 50% more likely to receive a callback.

Examine voting behavior

Political scientist Harold Gosnell, PhD 1922, pioneered the use of field experiments to examine voting behavior while at UChicago in the 1920s and ‘30s. In his study “Getting out the vote,” Gosnell sorted 6,000 Chicagoans across 12 districts into groups. One group received voter registration info for the 1924 presidential election and the control group did not. Voter registration jumped substantially among those who received the informational notices. Not only did the study prove that get-out-the-vote mailings could have a substantial effect on voter turnout, but also that field experiments were an effective tool in political science.

Test ways to reduce crime and shape public policy

Researchers at UChicago’s  Crime Lab use field experiments to gather data on crime as well as policies and programs meant to reduce it. For example, Crime Lab director and economist Jens Ludwig co-authored a  2015 study on the effectiveness of the school mentoring program  Becoming a Man . Developed by the non-profit Youth Guidance, Becoming a Man focuses on guiding male students between 7th and 12th grade to help boost school engagement and reduce arrests. In two field experiments, the Crime Lab found that while students participated in the program, total arrests were reduced by 28–35%, violent-crime arrests went down by 45–50% and graduation rates increased by 12–19%.

The earliest field experiments took place—literally—in fields. Starting in the 1800s, European farmers began experimenting with fertilizers to see how they affected crop yields. In the 1920s, two statisticians, Jerzy Neyman and Ronald Fisher, were tasked with assisting with these agricultural experiments. They are credited with identifying randomization as a key element of the method—making sure each plot had the same chance of being treated as the next.

The earliest large-scale field experiments in the U.S. took place in the late 1960s to help evaluate various government programs. Typically, these experiments were used to test minor changes to things like electricity pricing or unemployment programs.

Though field experiments were used in some capacity throughout the 20th century, this method didn’t truly gain popularity in economics until the 2000s. Kremer and List were early pioneers and first began experimenting with the method in the 1990s.

In 2004, List co-authored  a seminal paper defining field experiments and arguing for the importance of the method. In 2008,  he and UChicago economist Steven Levitt published another study tracing the history of field experiments and their impact on economics.

In the past few decades, the use of field experiments has exploded. Today, economists often work alongside NGOs or nonprofit organizations to study the efficacy of programs or policies. They also partner with companies to test products and understand how people use services.  

There are several  ethical discussions happening among scholars as field experiments grow in popularity. Chief among them is the issue of informed consent. All studies that involve human test subjects must be approved by an institutional review board (IRB) to ensure that people are protected.

However, participants in field experiments often don’t know they are in an experiment. While an experiment may be given the stamp of approval in the research community, some argue that taking away peoples’ ability to opt out is inherently unethical. Others advocate for stricter review processes as field experiments continue to evolve.

According to List, another major issue in field experiments is the issue of scale. Many experiments only test small groups—say, dozens to hundreds of people. This may mean the results are not applicable to broader situations. For example, if a scientist runs an experiment at one school and finds their method works there, does that mean it will also work for an entire city? Or an entire country?

List believes that in addition to testing option A and option B, researchers need a third option that accounts for the limitations that come with a larger scale. “Option C is what I call critical scale features. I want you to bring in all of the warts, all of the constraints, whether they're regulatory constraints, or constraints by law,” List said. “Option C is like your reality test, or what I call policy-based evidence.”

This problem isn’t unique to field experiments, but List believes tackling the issue of scale is the next major frontier for a new generation of economists.



Institution for Social and Policy Studies, Yale University

Why Randomize?

About Randomized Field Experiments

Randomized field experiments allow researchers to scientifically measure the impact of an intervention on a particular outcome of interest.

What is a randomized field experiment? In a randomized experiment, a study sample is divided into one group that will receive the intervention being studied (the treatment group) and another group that will not receive the intervention (the control group). For instance, a study sample might consist of all registered voters in a particular city. This sample will then be randomly divided into treatment and control groups. Perhaps 40% of the sample will be on a campaign’s Get-Out-the-Vote (GOTV) mailing list and the other 60% of the sample will not receive the GOTV mailings. The outcome measured, voter turnout, can then be compared in the two groups. The difference in turnout will reflect the effectiveness of the intervention.
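A rough sketch of that comparison in R, with invented turnout numbers (gotv and voted are hypothetical variable names used only for illustration):

set.seed(1)
n <- 10000
gotv  <- rbinom(n, 1, 0.4)                             # 40% of the sample receives the GOTV mailing
voted <- rbinom(n, 1, ifelse(gotv == 1, 0.55, 0.50))   # treated group turns out slightly more often
mean(voted[gotv == 1]) - mean(voted[gotv == 0])        # difference in turnout = estimated effect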

What does random assignment mean? The key to randomized experimental research design is in the random assignment of study subjects – for example, individual voters, precincts, media markets or some other group – into treatment or control groups. Randomization has a very specific meaning in this context. It does not refer to haphazard or casual choosing of some and not others. Randomization in this context means that care is taken to ensure that no pattern exists between the assignment of subjects into groups and any characteristics of those subjects. Every subject is as likely as any other to be assigned to the treatment (or control) group. Randomization is generally achieved by employing a computer program containing a random number generator. Randomization procedures differ based upon the research design of the experiment. Individuals or groups may be randomly assigned to treatment or control groups. Some research designs stratify subjects by geographic, demographic or other factors prior to random assignment in order to maximize the statistical power of the estimated effect of the treatment (e.g., GOTV intervention). Information about the randomization procedure is included in each experiment summary on the site.
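The sketch below illustrates two of the procedures described above in base R: complete random assignment, and assignment stratified (blocked) by precinct. The tiny example data are invented; packages such as randomizr provide convenience helpers for the same designs, but base R is enough to show the idea.

set.seed(2024)
voters <- data.frame(id = 1:12, precinct = rep(c("A", "B"), each = 6))

# Complete random assignment: exactly half of all subjects are treated.
voters$treat_complete <- sample(rep(c(0, 1), length.out = nrow(voters)))

# Blocked (stratified) assignment: randomize separately within each precinct,
# so treatment and control are balanced inside every stratum.
voters$treat_blocked <- ave(rep(0, nrow(voters)), voters$precinct,
                            FUN = function(x) sample(rep(c(0, 1), length.out = length(x))))

table(voters$precinct, voters$treat_blocked)   # 3 treated and 3 control per precinct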

What are the advantages of randomized experimental designs? Randomized experimental design yields the most accurate analysis of the effect of an intervention (e.g., a voter mobilization phone drive or a visit from a GOTV canvasser) on voter behavior. By randomly assigning subjects to be in the group that receives the treatment or to be in the control group, researchers can measure the effect of the mobilization method regardless of other factors that may make some people or groups more likely to participate in the political process.

To provide a simple example, say we are testing the effectiveness of a voter education program on high school seniors. If we allow students from the class to volunteer to participate in the program, and we then compare the volunteers’ voting behavior against those who did not participate, our results will reflect something other than the effects of the voter education intervention. This is because there are, no doubt, qualities about those volunteers that make them different from students who do not volunteer. And, most important for our work, those differences may very well correlate with propensity to vote. Instead of letting students self-select, or even letting teachers select students (as teachers may have biases in who they choose), we could randomly assign all students in a given class to be in either a treatment or control group. This would ensure that those in the treatment and control groups differ solely due to chance.

The value of randomization may also be seen in the use of walk lists for door-to-door canvassers. If canvassers choose which houses they will go to and which they will skip, they may choose houses that seem more inviting or they may choose houses that are placed closely together rather than those that are more spread out. These differences could conceivably correlate with voter turnout. Or if house numbers are chosen by selecting those on the first half of a ten-page list, they may be clustered in neighborhoods that differ in important ways from neighborhoods in the second half of the list. Random assignment controls for both known and unknown variables that can creep in with other selection processes to confound analyses.

Randomized experimental design is a powerful tool for drawing valid inferences about cause and effect. The use of randomized experimental design should allow a degree of certainty that the research findings cited in studies that employ this methodology reflect the effects of the interventions being measured and not some other underlying variable or variables.

Field Experiments Across the Social Sciences

Delia Baldassarri (Department of Sociology, New York University) and Maria Abascal (Department of Sociology, Columbia University)

Annual Review of Sociology, Vol. 43:41-73 (July 2017). First published as a Review in Advance on May 22, 2017. © Annual Reviews. https://doi.org/10.1146/annurev-soc-073014-112445

Using field experiments, scholars can identify causal effects via randomization while studying people and groups in their naturally occurring contexts. In light of renewed interest in field experimental methods, this review covers a wide range of field experiments from across the social sciences, with an eye to those that adopt virtuous practices, including unobtrusive measurement, naturalistic interventions, attention to realistic outcomes and consequential behaviors, and application to diverse samples and settings. The review covers four broad research areas of substantive and policy interest: first, randomized controlled trials, with a focus on policy interventions in economic development, poverty reduction, and education; second, experiments on the role that norms, motivations, and incentives play in shaping behavior; third, experiments on political mobilization, social influence, and institutional effects; and fourth, experiments on prejudice and discrimination. We discuss methodological issues concerning generalizability and scalability as well as ethical issues related to field experimental methods. We conclude by arguing that field experiments are well equipped to advance the kind of middle-range theorizing that sociologists value.

  • Milkman KL , Akinola M , Chugh D . 2015 . What happens before? A field experiment exploring how pay and representation differentially shape bias on the pathway into organizations. J. Appl. Psychol. 100 : 1678– 712 [Google Scholar]
  • Milkman KL , Beshears J , Choi JJ , Laibson D , Madrian BC . 2011 . Using implementation intentions prompts to enhance influenza vaccination rates. PNAS 108 : 10415– 20 [Google Scholar]
  • Morgan S , Winship C . 2007 . Counterfactuals and Causal Inference Cambridge, UK: Cambridge Univ. Press [Google Scholar]
  • Morton R , Williams K . 2010 . Experimental Political Science and the Study of Causality Cambridge, UK: Cambridge Univ. Press [Google Scholar]
  • Moss-Racusin CA , Dovidio JF , Brescoll V , Graham MJ , Handelsman J . 2012 . Science faculty's subtle gender biases favor male students. PNAS 109 : 16474– 79 [Google Scholar]
  • Munnell AH . 1986 . Lessons from the Income Maintenance Experiments Boston: Fed. Res. Bank of Boston [Google Scholar]
  • Mutz DC . 2011 . Population-Based Survey Experiments Princeton, NJ: Princeton Univ. Press [Google Scholar]
  • Nagda BRA , Tropp LR , Paluck EL . 2006 . Looking back as we look ahead: integrating research, theory, and practice on intergroup relations. J. Soc. Issues 62 : 439– 51 [Google Scholar]
  • Neumark D , Bank RJ , Nort KDV . 1996 . Sex discrimination in restaurant hiring: an audit study. Q. J. Econ. 111 : 915– 41 [Google Scholar]
  • Nickerson DW . 2008 . Is voting contagious? Evidence from two field experiments. Am. Political Sci. Rev. 102 : 49– 57 [Google Scholar]
  • Nolan JM , Kenefick J , Schultz PW . 2011 . Normative messages promoting energy conservation will be underestimated by experts unless you show them the data. Soc. Influence 6 : 169– 80 [Google Scholar]
  • Nolan JM , Schultz PW , Cialdini RB , Goldstein NJ , Griskevicius V . 2008 . Normative social influence is underdetected. Pers. Soc. Psychol. Bull. 34 : 913– 23 [Google Scholar]
  • Nosek B , Aarts A , Anderson J , Anderson C , Attridge P . et al. 2015a . Estimating the reproducibility of psychological science. Science 349 : 943– 51 [Google Scholar]
  • Nosek B , Alter G , Banks G , Borsboom D , Bowman S . et al. 2015b . Promoting an open research culture. Science 348 : 1422– 25 [Google Scholar]
  • Olken B . 2007 . Monitoring corruption: evidence from a field experiment in Indonesia. J. Political Econ. 115 : 200– 49 [Google Scholar]
  • Olken B . 2010 . Direct democracy and local public goods: evidence from a field experiment in Indonesia. Am. Political Sci. Rev. 104 : 243– 67 [Google Scholar]
  • Pager D . 2003 . The mark of a criminal record. Am. J. Sociol. 108 : 937– 75 [Google Scholar]
  • Pager D . 2007 . The use of field experiments for studies of employment discrimination: contributions, critiques, and directions for the future. Ann. Am. Acad. Political Soc. Sci. 609 : 104– 33 [Google Scholar]
  • Pager D , Quillian L . 2005 . Walking the talk: what employers say versus what they do. Am. Sociol. Rev. 70 : 355– 80 [Google Scholar]
  • Pager D , Western B , Bonikowski B . 2009 . Discrimination in a low-wage labor market: a field experiment. Am. Sociol. Rev. 74 : 777– 99 [Google Scholar]
  • Paluck EL . 2009 . Reducing intergroup prejudice and conflict using the media: a field experiment in Rwanda. Interpers. Relat. Group Process. 96 : 574– 87 [Google Scholar]
  • Paluck EL , Cialdini RB . 2014 . Field research methods. Handbook of Research Methods in Social and Personality Psychology HT Reis, CM Judd 81– 97 New York: Cambridge Univ. Press, 2nd ed.. [Google Scholar]
  • Paluck EL , Green DP . 2009 . Prejudice reduction: what works? A review and assessment of research and practice. Annu. Rev. Psychol. 60 : 339– 67 [Google Scholar]
  • Paluck EL , Shepherd H . 2012 . The salience of social referents: a field experiment on collective norms and harassment behavior in a school social network. J. Pers. Soc. Psychol. 103 : 899– 915 [Google Scholar]
  • Paluck EL , Shepherd H , Aronow PM . 2016 . Changing climates of conflict: a social network driven experiment in 56 schools. PNAS 113 : 566– 71 [Google Scholar]
  • Pedulla DS . 2016 . Penalized or protected? Gender and the consequences of non-standard and mismatched employment histories. Am. Sociol. Rev. 81 : 262– 89 [Google Scholar]
  • Pettigrew TF . 1998 . Intergroup contact theory. Annu. Rev. Psychol. 49 : 65– 85 [Google Scholar]
  • Riach PA , Rich J . 2002 . Field experiments of discrimination in the market place. Econ. J. 112 : 480– 518 [Google Scholar]
  • Rodríguez-Planas N . 2012 . Longer-term impacts of mentoring, educational services, and learning incentives: evidence from a randomized trial in the United States. Am. Econ. J. Appl. Econ. 4 : 121– 39 [Google Scholar]
  • Rondeau D , List JA . 2008 . Matching and challenge gifts to charity: evidence from laboratory and natural field experiments. Exp. Econ. 11 : 253– 67 [Google Scholar]
  • Ross SL , Turner MA . 2005 . Housing discrimination in metropolitan America: explaining changes between 1989 and 2000. Soc. Probl. 52 : 152– 80 [Google Scholar]
  • Rossi PH , Berk RA , Lenihan KJ . 1980 . Money, Work, and Crime: Experimental Evidence New York: Academic Press [Google Scholar]
  • Rossi PH , Berk RA , Lenihan KJ . 1982 . Saying it wrong with figures: a comment on Zeisel. Am. J. Sociol. 88 : 390– 93 [Google Scholar]
  • Rossi PH , Lyall KC . 1978 . An overview evaluation of the NIT experiment. Eval. Stud. Rev. 3 : 412– 28 [Google Scholar]
  • Sabin N . 2015 . Modern microfinance: a field in flux. Social Finance Nicholls A, Paton R, Emerson J Oxford, UK: Oxford Univ. Press [Google Scholar]
  • Salganik MJ , Dodds PS , Watts DJ . 2006 . Experimental study of inequality and unpredictability in an artificial cultural market. Science 311 : 854– 56 [Google Scholar]
  • Sampson RJ . 2008 . Moving to inequality: neighborhood effects and experiments meet social structure. Am. J. Sociol. 114 : 189– 231 [Google Scholar]
  • Sampson RJ . 2012 . Great American City: Chicago and the Enduring Neighborhood Effect Chicago, IL: Chicago Univ. Press [Google Scholar]
  • Schuler SR , Hashemi SM , Badal SH . 1998 . Men's violence against women in rural Bangladesh: undermined or exacerbated by microcredit programmes?. Dev. Pract. 8 : 148– 57 [Google Scholar]
  • Schultz P . 2004 . School subsidies for the poor: evaluating the Mexican Progresa poverty program. J. Dev. Econ. 74 : 199– 250 [Google Scholar]
  • Shadish WR , Cook TD . 2009 . The renaissance of field experimentation in evaluating interventions. Annu. Rev. Psychol. 607– 29 [Google Scholar]
  • Shadish WR , Cook TD , Campbell DT . 2002 . Experimental and Quasi-experimental Designs for Generalized Causal Inference. New York: Houghton, Mifflin and Company [Google Scholar]
  • Simpson BT , McGrimmon T , Irwin K . 2007 . Are blacks really less trusting than whites? Revisiting the race and trust question. Soc. Forces 86 : 525– 52 [Google Scholar]
  • Sniderman PM , Grob DB . 1996 . Innovations in experimental design in attitude surveys. Annu. Rev. Sociol. 22 : 377– 99 [Google Scholar]
  • Steinpreis RE , Anders KA , Ritzke D . 1999 . The impact of gender on the review of the curricula vitae of job applicants and tenure candidates: a national empirical study. Sex Roles 41 : 509– 28 [Google Scholar]
  • Stutzer A , Goette L , Zehnder M . 2011 . Active decisions and prosocial behaviour: a field experiment on blood donations. Econ. J. 121 : 476– 93 [Google Scholar]
  • Teele DL . 2014 . Reflections on the ethics of field experiments. Field Experiments and Their Critics: Essays on the Uses and Abuses of Experimentation in the Social Sciences DL Teele 115– 40 New Haven, CT: Yale Univ. Press [Google Scholar]
  • Thornton RL . 2008 . The demand for, and impact of, learning HIV status. Am. Econ. Rev. 98 : 1829– 63 [Google Scholar]
  • Tilcsik A . 2011 . Pride and prejudice: employment discrimination against openly gay men in the United States. Am. J. Sociol. 117 : 586– 626 [Google Scholar]
  • Travers J , Milgram S . 1969 . An experimental study of the small world problem. Sociometry 32 : 425– 43 [Google Scholar]
  • Turner MA , Bednarz BA , Herbig C , Lee SJ . 2003 . Discrimination in metropolitan housing markets phase 2: Asians and Pacific Islanders Tech. rep., Urban Inst., Washington, DC [Google Scholar]
  • Turner MA , Fix M , Struyk RJ . 1991 . Opportunities Denied, Opportunities Diminished: Racial Discrimination in Hiring Washington, DC: Urban Inst. Press [Google Scholar]
  • Turner MA , Ross SL , Galster GC , Yinger J . 2002 . Discrimination in metropolitan housing markets: national results from phase 1 of the Housing Discrimination Study (HDS) Tech. rep., Urban Inst Washington, DC: [Google Scholar]
  • Van Bavel JJ , Mende-Siedlecki P , Brady WJ , Reinero DA . 2016 . Contextual sensitivity in scientific reproducibility. PNAS 113 : 6454– 59 [Google Scholar]
  • Van de Rijt A , Kang SM , Restivo M , Patil A . 2014 . Field experiments of success-breeds-success dynamics. PNAS 111 : 6934– 39 [Google Scholar]
  • Van Der Merwe WG , Burns J . 2008 . What's in a name? Racial identity and altruism in post-apartheid South Africa. South Afr. J. Econ. 76 : 266– 75 [Google Scholar]
  • Vermeersch C , Kremer M . 2005 . School Meals, Educational Achievement, and School Competition: Evidence from a Randomized Evaluation. New York: World Bank [Google Scholar]
  • Volpp KG , Troxel AB , Pauly MV , Glick HA , Puig A . et al. 2009 . A randomized, controlled trial of financial incentives for smoking cessation. N. Engl. J. Med. 360 : 699– 709 [Google Scholar]
  • Whitt S , Wilson RK . 2007 . The dictator game, fairness and ethnicity in postwar Bosnia. Am. J. Political Sci. 51 : 655– 68 [Google Scholar]
  • Wienk RE , Reid CE , Simonson JC , Eggers FJ . 1979 . Measuring racial discrimination in American housing markets: the housing market practices survey. Tech. Rep. HUD-PDR-444(2), Dep. Hous. Urban Dev Washington, DC: [Google Scholar]
  • Williams WM , Ceci SJ . 2015 . National hiring experiments reveal 2:1 faculty preference for women on STEM tenure track. PNAS 112 : 5360– 65 [Google Scholar]
  • Yamagishi T . 2011 . Trust: The Evolutionary Game of Mind and Society New York: Springer [Google Scholar]
  • Yamagishi T , Cook KS , Watabe M . 1998 . Uncertainty, trust, and commitment formation in the United States and Japan. Am. J. Sociol. 104 : 165– 94 [Google Scholar]
  • Zeisel H . 1982 . Disagreement over the evaluation of a controlled experiment. Am. J. Sociol. 88 : 378– 89 [Google Scholar]


National Academies Press: OpenBook

Implementing Randomized Field Trials in Education: Report of a Workshop (2004)

Chapter 1: What Is a Randomized Field Trial?


People behave in widely varying ways, due to many different causes, including their own individual volition (conscious choices). Social scientists often seek to understand whether or not a specific intervention may have an influence on human behavior or performance. For example, a researcher might want to examine the effect of a driver safety course on teenage automobile accidents or the effect of a new reading program on student achievement. But there are many forces that might cause a change in driving or reading skills, so how can the investigator be confident that it was the intervention that made the difference? An effective way to isolate the effect of a specific factor on human behavior and performance is to conduct a randomized field trial, which is a research method used to estimate the effect of an intervention on a particular outcome of interest. As a first step, investigators hypothesize that a particular intervention or "treatment" will cause a change in behavior. Then they seek to test the hypothesis by comparing the average outcome for individuals in the group who were randomly assigned to receive this intervention with the average outcome for individuals in the group who do not. This method helps social scientists to attribute changes in the outcome of interest (e.g., reading achievement) to the specific intervention (e.g., the reading program), rather than to the many other possible causes of human behavior and performance.

MAJOR FEATURES

In this section, we sketch the defining features of randomized field trials. In particular, we focus on the two key concepts of randomization and control and then briefly situate randomized field trials within the broader context of establishing cause-and-effect relationships.

A research design is randomized when individuals (or schools or other units of study) are put into an "experimental" group (which receives the intervention) or a "control" group (which does not) on the basis of a random process like the toss of a coin.[1][2] The power of this random assignment is that, on average, the two groups that result are initially the same, differing only in terms of the intervention.[3] This allows researchers to more confidently attribute differences they observe between the two groups to the intervention, rather than to the myriad other factors that influence human behavior and performance. As in any comparative study, researchers must be careful to observe and account for any other confounding variables that could differentially affect the groups after randomization has taken place. That is, even though randomization creates (statistically) equivalent groups at the outset, once the intervention is under way, other events or programs could take place in one group and not the other, undermining any attempt to isolate the effect of the intervention.

Randomized field trials are also controlled; that is, the investigator controls the process by which individuals (or other entities of study) are assigned to receive the intervention of interest. If the assignment of individuals or entities is outside the investigator's control, then it is generally much more difficult to attribute observed outcomes to the intervention being studied. For example, if teachers assigned some students to experience a novel teaching method and some to a comparison group that did not experience it based on their judgment of which students should experience the method, then other factors (such as student aptitude) may confound or obscure the specific effect of the novel teaching method on student learning outcomes.[4]

Thus, randomization and control are the foundation of a systematic and rigorous process that enables researchers estimating the effect of an intervention to be more confident in the internal validity of their results; that is, that differences in outcomes can be attributed to the presence or absence of the intervention, rather than to some other factor. External validity, the extent to which findings of effectiveness (or lack of effectiveness) hold in other times, places, and populations, can be established only when the intervention has been subjected to rigorous study across a variety of settings.

The ultimate aim of randomized field trials is to help establish cause-and-effect relationships. They cannot, however, uncover all of the multiple causes that may affect human behavior. Instead, randomized field trials are designed to isolate the effect of one or more possible treatments that may or may not be the cause(s) of an observed behavioral outcome (such as an increase in student test scores) (Campbell, 1957). Furthermore, a single study, no matter how strong the design, is rarely sufficient to establish causation. Indeed, establishing a causal relationship is a matter of some complexity. In short, it requires that a coherent theory predict the specific relationship among the program, outcome, and context and that the results from several studies in varying circumstances are consistent with that prediction.

A few final clarifications about terminology are in order. Some observers consider the term "randomized field trial" to be limited only to very large medical studies or studies conducted by pharmaceutical companies when testing the safety and efficacy of new drugs. Randomized designs, however, can be part of any research in any field aimed at estimating the effect of an intervention, regardless of the size of the study. In this report, we use the term "randomized field trial" to refer to studies that test the effectiveness of social interventions comparing experimental and control groups that have been created through random assignment. Although most of the workshop discussions focused on large-scale randomized field trials, the key elements for education research do not involve the size of the study, but the focus on questions of causation, use of randomization, and the construction of control groups that do not receive the intervention of interest. Indeed, even small "pilot" studies can use randomization and control groups to determine the feasibility of scaling an intervention.

CURRENT DEBATES AND TRENDS

At the workshop, University of Pennsylvania professor Robert Boruch described how randomized field trials have been used in a range of fields over time. Since World War II, he explained, randomized field trials have been used to test the effectiveness of the Salk polio vaccine and the antibiotic streptomycin, and these designs are now considered the "gold standard" for testing the effects of different interventions in many fields. Boruch went on to describe the growing use of randomized field trials to evaluate social programs since the 1970s (Boruch, de Moya, and Snyder, 2002) and noted that the World Bank, the government of the United Kingdom, the Campbell Collaboration, and the Rockefeller Foundation all held conferences promoting the use of randomized field trials during 2002 and 2003.

Trends in other fields notwithstanding, scholars of education have long debated the utility of this design in education research. Those who question its usefulness frequently argue that the model of causation that underlies these designs is too simplistic to capture the complexity of teaching and learning in diverse educational settings (e.g., Cronbach et al., 1980; Bruner, 1996; Willinsky, 2001; Berliner, 2002). Others, in contrast, are enthusiastic about using randomized field trials for addressing causal questions in education, emphasizing the unique ability of the design to isolate the impact of interventions on a specified outcome in an unbiased fashion (e.g., Cook and Payne, 2002; Mosteller and Boruch, 2002; Slavin, 2002).

In the past five years, as calls for evidence-based education have become common, these debates have intensified and expanded beyond academic circles to include policy makers and practitioners. Most visibly, the No Child Left Behind Act, passed by Congress in 2001 and signed by the President in 2002, includes many references to "scientifically based" educational programs and services. The law defines scientifically based research as including research that "is evaluated using experimental or quasi-experimental designs in which individuals, entities, programs or activities are assigned to different conditions and with appropriate controls to evaluate the effects of the condition of interest, with a preference for random-assignment experiments, or other designs to the extent that those designs contain within-condition or across-condition controls."

Furthermore, in its strategic plan for 2002-2007, the U.S. Department of Education has established as its chief goal to "create a culture of achievement" by, among other steps, encouraging the use of "scientifically based methods in federal education programs." The strategic plan also aims to "transform education into an evidence-based field" (U.S. Department of Education, 2002, pp. 14-15). The Institute of Education Sciences, the Department of Education's primary research arm, has established the What Works Clearinghouse to help reach these goals by identifying "interventions or approaches in education that have a demonstrated beneficial causal relationship to important student outcomes" (What Works Clearinghouse [2003], http://www.w-w-c.org/july2003.html). The expert technical advisory group guiding the clearinghouse has established quality standards to review available research on such critical education problems as improving early reading and reducing high school dropout rates. These standards place high priority on randomized field trials, which are seen as "among the most appropriate research designs for identifying the impact or effect of an educational program or practice" (What Works Clearinghouse [2003]). They also acknowledge that there are circumstances in which they are not feasible, suggesting that quasi-experiments (which are comparative studies that attempt to isolate the effect of an intervention by means other than randomization) may be useful under such circumstances.[5]

The National Research Council report Scientific Research in Education (National Research Council, 2002) was designed to help clarify the nature of scientific inquiry in education in this rapidly changing policy context. That report links design to the research question and, for addressing causal questions (i.e., "what works") about specified outcomes, highlights randomized field trials as the most appropriate research designs when they are feasible and ethical. This report summarizes a workshop in which participants addressed the question: When randomized field trials are conducted in social settings like schools and school districts, how can they be implemented and what procedures should be used in implementation?

NOTES

1. A control group is a comparison group in a randomized field trial that acts as a contrast to the group receiving the intervention of interest. In randomized field trials involving humans, research participants in the control group typically either continue to receive existing services or receive a different intervention.

2. Tossing a coin is a useful way of explaining the situation in which the participants have a 50-50 chance of being assigned to either of two groups: the experimental or the control group. Randomized field trials can have more than two groups; as long as the assignment process is conducted on the basis of a statistical process that has known probabilities (0.5 or otherwise), the groups will be balanced on observable and unobservable characteristics.

3. It is logically possible that differences between the groups may still be due to idiosyncratic differences between individuals assigned to receive the intervention or to be part of the control group. However, with randomization, the chances of this occurring (a) can be explicitly calculated and (b) can be made very small, typically by a straightforward manipulation like increasing the number of individuals assigned to each group.

4. In some cases, an investigator may conduct a randomized field trial when an intervention is allocated to individuals based on a random lottery. As discussed in Chapter 3, some school districts have used randomized lotteries to allocate school vouchers, in order to equitably distribute scarce resources when demand exceeds available funding for vouchers. In these cases, the investigator typically does not directly control the random assignment process, but as long as the process is truly random, the statistically equivalent groups that result isolate the relationship between group membership (treatment or control) and outcome from confounding influences, and the essential features of a randomized field trial are retained.

5. In a quasi-experimental study, researchers may compare naturally existing groups that appear similar except for the intervention being studied. In this research design, investigators often use statistical techniques to attempt to adjust for known confounding variables that are associated with both the intervention and the outcome of interest, thus invoking additional assumptions about the causal effects of the intervention. While these statistical techniques can address known differences between study groups, they may inadequately adjust for unknown confounding variables. The major drawback of quasi-experimental designs is the possibility that the groups are systematically different (a problem known as "selection bias"), and thus investigators may be less confident about conclusions reached using these methods (National Research Council, 2002, p. 113). In contrast, randomization theoretically creates groups that are not systematically influenced by both known and unknown confounding variables.
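To make the logic of the chapter excerpt above concrete, here is a minimal, illustrative simulation sketch in Python. It is not code from the workshop report, and every number in it (the 1,000 students, the +3-point program effect) is invented for illustration. It randomly assigns simulated students to a new reading program or a control condition, estimates the program's effect as the difference in average test scores, and checks that baseline ability is similar in the two randomly formed groups.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical population of 1,000 students with baseline reading ability.
n = 1000
baseline = rng.normal(loc=50, scale=10, size=n)

# Random assignment: each student has a 50-50 chance (a "coin toss")
# of receiving the new reading program (treated = 1) or not (treated = 0).
treated = rng.integers(0, 2, size=n)

# Simulated outcomes: the assumed true effect of the program is +3 points.
true_effect = 3.0
scores = baseline + true_effect * treated + rng.normal(scale=5, size=n)

# The experimental estimate is the difference in average outcomes
# between the treatment and control groups.
estimate = scores[treated == 1].mean() - scores[treated == 0].mean()

# Because assignment was random, baseline ability should be similar in the
# two groups, so the difference in means isolates the program's effect.
balance_gap = baseline[treated == 1].mean() - baseline[treated == 0].mean()

print(f"Estimated effect of the program: {estimate:.2f} points")
print(f"Baseline difference between groups: {balance_gap:.2f} points")
```

Rerunning this sketch with different random seeds shows the point made in the chapter: the baseline gap hovers around zero while the estimated effect stays near the true value, which is exactly what random assignment buys the investigator.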

The central idea of evidence-based education, that education policy and practice ought to be fashioned based on what is known from rigorous research, offers a compelling way to approach reform efforts. Recent federal trends reflect a growing enthusiasm for such change. Most visibly, the 2002 No Child Left Behind Act requires that "scientifically based [education] research" drive the use of federal education funds at the state and local levels. This emphasis is also reflected in a number of government and nongovernment initiatives across the country. As consensus builds around the goals of evidence-based education, consideration of what it will take to make it a reality becomes the crucial next step. In this context, the Center for Education of the National Research Council (NRC) has undertaken a series of activities to address issues related to the quality of scientific education research. In 2002, the NRC released Scientific Research in Education (National Research Council, 2002), a report designed to articulate the nature of scientific education research and to guide efforts aimed at improving its quality. Building on this work, the Committee on Research in Education was convened to advance an improved understanding of a scientific approach to addressing education problems; to engage the field of education research in action-oriented dialogue about how to further the accumulation of scientific knowledge; and to coordinate, support, and promote cross-fertilization among NRC efforts in education research. The main locus of activity undertaken to meet these objectives was a year-long series of workshops. This report is a summary of the third workshop in the series, on the implementation and implications of randomized field trials in education.


A Refresher on Randomized Controlled Experiments

by Amy Gallo

In order to make smart decisions at work, we need data. Where that data comes from and how we analyze it depends on a lot of factors: for example, what we're trying to do with the results, how accurate we need the findings to be, and how much of a budget we have. There is a spectrum of experiments that managers can run, from quick, informal ones to pilot studies, field experiments, and lab research. One of the more structured is the randomized controlled experiment.



The Abdul Latif Jameel Poverty Action Lab (J-PAL) is a global research center working to reduce poverty by ensuring that policy is informed by scientific evidence. Anchored by a network of more than 1,000 researchers at universities around the world, J-PAL conducts randomized impact evaluations to answer critical questions in the fight against poverty.


Handbook of Field Experiments

The last 15 years have seen an explosion in the number, scope, quality, and creativity of field experiments. To take stock of this remarkable progress, we were invited to edit a Handbook of Field Experiments, published by Elsevier. We were fortunate to assemble a volume of wonderful papers by the best experts in the field. Some chapters are more methodological, while others focus on results. All of them provide thoughtful reflections on the advances and issues in the field, useful research tips, and insights into what the next steps need to be, all of which should be very useful for graduate students. Taken together, these papers offer an incredibly rich overview of the state of the literature. This page collects the working paper versions of the chapters and will link to the final versions as they become available. We hope you enjoy it.

—Abhijit Banerjee and Esther Duflo

Introduction

An Introduction to the "Handbook of Field Experiments" Abhijit Banerjee and Esther Duflo

Many (though by no means all) of the questions that economists and policymakers ask themselves are causal in nature: What would be the impact of adding computers in classrooms? What is the price elasticity of demand for preventive health products? Would increasing interest rates lead to an increase in default rates? Decades ago, the statistician Fisher (Fisher, 1925) proposed a method to answer such causal questions: Randomized Controlled Trials (RCTs). In an RCT, the assignment of different units to different treatment groups is chosen randomly. This ensures that no unobservable characteristics of the units are reflected in the assignment, and hence that any difference between treatment and control units reflects the impact of the treatment. While the idea is simple, the implementation in the field can be more involved, and it took some time before randomization was considered to be a practical tool for answering questions in economics.
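The point about unobservable characteristics can be illustrated with a short, purely hypothetical sketch (this is not code from the Handbook; the "motivation" variable and all numbers are invented). It contrasts a naive comparison under self-selection, where more motivated units take up the treatment, with the difference in means under random assignment.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Unobserved characteristic (e.g., motivation) that raises outcomes on its own.
motivation = rng.normal(size=n)
true_effect = 1.0

def outcome(treated):
    """Simulated outcome: treatment effect plus the unobserved characteristic plus noise."""
    return true_effect * treated + 2.0 * motivation + rng.normal(size=n)

# (a) Self-selection: more motivated units are more likely to take the treatment,
# so treatment status carries information about the unobservable.
take_up = (motivation + rng.normal(size=n) > 0).astype(int)
y_selected = outcome(take_up)
naive = y_selected[take_up == 1].mean() - y_selected[take_up == 0].mean()

# (b) Random assignment: treatment status is independent of motivation.
assigned = rng.integers(0, 2, size=n)
y_rct = outcome(assigned)
experimental = y_rct[assigned == 1].mean() - y_rct[assigned == 0].mean()

print(f"Naive comparison under self-selection: {naive:.2f}")        # well above the true effect of 1.0
print(f"Difference in means under random assignment: {experimental:.2f}")  # close to 1.0
```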

Some Historical Background

The Politics and Practice of Social Experiments: Seeds of a Revolution Judy Gueron

Between 1970 and the early 2000s, there was a revolution in support for the use of randomized experiments to evaluate social programs. Focusing on the welfare reform studies that helped to speed that transformation in the United States, this chapter describes the major challenges to randomized controlled trials (RCTs), how they emerged and were overcome, and how initial conclusions about conditions necessary to success — strong financial incentives, tight operational control, and small scale — proved to be wrong. The final section discusses lessons from this experience for other fields.

Methodology and Practice of RCTs

The Econometrics of Randomized Experiments Susan Athey and  Guido Imbens

Randomized experiments have a long tradition in agricultural and biomedical settings. In economics they have a much shorter history. Although there have been notable experiments over the years, such as the RAND health care experiment (Manning, Newhouse, Duan, Keeler and Leibowitz, 1987; see the general discussion in Rothstein and von Wachter, 2016) and the Negative Income Tax experiments (e.g., Robins, 1985), it is only recently that there has been a large number of randomized experiments in economics, and development economics in particular. See Duflo, Glennerster, and Kremer (2006) for a survey. In this chapter we discuss some of the statistical methods that are important for the analysis and design of randomized experiments. A major theme of the chapter is the focus on statistical methods directly justified by randomization, in the spirit of Freedman, who wrote: "Experiments should be analyzed as experiments, not as observational studies. A simple comparison of rates might be just the right tool, with little value added by 'sophisticated' models" (Freedman, 2006, p. 691). We draw from a variety of literatures. This includes the statistical literature on the analysis and design of experiments, e.g., Wu and Hamada (2009), Cox and Reid (2000), Altman (1991), Cook and DeMets (2008), Kempthorne (1952, 1955), Cochran and Cox (1957), Davies (1954), and Hinkelman and Kempthorne (2005, 2008). We also draw on the literature on causal inference, both in experimental and observational settings, e.g., Rosenbaum (1995, 2002, 2009), Rubin (2006), Cox (1992), Morgan and Winship (2007), Morton and Williams (2010), Lee (2005), and Imbens and Rubin (2015). In the economics literature we build on recent guides to practice in randomized experiments in development economics, e.g., Duflo, Glennerster, and Kremer (2006), Glennerster (2016), and Glennerster and Takavarasha (2013), as well as the general empirical micro literature (Angrist and Pischke, 2008).
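In the spirit of "analyzing experiments as experiments," the snippet below is a small, illustrative randomization (permutation) test; it is not code from the chapter, and the toy outcome data are made up. The p-value comes from re-drawing the random assignment many times under the sharp null hypothesis of no treatment effect, so the inference is justified by the randomization itself rather than by a model.

```python
import numpy as np

def randomization_test(outcomes, treated, n_draws=10_000, seed=0):
    """Fisher-style randomization test for the sharp null of no effect.

    The observed difference in means is compared with its distribution
    over many hypothetical re-draws of the random assignment.
    """
    rng = np.random.default_rng(seed)
    outcomes = np.asarray(outcomes, dtype=float)
    treated = np.asarray(treated, dtype=int)

    def diff_in_means(assign):
        return outcomes[assign == 1].mean() - outcomes[assign == 0].mean()

    observed = diff_in_means(treated)
    draws = np.empty(n_draws)
    for i in range(n_draws):
        permuted = rng.permutation(treated)  # re-randomize, holding group sizes fixed
        draws[i] = diff_in_means(permuted)

    p_value = np.mean(np.abs(draws) >= abs(observed))
    return observed, p_value

# Toy usage with made-up data: 6 treated and 6 control units.
y = [12, 15, 14, 18, 17, 16, 11, 10, 13, 12, 11, 12]
d = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
est, p = randomization_test(y, d)
print(f"difference in means = {est:.2f}, randomization p-value = {p:.3f}")
```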

Decision Theoretic Approaches to Experiment Design and External Validity Abhijit Banerjee, Sylvain Chassang,  and Erik Snowberg

A modern, decision-theoretic framework can help clarify important practical questions of experimental design. Building on our recent work, this chapter begins by summarizing our framework for understanding the goals of experimenters, and applying this to re-randomization.  We then use this framework to shed light on questions related to experimental registries, pre-analysis plans, and most importantly, external validity. Our framework implies that even when large samples can be collected, external decisionmaking remains inherently subjective. We embrace this conclusion, and argue that in order to improve external validity, experimental research needs to create a space for structured speculation.
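Re-randomization itself is mechanically simple. The sketch below is only an illustration of that mechanical procedure, not the authors' decision-theoretic framework: it redraws the assignment until the largest standardized difference in baseline covariate means falls below an arbitrary 0.1 cutoff, with covariates and sample sizes invented for the example.

```python
import numpy as np

def rerandomize(covariates, n_treated, threshold=0.1, max_tries=10_000, seed=0):
    """Draw random assignments until the groups are acceptably balanced.

    Balance is measured as the largest absolute standardized difference in
    covariate means between treatment and control; the 0.1 cutoff is an
    arbitrary illustrative choice.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(covariates, dtype=float)
    n = x.shape[0]

    for _ in range(max_tries):
        assign = np.zeros(n, dtype=int)
        assign[rng.choice(n, size=n_treated, replace=False)] = 1
        gap = x[assign == 1].mean(axis=0) - x[assign == 0].mean(axis=0)
        std_diff = np.abs(gap) / x.std(axis=0)
        if std_diff.max() < threshold:
            return assign
    raise RuntimeError("no sufficiently balanced assignment found")

# Toy usage: 100 units, two baseline covariates, half assigned to treatment.
rng = np.random.default_rng(1)
covs = rng.normal(size=(100, 2))
assignment = rerandomize(covs, n_treated=50)
print("treated units:", assignment.sum())
```

One design note: because re-randomization discards unbalanced assignments, the acceptance rule has to be taken into account at the analysis stage (for example, by using randomization inference restricted to accepted assignments), which is part of what motivates the decision-theoretic treatment in the chapter.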

The Practicalities of Running Randomized Evaluations: Partnerships, Measurement, Ethics, and Transparency Rachel Glennerster

Economists have known for a long time that randomization could help identify causal connections by solving the problem of selection bias. Chapter 1 in this book and Gueron and Rolston (2013) describe the effort in the US to move experiments out of the laboratory into the policy world in the 1960s and 1970s.  This experience was critical in proving the feasibility of field experiments, working through some of the important ethical questions involved, showing how researchers and practitioners could work together, and demonstrating that the results of field experiments were often very different from those generated by observational studies. Interestingly, there was relatively limited academic support for this first wave of field experiments (Gueron and Rolston 2013), most of which were carried out by research groups such as MDRC, Abt, and Mathematica, to evaluate US government programs, and they primarily used individual-level randomization. In contrast, a more recent wave of field experiments starting in the mid-1990s was driven by academics, initially was focused on developing countries, often worked with nongovernmental organizations, and frequently used clustered designs.

The Psychology of Construal in the Design of Field Experiments Elizabeth Levy Paluck and Eldar Shafir

Why might you be interested in this chapter? A fair assumption is that you are reading because you care about good experimental design. To create strong experimental designs that test people’s responses to an intervention, researchers typically consider the classically recognized motivations presumed to drive human behavior.  It does not take extensive psychological training to recognize that several types of motivations could affect an individual’s engagement with and honesty during your experimental paradigm. Such motivations include strategic self-presentation, suspicion, lack of trust, level of education or mastery, and simple utilitarian motives such as least effort and optimization. For example, minimizing the extent to which your findings are attributable to high levels of suspicion among participants, or to their decision to do the least amount possible, is important for increasing the generalizability and reliability of your results.

Understanding Preferences and Preference Change

Field Experiments in Markets Omar Al-Ubaydli and  John List

This is a review of the literature of field experimental studies of markets. The main results covered by the review are as follows: (1) Generally speaking, markets organize the efficient exchange of commodities; (2) There are some behavioral anomalies that impede efficient exchange; (3) Many behavioral anomalies disappear when traders are experienced.

Field Experiments on Discrimination Marianne Bertrand and Esther Duflo

This article reviews the existing field experimentation literature on the prevalence of discrimination, the consequences of such discrimination, and possible approaches to undermine it. We highlight key gaps in the literature and ripe opportunities for future field work.  Section 1 reviews the various experimental methods that have been employed to measure the prevalence of discrimination, most notably audit and correspondence studies; it also describes several other measurement tools commonly used in lab-based work that deserve greater consideration in field research. Section 2 provides an overview of the literature on the costs of being stereotyped or discriminated against, with a focus on self-expectancy effects and self-fulfilling prophecies; section 2 also discusses the thin field-based literature on the consequences of limited diversity in organizations and groups. The final section of the paper, Section 3, reviews the evidence for policies and interventions aimed at weakening discrimination, covering role model and intergroup contact effects, as well as socio-cognitive and technological de-biasing strategies.
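Audit and correspondence studies of the kind reviewed here typically come down to comparing callback rates across randomly assigned applicant profiles. The following is a self-contained illustrative sketch with fabricated data (the 10% and 7% callback rates are invented), using a normal-approximation confidence interval for the difference in proportions; it is not an analysis from any study cited in the chapter.

```python
import numpy as np

def callback_gap(callbacks_a, callbacks_b):
    """Difference in callback rates between two randomly assigned groups
    (e.g., otherwise-identical resumes sent under different names),
    with a normal-approximation 95% confidence interval."""
    a = np.asarray(callbacks_a, dtype=float)  # 1 = callback, 0 = no callback
    b = np.asarray(callbacks_b, dtype=float)
    p_a, p_b = a.mean(), b.mean()
    se = np.sqrt(p_a * (1 - p_a) / len(a) + p_b * (1 - p_b) / len(b))
    diff = p_a - p_b
    return diff, (diff - 1.96 * se, diff + 1.96 * se)

# Toy usage with fabricated outcomes: 1,000 applications per group.
rng = np.random.default_rng(2)
group_a = rng.binomial(1, 0.10, size=1000)  # hypothetical 10% callback rate
group_b = rng.binomial(1, 0.07, size=1000)  # hypothetical 7% callback rate
gap, ci = callback_gap(group_a, group_b)
print(f"callback gap = {gap:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```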

Field Experiments on Voter Mobilization: An Overview of a Burgeoning Literature Alan Gerber and Donald Green

In recent years the focus of empirical work in political science has begun to shift from description to an increasing emphasis on the credible estimation of causal effects. A key feature of this change has been the increasing prominence of experimental methods, and especially field experiments. In this chapter we review the use of field experiments to study political participation.  Although several important experiments address political phenomena other than voter participation (Bergan 2009; Butler and Broockman 2015; Butler and Nickerson 2011; Broockman 2013, 2014; Grose 2014), the literature measuring the effect of various interventions on voter turnout is the largest and most fully developed, and it provides a good illustration of how the use of field experiments in political science has proceeded. From an initial focus on the relative effects of different modes of communication, scholars began to explore how theoretical insights from social psychology and behavioral economics might be used to craft messages and how voter mobilization experiments could be employed to test the real world effects of theoretical claims. The existence of a large number of experimental turnout studies was essential, because it provided the background against which unusual and important results could be easily discerned.

Lab in the Field: Measuring Preferences in the Wild Uri Gneezy and Alex Imas

In this chapter, we discuss the “lab-in-the-field” methodology, which combines elements of both lab and field experiments in using standardized, validated paradigms from the lab in targeting relevant populations in naturalistic settings. We begin by examining how the methodology has been used to test economic models with populations of theoretical interest. Next, we outline how lab-in-the-field studies can be used to complement traditional Randomized Control Trials in collecting covariates to test theoretical predictions and explore behavioral mechanisms. We proceed to discuss how the methodology can be utilized to compare behavior across cultures and contexts, and test for the external validity of results obtained in the lab. The chapter concludes with an overview of lessons on how to use the methodology effectively.

Field Experiments in Marketing Duncan Simester

Marketing is a diverse field that draws from a rich array of disciplines and a broad assortment of empirical and theoretical methods. One of those disciplines is economics and one of the methods used to investigate economic questions is field experiments. The history of field experiments in the marketing literature is surprisingly long. Early examples include Curhan (1974) and Eskin and Baron (1977), who vary prices, newspaper advertising, and display variables in grocery stores.  This chapter reviews the recent history of field experiments in marketing by identifying papers published in the last 20 years (between 1995 and 2014). We report how the number of papers published has increased during this period, and evaluate different explanations for this increase. We then group the papers into five topics and review the papers by topic. The chapter concludes by reflecting on the design of field experiments used in marketing, and proposing topics for future research.

The Challenge of Improving Human Capital

Impacts and Determinants of Health Levels in Low-Income Countries Pascaline Dupas and Ted Miguel

Improved health in low-income countries could considerably improve wellbeing and possibly promote economic growth. The last decade has seen a surge in field experiments designed to understand the barriers that households and governments face in investing in health and how these barriers can be overcome, and to assess the impacts of subsequent health gains. This chapter first discusses the methodological pitfalls that field experiments in the health sector are particularly susceptible to, then reviews the evidence that rigorous field experiments have generated so far.  While the link from in utero and child health to later outcomes has increasingly been established, few experiments have estimated the impacts of health on contemporaneous productivity among adults, and few experiments have explored the potential for infrastructural programs to impact health outcomes. Many more studies have examined the determinants of individual health behavior, on the side of consumers as well as among providers of health products and services.

The Production of Human Capital in Developed Countries: Evidence from 196 Randomized Field Experiments Roland Fryer

Randomized field experiments designed to better understand the production of human capital have increased exponentially over the past several decades. This chapter summarizes what we have learned about various partial derivatives of the human capital production function, what important partial derivatives are left to be estimated, and what – together – our collective efforts have taught us about how to produce human capital in developed countries. The chapter concludes with a back of the envelope simulation of how much of the racial wage gap in America might be accounted for if human capital policy focused on best practices gleaned from randomized field experiments.

Field Experiments in Education in Developing Countries Karthik Muralidharan

Perhaps no field in development economics in the past decade has benefited as much from the use of experimental methods as the economics of education. The rapid growth in high-quality studies on education in developing countries (many of which use randomized experiments) is perhaps best highlighted by noting that there have been several systematic reviews of this evidence aiming to synthesize findings for research and policy in just the past three years. These include Muralidharan 2013 (focused on India), Glewwe et al. 2014 (focused on school inputs), Kremer et al. 2013, Krishnaratne et al. 2013, Conn 2014 (focused on sub-Saharan Africa), McEwan 2014, Ganimian and Murnane (2016), Evans and Popova (2015), and Glewwe and Muralidharan (2016). While these are not all restricted to experimental studies, they typically provide greater weight to evidence from randomized controlled trials (RCTs).

Designing Effective Social Programs

Social Policy: Mechanism Experiments and Policy Evaluations Bill Congdon,  Jeffrey Kling, Jens Ludwig, and Sendhil Mullainathan

Policymakers and researchers are increasingly interested in using experimental methods to inform the design of social policy. The most common approach, at least in developed countries, is to carry out large-scale randomized trials of the policies of interest, or what we call here policy evaluations. In this chapter we argue that in some circumstances the best way to generate information about the policy of interest may be to test an intervention that is different from the policy being considered, but which can shed light on one or more key mechanisms through which that policy may operate.  What we call mechanism experiments can help address the key external validity challenge that confronts all policy-oriented work in two ways. First, mechanism experiments sometimes generate more policy-relevant information per dollar of research funding than can policy evaluations, which in turn makes it more feasible to test how interventions work in different contexts. Second, mechanism experiments can also help improve our ability to forecast effects by learning more about the way in which local context moderates policy effects, or expand the set of policies for which we can forecast effects. We discuss how mechanism experiments and policy evaluations can complement one another, and provide examples from a range of social policy areas including health insurance, education, labor market policy, savings and retirement, housing, criminal justice, redistribution, and tax policy. Examples focus on the U.S. context.

Field Experiments in Developing Country Agriculture Alain de Janvry, Elisabeth Sadoulet, and Tavneet Suri

This chapter provides a review of the role of field experiments in answering research questions in agriculture that ultimately let us better understand how policy can improve productivity and farmer welfare in developing economies. We first review recent field experiments in this area, highlighting the contributions experiments have already made to this area of research. We then outline areas where experiments can further fill existing gaps in our knowledge on agriculture and how future experiments can address the specific complexities in agriculture.

The Personnel Economics of the State Frederico Finan, Ben Olken, and Rohini Pande

Governments play a central role in facilitating economic development. Yet while economists have long emphasized the importance of government quality, historically they have paid less attention to the internal workings of the state and the individuals who provide the public services. This chapter reviews a nascent but growing body of field experiments that explores the personnel economics of the state.  To place the experimental findings in context, we begin by documenting some stylized facts about how public sector employment differs from that in the private sector. In particular, we show that in most countries throughout the world, public sector employees enjoy a significant wage premium over their private sector counterparts. Moreover, this wage gap is largest among low-income countries, which tends to be precisely where governance issues are most severe. These differences in pay, together with significant information asymmetries within government organizations in low-income countries, provide a prima facie rationale for the emphasis of the recent field experiments on three aspects of the state–employee relationship: selection, incentive structures, and monitoring. We review the findings on all three dimensions and then conclude this survey with directions for future research.

Designing Social Protection Programs: Using Theory and Experimentation to Understand how to Help Combat Poverty Rema Hanna and Dean Karlan

“Anti-poverty” programs come in many varieties, ranging from multi-faceted, complex programs to simpler cash transfers. Articulating and understanding the root problem motivating government and nongovernmental organization intervention is critical for choosing among the many anti-poverty policies, or combinations thereof. Policies should differ depending on whether the underlying problem is uninsured shocks, liquidity constraints, information failures, or some combination of all of the above. Experimental designs and thoughtful data collection can help diagnose the root problems better, thus providing better predictions for which anti-poverty programs to employ in specific conditions and contexts. However, more complex theories are likewise more challenging to test, requiring larger samples, often more nuanced experimental designs, and detailed data on many aspects of household and community behavior and outcomes. We provide guidance on these design and testing issues for social protection programs, from how to target programs, to who should implement them, to whether and what conditions to require for program participation. In short, carefully designed experimental testing can help provide a stronger conceptual understanding of why programs do or do not work, thereby allowing one to make stronger policy prescriptions that further the goal of poverty reduction.

Social Experiments in the Labor Market Jesse Rothstein and  Till von Wachter

Large-scale social experiments were pioneered in labor economics, and are the basis for much of what we know about topics ranging from the effect of job training to incentives for job search to labor supply responses to taxation. Random assignment has provided a powerful solution to selection problems that bedevil non-experimental research. Nevertheless, many important questions about these topics require going beyond random assignment. This applies to questions pertaining to both internal and external validity, and includes effects on endogenously observed outcomes, such as wages and hours; spillover effects; site effects; heterogeneity in treatment effects; multiple and hidden treatments; and the mechanisms producing treatment effects. In this chapter, we review the value and limitations of randomized social experiments in the labor market, with an emphasis on these design issues and approaches to addressing them. These approaches expand the range of questions that can be answered using experiments by combining experimental variation with econometric or theoretical assumptions. We also discuss efforts to build the means of answering these types of questions into the ex ante design of experiments. Our discussion yields an overview of the expanding toolkit available to experimental researchers.


Power to the People: Evidence from a Randomized Field Experiment on Community-Based Monitoring in Uganda


Martina Björkman, Jakob Svensson, Power to the People: Evidence from a Randomized Field Experiment on Community-Based Monitoring in Uganda, The Quarterly Journal of Economics, Volume 124, Issue 2, May 2009, Pages 735–769, https://doi.org/10.1162/qjec.2009.124.2.735


This paper presents a randomized field experiment on community-based monitoring of public primary health care providers in Uganda. Through two rounds of village meetings, localized nongovernmental organizations encouraged communities to be more involved with the state of health service provision and strengthened their capacity to hold their local health providers to account for performance. A year after the intervention, treatment communities are more involved in monitoring the provider, and the health workers appear to exert higher effort to serve the community. We document large increases in utilization and improved health outcomes—reduced child mortality and increased child weight—that compare favorably to some of the more successful community-based intervention trials reported in the medical literature.


Digital Strategies for Screen Time Reduction: A Randomized Field Experiment

Affiliations: IE University, IE Tower, Madrid, Spain; Cornell Tech, New York, USA.

PMID: 36577008. DOI: 10.1089/cyber.2022.0027

Many consumers nowadays wish to reduce their smartphone usage in the hope of improving productivity and well-being. We conducted a pre-registered field experiment (N = 112) over a period of several weeks to test the effectiveness of two widely available digital strategies for screen time reduction. The effectiveness of a design friction intervention (i.e., activating grayscale mode) was compared with a goal-setting intervention (i.e., self-commitment to time limits) and a control condition (i.e., self-monitoring). The design friction intervention led to an immediate, significant reduction of objectively measured screen time compared with the control condition. Conversely, the goal-setting intervention led to a smaller and more gradual screen time reduction. In contrast to the popular belief that reducing screen time has broad benefits, we found no immediate causal effect of reducing usage on subjective well-being and academic performance.

Keywords: design friction; digital nudge; goal-setting; screen time; smartphone; time limits.
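
To make the random-assignment logic of such a three-arm trial concrete, here is a purely illustrative sketch with hypothetical data (none of the numbers below are from the study, and the condition labels are simply taken from the abstract): it assigns 112 participants to the three conditions and estimates the grayscale-versus-control contrast in daily screen time with a Welch t-test.

```python
# Illustrative sketch only, with hypothetical data (not the study's).
import random
import statistics

from scipy import stats  # assumed available in the environment

random.seed(7)
CONDITIONS = ["grayscale", "goal_setting", "control"]

participants = list(range(112))
random.shuffle(participants)
# Cycle through conditions over the shuffled list for a near-balanced split.
assignment = {pid: CONDITIONS[i % 3] for i, pid in enumerate(participants)}

# Hypothetical post-treatment daily screen time in minutes, for illustration.
BASELINE = {"grayscale": 160, "goal_setting": 185, "control": 200}
outcomes = {pid: random.gauss(BASELINE[cond], 40) for pid, cond in assignment.items()}

grayscale = [outcomes[p] for p, c in assignment.items() if c == "grayscale"]
control = [outcomes[p] for p, c in assignment.items() if c == "control"]

effect = statistics.mean(grayscale) - statistics.mean(control)
t_stat, p_value = stats.ttest_ind(grayscale, control, equal_var=False)
print(f"Grayscale minus control: {effect:.1f} minutes (t = {t_stat:.2f}, p = {p_value:.3f})")
```

Because assignment is random, a simple difference in group means is an unbiased estimate of the treatment effect; the t-test only quantifies how precisely that difference is estimated.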


A Randomized Field Experiment to Explore the Impact of Herding Cues as Catalysts for Adoption

Publication history: Received March 7, 2019; revised February 14, 2020, September 28, 2020, January 30, 2021, and May 10, 2021; accepted May 14, 2021; published online as Articles in Advance May 19, 2022; published online in issue June 1, 2022.

Authors: Yue (Katherine) Feng, Jennifer L. Claggett, Elena Karahanna, and Kar Yan Tam
Year: 2022, Volume 46, Issue 2, Pages 1135-1164
DOI: 10.25300/MISQ/2022/16141
Keywords: herding cue, herd behavior, social influence, time of adoption, randomized field experiment

Nebraska field experiments investigate biochar's impact on soil health and crop yields.

Tractors disking fields

Does applying biochar improve soil health and crop yields? This is a question some may ask when they hear about biochar. There are few data from long-term, well-designed field experiments to answer it. The few field studies in temperate regions show limited or mixed effects of biochar on soils and crop yields. In contrast, biochar has been shown to improve soils and crop yields in subtropical and tropical regions.

Biochar benefits observed in subtropical and tropical regions are often extended by popular media to temperate soils. Those benefits may or may not be observed in temperate soils because site-specific conditions, including soil type, cropping systems, crop management, scale of farming, biochar feedstock, biochar amount applied, and biochar properties, vary. More field data from temperate soils are needed to better understand the potential benefits and limitations of biochar use.

Research Questions

In 2020, we set out to answer the specific questions in Table 1. We selected three environmentally sensitive soils, including sandy, sloping, and semi-arid soils across Nebraska. We hypothesized that environmentally sensitive soils could rapidly benefit from biochar. In each soil, we measured soil health indicators and crop yields after wood biochar application with and without the addition of winter cover crops. This ongoing multi-site biochar project is funded by the USDA National Institute of Food and Agriculture Foundational Program.

Table 1. Research questions targeted.

Background: The few biochar studies in temperate regions were on nearly level, fertile, and highly productive soils, and they observed small or no biochar effects on soil properties and crop yields.
Research question: Wouldn't biochar application to degradation-prone soils, including sandy, sloping, and semi-arid soils, be more beneficial than in highly productive soils?

Background: Most studies were conducted in the greenhouse using large amounts of biochar (>20 ton/acre), and high biochar application rates may not be economical.
Research question: What is the minimum amount of biochar that can be applied to croplands to observe any potential benefits on soils and crops?

Background: Soil and crop benefits of biochar may be small or inconsistent, depending on site-specific conditions, and studies coupling biochar with other amendments are rare.
Research question: Would combining biochar with cover crops enhance biochar performance relative to biochar alone?

Where and How are the Experiments Being Conducted?

The experiment was conducted at three sites with environmentally sensitive soils (sandy, sloping and semi-arid) beginning in spring 2020 (Figure 1). The sandy and sloping sites were on-farm experiments located in eastern Nebraska near Stanton and Beaver Crossing, Nebraska, while the semi-arid site was located at the University of Nebraska-Lincoln’s High Plains Agricultural Laboratory near Sidney, Nebraska. The annual mean precipitation was 29 inches for the sandy and sloping (7-11% slope) sites in eastern Nebraska, and 18 inches for the semi-arid site in western Nebraska.  

The experiment was a split-plot arranged in a randomized complete block design with five wood biochar treatments (0, 2.81, 5.62, 11.25, and 22.5 ton/ac) as main plots (30 feet by 35 feet) and three cover treatments (no cover crop, winter rye, and a mixture of winter rye, Austrian winter pea, and radish) as split plots (15 feet by 35 feet), replicated four times. Wood biochar (83.6% carbon, with 94% of particles sized between 1 and 4 mm) was acquired from High Plains Biochar after pyrolysis at 760°C.
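
For readers who want to see how such an assignment can be generated, here is a minimal, hypothetical sketch (the plot numbering, labels, and random seed are illustrative, not the project's actual field layout): it shuffles the five biochar rates across main plots within each of four blocks, then shuffles the three cover treatments across split plots within each main plot.

```python
# Minimal sketch of a split-plot randomized complete block layout (illustrative only).
import random

BIOCHAR_RATES = [0, 2.81, 5.62, 11.25, 22.5]  # ton/ac, main-plot factor
COVER_TREATMENTS = ["no cover crop", "winter rye", "rye + pea + radish mix"]  # split-plot factor
N_BLOCKS = 4

def split_plot_layout(seed=2020):
    rng = random.Random(seed)
    layout = []
    for block in range(1, N_BLOCKS + 1):
        main_order = BIOCHAR_RATES[:]
        rng.shuffle(main_order)          # randomize main plots within each block
        for main_pos, rate in enumerate(main_order, start=1):
            sub_order = COVER_TREATMENTS[:]
            rng.shuffle(sub_order)       # randomize split plots within each main plot
            for sub_pos, cover in enumerate(sub_order, start=1):
                layout.append({
                    "block": block,
                    "main_plot": main_pos,
                    "split_plot": sub_pos,
                    "biochar_ton_ac": rate,
                    "cover": cover,
                })
    return layout

for row in split_plot_layout()[:6]:
    print(row)
```

Blocking keeps each biochar rate represented once per block, while the within-block and within-main-plot shuffles provide the randomization that the split-plot analysis relies on.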

The biochar was disked into the soil to about four inches, then the experiment was managed as no-till going forward. The crop rotation was corn-soybean for the sites in eastern Nebraska, and the winter cover crops were drilled in late October after corn or soybean harvest. At the semi-arid site, we planned to have a pea-winter wheat-pea rotation, but persistent drought did not allow successful crop establishment during the three years (2020, 2021, 2022) after biochar application.

What Have We Learned After Three Years?

Cover crop biomass graph

  • Cover crops did not grow at the semi-arid site (western Nebraska) due to persistent drought. This suggests combining cover crops with biochar may not be doable in water-limited regions, especially when precipitation is below normal.
  • Cover crops established well at the sandy and sloping sites in eastern Nebraska due to higher precipitation than in the semi-arid site (Figure 2). However, cover crop biomass was relatively low and did not exceed 0.27 tons/ac. Winter cover crops drilled in fall after corn or soybean harvest do not often produce high amounts of biomass.   
  • The synergism between wood biochar and cover crops was not significant. The relatively low production of cover crop biomass and short experiment duration may have limited biochar-cover crop synergism.
  • Soil C concentration increased, as expected, but a significant amount of C was lost during the three years likely due to C emissions, soil erosion and decomposition.
  • Biochar improved some soil properties more at the sandy and sloping sites than at the semi-arid site. The highest rate of biochar application (22.5 ton/ac) improved soil sorptivity (early stages of water infiltration) at the sandy and sloping sites in the first year, but not in the third year (Table 2).
  • At the sandy site, biochar application at high rates (>11.25 ton/ac) increased soil organic matter concentration, soil pH, and available water in the first year but not in the third year (Table 2). This suggests biochar benefits in sandy soils could be short-lived.
  • At the sloping site, biochar application at high rates increased organic matter concentration and microbial biomass after three years (Table 2).
  • Biochar did not affect nitrate leaching potential or phosphorus (P) concentration but increased potassium (K) concentration (Table 2).
  • Biochar did not increase corn and soybean yields (Table 3), but 22.5 ton/ac of biochar increased cover crop biomass production at the sloping site by 0.12 ton/ac (Figure 2).
  • Cover crop biomass did not differ between the single-species cover crop and the cover crop mixture.
Table 2. Biochar minimally impacts soil properties (zero to 6-inch depth) after three years. Means with the same letter do not differ. Data were averaged across cover crops for the sandy and sloping sites as the interaction was not significant (ns).

Columns: biochar rate (ton/ac); soil sorptivity (inch s^-1/2); available water (inch); pH; organic matter (%); nitrate (ppm); P (ppm); K (ppm); microbial biomass (nmol/g).

Sandy soil
Biochar  Sorptivity  Avail. water  pH      OM      Nitrate  P        K      Microbial biomass
0        0.06ns      0.04ns        5.07ns  1.37ns  7.28ns   85.17ns  195b   68ns
2.81     0.06        0.03          5.13    1.37    6.28     71.75    193b   67
5.62     0.06        0.06          5.15    1.29    7.47     64.50    211ab  67
11.25    0.06        0.07          5.36    1.54    8.91     67.17    240a   81
22.5     0.08        0.05          5.38    1.53    10.58    72.42    236a   73

Sloping soil
Biochar  Sorptivity  Avail. water  pH      OM      Nitrate  P        K      Microbial biomass
0        0.07ns      0.04ns        6.62ns  3.01c   21.93ns  9.00ns   254ns  118b
2.81     0.07        0.04          6.83    3.19c   17.73    12.42    271    112b
5.62     0.08        0.04          6.67    3.40bc  17.59    11.08    262    124ab
11.25    0.09        0.03          6.53    3.99b   18.60    10.50    244    121b
22.5     0.12        0.04          6.46    5.01a   19.43    13.42    266    139a

Semi-arid soil
Biochar  Sorptivity  Avail. water  pH      OM      Nitrate  P        K      Microbial biomass
0        0.03ns      0.07ns        6.68ns  3.42ns  24.80ns  33.75ns  858ns  139ns
2.81     0.04        0.07          6.80    3.20    21.10    32.25    800    119
5.62     0.04        0.06          6.88    3.20    28.70    46.50    822    109
11.25    0.04        0.07          6.73    3.30    21.13    40.25    773    118
22.5     0.04        0.08          6.98    3.45    19.25    41.00    755    105
Table 3. Biochar does not significantly increase crop yields (mean ± standard deviation, ton/ac) over three years. Crops were not harvested at the semi-arid site due to poor stands from drought. (ns = not significant)

Sandy site
Biochar (ton/ac)  Year 1 (Corn)  Year 2 (Soybean)  Year 3 (Corn)
0                 5.14±3.67ns    0.64±0.40ns       2.28±2.19ns
2.81              6.44±1.94      0.84±0.28         3.06±1.50
5.62              6.18±2.99      0.75±0.32         2.40±2.36
11.25             6.55±0.43      0.78±0.30         2.66±2.35
22.5              5.89±1.48      0.87±0.38         2.65±2.44

Sloping site
Biochar (ton/ac)  Year 1 (Soybean)  Year 2 (Corn)  Year 3 (Soybean)
0                 0.95±0.32ns       6.03±1.44ns    1.62±0.90ns
2.81              1.08±0.38         5.90±1.25      1.55±0.42
5.62              1.41±0.21         5.70±0.93      1.51±0.35
11.25             1.33±0.39         5.57±1.37      1.51±0.55
22.5              1.61±0.47         6.74±1.12      1.82±0.76
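
The "ns" labels in Tables 2 and 3 reflect statistical comparisons across biochar rates. As a rough, purely illustrative sketch of that kind of comparison (the plot-level values below are hypothetical, and a proper analysis would model the blocks and split-plot structure, for example with a mixed model), one could run a one-way ANOVA of first-year corn yield by biochar rate:

```python
from scipy import stats

# Hypothetical plot-level Year 1 corn yields (ton/ac) at the sandy site,
# four blocks per biochar rate; these are NOT the study's raw data.
yields_by_rate = {
    0:     [2.1, 8.5, 3.9, 6.1],
    2.81:  [5.0, 8.6, 4.9, 7.2],
    5.62:  [3.4, 9.5, 4.8, 7.0],
    11.25: [6.2, 7.0, 6.3, 6.7],
    22.5:  [4.6, 7.5, 5.2, 6.3],
}

# Simple one-way ANOVA across biochar rates; a complete analysis would also
# account for blocks and the split-plot error structure.
f_stat, p_value = stats.f_oneway(*yields_by_rate.values())
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
# A p-value above 0.05 would be reported as "ns" (no significant yield effect).
```

With the large plot-to-plot variability shown in Table 3, differences of this size between rate means are unlikely to be statistically distinguishable, which is consistent with the "ns" labels.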

What Is the Take-home Message?

  • Biochar can improve some soil properties in some soils, but large amounts of biochar (at least 11.25 ton/ac) are needed, which may be uneconomical for practical use.
  • Biochar benefits generally diminish over time, often within about one year of biochar application.
  • Biochar does not generally boost crop yields but may slightly increase cover crop biomass production on a site-specific basis when applied at high rates (22.5 ton/ac).
  • Growing cover crops after biochar application does not boost biochar benefits in environmentally sensitive soils in Nebraska.

Overall, the small and site-specific effects of biochar suggest that biochar use should be targeted to solve specific soil problems. We expect that biochar benefits are more likely in acidic, low organic matter, and low K soils.

Additional Reading

Amonette, J.E., H. Blanco-Canqui, C. Hassebrook, D.A. Laird, R. Lal, J. Lehmann, and D. Page-Dumroes. 2021. Integrated biochar research: A roadmap. J. Soil Water Conserv. 76:24A-29A.

Blanco-Canqui, H., C.F. Creech, A.C. Easterly, R.A. Drijber, and E.S. Jeske. 2024. Does biochar combined with cover crops improve the health and productivity of sandy, sloping, and semi-arid soils? Soil Sci. Soc. Am. J. 88:1340-1357.

Blanco-Canqui, H., C. Creech, and A.C. Easterly. 2024. Soil and crop response to biochar application in a semi-arid environment: A 5-yr assessment. Field Crops Res. 310:109340.


A Smart Travel Survey: Results of a Push-to-Smart Field Experiment in the Netherlands

The Randomized Field Experiment

In 2022, Statistics Netherlands again fielded a travel-app-assisted experiment, but this time including the regular online Travel Survey as a concurrent option. A population sample was offered the online option at different time points, and these time points were randomized across the sample. The requested tracking period, one full day or one full week, was also randomized.

In this paper, we discuss the design and outcomes of the field experiment, focusing on response and representation. Measurement data quality and the in-app behaviour of respondents are studied and reported in separate papers. Our main conclusion is that the concurrent option backfired on response and needs to be introduced differently.
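
As a minimal sketch of this kind of cross-randomization, assuming illustrative time points and labels that are not Statistics Netherlands' actual protocol, one could independently draw an invitation time point and a tracking period for each sampled person:

```python
# Illustrative two-factor randomization (not the actual survey protocol).
import random

rng = random.Random(42)
TIME_POINTS = ["at the start", "after 3 days", "after 7 days"]  # assumed options
TRACKING_PERIODS = ["one full day", "one full week"]

def assign(sample_ids):
    # Each person gets one level of each factor, drawn independently.
    return {
        pid: {
            "online_option_offered": rng.choice(TIME_POINTS),
            "tracking_period": rng.choice(TRACKING_PERIODS),
        }
        for pid in sample_ids
    }

for pid, arms in list(assign(range(1000)).items())[:5]:
    print(pid, arms)
```

Randomizing the two factors independently lets the analyst estimate the effect of each factor on response as well as their interaction.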


Gender Peer Effects in the Workplace: A Field Experiment in Indian Call Centers


Deepshikha Batheja

One Health Trust

Several theories suggest that gender integration in the workplace may have negative effects in gender-segregated societies. This paper presents the results of a randomized controlled trial on the effect of gender integration on work productivity. The study was implemented in call centers located in five Indian cities. Employees were randomized to either mixed-gender teams (30-50% female peers) or control groups of same-gender teams. For male employees, I find precisely estimated zero effects of being assigned to the gender integration treatment on both the intensive and extensive margins. Within mixed-gender teams, men with progressive gender attitudes have higher productivity than those with regressive gender attitudes. For men in treatment, gender attitudes decrease, and knowledge sharing and comfort with the opposite gender increase. For female employees, I do not find an effect of treatment on productivity, but I find a significant decrease in time spent talking and socializing with co-workers relative to control.

Keywords: firm productivity, experiment, female labor force participation, gender attitudes, gender norms, India
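
As a purely illustrative sketch of how such team-level randomization might be constructed (this is not the paper's actual assignment procedure; the team size, pool sizes, and function names are assumptions), the following forms mixed-gender treatment teams with a 30-50% female share alongside same-gender control teams from shuffled employee pools:

```python
# Hypothetical team-formation procedure for illustration only.
import random

def assign_call_center_teams(men, women, team_size=10, seed=1):
    """Form mixed-gender treatment teams (30-50% female) and same-gender
    control teams from shuffled employee pools."""
    rng = random.Random(seed)
    men, women = men[:], women[:]
    rng.shuffle(men)
    rng.shuffle(women)
    teams = []
    while True:
        n_female = rng.randint(3, 5)  # 30-50% of a 10-person team
        # Stop when a pool can no longer fill a mixed team plus a control team.
        if len(women) < n_female + team_size or len(men) < (team_size - n_female) + team_size:
            break
        mixed = [women.pop() for _ in range(n_female)]
        mixed += [men.pop() for _ in range(team_size - n_female)]
        teams.append(("mixed", mixed))
        teams.append(("same_gender_men", [men.pop() for _ in range(team_size)]))
        teams.append(("same_gender_women", [women.pop() for _ in range(team_size)]))
    return teams

men = [f"M{i}" for i in range(120)]
women = [f"W{i}" for i in range(90)]
for label, members in assign_call_center_teams(men, women)[:3]:
    print(label, len(members))
```

Because the pools are shuffled before teams are formed, which employees end up in mixed versus same-gender teams is determined by chance, and productivity outcomes can then be compared across the two team types.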

Suggested Citation: Suggested Citation


