1. Two or more comparison groups: a treatment group and a control group, or sometimes two or more different treatment groups (e.g., the "notorious" Zimbardo prison experiment: the prisoners and the guards).
2. The experimenter introduces variation in the independent variable before assessing change in the dependent variable.
3. Random assignment to the treatment and comparison groups. This is what allows you to equalize the effects of outside (including unknown) factors. From the point of view of establishing causality, this is the single most important advantage of the experiment as a research method, because it eliminates the possibility that the causal connections we find are really spurious.
Example: what would have happened to the Zimbardo prison experiment if research subjects had been able to choose whether they wanted to be prisoners or guards?
4. Experimental control: each run of the experiment follows the same carefully monitored script. E.g., in the Milgram obedience-to-authority experiment, the lab assistant running the experiment had just three or four scripted phrases he could use if the research subject didn't want to go on administering the shocks.
5. Pretesting and post-testing... in C/S's hypothetical coffee-drinking experiment, each group writes an essay before and after the coffee intervention is introduced.
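Random assignment (feature 3 above) is simple enough to sketch in code. This is an illustrative example only; the subject names and the 50/50 split are my assumptions, not from the text:

```python
import random

def randomly_assign(subjects, seed=None):
    """Randomly split a subject pool into treatment and control groups.

    Because assignment is random, outside factors (known and unknown)
    are equalized across groups on average, which is what lets us rule
    out spurious causal connections.
    """
    rng = random.Random(seed)
    shuffled = subjects[:]          # copy so the original pool is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical subject pool (names are made up for illustration)
subjects = [f"subject_{i}" for i in range(1, 21)]
treatment, control = randomly_assign(subjects, seed=42)
print(len(treatment), len(control))  # prints: 10 10
```

The key point is that the researcher, not the subjects, controls group membership; contrast this with the nonequivalent-control-group designs discussed below, where subjects end up in groups by self-selection or matching.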
1. Laboratory experiments: far more common, especially in psychology, but also used in sociology.
2. Field experiments... e.g., the effects of bumper stickers on the behavior of car drivers. Most evaluation research, which we'll cover next week, is a variation of field experiments.
1. Nonequivalent control group designs (aggregate matching). Subjects are not randomly assigned to treatment and comparison groups. Instead there's an effort to match groups on characteristics that the researcher thinks may be relevant. For example, what if we wanted to study the effects of large-scale civic engagement programs on colleges?
2. Before and after designs. We have a pretest and a post-test but no comparison groups. Same example as above.
Notice that as I begin to talk about quasi-experimental design, I begin to move from pure experiments into evaluation research. Sociologists/criminologists don't do a lot of lab experimentation, but evaluation research is an area of growing importance. E.g., Jeff Maahs's evaluation of the St. Louis County Drug Court. We'll spend much of next week on evaluation research.
1. Selection bias, resulting in nonequivalent comparison groups
2. Differential attrition (dropping out or even dying) resulting in nonequivalent comparison groups
3. Endogenous change, particularly a problem with before-and-after designs not involving comparison groups
a. the effects of pretesting itself (note that this is not a problem for true experiments, since all comparison groups take the pretest, if there is one)
c. regression toward the mean
4. Historical events, again a problem for before-and-after designs
5. Treatment misidentification... the placebo effect (the experimental group in an anti-racism experiment believes their racial attitudes should "improve") or the Hawthorne effect (attention itself changes behavior or attitudes, quite apart from the content of the intervention)
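Regression toward the mean (threat 3c above) is easy to see in a quick simulation. The sketch below is hypothetical: the score scale, noise levels, and "select the lowest-scoring 10%" rule are my assumptions, chosen to mimic a before-and-after design with no comparison group and no real treatment effect:

```python
import random

random.seed(1)

# Each subject has a stable true score plus random noise on each test.
# There is NO treatment effect: nothing happens between the two tests.
true_scores = [random.gauss(50, 10) for _ in range(10000)]
pretest = [t + random.gauss(0, 10) for t in true_scores]
posttest = [t + random.gauss(0, 10) for t in true_scores]

# Select the lowest-scoring 10% on the pretest (e.g., an "at-risk" group
# chosen for a remedial program).
cutoff = sorted(pretest)[len(pretest) // 10]
selected = [i for i, p in enumerate(pretest) if p <= cutoff]

pre_mean = sum(pretest[i] for i in selected) / len(selected)
post_mean = sum(posttest[i] for i in selected) / len(selected)
print(round(pre_mean, 1), round(post_mean, 1))
# The selected group scores noticeably higher on the post-test even
# though nothing happened between tests: pure regression toward the
# mean, which a before-and-after design would misread as improvement.
```

A comparison group selected the same way but given no treatment would show the same "improvement," which is exactly why comparison groups protect against this threat.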
1. Relative balance of societal and/or disciplinary benefit vs. possible harm to individuals
2. Honesty about the purposes of the experiment. Be as honest as you can, but it is generally accepted that deception is sometimes necessary. See the American Sociological Association's statement at the top of p. 130.
3. Debriefing and follow-up. Find out whether your subjects had concerns or suspicions and help them to understand the purpose and utility of your research.
Aronson: Treat your research subjects as colleagues in the scientific enterprise... Solicit and value their input.
| | No initiation | Mild initiation | Severe initiation | Totals |
|---|---|---|---|---|
| Rated group discussion dull | 80% | 60% | 40% | 60% |
| Rated mildly interesting | 20% | 30% | 40% | 30% |
| Rated very interesting | 0% | 10% | 20% | 10% |