
2007 Rossi Award Winner - Grover J. (Russ) Whitehurst

Acceptance Remarks

(November 9, 2007)

I'm deeply honored to receive an award named after Peter Rossi and previously received by Rob Hollister and Fred Mosteller. It is always gratifying to be in good company. Peter, Rob, and Fred are very good company indeed.

Why me? I have not made academic contributions to the science of evaluation like those of Peter or the previous awardees. I'm proud of the work I did in my prior life as a researcher, but I'm receiving this award because I head an agency, the Institute of Education Sciences (IES), that has had a significant impact on the use of rigorous methods to evaluate the effectiveness of education programs and practices. A very brief and substantially abbreviated list of IES's accomplishments includes: 25 rigorous large-scale evaluations of federal education programs in the field (compared to one in the Department of Education in 2000); hundreds of active grants to support research employing randomized experiments, high-quality quasi-experiments, and sophisticated modeling of longitudinal data (compared to very few such grants in 2000); the "What Works Clearinghouse" that vets research based on clear rule-based standards vs. the mysterious consensus panel approach that existed previously; interdisciplinary doctoral training programs in the education sciences at 10 leading universities that are producing a new generation of superbly trained education researchers; and most importantly, a rising tide of findings from our investments in research and evaluation that promise to enhance education achievement substantially.

I'll take some credit for these accomplishments, but this is the work of an orchestra, not a soloist. The orchestra includes the very capable leadership team and 185 employees of IES, the contractors and hundreds of their staff who work with us, the research teams for our 522 active grants, the couple of hundred scientists who serve as our peer reviewers, and organizations such as the Association for Public Policy Analysis and Management (APPAM) that share our vision. As the conductor of this orchestra I accept the Rossi award on behalf of all those who have played their parts well in what has been a strong group performance.

While acknowledging the contributions of everyone who has volunteered to be part of IES's effort to advance evidence-based education, I have to reserve the largest measure of appreciation for the group that didn't volunteer but has perhaps contributed more and been compensated less than anyone else: my family. I've been commuting to Washington from our home on Long Island for seven years. My wife and sons have had far less of my time and attention than they deserved during this period. I thank them for their love and their sacrifice.

What have I learned in the last seven years that may be relevant for the future of rigorous and relevant evaluation in education as carried out at the Federal level?

First, Peter Rossi's iron law of evaluation is more or less right, but the implications are wrong. Peter's iron law is that "the expected value of any net impact assessment of any large scale social program is zero." In other words, the average social program doesn't work, and this will be revealed in a rigorous evaluation. But there is more variation around the mean than Peter anticipated, and that variation may have increased in the twenty years since Peter wrote about the iron law. The program that has a substantial positive impact may be rare, but it exists. The probability of finding it will be remote unless we search widely, frequently, and intelligently. In short, experiment, experiment, experiment.

Second, science operates on the logic of disconfirmation while policy operates on the logic of confirmation. Good scientists design studies that test their hypotheses, not because they think their hypotheses are wrong but because their hypotheses are strengthened to the degree they survive studies that could generate disconfirming results. Policymakers, in contrast, look for evidence that confirms their decisions. They have committed to a course of action that requires public support. That requires justification, and justification takes the form of evidence consistent with the action taken or proposed. Put another way, policymakers don't have a strong appetite for activities that may call their decisions into question. As a result, evaluations of policies and programs that are widely implemented and around which there is substantial consensus will be funded only if no one is paying attention and will be dismissed if they produce negative evidence. In short, don't spit into the wind.

Third, evaluations can be influential if they occur while policy is uncertain, programs have not been implemented, and opinions are divided. If policymakers don't know what to do but want to do something, they are quite receptive to good evidence as a basis for their decision, and the stronger the evidence the better. In short, get them when they're undecided.

Fourth, evaluations can be influential, even of widely deployed social programs, if the evaluations are designed not to disconfirm program effectiveness but to improve it. Thus the control group isn't the absence of the program but the current version of the program, while the intervention group is a planned variation that is hypothesized to improve outcomes. The threat of such an evaluation to advocates of the program is low because the results can't be used as an argument to shut down or reduce funding for the program. In short, focus on program improvement.

Fifth, policymakers have unrealistic timelines for findings from research and evaluation and low tolerance for expressions of ignorance from the research community. "Just tell me what to do and I'll do it," is a frequent refrain. If the response is, "research hasn't produced any answers to date," the reaction is that the research enterprise must be flawed if it hasn't produced solutions to important education problems. A frequent next step is for policymakers who are frustrated by the lack of direction from harder-nosed members of the research community to turn to some entity or another to spin "research-based" answers from the flimsiest of empirical threads. Policymakers who understand that multi-billion dollar annual investments in health research may take decades to generate breakthroughs will expect a couple of hundred million dollars a year of investment in education research to generate solutions in a few years, or maybe a few months. In short, generate as much of value in as short a time frame as possible, but help them understand that transformational knowledge isn't produced overnight or on the cheap.

What we're about requires a transformation in the way society carries out education decision-making. We need to become a learning society, a society that plans and invests in learning how to improve its education programs by turning to rigorous evidence when it is available, and by embedding evaluation into programs and policies that can't wait for a strong research base. The challenge of becoming a learning society involves striking a balance between the need to convince ourselves that we know enough to take action while acknowledging that the evidence upon which we are basing our decisions is incomplete and, indeed, may be wrong. Policymakers and leaders in a learning society would speak openly about the uncertainty in particular policy actions and about organizing to learn how to improve policies and practices over time. IES has made a good start towards creating a learning society in education. Thank you for acknowledging that with the Peter H. Rossi award. Moving from a good start to something much closer to a finish will be the marathon of a generation, not a sprint of a few years. Thank you for being an important part of the community that will make that happen.
