I recently participated in a meeting sponsored by the US Department of Education for project directors and evaluators awarded i3 (Investing in Innovation) grants. I am the lead evaluator on one of those 49 projects. It is clear that the overarching purpose of the i3 program is to improve educational practice (in terms of learning outcomes and quality of instruction) in America’s schools. It is also clear that the grantees are expected to have and implement very high quality research and evaluation plans to support claims about improved learning and instruction. There were many references to the What Works Clearinghouse and its standards (see http://ies.ed.gov/ncee/wwc/). If one searches for the word ‘learning’ in the category of ‘evaluation reports,’ one finds only one entry since January 1, 2009. Perhaps this is why some people refer to this site as the “Nothing Works Clearinghouse.” Entries in the Clearinghouse must meet specific standards set by the Institute of Education Sciences (IES; see http://ies.ed.gov/).
It occurs to me that there is some tension between what IES and some educational researchers would regard as clear and convincing evidence that a particular intervention (instructional approach, strategy, technology, teacher training, etc.) works well with certain groups of students and what some educational practitioners would be inclined to accept as clear and convincing evidence. The stakes are different for these two groups. IES and its researchers are spending federal dollars – often quite a lot of money – to make systemic and systematic improvements in learning and instruction. They are accountable to Congress and to a nation that wants to see a certain kind of evidence that investments have been used wisely. These people do not have to take the implications of findings back into classrooms.
On the other hand, educational practitioners do have to go into classrooms, and their primary responsibility is doing their best, given many serious constraints and limitations, to improve the learning and instruction that occurs in our schools. Teachers are the ones who will put new instructional approaches, strategies, and technologies into practice. Teachers are not trained in experimental design and advanced statistical analysis; they are trained in implementing curricula appropriate for their students. While the experimental research may show no significant difference in measured student outcomes between an interactive whiteboard and a non-interactive one, a teacher may believe that the interactive version gains and maintains the attention of students, results in a better organized lesson, or offers some other benefit that is not so easily measured. One can imagine other cases where the experimental research suggests no significant difference between X and Y but teachers believe that there are significant differences of some kind involved.
Should we simply disregard these teachers and support only the very few things that appear to have clear and convincing evidence of effectiveness as determined by IES standards? Should we expect teachers to understand the sophisticated statistical analysis that supports that kind of clear and convincing evidence? If so, then perhaps we ought to expect those making policy and funding decisions to understand the realities of teaching in a classroom for six or more hours every day. It strikes me that what is needed is research that can be practically implemented in classrooms, that has reasonable evidence of effectiveness, and that can be understood by teachers. These teachers must be properly trained and supported in implementing innovations, which means their schools and school districts must understand both the value and likely impact of an innovation and the need to properly support it.
This now comes full circle, since such innovations typically require funds, which means that parents and the community must be convinced of the value of properly supporting education and of electing officials who will provide the necessary local, state, and national support. What matters in the end is not the quality of educational research findings but the quality of professional teaching practice. I would like to see much less distance between [federally funded] educational research and [locally funded] educational practice. I would like to see research aimed at supporting teachers in achieving their widely held goals rather than research aimed at promoting the careers of researchers and program officials at federal agencies.

It is clear that conducting rigorous randomized controlled trials and quasi-experimental studies in school settings presents serious challenges for those collecting and analyzing data – much more serious than those associated with similar studies in other sectors, such as medical care or computer science (which are admittedly complex and challenging areas for research). The variations in students, teachers, schools, communities, subject areas, and more make classroom practice a very tough research area. As a result, increasingly sophisticated analytical techniques are emerging that only a few researchers understand and can implement. The worry I am trying to express is that we may be closing the door on evidence that should be considered and might prove quite practical and effective in the classroom. We may be creating a research area that is closed to all except an elite few who do not have to put findings into practice in the classroom. Is this an unfounded worry?