Expanding the Scope of Implementation Research in Education to Inform Design

(2009)

Abstract
In this paper, we present a vision for implementation research in education that can inform all stages of program development. We present examples of implementation research performing three functions through which such research can have a greater impact on practice: (1) identifying problems of practice that can become targets of design, (2) bridging gaps between current system capacity and the ambitious visions for change reflected in curricular reforms, and (3) experimentally testing contrasting models of implementation support. These functions expand the scope of implementation research in education, since they are rarely central in sociological analyses of implementation or in current approaches to measuring and analyzing fidelity of implementation advocated by proponents of experimental research on program efficacy. In addition, by taking both curricular interventions and contexts of implementation as objects of design and study, this kind of forward-looking implementation research can inform the process of system-level change in education in ways that improve implementation of curricular interventions.

Expanding the Scope of Implementation Research in Education to Inform Design

The results of recent large-scale, experimental studies to identify effective programs and curriculum materials have been disheartening to policymakers and researchers alike. The findings from large studies of federally funded programs completed in the last five years in reading (Gamse et al., 2008), mathematics (Agodini et al., 2009), educational technology (Dynarski et al., 2007), and afterschool programming (James-Burdumy, Dynarski, & Deke, 2007) are all similar, in that all have found either very small or no positive impacts on student achievement. In response to these findings, critics have raised questions about the quality and depth with which programs were implemented (Bissell et al., 2003; Mahoney & Zigler, 2006). How can researchers conclude, these critics argue, that programs do not work if they have not been implemented well or consistently under different conditions?

Some researchers argue that answers to such questions can be developed within the context of large-scale experiments, but to date the field has reached no consensus about methods for interpreting implementation results. Researchers can and do measure implementation within experimental studies, as was done in each of the studies cited above. In addition, researchers can and do model how implementation processes are related to outcomes in the contexts of experimental study, which can help inform refinements to the design of programs (Judd & Kenny, 1981; Krull & MacKinnon, 1999; MacKinnon & Dwyer, 1993; O'Donnell, 2008). But estimates of the strength of associations between implementation and outcomes in experiments may in some cases be difficult to interpret, and deriving causal inferences from those results is tricky and may be contested (Angrist, Imbens, & Rubin, 1996; Werner, 2004). Implementation processes are endogenous to experimental studies; programs' effectiveness is inextricably intertwined with the ease of implementation, the ways that programs are enacted, and the contexts of implementation, much in the way that the effectiveness of a particular diet is bound up with how easy it is for people to follow (cf. Dansinger, Gleason, Griffith, Selker, & Schaefer, 2005).
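[Editorial aside, not from the original paper: the mediation analyses cited above (e.g., MacKinnon & Dwyer, 1993) typically formalize this idea by treating measured implementation M as a mediator between randomized treatment assignment T and outcome Y in a pair of linear models; the notation here is illustrative.]

$$M_i = \alpha_0 + a\,T_i + \varepsilon_{Mi}$$
$$Y_i = \beta_0 + c'\,T_i + b\,M_i + \varepsilon_{Yi}$$

Under these models the product ab estimates the indirect, implementation-mediated effect and c' the direct effect, with total effect c = c' + ab. Because T is randomized but M is not, the ab estimate remains associational unless further assumptions hold, which is precisely the interpretive difficulty the paragraph above describes.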
Further, if programs are difficult for teachers to implement well, and have not already been adapted to a range of contexts and shown to be reliably usable, researchers may not uncover any significant associations between implementation quality and outcomes (McDonald, 2009).

An alternative approach is to foster more research on implementation and its contexts at all stages of program development, not just at the point at which scale-up begins. In developing a new program, for example, developers could begin by investigating the contexts where they plan to try out the program, so that they have a better sense of the new capacities local schools and districts will need to develop to implement it. As programs transition from the laboratory to the classroom, or from one or two classrooms to many, implementation research can help identify gaps in program guidelines and specifications, yield a more refined sense of professional development needs, and anticipate variations in implementation that could be associated with differences in program effectiveness. Finally, at the efficacy and scale-up phases of development, analyses of how implementation mediates impacts can identify ways to strengthen programs and suggest designs for future experimental studies.

The idea that implementation research should be more integral to the design and refinement of programs, or of the infrastructures needed to support implementation, is not new (see, especially, Elmore, 1980), but few programs use implementation research in this way, and there is little accumulation of knowledge about implementation processes across programs. Strong disciplinary boundaries separate policy researchers from scholars engaged in early-stage research and development efforts, who are more focused on using such efforts to generate new insights into the science of learning. These latter scholars' own research and development efforts often entail grappling with issues of how to promote learning in real classroom environments (e.g., Barab & Luehmann, 2003), but the insights they develop from their research about implementation are only infrequently a focus of their publications. One solution is for researchers to rely on different kinds of experts for each stage of program design and development, as has been done in other fields (Sloane, 2008). But such an approach limits the possibility that implementation research could develop useful knowledge of basic processes, such as how teachers adapt programs to their local contexts, that are implicated across different stages of program development.

In this paper, we present three new functions for implementation research at different stages of program development that can inform the design of effective programs that improve student learning. These functions emphasize ways that implementation research can inform the earliest stages of program design, program refinement, and the identification of effective models for scaling up programs. By expanding the range of implementation research, we argue, implementation research can advance the science of learning as the study of linked systems of curriculum materials, learning supports for teachers and school leaders, and the organizational forms and processes needed to support enactment.

Past and Current Implementation Research in Education

Problems of program implementation have been a focus of education research for decades.
In the late 1950s, when the National Science Foundation first funded the design of instructional materials for schools, curriculum developers became frustrated by what they saw as teachers' failure to enact curricula in ways that reflected an understanding of the structure of scientific disciplines (Bruner, 1960). Later, in the 1970s, policy researchers suggested that the adaptations teachers make to curriculum materials are necessary and always occur, in order to meet the needs of students and the demands of local contexts (Berman & McLaughlin, 1975; McLaughlin, 1976). Since that time, researchers have remained concerned with the degree to which teachers' adaptations are congruent with designers' intentions, seeking to distinguish "creative transformations" of curriculum materials from "lethal mutations" in teachers' enactments (A. L. Brown & Campione, 1996; M. W. Brown & Edelson, 2001). Researchers have also investigated the extent to which poor implementation quality can diminish the strength of an intervention, making it less likely that investigators will be able to detect significant effects of programs (Cordray & Pion, 2006).

For most of that time, implementation studies were conducted principally by sociologists and political scientists in education, and their research focused on developing explanations for variability in implementation informed by theories from those disciplines. A recent example is Rowan and Miller's (2007) study of the efficacy of three different school reform models' approaches to supporting changes in teaching. The study used agency theory from sociology and political science (Eisenhardt, 1989; Emirbayer & Mische, 1998) as a lens for exploring the strategies program developers, policymakers, and other educational leaders have for resolving so-called "agency dilemmas," which derive from the fact that, in education, those who develop policies and programs do not always share goals with, but are dependent on, the teachers who act as agents in implementing them. The Rowan and Miller study identified some types of controls on teachers' implementation—essentially, different strategies for addressing agency dilemmas—as more likely than others to yield reliable patterns of implementation, singling out collegial influence (professional controls) and implementation specification and monitoring (procedural controls). This conclusion has potential, if not yet realized, implications for the design not only of programs and policies but also of mechanisms to support their implementation.

In the past decade, researchers engaged in curriculum development have become more involved in implementation research. Spurred by calls for more rigorous research to identify effective programs and the conditions of their effectiveness (National Research Council, 2002; President's Committee of Advisors on Science and Technology Panel on Educational Technology, 1997) and supported by new funding streams (e.g., NSF's Interagency Education Research Initiative and the U.S. Department of Education…
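[Editorial aside, not from the original paper: the attenuation point attributed to Cordray and Pion (2006) above can be made concrete with a standard two-group power calculation. If fidelity losses shrink a program's realized standardized effect from δ to ρδ (0 < ρ < 1), the per-group sample size needed to detect it at the same significance level α and power 1 − β grows by a factor of 1/ρ².]

$$n = \frac{2\,(z_{1-\alpha/2} + z_{1-\beta})^2}{\delta^2}
\quad\Longrightarrow\quad
n' = \frac{2\,(z_{1-\alpha/2} + z_{1-\beta})^2}{(\rho\delta)^2} = \frac{n}{\rho^2}$$

For example, a program whose realized effect is halved by weak implementation (ρ = 0.5) requires four times the sample to reach the same power, which is one reason under-implemented programs so often appear ineffective in large trials.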