ASTK15494U SEMINAR: Impact Evaluation: Estimating the Causal Effects of Policies and Programs

Volume 2017/2018
Content

The purpose of this course is to prepare the participants to design and carry out social science research estimating the causal effects of interventions, programs, and policies. Estimating causal effects for evaluations has two goals: the effect estimates should be sufficiently credible to influence decisions about the practices evaluated in a direction that benefits the target population, and the methods should be sufficiently rigorous to contribute to the scholarly literature in social science journals, including evaluation, public policy, and political science. The course will develop the participants’ understanding of the theoretical constructs that underlie causal inference and support the development and application of appropriate criteria for assessing the credibility of the effect estimates provided in specific studies in the social science research literature.

Learning Outcome

Will follow later.

Literature

Murnane, Richard J. and Willett, John B. (2011). Methods Matter: Improving Causal Inference in Educational and Social Science Research. New York: Oxford University Press.

 

Angrist, Joshua D. and Pischke, Jörn-Steffen (2008). Mostly Harmless Econometrics: An Empiricist’s Companion. Princeton: Princeton University Press.
 

Stephen L. Morgan and Christopher Winship. (2014). Counterfactuals and Causal Inference: Methods and Principles for Social Research, 2nd ed. Cambridge: Cambridge University Press.
 

William R. Shadish, Thomas D. Cook and Donald T. Campbell. (2002). Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Boston: Houghton Mifflin.

 

Holland, Paul W. (1986). Statistics and causal inference. Journal of the American Statistical Association, 81, 945-960.

Rubin, Donald B. (1986). Statistics and causal inference: Comment: Which Ifs Have Causal Answers. Journal of the American Statistical Association, 81, 961-962.

Shadish, William R. (2010). Campbell and Rubin: A primer and comparison of their approaches to causal inference in field settings. Psychological Methods, 15, 3-17.

Imai, Kosuke, King, Gary, & Stuart, Elizabeth A. (2008). Misunderstandings between experimentalists and observationalists about causal inference. Journal of the Royal Statistical Society, 171, 481–502.

Henry, Gary T. (2015). Comparison group designs. In Joseph S. Wholey, Harry P. Hatry, and Kathryn E. Newcomer (eds.) Handbook of Practical Program Evaluation, 4th Edition. San Francisco: Jossey-Bass.

Todd, Petra E., & Wolpin, Kenneth I. (2003). On the specification and estimation of the production function for cognitive achievement. The Economic Journal, 113, 3-33.

Angrist, Joshua, & Pischke, Jörn-Steffen (2010). The credibility revolution in empirical economics: How better research design is taking the con out of econometrics. Journal of Economic Perspectives, 24(2), 3-30.

Reardon, Sean F., & Raudenbush, Stephen W. (2009). Assumptions of value-added models for estimating school effects. Education Finance and Policy, 4(4), 492-519.

Henry, Gary T., Purtell, Kelly M., Bastian, Kevin C., Fortner, C. Kevin, Thompson, Charles L., Campbell, Shanyce L., and Patterson, Kristina M. (2014). The effects of teacher entry portals on student achievement. Journal of Teacher Education, 65, 7-23.

Rubin, Donald B., Stuart, Elizabeth A., & Zanutto, Elaine L. (2004). A potential outcomes view of value-added assessment in education. Journal of Educational and Behavioral Statistics, 29(1), 103-116.

Rubin, Donald B. (2008). For objective causal inference, design trumps analysis.  The Annals of Applied Statistics, 2(3), 808-840.

Lemons, C.J., Fuchs, D., Gilbert, J.K., & Fuchs, L. (2014). Evidence-based practices in a changing world: Reconsidering the counterfactual in education research. Educational Researcher, 43, 242-252.

Borman et al. (2007). Final reading outcomes of the national randomized field trial of Success for All. American Educational Research Journal, 44(3), 701-731.

Dong, N. and Maynard, R. A. (2013). PowerUp!: A tool for calculating minimum detectable effect sizes and sample size requirements for experimental and quasi-experimental designs. Journal of Research on Educational Effectiveness, 6(1), 24-67. doi: 10.1080/19345747.2012.673143.

Berk, Richard A., & Sherman, Lawrence W. (1988). An analysis of experimental design with incomplete randomization. Journal of the American Statistical Association, 83, 70-76.

Pate, Anthony M., & Hamilton, Edwin E. (1992). Formal and informal deterrents to domestic violence: The Dade County Spouse Assault Experiment. American Sociological Review, 57(5), 691-697.

Stuart, E.A. (2010). Matching Methods for Causal Inference: A review and a look forward. Statistical Science 25(1): 1-21. PMCID: PMC2943670.

Xu, D. & Jaggars, S. S. (2011). The effectiveness of distance education across Virginia's community colleges: Evidence from introductory college-level math and English courses. Educational Evaluation and Policy Analysis, 33(3), 360–377.

Henry, Gary T. and Yi, Pan. (2009). Design matters: A within-study of propensity score matching designs.  Unpublished manuscript.

Ravallion, Martin. (2000). The mystery of vanishing benefits: An introduction to impact evaluation. The World Bank Economic Review, 15(1), 115-140.

Meyer, Bruce D. (1995). Natural and quasi-experiments in economics. Journal of Business & Economic Statistics,13, 151-161.

Henry, Gary T., & Gordon, Craig S. (2003). Driving less for better air: impacts of a public information campaign. Journal of Policy Analysis and Management, 22(1), 45-63.

Dee, Thomas S., & Jacob, Brian. (2011). The impact of No Child Left Behind on student achievement. Journal of Policy Analysis and Management, 30(3), 418-446.

Zimmer, Ron, Henry, Gary T., & Kho, Adam. (2017). The role of governance and management in school turnaround policies: The case of Tennessee’s Achievement School District and iZones. Educational Evaluation and Policy Analysis.

Harris, Douglas, & Sass, Tim.  (2011). Teacher training, teacher quality, and student achievement.  Journal of Public Economics, 95(7-8), 798-812.

Taylor, Eric S., & Tyler, John H. (2012). The effect of evaluation on teacher performance. The American Economic Review, 102(7), 3628-3651.

Belasco, A.S., Rosinger, K.O., & Hearn, J.C. (2015). The test-optional movement at America’s selective liberal arts colleges: A boon for equity or something else? Educational Evaluation and Policy Analysis, 37 (2), 206-223.

Reichardt, Charles S. and Henry, Gary T. (2012). In Harris Cooper (Ed.), Handbook of Research Methods in Psychology. Washington, DC: American Psychological Association.

Gormley, W.T., Phillips, D., & Gayer, T. (2008) Preschool programs can boost school readiness. Science 320: 1723-1724.

Henry, Gary T., Fortner, C. Kevin, and Thompson, Charles L. (2010). Targeted funding for educationally disadvantaged students: A regression discontinuity estimate of the impact on high school student achievement. Educational Evaluation and Policy Analysis, 32, 183-204.

Dee, T. & Wyckoff, J. (2013). Incentives, selection, and teacher performance: Evidence from IMPACT. NBER Working Paper 19529 (downloaded from http://www.nber.org/papers/w19529).

Schochet, P., Cook, T., Deke, J., Imbens, G., Lockwood, J.R., Porter, J., Smith, J. (2010). Standards for Regression Discontinuity Designs. Retrieved from What Works Clearinghouse website: http://ies.ed.gov/ncee/wwc/pdf/wwc_rd.pdf

Imbens, G.W. & Lemieux, T. (2007). Regression discontinuity designs: A guide to practice. Journal of Econometrics.

Wing, C. and Cook, T. D. (2013), Strengthening the regression discontinuity design using additional design elements: A within-study comparison. Journal of Policy Analysis and Management, 32: 853–877.

Lipsey, M. W., et al. (2014). The prekindergarten age-cutoff regression-discontinuity design: Methodological issues and implications for application. Educational Evaluation and Policy Analysis.

Shadish, William R., Clark, M. H., & Steiner, Peter M. (2008). Can nonrandomized experiments yield accurate answers? A randomized experiment comparing random and nonrandom assignments. Journal of the American Statistical Association, 103(484), 1334-1356.

Bifulco, Robert. (2012). Can nonexperimental estimates replicate estimates based on random assignment in evaluations of school choice? A within‐study comparison. Journal of Policy Analysis and Management, 31(3), 729-751.

Cook, Thomas D., Shadish, William R., & Wong, Vivian C. (2008). Three conditions under which experiments and observational studies produce comparable causal estimates: New findings from within-study comparisons. Journal of Policy Analysis and Management, 27(4), 724-750.

Glazerman, Steven, Levy, Dan M., & Myers, David. (2003). Nonexperimental versus experimental estimates of earnings impacts. The Annals of the American Academy of Political and Social Science, 589(1), 63-93.

Frank, Kenneth A., Maroulis, Spiro J., Duong, Minh Q., & Kelcey, Benjamin M. (2013). What Would It Take to Change an Inference? Using Rubin’s Causal Model to Interpret the Robustness of Causal Inferences. Educational Evaluation and Policy Analysis 35: 437-460.

Basic education in International Relations
A variety of class formats will be used throughout the semester, including lectures, discussions, and seminars, depending on the topic and readings.
  • Class Instruction: 28 hours
  • Total: 28 hours
Credit
7,5 ECTS
Type of assessment
Written assignment
Individual written assignment
Marking scale
passed/not passed
Censorship form
No external censorship
Criteria for exam assessment

Passed/Not passed