Design and analysis of education trials

This Centre for Research in Mathematics Education seminar is presented by Dr Adetayo Kasim, Research Statistician at the Wolfson Research Institute for Health and Wellbeing, Durham University.

Dr Kasim’s abstract: Evidence-based policy is encouraged in all areas of public service, particularly in education, where the aim is to improve the educational attainment of disadvantaged children. Educational stakeholders want to know “how much of a difference an intervention has made” and whether the intervention effect is large or small, meaningful or trivial (Valentine and Cooper 2003). But what constitutes evidence of the effectiveness or efficacy of an intervention, and how such evidence is measured, remains controversial, particularly when that evidence rests on null hypothesis significance testing. The validity of scientific conclusions, including their reproducibility, depends on more than statistical methods alone. An appropriately chosen study design, a properly conducted analysis and the correct interpretation of statistical results also play key roles in ensuring that conclusions are sound and that the uncertainty surrounding them is represented properly (Wasserstein and Lazar 2016).

I will discuss some statistical challenges in the design and analysis of education trials. Most trials focus predominantly on establishing whether an intervention is superior to a comparison group, but does a lack of statistical significance mean the intervention is equivalent to the comparison group? What are the implications of noncompliance and missing data for the analysis of education trials? Is it time for adaptive designs in education trials, to ensure that the right pupils are targeted? How practical is Bayesian analysis for education trials? Lastly, do we need a new metric to improve the communication of results to education stakeholders? My discussion will focus on statistical perspectives rather than the educational context.

Free, all welcome. Please register online.
