Teaching adult learners how to evaluate research evidence is an important skill in sexuality education. The term “evidence-based” can be applied to many aspects of sexuality education, including public health practice, medicine, public policy, and curriculum development. In medicine, for example, this concept means “the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients” (Sackett, Rosenberg, & Gray, 1996). The application of evidence-based research is the gold standard for sexuality education.
However, in my sexuality education doctoral program at Widener, I often hear my fellow learners moan and groan during our required research courses. Many of them say that they are teachers by profession and do not plan to conduct research. So why is it important for them to learn to evaluate the evidence? Research is essential to furthering scientific knowledge, improving current education methods, and informing public policy – all highly relevant to the work of teaching. Too often, however, people hear about new research results through newspapers, broadcast news, or magazines. They may never get the chance to look closely at the methodology and research questions to determine the validity of the study. Convincing adult learners that they should take the time to evaluate the merit of the evidence can be a challenge for educators.
What are some ways to teach this skill to sexuality students? Judging the merits of evidence is a skill that takes practice. However, there are many peer-reviewed journal articles that describe a methodical review and evaluation of the evidence on a specific topic and that can be used as teaching examples. One example is an article by Major, Appelbaum, Beckman, Dutton, Russo, and West (2009), which evaluates the evidence regarding abortion and mental health, a topic that has received much press in the sexual health landscape in recent years. The strength of the article is not just the conclusion (that there was no difference in relative risk for mental health problems between women who had an abortion and women who did not) but the way the authors painstakingly describe the various methodological problems with the published studies to explain why they did not pass muster as good evidence. They carefully review common methodological problems – such as incorrect application of conceptual frameworks, inappropriate use of comparison groups, inadequate control of risk factors, sampling bias, and use of inappropriate, non-validated measurement tools – in language that is clear, jargon-free, specific, and understandable. After reading this article, I came away with a new understanding of what to look for when evaluating published research studies.
This article would be a perfect learning tool for students to read before they practice critiquing published research on their own. Teachers can use this example, or a similar one, to help learners develop a personal checklist of themes or questions to use when evaluating research. For example, APA guidelines suggest looking at the methodology, authors, statistics, theoretical framework, and results (Driscoll, 2010). The acronym MASTR (pronounced “master”) may be a helpful reminder of these five areas. A classroom exercise where learners must come up with one or two questions for each area and then apply them to multiple articles could be an excellent way to increase their confidence in this important skill – evaluating the evidence.
Driscoll, D. L. (2010, April 21). Social Work Literature Review Guidelines. Retrieved from http://owl.english.purdue.edu/owl/resource/666/01
Major, B., Appelbaum, M., Beckman, L., Dutton, M. A., Russo, N. F., & West, C. (2009). Abortion and mental health: Evaluating the evidence. American Psychologist, 64, 863-890.
Sackett, D. L., Rosenberg, W. C., & Gray, J. A. M. (1996). Evidence based medicine: What it is and what it isn’t. BMJ, 312, 71-72.