You have been calculating an effect size for each of your CME activities, right? And now you have a database full of activities with corresponding effect sizes for, say, knowledge and competence outcomes. Sound familiar? Anyone…anyone…Bueller? Okay, for the one straggler, here’s a refresher:
- What is effect size? (link)
- How to calculate effect size (link)
- Reporting effect size (link)
- Effect size – other methodologic/statistical considerations (link)
Now that we’re all on the same page, let’s move on to the next question…what exactly is a “good” effect size? Well, you would first start with Cohen (Cohen J. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Hillsdale, NJ: Lawrence Erlbaum Associates; 1988), who proposed the following general benchmarks: 0.2 = small effect, 0.5 = medium effect, and 0.8 = large effect.
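If you’d like to see that interpretation step spelled out, here’s a minimal sketch in Python. To be clear, this isn’t anyone’s production analysis code: the function names and the sample pre/post scores are invented for illustration. It simply computes Cohen’s d from pre/post assessment scores using a pooled standard deviation and maps the result onto Cohen’s general benchmarks.

```python
# Minimal illustration: Cohen's d from hypothetical pre/post assessment scores,
# labeled with Cohen's general benchmarks (0.2 small, 0.5 medium, 0.8 large).
from statistics import mean, stdev

def cohens_d(pre_scores, post_scores):
    """Difference in means divided by the pooled standard deviation."""
    pooled_sd = (((len(pre_scores) - 1) * stdev(pre_scores) ** 2 +
                  (len(post_scores) - 1) * stdev(post_scores) ** 2) /
                 (len(pre_scores) + len(post_scores) - 2)) ** 0.5
    return (mean(post_scores) - mean(pre_scores)) / pooled_sd

def benchmark_label(d):
    """Rough mapping of an effect size onto Cohen's benchmarks."""
    if d < 0.2:
        return "negligible"
    if d < 0.5:
        return "small"
    if d < 0.8:
        return "medium"
    return "large"

# Hypothetical percent-correct scores from a pre/post knowledge assessment
pre = [45, 50, 55, 60, 40, 65, 50, 55]
post = [60, 70, 65, 75, 55, 80, 60, 70]

d = cohens_d(pre, post)
print(f"Effect size d = {d:.2f} ({benchmark_label(d)})")
```

Running this on the made-up scores above prints something like “Effect size d = 1.75 (large)”; the point is the workflow, not the numbers.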
Although effect size is relatively new to CME, more field-specific effect size data are thankfully becoming available. Starting with the recent literature (specifically, meta-analyses), the following effect sizes have been reported:
- Competence effect size (live activities) = 0.85 (Drexel et al, 2011)
- Knowledge effect size (live activities) = 0.6 (Mansouri et al, 2007)
- Knowledge effect size (eLearning) = 0.82 (Casebeer et al, 2010) and 1.0 (Cook et al, 2008)
It’s important to note that these effect sizes come from mixed measurement methods (and measurement approach influences effect size), but they are certainly more relevant to CME than Cohen’s general benchmarks. And Cohen wouldn’t take offense: refining effect size expectations through repeated measurement in a given field is exactly what he recommended.
Speaking of repeated measurement, Med-IQ has been measuring knowledge- and competence-level effect sizes across a variety of CME activities over the past four years. In a future post, we’ll be publishing our effect size results for live and enduring material formats. We’d love to hear how these results jibe with your findings.