I have to admit that during my reading of Chambless and Hollon, I suppressed urges to email old friends and go eat ice cream, but succumbed to cutting my toenails. Perhaps in part because I read it at the end of the day when I was a little drained, I found this article difficult to follow and kind of a downer. I think it was a dizzying downer for me, however, because it’s a rigorous survey of the seemingly infinite number of pitfalls in something I think is infinitely important: testing whether therapies that sound good really help people. C and H are critically skeptical and highly conservative in their unpacking of therapy outcome research. They leave us thinking: wow, that’s really hard to do right. ("No one definition of sound methodology suffices for all areas of psychological treatment research.") But C and H fueled my determination to attain methodological sophistication and not repeat others’ mistakes when testing my own interventions (which I sincerely hope to do for my own dissertation). I especially love the Appendix, “Summary of Criteria for Empirically Supported Psychological Therapies” (i.e., the Five Commandments for evaluating empirical support). Good to keep close at hand.
Hunsley and DiGiulio provided my first exposure to “the well-known ‘Dodo bird effect.’” The Dodo stars in a cautionary tale straight out of Chambless and Hollon: a few spurious arguments and some badly designed meta-analyses deemed therapeutic treatments to be equivalent, and that idea really caught on. Once the statistical errors in those meta-analyses were corrected, the Dodo effect turned out to be entirely inaccurate, and cognitive and behavioral treatments reign supreme. One wonders what other fallacies permeate psychology today, fallacies without colorful names, or even names at all.