Certainty is the cage that keeps us safe from curiosity. I've been released from the cage. I am the songbird and I am flying for the window. I know it's closed but I plan on breaking through. – Charlie Coté, Jr. (1987-2005)

Saturday, January 8, 2011

Why Skepticism Is Useful


Don't trust every study you hear about. In an article at PsychCentral, John M. Grohol, PsyD, takes issue with a piece in The New Yorker by Jonah Lehrer suggesting that the scientific method is failing us. Grohol, taking his lead from ScienceBlogs writer PZ Myers, asserts that science is not dead, and gives reasons, well established in the scientific community, for why sketchy research results make it into print. He lists seven, and I quote:
  1. Regression to the mean: As the number of data points increases, we expect the average values to regress to the true mean…and since often the initial work is done on the basis of promising early results, we expect more data to even out a fortuitously significant early outcome.
  2. The file drawer effect: Results that are not significant are hard to publish, and end up stashed away in a cabinet. However, as a result becomes established, contrary results become more interesting and publishable.
  3. Investigator bias: It’s difficult to maintain scientific dispassion. We’d all love to see our hypotheses validated, so we tend to consciously or unconsciously select results that favor our views.
  4. Commercial bias: Drug companies want to make money. They can make money off a placebo if there is some statistical support for it; there is certainly a bias towards exploiting statistical outliers for profit.
  5. Population variance: Success in a well-defined subset of the population may lead to a bit of creep: if the drug helps this group with well-defined symptoms, maybe we should try it on this other group with marginal symptoms. And it doesn’t… but those numbers will still be used in estimating its overall efficacy.
  6. Simple chance: This is a hard one to get across to people, I’ve found. But if something is significant at the p=0.05 level, that still means that 1 in 20 experiments with a completely useless drug will still exhibit a significant effect.
  7. Statistical fishing: I hate this one, and I see it all the time. The planned experiment revealed no significant results, so the data is pored over and any significant correlation is seized upon and published as if it was intended. See previous explanation. If the data set is complex enough, you’ll always find a correlation somewhere, purely by chance.
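Point 6 is easy to see for yourself with a quick simulation (my own illustrative sketch, not anything from Grohol's or Myers' articles). Below, a "useless drug" is tested over and over: both groups are drawn from the same distribution, yet a two-sided test at the 5% level still flags about 1 experiment in 20 as significant.

```python
import random, math

random.seed(42)
n, sigma, trials = 20, 1.0, 5000
se = sigma * math.sqrt(2 / n)   # standard error of the difference in group means
crit = 1.96 * se                # two-sided 5% threshold for a z-test

hits = 0
for _ in range(trials):
    # Both groups come from the SAME distribution: the "drug" does nothing.
    placebo = [random.gauss(0, sigma) for _ in range(n)]
    drug = [random.gauss(0, sigma) for _ in range(n)]
    if abs(sum(drug) / n - sum(placebo) / n) > crit:
        hits += 1

print(f"false positive rate: {hits / trials:.3f}")  # hovers around 0.05
```

Run it with different seeds and the rate stays near 5% — that is exactly what p = 0.05 promises, and why a single significant result on a useless treatment is unremarkable.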
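Point 7, statistical fishing, can be demonstrated the same way (again my own hypothetical sketch). Generate one outcome and a pile of measures that are pure noise, then go hunting: with enough comparisons, some correlations will clear the significance bar by chance alone.

```python
import random, math

random.seed(7)
n, k = 100, 40                  # 100 subjects, 40 completely unrelated measures
crit = 1.96 / math.sqrt(n)      # approximate |r| needed for p < 0.05

outcome = [random.gauss(0, 1) for _ in range(n)]

def pearson(x, y):
    # Plain Pearson correlation coefficient.
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

spurious = sum(
    1
    for _ in range(k)
    if abs(pearson(outcome, [random.gauss(0, 1) for _ in range(n)])) > crit
)
# With 40 noise measures at the 5% level, we expect about 40 * 0.05 = 2
# "significant" correlations even though nothing is related to anything.
print(f"{spurious} of {k} pure-noise measures look 'significant'")
```

Dredge a complex enough dataset and something will always correlate with something; publishing that correlation as if it had been the planned hypothesis is the sin Myers is describing.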
Conclusion: Don't let shoddy research methods give science a bad name.

By the way, I'm saddened to learn how much room for error there is in psychology studies, given the small, homogeneous samples (often on the order of n = 20) they tend to rely on. So, dear reader, be skeptical when you read about so-called scientific findings, even the ones I post here, and keep these points in mind.
