Tuesday, May 21, 2013

"Studies have shown..."

We've all been told that "studies have shown" something at one time or another.  Sometimes, our interlocutor is kind enough to give us a citation (and sometimes they aren't).  Well, let's do a thought experiment (if you're already familiar with significance testing, feel free to skim the next paragraph).

Suppose you give 20 labs a drug and a placebo, and tell them to test one against the other in clinical trials.  But instead of actually giving them a drug and a placebo, you give them two identical placebos (originally, I was going to use a homeopathic remedy vs. a placebo, but I didn't want to get sidetracked).  Assume the labs all use large sample sizes, statistical normalization, double blinding, and various other best practices.  None of them make any mistakes (or commit outright fraud, for that matter), and they all conduct proper, well-designed experiments.  Even under these ideal conditions, one of those labs (on average, and for pedants, assuming they all use α=5%) will tell you there's a statistically significant difference between the placebo and itself.
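To put a rough number on that, here's a quick simulation sketch (my own illustration, with made-up sample sizes and a plain two-sample t-test standing in for whatever analysis a real lab would run):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_labs, n_subjects, alpha = 20, 200, 0.05

false_positives = 0
for lab in range(n_labs):
    # Both "treatments" are the same placebo: identical mean and spread.
    placebo_a = rng.normal(loc=0.0, scale=1.0, size=n_subjects)
    placebo_b = rng.normal(loc=0.0, scale=1.0, size=n_subjects)
    _, p_value = stats.ttest_ind(placebo_a, placebo_b)
    if p_value < alpha:
        false_positives += 1

print(f"{false_positives} of {n_labs} labs found a 'significant' difference")

Run it with different seeds and the count bounces around, but it averages out to about n_labs × α = 1 lab per batch of twenty, which is the whole point.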

Now, in and of itself, this is less of a big deal than it sounds, because the other nineteen identical experiments will tend to drown the one out.  But think about what makes the news.  When was the last time you read about an experiment that failed to prove anything (for pedants: the proper phrasing is "failed to reject the null hypothesis")?  They do sometimes make the news, but not often enough, and frequently only after positive results have drawn attention to the area.  Many journals do require that experiments be registered in advance, so they can at least keep an accounting of failed experiments and take that record into account in peer review, but the tendency of the news media to latch onto singular results is nonetheless disturbing.  A single result does not mean something is "proven."  It means the area merits further exploration and study.

There are, however, some exceptions to this rule.  The most important are metastudies, which take whole groups of studies into consideration.  A metastudy of 20 studies with one positive and 19 negatives would almost certainly give an overall negative result.  Of course, we're still assuming a best-case scenario, particularly that those 19 negatives are available to the metastudy's authors.  Still, meta-analysis provides welcome insight, and is a good sign of the maturity of an area of research.  If enough studies have been performed for a metastudy to be published, there's probably enough evidence for one side to be clearly right.
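For the curious, here's what that pooling can look like in the simplest case, a fixed-effect (inverse-variance) meta-analysis; the effect sizes and standard errors below are invented purely to show one spurious positive being diluted by nineteen null results:

import numpy as np
from scipy import stats

# Nineteen hypothetical studies that found nothing, plus one spurious positive.
effects = np.array([0.0] * 19 + [0.3])
std_errs = np.full(20, 0.12)

# Fixed-effect (inverse-variance) pooling.
weights = 1.0 / std_errs**2
pooled_effect = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
z = pooled_effect / pooled_se
p_value = 2 * stats.norm.sf(abs(z))

print(f"pooled effect = {pooled_effect:.3f}, p = {p_value:.3f}")

The lone positive gets swamped: the pooled estimate sits near zero, and the meta-analysis, correctly, fails to reject the null.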
