As mentioned in my last post, genome-wide association studies generated many positive "correlations" between genetic loci and particular disorders that could not be consistently replicated. This should have tipped researchers off that they were dealing with false positives. It didn't, though. They were determined to turn their lemons into lemonade, and they often did this with meta-analysis. The idea behind meta-analysis is that you combine the results of numerous studies, effectively increasing your sample size to get a more accurate big picture of all the results. I won't say this approach is never useful in science, but in these instances it was nothing more than an attempt to manufacture a positive result. There are two common ways to use meta-analysis to bolster negative results. The first is to focus on a single genetic polymorphism: those performing the meta-analysis would focus their attention on a particular genetic locus for a particular disorder, pooling all the published studies (worth noting that there may have been other studies, more likely with negative results, that never got published). Here is why this is going to create a false positive result:
AUTHOR BIAS! They choose one genetic locus with its correspondence to a particular disorder and combine the studies done to date, some with positive results and some with negative results, in a meta-analysis, say for a specific locus on chromosome 4 in correlation to bipolar disorder (for the sake of brevity, I'm ignoring the fact that these studies might not really be of completely similar design, which is another problem with meta-analysis). What happens here is that the people picking which locus to look at are obviously not working blind. Why did they choose this particular locus on chromosome 4 for their bipolar disorder meta-analysis rather than any of the dozens of other loci? Because they are familiar with all the studies and probably knew it would be the most likely to generate a positive result. You could eyeball all the study results and have a good idea which would come up positive in a meta-analysis. What you have done in such an instance is hand-pick the biggest false positive and put lipstick on it, pretending it is now a proven association.
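To see how badly this cherry-picking distorts things, here is a minimal simulation sketch (not from the original post; locus counts, study counts, and thresholds are all illustrative). Every locus is simulated under the null, so there is no true association anywhere, yet picking the locus with the strongest pooled result still yields a "significant" meta-analysis most of the time:

```python
# Simulation: with NO true associations at all, hand-picking the locus
# with the strongest pooled evidence still "passes" a meta-analysis far
# more often than the nominal 5% rate. All parameters are illustrative.
import math
import random

def pooled_p(study_zs):
    # Fixed-effect pooling of per-study z-scores, two-sided p-value.
    z = sum(study_zs) / math.sqrt(len(study_zs))
    return math.erfc(abs(z) / math.sqrt(2))

def best_locus_p(n_loci=20, n_studies=5, rng=random):
    # Meta-analyze every null locus, then keep only the best-looking one
    # -- exactly the non-blind selection described above.
    return min(
        pooled_p([rng.gauss(0, 1) for _ in range(n_studies)])
        for _ in range(n_loci)
    )

rng = random.Random(42)
trials = 2000
hits = sum(best_locus_p(rng=rng) < 0.05 for _ in range(trials))
print(f"Cherry-picked locus 'significant' in {hits / trials:.0%} of runs")
# A single pre-specified null locus would clear p < 0.05 only ~5% of the
# time; the hand-picked best of 20 clears it roughly 1 - 0.95**20, i.e.
# well over half the time.
```

The point is that the selection step, not the data, does the work: the analyst's familiarity with past results acts as an informal maximum over many null tests.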
The second approach for using meta-analysis to bolster negative results is to take all the different equivocal false positives, crank out a new genome-wide scan, and see which of these false positives comes out positive in the new scan. So if you had, say, 200 loci to start with, you now narrow it down to 20 or 30 that you "confirm." The problem here is that you have simply doubled down on false positives. Any time you do a new genome-wide scan, you are going to get false positives, and some of those are going to match up with the previous false positives, simulating "replication" when it is really nothing of the kind. What's lost here, and a bit ironic, is that the only valid part of such a study is the fact that you have now refuted a large number of your previous false positives.
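The arithmetic behind that fake "replication" can be sketched in a few lines (again not from the original post; the candidate count and per-locus false-positive rate are illustrative assumptions, not figures from any real scan):

```python
# Sketch: 200 candidate loci, every one of them assumed false. A fresh
# scan that flags each null locus with some per-locus false-positive
# rate will still "confirm" a batch of them by chance alone.
import random

rng = random.Random(7)
n_candidates = 200   # loci flagged by the earlier studies (all null here)
alpha = 0.10         # illustrative per-locus false-positive rate of the new scan

confirmed = sum(rng.random() < alpha for _ in range(n_candidates))
print(f"{confirmed} of {n_candidates} null loci 'replicate' by chance")
# Expected spurious "confirmations": n_candidates * alpha = 20, with the
# remaining ~180 candidates quietly refuted -- which, as noted above, is
# the only genuinely informative outcome of the exercise.
```

Looser multiple-testing thresholds or correlated markers would push the chance overlap higher still, which is consistent with whittling 200 loci down to a "confirmed" 20 or 30 without any true signal.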
When I tried to point this out, study authors seemed either not to understand what I was saying, or to feign confusion in order to dismiss my points. I'm not in their heads, so I don't know what their level of integrity was, but I believe there was at least willful ignorance going on.
In any case, as an example, here is a letter I sent to the American Journal of Psychiatry criticizing a study that used meta-analysis to support a proposed ADHD gene (I was an angry little bugger back then. Maybe a nicer attitude might have helped my cause). Here is their somewhat more polite, but misguided, reply. How's that ADHD gene working out?