John Ioannidis has a recent paper in Nature Reviews Neuroscience arguing that many results in neuroscience are wrong. The argument follows his previous papers on why most published results are wrong (see here and here) but emphasizes the abundance of studies with small sample sizes in neuroscience. Small samples both reduce the chances of finding true positives and increase the chances of obtaining false positives. Underpowered studies are also susceptible to what is called the “winner’s curse,” where the effect sizes of true positives are artificially amplified. My take is that any phenomenon with a small effect should be treated with caution even if it is real. If you really wanted to find what causes a given disease then you probably want to find something that is associated with all cases, not just in a small percentage of them.
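Both problems show up in a quick simulation (all numbers made up for illustration): with a true effect of 0.3 SD and only 15 subjects per group, most studies miss the effect entirely, and the studies that do reach p < 0.05 report the effect at roughly double its true size, which is the winner's curse in action:

```python
import random
import statistics

random.seed(1)
true_effect = 0.3    # small true effect in SD units (made-up for illustration)
n = 15               # per-group sample size: badly underpowered for this effect
sims = 5000

all_effects, sig_effects = [], []
for _ in range(sims):
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    treated = [random.gauss(true_effect, 1.0) for _ in range(n)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = (statistics.variance(treated) / n + statistics.variance(control) / n) ** 0.5
    all_effects.append(diff)
    if abs(diff / se) > 1.96:    # crude z-test at p < 0.05 (a t cutoff would be ~2.05)
        sig_effects.append(diff)

print(f"power ~ {len(sig_effects) / sims:.2f}")             # low: most true effects missed
print(f"mean effect, all runs:         {statistics.mean(all_effects):.2f}")
print(f"mean effect, significant runs: {statistics.mean(sig_effects):.2f}")
```

The unconditional estimates average out to the true 0.3, but conditioning on significance selects exactly the runs where noise happened to inflate the effect.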
9 thoughts on “Most of neuroscience is wrong”
I was just reading some article saying ‘now we know ADHD is genetic’, because they compared around 3000 young people without ADHD with about 300 who had it, and found that one part of the genome with ‘repeated elements’ (I forget the term) showed up in 14% of the ADHD youth versus 7% of the others.
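For what it's worth, taking those percentages at face value (roughly 42 of 300 cases vs. 210 of 3000 controls, a hypothetical reconstruction of the quoted figures), the difference itself is unlikely to be chance by a simple 2x2 chi-square test, though that says nothing about how complete the genetic story is:

```python
# Counts reconstructed from the percentages quoted above (hypothetical:
# 14% of ~300 ADHD youth vs 7% of ~3000 controls carrying the element).
a, b = 42, 258       # ADHD group: carriers, non-carriers
c, d = 210, 2790     # control group: carriers, non-carriers
n = a + b + c + d

# Chi-square statistic for a 2x2 table, no continuity correction.
chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
print(f"chi2 = {chi2:.1f}")  # ~18.9, far above the 3.84 cutoff for p < 0.05
```

Significance here concerns the reality of the difference, not its size: the variant would still be absent in 86% of cases, which is exactly the original post's caution about small effects.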
Apart from not appearing to show a complete genetic characterization of the disease, I wonder (and wouldn’t be surprised) whether one could sample an arbitrary set of DNA segments and find differences between possibly any pair of groups.
(E.g., I wonder whether, if one compared a random sample of people in jail with people out of jail on some part of their DNA, one couldn’t find a (possibly) spurious correlation between the DNA segment and social position.)
My impression is there are a ton of examples. In part it is a sampling problem (small size), but it is also a data-mining issue (e.g., Bishop Berkeley’s idea that it’s all in your mind, or F. Céline, the great French freedom fighter, er: ‘men see only what they look at, and look at only what they already have in mind’).
Anyway, even if most papers and everything else are wrong, that doesn’t mean one shouldn’t be able to design an algorithm so that the correct people nonetheless get paid in full. One just applies a gauge transformation to the value function and then everything (including the sample) is the right size, and significant too.
I don’t have the numbers for ADHD, but many cognitive disorders like autism and schizophrenia are highly heritable, which can be determined using the old-fashioned method of comparing penetrance between family members. Figuring out your gauge transformation is probably an NP-hard problem.
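The old-fashioned method can be sketched with Falconer's twin-study formula, h2 = 2(r_MZ - r_DZ); the correlations below are illustrative values, not real data:

```python
def falconer_h2(r_mz, r_dz):
    """Falconer's estimate of heritability from twin correlations,
    h2 = 2 * (r_MZ - r_DZ): identical (MZ) twins share ~100% of their
    genes and fraternal (DZ) twins ~50%, so the correlation gap,
    doubled, is attributed to that extra gene sharing."""
    return 2 * (r_mz - r_dz)

# Made-up twin correlations for some trait (not real data):
print(round(falconer_h2(0.80, 0.45), 2))  # -> 0.7
```

Note this requires no genomic data at all, only phenotype correlations within families, which is why heritability was established long before anyone could find the responsible genes.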
When done right, there is nothing wrong with small samples.
It is better to test many things quickly and cheaply, maybe repeatedly, than to do fewer but longer experiments. Once you have something promising, you can do more extensive experiments.
The problem is the curse of statistical significance. People *need* statistical significance so they manufacture it. This is what throws off the whole thing.
“If you really wanted to find what causes a given disease then you probably want to find something that is associated with all cases, not just in a small percentage of them.”
True, but suppose that taking vitamin C helped 1% of all cancer patients (note: it probably does not). Would you dismiss it? You shouldn’t dismiss it.
Many are ‘highly heritable’ (Kety, Plomin, gimme a break; I guess you’ve got Stuart Newman (NYU) and Jay Joseph, but that ain’t a vacation). What does that mean? Falconer’s formula (Bethesda, MD): h² = 2(r_MZ - r_DZ). (You can always check Ned Block or Wikipedia for this.)
It appears you don’t have the data or skills to deal with that simple question, so stick with path integrals.
Well, I’ll just go back to the minority view that P = NP. All this stuff is online.
@ishi I may not have the skills but I do have the data (although we haven’t analyzed it yet). Understanding the genetic basis of cognitive disorders is a big theme in my lab.
These were warm-up papers:
@daniel I wouldn’t outright dismiss any data but I would sure want to know what the biological mechanism was for that effect before I prescribed extra vitamin C to everyone.
I think you have the skills, but one could do a few examples like the one I mentioned. And I don’t trust the data (Plomin of the UK, Kendler and the whole UVA lab; Gottfredsman, I think, had a reasonable article on chaos in behavioral genetics).
I gather people are getting dumber (Nijenhuis). Much better papers from the Netherlands on ‘g’ use the center manifold theorem (H. Haken used it), by Molenaar or someone.
This is also funny: http://www.arxiv.org/abs/1305.3913
@ishi There are two issues here. The first is whether or not a trait is heritable. You don’t need genomic data for this, and it’s pretty well established that some traits like height are highly heritable. All cases of establishing genetic bases for diseases are done through family studies, as far as I know. The second issue is whether you can find genetic markers associated with that trait. For any finite sample, you will always find spurious correlations. Your example has been done for similar things, like finding genes for chopstick use, etc. The criteria for accepting associated markers are quite stringent, and my guess is we have more false negatives than false positives right now. Once people find a candidate, they do lots of validation studies to see if it holds water.
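The spurious-correlation point is easy to demonstrate with simulated pure-noise markers (all parameters hypothetical): at a nominal p < 0.05, about 5% of them "associate" with group membership, while a Bonferroni-style threshold, the kind of stringency used for genome-wide significance, wipes almost all of them out:

```python
import random

random.seed(7)
n_per_group = 100    # e.g., jailed vs. not-jailed samples (all made up)
n_markers = 1000     # random DNA "segments", every one of them pure noise

nominal_hits = bonferroni_hits = 0
for _ in range(n_markers):
    # Both groups have the same 30% carrier frequency: any association is spurious.
    a = sum(random.random() < 0.3 for _ in range(n_per_group))
    b = sum(random.random() < 0.3 for _ in range(n_per_group))
    p1, p2 = a / n_per_group, b / n_per_group
    pooled = (a + b) / (2 * n_per_group)
    se = (2 * pooled * (1 - pooled) / n_per_group) ** 0.5
    if se == 0:
        continue
    z = abs(p1 - p2) / se        # two-proportion z statistic
    if z > 1.96:                 # nominal p < 0.05
        nominal_hits += 1
    if z > 4.06:                 # ~Bonferroni cutoff for 1000 tests (p < 0.05/1000)
        bonferroni_hits += 1

print(f"nominal hits: {nominal_hits} of {n_markers}")    # ~5% by chance alone
print(f"after Bonferroni-style correction: {bonferroni_hits}")
```

The trade-off is the one noted above: a threshold harsh enough to kill the noise will also kill some real but weak signals, i.e., more false negatives than false positives.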
[…] explorations in neuroplasticity. Well, some people think it is all a bit over the top – see here and here. The brain has not evolved over the last decade. Evolution takes millenia even when Google […]
[…] Last night were two talks about how to make science more reproducible. As I’ve posted before, many published results are simply wrong. The very enterprising Elizabeth Iorns has started […]