LC: An often-cited problem in research is that replicating results is very rare, that nobody goes through the effort of checking published papers for accuracy. I’ve read some studies where they do a review of other papers and they say something like “We found errors in 20-30 percent of these.” Is that a problem, is there a way to solve it, that there’s no incentive to repeat other peoples’ experiments?
TP: It all depends on the field. For the fields I work in, essentially everything is reproducible. We work on a genetic model organism and we send our yeast strains to our friends all over the world, and if our friends want to look at those cells in a microscope they’ll see exactly what we saw. It’s really easy to reproduce the results—same in biophysics. Anytime anybody wanted to measure the rate at which actin filaments grow, I guarantee you they will grow at the rate I measured in 1986, okay? There’s no question about it.
Now, there is some question about how to do the experiment right, so there have been some variations on the original theme. I would say we have no replication problems in the fields where I work. On the other hand, there are obviously huge problems in some other fields. For example, when people using vertebrate animal models try a cancer experiment on the mice they happen to have in their lab, and someone else tries it on the mice in their lab, they might get a different result because they aren't exactly the same mice, you know?
LC: They usually use specifically bred mice though, right?
TP: Yes, but the conditions in the lab differ and there are all sorts of variables. It is way more difficult to reproduce exactly what somebody has done with a vertebrate animal model. Then there are widespread problems where some of the reagents aren't any good. Some of the antibodies people have used are just complete trash and they don't appreciate it, so much work needs to be done on quality control for some of those things.
Then there are definitely problems with people cherry-picking results. There's absolutely no doubt that that is going on, and it's really hard to control. When people publish different results in a complicated system, maybe with bad reagents and non-quantitative methodology, there are definitely some problems, but it's not terrible.
LC: I think part of the cause is that there's an incentive in science to discover something new, to always be working on a new field or a new area. There's very little incentive, in terms of publication value, the journal you'll publish in, or the prestige you'll get, for working on something someone has worked on before, which seems inherent in the publish-or-perish system.
TP: That's touching on another important point. The premium the fancy journals put on novelty is a definite hazard. The rate of retraction in the fancy journals is way higher than in the society journals.
Because people are reaching, trying to find something new, and there's a lot of pressure, especially in other parts of the world, to publish in a fancy journal and claim something important; it's horrible. That's a mess. In China and in India, they put such a huge premium on publishing in Science and Nature and so on. It distorts their whole system.
There's lots of beautiful work going on in those countries, but the pressure on people to publish in those journals is immense. Bad things can happen. It isn't universal though. In the fields where I work, we are always repeating everybody's experiments and building on them. We start by doing somebody's experiment all over again, and then we take a look at it from a different angle or make some modifications to the system. There's a lot of validation.