More Recent Comments

Saturday, July 11, 2015

Science and skepticism

The National Academy of Sciences (USA) formed a committee to look into scientific integrity. A summary of the report was published in the June 26th issue of Science (Alberts et al., 2015).

I'd like to highlight two paragraphs of that report.
Like all human endeavors, science is imperfect. However, as Robert Merton noted more than half a century ago "the activities of scientists are subject to rigorous policing, to a degree perhaps unparalleled in any other field of activity." As a result, as Popper argued, "science is one of the very few human activities—perhaps the only one—in which errors are systematically criticized and fairly often, in time, corrected." Instances in which scientists detect and address flaws in work constitute evidence of success, not failure, because they demonstrate the underlying protective mechanisms of science at work.
All scientists know this, but some of us still get upset when other scientists correct our mistakes. We have learned to deal with such criticism—and dish it out ourselves—because we know that's how knowledge advances. Our standards are high.

The general public doesn't get this. They think that everything that is published in the scientific literature must be correct or it wouldn't have passed peer review. They don't realize that most work has to be repeated and scrutinized before it is accepted by the scientific community. They don't understand that skepticism is an integral and important part of science.

When the scientific process of criticism and controversy is on full display in the public forum, the general public sees this as a weakness and it affects their confidence in science. Scientists, on the other hand, see this as evidence that the process is working as it should. Some groups (e.g. creationists) exploit the proper workings of science to try to convince their followers that debates among scientists mean that all of science is wrong.

Scientists have to recognize that legitimate debate and discussion are a good thing, but they also have to take steps to avoid creating controversy when it isn't necessary. The ENCODE publicity fiasco is a good example. The ENCODE Consortium created a controversy by claiming that 80% of the human genome was functional. They should have known that this extreme statement would be challenged and they should have made sure that they represented the evidence against their claim. Instead, they ignored that contrary evidence and did not cite any of the scientific literature that would have weakened their case. That was bad science, even though we all agree that the Consortium members are entitled to express an opinion (even if they are wrong). They are not entitled to abandon skepticism and present only one side of a controversial issue. That's not what scientific integrity is about.

The NAS committee was mainly concerned with fraud and with papers containing results that are not reproducible. However, some of their advice relates to papers that are not fraudulent and whose experimental results are valid.
Universities should insist that their faculties and students are schooled in the ethics of research, their publications feature neither honorific nor ghost authors, their public information offices avoid hype in publicizing findings, and suspect research is promptly and thoroughly investigated. All researchers need to realize that the best scientific practice is produced when, like Darwin, they persistently search for flaws in their arguments. Because inherent variability in biological systems makes it possible for researchers to explore different sets of conditions until the expected (and rewarded) result is obtained, the need for vigilant self-critique may be especially great in research with direct application to human disease. [my emphasis LAM]
It's all about critical thinking—something that seems to be in short supply these days.


Alberts, B., Cicerone, R.J., Fienberg, S.F., Kamb, A., McNutt, M., Nerem, R.M., Schekman, R., Shiffrin, R., Stodden, V., Suresh, S., Zuber, M.T., Pope, B.K. and Jamieson, K.H. (2015) Self-correction in science at work. Science 348: 1420-1422. [PDF]

11 comments:

Diogenes said...
This comment has been removed by the author.
Jmac said...

Larry

I don't think you are going to like my saying it, but science, like any other part of our unfortunate society, is just as corrupt. We as humans are for some reason greedy and proud. All the wars in the 20th century were linked to either $ or $$$. The second, quite remote, one was pride.

Jmac said...

Diogenes

You should develop a new format of arguments because your old one is not going to do it. It's predictable and it lacks any kind of known evidence.

Alex SL said...

I believe that the fraud issue is somewhat overblown. There have been several spectacular cases in highly competitive fields where a lot of money is involved, especially medicine, and this was then unthinkingly generalised onto other fields, going as far as sensationalist claims along the lines of "science is broken". But if we are honest, there is virtually no fraud in areas like inorganic chemistry, plant taxonomy, or particle physics, perhaps because the incentives aren't in place.

And this brings me to what I see as the much more intractable problem: hype and confirmation bias towards 'interesting' results on the part of the individual researcher. Intractable, that is, as long as the system is organised in a way that rewards papers with spectacular results published in a handful of high impact journals over practical, basic, and unspectacular work. If the surest way towards a professorship in evolutionary biology is having a Nature paper with a "paradigm-breaking" claim, then that is what people will try to achieve.

Georgi Marinov said...

Sorry, but this is really not helpful

Jonathan Badger said...

How would we *know* if there is fraud or not in uncompetitive fields? Fraud is generally discovered when competitors try to replicate a result, fail, and eventually begin to suspect the initial report. As for motivation for fraud, all scientists regardless of field have pretty much the same motivation -- promotion and/or tenure decisions are generally based on how many papers get published and the impact factor of the journals they appear in.

Alex SL said...

There are differences. Cannot speak for your field, but in mine people don't generally sit down and say, "okay, we want to replicate this now, because if we can't then that will embarrass our competitors". But what they often do is try to build on previous results, and then it shows whether they can be reproduced or not, even in areas where nobody is in direct competition for getting some breakthrough published three months before that other group in Stockholm or whatever.

Similar differences with the publications you need for jobs and tenure. If you can only get a publication for showing that a medicine works on disease XYZ in mice (but not for showing that it doesn't), then there is an incentive to manipulate. If you get a publication for describing a new species of rainforest tree from Ecuador or for testing the intermediate disturbance hypothesis in ecology, then there is pretty much no incentive, because there are actually still hundreds of undescribed rainforest tree species in Ecuador, and because you can publish evidence for the relevant hypothesis just as well as evidence against it.

(If there is an incentive for bad science in the latter case then it comes from somebody being personally biased towards one of the two possible outcomes, but that is a completely different issue than inventing data because you can only get a spectacular result published.)

steve oberski said...

Has there ever been a time when critical thinking wasn't in short supply?

Unknown said...

Alex, one thing you mentioned is one of the weaknesses of science. Not that it is a weakness of the theory (the process) itself, but a weakness in the actual practice.

In its simplest form, science proceeds by developing a hypothesis that explains something, testing this hypothesis, and then publishing the results of the testing. If the results of the testing support the hypothesis, and there are no glaring errors in the experimental design and the assumptions made, publication is pretty much guaranteed. But if the testing is inconclusive, publication is much more difficult, even though the inconclusive nature of the research is valuable information.

Alex SL said...

I can understand why it would be hard to publish something that is inconclusive, not least because the reason for that is usually inadequate sampling by the authors or something like that. But the problem is that it IS difficult to publish something that supports a well-established theory, at least in high impact journals, because that isn't interesting enough.

This is why there is an incentive to cry Darwin Was Wrong! There Is No Tree Of Life! Epigenetics! The Central Dogma Is Wrong! Junk DNA Is Wrong! Third Way! In the rainbow press of science that gives you more pull than "you know, our current understanding of evolution is apparently pretty much correct, and my data fit precisely into the existing paradigm".

Robert Byers said...

Oh no, this is wrong. They do not correct each other except in minor, already understood conclusions. The whole point of a novel hypothesis is that it's new in its insights. How can someone correct it? They can only agree or disagree with it.
Oddly enough, I just saw an interview with a Mr. Pollack about the fourth stage of water. He got rather famous for, it seems, finding a fourth stage of water.
He stressed how mainstream science must be taken on for more accurate ideas. He mentioned a Mr. Huxley, knighted, who was wrong on ideas about proteins etc.

When presumptions like evolution are a foundation, then other "scientists" don't question anybody except within the paradigm.
All criticism is STILL within paradigms, and so errors can easily go undetected.
ID/YEC today is the only one successfully correcting evolution, and so it will not last long now.
Creationism is the true critic, as it questions, from outside the paradigm, any particular topic evolution tries to make hypotheses about.