
Wednesday, February 05, 2025

Why Trust Science?

Bruce Alberts,1 Karen Hopkin, and Keith Roberts have published an essay on Why Trust Science.

In this essay, we address the question of why we can trust science—and how we can identify which scientific claims we can trust. We begin by explaining how scientists work together, as part of a larger scientific community, to generate knowledge that is reliable. We describe how the scientific process builds a consensus, and how new evidence can change the ways that scientists—and, ultimately, the rest of us—see the world. Last, but not least, we explain how, as informed citizens, we can all become “competent outsiders” who are equipped to evaluate scientific claims and are able to separate science facts from science fiction.

Most of the essay describes an idealized version of how science works, with an emphasis on collaboration and rigorous oversight. The authors claim that the work of scientists can usually be trusted because it is self-correcting.

Scientists are trained to be skeptical—even (or especially) of their own hypotheses. Good scientists operate with the knowledge that their initial ideas or models may require revision or even outright rejection. Some might even argue that a major goal of science is to eliminate erroneous notions, irreproducible results, and incorrect interpretations.

Because science advances through a rigorous community-based testing of hypotheses, it effectively corrects its own mistakes. A rigorous system of checks and balances is “baked in” to the scientific method, steering us away from misinformation and toward an increasingly accurate and reliable understanding of the world.

There may have been a time when this was an accurate description of how scientists actually behave but that time has passed in the fields that I'm familiar with. Molecular biology is rife with "erroneous notions" and "incorrect interpretations." Over the past few decades I've been documenting some of the most egregious examples such as alternative splicing, junk DNA, epigenetics, molecular evolution, the Central Dogma, the origin of life, the Three Domain hypothesis, and the number of non-coding genes.

In an ideal world, the "rigorous system of checks and balances" would have stopped scientists from making false claims about the history of science in the introductions to their papers and would have insisted that scientists address all alternative explanations of their data. It would have vocally opposed the misrepresentation of scientific discoveries in the popular press and criticized scientists who promoted false claims.

That's not happening. I wish that prominent scientists would stop trying to sell the general public on the fairytale version of science and spend more time on policing those scientists who are abusing the system. Where is the outrage about the huge amount of sloppy science that's being published today—even in the most prestigious journals?

I don't know about the rest of you, but in my field it's very rare to see any open criticism of bad science. The last example I can think of is the vocal opposition to the false ENCODE claims of 2012 that were published in Nature, but even those criticisms were ignored by the vast majority of scientists working in the field. That's not a "rigorous system of checks and balances." In most cases these days, scientists avoid the very system that was designed to keep science honest because they don't want to rock the boat. It can be dangerous to criticize your peers. Besides, most scientists know that hype and exaggeration are what get attention and grants, and they want to do it themselves whenever they get a chance.

There's a lot of bad science out there but what about the good stuff? Let's imagine that science has worked the way it is supposed to and there really is a strong consensus among scientists. How can the average person recognize good science?

We all need to think critically when we read or see stories on the web, on social media, or in the popular press. However, given that we can’t be experts in most fields of science, how can we determine whether a particular study or story is trustworthy?

How can we inoculate ourselves against being fooled by scientific untruths or misrepresentations? Researchers devoted to promoting science literacy have devised a three-step process for separating science fact from science fiction.

This is an important topic. I used to teach a course on critical thinking and I can assure you that it's very difficult to get the fundamentals across to students who aren't used to it. The authors of this essay propose some guidelines, but before looking at them I want to emphasize that critical thinking is not as common among scientists as Alberts, Hopkin, and Roberts imagine. That means that the focus should be on explaining these rules to scientists and not just to the general public.

Here's the three-step process that they recommend: ask whether the author is credible, whether the author is an expert in the subject, and whether there is a scientific consensus.

This may work for some topics, such as climate change and the efficacy of vaccines, but is it always reliable? Let's take junk DNA as an example because that's a topic that I'm familiar with. Imagine that you've just read an article about junk DNA where the author claims that junk DNA is a myth and that recent results have shown that most of our genome is functional.

Is the author credible? Probably. They may be a prominent scientist at a prestigious university and they may have hundreds of publications and millions of dollars in grant money.

Is the author an expert in the subject? It might appear so (see above).

Is there consensus? It would certainly look like a consensus view unless you were a real expert, because a simple Google Scholar search would reveal tons of papers touting the demise of junk DNA and very, very few papers expressing a different point of view.

The average science writer, and the average citizen, has no easy way of telling whether our genome is full of junk DNA or whether the concept of junk DNA has been refuted. If they use this three-step procedure, they will come to an incorrect conclusion. Why is that? It's because scientists haven't followed their own rules. In other words, science can't be trusted.

I don't mean to disparage all of science but I think there's a problem that we need to address. I'm particularly concerned about textbook authors and teachers who may be shirking their responsibility to make sure that they are applying critical thinking to what they tell their students.

Bruce Alberts, Karen Hopkin, and Keith Roberts are very smart people. They know all this so they end their essay with a section on ...

To Remain Worthy of Public Trust, Scientists Must Police Their Own Ranks to Root Out and Punish Those Who Behave Unethically

In an ideal world, no scientist would ever stray from a virtuous search for truth. Unfortunately, scientists—like all professionals—are not only human, but are under intense pressure to succeed. They must compete constantly to garner recognition, research grants, and the trainees they need to help them carry out their work. They must often work quickly to avoid being “scooped”, and they seek to present their findings in the most widely read journals (a phenomenon sometimes referred to as “publish or perish”). This ever-present pressure can lead to shortcuts in the scientific process that go undetected by peer review, such as the manipulation of data or images by a member of the research team in order to create a more convincing publication. In an analysis conducted in 2009, some 2% of the scientists surveyed admitted to fabricating, falsifying, or modifying data at least once.

How can the scientific community prevent such ethical breaches? Best practices and proper conduct need to be outlined, exemplified and practiced at all levels of the scientific enterprise—from individual scientists to their institutions and funders. At the same time, all of these participants must remain ready to identify and investigate allegations of misconduct. Technology can help: software programs, for example, can facilitate detection of manipulated figures or plagiarized text.

Transgressions, when caught, must lead to formal sanctions. These can include the retraction of publications and the subsequent correction of the scientific record, suspension or removal of the perpetrators from their positions, and the revocation of their funding—either temporarily or permanently. In instances in which the misbehavior amounts to a violation of the law, the individual may even face time in prison. Such was the case for the Chinese researcher who used gene editing to irreversibly alter human embryos, a practice that is not only unethical, but based on the current consensus of the scientific community—illegal in China and throughout the world.

In the end, the responsibility for improving the public image of science falls largely on scientists themselves. Only by energetically identifying and punishing the “bad actors,” while supporting and rewarding those who play fairly and operate with openness and honesty, can the worldwide scientific enterprise ensure that we can continue to trust in the community of scientists—and in the science they produce.

That applies to deliberate fraud, but I don't think that's the most important issue in science these days. There's far too much emphasis on fraudulent data in science instead of on fraudulent interpretations.

The real problem, in my opinion, is how you interpret your results and how you put them in the proper context, not whether the data are accurate or not. It's true, for example, that transcription factors bind to millions of sites in the human genome. That fact is not in dispute but what does it mean? Does it mean that there are millions of true regulatory sites in our genome or is it possible that many of these sites are spurious as Bruce Alberts and Keith Yamamoto explained 50 years ago? [see The specificity of DNA binding proteins] Best practices require that both points of view be expressed, and evaluated, in any paper on the abundance of regulatory sites. But that doesn't happen. Peer review doesn't fix it and scientists who misrepresent their data don't suffer any consequences.

The photo shows Bruce Alberts with his first three graduate students: Glenn Herrick (far right), Keith Yamamoto (left), and Larry Moran (2nd from right).


1. Full disclosure: Bruce Alberts is my friend and former Ph.D. supervisor.

1 comment:

Anonymous said...

Robert Byers. Science does not exist outside human intelligence. So in being asked to trust science, one is being asked to trust human intelligence. It's human incompetence, or human lack of imagination, or lack of innocent knowledge that is the problem.
All the cooperation and consensus in the world means nothing before human error. The author of Sandwalk offers examples based on frustration over certain issues. We creationists have more issues about human competence in origin conclusions, yet we are told SCIENCE cannot be wrong and must be trusted. Nope. It's just people (in origin matters, few people), these subjects are not open to testing, and there is plain incompetence in people: everybody, but some more than others. Science is a verb and not a noun. It's just a claim, a possible reality, held to a higher standard of investigation that can demand confidence in its conclusions, like in court a criminal case demands more evidence than a civil one before a conclusion. Then everyone complains that everyone else comes up short of the standard except their own stuff. Creationists do.