
Saturday, March 23, 2024

More genomes, more variation

The "All of Us Research Program" is an American effort to sequence one million genomes. The stated goal is to study human genetic variants and link them to genetic diseases. The study is complimentary to similar studies in Great Britain, Iceland, and Japan but the American team hopes to include more diversity in their study by recruiting people from different ethnic backgrounds.

All of Us published the results from almost 250,000 genome sequences in a recent issue of Nature (All of Us Research Program Investigators, 2024). They found one billion variants of which 275 million had not been seen before.

Recall that the UK study (UK Biobank) emphasized the importance of variation in determining whether a given region of DNA was functional or not. They noted that regions that were constrained (i.e. fewer variants) were likely under purifying selection whereas regions that accumulated variants were likely junk [Identifying functional DNA (and junk) by purifying selection]. Their results indicated that only about 10% of the genome was constrained and that's consistent with the view that 90% of our genome is junk. The American study did not address this issue so we don't know how it relates to the junk DNA controversy.

Note that if 90% of our genome is junk then that represents 2.8 billion base pairs and the potential for more than 8 billion variants in the human population.1 Some of these will be quite frequent in different groups just by chance but most of them will be quite rare. We'll have to wait and see how this all pans out when more genomes are sequenced. The idea of increasing the detection of unusual variants by sequencing more diverse populations is a good one but the real key is just more genome sequences.
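Here's a rough back-of-the-envelope sketch of where those numbers come from. It assumes a haploid genome of about 3.1 billion bp and counts each of the three possible alternative bases at a junk position as a potential variant, ignoring insertions and deletions:

```python
# Back-of-the-envelope estimate of potential variants in junk DNA.
# Assumptions: ~3.1 Gb haploid genome, 90% junk, and each junk position
# can mutate to any of 3 alternative bases (indels ignored).
genome_size = 3.1e9      # base pairs
junk_fraction = 0.90

junk_bp = genome_size * junk_fraction   # ~2.8 billion bp of junk
potential_variants = junk_bp * 3        # ~8.4 billion possible single-base variants

print(f"junk DNA: {junk_bp / 1e9:.1f} billion bp")
print(f"potential single-base variants: {potential_variants / 1e9:.1f} billion")
```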

One of the things you can do with this data is to cluster the variants according to the self-identified ethnic group of the participants and All of Us didn't hesitate to do this. They even identified the clusters as races, proving once again that there are clear genetic differences between these groups, just as you would expect. Given the sensitive nature of this fact, you would also expect a lot of criticism on the internet and that's what happened.


1. I'm defining a "variant" as a difference from the reference genome sequence. I'm aware of the terminology issue but it's not important here. There will also be a large number of variants in the functional regions.

All of Us Research Program Investigators (2024) Genomic data in the All of Us Research Program. Nature 627:340. [doi: 10.1038/s41586-023-06957-x].

Friday, March 22, 2024

Toronto is number 3 in health sciences!

I'm a retired member of the Faculty of Medicine at the University of Toronto. For many decades researchers here have been complaining that they don't get the recognition they deserve. They were convinced that the University of Toronto and its associated hospital research institutes were among the top health science research centers in the world.

That seems to be changing. In the March 14, 2024 issue of Nature we rank #3 in the world, ahead of many American health science centers that you might think are better [Leading 200 institutions in health sciences]. The University of Toronto is the only non-American institution in the top ten and one of only four in the top 20.

I expect to see the Chinese institutions move up in the next few years.


Thursday, March 21, 2024

Science misinformation is being spread in the lecture halls of top universities

Should universities remove online courses that contain incorrect or misleading information?

There are lots of scientific controversies where different scientists have conflicting views. Eventually these controversies will be solved by normal scientific means involving evidence and logic but for the time being there isn't enough data to settle a genuine scientific controversy. Many of us are interested in these controversies and some of us have chosen to invest time and effort into defending one side or the other.

But there's a dark side of science that infects these debates—false or misleading information used to support one side of a legitimate controversy. To give just one example, I'm frustrated at the constant reference to junk DNA being defined as non-coding DNA. Many scientists believe that this was the way junk DNA was defined by its earliest proponents and then they go on to say that the recent discovery of functional non-coding DNA refutes junk.

I don't know where this idea came from because there's nothing in the scientific literature from 50 years ago to support such a ridiculous claim. It must be coming from somewhere since the idea is so widespread.

Where does misinformation come from and how is it spread?

Monday, March 18, 2024

Western scientists should continue to cooperate with Chinese scientists

China has become a science powerhouse and it achieved this goal, in part, by sending its young scientists abroad to train in universities in Canada, Australia, the United States, and Europe. Many of these countries have signed scientific cooperation agreements with China but some of those agreements are in danger of lapsing as China is increasingly seen as an untrustworthy enemy.

Intelligent design creationists think junk DNA is a placeholder for ignorance

Paul Nelson is a Senior Fellow of the Discovery Institute—the most important source of intelligent design propaganda. Paul and I have been disagreeing about science for many years. He is prone to interpret anything he finds in the scientific literature as support for the idea that scientists have misunderstood their subject matter and failed to recognize that science supports intelligent design. My goal has always been to try and explain the actual science and why his interpretations are misguided. I have not been very successful.

The photo was taken in London (UK) in 2016 at a meeting on evolution. It looks like I'm holding my breath because I'm beside a creationist but I assure you that's not what was happening. We actually get along quite well in spite of the fact that he's wrong about everything. :-)

Sunday, March 17, 2024

Happy St. Patrick's Day 2024

Happy St. Patrick's Day! These are my great-grandparents Thomas Keys Foster, born in County Tyrone on September 5, 1852 and Eliza Ann Job, born in Fintona, County Tyrone on August 18, 1852. Thomas came to Canada in 1876 to join his older brother, George, on his farm near London, Ontario, Canada. Eliza came the following year and worked on the same farm. Thomas and Eliza decided to move out west where they got married in 1882 in Winnipeg, Manitoba, Canada.

The couple obtained a land grant near Salcoats, Saskatchewan, a few miles south of Yorkton, where they built a sod house and later on a wood frame house that they named "Fairview" after a hill in Ireland overlooking the house where Eliza was born. That's where my grandmother, Ella, was born.

Other ancestors in this line came from the nearby counties of Donegal (surname Foster) and Fermanagh (surnames Keys, Emerson, Moore) and possibly Londonderry (surname Job).

One of the cool things about studying your genealogy is that you can find connections to almost everyone. This means you can celebrate dozens of special days. In my case it was easy to find other ancestors from England, Scotland, Netherlands, Germany, France, Spain, Poland, Lithuania, Belgium, Ukraine, Russia, and the United States. Today, we will be celebrating St. Patrick's Day. It's rather hectic keeping up with all the national holidays but somebody has to keep the traditions alive!

It's nice to have an excuse to celebrate, especially when it means you can drink beer. However, I would be remiss if I didn't mention one little (tiny, actually) problem. Since my maternal grandmother is pure Irish, I should be 25% Irish but my DNA results indicate that I'm only 8% Irish. That's probably because my Irish ancestors were Anglicans and were undoubtedly the descendants of settlers from England, Wales, and Scotland who moved to Ireland in the 1600s. This explains why they don't have very Irish-sounding names.

I don't mention this when I'm in an Irish pub. Instead, I focus on my mother's maiden name, which was Doherty, and her ancestors on her father's side who were O'Doughertys. The O'Doughertys were a prominent Irish clan from Donegal and they were fierce enemies of the English invaders. Unfortunately, my ancestor was Donald O'Dougherty (1760 - 1810) who came to Canada in 1803 from the Isle of Skye in Scotland where his family had been for several generations after fleeing Ireland in the 1600s. His wife was Anne Stewart and she wasn't Irish.

I don't mention that part either.


Saturday, March 16, 2024

How do proteins move around amidst the jumble of molecules inside a living cell?

I've been reading Philip Ball's book on "How Life Works" and I find it increasingly frustrating because he consistently describes things that he's "discovered" that biochemists like me must have missed. Here's an example from pages 231-232.

He presents a cartoon image of a cell showing that it's full of all kinds of molecules packed closely together, then he says,

Friday, March 15, 2024

Nils Walter disputes junk DNA: (9) Reconciliation

I'm discussing a recent paper published by Nils Walter (Walter, 2024). He is arguing against junk DNA by claiming that the human genome contains large numbers of non-coding genes.

This is the ninth and last post in the series. I'm going to discuss Walter's view on how to tone down the dispute over the amount of junk in the human genome. Here's a list of the previous posts.


"Conclusion: How to Reconcile Scientific Fields"

Walter concludes his paper with some thoughts on how to deal with the controversy going forward. I'm using the title that he chose. As you can see from the title, he views this as a squabble between two different scientific fields, which he usually identifies as geneticists and evolutionary biologists versus biochemists and molecular biologists. I don't agree with this distinction. I'm a biochemist and molecular biologist, not a geneticist or an evolutionary biologist, and still I think that many of his arguments are flawed.

Let's see what he has to say about reconciliation.

Science thrives from integrating diverse viewpoints—the more diverse the team, the better the science.[107] Previous attempts at reconciling the divergent assessments about the functional significance of the large number of ncRNAs transcribed from most of the human genome by pointing out that the scientific approaches of geneticists, evolutionary biologists and molecular biologists/biochemists provide complementary information[42] was met with further skepticism.[74] Perhaps a first step toward reconciliation, now that ncRNAs appear to increasingly leave the junkyard,[35] would be to substitute the needlessly categorical and derogative word RNA (or DNA) “junk” for the more agnostic and neutral term “ncRNA of unknown phenotypic function”, or “ncRNAupf”. After all, everyone seems to agree that the controversy mostly stems from divergent definitions of the term “function”,[42, 74] which each scientific field necessarily defines based on its own need for understanding the molecular and mechanistic details of a system (Figure 3). In addition, “of unknown phenotypic function” honors the null hypothesis that no function manifesting in a phenotype is currently known, but may still be discovered. It also allows for the possibility that, in the end, some transcribed ncRNAs may never be assigned a bona fide function.

First, let's take note of the fact that this is a discussion about whether a large percentage of transcripts are functional or not. It is not about the bigger picture of whether most of the genome is junk in spite of the fact that Nils Walter frames it in that manner. This becomes clear when you stop and consider the implications of Walter's claim. Let's assume that there really are 200,000 functional non-coding genes in the human genome. If we assume that each one is about 1000 bp long then this amounts to 6.5% of the genome—a value that can easily be accommodated within the 10% of the genome that's conserved and functional.
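For anyone who wants to check the arithmetic, here's a minimal sketch of that calculation. The 200,000-gene count and the 1,000 bp average length are the assumptions stated above; the ~3.1 Gb genome size is approximate:

```python
# What fraction of the genome would 200,000 hypothetical non-coding genes occupy?
genome_size = 3.1e9      # base pairs (approximate)
n_noncoding_genes = 200_000
avg_gene_length = 1_000  # bp, assumed for the sake of argument

fraction = n_noncoding_genes * avg_gene_length / genome_size
print(f"{fraction:.1%} of the genome")   # ~6.5%
```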

Now let's look at how he frames the actual disagreement. He says that the groups on both sides of the argument provide "complementary information." Really? One group says that if you can delete a given region of DNA with no effect on the survival of the individual or the species then it's junk and the other group says that it still could have a function as long as it's doing something like being transcribed or binding a transcription factor. Those don't look like "complementary" opinions to me.

His first step toward reconciliation starts with "now that ncRNAs appear to increasingly leave the junkyard." That's not a very conciliatory way to start a conversation because it immediately brings up the question of how many ncRNAs we're talking about. Well-characterized non-coding genes include ribosomal RNA genes (~600), tRNA genes (~200), the collection of small non-coding genes (snRNA, snoRNA, microRNA, siRNA, PiWi RNA)(~200), several lncRNAs (<100), and genes for several specialized RNAs such as 7SL and the RNA component of RNAse P (~10). I think that there are no more than 1000 extra non-coding genes falling outside these well-known examples and that's a generous estimate. If he has evidence for large numbers that have left the junkyard then he should have presented it.

Walter goes on to propose that we should divide non-coding transcripts into two categories: those with well-characterized functions and "ncRNA of unknown function." That's ridiculous. That is not an "agnostic and neutral term." It implies that non-conserved transcripts that are present at less than one copy per cell could still have a function in spite of the fact that spurious transcription is well-documented. In fact, he basically admits this interpretation at the end of the paragraph where he says that using this description (ncRNA of unknown function) preserves the possibility that a function might be discovered in the future. He thinks this is the "null hypothesis."

The real null hypothesis is that a transcript has no function until a function can be demonstrated. Notice that I use the word "transcript" to describe these RNAs instead of "ncRNA" or "ncRNA of unknown phenotypic function." I don't think we lose anything by using the word "transcript."

Walter also addresses the meaning of "function" by claiming that different scientific fields use different definitions, as though that excuses the conflict. But that's not an accurate portrayal of the problem. All scientists, no matter what field they identify with, are interested in coming up with a way of identifying functional DNA. There are many biochemists and molecular biologists who accept the maintenance definition as the best available definition of function. As scientists, they are more than willing to entertain any reasonable scientific arguments in favor of a different definition but nobody, including Nils Walter, has come up with such arguments.

Now let's look at the final paragraph of Walter's essay.

Most bioscientists will also agree that we need to continue advancing from simply cataloging non-coding regions of the human genome toward characterizing ncRNA functions, both elementally and phenotypically, an endeavor of great challenge that requires everyone's input. Solving the enigma of human gene expression, so intricately linked to the regulatory roles of ncRNAs, holds the key to devising personalized medicines to treat most, if not all, human diseases, rendering the stakes high, and unresolved disputes counterproductive.[108] The fact that newly ascendant RNA therapeutics that directly interface with cellular RNAs seem to finally show us a path to success in this challenge[109] only makes the need for deciphering ncRNA function more urgent. Succeeding in this goal would finally fulfill the promise of the human genome project after it revealed so much non-protein coding sequence (Figure 1). As a side effect, it may make updating Wikipedia and encyclopedia entries less controversial.

I agree that it's time for scientists to start identifying those transcripts that have a true function. I'll go one step further; it's time to stop pretending that there might be hundreds of thousands of functional transcripts until you actually have some data to support such a claim.

I take issue with the phrase "solving the enigma of human gene expression." I think we already have a very good understanding of the fundamental mechanisms of gene expression in eukaryotes, including the transitions between open and closed chromatin domains. There may be a few odd cases that deviate from the norm (e.g. Xist) but that hardly qualifies as an "enigma." He then goes on to say that this "enigma" is "intricately linked to the regulatory roles of ncRNAs" but that's not a fact, it's what's in dispute and why we have to start identifying the true function (if any) of most transcripts. Oh, and by the way, sorting out which parts of the genome contain real non-coding genes may contribute to our understanding of genetic diseases in humans but it won't help solve the big problem of how much of our genome is junk because mutations in junk DNA can cause genetic diseases.

Sorting out which transcripts are functional and which ones are not will help fill in the 10% of the genome that's functional but it will have little effect on the bigger picture of a genome that's 90% junk.

We've known that less than 2% of the genome codes for proteins since the late 1960s—long before the draft sequence of the human genome was published in 2001—and we've known for just as long that lots of non-coding DNA has a function. It would be helpful if these facts were made more widely known instead of implying that they were only discovered when the human genome was sequenced.

Once we sort out which transcripts are functional, we'll be in a much better position to describe all the facts when we edit Wikipedia articles. Until that time, I (and others) will continue to resist the attempts by the students in Nils Walter's class to remove all references to junk DNA.


Walter, N.G. (2024) Are non‐protein coding RNAs junk or treasure? An attempt to explain and reconcile opposing viewpoints of whether the human genome is mostly transcribed into non‐functional or functional RNAs. BioEssays:2300201. [doi: 10.1002/bies.202300201]

Thursday, March 14, 2024

Nils Walter disputes junk DNA: (8) Transcription factors and their binding sites

I'm discussing a recent paper published by Nils Walter (Walter, 2024). He is arguing against junk DNA by claiming that the human genome contains large numbers of non-coding genes.

This is the eighth post in the series. The first one outlines the issues that led to the current paper and the second one describes Walter's view of a paradigm shift/shaft. The third post describes the differing views on how to define key terms such as 'gene' and 'function.' In the fourth post I discuss his claim that differing opinions on junk DNA are mainly due to philosophical disagreements. The fifth, sixth, and seventh posts address specific arguments in the junk DNA debate.

Wednesday, March 13, 2024

Nils Walter disputes junk DNA: (7) Conservation of transcribed DNA

I'm discussing a recent paper published by Nils Walter (Walter, 2024). He is arguing against junk DNA by claiming that the human genome contains large numbers of non-coding genes.

This is the seventh post in the series. The first one outlines the issues that led to the current paper and the second one describes Walter's view of a paradigm shift/shaft. The third post describes the differing views on how to define key terms such as 'gene' and 'function.' In the fourth post I discuss his claim that differing opinions on junk DNA are mainly due to philosophical disagreements. The fifth and sixth posts address specific arguments in the junk DNA debate.


Sequence conservation

If you don't know what a transcript is doing then how are you going to know whether it's a spurious transcript or one with an unknown function? One of the best ways is to check and see whether the DNA sequence is conserved. There's a powerful correlation between sequence conservation and function: as a general rule, functional sequences are conserved and non-conserved sequences can be deleted without consequence.

There might be an exception to the conservation criterion in the case of de novo genes. They arise relatively recently so there's no history of conservation. That's why purifying selection is a better criterion. Now that we have the sequences of thousands of human genomes, we can check to see whether a given stretch of DNA is constrained by selection or whether it accumulates mutations at the rate we expect if its sequence were irrelevant junk DNA (neutral rate). The results show that less than 10% of our genome is being preserved by purifying selection. This is consistent with all the other arguments that 90% of our genome is junk and inconsistent with arguments that most of our genome is functional.
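The logic of the purifying selection test can be sketched in a few lines. The numbers below are purely illustrative (not estimates from UK Biobank or any other study); the point is only that a constrained region accumulates fewer variants than the neutral expectation:

```python
# Illustrative sketch of a purifying-selection (constraint) test.
# All numbers are hypothetical; real analyses use calibrated neutral models.
neutral_rate = 0.04        # hypothetical variants per bp expected for neutral (junk) DNA
window_bp = 10_000         # size of the region being tested

observed_variants = 180                       # hypothetical count from population sequencing
expected_variants = neutral_rate * window_bp  # 400 expected if the region were junk

depletion = 1 - observed_variants / expected_variants
print(f"observed/expected = {observed_variants}/{expected_variants:.0f}")
print(f"variant depletion = {depletion:.0%}")  # strong depletion suggests purifying selection
```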

This sounds like a problem for the anti-junk crowd. Let's see how it's addressed in Nils Walter's article in BioEssays.

There are several hand-waving objections to using conservation as an indication of function and Walter uses them all plus one unique argument that we'll get to shortly. Let's deal with some of the "facts" that he discusses in his defense of function. He seems to agree that much of the genome is not conserved even though it's transcribed. In spite of this, he says,

"... the estimates of the fraction of the human genome that carries function is still being upward corrected, with the best estimate of confirmed ncRNAs now having surpassed protein-coding genes,[12] although so far only 10%–40% of these ncRNAs have been shown to have a function in, for example, cell morphology and proliferation, under at least one set of defined conditions."

This is typical of the rhetoric in his discussion of sequence conservation. He seems to be saying that there are more than 20,000 "confirmed" non-coding genes but only 10%-40% of them have been shown to have a function! That doesn't make any sense since the whole point of this debate is how to identify function.

Here's another bunch of arguments that Walter advances to demonstrate that a given sequence could be functional but not conserved. I'm going to quote the entire thing to give you a good sense of Walter's opinion.

A second limitation of a sequence-based conservation analysis of function is illustrated by recent insights from the functional probing of riboswitches. RNA structure, and hence dynamics and function, is generally established co-transcriptionally, as evident from, for example, bacterial ncRNAs including riboswitches and ribosomal RNAs, as well as the co-transcriptional alternative splicing of eukaryotic pre-mRNAs, responsible for the important, vast diversification of the human proteome across ∼200 cell types by excision of varying ncRNA introns. In the latter case, it is becoming increasingly clear that splicing regulation involves multiple layers synergistically controlled by the splicing machinery, transcription process, and chromatin structure. In the case of riboswitches, the interactions of the ncRNA with its multiple protein effectors functionally engage essentially all of its nucleotides, sequence-conserved or not, including those responsible for affecting specific distances between other functional elements. Consequently, the expression platform—equally important for the gene regulatory function as the conserved aptamer domain—tends to be far less conserved, because it interacts with the idiosyncratic gene expression machinery of the bacterium. Consequently, taking a riboswitch out of this native environment into a different cell type for synthetic biology purposes has been notoriously challenging. These examples of a holistic functioning of ncRNAs in their species-specific cellular context lay bare the limited power of pure sequence conservation in predicting all functionally relevant nucleotides.

I don't know much about riboswitches so I can't comment on that. As for alternative splicing, I assume he's suggesting that much of the DNA sequence of large introns is required for alternative splicing. That's just not correct. You can have effective alternative splicing with small introns. The only essential parts of intron sequences are the splice sites and a minimum amount of spacer.

Part of what he's getting at is the fact that you can have a functional transcript where the actual nucleotide sequence doesn't matter so it won't look conserved. That's correct. There are such sequences. For example, there seem to be some examples of enhancer RNAs, which are transcripts in the regulatory region of a gene where it's the act of transcription that's important (to maintain an open chromatin conformation, for example) and not the transcript itself. Similarly, not all intron sequences are junk because some spacer sequence is required to maintain a minimum distance between splice sites. All this is covered in Chapter 8 of my book ("Noncoding Genes and Junk RNA").

Are these examples enough to toss out the idea of sequence conservation as a proxy for function and assume that there are tens of thousands of such non-conserved genes in the human genome? I think not. The null hypothesis still holds. If you don't have any evidence of function then the transcript doesn't have a function—you may find a function at some time in the future but right now it doesn't have one. Some of the evidence for function could be sequence conservation but the absence of conservation is not an argument for function. If conservation doesn't work then you have to come up with some other evidence.

It's worth mentioning that, in the broadest sense, purifying selection isn't confined to nucleotide sequence. It can also take into account deletions and insertions. If a given region of the genome is deficient in random insertions and deletions then that's an indication of function in spite of the fact that the nucleotide sequence isn't maintained by purifying selection. The maintenance definition of function isn't restricted to sequence—it also covers bulk DNA and spacer DNA.

(This is a good time to bring up a related point. The absence of conservation (size or sequence) is not evidence of junk. Just because a given stretch of DNA isn't maintained by purifying selection does not prove that it is junk DNA. The evidence for a genome full of junk DNA comes from different sources and that evidence doesn't apply to every little bit of DNA taken individually. On the other hand, the maintenance function argument is about demonstrating whether a particular region has a function or not and it's about the proper null hypothesis when there's no evidence of function. The burden of proof is on those who claim that a transcript is functional.)

This brings us to the main point of Walter's objection to sequence conservation as an indication of function. You can see hints of it in the previous quotation where he talks about "holistic functioning of ncRNAs in their species-specific cellular context," but there's more ...

Some evolutionary biologists and philosophers have suggested that sequence conservation among genomes should be the primary, or perhaps only, criterion to identify functional genetic elements. This line of thinking is based on 50 years of success defining housekeeping and other genes (mostly coding for proteins) based on their sequence conservation. It does not, however, fully acknowledge that evolution does not actually select for sequence conservation. Instead, nature selects for the structure, dynamics and function of a gene, and its transcription and (if protein coding) translation products; as well as for the inertia of the same in pathways in which they are not involved. All that, while residing in the crowded environment of a cell far from equilibrium that is driven primarily by the relative kinetics of all possible interactions. Given the complexity and time dependence of the cellular environment and its environmental exposures, it is currently impossible to fully understand the emergent properties of life based on simple cause-and-effect reasoning.

The way I see it, his most important argument is that life is very complicated and we don't currently understand all of its emergent properties. This means that he is looking for ways to explain the complexity that he expects to be there. The possibility that there might be several hundred thousand regulatory RNAs seems to fulfil this need so they must exist. According to Nils Walter, the fact that we haven't (yet) proven that they exist is just a temporary lull on the way to rigorous proof.

This seems to be a common theme among those scientists who share this viewpoint. We can see it in John Mattick's writings as well. It's as though the logic of having a genome full of regulatory RNA genes is so powerful that it doesn't require strong supporting evidence and can't be challenged by contradictory evidence. The argument seems somewhat mystical to me. Its proponents are making the a priori assumption that humans just have to be a lot more complicated than what "reductionist" science is indicating and all they have to do is discover what that extra layer of complexity is all about. According to this view, the idea that our genome is full of junk must be wrong because it seems to preclude the possibility that our genome could explain what it's like to be human.


Walter, N.G. (2024) Are non‐protein coding RNAs junk or treasure? An attempt to explain and reconcile opposing viewpoints of whether the human genome is mostly transcribed into non‐functional or functional RNAs. BioEssays:2300201. [doi: 10.1002/bies.202300201]

Sunday, March 10, 2024

The neutralist-selectionist debate in 2024

The neutral theory was first proposed by Motoo Kimura in 1968 (Kimura, 1968). The following year, a similar idea was published in a seminal paper by Jack King and Thomas Jukes (King and Jukes, 1969). King and Jukes emphasized the importance of non-Darwinian mechanisms of evolution in order to explain protein-based phylogenetic trees and the molecular clock. They made it clear that neutral alleles fixed by random genetic drift play an important part in evolution.

There appears to be considerable latitude at the molecular level for random genetic changes that have no effect upon the fitness of the organism. Selectively neutral mutations, if they occur, become passively fixed as evolutionary changes through the action of random genetic drift.

The idea of selectively neutral changes at the molecular level has not been readily accepted by many classical evolutionists, perhaps because of the pervasiveness of Darwinian thought (King and Jukes, 1969).

Thursday, March 07, 2024

Why Philosophy of Biology?

Robert Lawrence Kuhn has published a series of videos on his "Closer to Truth" site. On March 4, 2024 he posted a teaser video introducing Season 23: "Why Philosophy of Biology." The video contains short clips of his interviews with philosophers of biology (see list below).

Here's the blurb covering the introduction to the new season.

How can philosophy advance biology? How can biology influence philosophy? In this first series on Philosophy of Biology, Closer to Truth explores the challenges and implications of evolution. We ask how life on earth came to be as it is, and how humans came to be as we are. We address biologically based issues, such as sex/gender, race, cognition, culture, morality, healthcare, religion, alien life, and more. When philosophy and biology meet, sparks fly as both are enriched.

Those are all interesting questions. Some of them can only be answered by philosophers but others require major input from scientists. One of the important issues for philosophy of science seems to be the conflict between the philosophy of the early 20th century, which was developed with physics as the model science, and the success of molecular biology in the latter half of the 20th century, which didn't play by the same rules. (See the short interview with Paul Griffiths, whom I greatly admire, for a succinct explanation of this problem.)

I'm very conflicted about the role of philosophy in understanding the science of biology and even more conflicted about whether philosophers can distinguish good science from bad science (Richard Dawkins, Denis Noble). I'm also puzzled by the apparent reluctance of philosophers to openly challenge their colleagues who get the science wrong. Watch the video to see if my scepticism is warranted.


Monday, March 04, 2024

Nils Walter disputes junk DNA: (6) The C-value paradox

I'm discussing a recent paper published by Nils Walter (Walter, 2024). He is arguing against junk DNA by claiming that the human genome contains large numbers of non-coding genes.

This is the sixth post in the series. The first one outlines the issues that led to the current paper and the second one describes Walter's view of a paradigm shift/shaft. The third post describes the differing views on how to define key terms such as 'gene' and 'function.' In the fourth post I discuss his claim that differing opinions on junk DNA are mainly due to philosophical disagreements. The fifth post addresses a specific argument in the junk DNA debate.

Sunday, March 03, 2024

Nils Walter disputes junk DNA: (5) What does the number of transcripts per cell tell us about function?

I'm discussing a recent paper published by Nils Walter (Walter, 2024). He is arguing against junk DNA by claiming that the human genome contains large numbers of non-coding genes.

This is the fifth post in the series. The first one outlines the issues that led to the current paper and the second one describes Walter's view of a paradigm shift. The third post describes the differing views on how to define key terms such as 'gene' and 'function.' The fourth post discusses his claim that differing views on junk DNA are mainly due to philosophical disagreements.

-Nils Walter disputes junk DNA: (1) The surprise

-Nils Walter disputes junk DNA: (2) The paradigm shaft

-Nils Walter disputes junk DNA: (3) Defining 'gene' and 'function'

-Nils Walter disputes junk DNA: (4) Different views of non-functional transcripts

Transcripts vs junk DNA

The most important issue, according to Nils Walter, is whether the human genome contains huge numbers of genes for lncRNAs and other types of regulatory RNAs. He doesn't give us any indication of how many of these potential genes he thinks exist or what percentage of the genome they cover. This is important since he's arguing against junk DNA but we don't know how much junk he's willing to accept.

There are several hundred thousand transcripts in the RNA databases. Most of them are identified as lncRNAs because they are longer than 200 nucleotides. Let's assume, for the sake of argument, that 200,000 of these transcripts have a biologically relevant function and therefore there are 200,000 non-coding genes. A typical size might be 1000 bp so these genes would take up about 6.5% of the genome. That's about 10 times the number of protein-coding genes and more than 6 times the amount of coding DNA.

That's not going to make much of a difference in the junk DNA debate since proponents of junk DNA argue that 90% of the genome is junk and 10% is functional. All of those non-coding genes can be accommodated within the 10%.

The ENCODE researchers made a big deal out of pervasive transcription back in 2007 and again in 2012. We can quibble about the exact numbers but let's say that 80% of the human genome is transcribed. We know that protein-coding genes occupy at least 40% of the genome so much of this pervasive transcription is introns. If all of the presumptive regulatory genes are located in the remaining 40% (i.e. none in introns), and the average size is 1000 bp, then this could be about 1.24 million non-coding genes. Is this reasonable? Is this what Nils Walter is proposing?
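Here's a sketch of the arithmetic behind that upper bound. The 80% transcription figure, the ~40% occupied by protein-coding genes (mostly introns), and the 1,000 bp average gene size are the assumptions stated above:

```python
# Upper bound on non-coding genes implied by pervasive transcription (illustrative).
genome_size = 3.1e9                      # base pairs (approximate)
transcribed_fraction = 0.80              # pervasive transcription estimate
protein_coding_fraction = 0.40           # protein-coding genes plus their introns

remaining = transcribed_fraction - protein_coding_fraction   # 40% of the genome
avg_gene_length = 1_000                  # bp, assumed

max_noncoding_genes = remaining * genome_size / avg_gene_length
print(f"up to {max_noncoding_genes / 1e6:.2f} million non-coding genes")  # ~1.24 million
```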

I think there's some confusion about the difference between large numbers of functional transcripts and the bigger picture of how much total junk DNA there is in the human genome. I wish the opponents of junk DNA would commit to how much of the genome they think is functional and what evidence they have to support that position.

But they don't. So instead we're stuck with debates about how to decide whether some transcripts are functional or junk.

What does transcript concentration tell us about function?

If most detectable transcripts are due to spurious transcription of junk DNA then you would expect these transcripts to be present at very low levels. This turns out to be true as Nils Walter admits. He notes that "fewer than 1000 lncRNAs are present at greater than one copy per cell."

This is a problem for those who advocate that many of these low abundance transcripts must be functional. We are familiar with several of the ad hoc hypotheses that have been advanced to get around this problem. John Mattick has been promoting them for years [John Mattick's new paradigm shaft].

Walter advances two of these excuses. First, he says that a critical RNA may be present at an average of one molecule per cell but it might be abundant in just one specialized cell in the tissue. Furthermore, their expression might be transient so they can only be detected at certain times during development and we might not have assayed cells at the right time. I assume he's advocating that there might be a short burst of a large number of these extremely specialized regulatory RNAs in these special cells.

As far as I know, there aren't many examples of such specialized gene expression. You would need at least 100,000 examples in order to make a viable case for function.

His second argument is that many regulatory RNAs are restricted to the nucleus where they only need to bind to one regulatory sequence to carry out their function. This ignores the mass action laws that govern such interactions. If you apply the same reasoning to proteins then you would only need one lac repressor protein to shut down the lac operon in E. coli but we've known for 50 years that this doesn't work in spite of the fact that the lac repressor association constant shows that it is one of the tightest binding proteins known [DNA Binding Proteins]. This is covered in my biochemistry textbook on pages 650-651.1

If you apply the same reasoning to mammalian regulatory proteins then it turns out that you need 10,000 transcription factor molecules per nucleus in order to ensure that a few specific sites are occupied. That's not only because of the chemistry of binary interactions but also because the human genome is full of spurious sites that resemble the target regulatory sequence [The Specificity of DNA Binding Proteins]. I cover this in my book in Chapter 8: "Noncoding Genes and Junk RNA" in the section titled "On the important properties of DNA-binding proteins" (pp. 200-204). I use the estrogen receptor as an example based on calculations that were done in the mid-1970s. The same principles apply to regulatory RNAs.
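To see why copy number matters so much, here's a toy equilibrium (mass action) calculation. It is not the estrogen receptor calculation from the book, just a generic illustration: the 1 nM dissociation constant and the nuclear volume are assumed round numbers, and real nuclei are even less forgiving because millions of spurious binding sites compete for the same molecules.

```python
# Toy mass-action calculation: fractional occupancy of a single target site
# as a function of the number of regulatory molecules per nucleus.
# theta = [R] / ([R] + Kd); all numbers are illustrative assumptions.
AVOGADRO = 6.022e23
NUCLEUS_VOLUME_L = 5e-13         # ~500 femtoliters, a typical mammalian nucleus

def molar(copies_per_nucleus):
    """Convert free molecules per nucleus to a molar concentration."""
    return copies_per_nucleus / (AVOGADRO * NUCLEUS_VOLUME_L)

Kd = 1e-9                        # hypothetical 1 nM dissociation constant (tight binding)

for copies in (1, 100, 10_000):
    conc = molar(copies)                 # treats every copy as free (best case)
    occupancy = conc / (conc + Kd)
    print(f"{copies:>6} copies/nucleus -> {conc:.1e} M, site occupancy {occupancy:.1%}")
```

With these assumptions a single molecule per nucleus occupies its target site less than 1% of the time, and it takes thousands of copies to approach full occupancy, even before accounting for competition from spurious sites.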

This is a disagreement based entirely on biochemistry and molecular biology. There aren't enough examples (evidence) to make the first argument convincing and the second argument makes no sense in light of what we know about the interactions between molecules inside of the cell (or nucleus).

Note: I can almost excuse the fact that Nils Walter ignores my book on junk DNA, my biochemistry textbook, and my blog posts, but I can't excuse the fact that his main arguments have been challenged repeatedly in the scientific literature. A good scientist should go out of their way to seek out objections to their views and address them directly.


1. In addition to the thermodynamic (equilibrium) problem, there's a kinetic problem. DNA binding proteins can find their binding sites relatively quickly by one dimensional diffusion—an option that's not readily available to regulatory RNAs [Slip Slidin' Along - How DNA Binding Proteins Find Their Target].

Walter, N.G. (2024) Are non‐protein coding RNAs junk or treasure? An attempt to explain and reconcile opposing viewpoints of whether the human genome is mostly transcribed into non‐functional or functional RNAs. BioEssays:2300201. [doi: 10.1002/bies.202300201]

Saturday, March 02, 2024

Nils Walter disputes junk DNA: (4) Different views of non-functional transcripts

I'm discussing a recent paper published by Nils Walter (Walter, 2024). He is trying to explain the conflict between proponents of junk DNA and their opponents. His main focus is building a case for large numbers of non-coding genes.

This is the fourth post in the series. The first one outlines the issues that led to the current paper and the second one describes Walter's view of a paradigm shift. The third post describes the differing views on how to define key terms such as 'gene' and 'function.' In this post I'll describe the heart of the dispute according to Nils Walter.

-Nils Walter disputes junk DNA: (1) The surprise

-Nils Walter disputes junk DNA: (2) The paradigm shaft

-Nils Walter disputes junk DNA: (3) Defining 'gene' and 'function'

Thursday, February 29, 2024

Nils Walter disputes junk DNA: (3) Defining 'gene' and 'function'

I'm discussing a recent paper published by Nils Walter (Walter, 2024). He is trying to explain the conflict between proponents of junk DNA and their opponents. His main focus is building a case for large numbers of non-coding genes.

This is the third post in the series. The first one outlines the issues that led to the current paper and the second one describes Walter's view of a paradigm shift.

-Nils Walter disputes junk DNA: (1) The surprise

-Nils Walter disputes junk DNA: (2) The paradigm shaft

Any serious debate requires some definitions and the debate over junk DNA is no exception. It's important that everyone is on the same page when using specific words and phrases. Nils Walter recognizes this so he begins his paper with a section called "Starting with the basics: Defining 'function' and 'gene'."

Tuesday, February 27, 2024

Nils Walter disputes junk DNA: (2) The paradigm shaft

I'm discussing a recent paper published by Nils Walter (Walter, 2024). He is trying to explain the conflict between proponents of junk DNA and their opponents. His main focus is building a case for large numbers of non-coding genes.

This is the second post in the series. The first one outlines the issues that led to the current paper.

Nils Walter disputes junk DNA: (1) The surprise

Walter begins his defense of function by outlining a "paradigm shift" that's illustrated in Figure 1.

FIGURE 1: Assessment of the information content of the human genome ∼20 years before (left)[110] and after (right)[111] the Human Genome Project was preliminarily completed, drawn roughly to scale.[9] This significant progress can be described per Thomas Kuhn as a “paradigm shift” flanked by extended periods of “normal science”, during which investigations are designed and results interpreted within the dominant conceptual frameworks of the sub-disciplines.[9] Others have characterized this leap in assigning newly discovered ncRNAs at least a rudimentary (elemental) biochemical activity and thus function as excessively optimistic, or Panglossian, since it partially extrapolates from the known to the unknown.[75] Adapted from Ref. [9].

Reference #9 is a paper by John Mattick promoting a "Kuhnian revolution" in molecular biology. I've already discussed that paper as an example of a paradigm shaft, which is defined as a strawman "paradigm" set up to make your work look revolutionary [John Mattick's new paradigm shaft]. Here's the figure from the Mattick paper.

The Walter figure is another example of a paradigm shaft—not to be confused with a real paradigm shift.1 Both pie charts misrepresent the amount of functional DNA since they don't show regulatory sequences, centromeres, telomeres, origins of replication, and SARs (scaffold attachment regions). Together, these account for more functional DNA than the functional regions of protein-coding genes and non-coding genes. We didn't know the exact amounts in 1980 but we sure knew they existed. I cover this in Chapter 5 of my book: "The Big Picture."

The 1980 view also implies, incorrectly, that we knew nothing about the non-functional component of the genome when, in fact, we knew by then that half of our genome was composed of transposon and viral sequences that were likely to be inactive, degenerate fragments of once active elements. (John Mattick's figure is better.)

The 2020 view implies that most intron sequences are functional since introns make up more than 40% of our genome but only about 3% of the pie chart. As far as I know, there's no evidence to support that claim. About 80% of the pie chart is devoted to transcripts identified as either small ncRNAs or lncRNAs. The implication is that the discovery of these RNAs represents a paradigm shift in our understanding of the genome.

The alternative explanation is that we've known since the late 1960s that most of the human genome is transcribed and that these transcripts—most of which turned out to be introns—are junk RNA that is confined to the nucleus and rapidly degraded. Advances in technology have enabled us to detect many examples of spurious transcripts that are present transiently at low levels in certain cells. I cover this in Chapter 8 of my book: "Noncoding Genes and Junk RNA."

The whole point of Nils Walter's paper is to defend the idea that most of these transcripts are functional and the alternative explanation is wrong. He's trying to present a balanced view of the controversy so he's well aware of the fact that some of us interpret the red part of the pie chart as spurious transcripts (junk RNA). If he's wrong, and I am right, then there's no paradigm shift.

You don't get to shift the paradigm all on your own, even if John Mattick is on your side. A true paradigm shift requires that the entire community of scientists changes their perspective and that hasn't happened.

In the next few posts we'll see whether Nils Walter can make a strong case that all those lncRNAs are functional. They cover about two-thirds of the genome in the pie chart. If we assume that the average length of these long transcripts is 2000 bp then this represents one million transcripts and potentially one million non-coding genes.


1. The term "paradigm shaft" was coined by reader Diogenes in a comment on this blog from many years ago.

Walter, N.G. (2024) Are non‐protein coding RNAs junk or treasure? An attempt to explain and reconcile opposing viewpoints of whether the human genome is mostly transcribed into non‐functional or functional RNAs. BioEssays:2300201. [doi: 10.1002/bies.202300201]

Nils Walter disputes junk DNA: (1) The surprise

Nils Walter attempts to present the case for a functional genome by reconciling opposing viewpoints. I address his criticisms of the junk DNA position and discuss his arguments in favor of large numbers of functional non-coding RNAs.

Nils Walter is Francis S. Collins Collegiate Professor of Chemistry, Biophysics, and Biological Chemistry at the University of Michigan in Ann Arbor (Michigan, USA). He works on human RNAs and claims that, "Over 75% of our genome encodes non-protein coding RNA molecules, compared with only <2% that encodes proteins." He recently published an article explaining why he opposes junk DNA.

Walter, N.G. (2024) Are non‐protein coding RNAs junk or treasure? An attempt to explain and reconcile opposing viewpoints of whether the human genome is mostly transcribed into non‐functional or functional RNAs. BioEssays:2300201. [doi: 10.1002/bies.202300201]

The human genome project's lasting legacies are the emerging insights into human physiology and disease, and the ascendance of biology as the dominant science of the 21st century. Sequencing revealed that >90% of the human genome is not coding for proteins, as originally thought, but rather is overwhelmingly transcribed into non-protein coding, or non-coding, RNAs (ncRNAs). This discovery initially led to the hypothesis that most genomic DNA is “junk”, a term still championed by some geneticists and evolutionary biologists. In contrast, molecular biologists and biochemists studying the vast number of transcripts produced from most of this genome “junk” often surmise that these ncRNAs have biological significance. What gives? This essay contrasts the two opposing, extant viewpoints, aiming to explain their basis, which arise from distinct reference frames of the underlying scientific disciplines. Finally, it aims to reconcile these divergent mindsets in hopes of stimulating synergy between scientific fields.

Saturday, February 17, 2024

How to end the war in Ukraine according to a Canadian Conservative "diplomat"

In my opinion, the war in Ukraine is much more complicated than most people realize. We are constantly bombarded with propaganda from all sides and it inhibits rational thinking. One of the few reliable facts is that Vladimir Putin is a very smart bad person.

Lots of people think they have the answer to ending the war in Ukraine. One of the latest pundits is Chris Alexander who has published his thoughts in the Feb. 16, 2024 edition of Canada's Globe and Mail: Ukraine is paying the price for our nonchalance toward Russia's leadership. Alexander spent years in Canada's Foreign Service, including many years in Moscow and a stint as Canada's ambassador to Afghanistan. In 2011 he was elected to Parliament as a Conservative MP and served as Minister of Citizenship and Immigration in Stephen Harper's cabinet. His reputation as a politician was very different from his previous, mostly admirable, reputation as a diplomat. Here's an excerpt from his Wikipedia article.

Wednesday, February 14, 2024

Copilot answers the question, "What is junk DNA?"

The Microsoft browser (Edge) has a built-in feature called Copilot. It's an AI assistant based on GPT-4.

I decided to test it by asking "What is junk DNA?" and here's the answer it gave me.

Sunday, February 11, 2024

Older but wiser?

With age comes wisdom, but sometimes age comes alone.

Oscar Wilde

Like many baby boomers, I sometimes forget people's names and other important bits of information. Sometimes I can't find a word that's been in my vocabulary for decades. These lapses are often temporary but very annoying. It's a sign of age. (I am 77 years old.)

We often make fun of these incidents and console ourselves with the knowledge that we may be old but we are much wiser than we were in our younger days. We have years and years of experience behind us and over the years we've learned a thing or two that we never understood when we were listening to the Beatles on the radio. We've lived through the Cuban Missile crisis, the war in Vietnam, the assassination of two Kennedys and Martin Luther King, and a host of cultural changes. We've lived in several different countries and we've raised children. All of these experiences have made us wiser, or so we think.

Friday, February 09, 2024

Open and closed chromatin domains (and epigenetics)

Gene expression in eukaryotes is influenced by the state of chromatin. Tightly packed nucleosomes inhibit the binding of transcription factors and RNA polymerase so that genes in these regions are "repressed." From time to time these regions loosen up a bit allowing access to transcription complexes and subsequent transcription.

The tightly packed regions are known as closed domains and the accessible regions are open domains. Some authors add an intermediate domain called a permissive domain. This model of eukaryotic gene expression has been around for 50 years and the important mechanisms controlling the switch were worked out in the 1980s. I found a recent review that covers this issue in the context of epigenetics and the image below comes from that paper (Klemm et al., 2019).

Wednesday, February 07, 2024

Philip Ball's new book: "How Life Works"

Philip Ball has just published a new book "How Life Works." The subtitle is "A User’s Guide to the New Biology" and that should tell you all you need to know. This is going to be a book about how human genomics has changed everything.

Monday, January 29, 2024

"People also ask" about junk DNA

I'm interested in the spread of science misinformation on the internet. The misinformation about the human genome is a good example that illustrates the problem. There are many other examples but I happen to know a lot about this particular one.

Anyone trying to find out about junk DNA will find it impossible to get a correct answer by searching the internet. The correct answer is that the amount of junk DNA in the human genome is controversial: some scientists think that most of our genome is functional while others think that as much as 90% is junk. The scientific evidence strongly favors the junk side of the controversy and that's very well explained in the Wikipedia articles on Junk DNA and Non-coding DNA.

Wednesday, January 10, 2024

Benjamin Lewin's new book and his view of the human genome

I was a big fan of Benjamin Lewin. Back in the 1970s he published the first volumes of what was to become Genes, the authoritative textbook of molecular biology. I admired his ability to understand the latest experiments and put the results in the appropriate context.

Later on, when he founded the journal Cell, his editorials and other writings were always insightful. His editorial judgement was impeccable—he always published the very best papers in molecular biology.1

Saturday, January 06, 2024

Why do Intelligent Design Creationists lie about junk DNA?

A recent post on Evolution News (sic) promotes a new podcast: Casey Luskin on Junk DNA's "Kuhnian Paradigm Shift". You can listen to the podcast here but most Sandwalk readers won't bother because they've heard it all before. [see Paradigm shifting.]

Luskin repeats the now familiar refrain of claiming that scientists used to think that all non-coding DNA was junk. Then he goes on to list recent discoveries showing that some of this non-coding DNA is functional. The truth is that no knowledgeable scientist ever claimed that all non-coding DNA was junk. The original idea of junk DNA was based on evidence that only 10% of the genome is functional and these scientists knew that coding regions occupied only a few percent. Thus, right from the beginning, the experts on genome evolution knew about all sorts of functional non-coding DNA such as regulatory sequences, non-coding genes, and other things.

Saturday, December 16, 2023

Kat Arney interviews me on her podcast

I had a long chat with Kat Arney a few weeks ago and she has now taken the best parts of that conversation and put them in her latest Genetics Society podcast: Genes, junk and the 'dark genome'. My comments are in the last twelve minutes. At the end, Kat asks me "Is there like one thing you would really want a student or researcher, working in genetics today to really understand about the human genome?"

Kat was kind enough to write a blurb for my book last year where she said,

What's in Your Genome? is a thought-provoking and pugnacious book that will make you wonder afresh at the molecular intricacies of life. When it comes to our genomes, we humans are nothing special—Moran makes a convincing argument that the vast majority of our sloppy human genome is not mysterious genetic treasures but boring junk.

In this podcast, she combines my thoughts on the human genome with those of two people who don't agree with the idea that the human genome is full of junk. Here's a brief summary of their positions.

Naomi Allen is Chief Scientist at UK Biobank, a consortium that's sequencing the genomes of UK citizens. So far, they've published data on 500,000 genome sequences. I wrote about one of their more significant findings last year (August, 2022) where they reported on the fraction of the human genome that was under purifying selection. This is an excellent proxy for functional DNA and the results are in line with (my) expectations: less than 10% of the genome is conserved and most of it is in the non-coding fraction [Identifying functional DNA (and junk) by purifying selection].

It's too bad that Kat's interview with Naomi Allen doesn't mention that important result, especially since the podcast is about junk DNA. Here's how Naomi Allen begins her part of the interview.

Whole genome sequencing enables researchers to look at all of the genetic variation across the entire genome. So not just in the 2% of the genome that encodes for proteins, but all of the genetic variation, much of which was previously considered "junk DNA" precisely because we didn't know what it did.

This is disappointing for two important reasons. First, surely in 2023 we've gone beyond the tired myth that all of the information in the human genome was concentrated in coding DNA? Second, no knowledgeable scientist ever said that all non-coding DNA was junk DNA and the idea of junk DNA was not based on ignorance so surely it's time to stop repeating that myth as well.

The rest of that interview focuses on how mapping genetic variation could contribute to our understanding of health and disease. I would have loved to ask how UK Biobank proposes to do this if most of the variation is in junk DNA and also ask whether mutations in junk DNA can contribute to genetic disease. (They can.)

Danuta Jeziorska is the CEO of Nucleome Therapeutics, a company that's described as "spun out of Oxford University with a new set of technologies for exploring the dark genome." Kat asks her about the dark genome and here's her response.

So if you think about it, we have 22,000 genes in our genome, and we can compare that to having 22,000 ingredients in the fridge. We use the same set of ingredients to create different meals, just like how we have the same DNA within each cell, but then we have hundreds of different cell types. So this dark genome determines the combination of ingredients of the genes that you take and at which level you use them, to produce the different cell types that build our body. And you can just imagine that if you make a mistake in that - so let's say that you add the wrong ingredients in the wrong meal, you can mess up the meal. And in this same way you can mess up the cell type. So if you, for example, if you don't produce enough of haemoglobin to transport oxygen around the body, you will end up with a genetic form of anaemia or if you turn on a gene that's not supposed to be turned on, like an oncogene, you may end up having cancer.

So the dark genome is now very well understood as the mechanism that is causing diseases.

This is a slightly different definition of the dark genome than those I discussed in a recent post [What is the "dark matter of the genome"?]. In that post I suggested that most scientists were referring to all of the functions in non-coding DNA but Danuta Jeziorska seems to be restricting her use of "dark genome" to just regulatory sequences. In the rest of the interview she goes on to describe various types of regulatory sequences, with an emphasis on 3D structure, and to explain that many common genetic diseases are caused by mutations in regulatory sequences. Her company is using machine learning to find the functional elements in the dark genome and which variants are associated with disease. They are also investing in drug discovery.


What is the "dark matter of the genome"?

The phrase "dark matter of the genome" is used by scientists who are skeptical of junk DNA so they want to convey the impression that most of the genome consists of important DNA whose function is just waiting to be discovered. Not surprisingly, the term is often used by researchers who are looking for funding and investors to support their efforts to use the latest technology to discover this mysterious function that has eluded other scientists for over 50 years.

The term "dark matter" is often applied to the human genome but what does it mean? We get a clue from a BBC article published by David Cox last April: The mystery of the human genome's dark matter. He begins the article by saying,

Twenty years ago, an enormous scientific effort revealed that the human genome contains 20,000 protein-coding genes, but they account for just 2% of our DNA. The rest of it was written off as junk – but we are now realising it has a crucial role to play.

Friday, December 08, 2023

What really happened between Rosalind Franklin, James Watson, and Francis Crick?

That's part of the title of a podcast by Kat Arney, who interviews Matthew Cobb [Double helix double crossing? What really happened between Rosalind Franklin, James Watson and Francis Crick?].

Matthew Cobb is one of the world's leading experts on the history of molecular biology.

The way it’s usually told, Franklin was effectively ripped off and belittled by the Cambridge team, especially Watson, and has only recently been restored to her rightful place as one of the key discoverers of the double helix. It’s a dramatic narrative, with heroes, villains and a grand prize. But, as I found out when I sat down for a chat with Matthew Cobb, science author and Professor of Zoology at the University of Manchester, the real story is a lot more nuanced.

Photo 51 did not belong to Rosalind Franklin and it had (almost) nothing to do with solving the structure of DNA. Franklin and Wilkins would never have gotten the structure on their own. Crick and Watson did not "steal" any data. Whether they behaved ethically is debatable.


Sunday, November 26, 2023

ChatGPT gets two-thirds of science textbook questions wrong: time to bring it into the classroom!

The November 16th issue of Nature has an article about ChatGPT: ChatGPT has entered the classroom: how LLMs could transform education. It reports that the latest version (GPT-4) can correctly answer only about one-third of the questions in physical chemistry, physics, and calculus. Nevertheless, the article promotes the idea that ChatGPT should be brought into the classroom!

An editorial in the same issue explains Why teachers should explore ChatGPT’s potential — despite the risks.

Many students now use AI chatbots to help with their assignments. Educators need to study how to include these tools in teaching and learning — and minimize pitfalls.

I don't get it. It seems to me that the problems with ChatGPT far outweigh the advantages and the best approach for now is to warn students that using AI tools may be terribly misleading and could lead to them failing a course if they trust the output. That doesn't mean that there's no potential for improvement in the future but this can only happen if the sources of information used by these tools were to become much more reliable. No improvements in the algorithms are going to help with that.