More Recent Comments

Thursday, December 22, 2022

Junk DNA, TED talks, and the function of lncRNAs

Most of our genome is transcribed but so far only a small number of these transcripts have a well-established biological function.

The fact that most of our genome is transcribed has been known for 50 years, but it only became widely known with the publication of ENCODE's preliminary results in 2007 (ENCODE, 2007). The ENCODE scientists referred to this as "pervasive transcription," and the label has stuck.

By the end of the 1970s we knew that much of this transcription was due to introns. The latest data show that protein-coding genes and known noncoding genes occupy about 45% of the genome, and most of that consists of intron sequences that are mostly junk. That leaves 30-40% of the genome that is transcribed at some point, producing something like one million transcripts of unknown function.

Wednesday, December 21, 2022

A University of Chicago history graduate student's perspective on junk DNA

A new master's thesis on the history of junk DNA has been posted. It's from the Department of History at the University of Chicago.

My routine scan for articles on junk DNA turned up the abstract of an M.A. thesis on the history of junk DNA: Requiem for a Gene: The Problem of Junk DNA for the Molecular Paradigm. The supervisor is Professor Emily Kern in the Department of History at the University of Chicago. I've written to her to ask for a copy of the thesis and for permission to ask her, and her student, some questions about the thesis. No reply so far.

Here's the abstract of the thesis.

“Junk DNA” has been at the center of several high-profile scientific controversies over the past four decades, most recently in the disputes over the ENCODE Project. Despite its prominence in these debates, the concept has yet to be properly historicized. In this thesis, I seek to redress this oversight, inaugurating the study of junk DNA as a historical object and establishing the need for an earlier genesis for the concept than scholars have previously recognized. In search of a new origin story for junk, I chronicle developments in the recognition and characterization of noncoding DNA sequences, positioning them within existing historiographical narratives. Ultimately, I trace the origin of junk to 1958, when a series of unexpected findings in bacteria revealed the existence of significant stretches of DNA that did not encode protein. I show that the discovery of noncoding DNA sequences undermined molecular biologists’ vision of a gene as a line of one-dimensional code and, in turn, provoked the first major crisis in their nascent field. It is from this crisis, I argue, that the concept of junk DNA emerged. Moreover, I challenge the received narrative of junk DNA as an uncritical reification of the burgeoning molecular paradigm. By separating the history of junk DNA from its mythology, I demonstrate that the conceptualization of junk DNA reveals not the strength of molecular biological authority but its fragility.

It looks like it might be a history of noncoding DNA but I won't know for certain until I see the entire thesis. It's only available to students and staff at the University of Chicago.


Sunday, December 18, 2022

Protein concentrations in E. coli are mostly controlled at the level of transcription initiation

The most important step in the regulation of protein-coding genes in E. coli is the rate of binding of RNA polymerase to the promoter region.

A group of scientists at the University of California at San Diego and their European collaborators looked at the concentrations of proteins and mRNAs of about 2000 genes in E. coli. They catalogued these concentrations under several different growth conditions in order to determine whether the level of protein being expressed from each of these genes correlated with transcription rate, translation rate, mRNA stability, or other steps in gene expression.

The paper is very difficult to understand because the authors are primarily interested in developing mathematical formulae to describe their results. They expect you to understand equations like the ones reproduced in the paper, even though they don't explain the parameters very well. A lot of important information is in the supplements and I couldn't be bothered to download and read them. I don't think the math is anywhere near as important as the data and the conclusions.

Friday, December 16, 2022

Publishing a science book - Lesson #1: The publisher is always right about everything

Don't bother trying to reason with a publisher. All of them have different views on proper style and every single one of them is absolutely certain that their style is the only correct one.

I'm in the middle of the copyedit stage of my book. This is the stage where a copyeditor goes through your manuscript and corrects spelling and grammar. Dealing with one of my manuscripts is a lot of work for any copyeditor, and I greatly appreciate the effort. My book is a lot better now than it was a few weeks ago. (Who knew that there was only one l in canceled?)

It's also the stage where the publisher imposes their particular style on the manuscript, and that can be a problem. I'll document some of the issues in subsequent posts, but to give you an example, consider the titles of books in the reference list. I wrote them like this: The Selfish Gene and Molecular and Genome Evolution. This is not in line with my publisher's handbook of style, so the titles were converted to lowercase, as in: The selfish gene and Molecular and genome evolution. I objected, pointing to numerous other science books that use the same titles that are on the covers of the books and suggesting that my readers were more familiar with The Selfish Gene than with The selfish gene.

I was overruled by my publisher who noted that they make their style choices for good reasons—it's for "consistency, clarity, and ease of reading." I assume that publishers, such as Oxford, would make the same argument while insisting that the title should be The Selfish Gene.

In case you ever find yourself in this position, you should keep in mind that your contract will almost certainly say that the publisher has complete control of your book and they can make any changes they want as long as it doesn't affect the meaning of what you wrote.

Here's what it says in my contract, "The Publisher shall publish the Author's work in whatever style and format it thinks most suitable ... While the Publisher may, in its sole discretion, consult the Author with respect to said style and format, the Publisher retains the right to make all final decisions on matters of format, design, selling price and marketing."

I was aware of some issues with inappropriate covers and titles in the past, so I had an extra sentence added to the contract that said, "The Publisher and Author will discuss and agree upon the title and cover design." It's a good thing I put that in, because the publisher was pressuring me to change the title of the book and I was able to resist.

Authors can't win most fights over style and format. I've been discussing the publishing of science books with a number of other authors over the past few months and several of them told me not to bother trying to argue with a publisher because they will never give in. They have a set style for all books and they won't make an exception for an individual author no matter how good an argument you make.

I didn't listen to those other authors. Silly me.

I'm thinking of trying to write a standard set of guidelines that scientists could put into their contracts to cover the most egregious style restrictions. It might be helpful if all science writers would insist on inserting these guidelines into their contracts.


Can the AI program ChatGPT pass my exam?

There's a lot of talk about ChatGPT and how it can prepare lectures and get good grades on undergraduate exams. However, ChatGPT is only as good as the information that's popular on the internet and that's not always enough to get a good grade on my exam.

ChatGPT is an artificial intelligence (AI) program that's designed to answer questions using a style and language that's very much like the responses you would get from a real person. It was developed by OpenAI, a tech company in San Francisco. You can create an account and log in to ask any question you want.

Several professors have challenged it with exam questions and they report that ChatGPT would easily pass their exams. I was skeptical, especially when it came to answering questions on controversial topics where there is no clear answer. I also suspected that ChatGPT would get its answers from the internet, which means that popular, but incorrect, views would likely be part of ChatGPT's response.

Here are my questions and the AI program's answers. It did quite well in some cases but not so well in others. My main concern is that programs like this might be judged to be reliable sources of information despite the fact that the real source is suspect.

Monday, December 12, 2022

Did molecular biology make any contribution to evolutionary theory?

Some evolutionary biologists think—incorrectly, in my opinion—that molecular biology has made no contributions to our understanding of evolution.

PNAS published a series of articles on Gregor Mendel and one of them caught my eye. Here's what Nicholas Barton wrote in his article The "New Synthesis".

During the 1960s and 1970s, there were further conceptual developments—largely independent of the birth of molecular biology during the previous two decades (15). First, there was an understanding that adaptations cannot be explained simply as being “for the good of the species” (16, 17). One must explain how the genetic system (including sexual reproduction, recombination, and a fair meiosis, with each copy of a gene propagating with the same probability) is maintained through selection on individual genes, and remains stable despite mutations that would disrupt the system (17, 19, 20). Second, and related to this, there was an increased awareness of genetic conflicts that arise through sexual reproduction; selfish elements may spread through biased inheritance, even if they reduce individual fitness (19, 21, 22). In the decade following the discovery that DNA carries genetic information, all the fundamental principles of molecular biology were established: the flow of information from sequences of DNA through RNA to protein, the regulation of genes by binding to specific sequences in promoters, and the importance of allostery in allowing arbitrary regulatory networks (23, 24). Yet, the extraordinary achievements of molecular biology had little effect on the conceptual development of evolutionary biology. Conversely, although evolutionary arguments were crucial in the founding of molecular biology, they have had rather little influence in the half-century since (e.g., ref. 25). Of course, molecular biology has revealed an astonishing range of adaptations that demand explanation—for example, the diversity of biochemical pathways, that allow exploitation of almost any conceivable resource, or the efficiency of molecular machines such as the ribosome, which translates the genetic code. Technical advances have brought an accelerating flood of data, most recently, giving us complete genome sequences and expression patterns from any species. 
Yet, arguably, no fundamentally new principles have been established in molecular biology, and, in evolutionary biology, despite sophisticated theoretical advances and abundant data, we still grapple with the same questions as a century or more ago.

This does not seem fair to me. I think that neutral theory, nearly neutral theory, and the importance of random genetic drift relied heavily on work done by molecular biologists. Similarly, the development of dating techniques using DNA and protein sequences is largely the work of molecular biologists. It wasn't the adaptationists or the paleontologists who discovered that humans and chimpanzees shared a common ancestor 5-7 million years ago and it wasn't either of those groups who discovered the origin of mitochondria.

And some of us are grappling with the idea that most of our genome is junk DNA, a question that never would have occurred to evolutionary biologists from a century ago.

Barton knows all about modern population genetics and the importance of neutral theory because later on he says,

If we consider a single allele, then we can see it as “effectively neutral” if its effect on fitness is less than ∼1/2Ne. This idea was used by Ohta (54) in a modification of the neutral theory, to suggest why larger populations might be less diverse than expected (because a smaller fraction of mutations would be effectively neutral), and why rates of substitution might be constant per year rather than per generation (because species with shorter generation times might tend to have large populations, and have a smaller fraction of effectively neutral mutations that contribute to long-term evolution). Lynch (21) has applied this concept to argue that molecular adaptations that are under weak selection cannot be established or maintained in (relatively) smaller populations, imposing a “drift barrier” to adaptation. Along the same lines, Kondrashov (55) has argued that deleterious mutations with Nes ≈ 1 will accumulate, steadily degrading the population. Both ideas seem problematic if we view adaptation as due to optimization of polygenic traits: Organisms can be well adapted even if drift dominates selection on individual alleles, and, under a model of stabilizing selection on very many traits, any change that degrades fitness can be compensated.

Barton may think that the drift-barrier hypothesis is "problematic" but it certainly seems like a significant advance that owes something to molecular biology.

What do you think? Do you agree with Barton that "... the extraordinary achievements of molecular biology had little effect on the conceptual development of evolutionary biology"?


Friday, December 02, 2022

Sequencing both copies of your diploid genome

New techniques are being developed to obtain the complete sequences of both copies (maternal and paternal) of a typical diploid individual.

The first two sequences of the human genome were published twenty years ago by the International Human Genome Project and by a company called Celera Genomics. The published sequences were a consensus using DNA from multiple individuals, so the final result didn't represent the sequence of any one person. Furthermore, since each of us has inherited separate genomes from our mother and father, our DNA is actually a mixture of two different haploid genomes. Most published genome sequences are an average of these two separate genomes where the choice of nucleotide at any one position is arbitrary.

The first person to have a complete genome sequence was James Watson in 2007, but that was a composite genome sequence. Craig Venter's genome sequence was published a few months later and it was the first complete genome sequence containing separate sequences for each of his 46 chromosomes (one copy of each chromosome from each parent). In today's language, we refer to this as a diploid sequence.

The current reference sequence is based on the data published by the public consortium (International Human Genome Project)—nobody cares about the Celera sequence. Over the years, more and more sequencing data has been published and incorporated into the standard human reference genome in order to close most gaps and improve the accuracy. The current version is called GRCh38.p14, from February 3, 2022. It's only 95% complete because it's missing large stretches of repetitive DNA, especially in the centromere regions and at the ends of each chromosome (telomeric regions).

The important point for this discussion is that GRCh38 is not representative of the genomes of most people on Earth because there has been a bias in favor of sequencing European genomes. (Some variants are annotated in the reference genome but this can't continue.) Many scientists are interested in the different kinds of variants present in the human population, so they would like to create databases of genomes from diverse populations.

The first complete, telomere-to-telomere (T2T), human genome sequence was published last year [A complete human genome sequence (2022)]. It was made possible by advances in sequencing technology that generated long reads of 10,000 bp and ultra-long reads of up to 1,000,000 bp [Telomere-to-telomere sequencing of a complete human genome]. The DNA is from the CHM13 cell line, which has identical copies of each chromosome, so there's no ambiguity due to differences in the maternal and paternal copies. The full name of this sequence is CHM13-T2T.

The two genomes (GRCh38 and CHM13) can't be easily merged, so right now there are competing reference genomes [What do we do with two different human genome reference sequences?].

The techniques used to sequence the CHM13 genome make it possible to routinely obtain diploid genome sequences from a large number of individuals because overlapping long reads can link markers on the same chromosome and distinguish between the maternal and paternal chromosomes. However, in practice, the error rate of long read sequencing made assembly of separate chromosomes quite difficult. Recent advances in the accuracy of long read sequencing have been developed by PacBio, and this high fidelity sequencing (PacBio HiFi sequencing) promises to change the game.

The Human Pangenome Reference Consortium has tackled the problem by sequencing the genome of an Ashkenazi man (HG002) and his parents (HG003, the father, and HG004, the mother) using the latest sequencing techniques. They then asked the genome community to submit assemblies produced with their best software in a kind of "assembly bakeoff." They got 23 responses.

Jarvis, E. D., Formenti, G., Rhie, A., Guarracino, A., Yang, C., Wood, J., et al. (2022) Semi-automated assembly of high-quality diploid human reference genomes. Nature, 611:519-531. [doi: 10.1038/s41586-022-05325-5]

The current human reference genome, GRCh38, represents over 20 years of effort to generate a high-quality assembly, which has benefitted society. However, it still has many gaps and errors, and does not represent a biological genome as it is a blend of multiple individuals. Recently, a high-quality telomere-to-telomere reference, CHM13, was generated with the latest long-read technologies, but it was derived from a hydatidiform mole cell line with a nearly homozygous genome. To address these limitations, the Human Pangenome Reference Consortium formed with the goal of creating high-quality, cost-effective, diploid genome assemblies for a pangenome reference that represents human genetic diversity. Here, in our first scientific report, we determined which combination of current genome sequencing and assembly approaches yield the most complete and accurate diploid genome assembly with minimal manual curation. Approaches that used highly accurate long reads and parent–child data with graph-based haplotype phasing during assembly outperformed those that did not. Developing a combination of the top-performing methods, we generated our first high-quality diploid reference assembly, containing only approximately four gaps per chromosome on average, with most chromosomes within ±1% of the length of CHM13. Nearly 48% of protein-coding genes have non-synonymous amino acid changes between haplotypes, and centromeric regions showed the highest diversity. Our findings serve as a foundation for assembling near-complete diploid human genomes at scale for a pangenome reference to capture global genetic variation from single nucleotides to structural rearrangements.

We don't need to get into all the details but there are a few observations of interest.

  • All of the attempted assemblies were reasonably good but the best ones had to make use of the parental genomes to resolve discrepancies.
  • Some assemblies began by separating the HG002 (child) sequences into two separate groups based on their similarity to one of the parents. Others generated assemblies without using the parental data then fixed any problems by using the parental genomes and a technique called "graph-based phasing." The second approach was better.
  • All of the final assemblies were contaminated with varying amounts of E. coli and yeast DNA and/or various adaptor DNA sequences that were not removed by filters. All of them were contaminated with mitochondrial DNA that did not belong in the assembled chromosomes.
  • The most common sources of assembly errors were: (1) missing joins where large stretches of DNA should have been brought together, (2) misjoins where two large stretches (contigs) were inappropriately joined, (3) incorrect inversions, and (4) false duplications.
  • The overall accuracy of the best assemblies was one base pair error in 100,000 bp (10-5).
  • Using the RefSeq database of 27,225 genes, most assemblies captured almost all of these confirmed and probable genes but several hundred were not complete and many were missing.
  • No chromosome was complete telomere-to-telomere (T2T) but most were nearly complete, including the complicated centromere and telomere regions.
  • The two genomes (paternal and maternal) differed at 2.6 million SNPs (single nucleotides), 631,000 small structural variations (<50 bp), and 11,600 large structural variations (>50 bp).
  • The consortium used the best assembly algorithm to analyze the genomes of an additional 47 individuals. They began with the same coverage used for HG002; namely, 35X coverage. (Each stretch of DNA was sequenced 35 times on average, in about equal amounts in both directions.) This was not successful, so they had to increase the coverage to 130X to get good assemblies. They estimate that each additional diploid sequence will require 50-60X coverage. This kind of coverage would have been impossible in the 1990s when the first human genome was assembled, but now it's fairly easy as long as you have the computer power and storage to deal with it.
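The parental-binning approach described in the list above can be sketched in a few lines. This is a toy illustration of the idea only, with made-up sequences and a tiny k of my own choosing; real trio-based pipelines build k-mer databases from whole-genome parental reads, not short strings.

```python
# Toy sketch of trio binning: assign a child's reads to a parental
# haplotype by counting k-mers that occur in only one parent.

def kmers(seq, k=4):
    """Return the set of all k-mers in a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def bin_read(read, mat_only, pat_only, k=4):
    """Classify a read as 'maternal', 'paternal', or 'unassigned'."""
    hits = kmers(read, k)
    m, p = len(hits & mat_only), len(hits & pat_only)
    if m > p:
        return "maternal"
    if p > m:
        return "paternal"
    return "unassigned"

# Hypothetical parental sequences, purely for illustration.
maternal = "ACGTACGTTTGACCA"
paternal = "ACGTACGAAAGTCCA"
mat_k, pat_k = kmers(maternal), kmers(paternal)
mat_only = mat_k - pat_k   # k-mers seen only in the mother
pat_only = pat_k - mat_k   # k-mers seen only in the father

print(bin_read("CGTTTGACC", mat_only, pat_only))  # a read from the maternal haplotype
```

Once the reads are binned this way, each parental set can be assembled separately; the graph-based phasing strategy that performed best in the bakeoff instead resolves haplotypes during assembly.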


Thursday, December 01, 2022

University of Michigan biochemistry students edit Wikipedia

Students in a special topics course at the University of Michigan were taught how to edit a Wikipedia article in order to promote function in repetitive DNA and downplay junk.

The Wikipedia article on Repeated sequence (DNA) was heavily edited today by students taking an undergraduate course at the University of Michigan. One of the student leaders, Rasberry Neuron, left the following message on the "Talk" page.

This page was edited for a course assignment at the University of Michigan. The editing process included peer review by four students, the Chemistry librarian at the University of Michigan, and course instructors. The edits published on 12/01/2022 reflect improvements guided by the original editing team and the peer review feedback. See the article's History page for information about what changes were made from the previous version.

References to junk DNA were removed by the students but quickly added back by Paul Gardner who is currently fixing other errors that the students have made.

I checked out the webpage for the course at CHEM 455_505 Special Topics in Biochemistry - Nucleic Acids Biochemistry. The course description is quite revealing.

We now realize that the human genome contains at least 80,000 non-redundant non-coding RNA genes, outnumbering protein-coding genes by at least 4-fold, a revolutionary insight that has led some researchers to dub the eukaryotic cell an “RNA machine”. How exactly these ncRNAs guide every cellular function – from the maintenance and processing to the regulated expression of all genetic information – lies at the leading edge of the modern biosciences, from stem cell to cancer research. This course will provide an equally broad as deep overview of the structure, function and biology of DNA and particularly RNA. We will explore important examples from the current literature and the course content will evolve accordingly.

The class will be taught from a chemical/molecular perspective and will bring modern interdisciplinary concepts from biochemistry, biophysics and molecular biology to the fore.

Most of you will recognize right away that there are factually incorrect statements (i.e. misinformation) in that description. It is not true that there are at least 80,000 noncoding genes in the human genome. At some point in the future that may turn out to be true but it's highly unlikely. Right now, there are at most 5,000 proven noncoding genes. There are many scientists who claim that the mere existence of a noncoding transcript is proof that a corresponding gene must exist but that's not how science works. Before declaring that a gene exists you must present solid evidence that it produces a biologically relevant product [Most lncRNAs are junk] [Wikipedia blocks any mention of junk DNA in the "Human genome" article] [Editing the Wikipedia article on non-coding DNA] [On the misrepresentation of facts about lncRNAs] [The "standard" view of junk DNA is completely wrong] [What's In Your Genome? - The Pie Chart] [How many lncRNAs are functional?].

I'm going to email a link to this post to the course instructors and some of the students. Let's see if we can get them to discuss junk DNA.


Monday, November 21, 2022

How not to write a Nature abstract

A friend recently posted a figure on Facebook that instructs authors in the correct way to prepare a summary paragraph (abstract) for publication in Nature. It uses a specific example and the advice is excellent [How to construct a Nature summary paragraph].

I thought it might be fun to annotate a different example so I randomly selected a paper on genomics to see how it compared. The one that popped up was An integrated encyclopedia of DNA elements in the human genome.


Saturday, November 19, 2022

How many enhancers in the human genome?

In spite of what you might have read, the human genome does not contain one million functional enhancers.

The Sept. 15, 2022 issue of Nature contains a news article on "Gene regulation" [Two-layer design protects genes from mutations in their enhancers]. It begins with the following sentence.

The human genome contains only about 20,000 protein-coding genes, yet gene expression is controlled by around one million regulatory DNA elements called enhancers.

Sandwalk readers won't need to be told the reference for such an outlandish claim because you all know that it's the ENCODE Consortium summary paper from 2012—the one that kicked off their publicity campaign to convince everyone of the death of junk DNA (ENCODE, 2012). ENCODE identified several hundred thousand transcription factor (TF) binding sites and in 2012 they estimated that the total number of base pairs involved in regulating gene expression could account for 20% of the genome.

How many of those transcription factor binding sites are functional and how many are due to spurious binding to sites that have nothing to do with gene regulation? We don't know the answer to that question but we do know that there will be a huge number of spurious binding sites in a genome of more than three billion base pairs [Are most transcription factor binding sites functional?].
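A back-of-envelope calculation shows why a huge number of chance binding sites is expected. The 10 bp motif length and the random-sequence model are my assumptions for illustration; real TF motifs are degenerate (they tolerate mismatches), so the number of chance matches is far higher than this exact-match estimate.

```python
# Expected number of exact matches to a fixed 10 bp motif in a random
# genome the size of the human genome, counting both strands.
# (Illustrative assumptions: equal base frequencies, exact matching.)
genome_bp = 3.2e9
motif_len = 10
p_match = (1 / 4) ** motif_len          # chance a random position matches
expected = 2 * genome_bp * p_match      # both strands can be bound
print(f"{expected:,.0f}")               # on the order of thousands of sites
```

Even under these conservative assumptions a single motif turns up thousands of times by chance, so spurious binding is the null expectation, not a surprise.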

The scientists in the ENCODE Consortium didn't know the answer either, but what's surprising is that they didn't even know there was a question. It never occurred to them that some of those transcription factor binding sites have nothing to do with regulation.

Fast forward ten years to 2022. Dozens of papers have been published criticizing the ENCODE Consortium for their lack of knowledge of the basic biochemical properties of DNA binding proteins. Surely nobody who is interested in this topic still believes that there are one million functional regulatory elements (enhancers) in the human genome?

Wrong! The authors of this Nature article, Ran Elkon at Tel Aviv University (Israel) and Reuven Agami at the Netherlands Cancer Institute (Amsterdam, Netherlands), didn't get the message. They think it's quite plausible that the expression of every human protein-coding gene is controlled by an average of 50 regulatory sites even though there's not a single known example of any such gene.

Not only that, for some reason they think it's only important to mention protein-coding genes in spite of the fact that the reference they give for 20,000 protein-coding genes (Nurk et al., 2022) also claims there are an additional 40,000 noncoding genes. This is an incorrect claim, since Nurk et al. have no proof that all those transcribed regions are actually genes, but let's play along and assume that there really are 60,000 genes in the human genome. That reduces the average to "only" 17 enhancers per gene. I don't know of a single gene that has 17 or more proven enhancers, do you?
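For the record, the arithmetic behind those per-gene averages is simple division of the claimed one million enhancers by the two gene counts:

```python
# Averages implied by the claims quoted in the post.
enhancers = 1_000_000              # the ENCODE-derived claim
per_coding = enhancers / 20_000    # protein-coding genes only
per_all = enhancers / 60_000       # if the 40,000 extra "genes" are real
print(per_coding)                  # 50.0 enhancers per gene
print(round(per_all))              # 17 enhancers per gene
```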

Why would two researchers who study gene regulation say that the human genome contains one million enhancers when there's no evidence to support such a claim and it doesn't make any sense? Why would Nature publish this paper when surely the editors must be aware of all the criticism that arose out of the 2012 ENCODE publicity fiasco?

I can think of only two answers to the first question. Either Elkon and Agami don't know of any papers challenging the view that most TF binding sites are functional (see below) or they do know of those papers but choose to ignore them. Neither answer is acceptable.

I think that the most important question in human gene regulation is how much of the genome is devoted to regulation. How many potential regulatory sites (enhancers) are functional and how many are spurious non-functional sites? Any paper on regulation that does not mention this problem should not be published. All results have to be interpreted in light of conflicting claims about function.

Here are some examples of papers that raise the issue. The point is not to prove that these authors are correct - although they are correct - but to show that there's a controversy. You can't just state that there are one million regulatory sites as if it were a fact when you know that the results are being challenged.

"The observations in the ENCODE articles can be explained by the fact that biological systems are noisy: transcription factors can interact at many nonfunctional sites, and transcription initiation takes place at different positions corresponding to sequences similar to promoter sequences, simply because biological systems are not tightly controlled." (Morange, 2014)

"... ENCODE had not shown what fraction of these activities play any substantive role in gene regulation, nor was the project designed to show that. There are other well-studied explanations for reproducible biochemical activities besides crucial human gene regulation, including residual activities (pseudogenes), functions in the molecular features that infest eukaryotic genomes (transposons, viruses, and other mobile elements), and noise." (Eddy, 2013)

"Given that experiments performed in a diverse number of eukaryotic systems have found only a small correlation between TF-binding events and mRNA expression, it appears that in most cases only a fraction of TF-binding sites significantly impacts local gene expression." (Palazzo and Gregory, 2014)

One surprising finding from the early genome-wide ChIP studies was that TF binding is widespread, with thousand to tens of thousands of binding events for many TFs. These number do not fit with existing ideas of the regulatory network structure, in which TFs were generally expected to regulate a few hundred genes, at most. Binding is not necessarily equivalent to regulation, and it is likely that only a small fraction of all binding events will have an important impact on gene expression. (Slattery et al., 2014)

"Detailed maps of transcription factor (TF)-bound genomic regions are being produced by consortium-driven efforts such as ENCODE, yet the sequence features that distinguish functional cis-regulatory sites from the millions of spurious motif occurrences in large eukaryotic genomes are poorly understood." (White et al., 2013)

"One outstanding issue is the fraction of factor binding in the genome that is 'functional', which we define here to mean that disturbing the protein-DNA interaction leads to a measurable downstream effect on gene regulation." (Cusanovich et al., 2014)

"... we expect, for example, accidental transcription factor-DNA binding to go on at some rate, so assuming that transcription equals function is not good enough. The null hypothesis after all is that most transcription is spurious and alternative transcripts are a consequence of error-prone splicing." (Hurst, 2013)

"... as a chemist, let me say that I don't find the binding of DNA-binding proteins to random, non-functional stretches of DNA surprising at all. That hardly makes these stretches physiologically important. If evolution is messy, chemistry is equally messy. Molecules stick to many other molecules, and not every one of these interactions has to lead to a physiological event. DNA-binding proteins that are designed to bind to specific DNA sequences would be expected to have some affinity for non-specific sequences just by chance; a negatively charged group could interact with a positively charged one, an aromatic ring could insert between DNA base pairs and a greasy side chain might nestle into a pocket by displacing water molecules. It was a pity the authors of ENCODE decided to define biological functionality partly in terms of chemical interactions which may or may not be biologically relevant." (Jogalekar, 2012)


Nurk, S., Koren, S., Rhie, A., Rautiainen, M., Bzikadze, A. V., Mikheenko, A., et al. (2022) The complete sequence of a human genome. Science, 376:44-53. [doi:10.1126/science.abj6987]

The ENCODE Project Consortium (2012) An integrated encyclopedia of DNA elements in the human genome. Nature, 489:57-74. [doi:10.1038/nature11247]

Academic workers on strike at University of California schools

Graduate students, postdocs, and other "academic workers" are on strike for higher wages and better working conditions at University of California schools but it's very difficult to understand what's going on.

Several locals of the United Auto Workers union are on strike. The groups include Academic Researchers, Academic Student Employees (ASEs), Postdocs, and Student Researchers. The list of demands can be found on the UAW website: UAW Bargaining Highlights (All Units).

Here's the problem. At my university, graduate students can make money by getting a position as a TA (teaching assistant). This is a part-time job at an hourly rate. This may be a major source of income for humanities students but for most science students it's just a supplement to their stipend. The press reports on this strike keep referring to a yearly income and they make it sound like part-time employment as a TA should pay a living wage. For example, a recent Los Angeles Times article says,

The workers are demanding a base salary of $54,000 for all graduate student workers, child-care subsidies, enhanced healthcare for dependents, longer family leave, public transit passes and lower tuition costs for international scholars. The union said the workers earn an average current pay of about $24,000 a year.

I don't understand this concept of "base salary." In my experience, most TAs work part time. If they were paid $50 per hour then they would have to work about 30 hours per week over two semesters in order to earn $54,000 per year. That doesn't seem to leave much time for working on a thesis. Perhaps it includes a stipend that doesn't require teaching?
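The arithmetic behind that estimate can be sketched quickly. This is only a back-of-the-envelope check, and both the $50 hourly rate and the 36-week academic year (two ~18-week semesters) are my assumptions, not figures from the union or the university:

```python
# Back-of-the-envelope check: hours per week a TA would need to work
# to earn the demanded base salary at an assumed hourly rate.
TARGET_SALARY = 54_000   # demanded base salary, USD per year
HOURLY_RATE = 50         # assumed hourly rate, USD
WEEKS_PER_YEAR = 36      # assumed: two ~18-week semesters

total_hours = TARGET_SALARY / HOURLY_RATE       # hours of work per year
hours_per_week = total_hours / WEEKS_PER_YEAR   # hours per week

print(total_hours)      # 1080.0
print(hours_per_week)   # 30.0
```

Under those assumptions the demand works out to about 30 hours per week of paid teaching, which is the point of the objection: that leaves little time for thesis work unless part of the money is a stipend.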

Our graduate students are paid a living allowance (currently about $28,000 Cdn) and their tuition and fees are covered by an extra $8,000. Most of them don't do any teaching. Almost all of this money comes from research grants and not directly from the university.

The University of California system seems to be very different from the one I'm accustomed to. Is the way TAs are employed and paid obvious to most Americans? Do you understand the issues?

I also don't get the situation with postdocs. The union is asking for a $70,000 salary for postdocs and the university is offering an 8% increase in the first year and smaller increases in subsequent years. In Canada, postdocs are mostly paid from research grants and not from university funds. The average postdoc salary at the University of Toronto is $51,000 (Cdn) but the range is quite large ($40K - $100K). I don't think the University of Toronto can dictate to PIs the amount of money that they have to pay a postdoc but it does count them as employees and ensures that postdocs have healthcare and suitable working conditions. These postdocs are members of a union (CUPE 3902) and there is a minimum stipend of $36,000 (Cdn).

Can someone explain the situation at the University of California schools? Are they asking for a minimum salary of $70,000 (US) ($93,700 Cdn)? Will PIs have to pay postdocs more from their research grants if the union wins a wage increase but the postdocs are already earning more than $70,000?

It's all very confusing and the press doesn't seem to have a good handle on the situation.

Note: I know that the union doesn't expect the university to meet its maximum demands. I'm sure they will settle for something less. That's not the point I'm trying to make. I'm just trying to understand how graduate students and postdocs are paid in University of California schools.


Friday, November 18, 2022

Higher education for all?

I discuss a recent editorial in Science that advocates expanding university education in order to prepare students for jobs.

I believe that the primary goal of a university education is to teach students how to think (critical thinking). This goal is usually achieved within the context of an in-depth study in a particular field such as history, English literature, geology, or biochemistry. The best way of achieving that goal is called student-centered learning. It involves, among other things, classes that focus on discussion and debate under the guidance of an experienced teacher.

Universities and colleges also have job preparation programs such as business management, medicine, computer technology, and, to a lesser extent, engineering. These programs may, or may not, teach critical thinking (usually not).

About 85% of students who enter high school will graduate. About 63% of high school graduates go to college or university. The current college graduation rate is 65% (within six years). What this means is that for every 100 students that begin high school roughly 35 will graduate from college.
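That "roughly 35 out of 100" figure is just the three cited rates multiplied together; a minimal sketch of the pipeline arithmetic, using only the percentages quoted above:

```python
# Pipeline arithmetic from the cited rates: for every 100 students
# entering high school, how many eventually graduate from college?
HS_GRAD_RATE = 0.85        # high-school graduation rate
COLLEGE_ENTRY_RATE = 0.63  # share of HS graduates entering college
COLLEGE_GRAD_RATE = 0.65   # college graduation rate (within six years)

per_100 = 100 * HS_GRAD_RATE * COLLEGE_ENTRY_RATE * COLLEGE_GRAD_RATE
print(round(per_100))  # 35
```

The exact product is about 34.8, which rounds to the 35 quoted in the text.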

Now let's look at an editorial written by Marcia McNutt, President of the United States National Academy of Sciences. The editorial appears in the Nov. 11 issue of Science [Higher education for all]. She begins by emphasizing the importance of a college degree in terms of new jobs and the wealth of nations.

Currently, 75% of new jobs require a college degree. Yet in the US and Europe, only 40% of young adults attend a 2-year or 4-year college—a percentage that has either not budged or only modestly risen in more than two decades— despite a college education being one of the proven ways to lift the socioeconomic status of underprivileged populations and boost the wealth of nations.

There's no question that well-educated graduates will contribute to society in many ways but there is a question about what "well-educated" really means. Is it teaching specific job skills or is it teaching students how to think? I vote for teaching critical thinking and not for job training. I think that creating productive citizens who can fill a variety of different jobs is a side-benefit of preparing them to cope with a complex society that requires critical thinking. I don't think my view is exactly the same as Marcia McNutt's because she emphasizes training as a main goal of college education.

Universities, without building additional facilities, could expand universal and life-long access to higher education by promoting more courses online and at satellite community-college campuses.

Statements like that raise two issues that don't get enough attention. The first one concerns the number of students who should graduate from college in an ideal world. What is that number and at what stage should it be increased? Should there be more high school graduates going to college? If so, does that require lowering the bar for admission or is the cost of college the main impediment? Is there a higher percentage of students entering college in countries with free, or very low, tuition? Should there be more students graduating? If so, one easy way to do that is to make university courses easier. Is that what we want?

The question that's never asked is what percentage of the population is smart enough to get a college degree? Is it much higher than 40%?

The second issue concerns the quality of education. The model that I suggested above is not consistent with online courses and there's a substantial number of papers in the pedagogical literature showing that student-centered education doesn't work very well online. Does that mean that we should adopt a different way of teaching in order to make college education more accessible? If so, at what cost?

McNutt gives us an example of the kind of change she envisages.

At Colorado College, students complete a lab science course in only 4 weeks, attending lectures in the morning and labs in the afternoon. This success suggests that US universities could offer 2-week short courses that include concentrated, hands-on learning and teamwork in the lab and the field for students who already mastered the basics through online lectures. Such an approach is more common in European institutions of higher education and would allow even those with full-time employment elsewhere to advance their skills during vacations or employer-supported sabbaticals for the purpose of improving the skills of the workforce. Opportunities abound for partnerships with industry for life-long learning. The availability of science training in this format could also be a boon for teachers seeking to fill gaps in their science understanding.

This is clearly a form of college education that focuses on job skills and even goes as far as suggesting that industry could be a "partner" in education. (As an aside, it's interesting that government employers, schools, and nonprofits are never asked to be partners in education even though they hire a substantial number of college graduates.)

Do you agree that the USA should be expanding the number of students who graduate from college and do you agree that the goal is to give them the skills needed to get a job?


Tuesday, November 08, 2022

Science education in an age of misinformation

I just read an annoying article in Boston Review: The Inflated Promise of Science Education. It was written by Catarina Dutilh Novaes, a Professor of Philosophy, and Silvia Ivani, a teaching fellow in philosophy. Their main point was that the old-fashioned way of teaching science has failed because the general public mistrusts scientists. This mistrust stems, in part, from "legacies of scientific or medical racism and the commercialization of biomedical science."

The way to fix this, according to the authors, is for scientists to address these "perceived moral failures" by engaging more with society.

"... science should be done with and for society; research and innovation should be the product of the joint efforts of scientists and citizens and should serve societal interests. To advance this goal, Horizon 2020 encouraged the adoption of dialogical engagement practices: those that establish two-way communication between experts and citizens at various stages of the scientific process (including in the design of scientific projects and planning of research priorities)."

Clearly, scientific education ought to mean the implanting of a rational, sceptical, experimental habit of mind. It ought to mean acquiring a method – a method that can be used on any problem that one meets – and not simply piling up a lot of facts.

George Orwell

This is nonsense. It has nothing to do with science education; instead, the authors are focusing on policy decisions such as convincing people to get vaccinations.

The good news is that the Boston Review article links to a report from Stanford University that's much more intelligent: Science Education in an Age of Misinformation. The philosophers think that this report advocates "... a well-meaning but arguably limited approach to the problem along the lines of the deficit model ...." where "deficit model" refers to a mode of science communication where scientists just dispense knowledge to the general public who are supposed to accept it uncritically.

I don't know of very many science educators who think this is the right way to teach. I think the prevailing model is to teach the nature of science (NOS) [The Nature of Science (NOS)]. That requires educating students and the general public about the way science goes about creating knowledge and why evidence-based knowledge is reliable. It's connected to teaching critical thinking, not teaching a bunch of scientific facts. The "deficit model" is not the current standard in science education and it hasn't been seriously defended for decades.

"Appreciating the scientific process can be even more important than knowing scientific facts. People often encounter claims that something is scientifically known. If they understand how science generates and assesses evidence bearing on these claims, they possess analytical methods and critical thinking skills that are relevant to a wide variety of facts and concepts and can be used in a wide variety of contexts." (National Science Foundation, Science and Technology Indicators, 2008)


An important part of the modern approach as described in the Stanford report is teaching students (and the general public) how to gather information and determine whether or not it's reliable. That means you have to learn how to evaluate the reliability of your sources and whether you can trust those who claim to be experts. I taught an undergraduate course on this topic for many years and I learned that it's not easy to teach the nature of science and critical thinking.

The Stanford Report is about the nature of science (NOS) model and how to implement it in the age of social media. Specifically, it's about teaching ways to evaluate your sources when you are inundated with misinformation.

The main part of this approach is going to seem controversial to many because it emphasizes the importance of experts and there's a growing reluctance in our society to trust experts. That's what the Boston Review article was all about. The solutions advocated by the authors of that article are very different from the ones presented in the Stanford report.

The authors of the Stanford report recognize that there's a widespread belief that well-educated people can make wise decisions based entirely on their own knowledge and judgement, in other words, that they can be "intellectually independent." They reject that false belief.

The ideal envisioned by the great American educator and philosopher John Dewey—that it is possible to educate students to be fully intellectually independent—is simply a delusion. We are always dependent on the knowledge of others. Moreover, the idea that education can educate independent critical thinkers ignores the fact that to think critically in any domain you need some expertise in that domain. How then, is education to prepare students for a context where they are faced with knowledge claims based on ideas, evidence, and arguments they do not understand?

The goal of science education is to teach students how to figure out which source of information is supported by the real experts and that's not an easy task. It seems pretty obvious that scientists are the experts but not all scientists are experts so how do you tell the difference between science quacks and expert scientists?

The answer requires some knowledge about how science works and how scientists behave. The Stanford report says that this means acquiring an understanding of "science as a social practice." I think "social practice" is a bad choice of terms and I would have preferred that they stick with "nature of science," but that was their choice.

The mechanisms for recognizing the real experts rely on critical thinking but it's not easy. Two of the lead authors1 of the Stanford Report published a short synopsis in Science last month (October 2022) [https://doi.org/10.1126/science.abq8093]. Their heuristic is shown on the right.

The idea here is that you can judge the quality of scientific information by questioning the credentials of the authors. This is how "outsiders" can make judgements about the quality of the information without being experts themselves. The rules are pretty good but I wish there had been a bit more on "Unbiased scientific information" as a criterion. I think that you can make a judgement based on whether the "experts" take the time to discuss alternative hypotheses and explain the positions of those who disagree with them but this only applies to genuine scientific controversies and if you don't know that there's a controversy then you have no reason to apply this filter.

For example, if a paper is telling you about the wonderful world of regulatory RNAs and points out that there are 100,000 newly discovered genes for these RNAs, you would have no reason to question that if the scientists have no conflict of interest and come from prestigious universities. You would have to rely on the reviewers of the paper, and the journal, to insist that alternative explanations (e.g. junk RNA) were mentioned. That process doesn't always work.

There's no easy way to fix that problem. Scientists are biased all the time but outsiders (i.e. non-experts) have no way of recognizing bias. I used to think that we could rely on science journalists to alert us to these biases and point out that the topic is controversial and no consensus has been reached. That didn't work either.

At least the heuristic works some of the time so we should make sure we teach it to students of all ages. It would be even nicer if we could teach scientists how to be credible and how to recognize their own biases.


1. The third author is my former supervisor, Bruce Alberts, who has been interested in science education for more than fifty years. He did a pretty good job of educating me! :-)

Saturday, November 05, 2022

Nature journalist is confused about noncoding RNAs and junk

Nature Methods is one of the journals in Nature Portfolio published by Springer Nature. Its focus is novel methods in the life sciences.

The latest issue (October, 2022) highlights the issues with identifying functional noncoding RNAs and the editorial, Decoding noncoding RNAs, is quite good—much better than the comments in other journals. Here's the final paragraph.

Despite the increasing prominence of ncRNA, we remind readers that the presence of a ncRNA molecule does not always imply functionality. It is also possible that these transcripts are non-functional or products from, for example, splicing errors. We hope this Focus issue will provide researchers with practical advice for deciphering ncRNA’s roles in biological processes.

However, this praise is mitigated by the appearance of another article in the same journal. Science journalist Vivien Marx has written a commentary with a title that was bound to catch my eye: How noncoding RNAs began to leave the junkyard. Here's the opening paragraph.

Junk. In the view of some, that’s what noncoding RNAs (ncRNAs) are — genes that are transcribed but not translated into proteins. With one of his ncRNA papers, University of Queensland researcher Tim Mercer recalls that two reviewers said, “this is good” and the third said, “this is all junk; noncoding RNAs aren’t functional.” Debates over ncRNAs, in Mercer’s view, have generally moved from ‘it’s all junk’ to ‘which ones are functional?’ and ‘what are they doing?’

This is the classic setup for a paradigm shaft. What you do is create a false history of a field and then reveal how your ground-breaking work has shattered the long-standing paradigm. In this case, the false history is that the standard view among scientists was that ALL noncoding RNAs were junk. That's nonsense. It means that these old scientists must have dismissed ribosomal RNA and tRNA back in the 1960s. But even if you grant that those were exceptions, it means that they knew nothing about Sidney Altman's work on RNase P (Nobel Prize, 1989), or 7SL RNA (Alu elements), or the RNA components of spliceosomes (snRNAs), or piRNAs, or snoRNAs, or microRNAs, or a host of regulatory RNAs that have been known for decades.

Knowledgeable scientists knew full well that there are many functional noncoding RNAs and that includes some that are called lncRNAs. As the editorial says, these knowledgeable scientists are warning about attributing function to all transcripts without evidence. In other words, many of the transcripts found in human cells could be junk RNA in spite of the fact that there are also many functional noncoding RNAs.

So, Tim Mercer is correct, the debate is over which ncRNAs are functional and that's the same debate that's been going on for 50 years. Move along folks, nothing to see here.

The author isn't going to let this go. She decides to interview John Mattick, of all people, to get a "proper" perspective on the field. (Tim Mercer is a former student of Mattick's.) Unfortunately, that perspective contains no information on how many functional ncRNAs are present and on what percentage of the genome their genes occupy. It's gonna take several hundred thousand lncRNA genes to make a significant impact on the amount of junk DNA but nobody wants to say that. With John Mattick you get a twofer: a false history (paradigm strawman) plus no evidence that your discoveries are truly revolutionary.

Nature Methods should be ashamed, not for presenting the views of John Mattick—that's perfectly legitimate—but for not putting them in context and presenting the other side of the controversy. Surely at this point in time (2022) we should all know that Mattick's views are on the fringe and most transcripts really are junk RNA?