Thursday, December 31, 2020

On the importance of controls

When doing an experiment, it's important to keep the number of variables to a minimum and it's important to have scientific controls. There are two types of controls. A negative control covers the possibility that you will get a signal by chance; for example, if you are testing an enzyme to see whether it degrades sugar, then the negative control is a tube with no enzyme. Some of the sugar may degrade spontaneously and you need to know this. A positive control is when you deliberately add something that you know will give a positive result; for example, if you are doing a test to see whether your sample contains protein, then you want to add an extra sample that contains a known amount of protein to make sure all your reagents are working.
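To make the logic concrete, here's a toy simulation in Python. Every number in it is invented for illustration; the point is simply that the test measurement means nothing until it's compared to the negative control.

```python
import random
from statistics import mean

def measure_sugar_loss(enzyme_present, n_replicates=3):
    """Simulated assay: fraction of sugar degraded in one tube.
    Invented numbers: ~2% spontaneous degradation, ~40% more when
    the enzyme is present, plus a little measurement noise."""
    baseline = 0.02
    effect = 0.40 if enzyme_present else 0.0
    return [baseline + effect + random.gauss(0, 0.01)
            for _ in range(n_replicates)]

negative_control = measure_sugar_loss(enzyme_present=False)  # no enzyme
test_sample = measure_sugar_loss(enzyme_present=True)

# The signal is only interpretable relative to the negative control.
signal = mean(test_sample) - mean(negative_control)
print(f"degradation above spontaneous background: {signal:.1%}")
```

A positive control would be a third tube spiked with an enzyme preparation known to work, confirming that a negative result isn't just a dead reagent.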

Most controls are more complicated than the examples I gave, but the principle is the same. It's true that some experiments don't appear to need the appropriate controls, but that may be an illusion. The controls might still be necessary in order to properly interpret the results, but they're not done because they are very difficult. This is often true of genomics experiments.

Consider the ENCODE experiments, where a great effort was made to map RNA transcripts, transcription factor binding sites, and open chromatin domains. In order to interpret these results correctly, you need both positive and negative controls, but the most important is the negative control. Here's how Sean Eddy describes the required control (Eddy, 2013):

To clarify what noise means, I propose the Random Genome Project. Suppose we put a few million bases of entirely random synthetic DNA into a human cell, and do an ENCODE project on it. Will it be reproducibly transcribed into mRNA-like transcripts, reproducibly bound by DNA-binding proteins, and reproducibly wrapped around histones marked by specific chromatin modifications? I think yes.

... Even as a thought experiment, the Random Genome Project states a null hypothesis that has been largely absent from these discussions in genomics. It emphasizes that it is reasonable to expect reproducible biochemical activities ... in random unselected DNA.
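A quick back-of-the-envelope calculation shows why Eddy expects the answer to be yes. Here's a sketch that assumes equal base frequencies and exact matching to a short recognition sequence; the motif lengths are typical values I chose for illustration, not anything taken from ENCODE.

```python
# Expected number of exact matches to a recognition sequence in random
# DNA, assuming equal base frequencies and counting both strands.
def expected_chance_matches(motif_length, genome_size=3_000_000_000):
    p_match = 0.25 ** motif_length       # chance of a match at any one position
    return 2 * genome_size * p_match     # two strands

for k in (6, 8, 10):
    print(f"{k}-bp motif: ~{expected_chance_matches(k):,.0f} chance matches")
# 6 bp: ~1.5 million; 8 bp: ~90,000; 10 bp: ~6,000
```

Even a highly specific 10-bp site is expected thousands of times by chance in a genome our size, so reproducible binding, by itself, is weak evidence of function.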

This may be a case where creating the control isn't easy, but we are reaching the stage where it may become necessary because stamp-collecting will only get you so far. Ford Doolittle has come up with a similar type of control for interpreting the functional elements (FEs) described by ENCODE (Doolittle, 2013):

Suppose that there had been (and probably, some day, there will be) ENCODE projects aimed at enumerating, by transcriptional and chromatin mapping, factor footprinting, and so forth, all of the FEs in the genomes of Takifugu and a lungfish, some small- and large-genomed amphibians (including several species of Plethodon), plants, and various protists. There are, I think, two possible general outcomes of this thought experiment, neither of which would give us clear license to abandon junk. The first outcome would be that FEs (estimated to be in the millions in our genome) turn out to be more or less constant in number, regardless of C-value—at least among similarly complex organisms. ... The second likely general outcome of my thought experiment would be that FEs as defined by ENCODE increase in number with C-value, regardless of apparent organismal complexity.

I've been thinking a lot lately about transcripts and alternative splicing. Massive numbers of RNAs are being identified in all kinds of tissues and all kinds of species now that the techniques have become routine. When multiple transcript variants from the same gene are identified, they are usually interpreted as genuine examples of alternative splicing. The field needs controls. The negative control is similar to the one proposed by Sean Eddy, but it's also important to have a positive control, which in this case would be a well-characterized set of genes with real alternative splicing where the function of the splice variants has been demonstrated. If your RNA-Seq experiment fails to detect the known alternatively spliced genes, then something is wrong with the experiment.
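In pipeline terms, that positive control is just a recall check against a gold-standard set. Here's a minimal sketch; the gene names are placeholders, not a real curated list.

```python
# Gold-standard genes with functionally demonstrated splice variants
# (placeholder names for illustration).
GOLD_STANDARD = {"GENE_A", "GENE_B", "GENE_C", "GENE_D"}

def splicing_recall(detected_genes, gold_standard=GOLD_STANDARD):
    """Fraction of the gold-standard genes the pipeline reported
    as alternatively spliced."""
    return len(gold_standard & detected_genes) / len(gold_standard)

detected = {"GENE_A", "GENE_C", "GENE_X"}    # toy pipeline output
print(f"recall on gold standard: {splicing_recall(detected):.0%}")  # 50%
```

A low recall points to a problem with the experiment, not a discovery about the genome.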

It's not easy to identify this set of genes; that's why I admire the effort made by a graduate student (soon to be Ph.D.) at the University of British Columbia, Shams Bhuiyan, who tried very hard to comb the literature to come up with some gold standards to serve as positive controls (Bhuiyan, 2018). His efforts were not very successful because there aren't very many of these genuine examples. This is a problem for the field of alternative splicing but most workers ignore it.

This brings me to a recent paper that caught my eye:

Uebbing, S., Gockley, J., Reilly, S.K., Kocher, A.A., Geller, E., Gandotra, N., Scharfe, C., Cotney, J. and Noonan, J.P. (2021) Massively parallel discovery of human-specific substitutions that alter neurodevelopmental enhancer activity. Proc. Natl. Acad. Sci. (USA) 118: e2007049118. [doi: 10.1073/pnas.2007049118]

Genetic changes that altered the function of gene regulatory elements have been implicated in the evolution of human traits such as the expansion of the cerebral cortex. However, identifying the particular changes that modified regulatory activity during human evolution remains challenging. Here we used massively parallel enhancer assays in neural stem cells to quantify the functional impact of >32,000 human-specific substitutions in >4,300 human accelerated regions (HARs) and human gain enhancers (HGEs), which include enhancers with novel activities in humans. We found that >30% of active HARs and HGEs exhibited differential activity between human and chimpanzee. We isolated the effects of human-specific substitutions from background genetic variation to identify the effects of genetic changes most relevant to human evolution. We found that substitutions interacted in both additive and nonadditive ways to modify enhancer function. Substitutions within HARs, which are highly constrained compared to HGEs, showed smaller effects on enhancer activity, suggesting that the impact of human-specific substitutions is buffered in enhancers with constrained ancestral functions. Our findings yield insight into how human-specific genetic changes altered enhancer function and provide a rich set of candidates for studies of regulatory evolution in humans.

This is a very complicated set of experiments using techniques that I'm not familiar with. I suspect that there are only a few hundred scientists in the entire world who can read this paper and understand exactly what was done and whether the experiments were performed correctly. I imagine that there are even fewer who can evaluate the results in the proper context.

The objective is to identify mutations in the human genome that are responsible for making us different from our ancestors, notably the common ancestor we share with chimps. The authors assume, correctly, that these differences are likely to reside in regulatory sequences. They focused on regions of the genome that have been previously identified as the sites of chromatin modifications and/or transcription factor binding sites. They then narrowed down the search by choosing only those sites that showed either accelerated changes in the human lineage (1,363 HARs) or increased enhancer activities in humans (3,027 HGEs).

All of these sites, plus their chimp counterparts, were linked to reporter genes, and the constructs were assayed for their ability to drive transcription of the reporter gene in cultures of human neural stem cells. Those cells were chosen because the authors expect a lot of human-specific changes in brain cells as opposed to other tissues. (That's not a reasonable assumption and, furthermore, it looks like brain cells have a lot more spurious transcription than other cells, with the exception of testes.)
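To see what the assay is asking, here's a toy sketch of the two basic questions for a single candidate: is the construct active above the negative control (say, a promoter-only construct), and does the human version differ from the chimp version? This is not the authors' actual pipeline, and every number below is invented.

```python
from statistics import mean
from math import log2

def assess_candidate(human, chimp, promoter_only, min_ratio=1.5):
    """Toy analysis of one candidate enhancer from replicate activity
    scores. 'promoter_only' is the negative control: the reporter's
    basal promoter with no candidate sequence in front of it."""
    background = mean(promoter_only)
    is_active = mean(human) > min_ratio * background
    lfc = log2(mean(human) / mean(chimp))   # human vs chimp activity
    return is_active, lfc

human_scores = [2.0, 2.2, 1.9]      # invented replicate activities
chimp_scores = [1.3, 1.5, 1.4]
control_scores = [1.0, 0.9, 1.1]

is_active, lfc = assess_candidate(human_scores, chimp_scores, control_scores)
print(f"active: {is_active}, human/chimp log2 fold change: {lfc:+.2f}")
```

A real analysis has to do this for thousands of constructs at once, with proper replication, significance testing, and multiple-testing correction.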

They found that only 12% of their HARs were active in this assay and only 34% of HGEs were active. That's interesting, but it doesn't tell us a lot; for example, it doesn't tell us whether any of these sites are biologically significant, because we don't have the results of Sean Eddy's Random Genome Project to tell us how many of ENCODE's sites are significant. We know that some small fraction of random DNA sequences have enhancer activity, and we know that this fraction increases when you select for stretches of DNA that are known to bind transcription factors. What that means is that many of these sites are not real regulatory sequences, but we don't know which ones are real and which ones are spurious.

Next, they focused on those sites that showed differential expression of the reporter genes when the chimp and human versions were compared. About 3% of all HARs and 12% of all HGEs fell into this category. Then they looked at the specific nucleotide differences to see if they were responsible for the differential expression. They found some examples, but most of them were modest changes (less than 2-fold). Here's the conclusion:

We identified 424 HARs and HGEs with human-specific changes in enhancer activity in human neural stem cells, as well as individual sequence changes that contribute to those regulatory innovations. These findings now enable detailed experimental analyses of candidate loci underlying the evolution of the human cortex, including in humanized cellular models and humanized mice. Comprehensive studies of the HARs and HGEs we have uncovered here, both individually and in combination, will provide novel and fundamental insights into uniquely human features of the brain.
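As a rough sanity check, the percentages quoted above imply about the same count; they're rounded, so the arithmetic only approximates the paper's total.

```python
# Counts implied by the reported percentages (the percentages are
# rounded, so this only approximates the 424 quoted in the conclusion).
n_hars, n_hges = 1363, 3027
candidates = 0.03 * n_hars + 0.12 * n_hges
print(f"~{candidates:.0f} differentially active candidates")   # ~404
```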

This is a typical ENCODE-type conclusion. It leaves all the hard work to others. But here's the rub: how many labs are willing to take one of those 424 candidates and devote money, graduate students, and post-docs to finding out whether they are really regulatory sites? I bet there are very few because, like the rest of us, they are so skeptical of the result that they are unwilling to risk their careers on it.

The experiments conducted by Uebbing et al. lack proper controls. There are times when simple data collection experiments are justified, and there are times when additional genomics survey experiments are useful, but as we enter 2021 we need to recognize that those times are behind us. The time has come to sort the wheat from the chaff, and that means calling a halt to publishing experiments that can't be meaningfully interpreted.


Image Credit: The control flowchart is from ErrantScience.com.

Bhuiyan, S.A., Ly, S., Phan, M., Huntington, B., Hogan, E., Liu, C.C., Liu, J. and Pavlidis, P. (2018) Systematic evaluation of isoform function in literature reports of alternative splicing. BMC Genomics 19: 637. [doi: 10.1186/s12864-018-5013-2]

Doolittle, W.F. (2013) Is junk DNA bunk? A critique of ENCODE. Proc. Natl. Acad. Sci. (USA) 110: 5294-5300. [doi: 10.1073/pnas.1221376110]

Eddy, S.R. (2013) The ENCODE project: missteps overshadowing a success. Current Biology 23: R259-R261. [doi: 10.1016/j.cub.2013.03.023]

1 comment:

William Spearshake said...

The importance of controls is not limited to the field of “experimentation”. My field is routine analytical chemistry. Things like lead content in drinking water. In addition to the calibration standards we run, we also run controls such as an independent reference material, blanks and spikes. In addition, we will often measure the analyte on several lines (wavelengths) to rule out background interferences.