Here's a paper that has recently been posted on the preprint server bioRxiv.
Nurk et al. (2021) The complete sequence of a human genome. [doi: 10.1101/2021.05.26.445798]
I usually don't like to comment on preprints but this one is surely going to be published somewhere and it's important.
The authors have sequenced all 22 autosomes and the X chromosome of the cell line CHM13 in their entirety (telomere-to-telomere). The cell line is a complete hydatidiform mole, which means it is derived from a molar pregnancy in which a sperm fertilizes an egg cell that has lost its nucleus. The sperm DNA duplicates, giving rise to cells that have two identical copies of each chromosome. The karyotype of the CHM13 cell line is 46,XX. The advantage of sequencing DNA from such a cell line is that the interpretation of the sequencing results is not complicated by the heterozygosity of normal diploid cell lines. This was important because the focus of this study was on sequencing repetitive regions of the chromosomes, and in most chromosome pairs the two copies have different numbers of repeats.
Long-read sequencing technology has improved in the past few years, and that's what allows sequencing across the many gaps in the current standard reference genome (GRCh38.p13). That sequence has a number of gaps that are estimated to cover 151 Mb. The gaps span large repetitive DNA segments such as satellite DNA in the centromeres and arrays of ribosomal RNA genes on the short arms of the small autosomes. The old sequencing technology relied on cloning these regions, but they were not stable in cloning vectors. Furthermore, reads produced by the older technology were too short to span enough DNA to assemble these regions in their entirety.
Ultra-long sequencing can produce reads of up to 1 Mb, but the error rate is about 15%, making it difficult to assemble large regions. Nevertheless, this technology produced the complete telomere-to-telomere assembly of an X chromosome [see: First complete sequence of a human chromosome]. More recent advances can produce 20 kb reads with an accuracy of 99.9%, so the authors of this work used a combination of ultra-long but inaccurate reads coupled with shorter but more accurate reads to assemble the chromosomes.
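Here's a quick back-of-the-envelope comparison of the two read types, using just the figures quoted above (real error profiles are messier, and the technology labels in the sketch are illustrative):

```python
# Back-of-the-envelope comparison of the two read types described above.
# Numbers are the ones quoted in the post; real error profiles are more
# complicated, and the labels are only illustrative.

read_types = {
    "ultra-long": {"length_bp": 1_000_000, "error_rate": 0.15},
    "accurate":   {"length_bp": 20_000,    "error_rate": 0.001},
}

for name, r in read_types.items():
    expected_errors = r["length_bp"] * r["error_rate"]
    print(f"{name}: ~{expected_errors:,.0f} expected errors per read "
          f"({r['length_bp']:,} bp at {r['error_rate']:.1%} error rate)")

# ultra-long: ~150,000 expected errors per 1 Mb read -- great for spanning
# repeats, poor for resolving near-identical copies; accurate 20 kb reads
# are the reverse, which is why the two are combined.
```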
The important advance is assembling chromosomes that contain highly repetitive regions found on multiple chromosomes, such as the ribosomal RNA arrays located on the short arms of the small autosomes. Much of the paper is devoted to describing the assembly and its validation. I'm not in a position to evaluate the accuracy of this step; in fact, there are probably only a few dozen scientists in the world who can understand this part of the paper. The algorithm generates string graphs that look really cool (see figure above).
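To give a sense of what a string graph represents, here's a minimal toy sketch of the underlying idea: reads become nodes and suffix-prefix overlaps become edges. (This is not the preprint's algorithm, which also handles errors, contained reads, and transitive edges; it's just the bare concept.)

```python
# Toy sketch of the idea behind a string/overlap graph: reads are nodes,
# suffix-prefix overlaps are edges, and a path spells out a contig.
# Exact overlaps only; real assemblers are far more sophisticated.

def overlap(a: str, b: str, min_len: int = 3) -> int:
    """Length of the longest suffix of `a` that is a prefix of `b`."""
    for n in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:n]):
            return n
    return 0

reads = ["ATTGCCA", "GCCAGTA", "AGTACCG"]

edges = [(a, b, n) for a in reads for b in reads
         if a != b and (n := overlap(a, b))]

for a, b, n in edges:
    print(f"{a} -({n})-> {b}")

# The path ATTGCCA -(4)-> GCCAGTA -(4)-> AGTACCG merges into the
# candidate contig ATTGCCAGTACCG.
```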
The total length of the current reference genome was estimated as 3,099,706,404 bp, and that includes the 22 autosomes plus one copy of each sex chromosome. The total length of this assembly is 3,054,815,472 bp, and if you add in the length of the Y chromosome (58 Mb) then it comes to 3,113 Mb, so there's pretty good agreement with the prediction.
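The arithmetic is easy to check (the ~58 Mb Y estimate is the figure quoted above; CHM13 itself is 46,XX, so the assembly contains no Y):

```python
# Checking the back-of-the-envelope arithmetic from the paragraph above.
grch38_total = 3_099_706_404   # current reference, bp (22 autosomes + X + Y)
chm13_total  = 3_054_815_472   # T2T-CHM13 assembly, bp (22 autosomes + X)
y_estimate   = 58_000_000      # approximate Y chromosome length, bp

chm13_plus_y = chm13_total + y_estimate
print(f"CHM13 + Y : {chm13_plus_y / 1e6:,.0f} Mb")   # ~3,113 Mb
print(f"GRCh38    : {grch38_total / 1e6:,.0f} Mb")   # ~3,100 Mb
print(f"difference: {(chm13_plus_y - grch38_total) / 1e6:,.1f} Mb")
```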
The amount of DNA in your nucleus varies from individual to individual, and I don't know if this value (3.1 Gb) is close to the average or possibly on the low side of the variation. I'm a little bit nervous about using the sequence of DNA from a cell line as a standard. I normally use 3.2 Gb as the average size, but perhaps it would be better to use 3.1 Gb in the future.
The authors collected data on transcripts in CHM13 cells in order to estimate the number of genes. They claim that they detected 19,969 protein-coding genes but that they missed 263 that are in the standard reference genome (GRCh38.p13). Some of these are due to false duplications in the current reference genome.
They detected 140 new protein-coding genes in the CHM13 data but some of these (25) are just paralogs of existing genes that are presumably due to segmental duplications in the CHM13 cell line. However, 115 new protein-coding genes were detected in the newly sequenced regions that are not part of the current standard reference genome. I assume these predictions are based on the presence of an RNA with an open reading frame and their identification as true genes needs to be verified.
There are a total of 43,535 noncoding genes, according to the authors, and 2,111 of these "genes" are new discoveries in the CHM13 genome. I'm pretty sure this estimate is based entirely on the presence of transcripts so it is unreliable. You can't just declare that a stretch of DNA is a gene because it's transcribed. You need more evidence than that to show that it is a functional gene.
There's an extensive discussion of the organization of ribosomal RNA genes on the short arms of chromosomes 13, 14, 15, 21, and 22 but there are no significant discoveries. The organization of the centromeres will be covered in another paper.
This study is important because many of us have been predicting that the exact sequence of all human chromosomes would never be determined. It's a surprise that sequencing technology has advanced to the point where you can get reliable sequences of several hundred megabases. It's also surprising that the assembly algorithms can cope with such large arrays of repetitive sequences. I guess this just proves that you should never say "never"!
[I'm rewriting the relevant section of my book. :-) ]
3 comments:
These are the types of papers that remind me of how we tend to overthink some problems in biology, in part due to our over-reliance on newer technologies. When considering telomere-to-telomere assemblies, my first thoughts go to the problems that heterozygosity and repeat regions cause. Using this cell line is such a simple approach to the heterozygosity problem.
"Ultra-long sequencing can produce reads of 1 Mb but the error rate is 15% making it difficult to assemble large regions but, nevertheless, this technology produced the complete telomere-to-telomere assembly of an X chromosome"
A 15% error rate isn't as bad as it seems at first glance. It's not a big problem if the errors are distributed randomly and if you can afford enough sequencing depth of coverage to derive consensus sequences. I'm a bit skeptical of data coming directly from PacBio, but they do claim that their error rate is mostly unbiased. I remember going to lectures ten years ago where the presenters argued that "one chromosome, one contig" assemblies are feasible.
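To put numbers on that intuition, here's a toy model: each read independently reports the correct base 85% of the time, and the consensus is a strict majority vote. It assumes random, independent substitution errors (exactly the assumption above), so it's only a sketch, but it shows how fast depth rescues noisy reads:

```python
# Toy model: each read reports the correct base with p = 0.85, independently.
# Consensus = strict majority. Real errors (especially indels) are not this
# well-behaved, but it shows why a 15% per-base error rate is survivable.
from math import comb

def p_majority_correct(depth: int, p: float = 0.85) -> float:
    """P(correct base appears in a strict majority of `depth` reads)."""
    return sum(comb(depth, k) * p**k * (1 - p)**(depth - k)
               for k in range(depth // 2 + 1, depth + 1))

for depth in (5, 10, 20, 40):
    print(f"depth {depth:>2}: consensus correct with "
          f"P = {p_majority_correct(depth):.6f}")
# Already ~0.97 at 5x and essentially 1 by 40x under this simple model.
```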
ps: it's nice when we can get along (-:
The error rates for ultra-long reads are coming down all the time and are already well below the early 15%. If you want to see what's coming soon, visit the ONT website: https://nanoporetech.com/about-us/news/oxford-nanopore-tech-update-new-duplex-method-q30-nanopore-single-molecule-reads-0
We'll soon have access to Mb reads at >99% accuracy from single molecules of DNA. There is a definite bias towards insertion-deletion (indel) errors in homopolymers (i.e., getting the wrong length for runs of the same DNA base), but there do not appear to be really strong biases like the GC-content biases of short reads. As a result, the error correction is very good at eliminating base-call errors, but it does still tend to leave a bunch of indel errors scattered around the place. (Though I'm sure tech improvements will soon fix this too.)
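A toy illustration of the homopolymer point: if you collapse runs of identical bases before comparing sequences (homopolymer compression, a trick some assemblers really do use), reads that differ only by run-length errors become identical. This is just a sketch, not any particular tool's implementation:

```python
# Toy illustration of the homopolymer problem: a read that miscounts the
# length of a base run differs from the truth only inside that run, so
# homopolymer-compressing both makes them identical again.
from itertools import groupby

def hp_compress(seq: str) -> str:
    """Collapse each run of identical bases to a single base."""
    return "".join(base for base, _ in groupby(seq))

truth = "ACGGGGGTTAC"   # five Gs
read  = "ACGGGGTTAC"    # basecaller reported four Gs (a 1 bp deletion)

print(truth == read)                            # False: an indel error
print(hp_compress(truth) == hp_compress(read))  # True: identical once compressed
```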
You CAN claim a gene is functional if transcribed…if you’re ENCODE….