Tuesday, December 30, 2014

What do you do in Los Angeles in February?

I've visited Los Angeles in February. There are lots of cool things you can do, like going to the beach with your granddaughter or spending a day at Disneyland. I won't be there this February and that's a shame because I'll be missing an important event [Things to do in LA in February].
Bad Biology: How Adaptationist Thinking Corrupts Science

11 a.m., Sun., Feb. 15 – Hollywood; 4:30 p.m. – Costa Mesa

In 1979, Gould and Lewontin published an important paper, “The Spandrels of San Marco and the Panglossian Paradigm: A Critique of the Adaptationist Programme”, in which they deplored the narrow focus on seeing evolution as only a process of adaptation. They further criticized the field of evolutionary biology for perpetuating this pattern of flawed thinking.

Biologist PZ Myers bemoans that their warnings have gone unheeded in some biological sub-disciplines. He’ll be discussing a few examples of bad evolutionary biology, including the “human biodiversity” movement that is little more than a collection of pseudoscientific rationalizations for racism; evolutionary psychology, which seeks to explain modern human biology with imaginary scenarios of adaptive constraints from 10,000 years ago; and the ENCODE project, which was an obvious and overt exercise in adaptive bias applied to genomic data. A common thread among these examples is an excessive reliance on adaptationist thinking and a lack of appreciation of the diversity of mechanisms underlying our evolutionary history.
Don't miss it if you are anywhere near Los Angeles this February.

Simulated meteorite impact produces RNA bases. So what?

A group of Czech scientists has fired a big laser at a solution of formamide and found traces of adenine, guanine, cytosine, and uracil (Ferus et al., 2014). The work is covered in a news article in Science [From hell on Earth, life's building blocks]. The image is from the Science article.

Here's the abstract of the PNAS article ...
The coincidence of the Late Heavy Bombardment (LHB) period and the emergence of terrestrial life about 4 billion years ago suggest that extraterrestrial impacts could contribute to the synthesis of the building blocks of the first life-giving molecules. We simulated the high-energy synthesis of nucleobases from formamide during the impact of an extraterrestrial body. A high-power laser has been used to induce the dielectric breakdown of the plasma produced by the impact. The results demonstrate that the initial dissociation of the formamide molecule could produce a large amount of highly reactive CN and NH radicals, which could further react with formamide to produce adenine, guanine, cytosine, and uracil. Based on GC-MS, high-resolution FTIR spectroscopic results, as well as theoretical calculations, we present a comprehensive mechanistic model, which accounts for all steps taking place in the studied impact chemistry. Our findings thus demonstrate that extraterrestrial impacts, which were one order of magnitude more abundant during the LHB period than before and after, could not only destroy the existing ancient life forms, but could also contribute to the creation of biogenic molecules.
In case you don't appreciate the significance of this research, PNAS provides you with a brief summary ...
This paper addresses one of the central problems of the origin of life research, i.e., the scenario suggesting extraterrestrial impact as the source of biogenic molecules. Likewise, the results might be relevant in the search of biogenic molecules in the universe. The work is therefore highly actual and interdisciplinary. It could be interesting for a very broad readership, from physical and organic chemists to synthetic biologists and specialists in astrobiology.
The problem with all these studies is that they don't answer the most important question: what happens next?

Let's assume that the four bases were created in the atmosphere as meteorites crashed into Earth four billion years ago. Let's assume there was water in the form of early oceans or big lakes. Then what happens? Do these researchers imagine that the concentrations of these bases built up gradually over thousands of years until there were spontaneous reactions with five-carbon sugars and phosphate to form nucleotides? Then did these nucleotides assemble into short RNA molecules?

It's a very large step from demonstrating that RNA bases can be made from formamide under extreme conditions to showing that their concentrations could have been high enough to make RNA spontaneously.

We need to demand more of these researchers. If they are going to postulate that life arose in a primordial soup then it's no longer sufficient to publish one more paper on how you can make organic molecules from inorganic precursors. Enough already. That's the easy part of the hypothesis. Let's see some evidence for the hard part.

Ferus, M., Nesvorný, D., Šponer, J., Kubelík, P., Michalčíková, R., Shestivská, V., Šponer, J.E., and Civiš, S. (2014) "High-energy chemistry of formamide: A unified mechanism of nucleobase formation." Proceedings of the National Academy of Sciences Published online before print December 8, 2014. [doi: 10.1073/pnas.1412072111]

Sunday, December 28, 2014

How do we teach our students that basic research is important?

There's a fabulous editorial in the Toronto Star today. It's critical of the Prime Minister and the Conservative Party of Canada for the damage they are doing to science in Canada [Canada needs a brighter federal science policy: Editorial].

Here's some excerpts ...
Finding a fan of Canada’s current science policy among those who care about such things would be a discovery worthy of Banting and Best. Few if any would contend that Ottawa’s approach is sound; rather, the debate in 2014 has been over what in the world would possess a government to pursue such a catastrophic course.

According to one school of thought, the answer is simple: the Conservatives are cavemen set on dragging Canada into a dark age in which ideology reigns unencumbered by evidence. Let’s call this the Caveman Theory.

The other, more moderate view holds that Prime Minister Stephen Harper et al are not anti-science – that they at least understand the importance of research and development to their "jobs and growth" agenda – but are instead merely confused about how the enterprise works and about the role government must play to help it flourish. Let’s call this the Incompetence Theory.
The rest of the editorial describes how Stephen Harper and his Conservative buddies have directed funding agencies to concentrate on research that will be of direct benefit to Canadian for-profit companies.

It concludes with ...
Whatever the government’s motives, whatever it understands or does not about how science works, it has over the last eight years devastated Canadian research in a way that will be hard to reverse. Private sector R&D continues to lag, but in our efforts to solve that problem we have seriously reduced our capacity for primary research, squandering a long-held Canadian advantage. Meanwhile, we have earned an international reputation for muzzling scientists, for defunding research that is politically inconvenient and for perversely conflating scientific goals with business ones, thus dooming both. Our current funding system is less well placed than it was in 2006 to promote innovation and our science culture has been so eroded that we are unlikely to attract the top talent we need to compete in the knowledge economy.

Whether it was anti-intellectualism, incompetence or both that led us to this dark place, let this coming election year bring the beginning of a climb back into the light.
How can the government of Canada be so ignorant? It's because they have a huge amount of support from the general public who see all research as technology. They are only willing to support research that helps the economy.

Most people are not interested in research that simply advances our knowledge of the natural world.

What are we doing as educators to reverse this trend? Not very much, as it turns out. Many of our courses in biochemistry focus on how biochemistry can benefit medicine as though this was the only reason for learning about biochemistry.1 Our department is discussing whether we should have undergraduate courses on drug discovery and how drugs are brought to market. We are considering a co-op program where students will spend some time working in the private sector. We are toying with the idea of creating an entirely new program that will train students to work in the pharmaceutical industry.

It's no wonder that the general public thinks of science as the servant of industry. We are not doing a very good job of teaching undergraduates about the importance of knowledge and the value of scientific thinking. In fact, we are doing the opposite. We are supporting the Stephen Harper agenda.

Don't be surprised if it comes back to bite you in the future.

1. We teach medical case studies in our introductory biochemistry course for science undergraduates!

Friday, December 19, 2014

How to think about evolution

New Scientist published a short article on evolution in its "How to think about ..." series. It was written by Michael Le Page, who contacted me a few months ago.

I think it's better than most such articles but I may be a little biased. Here's an excerpt.
What's more surprising is that even mutations that don't increase fitness can spread through a population as a result of random genetic drift. And most mutations have little, if any, effect on fitness. They may not affect an animal's body or behaviour at all, or do so in an insignificant way such as slightly altering the shape of the face. In fact, the vast majority of genetic changes in populations – and perhaps many of the physical ones, too – may be due to drift rather than natural selection. "Do not assume that something is an adaptation until you have evidence," says biologist Larry Moran at the University of Toronto, Canada.

So it is wrong to think of evolution only in terms of natural selection; change due to genetic drift counts too. Moran's minimal definition does not specify any particular cause: "Evolution is a process that results in heritable changes in a population spread over many generations."
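Moran's point about drift is easy to demonstrate with a minimal Wright-Fisher simulation. This is my own sketch (the population size and starting frequency are arbitrary), not anything from the New Scientist article: a selectively neutral allele can rise to fixation, or be lost, purely by chance.

```python
import random

def wright_fisher(pop_size, p0, generations, seed=1):
    """Track the frequency of a selectively neutral allele under pure drift.

    Each generation the next gene pool is a binomial sample of the
    current allele frequency; no selection is involved anywhere.
    """
    rng = random.Random(seed)
    p = p0
    trajectory = [p]
    for _ in range(generations):
        copies = sum(rng.random() < p for _ in range(2 * pop_size))
        p = copies / (2 * pop_size)
        trajectory.append(p)
        if p in (0.0, 1.0):  # allele lost or fixed; drift is over
            break
    return trajectory

# A neutral mutation starting at 5% frequency in a small population.
traj = wright_fisher(pop_size=50, p0=0.05, generations=2000)
print(f"final frequency: {traj[-1]}, generations simulated: {len(traj) - 1}")
```

Run it with different seeds and the allele sometimes fixes and sometimes disappears, which is the whole point: heritable change in a population with no adaptation involved.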

Professors and stress

Last January I posted a note about how stressful the job of university professor can be [University Professor is one of the least stressful jobs in America?]. That was a follow-up to a previous post that I wrote in response to Forbes magazine claiming that university professors have an easy job [I Have the Least Stressful Job!!!]. (That's me on the right in the picture. If you don't think this is stressful then you ought to try dressing up like that!)

A writer for the Globe & Mail (Toronto, Canada) contacted me about an article she was writing on stress in academia. The article has been published and she got it right! [Increased pressures, class sizes taking their toll on faculties in academia]. I even got quoted.
Research funds are also difficult to access. New funding rules that emphasize commercial potential, particularly in the sciences, mean that professors have to deal with the prospect of their careers being cut short if they don’t win grants to run a lab.

“My younger colleagues are having to survive in stressful situations that I never had to survive,” said Larry Moran, a professor of biochemistry at the University of Toronto. “Government policies have redirected research funds so that it’s hit and miss if you get grants. ... When you fail at this job, there aren’t a lot of other places to go,” he said.

Thursday, December 18, 2014

Questions about alternative splicing

Alternative splicing is a mechanism by which an intron-containing gene is transcribed and the primary transcript is spliced in two or more different ways to produce different functional RNAs. If it's a protein-coding gene, the idea is that different forms of the protein are produced in this way and each of them is functional.
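The combinatorics behind this can be sketched with a toy example (the exon names and layout are invented for illustration): a gene with n optional cassette exons can, in principle, yield 2^n distinct mature transcripts.

```python
from itertools import product

# Hypothetical gene: exons E1 and E4 are always included,
# while the cassette exons E2 and E3 may each be included or skipped.
cassette = ["E2", "E3"]

isoforms = []
for choices in product([True, False], repeat=len(cassette)):
    exons = ["E1"] + [e for e, keep in zip(cassette, choices) if keep] + ["E4"]
    isoforms.append("-".join(exons))

print(isoforms)  # 2**2 = 4 possible mature transcripts
```

Whether any of those possible transcripts is actually functional, as opposed to splicing noise, is exactly the question at issue.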

It's important to emphasize that the products of alternative splicing must be functional because we know that splicing is error-prone and that misspliced, nonfunctional RNAs will be quite common. Every gene will produce a bunch of these aberrantly spliced variants, but that doesn't mean that every primary transcript is alternatively spliced.

It's important to distinguish between real functional alternative splicing and junk RNAs that arise from splicing errors. One of the ways to do this is to report on the concentrations of the various transcripts but that's rarely done in papers that promote alternative splicing [see: The most important rule for publishing a paper on alternative splicing].

The importance of alternative splicing is related to the debate over the importance of pervasive transcription and junk DNA, since advocates of alternative splicing are often the same people who object to junk DNA [see: Vertebrate Complexity Is Explained by the Evolution of Long-Range Interactions that Regulate Transcription?]. I call this The Deflated Ego Problem because these scientists are usually looking for ways to "explain" the complexity of humans in light of the fact that we seem to have the same number of genes as many other species.

If it's true that most human genes are alternatively spliced then let's see the evidence. That means actually demonstrating that different proteins with different functions are produced from the same gene. We've known for 35 years that this is possible but that's not the point. The point is whether all, or most, human RNAs are alternatively spliced. I've issued a simple challenge to those who use the alternative splice databases [A Challenge to Fans of Alternative Splicing]. So far, nobody has stepped up to the plate.

Some of the examples that are promoted in those databases make no sense whatsoever [Two Examples of "Alternative Splicing"] [The Frequency of Alternative Splicing].

Someone raised this issue in the comments to another post and sent me a link to a paper published in 2010. Here's the paper and part of the introduction.

Keren, H., Lev-Maor, G. and Ast, G. (2010) Alternative splicing and evolution: diversification, exon definition and function. Nature Reviews Genetics 11:345-355 [doi: 10.1038/nrg2776].
Splicing of precursor mRNA (pre-mRNA) is a crucial regulatory stage in the pathway of gene expression: introns are removed and exons are ligated to form mRNA. The inclusion of different exons in mRNA — alternative splicing (AS) — results in the generation of different isoforms from a single gene and is the basis for the discrepancy between the estimated 24,000 protein-coding genes in the human genome and the 100,000 different proteins that are postulated to be synthesized.

... Comparing species to see what has changed and what is conserved is proving valuable in addressing these issues and has recently yielded substantial progress. For example, new high-throughput sequencing technology has revealed that >90% of human genes undergo AS — a much higher percentage than anticipated. Such technological progress is providing more comprehensive studies of splicing and genomic architecture in an increasing number of species, and these studies have extended our evolutionary understanding.
I'd like you to answer two questions.
  1. Do you believe that there are about four (4) different, functional, proteins produced on average from every human protein-encoding gene?
  2. Do you believe that more than 90% of human genes produce a transcript that can be alternatively spliced, where alternative splicing is restricted to producing different functional RNAs and not just noise?

Monday, December 15, 2014

On the importance of course evaluations at the University of Toronto

My university (the University of Toronto, Toronto, Canada) recently developed a new policy and new procedures on undergraduate student evaluations [Policy on the Student Evaluation of Teaching in Courses]. The policy was the work of a committee that began with the assumption that student evaluations were a good thing. As far as I can tell, the committee did not spend any time examining the pedagogical literature to see if the evidence supported their assumptions. As you can see from the title of the policy, the assumed purpose of student evaluations is to judge the quality of teaching.

The university has a website: Course Evaluations. Here's what the administrators say ...
Course evaluations are read by many people at UofT, including instructors, chairs, deans, the provost and the president. They are used for a variety of purposes, including:
  • To help instructors design and deliver their courses in ways that impact their students’ learning
  • To make changes and improvements to individual courses and programs
  • To assess instructors for annual reviews
  • To assess instructors for tenure and promotion review
  • To provide students with summary information about other students’ learning experiences in UofT courses
It is essential that students have a voice in these key decision-making processes and that’s why your course evaluation is so important at the University of Toronto.

Your feedback is used to improve your courses, to better your learning experience and to recognize great teaching.

Evaluating students' evaluations

Student evaluations are an important part of the undergraduate experience at most universities. But how effective are they?

You would think that universities might have studied this question and applied critical thinking and evidence-based reasoning to the question. You might think that the popularity of student evaluations at universities is largely because they have proven to be reliable indicators of teaching effectiveness.

Think again. The pedagogical literature contains dozens of studies indicating that student evaluations are not reliable indicators of teaching effectiveness. Here are some recent papers that show you what the experts are thinking and what kind of evidence is being published.

Gormally, C., Evans, M. and Brickman, P. (2014) Feedback about Teaching in Higher Ed: Neglected Opportunities to Promote Change. CBE-Life Sciences Education 13, 187-199. [doi: 10.1187/cbe.13-12-023]
Despite ongoing dissemination of evidence-based teaching strategies, science teaching at the university level is less than reformed. Most college biology instructors could benefit from more sustained support in implementing these strategies. One-time workshops raise awareness of evidence-based practices, but faculty members are more likely to make significant changes in their teaching practices when supported by coaching and feedback. Currently, most instructional feedback occurs via student evaluations, which typically lack specific feedback for improvement and focus on teacher-centered practices, or via drop-in classroom observations and peer evaluation by other instructors, which raise issues for promotion, tenure, and evaluation. The goals of this essay are to summarize the best practices for providing instructional feedback, recommend specific strategies for providing feedback, and suggest areas for further research. Missed opportunities for feedback in teaching are highlighted, and the sharing of instructional expertise is encouraged.
Osborne, D. and Janssen, H. (2014) Flipping the medical school classroom: student acceptance and student evaluation issues (719.4). The FASEB Journal 28, 719.4.
Flipping the classroom has generated improvement on summative exam scores in our first-year medical students; however, faculty members adopting this teaching methodology often receive lower satisfaction rating on student evaluations. In previous years, these same professors received outstanding evaluation ratings when the same topics were presented using standard didactic lectures. We feel that this decreased student satisfaction may be result of two distinct causes. First, students who have been accustomed to didactic lectures often come to class unprepared and therefore are incapable of the critical thinking and problem solving skills need in the flipped classroom. Second, the evaluation tool which was appropriate for didactic lectures is inappropriate for the analyzing the flipped classroom methodology. The student evaluations have improved in that last several years; however, the transition was not accomplished without difficulty. Anyone planning to pursue this teaching approach should be prepared to weather the storm of sub-standard student evaluations and, if possible, prepare their administrators for this potential outcome. Our experience suggests that faculty persistence targeted at changing student culture and expectations can help in this process. Accurately determining student’s acceptance of this relative new teaching methodology is important. Improvements in the teaching methodology can be made only when the evaluation tool is valid. It is felt a new evaluation tool can be developed based on results obtained in student focus-groups coupled with cognitive assessment outcomes.
Wilson, J.H., Beyer, D. and Monteiro, H. (2014) Professor age affects student ratings: halo effect for younger teachers. College Teaching 62, 20-24. [doi: 10.1080/87567555.2013.825574]
Student evaluations of teaching provide valued information about teaching effectiveness, and studies support the reliability and validity of such measures. However, research also illustrates potential moderation of student perceptions based on teacher gender, attractiveness, and even age, although the latter receives little research attention. In the present study, we examined the potential effects of professor age and gender on student perceptions of the teacher as well as their anticipated rapport in the classroom. We also asked students to rate each instructor's attractiveness based on societal beliefs about age and beauty. We expected students to rate a picture of a middle-aged female professor more negatively (and less attractive) than the younger version of the same woman. For the young versus old man offered in a photograph, we expected no age effects. Although age served as a detriment for both genders, evaluations suffered more based on aging for female than male professors.
Blair, E. and Valdez Noel, K. (2014) Improving higher education practice through student evaluation systems: is the student voice being heard? Assessment & Evaluation in Higher Education, 1-16. [doi: 10.1080/02602938.2013.875984]
Many higher education institutions use student evaluation systems as a way of highlighting course and lecturer strengths and areas for improvement. Globally, the student voice has been increasing in volume, and capitalising on student feedback has been proposed as a means to benefit teacher professional development. This paper examines the student evaluations at a university in Trinidad and Tobago in an effort to determine whether the student voice is being heard. The research focused on students’ responses to the question, ‘How do you think this course could be improved?’ Student evaluations were gathered from five purposefully selected courses taught at the university during 2011–2012 and then again one year later, in 2012–2013. This allowed for an analysis of the selected courses. Whilst the literature suggested that student evaluation systems are a valuable aid to lecturer improvement, this research found little evidence that these evaluations actually led to any real significant changes in lecturers’ practice.
Braga, M., Paccagnella, M., and Pellizzari, M. (2014) Evaluating students’ evaluations of professors. Economics of Education 41:71–88. [doi: 10.1016/j.econedurev.2014.04.002]
This paper contrasts measures of teacher effectiveness with the students’ evaluations for the same teachers using administrative data from Bocconi University. The effectiveness measures are estimated by comparing the performance in follow-on coursework of students who are randomly assigned to teachers. We find that teacher quality matters substantially and that our measure of effectiveness is negatively correlated with the students’ evaluations of professors. A simple theory rationalizes this result under the assumption that students evaluate professors based on their realized utility, an assumption that is supported by additional evidence that the evaluations respond to meteorological conditions.
Stark, P.B. and Freishtat, R. (2014) An Evaluation of Course Evaluations. [PDF]
Student ratings of teaching have been used, studied, and debated for almost a century. This article examines student ratings of teaching from a statistical perspective. The common practice of relying on averages of student teaching evaluation scores as the primary measure of teaching effectiveness for promotion and tenure decisions should be abandoned for substantive and statistical reasons: There is strong evidence that student responses to questions of “effectiveness” do not measure teaching effectiveness. Response rates and response variability matter. And comparing averages of categorical responses, even if the categories are represented by numbers, makes little sense. Student ratings of teaching are valuable when they ask the right questions, report response rates and score distributions, and are balanced by a variety of other sources and methods to evaluate teaching....

  1. Drop omnibus items about “overall teaching effectiveness” and “value of the course” from teaching evaluations (SET): They are misleading.
  2. Do not average or compare averages of SET scores: Such averages do not make sense statistically. Instead, report the distribution of scores, the number of responders, and the response rate.
  3. When response rates are low, extrapolating from responders to the whole class is unreliable.
  4. Pay attention to student comments—but understand their limitations. Students typically are not well situated to evaluate pedagogy.
  5. Avoid comparing teaching in courses of different types, levels, sizes, functions, or disciplines.
  6. Use teaching portfolios as part of the review process.
  7. Use classroom observation as part of milestone reviews.
  8. To improve teaching and evaluate teaching fairly and honestly, spend more time observing the teaching and looking at teaching materials.
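The statistical point behind recommendation 2 is easy to illustrate. Here's a minimal sketch with made-up ratings showing how two very different score distributions can share the same average, which is why Stark and Freishtat say to report the distribution, the number of responders, and the response rate instead of a mean.

```python
from collections import Counter
from statistics import mean

# Hypothetical ratings on a 1-5 scale for two instructors (invented data).
instructor_a = [3, 3, 3, 3, 3, 3]  # everyone lukewarm
instructor_b = [1, 1, 1, 5, 5, 5]  # polarized: loved or hated

for name, scores in [("A", instructor_a), ("B", instructor_b)]:
    dist = Counter(scores)
    print(name, "mean:", round(mean(scores), 2),
          "distribution:", dict(sorted(dist.items())))
# Both means are 3.0, but the distributions tell opposite stories.
```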

Thursday, December 11, 2014

Ann Gauger moves the goalposts

We've been discussing Ann Gauger's claim that evolution is impossible because she was unable to transform a modern enzyme into another related one by changing a small number of amino acids.

I pointed out that this is not how evolution works. In some cases, you can easily show that two enzymes with different specificities can evolve from a common ancestor that could carry out both reactions. Such enzymes are said to be "promiscuous."

Here's Ann Gauger's latest post: In Explaining Proteins (and Life), Here's What Matters Most. She says ...
So now, let's address enzyme evolution and the divergence of enzymes to produce related families and superfamilies. Larry Moran says that modern enzymes evolved by specializing from a promiscuous ancestor. As evidence, he says modern enzymes can sometimes catalyze reactions with several substrates (the chemicals they bind to and change), and that it is possible to shift these enzymes to favor one substrate over another. He gives several examples or provides links to them.

Here's another place where he and I agree. Promiscuous enzymes can be shifted with just a few mutations to a new reaction specificity, provided the capacity for the reaction already exists in the starting enzyme, and each step is small and selectable. They can evolve easily, because they can already carry out the reaction in question. Larry Moran's description of the process is actually quite good, despite the digs he takes at us.

It strikes me that Larry Moran would know we agree with him on these points if he had read our papers.
So, what's the problem?

Turns out that changing one related enzyme into another with a different specificity wasn't the goal of her experiment. Here's what she was really trying to do ...
The Big Problem

Here's the big problem -- the arrival of novelty.

Novelty or innovation means the appearance of something not already present. It's the opposite of promiscuity. So a way to create novelty is absolutely essential to explain modern cells, as I will demonstrate.


Here's the heart of the matter. Promiscuity cannot solve the problem of novelty. Mutation, natural selection, and drift cannot drive the creation of novelty of all those new protein folds. That's what Doug Axe and I have been testing all along, from Doug Axe's 2004 paper to this most recent one. Based on our experiments, the problem of how innovation originates remains unsolved.
Now I get it (not). What she and Doug Axe were really trying to do was to intelligently design an entirely new enzyme.

They failed.

Therefore Intelligent Design Creationism is falsified.

That seems logical to me.

How to become a better teacher (not)

Here's a video by Dr. Lodge McCammon. He has a website. Here are his credentials.
Dr. Lodge McCammon is an educational innovator. His career began in 2003 at Wakefield High School in Raleigh, North Carolina, where he taught Civics and AP Economics. McCammon received a Ph.D. from NC State University in 2008 and continued his work by developing innovative practices and sharing them with students, teachers and schools across the world. McCammon is a musician who spends much of his time in the recording studio composing curriculum-based music. His songs and related materials can be found in Discovery Education Streaming. He is also an education consultant who provides professional services, including keynote speeches, presentations, curriculum development, and a variety of training programs.
Watch the video and discuss. I think you can guess what I think. I reject one of its basic premises, namely that online courses are taught by the very best teachers. How do we know who the best teacher is just by watching videos?

Here's a question for your consideration. It concerns "reflective teaching." Imagine that you record yourself teaching an incorrect version of the citric acid cycle or a flawed version of the Central Dogma of Molecular Biology. How many times do you have to watch that video to recognize that what you are teaching is wrong? Is it more than three? Less than ten?

Tuesday, December 09, 2014

On the meaning of pH optima for enzyme activity

The students in my lab course measured the activity of trypsin at different pH values. They discovered that the enzyme was most active at a pH of about 8.0-8.5 and that activity fell off rapidly at pH values above and below this optimum. This is consistent with results in the published literature (see figure from Sipos and Merkel, 1970). Here's the exam question ...
What was the pH optimum of trypsin activity? Can you explain this in terms of the normal biological function of the enzyme and the physiological conditions under which it is active? Do you expect there to be a strong correlation between the optimal pH of an enzyme’s activity and the pH of the cell/environment where it is active?

Sipos, T., and Merkel, J. R. (1970) An effect of calcium ions on the activity, heat stability, and structure of trypsin. Biochemistry, 9:2766-2775 [doi: 10.1021/bi00816a003]
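A bell-shaped pH profile like the one the students observed is classically modeled by assuming two ionizable groups in the active site: one that must be deprotonated and one that must be protonated for activity. Here's a sketch using illustrative pKa values chosen to put the optimum near pH 8.25; these are not fitted trypsin parameters.

```python
def relative_activity(ph, pka1=6.5, pka2=10.0):
    """Two-ionization model: activity requires the group with pKa1 to be
    deprotonated and the group with pKa2 to be protonated."""
    h = 10 ** (-ph)
    ka1 = 10 ** (-pka1)
    ka2 = 10 ** (-pka2)
    return 1.0 / (1.0 + h / ka1 + ka2 / h)

# Scan pH 4.0 to 12.0 in steps of 0.1; the optimum of this model sits
# at (pKa1 + pKa2) / 2, i.e. about 8.25 with the values above.
best = max((relative_activity(ph / 10), ph / 10) for ph in range(40, 121))
print(f"optimum near pH {best[1]:.1f}")
```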

On the specificity of enzymes

Most biochemistry students are taught that enzymes are highly specific. It's certainly true that the stereospecificity of some enzymes is extraordinary but is it true in general? Here's one of the exam questions that the students in my course had to answer ....
All three of the enzymes (trypsin, alcohol oxidase, β-galactosidase) that you assayed in the past three months are active with several different substrates. Is this behaviour typical, or are most enzymes highly specific? Aminoacyl-tRNA synthetases are the classic examples of enzymes that are highly specific. Why? Do aminoacyl-tRNA synthetases ever make mistakes?

Using mass spec to find out how many protein-encoding genes we have

One of the other exam questions is based on an experiment students did with an enzyme they purified. They digested the enzyme with trypsin and then analyzed the peptides by mass spectrometry. They were able to match the peptides to the sequence databases to identify the protein and the species. The exam question is ...
Nobody knows for sure how many functional protein-encoding genes there are in the human genome. About 20,000 potential protein-encoding genes have been identified based on open reading frames and sequence conservation but it is not known if all of them are actually expressed. How can you use mass spectrometry to find out how many functional protein-encoding genes we have? [see the cover of Nature from May 29, 2014: click on about the cover]
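The matching step the students performed can be sketched in a few lines: digest a candidate protein sequence in silico using trypsin's cleavage rule (cut after K or R, but not before P) and compute the monoisotopic mass of each peptide for comparison against the observed masses. The sequence below is made up for illustration; a real pipeline uses a full residue mass table and allows for missed cleavages.

```python
import re  # zero-width split requires Python 3.7+

# Monoisotopic residue masses (Da) for a few amino acids; a real
# pipeline would use the full twenty-residue table.
MONO = {'G': 57.02146, 'A': 71.03711, 'S': 87.03203, 'P': 97.05276,
        'V': 99.06841, 'T': 101.04768, 'L': 113.08406, 'K': 128.09496,
        'R': 156.10111}
WATER = 18.01056  # mass of H2O added on hydrolysis

def tryptic_peptides(seq):
    """Cleave after K or R, but not when followed by P (trypsin rule)."""
    return [p for p in re.split(r'(?<=[KR])(?!P)', seq) if p]

def peptide_mass(pep):
    """Monoisotopic peptide mass = sum of residue masses + one water."""
    return sum(MONO[a] for a in pep) + WATER

peps = tryptic_peptides("GASPKVTLRKPSA")  # hypothetical sequence
print(peps)  # ['GASPK', 'VTLR', 'KPSA'] -- no cut before the P after K
```

Matching enough observed peptide masses (and fragmentation spectra) to a predicted protein identifies both the protein and, across a whole proteome experiment, which of the ~20,000 predicted genes are actually expressed as protein.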

King Dick and PCR

The students in my lab course are writing their final exam. Prior to the exam they were given 22 questions and they knew that five of them would be on the exam. I thought that Sandwalk readers might enjoy coming up with answers to some of the questions.
The possible remains of King Richard III of England have recently been discovered. His identity has been confirmed by DNA PCR analysis. Descendants of his mother in the female line have the same mitochondrial DNA as King Richard. However, the results with the Y chromosome were surprising. None of the descendants in the all-male lineage had the same Y chromosome markers as King Richard. This is almost certainly due to something called a "false-paternity" event. (There are other ways of describing this event.) Given what you know about PCR, what are some possible sources of error in this analysis? Would you be prepared to go back in time and accuse one of the Kings of England of being a bastard? [Identification of the remains of King Richard III]
(The lab experiment was to analyze various foods to see if they were made from genetically modified plants.)

Note: It's extremely unlikely that the "false-paternity" event occurred in the lineage leading directly to any of the Kings and Queens of England.

How many microRNAs?

MicroRNAs are a special class of small functional RNA molecules. The functional RNA is only about 22 nucleotides long and most of the well-characterized examples bind to mRNA to inhibit translation and/or destabilize the message.
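A rough sketch of how canonical target sites are found computationally: take the miRNA "seed" (nucleotides 2-8), reverse-complement it, and scan the 3'UTR for exact matches. The seed shown below is from the let-7 family; the UTR sequence is invented for illustration, and real target prediction adds many more criteria (site context, conservation, pairing outside the seed).

```python
def seed_match_sites(mirna, utr):
    """Positions in a 3'UTR complementary to miRNA nucleotides 2-8
    (a canonical 7-mer seed match)."""
    comp = {'A': 'U', 'U': 'A', 'G': 'C', 'C': 'G'}
    seed = mirna[1:8]  # nucleotides 2-8 in 0-based slicing
    target = ''.join(comp[n] for n in reversed(seed))  # reverse complement
    return [i for i in range(len(utr) - 6) if utr[i:i + 7] == target]

# First 8 nt of a let-7 family miRNA; the UTR is a made-up example.
print(seed_match_sites("UGAGGUAG", "AAACUACCUCAAA"))  # [3]
```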

The big questions for many of us are how many different microRNAs are there in a typical cell and how many of them have a real biological function. These questions are, of course, part of the debate over junk DNA. Are there thousands and thousands of microRNA genes in a typical genome and does this mean that there's a lot less junk DNA than some of us claim?

The journal Cell Death and Differentiation has devoted a special issue to microRNAs [Special Issue on microRNAs – the smallest RNA regulators of gene expression]. There are five reviews on the subject but none of them address the big questions.

That didn't stop the journal from leading off with this introduction ...
It is now well recognised that the majority of non-protein-coding genomic DNA is not “junk” but specifies a range of regulatory RNA molecules which finely tune protein expression. This issue of CDD contains an editorial and 5 reviews on a particular class of these regulatory RNAs, the microRNAs (miRs) of around 22 nucleotides, and which exert their effects by binding to consensus sites in the 3'UTRs of mRNAs. The reviews cover the role of miRs from their early association with CLL to other forms of cancer, their importance in the development of the epidermis and their potential as disease biomarkers as secreted in exosomes.
I'm not certain what the editors mean when they say that "it is now well recognised ..." I interpret this to mean that there are a large number of scientists who are completely uninformed about the structure of genomes and the debate over junk DNA. In other words, it is now well recognized that some scientists don't know what they are talking about.

I don't know any expert who would claim that 50% of large genomes consist of genes that specify regulatory RNAs involved in fine-tuning protein expression. Do you?

On a related issue, Wilczynska and Bushell begin their review with ...
Since their discovery 20 years ago, miRNAs have attracted much attention from all areas of biology. These short (~22 nt) non-coding RNA molecules are highly conserved in evolution and are present in nearly all eukaryotes.
Sequence conservation is an important criterion in deciding whether something is functional. In order to use conservation as a measure of function you have to establish some standards that let you distinguish between sequences that are "conserved" by negative selection and those that have drifted apart by random genetic drift.

What do Wilczynska and Bushell mean when they say that microRNAs are "highly conserved"? The most highly conserved genes exhibit about 50% sequence identity between prokaryotes and eukaryotes. They are almost identical within mammals. Other highly conserved genes are about 80% identical within animals (e.g. between insects and mammals). As far as I know, the sequences of most putative microRNAs aren't even similar within mammals and certainly not between mammals and fish.
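When people quote figures like "50% identity" or "80% identical", they usually mean something like the simple calculation below: the fraction of matching positions in a pairwise alignment. The aligned fragments here are hypothetical; judging whether a given identity reflects purifying selection also requires comparing against the divergence expected under neutral drift for that pair of species.

```python
def percent_identity(a, b):
    """Percent identity over aligned positions; gaps count as mismatches."""
    assert len(a) == len(b), "sequences must be pre-aligned"
    matches = sum(x == y and x != '-' for x, y in zip(a, b))
    return 100.0 * matches / len(a)

# Two hypothetical aligned protein fragments ('-' is an alignment gap).
print(percent_identity("MKV-LSTR", "MKVALSSR"))  # 6 of 8 positions: 75.0
```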

The phrase "highly conserved" has become meaningless. It's now a synonym for "conserved" because nobody ever wants to just say "conserved" and they certainly don't want to say "moderately conserved" or "weakly conserved" even if it's the truth.

Monday, December 08, 2014

Ann Gauger keeps digging

Ann Gauger and her creationist collaborator, Doug Axe, have been swapping amino acid residues in one kind of protein hoping to show that they cannot change it into another. They have deliberately ignored any clues that might be derived from assuming that evolution happened.

They have succeeded in their goal. None of their constructs have a different activity. They conclude that evolution is disproved.

Friday, December 05, 2014

Why fund basic science?

This video was the winner in the 2013 FASEB competition for "Stand Up for Science." The title was "Funding Basic Science to Revolutionize Medicine."

I'm sure their hearts are in the right place but I fear that videos like this are really just contributing to the problem. The video makes the case that basic research should be funded because it will ultimately pay off in technologies that improve human health. If you buy into that logic then it's hard to see why you should fund research on black holes or studies of plate tectonics.

Don't we have a duty to stand up for ALL basic research and not just research that may become relevant to medicine? Besides, if the only important basic research that deserves funding is that which has the potential to contribute to medicine, then shouldn't funding be directed toward the kind of "basic research" that's most likely to pay off in the future? Is that what we want? I don't think evolutionary biologists would be happy, but everyone working on cancer cells would be.

The best argument for basic research, in my opinion, is that it contributes to our knowledge of the natural world and knowledge is always better than ignorance. This argument works for black holes, music theory, and for research on the history of ancient India. We should not be promoting arguments that only apply to our kind of biological research to the exclusion of other kinds of basic research. And we should not be using arguments that reinforce the widespread belief that basic research is only valuable if it leads to something useful.

A creationist argument against the evolution of new enzymes

Intelligent Design Creationists have found it impossible to make a positive case for intelligent design and the existence of a supernatural designer. Instead, they concentrate on trying to prove that evolution is wrong. Turns out, they're not very good at that either.

The latest attempt is by Ann Gauger posting on the best Intelligent Design Creationist website.1 She outlines her case at: Is Evolution True? Laying Out the Logic.

On the irrelevance of Michael Behe

Michael Behe is one of the few Intelligent Design Creationists who have come up with reasonable, scientific defenses of creationism. I give him credit for that, and for the fact that it often takes some effort to show why he is wrong.

However, when he has been proven wrong he should admit it and move on. He should instruct his fellow creationists to move on as well. That's not what happened with respect to his book on The Edge of Evolution. Last July he doubled down when a new study appeared that refuted his claims. Amazingly, Behe said that his ideas were vindicated. I questioned his logic in: CCC's and the edge of evolution and Michael Behe's final thoughts on the edge of evolution.

After a thorough discussion, we conclude that Behe is wrong about his edge of evolution. There's nothing in evolutionary theory, or in experimental results, that prevents the evolution of new functions with multiple mutations.

Now Ken Miller has posted an article making many of the same points [Edging towards Irrelevance]. Miller rightly demands an apology from Behe but his main point is that Michael Behe's recent behavior has made him largely irrelevant in the debate over Intelligent Design Creationism. I agree.

What this means is that there is nobody left in the Intelligent Design Creationist community who deserves serious attention from scientists. It will still be fun dealing with them but the game has become more like whac-a-mole than science. PZ Myers1 makes the same point: Aren’t we all more than a little tired of Michael Behe?.

It's kinda sad.

1. Unfortunately, PZ weakens his case by misrepresenting Behe's argument. PZ says, "The hobby horse he’s been riding for the past few years is the evolution of chloroquine resistance in the malaria parasite: he claims it is mathematically impossible." That's just not true. Behe's entire case rests on the fact that chloroquine resistance is well within the edge of evolution. Behe has no problem with the evolution of chloroquine resistance.

Thursday, December 04, 2014

How to revolutionize education

I believe that we need to change the way we teach. But not the way you probably think. Watch this video to see what's really important about teaching.

Hat Tip: Alex Palazzo, who I hope will help us make the transition to 21st century teaching.