
Thursday, March 26, 2009

Carl Zimmer on Science Journalism

 
Carl Zimmer has written a lengthy blog posting about the troubles with science journalism [Visions of the Crash]. You should read all of it but I want to comment on one small part.
The rise of blogs about science has brought me many pleasures. I’ve particularly liked the astringent criticism of bad science journalism. As soon as a piece is published, scientists who know a lot about the subject can, if necessary, rip a journalist a new one. I personally have been very influenced by Mark Liberman, a linguist at Penn, who has time and again shown how important it is for reporters to pay attention to the statistics in science. What seems at first like stark results–like the difference between the male and female brain–can melt away if you look at the actual data.

But some bloggers go a step further. They claim that these individual cases of journalistic misconduct add up to an indictment of the whole business. Hence, as Moran declares, we can live without science journalists.

It’s odd that many of the people making these pronouncements are scientists themselves–people, in other words, who know that you don’t do science by anecdote. If a blogger sits down in the morning and reads ten stories in a newspaper’s science section and notices that one makes a howler of a mistake, you know what that blogger will be writing about. Blogs are an outlet for righteous fury. Bloggers are much less likely to write a post that begins, “I read nine articles this morning about science that were fairly accurate and pretty well written.” Ho hum.
I'm not an expert in everything. Most of the science articles I read are explaining things that are way outside my area of expertise. They may be good articles or they may not be. I'm usually skeptical.

However, the majority of articles I read that fall within my areas of expertise—biochemistry, molecular biology, genomes, evolution—do not impress me. It's not just a case of picking out the worst article out of ten to criticize. It's more like every second article has a problem.

When I talk to people in other fields they report the same thing. It looks like the average quality of science journalism, even in popular science magazines like SEED, Discovery, New Scientist, and National Geographic, leaves a lot to be desired.

I'm not very happy with most scientific papers either.


9 comments:

The Other Jim said...

Regarding your comment on scientific papers, I've noticed that lab journal clubs are getting more and more critical of everything they read. I've wondered if this is just our increasing cynicism, or if it is a true drop in quality for a lot of scientific publications.

Any thoughts?

Anonymous said...

Of course it is good to catch mistakes in reporting. But we need a lot more critically intelligent coverage of research and science as an institution and more public analysis of the relationships between science and government, science and the owners of the economy, basic science and technology development for profit, science and the military, science and society, etc. etc.

I'm afraid that blog attacks on errors of fact are really a way of suppressing science journalism. For journalists, errors are very serious in part (but only in part) because they give the Censor -- CEO, politician, scientist, police chief, lawyer, doctor, general -- a club to beat you to death with. Surveillance is constant and the club is wielded relentlessly every day.

Censorship is very widely distributed in our society. Every company president, university president, deputy minister, etc., spends a large part of the day ordering his or her staff to spin, suppress, divert or undermine honest attempts at reporting. Newspaper publishers spend a good deal of their time suppressing coverage of their activities and decisions! The space for genuine coverage without censorship, suppression, threats and harassment in Canada today is very narrow.

Larry Moran said...

Anonymous says,

I'm afraid that blog attacks on errors of fact are really a way of suppressing science journalism.

What a strange thing to say.

What are we supposed to do when we discover that an article by a science journalist misrepresents the science?

If it turns out that most of the articles by science journalists are bad then this is a critique of the entire field. The solution is for science journalists to get their act together and start producing better material.

Do I want to "eliminate" bad science writers? You bet I do.

What's your solution?

Larry Moran said...

Jim asks,

Regarding your comment on scientific papers, I've noticed that lab journal clubs are getting more and more critical of everything they read. I've wondered if this is just our increasing cynicism, or if it is a true drop in quality for a lot of scientific publications.

Any thoughts?


I think it represents a lowering of standards. There were always bad papers being published but they usually didn't appear in the major journals. Today we have a different situation. There are lots of really horrible papers being published in Nature and Science.

Even graduate students in journal clubs are able to see the flaws in these papers. Apparently the reviewers missed those flaws.

I don't know why.

Any thoughts?

Anonymous said...

I think scientists and all scholars should:

1) Demand free inquiry and free expression for the non-academic writers and editors who work for them in their universities. Demand institutional journalism of integrity and accuracy, even when it challenges your claims or uncovers things about your institution that you do not wish to see revealed.

2) Demand that the print media assign teams of reporters to cover universities on a day to day basis and science on a day to day basis. Demand the same of broadcast and new media outlets. Disinterested coverage + critical analysis of research results, programs and institutional and disciplinary developments. Every day and intensively.

3) Reject Quirks & Quarks and Daily Planet gee-whizzy PR programs. Demand critically intelligent coverage and analysis.

4) Demand that journalism schools mount training programs for people with undergraduate and graduate degrees in science so that they can be trained to step outside the social control of Science and cover research (results), whole programs as they emerge and Science, institutionally, as a whole, intelligently and critically, particularly in its relationship with the state and the owners of the economy.

5) Comment publicly on your colleagues' work and their PR efforts when they make false or exaggerated claims about their work and/or its importance. Researchers, like artists, can be expected to talk up their own work. But journalists should be skeptical, to say the least, of promotion. There aren't many Darwins or Picassos out there and even great scientists go wrong.

6) Understand that Science goes where the money is. Yes, there will always be a few important researchers and thinkers who follow their own muse, but scientists, like almost everyone else, migrate toward resources. Or, rather, the availability of jobs and research money draws researchers. The decisions about where to put that money are made by politicians and others based on their ideologies. Science as an institution is dependent, not independent.

7) Further, understand that, in historical terms, scientists have rarely defied the politically and economically powerful. More often than not, scientists have been eager to comply with state and corporate policies later deemed to be pernicious (or worse). Individual scientists can act with great integrity and at great personal cost, but Science as an institution is a faithful servant.

8) Correct errors of fact in journalism in the context of 1-7 above. Challenge analysis and interpretation that you believe is inadequate to the facts. Defend your work in public and in print, not by censorship, suppression and PR spin.

The Other Jim said...

Larry Moran said...

Even graduate students in journal clubs are able to see the flaws in these papers. Apparently the reviewers missed those flaws.

I don't know why.

Any thoughts?
---


This is the puzzle to me. My experience with peer review is that reviewers found flaws, or poorly explained parts, in my papers (mind you, I have nothing in Science or Nature, so the comparison is not precise). These were not high-impact journals, but the reviews were thorough and fair. Maybe it's different at the big journals?

Whenever this conversation comes up, we end up discussing:

1) "Interdisciplinary" research making good choices for reviewers more difficult.

2) Commercialization of journals. Maybe poor editorial decisions are being influenced by the "sexy science" or "controversy sells" ideas. The explosion of low end journals in recent years may also contribute, but we'll keep this discussion on the bigger, older journals.

3) Scientists are too "busy" now, and are not as careful in their peer review duties. The fast turn-around times at the journals may also speed up the process, but decrease the quality of the review.

4) We are in a bit of an information boom at the moment. I think too many people are being caught up in the excitement of the newest observations, and are clearly forgetting the past research, and failing to interpret their results in the full context of the literature.

5) Does the culture of "my research will cure cancer" thinking in grant proposals, etc., carry over into how we think about our work at the time of publication? I see more and more of this in conference presentations. And bold claims are what tend to get into Science and Nature at the moment.

This is a really important issue. I think too many people are being swept up in it to stop and notice the poor quality work that is piling up.

Unknown said...

As to bad papers, In my field (climate) it is often a case of passing a highly mathematical paper on to the wrong reviewers. The first paper I ever reviewed had a clear error in its mathematics. However, no other reviewer spotted it and the author, a professor of mathematics, was able to convince the editor (not a mathematically inclined person) that I must be wrong (argument from authority). The paper was published as is, but it didn't take long for diligent readers to note the blunder.

I've rejected or called for the heavy modification of about 2/3 of the papers I have reviewed. This takes much, much more time than just skimming the submission, not trying to understand the math, and giving it a pass. Career-wise, doing thorough reviews is a very bad idea.

Editors appreciate a reviewer who will do the dirty work of rejecting such submissions. The problem is that your reward is to get the next dubious manuscript sent your way. I had to decline to do any reviewing for an entire year to break the cycle.

However, our field does not suffer alone: the great chess player and author Lev Polugaevsky (I'm sure you have all heard of him) made a similar comment about chess literature:

"In ninety-five percent of all chess books you will find so egregious an error on page one that you can with a clear conscience close the book and never look at it again"

William Hyde

Unknown said...

Scientific American has more scientifically realistic/accurate articles than most. Interestingly, a high proportion of them are also written by very articulate scientists. (No doubt with help from the "journalists" on the editorial staff).

Larry Moran said...

bay says,

Scientific American has more scientifically realistic/accurate articles than most. Interestingly, a high proportion of them are also written by very articulate scientists.

That used to be true 30 years ago but it's not true today.

Can you give me an example of a good article in a field that I can evaluate? Anything about genomes, biochemistry, molecular biology, or evolution will do. Right now I can only think of one or two articles in the past decade that would pass muster. They weren't written by science journalists.