October 25, 2005

Surveys and Statistics

In Chapters Two and Three of It Ain't Necessarily So, the authors examine the ways in which the media present and report scientific surveys, research, and statistics. They begin by giving examples of instances where the news has incorrectly reported on scientific findings, or where it gave more attention to unimportant or inconclusive research than to valid and well-founded research.

The last section in Chapter Two attempts to examine "why big news emerges from small findings." The authors look back at the examples they gave of the media getting it wrong and claim that reporters cover the "unimportant" science because it fits their template and generates excitement in their readers. Now, I know newspapers are businesses just like any other, and that they need to report news they think will generate readership, but I have a hard time believing they are so ruthless as to report false scientific findings simply because they think their readers would be interested in them.

I am not planning on becoming a journalist. However, considering that I am an English major, there is a possibility that I could become one. And I know that I don't know anything about the ins and outs of a sound, well-balanced scientific study. Common sense can tell me if a particular study is too biased, but the authors of this book are asking journalists and readers alike to know exactly when and how scientific reports go wrong.

Just as the authors of this book ask readers not to take the media's word for which scientific findings are valid, I ask everyone reading this book not to take the authors' word for how poorly journalists cover scientific news. The authors offer no suggestions for ways journalists could cover scientific news more accurately, and they don't take into consideration the timeliness of such reports. If a valid and "newsworthy" study comes out at the same time as a large scandal or natural disaster, the study will not make front-page news. However, if a less notable study comes out at a time when there is little other "newsworthy" news, it is more likely to be widely covered.

Chris noted in a comment on Jay's blog that "The authors want to be sure that their message sticks in their readers' minds, and the best way to do that is to deliver it point blank, over and over again, without concern for the other sides of the issue." Wouldn't this make them just as guilty as the journalists they're attacking? By pointing out only where and how journalists go wrong, they call attention to their own biases and one-sidedness.

The first part of Chapter Three gives an overview of "Tomato Statistics" and how journalists (the bad guys yet again) use statistics to their advantage, without giving thought to the circumstances surrounding the statistics or to the wide umbrella of crimes and instances that those statistics report.

Yet again, the authors place the guilt of abusing these statistics completely on the journalists who report them. However, shouldn't some of the blame go to the scientists publishing the statistics, and to the readers who don't realize that all statistics are somewhat sketchy and skewed?

The authors state: "Coverage of research that is proportionate to its actual importance would obviously be welcome. But in its absence, readers often can and should decide for themselves whether something journalists depict as a mountain is actually only a small pile of dirt." Journalists should report scientific findings proportionately, but in our competitive marketplace, those who advertise get the business. A newspaper may not even know a particular, yet valid, study exists unless the organization approaches it or creates its own buzz. Because journalists are not scientists, they need some help figuring out which reports are "newsworthy." Those that promote themselves will, by default, get more coverage.

Posted by JohannaDreyfuss at October 25, 2005 08:40 PM
Comments

Yes, the authors' methods in the text aren't very fair to journalists, but it's obvious that they aren't trying to target journalists -- they're trying to target the readers of newspapers who are most likely to believe reported statistics.

Propaganda is a powerful tool, especially when used for criticism.

As for journalists needing help figuring out scientific studies -- if outside organizations start helping newspapers decide what statistics to report, that will only cause them to become more biased (since they will be inclined to ally with those organizations that help them).

I think the real answer is for journalists to start becoming more specialized (in political science, biology, mathematics, business, criminal justice, etc.), and for their editors to start assigning them to those stories that they are best suited for.

Posted by: ChrisU at October 26, 2005 09:01 AM

I forgot to mention that there is a particular propaganda technique known as "card-stacking," which applies to the material in the text.

Card-stacking involves presenting only information that supports (or tears down) something, while withholding information that does the opposite. In other words, propagandists "stack the deck" in favor of the thing they support or against the thing they oppose.

Posted by: ChrisU at October 26, 2005 09:05 AM

I agree with you that journalists should be more specialized. That would solve this problem in a number of areas, not only science.

In terms of outside organizations helping reporters, I meant only that the organizations producing reports should take it more upon themselves to advertise their findings. They shouldn't wait for reporters to come to them; they should take the initiative to publicize their work if they feel it is newsworthy. If they started telling journalists what was newsworthy, then we'd have a whole new host of problems with biases and alliances, and there still wouldn't be any solution to the proportionality problem.

Posted by: Johanna at October 26, 2005 10:32 AM