The simmering cauldron that is science is a mysterious and wonderful thing. It takes a while before it produces results that are ready to be served up for consumption by society. The progress of science depends on an open exchange of results, healthy skepticism, and often, spirited criticism and rebuttal. So, how do we know when it’s soup?

Just in the last few weeks there were all kinds of new scientific results served up by the national news media—some useful, some just interesting. Geneticists report that some of Sally Hemings’s descendants are also descendants of one of the male Jeffersons of Virginia. Dozens of fossilized eggs containing embryonic sauropods are discovered in Patagonia. The Pacific Ocean is releasing large quantities of nitrous oxide, or laughing gas, which, although it doesn’t cause surfers’ euphoria, is an important greenhouse gas. The number of college students who smoke jumped 28 percent between 1993 and 1997.

What all of these science-based news stories have in common is that they were published in such prestigious journals as Science, Nature, the New England Journal of Medicine, and the Journal of the American Medical Association. Typically, after rigorous peer review, these results are released to the press just a few days before they are to be published in the journal, with a prominent warning that the information is embargoed until the publication date. A series of articles in the Oct. 30 issue of Science provides a variety of perspectives from scientists, editors and journalists on this embargo practice, which is aimed fundamentally at quality control.

Coincidentally, in that very same issue of Science, there was a letter to the editor signed by 20 prominent ecologists pledging themselves to the task of transforming the science of ecology to better serve humanity. They argue that, given the urgency of the problems facing humankind and the biosphere, it is not enough for scientists to do first-rate research and publish it in the technical literature for the benefit of scientific colleagues. We must also inform the general public of the relevance and importance of our work. As public teachers, we must interpret science even when knowledge is incomplete.

For those of us in the Chesapeake Bay scientific community, the Pfiesteria piscicida phenomenon brought home last year the dilemma of reconciling the deliberate scientific process of peer review, publication and independent replication of results with the urgent need to inform the public and decision makers. Then, the public imperative for action to protect public health and find solutions demanded scientific conclusions before careful studies could be conducted, results reproduced and journal articles reviewed by peers. Yet the time from submission of a paper to a scientific journal to its actual publication can be a year or more.

This may be an extreme example, but the fact of the matter is that new scientific results concerning the Chesapeake Bay and its restoration are routinely reported to management committees and the public at large before they are formally published. The BAY JOURNAL does it in every issue.

To a large extent, this is a demand-driven and constructive process. It is naive to think that a rigid embargo on new scientific information could or should be maintained until formal publication. Moreover, scientific information is often reported in the media based on communications in open meetings or in response to press queries in which scientists are asked to interpret and speculate. Even so, scientists can still release, interpret, share, evaluate and communicate scientific information more responsibly.

A case in point is the renewed controversy over whether the lesions observed on fish in the areas suspected of toxic pfiesteria outbreaks were, in fact, caused by a fungus rather than pfiesteria. This was summarized in a well-balanced article in the November 1998 issue of the BAY JOURNAL, which itself followed a spate of news articles in the popular press on this subject.

Many of the news articles were inspired by a press release titled, “USGS find fungus to be a cause of fish lesions in Chesapeake.” The press release contained some interesting and useful information. While it duly admitted that the results could not rule out that pfiesteria played a role in the development of lesions, the press release went on to speculate that the fungus may have been introduced and that the fungal-induced lesions may stimulate pfiesteria blooms.

Neither of these speculations appeared to be based on observational or experimental evidence. But, as one who was asked by many reporters to comment on these findings, I found it most troubling that the scientific details—the data, a report or paper—were not available to the scientific community. How could I, or any other scientist, thoughtfully respond to the reported findings without the opportunity to review the supporting information and its limitations? If the findings had been reported in “this week’s issue of the journal Science,” I could at least have read the paper and taken some comfort in knowing that the work had been subjected to rigorous peer review.

The press spin on this story was astounding. A story carried by the Associated Press and widely reported in regional papers ignored the caveats of the press release and said flatly that “skin ulcers on Chesapeake Bay fish are caused by fungus and not pfiesteria.” The Delmarva Farmer went so far as to indicate that information on the fungal infection of lesions was ignored for political reasons (in fact, uncertainty regarding the cause of lesions was stressed throughout the assessment process). It also suggested that actions taken to close tidal rivers and reduce nutrient inputs were, therefore, “off the mark.” The result: The public is further confused, and the agricultural community grows more cynical toward the new regulations that require its cooperation.

I am disturbed by what seems to be a trend toward “press release science” in our Bay community. In some cases, results are reported that have not been shared with the scientific community, much less peer-reviewed and published. For example, the U.S. Geological Survey also recently issued a press release headlined, “Chesapeake Bay waters have warmed, evidence suggests.” It reported the conclusion of a poster presentation at a scientific meeting in Canada, based on results that are not yet accessible to the scientific community. A Baltimore Sun reporter who called for my reaction quoted me as saying: “I’m from Missouri. Show me.” Although I would have preferred a more elegant way to have my views expressed, the quote succeeded in communicating the key point: The process of science is based on skepticism. Fellow scientists must be able to see and evaluate the evidence.

Another recent USGS press release illustrates a second dimension of “press release science”: oversimplifying or twisting the results of published reports to attract attention. This release was headlined: “High nutrient concentrations are not the only factors controlling algal biomass.” Commendably, this press release accompanied a newly published report. But my own reading of the report leads me to conclude that the public relations person who wrote the release got just a little carried away. The study shows that, in addition to nutrient concentrations, light availability and substrate also determine where algae grow in the nontidal Susquehanna River. Wow, is that news? Is the lay reader left with the impression that nutrients don’t matter, and left wondering why we are spending so much effort to reduce them? Do we risk worsening public confusion and cynicism by emphasizing iconoclastic “hooks” to catch press interest?

My purpose is not to criticize the USGS. It just happened to provide these recent examples of “press release science.” Other government agencies, universities (including my own) and research institutions have done similar things. Even those of us who think we don’t live in glass houses certainly have large picture windows.

What, then, can we do to improve the situation? A couple of things come to mind. First, the scientific community in the Bay region, perhaps after a discussion with the regional press, could develop general guidelines for sharing information with the scientific community (even if only on a web site) prior to, or at least at the time of, issuing press releases. The Bay Program’s Scientific and Technical Advisory Committee could sponsor a dialogue toward this end.

Second, it seems to me that the Bay scientific community could do a far better job of developing consensus on issues of timely importance. In that way, constructive criticism can at least be applied, and conclusions constrained by the limitations of the data, before the soup is served. Groups of environmental, agricultural and medical scientists, in fact, did a relatively good job in that regard during the pfiesteria controversies in 1997. But, too often, we content ourselves with holding large workshops that may be effective in sharing information, but are not structured to foster debate and build consensus.

In either case, we need to recognize that science is not just something that individual scientists do. It is a social process, one in which we scientists have responsibilities to each other, as well as to the public that supports our enterprise. As the late Carl Sagan said, “If you look at the collective enterprise of science, you see that it has elegant, self-correcting machinery built into it in a fundamental way, which makes it different from anything else. And it works.”