This summer’s news articles about the disconnect between monitoring data from the Bay’s tributaries and modeled predictions of the Bay restoration effort are a fine thing for those of us who support the Chesapeake Bay Program from the sidelines. Most people who read this journal have long been aware that the monitoring data show something quite different from the Bay model runs.

To have the wider public puzzling over this problem can only serve to increase pressure to resolve the differences and to make the necessary changes to get the Bay restoration effort on track.

One way to react to a public airing of this disconnect would be to circle the wagons: defend the results as a work in progress and the best that money can buy.

But, the Bay Program’s claims of having (nearly) achieved the Chesapeake Bay agreement goals were almost always given with a note that they were based on modeled results—not monitoring data. I would like to think that most people can understand the difference between those two types of measures.

Still, it may be that the public believed more “real” success had been achieved than what actually occurred.

Another way to respond to public concerns that the Bay has not been restored as much as one might have thought is to reconsider the Bay Program’s original mandate. A succinct version of this is provided in the opening paragraph of the 1983 Chesapeake Bay Agreement: “To fully address the extent, complexity and sources of pollutants entering the Bay.” And, “…[to] share the responsibilities for management decisions and resources regarding the high priority issues of the Chesapeake Bay.”

Many hairs can doubtless be split about what constitutes “addressing” the extent and sources of pollutants entering the Bay. But, at the very least, it implies an intention to identify and to monitor pollutant loads.

To achieve this, the Bay Program placed its money on the creation of a large mathematical model for the entire drainage and on gathering the data needed to feed this model. Such a model was thought preferable to simple site monitoring because, among other things, it would allow Bay Program managers to match implementation levels of pollution-reducing activities with predicted restoration results.

The fact that there is a wide gap between what the monitoring data say and what the Bay model predicts indicates that the model has not yet achieved the level of accuracy needed for estimating the net impact of our pollution reduction efforts.

Perhaps the single most important element in this failure has been the lack of research on the technical efficiencies of practices that were intended to reduce pollution loads.

Tremendous effort went into determining how many acres of various Best Management Practices (BMPs) were applied across the drainage, but far less effort went into ascertaining how effective those actions were at reducing pollution loads. In many cases, the categories of BMPs were so broad that no single technical efficiency could be accurately applied to them.

The greatest harm has not come from the model's imprecision. The bigger issue is its consumption of resources, perhaps at the expense of a clearer picture of what is happening in each of the major tributaries.

The nutrient load “caps,” whether imposed under state-specific or Baywide Total Maximum Daily Loads, will be applied at the tributary level. It is here that the challenges of allocating load allowances among polluters will most require precise modeling.

New approaches to modeling and measuring the tributaries are being developed—but if all the money and attention go to the Bay model, these new approaches will be starved.

Some might argue that this first element of the Bay Program’s mandate was more demanding than what I have described. The adverb “fully” as in “fully address” could be taken to mean that not only would the Bay Program track pollution loads, it would also address them by reducing them. But, when you consider the second element in their mandate, it seems that the Bay Program’s role was always intended to be one of facilitation and assistance, not management authority.

To “share the responsibilities for management decisions and resources regarding the high priority issues of the Chesapeake Bay” implies that the Bay Program intended to work cooperatively with the Bay partners to achieve restoration results.

When one looks at the Bay Program’s sharing of responsibility for management decisions, it is, perhaps, holding together the coalition of Bay partners that shines as its greatest achievement. To maintain this coalition, the Bay Program has had to take care not to push too hard on any priority issue that might force politically painful choices on the partners. In this sense, it is more a lack of responsibility that has been shared by all.

The final step in this process could be the Bay partner states undertaking expensive but fallacious studies that explain why they cannot meet the terms agreed under the Chesapeake Bay 2000 agreement.

These failures of the Chesapeake Bay Program will not be fixed by simply applying for more money.

The managers need to go back to their fundamental mandate and determine how they can rejig this political-scientific-administrative construct to better achieve it.

This may require reappraising the relationship between the Bay Program and its partner states. It should entail reassessing their dependence on “cooperative” and “voluntary” approaches for achieving restoration goals.

They might consider abandoning this incestuous partnering and opening more of the Bay Program’s work to competitive bidding by private companies.

And, it certainly will require going back and determining more precisely both the technical and cost efficiencies of BMPs.

This may put the Bay Program a little farther out in front of the Bay partners than it is accustomed to being, but the job will not get done with everyone keeping their heads down.