I had the honor of chairing a panel convened by the National Research Council that recently published the report, “Adaptive Management for Water Resources Project Planning.”

Carl Hershner, chair of the Chesapeake Bay Program’s Scientific and Technical Advisory Committee, was also a member of the panel.

Our report is one of five developed to advise the U.S. Army Corps of Engineers on the analysis and planning of its water resources projects. The NRC, the operating arm of the National Academies of Sciences and Engineering, engages the broad scientific and technological community to advance knowledge and advise the federal government on important issues.

In our case, the NRC assembled a panel of 12 diverse scientists, engineers and legal scholars from throughout the country to make recommendations to the Corps and Congress on the effective use of adaptive management.

Adaptive management involves flexible decision-making that can be adjusted in the face of uncertainties as outcomes from management actions and other events become better understood. It places emphasis on the careful monitoring of these outcomes to advance scientific understanding and to help adjust policies or operations as part of an iterative learning process.

Adaptive management has been embraced as the central organizing framework for the Corps’ large ecosystem restoration efforts in the Everglades, Coastal Louisiana, Upper Mississippi River and Missouri River as well as other restoration programs involving water management, such as the Colorado River and San Francisco Bay/Sacramento-San Joaquin Delta Estuary.

Adaptive management approaches are also being used in endangered species recovery, salmon conservation, forest management, the remediation of hazardous contamination, and an expanding array of applications that must confront uncertainty.

Adaptive management is also identified as a guiding principle for the new approach to ecosystem-based management called for in the recent report of the U.S. Commission on Ocean Policy (See “Panel seeks to revamp ocean, coast policies,” Bay Journal, September 2004).

Shouldn’t adaptive management be appropriate for the Chesapeake Bay restoration and, if so, why isn’t it often mentioned, much less discussed?

In his paper, “Governance and Adaptive Management for Estuarine Ecosystems: the Case of Chesapeake Bay,” published in 1994, political scientist Tim Hennessey of the University of Rhode Island considered the Bay Program a good example of adaptive management of a large ecosystem in that it has adjusted goals based on experience and information.

Indeed, the Bay Program has, to significant degrees, the six elements of adaptive management identified in our NRC report: 1. management objectives that are regularly revisited and revised; 2. model(s) of the system being managed; 3. a range of management choices; 4. the monitoring and evaluation of outcomes; 5. mechanism(s) for incorporating learning into future decisions; and 6. a collaborative structure for stakeholder participation and learning.

In my view, where the Bay Program comes up a bit short of adaptive management requirements is in the monitoring and evaluation of outcomes. This is different from tracking "progress" based on certain expectations about management actions. Adaptive management places its primary focus on critically evaluating the effectiveness of those actions themselves.

Adaptive management requires not only evolving and refining goals, but the timely determination of whether things are working as expected and then adapting or changing approaches to improve effectiveness.

Coming up short on evaluating outcomes lies at the very heart of the controversy over the appropriate uses of monitoring and modeling now receiving attention in the media, a Congressional hearing and a forthcoming Government Accountability Office investigation. Adaptive management requires a model of some sort; in fact, multiple models are preferred because their discrepancies and their performance relative to observations allow one to focus on reducing the most important uncertainties.

But, model projections must be rigorously tested through the monitoring of outcomes in an adaptive learning cycle. As President Ronald Reagan said: “Trust, but verify.”

Coincidentally, the first NRC committee that I chaired 15 years ago (I was not then working in the Chesapeake Bay area) included a case study that evaluated the then fairly young Chesapeake Bay Monitoring Program. In its report, "Managing Troubled Waters: the Role of Marine Environmental Monitoring," (See www.nap.edu/books/0309041945/html) our committee was favorably impressed by the scope and design of the program, particularly its plan for frequent, multilevel reporting, which seems somehow to have fallen by the wayside.

But we noted even back then that the program was not sufficiently responsive to the information requirements of decision makers. We did not specifically mention adaptive management, but the monitoring program design principles that the report espoused are consistent with this approach.

In the intervening time, the interpretation and use of monitoring results have advanced rather slowly, while watershed and Bay modeling has grown more complex and assumed the dominant influence on decision making, even to the point of relying on model projections rather than observations to assess progress.

Don’t get me wrong. I think that these kinds of models are indispensable. But Carl Walters, one of the founding visionaries of adaptive management, observed that preoccupation with complex simulation modeling is commonly a significant impediment to adaptive management, which recognizes and even embraces the inherent uncertainties in prediction.

Walters’ 1997 essay, “Challenges in adaptive management of riparian and coastal ecosystems,” (available online at www.consecol.org/vol1/iss2/art1), should be required reading for anyone contributing to, developing or using Bay Program models.

So, how do we bring monitoring and modeling together into the successful marriage needed for adaptive management?

About a decade ago, I suggested at a STAC meeting that the monitoring and modeling subcommittees of the Bay Program’s Implementation Committee be merged to foster the needed integration. This simply could not be done, it was explained to me. The experts on these two subcommittees, their training and abilities, and the agencies they represented were really quite different. They would just not work together well.

What was missed was that these were precisely the barriers that only a merger could overcome. But, these things take time. Maybe the time is now ripe for a fusion of these groups into an Adaptive Management Support Subcommittee (AMaSS)!

The National Research Council’s report, “Adaptive Management for Water Resources Project Planning,” is available at www.nap.edu/books/0309091918/html/.