
First International Symposium: "The Potential of Joint Fact-Finding: The Role of Scientific Information in Policymaking"

Keynote Lecture (Video Presentation)
Joint Fact-Finding and Collaborative Adaptive Management
Lawrence Susskind (Professor, Massachusetts Institute of Technology)



Good morning, and thank you so much for the invitation to participate in this conference on integrating joint fact-finding into policymaking processes. My name is Larry Susskind. I’m speaking to you from the offices of the Consensus Building Institute in Cambridge, Massachusetts, near MIT, where I’m a member of the faculty, and Harvard University, where I’m also part of the faculty at Harvard Law School.

I’m going to be talking about joint fact-finding. That’s something that we do here at the Consensus Building Institute, so while I’m going to be talking about it in somewhat of a summary or theoretical way, in fact JFF, as we call it, is something we do in practice. So I will be trying to merge both theory and practice as I describe the key steps in the joint fact-finding process and some of the considerations that go into how, when and why we do joint fact-finding.

As you see in the diagram that is projected, we think about joint fact-finding in the policymaking process as having six discrete steps. I’m going to take a few minutes and highlight what happens at each of those steps in the practice of joint fact-finding.

In a preliminary way, though, let me begin by saying why we want to do joint fact-finding. Many people believe that, if there are scientific and technical issues that are at stake in the making of particular public policy decisions, experts should just make those decisions and the product of that work done by experts should be handed to whoever the policymakers or the stakeholders might be and they can make of it what they will.

We don’t believe that. We don’t think that works very well, regardless of the legal or constitutional or cultural system that you’re talking about. So, we think of joint fact-finding as a step in a collaborative decision making process. Everything that I’m going to say with regard to these six stages of the work assumes that if a public policy decision has to be made--whether to build something, where to build it, how to design it, whether the effects of whatever is proposed are acceptable, whether policy priorities of one sort or another should be given dominance, whether standards for public health and safety should cut one way or another--whatever those decisions are, if science is important, then joint fact-finding should be the rule. Again, that’s what we believe. That’s what our practice assumes.

Step one in joint fact-finding is that, whoever the decision maker or whatever the decision making agency is, they need to decide whether they want to initiate any kind of collaborative process because they feel scientific and technical issues need to be taken into account. It’s not just a political choice. And if a convening or organizing or decision making body says, “Yes, this decision of where to put that major facility, that enormously expensive bit of infrastructure, is going to raise questions. It’s going to be an issue, politically, but we care about getting the science and technical issues right,” then they need to assess when and how to initiate a joint fact-finding process.

So the first step we call assess whether joint fact-finding is necessary and how to get it organized. And the assessment process, in our view, needs to be built around the involvement of someone who is not a partisan, someone who has not already decided whether this infrastructure should be built, where it should be built, or how it should be designed, but someone committed to managing this process of joint learning or joint inquiry.

The agency or body that needs to make a decision taps an independent--we would say neutral--assessor, and that person begins the process of talking to the most obvious, most important stakeholders or agency groups or departments or companies or interest groups, and then keeps expanding the groups he or she talks to based on what one set of potential groups says about others who should possibly be involved.

Based on those conversations, the assessor says, “Look, there are big issues here in which science and technical considerations are going to have to be taken seriously, and so a joint fact-finding process covering these issues would be a good idea, and these are the stakeholders who want to be involved, and whoever is going to be making this decision should therefore use joint fact-finding.” Based on that decision, we would then move to the next step, called “Convene.”

Whatever the body is that has to make the final decision would then invite representatives of the relevant stakeholder groups, agencies and organizations and say, “We’re going to have a decision process and joint fact-finding is going to be part of it. These are the terms. We hope you’ll participate. We’re going to commit this much time, this much money. This is the agenda. Come to the table.” And from our standpoint, at the point at which you bring the groups together, one of the first things they have to decide is what are the scientific or technical questions that they’re going to have to address in order to make a wise and effective recommendation on the policy choice that started the ball rolling.

The stakeholders need to say what the technical questions are. The process of generating technical questions needs to be facilitated, in our view, again, by someone who has not already decided the answer. A neutral facilitator. Someone who knows enough about the subject matter, someone who’s gone out during the assessment and talked to people and they could see that this person listened to what they said. That facilitator works with the parties to identify a list of scientific or technical questions that are at stake and will need to be addressed in the course of making whatever the policy choice or decision is.

One of the things that should happen during this convening process is that the facilitator should gather whatever information already exists and summarize it and say, “Look, there are some things we know. This isn’t the first time a decision like this has been made,” and put together a review of relevant scientific and technical information, reports from parallel situations, in a form that people can look at, which will help them figure out what questions they really have and help them frame those questions in a useful way.

At that stage, we can move to the third step, which we would call define the scope of the joint fact-finding effort embedded in this collaborative attempt to build agreement on what a wise approach to this policymaking process is. Now, defining the scope of the study means getting advice from scientific and technical people who may or may not be part of the stakeholder group.

The stakeholders say, “If you’re going to build that facility, we’re worried about its environmental impacts. We’re worried about its long-term and its short-term economic or social impacts.” They’ve looked at writing on the subject, published work in peer-reviewed journals, and they say, “Well, it seems to us to be an unresolved question whether building one of those in this location will or won’t have the effects that some of the stakeholders are concerned about. So we would like a study to look at this.”

If an environmental impact assessment is required, as it is under the law in Japan, for example, for a major facility that’s being proposed, the scoping of that impact assessment--what set of options should be looked at, what dimensions of impact should be studied, over what geographic boundary, over what time horizon--that’s what I mean by defining the scope of the joint fact-finding study.

It doesn’t always have to be in the context of an impact assessment required by law. It could be a supplementary study, one that goes beyond what’s required by law but that the stakeholders feel is necessary to give them shared information that will allow them to make, collaboratively, a recommendation about the larger policy question that motivated this process in the first place.

So during the third step, defining the scope of the study, the stakeholders need to evaluate what kinds of information should be gathered in what ways. They need to talk about a time frame and the kinds of expert advisors--maybe multiple expert advisors in particular arenas who have different orientations toward studying that kind of problem. Someone might say, “I study this problem from an engineering perspective.” Someone else says, “Well, I look at the basic science.” Someone else says, “Well, I look at the applied social science aspects of it.” So defining the scope of the study means defining what questions need to be addressed and how those questions ought to be addressed, from the standpoint of a group of lay users--the stakeholders who are going to have to use the results to help make a decision.

At that point, we move from scoping the study to actually conducting the study. Here, again, most people would imagine you’d hire some experts. You’d tell them the scope of the study that all the stakeholders came up with, and then you’d send them away, hopefully with a budget. Then they’d report back the result and it would then be the job of the stakeholder groups to take the science and work with it.

We don’t think of joint fact-finding that way. I realize this flies in the face of what a lot of people imagine you want, which is a strict divide between the users of scientific and technical information and the producers of that information. We would argue that that’s a mistake. We would argue that as the group decides what kind of technical help it wants, it should bring in potential experts in those areas to talk with the group and hear from the group why they have certain questions, how they think about those questions.

Different technical people should say, “Well, given your concern about the environmental impacts of that kind of facility in this location, I propose the following kinds of studies, and here’s why.” Then they would hear a second or a third person say, “I have a different approach.” Based on listening to that, the stakeholder group would say, “We want that professional, that one and that one, and they need to work together to now put together a detailed scope of the study that the experts agree they can do with the time and the budget we’re talking about.”

Then the stakeholder group has to say, “OK, go ahead.” The agencies, the stakeholder group, have to hear what the experts who they’ve talked to propose as the way they’re going to proceed. At that stage, you need to let the experts go off and do the work that they’ve in a sense contracted to do with all the stakeholders.

As they move through the work, I would not say let them finish before they come back. I would argue--and I think the theory of joint fact-finding described in books like the Consensus Building Handbook that we’ve written here at the Consensus Building Institute would say the same--that the experts should report part of the way through on what they’ve discovered about whether they’re going to be able to do what they promised. For example, they may discover that certain data are not available, or that the cost is going to exceed the budget that they have, or that the scientists have now decided, based on what they started to do, that in some ways they really ought to change what they said they were going to do.

We want some chance for mid-course review, discussion, or revision, and then we will let the experts finish the work on the timetable that they agreed to work under. Then, when they think they are almost done, we would argue, again, that they should meet with the group and should say, “Here’s what we’ve been able to do based on what we’ve found, given what we’ve promised. And to some extent, we couldn’t get all the data, so this is what we did to fill in gaps. This is what we did to interpolate when we were missing something. So here’s our product, but here are the decisions we made along the way, which are not so much science as they are the best judgment of scientists.” But these should be transparent to the group. At that point, the study should be submitted. The group will now understand what was or wasn’t done in the way that was or wasn’t promised.

At this stage, I would argue that the stakeholder group and the agencies should be able, now, to evaluate--I’m moving to the fifth step--the product. They should look at it and be able to say to the experts, “So, how much confidence should we have in this analysis, given the judgments you had to make, and given the limitations you had to work with?” We would like some kind of sensitivity analysis, as the engineers would call it, shared in a discussion between the stakeholders trying to evaluate the result and the experts who did the study.

And the experts would say, “We have 90% confidence in what we did, or we have 80% confidence in what we did, given the constraints we worked under, given the data we couldn’t find, given the way you asked us the question. You asked a question that goes beyond what we know for certain, so we made our best judgment together in answering it. We have some disagreement within our technical team.” All of that should be shared as the group tries to evaluate what it’s going to do with the results of this study.

Also, some members of the stakeholder group, if we’ve been truly representative, won’t have the same expertise or technical sophistication or experience as other members of the group. So this facilitator is going to have to cross-examine some members of the expert team in a way designed to bring out what they’ve found in language that will be accessible to the group. I’m not talking about translation. I’m talking about asking the kinds of questions that a facilitator knowledgeable enough to understand what’s going on could ask on behalf of some members of the group that are sitting silent because they don’t fully understand all the choices and the implications of those choices that the technical team made.

That question-asking process isn’t designed to embarrass the experts. It’s designed to help stakeholders understand that experts make certain non-objective judgments that influence the usefulness and change the value of the fact-finding work. Not to say it’s not important, but if I, on behalf of the group as a facilitator, say to the experts, “How confident are you, given that we asked you for a 50-year forecast of what the impact of this infrastructure would be on this environment?” the expert might say, “Well, if you ask me for 10 years, I have much more confidence. Twenty-five, less confidence. Fifty, I don’t have a lot of confidence at all. But you said you wanted a 50-year forecast, so we did the best that we could.”

So now the group understands the changing nature, over time, of the value of the scientific work. That’s not to say the work isn’t important. It’s just to say that someone has to ask the experts to explain more of the background of their decision making so that some of the participants in the group who have very little scientific or technical ability on their own can fully appreciate what they can do with that work and where they’re going to have to jump off from the scientific work and make judgments based on their own thinking or the thinking, collectively, of the stakeholder and user group.

At this point, I think the stakeholder group then has to decide what they’re going to recommend about the big policy question. Should we build this facility in this location? They’re going to draw on the joint fact-finding work as best they can, but political and economic and other considerations are going to all come into play. At this stage, the facilitator shifts from talking just about the results of the joint fact-finding study to talking about how to blend that scientific and technical input with political and organizational and institutional and legal and cultural and other concerns.

At that stage, the group will reach a recommendation if there is skillful consensus building going on. After all, the agency that started this whole thing is asking for a recommendation from the group. The group has to reach an agreement. If it reaches no agreement, its recommendation in all likelihood will be ignored by the agency, which will do the best it can in spite of the process not producing an agreement. So evaluation, the fifth step in joint fact-finding, ends with a recommendation.

There is really still one more step, because if I’m the agency and I’m getting this recommendation from my 25-member task force that I put together, representative of all the groups, I want to know that those 25 people took the results of the joint fact-finding and their group discussion back to the various constituencies, the various groups, that the people around the table presumably represent in some way. They may not necessarily be elected to represent them, but they need to be chosen to be like the people in all these categories whom we’re trying to have participate in this collaborative process.

At this point, the members of the collaborative process need to take the recommendation in its near-final form back to all the groups that they supposedly represent and say, “Look. Here’s what the joint fact-finding produced. Here’s what we make of that set of findings. Here’s how we used those to make a recommendation. Are you OK with my going along with what the group as a whole now thinks is going to be its recommendation?”

Then each member of that collaborative process comes back to the table and says, “Look, I tried to communicate the results of the joint fact-finding back to the people that I’m supposed to be talking like or speaking for. Here’s what they said.” So now there’s one last discussion in the full group in which they decide, “Well, are we now able to reach an informed consensus that reflects not just what we learned from the joint fact-finding sitting around the table talking to the experts, but also what the people we speak for now say when we’ve presented to them as best we could the results of that joint fact-finding study?”

Those are the six steps. (1) Assess the need for joint fact-finding; (2) Convene a collaborative process in which joint fact-finding will be embedded; (3) Define the questions that joint fact-finding will address and how they’ll be addressed; (4) Organize the study to be conducted by people with expertise from outside the collaborative process but chosen by the people in the collaborative process; (5) Interact with those experts at the point at which they have discoveries to report; and (6) Evaluate the results of the joint fact-finding, incorporate them into a provisional set of recommendations, and then communicate those recommendations back to the basic constituencies so that the collaborative process produces an informed proposal that takes account of the concerns of all the groups that were represented.

Now, if you do this, you ought to produce a recommendation that reflects the political and ideological concerns of all the groups and that incorporates science and technical analysis into decision making but doesn’t separate scientific and technical analysis from political considerations. It joins them. That should increase the legitimacy of what is being proposed to the policymakers who spurred this whole process.

The entire process can take place on a schedule, within a budget that was organized with the commitments of all the participants when they started, and the entire process can be facilitated by someone with enough technical background to help ask good questions on behalf of participants who don’t have a lot of technical background on their own, but not someone who’s trying to steer the result toward a particular outcome.

We’ve done this process many times. It can work. It can produce something that happens on a timetable, within a budget. It can involve people with a lot of technical background and a little. It doesn’t tell policymakers what to do. It gives them an idea of what actions would be supported by groups, not just on political grounds but where those groups have been required to take account of scientific and technical information. It avoids the typical controversy of every political group using scientists or technical people of their own choosing to advance their political agenda, which then just gets us this confrontation between scientific and technical groups. This eliminates that.

The beauty of joint fact-finding is it produces joint learning. It doesn’t say that science should trump policy and politics, but it allows for a lot of careful consideration of science before people make up their minds and produces something that decision makers can then rely on not just for political reasons but in terms of its scientific credibility as well.

I hope that quick review of what is usually a very elaborate process is understandable. We have, at the Consensus Building Institute and at MIT’s Science Impact Collaborative, written a lot about this. We’ve described examples of how to do it. If you look at http://scienceimpact.mit.edu/, you’ll see more about the actual joint fact-finding work that our team at MIT has done.

I hope you have a great conference. I’m sorry I’m not there with you. I look forward to hearing about the results.