I didn’t manage to write a follow-up to my post on the first part of Kitcher’s book. This is actually a topic I’ve written about, so you can see my thoughts on Kitcher in the early sections of that paper. I seem to have failed to convince anybody in class that “curiosity” is not a good way of capturing the whole of epistemic significance, but that is one of my main arguments in the paper. (I’m working on getting that paper published, so any comments on it would be appreciated.)
I’d like to talk a little bit about Kitcher’s recommendation for understanding the role of science in a democracy. First, let me point out what I think is an inconsistency in Kitcher’s setup for the discussion.
As you’ve probably read, Kitcher spends some time on the question of research into the qualities and capacities of members of an underprivileged group—for example, research on IQ differences between races, or on the capacity of men vs. women for careers in science—in a social context where the consequences of a positive result (i.e., identifying a real difference that coincides with social prejudices) would be highly negative: further marginalization, the end of social programs meant to correct previous discrimination, and so on. In such cases, Kitcher clearly states that standards of evidence must go up (pp. 96-97), just as Richard Rudner does in his article. But in many places, all over the book, he says that value judgments do not enter into the use of evidence to evaluate hypotheses. This is how his standard of “objectivity” is supposed to prevent the influence of values and the claims of democracy from undermining science. But, as Rudner cogently argues, and as Kitcher here seems to agree, values necessarily influence the standards of evidence in each case. When the risks are low, we can tolerate a fairly permissive standard of evidence for accepting a hypothesis. When there is more riding on it (e.g., the safety and efficacy of medical treatments, or politically charged research), the standards need to be higher.
One reason Kitcher may not see the inconsistency here is that he doesn’t do much with the claim that standards of evidence should go up. He has nothing particularly detailed or helpful to say about how we might alter our standards given the value-situation. Indeed, he goes on immediately to consider cases in which, as a psychological matter, most inquirers are likely to regard the hypothesis as extremely well supported, even when the evidence is slim or equivocal. In such cases, presumably, higher standards of evidence don’t matter. A second problem is that, while banning such research might be counter-productive, it certainly wouldn’t enter into the ideal of well-ordered science. So, Kitcher might say, the right thing to do is not to advise the scientists on how to raise their standards in this case; rather, we should advise them not to pursue the research at all, and instead to push their research closer to the set of ideal priorities.
I talked in class about how “well-ordered science” is supposed to work, and the two-stage process it is a part of. First, there is the abstract, idealized generation of the ideal of well-ordered science. Next, a set of recommendations for policy and for individual scientists is drawn up on the basis of that ideal. This takes a play right out of the playbook of analytic political philosophy à la Rawls—we begin with ideal theory, setting out what the principles of justice would be in an ideal society; then we attempt applications of that ideal insofar as it is possible given real-world constraints, aka “non-ideal theory.” Well-ordered science is, in effect, the set of research priorities that an idealized set of representatives—perfectly informed about the current state of scientific knowledge via “significance graphs,” completely informed about each other’s desires and preferences, and meeting some ideal of empathy and desire to reach consensus—would come up with in an ideal situation of democratic deliberation.
We can ask many questions about how Kitcher sets out well-ordered science and about the two-stage process, but right now I want to pose two kinds of questions (both of which I mentioned in class):
1. There are several places in which experts have a special input into the process of deliberation: they create the significance graphs and use them to tutor the deliberators, and they provide information about cost and probability of meeting various goals of different lines of research. Is Kitcher’s idea that such contributions are value-neutral with respect to the questions at hand even coherent? When enumerating practical and epistemic goals at the edges of significance graphs, do social values enter in? Should the ideal democratic process play a role in the earlier stage of determining significance, as well as in the stage of determining well-ordered science?
2. Is it reasonable to begin with such an abstract ideal? Rawlsians have certainly been criticized for the notion of ideal theory. Would it be better to replace this abstract ideal end-state with a set of principles or a proposed mechanism for actual democratic influence on the research priorities of science?
What do you think? And can you suggest some further questions for Kitcher?
- Jeremy Simon (2006), “The Proper Ends of Science: Philip Kitcher, Science, and the Good,” Philosophy of Science 73 (April): 194–214
- Exchange between Kitcher and Longino in Philosophy of Science, 69 (December 2002)
- P.D. Magnus, Draft regarding: Scientific significance (unpublished)