Time to finish up my thoughts on the first section of The Challenge of the Social and the Pressure of Practice. If you haven’t done so, you might want to read my earlier post first. I hope you’ll share your thoughts as well.
One thing we don’t want to do in this complex and controversial discussion of science and values is to talk past one another, to seem to agree or disagree when we are in fact talking about different subjects. One way we might make that mistake is to fail to specify, or to be unclear about, where, exactly, science is supposed to be influenced by values. The authors in this section are, by and large, fairly clear about this, though you might miss it if you don’t look too carefully.
One’s values might impact one’s choice of research question. For example, as a pharmaceutical researcher at a university or medical school, one has the option of doing research on how to treat malaria more effectively and cheaply in those places where it is still a major cause of death, or of doing research on a drug for treating sexual dysfunction. Here, whether one’s values incline one toward saving and improving the lives of many in some of the world’s most impoverished nations, or toward improving the lives of mostly middle-class and wealthier people with sufficient leisure time to be significantly distraught by questions of their sexual performance, will contribute to which field one works in (as will the number of people working in each field, the funding of and prospects for such research, one’s economic goals, etc.).
One’s values might also impact the type of explanations proposed in answer to one’s research questions. This is undoubtedly a contribution to the content of scientific theories, and it is exemplified in some cases of the feminist research that Longino and Kourany discuss. For example, Kourany points out that feminist primatology proposes theories in which female primates play an active role, rather than earlier theories in which primate behavior is driven exclusively by males (treating females as merely passive resources for males). This influence can go all the way from merely carefully exposing any sexist, paternalist, racist, etc. biases in the science, to theories whose content is explicitly in favor of a political agenda.
One’s values might impact the methods and techniques of research. For example, one might insist that medical trials include not only white males, but also subjects of different races and genders. This might seem obvious, but medical researchers only adopted such methods because Congress mandated them in 1993! Until then, attempts to point out that medical research which tested only white males was biased were largely ignored.
One’s values might lastly impact the justification and acceptance of some hypothesis or theory. This is largely what is at issue in discussions of underdetermination: if evidence isn’t sufficient to determine which hypothesis we accept, then our values and politics are one explanation for how we make a decision. More subtly, as Rudner pointed out in one of our first readings, values will have a definite impact on things like what significance levels or margins of error one adopts in research. For example, in research on differences in average intelligence between racial groups, it is far, far more important to avoid false positives (accepting a hypothesis that there is a certain difference when in fact there is no significant difference in the actual population) than false negatives (rejecting a hypothesis of difference that is in fact true), because of the detrimental socio-political consequences of such claims. Therefore, the standards for statistical significance ought to be set extremely high. Historically, this has been the failing of research of this kind: it set too low a bar for evidence, and further evidence later disproved it (when the failings weren’t even more manifest).
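Rudner’s point about significance levels can be made concrete with a toy simulation (my own illustration, not from any of the readings; all the numbers are arbitrary assumptions). Choosing a stricter threshold buys fewer false positives at the cost of more false negatives, and nothing in the statistics itself tells you where to set that dial:

```python
import math
import random

def z_test_p(sample_a, sample_b, sigma=1.0):
    """Two-sided p-value for a difference in means, with sigma assumed
    known (a simplification to keep the sketch self-contained)."""
    n = len(sample_a)
    z = (sum(sample_a) / n - sum(sample_b) / n) / (sigma * math.sqrt(2 / n))
    # Normal CDF via the error function: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def rejection_rate(true_diff, alpha, trials=2000, n=50, seed=0):
    """Fraction of trials in which we 'accept a difference' at level alpha,
    when the true difference in means is true_diff."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(trials):
        a = [rng.gauss(0, 1) for _ in range(n)]
        b = [rng.gauss(true_diff, 1) for _ in range(n)]
        if z_test_p(a, b) < alpha:
            rejections += 1
    return rejections / trials

# With no real difference, the false-positive rate tracks alpha;
# tightening alpha lowers it, but also lowers the chance of
# detecting a real (here, modest) difference.
for alpha in (0.05, 0.001):
    false_positives = rejection_rate(true_diff=0.0, alpha=alpha)
    power = rejection_rate(true_diff=0.3, alpha=alpha)
    print(alpha, false_positives, power)
```

The tradeoff itself is pure statistics; deciding which error matters more in a given domain is exactly where Rudner locates the value judgment.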
Let’s switch gears and talk more narrowly about John Norton’s paper. There, he argues that the problem with all this talk about “underdetermination” and values influencing science is that such discussions are based on an oversimplified account of inductive inference (simple hypothetico-deductivism) and ignore all the major work in confirmation theory and inductive logic. Thus, it isn’t really the case that the evidence underdetermines theory; given the right account of how evidence determines theory, it is not at all clear that underdetermination is a common or even possible phenomenon.
Let’s look at Norton’s different “major accounts” of inductive inference.
- Inductive Generalization – An instance confirms its generalizations, but a deductive consequence doesn’t necessarily confirm that which entails it. A simple example would be the move from, e.g., “Some ravens are black (namely, all of those I’ve seen thus far)” to “All ravens are black.” This is too basic an account, though, and Norton offers Mill’s methods and Glymour’s bootstrap (which allows us to use well-confirmed auxiliary hypotheses to move from observational to theoretical language, thus expanding the range of possible generalizations) as more sophisticated forms.
- Hypothetical Induction – Hypothetico-deduction is the simple core here, but Norton points out the many ways in which the account can be improved. One way is to exclude certain kinds of evidence from counting as confirmatory. That is, not only must some hypothesis H entail the evidence E for E to confirm H, but it must also be the case that if H is false, E is not very likely to occur. Further restrictions might state that H must be simpler than rival hypotheses that predict E, that H must provide the best explanation of E (in a way that Norton doesn’t specify, but which can be found in Peter Lipton’s book, Inference to the Best Explanation), or that H must be produced by a reliable method (such as sitting under an apple tree and being bonked on the head, one presumes).
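The first amendment above can be written out schematically (my own framing, not Norton’s formalism, and the 0.1 threshold is an arbitrary stand-in for “not very likely”):

```python
def hd_confirms(h_entails_e, p_e_given_not_h, threshold=0.1):
    """Amended hypothetico-deductive criterion: E confirms H only if
    H entails E *and* E would be unlikely were H false. Plain
    hypothetico-deduction keeps only the first conjunct."""
    return h_entails_e and p_e_given_not_h <= threshold

# "All ravens are black" entails "this raven is black", and that
# observation would be less likely if the hypothesis were false:
print(hd_confirms(True, 0.05))  # confirmation
print(hd_confirms(True, 0.9))   # blocked: E is likely even if H is false
```

The point of the second conjunct is to screen off cheap confirmations, evidence that almost any rival hypothesis would also lead us to expect.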
- Probabilistic Accounts – Here, Bayes’ Theorem rules the roost, telling us that we can compute the probability of a hypothesis given new evidence, P(H|E), from the prior probabilities of the hypothesis and evidence separately, P(H) and P(E), and the probability of the evidence given the hypothesis (roughly, how likely the hypothesis tells us the evidence is), P(E|H). The magic equation is P(H|E) = P(E|H) P(H) / P(E). This has all sorts of nice results which are too technical to explore in any detail at the moment.
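A minimal numeric illustration of the update rule (my own toy numbers, not Norton’s), with P(E) expanded by the law of total probability:

```python
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E), where
    P(E) = P(E|H) * P(H) + P(E|not-H) * P(not-H)."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

# A hypothesis starting at P(H) = 0.2 that strongly predicts the
# evidence (P(E|H) = 0.9) while rivals make it unlikely (P(E|not-H) = 0.1):
print(posterior(0.2, 0.9, 0.1))  # posterior well above the 0.2 prior
```

Notice that the second condition from the hypothetical-induction amendments, that E be unlikely if H is false, shows up here as a small P(E|not-H), which is exactly what makes the posterior climb.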
So far, so good. Now I want to ask the question: what is the difference between suggesting that evidence underdetermines theory on a simple hypothetico-deductive account of confirmation (so that we must use values to determine the choice) and suggesting that evidence determines theory on a more complex account? One important difference is that while hypothetico-deduction depends only on a simple and uncontroversial bit of logical apparatus—deductive logic—all of the other accounts introduce more complex and more controversial apparatus. The controversial nature of such accounts can be easily seen by just examining the literature on confirmation and induction, or really just by looking at Norton’s account of it. After all, he offers us three very different kinds of view, all of which contain many diverse sub-views. One might further ask, what allows us to choose between different types of inductive logic? (Could it be cognitive/social/political values?)
Finally, I would argue that, at least for many of Norton’s cases, the difference between logic-plus-values-plus-evidence determines theory and complex-logic-plus-evidence determines theory is a merely verbal dispute. Consider:
- Inductive Generalization – While this account surely avoids many problems of underdetermination, it does so by introducing a value of conservatism (of one type or another) into its logic. The simple method, where only hypotheses which are direct generalizations of particular instances are allowed, is a very strict conceptual conservatism, allowing little innovation from the terms/concepts of basic observation. Glymour’s bootstrap, on the other hand, only allows for hypotheses that are broadly consistent with a variety of other accepted hypotheses. These forms of conservatism and consistency are common cognitive values cited by Quine, Kuhn, and others.
- Hypothetical Induction – Here, simplicity and the various explanatory virtues that go into accounts of inference to the best explanation are key members of the lists of cognitive values.
- Probabilistic Induction – Here, we have no solution at all. Looking back to Rudner, if all the Bayesian account can tell us is the probabilities or likelihoods of hypotheses, we still have to make decisions about whether to accept those hypotheses, an area ripe for the influence of values. The insistence that science merely assigns the probabilities and doesn’t concern itself with when to believe a hypothesis is subject to Rudner’s argument that this line, if consistently followed, creates a regress.
There’s plenty more I could talk about, but I’ve gone on long enough. What do you think?