Often we wish to determine the cause of an observed effect. Logically, this is a two-step procedure. The first step is to formulate a list of suspected causes which, to the best of our knowledge, includes the actual cause. The second is to rule out by observation as many of these suspected causes as possible. If we narrow the list down to one item, it is reasonable to conclude that this item is probably the cause.
The justification of the first step (i.e., the evidence that the actual cause is included on our list of suspected causes) is generally inductive. The eliminative reasoning of the second step is deductive. Since both inductive and deductive reasoning are involved, the reasoning as a whole is inductive. (See Section 2.3.)
We arrive at the list of suspected causes by a process of inductive (frequently analogical) reasoning. Suppose, for example, that we wish to find the cause of a newly discovered disease. Now this disease will resemble some familiar diseases more than others. We note the familiar diseases to which it is most closely analogous and then conclude (by analogy) that its cause is probably similar to the causes of the familiar diseases which it most closely resembles. This will give us a range of suspected causes.
Suppose, for example, that the familiar diseases which the new disease most closely resembles are all viral infections. The suspected causes will then be viral. Close observation of the disease victims will establish which viruses are present in their tissues. We will conclude that the actual cause is probably one of these viruses. These viruses, then, form our list of suspected causes.
At this stage, however, our investigation is only half finished. For it is quite likely that we will find several kinds of virus in the tissues of the victims. To determine which of these actually caused the disease, we now employ a deductive process designed to eliminate from our list as many of the suspected causes as possible. The kind of eliminative process we use will depend on the kind of cause we are looking for.
Here we shall discuss four different kinds of causes and, corresponding to each, a different method of elimination. The eliminative methods were named and investigated by the nineteenth-century philosopher John Stuart Mill. Mill actually discussed five such methods, but the fifth (the method of residues) does not correspond to any specific kind of cause and will not be discussed here. Before discussing Mill’s methods, however, we need to define the kinds of causes to which they apply.
The first kind of cause is a necessary cause, or causally necessary condition. A necessary cause for an effect E is a condition which is needed to produce E. If C is a necessary cause for E, then E will never occur without C, though perhaps C can occur without E. For example, the tuberculosis bacillus is a necessary cause of the disease tuberculosis. Tuberculosis never occurs without the bacillus, but the bacillus may be present in people who do not have the disease.
A given effect may have several necessary causes. Fire, for example, requires for its production three causally necessary conditions: fuel, oxygen (or some similar substance), and heat.
The second kind of cause is a sufficient cause, or causally sufficient condition. A condition C is a sufficient cause for an effect E if the presence of C invariably produces E. If C is a sufficient cause for E, then C will never occur without E, though there may be cases in which E occurs without C. For example (with respect to higher animal species), decapitation is a sufficient cause for death. Whenever decapitation occurs, death occurs. But the converse does not hold; other causes besides decapitation may result in death.
A given effect may have several sufficient causes. In addition to decapitation, as just noted, there are many sufficient causes for death: boiling in oil, crushing, vaporization, prolonged deprivation of food, water, or oxygen—to name only a few of the unpleasant alternatives.
Some conditions are both necessary and sufficient causes of a given effect. That is, the effect never occurs without the cause nor the cause without the effect. This is the third kind of causal relationship. For example, the presence of a massive body is causally necessary and sufficient for the presence of a gravitational field. Without mass, no gravitational field can exist. With it, there cannot fail to be a gravitational field. (This does not mean, of course, that one must experience the gravitational field. Moving in certain trajectories relative to the field will produce weightlessness, but the field is still there.)
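These first three patterns can be stated truth-functionally: C is necessary for E when E never occurs without C, and sufficient when C never occurs without E. As a minimal sketch (the observation data here are hypothetical, not from the text), one can check a list of observed (C, E) pairs against both definitions:

```python
# C is necessary for E: no observed case has E without C.
# C is sufficient for E: no observed case has C without E.
def is_necessary(observations):
    return all(c for c, e in observations if e)

def is_sufficient(observations):
    return all(e for c, e in observations if c)

# Hypothetical tuberculosis-style data: (bacillus present, disease present).
# The bacillus appears without the disease, but never vice versa.
obs = [(True, True), (True, False), (False, False)]
print(is_necessary(obs), is_sufficient(obs))  # True False
```

When both functions return True, C satisfies the third pattern: it is a necessary and sufficient cause relative to the observed cases.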
The fourth kind of causal relation we shall discuss is causal dependence of one variable quantity on another. A variable quantity B is causally dependent on a second variable quantity A if a change in A always produces a corresponding change in B. For example, the apparent brightness B of a luminous object varies inversely with the square of the distance from that object, so that B is a variable quantity causally dependent on distance. We can cause an object to appear more or less bright by varying its distance from us.
An effect (such as apparent brightness) may be causally correlated with more than one quantity. If the object whose apparent brightness we are investigating is a gas flame, its apparent brightness will also depend on the amount of fuel and oxygen available to it, and on other factors as well.
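The inverse-square dependence can be checked numerically. The sketch below assumes an idealized point source with a hypothetical luminosity value; doubling the distance reduces apparent brightness by a factor of four:

```python
import math

# Apparent brightness of an idealized point source: luminosity spread
# over a sphere of radius d, so B is proportional to 1/d^2.
def apparent_brightness(luminosity, distance):
    return luminosity / (4 * math.pi * distance ** 2)

b_near = apparent_brightness(100.0, 1.0)  # hypothetical luminosity units
b_far = apparent_brightness(100.0, 2.0)
print(b_near / b_far)  # 4.0
```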
Classify the kind of causality intended by the following statements:
(a) Flipping the wall switch will cause the light to go on.
(b) Shutting off the electricity supply from the main lines will cause the light to go off.
(c) Making a lot of noise will cause the neighbors to complain.
(d) Pulling the trigger will cause the gun to fire.
(e) Raising the temperature of a gas will cause an increase in its volume.
(f) Raising the temperature of the freezer above +32 degrees Fahrenheit will cause the ice cubes in the freezer to melt.
(g) Killing the President will cause new presidential elections.
(h) Raising the temperature in the environment will cause the death of many plants.
(a) Necessary (but not sufficient: the light will not go on unless the light bulb is working).
(b) Sufficient (but not necessary: the light will go off also if the wall switch is turned to the “off” position).
(c) Sufficient (but not necessary: the neighbors may complain for a number of other reasons).
(d) Necessary (but not sufficient: the gun won’t fire unless it is loaded).
(e) Dependent (the higher the temperature, the higher the volume).
(f) Necessary and sufficient.
(g) Sufficient (but not necessary).
(h) Dependent (the higher the temperature, the greater the number of plants that will die).
Now, to reiterate, Mill’s methods aim to narrow down a list of suspected causes (of one of the four kinds just described) in order to find a particular cause for an effect E. Each of the four methods listed below is appropriate to a different kind of cause:
Mill’s Method of:            Rules Out Conditions Suspected of Being:
Agreement                    Necessary causes of E
Difference                   Sufficient causes of E
Agreement and difference     Necessary and sufficient causes of E
Concomitant variation        Quantities on which the magnitude of E is causally dependent
If by using the appropriate method we are able to narrow the list of suspected causes down to one entry, then (presuming that a cause of the type we are seeking is included in the list) this entry is a cause of the kind we are looking for. We now examine each of the four methods in detail.
The Method of Agreement
Mill’s method of agreement is a deductive procedure for ruling out suspected causally necessary conditions. Recall that if a circumstance C is a causally necessary condition of an effect E, then E cannot occur without C. So to determine which of a list of suspected causally necessary conditions really is causally necessary for E, we examine a number of different cases of E. If any of the suspected necessary conditions fails to occur in any of these cases, then it can certainly be ruled out as not necessary for E. Our hope is to narrow the list down to one item.
Suppose we are looking for the necessary cause of a certain disease E, and, using our background knowledge and expertise, we have formulated a list of five viral agents, V1 through V5, which we suspect may cause E. We examine a number of patients with E and check to see which of the suspected causes is present in each case. The results are as follows:
Only one of the five suspected causes (namely, V1) is present in each of the four patients with the disease. This proves that none of the suspected causes, except possibly V1, really is causally necessary for E.
Once V2 through V5 are eliminated as necessary causes, it follows deductively that
(1) If the list V1 through V5 includes a necessary cause of E, then V1 is that necessary cause.
This is the conclusion which Mill’s method of agreement yields. If we wish to advance further to the unconditional conclusion
(2) V1 is a necessary cause of E
then we need the premise
(3) The list V1 through V5 includes a necessary cause of E.
Such a premise cannot in general be proved with certainty, but can only be established by inductive reasoning. Typically, such inductive reasoning will be analogical. In the case in question, it may look something like this:
(4) Disease E has characteristics F1, F2, . . ., Fn.
(5) The known diseases similar to E have characteristics F1, F2, . . ., Fn.
(6) Viruses are necessary causes of the known diseases similar to E.
∴ (7) Some virus is a necessary cause of E.
Here the characteristics F1, F2, . . ., Fn might be such things as infectiousness or the presence of fever. To get from statement 7 to statement 3, we need to add to statement 7 the premise that
(8) The only viruses present in the cases of patients 1 through 4, who had E, were V1 through V5.
Statement 8 together with statement 7 deductively implies 3, since (by definition) any necessary cause for E must occur in every case of E. The entire argument may now be summarized in the following diagram:
The basic premises in statements 4, 5, 6, and 8 are obtained by observation or previous investigation. Statement 1 is the conclusion obtained by using Mill’s method of agreement. The reliability of the argument as a whole depends on the adequacy of the analogical inference (the inference from 4, 5, and 6 to 7) and on the truth of the basic premises. The premise in statement 8, for example, could prove to be false if our observations of the patients were not sufficiently thorough. That would undermine the whole argument, since in that case the real cause of E might be a virus that was present but undetected in the cases we studied. The adequacy of the analogical inference, of course, depends on the factors discussed in Section 9.5. We should be especially wary of suppressed evidence. (Does E have any unusual characteristics which suggest a nonviral cause?)
Not every application of the method of agreement works out so neatly. Suppose V1 and V2 both occur in all cases of E that we examine. Does that mean both are necessary for E? No, this does not follow. We may not have examined a large enough sample of patients to rule out one or the other. Our investigation is inconclusive, and we need to seek more data.
It may also happen that the method of agreement rules out all the suspected causes on our list. In that case, statement 3 is false, and so either 7 or 8 must be false as well. That is, either the necessary cause is not viral (as our analogical argument led us to suspect) or we failed to detect some other virus that was present in the patients. If this occurs, we need to recheck everything and probably gather more data before any firm conclusions can be drawn.
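The eliminative core of the method of agreement amounts to a set intersection: a suspected necessary cause survives only if it is present in every observed case of E. Here is a minimal sketch, with hypothetical patient data standing in for the table of viruses discussed above:

```python
# Method of agreement: keep only the suspects present in every case of E.
def method_of_agreement(cases):
    """cases: one set of detected suspects per observed case of E."""
    return set.intersection(*cases)

# Hypothetical data: viruses detected in four patients with disease E.
patients = [
    {"V1", "V2", "V3"},
    {"V1", "V4"},
    {"V1", "V2", "V5"},
    {"V1", "V3", "V4"},
]
print(method_of_agreement(patients))  # {'V1'}
```

If the result contains several suspects (as when V1 and V2 both occur in every case) or none at all, the investigation is inconclusive, exactly as described above.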
The Method of Difference
If we are seeking a sufficient cause rather than a necessary cause, the method to use is the method of difference. Recall that a sufficient cause for an effect E is an event which always produces E. If cause C ever occurs without E, then C is not sufficient for E. Often it is useful, however, to speak of sufficient causes relative to a restricted class of individuals. A small quantity of a toxic chemical, for example, may be sufficient to produce death in small animals and children but not in healthy adults. Hence, relative to the class of children and small animals it is a sufficient cause for death, but relative to a larger class which includes healthy adults it is not. Claims of causal sufficiency are often implicitly to be understood as relative to a particular class of individuals or events.
A number of people have eaten a picnic lunch which included five foods, F1 through F5. Many of them are suffering from food poisoning. It is assumed that among the five foods is one which is sufficient to produce the poisoning among this group of people. Now suppose that we find two individuals, one of whom has eaten all five foods and is suffering from food poisoning and the other of whom has eaten all but F1 and is feeling fine. Thus if P is the effect of poisoning, the situation is this:
What is the sufficient cause of P?
Since P failed to occur in person 2 in the presence of F2 through F5, clearly none of these foods is sufficient for P. On the assumption, then, that a sufficient cause for P occurs among F1 through F5, it follows that the cause is F1.
The weakest part of this reasoning is the assumption that a sufficient cause for P occurs among F1 through F5. As with the analogous premise in our discussion of necessary causes, it generally cannot be proved deductively but must be supported by an inductive argument. In this case, we might argue that in most past cases in which food poisoning has occurred, some toxic substance present in one food has been the culprit and that it has been sufficient to produce the poisoning in anyone who consumed a substantial amount of the food.
Once again, it is important to see how this sort of inductive reasoning could go wrong. Perhaps none of the foods is by itself sufficient for P, but ingestion of F1 and F2 together causes a chemical reaction which results in toxicity. Under these conditions we would still observe the poisoning in person 1 and no effect on person 2, but the assumption that a sufficient cause for P occurs among F1 through F5 would be false.
It may also happen that none of F1 through F5, or any combination of F1 through F5, is sufficient for P. A toxin may be present, say in F1, but consumption of this toxin may produce P only in certain susceptible individuals. That is, F1 may be sufficient for P in certain people but not in every member of the population we are concerned with. If this is so, then once again the assumption that a sufficient cause for P occurs among F1 through F5 is false, and so is the unqualified conclusion that F1 is sufficient for P. These errors can occur even if we make no faulty observations, because of the fallibility of the reasoning needed to establish this assumption. Therefore, caution is needed in applying the method of difference.
In summary, Mill’s method of difference is used to narrow down a list of suspected sufficient causes for an effect E. It does this by rejection of any item on the list which occurs without E. We hope to make enough observations to narrow the list down to one item. If we do, then we may conclude deductively that if there is a sufficient cause for E on the list, it is the one remaining. However, to establish that our list contains a sufficient cause, we must rely on induction from past experience.
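The elimination step of the method of difference can be sketched the same way: any suspect that occurs in a case where E is absent is ruled out as sufficient. The picnic data below are hypothetical, matching the example above:

```python
# Method of difference: a sufficient cause never occurs without the effect,
# so rule out any suspect present in a case where E did not occur.
def method_of_difference(suspects, cases_without_effect):
    ruled_out = set().union(*cases_without_effect)
    return set(suspects) - ruled_out

foods = {"F1", "F2", "F3", "F4", "F5"}
# Person 2 ate everything but F1 and is feeling fine.
print(method_of_difference(foods, [{"F2", "F3", "F4", "F5"}]))  # {'F1'}
```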
The Joint Method of Agreement and Difference
Mill’s joint method of agreement and difference is a procedure for eliminating items from a list of suspected necessary and sufficient causes. It incorporates nothing new; it merely involves the simultaneous application of the methods of agreement and difference.
If C is a necessary and sufficient cause of E, then C never occurs without E and E never occurs without C. Hence, if we find any case in which C occurs but E does not or E occurs but C does not, C can be ruled out as a necessary and sufficient cause for E (though it may still be a necessary or sufficient cause, as the case may be).
Suppose that a student in a college dormitory notices a peculiar sort of interference on her television set. She has seen similar kinds of interference before and suspects that its necessary and sufficient cause (provided the television is on) is the nearby operation of some electrical appliance. This leads her to formulate the following list of suspected causes:
S = electric shaver
H = hair dryer
D = clothes dryer
W = washing machine
She now observes which appliances are operating in nearby rooms while her television is on. The results are as follows (I is the interference):
Which of the suspected causes, if any, is the necessary and sufficient cause of the interference?
The only one of the suspected necessary and sufficient causes which is always present when I is present and always absent when I is absent is D. Hence, if one of the suspected necessary and sufficient causes really is necessary and sufficient for I, it must be D.
If the student goes on to conclude that D is actually necessary and sufficient for I, once again the weakest point of her reasoning will be the assumption that a necessary and sufficient cause was included on her list of suspected causes. As before, this premise can be justified only by induction from past experience with similar situations.
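Since the joint method simply applies both tests at once, it can be sketched by combining the two eliminations. The appliance observations below are hypothetical stand-ins for the student's table:

```python
# Joint method: a necessary and sufficient cause must be present in every
# case with the effect and absent from every case without it.
def joint_method(cases_with_effect, cases_without_effect):
    survivors = set.intersection(*cases_with_effect)
    for case in cases_without_effect:
        survivors -= case
    return survivors

with_interference = [{"S", "D"}, {"D", "W"}, {"H", "D"}]
without_interference = [{"S", "H"}, {"W"}]
print(joint_method(with_interference, without_interference))  # {'D'}
```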
The Method of Concomitant Variation
Mill’s method of concomitant variation differs from the other methods in that it is not concerned with the mere presence or absence of cause and effect, but with their relative magnitudes. It is a means of narrowing down a list of variable magnitudes suspected of being responsible for a specific change in the magnitude of an effect E. A variable is rejected as not responsible for a particular change if that variable remains constant throughout the change. If all but one of a list of variables remain constant while the magnitude of an effect changes, then, presuming that the variable responsible for the change appears on the list, it must be the one which has not remained constant.
A houseplant exhibits a sudden spurt of growth. We suspect that the variables relevant to its growth rate are these:
S = sunlight
W = water
F = fertilizer
T = temperature
But we observe that only one of these variables, namely, the amount of water the plant receives, has been altered recently. This observation may be schematized as follows:
Here G is the growth rate and the plus signs stand for increases of magnitude. No plus sign indicates no change. Which, if any, of the variables on our list is causally relevant to the observed change in the growth rate of the plant?
Since the amount of water the plant receives is the only one of the variables on our list that has changed, only it among these variables could be responsible for the observed change in growth rate.
Notice that this method does not eliminate the possibility that changes in S, F, or T also affect G. What it shows is that these three variables were not responsible for the particular changes observed here. Hence, if some variable on our list was responsible, it must have been W. To verify our conjecture further, we may cut back on the water the plant receives. Suppose that we then find this:
where the minus signs indicate decreases in magnitude. Then we will be still more confident that the rate of watering is the variable responsible for the observed changes in growth rate.
As with the other three methods, the process of eliminating S, F, and T as possible causes of the observed effect is deductive, but induction from past experience with plants is required to support the premise that one of the four variables on our list caused the changes in G.
The increased confidence provided by case 3 is due to the additional support case 3 lends to this premise. For repetition of instances of the correlation between W and G enhances by simple induction the probability that W and G have varied and will continue to vary together. If we were perfectly confident that the variable responsible for the change was one of the four on our list, this additional confirmation would be superfluous, and cases 1 and 2 alone would suffice to establish that the responsible variable is W.
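The eliminative step of the method of concomitant variation can likewise be sketched as a filter: any variable that remained constant while the effect changed is ruled out. The observations below are hypothetical, mirroring the houseplant example:

```python
# Concomitant variation: while the effect changed, rule out every suspected
# variable that remained constant ('0'); only changed variables survive.
def concomitant_variation(changes):
    """changes: dict mapping each variable to '+', '-', or '0'."""
    return {var for var, delta in changes.items() if delta != "0"}

# Only the watering rate changed while the growth rate G increased.
observed = {"S": "0", "W": "+", "F": "0", "T": "0"}
print(concomitant_variation(observed))  # {'W'}
```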
9.7: SCIENTIFIC THEORIES
The most sophisticated forms of inductive reasoning occur in the justification or confirmation of scientific theories. A scientific theory is an account of some natural phenomenon which in conjunction with further known facts or conjectures (called auxiliary hypotheses) enables us to deduce consequences which can be tested by observation. Often a theory is expressed by a model, a physical or mathematical structure claimed to be analogous in some respect to the phenomenon for which the theory provides an account.
For example, prior to the twentieth century there were two theories of the phenomenon of light, the corpuscular theory and the wave theory. According to the corpuscular theory (whose most notable advocate was Isaac Newton), light consists of minute particles, or corpuscles, expelled in straight trajectories by luminous objects. According to the wave theory (first propounded by the Dutch astronomer Christiaan Huygens), light consists of spherical waves spreading out from luminous objects like the circular ripples from a stone dropped into a lake. According to the wave theory, light waves are propagated through a fluid substance, the ether, which permeates the universe. Now, both theories were able to account for the phenomenon of color and for many of the reflective and refractive properties of light. But by the end of the nineteenth century the wave theory had temporarily won out, because of its superiority in explaining diffraction effects—patterns of light and dark bands formed when light is passed through a small aperture. Such patterns are predicted by the wave theory but difficult to explain by the corpuscular theory.
Each theory modeled light as a physical structure—moving particles, in one case; waves in a fluid medium, in the other. Both, however, were succeeded in the twentieth century by the quantum theory, in which light is modeled as a mathematical structure that has some features of both waves and particles but is not completely analogous to any familiar physical structure.
This example shows that scientific theories are justified primarily by their success in making true predictions. By ‘prediction’ we mean a statement about the results of certain tests or observations, not necessarily a statement about the future. Even theories about the past make predictions in this sense, since (in conjunction with appropriate auxiliary hypotheses) they imply that certain tests or observations will have certain results. A theory about the evolution of dinosaurs, for example, will have implications concerning the sorts of fossils we should expect to find in certain geological strata. These implications, then, are among its predictions. Since a theory’s predictions are deduced from the theory together with its auxiliary hypotheses, if any of them prove false, then either the theory itself or one or more of the auxiliary hypotheses must be false. (One cannot deduce a false conclusion from a set of true premises.) If we are confident of all the auxiliary hypotheses, then we may confidently reject the part of the theory used to derive the prediction. The corpuscular theory of light, together with what seem to be the most reasonable auxiliary hypotheses about the way small particles ought to behave, implies that diffraction ought not to occur. Since it does occur, nineteenth-century physicists, confident of these auxiliary hypotheses, rejected the corpuscular theory.
Often, however, we are not completely confident of the truth of the auxiliary hypotheses; hence there may be controversy about the soundness of the deduction used to reject the theory. If one or more of the auxiliary hypotheses are indeed false, then the falsity of a prediction made with the aid of those hypotheses does not entail the falsity of the theory.
Whereas the reasoning by which scientific theories are refuted is deductive, the reasoning by which they are confirmed is inductive. After the demise of the corpuscular theory of light, the wave theory became increasingly confirmed. Unlike the corpuscular theory, the wave theory (in conjunction with plausible auxiliary hypotheses about the orientation and amplitude of the waves) does predict diffraction effects. Hence, when these were observed, confidence in the wave theory increased.
However, confirmation of a prediction (or even many predictions) of a theory does not prove deductively that the theory is true. Theories together with their auxiliary hypotheses always imply many more predictions than can actually be tested. Even if all the predictions tested so far have been verified, some untested prediction may still be false. That would imply the falsity of the theory, provided that the auxiliary hypotheses are true. Hence, from a logical point of view, confidence in any scientific theory should never be absolute.
Nevertheless, it is often held that as more and more of the predictions entailed by a theory are verified, the theory itself becomes more probable. This principle may be formulated more precisely as follows:
· (P): If E is some initial body of evidence (including auxiliary hypotheses) and C is the additional verification of some of the theory’s predictions, the probability of the theory given E & C is higher than the probability of the theory given E alone.
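A toy Bayesian calculation illustrates why (P) is plausible when the theory (with its auxiliaries) entails the verified prediction. The numerical values below are hypothetical, chosen only to make the arithmetic concrete:

```python
# Principle (P) as a toy Bayesian update, under assumed values:
p_theory = 0.3          # P(T | E): probability of the theory on prior evidence
p_pred_if_theory = 1.0  # theory plus auxiliaries entails the prediction
p_pred_if_not = 0.5     # assumed chance the prediction holds anyway

# Total probability of the prediction, then Bayes' rule:
p_pred = p_pred_if_theory * p_theory + p_pred_if_not * (1 - p_theory)
p_theory_given_pred = p_pred_if_theory * p_theory / p_pred

print(round(p_theory_given_pred, 3))  # 0.462, up from 0.3
```

Verifying the prediction raises the theory's probability here precisely because the prediction was less than certain on the supposition that the theory is false.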
Principle (P) seems to be the principle underlying the inductions by which scientific theories are confirmed. But it is not self-evidently true, and it is not provable as a law of logic or probability theory. Moreover, some of its instances are evidently false, suggesting that (P) needs further restriction.
To illustrate this point, consider the situation with respect to theories of light at the time when serious attention was first paid to diffraction phenomena in the middle of the nineteenth century. What happened historically was that the corpuscular theory was rejected and the wave theory accepted. But one might have accounted for diffraction by maintaining a corpuscular theory, augmented by the hypothesis that a strange force acts on the corpuscles of light as they pass through small apertures, separating them into distinct sheaths and thus giving rise to the observed effects. Alternatively, one might have argued that the diffraction phenomena are an illusion due to peculiarities of our cameras and eyes. Or one might have rejected both the wave and corpuscular theories and argued that light is something else entirely—say, filaments or strands emitted from luminous objects. This could be made compatible with the known properties of light by adopting sufficiently ingenious auxiliary hypotheses. One could create such alternatives ad infinitum.
Each of these theories, if augmented by appropriate auxiliary hypotheses, predicts diffraction phenomena as well as the other properties of light known in the nineteenth century. Does the observation of diffraction, then, make each more probable, as unrestricted use of (P) suggests? This seems doubtful. In practice, only the wave theory was regarded as having been confirmed or rendered more probable. Theories like those mentioned in the previous paragraph were not seriously considered. The reason is that the auxiliary hypotheses required by these other theories (such as the hypothesis that a strange force affects light corpuscles traveling through small apertures) were themselves unjustified. They were not plausible independent of the theory. Auxiliary hypotheses which have no independent justification and are adopted only to make a theory fit the facts are called ad hoc hypotheses.
In practice, principle (P) is not applied equally to all theories, but preferentially to those theories which do not require ad hoc hypotheses. The wave theory predicted diffraction by means of auxiliary hypotheses which seemed perfectly natural. All competing theories were either extremely complex in themselves or required complex and ad hoc auxiliary hypotheses. So even though other theories could be made to imply the same predictions, only the wave theory was regarded as substantially confirmed by the observation of diffraction. (We might note, incidentally, that the wave theory itself was succeeded by the quantum theory primarily because of the discovery of new phenomena which could not be predicted by the wave theory unless it too were burdened with ad hoc hypotheses.)
Not only is (P) applied preferentially to theories which do not require ad hoc hypotheses; it is also (as suggested by the example above) applied preferentially to theories which are themselves simple. That is, other things being equal, simple theories are regarded as more highly confirmed by verification of their predictions than are complex theories. Various restrictions on (P) have been proposed by various theorists, but they are generally controversial and need not be discussed here.
Arrange each of the following sets of statements in order from strongest to weakest.
· (a) Iron is a metal.
· (b) Either iron is a metal or copper is a metal.
· (c) Either iron is a metal, or copper or zinc is a metal.
· (d) It is not true that iron is not a metal.
· (e) Iron, zinc, and copper are metals.
· (f) Something is a metal.
· (g) Some things are both metals and not metals.
· (h) Either iron is a metal or it is not a metal.
· (i) Iron and zinc are metals.
· (a) Most Americans are employed.
· (b) There are Americans, and all of them are employed.
· (c) Some Americans are employed.
· (d) At least 90 percent of Americans are employed.
· (e) At least 80 percent of Americans are employed.
· (f) Someone is employed.
· (a) About 51 percent of newborn children are boys.
· (b) Exactly 51 percent of newborn children are boys.
· (c) Some newborn children are boys.
· (d) It is not true that all newborn children are not boys.
· (e) Somewhere between one-fourth and three-fourths of all newborn children are boys.
· (a) Leonardo was a great scientist, inventor, and artist who lived during the Renaissance.
· (b) Leonardo did not live during the Renaissance.
· (c) Leonardo lived during the Renaissance.
· (d) Leonardo was a Renaissance artist.
· (e) Leonardo was not a Renaissance artist.
· (f) Leonardo was not a Renaissance artist and scientist.
Arrange the following sets of argument forms in order from greatest to least inductive probability.
· (a) 60 percent of observed F are G.
x is F.
∴ x is G.
· (b) 20 percent of F are G.
x is F.
∴ x is G.
· (c) 60 percent of F are G.
x is F.
∴ x is G.
· (a) All ten observed F are G.
∴ All F are G.
· (b) All ten observed F are G.
∴ If three more F are observed, they will be G.
· (c) All ten observed F are G.
∴ If two more F are observed, they will be G.
· (d) All ten observed F are G.
∴ If two more F are observed, at least one of them will be G.
· (e) All F are G.
∴ If an F is observed, it will be G.
· (a) 8 of 10 doctors we asked prescribed product X.
∴ About 80 percent of all doctors prescribe product X.
· (b) 80 of 100 doctors we asked prescribed product X.
∴ About 80 percent of all doctors prescribe product X.
· (c) 80 of 100 randomly selected doctors prescribed product X.
∴ About 80 percent of all doctors prescribe product X.
· (d) My doctor prescribes product X.
∴ All doctors prescribe product X.
· (e) My doctor prescribes product X.
∴ Some doctor(s) prescribe(s) product X.
· (f) All 10 doctors we asked prescribe product X.
∴ All doctors prescribe product X.
· (a) Objects a, b, c, and d all have properties F and G.
Objects a, b, c, and d all have property H.
Object e has properties F and G.
∴ Object e has property H.
· (b) Objects a, b, c, and d all have properties F, G, and H.
Objects a, b, c, and d all have property I.
Object e has properties F, G, and H.
∴ Object e has property I.
· (c) Object a has property F.
Object a has property G.
Object b has property F.
∴ Object b has property G.
· (d) Object a has property F.
∴ Object b has property F.
· (e) Object a has properties F and G.
Object a has property H.
Object b has properties F and G.
∴ Object b has property H.
· (f) Object a has property F.
∴ Objects b and c have property F.
· (a) Objects a, b, c, d, and e have property F.
∴ All objects have property F.
· (b) Objects a, b, c, d, and e have property F.
∴ Objects f and g have property F.
· (c) Objects a, b, and c have property F.
∴ All objects have property F.
· (d) Objects a, b, c, d, and e have property F.
Objects a, b, c, d, and e have property G.
Objects f and g have property F.
∴ Objects f and g have property G.
· (e) Objects a, b, c, d, and e have property F.
Objects a, b, c, d, and e have property G.
Objects f and g have property F.
∴ Object f has property G.
Each of the following problems consists of a list of observations. For each, answer the following questions. Are the observations compatible with the assumption that exactly one cause of the type indicated (necessary, sufficient, etc.) is among the suspected causes? If so, do the observations enable us to identify it using Mill’s methods? If they do, which of the suspected causes is it, and by what method is it identified?
Answers to Selected Supplementary Problems
(1) (g), (e), (i), (a) and (d), (b), (c), (f), (h) ((a) and (d) are of equal strength)
(4) (a), (d), (c), (b), (e), (f)
(2) (e), (d), (c), (b), (a)
(4) (b), (a), (e), (c), (d), (f)
(3) None of the suspected causes is necessary for E (method of agreement).
(6) G is the only one of the suspected causes which could be necessary and sufficient for E (joint method of agreement and difference).
(9) H is the only one of the suspected causal variables on which E could be dependent (method of concomitant variation).
1 Readers who wish to understand the mathematical details of the relationships among n, s, the margin of error, and the probability of the conclusion, given the premise, of a statistical generalization should consult the material on confidence intervals in any standard work on statistics.