# PHI 1101 REASONING AND CRITICAL THINKING

Information on Test 3

Test 3 covers Units 12–16. The questions will be similar to those in the text, the

quizzes, and the tutorial exercises.

UNIT 12: BASIC PROPOSITIONAL LOGIC (1)

– Common propositional operators; symbolization

Operator      Symbol   English
Negation      ¬        not
Conjunction   &        and
Disjunction   ∨        or
Conditional   →        if . . . then

– Common forms and their symbolizations:

Both P and Q.            P&Q
Not both P and Q.        ¬(P&Q)
Neither P nor Q.         ¬(P ∨ Q) or ¬P&¬Q
P or Q, but not both.    (P ∨ Q)&¬(P&Q)

UNIT 13: BASIC PROPOSITIONAL LOGIC (2): CONDITIONALS

– Common forms:

Statement form    Symbolization(s)
If P then Q       P → Q
P if Q            Q → P
P only if Q       P → Q or ¬Q → ¬P
P unless Q        ¬Q → P
Not P unless Q    ¬Q → ¬P

– Converse and contrapositive; biconditionals

– Necessary and/or sufficient conditions

– Common argument forms involving conditionals

∗ Valid

· Modus ponens: A → B, A ∴ B

· Modus tollens: A → B, ¬B ∴ ¬A

∗ Invalid

· Affirming the consequent: A → B, B ∴ A

· Denying the antecedent: A → B, ¬A ∴ ¬B


UNIT 14: BASIC PROPOSITIONAL LOGIC (3): PROOFS

– Proofs and formal proofs

– Inference rules, a selection:

∗ Modus ponens

∗ Modus tollens

∗ Hypothetical syllogism

∗ Double negation

∗ Conjunction

∗ Simplification

∗ Weakening

∗ Disjunctive syllogism

– Proofs in propositional logic

– Proof by reduction to absurdity (not covered on Test 3)

UNITS 15 AND 16: INDUCTIVE AND CAUSAL ARGUMENTS; THEIR STRENGTHS AND WEAKNESSES

– Inductive generalizations—factors to consider:

∗ quality of data

∗ sample size

∗ representativeness of sample

– Causal arguments: key points

∗ Correlation and causation

∗ Confusing cause and effect

∗ Common cause

∗ Post hoc fallacy

– Applications—factors to consider:

∗ Has the most specific information available about the individuals concerned been used?

∗ Has all available data been used?


SAMPLE QUESTIONS

1. Let D = Dave will go to the store and S = Sam is out of milk. Which of the following formulas corresponds to the statement below? (6 multiple-choice questions, 2 points each)

Dave won’t go to the store if Sam is out of milk.

(a) ¬D → S

(b) ¬(D → S)

(c) S → ¬D

(d) ¬S → ¬D

(e) ¬D → ¬S

2. Let P = Paula will go to the concert and R = Ryan will go to the concert. Which of the following English statements corresponds to the formula below? (6 multiple-choice questions, 2 points each)

¬R → ¬P

(a) Ryan won’t go to the concert if Paula doesn’t.

(b) Ryan won’t go to the concert unless Paula does.

(c) Ryan will only go to the concert if Paula does.

(d) Paula won’t go to the concert unless Ryan does.

(e) Paula will go to the concert if Ryan doesn’t.

3. State whether A is a necessary and/or a sufficient condition for B. (3 multiple-choice questions, 2 points each)

A: Being a sister; B: Having a sibling

4. Show that the following argument form is valid by constructing a proof of its conclusion from its premises using the valid forms of inference we have studied in this course. (You can copy and paste the symbols you see below when typing your answer. 1 question, 4 points)

P ∨ Q, P → R, Q → S, ¬R ∴ S

(¬ & ∨ →)
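Separately from the formal proof the question asks for, the validity of this form can be confirmed by brute force: try every assignment of true/false to P, Q, R, S and check that no assignment makes all the premises true and the conclusion false. A sketch in Python (not part of the course's proof method, just a cross-check):

```python
from itertools import product

def implies(a, b):
    # Material conditional: A -> B is false only when A is true and B is false.
    return (not a) or b

# Premises: P v Q, P -> R, Q -> S, not-R; conclusion: S.
counterexamples = [
    (p, q, r, s)
    for p, q, r, s in product([True, False], repeat=4)
    if (p or q) and implies(p, r) and implies(q, s) and (not r) and (not s)
]
print(counterexamples)  # [] -- no counterexample, so the form is valid
```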


5. Discuss the strengths and weaknesses of the following arguments, which involve inductive reasoning. (2 questions, 3 points each)

As the deadline for RRSP contributions approaches, many financial institutions advertise their investment products. Prominently featured are the performance figures for the previous year. This is obviously important information. For someone making a contribution, it makes sense to go with the fund that earned the most money last year, right?

6. Discuss the strengths and weaknesses of the following argument, which involves inductive and/or causal reasoning. (2 questions, 3 points each)

It has been documented that during the sixties and seventies, the incidence of illicit drug use among teenagers increased in direct proportion to the number of students who had drug education programs in the schools. Obviously, those programs served to increase the number of teenagers using drugs.

7. Discuss the strengths and weaknesses of the following application. (2 questions, 3 points each)

I heard that your chances of winning something with this particular lottery ticket are 1 in 6, and I've already bought 5 losing ones in a row. Surely, that next one I buy will be a winner.


PHI 1101: Reasoning and Critical Thinking:

Course Notes Part 12:

Basic Propositional Logic (1)

P. Rusnock

Copyright © 2020 Paul Rusnock

BASIC PROPOSITIONAL LOGIC

INTRODUCTION

In this and the following two parts, we will study a special kind of argument form: those in which the variable components are complete sentences, or propositions. Here are some examples of such forms, some valid, some invalid, along with an instance of each:

Not P.                           I don't have rice.
∴ Not both P and Q.              ∴ I don't have rice and beans.

Not both P and Q.                I don't have rice and beans.
∴ Not P.                         ∴ I don't have rice.

Either P or Q.                   Either Jasmine went or Kate did.
Not P.                           Jasmine didn't go.
∴ Q.                             ∴ Kate went.

If P then Q.                     If Kate goes, Jasmine will too.
Not Q.                           Jasmine won't go.
∴ Not P.                         ∴ Kate won't go.

If P then Q.                     If Kate goes, Jasmine will too.
Not P.                           Kate won't go.
∴ Not Q.                         ∴ Jasmine won't go.

We'll also take this opportunity to introduce symbolic logic. The use of special symbols here has two important advantages: by providing symbols for frequently occurring logical concepts, it allows us to abbreviate, giving us a clearer view of the forms of propositions and arguments; and, by taking us one step farther from our original arguments, the symbolic representations help us to concentrate on the forms of arguments, reducing the risk of being distracted by their subject matter when we assess their reasoning.

SOME COMMON LOGICAL CONCEPTS; SYMBOLIZATION

In this section, we present a handful of concepts which are central to the study of propositional logic, along with a set of symbols commonly used to designate them.


Negation The concept of negation is one of the simplest logical concepts. As we use it here, negation is a propositional operator, one that converts true propositions into false ones, and false ones into true ones. Thus if a proposition is true, e.g.,

February comes after January.

its negation,

February does not come after January.

is false. While if a proposition is false, e.g.,

Great white sharks are vegetarians.

its negation,

Great white sharks are not vegetarians.

is true.

Logicians mostly use the symbol '¬' to represent negation: thus '¬A' symbolizes 'not A', or 'It is not the case that A'.1

Negation is expressed in English in a variety of different ways: sometimes with the help of the word 'not' or a contracted form of the same word ('can't'), other times with the help of prefixes, including 'in-', 'im-', 'un-', 'a-', and 'dis-', among others. Thus, for example, if we replace 'Sam can walk' with S, then each of the following sentences can be represented as ¬S:

Sam is not able to walk.

Sam is unable to walk.

Sam isn’t able to walk.

Sam is incapable of walking.

Because negation simply changes the truth-value of a given proposition and there are only two truth-values (true and false), applying negation twice in a row simply lands you right back where you started. Thus 'not not A' will always have the same truth-value as 'A' itself, as will 'not not not not A', 'not not not not not not A', and so on. Thus, for example, if 'Sam is not unable to walk' is true, so is 'Sam is able to walk'; and if the former is false, so is the latter.

Note, however, that this equivalence only applies in cases where two negations occur in a row. In other situations, two negations may not "cancel out". The lottery corporation, for example, would be in deep trouble if the two negations in the following sentence cancelled each other out:

Joe didn't win the lottery and Sam didn't either.

With the above notation, we can write the above result as follows:

For any proposition A, A and ¬¬A always have the same truth-value (as do ¬¬¬¬A, ¬¬¬¬¬¬A, etc.).
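This double-negation fact can be checked mechanically; here is a minimal sketch in Python, where `not` plays the role of ¬:

```python
# Applying 'not' twice (or any even number of times) returns the
# original truth-value, so ¬¬A always matches A.
for a in (True, False):
    assert (not (not a)) == a              # ¬¬A agrees with A
    assert (not (not (not (not a)))) == a  # so does ¬¬¬¬A
print("A and ¬¬A agree for both truth-values")
```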

1 Other symbols you may see are '∼A', 'A̅', or '−A'.


Conjunction The concept of conjunction is most often expressed in English by the word 'and'. But the same meaning can be and often is conveyed by words such as 'but', 'however', 'nevertheless', 'although', and several others. We will use the symbol '&' to represent conjunction.2 Propositions of the form A&B are also called conjunctions, and the propositions A, B occurring in them are called conjuncts.

Now that we have two logical concepts, we can also express certain others in terms of them. A noteworthy case here is the logical operator expressed in English by 'not both . . . and . . . ', as used, for example, in the sentence:

Joe and Sam didn't both get a raise.

With negation and conjunction, we can express this as follows:

It's not true that Joe got a raise and Sam got a raise.

Or, in symbols:

¬(J&S)

Note that we introduce parentheses here to indicate what part of the sentence is covered by the negation.
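The scope of the negation makes a real difference, as a quick illustrative check shows (Python used here purely as a truth-value calculator, with J and S as the raise statements from the text):

```python
# With J true (Joe got a raise) and S false (Sam didn't):
# "not both got a raise" is true, but "neither got a raise" is false.
J, S = True, False
not_both = not (J and S)          # ¬(J & S)
both_not = (not J) and (not S)    # ¬J & ¬S
print(not_both, both_not)  # True False -- the parentheses matter
```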

EXERCISES

I. How would the following expressions read in English? Are there any simpler expressions that are equivalent?

1. ¬J&S

2. ¬S&¬¬J

3. S&¬S

4. J&¬S

5. ¬J&¬S

6. ¬(¬J&S)

7. ¬(J&¬S)

8. ¬(¬J&¬S)

II. Symbolize the following (where J, S are as above)

1. Joe didn’t get a raise, and neither did Sam.

2. Sam got a raise, but Joe didn’t.

3. Joe got a raise but Sam didn’t.

4. Joe failed to get a raise, but Sam got one.

5. Joe failed to get a raise, but Sam didn’t.

6. Both Joe and Sam failed to get a raise.

7. Neither Joe nor Sam got a raise.

8. Neither Joe nor Sam failed to get a raise.

2 Other common symbols are '∧' and '·'.


Disjunction is usually expressed in English by the word 'or'. Logicians use the symbol '∨'. A proposition of the form 'A ∨ B' is called a disjunction, and the propositions A, B occurring in it are called disjuncts.

As logicians use this word, 'A or B' should be understood to mean the same as 'At least one of A, B is true'; thus, in saying 'A or B', we do not exclude the possibility that both A and B are true. (Sometimes the expression 'and/or' is used to represent this sense.) This sense of 'or' is sometimes called inclusive, as opposed to the exclusive sense, according to which 'A or B' comes out false if both A and B are true. We won't introduce a special symbol for the exclusive 'or'. If we want to express it, we can do so in terms of inclusive disjunction, negation, and conjunction, as follows:

(A or [exclusive] B) = (A or [inclusive] B) but not both (A and B)

In symbols:

(A ∨ B)&¬(A&B)
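Exclusive 'or' built this way is just the claim that A and B differ in truth-value, which a four-row check confirms; a sketch (again using Python as a truth-value calculator):

```python
from itertools import product

# (A v B) & not(A & B) is true exactly when one of A, B is true and
# the other false, i.e. when their truth-values differ.
for a, b in product([True, False], repeat=2):
    exclusive = (a or b) and not (a and b)
    assert exclusive == (a != b)
print("exclusive 'or' checked on all four assignments")
```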

We can also express 'neither . . . nor' in terms of disjunction and negation as follows:

¬(A ∨ B)

Recall from the exercises above that 'neither A nor B' can also be expressed in terms of negation and conjunction as follows:

¬A&¬B

Thus '¬(A ∨ B)' and '¬A&¬B' convey the same information, in the sense that whenever one of them is true the other is, and when one of them is false the other is as well. (We say that two propositions of these forms have the same truth-conditions.) We have already encountered the same phenomenon; recall 'A' and '¬¬A'. Such formulas are said to be equivalent.
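The equivalence of ¬(A ∨ B) and ¬A&¬B can likewise be verified by running through all four truth-value assignments; a small sketch:

```python
from itertools import product

# 'Neither A nor B' two ways: the formulas agree on every assignment,
# so they have the same truth-conditions.
for a, b in product([True, False], repeat=2):
    assert (not (a or b)) == ((not a) and (not b))
print("¬(A ∨ B) and ¬A&¬B are equivalent")
```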

Conditionals are most often expressed in English by the words 'if . . . then'. We will symbolize 'if . . . then' with the arrow '→'.3 Other common forms involving conditionals are 'A only if B', 'A unless B', and 'A provided that B'. In a conditional statement A → B, the first part, A, is called the antecedent, and the second part, B, the consequent.

Symbolizing 'if . . . then . . . ' statements is relatively straightforward, though it should be noted that English allows one to reverse the clauses in such statements, so that:

1. He will have a fit if you come home late tonight.

2. If you come home late tonight, he will have a fit.

are equivalent, and should both be symbolized as L → F.

We'll discuss how to represent statements involving 'only if' and 'unless' in the next Part.

3 Another commonly used symbol is the horseshoe: '⊃'.


EXERCISES

I. Symbolize the following, using the dictionary provided:

DICTIONARY: S: Sam will go/goes/went to the store. D: Dave will go/goes/went to the store. M: Sam will be/is/was out of milk. T: Dave will be/is/was tired. C: The store will be/is/was closed.

1. Sam and Dave both went to the store.

2. Sam went to the store but Dave didn’t.

3. Sam and Dave didn’t both go to the store.

4. Neither of them went to the store.

5. Only one of them went to the store.

6. Sam was out of milk, but neither he nor Dave went to the store.

7. Sam and Dave went to the store, but it was closed.

8. If Sam goes to the store, Dave will too.

9. One of them will go to the store if Sam runs out of milk.

10. Sam will go to the store if it’s not closed.

11. Dave won’t go to the store if he’s tired.

12. If Sam's not out of milk, he won't go to the store, but Dave will.

13. Neither Sam nor Dave goes to the store if it's closed.

14. If the store is open, and he's not tired, Dave will go, provided that Sam goes too.

15. Only one of them will go to the store if Sam runs out of milk, and it won't be Dave if he's tired.

II. Using the dictionary given above, render in plain English:

1. S&¬C

2. (¬M &(S&D))&C

3. ¬(S&D)

4. ¬(S ∨ D)

5. S → D

6. ¬S → D


7. ¬S → ¬D

8. C → (¬S&¬D)

9. (S&¬D) ∨ (D&¬S)

10. (¬M &T ) → ¬(S ∨ D)

11. (T &¬M )&(S&D)

12. T ∨ D

13. S → C

14. ¬(S ∨ D)&M

15. ¬M

SOLUTIONS TO EXERCISES

Exercises, Part I. How would the following expressions read in English?

1. ¬J&S

[Joe didn’t get a raise, but Sam did.]

2. ¬S&¬¬J

[Sam didn't get a raise, and Joe did not fail to get a raise. Equivalently but more simply: Sam didn't get a raise but Joe did.]

3. S&¬S

[Sam got a raise and he didn’t.]

4. J&¬S

[Joe got a raise but Sam didn’t.]

5. ¬J&¬S

[Joe didn’t get a raise, and Sam didn’t either.]

6. ¬(¬J&S)

[It is not true to say that Joe didn’t get a raise but Sam did.]

7. ¬(J&¬S)

[It’s not true that Joe got a raise but Sam didn’t.]

8. ¬(¬J&¬S)

[They didn't both fail to get a raise. Or, equivalently but more simply: At least one of them got a raise.]


II. Symbolize the following (where J, S are as above)

1. Joe didn’t get a raise, and neither did Sam.

[¬J&¬S]

2. Sam got a raise, but Joe didn’t.

[S&¬J]

3. Joe got a raise but Sam didn’t.

[J&¬S]

4. Joe failed to get a raise, but Sam got one.

[¬J&S]

5. Joe failed to get a raise, but Sam didn’t.

[¬J&¬¬S]

6. Both Joe and Sam failed to get a raise.

[¬J&¬S]

7. Neither Joe nor Sam got a raise.

[¬J&¬S]

8. Neither Joe nor Sam failed to get a raise.

[¬¬J&¬¬S]

Exercises (second set), Part I.

1. Sam and Dave both went to the store.

[S&D]

2. Sam went to the store but Dave didn't.

[S&¬D]

3. Sam and Dave didn't both go to the store.

[¬(S&D)]

4. Neither of them went to the store.

[¬S&¬D or ¬(S ∨ D)]

5. Only one of them went to the store.

[(S&¬D) ∨ (D&¬S) or (S ∨ D)&(¬S ∨ ¬D) or (S ∨ D)&(S → ¬D), etc.]

6. Sam was out of milk, but neither he nor Dave went to the store.

[M &¬(S ∨ D) or M &(¬S&¬D)]


7. Sam and Dave went to the store, but it was closed.

[(S&D)&C]

8. If Sam goes to the store, Dave will too.

[S → D]

9. One of them will go to the store if Sam runs out of milk.

[M → (S ∨ D)]

10. Sam will go to the store if it’s not closed.

[¬C → S]

11. Dave won’t go to the store if he’s tired.

[T → ¬D]

12. If Sam's not out of milk, he won't go to the store, but Dave will.

[¬M → (¬S&D)]

13. Neither Sam nor Dave goes to the store if it’s closed.

[C → ¬(S ∨ D) or C → (¬S&¬D)]

14. If the store is open, and he's not tired, Dave will go, provided that Sam goes too.

[(¬C&¬T) → (S → D)]

15. Only one of them will go to the store if Sam runs out of milk, and it won't be Dave if he's tired.

[M → (((S&¬D) ∨ (D&¬S))&(T → ¬D))]

Part II.

1. S&¬C

[Sam went to the store, and it was open (not closed).]

2. (¬M&(S&D))&C

[Even though Sam wasn't out of milk, he and Dave went to the store, but it was closed.]

3. ¬(S&D)

[Sam and Dave didn't both go to the store.]

4. ¬(S ∨ D)

[Neither of them went to the store.]

5. S → D

[If Sam goes to the store, Dave will too.]


6. ¬S → D

[Dave will go if Sam doesn’t.]

7. ¬S → ¬D

[If Sam doesn’t go, Dave won’t either.]

8. C → (¬S&¬D)

[If the store is closed, neither of them will go.]

9. (S&¬D) ∨ (D&¬S)

[Exactly one of them will go to the store.]

10. (¬M&T) → ¬(S ∨ D)

[If Sam's not out of milk and Dave's tired, neither of them will go to the store.]

11. (T&¬M)&(S&D)

[Dave was tired and Sam wasn't out of milk, but they both went to the store anyway.]

12. T ∨ D

[Either Dave is tired or he’ll go to the store.]

13. S → C

[If Sam goes to the store, it will be closed.]

14. ¬(S ∨ D)&M

[Neither Sam nor Dave will go to the store, even though Sam's out of milk.]

15. ¬M

[Sam’s not out of milk.]


PHI 1101: Reasoning and Critical Thinking:

Course Notes Part 13:

Basic Propositional Logic (2): Conditionals

P. Rusnock

Copyright © 2020 Paul Rusnock

BASIC PROPOSITIONAL LOGIC (2): CONDITIONALS

REMINDERS

In the last part, we introduced the concept of a conditional as a proposition of the form 'If A then B', and introduced the arrow, '→', to help us symbolize the forms of such propositions, reading:

A → B

as

If A, then B.

We also noted that English allows us to swap the two clauses in a conditional. That is, we can say 'If A then B' or, equivalently, 'B if A'.

Only if The English language has even more ways to express conditionals, however. First, we sometimes join the word 'if' with 'only', as in the following example:

She will buy the car only if they lower the price.

or, equivalently,

She will only buy the car if they lower the price.

How should we symbolize such a statement? I will give you several ways to get it right.

First, we can try paraphrasing the 'only if' statement, using ordinary 'if . . . then' and negation. I hope you can see that if it's true that she will only buy if the price is lowered, then it is also true that if the price is not lowered, she won't buy. A little reflection should help to convince you that this statement is in fact equivalent to the original statement. In general, statements of the following forms are equivalent:

A only if B.

If not B, then not A.

Now we can symbolize the second form as ¬B → ¬A; and, since the second form is equivalent to the first, this symbolization also works for 'A only if B'.

A second way you can get to a correct symbolization is to consider what must have been the case if she had bought the car. Clearly, if she would only have done so if the price had been lowered, we can conclude that the price must have been lowered. Accordingly,

She will buy the car only if they lower the price.

is equivalent to:

If she bought the car, they must have lowered the price.

And this we can symbolize as B → L. In general, statements of the following forms are equivalent:

A only if B.

If A then B.

so both can be symbolized as 'A → B'.

Notice that 'A if B' is symbolized as 'B → A', while 'A only if B' is symbolized as 'A → B'. Logically, the work that the word 'only' is doing here is to change the direction of the arrow.

Form           Symbolization
A if B         B → A
A only if B    A → B

This gives us a third way to get the symbolization right. Beginning with an 'only if' statement 'P only if Q', first symbolize the corresponding 'if' statement, 'P if Q'; then change the direction of the arrow. For example, starting with:

Smith will win the election only if Jones withdraws.

We consider:

Smith will win the election if Jones withdraws.

This is equivalent to:

If Jones withdraws, Smith will win the election.

Which we symbolize as:

J → S

Finally, we change the direction of the arrow to obtain the symbolization of our original statement:

S → J

A fourth and final technique that is generally useful is to consider a different conditional of the same form, one for which it is easier to check the correctness of the symbolization. Personally, I like to use examples involving plants and water. For example, the following are obviously true when said about most plants:


The plants will die if they don’t get water.

The plants won’t live if they don’t get water.

The plants will live only if they get water.

The plants won’t die only if they get water.

And the following are false:

The plants will live if they get water. (Frost might kill them.)

The plants will live if they don’t get water.

The plants will live only if they don’t get water.

The plants will die only if they don’t get water. (Again, frost)

In most if not all cases, it's easy to figure out when conditional statements about plants and water are true, and when they are false. This makes it easier to check our symbolizations. If the English statement is true, our symbolization, when read back in English, should also be true.

For example, if we tried 'W → L' as a symbolization of 'The plants will live only if they get water', we could notice that the English is true, while the formula 'W → L', when read back in English, says:

If the plants get water, they will live.

which we recognize as false. Since the original statement is true, we can tell that the symbolization is wrong.

Now if we have an example like:

She will only buy the car if they lower the price.

we can write a parallel example beneath it, like this:

The plants will live only if they get water.

And since we can see that the one about plants can be symbolized as

¬W → ¬L

we can obtain the right symbolization for the first statement by doing exactly the same thing, except for replacing 'The plants will live' by 'She'll buy the car' and 'The plants get water' by 'they lower the price'; that is:

¬L → ¬B

Unless The word 'unless' is also often used to make conditional statements. We can use some of the same techniques to make sure we get the correct symbolization for such statements. Let's begin with an example:

You won't get into the concert unless you have a ticket.

As before, we can paraphrase this using 'if . . . then' and negation:

If you don't have a ticket, you won't get into the concert.


which we can symbolize as:

¬T → ¬C

We can also use the technique of checking by finding a parallel statement involving a more friendly subject matter:

You won't get into the concert unless you have a ticket.

The plants won't live unless they get water.

Here, we can see that the second statement should be symbolized as:

¬W → ¬L

and we can verify that, like the English statement, this one is true. Then we simply do the same thing with our original statement, only replacing 'The plants won't live' with 'You won't get into the concert', etc. In this way we arrive again at the statement above.

From our examples, we can see that 'unless' is equivalent to 'if not'. In particular, 'P unless Q' is equivalent to 'P if not Q'. The last form, finally, is equivalent to 'If not Q then P', which we symbolize as '¬Q → P'.

Summing up, here are some common forms of conditionals with their symbolizations:

Statement form    Symbolization(s)
If P then Q       P → Q
P if Q            Q → P
P only if Q       P → Q or ¬Q → ¬P
P unless Q        ¬Q → P
Not P unless Q    ¬Q → ¬P
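The claim that 'P if Q' and 'P only if Q' call for arrows in opposite directions can be seen concretely: with P true and Q false, the two readings disagree. A sketch using the material conditional:

```python
def implies(a, b):
    # Material conditional: A -> B is false only when A is true and B is false.
    return (not a) or b

# With P true and Q false, 'P if Q' (Q -> P) holds but
# 'P only if Q' (P -> Q) fails, so the two forms are different claims.
P, Q = True, False
print(implies(Q, P))  # True
print(implies(P, Q))  # False
```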

CONVERSE AND CONTRAPOSITIVE

Converse Given a conditional

A → B

its converse is the conditional

B → A

When a conditional is true, its converse may or may not be. Here is a pair of examples to prove the point:

Conditional: (T) If Joe is older than Kate, then Kate is younger than Joe.
Converse: (T) If Kate is younger than Joe, then Joe is older than Kate.

Conditional: (T) If you've been in Moosonee, then you've been in Ontario.
Converse: (F) If you've been in Ontario, then you've been in Moosonee.


Biconditional When both a conditional A → B and its converse B → A are true, we may assert a biconditional:

(A → B)&(B → A)

Often, biconditionals are abbreviated with the help of the double arrow '↔':

A ↔ B

Recall that 'B → A' (i.e., 'A ← B') can be read as 'A if B' and 'A → B' as 'A only if B'; this is why biconditionals are often stated in the following form:

A if and only if B.

Or, using 'iff' to abbreviate 'if and only if':

A iff B.

Definitions are often stated as biconditionals. A little reflection should make the reason for this obvious. A standard definition is just a statement indicating that two expressions are synonymous. For example:

A number is said to be even if and only if it is divisible by two.

This tells us that wherever we apply either one of the two expressions 'even', 'divisible by two' to a number, we may also apply the other; hence the biconditional.
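In code terms (a sketch of my own, not course material), a biconditional is just agreement in truth-value, which is why a definition licenses substitution in both directions:

```python
def iff(a, b):
    # A <-> B: true exactly when A and B have the same truth-value.
    return a == b

# 'n is even iff n is divisible by two', with divisibility spelled out
# independently as n // 2 * 2 == n.
for n in range(20):
    assert iff(n % 2 == 0, n // 2 * 2 == n)
print("the two sides of the definition agree on 0..19")
```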

Contrapositive The contrapositive of a conditional A → B is the conditional:

¬B → ¬A

While a conditional and its converse may differ in truth-value, this is never the case with a conditional and its contrapositive. For whenever a conditional is true, its contrapositive is too, and conversely. That is, a conditional is always equivalent to its contrapositive. Here is a pair of examples:

Conditional: (T) If Joe is older than Kate, then Kate is younger than Joe.
Contrapositive: (T) If Kate is not younger than Joe, then Joe is not older than Kate.

Conditional: (F) If you've been in Ontario, then you've been in Moosonee.
Contrapositive: (F) If you haven't been in Moosonee, then you haven't been in Ontario.
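This equivalence, and the non-equivalence with the converse, can be confirmed by brute force over truth-values; a sketch:

```python
from itertools import product

def implies(a, b):
    # Material conditional: A -> B is false only when A is true and B is false.
    return (not a) or b

# A conditional matches its contrapositive on every assignment...
for a, b in product([True, False], repeat=2):
    assert implies(a, b) == implies(not b, not a)

# ...but not its converse: with A false and B true, they disagree.
assert implies(False, True) != implies(True, False)
print("conditional equivalent to contrapositive; converse can differ")
```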


EXERCISES I

Symbolize the following, using the dictionary provided.

DICTIONARY: S: Simon will sing. G: Garfunkel will sing.

1. If Simon sings, Garfunkel will too.

2. Simon will sing if Garfunkel does.

3. If Simon doesn’t sing, Garfunkel will.

4. Simon won’t sing if Garfunkel does.

5. Simon will only sing if Garfunkel does.

6. Simon won’t sing unless Garfunkel does.

7. Simon will sing unless Garfunkel does.

8. Garfunkel will sing if Simon doesn’t.

9. Simon only sings if Garfunkel doesn’t.

10. If Simon doesn’t sing, Garfunkel won’t either.

NECESSARY AND SUFFICIENT CONDITIONS

This seems a good place to discuss the distinction between necessary and sufficient conditions. A is said to be a sufficient condition for B if the presence of A guarantees the presence of B. Since there are only 12 months in a year, for example, the presence of 13 people in a room is a sufficient condition for at least two of them to have been born in the same month.

A is called a necessary condition for B, by contrast, if B cannot be present unless A is. In order to have a wedding, for example, at least two people must be present. The presence of at least two people is thus a necessary condition for a wedding to take place.

The examples we have given so far indicate that there are sufficient conditions that are not necessary (we could have two people born in the same month in a room even if fewer than 13 were present), and also necessary conditions that are not sufficient (a wedding does not take place every time at least two people are present).

This being said, there are conditions that are both necessary and sufficient. A number's being divisible by two, for instance, is both a necessary and a sufficient condition for its being even, and receiving more votes than any other candidate is a necessary and sufficient condition for winning an election in the first-past-the-post system.


The difference between necessary and sufficient conditions is nicely captured by the direction of the associated conditionals. If A is a sufficient condition for B, we can say that if A occurs, B will too, i.e.:

A → B

While if A is a necessary condition for B, we can say that if A doesn't occur, B won't either, i.e.:

¬A → ¬B

As we have seen, this is equivalent to its contrapositive:

B → A

Thus the distinction between necessary and sufficient conditions corresponds to the difference between 'if' and 'only if' among conditional statements, and hence to a simple difference in the way the arrow points. Finally, a condition that is both necessary and sufficient will be expressed by means of a biconditional, e.g.:

A ↔ B

For example:

A candidate wins the election if and only if he or she receives more votes than any other candidate.
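A small numeric illustration (my own example, in the spirit of the divisibility case above): divisibility by 4 is sufficient but not necessary for being even, and the two arrows point exactly as described. A sketch:

```python
def implies(a, b):
    # Material conditional: A -> B is false only when A is true and B is false.
    return (not a) or b

# A: n is divisible by 4; B: n is even.
# Sufficient: A -> B holds for every n checked.
assert all(implies(n % 4 == 0, n % 2 == 0) for n in range(100))
# Not necessary: B -> A fails, e.g. for n = 6.
assert not implies(6 % 2 == 0, 6 % 4 == 0)
print("divisibility by 4 is sufficient but not necessary for evenness")
```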

EXERCISES II

In the following questions, state whether A is a necessary and/or a sufficient condition for B. Provide a brief explanation of your findings.

1. A: a number n is divisible by 4; B: the number n is even

2. A: a bill is passed by the House of Commons; B: the bill becomes law

3. A: a geometrical plane figure has two right angles; B: the figure is not a triangle

4. A: Martha is a parent; B: Martha has a son

5. A: Fred is heavier than George; B: Fred weighs 90 kg and George weighs 88 kg

6. A: The number n is divisible by 3; B: The number n is even

7. A: Plants exist; B: mammals exist

8. A: bears exist; B: mammals exist

9. A: Jones was not convicted of any crime; B: Jones has done nothing wrong.

10. A: Smith and Jones did not both win prizes; B: Neither Smith nor Jones won a prize.


COMMON ARGUMENT FORMS INVOLVING CONDITIONALS

We now turn to some common forms of inference involving conditionals. For starters, note that if the antecedent of a conditional is true and the consequent is false, then the conditional itself is false. For example, if I say:

If my novel is published this year, I'll win the Man Booker Prize next year.

and I later publish the novel but do not win the prize, then my conditional claim is revealed to be false.

Hence, if a conditional is true, it cannot be the case that the antecedent is true and the consequent is false. Thus if one of these two things happens, the other must not. This shows the following two forms of inference to be valid.

Modus ponens deals with the case where A is true, which rules out the falsity of B:

A → B
A
∴ B

It is usually applied so automatically that people are not even aware of having used it.

Modus tollens deals with the case where B is false, which rules out the truth of A, thus guaranteeing the truth of its negation:

A → B
¬B
∴ ¬A

This rule is used in the following inference:

If you had locked the door, it would be closed.
The door isn't closed.
So you didn't lock it.
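Both rules can be checked exhaustively: over all four truth-value assignments, whenever the premises come out true, so does the conclusion. A sketch:

```python
from itertools import product

def implies(a, b):
    # Material conditional: A -> B is false only when A is true and B is false.
    return (not a) or b

for a, b in product([True, False], repeat=2):
    if implies(a, b) and a:        # modus ponens premises: A -> B, A
        assert b                   # ...force the conclusion B
    if implies(a, b) and not b:    # modus tollens premises: A -> B, not-B
        assert not a               # ...force the conclusion not-A
print("modus ponens and modus tollens have no counterexamples")
```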

Recall that negation (¬) simply changes true statements into false ones, and false statements into true ones. With respect to truth and falsity, then, applying negation twice just gets you back where you started. With this in mind, we sometimes tacitly add or drop double negations when applying rules of inference. For example, we will count the following as "sort of" instances of modus tollens:


"Sort of" MT            Actual MT
¬P → Q, ¬Q ∴ P          ¬P → Q, ¬Q ∴ ¬¬P
P → ¬Q, Q ∴ ¬P          P → ¬Q, ¬¬Q ∴ ¬P
¬P → ¬Q, Q ∴ P          ¬P → ¬Q, ¬¬Q ∴ ¬¬P

S OME

COMMON INVALID ARGUMENT FORMS

Common mistakes in reasoning rooted in the use of invalid argument forms

(in the belief that they are in fact valid) are called formal fallacies. We mention

two of them here, both having to do with conditionals.

Affirming the consequent

A→B

B

A

Perhaps because of their similarity to modus ponens arguments, many people

are fooled by arguments of this form. The following example should suffice to

show that the form is invalid:

If this cat just had kittens, then it is a female.

It is a female.

Therefore it just had kittens.

Denying the Antecedent: again, a common fallacy, perhaps due to the similarity between this argument form and the valid modus tollens.

A → B, ¬A ∴ ¬B

A modification of the above example will prove the invalidity of this form:

If this cat just had kittens, then it is a female.

It didn’t just have kittens.

Therefore it’s not a female.
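The same kind of brute-force enumeration exposes both fallacies: each has an assignment that makes the premises true and the conclusion false. A sketch (the helper names are our own, for illustration only):

```python
from itertools import product

def implies(a, b):
    # Material conditional.
    return (not a) or b

def counterexamples(premises, conclusion):
    # All assignments making every premise true but the conclusion false.
    return [(a, b) for a, b in product([True, False], repeat=2)
            if all(p(a, b) for p in premises) and not conclusion(a, b)]

# Affirming the consequent: A -> B, B, therefore A
ac = counterexamples([lambda a, b: implies(a, b), lambda a, b: b],
                     lambda a, b: a)

# Denying the antecedent: A -> B, not-A, therefore not-B
da = counterexamples([lambda a, b: implies(a, b), lambda a, b: not a],
                     lambda a, b: not b)

print(ac, da)  # each fails at A false, B true: a female cat with no kittens
```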


EXERCISES III.

Symbolize the following arguments using the dictionary provided, and identify the form of inference used (either modus ponens, modus tollens, affirming

the consequent, or denying the antecedent—including the “sort of” cases), stating at the same time whether or not the argument forms are valid.

D ICTIONARY: P = We’ll have a picnic. R = It will rain. D = The plants die. W =

The plants are watered.

1. If it rains, we won’t have a picnic. But it will rain. So we won’t have a

picnic.

2. If it rains, we won’t have a picnic. We won’t have a picnic. So it will rain.

3. If it doesn’t rain, we’ll have a picnic. We will have a picnic. So it won’t

rain.

4. If it doesn’t rain, we’ll have a picnic. But it will rain. So we won’t have a

picnic.

5. We’ll only have a picnic if it doesn’t rain. We won’t have a picnic. So it’s

going to rain.

6. We’ll only have a picnic if it doesn’t rain. It’s going to rain. So we won’t

have a picnic.

7. We’ll only have a picnic if it doesn’t rain. It’s not going to rain. So we will

have a picnic.

8. We’ll have a picnic unless it rains. It isn’t going to rain. So we’ll have a

picnic.

9. The plants will die unless they are watered. But they will be watered. So

they won’t die.

10. The plants will die unless they’re watered. But they won’t be watered.

So they’re going to die.


SOLUTIONS TO EXERCISES

Part I.

1. S → G

2. G → S

3. ¬S → G

4. G → ¬S

5. S → G or ¬G → ¬S

6. ¬G → ¬S or S → G

7. ¬G → S

8. ¬S → G

9. S → ¬G or G → ¬S

10. ¬S → ¬G

Part II.

1. A is a sufficient but not a necessary condition for B. Any number divisible by 4 (= 2 × 2) is even but there are even numbers that aren’t divisible

by 4, e.g., 6.

2. A is a necessary, but not a sufficient condition for B. To become law, a

bill must be passed by the House but also by the Senate, and must then

receive Royal assent.

3. A is a sufficient condition for B, as a triangle can have at most one right

angle. However A is not necessary for B, as there are figures that are not

triangles which do not have two right angles, e.g., a regular pentagon.

4. A is a necessary, but not a sufficient condition for B; Martha could not

have a son unless she was a parent, but would still be a parent if she only

had daughters.

5. A is necessary but not sufficient for B. If Fred were not heavier than

George, there is no way they could have the stated weights. Yet Fred

could still be heavier if they had different weights (e.g. Fred 92, George

90).

6. A is neither necessary nor sufficient for B. There are numbers divisible

by two but not by three (e.g., 4) as well as numbers divisible by three but

not by two (e.g., 9).


7. Debatable. It might be argued that A is a necessary condition for B, since

mammals (which do not produce their own food) could not exist unless

organisms that do produce their own food existed. On the other hand,

such organisms might not be plants. We can say in any case that A is

not a sufficient condition for B, since at one point in the past there were

plants but no mammals.

8. A is a sufficient but not a necessary condition for B, since once bears exist

there are mammals, while there could still be mammals (e.g., squirrels)

even if there were no bears.

9. A is neither a necessary nor a sufficient condition for B. Jones may have

done something wrong without being caught or prosecuted, or simply

have done something that is wrong without being illegal, so A is not

sufficient for B. Nor is A necessary for B, since Jones might have done

nothing wrong but have been convicted nonetheless (either wrongly, or

else under an unjust law).

10. A is a necessary, but not a sufficient, condition for B. For A not to hold,

both would have to have won prizes, in which case B would be false. On

the other hand, A can be satisfied even though B isn’t—if, for instance,

Jones wins but Smith doesn’t.

Part III. Symbolize the following arguments, and identify the form of inference

used, stating at the same time whether or not the argument forms are valid.

1. If it rains, we won’t have a picnic. But it will rain. So we won’t have a

picnic.

[R → ¬P, R ∴ ¬P

Modus Ponens, valid.]

2. If it rains, we won’t have a picnic. We won’t have a picnic. So it will rain.

[R → ¬P, ¬P ∴ R

Affirming the consequent; invalid.]

3. If it doesn’t rain, we’ll have a picnic. We will have a picnic. So it won’t

rain.

[¬R → P, P ∴ ¬R

Affirming the consequent, invalid.]

4. If it doesn’t rain, we’ll have a picnic. But it will rain. So we won’t have a

picnic.

[¬R → P, R ∴ ¬P

Denying the antecedent (sort of), invalid.]

5. We’ll only have a picnic if it doesn’t rain. We won’t have a picnic. So it’s

going to rain.

[P → ¬R, ¬P ∴ R
Denying the antecedent (sort of); invalid.] or
[R → ¬P, ¬P ∴ R
Affirming the consequent; invalid.]


6. We’ll only have a picnic if it doesn’t rain. It’s going to rain. So we won’t

have a picnic.

[P → ¬R, R ∴ ¬P
Modus tollens (sort of); valid.] or
[R → ¬P, R ∴ ¬P
Modus ponens, valid.]

7. We’ll only have a picnic if it doesn’t rain. It’s not going to rain. So we will

have a picnic.

[P → ¬R, ¬R ∴ P
Affirming the consequent, invalid.] or
[R → ¬P, ¬R ∴ P
Denying the antecedent, invalid.]

8. We’ll have a picnic unless it rains. It isn’t going to rain. So we’ll have a

picnic.

[¬R → P, ¬R ∴ P

Modus ponens, valid.]

9. The plants will die unless they are watered. But they will be watered. So

they won’t die.

[¬W → D, W ∴ ¬D Denying the antecedent, invalid.]

10. The plants will die unless they’re watered. But they won’t be watered.

So they’re going to die.

[¬W → D, ¬W ∴ D

Modus ponens, valid.]


PHI 1101: Reasoning and Critical Thinking:

Course Notes Part 14:

Basic Propositional Logic (3): Proofs

P. Rusnock

Copyright © 2020 Paul Rusnock

BASIC PROPOSITIONAL LOGIC (3): PROOFS

INTRODUCTION

A VARIETY OF INFERENCE RULES

There are infinitely many valid argument forms (or patterns of inference), and

infinitely many invalid ones, so there is no point in trying to list them all (in

advanced logic courses, we develop general methods for testing many argument forms for validity). Nevertheless, some argument forms (both valid and

invalid) occur so frequently that they have been given special names. We will

present a few of these here. We begin by reviewing two that we’ve already

seen.

Modus ponens (MP): A → B, A ∴ B

Modus tollens (MT): A → B, ¬B ∴ ¬A

To these, we add the following rule, which justifies our chaining conditionals together in certain circumstances.

A → B, B → C ∴ A → C

This rule is called Hypothetical Syllogism or HS for short.

(DN) Adding or dropping double negation Since a proposition A and its

double negation ¬¬A are always equivalent, it is always safe to infer one from

the other, for if one is true, so is the other. We can write this rule of inference as

follows:

A ∴ ¬¬A    and    ¬¬A ∴ A

That is, the inference may go in either direction. This rule justifies inferences such as the following:


Sam did not fail to get a raise.

Therefore, Sam got a raise.

Note that the rule applies not only to simple propositions but also to complex ones. For instance, if (P → Q) is true, then so is ¬¬(P → Q), and conversely. We use the script letter A to indicate that we may apply the rule to any

proposition or formula (and not just a simple one consisting of a single letter).

INFERENCE FORMS INVOLVING CONJUNCTION

Conjunction Our first rule tells us that if two propositions A and B are both

true, then so is the more complex proposition A&B:

A, B ∴ A&B

The order of the premises does not matter: one might equally well conclude

B&A.

Simplification On the other hand, if A and B are true together, then they

must also be true separately. The rule (actually, a pair of rules) corresponding

to this fact is called simplification:

A&B ∴ A    and    A&B ∴ B

FORMS INVOLVING DISJUNCTION

Weakening is the pair of rules:

A ∴ A ∨ B    and    B ∴ A ∨ B

This rule is so-called because the conclusion one reaches is, generally speaking,

logically weaker than the premise. For if we know, for example, that A is true,

we also know that at least one of the propositions A, B is true, but the converse

doesn’t hold.

Disjunctive syllogism

A ∨ B, ¬A ∴ B    and    A ∨ B, ¬B ∴ A

This rule is the basis of arguments which proceed by the elimination of

alternatives, for example:


You’ll either pay me now or pay me later.

You won’t pay me now.

So you’ll pay me later.

Similar rules might be formulated for cases in which there are more than two

possible outcomes, for example: if it’s either A or B or C, but it isn’t A and it

isn’t B, then it must be C. In the memorable phrase of Sherlock Holmes:

When you have eliminated all which is impossible, then whatever

remains, however improbable, must be the truth.

As was the case with modus tollens, we will sometimes tacitly add or drop

double negations when applying the rule of disjunctive syllogism. Thus, for

example, we will count the inferences on the left below as “sort of” instances

of Disjunctive syllogism:

“Sort of” DS                    Actual DS
¬P ∨ Q, P ∴ Q                   ¬P ∨ Q, ¬¬P ∴ Q
P ∨ ¬Q, Q ∴ P                   P ∨ ¬Q, ¬¬Q ∴ P
¬P ∨ ¬Q, P ∴ ¬Q                 ¬P ∨ ¬Q, ¬¬P ∴ ¬Q
¬P ∨ ¬Q, Q ∴ ¬P                 ¬P ∨ ¬Q, ¬¬Q ∴ ¬P

PROOFS

A formally valid inference necessarily preserves truth: if we begin with one

or more propositions A, B, C, . . . , and validly infer a different proposition, M,

from them, then that proposition must also be true provided that all of A, B, C, . . .

are. If we then validly infer yet another proposition N from the larger set of

propositions A, B, C, . . . , M, we know that it must be true if all of A, B, C, . . . , M

are. But since M has to be true whenever all of A, B, C, . . . are, we also know

that N must be true whenever A, B, C, . . . are. Consequently, whatever conclusion we reach from a set of premises by a sequence of valid inferences will

follow from (be implied by) this set of premises, even if other premises, deduced along the way, are used in some of these inferences. This shows that

chains of valid inferences (or, as they are sometimes called, proofs) themselves

constitute valid inferences.

Let us give a simple example. We will show that the conclusion C&D follows from the premises A&B, A → C, B → D by deducing it in a series of

steps using some of the valid forms mentioned above. We begin by listing our

premises, drawing a line under the last one, and drawing a line along the left

side:


1. A&B      Premise
2. A → C    Pr.
3. B → D    Pr.

We now note that A follows from A&B by the rule we called simplification.

We make this inference the next step in our proof, using the notation ‘Simp., 1’

to indicate that we inferred A from line 1 using this rule:

1. A&B      Premise
2. A → C    Pr.
3. B → D    Pr.
4. A        Simp., 1

Next, noting that B also follows from line 1, we add a new line:

1. A&B      Premise
2. A → C    Pr.
3. B → D    Pr.
4. A        Simp., 1
5. B        Simp., 1

We now use modus ponens on lines 2 (A → C) and 4 (A) to infer the conclusion

C, and similarly infer D from B → D (line 3) and B (line 5):

1. A&B      Premise
2. A → C    Pr.
3. B → D    Pr.
4. A        Simp., 1
5. B        Simp., 1
6. C        m.p., 2, 4
7. D        m.p., 3, 5

Finally, we apply the rule called conjunction to lines 6 and 7 to obtain the conclusion C&D:

1. A&B      Premise
2. A → C    Pr.
3. B → D    Pr.
4. A        Simp., 1
5. B        Simp., 1
6. C        m.p., 2, 4
7. D        m.p., 3, 5
8. C&D      conj., 6, 7

What we have here is a complete proof, showing how the conclusion C&D follows from the premises (lines 1–3) via a series of valid inferences.
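Since each step cites a rule and earlier lines, such a proof can be verified mechanically. Here is a minimal sketch in Python of a checker for just the rules used above; the tuple encoding of formulas and the function names are our own invention, not part of the course notation:

```python
def check(step, rule, cited):
    # Verify that `step` follows from the cited formulas by `rule`.
    # Formulas are nested tuples: ('&', 'A', 'B') for A&B,
    # ('->', 'A', 'C') for A -> C, plain strings for letters.
    if rule == 'simp':
        (f,) = cited
        return f[0] == '&' and step in (f[1], f[2])
    if rule == 'mp':
        cond, ant = cited
        return cond[0] == '->' and cond[1] == ant and step == cond[2]
    if rule == 'conj':
        left, right = cited
        return step == ('&', left, right)
    return False

# The eight-line proof above, as (formula, rule, cited line numbers).
proof = [
    (('&', 'A', 'B'),   'premise', []),      # 1. A&B
    (('->', 'A', 'C'),  'premise', []),      # 2. A -> C
    (('->', 'B', 'D'),  'premise', []),      # 3. B -> D
    ('A',               'simp',    [1]),     # 4. Simp., 1
    ('B',               'simp',    [1]),     # 5. Simp., 1
    ('C',               'mp',      [2, 4]),  # 6. m.p., 2, 4
    ('D',               'mp',      [3, 5]),  # 7. m.p., 3, 5
    (('&', 'C', 'D'),   'conj',    [6, 7]),  # 8. conj., 6, 7
]

ok = all(rule == 'premise' or
         check(f, rule, [proof[i - 1][0] for i in cited])
         for f, rule, cited in proof)
print(ok)  # True: every step is licensed by its cited rule
```

A checker like this validates the bookkeeping only; choosing which rule to apply at each step is still up to the person writing the proof.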


This is a formal proof: we derive a sequence of formulas from given premise

formulas using valid forms of inference. But it also provides a proof pattern

that shows the formal validity of countless arguments, e.g.:

The government will pass tax cuts yet unemployment will continue

to increase. If the tax cuts are passed, however, revenues will decrease. On the other hand, if unemployment continues to increase,

expenditures will also increase. So revenues will decrease and expenditures will increase.

Some valid forms of inference:

NEGATIONS
DN: ¬¬A ∴ A        DN: A ∴ ¬¬A

CONJUNCTIONS
Conj.: A, B ∴ A&B        Simp.: A&B ∴ A        Simp.: A&B ∴ B

DISJUNCTIONS
Weak.: A ∴ A ∨ B        Weak.: B ∴ A ∨ B
DS: A ∨ B, ¬A ∴ B        DS: A ∨ B, ¬B ∴ A

CONDITIONALS
MP: A → B, A ∴ B        MT: A → B, ¬B ∴ ¬A        HS: A → B, B → C ∴ A → C


EXERCISES

I. Each of the following questions contains a list of premises and a conclusion.

Show that the conclusion follows from the premises by constructing an appropriate chain of inferences, or proof, as illustrated above.

1. Premises: A → (B&C), ¬(B&C), A ∨ D; Conclusion: D

2. Premises: A → ¬B, A&D, ¬B → C, (D ∨ E) → F ; Conclusion: C&F

3. Premises: A → B, A ∨ C, ¬C; Conclusion: B

4. Premises: A&B, A → C, B → D, ¬(C&D) ∨ E; Conclusion: E

5. Premises: (A ∨ B) → (C ∨ D), A&¬D; Conclusion: C

6. Premises: A&(B&C), A → D, (B∨E) → F ; Conclusion: (C∨G)&(D&F )

7. Premises: U ∨ W, P → R, Q → S, P &Q, (R&S) → (T &¬U ); Conclusion: W &T

8. Premises: (A ∨ B) ∨ C, C → ¬D, ¬A&D; Conclusion: B

9. Premises: A&¬B, (C ∨ A) → (B ∨ D), C → ¬D; Conclusion: ¬C

10. Premises: A, A → C, B → D, ¬C ∨ ¬D; Conclusion: ¬B

II. Symbolize the following arguments, then show that they are valid by constructing proofs of their conclusions from their premises.

1. If Peters went to the lecture, then Quine didn’t. Either Quine went, or

Russell didn’t. If Sellars went, then Russell did. But Peters did go. So

Sellars didn’t.

2. Either Peters or Quine didn’t go to the lecture. If Sellars went, then Quine

did too. Now Peters did go to the lecture. So Sellars didn’t.

3. If Peters had gone to the lecture, then Quine would have as well. Either

Peters or Russell went. If Sellars went, then Quine didn’t. And Sellars

did go. So Russell went.

4. If Peters goes, then Quine will too. And if Quine goes, Russell will be sure

to tag along. But Russell isn’t going. So neither are Peters and Quine.

5. Carnap only goes to the pub if Wittgenstein doesn’t. But Wittgenstein

went to the pub, and so did Neurath. And Neurath never goes to the

pub unless Hahn does too. But whenever Hahn and Neurath both go

to the pub, either Carnap or Schlick goes too. So Schlick was there, but

Carnap wasn’t.


PROOFS BY REDUCTION TO ABSURDITY

Our final topic in this part is a special kind of argument called argument by reduction to absurdity (in Latin, reductio ad absurdum), indirect argument, or sometimes apagogic argument. In such arguments, we add to the premises the

opposite (i.e., the negation) of the proposition we wish to prove. We then proceed to deduce, by valid inference, a proposition which cannot be true (generally speaking, this will be a contradiction, though sometimes arguments which

arrive at an obviously false conclusion are also called arguments by reduction

to absurdity). We then reason as follows: if all our premises (including the assumption) had been true, then (since our reasoning was valid) the conclusion

would also have to be true. But since this conclusion isn’t true, not all of our

premises can be true (i.e., they are inconsistent). It follows that if all the original premises were true, the additional assumption would have to be false. But

then it follows that if all the original premises had been true, the opposite (i.e.,

the negation) of the additional assumption would also have to be true. That is,

the opposite of our assumption follows from the original premises.

Reductio arguments are very common in mathematics. Here is an example:

Theorem: If a square number n² is even, then so is its root n.

Proof. Suppose that n² is even, but that n is not even (this is the opposite of what we want to prove). Then n is odd, and so bigger by one than some even number; we can therefore say that n = 2k + 1 for some number k. Then, doing the algebra:

n² = (2k + 1)² = 4k² + 4k + 1 = 2(2k² + 2k) + 1

That is, n² is one bigger than an even number (namely, 2(2k² + 2k)), and hence is odd. But n² is also even, by assumption. Hence the number n² must be both even and odd (contradiction). This is impossible. So our assumption that n is not even must be false. We conclude that n is even.
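A numerical spot-check is no substitute for the reductio proof, which covers every n, but it is reassuring (a quick sketch):

```python
# Look for a counterexample to the theorem: an odd n whose square is even.
violations = [n for n in range(1, 10_001) if (n * n) % 2 == 0 and n % 2 != 0]
print(violations)  # []: none in the tested range, as the theorem predicts
```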

Here is another example, due to Galileo. Galileo Galilei (1564-1642) is famous for, among other things, having discovered the law of free fall, in particular for the somewhat counterintuitive discovery that bodies of different

weights fall at the same speed. His opponents, following Aristotle, thought

otherwise: they maintained that heavier bodies fall faster than lighter ones, all

other things being equal. Here is a thought experiment Galileo used to refute

them.

Suppose that a lighter body is attached to a heavier body by a rope. If the

lighter body falls more slowly than the heavier one, it will act as a drag on the

heavier one, and the two together will fall more slowly than the heavier body

alone. But if we gradually shorten the rope until the two bodies are touching,

we get a single, heavier body, which should (according to Aristotle) fall more

quickly. So if heavier bodies fall more rapidly than lighter ones, the two bodies


would have to fall both more and less rapidly than the heavy body alone. This

is impossible. So the light and the heavy body must fall at the same speed.

One final example is drawn from ancient Greek philosophy. In the dialogue

entitled Euthyphro, Socrates discusses the nature of piety with a fellow named

Euthyphro, who suggests that pious acts can be defined as those which are

pleasing to the Gods (that is, Zeus, Hera, Poseidon, Athena, Ares, Apollo, et

al.). At first, Socrates interprets this to mean that an act is pious if it is pleasing

to one or more of the Gods, and impious if it is displeasing to one or more of

them. He then notes that, according to what people say, the Gods have many

disagreements: what pleases Zeus may not please Poseidon, and so on. But if

so, on Euthyphro’s definition, the same actions will be both pious and impious,

which is absurd. Hence, Socrates concludes, the proposed definition must be

incorrect.

We can add a rule to accommodate reductio proofs to the simple proof system we used above. According to this rule, we may add an assumption to our

premises. Once we have derived a contradiction, we may conclude the opposite of our assumption. Here is a simple example, showing that ¬A follows

from ¬(A ∨ B).

1. ¬(A ∨ B)    Premise
2. A           assumption for Reductio
3. A ∨ B       Weakening, 2
4. ¬A          RAA, contradiction on lines 1, 3
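The same conclusion can be reached by brute force: the premise ¬(A ∨ B) and the assumption A are true together under no assignment of truth values, which is exactly the absurdity the proof exploits (a sketch; the variable name is ours):

```python
from itertools import product

# Check whether the premise not-(A or B) and the RAA assumption A
# can both be true under some assignment of truth values.
jointly_satisfiable = any((not (a or b)) and a
                          for a, b in product([True, False], repeat=2))
print(jointly_satisfiable)  # False: the assumption leads to absurdity, so ¬A
```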

EXERCISES

Prove the following by reduction to absurdity:

1. Premise: A&¬B; Conclusion: ¬(A → B)

2. Premise: A&B; Conclusion: ¬(A → ¬B)

3. Premises: ¬A&¬B; Conclusion: ¬(A ∨ B)

4. Premises: A&B; Conclusion: ¬(¬A ∨ ¬B)

5. Premises: A → B, C → D, ¬B&¬D; Conclusion: ¬(A ∨ C)

6. Premise: ¬(A ∨ B); Conclusion: ¬(A&B)


SOLUTIONS TO EXERCISES

Exercises I. Each of the following questions contains a list of premises and a conclusion. Show that the conclusion follows from the premises by constructing an appropriate chain of inferences, or proof.

1)
1. A → (B&C)    Pr.
2. ¬(B&C)       Pr.
3. A ∨ D        Pr./ Show D
4. ¬A           MT, 1, 2
5. D            DS, 3, 4

2)
1. A → ¬B          Pr.
2. A&D             Pr.
3. ¬B → C          Pr.
4. (D ∨ E) → F     Pr./ Show C&F
5. A → C           HS, 1, 3
6. A               simp., 2
7. C               m.p., 5, 6
8. D               simp., 2
9. D ∨ E           weak., 8
10. F              m.p., 4, 9
11. C&F            conj., 7, 10

3)
1. A → B    Pr.
2. A ∨ C    Pr.
3. ¬C       Pr./Show B
4. A        DS, 2, 3
5. B        MP, 1, 4

4)
1. A&B             Pr.
2. A → C           Pr.
3. B → D           Pr.
4. ¬(C&D) ∨ E      Pr./Show E
5. A               simp., 1
6. B               simp., 1
7. C               m.p., 2, 5
8. D               m.p., 3, 6
9. C&D             conj., 7, 8
10. ¬¬(C&D)        DN, 9
11. E              DS, 4, 10


5)
1. (A ∨ B) → (C ∨ D)    Pr.
2. A&¬D                 Pr./Show C
3. A                    simp., 2
4. A ∨ B                weak., 3
5. C ∨ D                MP, 1, 4
6. ¬D                   simp., 2
7. C                    DS, 5, 6

6)
1. A&(B&C)           Pr.
2. A → D             Pr.
3. (B ∨ E) → F       Pr./ Show (C ∨ G)&(D&F)
4. A                 simp., 1
5. D                 MP, 2, 4
6. B&C               simp., 1
7. B                 simp., 6
8. B ∨ E             weak., 7
9. F                 MP, 3, 8
10. C                simp., 6
11. C ∨ G            weak., 10
12. D&F              conj., 5, 9
13. (C ∨ G)&(D&F)    conj., 11, 12

7)
1. U ∨ W                Pr.
2. P → R                Pr.
3. Q → S                Pr.
4. P&Q                  Pr.
5. (R&S) → (T&¬U)       Pr./ Show W&T
6. P                    simp., 4
7. Q                    simp., 4
8. R                    MP, 2, 6
9. S                    MP, 3, 7
10. R&S                 conj., 8, 9
11. T&¬U                MP, 5, 10
12. T                   simp., 11
13. ¬U                  simp., 11
14. W                   DS, 1, 13
15. W&T                 conj., 12, 14


8)
1. (A ∨ B) ∨ C    Pr.
2. C → ¬D         Pr.
3. ¬A&D           Pr./Show B
4. D              simp., 3
5. ¬¬D            DN, 4
6. ¬C             MT, 2, 5
7. A ∨ B          DS, 1, 6
8. ¬A             simp., 3
9. B              DS, 7, 8

9)
1. A&¬B                 Pr.
2. (C ∨ A) → (B ∨ D)    Pr.
3. C → ¬D               Pr./Show ¬C
4. A                    simp., 1
5. C ∨ A                weak., 4
6. B ∨ D                MP, 2, 5
7. ¬B                   simp., 1
8. D                    DS, 6, 7
9. ¬¬D                  DN, 8
10. ¬C                  MT, 3, 9

10)
1. A           Pr.
2. A → C       Pr.
3. B → D       Pr.
4. ¬C ∨ ¬D     Pr./Show ¬B
5. C           MP, 1, 2
6. ¬¬C         DN, 5
7. ¬D          DS, 4, 6
8. ¬B          MT, 3, 7

II. Symbolize the following arguments, then show that they are valid by constructing

proofs of their conclusions from their premises.

1. If Peters went to the lecture, then Quine didn’t. Either Quine went, or Russell

didn’t. If Sellars went, then Russell did. But Peters did go. So Sellars didn’t.

P → ¬Q, Q ∨ ¬R, S → R, P ∴ ¬S

1. P → ¬Q    Pr.
2. Q ∨ ¬R    Pr.
3. S → R     Pr.
4. P         Pr.
5. ¬Q        MP, 1, 4
6. ¬R        DS, 2, 5
7. ¬S        MT, 3, 6


2. Either Peters or Quine didn’t go to the lecture. If Sellars went, then Quine did

too. Now Peters did go to the lecture. So Sellars didn’t.

¬P ∨ ¬Q, S → Q, P ∴ ¬S

1. ¬P ∨ ¬Q    Pr.
2. S → Q      Pr.
3. P          Pr.
4. ¬¬P        DN, 3
5. ¬Q         DS, 1, 4
6. ¬S         MT, 2, 5

3. If Peters had gone to the lecture, then Quine would have as well. Either Peters or

Russell went. If Sellars went, then Quine didn’t. And Sellars did go. So Russell

went.

P → Q, P ∨ R, S → ¬Q, S ∴ R

1. P → Q     Pr.
2. P ∨ R     Pr.
3. S → ¬Q    Pr.
4. S         Pr.
5. ¬Q        MP, 3, 4
6. ¬P        MT, 1, 5
7. R         DS, 2, 6

4. If Peters goes, then Quine will too. And if Quine goes, Russell will be sure to tag

along. But Russell isn’t going. So neither are Peters and Quine.

P → Q, Q → R, ¬R ∴ ¬P &¬Q

1. P → Q      Pr.
2. Q → R      Pr.
3. ¬R         Pr.
4. ¬Q         MT, 2, 3
5. ¬P         MT, 1, 4
6. ¬P&¬Q      conj., 4, 5

5. Carnap only goes to the pub if Wittgenstein doesn’t. But Wittgenstein went to

the pub, and so did Neurath. And Neurath never goes to the pub unless Hahn

does too. But whenever Hahn and Neurath both go to the pub, either Carnap or

Schlick goes too. So Schlick was there, but Carnap wasn’t.

C → ¬W, W &N, N → H, (H&N ) → (C ∨ S) ∴ S&¬C


1. C → ¬W              Pr.
2. W&N                 Pr.
3. N → H               Pr.
4. (H&N) → (C ∨ S)     Pr.
5. N                   simp., 2
6. H                   MP, 3, 5
7. H&N                 conj., 5, 6
8. C ∨ S               MP, 4, 7
9. W                   simp., 2
10. ¬¬W                DN, 9
11. ¬C                 MT, 1, 10
12. S                  DS, 8, 11
13. S&¬C               conj., 11, 12

Exercises. Prove the following by reduction to absurdity:

1. Premise: A&¬B; Conclusion: ¬(A → B)

1. A&¬B        Pr.
2. A → B       Assumption for RAA
3. A           simp., 1
4. B           MP, 2, 3
5. ¬B          simp., 1
6. ¬(A → B)    RAA, contradiction on lines 4, 5

2. Premise: A&B; Conclusion: ¬(A → ¬B)

1. A&B          Pr.
2. A → ¬B       Assumption for RAA
3. A            simp., 1
4. ¬B           MP, 2, 3
5. B            simp., 1
6. ¬(A → ¬B)    RAA, contradiction on lines 4, 5

3. Premise: ¬A&¬B; Conclusion: ¬(A ∨ B)

1. ¬A&¬B       Pr.
2. A ∨ B       Assumption for RAA
3. ¬A          simp., 1
4. B           DS, 2, 3
5. ¬B          simp., 1
6. ¬(A ∨ B)    RAA, contradiction on lines 4, 5

4. Premises: A&B; Conclusion: ¬(¬A ∨ ¬B)

1. A&B           Pr.
2. ¬A ∨ ¬B       Assumption for RAA
3. A             simp., 1
4. ¬¬A           DN, 3
5. ¬B            DS, 2, 4
6. B             simp., 1
7. ¬(¬A ∨ ¬B)    RAA, contradiction on lines 5, 6


5. Premises: A → B, C → D, ¬B&¬D; Conclusion: ¬(A ∨ C)

1. A → B        Pr.
2. C → D        Pr.
3. ¬B&¬D        Pr.
4. A ∨ C        Assumption for RAA
5. ¬B           simp., 3
6. ¬A           MT, 1, 5
7. C            DS, 4, 6
8. D            MP, 2, 7
9. ¬D           simp., 3
10. ¬(A ∨ C)    RAA, contradiction on lines 8, 9

6. Premise: ¬(A ∨ B); Conclusion: ¬(A&B)

1. ¬(A ∨ B)    Pr.
2. A&B         Assumption for RAA
3. A           simp., 2
4. A ∨ B       weak., 3
5. ¬(A&B)      RAA, contradiction on lines 1, 4


PHI 1101: Reasoning and Critical Thinking:

Course Notes Part 15:

Inductive and Causal Arguments (1)

P. Rusnock

Copyright © 2020 Paul Rusnock

INDUCTIVE AND CAUSAL ARGUMENTS (1)

INTRODUCTION

In valid deductive arguments, the truth of the premises guarantees the truth

of the conclusion. But many arguments are not like this, among them most of

those in which we draw conclusions from experience. The next two parts look

at three kinds of such non-deductive arguments, namely:

• Inductive generalizations: arguments in which our premises tell us about

a number of things of a certain kind and we draw a conclusion about a

larger number of things of that kind.

• Causal arguments, in which we draw a conclusion stating that one thing

or kind of thing causes another.

• Applications: Arguments in which a general claim is used to support a

conclusion about a particular thing or things of a certain kind.

Here are some examples of arguments of these kinds:

• Inductive generalization: A recent poll of residents of Toronto indicates that

only 30% of those surveyed believe that the Mayor should resign. So the

majority of Torontonians do not think that the Mayor should resign.

• Causal argument: In communities where Fluoride is added to the municipal water supply, the rate of dental cavities is lower than in communities where Fluoride is not added. Also, in communities where Fluoride

is added to the water supply, we find a higher rate of cavities in families

who drink bottled water, which does not contain added Fluoride. Finally,

studies have found a strong positive correlation between the presence of

Fluoride in saliva and the rate at which tooth enamel remineralizes, and

the latter has been associated with a lower incidence of cavities. So there

is some reason to believe that Fluoride prevents cavities.

• Application: On average, Canadian women live longer than Canadian

men. Alice and Ralph, a Canadian woman and a Canadian man,

respectively, are the same age. So Ralph will likely die before Alice does.

In this part, we’ll take a closer look at inductive generalizations. The next part

will deal with causal arguments and applications.


INDUCTIVE GENERALIZATIONS

Many inductive generalizations are of the following form:

x% of things of kind A observed so far have had property P.

So approximately x% of all things of kind A have property P.

Here, based upon what we find in a number of observed cases, we draw a conclusion covering all cases, whether we have observed them or not. When, for

example, we conclude that all (i.e., 100% of) crows are black because all the

ones we have seen so far have been black, we make an inference of this sort.

The observed cases are usually called the sample, while the collection of

things spoken of in the conclusion is called the population. In an inductive generalization, then, we draw a conclusion about a population based on information concerning a sample.
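A small simulation illustrates the idea: a random sample's proportion tends to track the population's. The numbers below (a 30% rate, a population of 100,000, a sample of 1,000) are invented purely for illustration:

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

# A population in which roughly 30% of individuals have property P.
population = [random.random() < 0.30 for _ in range(100_000)]

# Draw a random (hence, we hope, representative) sample and generalize.
sample = random.sample(population, 1000)
estimate = sum(sample) / len(sample)
truth = sum(population) / len(population)

print(round(truth, 3), round(estimate, 3))  # the estimate tracks the true rate
```

The factors discussed below (quality of data, representativeness, sample size) are precisely what can break this tracking in practice.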

A little reflection should suffice to convince you that we constantly rely on

inductive generalizations. Most of our knowledge about the general features

of nature, for example, is derived by means of such arguments. We know that

fire burns, that snow is cold, that leaves turn colour and fall in autumn, that

day follows night, that people grow old and eventually die, that blackflies bite,

and so on, because we and a great many others have had repeated experiences

of fire burning, etc., and have drawn general conclusions from them.

By their very nature, inductive generalizations are almost always fallible,

since the part of the population we have not seen might upset the expectations

we have formed, however reasonably, from looking at a sample.¹ Even if all the

crows we have seen so far were black, the next one might happen to be white,

or green, etc. Even so, inductive generalizations are not all equally likely to

lead us astray: there are better and worse ones. And the following factors play

an important role in determining how reliable an inductive inference will be.

• Quality of data

• Representativeness of sample

• Sample size

QUALITY OF DATA

No matter how good your reasoning is, if the data you begin with are unreliable, you should not be surprised if the conclusions you draw are too. As they

say in computer science:

Garbage in, garbage out.

¹ This will not be the case if the sample includes every object in the population, in which case we

may speak of complete induction. The claim that no twentieth-century US President was born in

South Dakota, for example, can be verified in this way.


Most of you, I expect, will remember taking a high school chemistry class,

where a simple experiment is performed with a lab partner (e.g., using titration

to determine the pH of a solution). You may also remember how different pairs

of students obtained widely different results, even though they were all using

the same method to test the same batch of solution. As we know, people are

more or less skilled or careful, and people have bad days. And even the most

accurate measurements can be recorded or transcribed incorrectly, etc. Equipment can malfunction, as can the computers or software used to record and

analyze data. When people report their own opinions, there are even more

sources of error. In surveys, for example, the wording of a question can significantly affect responses, as can the order in which questions are asked, the

number of choices presented, and so on. The upshot is that reported data cannot simply be assumed to be perfectly accurate.

Bias, whether conscious or unconscious, can also affect the quality of data.

For this reason, scientists have devised elaborate methods of avoiding bias or,

where this is not possible, controlling for it.

When political factors are involved, things get even more complicated. If

you look in the news, for example, you can find various statistics concerning

the Covid-19 pandemic: numbers of reported cases, rates of hospitalization

among those who tested positive, rates of admission to intensive care, numbers

of deaths, mortality rates, and so on. Since most of these figures come from

national or local governments, you should be extremely sceptical about the

quality of this information. Questions to ask include:

• Does the political unit have the resources and medical system required to

undertake widespread testing?

• If so, did they make use of this capacity?

• Are the tests that were used reliable?²

• How widely were they administered?

• Does the government have a track record of putting out false or misleading statements when the truth is politically inconvenient?

We have all the more reason to wonder about actual rates of infection in various jurisdictions given that many of them were slow to react to the pandemic, and also because their response is, or at least should be, focused on saving lives and not simply on obtaining the most accurate estimates possible.

If you follow news media carefully, you will see that some media sources are careful to distinguish cases of infection from reported cases of infection, while others are not. Those that do not make this distinction are simply skating past the question of the quality of data.

2 The tests used to detect Covid-19 infection, like most medical tests, give rise to false positives (indicating infection when it is not present) as well as false negatives (not indicating infection when it is present).
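The footnote's point about false positives and negatives can be made concrete with a little arithmetic. All the rates below (a 2% false-positive rate, a 10% false-negative rate, 1% of the tested population actually infected) are purely illustrative assumptions, not figures for any real Covid-19 test; the sketch just shows how much test error rates can distort reported case counts:

```python
# Illustrative sketch only: every rate here is a hypothetical assumption,
# not real Covid-19 test data.

population = 100_000
prevalence = 0.01           # assume 1% of those tested are actually infected
false_positive_rate = 0.02  # 2% of uninfected people test positive anyway
false_negative_rate = 0.10  # 10% of infected people test negative

infected = population * prevalence
uninfected = population - infected

true_positives = infected * (1 - false_negative_rate)  # infected people caught
false_positives = uninfected * false_positive_rate     # healthy people flagged

reported_cases = true_positives + false_positives
print(f"Actually infected:  {infected:.0f}")
print(f"Reported positive:  {reported_cases:.0f}")
print(f"Share of positives that are real: {true_positives / reported_cases:.0%}")
```

Under these made-up rates, most reported positives would be false alarms, which is one reason the reliability of the tests matters so much when interpreting reported case counts.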


Fraud is also a common cause of unreliable data. In many cases, this is obvious. In some cases, only the results favouring a certain conclusion are recognized as valid; the others may be either suppressed or discounted. Consider, for example, some of the most common methods used to commit electoral fraud: ballots for the "wrong" candidates may be destroyed, or declared "spoiled". If necessary, extra marks may be added to make sure that they are spoiled. And extra ballots may be added to the box to round out the total appropriately. Given this, it is hardly surprising to find an overwhelming majority of the votes deemed valid supporting the dictator in charge of the election.

Similarly, students will sometimes fudge their data to obtain the conclusion they think their teacher wants, throwing out some measurements, altering or even inventing others. A number of prominent cases show that more than a few professional scientists are also prepared to commit fraud.

Could someone with a financial or other interest deliberately falsify data? You bet. This is why you always do well to consider carefully the source of the data used in inductive arguments.

As we noted previously, bias can also be unintentional and even unconscious. A measuring instrument that is not calibrated correctly, for example, will predictably produce skewed results, and this can easily happen without our noticing it. Furthermore, the expectations of those performing measurements or making observations can influence their perceptions. In the early days of the microscope, for instance, people imagined they saw all sorts of marvellous things when they looked through the lens.3

REPRESENTATIVENESS OF THE SAMPLE

When a sampling method is random, every individual in the population has an equal chance of being selected. Most sampling methods used in actual life do not measure up to this ideal. As a result, there is an even greater risk that what we find in the sample will not accurately reflect the real situation. We call such samples unrepresentative, biased, or skewed.

Suppose, for example, that a professor of education wants to do research on student engagement, and has developed a questionnaire for this purpose. It might seem that nothing could be easier than to find a suitable sample. One way would be to ask other professors to distribute the questionnaire in their classes. Another would be to send it by e-mail to all the students enrolled at the university. Yet both of these methods would be flawed, because the possibility cannot be discounted that less engaged students would be less likely to be present in class and less likely to respond to such a survey, thus skewing the results.

Many opinion polls are conducted by telephone. This method can produce a biased sample for a number of reasons. First, some people do not have phones at all, and hence would never be sampled. Others who do have phones may not answer, or may refuse to respond to the poll if they do. It is hardly obvious that those who do answer will be representative of the general population in their responses to the questions asked in a given poll.

3 See, e.g., http://www.wellcomecollection.org/full-image.aspx?page=999&image=a-monster-soup.

Still worse are self-selected samples. For example, a web-site may provide readers with the opportunity to participate in a poll. Only those who choose to do so are sampled, and we have every reason to fear that they won't be representative of the general population (not to mention the possibility of some people taking the poll repeatedly). In self-selected samples of this sort, it only costs time to participate. In others, money may also be required. Needless to say, those willing to pay to participate in an opinion poll make up a very special subset of the general population.

Though, as noted above, most sampling methods fall short of the ideal of perfect randomness, some are more flawed than others, and more likely to result in weak inductive arguments. So when evaluating an inductive generalization, you should think carefully about possible sources of bias in sampling, and adjust your estimate of the strength of the argument accordingly. Similarly, when thinking about drawing a conclusion yourself, you should take care not to claim more than is justified given the sampling method that produced the data.

SAMPLE SIZE

The size of the sample, finally, is always an important factor to consider when assessing the strength of an inductive generalization. All other things being equal, a larger sample makes for a stronger argument. This being said, the improvement is not directly proportional to the increase in the sample size. After a while, the benefit of increasing the sample size by a constant amount becomes progressively smaller. And at a certain point, it can become prohibitively expensive to attain a significant increase in accuracy.

When reputable firms report the results of public opinion polls, they will indicate the number of people surveyed as well as an estimate of the survey's reliability. These usually take the following form:

The results are deemed to be accurate to within ±x% nineteen times out of twenty.

In a typical poll of Canadian public opinion, for example, the sample size is somewhere between 1000 and 2000, and the results are claimed to be accurate to within 3 or 4 percent nineteen times out of twenty.
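The diminishing return from larger samples can be seen in the standard margin-of-error formula. As a rough sketch (assuming a simple random sample and the worst case of a proportion near 50%; real polling firms make further adjustments), the 95% margin of error is about 1.96 times the standard error of the sample proportion:

```python
import math

def margin_of_error(n: int) -> float:
    """Approximate 95% margin of error for a proportion near 50%,
    assuming a simple random sample of size n."""
    return 1.96 * math.sqrt(0.25 / n)

# Doubling the sample only shrinks the margin by a factor of about 1.4
for n in (500, 1000, 1500, 2000, 4000):
    print(f"n = {n:5d}: ±{margin_of_error(n):.1%}")
```

Note that going from 500 to 1000 respondents buys more than a full percentage point of accuracy, while going from 1000 to 2000 buys less than one; this is the "progressively smaller" benefit described above.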

You should not hurry past this claim. What it says is that there is a 95% chance that the results of the poll accurately reflect the true situation to within x%. Since unlikely things do sometimes occur, you should not entirely discount the possibility that this particular poll does not attain that degree of accuracy. For this reason, it is always best to look at several polls on the same subject, and even at aggregate polls, where the results of different polls are combined.

The other important thing to note is that even if the poll were accurate to within x%, the true value could still depart significantly from the reported one. Put otherwise, what the poll provides is a range of values, rather than one precise value. Suppose, for example, we read a poll of voting preference in Ontario that tells us that the Progressive Conservatives (PCs) have 35% support and the New Democrats (NDP) 32%, where the poll is claimed to be accurate to within ±4% nineteen times out of twenty. Even on the most likely assumption that the actual numbers are within ±4% of the sample numbers, the poll just tells us that support for the PCs is between 31 and 39%, while support for the NDP is between 28 and 36%. In particular, we do not have enough information to say that the PCs enjoy more support than the NDP. All the same, if you pay attention you will find quite a few news reports where just that sort of mistaken inference is made.
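The PC/NDP example can be checked in a few lines. This is just the arithmetic from the paragraph above, treating each reported figure as the centre of a ±4-point interval:

```python
def interval(share: float, margin: float) -> tuple[float, float]:
    """Range of plausible support given a point estimate and a margin of error."""
    return (share - margin, share + margin)

pc = interval(35.0, 4.0)   # Progressive Conservatives
ndp = interval(32.0, 4.0)  # New Democrats

# If the low end of the PC range falls below the high end of the NDP range,
# the intervals overlap and the poll alone cannot establish a PC lead.
overlap = pc[0] <= ndp[1]
print(f"PC support: {pc[0]:.0f}-{pc[1]:.0f}%, NDP support: {ndp[0]:.0f}-{ndp[1]:.0f}%")
print("Lead established:", not overlap)
```

Since 31-39% and 28-36% overlap, the honest conclusion is "too close to call", not "the PCs are ahead".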

COMMON FLAWS OF INDUCTIVE GENERALIZATIONS

Some mistakes made in inductive generalizations are so common that they have been given special names. We consider a few of these here.

To begin with, we have a deeply ingrained tendency to think that there must be a pattern to events, a tendency that can lead us to find order when the data indicate no such thing (the predictable world bias). This inclination is no doubt the ultimate source of many of the errors we make in inductive generalizations. It should be resisted. For even if nothing is completely random, in the vast majority of cases we will never be in a position to know whatever order and structure there is.

One prominent type of failure in this respect is called the Texas sharpshooter fallacy, in memory of the mythical gunman who shot repeatedly at the side of a barn and then painted a bull's-eye where a number of bullet holes were clustered purely by chance. In real life, one encounters this fallacy when people tinker with raw data until they find a way to make them look significant, and then stop.

For example, suppose we obtain information concerning the home addresses of people who have developed cancer in a given city over the last ten years. We then check to see if there is any significant difference in the rates of cancer between those who live within a certain distance of high-voltage power lines and those who live farther away. By varying the distance, we might well find one for which there is a significant difference. At this point, many people would be ready to conclude that this difference proves that it's dangerous to live close to high-voltage wires. They shouldn't, any more than we should consider the Texas gunman an excellent shot. For the distance, like the location of the bull's-eye, was chosen precisely because it yielded a significant result. But the clumping of cancer cases, like that of bullet holes, can and does occur purely by chance, and the existence of some distance that produces a significant difference is to be expected. This is not to mention even more serious problems involved in lumping all the different forms of cancer together, and other problems besides.
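The sharpshooter's trick is, at bottom, a multiple-comparisons problem: every extra cutoff distance tried is another chance for a fluke. A rough sketch of the arithmetic (assuming, unrealistically, that each cutoff gives an independent test at the 5% significance level; real cutoffs overlap, so this overstates the effect somewhat, but the moral stands):

```python
# If each candidate cutoff had an independent 5% chance of looking
# "significant" by luck alone, trying many cutoffs makes a spurious
# "finding" almost inevitable.
def chance_of_false_alarm(k: int, alpha: float = 0.05) -> float:
    """Probability that at least one of k independent tests fires at level alpha."""
    return 1 - (1 - alpha) ** k

for k in (1, 5, 20, 50):
    print(f"{k:2d} cutoffs tried: {chance_of_false_alarm(k):.0%} chance of a spurious 'finding'")
```

With twenty cutoffs the chance of at least one accidental "significant" difference is already well over half, which is why a distance chosen after looking at the data proves nothing by itself.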

When people favour a certain conclusion, they may be more likely to notice individual cases that favour that conclusion and to overlook those that count against it. This effect is known as the confirmation bias. When this is done deliberately, and data are selected and presented precisely because they support a given conclusion, we encounter the fallacy of cherry picking.

One source of unconsciously biased sampling is the conspicuousness or salience of certain events or things. We have a natural tendency to remember dramatic or shocking events while forgetting the less exciting ones. For instance, plane crashes, though relatively uncommon, are quite spectacular. Automobile crashes, by contrast, tend to receive less attention, despite the fact that they are far more common and often just as deadly. Perhaps this is why some people who are terrified of getting on an airplane don't think twice before driving a car, perhaps even while talking on a cell phone, texting, etc. For similar reasons, people may easily be convinced that crime rates are increasing even when they aren't, because the crimes that are committed are prominently discussed in the news, while the absence of crime is rarely remarked upon.

Another very common source of bias is a preference for the data that are easiest to obtain (the availability bias). A researcher, for example, may prefer to look at trees from his truck rather than take a long walk through woods teeming with biting flies (practising roadside forestry). When they can, many people will rely entirely on data they can find with a simple internet search. The vast majority of people rely on a small number of sources for news. It should be obvious from the start that these are not ideal sampling methods.

With respect to the size of the sample, two flaws are usually singled out. When a conclusion is drawn based on a sample that is too small to support it, we speak of a hasty generalization. There is also the fallacy of apriorism, where the sample size is the smallest possible, namely, zero. This is, unfortunately, the sample size favoured by many people on certain issues, nicely summed up in the famous quote:

My mind's made up. Don't confuse me with facts.

Finally, since inductive generalizations are by their nature fallible, it is important not to regard their conclusions as definitive. If new data come forward which cast doubt on a conclusion we drew previously, we should be prepared to revise it. Indeed, in most cases we should continue to seek new data. This can lead to surprises.4

EXERCISES

I. Comment on the strengths and weaknesses of the sampling methods described below.

1. Measuring public opinion by means of a voluntary internet poll posted on the website of the Ottawa Sun.

2. Looking at a dozen or so social media sites to determine how the provincial government is perceived by the voting public.

4 For an interesting discussion of this and some related topics, see J. Lehrer, "The truth wears off," New Yorker, December 13, 2010.


3. Comparing the time devoted to various topics on the television newscasts of the national networks in Canada to determine the most discussed Canadian news stories of 2013.

4. Estimating the incidence of Attention Deficit Hyperactivity Disorder in children worldwide by looking at data from developed countries such as the USA, Canada, or Japan.

5. Randomly surveying 1500 Ontario residents aged 45-50 by means of personal interviews in order to determine how many people between these ages in Ontario (a) have saved money for retirement; (b) have cheated on their taxes.

II. Discuss the strengths and weaknesses of the following arguments, which involve inductive reasoning.

1. Police reports indicate that the number of robberies in Ottawa increased sharply in February 2011. We may conclude that the number of criminals engaged in such robberies also increased sharply at that time.

2. Over her four years of study in the Faculty of Engineering at the University of Ottawa, Yasmina has had 35 male instructors and only 5 female instructors. She concludes that a solid majority of instructors at the University of Ottawa are male.

3. A survey conducted by a Canadian polling firm indicated that while 55% of Americans surveyed said they did not object to terrorism suspects being tortured, an even greater proportion (57%) said that they did not object to the use of enhanced interrogation techniques on terrorism suspects. The same poll indicates that while torture of terrorism suspects was said to be always justified by 13% of those polled, enhanced interrogation techniques were deemed always justified by 26%. Since 'torture' and 'enhanced interrogation techniques' are just two names for the same thing, this shows beyond a doubt that euphemisms do make a difference to public opinion. (The survey asked the questions to a randomly selected group of more than 1000 Americans; the results are claimed to be accurate to within plus or minus 3.5% nineteen times out of twenty.)5

4. As the deadline for RRSP contributions approaches, many financial institutions advertise their investment products. Prominently featured are the performance figures for the previous year. This is obviously important information. For someone making a contribution, it makes sense to go with the fund that earned the most money last year, right?

5. I have used the library every Friday night for the past three years while carrying out my research, and I have consistently found that most of the people working there are part-time workers, who don't know enough to help researchers. What is more, there seem to be very few people working in the library, and when I have tried to complain to the management, I have had a hard time finding someone. The library is obviously understaffed and poorly run.

5 Angus Reid, as reported by UPI, 24 February, 2010: http://www.upi.com/Top News/US/2010/02/24/.

SOLUTIONS TO EXERCISES

PART I.

1. Voluntary polls, to begin with, do not provide a random sample, since only those who decide to answer them are counted. We might expect, for example, that those with strong opinions on the matter will be overrepresented, as will those who have more time on their hands to bother with such things. Both are potential sources of bias. A poll from a single newspaper is also problematic, since different newspapers tend to have different readerships, so that we would only be obtaining a non-random subset of a non-random subset of the public. This method is therefore highly unreliable.

2. To begin with, much would depend upon which sites were consulted. If, for example, we checked Facebook pages devoted to opposing a provincial government initiative, we shouldn't expect to obtain a random sample of public opinion. Newspaper comments sections are often cluttered up with contributions of paid commenters, so they too can be unreliable reflections of what people actually think. There are many people who do not use social media at all, who would not be sampled. Finally, the sample size is quite small. On the whole, this would not be a good sampling method.

3. This would be a fairly good sample for this specific question. It could be made better by including local stations and other forms of media, however.

4. This sampling method would have advantages as well as drawbacks. On the plus side, developed countries tend to have more extensive medical services, so we might expect the number of reported cases of ADHD to be closer to the actual number than in countries where fewer children are seen by medical personnel. On the minus side, there might be factors present in developed countries that favour the development of ADHD. If so, and the incidence is lower in less developed countries, the sample would not accurately reflect the global population. Finally, the particular choice of developed countries might make a difference, since rates of diagnosis could vary considerably from one to another. On the whole, then, this is a poor method of sampling.


5. On question (a), this sampling method would be fairly good, though there are other, better ways of getting information concerning this question (e.g., statistics on RRSP contributions, pension plans, etc.). This method would be less reliable for question (b) since, in a personal interview, we have reason to fear that people might not tell the truth.

PART II.

1. There are two concerns here. First, as unlikely as it seems, it might be that the number of reported robberies increased sharply even though the actual number did not. A more significant flaw is the assumption that an increase in robberies indicates a corresponding increase in robbers. For it could well be that a few robbers were very active. On the whole, then, this is a weak argument.

2. A very weak argument, because the sample includes only the professors who taught a student enrolled in an engineering program. Since information on all professors is readily available, moreover, there is no need to base an estimate on such a sample.

3. Two sets of results are reported here. The first does not support the conclusion, since, with the margin of error, support for torture likely lies between 51.5% and 58.5%, while support for enhanced interrogation techniques likely lies between 53.5% and 60.5%. Thus there is not enough information to say that one receives more support than the other. The second set of results is significant even with the margin of error, and thus supports the conclusion. This being said, it provides only limited support for that conclusion, since the poll is only claimed to be accurate nineteen times out of twenty, and also concerns only one comparison of attitudes involving a euphemism. The argument would be stronger if other polls (on the same and on related questions) produced similar results.

4. This is a very poor basis for drawing conclusions. Here, the main issue is sample size: the results from one year may not accurately reflect longer-term performance, and are also unreliable as an indicator of what will happen in the near future. Still, the fact that such numbers are widely used in advertising suggests that many people can be convinced by such poor arguments.

5. The sample here is biased, since the observations were all made on Friday nights, when one would expect the regular staff not to be working, and fewer employees to be working overall. The argument is consequently a weak one. Observations made at various other times during the week would be required in order to support the conclusion adequately.


PHI 1101: Reasoning and Critical Thinking:
Course Notes Part 16:
Inductive and Causal Arguments (2)

P. Rusnock
Copyright © 2020 Paul Rusnock

INDUCTIVE AND CAUSAL ARGUMENTS (2)

CAUSES AND EFFECTS

Much of what we know, or want to know, or claim to know, concerns causes and effects. We know, for example, that a hammer blow will drive in a nail or hurt our thumb, that the Moon's gravity is responsible for the tides, that smoking can cause lung cancer, and so on. We would like to know what causes certain diseases, whether there are any general causes of poverty, of societal problems of various sorts, and so on.

Knowledge of causes is valued because it brings understanding and, in some cases at least, the possibility of changing things for the better. But such knowledge is in many cases hard to come by: it is not a simple matter to produce a convincing proof that one thing causes another, and easy enough to make mistakes in this business. But we are an optimistic species on the whole, and often expect that it will be fairly easy to know the things we want to know, in particular, what makes things tick, their causes and effects. Not surprisingly, we are more often than not mistaken in such judgments, especially with respect to the confidence with which we make them.

We saw above how difficult it can be to produce a truly convincing inductive generalization. Making a solid case for causal claims is even harder, for in their case we do not merely need to show (by inductive generalization) that A and B are connected, but also that this connection is not merely accidental. For in any concrete case, there will be many things that go along with both A and B. To single out A as the cause of B, we have to convince ourselves that even if those other things hadn't been present, A would still have brought about B.

CORRELATION AND CAUSATION

We speak of a positive correlation between events, features, or objects of kinds A and B when the presence of B is more likely given the presence of A. For instance, one study reported that approximately 17% of men who smoke regularly, but only about 1.5% of those who do not, develop lung cancer. The chances that a randomly selected smoker will develop lung cancer are thus considerably (more than ten times) higher than the chances that a randomly selected non-smoker will. Thus we have a positive correlation in men between smoking and lung cancer. We speak of a negative correlation, by contrast, when the presence of A makes the absence of B more likely. The hotter it is outside, for example, the fewer people go skiing. Temperature is thus negatively correlated with the number of skiers. In a great many cases, of course, we have no evidence of any significant correlation whatsoever.
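The smoking figures above translate directly into a relative risk, one standard way of putting a number on a positive correlation. The 17% and 1.5% rates are the ones reported in the text; the code simply does the division:

```python
p_cancer_given_smoker = 0.17      # reported rate among men who smoke regularly
p_cancer_given_nonsmoker = 0.015  # reported rate among men who do not

# Relative risk: how many times more likely the outcome is given A than given not-A.
# A value above 1 indicates a positive correlation; below 1, a negative one.
relative_risk = p_cancer_given_smoker / p_cancer_given_nonsmoker
print(f"Relative risk of lung cancer, smokers vs non-smokers: {relative_risk:.1f}")
```

A relative risk of about 11 is what the text means by "more than ten times higher"; remember, though, that this number by itself establishes only correlation, not causation.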

If A is a cause of B, then we should expect that the presence of A will make the presence of B more likely. Thus in order to establish causation, we must first check to see whether there is any correlation. But while the failure to find any positive correlation makes a good case against causation, the presence of a positive correlation is not enough by itself to prove causation. The size of a person's vocabulary, for example, is positively correlated with his shoe size, but this hardly proves that learning new words causes your feet to grow.

In the most straightforward cases, we may find that there is a positive correlation between A and B:

• Because A is a cause of B;
• Because B is a cause of A;
• Because some third thing causes both A and B; or
• By pure chance.

The first two options show that even when a correlation does exist on account of a causal relationship, it is possible to get things backwards. Finding a correlation between insomnia and depression might lead us to conclude that depression causes insomnia (i.e., when people are depressed, this prevents them from sleeping), when in fact the causation might just as easily run the other way.

The third possibility is easily illustrated by a familiar example. My neighbour's thermometer goes up and down in parallel to mine. There is thus an almost perfect correlation, but obviously my thermometer going up or down doesn't cause his to do the same. Rather, a common cause, namely, the temperature, is responsible for the correlation.

The last possibility, finally, is too frequently neglected. Many correlations do occur by pure chance. For instance: in the 1990s, the cities of Kitchener and Waterloo in Ontario had populations of roughly the same age composition. Waterloo had fluoridated water, while Kitchener did not. About twice as many people live in Kitchener as in Waterloo, so we should expect the number of babies born each year in Waterloo to be about one-half the number of those born in Kitchener. Yet, year in and year out, fewer than one-tenth as many babies were born in Waterloo. There was thus a very strong negative correlation between fluoridation and the number of births in these two cities. Yet it existed purely by chance.

By itself, then, mere correlation is not enough to establish causation. What is? Two things are generally thought to be highly desirable in this connection. First, we must make every reasonable effort to rule out other possible causes of a given effect. In some cases, this can be done by running a series of controlled experiments; in others, by collecting additional data or reanalyzing existing data.

Second, unless we think that the causal relation in question is fundamental, we should try to discover more basic causes that give rise to the observed relation. In the case of smoking and lung cancer, for example, it was found that some of the chemicals present in cigarette smoke had been observed to react with DNA, and to cause genetic changes which had in turn been associated with cancer. The discovery of this mechanism strengthens the case for causation by showing how more generally operative and better confirmed causes might have combined to give rise to this particular effect.

Of course, neither of these procedures will ever result in complete certainty: such causal arguments, like all non-deductive arguments, are fallible. On the other hand, there are plenty of cases (smoking and lung cancer being just one) where the evidence in favour of causal links is strong enough to justify acting on the assumption that they do exist. So we should be careful to distinguish cases where there is insufficient proof of causation from those where there is substantial, but not absolute, proof. In both cases, we might say that proof is lacking, or that the science is not settled. But, given that absolute proof is never to be had, such a claim will usually only be noteworthy if the former is meant.

CONTROLLED EXPERIMENTS AND BLINDING

The history of science bears witness to many of the subtleties involved in trying to verify that causal links exist. A first, important step was the development of the concept of a controlled experiment. In these experiments, a sample, or population, is divided into two groups, one of which is treated in a certain way (e.g., given a certain medication), while the other (the control group) is simply left alone. Afterwards, we compare the condition of the two groups, to see if there is any significant correlation between treatment and outcome. For example, a group of people suffering from arthritis might be divided into two, and half given an experimental drug. If it turns out that many of the people receiving the drug report feeling less pain, we might want to conclude that the drug works. But if a similar proportion of the control group also report similar improvement, we would not be tempted to draw that conclusion. The control group thus acts as a check on our natural enthusiasm to leap to conclusions.

So far, so good. Unfortunately, however, the data we obtain in this way might not be good enough, on account of the so-called placebo effect. It turns out that people who believe they are receiving treatment often report feeling better, even when they receive no treatment at all. So even if there were a significant difference between the treatment and control groups, we might still have grounds to doubt that the people in the treatment group are better because of the medicine.

The response to this problem was the blind or masked controlled experiment, where the participants in the study do not know whether they belong to the treatment or to the control group. In the case of drug studies, for example, the control group might be given a pill that looks and tastes just like the pill given to the treatment group, but one that contains only inert ingredients. Since neither group knows whether or not it has received treatment, it is reasoned, the placebo effect can be discounted.

Alas, even blind controlled trials have been found to be unreliable in some cases, because those who conduct the experiments still know who is really receiving treatment and who is in the control group, and they can affect the data in a variety of subtle ways, for example, by giving cues to the participants, or by showing bias (whether conscious or unconscious) in the recording and interpretation of data.

These problems are addressed in a double-blind experiment, where neither the experimenters nor the participants know who belongs to which group. A further refinement is a so-called triple-blind experiment, where the data obtained in a double-blind experiment are analyzed by someone who does not know which group is which (they are simply called, say, group A and group B).
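The logic of a placebo-controlled comparison can be sketched in a simple simulation. Everything here is a made-up model (the BASE, PLACEBO, and DRUG rates are assumptions, not data from any real trial): both groups get the placebo boost, so subtracting the control group's improvement rate from the treatment group's recovers the drug's real contribution.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

# Hypothetical rates, chosen purely for illustration:
BASE = 0.20     # chance of improvement with no pill at all
PLACEBO = 0.15  # extra chance of *reported* improvement from taking any pill
DRUG = 0.10     # extra chance of improvement from the active ingredient

def improved(gets_drug: bool) -> bool:
    """Simulate one participant; the placebo boost applies to everyone who gets a pill."""
    p = BASE + PLACEBO + (DRUG if gets_drug else 0.0)
    return random.random() < p

n = 10_000
treatment = sum(improved(True) for _ in range(n)) / n   # ~0.45
control = sum(improved(False) for _ in range(n)) / n    # ~0.35

print(f"Treatment group improved: {treatment:.1%}")
print(f"Control (placebo) group improved: {control:.1%}")
print(f"Apparent drug effect: {treatment - control:+.1%}")
```

In this toy model, comparing the treatment group against no baseline at all would suggest a 45% "success rate", while the comparison against the placebo control correctly isolates an effect of about 10 points, which is the whole point of the design.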

COMMON MISTAKES IN CAUSAL REASONING

As noted above, we human beings are quick to assume we know the causes of things, even when the evidence we possess is completely insufficient to justify our conclusions. Quite commonly, for example, people latch onto a personal experience they and perhaps several other people have had, and then leap immediately to the conclusion that there is a causal connection. Parents who observe that their child's earache disappeared soon after they gave her antibiotics may conclude that the antibiotic caused the infection to go away. Others, noting that symptoms of autism appeared soon after their child was vaccinated, have concluded that the vaccines caused the autism. When large windmills are installed, people living nearby may conclude that some health problems they encounter soon afterwards were somehow caused by them. And so on. To draw such conclusions is to commit the fallacy called post hoc, ergo propter hoc (after this, therefore because of this).

While understandable, these conclusions are in no way justified. For personal experiences of this sort are not even sufficient to establish a correlation, still less causation. Often enough, the testimony people rely on is cherry-picked, or interpreted so as to commit the Texas sharpshooter fallacy. This being said, such personal experiences should not simply be discounted either. Anecdotal evidence of this kind is sometimes the first indication we obtain of causal connections. And just because it fails to prove that there is a causal connection, we should not conclude that we have proof that there is no causal connection. The absence of proof is not a proof of absence. This, in turn, is not to say that every case where anecdotal evidence is presented warrants the expenditure of scarce resources on studies aimed at discovering whether or not there is a connection.1

Even when there is a genuine correlation between two things, it may, as noted above, exist for a number of reasons. We can err by assuming that causation must exist when the correlation is there by pure chance; by assuming that causation runs one way when in fact it runs the other (confusing cause and effect); or by overlooking the possibility that a third thing is responsible for both A and B (a common cause).

We may also go wrong by neglecting the possibility of mutual causation or causal loops. If we just think about hammers and nails and the like, it’s hard to see how causation could be mutual, A causing B and B also causing A. But in many situations where we speak of cause and effect, this is exactly what occurs. Take, for example, the relation between overeating and not getting enough sleep. Overeating can produce indigestion, which can certainly interfere with our sleep. On the other hand, it appears that lack of sleep can also cause people to eat more. As a consequence, the assumption that causation can only be one-way is just as mistaken as the assumption that correlation must always be due to a causal connection.

1 An instructive recent case is the “liberation therapy” for Multiple Sclerosis championed by Dr. Paolo Zamboni.

It’s Simple (only it isn’t) Finally, this seems a good place to issue a general warning about drawing conclusions about causes and effects in complex situations or systems (e.g., ecosystems, human societies, economic systems). It is this: it is hard even for highly intelligent, careful people to make accurate judgments about how such systems work. In fact, the people who have studied such systems the most are often the least confident in making such claims. All the same, there is rarely a shortage of people willing to say, loud and clear, that it’s all very simple (and hence that there is a simple solution).

Is there a problem with crime, for example? Well, it’s all very simple: we don’t punish criminals severely enough. Persistently high unemployment and depressed wages? The greed of the wealthy explains it all. Why did the teenage pregnancy rate go up? Salacious music videos. Why did Detroit go bankrupt? Unions. No, corrupt city government. No, greedy corporations. Etc., etc. As H. L. Mencken wrote:

For every complex problem there is an answer that is clear, simple, and wrong.

EXERCISES I

Discuss the strengths and weaknesses of the following arguments, which involve inductive and causal reasoning.

1. Data collected by Statistics Canada consistently show that, on average, people with university degrees have significantly higher incomes than people without such degrees. The difference is large enough, moreover, to more than make up for the earning potential lost during the years of study. In the long run, university graduates simply earn more than the rest of the population, and also pay more taxes. It is clear that university education creates economic benefits both for individuals and for the country as a whole.

2. Data collected by the National Institutes of Health in the USA indicate that, on average, poor people have significantly more health problems than people who are not poor. It should be obvious from this that the stress of living in poverty makes people unhealthy.


3. A recent study found that people who had sex four or more times a week had, on average, significantly higher salaries than those who had sex less frequently. The Globe and Mail reported on the study under the headline: “Want a big…