and we can compute using Bayes' theorem
P(D|P) = 60%    P(D|¬P) = 1%
P(¬D|¬P) = 99%    P(¬D|P) = 40%
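These conditional probabilities determine the joint distribution only up to the marginal P(P); the cell values below are one reconstruction (an assumption, chosen with P(P) = 15%) that reproduces all four entries and can be used to check them numerically:

```python
# One joint distribution consistent with the conditional probabilities above
# (assumed reconstruction with P(P) = 0.15; not stated in the text).
joint = {
    ("D", "P"): 0.0900,    # disease, test positive
    ("D", "~P"): 0.0085,   # disease, test negative
    ("~D", "P"): 0.0600,   # no disease, test positive
    ("~D", "~P"): 0.8415,  # no disease, test negative
}

def p_test(t):
    """Marginal probability of the test outcome t."""
    return sum(p for (_, test), p in joint.items() if test == t)

def cond(d, t):
    """Conditional probability P(d | t) of disease status d given test outcome t."""
    return joint[(d, t)] / p_test(t)

# cond("D", "P")   -> 0.60    cond("D", "~P")  -> 0.01
# cond("~D", "~P") -> 0.99    cond("~D", "P")  -> 0.40
```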
Now we are interested in comparing and contrasting
this standard exercise with the analogous probabilities
of material implications and biconditionals. We show
the results of the computations in the following tables.
P(D → P) = 100%    P(D → ¬P) = 91%
P(¬D → P) = 7%    P(¬D → ¬P) = 94%
P(P → D) = 94%    P(¬P → D) = 7%
P(¬P → ¬D) = 100%    P(P → ¬D) = 91%.
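For a material implication, P(X → Y) = P(¬X ∨ Y) = 1 − P(X ∧ ¬Y), so each table entry needs only one cell of the joint distribution. A sketch using the two positive-test cells of the reconstruction assumed earlier (P(D ∧ P) = 0.09 and P(¬D ∧ P) = 0.06 are assumptions, not given in the text):

```python
# Assumed joint cells for the positive-test column (a reconstruction
# consistent with the conditional-probability table above).
p_D_and_P = 0.09     # P(D and P)
p_notD_and_P = 0.06  # P(~D and P)

# P(X -> Y) = 1 - P(X and ~Y) for the material implication.
p_P_implies_D = 1 - p_notD_and_P        # P(P -> D)   = 0.94
p_P_implies_notD = 1 - p_D_and_P        # P(P -> ~D)  = 0.91
p_D_implies_notP = 1 - p_D_and_P        # P(D -> ~P)  = 0.91
p_notD_implies_notP = 1 - p_notD_and_P  # P(~D -> ~P) = 0.94
```

Note that P(P → ¬D) and P(D → ¬P) coincide because both equal 1 − P(D ∧ P); the same cell counting shows why implications with a rare antecedent come out near 100%.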
We begin by observing that the probability a pa-
tient has the disease, given that the test was positive
P(D|P), is only 60%. This is often surprising to
those who first encounter it, and it is explained by
the fact that the number of those who have the dis-
ease and test positive is only moderately larger than
the number of those who don't have the disease and
test positive. The surprise is sometimes explained as
due to people confusing P(D|P) with P(P|D).
We contrast P(D|P) with P(P → D) = 94%,
which would seem to provide the result that
people initially expect, except for the fact that
P(P → ¬D) = 91%. Apparently, testing posi-
tive for the disease implies with about 94% proba-
bility that a person has the disease, and it also im-
plies with about 91% probability that a person does
not have the disease! One may explain this appar-
ent paradox by recalling that P(⊥ → Q) = 1, for
any Q. From here we see that since P is a small
set, one is able to infer both a proposition D and
its negation ¬D with high probability. We note,
however, that P(P → D) > P(P → ¬D), just as
P(D|P) > P(¬D|P).
Arguably, it is the probability of the biconditional,
or probable logical correlation, which coincides best
with the intuition that people mistakenly apply to
P (DjP ) initially. The biconditional probabilities are
as follows:
P(P ↔ D) = 93%    P(P ↔ ¬D) = 7%.
We may compute their logical correlation coefficients
as:
ρ(P, D) = 0.86    ρ(P, ¬D) = −0.86.
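Numerically, P(P ↔ D) is the probability that the two events agree, P(D ∧ P) + P(¬D ∧ ¬P). The printed coefficients are recovered if the logical correlation coefficient is taken to be the affine rescaling 2P(A ↔ B) − 1 of the biconditional probability onto [−1, 1] (an assumption here about the definition given earlier in the paper), using the same reconstructed joint cells as before:

```python
# Assumed joint cells (same reconstruction as before; not stated in the text).
p_D_and_P = 0.09
p_notD_and_notP = 0.8415

# Probability of the biconditional: both events true, or both false.
p_iff = p_D_and_P + p_notD_and_notP  # P(P <-> D)  = 0.9315, about 93%
p_iff_neg = 1 - p_iff                # P(P <-> ~D) = 0.0685, about 7%

# Assumed definition of the coefficient: rescale [0, 1] onto [-1, 1].
rho = 2 * p_iff - 1          # about  0.86
rho_neg = 2 * p_iff_neg - 1  # about -0.86 (same magnitude, opposite sign)
```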
We see that testing positive for the disease and ac-
tually having the disease have a strong direct logical
correlation, as one would expect. Therefore, testing
positive and not having the disease, or testing negative
and having the disease, have a logical correlation with
the same magnitude but the opposite sign.
6.2 Application 2: Manufacturing and Quality Control
Example 2 A company manufactures items using
three machines. 40% of the manufactured items come
from Machine 1, 40% from Machine 2 and 20% from
Machine 3. Of those items coming from Machine 1,
90% work properly, of those items coming from Ma-
chine 2, 95% work properly, and of those items com-
ing from Machine 3, 93% work properly.
Let M1, M2, and M3 denote the events that a
manufactured item comes from Machines 1, 2, and 3
respectively. Let W denote the event that a manufac-
tured item works properly. Then we have
P(M1) = 40%    P(¬M1) = 60%
P(M2) = 40%    P(¬M2) = 60%
P(M3) = 20%    P(¬M3) = 80%
and
P(W|M1) = 90%    P(¬W|M1) = 10%
P(W|M2) = 95%    P(¬W|M2) = 5%
P(W|M3) = 93%    P(¬W|M3) = 7%.
Furthermore we can compute using the Law of Total
Probability:
P(W) = 93%    P(¬W) = 7%
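The Law of Total Probability computation can be sketched directly from the numbers of Example 2 (reliabilities as in the conditional table: 90%, 95%, 93%):

```python
# Machine mix and per-machine reliability from Example 2
# (reliabilities per the table: Machine 1 -> 90%, Machine 2 -> 95%, M3 -> 93%).
p_machine = {"M1": 0.40, "M2": 0.40, "M3": 0.20}
p_works_given = {"M1": 0.90, "M2": 0.95, "M3": 0.93}

# Law of Total Probability: P(W) = sum over i of P(M_i) * P(W | M_i).
p_works = sum(p_machine[m] * p_works_given[m] for m in p_machine)  # 0.926 ~ 93%
```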
and we can compute using Bayes' theorem
                Credit                    Blame
P(M1|W) = 39%    P(M1|¬W) = 54%
P(M2|W) = 41%    P(M2|¬W) = 27%
P(M3|W) = 20%    P(M3|¬W) = 19%
Total = 100%     Total = 100%.
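The credit and blame columns follow by Bayes' theorem from the same inputs; a sketch of the recomputation:

```python
# Inputs of Example 2 (reliabilities per the table: M1 90%, M2 95%, M3 93%).
p_machine = {"M1": 0.40, "M2": 0.40, "M3": 0.20}
p_works_given = {"M1": 0.90, "M2": 0.95, "M3": 0.93}

p_works = sum(p_machine[m] * p_works_given[m] for m in p_machine)  # 0.926

# Bayes' theorem: credit P(M_i | W) and blame P(M_i | ~W).
credit = {m: p_machine[m] * p_works_given[m] / p_works for m in p_machine}
blame = {m: p_machine[m] * (1 - p_works_given[m]) / (1 - p_works)
         for m in p_machine}

# credit -> 39%, 41%, 20% after rounding
# blame  -> 54%, 27%, 19% after rounding; each column sums to exactly 1
```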
At this point a very important but standard text-
book exercise is completed. Given the knowledge
that the item is working properly we can apportion the
credit according to the first column and given that the
item is defective we can apportion the blame accord-
ing to the second column.
While this analysis is standard and correct, we are
not entirely satisfied. Which machine correlates most
with a working item (direct logical correlation), and
which machine correlates most with a defective item
(inverse logical correlation)? To answer this let us
begin with the analogs of the above computations for
the probability of a conditional.
AMERICAN CONFERENCE ON APPLIED MATHEMATICS (MATH '08), Harvard, Massachusetts, USA, March 24-26, 2008
ISSN: 1790-5117 ISBN: 978-960-6766-47-3 176