nancylebov (nancylebov) wrote,

Research on how to explain probabilities

Some ideas on communicating risks to the general public:




This strikes me as easier to understand than trying to keep track of a bunch of numbers.

However, number overload isn't the only problem with the usual presentations:
What may seem unambiguous is actually interpreted by different people in different ways. A survey of people in 5 international cities found no agreement on what a 30% chance of rain means. Some thought it means rain during 30% of the day’s minutes, others thought rain over 30% of the land area, and so on [1]. A further problem with the statement is that it gives no information about what it means to rain. Does one drop of rain count as rain? Does a heavy mist? Does one minute of rain count?

Making a problem harder or easier:
Doctors given problems of the type:
The probability of colorectal cancer in a certain population is 0.3% [base rate]. If a person has colorectal cancer, the probability that the haemoccult test is positive is 50% [sensitivity]. If a person does not have colorectal cancer, the probability that he still tests positive is 3% [false-positive rate]. What is the probability that a person from the population who tests positive actually has colorectal cancer?

give mostly incorrect answers that span the range of possible probabilities. Typical answers include 50% (the “sensitivity”) or 47% (the sensitivity minus the “false positive rate”). The correct answer is 5%. [2]
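To see where that 5% comes from, here is the Bayes'-theorem arithmetic behind it, sketched as a small Python calculation (the snippet and its variable names are mine, not from the quoted article):

    # P(cancer | positive test) via Bayes' theorem, using the numbers quoted above
    base_rate = 0.003       # P(cancer)
    sensitivity = 0.50      # P(positive | cancer)
    false_positive = 0.03   # P(positive | no cancer)

    p_positive = base_rate * sensitivity + (1 - base_rate) * false_positive
    p_cancer_given_positive = base_rate * sensitivity / p_positive
    print(p_cancer_given_positive)  # about 0.048, i.e. roughly 5%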

as compared to
Consider the colorectal cancer example given previously. Only 1 in 24 doctors tested could give the correct answer. The following mathematically equivalent representation of the problem was given to doctors:
Out of every 10,000 people, 30 have colorectal cancer. Of these 30, 15 will have a positive haemoccult test. Out of the remaining 9,970 people without colorectal cancer, 300 will still test positive. How many of those who test positive actually have colorectal cancer?

Without any training whatsoever, 16 out of 24 physicians obtained the correct answer to this version. That is quite a jump from 1 in 24.

Statements like 15 out of 30 are “natural frequency” statements. They correspond to the trial-by-trial way we experience information in the world. (For example, we’re more likely to encode that 3 of our last 4 trips to JFK airport were met with heavy rush-hour traffic than to encode p = .75, which removes any trace of the sample size.) Natural frequency statements lend themselves to simpler computations than does Bayes’ Theorem, and verbal protocols show that given statements like the above, many people correctly infer that the probability of cancer is the number who test positive and have the disease (15) divided by the number who get back positive test results (15 who actually have it + 300 false alarms). 15 divided by 315 is 5%, the correct answer. Bet you didn’t know you were doing a Bayesian computation.
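For comparison, the natural-frequency version of the same calculation reduces to a single count-and-divide (again, a sketch of mine, not from the article):

    # Same answer from the natural-frequency framing
    true_positives = 15     # people with cancer who test positive
    false_positives = 300   # people without cancer who also test positive
    print(true_positives / (true_positives + false_positives))  # about 0.048, roughly 5%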

Link from Lightwave at Less Wrong.

This entry was posted at http://nancylebov.dreamwidth.org/447278.html. Comments are welcome here or there.