Monday, September 21, 2020

Thinking about Thinking: Are You a Bayesian?

 



Are you now, or have you ever been -- a Bayesian???

One of my long-time teaching tropes has been "thinking about thinking," in which I discuss how economists think, how normal people think (by normal, I mean non-economists of course), some tricks and traps in economic thinking, and some lessons from psychology.  But one of the best-loved segments, according to a scientific sample of both students who've come to my office to tell me this, is my discussion of how to be a (mostly "informal") Bayesian.

By popular demand, and by that I mean one person, my friend and colleague Joe Walsh, I'm posting my notes on Bayesian thinking here, for students and others who might be interested.

In this PowerPoint deck I discuss, very briefly and informally, the relationship between probability and "truth," and review some basic material from your first stats course on Type 1 and Type 2 errors.

Armed with these tools, we then delve into the three kinds of people in the world:

  • People who divide the world into three kinds of people;
  • People who don't do that; and
  • People who don't care, one way or another.

Then we tackle another tripartite division:

  • "Classical" thinkers, a.k.a. "frequentists;
  • Ideologues; and
  • Bayesians.

Most of our discussion of Bayesian approaches is very loose and informal.  However, I do go through the basics of Bayes' rule for decision making.  In teaching, for years I used examples from medicine, i.e. how to think about test results for breast or prostate cancer in light of Bayes' rule.
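For readers who want the rule itself spelled out, in the testing context (and in standard notation, nothing specific to my slides) it says:

P(disease | positive test) = P(positive test | disease) × P(disease) / P(positive test),

where the denominator is P(positive test | disease) × P(disease) + P(positive test | no disease) × [1 − P(disease)].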

Bayes' rule leads to some results that initially appear counterintuitive to many people, e.g. that a woman who gets a positive result on a mammogram with a 20 percent false-positive rate may still be fairly unlikely to have cancer.  (Though she's maybe 8 times more likely than someone who tested negative!)
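If you would rather see the arithmetic than take my word for it, here is a minimal sketch in Python.  The 20 percent false-positive rate comes from the example above; the sensitivity and the prior prevalence are numbers I am assuming purely for illustration, so the exact posterior probabilities, and the exact multiple between the positive and negative cases, will move around as you change them.

```python
# A minimal sketch of the mammogram example.
# The 20 percent false-positive rate is from the text; the sensitivity and
# prior prevalence below are assumed for illustration only.

prevalence = 0.01            # assumed prior probability of cancer in the screened group
sensitivity = 0.70           # assumed P(positive test | cancer)
false_positive_rate = 0.20   # from the example: 20 percent false positives

# Bayes' rule: P(cancer | positive) = P(positive | cancer) * P(cancer) / P(positive)
p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
p_cancer_given_positive = sensitivity * prevalence / p_positive

# The analogous calculation for a negative test
p_negative = (1 - sensitivity) * prevalence + (1 - false_positive_rate) * (1 - prevalence)
p_cancer_given_negative = (1 - sensitivity) * prevalence / p_negative

print(f"P(cancer | positive test): {p_cancer_given_positive:.1%}")
print(f"P(cancer | negative test): {p_cancer_given_negative:.2%}")
print(f"Ratio (positive vs. negative): {p_cancer_given_positive / p_cancer_given_negative:.1f}x")
```

With these particular assumptions the positive test still leaves the probability of cancer in the low single digits, which is exactly the counterintuitive part.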

Having gone through that example, traditionally I then went back to discussing the importance of the broader Bayesian approach to problems, namely:

  • Starting with an explicit statement of one's prior belief;
  • Discussion (with oneself as well as maybe others) of how strong this prior is, and where it came from;
  • Whether the strength of your prior is really justified, given its source;
  • And how you choose to update your prior, in light of new information (a small numerical sketch of this updating step follows just below).
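Here is that small numerical sketch of the updating step, written in Python using the odds form of Bayes' rule.  The 30 percent starting prior and the likelihood ratios are all invented for illustration; the point is only the mechanics of stating a prior explicitly and then revising it as new evidence arrives.

```python
# A minimal sketch of Bayesian updating in odds form.
# The prior and the likelihood ratios are made up purely for illustration.

def update(prior_prob, likelihood_ratio):
    """Return the posterior probability after one piece of evidence.

    likelihood_ratio = P(evidence | hypothesis true) / P(evidence | hypothesis false)
    """
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Start with an explicit prior, then revise it as evidence arrives.
belief = 0.30                      # stated prior: 30 percent
for lr in [4.0, 0.5, 2.5]:         # hypothetical likelihood ratios for three new facts
    belief = update(belief, lr)
    print(f"After evidence with likelihood ratio {lr}: belief = {belief:.1%}")
```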

Getting back to Bayes' rule: before posting this, I reflected on the fact that this rule is extremely relevant today, when we are all thinking about the coronavirus, including who should be tested, how often, and what it means to get a positive or a negative test.  So I added a discussion of testing for COVID-19 in this framework.  For no extra charge, I built a little spreadsheet model that lets you calculate the probability that you have the virus, given either a positive or a negative test.

You will find that the results depend on three things:  the specificity of the test (its complement is the Type 1 error rate, to a statistician), the sensitivity of the test (its complement is the Type 2 error rate), and, often most critically, your prior belief about how prevalent the virus is within the population under study.  Don't believe me?  
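Here is a rough Python analogue of that spreadsheet calculation, so you can check the claim yourself.  The sensitivity and specificity below are placeholder values, not the numbers built into the spreadsheet; swap in whatever you believe about the test in question and watch how much the answer swings with the assumed prevalence.

```python
# A rough analogue of the spreadsheet model: posterior probability of infection
# given a positive or a negative test, across a range of assumed prevalences.
# Sensitivity and specificity are placeholders, assumed for illustration.

sensitivity = 0.90   # assumed P(positive test | infected); 1 - sensitivity is the Type 2 error rate
specificity = 0.95   # assumed P(negative test | not infected); 1 - specificity is the Type 1 error rate

print("prior prevalence -> P(infected | positive), P(infected | negative)")
for prevalence in [0.001, 0.01, 0.05, 0.20]:
    p_pos = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    p_neg = (1 - sensitivity) * prevalence + specificity * (1 - prevalence)
    p_inf_given_pos = sensitivity * prevalence / p_pos
    p_inf_given_neg = (1 - sensitivity) * prevalence / p_neg
    print(f"{prevalence:>6.1%} -> {p_inf_given_pos:.1%}, {p_inf_given_neg:.2%}")
```

With these placeholder numbers, a positive test means something very different in a population where 1 in 1,000 people are infected than in one where 1 in 5 are.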

Click here for the PowerPoint presentation.

Click here for the spreadsheet model.

And as many readers of this blog know, I've been obsessively collecting detailed teaching notes about the virus, which you can find here.

One more thing -- a short discussion of the coronavirus at UW, created in late August, can be found here.