# For Magsj: Bayes Theorem and Why It's Relevant

Imagine, for a moment, that you’re a doctor. A woman has come in and asked to be tested for breast cancer. Your hospital stocks two types of test: an expensive, incredibly reliable one, and a cheaper one that’s far less reliable. With only 100 of the expensive tests in stock, hospital policy is to ration them, so you give the woman the cheaper test first.

And she tests positive.

As a doctor, it’s your job not just to inform her of the result of the test, but to explain what that result actually means. Now, you know the test is not 100% reliable - you’ve given her the cheaper one. You know the following things about the cheaper test:

1% of women who take the test do in fact have breast cancer.
Of the women who do have breast cancer, 80% will get a positive result from this test, 20% negative.
Of the women who do not have breast cancer, 9.6% will get a false positive result from this test, and 90.4% of them will get a negative result.

What do you think her probability of having cancer is, after getting a positive result from this test?

Don’t worry about getting the answer wrong - around 85% of doctors get it wrong too. What does your intuition tell you?

She has a 1% chance of having breast cancer, because she took the test.

The real question is: if she DIDN’T take the test, what are her chances of having breast cancer?

Let’s have a look:

Moral of the story: You can decrease your chances of having breast cancer from 13% to 1% just by taking the test! LOL

My intuition?

They make money off tests & unnecessary procedures. They slowly kill ignorant people with same. If further testing requires flesh, flip them off, test again later elsewhere, lo’ & behold, they find nothing.

Also true of dentists.

Be okay with death.

Hit reset if they mandate bull****.

So… regardless of the outcome of the tests, she either has or has not got cancer, so a 50/50 probability of having cancer.

…or have I been too pragmatic with my answer?

Do you have a 50/50 chance of dying tomorrow? After all, you either will or you won’t. 50/50 right?

The answer most people intuitively give is 80%. They read this and tend to discount the rest of the information very quickly:

“Of the women who do have breast cancer, 80% will get a positive result from this test, 20% negative.”

80% seems like an intuitive number to pull out of there, but they’re missing one very important detail:

People without cancer drastically outnumber people with cancer.

“1% of women who take the test do in fact have breast cancer.
Of the women who do have breast cancer, 80% will get a positive result from this test, 20% negative.
Of the women who do not have breast cancer, 9.6% will get a false positive result from this test, and 90.4% of them will get a negative result.”

For every woman with breast cancer taking the test, there are 99 without. And even though most people without cancer get a negative result, the proportion that get a positive result is still 9.6% - which means you should actually expect most people who get a positive result to not have cancer.

Think about it as if you had a group of 1,000 women who represent the statistics perfectly.

1% have cancer - that’s 10 people. 80% of people with cancer get a positive result - that’s 8 people with cancer and a positive result.

99% do not have cancer - that’s 990 people. 9.6% of them get a positive result - that’s about 95 people with no cancer, but with a positive result.

Which means, if you get a positive result, you’re not 80% likely to have cancer - only 8 of the roughly 103 positive results belong to women who actually have cancer, which is about 7.8%. You’re more than ten times more likely to not have cancer, in fact.
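The head-count argument above can be sketched in a few lines of Python (a minimal sketch - the variable names are mine, the numbers are the ones quoted in the thread):

```python
# Frequency-count version of the 1,000-woman example.
population = 1000
with_cancer = population * 0.01         # 10 women have cancer
without_cancer = population * 0.99      # 990 women do not

true_positives = with_cancer * 0.80       # 8 women: cancer, positive test
false_positives = without_cancer * 0.096  # ~95 women: no cancer, positive test

# Of everyone who tested positive, what fraction actually has cancer?
p_cancer_given_positive = true_positives / (true_positives + false_positives)
print(round(p_cancer_given_positive, 3))  # 0.078, i.e. about 7.8%
```

Counting people instead of multiplying probabilities gives the same answer as the formula below, and is usually the easier way to see why 80% is wrong.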

Bayes’ theorem is essentially a formalization of this logic. It explicitly forces us not to throw away the useful information: how likely this result is in each case, and how likely each case is to begin with. All of that information is relevant.

P(A|B) = P(B|A) * P(A) / P(B)

A, B = events
P(A|B) = probability of A given B is true
P(B|A) = probability of B given A is true
P(A), P(B) = the independent probabilities of A and B

That’s the formalization of it, and when applied correctly, it forces you to consider every detail, so you avoid the pitfall that leads to the intuitive but incorrect conclusion of 80%.
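Plugging the thread’s numbers into the formula looks like this (a minimal sketch; P(B), the overall probability of a positive test, isn’t stated directly in the thread, so it’s expanded here via the law of total probability):

```python
def bayes(p_b_given_a, p_a, p_b):
    """P(A|B) = P(B|A) * P(A) / P(B)"""
    return p_b_given_a * p_a / p_b

# A = has cancer, B = tests positive
p_a = 0.01          # prior: 1% of women taking the test have cancer
p_b_given_a = 0.80  # 80% of women with cancer test positive

# P(B): positives come from both groups (total probability)
p_b = 0.80 * 0.01 + 0.096 * 0.99

posterior = bayes(p_b_given_a, p_a, p_b)
print(round(posterior, 3))  # 0.078 - the same ~7.8% as the head count
```

The formula and the 1,000-woman head count are the same calculation; the head count just makes P(B) visible as "everyone who tested positive".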

_
1% do in fact have breast cancer.

So the probability of her having cancer is 1 in 100, i.e. 1%

Breast Cancer is not a joke magsj, you should feel very ashamed

I was laughing at my not having instantly taken that stat into account, not at the 1% having cancer.

That’s the probability before the test - the question is, what’s the probability after the test? After the test comes back positive?

FJ: With or without a profit-driven health care industry?

Everybody wants to make everything into a trick question lmao

Oh yeah???
What exactly do you mean by that??

Hmmm…

_
Hahaha! what does Sculptor mean by that?

I’ll try to do the math and get back to you as soon as possible… I’m finding this quite an interesting endeavour to undertake.