[FoRK] Black Belt Bayesian vs. Authority, Fight!

James Tauber <jtauber at jtauber.com> on Thu Aug 9 11:09:45 PDT 2007

Lisp in Small Pieces is similarly discounted (currently #1 on that list).

It seems the accounts are shared between .ca and .com -- I've never
ordered from .ca before, but it pulled all my info from .com.

James

On 09/08/2007, at 10:48 AM, Matt Jensen wrote:

> Speaking of Bayesian statistics, run, don't walk, over to Amazon
> Canada.  For some reason Jaynes's monumental "Probability Theory:
> The Logic of Science" is priced at $4 (versus $49 in the U.S., $80
> list).  Maybe that's why it's currently at #3 on the overall
> bestseller list!  I just bought my copy...
>
>   http://www.amazon.ca/gp/bestsellers/books
>
> Matt Jensen
> http://mattjensen.com
> Seattle
>
>
> Quoting Jeff Bone <jbone at place.org>:
>
>>
>> BBB is quickly becoming one of my favorite blogs.  This post from
>> earlier today is a perfect example, and a standalone gem.  Anything
>> that begins with the lines "Tim is a famous geologist. Tom is a
>> famous clown." --- is a keeper.  :-)  (Despite the bit of naming
>> confusion that appears midway...)
>>
>> Cf.
>>
>>   http://www.acceleratingfuture.com/steven/?p=33
>>
>> --
>>
>> (This post will be a more in-depth explanation of something I was
>> trying to get across in much of the Rapture of the Nerds essay.)
>>
>> Tim is a famous geologist. Tom is a famous clown. Tim gives us a
>> theory about rocks. We judge it to be 90% probable. In a parallel
>> universe, Tom gives us the same theory about rocks. We judge it to
>> be 10% probable.
>>
>> Jim gives us a theory about fish and presents a full technical case
>> that is good -- the facts all fit. In a parallel universe, Jom gives
>> us a theory about fish and presents a full technical case that is
>> bad -- it needs coincidences or leaps of logic. We judge Jim’s
>> theory to be 90% probable. We judge Jom’s theory to be 10% probable.
>>
>> These two situations might seem the same. In the first case, we used
>> only indirect evidence -- the theorist’s credentials -- to assess
>> probabilities. In the second case, we used only direct evidence --
>> the known facts of the matter -- to assess probabilities. Both are
>> useful kinds of evidence. But there is an important difference.
>>
>> Suppose we ask Tim and Tom to make a full technical case. Tim the
>> geologist gives us a full technical case that is, as expected, quite
>> good. Tom the clown, in his own parallel universe, gives us the same
>> full technical case -- one much better than we expected from a
>> clown. Since a full technical case relies in no way on authority, we
>> put the same probabilities on Tim’s claim and Tom’s claim. Anything
>> else would be unreasonable.
>>
>> Suppose we ask Jim and Jom about all of their credentials. It turns
>> out their credentials are exactly the same. Maybe they’re both
>> equally famous clowns, who both took a course in marine biology once
>> -- surprising in Jim’s case, given that his arguments are so good.
>> Or maybe they’re both famous marine biologists of exactly equal fame
>> and competence -- surprising in Jom’s case, given that his arguments
>> are so bad. None of this matters for our probabilities. Again, we
>> already have a full technical case, and a full technical case relies
>> in no way on authority. Jim’s theory is still 90% probable, Jom’s
>> theory still 10% probable.
>>
>> So once we knew Tim and Tom’s full technical arguments, their
>> credentials no longer mattered. But once we knew Jim and Jom’s full
>> credentials, their technical arguments still mattered. Technical
>> arguments and credentials are useful types of information
>> individually, but when both are available, the technical argument
>> trumps the credentials.
>>
>> If I’m not mistaken (but I need to read up on this!), what I’ve been
>> doing here is just repeating the definition of “screening off” from
>> the theory of causal diagrams. If we have three variables (A, B, C),
>> and A and C are independent conditional on the value of B, then B
>> screens off A from C, and A and C do not directly cause each other.
>> In the authority example of this post, you could see the causality
>> running as follows. If a theory is true, that causes the technical
>> case for it to be good. If people have good credentials, that causes
>> them to adopt theories for which the technical cases are good. But
>> causality does not run directly from truth to adoption by people
>> with good credentials, or from adoption by people with good
>> credentials to truth.
>>
>> Maybe this all sounds like a complicated way to make a simple point,
>> but it matters, because people’s intuitions sometimes get it all
>> wrong. If an idea is adopted by silly people, or is not adopted by
>> competent people, that is seen as a “bad point” that is weighed
>> against the “good point” of solid technical argumentation. But this
>> weighing makes no sense -- to a rational thinker, the “bad point”
>> counts until the “good point” arrives, and is then annihilated. In
>> real life, everything interesting is a mix of things you’ll always
>> have to take on authority and things you can check for yourself, but
>> you can still apply this insight.
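>>
>> To see the annihilation numerically, reuse the made-up numbers from
>> the sketch above: learning that credentialed people are not on board
>> is a real “bad point” on its own, but it stops counting the moment
>> the technical case is in hand:
>>
>> prior = 0.5
>> p_case = {True: 0.9, False: 0.2}  # P(good case | truth of theory)
>> p_back = {True: 0.8, False: 0.1}  # P(credentialed backers | good case)
>>
>> def p_no_backers(truth):
>>     # P(no credentialed backers | truth), summing over case quality
>>     return sum((p_case[truth] if case else 1 - p_case[truth])
>>                * (1 - p_back[case]) for case in (True, False))
>>
>> # "Bad point" alone: no backers drops P(true) below the 0.5 prior.
>> bad = prior * p_no_backers(True) / (
>>     prior * p_no_backers(True) + (1 - prior) * p_no_backers(False))
>> print(round(bad, 2))   # ~0.26
>>
>> # Once the good technical case arrives, the backers drop out entirely.
>> good = prior * p_case[True] / (
>>     prior * p_case[True] + (1 - prior) * p_case[False])
>> print(round(good, 2))  # ~0.82, with or without credentialed backers
>>
>> The posterior given a good case is not an average of the good point
>> and the bad point; the bad point is simply gone.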
>> _______________________________________________
>> FoRK mailing list
>> http://xent.com/mailman/listinfo/fork


