[FoRK] Capitalism, Contracts and Cronyism

Stephen Williams sdw at lig.net
Wed Aug 27 12:14:53 PDT 2008


Jeff Bone wrote:
>
> On Aug 27, 2008, at 12:14 PM, Stephen Williams wrote:
>
>> I have little basis for assigning probability of correctness of the 
>> suggested probabilities.  There are many possible forces and actions 
>> that would be triggered when passing various tipping points that I 
>> cannot accurately factor in for myself, and many that I cannot even 
>> guess at.  If you can account for all of those factors with some 
>> certainty, great, but you'll have to show your work to be 
>> taken seriously.  It's useful to know you think there's a 
>> 10-20% probability of disaster, but not very useful.
>
> Nonsense.
>
> Consider the conflict in what you're asking for, here.  On the one 
> hand you bemoan the complexity of making such predictions yourself.  
> Now consider if I were to offer you an oracle --- a program --- that 
> had a significant track record in making correct a priori predictions 
> of the kinds of things you are trying to predict, but had no ability 
> to explain its reasoning;  it merely answers yes-or-no questions about 
> its field of specialization.  Further, let's say that examining the 
> source code is no good, because the
The source code in this case is the set of variables, their weighting 
and encoding, the training sets, the success rate, and general knowledge 
of how the technology performs on outliers or model shifts.  These 
"inputs" to such a tool help indicate what it is and is not capable of, 
and where the probabilistic boundaries of its usefulness lie.  Anything 
much outside those boundaries may render it useless to varying degrees.  
Without such detail, I don't know what the boundaries are or how strong 
the probability of success is.
> learned prediction capability is stored in a neural network of 
> tremendous complexity, and reverse-engineering it is on a scale that 
> is prohibitive.  Would you, given a track record, trust it?  Sure you 
> would, just like you trust other computer tools to perform complex 
Given a specifically quantified track record along with the above 
qualification, yes.
> tasks for you that in fact you might not be able to (or at least are 
> not willing to) perform yourself.  Indeed, based on your previous 
> complaint about the complexity of the problem space, even if the 
> oracle's "reasoning" *could* be explained to you, it might be complex 
> enough to escape your comprehension.
I can comprehend whether variables are taken into account directly 
and/or indirectly, and whether historical ranges will be predictive 
of the future.  Tipping points, good and bad, that haven't been seen 
yet are just one unavoidable deficiency.  Conceiving of some tipping 
points is possible, and you can try to reason about their causes and 
effects, but adding them to models with any certainty is still voodoo 
in most cases, no?
>
> Now, I'm not claiming to either be or possess such an oracle.  I'm 
> merely pointing out that it's ridiculous to assert that you only 
> accept answers based on fully-elaborated reasoning.  You accept all 
> sorts of things all the time based on reputation, on certainty gained 
> via epistemological method, etc.
And I discount other predictions because they would require detailed 
mind reading, irrational correlations (alignment of the planets, etc.), 
perfect prediction, intuiting a chaotic result, or other forms of 
magic.  I trust the weather prediction for this afternoon pretty well 
and not at all for a month from now.  That may be a close analogy to 
what we're talking about here, but at multiple resolutions simultaneously.

Even with a neural net or other machine-learning system, you should be 
able to feed it ranges of inputs and derive approximate formulas that 
explain its behavior.  While it may take a long time to iterate through 
all the variables, it is possible.
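
As a minimal sketch of that idea (with a hypothetical black_box 
function standing in for the opaque oracle), one could sweep ranges of 
inputs and fit a simple surrogate formula to the answers:

```python
import numpy as np

# Hypothetical stand-in for the opaque oracle: any function we can
# query but not inspect.  Here it is secretly linear.
def black_box(x):
    return 3.0 * x[0] - 0.5 * x[1] + 2.0

# Sweep ranges of inputs and record the oracle's answers.
rng = np.random.default_rng(0)
X = rng.uniform(-10, 10, size=(500, 2))
y = np.array([black_box(x) for x in X])

# Fit an approximate formula (linear least squares) that explains the
# oracle's behavior over the sampled region.
A = np.hstack([X, np.ones((len(X), 1))])  # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)  # approximately [3.0, -0.5, 2.0] inside the sampled range
```

The caveat from above still applies: the fitted formula only describes 
behavior inside the sampled ranges, and says nothing about tipping 
points outside them.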

sdw
> jb


