[FoRK] Machine super intelligence

Jeff Bone jbone at place.org
Thu Nov 5 10:33:18 PST 2009

Shane Legg is *the* man.  He manages to formalize a lot of the things  
that we've kicked around from time to time here on this list, and  
carry it a whole lot further.

In short he takes all the various informal definitions of  
intelligence, the rather academic formal definitions of intelligence  
(e.g. Solomonoff induction), mixes in a little Kolmogorov complexity  
and related information theoretic / learning-as-compression analysis,  
stirs in some Bayes, and comes up with a universal intelligence  
measure that's reasonable.  He then makes incomputable Solomonoff  
induction computable by approximating the universal prior with  
standard (and parallelizable) statistical techniques.  He situates  
this in the context of a generalized model of reinforcement learning,  
provides a formalization, then ties this to Hutter's AIXI and proves  
that a computable approximation of AIXI / MC-AIXI is universal /  
generally intelligent according to his reasonable definition for any  
environment for which a general agent may perform optimally.  (In the  
thesis there's a LOT more specificity of what all these terms mean  
than I've ever seen offered before;  I'm now on my second pass through  
it and finding that it illuminates *other areas of interest* for me  
as well as its own direct subject matter.  That's the hallmark of  
great theory, IMHO...)
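
For anyone who hasn't seen it, the measure itself is compact enough to  
quote from memory (check the thesis for the exact formulation and the  
definition of the environment class E):

```latex
% Legg & Hutter's universal intelligence of an agent \pi:
% expected performance over all computable environments \mu,
% weighted by simplicity -- 2^{-K(\mu)}, K = Kolmogorov complexity.
\Upsilon(\pi) \;:=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
```

where V_mu^pi is the agent's expected total reward in environment mu.  
Simple environments dominate the sum, but doing well in them isn't  
enough;  an agent only scores high by performing well across the board.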

He then reviews a bit of the present state-of-the-art of how the brain  
does its thing (Ken, take note) and muses a bit about how this relates  
to the more abstract models of learning and reasoning out there,  
including AIXI and friends as well as restricted Boltzmann machines.
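
Since RBMs come up: the whole mechanism is small enough to sketch in a  
few lines.  Here's a toy version (my own toy dimensions and a single  
CD-1 update -- illustrative only, nothing from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy restricted Boltzmann machine: 6 visible, 4 hidden binary units.
# "Restricted" = no intra-layer connections, so each layer can be
# sampled in one vectorized step given the other.
n_visible, n_hidden = 6, 4
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))
b_v = np.zeros(n_visible)   # visible biases
b_h = np.zeros(n_hidden)    # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_hidden(v):
    """P(h=1 | v) and a binary sample of the hidden layer."""
    p = sigmoid(v @ W + b_h)
    return p, (rng.random(p.shape) < p).astype(float)

def sample_visible(h):
    """P(v=1 | h) and a binary sample of the visible layer."""
    p = sigmoid(h @ W.T + b_v)
    return p, (rng.random(p.shape) < p).astype(float)

# One contrastive-divergence (CD-1) weight update on a single input:
# up, down, up again, then nudge W toward the data correlations and
# away from the model's one-step reconstruction correlations.
v0 = np.array([1.0, 0.0, 1.0, 1.0, 0.0, 0.0])
ph0, h0 = sample_hidden(v0)
pv1, v1 = sample_visible(h0)
ph1, _ = sample_hidden(v1)

lr = 0.1
W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
```

Stack a few of these and you get the deep belief nets that were getting  
so much attention lately;  the connection to the compression-as-learning  
view above is what makes them interesting here.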

For a bit of spice, he then has a punchline slide (in his Halloween  
ExoBrit presentation) in which he anticipates that we'll almost  
certainly get this kind of machine super intelligence *before we have  
a theory of Friendly AI.*  (I personally speculate that no such  
"strong" theory of Friendly AI is even theoretically possible, though  
I'm not convinced this is necessarily a bad thing;  there may be game  
theoretic / economic approaches to dealing with potential FAI  
impossibility that may nonetheless be satisfactory to some;  cf. J.  
Storrs Hall, Robin Hanson, and others lately for discussion along  
these lines.  YMMV.)

If you have any interest in this area and / or technical grounding,  
you *must* start tracking this guy if you haven't already.



   http://www.vetta.org/documents/Machine_Super_Intelligence.pdf *


* also available in print form on Lulu if you feel like giving a  
little back...
