[FoRK] Physical limit on total amount of computation vs. Bostrom...

J. Andrew Rogers andrew at ceruleansystems.com
Thu Apr 29 14:13:46 PDT 2004


Jeff Bone wrote:
> Random speculation:
>
> We're eventually going to unify whatever GUT / QG 
> we arrive at with a similarly consolidated 
> information theory / thermodynamics in such a way 
> that the ultimate GUT describes reality purely in 
> terms of a universal monotype:  information, 
> informational structures / distribution / density,
> information dynamics, etc.  The result is roughly 
> a reified computational hypothesis w/o (necessarily) 
> any simulation interpretation being required.


I strongly agree, which probably isn't surprising.

We are pretty close to being able to show this in the mathematics. In
fact, I expect a semi-complete version of the math to be published
within the next five years.  While I don't think I ever talk about it on
this list, some of you have probably picked up bits and pieces of me
yammering on about the theoretical research I do in other places.  My
own early speculations on this, from when I first dove into the math
many years ago, were actually slightly off, primarily because the math
had a well-established concept of "finite state" but no concept of
"algorithmically finite" (a far more useful theoretical distinction
that I developed later).  Our universe is clearly an example of an
algorithmically finite system, which I should point out does not
restrict it to being finite state, but which does restrict it to a class
of systems in which many nominally distinct fields of mathematics
(computation theory, information theory, reasoning/logic systems,
probability theory, et al) can be "easily" unified into a single
relatively simple model by virtue of the definitional constraints.

When I first started working on the mathematical unification of
algorithmically finite systems about a decade ago, it was largely
speculation -- I had no proof that it could be done, even though it
seemed reasonable from my understanding of the space -- and hardly
anyone was looking at this particular space.  Today, most of the
mathematical work has already been done, by a growing number of people
(much of it as yet unpublished, insofar as I keep track of what people
are putting to paper or planning to), and it appears highly probable
that you can build a more or less complete, elegant, and predictive
mathematical model of algorithmically finite systems (like our
universe).  Its application to physics should be very interesting when
everything is ironed out.

The AI tangent:

A lot of the early math to support all this was directly or indirectly
developed in the context of abstract theoretical AI research, in the
domain of finite non-axiomatic models of reasoning and computation in
particular.  There are now a substantial number of proofs and papers
that support a conjecture that solving this mathematical integration
problem is essentially equivalent to solving the general problem of AI
in computer science.  The guys over at IDSIA (Hutter, Schmidhuber, et
al) and some others are doing a lot of helpful publishing and proofs in
this general space.  While it is a well-accepted concept now, I
distinctly remember getting into lengthy and animated arguments over it
when I first made the conjecture in the late '90s.  This is still a
very new idea and hasn't been fully absorbed into the memesphere,
though it does have quite a bit of traction.
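
(For those who haven't followed the IDSIA work: the mathematical flavor
involved is along the lines of Solomonoff's universal prior, which by
itself already welds probability theory, information theory, and
computation theory into a single definition.  A rough sketch in LaTeX
notation, as Solomonoff published it rather than the specific model I'm
describing:

    M(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}

where U is a universal prefix Turing machine, the sum runs over all
programs p whose output on U begins with the string x, and \ell(p) is
the length of p in bits.)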

While Hutter showed that intelligence of all forms is only expressible
in computational models of this type, the practical problem has been
that the space and time complexity of such algorithms was also proven
to be exponential.  While no one has published anything on the specific
limits, all previously published universal approximations of this type
of computational system (i.e. known algorithms that just happen to meet
the classification criteria) have brutal exponents, to the point of
intractability for non-toy purposes.  While
some folks have written it off because of this, a good understanding of
the mathematical proto-model makes it clear that those approximations
are very poor and suboptimal, which isn't surprising when you consider
those algorithms were developed for other applications in which they
*are* narrowly optimal. I will assert that we can demonstrate today
(i.e. in implementation) that the exponent is less than 2 generally, and
less than 1 for pattern spaces normally of interest to humans -- the
complexity exponent varies as a function of environmental entropy.
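
To make the complexity point concrete, the Hutter result in question is
the AIXI model (sketched here from memory in LaTeX notation, so check
the papers for the exact form), which selects its action at cycle k by
an expectimax over all computable environments, weighted by program
length, with m the agent's horizon:

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
           \Big[ r_k + \cdots + r_m \Big]
           \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

The computable approximation AIXItl restricts the search to programs of
length at most l with a per-step time bound t, and its computation per
cycle is on the order of t * 2^l, which is exactly the sort of brutal
exponent that makes the naive approximations intractable.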


In my general estimation, practical technologies derived from these
mathematical developments will be one of the major components of the
next tech boom cycle.  Early development is already sufficiently
interesting in capability that I don't see how it could turn out
otherwise, particularly in the absence of theoretical showstoppers.  At
this point,
I'm quite optimistic.  And maybe I'll even make a fat chunk of change
from all this too. :-)

It is nice to see that more and more people in other fields are starting
to dip their toes into this conceptual space.



j. andrew rogers (off to NYC momentarily)




