[FoRK] Scalability of intelligence
andrew at ceruleansystems.com
Wed Aug 4 22:29:45 PDT 2004
On Aug 4, 2004, at 2:16 PM, daniel grisinger wrote:
> What this notion is doing is making a strong statement about
> how hard it is to advance from one level of intelligence to
> the next. Basically, it's saying that after some threshold is
> reached intelligence will begin to accelerate at some rate,
> say I(x) = 2^x. But there's an implicit assumption that that
> rate is faster than the rate at which the problem of becoming
> more intelligent is getting harder. If how hard it is to
> become more intelligent is described by H(x) = 2^(x^x), then
> the entire runaway becomes impossible. Sure, you become 2^x
> smarter at each step, but if the next step is 2^(x^x) times
> harder to take you certainly aren't running away.
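The quoted argument is easy to check numerically. Here is a toy
sketch in Python that just plugs in the quoted I(x) = 2^x and
H(x) = 2^(x^x); the step range and the direct comparison are
illustrative assumptions, nothing from the argument itself:

  def I(x):
      # Capability gained at step x (the quoted assumption).
      return 2.0 ** x

  def H(x):
      # Difficulty of taking the step past x (the quoted assumption).
      return 2.0 ** (x ** x)

  for x in range(1, 5):
      # If capability is what you spend to overcome difficulty,
      # progress stalls as soon as H(x) dwarfs I(x).
      print(f"x={x}: I={I(x):.3g}  H={H(x):.3g}  I/H={I(x) / H(x):.3g}")

By x=4 the ratio I/H is already around 1e-76, which is the "you
certainly aren't running away" case in numbers.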
It isn't easy to quantify, at least not in terms humans normally think
in.
Intelligence has a space complexity of the form 2^a, where "a" is the
general order of algorithmic abstraction supported by the machine. A
machine with a=n can directly represent and manipulate a machine with
a=n-1, which is a huge qualitative difference. The difference between
very stupid and very smart humans is probably on the order of 0.2-0.3
in this term, but this is difficult to measure in practice because
humans do not evenly allocate all those resources, allowing people who
are stupid on average to be locally intelligent. It does give you an
idea of how important a little improvement in "a" actually is.
Anything a full point higher than smart humans would essentially be
god-like in its intelligence from the perspective of a human and we
would be incapable of comprehending it by definition. While we might
care in the abstract, we would rapidly have difficulty discerning the
rate of intelligence growth. I don't think a squirrel can grok the
intelligence of humans either, nor discern whether or not it is
growing.
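To put numbers on that: under the 2^a space-complexity model, a gap
of delta_a in "a" corresponds to a 2^delta_a ratio in representational
capacity. A minimal sketch in Python, using only the figures above
(the labels are mine):

  def capacity_ratio(delta_a):
      # Relative capacity implied by a gap of delta_a, given cost(a) = 2**a.
      return 2.0 ** delta_a

  for delta_a, label in [(0.2, "very stupid vs. very smart human, low end"),
                         (0.3, "very stupid vs. very smart human, high end"),
                         (1.0, "smart human vs. the god-like machine")]:
      print(f"delta_a={delta_a}: {capacity_ratio(delta_a):.2f}x  ({label})")

The 0.2-0.3 human range is only a 15-23% difference in raw capacity,
while the full point is a doubling; the qualitative jump comes from
the a=n+1 machine being able to directly represent the a=n one.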
The resource complexity is a bitch though. Hardware cannot scale like
this forever, but each time you increment "a" you can work in an
entirely new space of systems that previously could not even be
properly conceptualized. While intelligence will appear to be
accelerating
using some kind of linear measure (arguably a stupid way to look at
it), the growth of "a" will almost certainly be sub-linear in practice
and each new step up will be harder to obtain since it is completely
dependent on exponential hardware growth.
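The hardware dependence is easy to make explicit: inverting
cost(a) = 2^a gives a = log2(R) for a hardware budget R, so exponential
hardware growth buys only steady unit increments in "a". A sketch,
assuming a Moore's-law-style doubling every two years and a made-up
starting budget:

  import math

  def achievable_a(hardware_units):
      # Invert cost(a) = 2**a: the largest "a" a budget can support.
      return math.log2(hardware_units)

  hardware = 32.0  # illustrative starting budget, i.e. a = 5
  for year in range(0, 11, 2):
      print(f"year {year:2d}: hardware={hardware:8.0f} units, "
            f"a={achievable_a(hardware):4.1f}")
      hardware *= 2  # one assumed doubling per two-year step

Each +1 in "a" costs a full doubling of the budget, so any slowdown in
the hardware curve shows up directly as sub-linear growth in "a".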
So it won't run away in any terms that matter, but that fact won't
mean much to you or me when it happens.
You may have noticed that monkeys currently cannot build machines
larger than a=5 because of the aforementioned geometric resource
complexity problem, and you can't do a whole lot at a=5. Fortunately,
it looks like you can do good universal approximations with a resource
complexity that is merely polynomial (roughly a^2), which puts "a" in
the ballpark of the low 30s on modern monkey hardware, sufficiently
complex that you can start doing very interesting things.
Things like human language seem to be around a=24 level algorithms.
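The payoff of the cheaper approximation is easy to see by comparing
the two cost models for the same budget R: exact representation
(cost 2^a) supports a = log2(R), while the roughly-a^2 approximation
supports a = sqrt(R). A sketch with made-up budget values:

  import math

  for budget in (32, 1024, 10**6):
      a_exact = math.log2(budget)   # direct representation, cost 2**a
      a_approx = math.sqrt(budget)  # approximation, cost roughly a**2
      print(f"budget={budget:>8}: exact a={a_exact:5.1f}, "
            f"approximate a={a_approx:7.1f}")

A budget that caps an exact machine at a=10 already puts the
approximate machine in the low 30s, consistent with the ballpark
above.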
I should add that I generally do not believe a genuine "hard take-off"
is possible in the absence of proper general purpose MNT, and I think
it currently looks like we'll have general purpose AI before we have
general purpose MNT (obviously arguable).
j. andrew rogers