[FoRK] On the unreality of bottom-up brain simulation

Jebadiah Moore jebdm at jebdm.net
Tue Aug 17 11:05:32 PDT 2010


On Tue, Aug 17, 2010 at 12:41 PM, Dr. Ernie Prabhakar
<drernie at radicalcentrism.org> wrote:

> Hmm, that sounds a lot like Wolfram's definition of "computationally
> irreducible":
>
> http://en.wikipedia.org/wiki/Computational_irreducibility
>
> If the brain is computationally reducible, then we can simulate it using
> shortcuts.  If it isn't, then we would need to actually understand
> -everything-.
>
> Right?


I haven't read the book, and the Wikipedia page is a bit lacking, but
based on my understanding of it, computational irreducibility is about
prediction.  If a brain is computationally irreducible, then we can't
predict what it would do without simulating it.  But that doesn't mean
we'd have to understand everything happening in the simulation.  To quote
the Wikipedia page:

Complex behavior features can be captured with models that have simple
> underlying structures.


and


> An overall system's behavior based on simple structures can still exhibit
> behavior undescribeable by reasonably "simple" laws.


So it's not saying that a system with complex output has to have complex
rules (obviously not, given emergent phenomena), but rather the opposite:
that for some types of systems, particularly ones exhibiting emergent
complex phenomena, there is no way to predict the output short of
simulating the system completely.  So, for instance, take traffic: if we
could easily predict how many cars would be at each intersection at a
given time with reasonable accuracy, then traffic would NOT be
computationally irreducible; but if we had to actually watch the cars, or
put together a detailed simulation, to make such predictions, then
traffic WOULD be computationally irreducible.
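
To make that concrete, here's a minimal sketch of such a traffic
simulation in Python.  The model is my own toy assumption (a one-lane
ring road, the "rule 184" cellular automaton), not anything from the
Wikipedia page; the point is just that the only way I know to find out
where the cars are at step t is to actually run all t steps:

    import random

    def step(road):
        # Rule 184: a car advances iff the cell ahead is empty.
        n = len(road)
        new = [0] * n
        for i in range(n):
            if road[i]:
                if road[(i + 1) % n]:   # car ahead: stay put
                    new[i] = 1
                else:                   # road ahead clear: move up
                    new[(i + 1) % n] = 1
        return new

    road = [1 if random.random() < 0.4 else 0 for _ in range(60)]
    for t in range(20):                 # watch the jams evolve
        print(''.join('#' if c else '.' for c in road))
        road = step(road)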

So I guess a computationally irreducible system is just what it sounds
like: a computation which you cannot optimize any further and still get
the same (or reasonably similar) results.
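
For contrast, a tiny example of my own (not from the book or the page):
summing 1 through n is reducible, because Gauss's closed form skips the
loop entirely, while for rule 30 (which I gather is Wolfram's own
favorite example) no such shortcut is known, and as far as anyone can
tell you have to run every step:

    def triangle(n):
        # Reducible: the loop 1+2+...+n collapses to a closed form.
        return n * (n + 1) // 2         # O(1) instead of O(n)

    def rule30(cells):
        # No known shortcut: new cell = left XOR (center OR right).
        n = len(cells)
        return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
                for i in range(n)]

    cells = [0] * 41
    cells[20] = 1                       # single seed cell
    for t in range(20):                 # no known way to jump ahead
        print(''.join('#' if c else '.' for c in cells))
        cells = rule30(cells)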

I don't think you can really say whether the "brain" itself is
computationally reducible (unless you mean starting from base-level
particle interactions, perhaps), only whether a particular model of the
brain is.

Regardless, if you understand the base-level processes, and have enough
data about initial positions and whatnot, you should be able to simulate
the brain even without understanding the particular functional role of
all the different proteins in the human brain.  This is in some sense the
opposite of a computational reduction, since a reduction would let you
simulate the brain -without- all those nitty-gritty details.  For a model
of the brain based on an understanding of the roles of proteins to work,
in fact, the brain would have to be at least somewhat computationally
reducible, because you're raising the level at which you're operating.
It would be a reduction in computation to simulate proteins and such
based on their roles instead of as collections of atoms.  It would be a
further reduction to simulate at the level of nerves.  For traditional AI
to simulate human intelligence would be an even further computational
reduction.
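
As a loose analogy for those levels (random walkers standing in for
proteins; the numbers are made up for illustration), the "base level"
simulation below steps every walker individually, while the "reduced"
version exploits a higher-level law and does no stepping at all:

    import random
    import statistics

    def spread_micro(walkers=10000, steps=1000):
        # Base level: step every walker, one move at a time.
        finals = []
        for _ in range(walkers):
            x = 0
            for _ in range(steps):
                x += random.choice((-1, 1))
            finals.append(x)
        return statistics.pstdev(finals)

    def spread_macro(steps=1000):
        # Reduced level: the ensemble obeys a simple law,
        # spread ~ sqrt(steps), with no simulation needed.
        return steps ** 0.5

    print(spread_micro(), spread_macro())   # both come out near 31.6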

But I only read the page once, so I might be misunderstanding the concept.

-- 
Jebadiah Moore
http://blog.jebdm.net

