[FoRK] Kudos for Jeff Hawkins' _On Intelligence_
eugen at leitl.org
Thu May 26 07:42:30 PDT 2005
On Thu, May 26, 2005 at 08:04:17AM -0500, Jeff Bone wrote:
> >>electrochemical activity of the human brain gives rise to
> >>intelligence. And he may have done it. The book presents what ---
> >I doubt that very much.
> Have you read the book?
No. I've read the blurb and the reviews on Amazon because a few people
enthused about the book. It's right there at the very bottom of my reading
list, which means effectively never.
The gist of it seems to be a list of trivial truths (brain architecture is important,
current computers are not intelligent -- well, duh) and hare-brained claims like "[T]he
ability to make predictions about the future... is the crux of
intelligence". Substitute Hecht-Nielsen for him and confabulation for
prediction/memory, and you've got yet another talking head.
Of course the brain builds models and derives predictions. Why is that
supposed to be the crux of intelligence? Is computation irrelevant?
Where is the evidence? Can he build systems which learn any better than
current systems? Not according to his RNI publications. Feel free to point me
towards anything more exciting.
> Or chased the references? Or talked to him
No. I'm not the intended audience of the book.
I've looked at http://www.rni.org/ and what they seem to be doing there,
http://www.rni.org/pubs.html (ah, a few new papers up), which is good, solid
seat-of-the-pants neuroscience -- nothing new there.
(By "new" I'm referring to the implementation part: how you cast this into ASICs, or at
least FPGAs with a very large pipe to the on-die embedded memory -- because, as
you may or may not know, software doesn't cut the mustard except on Very Large
Very Expensive Clusters, and he doesn't seem to be doing those.)
Unfortunately, I have neither single-user rights to a Very Large Very
Expensive Cluster nor spare time to burn, and in 20 years I will be too old to care.
> about it?
Speaking to people is largely useless if you can read their original
publications. Way more bandwidth there.
> Do you have any competing hypotheses?
Yes. It runs basically: "it's a big dirty machine, nobody knows the details,
and switches are different from neurons anyway, so build a good
morphogenetic code for a sufficiently flexible, biologically inspired
automaton-network framework, seed the pool with empirical data on neuroanatomy
from biology, and let it drive co-evolving, freely behaving virtual critters in a
fake-reality simulator -- with task solving and human observers serving as the
fitness function, along with the drives intrinsic to co-evolution". That's about as
short as it gets without turning into raw word salad. It would only take a
few tens of megabucks and a few tens of PhD-years to get started.
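The skeleton of that loop -- two populations co-evolving, with an external task
score plus a competitive score driving selection -- can be sketched in a few lines.
This is purely my own illustration (the bitstring genomes and the OneMax stand-in
"task" are hypothetical placeholders, nothing from RNI or from any real critter
simulator):

```python
import random

random.seed(0)  # deterministic for demonstration

GENOME_LEN = 32
POP_SIZE = 20
GENERATIONS = 40

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def task_fitness(genome):
    # Stand-in for "task solving": count of 1-bits (the classic OneMax toy task).
    return sum(genome)

def competitive_fitness(genome, rivals):
    # Stand-in for the drive intrinsic to co-evolution:
    # score against a random sample of the rival population.
    sample = random.sample(rivals, 5)
    return sum(task_fitness(genome) > task_fitness(r) for r in sample)

def mutate(genome, rate=0.02):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def evolve():
    pop_a = [random_genome() for _ in range(POP_SIZE)]
    pop_b = [random_genome() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Each population is selected against the other: co-evolution.
        for pop, rivals in ((pop_a, pop_b), (pop_b, pop_a)):
            scored = sorted(
                pop,
                key=lambda g: task_fitness(g) + competitive_fitness(g, rivals),
                reverse=True,
            )
            parents = scored[: POP_SIZE // 2]  # truncation selection
            pop[:] = parents + [
                mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))
            ]
    return max(task_fitness(g) for g in pop_a + pop_b)
```

The real thing would replace the toy fitness with behaviour of embodied critters
in a physics simulator, and the bitstrings with a morphogenetic code that grows
the network -- that's where the megabucks and PhD-years go.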
My hypothesis is that there's no single hypothesis. There's no theory which
you could distill into a couple of pages filled with neat equations. There's no
simple code to write (unless the complexity is in the data).
The human baby is a system with a lot of knowledge about the world already built in --
enough of it to learn. How did it all get in there? It all starts with
a single cell. The zygote is the minimal seed, but the developmental
trajectory of embryonic morphogenesis is a hash: you can't just take the final structure
apart to see how to arrive at the initial state (which might be small
enough to fit on a single current data medium, in a moderately rich context).
Circuitry is a very different realm from biology, so you can't simply use a
baby as a blueprint, and low-level emulation is right out of the question,
because it will take another half century until we have the raw crunch
resources. (No live babies will be harmed in the act, promise.)
> Can you falsify his?
Why should I? There's nothing to falsify unless he builds something which
works well enough to be interesting.
(My values for "interesting" are in baby-equivalent territory, though.)
> As I said, and as he admits, it's probably very wrong in many
> specifics. Yet the overall direction makes a hell of a lot more
> sense than either earlier attempts to explain cognition or the
> pervasive tendency to just ignore it. The general suggestions seem
You seem to be referring to the AI school a la Minsky. This is a major bunch
of steaming excrement, and also a very good strawman. The people I listen to
never ignored no cognition, Sir.
> quite reasonable and sound. Also note that there's not much primary
> research going on here --- he's merely synthesized disparate research
> from many quarters. Whether he's generally wrong or generally right
> rests on the validity of many other observations and hypotheses.
Cognition is a very specific physical process occurring in a very specific
biological system -- a system which has been designed and tuned by several gigayears of
evolution. It is very easy to pick an arbitrary facet of that system's operation
and build a very succinct, provable theory about that aspect of its
behaviour. We've been doing that since Santiago Ramon y Cajal.
Unfortunately, it doesn't help us build machines that think -- which is
what all this is about.
It had better be a theory with a shitload of facets.
> Science is about getting it wrong, repeatedly, until you get
> something that's right enough to make some reasonable predictions.
> But you've got to be willing to make some testable guesses in the
> first place in order to make progress at all.
Thanks, my degree isn't in Comp.Sci. I do keep current, too.
> Hawkins' hypothesis yields some testable predictions.
He seems to be doing science, then! Good man.
> Unf. "the establishment" in neuroscience is so focused on the micro
> that the more important and more interesting macro picture gets
> ignored. Chronically. And the general tendency to treat cognition
Anyone who's been building top-down models pulled right out of /dev/ass has
so far only publicly embarrassed himself, as far as AI is concerned.
> as metaphysics is just another example of the dark side of the way
> science often happens --- science as dogma, driven by career interest
> and academic politics. Better to not make any controversial claims
> at all lest you should be wrong and suffer academically. And
> sentience often seems to be the electric third rail of "real"
> He may not be right, but it's AFAICT a best-of-breed attempt at the
> moment at a comprehensive high-level theory and I for one think we
> should be applauding and encouraging the effort rather than taking a
> lemondrops "can't be perfectly right, therefore it's worthless"
> stance driven, I assume, mostly because of Hawkins' outsider
> relationship to the field in question.
I'm kinda tired of the whole routine of jump-into-the-field mavericks
spouting nonsense, so my tolerance level is a bit lower than yours
at this stage.
> >>coherent, and maps fairly well to observation. And --- while this is
> >>a very apples / oranges sort of thing --- it fits well with my own
> >>recent experiences with neurally-inspired software architectures,
> >>learning systems, and so forth.
> >There are no neurally inspired software architectures.
> This is just a ridiculous statement to make, Gene. OF COURSE there
> are neurally-inspired software architecture; not only do I and many,
> many, many people work with them on a daily basis, the literature is
> replete with them. Whether or not they are accurate models of neural
> architecture is a moot point, and I'm not making any claims about
> that. I said *neurally-inspired* software architectures --- and the
> existence of such is incontrovertible.
> Really, Gene, sometimes you're just a sourpuss. Lighten up.
You're correct. There are some systems very loosely inspired by biology.
They're all a bunch of crap and won't go anywhere, especially given vanilla
computer architectures to run on, because Moore's law alone doesn't buy you
performance here. AI is hardware-bound, especially in the bootstrap stage.
I'm a sourpuss because I'm getting old. For 25-30 years I've been hearing
nothing but promises (which were totally bogus, even by the state of the art at the
time they were made).
I'm interested in seeing some action before I lose interest completely
(which will probably be somewhere past 65), and I'm starting to suspect I'm
not going to see any.
Does this suck, or what?
Eugen* Leitl http://leitl.org
ICBM: 48.07100, 11.36820 http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE