Re: Right on schedule?

From: eugene.leitl (eugene.leitl@lrz.uni-muenchen.de)
Date: Sun Jun 25 2000 - 13:33:49 PDT


On Sun, 25 Jun 2000, Brian Atkins wrote:

> I vote that progress keeps accelerating, and we get to human-level-brain
> (perhaps using Beowulfs or larger clusters) by 2015 or before. If the

SPEC2k says little about connectionist/neural processing. All the
metrics I've seen ignore the realistic constraints of advanced neural
codes: no code or data locality, plus worst-case behaviour for the branch
prediction unit and the pipeline. I think anyone who wants to do it with
anything other than spiking high-connectivity automata is out of his
mind. We know this stuff works, so let's not waste time playing around
with other possibilities. The search space is just too damn large for
that.
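
To make the access pattern concrete, here's a toy Python sketch (mine,
purely illustrative, all sizes made up) of a spiking integrate-and-fire
automaton on a random high-connectivity graph. Every spike scatters to a
hundred random targets, so there is essentially no locality for a cache
or prefetcher to exploit:

import numpy as np

rng = np.random.default_rng(0)
n, fan_out = 10_000, 100
targets = rng.integers(0, n, size=(n, fan_out))  # random connectivity
weights = rng.normal(0.0, 0.1, size=(n, fan_out))
v = np.zeros(n)                                  # membrane potentials
threshold, leak = 1.0, 0.9

def step(v):
    spiking = np.flatnonzero(v >= threshold)     # who fires this tick
    v *= leak                                    # passive decay
    v[spiking] = 0.0                             # reset fired neurons
    for i in spiking:                            # scatter: random writes
        np.add.at(v, targets[i], weights[i])
    return v

v[rng.integers(0, n, size=50)] = 1.5             # seed some activity
for _ in range(100):
    v = step(v)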

The spread between worst and best case is dramatic in modern
architectures. Additionally, nonlocal memory bandwidth certainly does not
follow a straight line on a log plot.
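
You can see that spread on any desktop box with a two-minute toy test
(Python again; absolute numbers are machine-dependent, the ratio is the
point):

import time
import numpy as np

n = 10_000_000
data = np.arange(n, dtype=np.int64)
seq = np.arange(n)                             # best case: sequential gather
rnd = np.random.default_rng(1).permutation(n)  # worst case: random gather

def timed_gather(idx):
    t0 = time.perf_counter()
    data[idx].sum()                            # gather in the given order
    return time.perf_counter() - t0

t_seq, t_rnd = timed_gather(seq), timed_gather(rnd)
print(f"sequential {t_seq:.3f}s, random {t_rnd:.3f}s, "
      f"spread {t_rnd / t_seq:.1f}x")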

Massively parallel systems networked together ultra-fast, with embedded
RAM organized in ultra-long words (including instruction streams and
SIMD-within-a-register parallelism), no external memory, plus some
reconfigurable logic area, do indeed promise very significant performance
boosts on the relevant problem space, but I fail to see the trend moving
towards these architectures yet. The PlayStation 2 is the only
end-user-buyable instance of such an architecture, featuring a 1
kBit-wide, 4 MByte embedded-DRAM endstage rendering engine. Don't expect
to be able to run your network codez on it, though. I'd rather take a
SHARC box with 8 kCPUs or so. Analog FPGAs and other reconfigurable
architectures would also be interesting (cellular automata rules
hardcoded in hardware especially, since they are suitable for wafer-scale
integration), but I know of no single instance of them working, not even
in academia.
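
For concreteness, here is what a hardcoded CA rule computes, in toy
Python form (a 1D elementary automaton; in hardware the 8-entry lookup
table below becomes a tiny fixed logic block replicated per cell, which
is exactly why it suits wafer-scale integration):

import numpy as np

def ca_step(cells, rule=110):
    table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
    left, right = np.roll(cells, 1), np.roll(cells, -1)
    neighbourhood = (left << 2) | (cells << 1) | right  # 3-bit index, 0..7
    return table[neighbourhood]

cells = np.zeros(64, dtype=np.uint8)
cells[32] = 1                                           # single seed cell
for _ in range(16):
    print("".join("#" if c else "." for c in cells))
    cells = ca_step(cells)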

Remember, Moore's law is just about how many transistors you can put on
a single die economically (i.e. with nonzero yield). The architectural
issues, both software and hardware, are distinctly the bottleneck here.

> software is ready we should have real live strong AI by then. If not then
> we should be able to brute-force it by 2020 at the latest.

Should we have computronium (bulk molecular circuitry) by 2020? Then,
maybe. But I'm not holding my breath.

As to brute force, you have to brute-force at the meta level, using
ALife techniques to breed your instruction/reconfigurable-logic patterns.
This is rather nontrivial to code. As of now, not many people are even
aware that such a problem exists, and even fewer are working on it. John
Koza doesn't seem to think it's important, and Holland is out of the
loop. I know of no other people apart from de Garis, who walks the narrow
edge between genius and crackpot. (Ok, his modules are an artefact of his
dedicated hardware box, but imo you should only switch to dedicated
hardware when the mainstream fails to provide any enhancements, which is
far from true right now. Beowulfs and DSP clusters are the most dramatic
demonstrations of how rapidly economies of scale push things onward.)
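
A minimal sketch of what breeding at the meta level means in practice
(toy Python; the bit strings stand in for instruction streams or
reconfigurable-logic patterns, and the fitness function is a made-up
placeholder, since the whole difficulty is that the real one means
actually running each pattern):

import random

GENOME_BITS, POP, GENERATIONS = 64, 50, 200

def fitness(genome):
    return bin(genome).count("1")      # toy stand-in for "run and score"

def mutate(genome):
    return genome ^ (1 << random.randrange(GENOME_BITS))  # flip one bit

def crossover(a, b):
    cut = random.randrange(GENOME_BITS)
    mask = (1 << cut) - 1
    return (a & mask) | (b & ~mask)    # splice low bits of a, high of b

pop = [random.getrandbits(GENOME_BITS) for _ in range(POP)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 5]            # keep the top fifth
    pop = elite + [mutate(crossover(random.choice(elite),
                                    random.choice(elite)))
                   for _ in range(POP - len(elite))]

print(f"best fitness: {max(map(fitness, pop))} / {GENOME_BITS}")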

After an initial splash of activity and interdisciplinarity, ALife seems
to be settling into the usual cyclic pattern of resurging vogues. Right
now we're in a lull. I hope this is not the beginning of the canonical
debacle (see the chronic sorry state of AI, ack, ptui).

As usual, blame the dotcoms. They syphon bright people out of all
fields, including CompSci. Nuke them, nuke them hard. (Sorry for any
harsh feelings.)

