[FoRK] Brain mapping and the connectome

Stephen Williams sdw at lig.net
Tue Nov 10 10:19:08 PST 2009


Jeff Bone wrote:
>
> Ken writes:
>
> ...
>> if it's not essentially "us" I'm not interested.
>
> Me either:  problem is, I don't think it's *possible* to have a 
> non-trivial, objective, useful classification boundary between "us" 
> and "them."  And, in fact, I think it's pretty close to provable (cf. 
> Ship of Theseus-type arguments.)

Agreed.  We're already augmented, and we're going to become far more 
augmented.  There are many examples of our minds being extended through 
tools and out into the world in other ways, including new senses, such 
as an added compass sense. [1][2][3]  Whether those extensions are 
integrated into, or even replace, parts of our bodies is just a detail.

> ...
>
>> And I haven't seen Consciousness
>
> Capitalized, no less!

Consciousness is not that big a deal.  Really.  It is interesting and 
important to understand, but it comes down to self-aware attention 
tracking.  If what is really meant is having enough cognition to be 
self-aware in a meaningful and useful sense, then that is closer to the 
interesting problems of AGI.  Enabling general cognition rich enough to 
support consciousness is hard; consciousness itself isn't so impressive.
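
As a toy illustration of what I mean by "self-aware attention tracking" 
(my sketch, not a claim about any particular theory or implementation; 
the class and method names are made up), here is a trivial Python agent 
that both attends to stimuli and keeps an inspectable record of its own 
attention states:

from collections import deque

class AttentionTracker:
    def __init__(self, history=100):
        self.focus = None                    # what is being attended to now
        self.trace = deque(maxlen=history)   # the agent's record of its own attention

    def attend(self, stimulus, salience):
        # Shift attention if the new stimulus out-competes the current focus.
        if self.focus is None or salience >= 0.9 * self.focus[1]:
            self.focus = (stimulus, salience)
        self.trace.append(self.focus)

    def introspect(self):
        # Report on the agent's own recent attention -- the "self-aware" part.
        return [item for item, _ in self.trace]

agent = AttentionTracker()
for stimulus, salience in [("noise", 0.2), ("face", 0.9), ("noise", 0.1)]:
    agent.attend(stimulus, salience)
print("Recently attended to:", agent.introspect())

Nothing in that is mysterious; the hard part is the general cognition 
underneath it.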

>
> I thought you said you weren't a theist (or were anti-theist, or what 
> have you.)  Deus ex machina, much?

atheist->Atheist->anti-theist?  Is an anti-theist a proselytizing 
Atheist?  Seems like a matter of indifference, degree, and politeness.

>
> ...
>> Anything I read about AI
> Apparently either isn't much or isn't the right stuff. ;-)  We 
> *clearly* have learning;  what we don't yet have is generalized 
> agency, situation, context, etc.

We are far, far closer to usable theories than we were 10 years ago.  We 
have a lot of things working well, providing a lot of useful sensory 
handling: speech recognition, and vision of all kinds, including almost 
instant, accurate 3D models of everything visible in video.  Much of it 
is available in open source software...
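
As a concrete (and hedged) example of the kind of open source vision 
building block I mean, here is a short Python sketch using OpenCV to 
track features between two video frames -- the 2D correspondences that 
structure-from-motion pipelines triangulate into 3D models.  The frame 
file names are placeholders:

import cv2

# Two consecutive grayscale video frames (paths are placeholders).
prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Pick strong corners in the first frame...
pts_prev = cv2.goodFeaturesToTrack(prev, maxCorners=500,
                                   qualityLevel=0.01, minDistance=7)

# ...and track them into the next frame with pyramidal Lucas-Kanade flow.
pts_curr, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, pts_prev, None)

# Surviving tracks are 2D correspondences; a structure-from-motion
# pipeline triangulates thousands of these, across many frames, into a
# 3D model of the visible scene.
good = status.ravel() == 1
print("tracked", int(good.sum()), "of", len(pts_prev), "features between frames")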

The useful thing about mapping how the brain works in more detail is 
learning new mechanisms, structures, and methods, and generating 
creative ideas that help us solve problems better.  We don't need to be 
able to map a running system exactly to learn a lot of useful things 
from it.

If we were to find some inscrutable alien computer technology, even 
knowing what molecules, energy levels, and frequencies (if any) it used 
would give us extremely useful clues to areas we hadn't thought were 
useful or competitive.

Once you understand the principles involved, insisting that you have to 
know the quantum state of every subatomic particle involved is obvious 
overkill.  Knowing and trying to duplicate the exact state of a running 
system might require that, but that is not really a goal unless you are 
building a Trek transporter.

We keep getting stuck in local minima, and we need a constant supply of 
ideas and evidence to climb out (see the sketch below).  Nature / 
evolution has done pretty well, but it has plenty of local design minima 
too.  The "mulefa" in the His Dark Materials novel The Amber Spyglass 
were a brilliant solution to "nature didn't invent wheels". [4]
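
As a toy sketch of the local-minima point (my illustration, using a 
made-up one-dimensional cost function), plain hill climbing parks in the 
nearest dip, while injecting fresh random starting points -- the 
analogue of new ideas and evidence -- lets the search climb out and find 
something better:

import math
import random

def cost(x):
    # A bumpy 1-D landscape with many local minima.
    return math.sin(5 * x) + 0.1 * (x - 2.0) ** 2

def hill_climb(x, step=0.01, iters=2000):
    # Greedy descent: move only if a small step lowers the cost.
    for _ in range(iters):
        for candidate in (x - step, x + step):
            if cost(candidate) < cost(x):
                x = candidate
    return x

# One start: likely stuck in whatever dip it happened to begin near.
single = hill_climb(random.uniform(-5, 5))

# Many restarts: keep the best basin found across fresh starting points.
best = min((hill_climb(random.uniform(-5, 5)) for _ in range(25)), key=cost)

print("single start:  x=%.3f, cost=%.3f" % (single, cost(single)))
print("with restarts: x=%.3f, cost=%.3f" % (best, cost(best)))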

>
>> It still sounds to me that we're quite a ways away
>
> It's worth noting that whatever it sounds like to you, the various 
> subject matter experts quoted in various of the links in this rambling 
> pseudo-thread seem to have a different opinion.  (Others disagree.  
> YMMV.  My money's on the connectionists --- *very* literally.)

I'm glad you find trading interesting.  I can't get into it.  I have a 
usefulness filter that just won't quit.  Now, if you are discovering 
useful techniques and publishing them, great.  It seems like anything 
useful gets black-holed in that realm, though.

>
> How far we are depends on how you measure distance.  We clearly have 
> *nothing* even remotely approximating the order of complexity of a 
> human neocortex.  However, that doesn't mean that such is untenable in 
> relatively short human-subjective timeframes (i.e., if accelerating 
> change laws hold, for example...  Consider again the lifecycle of the 
> Human Genome Project.  We're recycling arguments here, except we 
> aren't arguing --- you're just reasserting.)

I expect non-linearity.  We've seen it before.  We couldn't fly for a 
long time, then we could, and a short time later we were walking on the 
moon.  We barely knew what to do with electricity and barely had radio, 
then suddenly we had lights, TV, cell phones, etc.  We have evolved our 
evolving so that new discoveries don't take 150, 60, or 20 years; they 
often get to market in 18 months or less, sometimes creating whole new 
markets in a few years.  AI will be the same.

>
> jb
>
[1] http://www.signtific.org/en/discover/signals/4122
[2] http://www.signtific.org/en/signals/human-compass-technology
[3] http://www.signtific.org/en/signals/video-display-tongue
[4] http://bit.ly/1Sjo8s
 http://books.google.com/books?id=9YZczctTugEC&pg=PA133&lpg=PA133&dq=his+dark+materials+wheel&source=bl&ots=7Iw6BTBpRj&sig=Dt_QNEIfSse14xpOwry8TOXv0YQ&hl=en&ei=Kqr5Su7jJJCMswOczOTKCQ&sa=X&oi=book_result&ct=result&resnum=2&ved=0CAsQ6AEwAQ#v=onepage&q=&f=false

sdw


