[FoRK] Various followups (Eugen, Ken, Stephen, etc.)

Jeff Bone jbone at place.org
Sat Nov 14 05:58:58 PST 2009


Ken asks:

> Falling behind what, exactly? Is this where you think we need to  
> have the accelerator jammed to the firewall...

Not really the same phenomenon I was previously referring to, though  
related.  In this case, it's much more mundane and direct: at the  
present rate of progress in software development tools, we are falling  
behind our own ability to innovate with the other existing technologies  
(networking, rich and large data, non-von-Neumann hardware  
architectures, etc.).  I.e., we're building a world that's  
increasingly, not decreasingly, hard to program, even as it becomes  
ever more connected and *computable.*  That's unfortunate.  It's also a  
phenomenon that's been noticed and remarked on frequently by some folks  
working in the field (Backus, etc.) for three decades.  And making it  
ever harder, for example, for human children to sit down at a keyboard  
and make something interesting and new isn't going to help things,  
never mind the tradesman.

Eugen says (in two separate messages):

> There's very good reason to assume that a naturally intelligent  
> system has no code anybody, itself including could understand...
> Darwinian processes certainly scale. You don't have to understand
> anything, yet you're still able to optimize. And of course the end
> result is not understandable, at least using our current methods.

Of course.  In fact, this is a phenomenon I am *intimately* acquainted  
with, even at our current prokaryotic stage in the development of such  
things.  I think I've blathered about this at length here before...   
and I think it's true of any system of any complexity that  
approximates any reasonable definition of "intelligent."  I live with  
this fact and its consequences in my day-to-day activities.  Trying to  
explain this to folks not working directly in the field is tough,  
though.  Kevin Kelly / Technium . "Science Without Theory / Google Way  
of Science" etc. --- some of the more accessible introductions for the  
layman.

> The whole idea of massaging text in a text editor does only make  
> sense if one has been doing it for most of your conscious life.


To be clear, I don't think a new shell language, better data  
languages, etc. are going to immanentize the eschaton or  
anything. ;-)  On the other hand, until we're fully bootstrapped,  
having better tools for the mucky lower-complexity bits of human-
computer (and human-computer-human) interaction doesn't just seem  
like a good idea, it seems downright necessary.  It's a real buzz (and  
momentum) kill to move from e.g. working with / on a machine-learned  
kernel of something, using some complex, shared, generally  
mathematical language and mode of interaction, to writing a bunch of  
crap to wrangle some data necessary to feed the damn thing.  Welcome  
to my world... ;-)

The less time folks working in the area spend struggling with that  
sort of thing --- and the higher-quality and easier our  
"conversations" with the machines (and, via machine, with each other)  
can be in the meantime --- the more time there is for the high-value  
bits.  This extends throughout everyday life: the less time spent  
mucking around with the mundane (even to the level of not having to  
waste 30 seconds heading to the thermostat to adjust the interior  
temperature of your house), the more time there is to spend on  
high-value activities.
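That "wrangle some data to feed the damn thing" friction is easy to make concrete.  A minimal, hypothetical Python sketch (the dataset, field names, and function are all invented for illustration) of the sort of glue code that eats time before the high-value work even starts:

```python
import csv
import io

# Invented example of mundane data-wrangling glue: a messy,
# inconsistently formatted input that must be normalized before the
# interesting (e.g. machine-learned) part can even run.
RAW = """\
name, score,when
 Alice ,0.91,2009-11-13
BOB,  .87 ,2009-11-14
carol,0.955,2009-11-14
"""

def wrangle(raw_text):
    """Strip stray whitespace, normalize name casing, coerce scores."""
    reader = csv.DictReader(io.StringIO(raw_text), skipinitialspace=True)
    return [
        {
            "name": row["name"].strip().title(),
            "score": float(row["score"].strip()),
            "when": row["when"].strip(),
        }
        for row in reader
    ]

rows = wrangle(RAW)
```

The point isn't these few lines themselves; it's that every new data source demands yet another ad hoc variant of them, which is exactly the overhead better data languages could amortize away.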
But even beyond that, even *primarily* beyond that, there's a lot of  
technological activity going on that has nothing obvious to do with  
creating intelligence, yet does impact our lives in significant ways  
and will continue to through the duration of the bootstrap or until  
the crunch, if any / either.  You up the odds of that impact being  
positive with better tools, better integration, better automation, and  
better communication.  And the ambient level of such technology *does*  
impact overall development and progress.  (Drexler realized this;  
that's part of what led him to talk about the need for e.g. shared  
hypertext way back in the early-to-mid '80s...)

> There's a continuum from an infant you to the adult you. Are you  
> that infant? Of course not. Does the question even make sense?

Ship of Theseus argument.  I do agree with you, actually;  I'm  
inclined to the "identity is largely meaningless" argument, so I  
overstated the case in an attempt to simplify.  Nonetheless, it is my  
hope that whatever comes later will bear the same resemblance ---  
perhaps to a lesser degree, but still there --- that e.g. we bear to  
our infant-selves.  If not, no big deal; however, I think *that* is  
what defines a "successful" or "survivable" Singularity.  Will "we" be  
alien if we make it through?  Absolutely.  But I'm hopeful that after  
the fact there's a continuum of intelligences rather than either a  
"single" one *or* a bimodal population of them.

> The faster [a Singularity] is, the higher probability of nobody  
> squishy making it.

I'm actually not sure I buy that there's any universal relationship  
between "speed" and "survivability."  The metaphor is weak in this  
regard, IMHO, even though I've used it (and almost precisely the same  
way) before.

Going to short-change Stephen, here:
> ...dot (etc.)

All excellent comments and mostly-agreed.  We've managed to get lost  
in the discussion of the implications, motivations, etc. a bit.  Would  
love to carve off and have a more substantive and detailed discussion  
about this once I've got something better to place as a stake in the  
ground, but as mentioned that's been stalled out for most of the year  
due to various familial events.  Hope to get back on that bandwagon  
again soon...  (and btw, part of the motivation there is that it seems  
to me that most open technology efforts work better if you get  
"working and useful" before "public discussion and contribution" ---  
Linux being the major obvious exception, IIRC.)

jb


