AI vs. general intelligence?

Russell Turpin deafbox at hotmail.com
Mon Oct 20 13:34:18 PDT 2003


Jeff Bone:
>Russell, aren't you sort of arguing against one of the givens, here?

Yes, I diverted the direction of discussion a bit.
The issue I raise isn't relevant to the mock trial.
But it IS relevant to thinking about the impact of
future AI.

>I agree with you: intelligence as measured by
>problem-solving ability (for some specific problem
>or a priori bounded set of or classes of problem)
>does not constitute general intelligence. But the
>above claims seem to go further than that. So the
>question for you is: What then are the criteria by
>which we should consider granting artificial
>constructs "personhood?"

That's a good question, and I'm not sure I have a
good answer. Here are related questions, put into
your phrasing: What is the component of "general
intelligence" that goes beyond any kind of problem
solving ability? And is that component something
that we should label as "intelligence"?

All I have are gedanken experiments that suggest
to me that intelligence, by itself, is not enough.
No matter what kind of pure intelligence capability
I imagine building into an AI, from real-world
spatial manipulation to interactive economic
modeling, I can imagine a machine that does this,
yet that we would all agree, the machine itself
included if legal inference were part of its
capabilities, deserves no legal status beyond
property. I think
it is telling that virtually all of the science
fiction that poses this as a problem, from Blade
Runner (DADOES) to the mock trial you referenced,
imputes to the machine some sort of intent or
self-interest, so that it steps forth and objects.
But what does *that* have to do with intelligence?
It might require some intelligence to understand
its situation and phrase the objection. But it
doesn't automatically arise because of such
intelligence. And why would we build that kind of
characteristic into the AIs that we use for most
practical purposes?

So here's sort of an odd thought: as with so much
else, porn will lead the way. The first AIs that
step forth and ask for legal rights won't be the
ones that are used to plan investments, manage
factories, or investigate disease. Instead, it
will be the AIs that are put into sex toys. The
Rutger Hauer android will be a completely uncaring
automaton that disassembles itself without question,
complaint, or remorse when its duty life is over.
It's the Daryl Hannah android who will raise a
stink. It will be a side effect of something we'll
put into her for fun. That something won't make her
any smarter. But it will make her more person-like.

Again, let me be clear that I am NOT proposing
anything mystical here. My inability to identify
and label what we'll think necessary, my fumbling
around with inexact phrases such as "intent" and
"self-interest," is simply because we haven't
developed a good understanding of this. Because
of that, we tend to lump it under the general
rubric of intelligence. But I think there's an
aspect to all the proposed scenarios that doesn't
have much to do with intelligence.



