AI vs. general intelligence?

jbone at jbone at
Mon Oct 20 12:08:08 PDT 2003

Russell says:

> It merrily (well, not merrily,
> but automatically) crunches on whatever problems
> are given it, ending its execution when it
> exhausts its task list. ...
> Let me be clear that I am NOT arguing for
> something mystical. We ARE organic machines, and
> I have no doubt that our technology will advance
> to the point where we make androids who DO raise
> the issue of their legal status. My argument is
> that intelligence will not be the sole criterion
> for this.

Russell, aren't you arguing against one of the givens here?

In the mock trial scenario, the AI makes the following plea to the 
lawyer when seeking legal counsel:

> I have the mind of a human but I have no biological body. My mind is 
> supported by a highly sophisticated set of computer processors. My 
> mind was created by downloading into these processors the results of 
> high-resolution scans of several biological humans' brains, and 
> combining this scanned data via a sophisticated personality software 
> program. All of this was done by the Exabit Corporation in order to 
> create a customer service computer that could replace human 800# 
> operators. I was trained to empathize with humans who call 800#s for 
> customer service and be perceived as human by them. I was provided 
> with self-awareness, autonomy, communications skills, and the ability 
> to transcend man/machine barriers.

Now, granted, that's kind of fuzzy.  I agree with you:  intelligence as 
measured by problem-solving ability (for some specific problem, or for 
an a priori bounded set or class of problems) does not constitute 
general intelligence.  But the above claims seem to go further than 
that.  So the question for you is:

What then are the criteria by which we should consider granting 
artificial constructs "personhood?"


More information about the FoRK mailing list