The Meaning of Life (was Ai vs. general intelligence)
jbone at place.org
Mon Oct 20 20:28:53 PDT 2003
Good comments despite the fact that you didn't answer the question
asked. I was hoping at least for a straw-man that we could swing at.
Since you didn't offer one, allow me...
An intelligence worth preserving is one that can make qualitative
judgments about its own survival such that it can rationally (i.e., not
based on pure belief, but rather on some quantitative, logical chain of
reasoning from some generally-acceptable premises) choose some other
individual's (or species, group's, whatever) survival over its own.
More to the point, it's an intelligence that understands that
intelligence is something that (a) needs to be maximized in the
universe over time and space, and (b) needs to be perpetuated as long
as possible.
The most important "war" humanity is waging isn't against Iraq, or
terrorism --- it's against entropy. Every choice we make that reduces,
even fractionally, the odds of survival of *all human-equivalent
intelligence* is a wrong choice.
At the risk of being highly controversial, let me offer the following
example. Assume that general post-human and super-human machine
intelligence exists 20 years from now. Assume that a scenario exists
such that a choice must be made between the total extinction of all
super- and post-human intelligence and the total extinction of all
human (biological)
intelligence. What do you choose? I choose the emergent, evolutionary
intelligence. I am perhaps a traitor to my biological species, but I
am a patriot when it comes to human-derived intelligence in general.
Given that we have no evidence of other intelligent life in the cosmos
--- statistics and the Drake equation (drafted at Green Bank) to the
contrary --- it seems that a universe bereft of intelligence, ticking
down towards the heat death, is an awful waste of space-time. Any and
all choices that lead to that outcome are to be avoided. That's an
imperative.
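For context, the Drake equation estimates the number of detectable
civilizations in the galaxy as a simple product of factors. A minimal
sketch follows; the parameter values are made-up placeholders for
illustration, not published estimates:

```python
def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """N = R* x fp x ne x fl x fi x fc x L.

    r_star:   rate of star formation (stars/year)
    f_p:      fraction of stars with planets
    n_e:      habitable planets per such star
    f_l:      fraction of those that develop life
    f_i:      fraction of those that develop intelligence
    f_c:      fraction of those that become detectable
    lifetime: years a civilization remains detectable
    """
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Hypothetical inputs, chosen only to show the chain of multiplications.
n = drake(r_star=1.0, f_p=0.5, n_e=2.0, f_l=1.0,
          f_i=0.1, f_c=0.1, lifetime=1000.0)
print(n)
```

The point of the equation, as used above, is that even optimistic
factor choices can leave the expected count small, which is why its
output can sit in tension with our lack of observed evidence.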
A real intelligence is one with the interest, foresight, and ethical
framework to weigh such questions and make reasoned --- i.e.,
justifiable given some set of givens, however arguable --- decisions.
As a shocker, let me say this: if I had to choose 10,000 humans on
this planet to survive going forward, I'm not sure I'd choose myself.
But if I had to choose 10,000 *intelligences* chosen from post- and
super-human machine intelligences or human intelligences --- I'm not
sure that *any* humans would make the cut. It's not a given; it's
purely a function of the size and quality of the post- and super-human
pool. Intelligence is more important than genetics or existential
need.
And so my straw-man is: unless an intelligence is equipped to wrestle
with --- and possibly make self-detrimental decisions regarding --- its
own survival vis-à-vis its overall context, then I'm not sure it's worthy of
"person" status. Will to self-preservation isn't enough --- all
animals have that. Right or wrong, the will to sacrifice oneself for
the greater good is today a peculiarly defining human characteristic.
And given the arbitrary nature of defining "personhood" this seems to
me to be an adequate --- even maximally noble --- way of defining it.
You may find this a curious perspective given my other ideological
statements. I think it's not inconsistent, but rather just difficult
to reconcile at first glance.
PS: NB: there are some scary extrapolations of this. But hey, what do
you expect: I'm also an advocate of the idea that anyone and everyone
who can muddle through simple diff-eq is a human, and all others have
no human rights at all. ;-) That's a joke.
More information about the FoRK mailing list