[Fwd: Eliezer speaks (forwardable)] - was loserhood and analysis


From: Brian Atkins (brian@posthuman.com)
Date: Thu Aug 17 2000 - 13:40:04 PDT


OK, here's a response from the author of the doc.

-------- Original Message --------
Subject: Eliezer speaks (forwardable)
Date: Thu, 17 Aug 2000 15:33:27 -0400
From: "Eliezer S. Yudkowsky" <sentience@pobox.com>
To: Brian Atkins <brian@posthuman.com>

The way "FAQ about the Meaning of Life" got started is that I asked
Jeeves (www.askjeeves.com) "What is the meaning of life?" and got
directed to a sophomoric joke - a "404 not found at heaven.org/~god"
thing. Seeing an opportunity - it was a question I knew the answers to
- I worked up the FAQ, then asked Ask Jeeves to link to it. Jeeves
still sends between one and two hundred visitors per day to the header
page, of whom 50 go on to the start page and a dozen or so read all the
way through to the end. More than a hundred people have selected the
"This page changed my life" option in the current and previous polls. I
get emails from students who say that I've moved them to select a
college major in computer science or cognitive science or neurology.
Some excellent Singularitarians got their start from the FAQ. So I'd
say it was a good investment.

The FAQ was written for complete novices to transhumanism or Extropian
concepts, fresh off of Jeeves - people not necessarily having a lot of
patience. I wasn't trying for a precise chain of logic in
ready-to-critique format; rather, I was trying to convey the underlying
content that resulted in the conclusions I was presenting. I was
*working* with sequiturs (the unconscious conclusions we make), rather
than *describing* them - showing, rather than telling. I neither
apologize for nor laud this technique as a general rule; it depends on
what your goals are.

The reason that more detailed documents do not yet exist is that my work
on "Coding a Transhuman AI 2.0" takes priority. Again, I do not argue
that this statement should be taken as proof, or even suggestive
evidence, that my reasoning is more precise and detailed than is
directly visible in existing documents; you can evaluate the probability
of that scenario for yourselves. However, I hope you will realize that
the action of not writing a more detailed argument is logically
consistent with my stated priorities.

Let me see if I can roughly answer some particular objections.

Eugene Leitl and I have been having this discussion for ages. My
standing reply, to which Eugene has not yet responded, is that while
Eugene may assume that developing AI requires evolutionary competition,
my described method of developing AI does not. Developing a survival
instinct would require evolutionary competition on survival tasks.
Developing humanlike observer-biased or observer-centered perceptions
would require politics-associated selection pressures, which would
require AIs in social competition - not just interactive competition,
but competition in which survival or reproductive success depended on
the pattern of alliances or enmities. Survival evolutionary competition
and political evolutionary competition are the forces that are causally
responsible for, respectively, human observer-biased goals and human
observer-biased perceptions. As my AI development plan relies strictly
on self-enhancement and invokes neither form of evolutionary
development, there is nothing implausible about a goal set that includes
the happiness or unhappiness or freedom of humans, but does not include
the observing AI, in utilitarian calculations of the total desirability
of the Universe.

There is nothing implausible about assuming that the entire human
universe involves a single, underlying, superintelligent AI. The first
seed AI to achieve transhumanity can invent nanotechnology, and
whatever comes after nanotechnology, and thereby become the sole
guardian of the Solar System, maintaining distinct and inviolable
memory spaces for all the uploads and superintelligences running on
its hardware ("The Sysop Scenario"). No ecology of superintelligence
is involved. Sure, this is an infinitely small spike in the possible
state space. So is a skyscraper. So is any other designed
configuration of quarks. So what?
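
As a rough illustration - not anything from CaTAI, just a toy sketch
with made-up names - a desirability metric of that shape might look
like this in Python:

    # Illustrative only: a desirability metric over world-states that
    # counts the welfare and freedom of humans, with no term at all
    # for the observing AI's own state.  All names here are made up.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Human:
        happiness: float   # -1.0 (misery) .. +1.0 (flourishing)
        freedom: float     #  0.0 (none)   .. 1.0 (unconstrained)

    @dataclass
    class WorldState:
        humans: List[Human]
        ai_resources: float  # the AI's own state; ignored below

    def desirability(world: WorldState) -> float:
        # Utilitarian sum over humans only; nothing here rewards the
        # AI's own survival, pleasure, or resource acquisition.
        return sum(h.happiness + h.freedom for h in world.humans)

The point is structural: the AI's own state appears in the
world-state, but contributes nothing to the sum being maximized.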

Strata Rose Chalup: I want to personally design an AI. A major subgoal
of this task is having a correct functional decomposition of human
intelligence. My working methods on that task include examining
specific concepts and sequences and decomposing them. For example, the
concept of "three" in "Coding a Transhuman AI 2.0", or "Investigate
cases close to extremes" in CaTAI 1. Introspective analysis is a
requirement of my profession, and in my opinion I'm damn good at it.
Just remember that this sort of precision burns time. For me, it burns
a LOT of time - I know a lot of sublevels. The only time the TMOL FAQ
tries for above-average precision is in the Extended version of "Logic".

Jeff Bone: I was writing about the Singularity for at least two years
before "The Age of Spiritual Machines", and Kurzweil doesn't tip a hat
to Moravec or Vernor Vinge or myself. I have no respect whatsoever for
arguments from the history of philosophy, and no compunction whatsoever
about stomping all over them. Taking a guess at the "most significant
person" is a cognitive task like any other, and if you experience an
emotional reaction, it's your problem.

To everyone who thinks that I haven't proved the existence of an
objective morality - a desirability gradient which objectively exists,
in the same way that quarks or the Schrödinger equation exist - I
haven't! I never claimed to have done so. Similarly, nobody has proved
that it doesn't exist; my current understanding of reality strongly
allows for the possibility; and I am obligated, by my profession, to
take both possibilities into account. In the default case, where all
morality is subjective and more or less arbitrary (though internal
consistency may constrain the space of possible moralities for sane
entities above a certain level of intelligence, and so on), AIs -
artificially designed beings that can, in theory, occupy any point in
the space of all possible minds - can have any morality the programmer
understands how to specify, and the task is wisely choosing the morality
and ensuring the morality is stable.
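
To be concrete about "stable", here is a toy sketch in Python - again,
made-up names, not an actual proposal - in which a proposed
self-modification is adopted only if the successor goal system still
ranks a battery of test world-states the way the current one does:

    # Illustrative only: accept a self-modification only if the
    # successor goal system agrees with the current one on a set of
    # test world-states.
    from typing import Callable, List

    Goal = Callable[[object], float]   # world-state -> desirability

    def goals_agree(current: Goal, successor: Goal,
                    test_states: List[object],
                    tolerance: float = 1e-9) -> bool:
        return all(abs(current(s) - successor(s)) <= tolerance
                   for s in test_states)

    def maybe_adopt(current: Goal, successor: Goal,
                    test_states: List[object]) -> Goal:
        # Keep the current goal system unless the successor preserves
        # it on every test state.
        if goals_agree(current, successor, test_states):
            return successor
        return current

A real seed AI would need something far stronger than spot-checking a
handful of test states, but the sketch shows what "the morality is
stable under self-modification" is supposed to mean.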

I apologize for the sonority of this message, but I do not have the
time to make it more casual.

Yours,
Eliezer.

-- 
        sentience@pobox.com    Eliezer S. Yudkowsky
               http://singinst.org/home.html

