Re: [Fwd: Eliezer speaks (forwardable)] - was loserhood and analysis

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Aug 18 2000 - 09:40:13 PDT


Eugene Leitl wrote:
>
> Your method requires explicit coding of an AI bootstrap core by
> a human team.

Which we both know.

> My method involves [evolutionary algorithms] [dedicated hardware]

Which we both know.

> Your method [...] aims for a high-complexity system while humans are
> demonstrably unable to create working systems beyond a certain
> complexity threshold.
>
> My method is low-complexity [...]

Translation: "I don't think you can do it without evolution. I think
my method is better."

Well, in that case, Jeff Bone doesn't need to worry about me, right? He
needs to worry about you.

Guess what: *I* don't think that *your* method is going to work.

So the situation is symmetrical, except for one thing: You claim that
your method will result in a competing ecology of superintelligences
with survival instincts, and I claim that my method would result in a
singleton altruistic superintelligence with no particular emotional
attachment to itself.
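
To make that contrast concrete - and this is a toy of my own, not a
piece of CaTAI or of Gene's design - here is why the evolutionary route
hands you survival instincts whether you want them or not: under
replication and selection, the lineages that don't defend themselves
simply aren't around at the end.

    import random

    def selection_demo(generations=200, pop_size=100):
        # Each "agent" is just one number: the fraction of its effort it
        # spends on self-preservation.  Start with a uniform spread.
        pop = [random.random() for _ in range(pop_size)]
        for _ in range(generations):
            # Survival probability rises with self-preservation effort.
            survivors = [a for a in pop if random.random() < 0.5 + 0.5 * a]
            if not survivors:          # vanishingly unlikely, but be safe
                survivors = pop
            # Survivors replicate, with a little mutation, back to full size.
            pop = [min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, 0.02)))
                   for _ in range(pop_size)]
        return sum(pop) / len(pop)

    print(selection_demo())   # creeps toward 1.0: selection buys self-preservation

A hand-coded goal system sits under no such selection pressure; it keeps
whatever attachment to itself you gave it, which in the Sysop case is
none.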

> I realize there are enough brilliantly stupid people out there who
> will want to build the Golem

So now who's brilliantly stupid?

> short. Let's upgrade ourselves, so that we have a smidgen of a chance
> in the RatRace++. If we don't, welcome to yet another great extinction
> event, by cannibal mind children.

Gene, you WOULD get eaten by any upgraded human that wasn't altruistic
enough to build a Sysop. If you can live in a world of upgraded humans
- if upgraded humans can be Good Guys - then AIs can be designed to be
Good Guys. "The space of all possible minds", remember?

> You presume you can code such a goal, and that the system can indeed
> use such a goal constructively. You're remarkably hazy on how the seed
> AI will recognize which modifications will bring it nearer to a goal,
> and which will carry it farther away.

Did you read CaTAI 2.0 yet?
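
In caricature - and it is only a caricature, not the CaTAI architecture
- "recognizing which modifications bring it nearer to a goal" is the
loop any optimizer runs: propose a change, measure it against the goal
metric, keep it if it scores higher.

    import random

    def improve(system, propose_modification, goal_score, rounds=1000):
        # Deliberately trivial sketch: hill-climbing over self-modifications.
        # 'goal_score' is whatever metric the goal system supplies; the point
        # is only that "nearer to the goal" cashes out as a comparison.
        best = goal_score(system)
        for _ in range(rounds):
            candidate = propose_modification(system)
            score = goal_score(candidate)
            if score > best:                 # nearer to the goal: adopt it
                system, best = candidate, score
        return system

    # Stand-in "system" (a single number) and stand-in goal metric;
    # a real seed AI's goal_score is, of course, the hard part.
    print(improve(0.0,
                  lambda s: s + random.uniform(-0.1, 0.1),
                  lambda s: -abs(s - 1.0)))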

> I do not buy the "nothing implausible" without backing up your
> assertions with arguments. So far you're describing an arbitrarily
> implausible, arbitrarily unstable construct. You do not show how you
> get there (a plausible traversable development trajectory is missing),
> and you do not show how you intend to stay there, once/if you got
> there.

"Singularity Analysis", section on trajectories of self-enhancement as a
function of hardware, efficiency, and intelligence.

> We're 10-15 years away from practical molecular memory, and soon after
> computronium. I'd call that nanotechnology, albeit not a machine-phase
> system. Once we have that kind of hardware, finding a good enough CA
> rule and the type of data-processing state pattern can be brute-forced
> essentially overnight. De Garis is close enough on that one.

See "The Plan to Singularity", "If nanotech comes first", "Brute-forcing
a seed AI".

> > become the sole guardian of the Solar System, maintaining distinct and
>
> Don't put your eggs in one basket, however large the egg, and however
> large the basket. Eventually, it gets smashed, and creates a huge mess.

Put your eggs in enough baskets and I GUARANTEE that one of them will
break.
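
That isn't rhetoric, it's arithmetic: give each basket any independent
chance of breaking and multiply it out.

    # Illustrative numbers only: n independent "baskets", each with a small
    # chance p of breaking per unit time.  The chance that at least one
    # breaks is 1 - (1 - p)**n, which goes to certainty as n grows.
    p, n = 0.01, 1000
    print(1 - (1 - p) ** n)   # ~0.99996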

> The fitness delta of the first AI will not be dramatically
> higher than that of all the rest of us/other AIs

> For the record, I consider the ALife AI development route currently
> extremely dangerous, since it is intrinsically noncontainable once it
> enters the explosive autofeedback loop

??

> Nor do you describe how you intend to make the goals inviolable. There
> are a billion ways around the neo-Asimovian laws of robotics. If I can
> think 10^6 times faster than you, and I'm even a little tiny bit
> smarter than you, I can hack my way through any safeguards you might
> want to create.

See "Coding a Transhuman AI 1.0", "Precautions", "Why Asimov Laws are a
bad idea".

According to you, morality is arbitrary and any set of motivations is as
good as any other - so why would the AI tamper with its initial
suggestions? What need for elaborate Asimov Laws? Remember, we are
talking about a NON-EVOLVED system here.

-- 
        sentience@pobox.com    Eliezer S. Yudkowsky
               http://singinst.org/home.html

