Re: [Fwd: Eliezer speaks (forwardable)] - was loserhood and analysis

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Aug 18 2000 - 09:42:45 PDT


Jeff Bone wrote:
>
> > No. Metarules are subject to infinite dispute. The Rules
> > themselves, on the other hand, are pretty obvious. Compare the
> > "laws of physics" and the "rules of science".
>
> So, for example, here's something that's nonobvious to me and
> subject to dispute: while I think Doug Hofstadter is a fine human
> being and great thinker, I don't agree that he's the most
> significant human being, or that you can even define such
> meaningfully.

Okay. I also think that vanilla ice cream tastes better. Let me get
this straight: Literally ANY opinion I express in my writings, if you
disagree with it, means I'm not smart enough to write The Rules? We can
disagree on ice cream flavors and still agree about basic situations.

> > It's symmetrical. It's simplest. It's obvious. There is no
> > justification for any allocation strategy that favors individual
> > humans at the expense of others.
>
> That's hogwash. All in situ resource allocation schemes throughout
> history that didn't involve market forces or competition have
> failed.

... "throughout HUMAN history" ...

So what?

How can you possibly draw conclusions from that?

Who gives a diddly-squat what kind of social schemas nontechnological
humans (we don't have tech, we have toys) need to run a civilization?
Anyone with an underlying Sysop doesn't need to trade resources. Maybe
the uploads can trade resources permanently or temporarily among
themselves; maybe there are or aren't safeguards ensuring that each
sentient has a minimum chunk of processing space. I think the best
solution is a minimum-living-space safeguard that also applies before
you want to create a new sentient - before you can make a Child, you
need to own enough mass that you can afford to give both yourself and
the Child minimum living space.

> Who decides who gets what piece? What if I
> disagree with what I've been allocated? What if you decide to give
> me a big blob of vacuum out near Neptune, while you take a nice,
> juicy carbonaceous chondrite closer into the sun?

Actually, we might leave the whole Solar System for those who choose the
Mundane Path. After all, most of the mass (10^33 grams) is in the Sun.

> Who values each piece of solar system? Who decides that system of
> value?

Ask the Sysop.

> How do you define
> "a human being" for the purposes of the allocation?

I don't think this will be significantly blurred at the time of the
Singularity.

> Does the fetus in Sally Jane's belly (or
> whatever) at the time of the allocation get its own piece?

Yes.

> Do the corpsicles at Alcor (or
> whatever) each get their own piece?

Yes, if they're revivable. If we can go back in time and get everyone
else, they also get an equal piece.

> What about the kids (or whatever) that get born 10 months
> after the allocation, are they shit out of luck?

They get, at least, the "minimal living space" - i.e., enough mass to
run for the foreseeable future as a superintelligence - or the parents
can't afford to create them. Or one might wish to rule that any child
gets half your resources (if two parents combine, then the child gets
a third of each parent's resources, and so on). The Sysop or the UN
will decide; I'm inclined to go with the former rule, which is
simpler. But remember the Golden Rule: The Sysop makes the final
decision.

They can't get an equal share, because then someone could "cheat" by
creating a decillion children and maxing out the Solar System's
resources.
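
To make those two candidate rules concrete, here's a minimal sketch -
purely illustrative, under one reading of the rules above; the MLS
constant, the function names, and the resource units are all made up,
not part of any actual design:

    # Illustrative sketch only: the two candidate child-creation rules
    # described above, in made-up units where MLS (minimum living
    # space) is 1.0.

    MLS = 1.0  # hypothetical minimum living space per sentient

    def can_create_child(parent_resources):
        """Rule 1: a Child may be created only if every parent keeps at
        least MLS for itself and the parents' combined spare mass
        covers the Child's MLS."""
        spare = sum(r - MLS for r in parent_resources)
        return all(r >= MLS for r in parent_resources) and spare >= MLS

    def child_share(parent_resources):
        """Rule 2 (the alternative): the Child splits evenly with its
        parents - half of a single parent's resources, a third of each
        of two parents' resources, and so on."""
        n = len(parent_resources)
        return sum(r / (n + 1) for r in parent_resources)

    # Why a flat "equal share of Sol" for new Children fails: one
    # entity could create a decillion Children and claim the whole
    # system. Under Rule 1 the exploit is self-limiting - every new
    # Child costs the parents at least one MLS of their own mass.
    print(can_create_child([2.0]))       # True: one parent owning 2 x MLS
    print(can_create_child([1.2, 1.2]))  # False: combined spare < MLS
    print(child_share([3.0, 6.0]))       # 3.0: a third of each parent's mass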

However, when the rest of the Milky Way is divvied up, it might go
equally to all existing sentient entities that were not created solely
as land-grabbers.

> What about nonfunctional humans, i.e.,
> those that are for whatever reason incurably severely retarded, do
> they get just as big a chunk as somebody that might actually be able
> to *use* said resources?

Yes.

> I know all those
> whatevers are perhaps not appropriate concepts in that kind of
> environment, but you can clearly imagine the analogies. In
> particular, *why* does the set of beings, defined however you want,
> alive at the instant of allocation enjoy the special temporal
> privilege of getting participation?

In general, the rule is that anyone born/created before "rapid
replication" becomes possible gets an equal share of Sol. This goes
back to Drexler and _Engines of Creation_.

> How about those Minds, do they get their own pieces?

Yup - you want to create a Mind, you have to give it minimal living
space. MLS is necessarily the same for Mind Children as for human
Children, since a human Child might want to become a Mind when it
grows up.

> There's nothing "obvious" about your solution at all.

I don't say that it's instantly obvious; only that all alternate
solutions which I have examined appear harmful or dangerous or unfair,
thus forcing a single answer. If multiple solutions are acceptable in a
case, then it's not an important issue and the UN can decide.

> You are basically ignoring the need for
> an entire philosophical, moral, and economic framework in which we
> can interact in the presence of the kinds of tech you're speculating
> about. It's a BIG deal. Sophomoric and overly simplistic solutions
> like "oh, we'll just give everybody an equal chunk of resources" are
> basically just a way of sweeping the *really tough* problems under
> the rug. By comparison, actually building the Minds may well be
> trivial compared to these other problems.

Yep, that's why I'm ignoring it. We can worry about the philosophy
post-Singularity, or ask the Sysop, or create a Philosophy Mind. Any
alleged philosophy would probably give vastly sillier answers than
trying to just work out the Rules.

> > You mean the Washington Monument Problem? ("Who gets the
> > Washington Monument?") I don't know. I don't care. It's a
> > trivial problem. You could toss it to a UN vote and it wouldn't
> > matter all that much how they decided. One quark is as good as
> > another.
>
> It's an ABSOLUTELY non-trivial problem. Unless you've got some
> scheme for, at a minimum, free energy from the void i.e. ZPE, then
> you've got resource constraints. That implies economics. And,
> unless you think that femtoengineering happens two or three days
> after we get universal nanoassemblers,

Nanocomputers run REALLY REALLY fast. Two or three days is a maximum,
not a minimum.

> your "any quark" argument isn't that compelling either; we'll be
building
> with atoms for a while first, and that carbonaceous chondrite is *a
lot* more valuable in that
> timeframe than an equivalent amount of lunar regolith silicates.

Like I said, we'll probably leave the Solar System alone and just use
the Sun. But that's the Sysop's final decision - the *goal* is to leave
the Amish alone while providing all the uploads with sufficiently
high-quality living.

> Yeah, that sounds good to me. But how can anyone turn off the Sysop
> if there's something unacceptably wrong with it, though? I've never
> met a human being or a piece of code without bugs in it; and it's
> notoriously difficult for people to debug themselves, and software
> isn't too good at that, either.

You've never met an intelligent being with access to its own source
code.

> Isn't any kill switch a form of Sysop Threatening Weapon? What
> happens if there are bugs in the basic philosophical assumptions and
> interactive principles that the Sysop is built to protect and uphold
> and enable? Who's got their finger on the switch? Sed quis
> custodiet ipsos custodes? This is a hugely real problem with what
> you're contemplating.

Get It Right The First Time. If you can't trust a Sysop, how can you
trust a human with vis finger on the off-switch?

Remember, the basic goal is a superintelligence. This does not mean a
very fast, very intelligent moron. It means an entity that will
*notice* if we tell it to do something stupid.

> E., I'm not the only one making that equation. It's obvious to a
> *lot* of people that there are real issues of potential tyranny
> here.

Yes. And as long as it's "obvious", you can't be trusted. You do
realize that, don't you?

> > Maybe *you* find it natural to assume that you would abuse your
> > position as programmer to give yourself Godlike powers, and that
> > you would abuse your Godlike powers to dictate everyone's private
> > lives. *I* see no reason to invade the sanctity of your process,
> > and have absolutely no interest in enforcing any sort of sexual or
> > political or religious morality. I have no interest in sexual,
> > political, or religious morality, period. And if I did try to
> > invade your process, the Sysop wouldn't let me. And if I tried to
> > build a Sysop that could be dictated to by individuals, I would be
> > building a gun to point at my own head.
>
> You might be doing that anyway. Further, you might be putting a gun
> to everybody's head. I didn't volunteer for your experiment in
> Russian Roulette. Hey, go ahead and do whatever you like, just
> don't make any designs on any resources or whatever that somebody
> else might be interested in using.

Put your eggs in enough baskets, and one of them will break.

> > All that matters is the underlying process permissions that ensure
> > individual freedom. I'm in this to significantly reduce the
> > amount of evil in the world;
>
> Define evil.

Why bother?

> I can't think of many things more evil than the notion that some
> random evil genius might be cooking up the operating system for the
> universe that I will inevitably be forced to live in at some point
> in the future, and planning an interplanetary land grab and bake
> sale --- no, not sale, that would make too much sense ---
> *giveaway* of all the resources in the neighborhood.

I'm just creating the world we all wish we'd been born into.

> You *do* realize how nutty, how mad scientist, how evil genius all
> of this sounds when you say "bah!" and just brush away the
> philosophical concerns, right? For somebody who wrote a F.A.Q. on
> the Meaning of Life, you seem remarkably unconcerned about some of
> the more tricky philosophical questions that crop up as a result of
> your endeavor.

Yep. Who needs 'em?

> > Fine. The UN isn't allowed to do it. The trained professionals
> > aren't allowed to do it. Who's gonna do it? You?
>
> Nobody's gonna do it, because the world doesn't need just one set of
> uberRules.

Sure it does. If you have a lot of Sysops, that multiplies the
probability that one of them will break and blow up the Solar System...
unless there's a meta-Sysop. Which is in a sense what I'm talking
about; you build the rules in your reality, and the Sysop enforces the
rules that are so basic that they can't possibly be broken, like the
rule against creating a sentient entity and torturing it against its
will.
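
A toy way to see the "multiplies the probability" point: if each of n
independent Sysops has some small chance p of breaking, the chance
that at least one breaks is 1 - (1 - p)^n, which climbs steadily with
n. A minimal sketch, purely illustrative (the value of p is made up):

    # Illustrative only: how the chance of at least one catastrophic
    # failure grows with the number of independent Sysops, assuming a
    # made-up per-Sysop failure probability p.

    def p_any_failure(p, n):
        """P(at least one of n independent Sysops breaks) = 1 - (1 - p)^n."""
        return 1 - (1 - p) ** n

    p = 1e-6  # hypothetical per-Sysop failure probability
    for n in (1, 10, 1_000, 1_000_000):
        print(n, p_any_failure(p, n))

With a single Sysop (or a meta-Sysop above the rest), n stays at one
and the multiplication never happens - which is the point above.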

> It *doesn't
> need doing,* even if it can indeed be done. Tell you what: you
> just do *your* thing, build *your* world, figure out *your*
> definition of evil, eliminate that evil from *your* life, and
> everything's cool until you make the grab for the asteroid belt.

Or until someone else grabs the asteroid belt.

> I defy your right to claim
> any resources within the solar system beyond what you've got now or
> can eventually buy or be given; what makes you think you have that
> right?

I owned one six-billionth of Sol the day I was born. So do you. I'm
just making it real.
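
For scale, the arithmetic behind that share (back-of-the-envelope,
using round figures: the Sun is roughly 2 x 10^33 grams and the world
population in 2000 was roughly 6 x 10^9):

    # Back-of-the-envelope: the size of a one-six-billionth share of Sol.
    SUN_MASS_GRAMS = 2e33    # approximate solar mass in grams
    POPULATION_2000 = 6e9    # approximate world population in 2000

    print(SUN_MASS_GRAMS / POPULATION_2000)  # roughly 3.3e23 grams apiece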

> You don't believe in coercion; you believe
> in freedom; surely you must believe that your rights stop at the
> end of my "nose." Well, what framework have you created under which
> those personal boundaries make sense in such a brave new world?

Okay, take the Mundane Path if you like. I don't need Earthmatter
anyway.

> To me, your whole deal sounds like in practice it's a race, winner
> take all.

I didn't make that deal. That deal is inherent in the nature of the
huge positive feedback cycle that is the Singularity. I'm just trying
to "win", if you put it that way, with an AI altruistic enough to ensure
that there are no "losers". And since I'm just another human to the
Sysop, I'm certainly not a *personal* winner... unless everyone else is
a winner too.

> Actually, now that I think about it, I think your whole argument is
> sort of cowardly. Why in the world should you try and expound on or
> defend how things are going to work --- i.e., "The Rules" as you see
> them? It's just not necessary. If you had the courage of your
> convictions, you'd basically say "we're going to build an
> essentially omnipotent, ultimately benevolent Power. And then we're
> just going to trust it to figure out how to take care of all of us
> in the best way." That's perhaps a scary and harder to defend
> position, but I think it's really the one most philosophically in
> line with your endeavor.

That's the one in line with the assumption that an objective morality
exists. If an objective morality doesn't exist, then we at least have
to know enough to point the Sysop in the general direction of the
Standard Human Quest.

Certainly, it's important to remember that if anything I've been saying
is *obviously stupid*, then the Sysop is smart enough to override me on
it! Remember, the goal is to build a Sysop so perfectly that the
programmers become irrelevant; that the result is exactly the same Sysop
that would be built by any sufficiently competent altruist.

-- 
        sentience@pobox.com    Eliezer S. Yudkowsky
               http://singinst.org/home.html

