[Fwd: [Fwd: Eliezer speaks (forwardable)] - was loserhood and analysis]


From: Jeff Bone (jbone@jump.net)
Date: Fri Aug 18 2000 - 11:11:20 PDT


Whoops.


attached mail follows:



> Okay. I also think that vanilla ice cream tastes better. Let me get
> this straight: Literally ANY opinion I express in my writings, if you
> disagree with it, means I'm not smart enough to write The Rules? We can
> disagree on ice cream flavors and still agree about basic situations.

Okay, agreed, but there are *basic situations* and *big subjective value judgements,* and our basic
disagreement is which is which. I'm not particularly worried about whether you like vanilla ice
cream or not. I *am* concerned that there may be disagreeable "value system bleed-through"
depending on how you define "The Rules." Further, you seem to be the only one who thinks he can
make the call on what constitutes things like "basic," "fair," "significant," etc. Again, probably the
thing that sets me off the most here is the glib way you toss off bon mots like, "oh, we'll
just divvy up every natural resource within a few AUs of Sol to those lucky enough to exist when we
do it, it's really a trivial problem." Given that we can't even agree on how to go about achieving
consensus on simple, truly trivial things, yeah, I'm concerned about you and yours making the tough
calls for all of posthumanity for all time. (Like your resource allocation scheme.)

> > > It's symmetrical. It's simplest. It's obvious. There is no
> > > justification for any allocation strategy that favors individual humans
> > > at the expense of others.
> >
> > That's hogwash. All in situ resource allocation schemes throughout history that didn't
> > involve market forces or competition have failed.
>
> ... "throughout HUMAN history" ...
>
> So what?

AFAIK, you're *still* human, and you're the one proposing the allocation scheme. Refer to comment
at the end: if you want to simplify your position and make it easier to defend, just punt all the
hard issues (like fairly and equitably allocating resources) off on the superbrain you're building.
Surely it can do a better job, hein?

> How can you possibly draw conclusions from that?

Common sense?

> Who gives a diddly-squat what kind of social schemas nontechnological
> humans (we don't have tech, we have toys) need to run a civilization?

Well, if we're proposing to create a successor race that we would *like* to become, then I think
some sort of philosophical evolutionary path that we generally agree on would be desirable.

> Anyone with an underlying Sysop doesn't need to trade resources. Maybe
> the uploads can trade resources permanently or temporarily among
> themselves; maybe there are or aren't safeguards ensuring that each
> sentient has a minimum chunk of processing space. I think the best
> solution is a minimum-living-space safeguard that also applies before
> you want to create a new sentient - before you can make a Child, you
> need to own enough mass that you can afford to give both yourself and
> the Child minimum living space.

Now we're picking at the problem, but again you're just tossing off suggested solutions to specific
problems, solutions to which you clearly haven't given much thought. You do see how this could be a
concern, don't you? Again, IMO, you need the *process,* the *framework,* not just a gestalt answer.
When you write code, you do gather requirements first (even if just from yourself), don't you? And
then you design before you crank code, don't you? During that process, you do get other people to
look at, comment on, and review the intermediate products, don't you?
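
Take your own minimum-living-space rule as an example of what I mean by needing requirements. Pinned
down, it has to look at least like this. A throwaway sketch, in Python because it's handy; the
MIN_LIVING_SPACE_KG figure and the Sentient class are placeholders I'm inventing, not anything you've
actually specified:

# Throwaway sketch of the "minimum living space before you can make a
# Child" rule as something a Sysop could actually check and enforce.
# MIN_LIVING_SPACE_KG and the Sentient class are my inventions for
# illustration; none of this is specified anywhere in the proposal.

MIN_LIVING_SPACE_KG = 1.0e20   # arbitrary placeholder, not a real figure

class Sentient:
    def __init__(self, owned_mass_kg):
        self.owned_mass_kg = owned_mass_kg
        self.children = []

    def can_create_child(self):
        # The parent must be able to endow both itself and the new Child
        # with at least the minimum living space.
        return self.owned_mass_kg >= 2 * MIN_LIVING_SPACE_KG

    def create_child(self):
        if not self.can_create_child():
            raise PermissionError("not enough mass to endow a Child")
        # The Child's minimum endowment comes out of the parent's holdings.
        self.owned_mass_kg -= MIN_LIVING_SPACE_KG
        child = Sentient(MIN_LIVING_SPACE_KG)
        self.children.append(child)
        return child

Even that toy version surfaces the questions you're skating past: who sets the minimum, can it ever
change, what counts as "owned" mass, what happens to a Child's endowment if it merges with another
sentient. That's the process I'm talking about.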

> Actually, we might leave the whole Solar System for those who choose the
> Mundane Path. After all, most of the mass (10^33 grams) is in the Sun.

That sounds reasonable.

> > Who values each piece of solar system? Who decides that system of value?
>
> Ask the Sysop.

That's consistent, it's what I suggested at the bottom of your message.

> > How do you define
> > "a human being" for the purposes of the allocation?
>
> I don't think this will be significantly blurred at the time of the
> Singularity.

This surprises me; if we've truly got exponential growth, we should be done with bodies within a
few days of nanoengineering. What's your "shock level" again? ;-) Further, it should be "obvious"
that the proliferation in types of "humanity" will commence the *instant* we shrug off this mortal
coil. Let's say I merge with my girlfriend, my pet daemonic AI-constructs, and a few other close
friends a few seconds after we all upload. Are we "one human being" or several? I think your
comment on blurring of identity is baffling, given your self-professed "shock level."

> ...zzz...

Picking at the surface problems, avoiding the real issue.

> > What about nonfunctional humans, i.e.,
> > those that are for whatever reason incurably severely retarded, do they get just as big a
> > chunk as somebody that might actually be able to *use* said resources?
>
> Yes.
>

Why? (Note here that I'm trying to bait you into the "how do you define a human" trap, which is at
the root of the abortion debate as well. Clearly, it's an unwinnable argument.)

> > I know all those
> > whatevers are perhaps not appropriate concepts in that kind of environment, but you can
> > clearly imagine the analogies. In particular, *why* does the set of beings, defined however
> > you want, alive at the instant of allocation enjoy the special temporal privilege of getting
> > participation?
>
> In general, the rule is that anyone born/created before "rapid
> replication" becomes possible gets an equal share of Sol. This goes
> back to Drexler and _Engines of Creation_.

Fine, but just because Eric said it doesn't mean it's Gospel. There are all kinds of problems with
this concept. I like your original idea: leave the home system to the Mundane. I don't plan to
stick around, but it seems like a reasonable thing to do if you, like me, don't believe in coercion.

> I don't say that it's instantly obvious; only that all alternate
> solutions which I have examined appear harmful or dangerous or unfair,
> thus forcing a single answer. If multiple solutions are acceptable in a
> case, then it's not an important issue and the UN can decide.

Unfortunately, it's more "instantly obvious" than it is durable in the face of criticism or, indeed,
even minimal cursory examination. And what's all this waffling on deferring issues to the UN? Who
made them the arbiter of such things? Are you in favor of letting them make these calls, or not?

> > You are basically ignoring the need for
> > an entire philosophical, moral, and economic framework in which we can interact in the
> > presence of the kinds of tech you're speculating about. It's a BIG deal. Sophomoric and
> > overly simplistic solutions like "oh, we'll just give everybody an equal chunk of resources"
> > are basically just a way of sweeping the *really tough* problems under the rug. By
> > comparison, actually building the Minds may well be trivial compared to these other problems.
>
> Yep, that's why I'm ignoring it. We can worry about the philosophy
> post-Singularity, or ask the Sysop, or create a Philosophy Mind. Any
> alleged philosophy would probably give vastly sillier answers than
> trying to just work out the Rules.

Now you're arguing against yourself. As Eugene has pointed out (following lots of other thinkers in
this space) even a tiny increment of increased intelligence enables amazing intellectual feats given
extreme time compression / increases in speed-of-thought. Now, if you're building a Mind, then
don't you think it's vastly silly to argue that it's going to give you vastly sillier answers to
these questions than you can generate with your own puny, messy, wet, merely-human head? That's
totally inconsistent.

Tell you what, here's the answer: build a "Bootstrap Mind" whose basic job is to figure out what
kinds of things a Sysop should be empowered to do to achieve optimum / maximum fairness, prosperity,
tranquility, and survivability for all of its constituents. Arm it with all the knowledge and
thought in the world about human culture, history, philosophy, behavior, wants, needs, happiness,
economics, math (esp. game theory), political philosophy and actual government, good, bad, etc.
etc. Tell it to create an optimal, practical social framework which will minimize the total amount
of "net disagreement" with that framework among its constituents. Then have it implement the Sysop
with that framework as the underpinning of The Rules.
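
To be concrete about what "minimize net disagreement" means, here's the bare skeleton of the
optimization I have in mind. It's a toy sketch: disagreement() and candidate_frameworks are pure
placeholders, and actually defining and measuring them *is* the Bootstrap Mind's job.

# Toy formalization of "minimize net disagreement." The disagreement()
# function and the candidate_frameworks list are stand-ins; measuring a
# constituent's disagreement with a social framework is the hard part,
# not this loop.

def disagreement(constituent, framework):
    # Hypothetical: how strongly this constituent objects to living
    # under this framework (0.0 means no objection at all).
    raise NotImplementedError("this is the Bootstrap Mind's real job")

def choose_framework(candidate_frameworks, constituents):
    # Pick the framework with the smallest total ("net") disagreement,
    # summed over everyone who would have to live under it.
    return min(
        candidate_frameworks,
        key=lambda f: sum(disagreement(c, f) for c in constituents),
    )

The loop itself is trivial; the point of writing it down is that it forces you to say what a
"framework," a "constituent," and "disagreement" actually are, which is exactly the framework-level
work that keeps getting skipped.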

> Nanocomputers run REALLY REALLY fast. Two or three days is a maximum,
> not a minimum.

Yeah, but at some point you're talking about mechanical processes. The nanocytes are still going to
have to take some time to do some experiments, etc. They're going to have to build some massive
objects (like supercolliders) to generate the data necessary to achieve femtoscale engineering. And
they're going to generate a *whole lot* of heat doing those things if they run at maximum speed
doing all this stuff; I'd prefer that you not flash-bake-sterilize the planet in your rush to
femto, please. They might even have to do some of that experimentation off the planet, and they're
going to have to make those trips at more or less human-like speeds (within a few orders of magnitude),
given the problems of carting around reaction mass. Approaching-infinite computational ability doesn't
immediately translate into ability to avoid all those messy Physical Laws.

> > Yeah, that sounds good to me. But how can anyone turn off the Sysop if there's something
> > unacceptably wrong with it, though? I've never met a human being or a piece of code without
> > bugs in it; and it's notoriously difficult for people to debug themselves, and software isn't
> > too good at that, either.
>
> You've never met an intelligent being with access to its own source
> code.

Well, that's true, but neither have you.

> Get It Right The First Time. If you can't trust a Sysop, how can you
> trust a human with vis finger on the off-switch?

Okay, listen very carefully: *never, ever, never, ever, ever, ever, never* in the history of
software engineering has anyone written a bug-free nontrivial program. If there aren't bugs in the
code, there are bugs in the requirements (which are incorrect, ambiguous, or incomplete), etc. You're
proposing to wire this thing up and then, essentially, immediately hand over the reins, i.e., the
ability to make a global utility fog to enforce The Rules. How do you propose to test that?
Frankly, I don't want any purely-AI-controlled utility fog running around playing Benevolent Global
Cop *AT LEAST* until I've got my own defensive immune system, defensive subIntelligences, and my own
utility fog (under my control) in place. The chance of it all getting horribly out of control is
substantially higher than, say, the Manhattan Project's chance of setting the atmosphere on fire at
the first atomic detonation. Gray Goo is bad enough, but IMO worth the risk for building
assemblers; you're actually proposing to build and unleash Gray Goo with a Moral Purpose. *Your*
Moral Purpose. And if we don't like it, tough titty, because you / your gray goo has staked a claim
on all the resources in the solar system. Oh, but that's okay, because we can be certain that you /
your gray goo will Do The Right Thing by us with all those resources.

That's a pretty workable definition of "evil" in my book.

> Yes. And as long as it's "obvious", you can't be trusted. You do
> realize that, don't you?

I'm not asking for your trust. You *should be* asking for mine, and everybody else's who is
impacted by your little Master Plan.

> Put your eggs in enough baskets, and one of them will break.

True, but you're proposing that we all put our eggs in your one basket, and then you're going to
swing it around at high speed and see what happens.

> > Define evil.
>
> Why bother?

Oh, I dunno, how about in order to convince *someone* --- maybe even yourself, if you'd stop to
think about it --- that you're not creating the worst possible kind of institutionalized evil?

> I'm just creating the world we all wish we'd been born into.

That's the crux of the problem. YOU DO NOT KNOW AND CANNOT DECIDE WHAT KIND OF WORLD EVERYONE ELSE
WISHES THEY'D BEEN BORN INTO. Those kinds of assumptions are, IMO, the root of all evil, such as it
exists. (Examples of those kinds of broken assumptions: it's "obvious that abortion is wrong,
because abortion is murder and everyone would agree that murder is wrong." It's "obvious that we
should execute murderers, because murderers are dangerous and nobody wants to live in a dangerous
world." Sticky little stuff like that.)

> Yep. Who needs 'em?

Reminds me of a great book: _Philosophy: Who Needs It_ by Ayn Rand. You should read it.

>
> > Nobody's gonna do it, because the world doesn't need just one set of uberRules.
>
> Sure it does.

And you're the guy to decide what those are for all the rest of us. What a load of bullshit.

> Or until someone else grabs the asteroid belt.

Well, you (or your philosophical agent, your Sysop) better do it first then, because it's so obvious
that "you'll" be better stewards?

> I owned one six-billionth of Sol the day I was born. So do you. I'm
> just making it real.

The problem here is that if everybody "withdraws" their little piece of the Sun, then it's
basically Game Over for the home system as we know it. You wanna own a piece of Sol, fine, you own
a piece of Sol. The important thing to understand is that that piece is held in trust for the
future, and for all domestic life. You can't withdraw it; you're only allowed to live on its
interest (i.e., the energy it sheds on us all the time).
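
Just to put rough numbers on "living on the interest" (standard constants, circa-2000 population,
nothing exotic):

# Back-of-the-envelope numbers for "one six-billionth of Sol," just to
# make the scale concrete. The physical constants are the usual rough
# values; the population figure is the ~6 billion of 2000.

SOLAR_MASS_G = 2.0e33           # ~2 x 10^33 grams
SOLAR_LUMINOSITY_W = 3.8e26     # total power the Sun radiates, in watts
POPULATION = 6.0e9

share_of_mass_g = SOLAR_MASS_G / POPULATION             # ~3.3e23 grams apiece
share_of_interest_w = SOLAR_LUMINOSITY_W / POPULATION   # ~6.3e16 watts apiece

print("principal: %.1e g, interest: %.1e W" % (share_of_mass_g, share_of_interest_w))

Call it roughly 3 x 10^23 grams of principal apiece, or about 6 x 10^16 watts of ongoing "interest"
if you only tap the sunlight. Which is the point: nobody needs to cash out their piece of the Sun to
live very, very well.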

> > You don't believe in coercion; you believe
> > in freedom; surely you must believe that your rights stop at the end of my "nose." Well,
> > what framework have you created under which those personal boundaries make sense in such a
> > brave new world?
>
> Okay, take the Mundane Path if you like.

Huh? I don't see the disconnect between wanting Nonmundanity (which I do) and wanting to
*understand and help formulate* that Nonmundanity before taking the leap. (Or, more like, letting
you push me off the cliff.)

> I don't need Earthmatter
> anyway.

Okay, great. Don't touch the Sun, either, and leave the rest of the Solar system alone, too. Or,
here's a thought: you can "clean up" and assimilate any objects whose orbits are likely to pose
some impact threat to the inner planets over the next, say, few hundred megayears. You ought to be
able to simulate a hell of an orrery with all that horsepower; you should be able to do a much
better job of identifying those threats than anybody has so far imagined. Take as much of
the Oort cloud as you want, too, as long as you leave a representative sampling of everything there
and a reasonable surplus for resource use. Send us the map and analysis data as you assimilate that
stuff, too.
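
For what it's worth, the zeroth cut of that threat screening is trivial; the real work is the
long-horizon orbit propagation. A crude sketch, where the candidate objects and their orbital
elements are made up and Earth's perihelion/aphelion are approximate:

# Crude zeroth cut of the threat screening: flag objects whose orbital
# radial range even overlaps Earth's. The candidate objects and their
# perihelion/aphelion values are invented; Earth's are approximate.

EARTH_PERIHELION_AU = 0.983
EARTH_APHELION_AU = 1.017

def crosses_earth_range(perihelion_au, aphelion_au):
    # True if the object's radial range overlaps Earth's at all; a real
    # analysis would propagate orbits and look for actual close approaches.
    return perihelion_au <= EARTH_APHELION_AU and aphelion_au >= EARTH_PERIHELION_AU

candidates = {
    "object A": (0.9, 2.3),   # (perihelion, aphelion) in AU, invented
    "object B": (1.5, 4.0),
}
flagged = [name for name, (q, Q) in candidates.items() if crosses_earth_range(q, Q)]
print(flagged)   # ['object A']

A filter this crude just narrows the catalog; the orrery you'd actually build would propagate every
body forward and look for real close approaches over those timescales.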

> > Actually, now that I think about it, I think your whole argument is sort of cowardly. Why in
> > the world should you try and expound on or defend how things are going to work --- i.e., "The
> > Rules" as you see them? It's just not necessary. If you had the courage of your convictions,
> > you'd basically say "we're going to build an essentially omnipotent, ultimately benevolent
> > Power. And then we're just going to trust it to figure out how to take care of all of us in
> > the best way." That's perhaps a scary and harder to defend position, but I think it's really
> > the one most philosophically in line with your endeavor.
>
> That's the one in line with the assumption that an objective morality
> exists. If an objective morality doesn't exist, then we at least have
> to know enough to point the Sysop in the general direction of the
> Standard Human Quest.

Okay, now the tyrant comes out. So you're denying that there is any meaningful objective morality,
which I think is a wise position... and basically coming clean on the fact that you want to make
sure the universe functions according to *your* notion of morality. That's not consensus; that's de
facto coercion. At least you're on the verge of being explicit about it.

> Certainly, it's important to remember that if anything I've been saying
> is *obviously stupid*, then the Sysop is smart enough to override me on
> it! Remember, the goal is to build a Sysop so perfectly that the
> programmers become irrelevant; that the result is exactly the same Sysop
> that would be built by any sufficiently competent altruist.

There's an old philosophical saw about whether an imperfect being can create a perfect being at
all... but then, I forgot, you're not really interested in philosophy (it's all just trivial messy
details anyway), even though you wrote the FAQ on the Meaning of Life.

;-)

jb


