Re: [Fwd: Eliezer speaks (forwardable)] - was loserhood and analysis

From: Jeff Bone (jbone@jump.net)
Date: Thu Aug 17 2000 - 22:52:46 PDT


> No. Metarules are subject to infinite dispute. The Rules themselves,
> on the other hand, are pretty obvious. Compare the "laws of physics"
> and the "rules of science".

So, for example, here's something that's nonobvious to me and subject to dispute: while I
think Doug Hofstadter is a fine human being and great thinker, I don't agree that he's the
most significant human being, or that you can even define such meaningfully. Now, I'll
stipulate that we're both bright people with points to make. But if we can't agree on the
obviousness of such assertions, how can you convince me that "The Rules" you're going to code
are going to be acceptable to me or indeed anyone else but yourself?

> Give me credit for basic common sense, Jeff. Anything you can see, I
> can see.

Fine, but don't assume that's symmetric; I sure don't see some of what you see, and I gotta
say I think some of what you're seeing isn't really there. :-) I think we might have a tough
time agreeing on common sense, too.

> It's symmetrical. It's simplest. It's obvious. There is no
> justification for any allocation strategy that favors individual humans
> at the expense of others.

That's hogwash. All in situ resource allocation schemes throughout history that didn't
involve market forces or competition have failed. Who decides who gets what piece? What if I
disagree with what I've been allocated? What if you decide to give me a big blob of vacuum
out near Neptune, while you take a nice, juicy carbonaceous chondrite closer into the sun?
Who values each piece of solar system? Who decides that system of value? How do you define
"a human being" for the purposes of the allocation? Does the fetus in Sally Jane's belly (or
whatever) at the time of the allocation get its own piece? Do the corpsicles at Alcor (or
whatever) each get their own piece? What about the kids (or whatever) that get born 10 months
after the allocation, are they shit out of luck? What about nonfunctional humans, i.e.,
those that are for whatever reason incurably severely retarded, do they get just as big a
chunk as somebody that might actually be able to *use* said resources? I know all those
whatevers are perhaps not appropriate concepts in that kind of environment, but you can
clearly imagine the analogies. In particular, *why* does the set of beings, defined however
you want, alive at the instant of allocation enjoy the special temporal privilege of getting
participation? How about those Minds, do they get their own pieces?

There's nothing "obvious" about your solution at all. You are basically ignoring the need for
an entire philosophical, moral, and economic framework in which we can interact in the
presence of the kinds of tech you're speculating about. It's a BIG deal. Sophomoric and
overly simplistic solutions like "oh, we'll just give everybody an equal chunk of resources"
are basically just a way of sweeping the *really tough* problems under the rug. Actually
building the Minds may well be trivial by comparison.

> You mean the Washington Monument Problem? ("Who gets the Washington
> Monument?") I don't know. I don't care. It's a trivial problem. You
> could toss it to a UN vote and it wouldn't matter all that much how they
> decided. One quark is as good as another.

It's an ABSOLUTELY non-trivial problem. Unless you've got some scheme for, at a minimum, free
energy from the void (i.e., ZPE), you've got resource constraints. That implies economics.
And, unless you think that femtoengineering happens two or three days after we get universal
nanoassemblers, your "any quark" argument isn't that compelling either; we'll be building
with atoms for a while first, and that carbonaceous chondrite is *a lot* more valuable in that
timeframe than an equivalent amount of lunar regolith silicates.

> Anything you want to do - *anything* at all - that doesn't harm another
> human, you can do through the Sysop API. The Sysop is not corruptible
> and has no temptation to meddle; the API to external reality would be
> *invisible* unless you specifically concentrated on seeing it. Until
> you tried to fire your gun at some poor guy who didn't volunteer for it,
> when the gun would suddenly stop working. Isn't that how you *want*
> reality to work?
>

Yeah, that sounds good to me. But how can anyone turn off the Sysop if there's something
unacceptably wrong with it? I've never met a human being or a piece of code without bugs in
it; it's notoriously difficult for people to debug themselves, and software isn't too good
at that, either. ;-) Isn't any kill switch a form of Sysop Threatening Weapon? What
happens if there are bugs in the basic philosophical assumptions and interactive principles
that the Sysop is built to protect and uphold and enable? Who's got their finger on the
switch? Sed quis custodiet ipsos custodes? This is a hugely real problem with what you're
contemplating.

> I don't even need to put any of that explicitly in the Sysop Instructions. It follows
> logically from the goal of maximum individual
> freedom plus not letting anyone shoot me without my permission.

Well, when you put it that way it sounds reasonable. ;-P :-/

> > Well, see above, clearly there are big gaping holes of consensus.
>
> As long as you keep thinking it in terms of a set of Gestapolike
> instructions, you will keep finding points of dispute. Think of it in
> terms of good and evil and dictatorial powers, all of which you claim to
> despise, and of course you'll despise the result. Think of it in terms
> of finding a set of underlying rules that guarantee *everyone's*
> autonomy, and there is a single, forced solution.

E., I'm not the only one making that equation. It's obvious to a *lot* of people that there
are real issues of potential tyranny here. My point is: you haven't done anything at all to
prove to me or anyone else that (a) there is such a set, and (b) you can find or have found
that set.

> Maybe *you* find it natural to assume that you would abuse your position
> as programmer to give yourself Godlike powers, and that you would abuse
> your Godlike powers to dictate everyone's private lives. *I* see no
> reason to invade the sanctity of your process, and have absolutely no
> interest in enforcing any sort of sexual or political or religious
> morality. I have no interest in sexual, political, or religious
> morality, period. And if I did try to invade your process, the Sysop
> wouldn't let me. And if I tried to build a Sysop that could be dictated
> to by individuals, I would be building a gun to point at my own head.

You might be doing that anyway. Further, you might be putting a gun to everybody's head. I
didn't volunteer for your experiment in Russian Roulette. Hey, go ahead and do whatever you
like, just don't make any designs on any resources or whatever that somebody else might be
interested in using.

> All that matters is the underlying process permissions that ensure
> individual freedom. I'm in this to significantly reduce the amount of
> evil in the world;

Define evil. I can't think of many things more evil than the notion that some random evil
genius might be cooking up the operating system for the universe that I will inevitably be
forced to live in at some point in the future, and planning an interplanetary land grab and
bake sale --- no, not sale, that would make too much sense --- *giveaway* of all the resources
in the neighborhood.

You *do* realize how nutty, how mad scientist, how evil genius all of this sounds when you say
"bah!" and just brush away the philosophical concerns, right? For somebody who wrote a F.A.Q.
on the Meaning of Life, you seem remarkably unconcerned about some of the more tricky
philosophical questions that crop up as a result of your endeavor.

> Fine. The UN isn't allowed to do it. The trained professionals aren't
> allowed to do it. Who's gonna do it? You?

Nobody's gonna do it, because the world doesn't need just one set of uberRules. It *doesn't
need doing,* even if it can indeed be done. Tell you what: you just do *your* thing, build
*your* world, figure out *your* definition of evil, eliminate that evil from *your* life, and
everything's cool until you make the grab for the asteroid belt. I defy your right to claim
any resources within the solar system beyond what you've got now or can eventually buy or be
given; what makes you think you have that right? You don't believe in coercion; you believe
in freedom; surely you must believe that your rights stop at the end of my "nose." Well,
what framework have you created under which those personal boundaries make sense in such a
brave new world? To me, your whole deal sounds like in practice it's a race, winner take all.

Actually, now that I think about it, I think your whole argument is sort of cowardly. Why in
the world should you try to expound on or defend how things are going to work --- i.e., "The
Rules" as you see them? It's just not necessary. If you had the courage of your convictions,
you'd basically say "we're going to build an essentially omnipotent, ultimately benevolent
Power. And then we're just going to trust it to figure out how to take care of all of us in
the best way." That's perhaps a scarier and harder-to-defend position, but I think it's really
the one most philosophically in line with your endeavor.

jb

This archive was generated by hypermail 2b29 : Thu Aug 17 2000 - 23:19:12 PDT