From: Gordon Mohr (email@example.com)
Date: Fri Aug 18 2000 - 18:49:37 PDT
Karl Anderson writes:
> "Gordon Mohr" <firstname.lastname@example.org> writes:
> > We could then, just for kicks, live a few hundred "simulated natural
> > lifetimes", in the blink of an eye -- if such an exercise were at all
> > interesting. We could rerun the same life with slightly different
> > initial conditions; we could invite friends over for guest-starring
> > roles; we could discuss the twists and turns between lifetime sorties.
> Yeah, but that would be extremely unethical, unless you don't believe
> in hard AI, in which case "cannot be differentiated" is false.
> "Whoops, I messed up the input parameters a notch - a version of you
> just spent a lifetime getting gangraped by pirates. Butterfly effect,
I don't exactly follow your comments on the relevance of "hard AI". I
can see ethical dangers, but not insurmountable problems.
If the main character in such simulations is ethically "me", and I
voluntarily agree to enter and live with the outcomes, knowing that
any pain will at least be finite in duration, who has been unethically
treated? Mortality itself may just be a safety mechanism, to ensure
nothing goes too awry for too long on any one "run".
Perhaps at every second, the simulation freezes. A forked version
of me is given an explanation of the situation, and a chance to rescue
the naive version before continuing. Would that provide suitable ongoing
consent?
What about all the supporting characters? They need not be suffering
sentients -- they could be actors/puppets for hire. If they fool
the-me-inside-the-simulation, with the help of all their outside-the-
simulation cycles, that may prove they're sentient relative to me, but
it doesn't prove they're being wronged by their involvement.
A tricky situation to reason about, definitely.
This archive was generated by hypermail 2b29 : Fri Aug 18 2000 - 18:52:17 PDT