[FoRK] Extreme Life Extension: Investing in Cryonics for the Long, Long Term

Jebadiah Moore jebdm at jebdm.net
Mon Jun 21 19:15:05 PDT 2010


On Mon, Jun 21, 2010 at 5:03 PM, Jeff Bone <jbone at place.org> wrote:
>
> Jeb --- can we call you that?  Sweet.  Substance.  Thank you.


Sure, that's what I'm usually called.


> > But the next step in the argument is this:  with the refutation of
> > Pascal's Wager, the argument is that since you cannot rationally
> > influence your outcome after death due to a lack of information, you
> > should live your life ignoring the possibility and simply maximize your
> > expected value where you can.  You certainly shouldn't strive for some
> > particular possibility, because doing so likely costs you something on
> > Earth for no expected gain afterwards.  In the case of cryonics,
> > however, you do have some information about "life after death"--you
> > know that the chance of resuscitation is rather small, and that the
> > cost of trying is somewhat high.  The expected value is sort of up for
> > grabs--perhaps you'll just get a few more years, perhaps you'll live
> > virtually forever.
>
> Given the implied assumptions, you're correct.  But I'm not sure the
> assumptions are warranted.
>
> First, there may well be ways to influence both the probability of success
> and the outcome post-resuscitation.  (The latter may depend largely on the
> former;  the sooner wake-up becomes practical, the more likely it is that
> actions taken now will influence the outcome.)


I guess so, but since you hopefully have been reconsidering the cost of
freezing throughout your life, there should be a relatively small gap
between when you last gave the freeze order and when you kicked the bucket.
I'm assuming here that the brunt of the cost is paid at freeze-time, which
might not actually be the case.  If it is, then we can approximate the
calculation as being made immediately before death, so that there's no way
that you as an actor can affect your odds in the intervening time.

If you have to pay well in advance, then that does muck up the computation a
bit (for instance, if you personally are a researcher in this area, or if
you plan to invest significantly in research beyond the simple cost of
freezing).  But since we were talking about the idea that cryonics should be
considered in the same category as defibrillation, and most people aren't
cryonics researchers or major investors, I think it's safe to ignore that
factor.


> And high is relative;  whether or not the cost is high depends on your
> definition of high.  Right?


Of course.  But I think that the vast majority of people on Earth would
consider it expensive.  If you're a fairly typical multimillionaire, you
probably wouldn't consider it especially high.


> So with the assumption that you may, through careful planning, both
> increase the odds of your resuscitation and influence the environment and
> quality-of-life that you experience after the fact, and with the assumption
> that the cost is not "high" in some relative sense, rationality is restored.
>


No--"rationality is restored" only if the cost is sufficiently low relative
to the expected utility of the investment (assuming a fixed monetary cost in
every situation, that's the sum over each possible outcome X of [utility of
X * probability of X]).  My argument is that the expected utility of the
investment is not obviously greater than the expected utility of other
investments in the $100k range, due to the very low probabilities involved,
and the cancelling of infinite utilities.  Assuming homo economicus, it
doesn't matter whether you're a poor farmer or Bill Gates.  Of course, homo
economicus is a myth.

But assuming that you are able to "plan" sufficiently to raise the
probability of success, then of course it would be worthwhile.  The
"highness" of the cost doesn't matter except relative to the probability.

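To make that concrete, here's a minimal sketch of the calculation I mean.
Every probability and utility figure below is a made-up placeholder, not an
estimate I'd defend; the point is only the shape of the computation and how
completely the answer hinges on the resuscitation probability.

    # Hypothetical expected-utility comparison: cryonics vs. an ordinary
    # use of the same ~$100k.  All numbers are illustrative placeholders.

    def expected_utility(outcomes):
        """Sum of utility * probability over all possible outcomes."""
        return sum(utility * prob for utility, prob in outcomes)

    p_revival = 0.02  # placeholder chance of successful resuscitation

    # (utility, probability) pairs; probabilities sum to 1.
    cryonics = [
        (1000000, p_revival * 0.5),   # revived into a long, good life
        (-1000000, p_revival * 0.5),  # revived, but can't cope or adapt
        (-100000, 1 - p_revival),     # never revived: sunk cost only
    ]

    # Alternative: put the same money toward a modest, near-certain payoff
    # within an ordinary lifespan.
    alternative = [(150000, 0.9), (0, 0.1)]

    print(expected_utility(cryonics))     # -98000.0 with these placeholders
    print(expected_utility(alternative))  # 135000.0

With the good and bad revival payoffs split 50/50 they cancel exactly,
leaving the near-certain sunk cost; "careful planning" only changes the
answer insofar as it raises p_revival or skews that split.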

> Mostly agreed in rationale, slight quibble with conclusion.  I'm not
> assuming binary modes or real, absolute infinities --- you may not have been
> around long enough or recall, but I'm a mathematical constructivist of a
> rather peculiar extreme sort --- I reject the absolute reality of any
> infinities and / or non-discrete continua in general.  More Markov than
> Brewer, but even more so.


Well, even if you don't accept the potentially infinite positive/negative
utility of living forever, those outcomes still have very large utilities
that, given the unknown impact of such a life on the human mind, roughly
cancel out.  If you don't accept that it's possible to live forever at all
(I don't), then they still obviously cancel out.  My point was that there
isn't an infinite utility in the equation to worry about.
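
(As a quick sanity check on the cancellation: if, purely for illustration,
the good and bad revival outcomes get roughly equal probability p and
utilities +U and -U, their joint contribution to the expected value is
p*U + p*(-U) = 0 no matter how large the finite U is--the huge payoffs drop
out, and the near-certain cost dominates.)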


> Note the word "qualitative" in my statement.  You are correct that any
> payout, if there is any at all, will be finite.  However, relative to
> current standards, it is likely that any payoff, if there is one, implies
> at least the possibility of such greater personal utility that it
> cannot reasonably be compared to utility today.  In the spirit of "any
> sufficiently advanced technology is indistinguishable from magic", it does
> then resemble Pascal's Gamble.
>

Yeah.  If we have the tech to revive, we probably have other significantly
advanced tech as well.  I don't think it would be incomparable, although
possibly there would be much higher utility payoffs *if judged by today's
standards*.  But given results showing that people judge their status
relative to their situation and others, I'd guess that (after an adjustment
period) the utilities would actually be pretty similar.


> The biggest risk is the one you point out:  that you might be resuscitated
> but not able or allowed to have the benefit of such increased utility.  The
> rationale against that line of thinking is:  there would seem to be fairly
> few scenarios in which it would be to the resuscitators' benefit to actually
> revive someone while not giving them full advantage of their restored life.
>  Waking folks up to enslave them, put them in zoos, or what have you doesn't
> really seem to be compatible with any technological and ethical situation
> under which revival might occur.  Implausible, kind of like "human
> batteries" in that movie.
>

I didn't mean situations in which you'd be disallowed.  I meant that you
might not be able to cope with everyone being gone; that you might not be
nearly as competitive, due to your outmoded habits/knowledge/beliefs (thus
landing in a very low-status job, which you might resent); that you might
die or get sick from new diseases; that you might be handicapped in some way
due to an error in the cryo process (or a fundamental flaw in it); possible
persecution of people from the past; etc., etc.

> Minor quibble:  whether things balance out is by no means objectively sure.
>  Personal choice, yes.  But not one that necessarily occurs absent attempts
> at rationalism and quantitative thinking.
>

That's what I meant--they balance each other out only roughly, so it seems
like the result will lean only slightly one way or the other, depending on
your particular choice of probabilities and utilities.  And everyone will
choose the probabilities and utilities differently--hence the personal
choice.


> That *is* a very significant and serious concern.  There seems to be
> accumulating evidence that it's mostly structural, based on continuity of
> certain characteristics across admittedly brief but significant experimental
> and accidental disruptions of "life" as we judge such things today, but I'll
> let Eugen and his Powerpoints make that case more thoroughly if necessary.
>
> A similar concern:  Roger Penrose's hocus-pocus quantum origin of
> consciousness also argues against the plausibility to a large extent, as
> well as against strong AI in general and any form of uploading.
>

I just don't know enough neurobiology yet to make this call.  But I'd take
any result at this point with a grain of salt until the mechanisms of the
brain are more fully understood, due to the existence of confounding
factors.

I haven't read Penrose, but what I know about his argument seems implausible
to me.  The business about Gödel's incompleteness theorem and the halting
problem showing that human intelligence is non-algorithmic doesn't make
sense, since the types of problems we actually work on are fairly limited in
breadth, and since heuristics exist.  The evolution of a complex quantum
effect-based mechanism in our brains seems unlikely as well.

-- 
Jebadiah Moore
http://blog.jebdm.net

