Re: Ellison on Loserhood


From: Eugene Leitl (eugene.leitl@lrz.uni-muenchen.de)
Date: Wed Aug 16 2000 - 22:43:13 PDT


Btw, if somebody wants to annotate Eliezer's document, please use
Crit: http://crit.org/http://singinst.org/tmol-faq/tmol-faq.html

Strata Rose Chalup writes:

> Humans experience things and form goals, without direction. Sometimes
> the results suck.
 
I would paraphrase it as: all living things act on a limited scope,
depending on their built-in capabilities for representing their
environment, plus their capability for gathering accurate stats about
it, plus their ability to evaluate multiple probable future outcome
trajectories (=intelligence), plus the state of the nonlinearity their
environment is currently in. Notice that co-evolution intrinsically
steers into a nonlinear/edge-of-chaos regime, since the fitness
function is modulated by other organisms, which actively destroys
predictability, even if you happen to be super-informed and
super-smart (because the others are your match, and adjust their
strategies in their competition for limited resources). We're not
nearly as much in control of our world as we think we are; for
instance, we're far from making it a monoculture, even (or
particularly) where pathogens and parasites are concerned. A seed
population of machines, while perfectly capable of sterilizing
everything within its light cone of all traces of _organic_ life, will
instantly result in a species radiation, creating a population of
critters at all scales of complexity, dwarfing a coral reef or a
tropical rainforest ecology. The smarter ones might be much smarter
than humans, but they'll still be unable to escape the darwinian
regime, and they'll obviously be as matched by their peers as we
primates are. Undoing species radiation will be no more in their power
than it is in ours.
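The co-evolution point can be made concrete with a toy model (my
illustrative sketch, not from the original post; the function name
`coevolve` and all parameters are arbitrary choices): two adaptive
agents play matching pennies, each nudging its mixed strategy toward
the best response to the opponent's *current* strategy. Because each
adjustment invalidates the other's previous optimum, the joint
dynamics cycle forever instead of settling.

```python
def coevolve(steps=600, lr=0.1, pa=0.9, pb=0.9):
    """pa = P(A plays heads). A wins by matching B's coin, B by
    mismatching -- the classic zero-sum matching-pennies game."""
    history = []
    for _ in range(steps):
        # Best responses against the opponent's current mixture:
        br_a = 1.0 if pb > 0.5 else 0.0   # A matches B's likelier coin
        br_b = 0.0 if pa > 0.5 else 1.0   # B avoids A's likelier coin
        # Each side adapts toward its momentary optimum...
        pa += lr * (br_a - pa)
        pb += lr * (br_b - pb)
        # ...which immediately shifts the other's fitness landscape.
        history.append((pa, pb))
    return history

hist = coevolve()
tail = [a for a, _ in hist[200:]]
# The strategies keep oscillating; no fixed "smartest" strategy exists:
print(min(tail), max(tail))
```

Being "super-informed" about a snapshot of the opponent buys nothing
durable here: the optimum is a moving target by construction.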
 
> Machine intelligences could experience things and form goals, without
> direction but maybe with a little nudge in what somebody thinks is the
> "right" direction. Magically, results would suck less.
>
> Bzzt.
 
Bzzt indeed. This assumes a monoculture of supersmart machines, an
infinitely sharp spike over the space of all possible critters. I do
not see how a population of autoreplicating agents can even produce
that, ignoring stability aspects over several generations. A huge
fitness delta relative to the old players, plus perfect replication
(extreme brittleness, stopped evolution), even if it was somehow
attainable (from where we are now, I do not see how), only defines a
metastable condition. Whenever your light cone intersects that of
other cultures who chose not to fall into the local fitness trap,
they'll whip your butt. (But likely subsystems will have long since
spontaneously become autonomous and spun off their own ecology by
then.)
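The "perfect replication = stopped evolution = metastable" claim can
be illustrated with another toy sketch (again my own example, with
arbitrary parameters, not anything from the post): a perfectly
copying lineage versus a sloppily copying one, under a fitness
optimum that drifts because the rest of the ecology keeps moving.

```python
import math
import random

def mean_fitness(phenotypes, optimum):
    # Gaussian fitness peak around the environment's current optimum.
    return sum(math.exp(-(x - optimum) ** 2) for x in phenotypes) / len(phenotypes)

random.seed(0)
clonal  = [0.0] * 100   # perfect replication: zero variation, forever
mutable = [0.0] * 100   # replication with small copying errors

optimum = 0.0
for step in range(200):
    optimum += 0.02     # the landscape drifts as other players adapt
    # Selection on the mutable lineage: keep the better-adapted half,
    # refill the population with noisy copies of the survivors.
    mutable.sort(key=lambda x: -math.exp(-(x - optimum) ** 2))
    survivors = mutable[:50]
    mutable = survivors + [x + random.gauss(0, 0.05) for x in survivors]
    # The clonal lineage copies itself exactly: nothing to select on.

# The mutating lineage tracks the moving peak; the frozen one is
# stranded on a fitness value that has long since walked away from it.
print(mean_fitness(clonal, optimum), mean_fitness(mutable, optimum))
```

As long as the optimum moves, the brittle perfect replicator's initial
fitness delta is only metastable, which is the point above.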

> If I were trying to solve the problem of human happiness, I would go
> about it differently. That may or may not be the problem they are
> trying to solve, only the problem that I *think* they are trying to
> solve.
 
As long as there is evolution, the problem of human (un)happiness is
probably fundamentally unsolvable. (But you might want to consult
http://hedweb.com/ for an alternative view).

> If we were to accept for the purposes of argument that unhappiness was
> usually a result of random inputs to a goal-directed system, I don't

Who says that biological life is a goal-directed system? If you happen
to be defective in your euphoria homeostatic system due to a genetic
or amplified-and-frozen morphogenetic-noise fluke, you can try to fix
it by tweaking yourself at the molecular level. However, you'll likely
prune out a few interesting mutations, and produce a society of
blissful individuals as inert as stones. At the very least one has to
rewire reward to real-world actions, or else have your butt kicked
very quickly.

> think the solution would be to build a separate goal-directed system
> with slightly less random inputs. The solution would be to try to offer
> humans at all life stages a wide range of tools which had been
> demonstrated to work for some humans to reprogram themselves for more
> happiness.
 
Do healthy people on Prozac perform better? Does anyone know?



This archive was generated by hypermail 2b29 : Wed Aug 16 2000 - 23:49:17 PDT