What I don't get about his stuff is how he sidesteps the concept of self. I.e.,
in his concept it is somehow fundamental that AI things would not have a self. I
get his point that his AI things aren't evolved, so there is no selection, so
there is no pressure to create units of selection, so there's nothing to have a
self.
But it makes no sense to me at all that these things wouldn't be evolved. Let's
say he made a few of them. The ones that survived would tend to do things that
caused them to survive(*). ...and evolution is off and running again.
(*) To get that tautology under control: what I mean is that as soon as there is
(1) differential survival and (2) decision-making, there are units of selection
with a sense of self. I don't mean that they necessarily feel selfy, just that
the evolutionary consequences are the same as if they did.
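A toy simulation makes the footnote's point concrete. This is just an illustrative sketch, not anything from the paper under discussion: agents here carry a made-up heritable "caution" trait that affects survival, and survivors replicate with a little mutation. With nothing but differential survival and copying, the population's mean trait drifts upward, i.e., selection is off and running whether or not anything "feels selfy."

```python
import random

random.seed(0)

POP_SIZE = 200

# Hypothetical heritable trait: "caution" in [0, 1].
# Survival probability increases with caution, so rounds of
# (1) differential survival and (2) replication of survivors
# push the population mean upward -- selection without any
# explicit sense of self.
pop = [random.random() for _ in range(POP_SIZE)]

def generation(pop):
    # (1) differential survival: cautious agents survive more often
    survivors = [c for c in pop if random.random() < 0.2 + 0.6 * c]
    # (2) survivors replicate back to full size, with small mutation
    children = [
        min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, 0.02)))
        for _ in range(POP_SIZE - len(survivors))
    ]
    return survivors + children

before = sum(pop) / len(pop)
for _ in range(30):
    pop = generation(pop)
after = sum(pop) / len(pop)

print(f"mean caution: {before:.2f} -> {after:.2f}")
```

Running it shows the mean trait rising across generations; the point is only that nothing beyond differential survival and heritable variation is needed to get evolutionary dynamics going.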
> Cross-infecting from transhumantech, paper on "benevolent" AI by Eli
> via Brian. It seems that, perhaps, our ancient thread on this may
> have induced a small amount of moderation in the thinking...
This archive was generated by hypermail 2b29 : Sun Apr 29 2001 - 20:25:59 PDT