Game Theory and the Golden Rule

J. Andrew Rogers andrew at ceruleansystems.com
Mon Dec 1 22:10:26 PST 2003


On 12/1/03 9:33 PM, "jbone at place.org" <jbone at place.org> wrote:
> 
> Personally it's a slight variation on this kind of reasoning that makes
> me relatively secure in the idea that while e.g. Eli-style
> "Friendliness" is probably impossible to induce and insure, it's
> probably also unnecessary.  A rational being chooses rational
> strategies, and that usually excludes scorched-Earth / purely
> self-serving strategies.


I believe one can only make this assumption if one assumes rough intellectual
parity between agents.  If one does not make this assumption, or assumes
large disparities in practical intelligence, the argument does not hold.
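To make the point concrete, here is a toy simulation (the payoffs, the
strategies, and the "perfect prediction" model of an intelligence gap are my
own illustrative choices, not anything from the thread): an iterated
prisoner's dilemma where the smarter agent is modeled, crudely, as one that
can perfectly predict a tit-for-tat opponent and knows when the game ends.

# Payoff matrix for one round of the prisoner's dilemma:
# (row player's score, column player's score).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

ROUNDS = 10

def tit_for_tat(my_history, their_history):
    # Cooperate first, then mirror the opponent's previous move.
    return their_history[-1] if their_history else "C"

def superior(my_history, their_history):
    # A crude stand-in for a large intelligence gap: this agent perfectly
    # predicts a tit-for-tat opponent and knows exactly when the game
    # ends, so it banks cooperation payoffs and defects only on the
    # final round, when retaliation is no longer possible.
    return "D" if len(my_history) == ROUNDS - 1 else "C"

def play(strategy_a, strategy_b):
    ha, hb = [], []
    score_a = score_b = 0
    for _ in range(ROUNDS):
        a = strategy_a(ha, hb)
        b = strategy_b(hb, ha)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        ha.append(a)
        hb.append(b)
    return score_a, score_b

print("parity:   ", play(tit_for_tat, tit_for_tat))  # (30, 30): stable cooperation
print("disparity:", play(tit_for_tat, superior))     # (27, 32): exploitation pays

With parity, tit-for-tat against tit-for-tat settles into cooperation.  Give
one side enough predictive power and a known horizon, and defection becomes
the rational play precisely because the weaker agent cannot retaliate.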

Computational parity between intelligent agents is one of those pervasive
assumptions in human thinking and institutions, rooted in historical
reality.  It more or less asserts that 1) only humans are intelligent
agents, and 2) all humans have functionally equivalent intelligence
(though the illusion frays at the extremes).

Some day, when we play "hamster" to the computer's "human", we'll be in for
a rude awakening on a great many levels.


-- 
J. Andrew Rogers (andrew at ceruleansystems.com)




