[FoRK] reas. conv. 11/7: Against Theocracy

Jeff Bone <jbone at place.org>  Tue Nov 7 15:05:04 PST 2006

On Nov 7, 2006, at 3:18 PM, Dr. Ernie Prabhakar wrote:

> I'. Every viable society needs some Shared, Transcendent, Unifying  
> Beliefs (a "STUB" :-).
>
> Note that this can just as easily be tribalism, nationalism,  
> Confucian morality, or dialectical materialism as any sort of  
> theism.  Absent such, I honestly don't see how you can sustain a  
> bridge partnership[4], much less any sort of modern society.

Cooperation is clearly possible with at most a minimum of "shared,  
transcendent, unifying beliefs," if any.  Game theory.  Cf. Axelrod  
on the evolution and complexity of cooperation, etc.  (One might  
ask whether computer PD players have beliefs at all;  I think we can  
dodge that by simply assuming that hard-coded axioms and game logic  
== beliefs.  The word "transcendent" is, however, extremely  
troublesome.)

The belief that I've more or less converged on from the game-theoretic  
perspective, mostly by looking at Axelrod's Prisoner's Dilemma  
analyses and extrapolating from them, is as follows:  in most  
relationships the parties involved have at least some capacity to  
contribute something of value to each other, given enough time /  
enough iterations.  If this is taken as a given, then there are two  
possible scenarios:  a given series of interactions is assumed to be  
finite (of definite length), or it's assumed to be of uncertain  
length --- effectively infinite.  In the finite series of  
interactions, the operative question is whether the series is  
positive, zero, or negative sum.  If negative sum, terminate.  If  
zero sum, seek to win, i.e. seek to make the other player lose.  If  
positive sum, play cooperatively as long as you can / until the  
series ceases.
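
In code, the finite-series rule above is just a three-way branch.  A  
sketch (the function and its signature are mine, purely illustrative):

    # Decision rule for a series of interactions of known, finite length.
    # expected_sum: my estimate of the total joint payoff of the series.
    def finite_series_policy(expected_sum):
        if expected_sum < 0:
            return "terminate"      # negative sum: walk away
        if expected_sum == 0:
            return "play to win"    # zero sum: my gain is the other's loss
        return "cooperate"          # positive sum: cooperate while it lasts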

Most serial interactions are of indefinite length.  Given this, it  
usually makes sense to play cooperatively, or at least to adopt  
strategies that don't create zero- or negative-sum situations;  seek  
to cooperate.  If the term is indefinite but probably short, then  
there's a probabilistic cost-benefit calculation:  given the  
anticipated length of the game (the number of iterations of  
Prisoner's Dilemma, as a loose model), does the expected cost of  
cooperation exceed the expected cost of non-cooperation?
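
One standard way to make that cost-benefit concrete is the textbook  
iterated-PD calculation (the usual Axelrod-style setup, not anything  
specific to this thread;  T, R, P are the conventional temptation /  
reward / punishment payoffs, w the probability the game continues  
for another round):

    # Expected payoffs against a tit-for-tat opponent when each round
    # continues with probability w (a geometric "shadow of the future").
    # Conventional payoffs: T (temptation) > R (reward) > P (punishment).
    def expected_payoffs(T, R, P, w):
        always_cooperate = R / (1 - w)           # R every round
        always_defect    = T + w * P / (1 - w)   # T once, then mutual defection
        return always_cooperate, always_defect

    # With Axelrod's numbers (T=5, R=3, P=1), cooperation pays off
    # once w exceeds (T - R) / (T - P) = 0.5:
    coop, defect = expected_payoffs(5, 3, 1, 0.9)   # 30.0 vs. 14.0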

That may all be fast and loose, but the general idea is there:  I  
believe there's a rational basis for cooperative behavior in almost  
all interactions between reasonable participants, one that relies on  
nothing more than a minimal shared set of axioms (game theory /  
basic economics).

> III'.  When adherents to a STUB perceive a threat to their  
> collective identity, they tend to respond violently, often leading  
> to great evil.

The very notion of "collective identity" is itself a problem, IMHO:  
a primitive meme deeply rooted in our evolutionary past.


jb
