[FoRK] reas. conv. 11/8: Culture & Transcendence (WAS: Against Theocracy)

Dr. Ernie Prabhakar <drernie at radicalcentrism.org>  Wed Nov 8 06:08:06 PST 2006

[#8 in a "reasoned conversation" about Christianity and atheism]

Hi Jeff,

Excellent points.  Let me see if I can respond coherently...

On Nov 7, 2006, at 9:05 PM, Jeff Bone wrote:
> On Nov 7, 2006, at 5:43 PM, Dr. Ernie Prabhakar wrote:
>> We typically have explicit beliefs about what's happening *in* the  
>> game, as well as tacit beliefs about the nature of the game  
>> itself.  Those rules themselves exist outside the game per se,  
>> which is why I think transcendent is a fair term.  Though, if you  
>> prefer "tacit" or "implicit" I can live with that.
>
> I think there's a kind of circular reasoning here, but can't quite  
> put my finger on it.  The problem is this:  there's no rule book  
> for the game of life;  you and I can't necessarily rationally agree  
> on what the rules are, we can only --- through the repeated  
> experience of iterating through the game --- form our own models of  
> the game, its rules, and so on.

Hmm, I agree that we don't *know* all the rules, and that we _do_  
form our own models of how the "game" works.  But, I'm not sure  
that's all there is to it...

> We have no objective shared context about the rules, only objective  
> shared experience of the outcomes of the iterations of the game.   
> Hence, no "transcendent" beliefs --- only empirical evidence.  At  
> least, that's all that we can agree on.  Yet we can learn to  
> cooperate.

Hmm. I think you're overlooking the nature of human culture, and how  
children (and immigrants) assimilate into a group.   It is not  
"merely" a matter of empirical evidence, but a combination of both  
explicit instruction *and* inductive reasoning (out of our own, or  
others', experience) that provides us with heuristics for navigating  
the social landscape ("drive on the right", "smile at the cashier",  
"wear clothes").

I agree that there *is* an arbitrary element to much of this, and  
that these beliefs may not have much correspondence to any "external"  
reality.  However -- assuming that sociology and political science  
are not *completely* worthless occupations -- I do think it is fair  
to say that some cultures respond to certain problems "better" than  
others, which implies that their rules are "more optimal" (relative  
to some extrinsic standard) than others.  And that in fact much of  
education (formal and informal) is an attempt to cultivate belief in  
those more-optimal rules.

No?

> It's a small point, I think --- this objection I'm having to the  
> notion of "transcendent" belief and / or the equivalence of such  
> with individual beliefs about the "tacit" or "implicit" fundamental  
> nature of reality.  But I think that gets right to the heart of the  
> matter, doesn't it?

Yes, it does.  Let me rephrase my position as:

1. Prolonged social interaction requires shared beliefs (both  
implicit and explicit) about the nature of objective reality (that  
is, what is "real" independent of the actions of either party)

2.  Those beliefs are often "useful" whether or not they are  
"true" (e.g., if we both believe the courts are fair, we agree to  
honor our contracts)

3.  The closer those beliefs correspond to objective reality, the  
more effective the resulting social framework (e.g., belief in  
"germs" leads to better medical research than belief in "evil  
spirits").

Would you agree with all that?  I agree that "transcendent" is a  
messy word, and perhaps not the optimal one, but I think it raises  
some useful questions.  I use it here in the sense of "bigger than we  
are"; could you suggest a better term?

> I'll go one further.  It doesn't even take *intelligence* for  
> cooperation to exist.  The computer PD models, particularly the  
> learning ones, make that point.  And many of *those* don't have any  
> explicit, internal models of the games they play.
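A minimal sketch of the kind of model-free PD player Jeff mentions (this particular strategy, "Pavlov" or win-stay/lose-shift, is my choice of illustration, not one named in the thread): each agent just repeats its last move after a good payoff and switches after a bad one, holding no explicit model of the game — yet mutual cooperation emerges.

```python
# Two "Pavlov" (win-stay, lose-shift) agents in iterated Prisoner's
# Dilemma.  Neither holds an internal model of the game; each reacts
# only to its own last move and last payoff.

PAYOFF = {  # (my_move, their_move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def pavlov(last_move, last_payoff):
    """Win-stay, lose-shift: keep the move if the payoff was good, else flip."""
    if last_payoff >= 3:
        return last_move
    return "D" if last_move == "C" else "C"

def play(rounds=20):
    a, b = "D", "D"  # start from mutual defection
    history = []
    for _ in range(rounds):
        history.append((a, b))
        pa, pb = PAYOFF[(a, b)], PAYOFF[(b, a)]
        a, b = pavlov(a, pa), pavlov(b, pb)
    return history

if __name__ == "__main__":
    # After one round of mutual defection (payoff 1, a "loss"), both
    # agents flip to C and then stay there -- cooperation without belief.
    print(play()[:4])
```

Starting from mutual defection, both agents receive the "lose" payoff, flip to cooperation on the next round, and then stay there — which is exactly the point: no intelligence or explicit model is required.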

Um, I'm not sure whether you're conflating "intelligence" with  
"belief," and limiting the latter to an "explicit internal model."  I  
would argue that an  
appropriately-constructed analog sensor-motor feedback system  
implicitly "believes" that "it is good to move towards the light",  
the same way a moth does.  Or the way a human believes it is "better  
to love than to hate."

The difference in being human, I think, is that we *can* construct --  
and manipulate -- a _partial_ (explicit) mental model of our internal  
(implicit) belief states.  But I've yet to meet a human being who  
truly understands _everything_ they believe, and is 100% accurate in  
predicting their _own_ reactions to everything they encounter...

-- Ernie P.


