Received from my brother. Quite interesting stuff. I'll try to
comment later - have to head out now.
The Binmore book (Game Theory and the Social Contract: Playing Fair)
sounds like Rohit's kind of book too, I would think.
Date: Wed, 27 Aug 1997 18:38:45 +0300 (IDT)
From: Uri Resnick <firstname.lastname@example.org>
Reply-To: Uri Resnick <email@example.com>
To: Ron Resnick <firstname.lastname@example.org>
Subject: Re: A guest post on morality
Content-Type: TEXT/PLAIN; charset=US-ASCII
On Tue, 26 Aug 1997, Ron Resnick wrote:
> Ernest N. Prabhakar wrote:
> > From email@example.com Thu Aug 21 14:01 PDT 1997
> > From: Dustin <firstname.lastname@example.org>
> > To: "Ernest N. Prabhakar" <email@example.com>
> > Subject: Re: Evolutionary thoughts (fwd)
> > MIME-Version: 1.0
> > Content-Type: TEXT/PLAIN; charset=US-ASCII
> > Content-Length: 3793
> > > Yup, that mailing list.
> > > http://xent.w3.org/FoRK-archive
> >Took me a while, but I did find some messages from the moral relativism
> >thread--I think that is what you mentioned to me. Looks like an
> >interesting discussion. One thing caught my eye, so I'll comment even
> >though it wasn't you at all who said it:
> Oh oh. After Wayne's palate cleansing monkeys (quite funny, btw),
> I wasn't going to mess up these waters again. But since Ernie's
> bringing in outside troops, I suppose I can do the same :-).
> My brother has a bit of an interest in these things..
> Here's his 2 bits on this thread. (He was being cc'd by Wayne
> & me at the time, you'll recall). I spoke to him on the phone
> about this a bit afterwards - this Binmore fellow apparently
> makes the most progress in the directions I've been troubled with.
> I suppose I should throw him on my reading list too....
> Uri is finishing an MA at Hebrew U dealing with applying game theory
> to political models. Something about modelling folks like Arafat
> & Netanyahu as 'players' in a game, tuning the model, and trying
> to use it predictively for future outcomes in various scenarios.
> I've never entirely understood it.
> So while he's in a poli sci faculty, he spends most of his time
> researching game theory & mathematical & economics papers for his model.
> As you see, he's no help at all to a relativist like me - he's
> got his ethics firmly planted on the capital G Good and capital E
> Evil pillars.
> Uri -
> Can I forward this to the FoRK mailing list? (That means it gets
> publicly archived on the web, and gets read by >50 people).
> Uri Resnick wrote:
> > Ron.
> > So, you've joined philosophy class. (I didn't quite make out
> > exactly who said what but it sure sounded interesting anyway - I
> > assume the bit about family gatherings and rituals was you)
> Yes :-).
> > See what you've been missing;
> > you could have done an artsy degree like me and done nothing but
> > ramble about such nonsense for credit. (Of course, then you would have
> > ended up an artsy bum like your kid brother.)
> > A couple of simple things seem to have been missing, though, from the
> > learned discussion. No one in there seems to have heard about Kant, or
> > Rawls, or Binmore. Not that these guys make too much of a difference
> > to the basic gist, but at least they tried.
> Perhaps you could summarize some of the basic contributions of these?
> > The bottom line, though, I think, is that it's all kind of missing
> > the point, which is: it's kind of futile to have a learned discussion
> > about something which you either believe in or don't. If you don't, no
> > amount of tonsil stretching pontification will change that. Same thing
> > if you do.
> True, but I think there is value in trying to figure out what you do
> believe. The point isn't convincing others, I think, but rather
> trying to come to grips with what your internal values are. I don't
> dwell on these things, but it is important to spend some time on them
> when you want to put the rest of your life in context.
> > Uri.
> > By the way, the word 'hypocrites' made its way in there a few times.
> > Who are the hypocrites? The guys who preach goodness and act rotten,
> > or the guys who preach emptiness and pretty much toe the line in
> > everything but rhetoric?
> I think in my usage of the term, I primarily meant the former. But
> ultimately everybody is hypocritical (or, more gently, has
> inconsistent internal value propositions). It can't be helped.
> BTW, this is not just a minor issue -
> I think it's about to explode in real world importance and urgency.
> Information networks, of the type that I and some friends discuss
> regularly, are combining informational resources about ourselves
> and others together
> with massive processing power. The result is that we are delegating our
> value judgements to software agents that act on our behalf to gather,
> filter, collate etc. information. Now, we as people have inconsistent
> and variable policies on how we relate to information: what we will make
> public, what we will authorize to whom. But our agents insist
> on rigour and consistency. Programming & customizing our agents to be
> consistent is going to mean staring right into the mirror and confronting
> our own inconsistencies. It's going to be a rude awakening for most.
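As a minimal sketch of that point about agents demanding consistency (everything here - the class, the names, the rules - is invented for illustration, not taken from any real system):

```python
# Hypothetical sketch: a disclosure "agent" that will only act on an
# explicit policy. All names and rules are invented for illustration.

class DisclosureAgent:
    def __init__(self):
        # (field, audience) -> allowed? Humans improvise; agents won't.
        self.rules = {}

    def allow(self, field, audience):
        self.rules[(field, audience)] = True

    def deny(self, field, audience):
        self.rules[(field, audience)] = False

    def may_disclose(self, field, audience):
        # Where the policy is silent, the agent refuses to guess -
        # forcing its owner to state what he actually believes.
        if (field, audience) not in self.rules:
            raise ValueError(f"no policy for ({field!r}, {audience!r})")
        return self.rules[(field, audience)]

agent = DisclosureAgent()
agent.allow("email address", "friends")
agent.deny("salary", "public")
print(agent.may_disclose("salary", "public"))  # False
```

The rude awakening is the `ValueError` branch: where a person would improvise case by case, the agent demands a rule, and the rules you end up writing may not cohere.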
> See? Artsy philosophy suddenly gets very relevant in the gritty techno world.
> That's why I think you might be quite interested in the issues I've been
> thinking about - everything from economics & governments & taxation to
> organization and responsibility. And more. But all mapped to the
> technologies that are changing all these things. There's tons of techies
> who understand what 'Java' is, but haven't got a clue what its relevance
> is to an emerging world. Then, there's lots of people thinking about the
> changing, globalizing world, but don't really have a feel for the
> technology. That space in the middle - trying to pull it all together -
> seems to be a very rarefied atmosphere.
> And responding to just one point of Dustin's:
> > As long as I'm a rational being, intellectual honesty will force me to
> > acknowledge that the evidence of history and humanity demonstrates the
> > existence of the natural law.
> See, I just don't get that part. 'Rational being', 'natural law'.
> What are those? The system is tautological. If you have a sense
> of 'absolute', then sure, you have a sense of 'natural law' and
> 'rationality'. But you can't prove 'absolute' by assuming it.
> What "evidence" am I supposed to appeal to
> to 'demonstrate' its existence? As Uri notes, ultimately these
> things are matters of belief, not proof. How did he put it to me?
> You can't prove an 'ought' statement from an 'is' statement.
> You look at the world around you and look inside yourself, and
> try to understand what you think and believe, and that's about it.
> You can't ask me to appeal to a different 'intellectual honesty'
> than the one I apparently have. I look around and see a world of
> bad guys who get away with it, far more than bad guys who get caught.
> Crime does pay, apparently. Where's the 'evidence' of natural law here?
> Again, not trying to get anyone to see it my way - just pointing
> out that Dustin's words above are hardly as intuitive (to me)
> as they seem to be to him.
> > Dustin
I'm including all the preceding stuff - I don't know if this is the
most efficient way. Whatever - as you know, I'm kind of a 'landlubber' in
these waters.
Yes, you can post that last message. I'm writing you a direct message
because...I don't quite know, actually, why. I guess I'm just not used to
speaking in front of an audience.
By the way, in reply to that last little bit (in reply to Dustin),
about what you think and believe and 'intellectual honesty', I'm not sure
I understand your point. You've said, numerous times, that your personal
views (and certainly your behaviour) basically conform with a conventional
version of 'good' behaviour, i.e. not stealing, lying, murdering and so
on. On the whole, despite, perhaps, minor deviations, these are pretty
much the rules of thumb you go by, and, if we want to risk a hasty
prediction, probably will continue to go by. Now, whatever the reasons for
this are (psychological, historical, emotional, etc., etc.) this is an
empiric fact. Its truth, as long as you keep acting this way, seems to
have the same qualities that other facts, like 'dropped objects fall',
have, insofar as what you call your 'knowledge' depends on them. Now,
your problem, if I understand correctly, is not with your PERSONAL maxims
of behaviour, but with your inability to apply them universally, right? You
are perfectly comfortable saying "I personally think X is good," but you
have no way of deciding if Joe Shmoe, who thinks that X is bad, is wrong.
(is this the relativism you're talking about?) Now, there doesn't seem to
be any intrinsic difference (from the point of view of what constitutes
your knowledge) between the 'physical' fact and the 'behavioural' fact,
right? A fact is simply something that is consistently observed to be
true. Probably, the reason why it doesn't offend your 'rational
scientific' sensibilities to generalize your accumulated physical
observations into physical 'laws' is that there seems to be a pretty
convincing criterion which does away with the problem of subjectivity (a
physical 'relativist' who tries to convince us that gravity is all a
matter of perspective, could gently be shown off the side of a building)
Unfortunately, there doesn't seem to be so self-evident a criterion for
establishing normative (i.e. ethical) laws. (despite the nice sound of the
opening lines of the American Declaration of Independence) What is the
point of all this rambling? Not so much to demonstrate that moral beliefs
rest on a sound foundation, as to suggest that what goes by the name of
scientific knowledge isn't intrinsically any sounder, beyond the fact that
you may personally be more convinced of its truth. (For whatever
reason, including the fact that it is difficult for someone to credibly
hold a contrary opinion) And if that is so, then
it really does come to the question of what you believe, and how strongly
you believe it. And I think that's where your own behaviour comes in.
Again, you PERSONALLY probably conduct your life in what western popular
conception would consider a more or less 'good' manner. Why? Are you too
dumb to realize that the whole thing is just a lie? Unlikely. So why do
you keep behaving in this way? Because you're afraid of the consequences
of 'defecting'? Sometimes. Most people take their foot off the gas when
they see a cop. (that's not necessarily a moral issue, but it's the same
idea) But that's not always true, as you say yourself. People do 'bad'
things all the time, and more often than not, they are so much the better
for it. Haven't you ever been in a situation where you thought, with
pretty high probability, that you could get away with something which you
thought was 'bad' and didn't do it anyway? (say giving back incorrect
change) Some people have tried to tackle this problem by devising
'rules of thumb' or heuristics which, though not optimal behaviour in all
situations, in the long run, on an evolutionary time-scale, can be shown
to be in the species' best interests. (for stuff on this, you may want to
read some stuff on socio-biology; rule utilitarianism (as opposed to
act utilitarianism, is in this vein; the game theoretic approach is along
these lines too, and it can offer a more sophisticated approach, in terms
of actually devising rigorous deductive arguments a la Ken Binmore)
But Binmore would be the first to reject his own arguments as 'proof'
of the existence of 'objective morality.' He explicitly deals with human
behaviour, and tries to offer an explanation of why humans act the way
they do. That is decidedly a different question, Ron, than the one you're
asking. But Binmore would probably say that you're asking a misconceived
question, because of what Hume called the 'naturalistic fallacy.' This
has to do with what you said, about not being able to derive an
'ought' from an 'is.' Except you're only taking half of its meaning. i.e.
sure, if you can't logically derive a 'categorical imperative' (e.g. 'Drop
the apple!') from a 'hypothetical imperative' (e.g. 'if you wish the
apple to fall, drop it!') then the whole project of trying to base an
ethical world view on observation is impossible (and, therefore, besides
depleting forests and fostering fantastic academic careers, pointless.)
But at the same time you have to remember that you also can't look at
fallible human behaviour and expect to derive, consistently, ethical
maxims. That, I think, is your mistake. You look around you and see many
people 'cheating' a lot, and everyone 'cheating', at least, a little. And
from that you are deducing, incorrectly - and in spite of your own
admitted intuitions and beliefs (let alone behaviour) - that ethics, or
moral maxims, as such, are defunct. Looking around you, at best, can offer
you explanations of how and why homo sapiens behaves as it does. It will
never be able, for simple logical reasons, to offer you a code for how
homo sapiens should behave. Does that mean that such a code cannot exist?
No, for the same reason that it can't justify any particular code.
So, you may object, that brings us right back to relativism. Because
all I've said is that we don't really have any tools for deciding 'good'
or 'bad' beyond our own intuitions and beliefs. But if you think about it,
I think deducing relativism from this fact is incorrect. Relativism could
conceivably be an ethical viewpoint, but it certainly isn't the only one,
and at any rate, doesn't derive from our lack of a tool to prove its
unacceptability. Furthermore, when someone believes that 'murder is bad'
he usually believes this to hold true for any member of homo sapiens (of
course, 'murder' would have to be more specifically defined to be of much
use as an ethical concept) and not just himself and his surroundings. You
are asking such a person the wrong question if you want him to 'prove' to
you the truth of such a belief, (for logical reasons mentioned above). One
thing you could intelligibly ask him, though, is to try and convince you
why you too should hold such a belief. He might then offer you some
version of 've'ahavta l're'aha Kamoha' (it's amazing how many places this
simple idea has been regurgitated in more or less original ways: Rabbi
Akiva, New Testament, Kant, Rawls, to name a few). Aside from its
aesthetic appeal, in the sense of being a unitary, parsimonious tenet
which can be the basis for a highly sophisticated, deductive ethical world
view, it has other assets such as disposing of many 'ugly'
viewpoints, such as Nazism, say, which a true relativist would have to
allow into his scheme of things. Why? Because you'd be hard put
to find a Nazi, in the past or present, however ardent he may be, who
would honestly want to be gassed himself. An ethical view based solely on
the 'do as you would be done by' principle exposes Nazism, or any
philosophy which preaches the harm of fellow homo sapiens (Nazi attempts
to say that certain races are inferior enough to be excluded from
homo sapiens were more shoddy scholarship than anything else) to be
masochistic psychosis at best, and intellectual bankruptcy at worst.
If you're interested in exploring various attempts to develop the above
principle systematically, you might want to look at Kant's "Metaphysics of
Morals." He makes a whole bunch of other assumptions too, like 'people
should be treated as autonomous agents' as ends in themselves rather than
as means, and it's pretty easy to find inconsistencies, but he has some
interesting ideas. Basically he uses the concept of 'universalization' as
his way of defending a behavioural maxim as ethical or good. If you can
universalize a maxim without reaching a 'physical' or 'natural' paradox
(I don't remember if that's exactly the term he used - he's been
criticized quite a bit for mixing this idea up with a logical
contradiction) then it can be defended as a 'categorical imperative' (i.e.
a prescriptive statement). An example might be 'tell the truth.' This is a
categorical imperative because if you universalize its negation, i.e.
'don't tell the truth' then language falls apart, and human existence, as
such, does too, something which is manifestly untrue, as the very quest
for morality predicates the existence of moral or human beings. It's full
of holes and problems, and it says similar things to what has been said
elsewhere, but it's a neat approach anyway. (Of course, this whole thing
muddles the naturalistic fallacy problem, and Kant has been accused,
by Binmore among others, of not having understood Hume)
Another approach is John Rawls' "Theory of Justice." He uses an
interesting idea called the 'veil of ignorance,' to hypothesize about an
initial state ('initial' in a metaphoric sense, not literal or
chronological) in which no one knows what 'position' they will be in, in
the actual world, in terms of assets, capabilities, intelligence, etc...
The world state which would be considered 'fair' by such hypothetical
individuals, is the world state which Rawls would consider 'just.' (Hence,
the statement: "Justice is fairness.") I hope I'm not misrepresenting
these various thinkers too badly - some distortion can't be helped, in so
short a description (and with so fallible a memory) - but that's the basic
idea. The good thing about Rawls is that he doesn't mix up observation
with prescription, so Hume would feel comfortable with him. i.e. he tells
you, right up front, that his theory of justice is an intellectual
exercise, not a naturalistic proof of ethics. He tries to offer you a
palatable, compelling and more or less consistent view of things, to
justify a specific ethical stand, which everyone can decide for
themselves, to what degree they find it convincing. True, you might point
out that it is just a glorified version of 'do as you would be done by'
but I think it contributes a little bit more insight than that.
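A toy rendering of the veil-of-ignorance idea, on the maximin reading usually attributed to Rawls (the societies and welfare numbers are invented):

```python
# Behind the 'veil of ignorance' you don't know which position you'll
# occupy, so (on Rawls's maximin reading) you rank whole societies by
# the welfare of their worst-off member. Numbers are invented.

societies = {
    "laissez-faire": [1, 5, 20],   # big spread, miserable worst-off
    "egalitarian":   [6, 6, 6],
    "mixed":         [7, 8, 12],   # unequal, but the worst-off does best
}

def maximin_choice(options):
    # pick the society whose minimum (worst-off) welfare is highest
    return max(options, key=lambda name: min(options[name]))

print(maximin_choice(societies))  # mixed
```

Note that maximin can prefer an unequal society to a perfectly equal one, provided the inequality leaves the worst-off better off - which is Rawls's 'difference principle.'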
Finally, for practical matters, Ron, you'd probably find Binmore of
most use. In "Game theory and the social contract" he makes it quite clear
that he is thoroughly behind Hume (i.e. wants nothing to do with Kant's
metaphysical 'sophism') and offers a game theoretic viewpoint of how
society holds together. The basic idea is that everyone follows
'enlightened self interest', producing equilibria, which can be viewed as
'self policing agreements' or 'clauses' of a social contract. There are
various different ways of analysing how individuals interact based on the
one fundamental assumption of rationality (defined loosely as consistency
between goals and behaviour - i.e. without assuming any particular
distribution of preferences or goals over the players) The great
advantage of this viewpoint is that it provides a framework for
understanding seemingly 'irrational' behaviour - say, Saddam Hussein's
behaviour preceding and during the Gulf War. Such behaviour may look
irrational to one who imposes a particular preference structure on
Saddam Hussein, say, a preference structure suspiciously similar to one
conceived in a western, democratic, perhaps Judeo-Christian society. A
little bit of inquiry into the context Saddam finds himself in could
suggest a more plausible preference structure, which, when combined with
the other salient players, could explain the historical developments there
as 'rational.' I've actually gone astray, though; all I really meant to
say is that game theory's assumptions are loose enough to allow you to
analyse a lot of divergent possible 'social contracts' within the same
axiomatic framework. That, in turn, should help you deal with your worries
about relativism. Because despite the whole assortment of different
societies and divergent patterns of behaviour, you'd be surprised, simply
empirically, how little actually changes. There are an awful lot of common
denominators between different societies, particularly the more stable
ones. (Behaviours such as murder, theft and deceit don't make for very
stable strategies and would be difficult to sustain in equilibrium for any
length of time. Actually, it is precisely the fact that these are not
norms which may make them optimal strategies to the relatively few who
adopt them. A thief has a reasonable hope of capitalizing on his theft -
for which he must take some risk - if he attributes a low probability to
the event of someone else stealing from him.) All of this may seem to be
a continuation of other stuff like Kant
or Rawls, and it is, to a large degree. The practical advantage is that it
is a highly formal, axiomatic language which can be used for sophisticated
deduction. A lot of stuff has been written over the past few decades,
mostly in economic applications, but if you're interested in analysing
models of interacting rational agents, networks, and stuff like that, you
may very well be able to find something of use in the existing literature.
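The parenthetical point above - theft as an optimal strategy only for the relatively few who adopt it - is essentially the hawk-dove game of evolutionary game theory. A sketch with invented payoff numbers (V = value of the contested resource, C = cost of a fight, with C > V):

```python
# Hawk-dove sketch (invented numbers): "hawks" grab, "doves" share.
# V is the value of the contested resource, C the cost of a fight.
V, C = 4, 6  # C > V, so a population of pure hawks is not stable

def hawk_payoff(p):
    # expected payoff to a hawk when a fraction p of the others are hawks:
    # vs. a hawk, fight for an expected (V - C) / 2; vs. a dove, take V
    return p * (V - C) / 2 + (1 - p) * V

def dove_payoff(p):
    # vs. a hawk, yield (payoff 0); vs. another dove, share (V / 2)
    return (1 - p) * V / 2

print(hawk_payoff(0.1) > dove_payoff(0.1))  # True: preying pays while rare
print(hawk_payoff(0.9) > dove_payoff(0.9))  # False: it fails once common
print(round(V / C, 2))  # 0.67 - the mixed-equilibrium fraction of hawks
```

The two payoff curves cross at p = V/C: below that fraction predation beats sharing, above it the reverse - matching the letter's intuition that such strategies are self-limiting.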
Sheesh! I wasn't intending to write all of that. It just kind of spurted
out. Feel free to censor it at will, if you still want to post it.
That's what you get for encouraging social science people to get in
on your conversations. Jacques calls it 'blah, blah.'