Re: On the sustainability of progress.

Ron Resnick
Sat, 10 May 1997 12:29:45 +0300 (EET DST)

At 10:07 AM 5/9/97 PDT, Adam wrote on FoRK:
>Remember that discussion we had a week ago about energy sources, etc?
>Well, I found a really good site on the Web about the sustainability of
>human progress (in things like material goods, population growth, food,
>energy consumption, and raw material consumption), and as we all know,
>anything we find on the Web by definition MUST be true. :)
>Check out the site yourself sometime, it's got lots of interesting
>opinions and links:
>Or, for a table of contents:
>Or, for a specific discussion of global warming and how it is both
>possible to avoid it and/or recover from it:
>The page is maintained by John McCarthy, a professor of computer science
>at Stanford. The guy who coined the term "Artificial Intelligence".
>The guy who invented Lisp. The guy who got the Turing Award in 1971.
>So we're not talking about a lunatic here; we're talking about a really
>smart guy who thinks things through and looks to rational explanations
>(like Rob) and/or economic explanations (like Rohit) of things.

Preeminence and rationality in one field of inquiry are no guarantee of
lucidity and worthwhile contribution in another. Shockley won a Nobel
Prize for the invention of the transistor, but was a total nutcase on eugenics
and voluntary sterilization. So what does that prove? Only that
'trust' is a very complex thing, and trusting object A on subject alpha
doesn't extend to trusting that same object A on subject beta.

The only way to effectively manage distributed trust, with new objects
competing for our attention all the time, is to build an individualized
Web of Trust. That way, if I trust Adam on subject Karma, and Adam tells
me that Rohit has good stuff to say on Karma too, I can extend my trust
to Rohit. But if I have no reason to trust Adam on subject
AlternativeMusic, it means diddly to me that he recommends GreenDay.
We do this all the time, implicitly, in analog life. The question is how
to formalize, extend, and automate the principle in distributed software.
The WoT is all about the extended associativity of trust.
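The Adam/Rohit example above can be sketched in a few lines. This is a minimal illustrative sketch, not any real Web-of-Trust protocol (PGP's is far more nuanced); the class, names, and the simple "trust is transitive within a subject, never across subjects" rule are all assumptions of this example.

```python
class WebOfTrust:
    """Toy subject-scoped trust graph (illustrative only)."""

    def __init__(self):
        # direct[(truster, subject)] -> set of parties trusted directly
        self.direct = {}

    def trust(self, truster, subject, trustee):
        """Record that `truster` directly trusts `trustee` on `subject`."""
        self.direct.setdefault((truster, subject), set()).add(trustee)

    def trusts(self, truster, subject, trustee, seen=None):
        """Trust extends transitively, but only within one subject:
        trust on Karma never spills over into AlternativeMusic."""
        seen = seen or set()
        if truster in seen:
            return False  # avoid trust cycles
        seen.add(truster)
        peers = self.direct.get((truster, subject), set())
        if trustee in peers:
            return True
        # Delegate: anyone I trust on this subject can vouch for others.
        return any(self.trusts(p, subject, trustee, seen) for p in peers)


wot = WebOfTrust()
wot.trust("me", "Karma", "Adam")
wot.trust("Adam", "Karma", "Rohit")
wot.trust("me", "AlternativeMusic", "Rob")

print(wot.trusts("me", "Karma", "Rohit"))            # True: delegated via Adam
print(wot.trusts("me", "AlternativeMusic", "Adam"))  # False: no trust on music
```

The key design point is that the trust relation is keyed on (truster, subject) pairs rather than on trusters alone, which is exactly what blocks Adam's GreenDay recommendation from riding on his Karma credentials.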

>Among the choice answers on the index page (which I still recommend you
>visit, because it's got LOTS of links... for example, the page of
>irrational quotations at
>) are:

Will check.

Anyway, the original notion, initially raised by Jim before we hijacked
it with "is the science right?", was that there is more to life than
information technology and networks. While constantly thinking about the
supposedly really big picture of all these smart bits running around, we
need to be reminded occasionally that there are, of course, far bigger
pictures than that.

The biggest picture of all is on the cosmological scale. (Actually,
there is an even bigger picture - that of the meta-universal truths of
mathematics and logic, which can exist whether or not there is a universe
to house them.) But back to cosmology: who really cares about
insignificant planet Earth, with its curious carbon-based lifeforms running
around on their petty problems, when the whole of their universe will
eventually either crunch up into nothing or expand endlessly into a deep
freeze? Besides, long before either scenario, the sun will have gone red
giant on us.

So questions like "does human existence actually matter? who really
cares if we 'progress' or not?" pale on the universal stage.
I mean, does it matter if we pollute and despoil a planet that won't be
here in a few billion years? Who really cares if we drive ourselves
extinct as a species in some technological folly like a global nuclear
war, or burn off all our ozone?

But since we have the right to be petty, subjective humans, we can limit
ourselves to our petty closed system, with axioms like "Human life is
sacred. We must preserve and promote a world that allows future
generations of humans to have a life at least as fulfilling as our own,
if not more so."

In this (in)formal system, concepts of 'poverty', 'suffering', 'disease',
etc., and their more appealing negations like 'comfort', 'pleasure', and
'health', become relevant, where they were meaningless cosmologically.

In this system, what counts are the issues of global warming,
overpopulation, nuclear stockpiles being sold on grey markets, killer
bacteria immune to antibiotics, trends of religious (and other) fanaticism
around the world, intolerance, and the old favourites of just-plain
meanness and cruelty. Here we have humanity's apparently limitless ability
both to soar with the angels and to wallow in the deepest slime pits of
pure evil. We get societies like Germany, which gave us Bach and Goethe
on one hand, and Himmler and Mengele on the other.

So where does that put the supposedly "big picture" of a sea-of-smart-bits?
Put it this way: humanity will probably suffer as much, and derive as much
pleasure, from a world that has smart bits as from one that doesn't. As
with any technology, the ability to derive benefit from it is about equal
to the ability to create suffering with it. For every distributed medical
consultation that saves a life, there will be a criminal act that leverages
the infrastructure for harm.

So, does it really matter whether we produce it or not? I think the answer
here is the same as for all technology and progress. It's pointless to ask
"is progress good? should we do it?". Although progress is made by humans,
who presumably have a conscious will and so ought to be able to stop, it's
impossible to stop doing it - it's an essential part of the curiosity and
tinkering that define humans. The most we can do, as concerned and
responsible people on the cutting edge of our field, is to promote an
awareness of the negative implications of our work - how it can be used
nefariously by others. Just as there were conscientious dissenters on the
Manhattan Project, I think we are in a similar situation today. I'm not
saying "don't do it" - on the contrary, do it, but be responsible in
addressing its liabilities as well as its advantages.

There's also the individual big picture we each have for ourselves.
Just yesterday I had an experience like this. I read an article in the
Toronto Star online - doubtless your local paper has virtually identical
articles every so often - about a foster mother who was looking after
two young girls, sisters aged 3 and 1. She leaves the 1-year-old on a bed
with her sister in the room, goes down to the laundry room, hears a thud
and a cry, and comes back to find the baby on the floor. She asks the
3-year-old why she let the baby fall out of the bed. The 3-year-old
answers, "she just did". The foster mum whacks the 3-year-old on the back
of the head hard enough to make the child stumble and become disoriented.
The child is taken to hospital and dies 2 hours later. The foster mum gets
a 3-year sentence, of which she serves 6 months, for killing a kid.

Then, in my car driving home that afternoon, the radio station plays
Clapton's "Tears in Heaven". I had known that he wrote it for his
dead son, but I hadn't known the story of how it happened. The
radio announcer (in Hebrew, of course) claims that Clapton had
been horsing around with his 7-year-old son on the balcony of
his 4th-floor New York apartment and, in tossing the kid in the air,
accidentally let him fall over the side. (Is this true?) Can you
imagine the pain and guilt the guy must drag around with himself
every day of his life?

Anyway, after these two episodes, the only thing I really wanted to do
was get home and hug my kids. The whole thing - Java, objects, the Web -
who really cares? In my "closed system", they pale compared to my family.

Gawd, what a bunch of mush, huh? :-)