[BITS] Skewering AI for Fun and Profit -- 230 Final Exam

Rohit Khare (rohit@bordeaux.ICS.uci.edu)
Fri, 13 Mar 1998 06:18:30 -0800


[Also at http://www.ics.uci.edu/~rohit/230f.html --RK]

ICS 230 Final Exam [1] -- Rohit Khare, March 11, 1998

"Magnetically levitated trains make the trip from Los Angeles to Las
Vegas in 27 minutes."

Something seems utterly fraudulent about that sentence, yet it sounds
natural enough recast as "XML-encoded structured personnel records are
reused directly within the tax-withholding and the paycheck-mailing
subsystems." Namely, that information technologists operate in an
everyday world of miracles: we speak in the _future-perfect tense_.1

Debating a scientist or engineer can be exasperating, because he or
she feels fully entitled to presume the logical, eventual consequences
as immediate givens: MIME does this; HTTP does that -- whether
currently shipping products do or don't. It seems petulant, almost
pedantic, to keep backtracking for caveats. _Of course_, we can do
these things when we have to; hurry along, we have to move further up
the trail if we wish to look over into the Promised Land...

Or, as Hugo de Garis argues, and Marvin Minsky once did, into the
Dystopian charnel house of mankind obsoleted by his own AI
brainchildren.

Looking at these two articles, "Meet Shaky, the first electronic
person" (Nov 1970) and "The 21st Century Artilect: Moral Dilemmas
Concerning the Ultra Intelligent Machine" (May 1989), one's first
reaction is gleeful mockery of such plainly extremist predictions.
Let's consider why, though. Reflecting on my own analysis, I proceeded
from a model of the authors -- scientists as primary sources,
journalists once removed -- to the community -- scientific and
political -- and only then to a judgment of technical feasibility.
Furthermore, the form of the materials -- one-sided presentation of
the promise -- undermines them as 'unscientific'. The Establishment has
adjudicated similar debates with other technologies, and has set rules
for doing so.

_Trusting Scientists._ The earlier article was written for a
general-interest magazine by a 'lay' reporter. He began with a
laboratory curiosity (a mobile robot) and spiraled outward to
interview other leaders of the community and to explore the wider
implications of intelligent machines. He isn't an adversarial
inquirer, though: he can't put the ideas on trial; he can't even
identify the patron saint of computing theory ("Ronald [sic] Turing").
The CNN correspondent followed the exact same script almost thirty
years later, parroting the creator's line that "the machine quickly
decided it was camera shy", much as the first quoted an anonymous grad
student that "so far, we have not achieved computer orgasm."

Individual actors are free to push their views on credulous observers;
but they also reflect larger communities. In this case, a little
background knowledge about the field of AI informs us there has been a
long-simmering feud between symbolic analysis (search, planning,
language) and 'connectionism' (neural nets, cellular automata,
agents). The earlier article pushes the former viewpoint ("in 3-8
years we will have a machine with the general intelligence of a human
being," Minsky) as strongly as the later one praises the latter
("brain-based computers will be a trillion-dollar business within 20
years," de Garis) -- and neither acknowledges the other.

_Trusting Theories._ These communities formed around two different
theoretical world views (the rational mind vs. emergent consciousness)
-- what can Occam's Razor tell us about their plausibility? The only
existing intellects we can study, our own minds, work both ways. Both
theories also seem to scale poorly. Chess, the holy grail of the
symbolic camp, has fallen without shedding light on human thought; it
diverged from its rationale as an analogue of human reasoning. Neural
nets in capital markets
make a profitable living, but without hope of ever justifying their
trading decisions. These stories give us pause in either case: why
should readers believe early successes will bloom into human-scale
robots or artilects? We were asked to have faith because each mimicked
the brain, but neither camp's successes were founded on that belief.

_Trusting Science._ The logical result (completely censored from the
readings) is stalemate. Not just between the two schools, but without
forward progress at all, it seems. Far from racing towards human
intelligence, today's projects seem to be stuck filling in the holes
left behind by previous decades' high-water-mark demos. The
fundamental limits to growth were not presented alongside the breathy
predictions. Not that the Media can't fail in the opposite direction
-- consider the environmental panic over the Club of Rome report --
but that it is less fervently committed than Science to paired-off
death-match debates. K. Eric Drexler's _Engines of Creation / Engines
of Destruction_ is an archetypal example: the first half outlines the
inevitable development of nanotechnology, the second details its
threats, the forces arrayed against it, and the technical objections
to its fruition.

These articles are not isolated examples. Today's Los Angeles _Times_
also includes an unchallenged blurb by a Columbia physicist that we
will have artificial intelligence by 2020 -- presumably just in time
for the asteroid collision predicted for 2028 on Page One. In fact,
there's little to these claims beyond the sort of mathematics that
argues multiplying bacteria could outweigh the planet in a week:
unconstrained forecasting. Nevertheless, this isn't a flaw of the
theories; it's a reflection of social structures, the beliefs of
_people_ in _communities_. Science, after all, is just a set of rules
to check our instincts toward magic and mysticism -- unchecked, we are
left with tales of "robot Armageddon."
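
The arithmetic behind such claims is easy to check; it's the missing
constraints that do the work. A back-of-the-envelope sketch in Python
(the cell mass and doubling time here are rough assumptions, not
measurements):

    import math

    # Unconstrained exponential growth: ignore food, space, and physics.
    EARTH_MASS_KG = 5.97e24   # approximate mass of the Earth
    CELL_MASS_KG = 1e-15      # rough mass of one bacterium (assumption)
    DOUBLING_MIN = 20         # optimistic lab doubling time (assumption)

    # Doublings for one cell's descendants to outweigh the planet:
    doublings = math.log2(EARTH_MASS_KG / CELL_MASS_KG)  # ~132
    hours = doublings * DOUBLING_MIN / 60                # ~44 hours

    print(f"{doublings:.0f} doublings, about {hours / 24:.1f} days")

Under two days, comfortably inside the week: the forecast fails not in
the multiplication but in everything the model leaves out.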

1 Meme attributed to Phil Agre

_________________________________________________________________

Reactions to [2]Truth in Science: _a post-mortem_

My analysis above may be a textbook illustration of the social
construction of science, but it's not just to suck up to the class
leader. One of the most valuable things about being acculturated
within an intellectual family is reading the front pages of _Science_
every week: the plagiarism scandals, the funding battles, the
conference reports. Not the technical papers, but the insider's view
of the field. That's why I'd claim a precocious understanding of
politics in science -- even if I'm not a successful politician in
practice.

My beliefs haven't much changed from what I wrote a month ago, but the
readings this term affect how I'd articulate them. For one, I would not
use the word 'scientists' as much: it helps to explicitly enumerate
the other roles implicated in the enterprise -- consumers, engineers,
regulators, journalists, &c. And, I would have to make a subtle change
to the second of my criteria ("trusting theories") -- that Occam's
Razor tradeoffs can be primarily measured by 'entropy' (the
algorithmic complexity of a hypothesis-as-program), but the
tie-breaker is political: how a hypothesis fits into the prevailing
_Weltanschauung_. It's hard to argue with a hypothesis of high
predictive power, but when faced with vaguer choices, we start asking
whether "God would knit with superstrings" or such-like. A 'good'
hypothesis would fit our current 'scientific' biases against
anthropomorphism, against extrasensory perception, against God. Those
are distinct from
merely technical objections due to investment in a working hypothesis.
It's the difference between rejecting faster-than-light travel based
on general relativity or because it's morally wrong to escape the
Solar system and our environmental responsibility to it.
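
As a concrete (if crude) illustration of that entropy criterion, here
is a minimal Python sketch of a two-part description-length score; the
hypotheses and residuals are made up, and compressed length is only a
computable stand-in for algorithmic complexity, which is uncomputable
in general:

    import zlib

    def mdl_score(hypothesis_source: str, residuals: bytes) -> int:
        # Bits to state the hypothesis-as-program, plus bits to encode
        # whatever data the hypothesis fails to predict.
        model_bits = 8 * len(zlib.compress(hypothesis_source.encode()))
        error_bits = 8 * len(zlib.compress(residuals))
        return model_bits + error_bits

    # A terse law with small errors should beat a baroque rival that
    # fits only marginally better (both hypotheses are hypothetical):
    terse = mdl_score("y = 2*x + 1", b"\x01\x00\x02")
    baroque = mdl_score(
        "y = 2*x + 1 + sum(e[k]*sin(k*x) for k in range(40))",
        b"\x01\x00\x01")
    print(terse < baroque)  # True: the simpler program wins

The political tie-breaker -- fit with the prevailing _Weltanschauung_
-- is exactly the part no such score can capture.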

Quite separately, it would be interesting to contrast students' faith
in engineering legitimacy with their faith in scientific claims -- faith
in markets,
research & development, regulation, and showmanship that must underlie
technological determinism. There does seem to be a difference in the
social construction of the science of the atomic bomb and the
technology of its development, testing, and delivery, which augurs
well for the coexistence of the history-of-science and the
history-of-technology, even if we can't define the difference between
the two.

In that case, I would have to say I entered with a faith in classical
economics as-modified-by the results of contemporary game theory
(namely, bounded rationality), and it survived largely unchallenged.
The main consequences of social construction illuminated by the
readings were related to the structure of the enterprises: the kind
and nature of the innovating firms under various appropriability
regimes and market topologies; and the feedback cycles guiding the
growth of technological systems through regulation and competition. I
attacked _Consumption Junction_ not just to inflame debate, but to
inform the comparison between models. Frankly, I still think there's a
lot to be said by analyzing the producers' decisions: technology gives
the customer what she needs, not what she wants :-)

References

1. http://www.ics.uci.edu/~king/230mt.html
2. http://www.ics.uci.edu/~rohit/truth.html