[FoRK] Computational Creationism

Eugen Leitl eugen at leitl.org
Fri Jul 2 07:34:04 PDT 2010

(oldie but goodie)



Computational Creationism

Brian Hayes

The great age of automata, or lifelike machines, began toward the close of
the Middle Ages and lasted into the 17th century. The technological marvels
of that era were clockwork confections—intricate assemblies of gears, cranks,
levers and ratchets. Clocks displayed the phases of the moon and the annual
progress of the sun through the zodiac; they had animated figures to strike
the hours and entertain onlookers.

From machines that imitate life and the heavens, it is an easy step to the
idea that life itself might be a mechanical process and that the stars could
be driven by some kind of celestial geartrain. The clockwork universe figures
in the thinking of Dante, Galileo, Kepler and Newton. Another exponent of
clockwork in the sky was Descartes, who also compared animals to mechanical
automata. And Thomas Hobbes wrote: "For seeing life is but a motion of Limbs
. . . why may we not say, that all Automata (Engines that move themselves by
springs and wheels as doth a watch) have an artificial life?"

Today, the chronometer's ticking escapement is no longer the epitome of high
tech. Brass gears have given way to silicon chips. And as the computer has
conquered technology, it has also taken the place of clockwork in metaphor
and myth. Novels and films no longer portray us as cogs in a machine we can't
control; instead we are bit-players in someone else's virtual reality. At a
slightly more serious philosophical level, an ongoing debate asks whether
computational processes could account for everything happening in the
universe, or whether something more—something nonalgorithmic—is needed. And
occasionally the question is asked whether the entire universe might be a
vast computer cogitating on The Answer.

The World as Machine

The vision of a cosmic computer has inspired literary and philosophical
speculation, but the roots of the idea lie in the everyday practice of
computer science. It's the sort of notion that might occur to anyone who
spends enough time twiddling bits—especially late at night in a caffeine
frenzy. There are two versions of the idea, one belonging to the hardware
hacker and the other to the software wizard. The distinction between them is
this: In the first case the world is computing something; in the second the
world is computed by something.

Figure 1. A computer made of Tinker Toys

The hardware variant springs from the observation that even though computers
are complicated and finicky devices, you can build one out of almost
anything. The beige box on your desk runs on microelectronic circuits, but in
principle all of its functions could be performed by hydraulic or pneumatic
or photonic devices. Danny Hillis and his friends built a computer out of
Tinker Toys and string. Leonard Adleman performed a computation with strands
of DNA in a test tube. Other schemes would compute with enzymes or living
bacterial cells or spinning atomic nuclei.

The counterpoint to all this technological diversity is theoretical
equivalence. Provided that a machine never runs out of memory and that you're
willing to wait long enough for an answer, almost all computers can compute
exactly the same set of mathematical functions (and they fail on the same set
of uncomputable problems). The proof of equivalence relies on the idea of an
emulator: a program that allows one machine to run programs written for
another. The usual practice is to show that a given computer can emulate a
Turing machine, the theoretical computing device invented by Alan Turing in
the 1930s, whose underlying technology is the marking of paper tape.
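The equivalence argument becomes concrete once you see how little machinery a Turing machine actually needs: a tape, a head, a state, and a rule table. The following sketch is a minimal interpreter in Python; the rule-table format and the bit-flipping example machine are illustrative assumptions, not anything drawn from Turing or from the article.

```python
# A minimal Turing machine interpreter. A "machine" is just a rule table:
#   {(state, symbol): (new_state, new_symbol, move)}  with move in {-1, +1}.
# The tape is stored sparsely as a dict from position to symbol.

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))  # position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in rules:
            break  # no applicable rule: the machine halts
        state, tape[head], move = rules[(state, symbol)]
        head += move
    # read back the contiguous portion of the tape
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape[i] for i in cells).strip(blank)

# An illustrative machine: flip every bit, halt at the first blank.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
}

print(run_turing_machine(flip, "1011"))  # -> 0100
```

Proving that some other device can emulate this interpreter (rule lookup, symbol overwrite, head motion) is all it takes to show the device computes everything a Turing machine can.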

Should we be surprised that so many kinds of machines can all compute the
same things? Forty years ago Eugene Wigner wrote of "the unreasonable
effectiveness of mathematics in the natural sciences," asking why
differential equations should work so well to describe the physical world.
The converse question is just as intriguing. Why do all the resources of the
material world lend themselves so readily to computing mathematical
functions? Why is it you can pick up just about any spare parts lying about
the universe and turn them into logic gates or binary adders?
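Part of the answer is that one universal gate suffices: any material that can realize NAND can be wired into all of Boolean logic, and from there into adders and whole processors. A sketch in Python, with the gate decompositions written out the way you might wire them from relays, marbles, or Tinker Toys (the decompositions are standard textbook constructions, not from the article):

```python
# Build a one-bit full adder out of nothing but NAND.

def nand(a, b):
    return 1 - (a & b)

def xor(a, b):
    # Standard four-NAND construction of exclusive-or.
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def full_adder(a, b, carry_in):
    s1 = xor(a, b)
    total = xor(s1, carry_in)
    # Carry out: (a AND b) OR (s1 AND carry_in), expressed in NANDs.
    carry = nand(nand(a, b), nand(s1, carry_in))
    return total, carry

print(full_adder(1, 1, 1))  # -> (1, 1): binary 1+1+1 = 11
```

Chain such adders bit by bit and you have a binary adder; the substrate (silicon, string, DNA) never enters into the logic.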

One answer is that the world is a computer. It was designed to have exactly
this property. The most celebrated speculation along these lines is found in
Douglas Adams's Hitchhiker's Guide to the Galaxy. Adams reveals that the
planet Earth was constructed as a gigantic computer meant to carry out a
five-billion-year inquiry into "the meaning of life, the universe and
everything."

Others imagine computers on an even grander scale, reaching beyond this
little wet rock of ours to fill the entire universe. One visionary of the
cosmos-as-computer was the late Konrad Zuse, who was also among the earliest
of all hardware hackers (he had a digital computer up and running years
before ENIAC). Zuse conjectured that the ground fabric of the universe might
be a kind of computer called a cellular automaton. This same idea has been
pursued with even greater vigor by Edward Fredkin, a free spirit of computer
science who led the Information Mechanics Group at MIT in the 1980s.

Figure 2. A cellular automaton

A cellular automaton is an array of many simple processors arranged in a
lattice. Think of a tiled floor with a processor on every tile. Each
processor (or cell) has only a finite number of possible states and can
communicate with only a finite number of neighboring cells. At each tick of a
master clock, every cell chooses its next state according to a fixed
"transition rule." The best-known example of a cellular automaton is the Game
of Life, invented 30 years ago by John Horton Conway of Princeton University.
The cells in Life have two states—alive or dead—and the transition rule
simply counts the number of living neighbor cells.
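The transition rule just described fits in a few lines. Here is a sketch in Python, assuming a small grid whose edges wrap around (a torus, a convenience for finite grids rather than part of Conway's definition): a live cell survives with two or three living neighbors, and a dead cell comes alive with exactly three.

```python
# One tick of Conway's Game of Life on a grid of cells,
# each alive (1) or dead (0), with wraparound edges.

def life_step(grid):
    rows, cols = len(grid), len(grid[0])

    def neighbors(r, c):
        # Count living cells among the eight surrounding positions.
        return sum(grid[(r + dr) % rows][(c + dc) % cols]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))

    return [[1 if (grid[r][c] and neighbors(r, c) in (2, 3))
                  or (not grid[r][c] and neighbors(r, c) == 3) else 0
             for c in range(cols)] for r in range(rows)]

# A "blinker": three live cells in a row oscillate with period 2.
blinker = [[0] * 5 for _ in range(5)]
for c in (1, 2, 3):
    blinker[2][c] = 1

once = life_step(blinker)          # the row becomes a column
twice = life_step(once)            # and back again
print(twice == blinker)            # -> True
```

Every cell applies the same local rule in lockstep; all the famous complexity of Life, gliders included, emerges from nothing more than this.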

At first glance a cellular automaton doesn't look much like our world. For
one thing, our space appears to be continuous: Where are the cells? Fredkin
suggests they are simply too fine to see—perhaps as small as the Planck
scale, 10^-33 centimeter. A subtler objection is that our world teems with
fast-moving particles, such as electrons and protons whizzing around inside
atoms, whereas only signals travel through the lattice of a cellular
automaton; the cells are immobile. Here too Fredkin has an answer. A fairly
simple transition rule creates packets of information that glide
frictionlessly through the cellular automaton like idealized billiard balls,
rebounding elastically when they collide. Maybe what we perceive as motion
has a similar basis, and elementary particles are made of nothing more
substantial than information.

Cellular automata are a natural choice for a computational universe because
they require only local communication between nearby processors. There is no
need for wires or other long-distance rigging. The deepest laws of nature
also seem to be strictly local, making for a good match between physics and
computation. These aspects of cellular automata—the dual ideas of
"programmable matter" and "computable physics"—have been explored in great
detail by Tommaso Toffoli and Norman Margolus, who were both members of the
Information Mechanics Group.

In the absence of compelling evidence—and this is a case where we have a
compelling absence of evidence—why would anyone choose to believe that the
universe is busy churning out calculations? The Douglas Adams fantasy
suggests the allure of a hidden purpose. Why are we here? To compute the
meaning of life, the universe and everything. All those events that seem so
random and pointless will be explained when the cosmic computer prints out
the final answer. (Either that, or the computer crashed ages ago, and we've
been waiting all this time for someone to reboot us.)

Fredkin's vision of the universe as cellular automaton is a little different.
His computer isn't necessarily searching for bits of wisdom; it may simply be
computing its own next state, over and over, with no goal in mind. Yet
Fredkin too wonders about invisible undercurrents and mysteries of purpose.
He po. ."

    "A model of what?"

    "What do you mean, of what? Of a civilization, obviously, except that
it's a hundred million times smaller."

    "And how do you know there aren't civilizations a hundred million times
larger than our own? And if there were, would ours then be a model? . . ." 

The story has a happy ending, more or less. Trurl's Lilliputians escape their
confinement, overthrow the tyrant and begin playing with nuclear weapons,
like any self-respecting civilization.

Hans Moravec of Carnegie Mellon University offers another perspective on the
theme in his book Mind Children. He imagines a Game of Life where after many
ticks of the master clock some of the patterns in the cellular automaton
develop consciousness. "The cellular intelligences (let's call them the
Cellticks) deduce the cellular nature and the simple transition rule
governing their space and its finite extent. They realize that each tick of
time destroys some of the original diversity of their space and that
gradually their whole universe will run down." So the Cellticks make contact
with their creator by spelling out a message on the computer screen. Then the
Cellticks and the programmer go off together to explore the programmer's
universe, hoping to find another level of reality before this one too runs
down.

A Computational Copernican Principle

In most tales of simulated worlds, the tissue of plausibility becomes
thinnest at the interface between levels of reality. I can believe (just
barely!) in a civilization that exists only as a computer program. Where my
suspension of disbelief becomes least willing is in the crossing over between
a physical world and an algorithmic one. In movies the leap is often made by
putting on a skullcap studded with electrodes or by plugging a cable into
your spinal cord. It seems to me there is a fundamental category violation
here. I am made of atoms and molecules. How could I enter a world of bits and
bytes? (But maybe that's what all simulated creatures say.)

Moravec, in his parable of the Cellticks, handles this issue more carefully.
His Cellticks begin by investigating their own ontological status: they take
a scientific approach, studying the
transition rules that constitute the laws of nature in their universe. "Once
in a long while the transition rules are violated, and a cell that should be
on goes off, or vice versa . . . . After recording many such violations, the
Cellticks detect correlations between distant regions and theorize that these
places may be close together in a larger universe." From this slender clue
they learn the structure of the computer that is running the program that
creates their world, and they decipher its machine language. We would call
this process reverse engineering, but to the Cellticks it is physics.

It seems significant that malfunctions have a role in the Cellticks'
cosmological investigation. In a properly functioning computer, a program
cannot learn anything about the hardware on which it is running. True, the
program might think it has learned something. It might go digging through
read-only memory and find buried there the telltale markers of an Apple II
computer. But the ability of one computer to emulate another makes such
digital archeology untrustworthy. The Apple II might be an emulation running
on an IBM PC, or a HAL 9000. If the emulators are written correctly, they can
reproduce even the most obscure quirks and bugs of the target hardware.
Unless you get lucky and spot a glitch in the Matrix, no program will detect
the fraud.

Once you begin to take such ideas seriously, the situation goes from bad to
worse in a hurry. Consider this: If a simulation is complete enough to have
some kind of intelligent entities within it, then those entities could also
build computers to simulate worlds, which could include still more computers
and simulations of their own. In this tower of simulations, where would our
world fit? To answer that question it seems best to invoke a computational
Copernican principle. Just as the earth is unlikely to lie at the center of
the universe, our level of simulation is unlikely to lie at either the very
top or the very bottom of the tower.

This principle leads to some strange arguments. Although we might not be
directly aware of any levels of
simulation above us, we ought to know about those below us, since they are
our own creations. But no such levels exist; we have not (yet) created any
artificial civilizations. Thus we seem to be at the very bottom of the tower,
which is unlikely, and so it seems safe to assume we are real flesh and blood
after all. But this reassuring chain of reasoning has a dark side. If we ever
do construct a simulated world rich enough in resources that its inhabitants
can create their own simulated worlds, then on that basis alone we might have
to conclude that we ourselves are a simulation.

Is the Universe Computable?

One group of scholars would argue that our world cannot be a computer
simulation because it includes something that is uncomputable, namely the
conscious human mind. Three advocates of this view are John Searle, Hubert
Dreyfus and Roger Penrose. They marshal quite different arguments in support
of their positions, but all three conclude that no algorithmic process could
reproduce everything that goes on in the mind. This idea that consciousness
guarantees our reality echoes the Cartesian motto "I think, therefore I am."

For those who see no vital difference between brains and computers, the
Searle-Dreyfus-Penrose arguments offer no refuge. But perhaps some other
computability constraint will intervene. After all, even if it turns out we
can simulate a single human mind, it doesn't necessarily follow that we can
simulate the entire visible universe.

Writing a program to simulate even a simple physical system—say a few balls
on a billiard table—gives you respect for nature's computational abilities.
There is so much to keep track of. If you get careless in your
collision-detection algorithm, two billiard balls will glide right through
each other—a glitch in the Matrix that is sure to be noticed. Performing such
a computation for all the atoms in the universe would be truly daunting.

Jürgen Schmidhuber of the Istituto Dalle Molle di Studi sull'Intelligenza
Artificiale has written a paper titled "A Computer Scientist's View of Life,
the Universe and
Everything." He concludes that the simplest strategy for simulating the
universe might be to compute all possible universes simultaneously. The
program for a typical universe would be long and messy, with many tedious
special cases. But a trivial metaprogram avoids these complications. It
simply enumerates all possible universe-simulating programs in order of
increasing length, and executes them simultaneously by interleaving their
execution.
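This interleaving trick, often called dovetailing, guarantees that no single nonterminating program can starve the rest: each round admits one new program and advances every active one by a single step. A toy sketch in Python, where a "program" is just a generator counting to its own index, a stand-in for the universe-simulating programs the metaprogram would really enumerate:

```python
# Dovetailing: enumerate programs in order and interleave their execution,
# one step of each active program per round.

from itertools import count, islice

def toy_program(n):
    # Stand-in for the n-th universe-simulating program (an assumption
    # for illustration): it runs for n+1 steps, then halts.
    for step in range(n + 1):
        yield (n, step)

def dovetail(programs):
    active = []
    source = iter(programs)
    while True:
        nxt = next(source, None)      # admit one new program per round
        if nxt is not None:
            active.append(nxt)
        for prog in list(active):
            try:
                yield next(prog)      # advance this program one step
            except StopIteration:
                active.remove(prog)   # this universe has halted

# Run the metaprogram over the infinite stream of toy programs,
# observing only the first ten computation steps.
steps = list(islice(dovetail(toy_program(n) for n in count()), 10))
print(steps)
```

Even though infinitely many programs are queued up, every step of every program is eventually reached, which is all the metaprogram needs.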

Deus ex Machina

The world seems very solid when you stub a toe, and the suggestion that it
might all be a mere pattern of bits appears downright silly. But even an idea
that's not taken seriously or literally can have a powerful influence.

The clockwork universe was first of all a theological notion. A clock was
thought to imply a clockmaker; and yet, once the clock was wound and set in
motion, there was no further need for divine intervention. Thus the religion
of the clockwork universe was a cool and inoffensive minimifidianism, with a
creator but no presiding ruler. In a similar way, a computational theology
might suppose a departed programmer, who clicked on the button marked "Go"
and then walked away. But even without a meddlesome programmer on the scene,
free will is hard to find in a computing or computed universe. Our actions
seem to be ruled by an algorithm whose scope we cannot know.

Perhaps there is a way out. In principle, every detail of a computer's future
can be deduced from its present state. Nevertheless, anyone who writes
programs has occasionally been surprised by their behavior. Some of the
surprises are unpleasant: They are bugs. From another point of view, though,
surprises are the whole point of computation. If you could work out in your
head everything a program might do, you would have no need to run it on a
machine. This idea can be stated more strongly: Some programs are
"incompressible," in that no shorter program yields the same result, and
there is no faster way of learning what the program does than to run it from
start to finish. If our program turns out to be incompressible, our history
may hold surprises the programmer never anticipated.

Maybe we can even keep the program and dispense with the programmer. Just as
the need for a clockmaker gradually faded from the clockwork universe,
perhaps a computational universe could evolve without a computermaker. There
is much interest lately in self-organizing systems, emergent computation and
evolutionary algorithms. What these buzzwords have in common is the theme of
computations done without any need for someone to specify the program in full
detail. One of these ideas might allow us to compute our lives away in
comfortable anonymity and autonomy.

And a further flight of metaphysical fancy can wipe out the last traces of
computational creationism. In the tower of simulations built upon
simulations, the ever-nagging question is who built the computer at the top
of the tower. But an obvious topological trick will rid us of this
inconvenience. Simply wrap the tower around and connect the bottom to the
top, forming a vicious circle. In this ring of worlds, we simulate ourselves.

© Brian Hayes

This Article from Issue

September-October 1999

Volume 87, Number 5

Page: 392

DOI: 10.1511/1999.5.392

© Sigma Xi, The Scientific Research Society
