Internet Computing Interview with Robert Cailliau.

I Find Karma
Fri, 20 Feb 1998 04:03:29 -0800

Makes me want to subscribe to IEEE Internet Computing [well, that, and
Rohit's forthcoming bimonthly column on transport protocols...]. This
interview is gold, Jerry, gold:

This whole article is one long, sad observation of the true damage
Windows has done. Read Cailliau's description of the Web As It Might
Have Been.

> So you lost that because you decided to release it to the public?
> We lost it because we couldn't port it easily from
> NeXTStep. Writing an editor is much harder than writing a passive
> browser. The guy who brings out a passive browser spreads it
> faster, but it's not necessarily better for the user.
> ...
> In retrospect I think we should have concentrated much more on
> getting a better port of the NeXTStep version out. We should have
> done things by saying okay, if you want to call this a Web browser,
> you must have editing in it. But this is not the way this crowd
> operates.

The description of many-windows-in-a-small-amount-of-real-estate is good:

> It's really hard for people to escape the paradigm of a single window?
> That's also defined the success of Windows, Windows '95 and so
> forth: you don't bring up large numbers of multiple windows. I
> confess myself that one of the things I had to get used to when I
> first worked with NeXTStep was the large number of independent
> windows all over the place. You need a fair amount of screen real
> estate to accommodate them, but once you get used to it it's
> actually much better.

And the following is just a plain-great philosophy:

> Well no, I think you would have had Web TV. But you would not have
> had Word-to-HTML.
> One of the things that I want to get back is the idea that the Web
> version is the original, and if you really need paper product you
> derive it from there. Tim had a little application which he used to
> translate HTML files into text, collate them, and print a book that
> included a table of contents and index. I have a few of those
> printouts in my office, but the program is long since gone and
> forgotten.
> Our idea was that the original document is on the Web. What's on
> the Web is not the result of a conversion from something else, but
> rather, it's what you start out with, it's how you think. You don't
> really learn the rhetoric of Hypertext unless you have fast,
> multiwindowing hypertext where you can create links by dragging and
> dropping, and navigating around in your structure without changing
> the window contents. But without the Hypertext as the original, you
> end up in the situation where we are in now.

Something else to check out is Cailliau's "A Little History of the World
Wide Web" that covers 1980 through June 1995.

The full text of the IEEE Internet Computing interview appears below.
-- Adam


Robert Cailliau is Head of the Web Office, IT Division, CERN where
he has been since 1974. He was the Head of Office Computing Systems
there from 1985-1989, and in 1990, he proposed, with Tim
Berners-Lee, a hypertext system for access to CERN documentation.

That joint project became the World Wide Web.

Cailliau started the series of International World Wide Web
Conferences by organizing the first one at CERN in May 1994. He is
a cofounder of the IW3C2, the committee that oversees the
conference series.

IC's Editor-in-Chief, Charles Petrie, recorded this dinner
conversation with Cailliau in November 1997.

How did you happen to start thinking about what would become the Web?

There are actually many anecdotes and details, but at that time I
had just stepped out of Office Computing Systems where I was group
leader for a while. Much earlier, I had been developing document
handling systems at CERN, and I had also been toying with
HyperCard. We had all buildings connected with AppleTalk, and it
was just conceivable that we could get something done with
hypertext over the network.

So you obviously had Macs then.

Oh yes, we still have 1,600 of them, and I'm not about to give up
mine. I've got a 3400 which runs at 250 MHz. It's a very pleasant
machine. But we're talking about 1987 and '88.

I wanted some of my people to seriously consider Hypertext in
writing active documentation, but there was just no way to get them
into that: they were still thinking very much in terms of
mainframes and big machines and huge databases. But then there was
some reorganization at CERN, and I took the opportunity to get out
of office computing systems and move on to electronics and
computing physics (a new division that eventually disappeared). I
proposed a range of possible projects for myself, one of which was
Hypertext, and another of which was analyzing physics data using
object-oriented systems and NeXTStep.

But why were you so hot on Hypertext?

I felt that we needed to be able to do more than just produce
something and then output it on paper; we needed to be able to
navigate within it. There had been this project called CERNDOC,
which was a system completely based on IBM's VM/CMS. It was
sort of a hierarchical system in which you could search for
documentation, get a document out, and then maybe print it. But I
felt that the whole thing should have been hypertext-based, or that
we should at least look into what could be done with it. I thought
that we could maybe even do things on the network, but I had not
thought of the Internet.

But you were driven by some strong intuition about Hypertext?

Yes, because searching for things in trees was not right. With
HyperCard you could link anything to anything.

So you saw the inadequacy of trees, and you saw the ability to jump
around in HyperCard...

Yes, and possibly also the networking design because we already had
the problems of everyone having a machine and knowing that the
electronic form of a document was on a disk somewhere, but it
should actually have been available on a server -- maybe at the end
of the corridor. Of course, with AppleTalk protocols and AppleShare
you could do it all, so the whole idea was there essentially, but
it was not unlocked.

Was it inaccessible only because you didn't have a good indexing scheme?

We didn't have any indexing scheme. We also didn't have an
interface that was good enough: the user had to know too much.

And this was apparent to you right away?

Well, not right away. These things grew over many years. I had
also done some structured documentation systems in 1976, so I had
all this on the back burner. I had already been asking, "How do you
use the computer to do this documentation correctly?" But I didn't
know much about the formal work.

I was also working with the control system of the Proton
Synchrotron--the smallest, but most complex, of our particle
accelerators--and there in the 1970s we had set up a system of
computers that talked to each other. Programs actually sent
themselves over the network, and I'd written a byte code
interpreter for mobile code.

This story has not been told much, but we had all that, and the
documentation was there as well. We had our own process control
network; we had our own interpretive language. We had the whole
thing, and this was 1973-74.
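The mobile-code scheme Cailliau describes, programs shipped as byte code and executed by an interpreter at the receiving end, can be sketched in a few lines. This is purely illustrative of the technique; the opcodes and encoding here are invented, not those of the 1973 CERN/Norsk Data system:

```python
# Minimal stack-machine byte code interpreter (hypothetical opcodes,
# not the 1973 control-system design; just the general idea).
PUSH, ADD, MUL, HALT = 0x01, 0x02, 0x03, 0xFF

def run(code):
    """Execute a byte string; return the top of the stack at HALT."""
    stack, pc = [], 0
    while True:
        op = code[pc]; pc += 1
        if op == PUSH:                 # next byte is an immediate operand
            stack.append(code[pc]); pc += 1
        elif op == ADD:
            b, a = stack.pop(), stack.pop(); stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop(); stack.append(a * b)
        elif op == HALT:
            return stack[-1]

# Because a "program" is just bytes, it can be sent over any network
# link and executed remotely. This one computes (2 + 3) * 4:
program = bytes([PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, HALT])
print(run(program))   # 20
```

The point of the byte-code representation is exactly machine independence: any node with the interpreter can run code authored anywhere else on the network.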

So in the early '70s, you had networking, mobile code, and byte
interpreters. You had the equivalent of Java and the Internet.

In a sense, yes, and in fact we had graphical user interfaces. We
had independent control consoles, and the whole lot, but it's all
gone now.

Did anybody know about this work?

Of course. This has been published, but like many things, it was
defined and applied in a small niche, and not as part of mainstream
computing. Mainstream computing in the early '70s was cards.
Batch. Input, treatment, and output on mainframes. But we were
already in real time. All of the things I described in the control
systems were also in real time, and there was not much room for
messing about. We had a cycle time of one second in the particle
accelerator and 10,000 control parameters.

Was that a design feature?

Absolutely. In fact, the reason that we custom-designed the control
system network ourselves with an external firm was that there was
no other way of doing it. You couldn't buy these things off the
shelf, and certainly not with response times like that. The
response time was the main driving force behind the design because
of the demands of the real-time tasks linked to it.

We need to write another "history of computing" article about that
control system of 1973. It's all documented, and I think it is
important because in it was all this stuff about remote execution,
remote access to information, networking, documentation, doing
everything with the same computer.

You could download code?

And files.

But you could download byte code and execute it remotely? I wonder if
the Java people know about any of this.

I'm pretty certain the Java people don't know. I just hope we
haven't thrown away all the hardware. The whole thing was done on
Norsk Data computers. They were a small Norwegian company--the last
independent computer company in Europe, I think--and after that
nothing existed except from the US, and of course from the
US-perspective, if it isn't done in the US it doesn't exist in
computing, right?

Like it never happened.

Yes, exactly. It's as if it never happened if computing is done
outside the US.

But the Web certainly made the US sit up and take notice.


So you had all this background, and it led you to believe that you
needed a better user interface, you needed an indexing scheme that let
you jump around, and this is what led you to the strong intuition that
some form of Hypertext was the key.

We had all the background. I'm describing the receptive field and
circumstances in which it all happened. It seemed Hypertext was
probably going to be a key, and it was silly not to at least
investigate it. At the same time, Tim Berners-Lee had very similar
ideas, but he also knew about the Internet, which I didn't know
anything about. A common friend of ours, Mike Sendall, (who
happened also to be Tim's boss) said, "Look, you guys know what you
are doing. Why don't you sit down and talk to each other?" So I
read Tim's proposal.

What was his proposal?

It didn't have a name, but it said essentially the same things. It
was also Hypertext based, and it was planned to be accessible to
all the different formats, and so forth. There were two main
differences: he used the Internet and he had something running to
show. So I essentially gave up and joined
him immediately. It was obvious that there was no use trying to do
anything else but push his proposal through.

It seemed to have the attributes you were looking for?

Yes. So then we wrote a common proposal to get resources and
management attention, and to make it an interdivisional project--we
were in two different divisions, and a CERN division is about two
to three hundred people--and that project proposal was called World
Wide Web. That was May 1990.

Then we worked hard, and Tim got the NeXTStep version out fairly
early. The NeXTStep was a browser and editor--which was the same
thing; there was no difference between the author and the reader.

Tim told me that. He said that you really intended that the
implementation be as much an editor as a browser, but somehow got lost
along the way. That was your intent?

Yes, and I think in retrospect the biggest mistake made in the
whole project was the public release of the Line-Mode Browser. It
gave the Internet hackers immediate access, but only from the point
of view of the passive browser--no editing capabilities. The
NeXTStep version was much more powerful. And it was much more
elegant. It was quite depressing, having to step back and say, "I
have to port this to a PC?"

It's like computing cannot jump too far ahead at any one time.

You seem to have to reinvent these things every 10 years.

Well, let me get the sequence straight. First there is the
editor/browser on NeXTStep ?

It also had graphics, of course. You could display PostScript.

You could display PostScript immediately?

Of course. It was designed as a program to display graphics. And it
showed a different window at every click. You did not get lost so
easily.

So it was integral. And it displayed GIF?

No. It did not display bitmaps; bitmaps aren't scalable, right? You
can't do anything with them. They're nonsensical. Bitmaps are not
graphics; they're the display result of graphics. You can't express
graphics in dots, and a bitmap does not have a metric. It has no
units.

You want vector graphics?

Well you want it expressed in units--in measurable distance
units--because the pixels happen to be different on every screen
and on every printer. When you want to do mathematics by making
bitmaps out of the formulae, you end up with something absolutely
horrible, depending on how you treat them. Most of the time even on
the screen it ends up horrible.

So, you started off with NeXTStep and an editor/browser displaying
PostScript. This was all WYSIWYG, right?

It's all WYSIWYG--no HTML, and no URLs. To see the URL you had to
call up a special window from a special menu.

So you couldn't see a URL?

Why would you see the URL? Who needed to see the URLs?

What kind of navigation scheme did you use? Was there anything like
bookmarks or a history of places you had visited?

Why would you need that? Every time you clicked, you got a new
window. If you found an interesting window, you linked it to your
home page. Your home page, which was a piece of HTML sitting on
your machine--that's what a home page really means--functioned as
your "bookmark" page. Since your home page was still there,
alongside with the newly found interesting page, you would put your
cursor there, click on your mouse, and drag the link. That's it. If
you needed more than one page for bookmarks, you would just create
new local HTML pages.
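The mechanism Cailliau describes is simple enough to sketch: a "bookmark" is nothing more than one more anchor appended to a local HTML file. The file name and helper below are made up for illustration:

```python
# Sketch of "your home page is your bookmark file": adding a bookmark
# just means appending another <a> link to a local HTML page.
from pathlib import Path

HOME = Path("home.html")   # hypothetical local home page

def add_bookmark(url, title):
    """Append one link to the local home page, creating it if needed."""
    page = HOME.read_text() if HOME.exists() else "<h1>My links</h1>\n"
    page += f'<p><a href="{url}">{title}</a></p>\n'
    HOME.write_text(page)

add_bookmark("http://info.cern.ch/", "The first web server")
print(HOME.read_text())
```

No separate bookmark format is needed because the bookmark page is itself an ordinary, editable, linkable page.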

It captures the URL for you?

Yes, so bookmarks make no sense, right? I mean, why do you want to
separate these things out into special formats for bookmark or a
special format for this or a special format for that? Everything
was just simple, pure HTML. No need for anything else, because it
was instantly editable. Just click here, click there. And it's
fast. Much faster.

If you can instantly edit, you don't need URLs?

Right. You start out by writing your own set of documentary pages
on your local disk. Then you would click your insertion point in
the browser. Like in every good application, if you wanted to put
the insertion point, you'd click once. If you wanted to make
something work or to follow a link, you clicked twice. There was no
distinction between editing mode and browser mode. We lost all that
along the way. What we see now is mostly inflated rubbish.

So you lost that because you decided to release it to the public?

We lost it because we couldn't port it easily from
NeXTStep. Writing an editor is much harder than writing a passive
browser. The guy who brings out a passive browser spreads it
faster, but it's not necessarily better for the user.

You couldn't distribute it on NeXTStep, because NeXTStep wasn't
available everywhere, so you decided to port it to X?

But we couldn't. Neither of us had any X knowledge. It takes at
least 6 months to pick up a programming system.

So you decided simply to port it to Unix on a line command basis?

Yes, that's how. Also, the reason behind that was that the command-line
browser, which was only a browser, was written by a technical
student as a demonstration model. This is where I had a fight
with the other Internet hackers. It was written in pure, flat
C. Not ANSI C, mind you. The C it was written in was sort of
compilable by any old C compiler on any old machine. The idea was
that it had to be accessible to everyone, everywhere, even on an
IBM PC--AT or lower. At one point I tried to put some order in and
redevelop the documentation and some of the code. I started putting
in some indentations so I could at least read the code, but
unfortunately that was ANSI C, so it didn't compile. So I was
"ordered" to put it back in its old state. I said okay, that's how
you want it? I will stop coding. I just can't read this stuff.

So why was this a mistake, in retrospect?

It was a mistake because it gave a lot of people the impression
that the Web was something which is not graphical and not
editable--just another medium to look at. That was not what we
wanted. We wanted an authoring tool.

So when you say that wasn't the right way to spread it, what was
the right way to spread it?

In retrospect I think we should have concentrated much more on
getting a better port of the NeXTStep version out. We should have
done things by saying okay, if you want to call this a Web browser,
you must have editing in it. But this is not the way this crowd
operates. They can talk to each other without ever seeing each
other, over e-mail, and then...

What's wrong with that?

There's nothing wrong with that, it just doesn't necessarily
produce what's needed. It's also not the way society works. Anyway,
all of these reflections are about little parts of the environment
in which the Web happened. Once the Line Mode Browser was out
there, it kept spreading--especially once the library was out there
in the public domain.

Getting the CERN administration to accept that the basic code
library had to be put in the public domain was one of my
achievements. I felt it was important to distribute the basic
library freely--to make it available to everyone without any
strings attached, so it could explode. I just think we should not
have done that before we had the editable text object out there as
well.

This has shaped all of cyberspace since then, and the editor has only
come back recently, sort of through the back door, as a separate mode.

It could have been a whole different world.

And you certainly wouldn't have had Web TV.

Well no, I think you would have had Web TV. But you would not have
had Word-to-HTML.

One of the things that I want to get back is the idea that the Web
version is the original, and if you really need paper product you
derive it from there. Tim had a little application which he used to
translate HTML files into text, collate them, and print a book that
included a table of contents and index. I have a few of those
printouts in my office, but the program is long since gone and
forgotten.

Our idea was that the original document is on the Web. What's on
the Web is not the result of a conversion from something else, but
rather, it's what you start out with, it's how you think. You don't
really learn the rhetoric of Hypertext unless you have fast,
multiwindowing hypertext where you can create links by dragging and
dropping, and navigating around in your structure without changing
the window contents. But without the Hypertext as the original, you
end up in the situation where we are in now.
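The idea behind Tim's lost program, derive the paper book from the Web original, is easy to reconstruct in outline: strip tags to recover plain text, and collect the headings as a table of contents while collating the pages. This is a sketch of the concept only, not the original code:

```python
# Rough reconstruction of the idea (not Tim's program): collate HTML
# pages into plain text, gathering <h1>/<h2> headings into a contents
# list along the way.
from html.parser import HTMLParser

class BookMaker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.text, self.toc = [], []
        self._heading = None
    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2"):
            self._heading = tag
    def handle_endtag(self, tag):
        if tag in ("h1", "h2"):
            self._heading = None
    def handle_data(self, data):
        if self._heading:                       # indent h2 entries
            prefix = "  " if self._heading == "h2" else ""
            self.toc.append(prefix + data.strip())
        self.text.append(data)

pages = ['<h1>Intro</h1><p>Hello.</p>', '<h2>Links</h2><p>World.</p>']
book = BookMaker()
for page in pages:          # collate several HTML files into one text
    book.feed(page)
print("Contents:", book.toc)
print("".join(book.text))
```

A real version would also paginate, number the headings, and build an index, but the essential move is the same: paper output is a projection of the hypertext original, not the other way around.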

So I use Emacs now to author my HTML, and if the world were otherwise, I
would have this nice WYSIWYG system.

However, HTML is also defective, because HTML is far too
simple. For example, for mathematics there is no possibility of
raising something to a power, and there are no graphics in
HTML. Presumably this will change with the introduction of XML of
which I expect much.

How did this happen?

We got caught by time. The development of HTML was not a priority,
and of course, adding to it indiscriminately is not good. I'm very
adamant about this: I want to keep the structure separate from the
presentation. I hate it when a site forces, for instance, the font
size to Times 7.0 against everything that I try, with the result
that I just can't read it. I then have to download the HTML and
take the tags out if I want to read it. The style sheets more or
less help us, but they are not yet fully deployed.

By the way, we had some crude form of style sheets in the first
browser because I used that all the time for demonstrations. I
would use three styles: one for projecting from the screen, one for
printing, and one for actually doing the editing. No changes to the
HTML files!

But this whole business of doing the HTML right was not really high
priority. And the next step, which was to introduce mathematics, is
a very difficult one, and it still has not been solved.

That's amazing. LaTeX solved this problem a long time ago.

No. LaTeX has not solved it because it also does two-dimensional
layout. As far as I know, it does not separate the concept of
derivatives from how you note them. For example, the ideal thing
would be to say, if I have a derivative of X with respect to T, I
note this as "derivative of X with respect to T." And then
somewhere else, I specify that in the context of the paper I'm
writing, I will denote derivatives by just a dot over the X, or in
another context, I'll denote it by DX/DT.
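The separation Cailliau asks for can be illustrated in LaTeX itself: formulas state only the meaning ("derivative of x with respect to t") through a semantic macro, and the notation is chosen in one place. The macro name `\deriv` here is invented for the example:

```latex
% Semantic markup: every formula in the paper says only \deriv{x}{t}.
% Switching the notation for the whole document means changing one line.
\newcommand{\deriv}[2]{\dot{#1}}              % Newton style: a dot over x
%\newcommand{\deriv}[2]{\frac{d#1}{d#2}}      % Leibniz style: dx/dt

% In the body of the paper:
%   The velocity is $\deriv{x}{t}$.
```

This is still presentation markup underneath; Cailliau's point is that a true standard would carry the meaning itself, so a program like a computer algebra system could consume it.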

Okay, but at least LaTeX gives you one way.

It does not separate the presentation from the denotation. From the
meaning. You cannot take the LaTeX and stick it into something like
Mathematica, and get results out.

Yes, but that's because there's no standard way for doing that.

Exactly. If you start from a notation like that of Mathematica, for
example, the mathematics program works, and then you can decide how
you want to present it.

Then you've chosen Mathematica as your standard.

It's an example; mathematicians have developed systems, and a
general, non-proprietary standard can be made. Mathematicians have
been working on SGML DTDs for mathematics. But the Internet crowd
doesn't necessarily want to talk to them and vice-versa. There are
many missed opportunities here!

Getting the attention span of C programmers is like the battle I
had to fight with Tim and his people to make them adopt some form
of code management system. I would often go home on Friday evening
with some version of the Mac browser, and come in and report a bug
on Monday afternoon, only to find that over the weekend there had
been three more versions of the library. And not only had the
library changed, but the API of the library had changed. Not just a
set of bug fixes: the programming interface itself had changed. So you had to
start everything all over again.

I'm sorry, but one cannot work that way. You can get something out
fast if you can control the prototype, but in computing there is no
such thing as a prototype. The prototype scatters itself all over
the world.

You can't make a software model, can you? I can make a model
airplane and it can fly, but if I make a model piece of software
which actually works then that's equivalent to the real
application, right?

So what got released was what we have?

Yes. Now I still think it was a very heroic and very positive
collaboration of a lot of very enthusiastic youngsters around the
world who had absolutely no supervision at all.

What do you mean, all around the world? You mean it wasn't just the
group within CERN?

Well, we were only four at CERN. There were people like Dan
Connolly, who early on said, "Hey, wait a bit, and let's make HTML
a real SGML DTD," and Pei Wei, who wrote one of the very first X
browsers. There were quite a few people out there who put in their
enthusiasm and effort.

So it was an uncontrolled explosion, sort of a chain reaction?

Yes, and that's a very positive thing about it. There were all
these people that saw a common good and a common goal, and they
spent their weekends and nights on it. They did something which I
think in the end was very useful. The other extreme would have been
to make this very corporate and controlled, and that would not have
worked at all. So I would have preferred to have been a little bit
away from the extreme anarchy that actually happened, but I am
definitely glad we stayed away from the proprietary, corporate or
institutional way.

I really believe that no telecom, no software company, no hardware
company could have come up with this. Only such a dispersed,
intuitive, young, enthusiastic crowd could have done this--with all
the disadvantages to that as well.

But it was only possible within this culture.


So you're saying the Web really is the child of the Internet.

Oh absolutely, yes, though it's a child not just of the Internet,
but of an early network culture. It could have happened on another
open network, but there didn't happen to be any other one.

Sure. It has nothing particularly to do with the fact that it's packet
switched, but it does have to do with the culture of being open and wild
and free.

Yes. It's an academic network which happened to be on cables and
TCP/IP, but which could equally have been on paper and telephone
lines, and have led to other applications instead.

At the moment I'm reading the history of the Large Hadron Collider
project. It starts by recounting the history of high energy and
nuclear physics, and there, too, if you read what happened between
physicists--no matter where they were in the world, no matter what
the politics of the country was, no matter what their cultural
background was--all the physicists in the whole world stuck
together in the common attitudes about the future of nuclear
physics, atom bombs and so forth. Throughout the world, they were
independent of country loyalties and religious loyalties and
anything like that. They all worked for the good of everyone.

Are you saying the Internet is nothing new? It's the next incarnation of
the early century physicists?

It's a similar culture. It's a very similar thing that happened, I
think.

The picture that you've painted is that you and Tim made a joint
proposal, you built something, you got something much less than what you
built out on a line command browser, and then suddenly there was some
sort of explosion of interaction and nobody could be in control anymore.

Then, of course, came something that can be viewed either as
positive or negative depending on perspective: the release of
Mosaic. We had great problems conceptually with Mosaic because it
was sort of the Volkswagen Bug of transportation. Everybody can
afford it, it takes no time to install, it does something that is
new to you, but it's not quite transportation. The analogies are
all false, but you see what I mean.

But the VW Bug was very popular.

Right, and so was Mosaic. It was okay, but it was a single window
thing. A single window, non-editing thing that got its popularity
from two aspects: it was much easier to install than any of the
other, better, X Window based browsers that went before it because
it came as one big blob for Unix machines. Its second
characteristic which was very attractive was that it was close to
what people knew: it put the images in line.

I personally didn't want the images in line. It's a nuisance
because you can't keep the image in view. For example, when you
read a physics paper, you want the diagrams in view while you
browse the text. You don't want to lose that image, and you don't
want it to scroll out of sight. But all browsers today do that;
it's just like a platform-independent presentation of the printed
page.

So the cognitive dissonance, the difference between what they already
knew and what they were seeing wasn't very big?

Right. This is essentially like saying you stand on top of a
mountain, and you want to go down into a certain valley, but
everybody is from another valley so they go down into a valley
they recognize. But it's the wrong valley to go into, and once
you're in there, it's very difficult to get out.

So we're stuck in this valley.

Well we're getting out of it slowly, I suppose.

And then we have to find a whole other route out of the valley, with
SGML or XML or something.

Do you need XML?

Ah, you roll your eyes at XML! I don't know, do you?

Every time I hear extensible something-something-something I get
very nervous because it opens the door for incompatibility. It
means a version that works differently, which must be because you
don't have these-and-these macros or whatever it is that makes it
an extension. Yet I have hopes for XML.

Or plug-in.

Plug-ins are another way to do diversification. And okay, great. As
long as you keep a common standard it's fine. But if you don't
have that--and we don't seem to have that--then what?

So it sounds like you and Tim started off with something really great in
NeXTStep, and then something else more primitive took off and exploded
with the Internet community. Then in the third phase, Mosaic came
along. Yet another simplification; it didn't make use of everything.

Inside Mosaic, for a long time, was the CERN Web program library.

But it didn't make use of everything, right?

Well, the program library didn't have the editable text objects.

But you're saying it was some sort of simplification. It had GIFs in
line. What other kinds of simplifications did it have?

It had GIFs in line, but I don't think GIFs in line is a
simplification. It's a complication because now you have to have
your program do the GIF display, whereas we just used some external
application to display the images. But for some reason people
cannot get away from this TV paradigm where everything is on the
same screen, so everything has to be in the same window. I've seen
sites where instead of downloading the video and then using the
video player, they insist that you download a plug-in which plays
the video inside the window. So you get stuck with all these
plug-ins.

It's really hard for people to escape the paradigm of a single window?

That's also defined the success of Windows, Windows '95 and so
forth: you don't bring up large numbers of multiple windows. I
confess myself that one of the things I had to get used to when I
first worked with NeXTStep was the large number of independent
windows all over the place. You need a fair amount of screen real
estate to accommodate them, but once you get used to it it's
actually much better.

Do you think this is something that our children will accommodate
very easily?

No, I don't think so.

Why not?

Because a large fraction of the population is never going to be
able to handle the essence of computing.

Don't you think that's an older portion of the populace? I don't think
my daughter will need a network computer, but my mother needs a network
computer.

No. I think it has nothing to do with age. Let's say the current
percentages of the people that can and can't handle them is
different from the steady state distribution because a larger
percentage of the younger generation will be able to handle
them. My fundamental thesis is still that even in the steady state
there will always be a segment of the population that will not be
able to handle them--will not want to handle them. And the example
is all around you here.

Don't think America invented the network computer. We have had them
here in France for 15 years. They're called Minitels. To all
intents and purposes, it is a network computer, and it has not
spread beyond 50% of the households.

Even when it was given away for free?

It still is given away. You can still go to France Telecom and pick
up the basic model for free (although you won't get a telephone
book). The point is half of the households don't get one.


Because they're not interested, or they can't handle it. It is too
complicated an object with too many buttons on it. There's a
keyboard there, you know...

So you're saying this has something to do with people not being able to
program the clock on their VCRs?

Exactly. Yes.

But a true, say, e-mail appliance, wouldn't have all these buttons.

It would have a keyboard.

Yes, it would have a keyboard, but people who can type can handle a
keyboard. I'm thinking about my mother. She is a journalist, and she can
type. She just doesn't want to deal with the computer that she has to
install software for, or that she has to reboot. She wants an appliance
that is failsafe.

Even the appliance will run complicated Java applets with
interactions where you have to respond, and that's where it comes
back in. Maybe you don't have to handle the configuration in your
network computer, but you have to learn this Java applet which is
complex, complicated, different from whatever you saw before. You
have to declare yourself, you have to get your preferences in, you
have to configure it.

Am I hearing that you also don't approve of Java applets?

No, I didn't say that. I'm saying that with Java programming the
interface has come back. The diversity of applets is akin to
configuring my computer. Instead of having one interface to HTML
pages, I will have an interface to the bank and a different one for
the grocery store and one for....

This is a pain, isn't it?

Well, it won't go away. But people will not be able to handle
them. I'll bet you right now that there will be an irreducible
percentage of the population that will not be able to handle
network computers--with any sort of interaction with a complicated,
abstract thing. They just will not handle it. And the proof is in
the Minitel. You don't boot a Minitel; you don't configure it; but
it is the interface between all these different servers. If you go
to a particular server, it's different from this other server, and
many people just won't handle it.

So what you're saying is that people who can't do tax forms can't handle
even forms on the Web.

It's the same complication.

There's a cut right there.

Yes, and I'm not saying that this is 50% of the population.

So all that history aside, now where are you? What do you have to do
with the Web now?

Towards the end of 1993 I thought, this is really crazy. There's
so much out there, we should have a conference about Web
technology. To put it mildly, Tim was not in favor of this idea. In
fact, at one point, one of his comments was, "If you want to waste
your time on that, go ahead. But I think there are other things to
be done." (More on that later.)

However, I thought it was very urgent that we consider this a tool
for high energy physics. After all, CERN was paying our salaries.

You mean in '93 it wasn't yet used? In '93 everybody was using it.

No, no, no. Not everybody.

Well, we were using it in '93.

Yes, sure, but you were at a university. SLAC got the first server
up in the US on 12 December of 1991, and CommerceNet was 1994.

Anyway, the HEP Institutes were wondering what was going to happen,
and if CERN was taking it seriously. But the Web was never an
official project with CERN. In the list of official projects you
will see all kinds of physics experiments, but you will not see the
Web. Now I'm talking about 1992, 1993.

This proposal called the World Wide Web was never officially approved?
We all had the impression that this came out of something that was
widely used at CERN. What you're telling me is that it was accepted more
outside of CERN before it was accepted in CERN?

Not really, because the High Energy Physics Institutes also were
worrying very much about whether CERN was going to commit to this
and support it.

I was going around trying to push this in early '93. So at one
point I called a meeting with division leaders and directors, and
said I was planning a trip through the High Energy Physics
laboratories to the Hypertext conference in Seattle. I was going to
go to FermiLab (Fermi National Accelerator Laboratory), to SLAC, to
Los Alamos, and I wanted to know what I could tell these
people. Were we going to commit to the Web development and put some
manpower and resources other than Tim and me and a student each on
it? We're in November 1993 at this point.

So late!

This is how it goes in the place where the thing comes from, and
I'm sorry, but this is not unique. Also, this is not commerce,
right? We are a physics institute, not a commercial enterprise.

So we needed to get this out of CERN by either making it a
business, which Tim didn't want; by finding a suitable informatics
institute to take this over; or by finding European Commission
money. In fact, that September I had already concluded with
Fraunhofer-Gesellschaft the first purely Web-based European
Commission project.

So the whole rest of the world was in the dark?

No. I'm sorry, but the whole rest of the world you're thinking of
is the continental United States. That is a minor fraction of the
civilized world. Please put things in perspective. And it's
California, probably, you're really thinking of.

No, in Texas also. It was catching on in Austin, Texas, and it was
catching on in California.

Yes, yes, we had been in San Antonio, and I had been at the
University of Texas to get a demonstration working at the Hypertext
conference in 1991. I remember the 1991 conference very well. For a
selected part of the academia the Web existed, indeed. And for
another selected part of the public in Europe it was there.
However, the world at large did not even know about the Internet in
1993. It is not true that the Internet was known by the general
public before 1994. Anyway, at CERN I was lobbying very hard inside
to get the right resources after I came back from the 1993
Hypertext conference in Seattle where we were not even truly
present--I was our sole representative, and we had not submitted a
paper for the conference at all. But one-third of the demos were
about the Web.

When I came back from that, I thought, "Wow, this subject is big
enough to make a conference base." So I announced, 23 hours before
NCSA decided to do it, the first international conference on the
World Wide Web, to be held in Geneva at CERN in May
1994. Subsequent telephone calls with NCSA sorted out that yes, we
would have the first one because we announced it first, we were
ready first, and it was only appropriate that we have it
first. See, this is constantly a bloody rush with the United
States.

We just won't leave you alone.

Not only will you not just leave us alone, you will not stop
working. You will not switch off. You will not take holidays. You
will not enjoy life. You just work like crazy, and you can put that
in your article, if you like. It's very hard competing with you
guys. It's impossible.

Because we don't have a life.

Well I didn't say that. You said it. But anyway, I started the
Conferences, and it took a lot of time and energy. Together with
Joseph Hardin of NCSA we founded the International Conference
Committee, which is still going, and of which I'm the current
chair. We did the second conference that year in Chicago, and to
show how big this Internet hacker crowd was, in May I had 400
people here with another 200 or 300 who couldn't get in, because
for security reasons I couldn't fit more in--the restaurant
wouldn't take more.

You were turning people away?

Oh, yes. It was totally wild. People were saying, "But I don't need
any food. I'll stand in the aisle. I just want to come." Then we
had the second in Chicago in October of 1994, and there were 1,300
people. Then in April of 1995 we were at Darmstadt. In fact, the
next several conferences were decided at Geneva. It was going to be
Chicago, Darmstadt, Boston, and of course, at the same time, I was
doing the European Commission project--in '94 also we did the
consortium. Then at the end of '94 CERN decided that we were not
going to be the European representative, the European arm, of the
consortium, because we were going to do physics. So we transferred
everything to INRIA.

So they decided that the tail was wagging the dog?

Well, no, it was just that this was not our main mission. I had
actually always argued in that vein: CERN was the logical place for
the Web to happen, but not the logical place to keep developing
it. And of course, you must not forget that since 1992, in parallel
to all that, I was busy with splitting off (with some difficulty)
the CERN information from the Web information. I had to make it
distinct from what was happening in the Web development part while
still providing service to users who didn't care where the
technology came from.

Yes, but this happened everywhere.

Sure. And the first day is fine. But after a while you can't allow
a mix-up of corporate information and development information any
more. It was particularly critical at CERN because it started
here. The trouble with the first Web server, which you can see in
the books, is that everything in it was mixed up. We were
putting up information about CERN as a laboratory mixed with the
Web development documentation, and while you can do that at Day One
of the Web, once it takes some size, what you put up about CERN is
the concern of our press office and our Management. And because
we're an international institute, this is particularly important.
Also, we had to deal with a total non-approach to the whole issue
of publication.

A non-approach?

Today there is a whole history to look at. We had no such thing. It
just grew, or exploded in our faces, or whatever you might call
it. Anyone who wants to start today can access a vast amount of
literature about how to do it and how not to do it from all sorts
of cases where things have gone wrong and right and whatever. I was
sorting it out from nothing.

Another conflict with the Internet crowd was about access
protection. I was losing clients towards the end of '92.

You were losing clients?

Physicists were realizing that anything I put on the server was
visible worldwide. Now if physics experiment A is distributing the
minutes of its meetings, it doesn't want physics experiment B that
is also looking for the Nobel Prize to read them as well, right?

But they're scientists.

There are a lot of things you discuss with your colleagues, and
there are a number of things that you don't. So they were holding
back on further usage of the Web for what it was really for. They
were using it halfway, but not as a collaborative tool, because it
was too open. And this was not the idea of putting locks on the
Web. I mean, I consider a lock just a signal that says if you go in
here, you're trespassing.

Yes. You're not respecting the distance that I wish to keep. I am
not in favor of all this cryptography stuff, right? I have read
comments saying you should use cryptography even on your home
computer, because your wife might look at your disk. That's going
too far, but at least you put up some signal saying this is
private. I am now in the toilet, close the door. But this was not
well received in the beginning--not at all.

By the Internet crowd?

By what I call the Internet crowd.

It's funny to hear you talk about "the Internet crowd." You were part of it.

No, I have great difficulty considering this as a civilization. It
would have been okay, I guess, if they had also generated their own
incomes, but they did not. They were like a bunch of artists
living on the upper reaches of a society that was doing very
well. When it doesn't do very well, they are still tolerated.

Dependent upon patronage.

Right. So this young employee is a Unix wizard, and he keeps all
these things together, and at night he does extracurricular things,
like with the Web--so let him do it. But it's not in the
contract. There is, however, no way to reconcile our current
civilization with the so-called credo of the Internet: "We don't
believe in kings, presidents and voting. We believe in rough
consensus and running code." That credo is fine if you also grow
your own potatoes and bake your own pizza. Then it's okay, and I
can live with it.

But not if the kings are supporting you.

To maintain this philosophy you must become independent of
them. Then if you have found a way to sustain it maybe I'll join
you. But you must have some order somewhere. You cannot go without
a system of making decisions, or it just doesn't sustain
itself. And so maybe it won't. Or maybe I'm too old for this.

Anyway, all these things happened in parallel. The CERN internal
lobbying, the CERN information server, splitting it off from the
rest and making it usable, providing the physicists with usable Web
sites, access protection. And the conferences, the European
Commission project, that was all my part of it.

So how is it used today?

Well, we're still working very hard, even inside, because it is
still not perceived as a collaborative medium where there must be a
minimum of rules. We have only very recently gotten back to the
awareness of the general user. But again, that's a phenomenon which
is a pure result of the fact that the Web grew at CERN. People
have seen it here from the beginning; it was something lying
around. Elsewhere it has been done better because of our
experiences. The first guidelines for publishing on the Web came
out of other laboratories a lot earlier than they did here. CERN
has for a long time been a free-for-all on the Web.

What I'm hearing here are two things from you. One is that at first CERN
didn't use the Web as well as some other sites because it was there to
begin with, so people didn't respect it. Whereas, other people have sort
of discovered it and experimented and come up with the rules for using it.

Yes, this is partly the fate of being first. We're working on that,
and we're getting there.

But it also sounds to me like you're saying that in some sense the US
hijacked the Web.

Well that was unavoidable because the Internet culture was bigger
and more uniform in the US. It's a bigger pool, and there's also a
uniform language basin.

How do you mean?

Well, you publish something interesting in English in California,
and you can read it in Boston. You publish something interesting in
Athens, and you cannot read it in Hamburg, right?

So you mean the actual, natural language?


But also there was a bigger pool of this Internet crowd to which it

Yeah, I mean essentially America has completely wiped out--whether
this has been done consciously or not historians should find out; I
will not volunteer an opinion on this--but you have essentially
wiped out all computing industry in Europe.


Well tell me about computing in Europe. Name a single
shrink-wrapped package that you know of--a popular, shrink-wrapped
package--coming out of Europe.

How did we do this?

Name me one. One, just one.

I can tell you a very successful software concern out of Europe: SAP.

What do they make?

They are basically a consulting firm, but they push a particular piece
of software that goes in and integrates a business process. It takes a
lot of consulting; there's a lot of cottage industry.

True, we have that kind of stuff. And we certainly have the
talent. There's no doubt about it. And of course we invented a lot
of the technology--byte code interpretation or object-oriented
development. All good programming languages came out of here,
right? A lot of computer hardware concepts came out of here. All of
this stuff came out of Europe. Even the word "packet" came from
Davies. But our computer industry has been wiped out. We have no
chip design anymore.

But why is that? How did this happen?

You should consult historians.

You're telling me that this is kind of like Japan taking other people's
initial technological developments and commercializing them?

They have no computer culture either.

But in terms of computers, you're telling me that the US is the Japan of
computing?

The U.S. has kept an iron grip on anything that was strategic
computing development.

How have we managed to do that? I mean software is free, it's
everywhere. There was an Internet crowd here at CERN. How has this
happened?

It probably has partially to do also, but only partially, with the
flower power California culture. Which is somehow needed. You need
that sort of attitude to life to get a real computer hacker. Pure
hackers don't wear business suits. The guys that come up with the
ideas don't. You know, you need the Xerox Palo Alto Research
Center. It's definitely not a corporate thing.

Is this just culture? Is it the economic climate? Is it money?

No, I think the really innovative guy in computing is not really
driven by money. I don't think he is. Tim certainly is not. Most of
the people that I have seen that really have done something,
they're not driven by money. Those who are driven by money and have
achieved money, they have not produced any innovative ideas. I
shall not name them.


Anyone who slaps a "this page is best viewed with Browser X" label on
a Web page appears to be yearning for the bad old days, before the Web,
when you had very little chance of reading a document written on another
computer, another word processor, or another network.
-- Tim Berners-Lee