*TP and the future of the Internet

Joe Kiniry (kiniry@frankie.cs.caltech.edu)
Mon, 9 Sep 96 14:06:10 -0800

You wrote:
> Tim expects me to genuflect:
> > >Well, why SHOULDN'T a browser manage local files? If the next
> > >generation of HTTP is a Universal Transport Protocol, *TP, then
> > >managing a distributed file system is tantamount to managing a
> > >distributed ANYTHING.
> >
> > wrong again bitboy. what mr. a is talking about is no more than
> > opendoc
> Opendoc had its chance, and it flubbed it. There's a new sheriff in
> town.

i sit and ask myself _why_ is opendoc a failure? i mean, the
design of the technology is top-notch, it is becoming available on a
wide range of platforms (much like COM claims it will be in a few
quarters), and it satisfies a need that the industry felt was there.
my best guess is that it just plain wasn't delivered in a timely
fashion (in internet-years, mind you) and that the product ended up
being more heavyweight than many hoped (i've seen it slow a mac 9500
down pretty darn well).

> See, maybe Opendoc is too much. Its transport mechanism is a mess.
> IIOP is a mess: it's too big, it's slow, and it's a sledgehammer
> when a flyswatter will do. No, I'm convinced that a simpler,
> extensible system would be a better basis for a distributed file
> system (or a distributed ANYTHING for that matter).

i would not call iiop a mess. i'd call it an rpc-centered protocol
that tried to satisfy lots of different people's needs. come on
adam, you know that _all_ rpc mechanisms are slow, regardless of the
overhead imposed by the complexity of the protocol or packet
design. this won't always be the case, mind you, and i'd rather go
with an option that can actually support everything i need to do in
the time-window i need to do it in than something that is
half-baked. now, if you want to talk about a simple mechanism that
supports a distributed file system, why look at things at the
protocol level? meaning, we have pretty darn good distributed file
systems today that solve many of the naming, caching, and mirroring
problems in an industrial environment. of course, fetching a file
from site X at site Y with AFS is slower than HTTP, but i would
argue that is because of the overhead imposed by the services you
get with AFS. if one were to design an extensible minimal
distributed messaging protocol, which is what i assume you are
speaking about when you talk about *TP, then i would also argue that
as soon as you layer the services that something like AFS provides
on top of it, your protocol will be just as slow as something we've
been using for years.
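
to make that layering point concrete, here is a toy sketch (python,
every name in it -- store, resolver, cache -- is my own invention,
nothing to do with any real afs or *tp code) of a bare fetch next to
the same fetch once naming and cache-coherence services are stacked
on top:

```python
# toy sketch, not a spec: a bare fetch versus the same fetch once
# afs-style naming and cache-coherence services are layered on top.
# all names here are invented for illustration.

def bare_get(store, name):
    # minimal protocol: one lookup, one transfer, nothing else
    return store[name]

def layered_get(store, name, resolver, cache):
    # 1. resolve the global name to a site-local one (naming service)
    local_name = resolver.get(name, name)
    # 2. consult the cache and validate the entry (coherence service)
    entry = cache.get(local_name)
    if entry is not None and entry["valid"]:
        return entry["data"]
    # 3. only now do the actual transfer, then populate the cache
    data = store[local_name]
    cache[local_name] = {"data": data, "valid": True}
    return data
```

the bare path is one lookup; the layered path pays for resolution
and coherence on every single call, used or not -- which is all i
mean when i say the services, not the packet format, are where the
time goes.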

> Rohit and I have been flirting with this idea for a while; remember
> 18 months ago, Gordon, when you came to visit us and we drew up all
> those charts, and Ernie concluded that *TP is doable? Well, maybe
> the next generation of HTTP *should* be *TP, by stripping it down
> to a core set of functions (i.e., just GET and PUT), and by giving
> a method by which the protocol can be extended.
> I get the feeling that keeping HTTP as a transaction system has
> benefits that outweigh those that arise from moving to an RPC-based
> system. And by shrinking the current HTTP, as opposed to growing
> it, we give an ideal minimum standard on which to build a
> distributed anything, because in such a system,
> Messages form the foundation of any distributed system, and so the
> smallest, most extensible transport layer that allows for this, is
> the holy grail for which to search. And I'm not talking about a
> mailbox system built on top of Java sockets; I'm talking about an
> actual, bare-bones, living-on-the-wire, efficient, universal
> middleware layer.

i'm a little unclear as to how simple you want to go. the most
bare-bones, on-the-wire protocols i know cannot do what you propose.
as soon as you start adding the necessary elements to support, for
instance, a transaction model, your protocol is no longer as
bare-bones as you would have liked. then you say, let's do
conditional inclusion of extra message information. the problem
with that is your efficiency goes out the window because of the
extra processing required on every packet.
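
to put some flesh on that: here is a toy parser (python, and the
wire format is purely my invention, not anybody's proposal) for a
message that _may_ carry extension headers. note that the scan for
extensions runs on every packet, even the ones that carry none --
that scan is the per-packet tax i'm talking about:

```python
# toy sketch of a "conditionally extended" message parser.
# assumed wire format (my invention): b"VERB name\n" followed by
# zero or more b"Ext-Key: value\n" lines, a blank line, then the body.

def parse_message(raw):
    head, _, body = raw.partition(b"\n\n")
    lines = head.split(b"\n")
    verb, _, name = lines[0].partition(b" ")
    extensions = {}
    for line in lines[1:]:                    # this loop runs on EVERY
        key, _, value = line.partition(b": ")  # message, extensions or not
        extensions[key] = value
    return verb, name, extensions, body
```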

> See, Tim, the problem with Opendoc is that it's too big to be an
> efficient, and therefore ideal, component architecture. In an ideal
> ICA, each component has a name and provides a service, and each
> component has a state (and the component and its state can be
> marshalled/demarshalled). And that's ALL there is to each
> component.

i agree with you here.

> If HTTP were to evolve into a *TP -- the universal transport
> protocol that subsumes all existing transfer protocols -- it would
> need to be small (providing only GET and PUT, say) but extensible
> (using a mechanism like a Protocol Extension Protocol).

why not UTP? sounds like the next logical step in the ftp->tftp
line to me if you just want get, put, and query.
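
for the record, here is about how small i imagine such a core could
be -- a python toy, where every name and verb semantic is my own
guess and not anybody's spec:

```python
# toy sketch of a three-verb core: named bags of bits, nothing else.
# everything here is invented for illustration.

class TinyUTP:
    """core protocol: GET, PUT, QUERY -- all else is an extension."""

    def __init__(self):
        self.objects = {}

    def handle(self, verb, name, payload=None):
        if verb == "GET":
            return self.objects.get(name)      # fetch bits by name
        if verb == "PUT":
            self.objects[name] = payload       # store bits under name
            return b"OK"
        if verb == "QUERY":
            # which names match this prefix?
            return [n for n in self.objects if n.startswith(name)]
        raise ValueError("core protocol knows only GET/PUT/QUERY")
```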

> When I say extensible, I mean extensible in the sense of how state
> is transferred by *TP. Maybe it happens by explicit request (like
> the current HTTP). Maybe it happens by explicit push (like SMTP).
> Maybe it happens by timed flood fill (like NNTP). Another issue is
> whether the transferred bits are moved (SMTP), or actually copied
> (NNTP, FTP, HTTP). Other issues include but are not limited to
> reliability of transactions, security of transactions, and timeout
> conventions. The point is, though, that NONE of these issues have
> to be part of the *TP. They are extensions, and PEP can handle
> adding these extensions. The universal transport protocol should be
> simple, small, a core, a microkernel. The additions give the power,
> but at the base is a fast, automatable transaction system.

but this is just like saying iiop is layered on tcp, a fast,
simple, small, core microkernel of a reliable transport layer. i
don't mind trying to push things up a level, especially when you
consider the extra bits that we'll have to shove around given the
growth of network bandwidth and processor speeds; this might very
well be an interesting exercise. i'm just something of a realist,
and the realist in me says that if the industry took X years to see
the light of http, and X+15 years to see that tcp was a good thing
(in a certain sense), then the likelihood of a new protocol getting
adopted, even if it is the savior we've all been searching for, is
pretty darn small.

> And automatable transactions are the key win here, according to my
> friends Captain Morgan and Jim Beam, who visited me last night to
> show me the KillerApp of coreHTTP + PEP. Unfortunately, I was too
> drunk to write coherently, and the idea doesn't seem so killer
> under the envelope of sobriety, but think about it: what if you had
> a secure, automatable transaction system? And by automatable I mean
> that components can be plugged into each other through interactions
> with each other's interfaces, automatically by a computer. For
> example, I could drag and drop my business card into a form, and
> all the proper information would trickle down to the appropriate
> form fields automatically.

gee, you just described taligent!

> In other words, just by visiting the spot, the customer brings
> along all the information associated with her/him. When s/he clicks
> to buy something, this information is used *automatically* to
> determine if the transaction should be committed (if sufficient
> funds or sufficient credit exists) or aborted (if funding/credit
> information cannot be verified). Automatability also gives us the
> ability to DISCOVER new interfaces -- for example, if someone wants
> to barter instead of transferring funds for a service, the
> macropayment system can automatically adapt to the alternate
> request to barter. By minimizing the type system to the bag-of-bits
> plus black box, we allow all kinds of extensions that CORBA gets
> crushed on (because it has to build everything up from primitive
> types, feh on it).

ah, there is my argument about corba as well. it is an
all-or-nothing love affair, and every service therein leverages
(sorry, buzz-meisters) off every other. it is difficult, if not
impossible, to strip out a single service.

> But to recognize the fact that everything in cyberspace is a DOT
> (Document/Object/Type), with hyperlinks being the shortest distance
> between two DOTs, is nothing short of nirvana. The enlightened move
> on to the next level, while everyone else sits around scratching
> their heads, saying that there's nothing to this vision or it's
> obvious or it's stupid or it doesn't raise the bar at all. Well,
> I'm telling you it DOES raise the bar; the only thing in question
> is my ability to relay the implications in such a way that people
> Get It. Or maybe I don't *want* everyone to Get It until after I'm
> done with my KillerApp.

come back to the fold, grasshopper.