I started as an intern there in late spring 1996 and my recollection is
that its stock price was around 42 when I began there; when I left in
early autumn, I seem to remember the stock price being around 68. This
summer, I started 16 days ago with a stock price around 85, and today --
sixteen days later -- the stock price is 105. Darn, companies should be
*dying* to hire me as an intern: I cost nothing, and my presence is
positively correlated with significant market cap augmentation...
And speaking of two years ago, I did actually have a point to this post.
I'm wondering when *TP was first mentioned on FoRK. I know we've been
talking about it since 92/93, but when did the pipe dream actually debut
in an Internet-searchable format? It goes back at least to August 1996,
as I ascertained from the post sampled below.
Rohit, it's amazing to me that two years ago we were discussing
separating out the transport from HTTP itself. A nonsensical, messy
rant chock full o' pointers was written by yours truly on FoRK in August
1996.
I guess Simon Spero's no longer working on HTTP, right? Geez, has
anyone written a "history of HTTP including all of the splinter efforts
and extension efforts and research efforts" so I can keep track of
everything in my mind? I guess it would be more of an "evolution" paper
than a "history" paper since the changes are still ongoing... and W3C
loves the word "evolution," I've ascertained... :)
---- push ----
While I'm writing a wish list, does any good document with plenty of
examples exist as to how to make extensions to HTTP while maintaining
good design practice? For example, how do I know how to evaluate the
tradeoffs when deciding between adding methods, versus adding headers,
versus requiring additional things in the message body content itself?
Or is good design something that must be understood gestalt, passed on
from master to grasshopper through enigmatically encoded koans, without
ever really being explicitly taught?
---- pop ----
Back to the reminiscence of the critique of Simon Spero's suggestions.
Holy smoke and mirrors, Batman: the issues we were thinking about two
years ago still seem quite relevant now... [quoting selectively from the
original post]
> To avoid these problems, HTTP-NG allows many different requests to be
> sent over a single connection. These requests are asynchronous -
> there's no need for the client to wait for a response before sending
> out a different request. The server can also respond to requests in any
> order it sees fit - it can even interweave the data from multiple
> objects, allowing several images to be transferred in "parallel".
> In a sense, this transforms HTTP from a transaction-based model to an
> asynchronous RPC-based model.
> Transactions are good because they are simple: you have a request, and
> you have a response, and that's all the messaging required. With an
> asynchronous RPC, you have all kinds of new problems: marshalling and
> demarshalling, acknowledgements and cancellations, and you don't get
> the benefits of recovery, replication, time stamping, and security.
> To be fair, there's lots to be gained by moving to an RPC, but the
> fundamental problem is that you LOSE transactions. As an example of why
> this is bad, consider the fact that you're giving away any notion of
> aggregation. As another example of why this is bad, consider the fact
> that the CORBA security model looks like SSL because it cannot talk
> about transactions.
> Quite simply, using an RPC system suggests a lower grain of invocation.
> This implies a more brittle client-server relation; what you'd really
> rather have is coprocesses that can interact in real time.
> I mean, the Web should be less brittle, so it can transfer, cache,
> proxy, and decouple in both time and space. I hypothesize that the
> Web needs to go in the direction of increasing decoupling.
In fact, now I'd go further: in addition to decoupling across time and
space, I think it should go across organizational boundaries as well.
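For today's readers: the interleaving scheme from the top of that quote can be sketched in a few lines of Python. This is strictly a toy with an invented frame format (a request id tagging each chunk), not the actual HTTP-NG wire protocol.

```python
# Toy demonstration of the interleaving idea from the quoted HTTP-NG
# passage: several responses share one connection, each chunk tagged
# with a request id so the client can demultiplex. The frame format
# here is invented for illustration.

def interleave(responses):
    """Yield (request_id, chunk) frames, round-robin across responses."""
    chunked = {rid: [body[i:i+4] for i in range(0, len(body), 4)]
               for rid, body in responses.items()}
    while any(chunked.values()):
        for rid, chunks in chunked.items():
            if chunks:
                yield rid, chunks.pop(0)

def demultiplex(frames):
    """Reassemble each response from its tagged frames."""
    out = {}
    for rid, chunk in frames:
        out[rid] = out.get(rid, "") + chunk
    return out

responses = {1: "GIF89a....img-one", 2: "GIF89a..img-two"}
frames = list(interleave(responses))
# Both "images" arrive intact, transferred "in parallel" over one pipe.
assert demultiplex(frames) == responses
```

Squint and this is HTTP/2 stream multiplexing, a decade and a half early.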
Later, Simon wrote about evolution:
> The best transition strategy for moving from HTTP 1.0 to HTTP-NG is
> through the use of intermediate proxy servers. This allows the
> existing base of servers and clients to continue operating as they
> are now, whilst still taking advantage of many of the performance
> enhancements in the new protocol.
> The reason that this works is that most of the performance problems in
> HTTP 1.0 are caused by delays in the network. If proxy servers are
> placed close to older clients and servers, then these delays become
> insignificant. For example, if two servers are placed at either end of a
> transatlantic link, communicating with each other using HTTP-NG, but
> accepting and sending requests to and from other systems using HTTP
> 1.0, all the HTTP 1.0 delays would occur within a continent, rather
> than spanning the intercontinental links. Further, a caching server can
> interpret HTML documents and pre-fetch any inlined objects before an
> HTTP client requests them.
> It's unclear that you need this transition strategy, because it's
> unclear that the vision of HTTP-NG as it stands in this document is what
> the world wants and/or needs. W3C should be driving the direction of
> HTTP-NG towards *TP ("Star TP") -- the universal transfer protocol.
Oy vey, this sounds like the point of view of an uninformed, overly
optimistic and naive outsider. How dare I have the audacity to suggest
what the W3C's agenda ought to have been. I was not, and still am not,
a member of W3C. I am only a member of the Web community proper.
Anyway, here's where it starts to get interesting. The first and only
definition of *TP I remember seeing on FoRK...
> At its most fundamental level, *TP is about moving bags of bits around
> the world. The model is this: when requesting information to be
> transferred to you, you basically get a bag of bits, a piece of
> metainformation that tells you what type describes those bits (in most
> cases, it's going to be a MIME type), some routing information, and
> some related external information (such as a PEP bag).
[Remember, this was several Internet years before Mandatory came along.]
Geez, I wonder what other nuggets are sitting there in the 1996 FoRK
archives when only a dozen of us were around to listen to the rants.
FoRK has far less ranting like this now, I've noticed...
> Think about it: all transfer protocols are variations of this. Telnet
> is a single point to single point line generation scheme, which
> maintains a connection. NNTP is essentially a many points to many
> points flood fill. HTTP is a single point to a single point on demand,
> with unreliable performance. FTP is a single point to a single point on
> demand, with reliable performance. SMTP is a single point to
> potentially multiple points, with reliable performance. And so on.
Rohit, you think your brain is melting? I can't believe I wrote this.
I was, of course, talking about a universal server to complement the
universal client. Too bad that while I still had a brain back in August
1996, I didn't flesh out more of this vision of a single communication
substrate for passing any desired documents, data, and (for all intents
and purposes) programs around the Web, highlighting all the benefits that
come from their synergy effects...
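The taxonomy quoted above can be tabulated. Here's a sketch in Python, where the labels are my paraphrase of the quoted text rather than any formal model:

```python
# Tabulating the quoted taxonomy: each transfer protocol as a tuple of
# (sender cardinality, receiver cardinality, delivery pattern). The
# *TP claim is that one substrate could parameterize all of these.
PROTOCOLS = {
    "telnet": ("one",  "one",  "connection-oriented line stream"),
    "nntp":   ("many", "many", "flood fill"),
    "http":   ("one",  "one",  "on demand, unreliable performance"),
    "ftp":    ("one",  "one",  "on demand, reliable performance"),
    "smtp":   ("one",  "many", "store and forward, reliable performance"),
}

# The point-to-point protocols differ only in their delivery pattern.
point_to_point = [p for p, (s, r, _) in PROTOCOLS.items()
                  if (s, r) == ("one", "one")]
```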
> Note that to implement these, you don't need a vast spectrum of millions
> of methods. Maybe you only need what HTTP 1.1 provides: HEAD, GET, PUT,
> POST, DELETE, PATCH, etc., and everything else is something that goes
> through dynamic method invocation. Heck, the metainformation could
> include a signature of a method that's already on the Web for use with
> the transferred bits.
Curious that I was so strongly against synchronous RPCs over HTTP
before XML was even a sparkle in its parents' eyes...
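The "bag of bits plus metainformation" model, with dynamic method invocation in place of a fixed method vocabulary, can be rendered as a toy Python dispatcher. Every name here is invented for illustration; nothing below is actual *TP or PEP machinery.

```python
# A toy rendering of the "bag of bits" model quoted above: a transfer
# carries opaque bits plus metainformation (a MIME type and, here, the
# name of a handler). Everything beyond the core verbs is dispatched
# dynamically through a handler registry.
HANDLERS = {}

def handler(name):
    """Register a function as the handler for a named method."""
    def register(fn):
        HANDLERS[name] = fn
        return fn
    return register

@handler("render-text")
def render_text(bits):
    return bits.decode("utf-8")

def receive(bag):
    """Dispatch a transferred bag of bits via its metainformation."""
    meta, bits = bag["meta"], bag["bits"]
    fn = HANDLERS[meta["method"]]   # dynamic method invocation
    return fn(bits)

bag = {"meta": {"type": "text/plain", "method": "render-text"},
       "bits": b"hello, *TP"}
assert receive(bag) == "hello, *TP"
```

The point of the sketch: the transport never grows new verbs; new behavior arrives as metainformation naming a handler.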
> Now, what good would an HTTP that serves as *TP be? The answer, of
> course, is AUTOMATABILITY. For example, think of an interface builder
> for the web that is automatable. As one of countless applications, you
> could put a palette together to make a United Airlines reservation map.
See now, to me, notifications are the key to automatability at the
application layer. And I really haven't strayed from the vision. I
just sat on my high academic mountaintop for two years, paralyzed. I
play this game very poorly, obviously.
> Or, as another example, a vendor's home page could be received as
> HTML type, but you could extract other information, such as a
> virtual business card, from this. This virtual business card could be a
> draggable thing from my homepage, that I could drop into forms, send off
> as needed, and so on. Taken to its logical conclusion, I could
> annotate every form with what data types (denoted by their URLs) are
> compatible, and I could make assertions based on that, drag it into a
> homepage and it fills the right fields -- and here's the key word --
> As Rohit says, the problem with programming for the web is that it is
> essentially two sides of a sheet of glass. On the server side, you need
> to take a legacy application, and use forms to build an HTTP/HTML interface
> that only knows the MIME type.
I like the "sheet of glass" imagery.
> But if you have a smart MIME type manager, after you've demarshalled the
> bits transferred and recognize them to be a format such as "Postscript",
> you could also recognize the actual individual components in there for
> use elsewhere. With the proper transaction model, you could do it
> AUTOMATICALLY. And HTTP 1.1 with PEP might very well be the proper
> transaction model for such a scheme.
Of course, that was zen. This is tao, and the rules have changed
somewhat. But the truth is still out there...
> Now, this brings us to the land of Microsoft, for, at the low levels,
> there is a lot of similarity between *TP and Microsoft's DCOM.
> See, CORBA is too brittle for such a typing system to work, because
> new types need to inherit from old types, and the hierarchy can become
> quite a mess. And although it might very well be the case that Java
> Beans also are loose enough to allow for this type play, it will
> actually be unknown until Javasoft releases the thing in like 12 months.
> But the important thing to note here is that the models afforded by Java
> Beans, COM/DCOM, and, for that matter, Objective-C, are actually the
> same thing! It is quite possible that COM is far better than we gave it
> credit for. Infospheres started along that path, too, but along the
> way it seems to have lost this crucial insight.
> Rohit can probably go through the United Airlines example a lot better
> than I could, but at its core, what should the crucial elements of *TP be?
> 1. A better file manager than Plan9, providing OPEN, READ, WRITE,
> and CLOSE with caching, and byte ranges.
> 2. A simple but good DII, which is to say, NOT a functional one.
> All I want is a gateway through which I can invoke any message,
> NOT necessarily understand them.
> 3. Persistence, as good as DCOM's IStream and IPersistStream.
You never did respond to any of this, Rohit. Maybe that's why I myself
dropped the ball on it, too. So I'm curious -- and I'm probably the
only one on FoRK who's curious, but indulge me: what do you think now
about these kinds of things? And if you don't reply, I wonder if I'll
bring it up again in the summer of 2000...
.sig bonus -- r/evolution septuple-play!
Evolution has a lot of dinosaurs in its path.
-- Carver Mead
Evolution: life's a niche, and then you die.
Greed captures the essence of the evolutionary spirit.
-- Michael Douglas as Gordon Gekko in "Wall Street"
Evolution sounds okay, but I'd rather keep my options open.
The concepts of positive feedback loops and tipping factors are
absolutely fascinating... All revolutions are corrupted, so all
dynasties and empires must fail... (eventually; in the meantime, I'm
-- Dan Kohn
Whatever happened to revolution for the hell of it? Whatever happened
to protesting nothing in particular, just because it's Saturday, and we
have nothing else to do?
-- King Missile
To aid the evolution of superior technology we must act in true
Darwinian fashion and destroy inferior crap. Hear hear!
-- Robert Harley