My notes from the "Future of HTTP Workshop" at WWW7.

I Find Karma (adam@cs.caltech.edu)
Sun, 10 May 1998 02:53:38 -0700


These are my notes on the future of HTTP, from the web conference in
Brisbane, April 14, 1998. Any errors of transcription are my own and
not those of the people to whom I attribute them. Please correct anything you
think is incorrectly represented. Sorry it's not more organized, but I
calls em as I hears em. Jim, feel free to edit this as you like.
-- Adam

For the first talk of the day, an executively suited Rohit walked through
slides on how to make HTTP a truly extensible HYPERTEXT protocol. His points:
1. Why was HTTP successful? Network effects.
It started from a simple topology, simple parsing, and simple extensibility,
with a gentle learning curve.
People resist building complex servers and clients.
2. Programmability == killerApp potential?
3. Need to maintain relations between resources.
4. Simplicity gave us the ubiquity of HTTP.
"The future of HTTP? There is none." -- Larry Masinter
Somewhere in the 150-page HTTP 1.1 spec is a 10-page spec waiting to get out.
5. Messages over objects or objects over messages?
Does it look like an API or a message syntax?

In the ensuing discussion, Bill brought up the IP vs. XNS issue at PARC.
1. IP is missing a good sequenced packet protocol.
2. The lowest common denominator is pretty low.
3. Simple is not always cheap. Performance hits may be significant.

Mike then gave a talk on how HTTP 1.X could be replaced with a 3-layer
structure (lowest: transport; middle: an OO RPC layer like CORBA, DCOM, or
Java RMI; highest: web-specific interfaces). His points (a toy sketch of the
layering follows them):
1. HTTP-NG has to support the existing web architecture, and
interoperate with and eventually supplant HTTP 1.X.
2. Existing Web architecture: client-server, chained intermediaries.
Authorization is poor -- really just authentication.
3. HTTP has exactly one method.
We need to specify HTTP-NG's semantic model, then get to its protocol.
4. HTTP-NG's philosophy: more is better.
Efficiency, scalability, modularity, evolvability, authoring,
expressiveness, security, liberty/privacy/trust support,
transport flexibility, resource migration/replication,
nested and recursive RPCs, etc etc etc
5. HTTP-NG's goal: by June 1998, design and prototype 3-layer structure,
with efficiency and scalability of existing Web, and
with extensibility and evolvability of existing OO RPCs.
Testbed from the Web characterization group: uses ILU, a modified Apache,
and a Surge-based, parameterizable simple-fetcher client.
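
To make that layering concrete, here's a toy Python sketch of the three
layers. Everything in it is invented for illustration: the real middle layer
would be something like ILU or CORBA marshalling, not JSON over a
length-prefixed socket.

    import json
    import socket

    # Lowest layer: a transport that moves opaque, length-prefixed byte messages.
    class Transport:
        def __init__(self, sock):
            self.sock = sock

        def send(self, payload):
            self.sock.sendall(len(payload).to_bytes(4, "big") + payload)

        def recv(self):
            # No partial-read handling; this is only a sketch.
            length = int.from_bytes(self.sock.recv(4), "big")
            return self.sock.recv(length)

    # Middle layer: a toy object-oriented RPC that marshals calls as JSON.
    class RpcChannel:
        def __init__(self, transport):
            self.transport = transport

        def call(self, target, method, args):
            self.transport.send(json.dumps(
                {"target": target, "method": method, "args": args}).encode("utf-8"))

    # Highest layer: a web-specific interface expressed in terms of the RPC layer.
    class WebResource:
        def __init__(self, channel, uri):
            self.channel, self.uri = channel, uri

        def get(self):
            self.channel.call(self.uri, "GET", {})

    # Wire the layers together over an in-process socket pair to show the flow.
    client_sock, server_sock = socket.socketpair()
    WebResource(RpcChannel(Transport(client_sock)), "http://example.org/doc").get()
    print(Transport(server_sock).recv())  # the marshalled call, as a server would see it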

Ensuing discussion. Some of the points:
1. Bill: PUT. GET. HEAD. POST. We need to guide extensions.
2. Mike: Do we put the Web atop an OO messaging system, or the messaging
system on top of the Web?
3. Henrik [in response]: It must do both, because people are doing both.
Testbed of URIs that can be delivered at different points in the
transport stack.
4. Bill: Decoupling is an important part.
5. Jim: IPP, WebDAV, etc do not concern typical end users.
What comes next should have some compelling new functionality.
6. Henrik [in response]: That's the purpose of the web characterization group.
7. Bill: Using a web browser to fetch documents is one application.
However, there is no framework for designing new applications on the web.
It took a team of geniuses two years to design HTTP 1.1. Ditto WebDAV.
8. Mike: Need to fight the network effect.
Need a strong draw for the next HTTP. Programmability? Manageability?
Need dual-protocol client-servers, chained proxies, etc
9. Rohit: How to make it viable for private action to change installed base?
10. Bill: System with web as primary app deployment system for
distributed networked apps.
Have we reached the end of document serving utility?
Will the next wave be structured data serving?
11. Jim: Things outside the typical framework of the Web's artifacts today:
A. Bundling -- e.g., dbase + spreadsheet + word processing
B. Global space usage -- e.g., Java over C++
12. Ed: Notion of evolution important. RDP. Proliferation service.
Threading control. Use Web server as ORB and then RMI.
Not performance, but evolution, is important. Self updates, self
configurations. "You cannot characterize the future, only the past."
13. Henrik: "I don't have a whale. Why should we save the whales?"
IE 4 and Communicator 5 have HTTP 1.1 support but no pipelining
like Amaya and Arena have.
14. Bill: New browsers come either from new computers or new ISP packages.
Users do not maintain their machines.
How backward compatible does the next HTTP need to be, and for how long?
15. Ed: Going through firewalls is very important.
16. Henrik: nirvana == describe features to apply, extend things.
But we need interoperability!
17. Jim: Users of servers (IT departments) are an important constituency.
Maybe if we enhance the manageability of servers?

After a coffee break, Spencer talked about how HTTP is just one part of
an architecture. His points:
1. This architecture is being used in very low-end bandwidth-limited devices.
2. The constraints these devices face are real and demanding.

Discussion ensued. Some of the points:
1. Mike: No one says you have to use TCP and IP with proxies.
2. Spencer: IP redundancy needed for wireless links (which tend to
be point to point)
3. Mike: Interface as a layer vs. implementation as a layer.
4. Henrik: People don't use HTTP because they have assumptions based
on HTTP 1.0
5. Dave: Handhelds moving away from HTML, moving toward scripting.
6. Daniel: Remove some attributes. Use gateways.
7. Ed: QoS in HTTP 1.1 not sufficient for some apps.
8. Spencer: Most of the MNCP/WAP/Mowgli/MNCRS work predates HTTP 1.1.

Then Fred talked about delta encodings and faster web transfers. His points:
1. The current Web uses bandwidth inefficiently.
2. Proposal: add compression to HTTP.
Also, add delta encodings (i.e., only send differences).
These combined will have the effect of sending less data.
3. Goal: reduce response size and response delay.
Build this into Apache and other servers. How to modify HTTP to do this?
4. "Delta encoding of something in original compressed format does
not work very well." But specific content types can be amenable
to delta encodings.
5. Differencing programs include "diff -e | gzip" and vdelta (from AT&T);
see the sketch after this list.
6. Experiment: traced full content for all external Web accesses.
7. Delta-eligible common uses: personalized Web pages, software
downloads, database lookups.
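
To see why the combination wins, here's a back-of-the-envelope Python
sketch. It is not the mechanism proposed in the draft; the two page versions
are invented, and difflib/zlib stand in for the real differencers and
compressors mentioned above.

    import difflib
    import zlib

    # Two invented versions of a "personalized" page; only the greeting differs.
    boilerplate = "<p>boilerplate line</p>\n" * 200
    old_page = "<html><body>\n" + boilerplate + "<p>Hello, Adam</p>\n</body></html>\n"
    new_page = "<html><body>\n" + boilerplate + "<p>Hello, Rohit</p>\n</body></html>\n"

    full = new_page.encode("utf-8")            # what HTTP 1.X sends today
    compressed = zlib.compress(full)           # "dumb" compression only
    delta = "".join(difflib.unified_diff(      # textual delta against the cached copy
        old_page.splitlines(keepends=True),
        new_page.splitlines(keepends=True))).encode("utf-8")

    print("full response:      ", len(full), "bytes")
    print("compressed response:", len(compressed), "bytes")
    print("compressed delta:   ", len(zlib.compress(delta)), "bytes")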

Discussion ensued. Some of the points:
1. Mike: Better caching at the client or proxy would drive up the
percentage of transfer time saved.
2. Fred: Several places where you can do deltas.
Proxy-browser. Server-client. Etc.
3. Dave: Masinter doing HTML diffs.
4. Fred: We've studied rates of change in our traces (e.g., GIFs don't change).
Compression (the gzip algorithm) is better when you can't do delta encodings.
Delta encoding requires some thought, but compression is a no-brainer.
5. Ed: Distribution replication standard needed.
6. Fred: Internet draft on delta encodings in HTTP by J. Mogul, Y. Goland,
A. van Hoff, et al. Future work: version management, caching effects,
invalidation mechanisms, images, etc.
7. Josh: If delta encoding were available, people would take advantage of it.
8. Fred: HTML preprocessing USENIX paper.
9. Rohit: So what? How to integrate into protocol design?
Need help managing bucket of bits being transferred around.
10. Bill: Individual resource values and representation values. Marshalling.
11. Daniel: Just dumb compression will give somewhat decent results.
12. Fred: Avg compressed size = 50% of the original; avg delta encoding = 10%.
13. Spencer: CSS help?
14. Fred [in response]: Not really, given the data we looked at in our
experiments.
15. Dave: Scripting avoids extra round trip times.
16. Andrew: Need to know stuff about content at the client end to
take advantage of delta encodings.
17. Henrik: TCP has nonlinear performance; once TCP is warm, you get
qualitatively good performance. HTTP 1.1 is optimized; the only way
to send things faster is to send less data.

Then lunch in the rain, after which Jim talked about WebDAV. His
points:
1. WebDAV is a particular Web application for which they had to add new
things to HTTP (a sample of one added method appears after these points).
2. A framework for developing applications like WebDAV would have
been really helpful.
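
For anyone who hasn't seen what those additions look like, here is roughly
what one of WebDAV's new methods looks like on the wire: a PROPFIND request
with an XML body asking for a resource's properties. The host and path are
invented, and the syntax is my approximation of the WebDAV draft, so check
the draft before relying on the details.

    # Build (but do not send) a WebDAV-style PROPFIND request.
    # Host and path are invented for illustration.
    body = (
        '<?xml version="1.0" encoding="utf-8"?>\n'
        '<D:propfind xmlns:D="DAV:">\n'
        '  <D:allprop/>\n'
        '</D:propfind>\n'
    ).encode("utf-8")

    request = (
        "PROPFIND /docs/report.html HTTP/1.1\r\n"
        "Host: dav.example.org\r\n"
        "Depth: 0\r\n"
        "Content-Type: text/xml; charset=utf-8\r\n"
        f"Content-Length: {len(body)}\r\n"
        "\r\n"
    ).encode("ascii") + body

    print(request.decode("utf-8"))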

Discussion ensued. Some of the points:
1. Mike: Core part of protocol and then a bunch of extensions.
2. Dave: If you have conflicts, go into some merge activity.
3. Bill: Why do merging at the WebDAV level instead of punting it up a level?
4. Bill: Experiment with Jim Davis @ PARC is to replace all network
file systems with HTTP-NG API and WebDAV. Methods defined on server.
5. Jim: Not clear how it affects the on-the-wire implementation.
6. Daniel: WebDAV has content access (edits) and server access (move,
copy) mixed. You're using the same mechanism to do both file
content management and file system management.
7. Mike: Do anything to internal or external links on resources?
8. Henrik: Why have copy and move?
9. IBM guy: Make resource mgmt sit on the resource.
Multiresource requires logging, transactions. Stackable file systems.
10. Jim: WebDAV is not a distributed NFS:
A. Higher grain size.
B. Resources control their own interfaces.
11. Bill: Must handle character sets of any native language.
12. Rohit: XML-based message syntax.
13. Bill: Difference between semantic interface and message interface.
"Magic middleware" translates between.
14. Mike: Program interface wraps up, does stuff over network.
Network interface handles the wire.
15. Jim: Network services @ app layer. Application services @ app layer.
16. Mike: Complexities occur @ higher layer.
17. Bill: Underlying protocol should be distinct from parameters.
18. Rohit: Is genericity possible? Or will protocols have to be app-specific?
Consider HTTP as a generic bit bucket transfer system.
19. Bill: Don't have complete semantics for bit bucket transfer.
20. Mike: What layer are we talking about?
Locking makes sense at the web app layer, not the RPC system layer.
Unification for apps in the same family is what's needed.
21. Ed: Locking in HTTP is not function-complete.
22. Henrik: Locks are at app level. Other locks at app level may differ.
23. Jim: Generic write, generic lock.
24. Mike: Generic Web document write. Not generic relational database update.
OMG transaction service not core but a common service for interoperability.
25. Bill: HTTP-NG is:
A. TCP enhancement layer - state sharing, muxer
B. Type system, parameters, messages
C. TCWA - the classic web application (browsers and servers)
26. Rohit: When does TNWA come (the next web application)?
Protocol layer of NG is not of the 1.1 family.
What in HTTP 1.1's ubiquity is applicable to what comes next?
What layer does rights management belong in?

Then Shane talked about security -- SSL, TLS, etc. Discussion ensued.
Some of the points:
1. Rohit: Composition is a useful concept.
Sig components, key exchange components.
Stacking compression and encryption layers is a lot of rope.
Extremely inefficient, too.
Need rights management per resource.
Who's allowed to put? To get? To print? To put this ACL in HTTP?
2. Mike: Need delegation, not ACLs.
ACLs are not all the world knows about access control.
3. Bill: Does delegation work?
4. Jim: Lucky if we get whole-resource-read and whole-resource-write
in WebDAV.
5. Josh: Layering. TLS, SSL, authenticating or encrypting based on
connection. BUT want them based on resource (pages, readers, ...).
SSL vs. SHTTP. SMIME is SHTTP reborn without users, realms, ...,
but could use it effectively.
6. Bill: Security incorporates:
A. Authentication.
B. Authorization.
C. Message integrity.
D. Privacy.
E. Accounting.
7. Jim: Don't make it connection-oriented. Token passback.
8. Ed: As soon as you put security in, you put in state.
9. Rohit: Is delegation-check mechanism needed in HTTP-NG?
10. Mike: We're trying not to develop the Web application any more than
anyone else is. All we want is a framework to build WebDAV, etc. in.
11. Rohit: What does elegance buy us?
12. Bill: By 7/9/98 we'll have a feasibility study.
Once the next HTTP is at the IETF stage, the future is up for grabs.
13. Rohit: What will get us more users is to improve the Web application suite.
14. Mike: Want elegant framework for expression.
15. Henrik: When a new data model, XML, emerged, the whole world
listened and is migrating.
16. Josh: What do we do to create the same kind of buzz? Deployed ubiquity.
17. Henrik: HTTP 1.1 is not a single protocol.
18. Jim: What is the appropriate level to do security?
19. Mike: Modularity is key. Some security at each layer.
Factor into layers so at the end of the day you get the control you want.
20. Henrik: Small core doesn't do anything.
Avoid an unchangeable core of bad things.
May lose interoperability.
So, HTTP-NG is a generic way to do stuff.
Then we'll see what we can do with it.
21. Jim: If you have a feature, you want it to be in core.
22. Josh: Deny everything except what I explicitly allow.
Turns to Pandora's box when web access is factored in.
Invoke, POST, prop-find not allowed by default.
23. Rohit: HTML is the classic web app, a lowest common denominator based on
the SGML framework. DTDs are an interface tool for SGML.
So the question for the next HTTP is,

HTML : XML :: HTTP : ???

We need to think about what the parents of HTTP are to get to ???

After the tea break, Hugh talked about the OpenLink service. His
points:
1. Links and anchors are stored in a link database (a toy sketch of the idea
follows these points).
2. Clients talk to servers about hypermedia objects. OHP.
3. Writing servers is easy compared with writing clients.
4. Client side -- message routing, caching, doc mgmt, launching apps.
5. Server side -- synchronization, caching, security.
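
To make the link-database idea concrete, here's a toy sketch. All of it is
invented; OHP's actual model is richer, but the point is that links live
outside the documents they connect.

    # Toy out-of-line link database: anchors and links are stored apart from
    # the documents they refer to. All names and fields are invented.
    anchors = {
        "a1": {"resource": "http://example.org/report.html", "selection": "chapter 2"},
        "a2": {"resource": "http://example.org/notes.txt",   "selection": "lines 10-20"},
    }
    links = [
        {"id": "L1", "source": "a1", "destination": "a2", "type": "commentary"},
    ]

    def links_from(resource):
        """Return every link whose source anchor lives in the given resource."""
        return [link for link in links
                if anchors[link["source"]]["resource"] == resource]

    print(links_from("http://example.org/report.html"))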

Discussion ensued. Some of the points:
1. Mike: Take his work and support it. Use the web.
Pointers into real-time multimedia.
2. Rohit: How to transport real time media?
3. Daniel: QoS?
4. Bill: Headers and mux are small enough to support it.

Then, although we were running overtime, Henrik got up to talk about the
emergence of the Big Hammer protocol with the small heart. His points:
1. POST as a tunnel mechanism.
2. There is no single HTTP 1.1 because extensibility is not part of
the basic package. There is no structured way of extending HTTP.
3. HTTP is at the center of the hurricane of protocols:
A. Similar protocols with different background (IMAP, ...)
B. Protocols that copy HTTP (RTSP, SIP, ...)
C. Protocols that use HTTP POST (IPP, ...)
D. Protocols that copy pieces of HTTP for easier later integration
with HTTP (TIP, ...)
E. Actual HTTP extensions (DAV, ...)
4. Don't want different protocol engines for highly related tasks.
5. Is end-to-end really end-to-end? Depends.
6. We are not in the business of selling beauty.
7. HTTP 1.1 has reached the limit of adding extensions.
8. So what comes next?
A. Generic app level protocol (simple extensible framework,
protocols become profiles)
B. Explicit layering and modularization (break up big lump
style HTTP 1.X messages)
C. Extensibility at the core (HTTP/PEP/Mandatory work; a rough example
follows these points)
D. Not quite RPC, not quite messages
9. HTTP is not MIME, but it doesn't really matter.
10. Need generic framework to deal with apps in a layered way.
11. Granularity of extensions is important.
12. Will distributed extensibility lead to chaos? Not if we
guide the way to extend.
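
As a rough picture of what "extensibility at the core" means in the
PEP/Mandatory work (point 8C above): a client declares an extension that the
server must understand or else refuse the request, instead of silently
ignoring an unknown header. The extension URI and the "Want-Receipt" header
below are invented, and the Man/ns syntax is my recollection of the Mandatory
draft, so treat it as illustrative only.

    # Illustrative only: a Mandatory-style request declaring an extension the
    # server must understand or refuse. The extension URI and 16-Want-Receipt
    # are invented; check the Mandatory draft for the real syntax.
    body = b"name=Adam&comment=hello"

    request = (
        "M-POST /guestbook HTTP/1.1\r\n"
        "Host: www.example.org\r\n"
        'Man: "http://example.org/ext/receipts"; ns=16\r\n'
        "16-Want-Receipt: yes\r\n"
        "Content-Type: application/x-www-form-urlencoded\r\n"
        f"Content-Length: {len(body)}\r\n"
        "\r\n"
    ).encode("ascii") + body

    print(request.decode("ascii"))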

Some discussion ensued but my writing hand was getting tired at this
point. Some points I wrote down:
1. Rohit: Someone needs to chart which compression, transport, etc. work
well with each other.
2. Mike: The next HTTP only gets one chance to be reviewed.
Need to hit a home run first time at the plate.
3. Rohit: *TP. One application protocol for all applications.
4. Mike: Syntactic framework needed first.

Well, that's it. I wish I could have better conveyed these notes but
I'm not as articulate as I'd like to be.

----
adam@cs.caltech.edu

We aim above the mark to hit the mark.
-- Ralph Waldo Emerson