From: Jeff Bone (firstname.lastname@example.org)
Date: Wed Mar 15 2000 - 19:14:50 PST
> [I Cced this to email@example.com so Dave Winer can respond to the list if
> he likes.] You both have a point.
Oh, good. Hi Dave!
> As soon as Jeff starts doing his computing on a
> platform that uses XML-RPC instead of the X protocol to transmit
> graphics updates to the screen, I'll change my mind about Mark having a
> point. :)
Yeah, yeah, I've heard *THAT* one before. :-) But, to be perfectly honest --- I've
spent as much effort as possible in the last year turning myself into "Joe User"
with the following result: 95% of my actual use of software, sans personal
exploration, is in some way essentially Web and mail. (Let me tell you, the CEO gig
is no place for a technologist. ;-) Notable exceptions: buddy lists (but it
doesn't have to be that way, for sure), Napster (ditto), WinAmp & clones, and
Microsoft Office (gag!)
And the only reason I have to use all of those other things is because I don't have
the following three things: (1) a minimalist local Web app server, (2) an
integrated store and set of abstractions for all personal data / docs, and (3) a
synchronization system to sync local / offline data mods with some centralized,
Web-accessible store accessed through the same apps run in (1).
Well, except maybe for WinAmp, which necessarily has a tighter connection to the
hardware. But I'd put that outside the "productivity domain" anyway. And frankly,
I'm only worried about distribution / location transparency / etc. in terms of data
and productivity apps anyway.
> It does sort of defeat the point; aside from the dubious benefit of
> surreptitiously tunneling through firewalls,
From experience, there's no dubiousness in that benefit. It can literally mean life
or death for apps intended to be run by corporate users. Activerse had the
(probably, at the time) misguided notion that corps were ready for productive use
of instant messaging; whatever philosophical problems we had, firewalls *were*
always an adoption barrier in that market. This isn't an abstract / philosophical
problem; it's a real-world problem that *all* communicating network applications
face.
> XML-RPC could just as well
> be implemented over plain TCP as over HTTP. I'm not really sure why
> Dave Winer chose to use HTTP as the canonical substrate.
I have no doubt why --- it's because HTTP is the new transport. It's
one reason why XML-RPC is being snatched up (more or less, as a model more than
anything else) while IIOP is relegated to niche departmental uses.
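For concreteness, here's what "HTTP as the transport" buys you in practice: the
call below is just an HTTP POST on the wire, which is exactly why it sails through
HTTP-only firewalls and proxies. This is only a minimal sketch using Python's
standard-library XML-RPC support (an implementation that postdates this thread);
the method name and loopback port are illustrative, not from any real system.

```python
# Minimal XML-RPC call riding on HTTP, using Python's stdlib.
# "sample.add" is a made-up method name for illustration.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Bind to port 0 so the OS picks a free port for the demo.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "sample.add")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client issues an ordinary HTTP POST under the hood.
proxy = ServerProxy(f"http://127.0.0.1:{port}/")
result = proxy.sample.add(2, 3)
server.shutdown()
print(result)  # 5
```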
> Still, I'm not sure it matters; XML-RPC is a great protocol, and it
> does a great job at the vast majority of distributed computing tasks,
> and reinventing it to be slightly better would provide no payoff for
> this vast majority.
Yes, it's a fucking fantastic protocol, absolutely sincerely. It basically calls
the whole DTD notion into question, IMO. Why should the parsing (i.e., syntax and
metasemantics) be rolled up in the actual semantics of data? Divide and conquer, or
more precisely, let abstraction live at the appropriate level.
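To see what I mean about the types living in the payload rather than in a DTD,
look at a marshaled request. The sketch below uses Python's stdlib marshaller
purely as a convenient way to show the XML (the method name is made up); each
parameter carries its own inline type tag, so no external schema is needed to
parse the values.

```python
# The self-describing wire format: types are tagged inline in the payload.
from xmlrpc.client import dumps

payload = dumps((42, "hello", 3.14), methodname="demo.echo")
print(payload)
# The marshaled call contains, among other things:
#   <methodName>demo.echo</methodName>
#   <value><int>42</int></value>
#   <value><string>hello</string></value>
#   <value><double>3.14</double></value>
```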
> It is probably not well suited for specialized tasks, like those that involve
> very high volumes of requests (X can theoretically handle something
> like 4 million requests per second on a uniprocessor), require very low
> latencies (MPI over VIA over Fast Ethernet has something like 80
> microseconds latency), operate within very tight resource constraints
> (I'm sure it'll take at least a few thousand instructions to implement
> XML-RPC, which is bad if you only have space for 2048 instructions on
> the microcontroller controlling your robot), etc.
Sure, but again, in loosely-coupled wide-area apps: how important is performance?
What are acceptable latencies? IMO, you have to byte-align data in a localized IPC
app like the X server. Not so for distributed apps. And I'm not convinced by your
estimate re: implementing XML-RPC in tight space.
OT: one interesting implementation of the whole TCP-etc. stack has been, and
remains, Larry Peterson et al.'s work at the University of Arizona on the X-kernel.
(Unfortunately named; it has nothing to do with X11 et al.) One of the things it
points out is that a lot of network protocol stack performance issues arise from
implementation choices rather than actual protocol design per se. At one point, the
X-kernel hosted the fastest-benchmarked TCP/IP stack around --- anyone know if
that's still true? And their stack metaphor has been easily extended up to
the app protocol level with similar performance / memory usage benefits. I've
always wondered if this approach wouldn't be highly suitable for limited devices.
> You could think of GET as being an RPC, albeit an RPC that has certain
> caching (and thus idempotence) semantics. POST is probably closer to
> being a real RPC: the name of the routine you're calling is the
> local-part of the URL, the parameters are the POSTed data, and the
> return value is the returned data.
Kragen, as always, thoughtful feedback. Thanks!
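Kragen's GET/POST-as-RPC mapping can be sketched with no XML at all: the routine
name is the local-part of the URL, the parameters are the POSTed data, and the
return value is the response body. This is a toy sketch under those assumptions;
the routine name, port, and handler are all illustrative, not from any real system.

```python
# POST-as-RPC: URL local-part names the routine, POST body carries the args.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

ROUTINES = {"/upper": lambda body: body.upper()}  # the "exported" routines

class RPCHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        routine = ROUTINES.get(self.path)
        if routine is None:
            self.send_error(404, "no such routine")
            return
        result = routine(body.decode())
        self.send_response(200)
        self.send_header("Content-Length", str(len(result)))
        self.end_headers()
        self.wfile.write(result.encode())

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), RPCHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# "Calling" the routine is just an ordinary HTTP POST:
with urllib.request.urlopen(f"http://127.0.0.1:{port}/upper", data=b"hello") as r:
    reply = r.read().decode()
server.shutdown()
print(reply)  # HELLO
```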
This archive was generated by hypermail 2b29 : Wed Mar 15 2000 - 19:20:30 PST