8-bit man!

Rohit Khare Rohit@KnowNow.com
Sat, 11 Jan 2003 13:54:11 -0800


For about five minutes today, I actually made it to 255 lbs! Let's see 
if it sticks.

By tonight, I think I will have met five Economist-reading single women 
this week. Not that it's to any ulterior avail -- it's just a reminder 
that chemistry, in turn, is only 3% of the few hundred candidates. For 
everyone's amusement, here's a classic Rifpost on the topic: 
http://www.xent.com/FoRK-archive/apr98/0115.html

Now that I've insufficiently buried the lede, I'd like to drop in 
another brainstorm. I've had a hard time reducing the vision of *TP to 
something concrete. How about tp:// URLs?
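
Purely illustrative -- standard URL machinery already copes with a 
made-up scheme, and the host and path here are invented:

from urllib.parse import urlsplit

u = urlsplit("tp://teepee.example.com/chat/room1/kn_routes/42")
print(u.scheme, u.netloc, u.path)
# -> tp teepee.example.com /chat/room1/kn_routes/42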

The new server would be a teepee, of course: some kind of 
transfer-enabled personal event environment or some other wiseass 
acronym. But the goal of a teepee server would be to:

   1. Serve all the TPs -- resources are available by any standard 
protocol you'd like

   2. Propagate resource-change notifications -- add a selector-action 
event notification mechanism that stores subscriptions and matches 
each resource-change (not each resource), delivering along any TP 
capable of push

   3. Maintain a DAV store of events/resources as a file repository -- 
the first event-driven application running on *top* of the relay server 
is a database listening for event changes -- it is not in the core, 
since the core is a potentially lossy router.

The basic construction of teepee is a hybrid of httpd, sendmail, and 
jabber. The base layer is a connection manager, since we have to 
abandon IP for a truly end-to-end identification mechanism. Just above 
it is basic SSL transport security; content security (PKI) comes 
further up the stack.

The next layer is the event microkernel proper (I may still be wrong; 
this may be the bottom turtle). Subscription rules are pairs of 
Turing-complete (unfortunately) functions, selector() and action(), 
each taking a single argument: a tp-blob (call it e, for event).
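
To pin that down, here's a minimal sketch in Python -- the field names 
are my assumptions, not a spec:

import threading

class TPBlob:
    # Hypothetical tp-blob: a named bag of headers plus a payload.
    def __init__(self, name, headers=None, body=b""):
        self.name = name              # hierarchical path, e.g. "/chat/room1/"
        self.headers = headers or {}
        self.body = body

class Rule:
    # A subscription rule: a (selector, action) pair of functions.
    def __init__(self, selector, action):
        self.selector = selector      # selector(e) -> bool
        self.action = action          # action(e) -> None
        self.lock = threading.Lock()  # the P()/V() semaphore, used below

subscription_table = {}               # key -> Rule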

The original bootstrap rule, following the syntax of mod_pubsub, is 
phrased as a single subscription rule: {sub_request(), add_sub()}

sub_request() returns true if the last directory in e.name's path is 
"/kn_routes/"
add_sub() incrementally inserts or updates a rule in the 
subscription_table, keyed by the final path component of e.name (its 
'filename')
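
Under the same assumptions (in particular, that the subscription blob 
carries its selector and action as callable header fields, which is my 
shortcut), the bootstrap rule might read:

def sub_request(e):
    # True when e is posted under a kn_routes/ directory.
    parts = e.name.rstrip("/").split("/")
    return len(parts) >= 2 and parts[-2] == "kn_routes"

def add_sub(e):
    # Insert or update a rule, keyed by the final path component.
    key = e.name.rstrip("/").split("/")[-1]
    subscription_table[key] = Rule(e.headers["selector"],
                                   e.headers["action"])

# The bootstrap table starts with just this one rule.
subscription_table["kn_routes"] = Rule(sub_request, add_sub)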

Thus, we can describe the main() loop of teepee -- a dispatch loop:

while (true)
    wait for new input event e
    dispatch(e)
end

dispatch(e)
    foreach rule in subscription_table, in parallel
        if rule.selector(e) then
            P(rule)           // acquire this rule's semaphore
            rule.action(e)    // action() runs atomically per rule
            V(rule)           // release; a queued invocation may proceed
        end
    end
end

I'll bet one key constraint is that the router only makes a best 
effort: some e's may be dropped, and the subscription table may change 
during a dispatch loop. Thus, the assurance is that the selector 
remains true for the duration of the action() call -- selector() is 
re-entrant, but action() is atomic.
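
As a sketch of those semantics in Python (continuing the classes 
above; the threads are my way of rendering "in parallel"):

def dispatch(e):
    # Snapshot the table: it may change mid-loop, and that's allowed.
    for rule in list(subscription_table.values()):
        if rule.selector(e):              # re-entrant: no lock held here
            threading.Thread(target=run_action, args=(rule, e)).start()

def run_action(rule, e):
    with rule.lock:                       # P() ... V(): action() is atomic
        rule.action(e)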

Thus, if an action is to transmit a megabyte blob down a TCP socket, 
the intent is that updates to the same event are dropped if they pile 
up; once that megabyte is done, the next time action() is called it 
will send the most recent version of the event, rather than "falling 
behind real-time".

The third layer of teepee is a more traditional DAV store. It gets 
first dibs by adding a rule to the initial bootstrap table: {true, 
insert_resource()}

Thus, depending on caching and capacity -- how often insert_resource() 
can be called, and the cache lifetime indicated by the resource owner 
-- the storage layer will try to keep as much state as it can.
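
A sketch of the store's side of that bargain (the always-true selector 
is straight from the rule above; everything else is my guess):

dav_store = {}                            # e.name -> latest TPBlob

def insert_resource(e):
    # Best effort: keep the latest version of every resource. A real
    # store would also honor the owner's cache lifetime and its own
    # capacity limits.
    dav_store[e.name] = e

# First dibs: a selector that is always true.
subscription_table["dav-store"] = Rule(lambda e: True, insert_resource)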

Now we can implement the first useful upper-layer service: replay. In 
order to make event-driven development easier, you'll often want to 
"catch up" on recent events to synchronize new subscribers. The replay 
engine is implemented as a third subscription rule: 
{replay_sub_request(), replay()}

replay_sub_request() returns true if the last directory in e.name's 
path is "/kn_routes/" and e.replay-n or e.replay-since is defined
replay() queries the DAV store and directly invokes e.action on each 
matching stored event. Thus, the replay agent injects additional 
invocations of e.action for e.selectors asking for history (on a 
best-effort basis).
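
Sketched the same way (replay-n and replay-since come from above; how 
matches are ordered and delivered is my guess):

def replay_sub_request(e):
    # A subscription request that also asks for history.
    return (sub_request(e) and
            ("replay-n" in e.headers or "replay-since" in e.headers))

def replay(e):
    # Feed stored events straight to the new subscriber's action().
    # (replay-since handling omitted in this sketch.)
    selector = e.headers["selector"]
    action = e.headers["action"]
    matches = [old for old in dav_store.values() if selector(old)]
    n = int(e.headers.get("replay-n", len(matches)))
    for old in matches[-n:]:              # best effort: roughly newest n
        action(old)

subscription_table["replay"] = Rule(replay_sub_request, replay)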

Of course, there's a trust-management layer that hasn't been sketched 
here yet; I don't know how far down toward the core it belongs.