I am still waiting for one of you brainiacs to come up with the concept behind
WWW/2. Imagine for a second that you had a series of distributed centers that
were each responsible for some section of the WWW/2. Using idle resources, they
would actively try to keep the collected body of knowledge, link liveness, content
caching, etc. as up to date as humanly-computerly-internetworkly-and-holistically
possible. The current WWW is only about 3.5 Terabytes, a number easily farmed.
As Sun is one of my favorite companies, I'd take one of their super-scalable
servers, max it out with 2G RAM per processor, 1024 processors, and put one of these
at maybe 20 different spots all over the world. At the front end, I'd put another
100 or so Ultra 60's per center actively running the spidering, smart-caching the content,
and sorting it by liveness, network accessibility, timeliness of data, and rate of change.
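To make the "section of the WWW/2" idea concrete, here's a toy sketch (Python for concreteness; the center names are made up) of how each host could be deterministically farmed out to one of the ~20 centers by hashing, so every center owns a stable slice of the web:

```python
import hashlib

# Hypothetical identifiers for the ~20 server sites scattered worldwide.
CENTERS = [f"center-{i:02d}" for i in range(20)]

def center_for(url: str) -> str:
    """Hash the URL's host so every center owns a stable slice of the web."""
    host = url.split("/")[2] if "//" in url else url
    digest = hashlib.md5(host.encode("utf-8")).digest()
    return CENTERS[int.from_bytes(digest[:4], "big") % len(CENTERS)]
```

Every page on the same host lands at the same center, so spidering, caching, and liveness state for a site never gets split across the ocean.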
Not only this: combine it with the DEC research guys, who not only have some amazing
search-engine classification stuff but also do intra-link analysis to determine the
probability that one page is actually a version of another. You could automatically
create versioning information for each page to be able to generate and recreate a
version history and determine exactly what was available at any given date/time
on the whole WWW. Just imagine how useful that would be for invalidating MSFT patents,
not to mention a lot of other useful information tasks.
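A crude stand-in for that "probability one page is a version of another" score (the DEC work itself is far more sophisticated; this is just a sketch using word shingles and Jaccard overlap):

```python
def shingles(text: str, k: int = 4) -> set:
    """Break a page's text into overlapping k-word shingles."""
    words = text.split()
    return {" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def version_probability(a: str, b: str) -> float:
    """Jaccard overlap of shingle sets: a crude 'is B a version of A?' score."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa or sb else 0.0
```

An identical page scores 1.0, a lightly edited page scores high, an unrelated page scores near 0 — enough signal to chain pages into a version history.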
One of the reasons the WWW took off was the highly decentralized, self-organizing
nature of the content and the links between pages. This tolerance of inconsistency
shows up as broken links and stale data, and as the inability to assert any piece of
information with absolute certainty; you can only supplant it with later information.
The trick is to allow the inconsistent data, but make a top-down attempt to manage
the inconsistency. Just look at DNS: sure, there are broken domain names,
non-existent hosts, etc., but where the bits hit the network it's actually a
fairly decent top-down management scheme. There's no reason why you couldn't take
a distributed.net-style scheme and try to manage the inconsistency and brokenness.
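That "supplant it with later information" discipline can be sketched in a few lines (hypothetical report format; each center reports what it last saw for a URL, and a later observation simply wins, DNS-style):

```python
def merge_liveness(reports):
    """Combine per-center observations of each URL. Nothing is ever certain;
    a later report simply supplants an earlier one, DNS-style."""
    state = {}
    for r in sorted(reports, key=lambda r: r["checked_at"]):
        state[r["url"]] = {"alive": r["alive"], "checked_at": r["checked_at"]}
    return state
```

No center has to be authoritative; the top-down layer just reconciles whatever inconsistent observations trickle in.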
Going on, DNS is, IMHO, due for a major overhaul. Two things need fixing:
mobility and routing. The problems are as follows: there are a lot of politics
embedded in the routing and registration of IP numbers, addresses, etc. (try
registering fork.org or endeavors.org and have the traffic routed to an IP
number owned by UCI or a new IP wired to UCI's network and you'll see what I mean).
One approach is to dynamically assign a new IP every time you go live on
the network. This has the problem that you need to register an intermediary that
knows how to route your traffic to the appropriate name. You've all seen the ATT or
PacBell commercial where the call automatically gets routed from the office to the home
to the cell phone, etc.? AOL AIM actually does a pretty good job of Internet-scale
messaging, alerting all your buddies when and where you are logged on. The other approach,
which I like better, is to assume a fixed IP for every device, and then in essence do
dynamic DNS registration based on connection, location, availability, something. There's
no reason why it needs to take 3 days to register a new domain name, or 3-5 days to change
the routing information and receive confirmation for routing and registration changes. It
should take 16ms or so, and happen every time you take more than 5 steps or pick up
another device.
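Just to show the 16ms target is generous: if registration is an update to a live table rather than a paperwork pipeline, the write itself is sub-millisecond. A toy registry (names and IPs below are hypothetical):

```python
import time

class DynamicRegistry:
    """Toy name -> IP table. A re-registration is just an in-memory write,
    so it completes in microseconds -- nowhere near 3 days."""
    def __init__(self):
        self._table = {}

    def register(self, name: str, ip: str) -> float:
        """Bind name to ip; return how long the update took, in milliseconds."""
        start = time.perf_counter()
        self._table[name] = ip
        return (time.perf_counter() - start) * 1000.0

    def resolve(self, name: str) -> str:
        return self._table[name]
```

The hard part obviously isn't the write — it's propagation, trust, and the politics — but the latency budget itself is trivially achievable.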
Just a little blue skying,