Re: [John Robb of Gomez] The 2X (two way) Internet

From: fielding@ebuilt.com
Date: Thu Jan 11 2001 - 06:19:29 PST


Crikey.

Adam, you've posted many great bits in the recent past, so don't
take this personally, but this 2XInternet buzzword-fest SUCKS.
Not in a small way, either.

Pardon me while I point out the obvious fallacies...

On Thu, Jan 11, 2001 at 12:14:54AM -0600, Adam Rifkin wrote:
> The following was written by John Robb, President and founder of
> www.gomez.com , and I found it at
>
> http://www.thetwowayweb.com/the2xInternet
>
>...
> > The 2X (two way) Internet
> >
> > The Internet is undergoing a transformation to a new system that scales
> > better, costs less, and provides better end-user performance than the
> > Web.

And it tastes good, too. Let's see if any of these claims are backed up.

> > Gomez calls this new system the 2X (two-way) Internet. We believe
> > the arrival of the 2X Internet is as momentous an occasion as the
> > arrival of the Web and is what's necessary to achieve global customer
> > adoption of the Internet as a means of doing business. Here are
> > characteristics of the 2X Internet. It's:

What is necessary to achieve global customer adoption? According to this
guy, it is more complicated technology and interfaces that are specific
to each application. Personally, I think it will be the general
availability of cheap electricity, satisfaction of more basic needs
(like food, liberty, and peace), and, finally, simpler interfaces.

> > Local. The 2X Internet relies on the ability of desktop PCs to do much
> > of the work necessary to actually assemble and serve a complex Web
> > site. Contrast this to how the Web squanders PC power by only retrieving
> > and rendering pages that are pre-masticated by large corporate servers.

Oooh, yes, squandering. Why does it do that? Because those large
corporate servers can be controlled by small corporate teams to produce
something consistent with what the large corporation wanted its customers
to see. Does the Web require this? No. This has nothing to do with
the Web technology itself -- anyone can produce an HTML file that
allows the client to obtain content from a variety of sources. An
applet is even more flexible (though non-portable, because clients
can't be trusted to be up-to-date).
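
Concretely, nothing stops a page author from mixing sources today.
A toy example (all hostnames invented):

    <html>
      <body>
        <!-- the page itself comes from the origin server -->
        <p>Content assembled by the client, not by a corporate server:</p>
        <!-- but the client fetches these from two other hosts -->
        <img src="http://images.example.com/chart.gif">
        <script src="http://scripts.example.net/ticker.js"></script>
      </body>
    </html>

The client does the assembly, and the three hosts never have to talk
to each other. So much for "squandering".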

This is a social phenomenon, pure and simple, and it won't be "fixed" by
any number of stupid acronyms. Nobody wants it to be fixed.

> > Global. Rather than interconnect with singular large corporate sites,
> > the 2X Internet will rely on customers to connect to a global network of
> > servers that run highly distributed applications. In this model, all
> > interactions are made with local servers that are part of a global
> > network. No more will customers be forced to interact with services that
> > are time zones away.

Yes, no more nasty forcing of clients to use authoritative servers.
Instead, we'll require everyone to use a peer network. After all,
we can trust everyone on the peer network.

This must be the part that "scales better" than the modern Web.
That is, if you focus only on origin server scalability and ignore the
overhead generated by content-peering spam across the mesh. OTOH,
the Web already has support for hierarchical caching and cache mesh
technology, so there is no improvement here -- I suppose what he means
is that IF all users are forced to use the mesh, then we will have
solved the existing cache mesh problem of all the users opting out
because they don't like the added latency and unreliable content
delivered by the mesh. Brilliant.

I suppose I should mention Akamai at some point. Or CDN peering.
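
For the record, the whole trick behind hierarchical caching fits in a
dozen lines. A toy sketch in Python (illustrative only -- real caches
like Squid also handle expiration, validation, and inter-cache
protocols):

    # Each cache answers locally if it can, otherwise asks its parent;
    # the top of the chain goes to the authoritative origin server.
    def fetch_from_origin(url):
        # stand-in for a real HTTP request to the origin
        return "representation of " + url

    class Cache:
        def __init__(self, parent=None):
            self.store = {}       # url -> cached representation
            self.parent = parent  # next cache up the hierarchy

        def get(self, url):
            if url in self.store:
                return self.store[url]         # local hit
            if self.parent is not None:
                body = self.parent.get(url)    # miss: escalate upward
            else:
                body = fetch_from_origin(url)  # top of hierarchy
            self.store[url] = body             # fill on the way back down
            return body

    regional = Cache()                  # parent cache near the backbone
    local = Cache(parent=regional)      # cache at the user's ISP
    local.get("http://example.com/a")   # one origin hit fills both caches
    local.get("http://example.com/a")   # local hit; origin never sees it

Users who opt in get the latency win; nobody has to be forced into a
mesh to make it work.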

Does he also mean to claim that this is where "end-user performance"
is better than the Web? I suppose if every peer contained every
service, then the interaction latency would certainly be smaller.
Not as small as it would be if every client could download the
service directly ahead of time (i.e., mobile code, a.k.a. applets
in the Web design space). And the two are equally scalable,
unless we force these peers into hierarchical formations.
 
> > Metered. All interactions on the 2X Internet will be metered, analyzed,
> > and charged to customers. Due to the infrastructure needed and the
> > complexity of the interactions, business models based on free
> > interactions will be cast aside in favor of a fee for service model.

Because that is clearly what the customers want. Right?

Will the customers in the audience who want to be charged by the packet
for Internet access please raise their hands? Thank you.

But wait, in the first paragraph he claimed that the 2XInternet would
"cost less" than the Web. How is it going to cost less when he just
claimed that it will become user-pay? Oh, yeah... he thinks it will
cost the big corporates less money than running big Web servers.

> > How the 2X Internet will fix the Web
> >
> > The 2X Internet has the power to transform the mediocre experience of
> > the Web into a vibrant, rich experience that drives customer adoption
> > of the Internet as the preferred way of doing business. Here's what
> > went wrong with the Web and what the 2X Internet will do to correct it:
> >
> > It's slow. Customer productivity drops dramatically when response times
> > are longer than a second. Times are 7-10 seconds on the Web and
> > structural issues will prevent all improvement. The 2X Internet speeds
> > the customer's response times by using a server located on the
> > customer's PC to connect to XML-enabled data services provided by
> > local servers in the 2X Internet global "cloud." The short
> > distances traveled and the power of dedicated local resources will
> > radically improve customer productivity by making interactivity sub second!

Look ma, it's CORBA, only with a less efficient transport protocol.

But wait... CORBA applications are only "faster" than the Web when
the number of interactions is artificially constrained to a minimum
and when each interaction doesn't require large data transfers (e.g.,
lots of control messages or event notifications). So how is this
less efficient version of the same architecture going to be better?
Volume, volume, volume.

And, unlike the Web, none of those interactions are cachable by
intermediaries, so you don't get any benefit from amortized costs over
repeated transactions. And how does the information about those
interactions (ya know, the stuff that generates revenue) get back
to the owner of the content?
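
Here's the back-of-envelope version, with every number invented for
illustration; only the ratio matters:

    # Cost of N requests for the same representation, with and without
    # a shared cache in the path.
    N = 10000           # repeated requests for one popular resource
    origin_work = 50    # ms of origin server work per uncached request
    cache_hit = 2       # ms to serve a hit from a shared cache

    cacheable = origin_work + (N - 1) * cache_hit  # one miss, N-1 hits
    uncachable = N * origin_work                   # every call reaches origin

    print("cacheable GETs:  %d ms" % cacheable)    # 20048 ms
    print("uncachable RPCs: %d ms" % uncachable)   # 500000 ms

That's the amortization the Web gives you for free and this scheme
throws away.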

> > It's expensive. Monster clusters of servers and oversized switches
> > costing tens of millions $$ (with huge staffs to match) run most
> > corporate Web efforts. By distributing applications and data across a
> > global network of small, inexpensive servers and allowing the customer's
> > own desktop PC to share in the burden of computation, costs drop radically.

Now he's smokin' some rarefied dope. There are a few hundred websites
on this planet that require monster clusters of servers behind switches
and costing millions of $$$ (I don't know of any with a huge staff).
Their largest expense is not the Web part of the site at all. It is
first and foremost the people who manage the content provided by the
website, which certainly wouldn't be reduced by this scheme.
The next big cost is for the massive transaction-monitoring persistent
databases that sit on the back-end recording people's orders or the set
of adverts they have been shown (in the case of campaign-based ad-supported
sites). Because that is the revenue stream. And how many of these
companies do you think will allow their customers to "share in the burden"
of computing their revenue streams?

> > It can't scale. Web sites scale horribly when faced with hundreds of
> > thousands of users doing complex tasks. By dividing the task of enabling
> > the transaction over a global network of servers and the PC resources of
> > millions of customers, the 2X Internet solves the scalability problem.

Actually, Web sites scale quite well. Sites that generate highly
personalized content don't scale very well, but that isn't even an
option with a peer network. Billing systems are the parts of websites
that typically run into scalability problems.

As mentioned above, this only deals with the issue of origin server
scalability [which is not a problem with the modern Web if you hire
some outfit, like eBuilt, that knows how to architect big services].
In a peer network, you have to deal with service scalability (how
many peers need to know the interaction rules for how many different
service types), peer scalability (how do you tell clients to piss off
when too many try to use the same peer), service locator scalability
(how do you keep the overhead of mesh spamming from exceeding the
available bandwidth, let alone the comparative cost of just going
directly to the authoritative source), and social scalability
(how many people will each corporation have to sue in order to stop
buggy peers from screwing up the company's services).
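
To put a rough number on the service-locator point (figures invented;
the growth rate is what matters): flooding service announcements
across a flat mesh grows with the square of the peer count, while
asking an authoritative source grows linearly.

    # Announcement overhead: flat peer mesh vs. direct authoritative lookup.
    peers = 100000      # nodes in the mesh
    services = 50       # services each peer advertises
    msg = 100           # bytes per announcement message

    flood = peers * (peers - 1) * services * msg   # everyone tells everyone
    direct = peers * services * msg                # one lookup per service

    print("mesh flooding:  %.0f TB" % (flood / 1e12))   # ~50 TB
    print("direct lookups: %.0f MB" % (direct / 1e6))   # 500 MB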

> > The Rise of the 2X Internet
> >
> > Over the next three years, the infrastructure of the 2X Web will explode
> > onto the Internet, accompanied by rapidly expanding bandwidth
> > (particularly in the all optical core) and plummeting storage costs. It
> > will serve as the basis for the next great technology companies that
> > will push aside the aging PC world players and will breathe life into
> > the moribund, financially strapped, online content world.

Translation: I have this great business plan that guarantees a profit
just as soon as the golden age arrives.

> > Here is a
> > taxonomy of the 2X Internet:
> >
> > Global backbones that provide XML dial tone, desktop servlets, and
> > highly distributed applications will emerge and begin to fill the role
> > for customers that the telcos did in the pre-Internet world. These
> > backbone providers will not arise out of the Telco or hosting world but
> > rather will be new names, such as Neoplat.

Neowhat? Don't hold your breath waiting for "global backbones" to be
provided by anyone but the Telco or hosting world -- they are the only
ones capable of going so far into debt.

> > New innovative software from companies such as Centrata, KnowNow, and
> > Applied MetaComputing will enable independent software vendors to
> > rapidly develop, globally deploy, and easily manage their
> > software. Microsoft will play a very large part of this revolution too
> > with its .Net infrastructure, but it will likely be late to the party
> > and a ferocious competitor when it arrives.

Well, it is nice to be called innovative, even if he doesn't understand
what the rest of those claims mean.

> > Applications engineered for the 2X Internet will quickly dominate.
> > Web-enabled apps will be quickly outgunned in all the major areas of
> > evaluation (functionality, ease of use, cost, and performance) by the 2X
> > Internet apps. Names such as Groove, Userland, and Rotor may become
> > commonplace in the delivery of these services.

Great, more name-dropping for no apparent reason.

So everything this guy has claimed is bogus. Please tell me that there
is some value in the "two way Web", because this crap just dissuaded me
from any interest in the technology. I know you guys can do better.

....Roy


