I am the team lead for the Blaze Web Performance Pack. I got a search hit
on this page (http://xent.w3.org/FoRK-archive/fall96/0700.html) when I was
looking for "xSpeed" in a test query, and I was tempted to comment on the
discussion here.
> > • A feature that anticipates links a user could follow on a given
> > page begins pre-loading those pages before the user clicks on the
> > link.
> This seems like a really bad idea. If every user running a browser is
> performing HTTP communications X% of the time their browser is running,
> this means that each browser will now be performing HTTP communications
> X+n% of the time. So in one fell swoop, this technology increases each
> browser's contribution to net congestion.
This discussion really comes down to whether read-ahead browsing (or
pre-fetching) is evil for net congestion or not. On one hand it gives the
user a "quicker" page; on the other, it does pull down a set of other
documents in anticipation.
I agree that this is debatable; however, our product spends significant
time computing and filtering the list of probable hits. It does not
pre-fetch any CGI-based links (which includes a significant set of ads,
searches, etc.). It does not pre-fetch pages blocked by servers that
support robot exclusion. It analyzes the contents of each page that goes
to the browser to more accurately predict the next hit. It also analyzes
web access patterns and adapts to them.
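The filtering described above can be sketched roughly as follows. This is
an illustrative sketch only, not xSpeed's actual code; the function name,
the CGI markers, and the robot-exclusion list are all assumptions made for
the example.

```python
from urllib.parse import urlparse

# Markers that suggest a dynamic (CGI-style) link: never pre-fetch these,
# since they cover most ads, searches, and other side-effecting requests.
CGI_MARKERS = ("/cgi-bin/", "?")

def is_prefetchable(url, robots_disallowed):
    """Return True if a link is a reasonable pre-fetch candidate.

    Skips CGI-style links and any path the server has disallowed
    via its robot-exclusion rules (robots_disallowed is assumed to
    be a list of disallowed path prefixes already fetched/parsed).
    """
    if any(marker in url for marker in CGI_MARKERS):
        return False  # dynamic content: never pre-fetch
    path = urlparse(url).path
    for prefix in robots_disallowed:
        if path.startswith(prefix):
            return False  # honour robot exclusion
    return True

links = [
    "http://a.com/page.html",
    "http://a.com/cgi-bin/search?q=x",
    "http://a.com/private/doc.html",
]
candidates = [u for u in links if is_prefetchable(u, ["/private/"])]
# Only the plain static page survives the filter.
```

A real pre-fetcher would then rank the surviving candidates by predicted
click probability before pulling any of them.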
> > • Page-streaming HTTP loads an entire Web page all in one shot rather
> > than in intermittent connections.
> Is this just a persistent connection? If a web page is larger than the
> packet sizes between the client and server, you certainly can't load a
> page in one shot (whatever a shot is).
That "shot" is an example of a marketing shot in the dark... :-) The
"shot" really corresponds to a high-level HTTP transaction, not to the
individual TCP connections, which may vary based on the packet size as
you note.
It differs from a persistent connection in an interesting manner. Even
with persistent connections, for each HTML page the client still makes
subsequent requests to the server for each of the inline pieces (images,
Java applets, etc.). xSpeed, on the other hand, uses a compressed archive
of the whole page, including the "packagable" inlines. The archive is
decompressed on the client side and all subsequent requests are handled
there, saving the round-trip times (RTTs) for all the inlines.
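The idea can be illustrated with a minimal sketch: pack a page and its
inline resources into one compressed archive so a single fetch replaces
one round trip per inline. This is only an illustration of the bundling
concept, using a zip file as a stand-in; it is not xSpeed's actual wire
format, and the file names and contents are invented for the example.

```python
import io
import zipfile

# Server side: bundle the HTML page and its "packagable" inlines
# into one compressed archive (one response, not 1 + N requests).
page = {
    "index.html": b"<html><img src='logo.gif'><img src='photo.jpg'></html>",
    "logo.gif": b"GIF89a...",     # placeholder image bytes
    "photo.jpg": b"\xff\xd8...",  # placeholder image bytes
}

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    for name, data in page.items():
        zf.writestr(name, data)
archive = buf.getvalue()  # single response body carries the whole page

# Client side: decompress once; inline requests are satisfied locally,
# so no further round trips to the server are needed.
with zipfile.ZipFile(io.BytesIO(archive)) as zf:
    unpacked = {name: zf.read(name) for name in zf.namelist()}
```

With plain persistent connections the client would still issue one
request per inline (two extra RTTs in this toy page); with the bundle it
issues exactly one.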
I hope I was able to clarify some doubts here. If you would like to see
this technology in action please visit our web site
(http://www.xspeed.com) for a trialware/prototype version of the product.
The site also has links that let you test the speed increase yourself.
Thanks for your interest and views on xSpeed!