[Need to go visit ISI... In DC, Joe mentioned some even more intriguing
current work that shoots down MUX and turns the mechanism for DNS
caching on its head... RK]
[1996 Project Summary]
LSAM: Large-Scale Active Middleware (formerly Cities Online)
University of Southern California/Information Sciences Institute
The Large-Scale Active Middleware (LSAM) project is developing the
middleware infrastructure to support scalable distributed
information services. LSAM development includes network-sensitive
distributed caching, replication for reliability and performance,
integrated security and access control mechanisms, and a
demonstration application. This middleware will support the
large-scale deployment of these services with effective economies
of scale in confederated systems.
LSAM is composed of four major tasks: Intelligent Bandwidth,
Replication, Security, and Test-Application Development.
Intelligent Bandwidth (IB) organizes the use of distributed caches
based on network parameters and usage information. Replication
supports copy management and copy selection to reduce access
contention and increase reliability. Security services integrate
emerging authentication and privacy mechanisms with access
control, and augment those services to accommodate the new
middleware LSAM will develop. The Test Application will
demonstrate the advanced services provided by LSAM.
We will build intelligent bandwidth mechanisms to more effectively
use available network resources to reduce response latency and
combine responses for realtime mass access. These mechanisms will
be sensitive to the networking environment and will select caches
based on network topology. We will also support graceful
degradation as operation becomes increasingly disconnected. We will
use a reliable multicast transport protocol developed for
background bulk data distribution, and will develop a
self-configuring multicast group management system.
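The network-sensitive cache selection described above can be sketched as follows. This is a hypothetical illustration, not LSAM code: it assumes round-trip latencies to candidate caches have already been measured, picks the best reachable cache, and falls back to None to model graceful degradation toward disconnected operation.

```python
def pick_cache(latencies):
    """Given a dict of cache name -> measured round-trip time in seconds
    (None for an unreachable cache), return the lowest-latency reachable
    cache, or None when no cache responds (the disconnected fallback)."""
    reachable = {cache: rtt for cache, rtt in latencies.items()
                 if rtt is not None}
    if not reachable:
        return None  # degrade gracefully toward disconnected operation
    return min(reachable, key=reachable.get)
```

A real implementation would refresh the latency measurements continuously and weight them with usage information, as the paragraph above suggests.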
Replication is a popular approach to improve service performance
and reliability. We will develop mechanisms to support automated
selection of a replica, based on availability and performance. We
will also integrate existing mechanisms for data coherence, name
transparency, replica security and integrity, coordinated caching
and replication across administrative domains.
Security for many parts of LSAM will depend on securing actions
between agents, whether those agents are servers, clients, or
caches. We are developing techniques to integrate the various
aspects of a transaction that can be secured, more or less
independently, including stream, transaction, and access control.
Recent FY-96 Accomplishments
* Published various papers on the performance effects of proposed HTTP
extensions. Our general conclusion is that the current extensions offer
very limited benefit, and may hinder integration with emerging
Internet protocols.
* Released software patches for performance analysis and enhancements to
the public-domain Apache web server, along with tools to measure Web
performance in fine detail: a proxy that instruments the individual
phases of a web transaction, and scripts to measure batches of web
transactions, including the replay of a proxy log.
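The log-replay scripts mentioned above might work along these lines. This is a minimal sketch, not ISI's released tool: it assumes proxy log entries in Apache's Common Log Format and extracts the GET URLs in order so they can be refetched.

```python
import re

# One Common Log Format request field, e.g.:
#   host - - [10/Oct/1996:13:55:36 -0700] "GET http://a/b HTTP/1.0" 200 2326
CLF = re.compile(r'"(?P<method>\S+) (?P<url>\S+) \S+" (?P<status>\d{3}) (?P<bytes>\d+|-)')

def replay_list(log_lines):
    """Extract the GET URLs from proxy log lines, in order, so a batch
    of web transactions can be replayed against a cache or server."""
    urls = []
    for line in log_lines:
        m = CLF.search(line)
        if m and m.group("method") == "GET":
            urls.append(m.group("url"))
    return urls
```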
* Developed an automated approach to the maintenance of links for web
pages of related work, using scripts that draw on existing Web search
engines, in addition to local resources, to provide several views of
related work.
* Prototyped a replica selection mechanism with a modular selection
algorithm. Two algorithms are currently employed: one based on DNS
hostnames and one based on geographic location derived from whois data.
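A modular selection mechanism of the kind described in that bullet could be structured as a registry of pluggable algorithms. The sketch below is illustrative only (the registry, the record fields, and both algorithms are assumptions, not the prototype's actual design): one algorithm prefers replicas sharing the client's DNS domain suffix, the other picks the replica nearest the client's coordinates, as might be derived from whois data.

```python
import math

# Registry of pluggable replica-selection algorithms, keyed by name.
ALGORITHMS = {}

def algorithm(name):
    def register(fn):
        ALGORITHMS[name] = fn
        return fn
    return register

@algorithm("dns")
def by_dns_suffix(client, replicas):
    """Prefer replicas sharing the client's DNS domain suffix."""
    domain = client["host"].split(".", 1)[-1]
    local = [r for r in replicas if r["host"].endswith(domain)]
    return (local or replicas)[0]

@algorithm("geo")
def by_geography(client, replicas):
    """Pick the replica nearest the client's (lat, lon) position."""
    return min(replicas, key=lambda r: math.dist(client["pos"], r["pos"]))

def select_replica(algo, client, replicas):
    """Dispatch to the named algorithm; new algorithms plug in via
    the registry without changing this entry point."""
    return ALGORITHMS[algo](client, replicas)
```

Keeping selection behind one entry point is what lets the latency/bandwidth and topology algorithms planned below slot in later.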
FY-97 Plans
* Deploy Intelligent Bandwidth mechanisms, using multicast web bulk
transport and self-configured dynamic multicast groups.
* Develop a generalized, configurable automated Web-rendezvous facility,
as an extension of the automated link maintenance mechanism.
* Develop a replica selection server based on geography, DNS information,
et al. (with comparison). Develop additional replica selection algorithms
based on latency/bandwidth and network topology. Deploy a replica
selection mechanism for public use. Evaluate the effectiveness of the
various selection algorithms.
* Implement a security system supporting both current stream and
transaction mechanisms and new replication and cache access control.
* Demonstrate a web-system "application" that features the facilities of
the LSAM middleware.
USC/ISI has been active in making its papers, software, and
results available on the web, and has been an active participant in
the IETF and other e-mail groups. USC/ISI will continue
discussions and joint research with interested parties, notably:
GTE Labs - James Sterbenz - (middleware)
ARPA BADD effort - TAC:Steve Schwab - (IB)
ARPA Digital Libraries - (Ron Larsen) - (services)
Philips Research - Yasser alSafadi - (services)
Univ Arizona - Lawrence Brakmo - (mcast bulk)
Hughes - Y. Zhang - (mcast web)
UCLA - G. Popek, Kleinrock - (mobile)
Metricom - Mike Ritter - (mobile)
National Digital Network - Rich Amons - (mobile)
WWW Consortium - Rohit Khare - (IB)
Netscape - (several) - (IB)
* Herbert Schorr (email@example.com)
* Jon Postel (firstname.lastname@example.org)
* Joe Touch (email@example.com)