Draft -1 of MIT statement

Rohit Khare (khare@pest.w3.org)
Fri, 29 Dec 95 17:19:00 -0500


This one seems to be a loss. Needs a major tune-up and cut-down. Still too broad...?

I could turn it upside down and just say Web Web Web, and list web-related
miniproblems.

Sorry, but you're missing some italics and bolding of words

Comments?

Rohit

-----------------------------------------------------
Opportunities for Globally Distributed Applications

For decades now, computer science has proselytized parallel and distributed
computation as the only reasonable solution to the bottlenecks of the Von
Neumann architecture. Developers in the field, like soldiers on Independence
Day, smartly salute the standard of distributed computing.

So why don't we live in a world of casually distributed applications? Why is
it a Herculean effort even to reach as high as client-server decoupling? Why,
after fifty years, do we await the millennium meekly expecting the deployment
of three-tier applications? I want to know, and more importantly, to know how
to revolutionize the situation.

I will outline some of the forces I think are at work, some of the projects
at MIT LCS I think are addressing the situation, and some of the skills I
think are necessary to tackle this problem. Furthermore, I want to tie this
academic search for solutions to the very real, fast-moving World Wide Web.
How can we build robust, globally distributed applications?

High Performance Computing Mindset. At Caltech, I studied in the bowels of
this beast; my research mandate was to bring parallel processing to
programmers in the trenches. My direct efforts went toward building an
electronic textbook system, but the real lessons were about the HPC
community's view of application development: that developing software for
multicomputers and multiprocessors is difficult, expensive, and appropriate
only for the most demanding compute applications. At the same time, the
methods we were developing for transitioning programmers and students to
parallel computing (the Archetype method) highlighted the usefulness of
parallel decomposition at all levels of code development, on serial and
parallel devices. Analyzing the dependencies between the components of an
algorithm and its data structures is not just a monolithic analysis
performed once to go parallel: incremental, formal models of this
information are universally applicable, and vital for generating
professional solutions. In the end, though, HPC recapitulates a priesthood,
a community whose separate existence confirms outsiders' prejudices that
its solutions are impractical, inaccessible, and incomprehensible next to
traditional ones.
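
To make that concrete, here is a minimal sketch of the kind of dependency
reasoning involved (the example is mine, not Archetype code): a reduction
over an array has no loop-carried dependencies except the final combine,
so the analysis licenses splitting the work across threads.

    // Dependency-driven decomposition: iteration i never reads the
    // result of iteration i-1, so disjoint slices may run in parallel;
    // the only sequential step is the final reduction.
    class SliceSum implements Runnable {
        private final double[] data;
        private final int lo, hi;
        double partial;                     // this worker's result

        SliceSum(double[] data, int lo, int hi) {
            this.data = data; this.lo = lo; this.hi = hi;
        }

        public void run() {
            double s = 0.0;
            for (int i = lo; i < hi; i++)
                s += data[i];
            partial = s;
        }

        static double sum(double[] data, int nWorkers)
                throws InterruptedException {
            SliceSum[] jobs = new SliceSum[nWorkers];
            Thread[] threads = new Thread[nWorkers];
            int chunk = (data.length + nWorkers - 1) / nWorkers;
            for (int w = 0; w < nWorkers; w++) {
                int lo = w * chunk;
                int hi = Math.min(data.length, lo + chunk);
                jobs[w] = new SliceSum(data, lo, hi);
                threads[w] = new Thread(jobs[w]);
                threads[w].start();
            }
            double total = 0.0;
            for (int w = 0; w < nWorkers; w++) {
                threads[w].join();          // join orders the reads below
                total += jobs[w].partial;
            }
            return total;
        }
    }

The same dependency facts serve a serial compiler or a maintainer just as
well as a parallel scheduler, which is exactly the point.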

Organizationally Novel. Many of the most compelling scenarios for universally
interoperable, universally distributed applications lie across traditional
organizational domains. For example: how can Intuit's Quicken transduce the
state of your portfolio from your broker's mainframe? How can a price
quotation cross through three companies' order systems, unmolested? These
kinds of issues are far above the usual stack of hardware, protocol, and
methodology problems: they are hermeneutic problems of shared knowledge.
Today, even client-server applications are limited to a single domain: the
company's sales force, perhaps its suppliers, but rarely further. Many of the
brightest hopes for object-oriented applications and CORBA-based distributed
programming rest on such visions: shared componentware, standard object
interfaces for employee records, and the like.
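
A hypothetical fragment suggests what such a shared interface might look
like (the names here are mine, not any published standard); the hard part
is not the declaration but getting three companies to agree on its
semantics:

    // Hypothetical shared interface: if every organization's order
    // system implemented this, a quotation could cross company
    // boundaries without per-pair translation code.
    interface PriceQuotation {
        String itemId();                // agreed-upon part numbering
        long   unitPriceInCents();      // agreed-upon currency units
        int    quantity();
        String quotingOrganization();   // who stands behind the quote
    }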

Emerging Infrastructure. As the front pages of the New York Times make plain,
the world is finally inheriting the technology academia has always taken for
granted. Today's Internet, with further improvements on the horizon, provides
powerful services far beyond those commonly available to software developers:
global naming, addressing, directory services, and scalable bandwidth. At the
application layer, the Web has revolutionized presentation and interface
techniques, and network programming languages like Java, Telescript, and
Kranz's Abstract RISC Code promise to make the future application layer
almost infinitely diverse and extensible. Below, though, mobility services
are still unclear, and above (at the financial and political layers of the
network stack), pricing, payments, and security models remain insubstantial.

Amid this mishmash of reasons and technologies, it is clear I have not
identified a crisp academic problem. There is a systemic intractability to
bringing the world to distributed applications. While there are many
promising technologies at the Lab, I hold no illusion that a technical
solution alone is the key.

Global Object Brokers. CORBA, a commercially compelling technology for remote
method invocation and interface discovery, is inherently a solution scoped to
a single organization: it is designed assuming LAN-like capabilities,
precisely homogeneous object models, and preconfigured addressing. W3C's
HTTP-NG addresses one half of this problem: an object request service
optimized for global distribution, with asynchronous messaging, fault
tolerance, and so on. The Information Mesh project covers the other half:
reasoning about object roles and object naming. Liskov's Thor group already
understands many of these problems.
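
The difference in assumptions shows up even at the interface level. A
hypothetical sketch (the names and shapes are mine, not HTTP-NG's): a
LAN-scoped broker can afford a blocking call, while a global one must make
latency and failure explicit in the contract.

    // LAN-scoped assumption: the peer is near, fast, and up, so a
    // blocking call is tolerable.
    interface PortfolioService {
        double currentValue(String account) throws RemoteFailure;
    }

    // Global-scale assumption: the reply may arrive much later, or
    // never; the caller must plan for both.
    interface AsyncPortfolioService {
        PendingResult requestValue(String account);
    }

    interface PendingResult {
        boolean isReady();
        boolean hasFailed();    // fault tolerance is explicit
        double  value();        // meaningful only once isReady()
    }

    class RemoteFailure extends Exception { }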

Protocol Level Engineering. Clark's work on Application-Level Framing is a
step toward developing applications and protocols better suited to each
other. Mobility, and applications adapted to wireless distribution, have been
addressed by the ROVER project.
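
As I read the Application-Level Framing principle, the application, not
the transport, should choose the units of transmission, so that each unit
is useful on its own when it arrives. A rough sketch of what that implies
for a frame type (my illustration, not Clark's code):

    // Each unit carries enough context to be processed independently,
    // so frames may arrive out of order, and loss degrades the
    // application gracefully instead of stalling it behind a
    // retransmission.
    class ApplicationDataUnit {
        final String documentId;  // which object this frame belongs to
        final int    offset;      // where it fits; no other frame needed
        final byte[] payload;

        ApplicationDataUnit(String documentId, int offset, byte[] payload) {
            this.documentId = documentId;
            this.offset = offset;
            this.payload = payload;
        }
    }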

Parallel & Distributed Expertise. What comes across most clearly about the
Lab is its commitment to spreading the gospel. There are many other projects
around the Lab that address critical issues I may not be able to contribute
to myself, but that make LCS the best environment to work on this challenge.
Formal methods work by Liskov and Guttag on Larch, and emerging work from
Lynch's group on compositional verification, are critical elements, not just
for verifying protocols, but for guiding software development. Groups working
on dataflow architectures and multithreaded languages like *T also have
insights on how to transform legacy code.

So far, this vision is (hopefully) preaching to the choir. Why Rohit Khare,
though? Putting aside academic credentials, I think there are three claims I
can make. First, I measure success by a higher standard than publishing
papers: I am intensely interested in the real world and in having an impact.
Second, though I am an engineer and hacker at heart, I think I have a healthy
respect for theoretical analysis. That's a skill that comes from tempering
the hot, hyperactive developer's mind with four years of Caltech's total
commitment to rigorous proof. Finally, I have developed an intimate knowledge
of the global Internet and its protocols. I think new protocols and new
application models will be the key to deploying a new generation of
Internet-distributed programs. Working at the W3C, I am developing PEP, an
extension protocol for HTTP, a delicate and difficult technology to game out.
On top of PEP, I am helping develop solutions for security, electronic
payment, object labelling, and new HTTP services like leasing, cache logging,
etc.
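
To give a flavor of per-request extension negotiation on the wire, here is
a hedged sketch; the Protocol-Extension header below is an illustrative
stand-in, not the actual PEP syntax, which is still in flux.

    import java.io.*;
    import java.net.*;

    class ExtendedRequest {
        public static void main(String[] args) throws IOException {
            // Hypothetical server and extension URL, for illustration.
            Socket s = new Socket("www.example.org", 80);
            Writer w = new OutputStreamWriter(s.getOutputStream(),
                                              "ISO-8859-1");
            w.write("GET /report HTTP/1.0\r\n");
            // The client declares, per request, that it speaks some
            // extension, here an imaginary payment module named by URL:
            w.write("Protocol-Extension: http://example.org/ext/payment\r\n");
            w.write("\r\n");
            w.flush();
            BufferedReader r = new BufferedReader(
                new InputStreamReader(s.getInputStream(), "ISO-8859-1"));
            System.out.println(r.readLine());   // server's status line
            s.close();
        }
    }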

Finally, I'd like to note one more detail of my case: I am currently a
research staff member of the W3C. I am automatically interested in the allied
research problems of the Web (caching, naming, security, etc.), and that's
where I'd expect to begin working. I think that being a student in an LCS
research group while working at, and being funded by, the W3C would be a
win-win-win for each group, and would bind the W3C and LCS closer together.
Even if I am not the first, I believe that a graduate researcher working
within the W3C is an absolutely vital step for LCS.