[Technology Review] Akamai's Algorithms


From: Linda (joelinda1@home.com)
Date: Mon Sep 04 2000 - 22:14:32 PDT

[Adam asked me to FoRK this...]


September/October 2000
Akamai's Algorithms

Tom Leighton has the formula for going from MIT math professor to
Internet gazillionaire. You do the math. Tom Leighton, a professor at
MIT's Laboratory for Computer Science, or LCS, holds nearly 10 million
shares in Akamai Technologies, a company he co-founded in August 1998.
Last October, Akamai went public, with prices at the initial public
offering (IPO) starting off at $26 a share; by the end of the day,
investors had bid the price up to $145 a share. A month later the
stock was selling at $327 a share. No matter how much math anxiety
you might have, you get the point: Tom Leighton had become a very
rich man.

An academic whose expertise is in parallel algorithms and applied
mathematics, Leighton is at first glance an unlikely candidate for
an Internet tweeds-to-riches success story. But on closer
examination, it makes perfect sense. For years, Leighton has
been scrutinizing how complex networks operate, and how they can
be optimized. So, five years ago, when Tim Berners-Lee (the
inventor of the World Wide Web) came down the hall at LCS looking
for ways to better manage the escalating traffic flow on the
Internet, Leighton and his crew of graduate students were an
obvious place to drop in.

During the next several years, Leighton and a mix of MIT graduate
students and undergrads tried to figure out a better way to
manage and distribute content over the Web. In early 1998, the
group, which included grad student Daniel Lewin (who along with
Leighton and Jonathan Seelig, a student at MIT's Sloan School,
went on to found Akamai), entered the MIT $50K Entrepreneurship
Competition. The team was a finalist but didn't win.

Still, the venture capitalists came knocking. And the rest is
Internet history. Today the company runs a worldwide network of
more than 4,000 servers that distributes Web content for such
customers as Yahoo!, CNN and C-SPAN; if a PC user requests, for
example, videostreaming from C-SPAN's Web site, the Akamai system
of servers helps to deliver that content, thereby avoiding
bottlenecks at C-SPAN's centralized site. The distributed network
makes content delivery over the Web quicker and more reliable.

Despite hitting the IPO jackpot, the soft-spoken MIT professor
(currently on a leave of absence from LCS) displays few overt
signs of material success. At Akamai's new headquarters
adjacent to the MIT campus, Leighton, the company's chief
scientist, occupies a modest corner office overseeing a maze
of cubicles. It's very much the office of a professor, and
Leighton speaks in the patient and precise words of someone
used to explaining how things work. TR Senior Editor David
Rotman recently went over for a lesson on managing traffic on
today's Internet.

TR: When did it occur to you that you could use algorithms to
optimize content delivery on the Web?

LEIGHTON: The first time I ever thought about the Internet
was in 1995. My office [at MIT's LCS] is down the hall
from Tim Berners-Lee and the Web Consortium. Over time we
talked about some of the issues facing the Internet. These are
the kinds of large-scale networking problems that our group
was working on and that I have a long-term interest in. So
we took on some of them as research projects.

TR: In a sense, the Internet is really the ultimate
networking challenge, isn't it?

LEIGHTON: Yes. That's right.

TR: What was the problem that you started with in '95?

LEIGHTON: We were looking at ways to deal with flash
crowding and hot-spotting. That's where a lot of people go
to one site at one time and swamp the site and bring down
the network around it, making everyone unhappy.

TR: Can you explain the technologies you've developed?

LEIGHTON: Today we're probably one of the world's largest
distributed networks. At a high level, we're serving content
or handling applications for end users, and we're doing that from
servers that are close to the end users. "Close" is something
that changes dynamically, based on network conditions, server
performance and load. Because we're close, we can avoid a lot of
the hangups, delays and packet loss that you might experience if
you're far away. Before, you typically got your interaction with
a central Web site. And typically that was far away. Now you
typically have a lot of your interactions (not all, but a lot)
with an Akamai server that is near you and is selected in real time.

TR: What are the tricks and challenges to making this distributed
system work?

LEIGHTON: It's an extremely hard area; you can't just go and throw
a bunch of servers out there and have them all work with each other.
The servers themselves are going to fail. Processors are going to fail.
The Internet has all sorts of its own issues and failure modes. So
all these kinds of things have to be built into the algorithmic
approach. How do you develop a decentralized algorithm with imperfect
information that is still going to work? That's a huge challenge. But
it's clearly what you have to do. You can't have any central point of
failure or the system will come down. I can't think of a component
or a piece of hardware that hasn't failed at some point or some place.
So, it's a given [that you need a distributed system].

When a client comes to one of our customers looking for content, we
have to figure out where that client is, which of our locations at
that moment is the best to serve the client from, and what the load
conditions are, so we don't overload anything. We have to handle
flash crowds that are both geographic and content specific. We have
to replicate the content immediately to handle any of those kinds
of issues, but you can't afford to have copies of everything
everywhere. You've got to make these decisions and respond back to the
clients in milliseconds. We've got to be automatic. And when pieces
fail, you've got to compensate automatically for that.
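The selection problem Leighton describes (locate the client, pick the
best nearby location, avoid overloading anything) can be sketched as a
scoring function over candidate locations. Everything below, from the
location records to the 0.9 load cutoff, is an invented illustration,
not Akamai's actual algorithm:

```python
# Hypothetical sketch of the selection step: given a client and a set
# of server locations, pick the "closest" one that is not overloaded.

def pick_location(client_region, locations):
    """Return the best location for a client, or None if all are saturated.

    Each location is a dict with:
      region  -- coarse geography, a proxy for network distance
      latency -- measured round-trip time in ms from the client's region
      load    -- current utilization, 0.0 (idle) to 1.0 (saturated)
    """
    candidates = [loc for loc in locations if loc["load"] < 0.9]
    if not candidates:
        return None  # everything saturated; a real system would degrade gracefully
    # Prefer same-region servers, then low latency, then low load.
    return min(
        candidates,
        key=lambda loc: (loc["region"] != client_region,
                         loc["latency"], loc["load"]),
    )

locations = [
    {"name": "boston", "region": "us-east", "latency": 12, "load": 0.95},
    {"name": "nyc",    "region": "us-east", "latency": 18, "load": 0.40},
    {"name": "sf",     "region": "us-west", "latency": 80, "load": 0.10},
]
best = pick_location("us-east", locations)
print(best["name"])  # nyc: boston is closer, but over the load threshold
```

The real decision also has to be recomputed continuously as conditions
change, which is why "close" is dynamic in the answer above.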

TR: That's what you call fault tolerant?

LEIGHTON: Yes, and you have to be fault tolerant across all aspects.
Then there are also the non-obvious things. Like billing. We're
serving billions of hits a day, and we're billing for every single
hit. We've got to figure out whose content it was and how many bytes
it has, and bill them for it. On top of that, we have a service that
we offer our customers, where they can see within 60 seconds how many
hits we served for them in the last 60 seconds. In addition, we can
break down for our customers where the hits are coming from by country
or state. It's a challenging algorithmic problem. How do you actually
do that? And make it work with a finite amount of hardware?
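One simple way to support that kind of 60-second reporting is to keep
cheap in-memory counters bucketed by minute on each server and sum
them centrally. The sketch below is a hypothetical illustration of
that bookkeeping, not Akamai's actual billing system:

```python
from collections import defaultdict

class HitLedger:
    """Per-customer hit and byte counters in fixed 60-second buckets.

    One ledger per server; a central biller could sum the same bucket
    across all servers to answer "how many hits in the last 60 seconds".
    """
    def __init__(self, window=60):
        self.window = window
        # bucket index -> customer -> [hits, bytes]
        self.buckets = defaultdict(lambda: defaultdict(lambda: [0, 0]))

    def record(self, customer, nbytes, now):
        cell = self.buckets[int(now) // self.window][customer]
        cell[0] += 1       # one more hit for this customer
        cell[1] += nbytes  # bytes served, for billing

    def report(self, customer, now):
        """(hits, bytes) for the most recently completed window."""
        cell = self.buckets[int(now) // self.window - 1][customer]
        return tuple(cell)

ledger = HitLedger()
t = 120  # fake clock in seconds, aligned to a minute boundary for clarity
ledger.record("cnn", 2048, now=t)
ledger.record("cnn", 4096, now=t + 30)
ledger.record("yahoo", 512, now=t + 30)
print(ledger.report("cnn", now=t + 60))  # (2, 6144)
```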

TR: Hardware isn't really the key to this, is it?

LEIGHTON: It's not even a major component. I don't want to belittle
our hardware partners, but the key here is the algorithmic and software
infrastructure. It's critical.

TR: What is your competition in offering a distributed network for
content delivery?

LEIGHTON: There's not really much out there. We're at a time when
there are a lot of business plans and a lot of stories. There's
not much in the way of real services available today. Pretty much the
only competitor in our space is Digital Island, which recently
acquired Sandpiper [Networks]. There are others that have announced
[business plans] but are not actively carrying traffic yet. One of
the things that distinguishes Akamai is the amount of research and
engineering and R&D effort that went into designing the system. It's
not just throwing a bunch of boxes out there. There are companies that
have tried to do that with no distributed system. The companies that
announced services based on that approach two or three years ago
are no longer in business. Doing that didn't work.

TR: What are the upcoming challenges for the technology? Is it to
deliver content faster?

LEIGHTON: That's a component. We're trying to deliver on the promise
of the Internet. There is the idea that there is a tremendous
revolution happening with regard to the Internet. At the same time,
there's frustration because of the limitations. What we're trying to
do is to make the Internet more useful. And a component of
that is making it faster and more reliable. Another component, somewhat
related, is enabling the delivery of richer, more engaging
content. If we can make streaming better (and in this case the issue
is not so much speed as bandwidth and avoiding packet loss), you're
going to get a much better image on your screen; you'll do more
with it, and more people are going to use it to convey content and
information. And that's invaluable in enriching the power of the
Internet. But not everything is pushing bits. Akamai offers services
for capabilities such as Internet conferencing that enable, for
example, distance learning. With these services, content providers
or enterprise customers can effectively deliver content and interact
with small or large audiences on the Web through live audio and video;
there are features for sharing presentations, audience polling and
moderated messaging.

TR: When you introduce a new function like conferencing, for example,
what demands does it place on the network?

LEIGHTON: How are you going to implement it? How are you going to
integrate it into this massive distributed platform? How are you
going to maintain it for thousands of customers? You have thousands of
customers and hundreds of millions of people accessing those customers,
and we're sitting in between. And it all has to work by itself. You
can't be monkeying around. Delivering conferencing sounds simple. But
it's not so simple when you're talking about this kind of scale. When
people think about streaming they think of a single source where the
content comes from, and then it branches out in a tree through the
Internet. Those places can break down, and then all those people
downstream are out of luck. We've developed an entirely new way of
going about it so that there's no critical point of failure. If the
source dies, then you're stuck. But once [the content] is out of the
source, we replicate it and spread it throughout the system. So, it's
not a tree.

TR: What does it look like?

LEIGHTON: It's hard to describe. The way to think about it is that
between the source and destination, you have multiple transmissions
going on, such that you can lose content on those paths; you can have
packet loss on any or all of them, but at the endpoint you have enough
information coming in from those locations that you can reconstruct the
signal. So if a path gets killed along the way, nobody is affected.

TR: We've all experienced frustrations with videostreaming. In terms
of the technology, what will it take to make it more reliable? When
will we be able to watch webcasts as easily as TV on a full screen?

LEIGHTON: For videostreaming to be more reliable, you need a
content distribution service to deliver the bits reliably to the edge
of the network, and then you have to have a reliable last-mile
connection to the Internet. If you want high-quality video, then you'd
better have a high-bandwidth connection to the Internet. It will still
be some time before you can get TV-quality videostreams on a
widespread basis. We've demonstrated a megabit-per-second live
stream. In fact, just recently we carried thousands of
one-megabit-per-second streams to live customers accessing a
conference keynote address by Steve Jobs [CEO of Apple Computer].
This is a major milestone for the Internet. With that technology you
get a very high quality videostream. If the last mile is broadband,
then you're all set to go. One thing we're working on is bandwidth
profiling. The idea is to automatically detect the bandwidth of the
last mile. Does the client have a broadband connection, a 28K modem,
or is it narrowband (a cell phone or something)? Then we deliver the
content as a function of that. So if you detect that the client has
high bandwidth, they get the high-bandwidth version (the streamed
version as opposed to the static version). Or in the case of narrow
bandwidth, you get a printed version as opposed to the graphics.
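The bandwidth-profiling step can be sketched as a timed probe followed
by a variant choice. The thresholds and variant names below are
invented for illustration; they are not Akamai's:

```python
# Hypothetical bandwidth profiling: estimate last-mile throughput from
# one timed transfer of known size, then pick a content variant.

def estimate_kbps(nbytes, seconds):
    """Rough throughput estimate (kilobits/second) from one transfer."""
    return (nbytes * 8 / 1000) / seconds

def pick_variant(kbps):
    if kbps >= 300:   # roughly broadband for the era
        return "high-bandwidth stream"
    if kbps >= 28:    # dial-up modem range
        return "static page with graphics"
    return "text-only version"  # narrowband, e.g. a cell phone

# A 50 KB probe that took 0.5 s implies 800 kbps: broadband.
print(pick_variant(estimate_kbps(50_000, 0.5)))  # high-bandwidth stream
```

A real profiler would have to smooth over multiple measurements, since
a single timed transfer is noisy, but the shape of the decision is the
same.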

TR: The very nature of the Web seems to be changing with such
functions as videostreaming and conferencing. What will Akamai
be working on in five years? What do you think the Internet will
be like then?

LEIGHTON: Things move so fast, it's really hard to predict. People
who try to predict end up eating their words. I think we're just
at the beginning of the Internet revolution. I don't think we've
even begun to think of all the things that we can be doing on
the Internet. I can't tell you what will be the hot service
five years from now. I don't know. I would hope by then that,
for example, the quality of streaming is much better. That it's
part of daily life. At the least, I would expect the typical
Web experience to become richer, more efficient and more
reliable than it is today.

TR: You are seen by many as a model of an academic making it big
as an entrepreneur in the new economy. What do you tell those
looking to emulate your success?

LEIGHTON: I never had an aspiration to be an entrepreneur. I love
academia and co-founded Akamai because we felt it was the best
way to transfer our technology from a research environment into
practice. It felt really nice to be taking technology, especially
technology out of a university, and making a difference with it.
That's probably the biggest reward. It often takes 10 to 20 years
for a technology in a university to really manifest itself in
practice. And this time we're able to decrease that time
dramatically. I'm perfectly happy writing a paper that only
five people read. Pretty smart people will read it, and I get a
kick out of that. It's what I've spent my life doing.
But this is something with a chance to make a difference.

TR: Do you ever miss the days when, as you put it, you spent your
time writing papers that maybe five people were able to read?

LEIGHTON: Yes, although I don't have much time to think about it.


This archive was generated by hypermail 2b29 : Mon Sep 04 2000 - 22:16:49 PDT