>Teledesic *will* solve this because it's a non-problem! Satellites have
>been part of the IP fabric since the very beginning -- making it work with
>TCP is only marginally harder.
>I'm unconcerned because the problems affecting satellites are isomorphic to
>those facing gigabit transcontinental lines: the 'fat pipe problem'. No
>matter how fast (or slow) your transmission speed, it's still at least 30
>ms light delay coast-to-coast rtt. While a modem is lucky to send a few
>bytes in that window, the ack-delay is effectively zero packets. When a
>terabit line sends hundreds of thousands of packets within that
>window, the acks back up and explode.
>So the solutions are tractable, too: streaming acks, selective nacks,
>better modeling of the transport mediums' underlying error rates and
>self-recovering error-resistant encodings. All of this MUST be solved on
>the ground for fiber/SONET/etc, so it should port right over to space. The
>worst case cost is a session-layer TCP gateway.
>That's why Teledesic may still be right for not investing a dime in sw
>research and letting the rest of the Net solve their problems in the next
>36 months :-)
>PS. Fooey on NASA for not predicting these problems when they turned on
>their satellite 155 Mbps net -- the response-window ack-delay is obvious to
>the most casual observer....
30 ms delay can cause bottlenecks, but only on *really* fat pipes.
Geostationary satellites, by contrast, have round-trip delays of over
500 milliseconds.
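The "fat pipe problem" in the quoted post can be put in numbers (my illustration, not from the post): to keep a link busy, a sender must have roughly bandwidth times round-trip time of data unacknowledged and in flight.

```python
# Bandwidth-delay product: illustrative numbers only, assuming a 30 ms
# coast-to-coast round trip as in the post above.

def bits_in_flight(link_bps: float, rtt_s: float) -> float:
    """Data that must be outstanding (unacknowledged) to fill the pipe."""
    return link_bps * rtt_s

# A 28.8 kbps modem over a 30 ms path: under a kilobit in flight.
print(bits_in_flight(28_800, 0.030))   # ~864 bits
# A terabit line over the same path: tens of gigabits must be outstanding.
print(bits_in_flight(1e12, 0.030))     # ~3e10 bits
```

This is why the modem's "ack-delay is effectively zero packets" while the terabit line's acks back up: the same 30 ms covers vastly different amounts of in-flight data.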
Teledesic's philosophy is to be seamlessly compatible with
terrestrial fiber-based networks. Since protocols and applications are
developed for terrestrial networks, any and all such applications will
work correctly over Teledesic. Our strategy is for the application not
to know it's going over a satellite.
(A curious fact: since light travels faster through a vacuum (c) than
through glass (0.6c), long connections over Teledesic will actually have
lower latency than "more direct" connections via terrestrial fiber.)
BTW, NASA (along with everyone else) knew about TCP latency problems;
the article was not well reported. RFC 1323 even explains how to use a
larger window. It's an interesting point, though, that those options
are not widely implemented today, because people design for the
terrestrial environment. By extension, HTTP 1.1 may fix the latency
issue of HTTP over satellites, but do businesses really want to bet that
the next killer application, or the one after that, will tolerate
non-standard network connections?
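The RFC 1323 workaround mentioned above amounts to asking the operating system for larger windows. A minimal sketch using the standard sockets API (whether the kernel actually grants the request, and negotiates TCP window scaling, is OS-dependent):

```python
# Sketch: requesting ~1 MB socket buffers so more data can be in flight
# over a long-delay link. Function name is mine, for illustration.

import socket

def make_fat_pipe_socket(bufsize: int = 1 << 20) -> socket.socket:
    """Create a TCP socket with large send/receive buffers requested."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, bufsize)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bufsize)
    return s

s = make_fat_pipe_socket()
granted = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(f"kernel granted a receive buffer of {granted} bytes")
s.close()
```

The catch, as noted above, is that this must be done on both ends and on every intermediate proxy; applications designed for terrestrial latencies simply never bother.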
I've appended the Teledesic white paper on this issue (which I wrote) to
the end of the message.
--
Dan Kohn <firstname.lastname@example.org>
Teledesic Corporation
+1-206-803-1411 (voice)  803-1404 (fax)
http://www.teledesic.com
The Latency Factor
Without knowing for certain all the applications and data protocols a broadband network will be called upon to accommodate in the 21st Century, it is reasonable to assume that those applications will be developed in the advanced urban areas of the developed world - where fiber-optics sets the standard. Satellite systems offer the capability to provide location-insensitive, two-way, broadband service, extending the reach of networks and applications to anywhere on Earth. But to ensure seamless compatibility with those networks, a satellite system must be designed with the same essential characteristics as fiber networks - broadband channels, low error rates and low delays.
Satellite systems are of two general types: geostationary-Earth-orbit (GEO) and non-geostationary, primarily low-Earth-orbit (LEO). Geostationary satellite systems orbit at an altitude of 36,000 kilometers (km) above the Equator - the only orbit that allows the satellite to maintain a fixed position in relation to Earth. At this height, communications through a GEO - which can travel only as fast as the speed of light - entail a round-trip transmission latency - end-to-end delay - of at least one-half second. This means that GEOs can never provide fiber-like delays.
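The half-second figure follows directly from the geometry. A back-of-the-envelope check (my arithmetic, not from the paper): a ground-to-ground exchange through a GEO traverses the 36,000 km altitude four times - up and down for the request, up and down again for the reply.

```python
# Minimum GEO round-trip latency from orbital altitude alone,
# ignoring ground distance and processing delays.

C_KM_S = 299_792  # speed of light in a vacuum, km/s

def geo_round_trip_s(altitude_km: float = 36_000) -> float:
    """Four traversals of the altitude: request up/down, reply up/down."""
    return 4 * altitude_km / C_KM_S

print(f"minimum GEO round trip: {geo_round_trip_s() * 1000:.0f} ms")  # ~480 ms
```

Adding ground-segment distance and processing pushes the practical figure past the one-half second the paper cites.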
This GEO latency is the source of the annoying delay in many intercontinental phone calls, impeding understanding and distorting the personal nuances of speech. What can be an inconvenience on voice transmissions, however, can be untenable for real-time applications such as videoconferencing as well as many standard data protocols - even for the protocols underlying the Internet. The advanced digital broadband networks will be packet-switched networks in which voice, video, and data are all just packets of digitized bits. It is not feasible to separate out applications that can tolerate delay from those that can't. As a result, the network has to be designed for the most demanding application.
Applications are developed for prevailing terrestrial standards, not for special networks with non-standard characteristics. Companies that build networks that are not compatible with the predominant protocols and applications are taking a big business risk that their systems will be usable mainly for specialized, proprietary applications. History has not looked favorably upon companies that have made big bets on products that don't conform with prevailing standards. And since telecommunications customers make purchasing decisions based on their most demanding - not their average - application, geostationary satellite systems represent a very risky choice for a carrier if even a relative minority of services - such as voice, videoconferencing and certain data protocols - are latency-sensitive.
In fact, it turns out that the vast majority of protocols running over the Internet and intranets are adversely affected by high-latency connections. Two of the most important standards in computing today provide examples. TCP is the standard transport protocol for networking, and the World Wide Web is the fastest growing network application in history, widely recognized as a new medium for collaboration and commerce. Both are intrinsic to the Internet and intranets, yet neither works well over geostationary links.
The Internet Protocols - TCP/IP
TCP/IP is the protocol suite underlying the Internet and all intranets. It is so fundamental to the operation of the Internet that one of the best technical definitions of the Internet is "the network of interlinked computers running the TCP/IP protocol suite". Transmission Control Protocol (TCP) is a reliable data protocol; it guarantees that the data will arrive in the same form it was sent, without loss or corruption. Like most protocols, TCP splits the data into segments - called packets - and then reassembles them in the same order on the other side of the link. This way, if any data is lost in transit, the missing packets can simply be re-transmitted. However, this requires that all unacknowledged packets be stored on the transmitting computer until confirmation is received that the packets arrived successfully. To confirm successful transmission, TCP utilizes acknowledgment packets, where the recipient indicates essentially "I've correctly received the data so far; please continue". The time it takes to send some data and get an acknowledgment back is the round-trip delay - known as the round-trip latency - of the connection.
TCP/IP was designed on, and works quite well over, terrestrial networks with low latency. Problems arise, however, when it is used over non-standard networks with high latency, such as geostationary satellite links. The issue is that most implementations of TCP allow only a small number of packets to be stored in a buffer on the transmitting computer while awaiting acknowledgment that they were received correctly on the other side of the connection. Using a small buffer wasn't just an oversight. Small buffers can improve performance in certain circumstances, such as when one machine serves many users simultaneously (e.g., a popular Web server).
For example, the default buffer size in both the Windows 95 and Windows NT implementations of TCP/IP is 64 kilobits. This means that at any given moment, only 64 kilobits can be in transit and awaiting acknowledgment. No matter how many bits the channel theoretically can transmit, it still takes at least half a second for any 64 kilobits to be acknowledged. So, the maximum data throughput rate is 64 kilobits per ½ second, or 128 kbps.
What does this mean to the end-user? If a customer takes any computer - from a low-end laptop to a top-of-the-line, multiprocessor server, adds an industry-standard Windows operating system, hooks up a broadband geostationary link, and orders a 2 Mbps connection, they expect to be able to transmit about 2 Mbps worth of data. In fact, any connection via a geostationary satellite would be constrained to only 128 kbps, which is less than 7% of the purchased capacity.
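The constraint described in the two paragraphs above can be sketched as follows (illustrative numbers matching the paper's example): achievable throughput is capped by the window divided by the round-trip time, no matter how fast the link.

```python
# Window-limited throughput over a GEO link, using the paper's figures:
# a 64-kilobit window and a half-second round trip.

WINDOW_BITS = 64_000
GEO_RTT_S = 0.5

def effective_throughput(link_bps: float) -> float:
    """Achievable rate: the lesser of link capacity and window/RTT."""
    return min(link_bps, WINDOW_BITS / GEO_RTT_S)

for link in (128_000, 2_000_000, 155_000_000):
    used = effective_throughput(link)
    print(f"{link / 1e6:7.3f} Mbps link -> {used / 1e3:.0f} kbps usable "
          f"({100 * used / link:.2f}% utilization)")
```

At 2 Mbps this gives 128 kbps, the "less than 7% of the purchased capacity" figure above; at 155 Mbps, utilization falls below a tenth of a percent.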
As Dr. Lawrence P. Seidman of Hughes Space and Communications Company has written, "A very high data rate channel with latency is effectively a low throughput channel."
The World Wide Web
The World Wide Web recently overtook all other applications as the most common use of Internet bandwidth. Intranets, the most talked-about trend in computing, are based on the concept of utilizing Web technologies within a corporate network. Like TCP/IP, Web technologies were developed for terrestrial networks, and encounter serious performance problems over non-standard, high-latency links.
Each part of a Web page - the text, each graphic, sounds, etc. - is fetched using independent TCP transactions. The actual data delivered by each of these transactions can be quite small - a tiny graphic of only a few dozen bytes, for example. In these circumstances, the overhead caused by protocol set-up quickly overwhelms any delay from actually sending data. All TCP connections, including Web transactions, require at least two round-trip delays for setting up a connection. The Web protocol will then add at least one additional round-trip delay, and will add more in many circumstances. All of this is overhead that's separate from the time it takes to actually transmit data. With a minimum of three round-trip delays, each lasting at least 500 milliseconds, protocol set-up will take at least 1.5 seconds per Web transaction over geostationary satellite links. But it gets worse.
That's because displaying any Web page can involve dozens of different transactions, each one requiring a separate protocol set-up, and each one incurring the delay penalties. When these individual 1.5 second delays are aggregated together, Web pages downloaded over a geostationary satellite can take tens or hundreds of times longer than connections made over networks such as Teledesic that provide fiber-like delays. And because Web transfers are conducted over TCP, GEO customers face not just the Web protocol overhead delays but also the performance bottlenecks of TCP.
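The aggregation described above can be modeled roughly (my simplification: sequential, non-persistent connections, each paying the three round trips of setup the paper counts).

```python
# Protocol set-up overhead alone for fetching a multi-object Web page,
# ignoring transmission time. Illustrative model, not a measurement.

SETUP_RTTS = 3  # two round trips for TCP set-up, one for the HTTP request

def page_setup_delay_s(n_objects: int, rtt_s: float) -> float:
    """Total set-up overhead for n sequential, non-persistent fetches."""
    return n_objects * SETUP_RTTS * rtt_s

# A page with 20 inline objects:
print(f"terrestrial (30 ms RTT): {page_setup_delay_s(20, 0.030):.1f} s")  # 1.8 s
print(f"GEO (500 ms RTT):        {page_setup_delay_s(20, 0.500):.1f} s")  # 30.0 s
```

Real browsers open some connections in parallel, which divides these totals by the degree of parallelism but does not change the ratio between the terrestrial and GEO cases.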
Alternatives For Using High-Latency Links
There is a wide consensus that TCP/IP and the World Wide Web are two of the most important and widely distributed technologies in modern networking. They also are representative examples of what people use over networks. But many other networking technologies have even greater problems with high latency. For example, the standard mainframe and minicomputer communications protocols - SNA and DEC LAT - generally will not work at all over high-latency links.
Geostationary satellite proponents may argue that there is a simple solution to the problems caused by excessive latency: Modify the protocols. While these adjustments may be technically feasible, they are economically uncompelling. Network managers do not want to have to modify their protocols or installed base to deal with non-standard equipment. If there are no comparably-priced alternatives, high-latency broadband access will be better than no broadband access at all. But when economical alternatives exist - such as broadband LEO systems like Teledesic - the economic cost of ensuring compatibility with non-standard GEO networks will be difficult to justify. After all, GEOs - limited by the speed of light - can never be seamlessly compatible with terrestrial networks. Broadband geostationary satellites increasingly appear to be a late iteration of a mature technology.
It should be pointed out that the above problems caused by excessive latency do not affect all data transmissions, only the most reliable - "lossless" - ones. For real-time data, such as voice and video, it is not essential that all data be transmitted, so many of the above problems can be avoided. Unfortunately, real-time applications, such as voice telephony and videoconferencing, are precisely the applications most susceptible to unacceptable quality degradation as a result of high latency. For example, imagine trying to control a mouse with a 500 ms latency between any movement and the response on the screen. If so simple an activity as pointing is that frustrating, it is difficult to conceptualize advanced, real-time applications such as collaborative visualization and engineering over a GEO link.
In fact, there is a wide consensus in telecom today that with alternatives such as subsea fiber available, GEOs are no longer suitable for the best-known real-time application - voice. Customers have grown to expect low-latency voice service, with the ability to engage in lively, interactive discussions. Echo cancellation and other new technologies can never eliminate the delay associated with GEO satellites.
When customers evaluate GEO versus LEO broadband satellite links, they will need to decide whether they are willing to make do with bandwidth constraints, protocol hassles, and "choppy" real-time applications, or whether they want connections with the same essential characteristics as fiber. Instead of attempting to ensure compatibility with the entire installed base of network equipment with which one might want to communicate, customers will find seamless compatibility with standard, fiber-based terrestrial networks increasingly attractive.
Not that long ago, few telephone companies had heard of the World Wide Web. What telco wants to take the risk that the next killer application - or the one after that - will simply not work over its network? By deploying a network that is seamlessly compatible with fiber, Teledesic can help ensure that customers can use the next generation of applications - whatever they may be and wherever they are needed.