FW: Internet health report condition serious

Dan Kohn (dan@teledesic.com)
Thu, 19 Sep 1996 16:37:41 -0700

>[* This is just a horrible article. They get 70% of the details right,
>but they totally misunderstand the big picture, that things are a
>little unstable with the incredible growth of the Internet, but that
>the problems are being fixed. - dan *]
>Internet health report: condition serious
>Source: Network World
>Network World via Individual Inc. : If you're steering some of your
>organization's mission-critical applications onto the Internet's
>information highway, you'd better turn around now -- while you still
>can.
>Network World has just concluded a monthlong investigation into the
>health of the Internet, drawing on a survey of Internet service
>providers and of Network World's readers -- plus interviews with ISPs,
>users and leading experts. The conclusion: The 'Net is not reliable
>enough today to
>handle many strategic business applications. Further, the situation
>won't likely get better until some major standards issues get settled
>and resulting products filter out - probably a year from now.
>That's not to say the Internet is broken; it plugs along every day
>delivering E-mail, supporting large file transfers and barely keeping
>pace with the demands of World-Wide Web users.
>However, a series of elements threaten the existing 'Net
>infrastructure. Carriers are scampering to stay ahead of capacity
>requirements, while at the same time dealing with some high-profile
>failures. The Net's workhorse routers are buckling under the strain of
>unchecked network growth; they often become confused due to changes in
>the network, losing blocks of data wholesale and sacrificing
>performance. Traffic exchange points are often clogged.
>Underscoring the 'Net's fragility, some carriers are even offering
>enhanced Internet services that provide access to the Internet, yet
>move traffic over private conduits.
>All of these technology issues aren't lost on users. In a Network World
>survey of 200 readers, 65% of respondents say the reliability of the
>Internet has either gotten worse or stayed the same in the last six
>months; only 30% say it has improved.
>``A company that's spending its money to have reliable communications
>has to use a privately contracted network with guaranteed response
>times,'' says Seldon Ball, a computer technical adviser at the Wilson
>Synchrotron Laboratory at Cornell University in Ithaca, N.Y. ``The
>public Internet is just not reliable enough for that. It's not
>something you can bet your company on today.''
>Guy Cook, president of Colorado SuperNet, a superregional Internet
>service provider that hosts Web sites for corporate titans such as
>General Motors Corp., doesn't wholly disagree. ``The challenge now is
>to make this a more reliable network,'' he says. ``What we're dealing
>with is a relatively immature network infrastructure that needs to be
>further developed.''
>Cook doesn't expect a catastrophic outage of the Internet anytime soon,
>although he does see an increase in the frequency of brownouts or other
>events. Then there's Bob Metcalfe, who repeatedly has warned the
>networking community that the Internet is on the verge of a collapse of
>catastrophic proportions.
>Metcalfe, who gained fame for inventing Ethernet and now works for NW
>parent International Data Group, recently softened his stance slightly,
>agreeing with Cook that there will be an increase in the frequency and
>the impact of outages and service brownouts. Considering recent events,
>Metcalfe's original vision seems prophetic.
>A series of human and technical mishaps last month knocked out America
>Online, Inc. and its 6 million customers for nearly a day. Back in
>June, Netcom On-Line Communications Services, Inc.'s backbone took a
>major hit that stranded 400,000 users without service. Apex Global
>Information Services, Inc. (AGIS) this spring experienced router snafus
>and suffered outages due in part to a meltdown between its network
>center and a major Internet traffic exchange point.
>And even MCI Communications Corp. - which many say carries the bulk of
>Internet traffic - announced a moratorium on new dedicated access
>customers for its Internet service between mid-February and mid-March
>when the carrier pushed through a badly needed capacity upgrade.
>Those are only some of the publicly acknowledged events. There are
>scores of others every day, ``some which are minor and others which are
>major,'' according to Phil Lawlor, president of the Dearborn,
>Mich.-based AGIS.
>Are these the strains of a network ready to collapse? They could be,
>although some, including many ISPs, say they are only a chain of
>unrelated occurrences. Metcalfe doesn't think so.
>``The problems are systemic,'' Metcalfe says. ``The billing mechanisms,
>the settlement mechanisms and the management operations just are not
>there; hardware is not ultrareliable, and the software is susceptible
>to human error as we saw in the Netcom case. It's going to get worse
>before it gets better.''
>Rudderless ship
>What's happening with the Internet is more than the
>fits and starts of technology; the very organizational underpinnings of
>the Internet have been ripped out and replaced by a loose structure
>that relies on competition to deliver the appropriate services in a
>reliable and a cost-efficient manner. Lost in the transfer of the
>Internet from a publicly run facility to a privately owned and operated
>hodgepodge of networks is the control and guidance that the National
>Science Foundation (NSF) previously exerted.
>Today, network operators are responsible for their own infrastructure,
>but no one has overall responsibility for the 'Net. ``People report no
>problem with their component, but the system as a whole is losing
>traffic,'' says Mark Luker, program director for NSF Network.
>The Internet has undergone remarkable change since the NSF retired the
>NSFNet backbone in April 1995.
>What once was an architecture that revolved around noncommercial
>traffic feeding into NSFNet from downstream research, military and
>other nets, has evolved into a collection of about a dozen core
>backbone providers. These companies - known as network service
>providers (NSP) - share commercial and research traffic at exchange
>points (see story, page 108). Local and regional ISPs contract with
>backbone providers to carry their traffic for the long haul.
>``It has become a much larger, more stratified and more costly entity
>within which to operate,'' says Gordon Cook, editor and publisher of
>The Cook Report, an authoritative newsletter about ISP activity.
>It is precisely that backbone diversity that leads NSPs and others
>active in the Internet community to brush off Metcalfe's claims.
>``Multiple transient outages have occurred, and they cannot be nailed
>down to any one thing - much like the phone network,'' says Pat Craig,
>group manager of IP services for Sprint Corp., a major Internet
>backbone provider.
>``All of the large providers have had periods of genuine horror shows
>in the network, and we've all taken turns catching javelins,'' says
>Michael O'Dell, chief technical officer for UUNET Technologies, Inc.
>The Internet has been designed to withstand major outages, even a
>nuclear attack, O'Dell adds. ``It's hard to imagine something that will
>produce a worldwide failure of the Internet.''
>There are signs, though, that the Internet is heaving under enormous
>strain.
>A Network World survey of major NSPs revealed that for different
>providers, traffic loads have tripled, quadrupled and - in MCI's case -
>increased 3,000% over the same time last year.
>Many of the measurements captured by the Routing Arbiter - an
>NSF-funded project to collect network statistics - indicate a
>degradation in performance since the breakup of NSFNet, says Bill
>Norton, network engineer with Merit Systems, Inc., an Ann Arbor,
>Mich., organization that oversees the Routing Arbiter project. Norton
>also chairs the North
>American Network Operators Group, a coalition of ISPs that make
>decisions on Internet operational issues.
>``We're not at the level of performance we had with NSFNet, but then
>again it didn't have nearly the number of users supported today,''
>Norton says.
>The overwhelming popularity of the Web is also leading to congestion
>and sapping performance all across the Internet.
>``Every Web connection has a whole bunch more independent connections
>hidden under the covers,'' says Dan Benjamin, an Orlando, Fla.-based
>Internet consultant. ``Eventually, we're faced with having to rethink
>how applications are deployed on the Web.''
>Another source of congestion is the exchanges where NSPs hand off
>traffic destined for another provider's network. NSPs constantly refer
>to these sites as choke points where data backs up on access lines and
>where overloaded equipment melts down.
>A host of equipment, network design and policy issues contribute to
>congestion at these sites. Increasingly, NSPs are making an end run
>around the exchanges by setting up one-to-one deals, known as peering
>relationships, with other carriers.
>Flap trap
>An issue that may ultimately pose a more serious threat to
>the Internet's reliability than capacity problems is a phenomenon
>called route flapping.
>Route flapping occurs when an Internet-attached router intermittently
>ceases transmission across a wide-area link. This can be caused by
>configuration errors, status changes in net links, software bugs and
>other problems.
>Most commonly, a router looks for administrative packets shipped over a
>link; these packets advise about route changes and status across the
>Internet. If the administrative message is not received, the router
>eventually stops transmitting over the questionable link. Only after
>other routers broadcast messages that the link is viable will the
>router resume transmission. Thus the line flaps up and down, according
>to Jordan Becker, vice president of network services at backbone
>service provider ANS.
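The keepalive-and-timeout cycle Becker describes can be sketched in a few lines of Python. This is a minimal illustration, not any real router's logic; the timer values and arrival times are invented for the example.

```python
# Illustrative sketch of the flap cycle described above: a router
# declares a link dead after going too long without hearing an
# administrative keepalive packet, then restores it once the route is
# re-advertised. Timer values here are invented, not vendor defaults.

KEEPALIVE_INTERVAL = 30   # seconds between expected keepalive packets
HOLD_TIME = 90            # declare the link down after this long in silence

def link_state_over_time(keepalive_arrivals, duration):
    """Return (time, state) samples given the times keepalives arrived."""
    states = []
    last_heard = 0
    arrivals = sorted(keepalive_arrivals)
    for t in range(duration):
        while arrivals and arrivals[0] <= t:
            last_heard = arrivals.pop(0)
        up = (t - last_heard) < HOLD_TIME
        states.append((t, "up" if up else "down"))
    return states

# A link whose keepalives stop after t=60 and resume at t=300:
samples = link_state_over_time([0, 30, 60, 300, 330], 400)
transitions = [(t, s) for i, (t, s) in enumerate(samples)
               if i == 0 or s != samples[i - 1][1]]
print(transitions)   # the line flaps down at t=150 and back up at t=300
```

Each down/up transition is what neighboring routers must then propagate across the 'Net, which is why a single unstable link can generate a steady stream of routing updates.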
>This is becoming an especially troublesome issue for NSPs or ISPs that
>don't have the memory or the hardware in place to deal with it, says
>Craig Labovitz, a network engineer with Merit Systems. Instability
>caused by route flaps results in poor performance for the end user and
>makes sites momentarily unreachable, he says.
>Compounding the issue is the fact that route tables for the Internet
>have grown wildly complex, leading to a situation where routers can't
>keep up with the changes. If a router's forwarding cache becomes
>invalidated by a lack of updates, the router doesn't know how to
>forward packets. ``Essentially, the processor gets a firehose directed
>at it, which causes some very ugly failure scenarios,'' says UUNET's
>How serious is route flapping?
>Mark Kosters, the principal investigator for the Internet Network
>Information Center, labels flapping as ``the biggest issue the backbone
>ISP community is dealing with.'' Route flaps are occurring with greater
>frequency at the core of the Internet. Current router technology is
>``stretched to the breaking point,'' he says. ``Routers often spend
>more time with routing updates than they do with sending user data.''
>According to a Web posting at the Routing Arbiter, ``severe levels of
>routing instability can lead to poor network performance (such as
>packet loss, latency and interruptions of service).'' Cisco Systems,
>Inc., whose routers dominate on the Internet, has teams of experts
>roaming among ISPs to milk the most from the routers and ensure they
>don't seize.
>At one point last spring, the Routing Arbiter posted packet loss
>numbers of 30% to 50% for some NSPs. A 10% loss is noticeable in
>service performance, while a 50% loss almost renders a service
>unusable, Labovitz says. Even today, during daily peak periods, he
>says, it is not uncommon to see a few providers with packet loss rates
>of 30%. A good deal of that is tied to fluctuation in routing tables.
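Labovitz's thresholds are easy to motivate with a little arithmetic: even a modest per-packet loss rate compounds across a multi-packet transfer. The sketch below assumes independent per-packet loss and an invented 20-packet transfer size, purely for illustration.

```python
# Why 30%-50% packet loss makes a service nearly unusable: the odds
# that an n-packet transfer needs no retransmission at all shrink
# exponentially with n. Independent loss is a simplification, and the
# 20-packet transfer size is invented for illustration.

def clean_transfer_probability(loss_rate, n_packets):
    """Chance every packet of the transfer arrives on the first try."""
    return (1 - loss_rate) ** n_packets

for loss in (0.10, 0.30, 0.50):
    p = clean_transfer_probability(loss, 20)
    print(f"{loss:.0%} loss: {p:.2%} chance a 20-packet transfer "
          f"arrives without retransmission")
```

At 10% loss roughly one transfer in eight gets through cleanly; at 50% loss essentially none do, so nearly every exchange degenerates into rounds of retransmission.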
>Part of the reason that route flaps have become prevalent is the
>growth of the Internet. As you increase the number of routers across
>the 'Net, routing tables grow enormously, and some routers eventually
>lose track of all the possible routes required to calculate a least
>cost path.
>``Their opinion of routes is different,'' says Brent Bilger, director
>of product marketing for Cisco's Service Provider Market unit. ``When
>routers get out of sync with other routers, then you have problems in
>your internet.''
>Cisco, whose routers are used by almost every NSP, has deployed a route
>damping algorithm to lessen the effect of route changes. In essence,
>the routers are taught to ignore some of the routing updates. That
>leads to a trade-off between information suppression and route
>optimization, UUNET's O'Dell says. ``The more you aggregate routing
>knowledge, the greater the chance you will produce less optimal
>routes.''
>Officials at MCI and Sprint say route damping methods have all but
>eliminated the problem from their backbones. But Merit's Labovitz says,
>``The problems seem to be getting worse, despite the aggressive use of
>route damping.'' Carriers are beginning to mandate use of Classless
>Inter-Domain Routing (CIDR) address blocks, which are analogous to
>area codes and are meant to reduce the number of updates flowing
>across the 'Net.
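The damping idea described above can be sketched as a penalty counter: each flap adds a fixed penalty, the penalty decays exponentially over time, and the route is suppressed (its updates ignored) while the penalty sits above a threshold. The parameter values below are illustrative, not Cisco's actual defaults.

```python
# Sketch of route-flap damping as described above: flaps accumulate
# penalty, penalty decays with a half-life, and a route is suppressed
# above one threshold and reused below a lower one. All parameter
# values here are invented for illustration, not vendor defaults.

PENALTY_PER_FLAP = 1000
SUPPRESS_LIMIT = 2000     # start ignoring the route above this penalty
REUSE_LIMIT = 750         # start using it again below this penalty
HALF_LIFE = 900.0         # seconds for the penalty to decay by half

class DampedRoute:
    def __init__(self):
        self.penalty = 0.0
        self.suppressed = False
        self.last_update = 0.0

    def _decay(self, now):
        elapsed = now - self.last_update
        self.penalty *= 0.5 ** (elapsed / HALF_LIFE)
        self.last_update = now

    def flap(self, now):
        self._decay(now)
        self.penalty += PENALTY_PER_FLAP
        if self.penalty >= SUPPRESS_LIMIT:
            self.suppressed = True

    def usable(self, now):
        self._decay(now)
        if self.suppressed and self.penalty < REUSE_LIMIT:
            self.suppressed = False
        return not self.suppressed

r = DampedRoute()
for t in (0, 10, 20):        # three flaps in quick succession
    r.flap(t)
print(r.usable(30))          # False: the route is being ignored
print(r.usable(45 * 60))     # True: after ~45 min of stability it is reused
```

This is the trade-off O'Dell describes: while the route is suppressed, perfectly good updates about it are thrown away, so stability is bought at the price of less optimal routing.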
>Capacity constraints
>While ISPs must contend with the route flap
>events, they also must be on the lookout to throttle back congestion as
>it flares up. Most NSPs today closely guard the capacity percentages of
>their backbones, fending off questions about capacity limits by saying
>their backbones are not stressed.
>But consider the plight of MCI, which last spring would not allow new
>dedicated access customers to hop on its network for an eight-week
>period. MCI's backbone was in serious trouble back then, with packet
>loss rates of 20% to 30% on most of the West Coast.
>Rob Hagens, MCI's director of Internet engineering, says an upgrade of
>the backbone from 45M bit/sec to OC-3 was ``an easy sell [to top
>management] because Internet backbone services is one of the great new
>revenue areas of the future.''
>Other sources say MCI's Internet division may have supported the
>upgrade, but top management waffled on the investment until the company
>started turning away dedicated access customers.
>To understand MCI's capacity woes, consider this: The carrier's
>ATM-based Internet backbone now handles 250 terabytes a month - that's a
>whopping 3,000% increase over this time last year. (A terabyte is equal
>to 1 million megabytes.) Part of the problem, not only for MCI but for
>other ISPs, is predicting the loads they will be carrying, says Scott
>Bradner, a consultant in Harvard University's Office of Information
>Technology.
>In the early days of the Internet, Bradner says, it was possible for
>carriers to oversubscribe a line by 8 to 1 - even 20 to 1, in some
>cases - because users were typically exchanging only E-mail. That meant
>a carrier could sell you a T-1, for instance, and multiplex your data
>along with other users' onto a single T-1 feed linked to the core of
>the network.
>``That worked fine until the distortion brought about by the Web,''
>Bradner says. Now, it is much more difficult for carriers to estimate
>what percentage of a line users will tie up. That forces NSPs to add
>infrastructure at the core to handle capacity demands. In addition,
>many backbone providers carry traffic for resellers that oversubscribe
>their lines, wreaking havoc on NSP capacity planning.
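Bradner's oversubscription ratios translate directly into arithmetic. The sketch below uses the standard T-1 line rate; the per-customer utilization figures are invented to illustrate the shift he describes from E-mail-era to Web-era traffic.

```python
# Back-of-the-envelope oversubscription arithmetic for the model
# Bradner describes: many customer T-1s multiplexed onto one upstream
# T-1, which works only while average per-customer utilization stays
# low. The utilization figures below are invented for illustration.

T1_BPS = 1_544_000  # T-1 line rate in bits per second

def upstream_demand(n_customers, avg_utilization):
    """Average load offered to the upstream link, in bits per second."""
    return n_customers * T1_BPS * avg_utilization

# E-mail era: 8 customers averaging ~5% of their line fit in one T-1.
email_era = upstream_demand(8, 0.05)
print(email_era / T1_BPS)   # ~0.4 -- the upstream T-1 is about 40% loaded

# Web era: the same 8 customers averaging 30% saturate the upstream link.
web_era = upstream_demand(8, 0.30)
print(web_era / T1_BPS)     # ~2.4 -- 2.4 T-1s of demand on a single T-1
```

The same 8-to-1 ratio that was comfortable for bursty E-mail overloads the core link once sustained Web traffic pushes average utilization up, which is why carriers must keep adding capacity at the core.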
>Tending to ever-changing capacity requirements certainly reduces
>congestion on NSP backbones, but there are other points of congestion,
>too. Almost every backbone provider fingers the Internet's traffic
>exchange points as congestion culprits.
>The exchanges are simplistic interconnects that usually rely on an FDDI
>or, less often, an ATM switch to pass traffic among dozens of NSPs and
>ISPs. Service providers purchase one or more ports on the switch. They
>may also choose a less costly option, such as a 10M bit/sec connection
>to an attached FDDI ring, or they can lease a port on an FDDI hub that
>links the ring to the on-site switch.
>Connecting to a switch is just the first step; each NSP is then
>responsible for hammering out traffic exchange agreements with any
>number of NSPs or ISPs present at the site.
>In effect, the exchange operator has little responsibility - other than
>maintaining the working order of the switch and any attached rings.
>``A lot of problem reports are handled by the NSPs because so many of
>the events happen in the complex routing layer,'' says B.J. Chang,
>director of technology programs in the Advanced Networks Group at MFS
>Communications, Inc. MFS operates several exchanges known as
>Metropolitan Area Exchanges (MAE). MFS's Washington exchange - MAE-East
>- was one of four original NSF-funded exchanges.
>Eighteen months ago, when the NSF decommissioned NSFNet, it funded
>establishment of four network access points (NAP), which were the first
>Internet exchanges. Under NSF provisions, each of the four NAPs -
>located in Pennsauken, N.J., Washington, D.C., Chicago and San
>Francisco - has to be connected to at least two others to provide for
>alternate path routing in the event of an outage.
>The NSF recently curtailed funding of the four NAPs, citing their
>success as commercial entities.
>Chang says many of the Internet exchanges are getting a bad rap over
>the congestion issue. ``The place where congestion seems to occur is
>not in the exchange, but in the access links that lead to the
>exchanges,'' she says. For instance, the shared ring some exchanges
>operate may become saturated.
>``There is no method of congestion control in this setup,'' says AGIS
>President Phil Lawlor. ``Any one participant could flood the others'
>full access capabilities.''
>Second-tier relief
>An entire second tier of Internet exchanges is
>beginning to dot the Net landscape in places such as Phoenix, Los
>Angeles, Houston and Dallas. Digital Equipment Corp. just entered the
>business, launching the Digital Internet Exchange (DIX) in Palo Alto,
>Calif., where it will provide service for the Commercial Internet
>eXchange, BBN Planet and others.
>While most exchanges are operated by a carrier that supplies the
>circuits leading up to the exchange switch, Digital will use a variety
>of telecommunications carriers, according to DIX Manager Al Avery. ``If
>you become disenchanted with one service provider, you can switch to
>another without having to relocate your gear to another site,'' Avery
>says. The other advantage is ISPs and NSPs using the site can employ
>physically divergent paths into the DIX.
>MFS' Chang believes the second-tier exchanges will ease the pressure on
>existing sites. ``My theory is we'll see much more content distributed
>locally, and that will mean there will be less traffic trying to shove
>into the big exchanges.''
>NSPs, meanwhile, aren't sitting idle. Increasingly, they are reducing
>their reliance on the exchanges and setting up direct peering
>relationships with other providers. So, if two carriers have enough
>traffic to justify a DS-3 between each other, they will set up the
>connection instead of pushing the traffic to an exchange. Sprint, for
>instance, has set up five direct exchange links with MCI and four
>with UUNET, Cook said.
>MCI says it has 12 direct interconnects with ISPs. UUNET says it has
>four in place and two more on the drawing board.
>``If the NAPs are congested, the private interconnects are the
>antidote,'' Cook says. Part of that antidote may be to cure economic
>ills; connecting to an exchange is a costly undertaking. According to
>The Cook Report, the annual cost is about $100,000, and many providers
>are tied in to multiple exchanges.
>The private interconnects are ``the only kind I'm building from now
>on,'' UUNET's O'Dell says. ``The exchange points are unscalable. They
>will exist and provide a certain breadth of coverage, but they are
>doomed long term.''
>While NSPs and exchange operators grapple with these issues and more,
>some users remain unfazed. Indeed, 57% of respondents to the Network
>World reader survey say they forged ahead with Internet plans despite
>outages and brownouts. Only 19% say they postponed projects out of
>concern.
>Count Paul Zengilowski, president of Burlington, Vt.-based Data
>Clearinghouse Corp., among the apprehensive. He says law firms
>often ask his clearinghouse if it uses the Internet to transport data
>to clients.
>``They're asking us the question not because they want to know if we
>will, but because they want to make sure we are not using it,''
>Zengilowski says. The main objection from law clients is security, but
>for Data Clearinghouse, ``reliability of service is the real factor.''
>Phil DeMar, a network analyst at Fermi National Accelerator Labs in
>Batavia, Ill., brings up yet another issue. ``The general perception in
>our community is that there has been some degradation in the type of
>performance we've seen on the Internet over the past year or so,'' he
>says. ``I think a lot of it is due to congestion.''
>It's hard to disagree with either user's assessment. While NSPs maintain
>that their nets are highly reliable, their move to direct connections
>for traffic exchange seems a sure sign that the NAPs and other traffic
>exchanges are problematic.
>If you listen to the NSPs, they say route flaps are under control; yet
>other reports indicate damping hasn't snuffed out all the fires. But,
>perhaps the biggest issue is the service providers' ability to stay
>ahead of capacity requirements. The key to that will be the emergence
>of bandwidth reservation and quality-of-service functions.
>So yes, plans are underway to remedy many of these problems. But if
>you're betting corporate dollars and your career on what's out there,
>you'd better stay off the Net on-ramp.
>Freelance writer Joanne Cummings of Marlborough, Mass., and Network
>World Online Senior Writer Chris Nerney contributed to this report.
>Routing across the 'Net - pass the hot potato
>When the NSF Network
>backbone was decommissioned in April 1995, it was replaced by four
>network access points, basically exchanges at which commercial Internet
>service providers could pass traffic to one another. Ever since, ISPs
>have been rewriting the 'Net routing rules.
>If you're trying to contact a
>resource located on your service provider's network, the process is
>simple: Traffic is sent to the ISP's router, which does an address
>lookup, identifies it as an on-network destination and sends it on its
>way. But contacting a resource off your provider's network is another
>matter.
>Backbone ISPs - known as network service providers (NSP) - do not want
>to incur the cost of carrying traffic destined for another provider's
>network. So each will find the nearest point at which it can hand off
>data to the destination network or to an intermediate transit provider
>- a practice known as hot potato routing.
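Hot potato routing, as just described, simply means each NSP hands traffic off at whichever shared exchange is closest to where the traffic entered its network. A minimal sketch, using a hypothetical topology and invented mileages modeled on the article's Boston-to-Seattle example:

```python
# Sketch of hot-potato routing: an NSP hands traffic bound for another
# provider to whichever shared exchange point is nearest the traffic's
# entry point, minimizing how far it carries the data itself.
# The cities, exchanges and mileages here are hypothetical.

def hot_potato_exit(entry_city, peer_exchanges, distance):
    """Pick the peering point closest to where the traffic entered our net."""
    return min(peer_exchanges, key=lambda x: distance[(entry_city, x)])

# Hypothetical NSP A carrying Boston-originated traffic; it peers with
# NSP Z at two exchanges and dumps the packet at the nearer one.
distance = {
    ("Boston", "Washington"): 440,
    ("Boston", "San Jose"): 3100,
}
exit_point = hot_potato_exit("Boston", ["Washington", "San Jose"], distance)
print(exit_point)   # Washington -- NSP A carries the traffic as little as possible
```

On the return trip the other provider applies the same rule from its own entry point, which is how the long-haul cost ends up split between the two networks.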
>The graphic (at right) shows a hypothetical example of how traffic
>might flow between users in Boston and Seattle. Note that, instead of
>handing off data to an intermediate NSP, any two NSPs may have a
>special peering agreement, where they nail up a dedicated circuit via
>which they exchange large loads of data, such as the one between NSP A
>and Z in the example shown. On the return trip, NSP Z will likewise
>look for the nearest location to hand off traffic - in this case, the
>exchange point in San Jose, Calif. In this fashion, each NSP winds up
>paying its fair share of transport costs.
>What NSPs and ISPs don't want is a local or a regional provider that
>dumps traffic to a backbone carrier but has no presence elsewhere in
>the country to handle an equitable load of return traffic.
>In that instance, the local or regional provider must pay the backbone
>carrier for transit carrying charges.
>``When you purchase service, by default you're choosing a backbone
>provider,'' says John Curran, chief technology officer for BBN Planet.
>So, even though you may be dealing with a local or a regional entity,
>you should learn about its upstream backbone provider.
>There is an exception to this procedural routing system. Some service
>providers offer enhanced IP services or so-called private Internet
>services. Basically, these services give you a connection to the
>Internet, but your data rides over the service provider's backbone
>until it reaches the nearest point of exit to its destination. ``We can
>control the quality of our service on an end-to-end basis,'' says Pat
>Craig, Sprint Corp.'s group manager of IP services. Some customers, he
>says, want this type of service to engineer parallel lines, with one
>for higher priority traffic.
>[09-18-96 at 17:15 EDT, Copyright 1996, Network World]