Berkeley CSUA MOTD:Entry 52187

2008/12/7-10 [Uncategorized] UID:52187 Activity:low
12/6    An interesting proposal to deal with network congestion and
        "bandwidth hogs" -ausman
        http://www.spectrum.ieee.org/dec08/7027
        \- hello, you may be interested in VJ's talk at GOOG. if you are
           interested in congestion, the original papers on RED [random
           early detect/drop] are also pretty good, if you are not
           familiar with them.
           \_ When is this going to be?
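        \- For those unfamiliar with RED: the router starts dropping (or
           marking) packets at random before its queue is full, with a
           probability that rises as the average queue length rises, so
           TCP senders back off early and gradually instead of all at
           once. A minimal Python sketch; the parameter names follow the
           Floyd/Jacobson paper, the values are illustrative only, and
           the paper's count-based spacing of drops is omitted:

           import random

           class REDQueue:
               def __init__(self, min_th=5, max_th=15, max_p=0.1,
                            weight=0.002):
                   self.min_th = min_th  # below this avg: never drop
                   self.max_th = max_th  # above this avg: always drop
                   self.max_p = max_p    # ceiling on early-drop odds
                   self.weight = weight  # EWMA weight for the average
                   self.avg = 0.0        # smoothed queue length

               def on_arrival(self, queue_len):
                   """True if the arriving packet should be dropped."""
                   # Smooth the instantaneous queue length (EWMA).
                   self.avg += self.weight * (queue_len - self.avg)
                   if self.avg < self.min_th:
                       return False      # light load: accept
                   if self.avg >= self.max_th:
                       return True       # persistent overload: drop
                   # Between thresholds, the drop probability rises
                   # linearly from 0 toward max_p.
                   frac = ((self.avg - self.min_th) /
                           (self.max_th - self.min_th))
                   return random.random() < frac * self.max_p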
Cache (2924 bytes)
www.spectrum.ieee.org/dec08/7027
The Internet is founded on a very simple premise: shared communications links are more efficient than dedicated channels that lie idle much of the time. We share local area networks at work and neighborhood links from home. And then we share again--at any given time, a terabit backbone cable is shared among thousands of folks surfing the Web, downloading videos, and talking on Internet phones.

But there's a profound flaw in the protocol that governs how people share the Internet's capacity. The protocol allows you to seem to be polite, even as you elbow others aside, taking far more resources than they do. Network providers like Verizon and BT either throw capacity at the problem or improvise formulas that attempt to penalize so-called bandwidth hogs.

Let me speak up for this much-maligned beast right away: bandwidth hogs are not the problem. There is no need to prevent customers from downloading huge amounts of material, so long as they aren't starving others. Rather than patching over the problem, my colleagues and I at BT (formerly British Telecom) have worked out how to fix the root cause: the Internet's sharing protocol itself. It turns out that this solution will make the Internet not just simpler but much faster too.

You might be shocked to learn that the designers of the Internet intended that your share of Internet capacity would be determined by what your own software considered fair. They gave network operators no mediating role between the conflicting demands of the Internet's hosts--now over a billion personal computers, mobile devices, and servers.

The Internet's primary sharing algorithm is built into the Transmission Control Protocol, a routine on your own computer that most programs run--although they don't have to. TCP is one of the twin pillars of the Internet, the other being the Internet Protocol, which delivers packets of data to particular addresses.

Your TCP routine constantly increases your transmission rate until packets fail to get through some pipe up ahead--a tell-tale sign of congestion. The billions of other TCP routines around the Internet behave in just the same way, in a cycle of taking, then giving, that fills the pipes while sharing them equally. It's an amazing global outpouring of self-denial, like the "after you" protocol two people use when they approach a door at the same time--but paradoxically, the Internet version happens between complete strangers, even fierce commercial rivals, billions of times every second.

Services like YouTube, eBay, Skype, and iTunes are all judged by how much Internet capacity they can grab for you, as are the Internet phone and TV services provided by the carriers themselves. Some of these companies are opting out of TCP's sharing regime, but most still allow TCP to control how much they get--about 90 percent of the 200,000 terabytes that cross the Internet each second.
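
The taking-then-giving cycle described above is TCP's additive-increase,
multiplicative-decrease (AIMD) rule: the sender's congestion window grows
steadily while packets get through and is cut in half when a loss signals
congestion. A minimal Python sketch of that cycle; the constants and names
are illustrative, not any real TCP stack's behavior:

    class AimdSender:
        def __init__(self):
            self.cwnd = 1.0  # congestion window, in packets

        def on_ack(self):
            # Additive increase: roughly one extra packet per round
            # trip (the "taking" half of the cycle).
            self.cwnd += 1.0 / self.cwnd

        def on_loss(self):
            # Multiplicative decrease: halve when loss signals
            # congestion (the "giving" half of the cycle).
            self.cwnd = max(1.0, self.cwnd / 2.0)

Because every flow climbs at the same additive rate and halves on the same
congestion signal, competing flows tend toward equal per-flow shares. That
per-flow fairness is also the loophole the article hints at: software that
opens many parallel flows looks polite on each one while taking a far
larger total share.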