I also helped design Nachos, a popular project for teaching undergraduate operating systems, and I am working on a related project called Fishnet for teaching undergraduate networking with a network of ad hoc wireless devices. My current research project is called RIP (Re-architecting the Internet Protocols), a collaborative effort with my colleague David Wetherall and our students Andy Collins, Ankur Jain, Ratul Mahajan, Stavan Parikh, Maya Rodrig, Neil Spring, and Gang Zhao. While the Internet has been an astounding engineering triumph, it faces huge technical problems. As just one example, worldwide spending on cleaning up after viruses, worms, and spam -- that is, spending on coping with the consequences of connecting to the Internet -- is much larger than worldwide spending on Internet connectivity itself. The Internet itself is fragile, insecure, and poorly optimized. Our research aims to fix the myriad problems with the Internet by re-thinking its design from first principles, to help the Internet realize its potential for revolutionizing communication in our society.

RIP involves four inter-related projects:

Reverse engineering the Internet. Remarkably, there is very little quantitative data about the Internet's behavior. In large part, this is because the Internet is operated by a loose federation of tens of thousands of organizations, by turns competing and cooperating with each other to provide Internet service to end users. Almost all Internet service providers consider details of their internal operation to be confidential. To provide a robust understanding of how the Internet really behaves, we are undertaking a project to systematically measure every aspect of the Internet's behavior, from topology and provisioning, to intra- and inter-domain routing policies, to failures and mis-configurations, to workload.
A key insight is to leverage and integrate the various sources of information that leak out from service providers about their internal operation, in much the same way that astronomers infer stellar structure from the evidence that reaches our telescopes. See here for a manifesto outlining the opportunity and research challenges of Internet reverse engineering. We have written several papers sketching initial results in this area, including the first study to accurately measure backbone network topologies and the first study to infer the internal routing and peering policies of service providers. We have also built several tools to aid our effort, including Scriptroute, an extensible and secure tool for making Internet measurements; tulip, a new tool using advanced measurement techniques to diagnose end-to-end Internet performance problems; and an ongoing effort to produce more accurate sample topology and workload generators for network research.

The Internet is at times amazingly robust and at times incredibly fragile. Faced with multiple simultaneous hardware failures, the Internet will (more or less quickly) re-organize itself to re-establish connectivity. But the Internet is not equally robust to software errors and mis-configurations. Even without malicious attack, small errors have repeatedly cascaded to cause massive disruptions in Internet service. As a first step, we conducted a survey of major failures in the Internet over the past twenty years. We have also produced several specific protocols, including one to prevent receivers from cheating congestion control limits and another to completely eliminate the potential for denial-of-service attacks. We are also looking at provably robust protocols for wide-area routing and wireless media access.

Popular mythology holds that Internet bandwidth is getting exponentially cheaper each year and will soon be essentially free. Indeed, it almost is -- it costs a penny to send a 10MB file across the Internet.
So there's no need to carefully manage resources, right? The truth is more complex: computing is becoming cheaper at a much faster rate than network bandwidth, in large part because computing equipment is a much higher-volume business than wide-area networking gear, and therefore can leverage enormous economies of scale. For comparison, that same penny will buy you 100 giga-ops of computing! More importantly, the curves are diverging over time, and we believe this long-term trend will radically alter the Internet's architecture. Indeed, it already has -- the Internet illuminati rail against middleboxes as violating the Internet's end-to-end architecture, yet firewalls, load balancers, and NATs are just the tip of the iceberg. How should we architect the Internet for a world where computing is free, networks are cheap, and people are expensive? We would build a system that is self-managing, is optimized for end-user performance, and uses computing throughout to get more efficient use out of networking hardware. This vision underlies our efforts in network management, radical congestion control architectures, provably stable adaptive routing algorithms, wide-area link compression protocols, aggressive caching and pre-fetching algorithms, and more.

Of course, all this is academic if we can't figure out a way to change the Internet. We are a founding member of PlanetLab, a worldwide network of computers for developing and deploying new protocols and distributed services. The Scriptroute extensible network measurement facility was one of the first services to be deployed on PlanetLab. Part of our vision is to smooth the path from research idea to validation, from better tools for generating Internet-like topologies and workloads, to faster parallel simulation frameworks, to scalable emulation, to deployment on PlanetLab.
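The bandwidth-versus-computing cost comparison above can be made concrete with a back-of-the-envelope calculation. This is only a sketch using the two illustrative prices quoted in the text (a penny per 10MB transferred, a penny per 100 giga-ops), not measured data:

```python
# Back-of-the-envelope: how much computing can you buy for the cost
# of moving one byte across the wide area, at the prices quoted above?
# Both figures are the illustrative ones from the text, not measurements.

bytes_per_cent = 10 * 10**6    # one cent sends a 10 MB file
ops_per_cent = 100 * 10**9     # the same cent buys 100 giga-ops

# Operations of computing available per byte transferred, at equal cost:
ops_per_byte = ops_per_cent / bytes_per_cent
print(f"~{ops_per_byte:.0f} ops of computing per byte of bandwidth")
```

At roughly ten thousand operations per byte, spending even a few thousand instructions to compress away, cache, or avoid re-sending a single byte is a net win, which is the economic intuition behind the link compression and aggressive caching efforts mentioned above.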
At the same time, we have also focused on making protocols themselves easier to extend, starting with my colleague David Wetherall's Active Networks work, through the Active Names system for leveraging naming as the point of indirection for introducing new protocols, to our more recent work building a TCP stack that can be safely extended from either endpoint.

We draw inspiration for the RIP project from another architecture project, Taliesin West. This building was designed and built by Frank Lloyd Wright and his students for their own use, out of materials found in the local area, to exist in harmony with its desert setting. Many of the ideas in RIP are likewise from our students, using the building blocks we have at our disposal, and designed to draw strength from, rather than oppose, the trends in the Internet's underlying technologies and use.

One of the great things about doing computer science research for a living is that you get to work with really smart people, and you get to learn a lot at the same time. Both at Washington (where I was a graduate student) and Berkeley (where I taught in the early 90's), and now back again at Washington (where I returned for the weather), I've benefited from a stellar group of colleagues and students, full of energy, insight, and fresh ideas. Almost all the results we've had together are really due to them. My students in particular have been inspiring to work with (in addition to the RIP students listed above): Mike Dahlin (UT-Austin), Doug Ghormley (Sandia), Margaret Martonosi (Princeton), Jeanna Neefe Matthews (Clarkson), Drew Roselli (Microsoft), Stefan Savage (UCSD), Amin Vahdat (UCSD via Duke), and Randy Wang (Princeton). A test of a good advisor is how well their students do after they fly the coop: three of my students have recently received tenure, three have won the prestigious Sloan Research Fellowship, two have won department teaching awards, and collectively they've co-authored four award papers.
I've also had great fun working with other people's students, including my brother Eric Anderson (now at Google), Remzi and Andrea Arpaci-Dusseau (Wisconsin), Robert Grimm (NYU), Arvind Krishnamurthy (Yale), Dennis Lee (Amazon), Rich Martin (Rutgers), Steve Lucco (MSR), Nick McKeown (Stanford), and Robert Wahbe (Microsoft), to name a few. Not to mention the great colleagues with whom I've worked, most recently David Culler, ...