Berkeley CSUA MOTD:Entry 28242
2003/4/27-29 [Computer/Networking] UID:28242 Activity:high
4/27    I want to hook two ethernets together.  The distance would be greater
        than the 100m max.  What's the best way to connect them?  Is there
        such a thing as a UTP-ethernet to coax bridge?
        \_ you need Cisco's LRE (Long Range Ethernet) technology.  It can
           go up to a couple of thousand feet using typical twisted pair
           like a phone line. -cisco guy
           \_ As can optical ethernet (compare prices)
              http://www.iec.org/online/tutorials/opt_ethernet/topic02.html
    http://www.iec.org/online/tutorials/opt_ethernet/images/figure05.jpg _/
        \_ use STP to extend range, or put a switch in the middle (if
           feasible).  I have a couple of standalone RJ-45 to BNC converters
           (they look like 4-port hubs, which is another possibility...
           one with a coax port on it).  You could also find some old 386's
           with 10-base-T and 10-base-2 cards and bridge them.  You'll be
           slowing your network down with anything using coax.
        \_ Duct tape is the best way.
        \_ Need cable?  Why not just run an ipsec tunnel over an 802.11a/g
           link with directional range extenders?  Unless you absolutely must
           have full 100mbit fdx, performance won't suffer too much in most
           environments.  You can get cheap enough wifi hardware.
           Alternatively, if you can find one, go back to the basics and
           use repeaters (hub = multiport repeater).  -John
           \_ some people work in secure environments.  wireless != secure.
               \_ that's why he says to use an ipsec tunnel between the two
                  sides...  about as secure as it can get.
                 \_ no, it isn't.  ipsec over a wire might be.  i said a
                    "secure" environment.  anything that can be picked up
                    in the air is *not* secure no matter what protocol(s)
                    you're using.
                    \_ wire is more secure only if you can truly secure
                       physical access
                        \_ Wire is more secure because physical access
                           to a wire is more difficult than air.
                           \_ which says nothing to address the point
                              \_ I need the codes. I have to get inside
                                 Zion, and you have to tell me how.  You are
                                 going to tell me or you are going to die.
Cache (6903 bytes)
www.iec.org/online/tutorials/opt_ethernet/topic02.html
History

The First Optical Ethernet Repeaters

The first Ethernet standard included a provision for a single 2-km optical repeater in an Ethernet LAN, expected to be used to link different buildings in a large LAN. As parts of a shared LAN, these links were not only half-duplex--they also propagated collisions (the signals used to limit access to a single sender at a time). Their spans were limited by the maximum delay that could be allowed on an Ethernet LAN while still detecting collisions.

Campus Optical Ethernet

The advent of the Ethernet bridge, now commonly called an Ethernet switch, changed the game. The purpose of an Ethernet bridge is to connect two different Ethernet LANs (the name "switch" evolved to denote interconnecting more than two). This occurs at the MAC layer, Layer 2 of the 7-layer OSI protocol model, and two features are important. First, not all traffic on either end is transported--only traffic destined for the other LAN. Second, collisions (and collision-detection signals) are not transported. Together, these features not only improve network performance by isolating LAN segments but also greatly increase the maximum size of an Ethernet LAN. The Ethernet bridge enabled large LANs to be deployed because a network of campus bridges could interconnect all of the building LANs. Instead of forming a simple star network, these were implemented as meshes, with multiple connections to each LAN. People quickly realized that if both ends of an optical link terminated on a bridge port, then the normal limits on the size of an Ethernet LAN segment no longer applied. The optical link could be operated as full-duplex, thereby doubling the bandwidth of the link.
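The filtering behavior described above--a bridge learns where each source address lives and forwards a frame to the other segment only when the destination is unknown or known to be remote--can be sketched as a toy MAC-learning table. This is an illustrative model only, not any vendor's implementation; the two-port design, port names, and frame tuple are assumptions.

```python
# Toy model of a two-port Ethernet learning bridge: it learns which port
# each source MAC address was seen on, and forwards a frame to the other
# port only when the destination is unknown or known to be remote.

class LearningBridge:
    def __init__(self, ports=("A", "B")):
        self.ports = ports
        self.mac_table = {}  # MAC address -> port it was last seen on

    def handle(self, src_mac, dst_mac, in_port):
        """Return the port to forward the frame to, or None to filter it."""
        self.mac_table[src_mac] = in_port  # learn the sender's location
        other = self.ports[1] if in_port == self.ports[0] else self.ports[0]
        if self.mac_table.get(dst_mac) == in_port:
            return None    # destination is local: filter, don't forward
        return other       # unknown or remote destination: flood/forward

bridge = LearningBridge()
# Host 00:11 (port A) sends to 00:22, whose location is unknown: flood to B.
print(bridge.handle("00:11", "00:22", "A"))  # -> B
# 00:22 replies from port B; the bridge already learned 00:11 is on A.
print(bridge.handle("00:22", "00:11", "B"))  # -> A
# Two hosts on the same port talking to each other: traffic is filtered.
bridge.handle("00:33", "00:44", "A")
print(bridge.handle("00:44", "00:33", "A"))  # -> None
```

Note that local-to-local traffic never crosses the link, which is exactly the isolation property the article credits with lifting the size limits on a shared Ethernet LAN.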
And with only one transmitter on a LAN segment, there could be no collisions. In the early days, this still meant only a few kilometers, because light-emitting diodes and multimode fibers were used.

Optical Fast Ethernet

In 1995, a new standard emerged for 100-megabits-per-second (Mbps) Ethernet transmission (Fast Ethernet) over Category-5 (Cat-5) unshielded twisted pair (UTP) copper cable. Actually, several standards were proposed and implemented, but only one gained significant acceptance--100BASE-TX, using two Cat-5 UTPs at distances up to 100 meters. Following close on the heels of the copper Fast Ethernet standards development was the optical side. The first standards to emerge were adapted from fiber distributed data interface (FDDI) technology. The transceiver design and encoding technique were the same, simplifying the work of the standards committee and ensuring that the standardized technology would actually work. While there were some differences, considerable effort was expended to make sure that FDDI transceivers could be readily adapted to optical Fast Ethernet. As on the copper side, several standards were ratified on the optical side. The first was for medium-range multimode fiber transmission at 1310 nm (100BASE-FX), based on the FDDI standards. This provided a normal range of about 2 km--adequate for most campus environments. The second optical Fast Ethernet standard was 100BASE-SX, ratified in June of 2000 (a full six years later). This standard enabled backward compatibility with 10BASE-FL by using the same 850 nm wavelength and an auto-negotiation protocol. Note that there was no single-mode fiber standard for optical Fast Ethernet transmission (and there is still none at the time of this writing, nor is one expected).
This lack of a formal standard has not stopped equipment manufacturers from implementing long-haul (10 km - 100 km) Fast Ethernet links, and in practice they are likely to be interoperable, at least when operating at the same wavelength. They are available both at 1310 nm (the wavelength used in 100BASE-FX, FDDI, and SONET equipment) and 1550 nm (the wavelength used in wavelength-division multiplexing (WDM) systems). This de facto compatibility has resulted from the evolution of Ethernet devices, which originally separated the digital logic from the analog logic. This distinction was formalized in the Fast Ethernet standard as the media independent interface (MII). Today, the chip sets that handle Ethernet are generally independent of the chips that handle the media, be they copper, fiber, or potentially something else. Moreover, the fiber-optic drivers themselves don't know (or care) whether the fiber is multimode or single-mode, what the type of connector is, or what wavelength is being used. The bottom line is that the approach used by Ethernet component manufacturers has led to a great deal of flexibility and interoperability, often transcending the standards that were created for that very purpose.

Gigabit Ethernet

As before, the standards committee took full advantage of existing technology and borrowed the transceivers and encoding formats of Fibre Channel. Specifically, the FC-0 fiber driver/receiver was copied, along with the FC-1 serializer/deserializer. The 8B/10B encoding format was used, which specifies the framing and clock-recovery mechanism. Much of the early Gigabit Ethernet standards work went into specifying half-duplex operation in a collision-detection environment equivalent to the shared LANs of ordinary and Fast Ethernet. However, that work was effectively wasted--all commercial Gigabit Ethernet implementations today are point-to-point, full-duplex links. There are devices available that operate at 1550 nm and devices that operate at much greater distances than 5 km.
In fact, 150 km spans are possible without repeaters or amplifiers. Again, this was enabled by the separation of Ethernet control logic from media control logic, which has now been formalized in a new way. It is not the formal definition of an MII (called the gigabit MII, or GMII, in the Gigabit Ethernet standards), but rather a new standard for the GigaBit Interface Converter (GBIC).

GBIC Modules

Originally specified for Fibre Channel, the GBIC standards have evolved to support Gigabit Ethernet, and these modules have become the de facto standard for Gigabit Ethernet interfaces. They provide hot-swappable modules that can be installed to support LAN, campus area network (CAN), MAN, and even WAN transport interchangeably, as needed. And they are available from a large number of suppliers, keeping prices competitive and capabilities expanding. The GBIC revolution is one of the best examples of industry consortia creating a new market by consensus, yielding a net increase in the revenues of all the participants. The modules themselves, by virtue of being small and plugging into a standardized slot, challenged the transceiver manufacturers, who responded with technological innovations and features beyond all reasonable expectations. The chip manufacturers contributed readily available chip sets that support these GBIC modules, simplifying and speeding up the task of the hardware engineers.