www.iec.org/online/tutorials/opt_ethernet/topic02.html
History

The First Optical Ethernet Repeaters

The first Ethernet standard included a provision for a single 2-km optical repeater link in an Ethernet LAN, expected to be used to link different buildings in a large LAN. As parts of a shared LAN, these links were not only half-duplex; they also propagated collisions (the signals used to limit access to a single sender at a time). Their spans were limited by the maximum round-trip delay an Ethernet LAN could tolerate while still detecting collisions.

Campus Optical Ethernet

The advent of the Ethernet bridge, now commonly called an Ethernet switch, changed the game. The purpose of an Ethernet bridge is to connect two different Ethernet LANs (the name "switch" evolved to denote interconnecting more than two). This occurs at the MAC layer, Layer 2 of the seven-layer OSI protocol model, and two features are important. First, not all traffic on either end is transported; only traffic destined for the other LAN is. Second, collisions (and collision-detection signals) are not transported. Together, these features not only improve network performance by isolating LAN segments but also greatly increase the maximum size of an Ethernet LAN.

The Ethernet bridge enabled large LANs to be deployed, because a network of campus bridges could interconnect all of the building LANs. Instead of forming a simple star network, these were implemented as meshes, with multiple connections to each LAN. People quickly realized that if both ends of an optical link terminated on a bridge port, the normal limits on the size of an Ethernet LAN segment no longer applied. The optical link could be operated as full-duplex, thereby doubling the bandwidth of the link.
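The two bridge features described above (learning where stations are, and filtering traffic that does not need to cross the link) can be sketched as a toy learning bridge. This is an illustrative model only; the class, port names, and MAC addresses are invented for the example and are not taken from any real bridge implementation.

```python
# Toy model of an Ethernet learning bridge (illustrative only).
# It learns source MAC -> port mappings and forwards a frame to the
# other port only when the destination is (or may be) on that side.
# Collision signals are physical-layer events and never cross a bridge,
# so they do not appear in this model at all.

class LearningBridge:
    def __init__(self, ports=("A", "B")):
        self.ports = ports
        self.mac_table = {}  # MAC address -> port it was last seen on

    def handle_frame(self, in_port, src_mac, dst_mac):
        """Return the port to forward on, or None to filter the frame."""
        self.mac_table[src_mac] = in_port       # learn the sender's location
        out_port = self.mac_table.get(dst_mac)  # look up the destination
        if out_port == in_port:
            return None                         # local traffic: filtered
        if out_port is not None:
            return out_port                     # known remote destination
        # Unknown destination: flood to the other port(s).
        return next(p for p in self.ports if p != in_port)

bridge = LearningBridge()
bridge.handle_frame("A", src_mac="00:aa", dst_mac="00:bb")   # dst unknown: flooded
bridge.handle_frame("B", src_mac="00:bb", dst_mac="00:aa")   # dst known: forwarded to A
print(bridge.handle_frame("A", "00:aa", "00:bb"))  # prints B (destination learned)
print(bridge.handle_frame("A", "00:cc", "00:aa"))  # prints None (both stations on port A)
```

The `None` case is the key to the performance gain the text describes: traffic between two stations on the same side of the bridge never consumes bandwidth on the optical link.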
And with only one transmitter on a LAN segment, there could be no collisions. In the early days, this still meant spans of only a few kilometers, because light-emitting diodes and multimode fibers were used.

Optical Fast Ethernet

In 1995, a new standard emerged for 100-megabits-per-second (Mbps) Ethernet transmission (Fast Ethernet) over Category-5 (Cat-5) unshielded twisted-pair (UTP) copper cable. Actually, several standards were proposed and implemented, but only one gained significant acceptance: 100BASE-TX, using two Cat-5 UTPs at distances up to 100 meters. Following close on the heels of the copper Fast Ethernet standards development came the optical side. The first standards to emerge were adapted from fiber distributed data interface (FDDI) technology. The transceiver design and encoding technique were the same, simplifying the work of the standards committee and ensuring that the standardized technology would actually work. While there were some differences, considerable effort was expended to make sure that FDDI transceivers could be readily adapted to optical Fast Ethernet. As on the copper side, several standards were ratified on the optical side. The first standard was for medium-range multimode fiber transmission at 1310 nm (100BASE-FX), based upon the FDDI standards. This provided a normal range of about 2 km, adequate for most campus environments. The second optical Fast Ethernet standard was 100BASE-SX, ratified in June 2000 (a full five years later). This standard enabled backward compatibility with 10BASE-FL by using the same 850-nm wavelength and an auto-negotiation protocol. Note that there was no single-mode fiber standard for optical Fast Ethernet transmission (and at the time of this writing there is still none, nor is one expected).
This lack of a formal standard has not stopped equipment manufacturers from implementing long-haul (10-km to 100-km) Fast Ethernet links, and in practice they are likely to be interoperable, at least when operating at the same wavelength. They are available both at 1310 nm (the wavelength used in 100BASE-FX, FDDI, and SONET equipment) and at 1550 nm (the wavelength used in wavelength-division multiplexing [WDM] systems). This de facto compatibility has resulted from the evolution of Ethernet devices, which from early on separated the digital logic from the analog logic. This distinction was formalized in the Fast Ethernet standard as the media-independent interface (MII). Today, the chip sets that handle Ethernet are generally independent of the chips that handle the media, be they copper, fiber, or potentially something else. Moreover, the fiber-optic drivers themselves don't know (or care) whether the fiber is multimode or single-mode, what type of connector is used, or what wavelength is in use. The bottom line is that the approach taken by Ethernet component manufacturers has led to a great deal of flexibility and interoperability, often transcending the standards that were created for that very purpose.

Optical Gigabit Ethernet

As before, the standards committee took full advantage of existing technology and borrowed the transceivers and encoding formats of Fibre Channel. Specifically, the FC-0 fiber driver/receiver was copied, along with the FC-1 serializer/deserializer. The 8B/10B encoding format was adopted; it specifies the framing and clock-recovery mechanism. Much of the early Gigabit Ethernet standards work went into specifying half-duplex operation in a collision-detection environment equivalent to the shared LANs of ordinary and Fast Ethernet. However, that work was effectively wasted: all commercial Gigabit Ethernet implementations today are point-to-point, full-duplex links. There are devices available that operate at 1550 nm, and devices that operate at distances much greater than 5 km.
In fact, 150-km spans are possible without repeaters or amplifiers. Again, this was enabled by the separation of Ethernet control logic from media control logic, which has now been formalized in a new way. It is not the formal definition of an MII (called the gigabit MII, or GMII, in the Gigabit Ethernet standards), but rather a new standard: the Gigabit Interface Converter (GBIC).

GBIC Modules

Originally specified for Fibre Channel, the GBIC standards have evolved to support Gigabit Ethernet, and these modules have become the de facto standard for Gigabit Ethernet interfaces. They provide hot-swappable modules that can be installed to support LAN, campus-area network (CAN), metropolitan-area network (MAN), and even wide-area network (WAN) transport interchangeably, as needed. And they are available from a large number of suppliers, keeping prices competitive and capabilities expanding. The GBIC revolution is one of the best examples of industry consortia creating a new market by consensus, yielding a net increase in the revenues of all the participants. The modules themselves, by virtue of being small and plugging into a standardized slot, challenged the transceiver manufacturers, who responded with technological innovations and features beyond all reasonable expectations. The chip manufacturers contributed readily available chip sets that support these GBIC modules, simplifying and speeding the work of the hardware engineers.
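As an aside, the two properties that make the 8B/10B code (borrowed from Fibre Channel, as described earlier) suitable for optical transmission can be checked directly: every valid 10-bit code group is nearly DC-balanced, and runs of identical bits are short, guaranteeing the transitions the receiver needs for clock recovery. The sketch below uses the well-known K28.5 comma code groups; the bit values are from the published 8B/10B tables, but the helper functions are illustrative, not part of any standard API.

```python
# Illustrative check of two 8B/10B properties using the K28.5 comma
# code groups (the special symbols used for alignment): bounded
# disparity (DC balance) and a maximum run length of 5 identical bits.

K28_5_RD_NEG = "0011111010"  # K28.5 sent when running disparity is negative
K28_5_RD_POS = "1100000101"  # K28.5 sent when running disparity is positive

def disparity(bits):
    """Ones minus zeros in a code group; valid groups yield 0 or +/-2."""
    return bits.count("1") - bits.count("0")

def max_run(bits):
    """Longest run of identical bits; 8B/10B bounds this at 5."""
    best = run = 1
    for prev, cur in zip(bits, bits[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

for group in (K28_5_RD_NEG, K28_5_RD_POS):
    print(group, disparity(group), max_run(group))
# prints:
# 0011111010 2 5
# 1100000101 -2 5
```

The two complementary forms let the transmitter alternate so the running disparity never drifts, which is what keeps long optical links free of baseline wander.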