Although Ethernet has been used for a number of years in Local Area Networks (LAN) for Information Technology (IT), it is now being used within railway telecoms applications such as customer information systems and Voice Over Internet Protocol (VoIP) and, more recently, within signalling control systems. Writes Paul Darlington
One recent example is the newly commissioned modular signalling scheme between Crewe and Shrewsbury, where Ethernet is used to connect together the interlocking, trackside equipment, level crossing controllers and the control system. It is also now the dominant technology for Layer 2 of telecoms Internet Protocol (IP) networks.
Origin of Ethernet
The origins of Ethernet began in the 1970s with a requirement to link together computers on desks with devices such as printers.
The purpose of a LAN is to connect many more than just two systems. Connecting several thousand computers to a LAN can, in theory, be done using a star, a ring or a bus topology.
In a star, every computer is connected to a central point. A bus consists of a single, long cable that computers connect to along its run. With a ring, a cable runs from the first computer to the second, from there to the third and so on until all participating systems are connected, and then the last is connected to the first, completing the ring.
Ethernet was invented at Xerox’s Palo Alto Research Centre (PARC) in the mid-1970s. Xerox was building the world’s first laser printer and wanted all of the PARC’s computers to connect with the printer. Bob Metcalfe and colleagues were asked to build a networking system to do the job.
Bob based his network system on ALOHAnet, a radio network set up in the early 1970s to link several of the Hawaiian Islands. With this system, all the remote transmitters used the same frequency and nodes transmitted whenever they liked. Inevitably, two of them would sometimes transmit at the same time, interfering with each other so that both transmissions were lost. To overcome this problem, the central location acknowledged a message if it was received correctly. If no acknowledgement arrived, the transmitter sent the same packet again after a short random delay. The retransmissions made sure that the data got across eventually. It is ironic that Ethernet was based on a wireless technology as, 40 years later, wireless Ethernet systems are now widely used.
The Xerox team improved on ALOHAnet in several ways. First of all, Ethernet nodes checked whether the ether was idle (Carrier Sense) and waited if they sensed a signal. Second, once transmitting over the shared medium (Multiple Access), Ethernet nodes checked for interference by comparing the signal on the wire to the signal they were trying to send. If the two didn't match, there must have been a collision (Collision Detect). In that case, the transmission was broken off. Both sides now knew that their transmission had failed, so they started retransmission attempts using an exponential back-off procedure. The protocol was therefore known as CSMA/CD.
Ethernet can be compared to an audio telephone conference without a chairperson. If two people on the conference start talking at the same time they will normally pause, before one of them starts talking (transmitting) while the other listens (receives). Once the first speaker stops talking, the second speaker starts to talk.
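The back-off procedure described above can be sketched as a simplified model in Python. This is illustrative only: the slot time and attempt limits follow the published 10Mbit/s Ethernet parameters, but the channel-access functions passed in are hypothetical placeholders, not real hardware interfaces.

```python
import random

SLOT_TIME_US = 51.2   # slot time for 10Mbit/s Ethernet, in microseconds
MAX_BACKOFF_EXP = 10  # the back-off window stops growing after 10 collisions
MAX_ATTEMPTS = 16     # the frame is discarded after 16 failed attempts

def backoff_delay(collision_count):
    """Truncated binary exponential back-off: after the nth collision,
    wait a random number of slot times chosen uniformly from
    0 .. 2^min(n, 10) - 1."""
    exponent = min(collision_count, MAX_BACKOFF_EXP)
    slots = random.randint(0, 2 ** exponent - 1)
    return slots * SLOT_TIME_US

def send_frame(channel_is_idle, transmit, collision_detected, wait):
    """Simplified CSMA/CD send loop built from placeholder callbacks."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        while not channel_is_idle():      # Carrier Sense
            pass
        transmit()                        # Multiple Access (shared medium)
        if not collision_detected():      # Collision Detect
            return True                   # frame sent successfully
        wait(backoff_delay(attempt))      # back off, then retry
    return False                          # give up after 16 attempts
```

Note how the random element does the same job as ALOHAnet's random retransmission delay: it makes it unlikely that two colliding stations retry at the same instant again.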
The experimental Ethernet ran at 2.94Mbit/s. In 1973, radio or wireless could not provide the speed required so Ethernet used a thick coaxial cable which was referred to as “the ether”. The name did not come from the anaesthetic ether, but from the luminiferous ether that was at one point thought to be the medium through which electromagnetic waves propagate. On 22 May 1973, Bob circulated a memo titled ‘Alto Ethernet’ which contained a rough schematic of how it would work.
So what of the competition?
Token Bus was introduced by General Motors for its Manufacturing Automation Protocol (MAP) standardisation scheme. A token was passed around a 'virtual ring' on a coaxial cable and only network nodes that possessed a token were able to transmit. It was standardised by IEEE 802.4, and was mainly used for industrial applications. However, due to difficulties handling device failures and adding new stations to a network, token bus gained a reputation for being unreliable and difficult to upgrade.
Token Ring was introduced by IBM and was standardised as IEEE 802.5. A three-byte frame called a token travelled around a ring of cable connecting the computer nodes together. Empty information frames were also continuously circulated on the ring – when a device had a message to send it seized the token. The device would then be able to send the frame.
In the 1980s there was a battle between Ethernet and Token Ring as to which was the best LAN architecture and, at the time, a classic interview question was to describe the difference between the two. There were claims that Token Ring was superior to Ethernet. However, with the development of switched and faster variants of Ethernet, Token Ring architectures lagged behind, and the higher sales of Ethernet allowed economies of scale which drove down prices further. Eventually Ethernet won the battle as 100Mbit/s and, later, gigabit switched Ethernet came to dominate the market.
So Ethernet won the battle for standardisation, by being cheaper, ultimately faster and, most importantly, by being an open standard. It developed over the decades and assimilated higher bitrate protocols until it has become ubiquitous, not just for LANs but nowadays within Layer 2 telecoms networks which can be used for both railway telecoms and signalling applications.
Ethernet becomes ‘The Standard’
Bob Metcalfe left Xerox in the late 70s and joined Digital. He was asked to develop another LAN system, but he considered that he had already developed the best there was with Ethernet. He suggested that Xerox and Digital work together on a standard, and subsequently a consortium of Digital, Intel and Xerox was formed, known as the DIX consortium. It created an open, multi-vendor 10Mbit/s Ethernet specification, published as DIX Ethernet 2.0 in 1982.
The Institute of Electrical and Electronics Engineers (IEEE) then became involved in the standard and eventually produced 802.3, which is now considered the official Ethernet standard. There were some minor differences in terminology and format, but essentially it is the same standard. The IEEE originally avoided the word 'Ethernet' so that it would not be accused of endorsing any particular vendor. However, Xerox released all ownership of the name in due course so, while it appears to be a product name, Ethernet is now both an open technology standard and a name.
The first Ethernet was known as 10Base5 and used thick coaxial cable. The 9.5mm thick coaxial cable wasn't the easiest type of cabling to work with, so a thinner alternative (10Base2, known as 'Thinnet') was introduced in 1986. This was much easier to install and use. The cables were half the size of 'Thick Ethernet' and looked similar to a TV antenna cable. Instead of cumbersome connectors, the thinner cables ended in BNC connectors and devices were attached through T-connectors.
In 1991, a new specification was developed to allow Ethernet to run over unshielded twisted pair cabling (UTP) and known as 10BaseT. This is still universally used today. UTP cables for Ethernet come as four pairs of thin twisted cables. The cables can be solid copper or made of thin strands. The former has better electrical properties; the latter is easier to work with. UTP cables are fitted with the now-common RJ45 plastic snap-in connectors.
A fibre version was also introduced, known as 10BaseF (with 10 denoting the speed in Mbit/s).
Every UTP cable is also its own Ethernet segment, so in order to build a LAN with more than two computers it was necessary to use a multiport repeater, also known as a hub. The hub simply repeated an incoming signal on all ports and also sent a jam signal to all ports if there was a collision. The end result was a fast and flexible system, so fast it's still in use today.
Bridges and Switches
The next step was simply to bridge between all ports. These multiport bridges were called switching hubs or Ethernet switches. With a switch, if the computer on port 1 is sending to the computer on port 3, and the computer on port 2 is sending to port 4, there are no collisions: each packet is only sent to the port that leads to its destination address. Switches learn which address is reachable over which port simply by observing the source addresses in frames flowing through the switch.
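That learning behaviour can be sketched as a small Python model. This is a deliberately simplified illustration: real switches also age out stale table entries and handle broadcasts and VLANs, none of which is modelled here.

```python
class LearningSwitch:
    """Minimal model of a learning Ethernet switch."""

    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}  # MAC address -> port number

    def receive(self, in_port, src_mac, dst_mac):
        # Learn: the source address must be reachable via the port
        # the frame arrived on.
        self.mac_table[src_mac] = in_port
        # Forward: if the destination is known, send only to that port;
        # otherwise flood to every port except the one the frame came in on.
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        return [p for p in range(self.num_ports) if p != in_port]

sw = LearningSwitch(4)
# First frame from A: destination unknown, so it is flooded to ports 1-3.
print(sw.receive(0, "AA:AA", "BB:BB"))  # [1, 2, 3]
# B replies: A's port was learned, so the frame goes only to port 0.
print(sw.receive(1, "BB:BB", "AA:AA"))  # [0]
```

After the first exchange the switch has learned both addresses, and all further traffic between the two stations uses only the two ports involved, which is why switched Ethernet eliminates collisions.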
In 1998 the next iteration of Ethernet, Gigabit Ethernet, was introduced, with the 1000BASE-T variant for twisted pair following shortly after.
The new technology was introduced with a switched architecture only, and CSMA/CD was unnecessary as the two sides can both transmit at the same time. This is called full duplex operation, as opposed to the half duplex of traditional CSMA/CD operation.
10 Gigabit Ethernet
A common way to create a LAN in a building or office was to have a series of relatively small switches, perhaps one per wiring closet where all the UTP cables come together. The small switches are then connected to a bigger and/or faster switch that functions as the backbone of the LAN. With users on multiple floors and servers concentrated in a server room, a lot of bandwidth is often required between the switches. So, even though computers with a 10 Gigabit Ethernet connection were not common, 10GE was badly needed as a backbone technology and the standard was published in 2002.
In 2006 the 10GBASE-T standard was published, allowing 10 Gigabit Ethernet over twisted pair cable. 10GBASE-T needed even better cables than 1000BASE-T, and Category 6A cabling, with thicker insulation than previous versions, was introduced to reach 100 metres.
Reaching for 100 Gigabit Ethernet, and beyond
After 10 Gigabit Ethernet, 100Gbit/s was the next obvious step. However, transmitting at 100Gbit/s and faster over fibre has numerous challenges, as the laser pulses that carry information through fibre become so short that they have a hard time maintaining their shape. The IEEE therefore kept open the option to make a smaller step towards 100Gbit/s with a 40Gbit/s version and, on 17 June 2010, published standards for both 40 Gbit/s and 100 Gbit/s Ethernet. Products are now commercially available.
Nothing stops still with Ethernet though and so, in May 2013, 40 years after Bob Metcalfe's memo at Xerox, work started on project IEEE 802.3bs for 400Gbit/s. To put this into perspective, a single telephone voice channel requires 64kbit/s, so a 400Gbit/s Ethernet connection could carry the equivalent of 6.25 million telephone calls.
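The arithmetic behind that comparison is a single division:

```python
link_bps = 400_000_000_000   # a 400Gbit/s Ethernet link
voice_bps = 64_000           # one 64kbit/s telephone voice channel

print(link_bps // voice_bps)  # 6250000 - i.e. 6.25 million simultaneous calls
```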
It can be seen that Ethernet has managed to survive over 40 years in production, increasing its speed by no less than four orders of magnitude. In those 40 years, all aspects of Ethernet have been changed and only the packet format has remained the same. It has evolved from simply connecting computers within buildings, to connecting whole campuses together, and is now to be found at the heart of nearly all modern telecoms networks. For example, Ethernet is now starting to be used within signalling control systems, both for vital and non-vital communications.
The IEEE has several task forces and study groups looking at various improvements and variants and Ethernet will continually evolve, just as it has done over the last 40 years.
The only reason Ethernet growth has slowed relatively over the past decade is because wireless LANs (in the form of Wi-Fi) have been introduced and are so convenient. However, wired and wireless LANs are largely complementary so, even though more and more devices go through life with an unoccupied Ethernet port, Ethernet is always there to deliver the speed, reliability and security that shared wireless can struggle to provide.
1000Gbit/s? On the one hand, this seems unlikely, as transporting 100Gbit/s over fibre is already a big challenge. On the other hand, in 1975 few people would have guessed that today we would carry around affordable laptops with 10Gbit/s ports.
Gigabit Ethernet already uses parallelism by using all four wire pairs in a UTP cable, and many 40Gbit/s and 100Gbit/s Ethernet variants over fibre also use parallel datastreams, each using a slightly different wavelength of laser light. Telecoms carrier networks already transport multi-terabit aggregate bandwidths over a single fibre using dense wavelength division multiplexing (DWDM), so this seems an obvious opportunity for Ethernet to once again take existing telecoms technology, streamline it, and aggressively push the price down.
Bob Metcalfe's view
Bob Metcalfe is now a Professor of Innovation at the University of Texas. He has predicted that the future of Ethernet will be:
» Up – Ethernet data speeds will continue to increase, as can be seen by the release of 40 and 100Gbit/s, and now investigation work on 400Gbit/s.
» Through – Ethernet will continue to be used throughout telecoms carrier networks to supplement and replace SDH (Synchronous Digital Hierarchy).
» Over – It's ironic that Ethernet was developed on a wireless technology before being a wired technology, but it will continue to be used more and more over the wireless 'ether'.
» Down – Ethernet will be used more and more down the technology hierarchy chain: from networked PCs to sub-personal devices and micro-controllers, and into the embedded internet of everything. For example, there is already a lot of work specifying Ethernet for use within the automotive industry, and such uses will spread to all industries, which must include railways.
» Across – Both LAN and WAN (wide area network) speeds are relatively high, but very often constrained by the telecoms network connection. Ethernet will play a key role as Next Generation Telecoms Networks bridge the gap between LANs and Carrier Networks.
Ethernet is one of the success stories of the last 40 years and will be around for many years to come as it continues to evolve.