Ethernet switch classes. Comparison of network devices

Key Features of Switches

Switch performance is the first thing network integrators and administrators expect from this class of device.

The main indicators of the switch that characterize its performance are:

  1. frame filtering rate;
  2. frame forwarding rate;
  3. total throughput;
  4. frame transmission delay.

Filtering rate

The filtering rate determines the rate at which the switch performs the following frame processing steps:

  • reception of a frame into its buffer;
  • lookup of the address table to select the destination port for the frame;
  • destruction of the frame, because its destination port and source port belong to the same logical segment.

The filtering rate of almost all switches is non-blocking: the switch manages to discard frames at the rate at which they arrive.

The forwarding rate determines the rate at which the switch performs the following frame processing steps:

  • reception of a frame into its buffer;
  • lookup of the address table to find the port for the frame's destination address;
  • transmission of the frame to the network through the destination port found in the address table.

Both the filtering rate and the forwarding rate are usually measured in frames per second. By default, these are Ethernet frames of the minimum length (64 bytes without the preamble). Such frames create the heaviest operating mode for the switch.

Switch throughput is measured by the amount of user data (in megabits per second) transmitted per unit of time through its ports.

The maximum throughput of a switch is always reached on frames of maximum length. A switch can therefore be blocking for minimum-length frames and still have very good throughput figures.

Frame transmission delay is measured as the time elapsed from the moment the first byte of the frame arrives at the input port of the switch until the moment this byte appears at its output port.

The amount of delay introduced by the switch depends on its mode of operation. If switching is carried out "on the fly", the delays are usually small, from 5 to 40 µs; with full frame buffering they range from 50 to 200 µs (for minimum-length frames).
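
As a quick sanity check of the lower bound for full buffering (assuming a 10 Mbps port and a minimum-length 64-byte frame), the time needed just to receive the whole frame into the buffer is

    t = 64 bytes × 8 bits/byte / 10 Mbps = 51.2 µs,

which agrees with the 50 µs figure given above.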

On-the-fly and fully buffered switching

With on-the-fly (cut-through) switching, only the part of the frame containing the destination address is received into the input buffer; a decision is then made to filter the frame or forward it to another port, and if the output port is free, transmission of the frame begins immediately while the rest of it is still arriving in the input buffer. If the output port is busy, the frame is fully buffered in the receiving port's input buffer. The disadvantage of this method is that the switch forwards erroneous frames: by the time the end of the frame can be analyzed, its beginning has already been transferred to the other segment, which wastes useful network time.
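
A minimal sketch of the difference between the two modes (hypothetical helper names, not a real switch implementation): cut-through can act as soon as the 6-byte destination address has arrived, while store-and-forward waits for the whole frame and can check the FCS first.

    # Sketch: when each switching mode can act on an incoming Ethernet frame.
    # Helper functions (lookup, send, fcs_ok) are illustrative placeholders.

    DST_MAC_LEN = 6  # the destination address occupies the first 6 bytes of the frame

    def cut_through(frame: bytes, lookup, send):
        """Forward as soon as the destination address has arrived.
        Erroneous frames are forwarded too, because the FCS arrives last."""
        dst = frame[:DST_MAC_LEN]
        port = lookup(dst)
        if port is not None:
            send(port, frame)   # transmission starts while the tail is still arriving

    def store_and_forward(frame: bytes, lookup, send, fcs_ok):
        """Buffer the whole frame first; bad frames are discarded, not forwarded."""
        if not fcs_ok(frame):
            return              # frame dropped, no bandwidth wasted downstream
        dst = frame[:DST_MAC_LEN]
        port = lookup(dst)
        if port is not None:
            send(port, frame)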


Full buffering of received packets does, of course, introduce a larger delay in data transmission, but it gives the switch the opportunity to fully analyze and, if necessary, transform the received packet.

Table 6.1 lists the features of the switches when operating in two modes.

Table 6.1. Comparative characteristics of switches operating in the two modes

Although all switches have much in common, it is advisable to divide them into two classes, designed to solve different tasks.

Workgroup switches

Workgroup switches provide dedicated bandwidth when connecting any pair of nodes connected to the switch ports. If the ports have the same speed, the recipient of the packet must be free to avoid blocking.

By supporting at least as many addresses per port as can be present in a segment, the switch provides 10 Mbps of dedicated bandwidth per port. Each switch port is associated with the unique addresses of the Ethernet devices connected to that port.

The physical point-to-point connection between workgroup switches and 10Base-T nodes is typically made with unshielded twisted-pair cable, and 10Base-T compliant equipment is installed at the network nodes.

Workgroup switches can operate at 10 or 100 Mbps for different ports. This feature reduces the level of blocking when attempting to establish multiple 10 Mbps client connections on the same high-speed port. In client-server workgroups, multiple 10 Mbps clients can access a server connected to a 100 Mbps port. In the example shown in Figure 8, three 10 Mbps nodes access the server at the same time on a 100 Mbps port. Of the 100 Mbps bandwidth available for server access, 30 Mbps is used, and 70 Mbps is available for simultaneous connection of seven more 10 Mbps devices to the server via virtual circuits.

Support for different speeds is also useful for combining multiple Ethernet switches using 100 Mbps Fast Ethernet (100Base-T) hubs as local backbones. In the configuration shown in Figure 9, 10 Mbps and 100 Mbps switches are connected to a 100 Mbps hub. Local traffic stays within the workgroup, and the rest of the traffic is sent to the network through the 100 Mbps Ethernet hub.

To connect to a 10 or 100 Mbps repeater, the switch must have a port capable of supporting a large number of Ethernet addresses.

The main advantage of workgroup switches is high network performance at the workgroup level, achieved by giving each user dedicated channel bandwidth (10 Mbps). In addition, switches reduce the number of collisions (down to zero): unlike the backbone switches described below, workgroup switches do not transmit collision fragments to recipients. Workgroup switches also make it possible to preserve the client-side network infrastructure entirely, including software, network adapters and cabling. The per-port cost of workgroup switches is today comparable to that of managed hub ports.

Backbone switches

Backbone switches provide a media-speed connection between a pair of Ethernet segments. If the port speeds for the sender and receiver are the same, the destination segment must be free to avoid blocking.

At the workgroup level, each node shares 10 Mbps of bandwidth with the other nodes on the same segment. A packet destined outside of this group is forwarded by the backbone switch, as shown in Figure 10. The backbone switch provides simultaneous transmission of packets at media speed between any pair of its ports. Like workgroup switches, backbone switches can support different speeds on their ports. Backbone switches can work with 10Base-T segments and with segments based on coaxial cable. In most cases, using backbone switches is a simpler and more effective way to improve network performance than using routers and bridges.

The main disadvantage of backbone switches is that, at the workgroup level, users still work with a shared medium if they are connected to segments built on repeaters or coaxial cable. Moreover, the response time at the workgroup level can be quite long. Unlike hosts connected to switch ports, hosts on 10Base-T or coaxial segments are not guaranteed 10 Mbps of bandwidth and often have to wait until other hosts finish transmitting their packets. At the workgroup level collisions persist, and fragments of erroneous packets are forwarded to all networks connected to the backbone. These shortcomings can be avoided if switches are used at the workgroup level instead of 10Base-T hubs. In most resource-intensive applications, a 100 Mbps switch can act as a high-speed backbone for workgroup switches with 10 and 100 Mbps ports, for 100 Mbps hubs, and for servers equipped with 100 Mbps Ethernet adapters.

Feature Comparison

The main properties of Ethernet switches are shown in the table:

Benefits of Ethernet Switches

The main advantages of using Ethernet switches are listed below:
  • Increased performance thanks to high-speed connections between Ethernet segments (backbone switches) or network nodes (workgroup switches). In contrast to a shared Ethernet medium, switches allow aggregate performance to grow as users or segments are added to the network.
  • Reduced collisions, especially when each user is connected to a separate switch port.
  • Minimal cost of migrating from a shared to a switched environment, since the existing 10 Mbps Ethernet infrastructure (cables, adapters, software) is retained.
  • Increased security, because packets are forwarded only to the port to which the destination is connected.
  • Low and predictable latency, because the bandwidth is shared by a small number of users (ideally one).

Comparison of network devices

Repeaters

Ethernet repeaters, which in the context of 10Base-T networks are usually called hubs or concentrators, operate in accordance with the IEEE 802.3 standard. A repeater simply forwards received packets to all of its ports, regardless of the destination.

Although all devices connected to an Ethernet repeater (including other repeaters) "see" all of the network traffic, only the node to which a packet is addressed should receive it; all other nodes should ignore the packet. Some network devices (for example, protocol analyzers) rely on the fact that the network medium (such as Ethernet) is shared and analyze all network traffic. For some environments, however, the ability of each node to see all packets is unacceptable for security reasons.

From a performance point of view, repeaters simply transmit packets using the entire bandwidth of the link. The delay introduced by the repeater is very small (in accordance with IEEE 802.3 - less than 3 microseconds). Networks containing repeaters have a 10 Mbps bandwidth similar to a coaxial cable segment and are transparent to most network protocols such as TCP/IP and IPX.

Bridges

Bridges operate in accordance with the IEEE 802.1d standard. Like Ethernet switches, bridges are protocol independent and forward packets to the port to which the destination is connected. However, unlike most Ethernet switches, bridges do not forward packet fragments produced by collisions or erroneous packets, because all packets are buffered before being forwarded to the destination port. Packet buffering (store-and-forward) introduces latency compared with on-the-fly switching. Bridges can provide performance equal to the throughput of the medium, but internal blocking slows them down somewhat.

Routers

The operation of routers depends on network protocols and is determined by the protocol-related information carried in the packet. Like bridges, routers do not forward fragments of packets to the destination when collisions occur. Routers store the entire packet in their memory before forwarding it to the destination, therefore, when using routers, packets are transmitted with a delay. Routers can provide bandwidth equal to the bandwidth of the link, but they are characterized by the presence of internal blocking. Unlike repeaters, bridges, and switches, routers modify all transmitted packets.

Summary

The main differences between network devices are shown in Table 2.

The main indicators of a switch that characterize its performance are:

  • frame filtering rate;
  • frame forwarding rate;
  • throughput;
  • frame transmission delay.

In addition, there are several switch characteristics that have the greatest impact on these performance characteristics. These include:

  • the switching type;
  • the size of the frame buffer(s);
  • the performance of the switching matrix;
  • the performance of the processor or processors;
  • the size of the switching table.

Filtering rate and frame forwarding rate

The filtering rate and the frame forwarding rate are the two main performance characteristics of a switch. These characteristics are integral indicators and do not depend on how the switch is technically implemented.

Filtering rate

The filtering rate determines the rate at which the switch performs the following frame processing steps:

  • receiving a frame into its buffer;
  • discarding the frame if an error is found in it (the checksum does not match, or the frame is shorter than 64 bytes or longer than 1518 bytes);
  • discarding the frame to avoid loops in the network;
  • discarding the frame in accordance with the filters configured on the port;
  • looking up the switching table to find the destination port based on the frame's destination MAC address, and discarding the frame if the frame's source and destination are connected to the same port.

The filtering rate of almost all switches is non-blocking: the switch manages to discard frames at the rate at which they arrive.

The forwarding rate determines the rate at which the switch performs the following frame processing steps (see the sketch after this list):

  • receiving a frame into its buffer;
  • looking up the switching table to find the destination port based on the frame's destination MAC address;
  • transmitting the frame to the network through the destination port found in the switching table.
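
A hedged sketch of that filter/forward decision, assuming a plain dictionary as the switching table ({MAC address: port}); the function and parameter names are illustrative, not taken from any real switch firmware.

    # Sketch of the per-frame decision listed above.
    def handle_frame(switching_table, in_port, dst_mac, frame, send):
        out_port = switching_table.get(dst_mac)   # look up the destination port
        if out_port == in_port:
            return "filtered"                     # destination sits on the source port
        if out_port is None:
            return "unknown destination"          # such frames are flooded (not shown here)
        send(out_port, frame)                     # forward through the found port
        return "forwarded"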

Both the filtering rate and the forwarding rate are usually measured in frames per second. If the characteristics of a switch do not specify for which protocol and frame size the filtering and forwarding rates are given, then by default these indicators are assumed to be for the Ethernet protocol and frames of the minimum size, that is, frames 64 bytes long (without the preamble) with a 46-byte data field. The use of minimum-length frames as the main indicator of switch processing speed is explained by the fact that such frames always create the hardest operating mode for the switch compared with frames of another format carrying the same amount of user data. Therefore, when testing a switch, the minimum frame length mode is used as the most difficult test, which should verify the switch's ability to work with the worst combination of traffic parameters.

Switch throughput is measured by the amount of user data (in megabits or gigabits per second) transmitted per unit of time through its ports. Since the switch operates at the link layer, its user data is the data carried in the data field of the frames of the link-layer protocols: Ethernet, Fast Ethernet and so on. The maximum throughput of a switch is always reached on frames of maximum length, since in this case the share of overhead per frame is much lower than for minimum-length frames, and the time the switch spends on frame processing operations per byte of user data is significantly less. Therefore, a switch can be blocking for minimum-length frames and still have very good throughput figures.

Frame transmission delay (forwarding delay) is measured as the time elapsed from the moment the first byte of the frame arrives at the input port of the switch until the moment this byte appears at its output port. The delay is the sum of the time spent buffering the bytes of the frame and the time spent processing the frame by the switch: looking up the switching table, making the forwarding decision, and gaining access to the medium of the egress port.

The amount of delay introduced by the switch depends on the switching method used in it. If switching is carried out without full buffering, the delays are usually small, from 5 to 40 µs; with full frame buffering they range from 50 to 200 µs (for minimum-length frames).

Switching table size

The maximum capacity of the switching table defines the maximum number of MAC addresses that the switch can operate with at the same time. The switching table can store, for each port, both dynamically learned MAC addresses and static MAC addresses created by the network administrator.

The maximum number of MAC addresses that can be stored in the switching table depends on the switch's application. D-Link switches for workgroups and small offices typically support a MAC address table of 1K to 8K entries. Large workgroup switches support MAC address tables of 8K to 16K entries, while network backbone switches typically support 16K to 64K addresses or more.

Insufficient switching table capacity can cause the switch to slow down and clog the network with excess traffic. If the switching table is full and a port encounters a new source MAC address in an incoming frame, the switch will not be able to add it to the table. In this case, a response frame addressed to this MAC address will be sent through all ports (except the source port), that is, it will cause flooding.
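
The sketch below illustrates that behaviour under an assumed capacity limit (the 8K value is only an example, not a vendor figure): once the table is full, a new source address is not learned, and frames addressed to that station later have to be flooded through all ports except the one they arrived on.

    # Illustrative only: a bounded switching table and the flooding a full table forces.
    class SwitchingTable:
        def __init__(self, capacity=8192):
            self.capacity = capacity
            self.table = {}                       # MAC address -> port

        def learn(self, src_mac, in_port):
            if src_mac in self.table or len(self.table) < self.capacity:
                self.table[src_mac] = in_port     # learn a new address or refresh an old one
            # otherwise the table is full and the address is simply not learned

        def output_ports(self, dst_mac, in_port, all_ports):
            port = self.table.get(dst_mac)
            if port is None:                      # unknown destination -> flood
                return [p for p in all_ports if p != in_port]
            return [] if port == in_port else [port]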

Frame buffer size

To provide temporary storage of frames in cases when they cannot be immediately transferred to the output port, switches, depending on the implemented architecture, are equipped with buffers on the input ports, on the output ports, or with a common buffer for all ports. The buffer size affects both the frame delay and the packet loss rate: the larger the buffer memory, the lower the probability of losing frames.

Typically, switches designed to operate in critical parts of the network have a buffer memory of several tens or hundreds of kilobytes per port. The buffer common to all ports is usually several megabytes in size.

The topic of gigabit access is becoming more and more relevant, especially now, when competition is growing, ARPU is falling, and even 100 Mbps tariffs no longer surprise anyone. We had long been considering the move to gigabit access, but were put off by the price of the equipment and by questions of commercial feasibility. Competitors are not asleep, however, and when even Rostelecom began to offer tariffs above 100 Mbps, we realized we could not wait any longer. In addition, the price of a gigabit port has dropped significantly, and it has become simply unprofitable to install a FastEthernet switch that would have to be replaced with a gigabit one in a couple of years anyway. So we began choosing a gigabit switch for the access layer.

We reviewed various models of gigabit switches and settled on the two most suitable in terms of parameters that also matched our budget expectations: the D-Link DGS-1210-28ME and a comparable SNR switch.

Case


The SNR's case is made of thick, durable metal, which makes it heavier than its "competitor". The D-Link is made of thin steel, which saves weight but, because of the lower strength, makes it more susceptible to external impacts.

The D-Link is more compact: its depth is 14 cm versus 23 cm for the SNR. The SNR's power connector is located on the front, which undoubtedly makes installation easier.

Power supplies


D-link power supply


SNR power supply

Although the power supplies are very similar, we still found differences. The D-Link power supply is built economically, perhaps too much so: there is no lacquer coating on the board, and the interference protection at the input and output is minimal. As a result, there are concerns that with the D-Link these savings will affect the switch's tolerance of power surges and its operation in changing humidity and dusty conditions.

Switch board





Both boards are made neatly and there are no complaints about the assembly; however, the SNR uses a better PCB laminate, and the board is made with lead-free soldering. The point, of course, is not that the SNR contains less lead (that scares no one in Russia), but that these switches are produced on a more modern line.

In addition, as with the power supplies, D-Link saved on varnish: the SNR board has a lacquer coating, the D-Link board does not.

Apparently, the assumption is that the operating conditions of D-Link access switches will a priori be excellent: clean, dry, cool... well, like everyone else's. ;)

Cooling

Both switches have a passive cooling system. The D-Link has larger heatsinks, which is a definite plus. However, the SNR has free space between the board and the back wall, which benefits heat dissipation. An additional nuance is the presence of heat-spreading plates under the chip, which conduct heat to the switch case.

We ran a small test, measuring the temperature of the heatsink on the chip under normal conditions:

  • the switch is placed on a table at a room temperature of 22 °C;
  • 2 SFP modules are installed;
  • we wait 8-10 minutes.

The results were surprising: the D-Link heated up to 72 °C, while the SNR reached only 63 °C. It is better not to think about what will happen to the D-Link in a tightly packed box in the summer heat.



Temperature on the D-Link: 72 °C



On the SNR: 61 °C, all nominal

Lightning protection

The switches are equipped with different lightning protection systems: the D-Link uses gas discharge tubes, the SNR uses varistors. Each has its pros and cons, but varistors have a faster response time, which gives better protection both for the switch itself and for the subscriber devices connected to it.

Summary

The D-Link leaves the impression of economizing on every component: the power supply, the board, the case. For that reason the SNR strikes us as the preferable product in this case.

This LAN is built on switches, so this chapter covers the key performance characteristics of switches.

The main characteristics of a switch that measure its performance are:

  • filtering rate;
  • forwarding rate;
  • throughput;
  • frame transmission delay.

In addition, there are several switch characteristics that have the greatest impact on these performance characteristics. These include:

  • the size of the frame buffer(s);
  • the performance of the internal bus;
  • the performance of the processor or processors;
  • the size of the internal address table.

The filtering rate and the frame forwarding rate are the two main performance characteristics of a switch. These characteristics are integral indicators and do not depend on how the switch is technically implemented.

The filtering rate determines the rate at which the switch performs the following frame processing steps:

  • receiving a frame into its buffer;
  • looking up the address table to find the port for the frame's destination address;
  • destroying the frame, since its destination port coincides with the source port.

The forwarding rate determines the rate at which the switch performs the following frame processing steps:

  • receiving a frame into its buffer;
  • looking up the address table to find the port for the frame's destination address;
  • transmitting the frame to the network through the destination port found in the address table.

Both the filtering rate and the forwarding rate are usually measured in frames per second. If the characteristics of a switch do not specify for which protocol and frame size the filtering and forwarding rates are given, then by default these indicators are assumed to be for the Ethernet protocol and frames of the minimum size, that is, frames 64 bytes long (without the preamble) with a 46-byte data field. If the rates are given for a particular protocol, such as Token Ring or FDDI, then they are also given for the minimum-length frames of that protocol (for example, 29-byte frames for FDDI).

The use of minimum-length frames as the main indicator of switch speed is explained by the fact that such frames always create the hardest operating mode for the switch compared with frames of another format carrying the same amount of user data. Therefore, when testing a switch, the minimum frame length mode is used as the most difficult test, which should verify the switch's ability to work with the worst combination of traffic parameters. In addition, for minimum-length packets the filtering and forwarding rates have their maximum values, which is of no small importance when advertising a switch.

The throughput of a switch is measured by the amount of user data transmitted per unit of time through its ports. Since the switch operates at the link layer, its user data is the data carried in the data field of the frames of the link-layer protocols: Ethernet, Token Ring, FDDI and so on. The maximum throughput of a switch is always achieved on frames of maximum length, since in this case the share of overhead per frame is much lower than for minimum-length frames, and the time the switch spends on frame processing operations per byte of user data is significantly less.

The dependence of switch throughput on the size of the transmitted frames is well illustrated by the Ethernet protocol: when transmitting minimum-length frames, a rate of 14,880 frames per second and a throughput of 5.48 Mbps are achieved, while maximum-length frames give 812 frames per second and a throughput of 9.74 Mbps. Throughput thus drops by almost half when moving to minimum-length frames, and this is without taking into account the time the switch loses processing each frame.
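
These figures follow directly from the 10 Mbps Ethernet frame format (64-byte minimum and 1518-byte maximum frame, 8-byte preamble, 12-byte interframe gap, 46- and 1500-byte data fields); the short calculation below is only a sanity check of the numbers quoted above.

    # Reproducing the 10 Mbps Ethernet figures quoted in the text.
    BIT_RATE = 10_000_000          # 10 Mbps
    PREAMBLE, GAP = 8, 12          # bytes of preamble and interframe gap

    def frames_per_second(frame_bytes):
        return BIT_RATE / ((frame_bytes + PREAMBLE + GAP) * 8)

    def user_throughput_mbps(frame_bytes, data_bytes):
        return frames_per_second(frame_bytes) * data_bytes * 8 / 1e6

    print(frames_per_second(64), user_throughput_mbps(64, 46))        # ~14880 fps, ~5.48 Mbps
    print(frames_per_second(1518), user_throughput_mbps(1518, 1500))  # ~812 fps,  ~9.74 Mbps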

Frame transmission delay is measured as the time elapsed from the moment the first byte of the frame arrives at the input port of the switch until the moment this byte arrives at the output port of the switch. Latency is the sum of the time spent buffering the frame's bytes, as well as the time spent processing the frame by the switch - looking up the address table, deciding whether to filter or forward, and accessing the egress port media.

The amount of delay introduced by the switch depends on the mode of its operation. If switching is carried out "on the fly", then the delays are usually small and range from 10 µs to 40 µs, and with full frame buffering - from 50 µs to 200 µs (for frames of the minimum length).

A switch is a multiport device, so all of the above characteristics (except the frame transmission delay) are usually quoted in two forms: the first is the total performance of the switch with traffic transmitted simultaneously through all of its ports, the second is the performance per port.

Since simultaneous traffic on several ports allows for an enormous number of traffic patterns, differing in frame size, in the distribution of average frame-stream intensity among destination ports, in the coefficients of variation of that intensity and so on, when comparing switches by performance one must take into account for which traffic pattern the published performance data were obtained.

Estimating the required overall performance of the switch

Ideally, a switch installed in a network transmits frames between the nodes connected to its ports at the rate at which the nodes generate them, without introducing additional delays and without losing a single frame. In practice, the switch always introduces some delay in frame transmission and may also lose some frames, that is, fail to deliver them to their destinations. Because of differences in the internal organization of different switch models, it is difficult to predict how a particular switch will handle frames of a particular traffic pattern. The best criterion is still practice: placing the switch in a real network and measuring the delays it introduces and the number of frames it loses.

Besides the throughput of individual switch elements, such as port processors or the shared bus, switch performance is affected by parameters such as the size of the address table and the size of the common buffer or of the individual port buffers.

Address table size.

The maximum capacity of the address table determines the maximum number of MAC addresses that the switch can handle at the same time. Since switches most often use a dedicated processor unit with its own memory, holding its own instance of the address table, to serve each port, the size of the address table is usually given per port. The instances of the address table in different processor modules do not necessarily contain the same address information; there will most likely not be many duplicate addresses, unless the traffic of each port is distributed perfectly evenly among the other ports. Each port stores only the sets of addresses it has used recently.

The maximum number of MAC addresses that a port processor can remember depends on the switch's application. Workgroup switches usually support only a few addresses per port, since they are designed to form microsegments. Departmental switches must support several hundred addresses, and network backbone switches up to several thousand, typically 4K to 8K addresses.

Insufficient address table capacity can slow the switch down and flood the network with excess traffic. If the port processor's address table is full and it encounters a new source address in an incoming packet, it must evict some old address from the table and place the new one in its place. This operation itself takes some of the processor's time, but the main performance loss is seen when a frame arrives whose destination address is the one that had to be removed from the table. Since the frame's destination address is now unknown, the switch must forward the frame to all other ports. This creates unnecessary work for many port processors, and copies of the frame also end up on network segments where they are not needed at all.

Some switch manufacturers solve this problem by changing the algorithm for handling frames with an unknown destination address: one of the switch ports is configured as a trunk port, to which all frames with an unknown address are sent by default. This technique has long been used in routers, making it possible to reduce the size of address tables in networks organized hierarchically.

Transmission of a frame to the trunk port relies on the fact that this port is connected to an upstream switch that has sufficient address table capacity and knows where to send any frame. An example of successful frame transmission using a trunk port is shown in Figure 4.1: the upper-level switch has information about all network nodes, so it transmits the frame with destination address MAC3, received through the trunk port, via port 2 to the switch to which the node with address MAC3 is connected.

Figure 4.1 - Using a trunk port to deliver frames with an unknown destination

Although the trunk port method works effectively in many cases, it is possible to imagine situations in which frames are simply lost. One such situation is shown in Figure 4.2. The lower-level switch has removed the address MAC8, which is connected to its port 4, from its address table to make room for the new address MAC3. When a frame arrives with destination address MAC8, the switch forwards it to trunk port 5, through which the frame enters the upper-level switch. That switch sees from its address table that address MAC8 belongs to its port 1, the same port through which the frame entered the switch. The frame is therefore not processed further and is simply filtered out, and consequently never reaches its destination. It is therefore more reliable to use switches with sufficient address table capacity for each port, as well as support for a common address table in the switch management module.


Figure 4.2 - Frame loss when using a trunk port
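
A minimal sketch of the default-to-trunk-port rule described above; the port number and helper names are illustrative, not taken from any real configuration. It also shows where the loss in Figure 4.2 comes from: the upstream switch's lookup points back at the ingress port, so the frame is filtered.

    # Sketch: unknown-destination frames default to the trunk port.
    TRUNK_PORT = 5                        # illustrative, as in Figure 4.2

    def choose_port(address_table, dst_mac, in_port):
        port = address_table.get(dst_mac)
        if port is None:
            port = TRUNK_PORT             # unknown destination: hand it to the upstream switch
        if port == in_port:
            return None                   # filtered; this is how the frame in Figure 4.2 is lost
        return port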

Buffer size.

The switch's internal buffer memory is needed for temporary storage of frames when they cannot be immediately transferred to the output port. The buffer is meant to smooth out short-term traffic bursts. Even if the traffic is well balanced and the performance of the port processors and of the other processing elements of the switch is sufficient to carry average traffic, this does not guarantee that it will cope with very high peak loads. For example, traffic may arrive simultaneously at all of the switch's inputs for several tens of milliseconds, making it impossible to transfer the received frames to the output ports straight away.

When the average traffic intensity is briefly exceeded many times over (and in local networks traffic burstiness factors in the range of 50 to 100 are common), the only remedy against frame loss is a large buffer. As with address tables, each port processor module usually has its own buffer memory for storing frames. The larger this memory, the lower the probability of losing frames during congestion, although if the traffic averages are unbalanced the buffer will still overflow sooner or later.

Typically, switches designed to operate in critical parts of the network have a buffer memory of several tens or hundreds of kilobytes per port. It helps if this buffer memory can be redistributed among several ports, since simultaneous overloads on several ports are unlikely. An additional safeguard can be a buffer common to all ports, located in the switch management module; such a buffer is usually several megabytes in size.
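
As a rough, hedged illustration of why tens or hundreds of kilobytes per port are needed: if traffic arrives at a port faster than the egress port can drain it for a short burst, the buffer must hold the difference for the duration of the burst. The rates and burst length below are arbitrary example values, not vendor data.

    # Rough buffer-occupancy estimate for a short burst (illustrative numbers).
    in_rate   = 100_000_000   # bit/s arriving (e.g. a burst from a 100 Mbps port)
    out_rate  = 10_000_000    # bit/s the congested egress port can drain (10 Mbps)
    burst_sec = 0.03          # a 30 ms burst ("several tens of milliseconds")

    backlog_bytes = (in_rate - out_rate) * burst_sec / 8
    print(backlog_bytes)      # ~337,500 bytes, i.e. hundreds of kilobytes of buffering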


