VLANs based on ports. Types of virtual networks

Port-based VLANs rely only on adding information to the switch's address tables and do not embed virtual network membership information in the transmitted frame. VLANs built on the IEEE 802.1Q standard, by contrast, use additional frame fields to carry VLAN membership information as the frame travels through the network. In terms of convenience and flexibility of configuration, IEEE 802.1Q VLANs are the better solution. Their main advantages:

1. Flexibility and convenience of setup and modification: tagging allows VLAN membership information to propagate through multiple 802.1Q-compatible switches over a single physical connection (trunk link);

2. The ability of IEEE 802.1Q VLANs to add tags to and extract tags from frame headers allows the network to include switches and network devices that do not support the IEEE 802.1Q standard;

3. Devices from different manufacturers that support the standard can work together regardless of any proprietary solutions;

4. To connect subnets at the network layer, a router or L3 switch is required. However, in simpler cases, for example to provide access to a server from several VLANs, a router is not needed: the switch port to which the server is connected must be included in all the subnets, and the server's network adapter must support the IEEE 802.1Q standard.

Some definitions of IEEE 802.1Q

· Tagging - the process of adding 802.1Q VLAN membership information to the frame header.

· Untagging - the process of extracting 802.1Q VLAN membership information from the frame header.

· VLAN ID (VID) - VLAN identifier.

· Port VLAN ID (PVID) - port VLAN identifier.

· Ingress port - the switch port on which frames are received and where the decision about VLAN membership is made.

· Egress port - the switch port from which frames are transmitted to other network devices, switches or workstations, and, accordingly, where the tagging decision is made.

Any switch port can be configured as tagged or as untagged. The untagging function makes it possible to work with network devices of a virtual network that do not understand tags in the Ethernet frame header. The tagging function makes it possible to configure VLANs spanning several switches that support the IEEE 802.1Q standard.
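As an illustration, the following minimal Python sketch shows the egress-side decision of whether a frame leaves a port tagged or untagged; the port table, the frame layout and the example values are assumptions made purely for illustration.

```python
import struct

TPID = 0x8100  # fixed 802.1Q Tag Protocol Identifier

# Hypothetical per-port configuration: which VLANs the port belongs to and
# whether frames of that VLAN leave the port tagged or untagged.
ports = {
    1: {"tagged": set(),    "untagged": {10}},   # access port for VLAN 10
    2: {"tagged": {10, 20}, "untagged": {1}},    # trunk port
}

def on_egress(port: int, vid: int, priority: int, frame_body: bytes):
    """Return the bytes transmitted on `port` for a frame belonging to VLAN `vid`."""
    cfg = ports[port]
    if vid in cfg["untagged"]:
        return frame_body                     # untagging: the tag is not sent
    if vid in cfg["tagged"]:
        tci = (priority << 13) | vid          # 3 priority bits, CFI = 0, 12-bit VID
        return struct.pack("!HH", TPID, tci) + frame_body
    return None                               # the port is not a member of this VLAN

print(on_egress(2, 10, 5, b"frame-without-tag")[:4].hex())  # -> 8100a00a
```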

Figure. Tagged and untagged VLAN ports

IEEE 802.1Q VLAN tag

The IEEE 802.1Q standard defines changes to the Ethernet frame structure that allow VLAN information to be transmitted over the network. Fig. 6.7 shows the 802.1Q VLAN tag format. 32 bits (4 bytes) are added to the Ethernet frame, which increases its size to 1522 bytes. The first 2 bytes (Tag Protocol Identifier, TPID), with the fixed value 0x8100, identify the frame as carrying an 802.1Q tag. The remaining 2 bytes contain the following information:

Priority - the 3-bit transmission priority field encodes up to eight priority levels (from 0 to 7, where 7 is the highest), which are used by the 802.1p standard;

Canonical Format Indicator (CFI) - the 1-bit canonical format indicator is reserved for designating frames of other network types (Token Ring, FDDI) transmitted over the Ethernet backbone;

VID (VLAN ID) - the 12-bit VLAN identifier specifies which VLAN the traffic belongs to. Since 12 bits are allocated to the VID field, 4094 unique VLANs can be defined (VID 0 and VID 4095 are reserved).

Since 802.1Q does not change the existing frame headers but only inserts a tag into the frame, network devices that do not support this standard can transmit traffic without regard to its belonging to a VLAN.

802.1Q places a tag inside the frame that conveys information about which VLAN the traffic belongs to.

The tag size is 4 bytes. It consists of the following fields:

  • Tag Protocol Identifier (TPID, tag protocol identifier). The field size is 16 bits. It specifies which protocol is used for tagging; for 802.1Q the value is 0x8100.
  • Priority. The field size is 3 bits. It is used by the IEEE 802.1p standard to prioritize transmitted traffic.
  • Canonical Format Indicator (CFI). The field size is 1 bit. It indicates the MAC address format: 0 - canonical, 1 - non-canonical. CFI is used for compatibility between Ethernet and Token Ring networks.
  • VLAN Identifier (VID, VLAN ID). The field size is 12 bits. It specifies which VLAN the frame belongs to. The field can take values from 0 to 4095; values 0 and 4095 are reserved, so up to 4094 VLANs can be identified.

When using the Ethernet II standard, 802.1Q inserts a tag before the Protocol Type field. Since the frame has changed, the checksum is recalculated.
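A short Python sketch of this insertion is given below; it assumes a simplified untagged Ethernet II frame (destination MAC, source MAC, EtherType, payload) and leaves the checksum recalculation to the transmitting hardware.

```python
import struct

def insert_dot1q_tag(frame: bytes, vid: int, priority: int = 0, cfi: int = 0) -> bytes:
    """Insert a 4-byte 802.1Q tag into an untagged Ethernet II frame.

    The tag is placed right after the 12 address bytes, i.e. before the
    original Protocol Type (EtherType) field. The FCS is not handled here;
    it is recomputed when the frame is transmitted.
    """
    if not 0 < vid < 4095:
        raise ValueError("usable VIDs are 1..4094 (0 and 4095 are reserved)")
    tci = ((priority & 0x7) << 13) | ((cfi & 0x1) << 12) | vid
    tag = struct.pack("!HH", 0x8100, tci)
    return frame[:12] + tag + frame[12:]

# 14-byte Ethernet II header (dst MAC, src MAC, EtherType 0x0800 = IPv4) plus payload
untagged = bytes(6) + bytes(6) + struct.pack("!H", 0x0800) + b"payload"
tagged = insert_dot1q_tag(untagged, vid=100, priority=3)
print(len(tagged) - len(untagged))  # the frame grows by exactly 4 bytes
```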

In the 802.1Q standard, there is the concept of Native VLAN. The default is VLAN 1. Traffic on this VLAN is not tagged.

A similar but proprietary protocol, ISL, was developed by Cisco Systems.

Frame format

Inserting an 802.1Q Tag into an Ethernet-II Frame



This article discusses the capabilities of Ethernet for industrial use; it also presents special Ethernet-based application protocols.

OOO "AKOM", Chelyabinsk

Having successfully conquered the world of office automation, Ethernet and TCP/IP launched an attack on distributed production control systems. Their main "weapon" is the tempting idea of a "seamless" connection of all levels of the classic automation pyramid: from process automation to enterprise management. Implementing this idea required a major adaptation of Ethernet, especially in terms of real-time support. Non-deterministic communication protocols such as HTTP and FTP certainly provide versatility and ease of use, but for industrial use special Ethernet-based application protocols still had to be developed.

OSI - Open Systems Interconnection Model

The OSI (Open System Interconnection) model schematically describes and standardizes the interaction between devices in a network architecture. The OSI model defines seven layers of networking, gives them standard names, and specifies what functions each layer performs and how it interacts with the layer above it.

Fig. 1. OSI model (Open System Interconnection)

Before user data from Application 1 (Fig. 1) can be sent over Ethernet, the data sequentially passes through the entire communication stack from the top layer down to the lowest. In the process, the final packet is formed for transmission (encapsulation): when a frame (packet) is formed according to the requirements of the current layer, the frame from the higher layer is embedded into it. The data that reaches the lowest layer (the physical transmission medium) is then transmitted to the second system, where the reverse process takes place: the received data is passed sequentially up through the layers to its destination, Application 2.

Such a process is like a well-oiled pipeline and requires a clear description of the logical interaction between the levels.
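The following toy Python sketch illustrates this encapsulation chain; the header formats are invented for readability and are not the real TCP, IP or Ethernet headers.

```python
# Each layer wraps the data received from the layer above in its own header.
def application_layer(user_data: bytes) -> bytes:
    return b"APP|" + user_data

def transport_layer(data: bytes, src_port: int = 49152, dst_port: int = 80) -> bytes:
    return f"TCP {src_port}->{dst_port}|".encode() + data

def network_layer(segment: bytes, src: str = "10.0.0.1", dst: str = "10.0.0.2") -> bytes:
    return f"IP {src}->{dst}|".encode() + segment

def link_layer(packet: bytes, src_mac: str = "aa:aa", dst_mac: str = "bb:bb") -> bytes:
    return f"ETH {src_mac}->{dst_mac}|".encode() + packet

# Application 1 -> down the stack -> physical medium
on_the_wire = link_layer(network_layer(transport_layer(application_layer(b"hello"))))
print(on_the_wire)
# On the receiving side each layer strips its own header and passes the rest
# upward until Application 2 receives b"hello".
```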

Table 1

Ethernet, according to the IEEE 802.1-802.3 standards, implements layers 1 and 2 of the OSI model. Support for the third (Network) layer is provided by the IP (Internet Protocol) protocol overlaid on Ethernet, and the TCP and UDP transport protocols correspond to Layer 4. Layers 5-7 are implemented in the FTP, Telnet, SMTP and SNMP application protocols and in the specific industrial automation protocols (Industrial Ethernet) considered below. It should be noted that in some applications Industrial Ethernet protocols can replace or supplement Layers 3 and 4 (IP and TCP/UDP).

Layer 1 (Physical) describes a method for bit-by-bit serial transmission of data across a physical medium. As applied to the IEEE 802.3 standard, a standard Ethernet frame looks like this (a parsing sketch follows the field list):

Preamble - preamble, used to synchronize the receiving device and indicates the beginning of the Ethernet frame;

Destination - recipient's address;

Source - sender address;

Type Field - high level protocol type (for example, TCP/IP);

Data Field - transmitted data;

Check - checksum (CRC).
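The Python sketch below splits a captured frame into exactly these fields; the preamble is normally consumed by the receiving hardware and is therefore not present in the captured bytes, and the offsets shown are those of an untagged frame.

```python
import struct

def parse_ethernet(frame: bytes):
    """Split an untagged Ethernet frame into its header fields."""
    destination = frame[0:6]                            # recipient's address
    source = frame[6:12]                                # sender's address
    type_field = struct.unpack("!H", frame[12:14])[0]   # higher-level protocol type
    data_field = frame[14:-4]                           # transmitted data
    check = frame[-4:]                                  # checksum (CRC), if captured
    return destination, source, type_field, data_field, check

frame = bytes.fromhex("ffffffffffff" "001122334455" "0800") + b"x" * 50 + bytes(4)
dst, src, etype, data, crc = parse_ethernet(frame)
print(src.hex(":"), hex(etype))   # 00:11:22:33:44:55 0x800
```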

Layer 2 (Link) improves the reliability of data transmission over the Physical Layer by packing data into standard frames with added address information and a checksum (error detection). Access to the physical transmission medium, according to IEEE 802.3, is carried out through the CSMA/CD mechanism, which leads to inevitable collisions when several devices start transmitting at the same time. The link layer solves this problem by distributing access rights among the network-forming devices. This is implemented in Ethernet switches (Switched Ethernet technology), in which, based on link-layer data, all incoming frames are automatically checked for integrity and checksum compliance (CRC) and, if the result is positive, are forwarded only to the port to which the receiver is connected.
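A minimal Python sketch of this forwarding logic is shown below; the MAC table and the CRC placement are simplified assumptions, since a real switch performs these steps in hardware.

```python
import zlib

mac_table = {}   # learned source MAC address -> port number

def handle_frame(in_port: int, frame: bytes, num_ports: int = 8):
    """Check integrity, learn the source address, and choose the output port(s)."""
    payload, received_crc = frame[:-4], int.from_bytes(frame[-4:], "big")
    if zlib.crc32(payload) != received_crc:
        return []                                    # damaged frame is discarded

    dst, src = payload[0:6], payload[6:12]
    mac_table[src] = in_port                         # learn where the sender is

    if dst in mac_table:                             # known receiver: forward to one port
        out_port = mac_table[dst]
        return [] if out_port == in_port else [out_port]
    return [p for p in range(num_ports) if p != in_port]   # unknown: flood
```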

Layer 3 (Network) provides messaging between different networks, using the IP protocol (as applied to Ethernet) as its tool. Data received from the Transport Layer is encapsulated in a Network Layer packet with IP headers and passed to the Data Link Layer for segmentation and further transmission. The current IP version 4 (IPv4) uses 32-bit addresses, while IPv6 expands the address space to 128 bits.

Layer 4 (Transport) provides data transmission with a given level of reliability. Support for this level is implemented in the TCP and UDP protocols. TCP (Transmission Control Protocol - transmission control protocol) is an advanced protocol with means of establishing, confirming and terminating a connection, with means of detecting and correcting errors. High reliability of data transmission is achieved at the cost of additional time delays and an increase in the amount of transmitted information. UDP (User Datagram Protocol - user datagram protocol) was created as a counterweight to TCP and is used in cases where speed, rather than reliability of data transfer, becomes the primary factor.

Layers 5-7 are responsible for the final interpretation of the transmitted user data. Examples from the world of office automation are the FTP and HTTP protocols. Industrial Ethernet protocols also use these layers, but in different ways, which makes them incompatible with each other. Thus, the Modbus/TCP, EtherNet/IP, CIPsync and JetSync protocols sit strictly above Layer 4 of the OSI model, while the ETHERNET Powerlink, PROFInet and SERCOS protocols extend and partially replace Layers 3 and 4.

Ethernet/IP

EtherNet/IP is based on Ethernet and the TCP/IP and UDP/IP protocols and extends the communication stack for use in industrial automation (Fig. 2). The second part of the name, "IP", stands for "Industrial Protocol". Industrial Ethernet Protocol (EtherNet/IP) was developed by the ODVA group with the active participation of Rockwell Automation at the end of 2000 on the basis of the CIP (Common Industrial Protocol) communication protocol, which is also used in ControlNet and DeviceNet networks. The EtherNet/IP specification is public and free. In addition to the typical functions of the HTTP, FTP, SMTP and SNMP protocols, EtherNet/IP provides the transfer of time-critical data between the host and I/O devices. Transmission of non-time-critical data (configuration, program upload/download) is handled reliably by the TCP stack, while time-critical delivery of cyclic control data is carried out via the UDP stack. To simplify the setup of an EtherNet/IP network, most standard automation devices come with predefined configuration files (EDS).
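The division of labour between the two transports can be pictured with plain sockets, as in the hedged Python sketch below; the device address is hypothetical, the message bodies are placeholders rather than real CIP encoding, and the port numbers (TCP 44818 for explicit messaging, UDP 2222 for implicit I/O) are the ones commonly associated with EtherNet/IP.

```python
import socket

PLC_ADDR = "192.168.1.10"   # hypothetical EtherNet/IP device address

# Non-time-critical (explicit) traffic such as configuration and program
# upload/download goes over TCP, where reliability matters more than latency.
explicit = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
explicit.connect((PLC_ADDR, 44818))                 # explicit-messaging port
explicit.sendall(b"<placeholder explicit request>") # real traffic is CIP-encoded

# Time-critical (implicit) traffic, i.e. cyclic I/O data, goes over UDP, where
# low and predictable latency matters more than guaranteed delivery.
implicit = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
implicit.sendto(b"<placeholder cyclic I/O data>", (PLC_ADDR, 2222))
```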

CIPsync is an extension of the CIP communication protocol and implements time synchronization mechanisms in distributed systems based on the IEEE 1588 standard.

PROFINET

The first version of PROFINET used Ethernet for non-time-critical communication between top-level devices and Profibus-DP field devices. Interaction with Profibus-DP was handled quite simply by a proxy built into the stack.

The second version of PROFINET provides two mechanisms for communication over Ethernet: TCP/IP is used to transfer non-time-critical data, while real-time operation is provided on a second channel by a special protocol. This real-time protocol "jumps over" Layers 3 and 4, adjusting the length of the transmitted data to achieve determinism. In addition, to optimize communication, all data transmissions in PROFINET are prioritized according to IEEE 802.1p; real-time data must have the highest (seventh) priority.

PROFINET V3 (IRT) uses hardware support to create a fast channel with even better performance, meeting Isochronous Real-Time (IRT) requirements on the basis of the IEEE 1588 standard. PROFINET V3 is mainly used in motion control systems with dedicated Ethernet/PROFINET V3 switches.

Fig. 2. The structure of EtherNet/IP in the layers of the OSI model

Fig. 3. The structure of PROFINET in the layers of the OSI model

Fig. 4. The structure of ETHERNET Powerlink in the layers of the OSI model

ETHERNET Powerlink

In ETHERNET Powerlink, the TCP/IP and UDP/IP stacks (Layers 3 and 4) are extended by the Powerlink stack. Based on the TCP, UDP and Powerlink stacks, both asynchronous transfer of non-time-critical data and fast isochronous transfer of cyclic data are carried out.

The Powerlink stack fully manages data traffic on the network for real-time operation. For this, SCNM (Slot Communication Network Management) technology is used, which assigns each station in the network a time interval and strict transmission rights. In each such time interval only one station has full access to the network, which eliminates collisions and ensures deterministic operation. In addition to these individual timeslots for isochronous data transfer, SCNM provides common timeslots for asynchronous data transfer.
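The slot allocation idea can be sketched as a simple cyclic schedule in Python; the station names and the number of slots are invented for illustration and do not reflect the real Powerlink frame format.

```python
# One Powerlink cycle: individual isochronous slots, in each of which only one
# station may transmit (so collisions cannot occur), followed by a shared slot
# for asynchronous, non-time-critical data.
isochronous_stations = ["drive-1", "drive-2", "io-station-3"]   # invented names

def powerlink_cycle():
    for station in isochronous_stations:
        yield f"isochronous slot: only {station} transmits"
    yield "asynchronous slot: shared by all stations for non-time-critical data"

for slot in powerlink_cycle():
    print(slot)
```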

In cooperation with the CiA (CAN in Automation) group, a Powerlink v.2 extension has been developed using CANopen device profiles.

Powerlink v.3 includes time synchronization mechanisms based on the IEEE 1588 standard.

Modbus/TCP-IDA

The newly formed Modbus-IDA group proposes an IDA architecture for distributed control systems using Modbus as the message structure. Modbus-TCP is a symbiosis of the standard Modbus protocol and the Ethernet-TCP/IP protocol as a communication medium. The result is a simple, structured, open transmission protocol for Master-Slave networks. All three protocols from the Modbus family (Modbus RTU, Modbus Plus and Modbus-TCP) use the same application protocol, which allows them to be compatible at the level of user data processing.

IDA is not just a set of Modbus-based protocols; it is a whole architecture that combines methods for building various automation systems with distributed intelligence and describes both the structure of the control system as a whole and the interfaces of devices and software in particular. This provides vertical and horizontal integration of all levels of automation with extensive use of web technologies.

Real-time data transmission is provided by the IDA stack, which is an add-on over TCP/UDP and is based on the Modbus protocol. Transmission of non-time-critical data and support for web technologies occur through the TCP/IP stack. Remote control of devices and systems (diagnostics, parameterization, program download, etc.) is provided using the standard HTTP, FTP and SNMP protocols.

EtherCAT

EtherCAT (Ethernet for Control Automation Technology) is an Ethernet-based automation concept developed by the German company Beckhoff. The main distinguishing feature of this technology is the processing of Ethernet frames "on the fly": each module in the network, while receiving the data addressed to it, simultaneously relays the frame to the next module, and its output data is inserted into the relayed frame in the same way. Thus each module in the network adds a delay of only a few nanoseconds, giving the system as a whole real-time support. Non-time-critical data is transmitted in the time intervals between real-time data transmissions.
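The "on the fly" processing can be sketched in Python as a frame passing through a chain of modules, each of which reads its input slice and writes its outputs into the same buffer before relaying it; the layout of the process-data image is invented for illustration.

```python
# Each module owns a fixed region of the process-data image carried in the frame.
modules = [
    {"name": "drive-A",  "offset": 0, "size": 4},
    {"name": "drive-B",  "offset": 4, "size": 4},
    {"name": "io-block", "offset": 8, "size": 2},
]

def relay_through_chain(frame: bytearray) -> bytearray:
    """Simulate one frame traversing the module chain."""
    for module in modules:
        start = module["offset"]
        end = start + module["size"]
        inputs = bytes(frame[start:end])            # data addressed to this module
        outputs = bytes(b ^ 0xFF for b in inputs)   # placeholder for local processing
        frame[start:end] = outputs                  # outputs inserted into the relayed frame
    return frame

frame = bytearray(range(10))
print(relay_through_chain(frame).hex())
```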

EtherCAT implements synchronization mechanisms based on the IEEE 1588 standard. The low latency of data transmission makes it possible to use EtherCAT in motion control systems.

SERCOS III

SERCOS (SErial Real-time COmmunication System) is a digital interface optimized for communication between a controller and frequency converters (VFDs) over a fiber-optic ring. It was developed in its original form by a group of companies in the late 1980s. Real-time operation is achieved using the TDMA (Time Division Multiple Access) mechanism. SERCOS III is the latest version of this interface and is based on Ethernet.

Foundation Fieldbus HSE

When developing the Foundation Fieldbus standard, the designers tried to rely entirely on the OSI model, but in the end the model was changed for performance reasons: Layer 2 was replaced with a proprietary Data Negotiation layer, Layers 3-6 were eliminated, and an eighth layer, called the User layer, was added. The User layer contains function blocks, which are standardized packages of control functions (for example, analog input, PID control, etc.). These function blocks are designed to meet the requirements of a wide range of equipment from different manufacturers rather than a specific type of device. Connected devices use a software "Device Description" (DD) to communicate their unique properties and data to the system, which makes it easy to add new devices on a plug-and-play basis.

The second hallmark of Foundation Fieldbus technology is peer-to-peer communications between field devices. With peer-to-peer communication, each device connected to the bus can communicate with other devices on the bus directly (that is, without the need to signal through the control system).

Foundation Fieldbus HSE (High-Speed Ethernet) was developed in 2000. Its main features: it is based on Ethernet, provides a 100 Mbit/s data rate and real-time support, is compatible with all commercial Ethernet equipment, uses Internet protocols (FTP, HTTP, SMTP, SNMP and UDP), and can communicate with an FF H1 network without involving the host system.

SafeEthernet

SafeEthernet was developed by the German company HIMA on the basis of Ethernet with support for Internet protocols. True to the company's profile, and as the name implies, this protocol is optimized for use in safety systems.

Unlike port-based VLANs, VLANs based on the IEEE 802.1Q standard rely on embedding information about virtual network membership in the transmitted frame. Virtual local area networks built on the IEEE 802.1Q standard use additional frame fields to store VLAN membership information as the frame travels across the network. In terms of convenience and flexibility of configuration, IEEE 802.1Q VLANs are the best solution compared to port-based VLANs. Their main advantages:
  1. flexibility and convenience in setup and modification - you can create the necessary combinations of VLANs both within a single switch and across an entire network built on switches supporting the IEEE 802.1Q standard. The tagging capability allows VLAN information to be propagated across multiple 802.1Q-compliant switches over a single physical link (trunk link);
  2. the spanning tree algorithm (Spanning Tree) can be activated on all ports while the switches operate in normal mode. The spanning tree protocol is very useful for large networks built on several switches: it lets the switches automatically determine a tree-like configuration of connections when ports are linked to each other arbitrarily. For normal switch operation there must be no closed routes (loops) in the network. Such routes may be created by the administrator deliberately to provide redundant links, or they may arise by accident, which is quite possible if the network has many links and the cabling system is poorly structured or documented. Using the Spanning Tree protocol, switches build a map of the network and then block redundant routes, so loops in the network are prevented automatically;
  3. the ability of IEEE 802.1Q VLANs to add tags to and extract tags from frame headers allows the network to include switches and network devices that do not support the IEEE 802.1Q standard;
  4. devices from different manufacturers that support the standard can work together regardless of any proprietary solutions;
  5. to connect subnets at the network layer, a router or L3 switch is required. However, in simpler cases, for example to provide access to a server from different VLANs, a router is not needed: the switch port to which the server is connected must be included in all the subnets, and the server's network adapter must support the IEEE 802.1Q standard.


Fig. 6.5.

Some definitions of IEEE 802.1Q

  • Tagging - the process of adding 802.1Q VLAN membership information to the frame header.
  • Untagging - the process of extracting 802.1Q VLAN membership information from the frame header.
  • VLAN ID (VID) - VLAN identifier.
  • Port VLAN ID (PVID) - port VLAN identifier.
  • Ingress port - the switch port on which frames are received and where the decision about VLAN membership is made.
  • Egress port - the switch port from which frames are transmitted to other network devices (switches or workstations) and, accordingly, where the tagging decision is made.

Any switch port can be configured as tagged or as untagged. The untagging function makes it possible to work with network devices of a virtual network that do not understand tags in the Ethernet frame header. The tagging function makes it possible to configure VLANs between multiple switches that support the IEEE 802.1Q standard.


Fig. 6.6.

IEEE 802.1Q VLAN tag

The IEEE 802.1Q standard defines changes to the Ethernet frame structure that allow the transmission of VLAN information over the network. Fig. 6.7 shows the 802.1Q VLAN tag format.

Part IV

The number of applications carrying delay-sensitive traffic has increased significantly, and the growth of such applications and, accordingly, of their users is not only continuing but gaining momentum. Several standards and specifications have been developed to address the transmission of such traffic; they are discussed in this article.

IEEE 802.1Q and IEEE 802.1p standards

The task of the working groups developing the p and Q standards is to give the networking industry a unified method for carrying information about a frame's priority and its VLAN membership across the network. Two packet marking specifications have been developed:

  • the first, single-level specification defines the interaction of virtual networks over a Fast Ethernet trunk;
  • the second, two-level specification concerns the marking of packets in mixed backbones that include Token Ring and FDDI.

From the very beginning the first specification needed only minimal refinement, since it is, in fact, the tag switching technology promoted to the market by Cisco. Delays in the adoption of the 802.1Q standard are explained by the need for detailed elaboration of the much more complex "two-level" specification.

The standard had to satisfy the following rather high requirements:

  • scalability at the level of packet exchange between switches;
  • continuity at the level of existing end applications;
  • adaptation at the level of existing protocols and routing tables;
  • efficiency in terms of utilization of high-speed backbones;
  • compatibility with ATM, especially with LAN emulation;
  • manageability of the packet tagging process.

The 802.1Q standard adds four bytes to the Ethernet frame. These 32 bits carry information about the frame's VLAN membership and its priority. More precisely, three bits encode up to eight priority levels, 12 bits distinguish traffic of up to 4096 VLANs, and one bit is reserved for frames of other network types (Token Ring, FDDI) transmitted over the Ethernet backbone.

The Priority Level Identifier field allows eight such levels to be used, corresponding to the 802.1p priority system.

In the Ethernet frame header, the 802.1Q fields are placed between the source address and the payload length field (802.3 frame) or the higher-layer protocol type field (Ethernet II frame).

Currently, almost all network companies have already created commercial versions of products that support the 802.1p and 802.1Q standards. In addition, many Ethernet switch manufacturers have already implemented proprietary prioritization services.

Obviously, changing the structure of the Ethernet frame entails serious problems, because compatibility is lost with all traditional Ethernet devices oriented to the old frame format.

Indeed, because the 802.1Q data is placed before the payload length (or protocol type) field, a traditional network product will not find this information in the usual place and will instead read the value 0x8100, the fixed value of the new Tag Protocol Identifier field in 802.1Q frames.

The source of the problem is not only the change in the placement of Ethernet frame header fields but also the increase in the maximum frame length. Many network devices cannot handle frames longer than 1518 bytes. Experts have debated whether to lengthen the maximum Ethernet frame size by four bytes or to shorten the maximum payload by four bytes to compensate for the added overhead. The 802.1Q specification allows both approaches, so it is up to vendors to ensure that their products are interoperable.

From a technical point of view, interoperability of old equipment with 802.1Q-compatible modern devices is not difficult, and most manufacturers are able to implement this feature in their products at the port level. To connect an 802.1Q-compliant device to an old switch or NIC, you simply disable 802.1Q support on the corresponding port, and all traffic is sent to the network as usual.

Priorities and Classes of Service

The IEEE 802.1p specification, created as part of the 802.1Q standardization process, defines a method for conveying priority information for network traffic. While most LANs rarely experience sustained congestion, occasional bursts of traffic are common and can cause packet transmission delays, which is unacceptable for networks designed to carry voice and video. The 802.1p standard specifies a queuing algorithm that ensures timely delivery of time-sensitive traffic.

The Integrated Services over Specific Link Layers (ISSLL) working group has defined a number of classes of service depending on what delay is allowed for transmitting a packet of a particular type of traffic. Imagine a network with different types of traffic: one sensitive to delays on the order of 10 ms, one that does not tolerate delays of more than 100 ms, and one almost insensitive to delays. For such a network to work successfully, each of these traffic types must have its own priority level that ensures its delay requirements are met. Using the concept of the Resource Reservation Protocol (RSVP) and the class-of-service system, a priority control scheme can be defined. The RSVP protocol, which will be discussed below, is supported by most switching routers, in particular by Cabletron's SSR 8000/8600 models.

In addition to prioritization, the 802.1p standard introduces the important Generic Attribute Registration Protocol (GARP) with two special implementations. The first of these is GMRP (GARP Multicast Registration Protocol), which allows workstations to request a connection to a multicast messaging domain. The concept supported by this protocol is called leaf-initiated join. The GMRP protocol ensures that traffic is transmitted only to those ports from which the request for multicast traffic came, and it aligns well with the 802.1Q standard.

The second implementation of GARP is GVRP (GARP VLAN Registration Protocol), which is similar to GMRP. However, a workstation running it sends a request for access to a specific VLAN instead of a request to join a multicast domain. This protocol links the p and Q standards.

With the adoption of the preliminary versions of the 802.1Q and 802.1p standards, there is every opportunity for the widespread use of traffic prioritization in Ethernet networks. Using products that support prioritization mechanisms, network administrators will be able to manage the switching infrastructure of their network so that, for example, traffic of the Lotus Notes office suite and e-mail receives the highest priority level, while RealAudio audio streams receive the lowest. Traffic prioritization mechanisms based on the 802.1Q and 802.1p specifications have undoubtedly become another trump card of Ethernet technology.

But although these specifications provide traffic prioritization for the most popular layer 2 topologies, they do not guarantee that the entire network infrastructure (from one endpoint to another) will support the processing of priority traffic. In particular, the 802.1Q and 802.1p specifications are useless in controlling the priority of IP traffic (Layer 3 traffic) transmitted over a low-speed WAN or Internet access channels, that is, through the most likely bottlenecks in the network infrastructure.

To fully manage traffic across the entire network, you must first implement effective prioritization of IP traffic. In this regard, a number of questions arise. Does the local network support such prioritization mechanisms? What about WAN equipment? Does your ISP support these mechanisms? What about the infrastructure at the other end of the connection in this regard? If at least one device located between two systems does not support prioritization mechanisms, it will be impossible to implement the transfer of priority traffic from one network end node to another.

Unlike Ethernet, IP has had network traffic prioritization mechanisms for a long time; they were first introduced in the version of the protocol published in 1981. Each IP packet has an eight-bit Type of Service (ToS) field consisting of two subfields (see the structure of the IP packet header):

  • three-bit - to set the priority level of the packet;
  • four-bit - to indicate the class (type) of service preferred for this packet (the remaining eighth bit is not used).

The first three bits of the ToS field allow IP traffic to be assigned the same eight priority levels (from 0 to 7) as the 802.1Q and 802.1p specifications and most other LAN technologies. The priorities of Ethernet frames and IP packets can therefore be mapped to each other one-to-one, which makes possible end-to-end processing of priority traffic transmitted from one Ethernet network to another over a distributed IP network or an ISP's infrastructure.

The other four bits of the ToS field allow the network manager to route each packet individually according to the nature of the data it contains. For example, NNTP (Network News Transfer Protocol) packets transporting UseNet news can be assigned a low-cost class of service, and Telnet packets a low-latency class of service.

Initially, RFC 791 (the original version of the IP protocol) defined only three classes of service, each associated with a separate bit set to "1" or "0" depending on the need for a particular type of service. With the adoption of RFC 1349, another class was added, and the four previously separate bits came to be treated as a single field. Today, therefore, they can encode a maximum of 16 values (from 0 to 15).
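A short Python sketch of the ToS byte layout follows; the constants correspond to the RFC 791/RFC 1349 fields described above, and the example simply reuses an 802.1p frame priority (0-7) as the IP precedence value to illustrate the one-to-one mapping mentioned earlier.

```python
# RFC 1349 type-of-service values (the four bits treated as a single field).
TOS_LOW_DELAY        = 0b1000
TOS_HIGH_THROUGHPUT  = 0b0100
TOS_HIGH_RELIABILITY = 0b0010
TOS_LOW_COST         = 0b0001

def build_tos_byte(precedence: int, tos_bits: int) -> int:
    """Compose the 8-bit ToS field: 3 precedence bits, 4 ToS bits, 1 unused bit."""
    if not 0 <= precedence <= 7:
        raise ValueError("precedence (priority) must be in 0..7")
    return (precedence << 5) | (tos_bits << 1)

frame_priority = 5                     # an 802.1p priority maps one-to-one to precedence
tos = build_tos_byte(precedence=frame_priority, tos_bits=TOS_LOW_DELAY)
print(f"ToS byte: 0x{tos:02x}")        # 0xb0
```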

Network administrators managing complex networks with many routes can use the type-of-service bits in conjunction with routing protocols such as OSPF to create custom routing services. For example, packets marked low delay can be sent over a high-speed optical line rather than a satellite link, while undemanding traffic (class of service "low cost") is sent through the Internet rather than over the corporate wide area network.

By combining the type-of-service bits with the priority bits, you can specify very precisely how packets with specific types of data are handled; for example, network filter rules can give all Lotus Notes application packets a medium priority level and a low-delay class of service. Notes users will then receive preferential treatment compared to users of other, less important applications. A different set of filters could mark all RealAudio traffic as low priority and set its class of service to high throughput.

If you have your own end-to-end connection between the sending and the destination nodes, you can handle the packets however you like. But in most ISP networks, packets with priority levels set and unmarked packets are treated the same way. Therefore, for prioritizing traffic and assigning different classes of service to it, the best option is a private wide area network. When working over the Internet, you can at least apply filters to the traffic coming in from the Internet to control its progress on your own network.

However, not everything depends on the network infrastructure. Currently, there are significant problems with setting the priority and type-of-service bits in IP packets. These bits can be set either by the application itself as packets are generated and sent, or by a network device using special filters. In both cases, support for these features depends entirely on the vendors of applications, operating systems and network equipment.

Surprisingly, only a few operating systems provide mechanisms in their IP stacks to write a packet's priority level and required class of service into the packet. The WINSOCK.DLL API that ships with Windows 95 and Windows NT does not have this capability at all, so attempts to call setsockopt(IP_TOS) result in an "invalid operation" diagnostic message. Other operating systems, such as Irix, HP-UX and Solaris, support these features only partially.

Among all operating systems, only Linux and Digital UNIX have strong support for ToS functions, available both in the systems themselves and in their sets of standard applications. For example, both systems provide Telnet clients and servers capable of setting the low-delay bit of the ToS field; none of the other operating systems we tested have such capabilities. The FTP client and server running on Linux and Digital UNIX can set the low-delay bit in packets transmitted over the control channel and the high-throughput bit in packets transmitted over the data channel. As a result, an FTP command such as abort will be transmitted to the server along the fastest route and, accordingly, in the minimum time (quickly cancelling the download of a file from the server).
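On the systems that do support it (Linux in particular), an application can request a class of service through the socket API, as in the minimal Python sketch below; on platforms without proper ToS support, as discussed above, the option may be rejected or silently ignored.

```python
import socket

IPTOS_LOWDELAY = 0x10   # low-delay bit of the ToS field (RFC 1349 value)

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    # Ask the IP stack to mark this connection's packets as low delay, the same
    # way the Telnet clients on Linux and Digital UNIX mark their traffic.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, IPTOS_LOWDELAY)
except (OSError, AttributeError):
    # Platforms without ToS support may refuse the call or lack the constant.
    print("IP_TOS is not supported on this platform")
```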

Why do only a few applications support the ToS byte functionality? Because most of the operating systems they run on do not provide proper support for these functions. Until Microsoft modifies the WINSOCK.DLL API in Windows NT systems, application vendors such as Lotus Development, Netscape Communications and Oracle will not be able to implement priority management mechanisms in their applications.

However, there are ways to get around the problems that operating system and application vendors are slow to address. The surest of them is to implement IP traffic prioritization services not in applications and operating systems, but in network infrastructure devices. Administrators of many large and heavily loaded networks have been prioritizing for several years using filters installed in routers on a per-application basis.


