Fairness means that when congestion occurs, each source (or each TCP connection or UDP flow established by the same source) should share the same network resources (such as bandwidth and buffer space) fairly: sources at the same level should obtain the same amount of network resources. The fundamental reason fairness becomes an issue is that congestion inevitably leads to packet loss, which makes data flows compete for limited network resources, and the flows with weaker competitiveness suffer more. Therefore, without congestion there is no fairness problem.
The fairness problem at the TCP layer manifests itself in two aspects:
(1) Connection-oriented TCP and connectionless UDP respond to congestion indications differently, which leads to unfair use of network resources. When congestion occurs, TCP's congestion control mechanism makes the data flow enter the congestion-avoidance phase and actively reduce the amount of data sent into the network. Connectionless UDP, however, has no end-to-end congestion control: even when the network gives congestion indications (such as packet loss or duplicate ACKs), UDP does not reduce its sending rate the way TCP does. As a result, TCP flows subject to congestion control obtain fewer and fewer network resources, while UDP flows without congestion control obtain more and more, leading to a seriously unfair distribution of network resources among sources.
An unfair distribution of network resources will in turn aggravate congestion and may even lead to congestion collapse. How to determine whether each data flow strictly obeys TCP congestion control, and how to "punish" flows that do not obey the congestion control protocol when congestion occurs, is therefore a current hot topic in congestion control research. The fundamental way to solve the fairness problem of transport-layer congestion control is to make full use of end-to-end congestion control mechanisms.
(2) Fairness problems also exist among TCP connections themselves. The cause is that some TCP connections use a larger window before congestion occurs, or have a smaller RTT, or send larger packets than other TCP connections, and so occupy more bandwidth. This is known as RTT unfairness.
The AIMD congestion-window update strategy also has shortcomings. The additive-increase strategy grows the sender's congestion window by one packet per round-trip time (RTT). Therefore, when different data flows compete for the bottleneck bandwidth of the network, the congestion window of a TCP flow with a smaller RTT grows faster than that of a TCP flow with a larger RTT, so the smaller-RTT flow occupies more of the network bandwidth.
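The toy simulation below (a minimal sketch with made-up RTTs and bottleneck size, not a real TCP model) illustrates this RTT bias: both flows follow the same AIMD rule, yet the flow with the shorter RTT ends up with the larger window share.

```python
# Illustrative sketch: two AIMD flows with different RTTs share one bottleneck.
# Each flow adds one segment to its cwnd per RTT; when the combined load
# exceeds the bottleneck, both halve. All parameter values are made up.

def simulate(rtt_a=0.01, rtt_b=0.04, bottleneck=200, duration=10.0):
    cwnd = {"A": 1.0, "B": 1.0}
    rtt = {"A": rtt_a, "B": rtt_b}
    next_update = {"A": rtt_a, "B": rtt_b}
    t, step = 0.0, 0.001
    while t < duration:
        t += step
        for flow in ("A", "B"):
            if t >= next_update[flow]:
                cwnd[flow] += 1.0              # additive increase: +1 segment per RTT
                next_update[flow] += rtt[flow]
        if cwnd["A"] + cwnd["B"] > bottleneck: # shared bottleneck overflows
            for flow in ("A", "B"):
                cwnd[flow] /= 2.0              # multiplicative decrease for both
    return cwnd

if __name__ == "__main__":
    print(simulate())   # the short-RTT flow "A" ends up with the larger share
```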
Additional information
The line quality between China and the United States is not very good: the RTT is long and packets are frequently dropped. Packet loss is both where TCP succeeds and where it fails: TCP was designed to provide reliable transmission over unreliable links, that is, to cope with packet loss, yet packet loss also greatly reduces TCP's transmission speed. HTTP uses TCP at the transport layer, so the speed of a web-page download depends on the speed of a single-threaded TCP download (because web pages are downloaded in a single thread). The main reason packet loss slows TCP down so much is the loss-retransmission mechanism, and TCP's congestion control algorithm is what governs it.
The Linux kernel provides several TCP congestion control algorithms, which can be listed via the kernel parameter net.ipv4.tcp_available_congestion_control.
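As a quick illustration (Linux only, assuming the standard procfs layout), the sysctl values mentioned above can be read directly from /proc:

```python
# Sketch: inspect the congestion control algorithms available on a Linux host
# by reading the procfs entries behind the sysctl names mentioned above.

def read_sysctl(name: str) -> str:
    path = "/proc/sys/" + name.replace(".", "/")
    with open(path) as f:
        return f.read().strip()

if __name__ == "__main__":
    print("available:", read_sysctl("net.ipv4.tcp_available_congestion_control"))
    print("current:  ", read_sysctl("net.ipv4.tcp_congestion_control"))
```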
1. Vegas
In 1994, Brakmo proposed a new congestion control mechanism, TCP Vegas, which approaches congestion management from a different angle. As described above, standard TCP congestion control is loss-based: the congestion window is adjusted only once packets are lost. However, packet loss does not necessarily mean the network has entered congestion. Because the RTT is closely related to network conditions, TCP Vegas uses changes in the RTT to judge whether the network is congested and adjusts the congestion window accordingly. If the RTT grows, Vegas considers the network congested and starts to reduce the congestion window; if the RTT shrinks, Vegas considers the congestion to be easing and increases the congestion window again. Because Vegas judges the available bandwidth from RTT changes rather than from packet loss, it can detect the available bandwidth more accurately and achieve better efficiency. But Vegas has a flaw that could be called fatal, and it ultimately kept TCP Vegas from being widely used on the Internet: a flow using TCP Vegas competes for bandwidth less effectively than flows that do not use it.
This is because whenever routers in the network buffer data, the RTT grows: as long as the buffers do not overflow, no congestion loss occurs, but the queuing delay still makes the RTT larger. In networks with relatively small bandwidth, the RTT rises sharply as soon as data starts to flow, and this is especially obvious in wireless networks. In this situation TCP Vegas reduces its own congestion window, but as long as there is no packet loss, standard TCP (as described above) does not reduce its window at all, so the competition is unfair from the start and TCP Vegas ends up very inefficient. In fact, if all TCP flows used Vegas congestion control, inter-flow fairness would be better; the poor competitiveness is not a problem of the Vegas algorithm itself.
Applicable environment: difficult to deploy on a large scale on the Internet (poor bandwidth competitiveness).
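A simplified sketch of the Vegas idea described above follows. The thresholds alpha and beta and the per-RTT update are illustrative values, and real implementations also handle slow start, loss recovery and RTT sampling details.

```python
# Simplified Vegas-style update (units in segments, applied once per RTT).

def vegas_update(cwnd: float, base_rtt: float, rtt: float,
                 alpha: float = 2.0, beta: float = 4.0) -> float:
    expected = cwnd / base_rtt               # throughput if there were no queuing
    actual = cwnd / rtt                      # throughput actually observed
    diff = (expected - actual) * base_rtt    # estimated segments queued in the network
    if diff < alpha:
        return cwnd + 1.0                    # little queuing: grow the window
    if diff > beta:
        return cwnd - 1.0                    # RTT inflated by queuing: back off before loss
    return cwnd                              # queue occupancy within the target band

# Example: base RTT 100 ms, current RTT 140 ms, cwnd 20 segments
print(vegas_update(20.0, 0.100, 0.140))
```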
2. Reno
Reno is the most widely used and most mature algorithm. The mechanisms it contains (slow start, congestion avoidance, fast retransmit and fast recovery) are the foundation of many later algorithms. From Reno's operating mechanism it is easy to see that, to maintain its dynamic equilibrium, it must periodically induce a certain amount of loss, and its AIMD behaviour decreases the window quickly but increases it slowly. Especially in large-window environments, recovering from the window reduction caused by the loss of a single datagram takes a long time, so bandwidth utilization cannot be very high, and this drawback becomes more and more obvious as link bandwidths keep growing. In terms of fairness, measurements show that Reno's fairness is well established, and it maintains fairness reasonably well even in larger networks.
The Reno algorithm became mainstream, and is widely deployed, because of its simplicity, effectiveness and strong robustness.
However, it cannot effectively handle the case where multiple packets are lost from the same window of data; the NewReno algorithm addresses this problem.
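The following minimal sketch (segments as units, with fast retransmit and fast recovery details omitted) shows the slow-start, congestion-avoidance and loss-halving behaviour just described; it is an illustration, not kernel code.

```python
# Per-ACK window evolution of a Reno-style sender, heavily simplified.

class RenoWindow:
    def __init__(self, ssthresh: float = 64.0):
        self.cwnd = 1.0
        self.ssthresh = ssthresh

    def on_ack(self) -> None:
        if self.cwnd < self.ssthresh:
            self.cwnd += 1.0                 # slow start: window doubles per RTT
        else:
            self.cwnd += 1.0 / self.cwnd     # congestion avoidance: ~+1 segment per RTT

    def on_loss(self) -> None:
        self.ssthresh = max(self.cwnd / 2.0, 2.0)
        self.cwnd = self.ssthresh            # multiplicative decrease

w = RenoWindow()
for _ in range(200):
    w.on_ack()
w.on_loss()
print(round(w.cwnd, 1), round(w.ssthresh, 1))
```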
Protocols based on packet-loss feedback
In recent years, with the spread of networks with a high bandwidth-delay product, many new TCP improvements based on packet-loss feedback have appeared, including HSTCP, STCP, BIC-TCP, CUBIC and H-TCP.
Generally speaking, a loss-based protocol is a passive congestion control mechanism: it judges network congestion from packet-loss events. Even if the load in the network is already high, as long as no congestion loss occurs the protocol will not voluntarily reduce its sending rate. Such protocols can make full use of the remaining network bandwidth and improve throughput. However, loss-based protocols behave aggressively when the network is close to saturation: on the one hand this greatly improves bandwidth utilization, but on the other hand, for a loss-based congestion control protocol, pushing network utilization up means that the next congestion loss event is not far away. These protocols therefore raise bandwidth utilization while indirectly raising the packet loss rate, which increases jitter across the whole network.
TCP friendliness
Loss-based protocols such as BIC-TCP, HSTCP and STCP greatly improve their own throughput, but they also seriously degrade the throughput of competing Reno flows. The main reason these protocols show such poor TCP friendliness is their aggressive congestion-window management: they generally assume that as long as there is no packet loss in the network there must be spare bandwidth, and they keep raising their transmission rate. Viewed over time, their transmission rate climbs faster and faster as it approaches the bottleneck bandwidth. This not only causes a lot of congestion and packet loss, but also devours the bandwidth of the other flows sharing the network, so fairness across the whole network declines.
3. HSTCP
HSTCP (HighSpeed TCP) is a congestion control algorithm for high-speed, long-delay networks based on AIMD (additive increase, multiplicative decrease), and it can effectively raise throughput on such networks. By modifying the increase and decrease parameters of the standard TCP congestion-avoidance algorithm, it grows the window quickly and shrinks it slowly, keeping the window within a large enough range to make full use of the bandwidth. In high-speed networks it obtains much higher bandwidth than TCP Reno, but it suffers from serious RTT unfairness (fairness here meaning that multiple flows sharing the same bottleneck should occupy equal shares of the network resources). The HSTCP sender dynamically adjusts the window-increase function according to the network's expected packet loss rate.
Window growth in congestion avoidance: cwnd = cwnd + a(cwnd)/cwnd
Window reduction after a packet loss: cwnd = (1 - b(cwnd)) * cwnd
Here a(cwnd) and b(cwnd) are two functions; in standard TCP, a(cwnd) = 1 and b(cwnd) = 0.5. To remain TCP-friendly at small window sizes, that is, in network environments that do not have a high BDP, HSTCP uses the same a and b as standard TCP to guarantee friendliness. When the window is large (above the threshold LowWindow = 38), new a and b functions are used to meet the demand for high throughput. See RFC 3649 for details.
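A sketch of the update structure just described follows. The real a(w) and b(w) come from the lookup table in RFC 3649; the values used above LOW_WINDOW here are illustrative placeholders, not the RFC table.

```python
LOW_WINDOW = 38.0  # below this threshold, behave exactly like standard TCP

def a(cwnd: float) -> float:
    return 1.0 if cwnd <= LOW_WINDOW else 8.0   # placeholder "aggressive" increase

def b(cwnd: float) -> float:
    return 0.5 if cwnd <= LOW_WINDOW else 0.2   # placeholder "gentle" decrease

def on_ack(cwnd: float) -> float:
    # congestion avoidance: cwnd = cwnd + a(cwnd)/cwnd (per ACK)
    return cwnd + a(cwnd) / cwnd

def on_loss(cwnd: float) -> float:
    # after loss: cwnd = (1 - b(cwnd)) * cwnd
    return (1.0 - b(cwnd)) * cwnd

print(on_ack(1000.0), on_loss(1000.0))
```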
4. Westwood
Extensive research has found TCP Westwood to be a comparatively ideal algorithm for wireless networks. Its main idea is to estimate the available bandwidth by continuously measuring the ACK arrival rate at the sender, and to use this bandwidth estimate to set the congestion window and slow-start threshold when congestion occurs, following an AIAD (additive increase, adaptive decrease) congestion control scheme. It not only improves throughput over wireless networks but also shows good fairness and compatibility with existing networks. Its remaining problem is that it cannot reliably distinguish congestion loss from wireless (random) loss during transmission, so the congestion mechanism is invoked too often.
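The sketch below illustrates the Westwood idea just described: estimate bandwidth from the ACK arrival rate, then size ssthresh from that estimate on a loss event. The low-pass filter constant and the sample handling are simplified, and the numbers in the example are made up.

```python
class WestwoodSketch:
    def __init__(self, mss: int = 1460):
        self.mss = mss
        self.bwe = 0.0          # filtered bandwidth estimate, bytes/sec
        self.rtt_min = None     # smallest RTT observed, seconds

    def on_ack(self, acked_bytes: int, interval: float, rtt: float) -> None:
        sample = acked_bytes / interval           # instantaneous bandwidth sample
        self.bwe = 0.9 * self.bwe + 0.1 * sample  # simplified low-pass filter
        self.rtt_min = rtt if self.rtt_min is None else min(self.rtt_min, rtt)

    def ssthresh_on_loss(self) -> float:
        # on a congestion indication: ssthresh = BWE * RTTmin / MSS (in segments)
        return (self.bwe * self.rtt_min) / self.mss

w = WestwoodSketch()
w.on_ack(14600, 0.01, 0.05)    # 10 segments ACKed in 10 ms, RTT 50 ms
print(round(w.ssthresh_on_loss(), 2))
```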
5. H-TCP
H-TCP performs well in high-performance networks, but it has problems such as RTT unfairness and poor bandwidth friendliness.
6. BIC-TCP
BIC-TCP has two main disadvantages. First, it is too aggressive: on links with a small bandwidth-delay product, BIC-TCP's growth function preempts bandwidth more strongly than standard TCP. Its probing phase is effectively a restarted slow-start-like search, whereas standard TCP, once stable, only grows its window linearly and never runs slow start again. Second, BIC-TCP's window control is split into several phases (binary-search increase, max probing, and the Smax and Smin regions), and these extra parameters make the algorithm harder to implement and its performance-analysis model more complex. In low-RTT, low-speed environments BIC can be too "active", so it was further improved into CUBIC; BIC was the default algorithm in Linux before CUBIC was adopted.
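As a rough illustration of the binary-search increase mentioned above, the sketch below halves the distance to the last known maximum window each RTT and clamps each step between Smin and Smax; max probing beyond Wmax and the real parameter values are omitted.

```python
SMAX, SMIN = 32.0, 0.01   # illustrative clamp values, not BIC's real constants

def bic_step(cwnd: float, w_max: float, w_min: float) -> float:
    """One RTT of binary-search increase toward the midpoint of [w_min, w_max]."""
    target = (w_max + w_min) / 2.0
    step = max(min(target - cwnd, SMAX), SMIN)  # clamp the increment to [Smin, Smax]
    return cwnd + step

cwnd, w_max, w_min = 100.0, 400.0, 100.0
for _ in range(5):
    cwnd = bic_step(cwnd, w_max, w_min)
    w_min = cwnd                      # no loss this RTT: the current window is safe
    print(round(cwnd, 1))
```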
7. CUBIC
CUBIC simplifies BIC-TCP's window-adjustment algorithm. BIC-TCP's window adjustment produces a growth curve with concave and convex parts (concave and convex here in the mathematical sense of concave/convex functions), and CUBIC uses a cubic function whose curve also contains concave and convex portions and closely resembles BIC-TCP's curve, so the cubic function replaces BIC-TCP's growth curve. In addition, a key point of CUBIC is that its window growth function depends only on the time elapsed since the last congestion event, so window growth is completely independent of the network delay (RTT). As mentioned above, HSTCP suffers from serious RTT unfairness, whereas CUBIC's RTT independence lets it maintain good RTT fairness among TCP connections sharing a bottleneck link.
CUBIC is a TCP (Transmission Control Protocol) congestion control protocol and the default TCP algorithm in Linux. It replaces the linear window growth function of the existing TCP standard with a cubic function in order to improve TCP's scalability over fast, long-distance networks. It also achieves a fairer bandwidth allocation among flows with different RTTs (round-trip times), because its window growth is independent of RTT, so those flows grow their congestion windows at the same rate. In steady state, CUBIC increases the window aggressively while it is far from the saturation point and slowly as it approaches the saturation point. This makes CUBIC very scalable when the bandwidth-delay product of the network is large, while remaining highly stable and fair to standard TCP flows.
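The sketch below shows the cubic window growth function described above, W(t) = C(t - K)^3 + Wmax, where t is the time since the last congestion event, using the commonly cited constants C = 0.4 and beta = 0.7 (as in RFC 8312); real implementations add TCP-friendliness and fast-convergence logic that is omitted here.

```python
C = 0.4      # scaling constant
BETA = 0.7   # multiplicative decrease factor: cwnd drops to BETA * W_max on loss

def cubic_window(t: float, w_max: float) -> float:
    k = ((w_max * (1.0 - BETA)) / C) ** (1.0 / 3.0)   # time needed to climb back to W_max
    return C * (t - k) ** 3 + w_max

# Window (in segments) at various times after a loss with W_max = 100:
for t in (0.0, 2.0, 4.0, 6.0):
    print(t, round(cubic_window(t, 100.0), 1))
```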
8. STCP
Scalable TCP.
The STCP algorithm was proposed by Tom Kelly in 2003. It adapts to high-speed network environments by modifying TCP's window increase and decrease parameters to adjust the sending window size. The algorithm achieves high link utilization and good stability, but its window growth rate is inversely proportional to the RTT, so it suffers from a degree of RTT unfairness; and when it coexists with traditional TCP flows it takes too much bandwidth, so its TCP friendliness is poor.
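The sketch below shows the fixed per-ACK increase and fixed fractional decrease usually quoted for Kelly's Scalable TCP proposal (0.01 and 0.125); treat the constants as illustrative rather than authoritative.

```python
A = 0.01    # per-ACK additive increment (segments)
B = 0.125   # fractional decrease on a loss event

def on_ack(cwnd: float) -> float:
    return cwnd + A                 # overall, cwnd grows by ~1% of cwnd per RTT

def on_loss(cwnd: float) -> float:
    return cwnd * (1.0 - B)         # cut the window by one eighth

cwnd = 1000.0
cwnd = on_loss(cwnd)
for _ in range(100):                # 100 ACKs after the loss
    cwnd = on_ack(cwnd)
print(round(cwnd, 1))
```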