This article outlines the congestion control algorithms available on the appliance for acceleration and the differences between them.
Most of these algorithms improve some aspect of the TCP stack to suit a particular network environment (high-speed networks, satellite, wireless, etc.). Given the large number of choices, it is recommended to keep the default congestion control algorithm (CUBIC) unless there is a strong reason not to, for example on the advice of Exinda Support or Engineering, or because of performance issues on the link.
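These algorithms correspond to the pluggable congestion control modules of the Linux TCP stack. On a generic Linux system (the appliance's own CLI may expose this differently, so treat the following as a sketch rather than appliance syntax), the active algorithm can be inspected and changed via sysctl:

```shell
# List the congestion control algorithms currently available
# (generic Linux; the appliance's CLI may differ).
sysctl net.ipv4.tcp_available_congestion_control

# Show the algorithm applied to new connections.
sysctl net.ipv4.tcp_congestion_control

# Switch to CUBIC, the recommended default (requires root).
sysctl -w net.ipv4.tcp_congestion_control=cubic
```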
The following is a list of the different types of congestion control and a brief summary of each.
- High-Speed TCP: The algorithm is described in RFC 3649. The primary use is for connections with large bandwidth and large RTT (such as Gbit/s and 100 ms RTT).
- H-TCP: It was proposed by the Hamilton Institute for transmissions that recover more quickly after a congestion event. It is also designed for links with high bandwidth and RTT.
- Scalable TCP: This is another algorithm for WAN links with high bandwidth and RTT. One of its design goals is quick recovery of the window size after a congestion event; it achieves this by reducing the window by a smaller fraction than standard TCP does, so transmission resumes from a higher window value.
- TCP BIC: BIC is the abbreviation for Binary Increase Congestion control. BIC uses a unique window growth function: on packet loss, the window is reduced by a multiplicative factor, and the window sizes just before and after the reduction are then used as the parameters of a binary search for the new window size. BIC was the default congestion control algorithm in the Linux kernel before being replaced by CUBIC.
- TCP CUBIC: CUBIC is a less aggressive variant of BIC, meaning it does not take as much throughput away from competing TCP flows as BIC does.
- TCP Hybla: TCP Hybla was proposed to transmit data efficiently over satellite links (long-RTT paths) and to "defend" the transmission against competing TCP flows from other origins.
- TCP Low Priority: This algorithm is designed to use only the excess bandwidth left over by other TCP flows, so it can carry low-priority data transfers without disturbing other TCP transmissions.
- TCP Tahoe/Reno: These are the classical congestion control models. They exhibit the typical slow start of transmissions: throughput increases gradually until it stabilises, drops as soon as the transfer encounters congestion, then rises slowly again. The window is increased by adding fixed values (additive increase). TCP Reno reduces the window size by a multiplicative decrease on packet loss, and is the most widely deployed algorithm.
- TCP Vegas: It introduces the measurement of RTT for evaluating the link quality. It uses additive increases and additive decreases for the congestion window.
- TCP Veno: This variant is optimised for wireless networks, since it was designed to handle random packet loss better. It monitors the state of the transfer and estimates whether throughput degradation is caused by congestion or by random packet errors.
- TCP Westwood+: It addresses both large bandwidth/RTT values and random packet loss together with dynamically changing network loads. It analyses the state of the transfer by looking at the acknowledgement packets. Westwood+ is a modification of the TCP Reno algorithm.
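The contrast between Reno's additive-increase/multiplicative-decrease behaviour and CUBIC's cubic growth curve can be sketched in a few lines of Python. This is illustrative only, not the appliance's implementation; it uses the constants C = 0.4 and β = 0.7 from the CUBIC specification (RFC 8312):

```python
def reno_window(w, acked=False, lost=False):
    """Reno sketch: additive increase (+1 MSS per RTT),
    multiplicative decrease (halve the window on loss)."""
    if lost:
        return w / 2   # multiplicative decrease
    if acked:
        return w + 1   # additive increase, one MSS per RTT
    return w


def cubic_window(t, w_max, c=0.4, beta=0.7):
    """CUBIC window as a function of time t (seconds) since the last
    congestion event, per the W(t) = C*(t - K)^3 + W_max curve.

    The curve starts at beta * w_max, plateaus near w_max (the window
    at the last loss), then probes beyond it.
    """
    k = ((w_max * (1 - beta)) / c) ** (1 / 3)  # time to return to w_max
    return c * (t - k) ** 3 + w_max


# Reno halves on loss, then climbs back one segment per RTT;
# CUBIC jumps back to beta * w_max and follows the cubic curve.
w = reno_window(100.0, lost=True)      # 50.0
w = reno_window(w, acked=True)         # 51.0
start = cubic_window(0.0, 100.0)       # ~70.0, i.e. beta * w_max
```

The flat region of the cubic curve around W_max is what makes CUBIC less aggressive than BIC: the window grows slowly while near the level where loss last occurred, and faster only well below or above it.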