Transmission Control Protocol (TCP)
See the Wikipedia article on TCP for general background.
Congestion control
Congestion control is used to adapt the sender's behaviour to the current network load.
See also: Flow Control
Optimization
There are several parameters that can be tuned to greatly improve throughput in certain networks, such as high-bandwidth/high-latency paths or links with a high out-of-order packet count. The key concepts are the bandwidth-delay product (BDP) and the TCP window size.
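As a rough illustration (link speed and RTT here are assumed values, not measurements): to keep a 1 Gbit/s path with 100 ms round-trip time fully utilised, the in-flight window must cover the BDP:
BDP = bandwidth × RTT = 1 Gbit/s × 0.1 s = 100 Mbit ≈ 12.5 MB
A socket buffer much smaller than that caps throughput no matter how fast the link is.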
Most parameters can be set at runtime through /proc (or sysctl) and persistently by editing /etc/sysctl.conf; per-interface settings such as the MTU and txqueuelen are changed with ifconfig.
The Linux kernel supports autotuning of the TCP window size. For autotuning to be active, the following variable must be set to 1; the second (default) value of tcp_rmem/tcp_wmem is then adjusted automatically by the kernel:
#cat /proc/sys/net/ipv4/tcp_moderate_rcvbuf
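If it is disabled, receive-buffer autotuning can be switched back on at runtime (assuming root privileges):
#echo 1 > /proc/sys/net/ipv4/tcp_moderate_rcvbuf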
Parameters
List overview (see the ifconfig example after the list):
- MTU
- txqueuelen / netdev_max_backlog (tcp_backlog)
- tcp_mem, tcp_rmem, tcp_wmem
- vm/min_free_kbytes
- rmem_max / wmem_max
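The interface-level parameters from the list can be set with ifconfig. A minimal sketch, assuming an interface named eth0 and NICs/switches that support jumbo frames (the values are illustrative):
#ifconfig eth0 mtu 9000
#ifconfig eth0 txqueuelen 10000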
Additional settings can be made via ethtool.
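For example, the NIC ring buffer sizes can be inspected and enlarged (again assuming eth0; the supported maximum is hardware-dependent):
#ethtool -g eth0
#ethtool -G eth0 rx 4096 tx 4096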
description
tcp_mem
/net/ipv4/tcp_mem defines how the kernel manages the memory allocated to the TCP stack. Three values have to be set, measured in memory pages (4 KiB on our hardware):
- Low value: as long as actual usage stays below this point, the kernel does not interfere.
- Pressure value: once usage climbs above this value, the kernel starts shrinking TCP buffer sizes until usage drops back below the low value.
- Max value: no further growth is allowed beyond this point, and TCP streams get dropped.
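As a worked example (the numbers are illustrative, not a recommendation): with 4 KiB pages, a setting of 196608 262144 393216 corresponds to thresholds of 768 MiB, 1 GiB and 1.5 GiB. The current setting can be checked with:
#cat /proc/sys/net/ipv4/tcp_mem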
tcp_rmem / tcp_wmem
/net/ipv4/tcp_rmem and /net/ipv4/tcp_wmem control the sizes of the receive and send buffers of a TCP socket, measured in bytes. Three values have to be set for each. Tune tcp_mem accordingly, so that the kernel's memory-pressure policy does not become too aggressive.
- Minimum buffer size, guaranteed even under heavy load.
- Default size. Overrides the /proc/sys/net/core/rmem_default value. This value is used to calculate the TCP window size; in general it is autotuned by the kernel.
- Maximum size that can be allocated to one socket.
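The current values can be inspected the same way (the output shown here is a typical default on recent kernels; your values may differ):
#cat /proc/sys/net/ipv4/tcp_rmem
4096	87380	6291456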
configuration
Modify parameters at runtime:
#echo "4096 16384 4194304" > /proc/sys/net/ipv4/tcp_wmem
or add the following line to
/etc/sysctl.conf for boot-time configuration:
net/ipv4/tcp_wmem = 4096 16384 4194304
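Settings from /etc/sysctl.conf can also be applied immediately, without a reboot:
#sysctl -p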
VLANs+Channelbonding
Refers to this special network layout. As NFS uses TCP, the nodes have to be optimized for maximum TCP throughput. Because of the round-robin bonding, the number of TCP out-of-order deliveries is higher than usual; since packet loss, on the other hand, happens rarely in this setup, the TCP window size should be increased to prevent unnecessary retransmissions, which would slow down the connection.
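A minimal sketch of such a tuning (the values are illustrative assumptions, not tested recommendations): raise the reordering tolerance so reordered segments are not mistaken for loss, and raise the global socket-buffer maxima so larger windows become possible:
#echo 32 > /proc/sys/net/ipv4/tcp_reordering
#echo 16777216 > /proc/sys/net/core/rmem_max
#echo 16777216 > /proc/sys/net/core/wmem_max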