
http://www.securitybydefault.com/2013/11/1010-herramientas-de-monitorizacion-de.html

UDP

TODO

TCP

A simple TCP connection is strongly affected by the following parameters:

  • Round-Trip Time (RTT): the time it takes a packet to go and come back (the value returned by the ping command)
  • Maximum Segment Size (MSS): the maximum amount of data per packet.
  • TCP Window size: the size of the TCP window at both ends of the connection (64K by default - TODO: can Linux autotuning improve this value?)

Other important concepts (a quick way to check several of these on Linux is shown after this list):

  • Maximum TCP Buffer (Memory) space: All operating systems have some global mechanism to limit the amount of system memory that can be used by any one TCP connection.
  • Socket Buffer Sizes: Most operating systems also support separate per connection send and receive buffer limits that can be adjusted by the user, application or other mechanism as long as they stay within the maximum memory limits above. These buffer sizes correspond to the SO_SNDBUF and SO_RCVBUF options of the BSD setsockopt() call
  • TCP Large Window Extensions (RFC1323): These enable optional TCP protocol features (window scale and time stamps) which are required to support large BDP paths.
  • TCP Selective Acknowledgments Option (SACK, RFC2018): allows a TCP receiver to inform the sender exactly which data is missing and needs to be retransmitted.
  • Path MTU: The host system must use the largest possible MTU for the path. This may require enabling Path MTU Discovery (RFC1191, RFC1981, RFC4821).
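
Several of these options map directly to Linux kernel parameters. As a minimal check (using the same sysctl keys that appear in the full kernel dump further down this page), you can verify whether window scaling, timestamps, SACK and Path MTU probing are enabled:

$ sysctl net.ipv4.tcp_window_scaling net.ipv4.tcp_timestamps net.ipv4.tcp_sack net.ipv4.tcp_mtu_probing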

Throughput calculation formula

http://bradhedlund.com/2008/12/19/how-to-calculate-tcp-throughput-for-long-distance-links/
CALCULATOR: http://www.switch.ch/network/tools/tcp_throughput/index.html

Formula to calculate the TCP throughput limit:

TCP-Window-Size-in-bits / Latency-in-seconds = Bits-per-second-throughput

Typical window size on Windows: 64 KB = 65536 bytes = 524288 bits

Speed table by latency (maximum throughput with the standard 64 KB window = 524288 bits):

  RTT        Maximum throughput
  0.01 ms    52.4 Gbps
  0.05 ms    10.5 Gbps
  0.1 ms     5.24 Gbps
  1 ms       524 Mbps
  5 ms       105 Mbps
  10 ms      52.4 Mbps
  50 ms      10.5 Mbps
  100 ms     5.24 Mbps
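
Each row follows directly from the formula above; as a quick sketch, any single value can be reproduced from the shell with bc (here the 10 ms row):

$ echo "524288 / 0.010" | bc -l

which yields 52428800 bits per second, i.e. about 52.4 Mbps.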

TODO

So let's work through a simple example. I have a 1 Gig Ethernet link from Chicago to New York with a round trip latency of 30 milliseconds. If I try to transfer a large file from a server in Chicago to a server in New York using FTP, what is the best throughput I can expect?

First, let's convert the TCP window size from bytes to bits. In this case we are using the standard 64 KB TCP window size of a Windows machine.

64KB = 65536 Bytes. 65536 * 8 = 524288 bits

Next, let's take the TCP window in bits and divide it by the round trip latency of our link in seconds. So if our latency is 30 milliseconds we will use 0.030 in our calculation.

524288 bits / 0.030 seconds = 17476266 bits per second throughput = 17.4 Mbps maximum possible throughput

So, although I may have a 1GE link between these Data Centers I should not expect any more than 17Mbps when transferring a file between two servers, given the TCP window size and latency.

What can you do to make it faster? Increase the TCP window size and/or reduce latency.

To increase the TCP window size you can make manual adjustments on each individual server to negotiate a larger window size. This leads to the obvious question: What size TCP window should you use? We can use the reverse of the calculation above to determine optimal TCP window size.

Formula to calculate the optimal TCP window size:

Bandwidth-in-bits-per-second * Round-trip-latency-in-seconds = TCP-window-size-in-bits
TCP-window-size-in-bits / 8 = TCP-window-size-in-bytes

So in our example of a 1GE link between Chicago and New York with 30 milliseconds round trip latency we would work the numbers like this…

1,000,000,000 bps * 0.030 seconds = 30,000,000 bits
30,000,000 bits / 8 = 3,750,000 bytes

Therefore if we configured our servers for a 3750KB TCP Window size our FTP connection would be able to fill the pipe and achieve 1Gbps throughput.
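
On Linux, one hedged way to allow a window of roughly that size is to raise the per-connection buffer maxima with the same sysctl keys used in the tuning section below (a sketch only; the first two numbers are the minimum and default sizes from the kernel dump further down, and only the third, maximum value is raised to the ~3.75 MB computed above; RFC 1323 window scaling must also be enabled):

$ sudo sysctl -w net.ipv4.tcp_rmem="4096 87380 3750000"
$ sudo sysctl -w net.ipv4.tcp_wmem="4096 16384 3750000"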

One downside to increasing the TCP window size on your servers is that it requires more memory for buffering on the server, because all outstanding unacknowledged data must be held in memory in case it needs to be retransmitted. Another potential pitfall is (ironically) performance where there is packet loss, because any lost packet within a window requires that the entire window be retransmitted, unless the TCP/IP stack on the server employs a TCP enhancement called "selective acknowledgements", which most do not.

Another option is to place a WAN accelerator at each end that uses a larger TCP window and other TCP optimizations such as TCP selective acknowledgements just between the accelerators on each end of the link, and does not require any special tuning or extra memory on the servers. The accelerators may also be able to employ Layer 7 application specific optimizations to reduce round trips required by the application.

Reduce latency? How is that possible? Unless you can figure out how to overcome the speed of light, there is nothing you can do to reduce the real latency between sites. One option is, again, placing a WAN accelerator at each end that locally acknowledges the TCP segments to the local server, thereby fooling the servers into seeing very low, LAN-like latency for the TCP data transfers. The fact that the local server sees very fast local acknowledgments, rather than waiting for the far-end server to acknowledge, is precisely why we do not need to adjust the TCP window size on the servers.

In this example the perfect WAN accelerator would be the Cisco 7371 WAAS Appliance, as it is rated for 1GE of optimized throughput.

WAAS stands for: Wide Area Application Services

The two WAAS appliances on each end would use TCP optimizations over the link such as large TCP windows and selective acknowledgements. Additionally, the WAAS appliances would also remove redundant data from the TCP stream resulting in potentially very high levels of compression. Each appliance remembers previously seen data, and if that same chunk of data is seen again, that data will be removed and replaced with a tiny 2 Byte label. That tiny label is recognized by the remote WAAS appliance and it replaces the tiny label with the original data before sending the traffic to the local server.

The result of all this optimization would be higher, LAN-like throughput between the servers in Chicago and New York without any special TCP tuning on the servers.

Formula to calculate maximum latency for a desired throughput

You might want to achieve 10 Gbps FTP throughput between two servers using standard 64KB TCP window sizes. What is the maximum latency you can have between these two servers to achieve 10 Gbps?

TCP-window-size-bits / Desired-throughput-in-bits-per-second = Maximum RTT Latency

524288 bits / 10,000,000,000 bits per second = 52.4 microseconds
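
The 52.4 microseconds come from a plain unit conversion: 524288 / 10,000,000,000 = 0.0000524288 seconds. As a one-line check with bc:

$ echo "524288 / 10000000000" | bc -l

which yields 0.0000524288 seconds, i.e. 52.4 microseconds.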

Resources

TCP Performance Tuning

aka TCP Tuning.

Resources:

Linux

IMPORTANT: Recent versions of the Linux kernel (2.6.17 or later) have an autotuning system that can grow the buffer up to a maximum size of 4 MB. Except in very unusual cases, manual tuning is not recommended, since it will rarely improve the system. http://www.psc.edu/networking/projects/tcptune/#Linux http://kb.pert.geant.net/PERTKB/TCPBufferAutoTuning
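
A quick way to see whether autotuning is active on a particular machine (a sketch; both keys appear in the kernel dump below, where tcp_moderate_rcvbuf = 1 means receive-buffer autotuning is on and the third tcp_rmem value is the 4 MB ceiling):

$ sysctl net.ipv4.tcp_moderate_rcvbuf net.ipv4.tcp_rmem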

RFC 1323

Here we enable RFC 1323 Window Scaling and increase the TCP window size to 1 MB. To do this, we'll add the following lines to /etc/sysctl.conf and issue sudo sysctl -p to apply the changes immediately.

net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 1048576
net.core.wmem_default = 1048576
net.ipv4.tcp_rmem = 4096 1048576 16777216
net.ipv4.tcp_wmem = 4096 1048576 16777216
net.ipv4.tcp_congestion_control = bic
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1

As before, we're setting a large maximum buffer size (16 MB) and a 1 MB default window size.

RFC 1323 is enabled via

net.ipv4.tcp_window_scaling
net.ipv4.tcp_timestamps

These options are probably on by default, but it never hurts to force them via /etc/sysctl.conf. Finally, we are choosing BIC as our TCP Congestion Control Algorithm. That was the default on kernels after 2.6.12, although kernels from 2.6.19 onwards default to CUBIC instead, as the output below shows.
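
To see which congestion control algorithms the running kernel offers and to switch between them at run time, the corresponding keys (visible in the dump below) can be queried and set directly:

$ sysctl net.ipv4.tcp_available_congestion_control
$ sudo sysctl -w net.ipv4.tcp_congestion_control=cubic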

Checking the TCP-related kernel values

$ sudo sysctl -a | fgrep tcp
error: permission denied on key 'net.ipv4.route.flush'
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_sack = 1
net.ipv4.tcp_retrans_collapse = 1
net.ipv4.tcp_syn_retries = 5
net.ipv4.tcp_synack_retries = 5
net.ipv4.tcp_max_orphans = 65536
net.ipv4.tcp_max_tw_buckets = 180000
net.ipv4.tcp_keepalive_time = 7200
net.ipv4.tcp_keepalive_probes = 9
net.ipv4.tcp_keepalive_intvl = 75
net.ipv4.tcp_retries1 = 3
net.ipv4.tcp_retries2 = 15
net.ipv4.tcp_fin_timeout = 60
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_abort_on_overflow = 0
net.ipv4.tcp_stdurg = 0
net.ipv4.tcp_rfc1337 = 0
net.ipv4.tcp_max_syn_backlog = 1024
net.ipv4.tcp_orphan_retries = 0
net.ipv4.tcp_fack = 1
net.ipv4.tcp_reordering = 3
net.ipv4.tcp_ecn = 0
net.ipv4.tcp_dsack = 1
net.ipv4.tcp_mem = 237408	316544	474816
net.ipv4.tcp_wmem = 4096	16384	4194304
net.ipv4.tcp_rmem = 4096	87380	4194304
net.ipv4.tcp_app_win = 31
net.ipv4.tcp_adv_win_scale = 2
net.ipv4.tcp_tw_reuse = 0
net.ipv4.tcp_frto = 0
net.ipv4.tcp_low_latency = 0
net.ipv4.tcp_no_metrics_save = 0
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_tso_win_divisor = 3
net.ipv4.tcp_congestion_control = cubic
net.ipv4.tcp_abc = 0
net.ipv4.tcp_mtu_probing = 0
net.ipv4.tcp_base_mss = 512
net.ipv4.tcp_workaround_signed_windows = 0
net.ipv4.tcp_slow_start_after_idle = 1
net.ipv4.tcp_available_congestion_control = cubic reno
net.ipv4.tcp_allowed_congestion_control = cubic reno
net.ipv4.netfilter.ip_conntrack_tcp_timeout_syn_sent = 120
net.ipv4.netfilter.ip_conntrack_tcp_timeout_syn_recv = 60
net.ipv4.netfilter.ip_conntrack_tcp_timeout_established = 432000
net.ipv4.netfilter.ip_conntrack_tcp_timeout_fin_wait = 120
net.ipv4.netfilter.ip_conntrack_tcp_timeout_close_wait = 60
net.ipv4.netfilter.ip_conntrack_tcp_timeout_last_ack = 30
net.ipv4.netfilter.ip_conntrack_tcp_timeout_time_wait = 120
net.ipv4.netfilter.ip_conntrack_tcp_timeout_close = 10
net.ipv4.netfilter.ip_conntrack_tcp_timeout_max_retrans = 300
net.ipv4.netfilter.ip_conntrack_tcp_loose = 1
net.ipv4.netfilter.ip_conntrack_tcp_be_liberal = 0
net.ipv4.netfilter.ip_conntrack_tcp_max_retrans = 3
net.netfilter.nf_conntrack_tcp_timeout_syn_sent = 120
net.netfilter.nf_conntrack_tcp_timeout_syn_recv = 60
net.netfilter.nf_conntrack_tcp_timeout_established = 432000
net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_close_wait = 60
net.netfilter.nf_conntrack_tcp_timeout_last_ack = 30
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_close = 10
net.netfilter.nf_conntrack_tcp_timeout_max_retrans = 300
net.netfilter.nf_conntrack_tcp_loose = 1
net.netfilter.nf_conntrack_tcp_be_liberal = 0
net.netfilter.nf_conntrack_tcp_max_retrans = 3
fs.nfs.nlm_tcpport = 0
sunrpc.tcp_slot_table_entries = 16

Windows

http://www.speedguide.net/faq_in_q.php?qid=247

Traffic generators

Generating traffic on a port without logical configuration

If the port is an Ethernet port, we cannot send pings (ICMP packets) nor TCP or UDP data without first knowing the peer's MAC address. To get around this, a static ARP entry can be added (see ARP) so that traffic can be sent to a made-up IP address. The traffic will get no reply, but it will leave the network interface.

First of all, we need to define a network range for testing. For example:

192.168.55.0/24

Then assign an IP address from this network to the output interface, eth0 in our example:

$ sudo ifconfig eth0 192.168.55.9/24

To force a static ARP entry on a network interface (the interface where the traffic will be injected):

$ sudo arp -s 192.168.55.10 00:14:1c:32:af:1a

Now we can send traffic (there will be no reply, but the traffic will be sent) to the IP 192.168.55.10. For example, with iperf:

$ iperf -c 192.168.55.10 -u -b 10m 

Here the -b option generates 10 Mbit/s of traffic.

Check with the tcpdump command that traffic is being generated:

$ sudo tcpdump -ni eth0

You can also use:

$ sudo watch ifconfig eth0

Or iptraf, or a flood ping:

$ sudo ping -i 0 192.168.55.10

RouterOS traffic generator

See routerOS

iperf

See iperf

Download accelerators

When using a single TCP connection there is a limit to the amount of data that can be downloaded, and this limit usually prevents the link speed from being fully exploited.

This is why download accelerators were invented, and it is the same reason why P2P tools use multiple connections.

Examples for Linux:
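
Two common command-line examples (a sketch only; the URL is a placeholder, and both tools open several parallel TCP connections to the same server):

$ aria2c -x 8 http://example.com/file.iso
$ axel -n 8 http://example.com/file.iso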

Resources:

See also

External links