
The Linux bonding driver provides a method for aggregating multiple network interfaces into a single logical interface.

The behavior of a bonding interface depends on the mode chosen, but there are two main functionalities or services: load balancing and fault tolerance (hot standby).

In addition, link integrity monitoring can be performed.
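
As a rough sketch (the values and the ARP target address here are only examples), the monitoring style is usually chosen through bonding module options, either MII-based or ARP-based, using the /etc/modprobe.d options syntax shown later in this article:

# MII link monitoring: check link state every 100 ms
options bonding miimon=100
# ARP link monitoring: probe a known peer every 200 ms (10.10.1.1 is a hypothetical target)
options bonding arp_interval=200 arp_ip_target=10.10.1.1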

The bonding driver originally came from Donald Becker's beowulf patches for the 2.0 kernel, but it has changed considerably since then, and the old tools no longer work with current versions.

Also known as: network interface aggregation (channel bonding).

Introduction

Bonding modes

TODO

balance-rr: This mode is the only mode that will permit a single TCP/IP connection to stripe traffic across multiple interfaces. It is therefore the only mode that will allow a single TCP/IP stream to utilize more than one interface's worth of throughput. This comes at a cost, however: the striping generally results in peer systems receiving packets out of order, causing TCP/IP's congestion control system to kick in, often by retransmitting segments.

It is possible to adjust TCP/IP's congestion limits by altering the net.ipv4.tcp_reordering sysctl parameter. The usual default value is 3, and the maximum useful value is 127. For a four interface balance-rr bond, expect that a single TCP/IP stream will utilize no more than approximately 2.3 interface's worth of throughput, even after adjusting tcp_reordering.
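
For instance, a minimal sketch of adjusting the sysctl mentioned above (127 being the maximum useful value cited there):

$ sudo sysctl -w net.ipv4.tcp_reordering=127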

Note that the fraction of packets that will be delivered out of order is highly variable, and is unlikely to be zero. The level of reordering depends upon a variety of factors, including the networking interfaces, the switch, and the topology of the configuration. Speaking in general terms, higher speed network cards produce more reordering (due to factors such as packet coalescing), and a "many to many" topology will reorder at a higher rate than a "many slow to one fast" configuration.

Many switches do not support any modes that stripe traffic (instead choosing a port based upon IP or MAC level addresses); for those devices, traffic for a particular connection flowing through the switch to a balance-rr bond will not utilize greater than one interface's worth of bandwidth.

If you are utilizing protocols other than TCP/IP, UDP for example, and your application can tolerate out of order delivery, then this mode can allow for single stream datagram performance that scales near linearly as interfaces are added to the bond.

This mode requires the switch to have the appropriate ports configured for "etherchannel" or "trunking."

active-backup: There is not much advantage in this network topology to the active-backup mode, as the inactive backup devices are all connected to the same peer as the primary. In this case, a load balancing mode (with link monitoring) will provide the same level of network availability, but with increased available bandwidth. On the plus side, active-backup mode does not require any configuration of the switch, so it may have value if the hardware available does not support any of the load balance modes.

balance-xor: This mode will limit traffic such that packets destined for specific peers will always be sent over the same interface. Since the destination is determined by the MAC addresses involved, this mode works best in a "local" network configuration (as described above), with destinations all on the same local network. This mode is likely to be suboptimal if all your traffic is passed through a single router (i.e., a "gatewayed" network configuration, as described above).

As with balance-rr, the switch ports need to be configured for "etherchannel" or "trunking."

broadcast: Like active-backup, there is not much advantage to this mode in this type of network topology.

802.3ad: This mode can be a good choice for this type of network topology. The 802.3ad mode is an IEEE standard, so all peers that implement 802.3ad should interoperate well. The 802.3ad protocol includes automatic configuration of the aggregates, so minimal manual configuration of the switch is needed (typically only to designate that some set of devices is available for 802.3ad). The 802.3ad standard also mandates that frames be delivered in order (within certain limits), so in general single connections will not see misordering of packets. The 802.3ad mode does have some drawbacks: the standard mandates that all devices in the aggregate operate at the same speed and duplex. Also, as with all bonding load balance modes other than balance-rr, no single connection will be able to utilize more than a single interface's worth of bandwidth.

Additionally, the Linux bonding 802.3ad implementation distributes traffic by peer (using an XOR of MAC addresses), so in a "gatewayed" configuration, all outgoing traffic will generally use the same device. Incoming traffic may also end up on a single device, but that is dependent upon the balancing policy of the peer's 802.3ad implementation. In a "local" configuration, traffic will be distributed across the devices in the bond.

Finally, the 802.3ad mode mandates the use of the MII monitor; therefore, the ARP monitor is not available in this mode.
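
A minimal sketch of selecting this mode through module options, in the same style as the configuration section below (the miimon value is only an example, and lacp_rate=fast is optional):

# LACP aggregation: requires 802.3ad-capable switch ports
options bonding mode=802.3ad miimon=100 lacp_rate=fast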

balance-tlb: The balance-tlb mode balances outgoing traffic by peer. Since the balancing is done according to MAC address, in a "gatewayed" configuration (as described above), this mode will send all traffic across a single device. However, in a "local" network configuration, this mode balances multiple local network peers across devices in a vaguely intelligent manner (not a simple XOR as in balance-xor or 802.3ad mode), so that mathematically unlucky MAC addresses (i.e., ones that XOR to the same value) will not all "bunch up" on a single interface.

Unlike 802.3ad, interfaces may be of differing speeds, and no special switch configuration is required. On the down side: in this mode all incoming traffic arrives over a single interface; this mode requires certain ethtool support in the network device driver of the slave interfaces; and the ARP monitor is not available.

balance-alb: This mode is everything that balance-tlb is, and more. It has all of the features (and restrictions) of balance-tlb, and will also balance incoming traffic from local network peers (as described in the Bonding Module Options section, above).

The only additional down side to this mode is that the network device driver must support changing the hardware address while the device is open.
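
A quick, non-authoritative way to check whether a slave's driver exposes the speed/duplex information that balance-tlb and balance-alb rely on:

$ ethtool eth0 | grep -E 'Speed|Duplex'
$ ethtool -i eth0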

Installation

The package that needs to be installed is:

$ sudo apt-get install ifenslave

Installed files

$ dpkg -L ifenslave-2.6 
/.
/sbin
/sbin/ifenslave-2.6
/usr
/usr/share
/usr/share/man
/usr/share/man/man8
/usr/share/man/man8/ifenslave-2.6.8.gz
/usr/share/doc
/usr/share/doc/ifenslave-2.6
/usr/share/doc/ifenslave-2.6/copyright
/usr/share/doc/ifenslave-2.6/examples
/usr/share/doc/ifenslave-2.6/examples/ethernet+hotplug_wifi
/usr/share/doc/ifenslave-2.6/examples/two_hotplug_ethernet
/usr/share/doc/ifenslave-2.6/README.Debian.gz
/usr/share/doc/ifenslave-2.6/changelog.Debian.gz
/etc
/etc/network
/etc/network/if-up.d
/etc/network/if-up.d/ifenslave
/etc/network/if-pre-up.d
/etc/network/if-pre-up.d/ifenslave
/etc/network/if-post-down.d
/etc/network/if-post-down.d/ifenslave

Executable

As you can see, it uses alternatives:

$ which ifenslave
/sbin/ifenslave
$ ls -la /sbin/ifenslave
lrwxrwxrwx 1 root root 27 mai  8 20:41 /sbin/ifenslave -> /etc/alternatives/ifenslave


To display/configure the alternatives:

$ sudo update-alternatives --config ifenslave 
There is only one alternative in link group ifenslave: /sbin/ifenslave-2.6
Nothing to configure.

Configuration

The module must be configured by editing the file /etc/modprobe.d/aliases.conf:

alias bond0 bonding
options bonding mode=0 miimon=100 downdelay=200 updelay=200
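
Once the module has been loaded with these options, a hedged way to double-check them (assuming the kernel exposes the bonding sysfs interface) is:

$ cat /sys/class/net/bond0/bonding/mode
$ cat /sys/class/net/bond0/bonding/miimon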

Then, to define the bonding interface, edit the file /etc/network/interfaces:

# The loopback network interface
auto lo
iface lo inet loopback

# The user-accessible network interface
auto bond0
iface bond0 inet static
        address 10.10.1.251
        netmask 255.255.255.0
        broadcast 10.10.1.255
        network 10.10.1.0
        gateway 10.10.1.1
        up /sbin/ifenslave bond0 eth0
        up /sbin/ifenslave bond0 eth1 

We do not need to define eth0 or eth1 in /etc/network/interfaces, as they will be brought up when the bond comes up. If, for documentation purposes, you wish to include them in /etc/network/interfaces, the following configuration should be used:

# Members of the bonded network interface
auto eth0
iface eth0 inet manual
auto eth1
iface eth1 inet manual
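
On recent Debian/Ubuntu versions of ifenslave, the same setup can also be expressed with bond-* options instead of the explicit up /sbin/ifenslave lines; a sketch, assuming those if-up hooks are present:

# Alternative bond0 definition using the bond-* options (illustrative)
auto bond0
iface bond0 inet static
        address 10.10.1.251
        netmask 255.255.255.0
        gateway 10.10.1.1
        bond-slaves eth0 eth1
        bond-mode balance-rr
        bond-miimon 100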

You can view the current status of the bonded interface with:

$ cat /proc/net/bonding/bond0

Please note: A bonded network interface supports multiple modes. In this example, eth0 and eth1 are in a round-robin configuration.

Bonding

To set up bonding, all we have to do is:

# modprobe bonding max_bonds=2 (the maximum supported to date, according to what I have read)
# /sbin/ifenslave bond0 eth0 eth1 (from now on, bond0 is the interface we will work with)
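
ifenslave can also detach a slave from a running bond; a hedged example using its -d (detach) option:

# /sbin/ifenslave -d bond0 eth1 (detaches eth1 from bond0; eth0 keeps carrying the traffic)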

Resources

Bonding status

/proc/net/bonding

In the system's proc directory, the Linux kernel exposes information about the state of bonding interfaces. Specifically, you will find it in the /proc/net/bonding directory, which shows the bonding status for each bonding interface present on the system.

For example, for the bond0 interface:

$ cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Speed: 100 Mbps
Duplex: half
Link Failure Count: 4
Permanent HW addr: d0:67:e5:f4:1d:96
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 100 Mbps
Duplex: half
Link Failure Count: 0
Permanent HW addr: d0:67:e5:f4:1d:97
Slave queue ID: 0

Examples

http://wiki.debian.org/NetworkConfiguration#Howto_use_vlan_.28dot1q.2C_802.1q.2C_trunk.29_.28Etch.2C_Lenny.29

Step-by-step example

TODO

  • Greater bandwidth: the bandwidth of the virtual interface is the sum of the bandwidths of the real interfaces.
  • Load balancing: network traffic is balanced across all the real interfaces (Round Robin by default).
  • Redundancy: if a network card fails, data will only flow through the cards that are still working.

In Linux, channel bonding is handled by the Linux Channel Bonding module. Let's see how to enable it on a machine with two network interfaces (eth0 and eth1):

Install ifenslave (interface enslave, package ifenslave):

$ sudo apt-get install ifenslave

Load the bonding module into the kernel (the miimon=100 parameter means that whether the real cards are working is checked every 100 ms):

$ sudo modprobe bonding miimon=100
   Edit /etc/network/interfaces and configure the virtual interface bond0 (the slaves directive defines the real network cards that make up bond0):
   auto bond0
   iface bond0 inet static
         address 192.168.0.100
         netmask 255.255.255.0
         broadcast 192.168.0.255
         gateway 192.168.0.1
         slaves eth0 eth1
   So that only bond0 is brought up at boot, remove the auto eth0 and auto eth1 lines.
   Bring up the virtual interface:
   # ifup bond0
   If the real interfaces are up, bring them down:
   # ifdown eth0
   # ifdown eth1
   Check that everything is correct (all three interfaces must have the same MAC address):
   # ifconfig
   bond0     Link encap:Ethernet  HWaddr 00:11:22:33:44:55
             inet addr:192.168.0.100  Bcast:192.168.0.255  Mask:255.255.255.0
             inet6 addr: fe80::211:22ff:fe33:4455/64 Scope:Link
             UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
             RX packets:1357 errors:0 dropped:0 overruns:0 frame:0
             TX packets:793 errors:14 dropped:0 overruns:0 carrier:14
             collisions:0 txqueuelen:0
             RX bytes:144979 (141.5 KiB)  TX bytes:148606 (145.1 KiB)
    
   eth0      Link encap:Ethernet  HWaddr 00:11:22:33:44:55
             inet6 addr: fe80::211:22ff:fe33:4455/64 Scope:Link
             UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
             RX packets:462 errors:0 dropped:0 overruns:0 frame:0
             TX packets:303 errors:14 dropped:0 overruns:0 carrier:14
             collisions:0 txqueuelen:1000
             RX bytes:51388 (50.1 KiB)  TX bytes:51796 (50.5 KiB)
             Interrupt:17 Base address:0x1800
    
   eth1      Link encap:Ethernet  HWaddr 00:11:22:33:44:55
             inet6 addr: fe80::211:22ff:fe33:4455/64 Scope:Link
             UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
             RX packets:895 errors:0 dropped:0 overruns:0 frame:0
             TX packets:490 errors:0 dropped:0 overruns:0 carrier:0
             collisions:0 txqueuelen:1000
             RX bytes:93591 (91.3 KiB)  TX bytes:96810 (94.5 KiB)
             Interrupt:16 Base address:0x8400
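
Besides ifconfig, the enslaved interfaces can also be listed through sysfs (assuming the bonding sysfs interface is available on your kernel):

$ cat /sys/class/net/bond0/bonding/slaves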


Bonding on Proxmox

See Proxmox#Crear_un_bonding

See also Proxmox.

Bonding + VLANs

See also

External links