
Setup and Configure Network Interface Bonding or Teaming on Linux

Network interface bonding (or teaming) combines multiple network interfaces or cards into a single logical interface. It goes by many other names, such as “channel teaming”, “NIC teaming”, “link aggregation”, “channel bonding”, “port trunking”, and “Ethernet bonding”. Bonding provides redundancy: if one interface fails or is unplugged, the second network card keeps the link up and the network alive. This concept as originally implemented in the Linux kernel is widely referred to as “bonding”; the term “Network Teaming” refers to a newer implementation of the same idea. The existing bonding driver is unaffected, and Network Teaming is provided as an alternative.
Linux lets us bond multiple network interface cards into a single virtual network card using a kernel module named bonding. In this article we describe how to set up and configure network interface bonding or teaming on Linux.
RHEL bonding supports seven “modes” for bonded interfaces. These modes determine how traffic sent out of the bonded interface is actually spread over the underlying physical interfaces. Modes 0, 1, and 2 are by far the most commonly used among them; a quick way to list the modes your kernel supports is shown after this list.
Mode 0 (balance-rr)
Mode 1 (active-backup)
Mode 2 (balance-xor)
Mode 3 (broadcast)
Mode 4 (802.3ad)
Mode 5 (balance-tlb)
Mode 6 (balance-alb)
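
To double-check which bonding modes the driver on your system supports, you can query the kernel module itself. This is a quick sanity check, assuming the standard bonding module shipped with your kernel:

# modinfo bonding | grep mode

On typical kernels, the “parm: mode” line in the output enumerates the seven modes listed above.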

In this article we use Mode 0 (balance-rr), which transmits packets sequentially in round-robin order across all slave interfaces, providing both load balancing and fault tolerance.

Configure bonding kernel module
We need to create a file named bond.conf in the /etc/modprobe.d/ directory (it is not created by default), add the content below, and then load the module.

# vi /etc/modprobe.d/bond.conf
alias bond0 bonding
# modprobe bonding
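
To confirm that the bonding module actually loaded, you can list it with lsmod; a quick verification step:

# lsmod | grep bonding

If the module is loaded, this prints a line beginning with “bonding”.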

Setup Ethernet Interface Channel Bonding
We have two interfaces, p2p1 and p2p2, and bond0 will be set up as the bonding interface. Root privileges are required to execute the commands below.
Load Balancing (Round-Robin)
Configure p2p1 as a slave interface: in its config file, set the MASTER directive to bond0 and mark the interface with SLAVE=yes.

# vi /etc/sysconfig/network-scripts/ifcfg-p2p1
DEVICE=p2p1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
MASTER=bond0
SLAVE=yes

Do the same for the p2p2 network interface.

# vi /etc/sysconfig/network-scripts/ifcfg-p2p2
DEVICE=p2p2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
MASTER=bond0
SLAVE=yes

Finally, create the bond0 virtual interface configuration: create a new interface file named ifcfg-bond0 in the /etc/sysconfig/network-scripts/ directory.

# vi /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
IPADDR=192.168.24.132
NETMASK=255.255.255.0
BONDING_OPTS="mode=0 miimon=100"

# service network restart
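
Here miimon=100 tells the bonding driver to check the link state of each slave every 100 ms. If you would rather have failover-only behavior instead of round-robin load balancing, a minimal variant of the same file using Mode 1 (active-backup) would differ only in the BONDING_OPTS line:

BONDING_OPTS="mode=1 miimon=100"

Everything else in ifcfg-bond0 stays the same.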

Initialize the bonding module and enable the bond interface
The simplest method is to reboot the server, which will load the bonding kernel module and enable the network bonding interface automatically on the next boot.
Alternatively, we can do this manually, without rebooting the server, using the commands below.

# modprobe bonding
# ifconfig bond0 up
# service network restart
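
Once the network service has restarted, you can confirm that bond0 came up with the expected address; a quick check using the same tooling as above:

# ifconfig bond0

The output should show bond0 as UP with the 192.168.24.132 address assigned.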

Confirm the bonding status to see what is going on.
To check the bonding details and status, run the command below.

# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: p2p1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0e:1e:89:a5:e0
Slave queue ID: 0

Slave Interface: p2p2
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0e:1e:89:a5:e2
Slave queue ID: 0

The output above indicates that the bond is running in load balancing (round-robin) mode and that both p2p1 and p2p2 slaves are up.
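
To verify the redundancy the bond provides, you can take one slave down and confirm the bond stays up; a simple test, best run with console access in case anything goes wrong:

# ifconfig p2p1 down
# cat /proc/net/bonding/bond0

The MII Status for p2p1 should change to down while bond0 keeps carrying traffic over p2p2. Bring the slave back with ifconfig p2p1 up.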
