Sunday, 22 September 2019

How to install and configure Open vSwitch with NIC bonding on CentOS 7



Open vSwitch is a multilayer software switch, free and open source, released under the Apache 2.0 license. In this tutorial we will set up Open vSwitch with NIC bonding. NIC bonding lets two or more network interfaces act as a single interface, providing higher aggregate throughput as well as link failover.
Step-1: Create Bond Configuration Files
Create the files below in the /etc/sysconfig/network-scripts/ directory, each with the parameters shown:

[root@linuxcloudy ~]# vi /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
NAME=bond0
TYPE=Bond
BONDING_MASTER=yes
IPADDR=192.168.43.105
NETMASK=255.255.255.0
GATEWAY=192.168.43.1
DNS1=192.168.43.1
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
NM_CONTROLLED=no
BONDING_OPTS="mode=1 miimon=100"

Here, replace the IP addresses and the interface names to match your own setup.
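
The BONDING_OPTS line selects the bonding mode and the link-monitoring interval: mode=1 is active-backup (only one slave carries traffic; the other takes over on link failure), and miimon=100 checks link state every 100 ms. If your switch supports 802.3ad LACP and you also want aggregated throughput, one alternative (switch-side LACP configuration required) would be:

BONDING_OPTS="mode=4 miimon=100 lacp_rate=fast"
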
Similarly, modify the configuration files for the ens33 and ens34 interfaces with the following parameters:

[root@linuxcloudy ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33
DEVICE=ens33
TYPE=Ethernet
USERCTL=no
ONBOOT=yes
BOOTPROTO=none
NM_CONTROLLED=no
MASTER=bond0
SLAVE=yes

[root@linuxcloudy ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens34
DEVICE=ens34
TYPE=Ethernet
USERCTL=no
ONBOOT=yes
BOOTPROTO=none
NM_CONTROLLED=no
MASTER=bond0
SLAVE=yes
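
If the NIC names on your system differ from ens33/ens34, list them first and adjust the file names and DEVICE= lines accordingly:

[root@linuxcloudy ~]# ip link show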

Step-2: Load the bonding driver/module
The bonding kernel module must be loaded before the bond0 interface can come up. Load it with:

[root@linuxcloudy ~]# modprobe --first-time bonding
[root@linuxcloudy ~]#
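
Optionally, to make sure the module is also loaded automatically at every boot, register it with systemd's modules-load mechanism:

[root@linuxcloudy ~]# echo bonding > /etc/modules-load.d/bonding.conf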

Step-3: Restart the network service
Now restart the network service to bring up the bond0 interface. Because the configuration files set NM_CONTROLLED=no, these interfaces are managed by the legacy network service rather than NetworkManager:

[root@linuxcloudy ~]# systemctl restart network
[root@linuxcloudy ~]#

Step-4: Verify the bond configuration
Use the commands below to check the bonding status reported by the Linux kernel bonding driver:

[root@linuxcloudy ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: ens33
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: ens33
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:1b:86:76
Slave queue ID: 0

Slave Interface: ens34
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 1
Permanent HW addr: 00:0c:29:1b:86:80
Slave queue ID: 0
[root@linuxcloudy ~]#
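
You can also test failover at this point by taking the currently active slave down; the bond should switch over to ens34 (interface names as per your setup):

[root@linuxcloudy ~]# ip link set ens33 down
[root@linuxcloudy ~]# grep "Currently Active Slave" /proc/net/bonding/bond0
[root@linuxcloudy ~]# ip link set ens33 up

The grep should now report ens34 as the active slave, and the Link Failure Count for ens33 should increase by one.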

[root@linuxcloudy ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens34: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
    link/ether 00:0c:29:1b:86:76 brd ff:ff:ff:ff:ff:ff
3: ens33: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
    link/ether 00:0c:29:1b:86:76 brd ff:ff:ff:ff:ff:ff
4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:0c:29:1b:86:76 brd ff:ff:ff:ff:ff:ff
    inet 192.168.43.105/24 brd 192.168.43.255 scope global bond0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe1b:8676/64 scope link
       valid_lft forever preferred_lft forever
[root@linuxcloudy ~]#

The bond configuration is done; now set up Open vSwitch:

Step 1. Install the Open vSwitch package:

[root@linuxcloudy ~]# yum install https://repos.fedorapeople.org/repos/openstack/EOL/openstack-juno/epel-7/openvswitch-2.3.1-2.el7.x86_64.rpm -y
Loaded plugins: fastestmirror
……………………………
Installed:
  openvswitch.x86_64 0:2.3.1-2.el7

Complete!
[root@linuxcloudy ~]#
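
Note: the OpenStack Juno repository used above is end-of-life, so the package is frozen at 2.3.1. On a maintained CentOS 7 system you may prefer a newer openvswitch build from the CentOS OpenStack SIG; for example (the release name here is only an assumption, use whichever is current):

[root@linuxcloudy ~]# yum install -y centos-release-openstack-queens
[root@linuxcloudy ~]# yum install -y openvswitch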

Step 2. Enable and start the openvswitch daemon:

[root@linuxcloudy ~]# systemctl enable openvswitch
Created symlink from /etc/systemd/system/multi-user.target.wants/openvswitch.service to /usr/lib/systemd/system/openvswitch.service.
[root@linuxcloudy ~]# systemctl start openvswitch
[root@linuxcloudy ~]#

Step 3. Check the OVS switch status:

[root@linuxcloudy ~]# ovs-vsctl show
8351a631-f1e7-43f5-a9ee-736f80778c43
    ovs_version: "2.3.1"
[root@linuxcloudy ~]#

Step 4. Create an Open vSwitch bridge device called ovs-br0 and verify:

[root@linuxcloudy ~]# ovs-vsctl add-br ovs-br0
[root@linuxcloudy ~]# ovs-vsctl show
8351a631-f1e7-43f5-a9ee-736f80778c43
    Bridge "ovs-br0"
        Port "ovs-br0"
            Interface "ovs-br0"
                type: internal
    ovs_version: "2.3.1"
[root@linuxcloudy ~]#
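
You can also query the bridge directly; at this stage it exists but has no ports apart from its internal one (which ovs-vsctl list-ports does not list):

[root@linuxcloudy ~]# ovs-vsctl list-br
ovs-br0
[root@linuxcloudy ~]# ovs-vsctl list-ports ovs-br0
[root@linuxcloudy ~]#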

Step 5. Modify the bond0 configuration file /etc/sysconfig/network-scripts/ifcfg-bond0 and create a bridge configuration file /etc/sysconfig/network-scripts/ifcfg-ovs-br0 with the following content. Copying the existing bond0 file first preserves its settings; note that the IP address moves from bond0 to the bridge:

[root@linuxcloudy ~]# cp -p /etc/sysconfig/network-scripts/ifcfg-bond0 /etc/sysconfig/network-scripts/ifcfg-ovs-br0
[root@linuxcloudy ~]# vi /etc/sysconfig/network-scripts/ifcfg-bond0
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=ovs-br0
DEVICE=bond0
ONBOOT=yes
NAME=bond0
BONDING_MASTER=yes
BOOTPROTO=none
USERCTL=no
NM_CONTROLLED=no
BONDING_OPTS="mode=1 miimon=100"

[root@linuxcloudy ~]# vi /etc/sysconfig/network-scripts/ifcfg-ovs-br0
DEVICE=ovs-br0
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.43.105
NETMASK=255.255.255.0
GATEWAY=192.168.43.1
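
As an aside, Open vSwitch can also manage the bond itself rather than sitting on top of the kernel bonding driver. A sketch of that variant, assuming the network-scripts integration shipped with the openvswitch package (you would then drop the kernel bond and remove the MASTER/SLAVE lines from the slave interface files):

[root@linuxcloudy ~]# vi /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
DEVICETYPE=ovs
TYPE=OVSBond
OVS_BRIDGE=ovs-br0
BOND_IFACES="ens33 ens34"
OVS_OPTIONS="bond_mode=active-backup"
ONBOOT=yes
BOOTPROTO=none

This tutorial sticks with the kernel bond, which keeps /proc/net/bonding/bond0 available for monitoring.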

Step 6. Restart the network service:

[root@linuxcloudy ~]# systemctl restart network
[root@linuxcloudy ~]#

Step 7. Finally, verify the configuration:

[root@linuxcloudy ~]# ovs-vsctl show
8351a631-f1e7-43f5-a9ee-736f80778c43
    Bridge "ovs-br0"
        Port "ovs-br0"
            Interface "ovs-br0"
                type: internal
        Port "bond0"
            Interface "bond0"
    ovs_version: "2.3.1"
[root@linuxcloudy ~]#
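
Listing just the ports of the bridge should now show bond0:

[root@linuxcloudy ~]# ovs-vsctl list-ports ovs-br0
bond0
[root@linuxcloudy ~]#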

[root@linuxcloudy ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens34: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
    link/ether 00:0c:29:1b:86:76 brd ff:ff:ff:ff:ff:ff
3: ens33: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
    link/ether 00:0c:29:1b:86:76 brd ff:ff:ff:ff:ff:ff
4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP group default qlen 1000
    link/ether 00:0c:29:1b:86:76 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::20c:29ff:fe1b:8676/64 scope link
       valid_lft forever preferred_lft forever
5: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether f2:6c:ff:1e:6e:a7 brd ff:ff:ff:ff:ff:ff
6: ovs-br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 00:0c:29:1b:86:76 brd ff:ff:ff:ff:ff:ff
    inet 192.168.43.105/24 brd 192.168.43.255 scope global ovs-br0
       valid_lft forever preferred_lft forever
    inet6 2405:204:3126:fccb:20c:29ff:fe1b:8676/64 scope global mngtmpaddr dynamic
       valid_lft 3388sec preferred_lft 3388sec
    inet6 fe80::20c:29ff:fe1b:8676/64 scope link
       valid_lft forever preferred_lft forever
[root@linuxcloudy ~]#

[root@linuxcloudy ~]# ping -c4 192.168.43.1
PING 192.168.43.1 (192.168.43.1) 56(84) bytes of data.
64 bytes from 192.168.43.1: icmp_seq=1 ttl=64 time=3.39 ms
64 bytes from 192.168.43.1: icmp_seq=2 ttl=64 time=2.66 ms
64 bytes from 192.168.43.1: icmp_seq=3 ttl=64 time=2.52 ms
64 bytes from 192.168.43.1: icmp_seq=4 ttl=64 time=2.71 ms

--- 192.168.43.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3007ms
rtt min/avg/max/mdev = 2.525/2.824/3.395/0.338 ms
[root@linuxcloudy ~]#
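
As a final check, you can repeat the failover test through the OVS bridge; connectivity should survive the loss of the active slave because the kernel bond still handles the failover (interface names as per your setup):

[root@linuxcloudy ~]# ip link set ens33 down
[root@linuxcloudy ~]# ping -c2 192.168.43.1
[root@linuxcloudy ~]# ip link set ens33 up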

It’s done! Open vSwitch with NIC bonding is configured.
