Configuring Open vSwitch

Imagine for a second that you’re working for a small company with three to four KVM hosts and a couple of network-attached storage devices hosting its 15 virtual machines, and that you’ve been employed by the company from the very start. So, you’ve seen it all – the company buying some servers, network switches, cables, and storage devices – and you were part of the small team that built that environment. After 2 years of that process, you know that everything works, it’s simple to maintain, and it doesn’t give you an awful lot of grief.

Now, imagine the life of a friend of yours working for a bigger enterprise company that has 400 KVM hosts and close to 2,000 virtual machines to manage, doing the same job that you do from the comfy chair of your office at your small company.

Do you think that your friend can manage his or her environment by using the very same tools that you’re using? XML files for network switch configuration, deploying servers from a bootable USB drive, manually configuring everything, and having the time to do so? Does that seem like a possibility to you?

There are two basic problems in this second situation:

  • The scale of the environment: This one is more obvious. Because of the environment size, you need some kind of concept that’s going to be managed centrally, instead of on a host-per-host level, such as the virtual switches we’ve discussed so far.
  • Company policies: These usually dictate some kind of compliance that comes from configuration standardization as much as possible. Now, we can agree that we could script some configuration updates via Ansible, Puppet, or something like that, but what’s the use? We’re going to have to create new config files, new procedures, and new workbooks every single time we need to introduce a change to KVM networking. And big companies frown upon that.

So, what we need is a centralized networking object that can span across multiple hosts and offer configuration consistency. In this context, configuration consistency offers us a huge advantage – every change that we introduce in this type of object will be replicated to all the hosts that are members of this centralized networking object. In other words, what we need is Open vSwitch (OVS). For those who are more versed in VMware-based networking, we can use an approximate metaphor – Open vSwitch is for KVM-based environments similar to what vSphere Distributed Switch is for VMware-based environments.

In terms of technology, OVS supports the following:

  • VLAN isolation (IEEE 802.1Q)
  • Traffic filtering
  • NIC bonding with or without LACP
  • Various overlay networks – VXLAN, GENEVE, GRE, STT, and so on
  • 802.1ag support
  • Netflow, sFlow, and so on
  • (R)SPAN
  • OpenFlow
  • OVSDB
  • Traffic queuing and shaping
  • Linux, FreeBSD, NetBSD, Windows, and Citrix support (and a host of others)

Now that we’ve listed some of the supported technologies, let’s discuss the way in which Open vSwitch works.

First, let’s talk about the Open vSwitch architecture. The implementation of Open vSwitch is broken down into two parts: the Open vSwitch kernel module (the data plane) and the user space tools (the control plane). Since incoming data packets must be processed as fast as possible, the data plane of Open vSwitch was pushed into kernel space:

Figure 4.11 – Open vSwitch architecture

The data path (OVS kernel module) uses the netlink socket to interact with the vswitchd daemon, which implements and manages any number of OVS switches on the local system.
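
Once Open vSwitch is installed (we will do this in the steps later in this section), you can see both halves of this architecture on a running host. Here is a minimal check; the exact service names can vary slightly between distributions, so treat this as a sketch rather than a definitive procedure:

# Data plane: the openvswitch kernel module should be loaded
lsmod | grep openvswitch
# Control plane: the user space daemons managed by systemd
systemctl status ovs-vswitchd ovsdb-server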

Open vSwitch doesn’t have a specific SDN controller that it uses for management purposes, unlike VMware’s vSphere Distributed Switch and NSX, which use vCenter and various NSX components to manage their capabilities. In OVS, the point is to use someone else’s SDN controller, which then interacts with ovs-vswitchd using the OpenFlow protocol. The ovsdb-server maintains the switch table database, and external clients can talk to ovsdb-server using JSON-RPC, with JSON as the data format. The ovsdb database currently contains around 13 tables, and this database is persistent across restarts.
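
If you want to look at those tables yourself, the ovsdb-client utility that ships with Open vSwitch can query the local database directly. A minimal sketch, assuming the default local Open_vSwitch database on a host where OVS is already running:

# List the tables in the local Open_vSwitch database
ovsdb-client list-tables
# Dump the entire database contents (bridges, ports, interfaces, and so on)
ovsdb-client dump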

Open vSwitch works in two modes: normal and flow mode. This chapter will primarily concentrate on how to bring up a KVM VM connected to an Open vSwitch bridge in standalone/normal mode and will give a brief introduction to flow mode using the OpenDaylight controller:

  • Normal Mode: Switching and forwarding are handled by the OVS bridge. In this mode, OVS acts as an L2 learning switch. This mode is especially useful when you simply need several overlay networks for your environment rather than manipulating the switch’s flows.
  • Flow Mode: In flow mode, the Open vSwitch bridge flow table is used to decide which port the received packets should be forwarded to. All the flows are managed by an external SDN controller. Adding or removing control flows requires using the SDN controller that manages the bridge or using the ovs-ofctl command. This mode allows a greater level of abstraction and automation; the SDN controller exposes a REST API, and our applications can use this API to directly manipulate the bridge’s flows to meet network needs (see the short sketch right after this list).
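
To get a feel for the difference without deploying a full SDN controller, you can change a bridge’s fail-mode and push a flow by hand with ovs-ofctl. Strictly speaking, the fail-mode only controls what the bridge does when no controller is connected, but it is the usual way to stop a bridge from falling back to L2 learning. This is a minimal sketch, assuming a bridge named ovs-br0 like the one we create later in this section:

# secure = do nothing unless the flow table says so (flow mode behavior)
ovs-vsctl set-fail-mode ovs-br0 secure
# Manually add a catch-all flow that restores L2 learning – this is the kind
# of decision a controller (or standalone/normal mode) would otherwise make
ovs-ofctl add-flow ovs-br0 "priority=0,actions=normal"
# Inspect the resulting flow table
ovs-ofctl dump-flows ovs-br0
# Go back to the default standalone/normal behavior
ovs-vsctl set-fail-mode ovs-br0 standalone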

Let’s move on to the practical aspect and learn how to install Open vSwitch on CentOS 8:

  1. The first thing that we must do is tell our system to use the appropriate repositories. In this case, we need to enable the repositories called epel and centos-release-openstack-train. We can do that by using a couple of yum commands:
    yum -y install epel-release
    yum -y install centos-release-openstack-train
  2. The next step is installing openvswitch from the repositories that we just enabled:
    dnf install openvswitch -y
  3. After the installation process, we need to check if everything is working by starting and enabling the Open vSwitch service and running the ovs-vsctl -V command:
    systemctl start openvswitch
    systemctl enable openvswitch
    ovs-vsctl -V

    The last command should throw you some output specifying the version of Open vSwitch and its DB schema. In our case, it’s Open vSwitch 2.11.0 and DB schema 7.16.1.

  4. Now that we’ve successfully installed and started Open vSwitch, it’s time to configure it. Let’s choose a deployment scenario in which we’re going to use Open vSwitch as a new virtual switch for our virtual machines. In our server, we have another physical interface called ens256, which we’re going to use as an uplink for our Open vSwitch virtual switch. We’re also going to clear the ens256 configuration, configure an IP address for our OVS bridge, and bring it up by using the following commands (a short verification sketch follows step 6):
    ovs-vsctl add-br ovs-br0
    ip addr flush dev ens256
    ip addr add 10.10.10.1/24 dev ovs-br0
    ovs-vsctl add-port ovs-br0 ens256
    ip link set dev ovs-br0 up
  5. Now that everything has been configured (though not yet persistently), we need to make the configuration persistent. This means configuring some network interface configuration files. So, go to /etc/sysconfig/network-scripts and create two files. Call one of them ifcfg-ens256 (for our uplink interface):
    DEVICE=ens256
    TYPE=OVSPort
    DEVICETYPE=ovs
    OVS_BRIDGE=ovs-br0
    ONBOOT=yes

    Call the other file ifcfg-ovs-br0 (for our OVS):

    DEVICE=ovs-br0
    DEVICETYPE=ovs
    TYPE=OVSBridge
    BOOTPROTO=static
    IPADDR=10.10.10.1
    NETMASK=255.255.255.0
    GATEWAY=10.10.10.254
    ONBOOT=yes
  6. We didn’t configure all of this just for show, so we need to make sure that our KVM virtual machines are also able to use it. This means – again – that we need to create a KVM virtual network that’s going to use OVS. Luckily, we’ve dealt with KVM virtual network XML files before (check the Libvirt isolated network section), so this one isn’t going to be a problem. Let’s call our network packtovs and its corresponding XML file packtovs.xml. It should contain the following content:
    <network>
    <name>packtovs</name>
    <forward mode='bridge'/>
    <bridge name='ovs-br0'/>
    <virtualport type='openvswitch'/>
    </network>
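
Before defining the libvirt network, it’s worth quickly verifying that the bridge we built in steps 4 and 5 looks the way we expect. A minimal check:

# The uplink ens256 should be listed as a port of ovs-br0
ovs-vsctl list-ports ovs-br0
# The bridge interface should be up and carry the 10.10.10.1/24 address
ip addr show ovs-br0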

So, now, we can perform our usual operations when we have a virtual network definition in an XML file, which is to define, start, and autostart the network:

virsh net-define packtovs.xml
virsh net-start packtovs
virsh net-autostart packtovs

If we left everything as it was when we created our virtual networks, the output from virsh net-list should look something like this:

Figure 4.12 – Successful OVS configuration, and OVS+KVM configuration

So, all that’s left now is to hook up a VM to our newly defined OVS-based network called packtovs and we’re home free. Alternatively, we could just create a new virtual machine and pre-connect it to that specific network using the knowledge we gained in Chapter 3, Installing KVM Hypervisor, libvirt, and oVirt. So, let’s issue the following command, which has just two changed parameters compared to Chapter 3 (--name and --network):

virt-install --virt-type=kvm --name MasteringKVM03 --vcpus 2 --ram 4096 --os-variant=rhel8.0 --cdrom=/var/lib/libvirt/images/CentOS-8-x86_64-1905-dvd1.iso --network network:packtovs --graphics vnc --disk size=16
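
If you would rather connect an existing virtual machine to packtovs instead of installing a new one, virsh can attach an extra NIC to it. This is just a sketch – the domain name MasteringKVM03 is the example guest from above, and the machine must be running for the --live part to apply:

# Attach a new virtio NIC connected to the packtovs network,
# both immediately (--live) and persistently (--config)
virsh attach-interface --domain MasteringKVM03 --type network \
  --source packtovs --model virtio --config --live
# Confirm that the new interface is present
virsh domiflist MasteringKVM03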

After the virtual machine installation completes, we’re connected to the OVS-based packtovs virtual network, and our virtual machine can use it. Let’s say that additional configuration is needed and that we got a request to tag traffic coming from this virtual machine with VLAN ID 5. Start your virtual machine and use the following set of commands:

ovs-vsctl list-ports ovs-br0
ens256
vnet0

This command tells us that we’re using the ens256 port as an uplink and that our virtual machine, MasteringKVM03, is using the virtual vnet0 network port. We can apply VLAN tagging to that port by using the following command:

ovs-vsctl set port vnet0 tag=5
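
To confirm that the tag was applied, or to remove it again later, the same ovs-vsctl database commands can be used. A minimal sketch:

# Check the tag column of the vnet0 port record (should print 5)
ovs-vsctl get port vnet0 tag
# Remove the VLAN tag again if it's no longer needed
ovs-vsctl remove port vnet0 tag 5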

We need to take note of some additional commands related to OVS administration and management since this is done via the CLI. So, here are some commonly used OVS CLI administration commands:

  • #ovs-vsctl show: A very handy and frequently used command. It tells us what the current running configuration of the switch is.
  • #ovs-vsctl list-br: Lists bridges that were configured on Open vSwitch.
  • #ovs-vsctl list-ports <bridge>: Shows the names of all the ports on BRIDGE.
  • #ovs-vsctl list-ifaces <bridge>: Shows the names of all the interfaces on BRIDGE.
  • #ovs-vsctl add-br <bridge>: Creates a bridge in the switch database.
  • #ovs-vsctl add-port <bridge> <interface>: Binds an interface (physical or virtual) to the Open vSwitch bridge.
  • #ovs-ofctl and ovs-dpctl: These two commands are used for administering and monitoring flow entries. You learned that OVS manages two kinds of flows: OpenFlow flows and datapath flows. The former are managed in the control plane, while the latter are kernel-based flows.
  • #ovs-ofctl: This speaks to the OpenFlow module, whereas ovs-dpctl speaks to the kernel module.

The following examples are the most commonly used options for each of these commands (a short combined example follows this list):

  • #ovs-ofctl show <bridge>: Shows brief information about the switch, including the port number to port name mapping.
  • #ovs-ofctl dump-flows <bridge>: Examines the OpenFlow tables.
  • #ovs-dpctl show: Prints basic information about all the logical datapaths, referred to as bridges, present on the switch.
  • #ovs-dpctl dump-flows: Shows the flows cached in the datapath.
  • #ovs-appctl: This command offers a way to send commands to a running Open vSwitch and gather information that is not directly exposed by ovs-ofctl. This is the Swiss Army knife of OpenFlow troubleshooting.
  • #ovs-appctl bridge/dump-flows <br>: Examines the flow tables and offers direct connectivity for VMs on the same host.
  • #ovs-appctl fdb/show <br>: Lists the MAC/VLAN pairs that have been learned.
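
As a quick illustration of how a few of these fit together, here is a short inspection session against the ovs-br0 bridge we built earlier (the bridge and its ports are assumed to exist):

ovs-ofctl show ovs-br0         # OpenFlow view: port numbers, names, and link state
ovs-ofctl dump-flows ovs-br0   # in normal mode, you'll typically see a single NORMAL flow
ovs-appctl fdb/show ovs-br0    # MAC addresses the bridge has learned, per port and VLAN
ovs-dpctl show                 # kernel datapath view of the same bridge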

Also, you can always use the ovs-vsctl show command to get information about the configuration of your OVS switch:

Figure 4.13 – ovs-vsctl show output

We are going to come back to the subject of Open vSwitch in Chapter 12, Scaling Out KVM with OpenStack, as we go deeper into our discussion about spanning Open vSwitch across multiple hosts, especially keeping in mind the fact that we want to be able to span our cloud overlay networks (based on GENEVE, VXLAN, GRE, or similar protocols) across multiple hosts and sites.
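
Although the full multi-host story belongs in Chapter 12, the basic building block is simple enough to show here: an overlay tunnel port on the OVS bridge. The following is a hedged sketch, assuming two hosts that both have an ovs-br0 bridge and can reach each other over IP – 192.168.1.10 and 192.168.1.11 are placeholder tunnel endpoint addresses, and GENEVE would work the same way with type=geneve:

# On host A (tunnel endpoint 192.168.1.10), point a VXLAN port at host B
ovs-vsctl add-port ovs-br0 vxlan0 -- set interface vxlan0 \
    type=vxlan options:remote_ip=192.168.1.11
# On host B (tunnel endpoint 192.168.1.11), do the same in the opposite direction
ovs-vsctl add-port ovs-br0 vxlan0 -- set interface vxlan0 \
    type=vxlan options:remote_ip=192.168.1.10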

Other Open vSwitch use cases

As you might imagine, Open vSwitch isn’t just a handy concept for libvirt or OpenStack – it can be used for a variety of other scenarios as well. Let’s describe one of them as it might be important for people looking into VMware NSX or NSX-T integration.

Let’s just describe a few basic terms and relationships here. VMware’s NSX is an SDN-based technology that can be used for a variety of use cases:

  • Connecting data centers and extending cloud overlay networks across data center boundaries.
  • A variety of disaster recovery scenarios. NSX can be a big help with disaster recovery, with multi-site environments, and with integration with a variety of external services and devices that can be a part of the scenario (Palo Alto PANs).
  • Consistent micro-segmentation, across sites, done the right way on the virtual machine network card level.
  • For security purposes, varying from different types of supported VPN technologies to connect sites and end users, to distributed firewalls, guest introspection options (antivirus and anti-malware), network introspection options (IDS/IPS), and more.
  • For load balancing, up to Layer 7, with SSL offload, session persistence, high availability, application rules, and more.

Yes, VMware’s take on SDN (NSX) and Open vSwitch seem like competing technologies on the market, but realistically, there are loads of clients who want to use both. This is where VMware’s integration with OpenStack and NSX’s integration with Linux-based KVM hosts (by using Open vSwitch and additional agents) come in really handy. Just to further explain these points – there are things that NSX does that make extensive use of Open vSwitch-based technologies – hardware VTEP integration via the Open vSwitch Database, extending GENEVE networks to KVM hosts by using Open vSwitch/NSX integration, and much more.

Imagine that you’re working for a service provider – a cloud service provider, an ISP; basically, any type of company that has large networks with a lot of network segmentation. There are loads of service providers using VMware’s vCloud Director to provide cloud services to end users and companies. However, because of market needs, these environments often need to be extended to include AWS (for additional infrastructure growth scenarios via the public cloud) or OpenStack (to create hybrid cloud scenarios). If there were no interoperability between these solutions, there would be no way to use both of these offerings at the same time. From a networking perspective, the background that makes this possible is NSX or NSX-T (which actually uses Open vSwitch).

It’s been clear for years that the future is all about multi-cloud environments, and these types of integrations will bring in more customers; they will want to take advantage of these options in their cloud service design. Future developments will also most probably include (and already partially include) integration with Docker, Kubernetes, and/or OpenShift to be able to manage containers in the same environment.

There are also some more extreme examples of using hardware – in our example, we are talking about network cards on a PCI Express bus – in a partitioned way. For the time being, our explanation of this concept, called SR-IOV, is going to be limited to network cards, but we will expand on the same concept in Chapter 6, Virtual Display Devices and Protocols, when we start talking about partitioning GPUs for use in virtual machines. So, let’s discuss a practical example of using SR-IOV on an Intel network card that supports it.
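
A minimal sketch of the first steps looks like this – the interface name ens2f0 is just a placeholder for your own SR-IOV-capable adapter, and the number of virtual functions you create will depend on what the card supports:

# Check how many virtual functions (VFs) the card can expose
cat /sys/class/net/ens2f0/device/sriov_totalvfs
# Create four virtual functions on that card
echo 4 > /sys/class/net/ens2f0/device/sriov_numvfs
# The new VFs show up as additional PCI devices
lspci | grep -i "virtual function"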

