Rebuilding My Home Network
ABSTRACT
I have had my ESXi box for YEARS and recently decided to take a dive into the world of KVM (QEMU) on Ubuntu Linux. It’s a popular, completely open hypervisor that has become a staple in many datacenter environments. I had been dreading the change because it meant rebuilding a few of the VMs I still had. Though I recently took the plunge into Docker containers and started containerizing whatever apps I could, I still have a handful of VMs that perform various functions on my home network and “home lab”. I’ll preface this by saying that the ESXi box served us well over the years and I hardly ever had to touch it. The one thing I did not have with ESXi was other hosts to move VMs to. The license isn’t free, and ESXi is very persnickety about hardware requirements, so implementing vMotion (VMware’s method of migrating/moving VMs around in a cluster) was never appealing. With KVM, my options are more open: I have a handful of KVM-compatible hosts, so if I ever have a problem with hardware, I can move VMs around in a pinch.
SOLUTION
To create a small cluster of physical hosts to run my new VMs built on KVM, I simply carried out these steps (I’ll go into greater detail on each one):
- Install Ubuntu Server 20.04 OS on each physical machine
- Configure a netplan for each physical KVM (pkvm) host that achieves:
- bonded (teamed) interfaces
- LACP (802.3ad) attributes to bring up the channel-group session
- VLAN-tagged traffic over the bond
- bridge interfaces to allow KVM guest VMs to attach to the desired VLAN
- Install necessary KVM packages
- Configure a port channel interface on the main switch & define the physical switch ports that will participate in the LACP channel-group
- Configure a common NFS mount on the NAS to hold the KVM guest images on the network
- Move the new KVM guest VMs onto my newly rebuilt KVM host
- Consume donuts that my wife and kids made while the “internet was out”
Setting Up a Netplan
Ubuntu 20.04 uses netplan to configure its network interfaces. Netplan uses a simple YAML-formatted file that is easy to write and back up, and if you screw up, you can always revert to a previous copy (always make a backup!). My netplan file looks like this:
network:
  bonds:
    bond0:
      interfaces:
        - enp1s0f2
        - enp1s0f3
      parameters:
        mode: 802.3ad
        lacp-rate: fast
        mii-monitor-interval: 100
  ethernets:
    # enp1s0f0: {}
    # enp1s0f1: {}
    enp1s0f2: {}
    enp1s0f3: {}
    enp3s0:
      dhcp4: no
    enp4s0:
      dhcp4: no
  vlans:
    vlan.2:
      id: 2
      link: bond0
      dhcp4: no
    vlan.5:
      id: 5
      link: bond0
      dhcp4: no
    vlan.11:
      id: 11
      link: bond0
      dhcp4: no
    vlan.10:
      id: 10
      link: bond0
      dhcp4: no
    vlan.50:
      id: 50
      link: bond0
      dhcp4: no
    vlan.73:
      id: 73
      link: bond0
      dhcp4: no
    vlan.300:
      id: 300
      link: bond0
      dhcp4: no
  bridges:
    br73:
      interfaces:
        - vlan.73
    br2:
      interfaces:
        - vlan.2
    br5:
      interfaces:
        - vlan.5
    br10:
      interfaces:
        - vlan.10
    br11:
      interfaces:
        - vlan.11
      addresses: [10.0.1.3/24]
      gateway4: 10.0.1.1
      nameservers:
        addresses: [10.0.1.10]
    br50:
      interfaces:
        - vlan.50
  version: 2
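Before editing, I always snapshot the working file first; netplan can also trial a change and roll it back automatically if you lock yourself out. A minimal sketch (the 00-kvm-net.yaml filename is just an example; use whatever file is actually in your /etc/netplan directory):

# Keep a copy of the known-good config before touching it
sudo cp /etc/netplan/00-kvm-net.yaml ~/00-kvm-net.yaml.bak

# Apply the new config, but roll back automatically unless it is
# confirmed within the timeout (120 seconds by default)
sudo netplan try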
Installing KVM packages
apt install -y qemu qemu-kvm libvirt-daemon bridge-utils virt-manager virtinst
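After installing, it’s worth a quick sanity check that the daemon came up before going any further. Something like this (the usermod step is optional and assumes the libvirt group exists; on Ubuntu it is created by the libvirt-daemon-system package):

# Make sure the libvirt daemon is enabled and running
sudo systemctl enable --now libvirtd

# Optional: manage VMs without sudo (log out and back in afterwards)
sudo usermod -aG libvirt $USER

# An (empty) table of VMs means libvirt is answering
sudo virsh list --all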
Configuring a port channel interface
With the netplan configuration saved and in place, I executed sudo netplan apply and then proceeded to set up the switch side of the bonded connection. The first thing I needed to do was configure a new port-channel interface on the switch that would be suitable for carrying tagged traffic:
interface Port-channel2
 description KVMBOX
 switchport trunk encapsulation dot1q
 switchport mode trunk
 spanning-tree portfast
end
After setting up the port channel interface, I now had to add the physical interfaces that are cabled from the switch to the big KVM box:
interface GigabitEthernet1/0/44
 description TRUNK_TO_VBOX1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 lacp port-priority 500
 channel-protocol lacp
 channel-group 2 mode active
interface GigabitEthernet1/0/45
 description TRUNK_TO_VBOX1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 lacp port-priority 500
 channel-protocol lacp
 channel-group 2 mode active
end
wr mem
Once both ports were set up, I could then test the channel-group status:
CORE-SW#sh etherchannel 2 summary
Flags:  D - down        P - bundled in port-channel
        I - stand-alone s - suspended
        H - Hot-standby (LACP only)
        R - Layer3      S - Layer2
        U - in use      f - failed to allocate aggregator
        M - not in use, minimum links not met
        u - unsuitable for bundling
        w - waiting to be aggregated
        d - default port

Number of channel-groups in use: 2
Number of aggregators:           2

Group  Port-channel  Protocol    Ports
------+-------------+-----------+-----------------------------------------------
2      Po2(SU)         LACP      Gi1/0/44(P)     Gi1/0/45(P)
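Po2(SU) with both gig ports showing (P) was exactly what I wanted to see: per the flags legend, the port channel is Layer 2 and in use, and both member links are bundled. The same state can be confirmed from the host side, since the kernel exposes bonding details under /proc (bond0 matching the netplan above):

# Shows the bond mode, LACP partner info, and per-slave link state
cat /proc/net/bonding/bond0

# A condensed view of the bond interface itself
ip -d link show bond0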
Setting Up an NFS Share for KVM Images
This part is easy:
- apt install nfs-common
Add a line to /etc/fstab:
10.9.9.20:/volume1/vol1 /vol1 nfs _netdev,nfsvers=3,nolock,noatime,bg 0 0
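For what it’s worth, those options break down as follows: _netdev waits for the network to be up before attempting the mount, nfsvers=3 pins the NFS protocol version, nolock disables NFS file locking, noatime skips access-time updates on reads, and bg retries the mount in the background if the server isn’t reachable on the first try.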
Then mount: sudo mount /vol1. Finally, I created a symbolic link from the /var/lib/libvirt/images directory to /vol1/kvms (where the other KVM servers keep their images).
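Put together, that looks roughly like this (the images.local rename is just my safety net for keeping the stock directory around, not a requirement):

sudo mkdir -p /vol1
sudo mount /vol1

# Keep the original images directory, then point libvirt at the shared NFS path
sudo mv /var/lib/libvirt/images /var/lib/libvirt/images.local
sudo ln -s /vol1/kvms /var/lib/libvirt/images

With the shared storage in place, new guests land on the NFS volume and can attach to whichever bridge/VLAN they belong on. A rough virt-install sketch (the VM name, sizes, and ISO path are all placeholders):

sudo virt-install \
  --name testvm \
  --memory 4096 \
  --vcpus 2 \
  --disk path=/var/lib/libvirt/images/testvm.qcow2,size=20 \
  --network bridge=br11 \
  --cdrom /vol1/kvms/ubuntu-20.04-live-server-amd64.iso \
  --os-variant ubuntu20.04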