Here’s what I’ve built, or what I’m currently working on.


A “terminal server” or centralized shared desktop server has obvious benefits. Here are just a few:

  • Central application management
  • Securely access home automation or BMS systems remotely
  • Use cheap thin clients or even older machines that are now too slow
  • Cross platform – this works with Windows, Mac, and Linux machines
  • Remotely accessible from anywhere
  • Reduced support requirements
  • Privately use another desktop from work or home
  • Save on the expense of premium hardware on every desktop
  • More secure application deployment within trusted network zones
  • License Free! – this type of terminal server requires no expensive licensing


I built a terminal server running Xubuntu 22.04 with local users. To enable the use of RDP protocol clients (because Windows, Linux, and Mac machines already have RDP clients) I installed the xrdp package. You can perform the following steps to turn any Xubuntu desktop instance into a full terminal server:

sudo apt install xrdp
sudo systemctl enable --now xrdp
sudo ufw allow from any to any port 3389 proto tcp

When adding new users to the box, simply create the desktop user the usual way in Ubuntu (for example, with sudo adduser or the Users settings panel).

Once the user is added, give them whatever permissions are desired. If you have various applications and a printer already setup on this box, then the user will have access to them to print from within their session. Once setup, give the user their credentials and have them login for the first time.

From another Linux desktop, they can use the “Remmina” app to open a session, or from Windows use Remote Desktop, etc:

Hit “Save and Connect”

Upon logging in, they will see the login greeter:

Once logged in, the terminal server desktop appears:

From within this session, the desktop experience is perfect for running centralized applications, or browsing the internet from the terminal server’s location.


I had a need to create a shared desktop Ubuntu machine that would be used by more than one person, but I wanted it to function more like an internet kiosk, and not be changeable by the end user. After playing with Raspbian OS for other projects, I knew about the overlayfs method of making the SD card read only. This is often done to protect the SD card from being prematurely worn by constant writing. In this application, I wanted to do something similar using the Ubuntu OS on more powerful J4125 hardware. After some initial googling, I learned about the overlayroot package. Installing and setting it up is fairly easy and can be done in mere minutes. Read on…


The first thing we will want to do is install the Ubuntu operating system and all the software applications we might want. We want to set up the machine exactly how we will want it BEFORE we enable the overlayroot read only FS. Set all preferred desktop settings, packages, themes, browser settings, backgrounds, desired user accounts, printers, etc FIRST!

After we have the desktop provisioned the way we want it, the next step is to “freeze” this configuration by installing and enabling overlayroot:

sudo apt install overlayroot

Once you have it installed, you will need to modify a configuration file to enable it. Edit the following file – you only need to change the variable overlayroot="" to overlayroot="tmpfs":

sudo nano /etc/overlayroot.conf


Save and close the file. There are other options for this config, but they are beyond the scope of this article. For more information, consult the config file; it is loaded with documentation in the comments.

Last step, reboot the machine. Your machine will come up in a read only state. Any changes made to the system will be eliminated after reboot! This is perfect for a shared computer where you don’t want multiple people mucking up the machine, and cleanup is as easy as rebooting.

Undo The Overlayroot

If you should wish to undo the overlayroot to update the system, or add/change something, you can do so by passing the overlayroot=disabled argument on the grub boot line.


Here’s how:

Reboot the machine, and at the grub menu hit “e” to edit the chosen boot entry, then add overlayroot=disabled to the end of the linux line like so:

menuentry 'Ubuntu, with Linux 3.5.0-54-generic (Writable)' --class ubuntu --class gnu-linux --class gnu --class os {
	gfxmode $linux_gfx_mode
	insmod gzio
	insmod part_msdos
	insmod ext2
	set root='(hd0,msdos1)'
	search --no-floppy --fs-uuid --set=root 28adfe9d-c122-479a-ab81-de57d16516dc
	linux	/vmlinuz-3.5.0-54-generic root=/dev/mapper/faramir-root ro overlayroot=disabled
	initrd	/initrd.img-3.5.0-54-generic
}

Keep in mind this is a ONE TIME modification to the boot line. Once booted, make all your changes and then reboot again, and the system will be restored to the overlayroot read only state. If you wish to permanently undo the overlayroot, then clear the overlayroot=”tmpfs” variable in /etc/overlayroot.conf BEFORE rebooting.

I bought a used Dell R430 server from eBay. It still had the old hostname showing in the system details in the iDRAC. Without the OMSA tool installed on the server, it is hard to change the displayed name. The OMSA tool also enables a lot of other admin functionality. I had to read many guides to figure out how to install the tool for my system, so I’m posting the steps here in hopes this will help someone.

Step 1

Install the repository. Dell maintains a complete repo for Ubuntu/Debian users at:

You need to review the support matrix on that repo for your server model, generation, and supported OS. In my case, for my R430 (13th generation), I installed the matching version. Here’s how I added the repo:

sudo gpg --keyserver --recv-key 1285491434D8786F
sudo gpg -a --export 1285491434D8786F | sudo apt-key add -
echo 'deb focal main' | sudo tee -a /etc/apt/sources.list.d/
sudo apt update

Step 2

Once apt was updated to include the repo, all that is left to do is install the tool:

sudo apt install srvadmin-all

After installation, reboot the server (the installation makes many changes) and then start the OMSA service:

sudo /opt/dell/srvadmin/sbin/ start

After starting the service, you should then be able to access the admin page using a local account at:


Once logged in, you will then see the dashboard:


Because I have multiple computer systems (servers, desktops/laptops, and embedded home automation devices) I wanted to have a master time source that would keep working even if my internet connection became unavailable. The best way to achieve this is by using time signals from GPS satellites.

Commercial GPS clocks tend to cost in the thousands of dollars. My budget isn’t sized to afford a commercial GPS clock, but I wanted one so I did not have to rely on internet NTP sources should the internet connection fail. Fortunately there are a number of inexpensive GPS modules on the market which allow you to build such a master clock yourself. I built one of these about 10 years ago but never documented the project, so here I am documenting it now.

For this project, you will want a GPS module that offers a PPS (Pulse Per Second) output. The PPS signal allows the NTP server to precisely align the GPS seconds with the rise of the PPS pulse. The PPS signal is read from an available GPIO pin on the Raspberry Pi. Any Raspberry Pi board will do – as long as it is dedicated to this purpose and running no other applications.


Materials used for this project:

Preparing the Raspbian OS:

To prepare and install all the needed software run:

sudo apt update
sudo apt dist-upgrade
sudo apt install pps-tools gpsd gpsd-clients gpsd-tools chrony

Now, wire the GPS module to the RPi as follows: (Note: we’re using GPIO 18)

Pin connections:

  1. GPS PPS to RPi pin 12 (GPIO 18)
  2. GPS VIN to RPi pin 2 or 4
  3. GPS GND to RPi pin 6
  4. GPS RX to RPi pin 8
  5. GPS TX to RPi pin 10

Now, make the following config changes:

sudo bash -c "echo 'enable_uart=1' >> /boot/config.txt"
# the next two lines configure the GPS PPS signal and the UART baud rate
sudo bash -c "echo 'dtoverlay=pps-gpio,gpiopin=18' >> /boot/config.txt"
sudo bash -c "echo 'init_uart_baud=9600' >> /boot/config.txt"
# load the pps-gpio module on boot
sudo bash -c "echo 'pps-gpio' >> /etc/modules"

Open the following file and edit as shown below:

sudo vim /etc/default/gpsd

# Devices gpsd should collect to at boot time.
# They need to be read/writeable, either by user gpsd or the group dialout.
DEVICES="/dev/ttyS0 /dev/pps0"

# Other options you want to pass to gpsd

# Automatically hot add/remove USB GPS devices via gpsdctl

It’s now time to reboot the GPS clock so the above changes take effect. After rebooting, you can verify that the pps-gpio module loaded by running:

pi@time:~ $ lsmod | grep pps
pps_ldisc              16384  2
pps_gpio               16384  2

If you see the pps_gpio module returned in the output above, then please proceed. If not, GO BACK AND CHECK YOUR /boot/config.txt CHANGES AND YOUR WIRING!
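Under the hood, the GPS module is just emitting NMEA sentences over the UART; gpsd parses them for you, but here is a hedged Python sketch (the $GPRMC sentence below is a made-up sample, and the field layout follows the common NMEA 0183 convention) showing how the checksum and UTC time field work:

```python
# Sketch only: gpsd already does this parsing for you.
def nmea_checksum(body: str) -> str:
    """NMEA checksum: XOR of every character between '$' and '*'."""
    calc = 0
    for ch in body:
        calc ^= ord(ch)
    return format(calc, "02X")

def nmea_checksum_ok(sentence: str) -> bool:
    body, _, given = sentence.strip().lstrip("$").partition("*")
    return nmea_checksum(body) == given.upper()

def gprmc_time_utc(sentence: str) -> str:
    """Pull the hhmmss UTC time field out of a $GPRMC sentence."""
    t = sentence.split(",")[1]
    return f"{t[0:2]}:{t[2:4]}:{t[4:6]} UTC"

# Build a sample sentence with a self-consistent checksum
body = "GPRMC,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W"
sample = f"${body}*{nmea_checksum(body)}"
print(nmea_checksum_ok(sample))
print(gprmc_time_utc(sample))
```

This is the coarse (serial) time source; the PPS pulse is what pins the second boundary down precisely.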

sudo vim /etc/chrony/chrony.conf

# Include configuration files found in /etc/chrony/conf.d.
confdir /etc/chrony/conf.d

# Put in some reliable NTP backups (these will be fallback sources if the GPS signal disappears).
# In my case, I had a GPS module fail after 10 years; without these backups, your clocks
# will wander!  Lesson learned!
server iburst

# Enter these two statements to tell Chrony to prefer the GPS time.
refclock SHM 0 delay 0.325 refid NMEA
refclock PPS /dev/pps0 refid PPS

# Set this to whatever subnet you wish to allow to query the GPS clock.  Replace the below
# with whatever is relevant in your environment!  In my example, the following subnet is used 
# by my network switches which are NTP peers.  Only the switches sync with the GPS clock and
# network endpoints in other subnets/vlans query the nearest switch for time.  

# This directive specifies the location of the file containing ID/key pairs for
# NTP authentication.
keyfile /etc/chrony/chrony.keys

# This directive specifies the file into which chronyd will store the rate
# information.
driftfile /var/lib/chrony/chrony.drift

# Save NTS keys and cookies.
ntsdumpdir /var/lib/chrony

# Uncomment the following line to turn logging on.
#log tracking measurements statistics

# Log files location.
logdir /var/log/chrony

# Stop bad estimates upsetting machine clock.
maxupdateskew 100.0

# This directive enables kernel synchronisation (every 11 minutes) of the
# real-time clock. Note that it can’t be used along with the 'rtcfile' directive.
rtcsync

# Step the system clock instead of slewing it if the adjustment is larger than
# one second, but only in the first three clock updates.
makestep 1 3

# Get TAI-UTC offset and leap seconds from the system tz database.
# This directive must be commented out when using time sources serving
# leap-smeared time.
leapsectz right/UTC

Now that you have completed the above, it’s time to restart the Chrony service:

sudo systemctl restart chrony

You can now check to see if the GPS is working:

sudo cgps

You should see output like:

Note the “Status” line. A 3D GPS fix indicates a solid satellite lock and therefore reliable time.

If you are getting the above output, your GPS should be working and chrony should be serving time to your network. You can check to see if chrony is getting time from the GPS:

sudo chronyc -n sources

MS Name/IP address         Stratum Poll Reach LastRx Last sample               
#- NMEA                          0   4   377     7   +141ms[ +141ms] +/-  163ms
#* PPS                           0   4   377     9   -185ns[ -427ns] +/-  312ns
^-                  1   7   377    72  -1348us[-1348us] +/-   24ms
^-                  1   8   377   111  -1386us[-1387us] +/-   24ms
^-                 3  10   377   676  +5834us[+5900us] +/-   59ms
^-                 1   7   377    23    -66us[  -66us] +/- 3640us

In the above example, the time source with the asterisk (*) is the preferred, in-use time source. The hash (#) indicates that the listed time source is locally attached, while the caret (^) indicates that the time source is remote. Here you can see that the GPS PPS time source is accurate to within NANOSECONDS!
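If you want to check this programmatically, here is a hedged Python sketch (not part of my setup, just illustrative) that picks the selected source out of `chronyc -n sources` output, keying on the '*' state character shown above:

```python
# Sketch: parse chronyc sources output; column 2 of the two-character
# prefix holds the state, where '*' marks the synchronised source.
def selected_source(chronyc_output: str):
    for line in chronyc_output.splitlines():
        if len(line) > 3 and line[1] == "*":
            return line[3:].split()[0]   # the Name/IP address column
    return None

sample = """\
#- NMEA    0   4   377     7   +141ms[ +141ms] +/-  163ms
#* PPS     0   4   377     9   -185ns[ -427ns] +/-  312ns
"""
print(selected_source(sample))  # PPS
```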

Harden The Clock

At this point, it is a good idea to enable an overlay filesystem to preserve your MicroSD card. This ensures that no further writes occur, so that if the GPS clock suddenly loses power, the card will not be corrupted. To do so, run the following:

sudo raspi-config

Select Performance Options:

Then Overlay File System:

Choose YES to enable the overlay filesystem, then YES to make the /boot partition READ ONLY! At this point, you can safely power cycle the GPS clock and it should reboot and continue working automatically. Be sure to place the antenna in view of the sky. If you are using an internal antenna, place the GPS clock near a window.

I recently purchased a WeatherFlow weather station (Tempest) and set it up in the back yard. What I like about this company is that they are very open source friendly and provide API access to the data from your weather station. This data is valuable for many purposes, including home automation and weather alerting. My first exercise was to learn how to use the API to make data calls and then do something with the data. The Tempest device tracks the following weather conditions:

  • Air Temperature
  • Barometric Pressure
  • Relative Humidity
  • Precipitation & accumulation
  • Wind Average and Gust
  • Wind Direction
  • Solar Radiation
  • UV Index
  • Brightness (expressed in lux)
  • Lightning Strike (count, distance, by time)
  • “Feels Like” temperature
  • Heat Index
  • Wind Chill
  • Dew Point
  • Pressure Trend
  • Wet Bulb Temperature

The API provides access to a JSON formatted feed which allows you to write a programmatic method to access, store, and consume the data for just about anything. Here’s what the returned JSON string looks like:

[{'timestamp': 1654725915, 'air_temperature': 28.0, 'barometric_pressure': 1001.2, 'station_pressure': 1001.2, 'sea_level_pressure': 1014.8, 'relative_humidity': 78, 'precip': 0.0, 'precip_accum_last_1hr': 0.0, 'precip_accum_local_day': 0.0, 'precip_accum_local_day_final': 0.0, 'precip_accum_local_yesterday': 0.0, 'precip_accum_local_yesterday_final': 0.0, 'precip_minutes_local_day': 0, 'precip_minutes_local_yesterday': 0, 'precip_minutes_local_yesterday_final': 0, 'precip_analysis_type_yesterday': 0, 'wind_avg': 0.5, 'wind_direction': 359, 'wind_gust': 1.2, 'wind_lull': 0.0, 'solar_radiation': 486, 'uv': 6.04, 'brightness': 58367, 'lightning_strike_last_epoch': 1654217568, 'lightning_strike_last_distance': 39, 'lightning_strike_count': 0, 'lightning_strike_count_last_1hr': 0, 'lightning_strike_count_last_3hr': 0, 'feels_like': 31.8, 'heat_index': 31.8, 'wind_chill': 28.0, 'dew_point': 23.8, 'wet_bulb_temperature': 24.9, 'wet_bulb_globe_temperature': 28.7, 'delta_t': 3.1, 'air_density': 1.15816, 'pressure_trend': 'steady'}]
The Tempest weather station by Weatherflow
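Some of the derived values in that feed can be sanity-checked from the raw readings. For example, dew point can be approximated from air temperature and relative humidity; this is a hedged sketch using the Magnus approximation (the constants are one common parameterization, not anything from the Tempest API):

```python
import math

# Sketch: approximate dew point (C) from temperature (C) and RH (%).
# The Tempest reports dew_point directly; this just shows the relationship.
def dew_point_c(temp_c: float, rh_pct: float) -> float:
    a, b = 17.27, 237.7  # common Magnus-formula constants
    gamma = (a * temp_c) / (b + temp_c) + math.log(rh_pct / 100.0)
    return (b * gamma) / (a - gamma)

# Using the sample observation above: 28.0 C at 78% RH
print(round(dew_point_c(28.0, 78), 1))  # close to the feed's dew_point of 23.8
```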

Once you set up access to the API, you can then make use of the data in real time. I wrote a simple Python script to access the JSON feed and create a text string to be pushed to my LED scrolling weathertube. Here’s the code below:

import requests
from requests.structures import CaseInsensitiveDict

url = ""

headers = CaseInsensitiveDict()
headers["Accept"] = "application/json"
headers["Authorization"] = "Bearer YOUR-API-KEY"

def convertTemp(c):
    # convert Celsius to Fahrenheit
    f = (c * 1.8) + 32
    return f

resp = requests.get(url, headers=headers)

wx = resp.json()
wxobs = wx["obs"]
tempf = convertTemp(wxobs[0]["air_temperature"])
dewpoint = convertTemp(wxobs[0]["dew_point"])

output = "     Current Conditions:   UV Index: " + str(round(wxobs[0]["uv"], 1)) + "       Temperature: " + str(round(tempf, 1)) + "F" + "       Dewpoint: " + str(round(dewpoint, 1)) + "F"
output += "     Humidity: " + str(wxobs[0]["relative_humidity"]) + "%" + "         Pressure: " + str(wxobs[0]["pressure_trend"]) + "      Wind Gust: " + str(wxobs[0]["wind_gust"]) + "MPH"
output += "     Lightning Strikes Last Hour: " + str(wxobs[0]["lightning_strike_count_last_1hr"]) + "      Lightning Last Distance: " + str(wxobs[0]["lightning_strike_last_distance"]) + "Miles"

# write the string out for the display script to pick up
with open('/scripts/wx/conditions', 'w') as file_object:
    file_object.write(output)

The above Python script is called from a bash script that is executed from cron every 5 minutes:


python3 /scripts/
msg=`/bin/cat /scripts/wx/conditions`

curl -X POST -F "msg=$msg" -s > /dev/null

In the above example, the host is the IP address of the weather tube which accepts curl POST input to update the display.
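The cron schedule itself would look something like this (the wrapper script name here is a placeholder; the full path is not shown above):

```
# crontab -e : run the weather update every 5 minutes
*/5 * * * * /scripts/wx/update.sh
```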


In our home, we have 2 garage doors with RF remotes in our cars.  For most people, this is generally considered “good enough”.  I wanted to come up with a way to connect our garage door openers to our home automation system.  Doing so would have the added benefit of remote control from anywhere, especially if we are away.  This could be useful so that deliveries could be placed in the garage, or for any other reason where we would want to allow someone access to the garage but not the rest of the house.  Some folks have a code entry panel that serves this purpose, but then you have to share that code, which can compromise security should the code be passed along without your knowledge.  The ability to remotely open the garage grants access without needing to share a credential.

This article makes the following assumptions:

  • You already know how to flash Tasmota onto ESP8266 hardware
  • You are familiar with Domoticz home automation console and adding devices
  • You use some method of message transport (e.g. MQTT) between Domoticz and your Tasmota powered hardware

View the following video to see a demo of how this works:



Since I already have an in-place home automation system, all I needed to do was configure two buttons on the console that would accept a pushbutton command and send a signal to a relay to open the door.  For hardware, I used a dual relay board that has an esp8266 chip.  The 8266 chip was flashed with the latest Tasmota release, and configured to operate the relays.  The switch output of the relays is wired to the existing wall switches of the garage door so that when tripped, will cause the door to operate just as the physical wall buttons already do. 

The Tasmota configuration uses the “Generic” template and configures GPIO0 to be Relay1, and GPIO2 to be Relay2.  After setting the GPIOs, I had to enter some settings and a ruleset into the Tasmota Console on the ESP-01.  To enable the ESP-01 to talk to the relay serial chip on the board, I had to go to the console and enter the following command and ruleset:  (NOTE: some older versions of this dual relay board may use 9600 baud instead of the 115200 baud shown here.)  Also, the dual relay board needs to operate in Mode 1 (default mode and indicated by a red LED on the board).

seriallog 0
on System#Boot do Backlog Baudrate 115200; SerialSend5 0 endon
on Power1#State=1 do SerialSend5 A00101A2 endon
on Power1#State=0 do SerialSend5 A00100A1 endon
on Power2#State=1 do SerialSend5 A00201A3 endon
on Power2#State=0 do SerialSend5 A00200A2 endon
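For reference, the four-byte SerialSend5 frames above appear to follow a simple pattern: an 0xA0 header, the relay number, the state, and a checksum that is the sum of the first three bytes (mod 256). This is my inference from the frames themselves, not official documentation; a quick Python sketch:

```python
# Sketch: build the relay board's serial frame for a given relay/state.
def relay_frame(relay: int, state: int) -> str:
    header = 0xA0
    checksum = (header + relay + state) & 0xFF  # sum of first 3 bytes, mod 256
    return f"{header:02X}{relay:02X}{state:02X}{checksum:02X}"

print(relay_frame(1, 1))  # A00101A2 -- matches the Power1 ON rule above
print(relay_frame(2, 0))  # A00200A2 -- matches the Power2 OFF rule above
```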

Then to enable the above rule:

rule1 1

This turns on rule1.  Once enabled, I want to ensure that power disturbances do not trigger the relays or cause the ESP-01 to lose its config.  To ensure that the relays stay OFF when there are power interruptions or power cycles, I needed to enter this command on the Tasmota Console:

PowerOnState 0

I also wanted to make sure the config would remain intact even if there were several power cycles/disturbances (sometimes Tasmota can reset to defaults if there are more than 6 fast consecutive power cycles), so I also entered this command into the Tasmota Console:

SetOption65 1

Finally, we want the relays to only trigger momentarily when activated so that a pulse is registered to the garage door openers.  To do that, we must enter two more commands on the Tasmota Console of the relay board:

PulseTime1 1
PulseTime2 1

Once these changes are set, all that is needed is to set the Domoticz IDX address to match the two pushbuttons that were added to the Domoticz console.   At this point it should be possible to remotely trigger the relays from Domoticz and they will trigger ON for 1 second and then switch off when called via Domoticz.  All that is left to do is wire the relay N.O. contacts to the physical garage door wall switches in the garage.  It will now be possible to open or close the garage doors from the Domoticz console via any mobile device that has access to the Domoticz home automation console.


I moved to an area where my AM news station (WBZ) comes in rather scratchy.  Sure I could stream them over the internet on a mobile device, but what about the radios I currently have?  Have they now become paperweights?  Fortunately, WBZ streams online and I found a cool FM transmitter module that I thought “I could use this with a Raspberry Pi to put WBZ on the FM dial near my home”.  The FM module is about $12 and available on Amazon and I already had a raspberry pi computer I could dedicate for the project.  Why not try it?



This article is for informational/educational purposes only.  If you make use of any information in this article, I will not be liable for your use of this information and any action you take based on the technical discussion herein is solely at your own peril/risk!  Please check local laws for this application in your country of residence!  This article also deals with a solution powered by AC mains voltage.  If you do not understand what you are doing, PLEASE be safe and get qualified help!

I installed Ubuntu Linux 20.04.2 server on the raspberry pi computer, and then installed software called Liquidsoap.  Liquidsoap is an audio/streaming swiss army knife and is, of course, open source.  Normally, people use Liquidsoap to capture a live audio source and then create a stream on the internet.  I wanted to do the reverse: pull in an internet stream and play it over the USB DSP that is built into the FM module.  A bonus is that the FM module is also powered via the USB connection – one cable does it all.  Shown here is the finished transmitter:

The FM module is quite versatile.  It has an analog line-in, condenser mic, and USB audio interface all built in!  Depending on what input you use, the module is smart enough to pick that input and use only that.  When I hooked the module to my raspberry pi and ran:

aplay -l

I was able to see the USB audio interface on the FM module:

ubuntu@audio1:~$ aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: Headphones [bcm2835 Headphones], device 0: bcm2835 Headphones [bcm2835 Headphones]
Subdevices: 8/8
Subdevice #0: subdevice #0
Subdevice #1: subdevice #1
Subdevice #2: subdevice #2
Subdevice #3: subdevice #3
Subdevice #4: subdevice #4
Subdevice #5: subdevice #5
Subdevice #6: subdevice #6
Subdevice #7: subdevice #7
card 1: CD002 [CD002], device 0: USB Audio [USB Audio]
Subdevices: 0/1
Subdevice #0: subdevice #0

The “card 1” device is the USB connection to the FM module.
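If you ever script this setup, here is a hedged Python sketch (purely illustrative, not something I needed) that pulls the USB audio card number out of `aplay -l` output like the listing above:

```python
import re

# Sketch: find the ALSA card number of the USB audio interface
# in `aplay -l` output formatted as shown above.
def usb_card_number(aplay_output: str):
    for line in aplay_output.splitlines():
        m = re.match(r"card (\d+):.*USB Audio", line)
        if m:
            return int(m.group(1))
    return None

sample = """\
card 0: Headphones [bcm2835 Headphones], device 0: bcm2835 Headphones [bcm2835 Headphones]
card 1: CD002 [CD002], device 0: USB Audio [USB Audio]
"""
print(usb_card_number(sample))  # 1
```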

All I needed to do now was install and setup liquidsoap.  For that I used this guide and installed with OPAM.  Once I had liquidsoap installed, I created a .liq script which had the following configuration to stream WBZ and play it on the FM module’s USB interface:

str = ""
prog = mksafe(input.http(str))
prog = amplify(0.7,override="replay_gain",prog)
# play the stream on the FM module's USB sound device (device name may differ on your system)
output.alsa(device="plughw:1,0", prog)

With this .liq file saved as play.liq, I could then start it up by running:

liquidsoap ./play.liq

If you want to add this as a systemd service, just follow the conventions to create the service file and install it as a service so it comes up whenever the raspberry pi is started.
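As a hedged example, a minimal unit file might look like this (the user name and the OPAM-installed liquidsoap path are assumptions; adjust them for your install):

```ini
# /etc/systemd/system/play.service
[Unit]
Description=Liquidsoap FM stream
After=network-online.target

[Service]
User=ubuntu
ExecStart=/home/ubuntu/.opam/default/bin/liquidsoap /home/ubuntu/play.liq
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then enable it with sudo systemctl enable --now play.service.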

FM Module Tips

As it comes, the FM module does not have an antenna.   For best results, solder a 1 meter length of wire to the “ANT” solder pad and place the entire RPi/FM setup in a high location within your home.  Find a clear spot on your FM dial using a portable radio and set the FM module to that frequency.  When properly set, you should be able to pick up the signal from your RPi/FM package at least 4 houses away before you start to hear static.  This amount of range from such a small module is pretty decent and sufficient to enjoy your streamed audio source on any ordinary radio near your home.  The sound quality is very good for a $12 module and sounds nice on my Tivoli and other radios.


Connecting the home brew transmitter to a small RF amp brings the power output up to about 10 watts. This power is then fed through an RF bandpass filter before the antenna. This helps eliminate RF harmonics that WILL get you busted in short order! DO NOT operate a device like this at any considerable power level without proper filtering, it is a guaranteed way to get busted for radiating harmonics and interfering with other signals!

The transmitter connected to the amp module (live audio is fed from an icecast stream over CAT6 cable). Note the small white WiFi smart outlet. This lets me remotely turn off the transmitter if I get wind that the FCC is taking an interest. I have the ability to remotely kill it from my smartphone from anywhere, at a moment’s notice:

The RF bandpass filter (SUPER IMPORTANT!)

Finally, the TUNED circular polarized antenna – circular polarization helps reduce multipath signal distortion for mobile listeners. This antenna is precisely tuned to the desired broadcast frequency of 95.1 MHz using a cheap portable VNA (Vector Network Analyzer).


My neighbor recently did a landscape lighting project of his own.  It looked great and was a simple grid-tied system.  I wanted to kill 2 birds with one stone and do a landscape lighting system of my own, but I wanted ours to be connected (wirelessly) to our Domoticz home automation system and I didn’t want to pay for the electricity to run it.  I was able to achieve both goals in this project through the use of ESP8266 and some MOSFETs triggered by PWM which had the added benefit of making the system dimmable if desired.


For this project, I gathered the following materials:

The way my house is situated, all the sunshine is in the back yard – plenty of sun there year round.  The front, where our landscaping is, gets a lot of shade, so it is not a good place for a solar panel.  I also wanted to hide all the power generation and control equipment in the back yard anyway.   I was able to cut a thin slice into the side yard following the foundation from the back yard to the front yard and bury the cable in the dirt easily.  You can’t even see the cable:

In the back, is where the power generation and control stuff was located.   I made the wireless PWM controller inside an  IP67 rated enclosure and put banana posts on for easy connection:

This allows me to control the lights ON/OFF/Dim using my existing Domoticz home automation system.  Commands and telemetry are carried over wifi and MQTT to the Domoticz docker container.  The object in Domoticz can then apply time schedules, change brightness, or even turn on the lights during a motion trigger event from a PIR sensor that can be added to sense presence.

In the back yard, I set the case containing the battery and charge controller, solar panel, and PWM controller under the solar panel, in a spot where there is ample sunlight all day.

In the front, I ran the cable near the areas I wanted the lights and used the included connectors that came with the lamps.

I had the object in Domoticz setup to turn these on at 50% brightness 30 minutes past sun down.  Here’s the final result on how it looks:


We moved into town in July 2020 and as new residents we are always looking for ways to get a pulse on happenings in the town.  One great way to do that is to monitor the radio systems of the municipal services that operate within the town.  To monitor, one can either go out and purchase a $400 scanning receiver, install an antenna, and stay within earshot of the scanner to stay informed, or the (better) option is to set up a dedicated receiver for each service and connect the audio output to a computer (running appropriate software) to create and send a stream to Broadcastify or another streaming relay service.  What’s nice about streaming to Broadcastify is that they archive all the audio you send them, so when something interesting happens and you miss it, you can download the audio from the stream archives and listen to it from anywhere as your schedule permits – or you can just listen live from any mobile device using the Broadcastify app.





I chose to do the second option, because I have plenty of spare radios and computers.   To create my streams I use the following in my arsenal:

  • a low power Intel Atom ultra small form factor computer with plenty of USB ports and running Linux OS.
  • the liquidsoap audio toolkit to define and create the streams.
  • USB audio interfaces with inputs (connected to the radios)
  • UHF or VHF radios to dedicate to the monitoring setup – connected to a common (shared) or individual antennas.
  • An account on Broadcastify or other stream server  – from where you will serve your streams.
  • Broadcastify stream details for each stream (you will use this to setup liquidsoap).
  • a wired network connection (preferred) for your computer generating the streams.

I started with a small energy efficient computer (an ASUS Intel Atom “net top” computer) on which I installed Ubuntu Linux and Liquidsoap.  This computer needs a reliable internet connection and power source, as it will be running 24/7.  I chose a low power computer because I wanted to keep my energy costs low for the project.  Once Linux is installed, and an IP setup on the box, a keyboard and monitor are no longer needed.  You can do the rest of the setup over the local network over SSH.  To setup the computer for streaming, I installed Liquidsoap and created a config file to define the streams: (/etc/liquidsoap/radio.liq)

apt install liquidsoap
apt install liquidsoap-plugin-alsa

Once installed, you need to create a config file to tell liquidsoap how to create and process your streams:

# Define physical audio pickups:
radio4 = mksafe(input.alsa(device="plughw:CARD=USB,DEV=0"))
radio5 = mksafe(input.alsa(device="plughw:CARD=CODEC,DEV=0"))

# Define stream destinations (host= should be the ingest server from your feed details):
output.icecast(%mp3(stereo=false, bitrate=16, samplerate=22050),
port=80, password="p@55w0rd", genre="Scanner",
description="Northbridge Police Dispatch", mount="/kejrncsk888",
name="Northbridge Police Dispatch - 453.1875 MHz", user="source",
url="", radio4)

output.icecast(%mp3(stereo=false, bitrate=16, samplerate=22050),
port=80, password="p@ssw0rd", genre="Scanner",
description="Northbridge Fire Dispatch", mount="/s7sfsd87dsf",
name="Northbridge Fire Dispatch - 154.3625 MHz", user="source",
url="", radio5)

You get the parameters for the above output definitions from the feed details in your broadcastify account when you apply to setup a feed.  Once setup in the config file, you can issue the following command to restart the liquidsoap service and bring your feeds online:

sudo systemctl restart liquidsoap

Once restarted, liquidsoap should now be sending your audio to broadcastify.  You should see your feeds online:


Now that your stream is up, it’s time to hook up the radios and start sending audio over your stream (radio configuration/programming is out of the scope of this article).  Connect the “speaker out” jack on the back of the radio to the correct “line in” port on your USB audio pickup device and set the volume halfway to start.  (You don’t want too much audio or your stream could be noisy/distorted).  As the radio is receiving audio, adjust the volume knob on the radio for good balance of loudness and clarity.  Do the same on any other radios you wish to setup.  Be sure to lock the tuning so that the frequency can’t be accidentally changed.


Because we’re pulling signals off the air and streaming them online, you’ll need to either buy or make an antenna for such a dedicated setup.  I chose to make a simple one using an SO-239 connector:


Now, you can download the Broadcastify app on any mobile device and listen to the feeds from anywhere.  The data rate is extremely small (16 kbps), so listening for long periods will not consume much of a data plan.  You can stay informed by listening whenever you see local police or fire activity in your town.
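
To put that data usage in perspective, a quick bit of shell arithmetic shows what a 16 kbps stream costs per hour:

```shell
# Data usage of a 16 kbps (kilobits/second) stream over one hour:
kbps=16
kb_per_hour=$(( kbps * 3600 / 8 ))   # kilobits -> kilobytes over 3600 seconds
echo "${kb_per_hour} KB/hour"        # 7200 KB, roughly 7 MB per hour
```

Call it about 7 MB an hour, or well under 200 MB even for a full day of continuous listening.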

Rebuilding My Home Network


I have had my ESXi box for YEARS and recently decided to take a dive into the world of KVM (QEMU) on Ubuntu Linux.  It’s a popular, completely open hypervisor that has become a staple in many datacenter environments.  I had been dreading the change because it meant rebuilding a few of the VMs I still had.  Though I recently took the plunge into docker containers and have been containerizing the apps I can, I still have a handful of VMs that perform various functions on my home network and “home lab”.  I’ll preface this by saying that the ESXi box served us well over the years and I hardly ever had to touch it.  The one thing I did not have with ESXi was other hosts to move VMs to.  The license isn’t free and ESXi is very persnickety about hardware requirements, which put me off trying to implement vMotion (VMware’s method of migrating/moving VMs around in a cluster).  With KVM, my options are more open: I have a handful of compatible hosts, so if I ever have a hardware problem I can move VMs in a pinch.


To create a small cluster of physical hosts to run my new KVM-based VMs, I simply carried out these steps (I’ll go into greater detail on each one):

  • Install Ubuntu Server 20.04 OS on each physical machine
  • Configure a netplan for each physical KVM (pkvm) host that achieves:
    • bonded (teamed) interfaces
    • LACP (802.3ad) attributes to bring up the channel-group session
    • vlan tagged traffic over the bond
    • bridge interfaces to allow kvm guest VMs to attach to the desired vlan
  • Install necessary KVM packages
  • Configure a port channel interface on the main switch and define the physical switch ports that will participate in the LACP channel-group
  • Configure a common NFS mount on the NAS to hold the KVM guest images on the network
  • Move the new KVM VMs into my newly rebuilt KVM host
  • Consume donuts that my wife and kids made while the “internet was out”

Setting Up a Netplan

Ubuntu 20.04 uses netplan to configure network interfaces.  Netplan uses a simple YAML formatted file that is easy to write and back up.  If you screw up, you can always revert to a previous file (always make a backup!).  My netplan file looks like this:

network:
  version: 2
  ethernets:
#    enp1s0f0: {}
#    enp1s0f1: {}
    enp1s0f2: {}
    enp1s0f3: {}
  bonds:
    bond0:
      dhcp4: no
      interfaces:
      - enp1s0f2
      - enp1s0f3
      parameters:
        mode: 802.3ad
        lacp-rate: fast
        mii-monitor-interval: 100
  vlans:
    vlan.2:
      dhcp4: no
      id: 2
      link: bond0
    vlan.5:
      dhcp4: no
      id: 5
      link: bond0
    vlan.11:
      dhcp4: no
      id: 11
      link: bond0
    vlan.10:
      dhcp4: no
      id: 10
      link: bond0
    vlan.50:
      dhcp4: no
      id: 50
      link: bond0
    vlan.73:
      dhcp4: no
      id: 73
      link: bond0
    vlan.300:
      dhcp4: no
      id: 300
      link: bond0
  bridges:
    # bridge names are arbitrary; KVM guest VMs attach to these to reach a vlan
    br73:
      interfaces:
      - vlan.73
      addresses: []
    br2:
      interfaces:
      - vlan.2
      addresses: []
    br5:
      interfaces:
      - vlan.5
      addresses: []
    br10:
      interfaces:
      - vlan.10
      addresses: []
    br11:
      interfaces:
      - vlan.11
      addresses: []
    br50:
      interfaces:
      - vlan.50
      addresses: []
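
YAML is whitespace-sensitive, so before applying a new netplan it’s worth a quick syntax check.  A minimal sketch, assuming python3 with PyYAML is available (the demo writes a throwaway file; point it at your real /etc/netplan/*.yaml instead):

```shell
# Write a tiny sample netplan, then parse it to catch indentation mistakes.
f=$(mktemp --suffix=.yaml)
printf 'network:\n  version: 2\n  bonds:\n    bond0:\n      dhcp4: no\n' > "$f"
python3 -c 'import sys, yaml; yaml.safe_load(open(sys.argv[1])); print("YAML OK")' "$f"
```

Netplan itself will also refuse to apply a file it cannot parse, but catching the mistake before touching a remote host’s network config is a lot less stressful.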

Installing KVM packages

sudo apt install -y qemu qemu-kvm libvirt-daemon bridge-utils virt-manager virtinst
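
Before relying on KVM, it’s worth confirming the CPU actually exposes hardware virtualization, since the package install won’t warn you if it doesn’t:

```shell
# Count logical CPUs reporting virtualization flags (vmx = Intel VT-x, svm = AMD-V).
# A result of 0 means virtualization is unsupported or disabled in the BIOS/UEFI.
flags=$(grep -c -E 'vmx|svm' /proc/cpuinfo || true)
echo "logical CPUs with virtualization flags: ${flags}"
```

The kvm-ok command from Ubuntu’s cpu-checker package performs a similar check.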

Configuring a port channel interface

With the netplan configuration saved and in place, I executed sudo netplan apply (sudo netplan try is a safer first step, since it rolls the config back automatically if you lose connectivity) and then proceeded to set up the switch side of the bonded connection.  The first thing I needed to do was configure a new port channel interface on the switch that would be suitable for carrying tagged traffic:

interface Port-channel2
 description KVMBOX
 switchport trunk encapsulation dot1q
 switchport mode trunk
 spanning-tree portfast trunk

After setting up the port channel interface, I now had to add the physical interfaces that are cabled from the switch to the big KVM box:

interface GigabitEthernet1/0/44
 description TRUNK_TO_VBOX1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 lacp port-priority 500
 channel-protocol lacp
 channel-group 2 mode active
interface GigabitEthernet1/0/45
 description TRUNK_TO_VBOX1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 lacp port-priority 500
 channel-protocol lacp
 channel-group 2 mode active

wr mem

Once both ports were setup, I could then test the channel-group status:

CORE-SW#sh etherchannel 2 summary
Flags:  D - down        P - bundled in port-channel
        I - stand-alone s - suspended
        H - Hot-standby (LACP only)
        R - Layer3      S - Layer2
        U - in use      f - failed to allocate aggregator

        M - not in use, minimum links not met
        u - unsuitable for bundling
        w - waiting to be aggregated
        d - default port

Number of channel-groups in use: 2
Number of aggregators:           2

Group  Port-channel  Protocol    Ports
2      Po2(SU)         LACP      Gi1/0/44(P) Gi1/0/45(P)

Setting up an NFS share for holding KVM images

This part is easy:

  • sudo apt install nfs-common

Add a line to /etc/fstab, substituting your NAS’s hostname or IP for <nas>:

<nas>:/vol1   /vol1   nfs     _netdev,nfsvers=3,nolock,noatime,bg    0       0

Then create the mount point and mount it: sudo mkdir /vol1 && sudo mount /vol1.  I then created a symbolic link from the /var/lib/libvirt/images directory to /vol1/kvms (where the other KVM servers put their images).
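
The symlink step can be sketched like this.  It is shown against a scratch directory so it is safe to run anywhere; on the real host the link is /var/lib/libvirt/images pointing at /vol1/kvms, and you would move the original images directory aside first:

```shell
# Demonstrate the link layout in a throwaway directory tree.
root=$(mktemp -d)
mkdir -p "$root/vol1/kvms"            # stands in for the NFS-mounted share
mkdir -p "$root/var/lib/libvirt"      # stands in for the libvirt tree
ln -s "$root/vol1/kvms" "$root/var/lib/libvirt/images"
readlink "$root/var/lib/libvirt/images"
```

With the link in place, every KVM host that mounts the same NFS export sees the same guest images under libvirt’s default path.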

Now it’s time to move my newly built KVM VMs (created on a temporary KVM host) into my new KVM host