All posts by K1WIZ


A “terminal server” or centralized shared desktop server has obvious benefits. Here are just a few:

  • Central application management
  • Securely access home automation or BMS systems remotely
  • Use cheap thin clients or even older machines that are now too slow
  • Cross platform – this works with Windows, Mac, and Linux machines
  • Remotely accessible from anywhere
  • Reduced support requirements
  • Privately use another desktop from work or home
  • Save on the expense of premium hardware on every desktop
  • More secure application deployment within trusted network zones
  • License Free! – this type of terminal server requires no expensive licensing


I built a terminal server running Xubuntu 22.04 with local users. To enable connections from standard RDP clients (Windows, macOS, and Linux machines all have RDP clients readily available), I installed the xrdp package. You can perform the following steps to turn any Xubuntu desktop instance into a full terminal server:

sudo apt install xrdp
sudo systemctl enable --now xrdp
sudo ufw allow from any to any port 3389 proto tcp

When adding new users to the box, simply add each desktop user the usual way in Ubuntu.

Once the user is added, give them whatever permissions are desired. If you have applications and a printer already set up on this box, the user will have access to them and can print from within their session. Once set up, give the user their credentials and have them log in for the first time.

From another Linux desktop, they can use the “Remmina” app to open a session, or from Windows use Remote Desktop, etc:

Hit “Save and Connect”

Upon logging in, they will see the login greeter:

Once logged in, the terminal server desktop appears:

From within this session, the desktop experience is perfect for running centralized applications, or browsing the internet from the terminal server’s location.


I had a need to create a shared desktop Ubuntu machine that would be used by more than one person, but I wanted it to function more like an internet kiosk, and not be changeable by the end user. After playing with Raspbian OS for other projects, I knew about the overlayfs method of making the SD card read only. This is often done to protect the SD card from being prematurely worn by constant writing. In this application, I wanted to do something similar using the Ubuntu OS on more powerful J4125 hardware. After some initial googling, I learned about the overlayroot package. Installing and setting it up is fairly easy and can be done in mere minutes. Read on…


The first thing we will want to do is install the Ubuntu operating system and all the software applications we might want. We want to set up the machine exactly how we will want it BEFORE we enable the overlayroot read-only FS. Set all preferred desktop settings, packages, themes, browser settings, backgrounds, desired user accounts, and printers FIRST!

After we have the desktop provisioned the way we want it, the next step is to “freeze” this configuration by installing and enabling overlayroot:

sudo apt install overlayroot

Once you have it installed, you will need to modify a configuration file to enable it. Edit the following file – the only change needed is to set the overlayroot="" variable to overlayroot="tmpfs":

sudo nano /etc/overlayroot.conf


Save and close the file. There are other options for this config, but they are out of the scope of this article. For more information, consult the config file itself; it is loaded with documentation in the comments.

Last step, reboot the machine. Your machine will come up in a read only state. Any changes made to the system will be eliminated after reboot! This is perfect for a shared computer where you don’t want multiple people mucking up the machine, and cleanup is as easy as rebooting.
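If you want to verify the overlay took effect after the reboot, you can check how "/" is mounted. Here is a minimal Python sketch of that check; the helper and its parsing of /proc/mounts are my own illustration, not part of the overlayroot package:

```python
# Illustrative helper (not part of overlayroot): scan mount-table text
# for an overlay-type root filesystem.

def overlay_active(mounts_text: str) -> bool:
    """Return True if "/" is mounted with an overlay filesystem type."""
    for line in mounts_text.splitlines():
        fields = line.split()
        # /proc/mounts format: device mountpoint fstype options dump pass
        if len(fields) >= 3 and fields[1] == "/" and fields[2] in ("overlay", "overlayfs"):
            return True
    return False

# Usage on the machine itself:
#   with open("/proc/mounts") as f:
#       print(overlay_active(
```

Running `mount | grep overlay` from a shell gives you the same answer interactively.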

Undo The Overlayroot

If you should wish to undo the overlayroot to update the system, or add/change something, you can do so by passing overlayroot=disabled on the kernel line at the grub boot:


Here’s how:

Reboot the machine, and at the grub menu hit “e” to edit the chosen boot entry, then add the option to the kernel (linux) line like so:

menuentry 'Ubuntu, with Linux 3.5.0-54-generic (Writable)' --class ubuntu --class gnu-linux --class gnu --class os {
	gfxmode $linux_gfx_mode
	insmod gzio
	insmod part_msdos
	insmod ext2
	set root='(hd0,msdos1)'
	search --no-floppy --fs-uuid --set=root 28adfe9d-c122-479a-ab81-de57d16516dc
	linux	/vmlinuz-3.5.0-54-generic root=/dev/mapper/faramir-root ro overlayroot=disabled
	initrd	/initrd.img-3.5.0-54-generic
}

Keep in mind this is a ONE-TIME modification to the boot line. Once booted, make all your changes and then reboot again, and the system will be restored to the overlayroot read-only state. If you wish to permanently undo the overlayroot, clear the overlayroot="tmpfs" variable in /etc/overlayroot.conf BEFORE rebooting.

I bought a used Dell R430 server from eBay. It still had the old hostname showing in the system details on the iDRAC. Without the OMSA tool installed on the server, it is hard to change the displayed name, and OMSA also enables a lot of other admin functionality. I read many guides on how to install the tool for my system, so I’m posting the process here in hopes it will help someone.

Step 1

Install the repository. Dell maintains a complete repo for Ubuntu/Debian users at:

You need to review the support matrix on that repo for your server model, generation, and supported OS. In my case, for my R430 (13th generation), I installed the version indicated there. Here’s how I added the repo:

sudo gpg --keyserver --recv-key 1285491434D8786F
sudo gpg -a --export 1285491434D8786F | sudo apt-key add -
echo 'deb focal main' | sudo tee -a /etc/apt/sources.list.d/
sudo apt update

Step 2

Once apt was updated to include the repo, all that is left to do is install the tool:

sudo apt install srvadmin-all

After installation, reboot the server (the installation makes many changes) and then start the OMSA service:

sudo /opt/dell/srvadmin/sbin/ start

After starting the service, you should then be able to access the admin page using a local account at:


Once logged in, you will then see the dashboard:


Because I have multiple computer systems (servers, desktops/laptops, and embedded home automation devices), I wanted a master time source that would keep working even if my internet connection became unavailable. The best way to achieve this is by using time signals from GPS satellites. Commercial GPS clocks tend to cost upwards of several thousand dollars. My budget isn’t sized to afford a commercial GPS clock, but I wanted one so I did not have to rely on internet NTP sources should the internet connection fail. Fortunately, there are a number of inexpensive GPS modules on the market which allow you to build such a master clock. I built one of these about 10 years ago but never documented the project, so I am documenting it now. For this project, you will want a GPS module that offers a PPS (Pulse Per Second) output. The PPS signal allows the NTP server to precisely align the GPS seconds with the rising edge of the PPS pulse. The PPS signal is read from an available GPIO pin on the Raspberry Pi. Any Raspberry Pi board will do – as long as it is dedicated to this purpose and running no other applications.


Materials used for this project:

Preparing the Raspbian OS:

To prepare and install all the needed software run:

sudo apt update
sudo apt dist-upgrade
sudo apt install pps-tools gpsd gpsd-clients gpsd-tools chrony

Now, wire the GPS module to the RPi as follows: (Note: we’re using GPIO 18)

Pin connections:

  1. GPS PPS to RPi pin 12 (GPIO 18)
  2. GPS VIN to RPi pin 2 or 4
  3. GPS GND to RPi pin 6
  4. GPS RX to RPi pin 8
  5. GPS TX to RPi pin 10

Now, make the following config changes:

sudo bash -c "echo 'enable_uart=1' >> /boot/config.txt"
# the next lines are for the GPS PPS signal
sudo bash -c "echo 'dtoverlay=pps-gpio,gpiopin=18' >> /boot/config.txt"
sudo bash -c "echo 'init_uart_baud=9600' >> /boot/config.txt"
# load the pps-gpio module on boot
sudo bash -c "echo 'pps-gpio' >> /etc/modules"

Open the following file and edit as shown below:

sudo vim /etc/default/gpsd

# Devices gpsd should collect to at boot time.
# They need to be read/writeable, either by user gpsd or the group dialout.
DEVICES="/dev/ttyS0 /dev/pps0"

# Other options you want to pass to gpsd

# Automatically hot add/remove USB GPS devices via gpsdctl

It’s now time to reboot the GPS clock so the above changes take effect. After the reboot, you can verify that the pps-gpio module loaded by running:

pi@time:~ $ lsmod | grep pps
pps_ldisc              16384  2
pps_gpio               16384  2

If you see the pps_gpio module in the output above, then please proceed. If not, GO BACK AND CHECK your /boot/config.txt entries and your wiring!
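If you end up scripting this health check, the same lsmod test is easy to express in Python. This is my own illustrative helper, not part of pps-tools:

```python
def pps_module_loaded(lsmod_output: str) -> bool:
    """Return True if pps_gpio appears in the output of `lsmod`."""
    return any(
        line.split()[0] == "pps_gpio"
        for line in lsmod_output.splitlines()
        if line.strip()
    )

# Usage:
#   import subprocess
#   out = subprocess.run(["lsmod"], capture_output=True, text=True).stdout
#   print(pps_module_loaded(out))
```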

sudo vim /etc/chrony/chrony.conf

# Include configuration files found in /etc/chrony/conf.d.
confdir /etc/chrony/conf.d

# Put in some reliable NTP backups (these will be fallback sources if the GPS signal disappears).
# In my case, I had a GPS module fail after 10 years; without these backups, your clocks
# will wander!  Lesson learned!
server iburst

# Enter these two statements to tell Chrony to prefer the GPS time.
refclock SHM 0 delay 0.325 refid NMEA
refclock PPS /dev/pps0 refid PPS

# Set this to whatever subnet you wish to allow to query the GPS clock.  Replace the below
# with whatever is relevant in your environment!  In my example, the following subnet is used 
# by my network switches which are NTP peers.  Only the switches sync with the GPS clock and
# network endpoints in other subnets/vlans query the nearest switch for time.  

# This directive specify the location of the file containing ID/key pairs for
# NTP authentication.
keyfile /etc/chrony/chrony.keys

# This directive specify the file into which chronyd will store the rate
# information.
driftfile /var/lib/chrony/chrony.drift

# Save NTS keys and cookies.
ntsdumpdir /var/lib/chrony

# Uncomment the following line to turn logging on.
#log tracking measurements statistics

# Log files location.
logdir /var/log/chrony

# Stop bad estimates upsetting machine clock.
maxupdateskew 100.0

# This directive enables kernel synchronisation (every 11 minutes) of the
# real-time clock. Note that it can’t be used along with the 'rtcfile' directive.

# Step the system clock instead of slewing it if the adjustment is larger than
# one second, but only in the first three clock updates.
makestep 1 3

# Get TAI-UTC offset and leap seconds from the system tz database.
# This directive must be commented out when using time sources serving
# leap-smeared time.
leapsectz right/UTC

Now that you have completed the above, it’s time to restart the Chrony service:

sudo systemctl restart chrony

You can now check to see if the GPS is working:

sudo cgps

You should see output like:

Note the “Status” line. Having a 3D GPS fix is good; it indicates a solid fix and therefore reliable time.

If you are getting the above output, your GPS should be working and chrony should be serving time to your network. You can check to see if chrony is getting time from the GPS:

sudo chronyc -n sources

MS Name/IP address         Stratum Poll Reach LastRx Last sample               
#- NMEA                          0   4   377     7   +141ms[ +141ms] +/-  163ms
#* PPS                           0   4   377     9   -185ns[ -427ns] +/-  312ns
^-                  1   7   377    72  -1348us[-1348us] +/-   24ms
^-                  1   8   377   111  -1386us[-1387us] +/-   24ms
^-                 3  10   377   676  +5834us[+5900us] +/-   59ms
^-                 1   7   377    23    -66us[  -66us] +/- 3640us

In the above example, the time source with the asterisk (*) is the preferred in-use time source. The hash (#) indicates a locally attached time source, while the caret (^) indicates a remote time source. Here you can see that the GPS PPS time source is accurate to within NANOSECONDS!
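When monitoring the clock, those flag characters can also be checked programmatically. Here is a Python sketch that parses `chronyc -n sources` data lines; the parser is my own illustration, based only on the output format shown above:

```python
def parse_sources(output: str):
    """Classify each data line of `chronyc -n sources` output.

    Column 1 is the source mode (# = locally attached reference clock,
    ^ = remote server); column 2 is the selection state
    (* = current best source, - = not selected, + = combined).
    """
    sources = []
    for line in output.splitlines():
        if len(line) < 3 or line[0] not in "#^=":
            continue  # skip the header and blank lines
        sources.append({
            "name": line[2:].split()[0],
            "local": line[0] == "#",
            "selected": line[1] == "*",
        })
    return sources
```

A monitoring script could alert whenever no source has `selected` set, meaning chrony has lost its preferred time source.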

Harden The Clock

At this point, it is a good idea to enable an overlay filesystem to preserve your MicroSD card. This ensures that no more writing occurs, so the card will not get corrupted if the GPS clock suddenly loses power. To do so, run:

sudo raspi-config

Select Performance Options:

Then Overlay File System:

Choose YES to enable the overlay filesystem, then YES to make the /boot partition READ ONLY! At this point, you can safely power cycle the GPS clock and it should reboot and continue working automatically. Be sure to place the antenna in view of the sky. If you are using an internal antenna, place the GPS clock near a window.

Just wanted to publish a picture of my LED-pimped rack for the heck of it. For years my stuff wasn’t set up in the neatest way, and after acquiring this rack just after our move to NC, I was determined to “clean up my act” and do this stuff properly, just as I would in the datacenters I work in. It was time to bring my professional game home. The rack has really become a small “datacenter in a box”. In the rack are:

  • Cisco 3750G 48 port PoE (802.3af) core switch – multiple vlan gateways
  • GPS governed time server – the cisco switch “peers” with this, and all equipment queries the switch for time
  • Two 24-port keystone patch bays terminate in-cabinet and external Cat6 runs
  • pfsense Firewall – HUGE NAT state table capacity! (inbound static routes to core switch)
  • Home Lab – various virtual machines and containers for testing out solutions before production deployment
  • Home Automation – data acquisition, command & control
  • Home Security – various sensors, remote notifications
  • Network Security – DNS Jail/Blocklist, Network Tap (for wiresharking/logging of traffic)
  • Unifi (wifi) Controller Dashboard – controls multiple wireless APs throughout the property
  • Minecraft Server – spun up for my neighbors and friends
  • Web Server
  • Storage Array – terabytes of fault tolerant file storage (8 drives: 6 live and 2 hot spares)
  • UHF GMRS Radio Repeater – used for neighborhood watch in our community
  • Scanning Radios – used to stream local police and fire dispatch to the internet for remote listening
  • Tripplite Double Conversion UPS – keeps all systems floating during transfer to generator power when power goes out
The Rack – Pimped In Purple
The Bridgeberry Minecraft Server – Showing “City In The Clouds” at sunset
The UHF GMRS Neighborhood Watch Repeater System

I recently purchased a Weatherflow weather station (Tempest) and set it up in the back yard. What I like about this company is that they are very open source friendly and provide API access to the data from your weather station. This data is valuable for many purposes, including home automation and weather alerting. My first exercise was to learn how to use the API for making data calls and then doing something with the data. The Tempest device tracks the following weather conditions:

  • Air Temperature
  • Barometric Pressure
  • Relative Humidity
  • Precipitation & accumulation
  • Wind Average and Gust
  • Wind Direction
  • Solar Radiation
  • UV Index
  • Brightness (expressed in lux)
  • Lightning Strike (count, distance, by time)
  • “Feels Like” temperature
  • Heat Index
  • Wind Chill
  • Dew Point
  • Pressure Trend
  • Wet Bulb Temperature

The API provides access to a JSON formatted feed which allows you to write a programmatic method to access, store, and consume the data for just about anything. Here’s what the returned JSON string looks like:

[{'timestamp': 1654725915, 'air_temperature': 28.0, 'barometric_pressure': 1001.2, 'station_pressure': 1001.2, 'sea_level_pressure': 1014.8, 'relative_humidity': 78, 'precip': 0.0, 'precip_accum_last_1hr': 0.0, 'precip_accum_local_day': 0.0, 'precip_accum_local_day_final': 0.0, 'precip_accum_local_yesterday': 0.0, 'precip_accum_local_yesterday_final': 0.0, 'precip_minutes_local_day': 0, 'precip_minutes_local_yesterday': 0, 'precip_minutes_local_yesterday_final': 0, 'precip_analysis_type_yesterday': 0, 'wind_avg': 0.5, 'wind_direction': 359, 'wind_gust': 1.2, 'wind_lull': 0.0, 'solar_radiation': 486, 'uv': 6.04, 'brightness': 58367, 'lightning_strike_last_epoch': 1654217568, 'lightning_strike_last_distance': 39, 'lightning_strike_count': 0, 'lightning_strike_count_last_1hr': 0, 'lightning_strike_count_last_3hr': 0, 'feels_like': 31.8, 'heat_index': 31.8, 'wind_chill': 28.0, 'dew_point': 23.8, 'wet_bulb_temperature': 24.9, 'wet_bulb_globe_temperature': 28.7, 'delta_t': 3.1, 'air_density': 1.15816, 'pressure_trend': 'steady'}]
The Tempest weather station by Weatherflow

Once you set up access to the API, you can then make use of the data in real time. I wrote a simple Python script to access the JSON feed and create a text string to be pushed to my LED scrolling weathertube. Here’s the code below:

import requests
from requests.structures import CaseInsensitiveDict

url = ""

headers = CaseInsensitiveDict()
headers["Accept"] = "application/json"
headers["Authorization"] = "Bearer YOUR-API-KEY"

def convertTemp(c):
    f = (c * 1.8) + 32
    return f

resp = requests.get(url, headers=headers)

wx = resp.json()
wxobs = wx["obs"]
tempf = convertTemp(wxobs[0]["air_temperature"])
dewpoint = convertTemp(wxobs[0]["dew_point"])

output = "     Current Conditions:   UV Index: " + str(round(wxobs[0]["uv"], 1)) + "       Temperature: " + str(round(tempf, 1)) + "F" + "       Dewpoint: " + str(round(dewpoint, 1)) + "F"
output += "     Humidity: " + str(wxobs[0]["relative_humidity"]) + "%" + "         Pressure: " + str(wxobs[0]["pressure_trend"]) + "      Wind Gust: " + str(wxobs[0]["wind_gust"]) + "MPH"
output += "     Lightning Strikes Last Hour: " + str(wxobs[0]["lightning_strike_count_last_1hr"]) + "      Lightning Last Distance: " + str(wxobs[0]["lightning_strike_last_distance"]) + " Miles"

# write the display string out for the push script to pick up
file_object = open('/scripts/wx/conditions', 'w')
file_object.write(output)
file_object.close()

The above Python script is called from a bash script that is executed from cron every 5 minutes:


python3 /scripts/
msg=`/bin/cat /scripts/wx/conditions`

curl -X POST -F "msg=$msg" -s > /dev/null

In the above example, the host is the IP address of the weather tube which accepts curl POST input to update the display.

Had an application where I had to convert a ton of CSV logs to JSON format for ingestion by another system. I didn’t have good luck with Python, so I tried PHP. Posting this work here in the hopes it will help someone else. Keep in mind, this is a basic example and the following warning applies should this be considered for production use:

Stern Warning: This example assumes the source of the CSV files is doing any error handling before writing records in the CSV source files. It is also assumed that the source CSV files each have a KEY as the first row in each file so that fields in the data rows are properly represented. If this is not the case in your application, the below example will require further enhancement. Only use the example as shown if you are confident that your application/process creating the CSV source input files is doing proper data validation and handling!

The Code: (convert.php)

<?php
// php function to convert csv to json format.  Takes 2 arguments:
// arg1 = input csv file     arg2 = output json file

$fin = $argv[1];
$fout = $argv[2];

function csvToJson($fin, $fout) {
    // File Handles
    if (!($fp = fopen($fin, 'r'))) {
        die("Can't open file...");
    }
    $fo = fopen($fout, 'a');

    // Processing: the first row is the key (header) row
    $key = fgetcsv($fp, 0, ",");

    while ($row = fgetcsv($fp, 0, ",")) {
        $json = array_combine($key, $row);
        $json = json_encode($json) . "\n";
        fwrite($fo, $json);
    }

    // release file handles
    fclose($fp);
    fclose($fo);
}

csvToJson($fin, $fout);

To use this script, here’s an example from my shell:

for FILE in /var/convert/csv/*.csv; do php /var/convert/convert.php "$FILE" /var/convert/json/$(basename "$FILE").json; done
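For comparison, the same line-delimited JSON conversion can also be sketched with Python’s standard library. This is my own illustration (carrying the same assumption that the first row of each CSV is the header row), not the script the article uses:

```python
import csv
import json

def csv_to_jsonl(fin: str, fout: str) -> None:
    """Convert a CSV file (first row = header) to line-delimited JSON."""
    with open(fin, newline="") as src, open(fout, "a") as dst:
        # DictReader pairs each data row with the header row,
        # like array_combine() does in the PHP version above.
        for record in csv.DictReader(src):
            dst.write(json.dumps(record) + "\n")
```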

If this helped you, or if you have suggestions for refinement, please let me know in the comments below! 🙂


The technology discussed in this article has serious potential to land you in trouble under possible state and federal wiretap statutes. Your use of this information is at your own risk and I cannot be held liable for your failure to use this information in accordance with local/federal laws. You agree that any use of this information is at your risk and that you agree to follow laws in your area when using the materials and software technology discussed herein. This information is published for educational purposes only.

I wanted to build a super sensitive microphone to pickup sounds and transmit them via my cloud streaming server so that I could monitor an area remotely. The project goals were:

  • had to be inexpensive, compared to commercially available “off the shelf” offerings
  • had to use existing open source components (software/hardware)
  • had to be sensitive enough to pickup sounds from adjacent rooms
  • had to be wireless
  • had to use an efficient sound codec to transmit picked up audio
  • had to be easy to operate

Parts List:

  • Raspberry Pi Zero 2W (any small ARM board should work, but it’s got to have wifi) $15
  • 32GB MicroSD card – you can use smaller, but this is what I had on hand $8
  • Dupont ribbon cables – $1
  • 5V wall wart – I had these on hand, but you should be able to source for about $5
  • MEMS Mic element – (INMP441) – $4
  • Polycarbonate project case – (BUD Industries PIP-11760-C) $15
  • Raspbian OS 11 – (based on Debian Linux) FREE
  • Liquidsoap audio toolkit – (installed via OPAM) FREE

Total Project Cost: $48USD Here’s a picture of the finished unit (I built 2):

Mic McCloud

The mic element is held in place by a bead of super glue around the edge. In this project, I did not build a stereo mic, but rather just a mono pickup. (I only wired one mic element and set it to the LEFT channel) It would be fairly easy to wire a second element and do a stereo version. Refer to the wiring diagram for how to wire a stereo version:

For stereo, wire as shown with two elements. For mono, just wire one element as LEFT

Here’s a picture of the hardware kit. I built two units, so what is shown are two Raspberry Pi Zero 2W and two INMP441 MEMS elements:

Why build one, when you can build 2?

The Final Production Version

Here’s a close-up of the MEMS mic element:

Don’t let the tiny hole fool you, this mic hears EVERYTHING!


For software, I wanted to keep things simple: no GUIs, no top heavy libraries or applications, just bare Linux, minimal ALSA config, I2S driver, and one of my favorite audio tools: liquidsoap. Follow these steps to prepare your system:

Deploy your OS

I used Raspbian OS 11 as the OS; it is based on Debian, which makes it a familiar and logical choice. I won’t get into how to deploy the OS, as that’s not really the scope of this article, but you can find this information on the Raspbian website. Once you have the OS deployed to your card, you will need a temporary Pi with more RAM. (NOTE: the Raspberry Pi Zero 2W only has 512MB of RAM, which is NOT enough to compile the software you will need.) I suggest you put the SD card into a Pi 3 or Pi 4 with at least 2GB of RAM and do all the steps herein before finally transferring the card to your Raspberry Pi Zero 2W for production use.

Install required packages

Now that you have your OS loaded on your SD card, put the card in your temporary Pi unit and perform these steps logged in as your default “pi” user:

sudo apt update
sudo apt install opam screen aptitude make gcc git bc libncurses5-dev bison flex libssl-dev debhelper-compat linux-headers dkms
sudo usermod -aG audio pi

At this point, go ahead and reboot the Pi by issuing: sudo reboot. When the Pi reboots, we need to uncomment the deb-src repository so we can install libfdk-aac-dev from the source packages. Debian is not able to distribute these as binary packages because the AAC+ codec is not free. We can, however, easily get it from the source packages and have debhelper compile it for us. Follow these steps as your “pi” user:

sudo nano /etc/apt/sources.list

(uncomment the source deb package repo as shown):

deb bullseye main contrib non-free rpi
# Uncomment line below then 'apt-get update' to enable 'apt-get source'
deb-src bullseye main contrib non-free rpi

(save the file in nano by doing CTRL-O then exit by doing CTRL-X), and run the following command:

sudo apt update
sudo apt-get source libfdk-aac-dev
sudo apt-get --build source fdk-aac

(after the packages are downloaded and built, you will have the following packages in your current directory), run the following commands to finally install them:

sudo dpkg -i libfdk-aac2_2.0.1-1_armhf.deb
sudo dpkg -i libfdk-aac-dev_2.0.1-1_armhf.deb

At this point, we should have the minimum necessary packages installed. We can now go ahead and setup the I2S driver:

sudo nano /boot/config.txt

(you need to ensure the following are set as shown):


(after any changes, save the file and exit)

sudo dpkg -i snd-i2s-rpi-dkms_0.0.2_all.deb
sudo modprobe snd-i2s_rpi

(edit /etc/modules and add the following, then save and close the file):

sudo reboot

Last, we create a very simple ALSA config file:

(open for editing: /etc/asound.conf and REPLACE all contents with, save and close the file):

pcm.!default {
        type hw
        card 0
}
ctl.!default {
        type hw
        card 0
}
At this point, your Pi is ready to support I2S sound input. Reboot the Pi once more and then you can do the following command to verify:

arecord -l
**** List of CAPTURE Hardware Devices ****
card 0: sndrpii2scard [snd_rpi_i2s_card], device 0: simple-card_codec_link snd-soc-dummy-dai-0 [simple-card_codec_link snd-soc-dummy-dai-0]
  Subdevices: 0/1
  Subdevice #0: subdevice #0

The last software bits we need to add are Liquidsoap via OPAM:

opam init
opam switch create 4.10.0
opam depext taglib mad lame vorbis cry samplerate ocurl liquidsoap fdkaac alsa
opam install taglib mad lame vorbis cry samplerate ocurl liquidsoap fdkaac alsa
sudo ln -s ~/.opam/4.10.0/bin/liquidsoap /sbin/liquidsoap

At this point Liquidsoap should be installed, and now we can create a .liq file to define the output stream. This article assumes you already have an Icecast streaming server set up and know how to connect sources to it. To define your mic’s liquidsoap stream output, create a file in “pi”’s home directory with the following content:

input = mksafe(input.alsa()) 
input = amplify(10.0,override="replay_gain",input)
input = filter.iir.butterworth.low(frequency = 10000.0, order = 8, input)
input = filter.iir.butterworth.high(frequency = 200.0, order = 8, input)

output.icecast(
  %fdkaac(channels=2, samplerate=44100, bandwidth="auto", bitrate=32, afterburner=true, aot="mpeg4_he_aac_v2", transmux="adts", sbr_mode=false),
  port=8000, password="P@55w0rd", genre="live",
  description="LIVE", mount="/mic2",
  name="MIC 2", user="source",
  url="", input)


Once you have this file set, you can test your install by doing:

liquidsoap -v mic.liq

If all is good, you should see the stream start. To automate it to start at boot time, you can place a file in /etc/cron.d:


@reboot         pi      screen -d -m liquidsoap -v /home/pi/mic.liq

Save the file and reboot. Your live mic stream should now start up after reboot. You can then use any media endpoint you wish to tune in the stream and listen to the mic, or record the stream using VLC.

After you have the Mic setup, you may wish to enable the overlay filesystem to protect your SD card from excessive writes or unplanned power loss. To do so:

Run raspi-config >> Performance Options >> Overlay FS. Set /boot to read-only if asked. Once this is done, reboot the Pi. Your mic can now be unplugged without worry of corrupting your SD card.

A quick howto for setting up liquidsoap to create your own online radio station and transmit using the efficient and awesome AAC+ audio codec. I’ve made it super simple to create an encoder that can take program audio and create an AAC+ encoded stream that you can send to one or more icecast distribution servers – to broadcast around the world! Read on:

Install an up-to-date OS (as of this writing, Ubuntu 20.04 is what I used). A bare command-line server install is all you need. You could even do this on a Raspberry Pi with a USB audio pickup and then connect your program audio from the output of your processed audio chain. Installation is quite simple by following these commands:

sudo apt install opam screen
opam init
opam switch create 4.10.0
opam depext taglib mad lame ffmpeg vorbis cry samplerate ocurl liquidsoap fdkaac alsa
opam install taglib mad lame ffmpeg vorbis cry samplerate ocurl liquidsoap fdkaac alsa
sudo ln -s ~/.opam/4.10.0/bin/liquidsoap /sbin/liquidsoap

Answer Yes to any yes/no prompts, and once complete, you will have a working copy of liquidsoap with full AAC+ support. Now that liquidsoap is installed, you can now create a .liq file to set the parameters of your stream. Once you create this file, launching your stream becomes quite simple. Here’s an example .liq file. You can change any of the parameters to suit your needs:


input = mksafe(input.alsa()) 

output.icecast(
  %fdkaac(channels=2, samplerate=44100, bandwidth="auto", bitrate=96, afterburner=true, aot="mpeg4_he_aac_v2", transmux="adts", sbr_mode=false),
  port=8000, password="my.P@ssw0rd", genre="live",
  description="LIVE", mount="/live",
  name="MY STATION NAME", user="source",
  url="", input)

Now that you have your liquidsoap and .liq file installed and ready, simply launch a screen session and invoke the following command:

liquidsoap -v ./myliqfile.liq

You can disconnect from your screen session and the stream will continue running. To reconnect to your detached screen session, simply do: screen -r. You can have multiple streams running on the same host by opening more screen sessions and invoking additional liquidsoap instances. If running multiple screens, you can list them by running: screen -ls

Now and then, heavily used systems may need to have their swap usage cycled (reset) to increase performance. There are many occasions where even though a system has enough RAM, there may still be a growing swap usage. The steps I outline here are safe to run on a production host to reduce swap usage and return swap contents to RAM.

Check current swap use:

root@pkvm1:~# free -m
              total        used        free      shared  buff/cache   available
Mem:           7800        4822         383           1        2594        2668
Swap:          4095         429        3666

We can see here that about 430MB of swap is used even though there is plenty of RAM available. In this case, the system gets consistent, average use and has been up for 206 days. We also want to see what swappiness is currently set to, and perhaps reduce it:

root@pkvm1:~# cat /proc/sys/vm/swappiness

root@pkvm1:~# sysctl vm.swappiness=20
vm.swappiness = 20
root@pkvm1:~# cat /proc/sys/vm/swappiness

This new setting of 20 should help the system swap less often. (To make it persist across reboots, add vm.swappiness = 20 to /etc/sysctl.conf.) We now want to force the system to move swap contents back to RAM where it belongs. To do that, we’ll turn swap off, WAIT approximately 30 seconds, then turn swap back on:

root@pkvm1:~# swapoff -a
root@pkvm1:~# swapon -a
root@pkvm1:~# free -m
              total        used        free      shared  buff/cache   available
Mem:           7800        5295         143           2        2360        2194
Swap:          4095           0        4095

We can now see that the swap contents have been moved back to RAM and that swap has reclaimed its space. It would be easy to write a cron job to check swap usage and cycle it whenever usage goes above an acceptable threshold.
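The threshold check for such a cron job can be sketched in Python. The /proc/meminfo field names below are standard Linux; the helper itself and the example threshold are my own illustration:

```python
def swap_used_percent(meminfo_text: str) -> float:
    """Compute swap usage as a percentage from /proc/meminfo contents."""
    kb = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        if key in ("SwapTotal", "SwapFree"):
            kb[key] = int(rest.split()[0])  # values are reported in kB
    if not kb.get("SwapTotal"):
        return 0.0  # no swap configured
    used = kb["SwapTotal"] - kb["SwapFree"]
    return 100.0 * used / kb["SwapTotal"]

# Usage (e.g. from a cron-driven script, cycling swap above 25% usage):
#   with open("/proc/meminfo") as f:
#       if swap_used_percent( > 25.0:
#           ...  # run swapoff -a, wait, then swapon -a as shown above
```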