All posts by K1WIZ


I moved to an area where my AM news station (WBZ) comes in rather scratchy.  Sure, I could stream it over the internet on a mobile device, but what about the radios I currently have?  Have they become paperweights?  Fortunately, WBZ streams online, and I found a cool FM transmitter module that made me think, "I could use this with a Raspberry Pi to put WBZ on the FM dial near my home."  The FM module is about $12 on Amazon, and I already had a Raspberry Pi computer I could dedicate to the project.  Why not try it?



This article is for informational/educational purposes only.  If you make use of any information in this article, I will not be liable for your use of this information and any action you take based on the technical discussion herein is solely at your own peril/risk!  Please check local laws for this application in your country of residence!  This article also deals with a solution powered by AC mains voltage.  If you do not understand what you are doing, PLEASE be safe and get qualified help!

I installed Ubuntu Linux 20.04.2 Server on the Raspberry Pi, then installed Liquidsoap.  Liquidsoap is an audio/streaming Swiss Army knife and is, of course, open source.  Normally, people use Liquidsoap to capture a live audio source and create a stream on the internet.  I wanted the reverse: to pull in an internet stream and play it over the USB DSP built into the FM module.  As a bonus, the FM module is powered via the same USB connection – one cable does it all.  Shown here is the finished transmitter:

The FM module is quite versatile.  It has an analog line-in, a condenser mic, and a USB audio interface all built in!  The module is smart enough to detect which input you are using and use only that one.  When I hooked the module to my Raspberry Pi and ran:

aplay -l

I was able to see the USB audio interface on the FM module:

$ aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: Headphones [bcm2835 Headphones], device 0: bcm2835 Headphones [bcm2835 Headphones]
Subdevices: 8/8
Subdevice #0: subdevice #0
Subdevice #1: subdevice #1
Subdevice #2: subdevice #2
Subdevice #3: subdevice #3
Subdevice #4: subdevice #4
Subdevice #5: subdevice #5
Subdevice #6: subdevice #6
Subdevice #7: subdevice #7
card 1: CD002 [CD002], device 0: USB Audio [USB Audio]
Subdevices: 0/1
Subdevice #0: subdevice #0

The “card 1” device is the USB connection to the FM module.

All I needed to do now was install and set up Liquidsoap.  For that I used this guide and installed it with OPAM.  Once Liquidsoap was installed, I created a .liq script with the following configuration to stream WBZ and play it out the FM module's USB interface:

# The stream URL was left blank here; fill in your station's stream URL
str = ""
prog = mksafe(input.http(str))
prog = amplify(0.7, override="replay_gain", prog)
# Send the audio to the FM module's USB interface (ALSA card 1 from aplay -l):
output.alsa(device="plughw:1,0", prog)

With this .liq file saved as play.liq, I could then start it up by running:

liquidsoap ./play.liq

If you want to run this as a systemd service, follow the usual conventions to create a service file and enable it, so the stream comes up whenever the Raspberry Pi boots.
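For reference, a minimal unit file along those lines might look like the following.  The paths, user, and service name here are assumptions (an OPAM install under a "pi" user); adjust them for your own install and save it as /etc/systemd/system/fmcast.service:

```ini
[Unit]
Description=WBZ FM rebroadcast via Liquidsoap
After=network-online.target

[Service]
# Path assumes an OPAM install for user "pi"; check `which liquidsoap`
ExecStart=/home/pi/.opam/default/bin/liquidsoap /home/pi/play.liq
Restart=always
User=pi

[Install]
WantedBy=multi-user.target
```

Enable it with sudo systemctl enable --now fmcast so it starts on every boot.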

FM Module Tips

The FM module, as it comes, does not have an antenna.  For best results, solder a 1 meter length of wire to the "ANT" solder pad and place the entire RPi/FM setup in a high location within your home.  Find a clear spot on the FM dial using a portable radio and set the FM module to that frequency.  When properly set up, you should be able to pick up the signal from your RPi/FM package at least 4 houses away before you start to hear static.  That amount of range from such a small module is pretty decent, and sufficient to enjoy your streamed audio source on any ordinary radio near your home.  The sound quality is very good for a $12 module and sounds nice on my Tivoli and other radios.
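On the antenna length: a 1 meter wire is in the ballpark for the FM broadcast band, but if you want to trim it as a quarter-wave for a specific frequency, the arithmetic is simple (95.1 MHz here is just the example frequency used later in this article):

```python
# Quarter-wave wire length for an FM frequency: (c / f) / 4
C = 299_792_458  # speed of light, m/s

def quarter_wave_m(freq_mhz: float) -> float:
    """Return the quarter-wavelength in meters for a frequency given in MHz."""
    return C / (freq_mhz * 1e6) / 4
```

A quarter-wave at 95.1 MHz works out to roughly 0.79 m, a bit shorter than the 1 m wire.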


Connecting the homebrew transmitter to a small RF amp brings the power output up to about 10 watts.  That power is then fed through an RF bandpass filter before the antenna, which helps eliminate RF harmonics that WILL get you busted in short order!  DO NOT operate a device like this at any considerable power level without proper filtering – radiating harmonics and interfering with other signals is a guaranteed way to get busted!

The transmitter connected to the amp module (live audio is fed from an Icecast stream over CAT6 cable).  Note the small white WiFi smart outlet: it lets me remotely kill the transmitter from my smartphone, from anywhere, at a moment's notice should I get wind that the FCC is taking an interest:

The RF bandpass filter (SUPER IMPORTANT!)

Finally, the TUNED circularly polarized antenna – circular polarization helps reduce multipath signal distortion for mobile listeners.  This antenna is precisely tuned to the desired broadcast frequency of 95.1 MHz using a cheap portable VNA (Vector Network Analyzer):


My neighbor recently did a landscape lighting project of his own.  It looked great and was a simple grid-tied system.  I wanted to kill two birds with one stone and do a landscape lighting system of my own, but I wanted ours to be connected (wirelessly) to our Domoticz home automation system, and I didn't want to pay for the electricity to run it.  I achieved both goals in this project using an ESP8266 and some MOSFETs driven by PWM, which had the added benefit of making the system dimmable if desired.


For this project, I gathered the following materials:

The way my house is situated, all the sunshine is in the back yard – plenty of sun there year round.  The front, where our landscaping is, gets a lot of shade, so it is not a good place for a solar panel.  I also wanted to hide all the power generation and control gear in the back yard anyway.  I was able to cut a thin slice into the side yard along the foundation, from the back yard to the front, and easily bury the cable in the dirt.  You can't even see the cable:

The back yard is where the power generation and control equipment lives.  I built the wireless PWM controller inside an IP67 rated enclosure and added banana posts for easy connection:

This allows me to control the lights (on/off/dim) from my existing Domoticz home automation system.  Commands and telemetry are carried over WiFi and MQTT to the Domoticz docker container.  The object in Domoticz can then apply time schedules, change brightness, or even turn on the lights during a motion trigger event from an optional PIR presence sensor.
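To sketch what those MQTT commands look like: Domoticz accepts JSON on its domoticz/in topic, where a dimmer can be set by its IDX.  A minimal example (the IDX value and broker hostname are placeholders, not my actual setup):

```python
import json

DOMOTICZ_TOPIC = "domoticz/in"  # topic Domoticz subscribes to for commands

def dim_command(idx: int, level: int) -> str:
    """Build the JSON payload Domoticz expects to set a dimmer to a level (0-100)."""
    return json.dumps({
        "idx": idx,
        "command": "switchlight",
        "switchcmd": "Set Level",
        "level": level,
    })

# Publishing requires paho-mqtt and a reachable broker, e.g.:
# import paho.mqtt.publish as publish
# publish.single(DOMOTICZ_TOPIC, dim_command(42, 50), hostname="mqtt.local")
```

The same topic carries on/off commands ("switchcmd": "On"/"Off"), which is how the scheduled sundown trigger works.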

In the back yard, I set the case containing the battery, charge controller, and PWM controller under the solar panel, in a spot with ample sunlight all day.

In the front, I ran the cable near the areas I wanted the lights and used the included connectors that came with the lamps.

I set up the object in Domoticz to turn these on at 50% brightness 30 minutes past sundown.  Here's the final result:


We moved into town in July 2020, and as new residents we are always looking for ways to keep a pulse on happenings in the town.  One great way to do that is to monitor the radio systems of the municipal services that operate within the town.  To monitor, you can either buy a $400 scanning receiver, install an antenna, and stay within earshot of the scanner, or (the better option) set up a dedicated receiver for each service and connect the audio output to a computer running appropriate software to create and send a stream to Broadcastify or another streaming relay service.  What's nice about streaming to Broadcastify is that they archive all the audio you send them, so when something interesting happens and you miss it, you can download the audio from the stream archives and listen to it as your schedule permits – or just listen live from any mobile device using the Broadcastify app.





I chose the second option, because I have plenty of spare radios and computers.  To create my streams I use the following in my arsenal:

  • a low-power Intel Atom ultra small form factor computer with plenty of USB ports, running Linux.
  • the Liquidsoap audio toolkit to define and create the streams.
  • USB audio interfaces with line inputs (connected to the radios).
  • UHF or VHF radios dedicated to the monitoring setup – connected to a common (shared) antenna or individual antennas.
  • an account on Broadcastify or another stream server – from where you will serve your streams.
  • Broadcastify stream details for each stream (you will use these to set up Liquidsoap).
  • a wired network connection (preferred) for the computer generating the streams.

I started with a small energy-efficient computer (an ASUS Intel Atom "net top") on which I installed Ubuntu Linux and Liquidsoap.  This computer needs a reliable internet connection and power source, as it will be running 24/7.  I chose a low-power computer to keep my energy costs down.  Once Linux is installed and an IP is set up on the box, a keyboard and monitor are no longer needed; you can do the rest of the setup over the local network via SSH.  To set up the computer for streaming, I installed Liquidsoap and created a config file (/etc/liquidsoap/radio.liq) to define the streams:

apt install liquidsoap
apt install liquidsoap-plugin-alsa

Once installed, you need to create a config file to tell liquidsoap how to create and process your streams:

# Define physical audio pickups:
radio4 = mksafe(input.alsa(device="plughw:CARD=USB,DEV=0"))
radio5 = mksafe(input.alsa(device="plughw:CARD=CODEC,DEV=0"))

# Define stream destinations (the host and password come from your
# Broadcastify feed details; placeholders are shown here):
output.icecast(%mp3(stereo=false, bitrate=16, samplerate=22050),
host="<broadcastify-host>", port=80, password="<feed-password>",
genre="Scanner", description="Northbridge Police Dispatch",
mount="/kejrncsk888", name="Northbridge Police Dispatch - 453.1875 MHz",
user="source", url="", radio4)

output.icecast(%mp3(stereo=false, bitrate=16, samplerate=22050),
host="<broadcastify-host>", port=80, password="<feed-password>",
genre="Scanner", description="Northbridge Fire Dispatch",
mount="/s7sfsd87dsf", name="Northbridge Fire Dispatch - 154.3625 MHz",
user="source", url="", radio5)

You get the parameters for the above output definitions from the feed details in your Broadcastify account when you apply to set up a feed.  Once they are in the config file, restart the liquidsoap service to bring your feeds online:

sudo systemctl restart liquidsoap

Once restarted, liquidsoap should now be sending your audio to broadcastify.  You should see your feeds online:


Now that your stream is up, it's time to hook up the radios and start sending audio over it (radio configuration/programming is out of scope for this article).  Connect the "speaker out" jack on the back of the radio to the correct "line in" port on your USB audio pickup device and set the volume about halfway to start (too much audio will make your stream noisy/distorted).  While the radio is receiving, adjust its volume knob for a good balance of loudness and clarity.  Do the same on any other radios you wish to set up.  Be sure to lock the tuning so the frequency can't be accidentally changed.


Because we’re pulling signals off the air and streaming them online, you’ll need to either buy or make an antenna for such a dedicated setup.  I chose to make a simple one using an SO-239 connector:


Now you can download the Broadcastify app on any mobile device and listen to the feeds from anywhere.  The data rate is extremely small, so listening for long periods should not consume much of a data plan.  You can stay informed all the time, or tune in whenever you see local police/fire activity in your town.

Rebuilding My Home Network


I have had my ESXi box for YEARS and recently decided to take a dive into the world of KVM (QEMU) on Ubuntu Linux.  It's a popular and completely open hypervisor that has become a staple in many datacenter environments.  I had been dreading the change because it meant rebuilding a few of the VMs I still had.  Though I recently took the plunge into docker containers and finding apps I could containerize, I still have a handful of VMs that perform various functions on my home network and "home lab".  I'll preface this by saying that the ESXi box served us well over the years and I hardly ever had to touch it.  The one thing I did not have with ESXi was other hosts I could move VMs to.  The license isn't free, and ESXi is very persnickety about hardware requirements, which put me off trying to implement vMotion (VMware's method of migrating VMs around a cluster).  With KVM my options are more open: I have a handful of compatible hosts, so if I ever have a hardware problem I can move VMs in a pinch.


To create a small cluster of physical hosts to run my new VMs built on KVM, I simply carried out these steps (I’ll go into greater detail on each one):

  • Install Ubuntu Server 20.04 OS on each physical machine
  • Configure a netplan for each physical KVM (pkvm) host that achieves:
    • bonded (teamed) interfaces
    • LACP (802.3ad) attributes to bring up the channel-group session
    • vlan tagged traffic over the bond
    • bridge interfaces to allow kvm guest VMs to attach to the desired vlan
  • Install the necessary KVM packages
  • Configure a port-channel interface on the main switch & define the physical switch ports that will participate in the LACP channel-group
  • Configure a common NFS mount on the NAS to hold the KVM guest images on the network
  • Move the new KVM VMs onto the newly rebuilt KVM host
  • Consume donuts that my wife and kids made while the "internet was out"

Setting Up a Netplan

Ubuntu 20.04 uses netplan to configure the operation of network interfaces.  This method uses a simple YAML-formatted file that is easy to write and back up.  If you screw up, you can always revert to a previous file (always make a backup!).  My netplan file looks like this:

network:
  version: 2
  ethernets:
#    enp1s0f0: {}
#    enp1s0f1: {}
    enp1s0f2: {}
    enp1s0f3: {}
  bonds:
    bond0:
      dhcp4: no
      interfaces:
      - enp1s0f2
      - enp1s0f3
      parameters:
        mode: 802.3ad
        lacp-rate: fast
        mii-monitor-interval: 100
  vlans:
    vlan.2:
      dhcp4: no
      id: 2
      link: bond0
    vlan.5:
      dhcp4: no
      id: 5
      link: bond0
    vlan.10:
      dhcp4: no
      id: 10
      link: bond0
    vlan.11:
      dhcp4: no
      id: 11
      link: bond0
    vlan.50:
      dhcp4: no
      id: 50
      link: bond0
    vlan.73:
      dhcp4: no
      id: 73
      link: bond0
    vlan.300:
      dhcp4: no
      id: 300
      link: bond0
  bridges:
# bridge names below are examples; each bridge carries one tagged vlan
    br2:
      addresses: []
      interfaces:
      - vlan.2
    br5:
      addresses: []
      interfaces:
      - vlan.5
    br10:
      addresses: []
      interfaces:
      - vlan.10
    br11:
      addresses: []
      interfaces:
      - vlan.11
    br50:
      addresses: []
      interfaces:
      - vlan.50
    br73:
      addresses: []
      interfaces:
      - vlan.73

Installing KVM packages

apt install -y qemu qemu-kvm libvirt-daemon bridge-utils virt-manager virtinst

Configuring a port channel interface

With the netplan configuration saved and in place, I just executed sudo netplan apply and then proceeded to setup the switch-side of the bonded connection.  The first thing I needed to do was configure a new port channel interface on the switch that would be suitable for carrying tagged traffic:

interface Port-channel2
 description KVMBOX
 switchport trunk encapsulation dot1q
 switchport mode trunk
 spanning-tree portfast

After setting up the port channel interface, I now had to add the physical interfaces that are cabled from the switch to the big KVM box:

interface GigabitEthernet1/0/44
 description TRUNK_TO_VBOX1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 lacp port-priority 500
 channel-protocol lacp
 channel-group 2 mode active
interface GigabitEthernet1/0/45
 description TRUNK_TO_VBOX1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 lacp port-priority 500
 channel-protocol lacp
 channel-group 2 mode active

wr mem

Once both ports were setup, I could then test the channel-group status:

CORE-SW#sh etherchannel 2 summary
Flags:  D - down        P - bundled in port-channel
        I - stand-alone s - suspended
        H - Hot-standby (LACP only)
        R - Layer3      S - Layer2
        U - in use      f - failed to allocate aggregator

        M - not in use, minimum links not met
        u - unsuitable for bundling
        w - waiting to be aggregated
        d - default port

Number of channel-groups in use: 2
Number of aggregators:           2

Group  Port-channel  Protocol    Ports
2      Po2(SU)         LACP      Gi1/0/44(P) Gi1/0/45(P)

Now it's time to set up an NFS share to hold the KVM images

This part is easy:

  • apt install nfs-common

Add a line to /etc/fstab:

/vol1   nfs     _netdev,nfsvers=3,nolock,noatime,bg    0       0
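Note the source field (the NAS export) is missing from the line above.  Assuming a hypothetical NAS named nas.local exporting /vol1, the complete entry would look like:

```
nas.local:/vol1   /vol1   nfs   _netdev,nfsvers=3,nolock,noatime,bg   0   0
```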

Then mount: sudo mount /vol1.  I then created a symbolic link from /var/lib/libvirt/images to /vol1/kvms (where the other KVM servers keep their images).

Now it's time to move my newly built KVM VMs (built on a temporary KVM host) onto the new KVM host.


At work, I serve on a team with an on-call rotation.  As a heavy sleeper, I find that SMS dispatches to my cellphone are not enough to get me out of bed.  I needed a better solution, but what would that look like?  My iPhone X just isn't loud enough to wake me, and having notifications persist until acknowledged was a real necessity.  I needed something that would practically slap me in the face to get me out of bed for a 3am dispatch.  No joke!  There just aren't many solutions out there that will literally kick a heavy sleeper out of bed at 3am, and I certainly wanted to keep my manager happy by answering pages in a timely manner.  Time to build a solution!

Luckily, I have skills in the wireless dark arts as a ham radio operator, and I knew there was a way to use the free Pi-Star software to make a POCSAG pager transmitter.  I bought a programmable pager online that would work on 439.9875 MHz and proceeded to build the transmitter and attach it to my network.



  • latest Pi-Star image on a microSD card, configured on the network
  • any USB WiFi G or N dongle (the onboard WiFi chip on the RPi Zero W sucks)
  • OTG-to-USB adaptor pigtail for the WiFi dongle
  • 15″ SMA whip antenna – the little stubby antennas really suck!
  • MMDVM Hotspot board kit (this has the radio and RPi Zero)
  • programmable POCSAG pager that will work on 439.9875 MHz (Digital Paging Company in North Hollywood, CA – be sure to ask for SKU: R924-430-6V33)
  • Gmail account from which the messages of interest are pulled to create page messages and triggers
  • Linux box (or VM) with fetchmail configured to pull and parse mail from the Gmail account
  • some code I had to write to make it all work
  • OPTIONAL: an MQTT broker to send actions to home automation such as Domoticz, Hass, etc. (in my case, a bright light on my nightstand gets turned on to shine in my face and help wake me up)

Setting Up Pi-star

Download Pi-Star and write the image to the microSD card.  Configure Pi-Star by disabling the built-in WiFi chip (in favor of the USB dongle) and set it up to connect to your network (see the Pi-Star site for more info).  Once connected, give it a static IP address so your script will always be able to reach it and dispatch messages to it.  To do so, use the DHCP reservation function on your router, or edit /etc/dhcpcd.conf to add the static IP by appending something like the following to the end of the file (change to what's appropriate for your environment):

interface wlan0
static ip_address=<your static IP>/24
static routers=<your gateway IP>
static domain_name_servers=<your DNS IP>

NOTE: you need to make sure the / partition is writable before you edit this file (by default it is not) by running "rpi-rw"; when you're done, run "rpi-ro" to make the / partition read-only again.  Keeping / read-only helps protect your SD card from wear.

Next, set up Pi-Star, turn on POCSAG mode, and be sure the frequency is set to 439.9875 MHz (this process is documented on the Pi-Star website/wiki, so I won't go into it here).  Once Pi-Star is in POCSAG mode on the correct frequency, program your pager (see your pager documentation) with the frequency and a CAP code so it will respond to page transmissions.  With the pager programmed correctly and Pi-Star properly configured, you are ready to install the scripts and set them up in a crontab.  (This assumes you have already configured fetchmail to pull down mail from the Gmail account you will use to receive on-call dispatches; refer to the fetchmail documentation on how to set up IMAP or POP access to Gmail.)

I call fetchmail and the dispatch script from crontab every minute:

* * * * * /usr/bin/fetchmail &> /dev/null
* * * * * /home/john/ &>/dev/null

NOTE: please see the comments within the scripts for info on what needs to be setup (variables) for the script to process your mail.
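As a rough sketch of what the parsing stage does (the subject keyword and message format here are assumptions for illustration, not the actual script), pulling the dispatch text out of a fetched message can look like:

```python
import email
from email import policy
from typing import Optional

def extract_dispatch(raw_message: bytes, keyword: str = "DISPATCH") -> Optional[str]:
    """Return 'Subject: body' for an on-call mail, or None if it doesn't match."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    subject = msg.get("Subject", "")
    if keyword not in subject.upper():
        return None  # not an on-call dispatch; ignore it
    body = msg.get_body(preferencelist=("plain",))
    text = body.get_content().strip() if body else ""
    return f"{subject}: {text}"
```

The resulting string is what gets handed to the POCSAG transmit step (and, optionally, published over MQTT to trigger the bedside light).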



If you have an Amazon device like an Alexa or Ring, you will soon be sharing a small portion of your internet connection publicly.  Amazon is quietly opting device owners into a new public network called Sidewalk, and the feature will be turned on by default.  Fortunately, it's fairly easy to turn off.


Should you choose to opt out of Amazon Sidewalk, here's how:

Open the Alexa app on your iPhone or Android
Tap More
Tap Settings
Tap Account Settings
Tap Amazon Sidewalk
Toggle the switch to Off to disable your participation

You can always change your mind later and join back in.


You may be one of those people (like me) who prefer web content rendered in dark mode.  Often the preference comes down to less eye strain and glare: light text on a dark background is much easier on the eyes.  There's another consideration too: on AMOLED screens, darker pixels use less display energy.  My preference is for the first reason – it's a lot easier on my eyes when I spend hours in front of a computer for both my profession and my hobby.  That's a LOT of screen time (and assault on my eyes)!


Fortunately, the solution is simple (if you're a Chrome user).  Enter chrome://flags/#enable-force-dark in the URL field and set the flag to Enabled.  The result is pretty darned sweet!


Soon after moving into our new home in Northbridge, MA, I began rolling out home automation technologies that are designed from scratch and uniquely custom.  Doing it this way, I have more control over the quality and versatility of my deployment and design.  The trade-off is a steeper learning curve – but I LOVE learning!  In our old home, many of my automations made use of VMs running various applications to perform tasks and provide actionable data.  In my new design, I wanted to achieve the following goals:

  • lowest possible power footprint – do more with less.
  • learn and utilize docker container technology to replace fatter VMs with slimmer application deployments on power-efficient computing (such as the Intel Celeron J4115)
  • maintain the highest levels of system security by using a local PubSub broker so message payloads never leave the firewall
  • utilize my investment in Ubiquiti UniFi wireless APs (5 of them throughout the house) with a dedicated SSID for home automation
  • be easy to manage
  • be wife-accepted!  (Note: for those who don't know, WAF – the Wife Acceptance Factor – is a score I assign to projects that achieve a high acceptance rating from my wife.)


The first thing I did was research and select a hardware platform to run my containers.  The following applications would be containerized in the new design:

  • Domoticz – the home automation hub software I use – essentially the cornerstone of the entire system
  • Pi-Hole – DNS ad blocking and management application (also part of our DNS jail)
  • MySQL – Yeah, I need a database for some of these apps
  • phpMyAdmin – a nice web-based MySQL workbench
  • WordPress – for hosting local information that guests will land on when they visit and join the guest wifi
  • Cacti – a robust SNMP NMS system for monitoring application/system performance over time
  • habridge – a Java-based Philips Hue emulator that lets us connect Alexa (Amazon Echo) to selected controlled endpoints on the Domoticz dashboard
  • A minimal ubuntu container for launching various scripts

In the past, I ran these on VMs, and while that worked well, it isn't as efficient as a containerized deployment.  The hardware I used for the Intel box was this.  It is a Celeron J4115 with 8GB RAM and a decent-sized M.2 SSD.  I use an NFS mount to store container configs and logs on the Synology NAS for safekeeping.

When I built the weather tube, I re-used a marquee scroller project I had done previously and wrote a couple of scripts to push data to it and display weather stats in the kitchen in real time.  I use a Darksky API account to get the data for my town into my Domoticz container.  Domoticz polls my Darksky account every 5 minutes or so and displays the weather info through widgets on the Domoticz dashboard.  This is great and all, but if you're not looking at the dash, you won't have the latest info.  Luckily, Domoticz has a rich JSON API, and it's easy to get data out of Domoticz for various uses.  Every "device" or sensor object in Domoticz has an IDX address, which makes it easy to poll data from Domoticz via the JSON API.  The advantage here is that I only have to poll Darksky once from Domoticz, but I can poll Domoticz as many times, and from as many local devices, as I want, which helps me stay within the free limits on my Darksky account.  The JSON feed for devices looks something like this (click for larger view):

So how do I leverage this bounty of information and display it on my LED scrolling marquee tube?  A little smear of Bash and Python scripting to the rescue!  I first created a Python script ( that pulls the data elements I want and builds a string to scroll on the tube at its IP address on the network.  The string is cached to a file (wx/conditions), and the contents of that file are pushed to the tube every 15 minutes by another script ( that is called by cron.  Snapshots of these scripts appear below for your reference; click on any of them for a larger view:
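For reference, pulling a device reading out of Domoticz boils down to one HTTP GET against its JSON API.  A minimal sketch (the host, port, and IDX below are placeholders, not my actual setup):

```python
import json
from urllib.request import urlopen

DOMOTICZ = "http://domoticz.local:8080"  # placeholder host:port

def parse_device_data(payload: dict) -> str:
    """Pull the human-readable Data field out of a Domoticz device response."""
    return payload["result"][0]["Data"]

def device_data(idx: int) -> str:
    """Fetch one device by its IDX from the Domoticz JSON API."""
    url = f"{DOMOTICZ}/json.htm?type=devices&rid={idx}"
    with urlopen(url) as resp:
        return parse_device_data(json.load(resp))
```

A script like builds its marquee string by calling this for each IDX of interest and concatenating the results.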


This is a quick and dirty way to push this data to the tube, but I am currently working on an MQTT version that will subscribe to a topic and pick up the data feed as it is published.  I'll update this article once that's ready for prime time, so check back often.

Here’s a video of the tube in action in the kitchen:


If you work with one or a fleet of LUKS-encrypted Linux machines, it may be necessary to do a remote reboot (as might be the case when you use the machine remotely).  But if the host has full-disk encryption such as LUKS, you would normally have to be physically present at the local console to enter the LUKS passphrase after every reboot!  Can't make it into the office to unlock that drive?  You're SCREWED!  Well, not when you set up dropbear SSH with public key authentication!  Dropbear to the rescue!  Read on!

The Solution

Manual Approach

So let's say for this example you're running either Debian or Ubuntu Linux, and your entire system drive is LUKS encrypted (likely required by your corporate policy).  You can install the needed package via apt as follows:

apt install dropbear-initramfs

This package rebuilds the initramfs with a dropbear SSH listener, and it keeps working even after kernel updates, so not to worry.  Once you install this package, here's how to configure it:

nano /etc/dropbear-initramfs/config

When you open the file for editing, you’ll want to add this line to the file, then save and close:

DROPBEAR_OPTIONS="-p 222 -I 120"

This line configures dropbear to spawn and listen for connections on TCP port 222.  The -I 120 option makes dropbear disconnect if the session is idle for more than 120 seconds.  Once you have changed the file in this way, there is one more thing to do: generate SSH keys and copy your public key into the authorized_keys file in the dropbear config folder.  If you already have SSH keys, you can simply copy your existing public key (DO NOT copy your private key!).  To generate SSH keys:

ssh-keygen
Your keys will be found in ~/.ssh/.  You can easily copy your public key by viewing the file ~/.ssh/ (on a trusted machine – preferably an SSH jumphost or other trusted SSH host):

less ~/.ssh/

Then simply copy the key to another terminal window on the dropbear target host and edit the file:

sudo nano /etc/dropbear-initramfs/authorized_keys

Paste the key into the open file, then save and quit.  The final step is to rebuild your initramfs image so that it now includes dropbear:

sudo update-initramfs -u

This last step rebuilds your initramfs image so that it now includes dropbear, and new kernels will include it too.


Automated Deployment Using Ansible

But what if you have to do this on many machines?  Doing it manually could take a while.  Ansible to the rescue!  I'll show you a simple play that deploys these changes to one host; you can use an inventory file with multiple hosts to run it against multiple targets:

The play:

If you don't mind supplying just the hostname and IP of each installation target, I wrote a bash wrapper script to make deployment easy if you're only doing a dozen or so hosts:

First create a target template file:

Then create the wrapper script in bash:

When you run the wrapper script, it asks for the hostname and IP of the installation target, then creates the inventory file for you and runs the play:

In this example, I had already installed dropbear on this host because I didn't have one without it.  What you see is what you can expect, except that on a fresh host more changes would be reflected, since adding dropbear would change more on the host.  I only include this to showcase running the play against a single target using the wrapper.


Building The Remote Reboot Tool

OK, now that dropbear is installed on our remote encrypted host, we can manually SSH to the dropbear instance after rebooting the machine by running:

ssh root@<host-ip> -p 222

Once connected, you can manually issue the following command to unlock your LUKS drive:

cryptroot-unlock  (hit enter)

You then enter your LUKS passphrase just as you would at the local console, hit Enter again, disconnect, and the host will finish booting.  At that point you can access the system remotely as you normally do.  Is there an easier way?  There sure is!  A little PHP and expect magic to the rescue:

First, we create a web form to take input from the user: the IP address of the target and its LUKS passphrase.  Our webform collects these from the user:

When the form is submitted, the inputs are passed to the submit script:

The job of the submit script is to call our expect script and pass the two variables to it.  The expect script does all the heavy lifting: it makes the SSH connection to dropbear and supplies the information at the session prompts.  We also put the trusted keys in /scripts/key, which our expect script uses to authenticate to dropbear:

But BEFORE our expect script will work, we need to install the expect interpreter on the jump host where our PHP script will call it:

apt install expect


Once this script finishes, the remote dropbear host is unlocked.   Our end user only has to interface with the PHP web form:

I hope you find this solution useful.  Please feel free to comment or share your ideas.


This year, my family and I went into Hobby Lobby in search of Halloween decorations.  I found a nice ceramic jack-o-lantern that was painted green inside and was only $3.00.   I had an idea…

I thought, "I could put an ESP8266 in there with a double LED and a flickering loop, and that would look so cool sitting on my dining room table where the Halloween spread is laid out every year."  So I put the ceramic jack in my cart among other things.

CODE: FunkyCandle

I started with a Wemos module that has an 18650 battery holder, which is very convenient for this project.  Since the area to be lit was fairly small, I used two green LEDs, soldered to pins D6 and D7 and ground.  This provides two separate flicker channels for a more realistic flicker effect.  If you need it brighter, you could use small MOSFETs and a higher voltage with higher-power LEDs of any color you wish.

Here’s a look at the module:

If you want higher power output, you can put a MOSFET on each PWM channel and drive higher power LEDs like this:

A video of the higher power unit.  The flicker effect is random and looks very realistic:

NOTE: when you flash the code to the module for the first time, the LEDs will not blink yet.  To make them blink, you first have to add the module to your wireless network (it is designed as a network client so you can remote control it).  To do that, take your mobile phone, browse the available WiFi networks, and find "FunkyCandle":

Once you see that network, click on it to access the config manager (click Configure WiFi).  The module will then scan nearby networks so you can join one (click the one you want and enter the password):

Once you enter the password, click save and wait about 20 seconds for the network settings to be stored.  Then look up the module's IP address using your router's tools (likely under DHCP leases), and enter that IP address into your browser to control it.  For example, with the module's IP on your network, you would enter its "on" URL into your browser to turn on your candle, and its "off" URL to turn it off (the exact paths are in the FunkyCandle code).  (NOTE: this only turns the LEDs on or off, it does not turn the module on or off!)

Now, simply place the module into your favorite decoration and enjoy!   Here’s a video of mine: