All posts by K1WIZ


We moved into town in July 2020, and as new residents we are always looking for ways to keep a pulse on happenings in the town.  One great way to do that is to monitor the radio systems of the municipal services that operate within the town.  To monitor, you can either purchase a $400 scanning receiver, install an antenna, and stay within earshot of the scanner to stay informed, or (the better option) set up a dedicated receiver for each service and connect the audio output to a computer running appropriate software to create a stream and send it to Broadcastify or another streaming relay service.  What’s nice about streaming to Broadcastify is that they archive all the audio you send them, so when something interesting happens and you miss it, you can download the audio from the stream archives and listen from anywhere as your schedule permits – or you can just listen live from any mobile device using the Broadcastify app.





I chose the second option because I have plenty of spare radios and computers.   To create my streams I use the following in my arsenal:

  • a low-power Intel Atom ultra-small-form-factor computer with plenty of USB ports, running Linux
  • the Liquidsoap audio toolkit to define and create the streams
  • USB audio interfaces with line inputs (connected to the radios)
  • UHF or VHF radios dedicated to the monitoring setup – connected to a common (shared) antenna or individual antennas
  • an account on Broadcastify or another stream server – from which you will serve your streams
  • Broadcastify stream details for each stream (you will use these to set up Liquidsoap)
  • a wired network connection (preferred) for the computer generating the streams

I started with a small, energy-efficient computer (an ASUS Intel Atom “net top” computer) on which I installed Ubuntu Linux and Liquidsoap.  This computer needs a reliable internet connection and power source, as it will be running 24/7.  I chose a low-power computer to keep the project’s energy costs down.  Once Linux is installed and an IP address is configured on the box, a keyboard and monitor are no longer needed; you can do the rest of the setup over the local network via SSH.  To set up the computer for streaming, I installed Liquidsoap and created a config file (/etc/liquidsoap/radio.liq) to define the streams:

apt install liquidsoap
apt install liquidsoap-plugin-alsa

Once installed, you need to create a config file to tell liquidsoap how to create and process your streams:

# Define physical audio pickups:
radio4 = mksafe(input.alsa(device="plughw:CARD=USB,DEV=0"))
radio5 = mksafe(input.alsa(device="plughw:CARD=CODEC,DEV=0"))

# Define stream destinations (the host and password placeholders below
# come from your Broadcastify feed details):
output.icecast(%mp3(stereo=false, bitrate=16, samplerate=22050),
host="audioX.broadcastify.com", port=80, password="yourpassword", genre="Scanner",
description="Northbridge Police Dispatch", mount="/kejrncsk888",
name="Northbridge Police Dispatch - 453.1875 MHz", user="source",
url="", radio4)

output.icecast(%mp3(stereo=false, bitrate=16, samplerate=22050),
host="audioX.broadcastify.com", port=80, password="yourpassword", genre="Scanner",
description="Northbridge Fire Dispatch", mount="/s7sfsd87dsf",
name="Northbridge Fire Dispatch - 154.3625 MHz", user="source",
url="", radio5)

You get the parameters for the above output definitions from the feed details in your Broadcastify account when you apply to set up a feed.  Once they are in the config file, issue the following command to restart the liquidsoap service and bring your feeds online:

sudo systemctl restart liquidsoap

Once restarted, Liquidsoap should now be sending your audio to Broadcastify, and you should see your feeds listed as online.


Now that your stream is up, it’s time to hook up the radios and start sending audio over your stream (radio configuration/programming is out of the scope of this article).  Connect the “speaker out” jack on the back of the radio to the correct “line in” port on your USB audio pickup device and set the volume halfway to start.  (You don’t want too much audio, or your stream could be noisy/distorted.)  As the radio is receiving audio, adjust the volume knob on the radio for a good balance of loudness and clarity.  Do the same on any other radios you wish to set up.  Be sure to lock the tuning so that the frequency can’t be accidentally changed.


Because we’re pulling signals off the air and streaming them online, you’ll need to either buy or make an antenna for such a dedicated setup.  I chose to make a simple one using an SO-239 connector:


Now, you can download the Broadcastify app on any mobile device and listen to the feeds from anywhere.  The data rate is extremely small, so listening for long periods should not consume much of a data plan.  You can stay informed by listening live, or tune in whenever you see local police/fire activity in your town.

Rebuilding My Home Network


I have had my ESXi box for YEARS and recently decided to take a dive into the world of KVM (QEMU) on Ubuntu Linux.  It’s a popular and completely open hypervisor that has become a staple in many datacenter environments.  I had been dreading the change because it meant I had to rebuild a few of the VMs I still had.  Though I recently took the plunge into Docker containers and found apps I could containerize, I still have a handful of VMs that perform various functions on my home network and “home lab”.  I’ll preface this by saying that the ESXi box served us well over the years and I hardly ever had to touch it.  The one thing I did not have with ESXi was other hosts to use for moving VMs.  The license isn’t free and ESXi is very persnickety about hardware requirements, which put me off trying to implement vMotion (VMware’s method of migrating/moving VMs around in a cluster).  With KVM, my options are more open: I have a handful of compatible hosts, so if I ever have a hardware problem I can move VMs in a pinch.


To create a small cluster of physical hosts to run my new VMs built on KVM, I simply carried out these steps (I’ll go into greater detail on each one):

  • Install Ubuntu Server 20.04 OS on each physical machine
  • Configure a netplan for each physical KVM (pkvm) host that achieves:
    • bonded (teamed) interfaces
    • LACP (802.3ad) attributes to bring up the channel-group session
    • vlan tagged traffic over the bond
    • bridge interfaces to allow kvm guest VMs to attach to the desired vlan
  • Install necessary KVM packages
  • Configure a port channel interface on the main switch & define the physical switch ports that will participate in the LACP channel-group
  • Configure a common NFS mount on the NAS to hold the KVM guest images on the network
  • Move the new KVM VMs into my newly rebuilt KVM host
  • Consume donuts that my wife and kids made while the “internet was out”

Setting a Netplan

Ubuntu 20.04 uses netplan to configure the operation of network interfaces.  This method uses a simple YAML-formatted file that is easy to write and back up.  If you screw up, you can always revert to a previous file.  (Always make a backup!)  My netplan file looks like this:

# (bridge names here are examples – attach each VM to whichever
#  bridge maps to the vlan it needs)
network:
  version: 2
  ethernets:
#    enp1s0f0: {}
#    enp1s0f1: {}
    enp1s0f2: {}
    enp1s0f3: {}
  bonds:
    bond0:
      dhcp4: no
      interfaces:
        - enp1s0f2
        - enp1s0f3
      parameters:
        mode: 802.3ad
        lacp-rate: fast
        mii-monitor-interval: 100
  vlans:
    vlan.2:
      dhcp4: no
      id: 2
      link: bond0
    vlan.5:
      dhcp4: no
      id: 5
      link: bond0
    vlan.10:
      dhcp4: no
      id: 10
      link: bond0
    vlan.11:
      dhcp4: no
      id: 11
      link: bond0
    vlan.50:
      dhcp4: no
      id: 50
      link: bond0
    vlan.73:
      dhcp4: no
      id: 73
      link: bond0
    vlan.300:
      dhcp4: no
      id: 300
      link: bond0
  bridges:
    br2:
      addresses: []
      interfaces:
        - vlan.2
    br5:
      addresses: []
      interfaces:
        - vlan.5
    br10:
      addresses: []
      interfaces:
        - vlan.10
    br11:
      addresses: []
      interfaces:
        - vlan.11
    br50:
      addresses: []
      interfaces:
        - vlan.50
    br73:
      addresses: []
      interfaces:
        - vlan.73

Installing KVM packages

apt install -y qemu qemu-kvm libvirt-daemon bridge-utils virt-manager virtinst

Configuring a port channel interface

With the netplan configuration saved and in place, I just executed sudo netplan apply and then proceeded to setup the switch-side of the bonded connection.  The first thing I needed to do was configure a new port channel interface on the switch that would be suitable for carrying tagged traffic:

interface Port-channel2
 description KVMBOX
 switchport trunk encapsulation dot1q
 switchport mode trunk
 spanning-tree portfast

After setting up the port channel interface, I now had to add the physical interfaces that are cabled from the switch to the big KVM box:

interface GigabitEthernet1/0/44
 description TRUNK_TO_VBOX1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 lacp port-priority 500
 channel-protocol lacp
 channel-group 2 mode active
interface GigabitEthernet1/0/45
 description TRUNK_TO_VBOX1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 lacp port-priority 500
 channel-protocol lacp
 channel-group 2 mode active

wr mem

Once both ports were setup, I could then test the channel-group status:

CORE-SW#sh etherchannel 2 summary
Flags:  D - down        P - bundled in port-channel
        I - stand-alone s - suspended
        H - Hot-standby (LACP only)
        R - Layer3      S - Layer2
        U - in use      f - failed to allocate aggregator

        M - not in use, minimum links not met
        u - unsuitable for bundling
        w - waiting to be aggregated
        d - default port

Number of channel-groups in use: 2
Number of aggregators:           2

Group  Port-channel  Protocol    Ports
2      Po2(SU)         LACP      Gi1/0/44(P) Gi1/0/45(P)
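On the Linux side, the kernel exposes the same kind of status through /proc/net/bonding/bond0.  Here's a small sketch that pulls out the bits worth checking (the sample text is an abbreviated version of what the kernel reports for an 802.3ad bond):

```python
import re

def bond_summary(text):
    """Extract bonding mode, bond MII status, and per-slave states
    from /proc/net/bonding/bond0-style text."""
    mode = re.search(r"Bonding Mode: (.+)", text)
    mii = re.search(r"^MII Status: (\w+)", text, re.M)  # first match = bond itself
    slaves = re.findall(r"Slave Interface: (\S+)\nMII Status: (\w+)", text)
    return {
        "mode": mode.group(1) if mode else None,
        "mii": mii.group(1) if mii else None,
        "slaves": dict(slaves),
    }

# Abbreviated sample of the kernel's report:
SAMPLE = """\
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
MII Status: up
Slave Interface: enp1s0f2
MII Status: up
Slave Interface: enp1s0f3
MII Status: up
"""
```

On a real host you would feed it the live file: `bond_summary(open("/proc/net/bonding/bond0").read())`.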

Setting up an NFS share for holding KVM images

This part is easy:

  • apt install nfs-common

Add a line to /etc/fstab (the NAS hostname here is a placeholder; use your own server and export path):

nas:/vol1   /vol1   nfs     _netdev,nfsvers=3,nolock,noatime,bg    0       0

Then mount it: sudo mount /vol1.  I then created a symbolic link from the /var/lib/libvirt/images directory to /vol1/kvms (where the other KVM servers put their images).

Now time to move my newly built KVM VMs into my new KVM host (built on a temporary KVM host)


At work, I serve on a team that has an on-call rotation.  As a heavy sleeper, SMS dispatches to my cellphone are not enough to get me out of bed.  I needed a better solution, but what was that going to look like?  I have an iPhone X and it’s just not loud enough to wake me, and having the notifications be persistent until acknowledged was a real necessity.  I needed something that would almost slap me in the face to get me out of bed for a 3am dispatch.  No joke!  There just aren’t a lot of solutions out there to literally kick a heavy sleeper out of bed at 3am, and I certainly wanted to keep my manager happy by answering pages in a timely manner.  Time to build a solution!

Luckily, I have skills in the wireless dark arts as a ham radio operator, and I knew there was a way to use the free Pi-Star software to make a POCSAG pager transmitter.  I bought a programmable pager online that would work on 439.9875 MHz and proceeded to build the transmitter and attach it to my network.



  • latest Pi-Star image on a microSD card, configured on the network
  • any USB wifi G or N dongle (the onboard wifi chip on the RPi Zero W sucks)
  • OTG to USB adaptor pigtail for the wifi dongle
  • 15″ SMA whip antenna – the little stubby antennas really suck!
  • MMDVM Hotspot board kit (this has the radio and RPi Zero)
  • programmable POCSAG pager that will work on 439.9875 MHz (Digital Paging Company in North Hollywood, CA – be sure to ask for SKU: R924-430-6V33)
  • Gmail account from which to pull the messages of interest, to create page messages and triggers
  • Linux box (or VM) with fetchmail configured to pull and parse mail from the Gmail account
  • some code I had to write to make it all work
  • OPTIONAL: an MQTT broker to send actions to home automation such as Domoticz, Hass, etc (in my case, a bright light on my nightstand that shines in my face gets turned on to help wake me up)

Setting Up Pi-star

Download pi-star and write the image to the microsd card.  Configure pi-star by disabling the built-in wifi chip (in favor of the USB dongle) and set up pi-star to connect to your network.  (see pi-star site for more info).  Once connected to your network, you should give it a static IP address so your script will always be able to reach it and dispatch messages to it.  To do so, you can use the DHCP IP reservation function on your router or edit /etc/dhcpcd.conf to add the static IP by adding something like the following to the end of the file: (change to what’s appropriate for your environment)

# example addresses shown – substitute your own network's values
interface wlan0
static ip_address=
static routers=
static domain_name_servers=

NOTE: you need to make sure the / partition is writable before you edit this file (by default it is not) by running “rpi-rw”, and when you’re done, run “rpi-ro” to make the / partition read-only again.  Keeping the / partition read-only will help protect your SD card from wear.

Next, you must set up Pi-Star, turn on POCSAG mode, and be sure the frequency is set to 439.9875 MHz (this process is documented on the Pi-Star website/wiki, so I won’t go into it here).  Once you have Pi-Star in POCSAG mode and on the correct frequency, it’s time to program your pager (see your pager documentation) with the frequency and a CAP code so your pager will respond to page transmissions.  With your pager programmed correctly and Pi-Star properly configured, you are ready to install the scripts and set them up in a crontab.  (This assumes you have already configured fetchmail to pull down mail from the Gmail account you will use to receive on-call dispatches.)  Refer to the fetchmail documentation on how to set up IMAP or POP access to Gmail.

I call fetchmail and the parsing script from crontab every minute:

* * * * * /usr/bin/fetchmail &> /dev/null
* * * * * /home/john/ &>/dev/null

NOTE: please see the comments within the scripts for info on what needs to be setup (variables) for the script to process your mail.
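As a rough sketch of what the parsing side does, here is the mail-scanning half in Python (the sender address is hypothetical, and the mailbox path depends on where fetchmail delivers your mail):

```python
import mailbox

def pending_pages(mbox_path, sender):
    """Scan the local mbox that fetchmail delivers to and return short
    page strings for any dispatch messages from the expected sender."""
    pages = []
    for msg in mailbox.mbox(mbox_path):
        if sender not in (msg.get("From") or ""):
            continue  # not an on-call dispatch; ignore
        subject = (msg.get("Subject") or "").strip()
        pages.append(subject[:80])  # POCSAG messages are short, so truncate
    return pages
```

Each returned string would then be handed to the dispatch step that sends the page out through Pi-Star.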



If you have an Amazon device, like an Alexa or a Ring, you soon will be sharing your internet connection publicly (at least a small portion of it).  Amazon is quietly opting device owners in to a new shared public network called Sidewalk.  This feature will be turned on by default, but you can turn it off fairly easily.


Should you choose to opt-out of Amazon Sidewalk, here’s how:

  • Open the Alexa app on your iPhone or Android
  • Tap More
  • Tap Settings
  • Tap Account Settings
  • Tap Amazon Sidewalk
  • Toggle the switch to Off to disable your participation

You can always change your mind later and join back in.


You may be one of those people (like me) who prefers web content rendered in dark mode.  Often the preference comes down to less eye strain and glare: light text on a dark background is much easier on the eyes.  There’s also another thing to consider: on AMOLED screens, darker pixels mean the display uses less energy.  My preference is for the first reason – it’s a lot easier on my eyes when I spend hours in front of a computer, both as a staple of my profession and for my hobby.  That’s a LOT of screen time (and assault on my eyes)!


Fortunately, the solution is a simple one (if you’re a Chrome user).  You can enter this string in the URL field: chrome://flags/#enable-force-dark and choose Enable.  The result is pretty darned sweet!


Soon after moving into our new home in Northbridge, MA, I began rolling out home automation technologies that are designed from scratch and uniquely custom.   Doing it this way, I have more control over the quality and versatility of my deployment and design.  The trade-off is a steeper learning curve – but I LOVE learning!  In our old home, many of my automations made use of VMs running various applications to perform tasks and provide actionable data.  In my new design, I want to achieve the following goals:

  • lowest possible power footprint – do more with less
  • learn and utilize Docker container technology to replace fatter VMs with slimmer application deployments on power-efficient computing (such as the Intel Celeron J4115)
  • maintain the highest levels of system security by using a local PubSub broker so message payloads never leave the firewall
  • utilize my investment in Ubiquiti UniFi wireless APs (5 of them throughout the house) over a dedicated SSID for home automation
  • be easy to manage
  • wife accepted!   (WAF score)     Note: for those who don’t know, WAF score is a value I assign to projects that achieve a high acceptance rating from my wife  (Wife Acceptance Factor)


The first thing I did was research and select a hardware platform to run my containers.  The following applications are containerized in this new design:

  • Domoticz – the home automation hub software I use – essentially the cornerstone of the entire system
  • Pi-Hole – DNS ad blocking and management application (also part of our DNS jail)
  • MySQL – Yeah, I need a database for some of these apps
  • PHP MyAdmin – a nice MySQL web based database workbench
  • WordPress – for hosting local information that guests will land on when they visit and join the guest wifi
  • Cacti – a robust SNMP NMS system for monitoring application/system performance over time
  • habridge – a Java-based Philips Hue emulator that allows us to connect Alexa (Amazon Echo) to selected controlled endpoints on the Domoticz dashboard
  • A minimal ubuntu container for launching various scripts

In the past, I had run these on VMs, and while that worked well, it isn’t as efficient as a containerized deployment.   The hardware I used is an Intel Celeron J4115 box with 8GB RAM and a decent-sized M.2 SSD.  I use an NFS mount to store container configs and logs on the Synology NAS for safekeeping.

When I built the weather tube, I re-used a marquee scroller project I had done previously and wrote a couple of scripts to push data to it and display weather stats in the kitchen in real time.  I use a Darksky API account to get the data for my town into my Domoticz container.  Domoticz polls my Darksky account every 5 minutes or so and displays the weather info through widgets on the Domoticz dashboard.   This is great and all, but if you’re not looking at the dash you won’t have the latest info.  Luckily, Domoticz has a rich JSON API and it’s easy to get data out of Domoticz for various uses.   Every “device” or sensor object in Domoticz has an IDX address which is easy to use via the JSON API to poll data from Domoticz.   The advantage here is that I only have to poll Darksky once from Domoticz, but I can poll Domoticz as many times, and from as many local devices, as I want, which helps me stay within the free limits on my Darksky account.  The JSON feed for devices looks something like this:
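Pulling a reading back out of Domoticz is a single HTTP GET against its JSON API; here is a sketch (the host, port, and IDX are placeholders for your own install):

```python
import json
from urllib.request import urlopen

# Placeholder host/port; Domoticz answers device queries at
# /json.htm?type=devices&rid=<IDX>
DOMOTICZ = "http://192.168.1.10:8080"

def get_device(idx):
    """Poll Domoticz for one device by IDX and return its result record."""
    with urlopen(f"{DOMOTICZ}/json.htm?type=devices&rid={idx}") as resp:
        return json.load(resp)["result"][0]

def device_data(payload):
    """Pull the human-readable reading out of a devices reply."""
    return payload["result"][0]["Data"]
```

Any number of local gadgets can call this without ever touching the Darksky account quota.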

So how do I leverage this bounty of information and display it on my LED scrolling marquee tube?  A little smear of Bash and Python scripting to the rescue!  I first created a Python script that pulls the data elements that I want and builds a string to scroll on the tube at its IP address on the network.   The string is cached to a file (wx/conditions) and the contents of this file are pushed to the tube every 15 minutes by another script called from cron.
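A minimal sketch of those two steps – building the scroll string and caching it for the cron job – might look like this (the field names, formatting, and default cache path are assumptions, not the exact originals):

```python
import os

def build_conditions(temp_f, humidity, wind_mph, forecast):
    """Build the one-line string the marquee tube will scroll."""
    return (f"NOW: {temp_f:.0f}F  HUM: {humidity:.0f}%  "
            f"WIND: {wind_mph:.0f}MPH  {forecast.upper()}")

def cache_conditions(line, path="wx/conditions"):
    """Cache the string so the cron job can push it to the tube later."""
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        f.write(line + "\n")
```

The cron-side script then just reads the cached file and sends its contents to the tube's IP.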


This is a quick and dirty way to push this data to the tube, but I am currently working on an MQTT version that will subscribe to a topic and pickup the data feed via a published topic.  I’ll update this article once that’s ready for prime time so check back often.

Here’s a video of the tube in action in the kitchen:


If you work with one or a fleet of LUKS-encrypted Linux machines, you may occasionally need to reboot one remotely.  But if the host has full-disk encryption such as LUKS, a reboot normally means being physically present at the local console to enter the LUKS passphrase!  Can’t make it into the office to unlock that drive?  You’re SCREWED!  Well, not when you set up dropbear SSH with SSH public key authentication!  Dropbear to the rescue!  Read on!

The Solution

Manual Approach

So let’s say, for this example, you’re running either Debian or Ubuntu Linux and your entire system drive is LUKS-encrypted (likely required by your corporate policy).  You can install the needed package via apt as follows:

apt install dropbear-initramfs

This package allows your system to rebuild the initramfs with a dropbear SSH listener.  This will also work even if you update your kernel so not to worry.  Once you install this package, here’s how you configure it:

nano /etc/dropbear-initramfs/config

When you open the file for editing, you’ll want to add this line to the file, then save and close:

DROPBEAR_OPTIONS="-p 222 -I 120"

What this line does is configure dropbear to spawn and listen for connections on TCP port 222.  The -I 120 option sets dropbear to disconnect if the session is idle for more than 120 seconds.  Once you have changed the file in this way, there is one more thing to do: generate SSH keys and copy your public key into the authorized_keys file in the dropbear config folder.   If you already have SSH keys generated, you can simply copy your public key (DO NOT copy your private key!).  To generate SSH keys:

ssh-keygen

Your keys will be found in ~/.ssh/.  You can easily copy your public key by viewing the .pub file (e.g. ~/.ssh/id_rsa.pub) on a trusted machine – preferably an SSH jumphost or other trusted SSH host:

less ~/.ssh/id_rsa.pub

Then simply copy the key to another terminal window on the dropbear target host and edit the file:

sudo nano /etc/dropbear-initramfs/authorized_keys

Paste the key into the open file, then save and quit.  The final step is to rebuild your initramfs image so that it now includes dropbear:

sudo update-initramfs -u

This last step is what rebuilds your initramfs image and it will now include dropbear with new kernels.


Automated Deployment Using Ansible

But what if you have to do this on many machines?  This setup could take you a while to do manually.  Ansible to the rescue!  I’ll show you how to setup a simple play that will deploy these changes to one host.  You can use an inventory file with multiple hosts to run this against multiple targets:

The play:
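A minimal play covering the manual steps above might look like this (file paths as in this article; the source key file, port, and idle timeout are examples):

```yaml
- hosts: all
  become: yes
  tasks:
    - name: Install dropbear-initramfs
      apt:
        name: dropbear-initramfs
        state: present
        update_cache: yes

    - name: Configure dropbear listener options
      lineinfile:
        path: /etc/dropbear-initramfs/config
        regexp: '^#?DROPBEAR_OPTIONS='
        line: 'DROPBEAR_OPTIONS="-p 222 -I 120"'
      notify: rebuild initramfs

    - name: Install authorized public key
      copy:
        src: files/authorized_keys
        dest: /etc/dropbear-initramfs/authorized_keys
        mode: "0600"
      notify: rebuild initramfs

  handlers:
    - name: rebuild initramfs
      command: update-initramfs -u
```

The handler mirrors the manual `sudo update-initramfs -u` step, so the image is only rebuilt when something actually changed.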

If you don’t mind supplying just the hostname and ip for installation targets, I wrote a bash wrapper script to also make deployment easy if you’re just doing about a dozen or so hosts:

First create a target template file:

Then create the wrapper script in bash:

What happens here is that when you run the wrapper script, it’ll ask you for the hostname and IP of the installation target.  It’ll then create the inventory file for you and run the play:

In this example I had already installed dropbear to this host because I didn’t have another one without dropbear.   What you see here is what you could expect except that you would see more changes reflected because the addition of dropbear would have caused more changes to this host.  I only showed this to showcase the example of running the play against a single target using the wrapper.


Building The Remote Reboot Tool

Ok, now that dropbear is installed on our remote encrypted host, we can manually SSH to the dropbear instance after rebooting the machine by running:

ssh root@<ip-of-target> -p 222

Once connected, you can manually issue the following command to unlock your LUKS drive:

cryptroot-unlock

You then enter your LUKS passphrase just as you would at the local console, hit Enter again, disconnect, and the host will finish booting.  At this point you can access the system as you normally do by remote.  Is there an easier way?  There sure is!  A little PHP and expect magic to the rescue:

First, we create a web form to take input from the user.  We need the IP address and LUKs passphrase of the target.  Our webform collects this from the user:

When the form is submitted, the inputs are passed to the submit script:

The job of the submit script is to call our expect script and pass the two variables to it.  The expect script does all the heavy lifting and makes the SSH connection to dropbear and provides the information during the session prompts.  We also put the trusted keys in /scripts/key which our expect script uses to authenticate to dropbear:

But BEFORE our expect script will work, we need to install the interpreter on the jump host where our PHP script will call it:

apt install expect


Once this script finishes, the remote dropbear host is unlocked.   Our end user only has to interface with the PHP web form:

I hope you find this solution useful.  Please feel free to comment or share your ideas.


This year, my family and I went into Hobby Lobby in search of Halloween decorations.  I found a nice ceramic jack-o-lantern that was painted green inside and was only $3.00.   I had an idea…

I thought, “I could put an ESP8266 in there with a double LED and a flickering loop on the ESP, and that would look so cool sitting on my dining room table where the Halloween spread is laid out every year.”  So I put the ceramic jack in my cart amongst other things.

CODE: FunkyCandle

I started with a Wemos module that has an 18650 battery holder, which is very convenient for this project.  Since the area to be lit was fairly small, I used two green LEDs and soldered them to pins D6 and D7 and ground.  This provides two separate flicker channels for a more realistic flicker effect.  If you need to make this brighter, you could use small MOSFETs and a higher voltage with higher-power LEDs of any color you wish.
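The flicker loop itself boils down to giving each channel its own random brightness at random short intervals.  Sketched in Python (the real code runs as an Arduino loop on the ESP8266; the PWM range and timings here are assumptions):

```python
import random

PWM_MAX = 1023  # default ESP8266 PWM range (assumption)

def flicker_step(channels=2):
    """Return a random brightness per channel for one flicker step.

    Each channel gets its own independent random level, which is what
    makes two LEDs look like a real flame rather than a uniform blink.
    """
    return [random.randint(PWM_MAX // 4, PWM_MAX) for _ in range(channels)]

def flicker_delay_ms():
    """Short random hold time between steps gives the irregular flame look."""
    return random.randint(30, 120)
```

On the device, the loop writes each value to its PWM pin and delays by the returned number of milliseconds before the next step.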

Here’s a look at the module:

If you want higher power output, you can put a MOSFET on each PWM channel and drive higher power LEDs like this:

A video of the higher power unit.  The flicker effect is random and looks very realistic:

NOTE: when you flash the code to the module for the first time, you will not see the LEDs blink yet.  To make them blink, you first have to add the module to your wireless network (this is designed as a network client so you can remote control it).  To do that, take your mobile phone, browse the nearby wifi networks, and find “FunkyCandle”:

Once you see that network, go ahead and click on it to access the config manager (click Configure WiFi): The module will now scan available nearby networks so that you can join one (click the one you want and then enter the password):

Once you enter the password, click save and wait about 20 seconds for the network to be stored, then go look for the module’s IP address using your router’s tools (likely under DHCP leases, etc).  Enter that IP address into your browser to turn the candle on, and a similar URL to turn it off (see the FunkyCandle code for the exact paths).  (NOTE: this only turns the LEDs on or off, it does not turn the module on or off!)

Now, simply place the module into your favorite decoration and enjoy!   Here’s a video of mine:


We have an electronic Z-Wave lock on our back door.   This lock works great with our Domoticz home automation system over the Z-Wave interface.  The lock allows each user to have their own code, but an epic fail is that you have to take the lock apart to create/change/delete codes.  I hate the way codes are managed on this lock; the task is quite onerous.  There HAS to be a better way!  My needs require:

  • allow family members to unlock the door
  • allow access for hired house help (babysitters, contractors)
  • easily create and distribute some kind of physical token (other than a key) that can be revoked/changed if lost
  • operate reliably with our existing home automation infrastructure

Of course ESP8266  and MFRC522 to the rescue!


First, the shopping list (bring your own 5 volt power source):

You can build just about ANYTHING with the ESP8266 so long as you have a problem to solve, and couple imagination to the problem – you can do magic!  That’s just what I did here.  I had an RFID chip reader antenna module (MFRC522) and proceeded to see how others have implemented an RFID badge reader.   I found one project that implemented a basic reader with a relay switch to activate a solenoid.  Since I don’t have a solenoid on my lock, I had to make modifications to the code from Luis’ project:

  • use MQTT to signal to Domoticz to unlock the door over my own MQTT broker like the rest of my sensors
  • log when invalid tags are scanned
  • enter a unique client and topic name to uniquely identify the reader on the system

Here’s my fork of Luis’ code, which adds improvements including server-side MQTT support and an example for Domoticz home automation.  To connect the reader to the door lock, I had to get the Domoticz IDX address of the lock and build a statement into the code that is sent when the database returns a match.  The statement looks like this, where the idx value (123 below) is the IDX of the locking device:

echo shell_exec(`mosquitto_pub -h $pubHost -u $pubUser -P $pubPass -t $topic -m "{\"command\":\"switchlight\",\"idx\":123,\"switchcmd\":\"Off\"}"`);

The project workflow is as follows:

  • User scans their tag
  • The ESP8266 calls out over WiFi to a local server to check a database of RFID tags using an HTTP POST
  • fobcheck.php receives the POST value containing the tag ID and checks the database
  • If the database returns a match, fobcheck.php runs a shell command to the server-side MQTT client to instruct the door to unlock
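The check itself is simple.  Here's the gist of the server-side logic, sketched in Python (the real script is PHP; the tag IDs, database contents, and IDX value are made-up examples):

```python
import json

# Hypothetical tag database: tag ID -> (assigned user, enablement value),
# mirroring the user/tag/enabled columns described below.
TAGS = {
    "04A1B2C3": ("john", 1),
    "09F8E7D6": ("babysitter", 0),  # revoked fob
}

def check_fob(tag_id):
    """Return the Domoticz MQTT payload if the tag is valid, else None."""
    user, enabled = TAGS.get(tag_id, (None, 0))
    if not enabled:
        return None  # unknown or deactivated tag: log it, don't unlock
    # idx 123 stands in for the lock's Domoticz IDX:
    return json.dumps({"command": "switchlight", "idx": 123, "switchcmd": "Off"})
```

When a payload is returned, it gets published to the broker (via mosquitto_pub in my setup) and Domoticz unlocks the door.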

The completed reader is simply powered from a 5V power source outside and placed in a convenient location near the door.   Here’s what I used to build the reader in a weatherproof case:

The fobs can be attached to any keychain for easy use and are relatively cheap so if they are lost or stolen, can be easily deactivated.

To manage users, I use a database back end where each tag has a record which contains the assigned user, tag ID, and enablement value (1 for active, 0 for inactive):

To allow the ESP8266 to read the database, I wrote a PHP connector script.  It connects to the database when the ESP8266 sends an http POST request:




So you may recall my tank notification project, whereby I send alerts to our mobile phones to remind us to empty the tank of the dehumidifier in the basement.  That project was cool, but it didn’t really come full circle.   I got tired of emptying that beast daily!  After all, who needs home automation if you can’t get your house to actually do all the work for you?  The point of automation is to transform a process workflow into a repeatable sequence of steps and turn those steps into actionable smarts, output, or work requiring little to no human intervention.  Sure, there are self-emptying dehumidifiers on the market, but I decided to build my own automated tank for the following reasons:

  • Because I can!
  • Commercially available self-emptying dehumidifiers often fail – their pumps wear out often
  • As a learning exercise – to learn the needed bit of knowledge and transform that knowledge into a process plan
  • Because, why the fuck do I want to empty a 5 gallon container of water every day during the humid seasons???


As the dehumidifier runs, the tank fills – this is basic physics at work.  During filling, the level rises.   Tank level is something we can measure and turn into system input.  To solve this problem, I need to know the following things:

  1. When does the level reach maximum – and how do we measure that?
  2. How do we communicate that input to the central computer, so we get notified and can instruct the pump to turn on?
  3. We need to turn the pump on at maximum level to prevent overflow (and a mess, and an unhappy wife)
  4. How do we know when to turn the pump off?  We don’t want to run a dry pump – that is bad!
  5. How do we communicate tank status during draining?
  6. When do we send the signal to turn the pump off – what logic defines an empty tank, so the pump only runs when water is present?

NOTE: It is desirable to keep pump cycles to a minimum to preserve the life of the pump.  This is why a 10 gallon container is used to collect condensate.  I use a 1/3 horsepower submersible pump to drain the container.  Drain time is about a minute.


We can register maximum level using a float sensor.  In this design, I use two: one at maximum level and one at minimum level.  This gives me three possible states to power the logic that decides when to turn the pump on and off as needed without letting the pump run dry (VERY important!).  The float sensor I used was sourced from Amazon and looks as pictured here.  
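The two floats give three meaningful states (empty, filling, full), and the pump decision needs hysteresis: start at the top float, run until the bottom float drops, and never run dry. A minimal sketch of that decision logic in Python (the real decisions are made in the home automation software, not on the MCU – this is illustration only):

```python
def pump_command(top_wet, bottom_wet, pump_running):
    """Decide whether the pump should run, given the two float sensors.

    top_wet / bottom_wet: True when water has lifted that float.
    Hysteresis: start only when the top float trips (tank full), keep
    running until the bottom float drops (tank empty), and never run
    when the bottom float reads dry -- a dry pump dies fast.
    """
    if not bottom_wet:           # below minimum level: never run dry
        return False
    if top_wet:                  # at maximum level: start draining
        return True
    return pump_running          # mid-level: keep the current state

# Walk through one fill/drain cycle
state = False
state = pump_command(False, True, state)    # filling: pump stays off
state = pump_command(True, True, state)     # full: pump turns on
state = pump_command(False, True, state)    # draining: pump keeps running
state = pump_command(False, False, state)   # empty: pump shuts off
print(state)
```

The middle branch is what keeps pump cycles to a minimum: between the two floats, the pump simply continues doing whatever it was already doing.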


This sensor was not ready to be used just yet.   I had to make a mount for it out of PVC so that I could set the correct height.  I didn’t want the maximum tank level to be too close to the tank lid – that would just be too risky.  One of these is also set much deeper into the tank to register the minimum reading.






Once we have the sensors mounted to the lid and the proper depth set, we can read them using (you guessed it) an ESP8266 wireless MCU running my favorite flavor of open source firmware, Tasmota.  I have found so many versatile uses for this firmware, which runs reliably on ubiquitous $6 modules.  Just two pin inputs and a ground connection are required to read the sensors.  To hook them up, I used 4-conductor solid telephone wire from the terminals to the sensor wires.  

The ESP8266 was connected to both sensors with a shared common ground.  When the status is read (either ON or OFF), we use a rule to format an MQTT message to a computer running Domoticz (this should work with other HA software as well).  On the Domoticz HA computer, we declare two integer user variables (1 or 0) and set them (click for larger image): 

Once those are in place, we can then set up a logic structure to power the decisions on when and when not to apply power to the pump.  In Domoticz, it’s super easy to create a logic scenario.  If you can click and drool with a mouse, you can implement a Lua script or block-type logic like this (click for larger image):

Every time the sensor state changes, the variables we set get updated by a rule placed into the Tasmota firmware console:

rule1 on POWER1#state=1 do publish domoticz/in {"command":"setuservariable","idx":5,"value":"1"} endon on POWER1#state=0 do publish domoticz/in {"command":"setuservariable","idx":5,"value":"0"} endon

rule2 on POWER2#state=1 do publish domoticz/in {"command":"setuservariable","idx":6,"value":"1"} endon on POWER2#state=0 do publish domoticz/in {"command":"setuservariable","idx":6,"value":"0"} endon

These two rules create the MQTT message whenever a sensor event is triggered by the water rising or falling.  The message updates the variables on the Domoticz computer, and the logic re-evaluates the situation.   This happens continuously and lets us run the pump only when needed and only when water is present.  We also get notified on our mobile devices when the pump turns on and when it turns off, so we are aware.
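The JSON in the rules above is the `setuservariable` message Domoticz accepts on its `domoticz/in` topic. As a sketch, here is the same payload built in Python; the broker address is a placeholder, and the actual publishing step (commented out) assumes the common paho-mqtt client:

```python
import json

def setuservariable_payload(idx, value):
    """Build the domoticz/in message the Tasmota rules emit.

    idx is the Domoticz user-variable index (5 and 6 in the rules);
    value is sent as a string, matching the rule payloads.
    """
    return json.dumps({"command": "setuservariable",
                       "idx": idx,
                       "value": str(value)})

payload = setuservariable_payload(5, 1)
print(payload)

# Publishing it would look like this (needs paho-mqtt and a reachable
# broker; the hostname below is a placeholder, not the author's setup):
# import paho.mqtt.publish as publish
# publish.single("domoticz/in", payload, hostname="192.168.1.10")
```

On the device itself there is no Python, of course – the Tasmota rule engine does all of this on the ESP8266.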


What pump did I use, you ask?  I found this gem on Amazon (ASIN B000X07GQS) and boy is it FAST:

In order to turn this on and off by command, I used one of the famous KMC70011 WiFi outlets, which I re-flashed with Tasmota (I don’t like Chinese servers having control of my shit).  These are very reasonable on Amazon, and when you liberate them from their Chinese overlords, they are AWESOME!  Reflashing with Tasmota is fairly easy, and these are easy to open and make the right connections to.  They are also ETL certified for safety.