Projects

Here’s what I’ve built, or what I’m currently working on.

ABSTRACT

In our home, we have two garage doors with RF remotes in our cars.  For most people, this is generally considered “good enough”.  I wanted to come up with a way to connect our garage door openers to our home automation system.  Doing so would have the added benefit of remote control from anywhere, especially when we are away.  This could be useful for letting deliveries be placed in the garage, or for any other situation where we want to allow someone access to the garage but not the rest of the house.  Some folks have a code entry panel that serves this purpose, but then you have to share that code, which can compromise security should the code be passed along without your knowledge.  The ability to open the garage remotely allows access without needing to share a credential.

This article makes the following assumptions:

  • You already know how to flash Tasmota onto ESP8266 hardware
  • You are familiar with Domoticz home automation console and adding devices
  • You use some method of message transport (e.g. MQTT) between Domoticz and your Tasmota-powered hardware

View the following video to see a demo of how this works:

 

SOLUTION

Since I already have an in-place home automation system, all I needed to do was configure two buttons on the console that would accept a pushbutton command and send a signal to a relay to open the door.  For hardware, I used a dual relay board that has an ESP8266 chip.  The ESP8266 was flashed with the latest Tasmota release and configured to operate the relays.  The switch output of each relay is wired to the existing wall switch of its garage door so that, when tripped, it operates the door just as the physical wall buttons already do.

The Tasmota configuration uses the “Generic” template and configures GPIO0 to be Relay1 and GPIO2 to be Relay2.  After setting the GPIOs, I had to enter some settings and a ruleset into the Tasmota Console on the ESP-01 so it can talk to the relay serial chip on the board:  (NOTE: some older versions of this dual relay board may use 9600 baud instead of the 115200 baud shown here.)  Also, the dual relay board needs to operate in Mode 1 (the default mode, indicated by a red LED on the board).

seriallog 0
Rule1
on System#Boot do Backlog Baudrate 115200; SerialSend5 0 endon
on Power1#State=1 do SerialSend5 A00101A2 endon
on Power1#State=0 do SerialSend5 A00100A1 endon
on Power2#State=1 do SerialSend5 A00201A3 endon
on Power2#State=0 do SerialSend5 A00200A2 endon

Then to enable the above rule:

rule1 1

turns on Rule1.  Once enabled, I want to ensure that power disturbances do not trigger the relays or cause the ESP-01 to lose its config.  To ensure that the relays stay OFF when there are power interruptions or power cycles, I needed to enter this command on the Tasmota Console:

PowerOnState 0

I also wanted to make sure the config would remain intact even if there were several power cycles/disturbances (sometimes Tasmota can reset to defaults if there are more than 6 fast consecutive power cycles), so I also entered this command into the Tasmota Console:

SetOption65 1

Finally, we want the relays to only trigger momentarily when activated so that a pulse is registered to the garage door openers.  To do that, we must enter two more commands on the Tasmota Console of the relay board:

PulseTime1 1
PulseTime2 1

Once these changes are set, all that is needed is to set the Domoticz IDX addresses to match the two pushbuttons that were added to the Domoticz console.  At this point it should be possible to remotely trigger the relays from Domoticz; they will trigger ON momentarily (per the PulseTime setting) and then switch off again.  All that is left to do is wire the relay N.O. contacts to the physical garage door wall switches in the garage.  It will now be possible to open or close the garage doors from the Domoticz console via any mobile device that has access to the Domoticz home automation console.
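The IDX values come from the Devices tab in Domoticz once the two virtual pushbuttons are created; they can then be set from the Tasmota Console with the DzIdx commands.  As a hypothetical example (12 and 13 are placeholders for whatever IDX numbers Domoticz assigned to your pushbuttons):

Backlog DzIdx1 12; DzIdx2 13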

ABSTRACT

I moved to an area where my AM news station (WBZ) comes in rather scratchy.  Sure, I could stream them over the internet on a mobile device, but what about the radios I currently have?  Have they now become paperweights?  Fortunately, WBZ streams online, and I found a cool FM transmitter module that made me think, “I could use this with a Raspberry Pi to put WBZ on the FM dial near my home.”  The FM module is about $12 and available on Amazon, and I already had a Raspberry Pi computer I could dedicate to the project.  Why not try it?

SOLUTION

I installed Ubuntu Linux 20.04.2 server on the Raspberry Pi, and then installed software called Liquidsoap.  Liquidsoap is an audio/streaming Swiss Army knife and is, of course, open source.  Normally, people use Liquidsoap to capture a live audio source and then create a stream on the internet.  I wanted to do the reverse: pull in an internet stream and play it over the USB DSP that is built into the FM module.  A bonus is that the FM module is also powered via the USB connection – one cable does it all.  Shown here is the finished transmitter:

The FM module is quite versatile.  It has an analog line-in, a condenser mic, and a USB audio interface all built in!  The module is smart enough to detect which input you are using and use only that one.  When I hooked the module to my Raspberry Pi and ran:

aplay -l

I was able to see the USB audio interface on the FM module:

$ aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: Headphones [bcm2835 Headphones], device 0: bcm2835 Headphones [bcm2835 Headphones]
Subdevices: 8/8
Subdevice #0: subdevice #0
Subdevice #1: subdevice #1
Subdevice #2: subdevice #2
Subdevice #3: subdevice #3
Subdevice #4: subdevice #4
Subdevice #5: subdevice #5
Subdevice #6: subdevice #6
Subdevice #7: subdevice #7
card 1: CD002 [CD002], device 0: USB Audio [USB Audio]
Subdevices: 0/1
Subdevice #0: subdevice #0

The “card 1” device is the USB connection to the FM module.

All I needed to do now was install and set up Liquidsoap.  For that I used this guide and installed with OPAM.  Once I had Liquidsoap installed, I created a .liq script with the following configuration to stream WBZ and play it out the FM module’s USB interface:

str = "http://cast.wizworks.net:8000/wbz"
prog = mksafe(input.http(str))
prog = amplify(0.7,override="replay_gain",prog)
output.alsa(device="plughw:CARD=CD002,DEV=0",prog)

With this .liq file saved as play.liq, I could then start it up by running:

liquidsoap ./play.liq

If you want to run this as a systemd service, just follow the usual conventions to create the service file and install it as a service so it comes up whenever the Raspberry Pi is started.
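As a rough sketch (the user name and the OPAM binary path are assumptions; adjust them to wherever liquidsoap and play.liq actually live on your system), a unit file saved as /etc/systemd/system/wbz-fm.service might look like this:

[Unit]
Description=WBZ FM rebroadcast via liquidsoap
After=network-online.target
Wants=network-online.target

[Service]
# Adjust User and paths to your install; OPAM puts liquidsoap under the user's home
User=pi
ExecStart=/home/pi/.opam/default/bin/liquidsoap /home/pi/play.liq
Restart=on-failure

[Install]
WantedBy=multi-user.target

Then enable it with sudo systemctl enable --now wbz-fm.service so it starts at boot.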

FM Module Tips

The FM module, as it comes, does not have an antenna on it.  For best results, solder a 1 meter length of wire to the “ANT” solder pad and place the entire RPi/FM setup in a high location within your home.  You should find a clear spot on your FM dial using a portable radio and set the FM module to that frequency.  When properly set, you should be able to pick up the signal from your RPi/FM package at least 4 houses away before you start to hear static.  This amount of range from such a small module is pretty decent and sufficient to enjoy your streamed audio source on any ordinary radio near your home.  The sound quality is very good for a $12 module and sounds nice on my Tivoli and other radios.

Background

My neighbor recently did a landscape lighting project of his own.  It looked great and was a simple grid-tied system.  I wanted to kill two birds with one stone and do a landscape lighting system of my own, but I wanted ours to be connected (wirelessly) to our Domoticz home automation system, and I didn’t want to pay for the electricity to run it.  I was able to achieve both goals in this project through the use of an ESP8266 and some MOSFETs driven by PWM, which had the added benefit of making the system dimmable if desired.

Solution

For this project, I gathered the following materials:

The way my house is situated, all the sunshine is in the back yard – plenty of sun there year round.  The front, where our landscaping is, has a lot of shade, so it is not a good place for a solar panel.  I also wanted to hide all the power generation and control stuff in the back yard anyway.  I was able to cut a thin slice into the side yard following the foundation from the back yard to the front yard and easily bury the cable in the dirt.  You can’t even see the cable:

The back yard is where the power generation and control equipment lives.  I built the wireless PWM controller inside an IP67 rated enclosure and put banana posts on it for easy connection:

This allows me to control the lights ON/OFF/dim using my existing Domoticz home automation system.  Commands and telemetry are carried over WiFi and MQTT to the Domoticz docker container.  The object in Domoticz can then apply time schedules, change brightness, or even turn on the lights during a motion trigger event from a PIR sensor that can be added to sense presence.
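As an illustration of what a dimming command looks like on the wire, this is roughly what can be published to Domoticz’s MQTT “in” topic to set the lights to 50% (the broker hostname and idx value are placeholders for my setup):

# Set the landscape lights to 50% via Domoticz's MQTT gateway
# (broker "mqtt.local" and idx 45 are placeholders)
mosquitto_pub -h mqtt.local -t "domoticz/in" \
  -m '{"command":"switchlight","idx":45,"switchcmd":"Set Level","level":50}'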

In the back yard, I set the case containing the battery and charge controller, solar panel, and PWM controller under the solar panel, in a spot where there is ample sunlight all day.

In the front, I ran the cable near the areas I wanted the lights and used the included connectors that came with the lamps.

I set the object in Domoticz up to turn these on at 50% brightness 30 minutes past sundown.  Here’s how the final result looks:

ABSTRACT

We moved into town in July 2020, and as new residents we are always looking for ways to keep a pulse on happenings in the town.  One great way to do that is to monitor the radio systems of the municipal services that operate within the town.  To monitor, one can either go out and purchase a $400 scanning receiver, install an antenna, and stay within earshot of the scanner to stay informed, or the (better) option is to set up a dedicated receiver for each service and connect the audio output to a computer (running appropriate software) to create and send a stream to broadcastify.com or another streaming relay service.  What’s nice about streaming to Broadcastify is that they archive all the audio you send them, so when something interesting happens and you miss it, you can download the audio from the stream archives and listen to it as your schedule permits – or you can just listen live from any mobile device using the Broadcastify app, from anywhere.

MY STREAMS

POLICE



FIRE


MY SOLUTION

I chose the second option because I have plenty of spare radios and computers.  To create my streams, I use the following in my arsenal:

  • a low power Intel Atom ultra small form factor computer with plenty of USB ports and running Linux OS.
  • the liquidsoap audio toolkit to define and create the streams.
  • USB audio interfaces with inputs (connected to the radios)
  • UHF or VHF radios to dedicate to the monitoring setup – connected to a common (shared) antenna or to individual antennas.
  • An account on Broadcastify or another stream server, from which you will serve your streams.
  • Broadcastify stream details for each stream (you will use these to set up Liquidsoap).
  • a wired network connection (preferred) for your computer generating the streams.

I started with a small, energy efficient computer (an ASUS Intel Atom “net top” computer) on which I installed Ubuntu Linux and Liquidsoap.  This computer needs a reliable internet connection and power source, as it will be running 24/7.  I chose a low power computer because I wanted to keep my energy costs low for the project.  Once Linux is installed and an IP is set up on the box, a keyboard and monitor are no longer needed; you can do the rest of the setup over the local network via SSH.  To set up the computer for streaming, I installed Liquidsoap and created a config file (/etc/liquidsoap/radio.liq) to define the streams:


apt install liquidsoap
apt install liquidsoap-plugin-alsa

Once installed, you need to create a config file to tell liquidsoap how to create and process your streams:


# Define physical audio pickups:
radio4 = mksafe(input.alsa(device="plughw:CARD=USB,DEV=0"))
radio5 = mksafe(input.alsa(device="plughw:CARD=CODEC,DEV=0"))


# Define stream destinations:
output.icecast(
%mp3(stereo=false, bitrate=16, samplerate=22050),
host="audio3.broadcastify.com",
port=80, password="yourfeedpassword", genre="Scanner",
description="Northbridge Police Dispatch", mount="/kejrncsk888",
name="Northbridge Police Dispatch - 453.1875 MHz", user="source",
url="https://www.broadcastify.com/listen/feed/34984", radio4)


output.icecast(
%mp3(stereo=false, bitrate=16, samplerate=22050),
host="audio1.broadcastify.com",
port=80, password="yourfeedpassword", genre="Scanner",
description="Northbridge Fire Dispatch", mount="/s7sfsd87dsf",
name="Northbridge Fire Dispatch - 154.3625 MHz", user="source",
url="https://www.broadcastify.com/listen/feed/35186", radio5)

You get the parameters for the above output definitions from the feed details in your Broadcastify account when you apply to set up a feed.  Once they are set up in the config file, you can issue the following command to restart the Liquidsoap service and bring your feeds online:


sudo systemctl restart liquidsoap

Once restarted, liquidsoap should now be sending your audio to broadcastify.  You should see your feeds online:

HOOKING UP THE RADIOS

Now that your stream is up, it’s time to hook up the radios and start sending audio over your stream (radio configuration/programming is out of the scope of this article).  Connect the “speaker out” jack on the back of the radio to the correct “line in” port on your USB audio pickup device and set the volume halfway to start.  (You don’t want too much audio, or your stream could be noisy/distorted.)  As the radio receives traffic, adjust the volume knob on the radio for a good balance of loudness and clarity.  Do the same on any other radios you wish to set up.  Be sure to lock the tuning so that the frequency can’t be accidentally changed.
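On the Linux side, it can also help to check the capture level on the USB audio interface itself; for example (assuming the device shows up as card 1, as in the aplay-style listing shown earlier):

# Open the mixer for the USB capture device (card 1 in the listing)
alsamixer -c 1
# Press F4 to switch to the capture controls and adjust the input level there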

RADIOS NEED ANTENNAS

Because we’re pulling signals off the air and streaming them online, you’ll need to either buy or make an antenna for such a dedicated setup.  I chose to make a simple one using an SO-239 connector:

LISTEN FROM ANYWHERE

Now, you can download the Broadcastify app on any mobile device and listen to the feeds from anywhere.  The data rate is extremely small, so listening for long periods should not consume a lot of data on a data plan.  You can stay informed by listening live whenever you see local police/fire activity in your town, or catch up later from the archives.

Rebuilding My Home Network

ABSTRACT

I have had my ESXi box for YEARS and recently decided to take a dive into the world of KVM (QEMU) on Ubuntu Linux.  It’s a popular and completely open hypervisor that has become a staple in many datacenter environments.  I had been dreading the change because it meant I had to rebuild a few of the VMs I still had.  Though I recently made the plunge into docker containers and finding apps I could containerize, I still have a handful of VMs that perform various functions on my home network and “home lab”.  I’ll preface this by saying that the ESXi box served us well over the years and I hardly ever had to touch it.  The one thing I did not have with ESXi was other hosts to use for moving VMs.  The fact that the license isn’t free, and that ESXi is very persnickety about hardware requirements, put me off trying to implement vMotion (VMware’s method of migrating/moving VMs around in a cluster).  With KVM, my options are more open and I have a handful of hosts that are compatible with KVM, so if I ever have a problem with hardware I can move VMs around in a pinch.

SOLUTION

To create a small cluster of physical hosts to run my new VMs built on KVM, I simply carried out these steps (I’ll go into greater detail on each one):

  • Install Ubuntu Server 20.04 OS on each physical machine
  • Configure a netplan for each physical KVM (pkvm) host that achieves:
    • bonded (teamed) interfaces
    • LACP (802.3ad) attributes to bring up the channel-group session
    • vlan tagged traffic over the bond
    • bridge interfaces to allow kvm guest VMs to attach to the desired vlan
  • Install necessary KVM packages
  • Configure a port channel interface on the main switch & define the physical switch ports that will participate in the LACP channel-group
  • Configure a common NFS mount on the NAS to hold the KVM guest images on the network
  • Move new KVM VMs into my newly rebuilt KVM host
  • Consume donuts that my wife and kids made while the “internet was out”

Setting a Netplan

Ubuntu 20.04 uses netplan to configure the operation of network interfaces.  This method uses a simple YAML formatted file that is easy to write and back up.  If you screw up, you can always revert to a previous file (always make a backup!).  My netplan file looks like this:

network:
  bonds:
    bond0:
      interfaces:
      - enp1s0f2
      - enp1s0f3
      parameters:
        mode: 802.3ad
        lacp-rate: fast
        mii-monitor-interval: 100
  ethernets:
#    enp1s0f0: {}
#    enp1s0f1: {}
    enp1s0f2: {}
    enp1s0f3: {}
    enp3s0:
      dhcp4: no
    enp4s0:
      dhcp4: no
  vlans:
    vlan.2:
      id: 2
      link: bond0
      dhcp4: no
    vlan.5:
      id: 5
      link: bond0
      dhcp4: no
    vlan.11:
      id: 11
      link: bond0
      dhcp4: no
    vlan.10:
      id: 10
      link: bond0
      dhcp4: no
    vlan.50:
      id: 50
      link: bond0
      dhcp4: no
    vlan.73:
      id: 73
      link: bond0
      dhcp4: no
    vlan.300:
      id: 300
      link: bond0
      dhcp4: no
  bridges:
    br73:
      interfaces:
      - vlan.73
    br2:
      interfaces:
      - vlan.2
    br5:
      interfaces:
      - vlan.5
    br10:
      interfaces:
      - vlan.10
    br11:
      interfaces:
      - vlan.11
      addresses: [10.0.1.3/24]
      gateway4: 10.0.1.1
      nameservers:
        addresses: [10.0.1.10]
    br50:
      interfaces:
      - vlan.50
  version: 2

Installing KVM packages

apt install -y qemu qemu-kvm libvirt-daemon bridge-utils virt-manager virtinst

Configuring a port channel interface

With the netplan configuration saved and in place, I just executed sudo netplan apply and then proceeded to set up the switch side of the bonded connection.  The first thing I needed to do was configure a new port channel interface on the switch that would be suitable for carrying tagged traffic:

interface Port-channel2
 description KVMBOX
 switchport trunk encapsulation dot1q
 switchport mode trunk
 spanning-tree portfast
end

After setting up the port channel interface, I now had to add the physical interfaces that are cabled from the switch to the big KVM box:

interface GigabitEthernet1/0/44
 description TRUNK_TO_VBOX1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 lacp port-priority 500
 channel-protocol lacp
 channel-group 2 mode active
interface GigabitEthernet1/0/45
 description TRUNK_TO_VBOX1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 lacp port-priority 500
 channel-protocol lacp
 channel-group 2 mode active
end

wr mem

Once both ports were setup, I could then test the channel-group status:

CORE-SW#sh etherchannel 2 summary
Flags:  D - down        P - bundled in port-channel
        I - stand-alone s - suspended
        H - Hot-standby (LACP only)
        R - Layer3      S - Layer2
        U - in use      f - failed to allocate aggregator

        M - not in use, minimum links not met
        u - unsuitable for bundling
        w - waiting to be aggregated
        d - default port


Number of channel-groups in use: 2
Number of aggregators:           2

Group  Port-channel  Protocol    Ports
------+-------------+-----------+-----------------------------------------------
2      Po2(SU)         LACP      Gi1/0/44(P) Gi1/0/45(P)

It’s now time to set up an NFS share for holding KVM images

This part is easy:

  • apt install nfs-common

Add a line to /etc/fstab:

10.9.9.20:/volume1/vol1 /vol1   nfs     _netdev,nfsvers=3,noatime,bg    0       0

Then mount: sudo mount /vol1.  I then created a symbolic link from the /var/lib/libvirt/images directory to point to /vol1/kvms (where the other KVM servers put their images).
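For reference, those steps look roughly like this (the images.local rename is just how I preserved the existing directory; adjust the paths to your own layout):

sudo mount /vol1
# Point libvirt's default image directory at the shared NFS location
sudo mv /var/lib/libvirt/images /var/lib/libvirt/images.local
sudo ln -s /vol1/kvms /var/lib/libvirt/images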

Now time to move my newly built KVM VMs into my new KVM host (built on a temporary KVM host)
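Since the temporary host and the new host both mount the same shared /vol1/kvms path, “moving” a VM is mostly a matter of re-defining it on the new host.  A rough sketch using stock virsh commands (the VM name is a placeholder):

# On the temporary KVM host: shut the guest down and export its definition
virsh shutdown myvm
virsh dumpxml myvm > /vol1/kvms/myvm.xml

# On the new KVM host: the qcow2 disk is already visible via the shared NFS mount
virsh define /vol1/kvms/myvm.xml
virsh start myvm
virsh autostart myvm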

 

ABSTRACT

At work, I serve on a team that has an on call rotation.  I am a heavy sleeper, and SMS dispatches to my cellphone are not enough to get me out of bed.  I needed a better solution, but what was that going to look like?  I have an iPhone X and it’s just not loud enough to wake me, and having the notifications be persistent until acknowledged was a real necessity.  I needed something that would almost slap me in the face to get me out of bed for a 3am dispatch.  No joke!  There just aren’t a lot of solutions out there to literally kick a heavy sleeper out of bed at 3am, and I certainly wanted to keep my manager happy by answering pages in a timely manner.  Time to build a solution!

Luckily, I have skills in the wireless dark arts as a Ham Radio operator, and I knew there was a way to use the free Pi-Star software to make a POCSAG pager transmitter.  I bought a programmable pager online that would work on the frequency of 439.9875 MHz and proceeded to build the transmitter and attach it to my network.

SOLUTION

Parts:

  • latest pi-star image on microsd card and configured on the network
  • Any USB wifi G or N dongle (the onboard wifi chip on the RPi Zero W sucks)
  • OTG to USB adaptor pigtail for wifi dongle
  • 15″ SMA whip antenna – the little stubby antennas really suck!
  • MMDVM hotspot board kit (this has the radio and RPi Zero)
  • Programmable POCSAG pager that will work on 439.9875 MHz (Digital Paging Company in North Hollywood, CA – be sure to ask for SKU: R924-430-6V33)
  • Gmail account from which to pull the messages of interest, used to create page messages and triggers
  • Linux box (or VM) with fetchmail configured to pull and parse mail from the gmail account
  • I had to write some code to make it all work
  • OPTIONAL: an MQTT broker to send actions to home automation such as Domoticz, Hass, etc (in my case, I have a bright light on my night stand that shines in my face that gets turned on to help wake me up)

Setting Up Pi-star

Download pi-star and write the image to the microsd card.  Configure pi-star by disabling the built-in wifi chip (in favor of the USB dongle) and set up pi-star to connect to your network.  (see pi-star site for more info).  Once connected to your network, you should give it a static IP address so your script will always be able to reach it and dispatch messages to it.  To do so, you can use the DHCP IP reservation function on your router or edit /etc/dhcpcd.conf to add the static IP by adding something like the following to the end of the file: (change to what’s appropriate for your environment)

interface wlan0
static ip_address=10.1.73.73/24
static routers=10.1.73.1
static domain_name_servers=10.0.1.10

NOTE: you need to make sure the / partition is writable before you edit this file (by default it is not) by running “rpi-rw”, and when you’re done, run “rpi-ro” to make the / partition read-only again.  Keeping the / partition read-only will help protect your sd card from wear.

Next, you must set up pi-star, turn on POCSAG mode, and be sure the frequency is set to 439.9875 MHz (this process is documented on the pi-star website/wiki, so I won’t go into that here).  Once you have pi-star set up in POCSAG mode and on the correct frequency, it’s time to program your pager (see the pager documentation) with the frequency and a CAP code so your pager will respond to page transmissions.  With your pager programmed correctly and pi-star properly configured, you are now ready to install the scripts and set the oncallConnection.sh script up in a crontab.  (This assumes you have already configured fetchmail to pull down mail messages from the gmail account you will use to receive on call dispatches.)  Refer to the fetchmail documentation on how to set up IMAP or POP access to gmail.
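For reference, a minimal ~/.fetchmailrc along those lines might look like the sketch below (the account, app password, and local user are placeholders; Gmail needs IMAP enabled and an app password):

# ~/.fetchmailrc (chmod 600) - account, password, and local user are placeholders
poll imap.gmail.com protocol IMAP
  user "oncall.example@gmail.com" password "app-password-here"
  is john here
  ssl
  keep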

I call fetchmail and oncallConnection.sh in crontab every minute:

* * * * * /usr/bin/fetchmail &> /dev/null
* * * * * /home/john/oncallConnection.sh &>/dev/null

NOTE: please see the comments within the scripts for info on what needs to be setup (variables) for the script to process your mail.

 

 

ABSTRACT

Soon after moving into our new home in Northbridge, MA, I began rolling out home automation technologies that are designed from scratch and uniquely custom.  Doing it this way, I have more control over the quality and versatility of my deployment and design.  The trade-off is a higher learning curve to achieve this supreme result – but I LOVE learning!  In our old home, much of my automation made use of VMs running various applications to perform tasks and provide actionable data.  In my new design, I want to achieve the following goals:

  • lowest power footprint possible – do more with less.
  • learn and utilize docker container technology to replace fatter VMs with slimmer deployment of applications on power efficient computing (such as the Intel Celeron J4115)
  • maintain the highest levels of system security by using a local PubSub broker so message payloads never leave the firewall
  • utilize my investment in Ubiquiti Unifi wireless APs (5 of them throughout the house) over a dedicated SSID for home automation
  • be easy to manage
  • wife accepted!  (WAF score)   Note: for those who don’t know, the WAF score is a value I assign to projects that achieve a high acceptance rating from my wife  (Wife Acceptance Factor)

THE SOLUTION

The first thing I did was research and select a hardware platform to run my containers.  The following applications will be containerized in this new design:

  • Domoticz – the home automation hub software I use – essentially the cornerstone of the entire system
  • Pi-Hole – DNS ad blocking and management application (also part of our DNS jail)
  • MySQL – Yeah, I need a database for some of these apps
  • PHP MyAdmin – a nice MySQL web based database workbench
  • WordPress – for hosting local information that guests will land on when they visit and join the guest wifi
  • Cacti – a robust SNMP NMS system for monitoring application/system performance over time
  • habridge – a java based Philips Hue emulator that allows us to connect Alexa (Amazon Echo) to selected controlled endpoints on the Domoticz dashboard
  • A minimal ubuntu container for launching various scripts

In the past, I had run these on VMs, and while that worked well, it isn’t as efficient as a containerized deployment.  The hardware I used for the Intel box was this.  It is a Celeron J4115 with 8GB of RAM and a decent sized M.2 SSD.  I use an NFS mount to store container configs and logs on the Synology NAS for safe keeping.

When I built the weather tube, I re-used a marquee scroller project I had done previously and wrote a couple of scripts to push data to it and display weather stats in the kitchen in real time.  I first use a Darksky API account to get the data for my town into my Domoticz container.  Domoticz polls my Darksky account every 5 minutes or so and displays the weather info through widgets on the Domoticz dashboard.  This is great and all, but if you’re not looking at the dash you won’t have the latest info.  Luckily, Domoticz has a rich JSON API and it’s easy to get data out of Domoticz for various uses.  Every “device” or sensor object in Domoticz has an IDX address, which makes it easy to poll data from Domoticz via the JSON API.  The advantage here is that I only have to poll Darksky once from Domoticz, but I can poll Domoticz as many times, and from as many local devices, as I want, which helps me stay within the free limits on my Darksky account.  The JSON feed for devices looks something like this:
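As a concrete illustration (the hostname, port, and IDX are placeholders for my setup), a single device can be polled back out of Domoticz with one HTTP call:

# Pull one device by its IDX (host, port, and IDX 123 are placeholders)
curl -s "http://domoticz.local:8080/json.htm?type=devices&rid=123"
# The human-readable reading lives in .result[0].Data, easy to grab with jq
curl -s "http://domoticz.local:8080/json.htm?type=devices&rid=123" | jq -r '.result[0].Data'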

So how do I leverage this bounty of information and display it on my LED scrolling marquee tube?  A little smear of Bash and Python scripting to the rescue!  I first created a Python script (fetchwx.py) that pulls the data elements that I want.  This script then builds a string to scroll to the tube at its IP address on the network.  The string is cached to a file, wx/conditions, and the contents of this file are then pushed to the tube every 15 minutes by another script (wxpush.sh) that is called by cron.  A snapshot of these scripts appears below for your reference:

 

This is a quick and dirty way to push this data to the tube, but I am currently working on an MQTT version that will subscribe to a topic and pick up the data feed via a published topic.  I’ll update this article once that’s ready for prime time, so check back often.

Here’s a video of the tube in action in the kitchen:

Abstract

When working with one or a fleet of LUKS-encrypted Linux machines, it may be necessary to do a remote reboot (as might be the case when you’re using the machine remotely).  But if the host has full disk encryption such as LUKS and is intended for remote users, rebooting it would normally require you to be physically present at the local console to enter the LUKS passphrase!  Can’t make it into the office to unlock that drive?  You’re SCREWED!  Well, not when you set up dropbear SSH and SSH public key authentication!  Dropbear to the rescue!  Read on!

The Solution

Manual Approach

So let’s say for this example, you’re running either Debian or Ubuntu Linux.  Your entire system drive is LUKS encrypted (likely required by your corporate policy).  You can install the needed package via apt as follows:

apt install dropbear-initramfs

This package allows your system to rebuild the initramfs with a dropbear SSH listener.  This will keep working even after you update your kernel, so not to worry.  Once you install this package, here’s how you configure it:

nano /etc/dropbear-initramfs/config

When you open the file for editing, you’ll want to add this line to the file, then save and close:

DROPBEAR_OPTIONS="-p 222 -I 120"

What this line does is configure dropbear to spawn and listen for connections on TCP port 222.  The -I 120 option sets dropbear to disconnect if the session is idle for more than 120 seconds.  Once you have changed the file in this way, there is one more thing to do.  You need to generate ssh keys and copy your public key to the authorized_keys file in the dropbear config folder.  If you already have ssh keys generated, you can simply copy your public key (DO NOT copy your private key!).  To generate ssh keys:

ssh-keygen

Your keys will be found in ~/.ssh/.  You can easily copy your public key by viewing the file ~/.ssh/id_rsa.pub (on a trusted machine – preferably an SSH jumphost or other trusted SSH host):

less ~/.ssh/id_rsa.pub

Then simply copy the key to another terminal window on the dropbear target host and edit the file:

sudo nano /etc/dropbear-initramfs/authorized_keys

Paste the key into the open file, then save and quit.  The final step is to rebuild your initramfs image so that it now includes dropbear:

sudo update-initramfs -u

This last step is what rebuilds your initramfs image and it will now include dropbear with new kernels.

 

Automated Deployment Using Ansible

But what if you have to do this on many machines?  This setup could take you a while to do manually.  Ansible to the rescue!  I’ll show you how to set up a simple play that will deploy these changes to one host.  You can use an inventory file with multiple hosts to run this against multiple targets:

The play:
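As a sketch of what such a play can look like (the group name, file paths, and public key location are illustrative; the tasks simply mirror the manual steps above):

---
- name: Deploy dropbear-initramfs for remote LUKS unlock
  hosts: dropbear_targets
  become: true
  tasks:
    - name: Install dropbear-initramfs
      apt:
        name: dropbear-initramfs
        state: present
        update_cache: yes

    - name: Listen on port 222 with a 120 second idle timeout
      lineinfile:
        path: /etc/dropbear-initramfs/config
        regexp: '^DROPBEAR_OPTIONS='
        line: 'DROPBEAR_OPTIONS="-p 222 -I 120"'
      notify: rebuild initramfs

    - name: Install the trusted public key
      copy:
        src: files/id_rsa.pub        # path to your public key (assumption)
        dest: /etc/dropbear-initramfs/authorized_keys
        mode: '0600'
      notify: rebuild initramfs

  handlers:
    - name: rebuild initramfs
      command: update-initramfs -u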

If you don’t mind supplying just the hostname and IP for each installation target, I also wrote a bash wrapper script to make deployment easy if you’re only doing about a dozen or so hosts:

First create a target template file:

Then create the wrapper script in bash:

What happens here is that when you run the wrapper script, it’ll ask you for the hostname and IP of the installation target.  It’ll then create the inventory file for you and run the play:
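A sketch of the idea, assuming a template named targets.template containing HOSTNAME and IPADDR placeholders and the play saved as dropbear.yml:

#!/usr/bin/env bash
# Hypothetical wrapper: prompt for a host, build an inventory from a template, run the play.
set -euo pipefail

read -rp "Hostname of installation target: " HOSTNAME
read -rp "IP address of installation target: " IPADDR

# targets.template might contain something like:
# [dropbear_targets]
# HOSTNAME ansible_host=IPADDR
sed -e "s/HOSTNAME/${HOSTNAME}/" -e "s/IPADDR/${IPADDR}/" targets.template > inventory.ini

ansible-playbook -i inventory.ini dropbear.yml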

In this example, I had already installed dropbear on this host because I didn’t have another one without it.  The output is what you can expect, except that on a fresh host you would see more changes reflected, because the addition of dropbear would have caused more changes on that host.  I only showed this to illustrate running the play against a single target using the wrapper.

 

Building The Remote Reboot Tool

OK, now that dropbear is installed on our remote encrypted host, we could manually SSH to the dropbear instance after rebooting the machine by running:

ssh root@<ip-of-remote-host> -p 222

Once connected, you can manually issue the following command to unlock your LUKS drive:

cryptroot-unlock

You then enter your LUKS passphrase just as you would at the local console, hit Enter, disconnect, and the host will finish booting.  At this point you can access the system remotely as you normally do.  Is there an easier way?  There sure is!  A little PHP and expect magic to the rescue:

First, we create a web form to take input from the user.  We need the IP address and LUKS passphrase of the target.  Our webform collects this from the user:

When the form is submitted, the inputs are passed to the submit script:

The job of the submit script is to call our expect script and pass the two variables to it.  The expect script does all the heavy lifting, making the SSH connection to dropbear and providing the information at the session prompts.  We also put the trusted keys in /scripts/key, which our unlock.sh expect script uses to authenticate to dropbear:
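Roughly, such an unlock.sh can look like the sketch below (the key path and port match the setup above, the prompt text is what cryptroot-unlock asks for, and the argument order is an assumption):

#!/usr/bin/expect -f
# unlock.sh sketch: ssh to the dropbear initramfs and feed it the LUKS passphrase.
# Arguments (assumption): target IP, then the passphrase passed in by the submit script.
set host [lindex $argv 0]
set pass [lindex $argv 1]
set timeout 60

spawn ssh -i /scripts/key -p 222 -o StrictHostKeyChecking=no root@$host cryptroot-unlock
expect "Please unlock disk"
send -- "$pass\r"
expect eof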

But BEFORE our expect script will work, we need to install the interpreter on the jump host where our PHP script will call it:

apt install expect

 

Once this script finishes, the remote dropbear host is unlocked.   Our end user only has to interface with the PHP web form:

I hope you find this solution useful.  Please feel free to comment or share your ideas.

PROJECT

This year, my family and I went into Hobby Lobby in search of Halloween decorations.  I found a nice ceramic jack-o-lantern that was painted green inside and was only $3.00.   I had an idea…

I thought, “I could put an ESP8266 in there with a double LED and put a flickering loop on the ESP, and that would look so cool sitting on my dining room table where the Halloween spread is laid out every year.”  So I put the ceramic jack in my cart amongst other things.

CODE: FunkyCandle

I started with a Wemos module that has an 18650 battery holder, which is very convenient for this project.  Since the area to be lit was fairly small, I used two green LEDs and soldered them to pins D6 and D7 and ground.  This provides two separate flicker channels for a more realistic flicker effect.  If you need to make this brighter, you could use small MOSFETs and a higher voltage with higher power LEDs of any color you wish.
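The heart of the effect is just two PWM channels fed random brightness values.  Here is a minimal illustrative sketch of that idea (pin names assume a Wemos D1 mini style board; the full FunkyCandle code linked above also handles the WiFi setup and the /on and /off web endpoints):

// Minimal two-channel candle flicker (illustration only; the actual FunkyCandle
// code adds WiFiManager setup and a small web server for /on and /off).
const int LED1 = D6;   // first flicker channel
const int LED2 = D7;   // second flicker channel

void setup() {
  pinMode(LED1, OUTPUT);
  pinMode(LED2, OUTPUT);
}

void loop() {
  // Each channel gets its own random brightness and dwell time, which is
  // what makes the flicker look organic.  (PWM range is 0-255 on recent
  // ESP8266 cores; older cores default to 0-1023.)
  analogWrite(LED1, random(40, 255));
  analogWrite(LED2, random(20, 200));
  delay(random(30, 120));
}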

Here’s a look at the module:

If you want higher power output, you can put a MOSFET on each PWM channel and drive higher power LEDs like this:

A video of the higher power unit.  The flicker effect is random and looks very realistic:

NOTE: when you flash the code to the module for the first time, you will not see the LEDs blink yet.  To make them blink, you first have to add the module to your wireless network (this is designed as a network client so you can remote control it).  To do that, take your mobile phone, browse the available wifi networks, and find “FunkyCandle”:

Once you see that network, go ahead and click on it to access the config manager (click Configure WiFi): The module will now scan available nearby networks so that you can join one (click the one you want and then enter the password):

Once you enter the password, click save and wait about 20 seconds for the network to be stored and then go look for the module’s IP address using your router’s tools (likely under DHCP leases etc), then enter that IP address into your browser like this to turn it on.  So for example, if the ip of the module on your network is 10.1.1.29, you would enter this URL into your browser to turn on your candle:   http://10.1.1.29/on   and to turn it off: http://10.1.1.29/off    (NOTE: this only turns the LEDs on or off, it does not turn the module on or off!)

Now, simply place the module into your favorite decoration and enjoy!   Here’s a video of mine:


ABSTRACT

We have an electronic Z-Wave lock on our back door.  This lock works great with our Domoticz home automation system over the Z-Wave interface.  The lock allows each user to have their own code, but an epic fail is that you have to take the lock apart to create/change/delete codes.  I hate the way codes are managed on this lock; the task is quite onerous.  There HAS to be a better way!  My needs require:

  • to allow family members to unlock the door
  • to allow access for hired house help (babysitters, contractors)
  • to easily create and distribute some kind of physical token (other than a key) that can be revoked/changed if lost
  • solution has to operate reliably with our existing home automation infrastructure

Of course ESP8266  and MFRC522 to the rescue!

SOLUTION

First, the shopping list (bring your own 5 volt power source):

You can build just about ANYTHING with the ESP8266 so long as you have a problem to solve; couple imagination to the problem and you can do magic!  That’s just what I did here.  I had an RFID chip reader antenna module (MFRC522) and proceeded to see how others have implemented an RFID badge reader.  I found one project, by Luis, that implemented a basic reader with a relay switch to activate a solenoid.  Since I don’t have a solenoid on my lock, I had to make modifications to the code from Luis’ project:

  • use MQTT to signal to Domoticz to unlock the door over my own MQTT broker like the rest of my sensors
  • log when invalid tags are scanned
  • enter a unique client and topic name to uniquely identify the reader on the system

Here’s my fork of Luis’ code, which adds improvements that include server-side MQTT support and an example for Domoticz home automation.  To connect the reader to the door lock, I had to get the Domoticz IDX address of the lock and build a statement into the code that would be sent when the database returned a match.  The statement looks like this, where 123 would be replaced by the IDX of the locking device:

echo shell_exec("mosquitto_pub -h $pubHost -u $pubUser -P $pubPass -t $topic -m '{\"command\":\"switchlight\",\"idx\":123,\"switchcmd\":\"Off\"}'");

The project workflow is as follows:

  • User scans their tag
  • 8266 calls out over WiFi to a local server to check a database of RFID tags using an HTTP POST
  • fobcheck.php script receives the POST value containing the tag ID and checks the database
  • If the database returns a match, fobcheck.php runs a shell command to the server-side MQTT client to instruct the door to unlock
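To sanity-check the server side of this flow without the reader, the ESP8266’s request can be mimicked with curl (the URL and POST field name here are placeholders for however fobcheck.php is deployed):

# Simulate a tag scan against the server-side checker (URL and field name are placeholders)
curl -d "tagid=04A1B2C3" http://server.local/fobcheck.php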

The completed reader is simply powered from a 5V power source outside and placed in a convenient location near the door.   Here’s what I used to build the reader in a weatherproof case:

The fobs can be attached to any keychain for easy use and are relatively cheap so if they are lost or stolen, can be easily deactivated.

To manage users, I use a database back end where each tag has a record which contains the assigned user, tag ID, and enablement value (1 for active, 0 for inactive):
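The exact schema isn’t critical, but a minimal MySQL table along those lines might look like this (table and column names are illustrative):

-- Hypothetical minimal schema for the tag database
CREATE TABLE fobs (
  id       INT AUTO_INCREMENT PRIMARY KEY,
  username VARCHAR(64) NOT NULL,            -- assigned user
  tag_id   VARCHAR(32) NOT NULL UNIQUE,     -- RFID fob/tag ID
  enabled  TINYINT(1)  NOT NULL DEFAULT 1   -- 1 = active, 0 = inactive
);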

To allow the ESP8266 to read the database, I wrote a PHP connector script.  It connects to the database when the ESP8266 sends an HTTP POST request:

 
