Why
So recently, I had a very old physical Docker host on my network die. This was an aging Intel NUC that served as a compact, lightweight host for several applications running in Docker. For years it had run like a Swiss clock, and updating containerized applications on this box was always quick and easy, thanks to how Docker works. Though the box died, I was lucky enough to have backup configurations for my containerized apps and was able to (relatively) easily rebuild and get my applications back online again. I use both the Docker CLI and a nice graphical app, “Portainer”, to manage my containers. Here’s what my Portainer dashboard looks like:
It’s easy on the eyes and makes managing the applications running in the Docker engine simple. Of course, I also use the CLI:
root@docker01:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
34f5062f1f21 lscr.io/linuxserver/unifi-network-application:latest "/init" 26 minutes ago Up 26 minutes unifi
eea3cd640f88 mongo:latest "docker-entrypoint.s…" 38 minutes ago Up 38 minutes mongodb
e442264808eb eclipse-mosquitto "/docker-entrypoint.…" 2 hours ago Up 2 hours mqtt
86f80bfac689 domoticz/domoticz "docker-entrypoint.s…" 2 hours ago Up 2 hours domoticz
bc2377f83ef4 portainer/portainer-ce:latest "/portainer" 3 hours ago Up 2 hours 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp, 8000/tcp, 0.0.0.0:9443->9443/tcp, :::9443->9443/tcp portainer
af7a7eef5698 pihole/pihole:latest "/s6-init" 7 weeks ago Up 2 hours (healthy) pihole
3842f0bd1170 pglombardo/pwpush-ephemeral:latest "containers/docker/p…" 4 months ago Up 2 hours passwd-pusher
4e77ab674304 louislam/uptime-kuma:1 "/usr/bin/dumb-init …" 5 months ago Up 2 hours (healthy) uptime-kuma-305
cd1ba8c2c9dd louislam/uptime-kuma:1 "/usr/bin/dumb-init …" 5 months ago Up 2 hours (healthy) uptime-kuma
804f9a0d8f08 wazuh/wazuh-dashboard:4.5.0 "/entrypoint.sh" 5 months ago Up 2 hours 443/tcp, 0.0.0.0:443->5601/tcp, :::443->5601/tcp single-node-wazuh.dashboard-1
430d3a355705 wazuh/wazuh-indexer:4.5.0 "/entrypoint.sh open…" 5 months ago Up 2 hours 0.0.0.0:9200->9200/tcp, :::9200->9200/tcp single-node-wazuh.indexer-1
bf9bcd686a54 wazuh/wazuh-manager:4.5.0 "/init" 5 months ago Up 2 hours 0.0.0.0:1514-1515->1514-1515/tcp, :::1514-1515->1514-1515/tcp, 0.0.0.0:514->514/udp, :::514->514/udp, 0.0.0.0:55000->55000/tcp, :::55000->55000/tcp, 1516/tcp single-node-wazuh.manager-1
df30678d8675 netboxcommunity/netbox:v3.4-2.5.2 "/usr/bin/tini -- /o…" 9 months ago Up 2 hours (healthy) 0.0.0.0:8000->8080/tcp, :::8000->8080/tcp netbox-docker_netbox_1
8327ecb8995b postgres:15-alpine "docker-entrypoint.s…" 9 months ago Up 2 hours 5432/tcp netbox-docker_postgres_1
0951af72342d redis:7-alpine "docker-entrypoint.s…" 9 months ago Up 2 hours 6379/tcp netbox-docker_redis_1
dc00e3d1a3d2 redis:7-alpine "docker-entrypoint.s…" 9 months ago Up 2 hours 6379/tcp netbox-docker_redis-cache_1
9f9c09c2ab8d nodered/node-red:latest "./entrypoint.sh" 9 months ago Up 2 hours (healthy) nodered
root@docker01:~#
My main Docker host is a virtual machine in my KVM infrastructure, and the VM has interfaces on multiple VLANs so that containers can sit directly on different VLANs instead of all sharing the Docker host’s IP address. This lets each container have its own dedicated IP in different places on the network, just as if each were a physical machine. Here’s the netplan config showing how I did this:
network:
  ethernets:
    enp1s0:
      addresses:
        - 10.0.1.60/24
      gateway4: 10.0.1.1
      nameservers:
        addresses:
          - 10.0.1.15
          - 10.0.1.16
    enp6s0:
      addresses:
        - 10.2.50.2/24
    enp7s0:
      addresses:
        - 10.1.73.10/24
    enp8s0:
      addresses:
        - 10.1.74.10/24
    enp9s0:
      addresses:
        - 10.9.9.2/24
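After editing the file, the configuration can be tested and then applied with netplan's own tooling (both are standard netplan commands on Ubuntu):

```shell
# Test the new configuration first; netplan rolls back automatically
# if connectivity is lost and you don't confirm within the timeout.
sudo netplan try

# Once satisfied, apply it.
sudo netplan apply
```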
As needs change, I can always add new interfaces to this config. The Docker engine on the host has the following macvlan “docker networks” configured:
root@docker01:~# docker network ls
NETWORK ID     NAME                    DRIVER    SCOPE
5abc95ac64ee   VLAN10                  macvlan   local
f9f4a341fbc5   VLAN11                  macvlan   local
b4703b93610e   VLAN50-2                macvlan   local
3adf8935c754   bridge                  bridge    local
e76d1dfe3a50   host                    host      local
ed908d02419e   mariadb_default         bridge    local
f31f3bd7d17b   netbox-docker_default   bridge    local
ccd4268c93dc   none                    null      local
4227a380e0c9   passbolt_default        bridge    local
6e7ed7d641db   single-node_default     bridge    local
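For reference, a macvlan docker network like VLAN50-2 is created by binding it to one of the host interfaces from the netplan config above. A sketch (the gateway value here is an assumption about my VLAN layout; adjust subnet, gateway, and parent interface to your environment):

```shell
# Create a macvlan network bound to enp6s0 (the 10.2.50.0/24 vlan).
docker network create -d macvlan \
  --subnet=10.2.50.0/24 \
  --gateway=10.2.50.1 \
  -o parent=enp6s0 \
  VLAN50-2
```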
These allow me to place containers specifically where I want them on the network. It’s as simple as passing additional arguments when building a container:
docker run -d --network=VLAN50-2 \
--ip=10.2.50.15 \
--name=mongodb \
-v /vol1/docker/mongodb:/etc/mongo \
mongo:latest
Of course, Portainer will also let me do all of this from the dashboard, but I appreciate the simplicity of the CLI for building new containerized apps. The CLI also lets me easily script the buildout of dozens or hundreds of containers VERY quickly, which is where it has a lot of value over the Portainer dashboard.
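As a sketch of that scripting value, a short shell loop can generate one docker run command per app from a simple list (the names, IPs, and images below are hypothetical; the loop only prints the commands, so pipe its output to sh to actually execute them):

```shell
# Hypothetical app list: one "name ip image" entry per line.
apps="uptime-kuma 10.2.50.20 louislam/uptime-kuma:1
nodered 10.2.50.21 nodered/node-red:latest"

# Print the docker run command for each app (pipe to sh to execute for real).
echo "$apps" | while read -r name ip image; do
  echo "docker run -d --network=VLAN50-2 --ip=$ip --name=$name $image"
done
```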
(Re)-building My Wifi Controller In Docker
As of 01-01-2024, the linuxserver.io group deprecated their older unifi docker image. They state clearly that they will no longer maintain it. They have, however, published a new “Unifi Network Application” image that will be maintained going forward. The caveat with this change is that they no longer bundle MongoDB with the controller image; I suspect they broke the two apart for performance reasons. So now you have to set up both a dockerized MongoDB server and the Unifi Network Application. Not being familiar with MongoDB, it took me a few tries to get both working. In this article, I detail the steps (with sensitive items removed) in the hope that this how-to will save other people time and grief.
Setting up MongoDB in Docker:
This guide assumes you have set up your Docker engine host similarly to mine. If you do not have multiple VLANs or a need for each container to have its own IP on the network, you will need to remove the “network” and “ip” lines and instead add port mappings to expose the container’s service ports on your Docker host. If your Docker engine host is set up like mine, you can use the provided examples as shown.
docker run -d --network=<docker.network.name> \
--ip=<ip.address> \
--name=mongodb \
-v /some/storage/path:/etc/mongo \
mongo:latest
In the above example, my storage path points to an NFS mount on my NAS. I store configs outside the container so that they won’t get overwritten when I periodically update the container, and so that if the Docker host or VM fails, I can quickly spin the container back up on another Docker engine.
It only takes a few seconds for the MongoDB container to come up. Once the container is running, it needs to be prepared for the Unifi Network Application. Follow these steps to do so:
- install the latest MongoDB packages on your workstation – you will use these to set up the unifi DB and user so that the unifi container can make use of the DB.
- create the new DB and user
- begin the install of the unifi container
To install the latest MongoDB packages (this assumes your workstation is Ubuntu 20.04; if not, consult THIS PAGE):
sudo apt-get install gnupg curl
curl -fsSL https://pgp.mongodb.com/server-7.0.asc | \
sudo gpg -o /usr/share/keyrings/mongodb-server-7.0.gpg \
--dearmor
echo "deb [ arch=amd64,arm64 signed-by=/usr/share/keyrings/mongodb-server-7.0.gpg ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/7.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-7.0.list
sudo apt-get update
sudo apt-get install -y mongodb-org
After the packages are installed, you can set up your unifi DB and user on the MongoDB server you just created:
mongosh <ip.of.mongo.server>:27017/admin
Once connected, issue these commands in the mongosh shell:
admin> use unifi
switched to db unifi
unifi> db.createUser({
... user: "unifi",
... pwd: "your.p@ssw0rd",
... roles: [{ role: "readWrite", db: "unifi" }]
... });
{ ok: 1 }
unifi> exit
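Before exiting, you can optionally confirm the user was created; db.getUsers() is a standard shell helper (output trimmed here):

```
unifi> db.getUsers()
```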
At this point, the unifi DB and user are ready for use by your unifi container. Now build the unifi container as follows (commands to issue on your Docker host):
docker run -d --network=<docker.network.name> \
--ip=<ip.address> \
--name=unifi \
-e PUID=1000 \
-e PGID=1000 \
-e TZ=America/New_York \
-e MONGO_USER=unifi \
-e MONGO_PASS=your.p@ssw0rd \
-e MONGO_HOST=<mongodb.server.ip> \
-e MONGO_PORT=27017 \
-e MONGO_DBNAME=unifi \
-e MEM_LIMIT=4096 \
-e MEM_STARTUP=1024 \
-v /some/storage/path:/config \
--restart unless-stopped \
lscr.io/linuxserver/unifi-network-application:latest
Change the values to suit your environment. Though the container builds in a few seconds, you really want to wait about five minutes before trying to access the web UI of the Unifi Network Application. The address is dictated by what you set on the container build line. To access the web UI and set up your Unifi controller, navigate to:
https://ip.address:8443/
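If you’d rather not guess when the controller is ready, a simple poll works. This is just a sketch using curl (replace ip.address with your container’s IP; -k accepts the controller’s self-signed certificate):

```shell
# Poll until the unifi web UI starts answering, then report success.
until curl -sk -o /dev/null https://ip.address:8443/; do
  echo "waiting for the unifi web UI..."
  sleep 15
done
echo "controller is up"
```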
Upon reaching that page in your browser, you’ll have the option to set up a new network or restore from a backup file of an existing network. Once set up, you’ll see the login page:
After logging in, you’ll be presented with your dashboard: