This will be needlessly long‐winded, but I’m going to write about a very specific home network problem I ventured to solve a while back.
The problem
I have a Raspberry Pi in my house that functions as a home automation server. It’s on the network as `home.local` and has an IP address of (let’s say) `192.168.1.99`. No Home Assistant — not yet, at least — but there are a handful of things I use to coordinate my home automation. Four of them have a web presence:
- Node-RED for if-this-then-that stuff and other simple logic things. The frontend runs on a configurable port number — `1880` by default. Can be accessed at `http://home.local:1880`.
- Homebridge for acting as a HomeKit gateway to a bunch of non-HomeKit stuff — mainly smart outlets and light bulbs that run Tasmota. Has a great web frontend that runs on a configurable port number, `8080` by default. Can be accessed at `http://home.local:8080`.
- Zigbee2MQTT for a few Zigbee devices I have in the house. (I mainly use WiFi for my smart devices, but Zigbee’s a better fit for things that aren’t near an outlet — like battery-powered buttons, or sensors that report data very occasionally.) Has a very useful web frontend that runs on a configurable port number, `8080` by default. Can be accessed at `http://home.local:8080`.
- A static web site that acts as a dashboard for home automation stuff — graphing temperatures of various rooms over the last eight hours, states of various light bulbs, and so on. Really silly, and not something I consult very often, but I like keeping it around. Doesn’t have its own server, so I need to set something up.
I can configure these port numbers so that they don’t clash, but is that the limit of my imagination? I don’t care about the port that something runs on, and I don’t want to have to remember it. These things have names; I want to give them URLs that leverage those names.
I want these services to have “subdomains” of `home.local`. I want `node-red.home.local` to take me to the Node-RED frontend. Likewise with `homebridge.home.local`, `zigbee2mqtt.home.local`, and `dashboard.home.local`.
“Subdomain” is in quotes because it’s not the right word here, but for now just think of them as four separate sites with four separate domain names.
How do I pull this off? Split the problem into two parts:
- First, we need `dashboard.home.local` and the others to resolve to `192.168.1.99` just like `home.local` does.
- Then we’ve got to make the server respond with a different site for each thing based on which host name is in the URL. A web server has a name for this, and has since HTTP 1.1: these are virtual hosts.
The DNS side of this (the hard part)
This part will apply to people like me who prefer to use Multicast DNS instead of running their own DNS server to handle name resolution.
Why mDNS?
Let’s get this out of the way. Here are my arguments in favor of mDNS:
- The things I use support it out of the box. Raspberry Pis have Avahi preinstalled these days, and Raspberry Pi Imager makes it possible to set your Pi’s hostname and enable SSH during the installation process (via the hidden options panel). If you set the hostname to `foo` during installation, on first boot you’ll be able to SSH into it at `foo.local` without having to plug it into a monitor first or figure out its IP address.
- mDNS is also within the reach of embedded devices. My ESP8266s can broadcast a hostname with the built-in `ESP8266mDNS` library, and can resolve mDNS hostnames with mDNSResolver.
- It looks like a domain name. My brain needs that. It will never feel normal to me to type in `http://thing/foo`, even if my network can resolve `thing` to an IP address. What the hell is that? Of course, I could add a faux-TLD like `.dev`, but it doesn’t exactly feel safe to invent a TLD in this crazy future where new TLDs are introduced all the time. The `.local` TLD(-ish) thing is part of RFC 6762; it’s “safe” in the sense that it would be a stupid idea for ICANN to introduce a `.local` TLD — stupid even by ICANN’s standards.
Why not ordinary DNS?
It’s easy enough to add a DNS server to a spare Pi. Somewhat fashionable, too, judging from the popularity of Pi‐Hole. And your router can be configured to use your internal DNS server so that you don’t have to change your DNS settings on all of your network devices.
I’ll give you the short answer: mDNS is a parallel and alternative way to resolve names, but a local DNS server is a wrapper around whatever DNS server you prefer for the internet at large. I can easily set up a Pi as my DNS server and tell it to resolve what it can, delegating everything else to `8.8.8.8` or whatever. But I’ve now made the Pi the most likely point of failure, and when it does fail, all my DNS lookups will fail, not just the local ones.
When I set up a Pi‐Hole, I had two DNS outages the first day. Bad luck? Probably. I could’ve tracked down the root cause, but by definition you will only notice a DNS failure when you’re in the middle of something else.
Anyway, this is not to say that an mDNS approach is better — only that name servers are essential infrastructure for network devices, and running my own name server on a $40 credit‐card‐sized computer was not a hassle‐free experience. Maybe if I owned the house I live in, I could put ethernet into the walls and replace my mesh Wi‐Fi network with something that has fewer possible points of failure, but you go to war with the network you have.
How do subdomains work in mDNS?
They don’t.
Well, they work just fine as domain names, but they don’t have the semantics of subdomains that you’d come to expect from the DNS world. To quote the mDNS RFC:
Multicast DNS domains are not delegated from their parent domain via use of NS (Name Server) records, and there is also no concept of delegation of subdomains within a Multicast DNS domain. Just because a particular host on the network may answer queries for a particular record type with the name “example.local.” does not imply anything about whether that host will answer for the name “child.example.local.”, or indeed for other record types with the name “example.local.”.
Outside-world DNS describes a hierarchy where `foo.example.com` is a subdomain of `example.com`, and `example.com` is (technically) a subdomain of `com`. Web browsers build features upon this implied hierarchy, such as allowing cookies set on `example.com` to be sent on requests for `foo.example.com`.
In mDNS, `foo.local` is not a subdomain of `local`. That `.local` is just a tag on the end meant to act as a namespace. So `bar.foo.local` can exist, but it’s just `bar.foo` with `.local` added on, and is not understood by mDNS to be a subdomain of `foo.local`. The two labels can coexist within the same network, but they could point to different machines or the same machine.
The RFC says that any UTF-8 string (more or less) is a valid name in mDNS, and that includes the period. So I can publish `homebridge.home.local` and have it resolve to the same machine as `home.local`.
So why invent a subdomain concept where one doesn’t exist?
It feels right, I guess?
The only advantage of the “subdomain” here is in my brain and its desire to treat these as child-sites of my server at `home.local`. In a minute I’ll give you a compelling reason to use a simpler system instead. But this is how I did it.
Are there any downsides to the subdomain approach?
There is a tiny downside, yes, almost too small to mention: Windows doesn’t support it at all.
Before Windows 10, to get mDNS support in Windows, you could install Bonjour Print Services for Windows. Since Windows 10, there’s built‐in mDNS support from the OS, except it’s bad.
Trying `ping` from the command line illustrates the problem:

```
C:\Users\Andrew>ping bar.local
Ping request could not find host bar.local. Please check the name and try again.

C:\Users\Andrew>ping foo.bar.local
Ping request could not find host foo.bar.local. Please check the name and try again.
```

Here I’m pinging two names on my network, both of them nonexistent. The first one returns an error message after about two seconds; it tried to resolve `bar.local` and failed. The second one returns an error message immediately, as though it didn’t even try. Windows does not support mDNS resolution of names with more than one period.
This is flat-out wrong behavior — `foo.bar.local` is a valid name in mDNS — but there you have it. I suspect it’s because `.local` has been used by Microsoft products in the past in some non-mDNS contexts; maybe there’s a heuristic somewhere in Windows that thinks `foo.bar.local` is one of those usages and can’t be convinced otherwise.
When I discovered this, I attempted to disable the built‐in mDNS support and reinstall Bonjour Print Services, but I failed. Maybe there’s a brilliant way to make it work right, but I haven’t found it.
Since I have exactly one Windows machine in my house, I’m satisfied with a low-tech workaround: the venerable `hosts` file. On my one Windows machine, all subdomain-style mDNS aliases go in there:

```
192.168.1.99 dashboard.home.local
192.168.1.99 homebridge.home.local
```
…and so on. Keeping this file updated when I add aliases is barely a chore; it’s not like there’s a new alias every week.
So this isn’t a deal-breaker for me. But if it’s one for you, there’s an easy workaround: don’t use subdomains. Instead of `dashboard.home.local`, do `dashboard-home.local`, or just `dashboard.local` if you prefer simplicity. As long as it has exactly one `.` in it, Windows handles it fine.
How do I broadcast extra names?
OK, all that’s out of the way. Back to the problem: we want `dashboard.home.local` and the rest to resolve to the same IP address as `home.local`. How hard could that be?
Red herring: /etc/avahi/hosts
On Linux, Avahi is in charge of broadcasting the Pi’s hostname as `[hostname].local`. Could it broadcast the other names we want? Let’s dig into its config directory… aha! There’s a file called `/etc/avahi/hosts`!
Quoth its man page:
The file format is similar to the one of /etc/hosts: on each line an IP address and the corresponding host name. The host names should be in FQDN form, i.e. with appended .local suffix.
Couldn’t be simpler. So we just need to put them into this file, right?
I’ll save you the trouble of trying. To illustrate the problem more directly, let’s use the `avahi-publish` utility to try to broadcast an mDNS name:

```
sudo apt install avahi-utils
avahi-publish -a foo.home.local 192.168.1.99
```
We get the response:
```
Failed to add address: Local name collision
```
This happens because, by default, Avahi expects each IP address to have exactly one name on the network. Just as it says “`home.local` resolves to `192.168.1.99`,” it wants to be able to say “`192.168.1.99` is called `home.local`” without ambiguity. Since this IP address already has a name, it won’t let us add a second — unless we add the `--no-reverse` (or `-R`) parameter:
```
avahi-publish -a foo.home.local -R 192.168.1.99
```
Now we get what we wanted:
```
Established under name 'foo.home.local'
```
So if I can do it with `avahi-publish`, I can do it in `/etc/avahi/hosts`, right? Well, no. There’s no way to specify the no-reverse option within the hosts file — the downside to imitating the simplicity of `/etc/hosts`.
Maybe it’ll behave differently in the future, but for now we’ll have to publish these aliases a different way. If we can’t just put it into the config file, we’ll have to write a startup script and make a systemd service out of it.
Script Option 1: avahi‐publish
Now that we know about `avahi-publish`, the obvious approach would be to write a script that looks like this:

```bash
#!/bin/bash
/usr/bin/avahi-publish -a homebridge.home.local -R 192.168.1.99 &
/usr/bin/avahi-publish -a node-red.home.local -R 192.168.1.99 &
/usr/bin/avahi-publish -a zigbee2mqtt.home.local -R 192.168.1.99 &
```
(Wait, am I writing a Bash script in public? I’ve had nightmares like this.)
Did you notice that `avahi-publish` is still running from earlier, and will run indefinitely until we terminate the process with ^C or the like? That’s why those ampersands are needed in the script — they put each `avahi-publish` process into the background so the script isn’t stuck waiting on the first one.
This will work fine as a one‐shot script, but not as a daemon. We want this script to run indefinitely and to clean up those child processes when the daemon is stopped.
Let’s make it wait indefinitely:
```bash
#!/bin/bash
/usr/bin/avahi-publish -a homebridge.home.local -R 192.168.1.99 &
/usr/bin/avahi-publish -a node-red.home.local -R 192.168.1.99 &
/usr/bin/avahi-publish -a zigbee2mqtt.home.local -R 192.168.1.99 &
while true; do sleep 10000; done
```

(This is ugly, but portable. `sleep infinity` works just fine on Linux, but not on macOS.)
We’re getting somewhere, but I think we need to do something else. We’re creating one new child process for each alias we’re publishing, and I’m kinda sure that those processes will stick around if this script terminates. Let’s trap SIGTERM and make sure that those child processes also get terminated:
```bash
#!/bin/bash

function _term {
  pkill -P $$
}
trap _term SIGTERM

/usr/bin/avahi-publish -a homebridge.home.local -R 192.168.1.99 &
/usr/bin/avahi-publish -a node-red.home.local -R 192.168.1.99 &
/usr/bin/avahi-publish -a zigbee2mqtt.home.local -R 192.168.1.99 &

while true; do sleep 10000; done
```
Playing around with this, I’m pretty sure it’ll do the right thing, and will daemonize nicely when we run it later on. If you were satisfied with this, you could save it to someplace like `/home/pi/scripts/publish-mdns-aliases.sh` and skip down to the systemd section, but I think you should keep reading.
I fumbled my way through the writing of that Bash script, but I would prefer not to have to manage these child processes at all. I’d like a setup where I create only one process no matter how many aliases I’m publishing.
Script Option 2: Python’s mdns‐publisher
Hey, there’s a Python package that does what we want! Let’s install it.
```
pip install mdns-publisher
which mdns-publish-cname
```

On my machine (Raspbian 11, or bullseye), that works just fine and outputs `/home/pi/.local/bin/mdns-publish-cname`. If `which` doesn’t find it, you might need to add `PATH="$HOME/.local/bin:$PATH"` to `.profile` or `.bashrc`. Or, if you’d rather install it globally, run `sudo pip install mdns-publisher` instead and `which` will return `/usr/local/bin/mdns-publish-cname`.
The `mdns-publish-cname` binary is great because it accepts any number of aliases as arguments. Run it yourself and see:

```
mdns-publish-cname dashboard.home.local homebridge.home.local node-red.home.local zigbee2mqtt.home.local
```

All four of those hostnames should now respond to `ping`.
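If you want a quicker sanity check than opening four browser tabs, you can poke at the names from another machine on the network. This is just a sketch: `avahi-resolve` assumes a Linux box with `avahi-utils` installed, and the expected output is illustrative.

```
# From another Linux machine with avahi-utils installed:
avahi-resolve --name dashboard.home.local
# Expected output, roughly:  dashboard.home.local  192.168.1.99

# Or just ping one of the aliases from any machine on the network:
ping -c 1 node-red.home.local
```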
Perfect! It does just what we want in a single process. And it assumes we want to publish these aliases for ourselves, rather than for another machine, so we don’t even need to hard‐code an IP address.
To me, this is clearly superior to Option 1. Sure, I had to install a `pip` package first, but I had to install `avahi-utils` via APT before I could use `avahi-publish`, so I think that’s a wash.
Let’s daemonize it and get the hell out of here
Not content with solving our problem elegantly, the mdns-publisher repo also includes a sample systemd service file that we’ll use as a starting point for our own.
There are a few lines worth reflecting on:
```
After=network.target avahi-daemon.service
```
This is important because we can’t publish aliases if Avahi hasn’t started yet.
If you went with Script Option 1, there’s another thing to take care of: the page we cribbed this from says it needs to restart if Avahi itself restarts. To make that happen, you’d need one extra line in the `[Unit]` section:
```
PartOf=avahi-daemon.service
```
If you’re using Script Option 2, this doesn’t seem to be necessary. If I restart Avahi while I monitor the mdns-publisher output, I can see that `mdns-publish-cname` somehow knows to republish the aliases.
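If you want to reproduce that check yourself, one informal way to do it is with two terminals (the alias names here are just examples):

```
# Terminal 1: run the publisher in the foreground and leave its output visible
mdns-publish-cname dashboard.home.local node-red.home.local

# Terminal 2: restart Avahi and watch terminal 1 re-announce the aliases
sudo systemctl restart avahi-daemon
```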
```
Restart=no
```
I’d usually go with `on-failure` in my service files, but maybe `no` is OK here. I think it depends on whether the daemon would exit non-successfully for intermittent reasons, or for a reason that’s likely to persist. If it’s the latter, you’ll just end up in a restart loop that you’d have to manage with other systemd options like `StartLimitBurst`. I’ll keep this as-is.
```
ExecStart=/usr/bin/mdns-publish-cname --ttl 20 vhost1.local vhost2.local
```
You’ll need to point this to `/home/pi/.local/bin/mdns-publish-cname`, or `/usr/local/bin/mdns-publish-cname` if you installed the package with `sudo`. If you do the former, you’ll want to set `User=pi` instead of `User=nobody`, because the `nobody` user won’t be able to run a binary located inside your `pi` user’s home folder.
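For example, with the per-user pip install, those lines of the sample unit might end up looking something like this (just a sketch — and we’ll swap out the `ExecStart` entirely in a moment anyway):

```
[Service]
User=pi
ExecStart=/home/pi/.local/bin/mdns-publish-cname --ttl 20 vhost1.local vhost2.local
```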
Of course, you’ll want to change `vhost1.local vhost2.local` to the actual aliases you want to publish. But instead of doing that, let’s go one step further:
Aliases in a config file
It feels more intuitive to me to put my aliases inside a config file. After all, a Pi’s primary hostname isn’t specified in the text of some systemd unit config file; it lives at `/etc/hostname`.
So I created `/home/pi/.mdns-aliases` for keeping a list of aliases, one per line:

```
dashboard.home.local
node-red.home.local
homebridge.home.local
zigbee2mqtt.home.local
```
I then wrote a Python script to read from that list:
```python
#!/usr/bin/env python
import os

args = ['mdns-publish-cname']

with open('/home/pi/.mdns-aliases', 'r') as f:
    for line in f.readlines():
        line = line.strip()
        if line:
            args.append(line)

os.execv('/home/pi/.local/bin/mdns-publish-cname', args)
```
We read each line, strip whitespace, toss out blank lines (like if I inadvertently add a newline to the end), and pass along the rest as arguments to `mdns-publish-cname`. The `os.execv` call works like `exec` in the shell: it replaces the current process with the one specified. That’s perfect for our purposes.

Save that script somewhere (I saved mine at `/home/pi/scripts/publish-mdns-aliases.py`), make it executable with `chmod +x`, and run it as a test. Make sure the output is what you expect. Make sure that you can ping each of your aliases while this script is running, but then be unable to ping them once you quit the script with ^C.
Now we can simplify the service file:
```
[Unit]
Description=Avahi/mDNS CNAME publisher
After=network.target avahi-daemon.service

[Service]
User=pi
Type=simple
WorkingDirectory=/home/pi
ExecStart=/home/pi/scripts/publish-mdns-aliases.py
Restart=no
PrivateTmp=true
PrivateDevices=true

[Install]
WantedBy=multi-user.target
```
Save it in your home directory as `mdns-publisher.service`, then run:

```
chmod 755 mdns-publisher.service
sudo chown root:root mdns-publisher.service
sudo mv mdns-publisher.service /etc/systemd/system
sudo systemctl enable mdns-publisher
```
Now we’ll start it up and monitor its output:
```
sudo systemctl start mdns-publisher.service && sudo journalctl -u mdns-publisher -f
```
The output should look quite similar to what you saw earlier when you ran the script directly.
We’re more than half done
I promise that was the hard part. Now that we’ve got a persistent way to give our Pi more than one mDNS name, we can move on to the other half of this task.
The web server side of this (the easy part)
If I type `node-red.home.local` into my browser’s address bar, what do I want to happen?
- It will resolve `node-red.home.local` to `192.168.1.99`. (We just proved it.)
- That request should hit port 80 of my home server.
- The response from the server should be identical to what it would be if I’d typed in `home.local:1880`.
This is more or less a reverse proxy, so let’s attempt to handle this with nginx.
```
sudo apt install nginx
cd /etc/nginx/sites-available
```
Proxy a web server running on a specific port
Nginx allows modular per-site configuration in a similar way to Apache: inside its config directory there are directories called `sites-available` and `sites-enabled`. You can define a config file in `sites-available`, then symlink it into `sites-enabled` to enable it.
```
sudo nano node-red.conf
```
Now we’ll look at the nginx documentation for a few minutes and throw something together. No, I’m kidding; that’s a very funny joke. In truth, we’ll google “nginx proxy config file example” or something like that, click on results until we find something that’s close to what we want, then tweak it until it’s exactly what we want.
```nginx
server {
    listen 80;        # for IPv4
    listen [::]:80;   # for IPv6

    server_name node-red.home.local;
    access_log /var/log/nginx/node-red.access.log;

    location / {
        proxy_pass http://home.local:1880;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_cache_bypass 1;
        proxy_no_cache 1;
        port_in_redirect on;
    }
}
```
Let’s call out some stuff:
- The `server_name` directive means that this `server` block should only apply when the `Host` request header matches `node-red.home.local`.
- `location /` will match any request for this host, since it’s the only `location` directive in the file. Every path starts with `/`, and nginx will use whichever block has the longest matching prefix.
- `proxy_pass` will transparently serve the contents of its URL without a redirect.
- We use `proxy_set_header` to make sure that the software serving up the proxied site sees a `Host` header of `node-red.home.local`; if we omitted this, it’d see a `Host` header of `home.local:1880`, taken from the `proxy_pass` line. This one is tricky; it may not always be necessary, and in some cases might even break stuff. Remember that you’re forwarding to a different web server in this example. Consider what that web server would expect to see, if anything, and whether it would behave incorrectly if `Host` were different from its expectation. Node-RED seems to serve everything up with absolute or relative URLs, and therefore doesn’t need to care about its own hostname. I’m leaving this line in because it’s easier to keep it around (and comment it out if you don’t need it) than to have to look it up in the cases where you do need it.
- I use Node-RED to make some data available via WebSockets, so we also have to make sure nginx can handle those requests. First, we make a point to pass along any `Upgrade` and `Connection` headers; by default they wouldn’t survive to the next “hop” in the chain, but WebSockets use those headers to switch protocols. We skip the proxy cache because Google told me to.
Anything I didn’t explain just now is something I don’t fully understand but am too nervous to remove just in case it’s important.
Anyway, let’s save this file and return to the shell.
Now we create our symbolic link:
```
cd ../sites-enabled
ln -s ../sites-available/node-red.conf .
```
Run an `ls -l` for sanity’s sake and make sure you see your symlinked `node-red.conf`. If you’re really paranoid, run `cat node-red.conf` and make sure you see the contents of the file.
Now we’ll restart nginx and monitor the startup log:
```
sudo systemctl restart nginx && sudo journalctl -u nginx -f
```
You want to watch the logs after you enable a site because, if you’re like me, you will have screwed up the config somehow, even if you directly copied and pasted it from someplace and barely changed it. If nginx can’t parse the config file, it’ll explain what you did wrong.
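If you’d rather catch mistakes before (or after) bouncing the service, nginx can check its own configuration, and you can hit the new virtual host from the Pi itself without involving mDNS. This is just a convenience check; the `curl` line assumes you run it on the Pi and that the upstream responds to a plain HEAD request:

```
# Validate the full nginx configuration, including everything in sites-enabled
sudo nginx -t

# Optionally, confirm the virtual host answers; this should print an HTTP status line
curl -sI -H 'Host: node-red.home.local' http://127.0.0.1/ | head -n 1
```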
Did it work? Type `node-red.home.local` into your address bar and find out.
Beautiful. Buoyed by this success, we’ll create similar files in `sites-available` called `homebridge.conf` and `zigbee2mqtt.conf`, and point them to their corresponding ports.
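Each of those is mostly a copy of the file above with the name and port swapped. Here’s a sketch of what a `homebridge.conf` could look like, assuming Homebridge’s UI is on its default port of `8080` (remember that Zigbee2MQTT defaults to the same number, so at least one of them has to move); I’ve kept the upgrade headers on the assumption that the Homebridge UI uses WebSockets for things like its live log view:

```nginx
server {
    listen 80;
    listen [::]:80;

    server_name homebridge.home.local;
    access_log /var/log/nginx/homebridge.access.log;

    location / {
        # Adjust the port to wherever your Homebridge UI actually lives
        proxy_pass http://home.local:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;

        # Kept on the assumption that the frontend uses WebSockets
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```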
Serve up some flat files
The last one is a hell of a lot easier than this: my dashboard site is just a JAMstack (ugh) app that requires nothing more complicated from the server than the ability to serve static files. So here’s what `dashboard.conf` looks like:

```nginx
server {
    listen 80;
    listen [::]:80;

    server_name dashboard.home.local;

    root /home/pi/dashboard;
    index index.html;
}
```
Of course, make sure that `root` points to the actual place where your flat files can be found. That should take care of it. If you use something like `create-react-app` to build and deploy these flat files, it’ll just work as-is.
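If it helps, “deploy” in my case is nothing fancier than copying the build output into that `root` directory. A hypothetical version of that step, assuming a create-react-app-style project whose output lands in `build/` and the paths used above:

```
# On the machine where you build the dashboard
npm run build
rsync -av --delete build/ pi@home.local:/home/pi/dashboard/
```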
I eschew “bare metal,” Andrew
I won’t do a deep-dive here, but Traefik Proxy is a good option if you prefer to containerize things. If you were already running these various services in different Docker containers, you could map them to the URLs you want by adding labels like ``traefik.http.routers.router0.rule=Host(`node-red.home.local`)`` to those containers. Traefik will handle the rest.
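To make that a little more concrete, here’s a rough sketch using `docker run`. The router and service names are arbitrary, `web` stands in for whatever you’ve named your port-80 entrypoint, and it assumes a Traefik v2 container is already running with the Docker provider enabled:

```
docker run -d --name node-red \
  --label 'traefik.enable=true' \
  --label 'traefik.http.routers.node-red.rule=Host(`node-red.home.local`)' \
  --label 'traefik.http.routers.node-red.entrypoints=web' \
  --label 'traefik.http.services.node-red.loadbalancer.server.port=1880' \
  nodered/node-red
```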
I use Traefik on my other Pi server, the one that handles tasks other than home automation. After some experience with both approaches, I’ve decided that containerization involves an equal number of pain points, but in new and interesting places.
Coming in part two
No, that’s a joke. This is a one‐parter. I wrote this down mainly so I can refer back to it in a couple years after I’ve forgotten all this stuff, but if you’ve made it this far, you probably also found it helpful or interesting. Let’s see if I can’t write a few more things like that.