Multiple synced pihole docker instances

I’ve been running a Pi-hole on a Raspberry Pi 3B for a few years now.
I noticed a few weeks ago that there was something wrong with it. It was still working, but something was off. Long story short, the SD card had decided to switch itself into read-only mode.

I ended up ordering two M.2 SATA SSDs and two USB enclosures. I have another Pi (a 4B) running Home Assistant, so to preempt its SD card failing, I figured I’d replace that with an SSD too.

Since the Pi-hole was still working, albeit in read-only mode, I figured I would set up a Pi-hole Docker instance on one of my VMs. I used Teleporter to restore my settings and was up and running fairly quickly. There’s a nice guide available on using a macvlan network and cloudflared to give the containers their own addresses on the LAN, rather than bridging the Docker network. The only caveat is that the Docker host can’t talk to the containers without some faffing around. Fortunately that isn’t an issue in my use case. I set up a second VM/Docker host for redundancy.
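For anyone wanting to try it, the network side boils down to something like this. A minimal sketch, assuming the host’s LAN interface is eth0 and the LAN is 192.168.1.0/24; the shim at the end is the “faffing around” you’d need if the host did have to reach the containers:

```bash
# Create a macvlan network so each container gets its own address on the LAN.
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  pihole_macvlan

# The kernel blocks traffic between a host interface and its macvlan
# children, which is why the Docker host can't talk to the containers.
# If you need that, give the host its own macvlan "shim" interface and
# route the container's (assumed) IP through it:
sudo ip link add macvlan-shim link eth0 type macvlan mode bridge
sudo ip addr add 192.168.1.250/32 dev macvlan-shim
sudo ip link set macvlan-shim up
sudo ip route add 192.168.1.200/32 dev macvlan-shim
```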

The SSDs arrived yesterday, so I got the Pis up and running, although I decided to run Pi-hole in a Docker container rather than installing it directly on the host as I had before. It was also a lot faster to get going, since I just had to copy one of the existing instances’ folders across, modify the compose file, and “up” the containers.
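Roughly, the whole migration was this (paths and hostnames invented for illustration):

```bash
# Copy an existing instance's folder across from one of the VM hosts.
rsync -a vm1:/opt/pihole/ /opt/pihole/

# Edit the compose file: at minimum give this instance its own static
# macvlan IP, and fix the parent interface if it's named differently on the Pi.
nano /opt/pihole/docker-compose.yml

# Bring the containers up.
cd /opt/pihole && docker compose up -d
```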

Of course, it was then that I discovered that Cloudflare don’t publish an arm64 version of their cloudflared container. Fortunately some bloke has built a drop-in replacement that does actually work on a Pi.
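If you want to check an image before pulling it onto a Pi, the manifest tells you which architectures it ships:

```bash
# List the platforms an image's manifest advertises; no arm64 entry means
# it won't run natively on a Pi 3/4.
docker manifest inspect cloudflare/cloudflared:latest | grep '"architecture"'

# Or check what architecture an already-pulled image actually is:
docker image inspect --format '{{.Architecture}}' cloudflare/cloudflared:latest
```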

Now I have three Pi-hole instances running. I definitely don’t need three, and even two is a bit much, but at least now if I need to reboot a Pi, I don’t lose name resolution for the whole network. I’ll leave the third instance running for a while, though I’ll probably get rid of it eventually.

The problem with having three independent instances is that any change I make on one Pi-hole has to be made on all of them. Annoying.
I found a nifty script that uses git to sync Pi-hole configs. I modified it slightly (just paths, plus some docker commands to restart the containers and update the gravity database), set up some cron jobs, and now I only need to modify one Pi-hole. The changes eventually propagate to the other two.
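I won’t reproduce the script verbatim, but the pull side looks roughly like this. A sketch only: the repo path, branch, folder layout, and container name are my own placeholders, not the original script’s:

```bash
#!/usr/bin/env bash
# Pull side of a git-based Pi-hole config sync (hypothetical paths/names).
set -euo pipefail

REPO=/opt/pihole-sync               # git repo holding the shared config
PIHOLE_DIR=/opt/pihole/etc-pihole   # host folder mounted at /etc/pihole
CONTAINER=pihole

cd "$REPO"
git fetch origin
# Only touch the container when the remote actually has new commits.
if [ "$(git rev-parse HEAD)" != "$(git rev-parse origin/main)" ]; then
    git pull --ff-only origin main
    rsync -a "$REPO/etc-pihole/" "$PIHOLE_DIR/"
    docker exec "$CONTAINER" pihole -g   # rebuild the gravity database
    docker restart "$CONTAINER"
fi

# Example crontab entry, every 15 minutes:
# */15 * * * * /usr/local/bin/pull-pihole-config.sh >> /var/log/pihole-sync.log 2>&1
```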

I suppose an alternative method would be to have a master folder on NFS and mount it read-only inside the other instances’ containers. That might work.
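Something like this, as a sketch (NFS export, mount point and IP all invented). One caveat I can see is that Pi-hole also writes its long-term query database under /etc/pihole, so a fully read-only mount would probably need the writable bits split out:

```bash
# Mount the master config exported from a NAS (hypothetical export path).
sudo mount -t nfs nas:/export/pihole-master /mnt/pihole-master

# Run a secondary instance with the shared config mounted read-only (:ro).
docker run -d --name pihole \
  --network pihole_macvlan --ip 192.168.1.201 \
  -v /mnt/pihole-master/etc-pihole:/etc/pihole:ro \
  -v /opt/pihole/etc-dnsmasq.d:/etc/dnsmasq.d \
  pihole/pihole:latest
```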
** See post below


pi-hole is, without a doubt, one of the most useful and well-made open source software projects. I happened upon it when I was on NBN satellite internet, with macroscopic latencies and microscopic data caps, and was looking for a way to conserve as much data volume as possible. Since then, it’s been running on my network, and has consistently shown that around 20% of DNS requests out of my network are for advertising assets. Comprehensive blocking is almost a necessity, not least since online advertising has become a vector for spy- and malware.

I have to say I’m running the lazy variant though. My pi-hole install is not dockerised, and I have not put any thought into backing up and restoring. It’s been running flawlessly for me except for once when the SD card started crapping out. It doesn’t take long to set up a fresh one, and my whitelist has maybe a dozen entries. As long as I can get at the filesystem, I can pull the data out of the database.
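For anyone in the same boat: on a v5-era install the lists live in /etc/pihole/gravity.db, so recovering a whitelist is one sqlite3 query (table and type values as I remember the schema, so verify before relying on it):

```bash
# Exact whitelist entries (type 0) and regex whitelist entries (type 2)
# from Pi-hole v5's gravity database.
sqlite3 /etc/pihole/gravity.db \
  "SELECT domain FROM domainlist WHERE type IN (0, 2);"
```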

Running AdGuard Home on Home Assistant and on a Pi Zero for redundancy. I prefer AdGuard Home to Pi-hole (it can use the same lists).

Found gravity-sync, which is pretty awesome. It pretty much does everything for you, and it’s very easy to install. It works very nicely with Pi-hole in Docker, but it isn’t exclusive to it; it works with bare-metal installs of Pi-hole too.
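From memory, day-to-day use on a secondary looks roughly like this; check the project’s README for the install step and the exact command set:

```bash
# Run on the secondary Pi-hole after installing gravity-sync.
gravity-sync compare   # see whether the primary's config differs
gravity-sync pull      # copy gravity.db and custom.list from the primary
gravity-sync automate  # set up a scheduled job to keep syncing
```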