Homelab Arena Part 3: The Network Rebuild Side Quest

Tags: proxmox, pfsense, pihole, networking, homelab

Part 2 laid the foundation.

Storage was rebuilt, disks were labeled, and the node finally behaved predictably again. The arena was ready for the next step: continuing the K3s rebuild.

That was the plan.

Then the lights went out.

Galaxy Express 999 - An orphan journeys through the stars on a celestial steam train to trade his humanity for a mechanical body.

A Side Track That Was Always Coming

It turns out the Homelab Arena isn’t just about K3s after all.

Parts 0, 1, and 2 focused on bringing the cluster back. But this part is different — a side track that was always going to happen eventually.

The network stack had been running quietly for years:

  • pfSense on ESXi
  • Pi-hole in Docker inside a CentOS VM

It wasn’t new, but it was stable. It did its job without needing attention.

But it was also tied to older hardware, and the migration had been sitting on the list.

The outage didn’t break everything.

It just made the decision for me.

The Hardware Shift

Old:

  • HP Gen2 i5
  • 8GB RAM
  • 4-port NIC
  • ESXi

New:

  • HP T740 thin client (Ryzen-based)
  • 8GB RAM
  • Proxmox

Smaller, quieter, and already prepared.

Step 1: Recover the Configuration

The goal was not to fix the old system.

The goal was to extract what mattered.

Bootstrapping ESXi for Access

A fresh ESXi installation was written to a new USB drive.

After booting:

  • the old SSD datastore mounted automatically
  • existing VMs appeared without reconstruction
  • the system was usable again — temporarily

This was enough to retrieve configuration.

Exporting pfSense Configuration

Power on the pfSense VM.

From the Web UI:

Diagnostics → Backup & Restore

Download:

config.xml

This file contains:

  • interface definitions
  • firewall rules
  • DHCP configuration
  • NAT rules
  • system settings

Because pfSense ties logic to interface roles, not hardware identifiers, this file can be restored on completely different hardware and still work after reassignment.
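If SSH access to the old pfSense is still available, the same file can also be pulled without the Web UI. A sketch, assuming the stock config location and the old LAN address:

```shell
# config.xml lives under /cf/conf/ on a standard pfSense install;
# copy it straight off the box as a backup
scp root@10.0.0.1:/cf/conf/config.xml ./pfsense-config-backup.xml
```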

Extracting Pi-hole Data

Pi-hole was running in Docker, so there is no single export file; the state lives in the container's mounted volume.

From the CentOS VM:

/var/docker/pihole/etc-pihole/

Copy this directory out.

Important contents:

  • gravity.db
  • pihole-FTL.db
  • dhcp.leases
  • local.list
  • hosts/
  • adlists.list

These represent the operational state of Pi-hole.
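The copy-out step can be done in one pass with tar and scp. A sketch, assuming root SSH access to the CentOS VM ("centos-vm" is a placeholder hostname):

```shell
# On the CentOS VM: archive the Pi-hole state directory,
# preserving ownership and layout
tar czf /root/etc-pihole-backup.tar.gz -C /var/docker/pihole etc-pihole

# From the machine driving the migration: pull the archive
scp root@centos-vm:/root/etc-pihole-backup.tar.gz .
```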

Step 2: Build the New Network Foundation

Physical Layout

Internet → nic0 (onboard) → pfSense (VM) → 4 physical LAN ports

Logical Design

                    INTERNET
                        |
                     [ WAN ]
                     pfSense
            ┌─────────┼─────────┬─────────┐
            |         |         |         |
          LAN0      LAN1      LAN2      LAN3
       10.0.0.1  10.0.1.1  10.0.2.1  10.0.3.1
            |
     Infrastructure services

Proxmox Network Configuration

Each physical interface becomes its own bridge.

Edit:

nano /etc/network/interfaces

auto lo
iface lo inet loopback

iface nic0 inet manual
iface enp1s0f0 inet manual
iface enp1s0f1 inet manual
iface enp1s0f2 inet manual
iface enp1s0f3 inet manual

# LAN0 (primary)
auto vmbr0
iface vmbr0 inet static
    address 10.0.0.224/24
    gateway 10.0.0.1
    bridge-ports enp1s0f0

# LAN1
auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp1s0f1

# LAN2
auto vmbr2
iface vmbr2 inet manual
    bridge-ports enp1s0f2

# LAN3
auto vmbr3
iface vmbr3 inet manual
    bridge-ports enp1s0f3

# WAN
auto vmbr5
iface vmbr5 inet manual
    bridge-ports nic0

Apply:

ifreload -a
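Before moving on, it is worth confirming the bridges actually came up and are attached to the intended ports:

```shell
# One-line summary of every bridge and its state
ip -br link show type bridge

# Which physical port is enslaved to which bridge
bridge link show
```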

Step 3: Deploy pfSense

Create the VM

Add 5 network interfaces:

NIC    Bridge
net0   vmbr5 (WAN)
net1   vmbr0
net2   vmbr1
net3   vmbr2
net4   vmbr3
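The same layout can be applied from the Proxmox CLI instead of the Web UI; a sketch, where VM ID 100 is an assumption:

```shell
# Attach five virtio NICs to the pfSense VM, one per bridge
qm set 100 --net0 virtio,bridge=vmbr5   # WAN
qm set 100 --net1 virtio,bridge=vmbr0   # LAN
qm set 100 --net2 virtio,bridge=vmbr1   # OPT1
qm set 100 --net3 virtio,bridge=vmbr2   # OPT2
qm set 100 --net4 virtio,bridge=vmbr3   # OPT3
```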

Assign Interfaces

From console:

1) Assign Interfaces

Assign:

WAN  → vtnet0
LAN  → vtnet1
OPT1 → vtnet2
OPT2 → vtnet3
OPT3 → vtnet4

Set Interface IPs

LAN   → 10.0.0.1 /24
OPT1  → 10.0.1.1 /24
OPT2  → 10.0.2.1 /24
OPT3  → 10.0.3.1 /24

Enable DHCP on each interface.

Restore Configuration

Upload:

config.xml

After restore:

  • pfSense reboots
  • interface mismatch warning appears
  • reassign interfaces again

This step is expected — hardware identifiers changed.

Post-Restore Validation

Check:

  • WAN receives IP
  • LAN reachable
  • Web UI accessible

Verify services:

  • DHCP active
  • firewall rules present
  • system logs clean

Step 4: Deploy Pi-hole as LXC

Create Container

  • Debian 12 template
  • 1 CPU
  • 512MB RAM
  • static IP on LAN0
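The container spec above can also be created from the Proxmox CLI; a sketch, where the CT ID, template filename, storage name, and address are assumptions to adjust for the local setup:

```shell
# Create an unprivileged Debian 12 container on LAN0 (vmbr0)
pct create 110 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
  --hostname pihole \
  --cores 1 --memory 512 \
  --net0 name=eth0,bridge=vmbr0,ip=10.0.0.53/24,gw=10.0.0.1 \
  --unprivileged 1
pct start 110
```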

Install Pi-hole

apt update && apt upgrade -y
apt install -y curl
curl -sSL https://install.pi-hole.net | bash

Restore Data (Selective)

Stop service:

systemctl stop pihole-FTL

Backup clean install:

cp -r /etc/pihole /etc/pihole.clean

Restore only necessary data:

# run from inside the extracted etc-pihole backup directory
cp gravity.db /etc/pihole/
cp pihole-FTL.db /etc/pihole/
cp dhcp.leases /etc/pihole/
cp local.list /etc/pihole/
cp -r hosts/ /etc/pihole/

Fix permissions:

chown -R pihole:pihole /etc/pihole
chmod -R 755 /etc/pihole

Rebuild database:

pihole -g

Start service:

systemctl start pihole-FTL

Verify

pihole status

Test:

ping 10.0.0.1
ping 8.8.8.8

Critical Setting

Settings → DNS → Interface Listening Behavior
→ Listen on all interfaces, permit all origins

This is required because requests come from multiple subnets.
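Where this setting lands on disk depends on the Pi-hole version, so it is worth confirming it actually took effect. A sketch, assuming default paths:

```shell
# Pi-hole v5 stores this as DNSMASQ_LISTENING=all in setupVars.conf;
# v6 moved the setting into pihole.toml. Check whichever file exists:
grep -H DNSMASQ_LISTENING /etc/pihole/setupVars.conf 2>/dev/null
grep -H -i listening /etc/pihole/pihole.toml 2>/dev/null
```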

Step 5: Validation

From container:

ping 8.8.8.8

From client:

nslookup google.com 10.0.0.x

Expected:

  • DNS resolves
  • clients receive correct responses

The New Arena

Proxmox (T740)
├─ pfSense VM
│   ├─ WAN
│   ├─ LAN  10.0.0.1
│   ├─ OPT1 10.0.1.1
│   ├─ OPT2 10.0.2.1
│   └─ OPT3 10.0.3.1
└─ Pi-hole LXC

Where This Leaves the Arena

The old system worked.

The new system is easier to rebuild.

  • fewer layers
  • clearer separation
  • faster recovery path
  • simpler mental model

Back to the Main Story

This wasn’t part of the original plan.

But it was always going to happen.

Now the network is stable again.

Which means the arena is ready.

Next

Back to where we left off:

rebuilding the K3s cluster

The arena continues.