Homelab Arena Part 2: Laying the Storage Foundation
Part 1 brought the server back to life.
The Proxmox reinstall worked, the Gen8 boot trap was solved, and the node finally behaved like a normal hypervisor again. The UI loads, updates work, and the SSD thinpool is ready for VM disks.
But the machine is still basically empty.
Before restoring anything important or rebuilding services, I wanted to deal with something simpler first: the storage layout.

The hardware situation
This server currently has:
- SSD → Proxmox OS + VM disks (LVM-thin)
- 2 × 4TB HDDs → bulk storage
Those two HDDs are where most of the future data will live. There are many ways to organize storage in Proxmox — ZFS pools, LVM groups, distributed storage — but at this stage of the rebuild I didn’t need anything complicated.
What I needed was a simple and predictable foundation.
A naming convention that stuck with me
One naming convention I picked up early in my career was to keep infrastructure names generic rather than descriptive.
Across many systems — databases, storage arrays, backup targets — it’s common to see simple sequences like data01, data02, db01, db02, and so on. The idea is straightforward: infrastructure often outlives its original purpose, so names that describe a specific role tend to age poorly.
That convention stuck with me.
So for this rebuild I kept things simple and named the drives:
data01
data02
The names don’t imply a specific role. Today they might hold backups or restore files. Tomorrow they might store datasets, migration space, or something else entirely.
The goal is simply to give the server stable storage locations that won’t need renaming later as the system evolves.
Creating the storage
In Proxmox I created two directory storages, one per HDD, using the node's Disks → Directory wizard. During this step Proxmox wipes the disk, creates a single partition, formats it as ext4, and mounts it automatically.
After the process completes the mountpoints look like this:
/mnt/pve/data01
/mnt/pve/data02
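The GUI step can be approximated from the shell. This is a hedged sketch, not the exact commands Proxmox runs, and it would wipe the target disk; the guard makes it a no-op on machines without Proxmox's pvesm tool.

```shell
# Rough CLI equivalent of the Disks -> Directory wizard (a sketch).
# WARNING: destroys everything on $DISK if run on the node.
DISK=/dev/sdb
NAME=data01

if command -v pvesm >/dev/null; then
    sgdisk --zap-all "$DISK"              # wipe the partition table
    sgdisk -n 1:0:0 "$DISK"               # one partition spanning the disk
    mkfs.ext4 -L "$NAME" "${DISK}1"       # format and label in one step
    mkdir -p "/mnt/pve/$NAME"
    mount "${DISK}1" "/mnt/pve/$NAME"
    pvesm add dir "$NAME" --path "/mnt/pve/$NAME" \
        --content images,iso,vztmpl,backup
else
    echo "pvesm not found; skipping (not a Proxmox node)"
fi
```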
From the shell you can see the structure immediately.
lsblk
Typical result:
sdb
└─sdb1 ext4 /mnt/pve/data01
sdc
└─sdc1 ext4 /mnt/pve/data02
At this point the node has three storage backends:
SSD thinpool → VM disks
data01 HDD → bulk storage
data02 HDD → bulk storage
Nothing fancy — just something that makes sense when logging into the machine months later.
Labeling the disks
For clarity I labeled the filesystems so the disk identity matches the storage name.
e2label /dev/sdb1 data01
e2label /dev/sdc1 data02
You can verify the labels with:
blkid | grep data
Proxmox actually mounts the disks by UUID rather than by device name (via an /etc/fstab entry or a generated systemd mount unit, depending on the version), so the labels are mainly there for human clarity.
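The UUID pinning lives either in /etc/fstab or, on recent Proxmox releases, in a systemd mount unit the wizard generates. The latter looks roughly like this; the UUID and unit name will differ on your system.

```ini
# /etc/systemd/system/mnt-pve-data01.mount (generated by the wizard)
[Unit]
Description=Mount storage 'data01' under /mnt/pve

[Mount]
What=/dev/disk/by-uuid/<uuid>
Where=/mnt/pve/data01
Type=ext4
Options=defaults
```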
What Proxmox actually created
Behind the scenes the storage stack now looks like this:
disk → partition → filesystem → mountpoint → Proxmox storage
Example:
/dev/sdb
└─ /dev/sdb1 ext4
↓
/mnt/pve/data01
Proxmox also registers the storage inside:
/etc/pve/storage.cfg
Example entry:
dir: data01
        path /mnt/pve/data01
        content images,iso,vztmpl,backup
Storage layout after setup
Once both disks were configured, the node looked like this:
Proxmox Node
│
├─ SSD
│ └─ LVM-thin (pve/data)
│ └─ VM disks
│
├─ HDD #1
│ └─ /dev/sdb1 (ext4)
│ └─ /mnt/pve/data01
│
└─ HDD #2
└─ /dev/sdc1 (ext4)
└─ /mnt/pve/data02
A quick check confirms the mounts:
findmnt
Preparing a restore location
Before rebuilding anything else, I wanted to confirm something important:
that restoring backups still works.
Proxmox expects VM backup files inside a directory named dump, so I created that under data02.
mkdir -p /mnt/pve/data02/dump
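dump is one of several subdirectories a dir-type storage uses; Proxmox creates the others on demand. A sketch of the layout, demonstrated against a scratch path so it is safe to run anywhere:

```shell
# On the real node the base would be /mnt/pve/data02.
base=/tmp/data02-demo
mkdir -p "$base/dump"           # vzdump backup archives land here
mkdir -p "$base/images"         # VM disk images ("images" content type)
mkdir -p "$base/template/iso"   # uploaded ISO installers
ls "$base"
```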
Restoring a VM from before the reinstall
I made a VM backup before wiping the machine. It’s time to copy it over.
scp vzdump-qemu-<VMID>-<timestamp>.vma.zst \
root@<proxmox-ip>:/mnt/pve/data02/dump/
Once the file landed there, Proxmox detected it automatically.
From the UI the backup appears when you select the data02 storage in the node's sidebar and open its Backups tab.
Restoring it takes only a few minutes.
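The UI restore also has a CLI counterpart, qmrestore. Roughly, with the same placeholders as above (the target storage ID is an assumption; substitute the name of your SSD thinpool storage):

```shell
qmrestore /mnt/pve/data02/dump/vzdump-qemu-<VMID>-<timestamp>.vma.zst \
    <VMID> --storage <ssd-thinpool>
```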
When the VM booted normally, that was the real signal that the rebuild worked. The restore didn’t just bring back a machine — it brought back the services that used to run on it before the reinstall.
Where the system stands now
At this point the arena is quiet again — in a good way.
The node is stable, storage is laid out, and restores work.
The system now looks like this:
SSD thinpool → VM disks
/mnt/pve/data01 → HDD storage
/mnt/pve/data02 → HDD storage
/mnt/pve/data02/dump → VM restore location
It’s intentionally simple.
That simplicity gives the rebuilt node something it didn’t have earlier in the process: a predictable place for everything to live.
Next
With the storage foundation in place, the node is ready for the next step: creating the first fresh VM and installing a new operating system on it.
The arena is open again.