Homelab Arena Part 7: From Cluster to Platform — ApplicationSet, Storage, and Backups
Part 6 ended with a cluster that could manage itself.
Argo CD had taken over the desired state. The repository had become the source of truth. The control plane no longer depended on manual steps.
That was the first half of the problem.
A cluster that manages itself is useful, but it is still not a platform. Applications can be deployed, but there is no consistent way to define persistence, no standard deployment pattern, and no clear backup strategy.
This part focuses on defining those pieces.

The missing layer: how applications live
Up to this point, deploying an application still meant answering the same questions:
- where does the data live?
- how is it exposed?
- how is it backed up?
Each app solved this slightly differently.
That inconsistency does not scale.
The goal here is to introduce a single pattern that answers all three questions in a predictable way.
Reusing the disk layout
The disk layout from Part 2 becomes important now:
Proxmox Node
├─ SSD
│  └─ VM disks
├─ HDD #1
│  └─ /mnt/pve/data01
└─ HDD #2
   └─ /mnt/pve/data02
Each disk has a clear role:
- SSD runs the cluster
- HDD #1 stores live application data
- HDD #2 stores backups
Instead of mixing concerns, storage is separated from compute.
NFS as the persistence layer
The directory:
/mnt/pve/data01/k3s-nfs
is exposed to the cluster via NFS.
This becomes the shared persistence layer:
Pods → PVC → NFS → HDD #1
The key change is conceptual.
Persistence is no longer tied to a node. It is tied to the cluster.
Applications can move. Data stays.
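Concretely, the export is a single line in /etc/exports on the Proxmox host. A minimal sketch, where the subnet and options are assumptions to adapt to the actual network:

# /etc/exports on the Proxmox host (subnet and options are assumptions)
/mnt/pve/data01/k3s-nfs 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)

Running exportfs -ra afterwards reloads the export table without restarting the NFS server.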
Dynamic provisioning with nfs-subdir
Manually managing volumes does not scale.
Instead, the cluster uses:
nfs-subdir-external-provisioner
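A minimal install sketch, assuming the share is exported as above and using a placeholder host IP; the chart repository and the nfs.server/nfs.path values come from the upstream project:

helm repo add nfs-subdir-external-provisioner \
  https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner \
  nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --namespace nfs-provisioner --create-namespace \
  --set nfs.server=<proxmox-host-ip> \
  --set nfs.path=/mnt/pve/data01/k3s-nfs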
The flow becomes:
Application requests PVC
→ provisioner creates directory on NFS
→ pod mounts it
Each application gets its own folder automatically.
Storage becomes declarative instead of operational.
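From the application side, requesting storage is just a claim. A sketch, where nfs-client is the StorageClass name the chart creates by default, and the claim name and size are placeholders:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data                   # placeholder name
spec:
  storageClassName: nfs-client     # default class created by the provisioner chart
  accessModes:
    - ReadWriteMany                # NFS allows shared access across nodes
  resources:
    requests:
      storage: 2Gi                 # placeholder size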
One chart, many apps
The next step is standardizing deployments.
Instead of creating a chart per application, a single shared chart is used.
It contains only four templates:
deployment.yaml
service.yaml
ingress.yaml
pvc.yaml
Each template has a clear responsibility.
The chart defines how applications run. The values define what runs.
This removes duplication and enforces consistency across all apps.
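As an illustration of how thin these templates can be, a pvc.yaml along these lines covers both stateful and stateless apps; the value names (persistence.enabled, persistence.size) and the fullname helper are assumed for the sketch, not taken from the actual chart:

{{- if .Values.persistence.enabled }}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ include "simple-chart.fullname" . }}   # helper name is an assumption
spec:
  storageClassName: {{ .Values.persistence.storageClass | default "nfs-client" }}
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: {{ .Values.persistence.size | default "1Gi" }}
{{- end }}

Apps that leave persistence.enabled unset simply render no PVC.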
ApplicationSet: turning structure into deployments
Managing one Argo CD Application per app does not scale.
Instead, a single ApplicationSet generates applications automatically.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: simple-chart-apps
  namespace: argocd
spec:
  generators:
    - git:
        repoURL: https://github.com/<user>/<repo>.git
        revision: HEAD
        directories:
          - path: argocd/simple-chart/apps/*
  template:
    metadata:
      name: '{{path.basename}}'
    spec:
      project: simple-chart
      source:
        repoURL: https://github.com/<user>/<repo>.git
        targetRevision: HEAD
        path: argocd/simple-chart/chart
        helm:
          valueFiles:
            - values.yaml                            # chart defaults
            - ../apps/{{path.basename}}/values.yaml  # per-app overrides; later files win
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{path.basename}}'
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - CreateNamespace=true
How it works
Git repository
└── apps/
    ├── family-blog/
    ├── uptime-kuma/
    └── it-tools/
↓
ApplicationSet scans directories
↓
Argo CD generates Applications
↓
Helm renders shared chart
↓
Kubernetes creates resources
The directory structure defines the applications.
Argo CD turns that structure into running workloads.
Repository layout
This pattern depends on a clear structure:
argocd/
├── bootstrap/
├── platform/
└── simple-chart/
    ├── chart/
    │   ├── Chart.yaml
    │   └── templates/
    └── apps/
        ├── family-blog/
        ├── uptime-kuma/
        └── it-tools/
The separation is intentional:
- chart/ defines the template
- apps/ defines the instances
Example: a complete application
A single file defines an application:
nameOverride: it-tools
image:
  repository: corentinth/it-tools
  tag: "2024.10.22-7ca5933"
service:
  port: 80
  targetPort: 80
ingress:
  host: tools.c.home
That file is enough for Argo CD to deploy the app.
No additional manifests are required.
Migrating applications
With this pattern in place, migrating the existing applications becomes mechanical:
- two blogs
- uptime-kuma
- it-tools
Each application follows the same structure.
Stateful apps request storage. Stateless apps do not.
The system supports both without special cases.
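For a stateful app, the same values file grows by one block. A hedged sketch for uptime-kuma, where the image tag, hostname, and persistence keys are illustrative assumptions following the conventions above:

nameOverride: uptime-kuma
image:
  repository: louislam/uptime-kuma   # upstream image name
  tag: "1"                           # tag is an assumption
service:
  port: 80
  targetPort: 3001                   # uptime-kuma listens on 3001
ingress:
  host: uptime.c.home                # hostname is an assumption
persistence:                         # assumed keys from the shared chart's pvc.yaml
  enabled: true
  size: 2Gi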
Backups: outside the cluster
Persistence introduces a second concern.
Data must survive failure.
All application data lives here:
/mnt/pve/data01/k3s-nfs
Backups run here:
/mnt/pve/data02/backups/k3s-nfs
A simple rsync job copies data from the live disk to the backup disk.
This runs on the Proxmox host, not inside Kubernetes.
That decision is deliberate.
If the cluster fails, backups should still work.
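A minimal sketch of that job, assuming a daily cron entry; the script path and flags are one reasonable choice, not a prescribed setup:

#!/bin/sh
# /etc/cron.daily/backup-k3s-nfs — runs on the Proxmox host,
# so it keeps working even if the cluster does not.
rsync -a /mnt/pve/data01/k3s-nfs/ /mnt/pve/data02/backups/k3s-nfs/

Plain rsync -a keeps deleted files around on the backup disk; adding --delete would turn it into an exact mirror, at the cost of propagating accidental deletions.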
What this changes
Before this:
- apps were deployed individually
- storage was inconsistent
- backups were undefined
After this:
- apps follow a standard pattern
- storage is predictable
- backups are defined and run outside the cluster
The cluster now has structure.
What this unlocks
With this foundation:
- adding an app becomes a values file
- upgrading apps becomes a version change
- observability can be added consistently
- restore flows become predictable
This is where the system starts behaving like a platform.
Closing
Part 6 introduced control.
Part 7 defines how workloads exist within that control.
Applications are now:
- defined in Git
- deployed automatically
- persisted consistently
- backed up predictably
The cluster is no longer just running applications.
It is starting to behave like infrastructure.