Homelab Arena Part 6: Handing the Cluster Over to Argo CD

Part 5 ended with a working k3s cluster.
The nodes were up, the control plane was stable, and the foundation finally felt predictable again.
That was the goal.
But a cluster that only exists is not enough.
The next step is not about installing more components. It is about deciding how the cluster should be managed going forward.
Up to this point, everything still depended on me.
Commands, playbooks, and a bit of memory.
That is the part I wanted to change.
This is where Argo CD enters the arena.
One repository, clear boundaries
Everything still lives in one repository.
That was intentional.
At this scale, splitting Terraform, Ansible, and Kubernetes manifests across multiple repos introduces more friction than it removes. Keeping everything together makes the system easier to understand and easier to rebuild.
Each layer still has a clear responsibility.
- Terraform creates the machines
- Ansible installs the cluster and bootstraps Argo CD
- Argo CD owns the desired state after that
The structure reflects that separation:
homelab/
├── ansible/
├── terraform/
├── argocd/
│   ├── bootstrap/
│   └── platform/
└── docs/
Nothing overlaps. Each layer builds on top of the previous one.
How control moves to Argo CD
The interesting part is not installing Argo CD.
The interesting part is how control moves to it.
Ansible
├── installs Argo CD
├── creates repo credentials
└── applies root Application
↓
Argo CD
└── reads argocd/bootstrap from Git
↓
Bootstrap layer
├── project definition
└── platform application
↓
Platform apps
├── Argo CD
└── Sealed Secrets
Ansible gets Argo CD running.
Argo CD takes over everything after that.
That handoff is the whole point.
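The handoff above can be sketched as a short Ansible play. This is a sketch, not the actual playbook: the file paths and names are illustrative assumptions, though the `kubernetes.core` modules are real.

```yaml
# Illustrative bootstrap play; paths and names are assumptions.
- hosts: localhost
  tasks:
    - name: Install Argo CD via Helm
      kubernetes.core.helm:
        name: argocd
        chart_ref: argo/argo-cd
        release_namespace: argocd
        create_namespace: true
        values_files:
          - files/argocd-values.yaml

    - name: Create repository credentials
      kubernetes.core.k8s:
        state: present
        template: templates/repo-secret.yaml.j2

    - name: Apply the root Application (the handoff)
      kubernetes.core.k8s:
        state: present
        src: files/bootstrap-application.yaml
```

After the last task runs, Ansible has nothing left to do: every later change flows through Git.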
From individual DNS records to a platform
In Part 3, DNS was still managed one entry at a time.
app1.c.home
app2.c.home
grafana.c.home
That works, but it does not scale once the cluster starts hosting multiple services.
Instead of thinking in terms of individual applications, the cluster needs a simple pattern.
*.c.home → Traefik
Everything routes through the same entry point.
Pi-hole handles this with a wildcard rule.
sudo pihole-FTL --config misc.etc_dnsmasq_d true
sudo tee /etc/dnsmasq.d/05-wildcard.conf >/dev/null <<'EOF'
address=/.c.home/10.0.1.150
EOF
sudo systemctl restart pihole-FTL
Verify:
dig argocd.c.home @10.0.0.88 +short
dig test.c.home @10.0.0.88 +short
At this point, any *.c.home hostname resolves to the cluster.
That is enough to give Argo CD a proper address.
Installing Argo CD
Argo CD is installed using Helm as part of the Ansible bootstrap.
The configuration stays minimal.
configs:
  params:
    server.insecure: "true"
server:
  ingress:
    enabled: true
    ingressClassName: traefik
    hostname: argocd.c.home
    tls: false
    servicePort: 80
Traefik handles routing.
Argo CD runs behind it without needing to manage TLS or external exposure directly.
Connecting to a private repository
The repository is private.
That means Argo CD needs credentials before it can reconcile anything from Git.
A GitHub Personal Access Token over HTTPS is enough.
apiVersion: v1
kind: Secret
metadata:
  name: argocd-repo-argocd-homelab
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://github.com/<user>/<repo>.git
  username: git
  password: <github-token>
Passed in at runtime:
ansible-playbook playbooks/bootstrap_argocd.yml \
  -e "argocd_github_https_token=ghp_..."
That is enough for Argo CD to clone the repository.
First access
After installation, the initial admin password can be retrieved with:
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo
Username:
admin
This is only needed once.
What matters more is whether Argo CD can read the repository and start reconciling the cluster.
The root Application
Argo CD does not manage anything until it is told where to look.
That happens through a single Application.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: bootstrap-argocd
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<user>/<repo>.git
    targetRevision: main
    path: argocd/bootstrap
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
This is the handoff.
Ansible applies this once.
From that point forward, Argo CD reads from Git and keeps the cluster in sync.
Defining the platform boundary
Even in a homelab, the platform layer needs a boundary.
Argo CD Projects provide that.
clusterResourceWhitelist:
  - group: '*'
    kind: '*'
Platform applications need permission to create cluster-level resources.
CRDs, ClusterRoles, and ClusterRoleBindings all live outside a namespace.
Allowing them explicitly keeps the behavior predictable.
The project becomes the boundary of this arena.
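Pieced together, the full AppProject might look like this. The name `platform` matches the application in the next section; the description and the wide-open `sourceRepos` and `destinations` are assumptions that fit a single-repo homelab, not a copy of the actual manifest.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: platform
  namespace: argocd
spec:
  description: Platform layer of the homelab cluster
  sourceRepos:
    - '*'
  destinations:
    - server: https://kubernetes.default.svc
      namespace: '*'
  clusterResourceWhitelist:
    - group: '*'
      kind: '*'
```

In a homelab, the permissive wildcards are fine; the point is that the boundary exists at all, so tightening it later is a one-file change.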
The first platform application
With the project in place, the first application is straightforward.
Sealed Secrets.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sealed-secrets
  namespace: argocd
spec:
  project: platform
  source:
    repoURL: https://bitnami-labs.github.io/sealed-secrets
    chart: sealed-secrets
    targetRevision: 2.18.5
  destination:
    server: https://kubernetes.default.svc
    namespace: kube-system
It solves the next problem immediately: how to store secrets in Git without exposing them.
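The workflow, roughly: a plain Secret is encrypted with `kubeseal` on the workstation, and only the resulting SealedSecret is committed. A sketch of what lands in Git follows; the name and namespace are placeholders, and the `encryptedData` value is obviously not real ciphertext.

```yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: example-credentials   # placeholder name
  namespace: default          # placeholder namespace
spec:
  encryptedData:
    password: AgB3...          # ciphertext; only the in-cluster controller can decrypt it
  template:
    metadata:
      name: example-credentials
      namespace: default
```

The controller decrypts this into a normal Secret inside the cluster, so the Git repository never holds plaintext.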
Where the arena stands now
Argo CD is no longer accessed through a raw IP address or a temporary port-forward.
It lives at:
https://argocd.c.home
That small change matters.
The cluster now has:
- a stable entry point
- a Git-backed source of truth
- a controller that reconciles state
- its first platform application
Part 5 gave me a cluster.
Part 6 gives the cluster a way to manage itself.