GKE - Part 5: Private Platform Access with the Tailscale Kubernetes Operator

This part is about adding a private access path to the GKE nonprod cluster with the Tailscale Kubernetes operator.
I had done a similar setup before on EKS, but when I needed to repeat it on GKE, I realized I had forgotten enough small details that I had to go back through the docs again. This post is the runbook I wish I had kept the first time.
The goal is simple:
- install the Tailscale Kubernetes operator with Argo CD
- expose Argo CD privately through Tailscale Ingress
- document the exact setup for the next time I need it
By the end of this step:
- Tailscale OAuth credentials are created in the admin console
- the Tailscale Kubernetes operator is installed through the existing GitOps flow
- the Helm chart version is pinned from the official chart index
- Argo CD is exposed through a private .ts.net URL
- a simple test workload confirms that Tailscale Ingress works on GKE
The final Argo CD URL for this setup is:
https://argocd-nonprod-g.tailnet-name.ts.net/
Why Tailscale here
There are several ways to provide private access to Kubernetes services.
I could use a traditional VPN, private load balancers, a public ingress locked down with SSO, a bastion host, or temporary port forwarding. All of those can work, but each option adds a different kind of operational cost.
Tailscale is attractive because it gives the team a private network that is easy to join, easy to leave, and easy to reason about. Once a user is on the tailnet and has the right access policy, internal services can be reached by name.
For Kubernetes, the operator makes this cleaner. Instead of running Tailscale sidecars manually or creating one-off access paths, the operator watches Kubernetes resources and creates the Tailscale proxy resources needed to expose workloads.
For this first GKE setup, I only care about one primary use case:
Expose Argo CD privately to the team.
The same pattern can later be reused for other internal tools:
- Grafana
- internal admin APIs
- temporary test apps
- preview environments
- platform dashboards
What the Tailscale operator needs
The Tailscale Kubernetes operator needs permission to create and manage devices in the tailnet.
The standard setup uses OAuth client credentials from the Tailscale admin console. The operator uses those credentials to authenticate to the Tailscale API, create auth keys, and register the proxy devices it manages.
The rough flow is:
Tailscale admin console:
- create tags
- create OAuth client
- grant required scopes
- attach operator tag

Kubernetes:
- create tailscale namespace
- store OAuth client ID and secret
- install the operator with Helm through Argo CD
- expose workloads with Tailscale Ingress
Tailscale also supports a workload identity federation path. I am not using that for this first version. For the initial GitOps setup, OAuth client credentials are the straightforward option and match how I originally approached this on EKS.
Step 1: update the Tailscale ACL tags
In the Tailscale admin console, open the access control policy file.
The operator needs one tag for itself and another tag for the devices it creates:
"tagOwners": {
"tag:k8s-operator": [],
"tag:k8s": ["tag:k8s-operator"]
}
The meaning is:
- tag:k8s-operator: used by the Kubernetes operator itself
- tag:k8s: used by proxy devices created by the operator
- tag:k8s-operator owns tag:k8s: the operator is allowed to create devices with tag:k8s
This is easy to miss. If the operator cannot own the tag used by its proxy devices, the Kubernetes resources may look fine while the Tailscale side refuses to create or authorize those devices.
For the first pass, I am keeping the default tag model:
- tag:k8s-operator
- tag:k8s

Later, I can introduce environment-specific tags if I need different access policies per cluster or environment:
- tag:gke-nonprod
- tag:gke-prod
- tag:platform
For the first GKE nonprod installation, the default tags are enough.
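Owning tags is only half of the policy: a separate rule decides who can actually reach the proxy devices. A minimal sketch using the classic acls syntax, assuming team members should reach the proxies over HTTPS (comments are valid because the policy file is HuJSON):

"acls": [
  {
    // let every tailnet member reach operator-created proxies on 443
    "action": "accept",
    "src": ["autogroup:member"],
    "dst": ["tag:k8s:443"]
  }
]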
Step 2: create the Tailscale OAuth client
In the Tailscale admin console, go to:
Settings → Trust credentials → OAuth clients → Generate OAuth client
Create an OAuth client for the Kubernetes operator.
The client needs these write scopes:
- Devices Core: write
- Auth Keys: write
- Services: write

It also needs this tag:
- tag:k8s-operator

After creating the client, save both values:
- client_id
- client_secret
The client secret is only shown once, so this is the point where I slow down and store it properly before closing the page.
For GitOps, I do not want this secret committed as plain YAML. The immediate manual version is a Kubernetes Secret. The better GitOps version is a SealedSecret or an ExternalSecret.
Step 3: create the secret
The operator needs the OAuth client_id and client_secret. Since this setup is managed by Argo CD, I store them as a SealedSecret in the Tailscale templates folder.
First, generate the normal Secret manifest locally:
kubectl create secret generic operator-oauth \
--namespace tailscale \
--from-literal=client_id='YOUR_CLIENT_ID' \
--from-literal=client_secret='YOUR_CLIENT_SECRET' \
--dry-run=client \
-o yaml > operator-oauth.secret.yaml
Then seal it:
kubeseal \
--namespace tailscale \
--name operator-oauth \
--format yaml \
< operator-oauth.secret.yaml \
> apps/infra/tailscale/templates/operator-oauth.sealedsecret.yaml
Now the operator credentials can follow the same GitOps path as the rest of the platform configuration.
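For reference, the sealed file that ends up in Git looks roughly like this. The encryptedData values are long base64 blobs; they are truncated here:

apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: operator-oauth
  namespace: tailscale
spec:
  encryptedData:
    client_id: AgB4...      # truncated
    client_secret: AgCk...  # truncated
  template:
    metadata:
      name: operator-oauth
      namespace: tailscale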
Step 4: pin the Helm chart version
This is one of the details I wanted to document because I knew I would forget it.
The Tailscale stable Helm chart repository is:
https://pkgs.tailscale.com/helmcharts
The chart index is available at:
https://pkgs.tailscale.com/helmcharts/index.yaml
There are a few ways to check the latest version.
With Helm:
helm repo add tailscale https://pkgs.tailscale.com/helmcharts
helm repo update
helm search repo tailscale/tailscale-operator --versions | head
With curl and yq:
curl -s https://pkgs.tailscale.com/helmcharts/index.yaml \
| yq '.entries."tailscale-operator"[0].version'
At the time I wrote this, the latest stable chart version in the index was:
1.96.5
The chart version and app version are aligned:
- chart version: 1.96.5
- app version: v1.96.5
In Argo CD, I pin the chart version instead of tracking an unbounded latest value:
targetRevision: 1.96.5
The important habit is not the exact version number. The important habit is checking the chart index, pinning the version, and making upgrades deliberate.
Before committing the Argo CD change, I also check the chart values for the exact version I am installing:
helm show values tailscale/tailscale-operator --version 1.96.5
That command is the source of truth for value names. If the chart values change in a future release, I want to catch that before Argo CD tries to sync it.
In my Argo CD setup, the operator is installed from the upstream Tailscale chart, while extra manifests such as the SealedSecret live in the Git repo. The important chart values are the OAuth secret references and the operator hostname:
helm:
  releaseName: tailscale-operator
  valuesObject:
    oauth:
      clientIdSecret:
        name: operator-oauth
        key: client_id
      clientSecretSecret:
        name: operator-oauth
        key: client_secret
    operatorConfig:
      hostname: tailscale-operator-gke-nonprod
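For context, this is roughly the Argo CD Application that helm block sits inside. The destination and sync options here are assumptions based on how the rest of my platform apps are wired, not values taken from the chart:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: tailscale-operator
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://pkgs.tailscale.com/helmcharts
    chart: tailscale-operator
    targetRevision: 1.96.5
    helm:
      releaseName: tailscale-operator
      valuesObject:
        # oauth and operatorConfig values as shown above
  destination:
    server: https://kubernetes.default.svc
    namespace: tailscale
  syncPolicy:
    syncOptions:
      - CreateNamespace=true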
Once this is committed and synced by Argo CD, the operator should register itself in the Tailscale admin console with the tag:k8s-operator tag.
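A quick cluster-side check before opening the admin console. In the chart version I installed, the operator runs as a deployment named operator in the tailscale namespace; the name may vary across versions:

kubectl get pods -n tailscale
kubectl logs -n tailscale deployment/operator --tail=20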
Step 5: use Tailscale as the Ingress class
Once the operator is running, it is time to use Tailscale as the ingressClassName for internal services.
This is the part that is different from a normal public ingress setup. I am not creating a public DNS record, using cert-manager, or managing a TLS secret. Tailscale Ingress uses MagicDNS, and Tailscale handles HTTPS for the .ts.net name.
For Argo CD, the Ingress should use:
spec:
  ingressClassName: tailscale
  tls:
    - hosts:
        - argocd-nonprod-g
That short TLS host becomes the first label of the MagicDNS name:
argocd-nonprod-g.tailnet-name.ts.net
So the final URL becomes:
https://argocd-nonprod-g.tailnet-name.ts.net/
This is where I had to adjust my original Argo CD values. I had values that looked like a public ingress path, with a public host and a cert-manager-managed TLS secret. That belongs to the public DNS and Gateway/cert-manager model, not to the private Tailscale model.
For this path, I want the short Tailscale host:
argocd-nonprod-g
Argo CD is installed by Helm, so I expose the server through chart values:
configs:
  params:
    server.insecure: true
server:
  ingress:
    enabled: true
    ingressClassName: tailscale
    hosts:
      - argocd-nonprod-g
    tls:
      - hosts:
          - argocd-nonprod-g
The server.insecure: true value is needed because TLS terminates before traffic reaches the Argo CD server. The backend service is plain HTTP on port 80.
The rendered Ingress should look like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server
  namespace: argocd
spec:
  ingressClassName: tailscale
  rules:
    - host: argocd-nonprod-g
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: argocd-server
                port:
                  number: 80
  tls:
    - hosts:
        - argocd-nonprod-g
If I see a public-style host or a TLS secret like this, I know I am still mixing two different ingress models:
tls:
  - hosts:
      - argocd.nonprod.g.example.com
    secretName: argocd-server-tls
For the Tailscale path, I remove the cert-manager annotations, remove the public host, and leave the host as the short MagicDNS label.
Then I check the Ingress:
kubectl get ingress argocd-server -n argocd -o yaml
kubectl get ingress -n argocd
Expected shape:
NAME            CLASS       HOSTS              ADDRESS                                PORTS     AGE
argocd-server   tailscale   argocd-nonprod-g   argocd-nonprod-g.tailnet-name.ts.net   80, 443   2m
The first request can be slower because the Tailscale certificate may be provisioned on first connect. If the first attempt times out, I give it a moment and try again.
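Since curl treats a timeout as a transient error, a retry flag covers that slow first certificate provisioning without manual re-runs:

curl --retry 3 --retry-delay 10 https://argocd-nonprod-g.tailnet-name.ts.net/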
Step 6: test with a simple app
Before relying on the Argo CD exposure completely, I like to test the operator with a small app. This removes Argo CD-specific behavior from the debugging process and proves that Tailscale Ingress works on its own.
The sample app runs in its own namespace:
apiVersion: v1
kind: Namespace
metadata:
  name: janus-test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
  namespace: janus-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello
  namespace: janus-test
spec:
  selector:
    app: hello
  ports:
    - name: http
      port: 80
      targetPort: 8080
      protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello
  namespace: janus-test
spec:
  ingressClassName: tailscale
  tls:
    - hosts:
        - janus-test
  rules:
    - host: janus-test
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello
                port:
                  number: 80
The tls.hosts value is what drives the Tailscale MagicDNS name. The rules.host keeps the Ingress definition explicit and consistent with the way I usually write Kubernetes Ingress resources.
Validate it:
kubectl get pods -n janus-test -o wide
kubectl get svc -n janus-test
kubectl get ingress -n janus-test
Expected Ingress shape:
NAME    CLASS       HOSTS        ADDRESS                          PORTS     AGE
hello   tailscale   janus-test   janus-test.tailnet-name.ts.net   80, 443   1m
Then call it through the tailnet:
curl https://janus-test.tailnet-name.ts.net/
Expected response:
Hello, world!
Version: 1.0.0
If this works, then the operator, OAuth credentials, MagicDNS, HTTPS, and Tailscale Ingress are all working independently of Argo CD.
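Once the test passes, the whole namespace can go away. Deleting the Ingress should also make the operator remove the corresponding proxy device from the tailnet:

kubectl delete namespace janus-test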
Where this leaves the cluster
At this point, the GKE nonprod cluster has a private platform access path.
Argo CD is available at:
https://argocd-nonprod-g.tailnet-name.ts.net/
The current access model is conservative: the team has read-only Argo CD access for now.
Read-only access gives the team a way to observe the new GitOps environment while the platform is still being stabilized. Write access can come later, after the deployment flow and RBAC model have had more time to settle.
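In the Argo CD Helm values, that conservative default is one line. A minimal sketch, assuming RBAC is managed through the chart's configs.rbac block:

configs:
  rbac:
    policy.default: role:readonly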
That matters because the cluster is no longer only running platform controllers and sample apps. It is starting to run real application workloads, while keeping internal platform access private through Tailscale.