Extending Our Tailscale Setup with a Terraform-Managed Bastion

originally posted on LinkedIn on Sept 06, 2025

In my previous post, I wrote about how we replaced a traditional VPN with Tailscale to connect engineers to Kubernetes services. That solved a big piece of the puzzle: cluster access was now simple, secure, and reliable.

But as always, not everything lives in Kubernetes. We still had private databases, legacy services, and tools running in our VPC that engineers needed to reach. That’s where a bastion came in.
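To make the idea concrete, here is a minimal sketch of what a Terraform-managed Tailscale bastion can look like. It assumes an AWS VPC, and the AMI, CIDR, and variable names are illustrative rather than lifted from our actual setup:

    # Sketch: a small instance that joins the tailnet and advertises the
    # VPC's private CIDR as a Tailscale subnet route. All names and CIDRs
    # here are illustrative assumptions, not our production values.
    variable "tailscale_authkey" { sensitive = true } # pre-generated auth key
    variable "ami_id" {}                              # any recent Linux AMI
    variable "subnet_id" {}                           # an existing private subnet

    resource "aws_instance" "bastion" {
      ami           = var.ami_id
      instance_type = "t3.micro"
      subnet_id     = var.subnet_id

      user_data = <<-EOF
        #!/bin/bash
        curl -fsSL https://tailscale.com/install.sh | sh
        sysctl -w net.ipv4.ip_forward=1
        tailscale up --authkey=${var.tailscale_authkey} \
          --advertise-routes=10.0.0.0/16 --ssh
      EOF

      tags = { Name = "tailscale-bastion" }
    }

Once the advertised route is approved in the Tailscale admin console, engineers can reach private databases and legacy services at their VPC-internal addresses, with no per-service exposure.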

[Image: Flying with Super Speed in the New Bastion Tunnel]

The DevOps Odyssey, Part 4: Secrets, GitHub Auth, and Scaling Out

originally posted on LinkedIn on Aug 31, 2025

In Part 1, I bootstrapped a zero-click deployment pipeline on OCI with Terraform, Ansible, and Docker Compose — complete with HTTPS, DNS, and CI/CD.

Part 2 evolved that into a Kubernetes-native architecture, replacing Docker with K3s for a declarative control plane.

Part 3 brought in GitOps with Argo CD, letting the cluster manage itself from a single commit.

Now, in Part 4, I pushed the setup toward something that looks and feels much closer to production. Three key steps made that happen:

  1. Sealing secrets so I could finally commit them to Git safely.
  2. Adding GitHub authentication with Dex, making the Argo CD UI open (read-only) to anyone with a GitHub account.
  3. Expanding the cluster with a proper worker node — and replacing my ill-fated “master as NAT” shortcut with OCI’s managed NAT Gateway.
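On the third point, the change is pleasantly small in Terraform terms. A minimal sketch, with the compartment and VCN references assumed to exist elsewhere in the configuration and all names illustrative:

    # Sketch: a managed NAT Gateway replacing the "master as NAT" shortcut.
    resource "oci_core_nat_gateway" "nat" {
      compartment_id = var.compartment_ocid
      vcn_id         = oci_core_vcn.main.id # VCN defined elsewhere
      display_name   = "k3s-nat"
    }

    # The private (worker) subnet now routes egress through the gateway
    # instead of through the master node.
    resource "oci_core_route_table" "private" {
      compartment_id = var.compartment_ocid
      vcn_id         = oci_core_vcn.main.id

      route_rules {
        destination       = "0.0.0.0/0"
        destination_type  = "CIDR_BLOCK"
        network_entity_id = oci_core_nat_gateway.nat.id
      }
    }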

[Image: Autobot master cloned a worker self to prepare for the upcoming battle]

Reviving Doraemon: A Slack Bot’s Second Life in Kubernetes

originally posted on LinkedIn on Aug 16, 2025

[Image: Automation bots have evolved. What’s next?]

Some projects stick with you. For me, it was a little Slack bot I hacked together at a previous job—something that could talk to our infrastructure and give quick answers without switching tools. Then layoffs came, and I lost track of it; from what I later heard, it was never adopted. It felt like watching a small idea I cared about slowly disappear.

Fast-forward to Flagler. I mentioned the bot almost off‑hand, unsure if anyone would care. My boss immediately supported the idea, and that gave me the energy to bring it back. This post is about reviving that project—this time with intent, care, and a proper home in Kubernetes.

The DevOps Odyssey, Part 3: GitOps on K3s with Argo CD — Self-Managing Infrastructure from a Single Commit

originally posted on LinkedIn on July 31, 2025

In Part 1, we bootstrapped a zero-click deployment pipeline on OCI using Terraform, Ansible, and Docker Compose — complete with HTTPS, DNS, and CI/CD.

Part 2 evolved that foundation into a Kubernetes-native architecture, replacing Docker with K3s. That gave us a declarative control plane and a better foundation for future growth — without sacrificing simplicity or resource constraints.

Now, in Part 3, we finally bring in GitOps: managing the entire cluster from a Git repository using Argo CD. This marks the transition from automation to self-reconciliation — and sets the stage for horizontal scaling and federated identity in the next phase.
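The heart of that transition is a single root Application that points Argo CD at the repo. It is sketched here through Terraform's kubernetes provider for continuity with this series (in practice the YAML can be applied directly), and the repo URL and paths are placeholders:

    # Sketch: the root "app of apps" that lets the cluster manage itself from Git.
    resource "kubernetes_manifest" "root_app" {
      manifest = {
        apiVersion = "argoproj.io/v1alpha1"
        kind       = "Application"
        metadata   = { name = "root", namespace = "argocd" }
        spec = {
          project = "default"
          source = {
            repoURL        = "https://github.com/example/cluster-config" # placeholder
            targetRevision = "HEAD"
            path           = "apps"
          }
          destination = {
            server    = "https://kubernetes.default.svc"
            namespace = "argocd"
          }
          syncPolicy = {
            automated = { prune = true, selfHeal = true }
          }
        }
      }
    }

With prune and selfHeal enabled, drift is reverted automatically, which is exactly the self-reconciliation described above.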

[Image: Automation bots have evolved. What’s next?]

Replatforming Airbyte: From Developer Laptop to EKS

originally posted on LinkedIn on July 25, 2025

In early-stage engineering teams, it's natural for tools to start out simple — often running on a single developer machine, just to get things moving. That’s how our Airbyte setup began: quick to spin up, good enough for testing connectors, and easy to iterate on.

But as our team grew and data pipelines became more embedded in how we operated, we knew it was time to treat Airbyte like real infrastructure. That meant moving beyond local environments and into a scalable, secure, and repeatable deployment.

We migrated Airbyte OSS to Amazon EKS, using Helm and AWS-native services like S3 and IAM Roles for Service Accounts (IRSA). Our goal wasn’t to fix something broken, but to build on what was working and make it production-ready—without sacrificing developer velocity.

This post shares how we did it, what we learned, and what you might want to consider if you’re operationalizing Airbyte (or any similar open-source tool) in a small but growing cloud-native team.
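One piece worth showing is the IRSA wiring, since it is what removes static AWS keys from the pods. A minimal sketch, assuming the cluster's OIDC provider already exists; the role, namespace, and service-account names are illustrative assumptions:

    # Sketch: Airbyte's service account assumes an IAM role through the EKS
    # OIDC provider, so pods reach S3 without long-lived credentials.
    variable "oidc_provider_arn" {} # ARN of the cluster's IAM OIDC provider
    variable "oidc_provider_url" {} # issuer URL without the https:// scheme

    data "aws_iam_policy_document" "airbyte_assume" {
      statement {
        actions = ["sts:AssumeRoleWithWebIdentity"]
        principals {
          type        = "Federated"
          identifiers = [var.oidc_provider_arn]
        }
        condition {
          test     = "StringEquals"
          variable = "${var.oidc_provider_url}:sub"
          values   = ["system:serviceaccount:airbyte:airbyte-admin"] # illustrative
        }
      }
    }

    resource "aws_iam_role" "airbyte" {
      name               = "airbyte-irsa"
      assume_role_policy = data.aws_iam_policy_document.airbyte_assume.json
    }

An S3 access policy attached to that role, plus the matching eks.amazonaws.com/role-arn annotation on the service account, completes the loop.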

[Image: DevOps Clown sending laptop application to the Cloud]

The DevOps Odyssey Continues: Evolving from Docker to K3s with Ansible

originally posted on LinkedIn on July 25, 2025

In Part 1, I turned an OCI Free Tier VM into a fully automated, HTTPS-secured Docker host using Terraform, Ansible, Traefik, and GitHub Actions. That stack was great for monoliths or simple containers.

But containers want orchestration. And I want GitOps.

So this phase of the odyssey shifts gears: replacing Docker Compose with K3s — a lightweight Kubernetes distribution that fits beautifully in constrained environments like the OCI Free Tier.

The goal? A production-grade Kubernetes control plane, fully bootstrapped with Ansible, ready for GitOps.

[Image: Automation bots have evolved. What’s next?]

Swapping VPN for Tailscale: A Five-Day Internal Infra Upgrade

originally posted on LinkedIn on June 25, 2025

We recently started migrating away from our traditional VPN setup—and toward something simpler, faster, and cheaper: Tailscale.

This wasn’t a full rip-and-replace. In just five days, we moved a core set of internal Kubernetes services behind Tailscale, enough to start retiring our legacy VPN setup piece by piece.
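A big part of why this went quickly is that Tailscale's access rules are just data, so they can live in the same Terraform repo as everything else. A minimal sketch using the Tailscale Terraform provider, with groups and tags as illustrative assumptions:

    # Sketch: tailnet access policy as code. Users in group:eng can reach
    # services tagged k8s-internal on HTTPS, and nothing else.
    resource "tailscale_acl" "main" {
      acl = jsonencode({
        groups = {
          "group:eng" = ["alice@example.com", "bob@example.com"] # placeholders
        }
        acls = [
          {
            action = "accept"
            src    = ["group:eng"]
            dst    = ["tag:k8s-internal:443"]
          }
        ]
      })
    }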

The results?
✅ Smoother developer workflows
✅ Better access control
✅ Significant cost savings
✅ Self-serve onboarding
✅ Fewer support headaches

[Image: Enjoy Super Speeding in Private Network Tunnel]

The DevOps Odyssey: Fully Automating OCI App Deployment with Terraform, Ansible, and Docker

Introduction: The Engineer's Drive for Automation

As a DevOps engineer, I thrive on full‑stack automation—turning repetitive, error‑prone deployments into push‑button, ultra‑reliable workflows.
I recently challenged myself to get Job Winner, an open‑source full‑stack app (Spring Boot + React), live on Oracle Cloud Infrastructure (OCI) in under 15 minutes from a cold start.
But the real goal wasn't speed alone—it was idempotence: every run of the pipeline should converge the system to the exact same, secure, HTTPS‑enabled state without manual touch‑points.
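The Terraform leg of that pipeline is deliberately boring. A minimal sketch of the compute piece, with OCIDs supplied through variables and every name illustrative (VM.Standard.E2.1.Micro is one of OCI's always-free shapes):

    # Sketch: the Free Tier VM Terraform creates; Ansible then converges it
    # to the HTTPS-enabled application state on every run.
    resource "oci_core_instance" "app" {
      availability_domain = var.availability_domain
      compartment_id      = var.compartment_ocid
      shape               = "VM.Standard.E2.1.Micro"

      create_vnic_details {
        subnet_id        = var.public_subnet_ocid
        assign_public_ip = true
      }

      source_details {
        source_type = "image"
        source_id   = var.image_ocid # e.g. an Ubuntu LTS image OCID
      }

      metadata = {
        ssh_authorized_keys = var.ssh_public_key
      }
    }

Because nothing here depends on prior state, destroying and re-applying lands on the same machine definition every time, which is what makes the 15-minute cold start repeatable.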

[Image: OCI, Terraform, Ansible]

Goodbye Nginx, Hello Traefik! Effortless HTTPS with Let's Encrypt and Docker

If you've struggled with Nginx reverse proxy configs, certbot timers, and nginx -s reload, it's time to meet Traefik — a modern reverse proxy built for dynamic containerized environments.

Why Traefik over Nginx?

Unlike Nginx, which requires manual configuration updates and reloads, Traefik auto-discovers services via Docker labels, keeping your proxy config in sync with running containers. It also:

  • Automatically obtains and renews Let’s Encrypt certificates
  • Handles HTTP/HTTPS routing, path-based rules, load balancing, and more
  • Supports metrics, tracing, and even canary deployments with Traefik Enterprise

For small setups or demos, it’s a powerful, drop-in Nginx replacement — with less boilerplate.
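The label-driven discovery is easiest to see in code. In the post's setting these would be docker-compose labels; here the same labels are expressed through Terraform's Docker provider to match the rest of this series, with the hostname and resolver name as illustrative assumptions:

    # Sketch: Traefik routes to this container purely from its labels; no
    # proxy config file is edited and no reload is needed.
    resource "docker_container" "whoami" {
      name  = "whoami"
      image = "traefik/whoami"

      labels {
        label = "traefik.enable"
        value = "true"
      }
      labels {
        label = "traefik.http.routers.whoami.rule"
        value = "Host(`whoami.example.com`)" # illustrative hostname
      }
      labels {
        label = "traefik.http.routers.whoami.tls.certresolver"
        value = "letsencrypt" # must match a resolver defined on Traefik itself
      }
    }

Start the container and Traefik picks up the router, requests the certificate, and begins serving HTTPS; stop it and the route disappears.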

[Image: Traefik vs Nginx]

Building a Reusable Terraform Static Site Module with CloudFront, S3, and Route 53

Overview

A common need in modern cloud infrastructure is hosting static websites — whether it's marketing sites, documentation portals, or Single Page Applications (SPAs) built with React, Vue, or Svelte.

At first, the AWS building blocks for this are fairly simple:

  • S3 for object storage
  • CloudFront for CDN
  • ACM for HTTPS
  • Route 53 for DNS

But quickly, managing this setup by hand or duplicating configs across environments (prod, staging, QA) becomes painful:

  • Too many copy/paste Terraform files
  • Hard to apply consistent policies
  • Complicated upload management (some sites deploy via CI/CD, others are manual content sites)
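A reusable module collapses that duplication into one call per site. A sketch of what consuming it might look like; the module path and input names are illustrative, not the module's actual interface:

    # Sketch: one module invocation per site/environment.
    module "docs_site" {
      source = "./modules/static-site"

      domain_name    = "docs.example.com" # placeholder
      hosted_zone_id = var.hosted_zone_id

      # Some sites deploy via CI/CD, others are manual content sites.
      enable_ci_uploads = true
    }

Staging and QA become copies of this block with different inputs, and policy changes happen once, inside the module.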

[Image: Terraform Static Site Module]