# OCI Compute Provisioning with Terraform + Ansible Automation
This project provisions Oracle Cloud Infrastructure (OCI) compute instances using Terraform, and configures them with Ansible. It provides a reproducible infrastructure-as-code setup for deploying and preparing lightweight OCI hosts, including K3s Kubernetes clusters.
## What This Project Does
- Creates virtual machines (VMs) on OCI
- Provisions related OCI networking components
- Bootstraps instances using Ansible or cloud-init
- Deploys K3s Kubernetes clusters with master and worker nodes
## Project Structure

```
├── ansible
│   ├── playbooks
│   │   ├── deploy-jobwinner.yml
│   │   ├── setup-docker.yml
│   │   └── setup-k3s.yml
│   └── roles
│       ├── common
│       ├── deploy-jobwinner
│       ├── docker
│       └── k3s
├── modules
│   ├── k3s-worker
│   │   ├── cloud-init.sh.tftpl
│   │   ├── main.tf
│   │   ├── output.tf
│   │   ├── README.md
│   │   └── variables.tf
│   ├── lb
│   └── web-server
│       ├── cloud-init.sh.tftpl
│       ├── main.tf
│       ├── output.tf
│       ├── README.md
│       └── variables.tf
├── k3s-master.tf
├── k3s-workers.tf
├── lb-for-k3s.tf
├── job-winner.tf
└── README.md
```
## Prerequisites
## How to Use
### 1. Provision OCI Compute with Terraform

From the project root:

```sh
terraform init
terraform apply
```
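Before `terraform apply` succeeds, the OCI provider needs credentials and target OCIDs, typically supplied through a `terraform.tfvars` file. A minimal sketch — the variable names below are assumptions, so match them against this repo's `variables.tf`:

```hcl
# terraform.tfvars -- illustrative variable names and placeholder values only.
tenancy_ocid     = "ocid1.tenancy.oc1..example"
user_ocid        = "ocid1.user.oc1..example"
fingerprint      = "aa:bb:cc:dd:ee:ff:00:11:22:33:44:55:66:77:88:99"
private_key_path = "~/.oci/oci_api_key.pem"
region           = "eu-frankfurt-1"
compartment_ocid = "ocid1.compartment.oc1..example"
```

These values come from the OCI console (tenancy and user OCIDs, API key fingerprint) plus a locally generated API signing key.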
### 2. Configure the Host

- Set up Docker with Ansible
- Set up the K3s master node with Ansible
- Deploy K3s worker nodes automatically with cloud-init
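The Ansible steps above run against an inventory that targets the Terraform-created host. A minimal sketch, assuming the public IP comes from `terraform output` and the image's default user is `ubuntu` (IP, user, and key path are all placeholders):

```ini
; inventory.ini -- placeholder host entry for the K3s master
[k3s_master]
203.0.113.10 ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/id_rsa
```

With an inventory in place, the playbooks from the tree can be run in order, e.g. `ansible-playbook -i inventory.ini ansible/playbooks/setup-docker.yml` followed by `ansible/playbooks/setup-k3s.yml`.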
## Current Capabilities

- Terraform module for creating OCI compute instances with public IPs
- Automatic firewall and SSH access configuration
- Ansible workflow to install Docker and its dependencies
- Ansible workflow to install K3s and configure a Kubernetes master node
- Automated K3s worker node deployment using cloud-init
- Dedicated K3s worker module
- Automatic cluster joining without manual intervention
- Cleanly organized roles for reusability
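The dedicated worker module is typically instantiated from the root configuration roughly like this. The input names here are illustrative, not the module's actual interface — see `modules/k3s-worker/variables.tf` for the real one:

```hcl
# Illustrative sketch only -- actual inputs live in modules/k3s-worker/variables.tf.
module "k3s_worker_1" {
  source           = "./modules/k3s-worker"
  compartment_ocid = var.compartment_ocid
  subnet_id        = oci_core_subnet.private.id   # private subnet: no public IP
  master_ip        = module.k3s_master.private_ip # used by cloud-init to join
  ssh_public_key   = var.ssh_public_key
}
```

Adding workers then amounts to adding module blocks (or a `count`/`for_each`) rather than duplicating resource definitions.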
## K3s Cluster Setup

### Master Node

- Deployed via `k3s-master.tf`
- Configured with Ansible for a full K3s server setup
- Public IP for external access and management
### Worker Nodes

- Deployed via `k3s-workers.tf`
- Set up automatically with cloud-init (no Ansible required)
- No public IPs, for enhanced security
- Join the cluster automatically by retrieving the node token from the master
- Share network infrastructure with the master node
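At its core, the worker's `cloud-init.sh.tftpl` boils down to running the upstream K3s installer in agent mode against the master. The template variable names below (`master_ip`, `k3s_token`) are assumptions, and the real template obtains the node token from the master at boot rather than receiving it as a literal:

```sh
#!/bin/bash
# Sketch of a K3s worker join script -- ${master_ip} and ${k3s_token} are
# Terraform templatefile() placeholders, not literal shell variables.

# The official K3s installer reads K3S_URL and K3S_TOKEN from the environment;
# when K3S_URL is set, it installs k3s in agent (worker) mode and joins the cluster.
curl -sfL https://get.k3s.io | \
  K3S_URL="https://${master_ip}:6443" \
  K3S_TOKEN="${k3s_token}" sh -
```

Because this runs at first boot via cloud-init, no inbound SSH or Ansible access to the workers is needed, which is what allows them to live on a private subnet without public IPs.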
### Load Balancer

- Optional load balancer configuration in `lb-for-k3s.tf`
- Routes traffic to worker nodes for high availability
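For reference, the OCI Terraform provider models this with `oci_load_balancer_*` resources. A rough, illustrative shape — not the exact contents of `lb-for-k3s.tf`:

```hcl
# Illustrative sketch only -- see lb-for-k3s.tf for the actual definitions.
resource "oci_load_balancer_load_balancer" "k3s" {
  compartment_id = var.compartment_ocid
  display_name   = "k3s-lb"
  shape          = "flexible"
  subnet_ids     = [oci_core_subnet.public.id] # LB sits in the public subnet

  shape_details {
    minimum_bandwidth_in_mbps = 10
    maximum_bandwidth_in_mbps = 10
  }
}
```

A backend set and listener would then point at the private worker IPs, giving the otherwise unreachable workers a single public entry point.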
This setup is ideal for spinning up quick dev/test environments or bootstrap nodes with Docker or Kubernetes, and is easily extendable to other configurations. The K3s workers provide a scalable, secure Kubernetes cluster suitable for production workloads.