Purpose
Whenever you want to experiment with Kubernetes, building the cluster itself eats up time before you can even start testing. This article removes that bottleneck by automating cluster creation with Terraform and Ansible.
Structure
Directory structure
- main.tf
- provider.tf
- k3s-ansible
  - ansible.cfg
  - site.yml
  - inventory
    - my-cluster
      - hosts.ini
      - group_vars
        - all.yml
Terraform configuration
provider.tf
The VMs are provisioned onto a KVM host, so we use the libvirt provider.
provider "libvirt" {
  uri = "qemu+ssh://shoma@172.24.20.3/system"
}

resource "libvirt_volume" "os-image" {
  count  = 5
  name   = format("terraform-vm%02d.qcow2", count.index + 1)
  source = "https://cloud-images.ubuntu.com/releases/23.04/release-20230714/ubuntu-23.04-server-cloudimg-amd64.img"
  format = "qcow2"
}
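For `terraform init` to succeed, Terraform also has to be told where the libvirt provider comes from; it is a community provider published as dmacvicar/libvirt. A minimal sketch (the filename and version constraint here are assumptions, not part of the original setup):

```hcl
# versions.tf (hypothetical filename) — declares the source of the
# community libvirt provider so `terraform init` can download it.
terraform {
  required_providers {
    libvirt = {
      source  = "dmacvicar/libvirt"
      version = ">= 0.7.0" # assumption: any recent release should work
    }
  }
}
```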
resource "libvirt_cloudinit_disk" "cloudinit_terraform-vm" {
  count = 5
  name  = format("terraform-vm%02d.iso", count.index + 1)
  pool  = "default"
  user_data = <<EOF
#cloud-config
hostname: "${format("terraform-vm%02d", count.index + 1)}"
user: tmcit
password: tmcit
chpasswd: { expire: False }
ssh_pwauth: True
EOF
  network_config = <<EOF
version: 2
ethernets:
  ens3:
    addresses:
      - "${format("172.24.20.20%d/24", count.index + 1)}"
    gateway4: 172.24.20.254
    nameservers:
      addresses: [172.24.2.51]
EOF
}
resource "libvirt_domain" "terraform-vm" {
  count  = 5
  name   = format("ubuntu-terraform%02d", count.index + 1)
  memory = 2048
  vcpu   = 2

  disk { volume_id = libvirt_volume.os-image[count.index].id }
  cloudinit = libvirt_cloudinit_disk.cloudinit_terraform-vm[count.index].id

  cpu {
    mode = "host-passthrough"
  }
  network_interface {
    bridge = "bridge1"
  }
  console {
    type        = "pty"
    target_port = "0"
  }
  graphics {
    type        = "vnc"
    listen_type = "address"
    autoport    = "true"
  }
}
resource "null_resource" "ssh-keydelete" {
  count = 5
  triggers = {
    vm_id = libvirt_domain.terraform-vm[count.index].id
  }
  provisioner "local-exec" {
    command = <<COMMAND
sleep 45
ssh-keygen -R ${format("172.24.20.20%d", count.index + 1)}
COMMAND
  }
}

resource "null_resource" "ssh_key" {
  count = 5
  triggers = {
    vm_id = libvirt_domain.terraform-vm[count.index].id
  }
  provisioner "local-exec" {
    command = <<COMMAND
sleep 60
sshpass -p tmcit ssh-copy-id -o StrictHostKeyChecking=no -i ~/.ssh/id_rsa.pub tmcit@${format("172.24.20.20%d", count.index + 1)}
COMMAND
  }
}
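The fixed `sleep` delays are a guess at how long the VMs take to boot; if a VM is slow, the provisioners race ahead and fail. A more robust pattern (a sketch, not the author's setup) is to let Terraform itself wait until SSH answers, using a remote-exec provisioner:

```hcl
# Hypothetical alternative to the fixed sleeps: Terraform retries the SSH
# connection until the VM accepts it, then runs the trivial inline command.
resource "null_resource" "wait_for_ssh" {
  count = 5
  triggers = {
    vm_id = libvirt_domain.terraform-vm[count.index].id
  }
  provisioner "remote-exec" {
    inline = ["echo ready"]
    connection {
      type     = "ssh"
      host     = format("172.24.20.20%d", count.index + 1)
      user     = "tmcit"
      password = "tmcit"
    }
  }
}
```

Downstream resources can then reference `null_resource.wait_for_ssh` in their triggers instead of sleeping.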
resource "null_resource" "run_ansible" {
  count = length(libvirt_domain.terraform-vm)
  triggers = {
    vm_id = libvirt_domain.terraform-vm[count.index].id
  }
  provisioner "local-exec" {
    command = <<COMMAND
sleep 75
cd k3s-ansible
ansible-playbook site.yml
COMMAND
  }
}
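The IP addresses appear twice: once in the Terraform code and once in the Ansible inventory below, and the two must be kept in sync by hand. One way to avoid that duplication (a sketch, assuming the hashicorp/local provider; not part of the original setup) is to have Terraform render the inventory file from the same address scheme the VMs use:

```hcl
# Hypothetical: generate hosts.ini from the same format() scheme as the
# VMs, so the Terraform code and the Ansible inventory never drift apart.
resource "local_file" "ansible_inventory" {
  filename = "k3s-ansible/inventory/my-cluster/hosts.ini"
  content  = <<EOF
[master]
172.24.20.201

[node]
%{for i in range(2, 6)~}
${format("172.24.20.20%d", i)}
%{endfor~}

[k3s_cluster:children]
master
node
EOF
}
```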
Ansible configuration
k3s-ansible/inventory/my-cluster/hosts.ini
[master]
172.24.20.201
[node]
172.24.20.202
172.24.20.203
172.24.20.204
172.24.20.205
[k3s_cluster:children]
master
node
k3s-ansible/site.yml
---
- hosts: k3s_cluster
  gather_facts: yes
  become: yes
  roles:
    - role: prereq
    - role: download
    - role: raspberrypi

- hosts: master
  become: yes
  roles:
    - role: k3s/master

- hosts: node
  become: yes
  roles:
    - role: k3s/node
k3s-ansible/ansible.cfg
Only the inventory setting needs to change from the upstream default, to point at the hosts.ini above.
[defaults]
nocows = True
roles_path = ./roles
inventory = inventory/my-cluster/hosts.ini
remote_tmp = $HOME/.ansible/tmp
local_tmp = $HOME/.ansible/tmp
pipelining = True
become = True
host_key_checking = False
deprecation_warnings = False
callback_whitelist = profile_tasks
Verification
$ terraform apply
$ ssh tmcit@172.24.20.201
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
terraform-vm05 Ready <none> 57s v1.22.3+k3s1
terraform-vm04 Ready <none> 57s v1.22.3+k3s1
terraform-vm02 Ready <none> 57s v1.22.3+k3s1
terraform-vm03 Ready <none> 57s v1.22.3+k3s1
terraform-vm01 Ready control-plane,master 106s v1.22.3+k3s1