
Building a Kubernetes Environment with k0s


Introduction

I have struggled many times with setting up Kubernetes experiment environments, but k0s finally gave me a setup I can freely build and tear down, so I am writing this down as a memo.

Environment

  • buildserver
    • Hardware : Raspberry Pi 3B+
    • OS : Raspbian GNU/Linux 11 (bullseye)
    • Memory : 1 GB
    • SD card : 16 GB
  • masternode
    • Hardware : ESPRIMO Q7010/E
    • OS : Ubuntu 22.04
    • Memory : 16 GB
    • SSD : 256 GB
  • workernode01
    • Hardware : ThinkCentre m720q
    • OS : Ubuntu 22.04
    • Memory : 8 GB
    • SSD : 128 GB

The environment diagram is as follows.

masternode and workernode01 dual-boot Windows 11 and Ubuntu 22.04, so the disk space actually available for k8s was fairly limited.

Setup Procedure

Installing required packages / applying required settings

Make sure the following packages and settings are in place on buildserver, masternode, and workernode01.
Installing vim is optional.

# apt install vim
# apt install ssh
# passwd root
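
The ssh service is normally enabled automatically when the package is installed; as a quick sanity check that sshd is actually up, the following should report active:

# systemctl is-active ssh
active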

Changing configuration files

Change the following setting on buildserver.

# vim /etc/hosts
127.0.1.1      buildserver
192.168.0.11   masternode
192.168.0.12   workernode01
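
To make sure the new entries actually resolve, a quick check such as the following should succeed:

# ping -c 1 masternode
# ping -c 1 workernode01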

SSH configuration & key generation

Set PermitRootLogin to yes on the nodes that will receive the public key (masternode and workernode01); this is needed so the key can be transferred with a root password login.

# vim /etc/ssh/sshd_config
PermitRootLogin yes
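
After editing, restart sshd so the change takes effect (on Ubuntu 22.04 the service unit is named ssh):

# systemctl restart ssh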

Next, generate a key pair on buildserver.
I pressed Enter on an empty line at every prompt during key generation.

# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX root@buildserver
The key's randomart image is:
+---[RSA 3072]----+
(omitted)
+----[SHA256]-----+

Run the following on buildserver to copy the generated key to the nodes where k8s will be deployed (masternode and workernode01).
After copying, confirm that you can SSH in with the key (you should be able to log in without entering a password).

# ssh-copy-id root@masternode
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@masternode's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@masternode'"
and check to make sure that only the key(s) you wanted were added.

# ssh-copy-id root@workernode01
(omitted)
# ssh root@masternode
Welcome to Ubuntu 22.04.3 LTS (GNU/Linux 6.2.0-39-generic x86_64)
(omitted)
root@masternode:~# 
root@masternode:~# exit

At this point it should probably be fine to change sshd's PermitRootLogin back to prohibit-password, but I have not verified this.
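
If you do want to tighten it back up, reverting would just be the same edit in reverse plus an sshd restart (again, untested here):

# vim /etc/ssh/sshd_config
PermitRootLogin prohibit-password
# systemctl restart ssh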

Installing k0s

Install it following the steps below.

# VER=$(curl -s https://api.github.com/repos/k0sproject/k0sctl/releases/latest|grep tag_name | cut -d '"' -f 4)
# echo $VER
v0.16.0
# wget https://github.com/k0sproject/k0sctl/releases/download/${VER}/k0sctl-linux-arm -O k0sctl
(omitted)
k0sctl              100%[===================>]  15.81M  6.82MB/s    in 2.3s

2023-12-11 20:20:20 (6.82 MB/s) - ‘k0sctl’ saved [16580608/16580608]
# chmod +x k0sctl
# cp k0sctl /usr/local/bin/
# k0sctl version
version: v0.16.0
commit: 7e8c272
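
One note on the download: the -arm binary matches buildserver, which runs 32-bit Raspberry Pi OS; on a different machine, check the architecture first and pick the matching release asset. A Pi 3B+ on 32-bit Raspbian should report armv7l:

# uname -m
armv7l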

Next, create the configuration file that k0sctl reads when building the k8s environment.

# cat k0sctl.yaml
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
spec:
  hosts:
  - ssh:
      address: masternode
      user: root
      port: 22
      keyPath: ~/.ssh/id_rsa
    role: controller+worker
  - ssh:
      address: workernode01
      user: root
      port: 22
      keyPath: ~/.ssh/id_rsa
    role: worker
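
For reference, k0sctl can also print a starter template with k0sctl init, which you can redirect to a file and edit down to the hosts above (I wrote the file by hand this time):

# k0sctl init > k0sctl.yaml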

Start the build using the configuration file above.
It completed in about a minute, so a simple layout can be built very quickly.

# k0sctl apply --config k0sctl.yaml

(banner omitted)
k0sctl v0.16.0 Copyright 2023, k0sctl authors.
Anonymized telemetry of usage will be sent to the authors.
By continuing to use k0sctl you agree to these terms:
https://k0sproject.io/licenses/eula
INFO ==> Running phase: Connect to hosts
INFO [ssh] workernode01:22: connected
INFO [ssh] masternode:22: connected
INFO ==> Running phase: Detect host operating systems
INFO [ssh] workernode01:22: is running Ubuntu 22.04.3 LTS
INFO [ssh] masternode:22: is running Ubuntu 22.04.3 LTS
INFO ==> Running phase: Acquire exclusive host lock
INFO ==> Running phase: Prepare hosts
INFO ==> Running phase: Gather host facts
INFO [ssh] masternode:22: using masternode as hostname
INFO [ssh] workernode01:22: using workernode01 as hostname
INFO [ssh] masternode:22: discovered enx04ab182bc1b9 as private interface
INFO [ssh] workernode01:22: discovered eno1 as private interface
INFO [ssh] masternode:22: discovered 192.168.0.11 as private address
INFO [ssh] workernode01:22: discovered 192.168.0.12 as private address
INFO ==> Running phase: Validate hosts
INFO ==> Running phase: Gather k0s facts
INFO ==> Running phase: Validate facts
INFO ==> Running phase: Configure k0s
WARN [ssh] masternode:22: generating default configuration
INFO [ssh] masternode:22: validating configuration
INFO [ssh] masternode:22: configuration was changed, installing new configuration
INFO ==> Running phase: Initialize the k0s cluster
INFO [ssh] masternode:22: installing k0s controller
INFO [ssh] masternode:22: waiting for the k0s service to start
INFO [ssh] masternode:22: waiting for kubernetes api to respond
INFO ==> Running phase: Install workers
INFO [ssh] workernode01:22: validating api connection to https://192.168.0.11:6443
INFO [ssh] masternode:22: generating token
INFO [ssh] workernode01:22: writing join token
INFO [ssh] workernode01:22: installing k0s worker
INFO [ssh] workernode01:22: starting service
INFO [ssh] workernode01:22: waiting for node to become ready
INFO ==> Running phase: Release exclusive host lock
INFO ==> Running phase: Disconnect from hosts
INFO ==> Finished in 52s
INFO k0s cluster version v1.28.4+k0s.0 is now installed

Connect to masternode and run the following command to check that everything is working.
When using kubectl, run it through k0s, i.e. prefix it as k0s kubectl.

# k0s kubectl get pods -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
coredns-85df575cdb-ctz26          1/1     Running   0          1h
coredns-85df575cdb-k8k2c          1/1     Running   0          1h
konnectivity-agent-2f4ts          1/1     Running   0          1h
konnectivity-agent-zvm97          1/1     Running   0          1h
kube-proxy-czvwf                  1/1     Running   0          1h
kube-proxy-j9b8v                  1/1     Running   0          1h
kube-router-2fd5g                 1/1     Running   0          1h
kube-router-scmfv                 1/1     Running   0          1h
metrics-server-7556957bb7-lclh8   1/1     Running   0          1h
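
If you would rather run kubectl from buildserver without the k0s prefix, k0sctl can also export a kubeconfig for the cluster (this assumes kubectl is installed on buildserver; I did not go that route here):

# k0sctl kubeconfig --config k0sctl.yaml > kubeconfig
# KUBECONFIG=$PWD/kubeconfig kubectl get nodes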

Destroying the environment

The environment can be destroyed with the following command.
In this environment the reset finished in about 15 seconds.

# k0sctl reset
k0sctl v0.16.0 Copyright 2023, k0sctl authors.
Anonymized telemetry of usage will be sent to the authors.
By continuing to use k0sctl you agree to these terms:
https://k0sproject.io/licenses/eula
? Going to reset all of the hosts, which will destroy all configuration and data, Are you sure? Yes
INFO ==> Running phase: Connect to hosts
INFO [ssh] masternode:22: connected
INFO [ssh] workernode01:22: connected
(omitted)
INFO ==> Running phase: Reset workers
INFO [ssh] workernode01:22: reset
INFO ==> Running phase: Reset controllers
INFO [ssh] masternode:22: reset
INFO ==> Running phase: Reset leader
INFO [ssh] masternode:22: reset
INFO ==> Running phase: Release exclusive host lock
INFO ==> Running phase: Disconnect from hosts
INFO ==> Finished in 15s

This probably depends on which parts of the configuration are changed, but when I edited k0sctl.yaml before running reset, it produced the following error:

FATA reset failed - log file saved to /root/.cache/k0sctl/k0sctl.log: failed on 1 hosts:
 - [ssh] masternode:22: [ssh] masternode:22: is configured as k0s controller+worker but is already running as controller - role change is not supported

Because of this, it is safer to run reset properly before changing the file.
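
In other words, the safe order when changing the layout is roughly:

# k0sctl reset --config k0sctl.yaml
# vim k0sctl.yaml
# k0sctl apply --config k0sctl.yaml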

Final Thoughts

Since building and destroying the environment takes so little time,
I think it will be easy to experiment by tweaking k0sctl.yaml to change the master/worker layout or to try additional features (such as Calico); a sketch of such a change follows below.
I actually wanted to build with a Raspberry Pi as workernode02, but it failed with errors.
There were descriptions suggesting that Raspberry Pi needs extra packages and some configuration tweaks (there appear to be relevant notes here and here), so I would like to try that next.
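
For example, adding a second worker should just mean one more entry in the hosts list of k0sctl.yaml (the workernode02 hostname below is hypothetical, mirroring the existing entries):

  - ssh:
      address: workernode02
      user: root
      port: 22
      keyPath: ~/.ssh/id_rsa
    role: worker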

Troubleshooting

k0s doesn't appear to be running but has been installed as a service

This error occurred on buildserver when running reset after an apply had failed partway through.

FATA reset failed - log file saved to /root/.cache/k0sctl/k0sctl.log: failed on 1 hosts:
 - [ssh] workernode01:22: k0s doesn't appear to be running but has been installed as a service at /etc/systemd/system/k0sworker.service - please remove it or start the service

As the message says, I logged into the affected node (workernode01), ran the following commands, then ran reset again, and the problem was resolved.

# systemctl stop k0sworker
# systemctl disable k0sworker

When the same thing happened on masternode, the error looked like the following.
In that case it is resolved with systemctl stop k0scontroller and systemctl disable k0scontroller.

FATA apply failed - log file saved to /root/.cache/k0sctl/k0sctl.log: failed on 1 hosts:
 - [ssh] masternode:22: k0s doesn't appear to be running but has been installed as a service at /etc/systemd/system/k0scontroller.service - please remove it or start the service
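
For completeness, the controller-side equivalent on masternode is:

# systemctl stop k0scontroller
# systemctl disable k0scontroller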
