Building a Kubernetes (v1.28) environment on Ubuntu 22.04 on EC2

Contents of this article

1. Introduction
2. Prerequisites
3. Building the Control Plane
4. Building the Worker Node
5. Reference links
6. Issues encountered during the build

1. Introduction

This is a record of building a hands-on Kubernetes environment for self-study.
Using a managed Kubernetes service from a cloud vendor would normally be more efficient, but since the goal is to learn how Kubernetes works, the cluster is deliberately built from scratch on EC2.

2. Prerequisites

The components and their types/versions used for this build are as follows.

Component: Details (type/version)
  • Environment: AWS EC2
  • Instance type: t3.small (2 vCPU / 2 GiB memory)
  • OS: Ubuntu Server 22.04 LTS
  • AMI name: ubuntu-jammy-22.04-amd64-server-20230516
  • AMI ID: ami-0d52744d6551d851e
  • EBS volume: 10 GiB
  • Container engine: containerd v1.7.3
  • Low-level container runtime: runc 1.1.8
  • Container orchestration: Kubernetes v1.28.2
  • Virtual network (inter-container communication): Flannel v0.22.3

3. Building the Control Plane

3-1. Creating the EC2 instance from the AWS Management Console

  • Launch an EC2 instance with the configuration listed in "2. Prerequisites".
  • Place the instance in a public subnet with internet access.
  • The detailed EC2 creation steps are out of scope for this article; for reference, see the CLI sketch below.
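For reference only, roughly the same instance could be launched with the AWS CLI. This is an untested sketch; the key pair name, subnet ID, and security group ID are placeholders to replace with your own values.

aws ec2 run-instances \
  --image-id ami-0d52744d6551d851e \
  --instance-type t3.small \
  --key-name <your-key-pair> \
  --subnet-id <public-subnet-id> \
  --security-group-ids <sg-id> \
  --associate-public-ip-address \
  --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":10}}]'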

3-2. Log in to the EC2 instance and set up Kubernetes

  • After logging in to the OS, all work is performed as the root user.

In a production environment you would need to design privileges to match your security requirements, but since this is a development environment for self-study, every step here is executed as root.

Changing the OS hostname

Command to run
hostnamectl set-hostname <new hostname>
Command output
root@ip-10-0-0-105:~# hostnamectl set-hostname k8s-control-plane
root@ip-10-0-0-105:~# 

Adding the host itself to /etc/hosts

Command to run
vim /etc/hosts
Command output
root@ip-10-0-0-105:~# vim /etc/hosts
root@ip-10-0-0-105:~# cat /etc/hosts | grep k8s
10.0.0.105 k8s-control-plane
root@ip-10-0-0-105:~# 

Rebooting the OS

Command to run
shutdown -r now
Command output
root@ip-10-0-0-105:~# shutdown -r now 
Connection to x.x.x.x closed by remote host.
Connection to x.x.x.x closed.
localMac:~ root#

Installing containerd

Commands to run (one line at a time)
CONTAINERD_VERSION=1.7.3
mkdir -p /usr/local/src
wget -P /usr/local/src https://github.com/containerd/containerd/releases/download/v${CONTAINERD_VERSION}/containerd-${CONTAINERD_VERSION}-linux-amd64.tar.gz
tar -C /usr/local -xf /usr/local/src/containerd-${CONTAINERD_VERSION}-linux-amd64.tar.gz
wget -P /etc/systemd/system https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
systemctl daemon-reload
systemctl enable --now containerd
Command output
root@k8s-control-plane:~# CONTAINERD_VERSION=1.7.3
root@k8s-control-plane:~# 
root@k8s-control-plane:~# mkdir -p /usr/local/src
root@k8s-control-plane:~# 
root@k8s-control-plane:~# wget -P /usr/local/src https://github.com/containerd/containerd/releases/download/v${CONTAINERD_VERSION}/containerd-${CONTAINERD_VERSION}-linux-amd64.tar.gz
--2023-09-30 14:07:10--  https://github.com/containerd/containerd/releases/download/v1.7.3/containerd-1.7.3-linux-amd64.tar.gz
Resolving github.com (github.com)... 20.27.177.113
Connecting to github.com (github.com)|20.27.177.113|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://objects.githubusercontent.com/github-production-release-asset-2e65be/46089560/c5074d7b-7021-4549-b3af-ef5728245812?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20230930%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230930T140711Z&X-Amz-Expires=300&X-Amz-Signature=1d57d413832500f7855d0e16d6724d297a90fe97440cd49669f15d7b2d8b2f85&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=46089560&response-content-disposition=attachment%3B%20filename%3Dcontainerd-1.7.3-linux-amd64.tar.gz&response-content-type=application%2Foctet-stream [following]
--2023-09-30 14:07:11--  https://objects.githubusercontent.com/github-production-release-asset-2e65be/46089560/c5074d7b-7021-4549-b3af-ef5728245812?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20230930%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230930T140711Z&X-Amz-Expires=300&X-Amz-Signature=1d57d413832500f7855d0e16d6724d297a90fe97440cd49669f15d7b2d8b2f85&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=46089560&response-content-disposition=attachment%3B%20filename%3Dcontainerd-1.7.3-linux-amd64.tar.gz&response-content-type=application%2Foctet-stream
Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.109.133, 185.199.110.133, 185.199.111.133, ...
Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.109.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 46839131 (45M) [application/octet-stream]
Saving to: ‘/usr/local/src/containerd-1.7.3-linux-amd64.tar.gz’

containerd-1.7.3-linux-amd64. 100%[================================================>]  44.67M  26.4MB/s    in 1.7s    

2023-09-30 14:07:13 (26.4 MB/s) - ‘/usr/local/src/containerd-1.7.3-linux-amd64.tar.gz’ saved [46839131/46839131]

root@k8s-control-plane:~# 
root@k8s-control-plane:~# tar -C /usr/local -xf /usr/local/src/containerd-${CONTAINERD_VERSION}-linux-amd64.tar.gz
root@k8s-control-plane:~# 
root@k8s-control-plane:~# 
root@k8s-control-plane:~# wget -P /etc/systemd/system https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
--2023-09-30 14:07:54--  https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.111.133, 185.199.108.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1393 (1.4K) [text/plain]
Saving to: ‘/etc/systemd/system/containerd.service’

containerd.service            100%[================================================>]   1.36K  --.-KB/s    in 0s      

2023-09-30 14:07:54 (12.4 MB/s) - ‘/etc/systemd/system/containerd.service’ saved [1393/1393]

root@k8s-control-plane:~# 
root@k8s-control-plane:~# systemctl daemon-reload
root@k8s-control-plane:~# 
root@k8s-control-plane:~# systemctl enable --now containerd
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /etc/systemd/system/containerd.service.
root@k8s-control-plane:~# 
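Not part of the original log, but as a quick sanity check you can confirm the installed version and that the service is running:

containerd --version
systemctl is-active containerd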

Installing runc

Commands to run (one line at a time)
RUNC_VERSION=1.1.8
wget -O /usr/local/sbin/runc https://github.com/opencontainers/runc/releases/download/v${RUNC_VERSION}/runc.amd64
chmod +x /usr/local/sbin/runc
Command output
root@k8s-control-plane:~# RUNC_VERSION=1.1.8
root@k8s-control-plane:~# wget -O /usr/local/sbin/runc https://github.com/opencontainers/runc/releases/download/v${RUNC_VERSION}/runc.amd64
--2023-09-30 14:08:20--  https://github.com/opencontainers/runc/releases/download/v1.1.8/runc.amd64
Resolving github.com (github.com)... 20.27.177.113
Connecting to github.com (github.com)|20.27.177.113|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://objects.githubusercontent.com/github-production-release-asset-2e65be/36960321/789db355-a93d-45b3-af29-d0f5f2196ab9?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20230930%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230930T140820Z&X-Amz-Expires=300&X-Amz-Signature=ed74c9a36f3f47ce7c9c961a71b415275ad023578f17ca29728315f104b68030&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=36960321&response-content-disposition=attachment%3B%20filename%3Drunc.amd64&response-content-type=application%2Foctet-stream [following]
--2023-09-30 14:08:20--  https://objects.githubusercontent.com/github-production-release-asset-2e65be/36960321/789db355-a93d-45b3-af29-d0f5f2196ab9?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20230930%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230930T140820Z&X-Amz-Expires=300&X-Amz-Signature=ed74c9a36f3f47ce7c9c961a71b415275ad023578f17ca29728315f104b68030&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=36960321&response-content-disposition=attachment%3B%20filename%3Drunc.amd64&response-content-type=application%2Foctet-stream
Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.111.133, 185.199.109.133, 185.199.108.133, ...
Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.111.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 10684992 (10M) [application/octet-stream]
Saving to: ‘/usr/local/sbin/runc’

/usr/local/sbin/runc          100%[================================================>]  10.19M  29.4MB/s    in 0.3s    

2023-09-30 14:08:21 (29.4 MB/s) - ‘/usr/local/sbin/runc’ saved [10684992/10684992]

root@k8s-control-plane:~# 
root@k8s-control-plane:~# chmod +x /usr/local/sbin/runc
root@k8s-control-plane:~# 
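As an optional check (not in the original log), the installed runc version can be confirmed with:

runc --version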

Installing the CNI plugins

Commands to run (one line at a time)
CNI_VERSION=1.3.0
wget -P /usr/local/src https://github.com/containernetworking/plugins/releases/download/v${CNI_VERSION}/cni-plugins-linux-amd64-v${CNI_VERSION}.tgz
mkdir -p /opt/cni/bin
tar -C /opt/cni/bin -xf /usr/local/src/cni-plugins-linux-amd64-v${CNI_VERSION}.tgz
Command output
root@k8s-control-plane:~# CNI_VERSION=1.3.0
root@k8s-control-plane:~# wget -P /usr/local/src https://github.com/containernetworking/plugins/releases/download/v${CNI_VERSION}/cni-plugins-linux-amd64-v${CNI_VERSION}.tgz
--2023-09-30 14:08:34--  https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz
Resolving github.com (github.com)... 20.27.177.113
Connecting to github.com (github.com)|20.27.177.113|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://objects.githubusercontent.com/github-production-release-asset-2e65be/84575398/d1ad8456-0aa1-4bb9-84e3-4e03286b4e9f?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20230930%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230930T140834Z&X-Amz-Expires=300&X-Amz-Signature=4e1b842bb4a28f885d845f2e207e7420df897b5770eb52e591a2a36692abd475&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=84575398&response-content-disposition=attachment%3B%20filename%3Dcni-plugins-linux-amd64-v1.3.0.tgz&response-content-type=application%2Foctet-stream [following]
--2023-09-30 14:08:34--  https://objects.githubusercontent.com/github-production-release-asset-2e65be/84575398/d1ad8456-0aa1-4bb9-84e3-4e03286b4e9f?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20230930%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230930T140834Z&X-Amz-Expires=300&X-Amz-Signature=4e1b842bb4a28f885d845f2e207e7420df897b5770eb52e591a2a36692abd475&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=84575398&response-content-disposition=attachment%3B%20filename%3Dcni-plugins-linux-amd64-v1.3.0.tgz&response-content-type=application%2Foctet-stream
Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.110.133, 185.199.108.133, 185.199.109.133, ...
Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 45338194 (43M) [application/octet-stream]
Saving to: ‘/usr/local/src/cni-plugins-linux-amd64-v1.3.0.tgz’

cni-plugins-linux-amd64-v1.3. 100%[================================================>]  43.24M   281MB/s    in 0.2s    

2023-09-30 14:08:35 (281 MB/s) - ‘/usr/local/src/cni-plugins-linux-amd64-v1.3.0.tgz’ saved [45338194/45338194]

root@k8s-control-plane:~# 
root@k8s-control-plane:~# mkdir -p /opt/cni/bin
root@k8s-control-plane:~# 
root@k8s-control-plane:~# tar -C /opt/cni/bin -xf /usr/local/src/cni-plugins-linux-amd64-v${CNI_VERSION}.tgz
root@k8s-control-plane:~# 
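Optionally (not shown in the original log), you can confirm that the plugin binaries (bridge, host-local, loopback, etc.) were extracted:

ls /opt/cni/bin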

Enabling the systemd cgroup driver

Commands to run (one line at a time)
mkdir /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
cp -p /etc/containerd/config.toml /etc/containerd/config.toml_`date +%Y%m%d`
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
diff /etc/containerd/config.toml /etc/containerd/config.toml_`date +%Y%m%d`
systemctl restart containerd
Command output
root@k8s-control-plane:~# mkdir /etc/containerd
root@k8s-control-plane:~# containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
root@k8s-control-plane:~# cp -p /etc/containerd/config.toml /etc/containerd/config.toml_`date +%Y%m%d`
root@k8s-control-plane:~# sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
root@k8s-control-plane:~# diff /etc/containerd/config.toml /etc/containerd/config.toml_`date +%Y%m%d`
137c137
<             SystemdCgroup = true
---
>             SystemdCgroup = false
root@k8s-control-plane:~# 
root@k8s-control-plane:~# systemctl restart containerd
root@k8s-control-plane:~# 
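If you want to double-check the change beyond the diff above, grepping the config file should show the updated value (an extra verification step, not in the original log):

grep SystemdCgroup /etc/containerd/config.toml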

Configuring kernel parameters

Commands to run (one line at a time)
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

modprobe overlay
modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

sysctl --system
Command output
root@k8s-control-plane:~# cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
overlay
br_netfilter
root@k8s-control-plane:~# 
root@k8s-control-plane:~# modprobe overlay
root@k8s-control-plane:~# modprobe br_netfilter
root@k8s-control-plane:~# 

root@k8s-control-plane:~# cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
root@k8s-control-plane:~# 
root@k8s-control-plane:~# sysctl --system
* Applying /etc/sysctl.d/10-console-messages.conf ...
kernel.printk = 4 4 1 7
* Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
net.ipv6.conf.all.use_tempaddr = 2
net.ipv6.conf.default.use_tempaddr = 2
* Applying /etc/sysctl.d/10-kernel-hardening.conf ...
kernel.kptr_restrict = 1
* Applying /etc/sysctl.d/10-magic-sysrq.conf ...
kernel.sysrq = 176
* Applying /etc/sysctl.d/10-network-security.conf ...
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.all.rp_filter = 2
* Applying /etc/sysctl.d/10-ptrace.conf ...
kernel.yama.ptrace_scope = 1
* Applying /etc/sysctl.d/10-zeropage.conf ...
vm.mmap_min_addr = 65536
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.default.accept_source_route = 0
sysctl: setting key "net.ipv4.conf.all.accept_source_route": Invalid argument
net.ipv4.conf.default.promote_secondaries = 1
sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument
net.ipv4.ping_group_range = 0 2147483647
net.core.default_qdisc = fq_codel
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
fs.protected_regular = 1
fs.protected_fifos = 1
* Applying /usr/lib/sysctl.d/50-pid-max.conf ...
kernel.pid_max = 4194304
* Applying /etc/sysctl.d/99-cloudimg-ipv6.conf ...
net.ipv6.conf.all.use_tempaddr = 0
net.ipv6.conf.default.use_tempaddr = 0
* Applying /usr/lib/sysctl.d/99-protect-links.conf ...
fs.protected_fifos = 1
fs.protected_hardlinks = 1
fs.protected_regular = 2
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
* Applying /etc/sysctl.conf ...
root@k8s-control-plane:~# 

Note that if the kernel parameter entries contain stray whitespace, they may not be applied correctly.
If the kernel parameters are not set correctly, the later kubeadm join may fail.
Example: net.ipv4.ip_forward△△△△△△=△1△ (△ = space)
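To verify that the modules are loaded and the parameters took effect, the following checks can be used (an extra step, not in the original log):

lsmod | grep -e overlay -e br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward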

Disabling swap

Commands to run (one line at a time)
swapon -s
swapoff -a
Command output
root@k8s-control-plane:~# swapon -s
root@k8s-control-plane:~# 
root@k8s-control-plane:~# swapoff -a
root@k8s-control-plane:~# 
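On this AMI swapon -s returns nothing, so there is no swap to turn off. If your environment does have swap configured, you would also want it to stay disabled after a reboot; one common approach (hypothetical for this setup, not part of the original procedure) is to comment out the swap entry in /etc/fstab:

sed -i '/\sswap\s/ s/^/#/' /etc/fstab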

Installing kubelet, kubeadm, and kubectl

Commands to run (one line at a time)
apt-get update
apt-get install -y apt-transport-https ca-certificates curl
curl -fsSL https://dl.k8s.io/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
Command output
root@k8s-control-plane:~# apt-get update
Hit:1 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy InRelease
Get:2 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates InRelease [119 kB]
Get:3 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-backports InRelease [109 kB]
Get:4 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy/universe amd64 Packages [14.1 MB]
Get:5 http://security.ubuntu.com/ubuntu jammy-security InRelease [110 kB]
Get:6 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy/universe Translation-en [5652 kB]
Get:7 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy/universe amd64 c-n-f Metadata [286 kB]
Get:8 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy/multiverse amd64 Packages [217 kB]
Get:9 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy/multiverse Translation-en [112 kB]
Get:10 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy/multiverse amd64 c-n-f Metadata [8372 B]
Get:11 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/main amd64 Packages [1014 kB]
Get:12 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/main Translation-en [227 kB]
Get:13 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/main amd64 c-n-f Metadata [15.6 kB]
Get:14 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/restricted amd64 Packages [905 kB]
Get:15 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/restricted Translation-en [146 kB]
Get:16 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/restricted amd64 c-n-f Metadata [532 B]
Get:17 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/universe amd64 Packages [987 kB]
Get:18 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/universe Translation-en [215 kB]
Get:19 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/universe amd64 c-n-f Metadata [21.9 kB]
Get:20 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/multiverse amd64 Packages [41.6 kB]
Get:21 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/multiverse Translation-en [9768 B]
Get:22 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/multiverse amd64 c-n-f Metadata [472 B]
Get:23 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-backports/main amd64 Packages [41.7 kB]
Get:24 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-backports/main Translation-en [10.5 kB]
Get:25 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-backports/main amd64 c-n-f Metadata [388 B]
Get:26 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-backports/restricted amd64 c-n-f Metadata [116 B]
Get:27 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-backports/universe amd64 Packages [24.3 kB]
Get:28 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-backports/universe Translation-en [16.4 kB]
Get:29 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-backports/universe amd64 c-n-f Metadata [640 B]
Get:30 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-backports/multiverse amd64 c-n-f Metadata [116 B]
Get:31 http://security.ubuntu.com/ubuntu jammy-security/main amd64 Packages [804 kB]       
Get:32 http://security.ubuntu.com/ubuntu jammy-security/main Translation-en [169 kB]
Get:33 http://security.ubuntu.com/ubuntu jammy-security/main amd64 c-n-f Metadata [11.3 kB]
Get:34 http://security.ubuntu.com/ubuntu jammy-security/restricted amd64 Packages [889 kB]
Get:35 http://security.ubuntu.com/ubuntu jammy-security/restricted Translation-en [143 kB]
Get:36 http://security.ubuntu.com/ubuntu jammy-security/restricted amd64 c-n-f Metadata [532 B]
Get:37 http://security.ubuntu.com/ubuntu jammy-security/universe amd64 Packages [785 kB]
Get:38 http://security.ubuntu.com/ubuntu jammy-security/universe Translation-en [144 kB]
Get:39 http://security.ubuntu.com/ubuntu jammy-security/universe amd64 c-n-f Metadata [16.7 kB]
Get:40 http://security.ubuntu.com/ubuntu jammy-security/multiverse amd64 Packages [36.5 kB]
Get:41 http://security.ubuntu.com/ubuntu jammy-security/multiverse Translation-en [7060 B]
Get:42 http://security.ubuntu.com/ubuntu jammy-security/multiverse amd64 c-n-f Metadata [260 B]
Fetched 27.4 MB in 5s (5915 kB/s)                
Reading package lists... Done
root@k8s-control-plane:~# 

root@k8s-control-plane:~# apt-get install -y apt-transport-https ca-certificates curl
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
  libcurl4
The following NEW packages will be installed:
  apt-transport-https
The following packages will be upgraded:
  ca-certificates curl libcurl4
3 upgraded, 1 newly installed, 0 to remove and 126 not upgraded.
Need to get 641 kB of archives.
After this operation, 193 kB of additional disk space will be used.
Get:1 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/main amd64 ca-certificates all 20230311ubuntu0.22.04.1 [155 kB]
Get:2 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/universe amd64 apt-transport-https all 2.4.10 [1510 B]
Get:3 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/main amd64 curl amd64 7.81.0-1ubuntu1.13 [194 kB]
Get:4 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/main amd64 libcurl4 amd64 7.81.0-1ubuntu1.13 [290 kB]
Fetched 641 kB in 0s (16.0 MB/s)  
Preconfiguring packages ...
(Reading database ... 64295 files and directories currently installed.)
Preparing to unpack .../ca-certificates_20230311ubuntu0.22.04.1_all.deb ...
Unpacking ca-certificates (20230311ubuntu0.22.04.1) over (20211016ubuntu0.22.04.1) ...
Selecting previously unselected package apt-transport-https.
Preparing to unpack .../apt-transport-https_2.4.10_all.deb ...
Unpacking apt-transport-https (2.4.10) ...
Preparing to unpack .../curl_7.81.0-1ubuntu1.13_amd64.deb ...
Unpacking curl (7.81.0-1ubuntu1.13) over (7.81.0-1ubuntu1.10) ...
Preparing to unpack .../libcurl4_7.81.0-1ubuntu1.13_amd64.deb ...
Unpacking libcurl4:amd64 (7.81.0-1ubuntu1.13) over (7.81.0-1ubuntu1.10) ...
Setting up apt-transport-https (2.4.10) ...
Setting up ca-certificates (20230311ubuntu0.22.04.1) ...
Updating certificates in /etc/ssl/certs...
rehash: warning: skipping ca-certificates.crt,it does not contain exactly one certificate or CRL
19 added, 6 removed; done.
Setting up libcurl4:amd64 (7.81.0-1ubuntu1.13) ...
Setting up curl (7.81.0-1ubuntu1.13) ...
Processing triggers for man-db (2.10.2-1) ...
Processing triggers for libc-bin (2.35-0ubuntu3.1) ...
Processing triggers for ca-certificates (20230311ubuntu0.22.04.1) ...
Updating certificates in /etc/ssl/certs...
0 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
Scanning processes...                                                                                                  
Scanning linux images...                                                                                               

Running kernel seems to be up-to-date.

No services need to be restarted.

No containers need to be restarted.

No user sessions are running outdated binaries.

No VM guests are running outdated hypervisor (qemu) binaries on this host.
root@k8s-control-plane:~#

root@k8s-control-plane:~# curl -fsSL https://dl.k8s.io/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg
root@k8s-control-plane:~# 
root@k8s-control-plane:~# echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main
root@k8s-control-plane:~# 

root@k8s-control-plane:~# apt-get update
Hit:1 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy InRelease
Hit:2 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates InRelease        
Hit:3 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-backports InRelease      
Hit:5 http://security.ubuntu.com/ubuntu jammy-security InRelease                                         
Get:4 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [8993 B]
Get:6 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 Packages [69.9 kB]
Fetched 78.9 kB in 1s (83.8 kB/s) 
Reading package lists... Done
root@k8s-control-plane:~#

root@k8s-control-plane:~# apt-get install -y kubelet kubeadm kubectl
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
  conntrack cri-tools ebtables kubernetes-cni socat
The following NEW packages will be installed:
  conntrack cri-tools ebtables kubeadm kubectl kubelet kubernetes-cni socat
0 upgraded, 8 newly installed, 0 to remove and 126 not upgraded.
Need to get 87.1 MB of archives.
After this operation, 336 MB of additional disk space will be used.
Get:1 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy/main amd64 conntrack amd64 1:1.4.6-2build2 [33.5 kB]
Get:2 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy/main amd64 ebtables amd64 2.0.11-4build2 [84.9 kB]
Get:3 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB]
Get:4 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 cri-tools amd64 1.26.0-00 [18.9 MB]
Get:5 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubernetes-cni amd64 1.2.0-00 [27.6 MB]
Get:6 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.28.2-00 [19.5 MB]
Get:7 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.28.2-00 [10.3 MB]
Get:8 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.28.2-00 [10.3 MB]
Fetched 87.1 MB in 5s (18.0 MB/s) 
Selecting previously unselected package conntrack.
(Reading database ... 64312 files and directories currently installed.)
Preparing to unpack .../0-conntrack_1%3a1.4.6-2build2_amd64.deb ...
Unpacking conntrack (1:1.4.6-2build2) ...
Selecting previously unselected package cri-tools.
Preparing to unpack .../1-cri-tools_1.26.0-00_amd64.deb ...
Unpacking cri-tools (1.26.0-00) ...
Selecting previously unselected package ebtables.
Preparing to unpack .../2-ebtables_2.0.11-4build2_amd64.deb ...
Unpacking ebtables (2.0.11-4build2) ...
Selecting previously unselected package kubernetes-cni.
Preparing to unpack .../3-kubernetes-cni_1.2.0-00_amd64.deb ...
Unpacking kubernetes-cni (1.2.0-00) ...
Selecting previously unselected package socat.
Preparing to unpack .../4-socat_1.7.4.1-3ubuntu4_amd64.deb ...
Unpacking socat (1.7.4.1-3ubuntu4) ...
Selecting previously unselected package kubelet.
Preparing to unpack .../5-kubelet_1.28.2-00_amd64.deb ...
Unpacking kubelet (1.28.2-00) ...
Selecting previously unselected package kubectl.
Preparing to unpack .../6-kubectl_1.28.2-00_amd64.deb ...
Unpacking kubectl (1.28.2-00) ...
Selecting previously unselected package kubeadm.
Preparing to unpack .../7-kubeadm_1.28.2-00_amd64.deb ...
Unpacking kubeadm (1.28.2-00) ...
Setting up conntrack (1:1.4.6-2build2) ...
Setting up kubectl (1.28.2-00) ...
Setting up ebtables (2.0.11-4build2) ...
Setting up socat (1.7.4.1-3ubuntu4) ...
Setting up cri-tools (1.26.0-00) ...
Setting up kubernetes-cni (1.2.0-00) ...
Setting up kubelet (1.28.2-00) ...
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
Setting up kubeadm (1.28.2-00) ...
Processing triggers for man-db (2.10.2-1) ...
Scanning processes...                                                                                                  
Scanning linux images...                                                                                               

Running kernel seems to be up-to-date.

No services need to be restarted.

No containers need to be restarted.

No user sessions are running outdated binaries.

No VM guests are running outdated hypervisor (qemu) binaries on this host.
root@k8s-control-plane:~#

root@k8s-control-plane:~# apt-mark hold kubelet kubeadm kubectl
kubelet set on hold.
kubeadm set on hold.
kubectl set on hold.
root@k8s-control-plane:~# 
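Although not shown in the original log, the pinned versions can be confirmed after installation with:

kubeadm version
kubelet --version
kubectl version --client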

Initializing the Kubernetes cluster

This step is performed only when building the Control Plane (it is not performed when building a Worker Node).

Commands to run (one line at a time)
kubeadm init --pod-network-cidr=10.244.0.0/16

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
Command output
root@k8s-control-plane:~# kubeadm init --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.28.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0930 14:27:09.190649    7884 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-control-plane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.105]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-control-plane localhost] and IPs [10.0.0.105 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-control-plane localhost] and IPs [10.0.0.105 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 9.504476 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-control-plane as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-control-plane as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: xxxxxx.xxxxxxxxxxxxxxxx
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.105:6443 --token xxxxxx.xxxxxxxxxxxxxxxx \
	--discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

root@k8s-control-plane:~# 

root@k8s-control-plane:~# mkdir -p $HOME/.kube
root@k8s-control-plane:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@k8s-control-plane:~# chown $(id -u):$(id -g) $HOME/.kube/config
root@k8s-control-plane:~# 

Make a note of the following command printed at the end of kubeadm init; it is required when adding Worker Nodes.

kubeadm join 10.0.0.105:6443 --token xxxxxx.xxxxxxxxxxxxxxxx \
    --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
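If the join command is lost or the token expires (by default a bootstrap token is valid for 24 hours), a fresh join command can be printed on the Control Plane with:

kubeadm token create --print-join-command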

Setting up the CNI with Flannel

This step is performed only when building the Control Plane (it is not performed when building a Worker Node).

Command to run
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Command output
root@k8s-control-plane:~# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
root@k8s-control-plane:~# 

Post-build verification

  • Check node status
Command to run
kubectl get nodes
Command output
root@k8s-control-plane:~# kubectl get nodes
NAME                STATUS   ROLES           AGE     VERSION
k8s-control-plane   Ready    control-plane   4m37s   v1.28.2
root@k8s-control-plane:~# 
  • Check pod status
Command to run
kubectl get pods -A
Command output
root@k8s-control-plane:~# kubectl get pods -A
NAMESPACE      NAME                                        READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-gxjw6                       1/1     Running   0          2m13s
kube-system    coredns-5dd5756b68-8xtct                    1/1     Running   0          4m36s
kube-system    coredns-5dd5756b68-kmchv                    1/1     Running   0          4m36s
kube-system    etcd-k8s-control-plane                      1/1     Running   0          4m48s
kube-system    kube-apiserver-k8s-control-plane            1/1     Running   0          4m48s
kube-system    kube-controller-manager-k8s-control-plane   1/1     Running   0          4m48s
kube-system    kube-proxy-n44hz                            1/1     Running   0          4m36s
kube-system    kube-scheduler-k8s-control-plane            1/1     Running   0          4m48s
root@k8s-control-plane:~# 
root@k8s-control-plane:~# kubectl describe nodes
Name:               k8s-control-plane
Roles:              control-plane
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k8s-control-plane
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"46:d6:5f:9a:c1:5e"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 10.0.0.105
                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sat, 30 Sep 2023 14:27:30 +0000
Taints:             node-role.kubernetes.io/control-plane:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  k8s-control-plane
  AcquireTime:     <unset>
  RenewTime:       Sat, 30 Sep 2023 14:33:02 +0000
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Sat, 30 Sep 2023 14:30:22 +0000   Sat, 30 Sep 2023 14:30:22 +0000   FlannelIsUp                  Flannel is running on this node
  MemoryPressure       False   Sat, 30 Sep 2023 14:30:37 +0000   Sat, 30 Sep 2023 14:27:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Sat, 30 Sep 2023 14:30:37 +0000   Sat, 30 Sep 2023 14:27:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Sat, 30 Sep 2023 14:30:37 +0000   Sat, 30 Sep 2023 14:27:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Sat, 30 Sep 2023 14:30:37 +0000   Sat, 30 Sep 2023 14:30:27 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  10.0.0.105
  Hostname:    k8s-control-plane
Capacity:
  cpu:                2
  ephemeral-storage:  9974088Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             1983796Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  9192119486
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             1881396Ki
  pods:               110
System Info:
  Machine ID:                 ec2b3fcc000f6c4f20a5dd789cd2c4cf
  System UUID:                ec2b3fcc-000f-6c4f-20a5-dd789cd2c4cf
  Boot ID:                    541f5670-aeee-424e-b758-d564d98e6c7a
  Kernel Version:             5.19.0-1025-aws
  OS Image:                   Ubuntu 22.04.2 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.7.3
  Kubelet Version:            v1.28.2
  Kube-Proxy Version:         v1.28.2
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (8 in total)
  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
  kube-flannel                kube-flannel-ds-gxjw6                        100m (5%)     0 (0%)      50Mi (2%)        0 (0%)         3m
  kube-system                 coredns-5dd5756b68-8xtct                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (9%)     5m23s
  kube-system                 coredns-5dd5756b68-kmchv                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (9%)     5m23s
  kube-system                 etcd-k8s-control-plane                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         5m35s
  kube-system                 kube-apiserver-k8s-control-plane             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m35s
  kube-system                 kube-controller-manager-k8s-control-plane    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m35s
  kube-system                 kube-proxy-n44hz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m23s
  kube-system                 kube-scheduler-k8s-control-plane             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m35s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                950m (47%)   0 (0%)
  memory             290Mi (15%)  340Mi (18%)
  ephemeral-storage  0 (0%)       0 (0%)
  hugepages-1Gi      0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
Events:
  Type     Reason                   Age                    From             Message
  ----     ------                   ----                   ----             -------
  Normal   Starting                 5m21s                  kube-proxy       
  Normal   Starting                 5m47s                  kubelet          Starting kubelet.
  Warning  InvalidDiskCapacity      5m47s                  kubelet          invalid capacity 0 on image filesystem
  Normal   NodeAllocatableEnforced  5m46s                  kubelet          Updated Node Allocatable limit across pods
  Normal   NodeHasNoDiskPressure    5m46s (x7 over 5m47s)  kubelet          Node k8s-control-plane status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     5m46s (x7 over 5m47s)  kubelet          Node k8s-control-plane status is now: NodeHasSufficientPID
  Normal   NodeHasSufficientMemory  5m46s (x8 over 5m47s)  kubelet          Node k8s-control-plane status is now: NodeHasSufficientMemory
  Normal   Starting                 5m36s                  kubelet          Starting kubelet.
  Warning  InvalidDiskCapacity      5m36s                  kubelet          invalid capacity 0 on image filesystem
  Normal   NodeHasSufficientMemory  5m36s                  kubelet          Node k8s-control-plane status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    5m36s                  kubelet          Node k8s-control-plane status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     5m36s                  kubelet          Node k8s-control-plane status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  5m36s                  kubelet          Updated Node Allocatable limit across pods
  Normal   RegisteredNode           5m23s                  node-controller  Node k8s-control-plane event: Registered Node k8s-control-plane in Controller
  Normal   NodeReady                2m43s                  kubelet          Node k8s-control-plane status is now: NodeReady
root@k8s-control-plane:~# 

Opening the EC2 security groups

Because the Control Plane and Worker Node need to communicate with each other, the inbound rules of the EC2 security groups, which are closed by default, must be opened.
Outbound traffic is fully open by default and is therefore not covered below; if it is restricted in your environment, open it as needed.

  • Control Plane security group settings

(screenshot of the inbound rules; not reproduced here)

  • Worker Node security group settings

(screenshot of the inbound rules; not reproduced here)
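Since the screenshots are not reproduced here, the rules used were roughly as follows (a sketch based on the standard Kubernetes port requirements; adjust the source ranges to your own VPC): TCP 6443 (API server) and TCP 10250 (kubelet) between the nodes, plus UDP 8472 for Flannel's VXLAN traffic. A rule can also be added from the AWS CLI, for example (the security group ID and CIDR are placeholders):

aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 6443 --cidr 10.0.0.0/24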

4. Building the Worker Node

Perform the same steps as for the Control Plane build above (skipping the steps marked as Control Plane only), then add the node to the cluster as follows.

Adding the Worker Node to the Kubernetes cluster

The following is performed on the Worker Node being added to the Kubernetes cluster.

Commands to run (one line at a time)
kubeadm join <Control Plane IP address>:6443 --token xxxxxx.xxxxxxxxxxxxxxxx --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Command output
root@k8s-worker-node1:~# kubeadm join 10.0.0.105:6443 --token xxxxxx.xxxxxxxxxxxxxxxx --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

root@k8s-worker-node1:~# 

Verification after adding the Worker Node (performed on the Control Plane)

The following is performed on the Control Plane of the Kubernetes cluster.
Also add the Worker Node's hostname to the Control Plane's /etc/hosts beforehand.

  • Check /etc/hosts after adding the entry
Command to run
cat /etc/hosts |grep k8s
Command output
root@k8s-control-plane:~# cat /etc/hosts |grep k8s
10.0.0.105 k8s-control-plane
10.0.0.101 k8s-worker-node1
root@k8s-control-plane:~# 
  • Check node status
Command to run
kubectl get nodes
Command output
root@k8s-control-plane:~# kubectl get nodes
NAME                STATUS   ROLES           AGE     VERSION
k8s-control-plane   Ready    control-plane   68m     v1.28.2
k8s-worker-node1    Ready    <none>          8m22s   v1.28.2
root@k8s-control-plane:~# 
  • Check pod status
Command to run
kubectl get pods -A
Command output
root@k8s-control-plane:~# kubectl get pods -A
NAMESPACE      NAME                                        READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-d5kfv                       1/1     Running   0          9m30s
kube-flannel   kube-flannel-ds-gxjw6                       1/1     Running   0          66m
kube-system    coredns-5dd5756b68-8xtct                    1/1     Running   0          68m
kube-system    coredns-5dd5756b68-kmchv                    1/1     Running   0          68m
kube-system    etcd-k8s-control-plane                      1/1     Running   0          69m
kube-system    kube-apiserver-k8s-control-plane            1/1     Running   0          69m
kube-system    kube-controller-manager-k8s-control-plane   1/1     Running   0          69m
kube-system    kube-proxy-n44hz                            1/1     Running   0          68m
kube-system    kube-proxy-zwqt9                            1/1     Running   0          9m30s
kube-system    kube-scheduler-k8s-control-plane            1/1     Running   0          69m
root@k8s-control-plane:~# 
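As a purely cosmetic, optional step (not part of the original procedure), the worker's ROLES column, which shows <none> above, can be filled in by adding the conventional role label:

kubectl label node k8s-worker-node1 node-role.kubernetes.io/worker=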

5. Reference links

6. Issues encountered during the build

After building the Kubernetes cluster, the Control Plane pods repeatedly went into CrashLoopBackOff

Symptom

After the Kubernetes build completed, the pods initially appeared to be running fine, but after some time the various Control Plane pods (kube-apiserver, kube-controller-manager, kube-proxy, etc.) went into CrashLoopBackOff and kubectl commands stopped working.

Cause

Unknown (investigated based on the logs, but could not pin it down).
It most likely stemmed from a compatibility issue between the containerd, runc, Kubernetes, and Flannel versions.

Resolution

Resolved by changing the containerd version and rebuilding.
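For anyone hitting something similar, these are the kinds of commands that are useful for digging into the state of the node (a general troubleshooting sketch, not the exact commands used during this incident):

journalctl -u kubelet --no-pager | tail -n 100
journalctl -u containerd --no-pager | tail -n 100
crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a
kubectl get pods -A -o wide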
