Introduction
I want to build a Kubernetes cluster using only OCI Always Free resources.
Two Arm-based A1 instances are used, one as the Control Plane and one as the Worker Node.
(The tenancy I am using is not a Free Tier tenancy, but since only Always Free resources are used, this should work on the Free Tier as well.)
Creating the VCN
Create the VCN in which the instances will be provisioned.
This time both the Control Plane and the Worker Node go into the Public Subnet. Specify a VCN name and leave everything else at the defaults.
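For reference, roughly the same network can be created with the OCI CLI (a sketch, assuming the CLI is configured; the compartment OCID, names, and CIDRs are placeholders, and the internet gateway and route table still have to be set up separately):
$ oci network vcn create \
    --compartment-id <compartment-ocid> \
    --display-name k8s-vcn \
    --cidr-block 10.0.0.0/16
$ oci network subnet create \
    --compartment-id <compartment-ocid> \
    --vcn-id <vcn-ocid> \
    --display-name k8s-public-subnet \
    --cidr-block 10.0.0.0/24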
Configuring the Security List
Open the ports listed in the Kubernetes documentation.
Click Add Ingress Rules on the security list attached to the Public Subnet.
I set the rules as shown below, and also opened 80 and 443 while I was at it.
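For reference, the ports involved are roughly the following (the same set appears again in the iptables rules later):
- TCP 22: SSH
- TCP 80 / 443: HTTP / HTTPS (opened in addition)
- TCP 6443: Kubernetes API server
- TCP 2379-2380: etcd
- TCP 10250: kubelet API
- TCP 10251 / 10252: kube-scheduler / kube-controller-manager
- TCP 30000-32767: NodePort Services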
Creating the Instance
Create the instance for the Control Plane.
Choose Ubuntu as the OS and the Always Free eligible A1 shape.
Note that on A1 instances 1 OCPU corresponds to 1 vCPU, so with 1 OCPU kubeadm init failed (its preflight check requires at least 2 CPUs). I therefore used 2 OCPUs / 12 GB of memory.
For the network, select the VCN and public subnet created earlier.
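For reference, an equivalent instance can also be launched from the OCI CLI (a sketch; all OCIDs, the availability domain, and the key file path are placeholders):
$ oci compute instance launch \
    --compartment-id <compartment-ocid> \
    --availability-domain <availability-domain> \
    --display-name controlplane \
    --shape VM.Standard.A1.Flex \
    --shape-config '{"ocpus": 2, "memoryInGBs": 12}' \
    --image-id <ubuntu-arm64-image-ocid> \
    --subnet-id <public-subnet-ocid> \
    --ssh-authorized-keys-file <path-to-public-key>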
Logging In
Once the instance is created, check its public IP address and log in via SSH.
$ ssh -i ssh-key-2021-09-12.key ubuntu@155.248.xx.xx
・・・
To run a command as administrator (user "root"), use "sudo <command>".
ubuntu@controlplane:~$
Installing the Container Runtime
Install a container runtime following the manual.
Network configuration
Switch to root to avoid prefixing everything with sudo.
$ sudo su -
Enable IPv4 forwarding and let iptables see bridged traffic.
# cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
# modprobe overlay
# modprobe br_netfilter
# cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# sysctl --system
* Applying /etc/sysctl.d/10-console-messages.conf ...
kernel.printk = 4 4 1 7
* Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
net.ipv6.conf.all.use_tempaddr = 2
net.ipv6.conf.default.use_tempaddr = 2
* Applying /etc/sysctl.d/10-kernel-hardening.conf ...
kernel.kptr_restrict = 1
* Applying /etc/sysctl.d/10-magic-sysrq.conf ...
kernel.sysrq = 176
* Applying /etc/sysctl.d/10-network-security.conf ...
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.all.rp_filter = 2
* Applying /etc/sysctl.d/10-ptrace.conf ...
kernel.yama.ptrace_scope = 1
* Applying /etc/sysctl.d/10-zeropage.conf ...
vm.mmap_min_addr = 32768
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.default.accept_source_route = 0
sysctl: setting key "net.ipv4.conf.all.accept_source_route": Invalid argument
net.ipv4.conf.default.promote_secondaries = 1
sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument
net.ipv4.ping_group_range = 0 2147483647
net.core.default_qdisc = fq_codel
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
fs.protected_regular = 1
fs.protected_fifos = 1
* Applying /usr/lib/sysctl.d/50-pid-max.conf ...
kernel.pid_max = 4194304
* Applying /etc/sysctl.d/99-cloudimg-ipv6.conf ...
net.ipv6.conf.all.use_tempaddr = 0
net.ipv6.conf.default.use_tempaddr = 0
* Applying /usr/lib/sysctl.d/99-protect-links.conf ...
fs.protected_fifos = 1
fs.protected_hardlinks = 1
fs.protected_regular = 2
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
* Applying /etc/sysctl.d/scsi-log-level.conf ...
dev.scsi.logging_level = 68
* Applying /etc/sysctl.conf ...
Confirm that the br_netfilter and overlay modules are loaded by running the following commands and checking their output.
# lsmod | grep br_netfilter
br_netfilter 32768 0
bridge 352256 1 br_netfilter
# lsmod | grep overlay
overlay 155648 0
Confirm that each of the kernel parameters is set to 1.
# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
Configuring iptables
Without this change, kubeadm join failed later, so configure iptables as below (I followed an external article for this part).
Back up the file beforehand:
# cp /etc/iptables/rules.v4 /etc/iptables/rules.v4.org
Edit the file as follows (a sed sketch for the deletion is shown after the listing below):
- Add lines 17 through 25
- Delete these two lines
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
12 -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
13 -A INPUT -p icmp -j ACCEPT
14 -A INPUT -i lo -j ACCEPT
15 -A INPUT -p udp --sport 123 -j ACCEPT
16 -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
17 -A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
18 -A INPUT -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT
19 -A INPUT -p tcp -m state --state NEW -m tcp --dport 6443 -j ACCEPT
20 -A INPUT -p tcp -m state --state NEW -m tcp --dport 2379 -j ACCEPT
21 -A INPUT -p tcp -m state --state NEW -m tcp --dport 2380 -j ACCEPT
22 -A INPUT -p tcp -m state --state NEW -m tcp --dport 10250 -j ACCEPT
23 -A INPUT -p tcp -m state --state NEW -m tcp --dport 10251 -j ACCEPT
24 -A INPUT -p tcp -m state --state NEW -m tcp --dport 10252 -j ACCEPT
25 -A INPUT -p tcp -m state --state NEW -m tcp --dport 6443 -j ACCEPT
26 -A INPUT -p tcp -m state --state NEW -m tcp --match multiport --dports 30000:32767 -j ACCEPT
27 -A OUTPUT -d 169.254.0.0/16 -j InstanceServices
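If you prefer to script the deletion instead of editing the file by hand, the two REJECT rules can be removed with a one-liner like this (a sketch; check the resulting file before restoring it):
# sed -i '/-j REJECT --reject-with icmp-host-prohibited/d' /etc/iptables/rules.v4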
Apply the edited file:
# iptables-restore < /etc/iptables/rules.v4
Check the result:
# iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N InstanceServices
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p udp -m udp --sport 123 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 6443 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 2379 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 2380 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 10250 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 10251 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 10252 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 6443 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp -m multiport --dports 30000:32767 -j ACCEPT
-A OUTPUT -d 169.254.0.0/16 -j InstanceServices
・・・
Installing containerd
This time containerd is used as the runtime.
Installing containerd
Download the binary and install it. Make sure to pick the arm64 archive to match the architecture (it is easy to mix up with amd64).
# wget https://github.com/containerd/containerd/releases/download/v1.7.5/containerd-1.7.5-linux-arm64.tar.gz
--2023-08-28 01:16:09-- https://github.com/containerd/containerd/releases/download/v1.7.5/containerd-1.7.5-linux-arm64.tar.gz
Resolving github.com (github.com)... 140.82.112.4
Connecting to github.com (github.com)|140.82.112.4|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://objects.githubusercontent.com/github-production-release-asset-2e65be/46089560/592c6562-d1dc-4ac6-8292-8892ecb55f2f?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20230828%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230828T011610Z&X-Amz-Expires=300&X-Amz-Signature=1d63a4191d59ee42cc8676ee191785014192053545926209960cfc2adc8edd1f&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=46089560&response-content-disposition=attachment%3B%20filename%3Dcontainerd-1.7.5-linux-arm64.tar.gz&response-content-type=application%2Foctet-stream [following]
--2023-08-28 01:16:10-- https://objects.githubusercontent.com/github-production-release-asset-2e65be/46089560/592c6562-d1dc-4ac6-8292-8892ecb55f2f?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20230828%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230828T011610Z&X-Amz-Expires=300&X-Amz-Signature=1d63a4191d59ee42cc8676ee191785014192053545926209960cfc2adc8edd1f&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=46089560&response-content-disposition=attachment%3B%20filename%3Dcontainerd-1.7.5-linux-arm64.tar.gz&response-content-type=application%2Foctet-stream
Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.108.133, 185.199.111.133, 185.199.109.133, ...
Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 34656057 (33M) [application/octet-stream]
Saving to: ‘containerd-1.7.5-linux-arm64.tar.gz’
containerd-1.7.5-linux-arm64.tar.gz 100%[================================================================================================================================================================>] 33.05M 210MB/s in 0.2s
2023-08-28 01:16:10 (210 MB/s) - ‘containerd-1.7.5-linux-arm64.tar.gz’ saved [34656057/34656057]
# tar Cxzvf /usr/local containerd-1.7.5-linux-arm64.tar.gz
bin/
bin/ctr
bin/containerd-stress
bin/containerd
bin/containerd-shim
bin/containerd-shim-runc-v1
bin/containerd-shim-runc-v2
Configuring systemd
containerd will be managed by systemd, so create the following unit file as /usr/local/lib/systemd/system/containerd.service:
# mkdir -p /usr/local/lib/systemd/system
# Copyright The containerd Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target
[Service]
#uncomment to enable the experimental sbservice (sandboxed) version of containerd/cri integration
#Environment="ENABLE_CRI_SANDBOXES=sandboxed"
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
# Comment TasksMax if your systemd version does not supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
OOMScoreAdjust=-999
[Install]
WantedBy=multi-user.target
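The unit file above is the one published in the containerd repository, so it can also be downloaded directly instead of being pasted in by hand (a sketch; you may prefer to pin the URL to the tag of the release you installed):
# curl -fsSL -o /usr/local/lib/systemd/system/containerd.service \
    https://raw.githubusercontent.com/containerd/containerd/main/containerd.service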
Reload systemd and enable containerd.
# systemctl daemon-reload
# systemctl enable --now containerd
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /usr/local/lib/systemd/system/containerd.service.
Confirm that the service is active (running).
# systemctl status containerd
● containerd.service - containerd container runtime
Loaded: loaded (/usr/local/lib/systemd/system/containerd.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2023-08-24 06:46:40 UTC; 36s ago
Docs: https://containerd.io
Process: 2171 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
Main PID: 2172 (containerd)
Tasks: 7
Memory: 12.9M
CPU: 196ms
CGroup: /system.slice/containerd.service
└─2172 /usr/local/bin/containerd
Aug 24 06:46:40 controlplane containerd[2172]: time="2023-08-24T06:46:40.515156256Z" level=info msg="Start subscribing containerd event"
Aug 24 06:46:40 controlplane containerd[2172]: time="2023-08-24T06:46:40.515453260Z" level=info msg="Start recovering state"
Aug 24 06:46:40 controlplane containerd[2172]: time="2023-08-24T06:46:40.515733223Z" level=info msg="Start event monitor"
Aug 24 06:46:40 controlplane containerd[2172]: time="2023-08-24T06:46:40.515752264Z" level=info msg="Start snapshots syncer"
Aug 24 06:46:40 controlplane containerd[2172]: time="2023-08-24T06:46:40.515766264Z" level=info msg="Start cni network conf syncer for default"
Aug 24 06:46:40 controlplane containerd[2172]: time="2023-08-24T06:46:40.515779664Z" level=info msg="Start streaming server"
Aug 24 06:46:40 controlplane containerd[2172]: time="2023-08-24T06:46:40.516898119Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 24 06:46:40 controlplane containerd[2172]: time="2023-08-24T06:46:40.517067841Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 24 06:46:40 controlplane containerd[2172]: time="2023-08-24T06:46:40.517252564Z" level=info msg="containerd successfully booted in 0.042658s"
Aug 24 06:46:40 controlplane systemd[1]: Started containerd container runtime.
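As an extra sanity check, the bundled ctr client can query the daemon's version over the containerd socket:
# ctr version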
Installing runc
Download the binary and install it.
# wget https://github.com/opencontainers/runc/releases/download/v1.1.9/runc.arm64
--2023-08-24 04:09:01-- https://github.com/opencontainers/runc/releases/download/v1.1.9/runc.arm64
Resolving github.com (github.com)... 140.82.113.4
Connecting to github.com (github.com)|140.82.113.4|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://objects.githubusercontent.com/github-production-release-asset-2e65be/36960321/8e72ca83-10c6-47c2-beea-91ae1cd776c6?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20230824%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230824T040901Z&X-Amz-Expires=300&X-Amz-Signature=ffceda911acd676311e1a04a3e46ceceb2f3565e503cac1c06500bdc9175a2fa&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=36960321&response-content-disposition=attachment%3B%20filename%3Drunc.arm64&response-content-type=application%2Foctet-stream [following]
--2023-08-24 04:09:01-- https://objects.githubusercontent.com/github-production-release-asset-2e65be/36960321/8e72ca83-10c6-47c2-beea-91ae1cd776c6?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20230824%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230824T040901Z&X-Amz-Expires=300&X-Amz-Signature=ffceda911acd676311e1a04a3e46ceceb2f3565e503cac1c06500bdc9175a2fa&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=36960321&response-content-disposition=attachment%3B%20filename%3Drunc.arm64&response-content-type=application%2Foctet-stream
Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.110.133, 185.199.111.133, 185.199.108.133, ...
Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 10020576 (9.6M) [application/octet-stream]
Saving to: ‘runc.arm64’
runc.arm64 100%[=====================================================================================================================>] 9.56M --.-KB/s in 0.1s
2023-08-24 04:09:02 (80.0 MB/s) - ‘runc.arm64’ saved [10020576/10020576]
# install -m 755 runc.arm64 /usr/local/sbin/runc
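The installed binary can be confirmed with:
# runc --version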
Installing the CNI plugins
Likewise, download the CNI plugin archive and install it.
# wget https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz
--2023-08-24 04:12:19-- https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz
Resolving github.com (github.com)... 140.82.114.4
Connecting to github.com (github.com)|140.82.114.4|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://objects.githubusercontent.com/github-production-release-asset-2e65be/84575398/3eb77db1-766f-4796-aebd-874c5bef349d?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20230824%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230824T041219Z&X-Amz-Expires=300&X-Amz-Signature=98af2db10bbc196d45fd822fab56a85395649e2612966e81f1ba9a44034f2686&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=84575398&response-content-disposition=attachment%3B%20filename%3Dcni-plugins-linux-arm64-v1.3.0.tgz&response-content-type=application%2Foctet-stream [following]
--2023-08-24 04:12:19-- https://objects.githubusercontent.com/github-production-release-asset-2e65be/84575398/3eb77db1-766f-4796-aebd-874c5bef349d?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20230824%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230824T041219Z&X-Amz-Expires=300&X-Amz-Signature=98af2db10bbc196d45fd822fab56a85395649e2612966e81f1ba9a44034f2686&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=84575398&response-content-disposition=attachment%3B%20filename%3Dcni-plugins-linux-arm64-v1.3.0.tgz&response-content-type=application%2Foctet-stream
Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.111.133, 185.199.110.133, 185.199.109.133, ...
Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.111.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 42335820 (40M) [application/octet-stream]
Saving to: ‘cni-plugins-linux-arm64-v1.3.0.tgz’
cni-plugins-linux-arm64-v1.3.0.tgz 100%[=====================================================================================================================>] 40.37M 159MB/s in 0.3s
2023-08-24 04:12:19 (159 MB/s) - ‘cni-plugins-linux-arm64-v1.3.0.tgz’ saved [42335820/42335820]
# mkdir -p /opt/cni/bin
# tar Cxzvf /opt/cni/bin cni-plugins-linux-arm64-v1.3.0.tgz
./
./loopback
./bandwidth
./ptp
./vlan
./host-device
./tuning
./vrf
./sbr
./tap
./dhcp
./static
./firewall
./macvlan
./dummy
./bridge
./ipvlan
./portmap
./host-local
Creating the containerd configuration file
Generate the default configuration file, restart containerd just to be safe, and check its status.
# mkdir /etc/containerd/
# containerd config default > /etc/containerd/config.toml
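Depending on the environment, you may also want to switch runc to the systemd cgroup driver here, which the Kubernetes documentation recommends when the kubelet uses the systemd driver (a minimal sketch against the generated config.toml):
# sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml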
# systemctl restart containerd.service
# systemctl status containerd
● containerd.service - containerd container runtime
Loaded: loaded (/usr/local/lib/systemd/system/containerd.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2023-08-28 02:09:58 UTC; 5s ago
Docs: https://containerd.io
Process: 2663 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
Main PID: 2664 (containerd)
Tasks: 6
Memory: 13.7M
CPU: 74ms
CGroup: /system.slice/containerd.service
└─2664 /usr/local/bin/containerd
Aug 28 02:09:58 worker01 containerd[2664]: time="2023-08-28T02:09:58.692929520Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 28 02:09:58 worker01 containerd[2664]: time="2023-08-28T02:09:58.693085920Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 28 02:09:58 worker01 containerd[2664]: time="2023-08-28T02:09:58.693551760Z" level=info msg="Start subscribing containerd event"
Aug 28 02:09:58 worker01 containerd[2664]: time="2023-08-28T02:09:58.693737040Z" level=info msg="Start recovering state"
Aug 28 02:09:58 worker01 containerd[2664]: time="2023-08-28T02:09:58.708336962Z" level=info msg="Start event monitor"
Aug 28 02:09:58 worker01 containerd[2664]: time="2023-08-28T02:09:58.708495082Z" level=info msg="Start snapshots syncer"
Aug 28 02:09:58 worker01 containerd[2664]: time="2023-08-28T02:09:58.708595642Z" level=info msg="Start cni network conf syncer for default"
Aug 28 02:09:58 worker01 containerd[2664]: time="2023-08-28T02:09:58.708703722Z" level=info msg="Start streaming server"
Aug 28 02:09:58 worker01 systemd[1]: Started containerd container runtime.
Aug 28 02:09:58 worker01 containerd[2664]: time="2023-08-28T02:09:58.712151362Z" level=info msg="containerd successfully booted in 0.059470s"
Installing kubeadm/kubectl/kubelet
Install kubeadm following the official manual.
Preparation
Update the package index:
# apt-get update
Get:1 http://ports.ubuntu.com/ubuntu-ports jammy-security InRelease [110 kB]
Hit:2 http://ca-toronto-1-ad-1.clouds.ports.ubuntu.com/ubuntu-ports jammy InRelease
Get:3 http://ca-toronto-1-ad-1.clouds.ports.ubuntu.com/ubuntu-ports jammy-updates InRelease [119 kB]
Get:4 http://ports.ubuntu.com/ubuntu-ports jammy-security/main arm64 Packages [593 kB]
Get:5 http://ca-toronto-1-ad-1.clouds.ports.ubuntu.com/ubuntu-ports jammy-backports InRelease [109 kB]
Get:6 http://ca-toronto-1-ad-1.clouds.ports.ubuntu.com/ubuntu-ports jammy/universe arm64 Packages [13.9 MB]
Get:7 http://ports.ubuntu.com/ubuntu-ports jammy-security/main Translation-en [155 kB]
Get:8 http://ports.ubuntu.com/ubuntu-ports jammy-security/main arm64 c-n-f Metadata [10.9 kB]
Get:9 http://ports.ubuntu.com/ubuntu-ports jammy-security/restricted arm64 Packages [433 kB]
Get:10 http://ports.ubuntu.com/ubuntu-ports jammy-security/restricted Translation-en [110 kB]
Get:11 http://ports.ubuntu.com/ubuntu-ports jammy-security/restricted arm64 c-n-f Metadata [392 B]
Get:12 http://ports.ubuntu.com/ubuntu-ports jammy-security/universe arm64 Packages [684 kB]
Get:13 http://ports.ubuntu.com/ubuntu-ports jammy-security/universe Translation-en [141 kB]
Get:14 http://ports.ubuntu.com/ubuntu-ports jammy-security/universe arm64 c-n-f Metadata [13.9 kB]
Get:15 http://ports.ubuntu.com/ubuntu-ports jammy-security/multiverse arm64 Packages [19.7 kB]
Get:16 http://ports.ubuntu.com/ubuntu-ports jammy-security/multiverse Translation-en [7060 B]
Get:17 http://ports.ubuntu.com/ubuntu-ports jammy-security/multiverse arm64 c-n-f Metadata [236 B]
Get:18 http://ca-toronto-1-ad-1.clouds.ports.ubuntu.com/ubuntu-ports jammy/universe Translation-en [5652 kB]
Get:19 http://ca-toronto-1-ad-1.clouds.ports.ubuntu.com/ubuntu-ports jammy/universe arm64 c-n-f Metadata [277 kB]
Get:20 http://ca-toronto-1-ad-1.clouds.ports.ubuntu.com/ubuntu-ports jammy/multiverse arm64 Packages [184 kB]
Get:21 http://ca-toronto-1-ad-1.clouds.ports.ubuntu.com/ubuntu-ports jammy/multiverse Translation-en [112 kB]
Get:22 http://ca-toronto-1-ad-1.clouds.ports.ubuntu.com/ubuntu-ports jammy/multiverse arm64 c-n-f Metadata [7064 B]
Get:23 http://ca-toronto-1-ad-1.clouds.ports.ubuntu.com/ubuntu-ports jammy-updates/main arm64 Packages [801 kB]
Get:24 http://ca-toronto-1-ad-1.clouds.ports.ubuntu.com/ubuntu-ports jammy-updates/main Translation-en [214 kB]
Get:25 http://ca-toronto-1-ad-1.clouds.ports.ubuntu.com/ubuntu-ports jammy-updates/main arm64 c-n-f Metadata [15.2 kB]
Get:26 http://ca-toronto-1-ad-1.clouds.ports.ubuntu.com/ubuntu-ports jammy-updates/restricted arm64 Packages [439 kB]
Get:27 http://ca-toronto-1-ad-1.clouds.ports.ubuntu.com/ubuntu-ports jammy-updates/restricted Translation-en [114 kB]
Get:28 http://ca-toronto-1-ad-1.clouds.ports.ubuntu.com/ubuntu-ports jammy-updates/restricted arm64 c-n-f Metadata [388 B]
Get:29 http://ca-toronto-1-ad-1.clouds.ports.ubuntu.com/ubuntu-ports jammy-updates/universe arm64 Packages [877 kB]
Get:30 http://ca-toronto-1-ad-1.clouds.ports.ubuntu.com/ubuntu-ports jammy-updates/universe Translation-en [212 kB]
Get:31 http://ca-toronto-1-ad-1.clouds.ports.ubuntu.com/ubuntu-ports jammy-updates/universe arm64 c-n-f Metadata [19.0 kB]
Get:32 http://ca-toronto-1-ad-1.clouds.ports.ubuntu.com/ubuntu-ports jammy-updates/multiverse arm64 Packages [23.5 kB]
Get:33 http://ca-toronto-1-ad-1.clouds.ports.ubuntu.com/ubuntu-ports jammy-updates/multiverse Translation-en [9768 B]
Get:34 http://ca-toronto-1-ad-1.clouds.ports.ubuntu.com/ubuntu-ports jammy-updates/multiverse arm64 c-n-f Metadata [260 B]
Get:35 http://ca-toronto-1-ad-1.clouds.ports.ubuntu.com/ubuntu-ports jammy-backports/main arm64 Packages [40.5 kB]
Get:36 http://ca-toronto-1-ad-1.clouds.ports.ubuntu.com/ubuntu-ports jammy-backports/main Translation-en [10.2 kB]
Get:37 http://ca-toronto-1-ad-1.clouds.ports.ubuntu.com/ubuntu-ports jammy-backports/main arm64 c-n-f Metadata [388 B]
Get:38 http://ca-toronto-1-ad-1.clouds.ports.ubuntu.com/ubuntu-ports jammy-backports/restricted arm64 c-n-f Metadata [116 B]
Get:39 http://ca-toronto-1-ad-1.clouds.ports.ubuntu.com/ubuntu-ports jammy-backports/universe arm64 Packages [20.6 kB]
Get:40 http://ca-toronto-1-ad-1.clouds.ports.ubuntu.com/ubuntu-ports jammy-backports/universe Translation-en [15.4 kB]
Get:41 http://ca-toronto-1-ad-1.clouds.ports.ubuntu.com/ubuntu-ports jammy-backports/universe arm64 c-n-f Metadata [512 B]
Get:42 http://ca-toronto-1-ad-1.clouds.ports.ubuntu.com/ubuntu-ports jammy-backports/multiverse arm64 c-n-f Metadata [116 B]
Fetched 25.4 MB in 4s (6484 kB/s)
Reading package lists... Done
Install the packages needed to use the Kubernetes apt repository:
# apt-get install -y apt-transport-https ca-certificates curl
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
ca-certificates is already the newest version (20230311ubuntu0.22.04.1).
ca-certificates set to manually installed.
curl is already the newest version (7.81.0-1ubuntu1.13).
curl set to manually installed.
The following NEW packages will be installed:
apt-transport-https
0 upgraded, 1 newly installed, 0 to remove and 53 not upgraded.
Need to get 1510 B of archives.
After this operation, 169 kB of additional disk space will be used.
Get:1 http://ca-toronto-1-ad-1.clouds.ports.ubuntu.com/ubuntu-ports jammy-updates/universe arm64 apt-transport-https all 2.4.10 [1510 B]
Fetched 1510 B in 0s (8545 B/s)
Selecting previously unselected package apt-transport-https.
(Reading database ... 65898 files and directories currently installed.)
Preparing to unpack .../apt-transport-https_2.4.10_all.deb ...
Unpacking apt-transport-https (2.4.10) ...
Setting up apt-transport-https (2.4.10) ...
Scanning processes...
Scanning linux images...
Running kernel seems to be up-to-date.
No services need to be restarted.
No containers need to be restarted.
No user sessions are running outdated binaries.
No VM guests are running outdated hypervisor (qemu) binaries on this host.
Download the Google Cloud public signing key:
# curl -fsSL https://dl.k8s.io/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg
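If this step fails because /etc/apt/keyrings does not exist (it is missing on some older Ubuntu/Debian images), create the directory first:
# mkdir -p -m 755 /etc/apt/keyrings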
Add the Kubernetes apt repository:
# echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main
Installing kubeadm/kubectl/kubelet
# apt-get update
Hit:1 http://ports.ubuntu.com/ubuntu-ports jammy-security InRelease
Hit:2 http://ca-toronto-1-ad-1.clouds.ports.ubuntu.com/ubuntu-ports jammy InRelease
Hit:4 http://ca-toronto-1-ad-1.clouds.ports.ubuntu.com/ubuntu-ports jammy-updates InRelease
Hit:5 http://ca-toronto-1-ad-1.clouds.ports.ubuntu.com/ubuntu-ports jammy-backports InRelease
Get:3 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [8993 B]
Ign:6 https://packages.cloud.google.com/apt kubernetes-xenial/main arm64 Packages
Get:6 https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages [66.7 kB]
Fetched 75.7 kB in 1s (72.2 kB/s)
Reading package lists... Done
Install kubelet, kubeadm, and kubectl:
# apt-get install -y kubelet kubeadm kubectl
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
conntrack cri-tools ebtables kubernetes-cni socat
The following NEW packages will be installed:
conntrack cri-tools ebtables kubeadm kubectl kubelet kubernetes-cni socat
0 upgraded, 8 newly installed, 0 to remove and 53 not upgraded.
Need to get 78.0 MB of archives.
After this operation, 324 MB of additional disk space will be used.
Get:6 http://ca-toronto-1-ad-1.clouds.ports.ubuntu.com/ubuntu-ports jammy/main arm64 conntrack arm64 1:1.4.6-2build2 [32.4 kB]
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main arm64 cri-tools arm64 1.26.0-00 [17.3 MB]
Get:7 http://ca-toronto-1-ad-1.clouds.ports.ubuntu.com/ubuntu-ports jammy/main arm64 ebtables arm64 2.0.11-4build2 [85.4 kB]
Get:8 http://ca-toronto-1-ad-1.clouds.ports.ubuntu.com/ubuntu-ports jammy/main arm64 socat arm64 1.7.4.1-3ubuntu4 [348 kB]
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main arm64 kubernetes-cni arm64 1.2.0-00 [25.8 MB]
Get:3 https://packages.cloud.google.com/apt kubernetes-xenial/main arm64 kubelet arm64 1.28.1-00 [16.8 MB]
Get:4 https://packages.cloud.google.com/apt kubernetes-xenial/main arm64 kubectl arm64 1.28.1-00 [8829 kB]
Get:5 https://packages.cloud.google.com/apt kubernetes-xenial/main arm64 kubeadm arm64 1.28.1-00 [8792 kB]
Fetched 78.0 MB in 4s (18.2 MB/s)
Selecting previously unselected package conntrack.
(Reading database ... 65902 files and directories currently installed.)
Preparing to unpack .../0-conntrack_1%3a1.4.6-2build2_arm64.deb ...
Unpacking conntrack (1:1.4.6-2build2) ...
Selecting previously unselected package cri-tools.
Preparing to unpack .../1-cri-tools_1.26.0-00_arm64.deb ...
Unpacking cri-tools (1.26.0-00) ...
Selecting previously unselected package ebtables.
Preparing to unpack .../2-ebtables_2.0.11-4build2_arm64.deb ...
Unpacking ebtables (2.0.11-4build2) ...
Selecting previously unselected package kubernetes-cni.
Preparing to unpack .../3-kubernetes-cni_1.2.0-00_arm64.deb ...
Unpacking kubernetes-cni (1.2.0-00) ...
Selecting previously unselected package socat.
Preparing to unpack .../4-socat_1.7.4.1-3ubuntu4_arm64.deb ...
Unpacking socat (1.7.4.1-3ubuntu4) ...
Selecting previously unselected package kubelet.
Preparing to unpack .../5-kubelet_1.28.1-00_arm64.deb ...
Unpacking kubelet (1.28.1-00) ...
Selecting previously unselected package kubectl.
Preparing to unpack .../6-kubectl_1.28.1-00_arm64.deb ...
Unpacking kubectl (1.28.1-00) ...
Selecting previously unselected package kubeadm.
Preparing to unpack .../7-kubeadm_1.28.1-00_arm64.deb ...
Unpacking kubeadm (1.28.1-00) ...
Setting up conntrack (1:1.4.6-2build2) ...
Setting up kubectl (1.28.1-00) ...
Setting up ebtables (2.0.11-4build2) ...
Setting up socat (1.7.4.1-3ubuntu4) ...
Setting up cri-tools (1.26.0-00) ...
Setting up kubernetes-cni (1.2.0-00) ...
Setting up kubelet (1.28.1-00) ...
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
Setting up kubeadm (1.28.1-00) ...
Processing triggers for man-db (2.10.2-1) ...
Scanning processes...
Scanning linux images...
Running kernel seems to be up-to-date.
No services need to be restarted.
No containers need to be restarted.
No user sessions are running outdated binaries.
No VM guests are running outdated hypervisor (qemu) binaries on this host.
Pin the versions so they are not upgraded automatically:
# apt-mark hold kubelet kubeadm kubectl
kubelet set on hold.
kubeadm set on hold.
kubectl set on hold.
Check the installed versions:
# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"28", GitVersion:"v1.28.1", GitCommit:"8dc49c4b984b897d423aab4971090e1879eb4f23", GitTreeState:"clean", BuildDate:"2023-08-24T11:21:51Z", GoVersion:"go1.20.7", Compiler:"gc", Platform:"linux/arm64"}
# kubectl version --client
Client Version: v1.28.1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
# kubelet --version
Kubernetes v1.28.1
Creating a Boot Volume backup
Everything up to this point is common to the Control Plane and the Worker Node, so take a backup here; it will be cloned later to create the Worker Node.
In the OCI console, click Create Boot Volume Backup.
Give the backup any name you like and create it.
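The same backup can also be taken with the OCI CLI (a sketch; the boot volume OCID and the display name are placeholders):
$ oci bv boot-volume-backup create \
    --boot-volume-id <boot-volume-ocid> \
    --display-name k8s-node-base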
Creating the Kubernetes cluster
Initializing the Control Plane
Calico will be used as the Pod network add-on, so run init with --pod-network-cidr=192.168.0.0/16:
# kubeadm init --pod-network-cidr=192.168.0.0/16
[init] Using Kubernetes version: v1.28.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0828 01:29:52.711314 3748 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [controlplane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.239]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [controlplane localhost] and IPs [10.0.0.239 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [controlplane localhost] and IPs [10.0.0.239 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 8.004123 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node controlplane as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node controlplane as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: x8ceg8.4rjv6rhee2t3gav3
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.0.0.239:6443 --token x8ceg8.4rjv6rhee2t3gav3 \
--discovery-token-ca-cert-hash sha256:8fa5c9a12623686ad4cddb79cd6acc9d297723bd75a7xxxxxec94dc9ed032ca
Run the commands shown in the output to set up the kubeconfig. Do this as a regular (non-root) user.
# exit
logout
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
Check:
$ kubectl get node
NAME STATUS ROLES AGE VERSION
controlplane NotReady control-plane 82s v1.28.1
At this point the node is NotReady, because no Pod network add-on has been installed yet.
Setting up the Worker Node
Creating the instance
Click the "..." menu to the right of the Boot Volume backup taken earlier and click Restore Boot Volume.
Give it any name and restore (create) the Boot Volume.
From the created Boot Volume, click Create Instance and create the instance the same way as the Control Plane.
Note that the Worker Node uses 1 OCPU / 6 GB of memory.
Joining the cluster
SSH into the Worker Node and run the command that was printed at the end of kubeadm init on the Control Plane.
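If the token from kubeadm init has already expired (tokens are valid for 24 hours by default), a fresh join command can be generated on the Control Plane first:
$ sudo kubeadm token create --print-join-command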
worker01$ sudo kubeadm join 10.0.0.239:6443 --token 6sk44f.49a137wdn073awhy \
--discovery-token-ca-cert-hash sha256:33820f5915702c7a9f27a24638a14193be9479240ccac084cxxxxxx5922ab6b19
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet-check] Initial timeout of 40s passed.
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
On the Control Plane, confirm that the Worker Node has joined the cluster.
$ kubectl get node
NAME STATUS ROLES AGE VERSION
controlplane NotReady control-plane 10m v1.28.1
worker01 NotReady <none> 23s v1.28.1
Note that both nodes are still NotReady at this point, since the Pod network add-on is not yet installed.
Installing the Pod network add-on
This time Calico is used.
Download the manifest and apply it.
$ curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml -O
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 238k 100 238k 0 0 472k 0 --:--:-- --:--:-- --:--:-- 472k
$ kubectl apply -f calico.yaml
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
serviceaccount/calico-cni-plugin created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrole.rbac.authorization.k8s.io/calico-cni-plugin created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-cni-plugin created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
Wait a while and check:
$ kubectl get all -n kube-system
NAME READY STATUS RESTARTS AGE
pod/calico-kube-controllers-7ddc4f45bc-jn6rw 1/1 Running 1 (24s ago) 55s
pod/calico-node-ckwpd 0/1 Running 0 55s
pod/calico-node-jxknt 0/1 Running 1 (30s ago) 55s
pod/coredns-5dd5756b68-6trbg 1/1 Running 0 3m15s
pod/coredns-5dd5756b68-kggrf 1/1 Running 0 3m15s
pod/etcd-controlplane 1/1 Running 9 (2m30s ago) 3m50s
pod/kube-apiserver-controlplane 1/1 Running 9 (3m55s ago) 4m27s
pod/kube-controller-manager-controlplane 1/1 Running 12 (105s ago) 3m50s
pod/kube-proxy-6jqct 1/1 Running 2 (106s ago) 3m16s
pod/kube-proxy-r7nbx 1/1 Running 3 (33s ago) 3m11s
pod/kube-scheduler-controlplane 1/1 Running 13 (105s ago) 3m50s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 4m26s
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/calico-node 2 2 0 2 0 kubernetes.io/os=linux 2m2s
daemonset.apps/kube-proxy 2 2 2 2 2 kubernetes.io/os=linux 4m26s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/calico-kube-controllers 1/1 1 1 2m2s
deployment.apps/coredns 2/2 2 2 4m26s
NAME DESIRED CURRENT READY AGE
replicaset.apps/calico-kube-controllers-7ddc4f45bc 1 1 1 55s
replicaset.apps/coredns-5dd5756b68 2 2 2 3m16s
Once Calico is deployed, the node STATUS changes to Ready.
$ kubectl get node
NAME STATUS ROLES AGE VERSION
controlplane Ready control-plane 167m v1.28.1
worker01 Ready <none> 166m v1.28.1
By the way
This did not work on the first try; I had to redo and rebuild things many times.
Taking frequent backups so you can roll back at any point is a good idea.
This goes beyond Kubernetes, but I am thankful for every managed service out there.