
Upgrading Kubernetes (kubeadm) on a Raspberry Pi

Posted at 2021-10-02

Introduction

Last year, I built a Kubernetes environment on Raspberry Pi, as described in「RaspberryPi 4 にUbuntu20.04 をインストールして、Kubernetes を構築してコンテナを動かす」(installing Ubuntu 20.04 on a Raspberry Pi 4, building Kubernetes, and running containers).
The versions had been pinned for close to a year.
I finally got around to upgrading, ending up at v1.22.1.

Environment

  • Raspberry Pi 4 Model B/4GB x 3 (Master x 1, Worker x 2)
  • Ubuntu 20.04
  • Kubernetes(kubeadm, kubelet, kubectl) v1.19.3
  • Docker 19.03.13

The three Raspberry Pis are resolvable by the following names:
master01.example.jp
worker01.example.jp
worker02.example.jp
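For reference, on a small home cluster like this, the simplest way to get that name resolution is static entries in /etc/hosts on every node. The addresses below are hypothetical placeholders:

```text
# /etc/hosts on each node (example addresses; substitute your own)
192.168.0.11  master01.example.jp
192.168.0.12  worker01.example.jp
192.168.0.13  worker02.example.jp
```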

In this article, the work was done with the Pods stopped. If you cannot stop your Pods, take safety measures such as running kubectl drain first.
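If the Pods must keep running, the drain flow looks roughly like the sketch below. The node name is from this article; note that the flag for evicting Pods with emptyDir volumes changed names across these versions (--delete-local-data in v1.19, --delete-emptydir-data later), so check kubectl drain --help for your client.

```shell
# Evacuate one worker before upgrading it (run from a machine with cluster access).
NODE=worker01.example.jp
kubectl drain "$NODE" --ignore-daemonsets
# ... upgrade kubeadm/kubelet/kubectl on $NODE and restart kubelet ...
kubectl uncordon "$NODE"   # make the node schedulable again
```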

Procedure at a glance

The work proceeds in the following order:

  1. Master Node: kubeadm, kubelet, kubectl
  2. Worker Node 1: kubeadm, kubelet, kubectl
  3. Worker Node 2: kubeadm, kubelet, kubectl

The current version is v1.19.3, and the latest is v1.22.1 (as of September 2021).
The reference I followed, Upgrading kubeadm clusters | Kubernetes, states that skipping minor versions is not supported.
The upgrade therefore proceeds in the order
v1.19.3 --> v1.20.10 --> v1.21.4 --> v1.22.1.
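The one-minor-at-a-time constraint can be sketched as a tiny helper; upgrade_path here is a hypothetical function for illustration, not a kubeadm command:

```shell
# Print the minor-version hops between two Kubernetes releases, one at a time,
# since kubeadm does not support skipping minor versions.
upgrade_path() {
  local start=${1#*.} end=${2#*.} m
  for m in $(seq "$((start + 1))" "$end"); do
    printf '1.%s\n' "$m"
  done
}

upgrade_path 1.19 1.22   # 1.20, 1.21, 1.22: three separate upgrade rounds
```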

Upgrading the Master Node

Work starts with the Master Node.
kubeadm, kubelet, and kubectl are upgraded in that order.

Check the current version (run on the Master Node)

Before starting, confirm the current version.
It is v1.19.3.

$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:47:53Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/arm64"}

$ kubectl get nodes
NAME                  STATUS   ROLES    AGE    VERSION
master01.example.jp   Ready    master   310d   v1.19.3
worker01.example.jp   Ready    worker   310d   v1.19.3
worker02.example.jp   Ready    worker   310d   v1.19.3

Check the kubeadm versions available (run on the Master Node)

First, refresh the package index.

$ sudo apt update

This produced the following error:

Err:2 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY FEEA9169307EA071 NO_PUBKEY 8B57C5C2836F4BEB
Fetched 9383 B in 2s (3829 B/s)
Reading package lists... Done

Since it says public key is not available, I fetched the key again.

$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
OK
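As a side note, apt-key itself is deprecated on newer Ubuntu releases. If the command above stops working at some point, the keyring-file approach that the Kubernetes install docs later moved to looks roughly like this (paths follow the upstream convention; adjust as needed):

```shell
# Store the repository key as a dedicated keyring file instead of using apt-key.
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg \
  | sudo gpg --dearmor -o /usr/share/keyrings/kubernetes-archive-keyring.gpg
# Point the source entry at that keyring with signed-by.
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" \
  | sudo tee /etc/apt/sources.list.d/kubernetes.list
```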

Run the update again.

$ sudo apt update

Check which versions are available to upgrade to.

$ apt-cache madison kubeadm | head
   kubeadm |  1.22.1-00 | https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages
   kubeadm |  1.22.0-00 | https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages
   kubeadm |  1.21.4-00 | https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages
   kubeadm |  1.21.3-00 | https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages
   kubeadm |  1.21.2-00 | https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages
   kubeadm |  1.21.1-00 | https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages
   kubeadm |  1.21.0-00 | https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages
   kubeadm | 1.20.10-00 | https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages
   kubeadm |  1.20.9-00 | https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages

I decided to upgrade to 1.20.10-00, one minor version up.

Upgrading kubeadm (run on the Master Node)

The official docs chain these into a single line, but I ran them one at a time so I could verify each step.

$ sudo apt-mark unhold kubeadm
$ sudo apt install kubeadm=1.20.10-00
$ sudo apt-mark hold kubeadm
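The hold keeps a routine apt upgrade from moving these packages unexpectedly. To double-check what is pinned after re-holding:

```shell
# List held packages; kubeadm should appear here (kubelet/kubectl too, once held).
apt-mark showhold
```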

$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.10", GitCommit:"8152330a2b6ca3621196e62966ef761b8f5a61bb", GitTreeState:"clean", BuildDate:"2021-08-11T18:05:01Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/arm64"}

Check the upgrade plan.
It shows the versions you can upgrade to.

$ sudo kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.19.3
[upgrade/versions] kubeadm version: v1.20.10
I0913 21:35:27.756315 1135113 version.go:254] remote version is much newer: v1.22.1; falling back to: stable-1.20
[upgrade/versions] Latest stable version: v1.20.10
[upgrade/versions] Latest stable version: v1.20.10
[upgrade/versions] Latest version in the v1.19 series: v1.19.14
[upgrade/versions] Latest version in the v1.19 series: v1.19.14

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
kubelet     3 x v1.19.3   v1.19.14

Upgrade to the latest version in the v1.19 series:

COMPONENT                 CURRENT    AVAILABLE
kube-apiserver            v1.19.3    v1.19.14
kube-controller-manager   v1.19.3    v1.19.14
kube-scheduler            v1.19.3    v1.19.14
kube-proxy                v1.19.3    v1.19.14
CoreDNS                   1.7.0      1.7.0
etcd                      3.4.13-0   3.4.13-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.19.14

_____________________________________________________________________

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
kubelet     3 x v1.19.3   v1.20.10

Upgrade to the latest stable version:

COMPONENT                 CURRENT    AVAILABLE
kube-apiserver            v1.19.3    v1.20.10
kube-controller-manager   v1.19.3    v1.20.10
kube-scheduler            v1.19.3    v1.20.10
kube-proxy                v1.19.3    v1.20.10
CoreDNS                   1.7.0      1.7.0
etcd                      3.4.13-0   3.4.13-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.20.10

_____________________________________________________________________


The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________

Two versions were shown: v1.19.14 and v1.20.10.
Run the command for v1.20.10.
Note: there is a single [y/N] confirmation along the way.

$ sudo kubeadm upgrade apply v1.20.10
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.20.10"
[upgrade/versions] Cluster version: v1.19.3
[upgrade/versions] kubeadm version: v1.20.10
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y              <----- enter y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.20.10"...
Static pod: kube-apiserver-master01.example.jp hash: 27ea3f239cf943796f9c27ebee324ccf
Static pod: kube-controller-manager-master01.example.jp hash: 929bd0d134ed517e06910955791c4170
Static pod: kube-scheduler-master01.example.jp hash: ee4c94eb845abf1878fb3c4c489b1365
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-master01.example.jp hash: 3bc96d93e10263c52abe2b7377e22366
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-09-13-21-44-50/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-master01.example.jp hash: 3bc96d93e10263c52abe2b7377e22366
Static pod: etcd-master01.example.jp hash: 3bc96d93e10263c52abe2b7377e22366
Static pod: etcd-master01.example.jp hash: ca73dfcb0097fd8ec6be60bbd2618be7
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests291186170"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-09-13-21-44-50/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-master01.example.jp hash: 27ea3f239cf943796f9c27ebee324ccf
Static pod: kube-apiserver-master01.example.jp hash: 27ea3f239cf943796f9c27ebee324ccf
Static pod: kube-apiserver-master01.example.jp hash: 27ea3f239cf943796f9c27ebee324ccf
Static pod: kube-apiserver-master01.example.jp hash: 913900e53949ba0d02ce31be24f47145
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-09-13-21-44-50/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-master01.example.jp hash: 929bd0d134ed517e06910955791c4170
Static pod: kube-controller-manager-master01.example.jp hash: dcac11caa21c3f362cae65b8b8f5f807
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-09-13-21-44-50/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-master01.example.jp hash: ee4c94eb845abf1878fb3c4c489b1365
Static pod: kube-scheduler-master01.example.jp hash: a861cb849f637ee86b15e52d95cfff1c
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/postupgrade] Applying label node-role.kubernetes.io/control-plane='' to Nodes with label node-role.kubernetes.io/master='' (deprecated)
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.20.10". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

It reports SUCCESS! Your cluster was upgraded to "v1.20.10". Enjoy!, so the upgrade appears to have succeeded.
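For reference, when scripting this step, kubeadm upgrade apply can skip the confirmation prompt with its non-interactive flag:

```shell
# Non-interactive variant of the same upgrade (no [y/N] prompt).
sudo kubeadm upgrade apply v1.20.10 --yes
```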

Upgrading kubelet (run on the Master Node)

Next, upgrade kubelet.
Check the current state, just in case.

$ kubelet --version
Kubernetes v1.19.3

Run the upgrade.

$ sudo apt-mark unhold kubelet
$ sudo apt install kubelet=1.20.10-00
$ sudo apt-mark hold kubelet

Confirm that the version has been bumped.

$ kubelet --version
Kubernetes v1.20.10

kubectl get nodes confirms it as well; at this point only the Master Node has been bumped.

$ kubectl get nodes
NAME                  STATUS   ROLES                  AGE    VERSION
master01.example.jp   Ready    control-plane,master   310d   v1.20.10
worker01.example.jp   Ready    worker                 310d   v1.19.3
worker02.example.jp   Ready    worker                 310d   v1.19.3

Upgrading kubectl (run on the Master Node)

Finally, upgrade kubectl.
Current state: the Client Version is still v1.19.3.

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:50:19Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/arm64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.10", GitCommit:"8152330a2b6ca3621196e62966ef761b8f5a61bb", GitTreeState:"clean", BuildDate:"2021-08-11T18:00:37Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/arm64"}

Run the upgrade.

$ sudo apt-mark unhold kubectl
$ sudo apt install kubectl=1.20.10-00
$ sudo apt-mark hold kubectl

Restart kubelet.

$ sudo systemctl daemon-reload
$ sudo systemctl restart kubelet
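To confirm the restart actually left kubelet running (systemd, as on Ubuntu 20.04):

```shell
# Prints "active" if kubelet came back up after the restart.
sudo systemctl is-active kubelet
```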

Confirm the version has been bumped.

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.10", GitCommit:"8152330a2b6ca3621196e62966ef761b8f5a61bb", GitTreeState:"clean", BuildDate:"2021-08-11T18:06:15Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/arm64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.10", GitCommit:"8152330a2b6ca3621196e62966ef761b8f5a61bb", GitTreeState:"clean", BuildDate:"2021-08-11T18:00:37Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/arm64"}

That completes the Master Node upgrade.
Next, on to the Worker Nodes.

Upgrading Worker Node 01 (run on Worker Node 01)

The steps are essentially the same as on the Master Node (with kubeadm upgrade node in place of kubeadm upgrade plan/apply), so only the commands are listed.

$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
$ sudo apt update
$ apt-cache madison kubeadm | head
$ sudo apt-mark unhold kubeadm
$ sudo apt install kubeadm=1.20.10-00
$ sudo apt-mark hold kubeadm
$ sudo kubeadm upgrade node

$ sudo apt-mark unhold kubelet
$ sudo apt install kubelet=1.20.10-00
$ sudo apt-mark hold kubelet

$ sudo apt-mark unhold kubectl
$ sudo apt install kubectl=1.20.10-00
$ sudo apt-mark hold kubectl

$ sudo systemctl daemon-reload
$ sudo systemctl restart kubelet

$ kubelet --version
Kubernetes v1.20.10

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.10", GitCommit:"8152330a2b6ca3621196e62966ef761b8f5a61bb", GitTreeState:"clean", BuildDate:"2021-08-11T18:06:15Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/arm64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?

(This "connection refused" is expected: no kubeconfig is set up for kubectl on the Worker Node, so the server query falls back to localhost:8080. Checks that talk to the cluster, like kubectl get node below, need to run from a machine with cluster access, such as the Master Node.)

$ kubectl get node
NAME                  STATUS   ROLES                  AGE    VERSION
master01.example.jp   Ready    control-plane,master   310d   v1.20.10
worker01.example.jp   Ready    worker                 310d   v1.20.10
worker02.example.jp   Ready    worker                 310d   v1.19.3

That completes the Worker Node 01 upgrade.

Upgrading Worker Node 02 (run on Worker Node 02)

The steps are identical to Worker Node 01.

$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
$ sudo apt update
$ apt-cache madison kubeadm | head
$ sudo apt-mark unhold kubeadm
$ sudo apt install kubeadm=1.20.10-00
$ sudo apt-mark hold kubeadm
$ sudo kubeadm upgrade node

$ sudo apt-mark unhold kubelet
$ sudo apt install kubelet=1.20.10-00
$ sudo apt-mark hold kubelet

$ sudo apt-mark unhold kubectl
$ sudo apt install kubectl=1.20.10-00
$ sudo apt-mark hold kubectl

$ sudo systemctl daemon-reload
$ sudo systemctl restart kubelet

$ kubelet --version
Kubernetes v1.20.10

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.10", GitCommit:"8152330a2b6ca3621196e62966ef761b8f5a61bb", GitTreeState:"clean", BuildDate:"2021-08-11T18:06:15Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/arm64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?

$ kubectl get node
NAME                  STATUS   ROLES                  AGE    VERSION
master01.example.jp   Ready    control-plane,master   310d   v1.20.10
worker01.example.jp   Ready    worker                 310d   v1.20.10
worker02.example.jp   Ready    worker                 310d   v1.20.10

With that, every Node has been upgraded to v1.20.10.

The same procedure is repeated for v1.20.10 --> v1.21.4 and for v1.21.4 --> v1.22.1.
Only the version numbers change, so the commands are omitted and just the results are shown.

Check after upgrading to v1.21.4


$ kubelet --version
Kubernetes v1.21.4

$ kubectl get node
NAME                  STATUS   ROLES                  AGE    VERSION
master01.example.jp   Ready    control-plane,master   310d   v1.21.4
worker01.example.jp   Ready    worker                 310d   v1.21.4
worker02.example.jp   Ready    worker                 310d   v1.21.4

Check after upgrading to v1.22.1

In the end, the cluster was upgraded all the way to v1.22.1.

$ kubectl get node
NAME                  STATUS   ROLES                  AGE    VERSION
master01.example.jp   Ready    control-plane,master   311d   v1.22.1
worker01.example.jp   Ready    worker                 311d   v1.22.1
worker02.example.jp   Ready    worker                 311d   v1.22.1

Finally, I checked the upgrade plan once more.

$ sudo kubeadm upgrade plan

CoreDNS                   v1.8.4    v1.8.4
etcd                      3.5.0-0   3.5.0-0

(Before the upgrade, for comparison:)
CoreDNS                   1.7.0      1.7.0
etcd                      3.4.13-0   3.4.13-0

Somewhere along the way, CoreDNS and etcd had been upgraded as well.

Switching to containerd

Using Docker as the container runtime has been deprecated since v1.20, and Dockershim (the mechanism that lets the kubelet drive Docker) is scheduled for removal in v1.23.
I had planned to continue with that work here, but the article was getting long, so it was split off into a separate page.
The page below covers switching the container runtime from Docker to containerd:
kubernetesのコンテナランタイムをdockerからcontainerdに変更する
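Before switching runtimes, it helps to confirm what each node is using today; the wide output of kubectl get nodes includes a CONTAINER-RUNTIME column:

```shell
# The CONTAINER-RUNTIME column shows e.g. docker://19.3.13 or containerd://1.4.9.
kubectl get nodes -o wide
```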


Wrapping up

Upgrading Kubernetes had seemed daunting, but it turned out to be straightforward in practice.
The tedious part was the repetition: because skipping minor versions is not supported, the same steps had to be run for every minor hop.
