Let's try upgrading a node that was built with kubeadm.
## TL;DR

I tried this on a single node. In the end, all it came down to was running `kubeadm upgrade apply` as root. That was it.
## Current State

As of 2019/10/29, the latest release is 1.16. I'm currently on kubectl 1.15.2 with a 1.15.4 node, so let's upgrade to 1.16. This is my own development environment, so it's a single node.
```
murata:~ $ kubectl version -o json
{
  "clientVersion": {
    "major": "1",
    "minor": "15",
    "gitVersion": "v1.15.2",
    "gitCommit": "f6278300bebbb750328ac16ee6dd3aa7d3549568",
    "gitTreeState": "clean",
    "buildDate": "2019-08-05T09:23:26Z",
    "goVersion": "go1.12.5",
    "compiler": "gc",
    "platform": "linux/amd64"
  },
  "serverVersion": {
    "major": "1",
    "minor": "15",
    "gitVersion": "v1.15.4",
    "gitCommit": "67d2fcf276fcd9cf743ad4be9a9ef5828adc082f",
    "gitTreeState": "clean",
    "buildDate": "2019-09-18T14:41:55Z",
    "goVersion": "go1.12.9",
    "compiler": "gc",
    "platform": "linux/amd64"
  }
}
```
## Updating the Server

It's nice that the official docs properly cover how to do this:

https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-upgrade/

If you're upgrading from v1.15.0 or later, following the documentation as-is works, but note that people on older releases (v1.13.x, for example) apparently need to follow a different procedure.
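Before committing to an upgrade, it can help to see which versions the Kubernetes package repo actually offers. A minimal sketch, assuming a Fedora/CentOS-style setup with the kubernetes repo configured (yum users: swap `dnf` for `yum`):

```shell
# List every available kubeadm build so you can pick the target
# version explicitly instead of blindly taking the latest.
dnf list --showduplicates kubeadm --disableexcludes=kubernetes
```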
### Updating the kubeadm command

If you installed following the official docs, the packages came in via yum or apt, so just update them the same way. The box below is Fedora, hence dnf; CentOS users, use yum.
```
murata:$ sudo dnf update -y kubelet kubeadm kubectl --disableexcludes=kubernetes
Last metadata expiration check: 0:12:49 ago on Tue 29 Oct 2019 12:22:01 PM JST.
Dependencies resolved.
================================================================================
 Package    Architecture   Version     Repository    Size
================================================================================
Upgrading:
 kubeadm    x86_64         1.16.2-0    kubernetes    9.5 M
 kubectl    x86_64         1.16.2-0    kubernetes     10 M
 kubelet    x86_64         1.16.2-0    kubernetes     22 M

Transaction Summary
================================================================================
Upgrade  3 Packages

Total download size: 42 M
Downloading Packages:
(1/3): 26d3e29e517cb0fd27fca12c02bd75ffa306bc5ce78c587d83a0242ba20588f0-kubectl-1.16.2-0.x86_64.rpm   13 MB/s |  10 MB  00:00
(2/3): bd3de06f520c4a8c0017b653e2673cd6cd1b1386213b600f018fb67d93ffd60b-kubeadm-1.16.2-0.x86_64.rpm  9.5 MB/s | 9.5 MB  00:00
(3/3): 0939db1dc940fa6800429c7ebef9d51fd9d46ff540589817cdb1927b8fae7aaa-kubelet-1.16.2-0.x86_64.rpm   15 MB/s |  22 MB  00:01
--------------------------------------------------------------------------------
Total                                                      27 MB/s |  42 MB  00:01
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                1/1
  Running scriptlet: kubelet-1.16.2-0.x86_64        1/1
  Upgrading        : kubelet-1.16.2-0.x86_64        1/6
  Upgrading        : kubectl-1.16.2-0.x86_64        2/6
  Upgrading        : kubeadm-1.16.2-0.x86_64        3/6
  Cleanup          : kubeadm-1.15.2-0.x86_64        4/6
  Cleanup          : kubectl-1.15.2-0.x86_64        5/6
  Cleanup          : kubelet-1.15.2-0.x86_64        6/6
  Running scriptlet: kubelet-1.15.2-0.x86_64        6/6
  Verifying        : kubeadm-1.16.2-0.x86_64        1/6
  Verifying        : kubeadm-1.15.2-0.x86_64        2/6
  Verifying        : kubectl-1.16.2-0.x86_64        3/6
  Verifying        : kubectl-1.15.2-0.x86_64        4/6
  Verifying        : kubelet-1.16.2-0.x86_64        5/6
  Verifying        : kubelet-1.15.2-0.x86_64        6/6

Upgraded:
  kubeadm-1.16.2-0.x86_64  kubectl-1.16.2-0.x86_64  kubelet-1.16.2-0.x86_64

Complete!
```
### kubeadm upgrade plan

It lays out exactly what you need to do.
```
murata:$ sudo kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.15.4
[upgrade/versions] kubeadm version: v1.16.2
[upgrade/versions] Latest stable version: v1.16.2
[upgrade/versions] Latest version in the v1.15 series: v1.15.5

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     1 x v1.15.2   v1.16.2

Upgrade to the latest stable version:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.15.4   v1.16.2
Controller Manager   v1.15.4   v1.16.2
Scheduler            v1.15.4   v1.16.2
Kube Proxy           v1.15.4   v1.16.2
CoreDNS              1.3.1     1.6.2
Etcd                 3.3.10    3.3.15-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.16.2

_____________________________________________________________________
```
### kubeadm upgrade apply v1.16.2

Run the command exactly as the plan tells you.
```
murata:$ sudo kubeadm upgrade apply v1.16.2
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/version] You have chosen to change the cluster version to "v1.16.2"
[upgrade/versions] Cluster version: v1.15.4
[upgrade/versions] kubeadm version: v1.16.2
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]:
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.16.2"...
Static pod: kube-apiserver-murata.cloud.deroris.local hash: d25aa1df5a18d5efe2864f91939a75b6
Static pod: kube-controller-manager-murata.cloud.deroris.local hash: 8d4d1622b04e438627409d2541f41f17
Static pod: kube-scheduler-murata.cloud.deroris.local hash: 005af131c75f8b2bb5add0110835dbda
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-murata.cloud.deroris.local hash: 0487325602f4b35b7a2cbd7fe4b865e7
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-10-29-12-41-10/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-murata.cloud.deroris.local hash: 0487325602f4b35b7a2cbd7fe4b865e7
Static pod: etcd-murata.cloud.deroris.local hash: a95a7eabb94571f2604711796a221d2e
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests316107992"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-10-29-12-41-10/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-murata.cloud.deroris.local hash: d25aa1df5a18d5efe2864f91939a75b6
Static pod: kube-apiserver-murata.cloud.deroris.local hash: 42f01eefecdb8dbd6c19f1a676c724b0
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-10-29-12-41-10/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-murata.cloud.deroris.local hash: 8d4d1622b04e438627409d2541f41f17
Static pod: kube-controller-manager-murata.cloud.deroris.local hash: 4ff73fa535d5dfc8b2957bde3d1fda9b
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-10-29-12-41-10/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-murata.cloud.deroris.local hash: 005af131c75f8b2bb5add0110835dbda
Static pod: kube-scheduler-murata.cloud.deroris.local hash: 9d4ef5aaf6d635080b909c1cf7a3f703
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.16.2". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
```
Pods kept running even while the upgrade was in progress. (Officially, you're supposed to drain the node and move workloads to another node first.)
If you have multiple nodes, you're also told to run `kubeadm upgrade node` on each of them. The docs say to restart the kubelet afterwards, so I ran `sudo systemctl restart kubelet`.
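For reference, the per-worker flow from the official docs looks roughly like this. A sketch only, since this setup is a single node; `worker-1` is a placeholder node name:

```shell
# On the control plane: cordon the worker and evict its Pods first
kubectl drain worker-1 --ignore-daemonsets

# On the worker itself: fetch the new kubelet config, then restart it
sudo kubeadm upgrade node
sudo systemctl daemon-reload
sudo systemctl restart kubelet

# Back on the control plane: allow Pods to schedule onto the worker again
kubectl uncordon worker-1
```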
In the end, it was just a matter of doing what's laid out here:

https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
## Verification

Everything is properly updated. In my case the client lives on the same server; if your kubectl is on a local machine, don't forget to update it as well.
```
murata:$ kubectl version -o json
{
  "clientVersion": {
    "major": "1",
    "minor": "16",
    "gitVersion": "v1.16.2",
    "gitCommit": "c97fe5036ef3df2967d086711e6c0c405941e14b",
    "gitTreeState": "clean",
    "buildDate": "2019-10-15T19:18:23Z",
    "goVersion": "go1.12.10",
    "compiler": "gc",
    "platform": "linux/amd64"
  },
  "serverVersion": {
    "major": "1",
    "minor": "16",
    "gitVersion": "v1.16.2",
    "gitCommit": "c97fe5036ef3df2967d086711e6c0c405941e14b",
    "gitTreeState": "clean",
    "buildDate": "2019-10-15T19:09:08Z",
    "goVersion": "go1.12.10",
    "compiler": "gc",
    "platform": "linux/amd64"
  }
}
```
## Summary

Following the official docs was all it took.

Note that some manifest apiVersions have changed: for example, the Deployment apiVersion moved from extensions/v1beta1 to apps/v1.
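As a minimal illustration of that change (the name, labels, and image below are placeholders), a Deployment manifest for 1.16 looks like this:

```yaml
# Before v1.16 this Deployment could declare:
#   apiVersion: extensions/v1beta1
# That version was removed in v1.16, so use:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example            # placeholder name
spec:
  replicas: 1
  selector:                # apps/v1 makes spec.selector required
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: example
        image: nginx:1.17  # placeholder image
```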
I referred to this article for how to look up apiVersions:

https://qiita.com/soymsk/items/69aeaa7945fe1f875822