
Kubernetes Node Pool Blue Green Upgrade on Anthos Baremetal

Introduction

In Anthos Baremetal, the feature that decouples worker node pool upgrades from the control plane and allows upgrading one node pool at a time went GA in v1.16.0.
Using this feature, I tried upgrading a Kubernetes cluster by preparing node pools as Blue/Green pools and switching the workload between them; this article records the results.

As of this writing (September 2023), a built-in Blue/Green upgrade feature like GKE's is not provided, so node pool addition and workload migration have to be done manually.

Test environment

My home lab. The configuration is shown below; to save resources, there is a single node pool with a single node.

  • Deployment model: multi-cluster deployment
    • 1 admin cluster, 1 user cluster
  • Admin cluster configuration
    • Cluster name: admincluster
    • 1 non-HA control plane node
    • Spec: 4 cores / 16 GiB / 128 GiB (minimum spec)
  • User cluster configuration
    • Cluster name: usercluster1
    • Control plane nodes:
      • 1 non-HA control plane node
      • Spec: 4 cores / 16 GiB / 128 GiB (minimum spec)
    • Worker nodes:
      • 1 non-HA worker node
      • Spec: 4 cores / 16 GiB / 128 GiB (minimum spec)
  • Versions
    • Anthos 1.15.1
    • Kubernetes 1.26 (v1.26.2-gke.1001)

The build procedure, network settings, and parameters are described at the link below.

I will try upgrading this environment from v1.15.1 to v1.16.0.

As for the files used in this test, on the Admin Workstation the configuration YAML for each cluster is placed under $HOME/anthos/ at the following paths:

AdminCluster : bmctl-workspace/admincluster/admincluster.yaml
Usercluster : bmctl-workspace/usercluster1/usercluster1.yaml
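Later commands reference these paths repeatedly, so it can be convenient to put them into variables. A small sketch, assuming the directory layout above (the variable names are my own):

```shell
# Convenience variables for the layout above (illustrative names; adjust the
# base directory if your Admin Workstation differs).
ANTHOS_HOME="$HOME/anthos"
ADMIN_KUBECONFIG="$ANTHOS_HOME/bmctl-workspace/admincluster/admincluster-kubeconfig"
USER_KUBECONFIG="$ANTHOS_HOME/bmctl-workspace/usercluster1/usercluster1-kubeconfig"
echo "admin kubeconfig: $ADMIN_KUBECONFIG"
```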

Below is the kubectl get node output in the test environment:

$ kubectl get node
NAME       STATUS   ROLES           AGE     VERSION
admin01    Ready    control-plane   12m     v1.26.2-gke.1001
worker01   Ready    worker          9m45s   v1.26.2-gke.1001

The state of the control plane node pool (usercluster1) and the worker node pool (np1) is shown below (excerpted and collapsed for length):

 `kubectl describe nodepools.baremetal.cluster.gke.io` excerpt 
$ kubectl describe nodepools.baremetal.cluster.gke.io --kubeconfig bmctl-workspace/admincluster/admincluster-kubeconfig -n cluster-usercluster1 usercluster1
Name:         usercluster1
Namespace:    cluster-usercluster1
Labels:       baremetal.cluster.gke.io/control-plane=true
              baremetal.cluster.gke.io/cp-load-balancer=true
              baremetal.cluster.gke.io/dp-load-balancer=true
Annotations:  baremetal.cluster.gke.io/control-plane: true
              baremetal.cluster.gke.io/cp-load-balancer: true
              baremetal.cluster.gke.io/dp-load-balancer: true
              baremetal.cluster.gke.io/version: 1.15.1
API Version:  baremetal.cluster.gke.io/v1
Kind:         NodePool
Metadata:
  Creation Timestamp:  2023-09-16T05:08:32Z
  Finalizers:
    baremetal.cluster.gke.io/node-pool-finalizer
  Generation:  1
  Owner References:
    API Version:           baremetal.cluster.gke.io/v1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  Cluster
    Name:                  usercluster1
    UID:                   b88166a4-b8de-4b31-ac43-bbda077fe9b2
  Resource Version:        17341
  UID:                     930eab25-d9c0-473e-aa75-37d387593bf0
Spec:
  Cluster Name:  usercluster1
  Nodes:
    Address:         192.168.133.11
  Operating System:  linux
Status:
  Anthos Bare Metal Versions:
    1.15.1:  1
  Conditions:
    Last Transition Time:  2023-09-16T05:26:17Z
    Observed Generation:   1
    Reason:                PreflightCheckPassed
    Status:                True
    Type:                  PreflightCheckSuccessful
    Last Transition Time:  2023-09-16T05:26:17Z
    Observed Generation:   1
    Reason:                NodepoolReady
    Status:                True
    Type:                  Ready
    Last Transition Time:  2023-09-16T05:26:17Z
    Observed Generation:   1
    Reason:                ReconciliationCompleted
    Status:                False
    Type:                  Reconciling
  Gke Versions:
    v1.26.2-gke.1001:  1
  Maintenance Nodes:   0
  Managed Fields:
  Ready Nodes:        1
  Ready Timestamp:    2023-09-16T05:08:32Z
  Reconciling Nodes:  0
  Stalled Nodes:      0
  Unknown Nodes:      0
Events:

...

$ kubectl --kubeconfig bmctl-workspace/admincluster/admincluster-kubeconfig describe nodepools.baremetal.cluster.gke.io -n cluster-usercluster1 np1
Name:         np1
Namespace:    cluster-usercluster1
Labels:       <none>
Annotations:  baremetal.cluster.gke.io/version: 1.15.1
API Version:  baremetal.cluster.gke.io/v1
Kind:         NodePool
Metadata:
  Creation Timestamp:  2023-09-16T05:07:56Z
  Finalizers:
    baremetal.cluster.gke.io/node-pool-finalizer
  Generation:  1
  Owner References:
    API Version:           baremetal.cluster.gke.io/v1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  Cluster
    Name:                  usercluster1
    UID:                   b88166a4-b8de-4b31-ac43-bbda077fe9b2
  Resource Version:        17323
  UID:                     ef51f3ea-7498-47b5-b408-d3a8502bd608
Spec:
  Anthos Bare Metal Version:  1.15.1
  Cluster Name:               usercluster1
  Nodes:
    Address:         192.168.133.21
  Operating System:  linux
Status:
  Anthos Bare Metal Versions:
    1.15.1:  1
  Conditions:
    Last Transition Time:  2023-09-16T05:26:16Z
    Observed Generation:   1
    Reason:                PreflightCheckPassed
    Status:                True
    Type:                  PreflightCheckSuccessful
    Last Transition Time:  2023-09-16T05:26:16Z
    Observed Generation:   1
    Reason:                NodepoolReady
    Status:                True
    Type:                  Ready
    Last Transition Time:  2023-09-16T05:26:16Z
    Observed Generation:   1
    Reason:                ReconciliationCompleted
    Status:                False
    Type:                  Reconciling
  Gke Versions:
    v1.26.2-gke.1001:  1
  Maintenance Nodes:   0
  Managed Fields:
  Ready Nodes:        1
  Ready Timestamp:    2023-09-16T05:21:11Z
  Reconciling Nodes:  0
  Stalled Nodes:      0
  Unknown Nodes:      0
Events:

...

Both the Anthos version (1.15.1) and the GKE version (v1.26.2-gke.1001) can be confirmed.

Below, only the Anthos version is extracted in the test environment:

$ kubectl get nodepools.baremetal.cluster.gke.io  -n cluster-usercluster1 usercluster1 -ojsonpath='{.status.anthosBareMetalVersions}' --kubeconfig bmctl-workspace/admincluster/admincluster-kubeconfig
{"1.15.1":1}

$ kubectl get nodepools.baremetal.cluster.gke.io  -n cluster-usercluster1 np1 -ojsonpath='{.status.anthosBareMetalVersions}' --kubeconfig bmctl-workspace/admincluster/admincluster-kubeconfig
{"1.15.1":1}
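The anthosBareMetalVersions map is version → node count, so one quick way to assert that a pool is fully on a single version is to parse that output. A minimal sketch (the JSON literal stands in for the jsonpath output above, and this simple parsing only handles the single-version case):

```shell
expect="1.15.1"
# In practice: VERSIONS_JSON=$(kubectl get nodepools.baremetal.cluster.gke.io ... -ojsonpath='{.status.anthosBareMetalVersions}')
VERSIONS_JSON='{"1.15.1":1}'
# Strip braces/quotes and take the key, e.g. {"1.15.1":1} -> 1.15.1
pool_version=$(printf '%s' "$VERSIONS_JSON" | tr -d '{}" ' | cut -d: -f1)
if [ "$pool_version" = "$expect" ]; then
  echo "node pool fully on $expect"
else
  echo "mixed or unexpected versions: $VERSIONS_JSON" >&2
fi
```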

Test application

For the test, I deploy nginx Pods and monitor access to them from outside the cluster.
This gives a simple measurement of how long external access to the workload is interrupted during the cluster upgrade.

  • image: nginx
  • PDB: minAvailable 1
  • replicas: 3
  • Service type: LoadBalancer
  • readinessProbe: periodSeconds 1
 Test nginx Deployment / Service (type: LoadBalancer) setup 
export KUBECONFIG=$HOME/anthos/bmctl-workspace/usercluster1/usercluster1-kubeconfig
kubectl create ns test-upgrade

cat <<EOF > test-upgrade-nginx.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: test-upgrade
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        readinessProbe:
          httpGet:
            port: 80
            path: /
          failureThreshold: 1
          periodSeconds: 1
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: test-upgrade
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: nginx
  namespace: test-upgrade
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: nginx
EOF

kubectl apply -f test-upgrade-nginx.yaml

kubectl -n test-upgrade get po

kubectl -n test-upgrade get svc -o jsonpath='{.items[?(@.metadata.name=="nginx")].status.loadBalancer.ingress[0].ip}'

Note the loadBalancer IP printed by the last command.

Access monitoring is done from an external Zabbix server issuing HTTP requests every 30 seconds, which serves as the outage measurement.
(With polling every few seconds, responses sometimes failed even in normal operation, perhaps because the Zabbix host is underpowered, hence 30 seconds.)

For second-level granularity, I also run a simple check with httping at a 1-second interval and 1-second timeout (the address in this environment is 192.168.134.33):

httping 192.168.134.33 -s -i 1 -t 1

With the above, the upgrade tests in this environment use httping to confirm that external access to the workload was never down for 1 second or more, and measure any outage that does occur in 30-second units.
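As a rough illustration of what 30-second polling can tell you: n consecutive failed polls at interval T bound the true outage between (n-1)*T and (n+1)*T. A self-contained sketch with made-up sample data:

```shell
# Made-up probe log for illustration: "<seconds> <up|down>", one line per 30 s poll.
cat <<'EOF' > probe.log
0 up
30 down
60 down
90 up
EOF
# n failed polls at a 30 s interval bound the outage to ((n-1)*30, (n+1)*30).
bounds=$(awk '$2 == "down" { n++ } END { printf "%d and %d", (n-1)*30, (n+1)*30 }' probe.log)
echo "outage lasted between ${bounds} seconds"
rm -f probe.log
```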

Staged Upgrade Using Node Pools

I try a staged upgrade: add a new node pool at the upgraded version alongside the existing one, then migrate the Pods over.

In this chapter, Pod migration is done with drain (a method using nodeSelector (affinity) is covered in a separate chapter).

First, I lay out the execution order in a diagram, then follow it step by step.
Each step lists the command to run; the test output is collapsed.

Execution order

Take a backup and upgrade the tool (bmctl) before starting.
In Anthos Baremetal, the upgrade order is admin cluster → user cluster, and control plane nodes → worker nodes.

  1. Cluster backup
  2. Upgrade bmctl
  3. Upgrade the admin cluster
  4. Upgrade the control plane
  5. Add a Green node pool at the new version
  6. Migrate Pods with cordon/drain
  7. Delete the old node pool (the Blue node pool)

(Screenshot: diagram of the upgrade order)

Preliminary check: review Known Issues

Before starting, it is recommended to confirm that no Known Issues have been reported for the target version.

(Screenshot: Known Issues page of the documentation)

For other pre-upgrade checks, the Best Practices section of the documentation is a good reference.

1. Cluster backup

Before upgrading, take a backup of each cluster.

User cluster

Check the cluster state before backing up:

bmctl check cluster -c usercluster1 --kubeconfig bmctl-workspace/admincluster/admincluster-kubeconfig 
 Output 
$ bmctl check cluster -c usercluster1 --kubeconfig bmctl-workspace/admincluster/admincluster-kubeconfig 
Please check the logs at bmctl-workspace/usercluster1/log/check-cluster-20230916-054834/check-cluster.log
[2023-09-16 05:48:37+0000] Waiting for health check job to finish... OK
[2023-09-16 05:49:07+0000] - Validation Category: machines, network, add-ons and kubernetes
[2023-09-16 05:49:07+0000] 	- [PASSED] 192.168.133.11
[2023-09-16 05:49:07+0000] 	- [PASSED] 192.168.133.21
[2023-09-16 05:49:07+0000] 	- [PASSED] add-ons
[2023-09-16 05:49:07+0000] 	- [PASSED] kubernetes
[2023-09-16 05:49:07+0000] 	- [PASSED] node-network
[2023-09-16 05:49:07+0000] Flushing logs... OK

As the documentation (image below) states, confirm that the status.conditions entry whose type is "Reconciling" has status False:

(Screenshot: documentation excerpt on the Reconciling condition)

kubectl get cluster usercluster1 -n cluster-usercluster1 --kubeconfig=bmctl-workspace/admincluster/admincluster-kubeconfig -ojsonpath='{.status.conditions[?(@.type == "Reconciling")].status}'
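Gating on that check in a script might look like the following sketch (STATUS stands in for the command's output; the variable names are illustrative):

```shell
# In practice: STATUS=$(kubectl get cluster usercluster1 -n cluster-usercluster1 ... )
STATUS="False"
if [ "$STATUS" = "False" ]; then
  ready="yes"; echo "cluster is not reconciling; safe to back up"
else
  ready="no";  echo "cluster is still reconciling; wait and re-check"
fi
```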

Run the backup:

bmctl backup cluster --kubeconfig=bmctl-workspace/admincluster/admincluster-kubeconfig -c usercluster1
 Output 
$ bmctl backup cluster --kubeconfig=bmctl-workspace/admincluster/admincluster-kubeconfig -c usercluster1
Please check the logs at bmctl-workspace/usercluster1/log/backup-20230916-055032/backup.log
[2023-09-16 05:50:33+0000] Anthos Bare Metal Cluster Backup will save all Anthos Bare Metal related resources and config files from each node to the backup file. Please note that these files will include sensitive credentials like service account keys and ssh keys. Please keep the output tarball in a safe place.
[backup/confirm] Are you sure you want to proceed with the backup? [y/N]: y
[2023-09-16 05:50:37+0000] Take etcd snapshot on pod etcd-admin01
[2023-09-16 05:50:37+0000] Backup files on machine 192.168.133.11
[2023-09-16 05:50:51+0000] Backup files on machine 192.168.133.21
[2023-09-16 05:50:54+0000] The back up file is at bmctl-workspace/backups/usercluster1_backup_2023-09-16T055037Z.tar.gz. It contains information about sensitive keys. Please keep this file safe.

Admin cluster

Check the cluster state before backing up:

bmctl check cluster -c admincluster --kubeconfig bmctl-workspace/admincluster/admincluster-kubeconfig 
 Output 
$ bmctl check cluster -c admincluster --kubeconfig bmctl-workspace/admincluster/admincluster-kubeconfig
Please check the logs at bmctl-workspace/admincluster/log/check-cluster-20230916-060706/check-cluster.log
[2023-09-16 06:07:12+0000] Creating bootstrap cluster... OK
[2023-09-16 06:09:52+0000] Installing dependency components... ⠴ W0916 06:10:46.946119  429118 schema.go:149] unexpected field validation directive: validator, skipping validation
[2023-09-16 06:09:52+0000] Installing dependency components... OK
[2023-09-16 06:13:44+0000] Waiting for health check job to finish... OK
[2023-09-16 06:15:24+0000] - Validation Category: machines, network, add-ons and kubernetes
[2023-09-16 06:15:24+0000] 	- [PASSED] 192.168.133.2
[2023-09-16 06:15:24+0000] 	- [PASSED] add-ons
[2023-09-16 06:15:24+0000] 	- [PASSED] kubernetes
[2023-09-16 06:15:24+0000] 	- [PASSED] node-network
[2023-09-16 06:15:24+0000] Flushing logs... OK
[2023-09-16 06:15:24+0000] Deleting bootstrap cluster... OK

Confirm that the status.conditions entry whose type is "Reconciling" has status False:

kubectl get cluster admincluster -n cluster-admincluster --kubeconfig=bmctl-workspace/admincluster/admincluster-kubeconfig -ojsonpath='{.status.conditions[?(@.type == "Reconciling")].status}'

Run the backup:

bmctl backup cluster --kubeconfig=bmctl-workspace/admincluster/admincluster-kubeconfig -c admincluster
 Output 
$ bmctl backup cluster --kubeconfig=bmctl-workspace/admincluster/admincluster-kubeconfig -c admincluster
Please check the logs at bmctl-workspace/admincluster/log/backup-20230916-061713/backup.log
[2023-09-16 06:17:19+0000] Anthos Bare Metal Cluster Backup will save all Anthos Bare Metal related resources and config files from each node to the backup file. Please note that these files will include sensitive credentials like service account keys and ssh keys. Please keep the output tarball in a safe place.
[backup/confirm] Are you sure you want to proceed with the backup? [y/N]: y
[2023-09-16 06:17:27+0000] Take etcd snapshot on pod etcd-admincl01
[2023-09-16 06:17:27+0000] Backup files on machine 192.168.133.2
[2023-09-16 06:17:44+0000] The back up file is at bmctl-workspace/backups/admincluster_backup_2023-09-16T061727Z.tar.gz. It contains information about sensitive keys. Please keep this file safe.

2. Upgrade bmctl

Switch bmctl, the Anthos Baremetal CLI tool, to the target version:

export ANTHOSVERSION=1.16.0
bmctl version
mkdir ~/anthos/bin/old/
mv ~/anthos/bin/bmctl ~/anthos/bin/old/bmctl
gsutil cp gs://anthos-baremetal-release/bmctl/$ANTHOSVERSION/linux-amd64/bmctl ~/anthos/bin/
chmod +x ~/anthos/bin/bmctl
bmctl version

Confirm that the output shows bmctl version: 1.16.0-gke.26, as below:

[2023-09-09 07:47:13+0000] bmctl version: 1.16.0-gke.26, git commit: 97a6fb01605a615292fcaea753be8d4942f9934a, build date: Wed Aug 23 16:18:46 PDT 2023
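To guard against accidentally running the wrong binary later, the version string can be checked mechanically. A sketch (VERSION_LINE copies the sample output above; in practice it would come from bmctl version):

```shell
ANTHOSVERSION=1.16.0
# In practice: VERSION_LINE=$(bmctl version)
VERSION_LINE='bmctl version: 1.16.0-gke.26, git commit: 97a6fb01605a615292fcaea753be8d4942f9934a'
case "$VERSION_LINE" in
  *"bmctl version: ${ANTHOSVERSION}"*) bmctl_ok=yes; echo "bmctl is ${ANTHOSVERSION}" ;;
  *)                                   bmctl_ok=no;  echo "unexpected bmctl version: ${VERSION_LINE}" >&2 ;;
esac
```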

3. Upgrade the admin cluster

As the documentation states, the admin cluster must be upgraded before the user clusters.

(Screenshot: documentation excerpt on cluster upgrade order)

So the admin cluster (admincluster) is upgraded first.

Edit the configuration file

In the kind: Cluster resource, rewrite anthosBareMetalVersion to the upgrade version.
For worker node pools, add the existing version explicitly.

admincluster.yaml
...
apiVersion: baremetal.cluster.gke.io/v1
kind: Cluster
metadata:
  name: admincluster
  namespace: cluster-admincluster
spec:
  type: admin
  profile: default
- anthosBareMetalVersion: 1.15.1
+ anthosBareMetalVersion: 1.16.0
...
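The one-line diff above can also be applied with sed instead of hand-editing. A sketch (a sample file stands in for admincluster.yaml here; on the real file, keep the .bak backup it creates):

```shell
# Sample stand-in for bmctl-workspace/admincluster/admincluster.yaml
cat <<'EOF' > sample-admincluster.yaml
spec:
  type: admin
  profile: default
  anthosBareMetalVersion: 1.15.1
EOF
# Bump the version in place, keeping a .bak copy of the original.
sed -i.bak 's/anthosBareMetalVersion: 1\.15\.1/anthosBareMetalVersion: 1.16.0/' sample-admincluster.yaml
bumped=$(grep anthosBareMetalVersion sample-admincluster.yaml)
echo "$bumped"
rm -f sample-admincluster.yaml sample-admincluster.yaml.bak
```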

Preflight check

Run a pre-upgrade validation:

bmctl check preflight --kubeconfig=bmctl-workspace/admincluster/admincluster-kubeconfig -c admincluster
 Output 
$ bmctl check preflight --kubeconfig=bmctl-workspace/admincluster/admincluster-kubeconfig -c admincluster
[2023-09-16 06:22:41+0000] Runnning command: bmctl check preflight --kubeconfig=bmctl-workspace/admincluster/admincluster-kubeconfig -c admincluster
Please check the logs at bmctl-workspace/admincluster/log/check-preflight-20230916-062241/check-preflight.log
[2023-09-16 06:22:46+0000] Waiting for preflight check job to finish... OK
[2023-09-16 06:26:06+0000] - Validation Category: machines and network
[2023-09-16 06:26:06+0000] 	- [PASSED] cluster-upgrade-check
[2023-09-16 06:26:06+0000] 	- [PASSED] gcp
[2023-09-16 06:26:06+0000] 	- [PASSED] node-network
[2023-09-16 06:26:06+0000] 	- [PASSED] pod-cidr
[2023-09-16 06:26:06+0000] 	- [PASSED] 192.168.133.2
[2023-09-16 06:26:06+0000] 	- [PASSED] 192.168.133.2-gcp
[2023-09-16 06:26:06+0000] Flushing logs... OK

Run the upgrade

Upgrade the admin cluster (admincluster):

bmctl upgrade cluster --kubeconfig=bmctl-workspace/admincluster/admincluster-kubeconfig -c admincluster

While this command ran, httping showed no interruption to the workload (the test application).

 Output 

$ kubectl --kubeconfig bmctl-workspace/admincluster/admincluster-kubeconfig get node
NAME        STATUS   ROLES           AGE    VERSION
admincl01   Ready    control-plane   102m   v1.26.2-gke.1001

$ bmctl upgrade cluster --kubeconfig=bmctl-workspace/admincluster/admincluster-kubeconfig -c admincluster
[2023-09-16 06:35:57+0000] Runnning command: bmctl upgrade cluster --kubeconfig=bmctl-workspace/admincluster/admincluster-kubeconfig -c admincluster
Please check the logs at bmctl-workspace/admincluster/log/upgrade-cluster-20230916-063557/upgrade-cluster.log
[2023-09-16 06:35:57+0000] Before upgrade, please use `bmctl backup cluster` to create a backup.
[2023-09-16 06:36:02+0000] Cluster.Spec.GKEOnPremAPI is not specified. This cluster will enroll automatically to GKE onprem API for easier management with gcloud, UI and terraform after upgrade if GKE onprem API is enabled in GCP services. To unenroll, just update the Cluster.Spec.GKEOnPremAPI.Enabled to be false after upgrade.
[2023-09-16 06:36:05+0000] The current version of cluster is 1.15.1
[2023-09-16 06:36:05+0000] The version to be upgraded to is 1.16.0
[2023-09-16 06:36:06+0000] Waiting for preflight check job to finish... OK
[2023-09-16 06:37:16+0000] - Validation Category: machines and network
[2023-09-16 06:37:16+0000] 	- [PASSED] 192.168.133.2-gcp
[2023-09-16 06:37:16+0000] 	- [PASSED] cluster-upgrade-check
[2023-09-16 06:37:16+0000] 	- [PASSED] gcp
[2023-09-16 06:37:16+0000] 	- [PASSED] node-network
[2023-09-16 06:37:16+0000] 	- [PASSED] pod-cidr
[2023-09-16 06:37:16+0000] 	- [PASSED] 192.168.133.2
[2023-09-16 06:37:16+0000] Flushing logs... OK
[2023-09-16 06:37:16+0000] Bumping the old version 1.15.1 to new version 1.16.0 in the cluster resource.
[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':16; pending: 0/1
(... repeated progress lines omitted ...)
upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1
upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  
upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  
upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  
upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  
upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  
upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  
upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  
upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  
upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  
upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  
upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  
upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  
upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  
upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  
upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.2'; draining[Number of pods yet to drain]: '192.168.133.2':1; pending: 0/1 u[2023-09-16 06:37:16+0000] Waiting for machines to upgrade... OK
[2023-09-16 06:57:36+0000] Writing kubeconfig file: clusterName = admincluster, path = bmctl-workspace/admincluster/admincluster-kubeconfig

$ kubectl --kubeconfig bmctl-workspace/admincluster/admincluster-kubeconfig get node
NAME        STATUS   ROLES           AGE    VERSION
admincl01   Ready    control-plane   126m   v1.27.4-gke.1600

※ Took about 22 minutes

※ No disruption to application traffic on the User Cluster was observed (simple check with httping)
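The httping check above can be reproduced with any simple HTTP prober. Below is a minimal sketch of the idea; it probes a local test server as a self-contained stand-in, since the actual service VIP is not shown here.

```shell
# Local stand-in for the user cluster service VIP so the sketch runs anywhere;
# in the real check, replace the URL with the service address being monitored.
python3 -m http.server 18080 --bind 127.0.0.1 >/dev/null 2>&1 &
SRV=$!
sleep 1
fails=0
for i in 1 2 3 4 5; do
  # One probe per iteration; count it as a failure if the GET does not succeed.
  python3 -c 'import sys, urllib.request; urllib.request.urlopen(sys.argv[1], timeout=1)' \
    "http://127.0.0.1:18080/" 2>/dev/null || fails=$((fails + 1))
  sleep 0.2
done
kill "$SRV"
echo "failed probes: ${fails}/5"
```

A sustained run of failed probes during the upgrade corresponds to the traffic disruption discussed later in this article.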

4. Upgrading the Control Plane (User Cluster)

Modify the configuration file

In the user cluster's configuration yaml, change anthosBareMetalVersion in the kind: Cluster resource to the target upgrade version (1.16.0).
To keep the worker node pool from being upgraded at the same time, add the current version (1.15.1) to the NodePool spec.

usercluster1.yaml
...
apiVersion: baremetal.cluster.gke.io/v1
kind: Cluster
metadata:
  name: usercluster1
  namespace: cluster-usercluster1
spec:
  type: user
  profile: default
- anthosBareMetalVersion: 1.15.1
+ anthosBareMetalVersion: 1.16.0
...

apiVersion: baremetal.cluster.gke.io/v1
kind: NodePool
metadata:
  name: np1
  namespace: cluster-usercluster1
spec:
+ anthosBareMetalVersion: 1.15.1
  clusterName: usercluster1
  nodes:
  - address: 192.168.133.21
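Before running the preflight check, it can help to confirm that the Cluster resource now points at 1.16.0 while the NodePool stays pinned at 1.15.1. A minimal sketch with grep; a reduced stand-in file is used here instead of the real bmctl-workspace/usercluster1/usercluster1.yaml so the commands are self-contained:

```shell
# Stand-in for usercluster1.yaml, reduced to the two fields edited above.
cat <<'EOF' > /tmp/usercluster1.yaml
apiVersion: baremetal.cluster.gke.io/v1
kind: Cluster
spec:
  anthosBareMetalVersion: 1.16.0
---
apiVersion: baremetal.cluster.gke.io/v1
kind: NodePool
spec:
  anthosBareMetalVersion: 1.15.1
EOF

# Two hits are expected: 1.16.0 for the Cluster, 1.15.1 for the NodePool.
grep -n 'anthosBareMetalVersion' /tmp/usercluster1.yaml
```

If both versions match the intent, only the control plane will be bumped by the upgrade below.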

preflight check

bmctl check preflight --kubeconfig=bmctl-workspace/admincluster/admincluster-kubeconfig -c usercluster1
 Output 
$ bmctl check preflight --kubeconfig=bmctl-workspace/admincluster/admincluster-kubeconfig -c usercluster1
[2023-09-17 01:18:57+0000] Runnning command: bmctl check preflight --kubeconfig=bmctl-workspace/admincluster/admincluster-kubeconfig -c usercluster1
Please check the logs at bmctl-workspace/usercluster1/log/check-preflight-20230917-011857/check-preflight.log
[2023-09-17 01:18:58+0000] Waiting for preflight check job to finish... OK
[2023-09-17 01:20:08+0000] - Validation Category: machines and network
[2023-09-17 01:20:08+0000] 	- [PASSED] 192.168.133.11
[2023-09-17 01:20:08+0000] 	- [PASSED] 192.168.133.11-gcp
[2023-09-17 01:20:08+0000] 	- [PASSED] cluster-upgrade-check
[2023-09-17 01:20:08+0000] 	- [PASSED] gcp
[2023-09-17 01:20:08+0000] 	- [PASSED] node-network
[2023-09-17 01:20:08+0000] 	- [PASSED] pod-cidr
[2023-09-17 01:20:08+0000] Flushing logs... OK

Run the upgrade

bmctl upgrade cluster --kubeconfig=bmctl-workspace/admincluster/admincluster-kubeconfig -c usercluster1

As detailed in the output below, a traffic disruption occurred during the upgrade run with the above command

 Output 
$ kubectl get node
NAME       STATUS   ROLES           AGE   VERSION
admin01    Ready    control-plane   20h   v1.26.2-gke.1001
worker01   Ready    worker          20h   v1.26.2-gke.1001

$ bmctl upgrade cluster --kubeconfig=bmctl-workspace/admincluster/admincluster-kubeconfig -c usercluster1
[2023-09-17 01:37:21+0000] Runnning command: bmctl upgrade cluster --kubeconfig=bmctl-workspace/admincluster/admincluster-kubeconfig -c usercluster1
Please check the logs at bmctl-workspace/usercluster1/log/upgrade-cluster-20230917-013721/upgrade-cluster.log
[2023-09-17 01:37:21+0000] Before upgrade, please use `bmctl backup cluster` to create a backup.
[2023-09-17 01:37:22+0000] Cluster.Spec.GKEOnPremAPI is not specified. This cluster will enroll automatically to GKE onprem API for easier management with gcloud, UI and terraform after upgrade if GKE onprem API is enabled in GCP services. To unenroll, just update the Cluster.Spec.GKEOnPremAPI.Enabled to be false after upgrade.
[2023-09-17 01:37:26+0000] Waiting for preflight check job to finish... OK
[2023-09-17 01:38:36+0000] - Validation Category: machines and network
[2023-09-17 01:38:36+0000] 	- [PASSED] 192.168.133.11-gcp
[2023-09-17 01:38:36+0000] 	- [PASSED] cluster-upgrade-check
[2023-09-17 01:38:36+0000] 	- [PASSED] gcp
[2023-09-17 01:38:36+0000] 	- [PASSED] node-network
[2023-09-17 01:38:36+0000] 	- [PASSED] pod-cidr
[2023-09-17 01:38:36+0000] 	- [PASSED] 192.168.133.11
[2023-09-17 01:38:36+0000] Flushing logs... OK
[2023-09-17 01:38:36+0000] Node pool usercluster1 will be upgraded to 1.16.0 version.
[2023-09-17 01:38:36+0000] Bumping the old version 1.15.1 to new version 1.16.0 in the cluster resource.
[2023-09-17 01:38:36+0000] Waiting for machines to upgrade... ⠙ I0917 01:38:40.739096  458446 request.go:690] Waited for 1.171553148s due to client-side throttling, not priority and fairness, request: GET:https://192.168.133.65:443/api/v1/namespaces/cluster-usercluster1/pods/bm-system-192.168.133.11-machine-prefligfb0f8ccc48a0a05bd55ztwq/log?container=ansible-runner&follow=true
[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':27; pending: 0/1
(repeated spinner redraws of the same progress line omitted; pods yet to drain decreased 27 → 24 → 7)
upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1
upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  
upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  
upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  
upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  
upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  
upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  
upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  
upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade...  upgrading: '192.168.133.11'; draining[Number of pods yet to drain]: '192.168.133.11':7; pending: 0/1[2023-09-17 01:38:36+0000] Waiting for machines to upgrade... OK
[2023-09-17 01:59:06+0000] Writing kubeconfig file: clusterName = usercluster1, path = bmctl-workspace/usercluster1/usercluster1-kubeconfig

$ kubectl get node 
NAME       STATUS   ROLES           AGE   VERSION
admin01    Ready    control-plane   20h   v1.27.4-gke.1600
worker01   Ready    worker          20h   v1.26.2-gke.1001

The upgrade took about 22 minutes.

This confirms that only the control-plane node admin01 was upgraded to v1.27.4.

During the control plane upgrade, however, the workload experienced connectivity loss, as shown below:

$ httping 192.168.134.33 -s -i 1 -t 1
PING 192.168.134.33:80 (/):
connected to 192.168.134.33:80 (237 bytes), seq=0 time=  2.75 ms 200 OK
...
connected to 192.168.134.33:80 (237 bytes), seq=1427 time=  3.38 ms 200 OK
connect time out
...
connect time out
connected to 192.168.134.33:80 (237 bytes), seq=1933 time=  3.35 ms 200 OK
...
connected to 192.168.134.33:80 (237 bytes), seq=2134 time=  2.41 ms 200 OK
--- http://192.168.134.33/ ping statistics ---
2135 connects, 1631 ok, 23.61% failed, time 2622680ms
round-trip min/avg/max = 2.3/3.0/100.9 ms
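When repeating this measurement across runs, the failure rate can also be pulled out of the httping statistics line mechanically. A minimal sketch, assuming only the statistics format shown above (the sample line is the one from this run):

```shell
# Extract the "failed" percentage from an httping statistics line
line='2135 connects, 1631 ok, 23.61% failed, time 2622680ms'
failed=$(printf '%s\n' "$line" | awk '{for(i=2;i<=NF;i++) if($i=="failed,") print $(i-1)}')
echo "$failed"   # 23.61%
```

In practice the `line` variable would be fed from the last statistics line of the httping log instead of a hard-coded sample.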

The outage lasted about 16 minutes 29 seconds, as roughly measured with Zabbix.

(Screenshot: Zabbix graph of the outage)

I have not traced the detailed architecture, so further investigation is needed into why a control plane node affects the workload network (networking or otherwise), and whether an HA control plane would avoid this (I did not go that far this time).

5. Adding a Green node pool at the new version

Append a new node pool definition at version 1.16.0, separate from the existing pool, to the user cluster configuration YAML.

usercluster1.yaml (excerpt of the appended section)
---
apiVersion: baremetal.cluster.gke.io/v1
kind: NodePool
metadata:
  name: np1-green
  namespace: cluster-usercluster1
spec:
  clusterName: usercluster1
  anthosBareMetalVersion: 1.16.0
  nodes:
  - address: 192.168.133.22

Run the preflight check:

bmctl check preflight --kubeconfig=bmctl-workspace/admincluster/admincluster-kubeconfig -c usercluster1
Output
$ bmctl check preflight --kubeconfig=bmctl-workspace/admincluster/admincluster-kubeconfig -c usercluster1
[2023-09-17 02:34:01+0000] Runnning command: bmctl check preflight --kubeconfig=bmctl-workspace/admincluster/admincluster-kubeconfig -c usercluster1
Please check the logs at bmctl-workspace/usercluster1/log/check-preflight-20230917-023401/check-preflight.log
[2023-09-17 02:34:02+0000] Waiting for preflight check job to finish... OK
[2023-09-17 02:35:12+0000] - Validation Category: machines and network
[2023-09-17 02:35:12+0000] 	- [PASSED] 192.168.133.22
[2023-09-17 02:35:12+0000] 	- [PASSED] 192.168.133.22-gcp
[2023-09-17 02:35:12+0000] 	- [PASSED] gcp
[2023-09-17 02:35:12+0000] 	- [PASSED] node-network
[2023-09-17 02:35:12+0000] 	- [PASSED] pod-cidr
[2023-09-17 02:35:12+0000] Flushing logs... OK

Run the cluster update to add the node:

bmctl update cluster -c usercluster1 --kubeconfig=bmctl-workspace/admincluster/admincluster-kubeconfig

Wait for the node to be added:

kubectl get node -w

No connectivity loss to the workload (test app) was observed via httping while this ran.
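The interactive watch above can also be scripted for automation. A minimal sketch that counts Ready worker nodes; here it is fed from a captured sample so it runs standalone, but on a live cluster the sample would come from `kubectl get node --no-headers`:

```shell
# Count worker nodes in Ready state.
# Sample output captured from `kubectl get node --no-headers`; in a live
# cluster, replace it with the actual command output.
sample='admin01    Ready      control-plane   21h   v1.27.4-gke.1600
worker01   Ready      worker          21h   v1.26.2-gke.1001
worker02   NotReady   worker          30s   v1.27.4-gke.1600'
ready_workers=$(printf '%s\n' "$sample" | awk '$2=="Ready" && $3=="worker"{n++} END{print n+0}')
echo "$ready_workers"   # 1
```

Alternatively, `kubectl wait --for=condition=Ready node/worker02 --timeout=10m` blocks until the new node is Ready.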

Output
$ bmctl update cluster -c usercluster1 --kubeconfig=bmctl-workspace/admincluster/admincluster-kubeconfig
[2023-09-17 02:47:26+0000] Runnning command: bmctl update cluster -c usercluster1 --kubeconfig=bmctl-workspace/admincluster/admincluster-kubeconfig
Please check the logs at bmctl-workspace/usercluster1/log/update-cluster-20230917-024726/update-cluster.log
[2023-09-17 02:47:27+0000] Deleting bootstrap cluster...
$ kubectl get node -w
NAME       STATUS   ROLES           AGE   VERSION
admin01    Ready    control-plane   21h   v1.27.4-gke.1600
worker01   Ready    worker          21h   v1.26.2-gke.1001
worker02   NotReady   <none>          0s    v1.27.4-gke.1600
worker02   NotReady   <none>          0s    v1.27.4-gke.1600
worker02   NotReady   <none>          0s    v1.27.4-gke.1600
worker02   NotReady   <none>          0s    v1.27.4-gke.1600
worker02   NotReady   worker          0s    v1.27.4-gke.1600
worker02   NotReady   worker          0s    v1.27.4-gke.1600
worker02   NotReady   worker          2s    v1.27.4-gke.1600
worker02   NotReady   worker          10s   v1.27.4-gke.1600
admin01    Ready      control-plane   21h   v1.27.4-gke.1600
worker02   NotReady   worker          23s   v1.27.4-gke.1600
worker01   Ready      worker          21h   v1.26.2-gke.1001
worker02   NotReady   worker          24s   v1.27.4-gke.1600
worker02   NotReady   worker          30s   v1.27.4-gke.1600
worker02   Ready      worker          36s   v1.27.4-gke.1600

$ kubectl get node
NAME       STATUS   ROLES           AGE     VERSION
admin01    Ready    control-plane   21h     v1.27.4-gke.1600
worker01   Ready    worker          21h     v1.26.2-gke.1001
worker02   Ready    worker          3m48s   v1.27.4-gke.1600

The node pool versions were confirmed as follows:

$ kubectl get nodepools.baremetal.cluster.gke.io  -n cluster-usercluster1  --kubeconfig bmctl-workspace/admincluster/admincluster-kubeconfig
NAME           READY   RECONCILING   STALLED   UNDERMAINTENANCE   UNKNOWN
np1            1       0             0         0                  0
np1-green      1       0             0         0                  0
usercluster1   1       0             0         0                  0

$ kubectl get nodepools.baremetal.cluster.gke.io  -n cluster-usercluster1 usercluster1 -ojsonpath='{.status.anthosBareMetalVersions}' --kubeconfig bmctl-workspace/admincluster/admincluster-kubeconfig
{"1.16.0":1}

$ kubectl get nodepools.baremetal.cluster.gke.io  -n cluster-usercluster1 np1 -ojsonpath='{.status.anthosBareMetalVersions}' --kubeconfig bmctl-workspace/admincluster/admincluster-kubeconfig
{"1.15.1":1}

$ kubectl get nodepools.baremetal.cluster.gke.io  -n cluster-usercluster1 np1-green -ojsonpath='{.status.anthosBareMetalVersions}' --kubeconfig bmctl-workspace/admincluster/admincluster-kubeconfig
{"1.16.0":1}
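Since `status.anthosBareMetalVersions` is a map of version to node count, a scripted gate before migrating workloads can assert that the green pool reports only the target version. A sketch using the jsonpath value above (the hard-coded map and the node count of 1 match this single-node pool; in a live cluster, `versions` would come from the jsonpath query shown above):

```shell
# Gate before migration: the green pool must report only the target version.
# versions has the shape {"<version>":<node count>} as returned by the
# jsonpath query above.
versions='{"1.16.0":1}'
target='1.16.0'
if [ "$versions" = "{\"${target}\":1}" ]; then
  echo "green pool ready on ${target}"
else
  echo "green pool not ready: ${versions}" >&2
fi
```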

After the addition, the existing workload (test app) is still placed only on worker01 (note that in environments with a Descheduler or similar, pods may get redistributed automatically):

$ k get po -n test-upgrade -owide
NAME                    READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
nginx-b5d699dd5-dmpvv   1/1     Running   0          15h   10.4.1.206   worker01   <none>           <none>
nginx-b5d699dd5-msb2k   1/1     Running   0          15h   10.4.1.165   worker01   <none>           <none>
nginx-b5d699dd5-xnw2f   1/1     Running   0          15h   10.4.1.41    worker01   <none>           <none>

There was no connectivity loss.

6. Migrating Pods with cordon/drain

Migrate the workload to the green node with cordon/drain:

kubectl cordon worker01
kubectl drain worker01 --ignore-daemonsets --delete-emptydir-data

No connectivity loss to the workload (test app) was observed via httping while these commands ran.
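The `Cannot evict pod as it would violate the pod's disruption budget` retries visible in the drain output below come from a PodDisruptionBudget on the test app: drain evicts pods one at a time so the budget is never violated. With hypothetical values of 3 replicas and `minAvailable: 2` (the actual PDB values are not shown in this article), the arithmetic is:

```shell
# PDB arithmetic (hypothetical numbers for the test app):
replicas=3        # nginx Deployment replicas
min_available=2   # hypothetical PodDisruptionBudget minAvailable
allowed_disruptions=$((replicas - min_available))
echo "$allowed_disruptions"   # pods evictable at once: 1
```

This is why a drain against a PDB-protected workload proceeds gradually rather than evicting everything at once, which in turn is what keeps the service reachable during the migration.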

Output
$ kubectl drain worker01 --ignore-daemonsets --delete-emptydir-data
node/worker01 already cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/anetd-wn6hv, kube-system/gke-metrics-agent-9lnqs, kube-system/kube-proxy-stzxv, kube-system/localpv-2pgqd, kube-system/node-exporter-h8rrn, kube-system/stackdriver-log-forwarder-cv6z6
evicting pod kube-system/ang-controller-manager-5669cb6545-tpzzv
evicting pod test-upgrade/nginx-b5d699dd5-xnw2f
evicting pod kube-system/ang-controller-manager-autoscaler-7ddb7dbfd-c2lt4
evicting pod kube-system/clusterdns-controller-b67d7754-rgf7f
evicting pod kube-system/core-dns-autoscaler-86445f5b7f-n6vwj
evicting pod kube-system/healthcheck-metrics-collector-678c856586-x242l
evicting pod kube-system/kube-state-metrics-54bdc9d49-j8tw2
evicting pod kube-system/metallb-controller-6fc5b574cc-6xppp
evicting pod kube-system/npd-192.168.133.21-vjqd4
evicting pod kube-system/stackdriver-metadata-agent-cluster-level-5dd459ff5b-2ldgd
evicting pod kube-system/stackdriver-operator-54cd6dc494-khp2n
evicting pod test-upgrade/nginx-b5d699dd5-dmpvv
evicting pod test-upgrade/nginx-b5d699dd5-msb2k
evicting pod gke-managed-metrics-server/metrics-server-7f87464c84-87xzn
pod/metallb-controller-6fc5b574cc-6xppp evicted
pod/npd-192.168.133.21-vjqd4 evicted
error when evicting pods/"nginx-b5d699dd5-xnw2f" -n "test-upgrade" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I0917 03:00:47.657703  462551 request.go:697] Waited for 1.112313543s due to client-side throttling, not priority and fairness, request: GET:https://192.168.134.1:443/api/v1/namespaces/kube-system/pods/clusterdns-controller-b67d7754-rgf7f
pod/nginx-b5d699dd5-msb2k evicted
evicting pod test-upgrade/nginx-b5d699dd5-xnw2f
error when evicting pods/"nginx-b5d699dd5-xnw2f" -n "test-upgrade" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
pod/kube-state-metrics-54bdc9d49-j8tw2 evicted
pod/ang-controller-manager-autoscaler-7ddb7dbfd-c2lt4 evicted
pod/healthcheck-metrics-collector-678c856586-x242l evicted
pod/stackdriver-operator-54cd6dc494-khp2n evicted
pod/stackdriver-metadata-agent-cluster-level-5dd459ff5b-2ldgd evicted
pod/metrics-server-7f87464c84-87xzn evicted
pod/clusterdns-controller-b67d7754-rgf7f evicted
pod/ang-controller-manager-5669cb6545-tpzzv evicted
pod/nginx-b5d699dd5-dmpvv evicted
pod/core-dns-autoscaler-86445f5b7f-n6vwj evicted
evicting pod test-upgrade/nginx-b5d699dd5-xnw2f
error when evicting pods/"nginx-b5d699dd5-xnw2f" -n "test-upgrade" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod test-upgrade/nginx-b5d699dd5-xnw2f
error when evicting pods/"nginx-b5d699dd5-xnw2f" -n "test-upgrade" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod test-upgrade/nginx-b5d699dd5-xnw2f
error when evicting pods/"nginx-b5d699dd5-xnw2f" -n "test-upgrade" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod test-upgrade/nginx-b5d699dd5-xnw2f
error when evicting pods/"nginx-b5d699dd5-xnw2f" -n "test-upgrade" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod test-upgrade/nginx-b5d699dd5-xnw2f
error when evicting pods/"nginx-b5d699dd5-xnw2f" -n "test-upgrade" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod test-upgrade/nginx-b5d699dd5-xnw2f
error when evicting pods/"nginx-b5d699dd5-xnw2f" -n "test-upgrade" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod test-upgrade/nginx-b5d699dd5-xnw2f
error when evicting pods/"nginx-b5d699dd5-xnw2f" -n "test-upgrade" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod test-upgrade/nginx-b5d699dd5-xnw2f
error when evicting pods/"nginx-b5d699dd5-xnw2f" -n "test-upgrade" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod test-upgrade/nginx-b5d699dd5-xnw2f
pod/nginx-b5d699dd5-xnw2f evicted
node/worker01 drained
$ k get po -n test-upgrade -owide -w
NAME                    READY   STATUS              RESTARTS   AGE   IP          NODE       NOMINATED NODE   READINESS GATES
nginx-b5d699dd5-7vxq2   0/1     ContainerCreating   0          17s   <none>      worker02   <none>           <none>
nginx-b5d699dd5-whlbn   0/1     ContainerCreating   0          18s   <none>      worker02   <none>           <none>
nginx-b5d699dd5-xnw2f   1/1     Running             0          15h   10.4.1.41   worker01   <none>           <none>
nginx-b5d699dd5-whlbn   0/1     Running             0          48s   10.4.2.224   worker02   <none>           <none>
nginx-b5d699dd5-whlbn   1/1     Running             0          48s   10.4.2.224   worker02   <none>           <none>
nginx-b5d699dd5-xnw2f   1/1     Running             0          15h   10.4.1.41    worker01   <none>           <none>
nginx-b5d699dd5-xnw2f   1/1     Terminating         0          15h   10.4.1.41    worker01   <none>           <none>
nginx-b5d699dd5-bhrvv   0/1     Pending             0          0s    <none>       <none>     <none>           <none>
nginx-b5d699dd5-bhrvv   0/1     Pending             0          0s    <none>       worker02   <none>           <none>
nginx-b5d699dd5-xnw2f   1/1     Terminating         0          15h   10.4.1.41    worker01   <none>           <none>
nginx-b5d699dd5-bhrvv   0/1     ContainerCreating   0          0s    <none>       worker02   <none>           <none>
nginx-b5d699dd5-xnw2f   0/1     Terminating         0          15h   10.4.1.41    worker01   <none>           <none>
nginx-b5d699dd5-xnw2f   0/1     Terminating         0          15h   10.4.1.41    worker01   <none>           <none>
nginx-b5d699dd5-xnw2f   0/1     Terminating         0          15h   10.4.1.41    worker01   <none>           <none>
nginx-b5d699dd5-7vxq2   0/1     Running             0          60s   10.4.2.81    worker02   <none>           <none>
nginx-b5d699dd5-7vxq2   1/1     Running             0          60s   10.4.2.81    worker02   <none>           <none>
nginx-b5d699dd5-bhrvv   0/1     Running             0          24s   10.4.2.134   worker02   <none>           <none>
nginx-b5d699dd5-bhrvv   1/1     Running             0          24s   10.4.2.134   worker02   <none>           <none>

$ k get po -n test-upgrade -owide
NAME                    READY   STATUS    RESTARTS   AGE    IP           NODE       NOMINATED NODE   READINESS GATES
nginx-b5d699dd5-7vxq2   1/1     Running   0          105s   10.4.2.81    worker02   <none>           <none>
nginx-b5d699dd5-bhrvv   1/1     Running   0          55s    10.4.2.134   worker02   <none>           <none>
nginx-b5d699dd5-whlbn   1/1     Running   0          106s   10.4.2.224   worker02   <none>           <none>

There was no outage:

$ httping 192.168.134.33 -s -i 1 -t 1 --ai
PING 192.168.134.33:80 (/):
connected to 192.168.134.33:80 (237 bytes), seq=0 time=  3.02 ms 200 OK
...
connected to 192.168.134.33:80 (237 bytes), seq=1537 time=  2.39 ms 200 OK
--- http://192.168.134.33/ ping statistics ---
1538 connects, 1538 ok, 0.00% failed, time 1537821ms
round-trip min/avg/max = 1.7/2.9/34.7 ms

Rollback

If a problem occurs, roll back by uncordoning the Blue (existing) node, then cordoning and draining the newly added Green node:

kubectl uncordon worker01
kubectl cordon worker02
kubectl drain worker02 --ignore-daemonsets --delete-emptydir-data

7. Delete the old node pool (Blue node pool)

Finally, delete the Blue node pool:

kubectl --kubeconfig=bmctl-workspace/admincluster/admincluster-kubeconfig delete nodepool np1 -n cluster-usercluster1

There was no loss of connectivity to the workload (the test app), as measured with httping, while the above command ran.

Output example
$ k get node 
NAME       STATUS                     ROLES           AGE   VERSION
admin01    Ready                      control-plane   21h   v1.27.4-gke.1600
worker01   Ready,SchedulingDisabled   worker          21h   v1.26.2-gke.1001
worker02   Ready                      worker          21m   v1.27.4-gke.1600

$ kubectl --kubeconfig=bmctl-workspace/admincluster/admincluster-kubeconfig delete nodepool np1 -n cluster-usercluster1
nodepool.baremetal.cluster.gke.io "np1" deleted

$ k get node
NAME       STATUS   ROLES           AGE   VERSION
admin01    Ready    control-plane   21h   v1.27.4-gke.1600
worker02   Ready    worker          25m   v1.27.4-gke.1600

There was no service interruption:

$ httping 192.168.134.33 -s -i 1 -t 1 --ai
PING 192.168.134.33:80 (/):
connected to 192.168.134.33:80 (237 bytes), seq=0 time=  2.33 ms 200 OK
...
connected to 192.168.134.33:80 (237 bytes), seq=305 time=  2.27 ms 200 OK
--- http://192.168.134.33/ ping statistics ---
306 connects, 306 ok, 0.00% failed, time 305776ms
round-trip min/avg/max = 1.6/2.2/4.0 ms
$ kubectl get nodepools.baremetal.cluster.gke.io  -n cluster-usercluster1  --kubeconfig bmctl-workspace/admincluster/admincluster-kubeconfig
NAME           READY   RECONCILING   STALLED   UNDERMAINTENANCE   UNKNOWN
np1-green      1       0             0         0                  0
usercluster1   1       0             0         0                  0

$ kubectl get nodepools.baremetal.cluster.gke.io  -n cluster-usercluster1 usercluster1 -ojsonpath='{.status.anthosBareMetalVersions}' --kubeconfig bmctl-workspace/admincluster/admincluster-kubeconfig
{"1.16.0":1}

$ kubectl get nodepools.baremetal.cluster.gke.io  -n cluster-usercluster1 np1-green -ojsonpath='{.status.anthosBareMetalVersions}' --kubeconfig bmctl-workspace/admincluster/admincluster-kubeconfig
{"1.16.0":1}

Finally, remove or comment out the deleted node pool section in the user cluster configuration file (usercluster1.yaml):

usercluster1.yaml (excerpt of the removed section)
# ---
# apiVersion: baremetal.cluster.gke.io/v1
# kind: NodePool
# metadata:
#   name: np1
#   namespace: cluster-usercluster1
# spec:
#   anthosBareMetalVersion: 1.15.1
#   clusterName: usercluster1
#   nodes:
#   - address: 192.168.133.21

This completes the upgrade.

Below are the Cloud Console screens after the upgrade:

(Screenshots omitted)

Migrating Pods between Blue/Green node pools with node labels

Instead of migrating workloads all at once with drain as above, this section tests a method of attaching labels to the node pools and specifying the placement node in each workload's nodeSelector, so that migration can be done per application.
(A nodeSelector is used here for simplicity; nodeAffinity offers more flexibility.)
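For reference, the equivalent placement constraint written with nodeAffinity might look like the following sketch (the blue/green label comes from this article; the Deployment excerpt itself is illustrative):

```yaml
# Illustrative Deployment excerpt: a hard node affinity rule equivalent to
# nodeSelector "blue/green: blue". requiredDuringSchedulingIgnoredDuringExecution
# is a hard constraint; preferredDuringSchedulingIgnoredDuringExecution would
# allow softer, weighted placement, which is where the extra flexibility lies.
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: blue/green
                operator: In
                values:
                - blue
```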

Order of steps

Up to the control plane upgrade, the procedure is the same as the drain-based method, so the steps below start from after the control plane upgrade has finished.

  1. Add the label blue/green: blue to the node pool
  2. Add blue/green: blue to the workload's nodeSelector
  3. Add a Green node pool at the new version, attaching the label blue/green: green at creation time
  4. Change the workload's nodeSelector to blue/green: green to migrate the Pods
  5. Finally, drain any remaining Pods and delete the blue node pool


1. Add the label blue/green: blue to the node pool (including partially rolling back the environment from the previous run)

Because the test environment was already upgraded to the green node pool in the previous run, create a blue node pool rolled back to the pre-upgrade version 1.15.1.
Set the labels at blue node pool creation time.
Also, delete the green node pool created in the previous run for now.

usercluster1.yaml
---
apiVersion: baremetal.cluster.gke.io/v1
kind: NodePool
metadata:
  name: np1-blue
  namespace: cluster-usercluster1
spec:
  anthosBareMetalVersion: 1.15.1
  clusterName: usercluster1
  nodes:
  - address: 192.168.133.21
  labels:
    blue/green: blue
    pool/name: np1

Apply the change:

bmctl update cluster -c usercluster1 --kubeconfig=bmctl-workspace/admincluster/admincluster-kubeconfig

Check the blue node's label state:

kubectl get node --show-labels

Switch the workloads back from the green node to the blue node:

kubectl cordon worker02
kubectl drain worker02 --ignore-daemonsets --delete-emptydir-data

Confirm the workloads have migrated:

kubectl get po -n test-upgrade -owide

Delete the green node pool for now:

kubectl --kubeconfig=bmctl-workspace/admincluster/admincluster-kubeconfig delete nodepool np1-green -n cluster-usercluster1

Confirm the green node (worker02) has been removed:

kubectl get node
Output example
$ k get node -w
NAME       STATUS   ROLES           AGE   VERSION
admin01    Ready    control-plane   43h   v1.27.4-gke.1600
worker02   Ready    worker          21h   v1.27.4-gke.1600

$ bmctl update cluster -c usercluster1 --kubeconfig=bmctl-workspace/admincluster/admincluster-kubeconfig
[2023-09-18 00:30:55+0000] Runnning command: bmctl update cluster -c usercluster1 --kubeconfig=bmctl-workspace/admincluster/admincluster-kubeconfig
Please check the logs at bmctl-workspace/usercluster1/log/update-cluster-20230918-003055/update-cluster.log
[2023-09-18 00:30:56+0000] Deleting bootstrap cluster...

$ k get node -w
NAME       STATUS   ROLES           AGE   VERSION
admin01    Ready    control-plane   43h   v1.27.4-gke.1600
worker02   Ready    worker          21h   v1.27.4-gke.1600
admin01    Ready    control-plane   43h   v1.27.4-gke.1600
worker01   NotReady   <none>          0s    v1.26.2-gke.1001
worker01   NotReady   <none>          0s    v1.26.2-gke.1001
worker01   NotReady   <none>          0s    v1.26.2-gke.1001
worker01   NotReady   <none>          0s    v1.26.2-gke.1001
worker01   NotReady   worker          0s    v1.26.2-gke.1001
worker01   NotReady   worker          0s    v1.26.2-gke.1001
worker01   NotReady   worker          5s    v1.26.2-gke.1001
worker01   NotReady   worker          10s   v1.26.2-gke.1001
worker02   Ready      worker          21h   v1.27.4-gke.1600
worker01   NotReady   worker          22s   v1.26.2-gke.1001
worker01   NotReady   worker          23s   v1.26.2-gke.1001
worker02   Ready      worker          21h   v1.27.4-gke.1600
worker01   Ready      worker          26s   v1.26.2-gke.1001

$ k get node --show-labels
NAME       STATUS   ROLES           AGE   VERSION            LABELS
admin01    Ready    control-plane   43h   v1.27.4-gke.1600   baremetal.cluster.gke.io/cgroup=v2,baremetal.cluster.gke.io/k8s-ip=192.168.133.11,baremetal.cluster.gke.io/lbnode=true,baremetal.cluster.gke.io/namespace=cluster-usercluster1,baremetal.cluster.gke.io/node-pool=usercluster1,baremetal.cluster.gke.io/version=1.16.0,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=admin01,kubernetes.io/os=linux,networking.gke.io/ang-node=true,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
worker01   Ready    worker          62s   v1.26.2-gke.1001   baremetal.cluster.gke.io/cgroup=v2,baremetal.cluster.gke.io/k8s-ip=192.168.133.21,baremetal.cluster.gke.io/namespace=cluster-usercluster1,baremetal.cluster.gke.io/node-pool=np1-blue,baremetal.cluster.gke.io/version=1.15.1,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,blue/green=blue,cloud.google.com/gke-nodepool=np1-blue,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker01,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,pool/name=np1
worker02   Ready    worker          21h   v1.27.4-gke.1600   baremetal.cluster.gke.io/cgroup=v2,baremetal.cluster.gke.io/k8s-ip=192.168.133.22,baremetal.cluster.gke.io/namespace=cluster-usercluster1,baremetal.cluster.gke.io/node-pool=np1-green,baremetal.cluster.gke.io/version=1.16.0,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,cloud.google.com/gke-nodepool=np1-green,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker02,kubernetes.io/os=linux,node-role.kubernetes.io/worker=

$ kubectl get po -n test-upgrade -owide
NAME                    READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
nginx-b5d699dd5-7vxq2   1/1     Running   0          21h   10.4.2.81    worker02   <none>           <none>
nginx-b5d699dd5-bhrvv   1/1     Running   0          21h   10.4.2.134   worker02   <none>           <none>
nginx-b5d699dd5-whlbn   1/1     Running   0          21h   10.4.2.224   worker02   <none>           <none>

$ kubectl drain worker02 --ignore-daemonsets --delete-emptydir-data
node/worker02 already cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/anetd-lhmjd, kube-system/gke-metrics-agent-l74v8, kube-system/kube-proxy-pt8hv, kube-system/localpv-5m9mm, kube-system/node-exporter-nj9x6, kube-system/stackdriver-log-forwarder-whkm4
evicting pod gke-managed-metrics-server/metrics-server-7f87464c84-rxnxt
evicting pod test-upgrade/nginx-b5d699dd5-whlbn
evicting pod kube-system/ang-controller-manager-5669cb6545-wbg8b
evicting pod kube-system/ang-controller-manager-autoscaler-7ddb7dbfd-pcdjf
evicting pod kube-system/clusterdns-controller-b67d7754-f25x7
evicting pod kube-system/core-dns-autoscaler-86445f5b7f-wpj7j
evicting pod kube-system/healthcheck-metrics-collector-678c856586-8m92s
evicting pod kube-system/kube-state-metrics-54bdc9d49-wbwwx
evicting pod kube-system/metallb-controller-6fc5b574cc-qx8wt
evicting pod kube-system/npd-192.168.133.22-z48kp
evicting pod kube-system/stackdriver-metadata-agent-cluster-level-5dd459ff5b-jdqbv
evicting pod kube-system/stackdriver-operator-54cd6dc494-ss5v4
evicting pod test-upgrade/nginx-b5d699dd5-7vxq2
evicting pod test-upgrade/nginx-b5d699dd5-bhrvv
error when evicting pods/"metrics-server-7f87464c84-rxnxt" -n "gke-managed-metrics-server" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
pod/npd-192.168.133.22-z48kp evicted
pod/metallb-controller-6fc5b574cc-qx8wt evicted
error when evicting pods/"nginx-b5d699dd5-bhrvv" -n "test-upgrade" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
pod/clusterdns-controller-b67d7754-f25x7 evicted
pod/healthcheck-metrics-collector-678c856586-8m92s evicted
pod/nginx-b5d699dd5-whlbn evicted
pod/stackdriver-metadata-agent-cluster-level-5dd459ff5b-jdqbv evicted
pod/stackdriver-operator-54cd6dc494-ss5v4 evicted
pod/nginx-b5d699dd5-7vxq2 evicted
pod/ang-controller-manager-5669cb6545-wbg8b evicted
pod/core-dns-autoscaler-86445f5b7f-wpj7j evicted
I0918 00:36:25.079556  463717 request.go:697] Waited for 1.088185051s due to client-side throttling, not priority and fairness, request: GET:https://192.168.134.1:443/api/v1/namespaces/kube-system/pods/kube-state-metrics-54bdc9d49-wbwwx
pod/kube-state-metrics-54bdc9d49-wbwwx evicted
pod/ang-controller-manager-autoscaler-7ddb7dbfd-pcdjf evicted
evicting pod gke-managed-metrics-server/metrics-server-7f87464c84-rxnxt
error when evicting pods/"metrics-server-7f87464c84-rxnxt" -n "gke-managed-metrics-server" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod test-upgrade/nginx-b5d699dd5-bhrvv
error when evicting pods/"nginx-b5d699dd5-bhrvv" -n "test-upgrade" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod gke-managed-metrics-server/metrics-server-7f87464c84-rxnxt
error when evicting pods/"metrics-server-7f87464c84-rxnxt" -n "gke-managed-metrics-server" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod test-upgrade/nginx-b5d699dd5-bhrvv
error when evicting pods/"nginx-b5d699dd5-bhrvv" -n "test-upgrade" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod gke-managed-metrics-server/metrics-server-7f87464c84-rxnxt
error when evicting pods/"metrics-server-7f87464c84-rxnxt" -n "gke-managed-metrics-server" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod test-upgrade/nginx-b5d699dd5-bhrvv
pod/nginx-b5d699dd5-bhrvv evicted
evicting pod gke-managed-metrics-server/metrics-server-7f87464c84-rxnxt
error when evicting pods/"metrics-server-7f87464c84-rxnxt" -n "gke-managed-metrics-server" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod gke-managed-metrics-server/metrics-server-7f87464c84-rxnxt
error when evicting pods/"metrics-server-7f87464c84-rxnxt" -n "gke-managed-metrics-server" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod gke-managed-metrics-server/metrics-server-7f87464c84-rxnxt
error when evicting pods/"metrics-server-7f87464c84-rxnxt" -n "gke-managed-metrics-server" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod gke-managed-metrics-server/metrics-server-7f87464c84-rxnxt
error when evicting pods/"metrics-server-7f87464c84-rxnxt" -n "gke-managed-metrics-server" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod gke-managed-metrics-server/metrics-server-7f87464c84-rxnxt
error when evicting pods/"metrics-server-7f87464c84-rxnxt" -n "gke-managed-metrics-server" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod gke-managed-metrics-server/metrics-server-7f87464c84-rxnxt
error when evicting pods/"metrics-server-7f87464c84-rxnxt" -n "gke-managed-metrics-server" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod gke-managed-metrics-server/metrics-server-7f87464c84-rxnxt
error when evicting pods/"metrics-server-7f87464c84-rxnxt" -n "gke-managed-metrics-server" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod gke-managed-metrics-server/metrics-server-7f87464c84-rxnxt
error when evicting pods/"metrics-server-7f87464c84-rxnxt" -n "gke-managed-metrics-server" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod gke-managed-metrics-server/metrics-server-7f87464c84-rxnxt
error when evicting pods/"metrics-server-7f87464c84-rxnxt" -n "gke-managed-metrics-server" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod gke-managed-metrics-server/metrics-server-7f87464c84-rxnxt
error when evicting pods/"metrics-server-7f87464c84-rxnxt" -n "gke-managed-metrics-server" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod gke-managed-metrics-server/metrics-server-7f87464c84-rxnxt
error when evicting pods/"metrics-server-7f87464c84-rxnxt" -n "gke-managed-metrics-server" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod gke-managed-metrics-server/metrics-server-7f87464c84-rxnxt
pod/metrics-server-7f87464c84-rxnxt evicted
node/worker02 drained

$ kubectl get po -n test-upgrade -owide -w
NAME                    READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
nginx-b5d699dd5-7vxq2   1/1     Running   0          21h   10.4.2.81    worker02   <none>           <none>
nginx-b5d699dd5-bhrvv   1/1     Running   0          21h   10.4.2.134   worker02   <none>           <none>
nginx-b5d699dd5-whlbn   1/1     Running   0          21h   10.4.2.224   worker02   <none>           <none>
nginx-b5d699dd5-whlbn   1/1     Running   0          21h   10.4.2.224   worker02   <none>           <none>
nginx-b5d699dd5-whlbn   1/1     Terminating   0          21h   10.4.2.224   worker02   <none>           <none>
nginx-b5d699dd5-g2hcj   0/1     Pending       0          0s    <none>       <none>     <none>           <none>
nginx-b5d699dd5-g2hcj   0/1     Pending       0          0s    <none>       worker01   <none>           <none>
nginx-b5d699dd5-whlbn   1/1     Terminating   0          21h   10.4.2.224   worker02   <none>           <none>
nginx-b5d699dd5-g2hcj   0/1     ContainerCreating   0          0s    <none>       worker01   <none>           <none>
nginx-b5d699dd5-whlbn   0/1     Terminating         0          21h   10.4.2.224   worker02   <none>           <none>
nginx-b5d699dd5-7vxq2   1/1     Running             0          21h   10.4.2.81    worker02   <none>           <none>
nginx-b5d699dd5-7vxq2   1/1     Terminating         0          21h   10.4.2.81    worker02   <none>           <none>
nginx-b5d699dd5-7vxq2   1/1     Terminating         0          21h   10.4.2.81    worker02   <none>           <none>
nginx-b5d699dd5-whlbn   0/1     Terminating         0          21h   <none>       worker02   <none>           <none>
nginx-b5d699dd5-xlc6v   0/1     Pending             0          0s    <none>       <none>     <none>           <none>
nginx-b5d699dd5-xlc6v   0/1     Pending             0          0s    <none>       worker01   <none>           <none>
nginx-b5d699dd5-7vxq2   0/1     Terminating         0          21h   10.4.2.81    worker02   <none>           <none>
nginx-b5d699dd5-7vxq2   0/1     Terminating         0          21h   <none>       worker02   <none>           <none>
nginx-b5d699dd5-whlbn   0/1     Terminating         0          21h   10.4.2.224   worker02   <none>           <none>
nginx-b5d699dd5-whlbn   0/1     Terminating         0          21h   10.4.2.224   worker02   <none>           <none>
nginx-b5d699dd5-whlbn   0/1     Terminating         0          21h   10.4.2.224   worker02   <none>           <none>
nginx-b5d699dd5-7vxq2   0/1     Terminating         0          21h   10.4.2.81    worker02   <none>           <none>
nginx-b5d699dd5-7vxq2   0/1     Terminating         0          21h   10.4.2.81    worker02   <none>           <none>
nginx-b5d699dd5-7vxq2   0/1     Terminating         0          21h   10.4.2.81    worker02   <none>           <none>
nginx-b5d699dd5-xlc6v   0/1     ContainerCreating   0          3s    <none>       worker01   <none>           <none>
nginx-b5d699dd5-xlc6v   0/1     Running             0          15s   10.4.3.198   worker01   <none>           <none>
nginx-b5d699dd5-xlc6v   1/1     Running             0          15s   10.4.3.198   worker01   <none>           <none>
nginx-b5d699dd5-bhrvv   1/1     Running             0          21h   10.4.2.134   worker02   <none>           <none>
nginx-b5d699dd5-bhrvv   1/1     Terminating         0          21h   10.4.2.134   worker02   <none>           <none>
nginx-b5d699dd5-m2lrs   0/1     Pending             0          0s    <none>       <none>     <none>           <none>
nginx-b5d699dd5-m2lrs   0/1     Pending             0          0s    <none>       worker01   <none>           <none>
nginx-b5d699dd5-bhrvv   1/1     Terminating         0          21h   10.4.2.134   worker02   <none>           <none>
nginx-b5d699dd5-m2lrs   0/1     ContainerCreating   0          0s    <none>       worker01   <none>           <none>
nginx-b5d699dd5-bhrvv   0/1     Terminating         0          21h   10.4.2.134   worker02   <none>           <none>
nginx-b5d699dd5-g2hcj   0/1     Running             0          17s   10.4.3.224   worker01   <none>           <none>
nginx-b5d699dd5-bhrvv   0/1     Terminating         0          21h   <none>       worker02   <none>           <none>
nginx-b5d699dd5-g2hcj   1/1     Running             0          18s   10.4.3.224   worker01   <none>           <none>
nginx-b5d699dd5-bhrvv   0/1     Terminating         0          21h   10.4.2.134   worker02   <none>           <none>
nginx-b5d699dd5-bhrvv   0/1     Terminating         0          21h   10.4.2.134   worker02   <none>           <none>
nginx-b5d699dd5-bhrvv   0/1     Terminating         0          21h   10.4.2.134   worker02   <none>           <none>
nginx-b5d699dd5-m2lrs   0/1     Running             0          4s    10.4.3.53    worker01   <none>           <none>
nginx-b5d699dd5-m2lrs   1/1     Running             0          5s    10.4.3.53    worker01   <none>           <none>

$ kubectl get po -n test-upgrade -owide
NAME                    READY   STATUS    RESTARTS   AGE     IP           NODE       NOMINATED NODE   READINESS GATES
nginx-b5d699dd5-g2hcj   1/1     Running   0          2m45s   10.4.3.224   worker01   <none>           <none>
nginx-b5d699dd5-m2lrs   1/1     Running   0          2m30s   10.4.3.53    worker01   <none>           <none>
nginx-b5d699dd5-xlc6v   1/1     Running   0          2m45s   10.4.3.198   worker01   <none>           <none>

$ kubectl --kubeconfig=bmctl-workspace/admincluster/admincluster-kubeconfig delete nodepool np1-green -n cluster-usercluster1
nodepool.baremetal.cluster.gke.io "np1-green" deleted

$ k get node
NAME       STATUS   ROLES           AGE     VERSION
admin01    Ready    control-plane   43h     v1.27.4-gke.1600
worker01   Ready    worker          9m24s   v1.26.2-gke.1001

httping showed no downtime (no interruption of one second or more).
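A likely reason no requests were dropped is the PodDisruptionBudget on the test app: the drain log above repeatedly shows "Cannot evict pod as it would violate the pod's disruption budget", which throttles evictions until replacement Pods are Ready. A minimal sketch of such a PDB (minAvailable and the selector label are assumptions, not the article's exact manifest):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: nginx
  namespace: test-upgrade
spec:
  minAvailable: 2        # assumed value: always keep 2 of the 3 replicas serving
  selector:
    matchLabels:
      app: nginx         # assumed label; must match the Deployment's Pod labels
```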

2. Add blue/green: blue to the nodeSelector

Add a nodeSelector to the workload:

test-upgrade-nginx.yaml
apiVersion: apps/v1
kind: Deployment
...
spec:
...
  template:
...
    spec:
+     nodeSelector:
+       blue/green: blue

Apply the change:

kubectl apply -f test-upgrade-nginx.yaml

The change is applied as a rolling update:

kubectl get po -n test-upgrade -owide
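How aggressively the rollout replaces Pods is controlled by the Deployment's update strategy; a hedged sketch of the relevant fields (values are illustrative, not taken from test-upgrade-nginx.yaml):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most 1 extra Pod above the desired replica count
      maxUnavailable: 0    # never drop below the desired replica count mid-rollout
```

With maxUnavailable: 0, each old Pod is terminated only after its replacement is Ready, which matches the uninterrupted behavior observed with httping.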

Confirm the node selector contains blue/green=blue:

kubectl -n test-upgrade describe po | grep Node-Selector
Output example
$ kubectl apply -f test-upgrade-nginx.yaml
deployment.apps/nginx configured
service/nginx unchanged
poddisruptionbudget.policy/nginx configured

$ kubectl get po -n test-upgrade -owide
NAME                    READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
nginx-b5d699dd5-g2hcj   1/1     Running   0          10m   10.4.3.224   worker01   <none>           <none>
nginx-b5d699dd5-m2lrs   1/1     Running   0          10m   10.4.3.53    worker01   <none>           <none>
nginx-b5d699dd5-xlc6v   1/1     Running   0          10m   10.4.3.198   worker01   <none>           <none>

$ kubectl get po -n test-upgrade -owide -w
NAME                    READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
nginx-b5d699dd5-g2hcj   1/1     Running   0          10m   10.4.3.224   worker01   <none>           <none>
nginx-b5d699dd5-m2lrs   1/1     Running   0          10m   10.4.3.53    worker01   <none>           <none>
nginx-b5d699dd5-xlc6v   1/1     Running   0          10m   10.4.3.198   worker01   <none>           <none>
nginx-6c8cc5d994-99z2j   0/1     Pending   0          0s    <none>       <none>     <none>           <none>
nginx-6c8cc5d994-99z2j   0/1     Pending   0          0s    <none>       worker01   <none>           <none>
nginx-6c8cc5d994-99z2j   0/1     ContainerCreating   0          0s    <none>       worker01   <none>           <none>
nginx-6c8cc5d994-99z2j   0/1     Running             0          3s    10.4.3.191   worker01   <none>           <none>
nginx-6c8cc5d994-99z2j   1/1     Running             0          3s    10.4.3.191   worker01   <none>           <none>
nginx-b5d699dd5-xlc6v    1/1     Terminating         0          10m   10.4.3.198   worker01   <none>           <none>
nginx-6c8cc5d994-klj6q   0/1     Pending             0          0s    <none>       <none>     <none>           <none>
nginx-6c8cc5d994-klj6q   0/1     Pending             0          0s    <none>       worker01   <none>           <none>
nginx-6c8cc5d994-klj6q   0/1     ContainerCreating   0          0s    <none>       worker01   <none>           <none>
nginx-b5d699dd5-xlc6v    0/1     Terminating         0          10m   10.4.3.198   worker01   <none>           <none>
nginx-b5d699dd5-xlc6v    0/1     Terminating         0          10m   10.4.3.198   worker01   <none>           <none>
nginx-b5d699dd5-xlc6v    0/1     Terminating         0          10m   10.4.3.198   worker01   <none>           <none>
nginx-6c8cc5d994-klj6q   0/1     Running             0          4s    10.4.3.159   worker01   <none>           <none>
nginx-6c8cc5d994-klj6q   1/1     Running             0          4s    10.4.3.159   worker01   <none>           <none>
nginx-b5d699dd5-g2hcj    1/1     Terminating         0          10m   10.4.3.224   worker01   <none>           <none>
nginx-6c8cc5d994-mc4hk   0/1     Pending             0          0s    <none>       <none>     <none>           <none>
nginx-6c8cc5d994-mc4hk   0/1     Pending             0          0s    <none>       worker01   <none>           <none>
nginx-6c8cc5d994-mc4hk   0/1     ContainerCreating   0          0s    <none>       worker01   <none>           <none>
nginx-b5d699dd5-g2hcj    0/1     Terminating         0          10m   10.4.3.224   worker01   <none>           <none>
nginx-b5d699dd5-g2hcj    0/1     Terminating         0          10m   10.4.3.224   worker01   <none>           <none>
nginx-b5d699dd5-g2hcj    0/1     Terminating         0          10m   10.4.3.224   worker01   <none>           <none>
nginx-6c8cc5d994-mc4hk   0/1     Running             0          4s    10.4.3.3     worker01   <none>           <none>
nginx-6c8cc5d994-mc4hk   1/1     Running             0          4s    10.4.3.3     worker01   <none>           <none>
nginx-b5d699dd5-m2lrs    1/1     Terminating         0          10m   10.4.3.53    worker01   <none>           <none>
nginx-b5d699dd5-m2lrs    0/1     Terminating         0          10m   10.4.3.53    worker01   <none>           <none>
nginx-b5d699dd5-m2lrs    0/1     Terminating         0          10m   10.4.3.53    worker01   <none>           <none>
nginx-b5d699dd5-m2lrs    0/1     Terminating         0          10m   10.4.3.53    worker01   <none>           <none>

$ kubectl get po -n test-upgrade -owide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
nginx-6c8cc5d994-99z2j   1/1     Running   0          56s   10.4.3.191   worker01   <none>           <none>
nginx-6c8cc5d994-klj6q   1/1     Running   0          53s   10.4.3.159   worker01   <none>           <none>
nginx-6c8cc5d994-mc4hk   1/1     Running   0          49s   10.4.3.3     worker01   <none>           <none>

$ kubectl -n test-upgrade describe po | grep Node-Selector
Node-Selectors:              blue/green=blue
Node-Selectors:              blue/green=blue
Node-Selectors:              blue/green=blue

3. Add a Green node pool at the new version, attaching the label blue/green: green at creation time

Append the Green node pool configuration to the user cluster YAML:

usercluster1.yaml
---
apiVersion: baremetal.cluster.gke.io/v1
kind: NodePool
metadata:
  name: np1-green
  namespace: cluster-usercluster1
spec:
  clusterName: usercluster1
  anthosBareMetalVersion: 1.16.0
  nodes:
  - address: 192.168.133.22
  labels:
    blue/green: green
    pool/name: np1

Run the preflight check:

bmctl check preflight --kubeconfig=bmctl-workspace/admincluster/admincluster-kubeconfig -c usercluster1

Add the green node pool:

bmctl update cluster -c usercluster1 --kubeconfig=bmctl-workspace/admincluster/admincluster-kubeconfig

Wait for the node to be added:

kubectl get node -w
Output example
$ bmctl check preflight --kubeconfig=bmctl-workspace/admincluster/admincluster-kubeconfig -c usercluster1
[2023-09-18 05:03:17+0000] Runnning command: bmctl check preflight --kubeconfig=bmctl-workspace/admincluster/admincluster-kubeconfig -c usercluster1
Please check the logs at bmctl-workspace/usercluster1/log/check-preflight-20230918-050317/check-preflight.log
[2023-09-18 05:03:18+0000] Waiting for preflight check job to finish... OK
[2023-09-18 05:04:28+0000] - Validation Category: machines and network
[2023-09-18 05:04:28+0000] 	- [PASSED] node-network
[2023-09-18 05:04:28+0000] 	- [PASSED] pod-cidr
[2023-09-18 05:04:28+0000] 	- [PASSED] 192.168.133.22
[2023-09-18 05:04:28+0000] 	- [PASSED] 192.168.133.22-gcp
[2023-09-18 05:04:28+0000] 	- [PASSED] gcp
[2023-09-18 05:04:28+0000] Flushing logs... OK

$ bmctl update cluster -c usercluster1 --kubeconfig=bmctl-workspace/admincluster/admincluster-kubeconfig
[2023-09-18 05:04:57+0000] Runnning command: bmctl update cluster -c usercluster1 --kubeconfig=bmctl-workspace/admincluster/admincluster-kubeconfig
Please check the logs at bmctl-workspace/usercluster1/log/update-cluster-20230918-050457/update-cluster.log
[2023-09-18 05:04:58+0000] Deleting bootstrap cluster...

$ kubectl get node -w
NAME       STATUS   ROLES           AGE     VERSION
admin01    Ready    control-plane   47h     v1.27.4-gke.1600
worker01   Ready    worker          4h32m   v1.26.2-gke.1001
worker02   NotReady   <none>          0s      v1.27.4-gke.1600
worker02   NotReady   <none>          0s      v1.27.4-gke.1600
worker02   NotReady   <none>          0s      v1.27.4-gke.1600
worker02   NotReady   <none>          0s      v1.27.4-gke.1600
worker02   NotReady   worker          0s      v1.27.4-gke.1600
worker02   NotReady   worker          0s      v1.27.4-gke.1600
worker02   NotReady   worker          4s      v1.27.4-gke.1600
worker02   Ready      worker          6s      v1.27.4-gke.1600
worker02   Ready      worker          6s      v1.27.4-gke.1600

$ kubectl get node --show-labels
NAME       STATUS   ROLES           AGE     VERSION            LABELS
admin01    Ready    control-plane   47h     v1.27.4-gke.1600   baremetal.cluster.gke.io/cgroup=v2,baremetal.cluster.gke.io/k8s-ip=192.168.133.11,baremetal.cluster.gke.io/lbnode=true,baremetal.cluster.gke.io/namespace=cluster-usercluster1,baremetal.cluster.gke.io/node-pool=usercluster1,baremetal.cluster.gke.io/version=1.16.0,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=admin01,kubernetes.io/os=linux,networking.gke.io/ang-node=true,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
worker01   Ready    worker          4h34m   v1.26.2-gke.1001   baremetal.cluster.gke.io/cgroup=v2,baremetal.cluster.gke.io/k8s-ip=192.168.133.21,baremetal.cluster.gke.io/namespace=cluster-usercluster1,baremetal.cluster.gke.io/node-pool=np1-blue,baremetal.cluster.gke.io/version=1.15.1,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,blue/green=blue,cloud.google.com/gke-nodepool=np1-blue,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker01,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,pool/name=np1
worker02   Ready    worker          35s     v1.27.4-gke.1600   baremetal.cluster.gke.io/cgroup=v2,baremetal.cluster.gke.io/k8s-ip=192.168.133.22,baremetal.cluster.gke.io/namespace=cluster-usercluster1,baremetal.cluster.gke.io/node-pool=np1-green,baremetal.cluster.gke.io/version=1.16.0,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,blue/green=green,cloud.google.com/gke-nodepool=np1-green,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker02,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,pool/name=np1

4. Change the workload's nodeSelector to blue/green: green to migrate the Pods

Change the nodeSelector label from blue to green to migrate the Pods:

test-upgrade-nginx.yaml
apiVersion: apps/v1
kind: Deployment
...
spec:
...
  template:
...
    spec:
      nodeSelector:
-       blue/green: blue
+       blue/green: green

Apply the change to migrate the workloads from the blue node pool to the green node pool:

kubectl apply -f test-upgrade-nginx.yaml

Check the Pod migration status (Pods move from worker01 to worker02):

kubectl -n test-upgrade get po -owide -w
Output example
$ kubectl -n test-upgrade get po -owide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
nginx-6c8cc5d994-f565p   1/1     Running   0          90s   10.4.3.50    worker01   <none>           <none>
nginx-6c8cc5d994-mnlcs   1/1     Running   0          86s   10.4.3.34    worker01   <none>           <none>
nginx-6c8cc5d994-zv9gt   1/1     Running   0          82s   10.4.3.199   worker01   <none>           <none>

$ kubectl apply -f test-upgrade-nginx.yaml
deployment.apps/nginx configured
service/nginx unchanged
poddisruptionbudget.policy/nginx configured

$ kubectl -n test-upgrade get po -owide -w
NAME                     READY   STATUS    RESTARTS   AGE    IP           NODE       NOMINATED NODE   READINESS GATES
nginx-6c8cc5d994-f565p   1/1     Running   0          2m7s   10.4.3.50    worker01   <none>           <none>
nginx-6c8cc5d994-mnlcs   1/1     Running   0          2m3s   10.4.3.34    worker01   <none>           <none>
nginx-6c8cc5d994-zv9gt   1/1     Running   0          119s   10.4.3.199   worker01   <none>           <none>
nginx-cd545858c-mc8hz    0/1     Pending   0          0s     <none>       <none>     <none>           <none>
nginx-cd545858c-mc8hz    0/1     Pending   0          0s     <none>       worker02   <none>           <none>
nginx-cd545858c-mc8hz    0/1     ContainerCreating   0          0s     <none>       worker02   <none>           <none>
nginx-cd545858c-mc8hz    0/1     Running             0          4s     10.4.4.51    worker02   <none>           <none>
nginx-cd545858c-mc8hz    1/1     Running             0          4s     10.4.4.51    worker02   <none>           <none>
nginx-6c8cc5d994-zv9gt   1/1     Terminating         0          2m5s   10.4.3.199   worker01   <none>           <none>
nginx-cd545858c-lxdsm    0/1     Pending             0          0s     <none>       <none>     <none>           <none>
nginx-cd545858c-lxdsm    0/1     Pending             0          0s     <none>       worker02   <none>           <none>
nginx-cd545858c-lxdsm    0/1     ContainerCreating   0          0s     <none>       worker02   <none>           <none>
nginx-6c8cc5d994-zv9gt   0/1     Terminating         0          2m5s   10.4.3.199   worker01   <none>           <none>
nginx-6c8cc5d994-zv9gt   0/1     Terminating         0          2m5s   10.4.3.199   worker01   <none>           <none>
nginx-6c8cc5d994-zv9gt   0/1     Terminating         0          2m5s   10.4.3.199   worker01   <none>           <none>
nginx-cd545858c-lxdsm    0/1     Running             0          3s     10.4.4.162   worker02   <none>           <none>
nginx-cd545858c-lxdsm    1/1     Running             0          3s     10.4.4.162   worker02   <none>           <none>
nginx-6c8cc5d994-mnlcs   1/1     Terminating         0          2m12s   10.4.3.34    worker01   <none>           <none>
nginx-cd545858c-lxtr7    0/1     Pending             0          0s      <none>       <none>     <none>           <none>
nginx-cd545858c-lxtr7    0/1     Pending             0          0s      <none>       worker02   <none>           <none>
nginx-cd545858c-lxtr7    0/1     ContainerCreating   0          0s      <none>       worker02   <none>           <none>
nginx-6c8cc5d994-mnlcs   0/1     Terminating         0          2m12s   10.4.3.34    worker01   <none>           <none>
nginx-6c8cc5d994-mnlcs   0/1     Terminating         0          2m12s   10.4.3.34    worker01   <none>           <none>
nginx-6c8cc5d994-mnlcs   0/1     Terminating         0          2m12s   10.4.3.34    worker01   <none>           <none>
nginx-cd545858c-lxtr7    0/1     Running             0          3s      10.4.4.152   worker02   <none>           <none>
nginx-cd545858c-lxtr7    1/1     Running             0          3s      10.4.4.152   worker02   <none>           <none>
nginx-6c8cc5d994-f565p   1/1     Terminating         0          2m19s   10.4.3.50    worker01   <none>           <none>
nginx-6c8cc5d994-f565p   0/1     Terminating         0          2m19s   10.4.3.50    worker01   <none>           <none>
nginx-6c8cc5d994-f565p   0/1     Terminating         0          2m19s   10.4.3.50    worker01   <none>           <none>
nginx-6c8cc5d994-f565p   0/1     Terminating         0          2m19s   10.4.3.50    worker01   <none>           <none>

$ kubectl -n test-upgrade get po -owide
NAME                    READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
nginx-cd545858c-lxdsm   1/1     Running   0          18s   10.4.4.162   worker02   <none>           <none>
nginx-cd545858c-lxtr7   1/1     Running   0          15s   10.4.4.152   worker02   <none>           <none>
nginx-cd545858c-mc8hz   1/1     Running   0          22s   10.4.4.51    worker02   <none>           <none>

An access check with httping showed no disruption during the migration
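The httping check can be approximated with a simple probe loop that counts failed requests; any disruption during the switch shows up as a non-zero failure count. This is a sketch under my own assumptions: probe_loop is a hypothetical helper, and the VIP in the comment is a placeholder, not this environment's address.

```shell
# Probe a URL n times and print how many probes failed. The probe
# command is parameterized (defaults to curl) so the loop itself can
# be exercised without a live cluster.
probe_loop() {
  url=$1; n=$2; fail=0; i=0
  while [ "$i" -lt "$n" ]; do
    "${PROBE:-curl}" -fsS -o /dev/null "$url" || fail=$((fail+1))
    i=$((i+1))
  done
  echo "$fail"
}

# During the migration, run it against the nginx Service VIP, e.g.:
#   probe_loop http://192.0.2.10/ 100   # placeholder address
# A result of "0" means no failed probes were observed.
```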

Rolling back is as simple as changing the nodeSelector back to blue

5. Drain the remaining Pods and delete the blue node pool

Finally, delete the blue node pool

Drain the remaining resources

kubectl cordon worker01
kubectl drain worker01 --ignore-daemonsets --delete-emptydir-data
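Before deleting the pool, it is worth confirming that only DaemonSet-managed Pods remain on worker01. A minimal sketch; pods_on_node is a hypothetical helper that filters kubectl output read from stdin, so the counting logic can be checked without a cluster:

```shell
# Hypothetical helper: counts Pods scheduled on the given node in
# "kubectl get pods -A -o wide --no-headers" output read from stdin.
pods_on_node() {
  grep -c "[[:space:]]$1[[:space:]]" || true
}

# With a live cluster:
#   kubectl get pods -A -o wide --no-headers | pods_on_node worker01
# (or use: kubectl get pods -A --field-selector spec.nodeName=worker01)
# After the drain, only DaemonSet Pods (anetd, kube-proxy, ...) remain.
```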

Delete the blue node pool

kubectl --kubeconfig=bmctl-workspace/admincluster/admincluster-kubeconfig delete nodepool np1-blue -n cluster-usercluster1

Finally, remove or comment out the deleted node pool's section in the user cluster configuration file (usercluster1.yaml)

Output
$ k get po -A -owide | grep worker01
kube-system                  anetd-bnn7n                                                 2/2     Running     0             4h43m   192.168.133.21   worker01   <none>           <none>
kube-system                  ang-controller-manager-5669cb6545-nxndl                     1/1     Running     0             4h39m   192.168.133.21   worker01   <none>           <none>
kube-system                  ang-controller-manager-autoscaler-7ddb7dbfd-rvxrf           1/1     Running     0             4h39m   10.4.3.52        worker01   <none>           <none>
kube-system                  clusterdns-controller-b67d7754-cgslb                        2/2     Running     0             4h39m   10.4.3.18        worker01   <none>           <none>
kube-system                  core-dns-autoscaler-86445f5b7f-hl6pl                        1/1     Running     0             4h39m   10.4.3.8         worker01   <none>           <none>
kube-system                  gke-metrics-agent-rpc7t                                     1/1     Running     0             4h43m   10.4.3.197       worker01   <none>           <none>
kube-system                  healthcheck-metrics-collector-678c856586-nhq4x              2/2     Running     0             4h39m   10.4.3.143       worker01   <none>           <none>
kube-system                  kube-proxy-59w89                                            1/1     Running     0             4h43m   192.168.133.21   worker01   <none>           <none>
kube-system                  kube-state-metrics-54bdc9d49-xx94v                          1/1     Running     0             4h39m   10.4.3.203       worker01   <none>           <none>
kube-system                  localpv-z97v6                                               1/1     Running     0             4h43m   10.4.3.233       worker01   <none>           <none>
kube-system                  metallb-controller-6fc5b574cc-2b8mv                         1/1     Running     0             4h39m   10.4.3.184       worker01   <none>           <none>
kube-system                  node-exporter-hlxtp                                         1/1     Running     0             4h43m   192.168.133.21   worker01   <none>           <none>
kube-system                  npd-192.168.133.21-wpqwq                                    0/1     Completed   0             4h43m   10.4.3.127       worker01   <none>           <none>
kube-system                  stackdriver-log-forwarder-c85pm                             1/1     Running     0             4h43m   10.4.3.183       worker01   <none>           <none>
kube-system                  stackdriver-metadata-agent-cluster-level-5dd459ff5b-2b9ss   1/1     Running     0             4h39m   10.4.3.98        worker01   <none>           <none>
kube-system                  stackdriver-operator-54cd6dc494-tcqq5                       1/1     Running     0             4h39m   10.4.3.67        worker01   <none>           <none>

$ kubectl cordon worker01
node/worker01 cordoned
node/worker01 already cordoned

$ kubectl drain worker01 --ignore-daemonsets --delete-emptydir-data
Warning: ignoring DaemonSet-managed Pods: kube-system/anetd-bnn7n, kube-system/gke-metrics-agent-rpc7t, kube-system/kube-proxy-59w89, kube-system/localpv-z97v6, kube-system/node-exporter-hlxtp, kube-system/stackdriver-log-forwarder-c85pm
evicting pod kube-system/ang-controller-manager-5669cb6545-nxndl
evicting pod kube-system/stackdriver-operator-54cd6dc494-tcqq5
evicting pod kube-system/ang-controller-manager-autoscaler-7ddb7dbfd-rvxrf
evicting pod kube-system/clusterdns-controller-b67d7754-cgslb
evicting pod kube-system/core-dns-autoscaler-86445f5b7f-hl6pl
evicting pod kube-system/healthcheck-metrics-collector-678c856586-nhq4x
evicting pod kube-system/kube-state-metrics-54bdc9d49-xx94v
evicting pod kube-system/metallb-controller-6fc5b574cc-2b8mv
evicting pod kube-system/npd-192.168.133.21-wpqwq
evicting pod kube-system/stackdriver-metadata-agent-cluster-level-5dd459ff5b-2b9ss
pod/metallb-controller-6fc5b574cc-2b8mv evicted
pod/npd-192.168.133.21-wpqwq evicted
pod/core-dns-autoscaler-86445f5b7f-hl6pl evicted
I0918 05:18:44.544619  464424 request.go:697] Waited for 1.013274615s due to client-side throttling, not priority and fairness, request: GET:https://192.168.134.1:443/api/v1/namespaces/kube-system/pods/clusterdns-controller-b67d7754-cgslb
pod/stackdriver-operator-54cd6dc494-tcqq5 evicted
pod/kube-state-metrics-54bdc9d49-xx94v evicted
pod/clusterdns-controller-b67d7754-cgslb evicted
pod/stackdriver-metadata-agent-cluster-level-5dd459ff5b-2b9ss evicted
pod/ang-controller-manager-autoscaler-7ddb7dbfd-rvxrf evicted
pod/ang-controller-manager-5669cb6545-nxndl evicted
pod/healthcheck-metrics-collector-678c856586-nhq4x evicted
node/worker01 drained

$ k get po -A -owide | grep worker01
kube-system                  anetd-bnn7n                                                 2/2     Running     0             4h46m   192.168.133.21   worker01   <none>           <none>
kube-system                  gke-metrics-agent-rpc7t                                     1/1     Running     0             4h46m   10.4.3.197       worker01   <none>           <none>
kube-system                  kube-proxy-59w89                                            1/1     Running     0             4h46m   192.168.133.21   worker01   <none>           <none>
kube-system                  localpv-z97v6                                               1/1     Running     0             4h46m   10.4.3.233       worker01   <none>           <none>
kube-system                  node-exporter-hlxtp                                         1/1     Running     0             4h46m   192.168.133.21   worker01   <none>           <none>
kube-system                  stackdriver-log-forwarder-c85pm                             1/1     Running     0             4h46m   10.4.3.183       worker01   <none>           <none>

$ k get node
NAME       STATUS                     ROLES           AGE    VERSION
admin01    Ready                      control-plane   2d     v1.27.4-gke.1600
worker01   Ready,SchedulingDisabled   worker          5h1m   v1.26.2-gke.1001
worker02   Ready                      worker          27m    v1.27.4-gke.1600

$ kubectl --kubeconfig=bmctl-workspace/admincluster/admincluster-kubeconfig delete nodepool np1-blue -n cluster-usercluster1
nodepool.baremetal.cluster.gke.io "np1-blue" deleted

$ k get node
NAME       STATUS   ROLES           AGE   VERSION
admin01    Ready    control-plane   2d    v1.27.4-gke.1600
worker02   Ready    worker          28m   v1.27.4-gke.1600

Summary

A Blue/Green upgrade of Kubernetes using node pools was achieved
Hopefully this helps reduce the workload impact of Kubernetes upgrades, which tend to be a source of trouble
However, external access to the workload was affected during the control plane upgrade, so the network configuration and related settings appear to need a closer look

References
