This is a continuation of "Experimenting with building a k8s environment using kubeadm on Ubuntu (4)".
The problem: worker node 2 is stuck in NotReady.
I ran describe on it once more.
ubuntu@Worker-Node2:~$ kubectl describe node worker-node2
Name:               worker-node2
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=worker-node2
                    kubernetes.io/os=linux
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sun, 03 Nov 2024 18:55:10 +0000
Taints:             node.kubernetes.io/not-ready:NoExecute
                    node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  worker-node2
  AcquireTime:     <unset>
  RenewTime:       Mon, 04 Nov 2024 11:23:55 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Mon, 04 Nov 2024 11:23:54 +0000   Sun, 03 Nov 2024 18:55:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Mon, 04 Nov 2024 11:23:54 +0000   Sun, 03 Nov 2024 18:55:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Mon, 04 Nov 2024 11:23:54 +0000   Sun, 03 Nov 2024 18:55:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Mon, 04 Nov 2024 11:23:54 +0000   Sun, 03 Nov 2024 18:55:10 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Addresses:
  InternalIP:  10.0.12.218
  Hostname:    worker-node2
Capacity:
  cpu:                2
  ephemeral-storage:  7034376Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             936100Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  6482880911
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             833700Ki
  pods:               110
System Info:
  Machine ID:                 ec29f53375b92e24d6186a6cad778ae0
  System UUID:                ec29f533-75b9-2e24-d618-6a6cad778ae0
  Boot ID:                    e39c4aa5-dda2-493d-94b3-130e61668978
  Kernel Version:             6.8.0-1016-aws
  OS Image:                   Ubuntu 24.04.1 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://2.0.0-rc.6
  Kubelet Version:            v1.31.2
  Kube-Proxy Version:         v1.31.2
PodCIDR:                      192.168.4.0/24
PodCIDRs:                     192.168.4.0/24
Non-terminated Pods:          (1 in total)
  Namespace    Name              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------    ----              ------------  ----------  ---------------  -------------  ---
  kube-system  kube-proxy-hk462  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests  Limits
  --------           --------  ------
  cpu                0 (0%)    0 (0%)
  memory             0 (0%)    0 (0%)
  ephemeral-storage  0 (0%)    0 (0%)
  hugepages-1Gi      0 (0%)    0 (0%)
  hugepages-2Mi      0 (0%)    0 (0%)
Events:              <none>
This line in the Ready condition stood out:
KubeletNotReady container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
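The "cni plugin not initialized" message generally means that no CNI network config has been written on the node yet; with Calico, that config only appears after the calico-node DaemonSet pod starts running there. A minimal check on the worker, assuming the default CNI paths:

# The CNI config dir stays empty until a network plugin installs itself
ls /etc/cni/net.d/
# The kubelet keeps logging the same complaint
sudo journalctl -u kubelet --no-pager | tail -n 20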
Next, let's check the state of the pods in the kube-system namespace.
ubuntu@Worker-Node2:~$ kubectl get pods -o wide -n kube-system
NAME                                  READY   STATUS              RESTARTS          AGE   IP            NODE           NOMINATED NODE   READINESS GATES
coredns-7c65d6cfc9-4cv2p              0/1     ContainerCreating   0                 36h   <none>        master-node    <none>           <none>
coredns-7c65d6cfc9-f2dtj              0/1     ContainerCreating   0                 36h   <none>        master-node    <none>           <none>
etcd-master-node                      1/1     Running             10 (36h ago)      36h   10.0.11.67    master-node    <none>           <none>
kube-apiserver-master-node            1/1     Running             3 (36h ago)       36h   10.0.11.67    master-node    <none>           <none>
kube-controller-manager-master-node   1/1     Running             0                 36h   10.0.11.67    master-node    <none>           <none>
kube-proxy-7ntnb                      0/1     CrashLoopBackOff    346 (4m55s ago)   36h   10.0.11.67    master-node    <none>           <none>
kube-proxy-hk462                      0/1     ContainerCreating   0                 16h   10.0.12.218   worker-node2   <none>           <none>
kube-proxy-qdpkp                      1/1     Running             0                 34h   10.0.11.173   worker-node    <none>           <none>
kube-scheduler-master-node            1/1     Running             4 (36h ago)       36h   10.0.11.67    master-node    <none>           <none>
Is kube-proxy failing on the master node in the first place?
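A quick way to check would be the logs of the previous, crashed container and the pod's recent events; the pod name is taken from the listing above:

# Logs of the last crashed kube-proxy container on the master
kubectl logs -n kube-system kube-proxy-7ntnb --previous
# Recent events for that pod
kubectl get events -n kube-system --field-selector involvedObject.name=kube-proxy-7ntnb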
This experiment runs on deliberately limited AWS resources, so for now I decided to give up on worker node 2.
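The exact commands are not shown here, but a typical node-removal sequence looks like this: drain first, then delete the node object, then clean up the kubeadm state on the machine itself.

kubectl drain worker-node2 --ignore-daemonsets --delete-emptydir-data
kubectl delete node worker-node2
# on worker-node2 itself:
sudo kubeadm reset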
With worker-node2 removed, let's check the pods in every namespace.
ubuntu@Master-Node:~$ kubectl get pods --all-namespaces
NAMESPACE         NAME                                       READY   STATUS             RESTARTS         AGE
calico-system     calico-kube-controllers-58b4dd66d6-44ltj   1/1     Running            0                89m
calico-system     calico-node-9st4c                          0/1     CrashLoopBackOff   15 (2m4s ago)    71m
calico-system     calico-node-zr285                          0/1     Running            0                89m
calico-system     calico-typha-f66f5d5f9-ss5wp               1/1     Running            0                90m
calico-system     csi-node-driver-55r9w                      2/2     Running            36 (7m54s ago)   90m
calico-system     csi-node-driver-5q5nw                      2/2     Running            0                90m
kube-system       coredns-7c65d6cfc9-4cv2p                   1/1     Running            1 (39m ago)      38h
kube-system       coredns-7c65d6cfc9-f2dtj                   1/1     Running            1 (39m ago)      38h
kube-system       etcd-master-node                           1/1     Running            11 (39m ago)     38h
kube-system       kube-apiserver-master-node                 1/1     Running            4 (39m ago)      38h
kube-system       kube-controller-manager-master-node        1/1     Running            1 (39m ago)      38h
kube-system       kube-proxy-btjkk                           1/1     Running            17 (116s ago)    72m
kube-system       kube-proxy-qdpkp                           1/1     Running            0                36h
kube-system       kube-scheduler-master-node                 1/1     Running            5 (39m ago)      38h
tigera-operator   tigera-operator-89c775547-2fvwl            1/1     Running            1 (42m ago)      90m
What bothers me is that the calico-node pod on the master is not Ready (CrashLoopBackOff).
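To find out why, the pod's events and the calico-node container logs are the place to look; when TCP 179 is blocked between nodes, the readiness probe typically reports that BGP sessions are not established. A sketch, using the standard container name calico-node:

kubectl describe pod -n calico-system calico-node-9st4c
kubectl logs -n calico-system calico-node-9st4c -c calico-node | tail -n 20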
It seemed likely that the culprits were unopened ports in the security group and the underpowered machine type. So I changed the instance type from t3.micro to t3.small and opened all ports in the security group to 10.0.0.0/8 and 192.168.0.0/16.
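For reference, the ports involved are the usual kubeadm set (6443 for the API server, 2379-2380 for etcd, 10250 for the kubelet, 10257/10259 for controller-manager and scheduler, 30000-32767 for NodePorts) plus Calico's networking ports (TCP 179 for BGP, TCP 5473 for Typha, and protocol 4 for IP-in-IP or UDP 4789 for VXLAN, depending on encapsulation). Opening everything toward the private ranges can also be scripted with the AWS CLI; the security group ID below is a placeholder:

aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx \
  --ip-permissions 'IpProtocol=-1,IpRanges=[{CidrIp=10.0.0.0/8}]'
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx \
  --ip-permissions 'IpProtocol=-1,IpRanges=[{CidrIp=192.168.0.0/16}]'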
ubuntu@Master-Node:~$ kubectl get pods -o wide --all-namespaces
NAMESPACE          NAME                                       READY   STATUS    RESTARTS          AGE     IP                NODE          NOMINATED NODE   READINESS GATES
calico-apiserver   calico-apiserver-6664d868c5-5ktmq          1/1     Running   0                 5m2s    192.168.77.163    master-node   <none>           <none>
calico-apiserver   calico-apiserver-6664d868c5-bmdlj          1/1     Running   2 (69s ago)       5m2s    192.168.168.134   worker-node   <none>           <none>
calico-system      calico-kube-controllers-58b4dd66d6-4gsf5   1/1     Running   1 (2m25s ago)     15m     192.168.168.135   worker-node   <none>           <none>
calico-system      calico-node-9st4c                          1/1     Running   88 (67s ago)      7h52m   10.0.11.67        master-node   <none>           <none>
calico-system      calico-node-zr285                          1/1     Running   2 (2m25s ago)     8h      10.0.11.173       worker-node   <none>           <none>
calico-system      calico-typha-f66f5d5f9-ss5wp               1/1     Running   2 (2m25s ago)     8h      10.0.11.173       worker-node   <none>           <none>
calico-system      csi-node-driver-55r9w                      2/2     Running   176 (3m48s ago)   8h      192.168.77.164    master-node   <none>           <none>
calico-system      csi-node-driver-5q5nw                      2/2     Running   4 (2m25s ago)     8h      192.168.168.136   worker-node   <none>           <none>
kube-system        coredns-7c65d6cfc9-4cv2p                   1/1     Running   2 (6m59s ago)     44h     192.168.77.160    master-node   <none>           <none>
kube-system        coredns-7c65d6cfc9-f2dtj                   1/1     Running   2 (6m59s ago)     44h     192.168.77.161    master-node   <none>           <none>
kube-system        etcd-master-node                           1/1     Running   12 (6m59s ago)    44h     10.0.11.67        master-node   <none>           <none>
kube-system        kube-apiserver-master-node                 1/1     Running   5 (6m59s ago)     44h     10.0.11.67        master-node   <none>           <none>
kube-system        kube-controller-manager-master-node        1/1     Running   2 (6m59s ago)     44h     10.0.11.67        master-node   <none>           <none>
kube-system        kube-proxy-btjkk                           1/1     Running   92 (3m48s ago)    7h53m   10.0.11.67        master-node   <none>           <none>
kube-system        kube-proxy-qdpkp                           1/1     Running   2 (2m25s ago)     42h     10.0.11.173       worker-node   <none>           <none>
kube-system        kube-scheduler-master-node                 1/1     Running   6 (6m59s ago)     44h     10.0.11.67        master-node   <none>           <none>
tigera-operator    tigera-operator-89c775547-2fvwl            1/1     Running   4 (2m25s ago)     8h      10.0.11.173       worker-node   <none>           <none>
Everything shows Running now, but is this really good enough? The restart counts on the master node are still worryingly high.
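One way to answer that: confirm both nodes report Ready and watch whether the restart counters stop climbing; if they hold steady, the security-group change did the trick.

kubectl get nodes
kubectl get pods --all-namespaces --watch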
(Reference)
To be continued.