
Kubernetes command memo ~ the road to CKA

Posted at 2024-11-05

I want to get the CKA, so I'm learning from the basics.

I missed the sign-up for my company's Udemy Business, so I'll work through this series instead:
https://www.youtube.com/watch?v=tHAQWLKMTB0&list=PLl4APkPHzsUUOkOv3i62UidrLmSB8DcGC&index=10

Day 8

Use this when you don't know the apiVersion.

When apiVersion: v1 gives an error, this lets you confirm that apiVersion: apps/v1 is the correct one.

kubectl explain replicaset
k explain deployment

config commands

I can never remember these.

# list up contexts
k config get-contexts

k config use-context rancher-desktop 
k config set-context --current --namespace=default
or
k config set-context kind-kind --namespace default

How to find out which namespace you are currently using

$ k config get-contexts
CURRENT   NAME                  CLUSTER               AUTHINFO                           NAMESPACE
*         kind-kind             kind-kind             kind-kind                          ns12
          rancher-desktop       rancher-desktop       rancher-desktop

Show all resources in the current namespace. Handy.

k get all
k get all -o wide

k get pods -o wide
k get nodes -o wide

Show labels

k get po --show-labels
NAME    READY   STATUS    RESTARTS   AGE     LABELS
nginx   1/1     Running   0          6m45s   run=nginx

describe

You can address a resource as resource/name, separated by a slash.

Apparently describe has no short form. Inconvenient... just let me type desc, please.

k describe rs/nginx-rs
k describe deploy/nginx

replication controller, replicaset, deployment

ReplicationController is the old mechanism; its selector support is limited.
ReplicaSet is the current recommendation.
A Deployment is what creates ReplicaSets.
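
For reference, a minimal ReplicaSet manifest looks roughly like this (a sketch; names are placeholders). Unlike a ReplicationController, the selector supports matchLabels and matchExpressions:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-rs
spec:
  replicas: 3
  selector:
    matchLabels:        # an RC only had simple equality selectors
      app: nginx
  template:
    metadata:
      labels:
        app: nginx      # must match the selector above
    spec:
      containers:
      - name: nginx
        image: nginx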

Create a pod directly

This is handy!!

k run ng --image nginx

It can also output the YAML!

k run ng --image nginx --dry-run=client -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: ng
  name: ng
spec:
  containers:
  - image: nginx
    name: ng
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

Create a deployment directly

k create deploy deploy-nginx --image=nginx --replicas=3

# you can increase replicas and so on
k edit deploy/deploy-nginx

k scale deploy/nginx --replicas=5

# you can change the image with k set image
k set image deploy/nginx-deploy nginx=nginx:1.27.2


# a replicaset created by a deployment can't be edited directly (the deployment immediately overwrites it)
# a replicaset you created directly can be edited
k edit rs nginx-7854ff8877

k delete deploy/deploy-nginx      # with a slash

k delete deploy deploy-nginx    # space instead of the slash; either works

I couldn't find a way to attach a label at create time (a workaround sketch follows below). Commands to add a label afterwards:

k label deploy deploy-nginx newlabel=111
k label pod ds-pdvfw newlabel=111

$ k describe pod ds-pdvfw | grep -B3 111
Labels:           app=nginx
                  controller-revision-hash=6d59468b47
                  env=demo
                  newlabel=111
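
A workaround for labeling a deployment at create time (my own approach, not from the video): generate the YAML with --dry-run, add the labels, then apply.

k create deploy deploy-nginx --image=nginx --replicas=3 --dry-run=client -o yaml > d.yaml
# edit d.yaml: add entries under metadata.labels (and spec.template.metadata.labels if the pods need them)
k apply -f d.yaml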

For a pod you can set labels directly:

$ k run ng --image nginx --labels="env=me,app=nginx"
pod/ng created

$ k describe pod ng | grep -A1 Label
Labels:           app=nginx
                  env=me

Using a YAML template

k create deploy nginx-new --image=nginx --dry-run=client -o yaml > deploy.yaml

rollout

After the undo, the image went back to the old one. I don't think the replica count change was undone.

kubectl rollout history deploy/nginx-rs
k rollout undo deploy/nginx-rs 
kubectl rollout history deploy/nginx-rs

Creating a ReplicaSet directly

Write the YAML and apply -f.
You can then operate on it much like a deployment.

k edit rs/nginx-rs
k scale --replicas=10 rs/nginx-rs
k delete rs/nginx-rs

Day 9

How to use kind

kind builds the k8s cluster inside containers, so NodePort doesn't reach the host by default. To get that you have to recreate the cluster; a cluster can't be edited once it's created.

# three node (two workers) cluster config
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
    - containerPort: 30001
      hostPort: 30001
- role: worker
- role: worker
kind create cluster --config kind.yaml
kubectl cluster-info --context kind-kind  # default name = kind
kubectl cluster-info --context kind-kind2  # for a cluster created with --name kind2
kubectl cluster-info dump
kind delete cluster --name=kind

With extraPortMappings, localhost:30001 connects straight to the NodePort, regardless of the node IPs shown by k get nodes.

apiVersion: v1
kind: Service
metadata:
  name: nodeport-svc
  labels:
    env: demo
spec:
  type: NodePort
  ports:
    - nodePort: 30001
      port: 80
      targetPort: 80
  selector:
    app: deploy-nginx

Changing the selector

k create deploy apache --image httpd --replicas 3
$ k describe deploy/apache | grep -i labels
Labels:                 app=apache
  Labels:  app=apache
  selector:
    app: apache

Apply the NodePort service with this selector and Apache becomes visible at localhost:30001.

Service type: LoadBalancer, ClusterIP

Likewise, if you create an LB or ClusterIP service, it automatically connects to whichever pods the selector matches.
The selector decides which pods; targetPort is the port on those selected pods.

targetPort apparently defaults to the value of port (80 here). Removing targetPort still reaches apache:80, but setting targetPort: 8080 broke the connection.

# LoadBalancer
spec:
  type: LoadBalancer
  ports:
    - port: 80

# ClusterIP
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 80

I tested it from inside a pod like this:

k exec -it apache-7bdd4c55dc-8hnsk -- sh

The load balancer's EXTERNAL-IP stays pending.

k get svc -o wide

Connecting services to services

You can't point a NodePort service at a LoadBalancer or ClusterIP service.
The mechanisms are apparently completely different.
Think of it this way: the only things a service can attach to are pods, selected via its selector.

The ExternalName Service mystery

I had no idea what it was good for. Purpose unknown.
-> Figured it out later.
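
For the record, an ExternalName service is just a DNS alias: resolving the service name inside the cluster returns a CNAME to an external hostname. A minimal sketch (the hostname is a placeholder):

apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: db.example.com   # pods resolving external-db get a CNAME to this name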

Day 10 namespace

Kubernetes namespaces don't isolate the network at all. News to me.

Pods in different namespaces can communicate freely. You can use name.ns.svc.cluster.local, or hit a pod IP address directly. News to me.

There is no pod.cluster.local name; you always go through a service, for abstraction. If you really need to, access the pod IP address directly.
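
A quick way to check this from a throwaway pod (the service name and namespace are assumptions based on the earlier examples):

# resolve and hit a service in another namespace by its FQDN
k run dns-test --rm -it --image busybox --restart=Never -- wget -qO- http://nodeport-svc.default.svc.cluster.local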

Can you set up ACLs between namespaces? I tried NetworkPolicy for that, but it didn't behave the way I expected (a sketch of what I was going for is below, after the apiVersion note). I'll get to it eventually.

$ k explain NetworkPolicy
GROUP:      networking.k8s.io
KIND:       NetworkPolicy
VERSION:    v1

The apiVersion becomes GROUP/VERSION.

In this case, the YAML starts like this:

apiVersion: networking.k8s.io/v1
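
For the namespace-ACL question, the policy I was aiming for would look roughly like this (a sketch: it allows ingress only from pods in the same namespace). Side note: kind's default CNI (kindnet) does not enforce NetworkPolicy, which is probably why it seemed to do nothing.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-only
spec:
  podSelector: {}          # applies to every pod in this namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}  # only pods from the same namespace may connect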

Day 11 initContainers

I learned that k create exists alongside k apply. It can only create.

 k create -f pod.yaml

Running it a second time errors out:

$ k create -f pod.yaml
Error from server (AlreadyExists): error when creating "pod.yaml": pods "myapp" already exists

It errors because the pod name collides. Makes sense.

Changing the pod's metadata.name made it work:

$ k create -f pod.yaml
pod/myapp2 created

When I applied a modified version, it complained. Resources made with create can't easily be applied afterwards, which is a pain. I'll only use it for things I really want to pin down.

$ k apply -f pod.yaml 
Warning: resource pods/myapp is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
The Pod "myapp" is invalid: spec.initContainers: Forbidden: pod updates may not add or remove containers

k create deploy is a special kind of command; only a handful of resources can be created without YAML!

serviceaccount
secret
configmap
namespace
deploy

Apparently k create only covers a short list like this.
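
One-liner examples for the list above (names and values are placeholders):

k create serviceaccount my-sa
k create secret generic my-secret --from-literal=password=pass123
k create configmap my-cm --from-literal=key1=value1
k create namespace my-ns
k create deploy my-deploy --image=nginx --replicas=2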

initContainer

I didn't know sh could do until ...; do ...; done.

  initContainers:
    - name: init
      image: busybox
      command: ["sh", "-c", "echo init started.; until nslookup myapp.ns11.svc.cluster.local; do date; sleep 1; done; echo init completed."]

There is no k get containers; you have to look with kubectl describe pod.

Day 12 DaemonSet

DaemonSet

I learned about the DaemonSet resource: a controller that guarantees a pod runs on every node (or on every node carrying a specified label).

$ k get daemonset -n kube-system
NAME         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kindnet      3         3         3       3            3           kubernetes.io/os=linux   24h
kube-proxy   3         3         3       3            3           kubernetes.io/os=linux   24h

$ k get ds also works as the short name for daemonset

$ k describe daemonset kube-proxy -n kube-system
Name:           kube-proxy
Selector:       k8s-app=kube-proxy
Node-Selector:  kubernetes.io/os=linux
Labels:         k8s-app=kube-proxy
Annotations:    deprecated.daemonset.template.generation: 1
Desired Number of Nodes Scheduled: 3
Current Number of Nodes Scheduled: 3
Number of Nodes Scheduled with Up-to-date Pods: 3
Number of Nodes Scheduled with Available Pods: 3
Number of Nodes Misscheduled: 0
Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           k8s-app=kube-proxy
  Service Account:  kube-proxy
  Containers:
   kube-proxy:
    Image:      registry.k8s.io/kube-proxy:v1.29.2
    Port:       <none>
    Host Port:  <none>
    Command:
      /usr/local/bin/kube-proxy
      --config=/var/lib/kube-proxy/config.conf
      --hostname-override=$(NODE_NAME)
    Environment:
      NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /lib/modules from lib-modules (ro)
      /run/xtables.lock from xtables-lock (rw)
      /var/lib/kube-proxy from kube-proxy (rw)
  Volumes:
   kube-proxy:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-proxy
    Optional:  false
   xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
   lib-modules:
    Type:               HostPath (bare host directory volume)
    Path:               /lib/modules
    HostPathType:
  Priority Class Name:  system-node-critical
Events:                 <none>

The -A option searches across all namespaces.

$ k get ds -A
NAMESPACE     NAME         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   kindnet      3         3         3       3            3           kubernetes.io/os=linux   24h
kube-system   kube-proxy   3         3         3       3            3           kubernetes.io/os=linux   24h


$ k get pods -A

pod can be abbreviated to po

k get po

Controlled By ReplicaSet

A plain k create deploy gives you pods that are Controlled By a ReplicaSet.

$ k describe po nginx-7854ff8877-mjs9m 
Name:             nginx-7854ff8877-mjs9m
...
Controlled By:  ReplicaSet/nginx-7854ff8877

Controlled by DaemonSet

This creates a DaemonSet. The difference from a deploy/replicaset is that there is no replicas field: one pod per worker node is guaranteed automatically. *No pod is placed on the control plane.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds
spec:
  template:
    metadata:
      labels:
        env: demo  # <-- this and
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
          - containerPort: 80
  selector:
    matchLabels:
      env: demo  # <-- this must match, or you get an error

Pods get created, just as with a deploy.

$ k get po -o wide
NAME       READY   STATUS              RESTARTS   AGE   IP       NODE           NOMINATED NODE   READINESS GATES
ds-pdvfw   0/1     ContainerCreating   0          2s    <none>   kind-worker2   <none>           <none>
ds-sv9fg   0/1     ContainerCreating   0          2s    <none>   kind-worker    <none>           <none>

Controlled By became DaemonSet/ds:

$ k describe pod/ds-pdvfw 
Name:             ds-pdvfw
...
Controlled By:  DaemonSet/ds

Day 13 scheduler

Hands-on: stop the scheduler and pods can no longer be scheduled!

Apparently the control plane (its kubelet) watches the files in /etc/kubernetes/manifests. The moment you remove kube-scheduler.yaml, the kube-scheduler-kind-control-plane pod disappears!

Watch while doing it:

watch -n1 "kubectl get po -n kube-system "

Every 1.0s: kubectl get po -n kube-system                                                                                  mMHYTXJ314T: Sun Nov 10 16:41:27 2024

NAME                                         READY   STATUS    RESTARTS        AGE
coredns-76f75df574-rt9xx                     1/1     Running   1 (3d23h ago)   4d17h
coredns-76f75df574-vxtpq                     1/1     Running   1 (3d23h ago)   4d17h
etcd-kind-control-plane                      1/1     Running   1 (3d23h ago)   4d17h
kindnet-bjpx8                                1/1     Running   1 (3d23h ago)   4d17h
kindnet-m6g6m                                1/1     Running   1 (3d23h ago)   4d17h
kindnet-ww6vb                                1/1     Running   1 (3d23h ago)   4d17h
kube-apiserver-kind-control-plane            1/1     Running   1 (3d23h ago)   4d17h
kube-controller-manager-kind-control-plane   1/1     Running   1 (3d23h ago)   4d17h
kube-proxy-cbx5k                             1/1     Running   1 (3d23h ago)   4d17h
kube-proxy-hgrq8                             1/1     Running   1 (3d23h ago)   4d17h
kube-proxy-hnxcv                             1/1     Running   1 (3d23h ago)   4d17h
kube-scheduler-kind-control-plane            1/1     Running   1 (3d23h ago)   4d17h

Now move the YAML out:

root@kind-control-plane:/etc/kubernetes/manifests# ll
total 28
drwxr-xr-x 1 root root 4096 Nov  5 13:41 .
drwxr-xr-x 1 root root 4096 Nov  5 13:41 ..
-rw------- 1 root root 2406 Nov  5 13:41 etcd.yaml
-rw------- 1 root root 3896 Nov  5 13:41 kube-apiserver.yaml
-rw------- 1 root root 3428 Nov  5 13:41 kube-controller-manager.yaml
-rw------- 1 root root 1463 Nov  5 13:41 kube-scheduler.yaml

root@kind-control-plane:/etc/kubernetes/manifests# mv kube-scheduler.yaml  /tmp

The scheduler at the bottom is gone!

NAME                                         READY   STATUS    RESTARTS        AGE
coredns-76f75df574-rt9xx                     1/1     Running   1 (3d23h ago)   4d17h
coredns-76f75df574-vxtpq                     1/1     Running   1 (3d23h ago)   4d17h
etcd-kind-control-plane                      1/1     Running   1 (3d23h ago)   4d17h
kindnet-bjpx8                                1/1     Running   1 (3d23h ago)   4d17h
kindnet-m6g6m                                1/1     Running   1 (3d23h ago)   4d17h
kindnet-ww6vb                                1/1     Running   1 (3d23h ago)   4d17h
kube-apiserver-kind-control-plane            1/1     Running   1 (3d23h ago)   4d17h
kube-controller-manager-kind-control-plane   1/1     Running   1 (3d23h ago)   4d17h
kube-proxy-cbx5k                             1/1     Running   1 (3d23h ago)   4d17h
kube-proxy-hgrq8                             1/1     Running   1 (3d23h ago)   4d17h
kube-proxy-hnxcv                             1/1     Running   1 (3d23h ago)   4d17h

Create a pod:

$ k run no-sche-nginx  --image nginx

The pod was created, but it's Pending...

$ k get pods
NAME                     READY   STATUS    RESTARTS   AGE
ng                       1/1     Running   0          5m58s
nginx-7854ff8877-4wgsv   1/1     Running   0          3d17h
no-sche-nginx            0/1     Pending   0          12s   <-----

Events stays <none> and never changes:

$ k describe pod no-sche-nginx
Name:             no-sche-nginx
Namespace:        ns2
Priority:         0
Service Account:  default
Node:             <none>
Labels:           run=no-sche-nginx
Annotations:      <none>
Status:           Pending
IP:
IPs:              <none>
Containers:
  no-sche-nginx:
    Image:        nginx
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ff2ls (ro)
Volumes:
  kube-api-access-ff2ls:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>

Put the YAML back:

root@kind-control-plane:/etc/kubernetes/manifests# mv /tmp/kube-scheduler.yaml .

The scheduler pod comes back!

Every 1.0s: kubectl get po -n kube-system                                                                                  mMHYTXJ314T: Sun Nov 10 16:44:22 2024

NAME                                         READY   STATUS    RESTARTS        AGE
coredns-76f75df574-rt9xx                     1/1     Running   1 (3d23h ago)   4d18h
coredns-76f75df574-vxtpq                     1/1     Running   1 (3d23h ago)   4d18h
etcd-kind-control-plane                      1/1     Running   1 (3d23h ago)   4d18h
kindnet-bjpx8                                1/1     Running   1 (3d23h ago)   4d18h
kindnet-m6g6m                                1/1     Running   1 (3d23h ago)   4d18h
kindnet-ww6vb                                1/1     Running   1 (3d23h ago)   4d18h
kube-apiserver-kind-control-plane            1/1     Running   1 (3d23h ago)   4d18h
kube-controller-manager-kind-control-plane   1/1     Running   1 (3d23h ago)   4d18h
kube-proxy-cbx5k                             1/1     Running   1 (3d23h ago)   4d18h
kube-proxy-hgrq8                             1/1     Running   1 (3d23h ago)   4d18h
kube-proxy-hnxcv                             1/1     Running   1 (3d23h ago)   4d18h
kube-scheduler-kind-control-plane            1/1     Running   0               5s

The pod goes to Running!

$ k get po
NAME                     READY   STATUS    RESTARTS   AGE
ng                       1/1     Running   0          10m
nginx-7854ff8877-4wgsv   1/1     Running   0          3d17h
no-sche-nginx            1/1     Running   0          18s   <------

Events got populated!

$ k describe pod no-sche-nginx
Name:             no-sche-nginx
Namespace:        ns2
Priority:         0
Service Account:  default
Node:             kind-worker/10.201.0.3
Start Time:       Sun, 10 Nov 2024 16:40:15 +0900
Labels:           run=no-sche-nginx
Annotations:      <none>
Status:           Running
IP:               10.244.1.16
IPs:
  IP:  10.244.1.16
Containers:
  no-sche-nginx:
    Container ID:   containerd://1a661d8c3619e70c616b25b539dd95b43c0795e740aacf7289552ce0ff1242cf
    Image:          nginx
    Image ID:       docker.io/library/nginx@sha256:28402db69fec7c17e179ea87882667f1e054391138f77ffaf0c3eb388efc3ffb
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sun, 10 Nov 2024 16:40:21 +0900
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ff2ls (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  kube-api-access-ff2ls:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  13s   default-scheduler  Successfully assigned ns2/no-sche-nginx to kind-worker
  Normal  Pulling    13s   kubelet            Pulling image "nginx"
  Normal  Pulled     7s    kubelet            Successfully pulled image "nginx" in 5.587s (5.587s including waiting)
  Normal  Created    7s    kubelet            Created container no-sche-nginx
  Normal  Started    7s    kubelet            Started container no-sche-nginx

So the control plane really is driven by moving YAML files in and out...

The process list tracks the contents of a directory. What a curious way to manage things.

Pinning a pod to a node

Find the worker nodes:

$ k get nodes
NAME                 STATUS   ROLES           AGE     VERSION
kind-control-plane   Ready    control-plane   4d18h   v1.29.2
kind-worker          Ready    <none>          4d18h   v1.29.2
kind-worker2         Ready    <none>          4d18h   v1.29.2

Create a template:

$ k run nginx --image nginx -o yaml --dry-run=client > pod.yaml

pod yaml with nodeName

pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
  nodeName: kind-worker      <------

apply

$ k apply -f pod.yaml 
pod/nginx created

$ k describe po nginx | grep Node
Node:             kind-worker/10.201.0.3      <-----

With nodeName specified, a pod can be deployed even without a scheduler.

Stop the scheduler beforehand:

root@kind-control-plane:/etc/kubernetes/manifests# mv kube-scheduler.yaml  /tmp

$ k get po -n kube-system | grep sche
$

Deploy the pod that has nodeName set:

$ k apply -f pod.yaml 
pod/nginx created

$ k get pods
NAME    READY   STATUS              RESTARTS   AGE
nginx   0/1     ContainerCreating   0          2s

$ k get pods
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          7s       <---- it ran

Even without the scheduler, a pod can be deployed if nodeName is set!

How to use k get pods --selector=

Preparation: create some pods.

$ k run apache --image httpd
pod/apache created

$ k run httpd --image httpd --labels="app=httpd,tier=one"
pod/httpd created

all pods

$ k get pods --show-labels
NAME     READY   STATUS              RESTARTS   AGE     LABELS
apache   1/1     Running             0          28s     run=apache
httpd    0/1     ContainerCreating   0          7s      app=httpd,tier=one
nginx    1/1     Running             0          9m11s   run=nginx

Filtering with --selector=""

This looks handy for debugging when a service can't reach its pods (see the sketch after the example below).

$ k get pods --selector="run=apache"
NAME     READY   STATUS    RESTARTS   AGE
apache   1/1     Running   0          3m20s
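
For that service-debugging idea, the flow I have in mind would be roughly this (service and label names are taken from the earlier examples):

# what selector does the service use?
k get svc nodeport-svc -o jsonpath='{.spec.selector}'
# which pods actually carry that label?
k get pods -l app=apache
# if the endpoints list is empty, no pod matches the selector
k get endpoints nodeport-svc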

--selector key=val also works

$ k get pods --selector tier=one
NAME    READY   STATUS    RESTARTS   AGE
httpd   1/1     Running   0          4m41s

Annotation

Annotations look like labels but are a bit different.

last-applied-configuration records the configuration from the last apply:

$ k edit po nginx
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"run":"nginx"},"name":"nginx","namespace":"ns13"},"spec":{"containers":[{"image":"nginx","name":"nginx"}],"nodeName":"kind-worker"}}

Day 14 Taints and Tolerations

First of all, what even is a taint?

How the taint concept relates to "contamination":
Constraints on nodes: a taint is a mechanism for attaching a constraint to a node. The constraint can forbid certain pods from running on that node, or make the node less preferred.
The contamination metaphor: think of the node as being polluted by some "contaminant". Only certain kinds of pods can live (or live comfortably) on a contaminated node.

Still doesn't quite click for me...

Taints go on nodes.
Tolerations go on pods.

The node is the one holding the restriction (taint).
The pod's toleration is what lets it slip past that restriction.

$ k get nodes -o wide
NAME                 STATUS   ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION   CONTAINER-RUNTIME
kind-control-plane   Ready    control-plane   6d    v1.29.2   10.201.0.2    <none>        Debian GNU/Linux 12 (bookworm)   6.6.41-0-virt    containerd://1.7.13
kind-worker          Ready    <none>          6d    v1.29.2   10.201.0.3    <none>        Debian GNU/Linux 12 (bookworm)   6.6.41-0-virt    containerd://1.7.13
kind-worker2         Ready    <none>          6d    v1.29.2   10.201.0.4    <none>        Debian GNU/Linux 12 (bookworm)   6.6.41-0-virt    containerd://1.7.13
$ k taint node kind-worker gpu=true:NoSchedule
node/kind-worker tainted
$ k taint node kind-worker2 gpu=true:NoSchedule
node/kind-worker2 tainted

Key-value format: a taint is written as key=value:effect.
key: an arbitrary string describing a node attribute.
value: optional extra detail about the key; not required.
effect: the taint's effect, one of NoSchedule, PreferNoSchedule, or NoExecute.

$ k describe node kind-worker2 | grep -i taint
Taints:             gpu=true:NoSchedule
$ k run nginx --image nginx
pod/nginx created
$ k get po
NAME    READY   STATUS    RESTARTS   AGE
nginx   0/1     Pending   0          7s
$ k describe pod nginx
...
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  16s   default-scheduler  0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 node(s) had untolerated taint {gpu: true}. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.

Paraphrasing the message:
None of the 3 nodes are available.

  • 1 node carries the untolerated taint node-role.kubernetes.io/control-plane: it is the control plane, so ordinary pods are not scheduled there.
  • 2 nodes carry the untolerated taint gpu: true: pods without a matching toleration will not be scheduled on them.
  • Preemption (killing lower-priority pods to free resources) would not help scheduling here.

Here it may help to read taint simply as "restriction".

untolerated: the pod cannot tolerate the taint = cannot clear the restriction = cannot be scheduled there
tolerated taint: the pod can tolerate the taint = clears the restriction = can be scheduled there

As the message says, the control plane really does carry that taint:

$ k describe node kind-control-plane  | grep Taint
Taints:             node-role.kubernetes.io/control-plane:NoSchedule

A taint's effect determines how the taint influences pod scheduling. There are three effects:

  1. NoSchedule
    Meaning: pods without a toleration for this taint are never scheduled onto the node.
    Example: gpu=true:NoSchedule
    Pods lacking a matching toleration won't be scheduled on this node; in effect it becomes a GPU-dedicated node.

  2. PreferNoSchedule
    Meaning: pods without a toleration are avoided as long as other options exist, but may still land here when nothing else is available.
    Example: old-node:PreferNoSchedule
    Marks an aging node: new pods go to other nodes first, but if those are full, the old node may still be used.

  3. NoExecute
    Meaning: pods already running on the node are evicted if they do not tolerate this taint.
    Example: evict:NoExecute
    Pods without the toleration get removed from the node; useful when temporarily taking a node out for maintenance.

Choosing an effect:
NoSchedule: dedicate a node to a specific purpose.
PreferNoSchedule: spread load, or discourage scheduling onto older nodes.
NoExecute: node maintenance, or forcing specific pods off a node.

apiVersion: v1
kind: Pod
metadata:
  name: apache
spec:
  containers:
  - image: httpd
    name: apache
  tolerations:
  - key: gpu
    value: "true"
    operator: Equal
    effect: NoSchedule
$ k apply -f taint-pod.yaml 
pod/apache created
$ k get po
NAME     READY   STATUS    RESTARTS   AGE
apache   1/1     Running   0          6s
nginx    0/1     Pending   0          13m

Delete taint (Untaint)

Right now the two pods are stuck: there is no node whose taints they tolerate.

$ k get po
NAME     READY   STATUS    RESTARTS   AGE
apache   0/1     Pending   0          25s
nginx    0/1     Pending   0          15m

Append a dash at the end to remove the taint (untaint):

$ k taint node kind-worker2 gpu=true:NoSchedule-
node/kind-worker2 untainted

The pods started running:

$ k get po
NAME     READY   STATUS    RESTARTS   AGE
apache   1/1     Running   0          4m17s
nginx    1/1     Running   0          19m

The default-scheduler logged a Successfully assigned event:

Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  5m31s  default-scheduler  0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 node(s) had untolerated taint {gpu: true}. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
  Normal   Scheduled         87s    default-scheduler  Successfully assigned ns14/apache to kind-worker2
  Normal   Pulling           86s    kubelet            Pulling image "httpd"
  Normal   Pulled            75s    kubelet            Successfully pulled image "httpd" in 5.506s (10.859s including waiting)
  Normal   Created           75s    kubelet            Created container apache
  Normal   Started           75s    kubelet            Started container apache

Feels like this could be useful when existing worker nodes already have the ACLs they need, but you don't want pods landing on newly added worker nodes.

nodeSelector works against labels; it cannot be used against taints. Taints are stronger than labels.

Right now there is a worker node tainted gpu=true, and with only a nodeSelector the pod will stay Pending:

$ k taint node kind-worker gpu=true:NoSchedule
node/kind-worker tainted
apiVersion: v1
kind: Pod
metadata:
  name: apache
spec:
  containers:
  - image: httpd
    name: apache
  nodeSelector:     <-----
    gpu: "true"    <-----
$ k apply -f taint-pod.yaml 
pod/apache created
$ k get po
NAME     READY   STATUS    RESTARTS   AGE
apache   0/1     Pending   0          3s
$ k describe pod apache
...
Node-Selectors:              gpu=true
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  69s   default-scheduler  0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.

Adding the label doesn't help; it stays Pending (the taint still blocks it).

$ k label node kind-worker gpu=true
node/kind-worker labeled
$ k get pod
NAME     READY   STATUS    RESTARTS   AGE
apache   0/1     Pending   0          3m55s

Add the label to a node that has no taint, and the pod with the nodeSelector starts running:

$ k label node kind-worker2 gpu=true
node/kind-worker2 labeled
$ k get po
NAME     READY   STATUS    RESTARTS   AGE
apache   1/1     Running   0          5m8s

Even with nodeSelector, is the event's error message about taints?

Start a pod with a nodeSelector:

  nodeSelector:
    gpu: "false"

No such node exists, so it stays Pending:

$ k get po
NAME     READY   STATUS    RESTARTS   AGE
apache   0/1     Pending   0          60s

And indeed the event log mentions taints:

Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  9s    default-scheduler  0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.

Looking closely, the "untolerated taint" part is the control plane's entry. It shows up every time scheduling fails.

Looking more closely:

2 node(s) didn't match Pod's node affinity/selector.

So the worker nodes do report, correctly, that the selector didn't match.

Overwriting the label got it running:

$ k label node kind-worker gpu=false --overwrite
node/kind-worker labeled
$ k get po
NAME     READY   STATUS              RESTARTS   AGE
apache   0/1     ContainerCreating   0          3m14s

How to delete a label

Append a dash to the key to delete it:

$ k label node kind-worker gpu-
node/kind-worker unlabeled

Even after the label is removed, the already-scheduled pod keeps running. So labels/nodeSelector aren't continuously re-evaluated.

$ k get po
NAME     READY   STATUS    RESTARTS   AGE
apache   1/1     Running   0          5m2s

Supposedly it does get re-checked occasionally, but nothing changed even after a while.

noexecute:NoExecute

Maintenance node: put the taint noexecute:NoExecute on a node under maintenance and the pods running there are evicted. It's like people being evacuated from a contaminated area.

$ k taint node kind-worker noexecute:NoExecute
node/kind-worker tainted

The pod I had started with k run vanished instantly. It did not move to another node; it just disappeared.

A deployment, on the other hand, did move its pods to another node. (More precisely: the existing pod is killed and a new one is created on a different node, so the service is briefly down.)
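
Related note (mine, not from the video): a toleration for a NoExecute taint can carry tolerationSeconds, so a pod is allowed to keep running for a grace period before it gets evicted. A sketch against the noexecute taint above:

  tolerations:
  - key: noexecute
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 60   # evicted 60 seconds after the taint appears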

Day 15 Node Affinity

Let's control which node a pod lands on with nodeAffinity.

There are two options: required or preferred.

  1. requiredDuringSchedulingIgnoredDuringExecution
  2. preferredDuringSchedulingIgnoredDuringExecution

What a mouthful. There is no non-IgnoredDuringExecution variant: affinity only applies at scheduling time. A running pod is unaffected even if the node's label disappears.

requiredDuringSchedulingIgnoredDuringExecution

This one is mandatory: if no node has a matching label, the pod goes Pending.

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
              - ssd
$ k apply -f affinity.yaml 
pod/nginx created
$ k get po
NAME    READY   STATUS    RESTARTS   AGE
nginx   0/1     Pending   0          5s
  Warning  FailedScheduling  68s   default-scheduler  0/3 nodes are available: 
  1 node(s) didn't match Pod's node affinity/selector, 
  1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 
  1 node(s) had untolerated taint {noexecute: }. 
  preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.

Add the label to a node. So node affinity is really just label matching.

$ k label node kind-worker disktype=ssd
node/kind-worker labeled

It started:

  Warning  FailedScheduling  2m42s  default-scheduler  0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
  Normal   Scheduled         61s    default-scheduler  Successfully assigned day15/nginx to kind-worker
  Normal   Pulling           61s    kubelet            Pulling image "nginx"
  Normal   Pulled            5s     kubelet            Successfully pulled image "nginx" in 55.525s (55.525s including waiting)
  Normal   Created           5s     kubelet            Created container nginx
  Normal   Started           5s     kubelet            Started container nginx
$ k get po
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          7m54s

Now try hdd instead:

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
              - hdd

It errored and couldn't be applied:

$ k apply -f affinity.yaml 
The Pod "nginx" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`,`spec.initContainers[*].image`,`spec.activeDeadlineSeconds`,`spec.tolerations` (only additions to existing tolerations),`spec.terminationGracePeriodSeconds` (allow it to be set to 1 if it was previously negative)
  core.PodSpec{
        ... // 15 identical fields
        Subdomain:         "",
        SetHostnameAsFQDN: nil,
        Affinity: &core.Affinity{
                NodeAffinity: &core.NodeAffinity{
                        RequiredDuringSchedulingIgnoredDuringExecution: &core.NodeSelector{
                                NodeSelectorTerms: []core.NodeSelectorTerm{
                                        {
                                                MatchExpressions: []core.NodeSelectorRequirement{
                                                        {
                                                                Key:      "disktype",
                                                                Operator: "In",
-                                                               Values:   []string{"ssd"},
+                                                               Values:   []string{"hdd"},
                                                        },
                                                },
                                                MatchFields: nil,
                                        },
                                },
                        },
                        PreferredDuringSchedulingIgnoredDuringExecution: nil,
                },
                PodAffinity:     nil,
                PodAntiAffinity: nil,
        },
        SchedulerName: "default-scheduler",
        Tolerations:   {{Key: "node.kubernetes.io/not-ready", Operator: "Exists", Effect: "NoExecute", TolerationSeconds: &300}, {Key: "node.kubernetes.io/unreachable", Operator: "Exists", Effect: "NoExecute", TolerationSeconds: &300}},
        ... // 13 identical fields
  }

The pod keeps running on the ssd node:

$ k get po -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP            NODE          NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          10m   10.244.1.37   kind-worker   <none>           <none>

preferredDuringSchedulingIgnoredDuringExecution

This one is best-effort, so the pod runs even if no node matches.

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx2
spec:
  containers:
  - image: nginx
    name: nginx
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          preference:
            matchExpressions:
              - key: disktype
                operator: In
                values:
                  - hdd

No node anywhere has disktype=hdd, yet it is Running:

$ k get po -o wide
NAME     READY   STATUS    RESTARTS   AGE   IP            NODE           NOMINATED NODE   READINESS GATES
nginx2   1/1     Running   0          8s    10.244.2.35   kind-worker2   <none>           <none>

Removing the ssd label from its node, or adding the hdd label to the node where the hdd pod "should" run, changes nothing for pods that are already running.

$ k label node kind-worker disktype-
node/kind-worker unlabeled
$ k label node kind-worker disktype=hdd
node/kind-worker labeled
$ k get po -o wide
NAME     READY   STATUS    RESTARTS   AGE     IP            NODE           NOMINATED NODE   READINESS GATES
nginx    1/1     Running   0          17m     10.244.1.37   kind-worker    <none>           <none>
nginx2   1/1     Running   0          4m36s   10.244.2.35   kind-worker2   <none>           <none>

operator: Exists

With operator: Exists, the label value can even be blank; only the key has to exist.

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx3
spec:
  containers:
  - image: nginx
    name: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
            - key: disktype
              operator: Exists
$ k get po
NAME     READY   STATUS    RESTARTS   AGE
nginx    1/1     Running   0          20m
nginx2   1/1     Running   0          8m
nginx3   0/1     Pending   0          2s
$ k label node kind-worker2 disktype=
node/kind-worker2 labeled
$ k get po -o wide
NAME     READY   STATUS    RESTARTS   AGE     IP            NODE           NOMINATED NODE   READINESS GATES
nginx    1/1     Running   0          22m     10.244.1.37   kind-worker    <none>           <none>
nginx2   1/1     Running   0          9m19s   10.244.2.35   kind-worker2   <none>           <none>
nginx3   1/1     Running   0          81s     10.244.2.36   kind-worker2   <none>           <none>

Day 16 Requests and Limits

requests specify a container's minimum resource requirements and affect scheduling and QoS.
limits cap a container's maximum resource usage and prevent overconsumption.
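
In a container spec it looks like this (a minimal sketch; values are placeholders, and cpu is in millicores):

    resources:
      requests:
        cpu: "250m"       # minimum guaranteed; the scheduler uses this
        memory: "64Mi"
      limits:
        cpu: "500m"       # hard cap; the container is throttled above this
        memory: "128Mi"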

k top

Install metrics-server and you can run k top. Super handy.
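
For reference, this is roughly how I understand the install goes on kind (the official components.yaml plus --kubelet-insecure-tls, because kind's kubelets use self-signed certs); treat it as a sketch:

k apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
# on kind, let metrics-server skip kubelet TLS verification
k patch deploy metrics-server -n kube-system --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls"}]'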

stress pod

Let's create a polinux/stress pod with k apply -f and a YAML (shown under "stress test" below).

$ k get po -n kube-system | grep metri
metrics-server-67fc4df55-bkn8z               1/1     Running   0              49s

node usage

$ k top node
NAME                 CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
kind-control-plane   86m          4%     876Mi           14%       
kind-worker          20m          1%     337Mi           5%        
kind-worker2         17m          0%     373Mi           6%     

stress test

$ k create namespace mem-example
namespace/mem-example created
apiVersion: v1
kind: Pod
metadata:
  name: stress
spec:
  containers:
  - image: polinux/stress
    name: stress
    resources:
      requests:
        memory: "100Mi"
      limits:
        memory: "200Mi"
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]
$ k apply -f stress.yaml -n mem-example
pod/stress created
$ k logs pod/stress   -n mem-example   
stress: info: [1] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
$ k top pod stress
NAME     CPU(cores)   MEMORY(bytes)   
stress   6m           153Mi           

A pod that gets OOMKilled

Let's make another one. This time it uses more memory than its limit.

apiVersion: v1
kind: Pod
metadata:
  name: stress2
spec:
  containers:
  - image: polinux/stress
    name: stress
    resources:
      requests:
        memory: "50Mi"
      limits:
        memory: "100Mi"
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "250M", "--vm-hang", "1"]
$ k apply -f stress2.yaml 
pod/stress2 created
$ k get po 
NAME      READY   STATUS      RESTARTS   AGE
stress    1/1     Running     0          3m4s
stress2   0/1     OOMKilled   0          11s

I didn't know there was a Reason field for why a pod died.

$ k describe pod stress2

    State:          Terminated
      Reason:       OOMKilled
      Exit Code:    137
    Last State:     Terminated
      Reason:       OOMKilled

  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  65s               default-scheduler  Successfully assigned mem-example/stress2 to kind-worker2
  Normal   Pulled     34s               kubelet            Successfully pulled image "polinux/stress" in 5.513s (5.513s including waiting)
  Normal   Pulling    7s (x4 over 65s)  kubelet            Pulling image "polinux/stress"
  Normal   Created    2s (x4 over 59s)  kubelet            Created container stress
  Normal   Pulled     2s                kubelet            Successfully pulled image "polinux/stress" in 5.465s (5.465s including waiting)
  Normal   Started    1s (x4 over 59s)  kubelet            Started container stress
  Warning  BackOff    0s (x5 over 53s)  kubelet            Back-off restarting failed container stress in pod stress2_mem-example(664c4f46-970f-4ac4-9269-ade4d437bb99)

Request 1T of memory, which definitely doesn't exist:

apiVersion: v1
kind: Pod
metadata:
  name: stress3
spec:
  containers:
  - image: polinux/stress
    name: stress
    resources:
      requests:
        memory: "1000Gi"
      limits:
        memory: "1000Gi"
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "1000G", "--vm-hang", "1"]
$ k get po
NAME      READY   STATUS    RESTARTS   AGE
stress    1/1     Running   0          8m47s
stress3   0/1     Pending   0          14s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  32s   default-scheduler  0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 Insufficient memory. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
  Warning  FailedScheduling  20s   default-scheduler  0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 Insufficient memory. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.

You can see "2 Insufficient memory" there. Nice.

Upgrade Kind Kubernetes version

Apparently the CKA pins the Kubernetes version it uses:

Software Version: Kubernetes v1.31
https://training.linuxfoundation.org/certification/certified-kubernetes-administrator-cka/

That's bleeding edge. Seriously?

Let's recreate the kind cluster.

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.31.2
  extraPortMappings:
    - containerPort: 30001
      hostPort: 30001
- role: worker
  image: kindest/node:v1.31.2
- role: worker
  image: kindest/node:v1.31.2
- role: worker
  image: kindest/node:v1.31.2

$ kind delete cluster
$ kind create cluster --config kind.yaml 
$ k get nodes
NAME                 STATUS     ROLES           AGE   VERSION
kind-control-plane   NotReady   control-plane   21s   v1.31.2
kind-worker          NotReady   <none>          8s    v1.31.2
kind-worker2         NotReady   <none>          9s    v1.31.2
kind-worker3         NotReady   <none>          8s    v1.31.2

So easy. kind is the best.

Day 17/40 - Kubernetes Autoscaling Explained | HPA Vs VPA

Which autoscaler you use depends on what is being scaled and in which direction:

  1. pods, horizontal autoscaling: HPA
  2. pods, vertical autoscaling: VPA
  3. nodes, horizontal autoscaling: Cluster Autoscaler
  4. nodes, vertical autoscaling: Node AutoProvisioning

Apparently KEDA is a well-known option here.

KEDA is a Kubernetes-based Event Driven Autoscaler. With KEDA, you can drive the scaling of any container in Kubernetes based on the number of events needing to be processed.
https://keda.sh/

HorizontalPodAutoscaler

Applying deploy.yaml is supposed to start registry.k8s.io/hpa-example... but on my Mac it didn't work as a web server. No idea why. If curl doesn't return OK, you have the same symptom.

In the end I used php:apache and dropped this into the pod (a sketch of the full Deployment/Service follows after the snippet):

echo '<?php
$x = 0.0001;
for ($i = 0; $i <= 10000000; $i++) {
        $x += sqrt($x);
}
echo "OK!";
?>
' > index.php 
chmod 777 index.php
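
For reference, roughly the Deployment and Service I assume this ends up as (adapted from the official walkthrough with php:apache swapped in; the cpu request is what the HPA percentage is measured against):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-apache
spec:
  replicas: 1
  selector:
    matchLabels:
      app: php-apache
  template:
    metadata:
      labels:
        app: php-apache
    spec:
      containers:
      - name: php-apache
        image: php:apache
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 200m      # the HPA's cpu-percent is relative to this request
          limits:
            cpu: 500m
---
apiVersion: v1
kind: Service
metadata:
  name: php-apache
spec:
  selector:
    app: php-apache
  ports:
  - port: 80
    targetPort: 80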

Try autoscaling it:

k autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10

This creates a HorizontalPodAutoscaler:

$ k autoscale deploy php-apache --cpu-percent=50 --min=1 --max=10
horizontalpodautoscaler.autoscaling/php-apache autoscaled

Generate some load. The command is from here:
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/

kubectl run -i --tty load-generator --rm --image=busybox:1.28 --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://php-apache; done"

It scaled up in no time.

Adding --watch is handy!

$ k get hpa --watch
NAME         REFERENCE               TARGETS              MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   cpu: <unknown>/50%   1         10        0          4s
php-apache   Deployment/php-apache   cpu: 0%/50%          1         10        1          15s
php-apache   Deployment/php-apache   cpu: 55%/50%         1         10        1          30s
php-apache   Deployment/php-apache   cpu: 55%/50%         1         10        2          45s
php-apache   Deployment/php-apache   cpu: 248%/50%        1         10        2          60s
php-apache   Deployment/php-apache   cpu: 122%/50%        1         10        4          75s
php-apache   Deployment/php-apache   cpu: 58%/50%         1         10        5          90s
php-apache   Deployment/php-apache   cpu: 55%/50%         1         10        5          105s
php-apache   Deployment/php-apache   cpu: 57%/50%         1         10        5          2m
php-apache   Deployment/php-apache   cpu: 47%/50%         1         10        6          2m15s
php-apache   Deployment/php-apache   cpu: 54%/50%         1         10        6          2m30s
php-apache   Deployment/php-apache   cpu: 44%/50%         1         10        6          2m45s
php-apache   Deployment/php-apache   cpu: 46%/50%         1         10        6          3m
php-apache   Deployment/php-apache   cpu: 20%/50%         1         10        6          3m15s
php-apache   Deployment/php-apache   cpu: 1%/50%          1         10        6          3m30s
php-apache   Deployment/php-apache   cpu: 0%/50%          1         10        6          3m45s

After stopping the load and dropping to 0%, the replicas don't go down... or so I thought.

They went down after five minutes!!

php-apache   Deployment/php-apache   cpu: 0%/50%          1         10        6          8m
php-apache   Deployment/php-apache   cpu: 0%/50%          1         10        3          8m15s
php-apache   Deployment/php-apache   cpu: 0%/50%          1         10        1          8m30s

Amazing. Now I get it.
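
The roughly-five-minute lag matches the HPA's default scale-down stabilization window of 300 seconds. If you define the HPA as YAML instead of using k autoscale, that window can be tuned; a sketch:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 60   # default is 300s, hence the ~5 minute wait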

Day 18/40 - Kubernetes Health Probes Explained | Liveness vs Readiness Probes

startup: for slow/legacy apps
readiness: whether the app is ready to receive traffic
liveness: restart the container if it fails

I just read through these samples.

liveness-http and readiness-http

apiVersion: v1
kind: Pod
metadata:
  name: hello
spec:
  containers:
  - name: liveness
    image: registry.k8s.io/e2e-test-images/agnhost:2.40
    args:
    - liveness
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
    readinessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 10

liveness command

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: registry.k8s.io/busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat 
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5

liveness-tcp

apiVersion: v1
kind: Pod
metadata:
  name: tcp-pod
  labels:
    app: tcp-pod
spec:
  containers:
  - name: goproxy
    image: registry.k8s.io/goproxy:0.1
    ports:
    - containerPort: 8080
    livenessProbe:
      tcpSocket:
        port: 3000
      initialDelaySeconds: 10
      periodSeconds: 5

liveness exec

So a livenessProbe can run a shell command. Huh. Seems useful for things like checking whether a log file is still growing.

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
    - name: liveness
      image: registry.k8s.io/busybox
      args:
        - /bin/sh
        - -c
        - touch /tmp/healthy; sleep 10; rm -f /tmp/healthy; sleep 10
      livenessProbe:
        exec:
          command:
            - cat
            - /tmp/healthy
        initialDelaySeconds: 5
        periodSeconds: 1

It really does restart periodically:

$ k get po -o wide --watch
NAME            READY   STATUS    RESTARTS   AGE   IP            NODE           NOMINATED NODE   READINESS GATES
liveness-exec   1/1     Running   0          55s   10.244.2.15   kind-worker3   <none>           <none>
liveness-exec   1/1     Running   1 (11s ago)   66s   10.244.2.15   kind-worker3   <none>           <none>
liveness-exec   1/1     Running   2 (11s ago)   2m    10.244.2.15   kind-worker3   <none>           <none>
liveness-exec   1/1     Running   3 (11s ago)   2m54s   10.244.2.15   kind-worker3   <none>           <none>
liveness-exec   1/1     Running   4 (12s ago)   3m49s   10.244.2.15   kind-worker3   <none>           <none>
liveness-exec   1/1     Running   5 (12s ago)   4m43s   10.244.2.15   kind-worker3   <none>           <none>
liveness-exec   0/1     CrashLoopBackOff   5 (0s ago)    5m25s   10.244.2.15   kind-worker3   <none>           <none>
liveness-exec   1/1     Running            6 (93s ago)   6m58s   10.244.2.15   kind-worker3   <none>           <none>

In describe, Unhealthy and Killing followed by Pulling are the signs of a recreation (restart).
So in Kubernetes a restart means recreation. Gemini describes it as: stop the container, pull the image, create a new container, start the container.

  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  3m12s                default-scheduler  Successfully assigned day18/liveness-exec to kind-worker3
  Normal   Pulled     3m                   kubelet            Successfully pulled image "registry.k8s.io/busybox" in 11.132s (11.132s including waiting). Image
size: 1144547 bytes.
  Normal   Pulled     2m6s                 kubelet            Successfully pulled image "registry.k8s.io/busybox" in 11.029s (11.029s including waiting). Image
size: 1144547 bytes.
  Normal   Created    72s (x3 over 3m)     kubelet            Created container liveness
  Normal   Started    72s (x3 over 3m)     kubelet            Started container liveness
  Normal   Pulled     72s                  kubelet            Successfully pulled image "registry.k8s.io/busybox" in 11.125s (11.125s including waiting). Image
size: 1144547 bytes.
  Warning  Unhealthy  59s (x9 over 2m49s)  kubelet            Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
  Normal   Killing    59s (x3 over 2m47s)  kubelet            Container liveness failed liveness probe, will be restarted
  Normal   Pulling    29s (x4 over 3m11s)  kubelet            Pulling image "registry.k8s.io/busybox"

Day 19 ConfigMap

deployment + env

Create a deployment:

k create deploy deploy --image busybox --dry-run=client -o yaml > day19/deploy.yaml

Start it with a hard-coded env var:

      containers:
      - image: busybox
        name: busybox
        command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']
        env:
        - name: MY_ENV
          value: "Hello from env"
k apply -f deploy.yaml 
k exec -it deploy-7c66f97755-kb47m -- env | grep MY
MY_ENV=Hello from env

Using a ConfigMap in the deployment

Create the ConfigMap:

k create cm app-cm --from-literal=cm_name=piyush --dry-run=client -o yaml > cm.yaml
k describe cm app-cm

Once created, add it to the deployment.
It surprised me that env.name becomes the env var name as seen from the pod! That's the one that gets used, which is why everyone sets env.name to the same string as the key.

      containers:
      - image: busybox
        name: busybox
        command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']
        env:
        - name: config_map_deploy_name    <-- the env var key seen inside the pod!!!
          valueFrom:
            configMapKeyRef:
              name: app-cm         <---- the ConfigMap's name
              key: cm_name        <---- the key inside the ConfigMap
$ k exec -it deploy-5f8567658d-m27c9  -- env | grep config
config_map_deploy_name=piyush

$ k describe pod deploy-5f8567658d-m27c9  | grep -A3 Env
    Environment:
      MY_ENV:                  Hello from env
      config_map_deploy_name:  <set to the key 'cm_name' of config map 'app-cm'>  Optional: false

ConfigMap from a file: a failure

Make the file:

$ cat <<EOT >> env
> AAA=111
> BBB=222
> EOT

A bad ConfigMap

dry-run shows the expanded YAML that would be created; the file isn't imported by reference.

$ k create cm cm-file --from-file=env --dry-run=client -o yaml
apiVersion: v1
data:
  env: |       <------ this is the problem
    AAA=111
    BBB=222
kind: ConfigMap
metadata:
  name: cm-file

Creating it in this state ends badly.

Create it:

$ k create cm cm-file --from-file=env
configmap/cm-file created
k get cm cm-file
NAME      DATA   AGE
cm-file   1      7s

k describe cm cm-file 
env:
----
AAA=111
BBB=222


$ k exec -it deploy-b95b8c894-mhvbn -- sh
/ # echo $BBB

/ # echo AAA
AAA

The values are garbage. Looking closely, the AAA=111 format is the problem: the whole file ended up as a single value under the key env.

apiVersion: v1
data:
  env: |   
    AAA=111 <---- these should really be separate keys, colon-separated: AAA: 111
    BBB=222

It'll come up again somewhere. Giving up for now.
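
What I believe the fix is (not verified in this session): --from-env-file turns each KEY=VALUE line into its own key, and envFrom then injects all of them as env vars.

k create cm cm-env --from-env-file=env --dry-run=client -o yaml
# data:
#   AAA: "111"
#   BBB: "222"

# in the deployment's container spec:
        envFrom:
        - configMapRef:
            name: cm-env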

Day 20

It was about RSA key exchange, a thing of the past.

Day 21 CertificateSigningRequest

Let's issue a client certificate for a user.
ref: https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/

openssl genrsa -out myuser.key 2048
openssl req -new -key myuser.key -out myuser.csr -subj "/CN=myuser"
$ openssl req -text -noout -verify -in myuser.csr
Certificate request self-signature verify OK
Certificate Request:
    Data:
        Version: 1 (0x0)
        Subject: CN=myuser
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:b7:14:61:11:2f:c7:f9:cc:52:8c:29:14:6d:ee:
                Exponent: 65537 (0x10001)
        Attributes:
            (none)
            Requested Extensions:
    Signature Algorithm: sha256WithRSAEncryption
    Signature Value:
        5f:0b:6b:8f:ad:a1:d5:c2:7e:26:4b:22:0b:36:76:b4:9f:1e:

Base64-encode the CSR:

$ cat myuser.csr | base64 | tr -d "\n"
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1ZqQ0NBVDRDQVFBd0VURVBNQTBHQTFVRUF3d0diWGwxYzJWeU1JSUJJakFOQmdrcWhraUc5dzBCQVFFRgpBQU9DQVE4QU1JSUJDZ0tDQVFFQXR4UmhFUy9IK2N4U2pDa1ViZTZxTnBKUWoyWUoyYnZpM2VwREM0N1BjOFlkCmlFdk1KYzRCdHVMdXJjL1Mwd2lPN213bytCMHo5T2pjSCs4QUdSV09Qa0x6Z2pGMkFYYzBlSVVPY1lVaG0xNjMKSFlGQituOFlNeEZSOXNudnBQS2tGN1oxUTFBMW5RZ3FUak9nek9FUUhFQk14YUJ0RVZqakZaT0pUTHVDZVZHMQpYMllrTGNVSDkxcEsrTHFVZDIvMGlWMnIvWjlhRVJiT1Rqa2ptUFkyNTJONDZOODF3ZnRyZmJlMDdRb1dyTXRUCmZIY3NNaXMxWEV5Tms1YklpOWJ1ek9MY0tHOGVRcWdIZEZtYlhWbWF3aFdUQXJyY2JsTW03MytiMTJiUTlQZncKL3ZQTndLK0VZL3QraW1vY2piODdKRXNJMXBhK05KcXlHSEE0R3NwK3d3SURBUUFCb0FBd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBRjhMYTQrdG9kWENmaVpMSWdzMmRyU2ZIc3NYa3pBRVo5L21PUTZGODFqakN1ZitFUk5zClFpWXZFTEJWbVJjeXVRa0hKOHpqanpZT1pWNzZRTDAraDFTOFo2WkMzNnJPYkxYTnJUYTRVcTU3NnVGRlBKaWwKbk01R29sN3J4SE9Yak5BdDBGVlFIR3JnQ2p6NDJGVUh1cDh0K0dhZGUwaFZiTnNyOE9BY0RoZld1OHYvWEhUbworMjU5SHE4ZTdhNURPRjl5YkwvNWFHSEthUEV3eGVGQmk1dkZYYU8vamFIejhpWDQ0TFFPRHR0RU1Gd2NVbFh6CldLVWJjNzFraWtlVlhqL3I5UDBDc09IT3dreW9IcHFZNUFacERsbTJPUjIyekE0ZjgrMWdUT1lrbjQ0Y2I5N2kKc1NoaWVpWC90Nmw3WVN4TUdBalgrbjlGM0RYU3NFUmlhOXc9Ci0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo=

Put that into the request field of the YAML:

apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: myuser
spec:
  request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1ZqQ0NBVDRDQVFBd0VURVBNQTBHQTFVRUF3d0diWGwxYzJWeU1JSUJJakFOQmdrcWhraUc5dzBCQVFFRgpBQU9DQVE4QU1JSUJDZ0tDQVFFQXR4UmhFUy9IK2N4U2pDa1ViZTZxTnBKUWoyWUoyYnZpM2VwREM0N1BjOFlkCmlFdk1KYzRCdHVMdXJjL1Mwd2lPN213bytCMHo5T2pjSCs4QUdSV09Qa0x6Z2pGMkFYYzBlSVVPY1lVaG0xNjMKSFlGQituOFlNeEZSOXNudnBQS2tGN1oxUTFBMW5RZ3FUak9nek9FUUhFQk14YUJ0RVZqakZaT0pUTHVDZVZHMQpYMllrTGNVSDkxcEsrTHFVZDIvMGlWMnIvWjlhRVJiT1Rqa2ptUFkyNTJONDZOODF3ZnRyZmJlMDdRb1dyTXRUCmZIY3NNaXMxWEV5Tms1YklpOWJ1ek9MY0tHOGVRcWdIZEZtYlhWbWF3aFdUQXJyY2JsTW03MytiMTJiUTlQZncKL3ZQTndLK0VZL3QraW1vY2piODdKRXNJMXBhK05KcXlHSEE0R3NwK3d3SURBUUFCb0FBd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBRjhMYTQrdG9kWENmaVpMSWdzMmRyU2ZIc3NYa3pBRVo5L21PUTZGODFqakN1ZitFUk5zClFpWXZFTEJWbVJjeXVRa0hKOHpqanpZT1pWNzZRTDAraDFTOFo2WkMzNnJPYkxYTnJUYTRVcTU3NnVGRlBKaWwKbk01R29sN3J4SE9Yak5BdDBGVlFIR3JnQ2p6NDJGVUh1cDh0K0dhZGUwaFZiTnNyOE9BY0RoZld1OHYvWEhUbworMjU5SHE4ZTdhNURPRjl5YkwvNWFHSEthUEV3eGVGQmk1dkZYYU8vamFIejhpWDQ0TFFPRHR0RU1Gd2NVbFh6CldLVWJjNzFraWtlVlhqL3I5UDBDc09IT3dreW9IcHFZNUFacERsbTJPUjIyekE0ZjgrMWdUT1lrbjQ0Y2I5N2kKc1NoaWVpWC90Nmw3WVN4TUdBalgrbjlGM0RYU3NFUmlhOXc9Ci0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo=
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 86400  # one day
  usages:
    - client auth
$ k apply -f csr.yaml 
certificatesigningrequest.certificates.k8s.io/myuser created

It starts out Pending:

$ k get csr
NAME     AGE   SIGNERNAME                            REQUESTOR          REQUESTEDDURATION   CONDITION
myuser   3s    kubernetes.io/kube-apiserver-client   kubernetes-admin   24h                 Pending
$ k describe csr myuser
Name:         myuser
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"certificates.k8s.io/v1","kind":"CertificateSigningRequest","metadata":{"annotations":{},"name":"myuser"},"spec":{"expirationSeconds":86400,"request":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1ZqQ0NBVDRDQVFBd0VURVBNQTBHQTFVRUF3d0diWGwxYzJWeU1JSUJJakFOQmdrcWhraUc5dzBCQVFFRgpBQU9DQVE4QU1JSUJDZ0tDQVFFQXR4UmhFUy9IK2N4U2pDa1ViZTZxTnBKUWoyWUoyYnZpM2VwREM0N1BjOFlkCmlFdk1KYzRCdHVMdXJjL1Mwd2lPN213bytCMHo5T2pjSCs4QUdSV09Qa0x6Z2pGMkFYYzBlSVVPY1lVaG0xNjMKSFlGQituOFlNeEZSOXNudnBQS2tGN1oxUTFBMW5RZ3FUak9nek9FUUhFQk14YUJ0RVZqakZaT0pUTHVDZVZHMQpYMllrTGNVSDkxcEsrTHFVZDIvMGlWMnIvWjlhRVJiT1Rqa2ptUFkyNTJONDZOODF3ZnRyZmJlMDdRb1dyTXRUCmZIY3NNaXMxWEV5Tms1YklpOWJ1ek9MY0tHOGVRcWdIZEZtYlhWbWF3aFdUQXJyY2JsTW03MytiMTJiUTlQZncKL3ZQTndLK0VZL3QraW1vY2piODdKRXNJMXBhK05KcXlHSEE0R3NwK3d3SURBUUFCb0FBd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBRjhMYTQrdG9kWENmaVpMSWdzMmRyU2ZIc3NYa3pBRVo5L21PUTZGODFqakN1ZitFUk5zClFpWXZFTEJWbVJjeXVRa0hKOHpqanpZT1pWNzZRTDAraDFTOFo2WkMzNnJPYkxYTnJUYTRVcTU3NnVGRlBKaWwKbk01R29sN3J4SE9Yak5BdDBGVlFIR3JnQ2p6NDJGVUh1cDh0K0dhZGUwaFZiTnNyOE9BY0RoZld1OHYvWEhUbworMjU5SHE4ZTdhNURPRjl5YkwvNWFHSEthUEV3eGVGQmk1dkZYYU8vamFIejhpWDQ0TFFPRHR0RU1Gd2NVbFh6CldLVWJjNzFraWtlVlhqL3I5UDBDc09IT3dreW9IcHFZNUFacERsbTJPUjIyekE0ZjgrMWdUT1lrbjQ0Y2I5N2kKc1NoaWVpWC90Nmw3WVN4TUdBalgrbjlGM0RYU3NFUmlhOXc9Ci0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo=","signerName":"kubernetes.io/kube-apiserver-client","usages":["client auth"]}}

CreationTimestamp:   Wed, 20 Nov 2024 23:08:43 +0900
Requesting User:     kubernetes-admin
Signer:              kubernetes.io/kube-apiserver-client
Requested Duration:  24h
Status:              Pending
Subject:
         Common Name:    myuser
         Serial Number:
Events:  <none>

Oddly, k get csr -o yaml is more detailed than describe. And the issued certificate even gets added to the k get csr output later.

$ k get csr -o yaml
apiVersion: v1
items:
- apiVersion: certificates.k8s.io/v1
  kind: CertificateSigningRequest
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"certificates.k8s.io/v1","kind":"CertificateSigningRequest","metadata":{"annotations":{},"name":"myuser"},"spec":{"expirationSeconds":86400,"request":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1ZqQ0NBVDRDQVFBd0VURVBNQTBHQTFVRUF3d0diWGwxYzJWeU1JSUJJakFOQmdrcWhraUc5dzBCQVFFRgpBQU9DQVE4QU1JSUJDZ0tDQVFFQXR4UmhFUy9IK2N4U2pDa1ViZTZxTnBKUWoyWUoyYnZpM2VwREM0N1BjOFlkCmlFdk1KYzRCdHVMdXJjL1Mwd2lPN213bytCMHo5T2pjSCs4QUdSV09Qa0x6Z2pGMkFYYzBlSVVPY1lVaG0xNjMKSFlGQituOFlNeEZSOXNudnBQS2tGN1oxUTFBMW5RZ3FUak9nek9FUUhFQk14YUJ0RVZqakZaT0pUTHVDZVZHMQpYMllrTGNVSDkxcEsrTHFVZDIvMGlWMnIvWjlhRVJiT1Rqa2ptUFkyNTJONDZOODF3ZnRyZmJlMDdRb1dyTXRUCmZIY3NNaXMxWEV5Tms1YklpOWJ1ek9MY0tHOGVRcWdIZEZtYlhWbWF3aFdUQXJyY2JsTW03MytiMTJiUTlQZncKL3ZQTndLK0VZL3QraW1vY2piODdKRXNJMXBhK05KcXlHSEE0R3NwK3d3SURBUUFCb0FBd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBRjhMYTQrdG9kWENmaVpMSWdzMmRyU2ZIc3NYa3pBRVo5L21PUTZGODFqakN1ZitFUk5zClFpWXZFTEJWbVJjeXVRa0hKOHpqanpZT1pWNzZRTDAraDFTOFo2WkMzNnJPYkxYTnJUYTRVcTU3NnVGRlBKaWwKbk01R29sN3J4SE9Yak5BdDBGVlFIR3JnQ2p6NDJGVUh1cDh0K0dhZGUwaFZiTnNyOE9BY0RoZld1OHYvWEhUbworMjU5SHE4ZTdhNURPRjl5YkwvNWFHSEthUEV3eGVGQmk1dkZYYU8vamFIejhpWDQ0TFFPRHR0RU1Gd2NVbFh6CldLVWJjNzFraWtlVlhqL3I5UDBDc09IT3dreW9IcHFZNUFacERsbTJPUjIyekE0ZjgrMWdUT1lrbjQ0Y2I5N2kKc1NoaWVpWC90Nmw3WVN4TUdBalgrbjlGM0RYU3NFUmlhOXc9Ci0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo=","signerName":"kubernetes.io/kube-apiserver-client","usages":["client auth"]}}
    creationTimestamp: "2024-11-20T14:13:33Z"
    name: myuser
    resourceVersion: "842806"
    uid: a99b199d-38d8-4120-9f3b-7fd2028ed026
  spec:
    expirationSeconds: 86400
    groups:
    - kubeadm:cluster-admins
    - system:authenticated
    request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1ZqQ0NBVDRDQVFBd0VURVBNQTBHQTFVRUF3d0diWGwxYzJWeU1JSUJJakFOQmdrcWhraUc5dzBCQVFFRgpBQU9DQVE4QU1JSUJDZ0tDQVFFQXR4UmhFUy9IK2N4U2pDa1ViZTZxTnBKUWoyWUoyYnZpM2VwREM0N1BjOFlkCmlFdk1KYzRCdHVMdXJjL1Mwd2lPN213bytCMHo5T2pjSCs4QUdSV09Qa0x6Z2pGMkFYYzBlSVVPY1lVaG0xNjMKSFlGQituOFlNeEZSOXNudnBQS2tGN1oxUTFBMW5RZ3FUak9nek9FUUhFQk14YUJ0RVZqakZaT0pUTHVDZVZHMQpYMllrTGNVSDkxcEsrTHFVZDIvMGlWMnIvWjlhRVJiT1Rqa2ptUFkyNTJONDZOODF3ZnRyZmJlMDdRb1dyTXRUCmZIY3NNaXMxWEV5Tms1YklpOWJ1ek9MY0tHOGVRcWdIZEZtYlhWbWF3aFdUQXJyY2JsTW03MytiMTJiUTlQZncKL3ZQTndLK0VZL3QraW1vY2piODdKRXNJMXBhK05KcXlHSEE0R3NwK3d3SURBUUFCb0FBd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBRjhMYTQrdG9kWENmaVpMSWdzMmRyU2ZIc3NYa3pBRVo5L21PUTZGODFqakN1ZitFUk5zClFpWXZFTEJWbVJjeXVRa0hKOHpqanpZT1pWNzZRTDAraDFTOFo2WkMzNnJPYkxYTnJUYTRVcTU3NnVGRlBKaWwKbk01R29sN3J4SE9Yak5BdDBGVlFIR3JnQ2p6NDJGVUh1cDh0K0dhZGUwaFZiTnNyOE9BY0RoZld1OHYvWEhUbworMjU5SHE4ZTdhNURPRjl5YkwvNWFHSEthUEV3eGVGQmk1dkZYYU8vamFIejhpWDQ0TFFPRHR0RU1Gd2NVbFh6CldLVWJjNzFraWtlVlhqL3I5UDBDc09IT3dreW9IcHFZNUFacERsbTJPUjIyekE0ZjgrMWdUT1lrbjQ0Y2I5N2kKc1NoaWVpWC90Nmw3WVN4TUdBalgrbjlGM0RYU3NFUmlhOXc9Ci0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo=
    signerName: kubernetes.io/kube-apiserver-client
    usages:
    - client auth
    username: kubernetes-admin
  status: {}
kind: List
metadata:
  resourceVersion: ""

Approving the request issues the certificate

$ k certificate approve myuser
certificatesigningrequest.certificates.k8s.io/myuser approved
$ k get csr
NAME     AGE     SIGNERNAME                            REQUESTOR          REQUESTEDDURATION   CONDITION
myuser   2m36s   kubernetes.io/kube-apiserver-client   kubernetes-admin   24h                 Approved,Issued

With describe csr, only the Status has changed, though...

$ k describe csr myuser
Name:         myuser
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"certificates.k8s.io/v1","kind":"CertificateSigningRequest","metadata":{"annotations":{},"name":"myuser"},"spec":{"expirationSeconds":86400,"request":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1ZqQ0NBVDRDQVFBd0VURVBNQTBHQTFVRUF3d0diWGwxYzJWeU1JSUJJakFOQmdrcWhraUc5dzBCQVFFRgpBQU9DQVE4QU1JSUJDZ0tDQVFFQXR4UmhFUy9IK2N4U2pDa1ViZTZxTnBKUWoyWUoyYnZpM2VwREM0N1BjOFlkCmlFdk1KYzRCdHVMdXJjL1Mwd2lPN213bytCMHo5T2pjSCs4QUdSV09Qa0x6Z2pGMkFYYzBlSVVPY1lVaG0xNjMKSFlGQituOFlNeEZSOXNudnBQS2tGN1oxUTFBMW5RZ3FUak9nek9FUUhFQk14YUJ0RVZqakZaT0pUTHVDZVZHMQpYMllrTGNVSDkxcEsrTHFVZDIvMGlWMnIvWjlhRVJiT1Rqa2ptUFkyNTJONDZOODF3ZnRyZmJlMDdRb1dyTXRUCmZIY3NNaXMxWEV5Tms1YklpOWJ1ek9MY0tHOGVRcWdIZEZtYlhWbWF3aFdUQXJyY2JsTW03MytiMTJiUTlQZncKL3ZQTndLK0VZL3QraW1vY2piODdKRXNJMXBhK05KcXlHSEE0R3NwK3d3SURBUUFCb0FBd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBRjhMYTQrdG9kWENmaVpMSWdzMmRyU2ZIc3NYa3pBRVo5L21PUTZGODFqakN1ZitFUk5zClFpWXZFTEJWbVJjeXVRa0hKOHpqanpZT1pWNzZRTDAraDFTOFo2WkMzNnJPYkxYTnJUYTRVcTU3NnVGRlBKaWwKbk01R29sN3J4SE9Yak5BdDBGVlFIR3JnQ2p6NDJGVUh1cDh0K0dhZGUwaFZiTnNyOE9BY0RoZld1OHYvWEhUbworMjU5SHE4ZTdhNURPRjl5YkwvNWFHSEthUEV3eGVGQmk1dkZYYU8vamFIejhpWDQ0TFFPRHR0RU1Gd2NVbFh6CldLVWJjNzFraWtlVlhqL3I5UDBDc09IT3dreW9IcHFZNUFacERsbTJPUjIyekE0ZjgrMWdUT1lrbjQ0Y2I5N2kKc1NoaWVpWC90Nmw3WVN4TUdBalgrbjlGM0RYU3NFUmlhOXc9Ci0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo=","signerName":"kubernetes.io/kube-apiserver-client","usages":["client auth"]}}

CreationTimestamp:   Wed, 20 Nov 2024 23:08:43 +0900
Requesting User:     kubernetes-admin
Signer:              kubernetes.io/kube-apiserver-client
Requested Duration:  24h
Status:              Approved,Issued
Subject:
         Common Name:    myuser
         Serial Number:
Events:  <none>

Looking at k get csr -o yaml, a certificate: field has been added!

$ k get csr myuser -o yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"certificates.k8s.io/v1","kind":"CertificateSigningRequest","metadata":{"annotations":{},"name":"myuser"},"spec":{"expirationSeconds":86400,"request":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1ZqQ0NBVDRDQVFBd0VURVBNQTBHQTFVRUF3d0diWGwxYzJWeU1JSUJJakFOQmdrcWhraUc5dzBCQVFFRgpBQU9DQVE4QU1JSUJDZ0tDQVFFQXR4UmhFUy9IK2N4U2pDa1ViZTZxTnBKUWoyWUoyYnZpM2VwREM0N1BjOFlkCmlFdk1KYzRCdHVMdXJjL1Mwd2lPN213bytCMHo5T2pjSCs4QUdSV09Qa0x6Z2pGMkFYYzBlSVVPY1lVaG0xNjMKSFlGQituOFlNeEZSOXNudnBQS2tGN1oxUTFBMW5RZ3FUak9nek9FUUhFQk14YUJ0RVZqakZaT0pUTHVDZVZHMQpYMllrTGNVSDkxcEsrTHFVZDIvMGlWMnIvWjlhRVJiT1Rqa2ptUFkyNTJONDZOODF3ZnRyZmJlMDdRb1dyTXRUCmZIY3NNaXMxWEV5Tms1YklpOWJ1ek9MY0tHOGVRcWdIZEZtYlhWbWF3aFdUQXJyY2JsTW03MytiMTJiUTlQZncKL3ZQTndLK0VZL3QraW1vY2piODdKRXNJMXBhK05KcXlHSEE0R3NwK3d3SURBUUFCb0FBd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBRjhMYTQrdG9kWENmaVpMSWdzMmRyU2ZIc3NYa3pBRVo5L21PUTZGODFqakN1ZitFUk5zClFpWXZFTEJWbVJjeXVRa0hKOHpqanpZT1pWNzZRTDAraDFTOFo2WkMzNnJPYkxYTnJUYTRVcTU3NnVGRlBKaWwKbk01R29sN3J4SE9Yak5BdDBGVlFIR3JnQ2p6NDJGVUh1cDh0K0dhZGUwaFZiTnNyOE9BY0RoZld1OHYvWEhUbworMjU5SHE4ZTdhNURPRjl5YkwvNWFHSEthUEV3eGVGQmk1dkZYYU8vamFIejhpWDQ0TFFPRHR0RU1Gd2NVbFh6CldLVWJjNzFraWtlVlhqL3I5UDBDc09IT3dreW9IcHFZNUFacERsbTJPUjIyekE0ZjgrMWdUT1lrbjQ0Y2I5N2kKc1NoaWVpWC90Nmw3WVN4TUdBalgrbjlGM0RYU3NFUmlhOXc9Ci0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo=","signerName":"kubernetes.io/kube-apiserver-client","usages":["client auth"]}}
  creationTimestamp: "2024-11-20T14:13:33Z"
  name: myuser
  resourceVersion: "842940"
  uid: a99b199d-38d8-4120-9f3b-7fd2028ed026
spec:
  expirationSeconds: 86400
  groups:
  - kubeadm:cluster-admins
  - system:authenticated
  request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1ZqQ0NBVDRDQVFBd0VURVBNQTBHQTFVRUF3d0diWGwxYzJWeU1JSUJJakFOQmdrcWhraUc5dzBCQVFFRgpBQU9DQVE4QU1JSUJDZ0tDQVFFQXR4UmhFUy9IK2N4U2pDa1ViZTZxTnBKUWoyWUoyYnZpM2VwREM0N1BjOFlkCmlFdk1KYzRCdHVMdXJjL1Mwd2lPN213bytCMHo5T2pjSCs4QUdSV09Qa0x6Z2pGMkFYYzBlSVVPY1lVaG0xNjMKSFlGQituOFlNeEZSOXNudnBQS2tGN1oxUTFBMW5RZ3FUak9nek9FUUhFQk14YUJ0RVZqakZaT0pUTHVDZVZHMQpYMllrTGNVSDkxcEsrTHFVZDIvMGlWMnIvWjlhRVJiT1Rqa2ptUFkyNTJONDZOODF3ZnRyZmJlMDdRb1dyTXRUCmZIY3NNaXMxWEV5Tms1YklpOWJ1ek9MY0tHOGVRcWdIZEZtYlhWbWF3aFdUQXJyY2JsTW03MytiMTJiUTlQZncKL3ZQTndLK0VZL3QraW1vY2piODdKRXNJMXBhK05KcXlHSEE0R3NwK3d3SURBUUFCb0FBd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBRjhMYTQrdG9kWENmaVpMSWdzMmRyU2ZIc3NYa3pBRVo5L21PUTZGODFqakN1ZitFUk5zClFpWXZFTEJWbVJjeXVRa0hKOHpqanpZT1pWNzZRTDAraDFTOFo2WkMzNnJPYkxYTnJUYTRVcTU3NnVGRlBKaWwKbk01R29sN3J4SE9Yak5BdDBGVlFIR3JnQ2p6NDJGVUh1cDh0K0dhZGUwaFZiTnNyOE9BY0RoZld1OHYvWEhUbworMjU5SHE4ZTdhNURPRjl5YkwvNWFHSEthUEV3eGVGQmk1dkZYYU8vamFIejhpWDQ0TFFPRHR0RU1Gd2NVbFh6CldLVWJjNzFraWtlVlhqL3I5UDBDc09IT3dreW9IcHFZNUFacERsbTJPUjIyekE0ZjgrMWdUT1lrbjQ0Y2I5N2kKc1NoaWVpWC90Nmw3WVN4TUdBalgrbjlGM0RYU3NFUmlhOXc9Ci0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo=
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
  username: kubernetes-admin
status:
  certificate: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM5ekNDQWQrZ0F3SUJBZ0lSQU5yZnpBL0hWMDM2QTdYLzl1MHdKRFl3RFFZSktvWklodmNOQVFFTEJRQXcKRlRFVE1CRUdBMVVFQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TkRFeE1qQXhOREE1TlRkYUZ3MHlOREV4TWpFeApOREE1TlRkYU1CRXhEekFOQmdOVkJBTVRCbTE1ZFhObGNqQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQCkFEQ0NBUW9DZ2dFQkFMY1VZUkV2eC9uTVVvd3BGRzN1cWphU1VJOW1DZG03NHQzcVF3dU96M1BHSFloTHpDWE8KQWJiaTdxM1AwdE1JanU1c0tQZ2RNL1RvM0IvdkFCa1ZqajVDODRJeGRnRjNOSGlGRG5HRkladGV0eDJCUWZwLwpHRE1SVWZiSjc2VHlwQmUyZFVOUU5aMElLazR6b016aEVCeEFUTVdnYlJGWTR4V1RpVXk3Z25sUnRWOW1KQzNGCkIvZGFTdmk2bEhkdjlJbGRxLzJmV2hFV3prNDVJNWoyTnVkamVPamZOY0g3YTMyM3RPMEtGcXpMVTN4M0xESXIKTlZ4TWpaT1d5SXZXN3N6aTNDaHZIa0tvQjNSWm0xMVptc0lWa3dLNjNHNVRKdTkvbTlkbTBQVDM4UDd6emNDdgpoR1A3Zm9wcUhJMi9PeVJMQ05hV3ZqU2FzaGh3T0JyS2ZzTUNBd0VBQWFOR01FUXdFd1lEVlIwbEJBd3dDZ1lJCkt3WUJCUVVIQXdJd0RBWURWUjBUQVFIL0JBSXdBREFmQmdOVkhTTUVHREFXZ0JURGx3RU9zWWVYYTV4VE9HYzIKTHNFNVI1Z2p4VEFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBZGc1VkJhTnpOdk1OeVRJcWVaKzBNRVRwZzhpdwpqWEFrWW9xQk9uSDVhajlTMWhBY0EzVFlUdWtEK3MzcHI5WENiWDVMcm5oby8wRzhiMjRHb3JKK0FMcytCYVVICkJWUVFGUkRSNWZZTWVGZTRqa1d5amhJc0greldDVFRqVDR2VlNMTFNRckZld1YrZHJrS3dGcFNlNFIrZUk2WTYKdWtZblRrZjNRMmJKVkZpaTNFbzdkS1pnU0ZOSnhhaWhGNVFLbWJ3VXU5bTNSbVBabzZCclQ5L1A0VTBrRFRIVAowVnJiTGxnT2xra2FoOXJtNWFTMjd0THdackVWbUYvUkZEdTVGOGVDOC81SnFuek5leG1QQ1NKcGIzSndXK21GCmlEWTlaYnBJakZCZG1yekNxZGtZeU9YR012bEQ5eDdib2ErQWtmRU5zNnZBMTlwM2w1VjV4V2E2SVE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
  conditions:
  - lastTransitionTime: "2024-11-20T14:14:57Z"
    lastUpdateTime: "2024-11-20T14:14:58Z"
    message: This CSR was approved by kubectl certificate approve.
    reason: KubectlApprove
    status: "True"
    type: Approved

I saved the certificate value to a file named certificate.
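
One way to pull that field out (a sketch, assuming the CSR is still named myuser):

k get csr myuser -o jsonpath='{.status.certificate}' > certificate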

$ cat certificate | base64 -d
-----BEGIN CERTIFICATE-----
MIIC9zCCAd+gAwIBAgIRANrfzA/HV036A7X/9u0wJDYwDQYJKoZIhvcNAQELBQAw
FTETMBEGA1UEAxMKa3ViZXJuZXRlczAeFw0yNDExMjAxNDA5NTdaFw0yNDExMjEx
NDA5NTdaMBExDzANBgNVBAMTBm15dXNlcjCCA...

$ cat certificate | base64 -d | openssl x509 -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            da:df:cc:0f:c7:57:4d:fa:03:b5:ff:f6:ed:30:24:36
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=kubernetes
        Validity
            Not Before: Nov 20 14:09:57 2024 GMT
            Not After : Nov 21 14:09:57 2024 GMT
        Subject: CN=myuser
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:b7:14:61:11:2f:c7:f9:cc:52:8c:29:14:6d:ee:
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Extended Key Usage: 
                TLS Web Client Authentication
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Authority Key Identifier: 
                C3:97:01:0E:B1:87:97:6B:9C:53:38:67:36:2E:C1:39:47:98:23:C5
    Signature Algorithm: sha256WithRSAEncryption
    Signature Value:
        76:0e:55:05:a3:73:36:f3:0d:c9:32:2a:79:9f:b4:30:44:e9:

I see, so this is where the CN=kubernetes issuer I sometimes see comes from
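
That issuer is the cluster CA. You can presumably check its subject straight from the kubeconfig (a sketch; assumes the first cluster entry is the one in use):

k config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 -d | openssl x509 -noout -subject
# should print subject=CN = kubernetes for a kubeadm/kind cluster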

Contents of the CertificateSigningRequest

Stripping it down, you notice it was created with the username kubernetes-admin

apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: myuser
spec:
  expirationSeconds: 86400
  groups:
  - kubeadm:cluster-admins
  - system:authenticated
  request: <CSR pem>
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
  username: kubernetes-admin     <------ is this the kubernetes user name?
status:
  certificate: <pem>
  conditions:
  - lastTransitionTime: "2024-11-20T14:14:57Z"
    lastUpdateTime: "2024-11-20T14:14:58Z"
    message: This CSR was approved by kubectl certificate approve.
    reason: KubectlApprove
    status: "True"
    type: Approved

It's not clear to me who you become when you authenticate with this cert
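
(spec.username above seems to just record who submitted the CSR; the identity the cert itself grants should come from its Subject CN, i.e. myuser.) To try it out, you could register the cert as a kubeconfig user, roughly like this (a sketch; assumes the private key from the CSR step is myuser.key and the decoded PEM was saved as myuser.crt):

k config set-credentials myuser --client-certificate=myuser.crt --client-key=myuser.key --embed-certs=true
k config set-context myuser@kind-kind --cluster=kind-kind --user=myuser
k config use-context myuser@kind-kind
k auth whoami    # Username should now be myuser, taken from the cert's CN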

Right now, I am apparently kubernetes-admin

$ k auth whoami
ATTRIBUTE   VALUE
Username    kubernetes-admin
Groups      [kubeadm:cluster-admins system:authenticated]

But the can-i results don't match. Is it because of the difference in Groups?

$ kubectl auth can-i get pods
yes

$ kubectl auth can-i get pods --as=kubernetes-admin
no
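
Presumably yes: with kubeadm, cluster-admin is bound to the group kubeadm:cluster-admins rather than to the user itself, and --as impersonates the bare user without its original groups. Adding the group back should flip the answer (a sketch):

kubectl auth can-i get pods --as=kubernetes-admin --as-group=kubeadm:cluster-admins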

I have a feeling the next lesson will cover this!

