I want to get the CKA, so I'm learning from the basics.
I missed signing up for my company's Udemy Business, so I'll work through this series instead:
https://www.youtube.com/watch?v=tHAQWLKMTB0&list=PLl4APkPHzsUUOkOv3i62UidrLmSB8DcGC&index=10
Day 8
Use this when you don't know the apiVersion.
When apiVersion: v1 gives an error, this is how you confirm that apiVersion: apps/v1
is the right one.
kubectl explain replicaset
k explain deployment
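For reference, the output looks roughly like this (a sketch; the exact layout depends on the kubectl version):
$ k explain deployment
GROUP:      apps
KIND:       Deployment
VERSION:    v1
DESCRIPTION:
    Deployment enables declarative updates for Pods and ReplicaSets.
The apiVersion to put in the YAML is GROUP/VERSION, i.e. apps/v1 here.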
config commands
I can never remember these
# list up contexts
k config get-contexts
k config use-context rancher-desktop
k config set-context --current --namespace=default
or
k config set-context kind-kind --namespace default
How to find the namespace you're currently using
$ k config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* kind-kind kind-kind kind-kind ns12
rancher-desktop rancher-desktop rancher-desktop
Show all resources in the current namespace. Handy.
k get all
k get all -o wide
k get pods -o wide
k get nodes -o wide
Show labels
k get po --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx 1/1 Running 0 6m45s run=nginx
describe
You can address a resource as resource/name, separated by a slash.
Apparently describe has no short form. Inconvenient... I wish desc
worked.
k describe rs/nginx-rs
k describe deploy/nginx
replication controller, replicaset, deployment
ReplicationController is the old way; its selector is limited (equality-based only).
ReplicaSet is the current recommendation.
A Deployment is what creates ReplicaSets for you.
Create a pod directly
This is handy!!
k run ng --image nginx
It can also output the YAML!
k run ng --image nginx --dry-run=client -o yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: ng
name: ng
spec:
containers:
- image: nginx
name: ng
resources: {}
dnsPolicy: ClusterFirst
restartPolicy: Always
status: {}
Create a deployment directly
k create deploy deploy-nginx --image=nginx --replicas=3
# you can bump replicas etc.
k edit deploy/deploy-nginx
k scale deploy/nginx --replicas=5
# change the image with k set image
k set image deploy/nginx-deploy nginx=nginx:1.27.2
# a replicaset created by a deployment can't really be edited directly (the deployment overwrites it right away)
# if you created the replicaset yourself, you can edit it
k edit rs nginx-7854ff8877
k delete deploy/deploy-nginx # slash version
k delete deploy deploy-nginx # space instead of slash; either works
I couldn't find a way to attach a label at create time. Command to add a label afterwards:
k label deploy deploy-nginx newlabel=111
k label pod ds-pdvfw newlabel=111
$ k describe pod ds-pdvfw | grep -B3 111
Labels: app=nginx
controller-revision-hash=6d59468b47
env=demo
newlabel=111
With a pod you can set labels directly:
$ k run ng --image nginx --labels="env=me,app=nginx"
pod/ng created
$ k describe pod ng | grep -A1 Label
Labels: app=nginx
env=me
Use a YAML template
k create deploy nginx-new --image=nginx --dry-run=client -o yaml > deploy.yaml
rollout
After the undo the image went back to the old one. I don't think the replica change got undone, though.
kubectl rollout history deploy/nginx-rs
k rollout undo deploy/nginx-rs
kubectl rollout history deploy/nginx-rs
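A few related rollout subcommands to remember (not from the video, but standard kubectl):
k rollout status deploy/nginx-rs
k rollout history deploy/nginx-rs --revision=2
k rollout undo deploy/nginx-rs --to-revision=1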
Create a ReplicaSet directly
Write the YAML and apply -f (sketch below).
You can operate it much like a deploy:
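A minimal sketch of the kind of yaml I applied (labels are my own choice):
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-rs
  template:
    metadata:
      labels:
        app: nginx-rs
    spec:
      containers:
      - name: nginx
        image: nginx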
k edit rs/nginx-rs
k scale --replicas=10 rs/nginx-rs
k delete rs/nginx-rs
Day 9
How to use kind
kind builds the k8s cluster inside containers, so NodePort doesn't reach the host by default. To make it work you have to recreate the cluster; a cluster can't be edited once it's created.
# three node (two workers) cluster config
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
extraPortMappings:
- containerPort: 30001
hostPort: 30001
- role: worker
- role: worker
kind create cluster --config kind.yaml
kubectl cluster-info --context kind-kind # default name = kind
kubectl cluster-info --context kind-kind --name kind2
kubectl cluster-info dump
kind delete cluster --name=kind
With extraPortMappings, the IPs from k get nodes don't matter; localhost:30001 connects straight through to the NodePort.
apiVersion: v1
kind: Service
metadata:
name: nodeport-svc
labels:
env: demo
spec:
type: NodePort
ports:
- nodePort: 30001
port: 80
targetPort: 80
selector:
app: deploy-nginx
Try changing the selector
k create deploy apache --image httpd --replicas 3
$ k describe deploy/apache | grep -i labels
Labels: app=apache
Labels: app=apache
selector:
app: apache
Apply the NodePort service like this and Apache becomes reachable at localhost:30001
Service type: LoadBalancer, ClusterIP
Likewise, if you create an LB or ClusterIP service it automatically connects to the pods hit by the selector.
The selector picks which pods; targetPort is the port on the pods the selector hit.
If targetPort is omitted it defaults to the same value as port (80 here), so it still reached apache:80; setting targetPort: 8080 broke the connection (httpd listens on 80).
spec:
type: LoadBalancer
ports:
- port: 80
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 80
I tested it from inside a pod, like this:
k exec -it apache-7bdd4c55dc-8hnsk -- sh
The LoadBalancer's external IP stays pending (kind has no cloud load balancer, so it stays that way unless you add something like MetalLB).
k get svc -o wide
Connections between services
You can't point a NodePort service at an LB service or a ClusterIP service.
Apparently the mechanisms are completely different.
Let's just remember that the only thing a service can attach to is pods, selected via the selector.
The mystery of the ExternalName Service
I couldn't figure out what it's useful for at all. Use case unknown.
-> figured it out later
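Note to self: an ExternalName service just maps a cluster-internal DNS name to an external DNS name via a CNAME. A sketch (names made up):
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: db.example.com
Pods can then resolve external-db (or external-db.<ns>.svc.cluster.local) and DNS hands back db.example.com. No selector, no proxying.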
Day 10 namespace
k8s namespaces don't isolate the network at all. News to me.
Pods in different namespaces can talk to each other just fine. name.ns.svc.cluster.local
works, and hitting a pod IP address directly also works. News to me.
pod.cluster.local
doesn't exist. You're supposed to always go through a service, for the sake of abstraction. If you really must, hit the pod IP address directly.
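A quick way I'd verify the cross-namespace access (a service named "web" in namespace ns12 is assumed):
k run tester --rm -it --restart=Never --image busybox -- wget -qO- http://web.ns12.svc.cluster.local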
Can you set ACLs between namespaces? That's what NetworkPolicy
is for; I tried it but it didn't behave the way I expected. I'll get to it eventually.
$ k explain NetworkPolicy
GROUP: networking.k8s.io
KIND: NetworkPolicy
VERSION: v1
apiVersion is GROUP/VERSION.
In this case the YAML becomes:
apiVersion: networking.k8s.io/v1
Day 11 initContainers
I learned that besides k apply there is k create. It can only create.
k create -f pod.yaml
Running it a second time errors out
$ k create -f pod.yaml
Error from server (AlreadyExists): error when creating "pod.yaml": pods "myapp" already exists
It errors because the pod name already exists. Makes sense.
Changing the pod's metadata.name
made it succeed
$ k create -f pod.yaml
pod/myapp2 created
When I tweaked a few things and tried apply, it complained. Stuff made with create can't easily be apply'd afterwards, which is a pain. I won't use it except for things I really want to pin down.
$ k apply -f pod.yaml
Warning: resource pods/myapp is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
The Pod "myapp" is invalid: spec.initContainers: Forbidden: pod updates may not add or remove containers
k create deploy
is a special command; only a handful of resources can be created without YAML!
serviceaccount
secret
configmap
namespace
deploy
Apparently those are about the only ones k create supports (per the video; in reality there are a few more, e.g. job, cronjob, role).
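Quick examples of the imperative creates (names made up):
k create ns demo
k create sa demo-sa -n demo
k create cm demo-cm --from-literal=key1=value1 -n demo
k create secret generic demo-secret --from-literal=password=pass123 -n demo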
initContainer
I had no idea sh could do
until ...; do ...; done
initContainers:
- name: init
image: busybox
command: ["sh", "-c", "echo init started.; until nslookup myapp.ns11.svc.cluster.local; do date; sleep 1; done; echo init completed."]
k get containers
doesn't exist. You have to look at the containers via pod describe.
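(Not from the video) if I just want the container names out of a pod, a jsonpath query works too:
k get pod myapp -o jsonpath='{.spec.initContainers[*].name} {.spec.containers[*].name}'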
Day 12 DaemonSet
Daemonset
I learned about the DaemonSet resource. It's a controller that makes sure a pod runs on every node (or on nodes with a given label).
$ k get daemonset -n kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kindnet 3 3 3 3 3 kubernetes.io/os=linux 24h
kube-proxy 3 3 3 3 3 kubernetes.io/os=linux 24h
$ k get ds
also lists the daemonsets (ds is the short name)
$ k describe daemonset kube-proxy -n kube-system
Name: kube-proxy
Selector: k8s-app=kube-proxy
Node-Selector: kubernetes.io/os=linux
Labels: k8s-app=kube-proxy
Annotations: deprecated.daemonset.template.generation: 1
Desired Number of Nodes Scheduled: 3
Current Number of Nodes Scheduled: 3
Number of Nodes Scheduled with Up-to-date Pods: 3
Number of Nodes Scheduled with Available Pods: 3
Number of Nodes Misscheduled: 0
Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: k8s-app=kube-proxy
Service Account: kube-proxy
Containers:
kube-proxy:
Image: registry.k8s.io/kube-proxy:v1.29.2
Port: <none>
Host Port: <none>
Command:
/usr/local/bin/kube-proxy
--config=/var/lib/kube-proxy/config.conf
--hostname-override=$(NODE_NAME)
Environment:
NODE_NAME: (v1:spec.nodeName)
Mounts:
/lib/modules from lib-modules (ro)
/run/xtables.lock from xtables-lock (rw)
/var/lib/kube-proxy from kube-proxy (rw)
Volumes:
kube-proxy:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: kube-proxy
Optional: false
xtables-lock:
Type: HostPath (bare host directory volume)
Path: /run/xtables.lock
HostPathType: FileOrCreate
lib-modules:
Type: HostPath (bare host directory volume)
Path: /lib/modules
HostPathType:
Priority Class Name: system-node-critical
Events: <none>
The -A option searches across all namespaces
$ k get ds -A
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system kindnet 3 3 3 3 3 kubernetes.io/os=linux 24h
kube-system kube-proxy 3 3 3 3 3 kubernetes.io/os=linux 24h
$ k get pods -A
pod can be abbreviated to po
k get po
Controlled By ReplicaSet
A plain k create deploy
gives you Controlled By: ReplicaSet
$ k describe po nginx-7854ff8877-mjs9m
Name: nginx-7854ff8877-mjs9m
...
Controlled By: ReplicaSet/nginx-7854ff8877
Controlled by DaemonSet
This creates a DaemonSet. The difference from deploy/replicaset is that there is no replicas field; one pod per worker node is guaranteed automatically. *No pod is placed on the control plane.
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: ds
spec:
template:
metadata:
labels:
env: demo # <-- this and
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
selector:
matchLabels:
env: demo # <-- this must match, or you get an error
Pods get created, just like with a deploy.
$ k get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ds-pdvfw 0/1 ContainerCreating 0 2s <none> kind-worker2 <none> <none>
ds-sv9fg 0/1 ContainerCreating 0 2s <none> kind-worker <none> <none>
It became
Controlled By: DaemonSet/ds
$ k describe pod/ds-pdvfw
Name: ds-pdvfw
...
Controlled By: DaemonSet/ds
Day 13 scheduler
Exercise: stop the scheduler and pods can no longer be scheduled!
The control plane (the kubelet, to be precise) watches the files under /etc/kubernetes/manifests.
The moment you remove kube-scheduler.yaml,
the kube-scheduler-kind-control-plane
pod disappears!
While watching:
watch -n1 "kubectl get po -n kube-system "
Every 1.0s: kubectl get po -n kube-system mMHYTXJ314T: Sun Nov 10 16:41:27 2024
NAME READY STATUS RESTARTS AGE
coredns-76f75df574-rt9xx 1/1 Running 1 (3d23h ago) 4d17h
coredns-76f75df574-vxtpq 1/1 Running 1 (3d23h ago) 4d17h
etcd-kind-control-plane 1/1 Running 1 (3d23h ago) 4d17h
kindnet-bjpx8 1/1 Running 1 (3d23h ago) 4d17h
kindnet-m6g6m 1/1 Running 1 (3d23h ago) 4d17h
kindnet-ww6vb 1/1 Running 1 (3d23h ago) 4d17h
kube-apiserver-kind-control-plane 1/1 Running 1 (3d23h ago) 4d17h
kube-controller-manager-kind-control-plane 1/1 Running 1 (3d23h ago) 4d17h
kube-proxy-cbx5k 1/1 Running 1 (3d23h ago) 4d17h
kube-proxy-hgrq8 1/1 Running 1 (3d23h ago) 4d17h
kube-proxy-hnxcv 1/1 Running 1 (3d23h ago) 4d17h
kube-scheduler-kind-control-plane 1/1 Running 1 (3d23h ago) 4d17h
Now remove the yaml
root@kind-control-plane:/etc/kubernetes/manifests# ll
total 28
drwxr-xr-x 1 root root 4096 Nov 5 13:41 .
drwxr-xr-x 1 root root 4096 Nov 5 13:41 ..
-rw------- 1 root root 2406 Nov 5 13:41 etcd.yaml
-rw------- 1 root root 3896 Nov 5 13:41 kube-apiserver.yaml
-rw------- 1 root root 3428 Nov 5 13:41 kube-controller-manager.yaml
-rw------- 1 root root 1463 Nov 5 13:41 kube-scheduler.yaml
root@kind-control-plane:/etc/kubernetes/manifests# mv kube-scheduler.yaml /tmp
The scheduler at the bottom is gone!
NAME READY STATUS RESTARTS AGE
coredns-76f75df574-rt9xx 1/1 Running 1 (3d23h ago) 4d17h
coredns-76f75df574-vxtpq 1/1 Running 1 (3d23h ago) 4d17h
etcd-kind-control-plane 1/1 Running 1 (3d23h ago) 4d17h
kindnet-bjpx8 1/1 Running 1 (3d23h ago) 4d17h
kindnet-m6g6m 1/1 Running 1 (3d23h ago) 4d17h
kindnet-ww6vb 1/1 Running 1 (3d23h ago) 4d17h
kube-apiserver-kind-control-plane 1/1 Running 1 (3d23h ago) 4d17h
kube-controller-manager-kind-control-plane 1/1 Running 1 (3d23h ago) 4d17h
kube-proxy-cbx5k 1/1 Running 1 (3d23h ago) 4d17h
kube-proxy-hgrq8 1/1 Running 1 (3d23h ago) 4d17h
kube-proxy-hnxcv 1/1 Running 1 (3d23h ago) 4d17h
Now create a pod
$ k run no-sche-nginx --image nginx
The pod got created, but it's Pending...
$ k get pods
NAME READY STATUS RESTARTS AGE
ng 1/1 Running 0 5m58s
nginx-7854ff8877-4wgsv 1/1 Running 0 3d17h
no-sche-nginx 0/1 Pending 0 12s <-----
Events: <none>
and it stays that way
$ k describe pod no-sche-nginx
Name: no-sche-nginx
Namespace: ns2
Priority: 0
Service Account: default
Node: <none>
Labels: run=no-sche-nginx
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Containers:
no-sche-nginx:
Image: nginx
Port: <none>
Host Port: <none>
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ff2ls (ro)
Volumes:
kube-api-access-ff2ls:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
Put the yaml back
root@kind-control-plane:/etc/kubernetes/manifests# mv /tmp/kube-scheduler.yaml .
The scheduler pod is back!
Every 1.0s: kubectl get po -n kube-system mMHYTXJ314T: Sun Nov 10 16:44:22 2024
NAME READY STATUS RESTARTS AGE
coredns-76f75df574-rt9xx 1/1 Running 1 (3d23h ago) 4d18h
coredns-76f75df574-vxtpq 1/1 Running 1 (3d23h ago) 4d18h
etcd-kind-control-plane 1/1 Running 1 (3d23h ago) 4d18h
kindnet-bjpx8 1/1 Running 1 (3d23h ago) 4d18h
kindnet-m6g6m 1/1 Running 1 (3d23h ago) 4d18h
kindnet-ww6vb 1/1 Running 1 (3d23h ago) 4d18h
kube-apiserver-kind-control-plane 1/1 Running 1 (3d23h ago) 4d18h
kube-controller-manager-kind-control-plane 1/1 Running 1 (3d23h ago) 4d18h
kube-proxy-cbx5k 1/1 Running 1 (3d23h ago) 4d18h
kube-proxy-hgrq8 1/1 Running 1 (3d23h ago) 4d18h
kube-proxy-hnxcv 1/1 Running 1 (3d23h ago) 4d18h
kube-scheduler-kind-control-plane 1/1 Running 0 5s
And the pod goes Running!
$ k get po
NAME READY STATUS RESTARTS AGE
ng 1/1 Running 0 10m
nginx-7854ff8877-4wgsv 1/1 Running 0 3d17h
no-sche-nginx 1/1 Running 0 18s <------
Events showed up!
$ k describe pod no-sche-nginx
Name: no-sche-nginx
Namespace: ns2
Priority: 0
Service Account: default
Node: kind-worker/10.201.0.3
Start Time: Sun, 10 Nov 2024 16:40:15 +0900
Labels: run=no-sche-nginx
Annotations: <none>
Status: Running
IP: 10.244.1.16
IPs:
IP: 10.244.1.16
Containers:
no-sche-nginx:
Container ID: containerd://1a661d8c3619e70c616b25b539dd95b43c0795e740aacf7289552ce0ff1242cf
Image: nginx
Image ID: docker.io/library/nginx@sha256:28402db69fec7c17e179ea87882667f1e054391138f77ffaf0c3eb388efc3ffb
Port: <none>
Host Port: <none>
State: Running
Started: Sun, 10 Nov 2024 16:40:21 +0900
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ff2ls (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-ff2ls:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 13s default-scheduler Successfully assigned ns2/no-sche-nginx to kind-worker
Normal Pulling 13s kubelet Pulling image "nginx"
Normal Pulled 7s kubelet Successfully pulled image "nginx" in 5.587s (5.587s including waiting)
Normal Created 7s kubelet Created container no-sche-nginx
Normal Started 7s kubelet Started container no-sche-nginx
So the control plane runs by moving YAML files in and out...
The process list mirrors the contents of a folder. What an odd management scheme.
Pinning a node
Find the worker nodes
$ k get nodes
NAME STATUS ROLES AGE VERSION
kind-control-plane Ready control-plane 4d18h v1.29.2
kind-worker Ready <none> 4d18h v1.29.2
kind-worker2 Ready <none> 4d18h v1.29.2
Create a template
$ k run nginx --image nginx -o yaml --dry-run=client > pod.yaml
pod yaml with nodeName
apiVersion: v1
kind: Pod
metadata:
labels:
run: nginx
name: nginx
spec:
containers:
- image: nginx
name: nginx
nodeName: kind-worker <------
apply
$ k apply -f pod.yaml
pod/nginx created
$ k describe po nginx | grep Node
Node: kind-worker/10.201.0.3 <-----
With nodeName set, you can deploy even when the scheduler is gone
Stop the scheduler beforehand
root@kind-control-plane:/etc/kubernetes/manifests# mv kube-scheduler.yaml /tmp
$ k get po -n kube-system | grep sche
$
Deploy the pod that has nodeName set
$ k apply -f pod.yaml
pod/nginx created
$ k get pods
NAME READY STATUS RESTARTS AGE
nginx 0/1 ContainerCreating 0 2s
$ k get pods
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 7s <---- it's running!
Even without the scheduler, a pod can be deployed if you use nodeName!
k get pods --selector=
how to use it
Prep: create some pods
$ k run apache --image httpd
pod/apache created
$ k run httpd --image httpd --labels="app=httpd,tier=one"
pod/httpd created
all pods
$ k get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
apache 1/1 Running 0 28s run=apache
httpd 0/1 ContainerCreating 0 7s app=httpd,tier=one
nginx 1/1 Running 0 9m11s run=nginx
Search with --selector="key=value"
This looks handy for debugging when a service isn't hitting any pods.
$ k get pods --selector="run=apache"
NAME READY STATUS RESTARTS AGE
apache 1/1 Running 0 3m20s
--selector key=val
(without the quotes) also works
$ k get pods --selector tier=one
NAME READY STATUS RESTARTS AGE
httpd 1/1 Running 0 4m41s
Annotation
Annotations are similar to labels but a bit different.
last-applied-configuration
records the last applied configuration.
$ k edit po nginx
apiVersion: v1
kind: Pod
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"run":"nginx"},"name":"nginx","namespace":"ns13"},"spec":{"containers":[{"image":"nginx","name":"nginx"}],"nodeName":"kind-worker"}}
Day 14 Taints and Tolerations
First of all, what even is a taint?
How the taint concept relates to "contamination":
A constraint on the node: a taint is a mechanism for attaching a constraint to a node. The constraint can forbid certain pods from running on that node, or make the node less preferred for them.
The contamination metaphor: think of the node as contaminated by a specific "pollutant"; only certain kinds of pods can (or want to) live on the contaminated node.
Still doesn't quite click for me.
A taint goes on a node.
A toleration goes on a pod.
The node is the one holding the restriction (taint).
The pod (toleration) is the one that gets to slip past that restriction.
$ k get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kind-control-plane Ready control-plane 6d v1.29.2 10.201.0.2 <none> Debian GNU/Linux 12 (bookworm) 6.6.41-0-virt containerd://1.7.13
kind-worker Ready <none> 6d v1.29.2 10.201.0.3 <none> Debian GNU/Linux 12 (bookworm) 6.6.41-0-virt containerd://1.7.13
kind-worker2 Ready <none> 6d v1.29.2 10.201.0.4 <none> Debian GNU/Linux 12 (bookworm) 6.6.41-0-virt containerd://1.7.13
$ k taint node kind-worker gpu=true:NoSchedule
node/kind-worker tainted
$ k taint node kind-worker2 gpu=true:NoSchedule
node/kind-worker2 tainted
Key=value format: a taint is written as key=value:effect.
key: an arbitrary string describing an attribute of the node.
value: optional extra detail about the key; not required.
effect: what the taint does; one of NoSchedule, PreferNoSchedule, or NoExecute.
$ k describe node kind-worker2 | grep -i taint
Taints: gpu=true:NoSchedule
$ k run nginx --image nginx
pod/nginx created
$ k get po
NAME READY STATUS RESTARTS AGE
nginx 0/1 Pending 0 7s
$ k describe pod nginx
...
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 16s default-scheduler 0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 node(s) had untolerated taint {gpu: true}. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
In plain words:
Of the 3 nodes, none are available.
- 1 node has the untolerated taint node-role.kubernetes.io/control-plane: it plays the control-plane role, so ordinary pods are not scheduled there.
- 2 nodes have the untolerated taint gpu=true: pods without a matching toleration are not scheduled on them.
- Preemption (evicting lower-priority pods to free resources) would not help scheduling here.
It might help to read "taint" as "restriction".
untolerated: the pod can't tolerate the taint = can't clear the restriction = can't be scheduled there.
tolerated taint: the pod can tolerate the taint = clears the restriction = can be scheduled there.
As the message says, the control plane really does carry that taint.
$ k describe node kind-control-plane | grep Taint
Taints: node-role.kubernetes.io/control-plane:NoSchedule
The taint's effect decides how the taint influences pod scheduling. There are three main effects:
NoSchedule
Meaning: pods that do not have a toleration for this taint are never scheduled onto the node.
Example: gpu=true:NoSchedule
Pods without the corresponding toleration won't be scheduled on this node, so it effectively becomes a GPU-only node.
PreferNoSchedule
Meaning: pods without a toleration are avoided as long as there are other options, but may still land here if nothing else is available.
Example: old-node:PreferNoSchedule
Marks an old node: as long as newer nodes have room, pods go there; if they're all full, pods may still be scheduled on the old node.
NoExecute
Meaning: pods already running on the node are evicted if they don't tolerate the taint.
Example: evict:NoExecute
Pods without the toleration are removed from the node. Used when taking a node out of service temporarily, e.g. for maintenance.
Choosing an effect (command sketches below):
NoSchedule: when you want to dedicate a node to a specific purpose.
PreferNoSchedule: when you want to spread load or discourage scheduling on old nodes.
NoExecute: for node maintenance, or when you want certain pods forcibly evicted.
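Quick command sketches for each effect (key names made up; remove with a trailing dash as usual):
k taint node kind-worker dedicated=gpu:NoSchedule
k taint node kind-worker2 old-hw=true:PreferNoSchedule
k taint node kind-worker maintenance=true:NoExecute
k taint node kind-worker dedicated-   # removes every taint with key "dedicated"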
apiVersion: v1
kind: Pod
metadata:
name: apache
spec:
containers:
- image: httpd
name: apache
tolerations:
- key: gpu
value: "true"
operator: Equal
effect: NoSchedule
$ k apply -f taint-pod.yaml
pod/apache created
$ k get po
NAME READY STATUS RESTARTS AGE
apache 1/1 Running 0 6s
nginx 0/1 Pending 0 13m
Delete taint (Untaint)
Right now both pods are stuck Pending because there's no node whose taints they can tolerate
$ k get po
NAME READY STATUS RESTARTS AGE
apache 0/1 Pending 0 25s
nginx 0/1 Pending 0 15m
Appending a dash removes the taint (untaint)
$ k taint node kind-worker2 gpu=true:NoSchedule-
node/kind-worker2 untainted
The pods started running
$ k get po
NAME READY STATUS RESTARTS AGE
apache 1/1 Running 0 4m17s
nginx 1/1 Running 0 19m
An event log "Successfully assigned" from default-scheduler appeared
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 5m31s default-scheduler 0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 node(s) had untolerated taint {gpu: true}. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
Normal Scheduled 87s default-scheduler Successfully assigned ns14/apache to kind-worker2
Normal Pulling 86s kubelet Pulling image "httpd"
Normal Pulled 75s kubelet Successfully pulled image "httpd" in 5.506s (10.859s including waiting)
Normal Created 75s kubelet Created container apache
Normal Started 75s kubelet Started container apache
Feels useful for cases like: the existing worker nodes have their ACLs opened, but you don't want pods landing on a newly added worker node yet.
nodeSelector
works against labels; it can't do anything about taints. Taints are stronger than labels.
Right now there's a worker node tainted gpu=true, but with only a nodeSelector the pod stays Pending
$ k taint node kind-worker gpu=true:NoSchedule
node/kind-worker tainted
apiVersion: v1
kind: Pod
metadata:
name: apache
spec:
containers:
- image: httpd
name: apache
nodeSelector: <-----
gpu: "true" <-----
$ k apply -f taint-pod.yaml
pod/apache created
$ k get po
NAME READY STATUS RESTARTS AGE
apache 0/1 Pending 0 3s
$ k describe pod apache
...
Node-Selectors: gpu=true
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 69s default-scheduler 0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
Adding the label still leaves it Pending (that node is tainted).
$ k label node kind-worker gpu=true
node/kind-worker labeled
$ k get pod
NAME READY STATUS RESTARTS AGE
apache 0/1 Pending 0 3m55s
Add the label to a node without the taint and the pod with the nodeSelector starts running
$ k label node kind-worker2 gpu=true
node/kind-worker2 labeled
$ k get po
NAME READY STATUS RESTARTS AGE
apache 1/1 Running 0 5m8s
Even with nodeSelector, does the event error message talk about taints?
Start a pod using nodeSelector:
nodeSelector:
gpu: "false"
No such node exists, so it's Pending
$ k get po
NAME READY STATUS RESTARTS AGE
apache 0/1 Pending 0 60s
At this point the event log does mention "taint".
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 9s default-scheduler 0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
Looking more carefully, the "untolerated taint" part is about the control plane; it always appears whenever scheduling fails.
And looking closer,
2 node(s) didn't match Pod's node affinity/selector.
the worker nodes correctly report that the selector doesn't match.
Overwriting the label made it start.
$ k label node kind-worker gpu=false --overwrite
node/kind-worker labeled
$ k get po
NAME READY STATUS RESTARTS AGE
apache 0/1 ContainerCreating 0 3m14s
How to delete a label
Append a dash to the key to remove it
$ k label node kind-worker gpu-
node/kind-worker unlabeled
Even after removing the label, the already-scheduled pod keeps running. So labels/nodeSelector aren't continuously re-evaluated.
$ k get po
NAME READY STATUS RESTARTS AGE
apache 1/1 Running 0 5m2s
Supposedly it gets re-checked occasionally, but nothing changed even after waiting a while.
noexecute:NoExecute
Maintenance node: put a noexecute:NoExecute taint on a node under maintenance and the pods running on it are evicted. Like people being evacuated from a contaminated area.
$ k taint node kind-worker noexecute:NoExecute
node/kind-worker tainted
The pod I had started with k run vanished instantly. It didn't move to another node; it just disappeared.
A Deployment, on the other hand, properly moved its pod to another node. (Strictly speaking: the existing pod is force-terminated, then a new one is created on another node, so the service is briefly down.)
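Related (not covered in the video): a pod can tolerate a NoExecute taint for a limited time with tolerationSeconds, after which it gets evicted. A fragment sketch:
tolerations:
- key: noexecute
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 60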
Day 15 Node Affinity
Let's control which node a pod lands on with nodeAffinity.
There are two options, required or preferred:
- requiredDuringSchedulingIgnoredDuringExecution
- preferredDuringSchedulingIgnoredDuringExecution
What a mouthful. There is no non-Ignored... variant: it only matters at scheduling time. A running pod is unaffected even if the node's label disappears.
requiredDuringSchedulingIgnoredDuringExecution
This one is mandatory: if no matching label exists, the pod goes Pending
apiVersion: v1
kind: Pod
metadata:
labels:
run: nginx
name: nginx
spec:
containers:
- image: nginx
name: nginx
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: disktype
operator: In
values:
- ssd
$ k apply -f affinity.yaml
pod/nginx created
$ k get po
NAME READY STATUS RESTARTS AGE
nginx 0/1 Pending 0 5s
Warning FailedScheduling 68s default-scheduler 0/3 nodes are available:
1 node(s) didn't match Pod's node affinity/selector,
1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: },
1 node(s) had untolerated taint {noexecute: }.
preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
Put the label on a node. So affinity is really just about labels.
$ k label node kind-worker disktype=ssd
node/kind-worker labeled
It started
Warning FailedScheduling 2m42s default-scheduler 0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
Normal Scheduled 61s default-scheduler Successfully assigned day15/nginx to kind-worker
Normal Pulling 61s kubelet Pulling image "nginx"
Normal Pulled 5s kubelet Successfully pulled image "nginx" in 55.525s (55.525s including waiting)
Normal Created 5s kubelet Created container nginx
Normal Started 5s kubelet Started container nginx
$ k get po
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 7m54s
Try hdd instead
apiVersion: v1
kind: Pod
metadata:
labels:
run: nginx
name: nginx
spec:
containers:
- image: nginx
name: nginx
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: disktype
operator: In
values:
- hdd
It errored and couldn't be applied
$ k apply -f affinity.yaml
The Pod "nginx" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`,`spec.initContainers[*].image`,`spec.activeDeadlineSeconds`,`spec.tolerations` (only additions to existing tolerations),`spec.terminationGracePeriodSeconds` (allow it to be set to 1 if it was previously negative)
core.PodSpec{
... // 15 identical fields
Subdomain: "",
SetHostnameAsFQDN: nil,
Affinity: &core.Affinity{
NodeAffinity: &core.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: &core.NodeSelector{
NodeSelectorTerms: []core.NodeSelectorTerm{
{
MatchExpressions: []core.NodeSelectorRequirement{
{
Key: "disktype",
Operator: "In",
- Values: []string{"ssd"},
+ Values: []string{"hdd"},
},
},
MatchFields: nil,
},
},
},
PreferredDuringSchedulingIgnoredDuringExecution: nil,
},
PodAffinity: nil,
PodAntiAffinity: nil,
},
SchedulerName: "default-scheduler",
Tolerations: {{Key: "node.kubernetes.io/not-ready", Operator: "Exists", Effect: "NoExecute", TolerationSeconds: &300}, {Key: "node.kubernetes.io/unreachable", Operator: "Exists", Effect: "NoExecute", TolerationSeconds: &300}},
... // 13 identical fields
}
The pod keeps running on the ssd node
$ k get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 0 10m 10.244.1.37 kind-worker <none> <none>
preferredDuringSchedulingIgnoredDuringExecution
This one is best-effort, so the pod runs even if no matching node exists
apiVersion: v1
kind: Pod
metadata:
labels:
run: nginx
name: nginx2
spec:
containers:
- image: nginx
name: nginx
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:
matchExpressions:
- key: disktype
operator: In
values:
- hdd
No node anywhere has disktype=hdd, yet it's Running.
$ k get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx2 1/1 Running 0 8s 10.244.2.35 kind-worker2 <none> <none>
Removing the ssd label from the node, or adding an hdd label to the node the hdd pod "should" be on, changes nothing for the pods that are already running.
$ k label node kind-worker disktype-
node/kind-worker unlabeled
$ k label node kind-worker disktype=hdd
node/kind-worker labeled
$ k get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 0 17m 10.244.1.37 kind-worker <none> <none>
nginx2 1/1 Running 0 4m36s 10.244.2.35 kind-worker2 <none> <none>
operator: Exists
With operator: Exists, it matches even if the label value is blank; only the key has to exist.
apiVersion: v1
kind: Pod
metadata:
labels:
run: nginx
name: nginx3
spec:
containers:
- image: nginx
name: nginx
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: disktype
operator: Exists
$ k get po
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 20m
nginx2 1/1 Running 0 8m
nginx3 0/1 Pending 0 2s
$ k label node kind-worker2 disktype=
node/kind-worker2 labeled
$ k get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 0 22m 10.244.1.37 kind-worker <none> <none>
nginx2 1/1 Running 0 9m19s 10.244.2.35 kind-worker2 <none> <none>
nginx3 1/1 Running 0 81s 10.244.2.36 kind-worker2 <none> <none>
Day 16 Requests and Limits
requests declare a container's minimum resource needs and affect scheduling and QoS.
limits cap a container's maximum resource usage and prevent overconsumption.
k top
Once metrics-server is installed you can run k top. Super handy.
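How I'd install metrics-server on kind (I believe this is the standard manifest URL; on kind you also need --kubelet-insecure-tls because the kubelet certs are self-signed):
k apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
k patch deploy metrics-server -n kube-system --type=json -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls"}]'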
stress pod
Create a polinux/stress
pod by applying the yaml below
$ k get po -n kube-system | grep metri
metrics-server-67fc4df55-bkn8z 1/1 Running 0 49s
node usage
$ k top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
kind-control-plane 86m 4% 876Mi 14%
kind-worker 20m 1% 337Mi 5%
kind-worker2 17m 0% 373Mi 6%
stress test
$ k create namespace mem-example
namespace/mem-example created
apiVersion: v1
kind: Pod
metadata:
name: stress
spec:
containers:
- image: polinux/stress
name: stress
resources:
requests:
memory: "100Mi"
limits:
memory: "200Mi"
command: ["stress"]
args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]
$ k apply -f stress.yaml -n mem-example
pod/stress created
$ k logs pod/stress -n mem-example
stress: info: [1] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
$ k top pod stress
NAME CPU(cores) MEMORY(bytes)
stress 6m 153Mi
A pod that gets OOMKilled
Make another one. This time it uses more memory than its limit.
apiVersion: v1
kind: Pod
metadata:
name: stress2
spec:
containers:
- image: polinux/stress
name: stress
resources:
requests:
memory: "50Mi"
limits:
memory: "100Mi"
command: ["stress"]
args: ["--vm", "1", "--vm-bytes", "250M", "--vm-hang", "1"]
$ k apply -f stress2.yaml
pod/stress2 created
$ k get po
NAME READY STATUS RESTARTS AGE
stress 1/1 Running 0 3m4s
stress2 0/1 OOMKilled 0 11s
I didn't know there was a Reason field saying why the pod died
$ k describe pod stress2
State: Terminated
Reason: OOMKilled
Exit Code: 137
Last State: Terminated
Reason: OOMKilled
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 65s default-scheduler Successfully assigned mem-example/stress2 to kind-worker2
Normal Pulled 34s kubelet Successfully pulled image "polinux/stress" in 5.513s (5.513s including waiting)
Normal Pulling 7s (x4 over 65s) kubelet Pulling image "polinux/stress"
Normal Created 2s (x4 over 59s) kubelet Created container stress
Normal Pulled 2s kubelet Successfully pulled image "polinux/stress" in 5.465s (5.465s including waiting)
Normal Started 1s (x4 over 59s) kubelet Started container stress
Warning BackOff 0s (x5 over 53s) kubelet Back-off restarting failed container stress in pod stress2_mem-example(664c4f46-970f-4ac4-9269-ade4d437bb99)
Request 1T of memory, which definitely doesn't exist
apiVersion: v1
kind: Pod
metadata:
name: stress3
spec:
containers:
- image: polinux/stress
name: stress
resources:
requests:
memory: "1000Gi"
limits:
memory: "1000Gi"
command: ["stress"]
args: ["--vm", "1", "--vm-bytes", "1000G", "--vm-hang", "1"]
$ k get po
NAME READY STATUS RESTARTS AGE
stress 1/1 Running 0 8m47s
stress3 0/1 Pending 0 14s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 32s default-scheduler 0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 Insufficient memory. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
Warning FailedScheduling 20s default-scheduler 0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 Insufficient memory. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
You can see "2 Insufficient memory". Nice.
Upgrade Kind Kubernetes version
Apparently the CKA fixes which k8s version is used:
Software Version: Kubernetes v1.31
https://training.linuxfoundation.org/certification/certified-kubernetes-administrator-cka/
That's really recent. Wow.
Let's recreate the kind cluster.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
image: kindest/node:v1.31.2
extraPortMappings:
- containerPort: 30001
hostPort: 30001
- role: worker
image: kindest/node:v1.31.2
- role: worker
image: kindest/node:v1.31.2
- role: worker
image: kindest/node:v1.31.2
$ kind delete cluster
$ kind create cluster --config kind.yaml
$ k get nodes
NAME STATUS ROLES AGE VERSION
kind-control-plane NotReady control-plane 21s v1.31.2
kind-worker NotReady <none> 8s v1.31.2
kind-worker2 NotReady <none> 9s v1.31.2
kind-worker3 NotReady <none> 8s v1.31.2
So easy... kind is great
Day 17/40 - Kubernetes Autoscaling Explained | HPA vs VPA
Which autoscaler you use depends on what and how you scale:
- pods, horizontal autoscaling: HPA
- pods, vertical autoscaling: VPA
- nodes, horizontal autoscaling: Cluster Autoscaler
- nodes, vertical autoscaling: Node AutoProvisioning
Apparently KEDA is the well-known one here
KEDA is a Kubernetes-based Event Driven Autoscaler. With KEDA, you can drive the scaling of any container in Kubernetes based on the number of events needing to be processed.
https://keda.sh/
HorizontalPodAutoscaler
Applying deploy.yaml starts registry.k8s.io/hpa-example
...except on my Mac it didn't respond as a web server. No idea why. If curl doesn't return OK, you've hit the same symptom.
In the end I used php:apache
and put this inside the pod:
echo '<?php
$x = 0.0001;
for ($i = 0; $i <= 10000000; $i++) {
$x += sqrt($x);
}
echo "OK!";
?>
' > index.php
chmod 777 index.php
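For the record, a sketch of the Deployment and Service I assume behind this (php:apache plus the index.php above); the cpu request matters because HPA's --cpu-percent is a percentage of the request:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-apache
spec:
  replicas: 1
  selector:
    matchLabels:
      run: php-apache
  template:
    metadata:
      labels:
        run: php-apache
    spec:
      containers:
      - name: php-apache
        image: php:apache
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 200m
          limits:
            cpu: 500m
---
apiVersion: v1
kind: Service
metadata:
  name: php-apache
spec:
  selector:
    run: php-apache
  ports:
  - port: 80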
Try autoscaling
k autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
A HorizontalPodAutoscaler gets created
$ k autoscale deploy php-apache --cpu-percent=50 --min=1 --max=10
horizontalpodautoscaler.autoscaling/php-apache autoscaled
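The same HPA written declaratively would be roughly this (autoscaling/v2):
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50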
Generate load. The command is from here:
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
kubectl run -i --tty load-generator --rm --image=busybox:1.28 --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://php-apache; done"
It scaled up in no time.
Adding --watch
is handy!
$ k get hpa --watch
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache cpu: <unknown>/50% 1 10 0 4s
php-apache Deployment/php-apache cpu: 0%/50% 1 10 1 15s
php-apache Deployment/php-apache cpu: 55%/50% 1 10 1 30s
php-apache Deployment/php-apache cpu: 55%/50% 1 10 2 45s
php-apache Deployment/php-apache cpu: 248%/50% 1 10 2 60s
php-apache Deployment/php-apache cpu: 122%/50% 1 10 4 75s
php-apache Deployment/php-apache cpu: 58%/50% 1 10 5 90s
php-apache Deployment/php-apache cpu: 55%/50% 1 10 5 105s
php-apache Deployment/php-apache cpu: 57%/50% 1 10 5 2m
php-apache Deployment/php-apache cpu: 47%/50% 1 10 6 2m15s
php-apache Deployment/php-apache cpu: 54%/50% 1 10 6 2m30s
php-apache Deployment/php-apache cpu: 44%/50% 1 10 6 2m45s
php-apache Deployment/php-apache cpu: 46%/50% 1 10 6 3m
php-apache Deployment/php-apache cpu: 20%/50% 1 10 6 3m15s
php-apache Deployment/php-apache cpu: 1%/50% 1 10 6 3m30s
php-apache Deployment/php-apache cpu: 0%/50% 1 10 6 3m45s
After stopping the load and dropping to 0%, the replicas don't go down... or so I thought;
five minutes later they did!!
php-apache Deployment/php-apache cpu: 0%/50% 1 10 6 8m
php-apache Deployment/php-apache cpu: 0%/50% 1 10 3 8m15s
php-apache Deployment/php-apache cpu: 0%/50% 1 10 1 8m30s
Amazing... that makes sense now
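The roughly-five-minute delay is the scale-down stabilization window (300s by default). It can be tuned in the HPA spec, e.g.:
spec:
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 60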
Day 18/40 - Kubernetes Health Probes Explained | Liveness vs Readiness Probes
startup: slow/legacy apps
readiness: whether app is ready
liveness: restart if fails
I just read through these samples
liveness-http and readiness-http
apiVersion: v1
kind: Pod
metadata:
name: hello
spec:
containers:
- name: liveness
image: registry.k8s.io/e2e-test-images/agnhost:2.40
args:
- liveness
livenessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 3
periodSeconds: 3
readinessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 15
periodSeconds: 10
liveness command
apiVersion: v1
kind: Pod
metadata:
labels:
test: liveness
name: liveness-exec
spec:
containers:
- name: liveness
image: registry.k8s.io/busybox
args:
- /bin/sh
- -c
- touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600
livenessProbe:
exec:
command:
- cat
- /tmp/healthy
initialDelaySeconds: 5
periodSeconds: 5
liveness-tcp
apiVersion: v1
kind: Pod
metadata:
name: tcp-pod
labels:
app: tcp-pod
spec:
containers:
- name: goproxy
image: registry.k8s.io/goproxy:0.1
ports:
- containerPort: 8080
livenessProbe:
tcpSocket:
port: 3000
initialDelaySeconds: 10
periodSeconds: 5
liveness exec
So you can use a shell command in a livenessProbe. Huh. Seems useful for things like checking whether a log file is still growing.
apiVersion: v1
kind: Pod
metadata:
labels:
test: liveness
name: liveness-exec
spec:
containers:
- name: liveness
image: registry.k8s.io/busybox
args:
- /bin/sh
- -c
- touch /tmp/healthy; sleep 10; rm -f /tmp/healthy; sleep 10
livenessProbe:
exec:
command:
- cat
- /tmp/healthy
initialDelaySeconds: 5
periodSeconds: 1
It really does restart periodically.
$ k get po -o wide --watch
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
liveness-exec 1/1 Running 0 55s 10.244.2.15 kind-worker3 <none> <none>
liveness-exec 1/1 Running 1 (11s ago) 66s 10.244.2.15 kind-worker3 <none> <none>
liveness-exec 1/1 Running 2 (11s ago) 2m 10.244.2.15 kind-worker3 <none> <none>
liveness-exec 1/1 Running 3 (11s ago) 2m54s 10.244.2.15 kind-worker3 <none> <none>
liveness-exec 1/1 Running 4 (12s ago) 3m49s 10.244.2.15 kind-worker3 <none> <none>
liveness-exec 1/1 Running 5 (12s ago) 4m43s 10.244.2.15 kind-worker3 <none> <none>
liveness-exec 0/1 CrashLoopBackOff 5 (0s ago) 5m25s 10.244.2.15 kind-worker3 <none> <none>
liveness-exec 1/1 Running 6 (93s ago) 6m58s 10.244.2.15 kind-worker3 <none> <none>
In describe, Unhealthy and Killing followed by Pulling are the signs of a recreation (restart).
So in k8s a "restart" means recreating the container. Gemini describes it as: stop the container, pull the image, create a new container, start the container.
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m12s default-scheduler Successfully assigned day18/liveness-exec to kind-worker3
Normal Pulled 3m kubelet Successfully pulled image "registry.k8s.io/busybox" in 11.132s (11.132s including waiting). Image
size: 1144547 bytes.
Normal Pulled 2m6s kubelet Successfully pulled image "registry.k8s.io/busybox" in 11.029s (11.029s including waiting). Image
size: 1144547 bytes.
Normal Created 72s (x3 over 3m) kubelet Created container liveness
Normal Started 72s (x3 over 3m) kubelet Started container liveness
Normal Pulled 72s kubelet Successfully pulled image "registry.k8s.io/busybox" in 11.125s (11.125s including waiting). Image
size: 1144547 bytes.
Warning Unhealthy 59s (x9 over 2m49s) kubelet Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
Normal Killing 59s (x3 over 2m47s) kubelet Container liveness failed liveness probe, will be restarted
Normal Pulling 29s (x4 over 3m11s) kubelet Pulling image "registry.k8s.io/busybox"
Day 19 ConfigMap
deployment + env
Create a deployment
k create deploy deploy --image busybox --dry-run=client -o yaml > day19/deploy.yaml
Hardcode the env and start it
containers:
- image: busybox
name: busybox
command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']
env:
- name: MY_ENV
value: "Hello from env"
k apply -f deploy.yaml
k exec -it deploy-7c66f97755-kb47m -- env | grep MY
MY_ENV=Hello from env
Use a ConfigMap in the deployment
Create the cm
k create cm app-cm --from-literal=cm_name=piyush --dry-run=client -o yaml > cm.yaml
k describe cm app-cm
Once created, add it to the deployment.
It surprised me that env.name becomes the env var name visible inside the pod! That's the one that gets used; so that's why everyone sets env.name to the same string as the key.
containers:
- image: busybox
name: busybox
command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']
env:
- name: config_map_deploy_name <-- the env key that ends up in the pod!!!
valueFrom:
configMapKeyRef:
name: app-cm <---- name of the cm
key: cm_name <---- key name inside the cm
$ k exec -it deploy-5f8567658d-m27c9 -- env | grep config
config_map_deploy_name=piyush
$ k describe pod deploy-5f8567658d-m27c9 | grep -A3 Env
Environment:
MY_ENV: Hello from env
config_map_deploy_name: <set to the key 'cm_name' of config map 'app-cm'> Optional: false
ConfigMap from a file: failed
Create the file
$ cat <<EOT >> env
> AAA=111
> BBB=222
> EOT
A bad ConfigMap
dry-run produces the expanded YAML; nothing imports the file again at apply time
$ k create cm cm-file --from-file=env --dry-run=client -o yaml
apiVersion: v1
data:
env: | <------ this part is the problem
AAA=111
BBB=222
kind: ConfigMap
metadata:
name: cm-file
Creating it like this ends in disappointment
Create it
$ k create cm cm-file --from-file=env
configmap/cm-file created
k get cm cm-file
NAME DATA AGE
cm-file 1 7s
k describe cm cm-file
env:
----
AAA=111
BBB=222
$ k exec -it deploy-b95b8c894-mhvbn -- sh
/ # echo $BBB
/ # echo AAA
AAA
The values are a mess. Looking closer, the whole file got stored as a single value under the key "env"; the A=1 lines aren't parsed into separate keys.
apiVersion: v1
data:
env: |
AAA=111 <---- these should really end up as separate keys, i.e. AAA: 111
BBB=222
It'll probably come up later. Giving up for this time.
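What I'd try next time (my guess, not from the video): --from-env-file parses KEY=VALUE lines into separate keys, and envFrom injects them all into the container:
k create cm cm-envfile --from-env-file=env
and in the deployment:
containers:
- image: busybox
  name: busybox
  envFrom:
  - configMapRef:
      name: cm-envfile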
Day 20
It was about RSA key exchange, which is history at this point
Day 21 CertificateSigningRequest
Let's issue a client cert for a user
ref: https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/
openssl genrsa -out myuser.key 2048
openssl req -new -key myuser.key -out myuser.csr -subj "/CN=myuser"
$ openssl req -text -noout -verify -in myuser.csr
Certificate request self-signature verify OK
Certificate Request:
Data:
Version: 1 (0x0)
Subject: CN=myuser
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:b7:14:61:11:2f:c7:f9:cc:52:8c:29:14:6d:ee:
Exponent: 65537 (0x10001)
Attributes:
(none)
Requested Extensions:
Signature Algorithm: sha256WithRSAEncryption
Signature Value:
5f:0b:6b:8f:ad:a1:d5:c2:7e:26:4b:22:0b:36:76:b4:9f:1e:
Create the base64 of the CSR
$ cat myuser.csr | base64 | tr -d "\n"
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1ZqQ0NBVDRDQVFBd0VURVBNQTBHQTFVRUF3d0diWGwxYzJWeU1JSUJJakFOQmdrcWhraUc5dzBCQVFFRgpBQU9DQVE4QU1JSUJDZ0tDQVFFQXR4UmhFUy9IK2N4U2pDa1ViZTZxTnBKUWoyWUoyYnZpM2VwREM0N1BjOFlkCmlFdk1KYzRCdHVMdXJjL1Mwd2lPN213bytCMHo5T2pjSCs4QUdSV09Qa0x6Z2pGMkFYYzBlSVVPY1lVaG0xNjMKSFlGQituOFlNeEZSOXNudnBQS2tGN1oxUTFBMW5RZ3FUak9nek9FUUhFQk14YUJ0RVZqakZaT0pUTHVDZVZHMQpYMllrTGNVSDkxcEsrTHFVZDIvMGlWMnIvWjlhRVJiT1Rqa2ptUFkyNTJONDZOODF3ZnRyZmJlMDdRb1dyTXRUCmZIY3NNaXMxWEV5Tms1YklpOWJ1ek9MY0tHOGVRcWdIZEZtYlhWbWF3aFdUQXJyY2JsTW03MytiMTJiUTlQZncKL3ZQTndLK0VZL3QraW1vY2piODdKRXNJMXBhK05KcXlHSEE0R3NwK3d3SURBUUFCb0FBd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBRjhMYTQrdG9kWENmaVpMSWdzMmRyU2ZIc3NYa3pBRVo5L21PUTZGODFqakN1ZitFUk5zClFpWXZFTEJWbVJjeXVRa0hKOHpqanpZT1pWNzZRTDAraDFTOFo2WkMzNnJPYkxYTnJUYTRVcTU3NnVGRlBKaWwKbk01R29sN3J4SE9Yak5BdDBGVlFIR3JnQ2p6NDJGVUh1cDh0K0dhZGUwaFZiTnNyOE9BY0RoZld1OHYvWEhUbworMjU5SHE4ZTdhNURPRjl5YkwvNWFHSEthUEV3eGVGQmk1dkZYYU8vamFIejhpWDQ0TFFPRHR0RU1Gd2NVbFh6CldLVWJjNzFraWtlVlhqL3I5UDBDc09IT3dreW9IcHFZNUFacERsbTJPUjIyekE0ZjgrMWdUT1lrbjQ0Y2I5N2kKc1NoaWVpWC90Nmw3WVN4TUdBalgrbjlGM0RYU3NFUmlhOXc9Ci0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo=
Put it into the request field of the YAML
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
name: myuser
spec:
request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1ZqQ0NBVDRDQVFBd0VURVBNQTBHQTFVRUF3d0diWGwxYzJWeU1JSUJJakFOQmdrcWhraUc5dzBCQVFFRgpBQU9DQVE4QU1JSUJDZ0tDQVFFQXR4UmhFUy9IK2N4U2pDa1ViZTZxTnBKUWoyWUoyYnZpM2VwREM0N1BjOFlkCmlFdk1KYzRCdHVMdXJjL1Mwd2lPN213bytCMHo5T2pjSCs4QUdSV09Qa0x6Z2pGMkFYYzBlSVVPY1lVaG0xNjMKSFlGQituOFlNeEZSOXNudnBQS2tGN1oxUTFBMW5RZ3FUak9nek9FUUhFQk14YUJ0RVZqakZaT0pUTHVDZVZHMQpYMllrTGNVSDkxcEsrTHFVZDIvMGlWMnIvWjlhRVJiT1Rqa2ptUFkyNTJONDZOODF3ZnRyZmJlMDdRb1dyTXRUCmZIY3NNaXMxWEV5Tms1YklpOWJ1ek9MY0tHOGVRcWdIZEZtYlhWbWF3aFdUQXJyY2JsTW03MytiMTJiUTlQZncKL3ZQTndLK0VZL3QraW1vY2piODdKRXNJMXBhK05KcXlHSEE0R3NwK3d3SURBUUFCb0FBd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBRjhMYTQrdG9kWENmaVpMSWdzMmRyU2ZIc3NYa3pBRVo5L21PUTZGODFqakN1ZitFUk5zClFpWXZFTEJWbVJjeXVRa0hKOHpqanpZT1pWNzZRTDAraDFTOFo2WkMzNnJPYkxYTnJUYTRVcTU3NnVGRlBKaWwKbk01R29sN3J4SE9Yak5BdDBGVlFIR3JnQ2p6NDJGVUh1cDh0K0dhZGUwaFZiTnNyOE9BY0RoZld1OHYvWEhUbworMjU5SHE4ZTdhNURPRjl5YkwvNWFHSEthUEV3eGVGQmk1dkZYYU8vamFIejhpWDQ0TFFPRHR0RU1Gd2NVbFh6CldLVWJjNzFraWtlVlhqL3I5UDBDc09IT3dreW9IcHFZNUFacERsbTJPUjIyekE0ZjgrMWdUT1lrbjQ0Y2I5N2kKc1NoaWVpWC90Nmw3WVN4TUdBalgrbjlGM0RYU3NFUmlhOXc9Ci0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo=
signerName: kubernetes.io/kube-apiserver-client
expirationSeconds: 86400 # one day
usages:
- client auth
$ k apply -f csr.yaml
certificatesigningrequest.certificates.k8s.io/myuser created
It goes Pending
$ k get csr
NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION
myuser 3s kubernetes.io/kube-apiserver-client kubernetes-admin 24h Pending
$ k describe csr myuser
Name: myuser
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"certificates.k8s.io/v1","kind":"CertificateSigningRequest","metadata":{"annotations":{},"name":"myuser"},"spec":{"expirationSeconds":86400,"request":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1ZqQ0NBVDRDQVFBd0VURVBNQTBHQTFVRUF3d0diWGwxYzJWeU1JSUJJakFOQmdrcWhraUc5dzBCQVFFRgpBQU9DQVE4QU1JSUJDZ0tDQVFFQXR4UmhFUy9IK2N4U2pDa1ViZTZxTnBKUWoyWUoyYnZpM2VwREM0N1BjOFlkCmlFdk1KYzRCdHVMdXJjL1Mwd2lPN213bytCMHo5T2pjSCs4QUdSV09Qa0x6Z2pGMkFYYzBlSVVPY1lVaG0xNjMKSFlGQituOFlNeEZSOXNudnBQS2tGN1oxUTFBMW5RZ3FUak9nek9FUUhFQk14YUJ0RVZqakZaT0pUTHVDZVZHMQpYMllrTGNVSDkxcEsrTHFVZDIvMGlWMnIvWjlhRVJiT1Rqa2ptUFkyNTJONDZOODF3ZnRyZmJlMDdRb1dyTXRUCmZIY3NNaXMxWEV5Tms1YklpOWJ1ek9MY0tHOGVRcWdIZEZtYlhWbWF3aFdUQXJyY2JsTW03MytiMTJiUTlQZncKL3ZQTndLK0VZL3QraW1vY2piODdKRXNJMXBhK05KcXlHSEE0R3NwK3d3SURBUUFCb0FBd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBRjhMYTQrdG9kWENmaVpMSWdzMmRyU2ZIc3NYa3pBRVo5L21PUTZGODFqakN1ZitFUk5zClFpWXZFTEJWbVJjeXVRa0hKOHpqanpZT1pWNzZRTDAraDFTOFo2WkMzNnJPYkxYTnJUYTRVcTU3NnVGRlBKaWwKbk01R29sN3J4SE9Yak5BdDBGVlFIR3JnQ2p6NDJGVUh1cDh0K0dhZGUwaFZiTnNyOE9BY0RoZld1OHYvWEhUbworMjU5SHE4ZTdhNURPRjl5YkwvNWFHSEthUEV3eGVGQmk1dkZYYU8vamFIejhpWDQ0TFFPRHR0RU1Gd2NVbFh6CldLVWJjNzFraWtlVlhqL3I5UDBDc09IT3dreW9IcHFZNUFacERsbTJPUjIyekE0ZjgrMWdUT1lrbjQ0Y2I5N2kKc1NoaWVpWC90Nmw3WVN4TUdBalgrbjlGM0RYU3NFUmlhOXc9Ci0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo=","signerName":"kubernetes.io/kube-apiserver-client","usages":["client auth"]}}
CreationTimestamp: Wed, 20 Nov 2024 23:08:43 +0900
Requesting User: kubernetes-admin
Signer: kubernetes.io/kube-apiserver-client
Requested Duration: 24h
Status: Pending
Subject:
Common Name: myuser
Serial Number:
Events: <none>
Strangely, k get csr is more detailed than describe. In fact, the issued certificate later shows up in the k get csr output.
$ k get csr -o yaml
apiVersion: v1
items:
- apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"certificates.k8s.io/v1","kind":"CertificateSigningRequest","metadata":{"annotations":{},"name":"myuser"},"spec":{"expirationSeconds":86400,"request":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1ZqQ0NBVDRDQVFBd0VURVBNQTBHQTFVRUF3d0diWGwxYzJWeU1JSUJJakFOQmdrcWhraUc5dzBCQVFFRgpBQU9DQVE4QU1JSUJDZ0tDQVFFQXR4UmhFUy9IK2N4U2pDa1ViZTZxTnBKUWoyWUoyYnZpM2VwREM0N1BjOFlkCmlFdk1KYzRCdHVMdXJjL1Mwd2lPN213bytCMHo5T2pjSCs4QUdSV09Qa0x6Z2pGMkFYYzBlSVVPY1lVaG0xNjMKSFlGQituOFlNeEZSOXNudnBQS2tGN1oxUTFBMW5RZ3FUak9nek9FUUhFQk14YUJ0RVZqakZaT0pUTHVDZVZHMQpYMllrTGNVSDkxcEsrTHFVZDIvMGlWMnIvWjlhRVJiT1Rqa2ptUFkyNTJONDZOODF3ZnRyZmJlMDdRb1dyTXRUCmZIY3NNaXMxWEV5Tms1YklpOWJ1ek9MY0tHOGVRcWdIZEZtYlhWbWF3aFdUQXJyY2JsTW03MytiMTJiUTlQZncKL3ZQTndLK0VZL3QraW1vY2piODdKRXNJMXBhK05KcXlHSEE0R3NwK3d3SURBUUFCb0FBd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBRjhMYTQrdG9kWENmaVpMSWdzMmRyU2ZIc3NYa3pBRVo5L21PUTZGODFqakN1ZitFUk5zClFpWXZFTEJWbVJjeXVRa0hKOHpqanpZT1pWNzZRTDAraDFTOFo2WkMzNnJPYkxYTnJUYTRVcTU3NnVGRlBKaWwKbk01R29sN3J4SE9Yak5BdDBGVlFIR3JnQ2p6NDJGVUh1cDh0K0dhZGUwaFZiTnNyOE9BY0RoZld1OHYvWEhUbworMjU5SHE4ZTdhNURPRjl5YkwvNWFHSEthUEV3eGVGQmk1dkZYYU8vamFIejhpWDQ0TFFPRHR0RU1Gd2NVbFh6CldLVWJjNzFraWtlVlhqL3I5UDBDc09IT3dreW9IcHFZNUFacERsbTJPUjIyekE0ZjgrMWdUT1lrbjQ0Y2I5N2kKc1NoaWVpWC90Nmw3WVN4TUdBalgrbjlGM0RYU3NFUmlhOXc9Ci0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo=","signerName":"kubernetes.io/kube-apiserver-client","usages":["client auth"]}}
creationTimestamp: "2024-11-20T14:13:33Z"
name: myuser
resourceVersion: "842806"
uid: a99b199d-38d8-4120-9f3b-7fd2028ed026
spec:
expirationSeconds: 86400
groups:
- kubeadm:cluster-admins
- system:authenticated
request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1ZqQ0NBVDRDQVFBd0VURVBNQTBHQTFVRUF3d0diWGwxYzJWeU1JSUJJakFOQmdrcWhraUc5dzBCQVFFRgpBQU9DQVE4QU1JSUJDZ0tDQVFFQXR4UmhFUy9IK2N4U2pDa1ViZTZxTnBKUWoyWUoyYnZpM2VwREM0N1BjOFlkCmlFdk1KYzRCdHVMdXJjL1Mwd2lPN213bytCMHo5T2pjSCs4QUdSV09Qa0x6Z2pGMkFYYzBlSVVPY1lVaG0xNjMKSFlGQituOFlNeEZSOXNudnBQS2tGN1oxUTFBMW5RZ3FUak9nek9FUUhFQk14YUJ0RVZqakZaT0pUTHVDZVZHMQpYMllrTGNVSDkxcEsrTHFVZDIvMGlWMnIvWjlhRVJiT1Rqa2ptUFkyNTJONDZOODF3ZnRyZmJlMDdRb1dyTXRUCmZIY3NNaXMxWEV5Tms1YklpOWJ1ek9MY0tHOGVRcWdIZEZtYlhWbWF3aFdUQXJyY2JsTW03MytiMTJiUTlQZncKL3ZQTndLK0VZL3QraW1vY2piODdKRXNJMXBhK05KcXlHSEE0R3NwK3d3SURBUUFCb0FBd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBRjhMYTQrdG9kWENmaVpMSWdzMmRyU2ZIc3NYa3pBRVo5L21PUTZGODFqakN1ZitFUk5zClFpWXZFTEJWbVJjeXVRa0hKOHpqanpZT1pWNzZRTDAraDFTOFo2WkMzNnJPYkxYTnJUYTRVcTU3NnVGRlBKaWwKbk01R29sN3J4SE9Yak5BdDBGVlFIR3JnQ2p6NDJGVUh1cDh0K0dhZGUwaFZiTnNyOE9BY0RoZld1OHYvWEhUbworMjU5SHE4ZTdhNURPRjl5YkwvNWFHSEthUEV3eGVGQmk1dkZYYU8vamFIejhpWDQ0TFFPRHR0RU1Gd2NVbFh6CldLVWJjNzFraWtlVlhqL3I5UDBDc09IT3dreW9IcHFZNUFacERsbTJPUjIyekE0ZjgrMWdUT1lrbjQ0Y2I5N2kKc1NoaWVpWC90Nmw3WVN4TUdBalgrbjlGM0RYU3NFUmlhOXc9Ci0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo=
signerName: kubernetes.io/kube-apiserver-client
usages:
- client auth
username: kubernetes-admin
status: {}
kind: List
metadata:
resourceVersion: ""
Approving it issues the certificate
$ k certificate approve myuser
certificatesigningrequest.certificates.k8s.io/myuser approved
$ k get csr
NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION
myuser 2m36s kubernetes.io/kube-apiserver-client kubernetes-admin 24h Approved,Issued
In describe csr only the Status changed, but...
$ k describe csr myuser
Name: myuser
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"certificates.k8s.io/v1","kind":"CertificateSigningRequest","metadata":{"annotations":{},"name":"myuser"},"spec":{"expirationSeconds":86400,"request":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1ZqQ0NBVDRDQVFBd0VURVBNQTBHQTFVRUF3d0diWGwxYzJWeU1JSUJJakFOQmdrcWhraUc5dzBCQVFFRgpBQU9DQVE4QU1JSUJDZ0tDQVFFQXR4UmhFUy9IK2N4U2pDa1ViZTZxTnBKUWoyWUoyYnZpM2VwREM0N1BjOFlkCmlFdk1KYzRCdHVMdXJjL1Mwd2lPN213bytCMHo5T2pjSCs4QUdSV09Qa0x6Z2pGMkFYYzBlSVVPY1lVaG0xNjMKSFlGQituOFlNeEZSOXNudnBQS2tGN1oxUTFBMW5RZ3FUak9nek9FUUhFQk14YUJ0RVZqakZaT0pUTHVDZVZHMQpYMllrTGNVSDkxcEsrTHFVZDIvMGlWMnIvWjlhRVJiT1Rqa2ptUFkyNTJONDZOODF3ZnRyZmJlMDdRb1dyTXRUCmZIY3NNaXMxWEV5Tms1YklpOWJ1ek9MY0tHOGVRcWdIZEZtYlhWbWF3aFdUQXJyY2JsTW03MytiMTJiUTlQZncKL3ZQTndLK0VZL3QraW1vY2piODdKRXNJMXBhK05KcXlHSEE0R3NwK3d3SURBUUFCb0FBd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBRjhMYTQrdG9kWENmaVpMSWdzMmRyU2ZIc3NYa3pBRVo5L21PUTZGODFqakN1ZitFUk5zClFpWXZFTEJWbVJjeXVRa0hKOHpqanpZT1pWNzZRTDAraDFTOFo2WkMzNnJPYkxYTnJUYTRVcTU3NnVGRlBKaWwKbk01R29sN3J4SE9Yak5BdDBGVlFIR3JnQ2p6NDJGVUh1cDh0K0dhZGUwaFZiTnNyOE9BY0RoZld1OHYvWEhUbworMjU5SHE4ZTdhNURPRjl5YkwvNWFHSEthUEV3eGVGQmk1dkZYYU8vamFIejhpWDQ0TFFPRHR0RU1Gd2NVbFh6CldLVWJjNzFraWtlVlhqL3I5UDBDc09IT3dreW9IcHFZNUFacERsbTJPUjIyekE0ZjgrMWdUT1lrbjQ0Y2I5N2kKc1NoaWVpWC90Nmw3WVN4TUdBalgrbjlGM0RYU3NFUmlhOXc9Ci0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo=","signerName":"kubernetes.io/kube-apiserver-client","usages":["client auth"]}}
CreationTimestamp: Wed, 20 Nov 2024 23:08:43 +0900
Requesting User: kubernetes-admin
Signer: kubernetes.io/kube-apiserver-client
Requested Duration: 24h
Status: Approved,Issued
Subject:
Common Name: myuser
Serial Number:
Events: <none>
Looking at k get csr, a certificate:
field has been added!
$ k get csr myuser -o yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"certificates.k8s.io/v1","kind":"CertificateSigningRequest","metadata":{"annotations":{},"name":"myuser"},"spec":{"expirationSeconds":86400,"request":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1ZqQ0NBVDRDQVFBd0VURVBNQTBHQTFVRUF3d0diWGwxYzJWeU1JSUJJakFOQmdrcWhraUc5dzBCQVFFRgpBQU9DQVE4QU1JSUJDZ0tDQVFFQXR4UmhFUy9IK2N4U2pDa1ViZTZxTnBKUWoyWUoyYnZpM2VwREM0N1BjOFlkCmlFdk1KYzRCdHVMdXJjL1Mwd2lPN213bytCMHo5T2pjSCs4QUdSV09Qa0x6Z2pGMkFYYzBlSVVPY1lVaG0xNjMKSFlGQituOFlNeEZSOXNudnBQS2tGN1oxUTFBMW5RZ3FUak9nek9FUUhFQk14YUJ0RVZqakZaT0pUTHVDZVZHMQpYMllrTGNVSDkxcEsrTHFVZDIvMGlWMnIvWjlhRVJiT1Rqa2ptUFkyNTJONDZOODF3ZnRyZmJlMDdRb1dyTXRUCmZIY3NNaXMxWEV5Tms1YklpOWJ1ek9MY0tHOGVRcWdIZEZtYlhWbWF3aFdUQXJyY2JsTW03MytiMTJiUTlQZncKL3ZQTndLK0VZL3QraW1vY2piODdKRXNJMXBhK05KcXlHSEE0R3NwK3d3SURBUUFCb0FBd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBRjhMYTQrdG9kWENmaVpMSWdzMmRyU2ZIc3NYa3pBRVo5L21PUTZGODFqakN1ZitFUk5zClFpWXZFTEJWbVJjeXVRa0hKOHpqanpZT1pWNzZRTDAraDFTOFo2WkMzNnJPYkxYTnJUYTRVcTU3NnVGRlBKaWwKbk01R29sN3J4SE9Yak5BdDBGVlFIR3JnQ2p6NDJGVUh1cDh0K0dhZGUwaFZiTnNyOE9BY0RoZld1OHYvWEhUbworMjU5SHE4ZTdhNURPRjl5YkwvNWFHSEthUEV3eGVGQmk1dkZYYU8vamFIejhpWDQ0TFFPRHR0RU1Gd2NVbFh6CldLVWJjNzFraWtlVlhqL3I5UDBDc09IT3dreW9IcHFZNUFacERsbTJPUjIyekE0ZjgrMWdUT1lrbjQ0Y2I5N2kKc1NoaWVpWC90Nmw3WVN4TUdBalgrbjlGM0RYU3NFUmlhOXc9Ci0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo=","signerName":"kubernetes.io/kube-apiserver-client","usages":["client auth"]}}
creationTimestamp: "2024-11-20T14:13:33Z"
name: myuser
resourceVersion: "842940"
uid: a99b199d-38d8-4120-9f3b-7fd2028ed026
spec:
expirationSeconds: 86400
groups:
- kubeadm:cluster-admins
- system:authenticated
request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1ZqQ0NBVDRDQVFBd0VURVBNQTBHQTFVRUF3d0diWGwxYzJWeU1JSUJJakFOQmdrcWhraUc5dzBCQVFFRgpBQU9DQVE4QU1JSUJDZ0tDQVFFQXR4UmhFUy9IK2N4U2pDa1ViZTZxTnBKUWoyWUoyYnZpM2VwREM0N1BjOFlkCmlFdk1KYzRCdHVMdXJjL1Mwd2lPN213bytCMHo5T2pjSCs4QUdSV09Qa0x6Z2pGMkFYYzBlSVVPY1lVaG0xNjMKSFlGQituOFlNeEZSOXNudnBQS2tGN1oxUTFBMW5RZ3FUak9nek9FUUhFQk14YUJ0RVZqakZaT0pUTHVDZVZHMQpYMllrTGNVSDkxcEsrTHFVZDIvMGlWMnIvWjlhRVJiT1Rqa2ptUFkyNTJONDZOODF3ZnRyZmJlMDdRb1dyTXRUCmZIY3NNaXMxWEV5Tms1YklpOWJ1ek9MY0tHOGVRcWdIZEZtYlhWbWF3aFdUQXJyY2JsTW03MytiMTJiUTlQZncKL3ZQTndLK0VZL3QraW1vY2piODdKRXNJMXBhK05KcXlHSEE0R3NwK3d3SURBUUFCb0FBd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBRjhMYTQrdG9kWENmaVpMSWdzMmRyU2ZIc3NYa3pBRVo5L21PUTZGODFqakN1ZitFUk5zClFpWXZFTEJWbVJjeXVRa0hKOHpqanpZT1pWNzZRTDAraDFTOFo2WkMzNnJPYkxYTnJUYTRVcTU3NnVGRlBKaWwKbk01R29sN3J4SE9Yak5BdDBGVlFIR3JnQ2p6NDJGVUh1cDh0K0dhZGUwaFZiTnNyOE9BY0RoZld1OHYvWEhUbworMjU5SHE4ZTdhNURPRjl5YkwvNWFHSEthUEV3eGVGQmk1dkZYYU8vamFIejhpWDQ0TFFPRHR0RU1Gd2NVbFh6CldLVWJjNzFraWtlVlhqL3I5UDBDc09IT3dreW9IcHFZNUFacERsbTJPUjIyekE0ZjgrMWdUT1lrbjQ0Y2I5N2kKc1NoaWVpWC90Nmw3WVN4TUdBalgrbjlGM0RYU3NFUmlhOXc9Ci0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo=
signerName: kubernetes.io/kube-apiserver-client
usages:
- client auth
username: kubernetes-admin
status:
certificate: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM5ekNDQWQrZ0F3SUJBZ0lSQU5yZnpBL0hWMDM2QTdYLzl1MHdKRFl3RFFZSktvWklodmNOQVFFTEJRQXcKRlRFVE1CRUdBMVVFQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TkRFeE1qQXhOREE1TlRkYUZ3MHlOREV4TWpFeApOREE1TlRkYU1CRXhEekFOQmdOVkJBTVRCbTE1ZFhObGNqQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQCkFEQ0NBUW9DZ2dFQkFMY1VZUkV2eC9uTVVvd3BGRzN1cWphU1VJOW1DZG03NHQzcVF3dU96M1BHSFloTHpDWE8KQWJiaTdxM1AwdE1JanU1c0tQZ2RNL1RvM0IvdkFCa1ZqajVDODRJeGRnRjNOSGlGRG5HRkladGV0eDJCUWZwLwpHRE1SVWZiSjc2VHlwQmUyZFVOUU5aMElLazR6b016aEVCeEFUTVdnYlJGWTR4V1RpVXk3Z25sUnRWOW1KQzNGCkIvZGFTdmk2bEhkdjlJbGRxLzJmV2hFV3prNDVJNWoyTnVkamVPamZOY0g3YTMyM3RPMEtGcXpMVTN4M0xESXIKTlZ4TWpaT1d5SXZXN3N6aTNDaHZIa0tvQjNSWm0xMVptc0lWa3dLNjNHNVRKdTkvbTlkbTBQVDM4UDd6emNDdgpoR1A3Zm9wcUhJMi9PeVJMQ05hV3ZqU2FzaGh3T0JyS2ZzTUNBd0VBQWFOR01FUXdFd1lEVlIwbEJBd3dDZ1lJCkt3WUJCUVVIQXdJd0RBWURWUjBUQVFIL0JBSXdBREFmQmdOVkhTTUVHREFXZ0JURGx3RU9zWWVYYTV4VE9HYzIKTHNFNVI1Z2p4VEFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBZGc1VkJhTnpOdk1OeVRJcWVaKzBNRVRwZzhpdwpqWEFrWW9xQk9uSDVhajlTMWhBY0EzVFlUdWtEK3MzcHI5WENiWDVMcm5oby8wRzhiMjRHb3JKK0FMcytCYVVICkJWUVFGUkRSNWZZTWVGZTRqa1d5amhJc0greldDVFRqVDR2VlNMTFNRckZld1YrZHJrS3dGcFNlNFIrZUk2WTYKdWtZblRrZjNRMmJKVkZpaTNFbzdkS1pnU0ZOSnhhaWhGNVFLbWJ3VXU5bTNSbVBabzZCclQ5L1A0VTBrRFRIVAowVnJiTGxnT2xra2FoOXJtNWFTMjd0THdackVWbUYvUkZEdTVGOGVDOC81SnFuek5leG1QQ1NKcGIzSndXK21GCmlEWTlaYnBJakZCZG1yekNxZGtZeU9YR012bEQ5eDdib2ErQWtmRU5zNnZBMTlwM2w1VjV4V2E2SVE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
conditions:
- lastTransitionTime: "2024-11-20T14:14:57Z"
lastUpdateTime: "2024-11-20T14:14:58Z"
message: This CSR was approved by kubectl certificate approve.
reason: KubectlApprove
status: "True"
type: Approved
Saved the status.certificate value to a file named certificate.
$ cat certificate | base64 -d
-----BEGIN CERTIFICATE-----
MIIC9zCCAd+gAwIBAgIRANrfzA/HV036A7X/9u0wJDYwDQYJKoZIhvcNAQELBQAw
FTETMBEGA1UEAxMKa3ViZXJuZXRlczAeFw0yNDExMjAxNDA5NTdaFw0yNDExMjEx
NDA5NTdaMBExDzANBgNVBAMTBm15dXNlcjCCA...
$ cat certificate | base64 -d | openssl x509 -text
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
da:df:cc:0f:c7:57:4d:fa:03:b5:ff:f6:ed:30:24:36
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN=kubernetes
Validity
Not Before: Nov 20 14:09:57 2024 GMT
Not After : Nov 21 14:09:57 2024 GMT
Subject: CN=myuser
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:b7:14:61:11:2f:c7:f9:cc:52:8c:29:14:6d:ee:
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Extended Key Usage:
TLS Web Client Authentication
X509v3 Basic Constraints: critical
CA:FALSE
X509v3 Authority Key Identifier:
C3:97:01:0E:B1:87:97:6B:9C:53:38:67:36:2E:C1:39:47:98:23:C5
Signature Algorithm: sha256WithRSAEncryption
Signature Value:
76:0e:55:05:a3:73:36:f3:0d:c9:32:2a:79:9f:b4:30:44:e9:
So this is where the CN=kubernetes issuer I occasionally see comes from.
Contents of the CertificateSigningRequest
Stripped down, you can see it was created with the username kubernetes-admin.
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: myuser
spec:
  expirationSeconds: 86400
  groups:
  - kubeadm:cluster-admins
  - system:authenticated
  request: <CSR pem>
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
  username: kubernetes-admin   <------ is this the kubernetes user name?
status:
  certificate: <pem>
  conditions:
  - lastTransitionTime: "2024-11-20T14:14:57Z"
    lastUpdateTime: "2024-11-20T14:14:58Z"
    message: This CSR was approved by kubectl certificate approve.
    reason: KubectlApprove
    status: "True"
    type: Approved
Not sure yet who you become when authenticating with this cert.
Right now I appear to be kubernetes-admin:
$ k auth whoami
ATTRIBUTE VALUE
Username kubernetes-admin
Groups [kubeadm:cluster-admins system:authenticated]
But can-i gives a different answer. Is it the Groups that make the difference?
$ kubectl auth can-i get pods
yes
$ kubectl auth can-i get pods --as=kubernetes-admin
no
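A guess at the difference (the course has not covered it yet): kubeadm grants cluster-admin to the group kubeadm:cluster-admins, and `--as` impersonates only the username, dropping my groups. If that is right, impersonating the group as well should flip the answer back to yes. A rough check:

```bash
# impersonate the user AND its group (assumption: a binding to the
# kubeadm:cluster-admins group is what actually grants admin rights)
kubectl auth can-i get pods --as=kubernetes-admin --as-group=kubeadm:cluster-admins

# look for which clusterrolebinding mentions that group
kubectl get clusterrolebindings -o wide | grep -i admin
```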
Feels like the next lesson will cover this!
Day 22/40 - Kubernetes Authentication and Authorization Simply Explained
https://www.youtube.com/watch?v=P0bogYEyfeI&list=PLl4APkPHzsUUOkOv3i62UidrLmSB8DcGC&index=24
Half a year passed, so I have forgotten everything.
$ k create deploy nginx --image=nginx --replicas=3
deployment.apps/nginx created
Learning Kubernetes authorization.
Get inside the control plane:
docker ps | grep control
# once you know the node (container) name, log in
docker exec -it kind-control-plane bash
# inside the control plane
cd /etc/kubernetes/manifests/
root@kind-control-plane:/etc/kubernetes/manifests# ls -l
total 16
-rw------- 1 root root 2547 Mar 31 12:31 etcd.yaml
-rw------- 1 root root 3896 Mar 31 12:31 kube-apiserver.yaml
-rw------- 1 root root 3428 Mar 31 12:31 kube-controller-manager.yaml
-rw------- 1 root root 1463 Mar 31 12:31 kube-scheduler.yaml
kube-apiserver.yaml contains the startup command, and the authorization settings are part of it.
`--authorization-mode=Node,RBAC` shows that the Node and RBAC authorizers are enabled.
kube-apiserver manual
> https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/
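A quick way to check just that flag without reading the whole manifest (run inside the control-plane container):

```bash
grep -- --authorization-mode /etc/kubernetes/manifests/kube-apiserver.yaml
```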
```yaml
root@kind-control-plane:/etc/kubernetes/manifests# cat kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
annotations:
kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 10.201.0.3:6443
creationTimestamp: null
labels:
component: kube-apiserver
tier: control-plane
name: kube-apiserver
namespace: kube-system
spec:
containers:
- command:
- kube-apiserver
- --advertise-address=10.201.0.3
- --allow-privileged=true
- --authorization-mode=Node,RBAC <------------------ this one!
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --enable-admission-plugins=NodeRestriction
- --enable-bootstrap-token-auth=true
- --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
- --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
- --etcd-servers=https://127.0.0.1:2379
- --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
- --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
- --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
- --requestheader-allowed-names=front-proxy-client
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
- --requestheader-extra-headers-prefix=X-Remote-Extra-
- --requestheader-group-headers=X-Remote-Group
- --requestheader-username-headers=X-Remote-User
- --runtime-config=
- --secure-port=6443
- --service-account-issuer=https://kubernetes.default.svc.cluster.local
- --service-account-key-file=/etc/kubernetes/pki/sa.pub
- --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
- --service-cluster-ip-range=10.96.0.0/16
- --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
- --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
image: registry.k8s.io/kube-apiserver:v1.31.2
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 8
httpGet:
host: 10.201.0.3
path: /livez
port: 6443
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 15
name: kube-apiserver
readinessProbe:
failureThreshold: 3
httpGet:
host: 10.201.0.3
path: /readyz
port: 6443
scheme: HTTPS
periodSeconds: 1
timeoutSeconds: 15
resources:
requests:
cpu: 250m
startupProbe:
failureThreshold: 24
httpGet:
host: 10.201.0.3
path: /livez
port: 6443
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 15
volumeMounts:
- mountPath: /etc/ssl/certs
name: ca-certs
readOnly: true
- mountPath: /etc/ca-certificates
name: etc-ca-certificates
readOnly: true
- mountPath: /etc/kubernetes/pki
name: k8s-certs
readOnly: true
- mountPath: /usr/local/share/ca-certificates
name: usr-local-share-ca-certificates
readOnly: true
- mountPath: /usr/share/ca-certificates
name: usr-share-ca-certificates
readOnly: true
hostNetwork: true
priority: 2000001000
priorityClassName: system-node-critical
securityContext:
seccompProfile:
type: RuntimeDefault
volumes:
- hostPath:
path: /etc/ssl/certs
type: DirectoryOrCreate
name: ca-certs
- hostPath:
path: /etc/ca-certificates
type: DirectoryOrCreate
name: etc-ca-certificates
- hostPath:
path: /etc/kubernetes/pki
type: DirectoryOrCreate
name: k8s-certs
- hostPath:
path: /usr/local/share/ca-certificates
type: DirectoryOrCreate
name: usr-local-share-ca-certificates
- hostPath:
path: /usr/share/ca-certificates
type: DirectoryOrCreate
name: usr-share-ca-certificates
status: {}
```
/etc/kubernetes/pki holds quite a few certificates.
There are certs used to authenticate kubelet, etcd, and so on.
sa.* is apparently the service account key pair.
root@kind-control-plane:/etc/kubernetes/pki# ls
apiserver-etcd-client.crt apiserver-kubelet-client.key ca.crt front-proxy-ca.crt front-proxy-client.key
apiserver-etcd-client.key apiserver.crt ca.key front-proxy-ca.key sa.key
apiserver-kubelet-client.crt apiserver.key etcd front-proxy-client.crt sa.pub
Looked at one certificate that caught my eye: the cluster FQDNs and IPs are listed in its SAN. It was issued today, so it is probably regenerated when the k8s cluster starts up.
root@kind-control-plane:/etc/kubernetes/manifests# cat /etc/kubernetes/pki/apiserver.crt| openssl x509 -text
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 8200919631278145059 (0x71cf84f44ddc7e23)
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN = kubernetes
Validity
Not Before: Mar 31 12:26:21 2025 GMT
Not After : Mar 31 12:31:21 2026 GMT
Subject: CN = kube-apiserver
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:ae:86:15:d0:00:b5:1b:c0:8c:d9:a4:e0:e3:f5:
...
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Key Usage: critical
Digital Signature, Key Encipherment
X509v3 Extended Key Usage:
TLS Web Server Authentication
X509v3 Basic Constraints: critical
CA:FALSE
X509v3 Authority Key Identifier:
C3:97:01:0E:B1:87:97:6B:9C:53:38:67:36:2E:C1:39:47:98:23:C5
X509v3 Subject Alternative Name:
DNS:kind-control-plane, DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, DNS:localhost, IP Address:10.96.0.1, IP Address:10.201.0.3, IP Address:127.0.0.1
Signature Algorithm: sha256WithRSAEncryption
...
Day 23/40 - Kubernetes RBAC Explained - Role Based Access Control Kubernetes
https://www.youtube.com/watch?v=uGcDt7iNFkE&list=PLl4APkPHzsUUOkOv3i62UidrLmSB8DcGC&index=24
The certificate made on Day 21 was a client certificate for a user who talks to the k8s API!
Today: how to grant that user permissions.
We cover the authorization side of operating on the cluster,
i.e. how to create a User for the cluster.
Who am I currently authenticated as?
$ k auth whoami
ATTRIBUTE VALUE
Username kubernetes-admin
Groups [kubeadm:cluster-admins system:authenticated]
Create a User
(the sample in the course material is wrong, so it is corrected here)
$ openssl genrsa -out adam.key 2048
$ openssl req -new -key adam.key -out adam.csr -subj "/CN=adam"
$ cat adam.csr | base64
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1ZEQ0NBVHdDQVFBd0R6RU5NQXNHQTFVRUF3d0VZV1JoYlRDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRApnZ0VQQURDQ0FRb0NnZ0VCQUozdm16Q2FrcDlGM3N4ZHlvWW83bnFHak1jcTV6WDZtSnJrc2hRbVhublUwYWNJCkhMMTlRcUVwOUpOWERCUnI4VTJoSTZ4VWwvMzlSUnVzVytnbWljZ3hRcEpKU3VGekEzMlI0RmJyRnloTmxhYjEKU2ZFMU0xaFVHR29ZNi94QU92QlcrVUFnRitiV012eHYya3o1Z1dBMGp0bHNncGZ6bW9zUTRlZk9XTXo0SS96MQp0TDdYTUhiVUkrUjBqQ2oxZmJKSDRSeTF3OU00Vkcza1BuRHYrdXN1Y1ZwN1ROT01RWi9ObDlSa3lDRmg5ZS9SCllhVTlaM3gyRDNqTkV6N2k5cGxJd1NKdWxaVmVTRFZPKzJIME5WM3VqNGMxS0J3UXFNdWc3ejk5dUpibnYrSUoKNUtEeWQ1Y1ZqSFFRdWZoV0h1SHpmU1hvajBoWURyUFVuK3JkRGRzQ0F3RUFBYUFBTUEwR0NTcUdTSWIzRFFFQgpDd1VBQTRJQkFRQUVMeXVwK1laRW4xOXgxYUtMQnJsVFI5V3hjZkJpc25nWnJ1UFJGYzM1dHZUdkJJeDIvTmRjCktjQTRKd1M2dGhKSmtjd1Z5YkROaTJ6MkFMbVowSUUwRzQzVURuOXVRY0lIcENtVDFVelNHbHgydHpJL0VaQ2sKQUJveHJzeU55NURPTGtrZkxzcm1seThzU0ZyRXBrK2VWdmU2Q2szMU1JVVVUTU55S2NCWC9XVDJQMGwxZmdMWAo0S3czc2MrZjFUZncrcTZyUUtXSTZpUk9YcGh0MWd3NWN3ZTFqb0ZndGVJVGc5SUszS1FVdFlKT1hXUmJjdFN6CkZpejY0Wk9LcXF2dFkwdzFwNFljVWxWUkc1TTZUN1FCWXVJV01LSTREQTlJb2FXMHVqdHNreVBpNitvd0pGTmsKMHhZV2NWdEsrZVBxbDYvNDlEWngwdU9iWG85U0NsYkUKLS0tLS1FTkQgQ0VSVElGSUNBVEUgUkVRVUVTVC0tLS0tCg==
$ k apply -f csr.yaml
certificatesigningrequest.certificates.k8s.io/adam created
$ k certificate approve adam
certificatesigningrequest.certificates.k8s.io/adam approved
$ kubectl get csr adam -o jsonpath='{.status.certificate}'| base64 -d > adam.crt
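For reference, the csr.yaml applied above should look roughly like this (a sketch modeled on the Day 21 myuser CSR; spec.request holds the base64 output shown above):

```bash
cat <<'EOF' > csr.yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: adam
spec:
  request: <base64 of adam.csr>
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 86400
  usages:
  - client auth
EOF
```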
Can I (and can adam) run k get pod?
$ k auth can-i get pod
yes
$ k auth can-i get pod --as adam
no
Create the Role pod-reader
https://github.com/piyushsachdeva/CKA-2024/tree/main/Resources/Day23
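role.yaml from the repo is presumably along these lines (a sketch reconstructed from the describe output further below: get/watch/list on pods, in the default namespace at this point):

```bash
cat <<'EOF' > role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
EOF
```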
$ k apply -f role.yaml
role.rbac.authorization.k8s.io/pod-reader created
$ k get roles
No resources found in day23 namespace.
$ k get roles --namespace=default
NAME CREATED AT
pod-reader 2025-03-31T13:44:10Z
$ k describe roles pod-reader --namespace=default
Name: pod-reader
Labels: <none>
Annotations: <none>
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
pods [] [] [get watch list]
$ k get roles -A
NAMESPACE NAME CREATED AT
default pod-reader 2025-03-31T13:55:59Z
kube-public kubeadm:bootstrap-signer-clusterinfo 2024-11-12T13:58:45Z
kube-public system:controller:bootstrap-signer 2024-11-12T13:58:44Z
kube-system extension-apiserver-authentication-reader 2024-11-12T13:58:44Z
kube-system kube-proxy 2024-11-12T13:58:46Z
kube-system kubeadm:kubelet-config 2024-11-12T13:58:45Z
kube-system kubeadm:nodes-kubeadm-config 2024-11-12T13:58:45Z
kube-system system::leader-locking-kube-controller-manager 2024-11-12T13:58:44Z
kube-system system::leader-locking-kube-scheduler 2024-11-12T13:58:44Z
kube-system system:controller:bootstrap-signer 2024-11-12T13:58:44Z
kube-system system:controller:cloud-provider 2024-11-12T13:58:44Z
kube-system system:controller:token-cleaner 2024-11-12T13:58:44Z
Create the RoleBinding
$ k apply -f role-binding.yaml
rolebinding.rbac.authorization.k8s.io/read-pods created
$ k get rolebindings --namespace=default
NAME ROLE AGE
read-pods Role/pod-reader 19s
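role-binding.yaml is presumably along these lines (a sketch: it binds pod-reader to the user adam; krishna gets added to subjects later on):

```bash
cat <<'EOF' > role-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: adam
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
EOF
```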
Try it as the adam user.
Create the context (connection info):
# create the user entry in .kube/config
$ k config set-credentials adam --client-key=adam.key --client-certificate=adam.crt
User "adam" set.
# create the context in .kube/config
$ k config set-context adam --cluster=kind-kind --user=adam
Context "adam" created.
$ k config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
adam kind-kind adam
* kind-kind kind-kind kind-kind day23
$ k config use-context adam
Switched to context "adam".
$ k auth whoami
ATTRIBUTE VALUE
Username adam
Groups [system:authenticated]
It did not work, so redo everything with a krishna user
$ cd krishna
$ openssl genrsa -out krishna.key 2048
$ openssl req -new -key adam.key -out krishna.csr -subj "/CN=krishna"
Could not open file or uri for loading private key from adam.key: No such file or directory
$ openssl req -new -key krishna.key -out krishna.csr -subj "/CN=krishna"
$ k auth can-i create po
yes
$ k auth can-i create po --as krishna
no
$ cat *.csr | base64
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1Z6Q0NBVDhDQVFBd0VqRVFNQTRHQTFVRUF3d0hhM0pwYzJodVlUQ0NBU0l3RFFZSktvWklodmNOQVFFQgpCUUFEZ2dFUEFEQ0NBUW9DZ2dFQkFLZDgzSHNXNW54VHQvak9oSnhpbVJ6ZkVHQ0M3K1g4WmRLNlJwQTNrandSCkdxTVNtdHhtQVZlSVoxdGxwUzZsTTNXOWwyUEY5emI3aGozOFZpMWdqckVaTHpkVHI5amNDUkNmUytKWjYwOUEKblJDYnU4cWtMSjlkY0J5OTEvSTdsQVpuVzBnaWJHL0NvQnRhMWRZSHo0R2VQK3FwMTVucnl3Vy9tV2NaelpQZApybGJhV3RGUzYyYU1sdHc3b1c0clU3aGFDcTRRcEVGaCszVzU1TTlzcitONXRPOTR5SnFtQ0loTUtFT2V2WHNMClRmVjJBYjVzWmdjUlpnNWp5cHpRdlQrWjZOTkNTekZndWRqMVVOZlZQb2pqZzkrbWZCNUxaY1BmRkpqTm5NSnAKMmwxSE1uMmwzc1JHWVhueVdTanpqZ2EzNGZsVExmaUhDbUlPQUlwMlVTOENBd0VBQWFBQU1BMEdDU3FHU0liMwpEUUVCQ3dVQUE0SUJBUUJMUk9xZTZGS0pLdTl1VVU2TllFVjJZem4vY2NGR3MyR2NvemFub1FtTnRsOXFKTy9XCjVHSmpMc0tRaXQxblp1b1FKNCtORDROcmRJMVVhb0RVSGw2b241MXBTbmNSYzZSWlVuRlY4RUZ1NlN0NHppTFIKKzg3OEJWUzBKc0xoM2pSZitxeDU2YzNadm5LYTF3Z3VtSTJkamFGcEI3NDNZb2pZWm94VzFSR2daZlhSclhSZwpuYVRuL2QwQnNjNWprL1dYd1hObDBEbWxCbmtMalVwbW5VMWplNjhlNnBEN0NzYlppSWFzbklqangzWmlNTk1KCkNYUGRzbnpITzVDNHlWSWE4KzY4eWZacDVNeDZRVlJpNStIaVRDNExEYlZQYXppMUpPREUzcFRhM3NicjU4R08KSWdwNkNpbGFsa2hVVFMwc2RlY3oreVdUc01WQnpMYU9PQng4Ci0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo=
# after pasting the base64 into csr.yaml
$ k apply -f csr.yaml
certificatesigningrequest.certificates.k8s.io/krishna created
$ k certificate approve krishna
certificatesigningrequest.certificates.k8s.io/krishna approved
$ kubectl get csr krishna -o jsonpath='{.status.certificate}'| base64 -d > krishna.crt
$ k config set-credentials krishna --client-key=krishna.key --client-certificate=krishna.crt
User "krishna" set.
$ k config set-context krishna --cluster=kind-kind --user=krishna
Context "krishna" created.
$ k config use-context krishna
Switched to context "krishna".
$ k auth whoami
ATTRIBUTE VALUE
Username krishna
Groups [system:authenticated]
Added krishna to the role binding... but it still does not work.
Then it clicked: a Role has to be created per namespace.
The Role was in the default ns, so can-i only returned yes for the default ns!
$ k auth can-i get po --namespace=default --as krishna
yes
$ k auth can-i get po --namespace=default --as adam
yes
After recreating the Role/RoleBinding in the day23 ns, it simply worked.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: day23   <-------
$ k apply -f role.yaml
role.rbac.authorization.k8s.io/pod-reader created
$ k apply -f role-binding.yaml
rolebinding.rbac.authorization.k8s.io/read-pods created
$ k auth can-i get po --as adam
yes
ok~
Now the kind-kind cluster has more users who can connect:
- kind-kind: admin
- adam: pod-reader
- krishna: pod-reader
Handy for testing things that should be forbidden to non-admins.
Create the role with a command instead of YAML
k create role pod-reader2 --verb=describe --verb=get --verb=list --resource=pods --namespace=day23
k create rolebinding read-pods2 --role=pod-reader2 --user=adam --user=krishna --namespace=day23
describe is apparently not a known verb, but it still behaved as expected
$ k auth can-i describe po --as krishna
Warning: verb 'describe' is not a known verb
yes
$ k auth can-i describe po --as krishna --namespace=default
Warning: verb 'describe' is not a known verb
no - RBAC: role.rbac.authorization.k8s.io "pod-reader2" not found
Hit the API directly with curl.
First we need ca.crt, so fetch it from the control plane:
$ docker exec -it kind-control-plane bash
Password:
root@kind-control-plane:/# cat /etc/kubernetes/pki/ca.*
-----BEGIN CERTIFICATE-----
MIIDBTCCAe2gAwIBAgIIEIVB+V+PV0EwDQYJ...HTz1KXJNbxlcbzTkRNbbeT9RI
GFiZHyDhkTj0
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEA2Md6Q...xJ5E6iJpM7
-----END RSA PRIVATE KEY-----
Get the control plane's IP address.
$ kind get nodes
kind-worker3
kind-control-plane
kind-worker
kind-worker2
# find the IP address
$ kubectl describe node kind-control-plane
Name: kind-control-plane
Roles: control-plane
...
Addresses:
InternalIP: 10.201.0.3
Hostname: kind-control-plane
# find the port
$ docker port kind-control-plane
6443/tcp -> 127.0.0.1:58167
30001/tcp -> 0.0.0.0:30001
The request got a response, but it was not authenticated as adam. Mystery; gave up. (A guess: docker port shows the kind API server published on 127.0.0.1:58167, so localhost:6443 may have been answered by a different local cluster, e.g. rancher-desktop, which would not recognize adam's cert.)
$ curl -k https://localhost:6443/api/v1/namespaces/default/pods --key adam.key --cert adam.crt
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "Unauthorized",
"reason": "Unauthorized",
"code": 401
}
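Something worth retrying (untested in these notes): the kind API server is published on the host at 127.0.0.1:58167 per the docker port output above, and the kubeconfig points there too, so hitting that endpoint with the ca.crt fetched from the control plane might give a different result:

```bash
# hypothetical retry; 58167 is the host port docker assigned to this cluster,
# and ca.crt is assumed to be the file saved from /etc/kubernetes/pki/ca.crt above
curl --cacert ca.crt --key adam.key --cert adam.crt \
  https://127.0.0.1:58167/api/v1/namespaces/default/pods
```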
Day 24/40 - Kubernetes RBAC Continued - Clusterrole and Clusterrole Binding
https://www.youtube.com/watch?v=DswQe7shSa4
https://github.com/piyushsachdeva/CKA-2024
A Role is namespace-scoped; here the list/watch/get target was pods.
A ClusterRole is cluster-scoped; here the list/watch/get target is nodes.
k create clusterrole --help
k create clusterrole node-reader2 --verb=get,list,watch --resource=node
k delete clusterrole node-reader2
k create clusterrole node-reader --verb=get,list,watch --resource=node
k create clusterrole role-test --verb=get,list,watch --resource=node
k get clusterroles | grep node-
k create clusterrolebinding reader-binding --clusterrole=node-reader --user=krishna
k get clusterrolebinding | grep reader
k describe clusterrolebinding reader-binding
k config use-context krishna
k auth can-i get nodes
k get nodes
It worked. At first I typo'd the --user name and got stuck. Watch out: binding to a non-existent user does not error, it just succeeds silently!
Day 25/40 - Kubernetes Service Account - RBAC Continued
https://www.youtube.com/watch?v=k2iCq7IlMKM
https://github.com/piyushsachdeva/CKA-2024/tree/main/Resources/Day25
k config use-context kind-kind
k get sa
k get sa -A
k describe sa default
k get sa default -o yaml
k create sa build-sa
k describe sa build-sa
Create a token for the service account.
A short-lived token is easy to get:
kubectl create token build-sa   # (the upstream docs example uses build-robot; the sa created above is build-sa)
https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
A long-lived service-account token, however, is generated as a Secret.
The Secret is tied to the sa via the kubernetes.io/service-account.name annotation.
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: build-robot-secret
  annotations:
    kubernetes.io/service-account.name: build-sa
Once the Secret is created, ca.crt and a token show up inside it!
k apply -f secret.yaml
$ k get secret -o yaml
apiVersion: v1
items:
- apiVersion: v1
data:
ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJRUlWQitWK1BWMEV3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TkRFeE1USXhNelV6TXpWYUZ3MHpOREV4TVRBeE16VTRNelZhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUURZeDNwQm0xVjFkRk9ORTVmaDJjeUI0UU9VUzZiSFlwMHdYTmVHUGtIVnJzMG1HS3JzM0NYVEVPeVgKQWtjaEZ3Q0VXVC9KbjVJd3Y2ekEvTXFNdHlueWJDZDZLQS92UHh5R1QxT25YeXQvT3FYNWc0aUk1eGZaR0hGRgpDTDl2ajIzU24ybFdjMWdCNTlNWkdCdkpiNGlCWUl5MUVNaFI0NWFpQlloUmV2V1dtOGJYcEFrVTJhQnhRWWl0CkZ6anlqWldoU3VoYm5ieGFGWlNWRkhKc05xOUpJSmx3YWZIaHJ5blJnd2xsZFdhTVQ5MjQ3SUJFd0o2QnN6Q1UKSEIzaWZqUWVvb0g0ZkxpSkRWV0xmSTduU1pmWGp1ZVVmMitEQUtSVG1zQitWM1M0Y09rZDQxZlRPQmhhWmlkcQo3NTJMVWFtWjZaU0pJWSs2T1lGYWU4NUN5M2VQQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJURGx3RU9zWWVYYTV4VE9HYzJMc0U1UjVnanhUQVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQ0lwM0xuQWRDTwpKOUN3TTA1SEE4d0RBWWkyV0dkSTdrdDZJZWI1SnlZNnJOd3R6dmlyVG10QTc3ZHlBN0JHVVRqL1pTNXB2YzRpCnlxUi9aTTdEVk4rNkNDWnhpdEtpWGZZR0wzcC9mdEE3WXlGZjlLR0piajJxL0pHOU91N0JXdlNSZWJnMGdOSlQKMFkwZXJiVms2aTEwbU9RV1JMaHdhTmtiYXlQOEFsSU1NK25oMnlzN3phcWpVSEIvSTJ3QjVSVE5YRkRRWlNncQpPdU9lcHVNdjlseG1xeXJRUHlLSzgzU1lGZWd4ZWRUUjlTbVFUejVGNHJ6elNhMktZMWtxeWhMN2UwTVh6dE9iClEyWXl6SndwRzhuWC8wSGwwdHZBR3RJRVR6TlhmcnFEMkdGOC8wMUhUejFLWEpOYnhsY2J6VGtSTmJiZVQ5UkkKR0ZpWkh5RGhrVGowCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
namespace: ZGVmYXVsdA==
token: ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNkltdE5aRmhzZFhoS1NVVnRka2t3Y1cxTmVVNXFiR2xVWW0xVmVUaGlMWHAwV0RaclNsaDVlVE5xT1RRaWZRLmV5SnBjM01pT2lKcmRXSmxjbTVsZEdWekwzTmxjblpwWTJWaFkyTnZkVzUwSWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXVZVzFsYzNCaFkyVWlPaUprWldaaGRXeDBJaXdpYTNWaVpYSnVaWFJsY3k1cGJ5OXpaWEoyYVdObFlXTmpiM1Z1ZEM5elpXTnlaWFF1Ym1GdFpTSTZJbUoxYVd4a0xYSnZZbTkwTFhObFkzSmxkQ0lzSW10MVltVnlibVYwWlhNdWFXOHZjMlZ5ZG1salpXRmpZMjkxYm5RdmMyVnlkbWxqWlMxaFkyTnZkVzUwTG01aGJXVWlPaUppZFdsc1pDMXpZU0lzSW10MVltVnlibVYwWlhNdWFXOHZjMlZ5ZG1salpXRmpZMjkxYm5RdmMyVnlkbWxqWlMxaFkyTnZkVzUwTG5WcFpDSTZJak0yT0dObE5XRTNMVGhsWXpVdE5EVmpOaTA0WlRjekxXWXpOVEJpTjJVeE4yWXdZeUlzSW5OMVlpSTZJbk41YzNSbGJUcHpaWEoyYVdObFlXTmpiM1Z1ZERwa1pXWmhkV3gwT21KMWFXeGtMWE5oSW4wLmFEazROZS02czR2eWtMdmpWdnVCTUI5RThCckwxMTJFX3cwRl9HTFNha2RqdXJvblcyTXhfdVJiejl4UnJSeG9KTUxMN2FST0FEQm9VVzk5cEM0Ml9UY1pQaERUbEhES3lhZU43NjB3NllWZXd3VE5nUGVpNzdCNmtBVHlZYm5Pa29SdWkwMzhwYlhGUFlmMDA0aUF5Z0R6QUhUYmVPRmIxb2c0OGp6SE55SU92aFYwWUZIS1RfclNwSGgxNlBlVVFnaWxLVjJjeTVaOTQzOXFvUm91MWxtZW92cm1FaU1yVjhSaUw3V2VhemRVX1ljemd2YTlHXzVnNjhmR3VqcTA4ZnJfaFI2Q01JSklINWV1dmM4c0lwSUtZSDRxa0lUVWRKWlNCcmFXV0duYnNFMW5CQm1ncmV5VDB0V29EdFNXNzZKaFJoOU9NVE5pQmdfbHBXQTlDUQ==
kind: Secret
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"Secret","metadata":{"annotations":{"kubernetes.io/service-account.name":"build-sa"},"name":"build-robot-secret","namespace":"default"},"type":"kubernetes.io/service-account-token"}
kubernetes.io/service-account.name: build-sa
kubernetes.io/service-account.uid: 368ce5a7-8ec5-45c6-8e73-f350b7e17f0c
creationTimestamp: "2025-04-01T13:19:24Z"
name: build-robot-secret
namespace: default
resourceVersion: "2928370"
uid: f544b35d-22f5-41b2-8e42-ec08405b5889
type: kubernetes.io/service-account-token
kind: List
metadata:
resourceVersion: ""
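To pull just the long-lived token (or the CA) back out of that Secret, jsonpath plus base64 works (standard kubectl usage, not from the video):

```bash
kubectl get secret build-robot-secret -o jsonpath='{.data.token}' | base64 -d; echo
kubectl get secret build-robot-secret -o jsonpath='{.data.ca\.crt}' | base64 -d > sa-ca.crt
```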
Attach a role to the sa
$ k auth can-i get pods --as build-sa
no
$ k create role build-role \
> --verb=get,list,watch \
> --resource=pod
role.rbac.authorization.k8s.io/build-role created
$ k create rolebinding build-binding \
> --role=build-role \
> --user=build-sa
rolebinding.rbac.authorization.k8s.io/build-binding created
$ k auth can-i get pods --as build-sa
yes
Create pods as the normal kind-kind user
k create deploy nginx --image=nginx --replicas=3
k get also works with --as
$ k get pods --as build-sa
NAME READY STATUS RESTARTS AGE
nginx-676b6c5bbc-cfgh9 1/1 Running 0 8s
nginx-676b6c5bbc-lbj2h 1/1 Running 0 8s
nginx-676b6c5bbc-xwhjv 1/1 Running 0 8s
This pod's service account is default.
And the serviceaccount info shows up in Mounts without asking for it.
$ k describe pod nginx-676b6c5bbc-cfgh9
...
Service Account: default
Containers:
...
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5ts85 (ro)
From inside the pod, the serviceaccount token is automatically mounted, so you can just read it!
How does that work??
# login to the pod
P=nginx-676b6c5bbc-cfgh9
k exec -it $P -- bash
# it is mounted at this directory
root@nginx-676b6c5bbc-cfgh9:/# cd /var/run/secrets/kubernetes.io/serviceaccount/
root@nginx-676b6c5bbc-cfgh9:/var/run/secrets/kubernetes.io/serviceaccount# ls
ca.crt namespace token
root@nginx-676b6c5bbc-cfgh9:/var/run/secrets/kubernetes.io/serviceaccount# cat token
eyJhbGciOiJSUzI1NiIsImtpZCI6ImtNZFhsdXhKSUVtdkkwcW1NeU5qbGlUYm1VeThiLXp0WDZrSlh5eTNqOTQifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzc1MDUwNDY4LCJpYXQiOjE3NDM1MTQ0NjgsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiMmZiYjhmY2QtZDNlNy00NTQzLTg2ZWEtZWE1MzZkNTU0Mzk0Iiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJkZWZhdWx0Iiwibm9kZSI6eyJuYW1lIjoia2luZC13b3JrZXIiLCJ1aWQiOiJkY2NlYjM2MS02ZjRhLTQ0N2EtYjA2ZS02NGU1ODdiYzg3M2IifSwicG9kIjp7Im5hbWUiOiJuZ2lueC02NzZiNmM1YmJjLWNmZ2g5IiwidWlkIjoiYTI4NjNkZmQtMmFhZC00MjUxLThhYmItMTE2MTA5ZTMwMzFjIn0sInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJkZWZhdWx0IiwidWlkIjoiZDgyZjkzMmMtM2JiYi00NDkyLWJkOWYtOTdkYjAwZDA2Y2YzIn0sIndhcm5hZnRlciI6MTc0MzUxODA3NX0sIm5iZiI6MTc0MzUxNDQ2OCwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmRlZmF1bHQ6ZGVmYXVsdCJ9.O1AaPVBP6b1YvyTju35LNqzxcBF8gd8cZFjcvQ2QeiFNDze1rj_Bc0gTY3bPElxylvgAupw78ptqLWou9Stdl09u2Y817iuUHBrqAE74UeVDVLaUpu4isV5KZSiYwuEla-2AEmTyTX0sxRvnLUrejMmmw70Cbr8QeqkmvgFWyzP7y1EmvumbeoXus9J2eItokww0n_hnOV_ntc5XeotFpO8PT_CbOCa7g5XACosUWIInVbWwaDgzXNRvhpfTzD1s7mS9_KXhbU1750MBeXQLbgv8qaS5jWqOlWNDFy0k0G7NZOYHDR5OSKMoNY4caSAs63oKssf39jMsQuvvOlJPmQ
Why the token is mounted automatically
A Kubernetes Service Account is an identity that processes running inside a Pod use to authenticate to the Kubernetes API server. It is different from the user account a human uses when running kubectl.
Authentication to the API server: when an application inside a Pod needs to call the API server (for example, to read the Pod's own info or manipulate other resources), it has to authenticate itself, and the Service Account token is what it authenticates with.
Default behaviour: unless configured otherwise, Kubernetes assigns the default Service Account to every Pod it creates and mounts that Service Account's token into the Pod at /var/run/secrets/kubernetes.io/serviceaccount/.
Convenience and security: thanks to this automatic mount, applications in the Pod can talk to the API server with ready-made credentials and no extra setup, so developers do not have to implement credential handling themselves. The token is also tied to the Pod's lifecycle, which keeps it reasonably safe.
Try specifying an sa on a pod
$ cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  serviceAccountName: build-sa
  containers:
  - image: nginx
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
k apply -f pod.yaml
$ k exec -it nginx -- bash
root@nginx:/# ls /var/run/secrets/kubernetes.io/serviceaccount
ca.crt namespace token
$ k describe pod nginx
Name: nginx
Namespace: default
Service Account: build-sa
...
However, the mounted token was not the same as the secret's build-robot-secret token (the one tied to build-sa). Couldn't work out why at the time; most likely the kubelet now mounts a short-lived projected token obtained via the TokenRequest API instead of the Secret-based token.
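To confirm what the mounted token actually is, decoding the JWT payload inside the pod shows the audience, expiry, and the system:serviceaccount:default:build-sa subject (a rough sketch; the payload is unpadded base64url, so base64 -d may grumble after printing the JSON):

```bash
# inside the pod: print the JWT payload (2nd dot-separated segment)
cut -d '.' -f 2 /var/run/secrets/kubernetes.io/serviceaccount/token \
  | tr '_-' '/+' | base64 -d 2>/dev/null; echo
```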
Try authenticating with the mounted token.
Called the k8s API from inside the pod: got an authorization error, but the token is indeed authenticated as the specified sa.
$ k exec -it nginx -- bash
root@nginx:/# TOKEN_PATH="/var/run/secrets/kubernetes.io/serviceaccount/token"
TOKEN_PATH="/var/run/secrets/kubernetes.io/serviceaccount/token"
CACERT_PATH="/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
TOKEN=$(cat $TOKEN_PATH)
CACERT=$CACERT_PATH
API_SERVER_IP=$(echo $KUBERNETES_PORT_443_TCP_ADDR)
API_SERVER_PORT=$(echo $KUBERNETES_PORT_443_TCP_PORT)
API_SERVER="https://${API_SERVER_IP}:${API_SERVER_PORT}"
echo curl --cacert $CACERT -H "Authorization: Bearer $TOKEN" $API_SERVER/api/v1/namespaces/default/pods
curl --cacert $CACERT -H "Authorization: Bearer $TOKEN" $API_SERVER/api/v1/namespaces/default/pods
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "pods is forbidden: User \"system:serviceaccount:default:build-sa\" cannot list resource \"pods\" in API group \"\" at the cluster scope",
"reason": "Forbidden",
"details": {
"kind": "pods"
},
"code": 403
}
The reason it failed: the rolebinding applied to the sa only as a User subject.
After also binding the sa as a ServiceAccount subject, the API call was authorized.
$ kubectl get rolebinding build-binding -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: "2025-04-01T13:30:52Z"
  name: build-binding
  namespace: default
  resourceVersion: "2934218"
  uid: d2278c8b-c788-4d56-a330-dc0796c5cc4d
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: build-role
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: build-sa
- kind: ServiceAccount   <----------- this part was added
  name: build-sa
  namespace: default
The User subject in the RoleBinding is what made `k get pods --as build-sa` pass; to call the API from inside the pod, a role bound to the ServiceAccount was also required. Good lesson!
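Aside: `--as build-sa` impersonates a User literally named build-sa, which is why the User subject satisfied it. To ask the same question as the service account itself, impersonate its full name; the ServiceAccount subject could also have been added with a one-liner (sketch, with a hypothetical binding name to avoid clashing with build-binding):

```bash
kubectl auth can-i list pods --as system:serviceaccount:default:build-sa

# equivalent to the subjects entry added by hand above
kubectl create rolebinding build-binding-sa \
  --role=build-role \
  --serviceaccount=default:build-sa
```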
ImagePullSecrets
ImagePullSecrets are used when pulling images from a docker registry.
No registry available here, so this part was explanation only, no hands-on.
# create the ImagePullSecret
kubectl create secret docker-registry myregistrykey \
  --docker-server=<registry name> \
  --docker-username=DUMMY_USERNAME \
  --docker-password=DUMMY_DOCKER_PASSWORD \
  --docker-email=DUMMY_DOCKER_EMAIL
# attach the ImagePullSecret to a service account
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: 2021-07-07T22:02:39Z
  name: default
  namespace: default
  uid: 052fb0f4-3d50-11e5-b066-42010af0d7b6
imagePullSecrets:
- name: myregistrykey
# or specify imagePullSecrets when creating the pod
apiVersion: v1
kind: Pod
metadata:
  name: foo
  namespace: awesomeapps
spec:
  containers:
  - name: foo
    image: janedoe/awesomeapp:v1
  imagePullSecrets:
  - name: myregistrykey
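Instead of editing the ServiceAccount manifest by hand, the k8s docs also show a one-line patch that attaches the secret to the default sa (not tried in these notes):

```bash
kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}'
```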