

[Kubernetes] Checking the behavior of Taints/Tolerations

Posted at 2020-06-14

Introduction

In the previous post, I looked at Node Affinity and related mechanisms for controlling where Pods are scheduled. This time I want to check the behavior of a similar feature: Taints and Tolerations.
Node Affinity and friends attract Pods to particular Nodes at deployment time; Taints/Tolerations work the other way around. You set a taint (literally, a stain or contamination) on a Node and a toleration on a Pod, and the Node then only accepts Pods that can tolerate its taints.
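To make the pairing concrete, here is a minimal sketch. The node name and key/value are placeholders, not taken from this article's cluster:

```yaml
# Suppose a node has been tainted with:
#   kubectl taint node <node-name> team=blue:NoSchedule
# Only Pods whose spec carries a matching toleration, such as the
# fragment below, are then allowed onto that node.
tolerations:
- key: "team"           # must match the taint's key
  operator: "Equal"     # "Equal" compares key and value; "Exists" checks the key only
  value: "blue"         # must match the taint's value when operator is Equal
  effect: "NoSchedule"  # must match the taint's effect
```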

Effect

What happens when a Pod does not tolerate a taint is specified by the taint's effect. There are three effects:

Effect | Summary
--- | ---
PreferNoSchedule | Pods that cannot tolerate the taint are kept off the node where possible, but they are still scheduled there if no other schedulable node is available.
NoSchedule | Pods that cannot tolerate the taint are not scheduled onto the node. Pods already running on it are left as-is.
NoExecute | Pods that cannot tolerate the taint are not scheduled onto the node, and Pods already running on it are evicted.

NoSchedule

Let's check the behavior of each effect. Out of order, but we start with NoSchedule.

Setting the taint

Set a taint on a Node. Here, only k8s-worker01 is tainted.

$ kubectl taint node k8s-worker01 env=prd:NoSchedule
node/k8s-worker01 tainted
$ kubectl describe node k8s-worker01 | grep Taints
Taints:             env=prd:NoSchedule
$ kubectl describe node k8s-worker02 | grep Taints
Taints:             <none>

Setting tolerations and deploying Pods

Tolerations are specified under spec.tolerations of the Pod.

nginx-prd.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-prd
spec:
  replicas: 4
  selector:
    matchLabels:
      app: app1
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
        - name: nginx
          image: nginx:latest
      tolerations:
      - key: "env"
        operator: "Equal" # Specify Equal or Exists. Equal: key and value must both match / Exists: only the key must exist
        value: "prd"
        effect: "NoSchedule"

In addition to this, I prepared an "nginx-stg" Deployment whose toleration value is stg, and an "nginx-unspecified" Deployment with no tolerations at all.
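The two extra manifests are not shown in the article; a minimal sketch of what nginx-stg.yaml presumably looks like, assuming only the name, label, and toleration value differ from nginx-prd (the app2 label is my assumption, to keep the two Deployments' selectors from overlapping):

```yaml
# Assumed content of nginx-stg.yaml: same shape as nginx-prd.yaml,
# but the toleration value "stg" does not match the node's env=prd taint.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-stg
spec:
  replicas: 4
  selector:
    matchLabels:
      app: app2
  template:
    metadata:
      labels:
        app: app2
    spec:
      containers:
        - name: nginx
          image: nginx:latest
      tolerations:
      - key: "env"
        operator: "Equal"
        value: "stg"
        effect: "NoSchedule"
```

nginx-unspecified.yaml would then be the same Deployment with the tolerations section removed entirely.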

Apply each Deployment.

$ kubectl apply -f nginx-prd.yaml
deployment.apps/nginx-prd created
$ kubectl apply -f nginx-stg.yaml
deployment.apps/nginx-stg created
$ kubectl apply -f nginx-unspecified.yaml
deployment.apps/nginx-unspecified created
$ kubectl get pod -o wide
NAME                                 READY   STATUS    RESTARTS   AGE   IP               NODE           NOMINATED NODE   READINESS GATES
nginx-prd-686bbc6cfd-9b688           1/1     Running   0          96s   192.168.79.78    k8s-worker01   <none>           <none>
nginx-prd-686bbc6cfd-dttdl           1/1     Running   0          96s   192.168.69.218   k8s-worker02   <none>           <none>
nginx-prd-686bbc6cfd-q9p62           1/1     Running   0          96s   192.168.69.227   k8s-worker02   <none>           <none>
nginx-prd-686bbc6cfd-rsvcd           1/1     Running   0          96s   192.168.79.125   k8s-worker01   <none>           <none>
nginx-stg-74c9c9d964-f5pjj           1/1     Running   0          56s   192.168.69.237   k8s-worker02   <none>           <none>
nginx-stg-74c9c9d964-sn7xs           1/1     Running   0          56s   192.168.69.234   k8s-worker02   <none>           <none>
nginx-stg-74c9c9d964-svt86           1/1     Running   0          56s   192.168.69.232   k8s-worker02   <none>           <none>
nginx-stg-74c9c9d964-tf8zd           1/1     Running   0          56s   192.168.69.236   k8s-worker02   <none>           <none>
nginx-unspecified-5589d85476-5rxnj   1/1     Running   0          22s   192.168.69.246   k8s-worker02   <none>           <none>
nginx-unspecified-5589d85476-t5psw   1/1     Running   0          22s   192.168.69.248   k8s-worker02   <none>           <none>
nginx-unspecified-5589d85476-vn6b8   1/1     Running   0          22s   192.168.69.241   k8s-worker02   <none>           <none>
nginx-unspecified-5589d85476-z8m5p   1/1     Running   0          22s   192.168.69.243   k8s-worker02   <none>           <none>

nginx-prd is spread across both nodes, while nginx-stg and nginx-unspecified are packed onto k8s-worker02.
So the taint keeps Pods that cannot tolerate it off the node, but that does not mean every Pod that can tolerate it is scheduled there.


Behavior when only tainted Nodes are available

Next, let's check what happens when the only schedulable Node is the tainted one.

First, delete all the Deployments and remove k8s-worker02 from scheduling.

$ kubectl delete -f .
deployment.apps "nginx-prd" deleted
deployment.apps "nginx-stg" deleted
deployment.apps "nginx-unspecified" deleted
$ kubectl cordon k8s-worker02
node/k8s-worker02 cordoned
$ kubectl get node
NAME           STATUS                     ROLES    AGE    VERSION
k8s-master     Ready                      master   107d   v1.17.3
k8s-worker01   Ready                      <none>   107d   v1.17.3
k8s-worker02   Ready,SchedulingDisabled   <none>   107d   v1.17.3

Apply all the Deployments again.

$ kubectl apply -f .
deployment.apps/nginx-prd created
deployment.apps/nginx-stg created
deployment.apps/nginx-unspecified created
$ kubectl get pod -o wide
NAME                                 READY   STATUS    RESTARTS   AGE   IP               NODE           NOMINATED NODE   READINESS GATES
nginx-prd-686bbc6cfd-2p2qv           1/1     Running   0          73s   192.168.79.91    k8s-worker01   <none>           <none>
nginx-prd-686bbc6cfd-fvvk6           1/1     Running   0          73s   192.168.79.95    k8s-worker01   <none>           <none>
nginx-prd-686bbc6cfd-lmrb5           1/1     Running   0          73s   192.168.79.100   k8s-worker01   <none>           <none>
nginx-prd-686bbc6cfd-tt9d5           1/1     Running   0          73s   192.168.79.83    k8s-worker01   <none>           <none>
nginx-stg-74c9c9d964-2k8hw           0/1     Pending   0          73s   <none>           <none>         <none>           <none>
nginx-stg-74c9c9d964-stn65           0/1     Pending   0          73s   <none>           <none>         <none>           <none>
nginx-stg-74c9c9d964-tqlb4           0/1     Pending   0          73s   <none>           <none>         <none>           <none>
nginx-stg-74c9c9d964-v48zf           0/1     Pending   0          73s   <none>           <none>         <none>           <none>
nginx-unspecified-5589d85476-5hbl9   0/1     Pending   0          73s   <none>           <none>         <none>           <none>
nginx-unspecified-5589d85476-76s62   0/1     Pending   0          73s   <none>           <none>         <none>           <none>
nginx-unspecified-5589d85476-bljlw   0/1     Pending   0          73s   <none>           <none>         <none>           <none>
nginx-unspecified-5589d85476-c8cpn   0/1     Pending   0          73s   <none>           <none>         <none>           <none>

nginx-stg and nginx-unspecified have no Node they can be scheduled on, so they are stuck in Pending.


PreferNoSchedule

Next, let's check the behavior of PreferNoSchedule.

Delete the Deployments and put k8s-worker02 back into scheduling.

$ kubectl delete -f .
deployment.apps "nginx-prd" deleted
deployment.apps "nginx-stg" deleted
deployment.apps "nginx-unspecified" deleted
$ kubectl uncordon k8s-worker02
node/k8s-worker02 uncordoned

Setting the taint

Delete the taint on k8s-worker01 and then set a new one. To delete a taint, specify the key followed by a minus sign (`<key>-`).

$ kubectl taint node k8s-worker01 env-
node/k8s-worker01 untainted
$ kubectl taint node k8s-worker01 env=prd:PreferNoSchedule
node/k8s-worker01 tainted
$ kubectl describe node k8s-worker01 | grep Taint
Taints:             env=prd:PreferNoSchedule

Deploying the Pods

Change the effect in the nginx-prd and nginx-stg manifests to "PreferNoSchedule" and apply them.

$ kubectl apply -f .
deployment.apps/nginx-prd created
deployment.apps/nginx-stg created
deployment.apps/nginx-unspecified created
$ kubectl get pod -o wide
NAME                                 READY   STATUS    RESTARTS   AGE   IP               NODE           NOMINATED NODE   READINESS GATES
nginx-prd-5678ddf968-5hck2           1/1     Running   0          38s   192.168.69.193   k8s-worker02   <none>           <none>
nginx-prd-5678ddf968-h9hw4           1/1     Running   0          38s   192.168.69.245   k8s-worker02   <none>           <none>
nginx-prd-5678ddf968-qg9th           1/1     Running   0          38s   192.168.79.65    k8s-worker01   <none>           <none>
nginx-prd-5678ddf968-spz2k           1/1     Running   0          38s   192.168.79.81    k8s-worker01   <none>           <none>
nginx-stg-cb887fb9d-84vkr            1/1     Running   0          38s   192.168.69.200   k8s-worker02   <none>           <none>
nginx-stg-cb887fb9d-9mlvc            1/1     Running   0          38s   192.168.69.198   k8s-worker02   <none>           <none>
nginx-stg-cb887fb9d-cl5m2            1/1     Running   0          38s   192.168.79.89    k8s-worker01   <none>           <none>
nginx-stg-cb887fb9d-n7sqv            1/1     Running   0          38s   192.168.69.196   k8s-worker02   <none>           <none>
nginx-unspecified-5589d85476-hkw9b   1/1     Running   0          38s   192.168.69.235   k8s-worker02   <none>           <none>
nginx-unspecified-5589d85476-p5nm9   1/1     Running   0          38s   192.168.69.201   k8s-worker02   <none>           <none>
nginx-unspecified-5589d85476-t86h4   1/1     Running   0          38s   192.168.79.68    k8s-worker01   <none>           <none>
nginx-unspecified-5589d85476-zrc7x   1/1     Running   0          38s   192.168.69.199   k8s-worker02   <none>           <none>

Pods that cannot tolerate the taint should be kept off the node as far as possible, yet one nginx-stg Pod and one nginx-unspecified Pod were scheduled onto k8s-worker01 anyway.
My guess is that, since the cluster has only two worker Nodes, the scheduler placed them on the intolerable Node to balance the load; PreferNoSchedule is a soft preference, not a guarantee.


NoExecute

Finally, let's check the behavior of NoExecute.

Setting the taint

Delete the taint and set it again with NoExecute.

$ kubectl taint node k8s-worker01 env-
node/k8s-worker01 untainted
$ kubectl taint node k8s-worker01 env=prd:NoExecute
node/k8s-worker01 tainted

Pod eviction

While setting the taint, I was watching the Pods from another terminal.
As soon as the taint was reapplied, the Pods moved as follows.

$ kubectl get pod -o wide -w
NAME                                 READY   STATUS    RESTARTS   AGE     IP               NODE           NOMINATED NODE   READINESS GATES
nginx-prd-5678ddf968-5hck2           1/1     Running   0          2m34s   192.168.69.193   k8s-worker02   <none>           <none>
nginx-prd-5678ddf968-h9hw4           1/1     Running   0          2m34s   192.168.69.245   k8s-worker02   <none>           <none>
nginx-prd-5678ddf968-qg9th           1/1     Running   0          2m34s   192.168.79.65    k8s-worker01   <none>           <none>
nginx-prd-5678ddf968-spz2k           1/1     Running   0          2m34s   192.168.79.81    k8s-worker01   <none>           <none>
nginx-stg-cb887fb9d-84vkr            1/1     Running   0          2m34s   192.168.69.200   k8s-worker02   <none>           <none>
nginx-stg-cb887fb9d-9mlvc            1/1     Running   0          2m34s   192.168.69.198   k8s-worker02   <none>           <none>
nginx-stg-cb887fb9d-cl5m2            1/1     Running   0          2m34s   192.168.79.89    k8s-worker01   <none>           <none>
nginx-stg-cb887fb9d-n7sqv            1/1     Running   0          2m34s   192.168.69.196   k8s-worker02   <none>           <none>
nginx-unspecified-5589d85476-hkw9b   1/1     Running   0          2m34s   192.168.69.235   k8s-worker02   <none>           <none>
nginx-unspecified-5589d85476-p5nm9   1/1     Running   0          2m34s   192.168.69.201   k8s-worker02   <none>           <none>
nginx-unspecified-5589d85476-t86h4   1/1     Running   0          2m34s   192.168.79.68    k8s-worker01   <none>           <none>
nginx-unspecified-5589d85476-zrc7x   1/1     Running   0          2m34s   192.168.69.199   k8s-worker02   <none>           <none>
nginx-unspecified-5589d85476-t86h4   1/1     Terminating   0          3m27s   192.168.79.68    k8s-worker01   <none>           <none>
nginx-prd-5678ddf968-qg9th           1/1     Terminating   0          3m27s   192.168.79.65    k8s-worker01   <none>           <none>
nginx-prd-5678ddf968-spz2k           1/1     Terminating   0          3m27s   192.168.79.81    k8s-worker01   <none>           <none>
nginx-stg-cb887fb9d-cl5m2            1/1     Terminating   0          3m27s   192.168.79.89    k8s-worker01   <none>           <none>
nginx-stg-cb887fb9d-xs45w            0/1     Pending       0          0s      <none>           <none>         <none>           <none>
nginx-prd-5678ddf968-t2znv           0/1     Pending       0          0s      <none>           <none>         <none>           <none>
nginx-unspecified-5589d85476-9cmk9   0/1     Pending       0          0s      <none>           <none>         <none>           <none>
nginx-stg-cb887fb9d-xs45w            0/1     Pending       0          0s      <none>           k8s-worker02   <none>           <none>
nginx-prd-5678ddf968-t2znv           0/1     Pending       0          0s      <none>           k8s-worker02   <none>           <none>
nginx-unspecified-5589d85476-9cmk9   0/1     Pending       0          0s      <none>           k8s-worker02   <none>           <none>
nginx-prd-5678ddf968-6rt79           0/1     Pending       0          0s      <none>           <none>         <none>           <none>
nginx-prd-5678ddf968-6rt79           0/1     Pending       0          0s      <none>           k8s-worker02   <none>           <none>
nginx-stg-cb887fb9d-xs45w            0/1     ContainerCreating   0          0s      <none>           k8s-worker02   <none>           <none>
nginx-prd-5678ddf968-t2znv           0/1     ContainerCreating   0          0s      <none>           k8s-worker02   <none>           <none>
nginx-unspecified-5589d85476-9cmk9   0/1     ContainerCreating   0          0s      <none>           k8s-worker02   <none>           <none>
nginx-prd-5678ddf968-6rt79           0/1     ContainerCreating   0          0s      <none>           k8s-worker02   <none>           <none>
nginx-stg-cb887fb9d-xs45w            0/1     ContainerCreating   0          2s      <none>           k8s-worker02   <none>           <none>
nginx-stg-cb887fb9d-cl5m2            0/1     Terminating         0          3m29s   192.168.79.89    k8s-worker01   <none>           <none>
nginx-prd-5678ddf968-6rt79           0/1     ContainerCreating   0          2s      <none>           k8s-worker02   <none>           <none>
nginx-prd-5678ddf968-t2znv           0/1     ContainerCreating   0          2s      <none>           k8s-worker02   <none>           <none>
nginx-unspecified-5589d85476-9cmk9   0/1     ContainerCreating   0          2s      <none>           k8s-worker02   <none>           <none>
nginx-prd-5678ddf968-spz2k           0/1     Terminating         0          3m30s   192.168.79.81    k8s-worker01   <none>           <none>
nginx-stg-cb887fb9d-cl5m2            0/1     Terminating         0          3m30s   192.168.79.89    k8s-worker01   <none>           <none>
nginx-stg-cb887fb9d-cl5m2            0/1     Terminating         0          3m30s   192.168.79.89    k8s-worker01   <none>           <none>
nginx-stg-cb887fb9d-cl5m2            0/1     Terminating         0          3m30s   192.168.79.89    k8s-worker01   <none>           <none>
nginx-prd-5678ddf968-qg9th           0/1     Terminating         0          3m30s   192.168.79.65    k8s-worker01   <none>           <none>
nginx-unspecified-5589d85476-t86h4   0/1     Terminating         0          3m30s   <none>           k8s-worker01   <none>           <none>
nginx-prd-5678ddf968-spz2k           0/1     Terminating         0          3m32s   192.168.79.81    k8s-worker01   <none>           <none>
nginx-prd-5678ddf968-spz2k           0/1     Terminating         0          3m32s   192.168.79.81    k8s-worker01   <none>           <none>
nginx-stg-cb887fb9d-xs45w            1/1     Running             0          6s      192.168.69.255   k8s-worker02   <none>           <none>
nginx-prd-5678ddf968-6rt79           1/1     Running             0          9s      192.168.69.254   k8s-worker02   <none>           <none>
nginx-prd-5678ddf968-qg9th           0/1     Terminating         0          3m38s   192.168.79.65    k8s-worker01   <none>           <none>
nginx-prd-5678ddf968-qg9th           0/1     Terminating         0          3m38s   192.168.79.65    k8s-worker01   <none>           <none>
nginx-unspecified-5589d85476-t86h4   0/1     Terminating         0          3m38s   <none>           k8s-worker01   <none>           <none>
nginx-unspecified-5589d85476-t86h4   0/1     Terminating         0          3m38s   <none>           k8s-worker01   <none>           <none>
nginx-prd-5678ddf968-t2znv           1/1     Running             0          14s     192.168.69.252   k8s-worker02   <none>           <none>
nginx-unspecified-5589d85476-9cmk9   1/1     Running             0          19s     192.168.69.205   k8s-worker02   <none>           <none>

In the end, all the Pods moved to k8s-worker02, as shown below.
The Pods' tolerations still specify the effect PreferNoSchedule, which does not match the NoExecute taint, so I believe even nginx-prd was judged intolerant and evicted from k8s-worker01.

$ kubectl get pod -o wide
NAME                                 READY   STATUS    RESTARTS   AGE     IP               NODE           NOMINATED NODE   READINESS GATES
nginx-prd-5678ddf968-5hck2           1/1     Running   0          4m30s   192.168.69.193   k8s-worker02   <none>           <none>
nginx-prd-5678ddf968-6rt79           1/1     Running   0          63s     192.168.69.254   k8s-worker02   <none>           <none>
nginx-prd-5678ddf968-h9hw4           1/1     Running   0          4m30s   192.168.69.245   k8s-worker02   <none>           <none>
nginx-prd-5678ddf968-t2znv           1/1     Running   0          63s     192.168.69.252   k8s-worker02   <none>           <none>
nginx-stg-cb887fb9d-84vkr            1/1     Running   0          4m30s   192.168.69.200   k8s-worker02   <none>           <none>
nginx-stg-cb887fb9d-9mlvc            1/1     Running   0          4m30s   192.168.69.198   k8s-worker02   <none>           <none>
nginx-stg-cb887fb9d-n7sqv            1/1     Running   0          4m30s   192.168.69.196   k8s-worker02   <none>           <none>
nginx-stg-cb887fb9d-xs45w            1/1     Running   0          63s     192.168.69.255   k8s-worker02   <none>           <none>
nginx-unspecified-5589d85476-9cmk9   1/1     Running   0          63s     192.168.69.205   k8s-worker02   <none>           <none>
nginx-unspecified-5589d85476-hkw9b   1/1     Running   0          4m30s   192.168.69.235   k8s-worker02   <none>           <none>
nginx-unspecified-5589d85476-p5nm9   1/1     Running   0          4m30s   192.168.69.201   k8s-worker02   <none>           <none>
nginx-unspecified-5589d85476-zrc7x   1/1     Running   0          4m30s   192.168.69.199   k8s-worker02   <none>           <none>



Now change the effect in the nginx-prd and nginx-stg manifests to "NoExecute" and apply them.

$ kubectl apply -f .
deployment.apps/nginx-prd configured
deployment.apps/nginx-stg configured
deployment.apps/nginx-unspecified unchanged
$ kubectl get pod -o wide
NAME                                 READY   STATUS    RESTARTS   AGE     IP               NODE           NOMINATED NODE   READINESS GATES
nginx-prd-8c4f499f6-6cd8d            1/1     Running   0          33s     192.168.79.115   k8s-worker01   <none>           <none>
nginx-prd-8c4f499f6-787jr            1/1     Running   0          42s     192.168.79.104   k8s-worker01   <none>           <none>
nginx-prd-8c4f499f6-j9tf2            1/1     Running   0          42s     192.168.79.99    k8s-worker01   <none>           <none>
nginx-prd-8c4f499f6-w7pn5            1/1     Running   0          36s     192.168.79.114   k8s-worker01   <none>           <none>
nginx-stg-7c7f996574-cf4gk           1/1     Running   0          33s     192.168.69.206   k8s-worker02   <none>           <none>
nginx-stg-7c7f996574-j5zwd           1/1     Running   0          36s     192.168.69.209   k8s-worker02   <none>           <none>
nginx-stg-7c7f996574-m6h6z           1/1     Running   0          42s     192.168.69.244   k8s-worker02   <none>           <none>
nginx-stg-7c7f996574-zp8b5           1/1     Running   0          42s     192.168.69.207   k8s-worker02   <none>           <none>
nginx-unspecified-5589d85476-9cmk9   1/1     Running   0          9m58s   192.168.69.205   k8s-worker02   <none>           <none>
nginx-unspecified-5589d85476-hkw9b   1/1     Running   0          13m     192.168.69.235   k8s-worker02   <none>           <none>
nginx-unspecified-5589d85476-p5nm9   1/1     Running   0          13m     192.168.69.201   k8s-worker02   <none>           <none>
nginx-unspecified-5589d85476-zrc7x   1/1     Running   0          13m     192.168.69.199   k8s-worker02   <none>           <none>

Only nginx-prd, which tolerates the taint, is scheduled on k8s-worker01.


The tolerationSeconds field

With NoExecute, you can set the tolerationSeconds field so that a Pod that tolerates the taint is allowed to remain on the node only for the specified number of seconds, after which it is evicted.

Let's check the behavior with the following manifest.

nginx-prd.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-prd
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app1
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
        - name: nginx
          image: nginx:latest
      tolerations:
      - key: "env"
        operator: "Equal"
        value: "prd"
        effect: "NoExecute"
        tolerationSeconds: 30

Apply the manifest and watch the Pods from another terminal.
Note that all the previous Pods were deleted beforehand.

$ kubectl apply -f nginx-prd.yaml
deployment.apps/nginx-prd created
$ kubectl get pod -o wide -w
NAME                        READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
nginx-prd-cc5f68778-hhmx7   0/1     Pending   0          0s    <none>   <none>   <none>           <none>
nginx-prd-cc5f68778-hhmx7   0/1     Pending   0          0s    <none>   k8s-worker01   <none>           <none>
nginx-prd-cc5f68778-r8l9p   0/1     Pending   0          0s    <none>   <none>         <none>           <none>
nginx-prd-cc5f68778-r8l9p   0/1     Pending   0          0s    <none>   k8s-worker02   <none>           <none>
nginx-prd-cc5f68778-hhmx7   0/1     ContainerCreating   0          0s    <none>   k8s-worker01   <none>           <none>
nginx-prd-cc5f68778-r8l9p   0/1     ContainerCreating   0          0s    <none>   k8s-worker02   <none>           <none>
nginx-prd-cc5f68778-r8l9p   0/1     ContainerCreating   0          1s    <none>   k8s-worker02   <none>           <none>
nginx-prd-cc5f68778-hhmx7   0/1     ContainerCreating   0          1s    <none>   k8s-worker01   <none>           <none>
nginx-prd-cc5f68778-r8l9p   1/1     Running             0          6s    192.168.69.195   k8s-worker02   <none>           <none>
nginx-prd-cc5f68778-hhmx7   1/1     Running             0          7s    192.168.79.110   k8s-worker01   <none>           <none>
nginx-prd-cc5f68778-hhmx7   1/1     Terminating         0          30s   192.168.79.110   k8s-worker01   <none>           <none>
nginx-prd-cc5f68778-l8zkk   0/1     Pending             0          0s    <none>           <none>         <none>           <none>
nginx-prd-cc5f68778-l8zkk   0/1     Pending             0          0s    <none>           k8s-worker01   <none>           <none>
nginx-prd-cc5f68778-l8zkk   0/1     ContainerCreating   0          0s    <none>           k8s-worker01   <none>           <none>
nginx-prd-cc5f68778-l8zkk   0/1     ContainerCreating   0          1s    <none>           k8s-worker01   <none>           <none>
nginx-prd-cc5f68778-hhmx7   0/1     Terminating         0          32s   <none>           k8s-worker01   <none>           <none>
nginx-prd-cc5f68778-hhmx7   0/1     Terminating         0          33s   <none>           k8s-worker01   <none>           <none>
nginx-prd-cc5f68778-hhmx7   0/1     Terminating         0          33s   <none>           k8s-worker01   <none>           <none>
nginx-prd-cc5f68778-l8zkk   1/1     Running             0          6s    192.168.79.126   k8s-worker01   <none>           <none>

You can see that the Pod scheduled on k8s-worker01 ran for exactly the specified 30 seconds and was then terminated.
With a Deployment this start-and-evict cycle repeats, because the ReplicaSet keeps creating replacement Pods. To have a Pod run for the specified time and then stay stopped, you would need to deploy a bare Pod instead.
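A minimal sketch of such a bare Pod (the Pod name is hypothetical). Since no controller recreates it, it is evicted once after tolerationSeconds expires and does not come back:

```yaml
# Hypothetical standalone Pod: once evicted after 30 seconds,
# no ReplicaSet recreates it, so it stays stopped.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-prd-pod
spec:
  containers:
    - name: nginx
      image: nginx:latest
  tolerations:
  - key: "env"
    operator: "Equal"
    value: "prd"
    effect: "NoExecute"
    tolerationSeconds: 30
```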

My impression is that this feature is hard to find a good use case for.

Summary

Compared with Node Affinity and friends from the previous post, this is a bit harder to grasp. I have not yet figured out when to use which.
