
[Oracle Cloud] Trying Out HPA on OKE

Introduction

In this post, we run the HPA (Horizontal Pod Autoscaler) on OKE, the managed Kubernetes service offered by Oracle Cloud.

Prerequisites

For HPA to work, the Metrics Server must be installed in the cluster. It can be installed with a single command:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml
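
To confirm the Metrics Server is up before creating the HPA, a quick sanity check is to verify that its Deployment is ready and that node metrics are being reported (components.yaml installs it into the kube-system namespace):

kubectl get deployment metrics-server -n kube-system
kubectl top nodes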

Deployment

Create a simple Deployment. It uses a container image that has the stress command installed, and the command field runs stress to generate CPU load (--cpu 4 spawns four busy-loop workers, so the container constantly pushes against its CPU limit).

cat <<'EOF' > ~/workdir/stress-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: stress
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  template:
    metadata:
      labels:
        app: stress
    spec:
      containers:
      - image: sugimount/toolbox:release-0.0.3
        name: toolbox
        command: ["stress", "--cpu", "4"]
        resources:
          limits:
            cpu: "500m"
EOF

Apply

kubectl apply -f ~/workdir/stress-deployment.yaml
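
Optionally, you can wait for the rollout to complete before checking the result:

kubectl rollout status deployment/stress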

Check the result: one Pod is running.

[opc@bastion ~]$ kubectl get deployment -o wide
NAME     READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                            SELECTOR
stress   1/1     1            1           52s   toolbox      sugimount/toolbox:release-0.0.3   app=stress
[opc@bastion ~]$
[opc@bastion ~]$ kubectl get pods -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
stress-574db78dfc-rjxx9   1/1     Running   0          60s   10.244.1.7   10.0.10.13   <none>           <none>
[opc@bastion ~]$

The kubectl top command shows the Pod's CPU usage, which is capped at the 500m resource limit.

[opc@bastion ~]$ kubectl top pods
NAME                      CPU(cores)   MEMORY(bytes)
stress-574db78dfc-rjxx9   499m         2Mi

Creating the HPA

Create the HPA resource, specifying the minimum and maximum number of Pods and the target CPU utilization.

cat <<'EOF' > ~/workdir/stress-hpa.yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-testa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: stress
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
EOF
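
As a side note, a roughly equivalent HPA can also be created imperatively with kubectl autoscale (the generated object is named after the Deployment rather than hpa-testa); the manifest form is used here because it is declarative and easy to version:

kubectl autoscale deployment stress --cpu-percent=50 --min=1 --max=5

Also note that autoscaling/v2beta2 has been removed in newer Kubernetes releases (v1.26+); on those clusters, use apiVersion: autoscaling/v2 with an otherwise identical spec.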

Apply

kubectl apply -f ~/workdir/stress-hpa.yaml

Check the result. Right after creation, TARGETS shows <unknown> because the first metrics have not been collected yet.

[opc@bastion ~]$ kubectl get hpa -o wide
NAME        REFERENCE           TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
hpa-testa   Deployment/stress   <unknown>/50%   1         5         0          3s

Since CPU utilization exceeds the target specified in the HPA, a scale-out is triggered automatically. Usage of 499m is roughly 100% of the 500m CPU request (when only a limit is set, the request defaults to the same value), well above the 50% target.

[opc@bastion ~]$ kubectl get hpa -o wide
NAME        REFERENCE           TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
hpa-testa   Deployment/stress   100%/50%   1         5         1          21s

The Deployment automatically scales to 2 Pods.

[opc@bastion ~]$ kubectl get hpa -o wide
NAME        REFERENCE           TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
hpa-testa   Deployment/stress   100%/50%   1         5         2          39s

It then scales to 4 Pods.

[opc@bastion ~]$ kubectl get hpa -o wide
NAME        REFERENCE           TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
hpa-testa   Deployment/stress   99%/50%   1         5         4          99s

And finally to 5 Pods, the configured maximum.

[opc@bastion ~]$ kubectl get hpa -o wide
NAME        REFERENCE           TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
hpa-testa   Deployment/stress   99%/50%   1         5         5          2m32s
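
Instead of running kubectl get hpa repeatedly, the -w flag streams updates as the HPA scales:

kubectl get hpa hpa-testa -w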

Checking the Deployment details, desired is now 5.

[opc@bastion ~]$ kubectl describe deployment stress
Name:                   stress
Namespace:              default
CreationTimestamp:      Sun, 19 Jul 2020 01:54:18 +0000
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 2
                        kubectl.kubernetes.io/last-applied-configuration:
                          {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"stress","namespace":"default"},"spec":{"replicas":1,"sele...
Selector:               app=stress
Replicas:               5 desired | 5 updated | 5 total | 5 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  0 max unavailable, 1 max surge
Pod Template:
  Labels:  app=stress
  Containers:
   toolbox:
    Image:      sugimount/toolbox:release-0.0.3
    Port:       <none>
    Host Port:  <none>
    Command:
      stress
      --cpu
      4
    Limits:
      cpu:        500m
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Progressing    True    NewReplicaSetAvailable
  Available      True    MinimumReplicasAvailable
OldReplicaSets:  <none>
NewReplicaSet:   stress-7964879478 (5/5 replicas created)
Events:
  Type    Reason             Age                 From                   Message
  ----    ------             ----                ----                   -------
  Normal  ScalingReplicaSet  39m                 deployment-controller  Scaled up replica set stress-574db78dfc to 1
  Normal  ScalingReplicaSet  20m                 deployment-controller  Scaled up replica set stress-574db78dfc to 2
  Normal  ScalingReplicaSet  14m                 deployment-controller  Scaled down replica set stress-574db78dfc to 1
  Normal  ScalingReplicaSet  14m                 deployment-controller  Scaled up replica set stress-7964879478 to 1
  Normal  ScalingReplicaSet  14m                 deployment-controller  Scaled down replica set stress-574db78dfc to 0
  Normal  ScalingReplicaSet  4m53s               deployment-controller  Scaled down replica set stress-7964879478 to 1
  Normal  ScalingReplicaSet  3m9s (x2 over 14m)  deployment-controller  Scaled up replica set stress-7964879478 to 2
  Normal  ScalingReplicaSet  2m8s (x2 over 13m)  deployment-controller  Scaled up replica set stress-7964879478 to 4
  Normal  ScalingReplicaSet  68s (x2 over 12m)   deployment-controller  Scaled up replica set stress-7964879478 to 5
[opc@bastion ~]$

There are now 5 Pods as well.

[opc@bastion ~]$ kubectl get pods -o wide
NAME                      READY   STATUS    RESTARTS   AGE     IP            NODE         NOMINATED NODE   READINESS GATES
stress-7964879478-b4jpw   1/1     Running   0          99s     10.244.0.12   10.0.10.12   <none>           <none>
stress-7964879478-jzgcq   1/1     Running   0          3m40s   10.244.0.10   10.0.10.12   <none>           <none>
stress-7964879478-p652z   1/1     Running   0          2m39s   10.244.0.11   10.0.10.12   <none>           <none>
stress-7964879478-whfj5   1/1     Running   0          15m     10.244.1.8    10.0.10.13   <none>           <none>
stress-7964879478-zlwt6   1/1     Running   0          2m39s   10.244.1.10   10.0.10.13   <none>           <none>
[opc@bastion ~]$
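
To clean up, delete the HPA and the Deployment. Since stress keeps the CPU pinned, the HPA will not scale back in on its own; if the load did drop, scale-in would happen gradually after the downscale stabilization window (5 minutes by default):

kubectl delete -f ~/workdir/stress-hpa.yaml
kubectl delete -f ~/workdir/stress-deployment.yaml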
