Trying out EKS on Fargate

Posted at 2023-01-06

Introduction

I'll build an EKS on Fargate cluster and see how it feels to use, compared with on-prem Kubernetes.

Building a console machine (EC2)

I'll use Amazon Linux (t2.micro).

Installing kubectl

$ curl -o kubectl https://s3.us-west-2.amazonaws.com/amazon-eks/1.23.7/2022-06-29/bin/linux/amd64/kubectl
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 44.4M  100 44.4M    0     0  4066k      0  0:00:11  0:00:11 --:--:-- 7521k
$ chmod +x ./kubectl
$ mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin
$ kubectl version --short --client
Client Version: v1.23.7-eks-4721010

Installing eksctl

$ curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
$ sudo mv /tmp/eksctl /usr/local/bin
$ eksctl version
0.124.0

Creating an IAM role

Screenshot 2023-01-05 13.57.43.png

Screenshot 2023-01-05 13.58.13.png

Screenshot 2023-01-05 13.58.39.png

Screenshot 2023-01-05 13.59.05.png

Assigning the IAM role to the EC2 instance

Screenshot 2023-01-05 14.01.22.png

Screenshot 2023-01-05 14.01.43.png

Creating the EKS cluster

$ eksctl create cluster --name eks-cluster-20230105 --region ap-northeast-1 --fargate
2023-01-05 05:02:55 [ℹ]  eksctl version 0.124.0
2023-01-05 05:02:55 [ℹ]  using region ap-northeast-1
2023-01-05 05:02:55 [ℹ]  setting availability zones to [ap-northeast-1a ap-northeast-1d ap-northeast-1c]
2023-01-05 05:02:55 [ℹ]  subnets for ap-northeast-1a - public:192.168.0.0/19 private:192.168.96.0/19
2023-01-05 05:02:55 [ℹ]  subnets for ap-northeast-1d - public:192.168.32.0/19 private:192.168.128.0/19
2023-01-05 05:02:55 [ℹ]  subnets for ap-northeast-1c - public:192.168.64.0/19 private:192.168.160.0/19
2023-01-05 05:02:55 [ℹ]  using Kubernetes version 1.23
2023-01-05 05:02:55 [ℹ]  creating EKS cluster "eks-cluster-20230105" in "ap-northeast-1" region with Fargate profile
2023-01-05 05:02:55 [ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=ap-northeast-1 --cluster=eks-cluster-20230105'
2023-01-05 05:02:55 [ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "eks-cluster-20230105" in "ap-northeast-1"
2023-01-05 05:02:55 [ℹ]  CloudWatch logging will not be enabled for cluster "eks-cluster-20230105" in "ap-northeast-1"
2023-01-05 05:02:55 [ℹ]  you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=ap-northeast-1 --cluster=eks-cluster-20230105'
2023-01-05 05:02:55 [ℹ]  
2 sequential tasks: { create cluster control plane "eks-cluster-20230105", 
    2 sequential sub-tasks: { 
        wait for control plane to become ready,
        create fargate profiles,
    } 
}
2023-01-05 05:02:55 [ℹ]  building cluster stack "eksctl-eks-cluster-20230105-cluster"
2023-01-05 05:02:55 [ℹ]  deploying stack "eksctl-eks-cluster-20230105-cluster"
2023-01-05 05:03:25 [ℹ]  waiting for CloudFormation stack "eksctl-eks-cluster-20230105-cluster"
2023-01-05 05:03:55 [ℹ]  waiting for CloudFormation stack "eksctl-eks-cluster-20230105-cluster"
2023-01-05 05:04:55 [ℹ]  waiting for CloudFormation stack "eksctl-eks-cluster-20230105-cluster"
2023-01-05 05:05:56 [ℹ]  waiting for CloudFormation stack "eksctl-eks-cluster-20230105-cluster"
2023-01-05 05:06:56 [ℹ]  waiting for CloudFormation stack "eksctl-eks-cluster-20230105-cluster"
2023-01-05 05:07:56 [ℹ]  waiting for CloudFormation stack "eksctl-eks-cluster-20230105-cluster"
2023-01-05 05:08:56 [ℹ]  waiting for CloudFormation stack "eksctl-eks-cluster-20230105-cluster"
2023-01-05 05:09:56 [ℹ]  waiting for CloudFormation stack "eksctl-eks-cluster-20230105-cluster"
2023-01-05 05:10:56 [ℹ]  waiting for CloudFormation stack "eksctl-eks-cluster-20230105-cluster"
2023-01-05 05:11:56 [ℹ]  waiting for CloudFormation stack "eksctl-eks-cluster-20230105-cluster"
2023-01-05 05:12:56 [ℹ]  waiting for CloudFormation stack "eksctl-eks-cluster-20230105-cluster"
2023-01-05 05:13:56 [ℹ]  waiting for CloudFormation stack "eksctl-eks-cluster-20230105-cluster"
2023-01-05 05:15:57 [ℹ]  creating Fargate profile "fp-default" on EKS cluster "eks-cluster-20230105"
2023-01-05 05:18:09 [ℹ]  created Fargate profile "fp-default" on EKS cluster "eks-cluster-20230105"
2023-01-05 05:18:39 [ℹ]  "coredns" is now schedulable onto Fargate
2023-01-05 05:19:42 [ℹ]  "coredns" is now scheduled onto Fargate
2023-01-05 05:19:42 [ℹ]  "coredns" pods are now scheduled onto Fargate
2023-01-05 05:19:42 [ℹ]  waiting for the control plane to become ready
2023-01-05 05:19:43 [!]  failed to determine authenticator version, leaving API version as default v1alpha1: failed to parse versions: unable to parse first version "": strconv.ParseUint: parsing "": invalid syntax
2023-01-05 05:19:43 [✔]  saved kubeconfig as "/home/ec2-user/.kube/config"
2023-01-05 05:19:43 [ℹ]  no tasks
2023-01-05 05:19:43 [✔]  all EKS cluster resources for "eks-cluster-20230105" have been created
2023-01-05 05:19:44 [ℹ]  kubectl command should work with "/home/ec2-user/.kube/config", try 'kubectl get nodes'
2023-01-05 05:19:44 [✔]  EKS cluster "eks-cluster-20230105" in "ap-northeast-1" region is ready

One "failed" message shows up, but the cluster seems to have been built successfully.

$ kubectl get node -o wide
NAME                                                         STATUS   ROLES    AGE     VERSION                INTERNAL-IP       EXTERNAL-IP   OS-IMAGE         KERNEL-VERSION                  CONTAINER-RUNTIME
fargate-ip-192-168-154-134.ap-northeast-1.compute.internal   Ready    <none>   4m18s   v1.23.14-eks-a1bebd3   192.168.154.134   <none>        Amazon Linux 2   4.14.294-220.533.amzn2.x86_64   containerd://1.6.6
fargate-ip-192-168-161-7.ap-northeast-1.compute.internal     Ready    <none>   4m17s   v1.23.14-eks-a1bebd3   192.168.161.7     <none>        Amazon Linux 2   4.14.294-220.533.amzn2.x86_64   containerd://1.6.6
$ kubectl get pod -A
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   coredns-6d97794bdd-cjfwn   1/1     Running   0          7m5s
kube-system   coredns-6d97794bdd-v9l54   1/1     Running   0          7m6s

v1.26 is the latest release at the time of writing, but the provisioned cluster is v1.23.

This can also be checked in the console.
It looks like the cluster can be upgraded to 1.24. Maybe the version needed to be specified at provisioning time, as in the sketch below?

Screenshot 2023-01-05 14.43.21.png
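
eksctl does have a --version flag for this, so pinning the cluster version at creation time should look roughly like the following (a sketch; the version value is just an example, not what I actually ran):

$ eksctl create cluster --name eks-cluster-20230105 --region ap-northeast-1 --fargate --version 1.24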

Checking the behavior

Pod

Create, delete, log in

First, let's create a Pod.

$ kubectl run nginx --image nginx
pod/nginx created
$ kubectl get pod
NAME    READY   STATUS    RESTARTS   AGE
nginx   0/1     Pending   0          13s
$ kubectl get pod
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          2m31s

It stayed in Pending for a while, but it was deployed.
A node has been added.

$ kubectl get node
NAME                                                         STATUS   ROLES    AGE     VERSION
fargate-ip-192-168-123-120.ap-northeast-1.compute.internal   Ready    <none>   2m58s   v1.23.14-eks-a1bebd3
fargate-ip-192-168-154-134.ap-northeast-1.compute.internal   Ready    <none>   11m     v1.23.14-eks-a1bebd3
fargate-ip-192-168-161-7.ap-northeast-1.compute.internal     Ready    <none>   11m     v1.23.14-eks-a1bebd3

We can also log in to the container.

$ kubectl exec -it nginx -- sh
# ls
bin  boot  dev  docker-entrypoint.d  docker-entrypoint.sh  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
# touch /home/test
# ls /home/test
/home/test
# 
# exit

We can check the logs, too.

$ kubectl logs nginx
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2023/01/05 05:39:56 [notice] 1#1: using the "epoll" event method
2023/01/05 05:39:56 [notice] 1#1: nginx/1.23.3
2023/01/05 05:39:56 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6) 
2023/01/05 05:39:56 [notice] 1#1: OS: Linux 4.14.294-220.533.amzn2.x86_64
2023/01/05 05:39:56 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1024:65535
2023/01/05 05:39:56 [notice] 1#1: start worker processes
2023/01/05 05:39:56 [notice] 1#1: start worker process 28
2023/01/05 05:39:56 [notice] 1#1: start worker process 29

Delete it.

$ kubectl delete pod nginx
pod "nginx" deleted
$ kubectl get node
NAME                                                         STATUS   ROLES    AGE   VERSION
fargate-ip-192-168-154-134.ap-northeast-1.compute.internal   Ready    <none>   13m   v1.23.14-eks-a1bebd3
fargate-ip-192-168-161-7.ap-northeast-1.compute.internal     Ready    <none>   13m   v1.23.14-eks-a1bebd3
$ kubectl get pod
No resources found in default namespace.

The node has been removed as well. The deletion was reflected immediately.

Multiple containers

Let's create a Pod that has multiple containers.

pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: sample-pod1
spec:
  containers:
    - name: nginx
      image: nginx:latest
    - name: redis
      image: redis:latest
$ kubectl apply -f pod.yaml
pod/sample-pod1 created
$ kubectl get pod
NAME          READY   STATUS    RESTARTS   AGE
sample-pod1   2/2     Running   0          76s
$ kubectl get node
NAME                                                         STATUS   ROLES    AGE   VERSION
fargate-ip-192-168-117-49.ap-northeast-1.compute.internal    Ready    <none>   31s   v1.23.14-eks-a1bebd3
fargate-ip-192-168-154-134.ap-northeast-1.compute.internal   Ready    <none>   34m   v1.23.14-eks-a1bebd3
fargate-ip-192-168-161-7.ap-northeast-1.compute.internal     Ready    <none>   34m   v1.23.14-eks-a1bebd3

Even with multiple containers, it's still one Pod per node.

We can log in to each container.

$ kubectl exec -it sample-pod1 -c nginx -- sh
# ls
bin  boot  dev  docker-entrypoint.d  docker-entrypoint.sh  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
# exit
$ kubectl exec -it sample-pod1 -c redis -- sh
# ls
# exit

Node

We can view the node details.

$ kubectl describe node fargate-ip-192-168-154-134.ap-northeast-1.compute.internal
Name:               fargate-ip-192-168-154-134.ap-northeast-1.compute.internal
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    eks.amazonaws.com/compute-type=fargate
                    failure-domain.beta.kubernetes.io/region=ap-northeast-1
                    failure-domain.beta.kubernetes.io/zone=ap-northeast-1d
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=ip-192-168-154-134.ap-northeast-1.compute.internal
                    kubernetes.io/os=linux
                    topology.kubernetes.io/region=ap-northeast-1
                    topology.kubernetes.io/zone=ap-northeast-1d
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Thu, 05 Jan 2023 05:19:21 +0000
Taints:             eks.amazonaws.com/compute-type=fargate:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  fargate-ip-192-168-154-134.ap-northeast-1.compute.internal
  AcquireTime:     <unset>
  RenewTime:       Thu, 05 Jan 2023 05:36:42 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Thu, 05 Jan 2023 05:35:10 +0000   Thu, 05 Jan 2023 05:19:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 05 Jan 2023 05:35:10 +0000   Thu, 05 Jan 2023 05:19:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 05 Jan 2023 05:35:10 +0000   Thu, 05 Jan 2023 05:19:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Thu, 05 Jan 2023 05:35:10 +0000   Thu, 05 Jan 2023 05:19:32 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:   192.168.154.134
  InternalDNS:  ip-192-168-154-134.ap-northeast-1.compute.internal
  Hostname:     ip-192-168-154-134.ap-northeast-1.compute.internal
Capacity:
  attachable-volumes-aws-ebs:  39
  cpu:                         2
  ephemeral-storage:           30787492Ki
  hugepages-1Gi:               0
  hugepages-2Mi:               0
  memory:                      3977000Ki
  pods:                        1
Allocatable:
  attachable-volumes-aws-ebs:  39
  cpu:                         2
  ephemeral-storage:           28373752581
  hugepages-1Gi:               0
  hugepages-2Mi:               0
  memory:                      3874600Ki
  pods:                        1
System Info:
  Machine ID:                 
  System UUID:                EC22711B-D268-6BF7-7F44-32BC2C4216C5
  Boot ID:                    f7ff6bfe-c595-4566-a924-bd8e4f767796
  Kernel Version:             4.14.294-220.533.amzn2.x86_64
  OS Image:                   Amazon Linux 2
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.6.6
  Kubelet Version:            v1.23.14-eks-a1bebd3
  Kube-Proxy Version:         v1.23.14-eks-a1bebd3
ProviderID:                   aws:///ap-northeast-1d/bdc7de8c30-814aa4356ea74a07bbe5ca8f9c4ce8f6/fargate-ip-192-168-154-134.ap-northeast-1.compute.internal
Non-terminated Pods:          (1 in total)
  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
  kube-system                 coredns-6d97794bdd-v9l54    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     18m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource                    Requests   Limits
  --------                    --------   ------
  cpu                         100m (5%)  0 (0%)
  memory                      70Mi (1%)  170Mi (4%)
  ephemeral-storage           0 (0%)     0 (0%)
  hugepages-1Gi               0 (0%)     0 (0%)
  hugepages-2Mi               0 (0%)     0 (0%)
  attachable-volumes-aws-ebs  0          0
Events:
  Type     Reason                   Age                From     Message
  ----     ------                   ----               ----     -------
  Normal   Starting                 17m                kubelet  Starting kubelet.
  Warning  InvalidDiskCapacity      17m                kubelet  invalid capacity 0 on image filesystem
  Normal   NodeHasSufficientMemory  17m (x2 over 17m)  kubelet  Node fargate-ip-192-168-154-134.ap-northeast-1.compute.internal status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    17m (x2 over 17m)  kubelet  Node fargate-ip-192-168-154-134.ap-northeast-1.compute.internal status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     17m (x2 over 17m)  kubelet  Node fargate-ip-192-168-154-134.ap-northeast-1.compute.internal status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  17m                kubelet  Updated Node Allocatable limit across pods
  Normal   NodeReady                17m                kubelet  Node fargate-ip-192-168-154-134.ap-northeast-1.compute.internal status is now: NodeReady
  • OS
    • Amazon Linux
  • Container Runtime
    • Containerd
  • CPU
    • 2 vCPU
  • Storage
    • about 30 GB
  • Memory
    • about 4 GB

So that's the node. The Pod capacity is also set to 1.

A taint is also set:
eks.amazonaws.com/compute-type=fargate:NoSchedule

$ kubectl taint node fargate-ip-192-168-118-55.ap-northeast-1.compute.internal eks.amazonaws.com/compute-type=fargate:NoSchedule-
node/fargate-ip-192-168-118-55.ap-northeast-1.compute.internal untainted
$ kubectl describe node fargate-ip-192-168-118-55.ap-northeast-1.compute.internal | grep Taint
Taints:             <none>

I was able to remove it...
If we could change the Pod capacity, it looks like we could run multiple Pods on one node, but from what I found that requires configuring the kubelet, so it can't be changed on Fargate.
Just in case, I also tried deploying a Pod with nodeName specified (roughly like the sketch below), but it failed with UnexpectedAdmissionError.
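
For reference, the manifest for that nodeName experiment would look roughly like this (a sketch; the Pod name is made up and the node name is just one of the existing Fargate nodes):

nodename-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nodename-test
spec:
  # Pin the Pod to an existing Fargate node, bypassing the scheduler.
  # On Fargate the node's kubelet rejects this at admission time (UnexpectedAdmissionError).
  nodeName: fargate-ip-192-168-154-134.ap-northeast-1.compute.internal
  containers:
    - name: nginx
      image: nginx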

hostPath

Let's try mounting the worker node's root directory into a container with hostPath.

hostpath.yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostpath
spec:
  containers:
  - image: nginx
    name: hostpath
    volumeMounts:
    - mountPath: /test
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /
      type: Directory
$ kubectl apply -f hostpath.yaml 
pod/hostpath created
$ kubectl get pod
NAME       READY   STATUS    RESTARTS   AGE
hostpath   0/1     Pending   0          2m10s

It stays stuck in Pending, so let's check the details.

$ kubectl describe pod hostpath
Name:                 hostpath
Namespace:            default
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 <none>
Labels:               eks.amazonaws.com/fargate-profile=fp-default
Annotations:          kubernetes.io/psp: eks.privileged
Status:               Pending
・・・
Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  2m20s  fargate-scheduler  Pod not supported on Fargate: volumes not supported: test-volume is of an unsupported volume Type

So hostPath isn't supported.
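
If a Pod just needs scratch space, an emptyDir volume is the kind of thing that does work on Fargate (persistent storage would go through EFS instead). A minimal sketch, not something I ran here:

emptydir.yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-sample
spec:
  containers:
  - image: nginx
    name: emptydir-sample
    volumeMounts:
    - mountPath: /test
      name: test-volume
  volumes:
  - name: test-volume
    # Ephemeral per-Pod storage; its contents are lost when the Pod is deleted.
    emptyDir: {}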

Deployment / AutoScale

Installing the metrics server

$ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
$ kubectl get deployment metrics-server -n kube-system
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
metrics-server   1/1     1            1           97s
$ kubectl get pod -A
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-6d97794bdd-cjfwn          1/1     Running   0          69m
kube-system   coredns-6d97794bdd-v9l54          1/1     Running   0          69m
kube-system   metrics-server-599b86cfbf-tsrhr   1/1     Running   0          92s
$ kubectl get node
NAME                                                         STATUS   ROLES    AGE   VERSION
fargate-ip-192-168-154-134.ap-northeast-1.compute.internal   Ready    <none>   67m   v1.23.14-eks-a1bebd3
fargate-ip-192-168-161-7.ap-northeast-1.compute.internal     Ready    <none>   67m   v1.23.14-eks-a1bebd3
fargate-ip-192-168-177-225.ap-northeast-1.compute.internal   Ready    <none>   17s   v1.23.14-eks-a1bebd3

Even after waiting a while, the metrics for the node where metrics-server is deployed stay <unknown>.

$ kubectl top node
NAME                                                         CPU(cores)   CPU%        MEMORY(bytes)   MEMORY%     
fargate-ip-192-168-154-134.ap-northeast-1.compute.internal   15m          0%          111Mi           2%          
fargate-ip-192-168-161-7.ap-northeast-1.compute.internal     14m          0%          110Mi           2%          
fargate-ip-192-168-177-225.ap-northeast-1.compute.internal   <unknown>    <unknown>   <unknown>       <unknown>  

Deploying a sample app (Deployment)

I'll follow the Kubernetes documentation for this check.

$ kubectl apply -f https://k8s.io/examples/application/php-apache.yaml
deployment.apps/php-apache created
service/php-apache created
$ kubectl get deployment,pod,svc
NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/php-apache   1/1     1            1           2m36s

NAME                              READY   STATUS    RESTARTS   AGE
pod/php-apache-7d665c4ddf-69pcb   1/1     Running   0          2m36s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.100.0.1      <none>        443/TCP   102m
service/php-apache   ClusterIP   10.100.60.125   <none>        80/TCP    2m36s

As expected, the metrics for the node running metrics-server are still <unknown>.

$ kubectl get node
NAME                                                         STATUS   ROLES    AGE     VERSION
fargate-ip-192-168-112-5.ap-northeast-1.compute.internal     Ready    <none>   2m17s   v1.23.14-eks-a1bebd3
fargate-ip-192-168-154-134.ap-northeast-1.compute.internal   Ready    <none>   92m     v1.23.14-eks-a1bebd3
fargate-ip-192-168-161-7.ap-northeast-1.compute.internal     Ready    <none>   92m     v1.23.14-eks-a1bebd3
fargate-ip-192-168-177-225.ap-northeast-1.compute.internal   Ready    <none>   25m     v1.23.14-eks-a1bebd3
$ kubectl top node
NAME                                                         CPU(cores)   CPU%        MEMORY(bytes)   MEMORY%     
fargate-ip-192-168-112-5.ap-northeast-1.compute.internal     14m          0%          165Mi           4%          
fargate-ip-192-168-154-134.ap-northeast-1.compute.internal   14m          0%          111Mi           2%          
fargate-ip-192-168-161-7.ap-northeast-1.compute.internal     14m          0%          112Mi           2%          
fargate-ip-192-168-177-225.ap-northeast-1.compute.internal   <unknown>    <unknown>   <unknown>       <unknown> 

Creating the HPA

To make it scale quickly, I'll set the CPU utilization threshold to 25%.

$ kubectl autoscale deployment php-apache --cpu-percent=25 --min=1 --max=5
horizontalpodautoscaler.autoscaling/php-apache autoscaled
$ kubectl get hpa
NAME         REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   0%/25%    1         5         1          57s
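
For the record, the kubectl autoscale command above should be equivalent to applying a manifest along these lines (a sketch using the autoscaling/v2 API, which is available on v1.23):

hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        # Scale out once average CPU utilization across the Pods exceeds 25%.
        averageUtilization: 25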

Testing

Open another terminal, log in to the EC2 instance, and send queries to the Deployment to generate load.

$ kubectl run -i --tty load-generator --rm --image=busybox --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://php-apache; done"

If you don't see a command prompt, try pressing enter.

OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!・・・

Watching the HPA, the Deployment's replica count increases along with the load.

$ kubectl get hpa -w
NAME         REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   0%/25%    1         5         1          5m8s
php-apache   Deployment/php-apache   0%/25%    1         5         1          5m16s
php-apache   Deployment/php-apache   97%/25%   1         5         1          5m31s
php-apache   Deployment/php-apache   120%/25%   1         5         4          5m46s
php-apache   Deployment/php-apache   119%/25%   1         5         5          6m1s

The Pods have increased as well. load-generator is the Pod sending load from the other terminal.

$ kubectl get deployment,pod
NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/php-apache   5/5     5            5           16m

NAME                              READY   STATUS    RESTARTS   AGE
pod/load-generator                1/1     Running   0          3m3s
pod/php-apache-7d665c4ddf-69pcb   1/1     Running   0          16m
pod/php-apache-7d665c4ddf-cn2tk   1/1     Running   0          97s
pod/php-apache-7d665c4ddf-fqm4h   1/1     Running   0          97s
pod/php-apache-7d665c4ddf-gk4pk   1/1     Running   0          82s
pod/php-apache-7d665c4ddf-jpsnx   1/1     Running   0          97s

The nodes have increased along with the Pods.

$ kubectl get node
NAME                                                         STATUS   ROLES    AGE     VERSION
fargate-ip-192-168-104-131.ap-northeast-1.compute.internal   Ready    <none>   81s     v1.23.14-eks-a1bebd3
fargate-ip-192-168-112-5.ap-northeast-1.compute.internal     Ready    <none>   16m     v1.23.14-eks-a1bebd3
fargate-ip-192-168-116-155.ap-northeast-1.compute.internal   Ready    <none>   66s     v1.23.14-eks-a1bebd3
fargate-ip-192-168-136-4.ap-northeast-1.compute.internal     Ready    <none>   81s     v1.23.14-eks-a1bebd3
fargate-ip-192-168-154-134.ap-northeast-1.compute.internal   Ready    <none>   106m    v1.23.14-eks-a1bebd3
fargate-ip-192-168-154-222.ap-northeast-1.compute.internal   Ready    <none>   2m46s   v1.23.14-eks-a1bebd3
fargate-ip-192-168-161-7.ap-northeast-1.compute.internal     Ready    <none>   106m    v1.23.14-eks-a1bebd3
fargate-ip-192-168-177-225.ap-northeast-1.compute.internal   Ready    <none>   39m     v1.23.14-eks-a1bebd3
fargate-ip-192-168-190-142.ap-northeast-1.compute.internal   Ready    <none>   80s     v1.23.14-eks-a1bebd3

I didn't measure exact times, but the gap between the HPA raising the replica count and the Pods actually appearing felt longer than on a regular Kubernetes cluster.
That's because of the time it takes to provision the nodes.

Stop the load that was running in the other terminal.

OK!OK!OK!OK!^C
E0105 07:08:07.459556    1013 v2.go:105] EOF
pod "load-generator" deleted
pod default/load-generator terminated (Error)

After waiting a while, the Pods return to the minimum and the nodes are removed.

$ kubectl get hpa
NAME         REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   0%/25%    1         5         1          19m
$ kubectl get pod
NAME                          READY   STATUS    RESTARTS   AGE
php-apache-7d665c4ddf-69pcb   1/1     Running   0          25m
$ kubectl get node
NAME                                                         STATUS   ROLES    AGE    VERSION
fargate-ip-192-168-112-5.ap-northeast-1.compute.internal     Ready    <none>   28m    v1.23.14-eks-a1bebd3
fargate-ip-192-168-154-134.ap-northeast-1.compute.internal   Ready    <none>   118m   v1.23.14-eks-a1bebd3
fargate-ip-192-168-161-7.ap-northeast-1.compute.internal     Ready    <none>   118m   v1.23.14-eks-a1bebd3
fargate-ip-192-168-177-225.ap-northeast-1.compute.internal   Ready    <none>   50m    v1.23.14-eks-a1bebd3

NodePort

Some articles say NodePort can't be used on Fargate, but the documentation doesn't say it's unsupported, so let's try it.

$ kubectl get pod -o wide
NAME    READY   STATUS    RESTARTS   AGE     IP               NODE                                                        NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          2m57s   192.168.119.40   fargate-ip-192-168-119-40.ap-northeast-1.compute.internal   <none>           <none>
$ kubectl expose pod nginx --port=8080 --target-port=80 --name=nodeport --type=NodePort
service/nodeport exposed
$ kubectl get svc 
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.100.0.1       <none>        443/TCP          8h
nodeport     NodePort    10.100.244.250   <none>        8080:31668/TCP   8s
$ kubectl describe svc nodeport
Name:                     nodeport
Namespace:                default
Labels:                   eks.amazonaws.com/fargate-profile=fp-default
                          run=nginx
Annotations:              <none>
Selector:                 eks.amazonaws.com/fargate-profile=fp-default,run=nginx
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.100.244.250
IPs:                      10.100.244.250
Port:                     <unset>  8080/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  31668/TCP
Endpoints:                192.168.119.40:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

The endpoint is set, so the NodePort itself seems to be configured, but Fargate nodes only have private addresses, so perhaps it's reachable only from inside the VPC? A rough check is sketched below.
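
I didn't verify this, but from another instance in the same VPC (with the security groups allowing it), the check would presumably be a curl against the Fargate node's private IP and the allocated node port shown above:

$ curl http://192.168.119.40:31668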

Load Balancer

I'll set it up by following this guide.

Installing the AWS Load Balancer Controller add-on

The add-on is required, so I'll install it by following this guide.

Create the IAM policy.

$ curl -o iam_policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.4.4/docs/install/iam_policy.json
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  7617  100  7617    0     0  28686      0 --:--:-- --:--:-- --:--:-- 28743
$ ls -l
total 8
-rw-rw-r-- 1 ec2-user ec2-user 7617 Jan  6 00:29 iam_policy.json
$ aws iam create-policy \
>     --policy-name AWSLoadBalancerControllerIAMPolicy \
>     --policy-document file://iam_policy.json
{
    "Policy": {
        "PolicyName": "AWSLoadBalancerControllerIAMPolicy", 
        "PermissionsBoundaryUsageCount": 0, 
        "CreateDate": "2023-01-06T00:32:06Z", 
        "AttachmentCount": 0, 
        "IsAttachable": true, 
        "PolicyId": "ANPATPBGQF25MU5MG7VAR", 
        "DefaultVersionId": "v1", 
        "Path": "/", 
        "Arn": "arn:aws:iam::238451437242:policy/AWSLoadBalancerControllerIAMPolicy", 
        "UpdateDate": "2023-01-06T00:32:06Z"
    }
}

Create the IAM role.

$ eksctl create iamserviceaccount \
>  --cluster=eks-cluster-20230105 \
>  --namespace=kube-system \
>  --name=aws-load-balancer-controller \
>  --role-name "AmazonEKSLoadBalancerControllerRole" \
>  --attach-policy-arn=arn:aws:iam::<account-ID>:policy/AWSLoadBalancerControllerIAMPolicy \
>  --approve
2023-01-06 01:45:08 [!]  no IAM OIDC provider associated with cluster, try 'eksctl utils associate-iam-oidc-provider --region=ap-northeast-1 --cluster=eks-cluster-20230105'
Error: unable to create iamserviceaccount(s) without IAM OIDC provider enabled

An error.
Let's run the command the message suggests.

$ eksctl utils associate-iam-oidc-provider --region=ap-northeast-1 --cluster=eks-cluster-20230105
2023-01-06 01:46:03 [ℹ]  (plan) would create IAM Open ID Connect provider for cluster "eks-cluster-20230105" in "ap-northeast-1"
2023-01-06 01:46:03 [!]  no changes were applied, run again with '--approve' to apply the changes

Looks like the --approve option is needed.

$ eksctl utils associate-iam-oidc-provider --region=ap-northeast-1 --cluster=eks-cluster-20230105 --approve
2023-01-06 01:46:27 [ℹ]  will create IAM Open ID Connect provider for cluster "eks-cluster-20230105" in "ap-northeast-1"
2023-01-06 01:46:28 [✔]  created IAM Open ID Connect provider for cluster "eks-cluster-20230105" in "ap-northeast-1"

Success.
Let's create the IAM role again.

$ eksctl create iamserviceaccount \
>  --cluster=eks-cluster-20230105 \
>  --namespace=kube-system \
>  --name=aws-load-balancer-controller \
>  --role-name "AmazonEKSLoadBalancerControllerRole" \
>  --attach-policy-arn=arn:aws:iam::<account-ID>:policy/AWSLoadBalancerControllerIAMPolicy \
>  --approve
2023-01-06 01:46:57 [ℹ]  1 iamserviceaccount (kube-system/aws-load-balancer-controller) was included (based on the include/exclude rules)
2023-01-06 01:46:57 [!]  serviceaccounts that exist in Kubernetes will be excluded, use --override-existing-serviceaccounts to override
2023-01-06 01:46:57 [ℹ]  1 task: { 
    2 sequential sub-tasks: { 
        create IAM role for serviceaccount "kube-system/aws-load-balancer-controller",
        create serviceaccount "kube-system/aws-load-balancer-controller",
    } }2023-01-06 01:46:57 [ℹ]  building iamserviceaccount stack "eksctl-eks-cluster-20230105-addon-iamserviceaccount-kube-system-aws-load-balancer-controller"
2023-01-06 01:46:58 [ℹ]  deploying stack "eksctl-eks-cluster-20230105-addon-iamserviceaccount-kube-system-aws-load-balancer-controller"
2023-01-06 01:46:58 [ℹ]  waiting for CloudFormation stack "eksctl-eks-cluster-20230105-addon-iamserviceaccount-kube-system-aws-load-balancer-controller"
2023-01-06 01:47:28 [ℹ]  waiting for CloudFormation stack "eksctl-eks-cluster-20230105-addon-iamserviceaccount-kube-system-aws-load-balancer-controller"
2023-01-06 01:48:12 [ℹ]  waiting for CloudFormation stack "eksctl-eks-cluster-20230105-addon-iamserviceaccount-kube-system-aws-load-balancer-controller"
2023-01-06 01:48:12 [ℹ]  created serviceaccount "kube-system/aws-load-balancer-controller"

The service account has been created.

$ kubectl get sa -n kube-system |grep aws
aws-cloud-provider                   1         20h
aws-load-balancer-controller         1         6m20s
aws-node                             1         20h

There are two ways to install the AWS Load Balancer Controller: with Helm or by applying manifests.
I found information that on Fargate only the Helm approach works well, so first let's install Helm.

$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
$ chmod 700 get_helm.sh
$ ./get_helm.sh
[WARNING] Could not find git. It is required for plugin installation.
Downloading https://get.helm.sh/helm-v3.10.3-linux-amd64.tar.gz
Verifying checksum... Done.
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm
$ helm version
version.BuildInfo{Version:"v3.10.3", GitCommit:"835b7334cfe2e5e27870ab3ed4135f136eecc704", GitTreeState:"clean", GoVersion:"go1.18.9"}
$ helm repo add eks https://aws.github.io/eks-charts
"eks" has been added to your repositories
$ helm repo list
NAME    URL                             
eks     https://aws.github.io/eks-charts
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "eks" chart repository
Update Complete. ⎈Happy Helming!⎈

The CRDs are apparently required when upgrading, so let's install them.
(Maybe not needed for a fresh install like this one?)

$ kubectl apply -k "github.com/aws/eks-charts/stable/aws-load-balancer-controller//crds?ref=master"
customresourcedefinition.apiextensions.k8s.io/ingressclassparams.elbv2.k8s.aws created
customresourcedefinition.apiextensions.k8s.io/targetgroupbindings.elbv2.k8s.aws created

Install the AWS Load Balancer Controller.

$ helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
>   -n kube-system \
>   --set clusterName=eks-cluster-20230105 \
>   --set serviceAccount.create=false \
>   --set serviceAccount.name=aws-load-balancer-controller \
>   --set region=ap-northeast-1 \
>   --set vpcId=vpc-05000000000000 \
>   --set image.repository=602401143452.dkr.ecr.ap-northeast-1.amazonaws.com/amazon/aws-load-balancer-controller
Error: INSTALLATION FAILED: Kubernetes cluster unreachable: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"

Failed.
Searching for the error message, I found reports that the same error appears on Kubernetes v1.24 and later.
This cluster is v1.23, so I thought it wouldn't apply, but let's try the suggested fix anyway and upgrade the AWS CLI.

$ aws --version
aws-cli/1.18.147 Python/2.7.18 Linux/5.10.157-139.675.amzn2.x86_64 botocore/1.18.6
$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 45.9M  100 45.9M    0     0  47.6M      0 --:--:-- --:--:-- --:--:-- 47.6M
$ unzip -q awscliv2.zip 
$ sudo ./aws/install 
You can now run: /usr/local/bin/aws --version
$ aws --version
aws-cli/2.9.13 Python/3.9.11 Linux/5.10.157-139.675.amzn2.x86_64 exe/x86_64.amzn.2 prompt/off

Update the kubeconfig.

$ aws eks update-kubeconfig --region ap-northeast-1 --name eks-cluster-20230105
Added new context arn:aws:eks:ap-northeast-1:238451437242:cluster/eks-cluster-20230105 to /home/ec2-user/.kube/config
$ kubectl config get-contexts
CURRENT   NAME                                                                   CLUSTER                                                                AUTHINFO                                                               NAMESPACE
*         arn:aws:eks:ap-northeast-1:238451437242:cluster/eks-cluster-20230105   arn:aws:eks:ap-northeast-1:238451437242:cluster/eks-cluster-20230105   arn:aws:eks:ap-northeast-1:238451437242:cluster/eks-cluster-20230105   
          i-0ec50ad83ca291d20@eks-cluster-20230105.ap-northeast-1.eksctl.io      eks-cluster-20230105.ap-northeast-1.eksctl.io                          i-0ec50ad83ca291d20@eks-cluster-20230105.ap-northeast-1.eksctl.io      

Install it again.

$ helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
>   -n kube-system \
>   --set clusterName=eks-cluster-20230105 \
>   --set serviceAccount.create=false \
>   --set serviceAccount.name=aws-load-balancer-controller \
>   --set region=ap-northeast-1 \
>   --set vpcId=vpc-05000000000000 \
>   --set image.repository=602401143452.dkr.ecr.ap-northeast-1.amazonaws.com/amazon/aws-load-balancer-controller
NAME: aws-load-balancer-controller
LAST DEPLOYED: Fri Jan  6 03:59:06 2023
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
AWS Load Balancer controller installed!

This time it succeeded.
Let's check.

$ kubectl get deployment,pod -n kube-system
NAME                                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/aws-load-balancer-controller   2/2     2            2           9m24s
deployment.apps/coredns                        2/2     2            2           22h
deployment.apps/metrics-server                 1/1     1            1           21h

NAME                                                READY   STATUS    RESTARTS   AGE
pod/aws-load-balancer-controller-5dbc4cd485-7mf4g   1/1     Running   0          9m23s
pod/aws-load-balancer-controller-5dbc4cd485-pwdvj   1/1     Running   0          9m23s
pod/coredns-6d97794bdd-cjfwn                        1/1     Running   0          22h
pod/coredns-6d97794bdd-v9l54                        1/1     Running   0          22h
pod/metrics-server-599b86cfbf-tsrhr                 1/1     Running   0          21h

Nodes have been added to match the Pods.

$ kubectl get node
NAME                                                         STATUS   ROLES    AGE     VERSION
fargate-ip-192-168-113-56.ap-northeast-1.compute.internal    Ready    <none>   7m28s   v1.23.14-eks-a1bebd3
fargate-ip-192-168-154-134.ap-northeast-1.compute.internal   Ready    <none>   22h     v1.23.14-eks-a1bebd3
fargate-ip-192-168-161-7.ap-northeast-1.compute.internal     Ready    <none>   22h     v1.23.14-eks-a1bebd3
fargate-ip-192-168-177-225.ap-northeast-1.compute.internal   Ready    <none>   21h     v1.23.14-eks-a1bebd3
fargate-ip-192-168-96-66.ap-northeast-1.compute.internal     Ready    <none>   7m26s   v1.23.14-eks-a1bebd3

Deploying a sample app and a Load Balancer

Now we can finally try out the Load Balancer.

  • Creating the Deployment
$ kubectl create deployment nginx --image nginx --replicas=2
deployment.apps/nginx created
$ kubectl get pod -L app
NAME                     READY   STATUS    RESTARTS   AGE     APP
nginx-85b98978db-5x9zd   1/1     Running   0          2m14s   nginx
nginx-85b98978db-lcmtr   1/1     Running   0          2m14s   nginx
  • Deploying the LoadBalancer

Create it with the following manifest.
Apart from specifying the annotations, it's the same as usual.

lb.yaml
apiVersion: v1
kind: Service
metadata:
  name: nlb-sample-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: LoadBalancer
  selector:
    app: nginx
$ kubectl apply -f lb.yaml 
service/nlb-sample-service created
$ kubectl get svc
NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP                                                                         PORT(S)        AGE
kubernetes           ClusterIP      10.100.0.1      <none>                                                                              443/TCP        23h
nlb-sample-service   LoadBalancer   10.100.183.90   k8s-default-nlbsampl-cc96d8c0d1-a2e7a819a21f287b.elb.ap-northeast-1.amazonaws.com   80:32093/TCP   11s
$ kubectl describe svc nlb-sample-service
Name:                     nlb-sample-service
Namespace:                default
Labels:                   <none>
Annotations:              service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
                          service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
                          service.beta.kubernetes.io/aws-load-balancer-type: external
Selector:                 app=nginx
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.100.183.90
IPs:                      10.100.183.90
LoadBalancer Ingress:     k8s-default-nlbsampl-cc96d8c0d1-a2e7a819a21f287b.elb.ap-northeast-1.amazonaws.com
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  32093/TCP
Endpoints:                192.168.154.197:80,192.168.154.236:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason                  Age   From                Message
  ----    ------                  ----  ----                -------
  Normal  EnsuringLoadBalancer    69s   service-controller  Ensuring load balancer
  Normal  SuccessfullyReconciled  66s   service             Successfully reconciled

You can see in the console that the load balancer has been deployed.

Screenshot 2023-01-06 13.39.46.png

Check connectivity against the EXTERNAL-IP URL.

$ curl k8s-default-nlbsampl-cc96d8c0d1-a2e7a819a21f287b.elb.ap-northeast-1.amazonaws.com
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Deleting the LoadBalancer Service also deletes the AWS load balancer.

$ kubectl delete -f lb.yaml 
service "nlb-sample-service" deleted

namespace

Create a namespace.

$ kubectl create ns red
namespace/red created
$ kubectl get ns |grep red
red               Active   97s

Create a Pod in that namespace.

$ kubectl run nginx --image nginx -n red
pod/nginx created
$ kubectl describe pod nginx -n red
Name:         nginx
Namespace:    red
・・・
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  13s   default-scheduler  0/5 nodes are available: 5 node(s) had taint {eks.amazonaws.com/compute-type: fargate}, that the pod didn't tolerate.

Because of the taint there were no schedulable nodes, so the Pod couldn't be deployed.
It seems that, due to the Fargate profiles, only the default and kube-system namespaces can be used by default.

Screenshot 2023-01-06 13.52.38.png

Add a new profile.

$ eksctl create fargateprofile --cluster eks-cluster-20230105 --name red --namespace red 
2023-01-06 04:57:46 [ℹ]  creating Fargate profile "red" on EKS cluster "eks-cluster-20230105"
2023-01-06 04:58:04 [ℹ]  created Fargate profile "red" on EKS cluster "eks-cluster-20230105"

Create a Pod in the namespace again.

$ kubectl run nginx --image nginx -n red
pod/nginx created
$ kubectl get pod -n red
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          2m19s

Now we can deploy to the added namespace as well.

Version upgrade

Following the documentation, I'll upgrade from v1.23 to v1.24.

Check the current version

$ kubectl version --short
Client Version: v1.23.7-eks-4721010
Server Version: v1.23.14-eks-ffeb93d
$ kubectl get node
NAME                                                         STATUS   ROLES    AGE     VERSION
fargate-ip-192-168-113-56.ap-northeast-1.compute.internal    Ready    <none>   3h55m   v1.23.14-eks-a1bebd3
fargate-ip-192-168-148-117.ap-northeast-1.compute.internal   Ready    <none>   33s     v1.23.14-eks-a1bebd3
fargate-ip-192-168-154-134.ap-northeast-1.compute.internal   Ready    <none>   26h     v1.23.14-eks-a1bebd3
fargate-ip-192-168-161-7.ap-northeast-1.compute.internal     Ready    <none>   26h     v1.23.14-eks-a1bebd3
fargate-ip-192-168-177-225.ap-northeast-1.compute.internal   Ready    <none>   25h     v1.23.14-eks-a1bebd3
fargate-ip-192-168-96-66.ap-northeast-1.compute.internal     Ready    <none>   3h55m   v1.23.14-eks-a1bebd3

Run the following command and confirm that no errors are displayed.

$ kubectl get psp eks.privileged
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
NAME             PRIV   CAPS   SELINUX    RUNASUSER   FSGROUP    SUPGROUP   READONLYROOTFS   VOLUMES
eks.privileged   true   *      RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            *

Confirm that eksctl is 0.117 or later.

$ eksctl version
0.124.0

Check the upgradable versions here.
v1.24 isn't listed, so is this OK? The console says the cluster can be upgraded to 1.24.
(It would be nice if there were something like kubeadm upgrade plan.)
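
Incidentally, eksctl itself gets fairly close to that: as far as I know, running eksctl upgrade cluster without --approve only prints the planned change and applies nothing, so it can serve as a dry run first:

$ eksctl upgrade cluster --name eks-cluster-20230105 --version 1.24
(plan output only; re-run with --approve to actually apply the upgrade)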

Let's run the upgrade.

$ eksctl upgrade cluster --name eks-cluster-20230105 --version 1.24 --approve
2023-01-06 08:00:03 [ℹ]  will upgrade cluster "eks-cluster-20230105" control plane from current version "1.23" to "1.24"
2023-01-06 08:09:30 [✔]  cluster "eks-cluster-20230105" control plane has been upgraded to version "1.24"
2023-01-06 08:09:30 [ℹ]  you will need to follow the upgrade procedure for all of nodegroups and add-ons
2023-01-06 08:09:31 [ℹ]  re-building cluster stack "eksctl-eks-cluster-20230105-cluster"
2023-01-06 08:09:31 [✔]  all resources in cluster stack "eksctl-eks-cluster-20230105-cluster" are up-to-date

Check the result.

$ kubectl version --short
Client Version: v1.23.7-eks-4721010
Server Version: v1.24.8-eks-ffeb93d

The server version has been bumped. It seems the client has to be upgraded separately.
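
Upgrading the client would be a repeat of the kubectl download at the top of this article, just with a 1.24 path; the version/date segment below is a placeholder to be taken from the EKS documentation, not a path I verified:

$ curl -o kubectl https://s3.us-west-2.amazonaws.com/amazon-eks/<1.24.x>/<release-date>/bin/linux/amd64/kubectl
$ chmod +x ./kubectl && cp ./kubectl $HOME/bin/kubectl
$ kubectl version --short --client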

The nodes haven't been upgraded, though.

$ kubectl get node
NAME                                                         STATUS   ROLES    AGE     VERSION
fargate-ip-192-168-113-56.ap-northeast-1.compute.internal    Ready    <none>   4h16m   v1.23.14-eks-a1bebd3
fargate-ip-192-168-148-117.ap-northeast-1.compute.internal   Ready    <none>   22m     v1.23.14-eks-a1bebd3
fargate-ip-192-168-154-134.ap-northeast-1.compute.internal   Ready    <none>   26h     v1.23.14-eks-a1bebd3
fargate-ip-192-168-161-7.ap-northeast-1.compute.internal     Ready    <none>   26h     v1.23.14-eks-a1bebd3
fargate-ip-192-168-177-225.ap-northeast-1.compute.internal   Ready    <none>   25h     v1.23.14-eks-a1bebd3
fargate-ip-192-168-96-66.ap-northeast-1.compute.internal     Ready    <none>   4h16m   v1.23.14-eks-a1bebd3

Let's create a Pod.

$ kubectl run redis --image redis
pod/redis created
$ kubectl get node
NAME                                                         STATUS   ROLES    AGE     VERSION
fargate-ip-192-168-113-56.ap-northeast-1.compute.internal    Ready    <none>   4h19m   v1.23.14-eks-a1bebd3
fargate-ip-192-168-148-117.ap-northeast-1.compute.internal   Ready    <none>   24m     v1.23.14-eks-a1bebd3
fargate-ip-192-168-154-134.ap-northeast-1.compute.internal   Ready    <none>   26h     v1.23.14-eks-a1bebd3
fargate-ip-192-168-161-7.ap-northeast-1.compute.internal     Ready    <none>   26h     v1.23.14-eks-a1bebd3
fargate-ip-192-168-171-185.ap-northeast-1.compute.internal   Ready    <none>   14s     v1.24.8-eks-a1bebd3
fargate-ip-192-168-177-225.ap-northeast-1.compute.internal   Ready    <none>   25h     v1.23.14-eks-a1bebd3
fargate-ip-192-168-96-66.ap-northeast-1.compute.internal     Ready    <none>   4h19m   v1.23.14-eks-a1bebd3

The newly provisioned node is v1.24.
So upgrading the worker nodes probably requires recreating the Pods?
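
If that's the case, for a Deployment a rolling restart would presumably be enough to swap its Pods onto freshly provisioned v1.24 nodes (a sketch, not something I ran here):

$ kubectl rollout restart deployment php-apache
$ kubectl rollout status deployment php-apache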

Deletion

$ eksctl delete cluster --name eks-cluster-20230105
2023-01-06 08:22:58 [ℹ]  deleting EKS cluster "eks-cluster-20230105"
2023-01-06 08:22:59 [ℹ]  deleting Fargate profile "fp-default"
2023-01-06 08:25:07 [ℹ]  deleted Fargate profile "fp-default"
2023-01-06 08:25:07 [ℹ]  deleting Fargate profile "red"
2023-01-06 08:27:15 [ℹ]  deleted Fargate profile "red"
2023-01-06 08:27:15 [ℹ]  deleted 2 Fargate profile(s)
2023-01-06 08:27:16 [✔]  kubeconfig has been updated
2023-01-06 08:27:16 [ℹ]  cleaning up AWS load balancers created by Kubernetes objects of Kind Service or Ingress
2023-01-06 08:27:17 [ℹ]  
2 sequential tasks: { 
    2 sequential sub-tasks: { 
        2 sequential sub-tasks: { 
            delete IAM role for serviceaccount "kube-system/aws-load-balancer-controller",
            delete serviceaccount "kube-system/aws-load-balancer-controller",
        },
        delete IAM OIDC provider,
    }, delete cluster control plane "eks-cluster-20230105" [async] 
}
2023-01-06 08:27:17 [ℹ]  will delete stack "eksctl-eks-cluster-20230105-addon-iamserviceaccount-kube-system-aws-load-balancer-controller"
2023-01-06 08:27:17 [ℹ]  waiting for stack "eksctl-eks-cluster-20230105-addon-iamserviceaccount-kube-system-aws-load-balancer-controller" to get deleted
2023-01-06 08:27:17 [ℹ]  waiting for CloudFormation stack "eksctl-eks-cluster-20230105-addon-iamserviceaccount-kube-system-aws-load-balancer-controller"
2023-01-06 08:27:47 [ℹ]  waiting for CloudFormation stack "eksctl-eks-cluster-20230105-addon-iamserviceaccount-kube-system-aws-load-balancer-controller"
2023-01-06 08:27:47 [ℹ]  deleted serviceaccount "kube-system/aws-load-balancer-controller"
2023-01-06 08:27:48 [ℹ]  will delete stack "eksctl-eks-cluster-20230105-cluster"
2023-01-06 08:27:48 [✔]  all cluster resources were deleted
$ kubectl config get-contexts
CURRENT   NAME                                                                   CLUSTER                                                                AUTHINFO                                                               NAMESPACE
*         arn:aws:eks:ap-northeast-1:238451437242:cluster/eks-cluster-20230105   arn:aws:eks:ap-northeast-1:238451437242:cluster/eks-cluster-20230105   arn:aws:eks:ap-northeast-1:238451437242:cluster/eks-cluster-20230105   

Only the context that was added after updating the AWS CLI remains.
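
If you want to tidy that up too, the leftover entries can be removed from the kubeconfig (a sketch using the names shown above):

$ kubectl config delete-context arn:aws:eks:ap-northeast-1:238451437242:cluster/eks-cluster-20230105
$ kubectl config delete-cluster arn:aws:eks:ap-northeast-1:238451437242:cluster/eks-cluster-20230105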
