
Deploying the guestbook app on EKS on Fargate

This article is day 24 of the Amazon EKS Advent Calendar 2019.

At re:Invent 2019 the other day, the long-awaited EKS on Fargate was released.
Several Advent Calendar articles have already covered it, but in this one I'll deploy the guestbook app, which is the first step I take in every environment in my earlier article "わりとゴツいKubernetesハンズオン" (a fairly hefty Kubernetes hands-on).
Once this works, I feel like you can figure out the rest from there!

Creating the cluster

We'll create it with eksctl.
Incidentally, eksctl became the official CLI for EKS on 2019/7/23.

The tool versions used here:

$ eksctl version
[ℹ]  version.Info{BuiltAt:"", GitCommit:"", GitTag:"0.11.1"}

$ aws --version
aws-cli/1.16.305 Python/3.7.3 Darwin/19.2.0 botocore/1.13.41

$ kubectl version --client --short
Client Version: v1.17.0

Just add the --fargate flag and you get an EKS on Fargate cluster.
I hoped it would come up faster since no EC2 instances are launched, but perhaps because a Fargate profile has to be created, it takes around 20 minutes... painful :sob:

$ eksctl create cluster --name guestbook --fargate
[ℹ]  eksctl version 0.11.1
[ℹ]  using region ap-northeast-1
[ℹ]  setting availability zones to [ap-northeast-1d ap-northeast-1a ap-northeast-1c]
[ℹ]  subnets for ap-northeast-1d - public:192.168.0.0/19 private:192.168.96.0/19
[ℹ]  subnets for ap-northeast-1a - public:192.168.32.0/19 private:192.168.128.0/19
[ℹ]  subnets for ap-northeast-1c - public:192.168.64.0/19 private:192.168.160.0/19
[ℹ]  using Kubernetes version 1.14
[ℹ]  creating EKS cluster "guestbook" in "ap-northeast-1" region with Fargate profile
[ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=ap-northeast-1 --cluster=guestbook'
[ℹ]  CloudWatch logging will not be enabled for cluster "guestbook" in "ap-northeast-1"
[ℹ]  you can enable it with 'eksctl utils update-cluster-logging --region=ap-northeast-1 --cluster=guestbook'
[ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "guestbook" in "ap-northeast-1"
[ℹ]  1 task: { create cluster control plane "guestbook" }
[ℹ]  building cluster stack "eksctl-guestbook-cluster"
[ℹ]  deploying stack "eksctl-guestbook-cluster"
[✔]  all EKS cluster resources for "guestbook" have been created
[✔]  saved kubeconfig as "/Users/kta-m/.kube/config"
[ℹ]  creating Fargate profile "fp-default" on EKS cluster "guestbook"
[ℹ]  created Fargate profile "fp-default" on EKS cluster "guestbook"
[ℹ]  "coredns" is now schedulable onto Fargate
[ℹ]  "coredns" is now scheduled onto Fargate
[ℹ]  "coredns" pods are now scheduled onto Fargate
[ℹ]  kubectl command should work with "/Users/kta-m/.kube/config", try 'kubectl get nodes'
[✔]  EKS cluster "guestbook" in "ap-northeast-1" region is ready
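As the last line of the output suggests, you can sanity-check the new cluster right away. A detail worth knowing (a sketch; node names will differ per cluster): on Fargate each running pod is backed by its own `fargate-ip-*` node, so even a fresh cluster shows nodes for the coredns replicas.

```shell
# Verify kubectl can reach the cluster created by eksctl.
# On Fargate, each scheduled pod (here the coredns replicas)
# appears as its own fargate-ip-* node.
kubectl get nodes
```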

The main job of a Fargate profile is its pod selectors.
Only pods belonging to a namespace specified in a pod selector can be launched on Fargate.

The default profile, fp-default, specifies only the kube-system and default namespaces.
To run pods in any other namespace, you need to add another Fargate profile.
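For example, adding a profile for a hypothetical staging namespace can be done with eksctl (the namespace and profile name here are made up for illustration):

```shell
# Add a Fargate profile so pods in the "staging" namespace
# can be scheduled onto Fargate
eksctl create fargateprofile \
  --cluster guestbook \
  --namespace staging \
  --name fp-staging
```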

Creating the ALB Ingress Controller

When I stood up an EKS cluster on EC2 nodes in "わりとゴツいKubernetesハンズオン", setting the Service type to LoadBalancer conveniently created an ELB, so I used that.
With EKS on Fargate, however, that trick doesn't work.

EKS offers the ALB Ingress Controller as a way to control access flexibly through an Ingress, and that's what you use to expose a Fargate cluster externally.
That said, the documentation is (for now) written assuming a cluster with EC2 nodes, so the IAM role handling won't work as written.
On Fargate, the IAM role has to be granted to the service account that operates the pods.
We'll follow the article "Introducing fine-grained IAM roles for service accounts".

Creating the service account

First, create an IAM Open ID Connect provider for the cluster so that service accounts can assume IAM roles.

$ eksctl utils associate-iam-oidc-provider --region=ap-northeast-1 --cluster=guestbook --approve
[ℹ]  eksctl version 0.11.1
[ℹ]  using region ap-northeast-1
[ℹ]  will create IAM Open ID Connect provider for cluster "guestbook" in "ap-northeast-1"
[✔]  created IAM Open ID Connect provider for cluster "guestbook" in "ap-northeast-1"

Next, download the controller's IAM policy document, create an IAM policy from it, and create the service account with that policy attached.

$ curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/iam-policy.json
$ policyArn=$(aws iam create-policy \
  --policy-name ALBIngressControllerIAMPolicy \
  --policy-document file://iam-policy.json | jq -r .Policy.Arn)
$ rm iam-policy.json
$ eksctl create iamserviceaccount --name alb-ingress-controller \
  --namespace kube-system \
  --cluster guestbook \
  --attach-policy-arn ${policyArn}  \
  --approve --override-existing-serviceaccounts
[ℹ]  eksctl version 0.11.1
[ℹ]  using region ap-northeast-1
[ℹ]  1 iamserviceaccount (kube-system/alb-ingress-controller) was included (based on the include/exclude rules)
[!]  metadata of serviceaccounts that exist in Kubernetes will be updated, as --override-existing-serviceaccounts was set
[ℹ]  1 task: { 2 sequential sub-tasks: { create IAM role for serviceaccount "kube-system/alb-ingress-controller", create serviceaccount "kube-system/alb-ingress-controller" } }
[ℹ]  building iamserviceaccount stack "eksctl-guestbook-addon-iamserviceaccount-kube-system-alb-ingress-controller"
[ℹ]  deploying stack "eksctl-guestbook-addon-iamserviceaccount-kube-system-alb-ingress-controller"
[ℹ]  created serviceaccount "kube-system/alb-ingress-controller"

If it succeeds, you can see the ARN of the IAM role bound to the service account, like so:

$ kubectl get sa -n kube-system alb-ingress-controller -o jsonpath="{.metadata.annotations['eks\.amazonaws\.com/role-arn']}"
arn:aws:iam::XXXXXXXXXXXX:role/eksctl-guestbook-addon-iamserviceaccount-kub-Role1-XXXXXXXXXXXX

Configuring Role-Based Access Control (RBAC)

Since we're using a service account, let's set up RBAC as well.

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/rbac-role.yaml

Deploying the ALB Ingress Controller

Create a manifest file as follows.

alb-ingress-controller.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: alb-ingress-controller
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: alb-ingress-controller
  template:
    metadata:
      labels:
        app.kubernetes.io/name: alb-ingress-controller
    spec:
      serviceAccountName: alb-ingress-controller
      containers:
        - name: alb-ingress-controller
          image: docker.io/amazon/aws-alb-ingress-controller:v1.1.4
          args:
            - --ingress-class=alb
            - --cluster-name=guestbook # cluster name
            - --aws-region=ap-northeast-1
            - --aws-vpc-id=vpc-xxxx # ID of the VPC created by eksctl
          resources: {}
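
If you don't have the VPC ID handy, it can be looked up from the cluster itself rather than from the CloudFormation console (a sketch using the standard aws eks describe-cluster query; the cluster name matches the one above):

```shell
# Print the ID of the VPC that eksctl created for the cluster
aws eks describe-cluster --name guestbook \
  --query "cluster.resourcesVpcConfig.vpcId" --output text
```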

Then deploy!

$ kubectl apply -f alb-ingress-controller.yaml

Deploying the guestbook app

Fetch the manifest files for the guestbook app.

$ git clone git@github.com:kubernetes/examples.git

Change the Service type to ClusterIP.

examples/guestbook/frontend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # comment or delete the following line if you want to use a LoadBalancer
  # type: NodePort
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Then deploy!

$ kubectl apply -f examples/guestbook/
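
At this point you can check that the guestbook pods really landed on Fargate (a quick sanity check; pod and node names will differ per cluster):

```shell
# Each Fargate pod is scheduled onto its own fargate-ip-* node,
# visible in the NODE column
kubectl get pods -o wide
```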

Deploying the Ingress

Create a manifest file as follows.
The alb.ingress.kubernetes.io/target-type: ip annotation is apparently the important part: with Fargate there are no EC2 instances to register, so the ALB has to target the pod IPs directly.

nginx.yaml
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: frontend
              servicePort: 80

Then deploy!

$ kubectl apply -f nginx.yaml

If it works, you should see logs like these:

$ kubectl logs -n kube-system $(kubectl get po -n kube-system -o name | grep alb | cut -d/ -f2) -f
-------------------------------------------------------------------------------
AWS ALB Ingress controller
  Release:    v1.1.3
  Build:      git-0db46039
  Repository: https://github.com/kubernetes-sigs/aws-alb-ingress-controller.git
-------------------------------------------------------------------------------

W1221 20:44:59.754825       1 client_config.go:549] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I1221 20:44:59.820425       1 controller.go:121] kubebuilder/controller "level"=0 "msg"="Starting EventSource"  "controller"="alb-ingress-controller" "source"={"Type":{"metadata":{"creationTimestamp":null}}}
I1221 20:44:59.820694       1 controller.go:121] kubebuilder/controller "level"=0 "msg"="Starting EventSource"  "controller"="alb-ingress-controller" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{},"status":{"loadBalancer":{}}}}
I1221 20:44:59.820752       1 controller.go:121] kubebuilder/controller "level"=0 "msg"="Starting EventSource"  "controller"="alb-ingress-controller" "source"=
I1221 20:44:59.820903       1 controller.go:121] kubebuilder/controller "level"=0 "msg"="Starting EventSource"  "controller"="alb-ingress-controller" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{},"status":{"loadBalancer":{}}}}
I1221 20:44:59.820937       1 controller.go:121] kubebuilder/controller "level"=0 "msg"="Starting EventSource"  "controller"="alb-ingress-controller" "source"=
I1221 20:44:59.821045       1 controller.go:121] kubebuilder/controller "level"=0 "msg"="Starting EventSource"  "controller"="alb-ingress-controller" "source"={"Type":{"metadata":{"creationTimestamp":null}}}
I1221 20:44:59.821307       1 controller.go:121] kubebuilder/controller "level"=0 "msg"="Starting EventSource"  "controller"="alb-ingress-controller" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{},"status":{"daemonEndpoints":{"kubeletEndpoint":{"Port":0}},"nodeInfo":{"machineID":"","systemUUID":"","bootID":"","kernelVersion":"","osImage":"","containerRuntimeVersion":"","kubeletVersion":"","kubeProxyVersion":"","operatingSystem":"","architecture":""}}}}
I1221 20:44:59.821624       1 leaderelection.go:205] attempting to acquire leader lease  kube-system/ingress-controller-leader-alb...
I1221 20:45:16.071715       1 leaderelection.go:214] successfully acquired lease kube-system/ingress-controller-leader-alb
I1221 20:45:16.172899       1 controller.go:134] kubebuilder/controller "level"=0 "msg"="Starting Controller"  "controller"="alb-ingress-controller"
I1221 20:45:16.273136       1 controller.go:154] kubebuilder/controller "level"=0 "msg"="Starting workers"  "controller"="alb-ingress-controller" "worker count"=1
E1221 20:46:06.292652       1 controller.go:217] kubebuilder/controller "msg"="Reconciler error" "error"="no object matching key \"default/nginx\" in local store"  "controller"="alb-ingress-controller" "request"={"Namespace":"default","Name":"nginx"}
I1221 20:46:09.386726       1 security_group.go:36] default/nginx: creating securityGroup 88704138-default-nginx-ef8b:managed LoadBalancer securityGroup by ALB Ingress Controller
I1221 20:46:09.472251       1 tags.go:69] default/nginx: modifying tags {  ingress.k8s.aws/resource: "ManagedLBSecurityGroup",  kubernetes.io/cluster-name: "guestbook",  kubernetes.io/namespace: "default",  kubernetes.io/ingress-name: "nginx",  ingress.k8s.aws/cluster: "guestbook",  ingress.k8s.aws/stack: "default/nginx"} on sg-00644bf833d3e22af
I1221 20:46:09.597685       1 security_group.go:75] default/nginx: granting inbound permissions to securityGroup sg-00644bf833d3e22af: [{    FromPort: 80,    IpProtocol: "tcp",    IpRanges: [{        CidrIp: "0.0.0.0/0",        Description: "Allow ingress on port 80 from 0.0.0.0/0"      }],    ToPort: 80  }]
I1221 20:46:09.797729       1 loadbalancer.go:191] default/nginx: creating LoadBalancer 88704138-default-nginx-ef8b
I1221 20:46:10.720829       1 loadbalancer.go:208] default/nginx: LoadBalancer 88704138-default-nginx-ef8b created, ARN: arn:aws:elasticloadbalancing:ap-northeast-1:XXXXXXXXXXXX:loadbalancer/app/88704138-default-nginx-ef8b/f38fd5222d13eed9
I1221 20:46:10.866826       1 targetgroup.go:119] default/nginx: creating target group 88704138-3fcc75cb1898279c122
I1221 20:46:11.034105       1 targetgroup.go:138] default/nginx: target group 88704138-3fcc75cb1898279c122 created: arn:aws:elasticloadbalancing:ap-northeast-1:XXXXXXXXXXXX:targetgroup/88704138-3fcc75cb1898279c122/6c3561d5fe7832b4
I1221 20:46:11.054416       1 tags.go:43] default/nginx: modifying tags {  ingress.k8s.aws/resource: "default/nginx-frontend:80",  kubernetes.io/cluster/guestbook: "owned",  kubernetes.io/namespace: "default",  kubernetes.io/ingress-name: "nginx",  ingress.k8s.aws/cluster: "guestbook",  ingress.k8s.aws/stack: "default/nginx",  kubernetes.io/service-name: "frontend",  kubernetes.io/service-port: "80"} on arn:aws:elasticloadbalancing:ap-northeast-1:XXXXXXXXXXXX:targetgroup/88704138-3fcc75cb1898279c122/6c3561d5fe7832b4
I1221 20:46:11.160612       1 targets.go:80] default/nginx: Adding targets to arn:aws:elasticloadbalancing:ap-northeast-1:XXXXXXXXXXXX:targetgroup/88704138-3fcc75cb1898279c122/6c3561d5fe7832b4: 192.168.133.249:80
I1221 20:46:11.416142       1 listener.go:110] default/nginx: creating listener 80
I1221 20:46:11.453183       1 rules.go:60] default/nginx: creating rule 1 on arn:aws:elasticloadbalancing:ap-northeast-1:XXXXXXXXXXXX:listener/app/88704138-default-nginx-ef8b/f38fd5222d13eed9/969acd682c64ffe0
I1221 20:46:11.480280       1 rules.go:77] default/nginx: rule 1 created with conditions [{    Field: "path-pattern",    Values: ["/*"]  }]
I1221 20:46:11.952942       1 instance_attachment_v2.go:192] default/nginx: granting inbound permissions to securityGroup sg-09e10020509367f5f: [{    FromPort: 0,    IpProtocol: "tcp",    ToPort: 65535,    UserIdGroupPairs: [{        GroupId: "sg-00644bf833d3e22af"      }]  }]
I1221 20:46:13.116078       1 rules.go:82] default/nginx: modifying rule 1 on arn:aws:elasticloadbalancing:ap-northeast-1:XXXXXXXXXXXX:listener/app/88704138-default-nginx-ef8b/f38fd5222d13eed9/969acd682c64ffe0
I1221 20:46:13.137496       1 rules.go:98] default/nginx: rule 1 modified with conditions [{    Field: "path-pattern",    Values: ["/*"]  }]

Check the Ingress and you should see the ELB's hostname, so let's try accessing it.
It takes a little while for the ELB health checks to pass, so wait a few minutes.

$ kubectl get ing
NAME    HOSTS   ADDRESS                                                                  PORTS   AGE
nginx   *       88704138-default-nginx-ef8b-617276039.ap-northeast-1.elb.amazonaws.com   80      24s
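
Once the health checks pass, you can also hit the endpoint straight from the shell by pulling the hostname out of the Ingress status (a sketch; the hostname will differ per deployment):

```shell
# Grab the ALB hostname from the Ingress and request the frontend
ALB=$(kubectl get ing nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
curl -I "http://${ALB}/"
```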

It's up!

[Screenshot: the guestbook app in the browser (2019-12-21 9.00.00)]

The target group looks like this.
Since we're load balancing to Fargate, the targets are IP addresses rather than EC2 instances.

[Screenshot: the target group (2019-12-21 9.14.54)]

Cleaning up

Even after running the eksctl cluster deletion command, resources are often left behind.
It prints a success-looking message, but it only watches until the CloudFormation delete starts, so check in the console that everything was really deleted.
If the EKS cluster is left running undeleted, it keeps incurring charges, which is no fun.

$ kubectl delete -f nginx.yaml
$ kubectl delete -f examples/guestbook/
$ kubectl delete -f alb-ingress-controller.yaml
$ kubectl delete -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/rbac-role.yaml
$ eksctl delete iamserviceaccount --name alb-ingress-controller \
  --namespace kube-system \
  --cluster guestbook
$ eksctl delete cluster --name guestbook

Wrap-up

Since all we did this time was deploy the guestbook app, this article doesn't really show off the benefits of Fargate (if anything it makes it look like more hassle), but being freed from managing EC2 nodes should be a win!

References :pray:

https://839.hateblo.jp/entry/2019/12/08/172020
https://dev.classmethod.jp/cloud/aws/eksctl-usage-for-eks-fargate/

Kta-M
Fusic
Gathering up individuality to keep updating the world from surprising angles.
https://fusic.co.jp/