These are notes for my own study.
Creating an EKS cluster from an EC2 instance on AWS
1. Log in to the EC2 instance over SSH and set up the AWS CLI
First, log in over SSH to something like a bastion server on EC2.
After logging in, run the following command to set up the CLI.
aws configure
Official documentation
https://docs.aws.amazon.com/ja_jp/cli/latest/userguide/cli-chap-configure.html
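aws configure interactively prompts for an access key ID, secret access key, default region, and output format, and saves them under ~/.aws/. The resulting files look roughly like this (the key values below are placeholders):

```ini
# ~/.aws/credentials  (placeholder values)
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# ~/.aws/config
[default]
region = us-east-1
output = json
```

The same values can also be set non-interactively, e.g. with `aws configure set region us-east-1`.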
2. Displaying the credentials
The following command returns your AWS Account, UserId, and Arn.
aws sts get-caller-identity
Qiita article I used as a reference
https://qiita.com/kooohei/items/2a8a09e5f36bac614879
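The command returns JSON along these lines (the IDs below are placeholders):

```json
{
    "UserId": "AIDAXXXXXXXXXXXXXXXXX",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/example-user"
}
```

In scripts, a single field can be extracted directly, e.g. `aws sts get-caller-identity --query Account --output text`.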
3. Creating the cluster
eksctl create cluster --name test --region=us-east-1 --node-type t3.medium --nodes 3
Example terminal output
[ℹ] eksctl version 0.11.1
[ℹ] using region us-east-1
[ℹ] setting availability zones to [us-east-1c us-east-1f]
[ℹ] subnets for us-east-1c - public:***.***.0.0/19 private:***.***.64.0/19
[ℹ] subnets for us-east-1f - public:***.***.32.0/19 private:***.***.96.0/19
[ℹ] nodegroup "ng-07a533c2" will use "ami-*****" [AmazonLinux2/1.14]
[ℹ] using Kubernetes version 1.14
[ℹ] creating EKS cluster "test" in "us-east-1" region with un-managed nodes
[ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
[ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-1 --cluster=test'
[ℹ] CloudWatch logging will not be enabled for cluster "test" in "us-east-1"
[ℹ] you can enable it with 'eksctl utils update-cluster-logging --region=us-east-1 --cluster=test'
[ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "test" in "us-east-1"
[ℹ] 2 sequential tasks: { create cluster control plane "test", create nodegroup "ng-07a533c2" }
[ℹ] building cluster stack "eksctl-test-cluster"
[ℹ] deploying stack "eksctl-test-cluster"
[ℹ] building nodegroup stack "eksctl-test-nodegroup-ng-07a533c2"
[ℹ] --nodes-min=3 was set automatically for nodegroup ng-07a533c2
[ℹ] --nodes-max=3 was set automatically for nodegroup ng-07a533c2
[ℹ] deploying stack "eksctl-test-nodegroup-ng-07a533c2"
[✔] all EKS cluster resources for "test" have been created
[✔] saved kubeconfig as "/home/user/.kube/config"
[ℹ] adding identity "arn:aws:iam::*****:role/eksctl-test-nodegroup-ng-07a533c2-NodeInstanceRole-1DJPUV1THLN54" to auth ConfigMap
[ℹ] nodegroup "ng-07a533c2" has 0 node(s)
[ℹ] waiting for at least 3 node(s) to become ready in "ng-07a533c2"
[ℹ] nodegroup "ng-07a533c2" has 3 node(s)
[ℹ] node "**126.ec2.internal" is ready
[ℹ] node "**152.ec2.internal" is ready
[ℹ] node "**111.ec2.internal" is ready
[ℹ] kubectl command should work with "/home/user/.kube/config", try 'kubectl get nodes'
[✔] EKS cluster "test" in "us-east-1" region is ready
--node-type specifies the instance type.
Instance types
https://aws.amazon.com/jp/ec2/instance-types/
If the number of nodes is not specified, 2 nodes are created (the default is 2).
Official documentation
https://docs.aws.amazon.com/ja_jp/eks/latest/userguide/create-cluster.html
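Instead of command-line flags, the same parameters can be written as a config file and passed with -f. A minimal sketch that should be equivalent to the command above (the file name cluster.yaml and the nodegroup name are arbitrary):

```yaml
# cluster.yaml -- roughly equivalent to the one-line command above
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: test
  region: us-east-1
nodeGroups:
  - name: ng-1            # any name; eksctl generates one if you use flags instead
    instanceType: t3.medium
    desiredCapacity: 3
```

Then create the cluster with `eksctl create cluster -f cluster.yaml`.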
Partway through, the terminal looked as if it had frozen, but the cluster was ready in about 15 minutes.
Let's check the nodes.
[user@**** ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
**126.ec2.internal Ready <none> 104s v1.14.7-eks-1861c5
**152.ec2.internal Ready <none> 105s v1.14.7-eks-1861c5
**111.ec2.internal Ready <none> 105s v1.14.7-eks-1861c5
[user@**** ~]$
Let's check the cluster.
[user@**** ~]$ kubectl config get-clusters
NAME
test.us-east-1.eksctl.io
[user@**** ~]$
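The current context (which cluster and user kubectl talks to) can be checked as well. With the kubeconfig that eksctl wrote, the output should look something like this:

```
[user@**** ~]$ kubectl config current-context
user@test.us-east-1.eksctl.io
```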
4. Other parameters
The following is the output of --help.
[userxxx@******** ~]$ eksctl create cluster --help
Create a cluster
Usage: eksctl create cluster [flags]
General flags:
-n, --name string EKS cluster name (generated if unspecified, e.g. "hilarious-wardrobe-1577715578")
--tags stringToString A list of KV pairs used to tag the AWS resources (e.g. "Owner=John Doe,Team=Some Team") (default [])
-r, --region string AWS region
--zones strings (auto-select if unspecified)
--version string Kubernetes version (valid options: 1.12, 1.13, 1.14) (default "1.14")
-f, --config-file string load configuration from a file (or stdin if set to '-')
--timeout duration maximum waiting time for any long-running operation (default 25m0s)
--install-vpc-controllers Install VPC controller that's required for Windows workloads
--managed Create EKS-managed nodegroup
--fargate Create a Fargate profile scheduling pods in the default and kube-system namespaces onto Fargate
Initial nodegroup flags:
--nodegroup-name string name of the nodegroup (generated if unspecified, e.g. "ng-1a88f5a7")
--without-nodegroup if set, initial nodegroup will not be created
-t, --node-type string node instance type (default "m5.large")
-N, --nodes int total number of nodes (for a static ASG) (default 2)
-m, --nodes-min int minimum nodes in ASG (default 2)
-M, --nodes-max int maximum nodes in ASG (default 2)
--node-volume-size int node volume size in GB
--node-volume-type string node volume type (valid options: gp2, io1, sc1, st1) (default "gp2")
--max-pods-per-node int maximum number of pods per node (set automatically if unspecified)
--ssh-access control SSH access for nodes. Uses ~/.ssh/id_rsa.pub as default key path if enabled
--ssh-public-key string SSH public key to use for nodes (import from local path, or use existing EC2 key pair)
--node-ami string Advanced use cases only. If 'static' is supplied (default) then eksctl will use static AMIs; if 'auto' is supplied then eksctl will automatically set the AMI based on version/region/instance type; if any other value is supplied it will override the AMI to use for the nodes. Use with extreme care. (default "static")
--node-ami-family string Advanced use cases only. If 'AmazonLinux2' is supplied (default), then eksctl will use the official AWS EKS AMIs (Amazon Linux 2); if 'Ubuntu1804' is supplied, then eksctl will use the official Canonical EKS AMIs (Ubuntu 18.04). (default "AmazonLinux2")
-P, --node-private-networking whether to make nodegroup networking private
--node-security-groups strings Attach additional security groups to nodes, so that it can be used to allow extra ingress/egress access from/to pods
--node-labels stringToString Extra labels to add when registering the nodes in the nodegroup, e.g. "partition=backend,nodeclass=hugememory" (default [])
--node-zones strings (inherited from the cluster if unspecified)
Cluster and nodegroup add-ons flags:
--asg-access enable IAM policy for cluster-autoscaler
--external-dns-access enable IAM policy for external-dns
--full-ecr-access enable full access to ECR
--appmesh-access enable full access to AppMesh
--alb-ingress-access enable full access for alb-ingress-controller
VPC networking flags:
--vpc-cidr ipNet global CIDR to use for VPC (default 192.168.0.0/16)
--vpc-private-subnets strings re-use private subnets of an existing VPC
--vpc-public-subnets strings re-use public subnets of an existing VPC
--vpc-from-kops-cluster string re-use VPC from a given kops cluster
--vpc-nat-mode string VPC NAT mode, valid options: HighlyAvailable, Single, Disable (default "Single")
AWS client flags:
-p, --profile string AWS credentials profile to use (overrides the AWS_PROFILE environment variable)
--cfn-role-arn string IAM role used by CloudFormation to call AWS API on your behalf
Output kubeconfig flags:
--kubeconfig string path to write kubeconfig (incompatible with --auto-kubeconfig) (default "/home/cloud_user/.kube/config")
--authenticator-role-arn string AWS IAM role to assume for authenticator
--set-kubeconfig-context if true then current-context will be set in kubeconfig; if a context is already set then it will be overwritten (default true)
--auto-kubeconfig save kubeconfig file by cluster name, e.g. "/home/cloud_user/.kube/eksctl/clusters/hilarious-wardrobe-1577715578"
--write-kubeconfig toggle writing of kubeconfig (default true)
Common flags:
-C, --color string toggle colorized logs (valid options: true, false, fabulous) (default "true")
-h, --help help for this command
-v, --verbose int set log level, use 0 to silence, 4 for debugging and 5 for debugging with AWS debug logging (default 3)
Use 'eksctl create cluster [command] --help' for more information about a command.
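One thing not covered above: a test cluster like this keeps incurring charges until it is deleted. eksctl can tear down both CloudFormation stacks (cluster and nodegroup) with a single command:

```shell
# Deletes the control plane and nodegroup stacks created above
eksctl delete cluster --name test --region us-east-1
```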
Where is the EKS master node?
EKS is a fully managed Kubernetes service, so you don't create the master nodes yourself. I see. (I should have realized that sooner.)
This article was a helpful reference.
Getting started with Amazon EKS using the eksctl command
https://dev.classmethod.jp/cloud/aws/getting-started-amazon-eks-with-eksctl/
My Kubernetes training (studying) continues...