Building an EKS Environment with Terraform

Posted at 2019-05-02

Introduction

Compared to GKE, getting an EKS environment up and running is tedious, so I wrote Terraform code to make it easy.
If you already have resources that were created by hand, it is worth converting them into tf files with Terraforming and reviewing them, as sketched below.
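
Terraforming is a third-party CLI distributed as a Ruby gem, so treat this as a rough sketch; it assumes your AWS credentials are already configured.

$ gem install terraforming
$ terraforming vpc > vpc.tf   # dump existing VPCs as Terraform code
$ terraforming help           # list the other supported resource types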

Implementation

The directory layout looks like this:

├── eks.tf
├── iam.tf
├── outputs.tf
├── variables.tf
└── vpc.tf

iam.tf

First, the IAM configuration.
We define separate IAM roles for the master and the nodes, and attach the required policies to each.

iam.tf
# ---
# EKS master
resource "aws_iam_role" "eks-master" {
  name = "eks-master-role"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "eks-cluster" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = "${aws_iam_role.eks-master.name}"
}

resource "aws_iam_role_policy_attachment" "eks-service" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
  role       = "${aws_iam_role.eks-master.name}"
}

# ---
# EKS node
resource "aws_iam_role" "eks-node" {
  name = "eks-node-role"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "eks-worker-node" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = "${aws_iam_role.eks-node.name}"
}

resource "aws_iam_role_policy_attachment" "eks-cni" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = "${aws_iam_role.eks-node.name}"
}

resource "aws_iam_role_policy_attachment" "ecr-ro" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = "${aws_iam_role.eks-node.name}"
}

resource "aws_iam_instance_profile" "eks-node" {
  name = "eks-node-profile"
  role = "${aws_iam_role.eks-node.name}"
}

vpc.tf

Next, the VPC configuration for the EKS cluster. Everything up to this point is preparation.

Broadly speaking, it defines the following five things:
1. VPC
2. Subnet
3. Internet Gateway
4. Route Table
5. Security Group

vpc.tf
# ---
# VPC
resource "aws_vpc" "vpc" {
  cidr_block           = "${var.vpc_cidr_block}"
  enable_dns_hostnames = true
  enable_dns_support   = true
  instance_tenancy     = "default"
  tags                 = "${merge(local.default_tags, map("Name","eks-vpc"))}"
}

# ---
# Subnet
resource "aws_subnet" "sn" {
  count                   = "${var.num_subnets}"
  vpc_id                  = "${aws_vpc.vpc.id}"
  cidr_block              = "${cidrsubnet(var.vpc_cidr_block, 8, count.index)}"
  availability_zone       = "${element(data.aws_availability_zones.available.names, count.index % var.num_subnets)}"
  tags   = "${merge(local.default_tags, map("Name","eks-sn"))}"
}

# ---
# Internet Gateway
resource "aws_internet_gateway" "igw" {
  vpc_id = "${aws_vpc.vpc.id}"
  tags   = "${merge(local.default_tags, map("Name","eks-igw"))}"
}

# ---
# Route Table
resource "aws_route_table" "rt" {
  vpc_id = "${aws_vpc.vpc.id}"
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.igw.id}"
  }
  tags = "${merge(local.default_tags, map("Name","eks-rt"))}"
}

resource "aws_route_table_association" "rta" {
  count          = "${var.num_subnets}"
  subnet_id      = "${element(aws_subnet.sn.*.id, count.index)}"
  route_table_id = "${aws_route_table.rt.id}"
}

# ---
# Security Group
resource "aws_security_group" "eks-master" {
  name        = "eks-master-sg"
  description = "EKS master security group"
  vpc_id = "${aws_vpc.vpc.id}"

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = "${merge(local.default_tags, map("Name","eks-master-sg"))}"
}

resource "aws_security_group" "eks-node" {
  name        = "eks-node-sg"
  description = "EKS node security group"
  vpc_id = "${aws_vpc.vpc.id}"

  ingress {
    description     = "Allow cluster master to access cluster node"
    from_port       = 1025
    to_port         = 65535
    protocol        = "tcp"
    security_groups = ["${aws_security_group.eks-master.id}"]
  }

  ingress {
    description     = "Allow cluster master to access cluster node"
    from_port       = 443
    to_port         = 443
    protocol        = "tcp"
    security_groups = ["${aws_security_group.eks-master.id}"]
    self            = false
  }

  ingress {
    description = "Allow inter pods communication"
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    self        = true
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags   = "${merge(local.default_tags, map("Name","eks-node-sg"))}"
}

eks.tf

Now the configuration for the master and the nodes.
This is where the instance type, the node AMI, the autoscaling parameters, and so on are defined.

eks.tf
resource "aws_eks_cluster" "cluster" {
  name     = "${local.cluster_name}"
  role_arn = "${aws_iam_role.eks-master.arn}"
  version  = "${local.cluster_version}"

  vpc_config {
    security_group_ids = ["${aws_security_group.eks-master.id}"]
    subnet_ids = ["${aws_subnet.sn.*.id}"]
  }

  depends_on = [
    "aws_iam_role_policy_attachment.eks-cluster",
    "aws_iam_role_policy_attachment.eks-service",
  ]
}

locals {
  userdata = <<USERDATA
#!/bin/bash
set -o xtrace
/etc/eks/bootstrap.sh --apiserver-endpoint "${aws_eks_cluster.cluster.endpoint}" --b64-cluster-ca "${aws_eks_cluster.cluster.certificate_authority.0.data}" "${aws_eks_cluster.cluster.name}"
USERDATA
}

data "aws_ami" "eks-node" {
  most_recent = true
  owners      = ["602401143452"]

  filter {
    name   = "name"
    values = ["amazon-eks-node-${aws_eks_cluster.cluster.version}-v*"]
  }
}

resource "aws_launch_configuration" "lc" {
  associate_public_ip_address = true
  iam_instance_profile        = "${aws_iam_instance_profile.eks-node.id}"
  image_id                    = "${data.aws_ami.eks-node.image_id}"
  instance_type               = "${var.instance_type}"
  name_prefix                 = "eks-node"
  key_name                    = "${var.key_name}"

  root_block_device {
    volume_type = "gp2"
    volume_size = "50"
  }

  security_groups  = ["${aws_security_group.eks-node.id}"]
  user_data_base64 = "${base64encode(local.userdata)}"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "asg" {
  name                 = "EKS node autoscaling group"
  desired_capacity     = "${var.desired_capacity}"
  launch_configuration = "${aws_launch_configuration.lc.id}"
  max_size             = "${var.max_size}"
  min_size             = "${var.min_size}"
  vpc_zone_identifier = ["${aws_subnet.sn.*.id}"]

  tag {
    key                 = "Name"
    value               = "eks-asg"
    propagate_at_launch = true
  }

  tag {
    key                 = "kubernetes.io/cluster/${local.cluster_name}"
    value               = "owned"
    propagate_at_launch = true
  }
}

variables.tf

This file defines the variables used by the other tf files.
Ideally it is the only file you ever need to edit; the defaults can also be overridden at apply time, as shown after the file.
The region is set to ap-northeast-1.

variables.tf
provider "aws" {
  region = "ap-northeast-1"
}

data "aws_availability_zones" "available" {}

variable "project" {
  default = "eks"
}

variable "environment" {
  default = "dev"
}

variable "vpc_cidr_block" {
  default = "10.0.0.0/16"
}

variable "num_subnets" {
  default = 2
}

variable "instance_type" {
  default = "t2.small"
}

variable "desired_capacity" {
  default = 2
}

variable "max_size" {
  default = 2
}

variable "min_size" {
  default = 2
}

variable "key_name" {
  default = "KEY"
}

locals {
  base_tags = {
    Project     = "${var.project}"
    Terraform   = "true"
    Environment = "${var.environment}"
  }

  default_tags    = "${merge(local.base_tags, map("kubernetes.io/cluster/${local.cluster_name}", "shared"))}"
  base_name       = "${var.project}-${var.environment}"
  cluster_name    = "${local.base_name}-cluster"
  cluster_version = "1.12"
}
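
Because every variable has a default, you can also override values on the command line instead of editing the file. A minimal sketch, where my-keypair and t3.medium are placeholders for values in your own account:

$ terraform apply -var 'key_name=my-keypair' -var 'instance_type=t3.medium'   # placeholder values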

outputs.tf

This file defines what gets written to standard output: a kubeconfig and the EKS ConfigMap (aws-auth), which is what ties the nodes to the master.
Note that the generated kubeconfig relies on aws-iam-authenticator, so it needs to be installed on the machine where you run kubectl.

outputs.tf
locals {
  kubeconfig = <<KUBECONFIG
apiVersion: v1
clusters:
- cluster:
    server: ${aws_eks_cluster.cluster.endpoint}
    certificate-authority-data: ${aws_eks_cluster.cluster.certificate_authority.0.data}
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "${local.cluster_name}"
KUBECONFIG

  eks_configmap = <<CONFIGMAPAWSAUTH
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: ${aws_iam_role.eks-node.arn}
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
CONFIGMAPAWSAUTH
}

output "kubectl config" {
  value = "${local.kubeconfig}"
}

output "EKS ConfigMap" {
  value = "${local.eks_configmap}"
}

Execution

Running Terraform

Just run the commands below.
It takes a few minutes.

$ terraform init
$ terraform plan
$ terraform apply

Applying the config

The values defined in outputs.tf are printed when apply finishes, so the next step is to feed them into your configuration.


Write the kubeconfig into ~/.kube/config (merge it if you already have one) and switch contexts; the context in the generated kubeconfig is named aws.
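
One way to do this is to take the output straight from Terraform. A rough sketch; note that this overwrites any existing kubeconfig, so back it up or merge by hand if needed:

$ terraform output kubeconfig > ~/.kube/config   # overwrites any existing kubeconfig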

$ kubectl config use-context aws


For the EKS ConfigMap, save the output as is to something like eks_configmap.yaml and apply it.
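
This too can be pulled straight from the Terraform output; a one-liner sketch using the output name defined above:

$ terraform output eks_configmap > eks_configmap.yaml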

$ kubectl apply -f eks_configmap.yaml

Verification

Check the status of the nodes.

$ kubectl get nodes
NAME                                            STATUS    ROLES     AGE       VERSION
ip-10-0-0-242.ap-northeast-1.compute.internal   Ready     <none>    48s       v1.12.7
ip-10-0-1-208.ap-northeast-1.compute.internal   Ready     <none>    47s       v1.12.7
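
If the nodes are Ready, you can also check that the kube-system pods (aws-node, kube-proxy, coredns) came up. This is just an extra sanity check, not something the setup above strictly requires:

$ kubectl get pods -n kube-system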

Summary

I have shown how to build an EKS environment with Terraform.
The config wrangling at the end is still a hassle and I would like to find a way around it.
Even so, it is far easier than setting everything up through the console.

