Building an EKS Environment with Ansible

Posted at 2019-08-19

Introduction

When it comes to building a Kubernetes environment on a public cloud, Terraform is probably the most popular choice, but this time I built an EKS environment with Ansible instead.

For building an EKS environment with Terraform, see my earlier article.

The code used in this post is available on GitHub.

Environment

OS: macOS Mojave 10.14.1
Ansible: 2.8.4 (Homebrew)
Python: 3.7.4
awscli: 1.16.220 (pip)
boto: 2.49.0 (pip)
boto3: 1.9.210 (pip)
botocore: 1.12.210 (pip)
kubectl: v1.10.11
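
For reference, the toolchain above can be installed roughly like this on macOS (a sketch; the versions you get will differ from those listed):

$ brew install ansible
$ pip3 install awscli boto boto3 botocore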

Folder Structure

├── ansible.cfg
├── inventory
│   └── inventory.ini
├── playbooks
│   ├── build_eks.yaml
│   └── destroy_eks.yaml (not covered)
└── roles
    └── eks
        ├── vars
        │   └── main.yaml
        ├── tasks
        │   ├── iam
        │   │   ├── create_iam_role.yaml
        │   │   └── delete_iam_role.yaml (not covered)
        │   ├── vpc
        │   │   ├── create_vpc.yaml
        │   │   └── delete_vpc.yaml (not covered)
        │   ├── eks
        │   │   ├── create_eks_cluster.yaml
        │   │   └── delete_eks_cluster.yaml (not covered)
        │   └── ec2
        │       ├── create_eks_worker.yaml
        │       ├── join_eks_cluster.yaml
        │       └── delete_eks_worker.yaml (not covered)
        ├── files
        │   ├── amazon-eks-nodegroup.yaml
        │   ├── ec2-trust-policy.json
        │   └── eks-trust-policy.json
        └── templates
            └── aws-auth-cm.yaml

This post skips explaining the contents of the teardown-related files.

Implementation

ansible.cfg

This is the Ansible configuration file. It defines the paths to the inventory file and the roles folder.

ansible.cfg
[defaults]
inventory = ./inventory/inventory.ini
roles_path = ./roles

inventory

This is the inventory file. Everything runs directly on the local machine this time.

inventory/inventory.ini
[local]
localhost ansible_connection=local

[local:vars]
ansible_python_interpreter=/usr/local/bin/python3
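
With ansible.cfg and the inventory in place, a quick connectivity check can be run before touching AWS (an optional step, not part of the repo):

$ ansible local -m ping

This should return "ping": "pong" for localhost.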

playbooks

This is where the tasks to execute are described. The build consists of the following five tasks:

  1. Create the IAM roles
  2. Create the VPC-related resources
  3. Create the EKS cluster
  4. Create the EKS worker nodes
  5. Join the worker nodes to the cluster
playbooks/build_eks.yaml
- name: BUILD EKS
  hosts: local
  gather_facts: false
  tasks:
    - name: CREATE IAM ROLE
      include_role:
        name: eks
        tasks_from: iam/create_iam_role.yaml
    - name: CREATE VPC
      include_role:
        name: eks
        tasks_from: vpc/create_vpc.yaml
    - name: CREATE EKS CLUSTER
      include_role:
        name: eks
        tasks_from: eks/create_eks_cluster.yaml
    - name: CREATE EKS WORKER NODES
      include_role:
        name: eks
        tasks_from: ec2/create_eks_worker.yaml
    - name: JOIN EKS WORKER NODES TO EKS CLUSTER
      include_role:
        name: eks
        tasks_from: ec2/join_eks_cluster.yaml

Set [].tasks[].include_role.name to the name of a folder under roles; here that is eks.

roles

vars

Variables are defined in the file below.

roles/eks/vars/main.yaml
common:
  project: ansible
  region: ap-northeast-1
  profile: default

vpc:
  name: "{{ common.project}}-vpc"
  cidr_block: "10.0.0.0/16"

subnets:
  - cidr: 10.0.10.0/24
    az: "{{ common.region }}a"
  - cidr: 10.0.11.0/24
    az: "{{ common.region }}c"
  - cidr: 10.0.12.0/24
    az: "{{ common.region }}d"

security_groups:
  - name: "{{ common.project }}-cluster-sg"
    description: "Security group for EKS cluster"
    rules:
      - group_name: "{{ common.project }}-worker-sg"
        group_desc: "Security group for EKS worker nodes"
        rule_desc: "Allow pods to communicate with the cluster API server"
        proto: tcp
        ports: 443
    rules_egress:
      - group_name: "{{ common.project }}-worker-sg"
        group_desc: "Security group for EKS worker nodes"
        rule_desc: "Allow the cluster control plane to communicate with the worker Kubelet and pods"
        proto: tcp
        from_port: 1025
        to_port: 65535
      - group_name: "{{ common.project }}-worker-sg"
        group_desc: "Security group for EKS worker nodes"
        rule_desc: "Allow the cluster control plane to communicate with pods running extension API servers on port 443"
        proto: tcp
        ports: 443
  - name: "{{ common.project }}-worker-sg"
    description: "Security group for EKS worker nodes"
    rules:
      - group_name: "{{ common.project }}-worker-sg"
        group_desc: "Security group for EKS worker nodes"
        rule_desc: "Allow worker nodes to communicate with each other"
        proto: all
        from_port: 1
        to_port: 65535
      - group_name: "{{ common.project }}-cluster-sg"
        group_desc: "Security group for EKS cluster"
        rule_desc: "Allow worker Kubelets and pods to receive communication from the cluster control plane"
        proto: tcp
        from_port: 1025
        to_port: 65535
      - group_name: "{{ common.project }}-cluster-sg"
        group_desc: "Security group for EKS cluster"
        rule_desc: "Allow pods running extension API servers on port 443 to receive communication from cluster control plane"
        proto: tcp
        ports: 443

eks_cluster:
  name: "{{ common.project }}-cluster"
  role_name: eks-cluster-iam-role
  version: "1.13"
  security_groups: "{{ common.project }}-cluster-sg"

eks_worker:
  stack_name: "{{ common.project }}-stack"
  role_name: eks-worker-iam-role
  nodegroup_name: "{{ common.project }}-ng"
  autoscaling_min_size: 1
  autoscaling_max_size: 4
  autoscaling_desired_size: 2
  instance_type: t3.medium
  image_id: ami-0fde798d17145fae1
  volume_size: 20
  key_name: ec2-key
  bootstrap_args: ""

common.profile is the profile name in ~/.aws/credentials. Running the following command writes the settings to ~/.aws/credentials and ~/.aws/config:

$ aws configure
~/.aws/credentials
[default]
aws_access_key_id = XXXXXXXX
aws_secret_access_key = YYYYYYYYYY
~/.aws/config
[default]
region = ap-northeast-1
output = json
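
To confirm the profile is picked up correctly, the caller identity can be checked (optional):

$ aws sts get-caller-identity --profile default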

tasks

Let's go through the tasks in order. First, the IAM roles.
The trust policy files are stored under roles/eks/files/.

roles/eks/tasks/iam/create_iam_role.yaml
- name: IAM | create EKS service role
  iam_role:
    name: "{{ eks_cluster.role_name }}"
    profile: "{{ common.profile }}"
    region: "{{ common.region }}"
    managed_policies:
      - AmazonEKSClusterPolicy
      - AmazonEKSServicePolicy
    assume_role_policy_document: "{{ lookup('file', 'eks-trust-policy.json') }}"
    description: "Allows EKS to manage clusters on your behalf."
  register: eks_cluster_iam_role_results

- name: IAM | create IAM worker node role
  iam_role:
    name: "{{ eks_worker.role_name }}"
    profile: "{{ common.profile }}"
    region: "{{ common.region }}"
    managed_policies:
      - AmazonEKSWorkerNodePolicy
      - AmazonEKS_CNI_Policy
      - AmazonEC2ContainerRegistryReadOnly
    assume_role_policy_document: "{{ lookup('file', 'ec2-trust-policy.json') }}"
  register: eks_worker_iam_role_results
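
The registered results are referenced later: eks_cluster_iam_role_results.arn is passed to the cluster creation task. If you want to double-check the roles from the CLI, something like this works (optional commands, not part of the playbooks):

$ aws iam get-role --role-name eks-cluster-iam-role --query Role.Arn
$ aws iam list-attached-role-policies --role-name eks-worker-iam-role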



Next come the VPC-related resources. Five kinds of resources are created:

  1. VPC
  2. Subnet
  3. Internet Gateway
  4. Route Table
  5. Security Group
roles/eks/tasks/vpc/create_vpc.yaml
- name: VPC | create VPC
  ec2_vpc_net:
    name: "{{ vpc.name }}"
    profile: "{{ common.profile }}"
    region: "{{ common.region }}"
    cidr_block: "{{ vpc.cidr_block }}"
  register: vpc_results

- name: VPC | create subnets 
  loop: "{{ subnets }}"
  ec2_vpc_subnet:
    profile: "{{ common.profile }}"
    region: "{{ common.region }}"
    vpc_id: "{{ vpc_results.vpc.id }}"
    cidr: "{{ item.cidr }}"
    az: "{{ item.az }}"
  register: subnet_results

- name: VPC | create igw
  ec2_vpc_igw:
    profile: "{{ common.profile }}"
    region: "{{ common.region }}"
    vpc_id: "{{ vpc_results.vpc.id }}"
  register: igw_results

- name: VPC | create public route table
  ec2_vpc_route_table:
    profile: "{{ common.profile }}"
    region: "{{ common.region }}"
    vpc_id: "{{ vpc_results.vpc.id }}"
    subnets: "{{ subnet_results.results | json_query('[].subnet.id') }}"
    routes:
      - dest: 0.0.0.0/0
        gateway_id: "{{ igw_results.gateway_id }}"
  register: rt_results

- name: VPC | create security groups 
  loop: "{{ security_groups }}"
  ec2_group:
    profile: "{{ common.profile }}"
    region: "{{ common.region }}"
    name: "{{ item.name }}"
    description: "{{ item.description }}"
    rules: "{{ item.rules }}"
    rules_egress: "{{ item.rules_egress|default(omit) }}"
    vpc_id: '{{ vpc_results.vpc.id }}'
    purge_rules: false
    purge_rules_egress: false
  register: sg_results
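
The json_query('[].subnet.id') filter used for the route table (and again when creating the cluster) extracts the subnet IDs from the registered loop results. A throwaway debug task like the one below (not in the repo; json_query requires the jmespath pip package either way) shows what it produces:

- name: VPC | show subnet ids (debug only)
  debug:
    msg: "{{ subnet_results.results | json_query('[].subnet.id') }}"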



Next is the EKS cluster.
Setting wait: true keeps the play from moving on until the cluster build completes.
Without this, the cluster information cannot be referenced when the worker nodes are created.

roles/eks/tasks/eks/create_eks_cluster.yaml
- name: EKS | create EKS cluster
  aws_eks_cluster:
    name: "{{ eks_cluster.name }}"
    profile: "{{ common.profile }}"
    region: "{{ common.region }}"
    version: "{{ eks_cluster.version }}"
    role_arn: "{{ eks_cluster_iam_role_results.arn }}"
    subnets: "{{ subnet_results.results | json_query('[].subnet.id') }}"
    security_groups: "{{ eks_cluster.security_groups }}"
    wait: true
  register: eks_cluster_results
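
Cluster creation takes a while (typically around ten minutes), and the registered eks_cluster_results is used by the worker node and join tasks below. While the task waits, the status can be watched from another terminal (optional; ansible-cluster is the name resolved from vars):

$ aws eks describe-cluster --name ansible-cluster --query cluster.status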



Next are the EKS worker nodes.
The template file amazon-eks-nodegroup.yaml is the official one, used unchanged (shown later).

roles/eks/tasks/ec2/create_eks_worker.yaml
- name: EC2 | create EKS worker nodes
  cloudformation:
    stack_name: "{{ eks_worker.stack_name }}"
    profile: "{{ common.profile }}"
    region: "{{ common.region }}"
    template: ../roles/eks/files/amazon-eks-nodegroup.yaml
    template_parameters:
      ClusterName: "{{ eks_cluster_results.name }}"
      ClusterControlPlaneSecurityGroup: "{{ ','.join(eks_cluster_results.resources_vpc_config.security_group_ids) }}"
      NodeGroupName: "{{ eks_worker.nodegroup_name }}"
      NodeAutoScalingGroupMinSize: "{{ eks_worker.autoscaling_min_size }}"
      NodeAutoScalingGroupDesiredCapacity: "{{ eks_worker.autoscaling_desired_size }}"
      NodeAutoScalingGroupMaxSize: "{{ eks_worker.autoscaling_max_size }}"
      NodeInstanceType: "{{ eks_worker.instance_type }}"
      NodeImageId: "{{ eks_worker.image_id }}"
      NodeVolumeSize: "{{ eks_worker.volume_size }}"
      KeyName: "{{ eks_worker.key_name }}"
      BootstrapArguments: "{{ eks_worker.bootstrap_args }}"
      VpcId: "{{ eks_cluster_results.resources_vpc_config.vpc_id }}"
      Subnets: "{{ ','.join(eks_cluster_results.resources_vpc_config.subnet_ids) }}"
  register: eks_worker_results
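
CloudFormation progress can likewise be followed from the CLI (optional; ansible-stack is the stack_name from vars):

$ aws cloudformation describe-stacks --stack-name ansible-stack --query 'Stacks[0].StackStatus'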



Finally, the worker nodes are joined to the EKS cluster.
The aws-auth-cm.yaml under roles/eks/templates is rendered with the worker node information and written under roles/eks/files with the same file name.

roles/eks/tasks/ec2/join_eks_cluster.yaml
- name: config | update kubeconfig
  shell: aws eks --region {{ common.region }} update-kubeconfig --name {{ eks_cluster_results.name }}

- name: EC2 | copy a new version of aws-auth-cm.yaml from template
  template:
    src: ../roles/eks/templates/aws-auth-cm.yaml
    dest: ../roles/eks/files/aws-auth-cm.yaml

- name: EC2 | join EKS worker nodes to EKS cluster
  shell: kubectl apply -f ../roles/eks/files/aws-auth-cm.yaml
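
If the apply succeeds, the ConfigMap should now exist in the kube-system namespace (an optional check):

$ kubectl -n kube-system get configmap aws-auth -o yaml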

files

amazon-eks-nodegroup.yaml is the template file used to create the EKS worker nodes. Its contents are identical to the official template:
https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-02-11/amazon-eks-nodegroup.yaml

roles/eks/files/amazon-eks-nodegroup.yaml
---
AWSTemplateFormatVersion: 2010-09-09
Description: Amazon EKS - Node Group

Parameters:

  KeyName:
    Description: The EC2 Key Pair to allow SSH access to the instances
    Type: AWS::EC2::KeyPair::KeyName

  NodeImageId:
    Description: AMI id for the node instances.
    Type: AWS::EC2::Image::Id

  NodeInstanceType:
    Description: EC2 instance type for the node instances
    Type: String
    Default: t3.medium
    ConstraintDescription: Must be a valid EC2 instance type
    AllowedValues:
      - t2.small
      - t2.medium
      - t2.large
      - t2.xlarge
      - t2.2xlarge
      - t3.nano
      - t3.micro
      - t3.small
      - t3.medium
      - t3.large
      - t3.xlarge
      - t3.2xlarge
      - m3.medium
      - m3.large
      - m3.xlarge
      - m3.2xlarge
      - m4.large
      - m4.xlarge
      - m4.2xlarge
      - m4.4xlarge
      - m4.10xlarge
      - m5.large
      - m5.xlarge
      - m5.2xlarge
      - m5.4xlarge
      - m5.12xlarge
      - m5.24xlarge
      - c4.large
      - c4.xlarge
      - c4.2xlarge
      - c4.4xlarge
      - c4.8xlarge
      - c5.large
      - c5.xlarge
      - c5.2xlarge
      - c5.4xlarge
      - c5.9xlarge
      - c5.18xlarge
      - i3.large
      - i3.xlarge
      - i3.2xlarge
      - i3.4xlarge
      - i3.8xlarge
      - i3.16xlarge
      - r3.xlarge
      - r3.2xlarge
      - r3.4xlarge
      - r3.8xlarge
      - r4.large
      - r4.xlarge
      - r4.2xlarge
      - r4.4xlarge
      - r4.8xlarge
      - r4.16xlarge
      - x1.16xlarge
      - x1.32xlarge
      - p2.xlarge
      - p2.8xlarge
      - p2.16xlarge
      - p3.2xlarge
      - p3.8xlarge
      - p3.16xlarge
      - p3dn.24xlarge
      - r5.large
      - r5.xlarge
      - r5.2xlarge
      - r5.4xlarge
      - r5.12xlarge
      - r5.24xlarge
      - r5d.large
      - r5d.xlarge
      - r5d.2xlarge
      - r5d.4xlarge
      - r5d.12xlarge
      - r5d.24xlarge
      - z1d.large
      - z1d.xlarge
      - z1d.2xlarge
      - z1d.3xlarge
      - z1d.6xlarge
      - z1d.12xlarge

  NodeAutoScalingGroupMinSize:
    Description: Minimum size of Node Group ASG.
    Type: Number
    Default: 1

  NodeAutoScalingGroupMaxSize:
    Description: Maximum size of Node Group ASG. Set to at least 1 greater than NodeAutoScalingGroupDesiredCapacity.
    Type: Number
    Default: 4

  NodeAutoScalingGroupDesiredCapacity:
    Description: Desired capacity of Node Group ASG.
    Type: Number
    Default: 3

  NodeVolumeSize:
    Description: Node volume size
    Type: Number
    Default: 20

  ClusterName:
    Description: The cluster name provided when the cluster was created. If it is incorrect, nodes will not be able to join the cluster.
    Type: String

  BootstrapArguments:
    Description: Arguments to pass to the bootstrap script. See files/bootstrap.sh in https://github.com/awslabs/amazon-eks-ami
    Type: String
    Default: ""

  NodeGroupName:
    Description: Unique identifier for the Node Group.
    Type: String

  ClusterControlPlaneSecurityGroup:
    Description: The security group of the cluster control plane.
    Type: AWS::EC2::SecurityGroup::Id

  VpcId:
    Description: The VPC of the worker instances
    Type: AWS::EC2::VPC::Id

  Subnets:
    Description: The subnets where workers can be created.
    Type: List<AWS::EC2::Subnet::Id>

Metadata:

  AWS::CloudFormation::Interface:
    ParameterGroups:
      - Label:
          default: EKS Cluster
        Parameters:
          - ClusterName
          - ClusterControlPlaneSecurityGroup
      - Label:
          default: Worker Node Configuration
        Parameters:
          - NodeGroupName
          - NodeAutoScalingGroupMinSize
          - NodeAutoScalingGroupDesiredCapacity
          - NodeAutoScalingGroupMaxSize
          - NodeInstanceType
          - NodeImageId
          - NodeVolumeSize
          - KeyName
          - BootstrapArguments
      - Label:
          default: Worker Network Configuration
        Parameters:
          - VpcId
          - Subnets

Resources:

  NodeInstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Path: "/"
      Roles:
        - !Ref NodeInstanceRole

  NodeInstanceRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service: ec2.amazonaws.com
            Action: sts:AssumeRole
      Path: "/"
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly

  NodeSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Security group for all nodes in the cluster
      VpcId: !Ref VpcId
      Tags:
        - Key: !Sub kubernetes.io/cluster/${ClusterName}
          Value: owned

  NodeSecurityGroupIngress:
    Type: AWS::EC2::SecurityGroupIngress
    DependsOn: NodeSecurityGroup
    Properties:
      Description: Allow node to communicate with each other
      GroupId: !Ref NodeSecurityGroup
      SourceSecurityGroupId: !Ref NodeSecurityGroup
      IpProtocol: -1
      FromPort: 0
      ToPort: 65535

  NodeSecurityGroupFromControlPlaneIngress:
    Type: AWS::EC2::SecurityGroupIngress
    DependsOn: NodeSecurityGroup
    Properties:
      Description: Allow worker Kubelets and pods to receive communication from the cluster control plane
      GroupId: !Ref NodeSecurityGroup
      SourceSecurityGroupId: !Ref ClusterControlPlaneSecurityGroup
      IpProtocol: tcp
      FromPort: 1025
      ToPort: 65535

  ControlPlaneEgressToNodeSecurityGroup:
    Type: AWS::EC2::SecurityGroupEgress
    DependsOn: NodeSecurityGroup
    Properties:
      Description: Allow the cluster control plane to communicate with worker Kubelet and pods
      GroupId: !Ref ClusterControlPlaneSecurityGroup
      DestinationSecurityGroupId: !Ref NodeSecurityGroup
      IpProtocol: tcp
      FromPort: 1025
      ToPort: 65535

  NodeSecurityGroupFromControlPlaneOn443Ingress:
    Type: AWS::EC2::SecurityGroupIngress
    DependsOn: NodeSecurityGroup
    Properties:
      Description: Allow pods running extension API servers on port 443 to receive communication from cluster control plane
      GroupId: !Ref NodeSecurityGroup
      SourceSecurityGroupId: !Ref ClusterControlPlaneSecurityGroup
      IpProtocol: tcp
      FromPort: 443
      ToPort: 443

  ControlPlaneEgressToNodeSecurityGroupOn443:
    Type: AWS::EC2::SecurityGroupEgress
    DependsOn: NodeSecurityGroup
    Properties:
      Description: Allow the cluster control plane to communicate with pods running extension API servers on port 443
      GroupId: !Ref ClusterControlPlaneSecurityGroup
      DestinationSecurityGroupId: !Ref NodeSecurityGroup
      IpProtocol: tcp
      FromPort: 443
      ToPort: 443

  ClusterControlPlaneSecurityGroupIngress:
    Type: AWS::EC2::SecurityGroupIngress
    DependsOn: NodeSecurityGroup
    Properties:
      Description: Allow pods to communicate with the cluster API Server
      GroupId: !Ref ClusterControlPlaneSecurityGroup
      SourceSecurityGroupId: !Ref NodeSecurityGroup
      IpProtocol: tcp
      ToPort: 443
      FromPort: 443

  NodeGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      DesiredCapacity: !Ref NodeAutoScalingGroupDesiredCapacity
      LaunchConfigurationName: !Ref NodeLaunchConfig
      MinSize: !Ref NodeAutoScalingGroupMinSize
      MaxSize: !Ref NodeAutoScalingGroupMaxSize
      VPCZoneIdentifier: !Ref Subnets
      Tags:
        - Key: Name
          Value: !Sub ${ClusterName}-${NodeGroupName}-Node
          PropagateAtLaunch: true
        - Key: !Sub kubernetes.io/cluster/${ClusterName}
          Value: owned
          PropagateAtLaunch: true
    UpdatePolicy:
      AutoScalingRollingUpdate:
        MaxBatchSize: 1
        MinInstancesInService: !Ref NodeAutoScalingGroupDesiredCapacity
        PauseTime: PT5M

  NodeLaunchConfig:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      AssociatePublicIpAddress: true
      IamInstanceProfile: !Ref NodeInstanceProfile
      ImageId: !Ref NodeImageId
      InstanceType: !Ref NodeInstanceType
      KeyName: !Ref KeyName
      SecurityGroups:
        - !Ref NodeSecurityGroup
      BlockDeviceMappings:
        - DeviceName: /dev/xvda
          Ebs:
            VolumeSize: !Ref NodeVolumeSize
            VolumeType: gp2
            DeleteOnTermination: true
      UserData:
        Fn::Base64:
          !Sub |
            #!/bin/bash
            set -o xtrace
            /etc/eks/bootstrap.sh ${ClusterName} ${BootstrapArguments}
            /opt/aws/bin/cfn-signal --exit-code $? \
                     --stack  ${AWS::StackName} \
                     --resource NodeGroup  \
                     --region ${AWS::Region}

Outputs:

  NodeInstanceRole:
    Description: The node instance role
    Value: !GetAtt NodeInstanceRole.Arn

  NodeSecurityGroup:
    Description: The security group for the node group
    Value: !Ref NodeSecurityGroup



Two trust policy JSON files are used when creating the IAM roles, one for the EKS cluster and one for the EKS worker nodes.

roles/eks/files/eks-trust-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
roles/eks/files/ec2-trust-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

templates

This is the manifest for the ConfigMap resource used to join the worker nodes to the EKS cluster.
The worker node instance role ARN is substituted into the rolearn value. Note the nested expression {{ '{{EC2PrivateDNSName}}' }}: it makes Ansible's template engine emit the literal string {{EC2PrivateDNSName}}, which is expanded on the EKS side when nodes register.

roles/eks/templates/aws-auth-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: {{ eks_worker_results.stack_outputs.NodeInstanceRole }}
      username: system:node:{{ '{{EC2PrivateDNSName}}' }}
      groups:
        - system:bootstrappers
        - system:nodes

Execution

Build the EKS environment with the following command:

$ ansible-playbook playbooks/build_eks.yaml

PLAY [BUILD EKS] ***************************************************************************************************************************************************************************************

TASK [CREATE IAM ROLE] *********************************************************************************************************************************************************************************

TASK [eks : IAM | create EKS service role] *************************************************************************************************************************************************************
changed: [localhost]

TASK [eks : IAM | create IAM worker node role] *********************************************************************************************************************************************************
changed: [localhost]

TASK [CREATE VPC] **************************************************************************************************************************************************************************************

TASK [eks : VPC | create VPC] **************************************************************************************************************************************************************************
changed: [localhost]

TASK [eks : VPC | create subnets] **********************************************************************************************************************************************************************
changed: [localhost] => (item={'cidr': '10.0.10.0/24', 'az': 'ap-northeast-1a'})
changed: [localhost] => (item={'cidr': '10.0.11.0/24', 'az': 'ap-northeast-1c'})
changed: [localhost] => (item={'cidr': '10.0.12.0/24', 'az': 'ap-northeast-1d'})

TASK [eks : VPC | create igw] **************************************************************************************************************************************************************************
changed: [localhost]

TASK [eks : VPC | create public route table] ***********************************************************************************************************************************************************
changed: [localhost]

TASK [eks : VPC | create security groups] **************************************************************************************************************************************************************
changed: [localhost] => (item={'name': 'ansible-cluster-sg', 'description': 'Security group for EKS cluster', 'rules': [{'group_name': 'ansible-worker-sg', 'group_desc': 'Security group for EKS worker nodes', 'rule_desc': 'Allow pods to communicate with the cluster API server', 'proto': 'tcp', 'ports': 443}], 'rules_egress': [{'group_name': 'ansible-worker-sg', 'group_desc': 'Security group for EKS worker nodes', 'rule_desc': 'Allow the cluster control plane to communicate with the worker Kubelet and pods', 'proto': 'tcp', 'from_port': 1025, 'to_port': 65535}, {'group_name': 'ansible-worker-sg', 'group_desc': 'Security group for EKS worker nodes', 'rule_desc': 'Allow the cluster control plane to communicate with pods running extension API servers on port 443', 'proto': 'tcp', 'ports': 443}]})
changed: [localhost] => (item={'name': 'ansible-worker-sg', 'description': 'Security group for EKS worker nodes', 'rules': [{'group_name': 'ansible-worker-sg', 'group_desc': 'Security group for EKS worker nodes', 'rule_desc': 'Allow worker nodes to communicate with each other', 'proto': 'all', 'from_port': 1, 'to_port': 65535}, {'group_name': 'ansible-cluster-sg', 'group_desc': 'Security group for EKS cluster', 'rule_desc': 'Allow worker Kubelets and pods to receive communication from the cluster control plane', 'proto': 'tcp', 'from_port': 1025, 'to_port': 65535}, {'group_name': 'ansible-cluster-sg', 'group_desc': 'Security group for EKS cluster', 'rule_desc': 'Allow pods running extension API servers on port 443 to receive communication from cluster control plane', 'proto': 'tcp', 'ports': 443}]})

TASK [CREATE EKS CLUSTER] ******************************************************************************************************************************************************************************

TASK [eks : EKS | create EKS cluster] ******************************************************************************************************************************************************************
changed: [localhost]

TASK [CREATE EKS WORKER NODES] *************************************************************************************************************************************************************************

TASK [eks : EC2 | create EKS worker nodes] *************************************************************************************************************************************************************
changed: [localhost]

TASK [JOIN EKS WORKER NODES TO EKS CLUSTER] ************************************************************************************************************************************************************

TASK [eks : config | update kubeconfig] ****************************************************************************************************************************************************************
changed: [localhost]

TASK [eks : EC2 | copy a new version of aws-auth-cm.yaml from template] ********************************************************************************************************************************
changed: [localhost]

TASK [eks : EC2 | join EKS worker nodes to EKS cluster] ************************************************************************************************************************************************
changed: [localhost]

PLAY RECAP *********************************************************************************************************************************************************************************************
localhost                  : ok=12   changed=12   unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Once the run completes, let's confirm with kubectl that the worker nodes have joined:

$ kubectl get node
NAME                                             STATUS    ROLES     AGE       VERSION
ip-10-0-11-206.ap-northeast-1.compute.internal   Ready     <none>    2m        v1.13.7-eks-c57ff8
ip-10-0-12-67.ap-northeast-1.compute.internal    Ready     <none>    2m        v1.13.7-eks-c57ff8



Although the file contents were not covered in this post, the EKS environment is torn down with the following command:

$ ansible-playbook playbooks/destroy_eks.yaml

PLAY [DESTROY EKS] *************************************************************************************************************************************************************************************

TASK [DELETE EKS WORKER NODES] *************************************************************************************************************************************************************************

TASK [eks : EC2 | delete EKS worker nodes] *************************************************************************************************************************************************************
changed: [localhost]

TASK [DELETE EKS CLUSTER] ******************************************************************************************************************************************************************************

TASK [eks : EKS | delete EKS cluster] ******************************************************************************************************************************************************************
changed: [localhost]

TASK [eks : EKS | wait 10 min for EKS cluster to be deleted] *******************************************************************************************************************************************
Pausing for 600 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
ok: [localhost]

TASK [DELETE VPC] **************************************************************************************************************************************************************************************

TASK [eks : VPC | get VPC] *****************************************************************************************************************************************************************************
ok: [localhost]

TASK [eks : VPC | get route table] *********************************************************************************************************************************************************************
ok: [localhost]

TASK [eks : VPC | delete public route table] ***********************************************************************************************************************************************************
skipping: [localhost] => (item={'id': 'rtb-07db0226caafb28d0', 'routes': [{'destination_cidr_block': '10.0.0.0/16', 'gateway_id': 'local', 'instance_id': None, 'interface_id': None, 'vpc_peering_connection_id': None, 'state': 'active', 'origin': 'CreateRouteTable'}], 'associations': [{'id': 'rtbassoc-03f60e6cd732c0987', 'route_table_id': 'rtb-07db0226caafb28d0', 'subnet_id': None, 'main': True}], 'tags': {}, 'vpc_id': 'vpc-0af7dd4f1891d0981'}) 
changed: [localhost] => (item={'id': 'rtb-0f8b7524bbf5e6dda', 'routes': [{'destination_cidr_block': '10.0.0.0/16', 'gateway_id': 'local', 'instance_id': None, 'interface_id': None, 'vpc_peering_connection_id': None, 'state': 'active', 'origin': 'CreateRouteTable'}, {'destination_cidr_block': '0.0.0.0/0', 'gateway_id': 'igw-04d8395d96f34316f', 'instance_id': None, 'interface_id': None, 'vpc_peering_connection_id': None, 'state': 'active', 'origin': 'CreateRoute'}], 'associations': [{'id': 'rtbassoc-09e75ccec74991762', 'route_table_id': 'rtb-0f8b7524bbf5e6dda', 'subnet_id': 'subnet-0ac96de4386bb63b7', 'main': False}, {'id': 'rtbassoc-0c274c4a1b95496ef', 'route_table_id': 'rtb-0f8b7524bbf5e6dda', 'subnet_id': 'subnet-0a399ac33c6f17d70', 'main': False}, {'id': 'rtbassoc-09ddc4e870c76a580', 'route_table_id': 'rtb-0f8b7524bbf5e6dda', 'subnet_id': 'subnet-0285ffb6ff7c19cdc', 'main': False}], 'tags': {}, 'vpc_id': 'vpc-0af7dd4f1891d0981'})

TASK [eks : VPC | delete igw] **************************************************************************************************************************************************************************
changed: [localhost]

TASK [eks : VPC | get security groups] *****************************************************************************************************************************************************************
ok: [localhost]

TASK [eks : VPC | set security group rule lists empty] *************************************************************************************************************************************************
changed: [localhost] => (item={'description': 'Security group for EKS cluster', 'group_name': 'ansible-cluster-sg', 'ip_permissions': [{'from_port': 443, 'ip_protocol': 'tcp', 'ip_ranges': [], 'ipv6_ranges': [], 'prefix_list_ids': [], 'to_port': 443, 'user_id_group_pairs': [{'description': 'Allow pods to communicate with the cluster API server', 'group_id': 'sg-09fd422bb4e7764c0', 'user_id': '601207319152'}]}], 'owner_id': '601207319152', 'group_id': 'sg-02dbf6f7fa528548b', 'ip_permissions_egress': [{'from_port': 1025, 'ip_protocol': 'tcp', 'ip_ranges': [], 'ipv6_ranges': [], 'prefix_list_ids': [], 'to_port': 65535, 'user_id_group_pairs': [{'description': 'Allow the cluster control plane to communicate with the worker Kubelet and pods', 'group_id': 'sg-09fd422bb4e7764c0', 'user_id': '601207319152'}]}, {'from_port': 443, 'ip_protocol': 'tcp', 'ip_ranges': [], 'ipv6_ranges': [], 'prefix_list_ids': [], 'to_port': 443, 'user_id_group_pairs': [{'description': 'Allow the cluster control plane to communicate with pods running extension API servers on port 443', 'group_id': 'sg-09fd422bb4e7764c0', 'user_id': '601207319152'}]}], 'vpc_id': 'vpc-0af7dd4f1891d0981', 'tags': {}})
changed: [localhost] => (item={'description': 'Security group for EKS worker nodes', 'group_name': 'ansible-worker-sg', 'ip_permissions': [{'ip_protocol': '-1', 'ip_ranges': [], 'ipv6_ranges': [], 'prefix_list_ids': [], 'user_id_group_pairs': [{'description': 'Allow worker nodes to communicate with each other', 'group_id': 'sg-09fd422bb4e7764c0', 'user_id': '601207319152'}]}, {'from_port': 1025, 'ip_protocol': 'tcp', 'ip_ranges': [], 'ipv6_ranges': [], 'prefix_list_ids': [], 'to_port': 65535, 'user_id_group_pairs': [{'description': 'Allow worker Kubelets and pods to receive communication from the cluster control plane', 'group_id': 'sg-02dbf6f7fa528548b', 'user_id': '601207319152'}]}, {'from_port': 443, 'ip_protocol': 'tcp', 'ip_ranges': [], 'ipv6_ranges': [], 'prefix_list_ids': [], 'to_port': 443, 'user_id_group_pairs': [{'description': 'Allow pods running extension API servers on port 443 to receive communication from cluster control plane', 'group_id': 'sg-02dbf6f7fa528548b', 'user_id': '601207319152'}]}], 'owner_id': '601207319152', 'group_id': 'sg-09fd422bb4e7764c0', 'ip_permissions_egress': [{'ip_protocol': '-1', 'ip_ranges': [{'cidr_ip': '0.0.0.0/0'}], 'ipv6_ranges': [], 'prefix_list_ids': [], 'user_id_group_pairs': []}], 'vpc_id': 'vpc-0af7dd4f1891d0981', 'tags': {}})
changed: [localhost] => (item={'description': 'default VPC security group', 'group_name': 'default', 'ip_permissions': [{'ip_protocol': '-1', 'ip_ranges': [], 'ipv6_ranges': [], 'prefix_list_ids': [], 'user_id_group_pairs': [{'group_id': 'sg-0edd4b79db4e338cf', 'user_id': '601207319152'}]}], 'owner_id': '601207319152', 'group_id': 'sg-0edd4b79db4e338cf', 'ip_permissions_egress': [{'ip_protocol': '-1', 'ip_ranges': [{'cidr_ip': '0.0.0.0/0'}], 'ipv6_ranges': [], 'prefix_list_ids': [], 'user_id_group_pairs': []}], 'vpc_id': 'vpc-0af7dd4f1891d0981', 'tags': {}})

TASK [eks : VPC | delete security groups] **************************************************************************************************************************************************************
changed: [localhost] => (item={'description': 'Security group for EKS cluster', 'group_name': 'ansible-cluster-sg', 'ip_permissions': [{'from_port': 443, 'ip_protocol': 'tcp', 'ip_ranges': [], 'ipv6_ranges': [], 'prefix_list_ids': [], 'to_port': 443, 'user_id_group_pairs': [{'description': 'Allow pods to communicate with the cluster API server', 'group_id': 'sg-09fd422bb4e7764c0', 'user_id': '601207319152'}]}], 'owner_id': '601207319152', 'group_id': 'sg-02dbf6f7fa528548b', 'ip_permissions_egress': [{'from_port': 1025, 'ip_protocol': 'tcp', 'ip_ranges': [], 'ipv6_ranges': [], 'prefix_list_ids': [], 'to_port': 65535, 'user_id_group_pairs': [{'description': 'Allow the cluster control plane to communicate with the worker Kubelet and pods', 'group_id': 'sg-09fd422bb4e7764c0', 'user_id': '601207319152'}]}, {'from_port': 443, 'ip_protocol': 'tcp', 'ip_ranges': [], 'ipv6_ranges': [], 'prefix_list_ids': [], 'to_port': 443, 'user_id_group_pairs': [{'description': 'Allow the cluster control plane to communicate with pods running extension API servers on port 443', 'group_id': 'sg-09fd422bb4e7764c0', 'user_id': '601207319152'}]}], 'vpc_id': 'vpc-0af7dd4f1891d0981', 'tags': {}})
changed: [localhost] => (item={'description': 'Security group for EKS worker nodes', 'group_name': 'ansible-worker-sg', 'ip_permissions': [{'ip_protocol': '-1', 'ip_ranges': [], 'ipv6_ranges': [], 'prefix_list_ids': [], 'user_id_group_pairs': [{'description': 'Allow worker nodes to communicate with each other', 'group_id': 'sg-09fd422bb4e7764c0', 'user_id': '601207319152'}]}, {'from_port': 1025, 'ip_protocol': 'tcp', 'ip_ranges': [], 'ipv6_ranges': [], 'prefix_list_ids': [], 'to_port': 65535, 'user_id_group_pairs': [{'description': 'Allow worker Kubelets and pods to receive communication from the cluster control plane', 'group_id': 'sg-02dbf6f7fa528548b', 'user_id': '601207319152'}]}, {'from_port': 443, 'ip_protocol': 'tcp', 'ip_ranges': [], 'ipv6_ranges': [], 'prefix_list_ids': [], 'to_port': 443, 'user_id_group_pairs': [{'description': 'Allow pods running extension API servers on port 443 to receive communication from cluster control plane', 'group_id': 'sg-02dbf6f7fa528548b', 'user_id': '601207319152'}]}], 'owner_id': '601207319152', 'group_id': 'sg-09fd422bb4e7764c0', 'ip_permissions_egress': [{'ip_protocol': '-1', 'ip_ranges': [{'cidr_ip': '0.0.0.0/0'}], 'ipv6_ranges': [], 'prefix_list_ids': [], 'user_id_group_pairs': []}], 'vpc_id': 'vpc-0af7dd4f1891d0981', 'tags': {}})
skipping: [localhost] => (item={'description': 'default VPC security group', 'group_name': 'default', 'ip_permissions': [{'ip_protocol': '-1', 'ip_ranges': [], 'ipv6_ranges': [], 'prefix_list_ids': [], 'user_id_group_pairs': [{'group_id': 'sg-0edd4b79db4e338cf', 'user_id': '601207319152'}]}], 'owner_id': '601207319152', 'group_id': 'sg-0edd4b79db4e338cf', 'ip_permissions_egress': [{'ip_protocol': '-1', 'ip_ranges': [{'cidr_ip': '0.0.0.0/0'}], 'ipv6_ranges': [], 'prefix_list_ids': [], 'user_id_group_pairs': []}], 'vpc_id': 'vpc-0af7dd4f1891d0981', 'tags': {}}) 

TASK [eks : VPC | delete subnets] **********************************************************************************************************************************************************************
changed: [localhost] => (item={'cidr': '10.0.10.0/24', 'az': 'ap-northeast-1a'})
changed: [localhost] => (item={'cidr': '10.0.11.0/24', 'az': 'ap-northeast-1c'})
changed: [localhost] => (item={'cidr': '10.0.12.0/24', 'az': 'ap-northeast-1d'})

TASK [eks : VPC | delete VPC] **************************************************************************************************************************************************************************
changed: [localhost]

TASK [DELETE IAM ROLE] *********************************************************************************************************************************************************************************

TASK [eks : IAM | delete EKS service role] *************************************************************************************************************************************************************
changed: [localhost]

TASK [eks : IAM | delete IAM worker node role] *********************************************************************************************************************************************************
changed: [localhost]

PLAY RECAP *********************************************************************************************************************************************************************************************
localhost                  : ok=14   changed=10   unreachable=0    failed=0    skipped=0    rescued=0    ignored=0 
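
For reference, destroy_eks.yaml mirrors the build playbook with the order reversed. Below is a sketch inferred from the task names above and the folder structure; the actual file is in the GitHub repo:

playbooks/destroy_eks.yaml
- name: DESTROY EKS
  hosts: local
  gather_facts: false
  tasks:
    - name: DELETE EKS WORKER NODES
      include_role:
        name: eks
        tasks_from: ec2/delete_eks_worker.yaml
    - name: DELETE EKS CLUSTER
      include_role:
        name: eks
        tasks_from: eks/delete_eks_cluster.yaml
    - name: DELETE VPC
      include_role:
        name: eks
        tasks_from: vpc/delete_vpc.yaml
    - name: DELETE IAM ROLE
      include_role:
        name: eks
        tasks_from: iam/delete_iam_role.yaml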

Summary

This post showed how to build an EKS environment with Ansible.

Ansible's appeal is that everything is written in YAML, but handling dependencies (e.g., the EKS cluster must exist before the worker nodes are created) and maintaining separate resource-deletion files is very tedious, so it is a poor fit for cloud provisioning.

Compared with AWS CDK, Pulumi, and Terraform, there seems to be little benefit to using Ansible here.

References

https://docs.ansible.com/ansible/latest/modules/list_of_cloud_modules.html
https://github.com/lgg42/ansible-role-eks
https://github.com/justindav1s/ansible-aws-eks
https://github.com/rishabh-bohra/ansible-aws-eks
