
Deploying Django to Fargate with Terraform

Posted at 2019-04-28

Day 2 of ただの集団 AdventCalendar PtW.2019

Introduction

As the title says, this post is about deploying an app built with Django to Fargate on AWS ECS.
I recently started learning Terraform, so I used it to provision the AWS resources, and this article is my working notes on that.

The steps are as follows.
1. Create a Django skeleton
2. Prepare Docker
3. Provision the resources needed for the ECS deployment with Terraform
4. Push the images to ECR
5. Deploy with ecs-cli

0. Prerequisites

$ brew install awscli
$ aws configure --profile prototype
$ brew install amazon-ecs-cli
$ brew install tfenv
$ tfenv install 0.11.13

1. Create a Django skeleton

Since the focus this time is on deployment rather than on building an application, the app is kept simple: it only exposes a health-check endpoint that returns status code 200.

(1) Create the skeleton

$ mkdir prototype
$ cd prototype
$ mkdir api
$ cd api
$ vim requirements.txt
Django==2.2
gunicorn==19.9.0
$ pip install -r requirements.txt
$ django-admin startproject config . 
$ cd config

(2) Create views.py

from django.http import HttpResponse
import logging

logger = logging.getLogger(__name__)


def health_check(request):
    logger.info("healthy")
    return HttpResponse("health check passed")

(3) Edit settings.py

ALLOWED_HOSTS = ['*']

STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'static')

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'develop': {
            'format': '%(asctime)s [%(levelname)s] %(pathname)s:%(lineno)d '
                      '%(message)s'
        },
    },
    'handlers': {
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
            'formatter': 'develop',
        },
    },
    'loggers': {
        '': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'django': {
            'handlers': ['console'],
            'level': 'INFO',
            'propagate': False,
        },
        'django.db.backends': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
    }
}

(4) Edit urls.py

from django.conf import settings
from django.conf.urls import url
from django.conf.urls.static import static
from django.contrib import admin
from django.urls import path
from . import views

urlpatterns = [
    path('health_check', views.health_check, name='health_check'),
    path('admin/', admin.site.urls),
] + static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)

(5) Verify it works

$ cd ../
$ python manage.py collectstatic
$ python manage.py runserver
$ curl localhost:8000/health_check

2. Prepare Docker

(1) Create the Dockerfile for Django under api

FROM python:3.7-alpine
WORKDIR /api
ADD . /api/
RUN pip install -r requirements.txt

(2) Prepare nginx

$ mkdir nginx
$ cd nginx
$ vim nginx.conf
user nginx;
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    include /etc/nginx/mime.types;
    client_max_body_size 100M;
    server {
        listen 80;
        server_name localhost;
        charset utf-8;
        location / {
            proxy_pass http://api:8000;
        }
    }
}

Dockerfile

FROM nginx:latest

ADD nginx.conf /etc/nginx/nginx.conf

(3) Create docker-compose.yml

version: '3'
services:
  api:
    build: ./api
    command: gunicorn -w 1 --bind 0.0.0.0:8000 config.wsgi
    ports:
      - "8000:8000"

  nginx:
    build: ./nginx
    command: nginx -g 'daemon off;'
    ports:
      - "80:80"

(4) Verify it works

$ docker-compose up
$ curl localhost:80/health_check

(5) Check the current folder layout

$ tree
prototype
├── api
│   ├── Dockerfile
│   ├── __init__.py
│   ├── config
│   │   ├── __init__.py
│   │   ├── settings.py
│   │   ├── urls.py
│   │   ├── views.py
│   │   └── wsgi.py
│   ├── db.sqlite3
│   ├── manage.py
│   └── requirements.txt
├── docker-compose.yml
└── nginx
    ├── Dockerfile
    └── nginx.conf

3. Provision the resources needed for the ECS deployment with Terraform

(1) Create a terraform directory directly under prototype and do the initial setup

* This time I won't set up an S3 bucket for storing the Terraform state (if you want a remote backend, see the sketch right after the setup commands below).
* For the sake of this article, all resource definitions go into a single tf file.

$ mkdir terraform
$ cd terraform
$ vim main.tf
# Step1
variable "access_key" {}
variable "secret_key" {}
variable "region" {
  default = "ap-northeast-1"
}

provider "aws" {
  profile = "default"
  access_key = "${var.access_key}"
  secret_key = "${var.secret_key}"
  region = "${var.region}"
}
$ export TF_VAR_access_key=Your_access_key
$ export TF_VAR_secret_key=Your_secret_key
$ terraform init
$ terraform plan
$ terraform apply
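
For reference, if you do want to keep the Terraform state in S3, a minimal backend sketch looks like the following (the bucket name is a placeholder of mine, and the bucket has to exist before running terraform init):

# Optional: remote state in S3 (not used in this article)
terraform {
  backend "s3" {
    bucket = "prototype-terraform-state" # placeholder bucket name
    key    = "prototype/terraform.tfstate"
    region = "ap-northeast-1"
  }
}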

(2) Set up the VPC, subnets, internet gateway, and route table

* From here on, run plan and apply as appropriate

$ vim main.tf
# Step2: Create the VPC
resource "aws_vpc" "prototype" {
  cidr_block = "10.0.0.0/16"
  tags {
    Name = "Prototype VPC"
  }
}

# Step3: Create the subnets
resource "aws_subnet" "api_a" {
  vpc_id = "${aws_vpc.prototype.id}"
  cidr_block = "10.0.1.0/24"
  availability_zone = "${var.region}a"
  tags {
    Name = "Public Subnet A"
  }
}

resource "aws_subnet" "api_b" {
  vpc_id = "${aws_vpc.prototype.id}"
  cidr_block = "10.0.2.0/24"
  availability_zone = "${var.region}c"
  tags {
    Name = "Public Subnet B"
  }
}

# Step4: Create the internet gateway
resource "aws_internet_gateway" "prototype" {
  vpc_id = "${aws_vpc.prototype.id}"

  tags {
    Name = "Prototype Internet Gateway"
  }
}

# Step5: Create the route table
resource "aws_route_table" "prototype" {
  vpc_id = "${aws_vpc.prototype.id}"

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.prototype.id}"
  }

  tags {
    Name = "Prototype Route Table"
  }
}

# Step6: Associate the route table with the subnets
resource "aws_route_table_association" "api_a" {
  subnet_id = "${aws_subnet.api_a.id}"
  route_table_id = "${aws_route_table.prototype.id}"
}

resource "aws_route_table_association" "api_b" {
  subnet_id = "${aws_subnet.api_b.id}"
  route_table_id = "${aws_route_table.prototype.id}"
}

(3) Set up the security groups, the ALB, and the S3 bucket for ALB logs

# Step7: Create the security groups
resource "aws_security_group" "prototype_alb" {
  name = "prototype-alb"
  vpc_id = "${aws_vpc.prototype.id}"

  ingress {
    from_port = 80
    to_port = 80
    protocol = "tcp"
    cidr_blocks = [
      "0.0.0.0/0"]
  }

  egress {
    from_port = 0
    to_port = 0
    protocol = "-1"
    cidr_blocks = [
      "0.0.0.0/0"]
  }

  tags {
    Name = "Prototype ALB SG"
  }
}

resource "aws_security_group" "prototype_api" {
  name = "prototype-api"
  vpc_id = "${aws_vpc.prototype.id}"

  ingress {
    from_port = 0
    to_port = 65535
    protocol = "tcp"

    security_groups = [
      "${aws_security_group.prototype_alb.id}",
    ]
  }

  egress {
    from_port = 0
    to_port = 0
    protocol = "-1"
    cidr_blocks = [
      "0.0.0.0/0"]
  }

  tags {
    Name = "Prototype API SG"
  }
}


# Step8: Create the S3 bucket for ALB logs
resource "aws_s3_bucket" "alb_log" {
  bucket = "alb-log-20190428" # change the bucket name as appropriate
  lifecycle_rule {
    enabled = true
    expiration {
      days = 180
    }
  }
}

data "aws_iam_policy_document" "alb_log" {
  statement {
    effect = "Allow"
    actions = ["s3:PutObject"]
    resources = ["arn:aws:s3:::${aws_s3_bucket.alb_log.id}/*"]
    principals {
      type = "AWS"
      identifiers = ["582318560864"]
    }
  }
}

resource "aws_s3_bucket_policy" "alb_log" {
  bucket = "${aws_s3_bucket.alb_log.id}"
  policy = "${data.aws_iam_policy_document.alb_log.json}"
}


# Step9: Create the ALB
resource "aws_alb" "prototype" {
  name = "prototype"
  security_groups = ["${aws_security_group.prototype_alb.id}"]
  subnets = [
    "${aws_subnet.api_a.id}",
    "${aws_subnet.api_b.id}",
  ]
  internal = false
  enable_deletion_protection = true
  access_logs {
    bucket = "${aws_s3_bucket.alb_log.id}"
    enabled = true
  }
}

resource "aws_alb_target_group" "prototype" {
  name = "prototype"
  port = 8000
  protocol = "HTTP"
  vpc_id = "${aws_vpc.prototype.id}"
  target_type = "ip"

  health_check {
    interval = 60
    path = "/health_check"
    port = 80
    protocol = "HTTP"
    timeout = 30
    unhealthy_threshold = 3
    matcher = 200
  }
}

resource "aws_alb_listener" "prototype" {
  load_balancer_arn = "${aws_alb.prototype.arn}"
  port = "80"
  protocol = "HTTP"

  default_action {
    target_group_arn = "${aws_alb_target_group.prototype.arn}"
    type = "forward" # リクエストをターゲットグループに転送
  }
}

(4) Create the ECS cluster, CloudWatch log groups, and task execution role

# Step10: Create the ECS cluster
resource "aws_ecs_cluster" "prototype" {
  name = "prototype"
}


# Step11: Create the CloudWatch log groups
resource "aws_cloudwatch_log_group" "prototype_api" {
  name = "/ecs/api"
  retention_in_days = 180
}

resource "aws_cloudwatch_log_group" "prototype_nginx" {
  name = "/ecs/nginx"
  retention_in_days = 180
}


# Step12: Create the ECS task execution role
resource "aws_iam_role" "ecs_task_execution" {
  name = "ecs-task-execution"
  assume_role_policy = "${data.aws_iam_policy_document.ecs_tasks_role.json}"
}

data "aws_iam_policy_document" "ecs_tasks_role" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type = "Service"
      identifiers = ["ecs-tasks.amazonaws.com"]
    }
  }
}

resource "aws_iam_policy" "ecs_task_execution" {
  name = "ecs-task-execution"
  policy = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"

}

resource "aws_iam_role_policy_attachment" "ecs_task_execution" {
  role = "${aws_iam_role.ecs_task_execution.name}"
  policy_arn = "${aws_iam_policy.ecs_task_execution.arn}"
}

4. Push the images to ECR

(1) Rewrite proxy_pass in nginx.conf. With the awsvpc network mode, containers in the same task share a network interface, so nginx reaches the Django container via 127.0.0.1 instead of the Compose service name.

        location / {
            proxy_pass http://127.0.0.1:8000;
        }

(2) Push the nginx image to ECR

$ aws ecr create-repository --repository-name nginx --region ap-northeast-1
$ $(aws ecr get-login --no-include-email --region ap-northeast-1)
$ cd nginx
$ docker build -t nginx .
$ docker tag nginx:latest ${registryId}.dkr.ecr.ap-northeast-1.amazonaws.com/nginx:latest
$ docker push ${registryId}.dkr.ecr.ap-northeast-1.amazonaws.com/nginx:latest

(3) Push the Django image to ECR

$ aws ecr create-repository --repository-name api --region ap-northeast-1
$ cd ../api
$ docker build -t api .
$ docker tag api:latest ${registryId}.dkr.ecr.ap-northeast-1.amazonaws.com/api:latest
$ docker push ${registryId}.dkr.ecr.ap-northeast-1.amazonaws.com/api:latest
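
As a side note, the ECR repositories themselves could also be managed from Terraform instead of the aws ecr create-repository commands above. A minimal sketch (the resource names are my own):

# Optional: manage the ECR repositories in Terraform as well
resource "aws_ecr_repository" "nginx" {
  name = "nginx"
}

resource "aws_ecr_repository" "api" {
  name = "api"
}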

5. Deploy with ecs-cli

(1) Create docker-compose.production.yml with the CloudWatch Logs configuration

version: '3'
services:
  api:
    image: ${registryId}.dkr.ecr.ap-northeast-1.amazonaws.com/api:latest
    logging:
      driver: awslogs
      options:
        awslogs-group: /ecs/api
        awslogs-region: ap-northeast-1
        awslogs-stream-prefix: api

  nginx:
    image: ${registryId}.dkr.ecr.ap-northeast-1.amazonaws.com/nginx:latest
    logging:
      driver: awslogs
      options:
        awslogs-group: /ecs/nginx
        awslogs-region: ap-northeast-1
        awslogs-stream-prefix: nginx

(2) Create ecs-params.yml and fill in the required values. The subnet and security group IDs (and the target group ARN used later) can be looked up via Terraform outputs; see the sketch after this file.

version: 1
task_definition:
  ecs_network_mode: awsvpc
  task_execution_role: ecs-task-execution
  task_size:
    cpu_limit: 256
    mem_limit: 512
  services:
    api:
      essential: true
    nginx:
      essential: true


run_params:
  network_configuration:
    awsvpc_configuration:
      subnets:
        - ${"aws_subnet" "api_a"のID}
        - ${"aws_subnet" "api_b"のID}
      security_groups:
        - ${"aws_security_group" "prototype_alb"のID}

(3) Check the current folder layout

$ tree
prototype
├── api
│   ├── Dockerfile
│   ├── __init__.py
│   ├── config
│   │   ├── __init__.py
│   │   ├── settings.py
│   │   ├── urls.py
│   │   ├── views.py
│   │   └── wsgi.py
│   ├── db.sqlite3
│   ├── manage.py
│   └── requirements.txt
├── docker-compose.yml
├── docker-compose.production.yml
├── ecs-params.yml
├── terraform
│   └── main.tf
└── nginx
    ├── Dockerfile
    └── nginx.conf

(4) Deploy with ecs-cli

ecs-cli \
    compose \
        --verbose \
        --file docker-compose.yml \
        --file docker-compose.production.yml \
        --ecs-params ecs-params.yml \
        --region ap-northeast-1 \
        --cluster prototype \
        --project-name prototype-ali \
    service up \
        --launch-type FARGATE \
        --target-group-arn <ARN of the target group> \
        --container-name nginx \
        --container-port 80

(5) Access it via the ALB DNS name
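
The DNS name shows up in the console, or it can be exposed as a Terraform output as well (a sketch; the output name is my own):

output "alb_dns_name" {
  value = "${aws_alb.prototype.dns_name}"
}

curl http://<ALB DNS name>/health_check should return "health check passed" once the targets are healthy.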

(6) Take the service down with ecs-cli

ecs-cli \
    compose \
        --verbose \
        --file docker-compose.yml \
        --file docker-compose.production.yml \
        --ecs-params ecs-params.yml \
        --region ap-northeast-1 \
        --cluster prototype \
        --project-name prototype-ali \
    service down

Wrapping up

With this in place I can finally focus on developing the Django app itself.
I actually built a bit more on top of this, but the post is getting long, so I'll stop here for now.
If I find the time, I'll write a follow-up.

References

  1. ecs-cli compose service
  2. ecs-cli compose service up
  3. Task Definition Parameters