
Routing logs from AWS Fargate via AWS FireLens → Amazon Kinesis Data Firehose → Amazon S3


What's this?

I wanted to try out log routing from AWS FireLens to Amazon Kinesis Data Firehose, so here goes.

The plan

Here is what I will try:

  • Build an AWS Fargate cluster running nginx, with AWS FireLens (AWS for Fluent Bit) as a sidecar
  • Send the nginx logs to Amazon Kinesis Data Firehose via AWS FireLens (AWS for Fluent Bit)
  • Have Amazon Kinesis Data Firehose forward the received logs on to Amazon S3
  • Send the AWS for Fluent Bit container's own logs to Amazon CloudWatch Logs
  • Build the whole environment with Terraform

Environment

Here is the environment for this article.

$ terraform version
Terraform v0.13.3
+ provider registry.terraform.io/hashicorp/aws v3.8.0


$ aws --version
aws-cli/2.0.53 Python/3.7.3 Linux/4.15.0-112-generic exe/x86_64.ubuntu.20

AWS credentials are assumed to be set via environment variables.

$ export AWS_ACCESS_KEY_ID=...
$ export AWS_SECRET_ACCESS_KEY=...
$ export AWS_DEFAULT_REGION=ap-northeast-1

Defining the Terraform configuration

Now, let's define the environment in Terraform configuration files.

First, define the terraform block.

main.tf

terraform {
  required_version = "0.13.3"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.8.0"
    }
  }
}

provider "aws" {
}

The VPC, security groups, and ALB are also created, but those definitions are collected at the end of this article.

They are built from Terraform modules, so their outputs are stored in local values and referenced from here on.

locals {
  vpc_id = module.vpc.vpc_id

  private_subnets                = module.vpc.private_subnets
  nginx_service_security_groups  = [module.nginx_service_sg.this_security_group_id]
  load_balancer_target_group_arn = module.load_balancer.target_group_arns[0]
}

Defining the Amazon Kinesis Data Firehose resources

First, let's define the resources related to Amazon Kinesis Data Firehose.

The Kinesis Data Firehose delivery stream created here will be the destination for the logs of the containers running on AWS Fargate.

An example of routing logs to Amazon Kinesis Data Firehose with AWS FireLens is given here, which I will use as a reference:

Creating a task definition that uses a FireLens configuration

There is also an explanation of the IAM roles involved:

Controlling access with Amazon Kinesis Data Firehose

The assume-role policy and the permissions required when delivering to Amazon S3 are described here:

Assigning an IAM role to Kinesis Data Firehose

Grant Kinesis Data Firehose access to an Amazon S3 destination

When creating a Kinesis Data Firehose delivery stream with Amazon S3 as the destination in Terraform, there appear to be two ways to do it: s3_configuration and extended_s3_configuration. extended_s3_configuration is s3_configuration extended to support things like data transformation, so as a rule it seems best to use that one.

aws_kinesis_firehose_delivery_stream / Extended S3 Destination

aws_kinesis_firehose_delivery_stream / S3 Destination

The IAM role assigned to Amazon Kinesis Data Firehose:

data "aws_iam_policy_document" "firehose_assume_role" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["firehose.amazonaws.com"]
    }
  }
}

data "aws_iam_policy_document" "log_delivery_firehose_policy_document" {
  statement {
    actions = [
      "s3:AbortMultipartUpload",
      "s3:GetBucketLocation",
      "s3:GetObject",
      "s3:ListBucket",
      "s3:ListBucketMultipartUploads",
      "s3:PutObject",
      "kinesis:DescribeStream",
      "kinesis:GetShardIterator",
      "kinesis:GetRecords",
      "kinesis:ListShards",
      "kms:Decrypt",
      "kms:GenerateDataKey"
    ]

    resources = ["*"]
  }
}

resource "aws_iam_policy" "log_delivery_firehose_role_policy" {
  name   = "MyLogDeliveryFirehosePolicy"
  policy = data.aws_iam_policy_document.log_delivery_firehose_policy_document.json
}

resource "aws_iam_role" "log_delivery_firehose_role" {
  name               = "MyLogDeliveryFirehoseRole"
  assume_role_policy = data.aws_iam_policy_document.firehose_assume_role.json
}

resource "aws_iam_role_policy_attachment" "log_delivery_firehose_role_policy_attachment" {
  role       = aws_iam_role.log_delivery_firehose_role.name
  policy_arn = aws_iam_policy.log_delivery_firehose_role_policy.arn
}

The IAM role's resource restrictions are rather loose, though...
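
As a rough sketch (using the aws_s3_bucket.log_destination bucket defined just below; not how this article actually ran), the statement could be narrowed to the S3 actions and the destination bucket. The kinesis:* and kms:* actions are only needed when the source is a Kinesis data stream or a customer-managed KMS key is involved, so they are dropped here.

data "aws_iam_policy_document" "log_delivery_firehose_policy_document" {
  statement {
    # Only the S3 permissions Firehose needs to write delivery objects
    actions = [
      "s3:AbortMultipartUpload",
      "s3:GetBucketLocation",
      "s3:GetObject",
      "s3:ListBucket",
      "s3:ListBucketMultipartUploads",
      "s3:PutObject"
    ]

    # Scope to the destination bucket and its objects instead of "*"
    resources = [
      aws_s3_bucket.log_destination.arn,
      "${aws_s3_bucket.log_destination.arn}/*"
    ]
  }
}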

The Amazon S3 bucket that serves as the Firehose delivery destination:

resource "aws_s3_bucket" "log_destination" {
  bucket = "nginx-cluster-log-bucket"
  acl    = "private"

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}

The Kinesis Data Firehose delivery stream:

resource "aws_kinesis_firehose_delivery_stream" "log_delivery_stream" {
  name        = "nginx-cluster-log-delivery-stream"
  destination = "extended_s3"

  server_side_encryption {
    enabled = true
    key_type = "AWS_OWNED_CMK"
  }

  extended_s3_configuration {
    bucket_arn = aws_s3_bucket.log_destination.arn
    role_arn   = aws_iam_role.log_delivery_firehose_role.arn

    processing_configuration {
      enabled = "false"
    }
  }
}

No transformation processing is performed this time.

By the way, a Kinesis Data Firehose delivery stream can only have a single destination.

Amazon Kinesis Data Firehose FAQs

Q: Can a single delivery stream deliver data to multiple Amazon S3 buckets?

Currently, a single delivery stream can only deliver data to a single Amazon S3 bucket. If you want to deliver data to multiple S3 buckets, you can create multiple delivery streams.

Q: Can a single delivery stream deliver data to multiple Amazon Redshift clusters or tables?

Currently, a single delivery stream can only deliver data to a single Amazon Redshift cluster or table. If you want to deliver data to multiple Redshift clusters or tables, you can create multiple delivery streams.

Q: Can a single delivery stream deliver data to multiple Amazon Elasticsearch Service domains or indexes?

Currently, a single delivery stream can only deliver data to a single Amazon Elasticsearch Service domain and index. If you want to deliver data to multiple Amazon Elasticsearch domains or indexes, you can create multiple delivery streams.

It does look possible to deliver to Amazon S3 at the same time, as a backup of another destination (a sketch follows the links below).

What is Amazon Kinesis Data Firehose?

Choose destination
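
For the S3 destination used in this article, the closest Terraform option looks to be s3_backup_mode in extended_s3_configuration, which backs up the untransformed source records to a second bucket (mainly relevant when data transformation is enabled). A minimal sketch, assuming a hypothetical aws_s3_bucket.log_backup bucket that this article does not actually create:

  extended_s3_configuration {
    bucket_arn = aws_s3_bucket.log_destination.arn
    role_arn   = aws_iam_role.log_delivery_firehose_role.arn

    # Also back up the raw source records to a second bucket (hypothetical resource)
    s3_backup_mode = "Enabled"

    s3_backup_configuration {
      bucket_arn = aws_s3_bucket.log_backup.arn
      role_arn   = aws_iam_role.log_delivery_firehose_role.arn
    }
  }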

Defining the AWS Fargate cluster resources

Next, define the AWS Fargate cluster that runs the containers.

The Terraform configuration described below includes:

  • IAM roles
  • a CloudWatch Logs log group
  • the AWS Fargate cluster, task definition, and service

data "aws_iam_policy_document" "ecs_task_assume_role" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ecs-tasks.amazonaws.com"]
    }
  }
}

data "aws_iam_policy_document" "ecs_task_role_policy_document" {
  statement {
    effect = "Allow"

    actions = [
      "firehose:PutRecordBatch",
      "logs:DescribeLogStreams",
      "logs:CreateLogGroup",
      "logs:CreateLogStream",
      "logs:PutLogEvents"
    ]

    resources = ["*"]
  }
}

resource "aws_iam_role" "ecs_task_execution_role" {
  name               = "MyEcsTaskExecutionRole"
  assume_role_policy = data.aws_iam_policy_document.ecs_task_assume_role.json
}

resource "aws_iam_role_policy_attachment" "ecs_task_execution_role_policy_attachment" {
  role       = aws_iam_role.ecs_task_execution_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}

resource "aws_iam_policy" "ecs_task_role_policy" {
  name   = "MyEcsTaskPolicy"
  policy = data.aws_iam_policy_document.ecs_task_role_policy_document.json
}

resource "aws_iam_role" "ecs_task_role" {
  name               = "MyEcsTaskRole"
  assume_role_policy = data.aws_iam_policy_document.ecs_task_assume_role.json
}

resource "aws_iam_role_policy_attachment" "ecs_task_role_policy_attachment" {
  role       = aws_iam_role.ecs_task_role.name
  policy_arn = aws_iam_policy.ecs_task_role_policy.arn
}

resource "aws_cloudwatch_log_group" "fluentbit_container_log_group" {
  name = "/fargate/containers/fluentbit"
}

resource "aws_ecs_cluster" "nginx" {
  name = "nginx-cluster"
}

resource "aws_ecs_task_definition" "nginx" {
  family                   = "nginx-task-definition"
  cpu                      = "512"
  memory                   = "1024"
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]

  execution_role_arn = aws_iam_role.ecs_task_execution_role.arn
  task_role_arn      = aws_iam_role.ecs_task_role.arn

  container_definitions = <<JSON
  [
    {
      "name": "nginx",
      "image": "nginx:1.19.2",
      "essential": true,
      "portMappings": [
        {
          "protocol": "tcp",
          "containerPort": 80
        }
      ],
      "cpu": 256,
      "memory": 512,
      "logConfiguration": {
        "logDriver": "awsfirelens",
        "options": {
          "Name": "firehose",
          "region": "ap-northeast-1",
          "delivery_stream": "nginx-cluster-log-delivery-stream"
        }
      }
    },
    {
      "name": "log_router",
      "image": "906394416424.dkr.ecr.ap-northeast-1.amazonaws.com/aws-for-fluent-bit:latest",
      "essential": true,
      "cpu": 256,
      "memory": 512,
      "firelensConfiguration": {
        "type": "fluentbit"
      },
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/fargate/containers/fluentbit",
          "awslogs-region": "ap-northeast-1",
          "awslogs-stream-prefix": "fluentbit-"
        }
      }
    }
  ]
  JSON
}

resource "aws_ecs_service" "nginx" {
  name             = "nginx-service"
  cluster          = aws_ecs_cluster.nginx.arn
  task_definition  = aws_ecs_task_definition.nginx.arn
  desired_count    = 3
  launch_type      = "FARGATE"
  platform_version = "1.4.0"

  deployment_minimum_healthy_percent = 50

  network_configuration {
    assign_public_ip = false
    security_groups  = local.nginx_service_security_groups
    subnets          = local.private_subnets
  }

  load_balancer {
    target_group_arn = local.load_balancer_target_group_arn
    container_name   = "nginx"
    container_port   = 80
  }
}

The IAM role assigned to the task grants the permissions AWS for Fluent Bit needs to send logs to Amazon Kinesis Data Firehose and Amazon CloudWatch Logs.

data "aws_iam_policy_document" "ecs_task_role_policy_document" {
  statement {
    effect = "Allow"

    actions = [
      "firehose:PutRecordBatch",
      "logs:DescribeLogStreams",
      "logs:CreateLogGroup",
      "logs:CreateLogStream",
      "logs:PutLogEvents"
    ]

    resources = ["*"]
  }
}

An example of the permissions required to send logs to Amazon Kinesis Data Firehose is given in the Amazon ECS documentation:

Required IAM permissions

In the nginx container definition, logConfiguration specifies awsfirelens as the log driver and sets Amazon Kinesis Data Firehose as the destination.

    {
      "name": "nginx",
      "image": "nginx:1.19.2",
      "essential": true,
      "portMappings": [
        {
          "protocol": "tcp",
          "containerPort": 80
        }
      ],
      "cpu": 256,
      "memory": 512,
      "logConfiguration": {
        "logDriver": "awsfirelens",
        "options": {
          "Name": "firehose",
          "region": "ap-northeast-1",
          "delivery_stream": "nginx-cluster-log-delivery-stream"
        }
      }
    },

This definition example is also given in the Amazon ECS documentation:

Example task definitions

The Fluent Bit plugin used to send the logs is this one:

Fluent Bit Plugin for Amazon Kinesis Firehose
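
Under the hood, FireLens turns the options under logConfiguration into an output section for this plugin in the generated Fluent Bit configuration. A rough sketch of what that section would look like (the Match pattern is illustrative; FireLens tags records as <container name>-firelens-<task ID>):

[OUTPUT]
    Name            firehose
    Match           nginx-firelens*
    region          ap-northeast-1
    delivery_stream nginx-cluster-log-delivery-stream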

The Fluent Bit container running as AWS FireLens sends its own logs to Amazon CloudWatch Logs.

    {
      "name": "log_router",
      "image": "906394416424.dkr.ecr.ap-northeast-1.amazonaws.com/aws-for-fluent-bit:latest",
      "essential": true,
      "cpu": 256,
      "memory": 512,
      "firelensConfiguration": {
        "type": "fluentbit"
      },
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/fargate/containers/fluentbit",
          "awslogs-region": "ap-northeast-1",
          "awslogs-stream-prefix": "fluentbit-"
        }
      }
    }

Verification

Now, apply the configuration to build the environment and check that it works.

$ terraform apply

Once the build completes, after waiting a little while, nginx becomes accessible.

$ curl [ALB DNS name]
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

The Fluent Bit logs:

$ aws logs tail --follow /fargate/containers/fluentbit
2020-09-30T10:31:00.226000+00:00 fluentbit-/log_router/59d20a03-e034-493f-9218-c6f2b6fab650 AWS for Fluent Bit Container Image Version 2.7.0
2020-09-30T10:31:00.511000+00:00 fluentbit-/log_router/59d20a03-e034-493f-9218-c6f2b6fab650 Fluent Bit v1.5.6
2020-09-30T10:31:00.511000+00:00 fluentbit-/log_router/59d20a03-e034-493f-9218-c6f2b6fab650 * Copyright (C) 2019-2020 The Fluent Bit Authors
2020-09-30T10:31:00.511000+00:00 fluentbit-/log_router/59d20a03-e034-493f-9218-c6f2b6fab650 * Copyright (C) 2015-2018 Treasure Data
2020-09-30T10:31:00.511000+00:00 fluentbit-/log_router/59d20a03-e034-493f-9218-c6f2b6fab650 * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
2020-09-30T10:31:00.511000+00:00 fluentbit-/log_router/59d20a03-e034-493f-9218-c6f2b6fab650 * https://fluentbit.io
2020-09-30T10:31:00.520000+00:00 fluentbit-/log_router/59d20a03-e034-493f-9218-c6f2b6fab650 time="2020-09-30T10:31:00Z" level=info msg="[firehose 0] plugin parameter delivery_stream = 'nginx-cluster-log-delivery-stream'"
2020-09-30T10:31:00.521000+00:00 fluentbit-/log_router/59d20a03-e034-493f-9218-c6f2b6fab650 time="2020-09-30T10:31:00Z" level=info msg="[firehose 0] plugin parameter region = 'ap-northeast-1'"
2020-09-30T10:31:00.521000+00:00 fluentbit-/log_router/59d20a03-e034-493f-9218-c6f2b6fab650 time="2020-09-30T10:31:00Z" level=info msg="[firehose 0] plugin parameter data_keys = ''"
2020-09-30T10:31:00.521000+00:00 fluentbit-/log_router/59d20a03-e034-493f-9218-c6f2b6fab650 time="2020-09-30T10:31:00Z" level=info msg="[firehose 0] plugin parameter role_arn = ''"
2020-09-30T10:31:00.521000+00:00 fluentbit-/log_router/59d20a03-e034-493f-9218-c6f2b6fab650 time="2020-09-30T10:31:00Z" level=info msg="[firehose 0] plugin parameter endpoint = ''"
2020-09-30T10:31:00.521000+00:00 fluentbit-/log_router/59d20a03-e034-493f-9218-c6f2b6fab650 time="2020-09-30T10:31:00Z" level=info msg="[firehose 0] plugin parameter sts_endpoint = ''"
2020-09-30T10:31:00.521000+00:00 fluentbit-/log_router/59d20a03-e034-493f-9218-c6f2b6fab650 time="2020-09-30T10:31:00Z" level=info msg="[firehose 0] plugin parameter time_key = ''"
2020-09-30T10:31:00.521000+00:00 fluentbit-/log_router/59d20a03-e034-493f-9218-c6f2b6fab650 time="2020-09-30T10:31:00Z" level=info msg="[firehose 0] plugin parameter time_key_format = ''"
2020-09-30T10:31:00.521000+00:00 fluentbit-/log_router/59d20a03-e034-493f-9218-c6f2b6fab650 time="2020-09-30T10:31:00Z" level=info msg="[firehose 0] plugin parameter log_key = ''"
2020-09-30T10:31:00.521000+00:00 fluentbit-/log_router/59d20a03-e034-493f-9218-c6f2b6fab650 [2020/09/30 10:31:00] [ info] [engine] started (pid=1)
2020-09-30T10:31:00.521000+00:00 fluentbit-/log_router/59d20a03-e034-493f-9218-c6f2b6fab650 [2020/09/30 10:31:00] [ info] [storage] version=1.0.5, initializing...
2020-09-30T10:31:00.521000+00:00 fluentbit-/log_router/59d20a03-e034-493f-9218-c6f2b6fab650 [2020/09/30 10:31:00] [ info] [storage] in-memory
2020-09-30T10:31:00.521000+00:00 fluentbit-/log_router/59d20a03-e034-493f-9218-c6f2b6fab650 [2020/09/30 10:31:00] [ info] [storage] normal synchronization mode, checksum disabled, max_chunks_up=128
2020-09-30T10:31:00.521000+00:00 fluentbit-/log_router/59d20a03-e034-493f-9218-c6f2b6fab650 [2020/09/30 10:31:00] [ info] [input:tcp:tcp.0] listening on 127.0.0.1:8877
2020-09-30T10:31:00.521000+00:00 fluentbit-/log_router/59d20a03-e034-493f-9218-c6f2b6fab650 [2020/09/30 10:31:00] [ info] [input:forward:forward.1] listening on unix:///var/run/fluent.sock
2020-09-30T10:31:00.521000+00:00 fluentbit-/log_router/59d20a03-e034-493f-9218-c6f2b6fab650 [2020/09/30 10:31:00] [ info] [input:forward:forward.2] listening on 127.0.0.1:24224
2020-09-30T10:31:00.522000+00:00 fluentbit-/log_router/59d20a03-e034-493f-9218-c6f2b6fab650 [2020/09/30 10:31:00] [ info] [sp] stream processor started

The delivery settings for Amazon Kinesis Data Firehose are also visible:

msg="[firehose 0] plugin parameter delivery_stream = 'nginx-cluster-log-delivery-stream'"
msg="[firehose 0] plugin parameter region = 'ap-northeast-1'"
msg="[firehose 0] plugin parameter data_keys = ''"
msg="[firehose 0] plugin parameter role_arn = ''"
msg="[firehose 0] plugin parameter endpoint = ''"
msg="[firehose 0] plugin parameter sts_endpoint = ''"
msg="[firehose 0] plugin parameter time_key = ''"
msg="[firehose 0] plugin parameter time_key_format = ''"
msg="[firehose 0] plugin parameter log_key = ''"

Now, the nginx logs should be showing up on the Amazon S3 side. How did that go?

At first, the bucket is empty.

$ aws s3 ls nginx-cluster-log-bucket
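
While waiting, the delivery stream itself can also be checked from the CLI. This is just a convenience check, not something captured from the original run; the status should be ACTIVE once the stream is ready:

$ aws firehose describe-delivery-stream \
    --delivery-stream-name nginx-cluster-log-delivery-stream \
    --query 'DeliveryStreamDescription.DeliveryStreamStatus'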

After waiting a while, date-based directories are created and log files start appearing inside them.

$ aws s3 ls nginx-cluster-log-bucket/2020/09/30/10/
2020-09-30 19:36:07      49848 nginx-cluster-log-delivery-stream-1-2020-09-30-10-31-05-626d2328-65c4-46e9-96e8-0dd3323bcc38

Let's look at the contents.

$ aws s3 cp s3://nginx-cluster-log-bucket/2020/09/30/10/nginx-cluster-log-delivery-stream-1-2020-09-30-10-31-05-626d2328-65c4-46e9-96e8-0dd3323bcc38 -
{"container_id":"59d20a03-e034-493f-9218-c6f2b6fab650-2531612879","container_name":"nginx","ecs_cluster":"nginx-cluster","ecs_task_arn":"arn:aws:ecs:ap-northeast-1:xxxxxxxxxxxx:task/59d20a03-e034-493f-9218-c6f2b6fab650","ecs_task_definition":"nginx-task-definition:26","log":"/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration","source":"stdout"}
{"container_id":"59d20a03-e034-493f-9218-c6f2b6fab650-2531612879","container_name":"nginx","ecs_cluster":"nginx-cluster","ecs_task_arn":"arn:aws:ecs:ap-northeast-1:xxxxxxxxxxxx:task/59d20a03-e034-493f-9218-c6f2b6fab650","ecs_task_definition":"nginx-task-definition:26","log":"/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/","source":"stdout"}
{"container_id":"59d20a03-e034-493f-9218-c6f2b6fab650-2531612879","container_name":"nginx","ecs_cluster":"nginx-cluster","ecs_task_arn":"arn:aws:ecs:ap-northeast-1:xxxxxxxxxxxx:task/59d20a03-e034-493f-9218-c6f2b6fab650","ecs_task_definition":"nginx-task-definition:26","log":"/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh","source":"stdout"}
{"container_id":"59d20a03-e034-493f-9218-c6f2b6fab650-2531612879","container_name":"nginx","ecs_cluster":"nginx-cluster","ecs_task_arn":"arn:aws:ecs:ap-northeast-1:xxxxxxxxxxxx:task/59d20a03-e034-493f-9218-c6f2b6fab650","ecs_task_definition":"nginx-task-definition:26","log":"10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf","source":"stdout"}
{"container_id":"59d20a03-e034-493f-9218-c6f2b6fab650-2531612879","container_name":"nginx","ecs_cluster":"nginx-cluster","ecs_task_arn":"arn:aws:ecs:ap-northeast-1:xxxxxxxxxxxx:task/59d20a03-e034-493f-9218-c6f2b6fab650","ecs_task_definition":"nginx-task-definition:26","log":"10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf","source":"stdout"}
{"container_id":"59d20a03-e034-493f-9218-c6f2b6fab650-2531612879","container_name":"nginx","ecs_cluster":"nginx-cluster","ecs_task_arn":"arn:aws:ecs:ap-northeast-1:xxxxxxxxxxxx:task/59d20a03-e034-493f-9218-c6f2b6fab650","ecs_task_definition":"nginx-task-definition:26","log":"/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh","source":"stdout"}
{"container_id":"59d20a03-e034-493f-9218-c6f2b6fab650-2531612879","container_name":"nginx","ecs_cluster":"nginx-cluster","ecs_task_arn":"arn:aws:ecs:ap-northeast-1:xxxxxxxxxxxx:task/59d20a03-e034-493f-9218-c6f2b6fab650","ecs_task_definition":"nginx-task-definition:26","log":"/docker-entrypoint.sh: Configuration complete; ready for start up","source":"stdout"}
{"container_id":"961cae65-9b31-4ab6-819d-8922e768328a-2531612879","container_name":"nginx","ecs_cluster":"nginx-cluster","ecs_task_arn":"arn:aws:ecs:ap-northeast-1:xxxxxxxxxxxx:task/961cae65-9b31-4ab6-819d-8922e768328a","ecs_task_definition":"nginx-task-definition:26","log":"/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration","source":"stdout"}
{"container_id":"961cae65-9b31-4ab6-819d-8922e768328a-2531612879","container_name":"nginx","ecs_cluster":"nginx-cluster","ecs_task_arn":"arn:aws:ecs:ap-northeast-1:xxxxxxxxxxxx:task/961cae65-9b31-4ab6-819d-8922e768328a","ecs_task_definition":"nginx-task-definition:26","log":"/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/","source":"stdout"}
{"container_id":"961cae65-9b31-4ab6-819d-8922e768328a-2531612879","container_name":"nginx","ecs_cluster":"nginx-cluster","ecs_task_arn":"arn:aws:ecs:ap-northeast-1:xxxxxxxxxxxx:task/961cae65-9b31-4ab6-819d-8922e768328a","ecs_task_definition":"nginx-task-definition:26","log":"/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh","source":"stdout"}
{"container_id":"961cae65-9b31-4ab6-819d-8922e768328a-2531612879","container_name":"nginx","ecs_cluster":"nginx-cluster","ecs_task_arn":"arn:aws:ecs:ap-northeast-1:xxxxxxxxxxxx:task/961cae65-9b31-4ab6-819d-8922e768328a","ecs_task_definition":"nginx-task-definition:26","log":"10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf","source":"stdout"}
{"container_id":"961cae65-9b31-4ab6-819d-8922e768328a-2531612879","container_name":"nginx","ecs_cluster":"nginx-cluster","ecs_task_arn":"arn:aws:ecs:ap-northeast-1:xxxxxxxxxxxx:task/961cae65-9b31-4ab6-819d-8922e768328a","ecs_task_definition":"nginx-task-definition:26","log":"10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf","source":"stdout"}
{"container_id":"961cae65-9b31-4ab6-819d-8922e768328a-2531612879","container_name":"nginx","ecs_cluster":"nginx-cluster","ecs_task_arn":"arn:aws:ecs:ap-northeast-1:xxxxxxxxxxxx:task/961cae65-9b31-4ab6-819d-8922e768328a","ecs_task_definition":"nginx-task-definition:26","log":"/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh","source":"stdout"}
{"container_id":"961cae65-9b31-4ab6-819d-8922e768328a-2531612879","container_name":"nginx","ecs_cluster":"nginx-cluster","ecs_task_arn":"arn:aws:ecs:ap-northeast-1:xxxxxxxxxxxx:task/961cae65-9b31-4ab6-819d-8922e768328a","ecs_task_definition":"nginx-task-definition:26","log":"/docker-entrypoint.sh: Configuration complete; ready for start up","source":"stdout"}
{"container_id":"8da3f3d0-fa85-405b-86ad-caf263d843e6-2531612879","container_name":"nginx","ecs_cluster":"nginx-cluster","ecs_task_arn":"arn:aws:ecs:ap-northeast-1:xxxxxxxxxxxx:task/8da3f3d0-fa85-405b-86ad-caf263d843e6","ecs_task_definition":"nginx-task-definition:26","log":"/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration","source":"stdout"}
{"container_id":"8da3f3d0-fa85-405b-86ad-caf263d843e6-2531612879","container_name":"nginx","ecs_cluster":"nginx-cluster","ecs_task_arn":"arn:aws:ecs:ap-northeast-1:xxxxxxxxxxxx:task/8da3f3d0-fa85-405b-86ad-caf263d843e6","ecs_task_definition":"nginx-task-definition:26","log":"/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/","source":"stdout"}
{"container_id":"8da3f3d0-fa85-405b-86ad-caf263d843e6-2531612879","container_name":"nginx","ecs_cluster":"nginx-cluster","ecs_task_arn":"arn:aws:ecs:ap-northeast-1:xxxxxxxxxxxx:task/8da3f3d0-fa85-405b-86ad-caf263d843e6","ecs_task_definition":"nginx-task-definition:26","log":"/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh","source":"stdout"}
{"container_id":"8da3f3d0-fa85-405b-86ad-caf263d843e6-2531612879","container_name":"nginx","ecs_cluster":"nginx-cluster","ecs_task_arn":"arn:aws:ecs:ap-northeast-1:xxxxxxxxxxxx:task/8da3f3d0-fa85-405b-86ad-caf263d843e6","ecs_task_definition":"nginx-task-definition:26","log":"10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf","source":"stdout"}
{"container_id":"8da3f3d0-fa85-405b-86ad-caf263d843e6-2531612879","container_name":"nginx","ecs_cluster":"nginx-cluster","ecs_task_arn":"arn:aws:ecs:ap-northeast-1:xxxxxxxxxxxx:task/8da3f3d0-fa85-405b-86ad-caf263d843e6","ecs_task_definition":"nginx-task-definition:26","log":"10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf","source":"stdout"}
{"container_id":"8da3f3d0-fa85-405b-86ad-caf263d843e6-2531612879","container_name":"nginx","ecs_cluster":"nginx-cluster","ecs_task_arn":"arn:aws:ecs:ap-northeast-1:xxxxxxxxxxxx:task/8da3f3d0-fa85-405b-86ad-caf263d843e6","ecs_task_definition":"nginx-task-definition:26","log":"/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh","source":"stdout"}
{"container_id":"8da3f3d0-fa85-405b-86ad-caf263d843e6-2531612879","container_name":"nginx","ecs_cluster":"nginx-cluster","ecs_task_arn":"arn:aws:ecs:ap-northeast-1:xxxxxxxxxxxx:task/8da3f3d0-fa85-405b-86ad-caf263d843e6","ecs_task_definition":"nginx-task-definition:26","log":"/docker-entrypoint.sh: Configuration complete; ready for start up","source":"stdout"}
{"container_id":"8da3f3d0-fa85-405b-86ad-caf263d843e6-2531612879","container_name":"nginx","ecs_cluster":"nginx-cluster","ecs_task_arn":"arn:aws:ecs:ap-northeast-1:xxxxxxxxxxxx:task/8da3f3d0-fa85-405b-86ad-caf263d843e6","ecs_task_definition":"nginx-task-definition:26","log":"10.0.20.21 - - [30/Sep/2020:10:31:17 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"ELB-HealthChecker/2.0\" \"-\"","source":"stdout"}
{"container_id":"59d20a03-e034-493f-9218-c6f2b6fab650-2531612879","container_name":"nginx","ecs_cluster":"nginx-cluster","ecs_task_arn":"arn:aws:ecs:ap-northeast-1:xxxxxxxxxxxx:task/59d20a03-e034-493f-9218-c6f2b6fab650","ecs_task_definition":"nginx-task-definition:26","log":"10.0.20.21 - - [30/Sep/2020:10:31:17 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"ELB-HealthChecker/2.0\" \"-\"","source":"stdout"}

... (omitted) ...

{"container_id":"59d20a03-e034-493f-9218-c6f2b6fab650-2531612879","container_name":"nginx","ecs_cluster":"nginx-cluster","ecs_task_arn":"arn:aws:ecs:ap-northeast-1:xxxxxxxxxxxx:task/59d20a03-e034-493f-9218-c6f2b6fab650","ecs_task_definition":"nginx-task-definition:26","log":"10.0.20.21 - - [30/Sep/2020:10:33:16 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.68.0\" \"124.211.189.238\"","source":"stdout"}

... (omitted) ...

{"container_id":"59d20a03-e034-493f-9218-c6f2b6fab650-2531612879","container_name":"nginx","ecs_cluster":"nginx-cluster","ecs_task_arn":"arn:aws:ecs:ap-northeast-1:xxxxxxxxxxxx:task/59d20a03-e034-493f-9218-c6f2b6fab650","ecs_task_definition":"nginx-task-definition:26","log":"10.0.20.21 - - [30/Sep/2020:10:34:22 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.68.0\" \"124.211.189.238\"","source":"stdout"}
{"container_id":"59d20a03-e034-493f-9218-c6f2b6fab650-2531612879","container_name":"nginx","ecs_cluster":"nginx-cluster","ecs_task_arn":"arn:aws:ecs:ap-northeast-1:xxxxxxxxxxxx:task/59d20a03-e034-493f-9218-c6f2b6fab650","ecs_task_definition":"nginx-task-definition:26","log":"10.0.10.234 - - [30/Sep/2020:10:34:22 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.68.0\" \"124.211.189.238\"","source":"stdout"}
{"container_id":"59d20a03-e034-493f-9218-c6f2b6fab650-2531612879","container_name":"nginx","ecs_cluster":"nginx-cluster","ecs_task_arn":"arn:aws:ecs:ap-northeast-1:xxxxxxxxxxxx:task/59d20a03-e034-493f-9218-c6f2b6fab650","ecs_task_definition":"nginx-task-definition:26","log":"10.0.10.234 - - [30/Sep/2020:10:34:23 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.68.0\" \"124.211.189.238\"","source":"stdout"}

... (omitted) ...

{"container_id":"8da3f3d0-fa85-405b-86ad-caf263d843e6-2531612879","container_name":"nginx","ecs_cluster":"nginx-cluster","ecs_task_arn":"arn:aws:ecs:ap-northeast-1:xxxxxxxxxxxx:task/8da3f3d0-fa85-405b-86ad-caf263d843e6","ecs_task_definition":"nginx-task-definition:26","log":"10.0.10.234 - - [30/Sep/2020:10:35:57 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"ELB-HealthChecker/2.0\" \"-\"","source":"stdout"}

The logs are being written as expected.

Waiting a bit longer, more log files keep appearing.

$ aws s3 ls nginx-cluster-log-bucket/2020/09/30/10/
2020-09-30 19:36:07      49848 nginx-cluster-log-delivery-stream-1-2020-09-30-10-31-05-626d2328-65c4-46e9-96e8-0dd3323bcc38
2020-09-30 19:41:21      39359 nginx-cluster-log-delivery-stream-1-2020-09-30-10-36-20-2e82b968-35ea-47ec-9cac-11b404d343ee


$ aws s3 ls nginx-cluster-log-bucket/2020/09/30/10/
2020-09-30 19:36:07      49848 nginx-cluster-log-delivery-stream-1-2020-09-30-10-31-05-626d2328-65c4-46e9-96e8-0dd3323bcc38
2020-09-30 19:41:21      39359 nginx-cluster-log-delivery-stream-1-2020-09-30-10-36-20-2e82b968-35ea-47ec-9cac-11b404d343ee
2020-09-30 19:46:23      49162 nginx-cluster-log-delivery-stream-1-2020-09-30-10-41-20-e35b41af-ff4c-4160-a8d2-7dcd94a03849

So log files land in batches like this, with the date-based directory created for whichever date and hour each delivery happens in; the buffering settings sketched below control how often that happens.
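
The delivery frequency and object size are governed by the delivery stream's buffering hints, which default to 5 MB or 300 seconds, whichever is reached first. A minimal sketch of overriding them in extended_s3_configuration (the values are illustrative, not what this article used):

  extended_s3_configuration {
    bucket_arn = aws_s3_bucket.log_destination.arn
    role_arn   = aws_iam_role.log_delivery_firehose_role.arn

    # Flush to S3 every 1 MB or every 60 seconds, whichever comes first
    buffer_size     = 1
    buffer_interval = 60
  }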

That covers the basic usage, so this should be good enough for now.

From the VPC to the ALB (and AWS Fargate too)

Finally, here is the full Terraform definition, including the VPC-to-ALB definitions omitted earlier.

main.tf

terraform {
  required_version = "0.13.3"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.8.0"
    }
  }
}

provider "aws" {
}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "2.55.0"

  name = "my-vpc"
  cidr = "10.0.0.0/16"

  enable_dns_hostnames = true
  enable_dns_support   = true

  azs             = ["ap-northeast-1a", "ap-northeast-1c"]
  public_subnets  = ["10.0.10.0/24", "10.0.20.0/24"]
  private_subnets = ["10.0.30.0/24", "10.0.40.0/24"]

  map_public_ip_on_launch = true

  enable_nat_gateway     = true
  single_nat_gateway     = false
  one_nat_gateway_per_az = true
}

module "load_balancer_sg" {
  source  = "terraform-aws-modules/security-group/aws//modules/http-80"
  version = "3.16.0"

  name   = "load-balancer-sg"
  vpc_id = module.vpc.vpc_id

  ingress_cidr_blocks = ["0.0.0.0/0"]
}

module "nginx_service_sg" {
  source  = "terraform-aws-modules/security-group/aws"
  version = "3.16.0"

  name   = "nginx-service-sg"
  vpc_id = module.vpc.vpc_id

  ingress_with_cidr_blocks = [
    {
      from_port   = 80
      to_port     = 80
      protocol    = "tcp"
      description = "nginx-service inbound ports"
      cidr_blocks = "10.0.10.0/24"
    },
    {
      from_port   = 80
      to_port     = 80
      protocol    = "tcp"
      description = "nginx-service inbound ports"
      cidr_blocks = "10.0.20.0/24"
    }
  ]

  egress_with_cidr_blocks = [
    {
      from_port   = 0
      to_port     = 0
      protocol    = "-1"
      description = "nginx-service outbound ports"
      cidr_blocks = "0.0.0.0/0"
    }
  ]
}

module "load_balancer" {
  source  = "terraform-aws-modules/alb/aws"
  version = "5.9.0"

  name = "nginx"

  vpc_id             = module.vpc.vpc_id
  load_balancer_type = "application"
  internal           = false

  subnets         = module.vpc.public_subnets
  security_groups = [module.load_balancer_sg.this_security_group_id]

  target_groups = [
    {
      backend_protocol = "HTTP"
      backend_port     = 80
      target_type      = "ip"

      health_check = {
        interval = 20
      }
    }
  ]

  http_tcp_listeners = [
    {
      port     = 80
      protocol = "HTTP"
    }
  ]
}

locals {
  vpc_id = module.vpc.vpc_id

  private_subnets                = module.vpc.private_subnets
  nginx_service_security_groups  = [module.nginx_service_sg.this_security_group_id]
  load_balancer_target_group_arn = module.load_balancer.target_group_arns[0]
}

data "aws_iam_policy_document" "firehose_assume_role" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["firehose.amazonaws.com"]
    }
  }
}

data "aws_iam_policy_document" "log_delivery_firehose_policy_document" {
  statement {
    actions = [
      "s3:AbortMultipartUpload",
      "s3:GetBucketLocation",
      "s3:GetObject",
      "s3:ListBucket",
      "s3:ListBucketMultipartUploads",
      "s3:PutObject",
      "kinesis:DescribeStream",
      "kinesis:GetShardIterator",
      "kinesis:GetRecords",
      "kinesis:ListShards",
      "kms:Decrypt",
      "kms:GenerateDataKey"
    ]

    resources = ["*"]
  }
}

resource "aws_iam_policy" "log_delivery_firehose_role_policy" {
  name   = "MyLogDeliveryFirehosePolicy"
  policy = data.aws_iam_policy_document.log_delivery_firehose_policy_document.json
}

resource "aws_iam_role" "log_delivery_firehose_role" {
  name               = "MyLogDeliveryFirehoseRole"
  assume_role_policy = data.aws_iam_policy_document.firehose_assume_role.json
}

resource "aws_iam_role_policy_attachment" "log_delivery_firehose_role_policy_attachment" {
  role       = aws_iam_role.log_delivery_firehose_role.name
  policy_arn = aws_iam_policy.log_delivery_firehose_role_policy.arn
}

resource "aws_s3_bucket" "log_destination" {
  bucket = "nginx-cluster-log-bucket"
  acl    = "private"

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}

resource "aws_kinesis_firehose_delivery_stream" "log_delivery_stream" {
  name        = "nginx-cluster-log-delivery-stream"
  destination = "extended_s3"

  server_side_encryption {
    enabled = true
    key_type = "AWS_OWNED_CMK"
  }

  extended_s3_configuration {
    bucket_arn = aws_s3_bucket.log_destination.arn
    role_arn   = aws_iam_role.log_delivery_firehose_role.arn

    processing_configuration {
      enabled = "false"
    }
  }
}

data "aws_iam_policy_document" "ecs_task_assume_role" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ecs-tasks.amazonaws.com"]
    }
  }
}

data "aws_iam_policy_document" "ecs_task_role_policy_document" {
  statement {
    effect = "Allow"

    actions = [
      "firehose:PutRecordBatch",
      "logs:DescribeLogStreams",
      "logs:CreateLogGroup",
      "logs:CreateLogStream",
      "logs:PutLogEvents"
    ]

    resources = ["*"]
  }
}

resource "aws_iam_role" "ecs_task_execution_role" {
  name               = "MyEcsTaskExecutionRole"
  assume_role_policy = data.aws_iam_policy_document.ecs_task_assume_role.json
}

resource "aws_iam_role_policy_attachment" "ecs_task_execution_role_policy_attachment" {
  role       = aws_iam_role.ecs_task_execution_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}

resource "aws_iam_policy" "ecs_task_role_policy" {
  name   = "MyEcsTaskPolicy"
  policy = data.aws_iam_policy_document.ecs_task_role_policy_document.json
}

resource "aws_iam_role" "ecs_task_role" {
  name               = "MyEcsTaskRole"
  assume_role_policy = data.aws_iam_policy_document.ecs_task_assume_role.json
}

resource "aws_iam_role_policy_attachment" "ecs_task_role_policy_attachment" {
  role       = aws_iam_role.ecs_task_role.name
  policy_arn = aws_iam_policy.ecs_task_role_policy.arn
}

resource "aws_cloudwatch_log_group" "fluentbit_container_log_group" {
  name = "/fargate/containers/fluentbit"
}

resource "aws_ecs_cluster" "nginx" {
  name = "nginx-cluster"
}

resource "aws_ecs_task_definition" "nginx" {
  family                   = "nginx-task-definition"
  cpu                      = "512"
  memory                   = "1024"
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]

  execution_role_arn = aws_iam_role.ecs_task_execution_role.arn
  task_role_arn      = aws_iam_role.ecs_task_role.arn

  container_definitions = <<JSON
  [
    {
      "name": "nginx",
      "image": "nginx:1.19.2",
      "essential": true,
      "portMappings": [
        {
          "protocol": "tcp",
          "containerPort": 80
        }
      ],
      "cpu": 256,
      "memory": 512,
      "logConfiguration": {
        "logDriver": "awsfirelens",
        "options": {
          "Name": "firehose",
          "region": "ap-northeast-1",
          "delivery_stream": "nginx-cluster-log-delivery-stream"
        }
      }
    },
    {
      "name": "log_router",
      "image": "906394416424.dkr.ecr.ap-northeast-1.amazonaws.com/aws-for-fluent-bit:latest",
      "essential": true,
      "cpu": 256,
      "memory": 512,
      "firelensConfiguration": {
        "type": "fluentbit"
      },
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/fargate/containers/fluentbit",
          "awslogs-region": "ap-northeast-1",
          "awslogs-stream-prefix": "fluentbit-"
        }
      }
    }
  ]
  JSON
}

resource "aws_ecs_service" "nginx" {
  name             = "nginx-service"
  cluster          = aws_ecs_cluster.nginx.arn
  task_definition  = aws_ecs_task_definition.nginx.arn
  desired_count    = 3
  launch_type      = "FARGATE"
  platform_version = "1.4.0"

  deployment_minimum_healthy_percent = 50

  network_configuration {
    assign_public_ip = false
    security_groups  = local.nginx_service_security_groups
    subnets          = local.private_subnets
  }

  load_balancer {
    target_group_arn = local.load_balancer_target_group_arn
    container_name   = "nginx"
    container_port   = 80
  }
}