I wanted to send Pod application logs to an S3 bucket using Fluent Bit (EC2)

Posted at 2024-02-27

I normally run applications on Fargate and forward their logs with the built-in log router (Fluent Bit), but for various reasons we ended up running an app on EC2 nodes as well, so this is a write-up of what I did to forward logs externally in the same way.

Architecture

  • An EKS cluster with a single EC2 instance as the worker node (EKS on EC2); a minimal eksctl sketch follows this list
  • Fluent Bit runs as a DaemonSet on the EC2 node and is configured to read the target logs
  • The destination is S3, so logs are delivered to and stored in an S3 bucket via the chain Fluent Bit → Firehose → S3
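For reference, a single-node cluster like this can be stood up with eksctl, assuming the cluster name and region used throughout this article (cluster creation itself is outside the article's scope):

eksctl create cluster \
  --name eks-demo \
  --region ap-northeast-1 \
  --nodes 1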

Deploying Fluent Bit

This simply follows the steps from an AWS re:Post article.

First, create the amazon-cloudwatch namespace.

[root@ip-192-168-0-50 ~]# k get ns
NAME              STATUS   AGE
default           Active   7h25m
kube-node-lease   Active   7h25m
kube-public       Active   7h25m
kube-system       Active   7h25m
[root@ip-192-168-0-50 ~]# kubectl apply -f https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/cloudwatch-namespace.yaml
namespace/amazon-cloudwatch created
[root@ip-192-168-0-50 ~]# k get ns
NAME                STATUS   AGE
amazon-cloudwatch   Active   3s
default             Active   7h25m
kube-node-lease     Active   7h25m
kube-public         Active   7h25m
kube-system         Active   7h25m
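Incidentally, the manifest above contains only the Namespace object (as the apply output shows), so creating it directly is equivalent:

kubectl create namespace amazon-cloudwatch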

Next, create the ConfigMap that configures Fluent Bit's behavior.
Set _ClusterName_ to the name of the EKS cluster you are deploying to.

[root@ip-192-168-0-50 ~]# ClusterName=eks-demo
RegionName=ap-northeast-1
FluentBitHttpPort='2020'
FluentBitReadFromHead='Off'
[[ ${FluentBitReadFromHead} = 'On' ]] && FluentBitReadFromTail='Off'|| FluentBitReadFromTail='On'
[[ -z ${FluentBitHttpPort} ]] && FluentBitHttpServer='Off' || FluentBitHttpServer='On'
kubectl create configmap fluent-bit-cluster-info \
--from-literal=cluster.name=${ClusterName} \
--from-literal=http.server=${FluentBitHttpServer} \
--from-literal=http.port=${FluentBitHttpPort} \
--from-literal=read.head=${FluentBitReadFromHead} \
--from-literal=read.tail=${FluentBitReadFromTail} \
--from-literal=logs.region=${RegionName} -n amazon-cloudwatch
configmap/fluent-bit-cluster-info created

The output is a little hard to read, but the contents look like this:

[root@ip-192-168-0-50 ~]# k describe cm -n amazon-cloudwatch fluent-bit-cluster-info
Name:         fluent-bit-cluster-info
Namespace:    amazon-cloudwatch
Labels:       <none>
Annotations:  <none>

Data
====
cluster.name:
----
eks-demo
http.port:
----
2020
http.server:
----
On
logs.region:
----
ap-northeast-1
read.head:
----
Off
read.tail:
----
On

BinaryData
====

Events:  <none>

Now deploy the full set of Fluent Bit resources:

[root@ip-192-168-0-50 ~]# kubectl apply -f https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/fluent-bit/fluent-bit.yaml
serviceaccount/fluent-bit created
clusterrole.rbac.authorization.k8s.io/fluent-bit-role created
clusterrolebinding.rbac.authorization.k8s.io/fluent-bit-role-binding created
configmap/fluent-bit-config created
daemonset.apps/fluent-bit created
[root@ip-192-168-0-50 ~]# k get po -n amazon-cloudwatch
NAME               READY   STATUS    RESTARTS   AGE
fluent-bit-tz9d5   1/1     Running   0          11s
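Before moving on, it's worth a quick sanity check that Fluent Bit came up cleanly, for example:

kubectl logs -n amazon-cloudwatch daemonset/fluent-bit --tail=20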

Creating the S3 bucket and the Firehose stream

These are created with CloudFormation. Since this is just a verification exercise, we'll go with mostly default settings.

s3_and_firehose.yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Create an Amazon Kinesis Data Firehose delivery stream with IAM role for S3 delivery
Resources:
  FirehoseDeliveryStream:
    Type: "AWS::KinesisFirehose::DeliveryStream"
    Properties:
      DeliveryStreamType: DirectPut
      S3DestinationConfiguration:
        BucketARN: !GetAtt LogSavingBucket.Arn
        RoleARN: !GetAtt FirehoseIAMRole.Arn
        BufferingHints:
          SizeInMBs: 5
          IntervalInSeconds: 300
        CompressionFormat: UNCOMPRESSED

  LogSavingBucket:
    Type: "AWS::S3::Bucket"
    Properties:
      BucketName: log-saving-test

  FirehoseIAMRole:
    Type: "AWS::IAM::Role"
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: firehose.amazonaws.com
            Action: "sts:AssumeRole"
      Policies:
        - PolicyName: FirehoseS3Policy
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - "s3:AbortMultipartUpload"
                  - "s3:GetBucketLocation"
                  - "s3:GetObject"
                  - "s3:ListBucket"
                  - "s3:ListBucketMultipartUploads"
                  - "s3:PutObject"
                Resource:
                  - !GetAtt LogSavingBucket.Arn
                  - !Sub "${LogSavingBucket.Arn}/*"
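A deploy command along these lines creates the stack (the stack name firehose is my assumption, inferred from the generated delivery-stream name that appears later; --capabilities CAPABILITY_IAM is required because the template creates an IAM role):

aws cloudformation deploy \
  --template-file s3_and_firehose.yaml \
  --stack-name firehose \
  --capabilities CAPABILITY_IAM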

Updating the Fluent Bit config

Modify the ConfigMap that configures Fluent Bit so that logs are delivered to Firehose.
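One way to make the change is to edit the ConfigMap in place (any editing method works):

kubectl edit configmap fluent-bit-config -n amazon-cloudwatch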

[root@ip-192-168-0-50 ~]# k get cm -n amazon-cloudwatch fluent-bit-config -o yaml
apiVersion: v1
data:
  application-log.conf: |
    [INPUT]
        Name                tail
        Tag                 application.*
        Exclude_Path        /var/log/containers/cloudwatch-agent*, /var/log/containers/fluent-bit*, /var/log/containers/aws-node*, /var/log/containers/kube-proxy*
        Path                /var/log/containers/*.log
        multiline.parser    docker, cri
        DB                  /var/fluent-bit/state/flb_container.db
        Mem_Buf_Limit       50MB
        Skip_Long_Lines     On
        Refresh_Interval    10
        Rotate_Wait         30
        storage.type        filesystem
        Read_from_Head      ${READ_FROM_HEAD}

    [OUTPUT]
        Name             firehose
        Match            application.*
        delivery_stream  firehose-FirehoseDeliveryStream-VgcipUpnbpmP
        region           ap-northeast-1

    ~~~ (rest omitted) ~~~
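The delivery_stream value is the physical name CloudFormation generated for the stream. If you don't have it handy, it can be looked up with, for example:

aws firehose list-delivery-streams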

After updating the ConfigMap, restart the Pods as well.

[root@ip-192-168-0-50 fluentbit]# k rollout restart -n amazon-cloudwatch daemonset fluent-bit
daemonset.apps/fluent-bit restarted
[root@ip-192-168-0-50 fluentbit]# k get po -n amazon-cloudwatch
NAME               READY   STATUS    RESTARTS   AGE
fluent-bit-lnzv4   1/1     Running   0          6s
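To have something to ship, I ran an nginx Pod and sent it HEAD requests with curl; commands along these lines reproduce the records shown at the end (the exact invocation is my assumption):

# run a test Pod, wait for it to become Ready, then send a HEAD request to its IP
kubectl run nginx --image=nginx
kubectl wait --for=condition=Ready pod/nginx
curl -I "http://$(kubectl get pod nginx -o jsonpath='{.status.podIP}')/"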

In this setup, logs are delivered to Firehose using the permissions granted to the IAM role attached to the node group, so if that permission is missing you'll get an error like this:

time="2024-02-25T12:35:45Z" level=error msg="[firehose 0] AccessDeniedException: User: arn:aws:sts::123456789012:assumed-role/_test/i-gahiu7h5u27ij575gt is not authorized to perform: firehose:PutRecordBatch on resource: arn:aws:firehose:ap-northeast-1:123456789012:deliverystream/firehose-FirehoseDeliveryStream-VgcipUpnbpmP because no identity-based policy allows the firehose:PutRecordBatch action\n\tstatus code: 400,request id: fc89d358-5e3d-e9c9-a2dc-e539c56230b6\n"

Grant the following permission to the node role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "firehose:PutRecordBatch",
      "Resource": "arn:aws:firehose:ap-northeast-1:123456789012:deliverystream/firehose-FirehoseDeliveryStream-VgcipUpnbpmP"
    }
  ]
}
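Attach it to the node group's IAM role, for example as an inline policy (the role name below is a placeholder, and firehose-policy.json is the document above saved to a file):

aws iam put-role-policy \
  --role-name <node-instance-role> \
  --policy-name firehose-put-record-batch \
  --policy-document file://firehose-policy.json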

The Firehose buffer interval is 300 seconds, so wait about five minutes after the first logs are delivered before looking inside the S3 bucket.

[root@ip-192-168-0-50 fluentbit]# aws s3 ls s3://log-saving-test/2024/02/25/12/
2024-02-25 12:43:21     476794 firehose-FirehoseDeliveryStream-VgcipUpnbpmP-1-2024-02-25-12-38-19-0b3bc806-cc14-4007-95c8-637839cb5ed4

Let's check the contents:

[root@ip-192-168-0-50 fluentbit]# aws s3 cp s3://log-saving-test/2024/02/25/12/firehose-FirehoseDeliveryStream-VgcipUpnbpmP-1-2024-02-25-12-56-43-3efa0173-3d58-4341-89ca-4be58e31b5cb -
{"_p":"F","kubernetes":{"container_hash":"docker.io/library/nginx@sha256:c26ae7472d624ba1fafd296e73cecc4f93f853088e6a9c13c0d52f6ca5865107","container_image":"docker.io/library/nginx:latest","container_name":"nginx","docker_id":"c7441e4ae478994c7a501ac00219a445892db4ef39cf00ab1fd746220fc13fa4","host":"ip-192-168-0-31.ap-northeast-1.compute.internal","namespace_name":"default","pod_id":"e0bfc531-edb2-44f0-b965-3296799b49e6","pod_name":"nginx"},"log":"192.168.0.131 - - [25/Feb/2024:12:56:41 +0000] \"HEAD / HTTP/1.1\" 200 0 \"-\" \"curl/8.6.0\" \"-\"","stream":"stdout","time":"2024-02-25T12:56:41.982164259Z"}
{"_p":"F","kubernetes":{"container_hash":"docker.io/library/nginx@sha256:c26ae7472d624ba1fafd296e73cecc4f93f853088e6a9c13c0d52f6ca5865107","container_image":"docker.io/library/nginx:latest","container_name":"nginx","docker_id":"c7441e4ae478994c7a501ac00219a445892db4ef39cf00ab1fd746220fc13fa4","host":"ip-192-168-0-31.ap-northeast-1.compute.internal","namespace_name":"default","pod_id":"e0bfc531-edb2-44f0-b965-3296799b49e6","pod_name":"nginx"},"log":"192.168.0.131 - - [25/Feb/2024:12:56:54 +0000] \"HEAD / HTTP/1.1\" 200 0 \"-\" \"curl/8.6.0\" \"-\"","stream":"stdout","time":"2024-02-25T12:56:54.303800297Z"}
{"_p":"F","kubernetes":{"container_hash":"docker.io/library/nginx@sha256:c26ae7472d624ba1fafd296e73cecc4f93f853088e6a9c13c0d52f6ca5865107","container_image":"docker.io/library/nginx:latest","container_name":"nginx","docker_id":"c7441e4ae478994c7a501ac00219a445892db4ef39cf00ab1fd746220fc13fa4","host":"ip-192-168-0-31.ap-northeast-1.compute.internal","namespace_name":"default","pod_id":"e0bfc531-edb2-44f0-b965-3296799b49e6","pod_name":"nginx"},"log":"192.168.0.131 - - [25/Feb/2024:12:56:56 +0000] \"HEAD / HTTP/1.1\" 200 0 \"-\" \"curl/8.6.0\" \"-\"","stream":"stdout","time":"2024-02-25T12:56:56.402928644Z"}
{"_p":"F","kubernetes":{"container_hash":"docker.io/library/nginx@sha256:c26ae7472d624ba1fafd296e73cecc4f93f853088e6a9c13c0d52f6ca5865107","container_image":"docker.io/library/nginx:latest","container_name":"nginx","docker_id":"c7441e4ae478994c7a501ac00219a445892db4ef39cf00ab1fd746220fc13fa4","host":"ip-192-168-0-31.ap-northeast-1.compute.internal","namespace_name":"default","pod_id":"e0bfc531-edb2-44f0-b965-3296799b49e6","pod_name":"nginx"},"log":"192.168.0.131 - - [25/Feb/2024:12:56:58 +0000] \"HEAD / HTTP/1.1\" 200 0 \"-\" \"curl/8.6.0\" \"-\"","stream":"stdout","time":"2024-02-25T12:56:58.871090145Z"}

This confirms that the logs of the Pod (nginx) running on the EC2 node are being saved to the S3 bucket. Getting logs from Pods running on EC2 into an S3 bucket was the whole scope of this verification, so I'll wrap up here.
