Introduction
When you run the CloudWatch Logs CreateExportTask API to export logs to S3, the exported objects are keyed with the export task ID (a random string returned by CreateExportTask), as follows:
<bucket name>/<destinationPrefix value, or exportedlogs by default>/<export task ID>/<log stream name>/<log file name>
In some cases you may not want this export task ID included in the key names of the exported logs.
There is currently no way to exclude the task ID at export time, so if you do not want it in the keys, you have to rename them after the export completes.
Renaming by hand every time is tedious and invites operational mistakes, so this post shows how to rename the keys automatically with Lambda.
The processing flow is:
1. CreateExportTask exports the logs to S3 under a key that contains the export task ID.
2. S3 fires an ObjectCreated event for each exported object, invoking a Lambda function.
3. The Lambda function copies the object to a key without the task ID and deletes the original.
Environment setup
First, set each configuration value in a shell variable:
$ LogGroupName=export-task-test
$ LogStreamName=test-stream
$ S3BucketName=<Bucket Name>
$ LambdaRoleName=lambda-s3-copy-object-role
$ LambdaRoleArn=arn:aws:iam::<AWS Account ID>:role/$LambdaRoleName
$ LambdaFunctionName=move-exported-log
Create a CloudWatch Logs log group
$ aws logs create-log-group \
--log-group-name $LogGroupName
Confirm:
$ aws logs describe-log-groups \
--log-group-name-prefix $LogGroupName
{
"logGroups": [
{
"logGroupName": "export-task-test",
"creationTime": 1585128988867,
"metricFilterCount": 0,
"arn": "arn:aws:logs:ap-northeast-1:<AWS Account ID>:log-group:export-task-test:*",
"storedBytes": 0
}
]
}
Create a CloudWatch Logs log stream
$ aws logs create-log-stream \
--log-group-name $LogGroupName \
--log-stream-name $LogStreamName
Confirm:
$ aws logs describe-log-streams \
--log-group-name $LogGroupName
{
"logStreams": [
{
"logStreamName": "test-stream",
"creationTime": 1585129244632,
"arn": "arn:aws:logs:ap-northeast-1:<AWS Account ID>:log-group:export-task-test:log-stream:test-stream",
"storedBytes": 0
}
]
}
Add a log event to the log stream
$ aws logs put-log-events \
--log-group-name $LogGroupName \
--log-stream-name $LogStreamName \
--log-events timestamp=`date +%s`000,message="test"
{
"nextSequenceToken": "49605200498425818635249562987166057764478997389507621362"
}
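The trailing 000 appended to `date +%s` in the put-log-events command converts seconds into the milliseconds that PutLogEvents expects for its timestamp. A minimal Python equivalent:

```python
import time

# PutLogEvents expects the timestamp in milliseconds since the epoch;
# appending "000" to `date +%s` output is equivalent to:
timestamp_ms = int(time.time()) * 1000
```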
Confirm:
$ aws logs get-log-events \
--log-group-name $LogGroupName \
--log-stream-name $LogStreamName
{
"events": [
{
"timestamp": 1585129637000,
"message": "test",
"ingestionTime": 1585129638304
}
],
"nextForwardToken": "f/35349572141376339595642616875239835279383229376686194688",
"nextBackwardToken": "b/35349572141376339595642616875239835279383229376686194688"
}
Create the IAM role used by the Lambda function
Create the IAM role with the following trust policy (assume-role-policy-document.json):
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
$ aws iam create-role \
--role-name $LambdaRoleName \
--assume-role-policy-document file://assume-role-policy-document.json
{
"Role": {
"Path": "/",
"RoleName": "lambda-s3-copy-object-role",
"RoleId": "AROAXXXXXXXXXXXXXXXXX",
"Arn": "arn:aws:iam::<AWS Account ID>:role/lambda-s3-copy-object-role",
"CreateDate": "2020-03-25T10:04:57Z",
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
}
}
Attach inline policies to the IAM role. First, the S3 policy (copy-object-policy.json):
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "CopyObject",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:DeleteObject",
"s3:GetObject"
],
"Resource": "arn:aws:s3:::<Bucket Name>/*"
}
]
}
Next, the CloudWatch Logs policy (cloudwatch-logs-policy.json):
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PutLogEvents",
"Effect": "Allow",
"Action": "logs:PutLogEvents",
"Resource": "arn:aws:logs:ap-northeast-1:<AWS Account ID>:log-group:*:log-stream:*"
},
{
"Sid": "CreateLogStream",
"Effect": "Allow",
"Action": "logs:CreateLogStream",
"Resource": "arn:aws:logs:ap-northeast-1:<AWS Account ID>:log-group:*"
},
{
"Sid": "CreateLogGroup",
"Effect": "Allow",
"Action": "logs:CreateLogGroup",
"Resource": "*"
}
]
}
$ aws iam put-role-policy \
--role-name $LambdaRoleName \
--policy-name copy-object-policy \
--policy-document file://copy-object-policy.json
$ aws iam put-role-policy \
--role-name $LambdaRoleName \
--policy-name cloudwatch-logs-policy \
--policy-document file://cloudwatch-logs-policy.json
Confirm:
$ aws iam get-role-policy \
--role-name $LambdaRoleName \
--policy-name copy-object-policy
{
"RoleName": "lambda-s3-copy-object-role",
"PolicyName": "copy-object-policy",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Sid": "CopyObject",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:DeleteObject",
"s3:GetObject"
],
"Resource": "arn:aws:s3:::<Bucket Name>/*"
}
]
}
}
$ aws iam get-role-policy \
--role-name $LambdaRoleName \
--policy-name cloudwatch-logs-policy
{
"RoleName": "lambda-s3-copy-object-role",
"PolicyName": "cloudwatch-logs-policy",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PutLogEvents",
"Effect": "Allow",
"Action": "logs:PutLogEvents",
"Resource": "arn:aws:logs:ap-northeast-1:<AWS Account ID>:log-group:*:log-stream:*"
},
{
"Sid": "CreateLogStream",
"Effect": "Allow",
"Action": "logs:CreateLogStream",
"Resource": "arn:aws:logs:ap-northeast-1:<AWS Account ID>:log-group:*"
},
{
"Sid": "CreateLogGroup",
"Effect": "Allow",
"Action": "logs:CreateLogGroup",
"Resource": "*"
}
]
}
}
Create the Lambda function
$ mkdir code
$ cd code
$ touch lambda_function.py
import traceback

import boto3

s3 = boto3.client('s3')
BUCKET_NAME = '<Bucket Name>'
LOG_GROUP_NAME = 'export-task-test'


def copy_object(src_key, dst_key):
    try:
        s3.copy_object(
            Bucket=BUCKET_NAME,
            CopySource='%s/%s' % (BUCKET_NAME, src_key),
            Key=dst_key
        )
        result = True
    except Exception:
        traceback.print_exc()
        result = False
    return result


def delete_object(src_key):
    try:
        s3.delete_object(
            Bucket=BUCKET_NAME,
            Key=src_key
        )
        result = True
    except Exception:
        traceback.print_exc()
        result = False
    return result


def cleanup_objects(src_key):
    # Delete the original object and the aws-logs-write-test object
    return delete_object(src_key) and delete_object('exportedlogs/aws-logs-write-test')


def move_log(src_key):
    # Destination key: <LogGroupName>/<LogStreamName>/<LogFileName>
    group_stream_log_list = [LOG_GROUP_NAME] + src_key.split('/')[-2:]
    dst_key = '/'.join(group_stream_log_list)
    return copy_object(src_key, dst_key)


def move_log_and_cleanup_objects(record):
    src_key = record['s3']['object']['key']
    event_name_category = record['eventName'].split(':')[0]
    if event_name_category != 'ObjectCreated':
        return 'Skipped: %s %s' % (event_name_category, src_key)
    if not move_log(src_key):
        return 'Failed to move %s' % src_key
    if not cleanup_objects(src_key):
        return 'Failed to cleanup %s' % src_key
    return 'Successfully moved %s' % src_key


def lambda_handler(event, context):
    record = event['Records'][0]
    result = move_log_and_cleanup_objects(record)
    print(result)
    return result
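The key transformation inside move_log can be checked locally without touching S3. build_dst_key below is a hypothetical helper that reproduces the same logic (the task ID in the example is the sample one from the verification step later):

```python
# Matches the constant in the Lambda function above
LOG_GROUP_NAME = 'export-task-test'

def build_dst_key(src_key):
    # Keep only the last two path segments (<LogStreamName>/<LogFileName>)
    # and prepend the log group name, dropping the export task ID
    return '/'.join([LOG_GROUP_NAME] + src_key.split('/')[-2:])
```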
$ zip -r lambda_function.zip ./
adding: lambda_function.py (deflated 64%)
$ aws lambda create-function \
--function-name $LambdaFunctionName \
--runtime python3.8 \
--role $LambdaRoleArn \
--handler lambda_function.lambda_handler \
--zip-file fileb://lambda_function.zip
{
"FunctionName": "move-exported-log",
"FunctionArn": "arn:aws:lambda:ap-northeast-1:<AWS Account ID>:function:move-exported-log",
"Runtime": "python3.8",
"Role": "arn:aws:iam::<AWS Account ID>:role/lambda-s3-copy-object-role",
"Handler": "lambda_function.lambda_handler",
"CodeSize": 787,
"Description": "",
"Timeout": 3,
"MemorySize": 128,
"LastModified": "2020-03-25T15:27:22.284+0000",
"CodeSha256": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
"Version": "$LATEST",
"TracingConfig": {
"Mode": "PassThrough"
},
"RevisionId": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
"State": "Active",
"LastUpdateStatus": "Successful"
}
Create the S3 bucket
$ aws s3api create-bucket \
--bucket $S3BucketName \
--create-bucket-configuration LocationConstraint=ap-northeast-1
{
"Location": "http://<Bucket Name>.s3.amazonaws.com/"
}
Allow access from the CloudWatch Logs service with the following bucket policy (bucket-policy.json):
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "s3:GetBucketAcl",
"Effect": "Allow",
"Resource": "arn:aws:s3:::<Bucket Name>",
"Principal": { "Service": "logs.ap-northeast-1.amazonaws.com" }
},
{
"Action": "s3:PutObject" ,
"Effect": "Allow",
"Resource": "arn:aws:s3:::<Bucket Name>/*",
"Condition": { "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" } },
"Principal": { "Service": "logs.ap-northeast-1.amazonaws.com" }
}
]
}
$ aws s3api put-bucket-policy \
--bucket $S3BucketName \
--policy file://bucket-policy.json
Confirm:
$ aws s3api get-bucket-policy \
--bucket $S3BucketName \
| jq -r .Policy \
| jq .
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "logs.ap-northeast-1.amazonaws.com"
},
"Action": "s3:GetBucketAcl",
"Resource": "arn:aws:s3:::<Bucket Name>"
},
{
"Effect": "Allow",
"Principal": {
"Service": "logs.ap-northeast-1.amazonaws.com"
},
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::<Bucket Name>/*",
"Condition": {
"StringEquals": {
"s3:x-amz-acl": "bucket-owner-full-control"
}
}
}
]
}
Grant S3 permission to invoke the Lambda function
$ aws lambda add-permission \
--function-name $LambdaFunctionName \
--principal s3.amazonaws.com \
--statement-id s3invoke \
--action "lambda:InvokeFunction" \
--source-arn arn:aws:s3:::$S3BucketName \
--source-account <AWS Account ID> \
| jq -r .Statement \
| jq .
{
"Sid": "s3invoke",
"Effect": "Allow",
"Principal": {
"Service": "s3.amazonaws.com"
},
"Action": "lambda:InvokeFunction",
"Resource": "arn:aws:lambda:ap-northeast-1:<AWS Account ID>:function:move-exported-log",
"Condition": {
"StringEquals": {
"AWS:SourceAccount": "<AWS Account ID>"
},
"ArnLike": {
"AWS:SourceArn": "arn:aws:s3:::<Bucket Name>"
}
}
}
Configure the S3 event notification (bucket-notification-configuration.json):
{
"LambdaFunctionConfigurations": [
{
"Id": "move-exported-log",
"LambdaFunctionArn": "arn:aws:lambda:ap-northeast-1:<AWS Account ID>:function:move-exported-log",
"Events": [
"s3:ObjectCreated:*"
],
"Filter": {
"Key": {
"FilterRules": [
{
"Name": "prefix",
"Value": "exportedlogs/"
},
{
"Name": "suffix",
"Value": ".gz"
}
]
}
}
}
]
}
$ aws s3api put-bucket-notification-configuration \
--bucket $S3BucketName \
--notification-configuration file://bucket-notification-configuration.json
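With this filter, only gzipped objects under exportedlogs/ trigger the function; the aws-logs-write-test marker object that CreateExportTask writes does not match the suffix, which is why the Lambda deletes it separately in cleanup_objects. matches_filter below is a hypothetical helper sketching the single-rule filter semantics:

```python
def matches_filter(key, prefix='exportedlogs/', suffix='.gz'):
    # Reproduce the prefix/suffix match S3 applies before invoking Lambda
    return key.startswith(prefix) and key.endswith(suffix)
```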
Confirm:
$ aws s3api get-bucket-notification-configuration \
--bucket $S3BucketName
{
"LambdaFunctionConfigurations": [
{
"Id": "move-exported-log",
"LambdaFunctionArn": "arn:aws:lambda:ap-northeast-1:<AWS Account ID>:function:move-exported-log",
"Events": [
"s3:ObjectCreated:*"
],
"Filter": {
"Key": {
"FilterRules": [
{
"Name": "Prefix",
"Value": "exportedlogs/"
},
{
"Name": "Suffix",
"Value": ".gz"
}
]
}
}
}
]
}
Verification
$ today=`date +%Y-%m-%d`
$ beginning_of_yesterday=`date -j -v -1d -f "%Y-%m-%d %T" $today" 00:00:00" +%s`000
$ beginning_of_today=`date -j -f "%Y-%m-%d %T" $today" 00:00:00" +%s`000
$ aws logs create-export-task \
--log-group-name $LogGroupName \
--from $beginning_of_yesterday \
--to $beginning_of_today \
--destination $S3BucketName
{
"taskId": "cd8073fd-d3d9-4297-ab8b-071e2246ad5b"
}
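The date -j and -v flags above are BSD (macOS) specific. Assuming UTC midnight boundaries (adjust the tzinfo if your shell runs in a local timezone), the same millisecond range can be computed portably in Python:

```python
from datetime import datetime, timedelta, timezone

# Truncate "now" to midnight for the beginning of today,
# then subtract one day for the beginning of yesterday
today = datetime.now(timezone.utc).replace(hour=0, minute=0, second=0, microsecond=0)
yesterday = today - timedelta(days=1)

# CreateExportTask takes --from/--to in epoch milliseconds
beginning_of_yesterday = int(yesterday.timestamp()) * 1000
beginning_of_today = int(today.timestamp()) * 1000
```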
Results
$ aws logs start-query \
--log-group-name "/aws/lambda/${LambdaFunctionName}" \
--start-time `date -v -1d +%s`000 \
--end-time `date +%s`000 \
--query-string "FIELDS @message"
{
"queryId": "95e4aaba-6bac-4e66-8524-ee2f8a59acaf"
}
$ aws logs get-query-results \
--query-id 95e4aaba-6bac-4e66-8524-ee2f8a59acaf
{
"results": [
...
[
{
"field": "@message",
"value": "Successfully moved exportedlogs/cd8073fd-d3d9-4297-ab8b-071e2246ad5b/test-stream/000000.gz\n"
},
...
],
...
$ aws s3 ls s3://<Bucket Name>/$LogGroupName/$LogStreamName/
2020-03-26 18:57:14 50 000000.gz
Summary
As shown above, by combining CloudWatch Logs, S3 event notifications, and Lambda, you can strip the export task ID from the key names of logs exported with CreateExportTask.
Logs land in the destination S3 bucket under <log group name>/<log stream name>/<log file name> keys, which makes the log listing much easier to browse.
Give it a try if this fits your use case.