Installing rclone on a Raspberry Pi and mounting AWS S3

Preparing S3

Create an S3 bucket in the usual way and prepare an IAM user that can access it.
I set the policy up roughly like this.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::<bucketname>/*"
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::<bucketname>"
        }
    ]
}
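
For reference, a minimal sketch of attaching this as an inline policy with the AWS CLI; the user name (rclone-raspi), policy name, and file name (policy.json) are placeholders, not values from this article:

$ # attach the policy above, saved as policy.json, to the IAM user as an inline policy
$ aws iam put-user-policy --user-name rclone-raspi --policy-name rclone-s3-access --policy-document file://policy.json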

Installing rclone

$ sudo apt update
$ sudo apt install rclone
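
Once installed, a quick sanity check that the binary is available (the version reported will depend on your distribution's package):

$ rclone version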

Configuring rclone

Create a new remote. Start by giving it an easy-to-recognize name.

$ rclone config
2021/05/27 07:02:36 NOTICE: Config file "/home/pi/.config/rclone/rclone.conf" not found - using defaults
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> AmazonS3_01

Select the storage type. We are using Amazon S3, so choose "s3".

Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / A stackable unification remote, which can appear to merge the contents of several remotes
   \ "union"
 2 / Alias for a existing remote
   \ "alias"
 3 / Amazon Drive
   \ "amazon cloud drive"
 4 / Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)
   \ "s3"
    (snip)
24 / http Connection
   \ "http"
Storage> s3
** See help for s3 backend at: https://rclone.org/s3/ **

Select the S3 provider to use. There are quite a few compatible services out there.
This is Amazon S3, so choose "AWS".

Choose your S3 provider.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Amazon Web Services (AWS) S3
   \ "AWS"
 2 / Ceph Object Storage
   \ "Ceph"
 3 / Digital Ocean Spaces
   \ "DigitalOcean"
 4 / Dreamhost DreamObjects
   \ "Dreamhost"
 5 / IBM COS S3
   \ "IBMCOS"
 6 / Minio Object Storage
   \ "Minio"
 7 / Wasabi Object Storage
   \ "Wasabi"
 8 / Any other S3 compatible provider
   \ "Other"
provider> AWS

Next, credentials. Since this runs on the Raspberry Pi, we use the credentials of the IAM user prepared earlier rather than pulling them from environment variables or instance metadata.
Select false (the default), then enter the access_key_id and secret_access_key when prompted.

Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ "false"
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ "true"
env_auth>
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
access_key_id> ********
AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
secret_access_key> ****************

Specify the region to connect to; here, "ap-northeast-1".

Region to connect to.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
   / The default endpoint - a good choice if you are unsure.
 1 | US Region, Northern Virginia or Pacific Northwest.
   | Leave location constraint empty.
   \ "us-east-1"
   / US East (Ohio) Region
 2 | Needs location constraint us-east-2.
   \ "us-east-2"
    (snip)
   / Asia Pacific (Tokyo) Region
11 | Needs location constraint ap-northeast-1.
   \ "ap-northeast-1"
   / Asia Pacific (Seoul)
12 | Needs location constraint ap-northeast-2.
   \ "ap-northeast-2"
   / Asia Pacific (Mumbai)
13 | Needs location constraint ap-south-1.
   \ "ap-south-1"
   / South America (Sao Paulo) Region
14 | Needs location constraint sa-east-1.
   \ "sa-east-1"
region> ap-northeast-1

Leave the S3 endpoint at the default ("").
Since an existing bucket will be used, the location constraint does not really matter, but specify "ap-northeast-1" anyway.

Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
Enter a string value. Press Enter for the default ("").
endpoint>
Location constraint - must be set to match the Region.
Used when creating buckets only.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Empty for US Region, Northern Virginia or Pacific Northwest.
   \ ""
 2 / US East (Ohio) Region.
   \ "us-east-2"
 3 / US West (Oregon) Region.
   \ "us-west-2"
 4 / US West (Northern California) Region.
   \ "us-west-1"
 5 / Canada (Central) Region.
   \ "ca-central-1"
 6 / EU (Ireland) Region.
   \ "eu-west-1"
 7 / EU (London) Region.
   \ "eu-west-2"
 8 / EU Region.
   \ "EU"
 9 / Asia Pacific (Singapore) Region.
   \ "ap-southeast-1"
10 / Asia Pacific (Sydney) Region.
   \ "ap-southeast-2"
11 / Asia Pacific (Tokyo) Region.
   \ "ap-northeast-1"
12 / Asia Pacific (Seoul)
   \ "ap-northeast-2"
13 / Asia Pacific (Mumbai)
   \ "ap-south-1"
14 / South America (Sao Paulo) Region.
   \ "sa-east-1"
location_constraint> ap-northeast-1

For the ACL, keep the default "private".

Canned ACL used when creating buckets and storing or copying objects.

For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl

Note that this ACL is applied when server side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
   \ "private"
 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
   \ "public-read"
   / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
 3 | Granting this on a bucket is generally not recommended.
   \ "public-read-write"
 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
   \ "authenticated-read"
   / Object owner gets FULL_CONTROL. Bucket owner gets READ access.
 5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ "bucket-owner-read"
   / Both the object owner and the bucket owner get FULL_CONTROL over the object.
 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ "bucket-owner-full-control"
acl>

Specify server-side encryption as appropriate. The bucket was already set to AES256, so choose "2".
No KMS key ARN is specified.

The server-side encryption algorithm used when storing this object in S3.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / None
   \ ""
 2 / AES256
   \ "AES256"
 3 / aws:kms
   \ "aws:kms"
server_side_encryption> 2
If using KMS ID you must provide the ARN of Key.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / None
   \ ""
 2 / arn:aws:kms:*
   \ "arn:aws:kms:us-east-1:*"
sse_kms_key_id>

Choose the storage class to suit your use case. Here, STANDARD.

The storage class to use when storing new objects in S3.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Default
   \ ""
 2 / Standard storage class
   \ "STANDARD"
 3 / Reduced redundancy storage class
   \ "REDUCED_REDUNDANCY"
 4 / Standard Infrequent Access storage class
   \ "STANDARD_IA"
 5 / One Zone Infrequent Access storage class
   \ "ONEZONE_IA"
storage_class> STANDARD

Decline the offer to edit the advanced config, review the settings, and the remote is done.

Edit advanced config? (y/n)
y) Yes
n) No
y/n> n
Remote config
--------------------
[AmazonS3_01]
provider = AWS
access_key_id = ********************
secret_access_key = ***************************************
region = ap-northeast-1
location_constraint = ap-northeast-1
server_side_encryption = AES256
storage_class = STANDARD
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:

Name                 Type
====                 ====
AmazonS3_01          s3

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q
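
The settings are written to /home/pi/.config/rclone/rclone.conf (the path shown in the NOTICE earlier). If you want to review them later without re-running the wizard, rclone can print the config; note that it includes the access keys, so be careful where you run it:

$ rclone config show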

Checking that it works

Issue ls against the remote we just defined... and it fails with a 403 error. Ah, right, the ListBucket permission was scoped to a specific bucket.

$ rclone ls AmazonS3_01:
2021/05/27 07:33:59 ERROR : : error listing: AccessDenied: Access Denied
    status code: 403, request id: , host id: 
2021/05/27 07:33:59 Failed to ls: AccessDenied: Access Denied
    status code: 403, request id: , host id: 

Specifying the bucket name avoids the error. The bucket is still empty, though, so nothing comes back.

$ rclone ls AmazonS3_01:<bucketname>

After copying a file up, ls shows it without any problem.

$ rclone copy lcd.py AmazonS3_01:<bucketname>
$ rclone ls AmazonS3_01:<bucketname>
     3461 lcd.py
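
Copying in the other direction works the same way; a rough example, with a made-up local destination directory:

$ rclone copy AmazonS3_01:<bucketname>/lcd.py ./from-s3/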

rclone mount

$ mkdir ~/s3
$ rclone mount AmazonS3_01:<bucketname> ~/s3 &
[1] 1165
$ ls ~/s3/ -l
total 4
-rw-r--r-- 1 pi pi 3461 May 25 03:54 lcd.py

The key point is the trailing " &" on the rclone mount command.
As the help text quoted below indicates, running it in the foreground keeps rclone occupied with the mount, so if you want to keep working in the same shell you need to run rclone in the background.

When the program ends, either via Ctrl+C or receiving a SIGINT or SIGTERM signal, the mount is automatically stopped.
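
As an aside, depending on the rclone version, rclone mount also has a --daemon option that detaches into the background without the shell's " &"; a hedged alternative, so check rclone mount --help on your install first:

$ rclone mount AmazonS3_01:<bucketname> ~/s3 --daemon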

To stop it, send SIGINT or SIGTERM to the rclone process.

$ ps aux | grep rclone
pi        1165  0.5  5.1 883532 22604 pts/0    Sl   07:51   0:01 rclone mount AmazonS3_01:<bucketname> ~/s3
pi        1194  0.0  0.4   7332  1816 pts/0    S+   07:57   0:00 grep --color=auto rclone
$ kill -s SIGINT 1165
$ ps aux | grep rclone
pi        1199  0.0  0.4   7332  2032 pts/0    S+   08:00   0:00 grep --color=auto rclone
[1]+  Done                    rclone mount AmazonS3_01:<bucketname> ~/s3
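
Since the mount is FUSE-based, it can also be detached with fusermount instead of signalling the rclone process (assuming the same ~/s3 mount point):

$ fusermount -u ~/s3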