Comparing s3cmd and aws-cli performance on IDCF Cloud object storage

Posted at 2016-03-13

Correction

It seems throughput happened to be unusually bad when I first ran these tests; when I retried on 3/16 the results were completely different (for the 5 GB PUT/GET only).
That said, aws-cli is still faster for large files.
s3cmd 5GBx1 PUT

# time s3cmd put 5GB s3://hoppy/
upload: '5GB' -> 's3://hoppy/5GB'  [part 1 of 318, 15MB] [1 of 1]
 15728640 of 15728640   100% in    1s    12.43 MB/s  done

real    9m28.059s
user    1m24.496s
sys 0m23.931s

aws-cli 5GBx1 PUT

# time aws --endpoint-url https://ds.jp-east.idcfcloud.com s3 cp 5GB s3://hoppy/
upload: ./5GB to s3://hoppy/5GB

real    2m16.366s
user    0m41.617s
sys 0m21.943s

s3cmd 5GBx1 GET

# time s3cmd get s3://hoppy/5GB 5GB
download: 's3://hoppy/5GB' -> '5GB'  [1 of 1]
 5000000000 of 5000000000   100% in  204s    23.35 MB/s  done

real    3m24.365s
user    1m16.915s
sys 0m25.168s

aws-cli 5GBx1 GET

# time aws --endpoint-url https://ds.jp-east.idcfcloud.com s3 cp s3://hoppy/5GB 5GB
download: s3://hoppy/5GB to ./5GB

real    1m28.344s
user    0m36.069s
sys 0m17.853s

Environment

IDCF Cloud, henry zone
Standard.S4
1 CPU 2.4GHz
4 GB memory
CentOS 7.1

Results

Starting with the results:

                 s3cmd                                            aws-cli
put 1GB          ~2 min                                           ~25 sec
put 3GB          ~7 min                                           ~1 min 30 sec
put 5GB          ~11 min (retest 3/16: 9 min 30 sec)              ~3 min (retest 3/16: 2 min 30 sec)
put 1MB x 1000   ~13 min                                          ~20 min
get 1GB          ~40 sec                                          ~20 sec
get 5GB          ~40 min, no options (retest 3/16: 3 min 30 sec)  ~3 min, default 8MB multipart (retest 3/16: 1 min 30 sec)
ls               about the same                                   about the same
rm               about the same                                   about the same

For large single-file uploads (1 GB, 3 GB and 5 GB in this test), aws-cli is overwhelmingly faster. GET is also faster with aws-cli.

For uploading a large number of small files (1 MB x 1000 in this test), s3cmd appears to be the faster choice.

Both aws-cli and s3cmd were run with their default configs, so there may be room for tuning, but for large files aws-cli is the better option.
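
If you do want to tune, the aws-cli side exposes a few S3 transfer settings (see the s3-config documentation referenced later in this article). A minimal sketch; the values below are placeholders I have not benchmarked on IDCF Cloud, so treat them as assumptions rather than recommendations:

# aws configure set default.s3.max_concurrent_requests 20
# aws configure set default.s3.multipart_chunksize 16MB
# aws configure set default.s3.multipart_threshold 64MB

max_concurrent_requests controls how many parts are transferred in parallel (default 10); the other two control when multipart kicks in and how large each part is.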

The rest of this article covers the installation steps for each tool and the commands used during testing.

Installing awscli

# pip install awscli

Initial setup

# aws configure
AWS Access Key ID [None]: xxxxxxxxxxxxxx
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]:
Default output format [None]:

This creates a credentials file under ~/.aws:
[default]
aws_access_key_id = xxxxxxxxxxxxxx
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
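
Since IDCF's object storage is an S3-compatible service rather than AWS itself, every aws s3 command in this article passes --endpoint-url explicitly. A small shell function (my own convenience wrapper, not part of either tool; idcf_s3 is a hypothetical name) can shorten the commands:

# idcf_s3() { aws --endpoint-url https://ds.jp-east.idcfcloud.com s3 "$@"; }
# idcf_s3 ls s3://hoppy/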

Check the aws-cli version

# aws --version
aws-cli/1.10.12 Python/2.7.5 Linux/3.10.0-229.14.1.el7.x86_64 botocore/1.4.3

Installing s3cmd

# cd /usr/local/src/
# wget https://pypi.python.org/packages/source/s/s3cmd/s3cmd-1.6.1.tar.gz#md5=d7477e7000a98552932d23e279d69a11
# tar zxvf s3cmd-1.6.1.tar.gz
# cd s3cmd-1.6.1/
# python setup.py install

Required if python-dateutil is not already installed:
# yum install python-dateutil

Check the s3cmd version

# s3cmd --version
s3cmd version 1.6.1
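
One step not shown here: s3cmd also needs credentials and the IDCF endpoint before the commands below will work (s3cmd --configure writes ~/.s3cfg interactively). A minimal ~/.s3cfg sketch; host_base comes from the endpoint used elsewhere in this article, and the host_bucket pattern is my assumption:

# cat ~/.s3cfg
[default]
access_key = xxxxxxxxxxxxxx
secret_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
host_base = ds.jp-east.idcfcloud.com
host_bucket = %(bucket)s.ds.jp-east.idcfcloud.com
use_https = True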

Create a 1 GB file

# dd if=/dev/zero of=./1GB bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 5.53735 s, 189 MB/s

s3cmd 1GB x 1 put

# time s3cmd put 1GB s3://hoppy/
upload: '1GB' -> 's3://hoppy/1GB'  [part 1 of 67, 15MB] [1 of 1]
<snip>
upload: '1GB' -> 's3://hoppy/1GB'  [part 67 of 67, 10MB] [1 of 1]
 10485760 of 10485760   100% in    2s     3.54 MB/s  done

real    1m49.415s
user    0m15.736s
sys 0m2.147s

Try again with the multipart upload chunk size set to 8 MB, the same as aws-cli's default

Reference:
http://docs.aws.amazon.com/cli/latest/topic/s3-config.html

multipart_chunksize
Default - 8MB

Minimum For Uploads - 5MB

Once the S3 commands have decided to use multipart operations, the file is divided into chunks. This configuration option specifies what the chunk size (also referred to as the part size) should be. This value can specified using the same semantics as multipart_threshold, that is either as the number of bytes as an integer, or using a size suffix.

multipart 8MB

# time s3cmd --multipart-chunk-size-mb=8 put 1GB s3://hoppy/
upload: '1GB' -> 's3://hoppy/1GB'  [part 1 of 125, 8MB] [1 of 1]
 8388608 of 8388608   100% in    1s     5.48 MB/s  done
<snip>
upload: '1GB' -> 's3://hoppy/1GB'  [part 125 of 125, 8MB] [1 of 1]
 8388608 of 8388608   100% in    1s     7.42 MB/s  done

real    2m22.127s
user    0m16.872s
sys 0m2.260s

Upload time did not change.

s3cmd 1GB x 1 get

# time s3cmd get s3://hoppy/1GB
download: 's3://hoppy/1GB' -> './1GB'  [1 of 1]
 1048576000 of 1048576000   100% in   36s    27.12 MB/s  done

real    0m37.054s
user    0m14.992s
sys 0m5.258s

aws-cli 1GB x 1 put

# time aws --endpoint-url https://ds.jp-east.idcfcloud.com s3 cp 1GB-awscli s3://hoppy/
upload: ./1GB-awscli to s3://hoppy/1GB-awscli

real    0m23.165s
user    0m7.691s
sys 0m2.741s

Now try aws-cli with its multipart chunk size set to 15 MB, matching s3cmd's default

# aws configure set default.s3.multipart_chunksize 15MB

This command creates the ~/.aws/config file:

# cat ~/.aws/config
[default]
s3 =
    multipart_chunksize = 15MB
# time aws --endpoint-url https://ds.jp-east.idcfcloud.com s3 cp 1GB s3://hoppy/
upload: ./1GB to s3://hoppy/1GB

real    0m24.294s
user    0m7.020s
sys 0m2.270s

No meaningful change in upload time.

aws-cli 1GB x 1 get

# time aws --endpoint-url https://ds.jp-east.idcfcloud.com s3 cp s3://hoppy/1GB 1GB
download: s3://hoppy/1GB to ./1GB

real    0m16.517s
user    0m6.105s
sys 0m3.067s

s3cmd 1MB x 1000 put

Next, create a large number of small files and measure the upload time.

Create 1000 files of 1 MB each

# for i in {1..1000} ; do dd if=/dev/zero of=$i bs=1M count=1 ; done
# time for i in {1..1000} ; do s3cmd put $i s3://hoppy/ ; done
upload: '1' -> 's3://hoppy/1'  [1 of 1]
 1048576 of 1048576   100% in    0s  1064.32 kB/s  done
upload: '2' -> 's3://hoppy/2'  [1 of 1]
 1048576 of 1048576   100% in    0s     7.81 MB/s  done

upload: '1000' -> 's3://hoppy/1000'  [1 of 1]
 1048576 of 1048576   100% in    0s     4.21 MB/s  done

real    13m20.315s
user    3m3.908s
sys 0m33.912s

aws-cli 1MB x 1000 put

1st run

# time for i in {1..1000} ; do aws --endpoint-url https://ds.jp-east.idcfcloud.com s3 cp $i s3://kurohoppy/ ; done
upload: ./1 to s3://kurohoppy/1

upload: ./1000 to s3://kurohoppy/1000

real    20m58.931s
user    6m16.116s
sys 0m59.190s

2nd run

# time for i in {1..1000} ; do aws --endpoint-url https://ds.jp-east.idcfcloud.com s3 cp $i s3://hoppy/ ; done
upload: ./1 to s3://hoppy/1

upload: ./1000 to s3://hoppy/1000

real    21m22.521s
user    6m19.122s
sys 1m1.867s
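
Note that these loops start a new CLI process and a fresh connection for every single file, so per-invocation overhead is paid 1000 times. It was not measured here, but both tools can upload a whole directory in one invocation, which should remove most of that overhead. A sketch, assuming the 1000 files were created under ./small/ instead of the current directory:

# time aws --endpoint-url https://ds.jp-east.idcfcloud.com s3 cp ./small/ s3://hoppy/ --recursive
# time s3cmd put --recursive ./small/ s3://hoppy/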

Compare object listing time

# time aws --endpoint-url https://ds.jp-east.idcfcloud.com s3 ls s3://hoppy/
real    0m3.666s
user    0m0.690s
sys 0m0.095s
# time s3cmd ls s3://hoppy/

real    0m4.788s
user    0m1.040s
sys 0m0.060s

Delete the objects

# time for i in {1..1000} ; do aws --endpoint-url https://ds.jp-east.idcfcloud.com s3 rm s3://hoppy/$i ; done
delete: s3://hoppy/1

delete: s3://hoppy/1000

real    17m51.598s
user    6m1.646s
sys 0m57.292s
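
The same per-invocation overhead applies to deleting one object per command. As a sketch (not timed in this article, and note that both commands remove everything under the given bucket/prefix):

# time aws --endpoint-url https://ds.jp-east.idcfcloud.com s3 rm s3://hoppy/ --recursive
# time s3cmd del --recursive --force s3://hoppy/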

Create a 5 GB file

# dd if=/dev/zero of=./5GB bs=1M count=5000

aws-cli 5GB x 1 put

# time aws --endpoint-url https://ds.jp-east.idcfcloud.com s3 cp 5GB s3://hoppy/
upload: ./5GB to s3://hoppy/5GB


Completed 64 of 625 part(s) with 1 file(s) remaining

real    2m7.412s
user    0m41.645s
sys 0m21.901s

2nd run

upload: ./5GB to s3://hoppy/5GB

real    3m13.921s
user    0m41.968s
sys 0m21.975s

Delete

# time aws --endpoint-url https://ds.jp-east.idcfcloud.com s3 rm  s3://hoppy/5GB
delete: s3://hoppy/5GB

real    0m2.027s
user    0m0.451s
sys 0m0.185s

s3cmd 5GB x 1 put

# time s3cmd put 5GB s3://hoppy/
upload: '5GB' -> 's3://hoppy/5GB'  [part 1 of 334, 15MB] [1 of 1]

upload: '5GB' -> 's3://hoppy/5GB'  [part 334 of 334, 5MB] [1 of 1]
 5242880 of 5242880   100% in    0s     6.62 MB/s  done

real    11m1.136s
user    1m27.351s
sys 0m23.442s
# time s3cmd rm s3://hoppy/5GB
delete: 's3://hoppy/5GB'

real    0m0.937s
user    0m0.166s
sys 0m0.035s

aws-cli 5GB x 1 get

With multipart_chunksize 15MB

# time aws --endpoint-url https://ds.jp-east.idcfcloud.com s3 cp s3://hoppy/5GB 5GB

Completed 238 of 333 part(s) with 1 file(s) remaining

download: s3://hoppy/5GB to ./5GB

real    3m7.454s
user    0m31.841s
sys 0m16.476s
# aws configure set default.s3.multipart_chunksize 8MB
# cat ~/.aws/config
[default]
s3 =
    multipart_chunksize = 8MB

With multipart_chunksize 8MB

# time aws --endpoint-url https://ds.jp-east.idcfcloud.com s3 cp s3://hoppy/5GB 5GB
Completed 206 of 625 part(s) with 1 file(s) remaining

download: s3://hoppy/5GB to ./5GB

real    3m11.580s
user    0m32.481s
sys 0m16.640s

s3cmd 5GB x 1 get

# time s3cmd --multipart-chunk-size-mb=15 get s3://hoppy/5GB
download: 's3://hoppy/5GB' -> './5GB'  [1 of 1]
 5242880000 of 5242880000   100% in 2481s     2.02 MB/s  done

real    41m21.554s
user    1m15.421s
sys 0m27.096s

aws-cli 3GB x 1 put

# time aws --endpoint-url https://ds.jp-east.idcfcloud.com s3 cp 3GB s3://hoppy/
Completed 88 of 375 part(s) with 1 file(s) remaining

upload: ./3GB to s3://hoppy/3GB

real    1m21.823s
user    0m24.165s
sys 0m12.303s

delete

# time aws --endpoint-url https://ds.jp-east.idcfcloud.com s3 rm s3://hoppy/3GB
delete: s3://hoppy/3GB

real    0m2.908s
user    0m0.536s
sys 0m0.320s

s3cmd 3GB x 1 put

# time s3cmd put 3GB s3://hoppy/
upload: '3GB' -> 's3://hoppy/3GB'  [part 1 of 200, 15MB] [1 of 1]
 15728640 of 15728640   100% in    1s    13.29 MB/s  done

upload: '3GB' -> 's3://hoppy/3GB'  [part 200 of 200, 15MB] [1 of 1]
 15728640 of 15728640   100% in    1s    12.07 MB/s  done

real    7m20.045s
user    0m48.661s
sys 0m12.791s

delete

# time s3cmd rm s3://hoppy/3GB
delete: 's3://hoppy/3GB'

real    0m0.951s
user    0m0.176s
sys 0m0.054s

Note

When uploading a file larger than 5 GB, multipart upload is mandatory.

Reference:
https://www.faq.idcf.jp/app/answers/detail/a_id/332/c/37
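
In practice this just means a single-shot PUT caps out at 5 GB. aws-cli switches to multipart automatically once a file exceeds multipart_threshold, and s3cmd splits any file larger than its chunk size, so uploading something bigger than 5 GB should work with the same commands used above. A sketch with a hypothetical 10GB file, not tested in this article:

# aws --endpoint-url https://ds.jp-east.idcfcloud.com s3 cp 10GB s3://hoppy/
# s3cmd --multipart-chunk-size-mb=15 put 10GB s3://hoppy/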

I hope this serves as a useful reference.
