GCS lets you specify a Single region, Dual-region, or Multi-region as a bucket's storage location, so I tested how much of a difference the choice makes to upload speed.
A region is a specific geographic place, such as London.
A dual-region is a specific pair of regions, such as Finland and the Netherlands.
A multi-region is a large geographic area, such as the United States, that contains two or more geographic places.
Setup
All buckets were created with the Standard storage class. I created a 1 GiB dummy file and measured how long it took to upload to each bucket. The gsutil commands were run from Cloud Shell, which was started in asia-east1-a (Taiwan).
$ curl -H "Metadata-Flavor: Google" metadata/computeMetadata/v1/instance/zone
projects/*********/zones/asia-east1-a
Conclusion
Choosing Dual or Multi did not, say, double the upload time: all five uploads completed in roughly 21-25 seconds (single/tokyo 22.7 s, multi/asia 21.7 s, single/us-central1 20.9 s, dual/nam4 24.8 s, multi/us 25.3 s).
Steps and logs
Create the dummy file
dd if=/dev/zero of=dummy bs=1G count=1
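If dd is not available, an equivalent zero-filled file can also be created from Python; a minimal sketch (note that `truncate` may produce a sparse file on some filesystems, unlike dd, though the uploaded bytes are identical):

```python
def make_dummy(path: str, size_bytes: int = 1024 ** 3) -> None:
    """Create a zero-filled file of the given size (1 GiB by default)."""
    with open(path, "wb") as f:
        f.truncate(size_bytes)  # extends the file with zero bytes

make_dummy("dummy")
```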
Create the buckets
gsutil mb -c standard -l asia-northeast1 -b on gs://my-bucket-single-tokyo/
gsutil mb -c standard -l asia -b on gs://my-bucket-multi-asia/
gsutil mb -c standard -l us-central1 -b on gs://my-bucket-single-us-central/
gsutil mb -c standard -l nam4 -b on gs://my-bucket-dual-nam4/
gsutil mb -c standard -l us -b on gs://my-bucket-multi-us/
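To double-check that each bucket really landed in the intended location, `gsutil ls -L -b` reports the bucket's location constraint. A small sketch that parses that output (the `bucket_location` helper and the exact field name are assumptions; verify against your gsutil version):

```python
import re
import subprocess

def parse_location(listing: str):
    """Extract the location from `gsutil ls -L -b` output, e.g. 'NAM4'."""
    m = re.search(r"Location constraint:\s*(\S+)", listing)
    return m.group(1) if m else None

def bucket_location(bucket: str):
    """Ask gsutil for a bucket's metadata and return its location constraint."""
    out = subprocess.run(
        ["gsutil", "ls", "-L", "-b", bucket],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_location(out)
```

For example, `bucket_location("gs://my-bucket-dual-nam4/")` should report NAM4 if the bucket was created as intended.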
Upload the dummy file
$ time gsutil cp dummy gs://my-bucket-single-tokyo/
Copying file://dummy [Content-Type=application/octet-stream]...
==> NOTE: You are uploading one or more large file(s), which would run
significantly faster if you enable parallel composite uploads. This
feature can be enabled by editing the
"parallel_composite_upload_threshold" value in your .boto
configuration file. However, note that if you do this large files will
be uploaded as `composite objects
<https://cloud.google.com/storage/docs/composite-objects>`_,which
means that any user who downloads such objects will need to have a
compiled crcmod installed (see "gsutil help crcmod"). This is because
without a compiled crcmod, computing checksums on composite objects is
so slow that gsutil disables downloads of composite objects.
- [1 files][ 1.0 GiB/ 1.0 GiB] 52.8 MiB/s
Operation completed over 1 objects/1.0 GiB.
real 0m22.698s
user 0m5.779s
sys 0m3.355s
$ time gsutil cp dummy gs://my-bucket-multi-asia/
Copying file://dummy [Content-Type=application/octet-stream]...
(same parallel composite uploads NOTE as above; snipped)
/ [1 files][ 1.0 GiB/ 1.0 GiB] 49.9 MiB/s
Operation completed over 1 objects/1.0 GiB.
real 0m21.694s
user 0m5.753s
sys 0m3.291s
$ time gsutil cp dummy gs://my-bucket-single-us-central/
Copying file://dummy [Content-Type=application/octet-stream]...
(same parallel composite uploads NOTE as above; snipped)
| [1 files][ 1.0 GiB/ 1.0 GiB] 55.6 MiB/s
Operation completed over 1 objects/1.0 GiB.
real 0m20.936s
user 0m5.780s
sys 0m3.225s
$ time gsutil cp dummy gs://my-bucket-dual-nam4/
Copying file://dummy [Content-Type=application/octet-stream]...
(same parallel composite uploads NOTE as above; snipped)
- [1 files][ 1.0 GiB/ 1.0 GiB] 43.5 MiB/s
Operation completed over 1 objects/1.0 GiB.
real 0m24.753s
user 0m6.829s
sys 0m4.467s
$ time gsutil cp dummy gs://my-bucket-multi-us/
Copying file://dummy [Content-Type=application/octet-stream]...
(same parallel composite uploads NOTE as above; snipped)
- [1 files][ 1.0 GiB/ 1.0 GiB] 42.6 MiB/s
Operation completed over 1 objects/1.0 GiB.
real 0m25.268s
user 0m6.737s
sys 0m4.723s
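The five manual `time gsutil cp` runs above can be scripted for repeat measurements; a minimal sketch (bucket names from this post, gsutil assumed on PATH; wall-clock time includes gsutil startup, just like `time` does):

```python
import subprocess
import time

def time_command(cmd):
    """Run a command and return elapsed wall-clock seconds, or None if it failed."""
    start = time.monotonic()
    result = subprocess.run(cmd, capture_output=True)
    elapsed = time.monotonic() - start
    return elapsed if result.returncode == 0 else None

# Bucket names from this post; the buckets must already exist.
BUCKETS = [
    "gs://my-bucket-single-tokyo/",
    "gs://my-bucket-multi-asia/",
    "gs://my-bucket-single-us-central/",
    "gs://my-bucket-dual-nam4/",
    "gs://my-bucket-multi-us/",
]

def run_all(src="dummy"):
    """Upload src to every bucket and print the elapsed time per bucket."""
    for bucket in BUCKETS:
        t = time_command(["gsutil", "cp", src, bucket])
        print(f"{bucket}: {t:.1f}s" if t is not None else f"{bucket}: failed")
```

As the NOTE in the logs suggests, enabling parallel composite uploads (e.g. setting `parallel_composite_upload_threshold` in the .boto configuration) would likely speed up these 1 GiB uploads; all the timings above were measured without it.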