Oracle Cloud Compute I/O Performance Max-Speed Challenge: Hitting 6 Million IOPS and 200 Gbps of Throughput

Posted at 2025-09-24

Oracle Cloud Infrastructure (OCI) Compute E6 Standard, powered by 5th-generation AMD EPYC™ processors, has been announced.

From storage fundamentals, storage IOPS (I/O operations per second) and throughput (data transfer rate) are related as follows:

[throughput] = [bytes per I/O] × [IOPS]
 --> 200 Gbps = 4 KB × IOPS
 -->    IOPS = 200 Gb/s ÷ 4 KB = 25 GB/s ÷ 4 KB = 25,000,000 KB/s ÷ 4 KB = 6,250,000

So the BM.Standard.E6 compute shape, with its 200 Gbps of bandwidth, should be able to deliver about 6.25 million IOPS of storage performance.
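The arithmetic above can be double-checked with one line of shell, assuming the decimal units used in the text (1 Gb = 10⁹ bits, 1 KB = 1,000 bytes):

```shell
# 200 Gbps of bandwidth divided into 4 KB I/Os (decimal units)
echo $(( 200 * 10**9 / 8 / 4000 ))   # prints 6250000
```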
E6画像.png
E6 Standard bare metal instances are designed for high-throughput, compute-intensive workloads. Each instance comes with 256 cores, 3 TB of memory, and 200 Gbps of network throughput. Compared with E5 Standard, that is 33% more compute and memory and twice the network bandwidth, delivering up to 2x the performance on industry-standard benchmarks.
In Oracle's workload testing, E6 Standard delivered up to 2x higher per-core performance than E5 Standard, providing the compute power you need without compromising on cost.
image.png

  • Ideal use cases for E6 Standard instances
    ・ Video transcoding, conferencing, and video on demand
    ・ Large-scale parallel batch jobs
    ・ In-memory databases and cache fleets
    ・ Backend servers for enterprise applications
    ・ Web and application servers
    ・ Game servers
    ・ Application development environments
    ・ Big data processing (Spark, Hadoop, etc.)
    ・ Financial modeling and real-time analytics
    ・ High-performance computing (HPC)
    ・ Scientific simulation and 3D rendering

With that in mind, let's playfully push an Oracle Cloud Infrastructure (OCI) BM.Standard.E6 instance to 6 million IOPS and 200 Gbps of I/O throughput, by whatever means necessary.

■ Configuring Attachments for Ultra High Performance Volumes

To attach a volume configured at the Ultra High Performance level and get optimal performance from it, the volume attachment must be multipath-enabled.
When you attach a volume, the Block Volume service attempts to make the attachment multipath-enabled. If any prerequisite is not met, the volume attachment will not be multipath-enabled.

● Prerequisites for Multipath-Enabled iSCSI Attachments

Prerequisites for configuring a multipath-enabled iSCSI volume attachment:
・Reference: Configuring attachments for Ultra High Performance volumes
 - A supported shape
 - A supported image
 - The Block Volume Management plugin enabled on the instance
 - A public IP address, or a service gateway, configured for the Block Volume Management plugin
 - Permissions configured
 - Consistent device paths in use
 - No other multipath-enabled attachment on the same instance (for Oracle Cloud Agent versions earlier than 1.39)

■ Configuring Permissions

Permissions must be configured so that the Block Volume Management plugin can report the results of the iSCSI configuration for multipath-enabled attachments.

1) Create a dynamic group

Create a dynamic group using the matching rule in the following code sample, which adds all instances in the specified compartment.
Here, the group is created with the name UHP_Dynamic_Group.

Example)
ANY {instance.compartment.id = '<tenancy-OCID>', instance.compartment.id = '<compartment-OCID>'}

00_DynamicGroup01.png
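As an aside (not part of the original walkthrough), the same dynamic group could also be created with the OCI CLI; the description text and the OCID placeholder here are assumptions:

```shell
# Hypothetical CLI equivalent of the console step above (fill in the placeholder)
oci iam dynamic-group create \
  --name UHP_Dynamic_Group \
  --description "Instances allowed to report UHP iSCSI attachment results" \
  --matching-rule "ANY {instance.compartment.id = '<compartment-OCID>'}"
```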

2) Configure a policy for the dynamic group

Configure a policy that grants the dynamic group created in the previous step, UHP_Dynamic_Group, the permissions that let the instance agent call the Block Volume service and retrieve the attachment configuration.

Example)
Allow dynamic-group <dynamic-group-name> to use instances in tenancy
Allow dynamic-group <dynamic-group-name> to use volume-attachments in tenancy

00_UHP-Policy01.png

■ Service Gateway and Route Table Settings

So that the compute instance can reach the Oracle Services Network (OSN),
create a service gateway and add the "All Oracle Services Network Services" route rule to the route table of the subnet where the instance will be placed.
1) Service Gateway settings
00_ServiceGW01.png

2) Route Table settings
In the route table attached to the subnet where the compute instance will be placed, add a route rule that points to the service gateway.
00_RT01.png

■ Creating the Compute Instance

1) OCI Console
From the menu, click [Compute] > [Instances]
01_E6作成00.png

2) Instances page
Click [Create instance]
01_E6作成01.png

3) Create compute instance page
This time, to multipath-enable the UHP block volumes, the instance is created on BM.Standard.E6 with the following settings:

・Supported shape: BM.Standard.E6
・Supported image: Oracle Linux 9
・Block Volume Management plugin on the instance: enabled

・Browse all shapes pane
Set mainly the following items and click [Select shape]

Instance type: select [Bare metal machine]
Shape name: select [BM.Standard.E6.256]
Provision only a percentage of cores and disable the rest: specify what percentage of the 256 cores to enable.

01_E6作成03.png
・Create compute instance: Basic pane
Under Oracle Cloud Agent, check [Block Volume Management]
01_E6作成05.png
01_E6作成06.png
01_E6作成07.png
01_E6作成08.png

4) Create compute instance: Review pane
Check all the entered values, then click [Create]
01_E6作成13.png
01_E6作成14.png
01_E6作成15.png

5) Creating the instance
01_E6作成17.png

6) Creation complete
01_E6作成18.png

7) Verify the Block Volume Management plugin
Confirm that the Block Volume Management plugin is enabled so that UHP iSCSI attachments are recognized automatically
01_E6作成21.png

● Verifying Support for Multiple UHP Volumes

Instances with Oracle Cloud Agent version 1.39 or later support multiple Ultra High Performance volumes. Check the version with the following command:

[opc@bm-e6 ~]$ yum info oracle-cloud-agent
Ksplice for Oracle Linux 9 (x86_64)                                                                                                                                                 76 MB/s |  10 MB     00:00
Oracle Linux 9 OCI Included Packages (x86_64)                                                                                                                                       68 MB/s | 184 MB     00:02
Oracle Linux 9 BaseOS Latest (x86_64)                                                                                                                                              155 MB/s |  85 MB     00:00
Oracle Linux 9 Application Stream Packages (x86_64)                                                                                                                                152 MB/s |  69 MB     00:00
Oracle Linux 9 Addons (x86_64)                                                                                                                                                      14 MB/s | 733 kB     00:00
Oracle Linux 9 UEK Release 8 (x86_64)                                                                                                                                              112 MB/s |  15 MB     00:00
Last metadata expiration check: 0:00:01 ago on Tue 23 Sep 2025 02:08:18 PM GMT.
Installed Packages
Name         : oracle-cloud-agent
Version      : 1.53.0
Release      : 3.el9
Architecture : x86_64
Size         : 493 M
Source       : oracle-cloud-agent-1.53.0-3.el9.src.rpm
Repository   : @System
Summary      : Oracle Cloud Agent
URL          : https://docs.cloud.oracle.com/iaas/
License      : https://oss.oracle.com/licenses/upl/
Description  : Oracle Cloud Infrastructure agent for management and monitoring.
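To compare the installed version against the 1.39 minimum in a script, `sort -V` can be used; a minimal sketch, where the hard-coded "1.53.0" stands in for the version reported above:

```shell
ver="1.53.0"   # e.g. taken from the yum info output above
min="1.39"
# sort -V orders version strings numerically; if min sorts first, ver >= min
if [ "$(printf '%s\n%s\n' "$min" "$ver" | sort -V | head -n1)" = "$min" ]; then
  echo "multiple UHP volumes supported"
else
  echo "upgrade oracle-cloud-agent"
fi
```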

■ Creating and Attaching the Block Volumes

Using the performance characteristics per VPU as a reference, configure the block volumes so that performance is maximized.
At the Ultra High Performance level (120 VPUs/GB), a single block volume created at 1,333 GB delivers a maximum of 300,000 IOPS.
To reach 6,250,000 IOPS, create 21 block volumes and attach them to the compute instance.
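The volume count falls out of a ceiling division (the 300,000 IOPS-per-volume figure comes from the VPU performance table referenced above):

```shell
target=6250000    # IOPS goal derived earlier from the 200 Gbps bandwidth
per_vol=300000    # max IOPS of one UHP (120 VPUs/GB) volume at 1,333+ GB
echo $(( (target + per_vol - 1) / per_vol ))   # ceiling division, prints 21
```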

1) OCI Console
02_BlockVBolume作成01.png

2) Create block volume page
Enter the following and click [Create block volume]

・Volume size (in GB): 1400
・VPUs/GB: 120

02_BlockVBolume作成02.png

3) Creation complete
02_BlockVBolume作成04.png

4) Create 21 volumes
Create 21 volumes in total in the same way.
02_BlockVBolume作成04.png
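As an aside (not part of the original walkthrough), creating 21 identical volumes is less tedious with the OCI CLI; the display-name pattern and the OCID/AD placeholders are assumptions:

```shell
# Hypothetical bulk creation of the 21 UHP volumes (fill in the placeholders)
for i in $(seq 1 21); do
  oci bv volume create \
    --compartment-id <compartment-OCID> \
    --availability-domain <AD-name> \
    --display-name "uhp-vol-$i" \
    --size-in-gbs 1400 \
    --vpus-per-gb 120
done
```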

● Attaching the Block Volumes

Attach the 21 block volumes you created.
1) Compute instance page
In the list on the left, click [Attached block volumes], then click [Attach block volume]
03_AttachedBlockVolumes01.png

2) Attach block volume dialog
Enter the following and click [Attach]

 ・ Select volume: select one of the volumes you created
 ・ Use Oracle Cloud Agent to automatically connect to iSCSI-attached volumes: check this so the attachment is recognized automatically

03_AttachedBlockVolumes02.png

3) Attach block volume dialog
Review the contents and click [Close]
03_AttachedBlockVolumes03.png

4) Compute: Attached block volumes page
Once the attachment completes, confirm that the Multipath column shows 'Yes'
03_AttachedBlockVolumes04.png

5) Attach all 21 block volumes
Attach the 21 block volumes and confirm that the Multipath column shows 'Yes' for all of them
03_AttachedBlockVolumes05.png

● Verifying Physical Volumes with OS Commands

Display the physical volume information and confirm that the added devices such as /dev/oracleoci/oraclevdb are present.

・ Physical volume information

[opc@bm-e6 ~]$ sudo -i
[root@bm-e6 ~]# pvs -a
  PV                       VG        Fmt  Attr PSize  PFree
  /dev/nvme0n1                            ---      0     0
  /dev/nvme1n1                            ---      0     0
  /dev/oracleoci/oraclevdb                ---      0     0
  /dev/oracleoci/oraclevdc                ---      0     0
  /dev/oracleoci/oraclevdd                ---      0     0
  /dev/oracleoci/oraclevde                ---      0     0
  /dev/oracleoci/oraclevdf                ---      0     0
  /dev/oracleoci/oraclevdg                ---      0     0
  /dev/oracleoci/oraclevdh                ---      0     0
  /dev/oracleoci/oraclevdi                ---      0     0
  /dev/oracleoci/oraclevdj                ---      0     0
  /dev/oracleoci/oraclevdk                ---      0     0
  /dev/oracleoci/oraclevdl                ---      0     0
  /dev/oracleoci/oraclevdm                ---      0     0
  /dev/oracleoci/oraclevdn                ---      0     0
  /dev/oracleoci/oraclevdo                ---      0     0
  /dev/oracleoci/oraclevdp                ---      0     0
  /dev/oracleoci/oraclevdq                ---      0     0
  /dev/oracleoci/oraclevdr                ---      0     0
  /dev/oracleoci/oraclevds                ---      0     0
  /dev/oracleoci/oraclevdt                ---      0     0
  /dev/oracleoci/oraclevdu                ---      0     0
  /dev/oracleoci/oraclevdv                ---      0     0
  /dev/sda1                               ---      0     0
  /dev/sda2                               ---      0     0
  /dev/sda3                ocivolume lvm2 a--  44.50g    0

・ Details of each physical volume

[root@bm-e6 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               ocivolume
  PV Size               44.50 GiB / not usable 0
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              11392
  Free PE               0
  Allocated PE          11392
  PV UUID               znkklg-UAMl-AH9e-OEE5-2CQl-gH12-c6Ykd4

・ Scan all LVM block devices

[root@bm-e6 ~]# pvscan
  PV /dev/sda3   VG ocivolume   lvm2 [44.50 GiB / 0    free]
  Total: 1 [44.50 GiB] / in use: 1 [44.50 GiB] / in no VG: 0 [0   ]

・ List block devices usable as physical volumes

[root@bm-e6 ~]# lvmdiskscan
  /dev/oracleoci/oraclevdj [      <1.37 TiB]
  /dev/oracleoci/oraclevdk [      <1.37 TiB]
  /dev/oracleoci/oraclevdl [      <1.37 TiB]
  /dev/oracleoci/oraclevdm [      <1.37 TiB]
  /dev/oracleoci/oraclevdn [      <1.37 TiB]
  /dev/oracleoci/oraclevdo [      <1.37 TiB]
  /dev/oracleoci/oraclevdp [      <1.37 TiB]
  /dev/oracleoci/oraclevdq [      <1.37 TiB]
  /dev/oracleoci/oraclevdr [      <1.37 TiB]
  /dev/oracleoci/oraclevds [      <1.37 TiB]
  /dev/oracleoci/oraclevdb [      <1.37 TiB]
  /dev/oracleoci/oraclevdt [      <1.37 TiB]
  /dev/oracleoci/oraclevdu [      <1.37 TiB]
  /dev/oracleoci/oraclevdc [      <1.37 TiB]
  /dev/oracleoci/oraclevdd [      <1.37 TiB]
  /dev/oracleoci/oraclevde [      <1.37 TiB]
  /dev/oracleoci/oraclevdf [      <1.37 TiB]
  /dev/oracleoci/oraclevdg [      <1.37 TiB]
  /dev/oracleoci/oraclevdh [      <1.37 TiB]
  /dev/oracleoci/oraclevdi [      <1.37 TiB]
  /dev/oracleoci/oraclevdv [      <1.37 TiB]
  /dev/sda1                [     100.00 MiB]
  /dev/sda2                [       2.00 GiB]
  /dev/sda3                [      44.50 GiB] LVM physical volume
  /dev/nvme0n1             [     894.25 GiB]
  /dev/nvme1n1             [     894.25 GiB]
  20 disks
  4 partitions
  0 LVM physical volume whole disks
  1 LVM physical volume

・ Check the block devices

[root@bm-e6 ~]# lsblk
NAME               MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda                  8:0    0   100G  0 disk
├─sda1               8:1    0   100M  0 part  /boot/efi
├─sda2               8:2    0     2G  0 part  /boot
└─sda3               8:3    0  44.5G  0 part
  ├─ocivolume-root 252:0    0  29.5G  0 lvm   /
  └─ocivolume-oled 252:1    0    15G  0 lvm   /var/oled
sdb                  8:16   0   1.4T  0 disk
└─mpatha           252:2    0   1.4T  0 mpath
sdc                  8:32   0   1.4T  0 disk
└─mpatha           252:2    0   1.4T  0 mpath
sdd                  8:48   0   1.4T  0 disk
└─mpatha           252:2    0   1.4T  0 mpath
sde                  8:64   0   1.4T  0 disk
└─mpatha           252:2    0   1.4T  0 mpath
sdf                  8:80   0   1.4T  0 disk
└─mpatha           252:2    0   1.4T  0 mpath
sdg                  8:96   0   1.4T  0 disk
└─mpathb           252:3    0   1.4T  0 mpath
sdh                  8:112  0   1.4T  0 disk
└─mpathb           252:3    0   1.4T  0 mpath
sdi                  8:128  0   1.4T  0 disk
└─mpathb           252:3    0   1.4T  0 mpath
sdj                  8:144  0   1.4T  0 disk
└─mpathb           252:3    0   1.4T  0 mpath
sdk                  8:160  0   1.4T  0 disk
└─mpathb           252:3    0   1.4T  0 mpath
sdl                  8:176  0   1.4T  0 disk
└─mpathc           252:4    0   1.4T  0 mpath
sdm                  8:192  0   1.4T  0 disk
└─mpathc           252:4    0   1.4T  0 mpath
sdn                  8:208  0   1.4T  0 disk
└─mpathc           252:4    0   1.4T  0 mpath
sdo                  8:224  0   1.4T  0 disk
└─mpathc           252:4    0   1.4T  0 mpath
sdp                  8:240  0   1.4T  0 disk
└─mpathc           252:4    0   1.4T  0 mpath
sdq                 65:0    0   1.4T  0 disk
└─mpathd           252:5    0   1.4T  0 mpath
sdr                 65:16   0   1.4T  0 disk
└─mpathd           252:5    0   1.4T  0 mpath
sds                 65:32   0   1.4T  0 disk
└─mpathd           252:5    0   1.4T  0 mpath
sdt                 65:48   0   1.4T  0 disk
└─mpathd           252:5    0   1.4T  0 mpath
sdu                 65:64   0   1.4T  0 disk
└─mpathd           252:5    0   1.4T  0 mpath
sdv                 65:80   0   1.4T  0 disk
└─mpathe           252:6    0   1.4T  0 mpath
sdw                 65:96   0   1.4T  0 disk
└─mpathe           252:6    0   1.4T  0 mpath
sdx                 65:112  0   1.4T  0 disk
└─mpathe           252:6    0   1.4T  0 mpath
sdy                 65:128  0   1.4T  0 disk
└─mpathe           252:6    0   1.4T  0 mpath
sdz                 65:144  0   1.4T  0 disk
└─mpathe           252:6    0   1.4T  0 mpath
sdaa                65:160  0   1.4T  0 disk
└─mpathf           252:7    0   1.4T  0 mpath
sdab                65:176  0   1.4T  0 disk
└─mpathf           252:7    0   1.4T  0 mpath
sdac                65:192  0   1.4T  0 disk
└─mpathf           252:7    0   1.4T  0 mpath
sdad                65:208  0   1.4T  0 disk
└─mpathf           252:7    0   1.4T  0 mpath
sdae                65:224  0   1.4T  0 disk
└─mpathf           252:7    0   1.4T  0 mpath
sdaf                65:240  0   1.4T  0 disk
└─mpathg           252:8    0   1.4T  0 mpath
sdag                66:0    0   1.4T  0 disk
└─mpathg           252:8    0   1.4T  0 mpath
sdah                66:16   0   1.4T  0 disk
└─mpathg           252:8    0   1.4T  0 mpath
sdai                66:32   0   1.4T  0 disk
└─mpathg           252:8    0   1.4T  0 mpath
sdaj                66:48   0   1.4T  0 disk
└─mpathg           252:8    0   1.4T  0 mpath
sdak                66:64   0   1.4T  0 disk
└─mpathh           252:9    0   1.4T  0 mpath
sdal                66:80   0   1.4T  0 disk
└─mpathh           252:9    0   1.4T  0 mpath
sdam                66:96   0   1.4T  0 disk
└─mpathh           252:9    0   1.4T  0 mpath
sdan                66:112  0   1.4T  0 disk
└─mpathh           252:9    0   1.4T  0 mpath
sdao                66:128  0   1.4T  0 disk
└─mpathh           252:9    0   1.4T  0 mpath
sdap                66:144  0   1.4T  0 disk
└─mpathi           252:10   0   1.4T  0 mpath
sdaq                66:160  0   1.4T  0 disk
└─mpathi           252:10   0   1.4T  0 mpath
sdar                66:176  0   1.4T  0 disk
└─mpathi           252:10   0   1.4T  0 mpath
sdas                66:192  0   1.4T  0 disk
└─mpathi           252:10   0   1.4T  0 mpath
sdat                66:208  0   1.4T  0 disk
└─mpathi           252:10   0   1.4T  0 mpath
sdau                66:224  0   1.4T  0 disk
└─mpathj           252:11   0   1.4T  0 mpath
sdav                66:240  0   1.4T  0 disk
└─mpathj           252:11   0   1.4T  0 mpath
sdaw                67:0    0   1.4T  0 disk
└─mpathj           252:11   0   1.4T  0 mpath
sdax                67:16   0   1.4T  0 disk
└─mpathj           252:11   0   1.4T  0 mpath
sday                67:32   0   1.4T  0 disk
└─mpathj           252:11   0   1.4T  0 mpath
sdaz                67:48   0   1.4T  0 disk
└─mpathk           252:12   0   1.4T  0 mpath
sdba                67:64   0   1.4T  0 disk
└─mpathk           252:12   0   1.4T  0 mpath
sdbb                67:80   0   1.4T  0 disk
└─mpathk           252:12   0   1.4T  0 mpath
sdbc                67:96   0   1.4T  0 disk
└─mpathk           252:12   0   1.4T  0 mpath
sdbd                67:112  0   1.4T  0 disk
└─mpathk           252:12   0   1.4T  0 mpath
sdbe                67:128  0   1.4T  0 disk
└─mpathl           252:13   0   1.4T  0 mpath
sdbf                67:144  0   1.4T  0 disk
└─mpathl           252:13   0   1.4T  0 mpath
sdbg                67:160  0   1.4T  0 disk
└─mpathl           252:13   0   1.4T  0 mpath
sdbh                67:176  0   1.4T  0 disk
└─mpathl           252:13   0   1.4T  0 mpath
sdbi                67:192  0   1.4T  0 disk
└─mpathl           252:13   0   1.4T  0 mpath
sdbj                67:208  0   1.4T  0 disk
└─mpathm           252:14   0   1.4T  0 mpath
sdbk                67:224  0   1.4T  0 disk
└─mpathm           252:14   0   1.4T  0 mpath
sdbl                67:240  0   1.4T  0 disk
└─mpathm           252:14   0   1.4T  0 mpath
sdbm                68:0    0   1.4T  0 disk
└─mpathm           252:14   0   1.4T  0 mpath
sdbn                68:16   0   1.4T  0 disk
└─mpathm           252:14   0   1.4T  0 mpath
sdbo                68:32   0   1.4T  0 disk
└─mpathn           252:15   0   1.4T  0 mpath
sdbp                68:48   0   1.4T  0 disk
└─mpathn           252:15   0   1.4T  0 mpath
sdbq                68:64   0   1.4T  0 disk
└─mpathn           252:15   0   1.4T  0 mpath
sdbr                68:80   0   1.4T  0 disk
└─mpathn           252:15   0   1.4T  0 mpath
sdbs                68:96   0   1.4T  0 disk
└─mpathn           252:15   0   1.4T  0 mpath
sdbt                68:112  0   1.4T  0 disk
└─mpatho           252:16   0   1.4T  0 mpath
sdbu                68:128  0   1.4T  0 disk
└─mpatho           252:16   0   1.4T  0 mpath
sdbv                68:144  0   1.4T  0 disk
└─mpatho           252:16   0   1.4T  0 mpath
sdbw                68:160  0   1.4T  0 disk
└─mpatho           252:16   0   1.4T  0 mpath
sdbx                68:176  0   1.4T  0 disk
└─mpatho           252:16   0   1.4T  0 mpath
sdby                68:192  0   1.4T  0 disk
└─mpathp           252:17   0   1.4T  0 mpath
sdbz                68:208  0   1.4T  0 disk
└─mpathp           252:17   0   1.4T  0 mpath
sdca                68:224  0   1.4T  0 disk
└─mpathp           252:17   0   1.4T  0 mpath
sdcb                68:240  0   1.4T  0 disk
└─mpathp           252:17   0   1.4T  0 mpath
sdcc                69:0    0   1.4T  0 disk
└─mpathp           252:17   0   1.4T  0 mpath
sdcd                69:16   0   1.4T  0 disk
└─mpathq           252:18   0   1.4T  0 mpath
sdce                69:32   0   1.4T  0 disk
└─mpathq           252:18   0   1.4T  0 mpath
sdcf                69:48   0   1.4T  0 disk
└─mpathq           252:18   0   1.4T  0 mpath
sdcg                69:64   0   1.4T  0 disk
└─mpathq           252:18   0   1.4T  0 mpath
sdch                69:80   0   1.4T  0 disk
└─mpathq           252:18   0   1.4T  0 mpath
sdci                69:96   0   1.4T  0 disk
└─mpathr           252:19   0   1.4T  0 mpath
sdcj                69:112  0   1.4T  0 disk
└─mpathr           252:19   0   1.4T  0 mpath
sdck                69:128  0   1.4T  0 disk
└─mpathr           252:19   0   1.4T  0 mpath
sdcl                69:144  0   1.4T  0 disk
└─mpathr           252:19   0   1.4T  0 mpath
sdcm                69:160  0   1.4T  0 disk
└─mpathr           252:19   0   1.4T  0 mpath
sdcn                69:176  0   1.4T  0 disk
└─mpaths           252:20   0   1.4T  0 mpath
sdco                69:192  0   1.4T  0 disk
└─mpaths           252:20   0   1.4T  0 mpath
sdcp                69:208  0   1.4T  0 disk
└─mpaths           252:20   0   1.4T  0 mpath
sdcq                69:224  0   1.4T  0 disk
└─mpaths           252:20   0   1.4T  0 mpath
sdcr                69:240  0   1.4T  0 disk
└─mpaths           252:20   0   1.4T  0 mpath
sdcs                70:0    0   1.4T  0 disk
└─mpatht           252:21   0   1.4T  0 mpath
sdct                70:16   0   1.4T  0 disk
└─mpatht           252:21   0   1.4T  0 mpath
sdcu                70:32   0   1.4T  0 disk
└─mpatht           252:21   0   1.4T  0 mpath
sdcv                70:48   0   1.4T  0 disk
└─mpatht           252:21   0   1.4T  0 mpath
sdcw                70:64   0   1.4T  0 disk
└─mpatht           252:21   0   1.4T  0 mpath
sdcx                70:80   0   1.4T  0 disk
└─mpathu           252:23   0   1.4T  0 mpath
sdcy                70:96   0   1.4T  0 disk
└─mpathu           252:23   0   1.4T  0 mpath
sdcz                70:112  0   1.4T  0 disk
└─mpathu           252:23   0   1.4T  0 mpath
sdda                70:128  0   1.4T  0 disk
└─mpathu           252:23   0   1.4T  0 mpath
sddb                70:144  0   1.4T  0 disk
└─mpathu           252:23   0   1.4T  0 mpath
nvme0n1            259:0    0 894.3G  0 disk
nvme1n1            259:1    0 894.3G  0 disk

● Verifying the Multipath Configuration

・ Check the multipath status

[root@bm-e6 ~]# mpathconf
multipath is enabled
find_multipaths is yes
user_friendly_names is enabled
recheck_wwid is disabled
default property blacklist is disabled
enable_foreign is not set (no foreign multipath devices will be shown)
dm_multipath module is loaded
multipathd is running

・ Check the mpath device nodes

[root@bm-e6 ~]# ls -l /dev/mapper/mpath*
lrwxrwxrwx. 1 root root 7 Sep 18 12:06 /dev/mapper/mpatha -> ../dm-2
lrwxrwxrwx. 1 root root 7 Sep 18 12:08 /dev/mapper/mpathb -> ../dm-3
lrwxrwxrwx. 1 root root 7 Sep 18 12:09 /dev/mapper/mpathc -> ../dm-4
lrwxrwxrwx. 1 root root 7 Sep 18 12:09 /dev/mapper/mpathd -> ../dm-5
lrwxrwxrwx. 1 root root 7 Sep 18 12:10 /dev/mapper/mpathe -> ../dm-6
lrwxrwxrwx. 1 root root 7 Sep 18 12:10 /dev/mapper/mpathf -> ../dm-7
lrwxrwxrwx. 1 root root 7 Sep 18 12:10 /dev/mapper/mpathg -> ../dm-8
lrwxrwxrwx. 1 root root 7 Sep 18 12:11 /dev/mapper/mpathh -> ../dm-9
lrwxrwxrwx. 1 root root 8 Sep 18 12:11 /dev/mapper/mpathi -> ../dm-10
lrwxrwxrwx. 1 root root 8 Sep 18 12:12 /dev/mapper/mpathj -> ../dm-11
lrwxrwxrwx. 1 root root 8 Sep 18 12:12 /dev/mapper/mpathk -> ../dm-12
lrwxrwxrwx. 1 root root 8 Sep 18 12:12 /dev/mapper/mpathl -> ../dm-13
lrwxrwxrwx. 1 root root 8 Sep 18 12:13 /dev/mapper/mpathm -> ../dm-14
lrwxrwxrwx. 1 root root 8 Sep 18 12:13 /dev/mapper/mpathn -> ../dm-15
lrwxrwxrwx. 1 root root 8 Sep 18 12:13 /dev/mapper/mpatho -> ../dm-16
lrwxrwxrwx. 1 root root 8 Sep 18 12:14 /dev/mapper/mpathp -> ../dm-17
lrwxrwxrwx. 1 root root 8 Sep 18 12:14 /dev/mapper/mpathq -> ../dm-18
lrwxrwxrwx. 1 root root 8 Sep 18 12:15 /dev/mapper/mpathr -> ../dm-19
lrwxrwxrwx. 1 root root 8 Sep 18 12:15 /dev/mapper/mpaths -> ../dm-20
lrwxrwxrwx. 1 root root 8 Sep 18 12:16 /dev/mapper/mpatht -> ../dm-22

・ Display the multipath topology

[root@bm-e6 ~]# multipath -ll
mpatha (360fb0a309e2f4dd394cd0e65c0b7f187) dm-2 ORACLE,BlockVolume
size=1.4T features='4 queue_if_no_path retain_attached_hw_handler queue_mode bio' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=1 status=active
  |- 10:0:0:2   sdb     8:16   active ready running
  |- 11:0:0:2   sdc     8:32   active ready running
  |- 12:0:0:2   sdd     8:48   active ready running
  |- 13:0:0:2   sde     8:64   active ready running
  `- 14:0:0:2   sdf     8:80   active ready running
mpathb (360f38c0542094096b5f23ebd40f27ffa) dm-3 ORACLE,BlockVolume
size=1.4T features='4 queue_if_no_path retain_attached_hw_handler queue_mode bio' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=1 status=active
  |- 15:0:0:3   sdg     8:96   active ready running
  |- 16:0:0:3   sdh     8:112  active ready running
  |- 17:0:0:3   sdi     8:128  active ready running
  |- 18:0:0:3   sdj     8:144  active ready running
  `- 19:0:0:3   sdk     8:160  active ready running
mpathc (360ce5f8a81044da9b562ab781e7033ba) dm-4 ORACLE,BlockVolume
size=1.4T features='4 queue_if_no_path retain_attached_hw_handler queue_mode bio' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=1 status=active
  |- 20:0:0:4   sdl     8:176  active ready running
  |- 21:0:0:4   sdm     8:192  active ready running
  |- 22:0:0:4   sdn     8:208  active ready running
  |- 23:0:0:4   sdo     8:224  active ready running
  `- 24:0:0:4   sdp     8:240  active ready running
mpathd (360110659ee73484099254b362c701e21) dm-5 ORACLE,BlockVolume
size=1.4T features='4 queue_if_no_path retain_attached_hw_handler queue_mode bio' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=1 status=active
  |- 25:0:0:5   sdq     65:0   active ready running
  |- 26:0:0:5   sdr     65:16  active ready running
  |- 27:0:0:5   sds     65:32  active ready running
  |- 28:0:0:5   sdt     65:48  active ready running
  `- 29:0:0:5   sdu     65:64  active ready running
mpathe (360daa8e46b0b45d290587926d3075c90) dm-6 ORACLE,BlockVolume
size=1.4T features='4 queue_if_no_path retain_attached_hw_handler queue_mode bio' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=1 status=active
  |- 30:0:0:6   sdv     65:80  active ready running
  |- 31:0:0:6   sdw     65:96  active ready running
  |- 32:0:0:6   sdx     65:112 active ready running
  |- 33:0:0:6   sdy     65:128 active ready running
  `- 34:0:0:6   sdz     65:144 active ready running
mpathf (360e0a0c71c6c4282a98f0ecb39e0723e) dm-7 ORACLE,BlockVolume
size=1.4T features='4 queue_if_no_path retain_attached_hw_handler queue_mode bio' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=1 status=active
  |- 35:0:0:7   sdaa    65:160 active ready running
  |- 36:0:0:7   sdab    65:176 active ready running
  |- 37:0:0:7   sdac    65:192 active ready running
  |- 38:0:0:7   sdad    65:208 active ready running
  `- 39:0:0:7   sdae    65:224 active ready running
mpathg (360c794743c384c278f1232ed43cf79ee) dm-8 ORACLE,BlockVolume
size=1.4T features='4 queue_if_no_path retain_attached_hw_handler queue_mode bio' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=1 status=active
  |- 40:0:0:8   sdaf    65:240 active ready running
  |- 41:0:0:8   sdag    66:0   active ready running
  |- 42:0:0:8   sdah    66:16  active ready running
  |- 43:0:0:8   sdai    66:32  active ready running
  `- 44:0:0:8   sdaj    66:48  active ready running
mpathh (36067b66cbf24469e86deb51c518f9209) dm-9 ORACLE,BlockVolume
size=1.4T features='4 queue_if_no_path retain_attached_hw_handler queue_mode bio' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=1 status=active
  |- 45:0:0:9   sdak    66:64  active ready running
  |- 46:0:0:9   sdal    66:80  active ready running
  |- 47:0:0:9   sdam    66:96  active ready running
  |- 48:0:0:9   sdan    66:112 active ready running
  `- 49:0:0:9   sdao    66:128 active ready running
mpathi (360efd5ccbc4e4e418135640fe90d1c55) dm-10 ORACLE,BlockVolume
size=1.4T features='4 queue_if_no_path retain_attached_hw_handler queue_mode bio' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=1 status=active
  |- 50:0:0:10  sdap    66:144 active ready running
  |- 51:0:0:10  sdaq    66:160 active ready running
  |- 52:0:0:10  sdar    66:176 active ready running
  |- 53:0:0:10  sdas    66:192 active ready running
  `- 54:0:0:10  sdat    66:208 active ready running
mpathj (360806912bcd044a2b7474f80787f2861) dm-11 ORACLE,BlockVolume
size=1.4T features='4 queue_if_no_path retain_attached_hw_handler queue_mode bio' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=1 status=active
  |- 55:0:0:11  sdau    66:224 active ready running
  |- 56:0:0:11  sdav    66:240 active ready running
  |- 57:0:0:11  sdaw    67:0   active ready running
  |- 58:0:0:11  sdax    67:16  active ready running
  `- 59:0:0:11  sday    67:32  active ready running
mpathk (360b243c8723b4014a5ec8b20bd9b4152) dm-12 ORACLE,BlockVolume
size=1.4T features='4 queue_if_no_path retain_attached_hw_handler queue_mode bio' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=1 status=active
  |- 60:0:0:12  sdaz    67:48  active ready running
  |- 61:0:0:12  sdba    67:64  active ready running
  |- 62:0:0:12  sdbb    67:80  active ready running
  |- 63:0:0:12  sdbc    67:96  active ready running
  `- 64:0:0:12  sdbd    67:112 active ready running
mpathl (360a0679ebc61400787dbf7e398a8a866) dm-13 ORACLE,BlockVolume
size=1.4T features='4 queue_if_no_path retain_attached_hw_handler queue_mode bio' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=1 status=active
  |- 65:0:0:13  sdbe    67:128 active ready running
  |- 66:0:0:13  sdbf    67:144 active ready running
  |- 67:0:0:13  sdbg    67:160 active ready running
  |- 68:0:0:13  sdbh    67:176 active ready running
  `- 69:0:0:13  sdbi    67:192 active ready running
mpathm (360ee0803d33c474393a4f49758252404) dm-14 ORACLE,BlockVolume
size=1.4T features='4 queue_if_no_path retain_attached_hw_handler queue_mode bio' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=1 status=active
  |- 70:0:0:14  sdbj    67:208 active ready running
  |- 71:0:0:14  sdbk    67:224 active ready running
  |- 72:0:0:14  sdbl    67:240 active ready running
  |- 73:0:0:14  sdbm    68:0   active ready running
  `- 74:0:0:14  sdbn    68:16  active ready running
mpathn (3600cbcecb3ea4482b650a97f12b7e1ee) dm-15 ORACLE,BlockVolume
size=1.4T features='4 queue_if_no_path retain_attached_hw_handler queue_mode bio' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=1 status=active
  |- 75:0:0:15  sdbo    68:32  active ready running
  |- 76:0:0:15  sdbp    68:48  active ready running
  |- 77:0:0:15  sdbq    68:64  active ready running
  |- 78:0:0:15  sdbr    68:80  active ready running
  `- 79:0:0:15  sdbs    68:96  active ready running
mpatho (360db207e3af6462182720e77f3c8a112) dm-16 ORACLE,BlockVolume
size=1.4T features='4 queue_if_no_path retain_attached_hw_handler queue_mode bio' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=1 status=active
  |- 80:0:0:16  sdbt    68:112 active ready running
  |- 81:0:0:16  sdbu    68:128 active ready running
  |- 82:0:0:16  sdbv    68:144 active ready running
  |- 83:0:0:16  sdbw    68:160 active ready running
  `- 84:0:0:16  sdbx    68:176 active ready running
mpathp (3607996ca82294d3e896fe8c90ef28456) dm-17 ORACLE,BlockVolume
size=1.4T features='4 queue_if_no_path retain_attached_hw_handler queue_mode bio' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=1 status=active
  |- 85:0:0:17  sdby    68:192 active ready running
  |- 86:0:0:17  sdbz    68:208 active ready running
  |- 87:0:0:17  sdca    68:224 active ready running
  |- 88:0:0:17  sdcb    68:240 active ready running
  `- 89:0:0:17  sdcc    69:0   active ready running
mpathq (3601109c7601045c3a70f3e9cee0f5d8a) dm-18 ORACLE,BlockVolume
size=1.4T features='4 queue_if_no_path retain_attached_hw_handler queue_mode bio' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=1 status=active
  |- 90:0:0:18  sdcd    69:16  active ready running
  |- 91:0:0:18  sdce    69:32  active ready running
  |- 92:0:0:18  sdcf    69:48  active ready running
  |- 93:0:0:18  sdcg    69:64  active ready running
  `- 94:0:0:18  sdch    69:80  active ready running
mpathr (360aecd292d23464295058ef5f4f8f9a1) dm-19 ORACLE,BlockVolume
size=1.4T features='4 queue_if_no_path retain_attached_hw_handler queue_mode bio' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=1 status=active
  |- 95:0:0:19  sdci    69:96  active ready running
  |- 96:0:0:19  sdcj    69:112 active ready running
  |- 97:0:0:19  sdck    69:128 active ready running
  |- 98:0:0:19  sdcl    69:144 active ready running
  `- 99:0:0:19  sdcm    69:160 active ready running
mpaths (360f39a40d51c4635b8c237223b50d19a) dm-20 ORACLE,BlockVolume
size=1.4T features='4 queue_if_no_path retain_attached_hw_handler queue_mode bio' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=1 status=active
  |- 100:0:0:20 sdcn    69:176 active ready running
  |- 101:0:0:20 sdco    69:192 active ready running
  |- 102:0:0:20 sdcp    69:208 active ready running
  |- 103:0:0:20 sdcq    69:224 active ready running
  `- 104:0:0:20 sdcr    69:240 active ready running
mpatht (3600eaadc22ee4eb78a62f08eec4d09f7) dm-21 ORACLE,BlockVolume
size=1.4T features='4 queue_if_no_path retain_attached_hw_handler queue_mode bio' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=1 status=active
  |- 105:0:0:21 sdcs    70:0   active ready running
  |- 106:0:0:21 sdct    70:16  active ready running
  |- 107:0:0:21 sdcu    70:32  active ready running
  |- 108:0:0:21 sdcv    70:48  active ready running
  `- 109:0:0:21 sdcw    70:64  active ready running
mpathu (3602bfcdbf4e445e88882958504b7a002) dm-23 ORACLE,BlockVolume
size=1.4T features='4 queue_if_no_path retain_attached_hw_handler queue_mode bio' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=1 status=active
  |- 110:0:0:22 sdcx    70:80  active ready running
  |- 111:0:0:22 sdcy    70:96  active ready running
  |- 112:0:0:22 sdcz    70:112 active ready running
  |- 113:0:0:22 sdda    70:128 active ready running
  `- 114:0:0:22 sddb    70:144 active ready running

・ Check the multipath configuration file

[root@bm-e6 ~]# cat /etc/multipath.conf
defaults {
  user_friendly_names yes
  find_multipaths yes
  rr_weight uniform
  path_selector "queue-length 0"
  path_grouping_policy multibus
  polling_interval 30
  path_checker tur
  checker_timeout 300
  failback immediate
  verbosity 2
  rr_min_io 1
  rr_min_io_rq 1
  dev_loss_tmo 9000
  fast_io_fail_tmo off
  no_path_retry queue
  skip_kpartx no
  features "2 queue_mode bio"
}
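These settings match the multipath configuration that the Block Volume management plugin writes for Ultra High Performance attachments. As an illustrative sanity check (the embedded config text below is a stand-in; on a live host you would read /etc/multipath.conf instead), the performance-critical keys can be spot-checked like this:

```shell
# Illustrative only: embed a config snippet; on a real host, use
# conf=$(cat /etc/multipath.conf) instead.
conf='defaults {
  path_selector "queue-length 0"
  rr_min_io 1
  no_path_retry queue
}'
# Count how many of the UHP-critical settings are present.
hits=$(echo "$conf" | grep -cE 'queue-length 0|rr_min_io 1|no_path_retry queue')
echo "$hits of 3 expected settings found"
```

`queue-length 0` sends each I/O to the path with the shortest queue, and `rr_min_io 1` switches paths after every I/O, which is what lets a single volume spread load across all five iSCSI paths.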

・ Example multipath.conf file
The device-mapper-multipath package ships a sample configuration file with example settings and notes on where to find the default configuration values. You can review it here:

[root@bm-e6 ~]# cat /usr/share/doc/device-mapper-multipath/multipath.conf
# This is a basic configuration file with some examples, for device mapper
# multipath.
#
# For a complete list of the default configuration values, run either
# multipath -t
# or
# multipathd show config
#
# For a list of configuration options with descriptions, see the multipath.conf
# man page

## By default, devices with vendor = "IBM" and product = "S/390.*" are
## blacklisted. To enable mulitpathing on these devies, uncomment the
## following lines.
#blacklist_exceptions {
#	device {
#		vendor	"IBM"
#		product	"S/390.*"
#	}
#}

## Use user friendly names, instead of using WWIDs as names.
defaults {
	user_friendly_names yes
	find_multipaths yes
}
##
## Here is an example of how to configure some standard options.
##
#
#defaults {
#	udev_dir		/dev
#	polling_interval 	10
#	selector		"round-robin 0"
#	path_grouping_policy	multibus
#	prio			alua
#	path_checker		readsector0
#	rr_min_io		100
#	max_fds			8192
#	rr_weight		priorities
#	failback		immediate
#	no_path_retry		fail
#	user_friendly_names	yes
#}
##
## The wwid line in the following blacklist section is shown as an example
## of how to blacklist devices by wwid.  The 2 devnode lines are the
## compiled in default blacklist. If you want to blacklist entire types
## of devices, such as all scsi devices, you should use a devnode line.
## However, if you want to blacklist specific devices, you should use
## a wwid line.  Since there is no guarantee that a specific device will
## not change names on reboot (from /dev/sda to /dev/sdb for example)
## devnode lines are not recommended for blacklisting specific devices.
##
#blacklist {
#       wwid 26353900f02796769
#	devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
#	devnode "^hd[a-z]"
#}
#multipaths {
#	multipath {
#		wwid			3600508b4000156d700012000000b0000
#		alias			yellow
#		path_grouping_policy	multibus
#		path_checker		readsector0
#		path_selector		"round-robin 0"
#		failback		manual
#		rr_weight		priorities
#		no_path_retry		5
#	}
#	multipath {
#		wwid			1DEC_____321816758474
#		alias			red
#	}
#}
#devices {
#	device {
#		vendor			"COMPAQ  "
#		product			"HSV110 (C)COMPAQ"
#		path_grouping_policy	multibus
#		path_checker		readsector0
#		path_selector		"round-robin 0"
#		hardware_handler	"0"
#		failback		15
#		rr_weight		priorities
#		no_path_retry		queue
#	}
#	device {
#		vendor			"COMPAQ  "
#		product			"MSA1000         "
#		path_grouping_policy	multibus
#	}
#}

■ Creating a Striped Logical Volume (LV)

Combine the 21 attached UHP block volumes into a single volume group with LVM (Logical Volume Manager), then create a single logical volume (LV), /dev/volgroup01/striped_logical_volume.

● Creating Physical Volumes (PV)

1) Create the physical volumes (PV)
・ Check the current PVs with pvs

[root@bm-e6 ~]# pvs -a
  PV                       VG        Fmt  Attr PSize  PFree
  /dev/nvme0n1                            ---      0     0
  /dev/nvme1n1                            ---      0     0
  /dev/oracleoci/oraclevdb                ---      0     0
  /dev/oracleoci/oraclevdc                ---      0     0
  /dev/oracleoci/oraclevdd                ---      0     0
  /dev/oracleoci/oraclevde                ---      0     0
  /dev/oracleoci/oraclevdf                ---      0     0
  /dev/oracleoci/oraclevdg                ---      0     0
  /dev/oracleoci/oraclevdh                ---      0     0
  /dev/oracleoci/oraclevdi                ---      0     0
  /dev/oracleoci/oraclevdj                ---      0     0
  /dev/oracleoci/oraclevdk                ---      0     0
  /dev/oracleoci/oraclevdl                ---      0     0
  /dev/oracleoci/oraclevdm                ---      0     0
  /dev/oracleoci/oraclevdn                ---      0     0
  /dev/oracleoci/oraclevdo                ---      0     0
  /dev/oracleoci/oraclevdp                ---      0     0
  /dev/oracleoci/oraclevdq                ---      0     0
  /dev/oracleoci/oraclevdr                ---      0     0
  /dev/oracleoci/oraclevds                ---      0     0
  /dev/oracleoci/oraclevdt                ---      0     0
  /dev/oracleoci/oraclevdu                ---      0     0
  /dev/oracleoci/oraclevdv                ---      0     0
  /dev/sda1                               ---      0     0
  /dev/sda2                               ---      0     0
  /dev/sda3                ocivolume lvm2 a--  44.50g    0

・ Create the PVs
Run the command with the -v option to get verbose output.

[root@bm-e6 ~]# pvcreate -v /dev/oracleoci/oraclevd[b-v]
  Wiping signatures on new PV /dev/oracleoci/oraclevdb.
  Wiping signatures on new PV /dev/oracleoci/oraclevdc.
  Wiping signatures on new PV /dev/oracleoci/oraclevdd.
  Wiping signatures on new PV /dev/oracleoci/oraclevde.
  Wiping signatures on new PV /dev/oracleoci/oraclevdf.
  Wiping signatures on new PV /dev/oracleoci/oraclevdg.
  Wiping signatures on new PV /dev/oracleoci/oraclevdh.
  Wiping signatures on new PV /dev/oracleoci/oraclevdi.
  Wiping signatures on new PV /dev/oracleoci/oraclevdj.
  Wiping signatures on new PV /dev/oracleoci/oraclevdk.
  Wiping signatures on new PV /dev/oracleoci/oraclevdl.
  Wiping signatures on new PV /dev/oracleoci/oraclevdm.
  Wiping signatures on new PV /dev/oracleoci/oraclevdn.
  Wiping signatures on new PV /dev/oracleoci/oraclevdo.
  Wiping signatures on new PV /dev/oracleoci/oraclevdp.
  Wiping signatures on new PV /dev/oracleoci/oraclevdq.
  Wiping signatures on new PV /dev/oracleoci/oraclevdr.
  Wiping signatures on new PV /dev/oracleoci/oraclevds.
  Wiping signatures on new PV /dev/oracleoci/oraclevdt.
  Wiping signatures on new PV /dev/oracleoci/oraclevdu.
  Wiping signatures on new PV /dev/oracleoci/oraclevdv.
  Set up physical volume for "/dev/oracleoci/oraclevdb" with 2936012800 available sectors.
  Zeroing start of device /dev/oracleoci/oraclevdb.
  Writing physical volume data to disk "/dev/oracleoci/oraclevdb".
  Physical volume "/dev/oracleoci/oraclevdb" successfully created.
  Set up physical volume for "/dev/oracleoci/oraclevdc" with 2936012800 available sectors.
  Zeroing start of device /dev/oracleoci/oraclevdc.
  Writing physical volume data to disk "/dev/oracleoci/oraclevdc".
  Physical volume "/dev/oracleoci/oraclevdc" successfully created.
  Set up physical volume for "/dev/oracleoci/oraclevdd" with 2936012800 available sectors.
  Zeroing start of device /dev/oracleoci/oraclevdd.
  Writing physical volume data to disk "/dev/oracleoci/oraclevdd".
  Physical volume "/dev/oracleoci/oraclevdd" successfully created.
  Set up physical volume for "/dev/oracleoci/oraclevde" with 2936012800 available sectors.
  Zeroing start of device /dev/oracleoci/oraclevde.
  Writing physical volume data to disk "/dev/oracleoci/oraclevde".
  Physical volume "/dev/oracleoci/oraclevde" successfully created.
  Set up physical volume for "/dev/oracleoci/oraclevdf" with 2936012800 available sectors.
  Zeroing start of device /dev/oracleoci/oraclevdf.
  Writing physical volume data to disk "/dev/oracleoci/oraclevdf".
  Physical volume "/dev/oracleoci/oraclevdf" successfully created.
  Set up physical volume for "/dev/oracleoci/oraclevdg" with 2936012800 available sectors.
  Zeroing start of device /dev/oracleoci/oraclevdg.
  Writing physical volume data to disk "/dev/oracleoci/oraclevdg".
  Physical volume "/dev/oracleoci/oraclevdg" successfully created.
  Set up physical volume for "/dev/oracleoci/oraclevdh" with 2936012800 available sectors.
  Zeroing start of device /dev/oracleoci/oraclevdh.
  Writing physical volume data to disk "/dev/oracleoci/oraclevdh".
  Physical volume "/dev/oracleoci/oraclevdh" successfully created.
  Set up physical volume for "/dev/oracleoci/oraclevdi" with 2936012800 available sectors.
  Zeroing start of device /dev/oracleoci/oraclevdi.
  Writing physical volume data to disk "/dev/oracleoci/oraclevdi".
  Physical volume "/dev/oracleoci/oraclevdi" successfully created.
  Set up physical volume for "/dev/oracleoci/oraclevdj" with 2936012800 available sectors.
  Zeroing start of device /dev/oracleoci/oraclevdj.
  Writing physical volume data to disk "/dev/oracleoci/oraclevdj".
  Physical volume "/dev/oracleoci/oraclevdj" successfully created.
  Set up physical volume for "/dev/oracleoci/oraclevdk" with 2936012800 available sectors.
  Zeroing start of device /dev/oracleoci/oraclevdk.
  Writing physical volume data to disk "/dev/oracleoci/oraclevdk".
  Physical volume "/dev/oracleoci/oraclevdk" successfully created.
  Set up physical volume for "/dev/oracleoci/oraclevdl" with 2936012800 available sectors.
  Zeroing start of device /dev/oracleoci/oraclevdl.
  Writing physical volume data to disk "/dev/oracleoci/oraclevdl".
  Physical volume "/dev/oracleoci/oraclevdl" successfully created.
  Set up physical volume for "/dev/oracleoci/oraclevdm" with 2936012800 available sectors.
  Zeroing start of device /dev/oracleoci/oraclevdm.
  Writing physical volume data to disk "/dev/oracleoci/oraclevdm".
  Physical volume "/dev/oracleoci/oraclevdm" successfully created.
  Set up physical volume for "/dev/oracleoci/oraclevdn" with 2936012800 available sectors.
  Zeroing start of device /dev/oracleoci/oraclevdn.
  Writing physical volume data to disk "/dev/oracleoci/oraclevdn".
  Physical volume "/dev/oracleoci/oraclevdn" successfully created.
  Set up physical volume for "/dev/oracleoci/oraclevdo" with 2936012800 available sectors.
  Zeroing start of device /dev/oracleoci/oraclevdo.
  Writing physical volume data to disk "/dev/oracleoci/oraclevdo".
  Physical volume "/dev/oracleoci/oraclevdo" successfully created.
  Set up physical volume for "/dev/oracleoci/oraclevdp" with 2936012800 available sectors.
  Zeroing start of device /dev/oracleoci/oraclevdp.
  Writing physical volume data to disk "/dev/oracleoci/oraclevdp".
  Physical volume "/dev/oracleoci/oraclevdp" successfully created.
  Set up physical volume for "/dev/oracleoci/oraclevdq" with 2936012800 available sectors.
  Zeroing start of device /dev/oracleoci/oraclevdq.
  Writing physical volume data to disk "/dev/oracleoci/oraclevdq".
  Physical volume "/dev/oracleoci/oraclevdq" successfully created.
  Set up physical volume for "/dev/oracleoci/oraclevdr" with 2936012800 available sectors.
  Zeroing start of device /dev/oracleoci/oraclevdr.
  Writing physical volume data to disk "/dev/oracleoci/oraclevdr".
  Physical volume "/dev/oracleoci/oraclevdr" successfully created.
  Set up physical volume for "/dev/oracleoci/oraclevds" with 2936012800 available sectors.
  Zeroing start of device /dev/oracleoci/oraclevds.
  Writing physical volume data to disk "/dev/oracleoci/oraclevds".
  Physical volume "/dev/oracleoci/oraclevds" successfully created.
  Set up physical volume for "/dev/oracleoci/oraclevdt" with 2936012800 available sectors.
  Zeroing start of device /dev/oracleoci/oraclevdt.
  Writing physical volume data to disk "/dev/oracleoci/oraclevdt".
  Physical volume "/dev/oracleoci/oraclevdt" successfully created.
  Set up physical volume for "/dev/oracleoci/oraclevdu" with 2936012800 available sectors.
  Zeroing start of device /dev/oracleoci/oraclevdu.
  Writing physical volume data to disk "/dev/oracleoci/oraclevdu".
  Physical volume "/dev/oracleoci/oraclevdu" successfully created.
  Set up physical volume for "/dev/oracleoci/oraclevdv" with 2936012800 available sectors.
  Zeroing start of device /dev/oracleoci/oraclevdv.
  Writing physical volume data to disk "/dev/oracleoci/oraclevdv".
  Physical volume "/dev/oracleoci/oraclevdv" successfully created.
  Not creating system devices file due to existing VGs.

2) Confirm with pvs

[root@bm-e6 ~]# pvs
  PV                       VG        Fmt  Attr PSize  PFree
  /dev/oracleoci/oraclevdb           lvm2 ---  <1.37t <1.37t
  /dev/oracleoci/oraclevdc           lvm2 ---  <1.37t <1.37t
  /dev/oracleoci/oraclevdd           lvm2 ---  <1.37t <1.37t
  /dev/oracleoci/oraclevde           lvm2 ---  <1.37t <1.37t
  /dev/oracleoci/oraclevdf           lvm2 ---  <1.37t <1.37t
  /dev/oracleoci/oraclevdg           lvm2 ---  <1.37t <1.37t
  /dev/oracleoci/oraclevdh           lvm2 ---  <1.37t <1.37t
  /dev/oracleoci/oraclevdi           lvm2 ---  <1.37t <1.37t
  /dev/oracleoci/oraclevdj           lvm2 ---  <1.37t <1.37t
  /dev/oracleoci/oraclevdk           lvm2 ---  <1.37t <1.37t
  /dev/oracleoci/oraclevdl           lvm2 ---  <1.37t <1.37t
  /dev/oracleoci/oraclevdm           lvm2 ---  <1.37t <1.37t
  /dev/oracleoci/oraclevdn           lvm2 ---  <1.37t <1.37t
  /dev/oracleoci/oraclevdo           lvm2 ---  <1.37t <1.37t
  /dev/oracleoci/oraclevdp           lvm2 ---  <1.37t <1.37t
  /dev/oracleoci/oraclevdq           lvm2 ---  <1.37t <1.37t
  /dev/oracleoci/oraclevdr           lvm2 ---  <1.37t <1.37t
  /dev/oracleoci/oraclevds           lvm2 ---  <1.37t <1.37t
  /dev/oracleoci/oraclevdt           lvm2 ---  <1.37t <1.37t
  /dev/oracleoci/oraclevdu           lvm2 ---  <1.37t <1.37t
  /dev/oracleoci/oraclevdv           lvm2 ---  <1.37t <1.37t
  /dev/sda3                ocivolume lvm2 a--  44.50g     0
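The `<1.37t` PSize that pvs reports for each PV follows from the 2936012800 usable 512-byte sectors shown in the pvcreate output. A quick check of that conversion:

```shell
# Each PV was set up with 2936012800 usable 512-byte sectors (pvcreate output).
sectors=2936012800
bytes=$((sectors * 512))
# Convert to TiB; pvs rounds this up to "<1.37t".
awk -v b="$bytes" 'BEGIN { printf "%.2f TiB per PV\n", b / 1024^4 }'
```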

● Creating the Volume Group (VG)

1) Create the volume group volgroup01
Create a volume group (VG) from the newly created physical volumes.

[root@bm-e6 ~]# vgs
  VG        #PV #LV #SN Attr   VSize  VFree
  ocivolume   1   2   0 wz--n- 44.50g    0

[root@bm-e6 ~]# vgcreate -v volgroup01 /dev/oracleoci/oraclevd[b-v]
  Wiping signatures on new PV /dev/oracleoci/oraclevdb.
  Wiping signatures on new PV /dev/oracleoci/oraclevdc.
  Wiping signatures on new PV /dev/oracleoci/oraclevdd.
  Wiping signatures on new PV /dev/oracleoci/oraclevde.
  Wiping signatures on new PV /dev/oracleoci/oraclevdf.
  Wiping signatures on new PV /dev/oracleoci/oraclevdg.
  Wiping signatures on new PV /dev/oracleoci/oraclevdh.
  Wiping signatures on new PV /dev/oracleoci/oraclevdi.
  Wiping signatures on new PV /dev/oracleoci/oraclevdj.
  Wiping signatures on new PV /dev/oracleoci/oraclevdk.
  Wiping signatures on new PV /dev/oracleoci/oraclevdl.
  Wiping signatures on new PV /dev/oracleoci/oraclevdm.
  Wiping signatures on new PV /dev/oracleoci/oraclevdn.
  Wiping signatures on new PV /dev/oracleoci/oraclevdo.
  Wiping signatures on new PV /dev/oracleoci/oraclevdp.
  Wiping signatures on new PV /dev/oracleoci/oraclevdq.
  Wiping signatures on new PV /dev/oracleoci/oraclevdr.
  Wiping signatures on new PV /dev/oracleoci/oraclevds.
  Wiping signatures on new PV /dev/oracleoci/oraclevdt.
  Wiping signatures on new PV /dev/oracleoci/oraclevdu.
  Wiping signatures on new PV /dev/oracleoci/oraclevdv.
  Not creating system devices file due to existing VGs.
  Adding physical volume '/dev/oracleoci/oraclevdb' to volume group 'volgroup01'
  Adding physical volume '/dev/oracleoci/oraclevdc' to volume group 'volgroup01'
  Adding physical volume '/dev/oracleoci/oraclevdd' to volume group 'volgroup01'
  Adding physical volume '/dev/oracleoci/oraclevde' to volume group 'volgroup01'
  Adding physical volume '/dev/oracleoci/oraclevdf' to volume group 'volgroup01'
  Adding physical volume '/dev/oracleoci/oraclevdg' to volume group 'volgroup01'
  Adding physical volume '/dev/oracleoci/oraclevdh' to volume group 'volgroup01'
  Adding physical volume '/dev/oracleoci/oraclevdi' to volume group 'volgroup01'
  Adding physical volume '/dev/oracleoci/oraclevdj' to volume group 'volgroup01'
  Adding physical volume '/dev/oracleoci/oraclevdk' to volume group 'volgroup01'
  Adding physical volume '/dev/oracleoci/oraclevdl' to volume group 'volgroup01'
  Adding physical volume '/dev/oracleoci/oraclevdm' to volume group 'volgroup01'
  Adding physical volume '/dev/oracleoci/oraclevdn' to volume group 'volgroup01'
  Adding physical volume '/dev/oracleoci/oraclevdo' to volume group 'volgroup01'
  Adding physical volume '/dev/oracleoci/oraclevdp' to volume group 'volgroup01'
  Adding physical volume '/dev/oracleoci/oraclevdq' to volume group 'volgroup01'
  Adding physical volume '/dev/oracleoci/oraclevdr' to volume group 'volgroup01'
  Adding physical volume '/dev/oracleoci/oraclevds' to volume group 'volgroup01'
  Adding physical volume '/dev/oracleoci/oraclevdt' to volume group 'volgroup01'
  Adding physical volume '/dev/oracleoci/oraclevdu' to volume group 'volgroup01'
  Adding physical volume '/dev/oracleoci/oraclevdv' to volume group 'volgroup01'
  Creating volume group backup "/etc/lvm/backup/volgroup01" (seqno 1).
  Volume group "volgroup01" successfully created

2) Confirm the VG was created
Use the vgs command to display the attributes of the newly created volume group.

[root@bm-e6 ~]# vgs
  VG         #PV #LV #SN Attr   VSize  VFree
  ocivolume    1   2   0 wz--n- 44.50g     0
  volgroup01  21   0   0 wz--n- 28.71t 28.71t
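The 28.71t VSize is consistent with 21 PVs of roughly 1.37 TiB each. A rough cross-check (ignoring the small per-PV LVM metadata overhead):

```shell
# 21 PVs x 2936012800 sectors x 512 bytes each, converted to TiB.
vg_tib=$(awk 'BEGIN { printf "%.2f", 21 * 2936012800 * 512 / 1024^4 }')
echo "expected VG size: ${vg_tib} TiB"
```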

● Creating the Striped Logical Volume (LV)

1) Create the logical volume (LV)
Create a striped logical volume (LV) with 21 stripes and a 4 KB stripe size, using all of the free space in the volume group.

[root@bm-e6 ~]# lvs
  LV   VG        Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  oled ocivolume -wi-ao---- 15.00g
  root ocivolume -wi-ao---- 29.50g

[root@bm-e6 ~]# lvcreate -v -i 21 -I 4 -l 100%FREE -n striped_logical_volume volgroup01
  Converted 100% of FREE (7526379) extents into 7526379 (with mimages 1 and stripes 21 for segtype striped).
  Creating logical volume striped_logical_volume
  Archiving volume group "volgroup01" metadata (seqno 1).
  Activating logical volume volgroup01/striped_logical_volume.
  activation/volume_list configuration setting not defined: Checking only host tags for volgroup01/striped_logical_volume.
  Creating volgroup01-striped_logical_volume
  Loading table for volgroup01-striped_logical_volume (252:22).
  Resuming volgroup01-striped_logical_volume (252:22).
  Wiping known signatures on logical volume volgroup01/striped_logical_volume.
  Initializing 4.00 KiB of logical volume volgroup01/striped_logical_volume with value 0.
  Logical volume "striped_logical_volume" created.
  Creating volume group backup "/etc/lvm/backup/volgroup01" (seqno 2).

2) Confirm the LV was created
All logical volumes, including the new one in the volgroup01 VG, are now listed.

[root@bm-e6 ~]# lvs
  LV                     VG         Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  oled                   ocivolume  -wi-ao---- 15.00g
  root                   ocivolume  -wi-ao---- 29.50g
  striped_logical_volume volgroup01 -wi-a----- 28.71t
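The 28.71t LSize follows directly from the extent count in the lvcreate output: 7526379 extents at LVM's default 4 MiB extent size, split evenly across the 21 stripes. A quick check of that arithmetic:

```shell
# lvcreate reported 7526379 extents striped across 21 PVs.
extents=7526379
echo "extents per stripe: $(( extents / 21 )) (remainder $(( extents % 21 )))"
# With the default 4 MiB extent size, total LV size in TiB:
lv_tib=$(awk -v e="$extents" 'BEGIN { printf "%.2f", e * 4 / 1024^2 }')
echo "LV size: ${lv_tib} TiB"
```

The remainder of 0 confirms that 100%FREE allocated an extent count evenly divisible by the 21 stripes.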

3) Display LV details
Display detailed information about the logical volume.

[root@bm-e6 ~]# lvdisplay /dev/volgroup01
  --- Logical volume ---
  LV Path                /dev/volgroup01/striped_logical_volume
  LV Name                striped_logical_volume
  VG Name                volgroup01
  LV UUID                NPELec-2fAT-2ePk-75hR-wNAR-zil1-nwFjIC
  LV Write Access        read/write
  LV Creation host, time bm-e6, 2025-09-24 09:11:34 +0000
  LV Status              available
  # open                 0
  LV Size                28.71 TiB
  Current LE             7526379
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     336
  Block device           252:22

4) List the LVs

[root@bm-e6 ~]# lvscan
  ACTIVE            '/dev/ocivolume/oled' [15.00 GiB] inherit
  ACTIVE            '/dev/ocivolume/root' [29.50 GiB] inherit
  ACTIVE            '/dev/volgroup01/striped_logical_volume' [28.71 TiB] inherit

5) List the block devices

[root@bm-e6 ~]# lsblk
NAME                                  MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda                                     8:0    0   100G  0 disk
├─sda1                                  8:1    0   100M  0 part  /boot/efi
├─sda2                                  8:2    0     2G  0 part  /boot
└─sda3                                  8:3    0  44.5G  0 part
  ├─ocivolume-root                    252:0    0  29.5G  0 lvm   /
  └─ocivolume-oled                    252:1    0    15G  0 lvm   /var/oled
sdb                                     8:16   0   1.4T  0 disk
└─mpatha                              252:2    0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdc                                     8:32   0   1.4T  0 disk
└─mpatha                              252:2    0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdd                                     8:48   0   1.4T  0 disk
└─mpatha                              252:2    0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sde                                     8:64   0   1.4T  0 disk
└─mpatha                              252:2    0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdf                                     8:80   0   1.4T  0 disk
└─mpatha                              252:2    0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdg                                     8:96   0   1.4T  0 disk
└─mpathb                              252:3    0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdh                                     8:112  0   1.4T  0 disk
└─mpathb                              252:3    0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdi                                     8:128  0   1.4T  0 disk
└─mpathb                              252:3    0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdj                                     8:144  0   1.4T  0 disk
└─mpathb                              252:3    0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdk                                     8:160  0   1.4T  0 disk
└─mpathb                              252:3    0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdl                                     8:176  0   1.4T  0 disk
└─mpathc                              252:4    0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdm                                     8:192  0   1.4T  0 disk
└─mpathc                              252:4    0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdn                                     8:208  0   1.4T  0 disk
└─mpathc                              252:4    0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdo                                     8:224  0   1.4T  0 disk
└─mpathc                              252:4    0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdp                                     8:240  0   1.4T  0 disk
└─mpathc                              252:4    0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdq                                    65:0    0   1.4T  0 disk
└─mpathd                              252:5    0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdr                                    65:16   0   1.4T  0 disk
└─mpathd                              252:5    0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sds                                    65:32   0   1.4T  0 disk
└─mpathd                              252:5    0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdt                                    65:48   0   1.4T  0 disk
└─mpathd                              252:5    0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdu                                    65:64   0   1.4T  0 disk
└─mpathd                              252:5    0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdv                                    65:80   0   1.4T  0 disk
└─mpathe                              252:6    0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdw                                    65:96   0   1.4T  0 disk
└─mpathe                              252:6    0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdx                                    65:112  0   1.4T  0 disk
└─mpathe                              252:6    0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdy                                    65:128  0   1.4T  0 disk
└─mpathe                              252:6    0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdz                                    65:144  0   1.4T  0 disk
└─mpathe                              252:6    0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdaa                                   65:160  0   1.4T  0 disk
└─mpathf                              252:7    0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdab                                   65:176  0   1.4T  0 disk
└─mpathf                              252:7    0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdac                                   65:192  0   1.4T  0 disk
└─mpathf                              252:7    0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdad                                   65:208  0   1.4T  0 disk
└─mpathf                              252:7    0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdae                                   65:224  0   1.4T  0 disk
└─mpathf                              252:7    0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdaf                                   65:240  0   1.4T  0 disk
└─mpathg                              252:8    0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdag                                   66:0    0   1.4T  0 disk
└─mpathg                              252:8    0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdah                                   66:16   0   1.4T  0 disk
└─mpathg                              252:8    0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdai                                   66:32   0   1.4T  0 disk
└─mpathg                              252:8    0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdaj                                   66:48   0   1.4T  0 disk
└─mpathg                              252:8    0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdak                                   66:64   0   1.4T  0 disk
└─mpathh                              252:9    0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdal                                   66:80   0   1.4T  0 disk
└─mpathh                              252:9    0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdam                                   66:96   0   1.4T  0 disk
└─mpathh                              252:9    0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdan                                   66:112  0   1.4T  0 disk
└─mpathh                              252:9    0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdao                                   66:128  0   1.4T  0 disk
└─mpathh                              252:9    0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdap                                   66:144  0   1.4T  0 disk
└─mpathi                              252:10   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdaq                                   66:160  0   1.4T  0 disk
└─mpathi                              252:10   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdar                                   66:176  0   1.4T  0 disk
└─mpathi                              252:10   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdas                                   66:192  0   1.4T  0 disk
└─mpathi                              252:10   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdat                                   66:208  0   1.4T  0 disk
└─mpathi                              252:10   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdau                                   66:224  0   1.4T  0 disk
└─mpathj                              252:11   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdav                                   66:240  0   1.4T  0 disk
└─mpathj                              252:11   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdaw                                   67:0    0   1.4T  0 disk
└─mpathj                              252:11   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdax                                   67:16   0   1.4T  0 disk
└─mpathj                              252:11   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sday                                   67:32   0   1.4T  0 disk
└─mpathj                              252:11   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdaz                                   67:48   0   1.4T  0 disk
└─mpathk                              252:12   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdba                                   67:64   0   1.4T  0 disk
└─mpathk                              252:12   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdbb                                   67:80   0   1.4T  0 disk
└─mpathk                              252:12   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdbc                                   67:96   0   1.4T  0 disk
└─mpathk                              252:12   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdbd                                   67:112  0   1.4T  0 disk
└─mpathk                              252:12   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdbe                                   67:128  0   1.4T  0 disk
└─mpathl                              252:13   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdbf                                   67:144  0   1.4T  0 disk
└─mpathl                              252:13   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdbg                                   67:160  0   1.4T  0 disk
└─mpathl                              252:13   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdbh                                   67:176  0   1.4T  0 disk
└─mpathl                              252:13   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdbi                                   67:192  0   1.4T  0 disk
└─mpathl                              252:13   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdbj                                   67:208  0   1.4T  0 disk
└─mpathm                              252:14   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdbk                                   67:224  0   1.4T  0 disk
└─mpathm                              252:14   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdbl                                   67:240  0   1.4T  0 disk
└─mpathm                              252:14   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdbm                                   68:0    0   1.4T  0 disk
└─mpathm                              252:14   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdbn                                   68:16   0   1.4T  0 disk
└─mpathm                              252:14   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdbo                                   68:32   0   1.4T  0 disk
└─mpathn                              252:15   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdbp                                   68:48   0   1.4T  0 disk
└─mpathn                              252:15   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdbq                                   68:64   0   1.4T  0 disk
└─mpathn                              252:15   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdbr                                   68:80   0   1.4T  0 disk
└─mpathn                              252:15   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdbs                                   68:96   0   1.4T  0 disk
└─mpathn                              252:15   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdbt                                   68:112  0   1.4T  0 disk
└─mpatho                              252:16   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdbu                                   68:128  0   1.4T  0 disk
└─mpatho                              252:16   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdbv                                   68:144  0   1.4T  0 disk
└─mpatho                              252:16   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdbw                                   68:160  0   1.4T  0 disk
└─mpatho                              252:16   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdbx                                   68:176  0   1.4T  0 disk
└─mpatho                              252:16   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdby                                   68:192  0   1.4T  0 disk
└─mpathp                              252:17   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdbz                                   68:208  0   1.4T  0 disk
└─mpathp                              252:17   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdca                                   68:224  0   1.4T  0 disk
└─mpathp                              252:17   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdcb                                   68:240  0   1.4T  0 disk
└─mpathp                              252:17   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdcc                                   69:0    0   1.4T  0 disk
└─mpathp                              252:17   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdcd                                   69:16   0   1.4T  0 disk
└─mpathq                              252:18   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdce                                   69:32   0   1.4T  0 disk
└─mpathq                              252:18   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdcf                                   69:48   0   1.4T  0 disk
└─mpathq                              252:18   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdcg                                   69:64   0   1.4T  0 disk
└─mpathq                              252:18   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdch                                   69:80   0   1.4T  0 disk
└─mpathq                              252:18   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdci                                   69:96   0   1.4T  0 disk
└─mpathr                              252:19   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdcj                                   69:112  0   1.4T  0 disk
└─mpathr                              252:19   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdck                                   69:128  0   1.4T  0 disk
└─mpathr                              252:19   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdcl                                   69:144  0   1.4T  0 disk
└─mpathr                              252:19   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdcm                                   69:160  0   1.4T  0 disk
└─mpathr                              252:19   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdcn                                   69:176  0   1.4T  0 disk
└─mpaths                              252:20   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdco                                   69:192  0   1.4T  0 disk
└─mpaths                              252:20   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdcp                                   69:208  0   1.4T  0 disk
└─mpaths                              252:20   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdcq                                   69:224  0   1.4T  0 disk
└─mpaths                              252:20   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdcr                                   69:240  0   1.4T  0 disk
└─mpaths                              252:20   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdcs                                   70:0    0   1.4T  0 disk
└─mpatht                              252:21   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdct                                   70:16   0   1.4T  0 disk
└─mpatht                              252:21   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdcu                                   70:32   0   1.4T  0 disk
└─mpatht                              252:21   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdcv                                   70:48   0   1.4T  0 disk
└─mpatht                              252:21   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdcw                                   70:64   0   1.4T  0 disk
└─mpatht                              252:21   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  27.3T  0 lvm
sdcx                                   70:80   0   1.4T  0 disk
└─mpathu                              252:23   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  28.7T  0 lvm
sdcy                                   70:96   0   1.4T  0 disk
└─mpathu                              252:23   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  28.7T  0 lvm
sdcz                                   70:112  0   1.4T  0 disk
└─mpathu                              252:23   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  28.7T  0 lvm
sdda                                   70:128  0   1.4T  0 disk
└─mpathu                              252:23   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  28.7T  0 lvm
sddb                                   70:144  0   1.4T  0 disk
└─mpathu                              252:23   0   1.4T  0 mpath
  └─volgroup01-striped_logical_volume 252:22   0  28.7T  0 lvm
nvme0n1                               259:0    0 894.3G  0 disk
nvme1n1                               259:1    0 894.3G  0 disk

■ Block Volume Performance Testing with FIO

Following the sample FIO commands for block volume performance tests on Linux-based instances, we use FIO to measure IOPS and throughput.

● Installing FIO

1) Install FIO
To install and configure FIO on an Oracle Linux or CentOS system, run the following command:

[root@bm-e6 ~]# dnf install fio -y
Last metadata expiration check: 2:36:27 ago on Thu 18 Sep 2025 10:21:55 AM GMT.
Dependencies resolved.
===================================================================================================================================================================================================================
 Package                                              Architecture                              Version                                                 Repository                                            Size
===================================================================================================================================================================================================================
Installing:
 fio                                                  x86_64                                    3.35-1.el9                                              ol9_appstream                                        6.4 M
Installing dependencies:
 boost-iostreams                                      x86_64                                    1.75.0-10.el9                                           ol9_appstream                                         36 k
 libnbd                                               x86_64                                    1.20.3-1.el9                                            ol9_appstream                                        178 k
 librados2                                            x86_64                                    2:16.2.4-5.0.3.el9                                      ol9_appstream                                        3.4 M
 librbd1                                              x86_64                                    2:16.2.4-5.0.3.el9                                      ol9_appstream                                        3.0 M
 librdmacm                                            x86_64                                    54.0-1.el9                                              ol9_baseos_latest                                     74 k
Installing weak dependencies:
 fio-engine-http                                      x86_64                                    3.35-1.el9                                              ol9_appstream                                         17 k
 fio-engine-libaio                                    x86_64                                    3.35-1.el9                                              ol9_appstream                                         14 k
 fio-engine-nbd                                       x86_64                                    3.35-1.el9                                              ol9_appstream                                         13 k
 fio-engine-rados                                     x86_64                                    3.35-1.el9                                              ol9_appstream                                         15 k
 fio-engine-rbd                                       x86_64                                    3.35-1.el9                                              ol9_appstream                                         15 k
 fio-engine-rdma                                      x86_64                                    3.35-1.el9                                              ol9_appstream                                         19 k

Transaction Summary
===================================================================================================================================================================================================================
Install  12 Packages

Total download size: 13 M
Installed size: 32 M
Downloading Packages:
(1/12): librdmacm-54.0-1.el9.x86_64.rpm                                                                                                                                            844 kB/s |  74 kB     00:00
(2/12): boost-iostreams-1.75.0-10.el9.x86_64.rpm                                                                                                                                   361 kB/s |  36 kB     00:00
(3/12): fio-engine-http-3.35-1.el9.x86_64.rpm                                                                                                                                      1.2 MB/s |  17 kB     00:00
(4/12): fio-engine-nbd-3.35-1.el9.x86_64.rpm                                                                                                                                       996 kB/s |  13 kB     00:00
(5/12): fio-engine-libaio-3.35-1.el9.x86_64.rpm                                                                                                                                    348 kB/s |  14 kB     00:00
(6/12): fio-engine-rbd-3.35-1.el9.x86_64.rpm                                                                                                                                       1.8 MB/s |  15 kB     00:00
(7/12): fio-engine-rados-3.35-1.el9.x86_64.rpm                                                                                                                                     132 kB/s |  15 kB     00:00
(8/12): fio-engine-rdma-3.35-1.el9.x86_64.rpm                                                                                                                                      167 kB/s |  19 kB     00:00
(9/12): libnbd-1.20.3-1.el9.x86_64.rpm                                                                                                                                             950 kB/s | 178 kB     00:00
(10/12): librados2-16.2.4-5.0.3.el9.x86_64.rpm                                                                                                                                     7.0 MB/s | 3.4 MB     00:00
(11/12): librbd1-16.2.4-5.0.3.el9.x86_64.rpm                                                                                                                                       7.7 MB/s | 3.0 MB     00:00
(12/12): fio-3.35-1.el9.x86_64.rpm                                                                                                                                                 4.9 MB/s | 6.4 MB     00:01
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                                                               10 MB/s |  13 MB     00:01
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                                                                                           1/1
  Installing       : librdmacm-54.0-1.el9.x86_64                                                                                                                                                              1/12
  Installing       : libnbd-1.20.3-1.el9.x86_64                                                                                                                                                               2/12
  Installing       : boost-iostreams-1.75.0-10.el9.x86_64                                                                                                                                                     3/12
  Installing       : librados2-2:16.2.4-5.0.3.el9.x86_64                                                                                                                                                      4/12
  Running scriptlet: librados2-2:16.2.4-5.0.3.el9.x86_64                                                                                                                                                      4/12
  Installing       : librbd1-2:16.2.4-5.0.3.el9.x86_64                                                                                                                                                        5/12
  Running scriptlet: librbd1-2:16.2.4-5.0.3.el9.x86_64                                                                                                                                                        5/12
  Installing       : fio-engine-http-3.35-1.el9.x86_64                                                                                                                                                        6/12
  Installing       : fio-engine-libaio-3.35-1.el9.x86_64                                                                                                                                                      7/12
  Installing       : fio-engine-nbd-3.35-1.el9.x86_64                                                                                                                                                         8/12
  Installing       : fio-engine-rados-3.35-1.el9.x86_64                                                                                                                                                       9/12
  Installing       : fio-engine-rdma-3.35-1.el9.x86_64                                                                                                                                                       10/12
  Installing       : fio-3.35-1.el9.x86_64                                                                                                                                                                   11/12
  Installing       : fio-engine-rbd-3.35-1.el9.x86_64                                                                                                                                                        12/12
  Running scriptlet: fio-engine-rbd-3.35-1.el9.x86_64                                                                                                                                                        12/12
  Verifying        : librdmacm-54.0-1.el9.x86_64                                                                                                                                                              1/12
  Verifying        : boost-iostreams-1.75.0-10.el9.x86_64                                                                                                                                                     2/12
  Verifying        : fio-3.35-1.el9.x86_64                                                                                                                                                                    3/12
  Verifying        : fio-engine-http-3.35-1.el9.x86_64                                                                                                                                                        4/12
  Verifying        : fio-engine-libaio-3.35-1.el9.x86_64                                                                                                                                                      5/12
  Verifying        : fio-engine-nbd-3.35-1.el9.x86_64                                                                                                                                                         6/12
  Verifying        : fio-engine-rados-3.35-1.el9.x86_64                                                                                                                                                       7/12
  Verifying        : fio-engine-rbd-3.35-1.el9.x86_64                                                                                                                                                         8/12
  Verifying        : fio-engine-rdma-3.35-1.el9.x86_64                                                                                                                                                        9/12
  Verifying        : libnbd-1.20.3-1.el9.x86_64                                                                                                                                                              10/12
  Verifying        : librados2-2:16.2.4-5.0.3.el9.x86_64                                                                                                                                                     11/12
  Verifying        : librbd1-2:16.2.4-5.0.3.el9.x86_64                                                                                                                                                       12/12

Installed:
  boost-iostreams-1.75.0-10.el9.x86_64       fio-3.35-1.el9.x86_64                  fio-engine-http-3.35-1.el9.x86_64       fio-engine-libaio-3.35-1.el9.x86_64       fio-engine-nbd-3.35-1.el9.x86_64
  fio-engine-rados-3.35-1.el9.x86_64         fio-engine-rbd-3.35-1.el9.x86_64       fio-engine-rdma-3.35-1.el9.x86_64       libnbd-1.20.3-1.el9.x86_64                librados2-2:16.2.4-5.0.3.el9.x86_64
  librbd1-2:16.2.4-5.0.3.el9.x86_64          librdmacm-54.0-1.el9.x86_64

Complete!

2) Verify the FIO installation

[root@bm-e6 ~]# fio -v
fio-3.35

[root@bm-e6 ~]# fio -h
fio-3.35
fio [options] [job options] <job file(s)>
  --debug=options	Enable debug logging. May be one/more of:
			process,file,io,mem,blktrace,verify,random,parse,
			diskutil,job,mutex,profile,time,net,rate,compress,
			steadystate,helperthread,zbd
  --parse-only		Parse options only, don't start any IO
  --merge-blktrace-only	Merge blktraces only, don't start any IO
  --output		Write output to file
  --bandwidth-log	Generate aggregate bandwidth logs
  --minimal		Minimal (terse) output
  --output-format=type	Output format (terse,json,json+,normal)
  --terse-version=type	Set terse version output format (default 3, or 2 or 4 or 5)
  --version		Print version info and exit
  --help		Print this page
  --cpuclock-test	Perform test/validation of CPU clock
  --crctest=[type]	Test speed of checksum functions
  --cmdhelp=cmd		Print command help, "all" for all of them
  --enghelp=engine	Print ioengine help, or list available ioengines
  --enghelp=engine,cmd	Print help for an ioengine cmd
  --showcmd		Turn a job file into command line options
  --eta=when		When ETA estimate should be printed
            		May be "always", "never" or "auto"
  --eta-newline=t	Force a new line for every 't' period passed
  --status-interval=t	Force full status dump every 't' period passed
  --readonly		Turn on safety read-only checks, preventing writes
  --section=name	Only run specified section in job file, multiple sections can be specified
  --alloc-size=kb	Set smalloc pool to this size in kb (def 16384)
  --warnings-fatal	Fio parser warnings are fatal
  --max-jobs=nr		Maximum number of threads/processes to support
  --server=args		Start a backend fio server
  --daemonize=pidfile	Background fio server, write pid to file
  --client=hostname	Talk to remote backend(s) fio server at hostname
  --remote-config=file	Tell fio server to load this local job file
  --idle-prof=option	Report cpu idleness on a system or percpu basis
			(option=system,percpu) or run unit work
			calibration only (option=calibrate)
  --inflate-log=log	Inflate and output compressed log
  --trigger-file=file	Execute trigger cmd when file exists
  --trigger-timeout=t	Execute trigger at this time
  --trigger=cmd		Set this command as local trigger
  --trigger-remote=cmd	Set this command as remote trigger
  --aux-path=path	Use this path for fio state generated files

Fio was written by Jens Axboe <axboe@kernel.dk>

● IOPS Performance Test

To test IOPS performance, use the following sample FIO commands. You can either run a command directly, or create a job file from it and then run that job file.

・Reference: Sample FIO Commands for Block Volume Performance Tests on Linux-based Instances

When running FIO, use the --numjobs option to raise the degree of parallelism according to the number of striped devices (/dev/sd?). Since this setup has 100 device files, we tune the value upward starting from 10 or more.
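The job-file alternative mentioned above can be sketched as follows. This is an illustrative fio job file (the file name and job name are placeholders) mirroring the options of the sequential-read IOPS command used in this section:

```ini
; iops-test-job.fio -- sketch equivalent of the command-line invocation
[iops-test-job]
filename=/dev/volgroup01/striped_logical_volume
rw=read
bs=4k
direct=1
ioengine=libaio
iodepth=256
runtime=120
time_based
numjobs=150
group_reporting
```

Run it with `fio --readonly --eta-newline=1 iops-test-job.fio`; `--readonly` is a command-line safety flag, so it stays outside the job file.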

・ Sequential Reads Test

For workloads that can take advantage of sequential access patterns, such as database workloads, you can check this pattern's performance by testing sequential reads.
To test sequential reads, run the following command:

[root@bm-e6 ~]# fio --filename=/dev/volgroup01/striped_logical_volume --direct=1 --rw=read --bs=4k --ioengine=libaio --iodepth=256 --runtime=120 --numjobs=150 --time_based --group_reporting --name=iops-test-job --eta-newline=1 --readonly
iops-test-job: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=256
...
fio-3.35
Starting 150 processes
Jobs: 150 (f=150): [R(150)][2.5%][r=22.5GiB/s][r=5904k IOPS][eta 01m:57s]
Jobs: 150 (f=150): [R(150)][4.2%][r=22.6GiB/s][r=5927k IOPS][eta 01m:55s]
Jobs: 150 (f=150): [R(150)][5.8%][r=19.7GiB/s][r=5172k IOPS][eta 01m:53s]
Jobs: 150 (f=150): [R(150)][7.5%][r=22.5GiB/s][r=5892k IOPS][eta 01m:51s]
Jobs: 150 (f=150): [R(150)][9.2%][r=22.2GiB/s][r=5810k IOPS][eta 01m:49s]
Jobs: 150 (f=150): [R(150)][10.8%][r=22.6GiB/s][r=5937k IOPS][eta 01m:47s]
Jobs: 150 (f=150): [R(150)][11.7%][r=22.6GiB/s][r=5922k IOPS][eta 01m:46s]
Jobs: 150 (f=150): [R(150)][12.5%][r=22.6GiB/s][r=5929k IOPS][eta 01m:45s]
Jobs: 150 (f=150): [R(150)][13.3%][r=22.7GiB/s][r=5942k IOPS][eta 01m:44s]
Jobs: 150 (f=150): [R(150)][14.2%][r=22.6GiB/s][r=5924k IOPS][eta 01m:43s]
Jobs: 150 (f=150): [R(150)][15.0%][r=22.5GiB/s][r=5900k IOPS][eta 01m:42s]
Jobs: 150 (f=150): [R(150)][15.8%][r=22.7GiB/s][r=5951k IOPS][eta 01m:41s]
Jobs: 150 (f=150): [R(150)][16.7%][r=22.6GiB/s][r=5926k IOPS][eta 01m:40s]
Jobs: 150 (f=150): [R(150)][17.5%][r=22.6GiB/s][r=5931k IOPS][eta 01m:39s]
Jobs: 150 (f=150): [R(150)][18.3%][r=22.6GiB/s][r=5923k IOPS][eta 01m:38s]
Jobs: 150 (f=150): [R(150)][19.2%][r=22.5GiB/s][r=5906k IOPS][eta 01m:37s]
Jobs: 150 (f=150): [R(150)][20.0%][r=22.6GiB/s][r=5936k IOPS][eta 01m:36s]
Jobs: 150 (f=150): [R(150)][20.8%][r=22.7GiB/s][r=5946k IOPS][eta 01m:35s]
Jobs: 150 (f=150): [R(150)][21.7%][r=22.6GiB/s][r=5933k IOPS][eta 01m:34s]
Jobs: 150 (f=150): [R(150)][22.5%][r=22.7GiB/s][r=5940k IOPS][eta 01m:33s]
Jobs: 150 (f=150): [R(150)][23.3%][r=22.6GiB/s][r=5921k IOPS][eta 01m:32s]
Jobs: 150 (f=150): [R(150)][24.2%][r=22.6GiB/s][r=5933k IOPS][eta 01m:31s]
Jobs: 150 (f=150): [R(150)][25.0%][r=22.5GiB/s][r=5904k IOPS][eta 01m:30s]
Jobs: 150 (f=150): [R(150)][25.8%][r=22.7GiB/s][r=5952k IOPS][eta 01m:29s]
Jobs: 150 (f=150): [R(150)][26.7%][r=22.7GiB/s][r=5940k IOPS][eta 01m:28s]
Jobs: 150 (f=150): [R(150)][27.5%][r=22.4GiB/s][r=5873k IOPS][eta 01m:27s]
Jobs: 150 (f=150): [R(150)][28.3%][r=22.2GiB/s][r=5821k IOPS][eta 01m:26s]
Jobs: 150 (f=150): [R(150)][29.2%][r=22.5GiB/s][r=5892k IOPS][eta 01m:25s]
Jobs: 150 (f=150): [R(150)][30.0%][r=22.6GiB/s][r=5937k IOPS][eta 01m:24s]
Jobs: 150 (f=150): [R(150)][30.8%][r=22.6GiB/s][r=5925k IOPS][eta 01m:23s]
Jobs: 150 (f=150): [R(150)][31.7%][r=22.4GiB/s][r=5881k IOPS][eta 01m:22s]
Jobs: 150 (f=150): [R(150)][32.5%][r=22.6GiB/s][r=5917k IOPS][eta 01m:21s]
Jobs: 150 (f=150): [R(150)][33.3%][r=22.5GiB/s][r=5889k IOPS][eta 01m:20s]
Jobs: 150 (f=150): [R(150)][34.2%][r=21.3GiB/s][r=5586k IOPS][eta 01m:19s]
Jobs: 150 (f=150): [R(150)][35.0%][r=22.6GiB/s][r=5924k IOPS][eta 01m:18s]
Jobs: 150 (f=150): [R(150)][35.8%][r=22.6GiB/s][r=5930k IOPS][eta 01m:17s]
Jobs: 150 (f=150): [R(150)][36.7%][r=22.6GiB/s][r=5929k IOPS][eta 01m:16s]
Jobs: 150 (f=150): [R(150)][37.5%][r=22.6GiB/s][r=5922k IOPS][eta 01m:15s]
Jobs: 150 (f=150): [R(150)][38.3%][r=22.5GiB/s][r=5887k IOPS][eta 01m:14s]
Jobs: 150 (f=150): [R(150)][40.0%][r=22.6GiB/s][r=5914k IOPS][eta 01m:12s]
Jobs: 150 (f=150): [R(150)][40.8%][r=22.5GiB/s][r=5896k IOPS][eta 01m:11s]
Jobs: 150 (f=150): [R(150)][42.5%][r=22.6GiB/s][r=5927k IOPS][eta 01m:09s]
Jobs: 150 (f=150): [R(150)][44.2%][r=21.4GiB/s][r=5600k IOPS][eta 01m:07s]
Jobs: 150 (f=150): [R(150)][45.8%][r=22.6GiB/s][r=5936k IOPS][eta 01m:05s]
Jobs: 150 (f=150): [R(150)][47.5%][r=22.6GiB/s][r=5932k IOPS][eta 01m:03s]
Jobs: 150 (f=150): [R(150)][49.2%][r=22.7GiB/s][r=5941k IOPS][eta 01m:01s]
Jobs: 150 (f=150): [R(150)][50.8%][r=22.1GiB/s][r=5802k IOPS][eta 00m:59s]
Jobs: 150 (f=150): [R(150)][52.5%][r=22.2GiB/s][r=5810k IOPS][eta 00m:57s]
Jobs: 150 (f=150): [R(150)][54.2%][r=22.7GiB/s][r=5943k IOPS][eta 00m:55s]
Jobs: 150 (f=150): [R(150)][55.8%][r=22.2GiB/s][r=5820k IOPS][eta 00m:53s]
Jobs: 150 (f=150): [R(150)][57.5%][r=22.6GiB/s][r=5927k IOPS][eta 00m:51s]
Jobs: 150 (f=150): [R(150)][59.2%][r=22.7GiB/s][r=5948k IOPS][eta 00m:49s]
Jobs: 150 (f=150): [R(150)][60.8%][r=22.6GiB/s][r=5938k IOPS][eta 00m:47s]
Jobs: 150 (f=150): [R(150)][62.5%][r=22.7GiB/s][r=5946k IOPS][eta 00m:45s]
Jobs: 150 (f=150): [R(150)][64.2%][r=22.6GiB/s][r=5916k IOPS][eta 00m:43s]
Jobs: 150 (f=150): [R(150)][65.8%][r=22.6GiB/s][r=5934k IOPS][eta 00m:41s]
Jobs: 150 (f=150): [R(150)][67.5%][r=19.8GiB/s][r=5197k IOPS][eta 00m:39s]
Jobs: 150 (f=150): [R(150)][69.2%][r=22.6GiB/s][r=5925k IOPS][eta 00m:37s]
Jobs: 150 (f=150): [R(150)][70.8%][r=22.7GiB/s][r=5940k IOPS][eta 00m:35s]
Jobs: 150 (f=150): [R(150)][72.5%][r=22.6GiB/s][r=5916k IOPS][eta 00m:33s]
Jobs: 150 (f=150): [R(150)][74.2%][r=18.3GiB/s][r=4798k IOPS][eta 00m:31s]
Jobs: 150 (f=150): [R(150)][75.8%][r=22.7GiB/s][r=5944k IOPS][eta 00m:29s]
Jobs: 150 (f=150): [R(150)][77.5%][r=22.6GiB/s][r=5930k IOPS][eta 00m:27s]
Jobs: 150 (f=150): [R(150)][79.2%][r=22.6GiB/s][r=5932k IOPS][eta 00m:25s]
Jobs: 150 (f=150): [R(150)][80.8%][r=22.6GiB/s][r=5928k IOPS][eta 00m:23s]
Jobs: 150 (f=150): [R(150)][82.5%][r=21.7GiB/s][r=5701k IOPS][eta 00m:21s]
Jobs: 150 (f=150): [R(150)][84.2%][r=22.6GiB/s][r=5935k IOPS][eta 00m:19s]
Jobs: 150 (f=150): [R(150)][85.8%][r=22.6GiB/s][r=5914k IOPS][eta 00m:17s]
Jobs: 150 (f=150): [R(150)][87.5%][r=22.6GiB/s][r=5938k IOPS][eta 00m:15s]
Jobs: 150 (f=150): [R(150)][89.2%][r=22.6GiB/s][r=5929k IOPS][eta 00m:13s]
Jobs: 150 (f=150): [R(150)][90.8%][r=22.7GiB/s][r=5938k IOPS][eta 00m:11s]
Jobs: 150 (f=150): [R(150)][92.5%][r=22.6GiB/s][r=5935k IOPS][eta 00m:09s]
Jobs: 150 (f=150): [R(150)][94.2%][r=22.7GiB/s][r=5942k IOPS][eta 00m:07s]
Jobs: 150 (f=150): [R(150)][95.8%][r=22.7GiB/s][r=5940k IOPS][eta 00m:05s]
Jobs: 150 (f=150): [R(150)][97.5%][r=22.7GiB/s][r=5946k IOPS][eta 00m:03s]
Jobs: 150 (f=150): [R(150)][99.2%][r=22.6GiB/s][r=5931k IOPS][eta 00m:01s]
Jobs: 150 (f=150): [R(150)][100.0%][r=22.7GiB/s][r=5944k IOPS][eta 00m:00s]
iops-test-job: (groupid=0, jobs=150): err= 0: pid=118888: Wed Sep 24 10:24:53 2025
  read: IOPS=5808k, BW=22.2GiB/s (23.8GB/s)(2659GiB/120005msec)
    slat (nsec): min=1692, max=156612k, avg=25012.65, stdev=175709.04
    clat (usec): min=320, max=345856, avg=6585.13, stdev=3031.68
     lat (usec): min=324, max=345862, avg=6610.14, stdev=3043.83
    clat percentiles (usec):
     |  1.00th=[ 3228],  5.00th=[ 3916], 10.00th=[ 4359], 20.00th=[ 4883],
     | 30.00th=[ 5342], 40.00th=[ 5735], 50.00th=[ 6128], 60.00th=[ 6521],
     | 70.00th=[ 7046], 80.00th=[ 7635], 90.00th=[ 8717], 95.00th=[10159],
     | 99.00th=[17957], 99.50th=[22676], 99.90th=[38011], 99.95th=[47449],
     | 99.99th=[72877]
   bw (  MiB/s): min= 9969, max=25636, per=100.00%, avg=22714.12, stdev=13.53, samples=35850
   iops        : min=2552108, max=6563027, avg=5814795.55, stdev=3464.37, samples=35850
  lat (usec)   : 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=5.77%, 10=88.97%, 20=4.52%, 50=0.70%
  lat (msec)   : 100=0.04%, 250=0.01%, 500=0.01%
  cpu          : usr=2.62%, sys=29.59%, ctx=43013554, majf=2, minf=215001
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
     issued rwts: total=697009084,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=256

Run status group 0 (all jobs):
   READ: bw=22.2GiB/s (23.8GB/s), 22.2GiB/s-22.2GiB/s (23.8GB/s-23.8GB/s), io=2659GiB (2855GB), run=120005-120005msec

・ IOPS performance test results

Looking at the iops line below, max=6563027 (about 6.56 million IOPS) confirms the test exceeded the 6.25 million IOPS estimated from the 200 Gbps bandwidth; the sustained average was about 5.81 million IOPS.

The iops line:
   iops        : min=2552108, max=6563027, avg=5814795.55, stdev=3464.37, samples=35850
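As a sanity check, the 6.25-million-IOPS ceiling used for comparison can be reproduced with shell arithmetic. This is a back-of-the-envelope sketch; the 200 Gbps and 4 KB figures come from the formula at the top of the article:

```shell
# Theoretical IOPS ceiling = bandwidth / IO size
# 200 Gbit/s / 8 = 25,000,000 KB/s; divide by 4 KB per IO
BANDWIDTH_GBPS=200
BLOCK_KB=4
IOPS=$(( BANDWIDTH_GBPS * 1000 * 1000 / 8 / BLOCK_KB ))
echo "$IOPS IOPS"   # → 6250000 IOPS
```

The measured max of 6,563,027 slightly exceeds this figure because fio reports instantaneous per-sample peaks, not a sustained rate.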

● Throughput performance test

To test throughput performance, use the following sample FIO command.

・ Sequential reads test

For workloads that can take advantage of sequential access patterns, such as database workloads, testing sequential reads verifies performance for this pattern.
Run the following command to test sequential reads:

[root@bm-e6 ~]# fio --filename=/dev/volgroup01/striped_logical_volume --direct=1 --rw=read --bs=64k --ioengine=libaio --iodepth=64 --runtime=120 --numjobs=150 --time_based --group_reporting --name=throughput-test-job --eta-newline=1 --readonly
throughput-test-job: (g=0): rw=read, bs=(R) 64.0KiB-64.0KiB, (W) 64.0KiB-64.0KiB, (T) 64.0KiB-64.0KiB, ioengine=libaio, iodepth=64
...
fio-3.35
Starting 150 processes
Jobs: 150 (f=150): [R(150)][2.5%][r=21.8GiB/s][r=358k IOPS][eta 01m:57s]
Jobs: 150 (f=150): [R(150)][4.2%][r=22.6GiB/s][r=371k IOPS][eta 01m:55s]
Jobs: 150 (f=150): [R(150)][5.8%][r=22.7GiB/s][r=372k IOPS][eta 01m:53s]
Jobs: 150 (f=150): [R(150)][7.5%][r=22.6GiB/s][r=370k IOPS][eta 01m:51s]
Jobs: 150 (f=150): [R(150)][9.2%][r=22.5GiB/s][r=369k IOPS][eta 01m:49s]
Jobs: 150 (f=150): [R(150)][10.0%][r=22.4GiB/s][r=367k IOPS][eta 01m:48s]
Jobs: 150 (f=150): [R(150)][10.8%][r=20.6GiB/s][r=338k IOPS][eta 01m:47s]
Jobs: 150 (f=150): [R(150)][11.7%][r=20.9GiB/s][r=342k IOPS][eta 01m:46s]
Jobs: 150 (f=150): [R(150)][12.5%][r=20.7GiB/s][r=340k IOPS][eta 01m:45s]
Jobs: 150 (f=150): [R(150)][13.3%][r=16.6GiB/s][r=272k IOPS][eta 01m:44s]
Jobs: 150 (f=150): [R(150)][14.2%][r=21.7GiB/s][r=355k IOPS][eta 01m:43s]
Jobs: 150 (f=150): [R(150)][15.0%][r=22.2GiB/s][r=364k IOPS][eta 01m:42s]
Jobs: 150 (f=150): [R(150)][15.8%][r=19.7GiB/s][r=323k IOPS][eta 01m:41s]
Jobs: 150 (f=150): [R(150)][16.7%][r=22.4GiB/s][r=368k IOPS][eta 01m:40s]
Jobs: 150 (f=150): [R(150)][17.5%][r=22.7GiB/s][r=372k IOPS][eta 01m:39s]
Jobs: 150 (f=150): [R(150)][18.3%][r=22.5GiB/s][r=369k IOPS][eta 01m:38s]
Jobs: 150 (f=150): [R(150)][19.2%][r=22.6GiB/s][r=371k IOPS][eta 01m:37s]
Jobs: 150 (f=150): [R(150)][20.0%][r=22.6GiB/s][r=370k IOPS][eta 01m:36s]
Jobs: 150 (f=150): [R(150)][20.8%][r=14.9GiB/s][r=243k IOPS][eta 01m:35s]
Jobs: 150 (f=150): [R(150)][21.7%][r=22.6GiB/s][r=370k IOPS][eta 01m:34s]
Jobs: 150 (f=150): [R(150)][22.5%][r=11.5GiB/s][r=188k IOPS][eta 01m:33s]
Jobs: 150 (f=150): [R(150)][23.3%][r=22.2GiB/s][r=364k IOPS][eta 01m:32s]
Jobs: 150 (f=150): [R(150)][24.2%][r=22.6GiB/s][r=370k IOPS][eta 01m:31s]
Jobs: 150 (f=150): [R(150)][25.0%][r=22.7GiB/s][r=372k IOPS][eta 01m:30s]
Jobs: 150 (f=150): [R(150)][25.8%][r=22.7GiB/s][r=371k IOPS][eta 01m:29s]
Jobs: 150 (f=150): [R(150)][26.7%][r=22.6GiB/s][r=370k IOPS][eta 01m:28s]
Jobs: 150 (f=150): [R(150)][27.5%][r=22.6GiB/s][r=371k IOPS][eta 01m:27s]
Jobs: 150 (f=150): [R(150)][28.3%][r=22.2GiB/s][r=364k IOPS][eta 01m:26s]
Jobs: 150 (f=150): [R(150)][29.2%][r=20.0GiB/s][r=328k IOPS][eta 01m:25s]
Jobs: 150 (f=150): [R(150)][30.0%][r=22.6GiB/s][r=370k IOPS][eta 01m:24s]
Jobs: 150 (f=150): [R(150)][31.7%][r=22.6GiB/s][r=371k IOPS][eta 01m:22s]
Jobs: 150 (f=150): [R(150)][32.5%][r=22.6GiB/s][r=371k IOPS][eta 01m:21s]
Jobs: 150 (f=150): [R(150)][33.3%][r=22.5GiB/s][r=369k IOPS][eta 01m:20s]
Jobs: 150 (f=150): [R(150)][34.2%][r=22.4GiB/s][r=367k IOPS][eta 01m:19s]
Jobs: 150 (f=150): [R(150)][35.0%][r=22.7GiB/s][r=371k IOPS][eta 01m:18s]
Jobs: 150 (f=150): [R(150)][35.8%][r=22.0GiB/s][r=360k IOPS][eta 01m:17s]
Jobs: 150 (f=150): [R(150)][36.7%][r=22.7GiB/s][r=371k IOPS][eta 01m:16s]
Jobs: 150 (f=150): [R(150)][37.5%][r=22.5GiB/s][r=369k IOPS][eta 01m:15s]
Jobs: 150 (f=150): [R(150)][38.3%][r=22.7GiB/s][r=372k IOPS][eta 01m:14s]
Jobs: 150 (f=150): [R(150)][40.0%][r=22.7GiB/s][r=372k IOPS][eta 01m:12s]
Jobs: 150 (f=150): [R(150)][41.7%][r=22.7GiB/s][r=371k IOPS][eta 01m:10s]
Jobs: 150 (f=150): [R(150)][43.3%][r=21.9GiB/s][r=360k IOPS][eta 01m:08s]
Jobs: 150 (f=150): [R(150)][45.0%][r=22.6GiB/s][r=370k IOPS][eta 01m:06s]
Jobs: 150 (f=150): [R(150)][46.7%][r=22.6GiB/s][r=370k IOPS][eta 01m:04s]
Jobs: 150 (f=150): [R(150)][48.3%][r=21.6GiB/s][r=355k IOPS][eta 01m:02s]
Jobs: 150 (f=150): [R(150)][50.0%][r=22.6GiB/s][r=370k IOPS][eta 01m:00s]
Jobs: 150 (f=150): [R(150)][51.7%][r=16.2GiB/s][r=266k IOPS][eta 00m:58s]
Jobs: 150 (f=150): [R(150)][53.3%][r=21.2GiB/s][r=347k IOPS][eta 00m:56s]
Jobs: 150 (f=150): [R(150)][55.0%][r=20.2GiB/s][r=330k IOPS][eta 00m:54s]
Jobs: 150 (f=150): [R(150)][56.7%][r=21.8GiB/s][r=357k IOPS][eta 00m:52s]
Jobs: 150 (f=150): [R(150)][58.3%][r=22.6GiB/s][r=371k IOPS][eta 00m:50s]
Jobs: 150 (f=150): [R(150)][60.0%][r=22.6GiB/s][r=371k IOPS][eta 00m:48s]
Jobs: 150 (f=150): [R(150)][61.7%][r=22.6GiB/s][r=370k IOPS][eta 00m:46s]
Jobs: 150 (f=150): [R(150)][63.3%][r=17.2GiB/s][r=281k IOPS][eta 00m:44s]
Jobs: 150 (f=150): [R(150)][65.0%][r=22.6GiB/s][r=371k IOPS][eta 00m:42s]
Jobs: 150 (f=150): [R(150)][66.7%][r=22.7GiB/s][r=371k IOPS][eta 00m:40s]
Jobs: 150 (f=150): [R(150)][68.3%][r=22.7GiB/s][r=372k IOPS][eta 00m:38s]
Jobs: 150 (f=150): [R(150)][70.0%][r=22.5GiB/s][r=369k IOPS][eta 00m:36s]
Jobs: 150 (f=150): [R(150)][71.7%][r=18.8GiB/s][r=308k IOPS][eta 00m:34s]
Jobs: 150 (f=150): [R(150)][73.3%][r=16.1GiB/s][r=263k IOPS][eta 00m:32s]
Jobs: 150 (f=150): [R(150)][75.0%][r=22.7GiB/s][r=371k IOPS][eta 00m:30s]
Jobs: 150 (f=150): [R(150)][76.7%][r=22.4GiB/s][r=366k IOPS][eta 00m:28s]
Jobs: 150 (f=150): [R(150)][78.3%][r=22.3GiB/s][r=366k IOPS][eta 00m:26s]
Jobs: 150 (f=150): [R(150)][80.0%][r=22.7GiB/s][r=371k IOPS][eta 00m:24s]
Jobs: 150 (f=150): [R(150)][81.7%][r=22.6GiB/s][r=371k IOPS][eta 00m:22s]
Jobs: 150 (f=150): [R(150)][83.3%][r=22.4GiB/s][r=367k IOPS][eta 00m:20s]
Jobs: 150 (f=150): [R(150)][85.0%][r=19.9GiB/s][r=325k IOPS][eta 00m:18s]
Jobs: 150 (f=150): [R(150)][86.7%][r=22.6GiB/s][r=371k IOPS][eta 00m:16s]
Jobs: 150 (f=150): [R(150)][88.3%][r=22.7GiB/s][r=372k IOPS][eta 00m:14s]
Jobs: 150 (f=150): [R(150)][90.0%][r=22.4GiB/s][r=367k IOPS][eta 00m:12s]
Jobs: 150 (f=150): [R(150)][91.7%][r=22.7GiB/s][r=372k IOPS][eta 00m:10s]
Jobs: 150 (f=150): [R(150)][93.3%][r=22.4GiB/s][r=367k IOPS][eta 00m:08s]
Jobs: 150 (f=150): [R(150)][95.0%][r=22.6GiB/s][r=370k IOPS][eta 00m:06s]
Jobs: 150 (f=150): [R(150)][96.7%][r=21.0GiB/s][r=344k IOPS][eta 00m:04s]
Jobs: 150 (f=150): [R(150)][98.3%][r=21.8GiB/s][r=357k IOPS][eta 00m:02s]
Jobs: 150 (f=150): [R(150)][100.0%][r=22.4GiB/s][r=367k IOPS][eta 00m:00s]
throughput-test-job: (groupid=0, jobs=150): err= 0: pid=111131: Wed Sep 24 09:38:58 2025
  read: IOPS=354k, BW=21.6GiB/s (23.2GB/s)(2596GiB/120012msec)
    slat (usec): min=32, max=326566, avg=421.74, stdev=834.14
    clat (usec): min=550, max=641500, avg=26654.65, stdev=11529.47
     lat (usec): min=694, max=642471, avg=27076.39, stdev=11659.36
    clat percentiles (msec):
     |  1.00th=[   17],  5.00th=[   20], 10.00th=[   21], 20.00th=[   22],
     | 30.00th=[   23], 40.00th=[   24], 50.00th=[   26], 60.00th=[   27],
     | 70.00th=[   28], 80.00th=[   30], 90.00th=[   33], 95.00th=[   38],
     | 99.00th=[   62], 99.50th=[   84], 99.90th=[  174], 99.95th=[  234],
     | 99.99th=[  380]
   bw (  MiB/s): min= 7542, max=25713, per=100.00%, avg=22176.20, stdev=18.80, samples=35850
   iops        : min=120670, max=411413, avg=354815.26, stdev=300.77, samples=35850
  lat (usec)   : 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%, 10=0.01%, 20=8.44%, 50=89.86%
  lat (msec)   : 100=1.36%, 250=0.28%, 500=0.04%, 750=0.01%
  cpu          : usr=0.33%, sys=26.71%, ctx=41673389, majf=1, minf=445271
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=42530711,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=21.6GiB/s (23.2GB/s), 21.6GiB/s-21.6GiB/s (23.2GB/s-23.2GB/s), io=2596GiB (2787GB), run=120012-120012msec

・ Throughput performance test results

Looking at the bw line below, max=25713 MiB/s ≈ 215.7 Gbit/s, confirming throughput above the instance's 200 Gbps network bandwidth.

The bw line:
   bw (  MiB/s): min= 7542, max=25713, per=100.00%, avg=22176.20, stdev=18.80, samples=35850
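The MiB/s-to-Gbit/s conversion above can be checked with a one-line awk sketch (1 MiB = 1,048,576 bytes, 8 bits per byte, decimal Gbit):

```shell
# Convert fio's max bandwidth (MiB/s) to Gbit/s
MAX_MIB_S=25713
GBPS=$(awk -v m="$MAX_MIB_S" 'BEGIN { printf "%.1f", m * 1048576 * 8 / 1e9 }')
echo "$GBPS Gbit/s"   # → 215.7 Gbit/s
```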

■ Appendix

● Removing the LVM stack

1) Unmount the XFS file system

[root@bm-e6 ~]# df -hT | grep xfs
    /dev/sdb4      xfs        46G  2.8G   43G   7% /
    /dev/sdb3      xfs       960M  174M  787M  19% /boot
    /dev/mapper/volgroup01-striped_logical_volume xfs       4.4T  132G  4.3T   3% /xfs

[root@bm-e6 ~]# umount /xfs

2) lvremove

[root@bm-e6 ~]# lvs
  LV                     VG         Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  oled                   ocivolume  -wi-ao---- 15.00g
  root                   ocivolume  -wi-ao---- 29.50g
  striped_logical_volume volgroup01 -wi-a----- 27.34t

[root@bm-e6 ~]# lvremove striped_logical_volume volgroup01
  Volume group "striped_logical_volume" not found
  Cannot process volume group striped_logical_volume
Do you really want to remove active logical volume volgroup01/striped_logical_volume? [y/n]: y
  Logical volume "striped_logical_volume" successfully removed.

(Note: lvremove treats each bare argument as a volume group name, so the first argument above failed; the usual form is lvremove volgroup01/striped_logical_volume.)

3) vgremove

[root@bm-e6 ~]# vgs
  VG         #PV #LV #SN Attr   VSize  VFree
  ocivolume    1   2   0 wz--n- 44.50g     0
  volgroup01  20   0   0 wz--n- 27.34t 27.34t

[root@bm-e6 ~]# vgremove volgroup01
  Volume group "volgroup01" successfully removed

4) pvremove

[root@bm-e6 ~]# pvs
  PV                       VG        Fmt  Attr PSize  PFree
  /dev/oracleoci/oraclevdb           lvm2 ---  <1.37t <1.37t
  /dev/oracleoci/oraclevdc           lvm2 ---  <1.37t <1.37t
  /dev/oracleoci/oraclevdd           lvm2 ---  <1.37t <1.37t
  /dev/oracleoci/oraclevde           lvm2 ---  <1.37t <1.37t
  /dev/oracleoci/oraclevdf           lvm2 ---  <1.37t <1.37t
  /dev/oracleoci/oraclevdg           lvm2 ---  <1.37t <1.37t
  /dev/oracleoci/oraclevdh           lvm2 ---  <1.37t <1.37t
  /dev/oracleoci/oraclevdi           lvm2 ---  <1.37t <1.37t
  /dev/oracleoci/oraclevdj           lvm2 ---  <1.37t <1.37t
  /dev/oracleoci/oraclevdk           lvm2 ---  <1.37t <1.37t
  /dev/oracleoci/oraclevdl           lvm2 ---  <1.37t <1.37t
  /dev/oracleoci/oraclevdm           lvm2 ---  <1.37t <1.37t
  /dev/oracleoci/oraclevdn           lvm2 ---  <1.37t <1.37t
  /dev/oracleoci/oraclevdo           lvm2 ---  <1.37t <1.37t
  /dev/oracleoci/oraclevdp           lvm2 ---  <1.37t <1.37t
  /dev/oracleoci/oraclevdq           lvm2 ---  <1.37t <1.37t
  /dev/oracleoci/oraclevdr           lvm2 ---  <1.37t <1.37t
  /dev/oracleoci/oraclevds           lvm2 ---  <1.37t <1.37t
  /dev/oracleoci/oraclevdt           lvm2 ---  <1.37t <1.37t
  /dev/oracleoci/oraclevdu           lvm2 ---  <1.37t <1.37t
  /dev/sda3                ocivolume lvm2 a--  44.50g     0

[root@bm-e6 ~]# pvremove /dev/oracleoci/oraclevd[b-u]
  Labels on physical volume "/dev/oracleoci/oraclevdb" successfully wiped.
  Labels on physical volume "/dev/oracleoci/oraclevdc" successfully wiped.
  Labels on physical volume "/dev/oracleoci/oraclevdd" successfully wiped.
  Labels on physical volume "/dev/oracleoci/oraclevde" successfully wiped.
  Labels on physical volume "/dev/oracleoci/oraclevdf" successfully wiped.
  Labels on physical volume "/dev/oracleoci/oraclevdg" successfully wiped.
  Labels on physical volume "/dev/oracleoci/oraclevdh" successfully wiped.
  Labels on physical volume "/dev/oracleoci/oraclevdi" successfully wiped.
  Labels on physical volume "/dev/oracleoci/oraclevdj" successfully wiped.
  Labels on physical volume "/dev/oracleoci/oraclevdk" successfully wiped.
  Labels on physical volume "/dev/oracleoci/oraclevdl" successfully wiped.
  Labels on physical volume "/dev/oracleoci/oraclevdm" successfully wiped.
  Labels on physical volume "/dev/oracleoci/oraclevdn" successfully wiped.
  Labels on physical volume "/dev/oracleoci/oraclevdo" successfully wiped.
  Labels on physical volume "/dev/oracleoci/oraclevdp" successfully wiped.
  Labels on physical volume "/dev/oracleoci/oraclevdq" successfully wiped.
  Labels on physical volume "/dev/oracleoci/oraclevdr" successfully wiped.
  Labels on physical volume "/dev/oracleoci/oraclevds" successfully wiped.
  Labels on physical volume "/dev/oracleoci/oraclevdt" successfully wiped.
  Labels on physical volume "/dev/oracleoci/oraclevdu" successfully wiped.

[root@bm-e6 ~]# pvs
  PV         VG        Fmt  Attr PSize  PFree
  /dev/sda3  ocivolume lvm2 a--  44.50g    0
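The four teardown steps above can be collected into one small script. This is a sketch assuming the names used in this article (the /xfs mount point, volgroup01, striped_logical_volume); the DRY_RUN guard is an addition for safety, not part of the original commands:

```shell
#!/bin/sh
# LVM teardown sketch. With DRY_RUN=1 (the default) the commands are
# only printed; set DRY_RUN=0 and run as root to execute them for real.
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$*"; else "$@"; fi; }

run umount /xfs
run lvremove -y volgroup01/striped_logical_volume   # -y skips the y/n prompt
run vgremove volgroup01
run pvremove /dev/oracleoci/oraclevd[b-u]           # the 20 block volume PVs
```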

■ Conclusion

With E6 Standard, OCI combines high performance, massive scalability, and low cost to deliver industry-leading value. Whatever you run, whether web and application servers, video transcoding, cloud-native services, or general-purpose workloads, E6 Standard delivers the performance you need at the price you expect. Up to 2x the performance at the same price: unmatched value. For more on E6 Standard bare metal and flexible virtual machine instances, see the documentation.

■ References

・ News
 - AMD: Oracle Cloud Infrastructure Compute E6 shapes with 5th Gen AMD EPYC™ processors deliver outstanding performance and efficiency in the cloud
 - ASCII.jp: Oracle Cloud Infrastructure introduces E6 shapes with 5th Gen AMD EPYC processors
 - hpcwire.com: AMD Powers Oracle Cloud E6 Shapes with 5th Gen EPYC CPUs

・ Oracle Cloud Infrastructure documentation
 - Compute Shapes
 - Block Volume Performance
 - Shape Performance Details
 - Ultra High Performance Support for Multiple Volume Attachments
 - Configuring Attachments for Ultra High Performance Volumes
 - Sample FIO Commands for Block Volume Performance Tests on Linux-based Instances
 - Oracle Cloud Infrastructure (OCI) Networking
 - Virtual Cloud Network

・ Oracle Blog
 - Oracle launches OCI Compute E6 Standard Instances: 2X the Performance, Same Price
 - Oracle launches OCI Compute E6 Standard Instances: 2X the Performance, Same Price (Japanese edition)
 - Up to 2x higher performance with E5 instances on AMD EPYC processors
 - Shatter the Million IOPS Barrier in the Cloud with OCI Block Storage

・ Calculation methods
 - Storage technology fundamentals
 - Data size converter
 - IOPS, MB/s, GB/day Converter
 - Mebibytes per second (MiB/s), bandwidth
 - How to convert between MB/s and IOPS, with conversion tools (random read/write metrics)

・ My Oracle Support (MOS)
 - How to Calculate the Number of IOPS and Throughput of a Database (Doc ID 2206831.1)

・ Google Cloud
 - Benchmark persistent disk performance on a Linux VM
