Introduction
NetApp AFF storage systems use a feature called Advanced Disk Partitioning (ADP) v2 so that both Active-Active storage nodes (storage controllers) can deliver high performance.
From a performance standpoint it is better to keep the factory-default layout of one aggregate per storage node (two aggregates in total), but for operational simplicity you may want to run the capacity as a single aggregate.
This article describes the procedure for consolidating everything into one aggregate in that case.
Note that System Manager (GUI) does not support this operation, so it is performed from the CLI.
For the rules and restrictions of ADP (v1 and v2), refer to the KB articles linked at the end of this article.
What you can do
- Create a single aggregate from the Data1 and Data2 partitions in a NetApp AFF series ADPv2 environment
Partitions and disks can only be added to an existing aggregate while they are in the Spare state; if an aggregate has already been created on the partitions or disks you plan to use, it must be deleted beforehand (a quick spare check is sketched below).
Adding partitions or disks to an aggregate is an online operation and has no impact on the running system.
In ADPv2, the data1 and data2 partitions cannot be placed in the same RAID group, but they can be added to the same aggregate as separate RAID groups.
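As a pre-check of the Spare requirement above, the spare partitions owned by each node can be listed, for example, with the following command (a minimal sketch; <node_name> is a placeholder):
::> storage aggregate show-spare-disks -original-owner <node_name>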
Configuration steps
The work is carried out in the following steps:
- Check ownership information
- Change the privilege level to advanced
- Change ownership
- Simulate adding to the existing aggregate
- Add to the existing aggregate
Check ownership information
::> storage disk show -disk <disk_name> -partition-ownership
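To check every disk at once, a wildcard can be passed to -disk (this form also appears in the sample run later in this article):
::> storage disk show -disk * -partition-ownership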
Change the privilege level to advanced
::> set -privilege advanced
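When the work is finished, you can return to the admin privilege level (the sample run below uses the abbreviated form "set admin"):
::> set -privilege admin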
Change ownership
Because one of the nodes already owns each partition by default at factory shipment, the -force option is required in most cases.
Use the -data1 option to operate on the data1 partition and the -data2 option to operate on the data2 partition.
The data3 partition is the system (root) area, so its owner cannot be changed, and there is no need to change it.
::> storage disk assign -disk <disk_name> -owner <owner_name> -data1 true -force true
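For the data2 partition, the same command is used with -data2 in place of -data1, for example:
::> storage disk assign -disk <disk_name> -owner <owner_name> -data2 true -force true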
Simulate adding to the existing aggregate
::> aggregate add-disks -aggregate <aggregate_name> -diskcount <disk_count> -simulate true
Add to the existing aggregate
::> aggregate add-disks -aggregate <aggregate_name> -diskcount <disk_count>
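With -diskcount, ONTAP selects which spare partitions to use. If you want to name the disks explicitly, the command should also accept a -disklist parameter instead (a sketch; the disk names are placeholders and can be combined with -simulate first):
::> aggregate add-disks -aggregate <aggregate_name> -disklist <disk_name1>,<disk_name2> -simulate true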
Sample run
The following log is from a C250 ADPv2 environment with ten 15.3 TB SSDs; the owner of the Data1 partitions is changed to cluster-02 and the partitions are then added to the cluster_02_SSD_CAP_1 aggregate.
## Switch to the diagnostic privilege level
cluster::*> set diagnostic
Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y
## Check the current partition information (unlike the procedure above, this check uses a command available at diag privilege)
## Running the following command with admin privilege, as described in the procedure above, returns similar information
##storage disk show -disk * -partition-ownership
cluster::*> disk partition show
Usable Container Container
Partition Size Type Name Owner
------------------------- ------- ------------- ----------------- -----------------
1.0.0.P1 6.94TB spare Pool0 cluster-01
1.0.0.P2 6.94TB aggregate /cluster_02_SSD_CAP_1/plex0/rg0
cluster-02
1.0.0.P3 93.52GB aggregate /aggr0_cluster_02/plex0/rg0
cluster-02
1.0.1.P1 6.94TB spare Pool0 cluster-01
1.0.1.P2 6.94TB aggregate /cluster_02_SSD_CAP_1/plex0/rg0
cluster-02
1.0.1.P3 93.52GB aggregate /aggr0_cluster_02/plex0/rg0
cluster-02
1.0.2.P1 6.94TB spare Pool0 cluster-01
1.0.2.P2 6.94TB aggregate /cluster_02_SSD_CAP_1/plex0/rg0
cluster-02
1.0.2.P3 93.52GB aggregate /aggr0_cluster_02/plex0/rg0
cluster-02
1.0.3.P1 6.94TB spare Pool0 cluster-01
1.0.3.P2 6.94TB aggregate /cluster_02_SSD_CAP_1/plex0/rg0
cluster-02
1.0.3.P3 93.52GB aggregate /aggr0_cluster_02/plex0/rg0
cluster-02
1.0.4.P1 6.94TB spare Pool0 cluster-01
1.0.4.P2 6.94TB aggregate /cluster_02_SSD_CAP_1/plex0/rg0
cluster-02
1.0.4.P3 93.52GB spare Pool0 cluster-02
1.0.19.P1 6.94TB spare Pool0 cluster-01
1.0.19.P2 6.94TB aggregate /cluster_02_SSD_CAP_1/plex0/rg0
cluster-02
1.0.19.P3 93.52GB aggregate /aggr0_cluster_01/plex0/rg0
cluster-01
1.0.20.P1 6.94TB spare Pool0 cluster-01
1.0.20.P2 6.94TB aggregate /cluster_02_SSD_CAP_1/plex0/rg0
cluster-02
1.0.20.P3 93.52GB aggregate /aggr0_cluster_01/plex0/rg0
cluster-01
1.0.21.P1 6.94TB spare Pool0 cluster-01
1.0.21.P2 6.94TB aggregate /cluster_02_SSD_CAP_1/plex0/rg0
cluster-02
1.0.21.P3 93.52GB aggregate /aggr0_cluster_01/plex0/rg0
cluster-01
1.0.22.P1 6.94TB spare Pool0 cluster-01
1.0.22.P2 6.94TB aggregate /cluster_02_SSD_CAP_1/plex0/rg0
cluster-02
1.0.22.P3 93.52GB aggregate /aggr0_cluster_01/plex0/rg0
cluster-01
1.0.23.P1 6.94TB spare Pool0 cluster-01
1.0.23.P2 6.94TB spare Pool0 cluster-02
1.0.23.P3 93.52GB spare Pool0 cluster-01
30 entries were displayed.
## Change the owner of the data1 partitions (P1) of the 10 disks to cluster-02
cluster::*> storage disk assign -disk 1.0.0 -owner cluster-02 -data1 true -force true
cluster::*> storage disk assign -disk 1.0.1 -owner cluster-02 -data1 true -force true
cluster::*> storage disk assign -disk 1.0.2 -owner cluster-02 -data1 true -force true
cluster::*> storage disk assign -disk 1.0.3 -owner cluster-02 -data1 true -force true
cluster::*> storage disk assign -disk 1.0.4 -owner cluster-02 -data1 true -force true
cluster::*> storage disk assign -disk 1.0.19 -owner cluster-02 -data1 true -force true
cluster::*> storage disk assign -disk 1.0.20 -owner cluster-02 -data1 true -force true
cluster::*> storage disk assign -disk 1.0.21 -owner cluster-02 -data1 true -force true
cluster::*> storage disk assign -disk 1.0.22 -owner cluster-02 -data1 true -force true
cluster::*> storage disk assign -disk 1.0.23 -owner cluster-02 -data1 true -force true
## Check the partition information after the change
## The owner of the data1 partitions (P1) is now cluster-02
cluster::*> disk partition show
Usable Container Container
Partition Size Type Name Owner
------------------------- ------- ------------- ----------------- -----------------
1.0.0.P1 6.94TB spare Pool0 cluster-02
1.0.0.P2 6.94TB aggregate /cluster_02_SSD_CAP_1/plex0/rg0
cluster-02
1.0.0.P3 93.52GB aggregate /aggr0_cluster_02/plex0/rg0
cluster-02
1.0.1.P1 6.94TB spare Pool0 cluster-02
1.0.1.P2 6.94TB aggregate /cluster_02_SSD_CAP_1/plex0/rg0
cluster-02
1.0.1.P3 93.52GB aggregate /aggr0_cluster_02/plex0/rg0
cluster-02
1.0.2.P1 6.94TB spare Pool0 cluster-02
1.0.2.P2 6.94TB aggregate /cluster_02_SSD_CAP_1/plex0/rg0
cluster-02
1.0.2.P3 93.52GB aggregate /aggr0_cluster_02/plex0/rg0
cluster-02
1.0.3.P1 6.94TB spare Pool0 cluster-02
1.0.3.P2 6.94TB aggregate /cluster_02_SSD_CAP_1/plex0/rg0
cluster-02
1.0.3.P3 93.52GB aggregate /aggr0_cluster_02/plex0/rg0
cluster-02
1.0.4.P1 6.94TB spare Pool0 cluster-02
1.0.4.P2 6.94TB aggregate /cluster_02_SSD_CAP_1/plex0/rg0
cluster-02
1.0.4.P3 93.52GB spare Pool0 cluster-02
1.0.19.P1 6.94TB spare Pool0 cluster-02
1.0.19.P2 6.94TB aggregate /cluster_02_SSD_CAP_1/plex0/rg0
cluster-02
1.0.19.P3 93.52GB aggregate /aggr0_cluster_01/plex0/rg0
cluster-01
1.0.20.P1 6.94TB spare Pool0 cluster-02
1.0.20.P2 6.94TB aggregate /cluster_02_SSD_CAP_1/plex0/rg0
cluster-02
1.0.20.P3 93.52GB aggregate /aggr0_cluster_01/plex0/rg0
cluster-01
1.0.21.P1 6.94TB spare Pool0 cluster-02
1.0.21.P2 6.94TB aggregate /cluster_02_SSD_CAP_1/plex0/rg0
cluster-02
1.0.21.P3 93.52GB aggregate /aggr0_cluster_01/plex0/rg0
cluster-01
1.0.22.P1 6.94TB spare Pool0 cluster-02
1.0.22.P2 6.94TB aggregate /cluster_02_SSD_CAP_1/plex0/rg0
cluster-02
1.0.22.P3 93.52GB aggregate /aggr0_cluster_01/plex0/rg0
cluster-01
1.0.23.P1 6.94TB spare Pool0 cluster-02
1.0.23.P2 6.94TB spare Pool0 cluster-02
1.0.23.P3 93.52GB spare Pool0 cluster-01
30 entries were displayed.
## Return to the admin privilege level (not required)
cluster::*> set admin
## Check the aggregate that the partitions will be added to
cluster::> aggr show -aggregate cluster_02_SSD_CAP_1 -instance
Aggregate: cluster_02_SSD_CAP_1
Storage Type: ssd
Checksum Style: block
Number Of Disks: 9
Is Mirrored: false
Disks for First Plex: 1.0.0, 1.0.1, 1.0.2, 1.0.3,
1.0.19, 1.0.20, 1.0.21,
1.0.22, 1.0.4
Disks for Mirrored Plex: -
Partitions for First Plex: -
Partitions for Mirrored Plex: -
Node: cluster-02
Free Space Reallocation: off
HA Policy: sfo
Ignore Inconsistent: off
Space Reserved for Snapshot Copies: -
Aggregate Nearly Full Threshold Percent: 93%
Aggregate Full Threshold Percent: 96%
Checksum Verification: on
RAID Lost Write: on
Enable Thorough Scrub: off
Hybrid Enabled: false
Available Size: 45.32TB
Checksum Enabled: true
Checksum Status: active
Cluster: cluster
Home Cluster ID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
DR Home ID: -
DR Home Name: -
Inofile Version: 4
Has Mroot Volume: false
Has Partner Node Mroot Volume: false
Home ID: XXXXXXXXX
Home Name: cluster-02
Total Hybrid Cache Size: 0B
Hybrid: false
Inconsistent: false
Is Aggregate Home: true
Max RAID Size: 24
Flash Pool SSD Tier Maximum RAID Group Size: -
Owner ID: XXXXXXXXX
Owner Name: cluster-02
Used Percentage: 2%
Plexes: /cluster_02_SSD_CAP_1/
plex0 RAID Groups: /cluster_02_SSD_CAP_1/plex0/rg0 (block)
RAID Lost Write State: on
RAID Status: raid_dp, normal
RAID Type: raid_dp
SyncMirror Resync Snapshot Frequency in Minutes: 5
Is Root: false
Space Used by Metadata for Volume Efficiency: 0B
Size: 46.15TB
State: online
Maximum Write Alloc Blocks: 0
Used Size: 854.7GB
Uses Shared Disks: true
UUID String: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
Number Of Volumes: 10
Is Flash Pool Caching: -
Is Eligible for Auto Balance Aggregate: false
State of the aggregate being balanced: ineligible
Total Physical Used Size: 845.9GB
Physical Used Percentage: 2%
State Change Counter for Auto Balancer: 0
SnapLock Type: non-snaplock
Is NVE Capable: false
Is in the precommit phase of Copy-Free Transition: false
Is a 7-Mode transitioning aggregate that is not yet committed in clustered Data ONTAP and is currently out of space: false
Threshold When Aggregate Is Considered Unbalanced (%): 70
Threshold When Aggregate Is Considered Balanced (%): 40
Resynchronization Priority: -
Space Saved by Data Compaction: 499.2GB
Percentage Saved by Data Compaction: 37%
Amount of compacted data: 204.5GB
Timestamp of Aggregate Creation: 10/8/2024 01:36:02
Enable SIDL: off
Composite: false
Is FabricPool Mirrored: false
Capacity Tier Used Size: 0B
Space Saved by Storage Efficiency: 499.2GB
Percentage of Space Saved by Storage Efficiency: 37%
Amount of Shared bytes count by Storage Efficiency: 204.5GB
Inactive Data Reporting Enabled: false
Timestamp when Inactive Data Reporting was Enabled: -
Enable Aggregate level Encryption: false
Aggregate uses data protected SEDs: false
azcs read optimization: off
Metadata Reserve Space Required For Revert: 0B
## Simulate adding to the existing aggregate
cluster::> aggregate add-disks -aggregate cluster_02_SSD_CAP_1 -diskcount 9 -simulate true
Disks would be added to aggregate "cluster_02_SSD_CAP_1" on node "cluster-02" in the following manner:
First Plex
RAID Group rg1, 9 disks (block checksum, raid_dp)
Usable Physical
Position Disk Type Size Size
---------- ------------------------- ---------- -------- --------
shared 1.0.0 SSD-CAP - 6.94TB
shared 1.0.1 SSD-CAP - 6.94TB
shared 1.0.2 SSD-CAP 6.94TB 6.94TB
shared 1.0.3 SSD-CAP 6.94TB 6.94TB
shared 1.0.4 SSD-CAP 6.94TB 6.94TB
shared 1.0.19 SSD-CAP 6.94TB 6.94TB
shared 1.0.20 SSD-CAP 6.94TB 6.94TB
shared 1.0.21 SSD-CAP 6.94TB 6.94TB
shared 1.0.22 SSD-CAP 6.94TB 6.94TB
Aggregate capacity available for volume use would be increased by 46.15TB.
## Add to the existing aggregate
cluster::> aggregate add-disks -aggregate cluster_02_SSD_CAP_1 -diskcount 9
Info: Disks would be added to aggregate "cluster_02_SSD_CAP_1" on node "cluster-02" in the following manner:
First Plex
RAID Group rg1, 9 disks (block checksum, raid_dp)
Usable Physical
Position Disk Type Size Size
---------- ------------------------- ---------- -------- --------
shared 1.0.0 SSD-CAP - 6.94TB
shared 1.0.1 SSD-CAP - 6.94TB
shared 1.0.2 SSD-CAP 6.94TB 6.94TB
shared 1.0.3 SSD-CAP 6.94TB 6.94TB
shared 1.0.4 SSD-CAP 6.94TB 6.94TB
shared 1.0.19 SSD-CAP 6.94TB 6.94TB
shared 1.0.20 SSD-CAP 6.94TB 6.94TB
shared 1.0.21 SSD-CAP 6.94TB 6.94TB
shared 1.0.22 SSD-CAP 6.94TB 6.94TB
Aggregate capacity available for volume use would be increased by 46.15TB.
Do you want to continue? {y|n}: y
cluster::>
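Although not shown in the log above, the new rg1 RAID group and the added data1 partitions can be verified afterwards, for example, with the following command (a sketch):
cluster::> storage aggregate show-status -aggregate cluster_02_SSD_CAP_1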
If this article was helpful, please give it a like! It encourages me to write more articles like this one.
References and links
Manually assign ownership of partitioned disks in ONTAP
Note: Option 2 in the link above (Manually assign disks with root-data-data (RD2) partitioning) is the applicable procedure.
How to change ownership of disk or partition in ADP and AFF platforms
Add capacity to a local tier in ONTAP