
Mounting a DataCore vFilO Share with NFS 4.2 (pNFS)

Posted at 2019-12-05

Let's get right to it.

Preparing and checking vFilO

Checking the IP addresses

First, check things on the vFilO side.
Cluster floating IPs: [192.168.4.200/24, 10.10.107.200/21]
This is the line whose IP we mount with. The second address is the management address, so we don't use it.

admin@anvil3.datacore.jp> cluster-view
ID:                                        (削除)
Name:                                      CLUSTER3
State:                                     High Availability
IP:                                        10.10.107.200/21
Cluster floating IPs:                      [192.168.4.200/24, 10.10.107.200/21]
Portal floating IPs:                       [192.168.5.200/24]
Since:                                     2019-12-04 08:27:26 UTC
Timezone:                                  Asia/Tokyo
VVOL support:                              true
EULA accepted date:                        2019-12-04 08:29:26 UTC
Online activation support:                 true
License expiration date:                   2020-01-03 08:27:26 UTC
NAS volume capacity:                       [Total: 959.7GB, Used: 6.8GB, Free: 952.9GB]
Share space (quota):                       [Total: 1TB, Used: 0B, Free: 1TB]
Data directors:
                         [Object type: DATA_SPHERE, Node name: anvil4.datacore.jp, Role: SECONDARY, Oper state: UP, Admin state: UP]
                         [Object type: DATA_SPHERE, Node name: anvil3.datacore.jp, Role: PRIMARY, Oper state: UP, Admin state: UP]

admin@anvil3.datacore.jp>
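
Before mounting, it doesn't hurt to confirm the floating IP is reachable from the NFS client (a simple sanity check I'm adding here, not part of the original procedure):

# ping -c 2 192.168.4.200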

This configuration and the node list

As an aside, in this cluster the Anvil metadata is on NVMe and the DSX data is on raw SSDs attached via passthrough.
All network interfaces are SR-IOV: VFs of 25Gb NICs are presented directly, with no vSwitch in the path.
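
On the client side you can confirm which PCI device and driver actually back the interface; with an SR-IOV VF you should see the NIC's VF and the qede driver here (a quick check I'm adding, using the interface name ens192 from the client shown later):

# lspci | grep -i ethernet
# ethtool -i ens192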

admin@anvil3.datacore.jp> node-list
total 6
Name:                    anvil4.datacore.jp
Type:                    Product
Internal ID:             1073741829
ID:                      f0b912a2-4a39-5a6a-98e3-57fd9b371664
HW state:                OK
Node state:              MANAGED
Node mode:               ONLINE
Management IP:           10.10.107.204/21
SW version:              4.2.1-41

Name:                    anvil3.datacore.jp
Type:                    Product
Internal ID:             1073741832
ID:                      be9d4b3a-3db8-5231-abda-f551a9481425
HW state:                OK
Node state:              MANAGED
Node mode:               ONLINE
Management IP:           10.10.107.203/21
SW version:              4.2.1-41

Name:                    dsx1-1.datacore.jp
Type:                    Product
Internal ID:             1073741836
ID:                      eb6da562-672f-5a0e-a2aa-7793bfad5ce4
HW state:                OK
Node state:              MANAGED
Node mode:               ONLINE
Management IP:           10.10.107.211/21
SW version:              4.2.1-41

Name:                    dsx2-1.datacore.jp
Type:                    Product
Internal ID:             1073741840
ID:                      b460f7e9-19b5-5d2f-b9e7-765d60468438
HW state:                OK
Node state:              MANAGED
Node mode:               ONLINE
Management IP:           10.10.107.221/21
SW version:              4.2.1-41

Name:                    dsx1-2.datacore.jp
Type:                    Product
Internal ID:             1073741845
ID:                      04ef7678-a610-52ff-8fda-873c5cbfc4ab
HW state:                OK
Node state:              MANAGED
Node mode:               ONLINE
Management IP:           10.10.107.212/21
SW version:              4.2.1-41

Name:                    dsx2-2.datacore.jp
Type:                    Product
Internal ID:             1073741861
ID:                      035fead9-9f57-5be7-9de9-48db15f81990
HW state:                OK
Node state:              MANAGED
Node mode:               ONLINE
Management IP:           10.10.107.222/21
SW version:              4.2.1-41

Checking the share

Create one in the GUI however you like. This time I named it share1.

Verification command

admin@anvil3.datacore.jp> share-list --name share1
ID:                         9eb6084e-98d1-4995-a2c1-1d1385476f76
Name:                       share1
Internal ID:                2
State:                      PUBLISHED
Path:                       /share1
All applied objectives:
                         [ID: be321c5e-0edd-4e4d-8fff-2406daa87c52, Internal ID: 536870912, Name: keep-online]
                         [ID: 3b0e981f-6f21-4b5a-80fa-d7f94681bb9c, Internal ID: 536870914, Name: optimize-for-capacity]
                         [ID: d45d8170-44ae-4b99-8b85-e935d9f3fcc6, Internal ID: 536870915, Name: delegate-on-open]
                         [ID: bc945b58-e3d7-4cac-9f29-0387a6dde0bc, Internal ID: 536870916, Name: layout-get-on-open]
                         [ID: 5d30faed-b46a-4a0a-9db3-b2c3a3734be9, Internal ID: 536870918, Name: durability-1-nine]
                         [ID: 0b0e149d-3bf2-46dc-b2f6-5dc767f1ec38, Internal ID: 536870922, Name: availability-1-nine]
                         [ID: 0e685071-2730-4cda-96dc-705edb321bf7, Internal ID: 536870919, Name: durability-3-nines]
Active objectives:
                         [ID: 3b0e981f-6f21-4b5a-80fa-d7f94681bb9c, Internal ID: 536870914, Name: optimize-for-capacity]
                         [ID: d45d8170-44ae-4b99-8b85-e935d9f3fcc6, Internal ID: 536870915, Name: delegate-on-open]
Size:                       1TB
Warn when size crosses:     90%
Size limit state:           NORMAL
Export options:
                         [Subnet: *, Access permissions: RW, Root-squash: false]
Participant ID:             0
Replication participants:
                         ID:                                  00bba667-737a-47de-a715-dbe2072dc3b8
                         Participant share internal ID:       2
                         Participant site name:               CLUSTER3
                         Participant site management address: 10.10.107.200
                         Participant site data address:       192.168.4.200
                         Participant ID:                      0

admin@anvil3.datacore.jp>
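
As an aside, NFSv4 servers export a pseudo-root, so you should also be able to mount / from a client and browse the published shares (a sketch based on generic NFSv4 behavior, not something run in this article; whether vFilO populates the pseudo-root this way is an assumption):

# mount -t nfs -o v4.2 192.168.4.200:/ /mnt
# ls /mnt
# umount /mnt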

Preparing the NFS client

Install any Linux you like.
This time I used CentOS 7.

# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)

Update the driver
# yum install /tmp/kmod-qlgc-fastlinq-8.42.9.0-1.rhel7u7.x86_64.rpm
(略)
# nmcli
ens192: 接続済み to ens192
        "QLogic FastLinQ QL45000"
        ethernet (qede), 00:0C:29:13:59:0C, hw, ポート 000e1ed3db68, mtu 1500
        inet4 192.168.4.101/24
        route4 192.168.4.0/24
        inet6 fe80::97f3:ab5c:4725:2edc/64
        route6 fe80::/64
        route6 ff00::/8
#
# yum install nfs-utils
(略)
# mount -t nfs -o v4.2 192.168.4.200:/share1 /mnt
#
# mount
(略)
192.168.4.200:/share1 on /mnt type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.4.101,local_lock=none,addr=192.168.4.200)
#
# df -h
ファイルシス            サイズ  使用  残り 使用% マウント位置
192.168.4.200:/share1     932G     0  932G    0% /mnt
# lsmod | grep nfs_layout_flexfiles
nfs_layout_flexfiles    43542  1
nfsv4                 583218  2 nfs_layout_flexfiles
nfs                   261876  4 nfsv3,nfsv4,nfs_layout_flexfiles
sunrpc                354099  19 nfs,rpcsec_gss_krb5,auth_rpcgss,lockd,nfsv3,nfsv4,nfs_layout_flexfiles,nfs_acl
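
To confirm the flexfiles pNFS layout is actually being exercised, you can look at the negotiated mount options and at the NFSv4 layout operation counters (an extra verification step I'm adding; the counters only grow once files are actually read or written):

# nfsstat -m
# grep -E 'LAYOUTGET|GETDEVICEINFO' /proc/self/mountstats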

Let's have a look right away.

# ls /mnt
# ls /mnt/.snapshot/
current
# ls /mnt/.snapshot/current/
# ls /mnt/.collections/
all                  live           open                   silent-access
assimilation-failed  misaligned     permanent-data-loss    snapshot
backup               not-selected   replication-collision  undelete
do-not-move          not-selected2  scan                   volatile
durable              offline        selected
errored              online         selected2

There are no files yet, but over NFS you can see quite a lot.
Get creative with how you make use of it.

Now let's try writing. Since tee keeps all 65,536 output files open at once, the open-file limit (nofile) has to be raised first.

# vi /etc/security/limits.conf
* soft nofile 65536
* hard nofile 65536

# ulimit -n
65536
# cd /tmp
# echo {1..1000} | tee testfile{0..65535}

This creates roughly 65,000 files of about 4 KB each.
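
To sanity-check the result (an extra step I'm adding, not in the original run), count the files and check the size of one of them; each file holds the numbers 1 to 1000 separated by spaces, which comes to just under 4 KB:

# ls | grep -c '^testfile'
# stat -c %s testfile0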

To be continued next time.

References

http://akishin.hatenablog.jp/entry/20130213/1360711554
https://qiita.com/kainos/items/5d8c47e64b5b06a60d0e
https://access.redhat.com/documentation/ja-jp/red_hat_enterprise_linux/7/html/storage_administration_guide/nfs-pnfs
