
Configuring MD RAID (RAID1) on RHEL 8.6 ppc64le

Posted at 2023-08-14

Introduction

This is a log of configuring MD RAID for disk redundancy on RHEL on Power.

(Diagram: md-raid.png)


References


Environment

RHEL 8.6 (ppc64le, on Power S1022)

  • Deployed with PowerVC, with an additional disk attached for mirroring
  • Server with the hostname testvmm

Execution log

1 Recognize the additional disk

Deploy a RHEL VM from PowerVC, then create and attach a volume the same size as the OS disk.

1-1) Scan for devices

[root@testvmm ~]# ls /sys/class/scsi_host/ | while read host ; do echo "- - -" > /sys/class/scsi_host/$host/scan ; done
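If you want a quick look at the raw paths before multipath groups them, the SCSI devices can be listed with util-linux (a sketch; nothing article-specific is assumed):

lsblk -S    # the newly attached paths should appear as additional sdX devices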

Verify the connection

1-2) Confirm that mpathb is recognized

[root@testvmm ~]# ls -l /dev/mapper
total 0
crw-------. 1 root root 10, 236 Jul 23 19:41 control
lrwxrwxrwx. 1 root root       7 Jul 23 19:41 mpatha -> ../dm-0
lrwxrwxrwx. 1 root root       7 Jul 23 19:41 mpatha1 -> ../dm-1
lrwxrwxrwx. 1 root root       7 Jul 23 19:41 mpatha2 -> ../dm-2
lrwxrwxrwx. 1 root root       7 Jul 23 19:41 mpatha3 -> ../dm-3
lrwxrwxrwx. 1 root root       7 Jul 23 20:03 mpathb -> ../dm-6
lrwxrwxrwx. 1 root root       7 Jul 23 19:41 rhel-root -> ../dm-5
lrwxrwxrwx. 1 root root       7 Jul 23 19:41 rhel-swap -> ../dm-4

[root@testvmm ~]#  multipath -ll
mpatha (360050763808106d7d800000000000128) dm-0 AIX,VDASD
size=50G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 0:0:1:0 sda 8:0  active ready running
  `- 1:0:1:0 sdb 8:16 active ready running
mpathb (360050763808106d7d800000000000129) dm-6 AIX,VDASD
size=50G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 0:0:3:0 sdc 8:32 active ready running
  `- 1:0:2:0 sdd 8:48 active ready running

2 Run the procedure, substituting /dev/mapper/mpatha for /dev/sda

2-1) Check the partition information

[root@testvmm ~]# parted /dev/mapper/mpatha u s p
Model: Linux device-mapper (multipath) (dm)
Disk /dev/mapper/mpatha: 104857600s
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start     End         Size        Type     File system  Flags
 1      2048s     10239s      8192s       primary               boot, prep
 2      10240s    2107391s    2097152s    primary  xfs
 3      2107392s  104857599s  102750208s  primary               lvm

Partition 1 is the PReP (boot) partition; it exists because this is RHEL on IBM Power Systems.

2-2) Create partitions on the additional disk

[root@testvmm ~]# parted /dev/mapper/mpathb mklabel msdos
Information: You may need to update /etc/fstab.

[root@testvmm ~]# parted /dev/mapper/mpathb mkpart primary 2048s 10239s
Information: You may need to update /etc/fstab.

[root@testvmm ~]# parted /dev/mapper/mpathb mkpart primary 10240s 2107391s
Information: You may need to update /etc/fstab.

[root@testvmm ~]# parted /dev/mapper/mpathb mkpart primary 2107392s 104857599s
Information: You may need to update /etc/fstab.

2-3) Set the RAID flag on the partitions (run against both mpatha and mpathb)

[root@testvmm ~]# parted /dev/mapper/mpatha set 1 raid on
Information: You may need to update /etc/fstab.

[root@testvmm ~]# parted /dev/mapper/mpatha set 2 raid on
Information: You may need to update /etc/fstab.

[root@testvmm ~]# parted /dev/mapper/mpatha set 3 raid on
Information: You may need to update /etc/fstab.

[root@testvmm ~]# parted /dev/mapper/mpathb set 1 raid on
Information: You may need to update /etc/fstab.

[root@testvmm ~]# parted /dev/mapper/mpathb set 2 raid on
Information: You may need to update /etc/fstab.

[root@testvmm ~]# parted /dev/mapper/mpathb set 3 raid on
Information: You may need to update /etc/fstab.

2-4) Create /dev/md0

[root@testvmm ~]# mdadm  --create /dev/md0 --level=1 --raid-disks=2 missing /dev/mapper/mpathb2 --metadata=1.0
mdadm: array /dev/md0 started.
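The "missing" keyword starts the mirror in degraded mode, leaving an empty slot for the original disk (mpatha2) to join later, and --metadata=1.0 places the RAID superblock at the end of the device so the filesystem still begins at the usual offset. The degraded state can be confirmed right away (a sketch, using only mdadm and the kernel's status file):

cat /proc/mdstat                                    # md0 should show [2/1] with one missing member
mdadm --detail /dev/md0 | grep -E 'State|Devices'   # expect "clean, degraded" and 1 active device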

2-5) Create a filesystem on /dev/md0

[root@testvmm ~]# mkfs.xfs /dev/md0
meta-data=/dev/md0               isize=512    agcount=4, agsize=65532 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=0 inobtcount=0
data     =                       bsize=4096   blocks=262128, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=1566, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@testvmm ~]#

2-6) Sync the /boot data to the new array

[root@testvmm ~]# mkdir /mnt/md0
[root@testvmm ~]# mount /dev/md0 /mnt/md0
[root@testvmm ~]# rsync -a /boot/ /mnt/md0/
[root@testvmm ~]# sync
[root@testvmm ~]# umount /mnt/md0
[root@testvmm ~]# rmdir /mnt/md0
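To double-check the copy before switching /boot over, a dry-run rsync comparison can be slipped in before the umount above (a sketch; with --itemize-changes, no output means the trees already match):

mount /dev/md0 /mnt/md0
rsync -a --dry-run --itemize-changes /boot/ /mnt/md0/
umount /mnt/md0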

2-7) Mount /dev/md0 at /boot

[root@testvmm ~]# umount /boot
[root@testvmm ~]# mount /dev/md0 /boot

2-8) Add mpatha2 to md0

[root@testvmm ~]# mdadm /dev/md0 -a /dev/mapper/mpatha2
mdadm: added /dev/mapper/mpatha2

2-9) Check the RAID status

[root@testvmm ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.0
     Creation Time : Tue Jul 25 00:20:29 2023
        Raid Level : raid1
        Array Size : 1048512 (1023.94 MiB 1073.68 MB)
     Used Dev Size : 1048512 (1023.94 MiB 1073.68 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Tue Jul 25 00:21:23 2023
             State : clean, degraded, recovering
    Active Devices : 1
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 1

Consistency Policy : resync

    Rebuild Status : 95% complete     ##<= 95% complete is shown

              Name : testvmm:0  (local to host testvmm)
              UUID : 5dfd2d76:9a030e84:1fe2e429:a5348eec
            Events : 29

    Number   Major   Minor   RaidDevice State
       2     253        2        0      spare rebuilding   /dev/dm-2
       1     253        8        1      active sync   /dev/dm-8
[root@testvmm ~]#
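While the resync is running, the progress can also be followed without re-running mdadm -D (watch comes from the procps package, assumed to be installed):

watch -n 5 cat /proc/mdstat    # shows the recovery percentage and an estimated finish time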

After waiting a few minutes, the Rebuild Status reached complete.

2-10) Edit /etc/fstab

Check the UUID of /dev/md0

[root@testvmm ~]# blkid | grep md0
/dev/md0: UUID="41435ad6-deae-470a-a6dd-5597e6260a7f" BLOCK_SIZE="512" TYPE="xfs"

UUID=4693aeba-49a9-4864-8edf-44dcacf5a3ad /boot                   xfs     defaults        0 0
UUID=41435ad6-deae-470a-a6dd-5597e6260a7f /boot                   xfs     defaults        0 0

[root@testvmm ~]# vi /etc/fstab

-> Change the UUID on the /boot line


[root@testvmm ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Wed May  3 00:39:24 2023
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/rhel-root   /                       xfs     defaults        0 0
#UUID=9fc56ab1-d18f-4311-9c08-a1c1c17417f8 /boot                   xfs     defaults        0 0
/dev/mapper/rhel-swap   none                    swap    defaults        0 0
UUID=41435ad6-deae-470a-a6dd-5597e6260a7f /boot                   xfs     defaults        0 0
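Before rebooting, it is worth confirming that the new entry actually mounts (a sketch using standard util-linux/systemd commands; the daemon-reload follows the hint in the fstab header):

systemctl daemon-reload        # regenerate mount units from the edited fstab
umount /boot && mount /boot    # remount /boot via the new UUID entry
findmnt /boot                  # the source should now be /dev/md0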

2-11) Create /dev/md1

[root@testvmm ~]# mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/mapper/mpathb3 --metadata=1.0
mdadm: array /dev/md1 started.

Verify

[root@testvmm ~]# vgs
  VG   #PV #LV #SN Attr   VSize  VFree
  rhel   1   2   0 wz--n- 48.99g    0

VG "rhel" の PV は 1 です。

2-12) Add /dev/md1 to the VG "rhel"

[root@testvmm ~]# vgextend rhel /dev/md1
  Physical volume "/dev/md1" successfully created.
  Volume group "rhel" successfully extended

Verify

[root@testvmm ~]# vgs
  VG   #PV #LV #SN Attr   VSize  VFree
  rhel   2   2   0 wz--n- 97.98g 48.99g

VG "rhel" の PV が2 になりました。

2-13) Move the physical extents from mpatha3 to the new array

[root@testvmm ~]# pvmove /dev/mapper/mpatha3 /dev/md1
  /dev/mapper/mpatha3: Moved: 0.01%
  /dev/mapper/mpatha3: Moved: 8.25%
  /dev/mapper/mpatha3: Moved: 22.94%
  /dev/mapper/mpatha3: Moved: 36.53%
  /dev/mapper/mpatha3: Moved: 47.19%
  /dev/mapper/mpatha3: Moved: 58.71%
  /dev/mapper/mpatha3: Moved: 76.09%
  /dev/mapper/mpatha3: Moved: 92.44%
  /dev/mapper/mpatha3: Moved: 100.00%
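Once pvmove finishes, every extent should have left mpatha3, so its free space should equal its size (a quick check with standard LVM tooling):

pvs -o pv_name,vg_name,pv_size,pv_free    # mpatha3 should show PFree equal to PSize; /dev/md1 holds the data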

2-14) Remove mpatha3 from the VG "rhel" and wipe its PV label

[root@testvmm ~]# vgreduce rhel /dev/mapper/mpatha3
  Removed "/dev/mapper/mpatha3" from volume group "rhel"

[root@testvmm ~]# pvremove /dev/mapper/mpatha3
  Labels on physical volume "/dev/mapper/mpatha3" successfully wiped.
[root@testvmm ~]#

2-15) Add mpatha3 to /dev/md1

[root@testvmm ~]# mdadm /dev/md1 -a /dev/mapper/mpatha3
mdadm: added /dev/mapper/mpatha3

2-16) Check the RAID status

[root@testvmm ~]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.0
     Creation Time : Tue Jul 25 00:22:36 2023
        Raid Level : raid1
        Array Size : 51374976 (48.99 GiB 52.61 GB)
     Used Dev Size : 51374976 (48.99 GiB 52.61 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Tue Jul 25 00:25:38 2023
             State : clean, degraded, recovering
    Active Devices : 1
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 1

Consistency Policy : resync

    Rebuild Status : 2% complete   ##<= rebuild in progress

              Name : testvmm:1  (local to host testvmm)
              UUID : 6660f874:01c32e81:783d5695:ef10722f
            Events : 22

    Number   Major   Minor   RaidDevice State
       2     253        3        0      spare rebuilding   /dev/dm-3
       1     253        9        1      active sync   /dev/dm-9
[root@testvmm ~]#

A few minutes later, the rebuild completed.

2-17) Add array definitions to /etc/mdadm.conf

[root@testvmm ~]#  mdadm --examine --scan > /etc/mdadm.conf
[root@testvmm ~]# cat /etc/mdadm.conf
ARRAY /dev/md/0  metadata=1.0 UUID=5dfd2d76:9a030e84:1fe2e429:a5348eec name=testvmm:0
ARRAY /dev/md/1  metadata=1.0 UUID=6660f874:01c32e81:783d5695:ef10722f name=testvmm:1
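As a cross-check, the view of the currently running arrays can be compared with what --examine wrote from the on-disk superblocks; the UUIDs should match:

mdadm --detail --scan    # should report the same two ARRAY lines and UUIDs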

2-18) Update /etc/default/grub

[root@testvmm ~]# vi /etc/default/grub

[root@testvmm ~]#  cat /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="ofconsole"
GRUB_CMDLINE_LINUX="rd.auto=1 rd.lvm.lv=rhel/root crashkernel=auto  rd.lvm.lv=rhel/swap"  ##<= modified line
GRUB_DISABLE_RECOVERY="true"
GRUB_ENABLE_BLSCFG=true
GRUB_TERMINFO="terminfo -g 80x24 console"
GRUB_DISABLE_OS_PROBER=true
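rd.auto=1 tells dracut to auto-assemble MD RAID (and other detectable storage) inside the initramfs. After the reboot in 2-25), the active kernel command line can be checked with:

cat /proc/cmdline    # should contain rd.auto=1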

2-19) Regenerate grub.cfg

[root@testvmm ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Generating boot entries from BLS files...
done
[root@testvmm ~]#

2-20) Update /boot/grub2/device.map

[root@testvmm ~]# cat /boot/grub2/device.map
# this device map was generated by anaconda
(hd0)      /dev/mapper/mpatha
[root@testvmm ~]# vi /boot/grub2/device.map

[root@testvmm ~]# cat /boot/grub2/device.map
# this device map was generated by anaconda
(hd0)      /dev/mapper/mpatha
(hd1)      /dev/mapper/mpathb   ##<= added

2-21) Reinstall GRUB

[root@testvmm ~]# grub2-install /dev/mapper/mpatha
Installing for powerpc-ieee1275 platform.
grub2-install: error: the chosen partition is not a PReP partition.
[root@testvmm ~]#  grub2-install /dev/mapper/mpathb
Installing for powerpc-ieee1275 platform.
grub2-install: error: the chosen partition is not a PReP partition.
[root@testvmm ~]#

The error about the PReP partition is a known issue.


2-22) Fix the PReP partitions

[root@testvmm ~]# dd if=/dev/mapper/mpatha1 of=/dev/mapper/mpathb1
8192+0 records in
8192+0 records out
4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0284917 s, 147 MB/s
[root@testvmm ~]#
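A simple way to confirm that the PReP copy is byte-identical (cmp is part of diffutils):

cmp /dev/mapper/mpatha1 /dev/mapper/mpathb1 && echo "PReP partitions are identical"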


[root@testvmm ~]# parted /dev/mapper/mpatha set 1 prep on
Information: You may need to update /etc/fstab.

[root@testvmm ~]# parted /dev/mapper/mpatha set 1 boot on
Information: You may need to update /etc/fstab.

[root@testvmm ~]# parted /dev/mapper/mpathb set 1 prep on
Information: You may need to update /etc/fstab.

[root@testvmm ~]# parted /dev/mapper/mpathb set 1 boot on
Information: You may need to update /etc/fstab.
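The flags can be verified on both disks the same way the partition table was inspected in 2-1):

parted /dev/mapper/mpathb u s p    # partition 1 should now carry the boot and prep flags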

2-23) Check the md devices

[root@testvmm ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.0
     Creation Time : Tue Jul 25 00:20:29 2023
        Raid Level : raid1
        Array Size : 1048512 (1023.94 MiB 1073.68 MB)
     Used Dev Size : 1048512 (1023.94 MiB 1073.68 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Tue Jul 25 00:30:50 2023
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : testvmm:0  (local to host testvmm)
              UUID : 5dfd2d76:9a030e84:1fe2e429:a5348eec
            Events : 31

    Number   Major   Minor   RaidDevice State
       2     253        2        0      active sync   /dev/dm-2
       1     253        8        1      active sync   /dev/dm-8
[root@testvmm ~]#

-> Confirmed that /dev/dm-2 and /dev/dm-8 are the members of md0

[root@testvmm ~]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.0
     Creation Time : Tue Jul 25 00:22:36 2023
        Raid Level : raid1
        Array Size : 51374976 (48.99 GiB 52.61 GB)
     Used Dev Size : 51374976 (48.99 GiB 52.61 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Tue Jul 25 00:30:51 2023
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : testvmm:1  (local to host testvmm)
              UUID : 6660f874:01c32e81:783d5695:ef10722f
            Events : 71

    Number   Major   Minor   RaidDevice State
       2     253        3        0      active sync   /dev/dm-3
       1     253        9        1      active sync   /dev/dm-9
[root@testvmm ~]#

-> Confirmed that /dev/dm-3 and /dev/dm-9 are the members of md1


2-24) Rebuild the initramfs image with --mdadmconf

Back up the initramfs

[root@testvmm ~]# cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.$(date +%m-%d-%H%M%S).bak

[root@testvmm ~]# dracut -f --mdadmconf
[root@testvmm ~]# echo $?
0
[root@testvmm ~]# date
Tue Jul 25 00:32:01 EDT 2023

[root@testvmm ~]# ls -ltr /boot
total 383072
-rw-------. 1 root root   3620730 Apr 15  2022 System.map-4.18.0-372.9.1.el8.ppc64le
-rw-r--r--. 1 root root    150984 Apr 15  2022 config-4.18.0-372.9.1.el8.ppc64le
-rwxr-xr-x. 1 root root  35634133 Apr 15  2022 vmlinuz-4.18.0-372.9.1.el8.ppc64le
drwxr-xr-x. 3 root root        17 May  2 20:40 efi
drwxr-xr-x. 3 root root        21 May  2 20:43 loader
lrwxrwxrwx. 1 root root        50 May  2 20:44 symvers-4.18.0-372.9.1.el8.ppc64le.gz -> /lib/modules/4.18.0-372.9.1.el8.ppc64le/symvers.gz
-rwxr-xr-x. 1 root root  35634133 May  2 20:44 vmlinuz-0-rescue-7d4848a6a4584ea7adae49b33f3b7450
-rw-------. 1 root root 107181634 May  2 20:45 initramfs-0-rescue-7d4848a6a4584ea7adae49b33f3b7450.img
-rw-------. 1 root root  34939392 May  2 20:49 initramfs-4.18.0-372.9.1.el8.ppc64lekdump.img
drwx------. 4 root root       104 Jul 25 00:29 grub2
-rw-------. 1 root root  58110011 Jul 25 00:30 initramfs-4.18.0-372.9.1.el8.ppc64le.img.07-25-003018.bak
-rw-------. 1 root root  58777371 Jul 25 00:31 initramfs-4.18.0-372.9.1.el8.ppc64le.img
[root@testvmm ~]#

=> The initramfs was rebuilt successfully.
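To confirm that the mdadm configuration actually made it into the new image, the initramfs contents can be listed with lsinitrd, which ships with dracut (a quick check):

lsinitrd /boot/initramfs-$(uname -r).img | grep mdadm    # expect etc/mdadm.conf and the mdadm binary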

2-25) Stop the VM and start the LPAR

[root@testvmm ~]# shutdown -h now

Watched the RHEL boot sequence from the HMC virtual terminal. (Log omitted)

Red Hat Enterprise Linux 8.6 (Ootpa)
Kernel 4.18.0-372.9.1.el8.ppc64le on an ppc64le

Activate the web console with: systemctl enable --now cockpit.socket

testvmm login:

Checks after logging in:

[root@testvmm ~]# multipath -ll
mpatha (360050763808106d7d800000000000128) dm-0 AIX,VDASD
size=50G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 0:0:1:0 sda 8:0  active ready running
  `- 1:0:1:0 sdc 8:32 active ready running
mpathb (360050763808106d7d800000000000129) dm-1 AIX,VDASD
size=50G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 0:0:3:0 sdb 8:16 active ready running
  `- 1:0:2:0 sdd 8:48 active ready running
[root@testvmm ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.0
     Creation Time : Tue Jul 25 00:20:29 2023
        Raid Level : raid1
        Array Size : 1048512 (1023.94 MiB 1073.68 MB)
     Used Dev Size : 1048512 (1023.94 MiB 1073.68 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Tue Jul 25 00:34:22 2023
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : testvmm:0  (local to host testvmm)
              UUID : 5dfd2d76:9a030e84:1fe2e429:a5348eec
            Events : 31

    Number   Major   Minor   RaidDevice State
       2     253        3        0      active sync   /dev/dm-3
       1     253        6        1      active sync   /dev/dm-6

-> The /dev/dm-X numbers of the md0 members have changed.

[root@testvmm ~]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.0
     Creation Time : Tue Jul 25 00:22:36 2023
        Raid Level : raid1
        Array Size : 51374976 (48.99 GiB 52.61 GB)
     Used Dev Size : 51374976 (48.99 GiB 52.61 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Tue Jul 25 00:35:23 2023
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : testvmm:1  (local to host testvmm)
              UUID : 6660f874:01c32e81:783d5695:ef10722f
            Events : 73

    Number   Major   Minor   RaidDevice State
       2     253        4        0      active sync   /dev/dm-4
       1     253        7        1      active sync   /dev/dm-7

-> The /dev/dm-X numbers of the md1 members have changed.

[root@testvmm ~]# ls -l /dev/mapper
total 0
crw-------. 1 root root 10, 236 Jul 25 00:34 control
lrwxrwxrwx. 1 root root       7 Jul 25 00:34 mpatha -> ../dm-0
lrwxrwxrwx. 1 root root       7 Jul 25 00:34 mpatha1 -> ../dm-2
lrwxrwxrwx. 1 root root       7 Jul 25 00:34 mpatha2 -> ../dm-3
lrwxrwxrwx. 1 root root       7 Jul 25 00:34 mpatha3 -> ../dm-4
lrwxrwxrwx. 1 root root       7 Jul 25 00:34 mpathb -> ../dm-1
lrwxrwxrwx. 1 root root       7 Jul 25 00:34 mpathb1 -> ../dm-5
lrwxrwxrwx. 1 root root       7 Jul 25 00:34 mpathb2 -> ../dm-6
lrwxrwxrwx. 1 root root       7 Jul 25 00:34 mpathb3 -> ../dm-7
lrwxrwxrwx. 1 root root       7 Jul 25 00:34 rhel-root -> ../dm-8
lrwxrwxrwx. 1 root root       7 Jul 25 00:34 rhel-swap -> ../dm-9

The recognition order has changed (the LV devices rhel-root and rhel-swap now come last).
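Because /etc/fstab and /etc/mdadm.conf reference UUIDs and LV names rather than dm-N numbers, the renumbering is harmless. The whole stack can be viewed at a glance with:

lsblk -o NAME,TYPE,SIZE,MOUNTPOINT    # shows md0/md1 sitting on top of the mpath partitions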


Conclusion

With this, the setup should be complete.

Detailed explanations are omitted; please refer to the RHEL knowledgebase articles linked at the beginning.

The bootlist setting was checked as well.
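As a sketch, on a Power LPAR the boot device list can be read (and, if needed, set) with the bootlist command from powerpc-utils, assuming that package is installed:

bootlist -m normal -o    # show the current normal-mode boot list as logical device names
bootlist -m normal -r    # the same list as Open Firmware device paths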

That's all.
