
【January 2024 Edition】Building RAID 6 on Ubuntu - Failure Recovery Notes 【Linux Notes for Middle-Aged Engineers】

Posted at 2024-01-25

Introduction

I ended up having to build a system with RAID 6, so here are my notes from the test build.

Environment

Built as a test on VMware Fusion on macOS.
(With Parallels, I was not able to add six disks.)
Six NVMe disks are used, seven including the OS disk.
Ubuntu 22.04; the procedure is the same for the ARM and AMD64 builds.

The RAID 6 array is built with mdadm.
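
mdadm ships with Ubuntu Server by default, but a minimal install may lack it; a small sketch for checking and installing it (the package name mdadm is standard, the rest is just my habit):

# Install mdadm if it is not already present
command -v mdadm >/dev/null || { sudo apt-get update && sudo apt-get install -y mdadm; }
mdadm --version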

$ uname -a
Linux **** 5.15.0-92-generic #102-Ubuntu SMP Wed Jan 10 09:37:39 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
$ cat /etc/os-release 
PRETTY_NAME="Ubuntu 22.04.3 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.3 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
$ cat /etc/lsb-release 
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=22.04
DISTRIB_CODENAME=jammy
DISTRIB_DESCRIPTION="Ubuntu 22.04.3 LTS"

Procedure

Creation

  • Have the OS recognize four or more disks in addition to the OS disk; here, four active and two standby disks are used, six in total
  • Create the RAID 6 array
  • Add hot spares to the RAID array
  • Dump the array configuration and update the mdadm configuration file
  • Update the initramfs so the RAID array is available at boot
  • Create the physical volume
  • Create the volume group
  • Create the logical volume
  • Create the filesystem
  • Create the mount point
  • Mount

Failure recovery

  • Simulate a disk failure and confirm that the array rebuilds automatically
  • Remove the failed disk from the RAID array
  • Add a new standby disk to the RAID array

Removal

  • Unmount
  • Remove the logical volume
  • Remove the volume group
  • Remove the physical volume
  • Stop the RAID array
  • Remove the automatic mount entry
  • Update the mdadm configuration file
  • Update the initramfs

Walkthrough

Preparation

  • Install Ubuntu 22.04 Server on VMware Fusion in the usual way
  • From the VMware Fusion settings, add six hard disks (NVMe)

(Screenshot: VMware Fusion disk settings with the six NVMe disks added)

Check the devices

The six NVMe disks have been attached as block devices.

$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
  :
  :
nvme0n2     259:4    0     2G  0 disk 
nvme0n3     259:5    0     2G  0 disk 
nvme0n4     259:6    0     2G  0 disk 
nvme0n5     259:7    0     2G  0 disk 
nvme0n6     259:8    0     2G  0 disk 
nvme0n7     259:9    0     2G  0 disk 
$ cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
unused devices: <none>

Create the RAID 6 array

Create the RAID array /dev/md0, specifying the disks that will be active members.

sudo mdadm --create --verbose /dev/md0 --level=6 --raid-devices=4 /dev/nvme0n2 /dev/nvme0n3 /dev/nvme0n4 /dev/nvme0n5
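
As an aside (not what this walkthrough does), mdadm can also register the spares at creation time with --spare-devices, so the array and its hot spares come up in one step; a sketch assuming the same six devices:

# Hypothetical one-step variant: 4 active members plus 2 hot spares
sudo mdadm --create --verbose /dev/md0 --level=6 \
    --raid-devices=4 --spare-devices=2 \
    /dev/nvme0n2 /dev/nvme0n3 /dev/nvme0n4 /dev/nvme0n5 /dev/nvme0n6 /dev/nvme0n7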

Creation in progress.

$ cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid6 nvme0n5[3] nvme0n4[2] nvme0n3[1] nvme0n2[0]
      4188160 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
      [======>..............]  resync = 32.9% (689340/2094080) finish=0.1min speed=172335K/sec
      
unused devices: <none>
$ sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Fri Jan 26 06:14:13 2024
        Raid Level : raid6
        Array Size : 4188160 (3.99 GiB 4.29 GB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Fri Jan 26 06:14:23 2024
             State : clean 
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : ****:0  (local to host ****)
              UUID : ********:********:********:********
            Events : 17

    Number   Major   Minor   RaidDevice State
       0     259        4        0      active sync   /dev/nvme0n2
       1     259        5        1      active sync   /dev/nvme0n3
       2     259        6        2      active sync   /dev/nvme0n4
       3     259        7        3      active sync   /dev/nvme0n5

Creation complete; the RAID array /dev/md0 now exists.

$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid6 nvme0n5[3] nvme0n4[2] nvme0n3[1] nvme0n2[0]
      4188160 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
      
unused devices: <none>
$ sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Fri Jan 26 06:14:13 2024
        Raid Level : raid6
        Array Size : 4188160 (3.99 GiB 4.29 GB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Fri Jan 26 06:14:23 2024
             State : clean 
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : ****:0  (local to host ****)
              UUID : ********:********:********:********
            Events : 17

    Number   Major   Minor   RaidDevice State
       0     259        4        0      active sync   /dev/nvme0n2
       1     259        5        1      active sync   /dev/nvme0n3
       2     259        6        2      active sync   /dev/nvme0n4
       3     259        7        3      active sync   /dev/nvme0n5

Check the logs

$ sudo dmesg --ctime
[Fri Jan 26 08:18:20 2024] md/raid:md0: not clean -- starting background reconstruction
[Fri Jan 26 08:18:20 2024] md/raid:md0: device nvme0n5 operational as raid disk 3
[Fri Jan 26 08:18:20 2024] md/raid:md0: device nvme0n4 operational as raid disk 2
[Fri Jan 26 08:18:20 2024] md/raid:md0: device nvme0n3 operational as raid disk 1
[Fri Jan 26 08:18:20 2024] md/raid:md0: device nvme0n2 operational as raid disk 0
[Fri Jan 26 08:18:20 2024] md/raid:md0: raid level 6 active with 4 out of 4 devices, algorithm 2
[Fri Jan 26 08:18:20 2024] md0: detected capacity change from 0 to 8376320
[Fri Jan 26 08:18:20 2024] md: resync of RAID array md0
[Fri Jan 26 08:18:31 2024] md: md0: resync done.
$ sudo journalctl
Jan 26 08:18:21 ha06 kernel: md/raid:md0: not clean -- starting background reconstruction
Jan 26 08:18:21 ha06 kernel: md/raid:md0: device nvme0n5 operational as raid disk 3
Jan 26 08:18:21 ha06 kernel: md/raid:md0: device nvme0n4 operational as raid disk 2
Jan 26 08:18:21 ha06 kernel: md/raid:md0: device nvme0n3 operational as raid disk 1
Jan 26 08:18:21 ha06 kernel: md/raid:md0: device nvme0n2 operational as raid disk 0
Jan 26 08:18:21 ha06 kernel: md/raid:md0: raid level 6 active with 4 out of 4 devices, algorithm 2
Jan 26 08:18:21 ha06 kernel: md0: detected capacity change from 0 to 8376320
Jan 26 08:18:21 ha06 kernel: md: resync of RAID array md0
Jan 26 08:18:21 ha06 systemd[1]: Started MD array monitor.
Jan 26 08:18:32 ha06 kernel: md: md0: resync done.

Add hot spares to the RAID array

Specify the disks that will act as standby (hot spare) disks.

$ sudo mdadm --manage /dev/md0 --add /dev/nvme0n6 /dev/nvme0n7
mdadm: added /dev/nvme0n6
mdadm: added /dev/nvme0n7
$ cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid6 nvme0n7[5](S) nvme0n6[4](S) nvme0n5[3] nvme0n4[2] nvme0n3[1] nvme0n2[0]
      4188160 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
      
unused devices: <none>
$ sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Fri Jan 26 06:14:13 2024
        Raid Level : raid6
        Array Size : 4188160 (3.99 GiB 4.29 GB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 4
     Total Devices : 6
       Persistence : Superblock is persistent

       Update Time : Fri Jan 26 06:17:38 2024
             State : clean 
    Active Devices : 4
   Working Devices : 6
    Failed Devices : 0
     Spare Devices : 2

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : ****:0  (local to host ****)
              UUID : ********:********:********:********
            Events : 19

    Number   Major   Minor   RaidDevice State
       0     259        4        0      active sync   /dev/nvme0n2
       1     259        5        1      active sync   /dev/nvme0n3
       2     259        6        2      active sync   /dev/nvme0n4
       3     259        7        3      active sync   /dev/nvme0n5

       4     259        8        -      spare   /dev/nvme0n6
       5     259        9        -      spare   /dev/nvme0n7
$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
  :
  :
nvme0n2     259:4    0     2G  0 disk  
└─md0         9:0    0     4G  0 raid6 
nvme0n3     259:5    0     2G  0 disk  
└─md0         9:0    0     4G  0 raid6 
nvme0n4     259:6    0     2G  0 disk  
└─md0         9:0    0     4G  0 raid6 
nvme0n5     259:7    0     2G  0 disk  
└─md0         9:0    0     4G  0 raid6 
nvme0n6     259:8    0     2G  0 disk  
└─md0         9:0    0     4G  0 raid6 
nvme0n7     259:9    0     2G  0 disk  
└─md0         9:0    0     4G  0 raid6 

Dump the array configuration and update the mdadm configuration file

$ sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
$ cat /etc/mdadm/mdadm.conf
 :
 :
# This configuration was auto-generated on Thu, 10 Aug 2023 00:29:22 +0000 by mkconf
ARRAY /dev/md0 metadata=1.2 spares=2 name=****:0 UUID=********:********:********:********
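
Note that tee -a appends unconditionally, so running the scan again adds a duplicate ARRAY line. A guarded variant (my own sketch, not from the original) only appends when /dev/md0 is not yet listed:

# Append the scan output only if md0 is not already in mdadm.conf
grep -q '^ARRAY /dev/md0' /etc/mdadm/mdadm.conf || \
    sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf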

Update the initramfs so the RAID array is available at boot

$ sudo update-initramfs -u
update-initramfs: Generating /boot/initrd.img-5.15.0-92-generic

Confirm that the RAID array /dev/md0 is still available after a reboot

$ sudo reboot
  :
  :
$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
  :
  :
nvme0n2     259:4    0     2G  0 disk  
└─md0         9:0    0     4G  0 raid6 
nvme0n3     259:5    0     2G  0 disk  
└─md0         9:0    0     4G  0 raid6 
nvme0n4     259:6    0     2G  0 disk  
└─md0         9:0    0     4G  0 raid6 
nvme0n5     259:7    0     2G  0 disk  
└─md0         9:0    0     4G  0 raid6 
nvme0n6     259:8    0     2G  0 disk  
└─md0         9:0    0     4G  0 raid6 
nvme0n7     259:9    0     2G  0 disk  
└─md0         9:0    0     4G  0 raid6 
$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10] 
md0 : active raid6 nvme0n4[2] nvme0n5[3] nvme0n6[4](S) nvme0n7[5](S) nvme0n2[0] nvme0n3[1]
      4188160 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
      
unused devices: <none>
$ sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Fri Jan 26 06:14:13 2024
        Raid Level : raid6
        Array Size : 4188160 (3.99 GiB 4.29 GB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 4
     Total Devices : 6
       Persistence : Superblock is persistent

       Update Time : Fri Jan 26 06:17:38 2024
             State : clean 
    Active Devices : 4
   Working Devices : 6
    Failed Devices : 0
     Spare Devices : 2

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : ****:0  (local to host ****)
              UUID : ********:********:********:********
            Events : 19

    Number   Major   Minor   RaidDevice State
       0     259        4        0      active sync   /dev/nvme0n2
       1     259        5        1      active sync   /dev/nvme0n3
       2     259        6        2      active sync   /dev/nvme0n4
       3     259        7        3      active sync   /dev/nvme0n5

       4     259        8        -      spare   /dev/nvme0n6
       5     259        9        -      spare   /dev/nvme0n7
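
If the array ever fails to appear after a reboot, it can usually be brought back from the on-disk superblocks; a recovery sketch (not needed in this test):

# Assemble all arrays known from mdadm.conf / the device superblocks
sudo mdadm --assemble --scan
cat /proc/mdstat    # confirm md0 is active again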

Create the physical volume

$ sudo pvcreate /dev/md0
  Physical volume "/dev/md0" successfully created.

Create the volume group

$ sudo vgcreate vg1 /dev/md0
  Volume group "vg1" successfully created

Create the logical volume

$ sudo lvcreate -l100%FREE -n lv1 vg1
  Logical volume "lv1" created.

Check the physical volume, volume group, and logical volume

$ sudo pvs
  PV             VG  Fmt  Attr PSize   PFree
  /dev/md0       vg1 lvm2 a--    3.99g    0 
$ sudo vgs
  VG  #PV #LV #SN Attr   VSize   VFree
  vg1   1   1   0 wz--n-   3.99g    0 
$ sudo lvs
  LV   VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv1  vg1 -wi-a-----   3.99g

Create the filesystem

$ sudo mkfs.ext4 /dev/vg1/lv1
mke2fs 1.46.5 (30-Dec-2021)
Creating filesystem with 1046528 4k blocks and 261632 inodes
Filesystem UUID: 9d61abc8-2882-49ab-8de1-0d8df71623e0
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done 

Create the mount point

$ sudo mkdir -p /mnt/raid6

Configure automatic mounting

Retrieve the UUID with blkid and write it to /etc/fstab.

$ sudo blkid /dev/vg1/lv1
/dev/vg1/lv1: UUID="********-****-****-****-************" BLOCK_SIZE="4096" TYPE="ext4"
$ sudo vim /etc/fstab
$ cat /etc/fstab
 :
 :
 :
UUID=********-****-****-****-************ /mnt/raid6 ext4 defaults 0 0
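
The same entry can be appended without editing the file by hand; a sketch assuming the logical volume path /dev/vg1/lv1 and mount point /mnt/raid6 used above:

# Append the fstab entry using the UUID reported by blkid
UUID=$(sudo blkid -s UUID -o value /dev/vg1/lv1)
echo "UUID=${UUID} /mnt/raid6 ext4 defaults 0 0" | sudo tee -a /etc/fstab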

Mount

$ sudo mount -a
$ df -h
Filesystem           Size  Used Avail Use% Mounted on
  :
  :
/dev/mapper/vg1-lv1  3.9G   24K  3.7G   1% /mnt/raid6
$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
  :
  :
nvme0n2     259:4    0     2G  0 disk  
└─md0         9:0    0     4G  0 raid6 
  └─vg1-lv1 253:1    0     4G  0 lvm   /mnt/raid6
nvme0n3     259:5    0     2G  0 disk  
└─md0         9:0    0     4G  0 raid6 
  └─vg1-lv1 253:1    0     4G  0 lvm   /mnt/raid6
nvme0n4     259:6    0     2G  0 disk  
└─md0         9:0    0     4G  0 raid6 
  └─vg1-lv1 253:1    0     4G  0 lvm   /mnt/raid6
nvme0n5     259:7    0     2G  0 disk  
└─md0         9:0    0     4G  0 raid6 
  └─vg1-lv1 253:1    0     4G  0 lvm   /mnt/raid6
nvme0n6     259:8    0     2G  0 disk  
└─md0         9:0    0     4G  0 raid6 
  └─vg1-lv1 253:1    0     4G  0 lvm   /mnt/raid6
nvme0n7     259:9    0     2G  0 disk  
└─md0         9:0    0     4G  0 raid6 
  └─vg1-lv1 253:1    0     4G  0 lvm   /mnt/raid6

Write test

$ df -h
Filesystem           Size  Used Avail Use% Mounted on
  :
  :
/dev/mapper/vg1-lv1  3.9G   24K  3.7G   1% /mnt/raid6
$ sudo dd if=/dev/random of=/mnt/raid6/test.dat bs=1M status=progress
4093640704 bytes (4.1 GB, 3.8 GiB) copied, 8 s, 512 MB/s 
dd: error writing '/mnt/raid6/test.dat': No space left on device
3928+0 records in
3927+0 records out
4118343680 bytes (4.1 GB, 3.8 GiB) copied, 8.05844 s, 511 MB/s
$ df -h
Filesystem           Size  Used Avail Use% Mounted on
  :
  :
/dev/mapper/vg1-lv1  3.9G  3.9G     0 100% /mnt/raid6
$ sudo rm /mnt/raid6/test.dat 
$ df -h
Filesystem           Size  Used Avail Use% Mounted on
  :
  :
/dev/mapper/vg1-lv1  3.9G   24K  3.7G   1% /mnt/raid6
$ cd /mnt/raid6
$ sudo f3write ./
F3 write 8.0
Copyright (C) 2010 Digirati Internet LTDA.
This is free software; see the source for copying conditions.

Free space: 3.85 GB
Creating file 1.h2w ... OK!                      
Creating file 2.h2w ... OK!                        
Creating file 3.h2w ... OK!                        
Creating file 4.h2w ... OK!                        
Free space: 16.00 MB
Average writing speed: 522.84 MB/s
$ sudo f3read ./
F3 read 8.0
Copyright (C) 2010 Digirati Internet LTDA.
This is free software; see the source for copying conditions.

                  SECTORS      ok/corrupted/changed/overwritten
Validating file 1.h2w ... 2097152/        0/      0/      0
Validating file 2.h2w ... 2097152/        0/      0/      0
Validating file 3.h2w ... 2097152/        0/      0/      0
Validating file 4.h2w ... 1752136/        0/      0/      0

  Data OK: 3.84 GB (8043592 sectors)
Data LOST: 0.00 Byte (0 sectors)
	       Corrupted: 0.00 Byte (0 sectors)
	Slightly changed: 0.00 Byte (0 sectors)
	     Overwritten: 0.00 Byte (0 sectors)
Average reading speed: 1.20 GB/s

Failure recovery

Simulate a disk failure and confirm that the array rebuilds automatically

$ sudo mdadm /dev/md0 --fail /dev/nvme0n2
mdadm: set /dev/nvme0n2 faulty in /dev/md0
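
To follow the rebuild as it happens, one option (my habit, not part of the original steps) is to poll /proc/mdstat:

# Refresh the rebuild status every second (Ctrl-C to stop)
watch -n 1 cat /proc/mdstat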

Rebuilding.

$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10] 
md0 : active raid6 nvme0n4[2] nvme0n5[3] nvme0n6[4] nvme0n7[5](S) nvme0n2[0](F) nvme0n3[1]
      4188160 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/3] [_UUU]
      [=======>.............]  recovery = 38.4% (804736/2094080) finish=0.1min speed=201184K/sec
      
unused devices: <none>
$ sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Fri Jan 26 06:14:13 2024
        Raid Level : raid6
        Array Size : 4188160 (3.99 GiB 4.29 GB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 4
     Total Devices : 6
       Persistence : Superblock is persistent

       Update Time : Thu Jan 25 21:59:13 2024
             State : clean, degraded, recovering 
    Active Devices : 3
   Working Devices : 5
    Failed Devices : 1
     Spare Devices : 2

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 76% complete

              Name : ****:0  (local to host ****)
              UUID : ********:********:********:********
            Events : 33

    Number   Major   Minor   RaidDevice State
       4     259        8        0      spare rebuilding   /dev/nvme0n6
       1     259        5        1      active sync   /dev/nvme0n3
       2     259        6        2      active sync   /dev/nvme0n4
       3     259        7        3      active sync   /dev/nvme0n5

       0     259        4        -      faulty   /dev/nvme0n2
       5     259        9        -      spare   /dev/nvme0n7

Rebuild complete. /dev/nvme0n6 is now among the active disks, and /dev/nvme0n2 is listed as faulty.

$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10] 
md0 : active raid6 nvme0n4[2] nvme0n5[3] nvme0n6[4] nvme0n7[5](S) nvme0n2[0](F) nvme0n3[1]
      4188160 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
      
unused devices: <none>
$ sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Fri Jan 26 06:14:13 2024
        Raid Level : raid6
        Array Size : 4188160 (3.99 GiB 4.29 GB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 4
     Total Devices : 6
       Persistence : Superblock is persistent

       Update Time : Thu Jan 25 21:59:16 2024
             State : clean 
    Active Devices : 4
   Working Devices : 5
    Failed Devices : 1
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : ****:0  (local to host ****)
              UUID : ********:********:********:********
            Events : 38

    Number   Major   Minor   RaidDevice State
       4     259        8        0      active sync   /dev/nvme0n6
       1     259        5        1      active sync   /dev/nvme0n3
       2     259        6        2      active sync   /dev/nvme0n4
       3     259        7        3      active sync   /dev/nvme0n5

       0     259        4        -      faulty   /dev/nvme0n2
       5     259        9        -      spare   /dev/nvme0n7

Check the logs

mdadm tries to send an email when a failure occurs.
If you set MAILADDR root in /etc/mdadm/mdadm.conf, the alert mail can actually be delivered.

$ sudo dmesg --ctime
  :
  :
[Fri Jan 26 08:29:48 2024] md/raid:md0: Disk failure on nvme0n2, disabling device.
[Fri Jan 26 08:29:48 2024] md/raid:md0: Operation continuing on 3 devices.
[Fri Jan 26 08:29:48 2024] md: recovery of RAID array md0
[Fri Jan 26 08:29:58 2024] md: md0: recovery done.
$ sudo journalctl
  :
  :
Jan 26 08:29:49 ha06 kernel: md/raid:md0: Disk failure on nvme0n2, disabling device.
Jan 26 08:29:49 ha06 kernel: md/raid:md0: Operation continuing on 3 devices.
Jan 26 08:29:49 ha06 mdadm[1480]: sh: 1: /usr/sbin/sendmail: not found
Jan 26 08:29:49 ha06 kernel: md: recovery of RAID array md0
Jan 26 08:29:59 ha06 kernel: md: md0: recovery done.
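
The journal above shows /usr/sbin/sendmail: not found, so the alert never actually left the host. A hedged sketch of making the notification work, assuming postfix (any package providing sendmail would do; it still needs site-specific mail configuration) is acceptable in your environment:

# Provide a local sendmail and point mdadm's alerts at root
sudo apt-get install -y postfix
echo 'MAILADDR root' | sudo tee -a /etc/mdadm/mdadm.conf
sudo systemctl restart mdmonitor
# Send a test alert for every array and exit
sudo mdadm --monitor --scan --test --oneshot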

Verify the data after the rebuild

$ sudo f3read ./
F3 read 8.0
Copyright (C) 2010 Digirati Internet LTDA.
This is free software; see the source for copying conditions.

                  SECTORS      ok/corrupted/changed/overwritten
Validating file 1.h2w ... 2097152/        0/      0/      0
Validating file 2.h2w ... 2097152/        0/      0/      0
Validating file 3.h2w ... 2097152/        0/      0/      0
Validating file 4.h2w ... 1752136/        0/      0/      0

  Data OK: 3.84 GB (8043592 sectors)
Data LOST: 0.00 Byte (0 sectors)
	       Corrupted: 0.00 Byte (0 sectors)
	Slightly changed: 0.00 Byte (0 sectors)
	     Overwritten: 0.00 Byte (0 sectors)
Average reading speed: 357.17 MB/s

Remove the failed disk from the RAID array

$ sudo mdadm /dev/md0 --remove /dev/nvme0n2
mdadm: hot removed /dev/nvme0n2 from /dev/md0
$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10] 
md0 : active raid6 nvme0n4[2] nvme0n5[3] nvme0n6[4] nvme0n7[5](S) nvme0n3[1]
      4188160 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
      
unused devices: <none>
$ sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Fri Jan 26 06:14:13 2024
        Raid Level : raid6
        Array Size : 4188160 (3.99 GiB 4.29 GB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 4
     Total Devices : 5
       Persistence : Superblock is persistent

       Update Time : Thu Jan 25 22:10:59 2024
             State : clean 
    Active Devices : 4
   Working Devices : 5
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : ****:0  (local to host ****)
              UUID : ********:********:********:********
            Events : 39

    Number   Major   Minor   RaidDevice State
       4     259        8        0      active sync   /dev/nvme0n6
       1     259        5        1      active sync   /dev/nvme0n3
       2     259        6        2      active sync   /dev/nvme0n4
       3     259        7        3      active sync   /dev/nvme0n5

       5     259        9        -      spare   /dev/nvme0n7
$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
  :
  :
nvme0n2     259:4    0     2G  0 disk  
nvme0n3     259:5    0     2G  0 disk  
└─md0         9:0    0     4G  0 raid6 
  └─vg1-lv1 253:1    0     4G  0 lvm   /mnt/raid6
nvme0n4     259:6    0     2G  0 disk  
└─md0         9:0    0     4G  0 raid6 
  └─vg1-lv1 253:1    0     4G  0 lvm   /mnt/raid6
nvme0n5     259:7    0     2G  0 disk  
└─md0         9:0    0     4G  0 raid6 
  └─vg1-lv1 253:1    0     4G  0 lvm   /mnt/raid6
nvme0n6     259:8    0     2G  0 disk  
└─md0         9:0    0     4G  0 raid6 
  └─vg1-lv1 253:1    0     4G  0 lvm   /mnt/raid6
nvme0n7     259:9    0     2G  0 disk  
└─md0         9:0    0     4G  0 raid6 
  └─vg1-lv1 253:1    0     4G  0 lvm   /mnt/raid6

Add a new standby disk to the RAID array

Assume /dev/nvme0n2 has been replaced with a new disk.
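
If the "replacement" is actually a reused disk, it may still carry old metadata; a precautionary sketch (not part of the original steps) before adding it back:

# Check for a stale md superblock and wipe any leftover signatures
sudo mdadm --examine /dev/nvme0n2   # a truly fresh disk reports no md superblock
sudo wipefs -a /dev/nvme0n2         # clear old signatures if the disk was reused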

$ sudo mdadm /dev/md0 --add /dev/nvme0n2
$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10] 
md0 : active raid6 nvme0n2[6](S) nvme0n4[2] nvme0n5[3] nvme0n6[4] nvme0n7[5](S) nvme0n3[1]
      4188160 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
      
unused devices: <none>
$ sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Fri Jan 26 06:14:13 2024
        Raid Level : raid6
        Array Size : 4188160 (3.99 GiB 4.29 GB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 4
     Total Devices : 6
       Persistence : Superblock is persistent

       Update Time : Thu Jan 25 22:16:09 2024
             State : clean 
    Active Devices : 4
   Working Devices : 6
    Failed Devices : 0
     Spare Devices : 2

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : ****:0  (local to host ****)
              UUID : ********:********:********:********
            Events : 40

    Number   Major   Minor   RaidDevice State
       4     259        8        0      active sync   /dev/nvme0n6
       1     259        5        1      active sync   /dev/nvme0n3
       2     259        6        2      active sync   /dev/nvme0n4
       3     259        7        3      active sync   /dev/nvme0n5

       5     259        9        -      spare   /dev/nvme0n7
       6     259        4        -      spare   /dev/nvme0n2

Removal

Unmount

If the unmount fails with umount: /mnt/raid6: target is busy., identify the processes using the mount point with fuser /mnt/raid6 and release them.
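
A sketch of that troubleshooting step (the kill variant is my addition; use it with care):

# Show the processes holding the mount point, then optionally terminate them
sudo fuser -vm /mnt/raid6
sudo fuser -km /mnt/raid6   # sends SIGKILL to everything using /mnt/raid6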

$ sudo umount /dev/vg1/lv1

Remove the logical volume

$ sudo lvremove vg1/lv1
Do you really want to remove and DISCARD active logical volume vg1/lv1? [y/n]: y
  Logical volume "lv1" successfully removed

Remove the volume group

$ sudo vgremove vg1
  Volume group "vg1" successfully removed

Remove the physical volume

$ sudo pvremove /dev/md0
  Labels on physical volume "/dev/md0" successfully wiped.

Stop the RAID array

$ sudo mdadm --stop /dev/md0
mdadm: stopped /dev/md0
$ sudo mdadm --zero-superblock /dev/nvme0n2 /dev/nvme0n3 /dev/nvme0n4 /dev/nvme0n5 /dev/nvme0n6 /dev/nvme0n7
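
--zero-superblock clears only the md metadata; if the disks are going to be reused elsewhere, wiping every remaining signature is a stricter option (an extra step of my own, not in the original procedure):

# Belt-and-braces: wipe all remaining signatures from each former member disk
for d in /dev/nvme0n{2..7}; do
    sudo wipefs -a "$d"
done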

Confirm the removal

$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
  :
  :
nvme0n2     259:4    0     2G  0 disk 
nvme0n3     259:5    0     2G  0 disk 
nvme0n4     259:6    0     2G  0 disk 
nvme0n5     259:7    0     2G  0 disk 
nvme0n6     259:8    0     2G  0 disk 
nvme0n7     259:9    0     2G  0 disk 

Remove the automatic mount entry

Delete the /mnt/raid6 line.

$ sudo vim /etc/fstab

Update the mdadm configuration file

Delete the ARRAY /dev/md0 line.

$ sudo vim /etc/mdadm/mdadm.conf

Update the initramfs

$ sudo update-initramfs -u

Reboot

Note that if you forget to remove the automatic mount entry, the boot will stall partway through.
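
One way to soften that failure mode (my suggestion, not from the original) is to add the nofail option to the fstab entry so the boot continues even if the volume is missing:

# Hypothetical /etc/fstab entry; the UUID placeholder is the same elided value as above
UUID=********-****-****-****-************ /mnt/raid6 ext4 defaults,nofail 0 0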

Conclusion

That was easy, wasn't it?
