Last edited by 18910025833 on 2025-3-30 18:19
Symptom: after a reboot the system reports that the array disks are missing. Every disk is still detected by the OS, but the array built from the corresponding seven disks will not assemble automatically.
lsblk
NAME      MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINTS
sda         8:0    0  1.8T  0 disk
└─sda1      8:1    0  1.8T  0 part
  └─md125   9:125  0    0B  0 raid5
sdb         8:16   0  1.8T  0 disk
└─sdb1      8:17   0  1.8T  0 part
  └─md125   9:125  0    0B  0 raid5
sdd         8:48   0  1.8T  0 disk
└─md127     9:127  0    0B  0 raid5
sde         8:64   0  1.8T  0 disk
└─md127     9:127  0    0B  0 raid5
sdf         8:80   0  1.8T  0 disk
└─md127     9:127  0    0B  0 raid5
sdg         8:96   0  1.8T  0 disk
└─md127     9:127  0    0B  0 raid5
sdh         8:112  0  1.8T  0 disk
└─md127     9:127  0    0B  0 raid5
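The split is easier to see if you tally the TYPE column. Here is a small sketch that counts, from a simplified copy of the NAME/TYPE pairs transcribed from the listing above (not read live from the system), how many members each leftover array still claims:

```shell
#!/bin/sh
# NAME/TYPE pairs transcribed from the lsblk output above.
lsblk_sample='sda disk
sda1 part
md125 raid5
sdb disk
sdb1 part
md125 raid5
sdd disk
md127 raid5
sde disk
md127 raid5
sdf disk
md127 raid5
sdg disk
md127 raid5
sdh disk
md127 raid5'

# Count members per md device: field 1 is the device name,
# field 2 its TYPE; only raid5 entries are array members.
counts=$(printf '%s\n' "$lsblk_sample" \
  | awk '$2 == "raid5" { n[$1]++ } END { for (d in n) print d, n[d] }' \
  | sort)
echo "$counts"
```

Two members claimed by md125 and five by md127: the seven disks are carrying superblocks from two different old arrays, so neither array reaches the member count a 7-disk RAID 5 needs to start, which matches the kernel messages below.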
Kernel log:
2025-03-30T17:11:46.020975+08:00 hyper-v kernel: [ 148.058537] md/raid:md127: device sdd operational as raid disk 6
2025-03-30T17:11:46.021070+08:00 hyper-v kernel: [ 148.058548] md/raid:md127: device sdf operational as raid disk 3
2025-03-30T17:11:46.021080+08:00 hyper-v kernel: [ 148.058550] md/raid:md127: device sdg operational as raid disk 2
2025-03-30T17:11:46.021084+08:00 hyper-v kernel: [ 148.058553] md/raid:md127: device sde operational as raid disk 1
2025-03-30T17:11:46.021087+08:00 hyper-v kernel: [ 148.058555] md/raid:md127: device sdh operational as raid disk 4
2025-03-30T17:11:46.024945+08:00 hyper-v kernel: [ 148.062887] md/raid:md127: not enough operational devices (2/7 failed)
2025-03-30T17:11:57.064966+08:00 hyper-v kernel: [ 159.104397] md/raid:md125: device sdb1 operational as raid disk 0
2025-03-30T17:11:57.065060+08:00 hyper-v kernel: [ 159.104408] md/raid:md125: device sda1 operational as raid disk 1
2025-03-30T17:11:57.073048+08:00 hyper-v kernel: [ 159.108740] md/raid:md125: not enough operational devices (5/7 failed)
Cause:
Recreating a disk array does not wipe the array metadata (md superblocks) left over from the previous arrays, so at boot the system tries to assemble the stale arrays instead of the current one.
Solution:
Delete the corresponding storage pool in the system UI → check from the command line whether the disks still carry array metadata → wipe whatever metadata was left behind → recreate the storage pool in the UI → confirm the array info from the command line → reboot the device and verify it works normally.
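For the "check and wipe leftover metadata" steps, here is a hedged sketch of the commands I would expect to need, with device names copied from the listing above; verify each target with `mdadm --examine` first, because `--zero-superblock` destroys the array metadata. The script only prints the plan; pipe it to `sh` once you have confirmed the targets on your own system:

```shell
#!/bin/sh
# DRY RUN: prints the cleanup commands instead of executing them.
# Inspect a member first with:  mdadm --examine /dev/sdXN
# Device names below are taken from the lsblk output in this post;
# they may differ on your system.
plan() {
    # Stop the half-assembled stale arrays first.
    echo "mdadm --stop /dev/md125"
    echo "mdadm --stop /dev/md127"
    # md125 members were partitions (sda1, sdb1).
    for part in /dev/sda1 /dev/sdb1; do
        echo "mdadm --zero-superblock $part"
    done
    # md127 members were whole disks (sdd..sdh).
    for disk in /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh; do
        echo "mdadm --zero-superblock $disk"
    done
}
plan   # review the printed plan; execute with:  plan | sh
```

Printing the plan before running it is deliberate: zeroing a superblock on the wrong device is unrecoverable, so the dry run forces a review step.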
lsblk output after the array metadata has been completely wiped:
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sdb 8:16 0 1.8T 0 disk
sdc 8:32 0 1.8T 0 disk
sdd 8:48 0 1.8T 0 disk
sde 8:64 0 1.8T 0 disk
sdf 8:80 0 1.8T 0 disk
sdg 8:96 0 1.8T 0 disk
sdh 8:112 0 1.8T 0 disk
Disk layout after the array was recreated:
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sdb                                                 8:16   0  1.8T  0 disk
└─sdb1                                              8:17   0  1.8T  0 part
  └─md0                                             9:0    0 10.9T  0 raid5
    └─trim_5a70ab5d_f45e_46f0_8726_a462eab66a82-0 253:1    0 10.9T  0 lvm   /vol2
sdc                                                 8:32   0  1.8T  0 disk
└─sdc1                                              8:33   0  1.8T  0 part
  └─md0                                             9:0    0 10.9T  0 raid5
    └─trim_5a70ab5d_f45e_46f0_8726_a462eab66a82-0 253:1    0 10.9T  0 lvm   /vol2
sdd                                                 8:48   0  1.8T  0 disk
└─sdd1                                              8:49   0  1.8T  0 part
  └─md0                                             9:0    0 10.9T  0 raid5
    └─trim_5a70ab5d_f45e_46f0_8726_a462eab66a82-0 253:1    0 10.9T  0 lvm   /vol2
sde                                                 8:64   0  1.8T  0 disk
└─sde1                                              8:65   0  1.8T  0 part
  └─md0                                             9:0    0 10.9T  0 raid5
    └─trim_5a70ab5d_f45e_46f0_8726_a462eab66a82-0 253:1    0 10.9T  0 lvm   /vol2
sdf                                                 8:80   0  1.8T  0 disk
└─sdf1                                              8:81   0  1.8T  0 part
  └─md0                                             9:0    0 10.9T  0 raid5
    └─trim_5a70ab5d_f45e_46f0_8726_a462eab66a82-0 253:1    0 10.9T  0 lvm   /vol2
sdg                                                 8:96   0  1.8T  0 disk
└─sdg1                                              8:97   0  1.8T  0 part
  └─md0                                             9:0    0 10.9T  0 raid5
    └─trim_5a70ab5d_f45e_46f0_8726_a462eab66a82-0 253:1    0 10.9T  0 lvm   /vol2
sdh                                                 8:112  0  1.8T  0 disk
└─sdh1                                              8:113  0  1.8T  0 part
  └─md0                                             9:0    0 10.9T  0 raid5
    └─trim_5a70ab5d_f45e_46f0_8726_a462eab66a82-0 253:1    0 10.9T  0 lvm   /vol2
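For the final "reboot and verify" step, here is a short check list, again printed as a plan first. The mdadm.conf step in the comment is an assumption for stock distros; NAS firmware usually manages array assembly itself, in which case it is unnecessary:

```shell
#!/bin/sh
# DRY RUN: prints the verification commands (execute with: verify | sh).
# On a stock distro you might also persist the new array so it assembles
# at boot, e.g.  mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# (the conf path is distro-dependent; this is an assumption, not taken
# from the post, and NAS firmware typically handles this on its own).
verify() {
    echo "cat /proc/mdstat"          # array state and resync progress
    echo "mdadm --detail /dev/md0"   # member list and array health
    echo "mdadm --examine /dev/sdb1" # superblock on one member
}
verify
```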