
Notes on Expanding a RAID Array and LVM


I installed fnOS in VMware for a trial run, giving it a 20 GB system disk and a 20 GB data disk; Docker and the other apps worked without issue.

While using the photo app, I found that the AI album feature needs more than 20 GB, so I shut the VM down and grew the data disk in VMware from 20 GB to 40 GB.

After booting back up, the enlarged disk was detected, but fnOS offered no option to expand the storage, and a search of the official site turned up nothing either. Since fnOS web users can SSH into the system, I set about expanding it myself.

The SSH username and password are the same ones you use to log in to the web UI!
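
For example, connecting from another machine on the LAN (the IP below is a placeholder; use your NAS's actual address):

    ssh fnadmin@192.168.1.100   # same username/password as the fnOS web login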

The process is recorded below:

  1. Check the current filesystem usage

    root@fnOS:/home/fnadmin# df -Th
    Filesystem                                              Type      Size  Used Avail Use% Mounted on
    udev                                                    devtmpfs  1.9G     0  1.9G   0% /dev
    tmpfs                                                   tmpfs     390M  7.7M  383M   2% /run
    /dev/sda2                                               ext4       20G  9.3G  9.2G  51% /
    tmpfs                                                   tmpfs     2.0G  1.4M  2.0G   1% /dev/shm
    tmpfs                                                   tmpfs     5.0M     0  5.0M   0% /run/lock
    trimafs                                                 trimafs    20G  494M   19G   3% /fs
    /dev/mapper/trim_e4239c90_3a1d_4c43_ba7a_b36971db3280-0 btrfs      20G  494M   19G   3% /vol1
    tmpfs                                                   tmpfs     390M     0  390M   0% /run/user/1000
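
    Before changing anything, it helps to map how the layers stack. This lsblk check is my own addition, not part of the original session:

    lsblk -o NAME,TYPE,SIZE,FSTYPE,MOUNTPOINT /dev/sdb
    # expected shape: sdb (disk) -> sdb1 (part, linux_raid_member)
    #                 -> md127 (raid1) -> the trim_* LV (lvm, btrfs mounted at /vol1)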
  2. Confirm the data disk now reports 40 GiB

    root@fnOS:/vol1# fdisk -l
    Disk /dev/sda: 20 GiB, 21474836480 bytes, 41943040 sectors
    Disk model: VMware Virtual S
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0x07f88c45
    
    Device     Boot  Start      End  Sectors  Size Id Type
    /dev/sda1         2048   194559   192512   94M 83 Linux
    /dev/sda2       194560 41943039 41748480 19.9G 83 Linux
    GPT PMBR size mismatch (41943039 != 83886079) will be corrected by write.
    
    Disk /dev/sdb: 40 GiB, 42949672960 bytes, 83886080 sectors
    Disk model: VMware Virtual S
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: 125721BC-AA79-4ADD-89F6-95A50E1FD0AE
    
    Device     Start      End  Sectors Size Type
    /dev/sdb1   2048 41940991 41938944  20G Linux RAID
    
    Disk /dev/md127: 19.98 GiB, 21454913536 bytes, 41904128 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    
    Disk /dev/mapper/trim_e4239c90_3a1d_4c43_ba7a_b36971db3280-0: 19.98 GiB, 21453864960 bytes, 41902080 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
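
    Note the "GPT PMBR size mismatch" warning, and that /dev/sdb1 still ends at the old 20 GiB boundary even though the disk is now 40 GiB. The original session does not show a partition-resize step; if mdadm --grow later reports no change at the old size, the partition most likely needs extending first. A sketch, assuming the growpart tool (from cloud-guest-utils) is available:

    sudo growpart /dev/sdb 1   # rewrites the GPT (fixing the PMBR mismatch) and grows partition 1 to fill the disk
    # or interactively with parted: run "sudo parted /dev/sdb", answer Fix when
    # prompted about the backup GPT, then "resizepart 1 100%"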
  3. Check the RAID status

    root@fnOS:/vol1# mdadm --detail /dev/md127
    
    /dev/md127:
               Version : 1.2
         Creation Time : Tue Jan 14 10:11:47 2025
            Raid Level : raid1
            Array Size : 20952064 (19.98 GiB 21.45 GB)
         Used Dev Size : 20952064 (19.98 GiB 21.45 GB)
          Raid Devices : 1
         Total Devices : 1
           Persistence : Superblock is persistent

         Intent Bitmap : Internal

           Update Time : Mon Feb 10 10:20:09 2025
                 State : clean
        Active Devices : 1
       Working Devices : 1
        Failed Devices : 0
         Spare Devices : 0

    Consistency Policy : bitmap

                  Name : fnOS:0  (local to host fnOS)
                  UUID : d98014bf:1057d925:bd877112:6c5efb51
                Events : 8

        Number   Major   Minor   RaidDevice State
           0       8       17        0      active sync   /dev/sdb1
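
    For a quicker glance at the same array state (my addition, not in the original session), /proc/mdstat works too:

    cat /proc/mdstat
    # one line per md device: level, member disks, block count, and sync status,
    # e.g. "md127 : active raid1 sdb1[0]"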
  4. Grow the RAID array

    root@fnOS:/vol1# sudo mdadm --grow /dev/md127 --size=max
    mdadm: component size of /dev/md127 unchanged at 41924591K

    mdadm reported the component size as "unchanged" at 41924591K, which is already the new 40 GiB maximum; presumably the underlying partition had been extended by this point (see the note under step 2). Re-check the array to confirm the new size:

    root@fnOS:/vol1#  mdadm --detail /dev/md127
    /dev/md127:
               Version : 1.2
         Creation Time : Tue Jan 14 10:11:47 2025
            Raid Level : raid1
            Array Size : 41924591 (39.98 GiB 42.93 GB)
         Used Dev Size : 41924591 (39.98 GiB 42.93 GB)
          Raid Devices : 1
         Total Devices : 1
           Persistence : Superblock is persistent

         Intent Bitmap : Internal

           Update Time : Mon Feb 10 11:17:59 2025
                 State : clean
        Active Devices : 1
       Working Devices : 1
        Failed Devices : 0
         Spare Devices : 0

    Consistency Policy : bitmap

                  Name : fnOS:0  (local to host fnOS)
                  UUID : d98014bf:1057d925:bd877112:6c5efb51
                Events : 12

        Number   Major   Minor   RaidDevice State
           0       8       17        0      active sync   /dev/sdb1
  5. With the RAID grown, expand the LVM

    root@fnOS:/vol1# pvresize /dev/md127
      Physical volume "/dev/md127" changed
      1 physical volume(s) resized or updated / 0 physical volume(s) not resized
    root@fnOS:/vol1# vgs
      VG                                        #PV #LV #SN Attr   VSize  VFree 
      trim_e4239c90_3a1d_4c43_ba7a_b36971db3280   1   1   0 wz--n- 39.98g 20.00g
    root@fnOS:/vol1# lvextend -l +100%FREE /dev/mapper/trim_e4239c90_3a1d_4c43_ba7a_b36971db3280-0
      Size of logical volume trim_e4239c90_3a1d_4c43_ba7a_b36971db3280/0 changed from 19.98 GiB (5115 extents) to 39.98 GiB (10235 extents).
      Logical volume trim_e4239c90_3a1d_4c43_ba7a_b36971db3280/0 successfully resized.
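
    Optionally, verify the LVM side before touching the filesystem; these two checks are my addition:

    sudo pvs   # the PV on /dev/md127 should now report ~39.98g
    sudo lvs   # the LV in VG trim_e4239c90_3a1d_4c43_ba7a_b36971db3280 should show 39.98g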

    Even after growing the LV, df -Th still reports the old size; the filesystem itself has to be resized as well:

    root@fnOS:/vol1# sudo btrfs filesystem resize max /vol1
    Resize device id 1 (/dev/mapper/trim_e4239c90_3a1d_4c43_ba7a_b36971db3280-0) from 19.98GiB to max
    root@fnOS:/vol1# df -Th
    Filesystem                                              Type      Size  Used Avail Use% Mounted on
    udev                                                    devtmpfs  1.9G     0  1.9G   0% /dev
    tmpfs                                                   tmpfs     390M  7.7M  383M   2% /run
    /dev/sda2                                               ext4       20G  9.3G  9.2G  51% /
    tmpfs                                                   tmpfs     2.0G  1.4M  2.0G   1% /dev/shm
    tmpfs                                                   tmpfs     5.0M     0  5.0M   0% /run/lock
    trimafs                                                 trimafs    40G  494M   39G   2% /fs
    /dev/mapper/trim_e4239c90_3a1d_4c43_ba7a_b36971db3280-0 btrfs      40G  494M   39G   2% /vol1
    tmpfs                                                   tmpfs     390M     0  390M   0% /run/user/1000
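
    For a more detailed capacity report than df gives, btrfs has its own view (my addition):

    sudo btrfs filesystem usage /vol1   # per-device size, allocated, and unallocated space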

With that, the expansion is complete.
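
For reference, here is the whole sequence condensed into one block. Device, VG, and mount names are the ones from this post; substitute your own. The growpart line is the assumed partition step discussed under step 2, not something the original session showed:

    sudo growpart /dev/sdb 1                  # only if the partition still has the old size
    sudo mdadm --grow /dev/md127 --size=max   # grow the RAID array
    sudo pvresize /dev/md127                  # grow the LVM physical volume
    sudo lvextend -l +100%FREE /dev/mapper/trim_e4239c90_3a1d_4c43_ba7a_b36971db3280-0
    sudo btrfs filesystem resize max /vol1    # grow the filesystem
    df -Th /vol1                              # confirm the new size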

Reply (2025-2-11): Awesome, learned a lot.