This article is only for users who already know their way around Linux; beginners should steer clear.
The changes described here can damage your system in unpredictable ways. If the system holds anything important, it is best not to experiment on it.
As is well known, 飞牛 (fnOS) manages its disks with lvm2 under the hood. The official UI currently has no option to add an SSD cache, but you can add one by hand.
Given how critical the SSD cache's metadata is, I recommend keeping that metadata on the RAID array: even if the SSD dies, the cache state can still be recovered from the metadata.
Prerequisite: prepare your SSDs.
- If safety is not a concern, a single SSD will do.
- If safety is not a concern but speed is, use two SSDs in RAID 0.
- If safety matters, use two SSDs in RAID 1.
- If you want both speed and safety, use four SSDs in RAID 10.
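As a sketch, the mdadm commands for the options above could look like the following. The device names /dev/sdX, /dev/sdY, /dev/sdZ and /dev/sdW are placeholders; substitute your own SSDs.

```shell
# RAID 0 across two SSDs: fast, no redundancy (placeholder device names)
mdadm -v -C /dev/md/ssdcache -l 0 -n 2 /dev/sdX /dev/sdY

# RAID 1 across two SSDs: survives one SSD failure
mdadm -v -C /dev/md/ssdcache -l 1 -n 2 /dev/sdX /dev/sdY

# RAID 10 across four SSDs: both speed and redundancy
mdadm -v -C /dev/md/ssdcache -l 10 -n 4 /dev/sdX /dev/sdY /dev/sdZ /dev/sdW
```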
For the sake of data safety, the examples below store the SSD cache metadata on the RAID 5 array, so that if an SSD dies the cache can still be rescued through the metadata.
My current hardware:
A junk H81 board with an E3-1265L and 16 GB of RAM. The board has 4 SATA ports: two at 3 Gb/s and two at 6 Gb/s.
The four 3 TB drives are on the board's SATA ports; one 120 GB SSD and two 240 GB SSDs hang off an SAS3008 HBA, currently in passthrough (IT) mode.
Current device-mapper state of the system:
dmsetup table
trim_77e63fb8_7115_43dd_9515_d4b44fd6dcb6-0: 0 17578696704 linear 9:0 3072
trim_f9d13e76_9e61_4ae9_9c2d_5e4d1ba19e92-0: 0 100147200 linear 9:125 2048
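In a `linear` target line the second-to-last field is the backing device's major:minor pair, and major 9 is the md driver, so `9:0` above is /dev/md0. A quick way to pull that field out, using the first table line above as sample input:

```shell
# Extract the backing device's major:minor from a dmsetup "linear"
# table line; it is the second-to-last whitespace-separated field.
line='trim_77e63fb8_7115_43dd_9515_d4b44fd6dcb6-0: 0 17578696704 linear 9:0 3072'
echo "$line" | awk '{print $(NF-1)}'   # prints 9:0
```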
Optional step: carve a slice out of the RAID 5 array to hold the SSD cache's metadata.
btrfs filesystem resize -1g /vol1
lvreduce -L -1G /dev/trim_77e63fb8_7115_43dd_9515_d4b44fd6dcb6/0
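The order of those two commands matters: the btrfs filesystem must be shrunk before the LV underneath it, otherwise lvreduce would cut off blocks the filesystem still uses. A sanity-check sketch between the two steps:

```shell
# Confirm the shrunken filesystem fits inside the soon-to-be-smaller LV
btrfs filesystem usage /vol1
lvs --units g trim_77e63fb8_7115_43dd_9515_d4b44fd6dcb6/0
```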
vgdisplay now shows that the VG has 1 GiB of free space:
--- Volume group ---
VG Name trim_77e63fb8_7115_43dd_9515_d4b44fd6dcb6
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size <8.19 TiB
PE Size 4.00 MiB
Total PE 2146093
Alloc PE / Size 2145837 / <8.19 TiB
Free PE / Size 256 / 1.00 GiB
VG UUID UYknJE-IEz0-eQcC-nnkE-uEY7-dUtE-Au9Gl3
Create the RAID array on the SSDs:
mdadm -v -C /dev/md/x24-nas-2:ssdcache -l 0 -n 2 /dev/sdg /dev/sdf --name=ssdcache
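Before going further it is worth confirming that the array assembled cleanly; a quick check:

```shell
# The new array should appear here in a healthy state
cat /proc/mdstat
# Detailed view of the array created above
mdadm --detail /dev/md/x24-nas-2:ssdcache
```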
Find the device you want to attach the SSD cache to:
vgdisplay
--- Volume group ---
VG Name trim_f9d13e76_9e61_4ae9_9c2d_5e4d1ba19e92
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 47.75 GiB
PE Size 4.00 MiB
Total PE 12225
Alloc PE / Size 12225 / 47.75 GiB
Free PE / Size 0 / 0
VG UUID xr9as6-5fdV-muqE-eAgj-9QIv-p1ZK-9WyO27
--- Volume group ---
VG Name trim_77e63fb8_7115_43dd_9515_d4b44fd6dcb6
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 5
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size <8.19 TiB
PE Size 4.00 MiB
Total PE 2146093
Alloc PE / Size 2145837 / <8.19 TiB
Free PE / Size 256 / 1.00 GiB
VG UUID UYknJE-IEz0-eQcC-nnkE-uEY7-dUtE-Au9Gl3
Here I pick trim_77e63fb8_7115_43dd_9515_d4b44fd6dcb6 and add the SSD array to that VG:
vgextend trim_77e63fb8_7115_43dd_9515_d4b44fd6dcb6 /dev/md/x24-nas-2:ssdcache
Check the result:
root@x24-nas-2:~# pvdisplay
--- Physical volume ---
PV Name /dev/md125
VG Name trim_f9d13e76_9e61_4ae9_9c2d_5e4d1ba19e92
PV Size <47.76 GiB / not usable 4.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 12225
Free PE 0
Allocated PE 12225
PV UUID orQ5ft-ad5M-LCOS-5AyQ-9l6V-8dj5-8aVrKl
--- Physical volume ---
PV Name /dev/md0
VG Name trim_77e63fb8_7115_43dd_9515_d4b44fd6dcb6
PV Size <8.19 TiB / not usable 2.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 2146093
Free PE 0
Allocated PE 2146093
PV UUID ea2JYi-rojz-5vxw-PTGi-Z3Yf-H8k2-LBrS5g
--- Physical volume ---
PV Name /dev/md127
VG Name trim_77e63fb8_7115_43dd_9515_d4b44fd6dcb6
PV Size <446.89 GiB / not usable 3.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 114403
Free PE 114403
Allocated PE 0
PV UUID ACvRKe-sEpk-Z5Gu-nIKB-7URj-6f2V-EHXIHw
Create the cache-meta LV on the RAID 5 array (on my system md0 is the RAID 5 array and md127 is the SSD array):
lvcreate -L300M -n cache-meta trim_77e63fb8_7115_43dd_9515_d4b44fd6dcb6 /dev/md0
Our SSD physical device is /dev/md127; now create the cache LV on it:
lvcreate -n cache -l100%PVS trim_77e63fb8_7115_43dd_9515_d4b44fd6dcb6 /dev/md127
Convert it into a cache pool:
lvconvert --type cache-pool --poolmetadata trim_77e63fb8_7115_43dd_9515_d4b44fd6dcb6/cache-meta trim_77e63fb8_7115_43dd_9515_d4b44fd6dcb6/cache
Attach the cache to the data LV:
lvconvert --type cache --cachepool trim_77e63fb8_7115_43dd_9515_d4b44fd6dcb6/cache --cachemode writethrough trim_77e63fb8_7115_43dd_9515_d4b44fd6dcb6/0
Check the result:
root@x24-nas-2:/vol1# lvs -a
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
0 trim_77e63fb8_7115_43dd_9515_d4b44fd6dcb6 Cwi-aoC--- <8.19t [cache_cpool] [0_corig] 1.54 2.43 0.00
[0_corig] trim_77e63fb8_7115_43dd_9515_d4b44fd6dcb6 owi-aoC--- <8.19t
[cache_cpool] trim_77e63fb8_7115_43dd_9515_d4b44fd6dcb6 Cwi---C--- <446.89g 1.54 2.43 0.00
[cache_cpool_cdata] trim_77e63fb8_7115_43dd_9515_d4b44fd6dcb6 Cwi-ao---- <446.89g
[cache_cpool_cmeta] trim_77e63fb8_7115_43dd_9515_d4b44fd6dcb6 ewi-ao---- 300.00m
[lvol0_pmspare] trim_77e63fb8_7115_43dd_9515_d4b44fd6dcb6 ewi------- 300.00m
0 trim_f9d13e76_9e61_4ae9_9c2d_5e4d1ba19e92 -wi-ao---- 47.75g
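If you later want faster writes you can switch the cache to writeback mode, and the cache can also be removed entirely; both are standard lvchange/lvconvert operations, sketched here with the VG name from this article:

```shell
# Switch to writeback (faster writes, but dirty blocks sit only on
# the SSD until flushed; an SSD failure can then lose data)
lvchange --cachemode writeback trim_77e63fb8_7115_43dd_9515_d4b44fd6dcb6/0

# Flush dirty blocks back to the origin LV and remove the cache
lvconvert --uncache trim_77e63fb8_7115_43dd_9515_d4b44fd6dcb6/0
```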
If you create the cache pool directly on the SSD instead, note that you must leave extra space for the metadata, so the cache LV cannot fill the whole device. How much to leave? In my testing a 446 GB cache used less than 50 MB of metadata; counting the spare copy of the metadata as well, 100 MB is plenty. The PE size here is 4 MiB, so that means reserving 25 PEs, leaving 114403 - 25 = 114378 usable PEs.
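The reservation arithmetic above, spelled out with the figures from this article:

```shell
RESERVE_MIB=100    # room left for cache metadata plus its spare copy
PE_MIB=4           # PE size reported by vgdisplay
TOTAL_PE=114403    # total PEs on the SSD PV
RESERVE_PE=$(( RESERVE_MIB / PE_MIB ))
USABLE_PE=$(( TOTAL_PE - RESERVE_PE ))
echo "$USABLE_PE"  # prints 114378
```

The resulting count would then go into lvcreate as `-l 114378` instead of `-l100%PVS`.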
Benchmarks with four drives in RAID 5.
Without the SSD cache:
root@x24-nas-2:/vol1# dd if=/dev/zero of=10g bs=1M count=10240
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 36.5574 s, 294 MB/s
With the SSD cache, writes are actually slower: the cache mode is writethrough, so the SSD offers no help for writes.
root@x24-nas-2:/vol1# dd if=/dev/zero of=10g bs=1M count=10240
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 39.3203 s, 273 MB/s
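A writethrough cache mainly helps repeated reads, so to measure any benefit you first have to drop the kernel page cache, otherwise you are only benchmarking RAM; a rough read-test sketch (run as root):

```shell
# Flush writes, drop the page cache, then re-read the test file.
# On repeated passes the blocks should be promoted into the SSD cache.
sync
echo 3 > /proc/sys/vm/drop_caches
dd if=10g of=/dev/null bs=1M
```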