Data striping with LVM is conceptually similar to RAID 0: it promises better performance by spreading I/O across several devices (at the cost of data protection, though).
LVM also allows mirroring data, like RAID 1 does (see this article about LVM mirroring).
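For reference, here's a minimal sketch of how a mirrored LV could be created (my example, not from the article above; the LV and VG names are placeholders):

```
# Create a 100 MiB logical volume with 1 mirror (i.e. 2 copies of the data)
# in an existing volume group. This needs at least 2 PVs with free space.
lvcreate --mirrors 1 --size 100M --name myMirroredLV myVolumeGroup
```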
```
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   12G  0 disk
├─sda1   8:1    0   11G  0 part /
├─sda2   8:2    0    1K  0 part
└─sda5   8:5    0 1022M  0 part [SWAP]
sdb      8:16   0  100M  0 disk      <== disk 1
sdc      8:32   0  100M  0 disk      <== disk 2
sdd      8:48   0  100M  0 disk      <== disk 3
sr0     11:0    1 1024M  0 rom
```
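The 3 disks are declared as LVM physical volumes. The exact command isn't shown here; given the note after the output below, it was presumably:

```
pvcreate /dev/sdb /dev/sdc /dev/sdd
```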
Physical volume "/dev/sdb" successfully created. Physical volume "/dev/sdc" successfully created. Physical volume "/dev/sdd" successfully created.
`pvcreate /dev/sd[bcd]` might have done the job as well.
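Then the volume group gathering the 3 physical volumes is created, presumably with:

```
vgcreate myVolumeGroup /dev/sd[bcd]
```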
Volume group "myVolumeGroup" successfully created
```
--- Volume group ---
VG Name               myVolumeGroup
System ID
Format                lvm2
Metadata Areas        3
Metadata Sequence No  1
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                0
Open LV               0
Max PV                0
Cur PV                3
Act PV                3
VG Size               288.00 MiB      <== approximately the sum of the 3 disks' sizes
PE Size               4.00 MiB
Total PE              72
Alloc PE / Size       0 / 0
Free PE / Size        72 / 288.00 MiB
VG UUID               D3vnxv-UV8u-fhkH-K63N-9Wpn-r9Ix-lOkda7
```
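Now the striped logical volume itself. The exact command isn't in my notes, but judging by the output below (3 stripes, 128 KiB stripe size, 200 MiB requested), it was presumably something like:

```
# 3 stripes = 1 per physical volume; the 200 MiB requested size gets
# rounded up to a stripe boundary (see the output below)
lvcreate --stripes 3 --stripesize 128k --size 200M --name myLogicalVolume myVolumeGroup
```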
```
Rounding size 200.00 MiB (50 extents) up to stripe boundary size 204.00 MiB (51 extents).
Logical volume "myLogicalVolume" created.
```
```
LV              VG            Attr       #Str Type    SSize
myLogicalVolume myVolumeGroup -wi-a-----    3 striped 204.00m
```
```
--- Logical volume ---
LV Path                /dev/myVolumeGroup/myLogicalVolume
LV Name                myLogicalVolume
VG Name                myVolumeGroup
LV UUID                vQlrkI-SNm8-qMtL-pyOi-f5XI-4oTh-yIFkEO
LV Write Access        read/write
LV Creation host, time myVirtualMachine, 2017-10-26 16:55:17 +0200
LV Status              available
# open                 0
LV Size                204.00 MiB
Current LE             51
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     1536
Block device           254:0

--- Segments ---
Logical extents 0 to 50:
  Type                striped
  Stripes             3
  Stripe size         128.00 KiB
  Stripe 0:
    Physical volume   /dev/sdb
    Physical extents  0 to 16
  Stripe 1:
    Physical volume   /dev/sdc
    Physical extents  0 to 16
  Stripe 2:
    Physical volume   /dev/sdd
    Physical extents  0 to 16
```
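The logical volume then gets a filesystem and is mounted. The commands below are assumed from the df output that follows:

```
mkfs.ext4 /dev/myVolumeGroup/myLogicalVolume
mkdir -p /mnt/myMountPoint
mount /dev/myVolumeGroup/myLogicalVolume /mnt/myMountPoint
df -h /mnt/myMountPoint
```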
```
Filesystem                                 Size  Used Avail Use% Mounted on
/dev/mapper/myVolumeGroup-myLogicalVolume  194M  1.8M  178M   1% /mnt/myMountPoint
```
/dev/myVolumeGroup/myLogicalVolume and /dev/mapper/myVolumeGroup-myLogicalVolume are both symlinks pointing to the same device node under /dev (/dev/dm-0 here, matching the "Block device 254:0" reported by lvdisplay above).
```
-rw-r--r-- 1 root root 160M Nov  3 15:53 /mnt/myMountPoint/myFile
```
```
Filesystem                                 Size  Used Avail Use% Mounted on
/dev/mapper/myVolumeGroup-myLogicalVolume  194M  162M   18M  91% /mnt/myMountPoint
```
Another example, with Docker's thin pool volumes:

```
lvs docker/thinpool docker/thinpoolmeta -o lv_name,lv_attr,modules,metadata_lv
```
```
LV           Attr       Modules Meta
thinpool     -wi-a-----
thinpoolmeta -wi-a-----
```
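Between this output and the next one, "thinpool" was converted into an actual thin pool. The conversion command isn't shown here; it was presumably something like:

```
# Convert "thinpool" into a thin pool, using "thinpoolmeta" as its metadata LV
# (command assumed, not from the original notes)
lvconvert --thinpool docker/thinpool --poolmetadata docker/thinpoolmeta
```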
```
LV           Attr       Modules   Meta
thinpool     twi-a-t--- thin-pool [thinpool_tmeta]
thinpoolmeta -wi-a-----
```
Here are the step-by-step instructions to resize my /home partition.
```
/dev/mapper/myLVM-homes on /home type ext4 (rw,relatime,data=ordered)
```
```
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/myLVM-homes  5.2T  2.0T  3.2T  39% /home
```
2TB are used so far, so a final size of 4TB looks ok.
```
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   12G  0 disk
├─sda1   8:1    0   11G  0 part /
├─sda2   8:2    0    1K  0 part
└─sda5   8:5    0 1022M  0 part [SWAP]
sdb      8:16   0  100M  0 disk
sdc      8:32   0  100M  0 disk
sdd      8:48   0  100M  0 disk
sr0     11:0    1 1024M  0 rom
```
```
# Create PVs, a VG and an LV spanning 100% of it, format it (ext4, default
# options), mount it, then create 16 x 10 MiB test files and record their
# checksums :
myDisks='/dev/sd[bcd]'; vgName='myVolumeGroup'; lvName='myLogicalVolume'
lvmDisk="/dev/$vgName/$lvName"; mountPoint='/mnt/mountPoint'
testFile="$mountPoint/testFile_"; md5File="$mountPoint/MD5SUM"
pvcreate $myDisks \
	&& vgcreate "$vgName" $myDisks \
	&& lvcreate -l 100%VG -n "$lvName" "$vgName" \
	&& mkfs.ext4 "$lvmDisk" \
	&& mkdir -p "$mountPoint" \
	&& mount "$lvmDisk" "$mountPoint" \
	&& df -h "$mountPoint" \
	&& for i in {01..16}; do fallocate -l 10m "$testFile$i"; done \
	&& df -h "$mountPoint"; md5sum "$testFile"* > "$md5File"
```
```
Filesystem                                 Size  Used Avail Use% Mounted on
/dev/mapper/myVolumeGroup-myLogicalVolume  271M  2.1M  251M   1% /mnt/mountPoint

Filesystem                                 Size  Used Avail Use% Mounted on
/dev/mapper/myVolumeGroup-myLogicalVolume  271M  163M  251M  65% /mnt/mountPoint
```
We now have ~270MB in total, with ~160MB used. Since a linear LV fills its physical volumes in order, this means the 1st disk is full, the 2nd disk is about half used and the 3rd is empty.
Our mission is now to remove the 2nd disk (/dev/sdc). To do so, we'll have to move the used space of this disk to the others.
```
PV         VG            Fmt  Attr PSize  PFree
/dev/sdb   myVolumeGroup lvm2 a--  96.00m    0
/dev/sdc   myVolumeGroup lvm2 a--  96.00m    0      <== the one I want to remove
/dev/sdd   myVolumeGroup lvm2 a--  96.00m    0
```
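First attempt: shrink the logical volume and its filesystem in a single step. The exact command isn't shown here, but judging by the fsadm/resize2fs output below it was something like:

```
# --resizefs makes lvreduce call fsadm, which calls resize2fs under the hood
# (command assumed from the output below)
lvreduce --resizefs --size 192M myVolumeGroup/myLogicalVolume
```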
```
fsck from util-linux 2.29.2
/dev/mapper/myVolumeGroup-myLogicalVolume: clean, 27/73728 files, 183415/294912 blocks
resize2fs: New size smaller than minimum (294912)
fsadm: Resize ext4 failed
fsadm failed: 1
Filesystem resize failed.
```
This is actually a known bug; `man resize2fs` says, under KNOWN BUGS:
> The minimum size of the filesystem as estimated by resize2fs may be incorrect, especially for filesystems with 1k and 2k blocksizes.
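So we have to compute the new filesystem size ourselves. The block count and block size shown below can be read with, for instance (hypothetical command; `tune2fs -l` and `dumpe2fs -h` both report these fields):

```
tune2fs -l /dev/myVolumeGroup/myLogicalVolume | grep -E '^Block (count|size)'
```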
```
Block count:              294912
Block size:               1024        <== i.e. 1 KiB
```
Let's do the maths:
- The filesystem currently spans 294912 blocks of 1 KiB each: 294912 x 1024 gives bytes, / 1024 gives KiB, / 1024 gives MiB, i.e. 288 MiB (the full logical volume).
- Removing one 96 MiB disk will leave 288 - 96 = 192 MiB, which is 192 x 1024 = 196608 blocks of 1 KiB.
- In LVM terms, 192 MiB is 192 / 4 = 48 physical extents, 4 MiB being the "PE Size 4.00 MiB" reported by vgdisplay above (we'll use this 48 later).
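The filesystem can now be shrunk to 196608 blocks of 1 KiB, presumably with (commands assumed; `-f` overrides the buggy minimum-size estimate, and shrinking ext4 requires an unmounted, freshly checked filesystem):

```
umount /mnt/mountPoint
e2fsck -f /dev/mapper/myVolumeGroup-myLogicalVolume
resize2fs -f /dev/mapper/myVolumeGroup-myLogicalVolume 196608
```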
```
Resizing the filesystem on /dev/mapper/myVolumeGroup-myLogicalVolume to 196608 (1k) blocks.
The filesystem on /dev/mapper/myVolumeGroup-myLogicalVolume is now 196608 (1k) blocks long.
```
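Now the logical volume itself can be reduced to the 48 extents computed earlier, presumably with:

```
lvreduce -l 48 myVolumeGroup/myLogicalVolume
```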
```
WARNING: Reducing active logical volume to 192.00 MiB.
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce myVolumeGroup/myLogicalVolume? [y/n]: y
Size of logical volume myVolumeGroup/myLogicalVolume changed from 288.00 MiB (72 extents) to 192.00 MiB (48 extents).
Logical volume myVolumeGroup/myLogicalVolume successfully resized.
```
```
PV         VG            Fmt  Attr PSize  PFree
/dev/sdb   myVolumeGroup lvm2 a--  96.00m      0
/dev/sdc   myVolumeGroup lvm2 a--  96.00m      0
/dev/sdd   myVolumeGroup lvm2 a--  96.00m 96.00m    <== free space here (double-check with pvdisplay -m)
```
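Time to empty /dev/sdc by moving its extents onto the free space, presumably with:

```
pvmove /dev/sdc
```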
```
/dev/sdc: Moved: %
/dev/sdc: Moved: %
/dev/sdc: Moved: 100.00%
```
```
PV         VG            Fmt  Attr PSize  PFree
/dev/sdb   myVolumeGroup lvm2 a--  96.00m      0
/dev/sdc   myVolumeGroup lvm2 a--  96.00m 96.00m    <== moved here...
/dev/sdd   myVolumeGroup lvm2 a--  96.00m      0    <== ... from here
```
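/dev/sdc holds no more extents and can now be removed from the volume group, presumably with:

```
vgreduce myVolumeGroup /dev/sdc
```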
Removed "/dev/sdc" from volume group "myVolumeGroup"
```
PV         VG            Fmt  Attr PSize   PFree
/dev/sdb   myVolumeGroup lvm2 a--   96.00m       0
/dev/sdc                 lvm2 ---  100.00m 100.00m    <== not allocatable anymore
/dev/sdd   myVolumeGroup lvm2 a--   96.00m       0
```
For details about attributes: `man pvs`, then search for `pv_attr` bits.
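Final check (commands assumed from the output below): the filesystem survived and the test files are intact:

```
df -h /mnt/mountPoint
md5sum -c /mnt/mountPoint/MD5SUM
```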
```
Filesystem                                 Size  Used Avail Use% Mounted on
/dev/mapper/myVolumeGroup-myLogicalVolume  178M  162M  3.0M  99% /mnt/mountPoint
```
```
/mnt/mountPoint/testFile_01: OK
/mnt/mountPoint/testFile_02: OK
...
/mnt/mountPoint/testFile_16: OK
```
No files were harmed in the process.
The whole procedure above has been performed on a test machine, where I created tiny disks on purpose (3x 100MB) to stay light on resources. Filesystems were created with default options, which led to 1 KiB blocks and difficulties with resize2fs. With 4 KiB blocks (the standard block size on any decent "data" volume), things are WAY easier:
```
# Same commands as before, but forcing 4 KiB filesystem blocks :
myDisks='/dev/sd[bcd]'; vgName='myVolumeGroup'; lvName='myLogicalVolume'
lvmDisk="/dev/$vgName/$lvName"; mountPoint='/mnt/mountPoint'
testFile="$mountPoint/testFile_"; md5File="$mountPoint/MD5SUM"
pvcreate $myDisks \
	&& vgcreate "$vgName" $myDisks \
	&& lvcreate -l 100%VG -n "$lvName" "$vgName" \
	&& mkfs.ext4 -b 4096 "$lvmDisk" \
	&& mkdir -p "$mountPoint" \
	&& mount "$lvmDisk" "$mountPoint" \
	&& df -h "$mountPoint" \
	&& for i in {01..16}; do fallocate -l 10m "$testFile$i"; done \
	&& df -h "$mountPoint"; md5sum "$testFile"* > "$md5File"
```
After replaying the shrink-and-remove procedure on this 4 KiB filesystem (this time emptying and removing /dev/sdd), here's the result:

```
--- Logical volume ---
LV Path                /dev/myVolumeGroup/myLogicalVolume
LV Name                myLogicalVolume
VG Name                myVolumeGroup
LV Size                192.00 MiB
Current LE             48
Segments               2

--- Segments ---
Logical extents 0 to 23:
  Type                linear
  Physical volume     /dev/sdb
  Physical extents    0 to 23

Logical extents 24 to 47:
  Type                linear
  Physical volume     /dev/sdc
  Physical extents    0 to 23
```
```
PV         VG            Fmt  Attr PSize   PFree   Start SSize
/dev/sdb   myVolumeGroup lvm2 a--   96.00m      0      0    24
/dev/sdc   myVolumeGroup lvm2 a--   96.00m      0      0    24
/dev/sdd                 lvm2 ---  100.00m 100.00m     0     0
```
```
LV              VG            Attr       #Str Type   SSize  Devices     SSize
myLogicalVolume myVolumeGroup -wi-a-----    1 linear 96.00m /dev/sdb(0) 24
myLogicalVolume myVolumeGroup -wi-a-----    1 linear 96.00m /dev/sdc(0) 24
```
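Judging by the column headers, the two listings above come from `pvs --segments` and `lvs --segments` (with extra columns requested via `-o`, e.g. `-o +devices`); the exact commands aren't in my notes, so take these as educated guesses.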
The LVM stack, from top to bottom:

| Stack level | What's there? | Example value | To get information on this (and learn about what's right below) |
|---|---|---|---|
| top | file system | /var/lib/mysql | `mount \| grep /var/lib/mysql` |
| ... | logical volume | /dev/mapper/vg1-mysql | `lvs /dev/mapper/vg1-mysql`, `lvdisplay /dev/mapper/vg1-mysql` |
| ... | volume group | vg1 | `pvs \| grep -E 'PV\|vg1'`, `pvdisplay \| grep -B2 -A7 vg1` |
| ... | physical volume | /dev/sdb | This is a special case since we're using the full disk as a physical volume (details). This could also be a partition: /dev/sdbx |
| bottom | hardware storage unit (hard drive) | | |

Sample outputs:

`mount | grep /var/lib/mysql`:
```
/dev/mapper/vg1-mysql on /var/lib/mysql type ext4 (rw,noatime,user_xattr,barrier=1,data=ordered)
```

`lvs /dev/mapper/vg1-mysql`:
```
LV    VG  Attr     LSize  Pool Origin Data%  Move Log Copy%  Convert
mysql vg1 -wi-ao-- 25,00g
```

`lvdisplay /dev/mapper/vg1-mysql`:
```
--- Logical volume ---
LV Path                /dev/vg1/mysql
LV Name                mysql
VG Name                vg1
LV UUID                5wJtup-Nedb-cMIM-Xi6N-V4Vf-6Nc0-hJfLfB
LV Write Access        read/write
LV Creation host, time myServer, 2015-11-05 11:29:48 +0100
LV Status              available
# open                 1
LV Size                25,00 GiB
Current LE             6399
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           254:2
```

`pvs | grep -E 'PV|vg1'`:
```
PV       VG  Fmt  Attr PSize  PFree
/dev/sdb vg1 lvm2 a--  25,00g    0
```
A volume group is usually made of several distinct physical volumes:
```
PV        VG         Fmt  Attr PSize  PFree
/dev/sda3 caramba-vg lvm2 a--  40.00g 19.17g
/dev/sda5 caramba-vg lvm2 a--  39.76g      0
```

`pvdisplay | grep -B2 -A7 vg1`:
```
--- Physical volume ---
PV Name               /dev/sdb
VG Name               vg1
PV Size               25,00 GiB / not usable 4,00 MiB
Allocatable           yes (but full)
PE Size               4,00 MiB
Total PE              6399
Free PE               0
Allocated PE          6399
PV UUID               60OAZ0-XRcm-YF5k-2EA2-4V5W-6wMI-WclhqN
```
A quick loop to display the whole LVM stack (mount point, logical volume, volume group, physical volumes) for a list of mount points:

```
spaceSeparatedListOfMountPoints='/home /tmp /var'
for mountPoint in $spaceSeparatedListOfMountPoints; do
	echo -e "\nMount point :\t\t$mountPoint"
	logicalVolume=$(mount | grep "$mountPoint " | cut -d' ' -f1)
	echo -e "Logical volume :\t$logicalVolume"
	volumeGroup=$(lvs --noheadings "$logicalVolume" | awk '{print $2}')
	echo -e "Volume group :\t\t$volumeGroup"
	physicalVolumes=$(pvs | awk '/'$volumeGroup'/ {print $1}' | xargs)
	echo -e "Physical volume(s) :\t$physicalVolumes"
	echo -e "\tPV\t\tSize\tFree"
	for volume in $physicalVolumes; do
		pvs --noheadings "$volume" | awk '{print "\t"$1"\t"$5"\t"$6}'
	done
done
```
```
Mount point :           /home
Logical volume :        /dev/mapper/caramba--vg-home
Volume group :          caramba-vg
Physical volume(s) :    /dev/sda3 /dev/sda5
        PV              Size    Free
        /dev/sda3       40.00g  19.17g
        /dev/sda5       39.76g  0

Mount point :           /tmp
Logical volume :        /dev/mapper/caramba--vg-tmp
Volume group :          caramba-vg
Physical volume(s) :    /dev/sda3 /dev/sda5
        PV              Size    Free
        /dev/sda3       40.00g  19.17g
        /dev/sda5       39.76g  0

Mount point :           /var
Logical volume :        /dev/mapper/caramba--vg-var
Volume group :          caramba-vg
Physical volume(s) :    /dev/sda3 /dev/sda5
        PV              Size    Free
        /dev/sda3       40.00g  19.17g
        /dev/sda5       39.76g  0
```
```
NAME                    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                       8:0    0    80G  0 disk
├─sda1                    8:1    0   243M  0 part /boot
├─sda2                    8:2    0     1K  0 part
├─sda3                    8:3    0    40G  0 part
│ ├─caramba--vg-root    254:0    0    20G  0 lvm  /
│ └─caramba--vg-var     254:1    0    12G  0 lvm  /var
└─sda5                    8:5    0  39.8G  0 part
  ├─caramba--vg-root    254:0    0    20G  0 lvm  /
  ├─caramba--vg-var     254:1    0    12G  0 lvm  /var
  ├─caramba--vg-swap_1  254:2    0   3.8G  0 lvm  [SWAP]
  ├─caramba--vg-tmp     254:3    0   380M  0 lvm  /tmp
  └─caramba--vg-home    254:4    0  24.4G  0 lvm  /home
sdb                       8:16   0 596.2G  0 disk
└─sdb1                    8:17   0 596.2G  0 part /media/usb
sr0                      11:0    1  1024M  0 rom
```