LVM - Logical Volume Management

How to set up LVM with striping?

Data striping with LVM is (theoretically) similar to RAID 0: it promises better performance by spreading I/O across several devices (at the cost of data protection, though).
LVM can also mirror data, like RAID 1 does: article about LVM mirroring

  • When looking for redundancy (RAIDx, LVM on RAID, ...), the redundancy layer must sit closest to the hardware (source).
  • In a general-purpose workstation context, there is no real guarantee that LVM striping will increase I/O throughput: it depends on many other factors (hardware, file size, number of files, concurrent I/O, ...) (details)
  • LVM is designed for flexibility of storage management (add, remove, extend, ...), whereas RAID offers performance and reliability.

I tested all this on a virtual machine. The first step was to create 3 new virtual disks.
  1. apt install lvm2
  2. lsblk
    NAME	MAJ:MIN	RM	SIZE	RO	TYPE	MOUNTPOINT
    sda	8:0	0	12G	0	disk
    ├─sda1	8:1	0	11G	0	part	/
    ├─sda2	8:2	0	1K	0	part
    └─sda5	8:5	0	1022M	0	part	[SWAP]
    sdb	8:16	0	100M	0	disk			disk 1
    sdc	8:32	0	100M	0	disk			disk 2
    sdd	8:48	0	100M	0	disk			disk 3
    sr0	11:0	1	1024M	0	rom
  3. for device in sdb sdc sdd; do pvcreate "/dev/$device"; done
    Physical volume "/dev/sdb" successfully created.
    Physical volume "/dev/sdc" successfully created.
    Physical volume "/dev/sdd" successfully created.

    pvcreate /dev/sd[bcd] might have done the job as well

  4. vgcreate myVolumeGroup /dev/sd{b,c,d}
    Volume group "myVolumeGroup" successfully created
  5. vgdisplay
    --- Volume group ---
    VG Name			myVolumeGroup
    System ID
    Format			lvm2
    Metadata Areas		3
    Metadata Sequence No	1
    VG Access		read/write
    VG Status		resizable
    MAX LV			0
    Cur LV			0
    Open LV			0
    Max PV			0
    Cur PV			3
    Act PV			3
    VG Size			288.00 MiB		approximately the sum of the 3 disks' sizes
    PE Size			4.00 MiB
    Total PE		72
    Alloc PE / Size		0 / 0
    Free  PE / Size		72 / 288.00 MiB
    VG UUID			D3vnxv-UV8u-fhkH-K63N-9Wpn-r9Ix-lOkda7
  6. lvcreate -i3 -I128 -L200M -n myLogicalVolume myVolumeGroup
    The 128KB stripe size is arbitrary since I've not (yet) been able to find a generic guide to compute this value.
    Rounding size 200.00 MiB (50 extents) up to stripe boundary size 204.00 MiB (51 extents).
    Logical volume "myLogicalVolume" created.
  7. Let's see what we have:
    • lvs --segments
      LV			VG		Attr		#Str	Type		SSize
      myLogicalVolume		myVolumeGroup	-wi-a-----	3	striped		204.00m
    • details about a striped logical volume:
      lvdisplay -m
      --- Logical volume ---
      LV Path			/dev/myVolumeGroup/myLogicalVolume
      LV Name			myLogicalVolume
      VG Name			myVolumeGroup
      LV UUID			vQlrkI-SNm8-qMtL-pyOi-f5XI-4oTh-yIFkEO
      LV Write Access		read/write
      LV Creation host, time	myVirtualMachine, 2017-10-26 16:55:17 +0200
      LV Status		available
      # open			0
      LV Size			204.00 MiB
      Current LE		51
      Segments		1
      Allocation		inherit
      Read ahead sectors	auto
      - currently set to	1536
      Block device		254:0
      
      --- Segments ---
      Logical extents 0 to 50:
      	Type				striped
      	Stripes				3
      	Stripe size			128.00 KiB
      	Stripe 0:
      		Physical volume		/dev/sdb
      		Physical extents	0 to 16
      	Stripe 1:
      		Physical volume		/dev/sdc
      		Physical extents	0 to 16
      	Stripe 2:
      		Physical volume		/dev/sdd
      		Physical extents	0 to 16
  8. Time to create a filesystem:
    mkfs.ext4 /dev/myVolumeGroup/myLogicalVolume
  9. mountPoint='/mnt/myMountPoint'; mkdir -p "$mountPoint"; mount /dev/myVolumeGroup/myLogicalVolume "$mountPoint"; df -h "$mountPoint"
    Filesystem					Size	Used	Avail	Use%	Mounted on
    /dev/mapper/myVolumeGroup-myLogicalVolume	194M	1.8M	178M	1%	/mnt/myMountPoint

    /dev/myVolumeGroup/myLogicalVolume and /dev/mapper/myVolumeGroup-myLogicalVolume are both symlinks pointing to the same /dev/dm-X device node.

  10. Now let's write some data:
    testFile="$mountPoint/myFile"; fallocate -l 160m "$testFile"; ls -lh "$testFile"; df -h "$mountPoint"
    -rw-r--r--	1	root	root	160M	Nov	3	15:53	/mnt/myMountPoint/myFile
    
    Filesystem					Size	Used	Avail	Use%	Mounted on
    /dev/mapper/myVolumeGroup-myLogicalVolume	194M	162M	18M	91%	/mnt/myMountPoint

thin pool, thin volume

Logical volumes can be thinly provisioned. This makes it possible to create logical volumes that are larger than the available extents: the physical storage can be over-committed, with logical volumes growing as they are written to.
thin pool
storage space containing thin volumes; it can be expanded dynamically.
thin volume
a special flavour of logical volume: thin volumes are not actually as big as their "official" size; instead, they only consume space as they are written to.

How to detect whether a logical volume is a "regular" or a "thin" one?

Just read it from the logical volume attributes:

lvs docker/thinpool docker/thinpoolmeta -o lv_name,lv_attr,modules,metadata_lv

  • regular:
    LV		Attr		Modules		Meta
    thinpool	-wi-a-----
    thinpoolmeta	-wi-a-----
  • thin:
    LV		Attr		Modules		Meta
    thinpool	twi-a-t---	thin-pool	[thinpool_tmeta]
    thinpoolmeta	-wi-a-----
    • the 1st t (bit 1) means the volume type is thin pool
    • the 2nd t (bit 7) means the target type is thin
    For details: man lvs, then search for lv_attr bits.
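Those two bits can also be checked programmatically. A small sketch slicing the lv_attr string with cut (the attr values come from the outputs above; the function name and the "thin pool" / "regular" labels are mine):

```shell
#!/bin/sh
# Classify a logical volume from its lv_attr string (see `man lvs`,
# lv_attr bits). Bit 1 (1st character) 't' = volume type is thin pool;
# bit 7 (7th character) 't' = target type is thin.
lv_kind() {
    attr=$1
    vol_type=$(echo "$attr" | cut -c1)
    target=$(echo "$attr" | cut -c7)
    if [ "$vol_type" = "t" ] && [ "$target" = "t" ]; then
        echo "thin pool"
    else
        echo "regular"
    fi
}

lv_kind 'twi-a-t---'   # -> thin pool
lv_kind '-wi-a-----'   # -> regular
```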
I hit this thin / regular volume subject while configuring Docker. After the lvconvert step (see previous link), there should be no thinpoolmeta logical volume anymore. It is still visible in my examples above because of an Ansible playbook that was not actually idempotent.

How to add a disk (a physical volume) to an existing logical volume?

Let's say we want to add the physical volume /dev/sdb to the /dev/myLVM/homes logical volume:

  1. Create a new partition on this new drive:
    fdisk /dev/sdb
  2. There is no need to format it with mkfs: pvcreate (below) initializes the partition itself and will wipe any existing filesystem signature.

Steps above are optional if you wish to use the entire disk (details)

  1. Make it a physical volume:
    pvcreate /dev/sdb1
  2. Add this new volume to the existing volume group myLVM:
    vgextend myLVM /dev/sdb1
  3. Assign free space to the existing logical volume /dev/myLVM/homes (this syntax is a special case, details):
    lvextend /dev/myLVM/homes /dev/sdb1
  4. Unmount the filesystem, check it (fsck), then resize it (resize2fs) accordingly.
    It is possible to enlarge a logical volume and the filesystem on it at the same time (and without unmounting it!).
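That last remark deserves a sketch: lvextend's -r (--resizefs) option calls fsadm to grow the filesystem right after the logical volume, so for ext4 no unmount is needed at all. Since the real commands require root and an actual volume group, this sketch only prints the plan (the function name is mine, the VG/LV names come from this section):

```shell
#!/bin/sh
# Print the commands that would add a PV to a VG, then grow an LV and its
# filesystem in one go (`lvextend -r` resizes the filesystem via fsadm).
# Dry run only: the real commands need root and existing devices.
plan_extend() {
    vg=$1; lv=$2; pv=$3
    echo "pvcreate $pv"              # initialize the disk/partition for LVM
    echo "vgextend $vg $pv"          # add it to the volume group
    echo "lvextend -r $vg/$lv $pv"   # grow the LV + filesystem together
}

plan_extend myLVM homes /dev/sdb1
```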

How to set up an LVM architecture?

apt install lvm2
it all starts here
pvcreate /dev/sda1
initialize a physical volume (disk or partition) for use by LVM
You can also use an entire unpartitioned, unformatted drive if you like: pvcreate /dev/sdx
vgcreate myVolumeGroup /dev/sda1 /dev/sdb1
create a volume group named myVolumeGroup with /dev/sda1 and /dev/sdb1
view details of volume groups
lvcreate -n myLogicalVolume --size 1g myVolumeGroup
create a 1GB logical volume named myLogicalVolume in the existing volume group myVolumeGroup
view details of logical volumes
mkfs ...
this is why you did all this

How to reduce a logical volume?

Here are the step-by-step instructions to resize my /home partition.

Preliminary steps:

  1. Consider cleaning up old / unused / dusty / obsolete / ... data from /home
  2. Backup everything important from /home
  3. Get the device name:
    mount | grep /home
    /dev/mapper/myLVM-homes on /home type ext4 (rw,relatime,data=ordered)
  4. Determine the new filesystem size, considering the currently available free space (you can only remove free space, but you'll still want some of it afterwards).
    lvreduce's manual is not very explicit, but it sounds like it may actually let you remove used storage space (and destroy data), hence the safety margin below:
    df -h /home
    Filesystem		Size	Used	Avail	Use%	Mounted on
    /dev/mapper/myLVM-homes	5.2T	2.0T	3.2T	39%	/home
    2 TB are used so far, so a final size of 4 TB looks OK.
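That sizing decision can be encoded as a quick sanity check: the target size must leave room for the data already there, plus some headroom. A sketch (the helper name and the 512 GiB headroom are arbitrary choices of mine, not an LVM feature):

```shell
#!/bin/sh
# Sanity-check a shrink target: the new size must exceed the space already
# used plus some headroom. Sizes are plain numbers in GiB.
shrink_target_ok() {
    used_gib=$1; target_gib=$2; headroom_gib=$3
    if [ $((used_gib + headroom_gib)) -le "$target_gib" ]; then
        echo yes
    else
        echo no
    fi
}

shrink_target_ok 2048 4096 512   # 2 TiB used, 4 TiB target -> yes
shrink_target_ok 2048 2048 512   # shrinking to exactly the used size -> no
```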

Honey, I shrunk my /home:

  1. Unmount the target filesystem: umount /home
  2. Check it:
    fsck.ext4 -f /dev/mapper/myLVM-homes
    • Use the fsck flavour matching your filesystem type
    • All fsck steps must be OK
  3. Now let's resize it:
    resize2fs /dev/mapper/myLVM-homes 4096G
  4. Then reduce the logical volume holding it:
    lvreduce --size 4096G /dev/mapper/myLVM-homes
  5. Check that it worked:
    • vgdisplay | grep Size
      VG Size			5.23 TiB		/home was 100% of that
      PE Size			4.00 MiB
      Alloc PE / Size		1054536 / 4.02 TiB	now /home is this
      Free  PE / Size		316641 / 1.21 TiB	this is available for new logical volumes
    • shorter version: vgs
      VG	#PV	#LV	#SN	Attr	VSize	VFree
      myLVM	3	2	0	wz--n-	5.23t	1.21t
  6. Re-check the filesystem
  7. And now, you can re-mount the resized volume : mount /dev/mapper/myLVM-homes /home
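To grab the freed space from vgs output in a script rather than by eye, awk can pick the VFree column. The sample here-string below reuses the post-shrink vgs output from step 5:

```shell
#!/bin/sh
# Extract the VFree column from `vgs`-style output.
# Sample taken from the post-shrink check above.
vgs_output='VG    #PV #LV #SN Attr   VSize VFree
myLVM 3   2   0   wz--n- 5.23t 1.21t'

# VFree is the last whitespace-separated field of the data row.
vfree=$(echo "$vgs_output" | awk 'NR == 2 { print $NF }')
echo "$vfree"
```

In real use, vgs --noheadings -o vg_free myLVM should avoid the parsing entirely.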

How to display used devices / free space when using LVM?

  1. lvdisplay: show logical volumes.
  2. vgdisplay: show volume groups (including available free space). Check the Free PE / Size line.
  3. pvdisplay: show physical volumes. Check the Free PE line of each PV.
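For scripting, those figures can be extracted rather than read by eye. A sketch parsing the "Free PE / Size" line of a vgdisplay-style sample (taken from the output earlier on this page):

```shell
#!/bin/sh
# Pull the free-extent count out of `vgdisplay`-style output.
# On the "Free  PE / Size" line, the 5th whitespace-separated field
# is the number of free physical extents.
vgdisplay_output='VG Size       288.00 MiB
PE Size       4.00 MiB
Total PE      72
Alloc PE / Size  0 / 0
Free  PE / Size  72 / 288.00 MiB'

free_pe=$(echo "$vgdisplay_output" | awk '/^Free/ { print $5 }')
echo "$free_pe"
```

(vgs -o vg_free_count should report the same number without any parsing.)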

How to add a disk (a physical volume) to an existing volume group?

Let's say we want to add the physical volume /dev/sdc to the root_vg volume group:

  1. Make sure /dev/sdc is not already mounted or otherwise in use:
    mount | grep "/dev/sdc"
  2. Make /dev/sdc a physical volume ready for LVM:
    pvcreate /dev/sdc
  3. Get the "before" status: vgs
    VG          #PV #LV #SN Attr   VSize   VFree
    data_vg       1   8   0 wz--n- 100.00g  82.93g
    root_vg       2   7   0 wz--n-  29.84g 896.00m
    root_vg_alt   1   7   0 wz--n-  19.88g 896.00m
  4. Extend the volume group with this new physical volume:
    vgextend root_vg /dev/sdc
  5. Get the "after" status: vgs
    VG          #PV #LV #SN Attr   VSize   VFree
    data_vg       1   8   0 wz--n- 100.00g  82.93g
    root_vg       2   7   0 wz--n-  29.84g  10.84g
    root_vg_alt   1   7   0 wz--n-  19.88g 896.00m

How to remove a disk from a logical volume ?

The idea

Let's consider a machine with 3 data disks (3x 100 MB virtual disks) concatenated into one large logical volume via linear LVM. The setup is pretty similar to what was done while playing with striped LVM, which is why I won't give too many details on the commands used to create the "BEFORE" picture.
This time, our goal is to remove the second disk: /dev/sdc.
  1. lsblk
    NAME	MAJ:MIN	RM	SIZE	RO	TYPE	MOUNTPOINT
    sda	8:0	0	12G	0	disk
    ├─sda1	8:1	0	11G	0	part	/
    ├─sda2	8:2	0	1K	0	part
    └─sda5	8:5	0	1022M	0	part	[SWAP]
    sdb	8:16	0	100M	0	disk
    sdc	8:32	0	100M	0	disk
    sdd	8:48	0	100M	0	disk
    sr0	11:0	1	1024M	0	rom
  2. Now let's do our LVM magic. We'll get a ~270MB volume that will be filled with 16x 10MB files:
    myDisks='/dev/sd[bcd]'; vgName='myVolumeGroup'; lvName='myLogicalVolume'; lvmDisk="/dev/$vgName/$lvName"; mountPoint='/mnt/mountPoint'; testFile="$mountPoint/testFile_"; md5File="$mountPoint/MD5SUM"; pvcreate $myDisks && vgcreate "$vgName" $myDisks && lvcreate -l 100%VG -n "$lvName" "$vgName" && mkfs.ext4 "$lvmDisk" && mkdir -p "$mountPoint" && mount "$lvmDisk" "$mountPoint" && df -h "$mountPoint" && for i in {01..16}; do fallocate -l 10m "$testFile$i"; done && df -h "$mountPoint"; md5sum "$testFile"* > "$md5File"
    Filesystem					Size	Used	Avail	Use%	Mounted on
    /dev/mapper/myVolumeGroup-myLogicalVolume	271M	2.1M	251M	1%	/mnt/mountPoint
    Filesystem					Size	Used	Avail	Use%	Mounted on
    /dev/mapper/myVolumeGroup-myLogicalVolume	271M	163M	251M	65%	/mnt/mountPoint
  3. To quickly destroy everything and retry the step above:
    umount "$mountPoint" && vgremove -f "$vgName" && pvremove $myDisks

We now have ~270 MB in total, with ~160 MB used. Since allocation is linear, this means the 1st disk is full, the 2nd disk is ~50% used and the 3rd is empty.
Our mission is now to remove the 2nd disk (/dev/sdc). To do so, we'll have to move the used extents of this disk onto the others.

Let's free 1 disk

  1. Let's see what we have:
    pvs
    PV		VG		Fmt	Attr	PSize	PFree
    /dev/sdb	myVolumeGroup	lvm2	a--	96.00m	0
    /dev/sdc	myVolumeGroup	lvm2	a--	96.00m	0
    /dev/sdd	myVolumeGroup	lvm2	a--	96.00m	0
  2. Theoretically, this should suffice:
    umount "$mountPoint" && lvreduce -r --extents 48 "$vgName/$lvName" && vgreduce "$vgName" /dev/sdd
    ... but it actually doesn't.
    SPOILER ALERT: this is caused by special conditions in my test environment (details)
    The part that fails is:
    lvreduce -r --extents 48 "$vgName/$lvName"
    fsck from util-linux 2.29.2
    /dev/mapper/myVolumeGroup-myLogicalVolume: clean, 27/73728 files, 183415/294912 blocks
    	resize2fs: New size smaller than minimum (294912)
    
    fsadm: Resize ext4 failed
    	fsadm failed: 1
    	Filesystem resize failed.
  3. There's a known bug in resize2fs:
    Known Bugs
    The minimum size of the filesystem as estimated by resize2fs may be incorrect, especially for filesystems with 1k and 2k blocksizes.
    What is the current block size (source)?
    dumpe2fs -h /dev/mapper/myVolumeGroup-myLogicalVolume | grep -E 'Block (count|size)'
    Block count:		294912
    Block size:		1024		1 KiB
    Let's do the maths:
    294912 blocks of 1 KiB each
    	x 1024 gives bytes
    	/ 1024 gives KiB
    	/ 1024 gives MiB
    echo 294912/1024 | bc
    288
    resize2fs computes a wrong minimum size (telling me the current size is already the minimum size), then refuses to shrink the filesystem. We'll have to compute the final size ourselves and somewhat "force it".
  4. Let's compute the final size, in number of blocks:
    • Our target is to use 2 out of the 3 drives: 2x 96 MiB = 192 MiB (96 MiB is the PV size reported by pvs)
    • 192 MiB of 1 KiB blocks: echo 192*1024 | bc
      196608
    • which also gives, in number of extents: echo 192/4 | bc
      48
      with 4 MiB being the PE Size reported by:
      vgdisplay | grep 'PE Size'
      PE Size		4.00 MiB
      (we'll use this 48 later)
  5. Now that we have all the numbers, we're ready to go:
    umount "$mountPoint" && resize2fs -f /dev/mapper/myVolumeGroup-myLogicalVolume 196608
    Resizing the filesystem on /dev/mapper/myVolumeGroup-myLogicalVolume to 196608 (1k) blocks.
    The filesystem on /dev/mapper/myVolumeGroup-myLogicalVolume is now 196608 (1k) blocks long.
  6. Done with shrinking the filesystem. Let's go for the logical volume:
    lvreduce --extents 48 "$vgName/$lvName"
    	WARNING: Reducing active logical volume to 192.00 MiB.
    	THIS MAY DESTROY YOUR DATA (filesystem etc.)
    Do you really want to reduce myVolumeGroup/myLogicalVolume? [y/n]: y
    	Size of logical volume myVolumeGroup/myLogicalVolume changed from 288.00 MiB (72 extents) to 192.00 MiB (48 extents).
    	Logical volume myVolumeGroup/myLogicalVolume successfully resized.
  7. Check:
    pvs
    PV		VG		Fmt	Attr	PSize	PFree
    /dev/sdb	myVolumeGroup	lvm2	a--	96.00m	0
    /dev/sdc	myVolumeGroup	lvm2	a--	96.00m	0
    /dev/sdd	myVolumeGroup	lvm2	a--	96.00m	96.00m		free space here (double check with pvdisplay -m)
    We've almost reached our goal: 1 of the 3 disks is now free, but not (yet) the right one.
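The size arithmetic from steps 3 to 5 generalizes to a few conversions between filesystem blocks (as reported by dumpe2fs), MiB, and LVM extents (vgdisplay's PE Size). A sketch, with the numbers from this walkthrough:

```shell
#!/bin/sh
# Conversions between filesystem blocks, MiB and LVM physical extents.
blocks_to_mib()  { echo $(( $1 * $2 / 1024 / 1024 )); }   # $1 blocks of $2 bytes
mib_to_blocks()  { echo $(( $1 * 1024 * 1024 / $2 )); }   # $1 MiB in $2-byte blocks
mib_to_extents() { echo $(( $1 / $2 )); }                 # $2 = PE size in MiB

blocks_to_mib 294912 1024   # current fs: 294912 1 KiB blocks -> 288 MiB
mib_to_blocks 192 1024      # target: 192 MiB -> 196608 1 KiB blocks
mib_to_extents 192 4        # target: 192 MiB -> 48 extents
```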

Let's make /dev/sdc free

  1. Let's proceed by moving the used space from /dev/sdc to /dev/sdd. This is slow as hell!
    pvmove /dev/sdc /dev/sdd
    /dev/sdc: Moved: ...%
    /dev/sdc: Moved: ...%
    /dev/sdc: Moved: 100.00%
  2. Check:
    pvs
    PV		VG		Fmt	Attr	PSize	PFree
    /dev/sdb	myVolumeGroup	lvm2	a--	96.00m	0
    /dev/sdc	myVolumeGroup	lvm2	a--	96.00m	96.00m		moved here...
    /dev/sdd	myVolumeGroup	lvm2	a--	96.00m	0		... from here
  3. Now we can declare that /dev/sdc is no longer part of the "myVolumeGroup" volume group:
    vgreduce "$vgName" /dev/sdc
    Removed "/dev/sdc" from volume group "myVolumeGroup"
  4. Check:
    pvs
    PV		VG		Fmt	Attr	PSize	PFree
    /dev/sdb	myVolumeGroup	lvm2	a--	96.00m	0
    /dev/sdc			lvm2	---	100.00m	100.00m		not allocatable anymore
    /dev/sdd	myVolumeGroup	lvm2	a--	96.00m	0
    For details about attributes: man pvs, then search for pv_attr bits.
  5. mount "$lvmDisk" "$mountPoint" && df -h "$mountPoint" && md5sum -c "$md5File"
    Filesystem					Size	Used	Avail	Use%	Mounted on
    /dev/mapper/myVolumeGroup-myLogicalVolume	178M	162M	3.0M	99%	/mnt/mountPoint
    
    /mnt/mountPoint/testFile_01: OK
    /mnt/mountPoint/testFile_02: OK
    ...
    /mnt/mountPoint/testFile_16: OK
    No files were harmed in the process

Final test: what if filesystems were created with 4 KB blocks?

The whole procedure above was performed on a test machine, where I created tiny disks on purpose (3x 100 MB) to stay light on resources. Filesystems were created with default options, which led to 1 KB blocks and difficulties with resize2fs. With 4 KB blocks (the standard block size on any decent "data" volume), things are WAY easier:

  1. myDisks='/dev/sd[bcd]'; vgName='myVolumeGroup'; lvName='myLogicalVolume'; lvmDisk="/dev/$vgName/$lvName"; mountPoint='/mnt/mountPoint'; testFile="$mountPoint/testFile_"; md5File="$mountPoint/MD5SUM"; pvcreate $myDisks && vgcreate "$vgName" $myDisks && lvcreate -l 100%VG -n "$lvName" "$vgName" && mkfs.ext4 -b 4096 "$lvmDisk" && mkdir -p "$mountPoint" && mount "$lvmDisk" "$mountPoint" && df -h "$mountPoint" && for i in {01..16}; do fallocate -l 10m "$testFile$i"; done && df -h "$mountPoint"; md5sum "$testFile"* > "$md5File"
  2. umount "$mountPoint" && lvreduce -r --extents 48 "$vgName/$lvName" && vgreduce "$vgName" /dev/sdd
  3. mount "$lvmDisk" "$mountPoint" && df -h "$mountPoint" && md5sum -c "$md5File"

About segments

Here's a good example of what segments are:
lvdisplay -m
--- Logical volume ---
LV Path			/dev/myVolumeGroup/myLogicalVolume
LV Name			myLogicalVolume
VG Name			myVolumeGroup
...
LV Size			192.00 MiB
Current LE		48
Segments		2
...

--- Segments ---
Logical extents 0 to 23:
	Type			linear
	Physical volume		/dev/sdb
	Physical extents	0 to 23

Logical extents 24 to 47:
	Type			linear
	Physical volume		/dev/sdc
	Physical extents	0 to 23
pvs --segments
PV		VG		Fmt	Attr	PSize	PFree	Start	SSize
/dev/sdb	myVolumeGroup	lvm2	a--	96.00m	0	0	24
/dev/sdc	myVolumeGroup	lvm2	a--	96.00m	0	0	24
/dev/sdd			lvm2	---	100.00m	100.00m	0	0
lvs --segments -o +devices,seg_size_pe
LV			VG		Attr		#Str	Type	SSize	Devices		SSize
myLogicalVolume		myVolumeGroup	-wi-a-----	1	linear	96.00m	/dev/sdb(0)	24
myLogicalVolume		myVolumeGroup	-wi-a-----	1	linear	96.00m	/dev/sdc(0)	24

LVM glossary

Logical Extent (LE)
Each logical volume is split into chunks of data, known as logical extents. The extent size is the same for all logical volumes in the volume group.
Logical Volume (LV)
The equivalent of a disk partition in a non-LVM system. The LV is visible as a standard block device; as such the LV can contain a file system (eg. /home).
Volume Group (VG)
The Volume Group is the highest level abstraction used within the LVM. It gathers together a collection of logical volumes and physical volumes into one administrative unit.
Physical Extent (PE)
Each physical volume is divided into chunks of data, known as physical extents; these extents have the same size as the logical extents of the volume group.
Physical Volume (PV)
A physical volume is typically a hard disk, though it may well just be a device that 'looks' like a hard disk (eg. a software raid device).
Extent
A contiguous area of storage reserved for a file in a file system, represented as a range. A file can consist of zero or more extents; one file fragment requires one extent. The direct benefit is in storing each range compactly as two numbers, instead of canonically storing every block number in the range. Also, extent allocation results in less file fragmentation.
Segment
Contiguous allocation of space on a physical volume, i.e. uninterrupted sequence of physical extents (source : man pvs + search --segments)
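As a sanity check of the definitions above: a logical volume's size is simply its extent count times the extent size, which matches the lvdisplay outputs shown earlier (51 LE × 4 MiB = 204 MiB, 48 LE × 4 MiB = 192 MiB). The helper name is mine:

```shell
#!/bin/sh
# LV size = "Current LE" (lvdisplay) x "PE Size" (vgdisplay), in MiB.
# LE and PE have the same size within a volume group.
lv_size_mib() { echo $(( $1 * $2 )); }   # $1 = Current LE, $2 = PE size in MiB

lv_size_mib 51 4   # striped LV from the first walkthrough -> 204 MiB
lv_size_mib 48 4   # linear LV after shrinking -> 192 MiB
```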

How things stack up :

Stack level / What's there? / Example value / How to get information on it (and learn about what's right below) / Sample output
top file system /var/lib/mysql mount | grep /var/lib/mysql
/dev/mapper/vg1-mysql on /var/lib/mysql type ext4 (rw,noatime,user_xattr,barrier=1,data=ordered)
... logical volume /dev/mapper/vg1-mysql lvs /dev/mapper/vg1-mysql
LV    VG   Attr     LSize  Pool Origin Data%  Move Log Copy%  Convert
mysql vg1  -wi-ao-- 25,00g
lvdisplay /dev/mapper/vg1-mysql
--- Logical volume ---
	LV Path                /dev/vg1/mysql
	LV Name                mysql
	VG Name                vg1
	LV UUID                5wJtup-Nedb-cMIM-Xi6N-V4Vf-6Nc0-hJfLfB
	LV Write Access        read/write
	LV Creation host, time myServer, 2015-11-05 11:29:48 +0100
	LV Status              available
	# open                 1
	LV Size                25,00 GiB
	Current LE             6399
	Segments               1
	Allocation             inherit
	Read ahead sectors     auto
	- currently set to     256
	Block device           254:2
... volume group vg1 pvs | grep -E 'PV|vg1'
PV         VG   Fmt  Attr PSize  PFree
/dev/sdb   vg1  lvm2 a--  25,00g    0
A volume group is usually made of several distinct physical volumes:
PV         VG         Fmt  Attr PSize  PFree
/dev/sda3  caramba-vg lvm2 a--  40.00g 19.17g
/dev/sda5  caramba-vg lvm2 a--  39.76g     0
pvdisplay | grep -B2 -A7 vg1
--- Physical volume ---
PV Name               /dev/sdb
VG Name               vg1
PV Size               25,00 GiB / not usable 4,00 MiB
Allocatable           yes (but full)
PE Size               4,00 MiB
Total PE              6399
Free PE               0
Allocated PE          6399
PV UUID               60OAZ0-XRcm-YF5k-2EA2-4V5W-6wMI-WclhqN
... physical volume /dev/sdb. This is a special case since we're using the full disk as a physical volume (details); it could also be a partition: /dev/sdbx
bottom hardware storage unit
(hard drive)
A one-liner to sort this out:

spaceSeparatedListOfMountPoints='/home /tmp /var'; for mountPoint in $spaceSeparatedListOfMountPoints; do echo -e "\nMount point :\t\t$mountPoint"; logicalVolume=$(mount | grep "$mountPoint " | cut -d' ' -f1); echo -e "Logical volume :\t$logicalVolume"; volumeGroup=$(lvs --noheadings "$logicalVolume" | awk '{print $2}'); echo -e "Volume group :\t\t$volumeGroup"; physicalVolumes=$(pvs | awk '/'$volumeGroup'/ {print $1}' | xargs); echo -e "Physical volume(s) :\t$physicalVolumes"; echo -e "\tPV\t\tSize\tFree"; for volume in $physicalVolumes; do pvs --noheadings "$volume" | awk '{print "\t"$1"\t"$5"\t"$6}'; done; done

Mount point :		/home
Logical volume :	/dev/mapper/caramba--vg-home
Volume group :		caramba-vg
Physical volume(s) :	/dev/sda3 /dev/sda5
	PV		Size	Free
	/dev/sda3	40.00g	19.17g
	/dev/sda5	39.76g	0

Mount point :		/tmp
Logical volume :	/dev/mapper/caramba--vg-tmp
Volume group :		caramba-vg
Physical volume(s) :	/dev/sda3 /dev/sda5
	PV		Size	Free
	/dev/sda3	40.00g	19.17g
	/dev/sda5	39.76g	0

Mount point :		/var
Logical volume :	/dev/mapper/caramba--vg-var
Volume group :		caramba-vg
Physical volume(s) :	/dev/sda3 /dev/sda5
	PV		Size	Free
	/dev/sda3	40.00g	19.17g
	/dev/sda5	39.76g	0
lsblk gives a pretty good overview too:
NAME                   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                      8:0    0    80G  0 disk
├─sda1                   8:1    0   243M  0 part /boot
├─sda2                   8:2    0     1K  0 part
├─sda3                   8:3    0    40G  0 part
│ ├─caramba--vg-root   254:0    0    20G  0 lvm  /
│ └─caramba--vg-var    254:1    0    12G  0 lvm  /var		
└─sda5                   8:5    0  39.8G  0 part
  ├─caramba--vg-root   254:0    0    20G  0 lvm  /
  ├─caramba--vg-var    254:1    0    12G  0 lvm  /var		
  ├─caramba--vg-swap_1 254:2    0   3.8G  0 lvm  [SWAP]
  ├─caramba--vg-tmp    254:3    0   380M  0 lvm  /tmp
  └─caramba--vg-home   254:4    0  24.4G  0 lvm  /home
sdb                      8:16   0 596.2G  0 disk
└─sdb1                   8:17   0 596.2G  0 part /media/usb
sr0                     11:0    1  1024M  0 rom