LVM - Logical Volume Manager

How to do all the basic LVM tasks you need in everyday work: setup, extend, remove, fix, and work with snapshots

Setup

See the LVM section of my server setup howto.

For large disks use a larger PE (physical extent) size than the default of 4M:

# create physical volume
pvcreate /dev/md0
# create volume group
vgcreate --physicalextentsize 32M vg00 /dev/md0
# create logical volume (100 GB in size)
lvcreate -L100G -n dta1 vg00
# make filesystem:
mkfs.xfs /dev/vg00/dta1
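
To verify the result, a quick sketch (the mount point /dta1 is only an example):

# show physical volumes, volume groups and logical volumes
pvs
vgs
lvs
# confirm the PE size of the new volume group
vgdisplay vg00 | grep "PE Size"
# mount the new filesystem
mkdir -p /dta1
mount /dev/vg00/dta1 /dta1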

Activate a volume group

vgchange -a y vg_name
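
If you do not know which volume groups exist (e.g. after attaching disks from another machine), a short sketch:

# scan all disks for volume groups
vgscan
# activate all volume groups that were found
vgchange -a y
# check which logical volumes are active now
lvs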

Extend a logical volume
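
Before extending, check how much free space the volume group still has; a quick sketch (using vg00 from the setup above):

vgs vg00
# or, more verbose:
vgdisplay vg00 | grep Free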

1. extend the LVM logical volume:

lvextend -L+200G /dev/vg00/data

Output:

  Extending logical volume data to 300,00 GB
  Logical volume data successfully resized

This adds 200 GB of space, which was still unused and available on our RAID 5 array.

2.a extend the file system (xfs)
XFS file systems must be mounted to be resized, and the mount point is specified rather than the device name:

xfs_growfs /data

Output:

meta-data=/dev/mapper/vg00-data  isize=256    agcount=16, agsize=1638400 blks
         =                       sectsz=512   attr=1
data     =                       bsize=4096   blocks=26214400, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=12800, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0
data blocks changed from 26214400 to 78643200

2.b extend the file system (ext3)
You can resize your ext3 filesystem while it is still mounted.

resize2fs /dev/vgc/lvvar

Output:

resize2fs 1.40-WIP (14-Nov-2006)
Filesystem at /dev/vgc/lvvar is mounted on /var; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 1
Performing an on-line resize of /dev/vgc/lvvar to 256000 (4k) blocks.
The filesystem on /dev/vgc/lvvar is now 256000 blocks long.

3. done :-)

df -h
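
Newer LVM versions can do steps 1 and 2 in one go: lvextend -r (--resizefs) calls the matching filesystem resize tool itself. A sketch with the volume from above:

# extend the LV and grow the filesystem in one step
lvextend -r -L+200G /dev/vg00/data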

Remove a logical volume

# unmount the filesystem:
umount /dev/vg0/data
# deactivate the LV:
lvchange -a n /dev/vg0/data
# remove the LV:
lvremove /dev/vg0/data

Remove a volume group

# deactivate the volume group
vgchange -a n vg0
 
# remove it
vgremove vg0
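
If the underlying disk is to be reused elsewhere, the LVM label can be wiped as well (assuming /dev/md0 as in the setup above):

# wipe the physical volume label
pvremove /dev/md0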

Snapshots

To make an online backup of an LVM volume one can use LVM snapshots.

See http://tldp.org/HOWTO/LVM-HOWTO/snapshots_backup.html
and http://tldp.org/HOWTO/LVM-HOWTO/snapshotintro.html

A snapshot volume is a special type of volume that presents all the data that was in the volume at the time the snapshot was created.
# volume group name: vg0
# =======================
 
# create the snapshot
lvcreate --snapshot --size 1G --name snap_test vg0
 
# mount the snapshot
mount /dev/vg0/snap_test /mnt
 
# play around (make a backup)
cd /mnt && do_stuff
 
# unmount & destroy snapshot
umount /mnt
lvremove /dev/vg0/snap_test
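
Note: if the origin filesystem is XFS (as in the setup above), the snapshot carries the same UUID as the origin and has to be mounted with -o nouuid:

mount -o nouuid /dev/vg0/snap_test /mnt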

In the LVM HOWTO you can read:

A snapshot volume can be as large or as small as you like, but it must be large enough to hold all the changes that are likely to happen to the original volume during the lifetime of the snapshot!
If the snapshot logical volume becomes full, it will be dropped (become unusable), so it is vitally important to allocate enough space. The amount of space necessary depends on the usage of the snapshot, so there is no set recipe to follow. If the snapshot size equals the origin size, it will never overflow.
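
To catch a filling snapshot before it is dropped, the fill level can be watched; a quick sketch:

# the Data% column shows how full the snapshot is
lvs vg0
# or, more verbose:
lvdisplay /dev/vg0/snap_test | grep "Allocated to snapshot"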

Unknown devices

Reusing hard drives that were previously part of a volume group might cause some trouble. I moved a pair of drives to a new host and wanted to reinitialize them as software RAID 1 with LVM on top. The software RAID was no problem and synced as expected. Then LVM complained:

pvscan
  Couldn't find device with uuid '63430H-JQB6-V0rh-epiN-Ntbv-aR1T-VySzqU'.
  Couldn't find device with uuid '63430H-JQB6-V0rh-epiN-Ntbv-aR1T-VySzqU'.
  PV /dev/sda2        VG vg0    lvm2 [595,88 GB / 20,02 GB free]
  PV unknown device   VG vg01   lvm2 [2,05 TB / 0    free]
  PV /dev/md0         VG vg01   lvm2 [931,50 GB / 31,50 GB free]
  Total: 3 [1,54 TB] / in use: 3 [1,54 TB] / in no VG: 0 [0   ]

vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "vg0" using metadata type lvm2
  Couldn't find device with uuid '63430H-JQB6-V0rh-epiN-Ntbv-aR1T-VySzqU'.
  Couldn't find all physical volumes for volume group vg01.
  Couldn't find device with uuid '63430H-JQB6-V0rh-epiN-Ntbv-aR1T-VySzqU'.
  Couldn't find all physical volumes for volume group vg01.
  Volume group "vg01" not found

Because the data on the old LVM was of no interest anymore, I just wanted to clean up and start over:

vgreduce --removemissing vg01

  Couldn't find device with uuid '63430H-JQB6-V0rh-epiN-Ntbv-aR1T-VySzqU'.
  Couldn't find all physical volumes for volume group vg01.
  Couldn't find device with uuid '63430H-JQB6-V0rh-epiN-Ntbv-aR1T-VySzqU'.
  Couldn't find all physical volumes for volume group vg01.
  Couldn't find device with uuid '63430H-JQB6-V0rh-epiN-Ntbv-aR1T-VySzqU'.
  Couldn't find device with uuid '63430H-JQB6-V0rh-epiN-Ntbv-aR1T-VySzqU'.
  bkup is expected to have only one segment using it, while it has 0
  Failed to find mirror_seg for bkup
  Wrote out consistent volume group vg01

And voilà: everything is cleaned up:

pvscan
  PV /dev/sda2   VG vg0    lvm2 [595,88 GB / 20,02 GB free]
  PV /dev/md0    VG vg01   lvm2 [931,50 GB / 931,50 GB free]

To match the naming scheme on that particular host, I renamed the VG:

 vgrename vg01 vg1
  Volume group "vg01" successfully renamed to "vg1"

Done. The remaining steps are as usual …
