Replacing a hard disk with a larger one in Linux

Well I finally got around to re-laying out my new disk. The history of the requirement is that my 80GB disk died in a screaming heap (reboot number one had disk errors, reboot number two no disk found, yup, it died), so I swapped in a spare 160GB disk I had and did a restore from my latest (and up to date, yippee) CloneZilla disk image backup. So I had a fully working system again, but with 80GB unallocated at the end of the drive, as of course the restore only needed the first 80GB.

Do not attempt anything in this post without having an up to date backup. However, I assume you are only reading this post because you have also just replaced a drive with a larger one and restored onto it, so you know your backups work :-).

The original 80GB disk had /dev/sda1 (/boot), /dev/sda2 (/) under LVM VolGroup00, and /dev/sda3 (/home) also under LVM but in VolGroup01.

I could have taken the easy way out and just created a new /dev/sda4 to use all the remaining free space, but I could then only allocate it to one of the volume groups, and it would mean four physical partitions to worry about for backup/restore.

The original idea was to keep /home totally separate for backups, but as it’s now using more space than all the other filesystems combined I will move everything back into one Volume Group now.

There are two ways of doing that, which I’ll cover a bit later. The common thing for both approaches is to get /home out of its separate Volume Group. So the steps for that were simply
(1) Use tar to back up the entire /home filesystem to an external drive (a sketch of the command follows this list)
(2) umount /home of course AND COMMENT OUT THE MOUNTPOINT IN /etc/fstab
(3) **** DO THIS **** echo "0" > /selinux/enforce **** DO THIS ****. FC13 default selinux rules get in the way of LVM operations and you WILL corrupt your LVM environment if you do LVM operations while selinux is in enforcing mode (and brick your system, as I found out; restored from backups again, sigh)
(4) lvm lvremove /dev/VolGroup01/HomeVol00 (what my /home LV was named)
(5) lvm vgremove VolGroup01
(6) lvm pvremove /dev/sda3
(7) use fdisk to delete partition sda3, now that nothing is using it
(8) reboot to test the changes took OK
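
For the record, step (1) was nothing fancier than a tar of everything under /home onto the external drive. This is only a sketch; the /mnt/usbdisk mount point and the archive name are placeholders for wherever your external drive actually lives, not what I used:

tar -zcvf /mnt/usbdisk/home-backup.tar.gz -C /home .

Using -C /home and archiving "." keeps the paths in the archive relative, which makes unpacking it into the new /home at step (13) later on straightforward.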

Now I had a working server (still) with only sda1 and sda2, a single Volume Group just using sda2, and all the free space at the end of the disk available in one chunk.

Now for the two ways I could use the space.

The “least risk” way would simply be to use fdisk to allocate all the free space to a new /dev/sda3, make /dev/sda3 a PV, and add that PV to VolGroup00, which would make all the space available to the one Volume Group as required. The only drawback is that there would be two partitions in the Volume Group instead of one. That is not a drawback for most users, but while I have no problem with a VG spanning multiple physical disks I can’t see the point in having one span multiple partitions on a single disk without a good reason, so taking the easiest and least risky way of adding the free space was not the solution I chose.
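
For completeness, that lower-risk approach boils down to something like the following. It is a sketch only, assuming the new partition comes out as /dev/sda3 and the Volume Group is VolGroup00 as mine was:

fdisk /dev/sda                  (create a new sda3 from all the free space, partition type 8e, write and exit)
partprobe /dev/sda              (or just reboot, so the kernel picks up the new partition table)
pvcreate /dev/sda3              (initialise the new partition as an LVM physical volume)
vgextend VolGroup00 /dev/sda3   (add the new PV to the existing Volume Group)

After that all the new space shows up as free extents in VolGroup00, ready for lvcreate or lvextend.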

Of course I chose the more risky way, knowing I had a good backup.
But don’t even consider that way unless you are comfortable you understand what is being done. I was very comfortable with it, having already gone through the far more complicated steps (very well documented at http://fedorasolved.org/Members/zcat/shrink-lvm-for-new-partition) for shrinking LVM-managed partitions.

The easy way to add all the free space to a Volume Group, IF you are adding the space immediately after the existing PV for the Volume Group, is
(1) shut down as many processes as you can; ideally you should have booted off the recovery CD/DVD and not mounted any filesystems, but I did it on the running system in the hope that I could reboot before the kernel refreshed its copy of the physical disk partition table, which it doesn’t do very often.
(2) go into fdisk (the faint of heart should stop reading here); the dialogue is sketched after this list
(3) remember the only partitions I had left were sda1 (boot) and sda2 (the running / filesystem). Still in fdisk, delete partition sda2
(4) still in fdisk, create a new partition sda2, starting at the same sector the just-deleted one did but going to the end of the physical disk
(5) still in fdisk, write the new partition table to the disk
(6) immediately exit fdisk and “reboot” [not shutdown, that’s too slow, you want to get down fast, reboot]. Of course if you had booted off the maintenance CD/DVD you could take your time.
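
The fdisk dialogue for steps (2) to (5) goes roughly like this. The single-letter responses are fdisk commands; <start> is a placeholder for whatever start sector (or cylinder, depending on your fdisk version) the existing sda2 reports, which is the one number you absolutely must write down before deleting anything:

fdisk /dev/sda
  p                       (print the table and note the start of sda2)
  d  then  2              (delete partition 2)
  n  then  p  then  2     (new primary partition 2)
  <start>                 (the SAME start sda2 had before)
  <Enter>                 (accept the default end, i.e. the last sector of the disk)
  t  then  2  then  8e    (reset the type to Linux LVM; LVM itself doesn’t check it, but it keeps things tidy)
  w                       (write the table and exit)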

And yes your server (if you were fast enough) will start back up again.
Remember fdisk only updates the disk partition table layout, not any data on the disk, so as long as your new sda2 started at the same place as the original sda2 AND YOU ARE USING LVM the system will come back up.

Why the LVM requirement? Because LVM uses its own ‘containers’: the PV entry for sda2 has not been updated by fdisk, so as far as the PV is concerned it is the same size it always was; it doesn’t know about the larger physical partition yet. A quick check shows that the PV still knows only about the smaller size.

[root@falcon ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               VolGroup00
  PV Size               35.01 GiB / not usable 7.98 MiB
  Allocatable           yes
  PE Size               32.00 MiB
  Total PE              1120
  Free PE               275
  Allocated PE          845
  PV UUID               d2EbkG-ocRw-LcAR-yaVj-Xvdf-VWQB-w8Moq1

So let’s enlarge the PV.
(7) **** DO THIS **** echo "0" > /selinux/enforce **** DO THIS ****. FC13 default selinux rules get in the way of LVM operations and you WILL corrupt your LVM environment if you do LVM operations while selinux is in enforcing mode
(8) simply run ‘pvresize’ on the partition; it will work out the new space available for itself. As we see below the available space went from 35GiB above to 147GiB below, just like magic.

[root@falcon ~]# pvresize /dev/sda2
  Physical volume "/dev/sda2" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized

[root@falcon ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               VolGroup00
  PV Size               147.00 GiB / not usable 26.90 MiB
  Allocatable           yes
  PE Size               32.00 MiB
  Total PE              4703
  Free PE               3858
  Allocated PE          845
  PV UUID               d2EbkG-ocRw-LcAR-yaVj-Xvdf-VWQB-w8Moq1

And then we are back to the common steps needed by both methods: restoring all the backed up data to a newly created /home filesystem. I chose to create it as 60GB rather than my original 40GB (hey, we have the space now), all in VolGroup00, even if you chose the less risky method.

(9) lvm lvcreate --size 60G --name HomeVol00 VolGroup00
(10) mke2fs -t ext4 /dev/VolGroup00/HomeVol00
(11) update /etc/fstab with the new LV for /home (an example entry is sketched after this list)
(12) mount /home
(13) untar the tar backups into the new /home
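
For steps (11) and (13), what that looks like in practice is roughly the following; the fstab options are just the usual defaults for a non-root filesystem, and the archive path is the same placeholder as in the backup sketch earlier:

/dev/VolGroup00/HomeVol00  /home  ext4  defaults  1 2      (the new line in /etc/fstab)

mount /home
tar -zxvf /mnt/usbdisk/home-backup.tar.gz -C /home         (unpack the backup into the new filesystem)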

And an optional step. tar -zcvf to create my backups did not preserve the selinux contexts, so I had to chcon -R a lot of files, but that was my bad.
Anyway, you could reboot now to make sure all is OK, and maybe take another backup if it is.
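
If your tar build carries the Fedora selinux patches you can sidestep the chcon pass entirely by telling tar to store and restore the contexts, and restorecon can also relabel a whole tree back to the policy defaults. Both of these are sketches; check your tar man page for which of the flags your version actually has:

tar --selinux --xattrs -zcvf /mnt/usbdisk/home-backup.tar.gz -C /home .   (backup keeping contexts)
tar --selinux --xattrs -zxvf /mnt/usbdisk/home-backup.tar.gz -C /home     (restore keeping contexts)
restorecon -R -v /home                                                    (or just relabel to the policy defaults)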

And to be safe
(14) fixfiles -f relabel (note: any running applications may be unstable until after a reboot now)
(15) touch /.autorelabel
(16) reboot

(17) and, when you are happy the system is stable do another fresh CloneZilla backup

And yes, it could have been made even simpler if, instead of creating a separate LV for /home, I had just allocated more space to the / LV and used resize2fs to grow the filesystem into it, but I still want to keep /home in a separate filesystem.
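
For anyone who does want that simpler route, it would look something like the following. The root LV name LogVol00 is a guess at the usual Fedora default; substitute whatever lvdisplay shows for your / volume, and pick a size that suits you:

lvm lvextend -L +60G /dev/VolGroup00/LogVol00    (give the root LV some of the new free extents)
resize2fs /dev/VolGroup00/LogVol00               (grow the ext4 filesystem into it; this can be done online)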

But I’m done, apart from the last step :-). Will start another ‘latest’ backup after I have pulled down all the latest updates.
