pretending to still be a geek

Sunday, January 03, 2010, at 07:19PM

By Eric Richardson

[Photo: Server Guts, by Eric Richardson]

If you look back through the archives here, you'll find a lot of intense geekery. I ran Linux as a desktop OS for nearly a decade, administered servers, etc. I enjoyed it.

These days, though, I just tend to do less of it. Needs and situations change, and I don't end up doing as much geeky stuff as I once did.

Today, though, was an exception. Today I had to rebuild a home Linux server, migrating 350GB of data over to new drives before adding those same old drives into a logical volume.

Starting point: a Linux server with three 250GB drives concocted into a weird 375GB RAID1 array. Also on hand: two 400GB drives that had been sitting on my desk for a year.

Step 1: Install new OS onto 400GB Drives

Figured why not start fresh, so I hooked the two 400GB drives up along with a CD-ROM drive and did a fresh Ubuntu Server install. Each drive got a 1GB partition and a 399GB partition, and each pair was made into a RAID1 array. The 1GB /dev/md0 became /boot, and the 399GB /dev/md1 became the first part of an LVM volume mounted at /.
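The installer handled the details, but done by hand the layout would look roughly like this (a sketch; the device names and the "root"/"main" volume names are assumptions based on how things show up later):

```shell
# Mirror the 1GB partitions for /boot and the 399GB partitions for LVM:
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
sudo mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# Layer LVM on the big array: one physical volume, one volume group,
# one logical volume filling it, formatted and mounted at /:
sudo pvcreate /dev/md1
sudo vgcreate root /dev/md1
sudo lvcreate -l 100%FREE -n main root
sudo mkfs.ext3 /dev/mapper/root-main
```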

Step 2: Hook up old drives to copy data

It's an old server, and they're IDE drives, so I only had four slots to hook up five drives. It's RAID, though, so I simply used two of the three 250GB drives and pretended the third had failed.

Since the data was RAID / LVM, life was a little more complicated than just mounting the disk.

First assemble and start each md device in degraded mode; rinse and repeat for md2, md3, and md4:

sudo /sbin/mdadm -A /dev/md2 /dev/sdb3
sudo /sbin/mdadm -R /dev/md2
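If it's unclear which partitions belong to which array, mdadm can read the RAID superblocks off the disks first (partition names here are just an example):

```shell
# Print the RAID superblock of one partition: array UUID, level, member count.
sudo /sbin/mdadm --examine /dev/sdb3

# Or scan all devices and print assemble-ready lines for every array found:
sudo /sbin/mdadm --examine --scan
```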

Once the md devices were up, ask LVM to recognize the volume group:

sudo vgchange -a y

Then just mount it up:

mkdir /tmp/bit
sudo mount /dev/mapper/group1-main /tmp/bit/

Perfect. Now copy data at will.
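A sketch of the copy itself, assuming the data should land under /srv on the new root (the destination path is a placeholder):

```shell
# -a preserves permissions, ownership, timestamps, and symlinks;
# -H additionally preserves hard links, which matters across 350GB of mixed data.
sudo rsync -aH --progress /tmp/bit/ /srv/
```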

Step 3: Unmount old drives and add them to /

Once all the data I needed was off, I wanted to turn the two 250GB drives into a RAID1 array and add that to the volume mounted as /, giving me roughly 650GB there.

First, unmount and deactivate the volume / arrays:

sudo umount /tmp/bit
sudo vgchange -a n group1
sudo vgremove group1
sudo /sbin/mdadm --stop /dev/md2
sudo /sbin/mdadm --stop /dev/md3
sudo /sbin/mdadm --stop /dev/md4

Then just use fdisk to re-partition the drives, creating one big RAID partition on each. Create a new md device with those two partitions:

sudo /sbin/mdadm -C /dev/md2 --level=1 \
  --raid-devices=2 /dev/sdb1 /dev/sdd1
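Creating the mirror kicks off an initial sync; /proc/mdstat shows its progress, and it's worth letting it finish before trusting the array:

```shell
# Watch the resync progress of all md arrays:
watch cat /proc/mdstat

# One-shot detail view of the new array, including state and sync status:
sudo /sbin/mdadm --detail /dev/md2
```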

Add that new device to the logical volume group:

sudo pvcreate /dev/md2
sudo vgextend root /dev/md2

And extend the volume to use the new space:

sudo lvextend -L +232.88G /dev/mapper/root-main
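The odd-looking 232.88G is roughly one "250GB" drive expressed in binary units: drive makers count decimal gigabytes (10^9 bytes) while LVM counts GiB (2^30 bytes). The exact figure also depends on partitioning and RAID metadata overhead, but the conversion gets you most of the way there:

```shell
# 250 * 10^9 bytes converted to binary gigabytes (GiB):
awk 'BEGIN { printf "%.2f\n", 250e9 / (1024 * 1024 * 1024) }'   # → 232.83
```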

Finally, resize the filesystem to use the new space, and check the result with df -h:

sudo resize2fs /dev/mapper/root-main


Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/root-main 594G  304G  260G  54% /

Here's a question, though: 594G total, with 304G used, equals 260G available? Where's my extra 30G?
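Almost certainly the answer is reserved blocks: ext2/ext3 sets aside 5% of the filesystem for root by default (the -m option to mke2fs), and 5% of 594G is right around the missing 30G:

```shell
# Default ext3 reserve is 5% of the filesystem (mke2fs -m 5):
awk 'BEGIN { printf "%.1fG\n", 594 * 0.05 }'   # → 29.7G
```

tune2fs -m can shrink the reserve after the fact, though leaving some slack on / is wise, since it lets root log in and fsck run when the disk fills.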

The irony of all of this is that 600GB is a fairly pointless amount of storage, already far less than I need. I plan to repeat this exercise in six months or so, except with the aim of putting in a couple of terabytes instead (either in a different server or in a device like a Drobo).