Friday, October 05, 2007

moving my RAID set to a new box: collision!

For performance, I have my videos stored on a stripe set, using Fedora's software RAID technology. I had recently set up my Dell Octo Core box, but had not yet migrated the RAID set to it. This morning, at about midnight, I decided to start the migration. That was my first mistake.

Contention for the Same Device Name
The RAID set is a couple of 120GB IDE drives on a Sil680 PCI card. Not the best performers, but I was minding my pennies when I bought the drives and card. So I popped the card and the drives into the server. Thankfully, the card was immediately recognized by the BIOS on bootup. However, the dmesg output told a different story:
Oct 4 23:53:53 localhost kernel: md: considering hdd1 ...
Oct 4 23:53:53 localhost kernel: md: adding hdd1 ...
Oct 4 23:53:53 localhost kernel: md: adding hdc1 ...
Oct 4 23:53:53 localhost kernel: md: md0 already running, cannot run hdd1

I saw that the device name of the RAID set that held my videos, /dev/md0, conflicted with the RAID set I had created as the / (root) partition for my 64-bit Fedora Core 6 install. Argh! Once a year, like Christmas, I have to dust off my rusty mdadm skills. Ugh. This was that time.
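
A quick sanity check at this point, worth doing before touching anything, is /proc/mdstat, which lists the arrays the kernel is already running and the md numbers they've claimed:

cat /proc/mdstat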

The Plan
After reading a number of the references listed below, I decided to eliminate the contention by renaming my video RAID set from /dev/md0 to /dev/md1. To accomplish this, I had to update the superblock on the RAID set with a different preferred minor number. More on this in a moment.

Having just moved the drives into the new server, I was a little nervous about the condition of the data on them. To give myself a bit more comfort, I decided on the following course of action (sketched as commands just after this list):
- put the drives and card back in the original computer
- renumber the preferred minor number of the RAID set there
- test to verify that I can still mount the filesystems on the RAID and access the data
- move the devices back into the new server
- assemble, test and mount the RAID
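
In mdadm terms, the renumbering boils down to something like the sketch below, assuming the members show up as /dev/hdc1 and /dev/hdd1 again (the actual transcripts follow):

# stop the running array
mdadm --stop /dev/md0
# create a device node for the new name (block device, md major 9, minor 1)
mknod /dev/md1 b 9 1
# reassemble under the new name, rewriting the preferred minor in each superblock
mdadm --assemble /dev/md1 --update=super-minor -m0 /dev/hdc1 /dev/hdd1
# verify, then mount
mdadm --detail /dev/md1
mount -t ext2 /dev/md1 /mnt/videos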

So Let's Get Started!
I put the card and drives back into the original box. Here is the detail of what the RAID set looked like there:
[root@computer ~]# mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Sat Aug 19 23:57:28 2006
Raid Level : raid0
Array Size : 234436352 (223.58 GiB 240.06 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Fri Oct 5 14:31:37 2007
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Chunk Size : 64K
UUID : 9c4c078f:8935e3e4:bfface8f:6a3c2c18
Events : 0.15

Number Major Minor RaidDevice State
0 22 1 0 active sync /dev/hdc1
1 22 65 1 active sync /dev/hdd1


Update the RAID Device Number (Preferred Minor)
I first stopped the RAID set:
[root@computer ~]# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
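
One thing worth noting if you try this yourself: mdadm will refuse to stop an array whose filesystem is still mounted, so unmount first. In my setup that would have been:

umount /mnt/videos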


Next, I issued the following command to update the minor number. Unfortunately, it didn't work; I received the following error:
[root@computer ~]# mdadm --assemble /dev/md1 --update=super-minor -m0 /dev/hdd1 /dev/hdc1
mdadm: error opening /dev/md1: No such file or directory


Oh boy. From the error, it looked like I needed a block device file called /dev/md1. I wasn't sure, though, as my mdadm and RAID chops were rusty. So, after a LOT of research (references listed below), I confirmed that I did indeed need to create the block device file myself.

Creating a Block Device
Referring to these instructions, I created the block device for /dev/md1 with the following command (b for a block device, major number 9 for the md driver, and minor number 1 to match md1):
[root@computer ~]# mknod /dev/md1 b 9 1

I wanted to keep the permissions consistent with the old /dev/md0 device file, so I ran the following commands:
[root@computer ~]# chmod 640 /dev/md1; chown root:disk /dev/md1
[root@computer ~]# ll /dev/md*
brw-r----- 1 root disk 9, 0 Oct 5 14:24 /dev/md0
brw-r----- 1 root disk 9, 1 Oct 5 14:43 /dev/md1
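
As an aside, more recent mdadm versions can reportedly create the device node for you at assemble time via the --auto option, which would have saved me the mknod step. Something like this, though I haven't tested it, so treat it as a sketch:

mdadm --assemble --auto=yes /dev/md1 --update=super-minor -m0 /dev/hdd1 /dev/hdc1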


Updating and Testing the Preferred Minor Number (Device ID)
Once the block device file was created, I issued the command to update the preferred minor number of the RAID set to 1:
[root@computer ~]# mdadm --assemble /dev/md1 --update=super-minor -m0 /dev/hdd1 /dev/hdc1
mdadm: /dev/md1 has been started with 2 drives.

Sweet! The RAID device started! Let's see how it looks (note the Preferred Minor number!):
[root@computer ~]# mdadm --detail /dev/md1
/dev/md1:
Version : 00.90.03
Creation Time : Sat Aug 19 23:57:28 2006
Raid Level : raid0
Array Size : 234436352 (223.58 GiB 240.06 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 1
Persistence : Superblock is persistent

Update Time : Fri Oct 5 15:43:48 2007
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Chunk Size : 64K
UUID : 9c4c078f:8935e3e4:bfface8f:6a3c2c18
Events : 0.20

Number Major Minor RaidDevice State
0 22 1 0 active sync /dev/hdc1
1 22 65 1 active sync /dev/hdd1


I like the word "clean"! And how are the individual drives that make up the set doing?
[root@computer ~]# mdadm -E /dev/hdc1
/dev/hdc1:
Magic : a92b4efc
Version : 00.90.01
UUID : 9c4c078f:8935e3e4:bfface8f:6a3c2c18
Creation Time : Sat Aug 19 23:57:28 2006
Raid Level : raid0
Device Size : 117218176 (111.79 GiB 120.03 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 1

Update Time : Fri Oct 5 16:03:24 2007
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Checksum : 8bd047df - correct
Events : 0.21
Chunk Size : 64K

Number Major Minor RaidDevice State
this 0 22 1 0 active sync /dev/hdc1

0 0 22 1 0 active sync /dev/hdc1
1 1 22 65 1 active sync /dev/hdd1

[root@computer ~]# mdadm -E /dev/hdd1
/dev/hdd1:
Magic : a92b4efc
Version : 00.90.01
UUID : 9c4c078f:8935e3e4:bfface8f:6a3c2c18
Creation Time : Sat Aug 19 23:57:28 2006
Raid Level : raid0
Device Size : 117218176 (111.79 GiB 120.03 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 1

Update Time : Fri Oct 5 16:03:24 2007
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Checksum : 8bd04821 - correct
Events : 0.21
Chunk Size : 64K

Number Major Minor RaidDevice State
this 1 22 65 1 active sync /dev/hdd1

0 0 22 1 0 active sync /dev/hdc1
1 1 22 65 1 active sync /dev/hdd1


Love the word "correct"!

Is My Data Still There?
So how about we try a mount?
[root@computer ~]# mount -t ext2 /dev/md1 /mnt/videos
[root@computer ~]#

No errors on the mount! That's great! Now for the finale... let's look at a test file:
[root@computer ~]# head -2 /mnt/videos/paris/newtrip.xml
<?xml version="1.0"?>
<EDL VERSION="2.0CV" PROJECT_PATH="/root/installFiles/paris/newtrip.xml">

Awesome! I'm very relieved I can read the content off the drives. That is a load off my mind. The last task was to edit /etc/fstab and reboot, to make sure the RAID set comes up correctly at boot. Blissfully, those steps were also successful.
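
One extra belt-and-suspenders step worth considering here: recording the array in /etc/mdadm.conf so it assembles predictably at boot, regardless of device-name squabbles. Capturing the running arrays is a one-liner (append, then review the file by hand):

mdadm --detail --scan >> /etc/mdadm.conf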

Put 'Em In Da New Box!
I then took the whole kit and caboodle to the new server. I am very happy to report that the kernel recognized the newly renumbered RAID set, as shown in the output of dmesg:
md: created md1
md1: setting max_sectors to 128, segment boundary to 32767


and created the /dev/md1 device, as shown in this file listing:
[root@ogre ~]# ll /dev/md*
brw-r----- 1 root disk 9, 0 Oct 5 19:27 /dev/md0
brw-r----- 1 root disk 9, 1 Oct 5 19:27 /dev/md1


I added the following line to /etc/fstab:
/dev/md1 /mnt/videos ext2 defaults 1 1

And ran "mount -a" to mount everything listed in the file system table. Lo and behold, I've got data on my drive!
[root@ogre ~]# ls /mnt/videos
20060319 20060812 20070316 20070811 axe cinelerra movies paris_tape1 stockholm_tape1
20060406 20070111 20070425 20070912 bloody lost+found paris paris_tape2 stockholm_tape2


Caveat for RAID under a Knoppix CD
At one point in my debugging, I pulled out my trusty Knoppix bootable CD. If you need to bring up your RAID set from a rescue disk (Knoppix, in my case), you'll need to load the md module and then run mdadm --assemble to start your existing RAID set:
root@Knoppix:/ramdisk/home/knoppix# modprobe md
root@Knoppix:/ramdisk/home/knoppix# mdadm --assemble -m 0 /dev/md0
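
If the bare -m 0 form can't locate the member disks on your rescue system, naming the partitions explicitly should also do the trick (a sketch, assuming the drives show up under the same names in Knoppix):

mdadm --assemble /dev/md0 /dev/hdc1 /dev/hdd1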


Well, another chapter in the life of the Mule is closed. Hopefully, someone will find these notes instructive.

Update 2009/03/25
Some hdparm drive read measurements. Note the roughly 60% buffered read speed increase of the stripe set versus the mirrored set.

/dev/md0 is a software RAID0 (stripe) of two 500GB, 16MB cache SATA drives:
[mule@ogre ~]$ sudo hdparm -tT /dev/md0

/dev/md0:
Timing cached reads: 5748 MB in 2.00 seconds = 2877.62 MB/sec
Timing buffered disk reads: 352 MB in 3.02 seconds = 116.68 MB/sec


/dev/md2 is a software RAID1 (mirror) of two 500GB, 16MB cache SATA drives:
[mule@ogre ~]$ sudo hdparm -tT /dev/md2

/dev/md2:
Timing cached reads: 5218 MB in 2.00 seconds = 2612.72 MB/sec
Timing buffered disk reads: 218 MB in 3.03 seconds = 72.04 MB/sec


*** end update ***

The Mule

References
http://www.redhat.com/magazine/019may06/departments/tips_tricks
http://www.linuxdevcenter.com/pub/a/linux/2002/12/05/RAID.html?page=1
http://www.docunext.com/category/raid/

Nice Beginner's Guide
http://www.linuxhomenetworking.com/wiki/index.php/Quick_HOWTO_:_Ch26_:_Linux_Software_RAID

The Man Page
http://www.linuxmanpages.com/man8/mdadm.8.php

HowTo (with good description of chunk sizes)
http://www.tldp.org/HOWTO/Software-RAID-HOWTO.html

MDADM Recipes
http://www.koders.com/noncode/fid76840E0EBBC19222CBCC0913D4AED97C1F5D2A45.aspx

Notes for Debian MDADM users
http://svn.debian.org/wsvn/pkg-mdadm/mdadm/trunk/debian/README.upgrading-2.5.3?op=file
