RAID1 Mdadm and XenServer (Notes)

Posted on November 19, 2016 at 12:02 pm

Important things to note:

/dev/sda1: the XenServer root partition
/dev/sda2: the partition XenServer uses for temporary space during upgrades
/dev/sda3: the partition holding your local storage repository (as an LVM volume)

Show info about first disk and its partitions:

[root@server ~]# partx -l /dev/sda
# 1:      2048- 20973567 ( 20971520 sectors,  10737 MB) ---> Seems to be sda1 ---> XenServer root
# 2:  20973568- 41945087 ( 20971520 sectors,  10737 MB) ---> Seems to be sda2 ---> XenServer temporary space
# 3:  41945088-937697279 (895752192 sectors, 458625 MB) ---> Seems to be sda3 ---> XenServer storage repository

Show info about the second disk and its partitions (identical to the first disk):

[root@server ~]# partx -l /dev/sdb

# 1:      2048- 20973567 ( 20971520 sectors,  10737 MB) ---> Seems to be sdb1 ---> Same partition as XenServer root (sda1)
# 2:  20973568- 41945087 ( 20971520 sectors,  10737 MB) ---> Seems to be sdb2 ---> Same partition as XenServer temporary space (sda2)
# 3:  41945088-937697279 (895752192 sectors, 458625 MB) ---> Seems to be sdb3 ---> Same partition as XenServer storage repository (sda3)
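Rather than eyeballing the two listings, the layouts can be diffed. A minimal sketch (the helper name and temp paths are my own; this assumes the `partx -l` output has been captured to files first, and the exact output format varies across util-linux versions):

```shell
# compare_layouts: diff two saved partition listings and report whether
# they match. Capture the listings first, e.g.:
#   partx -l /dev/sda > /tmp/sda.layout
#   partx -l /dev/sdb > /tmp/sdb.layout
compare_layouts() {
    if diff "$1" "$2" > /dev/null 2>&1; then
        echo "layouts match"
    else
        echo "layouts differ"
    fi
}
```

Mirroring partition-for-partition only makes sense when the listings match, so this is worth running before creating or rebuilding the arrays.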

List disks and their partitions:

[root@server ~]# ls -la /dev | grep sd
brw-r-----  1 root disk   8,   0 Nov  4 23:33 sda  ---> First disk
brw-r-----  1 root disk   8,   1 Nov  4 23:33 sda1 ---> Partition for XenServer root
brw-r-----  1 root disk   8,   2 Nov  4 23:33 sda2 ---> Partition for XenServer temporary space
brw-r-----  1 root disk   8,   3 Nov  4 23:33 sda3 ---> Partition for XenServer storage repository
brw-r-----  1 root disk   8,  16 Nov  4 23:33 sdb  ---> Second disk
brw-r-----  1 root disk   8,  17 Nov  4 23:33 sdb1 ---> Same partition as XenServer root (sda1)
brw-r-----  1 root disk   8,  18 Nov  4 23:33 sdb2 ---> Same partition as XenServer temporary space (sda2)
brw-r-----  1 root disk   8,  19 Nov  4 23:33 sdb3 ---> Same partition as XenServer storage repository (sda3)

Show mdadm arrays:

[root@server ~]# mdadm -Esv
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=40d7e873:129b2b82:a4d2adc2:26fd5302
   devices=/dev/sda1,/dev/sdb1
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=fc1bbed2:004b3e79:a4d2adc2:26fd5302
   devices=/dev/sda2,/dev/sdb2
ARRAY /dev/md3 level=raid1 num-devices=2 UUID=d8a08c66:36ea3172:a4d2adc2:26fd5302
   devices=/dev/sda3,/dev/sdb3
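The arrays found by the scan above can be persisted so they assemble by UUID at boot. A hedged sketch, not a XenServer-specific procedure: the config path `/etc/mdadm.conf` is the usual one on CentOS-based systems, but check whether your XenServer version already manages this file before appending to it.

```shell
# Back up the existing mdadm config (if any), then append the scanned
# array definitions so the arrays are assembled by UUID at boot.
cp /etc/mdadm.conf /etc/mdadm.conf.bak 2>/dev/null
mdadm --detail --scan >> /etc/mdadm.conf
```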

Check mdadm status:

[root@server ~]# cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [raid10]
md3 : active raid1 sda3[0] sdb3[1]
      447876032 blocks [2/2] [UU]
      bitmap: 0/4 pages [0KB], 65536KB chunk
 
md2 : active raid1 sda2[0] sdb2[1]
      10485696 blocks [2/2] [UU]
 
md1 : active raid1 sda1[0] sdb1[1]
      10485696 blocks [2/2] [UU]
 
unused devices: <none>

Reading the mdstat output:

md3     ---> "md" stands for "multiple devices", the Linux software RAID driver
active  ---> the array is running
raid1   ---> the RAID level (raid1 means the data on the first disk is mirrored to the second disk)
sda3[0] ---> sda is the first disk, 3 is its third partition, and [0] is this member's number within the array (device 0)
sdb3[1] ---> sdb is the second disk, 3 is its third partition, and [1] is this member's number within the array (device 1)
[UU]    ---> both members are up
[_U]    ---> the first member has failed or is missing
[U_]    ---> the second member has failed or is missing
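The flag patterns above can be checked mechanically instead of by eye. A small sketch (the function name and awk logic are my own, not part of mdadm) that prints each md device whose status flags contain an "_":

```shell
# degraded_arrays: scan mdstat-formatted text (a file argument, or
# /proc/mdstat itself) and print each md device whose status flags
# contain "_", i.e. a failed or missing member.
degraded_arrays() {
    awk '/^md/             { dev = $1 }   # remember current array name
         /\[[U_]*_[U_]*\]/ { print dev }  # flags line containing an "_"
        ' "$@"
}

# Typical use on a live system:
#   degraded_arrays /proc/mdstat
```

Dropped into cron, a non-empty output from this is a cheap early warning that a rebuild (or a disk replacement) is needed.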
