Friday, October 5, 2012

zLinux - LVM using Multipath disks!!




1. Take a backup of the /etc/zfcp.conf file.
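
Example (the backup name suffix here is only a suggestion):
cp -p /etc/zfcp.conf /etc/zfcp.conf.bkp_$(date +%Y%m%d)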

2. Add the LUN entries that were provided to the /etc/zfcp.conf file. Each line holds the FCP device bus ID, the target WWPN, and the FCP LUN.

Example:
## 300GB LUN for the FS /opt/tibcoBPM
0.0.dc00   0x50060e800571f006      0x0051000000000000
0.0.dd00   0x50060e800571f007      0x0051000000000000
0.0.de00   0x50060e800571f016      0x0051000000000000
0.0.df00   0x50060e800571f017      0x0051000000000000
## 200GB LUN for the FS /opt/dataSTG
0.0.dc00   0x50060e800571f006      0x0052000000000000
0.0.dd00   0x50060e800571f007      0x0052000000000000
0.0.de00   0x50060e800571f016      0x0052000000000000
0.0.df00   0x50060e800571f017      0x0052000000000000

Make note of the LUN IDs in the third column above (the 51 and 52 in 0x0051000000000000 and 0x0052000000000000).
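
For reference, on kernels of this vintage a single LUN can usually also be brought online on the fly through sysfs; this is effectively what the script in step 4 does for every entry in the file. The path below is built from the first entry above, so adjust the adapter and WWPN to suit:
echo 0x0051000000000000 > /sys/bus/ccw/drivers/zfcp/0.0.dc00/0x50060e800571f006/unit_add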

3. Capture the fdisk -l | grep -i "/dev/dm-" and multipath -ll output for later comparison.
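
Example (the /tmp file names are only a convention):
fdisk -l 2>/dev/null | grep -i "/dev/dm-" > /tmp/fdisk_before.out
multipath -ll > /tmp/multipath_before.out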

4. As root, execute the script /sbin/zfcpconf.sh (it can be run from any directory) to bring the newly allocated disks online.
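
Example (running the script, then checking the kernel log for the new LUNs):
/sbin/zfcpconf.sh
dmesg | tail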

5. Take the fdisk -l and multipath -ll output again and cross-verify the multipath -ll output against the previous capture to find the newly added mpath devices. Each new mpath maps to a dm device of the form dm-xx; prefix it with /dev to get the device name, e.g. /dev/dm-xx.
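
A quick way to spot the additions is to diff a fresh capture against the one from step 3 (file names assume that convention):
multipath -ll > /tmp/multipath_after.out
diff /tmp/multipath_before.out /tmp/multipath_after.out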

Example: from the fdisk output, the disks below were found to be new after comparing with the previous output.
Disk /dev/dm-12 doesn't contain a valid partition table
Disk /dev/dm-23 doesn't contain a valid partition table

Example: from the multipath -ll output, the new mpath devices with their dm names:
mpath8 (360060e800571f000000071f00000fd52) dm-23 HITACHI,OPEN-V
[size=200G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
 \_ 3:0:0:7    sdy   65:128 [active][ready]
 \_ 0:0:0:9    sdz   65:144 [active][ready]
 \_ 1:0:0:6    sdaa  65:160 [active][ready]
 \_ 2:0:0:6    sdab  65:176 [active][ready]
mpath7 (360060e800571f000000071f00000fd51) dm-12 HITACHI,OPEN-V
[size=300G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][enabled]
 \_ 0:0:0:8    sdv   65:80  [active][ready]
 \_ 1:0:0:5    sdw   65:96  [active][ready]
 \_ 2:0:0:5    sdx   65:112 [active][ready]
 \_ 3:0:0:8    sdac  65:192 [active][ready]

Make sure all the paths are available for each mpath device. For instance, four zfcp.conf entries were made for each disk here, so four paths should be present in the multipath -ll output as well.
If any path is missing, contact the Storage team or Mainframe team to get the failed paths fixed before creating the Physical Volume.

6. Create the PV with the mpath device name.
Example : pvcreate /dev/dm-12 /dev/dm-23
Cross-check with pvdisplay or pvs output:
/dev/dm-12             lvm2 --   300.00G 300.00G
/dev/dm-23             lvm2 --   200.00G 200.00G
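
Note: /dev/dm-xx names are not guaranteed to stay the same across reboots. Where the /dev/mapper aliases exist (they do here, per the multipath -ll output above), they are the safer choice, e.g.:
pvcreate /dev/mapper/mpath7 /dev/mapper/mpath8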

7. Create the mount points if they are not available already.
mkdir -p /opt/tibcoBPM /opt/dataSTG

8. Set the ownership for the mount points created, verifying the owner and group with the requester. Note that ownership set now is hidden once a filesystem is mounted over the directory, so the chown must be repeated after mounting in step 13.
Example:
chown -R tibco_u:tibco_g /opt/tibcoBPM
chown -R datastg_u:datastg_g /opt/dataSTG

9. Here, I am creating a separate volume group for each filesystem, as per the request. (You can tell which new disk is which from the sizes in the multipath -ll output.)
vgcreate tibcoBPM_vg /dev/dm-12
vgcreate dataSTG_vg /dev/dm-23

Verify with vgdisplay and vgs output:
tibcoBPM_vg      1   0   0 wz--n- 300.00G 300.00G
dataSTG_vg      1   0   0 wz--n- 200.00G 200.00G

10. Create the LVs on the appropriate VGs with the requested FS sizes. I used the commands below to allocate the entire VG space to each LV.
lvcreate -l 100%FREE -n tibcoBPM_lv tibcoBPM_vg
lvcreate -l 100%FREE -n dataSTG_lv dataSTG_vg

Verify with lvdisplay and lvs outputs
tibcoBPM_lv    tibcoBPM_vg    -wi-a- 300.00G
dataSTG_lv    dataSTG_vg    -wi-a- 200.00G

lvscan output
ACTIVE            '/dev/tibcoBPM_vg/tibcoBPM_lv' [300.00 GB] inherit
ACTIVE            '/dev/dataSTG_vg/dataSTG_lv' [200.00 GB] inherit

11. Format the LVs with an ext3 filesystem using the commands below. Get the LV paths from the lvscan output.
mkfs.ext3 /dev/tibcoBPM_vg/tibcoBPM_lv
mkfs.ext3 /dev/dataSTG_vg/dataSTG_lv

12. Mount the LVs on the created mount points by making entries in /etc/fstab; a manual mount alone is not recommended, as it will not persist across a reboot.
As before, get the full LV paths from the lvscan output and make the entries as below. Take a backup of /etc/fstab first: cp /etc/fstab /etc/fstab_bkp_

/dev/tibcoBPM_vg/tibcoBPM_lv      /opt/tibcoBPM   ext3    defaults        1 2
/dev/dataSTG_vg/dataSTG_lv      /opt/dataSTG   ext3    defaults        1 2
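
Before relying on the new entries at boot, a dry run can confirm they parse cleanly; this is a minimal check, assuming the mount binary in use supports the -f (fake) flag:
mount -fav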

13. Take the df -hT output prior to mounting the FS, then mount with "mount /opt/tibcoBPM" and "mount /opt/dataSTG".
[root@zos_linux1 /]# df -hT /opt/tibcoBPM /opt/dataSTG
Filesystem    Type    Size  Used Avail Use% Mounted on
/dev/mapper/tibcoBPM_vg-tibcoBPM_lv
              ext3    296G  191M  281G   1% /opt/tibcoBPM
/dev/mapper/dataSTG_vg-dataSTG_lv
              ext3    197G  188M  187G   1% /opt/dataSTG
[root@zos_linux1 /]#
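
Since mounting hides the ownership that was set on the underlying directories in step 8, re-run the chown now so the mounted filesystem roots carry the right ownership:
chown -R tibco_u:tibco_g /opt/tibcoBPM
chown -R datastg_u:datastg_g /opt/dataSTG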

14. The End!!
 
