
Thursday, October 25, 2012

How to find wwn in Redhat Linux?


a.    Below is a simple command to find the WWN on Red Hat servers.

systool -c fc_host -v

# systool -c fc_host -v | grep "port_name"
    port_name           = "0x5001438001347fdc"
    port_name           = "0x5001438001347fde"
    port_name           = "0x50014380013471f0"
    port_name           = "0x50014380013471f2"
#
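Similarly, the node WWN can be listed with:

systool -c fc_host -v | grep "node_name"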

b.    Below are the simple steps to find the WWN via sysfs (see the loop sketch after the steps).

Step 1:  cd /sys/class/fc_host
Step 2:  cd to the host# directory.
Step 3: cat node_name (or cat port_name for the port WWN)
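The same information can also be pulled for all hosts in one small loop, a minimal sketch using only the sysfs attributes above:

for host in /sys/class/fc_host/host*; do
    echo "$host:"
    cat "$host/port_name" "$host/node_name"
done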

A useful command to list the FC adapters:

lspci | grep -i Fibre


c.    To find the WWN on Linux with an Emulex HBA:
If you find it difficult to get the WWN on Linux with an Emulex HBA, use the hbacmd utility below to list the HBAs. This is available only if HBAnyware is installed.

cd /usr/sbin/hbanyware
[root@tilak_phy01 hbanyware]# ./hbacmd ListHBAs

Manageable HBA List
Port WWN   : 11:00:00:00:d9:58:bf:92
Node WWN   : 10:00:00:00:b9:58:ba:92
Fabric Name: 20:3f:01:04:51:a1:71:00
Flags      : 1100a980
Host Name  : tilak_phy01
Mfg        : Emulex Corporation

Friday, October 5, 2012

zLinux - LVM using Multipath disks!!




1. Take a backup of the /etc/zfcp.conf file.
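For example (the backup name here is just a convention, pick your own):

cp -p /etc/zfcp.conf /etc/zfcp.conf.bkp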

2. Add the LUN entries that were provided to the /etc/zfcp.conf file.

Example:
## 300GB LUN for the FS /opt/tibcoBPM
0.0.dc00   0x50060e800571f006      0x0051000000000000
0.0.dd00   0x50060e800571f007      0x0051000000000000
0.0.de00   0x50060e800571f016      0x0051000000000000
0.0.df00   0x50060e800571f017      0x0051000000000000
## 200GB LUN for the FS /opt/dataSTG
0.0.dc00   0x50060e800571f006      0x0052000000000000
0.0.dd00   0x50060e800571f007      0x0052000000000000
0.0.de00   0x50060e800571f016      0x0052000000000000
0.0.df00   0x50060e800571f017      0x0052000000000000

Each line is <FCP device bus ID> <target WWPN> <LUN>. Make a note of the LUN IDs shown in the third column above (51 and 52).

3. Take the fdisk -l | grep -i "/dev/dm-" and multipath -ll outputs, for comparison later.
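For example, save the outputs to files so they are easy to compare later (the /tmp file names here are just my choice):

fdisk -l | grep -i "/dev/dm-" > /tmp/fdisk_before.out
multipath -ll > /tmp/multipath_before.out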

4. Execute the sbin script zfcpconf.sh as root, from any directory, to bring the newly allocated disks online.

5. Take the fdisk -l and multipath -ll outputs again and cross-verify the multipath -ll output with the previous one to identify the newly added mpath devices. A newly added mpath device will show up as dm-xx; prefix it with /dev to get the device name, e.g. /dev/dm-xx.

Example: from the fdisk output, the disks below were found to be new after comparing with the previous output.
Disk /dev/dm-12 doesn't contain a valid partition table
Disk /dev/dm-23 doesn't contain a valid partition table

Example: from the multipath -ll output, the new mpaths below appeared, with their disk names:
mpath8 (360060e800571f000000071f00000fd52) dm-23 HITACHI,OPEN-V
[size=200G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
 \_ 3:0:0:7    sdy   65:128 [active][ready]
 \_ 0:0:0:9    sdz   65:144 [active][ready]
 \_ 1:0:0:6    sdaa  65:160 [active][ready]
 \_ 2:0:0:6    sdab  65:176 [active][ready]
mpath7 (360060e800571f000000071f00000fd51) dm-12 HITACHI,OPEN-V
[size=300G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][enabled]
 \_ 0:0:0:8    sdv   65:80  [active][ready]
 \_ 1:0:0:5    sdw   65:96  [active][ready]
 \_ 2:0:0:5    sdx   65:112 [active][ready]
 \_ 3:0:0:8    sdac  65:192 [active][ready]
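If the step 3 outputs were saved to files as sketched above, a quick diff shows only the new entries:

multipath -ll > /tmp/multipath_after.out
diff /tmp/multipath_before.out /tmp/multipath_after.out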

Make sure all the paths are available for each mpath device. For example, four entries were made in zfcp.conf for each disk, so four paths should be available for each device in the multipath -ll output as well.
If not, contact the Storage or Mainframe team to get the failed paths fixed before creating the Physical Volume.

6. Create the PVs with the mpath device names.
Example: pvcreate /dev/dm-12 /dev/dm-23
Cross-check in the pvdisplay or pvs output:
/dev/dm-12             lvm2 --   300.00G 300.00G
/dev/dm-23             lvm2 --   200.00G 200.00G

7. Create the mount points if they are not available already.
mkdir /opt/tibcoBPM /opt/dataSTG

8. Set the ownership for the mount points created; verify the owners with the requester.
Example:
chown -R tibco_u:tibco_g /opt/tibcoBPM
chown -R datastg_u:datastg_g /opt/dataSTG

9. Here I am creating a separate volume group for each filesystem, as per the request.
vgcreate <vg_name> <device> (you can tell which disk is which from the sizes in the multipath -ll output)
vgcreate tibcoBPM_vg /dev/dm-12
vgcreate dataSTG_vg /dev/dm-23

Verify it with vgdisplay and vgs output.
tibcoBPM_vg      1   0   0 wz--n- 300.00G 300.00G
dataSTG_vg      1   0   0 wz--n- 200.00G 200.00G

10. Create the LV on the appropriate VG with the requested FS size. I used the commands below to use the entire VG space.
lvcreate -l 100%FREE -n tibcoBPM_lv tibcoBPM_vg
lvcreate -l 100%FREE -n dataSTG_lv dataSTG_vg

Verify with lvdisplay and lvs outputs
tibcoBPM_lv    tibcoBPM_vg    -wi-a- 300.00G
dataSTG_lv    dataSTG_vg    -wi-a- 200.00G

lvscan output
ACTIVE            '/dev/tibcoBPM_vg/tibcoBPM_lv' [300.00 GB] inherit
ACTIVE            '/dev/dataSTG_vg/dataSTG_lv' [200.00 GB] inherit

11. Format them as ext3 filesystems with the commands below. Get the LV paths from the lvscan output.
mkfs.ext3 /dev/tibcoBPM_vg/tibcoBPM_lv
mkfs.ext3 /dev/dataSTG_vg/dataSTG_lv

12. Mount the LVs on the created mount points by making entries in /etc/fstab; a manual mount alone is not recommended, since the filesystem would not come back after a reboot.
As before, get the full LV paths from the lvscan output and make the entries as below. Take a backup of /etc/fstab first: "cp /etc/fstab /etc/fstab_bkp_".

/dev/tibcoBPM_vg/tibcoBPM_lv      /opt/tibcoBPM   ext3    defaults        1 2
/dev/dataSTG_vg/dataSTG_lv      /opt/dataSTG   ext3    defaults        1 2

13. Take the df -hT output prior to mounting the FS, then run "mount /opt/tibcoBPM" and "mount /opt/dataSTG".
[root@zos_linux1 /]# df -hT /opt/tibcoBPM /opt/dataSTG
Filesystem    Type    Size  Used Avail Use% Mounted on
/dev/mapper/tibcoBPM_vg-tibcoBPM_lv
              ext3    296G  191M  281G   1% /opt/tibcoBPM
/dev/mapper/dataSTG_vg-dataSTG_lv
              ext3    197G  188M  187G   1% /opt/dataSTG
[root@zos_linux1 /]#

14. The End!!
 

Saturday, September 15, 2012

Linux LVM - Creating LV and mounting it


We already created a VG in an earlier post; we are going to use that VG to create some LVs and filesystems on it.

First, let's look at the VG size and the free space available; the vgdisplay and vgs commands will do that for us.
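For instance, the vgs output looks roughly like this (newVG is the VG from the earlier post; the sizes shown here are placeholders):

# vgs
  VG    #PV #LV #SN Attr   VSize   VFree
  newVG   1   0   0 wz--n- 500.00M 500.00M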


Here we find that only one VG is available on the server. If there are many VGs and you want the details of a particular VG, you can use vgs <vg_name> or vgdisplay <vg_name>.

Currently I do not have any LVs available on the server apart from the root (/) filesystem.


We also have other useful commands to verify PVs, VGs and LVs (pvscan; vgscan; lvscan), just for additional information.


Oops, OK.. thanks for your patience, let's start creating LVs.

There are several methods of creating an LV, which will be discussed later. The most commonly used default method of creating an LV is as below:

Syntax:
lvcreate -L <size> -n <lv_name> <vg_name>

Example:
lvcreate -L 100M -n newLV1 newVG

So, now the LV is created successfully.

To verify,
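A rough sketch of the lvs output (the column layout varies slightly between LVM versions):

# lvs
  LV     VG    Attr   LSize
  newLV1 newVG -wi-a- 100.00M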


So now we have created an LV successfully and verified it.

Now the next step is to format the LV before mounting it on a mount point.

The device can be formatted with many different filesystem types; the latest Red Hat/CentOS versions have ext4.

The command used is mkfs.ext4.

The version I use does not have the ext4 format type, so we are going to format it with ext3.


To format the LV, we need to know the complete path of the LV device created, which can be seen in the lvdisplay and lvscan outputs in this post.


Note the "newLV1" linked to "/dev/mapper/newVG-newLV1" which will be the one will be used as the device path for the mount point, even if you use /dev/newVG/newLV1. This can be identified in "mount -l" output after mounting it to a mount point.

Current "mount -l'  output



The formatted filesystem type can also be verified with the "mount -l" command, but only after mounting it on a mount point.

Let's start by formatting the LV device and mounting it on a mount point.

Syntax:
mkfs.ext3 <LV_device_path>

Example:
mkfs.ext3 /dev/newVG/newLV1


Create a mount point: make a directory which is going to be the mount point for this LV. It can be any path, located anywhere on the server.

I am going to create it in the / directory itself, as /newFS.

mkdir /newFS


Mount it on the created directory, which is called the mount point.

mount /dev/newVG/newLV1 /newFS


Verify with df -h before and after mounting the filesystem, and also with "mount -l".
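After mounting, df -h /newFS shows something along these lines (the exact numbers will differ; ext3 reserves some space for the journal and metadata):

# df -h /newFS
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/newVG-newLV1
                       97M  5.6M   87M   7% /newFS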



After mounting, from the "mount -l" output:
/dev/mapper/newVG-newLV1 on /newFS type ext3 (rw)

So now we have successfully mounted the created filesystem. One last and final step: we need to add the mount point and filesystem details to /etc/fstab for automount while booting.

Verify the current /etc/fstab entries.


Edit the /etc/fstab file and add the new mount point as below.
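The new entry follows the same format as the fstab entries in the multipath post, for example:

/dev/newVG/newLV1      /newFS          ext3    defaults        1 2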


That ends the entire LVM procedure.

Quick Recap of complete LVM
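In brief, the complete flow looks like this (the PV device /dev/sdb1 is a placeholder; use whichever disk backs your VG):

pvcreate /dev/sdb1                  # initialize the disk as a Physical Volume
vgcreate newVG /dev/sdb1            # create the Volume Group on it
lvcreate -L 100M -n newLV1 newVG    # carve out a Logical Volume
mkfs.ext3 /dev/newVG/newLV1         # put an ext3 filesystem on it
mkdir /newFS                        # create the mount point
mount /dev/newVG/newLV1 /newFS      # mount it
# finally, add the /etc/fstab entry so it mounts automatically at boot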

