Table of contents
Preamble
When building a RAC architecture, the very first step is at the hardware level, where you must ensure, as far as possible, that all your components are highly available and therefore redundant. At the disk-access level (in my case, access to an EMC SAN array) our HP Blades have two Host Bus Adapter (HBA) cards. Each card has two Fibre Channel (FC) ports, for maximum availability as well as load balancing.
For this failover and/or load balancing you have the choice of several commercial products, such as EMC PowerPath or Veritas Volume Manager (VxVM), or the freely available Red Hat solution called Device Mapper Multipathing (DM Multipath). This blog post uses the latter, and I will make the link with Oracle ASM at the same time.
My testing was done on Red Hat Enterprise Linux Server release 5.5 (Tikanga) with Oracle Enterprise Edition (RAC & ASM) 11.2.0.2.
Multipathing OS configuration
The two HBA cards, each with their two FC ports:
[root@server1 ~]# lspci | grep -i fibre
09:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
09:00.1 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
0c:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
0c:00.1 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
The first thing to do is to create the configuration file /etc/multipath.conf. As described on MOS (My Oracle Support), you can start with something simple like the following (I do not understand why they blacklist sd[d-g], as those are exactly the disks I want to see in DM Multipath). This filters out the internal disks, floppy drive and so on. Note the quite interesting parameter user_friendly_names, which lets you use multipath aliases (mpathN) instead of WWIDs:
blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z]"
        devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"
}
defaults {
        udev_dir                /dev
        polling_interval        10
        selector                "round-robin 0"
        path_grouping_policy    failover
        getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
        prio_callout            /bin/true
        path_checker            readsector0
        rr_min_io               100
        rr_weight               priorities
        failback                immediate
        user_friendly_names     yes
}
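With user_friendly_names active, multipath numbers the aliases itself (mpath0, mpath1, ...). If you prefer deterministic names, a multipaths section can pin an explicit alias to a WWID instead. A minimal sketch, reusing one WWID from the output shown later in this post (the alias asm_data01 is an example of mine, not a name used on this system):

multipaths {
        multipath {
                wwid    3600601602b702d006218b7de8130e111
                alias   asm_data01
        }
}

Aliases defined this way end up under /dev/mapper/ just like the mpathN names, which keeps the rest of the configuration identical.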
Ensure DM Multipathing is running:
[root@server1 ~]# modprobe -l | grep dm-multi
/lib/modules/2.6.18-274.17.1.el5/kernel/drivers/md/dm-multipath.ko
[root@server1 ~]# lsmod | grep dm
dm_round_robin         36801  1
dm_multipath           58713  2 dm_round_robin
scsi_dh                42561  2 scsi_dh_alua,dm_multipath
dm_raid45              99785  0
dm_message             36289  1 dm_raid45
dm_region_hash         46145  1 dm_raid45
dm_mem_cache           38977  1 dm_raid45
dm_snapshot            52233  0
dm_zero                35265  0
dm_mirror              54737  0
dm_log                 44993  3 dm_raid45,dm_region_hash,dm_mirror
dm_mod                102289  106 dm_multipath,dm_raid45,dm_snapshot,dm_zero,dm_mirror,dm_log
[root@server1 ~]# chkconfig --list multipathd
multipathd      0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@server1 ~]# service multipathd status
multipathd (pid 6501) is running...
[root@server1 ~]# multipath -ll mpath0
mpath0 (3600601602b702d006218b7de8130e111) dm-13 DGC,RAID 5
[size=67G][features=1 queue_if_no_path][hwhandler=1 alua][rw]
\_ round-robin 0 [prio=4][active]
 \_ 3:0:0:0 sdae 65:224 [active][ready]
 \_ 3:0:1:0 sdat 66:208 [active][ready]
 \_ 1:0:0:0 sda  8:0    [active][ready]
 \_ 1:0:1:0 sdp  8:240  [active][ready]
The SCSI chain can be displayed using the directories and files below:
[root@server1 ~]# ll /sys/class/fc_host/host*/device
lrwxrwxrwx 1 root root 0 Apr 18 16:22 host1/device -> ../../../devices/pci0000:00/0000:00:03.0/0000:09:00.0/host1
lrwxrwxrwx 1 root root 0 Apr 19 14:56 host2/device -> ../../../devices/pci0000:00/0000:00:03.0/0000:09:00.1/host2
lrwxrwxrwx 1 root root 0 Apr 19 14:56 host3/device -> ../../../devices/pci0000:00/0000:00:05.0/0000:0c:00.0/host3
lrwxrwxrwx 1 root root 0 Apr 19 14:56 host4/device -> ../../../devices/pci0000:00/0000:00:05.0/0000:0c:00.1/host4
[root@server1 ~]# ll /sys/class/fc_remote_ports/rport*/device
lrwxrwxrwx 1 root root 0 Apr 19 15:07 /sys/class/fc_remote_ports/rport-1:0-0/device -> ../../../devices/pci0000:00/0000:00:03.0/0000:09:00.0/host1/rport-1:0-0
lrwxrwxrwx 1 root root 0 Apr 19 15:55 /sys/class/fc_remote_ports/rport-1:0-1/device -> ../../../devices/pci0000:00/0000:00:03.0/0000:09:00.0/host1/rport-1:0-1
lrwxrwxrwx 1 root root 0 Apr 19 15:55 /sys/class/fc_remote_ports/rport-3:0-0/device -> ../../../devices/pci0000:00/0000:00:05.0/0000:0c:00.0/host3/rport-3:0-0
lrwxrwxrwx 1 root root 0 Apr 18 16:21 /sys/class/fc_remote_ports/rport-3:0-1/device -> ../../../devices/pci0000:00/0000:00:05.0/0000:0c:00.0/host3/rport-3:0-1
[root@server1 ~]# ll /sys/class/fc_transport/target*/device
lrwxrwxrwx 1 root root 0 Apr 18 17:03 /sys/class/fc_transport/target1:0:0/device -> ../../../devices/pci0000:00/0000:00:03.0/0000:09:00.0/host1/rport-1:0-0/target1:0:0
lrwxrwxrwx 1 root root 0 Apr 19 15:55 /sys/class/fc_transport/target1:0:1/device -> ../../../devices/pci0000:00/0000:00:03.0/0000:09:00.0/host1/rport-1:0-1/target1:0:1
lrwxrwxrwx 1 root root 0 Apr 19 15:55 /sys/class/fc_transport/target3:0:0/device -> ../../../devices/pci0000:00/0000:00:05.0/0000:0c:00.0/host3/rport-3:0-0/target3:0:0
lrwxrwxrwx 1 root root 0 Apr 19 15:55 /sys/class/fc_transport/target3:0:1/device -> ../../../devices/pci0000:00/0000:00:05.0/0000:0c:00.0/host3/rport-3:0-1/target3:0:1
[root@server1 ~]# ll /sys/block/sd*/device
lrwxrwxrwx 1 root root 0 Apr 10 14:39 /sys/block/sdaa/device -> ../../devices/pci0000:00/0000:00:03.0/0000:09:00.0/host1/rport-1:0-1/target1:0:1/1:0:1:11
lrwxrwxrwx 1 root root 0 Apr 10 14:39 /sys/block/sdab/device -> ../../devices/pci0000:00/0000:00:03.0/0000:09:00.0/host1/rport-1:0-1/target1:0:1/1:0:1:12
lrwxrwxrwx 1 root root 0 Apr 10 14:39 /sys/block/sdac/device -> ../../devices/pci0000:00/0000:00:03.0/0000:09:00.0/host1/rport-1:0-1/target1:0:1/1:0:1:13
lrwxrwxrwx 1 root root 0 Apr 10 14:39 /sys/block/sdad/device -> ../../devices/pci0000:00/0000:00:03.0/0000:09:00.0/host1/rport-1:0-1/target1:0:1/1:0:1:14
lrwxrwxrwx 1 root root 0 Apr 10 14:39 /sys/block/sda/device -> ../../devices/pci0000:00/0000:00:03.0/0000:09:00.0/host1/rport-1:0-0/target1:0:0/1:0:0:0
lrwxrwxrwx 1 root root 0 Apr 10 14:39 /sys/block/sdae/device -> ../../devices/pci0000:00/0000:00:05.0/0000:0c:00.0/host3/rport-3:0-0/target3:0:0/3:0:0:0
lrwxrwxrwx 1 root root 0 Apr 10 14:39 /sys/block/sdaf/device -> ../../devices/pci0000:00/0000:00:05.0/0000:0c:00.0/host3/rport-3:0-0/target3:0:0/3:0:0:1
lrwxrwxrwx 1 root root 0 Apr 10 14:39 /sys/block/sdag/device -> ../../devices/pci0000:00/0000:00:05.0/0000:0c:00.0/host3/rport-3:0-0/target3:0:0/3:0:0:2
.
.
Remark:
The SCSI device list can be found in /sys/class/scsi_device or /sys/class/scsi_disk.
The link between a SCSI path and its disk name (fdisk -l) can also be listed with the lsscsi command:
[root@server1 ~]# lsscsi
[1:0:0:0]    disk    DGC    RAID 5    0531    /dev/sda
[1:0:0:1]    disk    DGC    RAID 5    0531    /dev/sdb
[1:0:0:2]    disk    DGC    RAID 5    0531    /dev/sdc
[1:0:0:3]    disk    DGC    RAID 5    0531    /dev/sdd
[1:0:0:4]    disk    DGC    RAID 5    0531    /dev/sde
[1:0:0:5]    disk    DGC    RAID 5    0531    /dev/sdf
[1:0:0:6]    disk    DGC    RAID 5    0531    /dev/sdg
.
.
Let’s come back to our multipath configuration. For multipath name mpath0 (/dev/mapper/mpathn, /dev/mpath/mpathn and /dev/dm-n) we can see four disks and a link to dm-13; the World Wide Identifier (WWID) is 3600601602b702d006218b7de8130e111. We can use the script below to display a table with disk names and WWIDs:
#!/bin/ksh
for disk in `ls /dev/sd*`
do
  disk_short=`basename $disk`
  wwid=`scsi_id -g -s /block/$disk_short`
  if [ "$wwid" != "" ]
  then
    echo -e "Disk:" $disk_short "\tWWID:" $wwid
  fi
done

Disk: sda       WWID: 3600601602b702d006218b7de8130e111
Disk: sdaa      WWID: 3600601602b702d000652b695c648e111
Disk: sdab      WWID: 3600601602b702d000752b695c648e111
Disk: sdac      WWID: 3600601602b702d007f2a73fbc648e111
Disk: sdad      WWID: 3600601602b702d007e2a73fbc648e111
Disk: sdae      WWID: 3600601602b702d006218b7de8130e111
Disk: sdaf      WWID: 3600601602b702d0036d7cf191241e111
Disk: sdag      WWID: 3600601602b702d0037d7cf191241e111
Disk: sdah      WWID: 3600601602b702d0038d7cf191241e111
.
.
If I grep on the WWID of mpath0 I get exactly the same list of disks as the multipath command:
Disk: sda       WWID: 3600601602b702d006218b7de8130e111
Disk: sdae      WWID: 3600601602b702d006218b7de8130e111
Disk: sdat      WWID: 3600601602b702d006218b7de8130e111
Disk: sdp       WWID: 3600601602b702d006218b7de8130e111
Which can also be cross-checked with:
[root@server1 ~]# ll /dev/mpath/*8130*
lrwxrwxrwx 1 root root 8 Jan 30 20:15 /dev/mpath/3600601602b702d006218b7de8130e111 -> ../dm-13
[root@server1 ~]# grep 8130 /var/lib/multipath/bindings
mpath0 3600601602b702d006218b7de8130e111
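Grepping one WWID at a time gets tedious with many LUNs. The script output above can instead be grouped so that each WWID is listed once with all of its paths. A small sketch (the helper name group_by_wwid is mine; it only assumes the "Disk: <name> WWID: <wwid>" line format produced by the script above):

```shell
#!/bin/sh
# group_by_wwid: read "Disk: <name>   WWID: <wwid>" lines on stdin
# and print one line per WWID listing all device names behind it.
group_by_wwid() {
  awk '{ paths[$4] = paths[$4] " " $2 }
       END { for (w in paths) print w ":" paths[w] }' | sort
}
```

Fed with the full script output, each line of the result then matches one multipath device and its four underlying sd devices.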
Remark:
In the /proc/partitions file you can also link the major and minor device numbers with the multipath output (8 0 for /dev/sda):
[root@server1 ~]# cat /proc/partitions
major minor  #blocks  name
 104     0  292929210 cciss/c0d0
 104     1     104391 cciss/c0d0p1
 104     2  292824787 cciss/c0d0p2
   8     0   70709760 sda
   8    16   70710272 sdb
   8    17   70710066 sdb1
   8    32   70710272 sdc
   8    33   70710066 sdc1
   8    48   70710272 sdd
.
.
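For the device-mapper side of those major/minor pairs, the naming rule is simple on this system: major 253 is device-mapper, and minor N corresponds to /dev/dm-N (as seen with mpath0 being (253, 13) and dm-13). A tiny sketch of that rule (the helper name dm_node is mine, and the major number 253 is an assumption taken from the outputs shown here; it can differ between systems):

```shell
#!/bin/sh
# dm_node: map a (major, minor) pair to the device-mapper node name,
# assuming device-mapper owns major 253 as on the system above.
dm_node() {
  if [ "$1" -eq 253 ]; then
    echo "/dev/dm-$2"
  else
    echo "not a device-mapper device" >&2
    return 1
  fi
}
```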
The dmsetup ls command also provides an interesting display:
[root@server1 ~]# dmsetup ls
mpath23         (253, 23)
vg00-lvol5      (253, 5)
mpath22         (253, 22)
vg00-lvol4      (253, 1)
mpath19         (253, 19)
mpath0          (253, 13)
mpath21         (253, 21)
mpath20p1       (253, 26)
Multipathing ASM configuration
On top of the standard ASMLib configuration you need to change the /etc/sysconfig/oracleasm configuration file:
[root@server1 ~]# ll /etc/sysconfig/oracleasm*
lrwxrwxrwx 1 root root  24 Jan 19 13:09 /etc/sysconfig/oracleasm -> oracleasm-_dev_oracleasm
-rw-r--r-- 1 root root 789 Apr 10 13:20 /etc/sysconfig/oracleasm-_dev_oracleasm
-rw-r----- 1 root root 779 Jan 19 13:22 /etc/sysconfig/oracleasm-_dev_oracleasm.orig
Insert something like the following, which instructs ASMLib to discover multipath disks:
ORACLEASM_SCANORDER="mpath dm"
ORACLEASM_SCANEXCLUDE="sd"
Then, when creating ASM disks, you may get lost: which one to use, /dev/dm-n, /dev/mpath/ or /dev/mapper/? Fortunately, on MOS the answer is clear: /dev/mapper must be used. But, even if Oracle does not insist much on it, you must create a Linux partition (type 83) on all your multipath devices:
[root@server1 ~]# fdisk -l /dev/mapper/mpath14

Disk /dev/mapper/mpath14: 72.4 GB, 72407318528 bytes
255 heads, 63 sectors/track, 8803 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

                 Device Boot      Start         End      Blocks   Id  System
/dev/mapper/mpath14p1                1        8803    70710066   83  Linux
Then mpath14p1 is the device to use when creating the ASM disk (use kpartx or partprobe to make the partitions visible at Linux level; a reboot must NOT be mandatory…). To create them, use something like this (thanks to the user_friendly_names parameter):
[root@server1 ~]# /etc/init.d/oracleasm createdisk DAT /dev/mapper/mpathnp1 |
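If you script the creation of many ASM disks, the only string handling needed is deriving the partition device from the multipath alias. The pN suffix convention (mpath14 becoming mpath14p1) matches what is shown above on RHEL 5, but note that other device-mapper versions may name partitions differently, so treat this as a sketch of this system's convention (the helper name part_dev is mine):

```shell
#!/bin/sh
# part_dev: given a multipath alias and a partition number, return
# the expected partition mapping under /dev/mapper, following the
# <name>p<n> convention observed on RHEL 5 above.
part_dev() {
  name=$1
  num=$2
  echo "/dev/mapper/${name}p${num}"
}
```

A createdisk call then becomes, for example, /etc/init.d/oracleasm createdisk DATA_DISK1 "$(part_dev mpath14 1)".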
Remark:
This partition layer makes reverse engineering the link between ASM disks and multipath devices more complex.
To list configured ASM disks:
[root@server1 ~]# oracleasm listdisks
BRTL_DISK1
BRTL_DISK2
DATA_DISK1
DATA_DISK2
DATA_DISK3
DATA_DISK4
DATA_DISK5
MLOG_DISK
OCR_DISK1
OCR_DISK2
OCR_DISK3
OLOG_DISK
SOFT_DISK1
SOFT_DISK2
Let’s take an example with DATA_DISK1:
[root@server1 ~]# oracleasm querydisk -d data_disk1
Disk "DATA_DISK1" is a valid ASM disk on device /dev/dm-27[253,27]
[root@server1 ~]# oracleasm querydisk -p data_disk1
Disk "DATA_DISK1" is a valid ASM disk
/dev/sdaf1: LABEL="DATA_DISK1" TYPE="oracleasm"
/dev/mapper/mpath14p1: LABEL="DATA_DISK1" TYPE="oracleasm"
/dev/sdb1: LABEL="DATA_DISK1" TYPE="oracleasm"
/dev/sdq1: LABEL="DATA_DISK1" TYPE="oracleasm"
/dev/sdau1: LABEL="DATA_DISK1" TYPE="oracleasm"
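With a dozen ASM disks, extracting just the backing devices from that output is handy, for example to feed them into further checks. A sketch that pulls the device paths out of querydisk -p-style lines (the helper name asm_paths is mine; it only assumes the "device: LABEL=... TYPE=..." line format shown above):

```shell
#!/bin/sh
# asm_paths: read "oracleasm querydisk -p"-style output on stdin and
# print only the device paths, i.e. the field before the colon on
# lines carrying a LABEL= attribute.
asm_paths() {
  awk -F: '/LABEL=/ { print $1 }'
}
```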
The same can be done in reverse order, to link the partition with the upper-level disk name:
[root@server1 ~]# oracleasm querydisk /dev/dm-27
Device "/dev/dm-27" is marked an ASM disk with the label "DATA_DISK1"
[root@server1 ~]# ll /dev/mpath/ | grep dm-27
lrwxrwxrwx 1 root root 8 Jan 30 20:15 3600601602b702d0036d7cf191241e111p1 -> ../dm-27
[root@server1 ~]# ll /dev/mpath/ | grep 3600601602b702d0036d7cf191241e111
lrwxrwxrwx 1 root root 8 Jan 30 20:15 3600601602b702d0036d7cf191241e111 -> ../dm-14
lrwxrwxrwx 1 root root 8 Jan 30 20:15 3600601602b702d0036d7cf191241e111p1 -> ../dm-27
And finally, the multipath configuration of mpath14:
[root@server1 ~]# grep 3600601602b702d0036d7cf191241e111 /var/lib/multipath/bindings
mpath14 3600601602b702d0036d7cf191241e111
[root@server1 ~]# multipath -ll mpath14
mpath14 (3600601602b702d0036d7cf191241e111) dm-14 DGC,RAID 5
[size=67G][features=1 queue_if_no_path][hwhandler=1 alua][rw]
\_ round-robin 0 [prio=4][active]
 \_ 3:0:0:1 sdaf 65:240 [active][ready]
 \_ 3:0:1:1 sdau 66:224 [active][ready]
 \_ 1:0:0:1 sdb  8:16   [active][ready]
 \_ 1:0:1:1 sdq  65:0   [active][ready]
Remark:
When you create the partition mpath14p1 on mpath14, matching partitions are automatically created on /dev/sdaf, /dev/sdau, /dev/sdb and /dev/sdq.
Other interesting commands:
[root@server1 ~]# /sbin/blkid | grep oracleasm
/dev/mapper/mpath21p1: LABEL="OCR_DISK1" TYPE="oracleasm"
/dev/mapper/mpath22p1: LABEL="OCR_DISK2" TYPE="oracleasm"
/dev/mapper/mpath23p1: LABEL="OCR_DISK3" TYPE="oracleasm"
/dev/sdar1: LABEL="BRTL_DISK1" TYPE="oracleasm"
/dev/sdaq1: LABEL="SOFT_DISK2" TYPE="oracleasm"
/dev/sdap1: LABEL="SOFT_DISK1" TYPE="oracleasm"
/dev/sdag1: LABEL="DATA_DISK2" TYPE="oracleasm"
.
.
[root@server1 ~]# more /var/lib/multipath/bindings
# Multipath bindings, Version : 1.0
# NOTE: this file is automatically maintained by the multipath program.
# You should not need to edit this file in normal circumstances.
#
# Format:
# alias wwid
#
mpath0 3600601602b702d006218b7de8130e111
mpath1 3600601602b702d0089a6a26c3f31e111
mpath2 3600601602b702d0088a6a26c3f31e111
mpath3 3600508b1001cab8358dfaef03cf53284
mpath4 3600601602b702d00bc7c96a31041e111
mpath5 3600601602b702d0005e08ee91041e111
mpath6 3600601602b702d0004e08ee91041e111
mpath7 3600601602b702d007e45572a1141e111
mpath8 3600601602b702d007c45572a1141e111
mpath9 3600601602b702d007f45572a1141e111
mpath10 3600601602b702d008045572a1141e111
mpath11 3600601602b702d007d45572a1141e111
mpath12 3600601602b702d00ba7c96a31041e111
mpath13 3600601602b702d00bb7c96a31041e111
mpath14 3600601602b702d0036d7cf191241e111
mpath15 3600601602b702d0037d7cf191241e111
.
.
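Since the bindings file is just "alias wwid" pairs with '#' comments, both lookup directions are easy to script. A sketch with the file path passed explicitly so it can be exercised on a copy (the helper names alias_to_wwid and wwid_to_alias are mine):

```shell
#!/bin/sh
# alias_to_wwid / wwid_to_alias: resolve between the multipath alias
# and the WWID using a bindings-format file ("alias wwid" pairs,
# lines starting with '#' are comments).
alias_to_wwid() {
  awk -v a="$1" '!/^#/ && $1 == a { print $2 }' "$2"
}
wwid_to_alias() {
  awk -v w="$1" '!/^#/ && $2 == w { print $1 }' "$2"
}
```

Used against /var/lib/multipath/bindings, this replaces the manual grep commands shown earlier.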
References
- DM Multipath Configuration and Administration Edition 3
- Online Storage Reconfiguration Guide
- Configuring non-raw multipath devices for Oracle Clusterware 11g (11.1.0, 11.2.0) on RHEL5/OL5 [ID 605828.1]
- How to Configure LUNs for ASM Disks using WWID, DM-Multipathing, and ASMLIB on RHEL 5/OEL 5 [ID 1365511.1]
- Configuration and Use of Device Mapper Multipathing on Oracle Enterprise Linux (OEL) [ID 555603.1]
- How To Setup ASM & ASMLIB On Native Linux Multipath Mapper disks? [ID 602952.1]
- How to Partition DM-Multipath Pseudo Devices [ID 470913.1]
- Troubleshooting a multi-node ASMLib installation [ID 811457.1]
- Oracle Linux and External Storage Systems [ID 753050.1]
Soma says:
Thank you for such a wonderful articles which made things far more easier to understand.
I have worked on a project (host-based migration) where I moved ASM disks from one SAN to another SAN. While creating ASM disks in the new SAN, I didn’t create partition on the LUNs in SAN. Would it create any issues in the future?
Also, is it mandatory to create partition on LUNS in SAN when it comes to host-based migration? If yes, what utility you prefer to use in ORACLE RAC “kpartx” or “fdisk”?
Many thanks in advance,
Yannick Jaquier says:
Thanks for nice comment !
If in latest ASM releases Oracle allow you to create ASM disk without partition then I suppose it is ok to do it…
I personally use fdisk, quite low level but sufficient in such case.
Nick says:
thanks for the blog.
I was wondering, how come when you run the command “oracleasm querydisk -p data_disk1” it still shows the single-path devices as well? Surely all Oracle cares about is the “mpath” mapping?
e.g.
[root@server1 ~]# oracleasm querydisk -p data_disk1
Disk "DATA_DISK1" is a valid ASM disk
/dev/sdaf1: LABEL="DATA_DISK1" TYPE="oracleasm"
/dev/mapper/mpath14p1: LABEL="DATA_DISK1" TYPE="oracleasm"
/dev/sdb1: LABEL="DATA_DISK1" TYPE="oracleasm"
/dev/sdq1: LABEL="DATA_DISK1" TYPE="oracleasm"
/dev/sdau1: LABEL="DATA_DISK1" TYPE="oracleasm"
should give:
[root@server1 ~]# oracleasm querydisk -p data_disk1
Disk "DATA_DISK1" is a valid ASM disk
/dev/mapper/mpath14p1: LABEL="DATA_DISK1" TYPE="oracleasm"
only…
especially since ORACLEASM_SCANEXCLUDE="sd" is also given?
Thanks
Nick
Yannick Jaquier says:
Thanks for comment.
I would say yes and no, no it could not give the complete configuration but on the other hand it’s nice to have it right ? The ORACLEASM_SCANEXCLUDE is only for detection not for complete display of the path that ASM most probably get from the OS.
Majd says:
we have 4 servers , same HW and same SW , OS linux SUSE 9
each server has 2 HBA (QLA card) and each card connected to FC switch
we have 2 SAN storgae each one connected to FC with two LINKs
so each server must be have 4 paths to LUN
Now : 2 out of 4 servers can detect 4 paths to each LUN on the SAN
anoth 2 servers can detect 4 paths to some LUNs and 3 paths to another
all servers using same QLA driver / Same multipath config / same OS /
no HW problem
thanks for your Help ,
Adam DeRidder says:
I was wrong about the scsi_id parameter. Using the whitelisted parameter seems to be a newer way
This is documented in redhatbox:# more /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf
Adam DeRidder says:
Wow! Yannik – this is very clear and very helpful. Thank you for sharing..
Here are a few changes I made to my personal config for you to consider:
(Changed in RHEL6) ‘prio_callout /bin/true’ is now ‘prio const’ and const can be tweaked per storage
(RHEL6?) getuid_callout “/sbin/scsi_id -g -u -s /block/%n” which uses -g flag instead of whitelist
(update) For the path_selector bmarzins at redhat suggests using service-time instead of default of round-robin. I also see him updating code for the mpath daemon so I guess he’s qualified.
(addition) I added a ‘checker_timer 180’ param instead of using a udev rule that I see in Joseph’s step 5.
haha@nicole. Yep. That is easier all right. Not very complete, but easier.
Yannick Jaquier says:
Hi Adam,
Thanks to you for stopping by and nice comment !!! 🙂
Yannick.
nicole says:
Hello,
All the information is very useful; after a lot of googling I found the easiest way here too
http://expertisenpuru.com/how-to-configure-multipath-in-redhat-linux/
Thanks for sharing those device mapper multipathing configuration information.
vladius says:
>>you must create a Linux partition (83) on all your multipath devices
why?
what troubles may be without parttion?
Yannick Jaquier says:
From what I remember it was not possible to create the ASM disk on the root partition… But you can try and let us know…
Joseph says:
Great blog. I just want to confirm “Fortunately on MOS the answer is clear and /dev/mapper must be used. “. Can you refer me to the Note ID? As I am having trouble locating it?
Also, is this also true for RHEL 6.x?
As I found a guide that basically have these steps.
1. Create partition on /dev/mapper/mpath* (fdisk /dev/mapper/mpathb)
2. Add the partition mappings and re-read the partition table (kpartx -a /dev/mapper/mpathb)
3. Restart multipath service (service multipathd restart)
4. Get the SCSI ID (scsi_id --page=0x83 --whitelisted --device=/dev/mapper/mpathb1)
5. Create UDEV rules (/etc/udev/rules.d/99-oracle-asmdevices.rules)
KERNEL=="dm-*", PROGRAM="scsi_id --page=0x83 --whitelisted --device=/dev/%k", RESULT=="36006xxxx..", NAME="asm-disk1", OWNER:="oracle", GROUP:="dba"
6. Reload and start UDEV
7. Check the ASM disk is mounted on the “Name” mentioned in step 5
brw-rw—-. 1 oracle dba 253, 13 Mar 26 07:22 /dev/asm-disk1
8. Create the ASM disk
NAME PATH HEADER_STATU TOTAL_MB FREE_MB
—————————— —————————— ———— ———- ———-
DATA_0000 /dev/asm-disk1 MEMBER 102400 52010
I am worried that I did not use /dev/mapper/mpathb on creating the disk group but I cannot find any clearer step that this.
Yannick Jaquier says:
Hi Joseph,
Thanks !
Please refer to MOS note in References section:
How to Configure LUNs for ASM Disks using WWID, DM-Multipathing, and ASMLIB on RHEL 5/OEL 5 [ID 1365511.1]
As stated in blog post I have really asked to myself which one to use… Please share your vision if different…
I suppose there is no difference in RHEL6/OEL6 but I have personally not tested it…
Mahesh says:
Thank you Yannick..We are working on multipath.conf options.
Mahesh Y says:
Hi Yannick,
Nice descriptive article..We are facing one typical issue in our RAC 3 node RHEL cluster.Hope you can help..
Issue: while doing a SAN failover resilience test, when the primary path of the HBA is disconnected, failover happens and the DB instance keeps running with no issues. When the primary is re-enabled and we try to disable the secondary SAN path of the IBM XIV, the DB instance goes down with an I/O error. Tried changing failback immediate in multipath.conf, but no luck. Any ideas on how to solve this?
Yannick Jaquier says:
Hi Mahesh,
Not sure to fully understand what you do…
When you reconnect first port of your HBA card, do you wait enough that “kernel re-discover” it ? If yes then it deserves deeper investigation and search on potential bug as it is basic of HA you can expect…
Unfortunately I cannot test t from my side as my computer room is far from me…
Yannick.
Mahesh Y says:
Thanks for prompt reply.We can access external disks with no issues from OS side, if we disconnect the either of the HBA cards, Issue is coming up at database level.Do you suggest to update any timeout value in DB ?
Yannick Jaquier says:
I don’t think the issue is coming from database. We could imagine it coming from ASM but if you have configured your ASM disks as explained in MOS note 1365511.1 and so used /dev/mapper then Oracle ASM instance should not even see that one of the multiple accesses to your physical disk is broken… So in my opinion the problem is at Linux/Unix level…