Tricks With Multipathd

Technote (FAQ)


Question

Tricks With Multipathd

Answer

Multipathd has a mode (the -k flag) where it can be used to connect to the running multipathd process over a socket.

If there is no multipathd process currently running, you will get the following error:


[root@k2 ~]# multipathd -k
ux_socket_connect: Connection refused
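
In that case, start the daemon first, for example with the standard Red Hat init script:

# service multipathd start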

If the daemon is running, you can issue commands in several ways.

  • Interactively, with the -k flag
    # multipathd -k
    multipathd>
  • # echo "help" | multipathd -k
    multipathd> multipath-tools v0.4.7 (03/12, 2006)
    CLI commands reference:
    list|show paths
    ..excess deleted...
  • # multipathd -k"show paths"
    hcil dev dev_t pri dm_st chk_st next_check
    1:0:0:0 sda 8:0 10 [active][ready] X......... 3/20
    1:0:0:1 sdb 8:16 10 [active][ready] X......... 3/20
The latter two formats can be used in shell scripts and the like.
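
For example, a small monitoring script can parse the output of "show paths" and raise an alert when any path has failed. This is an illustrative sketch, not part of the multipath-tools package; the grep pattern and the "pathcheck" syslog tag are assumptions:

#!/bin/bash
# Count the paths that multipathd reports as failed. Assumes the
# daemon is running and that failed paths show the word "failed"
# in the dm_st column, as in the output above.
FAILED=$(multipathd -k"show paths" | grep -c "failed")
if [ "$FAILED" -gt 0 ]; then
    # Send a warning to syslog; "pathcheck" is an arbitrary tag.
    logger -t pathcheck "WARNING: $FAILED failed path(s) detected"
fi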

EXAMPLES

DISPLAY PATH STATUS
multipathd> show multipaths status
name failback queueing paths dm-st
mpath0 immediate - 4 active
mpath1 immediate - 4 active


SHOW TOPOLOGY
multipathd> show topology
mpath0 (360050768018380367000000000000049) dm-0 IBM,2145
[size=5.0G][features=1 queue_if_no_path][hwhandler=0 ]
\_ round-robin 0 [prio=100][enabled]
 \_ 1:0:3:0 sdg 8:96 [active][ready]
 \_ 1:0:1:0 sde 8:64 [active][ready]
\_ round-robin 0 [prio=20][enabled]
 \_ 1:0:0:0 sda 8:0 [active][ready]
 \_ 1:0:2:0 sdc 8:32 [active][ready]


SHOW PATHS
multipathd> show paths
hcil dev dev_t pri dm_st chk_st next_check
1:0:0:0 sda 8:0 10 [active][ready] XXXXXXX... 14/20
1:0:0:1 sdb 8:16 10 [active][ready] XXXXXXX... 14/20
1:0:2:0 sdc 8:32 10 [active][ready] XXXXXXX... 14/20
1:0:2:1 sdd 8:48 10 [active][ready] XXXXXXX... 14/20
... excess deleted ...


FAIL A PATH
If we fail a path and quickly check (before multipathd detects that the path is actually healthy and restores it), we find:

# multipathd -k"fail path sdc"; multipathd -k"show paths"
ok
hcil dev dev_t pri dm_st chk_st next_check
1:0:0:0 sda 8:0 10 [active][ready] .......... 1/20
1:0:0:1 sdb 8:16 10 [active][ready] .......... 1/20
1:0:2:0 sdc 8:32 10 [failed][faulty ] X......... 3/20
1:0:2:1 sdd 8:48 10 [active][ready] .......... 1/20
1:0:1:0 sde 8:64 50 [active][ready] .......... 1/20
1:0:1:1 sdf 8:80 50 [active][ready] .......... 1/20
1:0:3:0 sdg 8:96 50 [active][ready] .......... 1/20
1:0:3:1 sdh 8:112 50 [active][ready] .......... 1/20

Within a few seconds the path will again be restored.

The messages file will show:

Dec 11 10:19:49 k2 multipathd: 8:32: mark as failed
Dec 11 10:19:49 k2 multipathd: mpath0: remaining active paths: 3
Dec 11 10:19:49 k2 kernel: device-mapper: multipath: Failing path 8:32.
Dec 11 10:19:52 k2 multipathd: sdc: tur checker reports path is up
Dec 11 10:19:52 k2 multipathd: 8:32: reinstated
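
Rather than waiting for the checker, a failed path can also be restored by hand with the manual counterpart of "fail path":

# multipathd -k"reinstate path sdc"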


DELETE A PATH
multipathd> del path sdc
ok

multipathd> show paths
hcil dev dev_t pri dm_st chk_st next_check
1:0:0:0 sda 8:0 10 [active][ready] XXXXXX.... 13/20
1:0:0:1 sdb 8:16 10 [active][ready] XXXXXX.... 13/20
1:0:2:1 sdd 8:48 10 [active][ready] XXXXXX.... 13/20
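
The deleted path can be brought back without restarting the daemon, using the corresponding "add path" CLI command:

multipathd> add path sdc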


SUSPEND A MAP
multipathd> suspend map mpath0
ok

multipathd> show multipaths status
name failback queueing paths dm-st
mpath0 immediate - 3 suspend
mpath1 immediate - 4 active

If we have a mount point such as

/dev/mapper/mpath0p1 1.9G 35M 1.8G 2% /mnt

# ls -l /mnt
... this will hang ...

We then resume the map:

multipathd> resume map mpath0

Our ls now completes.

This works because we have queue_if_no_path set:

mpath0 (360050768018380367000000000000049) dm-0 IBM,2145
[size=5.0G][features=1 queue_if_no_path][hwhandler=0]

The disadvantage of this is that processes may hang and become
unkillable in situations where all paths to the LUN go offline.
SEE: https://bugzilla.redhat.com/show_bug.cgi?id=488921
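
If processes are already hung on a queueing map, queueing can be switched off on the live map with a device-mapper message, which fails the queued I/O back to the callers (run this against the affected map name):

# dmsetup message mpath0 0 fail_if_no_path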

To avoid this, set no_path_retry <N> in /etc/multipath.conf (where <N> is the number of times the system should retry a path). When you do so, ensure that you also remove the features "1 queue_if_no_path" option from /etc/multipath.conf. In RHEL 5.3 a new option was added:

queue_without_daemon (RHEL 5.3 and later): If set to no, the multipathd daemon will disable queueing for all devices when it is shut down.
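
A minimal sketch of such a configuration (the retry count of 5 is illustrative, and no_path_retry can also be set in a devices or multipaths section instead of defaults):

defaults {
    no_path_retry 5
    queue_without_daemon no
}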

SHOW BLACKLIST
# multipathd -k"show blacklist"
device node rules:
- blacklist:
(config file rule) ^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*
(config file rule) ^hd[a-z]
(default rule) ^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*
(default rule) ^hd[a-z]
- exceptions:
<empty>
wwid rules:
- blacklist:
<empty>
- exceptions:
<empty>
device rules:
- blacklist:
(default rule) DGC:LUNZ
(default rule) IBM:S/390.*
- exceptions:
<empty>
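
The "config file rule" entries above come from the blacklist section of /etc/multipath.conf; an illustrative sketch (the wwid and devnode values are placeholders):

blacklist {
    wwid <wwid-of-device-to-exclude>
    devnode "^sdc$"
}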

SHOW CONFIG
If you want to see the compiled-in configuration, rename multipath.conf, restart multipathd, and then run the following. If the multipath.conf file exists, what is displayed is the logical merge of the built-in configuration and the multipath.conf file.

# multipathd -k"show config" | more
.. excess deleted ...
blacklist_exceptions {
}
devices {
device {
vendor APPLE*
product Xserve RAID
path_grouping_policy multibus
}
device {
vendor 3PARdata
product VV
path_grouping_policy multibus
... excess deleted ...

THE /var/log/messages FILE
Events created using these methods will be logged to the messages file just like failing events; however, they will be tagged as "operator".

Dec 11 10:19:49 k2 multipathd: mpath0: fail path 8:32 (operator)
Dec 11 10:24:18 k2 multipathd: mpath0: suspend (operator)
Dec 11 10:30:26 k2 multipathd: mpath0: resume (operator)
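
To pick out only the operator-initiated events, filter the log, for example:

# grep "(operator)" /var/log/messages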


Document information

More support for: Linux family
Software version: 5.4, 5.5
Operating system(s): Linux
Software edition: Enterprise
Reference #: T1011985
Modified date: 2009-12-11
