I’m sure all the grey (or gray? I don’t know, you pick one) beards out there know all about multipathd and its quirks and ins and outs. Since a lot of people appear to be looking at other hypervisor solutions these days, I decided to put my opinion out there too. You of course don’t need to listen to me, as there are a number of resources out there. Matt Webb has an unofficial guide for Pure + Proxmox: https://dinocloud.net/2025/06/13/the-unofficial-proxmox-pure-storage-cookbook-iscsi-with-multipathing/. I like his approach, but I think there may be a better way.
It’s possible to create a directory named conf.d inside /etc/multipath. Any .conf files you put there are loaded as if they were part of /etc/multipath.conf and can override its settings. So my approach follows Matt’s, but my /etc/multipath.conf file looks like:
defaults {
    find_multipaths off
}
blacklist {
    device {
        vendor ".*"
        product ".*"
    }
}
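The conf.d directory doesn’t exist by default, so you have to create it and then have multipathd re-read its configuration. A minimal sketch (on older multipath-tools builds the reload may need to go through multipathd -k, or you can just use multipath -r as I do further down):

mkdir -p /etc/multipath/conf.d
# re-read /etc/multipath.conf plus anything dropped into conf.d
multipathd reconfigure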
The find_multipaths setting prevents me from needing to run multipath -a <wwid> whenever a new target is added. (After talking with a colleague I changed this to off. The name is misleading in that it sets the multipath tool to strict, the default, but sets multipathd to greedy, which makes an mpath device for every non-blacklisted device. You will get more predictable behavior this way.) I also created the conf.d folder and added a number of files. The first is the configuration for a target I created on a Linux VM; I named it mystorage.conf:
blacklist_exceptions {
    device {
        vendor "IET"
        product "VIRTUAL-DISK"
    }
}
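Before moving on, it’s worth confirming that multipathd actually merged the drop-in and picked up the find_multipaths change. Something along these lines should do it; the greps are only there to trim the output:

multipathd show config | grep find_multipaths
multipathd show config | grep -A 4 blacklist_exceptions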
I got these values for the target by looking at the /sys filesystem, specifically /sys/block/sde/device/vendor and /sys/block/sde/device/model. After adding these and running multipath -r to reload the configuration, I can see the devices being multipathed via multipath -ll:
multipath -ll
360000000000000000e00000000010001 dm-5 IET,VIRTUAL-DISK
size=10G features='0' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=1 status=active
| `- 8:0:0:1 sde 8:64 active ready running
`-+- policy='service-time 0' prio=1 status=enabled
  `- 9:0:0:1 sdf 8:80 active ready running
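If you need to pull those vendor and product strings for a different array, the same sysfs trick works for any block device; just swap in your own device name for sde:

cat /sys/block/sde/device/vendor
cat /sys/block/sde/device/model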
I also created a pure.conf since that’s what I’m really trying to do:
blacklist_exceptions {
    wwid "3624a9370.*"
    device {
        vendor "PURE"
        product ".*"
    }
    device {
        vendor "NVME"
        product "Pure Storage FlashArray"
    }
}
devices {
    device {
        vendor "NVME"
        product "Pure Storage FlashArray"
        path_selector "queue-length 0"
        path_grouping_policy group_by_prio
        prio ana
        failback immediate
        user_friendly_names no
        no_path_retry 0
        features 0
        dev_loss_tmo 60
    }
    device {
        vendor "PURE"
        product "FlashArray"
        path_selector "service-time 0"
        hardware_handler "1 alua"
        path_grouping_policy group_by_prio
        prio alua
        failback immediate
        path_checker tur
        user_friendly_names no
        no_path_retry 0
        features 0
    }
}
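If you want to see what your distribution’s multipath-tools already ships for these arrays, you can dump the effective hardware table and compare; something like this should work on a reasonably recent build (the grep pattern is just to narrow the output):

multipath -t | grep -B 2 -A 15 '"PURE"'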
I stole most of these from Matt, and I’m making the assumption they are correct for the products listed. You may really only need the blacklist_exceptions stanza; the rest should already be part of the upstream multipath default configuration. Once all of this is done, any new targets that meet the blacklist exception criteria are automatically detected and added to the multipathing configuration. So, well, you know, that’s just, like, my opinion, man.