Hi,
Based on the given information, the strange part is that the same multipath device is used for both the old and the new volume: 36e00084100ee7e7ed6ad25d900002f6b
36e00084100ee7e7ed6ad25d900002f6b dm-9 HUAWEI,XSG1
size=115G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 14:0:0:4 sdm 8:192 active ready running
|- 15:0:0:4 sdo 8:224 active ready running
|- 16:0:0:4 sdl 8:176 active ready running
`- 17:0:0:4 sdn 8:208 active ready running
36e00084100ee7e7ed6ad25d900002f6b dm-9 HUAWEI,XSG1
size=115G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 14:0:0:10 sdao 66:128 failed faulty running
|- 14:0:0:4 sdm 8:192 active ready running
|- 15:0:0:10 sdap 66:144 failed faulty running
|- 15:0:0:4 sdo 8:224 active ready running
|- 16:0:0:10 sdan 66:112 failed faulty running
|- 16:0:0:4 sdl 8:176 active ready running
|- 17:0:0:10 sdaq 66:160 failed faulty running
`- 17:0:0:4 sdn 8:208 active ready running
It's also interesting to note that the paths under the first multipath device (sdm, sdo, sdl, sdn), which have LUN ID 4, are also used by the second multipath device, whereas it should be using the LUN ID 10 paths (which are currently in failed faulty state).
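As a sanity check (a minimal sketch only; the device names come from the output above, and the scsi_id binary may live under /usr/lib/udev on some distros), the WWID each path device actually reports can be compared against the multipath map it is grouped under:

# Query the SCSI WWID reported by each path device listed above.
for dev in sdm sdo sdl sdn sdao sdap sdan sdaq; do
    echo -n "$dev: "
    /lib/udev/scsi_id --whitelisted --device=/dev/$dev
done

# Show how multipathd itself groups the paths (dev, hcil, wwid, map name).
multipathd show paths format "%d %i %w %m"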
This looks multipath related, but it would be helpful if we could get the os-brick logs for this 1GB volume attachment to understand whether os-brick is doing something that results in this.
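For reference, a minimal sketch for collecting those logs, assuming a typical deployment where os-brick logs through the nova-compute service (the service name and log path may differ in your environment):

# Enable debug logging in nova.conf ([DEFAULT] section), then restart nova-compute:
#   debug = True
systemctl restart nova-compute     # service name may differ (e.g. openstack-nova-compute)

# After reproducing the 1GB volume attachment, pull out the os-brick entries:
grep -i 'os_brick' /var/log/nova/nova-compute.log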
I would also recommend cleaning up any leftover devices from past failed detachments (i.e. flush and remove mpath devices that don't belong to any instance) that might be interfering with this. Although I'm not certain that is the cause, it is still good practice to clean up those devices.
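For the cleanup itself, a minimal sketch, assuming the leftover map and its paths have already been confirmed as not belonging to any instance (<leftover-wwid> and <sdX> below are placeholders, not values from this system):

# Double-check which maps exist and which instances they belong to first.
multipath -ll

# Flush a leftover multipath map that no longer belongs to any instance.
# This fails safely if the map is still open by anything.
multipath -f <leftover-wwid>

# Then remove the orphaned SCSI path devices that backed that map,
# repeating for each path device listed under the flushed map.
blockdev --flushbufs /dev/<sdX>
echo 1 > /sys/block/<sdX>/device/delete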