Steps to recreate:
Create a volume:
docker volume create -d hpe --name m_conf_delay_vol1 -o size=1
Mount it on node 1.
Now move to node 2 and mount it several times; every mount succeeds.
Move back to node 1 and mount again; this time the mount fails (a shell sketch of this sequence follows the diagram below).
Node 1
  Mount 1 ------> Move to Node 2
                    Mount 2
                    Mount 3
                    Mount 4
  Mount 5 <------ Move back to Node 1: mount fails
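For clarity, the same sequence as shell commands. This is a minimal sketch: the busybox container and the loop count are illustrative, access to both nodes is assumed, and the volume name matches the one created above.

```sh
# On node 1: the first mount succeeds.
docker run --rm -v m_conf_delay_vol1:/data busybox true

# On node 2: several mounts in a row all succeed.
for i in 1 2 3; do
  docker run --rm -v m_conf_delay_vol1:/data busybox true
done

# Back on node 1: this mount fails with "already mounted".
docker run --rm -v m_conf_delay_vol1:/data busybox true
```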
Logs attached
[root@ecostor-b14 ~]# docker run -it -v m_conf_delay_rcvol1:/data2 --name c2 --rm busybox /bin/sh
/usr/bin/docker-current: Error response from daemon: error while mounting volume '/opt/hpe/data/hpedocker-dm-uuid-mpath-360002ac0000000000100850c000187b7': HPE Docker Volume Plugin Mount Failed: ('exception is : %s', '\n\n RAN: /bin/mount -t ext4 /dev/dm-5 /opt/hpe/data/hpedocker-dm-uuid-mpath-360002ac0000000000100850c000187b7\n\n STDOUT:\n\n\n STDERR:\nmount: /opt/hpe/data/hpedocker-dm-uuid-mpath-360002ac0000000000100850c000187b7: /dev/mapper/360002ac0000000000100850c000187b7 already mounted on /opt/hpe/data/hpedocker-dm-uuid-mpath-360002ac0000000000100850c000187b7.\n').
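The error suggests node 1 still considers the filesystem mounted at the plugin's mountpoint from the earlier Mount 1, so the new mount attempt collides with the stale entry. A couple of standard commands to confirm this (not part of the original report; the path is copied from the error above):

```sh
# Check whether the mountpoint is still listed as mounted on node 1.
findmnt /opt/hpe/data/hpedocker-dm-uuid-mpath-360002ac0000000000100850c000187b7

# If it is a stale leftover from the earlier mount, unmounting it manually
# may clear the error (use with care on a production node).
umount /opt/hpe/data/hpedocker-dm-uuid-mpath-360002ac0000000000100850c000187b7
```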
[root@ecostor-b14 ~]# multipath -ll
360002ac00000000001008508000187b7 dm-3 3PARdata,VV
size=1.0G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
-+- policy='round-robin 0' prio=50 status=active
  - 1:0:5:0 sdb 8:16 active ready running
360002ac0000000000100850c000187b7 dm-5 3PARdata,VV
size=1.0G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
-+- policy='round-robin 0' prio=50 status=active
  - 1:0:5:2 sdd 8:48 active ready running
360002ac00000000001008509000187b7 dm-4 3PARdata,VV
size=16G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
-+- policy='round-robin 0' prio=50 status=active
  - 1:0:5:1 sdc 8:32 active ready running
You have new mail in /var/spool/mail/root
[root@ecostor-b14 ~]#
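The multipath -ll output shows the volume's map (dm-5, WWID 360002ac0000000000100850c000187b7) still active on node 1 even after the volume moved to node 2, which is consistent with the original mount never having been torn down. A possible cleanup sketch using standard multipath tooling (not from the original report):

```sh
# Confirm which dm device backs the stale mountpoint.
lsblk /dev/dm-5

# After unmounting, flush the stale multipath map by WWID
# (multipath -f refuses to flush a map that is still in use).
multipath -f 360002ac0000000000100850c000187b7
```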