Unable to resolve brick changes if the brick order changes #121
Does brick order not have to be 'correct'?
Quite possibly. This was a simple three-node replica (the same data on each node), so I wouldn't have thought the order would make any difference. I can understand that it would for some other volume types, though.
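For background, brick order does matter once Gluster groups bricks into replica sets: on the `volume create` command line, consecutive bricks form each set. A sketch with placeholder hostnames and volume names:

```sh
# Pure 3-way replica: all three bricks mirror each other,
# so their order has no effect on data placement.
gluster volume create voltest1 replica 3 \
  server1:/data/voltest1/brick \
  server2:/data/voltest1/brick \
  server3:/data/voltest1/brick

# Distribute-replicate (2x2, hypothetical volume "voltest2"):
# consecutive bricks are paired into replica sets, so swapping
# bricks changes which nodes mirror each other.
gluster volume create voltest2 replica 2 \
  server1:/data/voltest2/brick server2:/data/voltest2/brick \
  server3:/data/voltest2/brick server4:/data/voltest2/brick
```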
I am having a somewhat similar issue. Being a bit OCD, I keep the servers in order in my bricks list; however, Gluster lists them in a different order, and from there Puppet comes back with the same notice about unresolved brick changes.
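For illustration, with placeholder server and volume names, a declaration of this kind, assuming the module's `gluster::volume` defined type with `replica` and `bricks` parameters, might look like:

```puppet
# Hypothetical declaration: bricks listed in the order the
# operator keeps them, which Gluster may not echo back.
gluster::volume { 'voltest1':
  replica => 3,
  bricks  => [
    'server1:/data/voltest1/brick',
    'server2:/data/voltest1/brick',
    'server3:/data/voltest1/brick',
  ],
}
```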
I've just come across the same problem purely from moving the Puppet Hiera config from one location to another; nothing about the gluster volumes actually changed. The status of the volumes is fine, but puppet agent now returns lots of notices like the one below.
Is there any way to fix this?
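One plausible direction for a fix (a minimal sketch in plain Ruby, not the module's actual code) is to compare the defined and current brick lists as unordered sets, so a pure reordering resolves to no change:

```ruby
# Hypothetical helper: treat brick lists that differ only in
# order as unchanged.
defined_bricks = %w[
  server1:/data/voltest1/brick
  server2:/data/voltest1/brick
  server3:/data/voltest1/brick
]
current_bricks = %w[
  server2:/data/voltest1/brick
  server3:/data/voltest1/brick
  server1:/data/voltest1/brick
]

if defined_bricks.sort == current_bricks.sort
  # Same set of bricks, only the reported order differs: nothing to do.
else
  to_add    = defined_bricks - current_bricks
  to_remove = current_bricks - defined_bricks
  # ... resolve genuine additions/removals here ...
end
```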
Affected Puppet, Ruby, OS and module versions/distributions
Docker image: puppetserver 2.7.2, puppetexplorer 2.0.0, pupperboard 0.2.0, puppetdb 4.3.0, puppet-postgres 0.1.0
How to reproduce (e.g. Puppet code you use)
Doing some stress testing of Gluster, I killed and rebuilt a node. Before doing this, `gluster volume info` reported the bricks in this order:
Brick1: server1:/data/voltest1/brick
Brick2: server2:/data/voltest1/brick
Brick3: server3:/data/voltest1/brick
After the rebuild, the order is:
Brick1: server2:/data/voltest1/brick
Brick2: server3:/data/voltest1/brick
Brick3: server1:/data/voltest1/brick
What are you seeing
Subsequent runs of puppet agent report:
Notice: unable to resolve brick changes for Gluster volume voltest1!
Defined: server1:/data/voltest1/brick server2:/data/voltest1/brick server3:/data/voltest1/brick
Current: [server2:/data/voltest1/brick, server3:/data/voltest1/brick, server1:/data/voltest1/brick]
What behaviour did you expect instead
For it not to complain, as the only change is the order in which the bricks are reported. :)
Any additional information you'd like to impart
"server1", "server2" and "server3" above are not the real names of the servers.
The removal of the brick from the rebuilt server, the peer detach, and the peer probe were done manually, as the puppet module says it doesn't support removing bricks from a volume.
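Roughly, those manual steps would look like the following (a sketch using the placeholder names above; the replica counts and the `force` flag depend on the actual volume layout):

```sh
# Drop the dead node's brick from the volume (shrinks the replica count).
gluster volume remove-brick voltest1 replica 2 \
  server1:/data/voltest1/brick force

# Remove the dead peer, then re-probe the rebuilt node.
gluster peer detach server1
gluster peer probe server1

# Re-add the rebuilt brick (restores the replica count).
gluster volume add-brick voltest1 replica 3 \
  server1:/data/voltest1/brick
```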