
Unable to resolve brick changes if the brick order changes #121

Open
ae-dg opened this issue May 16, 2017 · 4 comments

Comments


ae-dg commented May 16, 2017

Affected Puppet, Ruby, OS and module versions/distributions

  • Docker images: puppetserver 2.7.2, puppetexplorer 2.0.0, puppetboard 0.2.0, puppetdb 4.3.0, puppet-postgres 0.1.0

  • Module version: 3.0.0

How to reproduce (e.g. Puppet code you use)

While doing some stress testing of Gluster, I killed and rebuilt a node. Before doing this, gluster volume info reported the bricks in this order:

Brick1: server1:/data/voltest1/brick
Brick2: server2:/data/voltest1/brick
Brick3: server3:/data/voltest1/brick

After the rebuild, the order is:

Brick1: server2:/data/voltest1/brick
Brick2: server3:/data/voltest1/brick
Brick3: server1:/data/voltest1/brick

What are you seeing

Subsequent runs of puppet agent report:

Notice: unable to resolve brick changes for Gluster volume voltest1!
Defined: server1:/data/voltest1/brick server2:/data/voltest1/brick server3:/data/voltest1/brick
Current: [server2:/data/voltest1/brick, server3:/data/voltest1/brick, server1:/data/voltest1/brick]

What behaviour did you expect instead

For it not to complain, as the only change is the order in which the bricks are reported. :)

Any additional information you'd like to impart

"server1", "server2" and "server3" above are not the real names of the servers.

The removal of the brick from the rebuilt server, the peer detach, and the peer probe were done manually, as the Puppet module reported that it doesn't support removing bricks from a volume.
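
For reference, those manual steps roughly correspond to a gluster CLI sequence like the one below. This is a sketch, not the exact commands used in this report: the hostnames and paths are reused from the example above, server1 is assumed to be the rebuilt node, and the replica counts are assumptions for a three-brick replica volume.

gluster volume remove-brick voltest1 replica 2 server1:/data/voltest1/brick force
gluster peer detach server1
# ...rebuild the node...
gluster peer probe server1
gluster volume add-brick voltest1 replica 3 server1:/data/voltest1/brick

Re-adding the brick appends it to the end of the volume's brick list, which would explain why server1 now shows up as Brick3.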

ae-dg changed the title from "Unable to resolve brick changes if the blick order changes" to "Unable to resolve brick changes if the brick order changes" on May 16, 2017
alexjfisher (Member) commented

Does brick order not have to be 'correct'?
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/

To make sure that replica-set members are not placed on the same node, list the first brick on every server, then the second brick on every server in the same order, and so on.
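
To illustrate that guidance (an example, not taken from this issue; the volume name and servers are placeholders): for a distributed-replicated volume, gluster groups consecutive bricks on the command line into replica sets, so the brick order determines which servers hold copies of the same data.

gluster volume create test-volume replica 2 \
    server1:/data/brick server2:/data/brick \
    server3:/data/brick server4:/data/brick
# replica set 1: server1 + server2
# replica set 2: server3 + server4

With replica 2 here, listing two bricks from the same server next to each other would put both copies of a replica set on one node, which is what the quoted advice guards against.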

coder-hugo added a commit to coder-hugo/puppet-gluster that referenced this issue May 17, 2017

ae-dg commented May 17, 2017

Quite possibly. This was a simple three-node replica - the same data on each node - so I wouldn't have thought it would make any difference. I can understand that it would for some other volume types though.


JeffPsycle commented Sep 20, 2019

I am having a somewhat similar issue. Being a bit OCD, I have the servers in order in my bricks list. However, Gluster lists them in a different order. For example:

gluster::volumes:
  testvol:
    replica: 4
    bricks:
      - srv1:/mnt/gluster
      - srv2:/mnt/gluster
      - srv3:/mnt/gluster
      - srv4:/mnt/gluster

Status of volume: testvol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick srv3:/mnt/gluster                          49153     0          Y       32689
Brick srv4:/mnt/gluster                          49153     0          Y       21251
Brick srv1:/mnt/gluster                          49153     0          Y       6820 
Brick srv2:/mnt/gluster                          49153     0          Y       9546

From there Puppet comes back with the following:

defined 'message' as "unable to resolve brick changes for Gluster volume testvol!\nDefined: srv1:/mnt/gluster srv2:/mnt/gluster srv3:/mnt/gluster srv4:/mnt/gluster\nCurrent: [srv3:/mnt/gluster, srv4:/mnt/gluster, srv1:/mnt/gluster, srv2:/mnt/gluster]"
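
The order the module is comparing against is the order gluster itself reports; a quick way to see it (a sketch assuming the standard gluster CLI and the volume name from this example):

gluster volume info testvol | grep '^Brick[0-9]'
# Brick1: srv3:/mnt/gluster
# Brick2: srv4:/mnt/gluster
# Brick3: srv1:/mnt/gluster
# Brick4: srv2:/mnt/gluster

The commented lines are the expected output, based on the status listing above.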


sazzle2611 commented May 31, 2021

I've just come across the same problem purely from moving the Puppet hiera config from one location to another; nothing about the gluster volumes actually changed.

The status of the volumes is fine, but puppet agent now returns lots of notices like the one below:

Notice: unable to resolve brick changes for Gluster volume vol1!
Defined: server1:/export/vol1/brick server2:/export/vol1/brick server3:/export/vol1/brick
Current: [server2:/export/vol1/brick, server1:/export/vol1/brick, server3:/export/vol1/brick]
Notice: /Stage[main]/Profile::Gluster/Gluster::Volume[vol1]/Notify[unable to resolve brick changes for Gluster volume vol1!
Defined: server1:/export/vol1/brick server2:/export/vol1/brick server3:/export/vol1/brick
Current: [server2:/export/vol1/brick, server1:/export/vol1/brick, server3:/export/vol1/brick]]/message: defined 'message' as "unable to resolve brick changes for Gluster volume vol1!\nDefined: server1:/export/vol1/brick server2:/export/vol1/brick server3:/export/vol1/brick\nCurrent: [server2:/export/vol1/brick, server1:/export/vol1/brick, server3:/export/vol1/brick]"

Is there any way to fix this?
