Versioning broken when deploying to multiple clusters with same artifact #158

Open
ryanjohnsontv opened this issue Apr 19, 2023 · 2 comments

Comments

@ryanjohnsontv
Contributor

It seems Spinnaker versioning is broken when deploying to multiple clusters that have the same artifact attached. If the number of versioned resources for a given artifact does not match across clusters, subsequent deployments will bind the versioned artifact resolved for the first deployment, which might not exist in the other clusters.

@billiford
Collaborator

billiford commented Jul 14, 2023

To reproduce this issue you will need to deploy to two Kubernetes clusters, which will be referred to as cluster-a and cluster-b.

  1. Deploy a ConfigMap, and a Deployment that references it, to cluster-a ONLY.

ConfigMap can be very simple:

apiVersion: v1
data:
  test.json: |
    {"test":"test"}
kind: ConfigMap
metadata:
  name: test-config

Deployment should reference this ConfigMap at .spec.template.spec.volumes:

spec:
  template:
    spec:
      volumes:
      - configMap:
          name: test-config
        name: test-config

The container used shouldn't matter; it can be any basic container like nginx, but you should mount the ConfigMap at .spec.template.spec.containers[0].volumeMounts[0] like so:

spec:
  template:
    spec:
      containers:
      - volumeMounts:
        - mountPath: /etc/test
          name: test-config

This will version the ConfigMap to test-config-v000 and attach it to the Deployment.
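
For reference, a complete minimal Deployment assembling the snippets above might look like the following; the test-deployment name and nginx image are placeholders, not values from the original report:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-deployment
  template:
    metadata:
      labels:
        app: test-deployment
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - mountPath: /etc/test
          name: test-config
      volumes:
      - configMap:
          name: test-config
        name: test-config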

  2. Deploy the same ConfigMap and Deployment to both clusters, cluster-a first and then cluster-b, in the same pipeline. The Deploy (Manifest) stages should run serially, not in parallel (see the sketch below). This will create new versioned ConfigMaps named test-config-v001 in cluster-a and test-config-v000 in cluster-b; however, at deploy time to cluster-b it will attempt to mount test-config-v001, which does not exist in cluster-b, and the deployment will fail with a timeout.
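
A minimal sketch of the two serial Deploy (Manifest) stages, assuming text-source manifests; the stage names and refIds are illustrative, the manifests arrays are left empty for brevity, and requisiteStageRefIds is what forces the cluster-b stage to wait for cluster-a:

{
  "stages": [
    {
      "name": "Deploy to cluster-a",
      "refId": "1",
      "requisiteStageRefIds": [],
      "type": "deployManifest",
      "cloudProvider": "kubernetes",
      "account": "cluster-a",
      "source": "text",
      "manifests": []
    },
    {
      "name": "Deploy to cluster-b",
      "refId": "2",
      "requisiteStageRefIds": ["1"],
      "type": "deployManifest",
      "cloudProvider": "kubernetes",
      "account": "cluster-b",
      "source": "text",
      "manifests": []
    }
  ]
}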

It looks like artifacts in Spinnaker carry some metadata for the account provider name (cluster-a or cluster-b), so we should probably reference this during the Deploy (Manifest) task and only attach artifacts that have a matching .metadata.account. For example:

{
  "reference": "test-config-v001",
  "metadata": {
    "account": "cluster-a"
  },
  "name": "test-config"
}

The code to inspect is anything that references the function kubernetes.BindArtifact(...): https://github.com/search?q=repo%3Ahomedepot%2Fgo-clouddriver%20bindartifacts&type=code. I will leave the implementation up to the code owners; you could filter or pass in the account name for validation, or maybe find a better way to manage artifacts before deployment.
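
As one possible shape for that validation, here is a minimal Go sketch; the Artifact struct and filterByAccount helper are hypothetical stand-ins mirroring the JSON above, not go-clouddriver's actual types:

package kubernetes

// Artifact mirrors the relevant fields of the artifact JSON shown
// above; it is an illustrative type, not go-clouddriver's own.
type Artifact struct {
	Reference string `json:"reference"`
	Name      string `json:"name"`
	Metadata  struct {
		Account string `json:"account"`
	} `json:"metadata"`
}

// filterByAccount (hypothetical) drops artifacts whose metadata names a
// different account, so binding never sees a versioned reference that
// was created for another cluster. Artifacts with no account metadata
// are kept for backward compatibility.
func filterByAccount(artifacts []Artifact, account string) []Artifact {
	out := make([]Artifact, 0, len(artifacts))
	for _, a := range artifacts {
		if a.Metadata.Account == "" || a.Metadata.Account == account {
			out = append(out, a)
		}
	}
	return out
}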

@billiford
Collaborator

@abe21412 @guido9j, shouldn't we close this issue since #169 fixed it?
