Versioning broken when deploying to multiple clusters with same artifact #158
To reproduce this issue you will need to deploy to two Kubernetes clusters, which will be referred to as cluster-a and cluster-b.

The ConfigMap can be very simple:

```yaml
apiVersion: v1
data:
  test.json: |
    {"test":"test"}
kind: ConfigMap
metadata:
  name: test-config
```

The Deployment should reference this ConfigMap:

```yaml
spec:
  template:
    spec:
      volumes:
      - configMap:
          name: test-config
        name: test-config
```

The container used shouldn't matter; it can be any basic container:

```yaml
spec:
  template:
    spec:
      containers:
      - volumeMounts:
        - mountPath: /etc/test
          name: test-config
```

This will version the ConfigMap (e.g. to test-config-v001).
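The divergence is easiest to see with a small sketch. This is not Spinnaker's code (the real versioner lives in clouddriver); it is a hypothetical illustration of how per-cluster version counters can drift apart when the two clusters hold different numbers of versioned copies:

```python
# Hypothetical sketch (not Spinnaker source): each cluster assigns the
# next vNNN suffix independently, based only on what already exists there.

def next_version(existing_versions):
    """Return the next vNNN suffix given version numbers already in a cluster."""
    highest = max(existing_versions, default=-1)
    return f"v{highest + 1:03d}"

# Suppose cluster-a already has test-config-v000, but cluster-b has nothing:
cluster_a_versions = [0]   # test-config-v000 exists in cluster-a
cluster_b_versions = []    # no versioned copies in cluster-b yet

print(f"test-config-{next_version(cluster_a_versions)}")  # test-config-v001
print(f"test-config-{next_version(cluster_b_versions)}")  # test-config-v000
```

So a single pipeline run can produce test-config-v001 in one cluster and test-config-v000 in the other; binding the first cluster's reference in the second deployment points at a resource that does not exist there.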
It looks like artifacts in Spinnaker have some metadata for the account provider name (cluster-a or cluster-b), so we should probably just reference this during the Deploy (Manifest) task and only attach artifacts that have the same account:

```json
{
  "reference": "test-config-v001",
  "metadata": {
    "account": "cluster-a"
  },
  "name": "test-config"
}
```

Code to inspect will be anything that references the function.
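The proposed filtering could look roughly like the sketch below. The helper name and data shape are assumptions for illustration (modeled on the artifact JSON above), not Spinnaker's actual API:

```python
# Hypothetical sketch of the proposed fix: during Deploy (Manifest),
# bind only artifacts whose metadata.account matches the target cluster.

def artifacts_for_account(artifacts, account):
    """Keep only artifacts produced for the given account (cluster)."""
    return [a for a in artifacts
            if a.get("metadata", {}).get("account") == account]

bound = [
    {"reference": "test-config-v001",
     "metadata": {"account": "cluster-a"},
     "name": "test-config"},
    {"reference": "test-config-v000",
     "metadata": {"account": "cluster-b"},
     "name": "test-config"},
]

# Deploying to cluster-b would then bind test-config-v000, not the
# stale test-config-v001 reference found for cluster-a.
print(artifacts_for_account(bound, "cluster-b"))
```

With this filter, the deployment to each cluster only ever sees the versioned artifact that actually exists in that cluster.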
It seems Spinnaker versioning is broken when deploying to multiple clusters that have the same artifact attached. If the number of versioned resources for a given artifact does not match across clusters, subsequent deployments will bind the versioned artifact found for the first deployment, which might not exist in the other clusters.