initial e2e CI job with 2 KinD clusters #35
base: master
Conversation
lgtm from what I can tell. @waynz0r would you like to take a look?
removed the example codespaces .devcontainer from this PR
ping ... let's either get this in or decide on a better approach
run: |
mkdir ${ARTIFACTS_DIR}
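# sanity check: both KinD clusters are reachable and running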
kubectl config get-contexts --kubeconfig $KUBECONFIG1
kubectl config get-contexts --kubeconfig $KUBECONFIG2
kubectl get nodes --kubeconfig $KUBECONFIG1
kubectl get nodes --kubeconfig $KUBECONFIG2
kubectl get pods -A --kubeconfig $KUBECONFIG1
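# install the cluster-registry-controller Helm chart into each cluster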
helm install --namespace=cluster-registry --create-namespace cluster-registry-controller deploy/charts/cluster-registry --set localCluster.name=c1 --set image.repository=cluster-registry-controller --set image.tag=ci-test --kubeconfig $KUBECONFIG1
kubectl get pods -A --kubeconfig $KUBECONFIG1
kubectl get pods -A --kubeconfig $KUBECONFIG2
helm install --namespace=cluster-registry --create-namespace cluster-registry-controller deploy/charts/cluster-registry --set localCluster.name=c2 --set image.repository=cluster-registry-controller --set image.tag=ci-test --kubeconfig $KUBECONFIG2
kubectl get pods -A --kubeconfig $KUBECONFIG2
kubectl describe deployment -n cluster-registry cluster-registry-controller-controller --kubeconfig $KUBECONFIG1
kubectl describe deployment -n cluster-registry cluster-registry-controller-controller --kubeconfig $KUBECONFIG2
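# wait for the controller pods to become Ready in both clusters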
kubectl wait pods -l app.kubernetes.io/name=cluster-registry-controller --timeout=120s --for condition=Ready -n cluster-registry --kubeconfig $KUBECONFIG1
kubectl wait pods -l app.kubernetes.io/name=cluster-registry-controller --timeout=120s --for condition=Ready -n cluster-registry --kubeconfig $KUBECONFIG2
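# save pod and secret listings as job artifacts, and log the Cluster CRs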
kubectl get pods -A --kubeconfig $KUBECONFIG1 > ${ARTIFACTS_DIR}/pods_c1.txt
kubectl get pods -A --kubeconfig $KUBECONFIG2 > ${ARTIFACTS_DIR}/pods_c2.txt
kubectl get secrets -A --kubeconfig $KUBECONFIG1 > ${ARTIFACTS_DIR}/secrets_c1.txt
kubectl get secrets -A --kubeconfig $KUBECONFIG2 > ${ARTIFACTS_DIR}/secrets_c2.txt
kubectl get clusters -o yaml --kubeconfig $KUBECONFIG1
kubectl get clusters -o yaml --kubeconfig $KUBECONFIG2
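# copy each cluster's Cluster CR and kubeconfig secret to the peer cluster, then save the resulting Cluster CRs as artifacts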
echo "Getting cluster c1" | ||
kubectl get cluster c1 --kubeconfig $KUBECONFIG1 -o yaml | kubectl apply --kubeconfig $KUBECONFIG2 -f - | ||
echo "Getting secret c1" | ||
kubectl get secret -n cluster-registry c1 --kubeconfig $KUBECONFIG1 -o yaml | kubectl apply --kubeconfig $KUBECONFIG2 -f - | ||
echo "Getting cluster c2" | ||
kubectl get cluster c2 --kubeconfig $KUBECONFIG2 -o yaml | kubectl apply --kubeconfig $KUBECONFIG1 -f - | ||
echo "Getting secret c2" | ||
kubectl get secret -n cluster-registry c2 --kubeconfig $KUBECONFIG2 -o yaml | kubectl apply --kubeconfig $KUBECONFIG1 -f - | ||
kubectl get clusters -o yaml --kubeconfig $KUBECONFIG1 | ||
kubectl get clusters -o yaml --kubeconfig $KUBECONFIG2 | ||
kubectl get clusters -o yaml --kubeconfig $KUBECONFIG1 > ${ARTIFACTS_DIR}/clusters_c1.yaml | ||
kubectl get clusters -o yaml --kubeconfig $KUBECONFIG2 > ${ARTIFACTS_DIR}/clusters_c2.yaml | ||
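# give the controllers time to sync, then require a ClustersSynced status on every Cluster CR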
sleep 30
sync1=$(kubectl get clusters -A --kubeconfig $KUBECONFIG1 -o jsonpath='{range .items[*]}{@.status.conditions[?(@.type=="ClustersSynced")].status}{end}')
echo "Cluster 1 sync status = ${sync1}"
sync2=$(kubectl get clusters -A --kubeconfig $KUBECONFIG2 -o jsonpath='{range .items[*]}{@.status.conditions[?(@.type=="ClustersSynced")].status}{end}')
echo "Cluster 2 sync status = ${sync2}"
if [[ "${sync1}" == "" || "${sync2}" == "" ]]; then
echo "one or more clusters missing sync status"
exit 1
fi
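# also log the condition reason to help debug failures, then fail on any False status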
sync_reason1=$(kubectl get clusters -A --kubeconfig $KUBECONFIG1 -o jsonpath='{range .items[*]}{@.status.conditions[?(@.type=="ClustersSynced")].reason}{end}')
sync_reason2=$(kubectl get clusters -A --kubeconfig $KUBECONFIG2 -o jsonpath='{range .items[*]}{@.status.conditions[?(@.type=="ClustersSynced")].reason}{end}')
echo "Sync status reason 1 = '${sync_reason1}'"
echo "Sync status reason 2 = '${sync_reason2}'"
if [[ "${sync1}" == "False" || "${sync2}" == "False" ]]; then
echo "one or more clusters are not in sync"
exit 1
fi
Could this be extracted to a script? It'd be more readable and reusable for other workflows or for local testing.
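A minimal sketch of that extraction, assuming a hypothetical `scripts/check-cluster-sync.sh` (the path, the argument convention, and the helper name are illustrative, not part of this PR); it reuses the jsonpath checks from the step above:

```bash
#!/usr/bin/env bash
# Hypothetical scripts/check-cluster-sync.sh -- the sync-verification tail of
# the workflow step above, extracted so it can also be run locally.
# Usage: check-cluster-sync.sh <kubeconfig1> <kubeconfig2>
set -euo pipefail

# Concatenated ClustersSynced statuses of every Cluster CR in one cluster,
# e.g. "TrueTrue" when both peers are synced.
sync_status() {
  kubectl get clusters -A --kubeconfig "$1" \
    -o jsonpath='{range .items[*]}{@.status.conditions[?(@.type=="ClustersSynced")].status}{end}'
}

sync1=$(sync_status "$1")
sync2=$(sync_status "$2")
echo "Cluster 1 sync status = ${sync1}"
echo "Cluster 2 sync status = ${sync2}"

if [[ -z "${sync1}" || -z "${sync2}" ]]; then
  echo "one or more clusters missing sync status"
  exit 1
fi

# The jsonpath concatenates one status per Cluster CR, so look for a False
# anywhere in the string rather than comparing the whole string.
if [[ "${sync1}" == *False* || "${sync2}" == *False* ]]; then
  echo "one or more clusters are not in sync"
  exit 1
fi
```

The workflow step would then reduce to something like `./scripts/check-cluster-sync.sh $KUBECONFIG1 $KUBECONFIG2` after the apply commands.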
id: test-multicluster
run: |
mkdir ${ARTIFACTS_DIR}
kubectl config get-contexts --kubeconfig $KUBECONFIG1
It might be even more reusable if, instead of kubectl, the Kubernetes API server were called directly, e.g. via some Go-based test framework.
I understand you might not necessarily want to rewrite this now; consider this a suggestion, which could be postponed for the future if you agree.
What's in this PR?
Validates the README example via a GitHub Actions job called `e2e`. The job steps:
a) install cluster-registry-controller in both clusters via helm chart
b) copy cluster CR from c1 to c2 and vice versa
c) copy kubeconfig secret from c1 to c2 and vice versa
d) check CR status is synced
e) save cluster CR data as job artifacts
Why?
This is a representative test workflow to show the capabilities and configuration of GitHub Actions. There are similar tests in other open source projects, e.g. the NSM interdomain tests. At the moment there are no functional tests that validate the behavior of the cluster-registry-controller.
Additional context
The changes also include an example Codespaces setup with a custom devcontainer built with all dependencies for KinD testing within the codespace.
Checklist