I followed the instructions to add a node to my AWS-based cluster. First I ran `kargo aws --add --nodes 1`, which succeeded: the instance was created, and I can see it added to inventory.cfg, but only under [all], not under [kube-node], which seems odd. Then I ran `kargo deploy ...`, but the new instance did not get the Kubernetes components installed (e.g. the "Install kubelet launch script" task returned "ok" rather than "changed" for the new worker instance), and after the deployment `kubectl get nodes` only returns the original set of instances.
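For reference, the inventory ended up looking roughly like this (hostnames and IPs below are illustrative, not my real values); the new instance shows up under [all] only:

```ini
# inventory.cfg after `kargo aws --add --nodes 1` (illustrative hosts/IPs)
[all]
node1 ansible_ssh_host=10.0.0.10
node2 ansible_ssh_host=10.0.0.11
# the newly created instance is appended here only
node3 ansible_ssh_host=10.0.0.12

[kube-master]
node1

[etcd]
node1

[kube-node]
node2
# node3 is missing here, so the deploy never targets it as a worker
```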
I then tried (just for fun) manually editing inventory.cfg and adding the new host under [kube-node]. This time the new instance got Kubernetes installed, but the deploy failed at the task "Link etcd certificates for calico-node" on the masters and existing workers with the message "Cannot link, file exists at destination" for /etc/ssl/etcd/ssl/ca.pem.
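My guess is that the symlink task fails on hosts that already have the certificate file in place. Assuming the role creates the link with Ansible's file module, adding `force: yes` should let re-runs replace the existing file instead of erroring out. A sketch only (the task name is from the log; the source path is an assumption):

```yaml
# Hypothetical version of the failing task; src location is assumed.
- name: Link etcd certificates for calico-node
  file:
    src: "{{ etcd_cert_dir }}/ca.pem"  # assumed source location
    dest: /etc/ssl/etcd/ssl/ca.pem     # path from the error message
    state: link
    force: yes                         # replace an existing file with the link
```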
After that, Kubernetes is aware of the node, but unsurprisingly the networking is broken: when I deploy something I get messages like `Failed to setup network for pod "foo" using network plugins "cni": failed to find plugin "loopback" in path [/opt/loopback/bin /opt/cni/bin]; Skipping pod`.
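The missing "loopback" plugin suggests the CNI binaries were never installed on the new node. A quick way to confirm, on the worker itself (the path is taken from the error message):

```sh
# A healthy node should list plugin binaries such as loopback here;
# an empty or missing directory means the CNI install tasks never ran.
ls -l /opt/cni/bin
```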
Update: I just recreated my cluster using Flannel instead of Calico as the overlay. Although I still had to manually add the new worker to [kube-node] in inventory.cfg, the deploy succeeded and I now have a healthy cluster.
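For anyone trying to reproduce this: I selected the overlay by setting the network plugin variable before deploying (this is the variable kargo's group vars use; the exact file path may differ between versions):

```yaml
# inventory/group_vars/k8s-cluster.yml (path may vary by kargo version)
kube_network_plugin: flannel
```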