Support zero downtime FTL upgrades on Kubernetes #2276
At the moment, upgrading FTL can result in some deployments experiencing downtime. We need some level of Kubernetes integration to allow for zero-downtime rolling upgrades.
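For context, zero-downtime rolling upgrades in Kubernetes hinge on the Deployment rollout strategy and readiness gating. Below is a minimal sketch, built with client-go types, of the kind of Deployment spec that keeps every existing replica serving while replacement runner pods come up; the image, port, and probe path are placeholders, not FTL's actual configuration.

```go
package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// zeroDowntimeDeployment builds a Deployment whose rollout never takes an
// existing replica down before its replacement reports Ready. Name, image,
// port, and probe path are illustrative placeholders.
func zeroDowntimeDeployment(name, image string, replicas int32) *appsv1.Deployment {
	maxUnavailable := intstr.FromInt(0) // never drop below the desired replica count
	maxSurge := intstr.FromInt(1)       // roll out one new runner at a time

	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": name}},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable,
					MaxSurge:       &maxSurge,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": name}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "runner",
						Image: image,
						// A readiness probe gates traffic until the new runner
						// has downloaded and started its deployment.
						ReadinessProbe: &corev1.Probe{
							ProbeHandler: corev1.ProbeHandler{
								HTTPGet: &corev1.HTTPGetAction{
									Path: "/healthz",
									Port: intstr.FromInt(8080),
								},
							},
						},
					}},
				},
			},
		},
	}
}
```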
stuartwdouglas added a commit that referenced this issue on Sep 4, 2024:
This changes the way runners are provisioned and how runners are allocated. Runners are now spawned knowing exactly which kube deployment they are for, and will always immediately download and run that deployment. For Kubernetes environments, replicas are controlled by creating a kube deployment for each FTL deployment and adjusting the number of replicas. For local scaling we create the runners directly for deployments as required. This also introduces an initial Kubernetes test. fixes: #2449 #2276
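As a rough illustration of the per-deployment scaling described above, the sketch below adjusts the replica count of the kube Deployment backing a single FTL deployment through the scale subresource in client-go. The namespace argument and the assumption that the kube Deployment shares the FTL deployment's name are illustrative, not how FTL actually names things.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// setRunnerReplicas scales the kube Deployment that backs one FTL deployment
// by writing its scale subresource. The naming convention is an assumption
// for illustration only.
func setRunnerReplicas(ctx context.Context, client kubernetes.Interface, namespace, deploymentKey string, replicas int32) error {
	deployments := client.AppsV1().Deployments(namespace)

	// Read the current scale so the update carries the correct resourceVersion.
	scale, err := deployments.GetScale(ctx, deploymentKey, metav1.GetOptions{})
	if err != nil {
		return fmt.Errorf("get scale for %s: %w", deploymentKey, err)
	}

	scale.Spec.Replicas = replicas
	if _, err := deployments.UpdateScale(ctx, deploymentKey, scale, metav1.UpdateOptions{}); err != nil {
		return fmt.Errorf("update scale for %s: %w", deploymentKey, err)
	}
	return nil
}
```

Using the scale subresource rather than rewriting the whole Deployment keeps the replica change independent of any concurrent spec updates.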
This is tracked in #2449, closing this one.