
KUBESAW-187: Adjust ksctl adm restart command to use rollout-restart #79

Merged: 52 commits, Nov 27, 2024
Commits
2748b57
KUBESAW-187: Adjust ksctl adm restart command to use rollout-restart
fbm3307 Sep 10, 2024
da57803
some checking
fbm3307 Sep 13, 2024
aeef8de
Merge branch 'master' into kubesaw170_restart
fbm3307 Sep 13, 2024
f2c29ee
golint
fbm3307 Sep 13, 2024
c742197
Merge branch 'master' into kubesaw170_restart
fbm3307 Sep 13, 2024
ba8866e
few changes to the logic
fbm3307 Sep 17, 2024
ad5348e
Merge branch 'kubesaw170_restart' of https://github.com/fbm3307/ksctl…
fbm3307 Sep 17, 2024
cd4b1bf
t cases
fbm3307 Sep 17, 2024
4cd8e26
Merge branch 'master' into kubesaw170_restart
MatousJobanek Sep 18, 2024
90f7d40
Merge branch 'kubesaw170_restart' of https://github.com/fbm3307/ksctl…
fbm3307 Sep 19, 2024
4c15cf0
eview comments
fbm3307 Sep 19, 2024
8796901
Review comments
fbm3307 Sep 19, 2024
1d68d34
check the args
fbm3307 Sep 19, 2024
47bc27e
adding unit test cases
fbm3307 Sep 23, 2024
8f56cbf
Change in test cases
fbm3307 Sep 25, 2024
9e8fc49
Merge branch 'master' into kubesaw170_restart
fbm3307 Sep 25, 2024
92d0237
minor change in unit test
fbm3307 Sep 25, 2024
c0332b1
unregister-member test
fbm3307 Sep 25, 2024
83e99b5
unit test case for restart
fbm3307 Sep 25, 2024
d5e5280
test case for delete
fbm3307 Sep 25, 2024
b6f3df1
Rc1
fbm3307 Sep 26, 2024
51e1e4e
golint
fbm3307 Sep 27, 2024
f2c234e
Merge branch 'master' into kubesaw170_restart
mfrancisc Sep 30, 2024
f5c19de
Merge branch 'master' into kubesaw170_restart
fbm3307 Oct 1, 2024
f3cf690
changes to the logic of restart
fbm3307 Oct 10, 2024
1868b12
Merge branch 'master' into kubesaw170_restart
fbm3307 Oct 11, 2024
2d4d4b1
Merge branch 'master' into kubesaw170_restart
fbm3307 Oct 17, 2024
1f5db06
Merge branch 'master' into kubesaw170_restart
fbm3307 Nov 4, 2024
fcf67b6
review comments-2
fbm3307 Nov 4, 2024
fd143c7
restart-test changes
fbm3307 Nov 5, 2024
b823e10
CI
fbm3307 Nov 6, 2024
97997f6
golang ci
fbm3307 Nov 6, 2024
e34b110
adding tc
fbm3307 Nov 7, 2024
144dd2c
some addition to test cases
fbm3307 Nov 7, 2024
0d80548
some changes
fbm3307 Nov 7, 2024
096d49a
adding some comments
fbm3307 Nov 8, 2024
bf63303
autoscalling buffer test case
fbm3307 Nov 8, 2024
09411ad
Modification of test cases
fbm3307 Nov 12, 2024
857fdc9
Go lint
fbm3307 Nov 12, 2024
760cf0c
Test case of status
fbm3307 Nov 12, 2024
3517338
Linter
fbm3307 Nov 12, 2024
2704a91
test of unregister_member
fbm3307 Nov 12, 2024
17da571
phase-3 rc
fbm3307 Nov 14, 2024
a4b5198
code cov
fbm3307 Nov 14, 2024
9b889cc
some changes to status func
fbm3307 Nov 14, 2024
9a14e2b
leftovers
fbm3307 Nov 14, 2024
6f5b0cb
Merge branch 'master' into kubesaw170_restart
fbm3307 Nov 15, 2024
4f477ce
merge conflict
fbm3307 Nov 15, 2024
6318f4e
some changes as per rc
fbm3307 Nov 21, 2024
8762ebc
go version fix
fbm3307 Nov 21, 2024
9c4ae9e
extra left overs
fbm3307 Nov 21, 2024
70c53e7
linter
fbm3307 Nov 21, 2024
1 change: 1 addition & 0 deletions go.mod
@@ -81,6 +81,7 @@ require (
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de // indirect
github.com/lithammer/dedent v1.1.0 // indirect
github.com/lucasb-eyer/go-colorful v1.2.0 // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/mattn/go-isatty v0.0.18 // indirect
2 changes: 2 additions & 0 deletions go.sum
@@ -436,6 +436,8 @@ github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+
github.com/leodido/go-urn v1.2.0/go.mod h1:+8+nEpDfqqsY+g338gtMEUOtuK+4dEMhiQEgxpxOKII=
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de h1:9TO3cAIGXtEhnIaL+V+BEER86oLrvS+kWobKpbJuye0=
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de/go.mod h1:zAbeS9B/r2mtpb6U+EI2rYA5OAXxsYw6wTamcNW+zcE=
github.com/lithammer/dedent v1.1.0 h1:VNzHMVCBNG1j0fh3OrsFRkVUwStdDArbgBWoPAffktY=
github.com/lithammer/dedent v1.1.0/go.mod h1:jrXYCQtgg0nJiN+StA2KgR7w6CiQNv9Fd/Z9BP0jIOc=
github.com/lucasb-eyer/go-colorful v1.2.0 h1:1nnpGOrhyZZuNyfu1QjKiUICQ74+3FNCN69Aj6K7nkY=
github.com/lucasb-eyer/go-colorful v1.2.0/go.mod h1:R4dSotOR9KMtayYi1e77YzuveK+i7ruzyGqttikkLy0=
github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
10 changes: 0 additions & 10 deletions pkg/cmd/adm/register_member_test.go
@@ -17,7 +17,6 @@ import (
"github.com/kubesaw/ksctl/pkg/utils"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
@@ -515,15 +514,6 @@ func verifyToolchainClusterSecret(t *testing.T, fakeClient *test.FakeClient, saN
require.Equal(t, fmt.Sprintf("token-secret-for-%s", saName), apiConfig.AuthInfos["auth"].Token)
}

func whenDeploymentThenUpdated(t *testing.T, fakeClient *test.FakeClient, namespacedName types.NamespacedName, currentReplicas int32, numberOfUpdateCalls *int) func(ctx context.Context, obj runtimeclient.Object, opts ...runtimeclient.UpdateOption) error {
return func(ctx context.Context, obj runtimeclient.Object, opts ...runtimeclient.UpdateOption) error {
if deployment, ok := obj.(*appsv1.Deployment); ok {
checkDeploymentBeingUpdated(t, fakeClient, namespacedName, currentReplicas, numberOfUpdateCalls, deployment)
}
return fakeClient.Client.Update(ctx, obj, opts...)
}
}

func newFakeClientsFromRestConfig(t *testing.T, initObjs ...runtimeclient.Object) (newClientFromRestConfigFunc, *test.FakeClient) {
fakeClient := test.NewFakeClient(t, initObjs...)
fakeClient.MockCreate = func(ctx context.Context, obj runtimeclient.Object, opts ...runtimeclient.CreateOption) error {
258 changes: 164 additions & 94 deletions pkg/cmd/adm/restart.go
@@ -1,157 +1,227 @@
package adm

import (
"context"
"fmt"
"os"
"time"

"github.com/kubesaw/ksctl/pkg/client"
"github.com/kubesaw/ksctl/pkg/cmd/flags"
"github.com/kubesaw/ksctl/pkg/configuration"
clicontext "github.com/kubesaw/ksctl/pkg/context"
"github.com/kubesaw/ksctl/pkg/ioutils"

"github.com/spf13/cobra"
appsv1 "k8s.io/api/apps/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/wait"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/cli-runtime/pkg/genericclioptions"
"k8s.io/cli-runtime/pkg/genericiooptions"
kubectlrollout "k8s.io/kubectl/pkg/cmd/rollout"
cmdutil "k8s.io/kubectl/pkg/cmd/util"
runtimeclient "sigs.k8s.io/controller-runtime/pkg/client"
)

type (
RolloutRestartFunc func(ctx *clicontext.CommandContext, deployment appsv1.Deployment) error
RolloutStatusCheckerFunc func(ctx *clicontext.CommandContext, deployment appsv1.Deployment) error
)

// NewRestartCmd() restarts the whole operator; it relies on the target cluster and fetches the cluster config.
// 1. If the command is run for the host operator, it restarts the whole host operator (it deletes the OLM-based pods (host-operator pods),
// waits for the new pods to come up, then uses the rollout-restart command for the non-OLM based deployment - registration-service).
// 2. If the command is run for the member operator, it restarts the whole member operator (it deletes the OLM-based pods (member-operator pods),
// waits for the new pods to come up, then uses the rollout-restart command for the non-OLM based deployments - webhooks).
func NewRestartCmd() *cobra.Command {
var targetCluster string
command := &cobra.Command{
Use: "restart -t <cluster-name> <deployment-name>",
Short: "Restarts a deployment",
Long: `Restarts the deployment with the given name in the operator namespace.
If no deployment name is provided, then it lists all existing deployments in the namespace.`,
Args: cobra.RangeArgs(0, 1),
Use: "restart <cluster-name>",
Short: "Restarts an operator",
Long: `Restarts the whole operator. It relies on the target cluster and fetches the cluster config.
1. If the command is run for the host operator, it restarts the whole host operator
(it deletes the OLM-based pods (host-operator pods), waits for the new pods to
come up, then uses the rollout-restart command for the non-OLM based deployments - registration-service).
2. If the command is run for the member operator, it restarts the whole member operator
(it deletes the OLM-based pods (member-operator pods), waits for the new pods
to come up, then uses the rollout-restart command for the non-OLM based deployments - webhooks)`,
Args: cobra.ExactArgs(1),

RunE: func(cmd *cobra.Command, args []string) error {
term := ioutils.NewTerminal(cmd.InOrStdin, cmd.OutOrStdout)
ctx := clicontext.NewCommandContext(term, client.DefaultNewClient)
return restart(ctx, targetCluster, args...)
return restart(ctx, args[0])

},
}
command.Flags().StringVarP(&targetCluster, "target-cluster", "t", "", "The target cluster")
flags.MustMarkRequired(command, "target-cluster")
return command
}

func restart(ctx *clicontext.CommandContext, clusterName string, deployments ...string) error {
func restart(ctx *clicontext.CommandContext, clusterName string) error {
kubeConfigFlags := genericclioptions.NewConfigFlags(true).WithDeprecatedPasswordFlag()
ioStreams := genericiooptions.IOStreams{
In: os.Stdin,
Out: os.Stdout,
ErrOut: os.Stderr,
}
kubeConfigFlags.ClusterName = nil // `cluster` flag is redefined for our own purpose
kubeConfigFlags.AuthInfoName = nil // unused here, so we can hide it
kubeConfigFlags.Context = nil // unused here, so we can hide it

cfg, err := configuration.LoadClusterConfig(ctx, clusterName)
if err != nil {
return err
}
cl, err := ctx.NewClient(cfg.Token, cfg.ServerAPI)
kubeConfigFlags.Namespace = &cfg.OperatorNamespace
kubeConfigFlags.APIServer = &cfg.ServerAPI
kubeConfigFlags.BearerToken = &cfg.Token
kubeconfig, err := client.EnsureKsctlConfigFile()
if err != nil {
return err
}

if len(deployments) == 0 {
err := printExistingDeployments(ctx.Terminal, cl, cfg.OperatorNamespace)
if err != nil {
ctx.Terminal.Printlnf("\nERROR: Failed to list existing deployments\n :%s", err.Error())
}
return fmt.Errorf("at least one deployment name is required, include one or more of the above deployments to restart")
}
deploymentName := deployments[0]
kubeConfigFlags.KubeConfig = &kubeconfig
factory := cmdutil.NewFactory(cmdutil.NewMatchVersionFlags(kubeConfigFlags))

if !ctx.AskForConfirmation(
ioutils.WithMessagef("restart the deployment '%s' in namespace '%s'", deploymentName, cfg.OperatorNamespace)) {
ioutils.WithMessagef("restart all the deployments in the cluster '%s' and namespace '%s' \n", clusterName, cfg.OperatorNamespace)) {
return nil
}
return restartDeployment(ctx, cl, cfg.OperatorNamespace, deploymentName)
}

func restartDeployment(ctx *clicontext.CommandContext, cl runtimeclient.Client, ns string, deploymentName string) error {
namespacedName := types.NamespacedName{
Namespace: ns,
Name: deploymentName,
cl, err := ctx.NewClient(cfg.Token, cfg.ServerAPI)
if err != nil {
return err

}

originalReplicas, err := scaleToZero(cl, namespacedName)
return restartDeployments(ctx, cl, cfg.OperatorNamespace, func(ctx *clicontext.CommandContext, deployment appsv1.Deployment) error {
return checkRolloutStatus(ctx, factory, ioStreams, deployment)
}, func(ctx *clicontext.CommandContext, deployment appsv1.Deployment) error {
return restartNonOlmDeployments(ctx, deployment, factory, ioStreams)
})

}

// This function contains the whole logic of getting the list of OLM and non-OLM based deployments, then proceeds with restarting/deleting them accordingly
func restartDeployments(ctx *clicontext.CommandContext, cl runtimeclient.Client, ns string, checker RolloutStatusCheckerFunc, restarter RolloutRestartFunc) error {

ctx.Printlnf("Fetching the current OLM and non-OLM deployments of the operator in %s namespace", ns)
olmDeploymentList, nonOlmDeploymentList, err := getExistingDeployments(ctx, cl, ns)
if err != nil {
if apierrors.IsNotFound(err) {
ctx.Printlnf("\nERROR: The given deployment '%s' wasn't found.", deploymentName)
return printExistingDeployments(ctx, cl, ns)
}
return err
}
ctx.Println("The deployment was scaled to 0")
if err := scaleBack(ctx, cl, namespacedName, originalReplicas); err != nil {
ctx.Printlnf("Scaling the deployment '%s' in namespace '%s' back to '%d' replicas wasn't successful", originalReplicas)
ctx.Println("Please, try to contact administrators to scale the deployment back manually")
return err
//if there is no OLM operator deployment, there is no need for a restart
if len(olmDeploymentList.Items) == 0 {
return fmt.Errorf("no operator deployment found in namespace %s, the operator deployment is required to be running so the command can proceed with restarting the KubeSaw components", ns)
}
//delete the pods of the OLM-based operator deployment and then check the status
for _, olmOperatorDeployment := range olmDeploymentList.Items {
ctx.Printlnf("Proceeding to delete the Pods of %v", olmOperatorDeployment.Name)

if err := deleteDeploymentPods(ctx, cl, olmOperatorDeployment); err != nil {
return err
}

//sleep here so that the subsequent status check reports the correct status
time.Sleep(1 * time.Second)

ctx.Printlnf("Checking the status of the deleted pod's deployment %v", olmOperatorDeployment.Name)
//check the rollout status
if err := checker(ctx, olmOperatorDeployment); err != nil {
return err
}
}

//non-OLM deployments (like reg-svc) are to be restarted;
//if no non-OLM deployment is found, just return with a message
if len(nonOlmDeploymentList.Items) == 0 {
// if there are no non-olm deployments
ctx.Printlnf("No Non-OLM deployment found in namespace %s, hence no restart happened", ns)
return nil
}
// if there is a Non-olm deployment found use rollout-restart command
for _, nonOlmDeployment := range nonOlmDeploymentList.Items {
//it should only use rollout restart for the deployments which are NOT autoscaling-buffer
if nonOlmDeployment.Name != "autoscaling-buffer" {
ctx.Printlnf("Proceeding to restart the non-olm deployment %v", nonOlmDeployment.Name)
//using rollout-restart
if err := restarter(ctx, nonOlmDeployment); err != nil {
return err
}
//check the rollout status
ctx.Printlnf("Checking the status of the rolled out deployment %v", nonOlmDeployment.Name)
if err := checker(ctx, nonOlmDeployment); err != nil {
return err
}

//if the deployment is not the autoscaling-buffer, return from the function and do not print the autoscaling-buffer message below.
//We do not expect more than one non-OLM deployment per OLM deployment, hence returning here.
return nil
}
//if only the autoscaling-buffer deployment is found, it shouldn't be restarted; exit successfully
ctx.Printlnf("Found only the autoscaling-buffer deployment in namespace %s, which is not required to be restarted", ns)
}

ctx.Printlnf("The deployment was scaled back to '%d'", originalReplicas)
return nil
}

func restartHostOperator(ctx *clicontext.CommandContext, hostClient runtimeclient.Client, hostNamespace string) error {
deployments := &appsv1.DeploymentList{}
if err := hostClient.List(context.TODO(), deployments,
runtimeclient.InNamespace(hostNamespace),
runtimeclient.MatchingLabels{"olm.owner.namespace": "toolchain-host-operator"}); err != nil {
func deleteDeploymentPods(ctx *clicontext.CommandContext, cl runtimeclient.Client, deployment appsv1.Deployment) error {
//get pods by label selector from the deployment
pods := corev1.PodList{}
selector, _ := metav1.LabelSelectorAsSelector(deployment.Spec.Selector)
if err := cl.List(ctx, &pods,
runtimeclient.MatchingLabelsSelector{Selector: selector},
runtimeclient.InNamespace(deployment.Namespace)); err != nil {
return err
}
if len(deployments.Items) != 1 {
return fmt.Errorf("there should be a single deployment matching the label olm.owner.namespace=toolchain-host-operator in %s ns, but %d was found. "+
"It's not possible to restart the Host Operator deployment", hostNamespace, len(deployments.Items))

//delete pods
for _, pod := range pods.Items {
pod := pod // TODO We won't need it after upgrading to go 1.22: https://go.dev/blog/loopvar-preview
ctx.Printlnf("Deleting pod: %s", pod.Name)
if err := cl.Delete(ctx, &pod); err != nil {
return err
}

}

return restartDeployment(ctx, hostClient, hostNamespace, deployments.Items[0].Name)
return nil

}

func printExistingDeployments(term ioutils.Terminal, cl runtimeclient.Client, ns string) error {
deployments := &appsv1.DeploymentList{}
if err := cl.List(context.TODO(), deployments, runtimeclient.InNamespace(ns)); err != nil {
func restartNonOlmDeployments(ctx *clicontext.CommandContext, deployment appsv1.Deployment, f cmdutil.Factory, ioStreams genericclioptions.IOStreams) error {

o := kubectlrollout.NewRolloutRestartOptions(ioStreams)

if err := o.Complete(f, nil, []string{"deployment/" + deployment.Name}); err != nil {
return err
}
deploymentList := "\n"
for _, deployment := range deployments.Items {
deploymentList += fmt.Sprintf("%s\n", deployment.Name)

if err := o.Validate(); err != nil {
return err

}
term.PrintContextSeparatorWithBodyf(deploymentList, "Existing deployments in %s namespace", ns)
return nil
ctx.Printlnf("Running the rollout restart command for non-OLM deployment %v", deployment.Name)
return o.RunRestart()
}

func scaleToZero(cl runtimeclient.Client, namespacedName types.NamespacedName) (int32, error) {
// get the deployment
deployment := &appsv1.Deployment{}
if err := cl.Get(context.TODO(), namespacedName, deployment); err != nil {
return 0, err
func checkRolloutStatus(ctx *clicontext.CommandContext, f cmdutil.Factory, ioStreams genericclioptions.IOStreams, deployment appsv1.Deployment) error {

cmd := kubectlrollout.NewRolloutStatusOptions(ioStreams)

if err := cmd.Complete(f, []string{"deployment/" + deployment.Name}); err != nil {
return err

}
// keep original number of replicas so we can bring it back
originalReplicas := *deployment.Spec.Replicas
zero := int32(0)
deployment.Spec.Replicas = &zero

// update the deployment so it scales to zero
return originalReplicas, cl.Update(context.TODO(), deployment)
if err := cmd.Validate(); err != nil {
return err
}

ctx.Printlnf("Running rollout status to check the status of the deployment")
return cmd.Run()
}

func scaleBack(term ioutils.Terminal, cl runtimeclient.Client, namespacedName types.NamespacedName, originalReplicas int32) error {
return wait.PollUntilContextTimeout(context.TODO(), 500*time.Millisecond, 10*time.Second, false, func(ctx context.Context) (done bool, err error) {
term.Println("")
term.Printlnf("Trying to scale the deployment back to '%d'", originalReplicas)
// get the updated
deployment := &appsv1.Deployment{}
if err := cl.Get(context.TODO(), namespacedName, deployment); err != nil {
return false, err
}
// check if the replicas number wasn't already reset by a controller
if *deployment.Spec.Replicas == originalReplicas {
return true, nil
}
// set the original
deployment.Spec.Replicas = &originalReplicas
// and update to scale back
if err := cl.Update(context.TODO(), deployment); err != nil {
term.Printlnf("error updating Deployment '%s': %s. Will retry again...", namespacedName.Name, err.Error())
return false, nil
}
return true, nil
})
func getExistingDeployments(ctx *clicontext.CommandContext, cl runtimeclient.Client, ns string) (*appsv1.DeploymentList, *appsv1.DeploymentList, error) {

olmDeployments := &appsv1.DeploymentList{}
if err := cl.List(ctx, olmDeployments,
runtimeclient.InNamespace(ns),
runtimeclient.MatchingLabels{"kubesaw-control-plane": "kubesaw-controller-manager"}); err != nil {
return nil, nil, err
}


nonOlmDeployments := &appsv1.DeploymentList{}
if err := cl.List(ctx, nonOlmDeployments,
runtimeclient.InNamespace(ns),
runtimeclient.MatchingLabels{"toolchain.dev.openshift.com/provider": "codeready-toolchain"}); err != nil {
return nil, nil, err
}


return olmDeployments, nonOlmDeployments, nil
}
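The control flow introduced in this file can be sketched in plain Go. This is a simplified illustration, not the PR's code: `Deployment` is a minimal stand-in for `appsv1.Deployment`, and `partition`/`restartPlan` are hypothetical helper names; only the label keys/values and the autoscaling-buffer exception are taken from the diff above.

```go
package main

import "fmt"

// Deployment is a minimal stand-in for appsv1.Deployment,
// carrying only what the selection logic needs.
type Deployment struct {
	Name   string
	Labels map[string]string
}

// partition mirrors the two label-selector List calls in getExistingDeployments:
// OLM operator deployments carry "kubesaw-control-plane: kubesaw-controller-manager",
// non-OLM deployments carry "toolchain.dev.openshift.com/provider: codeready-toolchain".
func partition(all []Deployment) (olm, nonOlm []Deployment) {
	for _, d := range all {
		switch {
		case d.Labels["kubesaw-control-plane"] == "kubesaw-controller-manager":
			olm = append(olm, d)
		case d.Labels["toolchain.dev.openshift.com/provider"] == "codeready-toolchain":
			nonOlm = append(nonOlm, d)
		}
	}
	return olm, nonOlm
}

// restartPlan sketches restartDeployments: delete the pods of every OLM
// deployment (OLM re-creates them), then rollout-restart the first non-OLM
// deployment that is not autoscaling-buffer and stop, since at most one such
// deployment is expected per operator namespace.
func restartPlan(olm, nonOlm []Deployment) []string {
	var plan []string
	for _, d := range olm {
		plan = append(plan, "delete pods of "+d.Name)
	}
	for _, d := range nonOlm {
		if d.Name == "autoscaling-buffer" {
			plan = append(plan, "skip autoscaling-buffer")
			continue
		}
		plan = append(plan, "rollout-restart "+d.Name)
		break // mirrors the early `return nil` in restartDeployments
	}
	return plan
}

func main() {
	all := []Deployment{
		{Name: "host-operator-controller-manager", Labels: map[string]string{"kubesaw-control-plane": "kubesaw-controller-manager"}},
		{Name: "registration-service", Labels: map[string]string{"toolchain.dev.openshift.com/provider": "codeready-toolchain"}},
	}
	olm, nonOlm := partition(all)
	for _, step := range restartPlan(olm, nonOlm) {
		fmt.Println(step)
	}
	// → delete pods of host-operator-controller-manager
	// → rollout-restart registration-service
}
```

The real command performs the pod deletions and restarts through the Kubernetes client and `k8s.io/kubectl`'s rollout options; the sketch only captures the ordering and the autoscaling-buffer exception.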