fix(): offboard ns when invalid labels are applied to ns #409

Closed — wants to merge 24 commits. Changes shown are from 22 of the 24 commits.

Commits:
a1f77d7
fix(): initial changes for offboarding replicated ns
gourishkb Oct 3, 2024
6973574
fix(): expose offBoardPod function
gourishkb Oct 4, 2024
10ebd09
fix(): offboard objects, while cleaning up ns
gourishkb Oct 9, 2024
acea6a8
select the slice gateway services only
mridulgain Oct 22, 2024
242a323
fix(): offboardRequired function
gourishkb Oct 23, 2024
81f2f58
fix(): Add ignore_critical parameter to pipeline config
KRANTHI0918 Oct 29, 2024
a693a2e
fix(): Set ignore_critical to false in pipeline configuration
KRANTHI0918 Oct 29, 2024
91094ac
fix(): Add update_trivy flag with default 'false'
KRANTHI0918 Oct 29, 2024
07d43aa
fix(): vulnerabilities
gourishkb Oct 30, 2024
210435b
fix(): code cleanup
gourishkb Nov 4, 2024
5db5228
fix(): Update Jenkinsfile
KRANTHI0918 Nov 5, 2024
01e1136
Update README.md
uma-kt Oct 29, 2024
247409d
Update README.md
uma-kt Oct 29, 2024
fbf2dec
fix(): configurable ns exclusion list (#408)
priyank-upadhyay Oct 30, 2024
f3eb9a5
fix(): Fixed tunnel status reporting in the slicegw CR (#406)
bharath-avesha Nov 4, 2024
63cb4aa
fix(): update gw deploy if gateway sidecar image has been changed in …
mridulgain Nov 5, 2024
ffb8b62
add empty string check for sidecar image name
mridulgain Nov 5, 2024
3e5301d
fix(): test build
mridulgain Nov 5, 2024
71628a8
fix(): update gw deploy if gateway sidecar image pull policy is chang…
mridulgain Nov 6, 2024
46a48e9
fix: getNodeIp logic for no-network mode
mridulgain Nov 13, 2024
403decd
address review comments
mridulgain Nov 13, 2024
d54ebc5
Set custom Trivy DB repository in GitHub Action env
KRANTHI0918 Nov 13, 2024
5c889b8
fix(): Dockerfile golang 1.23.2
gourishkb Nov 19, 2024
b23a7f7
Merge branch 'master' into feature-nsreplication
gourishkb Nov 20, 2024
2 changes: 2 additions & 0 deletions .github/workflows/trivy.yml
@@ -30,6 +2,8 @@ jobs:
format: 'sarif'
output: 'trivy-results.sarif'
severity: 'CRITICAL'
env:
TRIVY_DB_REPOSITORY: "public.ecr.aws/aquasecurity/trivy-db"

- name: Upload Trivy scan results to GitHub Security tab
uses: github/codeql-action/upload-sarif@v2
Expand Down
2 changes: 1 addition & 1 deletion Dockerfile
@@ -18,7 +18,7 @@
##########################################################

# Build the manager binary
FROM golang:1.22.1 as builder
FROM golang:1.23.1 as builder

WORKDIR /workspace
# Copy the Go Modules manifests
Expand Down
14 changes: 7 additions & 7 deletions README.md
@@ -8,8 +8,8 @@ The `kubeslice-worker` operator uses Kubebuilder, a framework for building Kuber
It is strongly recommended that you use a released version.

Please refer to our documentation on:
- [Get Started on KubeSlice](https://kubeslice.io/documentation/open-source/1.3.0/category/get-started)
- [Install KubeSlice](https://kubeslice.io/documentation/open-source/1.3.0/category/install-kubeslice)
- [Get Started on KubeSlice](https://kubeslice.io/documentation/open-source/latest/category/get-started)
- [Install KubeSlice](https://kubeslice.io/documentation/open-source/latest/category/install-kubeslice)

## Install `kubeslice-worker` on a Kind Cluster

@@ -19,7 +19,7 @@ Before you begin, make sure the following prerequisites are met:
* Docker is installed and running on your local machine.
* A running [`kind`](https://kind.sigs.k8s.io/) cluster.
* [`kubectl`](https://kubernetes.io/docs/tasks/tools/) is installed and configured.
* You have prepared the environment to install [`kubeslice-controller`](https://github.com/kubeslice/kubeslice-controller) on the controller cluster and [`worker-operator`](https://github.com/kubeslice/worker-operator) on the worker cluster. For more information, see [Prerequisites](https://kubeslice.io/documentation/open-source/1.3.0/category/prerequisites).
* You have prepared the environment to install [`kubeslice-controller`](https://github.com/kubeslice/kubeslice-controller) on the controller cluster and [`worker-operator`](https://github.com/kubeslice/worker-operator) on the worker cluster. For more information, see [Prerequisites](https://kubeslice.io/documentation/open-source/latest/category/prerequisites).

### Build and Deploy a Worker Operator on a Kind Cluster

Expand All @@ -31,13 +31,13 @@ docker pull aveshasystems/worker-operator:latest

### Setting up Your Helm Repo

If you have not added avesha helm repo yet, add it.
If you have not added Avesha's `helm repo` yet, add it.

```console
helm repo add avesha https://kubeslice.github.io/charts/
```

Upgrade the avesha helm repo.
Upgrade Avesha's `helm repo`.

```console
helm repo update
@@ -69,7 +69,7 @@ deploy/controller_secret.sh gke_avesha-dev_us-east1-c_xxxx kubeslice-cisco my-aw
```

2. Edit the `VERSION` variable in the Makefile to change the docker tag to be built.
The image is set as `docker.io/aveshasystems/worker-operator:$(VERSION)` in the Makefile. Modify this if required.
The image is set as `docker.io/aveshasystems/worker-operator:$(VERSION)` in the Makefile. Modify this as required.

```console
make docker-build
@@ -155,7 +155,7 @@ prefix-service-76bd89c44f-2p6dw 1/1 Running 0 48s

### Uninstall the Worker Operator

For more information, see [deregister the worker cluster](https://kubeslice.io/documentation/open-source/1.3.0/uninstall-kubeslice/#deregister-worker-clusters).
For more information, see [deregister the worker cluster](https://kubeslice.io/documentation/open-source/latest/uninstall-kubeslice/#deregister-worker-clusters).

```console
helm uninstall kubeslice-worker -n kubeslice-system
14 changes: 11 additions & 3 deletions api/v1beta1/slicegateway_types.go
@@ -132,9 +132,14 @@ type GwPodInfo struct {
PeerPodName string `json:"peerPodName,omitempty"`
PodIP string `json:"podIP,omitempty"`
LocalNsmIP string `json:"localNsmIP,omitempty"`
TunnelStatus TunnelStatus `json:"tunnelStatus,omitempty"`
RouteRemoved int32 `json:"routeRemoved,omitempty"`
// TunnelStatus is the status of the tunnel between this gw pod and its peer
TunnelStatus TunnelStatus `json:"tunnelStatus,omitempty"`
RouteRemoved int32 `json:"routeRemoved,omitempty"`
// RemotePort is the port number this gw pod is connected to on the remote cluster.
// Applicable only for gw clients. Would be set to 0 for gw servers.
RemotePort int32 `json:"remotePort,omitempty"`
}

type TunnelStatus struct {
IntfName string `json:"IntfName,omitempty"`
LocalIP string `json:"LocalIP,omitempty"`
@@ -143,7 +148,10 @@ type TunnelStatus struct {
TxRate uint64 `json:"TxRate,omitempty"`
RxRate uint64 `json:"RxRate,omitempty"`
PacketLoss uint64 `json:"PacketLoss,omitempty"`
Status int32 `json:"Status,omitempty"`
// Status is the status of the tunnel. 0: DOWN, 1: UP
Status int32 `json:"Status,omitempty"`
// TunnelState is the state of the tunnel in string format: UP, DOWN, UNKNOWN
TunnelState string `json:"TunnelState,omitempty"`
}

func init() {
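The diff above adds a human-readable TunnelState string alongside the numeric Status field (0: DOWN, 1: UP). As an illustration only — this helper is not part of the PR — the mapping between the two representations might look like:

```go
package main

import "fmt"

// tunnelStateString maps the numeric tunnel status (0: DOWN, 1: UP, per
// the field comments above) to the string form stored in the new
// TunnelState field. Any other value is reported as UNKNOWN.
func tunnelStateString(status int32) string {
	switch status {
	case 0:
		return "DOWN"
	case 1:
		return "UP"
	default:
		return "UNKNOWN"
	}
}

func main() {
	fmt.Println(tunnelStateString(1))  // UP
	fmt.Println(tunnelStateString(0))  // DOWN
	fmt.Println(tunnelStateString(42)) // UNKNOWN
}
```

Carrying both fields keeps the API backward compatible: existing consumers keep reading the int32 Status, while new consumers can display TunnelState directly.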
14 changes: 14 additions & 0 deletions config/crd/bases/networking.kubeslice.io_slicegateways.yaml
@@ -169,10 +169,18 @@ spec:
type: string
podName:
type: string
remotePort:
description: |-
RemotePort is the port number this gw pod is connected to on the remote cluster.
Applicable only for gw clients. Would be set to 0 for gw servers.
format: int32
type: integer
routeRemoved:
format: int32
type: integer
tunnelStatus:
description: TunnelStatus is the status of the tunnel between
this gw pod and its peer
properties:
IntfName:
type: string
@@ -190,8 +198,14 @@ spec:
format: int64
type: integer
Status:
description: 'Status is the status of the tunnel. 0: DOWN,
1: UP'
format: int32
type: integer
TunnelState:
description: 'TunnelState is the state of the tunnel in
string format: UP, DOWN, UNKNOWN'
type: string
TxRate:
format: int64
type: integer
1 change: 1 addition & 0 deletions controllers/controller.go
@@ -98,6 +98,7 @@ func GetSliceGatewayServers(ctx context.Context, c client.Client, sliceName stri
func GetSliceGwServices(ctx context.Context, c client.Client, sliceName string) (*corev1.ServiceList, error) {
sliceGwSvcList := &corev1.ServiceList{}
listOpts := []client.ListOption{
client.InNamespace(ControlPlaneNamespace),
client.MatchingLabels(map[string]string{ApplicationNamespaceSelectorLabelKey: sliceName}),
}

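The one-line change above adds client.InNamespace(ControlPlaneNamespace) so that the service List is scoped to the control-plane namespace instead of matching the slice label cluster-wide. The combined filter semantics (namespace AND all labels) can be sketched with plain structs — corev1.Service and the controller-runtime client are stubbed out, and the label key shown is illustrative:

```go
package main

import "fmt"

// svc is a stand-in for corev1.Service: just the fields the selector cares about.
type svc struct {
	Name      string
	Namespace string
	Labels    map[string]string
}

// listGwServices mimics List with client.InNamespace + client.MatchingLabels:
// the namespace must match AND every requested label must match.
func listGwServices(all []svc, ns string, sel map[string]string) []svc {
	var out []svc
	for _, s := range all {
		if s.Namespace != ns {
			continue
		}
		ok := true
		for k, v := range sel {
			if s.Labels[k] != v {
				ok = false
				break
			}
		}
		if ok {
			out = append(out, s)
		}
	}
	return out
}

func main() {
	all := []svc{
		{Name: "gw-svc-1", Namespace: "kubeslice-system", Labels: map[string]string{"kubeslice.io/slice": "red"}},
		{Name: "app-svc", Namespace: "user-ns", Labels: map[string]string{"kubeslice.io/slice": "red"}},
	}
	got := listGwServices(all, "kubeslice-system", map[string]string{"kubeslice.io/slice": "red"})
	fmt.Println(len(got), got[0].Name) // 1 gw-svc-1
}
```

Without the namespace option, a user service that happens to carry the slice label (as app-svc does here) would be picked up as a gateway service.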
95 changes: 81 additions & 14 deletions controllers/slicegateway/slicegateway.go
@@ -53,6 +53,11 @@
webhook "github.com/kubeslice/worker-operator/pkg/webhook/pod"
)

const (
DEFAULT_SIDECAR_IMG = "nexus.dev.aveshalabs.io/kubeslice/gw-sidecar:1.0.0"
DEFAULT_SIDECAR_PULLPOLICY = corev1.PullAlways
)

var (
vpnClientFileName = "openvpn_client.ovpn"
gwSidecarImage = os.Getenv("AVESHA_GW_SIDECAR_IMAGE")
@@ -78,7 +83,7 @@
}
}

func labelsForSliceGwService(name, svcName, depName string) map[string]string {
func labelsForSliceGwService(name, depName string) map[string]string {
return map[string]string{
controllers.SliceGatewaySelectorLabelKey: name,
"kubeslice.io/slice-gw-dep": depName,
@@ -110,8 +115,8 @@

var privileged = true

sidecarImg := "nexus.dev.aveshalabs.io/kubeslice/gw-sidecar:1.0.0"
sidecarPullPolicy := corev1.PullAlways
sidecarImg := DEFAULT_SIDECAR_IMG
sidecarPullPolicy := DEFAULT_SIDECAR_PULLPOLICY
vpnImg := "nexus.dev.aveshalabs.io/kubeslice/openvpn-server.ubuntu.18.04:1.0.0"
vpnPullPolicy := corev1.PullAlways
baseFileName := os.Getenv("CLUSTER_NAME") + "-" + g.Spec.SliceName + "-" + g.Status.Config.SliceGatewayName + ".vpn.aveshasystems.com"
@@ -360,7 +365,7 @@
},
Spec: corev1.ServiceSpec{
Type: "NodePort",
Selector: labelsForSliceGwService(g.Name, svcName, depName),
Selector: labelsForSliceGwService(g.Name, depName),
Ports: []corev1.ServicePort{{
Port: 11194,
Protocol: proto,
@@ -661,8 +666,25 @@
return ctrl.Result{}, err, true
}
gwPod.LocalNsmIP = status.NsmStatus.LocalIP
gwPod.TunnelStatus = kubeslicev1beta1.TunnelStatus(status.TunnelStatus)
// this grpc call fails untill the openvpn tunnel connection is not established, so its better to do not reconcile in case of errors, hence the reconciler does not proceedes further
gwPod.TunnelStatus = kubeslicev1beta1.TunnelStatus{
IntfName: status.TunnelStatus.IntfName,
LocalIP: status.TunnelStatus.LocalIP,
RemoteIP: status.TunnelStatus.RemoteIP,
Latency: status.TunnelStatus.Latency,
TxRate: status.TunnelStatus.TxRate,
RxRate: status.TunnelStatus.RxRate,
PacketLoss: status.TunnelStatus.PacketLoss,
Status: int32(status.TunnelStatus.Status),
TunnelState: status.TunnelStatus.TunnelState,
}

if isClient(slicegateway) {
// get the remote port number this gw pod is connected to on the remote cluster
_, remotePortInUse := getClientGwRemotePortInUse(ctx, r.Client, slicegateway, GetDepNameFromPodName(slicegateway.Status.Config.SliceGatewayID, gwPod.PodName))
gwPod.RemotePort = int32(remotePortInUse)

Check failure — Code scanning / CodeQL: Incorrect conversion between integer types (High)

Incorrect conversion of an integer with architecture-dependent bit size from strconv.Atoi to a lower bit size type int32 without an upper bound check.

Copilot Autofix (AI, about 2 months ago):

To fix the problem, we need to ensure that the integer value obtained from strconv.Atoi is within the bounds of the int32 type before casting it. This can be achieved by using strconv.ParseInt with a specified bit size of 32, which directly returns an int64 that can be safely cast to int32 after ensuring it is within the valid range.

1. Replace the use of strconv.Atoi with strconv.ParseInt, specifying a bit size of 32.
2. Add bounds checking to ensure the parsed integer is within the range of int32 before casting.
Suggested changeset (2 files):

controllers/slicegateway/slicegateway.go — autofix patch. Run the following command in your local git repository to apply it:

cat << 'EOF' | git apply
diff --git a/controllers/slicegateway/slicegateway.go b/controllers/slicegateway/slicegateway.go
--- a/controllers/slicegateway/slicegateway.go
+++ b/controllers/slicegateway/slicegateway.go
@@ -683,3 +683,8 @@
 			_, remotePortInUse := getClientGwRemotePortInUse(ctx, r.Client, slicegateway, GetDepNameFromPodName(slicegateway.Status.Config.SliceGatewayID, gwPod.PodName))
-			gwPod.RemotePort = int32(remotePortInUse)
+			if remotePortInUse >= math.MinInt32 && remotePortInUse <= math.MaxInt32 {
+				gwPod.RemotePort = int32(remotePortInUse)
+			} else {
+				log.Error(fmt.Errorf("remote port out of int32 range"), "Invalid remote port", "remotePortInUse", remotePortInUse)
+				gwPod.RemotePort = 0 // or some default value
+			}
 		}
EOF
controllers/slicegateway/utils.go (outside changed files) — autofix patch. Run the following command in your local git repository to apply it:

cat << 'EOF' | git apply
diff --git a/controllers/slicegateway/utils.go b/controllers/slicegateway/utils.go
--- a/controllers/slicegateway/utils.go
+++ b/controllers/slicegateway/utils.go
@@ -336,5 +336,6 @@
 				if envVar.Name == "NODE_PORT" {
-					nodePort, err := strconv.Atoi(envVar.Value)
+					nodePort64, err := strconv.ParseInt(envVar.Value, 10, 32)
 					if err == nil {
-						return true, nodePort
+						nodePort := int32(nodePort64)
+						return true, int(nodePort)
 					}
EOF
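The substance of both patches is the same: strconv.Atoi yields a platform-sized int, and a straight int32 cast can truncate silently, whereas strconv.ParseInt with bitSize 32 fails on out-of-range input before any cast happens. A minimal, self-contained illustration (the function name is made up for this example):

```go
package main

import (
	"fmt"
	"strconv"
)

// parsePort32 parses a decimal port value into int32 safely.
// ParseInt with bitSize 32 rejects anything outside int32's range,
// so the subsequent conversion cannot truncate.
func parsePort32(s string) (int32, error) {
	v, err := strconv.ParseInt(s, 10, 32)
	if err != nil {
		return 0, err
	}
	return int32(v), nil
}

func main() {
	p, err := parsePort32("30041")
	fmt.Println(p, err) // 30041 <nil>

	_, err = parsePort32("99999999999") // overflows int32
	fmt.Println(err != nil)             // true
}
```

On overflow, ParseInt returns a *strconv.NumError wrapping strconv.ErrRange, so the caller can distinguish "not a number" from "out of range" if it needs to.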
}

// this grpc call fails until the openvpn tunnel connection is established, so it is better not to reconcile on errors; hence the reconciler does not proceed further
gwPod.PeerPodName, err = r.getRemoteGwPodName(ctx, slicegateway.Status.Config.SliceGatewayRemoteVpnIP, gwPod.PodIP)
if err != nil {
log.Error(err, "Error getting peer pod name", "PodName", gwPod.PodName, "PodIP", gwPod.PodIP)
@@ -671,10 +693,11 @@
if isGatewayStatusChanged(slicegateway, gwPod) {
toUpdate = true
}
if len(slicegateway.Status.GatewayPodStatus) != len(gwPodsInfo) {
toUpdate = true
}
}
if len(slicegateway.Status.GatewayPodStatus) != len(gwPodsInfo) {
toUpdate = true
}

if toUpdate {
log.Info("gwPodsInfo", "gwPodsInfo", gwPodsInfo)
slicegateway.Status.GatewayPodStatus = gwPodsInfo
@@ -725,6 +748,10 @@

err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
err := r.Get(ctx, req.NamespacedName, slicegateway)
if err != nil {
log.Error(err, "Failed to get SliceGateway")
return err
}
slicegateway.Status.ConnectionContextUpdatedOn = time.Now().Unix()
err = r.Status().Update(ctx, slicegateway)
if err != nil {
@@ -1094,6 +1121,15 @@

func (r *SliceGwReconciler) ReconcileGwPodPlacement(ctx context.Context, sliceGw *kubeslicev1beta1.SliceGateway) error {
log := r.Log

// if the env variable is set, do not perform any gw pod rebalancing. This is useful in clusters where
// the k8s scheduler does not honor the pod anti-affinity rule and places the gw pods on the same node. Such scenarios
// could occur if the node with the kubeslice gateway label is cordoned off or if the node has insufficient resources or
// if the node has some taints that the gw pods cannot tolerate.
if os.Getenv("DISABLE_GW_POD_REBALANCING") == "true" {
return nil
}

// The gw pod rebalancing is always performed on a deployment. We expect the gw pods belonging to a slicegateway
// object between any two clusters to be placed on different nodes marked as kubeslice gateway nodes. If they are
// initially placed on the same node due to a lack of kubeslice-gateway nodes, the rebalancing algorithm is expected
@@ -1170,7 +1206,7 @@
return ctrl.Result{Requeue: true}, nil, true
}

func (r *SliceGwReconciler) handleSliceGwSvcDeletion(ctx context.Context, sliceGw *kubeslicev1beta1.SliceGateway, svcName, depName string) error {
func (r *SliceGwReconciler) handleSliceGwSvcDeletion(ctx context.Context, sliceGw *kubeslicev1beta1.SliceGateway, svcName string) error {
log := logger.FromContext(ctx).WithName("slicegw")
serviceFound := corev1.Service{}
err := r.Get(ctx, types.NamespacedName{Namespace: sliceGw.Namespace, Name: svcName}, &serviceFound)
Expand Down Expand Up @@ -1345,6 +1381,15 @@
}
}

sidecarImg := DEFAULT_SIDECAR_IMG
if len(gwSidecarImage) != 0 {
sidecarImg = gwSidecarImage
}
sidecarPullPolicy := DEFAULT_SIDECAR_PULLPOLICY
if len(gwSidecarImagePullPolicy) != 0 {
sidecarPullPolicy = corev1.PullPolicy(gwSidecarImagePullPolicy)
}

for gwInstance := 0; gwInstance < numGwInstances; gwInstance++ {
if !gwDeploymentIsPresent(sliceGwName, gwInstance, deployments) {
dep := r.deploymentForGateway(sliceGw, sliceGwName+"-"+fmt.Sprint(gwInstance)+"-"+"0", gwConfigKey)
@@ -1355,6 +1400,28 @@
return ctrl.Result{}, err, true
}
return ctrl.Result{Requeue: true}, nil, true
} else {
// update logic for gateways
for i := range deployments.Items {
deployment := &deployments.Items[i]
if deployment.Name == sliceGwName+"-"+fmt.Sprint(gwInstance)+"-"+"0" {
// update if gateway sidecar image has been changed in worker env vars
for j := range deployment.Spec.Template.Spec.Containers {
container := &deployment.Spec.Template.Spec.Containers[j]
if container.Name == "kubeslice-sidecar" && (container.Image != sidecarImg || container.ImagePullPolicy != sidecarPullPolicy) {
container.Image = sidecarImg
container.ImagePullPolicy = sidecarPullPolicy
log.Info("updating gw Deployment sidecar", "Name", deployment.Name, "image", gwSidecarImage)
err = r.Update(ctx, deployment)
if err != nil {
log.Error(err, "Failed to update Deployment", "Name", deployment.Name)
return ctrl.Result{}, err, true
}
return ctrl.Result{Requeue: true}, nil, true
}
}
}
}
}
}

@@ -1385,7 +1452,7 @@
}
// Update the port map
gwClientToRemotePortMap.Store(deployment.Name, portNumToUpdate)
err = r.updateGatewayDeploymentNodePort(ctx, r.Client, sliceGw, &deployment, portNumToUpdate)
err = r.updateGatewayDeploymentNodePort(ctx, sliceGw, &deployment, portNumToUpdate)
if err != nil {
return ctrl.Result{}, err, true
}
@@ -1399,7 +1466,7 @@
if foundInMap {
if portInMap != nodePortInUse {
// Update the deployment since the port numbers do not match
err := r.updateGatewayDeploymentNodePort(ctx, r.Client, sliceGw, &deployment, portInMap.(int))
err := r.updateGatewayDeploymentNodePort(ctx, sliceGw, &deployment, portInMap.(int))
if err != nil {
return ctrl.Result{}, err, true
}
@@ -1425,7 +1492,7 @@
if deploymentsToDelete != nil {
for _, depToDelete := range deploymentsToDelete.Items {
// Delete the gw svc associated with the deployment
err := r.handleSliceGwSvcDeletion(ctx, sliceGw, getGwSvcNameFromDepName(depToDelete.Name), depToDelete.Name)
err := r.handleSliceGwSvcDeletion(ctx, sliceGw, getGwSvcNameFromDepName(depToDelete.Name))
if err != nil {
log.Error(err, "Failed to delete gw svc", "svcName", depToDelete.Name)
return ctrl.Result{}, err, true
@@ -1615,7 +1682,7 @@

// updateGatewayDeploymentNodePort updates the gateway client deployments with the relevant updated ports
// from the workersliceconfig
func (r *SliceGwReconciler) updateGatewayDeploymentNodePort(ctx context.Context, c client.Client, g *kubeslicev1beta1.SliceGateway, deployment *appsv1.Deployment, nodePort int) error {
func (r *SliceGwReconciler) updateGatewayDeploymentNodePort(ctx context.Context, g *kubeslicev1beta1.SliceGateway, deployment *appsv1.Deployment, nodePort int) error {
containers := deployment.Spec.Template.Spec.Containers
for contIndex, cont := range containers {
if cont.Name == "kubeslice-sidecar" {
19 changes: 12 additions & 7 deletions controllers/slicegateway/utils.go
@@ -22,16 +22,17 @@ import (
"context"
"errors"
"fmt"
"os"
"strconv"
"strings"
"sync"

gwsidecarpb "github.com/kubeslice/gateway-sidecar/pkg/sidecar/sidecarpb"
kubeslicev1beta1 "github.com/kubeslice/worker-operator/api/v1beta1"
"github.com/kubeslice/worker-operator/controllers"
ossEvents "github.com/kubeslice/worker-operator/events"
"github.com/kubeslice/worker-operator/pkg/utils"
webhook "github.com/kubeslice/worker-operator/pkg/webhook/pod"
"os"
"strconv"
"strings"
"sync"

appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
@@ -112,6 +113,10 @@ func getPodNames(slicegateway *kubeslicev1beta1.SliceGateway) []string {
}

func GetDepNameFromPodName(sliceGwID, podName string) string {
if sliceGwID == "" || podName == "" {
return ""
}

after, found := strings.CutPrefix(podName, sliceGwID)
if !found {
return ""
@@ -204,13 +209,13 @@ func getPodPairToRebalance(podsOnNode []corev1.Pod, sliceGw *kubeslicev1beta1.Sl
func GetPeerGwPodName(gwPodName string, sliceGw *kubeslicev1beta1.SliceGateway) (string, error) {
podInfo := findGwPodInfo(sliceGw.Status.GatewayPodStatus, gwPodName)
if podInfo == nil {
return "", errors.New("Gw pod not found")
return "", errors.New("gw pod not found")
}
if podInfo.TunnelStatus.Status != int32(gwsidecarpb.TunnelStatusType_GW_TUNNEL_STATE_UP) {
return "", errors.New("Gw tunnel is down")
return "", errors.New("gw tunnel is down")
}
if podInfo.PeerPodName == "" {
return "", errors.New("Gw peer pod info unavailable")
return "", errors.New("gw peer pod info unavailable")
}

return podInfo.PeerPodName, nil
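The utils.go hunk above adds an empty-input guard to GetDepNameFromPodName ahead of the strings.CutPrefix call; the rest of the function body is cut off in this view. A simplified stand-in covering just the visible part (the real function presumably goes on to trim the replica-set/pod suffix, elided here):

```go
package main

import (
	"fmt"
	"strings"
)

// depPrefixFromPodName sketches the visible portion of GetDepNameFromPodName:
// the pod name must start with the slicegateway ID, and the new guard returns
// "" for empty inputs instead of relying on CutPrefix alone.
func depPrefixFromPodName(sliceGwID, podName string) string {
	if sliceGwID == "" || podName == "" {
		return ""
	}
	after, found := strings.CutPrefix(podName, sliceGwID)
	if !found {
		return ""
	}
	return after
}

func main() {
	fmt.Println(depPrefixFromPodName("slice-gw-a", "slice-gw-a-0-0-abc123")) // -0-0-abc123
	fmt.Println(depPrefixFromPodName("", "slice-gw-a-0-0-abc123") == "")     // true
}
```

strings.CutPrefix (Go 1.20+) returns the remainder plus a found flag, so a pod name that does not belong to the given slicegateway ID falls through to the empty-string return rather than being mis-parsed.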