
[BUG] [OpenSearch-2.20.0] [ Scaling Challenges With Statefulsets ] #550

Open
1d9akash opened this issue Jun 12, 2024 · 2 comments

Labels
bug Something isn't working

@1d9akash
Describe the bug

When deploying OpenSearch using a Helm chart with separate nodeGroup configurations for "master" and "data" roles (each with 2 replicas), the deployment process requires manually creating Persistent Volumes (PVs) to match the Persistent Volume Claims (PVCs) generated by the Helm chart. This setup uses Amazon EFS as the storage backend, with a pre-existing StorageClass in the cluster. The manual step of creating PVs with specific labels to ensure proper PVC binding for the StatefulSets (os-master-0, os-master-1, os-data-0, os-data-1) adds complexity, especially when scaling the deployment up or down. I am looking for a way to simplify scaling, either by allowing all replicas to share a single volume or by automating PV and PVC creation and binding during scaling operations.
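
For illustration, this is roughly what one of the manually created PVs looks like today (the name, binding label, StorageClass name, and EFS filesystem ID below are placeholders, not the exact values from my cluster):

```yaml
# Sketch of a hand-written PV for one StatefulSet pod (placeholder values).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: os-master-0-pv
  labels:
    claim: os-master-0        # hypothetical label used to steer PVC binding
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc    # pre-existing StorageClass in the cluster
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-0123456789abcdef0   # placeholder EFS filesystem ID
```

One such PV has to exist for every replica, which is what makes scale-up and scale-down painful.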

To Reproduce

Steps to reproduce the behavior are not applicable as the issue relates to deployment and scaling infrastructure setup.

Expected behavior

The expected solution would automate the volume management process, eliminating the need for manual PV creation when scaling the OpenSearch deployment. Ideally, scaling up or down would automatically handle PV and PVC provisioning and binding, simplifying the management of StatefulSets in Kubernetes.
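
One possible direction (not verified in this setup): letting the AWS EFS CSI driver provision volumes dynamically, so each PVC created by the StatefulSet gets its own access point and PV automatically. A minimal sketch, assuming the EFS CSI driver is installed; the StorageClass name and filesystem ID are placeholders:

```yaml
# StorageClass for dynamic EFS provisioning via access points (sketch).
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-dynamic
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap            # one EFS access point per provisioned volume
  fileSystemId: fs-0123456789abcdef0  # placeholder EFS filesystem ID
  directoryPerms: "700"
reclaimPolicy: Delete
```

With something like this, scaling a nodeGroup would only create new PVCs, and the driver would provision and bind the backing PVs on its own.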

Chart Name

OpenSearch Helm Chart Version: 20.04

Host/Environment

  • Helm Version: 3.13
  • Kubernetes Version: 1.29

Additional context

The current setup uses Amazon EFS as the persistent storage solution, with a StorageClass already defined in the Kubernetes cluster. The manual process of creating and labeling PVs to match PVCs is cumbersome, especially for dynamic scaling scenarios. Any guidance on automating this process or configuring the deployment to use shared volumes (if feasible) would be greatly appreciated.
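
For reference, if dynamic provisioning is the right route, the per-nodeGroup values might only need to point at such a StorageClass (key names assumed to match the chart's values.yaml; the class name is the hypothetical one from the sketch above):

```yaml
# Excerpt of the "data" nodeGroup values (sketch; key names assumed).
nodeGroup: "data"
replicas: 2
persistence:
  enabled: true
  storageClass: "efs-dynamic"   # hypothetical dynamically provisioning class
  accessModes:
    - ReadWriteMany
  size: 30Gi
```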

@1d9akash 1d9akash added bug Something isn't working untriaged Issues that have not yet been triaged labels Jun 12, 2024
@dblock
Member

dblock commented Jul 1, 2024

Thanks for opening this.

[Catch All Triage - Attendees 1, 2, 3, 4, 5]

@dblock dblock removed the untriaged Issues that have not yet been triaged label Jul 1, 2024
@getsaurabh02 getsaurabh02 moved this from 🆕 New to Later (6 months plus) in Engineering Effectiveness Board Jul 18, 2024
@roymanish

What is the plan to fix this issue? I also feel this should be simplified in Helm chart based deployments.
