
Customise POD definition #39

Open
Iyappanj opened this issue Apr 23, 2024 · 7 comments

Comments

@Iyappanj

Is your feature request related to a problem?

  1. I need to be able to customize the pod (memory, CPU, etc.) created when using Kubernetes as a provider.
  2. I want to understand where the data is stored by default. I see that the data in the workspace is persistent by default until we delete the workspace, but I could not identify exactly where this data is stored, because I do not see a PVC created in my k8s cluster even though the workspace is created successfully.

I cannot find detailed documentation on DevPod to go through. The only document I have gone through is https://devpod.sh/docs/what-is-devpod

Which solution do you suggest?

Which alternative solutions exist?

Additional context

@pascalbreuninger
Member

Hi @Iyappanj, thanks for opening the issue. I'm going to transfer it to the kubernetes provider repository.

First up, for the pod customization we have the POD_MANIFEST_TEMPLATE option that allows you to specify extra configuration for your pod, like so:

apiVersion: v1
kind: Pod
spec:
  containers:
  - name: devpod # make sure you name this exactly `devpod`
    resources: {} # resources go here
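
For illustration, a filled-in template with requests and limits might look like the sketch below (the values are only examples, not recommendations):

apiVersion: v1
kind: Pod
spec:
  containers:
  - name: devpod # must be named exactly `devpod`
    resources:
      requests:
        cpu: "1"
        memory: 512Mi
      limits:
        cpu: "2"
        memory: 1Gi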

As for the second question, are you looking in all namespaces? kubectl get pvc -A
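
A rough sketch of wiring the template up from the CLI (assuming, depending on your provider version, that POD_MANIFEST_TEMPLATE accepts the path to a YAML file on the machine running DevPod):

# save the template above to a file, then point the provider at it
devpod provider set-options kubernetes -o POD_MANIFEST_TEMPLATE=/path/to/pod-manifest-template.yaml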

@pascalbreuninger pascalbreuninger transferred this issue from loft-sh/devpod Apr 24, 2024
@Iyappanj
Author

Iyappanj commented Apr 24, 2024

Hi @pascalbreuninger, thanks for taking the time to answer the questions.

I already tried the POD_MANIFEST_TEMPLATE, as per the example provided here - https://github.com/loft-sh/devpod-provider-kubernetes/blob/main/hack/provider/pod-manifest-template.yaml. But after adding the pod manifest, I am unable to create a workspace. The error I get is below:

09:32:44 info unmarshal pod template: error unmarshaling JSON: json: cannot unmarshal string into Go value of type v1.Pod
09:32:44 info devcontainer up: start dev container: error running devcontainer: exit status 1
09:32:45 error Try using the --debug flag to see a more verbose output
09:32:45 fatal run agent command: Process exited with status 1

Without the pod template, it works fine.

I also read that I can specify the pod resources in provider.yaml, but doing so will affect all the new pods that I create. So I am trying to change it in the devcontainer.json as below, but nothing seems to change with the devcontainer.json. Do you have any reference for this?

containerEnv": {
"RESOURCE_LIMITS": "cpu=1,memory=512Mi"
}

@pascalbreuninger
Member

Are you on an older version of the kubernetes provider? If so, please make sure to update at least to v0.1.7 and try again.

I also read that I can specify the pod resources in provider.yaml, but doing so will affect all the new pods that I create. So I am trying to change it in the devcontainer.json as below, but nothing seems to change with the devcontainer.json. Do you have any reference for this?

This needs to be configured on the provider level, not in the .devcontainer.json. You can see your current provider options in the Desktop App under Provider > Kubernetes or via the CLI: devpod provider options kubernetes.
If you then want to update the resources, you'd do this either in the Desktop App again or run devpod provider set-options kubernetes -o RESOURCES="requests.cpu=1,requests.memory=512Mi".
Every option has a description that should help you get the correct format down.
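
For example (a sketch; the limits.* keys are assumed to follow the same pattern as the requests.* keys shown above):

# inspect the current provider options and their descriptions
devpod provider options kubernetes

# set requests (and, assuming the same key pattern, limits) for newly created workspace pods
devpod provider set-options kubernetes \
  -o RESOURCES="requests.cpu=1,requests.memory=512Mi,limits.cpu=2,limits.memory=1Gi"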

@Iyappanj
Author

Iyappanj commented Apr 24, 2024

@pascalbreuninger instead of doing it on the provider level, I want to restrict the resources on the workspace level. That is the reason I was trying to achieve it in the devcontainer.json. Do you have any idea how we can achieve it on the workspace level?

Adding to it, when I run the help for the command below, there is no description that guides me on the arguments that -o accepts. A guide that explains the arguments each option accepts would be great.

devpod provider set-options kubernetes --help
Sets options for the given provider. Similar to 'devpod provider use', but does not switch the default provider.

Usage:
devpod provider set-options [flags]

Flags:
--dry Dry will not persist the options to file and instead return the new filled options
-h, --help help for set-options
-o, --option stringArray Provider option in the form KEY=VALUE
--reconfigure If enabled will not merge existing provider config
--single-machine If enabled will use a single machine for all workspaces

Global Flags:
--context string The context to use
--debug Prints the stack trace if an error occurs
--devpod-home string If defined will override the default devpod home. You can also use DEVPOD_HOME to set this
--log-output string The log format to use. Can be either plain, raw or json (default "plain")
--provider string The provider to use. Needs to be configured for the selected context.
--silent Run in silent mode and prevents any devpod log output except panics & fatals

@pascalbreuninger
Member

pascalbreuninger commented Apr 24, 2024

instead of doing it on the provider level, I want to restrict the resources on the workspace level.

There's no such thing directly. Instead, it'll use the latest provider options before starting a workspace.

Adding to it, when I run the help for the command below, there is no description that guides me on the arguments that -o accepts. A guide that explains the arguments each option accepts would be great.

Fair enough, we'll need to generate some documentation for this. For the time being, you can always find the options in the provider.yaml file in each provider repository.
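
Under that model, a hedged sketch of a "per-workspace" flow is to adjust the provider options right before bringing up each workspace (the option values and repository URLs below are purely illustrative):

# small workspace
devpod provider set-options kubernetes -o RESOURCES="requests.cpu=1,requests.memory=512Mi"
devpod up github.com/example/small-project

# larger workspace: update the options first, then start it
devpod provider set-options kubernetes -o RESOURCES="requests.cpu=4,requests.memory=8Gi"
devpod up github.com/example/big-project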

@b4nst

b4nst commented Apr 26, 2024

There's no such thing directly. Instead, it'll use the latest provider options before starting a workspace.

This sounds quite limiting. A lot of those options (service account, resources, labels, etc.) are things one wants to customise per workspace. Having to reconfigure your provider each time is not really convenient.

Imo this could live in the customizations part of the devcontainer.json (ref):

				"customizations": {
					"type": "object",
					"description": "Tool-specific configuration. Each tool should use a JSON object subproperty with a unique name to group its customizations."
				},
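
For illustration only, a hypothetical (not currently supported) layout could look something like the sketch below; the loft.devpod.kubernetes key and its fields are made up for this sketch:

"customizations": {
  "loft.devpod.kubernetes": {
    "serviceAccount": "dev-workspaces",
    "labels": { "team": "platform" },
    "resources": {
      "requests": { "cpu": "1", "memory": "512Mi" },
      "limits": { "cpu": "2", "memory": "1Gi" }
    }
  }
}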

Would that be something the team is open to changing? I'd be eager to use that as a contribution starting point if so.

@pascalbreuninger
Member

@b4nst That's definitely something we've discussed quite a bit and we don't have a definitive stance on it because it doesn't come without complexity and drawbacks:

  1. Separation of concerns - devcontainer.json defines the environment (read: image), and providers configure the infrastructure.
  2. Versioning - all of the options here would need to be pinned to a specific provider and provider version.
  3. Shareability - let's say providers can be configured from within .devcontainer.json; then potentially anyone without DevPod wouldn't be able to create workspaces anymore, for example because the author added an extra mount that's configured in a Kubernetes option. Next up, we'd need to initialize providers on the user's machine if we detect them, which might have unintended consequences, such as reading credentials from your aws CLI and creating workspaces on a project you did not intend to.

All this doesn't mean we won't ever add this; I just want to convey the effort and thought that needs to go into it.
