Increase persisted user space? #1
@rsignell-usgs Sorry, I didn't realize until the call today that we were using issues on this repo. I'll let you know when this config change is implemented.
@craig-willis, fantastic. BTW, congrats on the 👶!!
I've updated the singleuser config as follows and restarted the chart.
Note that the test cluster currently has limited storage -- I've only provisioned ~40GB of storage total for testing. If we need to increase this in the short term, let me know.
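The exact values weren't captured in the thread, but in the Zero to JupyterHub Helm chart this change is typically made via `singleuser.storage.capacity`; a hypothetical sketch:

```yaml
# Hypothetical values.yaml fragment for the Zero to JupyterHub Helm chart;
# the real values used on the test cluster were not shown above.
singleuser:
  storage:
    capacity: 5Gi    # size of each user's persistent volume claim
```

Applying it amounts to a `helm upgrade` with the updated values file.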
@craig-willis, I just looked, and was expecting to see increased space in /home/jovyan.
It still looks to be 1GB. I tried making my datashader environment and it failed again, running out of room. Since this is primarily to be used for training, perhaps we don't need too much local room? But it would be nice to have at least 5GB to test out a custom env or two...
@rsignell-usgs My mistake, when changing this setting under JupyterHub on Kubernetes I have to delete any existing persistent volume claims (which will delete your user data). I've reduced the value to 5GB per user for now (we can certainly make this bigger if we resize this cluster later). For this to take effect, I need to delete the PVC associated with your account. Let me know if this is OK. The next time you start a notebook, you'll get the 5GB disk.
That’s totally fine. Blow it away!
Deleted. Next time you start the server you should get a 5GB volume. Let me know if otherwise.
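For anyone reproducing this step, deleting a user's claim under a KubeSpawner-based deployment looks roughly like the following (the claim name and namespace here are assumptions; check `kubectl get pvc` for the real names):

```shell
# List the per-user persistent volume claims (namespace is an assumption)
kubectl get pvc -n jhub

# Delete one user's claim -- this destroys that user's home directory data!
# KubeSpawner-style claim names look like claim-<escaped-username>.
kubectl delete pvc claim-rsignell-2dusgs -n jhub
```

The new capacity then applies when the user's server next starts and a fresh PVC is provisioned.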
@craig-willis, I hope you are messing around, or maybe I killed it. I created a custom env and ran this notebook; it seemed to be running fine, but then I got an error. And now https://esiphub.ndslabs.org/hub/home is returning a 504.
@rsignell-usgs Not sure what happened -- I wasn't doing anything with the system yesterday. A 504 from JupyterHub usually signals that the server is under load. These instances are small (2 cores, 4GB RAM) -- so maybe we're at the point that we need to rescale the cluster for more realistic use?
Indeed, 4GB RAM could be problematic for the datashader/trimesh grid stuff. I know that Unidata is running their JupyterHub on an XLarge instance on Jetstream, which has 60GB RAM.
Yeah, it sounds like we're outgrowing the initial setup. It would be helpful to start collecting some of these requirements both for the core ESIPhub team (I'm assuming 10 users for now) and estimates for the workshop instance -- I've started in #6. With the Kubernetes deployment, we'll have both resources at the node level and limits on each user's running container. Do you have a sense of what would be required for a single user in this case for ESIPhub? We can certainly scale up the nodes to something larger, but will likely still want to know what limits to put on individual users.
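Per-user limits in the Zero to JupyterHub chart are set alongside the node sizing; a hypothetical sketch of what such limits look like (the numbers are illustrative, not what was deployed):

```yaml
# Hypothetical values.yaml fragment: per-user container limits and guarantees
singleuser:
  memory:
    limit: 4G
    guarantee: 1G
  cpu:
    limit: 2
    guarantee: 0.5
```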
On pangeo.pydata.org, the controller node has 4GB RAM, and the worker nodes have 6GB RAM. Closing this since the user storage is now 5GB. |
Okay, I checked out the shiny new JupyterHub at https://esiphub2.ndslabs.org!
I was able to log in with GitHub credentials and switch between the JupyterLab and classic Jupyter environments:
https://esiphub2.ndslabs.org/user/rsignell-usgs/lab
https://esiphub2.ndslabs.org/user/rsignell-usgs/tree
So then I modified my ~/.condarc to:
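(The actual contents weren't captured above; a plausible `~/.condarc` for persisting custom environments, with assumed paths, would be:)

```yaml
# Hypothetical ~/.condarc -- the real contents were not shown in the thread.
# Keeps environments and the package cache on the persisted home volume.
envs_dirs:
  - /home/jovyan/my-conda-envs
pkgs_dirs:
  - /home/jovyan/.conda/pkgs
channels:
  - conda-forge
```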
and restarted the server to make sure my ~/.condarc was persisted. It was! 🍾
I then tried to create a custom `datashader` environment and ran out of room! So it looks like we have only 1GB on `/home/jovyan`, is that right? We will need some solution for persisted storage here, since most of our custom environments are 1GB or larger. I would say we need at least 30GB per user.
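An environment of that sort can be sketched as an `environment.yml` (the package list is a guess; the exact spec wasn't shown in the thread):

```yaml
# Hypothetical environment.yml for the custom datashader environment;
# the actual package list was not captured above.
name: datashader
channels:
  - conda-forge
dependencies:
  - python=3.6
  - datashader
  - jupyterlab
```

Created with `conda env create -f environment.yml`; the packages alone easily exceed 1GB, which matches the out-of-space failure described here.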