Call Program before Jupyter Hub Launch #29
Sorry for the delay on this. It slipped out of the top of the inbox very quickly as it has been a busy week. Can you clarify whether you want these steps to run inside the pod for JupyterHub, or in the pod for each user's Jupyter notebook instance?
My use case is to direct users to certain directories (via environment variables) and run some setup procedures. I think the pod for each Jupyter notebook instance would be the correct place! (If there were a similar hook for the JupyterHub pod, it would be a good addition as well, I think.)
Are the directories for storage? One approach is to mount a sub-directory for the user from a shared persistent volume, rather than mounting the whole persistent volume and then placing them in a specific directory. If you were to do the latter, they could see and modify other people's files. For an example of this scheme, if you only want to use a single persistent volume for all users, as opposed to a persistent volume per user, see: In particular the JupyterHub config at:
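The per-user sub-directory scheme described above could be sketched with KubeSpawner's `modify_pod_hook`. This is a minimal, hypothetical sketch: the volume name `shared-volume` and the `users/` prefix are assumptions for illustration, not names defined by the quickstart.

```python
# jupyterhub_config.py -- hypothetical sketch of mounting a per-user
# sub-directory of a shared persistent volume via KubeSpawner.

def mount_user_subdirectory(spawner, pod):
    """Rewrite the shared volume mount so each user only sees their
    own sub-directory of the persistent volume."""
    for container in pod.spec.containers:
        for mount in (container.volume_mounts or []):
            # "shared-volume" is an assumed volume name; match whatever
            # name your deployment actually uses.
            if mount.name == "shared-volume":
                mount.sub_path = "users/" + spawner.user.name
    return pod

# Registered in jupyterhub_config.py with:
#   c.KubeSpawner.modify_pod_hook = mount_user_subdirectory
```

Because the sub-path is rewritten on the mount rather than in the notebook, users never see the rest of the volume from inside their pod.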
Thanks for the hints! It's a good idea to mount user-specific sub-directories! I'll have a look into it! In general our setup is even a bit more complicated: we are running HPC jobs through a batch submission system, launched from notebooks. The shared filesystem is mounted into the pod – and only this filesystem can be accessed from the submitted job. So users would be able to escape their sub-directory (via the backend) if they really wanted to – but still, I consider mounting sub-directories a good idea.
The Jupyter notebook images in this GitHub org also support an environment variable. The changes I made a few hours back, related to jupyter-on-openshift/jupyter-notebooks#16, would allow you to supply a shell script which is run during the startup sequence. Theoretically it could read an environment variable and change the working directory before starting the notebook. That shell script needs to be stored at
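The environment-variable side of this could look roughly like the following. This is a hedged sketch: `JUPYTER_WORK_DIR` and the `/shared/users/` path are invented names for illustration, and the exact way per-user values are registered with KubeSpawner may differ between versions.

```python
# jupyterhub_config.py -- hypothetical sketch: pass a per-user working
# directory into the notebook pod, which a startup shell script could
# read and cd into before the notebook server starts.

def user_environment(username):
    """Environment variables for a single user's notebook pod.
    JUPYTER_WORK_DIR is an assumed name, not defined by the images."""
    return {
        "JUPYTER_WORK_DIR": "/shared/users/" + username,
    }

# One possible wiring (check the KubeSpawner docs for the exact
# mechanism your version supports for per-user values):
#   c.KubeSpawner.environment = user_environment("someuser")
```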
Thank you! I'll need some time to digest this and try it out!
Keep me in the loop on what you are trying to do. There are all sorts of ways you can adapt JupyterHub, and I am working on some new built-in configuration options. One will provide play pens for users, where you have authentication and cluster access from the notebook to deploy extra stuff. Another will be a test lab environment, where, when a user requests a selected notebook, additional workloads can be deployed into a linked project on demand for whatever may be required by the notebook. So you could, for example, deploy a Dask or Spark cluster automatically on startup of a session the first time.
Both things sound really good. But especially the sample project from Singapore NTU looks very interesting. We set up our notebooks as follows: log in to the HPC system via SSH (with a forwarded port), load the environment you need, start
If you want to hop on a video chat session to discuss options to try and speed things up, let me know. I've been doing various Jupyter stuff this last week, so I am in the right frame of mind to help out if I can.
I'd love to! Can I contact you somewhere privately? I've just added you on Twitter.
Disclaimer: I'm an absolute OpenShift newbie, but want to use the JupyterHub Quickstart for an HPC tutorial soon.
Is it possible to execute a program (or Bash script) before JupyterHub is launched? I need to set up environment variables and move things around before the notebook is started.
jupyterhub_config.sh seems to be intended for shell commands, but I don't know how to use the file. There's also a corresponding entry in the ConfigMap, but I don't know how to use it (and am not sure if this is really intended to be used for this kind of thing).