Clean up CI Dockerfile #56
Currently, the Dockerfiles in `ci/docker` run `make all` and rely on the user later running `python setup.py develop` to set up the Python package. While this works well for wheel building, it's not ideal for the Jenkins setup, since it forces a rebuild of the Docker image each time and prevents us from caching the dependencies (which shouldn't change too often).

One way to address this is to modify the Dockerfiles to install dependencies first, run `make` second, and `python setup.py develop` third. Alternatively, if we want to keep the current structure, we could separate them into three Dockerfiles, one main one and two branches: one for wheel building and one for Jenkins.

On a related note, we should be able to cut down on running time by replacing `make all` with `make lib`, but maybe I'm missing something here.
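For concreteness, a minimal sketch of that dependency-first ordering might look like the following. The base image, package list, and `requirements.txt` path are placeholder assumptions for illustration, not the project's actual files:

```dockerfile
# Sketch only: base image, packages, and paths are placeholders,
# not the project's actual files.
FROM nvidia/cuda:11.4.3-devel-centos7

# 1. System dependencies: change rarely, so this layer stays cached.
RUN yum install -y make gcc-c++ python3 python3-devel \
    && yum clean all

# 2. Python dependencies: re-runs only when requirements.txt changes.
COPY requirements.txt /tmp/requirements.txt
RUN python3 -m pip install -r /tmp/requirements.txt

# 3. Source code: the only layer invalidated by ordinary commits.
COPY . /src
WORKDIR /src

# 4. Build the library, then install the package in development mode.
RUN make lib \
    && python3 setup.py develop
```

With this ordering, an ordinary commit only invalidates the layers from the `COPY . /src` step down, so the yum/CUDA and pip layers are served from cache.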
Lets talk about this in our next meeting. One reason I didn't push a docker image is that I didn't want to tie it to my personal docker account (and you probably shouldn't either.. maybe FI has a hub... GL provides one, maybe GH does too, we can check). Generally the flow would look like, if dockerfile has changed rebuild it, tag, push to hub. Remaining CI code always pulls latest from a hub. That sounds in line with your first point. It is common (though perhaps slightly painful in Jenkins, maybe not worth it, I didn't think so at the time..). I'm not sure I understand what the other item is. Probably one image can do all the things (I think even the one that exists, which is just burdened by the rebuild time) depending on what we tell it to do (and how it is mounted). I think it will be faster for me to understand what I am missing if we just talk about it. Given the flow above, you would expect jenkins to make the CUDA and python code every time (unless you wanted to get even more fancy). There are other flows. Make all was prep for Whatever we do, it will be best to keep it as simple as possible. When CI runs correctly, you might not look at them for months at a time, which is a recipe for forgetting everything. The original reason I made the dockerfiles was just for wheels, but I'm glad we can reuse them for CI already, that's great news. |
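Hypothetically, that rebuild/tag/push flow could be sketched like this; the registry, image name, and Dockerfile path are placeholders until we settle on a hub:

```sh
# Sketch of the rebuild/tag/push flow; REGISTRY and image name are
# placeholders until we pick a hub (FI/GL/GH).
REGISTRY=registry.example.com/ourproject
TAG=$(git rev-parse --short HEAD)

# Rebuild only when the Dockerfile itself changed in this commit.
if ! git diff --quiet HEAD~1 -- ci/docker/Dockerfile; then
    docker build -t "$REGISTRY/ci:$TAG" -f ci/docker/Dockerfile .
    docker tag "$REGISTRY/ci:$TAG" "$REGISTRY/ci:latest"
    docker push "$REGISTRY/ci:$TAG"
    docker push "$REGISTRY/ci:latest"
fi

# The rest of the CI code always pulls the latest image from the hub.
docker pull "$REGISTRY/ci:latest"
```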
Yeah, let's talk about this in the meeting next week. I'm not concerned about pushing any images to a hub somewhere, just reducing the build time during testing without complicating the Dockerfiles too much. I didn't want to mess around with the one you made, but you're right, it can probably be modified to do everything we want it to (build the wheels and run on Jenkins).
Gotcha. Docker can locally cache every layer up to the point of change (i.e., the layer where the source code is copied in, in the one I made), so generally speaking there should be little benefit to breaking up the file just to separate the yum/CUDA install parts etc., past the first docker build on any particular machine. If that isn't happening in practice for some setup, the way to get the same behavior non-locally is via a hub. You can see this locally if you try to build the container twice, or just twiddle the last line (layer) a little. It is actually sort of a neat thing to geek out on once or twice.
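For example (image name and touched file are hypothetical), you can watch the cache behave exactly as described:

```sh
# Build twice: the second run should report "Using cache" / "CACHED"
# for every layer.
docker build -t ci-image -f ci/docker/Dockerfile .
docker build -t ci-image -f ci/docker/Dockerfile .

# Change something only the final COPY layer depends on and rebuild:
# every layer above the COPY is still served from cache.
touch some_source_file.py
docker build -t ci-image -f ci/docker/Dockerfile .
```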
Right, so the last layer is the …