From 9ab3b89836700f0a41679052d3fbfecc916cc573 Mon Sep 17 00:00:00 2001
From: Tomas Baca
Date: Tue, 3 Dec 2024 15:36:14 +0100
Subject: [PATCH] more of docker

---
 .../30-docker/70-workspace-build.md           | 110 +++++++++++++++
 .../30-docker/70-workspace-caching.md         | 131 ------------------
 2 files changed, 110 insertions(+), 131 deletions(-)
 create mode 100644 docs/10-prerequisities/30-docker/70-workspace-build.md
 delete mode 100644 docs/10-prerequisities/30-docker/70-workspace-caching.md

diff --git a/docs/10-prerequisities/30-docker/70-workspace-build.md b/docs/10-prerequisities/30-docker/70-workspace-build.md
new file mode 100644
index 00000000..af6e9d7c
--- /dev/null
+++ b/docs/10-prerequisities/30-docker/70-workspace-build.md
@@ -0,0 +1,110 @@
+---
+title: ROS Workspace build
+pagination_label: Building and caching of a ROS workspace in Docker
+description: How to build and cache a ROS workspace into a Docker image
+---
+
+# ROS Workspace caching with Docker
+
+This page describes the process of compiling **custom ROS packages** against dependencies from a **base docker image**.
+This task is not trivial in general; however, we have prepared a set of scripts that make the process straightforward.
+The following **assumptions** apply to our situation:
+
+* We have a **base docker image** that provides our dependencies, e.g., `ctumrs/mrs_uav_system:1.5.0`,
+* We need to **cache the build artifacts** (`./build`, `./devel`, `./.catkin_tools` within the workspace) for future rebuilds of the same software,
+* We need to transport the resulting compiled software to an offline machine (a.k.a. the **robot**),
+* The **robot** has the **base docker image** loaded in its docker daemon.
+
+Our process comprises the following docker build stages:
+
+1. **Stage 1: build the workspace**
+  * Load the build cache (`./cache`) from the previous build
+  * Compile the catkin workspace
+2. 
**Stage 2: save the cache**
+  * Encapsulate the whole workspace into a transport image (`alpine:latest`)
+  * Export the image into a local directory (`./cache`)
+3. **Stage 3: export the transport image**
+  * Copy the necessary build artifacts into a transport image (`alpine:latest`)
+  * Export the transport image
+
+The compiled workspace is **transported** to the **robot** within a minimalistic `alpine`-based image, which makes it relatively small.
+The overhead of the transport image is only around 5 MB.
+On the other hand, if the workspace were packed in an image based on the **base image**, the size would grow by hundreds of megabytes.
+That is not a problem when the transport occurs through a **docker registry**.
+However, since the [Portainer](/docs/prerequisities/portainer) interface makes the upload of **archived** images very simple, we prefer to bundle the whole image into a `.tar.gz` file.
+This approach complicates the deployment in one simple way: the workspace needs to be **extracted** from the transport image and placed into a **shared volume** at runtime.
+
+## Pre-configured build pipeline
+
+A set of scripts that facilitate the build is provided at [ctu-mrs/mrs_docker](https://github.com/ctu-mrs/mrs_docker) under the `catkin_workspace_builder` folder.
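The three stages described above can be sketched as a single multi-stage `Dockerfile`. This is a minimal illustration only, not the actual `ctu-mrs/mrs_docker` pipeline: the workspace path `/etc/docker/catkin_workspace`, the ROS distribution, and the stage names are assumptions made for the example.

```dockerfile
# Stage 1: build the workspace on top of the base image with the dependencies.
FROM ctumrs/mrs_uav_system:1.5.0 AS builder
WORKDIR /etc/docker/catkin_workspace
# Restore cached build artifacts (./build, ./devel, ./.catkin_tools) from the
# previous run; on the first build this directory may simply be empty.
COPY cache/ .
COPY src/ ./src
RUN . /opt/ros/noetic/setup.sh && catkin build

# Stage 2: save the cache -- wrap the whole workspace in a throwaway image
# whose filesystem can be exported back into ./cache for the next build.
FROM alpine:latest AS cache
COPY --from=builder /etc/docker/catkin_workspace /

# Stage 3: transport image -- only the compiled artifacts, on a ~5 MB base.
FROM alpine:latest AS transport
COPY --from=builder /etc/docker/catkin_workspace/devel /catkin_workspace/devel
```

With BuildKit enabled, the cache stage can be written to a local directory via `docker build --target cache --output type=local,dest=./cache .`, and the transport stage can be bundled for the robot with `docker build --target transport -t workspace:transport . && docker save workspace:transport | gzip > workspace.tar.gz`.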