diff --git a/adminer/content.md b/adminer/content.md index 1a9eff69af5b..418e6c61696d 100644 --- a/adminer/content.md +++ b/adminer/content.md @@ -13,17 +13,17 @@ Adminer (formerly phpMinAdmin) is a full-featured database management tool writt ### Standalone ```console -$ docker run --link some_database:db -p 8080:8080 adminer +$ docker run --link some_database:db -p 8080:8080 %%IMAGE%% ``` Then you can hit `http://localhost:8080` or `http://host-ip:8080` in your browser. ### FastCGI -If you are already running a FastCGI capable web server you might prefer running adminer via FastCGI: +If you are already running a FastCGI capable web server you might prefer running Adminer via FastCGI: ```console -$ docker run --link some_database:db -p 9000:9000 adminer:fastcgi +$ docker run --link some_database:db -p 9000:9000 %%IMAGE%%:fastcgi ``` Then point your web server to port 9000 of the container. @@ -36,18 +36,18 @@ Run `docker stack deploy -c stack.yml %%REPO%%` (or `docker-compose -f stack.yml ### Loading plugins -This image bundles all official adminer plugins. You can find the list of plugins on GitHub: https://github.com/vrana/adminer/tree/master/plugins. +This image bundles all official Adminer plugins. You can find the list of plugins on GitHub: https://github.com/vrana/adminer/tree/master/plugins. 
To load plugins you can pass a list of filenames in `ADMINER_PLUGINS`: ```console -$ docker run --link some_database:db -p 8080:8080 -e ADMINER_PLUGINS='tables-filter tinymce' adminer +$ docker run --link some_database:db -p 8080:8080 -e ADMINER_PLUGINS='tables-filter tinymce' %%IMAGE%% ``` If a plugin *requires* parameters to work correctly, you will need to add a custom file to the container: ```console -$ docker run --link some_database:db -p 8080:8080 -e ADMINER_PLUGINS='login-servers' adminer +$ docker run --link some_database:db -p 8080:8080 -e ADMINER_PLUGINS='login-servers' %%IMAGE%% Unable to load plugin file "login-servers", because it has required parameters: servers Create a file "/var/www/html/plugins-enabled/login-servers.php" with the following contents to load the plugin: @@ -73,7 +73,7 @@ The image bundles all the designs that are available in the source package of ad To use a bundled design you can pass its name in `ADMINER_DESIGN`: ```console -$ docker run --link some_database:db -p 8080:8080 -e ADMINER_DESIGN='nette' adminer +$ docker run --link some_database:db -p 8080:8080 -e ADMINER_DESIGN='nette' %%IMAGE%% ``` To use a custom design you can add a file called `/var/www/html/adminer.css`. diff --git a/aerospike/content.md b/aerospike/content.md index 325fdcc60afb..b0200fde16c8 100644 --- a/aerospike/content.md +++ b/aerospike/content.md @@ -11,7 +11,7 @@ Documentation for Aerospike is available at [http://aerospike.com/docs](https:// The following will run `asd` with all the exposed ports forwarded to the host machine. ```console -$ docker run -d --name aerospike -p 3000:3000 -p 3001:3001 -p 3002:3002 -p 3003:3003 aerospike/aerospike-server +$ docker run -d --name aerospike -p 3000:3000 -p 3001:3001 -p 3002:3002 -p 3003:3003 %%IMAGE%% ``` **NOTE** Although this is the simplest method of getting Aerospike up and running, it is not the preferred method.
To properly run the container, please specify a **custom configuration** with the **access-address** defined. @@ -22,7 +22,7 @@ By default, `asd` will use the configuration file at `/etc/aerospike/aerospike.c -v :/opt/aerospike/etc -Where `` is the path to a directory containing your custom aerospike.conf file. Next, you will want to tell `asd` to use the configuration file that was just mounted by using the `--config-file` option for `aerospike/aerospike-server`: +Where `` is the path to a directory containing your custom aerospike.conf file. Next, you will want to tell `asd` to use the configuration file that was just mounted by using the `--config-file` option for `%%IMAGE%%`: --config-file /opt/aerospike/etc/aerospike.conf @@ -31,7 +31,7 @@ This will tell `asd` to use the config file at `/opt/aerospike/etc/aerospike.con A full example: ```console -$ docker run -d -v :/opt/aerospike/etc --name aerospike -p 3000:3000 -p 3001:3001 -p 3002:3002 -p 3003:3003 aerospike/aerospike-server asd --foreground --config-file /opt/aerospike/etc/aerospike.conf +$ docker run -d -v :/opt/aerospike/etc --name aerospike -p 3000:3000 -p 3001:3001 -p 3002:3002 -p 3003:3003 %%IMAGE%% asd --foreground --config-file /opt/aerospike/etc/aerospike.conf ``` ### access-address Configuration @@ -59,7 +59,7 @@ Where `` is the path to a directory containing your data files. 
A full example: ```console -$ docker run -d -v :/opt/aerospike/data --name aerospike -p 3000:3000 -p 3001:3001 -p 3002:3002 -p 3003:3003 aerospike/aerospike-server +$ docker run -d -v :/opt/aerospike/data --name aerospike -p 3000:3000 -p 3001:3001 -p 3002:3002 -p 3003:3003 %%IMAGE%% ``` ## Clustering diff --git a/alpine/content.md b/alpine/content.md index 81d75f436f67..1a86a4f67edb 100644 --- a/alpine/content.md +++ b/alpine/content.md @@ -11,7 +11,7 @@ Use like you would any other base image: ```dockerfile -FROM alpine:3.5 +FROM %%IMAGE%%:3.5 RUN apk add --no-cache mysql-client ENTRYPOINT ["mysql"] ``` diff --git a/arangodb/content.md b/arangodb/content.md index 83a2e72567d6..0e94783854c2 100644 --- a/arangodb/content.md +++ b/arangodb/content.md @@ -32,10 +32,10 @@ Furthermore, ArangoDB offers a microservice framework called [Foxx](https://www. In order to start an ArangoDB instance run ```console -unix> docker run -e ARANGO_RANDOM_ROOT_PASSWORD=1 -d --name arangodb-instance arangodb +unix> docker run -e ARANGO_RANDOM_ROOT_PASSWORD=1 -d --name arangodb-instance %%IMAGE%% ``` -Will create and launch the arangodb docker instance as background process. The Identifier of the process is printed. By default ArangoDB listen on port 8529 for request and the image includes `EXPOSE 8529`. If you link an application container it is automatically available in the linked container. See the following examples. +This will create and launch the %%IMAGE%% Docker instance as a background process. The identifier of the process is printed. By default, ArangoDB listens on port 8529 for requests and the image includes `EXPOSE 8529`. If you link an application container, it is automatically available in the linked container. See the following examples.
In order to get the IP arango listens on, run: @@ -48,7 +48,7 @@ unix> docker inspect --format '{{ .NetworkSettings.IPAddress }}' arangodb-instan In order to use the running instance from an application, link the container ```console -unix> docker run -e ARANGO_RANDOM_ROOT_PASSWORD=1 --name my-app --link arangodb-instance:db-link arangodb +unix> docker run -e ARANGO_RANDOM_ROOT_PASSWORD=1 --name my-app --link arangodb-instance:db-link %%IMAGE%% ``` This will use the instance with the name `arangodb-instance` and link it into the application container. The application container will contain environment variables @@ -66,7 +66,7 @@ These can be used to access the database. If you want to expose the port to the outside world, run ```console -unix> docker run -e ARANGO_RANDOM_ROOT_PASSWORD=1 -p 8529:8529 -d arangodb +unix> docker run -e ARANGO_RANDOM_ROOT_PASSWORD=1 -p 8529:8529 -d %%IMAGE%% ``` ArangoDB listens on port 8529 for requests and the image includes `EXPOSE @@ -95,7 +95,7 @@ The ArangoDB image provides several authentication methods which can be specifie In order to get a list of supported options, run ```console -unix> docker run -e ARANGO_RANDOM_ROOT_PASSWORD=1 arangodb arangod --help +unix> docker run -e ARANGO_RANDOM_ROOT_PASSWORD=1 %%IMAGE%% arangod --help ``` ## Persistent Data @@ -116,7 +116,7 @@ You can map the container's volumes to a directory on the host, so that the data unix> mkdir /tmp/arangodb unix> docker run -e ARANGO_RANDOM_ROOT_PASSWORD=1 -p 8529:8529 -d \ -v /tmp/arangodb:/var/lib/arangodb3 \ - arangodb + %%IMAGE%% ``` This will use the `/tmp/arangodb` directory of the host as database directory for ArangoDB inside the container. @@ -126,13 +126,13 @@ This will use the `/tmp/arangodb` directory of the host as database directory fo Alternatively you can create a container holding the data.
```console -unix> docker create --name arangodb-persist arangodb true +unix> docker create --name arangodb-persist %%IMAGE%% true ``` And use this data container in your ArangoDB container. ```console -unix> docker run -e ARANGO_RANDOM_ROOT_PASSWORD=1 --volumes-from arangodb-persist -p 8529:8529 arangodb +unix> docker run -e ARANGO_RANDOM_ROOT_PASSWORD=1 --volumes-from arangodb-persist -p 8529:8529 %%IMAGE%% ``` If you want to save a few bytes you can alternatively use [busybox](https://registry.hub.docker.com/_/busybox) or [alpine](https://registry.hub.docker.com/_/alpine) for creating the volume-only containers. Please note that you need to provide the used volumes in this case. For example diff --git a/backdrop/content.md b/backdrop/content.md index 68c7360b3776..e6c81fa27029 100644 --- a/backdrop/content.md +++ b/backdrop/content.md @@ -11,7 +11,7 @@ Backdrop CMS enables people to build highly customized websites, affordably, thr The basic pattern for starting a `%%REPO%%` instance is: ```console -$ docker run --name some-%%REPO%% --link some-mysql:mysql -d %%REPO%% +$ docker run --name some-%%REPO%% --link some-mysql:mysql -d %%IMAGE%% ``` The following environment variables are also honored for configuring your Backdrop CMS instance: @@ -28,7 +28,7 @@ The `BACKDROP_DB_NAME` **must already exist** on the given MySQL server. Check o If you'd like to be able to access the instance from the host without the container's IP, standard port mappings can be used: ```console -$ docker run --name some-%%REPO%% --link some-mysql:mysql -p 8080:80 -d %%REPO%% +$ docker run --name some-%%REPO%% --link some-mysql:mysql -p 8080:80 -d %%IMAGE%% ``` Then, access it via `http://localhost:8080` or `http://host-ip:8080` in a browser.
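The linked-container pattern above can also be sketched as a Compose/stack file. This is an illustrative sketch, not taken from the upstream README: apart from `BACKDROP_DB_NAME`, the `BACKDROP_DB_*` variable names and all credential values are assumptions.

```yaml
# Hypothetical sketch: Backdrop CMS plus MySQL, mirroring the --link example.
# Variable names other than BACKDROP_DB_NAME and all values are placeholders.
version: '3.1'

services:
  backdrop:
    image: %%IMAGE%%
    ports:
      - 8080:80
    environment:
      BACKDROP_DB_HOST: db
      BACKDROP_DB_USER: root
      BACKDROP_DB_PASSWORD: example
      BACKDROP_DB_NAME: backdrop

  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: backdrop   # ensures the BACKDROP_DB_NAME database already exists
```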
diff --git a/bonita/content.md b/bonita/content.md index ad1266ef12b3..4aae3fbd87ff 100644 --- a/bonita/content.md +++ b/bonita/content.md @@ -11,7 +11,7 @@ Bonita BPM is an open-source business process management and workflow suite crea ## Quick start ```console -$ docker run --name bonita -d -p 8080:8080 bonita +$ docker run --name bonita -d -p 8080:8080 %%IMAGE%% ``` This will start a container running the [Tomcat Bundle](http://documentation.bonitasoft.com/?page=tomcat-bundle) with Bonita BPM Engine + Bonita BPM Portal. With no environment variables specified, it's as if you had launched the bundle on your host using startup.{sh|bat} (with security hardening on REST and HTTP APIs, cf. the Security section). Bonita BPM uses an H2 database here. @@ -40,7 +40,7 @@ $ docker run --name mydbpostgres -v "$PWD"/custom_postgres/:/docker-entrypoint-i See the [official PostgreSQL documentation](https://registry.hub.docker.com/_/postgres/) for more details. ```console -$ docker run --name bonita_postgres --link mydbpostgres:postgres -d -p 8080:8080 bonita +$ docker run --name bonita_postgres --link mydbpostgres:postgres -d -p 8080:8080 %%IMAGE%% ``` ### MySQL @@ -64,13 +64,13 @@ See the [official MySQL documentation](https://registry.hub.docker.com/_/mysql/) Start your application container to link it to the MySQL container: ```console -$ docker run --name bonita_mysql --link mydbmysql:mysql -d -p 8080:8080 bonita +$ docker run --name bonita_mysql --link mydbmysql:mysql -d -p 8080:8080 %%IMAGE%% ``` ## Modify default credentials ```console -$ docker run --name=bonita -e "TENANT_LOGIN=tech_user" -e "TENANT_PASSWORD=secret" -e "PLATFORM_LOGIN=pfadmin" -e "PLATFORM_PASSWORD=pfsecret" -d -p 8080:8080 bonita +$ docker run --name=bonita -e "TENANT_LOGIN=tech_user" -e "TENANT_PASSWORD=secret" -e "PLATFORM_LOGIN=pfadmin" -e "PLATFORM_PASSWORD=pfsecret" -d -p 8080:8080 %%IMAGE%% ``` Now you can access the Bonita BPM Portal on localhost:8080/bonita and log in using: tech_user / secret
@@ -89,7 +89,7 @@ The Docker documentation is a good starting point for understanding the differen 1. Create a data directory on a suitable volume on your host system, e.g. `/my/own/datadir`. 2. Start your `%%REPO%%` container like this: - docker run --name some-%%REPO%% -v /my/own/datadir:/opt/bonita -d %%REPO%%:tag + docker run --name some-%%REPO%% -v /my/own/datadir:/opt/bonita -d %%IMAGE%%:tag The `-v /my/own/datadir:/opt/bonita` part of the command mounts the `/my/own/datadir` directory from the underlying host system as `/opt/bonita` inside the container, where Bonita will deploy the bundle and write data files by default. @@ -208,13 +208,13 @@ $ chcon -Rt svirt_sandbox_file_t /my/own/datadir - If < 7.3.0 ```console - $ docker run --name=bonita_7.2.4_postgres --link mydbpostgres:postgres -e "DB_NAME=newbonitadb" -e "DB_USER=newbonitauser" -e "DB_PASS=newbonitapass" -v "$PWD"/bonita_migration:/opt/bonita/ -d -p 8081:8080 bonita:7.2.4 + $ docker run --name=bonita_7.2.4_postgres --link mydbpostgres:postgres -e "DB_NAME=newbonitadb" -e "DB_USER=newbonitauser" -e "DB_PASS=newbonitapass" -v "$PWD"/bonita_migration:/opt/bonita/ -d -p 8081:8080 %%IMAGE%%:7.2.4 ``` - If >= 7.3.0 ```console - $ docker run --name=bonita_7.5.4_postgres --link mydbpostgres:postgres -e "DB_NAME=newbonitadb" -e "DB_USER=newbonitauser" -e "DB_PASS=newbonitapass" -d -p 8081:8080 bonita:7.5.4 + $ docker run --name=bonita_7.5.4_postgres --link mydbpostgres:postgres -e "DB_NAME=newbonitadb" -e "DB_USER=newbonitauser" -e "DB_PASS=newbonitapass" -d -p 8081:8080 %%IMAGE%%:7.5.4 ``` - Reapply specific configuration if needed, for example with a version >= 7.3.0 : @@ -264,7 +264,7 @@ This Docker image activates both static and dynamic authorization checks by defa For specific needs you can override this behavior by setting HTTP_API to true and REST_API_DYN_AUTH_CHECKS to false: ```console -$ docker run -e HTTP_API=true -e REST_API_DYN_AUTH_CHECKS=false --name bonita -d -p 8080:8080 bonita +$ docker 
run -e HTTP_API=true -e REST_API_DYN_AUTH_CHECKS=false --name bonita -d -p 8080:8080 %%IMAGE%% ``` ## Environment variables @@ -358,7 +358,7 @@ For example, you can increase the log level : echo 'sed -i "s/^org.bonitasoft.level = WARNING$/org.bonitasoft.level = FINEST/" /opt/bonita/BonitaBPMCommunity-7.5.4-Tomcat-7.0.76/server/conf/logging.properties' >> custom_bonita/bonita.sh chmod +x custom_bonita/bonita.sh - docker run --name bonita_custom -v "$PWD"/custom_bonita/:/opt/custom-init.d -d -p 8080:8080 bonita + docker run --name bonita_custom -v "$PWD"/custom_bonita/:/opt/custom-init.d -d -p 8080:8080 %%IMAGE%% Note: There are several ways to check the `bonita` logs. One of them is diff --git a/cassandra/content.md b/cassandra/content.md index 08cf62b70fa4..68b6e8546de6 100644 --- a/cassandra/content.md +++ b/cassandra/content.md @@ -13,7 +13,7 @@ Apache Cassandra is an open source distributed database management system design Starting a Cassandra instance is simple: ```console -$ docker run --name some-%%REPO%% -d %%REPO%%:tag +$ docker run --name some-%%REPO%% -d %%IMAGE%%:tag ``` ... where `some-%%REPO%%` is the name you want to assign to your container and `tag` is the tag specifying the Cassandra version you want. See the list above for relevant tags. @@ -31,7 +31,7 @@ $ docker run --name some-app --link some-%%REPO%%:%%REPO%% -d app-that-uses-cass Using the environment variables documented below, there are two cluster scenarios: instances on the same machine and instances on separate machines. For the same machine, start the instance as described above. To start other instances, just tell each new node where the first is. ```console -$ docker run --name some-%%REPO%%2 -d -e CASSANDRA_SEEDS="$(docker inspect --format='{{ .NetworkSettings.IPAddress }}' some-%%REPO%%)" %%REPO%%:tag +$ docker run --name some-%%REPO%%2 -d -e CASSANDRA_SEEDS="$(docker inspect --format='{{ .NetworkSettings.IPAddress }}' some-%%REPO%%)" %%IMAGE%%:tag ``` ... 
where `some-%%REPO%%` is the name of your original Cassandra Server container, taking advantage of `docker inspect` to get the IP address of the other container. @@ -39,7 +39,7 @@ $ docker run --name some-%%REPO%%2 -d -e CASSANDRA_SEEDS="$(docker inspect --for Or you may use the docker run --link option to tell the new node where the first is: ```console -$ docker run --name some-cassandra2 -d --link some-cassandra:cassandra cassandra:tag +$ docker run --name some-cassandra2 -d --link some-cassandra:cassandra %%IMAGE%%:tag ``` For separate machines (ie, two VMs on a cloud provider), you need to tell Cassandra what IP address to advertise to the other nodes (since the address of the container is behind the docker bridge). @@ -47,13 +47,13 @@ For separate machines (ie, two VMs on a cloud provider), you need to tell Cassan Assuming the first machine's IP address is `10.42.42.42` and the second's is `10.43.43.43`, start the first with exposed gossip port: ```console -$ docker run --name some-%%REPO%% -d -e CASSANDRA_BROADCAST_ADDRESS=10.42.42.42 -p 7000:7000 %%REPO%%:tag +$ docker run --name some-%%REPO%% -d -e CASSANDRA_BROADCAST_ADDRESS=10.42.42.42 -p 7000:7000 %%IMAGE%%:tag ``` Then start a Cassandra container on the second machine, with the exposed gossip port and seed pointing to the first machine: ```console -$ docker run --name some-%%REPO%% -d -e CASSANDRA_BROADCAST_ADDRESS=10.43.43.43 -p 7000:7000 -e CASSANDRA_SEEDS=10.42.42.42 %%REPO%%:tag +$ docker run --name some-%%REPO%% -d -e CASSANDRA_BROADCAST_ADDRESS=10.43.43.43 -p 7000:7000 -e CASSANDRA_SEEDS=10.42.42.42 %%IMAGE%%:tag ``` ## Connect to Cassandra from `cqlsh` @@ -61,13 +61,13 @@ $ docker run --name some-%%REPO%% -d -e CASSANDRA_BROADCAST_ADDRESS=10.43.43.43 The following command starts another Cassandra container instance and runs `cqlsh` (Cassandra Query Language Shell) against your original Cassandra container, allowing you to execute CQL statements against your database instance: ```console -$ 
docker run -it --link some-%%REPO%%:cassandra --rm %%REPO%% sh -c 'exec cqlsh "$CASSANDRA_PORT_9042_TCP_ADDR"' +$ docker run -it --link some-%%REPO%%:cassandra --rm %%IMAGE%% sh -c 'exec cqlsh "$CASSANDRA_PORT_9042_TCP_ADDR"' ``` ... or (simplified to take advantage of the `/etc/hosts` entry Docker adds for linked containers): ```console -$ docker run -it --link some-%%REPO%%:cassandra --rm %%REPO%% cqlsh cassandra +$ docker run -it --link some-%%REPO%%:cassandra --rm %%IMAGE%% cqlsh cassandra ``` ... where `some-%%REPO%%` is the name of your original Cassandra Server container. @@ -147,7 +147,7 @@ The Docker documentation is a good starting point for understanding the differen 2. Start your `%%REPO%%` container like this: ```console - $ docker run --name some-%%REPO%% -v /my/own/datadir:/var/lib/cassandra -d %%REPO%%:tag + $ docker run --name some-%%REPO%% -v /my/own/datadir:/var/lib/cassandra -d %%IMAGE%%:tag ``` The `-v /my/own/datadir:/var/lib/cassandra` part of the command mounts the `/my/own/datadir` directory from the underlying host system as `/var/lib/cassandra` inside the container, where Cassandra by default will write its data files. diff --git a/centos/content.md b/centos/content.md index bd95521c29cf..105630445c7e 100644 --- a/centos/content.md +++ b/centos/content.md @@ -8,21 +8,21 @@ CentOS Linux is a community-supported distribution derived from sources freely p # CentOS image documentation -The `centos:latest` tag is always the most recent version currently available. +The `%%IMAGE%%:latest` tag is always the most recent version currently available. ## Rolling builds -The CentOS Project offers regularly updated images for all active releases. These images will be updated monthly or as needed for emergency fixes. These rolling updates are tagged with the major version number only. For example: `docker pull centos:6` or `docker pull centos:7` +The CentOS Project offers regularly updated images for all active releases. 
These images will be updated monthly or as needed for emergency fixes. These rolling updates are tagged with the major version number only. For example: `docker pull %%IMAGE%%:6` or `docker pull %%IMAGE%%:7` ## Minor tags Additionally, images with minor version tags that correspond to install media are also offered. **These images DO NOT receive updates** as they are intended to match installation ISO contents. If you choose to use these images, it is highly recommended that you include `RUN yum -y update && yum clean all` in your Dockerfile, or otherwise address any potential security concerns. To use these images, please specify the minor version tag: -For example: `docker pull centos:5.11` or `docker pull centos:6.6` +For example: `docker pull %%IMAGE%%:5.11` or `docker pull %%IMAGE%%:6.6` ## Overlayfs and yum -Recent Docker versions support the [overlayfs](https://docs.docker.com/engine/userguide/storagedriver/overlayfs-driver/) backend, which is enabled by default on most distros supporting it from Docker 1.13 onwards. On Centos 6 and 7, **that backend requires yum-plugin-ovl to be installed and enabled**; while it is installed by default in recent centos images, make it sure you retain the `plugins=1` option in `/etc/yum.conf` if you update that file; otherwise, you may encounter errors related to rpmdb checksum failure - see [Docker ticket 10180](https://github.com/docker/docker/issues/10180) for more details. +Recent Docker versions support the [overlayfs](https://docs.docker.com/engine/userguide/storagedriver/overlayfs-driver/) backend, which is enabled by default on most distros supporting it from Docker 1.13 onwards.
On CentOS 6 and 7, **that backend requires yum-plugin-ovl to be installed and enabled**; while it is installed by default in recent %%IMAGE%% images, make sure you retain the `plugins=1` option in `/etc/yum.conf` if you update that file; otherwise, you may encounter errors related to rpmdb checksum failure - see [Docker ticket 10180](https://github.com/docker/docker/issues/10180) for more details. # Package documentation @@ -30,12 +30,12 @@ By default, the CentOS containers are built using yum's `nodocs` option, which h # Systemd integration -Systemd is now included in both the centos:7 and centos:latest base containers, but it is not active by default. In order to use systemd, you will need to include text similar to the example Dockerfile below: +Systemd is now included in both the %%IMAGE%%:7 and %%IMAGE%%:latest base containers, but it is not active by default. In order to use systemd, you will need to include text similar to the example Dockerfile below: ## Dockerfile for systemd base image ```dockerfile -FROM centos:7 +FROM %%IMAGE%%:7 ENV container docker RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == \ systemd-tmpfiles-setup.service ] || rm -f $i; done); \ diff --git a/chronograf/content.md b/chronograf/content.md index 538e3938aef2..20c65cca0ab6 100644 --- a/chronograf/content.md +++ b/chronograf/content.md @@ -11,7 +11,7 @@ Chronograf is InfluxData’s open source web application. Use Chronograf with th Chronograf runs on port 8888. It can be run and accessed by exposing that port: ```console -$ docker run -p 8888:8888 chronograf +$ docker run -p 8888:8888 %%IMAGE%% ``` ### Mounting a volume @@ -21,7 +21,7 @@ The Chronograf image exposes a shared volume under `/var/lib/chronograf`, so you ```console $ docker run -p 8888:8888 \ -v $PWD:/var/lib/chronograf \ - chronograf + %%IMAGE%% ``` Modify `$PWD` to the directory where you want to store data associated with the Chronograf container.
@@ -31,7 +31,7 @@ You can also have Docker control the volume mountpoint by using a named volume. ```console $ docker run -p 8888:8888 \ -v chronograf:/var/lib/chronograf \ - chronograf + %%IMAGE%% ``` ### Using the container with InfluxDB @@ -55,7 +55,7 @@ We can now start a Chronograf container that references this database. ```console $ docker run -p 8888:8888 \ --net=influxdb - chronograf --influxdb-url=http://influxdb:8086 + %%IMAGE%% --influxdb-url=http://influxdb:8086 ``` Try combining this with Telegraf to get dashboards for your infrastructure within minutes! diff --git a/clearlinux/content.md b/clearlinux/content.md index 3b0d3221ae51..8c5384d8307d 100644 --- a/clearlinux/content.md +++ b/clearlinux/content.md @@ -4,14 +4,14 @@ This serves as the official [Clear Linux OS](https://clearlinux.org) image. %%LOGO%% -The `clearlinux:latest` tag will point to `clearlinux:base` which will track toward the latest release version of the distribution. +The `%%IMAGE%%:latest` tag will point to `%%IMAGE%%:base` which will track toward the latest release version of the distribution. This image contains the os-core and os-core-update bundles, the latter can be used to add additional Clear Linux OS components (see [here](https://clearlinux.org/documentation/swupdate_about_sw_update.html) for more details about swupd and [here](https://clearlinux.org/documentation/bundles_overview.html) for more information on bundles). The following Dockerfile will install the editors and dev-utils bundles on top of the base image ```sh -FROM clearlinux:base +FROM %%IMAGE%%:base RUN swupd bundle-add editors dev-utils ``` diff --git a/clojure/content.md b/clojure/content.md index acd5929a3d48..6011e6e10b25 100644 --- a/clojure/content.md +++ b/clojure/content.md @@ -13,7 +13,7 @@ Clojure is a dialect of the Lisp programming language. 
It is a general-purpose p Since the most common way to use Clojure is in conjunction with [Leiningen (`lein`)](http://leiningen.org/), this image assumes that's how you'll be working. The most straightforward way to use this image is to add a `Dockerfile` to an existing Leiningen/Clojure project: ```dockerfile -FROM clojure +FROM %%IMAGE%% COPY . /usr/src/app WORKDIR /usr/src/app CMD ["lein", "run"] @@ -29,7 +29,7 @@ $ docker run -it --rm --name my-running-app my-clojure-app While the above is the most straightforward example of a `Dockerfile`, it does have some drawbacks. The `lein run` command will download your dependencies, compile the project, and then run it. That's a lot of work, all of which you may not want done every time you run the image. To get around this, you can download the dependencies and compile the project ahead of time. This will significantly reduce startup time when you run your image. ```dockerfile -FROM clojure +FROM %%IMAGE%% RUN mkdir -p /usr/src/app WORKDIR /usr/src/app COPY project.clj /usr/src/app/ @@ -48,7 +48,7 @@ You can then build and run the image as above. If you have an existing Lein/Clojure project, it's fairly straightforward to compile your project into a jar from a container: ```console -$ docker run -it --rm -v "$PWD":/usr/src/app -w /usr/src/app clojure lein uberjar +$ docker run -it --rm -v "$PWD":/usr/src/app -w /usr/src/app %%IMAGE%% lein uberjar ``` This will build your project into a jar file located in your project's `target/uberjar` directory. 
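Once the uberjar is built this way, a small runtime image can run it. The following Dockerfile is a sketch only, not part of the upstream README: the jar name `my-app-standalone.jar` is a placeholder for whatever your project actually produces in `target/uberjar`, and the JRE base image is one possible choice.

```dockerfile
# Illustrative sketch: run a pre-built uberjar on a slim JRE base image.
# "my-app-standalone.jar" is a placeholder for your project's jar name.
FROM openjdk:8-jre-alpine
WORKDIR /usr/src/app
COPY target/uberjar/my-app-standalone.jar app.jar
CMD ["java", "-jar", "app.jar"]
```

Building the jar in a `clojure` container and running it on a plain JRE image keeps Leiningen and the Clojure toolchain out of the final image.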
diff --git a/composer/content.md b/composer/content.md index 756ad6e8278c..13c1eb243c0b 100644 --- a/composer/content.md +++ b/composer/content.md @@ -13,7 +13,7 @@ Run the `composer` image: ```sh docker run --rm --interactive --tty \ --volume $PWD:/app \ - composer install + %%IMAGE%% install ``` You can mount the Composer home directory from your host inside the Container to share caching and configuration files: @@ -22,7 +22,7 @@ You can mount the Composer home directory from your host inside the Container to docker run --rm --interactive --tty \ --volume $PWD:/app \ --volume $COMPOSER_HOME:/tmp \ - composer install + %%IMAGE%% install ``` By default, Composer runs as root inside the container. This can lead to permission issues on your host filesystem. You can run Composer as your local user: @@ -31,7 +31,7 @@ By default, Composer runs as root inside the container. This can lead to permiss docker run --rm --interactive --tty \ --volume $PWD:/app \ --user $(id -u):$(id -g) \ - composer install + %%IMAGE%% install ``` When you need to access private repositories, you will either need to share your configured credentials, or mount your `ssh-agent` socket inside the running container: @@ -43,7 +43,7 @@ docker run --rm --interactive --tty \ --volume $PWD:/app \ --volume $SSH_AUTH_SOCK:/ssh-auth.sock \ --env SSH_AUTH_SOCK=/ssh-auth.sock \ - composer install + %%IMAGE%% install ``` When combining the use of private repositories with running Composer as another (local) user, you might run into non-existent user errors (thrown by ssh).
To work around this, simply mount the host passwd and group files (read-only) into the container: @@ -56,7 +56,7 @@ docker run --rm --interactive --tty \ --volume /etc/group:/etc/group:ro \ --user $(id -u):$(id -g) \ --env SSH_AUTH_SOCK=/ssh-auth.sock \ - composer install + %%IMAGE%% install ``` ## Suggestions @@ -72,7 +72,7 @@ Sometimes dependencies or Composer [scripts](https://getcomposer.org/doc/article ```sh docker run --rm --interactive --tty \ --volume $PWD:/app \ - composer install --ignore-platform-reqs --no-scripts + %%IMAGE%% install --ignore-platform-reqs --no-scripts ``` - Create your own image (possibly by extending `FROM composer`). @@ -82,7 +82,7 @@ Sometimes dependencies or Composer [scripts](https://getcomposer.org/doc/article - Create your own image, and copy Composer from the official image into it: ```dockerfile - COPY --from=composer:1.5 /usr/bin/composer /usr/bin/composer + COPY --from=%%IMAGE%%:1.5 /usr/bin/composer /usr/bin/composer ``` It is highly recommended that you create a "build" image that extends from your baseline production image. Binaries such as Composer should not end up in your production environment. @@ -103,6 +103,6 @@ composer () { --volume /etc/passwd:/etc/passwd:ro \ --volume /etc/group:/etc/group:ro \ --volume $(pwd):/app \ - composer "$@" + %%IMAGE%% "$@" } ``` diff --git a/consul/content.md b/consul/content.md index 01d93db002e4..c71766d24fd7 100644 --- a/consul/content.md +++ b/consul/content.md @@ -25,7 +25,7 @@ We chose Alpine as a lightweight base with a reasonably small surface area for s Consul always runs under [dumb-init](https://github.com/Yelp/dumb-init), which handles reaping zombie processes and forwards signals on to all processes running in the container. We also use [gosu](https://github.com/tianon/gosu) to run Consul as a non-root "consul" user for better security. 
These binaries are all built by HashiCorp and signed with our [GPG key](https://www.hashicorp.com/security.html), so you can verify the signed package used to build a given base image. -Running the Consul container with no arguments will give you a Consul server in [development mode](https://www.consul.io/docs/agent/options.html#_dev). The provided entry point script will also look for Consul subcommands and run `consul` as the correct user and with that subcommand. For example, you can execute `docker run consul members` and it will run the `consul members` command inside the container. The entry point also adds some special configuration options as detailed in the sections below when running the `agent` subcommand. Any other command gets `exec`-ed inside the container under `dumb-init`. +Running the Consul container with no arguments will give you a Consul server in [development mode](https://www.consul.io/docs/agent/options.html#_dev). The provided entry point script will also look for Consul subcommands and run `consul` as the correct user and with that subcommand. For example, you can execute `docker run %%IMAGE%% members` and it will run the `consul members` command inside the container. The entry point also adds some special configuration options as detailed in the sections below when running the `agent` subcommand. Any other command gets `exec`-ed inside the container under `dumb-init`. The container exposes `VOLUME /consul/data`, which is a path where Consul will place its persisted state. This isn't used in any way when running in development mode. For client agents, this stores some information about the cluster and the client's health checks in case the container is restarted. For server agents, this stores the client information plus snapshots and data related to the consensus algorithm and other state like Consul's key/value store and catalog.
For servers it is highly desirable to keep this volume's data around when restarting containers to recover from outage scenarios. If this is bind mounted then ownership will be changed to the consul user when the container starts. @@ -38,22 +38,22 @@ The entry point also includes a small utility to look up a client or bind addres ## Running Consul for Development ```console -$ docker run -d --name=dev-consul consul +$ docker run -d --name=dev-consul %%IMAGE%% ``` This runs a completely in-memory Consul server agent with default bridge networking and no services exposed on the host, which is useful for development but should not be used in production. For example, if that server is running at internal address 172.17.0.2, you can run a three node cluster for development by starting up two more instances and telling them to join the first node. ```console -$ docker run -d consul agent -dev -join=172.17.0.2 +$ docker run -d %%IMAGE%% agent -dev -join=172.17.0.2 ... server 2 starts -$ docker run -d consul agent -dev -join=172.17.0.2 +$ docker run -d %%IMAGE%% agent -dev -join=172.17.0.2 ... server 3 starts ``` Then we can query for all the members in the cluster by running a Consul CLI command in the first container: ```console -$ docker exec -t dev-consul consul members +$ docker exec -t dev-consul %%IMAGE%% members Node Address Status Type Build Protocol DC 579db72c1ae1 172.17.0.3:8301 alive server 0.6.3 2 dc1 93fe2309ef19 172.17.0.4:8301 alive server 0.6.3 2 dc1 @@ -67,7 +67,7 @@ Development mode also starts a version of Consul's web UI on port 8500. This can ## Running Consul Agent in Client Mode ```console -$ docker run -d --net=host -e 'CONSUL_LOCAL_CONFIG={"leave_on_terminate": true}' consul agent -bind= -retry-join= +$ docker run -d --net=host -e 'CONSUL_LOCAL_CONFIG={"leave_on_terminate": true}' %%IMAGE%% agent -bind= -retry-join= ==> Starting Consul agent... ==> Starting Consul agent RPC... ==> Consul agent running! @@ -122,7 +122,7 @@ consul.service.consul. 
0 IN A 66.175.220.234 If you want to expose the Consul interfaces to other containers via a different network, such as the bridge network, use the `-client` option for Consul: ```console -docker run -d --net=host consul agent -bind= -client= -retry-join= +docker run -d --net=host %%IMAGE%% agent -bind= -client= -retry-join= ==> Starting Consul agent... ==> Starting Consul agent RPC... ==> Consul agent running! @@ -141,7 +141,7 @@ With this configuration, Consul's client interfaces will be bound to the bridge ## Running Consul Agent in Server Mode ```console -$ docker run -d --net=host -e 'CONSUL_LOCAL_CONFIG={"skip_leave_on_interrupt": true}' consul agent -server -bind= -retry-join= -bootstrap-expect= +$ docker run -d --net=host -e 'CONSUL_LOCAL_CONFIG={"skip_leave_on_interrupt": true}' %%IMAGE%% agent -server -bind= -retry-join= -bootstrap-expect= ``` This runs a Consul server agent sharing the host's network. All of the network considerations and behavior we covered above for the client agent also apply to the server agent. A single server on its own won't be able to form a quorum and will be waiting for other servers to join. @@ -161,7 +161,7 @@ By default, Consul's DNS server is exposed on port 8600. Because this is cumbers Here's an example: ```console -$ docker run -d --net=host -e 'CONSUL_ALLOW_PRIVILEGED_PORTS=' consul -dns-port=53 -recursor=8.8.8.8 +$ docker run -d --net=host -e 'CONSUL_ALLOW_PRIVILEGED_PORTS=' %%IMAGE%% -dns-port=53 -recursor=8.8.8.8 ``` This example also includes a recursor configuration that uses Google's DNS servers for non-Consul lookups. You may want to adjust this based on your particular DNS configuration. If you are binding Consul's client interfaces to the host's loopback address, then you should be able to configure your host's `resolv.conf` to route DNS requests to Consul by including "127.0.0.1" as the primary DNS server. 
This would expose Consul's DNS to all applications running on the host, but due to Docker's built-in DNS server, you can't point to this directly from inside your containers; Docker will issue an error message if you attempt to do this. You must configure Consul to listen on a non-localhost address that is reachable from within other containers. @@ -169,7 +169,7 @@ This example also includes a recursor configuration that uses Google's DNS serve Once you bind Consul's client interfaces to the bridge or other network, you can use the `--dns` option in your *other containers* in order for them to use Consul's DNS server, mapped to port 53. Here's an example: ```console -$ docker run -d --net=host -e 'CONSUL_ALLOW_PRIVILEGED_PORTS=' consul agent -dns-port=53 -recursor=8.8.8.8 -bind= +$ docker run -d --net=host -e 'CONSUL_ALLOW_PRIVILEGED_PORTS=' %%IMAGE%% agent -dns-port=53 -recursor=8.8.8.8 -bind= ``` Now start another container and point it at Consul's DNS, using the bridge address of the host: diff --git a/convertigo/content.md b/convertigo/content.md index 30924dfe28d2..adc302e87bcd 100644 --- a/convertigo/content.md +++ b/convertigo/content.md @@ -18,7 +18,7 @@ Convertigo Community edition brought to you by Convertigo SA (Paris & San Franci ## Quick start ```console -$ docker run --name C8O -d -p 28080:28080 convertigo +$ docker run --name C8O -d -p 28080:28080 %%IMAGE%% ``` This will start a container running the minimum Convertigo MBaaS server. Convertigo MBaaS uses the image's **/workspace** directory to store configuration files and deployed projects as a Docker volume. @@ -38,7 +38,7 @@ $ docker run -d --name fullsync couchdb:1.6.1 Then launch Convertigo and link it to the running 'fullsync' container. Convertigo MBaaS server will automatically use it as its fullsync repository.
```console -$ docker run -d --name C8O-MBAAS --link fullsync:couchdb -p 28080:28080 convertigo +$ docker run -d --name C8O-MBAAS --link fullsync:couchdb -p 28080:28080 %%IMAGE%% ``` ## Link Convertigo to a Billing & Analytics database @@ -61,7 +61,7 @@ convertigo Projects are deployed in the Convertigo workspace, a simple file system directory. You can map the docker container **/workspace** to your physical system by using: ```console -$ docker run --name C8O-MBAAS -v $(pwd):/workspace -d -p 28080:28080 convertigo +$ docker run --name C8O-MBAAS -v $(pwd):/workspace -d -p 28080:28080 %%IMAGE%% ``` You can share the same workspace among all Convertigo containers. In this case, when you deploy a project on a Convertigo container, it will be seen by others. This is the best way to build multi-instance load-balanced Convertigo server farms. @@ -83,7 +83,7 @@ These accounts can be configured through the *administration console* and saved You can change the default administration account: ```console -$ docker run -d --name C8O-MBAAS -e CONVERTIGO_ADMIN_USER=administrator -e CONVERTIGO_ADMIN_PASSWORD=s3cret -p 28080:28080 convertigo +$ docker run -d --name C8O-MBAAS -e CONVERTIGO_ADMIN_USER=administrator -e CONVERTIGO_ADMIN_PASSWORD=s3cret -p 28080:28080 %%IMAGE%% ``` ### `CONVERTIGO_TESTPLATFORM_USER` and `CONVERTIGO_TESTPLATFORM_PASSWORD` variables @@ -91,7 +91,7 @@ $ docker run -d --name C8O-MBAAS -e CONVERTIGO_ADMIN_USER=administrator -e CONVE You can lock the **testplatform** by setting the account: ```console -$ docker run -d --name C8O-MBAAS -e CONVERTIGO_TESTPLATFORM_USER=tp_user -e CONVERTIGO_TESTPLATFORM_PASSWORD=s3cret -p 28080:28080 convertigo +$ docker run -d --name C8O-MBAAS -e CONVERTIGO_TESTPLATFORM_USER=tp_user -e CONVERTIGO_TESTPLATFORM_PASSWORD=s3cret -p 28080:28080 %%IMAGE%% ``` ## `JAVA_OPTS` Environment variable @@ -101,7 +101,7 @@ Convertigo is based on a *Java* process with some default *JVM* options.
You can add any *Java JVM* options such as -Xmx or -D[something] ```console -$ docker run -d --name C8O-MBAAS -e JAVA_OPTS="-Xmx4096m -DjvmRoute=server1" -p 28080:28080 convertigo +$ docker run -d --name C8O-MBAAS -e JAVA_OPTS="-Xmx4096m -DjvmRoute=server1" -p 28080:28080 %%IMAGE%% ``` ## Preconfigured Docker compose stack diff --git a/couchdb/content.md b/couchdb/content.md index 9c35a3e62918..c2fb26e1ad28 100644 --- a/couchdb/content.md +++ b/couchdb/content.md @@ -13,7 +13,7 @@ CouchDB comes with a suite of features, such as on-the-fly document transformati ### Start a CouchDB instance ```console -$ docker run -d --name my-couchdb %%REPO%% +$ docker run -d --name my-couchdb %%IMAGE%% ``` This image includes `EXPOSE 5984` (the CouchDB port), so standard container linking will make it automatically available to the linked containers. @@ -23,7 +23,7 @@ This image includes `EXPOSE 5984` (the CouchDB port), so standard container link In order to use the running instance from an application, link the container ```console -$ docker run --name my-couchdb-app --link my-couchdb:couch %%REPO%% +$ docker run --name my-couchdb-app --link my-couchdb:couch %%IMAGE%% ``` See the [official docs](http://docs.couchdb.org/en/1.6.1/) for information on using and configuring CouchDB. @@ -33,7 +33,7 @@ See the [official docs](http://docs.couchdb.org/en/1.6.1/) for information on usi If you want to expose the port to the outside world, run ```console -$ docker run -p 5984:5984 -d %%REPO%% +$ docker run -p 5984:5984 -d %%IMAGE%% ``` CouchDB listens on port 5984 for requests and the image includes `EXPOSE 5984`. The flag `-p 5984:5984` exposes this port on the host. @@ -52,7 +52,7 @@ CouchDB uses `/usr/local/var/lib/couchdb` to store its data. This directory is m You can map the container's volumes to a directory on the host, so that the data is kept between runs of the container.
This example uses your current directory, but that is in general not the correct place to store your persistent data! ```console -$ docker run -d -v $(pwd):/usr/local/var/lib/couchdb --name my-couchdb %%REPO%% +$ docker run -d -v $(pwd):/usr/local/var/lib/couchdb --name my-couchdb %%IMAGE%% ``` ## Specifying the admin user in the environment @@ -60,7 +60,7 @@ $ docker run -d -v $(pwd):/usr/local/var/lib/couchdb --name my-couchdb %%REPO%% You can use the two environment variables `COUCHDB_USER` and `COUCHDB_PASSWORD` to set up the admin user. ```console -$ docker run -e COUCHDB_USER=admin -e COUCHDB_PASSWORD=password -d %%REPO%% +$ docker run -e COUCHDB_USER=admin -e COUCHDB_PASSWORD=password -d %%IMAGE%% ``` ## Using your own CouchDB configuration file @@ -70,7 +70,7 @@ The CouchDB configuration is specified in `.ini` files in `/usr/local/etc/couchd If you want to use a customized CouchDB configuration, you can create your configuration file in a directory on the host machine and then mount that directory as `/usr/local/etc/couchdb/local.d` inside the `%%REPO%%` container. ```console -$ docker run --name my-couchdb -v /my/custom-config-dir:/usr/local/etc/couchdb/local.d -d %%REPO%% +$ docker run --name my-couchdb -v /my/custom-config-dir:/usr/local/etc/couchdb/local.d -d %%IMAGE%% ``` You can also use `couchdb` as the base image for your own couchdb instance and provide your own version of the `local.ini` config file: diff --git a/crate/content.md b/crate/content.md index df2c0ee2f02c..a507104fa36e 100644 --- a/crate/content.md +++ b/crate/content.md @@ -19,7 +19,7 @@ The smallest CrateDB clusters can easily ingest tens of thousands of records per Spin up this Docker image like so: - $ docker run -p 4200:4200 crate + $ docker run -p 4200:4200 %%IMAGE%% Once you're up and running, head on over to [the introductory docs](https://crate.io/docs/stable/hello.html).
diff --git a/drupal/content.md b/drupal/content.md index d56bcca23a7a..62db0836b8d3 100644 --- a/drupal/content.md +++ b/drupal/content.md @@ -11,13 +11,13 @@ Drupal is a free and open-source content-management framework written in PHP and The basic pattern for starting a `%%REPO%%` instance is: ```console -$ docker run --name some-%%REPO%% -d %%REPO%% +$ docker run --name some-%%REPO%% -d %%IMAGE%% ``` If you'd like to be able to access the instance from the host without the container's IP, standard port mappings can be used: ```console -$ docker run --name some-%%REPO%% -p 8080:80 -d %%REPO%% +$ docker run --name some-%%REPO%% -p 8080:80 -d %%IMAGE%% ``` Then, access it via `http://localhost:8080` or `http://host-ip:8080` in a browser. @@ -29,7 +29,7 @@ When first accessing the webserver provided by this image, it will go through a ## MySQL ```console -$ docker run --name some-%%REPO%% --link some-mysql:mysql -d %%REPO%% +$ docker run --name some-%%REPO%% --link some-mysql:mysql -d %%IMAGE%% ``` - Database type: `MySQL, MariaDB, or equivalent` @@ -39,7 +39,7 @@ $ docker run --name some-%%REPO%% --link some-mysql:mysql -d %%REPO%% ## PostgreSQL ```console -$ docker run --name some-%%REPO%% --link some-postgres:postgres -d %%REPO%% +$ docker run --name some-%%REPO%% --link some-postgres:postgres -d %%IMAGE%% ``` - Database type: `PostgreSQL` @@ -55,7 +55,7 @@ There is consensus that `/var/www/html/modules`, `/var/www/html/profiles`, and ` If using bind-mounts, one way to accomplish pre-seeding your local `sites` directory would be something like the following: ```console -$ docker run --rm %%REPO%% tar -cC /var/www/html/sites . | tar -xC /path/on/host/sites +$ docker run --rm %%IMAGE%% tar -cC /var/www/html/sites . 
| tar -xC /path/on/host/sites ``` This can then be bind-mounted into a new container: @@ -66,19 +66,20 @@ $ docker run --name some-%%REPO%% --link some-postgres:postgres -d \ -v /path/on/host/profiles:/var/www/html/profiles \ -v /path/on/host/sites:/var/www/html/sites \ -v /path/on/host/themes:/var/www/html/themes \ - %%REPO%% + %%IMAGE%% ``` Another solution using Docker Volumes: ```console $ docker volume create %%REPO%%-sites -$ docker run --rm -v %%REPO%%-sites:/temporary/sites %%REPO%% cp -aRT /var/www/html/sites /temporary/sites +$ docker run --rm -v %%REPO%%-sites:/temporary/sites %%IMAGE%% cp -aRT /var/www/html/sites /temporary/sites $ docker run --name some-%%REPO%% --link some-postgres:postgres -d \ -v %%REPO%%-modules:/var/www/html/modules \ -v %%REPO%%-profiles:/var/www/html/profiles \ -v %%REPO%%-sites:/var/www/html/sites \ -v %%REPO%%-themes:/var/www/html/themes \ + %%IMAGE%% ``` ## %%STACK%% diff --git a/eclipse-mosquitto/content.md b/eclipse-mosquitto/content.md index bd787c3266c2..ddba1c9b1ffb 100644 --- a/eclipse-mosquitto/content.md +++ b/eclipse-mosquitto/content.md @@ -19,7 +19,7 @@ Three directories have been created in the image to be used for configuration, p When running the image, the default configuration values are used. To use a custom configuration file, mount a **local** configuration file to `/mosquitto/config/mosquitto.conf` ```console -$ docker run -it -p 1883:1883 -p 9001:9001 -v mosquitto.conf:/mosquitto/config/mosquitto.conf eclipse-mosquitto +$ docker run -it -p 1883:1883 -p 9001:9001 -v mosquitto.conf:/mosquitto/config/mosquitto.conf %%IMAGE%% ``` Configuration can be changed to: @@ -40,7 +40,7 @@ i.e. 
add the following to `mosquitto.conf`: Run a container using the new image: ```console -$ docker run -it -p 1883:1883 -p 9001:9001 -v mosquitto.conf:/mosquitto/config/mosquitto.conf -v /mosquitto/data -v /mosquitto/log eclipse-mosquitto +$ docker run -it -p 1883:1883 -p 9001:9001 -v mosquitto.conf:/mosquitto/config/mosquitto.conf -v /mosquitto/data -v /mosquitto/log %%IMAGE%% ``` **Note**: if the mosquitto configuration (mosquitto.conf) was modified to use non-default ports, the docker run command will need to be updated to expose the ports that have been configured. diff --git a/eggdrop/content.md b/eggdrop/content.md index f69544f317e0..b4d9e54fab50 100644 --- a/eggdrop/content.md +++ b/eggdrop/content.md @@ -11,7 +11,7 @@ Eggdrop is the world's most popular Open Source IRC bot, designed for flexibilit To run this container the first time, you'll need to pass in, at minimum, a nickname and server via environment variables. A docker run command similar to ```console -$ docker run -ti -e NICK=FooBot -e SERVER=irc.freenode.net -v /path/for/host/data:/home/eggdrop/eggdrop/data eggdrop +$ docker run -ti -e NICK=FooBot -e SERVER=irc.freenode.net -v /path/for/host/data:/home/eggdrop/eggdrop/data %%IMAGE%% ``` should be used. This will modify the appropriate values within the config file, then start your bot with the nickname FooBot and connect it to irc.freenode.net. These variables are only needed for your first run; after the first use, you can edit the config file directly. Additional configuration options are listed in the following sections. @@ -43,13 +43,13 @@ This variable sets the nickname used by eggdrop. After the first use, you should After running the eggdrop container for the first time, the configuration file, user file and channel file will all be available inside the container at /home/eggdrop/eggdrop/data/. NOTE! These files are only as persistent as the container they exist in.
If you expect to use a different container over the course of using the Eggdrop docker image (intentionally or not) you will want to create a persistent data store. The easiest way to do this is to mount a directory on your host machine to /home/eggdrop/eggdrop/data. If you do this prior to your first run, you can easily edit the eggdrop configuration file on the host. Otherwise, you can also drop in existing config, user, or channel files into the mounted directory for use in the eggdrop container. You'll also likely want to daemonize eggdrop (ie, run it in the background). To do this, start your container with something similar to ```console -$ docker run -i -e NICK=FooBot -e SERVER=irc.freenode.net -v /path/to/eggdrop/files:/home/eggdrop/eggdrop/data -d eggdrop +$ docker run -i -e NICK=FooBot -e SERVER=irc.freenode.net -v /path/to/eggdrop/files:/home/eggdrop/eggdrop/data -d %%IMAGE%% ``` If you provide your own config file, specify it as the argument to the docker container: ```console -$ docker run -i -v /path/to/eggdrop/files:/home/eggdrop/eggdrop/data -d eggdrop mybot.conf +$ docker run -i -v /path/to/eggdrop/files:/home/eggdrop/eggdrop/data -d %%IMAGE%% mybot.conf ``` Any config file used with docker MUST end in .conf, such as eggdrop.conf or mybot.conf diff --git a/elixir/content.md b/elixir/content.md index 23fc1d08ea0b..a40b32fba15c 100644 --- a/elixir/content.md +++ b/elixir/content.md @@ -13,14 +13,14 @@ Elixir leverages the Erlang VM, known for running low-latency, distributed and f ## Run it as the REPL ```console -➸ docker run -it --rm elixir +➸ docker run -it --rm %%IMAGE%% Erlang/OTP 18 [erts-7.2.1] [source] [64-bit] [smp:8:8] [async-threads:10] [hipe] [kernel-poll:false] Interactive Elixir (1.2.1) - press Ctrl+C to exit (type h() ENTER for help) iex(1)> System.version "1.2.1" iex(2)> -➸ docker run -it --rm -h elixir.local elixir iex --sname snode +➸ docker run -it --rm -h elixir.local %%IMAGE%% iex --sname snode Erlang/OTP 18 [erts-7.2.1] [source] 
[64-bit] [smp:8:8] [async-threads:10] [hipe] [kernel-poll:false] Interactive Elixir (1.2.1) - press Ctrl+C to exit (type h() ENTER for help) @@ -34,5 +34,5 @@ iex(snode@elixir)2> :c.uptime ## Run a single Elixir exs script ```console -$ docker run -it --rm --name %%REPO%%-inst1 -v "$PWD":/usr/src/myapp -w /usr/src/myapp %%REPO%% elixir your-escript.exs +$ docker run -it --rm --name elixir-inst1 -v "$PWD":/usr/src/myapp -w /usr/src/myapp %%IMAGE%% elixir your-escript.exs ``` diff --git a/erlang/content.md b/erlang/content.md index 18601839b9cb..7f1b54f8ad8c 100644 --- a/erlang/content.md +++ b/erlang/content.md @@ -11,7 +11,7 @@ Erlang is a programming language used to build massively scalable soft real-time ## Run it as the REPL ```console -➸ docker run -it --rm erlang +➸ docker run -it --rm %%IMAGE%% Erlang/OTP 20 [erts-9.0] [source] [64-bit] [smp:8:8] [ds:8:8:10] [async-threads:10] [hipe] [kernel-poll:false] Eshell V9.0 (abort with ^G) @@ -30,7 +30,7 @@ User switch command q - quit erlang ? | h - this message --> q -➸ docker run -it --rm -h erlang.local erlang erl -name snode@erlang.local +➸ docker run -it --rm -h erlang.local %%IMAGE%% erl -name snode@erlang.local Erlang/OTP 20 [erts-9.0] [source] [64-bit] [smp:8:8] [ds:8:8:10] [async-threads:10] [hipe] [kernel-poll:false] Eshell V9.0 (abort with ^G) @@ -44,5 +44,5 @@ User switch command ## Run a single Erlang escript ```console -$ docker run -it --rm --name %%REPO%%-inst1 -v "$PWD":/usr/src/myapp -w /usr/src/myapp %%REPO%% escript your-escript.erl +$ docker run -it --rm --name %%REPO%%-inst1 -v "$PWD":/usr/src/myapp -w /usr/src/myapp %%IMAGE%% escript your-escript.erl ``` diff --git a/fedora/content.md b/fedora/content.md index 7a22588afae1..eb379126a2e1 100644 --- a/fedora/content.md +++ b/fedora/content.md @@ -4,8 +4,8 @@ This image serves as the `official Fedora image` for the [Fedora Distribution](h %%LOGO%% -The `fedora:latest` tag will always point to the latest stable release. 
+The `%%IMAGE%%:latest` tag will always point to the latest stable release. This image is a relatively small footprint in comparison to a standard Fedora installation. This image is generated in the [Fedora Build System](http://koji.fedoraproject.org/koji/) and is built from [this kickstart file](https://git.fedorahosted.org/cgit/spin-kickstarts.git/tree/fedora-docker-base.ks). -[Fedora Rawhide](https://fedoraproject.org/wiki/Releases/Rawhide) is available via `fedora:rawhide` and any specific version of Fedora as `fedora:$version` (example: `fedora:23`). +[Fedora Rawhide](https://fedoraproject.org/wiki/Releases/Rawhide) is available via `%%IMAGE%%:rawhide` and any specific version of Fedora as `%%IMAGE%%:$version` (example: `%%IMAGE%%:23`). diff --git a/flink/content.md b/flink/content.md index c636f39029e0..bbcbe1516b8b 100644 --- a/flink/content.md +++ b/flink/content.md @@ -15,7 +15,7 @@ Learn more about Flink at [https://flink.apache.org/](https://flink.apache.org/) To run a single Flink local cluster: ```console -$ docker run --name flink_local -p 8081:8081 -t flink local +$ docker run --name flink_local -p 8081:8081 -t %%IMAGE%% local ``` Then with a web browser go to `http://localhost:8081/` to see the Flink Web Dashboard (adjust the hostname for your Docker host). @@ -23,7 +23,7 @@ Then with a web browser go to `http://localhost:8081/` to see the Flink Web Dash To use Flink, you can submit a job to the cluster using the Web UI or you can also do it from a different Flink container, for example: ```console -$ docker run --rm -t flink flink run -m -c +$ docker run --rm -t %%IMAGE%% flink run -m -c ``` ## Running a JobManager or a TaskManager @@ -31,13 +31,13 @@ $ docker run --rm -t flink flink run -m -c /*.log`, where `` is the port number that server is listening on (11345 by default). If you run and mount multiple containers using the same default port and same host side directory, then they will collide and attempt writing to the same file. 
If you want to run multiple gzservers on the same docker host, then a bit more clever volume mounting of `~/.gazebo/` subfolders would be required. @@ -62,13 +62,13 @@ In this short example, we'll spin up a new container running gazebo server, conn > First launch a gazebo server with a mounted volume for logging and name the container gazebo: ```console -$ docker run -d -v="/tmp/.gazebo/:/root/.gazebo/" --name=gazebo gazebo +$ docker run -d -v="/tmp/.gazebo/:/root/.gazebo/" --name=gazebo %%IMAGE%% ``` > Now open a new bash session in the container using the same entrypoint to configure the environment. Then download the double_pendulum model and load it into the simulation. ```console -$ docker exec -it gazebo bash +$ docker exec -it %%IMAGE%% bash $ apt-get update && apt-get install -y curl $ curl -o double_pendulum.sdf http://models.gazebosim.org/double_pendulum_with_base/model-1_4.sdf $ gz model --model-name double_pendulum --spawn-file double_pendulum.sdf diff --git a/gcc/content.md b/gcc/content.md index 0a8be1198f75..f9d654910971 100644 --- a/gcc/content.md +++ b/gcc/content.md @@ -13,7 +13,7 @@ The GNU Compiler Collection (GCC) is a compiler system produced by the GNU Proje The most straightforward way to use this image is to use a gcc container as both the build and runtime environment. In your `Dockerfile`, writing something along the lines of the following will compile and run your project: ```dockerfile -FROM gcc:4.9 +FROM %%IMAGE%%:4.9 COPY . /usr/src/myapp WORKDIR /usr/src/myapp RUN gcc -o myapp main.c @@ -32,11 +32,11 @@ $ docker run -it --rm --name my-running-app my-gcc-app There may be occasions where it is not appropriate to run your app inside a container. 
To compile, but not run your app inside the Docker instance, you can write something like: ```console -$ docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp gcc:4.9 gcc -o myapp myapp.c +$ docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp %%IMAGE%%:4.9 gcc -o myapp myapp.c ``` This will add your current directory, as a volume, to the container, set the working directory to the volume, and run the command `gcc -o myapp myapp.c`. This tells gcc to compile the code in `myapp.c` and output the executable to myapp. Alternatively, if you have a `Makefile`, you can instead run the `make` command inside your container: ```console -$ docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp gcc:4.9 make +$ docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp %%IMAGE%%:4.9 make ``` diff --git a/geonetwork/content.md b/geonetwork/content.md index 7e43a15586db..dde39f87b318 100644 --- a/geonetwork/content.md +++ b/geonetwork/content.md @@ -19,7 +19,7 @@ The project is part of the Open Source Geospatial Foundation ( [OSGeo](http://ww This command will start a debian-based container, running a Tomcat web server, with a geonetwork war deployed on the server: ```console -$ docker run --name some-%%REPO%% -d %%REPO%% +$ docker run --name some-%%REPO%% -d %%IMAGE%% ``` ## Publish port @@ -27,7 +27,7 @@ $ docker run --name some-%%REPO%% -d %%REPO%% Geonetwork listens on port `8080`. If you want to access the container at the host, **you must publish this port**. For instance, this will redirect all the container traffic on port 8080 to the same port on the host: ```console -$ docker run --name some-%%REPO%% -d -p 8080:8080 %%REPO%% +$ docker run --name some-%%REPO%% -d -p 8080:8080 %%IMAGE%% ``` Then, if you are running docker on Linux, you may access geonetwork at http://localhost:8080/geonetwork. Otherwise, replace `localhost` with the address of your docker machine.
@@ -41,7 +41,7 @@ By default, geonetwork sets the data directory on `/usr/local/tomcat/webapps/geo For instance, to set the data directory to `/var/lib/geonetwork_data`: ```console -$ docker run --name some-%%REPO%% -d -p 8080:8080 -e DATA_DIR=/var/lib/geonetwork_data %%REPO%% +$ docker run --name some-%%REPO%% -d -p 8080:8080 -e DATA_DIR=/var/lib/geonetwork_data %%IMAGE%% ``` ## Persist data @@ -49,7 +49,7 @@ $ docker run --name some-%%REPO%% -d -p 8080:8080 -e DATA_DIR=/var/lib/geonetwor If you want the data directory to live beyond restarts, or even destruction of the container, you can mount a directory from the docker engine's host into the container. - `-v :`. For instance, this will mount the host directory `/host/geonetwork-docker` into `/var/lib/geonetwork_data` on the container: ```console -$ docker run --name some-%%REPO%% -d -p 8080:8080 -e DATA_DIR=/var/lib/geonetwork_data -v /host/geonetwork-docker:/var/lib/geonetwork_data %%REPO%% +$ docker run --name some-%%REPO%% -d -p 8080:8080 -e DATA_DIR=/var/lib/geonetwork_data -v /host/geonetwork-docker:/var/lib/geonetwork_data %%IMAGE%% ``` ## %%STACK%% diff --git a/ghost/content.md b/ghost/content.md index 037e5abedb42..aeaff918b83c 100644 --- a/ghost/content.md +++ b/ghost/content.md @@ -11,7 +11,7 @@ Ghost is a free and open source blogging platform written in JavaScript and dist This will start a Ghost instance listening on the default Ghost port of 2368. ```console -$ docker run -d --name some-ghost ghost +$ docker run -d --name some-ghost %%IMAGE%% ``` ## Custom port @@ -19,7 +19,7 @@ $ docker run -d --name some-ghost ghost If you'd like to be able to access the instance from the host without the container's IP, standard port mappings can be used: ```console -$ docker run -d --name some-ghost -p 3001:2368 ghost +$ docker run -d --name some-ghost -p 3001:2368 %%IMAGE%% ``` Then, access it via `http://localhost:3001` or `http://host-ip:3001` in a browser. @@ -31,13 +31,13 @@ Mount your existing content.
In this example we also use the Alpine base image. ### Ghost 1.x.x ```console -$ docker run -d --name some-ghost -p 3001:2368 -v /path/to/ghost/blog:/var/lib/ghost/content ghost:1-alpine +$ docker run -d --name some-ghost -p 3001:2368 -v /path/to/ghost/blog:/var/lib/ghost/content %%IMAGE%%:1-alpine ``` ### Ghost 0.11.xx ```console -$ docker run -d --name some-ghost -p 3001:2368 -v /path/to/ghost/blog:/var/lib/ghost ghost:0.11-alpine +$ docker run -d --name some-ghost -p 3001:2368 -v /path/to/ghost/blog:/var/lib/ghost %%IMAGE%%:0.11-alpine ``` ### Breaking change @@ -56,7 +56,7 @@ This Docker image for Ghost uses SQLite. There is nothing special to configure. Alternatively you can use a [data container](http://docs.docker.com/engine/tutorials/dockervolumes/) that has a volume that points to `/var/lib/ghost/content` (or /var/lib/ghost for 0.11.x) and then reference it: ```console -$ docker run -d --name some-ghost --volumes-from some-ghost-data ghost +$ docker run -d --name some-ghost --volumes-from some-ghost-data %%IMAGE%% ``` ## What is the Node.js version? diff --git a/gradle/content.md b/gradle/content.md index f8818dd994dd..14ee5bfcdf2f 100644 --- a/gradle/content.md +++ b/gradle/content.md @@ -12,6 +12,6 @@ Note that if you are mounting a volume and the uid running Docker is not `1000`, Run this from the directory of the Gradle project you want to build. -`docker run --rm -v "$PWD":/home/gradle/project -w /home/gradle/project gradle:latest gradle ` +`docker run --rm -v "$PWD":/home/gradle/project -w /home/gradle/project %%IMAGE%% gradle ` **Note: Java 9 support is experimental** diff --git a/groovy/content.md b/groovy/content.md index f5e3926db319..6bb671e6b794 100644 --- a/groovy/content.md +++ b/groovy/content.md @@ -14,7 +14,7 @@ Note that if you are mounting a volume and the uid running Docker is not `1000`, ## Running a Groovy script -`docker run --rm -v "$PWD":/home/groovy/scripts -w /home/groovy/scripts groovy groovy