Problem

Outrigger DNS configures requests as follows:

- All requests for the `.vm` TLD are routed to the Docker Machine bridge IP.
- DNSDock owns port 53, listening for all DNS requests, and looks up the container IP address.

Because of this flow, connecting to a container via this DNS lookup requires that the container be on the "bridge" network, hence our ubiquitous use of `network_mode: bridge` in docker-compose configuration to override docker-compose's default behavior of generating a user-defined network.

Unfortunately, a user-defined network and the bridge network cannot coexist. While it's possible to imagine a change to enable that in Docker itself, the whole point of user-defined networks is to create a built-in firewall mechanism, whereas the bridge network allows containers to cross-talk as long as they know a way to find each other.
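For reference, a minimal sketch of the current pattern; the image name and DNSDock labels are illustrative assumptions, not a settled convention:

```yaml
# Sketch of the current approach: the service is forced onto the default
# bridge network so DNSDock can resolve it. Image and label values are
# hypothetical.
version: '2.1'

services:
  web:
    image: outrigger/apache-php:php70   # hypothetical image
    network_mode: bridge                # override docker-compose's default user-defined network
    labels:
      com.dnsdock.name: web             # DNSDock uses these labels to build the
      com.dnsdock.image: myproject      # web.myproject.vm hostname
```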
Why do we want user-defined networks?
- Links (which we currently use to facilitate internal networking between containers) are officially deprecated and could conceivably go away in the future. (A migration sketch follows this list.)
- The bridge network only applies to a single host, meaning our project configuration is not compatible with any multi-host production environment.
- We are blocked from providing network security learning opportunities.
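As a rough sketch of what moving off links could look like for a simple two-service project (service and image names are hypothetical):

```yaml
# Before: legacy links provided name resolution between containers.
# services:
#   php:
#     links:
#       - db
#
# After: a user-defined network provides the same name resolution via
# Docker's embedded DNS, without links' env-var side effects.
version: '2.1'

services:
  php:
    image: outrigger/apache-php:php70   # hypothetical image
    networks:
      - backend
  db:
    image: mariadb:10.1
    networks:
      - backend

networks:
  backend: {}
```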
How should project networking be set up?
Take a Drupal Stack example with SSL Termination, Varnish, Solr, Redis, Apache, PHP-FPM, and MariaDB.
- Apache/PHP-FPM: Web network (if separate containers)
- PHP-FPM/MariaDB: PHP/DB network
- PHP-FPM/Redis: PHP/Object Cache network
- PHP-FPM/Solr: PHP/Solr network
- Varnish/PHP-FPM: HTTP Requests network
- SSL Termination/Varnish: HTTP Proxies network
This Drupal stack illustrates strong point-to-point networks with no cross-talk beyond what's strictly necessary for the ideal traffic/control flows (see the sketch below). Our build container would be spun up and attached to all of these networks so it can reach the entire set of services.
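A hedged docker-compose sketch of this topology might look like the following. Image names and network names are illustrative assumptions, and Apache/PHP-FPM are assumed to share one container, so the optional Web network is omitted:

```yaml
# Sketch of the point-to-point topology above. Each network joins exactly
# the services that need to talk; nothing else can reach across.
version: '2.1'

services:
  ssl:
    image: outrigger/ssl-termination   # hypothetical image
    networks: [http-proxies]
  varnish:
    image: outrigger/varnish           # hypothetical image
    networks: [http-proxies, http-requests]
  web:
    image: outrigger/apache-php:php70  # hypothetical: Apache + PHP-FPM combined
    networks: [http-requests, php-db, php-cache, php-solr]
  db:
    image: mariadb:10.1
    networks: [php-db]
  redis:
    image: redis:3.2
    networks: [php-cache]
  solr:
    image: outrigger/solr              # hypothetical image
    networks: [php-solr]
  build:
    image: outrigger/build:php70       # hypothetical build container
    # Attached to every network so it can reach all services.
    networks: [http-proxies, http-requests, php-db, php-cache, php-solr]

networks:
  http-proxies: {}
  http-requests: {}
  php-db: {}
  php-cache: {}
  php-solr: {}
```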
Notes
- Define a host ingress network for each project. (This contradicts the prod-friendly many-small-networks example above; a sketch follows this list.)
- Make rig routing dynamic so it respects all the available networks. (This might require a rig daemon process, or yet another command to run to start work?)
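For the ingress idea, a minimal sketch might declare one shared network per project that the routing layer (DNSDock/rig) joins, while service-to-service networks stay private. The network name here is an assumption:

```yaml
# Hypothetical per-project ingress network: one network the DNS/routing
# layer joins; internal networks remain unreachable from outside.
version: '2.1'

services:
  varnish:
    image: outrigger/varnish   # hypothetical image
    networks:
      - ingress        # reachable from the routing layer
      - http-requests  # private link to the web container

networks:
  ingress:
    external:
      name: myproject_ingress  # hypothetical; created once per project,
                               # e.g. `docker network create myproject_ingress`
  http-requests: {}
```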
Links' propagation of environment variables recently either caused trouble or misdirected troubleshooting for one developer, pointing to exactly the kind of unintended side effects of links that we don't want.
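To illustrate the side effect, a sketch of what a legacy link injects; the variable names follow Docker's documented legacy-link pattern, and the service names are hypothetical:

```yaml
# With a legacy link, Docker injects variables like these into the php
# container (per Docker's documented legacy-link behavior):
#   DB_PORT_3306_TCP_ADDR=172.17.0.2
#   DB_PORT_3306_TCP_PORT=3306
#   DB_ENV_MYSQL_ROOT_PASSWORD=example   # secrets from the linked container leak in
services:
  php:
    image: outrigger/apache-php:php70    # hypothetical image
    links:
      - db
  db:
    image: mariadb:10.1
    environment:
      MYSQL_ROOT_PASSWORD: example
```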