Docker Base Containers
Here at CloudObjects we are running a number of applications and microservices as part of our backend infrastructure. Each of them runs in its own Docker container. As many of them share similar technology stacks, the obvious approach here is to create a base container image and then deploy multiple containers with different source code files for each application or microservice.
One possible and typical approach is integrating the code directly into a container image using docker build. The Dockerfile would start from the base image and add a new layer for each microservice or application. This requires either running your own private Docker registry, pushing images containing your source code to private repositories on the public Docker Hub or, if available, using a registry from your IaaS/hosting provider.
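As an illustration, a Dockerfile for this first approach could look like the following sketch. The base image name and the paths are placeholders, not our actual images:

```dockerfile
# Hypothetical example: bake the service's code into a new image layer.
# "cloudobjects/php-base" and the paths are placeholders.
FROM cloudobjects/php-base:latest
COPY ./src /var/www/html
```

Each service would get its own Dockerfile like this, and the resulting image would be pushed to whichever registry you use.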
Another approach is mounting the source code from the host system using the -v parameter when calling docker run. In this case the deployment of the code is completely separated from the runtime environment, which only makes sense in certain scenarios, for example when source code files are regularly "hot swapped", as during development. It doesn't make sense in cloud environments where containers can be scheduled to run on different instances.
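A sketch of such a bind mount; the image name and the host path are placeholders:

```shell
# Hypothetical example: mount code from the host instead of baking it in.
docker run -d \
  -v /srv/myservice/src:/var/www/html \
  cloudobjects/php-base:latest
```

The container sees whatever is currently in the host directory, which is exactly why this suits development more than scheduled cloud deployments.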
The approach that we decided upon was something different. Instead of bundling the source code with the image or deploying it separately, our build process creates deployable ZIP files from each working revision of the source code repositories. These archives can be served via a simple static webserver, which can be kept private and accessible only to the target servers on which the microservice or application should be deployed. It is of course also possible to use something like Amazon S3 and apply AWS-specific access controls to limit access to your own EC2 instances. This is the approach that we ended up using.
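One way to sketch the build step is with git archive, which can produce such a per-revision ZIP directly from a repository. The demo repository created below is a stand-in for a real source tree, and the naming scheme is an assumption:

```shell
# Sketch: create a deployable ZIP for the current revision of a repository.
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Demo setup only: a throwaway repository standing in for a real source tree.
git init -q demo-service
cd demo-service
echo "<?php echo 'hello';" > index.php
git add index.php
git -c user.email=build@example.com -c user.name=build commit -qm "initial revision"

# One archive per revision, named after the commit hash, ready to be
# uploaded to the private static webserver or S3 bucket.
rev=$(git rev-parse --short HEAD)
git archive --format=zip -o "../demo-service-$rev.zip" HEAD
```

Because the archive name encodes the revision, every build produces a distinct file and rollbacks are just a matter of pointing a container at an older URL.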
Our Docker base images are stored on the public Docker Hub. Every base image contains an installation script. When calling docker run, you can use the -e parameter to pass the URL of the ZIP file containing the deployable package as an environment variable called PACKAGE_ZIP_URL. The container automatically downloads the code when starting. To deploy a new version of the code package, you simply terminate the running container and start it again with docker run (or let your container scheduling system handle that), passing the same or a different URL (depending on how you organize your ZIPs; we typically create a new one for each revision).
The good news for you: Our base images and their underlying Dockerfiles are public, so you can reuse them if you want to! I will explain each of our base images with a separate blog post, so stay tuned for more details.