I'm using the Cookiecutter scaffold for my Django project and I follow the same workflow documented for local docker environments. I have a dev.yml compose file for the local setup, and a testing setup called test.yml which is quite different from the local one (it installs test dependencies and has a different set of services specific to testing). I'm not able to spin up docker compose environments for both local development and testing simultaneously. When I do a:
$ docker-compose -f dev.yml up -d
All the dev containers spin up fine.
After this I do a:
$ docker-compose -f test.yml up -d
It just recreates all the above containers. Should I use a different network? Or should I give different names to the apps and services in test.yml? What is the best practice for running different sets of docker compose environments for the same codebase simultaneously?
Currently, I checkout the code in a different path and spin up the test env there, which seems to work.
Use docker-compose --project-name with a different name for each environment.
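For example (the project names are just illustrative):

docker-compose -f dev.yml --project-name myproject_dev up -d
docker-compose -f test.yml --project-name myproject_test up -d

Because the project name is prefixed to every container, network and volume that Compose creates, the two stacks no longer clash and can run side by side from the same checkout.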
I run a dockerized Django-celery app which takes some user input/data from a webpage and (is supposed to) run a unix binary on the host system for subsequent data analysis. The data analysis takes a bit of time, so I use celery to run it asynchronously. The data analysis software is dockerized as well, so my django-celery worker should do os.system('docker run ...'). However, celery says docker: command not found, obviously because docker is not installed within my Django docker image. What is the best solution to this problem? I don't want to run docker within docker, because my analysis software should be allowed to use all system resources and not just the resources assigned to the Django image.
I don't want to run docker within docker, because my analysis software should be allowed to use all system resources and not just the resources assigned to the Django image.
I didn't catch the causal relationship here. In fact, we just need to add 2 steps to your Django image:
Follow Install client binaries on Linux to download the prebuilt docker client binary; your Django image will then have the docker command available.
When starting the Django container, bind mount /var/run/docker.sock. This lets the Django container talk directly to the docker daemon on the host machine and start the data-analysis container on the host. Because the analysis container is not started inside the Django container, the two have separate system resources; in other words, the analysis container's resources do not depend on those assigned to the Django container.
A sample session using a docker image that already has the docker client in it:
root@pie:~# ls /dev/fuse
/dev/fuse
root@pie:~# docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock docker /bin/sh
/ # ls /dev/fuse
ls: /dev/fuse: No such file or directory
/ # docker run --rm -it -v /dev:/dev alpine ls /dev/fuse
/dev/fuse
You can see that although the initial container does not have access to the host's /dev folder, the container whose docker run command was issued from inside it runs directly on the host and therefore has its own, separate access to the host's resources.
If the above is what you need, then it's the right solution for you. Otherwise, you will have to install the analysis tool in your Django image.
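A rough sketch of those two steps might look like this (the base image, docker client version and service name are placeholders, not taken from the question):

# Dockerfile: add the static docker CLI binary so the celery worker can call `docker run ...`
FROM python:3.11
RUN curl -fsSL https://download.docker.com/linux/static/stable/x86_64/docker-24.0.7.tgz \
    | tar -xzf - --strip-components=1 -C /usr/local/bin docker/docker

# docker-compose: hand the host's docker socket to the Django/celery container
services:
  django:
    build: .
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

The worker's os.system('docker run ...') then goes straight to the host daemon, so the analysis container runs as a sibling of the Django container rather than a child of it.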
The app has the following containers
php-fpm
nginx
local mysql
app's API
datadog container
In the dev process, many feature branches are created to add new features, such as:
app-feature1
app-feature2
app-feature3
...
I have an AWS EC2 instance per feature branch running Docker Engine v18 and docker-compose to build and run the docker stack that makes up the PHP app.
To save on operating costs, one AWS EC2 instance should host 3 feature branches at the same time. I was thinking that there should be a custom docker-compose file with special port mappings and a docker image tag for each feature branch.
The goal of this configuration is to be able to test 3 feature branches and access the apps through different ports while saving money.
I also thought about using docker networks, keeping the same ports and using an nginx to redirect traffic to the different docker network ports.
What recommendations do you give?
One straightforward way I can think of in this case is to use the .env file for your docker-compose.
Your docker-compose.yaml file will look something like this:
...
ports:
- ${NGINX_PORT}:80
...
ports:
- ${API_PORT}:80
The .env file for each stack will look something like this:
NGINX_PORT=30000
API_PORT=30001
and
NGINX_PORT=30100
API_PORT=30101
for different projects.
Note:
.env must be in the same folder as your docker-compose.yaml.
Make sure that the ports inside the .env files do not conflict with each other. You can adopt some kind of convention, for example a port prefix per feature: feature1 gets ports starting with 301, i.e. 301xx.
In this way, your docker-compose.yaml can be as generic as you may like.
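With a Compose version that supports the --env-file flag (docker-compose 1.25+), each branch's stack can then be started from the same generic file, keeping one env file per feature branch (file and project names below are illustrative):

docker-compose --project-name feature1 --env-file .env.feature1 up -d --build
docker-compose --project-name feature2 --env-file .env.feature2 up -d --build

The distinct --project-name values keep the containers, networks and volumes of the branches from colliding on the shared instance.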
You're making things harder than they have to be. Your app is containerized, so use a container system.
ECS is very easy to get going with. It's a JSON file that defines your deployment, basically analogous to docker-compose (they actually supported compose files at some point, not sure if that feature stayed around). You can deploy an arbitrary number of services with different container images. We like to use a terraform module with the image tag as a parameter, but it's easy enough to write a shell script or whatever.
Since you're trying to save money, create a single application load balancer. Each app gets a hostname, and each container gets a subpath. For short-lived feature-branch deployments, you can even deploy on Fargate and avoid an ongoing server cost.
It turns out the solution involved capabilities of docker-compose. In the docker docs the concept is called "Multiple isolated environments on a single host".
To achieve this:
I used an .env file with a number of env vars. The main one is CONTAINER_IMAGE_TAG, which holds the git branch ID that identifies the stack.
A separate docker-compose-dev file defines ports, image tags and extra dev-related metadata.
Finally, the use of --project-name in the docker-compose command allows the different stacks to coexist.
An example Bash function that wraps the docker-compose command:
docker_compose() {
    # run docker-compose with a per-branch project name so the stacks do not collide
    docker-compose -f docker/docker-compose.yaml -f docker/docker-compose-dev.yaml --project-name "project${CONTAINER_IMAGE_TAG}" --project-directory . "$@"
}
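The wrapper is then used like any other docker-compose invocation, for example (branch tag illustrative):

export CONTAINER_IMAGE_TAG=feature1
docker_compose up -d --build
docker_compose ps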
The separation should be done in the image tags, container names, network names, volume names and project name.
Since the most important benefit of using docker is keeping dev and prod environments the same, let's rule out the option of using two different docker-compose.yml files.
Let's say we have a Django application, and we use gunicorn to serve it in production, with a dedicated apache2 as a reverse proxy (this apache2 is outside docker by design). So this application (docker-compose) has only two parts, web (Django) and db (mysql). There's nothing wrong with the db part.
For the Django part, the dev routine without docker would be using venv and python3 manage.py runserver, or whatever shortcut an IDE provides. We can happily change our code, and the dev server is smart enough to pick up the change and reflect it in no time.
Things get tricky when docker comes in, since all source code should be packed into the image, which gives our dev workflow the big overhead of recreating the image and container again and again. One might come up with the following solutions (which I find inelegant):
In docker-compose.yml, use a volume to mount the source code folder into the container, so that any change in the host source code folder is automatically reflected in the container, and gunicorn picks it up. --- This does remove most of the container-recreation overhead, but we can't use the same docker-compose.yml in production, as this introduces a dependency on the source code being present on the host server.
I know there is a command line option to mount a host folder into the container, but to my knowledge this option only exists for docker run, not docker-compose. So using a different command to bring the service up in different environments is another dead end. (I am not 100% sure about this as I'm still quite new to docker, please correct me if I'm wrong.)
TLDR;
How can I set up my env so that
I use only one single docker-compose.yml for both dev and prod
I'm able to dev with live changes easily without recreating docker container
Thanks a lot!
Define your django service in docker-compose.yml as
services:
  backend:
    image: backend
Then add a file for dev: docker-compose.dev.yml
services:
  backend:
    extends:
      file: docker-compose.yml
      service: backend
    volumes:
      - local_path:path
To launch for prod, just docker-compose up
To launch for dev
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up
To hot reload the dev django app, just reload gunicorn:
ps aux | grep gunicorn | grep greencar_proj | awk '{ print $2 }' | xargs kill -HUP
I have also liked to jam as much functionality as possible into a single docker-compose.yml file. A few strategies I would consider:
Define different services for prod and dev, so you'll run docker-compose up dev or docker-compose up prod or docker-compose run dev. There is some copying here, but usually not a lot.
Use multiple docker-compose.yml files and merge them. eg: docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d. More details here: https://docs.docker.com/compose/extends/
I usually just comment out my volumes section, but that's probably not the best solution.
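As a sketch of the second strategy (the service name and mount path are placeholders), a dev override file can add just the bind mount and settings that prod should not have:

# docker-compose.dev.yml -- merged on top of docker-compose.yml for local work only
version: '3'
services:
  web:
    volumes:
      - ./src:/app/src   # live code changes without rebuilding the image
    environment:
      - DEBUG=1

docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d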
I have two elastic-beanstalk environments on AWS: development and production. I'm running a glassfish server on each instance, and it is requested that the same application package be deployable to both the production and the development environment, without requiring two different .EAR files. The two instances differ in size: dev runs on a micro instance while production runs on a medium instance, therefore I need to deploy two different configuration files for glassfish, one for each environment.
The main problem is that the file has to be in the glassfish config directory before the server starts, so I thought it would be better to move it while the container is being created.
Of course each environment uses a docker container to host the glassfish instance, so my first thought was to configure an environment variable for elastic-beanstalk. In this case:
ypenvironment = dev
for the development environment and
ypenvironment = pro
for the production environment. Then in my Dockerfile I put this statement in a RUN command:
RUN if [ "$ypenvironment"="pro" ] ; then \
mv --force /var/app/GF_domain.xml /usr/local/glassfish/glassfish/domains/domain1/config/domain.xml ; \
elif [ "$ypenvironment"="dev" ] ; then \
mv --force /var/app/GF_domain.xml.dev /usr/local/glassfish/glassfish/domains/domain1/config/domain.xml ; \
fi
Unfortunately, when the startup finishes, both GF_domain files are still in /var/app.
Then I read that the RUN command runs things BEFORE the container is fully loaded, maybe missing the elastic-beanstalk-injected variables. So I tried to move the code to the ENTRYPOINT directive. No luck again, the container startup fails. I also tried the
ENTRYPOINT ["command", "param"]
syntax, but it didn't work either, giving a
System error: exec: "if": executable file not found in $PATH
Thus I'm stuck.
You need:
1/ Not to use the entrypoint for this (or at least to use a sh -c 'if ...' syntax): the entrypoint is for runtime execution, not for the compile-time image build.
2/ to use build-time variables (--build-arg):
You can use ENV instructions in a Dockerfile to define variable values. These values persist in the built image.
However, often persistence is not what you want. Users want to specify variables differently depending on which host they build an image on.
A good example is http_proxy or source versions for pulling intermediate files. The ARG instruction lets Dockerfile authors define values that users can set at build-time using the --build-arg flag:
$ docker build --build-arg HTTP_PROXY=http://10.20.30.2:1234 .
In your case, your Dockerfile should include:
ARG ypenvironment
Then docker build --build-arg ypenvironment=dev ... myDevImage
You will build 2 different images (based on the same Dockerfile)
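A minimal sketch of that build-arg variant, reusing the paths from the question (and assuming the GF_domain files have already been copied into /var/app earlier in the Dockerfile):

ARG ypenvironment=dev
RUN if [ "$ypenvironment" = "pro" ] ; then \
        mv --force /var/app/GF_domain.xml /usr/local/glassfish/glassfish/domains/domain1/config/domain.xml ; \
    else \
        mv --force /var/app/GF_domain.xml.dev /usr/local/glassfish/glassfish/domains/domain1/config/domain.xml ; \
    fi

and then build each image with:

docker build --build-arg ypenvironment=pro -t myProImage .
docker build --build-arg ypenvironment=dev -t myDevImage .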
I need to be able to use the same EAR package for dev and pro environments,
Then you want your ENTRYPOINT, when run, to move a file depending on the value of an environment variable.
Your Dockerfile can still declare the variable (here with a default value that docker run can override):
ENV ypenvironment=dev
But you need to run your one image with
docker run -e ypenvironment=dev ...
Make sure your script (referenced by your entrypoint) includes the if [ "$ypenvironment" = "pro" ] ; then... you mention in your question, plus the actual launch (in the foreground) of your app.
Your script must not exit right away, or your container will switch to exited status right after having started.
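A minimal sketch of such a script (the asadmin path is an assumption, adjust it to your GlassFish layout):

#!/bin/sh
# entrypoint.sh: pick the right domain.xml at container start, then run GlassFish in the foreground
if [ "$ypenvironment" = "pro" ] ; then
    mv --force /var/app/GF_domain.xml /usr/local/glassfish/glassfish/domains/domain1/config/domain.xml
elif [ "$ypenvironment" = "dev" ] ; then
    mv --force /var/app/GF_domain.xml.dev /usr/local/glassfish/glassfish/domains/domain1/config/domain.xml
fi
exec /usr/local/glassfish/bin/asadmin start-domain --verbose

with ENTRYPOINT ["/entrypoint.sh"] in the Dockerfile (and the script marked executable).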
When working with Docker you must differentiate between build-time actions and run-time actions.
Dockerfiles are used for building Docker images, not for deploying containers. This means that all the commands in the Dockerfile are executed when you build the Docker image, not when you deploy a container from it.
The CMD and ENTRYPOINT instructions are special: they are recorded at build time, but they tell Docker which command to execute when a container is started from that image.
Now, in your case a better approach would be to check if Glassfish supports environment variables inside domain.xml (or somewhere else). If it does, you can use the same domain.xml file for both environments, and have the same Docker image for both of them. You then differentiate between the environments by injecting run-time environment variables to the containers by using docker run -e "VAR=value" when running locally, and by using the Environment Properties configuration section when deploying on Elastic Beanstalk.
Edit: In case you can't use environment variables inside domain.xml, you can solve the problem by starting the container with a script which reads the runtime environment variables and puts their values in the correct places in domain.xml using sed, then starts your application as usual. You can find an example in this post.
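A minimal sketch of that sed approach (the @MAX_HEAP@ placeholder and the asadmin path are assumptions, not from the question):

#!/bin/sh
# start.sh: substitute runtime environment values into domain.xml, then launch the server in the foreground
sed -i "s|@MAX_HEAP@|${MAX_HEAP:-512m}|g" \
    /usr/local/glassfish/glassfish/domains/domain1/config/domain.xml
exec /usr/local/glassfish/bin/asadmin start-domain --verbose

On Elastic Beanstalk, MAX_HEAP would then come from the Environment Properties section; locally you would pass it with docker run -e MAX_HEAP=2g.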
I am running a docker container on Amazon EC2. Currently I have added the AWS credentials to the Dockerfile. Could you please let me know the best way to do this?
A lot has changed in Docker since this question was asked, so here's an attempt at an updated answer.
First, specifically with AWS credentials on containers already running inside of the cloud, using IAM roles as Vor suggests is a really good option. If you can do that, then add one more plus one to his answer and skip the rest of this.
Once you start running things outside of the cloud, or have a different type of secret, there are two key places that I recommend against storing secrets:
Environment variables: when these are defined on a container, every process inside the container has access to them, they are visible via /proc, apps may dump their environment to stdout where it gets stored in the logs, and most importantly, they appear in clear text when you inspect the container.
In the image itself: images often get pushed to registries where many users have pull access, sometimes without any credentials required to pull the image. Even if you delete the secret from one layer, the image can be disassembled with common Linux utilities like tar and the secret can be found from the step where it was first added to the image.
So what other options are there for secrets in Docker containers?
Option A: If you need this secret only during the build of your image, cannot use the secret before the build starts, and do not have access to BuildKit yet, then a multi-stage build is the best of the bad options. You would add the secret to the initial stages of the build, use it there, and then copy the output of that stage without the secret to your release stage, and only push that release stage to the registry servers. This secret is still in the image cache on the build server, so I tend to use this only as a last resort.
Option B: Also during build time, if you can use BuildKit which was released in 18.09, there are currently experimental features to allow the injection of secrets as a volume mount for a single RUN line. That mount does not get written to the image layers, so you can access the secret during build without worrying it will be pushed to a public registry server. The resulting Dockerfile looks like:
# syntax = docker/dockerfile:experimental
FROM python:3
RUN pip install awscli
RUN --mount=type=secret,id=aws,target=/root/.aws/credentials aws s3 cp s3://... ...
And you build it with a command in 18.09 or newer like:
DOCKER_BUILDKIT=1 docker build -t your_image --secret id=aws,src=$HOME/.aws/credentials .
Option C: At runtime on a single node, without Swarm Mode or other orchestration, you can mount the credentials as a read only volume. Access to this credential requires the same access that you would have outside of docker to the same credentials file, so it's no better or worse than the scenario without docker. Most importantly, the contents of this file should not be visible when you inspect the container, view the logs, or push the image to a registry server, since the volume is outside of that in every scenario. This does require that you copy your credentials on the docker host, separate from the deploy of the container. (Note, anyone with the ability to run containers on that host can view your credential since access to the docker API is root on the host and root can view the files of any user. If you don't trust users with root on the host, then don't give them docker API access.)
For a docker run, this looks like:
docker run -v $HOME/.aws/credentials:/home/app/.aws/credentials:ro your_image
Or for a compose file, you'd have:
version: '3'
services:
  app:
    image: your_image
    volumes:
      - $HOME/.aws/credentials:/home/app/.aws/credentials:ro
Option D: With orchestration tools like Swarm Mode and Kubernetes, we now have secrets support that's better than a volume. With Swarm Mode, the file is encrypted on the manager filesystem (though the decryption key is often there too, allowing the manager to be restarted without an admin entering a decrypt key). More importantly, the secret is only sent to the workers that need the secret (running a container with that secret), it is only stored in memory on the worker, never disk, and it is injected as a file into the container with a tmpfs mount. Users on the host outside of swarm cannot mount that secret directly into their own container, however, with open access to the docker API, they could extract the secret from a running container on the node, so again, limit who has this access to the API. From compose, this secret injection looks like:
version: '3.7'
secrets:
  aws_creds:
    external: true
services:
  app:
    image: your_image
    secrets:
      - source: aws_creds
        target: /home/user/.aws/credentials
        uid: '1000'
        gid: '1000'
        mode: 0700
You turn on swarm mode with docker swarm init for a single node, then follow the directions for adding additional nodes. You can create the secret externally with docker secret create aws_creds $HOME/.aws/credentials. And you deploy the compose file with docker stack deploy -c docker-compose.yml stack_name.
I often version my secrets using a script from: https://github.com/sudo-bmitch/docker-config-update
Option E: Other tools exist to manage secrets, and my favorite is Vault because it gives the ability to create time limited secrets that automatically expire. Every application then gets its own set of tokens to request secrets, and those tokens give them the ability to request those time limited secrets for as long as they can reach the vault server. That reduces the risk if a secret is ever taken out of your network since it will either not work or be quick to expire. The functionality specific to AWS for Vault is documented at https://www.vaultproject.io/docs/secrets/aws/index.html
The best way is to use an IAM Role and not deal with credentials at all. (See http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html )
Credentials can be retrieved from http://169.254.169.254..... Since this is a link-local address, it is accessible only from EC2 instances.
All modern AWS client libraries "know" how to fetch, refresh and use credentials from there, so in most cases you don't even need to know about it. Just run the EC2 instance with the correct IAM role and you are good to go.
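A quick way to confirm the role is picked up from inside a container (assuming the instance metadata service is reachable from the container; with IMDSv2 you may need to raise the hop limit to 2):

docker run --rm amazon/aws-cli sts get-caller-identity

This should print the role's ARN without any keys being baked into the image or passed as variables.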
As an option, you can pass them at runtime as environment variables (e.g. docker run -e AWS_ACCESS_KEY_ID=xyz -e AWS_SECRET_ACCESS_KEY=aaa myimage).
You can access these environment variables by running printenv at the terminal.
Yet another approach is to create a temporary read-only volume in docker-compose.yaml. The AWS CLI and SDKs (like boto3 or the AWS SDK for Java etc.) look for the default profile in the ~/.aws/credentials file.
If you want to use other profiles, you just need to also export the AWS_PROFILE variable before running the docker-compose command.
export AWS_PROFILE=some_other_profile_name
version: '3'
services:
  service-name:
    image: docker-image-name:latest
    environment:
      - AWS_PROFILE=${AWS_PROFILE}
    volumes:
      - ~/.aws/:/root/.aws:ro
In this example, I used the root user inside docker. If you are using another user, just change /root/.aws to that user's home directory.
:ro stands for a read-only docker volume.
It is very helpful when you have multiple profiles in your ~/.aws/credentials file and you are also using MFA. It's also helpful when you want to test the docker container locally before deploying it to ECS, where you have IAM roles, but locally you don't.
Another approach is to pass the keys from the host machine to the docker container. You may add the following lines to the docker-compose file.
services:
  web:
    build: .
    environment:
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
      - AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}
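These values are read from the shell environment on the host at the time you run docker-compose, so they need to be exported first (the keys below are obviously fake):

export AWS_ACCESS_KEY_ID=AKIAFAKEKEYID
export AWS_SECRET_ACCESS_KEY=fakeSecretKeyValue
export AWS_DEFAULT_REGION=us-east-1
docker-compose up -d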
The following one-liner works for me even when my credentials are set up by aws-okta or saml2aws:
$ docker run -v$HOME/.aws:/root/.aws:ro \
-e AWS_ACCESS_KEY_ID \
-e AWS_CA_BUNDLE \
-e AWS_CLI_FILE_ENCODING \
-e AWS_CONFIG_FILE \
-e AWS_DEFAULT_OUTPUT \
-e AWS_DEFAULT_REGION \
-e AWS_PAGER \
-e AWS_PROFILE \
-e AWS_ROLE_SESSION_NAME \
-e AWS_SECRET_ACCESS_KEY \
-e AWS_SESSION_TOKEN \
-e AWS_SHARED_CREDENTIALS_FILE \
-e AWS_STS_REGIONAL_ENDPOINTS \
amazon/aws-cli s3 ls
Please note that for advanced use cases you might need to allow rw (read-write) permissions, so omit the ro (read-only) limitation when mounting the .aws volume in -v$HOME/.aws:/root/.aws:ro
Volume mounting is noted in this thread, but as of docker-compose v3.2+ you can use a bind mount.
For example, if you have a file named .aws_creds in the root of your project:
In your service in the compose file, do this for volumes:
volumes:
  # normal volume mount, already shown in thread
  - ./.aws_creds:/root/.aws/credentials
  # way 2, note this requires docker-compose v 3.2+
  - type: bind
    source: .aws_creds # from local
    target: /root/.aws/credentials # to the container location
Using this idea, you can publicly store your docker images on Docker Hub, because your aws credentials will not physically be in the image. To have them associated, you must have the correct directory structure locally where the container is started (i.e. pulling from Git).
You could create a ~/aws_env_creds file:
touch ~/aws_env_creds
chmod 600 ~/aws_env_creds
vi ~/aws_env_creds
Add these values (replacing the keys with your own):
AWS_ACCESS_KEY_ID=AK_FAKE_KEY_88RD3PNY
AWS_SECRET_ACCESS_KEY=BividQsWW_FAKE_KEY_MuB5VAAsQNJtSxQQyDY2C
Press "esc" to save the file.
Reference the file from your compose service, then run and test the container:
my_service:
  build: .
  image: my_image
  env_file:
    - ~/aws_env_creds
If someone still faces the same issue after following the instructions in the accepted answer, then make sure that you are not passing environment variables from two different sources. In my case I was passing environment variables to docker run both via a file and as parameters, which caused the variables passed as parameters to have no effect.
So the following command did not work for me:
docker run --env-file ./env.list -e AWS_ACCESS_KEY_ID=ABCD -e AWS_SECRET_ACCESS_KEY=PQRST IMAGE_NAME:v1.0.1
Moving the aws credentials into the mentioned env.list file helped.
For a PHP Apache docker container, the following command works:
docker run --rm -d -p 80:80 --name my-apache-php-app -v "$PWD":/var/www/html -v ~/.aws:/.aws --env AWS_PROFILE=mfa php:7.2-apache
Based on some of the previous answers, I built my own setup as follows.
My project structure:
├── Dockerfile
├── code
│ └── main.py
├── credentials
├── docker-compose.yml
└── requirements.txt
My docker-compose.yml file:
version: "3"
services:
app:
build:
context: .
volumes:
- ./credentials:/root/.aws/credentials
- ./code:/home/app
My Dockerfile:
FROM python:3.8-alpine
RUN pip3 --no-cache-dir install --upgrade awscli
RUN mkdir -p /home/app
WORKDIR /home/app
CMD python main.py