I have a config_file which contains many arguments:
FOO=/foo
FOO_1=/bar
...
FOO_100=/opt
I need to set up the environment in my Docker container so that it contains all of these variables.
I know about the --build-arg option, but I don't think it's a good idea here, because the number of arguments is too large.
So I don't know how to load the environment variables into the container correctly.
You can use the --env-file argument when running the container, for example:
docker run --env-file config.env
check docker run options here https://docs.docker.com/engine/reference/commandline/run/#options
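For instance, a config.env file holds one KEY=value pair per line (no export keyword, no quotes), so the file from the question can be passed through as-is; a minimal sketch, with your-image standing in for whatever image you actually run:
FOO=/foo
FOO_1=/bar
FOO_100=/opt
docker run --env-file config.env your-image env | grep FOO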
Alternatively, with docker-compose.yml:
version: "3.7"
services:
test:
image: test: image
env_file:
- config.env
You can refer to https://docs.docker.com/compose/environment-variables/ for more details.
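To bring the compose variant up and check that the variables made it into the container, something like the following should work (test is the service name from the snippet above):
docker-compose up -d
docker-compose exec test env | grep FOO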
Assuming I have the following docker-compose file:
version: "3.9"
services:
skyrimserver:
container_name: skyrimserver
image: tiltedphoques/st-reborn-server:latest
ports:
- "10578:10578/udp"
volumes:
- /opt/docker/skyrimserver/config:/home/server/config
- /opt/docker/skyrimserver/logs:/home/server/logs
- /opt/docker/skyrimserver/Data:/home/server/Data
restart: unless-stopped
I would like to have the folders specified under volumes created and filled with some files before running it. It seems like the Docker (ECS) integration automatically creates an EFS file system, which is empty. How can I hook into that to add files upon creation?
EDIT: A nice solution would be the ability to change the config on the fly and reload it within the game, not having to restart the whole server just because a new Docker image includes new configuration files.
Docker Compose is a tool to define and run Docker containers. When you define volumes in the YAML file, it is similar to running docker with the --mount argument, and there is no such volume on that instance for you to mount.
What you need with ECS is a fully functioning image with all the source files already in it; using a Dockerfile like this to build the image might work:
FROM tiltedphoques/st-reborn-server
# COPY source paths are resolved relative to the build context
COPY /opt/docker/skyrimserver/config /home/server/config
COPY /opt/docker/skyrimserver/logs /home/server/logs
COPY /opt/docker/skyrimserver/Data /home/server/Data
EXPOSE 10578
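A rough sketch of building and publishing that image so an ECS task can use it (the image tag and the ECR registry URL are placeholders; log in to your registry first):
docker build -t skyrimserver-custom .
docker tag skyrimserver-custom 123456789012.dkr.ecr.eu-west-1.amazonaws.com/skyrimserver-custom:latest
docker push 123456789012.dkr.ecr.eu-west-1.amazonaws.com/skyrimserver-custom:latest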
Edit: In case you still want to use ECS, you can try to SSH into that container, or use Systems Manager to connect to the container and put your files in /home/server/. I am strongly AGAINST this method because EFS is ridiculously slow for dumping data into.
You should change to EBS-backed EC2 instances. They are easy to scale with an Auto Scaling group and suit your use case.
Hi, hope this helps: just put a dot in front of your location:
version: "3.9"
services:
skyrimserver:
container_name: skyrimserver
image: tiltedphoques/st-reborn-server:latest
ports:
- "10578:10578/udp"
volumes:
- ./opt/docker/skyrimserver/config:/home/server/config
- ./opt/docker/skyrimserver/logs:/home/server/logs
- ./opt/docker/skyrimserver/Data:/home/server/Data
restart: unless-stopped
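Note that relative host paths are resolved against the directory containing docker-compose.yml, so you can create and pre-fill those folders next to the compose file before starting; roughly:
mkdir -p ./opt/docker/skyrimserver/config ./opt/docker/skyrimserver/logs ./opt/docker/skyrimserver/Data
# copy your config files into ./opt/docker/skyrimserver/config, then:
docker-compose up -d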
I have a multi-container Django app. One container is the database, another one is the main webapp with Django installed, handling the front- and backend. I want to add a third container which provides the main functionality/tool we want to offer via the webapp. It has some complex dependencies, which is why I would like to have it as a separate container as well. Its functionality is wrapped as a CLI tool, and currently we build the image and run it as needed, passing the arguments for the CLI tool.
Currently, this is the docker-compose.yml file:
version: '3'
services:
  db:
    image: mysql:8.0.30
    environment:
      - MYSQL_DATABASE=${MYSQL_DATABASE}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
      - TZ=${TZ}
    volumes:
      - db:/var/lib/mysql
      - db-logs:/var/log/mysql
    networks:
      - net
    restart: unless-stopped
    command: --default-authentication-plugin=mysql_native_password
  app:
    build:
      context: .
      dockerfile: ./Dockerfile.webapp
    environment:
      - MYSQL_NAME=${MYSQL_DATABASE}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
    ports:
      - "8000:8000"
    networks:
      - net
    volumes:
      - ./app/webapp:/app
      - data:/data
    depends_on:
      - db
    restart: unless-stopped
    command: >
      sh -c "python manage.py runserver 0.0.0.0:8000"
  tool:
    build:
      context: .
      dockerfile: ./Dockerfile.tool
    volumes:
      - data:/data
networks:
  net:
    driver: bridge
volumes:
  db:
  db-logs:
  data:
In the end, the user should be able to set the parameters via the web UI and run the tool container. Multiple processes should be managed by a job scheduler. I had hoped that running a container from within a multi-container app would be straightforward, but as far as I know it is only possible by mounting the Docker socket, which should be avoided because of security issues.
So my question is: what are the possibilities for achieving my desired goal?
Things I considered:
Multi-stage build: its main purpose is to reduce image size, but is there a hack to use the CLI tool along with its build environment in the final image of the multi-stage build?
API: build an API for the tool. Other containers can communicate with it via the Docker network. Seems cumbersome.
The service "app" (the main Django app) is built on top of the official Python image, which I would like to keep this way. Nevertheless, there is the possibility to build one large image based on Ubuntu which includes the tool along with its dependencies and the main Django app. This will probably increase the image size considerably and may lead to dependency issues.
Has anybody run into similar issues? Which direction would you point me to? I'm also looking for some buzzwords that speed up my research.
You should build both parts into a single unified image, and then you can use the Python subprocess module as normal to invoke the tool.
The standard Docker Hub python image is already built on Debian, which is very closely related to Ubuntu. So you should be able to do something like
FROM python:3.10
# Install OS-level dependencies for both the main application and
# the support tool
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive \
apt-get install --no-install-recommends --assume-yes \
another-dependency \
some-dependency \
third-dependency
# Install the support tool
ADD http://repository.example.com/the-tool/the-tool /usr/local/bin/the-tool
RUN chmod +x /usr/local/bin/the-tool
# Copy and install Python-level dependencies
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt
# Copy in the main application
COPY ./ ./
# Metadata on how to run the application
EXPOSE 8000
# USER someuser
CMD ["./the-app.py"]
You've already noted the key challenges in having the tool in a separate container. You can't normally "run commands" in a container; a container is a wrapper around some single process, and it requires unrestricted root-level access to the host to be able to manipulate the container in any way (including using the docker exec debugging tool). You'd also need unrestricted root-level access to the host to be able to launch a temporary container per request.
Putting some sort of API or job queue around the tool would be the "most Dockery" way to do it, but that can also be significant development effort. In this setup as you've described it, the support tool is mostly an implementation detail of the main process, so you're not really breaking the "one container does one thing" rule by making it available for a normal Unix subprocess invocation inside the same container.
OK, this will be a bit embarrassing, but I am trying to build a Docker service with the AWS CLI that shares a volume with my backend for storing backups.
I saw this approach in the package django-cookiecutter
So far, I couldn't get anything to work and I'm really struggling to find the right approach. I defined the access key etc. in my docker-compose file:
services:
  aws_cli:
    build:
      context: ./kitschoen_aws_docker
      dockerfile: Dockerfile
    environment:
      - AWS_ACCESS_KEY_ID=xxxxxxxxxxxx
      - AWS_SECRET_ACCESS_KEY=xxxxxxxxxxx
      - SQL_ENGINE=django.db.backends.postgresql
    volumes:
      - postgres_backup:/backup
volumes:
  postgres_backup:
And set up a Dockerfile:
FROM garland/aws-cli-docker:1.15.47
# ENTRYPOINT [ "/bin/bash", "-c", "aws configure --region=eu-central-1 --output=text" ]
The commented-out part does not work because the container does not seem to have a bash shell. I am quite lost and would be very grateful for any hint in the right direction.
Don't be embarrassed, this sort of thing can seem simple but turn out to be frustratingly complicated. My first suggestion is to try changing your entrypoint to /bin/sh to see if that is available. It looks like the image you are using is based on Alpine, so /bin/sh should be available, which should get you past this issue.
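For example, keeping your command exactly as it was and only swapping the shell (this is just your commented-out line with /bin/sh):
ENTRYPOINT [ "/bin/sh", "-c", "aws configure --region=eu-central-1 --output=text" ]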
As mentioned by marked, it is an Alpine-based image, so try it with sh, but the real problem starts with this line:
ENTRYPOINT [ "/bin/bash", "-c", "aws configure --region=eu-central-1 --output=text" ]
It will configure the AWS keys, but as soon as the keys are configured, the container will die.
So I would just bind-mount the host credentials; you get two things:
Already configured keys
Existing configured profiles
docker-compose
version: '3'
services:
  aws_cli:
    image: garland/aws-cli-docker:1.15.47
    volumes:
      - /home/user/.aws/:/root/.aws
So now you can run that command, or any other process that you want:
docker-compose run aws_cli aws s3api list-buckets --query "Buckets[].Name" --profile test
Or any Python script:
docker-compose run aws_cli python my.py
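For instance, if you also attach the postgres_backup volume from your own compose file to the aws_cli service, a backup upload could look roughly like this (the bucket name is made up):
docker-compose run aws_cli aws s3 sync /backup s3://my-backup-bucket --profile test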
As the most important benefit of using Docker is keeping the dev and prod environments the same, let's rule out the option of using two different docker-compose.yml files.
Let's say we have a Django application, we use gunicorn to serve it in production, and we have a dedicated apache2 as a reverse proxy (this apache2 is outside Docker by design). So this application (docker-compose) has only two parts, web (Django) and db (mysql). There's nothing wrong with the db part.
For the Django part, the dev routine without Docker would be using a venv and python3 manage.py runserver, or whatever shortcut an IDE provides. We can happily change our code; the dev server is smart enough to pick up the change and reflect it in no time.
Things get tricky when Docker comes in, since all source code should be packed into the image, and this gives our dev workflow the big overhead of recreating the image & container again and again. One might come up with the following solutions (which I find inelegant):
In docker-compose.yml, use a volume to mount the source code folder into the container, so that all changes in the host source code folder are automatically reflected in the container, and gunicorn picks up the change. --- This does remove most of the container-recreation overhead, but we can't use the same docker-compose.yml in production, as it introduces a dependency on the source code being present on the host server.
I know there is a command-line option to mount a host folder into the container, but to my knowledge this option only exists for docker run, not docker-compose. So using a different command to bring the service up in different environments is another dead end. (I am not 100% sure about this as I'm still quite new to Docker, please correct me if I'm wrong.)
TLDR;
How can I set up my env so that
I use only one single docker-compose.yml for both dev and prod
I'm able to dev with live changes easily without recreating docker container
Thanks a lot!
Define your django service in docker-compose.yml as
services:
  backend:
    image: backend
Then add a file for dev: docker-compose.dev.yml
services:
  backend:
    extends:
      file: docker-compose.yml
      service: backend
    volumes:
      - local_path:path
To launch for prod, just docker-compose up
To launch for dev
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up
To hot-reload the dev Django app, just reload gunicorn:
ps aux | grep gunicorn | grep greencar_proj | awk '{ print $2 }' | xargs kill -HUP
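You can also inspect what the merged configuration looks like before starting anything:
docker-compose -f docker-compose.yml -f docker-compose.dev.yml config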
I have also tried to jam as much functionality as possible into a single docker-compose.yml file. A few strategies I would consider:
Define different services for prod and dev, so you'll run docker-compose up dev or docker-compose up prod or docker-compose run dev. There is some copying here, but usually not a lot (see the sketch after this list).
Use multiple docker-compose.yml files and merge them. eg: docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d. More details here: https://docs.docker.com/compose/extends/
I usually just comment out my volumes section, but that's probably not the best solution.
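For the first strategy, a minimal sketch of how the two services could sit side by side in one file (the mount path is a placeholder):
services:
  prod:
    build: .
    # code is baked into the image at build time
  dev:
    build: .
    volumes:
      - .:/app   # hypothetical mount so local changes show up live
Then docker-compose up prod or docker-compose up dev picks the variant you want.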
I've set up a Docker Django container and built its image using the tutorial here. The tutorial shows how to make a basic Django application and mounts the application to /code, which, as I understand it, is contained in a data volume.
However, I want to understand how I will be able to update and develop this code and then be able to ship/deploy it. When I make a commit, it doesn't take into account any changes in the code, since the code is part of the data volume.
Is there any way I can make the django code a part of the image, or update the image with the updated code?
In my experience Docker serves two purposes:
To be able to develop code in a containerized environment. This is very useful as I am now able to get new developers on my team ready to work in about 5 minutes. Previously, this could have taken anywhere from an hour to several hours due to misc issues, especially on older projects.
To be able to package an application in a containerized environment. This is also a great time saver as the only requirement for the environment is to have Docker installed.
When you are developing your code, you should mount the source as a volume so that your changes are always reflected inside the container. When you want to package an app for deployment, you should COPY the source into the container and package it appropriately.
Here is a docker-compose file I use to (1) build an image to develop against, (2) develop my code, and (3) ship it (I'm using spring boot):
version: '3.7'
services:
  dev:
    image: '${MVN_BUILDER}'
    container_name: '${CONTAINER_NAME}'
    ports:
      - '8080:8080'
    volumes:
      - './src:/build/src'
      - './db:/build/db'
      - './target:/build/target'
      - './logs:/build/logs'
    command: 'mvn spring-boot:run -Drun.jvmArguments="-Xmx512m" -Dmaven.test.skip=true'
  deploy:
    build:
      context: .
      dockerfile: Dockerfile-Deploy
      args:
        MVN_BUILDER: '${MVN_BUILDER}'
    image: '${DEPLOYMENT_IMAGE}'
    container_name: '${CONTAINER_NAME}'
    ports:
      - '8080:8080'
  maven:
    build:
      context: .
      dockerfile: Dockerfile
    image: '${MVN_BUILDER}'
    container_name: '${CONTAINER_NAME}'
I would run docker-compose build maven to build my base image. This is needed so that when I run my code in a container, all the dependencies are installed in the image. The Dockerfile for this essentially copies the pom.xml into the image and downloads the dependencies needed for the app. Note this would need to be performed any time the dependencies change. Here is the Dockerfile to build the image that is referenced in the maven service:
### BUILD a maven builder. This will contain all mvn dependencies and act as an abstraction for all mvn goals
FROM maven:3.5.4-jdk-8-alpine as builder
#Copy Custom Maven settings
#COPY settings.xml /root/.m2/
# create app folder for sources
RUN mkdir -p /build
RUN mkdir -p /build/logs
# The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile.
WORKDIR /build
COPY pom.xml /build
#Download all required dependencies into one layer
RUN mvn -B dependency:go-offline dependency:resolve-plugins
RUN mvn clean install
Next, I would run docker-compose up dev to start my dev service and begin developing my application. This service mounts my code into the container and uses Maven to start a Spring Boot application. Any time I change the code, Spring Boot restarts the server and my changes are reflected.
Finally, once I am happy with my application, I build an image that has my application packaged for deployment using docker-compose build deploy. I use a two-stage build process: the first stage copies the source into a container and packages it for deployment as a jar; that jar is then put into the second stage, where I can simply run java -jar build/app.jar (in the container) to start my application, and the first stage is discarded. That's it! Now you can deploy this latest image anywhere Docker is installed.
Here is what that last Dockerfile (Dockerfile-Deploy) looks like:
ARG MVN_BUILDER
### Stage 1 - BUILD image
FROM $MVN_BUILDER as builder
COPY src /build/src
RUN mvn clean package -PLOCAL
### Stage 2 - Deploy Jar
FROM openjdk:8
RUN mkdir -p /build
COPY --from=builder /build/target/*.jar /build/app.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","build/app.jar"]
Here is the .env file in the same directory as the docker-compose file. I use it to abstract image/container names and simply bump the version number in one place when a new image is needed.
MVN_BUILDER=some/maven/builder:0.1
DEPLOYMENT_IMAGE=some/deployment/spring:0.1
CONTAINER_NAME=spring-container
CONTAINER_NAME_DEBUG=spring-container-debug
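Putting it together, the day-to-day workflow described above boils down to roughly this (image names come from the .env file above):
docker-compose build maven      # (re)build the dependency/builder image when the pom changes
docker-compose up dev           # develop with live reload via the mounted source
docker-compose build deploy     # produce the deployable image
docker run -p 8080:8080 some/deployment/spring:0.1   # run the packaged app anywhere Docker is installed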
I think it's a bit late to answer your question; however, it might be beneficial for others who come across this.
The tutorial you mentioned is a bit tricky for first-timers, so I changed the structure a little bit. I assume that you have a Docker registry account (like Docker Hub) for publishing the images to. This is required if you want to access the image on a remote host (you can copy the actual image file, but that is not recommended).
creating a project
Assume that you are going to create a website with Django and dockerize it, first, you do:
django-admin startproject samplesite
It creates a directory samplesite that includes the following (I added requirements.txt):
db.sqlite3 manage.py requirements.txt samplesite
adding Dockerfile and docker-compose.yml
For the Dockerfile, as you can see, nothing is changed compared to the one in the tutorial.
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
However for the docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    image: yourUserNameOnDockerHub/mywebsite:0.1 # this line is added
    command: python manage.py runserver 0.0.0.0:8000
    #volumes:
    #  - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
docker-compose.yml is also almost identical to the one mentioned in the tutorial, with the volume commented out and one line added: image: yourUserNameOnDockerHub/mywebsite:0.1. This line allows us to track the built image and deploy it whenever we want. The volume mounting is not related to the code you write and was put there to take out the dynamic content that is changed by Django (sqlite, uploaded files, etc.).
build and run
If you run docker-compose up, everything works fine the first time; however, because of the newly added line, when you change the code afterwards, the changes will not be reflected in the running container. This is because on each docker-compose up, Compose looks for mywebsite:0.1 (which already exists) and does not build a new image, so it creates a container based on the old one. As we need that image name and tag to publish/deploy our image, we need to use instead:
docker-compose up --build
It will rebuild the image with the changes reflected. Every time you make some changes, run it, and a fresh image is created that can be seen with the following (note that although the name and tag remain unchanged, the change in the image ID shows that this is a new image):
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
yourUserNameOnDockerHub/mywebsite 0.1 033c9d2bfac0 7 seconds ago 974MB
publishing and deployment
If you have set up an account on Dockerhub (or any other registry) you can publish the image for later use or deployment on a remote server:
docker push yourUserNameOnDockerHub/mywebsite:0.1
If you want to deploy it on a remote host and want to use docker-compose again, Just change the docker-compose.yml to:
version: '3'
services:
  db:
    image: postgres
  web:
    image: yourUserNameOnDockerHub/mywebsite:0.1
    command: python manage.py runserver 0.0.0.0:8000
    #volumes:
    #  - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
Note that the build: . line is removed (as we are only going to run it there). When developing locally, whenever you run docker-compose up --build, a new image is created and tagged, and a container based on it runs in the compose stack. If you are happy with the changes, you follow the publishing step to make it live on the server.
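On the server itself, deployment then reduces to roughly this (run next to the deployment docker-compose.yml shown above):
docker-compose pull      # fetch the pushed yourUserNameOnDockerHub/mywebsite:0.1
docker-compose up -d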
When you want to update an image, let's say due to changes in your application code, you use COPY during your image build, so in the Dockerfile you do something like:
COPY /your/code/on/the/host /var/www
Also see my answer about "volumes" and image building at https://stackoverflow.com/a/39314602/3625317 to clarify why your code is missing in the build.
In step 9 of the tutorial you set up a volume. This volume links your current directory with the container's /code directory. In other words, they will be the same.
So any updates to your local files will change the files in your container as well. Remember that you will need to restart your app for the changes to take effect.
Before you deploy your image, you will need to create a second docker-compose file. This file will remove the volume, so the code stays inside the container and won't be changed from outside. You can follow the steps provided in the docker-compose documentation; a sketch follows below.
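A rough sketch of what that second compose file could look like for the tutorial's app, with the volume removed so the image's copy of the code is used (service names follow the tutorial; adjust to your project):
version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
    depends_on:
      - db
    # no .:/code volume here, so the code stays inside the image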