How to set environment variables properly with GitLab CI/CD and Docker - Django

I am new to Docker and to CI/CD with GitLab CI/CD. I have a .env file in the root directory of my Django project which contains my environment variables, e.g. SECRET_KEY=198191891. The .env file is included in .gitignore. I have set these variables up in the GitLab CI/CD settings; however, the environment variables set there seem to be unavailable.
Also, how should the GitLab CI/CD automation process create a user and database to connect to and run the tests against? When creating the database and user on my local machine, I logged into the container with docker exec -it <postgres_container_name> /bin/sh and created the Postgres user and database.
Here are my relevant files.
docker-compose.yml
version: "3"
services:
postgres:
image: postgres
ports:
- "5432:5432"
volumes:
- pgdata:/var/lib/postgresql/data/
web:
build: .
command: /usr/local/bin/gunicorn writer.wsgi:application -w 2 -b :8000
environment:
DEBUG: ${DEBUG}
DB_HOST: ${DB_HOST}
DB_NAME: ${DB_NAME}
DB_USER: ${DB_USER}
DB_PORT: ${DB_PORT}
DB_PASSWORD: ${DB_PASSWORD}
SENDGRID_API_KEY: ${SENDGRID_API_KEY}
AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
AWS_STORAGE_BUCKET_NAME: ${AWS_STORAGE_BUCKET_NAME}
depends_on:
- postgres
- redis
expose:
- "8000"
volumes:
- .:/writer-api
redis:
image: "redis:alpine"
celery:
build: .
command: celery -A writer worker -l info
volumes:
- .:/writer-api
depends_on:
- postgres
- redis
celery-beat:
build: .
command: celery -A writer beat -l info
volumes:
- .:/writer-api
depends_on:
- postgres
- redis
nginx:
restart: always
build: ./nginx/
ports:
- "80:80"
depends_on:
- web
volumes:
pgdata:
.gitlab-ci.yml
image: tmaier/docker-compose:latest

services:
  - docker:dind

before_script:
  - docker info
  - docker-compose --version

stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - echo "Building the app"
    - docker-compose build

test:
  stage: test
  variables:
  script:
    - echo "Testing"
    - docker-compose run web coverage run manage.py test

deploy-staging:
  stage: deploy
  only:
    - develop
  script:
    - echo "Deploying staging"
    - docker-compose up -d

deploy-production:
  stage: deploy
  only:
    - master
  script:
    - echo "Deploying production"
    - docker-compose up -d
Here are my settings for my variables
Here is my failed pipeline job

The SECRET_KEY variable will be available to all your CI jobs, as configured. However, I don't see any references to it in your Docker Compose file to pass it to one or more of your services. For the web service to use it, you'd map it in like the other variables you already have.
web:
  build: .
  command: /usr/local/bin/gunicorn writer.wsgi:application -w 2 -b :8000
  environment:
    SECRET_KEY: ${SECRET_KEY}
    DEBUG: ${DEBUG}
    …
As for creating the database, you should wrap up whatever you currently run interactively in the postgres container in a SQL file or shell script, and then bind-mount it into the container's initialization scripts directory under /docker-entrypoint-initdb.d. See the Initialization scripts section of the postgres image's description for more details.
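For example, a minimal sketch of the postgres service (the file name init-db.sql is a placeholder; put the CREATE USER / CREATE DATABASE statements you currently run by hand into it):
postgres:
  image: postgres
  ports:
    - "5432:5432"
  volumes:
    - pgdata:/var/lib/postgresql/data/
    # Any *.sql or *.sh file in this directory is executed once, when the
    # data directory is first initialized (i.e. on an empty pgdata volume).
    - ./init-db.sql:/docker-entrypoint-initdb.d/init-db.sql
Note that these scripts only run the first time the data directory is initialized, so an existing pgdata volume has to be removed (or a fresh one used in CI) for them to take effect.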

In my experience, the best way to pass environment variables to a Docker container is to create an environment file; this works for both development and production environments:
GitLab CI/CD variables
In GitLab, go to your project's Settings > CI/CD > Variables and create a variable named ENV_FILE (of type File, so that $ENV_FILE expands to the path of a temporary file holding its value) whose contents are your environment file.
Then, in the build stage of your .gitlab-ci.yml, copy $ENV_FILE into the working directory as .env:
.gitlab-ci.yml
build:
  stage: build
  script:
    - cp $ENV_FILE .env
    - echo "Building the app"
    - docker-compose build
Your Dockerfile can stay as it is; it doesn't need to change:
Dockerfile
FROM python:3.8.6-slim
# Rest of setup goes here...
COPY .env .env
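As a side note, Compose also reads a file named .env in the project directory for ${VAR} substitution, so once the cp $ENV_FILE .env step has run, mappings like DEBUG: ${DEBUG} in the compose file resolve without exporting anything else in the job. If you prefer not to bake the file into the image with COPY, you can instead hand it to the container at runtime (a sketch, using the web service from the question):
web:
  build: .
  command: /usr/local/bin/gunicorn writer.wsgi:application -w 2 -b :8000
  env_file:
    - .env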

If the CI user is not in the docker group and you have to run Compose through sudo, add the -E flag so that sudo preserves the environment; otherwise Compose variable substitution won't see the variables.
script:
  - sudo -E docker-compose build

Related

Dockerized django container not producing local migrations file

Question
I am a beginner with Docker; this is the first project I have set up with it, and I don't particularly know what I am doing. I would very much appreciate some advice on the best way to get migrations from a dockerized Django app stored locally.
What I have tried so far
I have a local django project setup with the following file structure:
Project
    .docker
        - Dockerfile
    project
        - data
            - models
                - __init__.py
                - user.py
                - test.py
            - migrations
                - 0001_initial.py
                - 0002_user_role.py
                ...
        settings.py
        ...
    manage.py
    Makefile
    docker-compose.yml
    ...
In the current state, the migrations for the test.py model have not yet been created, so I attempted to do so using docker-compose exec main python manage.py makemigrations. This worked, returning the following:
Migrations for 'data':
  project/data/migrations/0003_test.py
    - Create model Test
But it produced no local file. However, if I explore the file system of the container, I can see that the file exists on the container itself.
Upon running the following:
docker-compose exec main python manage.py migrate
I receive:
Running migrations:
No migrations to apply.
Your models in app(s): 'data' have changes that are not yet reflected in a migration, and so won't be applied.
Run 'manage.py makemigrations' to make new migrations, and then re-run 'manage.py migrate' to apply them.
I was under the impression that even if this did not create the local file it would at least run the migrations on the container.
Regardless, my intention was that when I run docker-compose exec main python manage.py makemigrations, it would store the file locally in the project/data/migrations folder, and then I would just run migrate manually. I can't find much documentation on how to do this; the only post I have seen suggested bind mounts (Migrations files not created in dockerized Django), which I attempted by adding the following to my docker-compose file:
volumes:
  - type: bind
    source: ./data/migrations
    target: /var/lib/migrations_test
but I was struggling to get it to work. Following on from this, I had no idea how to run commands through this volume using docker-compose, and I was questioning whether this was even a good idea, since I had read somewhere that bind mounts are not considered best practice.
Project setup:
The docker-compose.yml file looking like so:
version: '3.7'
x-common-variables: &common-variables
  ENV: 'DEV'
  DJANGO_SETTINGS_MODULE: 'project.settings'
  DATABASE_NAME: 'postgres'
  DATABASE_USER: 'postgres'
  DATABASE_PASSWORD: 'postgres'
  DATABASE_HOST: 'postgres'
  CELERY_BROKER_URLS: 'redis://redis:6379/0'
volumes:
  postgres:
services:
  main:
    command:
      python manage.py runserver 0.0.0.0:8000
    build:
      context: ./
      dockerfile: .docker/Dockerfile
      target: main
    environment:
      <<: *common-variables
    ports:
      - '8000:8000'
    env_file:
      - dev.env
    networks:
      - default
  postgres:
    image: postgres:13.6
    volumes:
      - postgres:/var/lib/postgresql/data
    ports:
      - '25432:5432'
    environment:
      POSTGRES_PASSWORD: 'postgres'
    command: postgres -c log_min_messages=INFO -c log_statement=all
  wait_for_dependencies:
    image: dadarek/wait-for-dependencies
    environment:
      SLEEP_LENGTH: '0.5'
  redis:
    image: redis:latest
    ports:
      - '16379:6379'
  worker:
    build:
      context: .
      dockerfile: .docker/Dockerfile
      target: main
    command: celery -A project worker -l INFO
    environment:
      <<: *common-variables
    volumes:
      - .:/code/delegated
    env_file:
      - dev.env
    networks:
      - default
  beat:
    build:
      context: .
      dockerfile: .docker/Dockerfile
      target: main
    command: celery -A project beat -l INFO
    environment:
      <<: *common-variables
    volumes:
      - .:/code/delegated
    env_file:
      - dev.env
    networks:
      - default
networks:
  default:
Makefile:
build: pre-run
build:
	docker-compose build --pull

dev-deps: pre-run
dev-deps:
	docker-compose up -d postgres redis
	docker-compose run --rm wait_for_dependencies postgres:5432 redis:6379

migrate: pre-run
migrate:
	docker-compose run --rm main python manage.py migrate

setup: build dev-deps migrate

up: dev-deps
	docker-compose up -d main
Dockerfile:
FROM python:3.10.2 as main
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
RUN mkdir -p /code
WORKDIR /code
ADD . ./
RUN useradd -m -s /bin/bash app
RUN chown -R app:app .
USER app
EXPOSE 8000
Follow up based on diptangsu-goswami's response
I tried adding the following:
volumes:
  - type: bind
    source: C:\dev\Project\project
    target: /code/
This creates an empty directory in my Project folder, named C:\dev\Project\project, but the app doesn't run because it cannot find the manage.py file... I assumed this was because manage.py was in the parent directory Project, and tried again with:
volumes:
  - type: bind
    source: C:\dev\Project
    target: /code/
But the same problem occurred. Why is it creating the empty directory? Surely it should just be binding the existing directory to the container directory? Also, using this method, would I need to change my Dockerfile so it doesn't copy the codebase into the container in the first place, and just mount it instead?
I managed to fix it by adding the following to my 'main' service in my docker compose:
volumes:
  - .:/code:delegated
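For context on why this works: bind-mounting the project root over /code makes the container's /code and the host working copy the same files, so anything makemigrations writes inside the container (such as 0003_test.py) appears directly in the local project/data/migrations folder, shadowing the copy baked in by the Dockerfile's ADD . ./ during development. A sketch of the resulting main service, based on the compose file above (the :delegated flag only tunes mount consistency on Docker Desktop and can be omitted):
main:
  command:
    python manage.py runserver 0.0.0.0:8000
  build:
    context: ./
    dockerfile: .docker/Dockerfile
    target: main
  environment:
    <<: *common-variables
  volumes:
    # Mount the working copy over the code baked into the image.
    - .:/code:delegated
  ports:
    - '8000:8000'
  env_file:
    - dev.env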

How can I get Django inside a docker compose network to recognize the postgres service name while using PyCharm?

I've been fiddling with my Dockerfile and Docker Compose to get PyCharm to run and debug my Django application. I've identified that I need all services to be on the same network, and have put them on one, but when I execute the runserver command through my PyCharm configuration, I get the error
django.db.utils.OperationalError: could not translate host name "postgres" to address: Name or service not known
I've tried adjusting the Dockerfile to set the environment variables inside it instead of in the compose file, as well as grouping all the services into the same network. I'm at a bit of a loss as to where to go from here, as most other Stack Overflow questions on this topic tell me to do things I've already tried.
docker-compose.yml
version: "2"
services:
redis:
image: redis:latest # redis
restart: always
logging:
driver: "none"
networks:
- integration
postgres:
image: postgres:latest
restart: always
ports:
- "5400:5399"
volumes:
- /usr/lib/postgresql
- ./tmp:/dmp:ro
- ./load.sh:/usr/app/load.sh
environment:
POSTGRES_USER: vagrant
POSTGRES_PASSWORD: vagrant
POSTGRES_HOST_AUTH_METHOD: trust
logging:
driver: none
networks:
- integration
django:
build: .
command: "/bin/bash"
working_dir: /usr/app/
stdin_open: true
tty: true
volumes:
- ./tmp:/tmp
- ./:/usr/app
- ~/.ssh:/root/.ssh:ro
links:
- redis:redis
- postgres:postgres
depends_on:
- postgres
environment:
DJANGO_SETTINGS_MODULE: project.settings.docker_local
DJANGO_DEBUG: "true"
SSH_AUTH_SOCK: /tmp/authsock
logging:
driver: none
networks:
- integration
networks:
integration:
driver: bridge
Dockerfile
FROM python:latest
RUN echo "installing requirements"
COPY requirements/ /usr/app/reqs/
RUN pip install -r /usr/app/reqs/base.txt
RUN pip install -r /usr/app/reqs/local.txt
ENV DJANGO_SETTINGS_MODULE=project.settings.docker_local
COPY . /usr/app/
RUN echo "done"
I have hooked up PyCharm with the remote interpreter living inside the django service, and it is able to recognize all of the installed packages.
I have a docker-compose configuration for building the containers within PyCharm, and a Django Server configuration for running the runserver command on the Django service.
Docker-Compose Config:
Django-Server Config:
Finally, here is the relevant portion of my settings file, docker_local.py
DATABASES = {
    'default': dj_database_url.config(
        default='postgres://vagrant:vagrant@postgres/vagrant'),
}
DATABASES['default']['ENGINE'] = 'django.contrib.gis.db.backends.postgis'
Any and all help or advice is greatly appreciated.

Deploying docker compose on aws using ecs-cli gives me errors

I get an internal error when I deploy my docker compose file to AWS using ecs-cli. The console reports that the service is up and running, and so does the AWS GUI, but when I try to open the link I get an internal error.
Dockerfile.txt
FROM clojure:openjdk-8-lein
RUN apt update && apt install -y git make python3 && apt clean
WORKDIR /opt
RUN mkdir my-project && cd my-project && git clone https://github.com/ThoughtWorksInc/infra-problem.git && cd infra-problem && make libs && make clean all
docker-compose.yml
version: "3"
services:
quotes:
image: selmensh/newsfeeds
build:
context: .
dockerfile: ./Dockerfile.txt
container_name: quotes
command: java -jar ./my-project/infra-problem//build/quotes.jar
environment:
- APP_PORT=9200
ports:
- 9200:9200
newsfeed:
image: selmensh/newsfeeds
build:
context: .
dockerfile: ./Dockerfile.txt
container_name: newsfeed
command: java -jar ./my-project/infra-problem/build/newsfeed.jar
environment:
- APP_PORT=5000
ports:
- 5000:5000
assets:
image: selmensh/newsfeeds
build:
context: .
dockerfile: ./Dockerfile.txt
container_name: assets
command: python3 ./my-project/infra-problem/front-end/public/serve.py
ports:
- 8000:8080
front-end:
image: selmensh/newsfeeds
build:
context: .
dockerfile: ./Dockerfile.txt
command: java -jar ./my-project/infra-problem/build/front-end.jar
environment:
- APP_PORT=8081
- STATIC_URL=http://assets:8000
- QUOTE_SERVICE_URL=http://quotes:9200
- NEWSFEED_SERVICE_URL=http://newsfeed:5000
- NEWSFEED_SERVICE_TOKEN=T1&eWbYXNWG1w1^YGKDPxAWJ#^et^&kX
depends_on:
- quotes
- newsfeed
ports:
- 80:8081
I also noticed that ECS does not support build, so I made an image and pushed it to Docker Hub. However, I see that this might have some security issues, since I clone the code in the Dockerfile. The reason I do this is that the code has a folder called utilities which is common and is required by all the other services.
Is there a better approach?
It would be great if you could provide more details; this looks like a problem in your application itself. One thing I can point you to is the Terraform tool, which lets you manage your infrastructure better (idempotency).

Developing with celery and docker

I have noticed that when developing with Celery in a container, with something like this:
celeryworker:
  build: .
  user: django
  command: celery -A project.celery worker -Q project -l DEBUG
  links:
    - redis
    - postgres
  depends_on:
    - redis
    - postgres
  env_file: .env
  environment:
    DJANGO_SETTINGS_MODULE: config.settings.celery
if I want to make changes to a Celery task, I have to completely rebuild the Docker image in order to pick up the latest changes.
So I tried:
docker-compose -f celery.yml down
docker-compose -f celery.yml up
Nothing changed, then:
docker-compose -f celery.yml down
docker-compose -f celery.yml build
docker-compose -f celery.yml up
I have the new changes.
Is this the way to do it? It seems very slow to me to rebuild the image every time. Is it better to run Celery locally, outside the Docker containers?
Mount your . (that is, your working copy) as a volume within the container you're developing in.
That way you're using the fresh code from your working directory without having to rebuild (unless, say, you're changing dependencies or something else that requires a rebuild).
The idea is explained here by Heroku, emphasis mine:
version: '2'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    env_file: .env
    depends_on:
      - db
    volumes:
      - ./webapp:/opt/webapp  # <--- Whatever code your Celery workers need should be here
  db:
    image: postgres:latest
    ports:
      - "5432:5432"
  redis:
    image: redis:alpine
    ports:
      - "6379:6379"

How do I change where docker-compose mounts volumes?

I have a docker-compose.yml file which I am trying to run inside of Google Cloud Container-Optimized OS (https://cloud.google.com/community/tutorials/docker-compose-on-container-optimized-os). Here's my docker-compose.yml file:
version: '3'
services:
  client:
    build: ./client
    volumes:
      - ./client:/usr/src/app
    ports:
      - "4200:4200"
      - "9876:9876"
    links:
      - api
    command: bash -c "yarn --pure-lockfile && yarn start"
  sidekiq:
    build: .
    command: bundle exec sidekiq
    volumes:
      - .:/api
    depends_on:
      - db
      - redis
      - api
  redis:
    image: redis
    ports:
      - "6379:6379"
  db:
    image: postgres
    ports:
      - "5433:5432"
  api:
    build: .
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
When I run docker-compose up, I eventually get the error:
Cannot start service api: error while creating mount source path '/rootfs/home/jeremy/my-repo': mkdir /rootfs: read-only file system
Reading further, it appears that /rootfs is locked down (https://cloud.google.com/container-optimized-os/docs/concepts/security), with only a few paths writable. I'd like to mount all my volumes under one of these directories, such as /home. Any suggestions on how I can do this? Is it possible to mount all my volumes under /home/xxxxxx by default, without having to change my docker-compose.yml file, for example by passing a flag to docker-compose up?