How do I change where docker-compose mounts volumes? - google-cloud-platform

I have a docker-compose.yml file which I am trying to run inside of Google Cloud Container-Optimized OS (https://cloud.google.com/community/tutorials/docker-compose-on-container-optimized-os). Here's my docker-compose.yml file:
version: '3'
services:
  client:
    build: ./client
    volumes:
      - ./client:/usr/src/app
    ports:
      - "4200:4200"
      - "9876:9876"
    links:
      - api
    command: bash -c "yarn --pure-lockfile && yarn start"
  sidekiq:
    build: .
    command: bundle exec sidekiq
    volumes:
      - .:/api
    depends_on:
      - db
      - redis
      - api
  redis:
    image: redis
    ports:
      - "6379:6379"
  db:
    image: postgres
    ports:
      - "5433:5432"
  api:
    build: .
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
When I run docker-compose up, I eventually get the error:
Cannot start service api: error while creating mount source path '/rootfs/home/jeremy/my-repo': mkdir /rootfs: read-only file system
Reading further, it appears that /rootfs is locked down (https://cloud.google.com/container-optimized-os/docs/concepts/security), with only a few paths writable. I'd like to mount all my volumes under one of these directories, such as /home. Any suggestions on how I can do this? Is it possible to mount all my volumes under /home/xxxxxx by default without having to change my docker-compose.yml file, for example by passing a flag to docker-compose up?
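One generic pattern (not necessarily the fix the poster settled on) is to parameterize the bind-mount sources with Compose variable substitution, so the writable base directory can be chosen at run time. It does require one small edit to the file, and APP_SRC below is a hypothetical variable name:
services:
  api:
    build: .
    volumes:
      - ${APP_SRC:-.}:/myapp   # falls back to the current directory if APP_SRC is unset
Running APP_SRC=/home/jeremy/my-repo docker-compose up would then mount the copy that lives under /home instead of the original path.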

Related

service refers to undefined volume

I want to share temp files between my Django project and a celery worker (it works with TemporaryUploadedFiles, so I want to have access to these files from the celery worker to manage them). I've read about shared volumes, so I tried to implement it in my docker-compose file and run it, but the command gave me this error:
$ docker compose up --build
service "web" refers to undefined volume shared_web_volume/: invalid compose project
And sometimes "web" is replaced with "celery", so neither celery nor django has access to this volume.
Here is my docker-compose.yml file:
volumes:
  shared_web_volume:
  postgres_data:
services:
  db:
    image: postgres:12.0-alpine
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - ./.env
  web:
    build:
      context: ./MoreEnergy
      dockerfile: Dockerfile
    entrypoint: sh ./entrypoint.sh
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - "shared_web_volume/:/MoreEnergy/"
    ports:
      - 1337:8000
    env_file:
      - ./.env
    depends_on:
      - db
  celery:
    build:
      context: ./MoreEnergy
      dockerfile: Dockerfile
    entrypoint: sh ./entrypoint.sh
    command: celery -A MoreEnergy worker --loglevel=info
    volumes:
      - "shared_web_volume/:/MoreEnergy/"
    env_file:
      - ./.env
    depends_on:
      - web
      - redis
  redis:
    image: redis:5-alpine
What am I doing wrong?
Update: the temp dir is my project folder (I've set it with the FILE_UPLOAD_TEMP_DIR variable in the settings file), so I don't need to make another volume just for shared temp files (if I do, tell me).
Your shared volume is named shared_web_volume, which is different from the name you use in the services' volumes sections, shared_web_volume/.
My suggestion is to drop the trailing forward slash so it looks like this:
volumes:
  - "shared_web_volume:/MoreEnergy/"
Don't forget to do the same thing in the celery container.
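For completeness, the celery service from the question would then look like this (only the volume name changes; everything else stays as posted):
celery:
  build:
    context: ./MoreEnergy
    dockerfile: Dockerfile
  entrypoint: sh ./entrypoint.sh
  command: celery -A MoreEnergy worker --loglevel=info
  volumes:
    - "shared_web_volume:/MoreEnergy/"   # no trailing slash on the volume name
  env_file:
    - ./.env
  depends_on:
    - web
    - redis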

Why does this docker compose file build the same image four times?

When I run docker-compose build on the following docker-compose file, which is for a django server with celery, it builds an identical image four times (for the web service, celeryworker, celerybeat and flower).
The entire process is repeated four times
I thought the point of inheriting from other service descriptions in docker-compose was so that you could reuse the same image for different services?
How can I reuse the web image in the other services, to reduce my build time by 75%?
version: '3'
services:
  web: &django
    image: myorganisation/myapp
    container_name: myapp_web
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
      # This is a multistage build installing private dependencies, hence this arg is needed
      args:
        PERSONAL_ACCESS_TOKEN_GITHUB: ${PERSONAL_ACCESS_TOKEN_GITHUB}
    command: /start
    volumes:
      - .:/app
    ports:
      - 8000:8000
    depends_on:
      - db
      - redis
    environment:
      - DJANGO_SETTINGS_MODULE=backend.settings.local
      - DATABASE_URL=postgres://postgres_user:postgres_password@db/postgres_db
      - REDIS_URL=redis://:redis_password@redis:6379
      - CELERY_FLOWER_USER=flower_user
      - CELERY_FLOWER_PASSWORD=flower_password
    env_file:
      - ./.env
  celeryworker:
    <<: *django
    container_name: myapp_celeryworker
    depends_on:
      - redis
      - db
    ports: []
    command: /start-celeryworker
  celerybeat:
    <<: *django
    container_name: myapp_celerybeat
    depends_on:
      - redis
      - db
    ports: []
    command: /start-celerybeat
  flower:
    <<: *django
    container_name: myapp_flower
    ports:
      - 5555:5555
    command: /start-flower
volumes:
  postgres_data:
    driver: local
  pgadmin_data:
    driver: local
Because you are specifying the build section in all the services (using the django anchor), it is getting built for every service.
If you want to use the same image for all services but build it only once, specify the build section in only one service, the one that starts first (i.e. a service with no dependencies). In the other services, specify just the image field without a build section, and make those services depend on the one that builds the image.
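A minimal sketch of that approach, reusing the image name from the question (the volumes, ports, environment and env_file blocks would be repeated in, or anchored into, the other services exactly as before, just without the build section):
version: '3'
services:
  web:
    image: myorganisation/myapp
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
      args:
        PERSONAL_ACCESS_TOKEN_GITHUB: ${PERSONAL_ACCESS_TOKEN_GITHUB}
    command: /start
    # ... volumes, ports, environment, env_file as in the question ...
  celeryworker:
    image: myorganisation/myapp   # reuse the image built by the web service
    command: /start-celeryworker
    depends_on:
      - web                       # web builds the image first
      - redis
      - db
    # ... environment, env_file as in the question ...
With this layout, docker-compose build only builds the web service, and the worker services start from the same locally built myorganisation/myapp tag.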

django-celery netcat spam within docker container

I am trying to run celery in a separate docker container alongside a django/redis docker setup.
When I run docker-compose up -d --build, my logs via docker-compose logs --tail=0 --follow show the celery_1 container spamming the console repeatedly with
Usage: nc [OPTIONS] HOST PORT - connect
nc [OPTIONS] -l -p PORT [HOST] [PORT] - listen
-e PROG Run PROG after connect (must be last)
-l Listen mode, for inbound connects
-lk With -e, provides persistent server
-p PORT Local port
-s ADDR Local address
-w SEC Timeout for connects and final net reads
-i SEC Delay interval for lines sent
-n Don't do DNS resolution
-u UDP mode
-v Verbose
-o FILE Hex dump traffic
-z Zero-I/O mode (scanning)
I am able to get celery working correctly by removing the celery service from docker-compose.yaml and manually running docker exec -it backend_1 celery -A proj -l info after docker-compose up -d --build. How do I replicate the functionality of this manual process within docker-compose.yaml?
My docker-compose.yaml looks like
version: '3.7'
services:
  backend:
    build: ./backend
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./backend/app/:/usr/src/app/
    ports:
      - 8000:8000
    env_file:
      - ./.env.dev
    depends_on:
      - db
      - redis
    links:
      - db:db
  celery:
    build: ./backend
    command: celery -A proj worker -l info
    volumes:
      - ./backend/app/:/usr/src/app/
    depends_on:
      - db
      - redis
  redis:
    image: redis:5.0.6-alpine
    command: redis-server
    expose:
      - "6379"
  db:
    image: postgres:12.0-alpine
    ports:
      - 5432:5432
    volumes:
      - /tmp/postgres_data:/var/lib/postgresql/data/
I found out the problem was that my celery service could not resolve the SQL host. This was because my SQL host is defined in .env.dev which the celery service did not have access to. I added
env_file:
  - ./.env.dev
to the celery service and everything worked as expected.
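In context, the celery service block then looks like this (everything else unchanged from the compose file above):
celery:
  build: ./backend
  command: celery -A proj worker -l info
  volumes:
    - ./backend/app/:/usr/src/app/
  env_file:
    - ./.env.dev
  depends_on:
    - db
    - redis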

How to set environment variables properly in GitLab CI/CD and Docker

I am new to Docker and to CI/CD with GitLab CI/CD. I have a .env file in the root directory of my Django project which contains my environment variables, e.g. SECRET_KEY=198191891. The .env file is included in .gitignore. I have set up these variables in the GitLab CI/CD settings; however, the environment variables set there seem to be unavailable.
Also, how should the GitLab CI/CD automation process create a user and a DB to connect to so it can run the tests? When creating the DB and its user on my local machine, I logged into the container with docker exec -it <postgres_container_name> /bin/sh and created the Postgres user and DB.
Here are my relevant files.
docker-compose.yml
version: "3"
services:
  postgres:
    image: postgres
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data/
  web:
    build: .
    command: /usr/local/bin/gunicorn writer.wsgi:application -w 2 -b :8000
    environment:
      DEBUG: ${DEBUG}
      DB_HOST: ${DB_HOST}
      DB_NAME: ${DB_NAME}
      DB_USER: ${DB_USER}
      DB_PORT: ${DB_PORT}
      DB_PASSWORD: ${DB_PASSWORD}
      SENDGRID_API_KEY: ${SENDGRID_API_KEY}
      AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
      AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
      AWS_STORAGE_BUCKET_NAME: ${AWS_STORAGE_BUCKET_NAME}
    depends_on:
      - postgres
      - redis
    expose:
      - "8000"
    volumes:
      - .:/writer-api
  redis:
    image: "redis:alpine"
  celery:
    build: .
    command: celery -A writer worker -l info
    volumes:
      - .:/writer-api
    depends_on:
      - postgres
      - redis
  celery-beat:
    build: .
    command: celery -A writer beat -l info
    volumes:
      - .:/writer-api
    depends_on:
      - postgres
      - redis
  nginx:
    restart: always
    build: ./nginx/
    ports:
      - "80:80"
    depends_on:
      - web
volumes:
  pgdata:
.gitlab-ci.yml
image: tmaier/docker-compose:latest
services:
  - docker:dind
before_script:
  - docker info
  - docker-compose --version
stages:
  - build
  - test
  - deploy
build:
  stage: build
  script:
    - echo "Building the app"
    - docker-compose build
test:
  stage: test
  variables:
  script:
    - echo "Testing"
    - docker-compose run web coverage run manage.py test
deploy-staging:
  stage: deploy
  only:
    - develop
  script:
    - echo "Deploying staging"
    - docker-compose up -d
deploy-production:
  stage: deploy
  only:
    - master
  script:
    - echo "Deploying production"
    - docker-compose up -d
Here are my settings for my variables
Here is my failed pipeline job
The SECRET_KEY variable will be available to all your CI jobs, as configured. However, I don't see any references to it in your Docker Compose file to pass it to one or more of your services. For the web service to use it, you'd map it in like the other variables you already have.
web:
  build: .
  command: /usr/local/bin/gunicorn writer.wsgi:application -w 2 -b :8000
  environment:
    SECRET_KEY: ${SECRET_KEY}
    DEBUG: ${DEBUG}
    …
As for creating the database, you should wrap up whatever you currently run interactively in the postgres container in a SQL file or shell script, and then bind-mount it into the container's initialization scripts directory under /docker-entrypoint-initdb.d. See the Initialization scripts section of the postgres image's description for more details.
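A minimal sketch of that, assuming a hypothetical ./db/init.sql containing the CREATE USER / CREATE DATABASE statements you currently run by hand (scripts in /docker-entrypoint-initdb.d only run the first time the data directory is initialized):
postgres:
  image: postgres
  ports:
    - "5432:5432"
  volumes:
    - pgdata:/var/lib/postgresql/data/
    - ./db/init.sql:/docker-entrypoint-initdb.d/init.sql  # hypothetical init script, runs on first init only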
In my experience, the best way to pass environment variables to a Docker container is to create an environment file; this works for both development and production environments:
GitLab CI/CD variables
You must create an environment file on GitLab CI/CD. In your GitLab project, go to:
Settings > CI/CD > Variables
There, create a variable named ENV_FILE (typically of type File, so that $ENV_FILE expands to a path the cp command below can read).
Next, in the build stage of your .gitlab-ci.yml, copy the ENV_FILE to a local .env file:
.gitlab-ci.yml
build:
  stage: build
  script:
    - cp $ENV_FILE .env
    - echo "Building the app"
    - docker-compose build
Your Dockerfile can stay as it is; it doesn't have to change:
Dockerfile
FROM python:3.8.6-slim
# Rest of setup goes here...
COPY .env .env
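For reference, the file stored in the ENV_FILE variable is just ordinary KEY=value lines; Compose also reads a .env file in the project directory for ${...} substitution, which is what makes the variables in the compose file above resolve. The values here are placeholders:
DEBUG=0
DB_HOST=postgres
DB_NAME=postgres_db
DB_USER=postgres_user
DB_PASSWORD=change-me
SECRET_KEY=198191891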
In order for Compose variable substitution to work when the user is not in the docker group, you must add the -E flag to sudo so the environment is preserved:
script:
  - sudo -E docker-compose build

Developing with celery and docker

I have noticed that when developing with celery in a container, with something like this:
celeryworker:
  build: .
  user: django
  command: celery -A project.celery worker -Q project -l DEBUG
  links:
    - redis
    - postgres
  depends_on:
    - redis
    - postgres
  env_file: .env
  environment:
    DJANGO_SETTINGS_MODULE: config.settings.celery
if I want to make changes to a celery task, I have to completely rebuild the Docker image in order to pick up the latest changes.
So I tried:
docker-compose -f celery.yml down
docker-compose -f celery.yml up
Nothing changed, then:
docker-compose -f celery.yml down
docker-compose -f celery.yml build
docker-compose -f celery.yml up
I have the new changes.
Is this the way to do it? It seems very slow to me, rebuilding the image all the time. Is it better to run celery locally, outside the Docker containers?
Mount your . (that is, your working copy) as a volume within the container you're developing in.
That way you're using the fresh code from your working directory without having to rebuild (unless, say, you're changing dependencies or something else that requires a rebuild).
The idea is explained here by Heroku, emphasis mine:
version: '2'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    env_file: .env
    depends_on:
      - db
    volumes:
      - ./webapp:/opt/webapp # <--- Whatever code your Celery workers need should be here
  db:
    image: postgres:latest
    ports:
      - "5432:5432"
  redis:
    image: redis:alpine
    ports:
      - "6379:6379"
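Applied to the celeryworker service from the question, that would look roughly like this (the container path /app is an assumption; use whatever WORKDIR your image's Dockerfile actually sets):
celeryworker:
  build: .
  user: django
  command: celery -A project.celery worker -Q project -l DEBUG
  depends_on:
    - redis
    - postgres
  env_file: .env
  environment:
    DJANGO_SETTINGS_MODULE: config.settings.celery
  volumes:
    - .:/app   # bind-mount the working copy; /app is assumed, match your Dockerfile's WORKDIR
Note that the worker process itself does not reload code automatically, so after editing a task you would still restart the container (docker-compose -f celery.yml restart celeryworker), which is much faster than rebuilding the image.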