Docker - Media volume not storing data [duplicate] - django

App Description
I have an app with a Django/Gunicorn back-end and a React/Nginx front-end, all containerized and hosted on an AWS EC2 instance.
Problem
In the development environment, media files are saved permanently in the 'media' directory. In production, however, those files are only saved inside the currently running Docker container, so they are lost when I rebuild or stop the container for a new code push.
Expectation
I want files stored in the 'media' folder to persist.
Important code
settings.py
BASE_DIR = Path(__file__).resolve().parent.parent
STATIC_ROOT = BASE_DIR / 'django_static'
STATIC_URL = '/django_static/'
MEDIA_ROOT = BASE_DIR / 'media'
MEDIA_URL = '/media/'
docker-compose-production.yml
version: "3.3"
services:
db:
image: postgres
restart: always #Prevent postgres from stopping the container
volumes:
- ./data/db:/var/lib/postgresql/data
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
ports:
- 5432:5432
nginx:
restart: unless-stopped
build:
context: .
dockerfile: ./docker/nginx/Dockerfile
ports:
- 80:80
- 443:443
volumes:
- static_volume:/code/backend/server/django_static
- ./docker/nginx/production:/etc/nginx/conf.d
- ./docker/nginx/certbot/conf:/etc/letsencrypt
- ./docker/nginx/certbot/www:/var/www/certbot
depends_on:
- backend
# Volume for certificate renewal
certbot:
image: certbot/certbot
restart: unless-stopped
volumes:
- ./docker/nginx/certbot/conf:/etc/letsencrypt
- ./docker/nginx/certbot/www:/var/www/certbot
entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
backend:
restart: unless-stopped
build:
context: .
dockerfile: ./docker/backend/Dockerfile
entrypoint: /code/docker/backend/wsgi-entrypoint.sh
volumes:
- .:/code
- static_volume:/code/backend/server/django_static
expose:
- 8000
depends_on:
- db
volumes:
static_volume: { }
pgdata: { }
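
For completeness, serving uploads over HTTP also requires an nginx location that maps MEDIA_URL to wherever the files are mounted. The config in ./docker/nginx/production is not shown, so this is an assumed fragment (the alias path follows from MEDIA_ROOT = BASE_DIR / 'media', with BASE_DIR at /code/backend/server inside the container):
# hypothetical fragment of the nginx server block
location /media/ {
    alias /code/backend/server/media/;
}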

I finally figured out the issue: I forgot to add .:/code to the nginx volumes in my docker-compose file. Thanks to this answer.
Updated nginx volumes config:
volumes:
  - .:/code
  - static_volume:/code/backend/server/django_static
  - ./docker/nginx/production:/etc/nginx/conf.d
  - ./docker/nginx/certbot/conf:/etc/letsencrypt
  - ./docker/nginx/certbot/www:/var/www/certbot
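
This works because the backend service already bind-mounts the project directory (.:/code), so files uploaded under MEDIA_ROOT were persisting on the host all along; nginx simply could not see them until it shared the same mount. An alternative sketch (my suggestion, not part of the original fix): give media its own named volume shared by both services, so persistence does not depend on mounting the whole source tree. This assumes MEDIA_ROOT resolves to /code/backend/server/media inside the containers:
services:
  backend:
    volumes:
      # hypothetical dedicated media volume
      - media_volume:/code/backend/server/media
  nginx:
    volumes:
      - media_volume:/code/backend/server/media
volumes:
  media_volume: { }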

Related

Project with django, docker, celery, redis giving error [ERROR/MainProcess] cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused

I'm trying to create a Django project with Celery and Redis for the messaging service using docker-compose. I'm getting Cannot connect to amqp://guest:**@127.0.0.1:5672. I'm not using guest as a user anywhere, nor 127.0.0.1:5672, and amqp is for RabbitMQ, which I'm not using either. So I don't know if my docker-compose volumes are set incorrectly so Celery can't read the settings, where it is getting amqp from, or whether the broker is misconfigured.
docker-compose.yml:
version: '3'
# network
networks:
  data:
  management:
volumes:
  postgres-data:
  redis-data:
services:
  nginx:
    image: nginx
    ports:
      - "7001:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
      - ../static:/static
    command: [nginx-debug, '-g', 'daemon off;']
    networks:
      - management
    depends_on:
      - web
  db:
    image: postgres:14
    restart: always
    volumes:
      - postgres-data:/var/lib/postgresql/data/
      - ../data:/docker-entrypoint-initdb.d # import SQL dump
    environment:
      - POSTGRES_DB=link_checker_db
      - POSTGRES_USER=link_checker
      - POSTGRES_PASSWORD=passw0rd
    networks:
      - data
    ports:
      - "5432:5432"
  web:
    image: link_checker_backend
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      - DJANGO_LOG_LEVEL=ERROR
      - INITIAL_YAML=/code/initial.yaml
    volumes:
      - ../:/code
      - ../link_checker:/code/link_checker
      - ../link_checker_django/:/code/link_checker_django
      - ./settings.py:/code/link_checker_django/settings.py
    working_dir: /code
    command: >
      sh -c "
      python manage.py migrate --noinput &&
      python manage.py collectstatic --no-input &&
      python manage.py runserver 0.0.0.0:7000
      "
    networks:
      - data
      - management
    depends_on:
      - db
  redis:
    image: redis
    volumes:
      - redis-data:/data
    networks:
      - data
  celery-default:
    image: link_checker_backend
    volumes:
      - ../:/code
      - ../link_checker:/code/link_checker
      - ../link_checker_django/:/code/link_checker_django
      - ./settings.py:/code/link_checker_django/settings.py
    working_dir: /code/link_checker
    command: celery -A celery worker --pool=prefork --concurrency=30 -l DEBUG
    networks:
      - data
    depends_on:
      - db
      - redis
celery.py
import os

from celery import Celery

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "link_checker_django.settings")
app = Celery("link_checker")
app.config_from_object("django.conf:settings")
app.conf.task_create_missing_queues = True
app.autodiscover_tasks()
settings.py
BROKER_URL = "redis://redis:6379/0"
CELERY_ACCEPT_CONTENT = ["json"]
CELERY_TASK_SERIALIZER = "json"
File structure:
link_checker_django
  deploy
    docker-compose.yml
  link_checker
    celery.py
  link_checker_django
    settings.py
  manage.py
Thanks for any help.
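
One diagnostic hint (my addition, not from the original question): amqp://guest:**@127.0.0.1:5672// is Celery's built-in default broker, so seeing it in the error usually means the worker never loaded the Django settings and therefore never saw BROKER_URL. A minimal check, assuming the layout above and that link_checker is importable from /code, is to print the broker the app actually resolves:
# check_broker.py -- hypothetical helper, run inside the container from /code
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "link_checker_django.settings")

import django
django.setup()

from link_checker.celery import app  # assumes link_checker/celery.py as in the file structure
print(app.conf.broker_url)  # should print redis://redis:6379/0, not the amqp default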

failed to resolve image name: short-name "caddy:2-alpine"

I get this error when running docker-compose up:
ERROR: failed to resolve image name: short-name "caddy:2-alpine" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Here is my docker-compose.yaml file:
version: "3"
#networks:
# web:
# external: true
# bridge:
# driver: bridge
services:
# CaddyServer reverse proxy
caddy:
restart: always
image: caddy:2-alpine
ports:
- "443:443"
command: caddy reverse-proxy --from https://xxxxxx.com --to http://0.0.0.0:8000
#volumes:
# - /local/path/to/Caddyfile:/path/inside/continer/to/Caddyfile
# networks:
# - web
# - bridge
# Django web app
django:
restart: always
build: .
ports:
- "80:8000"
depends_on:
- pgdb
#environment:
# - url=https://api.backend.example.com
#command: "gunicorn config.wsgi:application --bind 0.0.0.0:8000"
#networks:
# - bridge
pgdb:
image: postgres
container_name: pgdb
environment:
- POSTGRES_DB=xxxxx
- POSTGRES_USER=xxxx
- POSTGRES_PASSWORD=xxxx
volumes:
- pg-data:/var/lib/postgresql/data/
volumes:
pg-data:
I was getting this error: short-name "postgres" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf".
The problem was that Docker was not properly installed.
https://www.simplilearn.com/tutorials/docker-tutorial/how-to-install-docker-on-ubuntu
I followed this page and reinstalled Docker, and that solved it for me.
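
As an aside (my note, not the answerer's): this short-name error is produced by Podman's image resolution, which can surface when Podman is standing in for Docker, so a broken or mixed installation fits. Fully qualifying the image names also avoids the lookup entirely, without reinstalling:
# sketch: fully qualified references bypass unqualified-search registries
services:
  caddy:
    image: docker.io/library/caddy:2-alpine
  pgdb:
    image: docker.io/library/postgres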

Django/Docker/Postgresql: app connected to the 'wrong' database

I have developed a project with Django/Docker/PostgreSQL and use docker-compose to deploy it on a remote Linux server.
I want to deploy 2 apps based on the same code (and the same settings file), preprod and demo, with two distinct PostgreSQL databases (the databases are not dockerized): ecrf_covicompare_preprod and ecrf_covicompare_demo, respectively for preprod and demo.
The apps will be tested by different teams.
I have:
2 docker-compose files, docker-compose.preprod.yml and docker-compose.demo.yml, respectively for preprod and demo
.env files, .env.preprod and .env.preprod.demo, respectively for preprod and demo
The database connection parameters are set in these .env files.
But my 2 apps connect to the same database (ecrf_covicompare_preprod), even though if I connect to my 'web demo' container and print the environment variables I get SQL_DATABASE=ecrf_covicompare_demo, which is correct.
docker-compose.preprod.yml
version: '3.7'
services:
  web:
    restart: always
    container_name: ecrf_covicompare_web
    build:
      context: ./app
      dockerfile: Dockerfile.preprod
    command: gunicorn core.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - app_volume:/usr/src/app
      - static_volume:/usr/src/app/static
      - media_volume:/usr/src/app/media
    expose:
      - 8000
    env_file:
      - ./.env.preprod
    entrypoint: [ "/usr/src/app/entrypoint.preprod.sh" ]
    depends_on:
      - redis
    healthcheck:
      test: [ "CMD", "curl", "-f", "http://localhost:8000/" ]
      interval: 30s
      timeout: 10s
      retries: 50
  redis:
    container_name: ecrf_covicompare_redis
    image: "redis:alpine"
  celery:
    container_name: ecrf_covicompare_celery
    build:
      context: ./app
      dockerfile: Dockerfile.preprod
    command: celery -A core worker -l info
    volumes:
      - app_volume:/usr/src/app
    env_file:
      - ./.env.preprod
    depends_on:
      - web
      - redis
  celery-beat:
    container_name: ecrf_covicompare_celery-beat
    build:
      context: ./app
      dockerfile: Dockerfile.preprod
    command: celery -A core beat -l info
    volumes:
      - app_volume:/usr/src/app
    env_file:
      - ./.env.preprod
    depends_on:
      - web
      - redis
  nginx:
    container_name: ecrf_covicompare_nginx
    build: ./nginx
    restart: always
    volumes:
      - static_volume:/usr/src/app/static
      - media_volume:/usr/src/app/media
    ports:
      - 1370:80
    depends_on:
      - web
.env.preprod
SQL_DATABASE=ecrf_covicompare_preprod
SQL_USER=user_preprod
DATABASE=postgres
DJANGO_SETTINGS_MODULE=core.settings.preprod
docker-compose.demo.yml (simplified)
version: '3.7'
services:
  demo_web:
    container_name: ecrf_covicompare_web_demo
    //
    env_file:
      - ./.env.preprod.demo
    //
  demo_redis:
    container_name: ecrf_covicompare_redis_demo
    image: "redis:alpine"
  demo_celery:
    container_name: ecrf_covicompare_celery_demo
    //
    env_file:
      - ./.env.preprod.demo
    depends_on:
      - demo_web
      - demo_redis
  demo_celery-beat:
    container_name: ecrf_covicompare_celery-beat_demo
    //
    env_file:
      - ./.env.preprod.demo
    depends_on:
      - demo_web
      - demo_redis
  demo_nginx:
    container_name: ecrf_covicompare_nginx_demo
    //
    ports:
      - 1380:80
    depends_on:
      - demo_web
.env.preprod.demo
SQL_DATABASE=ecrf_covicompare_demo
SQL_USER=user_preprod
DATABASE=postgres
DJANGO_SETTINGS_MODULE=core.settings.preprod
I'm new to all the docker-compose stuff, but to me your configuration looks fine. A few ideas I had:
you mention two different PostgreSQL databases. Are those hosted on the same PostgreSQL server or on two different servers? In both .env files you set DATABASE=postgres. If they are running on the same server instance, I could imagine this leads to them using the same database, depending on how that variable is used later on.
are you sure the env variables are set in time? When you manually check them from inside the container they are set correctly, but are they also set while your containers are booting up? I'm no expert on how docker-compose handles these files, but you could try printing the env variables during container initialization from within some script, as sketched below.
are you completely sure it's not hardcoded somewhere? Maybe try searching all source files for the DB name they both connect to. I have failed with this far too often not to check it.
Hope this helps. It's a bit of a guess, but your configuration looks fine to me otherwise.
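
For the second idea, a minimal way to print the variables at boot (a sketch, assuming the entrypoint.preprod.sh referenced in the compose file can be edited):
#!/bin/sh
# top of entrypoint.preprod.sh: hypothetical debug lines that log the
# DB-related variables as the container starts, before gunicorn runs
echo "SQL_DATABASE=${SQL_DATABASE}"
echo "DJANGO_SETTINGS_MODULE=${DJANGO_SETTINGS_MODULE}"
# ... rest of the original entrypoint ...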

Django/SCSS: missing staticfiles manifest -> staticfiles.json with empty paths

EDIT 05/02/2021 10:45
I still have not found a solution to my issue.
Reading other posts shows that there are many possible causes for it.
Could someone help and explain to me how django_compressor is supposed to work?
For example:
Is it right that the manifest file is called staticfiles.json?
Is it abnormal that this file contains no paths?
Which paths should it contain?
...
EDIT 04/02/2021 14:00
I ran
python manage.py findstatic --verbosity 2 theme.scss
and got the output below. Does that mean the path is correct?
Found 'theme.scss' here:
/usr/src/app/static/theme.scss
Looking in the following locations:
/usr/src/app/static
EDIT 04/02/2021 13:38
I noticed that with DEBUG = True and runserver it works; I can customize Bootstrap.
I can also see staticfiles.json in /usr/src/app/static, but this file contains no paths: {"paths": {}, "version": "1.0"}
EDIT 04/02/2021 13:04
While running, the logs mention:
0 static files copied to '/usr/src/app/static'.
Found 'compress' tags in:
/usr/src/app/cafe/templates/cafe/table.html
...
But I have checked in the web container, and the static files are available at /usr/src/app/static as expected (see the docker-compose file).
I am trying to use SCSS in my Django project with django_compressor and django-libsass.
Stack: Django/Nginx/PostgreSQL/Docker
I have configured 2 development environments: dev and preprod.
I get this error: ValueError: missing staticfiles manifest entry for 'theme.scss'
I do not understand it, because this used to work; but after deleting containers/images/volumes and rebuilding the whole project, I get this error.
I have tried DEBUG = True, STATIC_ROOT = 'static'... but nothing works.
The logs only raise this error.
Project structure:
- app
  - core
  - static
    - bootstrap
      - css
      - js
    - theme.scss
- nginx
settings -> preprod.py
DEBUG = False
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, "static")
STATICFILES_FINDERS = [
    'compressor.finders.CompressorFinder',
]
COMPRESS_PRECOMPILERS = (
    ('text/x-scss', 'django_libsass.SassCompiler'),
)
COMPRESS_ENABLED = True
COMPRESS_OFFLINE = True
LIBSASS_OUTPUT_STYLE = 'compressed'
STATICFILES_STORAGE = 'django.contrib.staticfiles.storage.ManifestStaticFilesStorage'
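
A side note on the questions above (my observation, not from the thread): staticfiles.json is indeed the manifest that ManifestStaticFilesStorage writes into STATIC_ROOT during collectstatic, and an empty "paths" map means nothing was collected, which matches the "0 static files copied" log line. One plausible cause, hedged since the full settings are not shown: assigning STATICFILES_FINDERS replaces Django's defaults, so as written only the compressor finder runs. The usual form keeps the default finders:
STATICFILES_FINDERS = [
    'django.contrib.staticfiles.finders.FileSystemFinder',
    'django.contrib.staticfiles.finders.AppDirectoriesFinder',
    # django-compressor's finder, as in the original settings
    'compressor.finders.CompressorFinder',
]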
entrypoint.preprod.sh
python manage.py collectstatic --no-input
python manage.py compress --force
docker-compose.preprod.yml
version: '3.7'
services:
  web:
    restart: always
    container_name: web
    build:
      context: ./app
      dockerfile: Dockerfile.preprod
    command: gunicorn core.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - app_volume:/usr/src/app
      - static_volume:/usr/src/app/static
      - media_volume:/usr/src/app/media
    expose:
      - 8000
    env_file:
      - ./.env.preprod
    depends_on:
      - db
      - redis
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/"]
      interval: 30s
      timeout: 10s
      retries: 50
  redis:
    container_name: redis
    image: "redis:alpine"
  celery:
    container_name: celery
    build:
      context: ./app
      dockerfile: Dockerfile.preprod
    command: celery -A core worker -l info
    volumes:
      - app_volume:/usr/src/app
    env_file:
      - ./.env.preprod
    depends_on:
      db:
        condition: service_started
      web:
        condition: service_healthy
      redis:
        condition: service_started
  celery-beat:
    container_name: celery-beat
    build:
      context: ./app
      dockerfile: Dockerfile.preprod
    command: celery -A core beat -l info
    volumes:
      - app_volume:/usr/src/app
    env_file:
      - ./.env.preprod
    depends_on:
      db:
        condition: service_started
      web:
        condition: service_healthy
      redis:
        condition: service_started
  db:
    container_name: db
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
      - app_volume:/var/lib/postgresql/backup
    env_file:
      - ./.env.preprod.db
  nginx:
    container_name: nginx
    build: ./nginx
    restart: always
    volumes:
      - static_volume:/usr/src/app/static
      - media_volume:/usr/src/app/media
    ports:
      - 1340:80
    depends_on:
      web:
        condition: service_healthy
volumes:
  postgres_data:
  static_volume:
  media_volume:
  app_volume:

How do I fix my docker-compose.yml? Error installing Mayan-EDMS with Django

I am trying to install the Mayan-EDMS image alongside a Django app and a Postgres database using docker-compose, but each time I run docker-compose up it gives an error.
ERROR: yaml.parser.ParserError: while parsing a block mapping
in "./docker-compose.yml", line 8, column 3
expected <block end>, but found '<block mapping start>'
in "./docker-compose.yml", line 29, column 4
Here is my docker-compose.yml (indentation reproduced exactly as in my file). It uses postgres:11.4-alpine, redis:5.0-alpine, and mayanedms/mayanedms:3:
version: "3"
networks:
bridge:
driver: bridge
services:
app:
container_name: django
restart: always
build:
context: .
ports:
- "8000:8000"
volumes:
- ./app:/app
environment:
- DB_NAME=app
- DB_USER=insights
- DB_HOST=db
- DB_PORT=5432
depends_on:
- db
command: >
sh -c "mkdir -p logs media &&
python manage.py wait_for_db &&
python manage.py runserver 0.0.0.0:8000"
db:
image: postgres:11.4-alpine
container_name: postgres
volumes:
- postgres_data:/var/lib/postgresql/data/
environment:
- POSTGRES_USER=insights
- POSTGRES_DB=app
redis:
command:
- redis-server
- --appendonly
- "no"
- --databases
- "2"
- --maxmemory
- "100mb"
- --maxclients
- "500"
- --maxmemory-policy
- "allkeys-lru"
- --save
- ""
- --tcp-backlog
- "256"
- --requirepass
- "${MAYAN_REDIS_PASSWORD:-mayanredispassword}"
image: redis:5.0-alpine
networks:
- bridge
restart: unless-stopped
volumes:
- redis_data:/data
mayanedms:
image: mayanedms/mayanedms:3
container_name: mayanedms
restart: unless-stopped
ports:
- "80:8000"
depends_on:
- db
- redis
volumes:
- mayanedms_data:/var/lib/mayan
environment: &mayan_env
MAYAN_CELERY_BROKER_URL: redis://:${MAYAN_REDIS_PASSWORD:-mayanredispassword}#redis:6379/0
MAYAN_CELERY_RESULT_BACKEND: redis://:${MAYAN_REDIS_PASSWORD:-mayanredispassword}#redis:6379/1
MAYAN_DATABASES: "{'default':{'ENGINE':'django.db.backends.postgresql','NAME':'${MAYAN_DATABASE_DB:-mayan}','PASSWORD':'${MAYAN_DATABASE_PASSWORD:-mayandbpass}','USER':'${MAYAN_DATABASE_USER:-mayan}','HOST':'postgresql'}}"
MAYAN_DOCKER_WAIT: "db:5432 redis:6379"
networks:
- bridge
background_tasks:
restart: always
container_name: process_tasks
build:
context: .
depends_on:
- app
- db
environment:
- DB_NAME=app
- DB_USER=insights
- DB_HOST=db
- DB_PORT=5432
volumes:
- ./app:/app
command: >
sh -c "python manage.py process_tasks --sleep=3 --log-std --traceback"
volumes:
postgres_data:
redis_data:
mayanedms_data:
Thank you for any help.
The YAML indentation in your docker-compose.yml is wrong. YAML relies on space indentation to define structure, but the indentation for the db service uses 3 spaces where app uses 2. When parsing your file, Compose therefore interprets db (3 spaces) as a sub-component of app (2 spaces); it's as if you were doing:
services:
  app:
    ...
    db:
      ...
Or an equivalent in JSON:
"services": {
  "app": {
    "db": {
      ...
    }
  }
}
Where what you need is:
services:
  app:
    ...
  db:
    ...
Equivalent in JSON:
"services": {
  "app": {
    ...
  },
  "db": {
    ...
  }
}
The same issue applies to all the other service definitions, and to volumes: volumes must be a top-level element, but with the extra space it is read as a sub-component of services.
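
A general tip for catching this class of error early (my addition, not part of the answer): docker-compose config parses the file and prints the resolved configuration, failing with the same parser error on bad indentation, so you can validate before running up:
# validate and print the fully resolved compose file; exits non-zero on YAML errors
docker-compose -f docker-compose.yml config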