EDIT 05/02/2021 10:45
I still haven't found a solution to my issue.
Reading other posts shows that there are many possible causes for it.
Could someone help and explain to me how django_compressor is supposed to work?
For example:
is it right that the manifest file is called staticfiles.json?
is it abnormal that this file contains no paths?
which paths should it contain?
...
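For reference: ManifestStaticFilesStorage does name its manifest staticfiles.json and writes it into STATIC_ROOT. After a successful collectstatic it maps every collected file to its hashed copy, roughly like this (hash values are illustrative):
{
  "paths": {
    "theme.scss": "theme.2e8ba01ab599.scss",
    "bootstrap/css/bootstrap.css": "bootstrap/css/bootstrap.5d41402abc4b.css"
  },
  "version": "1.0"
}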
EDIT 04/02/2021 14:00
I run
python manage.py findstatic --verbosity 2 theme.scss
and get the output below. Does that mean the path is correct?
Found 'theme.scss' here:
/usr/src/app/static/theme.scss
Looking in the following locations:
/usr/src/app/static
EDIT 04/02/2021 13:38
I noticed that with DEBUG = True and runserver it works,
meaning I can customize Bootstrap,
and I can see staticfiles.json in /usr/src/app/static, but this file contains no paths: {"paths": {}, "version": "1.0"}
EDIT 04/02/2021 13:04
While it runs, the logs show:
0 static files copied to '/usr/src/app/static'.
Found 'compress' tags in:
/usr/src/app/cafe/templates/cafe/table.html
...
But I've checked inside the web container, and the static files are available at /usr/src/app/static as expected (see the docker-compose file).
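Note: collectstatic prints "0 static files copied" both when nothing was found and when nothing has changed since the last run. Forcing a clean re-collect distinguishes the two (this assumes STATIC_ROOT can safely be wiped):
# Inside the web container: clear STATIC_ROOT, then re-collect.
# If this still copies 0 files, the finders are finding nothing at all.
python manage.py collectstatic --no-input --clear --verbosity 2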
I'm trying to use SCSS in my Django project with django_compressor and django-libsass.
Stack: Django/Nginx/PostgreSQL/Docker
I have configured 2 development environments: dev and preprod
I get an error: ValueError: Missing staticfiles manifest entry for 'theme.scss'
I don't understand it, because this used to work; after I deleted my containers/images/volumes and rebuilt the whole project, I got this error.
I've tried DEBUG = True, STATIC_ROOT = 'static'... but nothing works.
The logs only show this error.
Project layout:
- app
  - core
  - static
    - bootstrap
      - css
      - js
    - theme.scss
- nginx
settings -> preprod.py
DEBUG = False
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, "static")
STATICFILES_FINDERS = [
    'compressor.finders.CompressorFinder',
]
COMPRESS_PRECOMPILERS = (
    ('text/x-scss', 'django_libsass.SassCompiler'),
)
COMPRESS_ENABLED = True
COMPRESS_OFFLINE = True
LIBSASS_OUTPUT_STYLE = 'compressed'
STATICFILES_STORAGE = 'django.contrib.staticfiles.storage.ManifestStaticFilesStorage'
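One note on the settings above: the django-compressor docs add compressor.finders.CompressorFinder to Django's default finders rather than replacing them. If STATICFILES_FINDERS really contains only the compressor finder, collectstatic has nothing to collect, which would be consistent with the "0 static files copied" log and the empty manifest. A sketch of the documented configuration:
STATICFILES_FINDERS = [
    # Django's defaults, which collectstatic and findstatic rely on
    'django.contrib.staticfiles.finders.FileSystemFinder',
    'django.contrib.staticfiles.finders.AppDirectoriesFinder',
    # django-compressor's finder, appended to the defaults
    'compressor.finders.CompressorFinder',
]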
entrypoint.preprod.sh
python manage.py collectstatic --no-input
python manage.py compress --force
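The order of those two commands matters: with COMPRESS_OFFLINE = True and ManifestStaticFilesStorage, compress resolves {% static %} references through the manifest that collectstatic writes, so collectstatic must run first. A slightly more defensive sketch of the same entrypoint (the set -e and exec lines are additions):
#!/bin/sh
set -e  # abort on the first failing command instead of continuing

# collectstatic writes staticfiles.json; compress reads it, so keep this order
python manage.py collectstatic --no-input
python manage.py compress --force

exec "$@"  # hand off to the container's CMD (e.g. gunicorn)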
docker-compose.preprod.yml
version: '3.7'
services:
  web:
    restart: always
    container_name: web
    build:
      context: ./app
      dockerfile: Dockerfile.preprod
    command: gunicorn core.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - app_volume:/usr/src/app
      - static_volume:/usr/src/app/static
      - media_volume:/usr/src/app/media
    expose:
      - 8000
    env_file:
      - ./.env.preprod
    depends_on:
      - db
      - redis
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/"]
      interval: 30s
      timeout: 10s
      retries: 50
  redis:
    container_name: redis
    image: "redis:alpine"
  celery:
    container_name: celery
    build:
      context: ./app
      dockerfile: Dockerfile.preprod
    command: celery -A core worker -l info
    volumes:
      - app_volume:/usr/src/app
    env_file:
      - ./.env.preprod
    depends_on:
      db:
        condition: service_started
      web:
        condition: service_healthy
      redis:
        condition: service_started
  celery-beat:
    container_name: celery-beat
    build:
      context: ./app
      dockerfile: Dockerfile.preprod
    command: celery -A core beat -l info
    volumes:
      - app_volume:/usr/src/app
    env_file:
      - ./.env.preprod
    depends_on:
      db:
        condition: service_started
      web:
        condition: service_healthy
      redis:
        condition: service_started
  db:
    container_name: db
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
      - app_volume:/var/lib/postgresql/backup
    env_file:
      - ./.env.preprod.db
  nginx:
    container_name: nginx
    build: ./nginx
    restart: always
    volumes:
      - static_volume:/usr/src/app/static
      - media_volume:/usr/src/app/media
    ports:
      - 1340:80
    depends_on:
      web:
        condition: service_healthy
volumes:
  postgres_data:
  static_volume:
  media_volume:
  app_volume:
Related
I'm trying to create a Django project with celery and redis for the messaging service using docker-compose. I'm getting Cannot connect to amqp://guest:**@127.0.0.1:5672. I'm not using guest as a user anywhere or 127.0.0.1:5672, and amqp is for RabbitMQ, which I'm not using either. So I don't know whether my docker-compose volumes are set incorrectly so that celery can't read the settings, where it is getting amqp from, or whether the broker is misconfigured.
docker-compose.yml:
version: '3'
# network
networks:
  data:
  management:
volumes:
  postgres-data:
  redis-data:
services:
  nginx:
    image: nginx
    ports:
      - "7001:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
      - ../static:/static
    command: [nginx-debug, '-g', 'daemon off;']
    networks:
      - management
    depends_on:
      - web
  db:
    image: postgres:14
    restart: always
    volumes:
      - postgres-data:/var/lib/postgresql/data/
      - ../data:/docker-entrypoint-initdb.d # import SQL dump
    environment:
      - POSTGRES_DB=link_checker_db
      - POSTGRES_USER=link_checker
      - POSTGRES_PASSWORD=passw0rd
    networks:
      - data
    ports:
      - "5432:5432"
  web:
    image: link_checker_backend
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      - DJANGO_LOG_LEVEL=ERROR
      - INITIAL_YAML=/code/initial.yaml
    volumes:
      - ../:/code
      - ../link_checker:/code/link_checker
      - ../link_checker_django/:/code/link_checker_django
      - ./settings.py:/code/link_checker_django/settings.py
    working_dir: /code
    command: >
      sh -c "
      python manage.py migrate --noinput &&
      python manage.py collectstatic --no-input &&
      python manage.py runserver 0.0.0.0:7000
      "
    networks:
      - data
      - management
    depends_on:
      - db
  redis:
    image: redis
    volumes:
      - redis-data:/data
    networks:
      - data
  celery-default:
    image: link_checker_backend
    volumes:
      - ../:/code
      - ../link_checker:/code/link_checker
      - ../link_checker_django/:/code/link_checker_django
      - ./settings.py:/code/link_checker_django/settings.py
    working_dir: /code/link_checker
    command: celery -A celery worker --pool=prefork --concurrency=30 -l DEBUG
    networks:
      - data
    depends_on:
      - db
      - redis
celery.py
import os

from celery import Celery

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "link_checker_django.settings")

app = Celery("link_checker")
app.config_from_object("django.conf:settings")
app.conf.task_create_missing_queues = True
app.autodiscover_tasks()
settings.py
BROKER_URL = "redis://redis:6379/0"
CELERY_ACCEPT_CONTENT = ["json"]
CELERY_TASK_SERIALIZER = "json"
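For what it's worth, the conventional Django/Celery wiring namespaces the Celery settings. If the worker that -A celery starts ends up without a broker setting under the name it expects, Celery silently falls back to its default broker, amqp://guest:**@127.0.0.1:5672, which is exactly the error above. A sketch of the namespaced variant (an assumption about this setup, not a confirmed fix):
# celery.py
import os

from celery import Celery

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "link_checker_django.settings")

app = Celery("link_checker")
# With namespace="CELERY", Celery reads only settings prefixed with CELERY_,
# e.g. CELERY_BROKER_URL instead of BROKER_URL.
app.config_from_object("django.conf:settings", namespace="CELERY")
app.autodiscover_tasks()
The matching settings.py entry would then be CELERY_BROKER_URL = "redis://redis:6379/0".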
File structure:
link_checker_django
  deploy
    docker-compose.yml
  link_checker
    celery.py
  link_checker_django
    settings.py
  manage.py
Thanks for any help.
App Description
I have an app with django-gunicorn for the back end and reactjs-nginx for the front end, all containerized and hosted on an AWS EC2 instance.
Problem
In the development environment, media files are saved permanently in the 'media' directory. In production, however, those files are only saved in the currently running docker container. As a result, the files are removed when I rebuild/stop the container for a new code push.
Expectation
I want to store the files in the 'media' folder for permanent use.
Important code
settings.py
ENV_PATH = Path(__file__).resolve().parent.parent
STATIC_ROOT = BASE_DIR / 'django_static'
STATIC_URL = '/django_static/'
MEDIA_ROOT = BASE_DIR / 'media/'
MEDIA_URL = '/media/'
docker-compose-production.yml
version: "3.3"
services:
db:
image: postgres
restart: always #Prevent postgres from stopping the container
volumes:
- ./data/db:/var/lib/postgresql/data
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
ports:
- 5432:5432
nginx:
restart: unless-stopped
build:
context: .
dockerfile: ./docker/nginx/Dockerfile
ports:
- 80:80
- 443:443
volumes:
- static_volume:/code/backend/server/django_static
- ./docker/nginx/production:/etc/nginx/conf.d
- ./docker/nginx/certbot/conf:/etc/letsencrypt
- ./docker/nginx/certbot/www:/var/www/certbot
depends_on:
- backend
# Volume for certificate renewal
certbot:
image: certbot/certbot
restart: unless-stopped
volumes:
- ./docker/nginx/certbot/conf:/etc/letsencrypt
- ./docker/nginx/certbot/www:/var/www/certbot
entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
backend:
restart: unless-stopped
build:
context: .
dockerfile: ./docker/backend/Dockerfile
entrypoint: /code/docker/backend/wsgi-entrypoint.sh
volumes:
- .:/code
- static_volume:/code/backend/server/django_static
expose:
- 8000
depends_on:
- db
volumes:
static_volume: { }
pgdata: { }
I finally figured out the issue: I forgot to add .:/code to my nginx volumes config in my docker-compose file, thanks to this answer.
Updated nginx volumes config:
volumes:
  - .:/code
  - static_volume:/code/backend/server/django_static
  - ./docker/nginx/production:/etc/nginx/conf.d
  - ./docker/nginx/certbot/conf:/etc/letsencrypt
  - ./docker/nginx/certbot/www:/var/www/certbot
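If the goal is to keep uploaded media across rebuilds as well, the same pattern used for django_static applies to the media directory: a named volume mounted into both backend and nginx. A sketch (media_volume and its mount path are illustrative, mirroring the static setup above):
services:
  backend:
    volumes:
      - .:/code
      - static_volume:/code/backend/server/django_static
      - media_volume:/code/backend/server/media   # survives container rebuilds
  nginx:
    volumes:
      - media_volume:/code/backend/server/media   # nginx serves /media/ from here
volumes:
  static_volume: { }
  media_volume: { }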
I have developed a project with Django/Docker/PostgreSQL and use docker-compose to deploy it on a remote Linux server.
I want to deploy 2 apps based on the same code (and the same settings file), preprod and demo, with two distinct PostgreSQL databases (the databases are not dockerized): ecrf_covicompare_preprod and ecrf_covicompare_demo, respectively for preprod and demo.
The apps will be tested by different teams.
I have:
2 docker-compose files, docker-compose.preprod.yml and docker-compose.demo.yml, respectively for preprod and demo
.env files, .env.preprod and .env.preprod.demo, respectively for preprod and demo
The database connection parameters are set in these .env files.
But my 2 apps connect to the same database (ecrf_covicompare_preprod).
If I connect to my 'web demo' container and print the environment variables, I get SQL_DATABASE=ecrf_covicompare_demo, which is correct.
docker-compose.preprod.yml
version: '3.7'
services:
  web:
    restart: always
    container_name: ecrf_covicompare_web
    build:
      context: ./app
      dockerfile: Dockerfile.preprod
    command: gunicorn core.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - app_volume:/usr/src/app
      - static_volume:/usr/src/app/static
      - media_volume:/usr/src/app/media
    expose:
      - 8000
    env_file:
      - ./.env.preprod
    entrypoint: [ "/usr/src/app/entrypoint.preprod.sh" ]
    depends_on:
      - redis
    healthcheck:
      test: [ "CMD", "curl", "-f", "http://localhost:8000/" ]
      interval: 30s
      timeout: 10s
      retries: 50
  redis:
    container_name: ecrf_covicompare_redis
    image: "redis:alpine"
  celery:
    container_name: ecrf_covicompare_celery
    build:
      context: ./app
      dockerfile: Dockerfile.preprod
    command: celery -A core worker -l info
    volumes:
      - app_volume:/usr/src/app
    env_file:
      - ./.env.preprod
    depends_on:
      - web
      - redis
  celery-beat:
    container_name: ecrf_covicompare_celery-beat
    build:
      context: ./app
      dockerfile: Dockerfile.preprod
    command: celery -A core beat -l info
    volumes:
      - app_volume:/usr/src/app
    env_file:
      - ./.env.preprod
    depends_on:
      - web
      - redis
  nginx:
    container_name: ecrf_covicompare_nginx
    build: ./nginx
    restart: always
    volumes:
      - static_volume:/usr/src/app/static
      - media_volume:/usr/src/app/media
    ports:
      - 1370:80
    depends_on:
      - web
.env.preprod
SQL_DATABASE=ecrf_covicompare_preprod
SQL_USER=user_preprod
DATABASE=postgres
DJANGO_SETTINGS_MODULE=core.settings.preprod
docker-compose.demo.yml (simplified)
version: '3.7'
services:
  demo_web:
    container_name: ecrf_covicompare_web_demo
    //
    env_file:
      - ./.env.preprod.demo
    //
  demo_redis:
    container_name: ecrf_covicompare_redis_demo
    image: "redis:alpine"
  demo_celery:
    container_name: ecrf_covicompare_celery_demo
    //
    env_file:
      - ./.env.preprod.demo
    depends_on:
      - demo_web
      - demo_redis
  demo_celery-beat:
    container_name: ecrf_covicompare_celery-beat_demo
    //
    env_file:
      - ./.env.preprod.demo
    depends_on:
      - demo_web
      - demo_redis
  demo_nginx:
    container_name: ecrf_covicompare_nginx_demo
    //
    ports:
      - 1380:80
    depends_on:
      - demo_web
.env.preprod.demo
SQL_DATABASE=ecrf_covicompare_demo
SQL_USER=user_preprod
DATABASE=postgres
DJANGO_SETTINGS_MODULE=core.settings.preprod
I'm new to all the docker-compose stuff, but to me your configuration looks fine. A few ideas I had:
you mention two different PostgreSQL databases. Are those hosted on the same PostgreSQL server or on two different servers? In both .env files you set DATABASE=postgres. If they are running on the same server instance, I could imagine this leads to them using the same database, depending on how this variable is used later on.
are you sure the env variables are set in time? Once you manually check them from inside the container they are set correctly, but are they also set while your containers are booting up? I'm no expert on how docker-compose handles these files, but maybe you could try printing the env variables during container initialization from within some script (see the sketch below).
are you completely sure it's not hardcoded somewhere? Maybe try searching all source files for the DB name they both connect to. I have failed with this far too often not to check it.
Hope this helps. It's a bit of a guess, but otherwise your configuration looks fine to me.
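For the second point, a minimal way to capture the boot-time values is a couple of echo lines at the top of the entrypoint script, so they end up in docker-compose logs (variable names taken from the .env files above):
# At the top of entrypoint.preprod.sh: print the values the container
# actually booted with, before Django starts.
echo "SQL_DATABASE=${SQL_DATABASE}"
echo "DJANGO_SETTINGS_MODULE=${DJANGO_SETTINGS_MODULE}"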
I have been working on this all day and I am completely confused.
I have created a Django project and am using docker and a docker-compose.yml to hold my environment variables. I was struggling to get the DEBUG variable to be False, and I have since found out that my SECRET_KEY isn't working either.
I added a print statement after the SECRET_KEY and it prints out (test), as that is what I currently have in the docker-compose.yml file, but this should fail to build...
If I hard-code DEBUG I can get it to change, but I have completely removed the secret key and the project still starts. Any ideas where Django could be pulling this from, or how I can trace it back to see?
settings.py
SECRET_KEY = os.environ.get('SECRET_KEY')
DEBUG = os.environ.get('DEBUG')
docker-compose.yml
version: '3.8'
services:
  web:
    build: .
    container_name: django
    command: gunicorn config.wsgi -b 0.0.0.0:8000
    environment:
      - ENVIRONMENT=development
      - SECRET_KEY=(test)
      - DEBUG=0
      - DB_USERNAME=(test)
      - DB_PASSWORD=(test)
    volumes:
      - .:/code
    ports:
      - 8000:8000
    depends_on:
      - db
      - redis
  celery:
    build: .
    image: celery
    container_name: celery
    command: celery -A config worker -l INFO
    volumes:
      - .:/code
    environment:
      - SECRET_KEY=(test)
      - DEBUG=0
      - DJANGO_ALLOWED_HOSTS=['127.0.0.1','localhost']
      - CELERY_BROKER=redis://redis:6379/0
      - CELERY_BACKEND=redis://redis:6379/0
    depends_on:
      - db
      - redis
  celery-beat:
    build: .
    environment:
      - SECRET_KEY=(test)
      - CELERY_BROKER=redis://redis:6379/0
      - CELERY_BACKEND=redis://redis:6379/0
The reason was that False/0 from the docker-compose.yml were being read in as strings, and a non-empty string evaluates to True.
To solve this, use:
DEBUG=eval(os.environ.get('DEBUG', 'False'))
or
DEBUG=int(os.environ.get('DEBUG', 0))
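A variant that avoids calling eval() on environment input, and treats any missing value as False (a common pattern rather than the only fix):
import os

# Environment variables are always strings; compare explicitly instead of eval().
DEBUG = os.environ.get('DEBUG', '0').lower() in ('1', 'true', 'yes')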
I am new to docker. I am having trouble deploying multiple containers at the same time; a race condition occurs. Every time I enter the docker-compose up --build command, elasticsearch or redis starts first, and the database starts and exits with error code 0, as do celery and nginx. I tried using the "sleep" command, but no luck (maybe I missed something). Here is my docker-compose.yml file:
version: "3"
services:
db:
image: postgres:9.6-alpine
container_name: myblogdb
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=password
- POSTGRES_DB=mydb
volumes:
- myblogdb_data:/var/lib/postgresql/data/
ports:
- "4949:5432"
web:
build: ./app
command: sh -c "gunicorn djangoApp.wsgi:application --bind 0.0.0.0:8000"
volumes:
- ./app:/usr/src/app/
- my_blog_static_volume:/usr/src/app/djangoApp/settings/staticfiles
- my_blog_media_volume:/usr/src/app/mediafiles
ports:
- "8000:8000"
depends_on:
- db
- redis
- es
nginx:
restart: always
build: ./nginx
volumes:
- my_blog_static_volume:/usr/src/app/djangoApp/settings/staticfiles
- my_blog_media_volume:/usr/src/app/mediafiles
ports:
- "1337:80"
depends_on:
- web
redis:
image: "redis:alpine"
es:
image: elasticsearch:5.6.15-alpine
environment:
- cluster.name=docker-cluster
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms256M -Xmx256M"
volumes:
- my_blog_esdata:/usr/share/elasticsearch/data/
ports:
- "9200:9200"
celery:
restart: always
build: ./app
command: sh -c "celery -A djangoApp worker -l info"
volumes:
- ./app:/usr/src/app/
depends_on:
- db
- redis
- web
celery-beat:
restart: always
build: ./app
command: sh -c "celery -A djangoApp beat -l info"
volumes:
- ./app:/usr/src/app/
depends_on:
- db
- redis
- web
volumes:
myblogdb_data:
my_blog_static_volume:
my_blog_media_volume:
my_blog_esdata:
Please let me know if I'm missing something here. Thanks
You need to add a script like wait-for-it or wait-for to control startup and shutdown order in compose; it basically tells a service to wait for another service before running its start command.
So if you want Django to wait for PostgreSQL, the command in docker-compose would be:
["./wait-for", "db:5432", "--", "gunicorn", "djangoApp.wsgi:application", "--bind", "0.0.0.0:8000"]
There is a full explanation in the following answer; it describes the approach for MySQL and Golang, but the same concept applies to your case.
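As an alternative to a wrapper script, recent docker-compose versions can gate startup on a real readiness probe instead (a sketch; the pg_isready check assumes the default postgres user from the compose file above):
services:
  db:
    image: postgres:9.6-alpine
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 10
  web:
    depends_on:
      db:
        condition: service_healthy  # wait until pg_isready succeeds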