Critical worker timeout - Django

I am building an app with Docker, Django and PostgreSQL. I was trying to parse through the rows of an Excel file (about 7000 rows) and got this error:
web_1 | [2019-09-26 23:39:34 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:11)
web_1 | [2019-09-26 23:39:34 +0000] [11] [INFO] Worker exiting (pid: 11)
web_1 | [2019-09-26 23:39:34 +0000] [12] [INFO] Booting worker with pid: 12
I searched for a solution and found a suggestion to increase the TIMEOUT, but I don't know where to set it.
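From what I've read, gunicorn kills any worker that stays silent for more than 30 seconds by default, and the limit is set with its --timeout flag, so I assume the change would look something like this in the compose command (not sure if this is right):

command: gunicorn bookstore_project.wsgi -b 0.0.0.0:8000 --timeout 120

Presumably the same flag would also be needed on the gunicorn line in heroku.yml's run: section.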
Here are my yml files.

docker-compose.yml:
version: '3.7'

services:
  web:
    build: .
    # command: python /code/manage.py runserver 0.0.0.0:8000
    command: gunicorn bookstore_project.wsgi -b 0.0.0.0:8000  # new
    environment:
      - SECRET_KEY=<secret_key>
      - DEBUG=1
      - ENVIRONMENT=development
    volumes:
      - .:/code
    ports:
      - 8000:8000
    depends_on:
      - db
  db:
    image: postgres:11
    volumes:
      - postgres_data:/var/lib/postgresql/data/

volumes:
  postgres_data:
docker-compose-prod.yml:
version: '3.7'

services:
  web:
    build: .
    command: python /code/manage.py runserver 0.0.0.0:8000
    environment:
      - ENVIRONMENT=production
      - SECRET_KEY=<Secret_key>
      - DEBUG=0
    ports:
      - 8000:8000
    depends_on:
      - db
  db:
    image: postgres:11
heroku.yml:
setup:
  addons:
    - plan: heroku-postgresql
build:
  docker:
    web: Dockerfile
release:
  image: web
  command:
    - python manage.py collectstatic --noinput
run:
  web: gunicorn bookstore_project.wsgi
Any help is appreciated!

Related

Project with django, docker, celery, redis giving error [ERROR/MainProcess] Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused

I'm trying to create a Django project with Celery and Redis for the messaging service using docker-compose. I'm getting Cannot connect to amqp://guest:**@127.0.0.1:5672. I'm not using guest as a user anywhere or 127.0.0.1:5672, and amqp is for RabbitMQ, but I'm not using RabbitMQ. So I don't know whether my docker-compose volumes are not set correctly for Celery to pick up the settings, where it is getting amqp from, or whether the broker is misconfigured.
docker-compose.yml:
version: '3'

# networks
networks:
  data:
  management:

volumes:
  postgres-data:
  redis-data:

services:
  nginx:
    image: nginx
    ports:
      - "7001:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
      - ../static:/static
    command: [nginx-debug, '-g', 'daemon off;']
    networks:
      - management
    depends_on:
      - web
  db:
    image: postgres:14
    restart: always
    volumes:
      - postgres-data:/var/lib/postgresql/data/
      - ../data:/docker-entrypoint-initdb.d  # import SQL dump
    environment:
      - POSTGRES_DB=link_checker_db
      - POSTGRES_USER=link_checker
      - POSTGRES_PASSWORD=passw0rd
    networks:
      - data
    ports:
      - "5432:5432"
  web:
    image: link_checker_backend
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      - DJANGO_LOG_LEVEL=ERROR
      - INITIAL_YAML=/code/initial.yaml
    volumes:
      - ../:/code
      - ../link_checker:/code/link_checker
      - ../link_checker_django/:/code/link_checker_django
      - ./settings.py:/code/link_checker_django/settings.py
    working_dir: /code
    command: >
      sh -c "
      python manage.py migrate --noinput &&
      python manage.py collectstatic --no-input &&
      python manage.py runserver 0.0.0.0:7000
      "
    networks:
      - data
      - management
    depends_on:
      - db
  redis:
    image: redis
    volumes:
      - redis-data:/data
    networks:
      - data
  celery-default:
    image: link_checker_backend
    volumes:
      - ../:/code
      - ../link_checker:/code/link_checker
      - ../link_checker_django/:/code/link_checker_django
      - ./settings.py:/code/link_checker_django/settings.py
    working_dir: /code/link_checker
    command: celery -A celery worker --pool=prefork --concurrency=30 -l DEBUG
    networks:
      - data
    depends_on:
      - db
      - redis
celery.py
import os

from celery import Celery

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "link_checker_django.settings")

app = Celery("link_checker")
app.config_from_object("django.conf:settings")
app.conf.task_create_missing_queues = True
app.autodiscover_tasks()
settings.py
BROKER_URL = "redis://redis:6379/0"
CELERY_ACCEPT_CONTENT = ["json"]
CELERY_TASK_SERIALIZER = "json"
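For context, when Celery does not find any broker configuration it falls back to its built-in default, amqp://guest:**@127.0.0.1:5672// (RabbitMQ), which is exactly the URL in the error. One possible explanation (an assumption, not a confirmed diagnosis): on Celery 4+, config_from_object("django.conf:settings") is normally combined with a CELERY_ settings namespace, and the old BROKER_URL name is then not read at all. A minimal sketch of that style:

# celery.py, assuming Celery 4+
import os

from celery import Celery

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "link_checker_django.settings")

app = Celery("link_checker")
# Only settings prefixed with CELERY_ are read, e.g. CELERY_BROKER_URL.
app.config_from_object("django.conf:settings", namespace="CELERY")
app.autodiscover_tasks()

with settings.py then using CELERY_BROKER_URL = "redis://redis:6379/0" instead of BROKER_URL.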
File structure:
link_checker_django/
  deploy/
    docker-compose.yml
  link_checker/
    celery.py
  link_checker_django/
    settings.py
  manage.py
Thanks, for any help.

How to pass persistent volume to postgres docker

I want to run PostgreSql database with docker, I created a docker-compose like below:
django:
  restart: always
  build: .
  ports:
    - "8000:8000"
  depends_on:
    - pgdb
  #environment:
  #  - url=https://api.backend.example.com
  #command: "gunicorn config.wsgi:application --bind 0.0.0.0:8000"
  #networks:
  #  - bridge
pgdb:
  image: postgres
  container_name: pgdb
  environment:
    - POSTGRES_DB=hbys_filyos
    - POSTGRES_USER=healmedy
    - POSTGRES_PASSWORD=mhacare1
After building, I run docker run -p 80:8000 surgery4:dev &.
I am getting the following error in terminal:
django.db.utils.OperationalError: could not translate host name "pgdb" to address: Try again
There is an indentation issue in your docker-compose file: django should be at the proper level.
django:
  restart: always
  build: .
  ports:
    - "8000:8000"
  depends_on:
    - pgdb
  #environment:
  #  - url=https://api.backend.example.com
  #command: "gunicorn config.wsgi:application --bind 0.0.0.0:8000"
  #networks:
  #  - bridge
pgdb:
  image: postgres
  container_name: pgdb
  volumes:
    - ./pg-data:/var/lib/postgresql/data
  environment:
    - POSTGRES_DB=hbys_filyos
    - POSTGRES_USER=healmedy
    - POSTGRES_PASSWORD=mhacare1
Also, you just need to execute docker-compose up -d from the directory where your docker-compose file is located.
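If a named volume is preferred over a bind-mounted host directory, it also needs a top-level volumes: declaration; a minimal sketch, assuming a version 2+ compose file:

services:
  pgdb:
    image: postgres
    volumes:
      # Named volume managed by Docker instead of a host path.
      - pg-data:/var/lib/postgresql/data

volumes:
  pg-data: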

Django/SCSS: missing staticfiles manifest -> staticfiles.json with empty paths

EDIT 05/02/2021 10:45
I still have not found a solution to my issue.
Reading other posts shows that there are many possible causes for it.
Could someone help and explain to me how django_compressor is supposed to work?
For example:
Is it right that the manifest file is called staticfiles.json?
Is it abnormal that this file contains no paths?
Which paths should it contain?
...
EDIT 04/02/2021 14:00
I ran
python manage.py findstatic --verbosity 2 theme.scss
and got the output below; does that mean the path is correct?
Found 'theme.scss' here:
/usr/src/app/static/theme.scss
Looking in the following locations:
/usr/src/app/static
EDIT 04/02/2021 13:38
I noticed that with DEBUG = True and runserver it works:
I mean I can customize Bootstrap,
and I can see staticfiles.json in /usr/src/app/static, but this file contains no paths: {"paths": {}, "version": "1.0"}
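For reference, a populated manifest written by ManifestStaticFilesStorage maps each original path to its hashed copy, roughly like this (hashes invented for illustration):

{
  "paths": {
    "theme.scss": "theme.55e7cbb9ba48.scss",
    "css/base.css": "css/base.5e0040571e1a.css"
  },
  "version": "1.0"
}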
EDIT 04/02/2021 13:04
While running, the logs mention:
0 static files copied to '/usr/src/app/static'.
Found 'compress' tags in:
/usr/src/app/cafe/templates/cafe/table.html
...
But I've checked inside the web container and the static files are available at /usr/src/app/static as expected (see the docker-compose file).
I'm trying to use SCSS in my Django project with django_compressor and django-libsass.
Stack: Django/Nginx/PostgreSQL/Docker
I have configured two environments: dev and preprod.
I get an error: ValueError: missing staticfiles manifest entry for 'theme.scss'
I don't understand it, because this used to work; but after deleting containers/images/volumes and rebuilding the whole project, I got this error.
I've tried DEBUG = True, STATIC_ROOT = 'static'... but nothing works.
The logs only raise this error.
Project structure:
- app
  - core
  - static
    - bootstrap
    - css
    - js
    - theme.scss
- nginx

settings -> preprod.py:
DEBUG = False

STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, "static")
STATICFILES_FINDERS = [
    'compressor.finders.CompressorFinder',
]

COMPRESS_PRECOMPILERS = (
    ('text/x-scss', 'django_libsass.SassCompiler'),
)
COMPRESS_ENABLED = True
COMPRESS_OFFLINE = True
LIBSASS_OUTPUT_STYLE = 'compressed'

STATICFILES_STORAGE = 'django.contrib.staticfiles.storage.ManifestStaticFilesStorage'
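One detail that stands out (an observation, not a confirmed fix): overriding STATICFILES_FINDERS with only CompressorFinder removes Django's default finders, which would explain the 0 static files copied log line above. django-compressor's documentation keeps the defaults alongside its finder, roughly:

STATICFILES_FINDERS = [
    'django.contrib.staticfiles.finders.FileSystemFinder',
    'django.contrib.staticfiles.finders.AppDirectoriesFinder',
    # django-compressor's finder, added on top of the defaults:
    'compressor.finders.CompressorFinder',
]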
entrypoint.preprod.sh
python manage.py collectstatic --no-input
python manage.py compress --force
docker-compose.preprod.yml
version: '3.7'

services:
  web:
    container_name: web
    restart: always
    build:
      context: ./app
      dockerfile: Dockerfile.preprod
    command: gunicorn core.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - app_volume:/usr/src/app
      - static_volume:/usr/src/app/static
      - media_volume:/usr/src/app/media
    expose:
      - 8000
    env_file:
      - ./.env.preprod
    depends_on:
      - db
      - redis
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/"]
      interval: 30s
      timeout: 10s
      retries: 50
  redis:
    container_name: redis
    image: "redis:alpine"
  celery:
    container_name: celery
    build:
      context: ./app
      dockerfile: Dockerfile.preprod
    command: celery -A core worker -l info
    volumes:
      - app_volume:/usr/src/app
    env_file:
      - ./.env.preprod
    depends_on:
      db:
        condition: service_started
      web:
        condition: service_healthy
      redis:
        condition: service_started
  celery-beat:
    container_name: celery-beat
    build:
      context: ./app
      dockerfile: Dockerfile.preprod
    command: celery -A core beat -l info
    volumes:
      - app_volume:/usr/src/app
    env_file:
      - ./.env.preprod
    depends_on:
      db:
        condition: service_started
      web:
        condition: service_healthy
      redis:
        condition: service_started
  db:
    container_name: db
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
      - app_volume:/var/lib/postgresql/backup
    env_file:
      - ./.env.preprod.db
  nginx:
    container_name: nginx
    build: ./nginx
    restart: always
    volumes:
      - static_volume:/usr/src/app/static
      - media_volume:/usr/src/app/media
    ports:
      - 1340:80
    depends_on:
      web:
        condition: service_healthy

volumes:
  postgres_data:
  static_volume:
  media_volume:
  app_volume:

Handling RACE CONDITION in Docker containers of a Django app which includes postgres, nginx, celery, redis, elasticsearch

I am new to Docker. I am having trouble deploying multiple containers at the same time; a race condition occurs. Every time I enter the docker-compose up --build command, elasticsearch or redis starts first, and the database starts and then exits with error code 0, as do celery and nginx. I tried using "sleep", but no luck (maybe I missed something). Here is my docker-compose.yml file:
version: "3"
services:
db:
image: postgres:9.6-alpine
container_name: myblogdb
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=password
- POSTGRES_DB=mydb
volumes:
- myblogdb_data:/var/lib/postgresql/data/
ports:
- "4949:5432"
web:
build: ./app
command: sh -c "gunicorn djangoApp.wsgi:application --bind 0.0.0.0:8000"
volumes:
- ./app:/usr/src/app/
- my_blog_static_volume:/usr/src/app/djangoApp/settings/staticfiles
- my_blog_media_volume:/usr/src/app/mediafiles
ports:
- "8000:8000"
depends_on:
- db
- redis
- es
nginx:
restart: always
build: ./nginx
volumes:
- my_blog_static_volume:/usr/src/app/djangoApp/settings/staticfiles
- my_blog_media_volume:/usr/src/app/mediafiles
ports:
- "1337:80"
depends_on:
- web
redis:
image: "redis:alpine"
es:
image: elasticsearch:5.6.15-alpine
environment:
- cluster.name=docker-cluster
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms256M -Xmx256M"
volumes:
- my_blog_esdata:/usr/share/elasticsearch/data/
ports:
- "9200:9200"
celery:
restart: always
build: ./app
command: sh -c "celery -A djangoApp worker -l info"
volumes:
- ./app:/usr/src/app/
depends_on:
- db
- redis
- web
celery-beat:
restart: always
build: ./app
command: sh -c "celery -A djangoApp beat -l info"
volumes:
- ./app:/usr/src/app/
depends_on:
- db
- redis
- web
volumes:
myblogdb_data:
my_blog_static_volume:
my_blog_media_volume:
my_blog_esdata:
Please let me know if I'm missing something here. Thanks
You need to add a script like wait-for-it or wait-for in order to control startup and shutdown order in Compose; it basically makes a service wait for another service to be reachable before running its start command.
So if you want Django to wait for PostgreSQL, the command in docker-compose would be:
["./wait-for", "db:5432", "--", "gunicorn", "djangoApp.wsgi:application", "--bind", "0.0.0.0:8000"]
There is a full explanation in the following answer; it describes the setup for MySQL and Golang, but the same concept applies to your case.
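A minimal sketch of how the web service definition might look with that command (assuming the wait-for script has been copied into the image and netcat is available inside it, which the script requires):

web:
  build: ./app
  # Block until Postgres accepts TCP connections, then start gunicorn.
  command: ["./wait-for", "db:5432", "--", "gunicorn", "djangoApp.wsgi:application", "--bind", "0.0.0.0:8000"]
  depends_on:
    - db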

Why do other Docker containers not see the applied migrations?

docker-compose.yml
version: '3'

services:
  # Django web server
  web:
    volumes:
      - "./app/back:/app"
      - "../front/public/static:/app/static"
      - "./phantomjs-2.1.1:/app/phantomjs"
    build:
      context: .
      dockerfile: dockerfile_django
    #command: python manage.py runserver 0.0.0.0:8080
    #command: ["uwsgi", "--ini", "/app/back/uwsgi.ini"]
    ports:
      - "8080:8080"
    links:
      - async
      - ws_server
      - mysql
      - redis
  async:
    volumes:
      - "./app/async_web:/app"
    build:
      context: .
      dockerfile: dockerfile_async
    ports:
      - "8070:8070"
  # Aiohttp web socket server
  ws_server:
    volumes:
      - "./app/ws_server:/app"
    build:
      context: .
      dockerfile: dockerfile_ws_server
    ports:
      - "8060:8060"
  # MySQL db
  mysql:
    image: mysql/mysql-server:5.7
    volumes:
      - "./db_mysql:/var/lib/mysql"
      - "./my.cnf:/etc/my.cnf"
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_USER: user_b520
      MYSQL_PASSWORD: buzz_17KN
      MYSQL_DATABASE: dev_NT_pr
      MYSQL_PORT: 3306
    ports:
      - "3300:3306"
  # Redis
  redis:
    image: redis:4.0.6
    build:
      context: .
      dockerfile: dockerfile_redis
    volumes:
      - "./redis.conf:/usr/local/etc/redis/redis.conf"
    ports:
      - "6379:6379"
  # Celery worker
  celery:
    build:
      context: .
      dockerfile: dockerfile_celery
    command: celery -A backend worker -l info --concurrency=20
    volumes:
      - "./app/back:/app"
      - "../front/public/static:/app/static"
    links:
      - redis
  # Celery beat
  beat:
    build:
      context: .
      dockerfile: dockerfile_beat
    command: celery -A backend beat
    volumes:
      - "./app/back:/app"
      - "../front/public/static:/app/static"
    links:
      - redis
  # Flower monitoring
  flower:
    build:
      context: .
      dockerfile: dockerfile_flower
    command: celery -A backend flower
    volumes:
      - "./app/back:/app"
      - "../front/public/static:/app/static"
    ports:
      - "5555:5555"
    links:
      - redis
dockerfile_django
FROM python:3.4
RUN mkdir /app
WORKDIR /app
ADD app/back/requirements.txt /app
RUN pip3 install -r requirements.txt
# Apply migrations
CMD ["python", "manage.py", "migrate"]
#CMD python manage.py runserver 0.0.0.0:8080 & cron && tail -f /var/log/cron.log
CMD ["uwsgi", "--ini", "/app/uwsgi.ini"]
In the web container the migrations are applied and everything works.
I also added CMD ["python", "manage.py", "migrate"] to dockerfile_celery, dockerfile_flower, and dockerfile_beat, but the migrations are not applied there.
I restart the containers using the command:
docker-compose up --force-recreate
How do I make the rest of the containers see the migrations?
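One general Docker fact that may explain this (not specific to this project): only the last CMD in a Dockerfile takes effect, so in dockerfile_django above the CMD ["python", "manage.py", "migrate"] line is silently ignored in favor of the uwsgi CMD, and the same would happen in the celery/flower/beat Dockerfiles. A hedged sketch that chains both steps into the single effective CMD:

# Run migrations first, then start uwsgi, in one CMD.
CMD ["sh", "-c", "python manage.py migrate && uwsgi --ini /app/uwsgi.ini"]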
log
flower_1 | File "/usr/local/lib/python3.4/site-packages/MySQLdb/connections.py", line 292, in query
flower_1 | _mysql.connection.query(self, query)
flower_1 | django.db.utils.OperationalError: (1054, "Unknown column 'api_communities.is_closed' in 'field list'")