I am new to Docker and I want to dockerize my Django application for production.
My OS: Ubuntu 20.04.1 LTS,
Docker version 20.10.5, build 55c4c88,
My docker-compose.yml file is below:
version: '3.1'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    restart: "always"
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./nginx/vhost/:/etc/nginx/vhost.d:ro
      - ./nginx/conf.d/client_max_body_size.conf:/etc/nginx/conf.d/client_max_body_size.conf:ro
      - ./static/:/amd/static
      - ./media/:/amd/media
  postgres:
    image: postgres:9.6.6
    restart: always
    volumes:
      - ./pgdb/:/var/lib/postgresql/
    ports:
      - "5432:5432"
    env_file: ./.env
  redis:
    image: redis
    ports:
      - 6379:6379
    restart: always
  web:
    container_name: amd
    build: .
    restart: "always"
    ports:
      - "8000:8000"
    volumes:
      - .:/code/
      # - ./static/:/code/static
      # - ./media/:/code/media
    depends_on:
      - "postgres"
    env_file: .env
  celery:
    build:
      context: .
      dockerfile: celery.dockerfile
    volumes:
      - .:/code
    command: celery -A amdtelecom worker -l info
    links:
      - redis
      - postgres
    depends_on:
      - "redis"
      - "postgres"
    env_file: ./.env
networks:
  default:
    external:
      name: nginx-proxy
My Dockerfile is below:
FROM python:3.7
ENV PYTHONUNBUFFERED 1
ENV DEBUG False
COPY requirements.txt /code/requirements.txt
WORKDIR /code
RUN pip install -r requirements.txt
ADD . .
CMD [ "gunicorn", "--bind", "0.0.0.0:8000", "amdtelecom.wsgi" ]
In my project settings file:
CORS_ALLOWED_ORIGINS = [
    "http://localhost:8000",
    "http://127.0.0.1:8000",
    "http://localhost",
]
STATIC_URL = '/static/'
if PROD:
    STATIC_ROOT = os.path.join(BASE_DIR, 'static')
else:
    STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
    STATICFILES_DIRS = [
        BASE_DIR / "static",
    ]
MEDIA_ROOT = os.path.join(BASE_DIR, "media")
MEDIA_URL = "/media/"
SITE_URL = 'http://localhost:80'
I run the following steps in my terminal:
docker-compose up -d --build
docker ps -a
docker exec -it 'my gunicorn id' bash
./manage.py migrate
./manage.py collectstatic
./manage.py loaddata data.json
Then I try to open the project in Chrome at the URL localhost, and the project opens without its static files (no JS, no CSS).
And in my Docker logs (docker-compose logs -f):
2021/03/25 10:51:32 [error] 51#51: *18 open() "/amd/static/images/fashion/product/13.jpg" failed (13: Permission denied), client: 172.23.0.1, server: localhost, request:
"GET /static/images/fashion/product/13.jpg HTTP/1.1", host: "localhost", referrer: "http://localhost/"
One note about my code: this application already works fine on macOS, where the static files load correctly. I think my problem is related to my Linux OS.
How can I solve this problem?
Thanks in advance. For additional information about my code, here is the GitHub repo:
The problem probably lies in permissions, more exactly in file ownership. You can solve it easily (but not very securely) with chmod 777 on all files in the static (or media) volume.
The second option is to chown the files to the container user, as described here https://stackoverflow.com/a/66910717/7113416 or probably even better here https://stackoverflow.com/a/56904335/7113416
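A less drastic middle ground than chmod 777 is a+rX, which only adds read permission on files and traverse (execute) permission on directories — enough for the nginx worker to serve the bind-mounted files. A runnable sketch on a scratch copy of the layout from the error log (run it in an empty directory):

```shell
# Recreate the structure from the log and open it up for reading.
# a+rX = everyone may read files and enter directories, nothing more.
mkdir -p static/images media
touch static/images/13.jpg
chmod -R a+rX static media
ls -l static/images/13.jpg
```

Running the same chmod against the real ./static and ./media directories on the host should clear the "(13: Permission denied)" errors, since the bind mount shares the host's permission bits with the container.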
Related
I have the following setup:
docker-compose.yml
# Mentioning which format of dockerfile
version: "3.9"
# services or nicknamed the container
services:
  # web service for the web
  web:
    # you should use the --build flag for every node package added
    build: .
    # Add additional commands for webpack to 'watch for changes and bundle it to production'
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - type: bind
        source: .
        target: /code
    ports:
      - "8000:8000"
    depends_on:
      - db
    environment:
      - "DJANGO_SECRET_KEY=django-insecure-m#x2vcrd_2un!9b4la%^)ou&hcib&nc9fvqn0s23z%i1e5))6&"
      - "DJANGO_DEBUG=True"
    expose:
      - 8000
  db:
    image: postgres:13
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    # unsure of what this environment means.
    environment:
      - "POSTGRES_HOST_AUTH_METHOD=trust"
      # - "POSTGRES_USER=postgres"
# Volumes set up
volumes:
  postgres_data:
and a settings file with:
ALLOWED_HOSTS = ['0.0.0.0', 'localhost', '127.0.0.1']
#127.0.0.1 is my localhost address.
My host's IP is 192.168.0.214.
Can you please help me deploy the Django site on my host's local network?
Do I have to set up something on my router?
Or could you direct me towards resources (on networking fundamentals) that will help me understand this?
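For what it's worth, a prerequisite for serving the site to other machines on the LAN (independent of any router configuration) is that the host's IP is listed in ALLOWED_HOSTS; a minimal sketch using the 192.168.0.214 address from the question:

```python
# settings.py fragment -- 192.168.0.214 is the host IP from the question;
# adjust it for your own network.
ALLOWED_HOSTS = ['0.0.0.0', 'localhost', '127.0.0.1', '192.168.0.214']
```

Other devices on the network would then browse to http://192.168.0.214:8000/, since the compose file already publishes port 8000 and runserver binds 0.0.0.0.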
App Description
I have an app with a django-gunicorn back end and a reactjs-nginx front end, all containerized and hosted on an AWS EC2 instance.
Problem
In the development environment, media files are saved permanently in the 'media' directory. However, in production those files are only saved inside the currently running Docker container. As a result, the files are removed when I rebuild or stop the container for a new code push.
Expectation
I want to store the files in the 'media' folder permanently.
Important code
settings.py
ENV_PATH = Path(__file__).resolve().parent.parent
STATIC_ROOT = BASE_DIR / 'django_static'
STATIC_URL = '/django_static/'
MEDIA_ROOT = BASE_DIR / 'media/'
MEDIA_URL = '/media/'
docker-compose-production.yml
version: "3.3"
services:
  db:
    image: postgres
    restart: always  # Prevent postgres from stopping the container
    volumes:
      - ./data/db:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    ports:
      - 5432:5432
  nginx:
    restart: unless-stopped
    build:
      context: .
      dockerfile: ./docker/nginx/Dockerfile
    ports:
      - 80:80
      - 443:443
    volumes:
      - static_volume:/code/backend/server/django_static
      - ./docker/nginx/production:/etc/nginx/conf.d
      - ./docker/nginx/certbot/conf:/etc/letsencrypt
      - ./docker/nginx/certbot/www:/var/www/certbot
    depends_on:
      - backend
  # Volumes for certificate renewal
  certbot:
    image: certbot/certbot
    restart: unless-stopped
    volumes:
      - ./docker/nginx/certbot/conf:/etc/letsencrypt
      - ./docker/nginx/certbot/www:/var/www/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
  backend:
    restart: unless-stopped
    build:
      context: .
      dockerfile: ./docker/backend/Dockerfile
    entrypoint: /code/docker/backend/wsgi-entrypoint.sh
    volumes:
      - .:/code
      - static_volume:/code/backend/server/django_static
    expose:
      - 8000
    depends_on:
      - db
volumes:
  static_volume: {}
  pgdata: {}
I finally figured out the issue. I had forgotten to add .:/code to the nginx volumes config in my docker-compose file. Thanks to this answer.
Updated nginx volumes config:
volumes:
  - .:/code
  - static_volume:/code/backend/server/django_static
  - ./docker/nginx/production:/etc/nginx/conf.d
  - ./docker/nginx/certbot/conf:/etc/letsencrypt
  - ./docker/nginx/certbot/www:/var/www/certbot
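A side note on the original goal: uploaded media will have the same rebuild problem unless it also lives in a volume rather than the container's writable layer. One way to do that, mirroring the existing static_volume (the media_volume name and the /code/backend/server/media path are assumptions, not from the question):

```yaml
# docker-compose-production.yml fragment (assumed volume name and path)
services:
  backend:
    volumes:
      - media_volume:/code/backend/server/media
  nginx:
    volumes:
      - media_volume:/code/backend/server/media
volumes:
  media_volume:
```

With both containers mounting the same named volume, nginx can serve files the backend saves, and the data survives `docker-compose down` and rebuilds.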
I have a Django project that uses django-elasticsearch-dsl. The project is dockerized, so Elasticsearch and the web project live in separate containers.
Now my goal is to recreate and repopulate the indices running
python manage.py search_index --rebuild
In order to do that, I try to run the command from the container of the web service the following way:
docker-compose exec web /bin/bash
> python manage.py search_index --rebuild
Not surprisingly, I get an error:
Failed to establish a new connection: [Errno 111] Connection refused)
apparently because Python tried to connect to Elasticsearch using localhost:9200.
So the question is: how do I tell the management command the host where Elasticsearch lives?
Here's my docker-compose.yml file:
version: '2'
services:
  web:
    build: .
    restart: "no"
    command: ["python3", "manage.py", "runserver", "0.0.0.0:8000"]
    env_file: &envfile
      - .env
    environment:
      - DEBUG=True
    ports:
      - "${DJANGO_PORT}:8000"
    networks:
      - deploy_network
    depends_on:
      - elasticsearch
      - db
  elasticsearch:
    image: 'elasticsearch:2.4.6'
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - deploy_network
  db:
    image: "postgres"
    container_name: "postgres"
    restart: "no"
    env_file: *envfile
    ports:
      - "5432:5432"
    volumes:
      - db_data:/var/lib/postgresql/data
volumes:
  db_data:
networks:
  deploy_network:
    driver: bridge
UPDATE:
In the Django project's settings I set up the elasticsearch-dsl host:
# settings.py
ELASTICSEARCH_DSL = {
    'default': {
        'hosts': 'localhost:9200'
    }
}
Since your Django project and Elasticsearch are in two separate containers, setting ELASTICSEARCH_DSL's host to 'localhost:9200' won't work; in that case localhost refers to localhost inside the Django container.
So you need to set it like this:
# settings.py
ELASTICSEARCH_DSL = {
    'default': {
        'hosts': 'elasticsearch:9200'
    }
}
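One refinement worth considering (my suggestion, not part of the original answer): read the host from an environment variable, so the same settings module works both inside Compose and on a bare machine:

```python
import os

# ELASTICSEARCH_HOST is an assumed variable name. Inside Compose it can be
# left unset -- the service name 'elasticsearch' resolves via Docker's
# embedded DNS. Outside Docker, export ELASTICSEARCH_HOST=localhost:9200.
ELASTICSEARCH_DSL = {
    'default': {
        'hosts': os.environ.get('ELASTICSEARCH_HOST', 'elasticsearch:9200'),
    },
}
```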
EDIT 05/02/2021 10:45
I still have not found a solution to my issue.
Reading other posts shows that there are many possible causes for this problem.
Could someone help me and explain how django_compressor is supposed to work?
For example:
is it right that the manifest file is called staticfiles.json?
Is it abnormal that this file contains no paths?
Which paths should it contain?
...
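On the manifest questions: with ManifestStaticFilesStorage, the manifest is indeed written to staticfiles.json inside STATIC_ROOT, and its "paths" dict should map each collected source name to its hashed name. An illustrative entry (the hash below is made up for demonstration):

```python
import json

# Roughly what a healthy staticfiles.json looks like -- the hash is
# invented. An empty "paths" dict means post-processing recorded no
# files, which matches the missing-manifest-entry error for 'theme.scss'.
manifest = json.loads(
    '{"paths": {"theme.scss": "theme.8a9f0c2b41d3.scss"}, "version": "1.0"}'
)
print(manifest["paths"]["theme.scss"])
```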
EDIT 04/02/2021 14:00
I ran
python manage.py findstatic --verbosity 2 theme.scss
and got the output below; does that mean the path is correct?
Found 'theme.scss' here:
  /usr/src/app/static/theme.scss
Looking in the following locations:
  /usr/src/app/static
EDIT 04/02/2021 13:38
I mentioned that with DEBUG = True and runserver it works, i.e. I can customize Bootstrap.
I can also see staticfiles.json in /usr/src/app/static, but this file contains no paths: {"paths": {}, "version": "1.0"}
EDIT 04/02/2021 13:04
While running, the logs mention:
0 static files copied to '/usr/src/app/static'.
Found 'compress' tags in:
/usr/src/app/cafe/templates/cafe/table.html
...
But I've checked in the web container, and the static files are available at /usr/src/app/static as expected (see the docker-compose file).
I am trying to use SCSS in my Django project using django_compressor and django-libsass.
Stack: Django/Nginx/PostgreSQL/Docker
I have configured two environments: dev and preprod.
I get this error: ValueError: missing staticfiles manifest entry for 'theme.scss'
I don't understand it, because everything used to work; after I deleted my containers/images/volumes and rebuilt the whole project, I got this error.
I've tried DEBUG = True, STATIC_ROOT = 'static'... but nothing works.
The logs only raise this error.
Project layout:
- app
  - core
  - static
    - bootstrap
    - css
    - js
    - theme.scss
- nginx
settings -> preprod.py
DEBUG = False
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, "static")
STATICFILES_FINDERS = [
    'compressor.finders.CompressorFinder',
]
COMPRESS_PRECOMPILERS = (
    ('text/x-scss', 'django_libsass.SassCompiler'),
)
COMPRESS_ENABLED = True
COMPRESS_OFFLINE = True
LIBSASS_OUTPUT_STYLE = 'compressed'
STATICFILES_STORAGE = 'django.contrib.staticfiles.storage.ManifestStaticFilesStorage'
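One thing I would double-check in the settings above (a hunch, not a confirmed fix): assigning STATICFILES_FINDERS replaces Django's default finders rather than extending them, so with only the CompressorFinder listed, collectstatic has no finder left to discover app and project files — which would fit the "0 static files copied" log line. The conventional form keeps the defaults and appends compressor's finder:

```python
# Conventional finders list when using django-compressor: Django's two
# default finders plus compressor's own (a sketch, not a verified fix).
STATICFILES_FINDERS = [
    'django.contrib.staticfiles.finders.FileSystemFinder',
    'django.contrib.staticfiles.finders.AppDirectoriesFinder',
    'compressor.finders.CompressorFinder',
]
```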
entrypoint.preprod.sh
python manage.py collectstatic --no-input
python manage.py compress --force
docker-compose.preprod.yml
version: '3.7'
services:
  web:
    container_name: web
    build:
      context: ./app
      dockerfile: Dockerfile.preprod
    restart: always
    command: gunicorn core.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - app_volume:/usr/src/app
      - static_volume:/usr/src/app/static
      - media_volume:/usr/src/app/media
    expose:
      - 8000
    env_file:
      - ./.env.preprod
    depends_on:
      - db
      - redis
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/"]
      interval: 30s
      timeout: 10s
      retries: 50
  redis:
    container_name: redis
    image: "redis:alpine"
  celery:
    container_name: celery
    build:
      context: ./app
      dockerfile: Dockerfile.preprod
    command: celery -A core worker -l info
    volumes:
      - app_volume:/usr/src/app
    env_file:
      - ./.env.preprod
    depends_on:
      db:
        condition: service_started
      web:
        condition: service_healthy
      redis:
        condition: service_started
  celery-beat:
    container_name: celery-beat
    build:
      context: ./app
      dockerfile: Dockerfile.preprod
    command: celery -A core beat -l info
    volumes:
      - app_volume:/usr/src/app
    env_file:
      - ./.env.preprod
    depends_on:
      db:
        condition: service_started
      web:
        condition: service_healthy
      redis:
        condition: service_started
  db:
    container_name: db
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
      - app_volume:/var/lib/postgresql/backup
    env_file:
      - ./.env.preprod.db
  nginx:
    container_name: nginx
    build: ./nginx
    restart: always
    volumes:
      - static_volume:/usr/src/app/static
      - media_volume:/usr/src/app/media
    ports:
      - 1340:80
    depends_on:
      web:
        condition: service_healthy
volumes:
  postgres_data:
  static_volume:
  media_volume:
  app_volume:
docker-compose.yml
version: '3'
services:
  # Django web server
  web:
    volumes:
      - "./app/back:/app"
      - "../front/public/static:/app/static"
      - "./phantomjs-2.1.1:/app/phantomjs"
    build:
      context: .
      dockerfile: dockerfile_django
    #command: python manage.py runserver 0.0.0.0:8080
    #command: ["uwsgi", "--ini", "/app/back/uwsgi.ini"]
    ports:
      - "8080:8080"
    links:
      - async
      - ws_server
      - mysql
      - redis
  async:
    volumes:
      - "./app/async_web:/app"
    build:
      context: .
      dockerfile: dockerfile_async
    ports:
      - "8070:8070"
  # Aiohttp web socket server
  ws_server:
    volumes:
      - "./app/ws_server:/app"
    build:
      context: .
      dockerfile: dockerfile_ws_server
    ports:
      - "8060:8060"
  # MySQL db
  mysql:
    image: mysql/mysql-server:5.7
    volumes:
      - "./db_mysql:/var/lib/mysql"
      - "./my.cnf:/etc/my.cnf"
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_USER: user_b520
      MYSQL_PASSWORD: buzz_17KN
      MYSQL_DATABASE: dev_NT_pr
      MYSQL_PORT: 3306
    ports:
      - "3300:3306"
  # Redis
  redis:
    image: redis:4.0.6
    build:
      context: .
      dockerfile: dockerfile_redis
    volumes:
      - "./redis.conf:/usr/local/etc/redis/redis.conf"
    ports:
      - "6379:6379"
  # Celery worker
  celery:
    build:
      context: .
      dockerfile: dockerfile_celery
    command: celery -A backend worker -l info --concurrency=20
    volumes:
      - "./app/back:/app"
      - "../front/public/static:/app/static"
    links:
      - redis
  # Celery beat
  beat:
    build:
      context: .
      dockerfile: dockerfile_beat
    command: celery -A backend beat
    volumes:
      - "./app/back:/app"
      - "../front/public/static:/app/static"
    links:
      - redis
  # Flower monitoring
  flower:
    build:
      context: .
      dockerfile: dockerfile_flower
    command: celery -A backend flower
    volumes:
      - "./app/back:/app"
      - "../front/public/static:/app/static"
    ports:
      - "5555:5555"
    links:
      - redis
dockerfile_django
FROM python:3.4
RUN mkdir /app
WORKDIR /app
ADD app/back/requirements.txt /app
RUN pip3 install -r requirements.txt
# Apply migrations
CMD ["python", "manage.py", "migrate"]
#CMD python manage.py runserver 0.0.0.0:8080 & cron && tail -f /var/log/cron.log
CMD ["uwsgi", "--ini", "/app/uwsgi.ini"]
In the web container the migrations are applied and everything works.
I also added CMD ["python", "manage.py", "migrate"] to dockerfile_celery, dockerfile_flower and dockerfile_beat, but the migrations aren't applied there.
I restart the containers using the command:
docker-compose up --force-recreate
How do I make the rest of the containers see the migrations?
log
flower_1 | File "/usr/local/lib/python3.4/site-packages/MySQLdb/connections.py", line 292, in query
flower_1 | _mysql.connection.query(self, query)
flower_1 | django.db.utils.OperationalError: (1054, "Unknown column 'api_communities.is_closed' in 'field list'")
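A note on the Dockerfiles above, since it may explain the behaviour: a Docker image runs only its last CMD, so in dockerfile_django the migrate CMD is silently discarded in favour of the uwsgi one, and the migrate CMDs added to the celery/flower/beat Dockerfiles are likewise overridden by each service's command: in the compose file. A common pattern is to run migrations from a single entrypoint script in one container only, before exec-ing the server. A sketch with echo stubs standing in for the real commands so it runs anywhere (the real commands are in the trailing comments):

```shell
#!/bin/sh
# Hypothetical entrypoint.sh: apply migrations once, then exec the server
# so uwsgi (not the shell) becomes the container's main process (PID 1).
set -e
echo "running: python manage.py migrate"    # python manage.py migrate --noinput
echo "running: uwsgi --ini /app/uwsgi.ini"  # exec uwsgi --ini /app/uwsgi.ini
```

With the web container owning migrations, the celery/flower/beat containers just need to start after it; since they all share one database, they see the migrated schema without running migrate themselves.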