I want to deploy my service with Docker. My service is developed with Python + Django and django-channels.
myproject
├── myproject
│   ├── settings.py
│   ├── urls.py
│   ├── asgi.py
│   ├── ...
├── collected_static
│   ├── js
│   ├── css
│   ├── ...
├── nginx
│   ├── Dockerfile
│   ├── service.conf
├── requirements.txt
├── manage.py
├── Dockerfile
└── docker-compose.yml
myproject/Dockerfile:
FROM python
ENV PYTHONUNBUFFERED 1
RUN mkdir -p /opt/myproject
WORKDIR /opt/myproject
ADD . /opt/myproject
RUN pip install -r requirements.txt
RUN python manage.py migrate
myproject/docker-compose.yml:
version: '2'
services:
  nginx:
    build: ./nginx
    networks:
      - front
      - back
    ports:
      - "80:80"
    depends_on:
      - daphne
  redis:
    image: redis
    networks:
      - "back"
    ports:
      - "6379:6379"
  worker:
    build: .
    working_dir: /opt/myproject
    command: bash -c "python manage.py runworker"
    environment:
      - REDIS_HOST=redis
    networks:
      - front
      - back
    depends_on:
      - redis
    links:
      - redis
  daphne:
    build: .
    working_dir: /opt/myproject
    command: bash -c "daphne -b 0.0.0.0 -p 8000 myproject.asgi:channel_layer"
    ports:
      - "8000:8000"
    environment:
      - REDIS_HOST=redis
    networks:
      - front
      - back
    depends_on:
      - redis
    links:
      - redis
networks:
  front:
  back:
myproject/nginx/Dockerfile
FROM nginx
COPY service.conf /etc/nginx/sites-enabled/
myproject/nginx/service.conf
server {
    listen 80;
    server_name example.com #i just want to hide domain name..
    charset utf-8;
    client_max_body_size 20M;
    location /static/ {
        alias /opt/myproject/collected_static/;
    }
    location / {
        proxy_pass http://0.0.0.0:8000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
I run docker-compose up -d, and nginx and daphne both start fine.
But when I connect to example.com:80, I just see the nginx default page.
And when I connect to example.com:8000, I can see my project's service page (but I cannot see the static files).
I want to link the nginx and daphne services! What should I do? Please help me.
When I deploy with nginx + daphne + django without Docker, my service works well.
TL;DR:
Nginx is not configured correctly, and your docker-compose file also needs some corrections:
Nginx
The Nginx website has some helpful tips for deploying with Docker that you should read, including a sample, very simple Dockerfile:
FROM nginx
RUN rm /etc/nginx/conf.d/default.conf
RUN rm /etc/nginx/conf.d/example_ssl.conf
COPY content /usr/share/nginx/html
COPY conf /etc/nginx
which points to some improvements you need to make (see the Docker Compose section for further help with Docker).
Bearing in mind the updates to deployment that we will make below, you will also need to change your Nginx config:
rename service.conf -> service.template
change listen ${NGINX_PORT};
change server_name ${NGINX_HOST};
change proxy_pass http://${DAPHNE_HOST}:${DAPHNE_PORT};
Docker Compose
Now that your Nginx configuration is correct, you need to set up the Docker Compose directives correctly. Thankfully, the Docker Hub Nginx page has an example for Docker Compose:
Here is an example using docker-compose.yml:
web:
  image: nginx
  volumes:
    - ./mysite.template:/etc/nginx/conf.d/mysite.template
  ports:
    - "8080:80"
  environment:
    - NGINX_HOST=foobar.com
    - NGINX_PORT=80
  command: /bin/bash -c "envsubst < /etc/nginx/conf.d/mysite.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
The mysite.template file may then contain variable references like this:
listen ${NGINX_PORT};
From r00m's answer
You can make all those improvements, and in fact, without sharing the volumes your static files won't be served correctly.
Create an image for the project and re-use it
Add the Volume references to allow static files to be shared
OPTIONAL: you should also follow the advice about collecting the static files, but your project structure kind of suggests that you've already done that.
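If you do want the image to collect them itself, here is a minimal sketch (assuming your settings allow collectstatic to run without a live database), added after the source has been copied in and the requirements installed:
# collect static files into collected_static at build time
RUN python manage.py collectstatic --noinput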
Bringing it all together
Finally, we can merge those three improvements to give us the following setup:
myproject/Dockerfile:
FROM python
ENV PYTHONUNBUFFERED 1
RUN mkdir -p /opt/myproject
WORKDIR /opt/myproject
ADD . /opt/myproject
RUN pip install -r requirements.txt
RUN python manage.py migrate # Can this be done during build? i.e. no link to the DB?
VOLUME ["/opt/myproject/collected_static"]
myproject/docker-compose.yml:
version: '2'
services:
  nginx:
    build: ./nginx
    networks:
      - front
      - back
    ports:
      - "80:80"
    volumes_from:
      - "daphne"
    environment:
      - NGINX_HOST=example.com
      - NGINX_PORT=80
      - DAPHNE_HOST=daphne
      - DAPHNE_PORT=8000
    depends_on:
      - daphne
    links:
      - daphne
    # restrict envsubst to our own variables so nginx runtime variables such as $host survive
    command: /bin/bash -c "envsubst '$$NGINX_HOST $$NGINX_PORT $$DAPHNE_HOST $$DAPHNE_PORT' < /etc/nginx/conf.d/service.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
  redis:
    image: redis
    networks:
      - "back"
    ports:
      - "6379:6379"
  daphne:
    build: .
    image: "myproject:latest"
    working_dir: /opt/myproject
    command: bash -c "daphne -b 0.0.0.0 -p 8000 myproject.asgi:channel_layer"
    ports:
      - "8000:8000"
    environment:
      - REDIS_HOST=redis
    networks:
      - front
      - back
    depends_on:
      - redis
    links:
      - redis
  worker:
    image: "myproject:latest"
    working_dir: /opt/myproject
    command: bash -c "python manage.py runworker"
    environment:
      - REDIS_HOST=redis
    networks:
      - front
      - back
    depends_on:
      - redis
    links:
      - redis
networks:
  front:
  back:
myproject/nginx/Dockerfile
FROM nginx
RUN rm /etc/nginx/conf.d/default.conf
RUN rm /etc/nginx/conf.d/example_ssl.conf
COPY service.template /etc/nginx/conf.d
myproject/nginx/service.template
server {
    listen ${NGINX_PORT};
    server_name ${NGINX_HOST};
    charset utf-8;
    client_max_body_size 20M;
    location /static/ {
        alias /opt/myproject/collected_static/;
    }
    location / {
        proxy_pass http://${DAPHNE_HOST}:${DAPHNE_PORT};
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
Final thoughts
I'm not sure what you're trying to achieve with your network directives, but it almost certainly doesn't achieve it; for example, nginx shouldn't need to connect to your backend network (I think...).
You need to consider whether "migrate" should be done at build time or run time.
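If you decide it belongs at run time (it usually needs a reachable database, which doesn't exist during docker build), one sketch is to drop the RUN python manage.py migrate line from the Dockerfile and run it from the daphne service's command instead:
command: bash -c "python manage.py migrate --noinput && daphne -b 0.0.0.0 -p 8000 myproject.asgi:channel_layer"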
Do you need to be able to change your nginx configuration easily? If so, you should remove the COPY from the nginx build and add in the volumes directive from the Docker Compose section.
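If so, a sketch of that variant, mirroring the Docker Hub example quoted above (paths assume the project layout shown earlier):
nginx:
  build: ./nginx
  volumes:
    - ./nginx/service.template:/etc/nginx/conf.d/service.template
The COPY service.template line can then be removed from nginx/Dockerfile, and edits to the template only need a container restart rather than a rebuild.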
You have misconfigured NGINX: 0.0.0.0 is not an address you can proxy to. From inside the nginx container you need to point at the daphne container instead, e.g. proxy_pass http://daphne:8000;
As for the static files, it's because you haven't made the files available to the container. I would suggest the following modifications:
myproject/Dockerfile:
[...]
ADD . /opt/myproject
VOLUME ["/opt/myproject/collected_static"]
[..]
# may I also suggest automatic static file collection?
RUN python manage.py collectstatic --noinput
myproject/docker-compose.yml:
[...]
build: ./nginx
volumes_from:
- "worker" # or daphne
I would also consider adding the image option to the daphne and worker services. This tags the image and allows it to be reused, so it will only be built once (instead of twice).
myproject:
  build: .
  image: "myproject:latest"
  [..]
worker:
  image: "myproject:latest"
  [..]
daphne:
  image: "myproject:latest"
Related
My nginx Docker container can't seem to communicate with my Django WSGI app container. It works locally, but when I deploy it to Linode I get Bad Request (400).
Project Structure
Project
├── Dockerfile
├── docker-compose.yml
├── entrypoint.sh
├── app
│   ├── wsgi.py
│   ├── settings.py
│   ├── urls.py
│   └── <other django apps>
└── nginx
    ├── Dockerfile
    └── nginx.conf
docker-compose.yml:
version: '3.3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    command: gunicorn app.wsgi:application --bind 0.0.0.0:8000
    expose:
      - 8000
    env_file:
      - ./.env
    depends_on:
      - db
  db:
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - ./.env.db
  nginx:
    build: ./nginx
    volumes:
      - static_volume:/var/www/app/staticfiles
      - media_volume:/var/www/app/mediafiles
    ports:
      - 80:80
    depends_on:
      - web
volumes:
  postgres_data:
nginx/Dockerfile
FROM nginx:1.17.4-alpine
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d
nginx/nginx.conf
upstream polls_django {
    server web:8000;
}
server {
    listen 80;
    location / {
        proxy_pass http://polls_django;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
}
Clues:
Locally: I can visit my app at localhost just fine. However, I noticed that I get Bad Request (400) when trying to visit 0.0.0.0.
Linode: When I visit the IP address of my Linode I get Bad Request (400). Looking at my nginx container's error logs, something I noticed is this entry:
2022/11/28 01:08:51 [info] 7#7: *2 client timed out (110: Operation timed out) while waiting for request, client: 12.34.56.78, server: 0.0.0.0:80
It seems that the request to the server is never actually hitting the WSGI app.
Linode: I got a shell into my nginx container by running docker exec -it nginx /bin/sh and noticed that I could call curl 127.0.0.1 and get the expected response. I feel this means there must be something wrong with my nginx.conf.
Anyone have any clues that I can try out?
For reference: I went through this guide (https://testdriven.io/blog/dockerizing-django-with-postgres-gunicorn-and-nginx/)
Code I'm running: https://github.com/drivelous/docker-ubuntu18.04-django3.0.2
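One check that follows from those clues (a sketch; the service name web comes from the compose file above, and 203.0.113.10 is only a placeholder for the Linode IP): curl the gunicorn upstream directly from inside the nginx container, once with the default Host header and once with the Host header a browser hitting the Linode IP would send. If only the second request returns Bad Request (400), the request is reaching the WSGI app and being rejected by Django rather than never arriving.
# open a shell in the nginx container, as in the clue above
docker exec -it nginx /bin/sh
# hit the upstream that nginx.conf points at (server web:8000)
curl -i http://web:8000/
# repeat with a forwarded Host header (placeholder IP)
curl -i -H "Host: 203.0.113.10" http://web:8000/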
Good day, I am new to Docker and I have a Django app I would like to dockerize. I have searched for tutorials to help me set up my Django app with Docker and followed this testdriven.io article: https://testdriven.io/blog/dockerizing-django-with-postgres-gunicorn-and-nginx/. I am having trouble getting nginx to work with my app. Here is my code.
My app's Dockerfile:
FROM python:3.8-alpine
ENV PATH="/scripts:${PATH}"
COPY ./requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
RUN mkdir /app
COPY ./testdocker /app
WORKDIR /app
COPY ./scripts /scripts
RUN chmod +x /scripts/*
RUN mkdir -p /vol/web/media
RUN mkdir -p /vol/web/static
RUN adduser -D user
RUN chown -R user:user /vol
RUN chmod -R 755 /vol/web
USER user
nginx docker file:
FROM nginx:1.19.3-alpine
RUN rm /etc/nginx/conf.d/default.conf
COPY ./nginx.conf /etc/nginx/conf.d
RUN mkdir -p /vol/static
my docker-compose file:
version: '3.7'
services:
  app:
    build:
      context: .
    command: sh -c "gunicorn testdocker.wsgi:application --bind 0.0.0.0:8000"
    volumes:
      - static_data:/vol/web
    expose:
      - "8000"
    environment:
      - SECRET_KEY=MANME1233
      - ALLOWED_HOSTS=127.0.0.1, localhost
  nginx:
    build:
      context: ./nginx
    volumes:
      - static_data:/vol/static
    ports:
      - "8080:80"
    depends_on:
      - app
volumes:
  static_data:
my nginx conf file:
upstream testapp {
    server app:8000;
}
server {
    listen 8080;
    server_name app;
    location / {
        proxy_pass http://testapp;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
    location /static {
        alias /vol/static;
    }
}
I can't seem to get nginx to reverse-proxy to my web app; when I open the URL in the browser I get a 404/bad request or an address-not-found error. Please, what am I doing wrong or not doing right?
@victormazeli It looks like you missed placing your services on the same Docker network, and I see some misconfiguration in your nginx conf file. Try updating your docker-compose.yml as follows:
version: '3.7'
services:
  app:
    build:
      context: .
    command: sh -c "gunicorn testdocker.wsgi:application --bind 0.0.0.0:8000"
    volumes:
      - static_data:/vol/web
    expose:
      - "8000"
    environment:
      - SECRET_KEY=MANME1233
      - ALLOWED_HOSTS=127.0.0.1, localhost
    networks:
      - main
  nginx:
    build:
      context: ./nginx
    volumes:
      - static_data:/vol/static
    ports:
      - "8080:80"
    depends_on:
      - app
    networks:
      - main
volumes:
  static_data:
networks:
  main:
Then, update your nginx config as follows:
server {
    server_name nginx;
    listen 80;
    location /static {
        alias /vol/static;
    }
    location / {
        proxy_pass http://app:8000/;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
}
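After rebuilding, you can verify the proxy path from the host (a sketch; the 8080 host port comes from the ports mapping above):
docker-compose up -d --build
# should return your Django app's response, proxied through nginx on host port 8080
curl -i http://localhost:8080/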
Another thing to keep in mind here is that you have 2 targets that are being served by the NGINX reverse-proxy:
Django project located in testdocker which should be accessible via localhost:8080
Static file data which is accessible via localhost:8080/static/[relative_path]
To access the static data, you will need the path relative to /vol/static in the nginx service (a Docker volume that is also mounted at /vol/web in the app service). According to app's Dockerfile, the static_data volume will contain two directories: media and static. Therefore, if you have, say, an index.html located in the directory /vol/web/static in the app service, it will be accessible via localhost:8080/static/static/index.html.
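As a concrete illustration (a sketch; index.html is only an example file name):
# a file that ends up at /vol/web/static/index.html in the app service
# is read by nginx from /vol/static/static/index.html, so from the host:
curl -i http://localhost:8080/static/static/index.html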
Give this a try and let me know how it works out for you.
I have been trying to deploy a Django app on Lightsail with Gunicorn, NginX, and Docker. I've looked at multiple tutorials, all without success. I'm not familiar with most of the concepts, and I've pretty much been following them blindly. So far, everything seems to work on the server itself, but I can't see the results on a webpage. I have configured it for "production" (not sure if I'm even doing that right), and I've added a record to my domain which points to this server. The webpage just buffers continuously, even when I try to use port 8000 (for development). I think I've seen a few instances where a "301 5" (moved permanently) entry shows up in the docker-compose logs, but that's about it.
Here are the Dockerfile, docker-compose.yml, and nginx conf.d file (which are probably the most important).
docker-compose.yml
version: '3.7'
services:
  web:
    build:
    environment:
      - ENVIRONMENT=production
      - SECRET_KEY=NOT IMPORTANT
      - DEBUG=0
      - EMAIL_HOST_USER=EMAIL
      - EMAIL_HOST_PASSWORD=PASSWORD
    volumes:
      - .:/code
      - static_volume:/code/staticfiles
    depends_on:
      - db
    networks:
      - nginx_network
      - db_network
  db:
    image: postgres:11
    env_file:
      - config/db/db_env
    networks:
      - db_network
    volumes:
      - db_volume:/var/lib/postgresql/data
  nginx:
    image: nginx:1.17.0
    ports:
      - 80:80
    volumes:
      - ./config/nginx/conf.d:/etc/nginx/conf.d
      - static_volume:/code/staticfiles
    depends_on:
      - web
    networks:
      - nginx_network
networks:
  nginx_network:
    driver: bridge
  db_network:
    driver: bridge
volumes:
  db_volume:
  static_volume:
Dockerfile:
# Pull base image
FROM python:3.7
# Environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Work directory
WORKDIR /code
# Dependencies
COPY Pipfile Pipfile.lock /code/
RUN pip install pipenv && pipenv install --system
# Copy project
COPY . /code/
# expose port
EXPOSE 80
# gunicorn
CMD ["gunicorn", "--chdir", "my_project", "--bind", ":80", "mbdebate_project.wsgi:application"]
conf.d:
upstream hello_server {
    server web:80;
}
server {
    listen 80;
    server_name mydomain.com;
    location / {
        proxy_pass http://hello_server;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
    location /static/ {
        alias /code/staticfiles/;
    }
}
The settings are pretty standard, and I don't think the problem is there. Any help would truly be appreciated :).
The tutorial I followed: tpawamoy.github.io/2018/02/01/docker-compose-django-postgres-nginx.html
When working on a Flask application I had a similar issue connecting with nginx.
In nginx.conf, I used the same host configuration as in my app.run.
Change web:80; in your conf.d to the host and port exposed by your Django application.
My case:-
app.py
if __name__ == "__main__":
    app.run(host='0.0.0.0', port=5000)
nginx.conf
upstream flask-app {
    server 0.0.0.0:5000;
}
server {
    listen 80 default_server;
    # https for production
    location / {
        proxy_pass http://flask-app;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
}
I am - once again - very confused with nginx, Docker and React.
Here is what I want:
1) a Django REST API that exposes a port only locally
2) the static files of the REST API handled by nginx
3) a ReactJS front end served via nginx on port 80 (I don't know if it is necessary to serve React via nginx, but I've heard it reduces image size)
The problem: it does not work. All containers run fine individually, but serving them via Docker Compose does not work properly; I don't seem to be able to proxy to the API and the frontend.
A hint to my problem: what I am seeing is that the image I am layering on in ReactJS, "tiangolo/node-frontend:10", also copies an nginx.conf file that may overwrite mine.
Using a million tutorials, here is where I am:
nginx.conf
upstream website_rest {
    server restapi:8000;
}
upstream website_frontend {
    server frontend:8080;
}
server {
    listen 80;
    location /rest_call/ {
        proxy_pass http://website_rest;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
    location / {
        proxy_pass http://frontend;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
    location /rest_call/staticfiles/ {
        alias /usr/src/website-dj/staticfiles/;
    }
}
This is the dockerfile for react:
# Stage 0, "build-stage", based on Node.js, to build and compile the frontend
FROM tiangolo/node-frontend:10 as build-stage
WORKDIR /website-rj
COPY package*.json /website-rj/
RUN npm install
COPY ./ /website-rj/
RUN npm run build
# Stage 1, based on Nginx, to have only the compiled app, ready for production with Nginx
FROM nginx:1.15
COPY --from=build-stage /website-rj/build/ /usr/share/nginx/html
# Copy the default nginx.conf provided by tiangolo/node-frontend
COPY --from=build-stage /nginx.conf /etc/nginx/conf.d/default.conf
docker-compose.yml:
version: '3.7'
services:
  frontend:
    expose:
      - 8080
    build: "./website_rj_docker"
    volumes:
      - ./website_rj_docker/build:/usr/src/website-rj/
  restapi:
    build: "./website_dj_docker/"
    # command: python /usr/src/website-dj/manage.py runserver 0.0.0.0:8000 --settings=rest.settings.production
    command: gunicorn rest.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - ./website_dj_docker/:/usr/src/website-dj/
      - static_volume:/usr/src/website-dj/staticfiles
    expose:
      - 8000
    environment:
      - SECRET_KEY='something...'
      - SQL_ENGINE=django.db.backends.postgresql
      - SQL_DATABASE=postgres
      - SQL_USER=something...
      - SQL_PASSWORD=something...
      - SQL_HOST=db
      - SQL_PORT=5432
      - DATABASE=postgres
    depends_on:
      - db
  db:
    image: postgres:10.5-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
  nginx:
    build: ./nginx
    volumes:
      - static_volume:/usr/src/website-dj/staticfiles
    ports:
      - 1337:80
    depends_on:
      - restapi
I know that several similar questions have been asked, but I have not managed to find a proper answer anywhere.
I want to dockerize my Django app. The app works fine in the docker container but the static files (images, css etc.) are not loaded.
My docker-compose.yml:
version: '2'
services:
  web:
    build: .
    command: ./start.sh
    volumes:
      - .:/app
      - /app/www/static
    ports:
      - "8000:8000"
    env_file: .env
  nginx:
    build: ./nginx
    links:
      - web
    ports:
      - "0.0.0.0:80:80"
    volumes_from:
      - web
The start.sh script
python manage.py makemigrations --noinput
python manage.py migrate --noinput
python manage.py collectstatic --noinput
exec gunicorn audiobridge.wsgi:application --bind 0.0.0.0:8000 --workers 3
The Dockerfile for the web container:
FROM python:2.7
ENV PYTHONUNBUFFERED 1
RUN mkdir /app
WORKDIR /app
ADD requirements.txt /app/
RUN pip install -r requirements.txt
ADD . /app/
Then I have an nginx/ directory in my project which has the following Dockerfile:
FROM nginx
COPY sites-enabled/audiobridge /etc/nginx/sites-enabled/
and inside nginx/sites-enabled there is the nginx configuration file:
server {
    listen 80;
    server_name 127.0.0.1;
    charset utf-8;
    location /static {
        alias /app/www/static/;
    }
    location / {
        proxy_pass http://web:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
volumes:
  - "../nginx/conf.d:/etc/nginx/conf.d/templates"
  - "/static:/static"
Did you try adding this to your nginx and Django app services?
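For the /static mount to be useful, nginx also needs a location that points at it; a minimal sketch, assuming the static files are collected into /static inside the container:
location /static/ {
    alias /static/;
}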