How to properly configure certbot in Docker? (django)

Please help me with this problem, I have been trying to solve it for 2 days!
Please just tell me what I am doing wrong and what I should change to make it work.
ERROR: for certbot Cannot start service certbot: network 4d3b22b1f02355c68a900a7dfd80b8c5bb64508e7e12d11dadae11be11ed83dd not found
My docker-compose file
version: '3'
services:
  nginx:
    restart: always
    build:
      context: ./
      dockerfile: ./nginx/Dockerfile
    depends_on:
      - server
    ports:
      - 80:80
    volumes:
      - ./server/media:/nginx/media
      - ./conf.d:/nginx/conf.d
      - ./dhparam:/nginx/dhparam
      - ./certbot/conf:/nginx/ssl
      - ./certbot/data:/usr/share/nginx/html/letsencrypt
  server:
    build:
      context: ./
      dockerfile: ./server/Dockerfile
    command: gunicorn config.wsgi -c ./config/gunicorn.py
    volumes:
      - ./server/media:/server/media
    ports:
      - "8000:8000"
    depends_on:
      - db
    environment:
      DEBUG: 'False'
      DATABASE_URL: 'postgres://postgres:@db:5432/postgres'
      BROKER_URL: 'amqp://user:password@rabbitmq:5672/my_vhost'
  db:
    image: postgres:11.2
    environment:
      POSTGRES_DB: postgres
      POSTGRES_USER: postgres
  certbot:
    image: certbot/certbot:latest
    command: certonly --webroot --webroot-path=/usr/share/nginx/html/letsencrypt --email artasdeco.ru@gmail.com --agree-tos --no-eff-email -d englishgame.ru
    volumes:
      - ./certbot/conf:/etc/letsencrypt
      - ./certbot/logs:/var/log/letsencrypt
      - ./certbot/data:/usr/share/nginx/html/letsencrypt
My Dockerfile
FROM python:3.7-slim AS server
RUN mkdir /server
WORKDIR /server
COPY ./server/requirements.txt /server/
RUN pip install -r requirements.txt
COPY ./server /server
RUN python ./manage.py collectstatic --noinput
#########################################
FROM nginx:1.13
RUN rm -v /etc/nginx/nginx.conf
COPY ./nginx/nginx.conf /etc/nginx/
RUN mkdir /nginx
COPY --from=server /server/staticfiles /nginx/static
nginx.conf file
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    server {
        listen 443 ssl http2;
        server_name englishgame.ru;

        ssl on;
        server_tokens off;
        ssl_certificate /etc/nginx/ssl/live/englishgame.ru/fullchain.pem;
        ssl_certificate_key /etc/nginx/ssl/live/englishgame.ru/fullchain.pem;
        ssl_dhparam /etc/nginx/dhparam/dhparam-2048.pem;
        ssl_buffer_size 8k;
        ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
        ssl_prefer_server_ciphers on;
        ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;

        location / {
            return 301 https://englishgame.ru$request_uri;
        }
    }

    server {
        listen 80;
        server_name englishgame.ru;

        location ~ /.well-known/acme-challenge {
            allow all;
            root /usr/share/nginx/html/letsencrypt;
        }
        location /static {
            alias /nginx/static/;
            expires max;
        }
        location /media {
            alias /nginx/media/;
            expires 10d;
        }
        location /robots.txt {
            alias /nginx/static/robots.txt;
        }
        location /sitemap.xml {
            alias /nginx/static/sitemap.xml;
        }
        location / {
            proxy_pass http://server:8000;
            proxy_redirect off;
            proxy_read_timeout 60;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
Thank you for your help!

Alright: based on the error ERROR: for certbot Cannot start service certbot: network 4d3b22b1f02355c68a900a7dfd80b8c5bb64508e7e12d11dadae11be11ed83dd not found, the issue is not related to any of the other services defined in your compose file, so your Dockerfile and nginx configuration are irrelevant to the problem.
That leaves the question of why the certbot service cannot be created. Usually this kind of error happens when a network that was configured for a service has been removed manually. In this case, however, no service even refers to a network, which is why only a hash is printed rather than a network name.
Googling the error brings up a similar problem in a Let's Encrypt setup (https://github.com/BirgerK/docker-apache-letsencrypt/issues/8), which points to an actual docker-compose issue: https://github.com/docker/compose/issues/5745.
The solution there is to run docker-compose with the --force-recreate option, so the problem should be fixed by running docker-compose up -d --force-recreate.
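For reference, the full sequence from the project directory would look something like this (a sketch; the prune step is optional and only needed if a stale network is still hanging around):

docker-compose up -d --force-recreate

# If the error persists, tear the stack down and clean up unused networks first:
docker-compose down
docker network prune
docker-compose up -d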

Related

WebSocket connection getting rejected in production when using docker container

My WebSocket connection is getting rejected by the Django Channels server deployed on AWS EC2. I am using Docker for the server side, which also contains PostgreSQL and Redis.
WebSocket HANDSHAKING /ws/notifications/
WebSocket REJECT /ws/notifications/
WebSocket DISCONNECT /ws/notifications/
The front end is built with React JS and deployed on Netlify.
The server side is secured using certbot.
My nginx configuration file looks like this:
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}
server {
    server_name IP_address example.com ;
    location = /favicon.ico { access_log off; log_not_found off; }
    location /staticfiles/ {
        root /home/ubuntu/social-network-django-server;
    }
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/_____; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/____; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    listen 80;
    server_name IP_address example.com ;
    return 404; # managed by Certbot
}
Everything other than the WebSocket connection is working fine without any issues. I have no idea what is happening here; I am using Django Channels for the first time.
I am using wss:// instead of ws://.
The error in the browser looks like this:
WebSocket connection to 'wss://_____________________' failed:
If anyone has any idea what is happening, please share!
My GitHub repo links: server, client.
My Dockerfile looks like this:
FROM python:3.11.1
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        postgresql-client \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
my docker-compose.yaml looks like this
version: '3'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=2221
      - POSTGRES_DB=social-network
    volumes:
      - db-data:/var/lib/postgresql/data/
    ports:
      - "5432"
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
    environment:
      - DATABASE_URL=postgres://postgres:2221@db:5432/social-network
      - REDIS_URL=redis://redis:6379/0
  redis:
    image: redis
    ports:
      - "6379"
volumes:
  db-data:
I think this may be related to Redis; can someone explain what this error is all about? I am new to Redis and this is my first time using it.
My WebSocket connection is rejected by the server in the production environment.
Most of the time it is rejected, but one time I got an error like this:
future: <Task finished name='Task-2137' coro=<Connection.disconnect() done, defined at /usr/local/lib/python3.11/site-packages/redis/asyncio/connection.py:720> exception=RuntimeError('Event loop is closed')>
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/redis/asyncio/connection.py", line 729, in disconnect
    self._writer.close() # type: ignore[union-attr]
    ^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/asyncio/streams.py", line 344, in close
    return self._transport.close()
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/asyncio/selector_events.py", line 839, in close
    self._loop.call_soon(self._call_connection_lost, None)
  File "/usr/local/lib/python3.11/asyncio/base_events.py", line 761, in call_soon
    self._check_closed()
  File "/usr/local/lib/python3.11/asyncio/base_events.py", line 519, in _check_closed
    raise RuntimeError('Event loop is closed')

wsgi.url_scheme http in docker using nginx

I am using Apache on CentOS. On this server I have a Django project running in Docker, with two containers (nginx and python).
In Apache I have a .conf with a proxy to the nginx container, which is exposed on port 803. SSL is set up in the Apache conf as well.
ProxyPreserveHost On
RequestHeader set X-Forwarded-Proto "https"
RequestHeader set X-Scheme "https"
ProxyPass / http://127.0.0.1:803/
ProxyPassReverse / http://127.0.0.1:803/
In Docker I have an app.conf for nginx that looks like this:
upstream project {
    server project-python:5000;
}
server {
    listen 80;
    server_name _;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    client_max_body_size 64M;
    location / {
        gzip_static on;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-Scheme "https";
        proxy_set_header X-Forwarded-Proto "https";
        proxy_set_header X-Forwarded-Protocol "ssl";
        proxy_set_header X-Forwarded-Ssl on;
        proxy_set_header Host $host;
        proxy_pass http://project;
        proxy_redirect off;
    }
}
In the Dockerfile, Python is exposed on port 5000, and in the docker-compose.prod.yml file for production, Python is started with gunicorn using this command:
gunicorn project.wsgi:application --preload --bind 0.0.0.0:5000
So I have two issues.
First, when I dump request.META in Django, wsgi.url_scheme is http.
Second, I don't understand how nginx is communicating with gunicorn, because the setup also works when I reduce app.conf to the configuration below. How does nginx know that Python is exposed on port 5000?
server {
    listen 80;
    server_name _;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    client_max_body_size 64M;
    location / {
        proxy_pass http://project;
        proxy_redirect off;
    }
}
docker-compose.yml
version: '3'
services:
  project-python:
    build:
      context: .
      dockerfile: docker/python/Dockerfile
    container_name: project-python
    volumes:
      - .:/var/www:rw
      - .aws:/home/www/.aws
  project-nginx:
    build:
      context: docker/nginx
      dockerfile: Dockerfile
    container_name: project-nginx
    ports:
      - "127.0.0.1:803:80"
    depends_on:
      - project-python
docker-compose.prod.yml
version: '3'
services:
  project-python:
    restart: unless-stopped
    env_file:
      - ./.env.prod
    command: gunicorn project.wsgi:application --preload --bind 0.0.0.0:5000
    expose:
      - 5000
  project-nginx:
    restart: unless-stopped
    environment:
      APP_ENV: "production"
      APP_NAME: "project-nginx"
      APP_DEBUG: "False"
      SERVICE_NAME: "project-nginx"

Docker + django + gunicorn connect to Nginx of the host system

On Ubuntu Server 18.04 I run a Docker container with Django 3.1 and Gunicorn, which connects to a locally installed Postgres.
I already have another Django project with Nginx and Gunicorn on the same server, without Docker.
The questions are: how do I set up socket files for Gunicorn in a container, and how do I connect that container to the existing external Nginx so I can access the containerized Django app from outside?
docker-compose.prod.yml
version: '3'
services:
  app:
    build:
      context: .
    ports:
      - "8001:8001"
    volumes:
      - ./app:/app
    env_file:
      - .env.prod
    command: >
      sh -c "gunicorn app.wsgi:application --bind 0.0.0.0:8001"
I run the container like this:
docker-compose -f docker-compose.prod.yml up
.env.prod
...
DJANGO_ALLOWED_HOSTS='111.111.111.111 .mysitename.com localhost 172.18.0.1 127.0.0.1 [::1]'
...
While the container is running, executing this command:
curl 'http://127.0.0.1:8001/admin/login/?next=/admin/'
gives me the HTML code of the admin page, so that part is fine.
The existing Nginx is set up as in this DigitalOcean tutorial: https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-18-04
I also secured it with Let's Encrypt using this tutorial: https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-18-04
Everything turned out to be simpler than I thought.
Edit the Nginx conf file in /etc/nginx/sites-available/ like this:
server {
    server_name 111.111.111.111 domainname.com www.domainname.com *.domainname.com;
    location = /favicon.ico { access_log off; log_not_found off; }
    location /static/ {
        root /home/user/<my-previous-project-installed-locally>/project-app;
    }
    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
    }
    # Added for the NEW application
    ###############################################
    location /MY-NEW-PATH/ {
        rewrite ^/MY-NEW-PATH(.*) $1 break;
        proxy_pass http://127.0.0.1:8001;
    }
    ###############################################
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/domainname.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/domainname.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = www.domainname.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    listen 80;
    server_name domainname.com www.domainname.com;
    return 404; # managed by Certbot
}
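To verify the routing end to end, a request through Nginx should now reach the containerized app (a sketch; MY-NEW-PATH and domainname.com are the placeholders from the config above):
curl 'https://domainname.com/MY-NEW-PATH/admin/login/?next=/admin/'
This should return the same admin login HTML as the direct curl against port 8001.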

App accessible via port 3000 after deploy to server

I have a Django app running in Docker containers (please see the docker-compose file and Dockerfile below). I have removed the port exposure from my docker-compose, but when I deploy the code onto an Ubuntu server, I can still access the app via port 3000. I am also using nginx to do the proxying (see the nginx file below).
services:
  rabbitmq:
    restart: always
    image: rabbitmq:3.7
    ...
  db:
    restart: always
    image: mongo:4
    ...
  cam_dash:
    restart: always
    build: .
    command: python3 manage.py runserver 0.0.0.0:3000
    ...
  celery:
    restart: always
    build: .
    command: celery -A dashboard worker -l INFO -c 200
    ...
  celery_beat:
    restart: always
    build: .
    command: celery beat -A dashboard -l info --scheduler django_celery_beat.schedulers:DatabaseScheduler
    ...
FROM python:3.7
COPY requirements.txt /
RUN pip3 install -r /requirements.txt
ADD ./ /dashboard
WORKDIR /dashboard
COPY ./docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
EXPOSE 3000
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
    root /var/www/html;
    index index.html;
}
server {
    listen 443;
    server_name camonitor.uct.ac.za;
    ssl on;
    ssl_certificate /etc/ssl/certs/wildcard.crt;
    ssl_certificate_key /etc/ssl/private/wildcard.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    location / {
        root /var/www/html;
        index index.html;
    }
    location /dash/ {
        proxy_pass http://127.0.0.1:3000/dash/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
    ...
I am expecting that if I try to access https://example.com:3000/dash/, it should not be accessible, while https://example.com/dash/ works just fine.
Thanks for the help.
You should prevent access to port 3000 using the system's firewall.
I had the same issue when hosting more than one web server on the same machine and proxying with Nginx. I solved it with this port configuration in docker-compose.yml, which binds the port only to localhost; you could apply the same configuration to the Python server.
"127.0.0.1:3000:3000"
version: '3'
services:
  myService:
    image: "myService/myService:1"
    container_name: "myService"
    ports:
      - "127.0.0.1:3000:3000"

How to serve static content with Nginx and Django Gunicorn when using Traefik

I have a web application (Django based) that utilises multiple containers:
Web Application (Django + Gunicorn)
Traefik (acting as the reverse proxy and SSL termination)
Database, which is used with the web application
Redis, which is used with the web application
According to some of the documentation I have read, I should be serving my static content using something like NGINX, but I don't have any idea how to do that. Would I install NGINX in my web application container, or as a separate NGINX container? How do I pass the request from Traefik? As far as I am aware, you cannot serve static content with Traefik.
This is what my docker-compose.yml looks like:
traefik:
  image: traefik
  ports:
    - 80:80
    - 8080:8080
    - 443:443
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock:ro
    - ./traefik/traefik.toml:/etc/traefik/traefik.toml:ro
    - ./traefik/acme:/etc/traefik/acme
web:
  build: .
  restart: always
  depends_on:
    - db
    - redis
    - traefik
  command: python3 /var/www/html/applications/py-saleor/manage.py makemigrations --noinput
  command: python3 /var/www/html/applications/py-saleor/manage.py migrate --noinput
  command: python3 /var/www/html/applications/py-saleor/manage.py collectstatic --noinput
  command: bash -c "cd /var/www/html/applications/py-saleor/ && gunicorn saleor.wsgi -w 2 -b 0.0.0.0:8000"
  volumes:
    - .:/app
  ports:
    - 127.0.0.1:8000:8000
  labels:
    - "traefik.enable=true"
    - "traefik.backend=web"
    - "traefik.frontend.rule=${TRAEFIK_FRONTEND_RULE}"
  environment:
    - SECRET_KEY=changemeinprod
redis:
  image: redis
db:
  image: postgres:latest
  restart: always
  environment:
    POSTGRES_USER: saleoradmin
    POSTGRES_PASSWORD: **
    POSTGRES_DB: **
    PGDATA: /var/lib/postgresql/data/pgdata
  volumes:
    - ~/py-saleor/database:/app
If anybody else needs an answer to this, the answer lies in creating a separate NGINX service and then directing the frontend rules to the static location (xyz.com/static), e.g. see below (part of docker-compose.yml):
nginx:
  image: nginx:alpine
  container_name: nginx_static_files
  restart: always
  volumes:
    - ./default.conf:/etc/nginx/conf.d/default.conf
    - ./saleor/static/:/static
  labels:
    - "traefik.enable=true"
    - "traefik.backend=nginx"
    - "traefik.frontend.rule=Host:xyz.co;PathPrefix:/static"
    - "traefik.port=80"
You also need to ensure that your Nginx config file (default.conf) is appropriately configured:
server {
    listen 80;
    server_name _;
    client_max_body_size 200M;
    set $cache_uri $request_uri;
    location = /favicon.ico { log_not_found off; access_log off; }
    location = /robots.txt { log_not_found off; access_log off; }
    ignore_invalid_headers on;
    add_header Access-Control-Allow-Origin *;
    location /static {
        autoindex on;
        alias /static;
    }
    location /media {
        autoindex on;
        alias /media;
    }
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
}
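With this in place, a quick sanity check is to request a known static asset through Traefik and confirm the nginx container serves it (the exact asset path is just an illustration; Django's admin CSS exists whenever the admin app is installed and collectstatic has run):
curl -I http://xyz.co/static/admin/css/base.css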
All credit goes to Pedro Rigotti on the Traefik slack channel for helping me arrive at the solution.
I don't have much of an idea about Traefik and Docker, but I can tell you how to install nginx and use it to serve static files (which is always recommended, so that serving static files does not choke the Django server).
Install nginx and follow the usual steps to set it up:
sudo apt-get install nginx
The sites-available file should look something like this:
server {
    listen 80;
    listen [::]:80;
    server_name xyz.com;
    client_max_body_size 20M;
    # xyz.com/media/any_static_asset_file.jpg
    # when anyone hits the above url then the request comes to this part.
    location /media/ {
        # do make sure that autoindex is off so that your assets are only accessed via a proper path
        autoindex off;
        # this is the folder where your asset files are present.
        alias /var/www/services/media/;
    }
    # whenever any other request comes to xyz.com then this part handles the request
    location / {
        proxy_pass http://unix:/var/www/services/xyz/django_server.sock;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
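For the unix-socket proxy_pass above to work, gunicorn has to bind to that same socket file. A minimal sketch, assuming the Django project module is called xyz (substitute your own):
gunicorn xyz.wsgi:application \
    --workers 2 \
    --bind unix:/var/www/services/xyz/django_server.sock
Also make sure the nginx worker user has permission to read and write that socket file.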