My WebSocket connection is getting rejected by my Django Channels server deployed on AWS EC2. The server side runs in Docker, which also contains PostgreSQL and Redis.
WebSocket HANDSHAKING /ws/notifications/
WebSocket REJECT /ws/notifications/
WebSocket DISCONNECT /ws/notifications/
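For context on these log lines: HANDSHAKING means the upgrade request reached the ASGI application, and REJECT typically means the application itself (not Nginx) refused the connection before completing the handshake. In a successful handshake the server answers 101 Switching Protocols and echoes back a hash of the client's Sec-WebSocket-Key, per RFC 6455; a sketch using the RFC's own sample key:

```python
import base64
import hashlib

# RFC 6455 handshake: the server proves it understood the upgrade by
# hashing the client's Sec-WebSocket-Key with a fixed GUID.
GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"
key = "dGhlIHNhbXBsZSBub25jZQ=="  # sample key from the RFC
accept = base64.b64encode(hashlib.sha1((key + GUID).encode()).digest()).decode()
print(accept)  # s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```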
The front end is built with React and deployed on Netlify. The server side is secured using Certbot. My Nginx configuration file looks like this:
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
    server_name IP_address example.com;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /staticfiles/ {
        root /home/ubuntu/social-network-django-server;
    }

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/_____; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/____; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
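The map block at the top picks the Connection header to send upstream based on the client's Upgrade header, so the same location can serve both plain HTTP and WebSocket traffic. The same lookup expressed as plain Python (an illustration of the mapping, not Nginx internals):

```python
# Mirror of: map $http_upgrade $connection_upgrade { default upgrade; '' close; }
def connection_upgrade(http_upgrade: str) -> str:
    return "close" if http_upgrade == "" else "upgrade"

print(connection_upgrade("websocket"))  # upgrade (WebSocket request)
print(connection_upgrade(""))           # close (ordinary HTTP request)
```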
server {
    if ($host = example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name IP_address example.com;
    return 404; # managed by Certbot
}
Everything other than the WebSocket connection is working fine. I have no idea what is happening here; this is my first time using Django Channels. I am using wss:// instead of ws://. The error in the browser looks like this:
WebSocket connection to 'wss://_____________________' failed:
If anyone has any idea what is happening, please share! My GitHub repo links: server, client.
My Dockerfile looks like this:
FROM python:3.11.1

RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        postgresql-client \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . .

EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
My docker-compose.yaml looks like this:
version: '3'

services:
  db:
    image: postgres
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=2221
      - POSTGRES_DB=social-network
    volumes:
      - db-data:/var/lib/postgresql/data/
    ports:
      - "5432"

  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
    environment:
      - DATABASE_URL=postgres://postgres:2221@db:5432/social-network
      - REDIS_URL=redis://redis:6379/0

  redis:
    image: redis
    ports:
      - "6379"

volumes:
  db-data:
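Note the separator between the credentials and the host in DATABASE_URL: it must be @. If it gets mangled into a #, everything after it is parsed as a URL fragment and the real host and port are lost. A quick stdlib check with the values from the compose file above:

```python
from urllib.parse import urlparse

good = urlparse("postgres://postgres:2221@db:5432/social-network")
bad = urlparse("postgres://postgres:2221#db:5432/social-network")

print(good.hostname, good.port)  # db 5432
print(bad.hostname, bad.port)    # postgres 2221 -- the '#' started a fragment
```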
I think this may be related to Redis; can someone explain what this error is about? I am new to Redis and this is my first time using it. My WebSocket connection is rejected by the server in the production environment. Most of the time it is rejected, but one time I got an error like this:
future: <Task finished name='Task-2137' coro=<Connection.disconnect() done, defined at /usr/local/lib/python3.11/site-packages/redis/asyncio/connection.py:720> exception=RuntimeError('Event loop is closed')>
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/redis/asyncio/connection.py", line 729, in disconnect
self._writer.close() # type: ignore[union-attr]
^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/asyncio/streams.py", line 344, in close
return self._transport.close()
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/asyncio/selector_events.py", line 839, in close
self._loop.call_soon(self._call_connection_lost, None)
File "/usr/local/lib/python3.11/asyncio/base_events.py", line 761, in call_soon
self._check_closed()
File "/usr/local/lib/python3.11/asyncio/base_events.py", line 519, in _check_closed
raise RuntimeError('Event loop is closed')
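The "Event loop is closed" error in the traceback is an asyncio-level failure rather than anything Redis-specific: redis-py's async connection tried to schedule its cleanup on an event loop that had already been shut down. A minimal stdlib illustration of the same RuntimeError:

```python
import asyncio

async def noop():
    pass

loop = asyncio.new_event_loop()
loop.run_until_complete(noop())
loop.close()

try:
    # Scheduling anything on a closed loop raises the same error
    # seen inside Connection.disconnect() above.
    loop.call_soon(lambda: None)
except RuntimeError as exc:
    print(exc)  # Event loop is closed
```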
I need some assistance; your contributions will be greatly appreciated. I am trying to add SSL to my Nginx and Docker Compose configuration. Currently everything works fine over HTTP, but it won't work over HTTPS.
Here is my docker-compose.yml file:
version: '3.8'

services:
  web_gunicorn:
    image: ACCT_ID.dkr.ecr.us-east-2.amazonaws.com/web_gunicorn:latest
    volumes:
      - static:/static
      - media:/media
    # env_file:
    #   - .env
    pull_policy: always
    restart: always
    ports:
      - "8000:8000"
    environment:
      - PYTHONUNBUFFERED=1
      - PYTHONDONTWRITEBYTECODE=1

  nginx:
    image: ACCT_ID.dkr.ecr.us-east-2.amazonaws.com/nginx:latest
    pull_policy: always
    restart: always
    volumes:
      - static:/static
      - media:/media
      - ./certbot/conf:/etc/letsencrypt
      - ./certbot/www:/var/www/certbot
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - web_gunicorn

  certbot:
    image: certbot/certbot
    restart: unless-stopped
    volumes:
      - ./certbot/conf:/etc/letsencrypt
      - ./certbot/www:/var/www/certbot
    depends_on:
      - nginx
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"

volumes:
  static:
  media:
Here is my nginx.conf configuration that works (HTTP):
upstream web {
    server web_gunicorn:8000;
}

server {
    listen 80;
    server_name domain.com;

    location / {
        resolver 127.0.0.11;
        proxy_pass http://web;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /static/ {
        alias /static/;
    }

    location /media/ {
        alias /media/;
    }
}
Here is my nginx.conf configuration that does not work (HTTP and HTTPS):
upstream web {
    server web_gunicorn:8000;
}

server {
    location / {
        resolver 127.0.0.11;
        proxy_pass http://web;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /static/ {
        alias /static/;
    }

    location /media/ {
        alias /media/;
    }

    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
server {
    if ($host = domain.com) {
        return 301 https://$host$request_uri;
    }

    listen 80;
    server_name domain.com;
    return 404;
}
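The second server block handles plain HTTP: requests for the expected host are permanently redirected to HTTPS, and anything else gets a 404. The same decision expressed in Python form (illustrative values matching the config above):

```python
# Python sketch of the HTTP server block: 301 the known host to HTTPS, 404 the rest.
def handle_http(host: str, request_uri: str):
    if host == "domain.com":
        return 301, f"https://{host}{request_uri}"
    return 404, None

print(handle_http("domain.com", "/login/"))  # (301, 'https://domain.com/login/')
print(handle_http("1.2.3.4", "/login/"))     # (404, None)
```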
Below are the Nginx logs from docker-compose logs nginx:
nginx_1 | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
nginx_1 | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
nginx_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
nginx_1 | 10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf
nginx_1 | 10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
nginx_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
nginx_1 | /docker-entrypoint.sh: Configuration complete; ready for start up
One more thing: on my server I can see all the SSL files generated by Certbot, stored in a folder called certbot.
Finally found the problem. All my configuration was actually okay; the issue was that port 443 was not open on my server. I had only opened it in the outbound rules; I didn't realise I had to open it in the inbound rules too. My application was running on an EC2 server on AWS. I used this tool, https://www.yougetsignal.com/tools/open-ports/, to check whether the port was open or closed. The closed port also caused my requests to the server to time out.
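An online checker works, but the same test takes only a few lines of stdlib Python runnable from any machine (the host and port are whatever you want to probe):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("example.com", 443) -> True only if an inbound rule allows it
```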
On Ubuntu Server 18.04, I run a Docker container with Django 3.1 and Gunicorn, which connects to a locally installed Postgres. I already have another Django project with Nginx and Gunicorn on the same server, without Docker.
The question is: how do I set up socket files for Gunicorn in the container and connect it to the existing external Nginx, so that I can access the containerized Django app from outside?
docker-compose.prod.yml
version: '3'

services:
  app:
    build:
      context: .
    ports:
      - "8001:8001"
    volumes:
      - ./app:/app
    env_file:
      - .env.prod
    command: >
      sh -c "gunicorn app.wsgi:application --bind 0.0.0.0:8001"
I run the container like this:
docker-compose -f docker-compose.prod.yml up
.env.prod
...
DJANGO_ALLOWED_HOSTS='111.111.111.111 .mysitename.com localhost 172.18.0.1 127.0.0.1 [::1]'
...
While the container is running, executing this command:
curl 'http://127.0.0.1:8001/admin/login/?next=/admin/'
gives me the HTML of the admin page, so that part is OK.
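The same smoke test can be scripted with stdlib Python instead of curl (the URL below is the one from the command above):

```python
import urllib.request

def http_ok(url: str) -> bool:
    """Return True if the URL answers with HTTP 200, as the curl check verifies."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

# e.g. http_ok("http://127.0.0.1:8001/admin/login/?next=/admin/")
```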
The existing Nginx is set up as in this DigitalOcean tutorial:
https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-18-04
I also secured it with Let's Encrypt, using this tutorial: https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-18-04
Everything is simpler than I thought. In the Nginx conf file in /etc/nginx/sites-available/, edit like this:
server {
    server_name 111.111.111.111 domainname.com www.domainname.com *.domainname.com;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /home/user/<my-previous-project-installed-locally>/project-app;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
    }

    # Added for the NEW application
    ###############################################
    location /MY-NEW-PATH/ {
        rewrite ^/MY-NEW-PATH(.*) $1 break;
        proxy_pass http://127.0.0.1:8001;
    }
    ###############################################

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/domainname.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/domainname.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = www.domainname.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
}
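The rewrite line in the /MY-NEW-PATH/ location strips the path prefix before proxying, so the containerized app sees clean URLs. The same substitution in Python (MY-NEW-PATH is the placeholder used in the config above):

```python
import re

# What `rewrite ^/MY-NEW-PATH(.*) $1 break;` does to the request URI.
def strip_prefix(uri: str) -> str:
    return re.sub(r"^/MY-NEW-PATH(.*)$", r"\1", uri)

print(strip_prefix("/MY-NEW-PATH/admin/login/"))  # /admin/login/
print(strip_prefix("/other/"))                    # /other/ (unchanged)
```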
Please help me with this problem, I have been trying to solve it for 2 days! Please just tell me what I am doing wrong, and what I should change to make it work.
ERROR: for certbot Cannot start service certbot: network 4d3b22b1f02355c68a900a7dfd80b8c5bb64508e7e12d11dadae11be11ed83dd not found
My docker-compose file
version: '3'

services:
  nginx:
    restart: always
    build:
      context: ./
      dockerfile: ./nginx/Dockerfile
    depends_on:
      - server
    ports:
      - 80:80
    volumes:
      - ./server/media:/nginx/media
      - ./conf.d:/nginx/conf.d
      - ./dhparam:/nginx/dhparam
      - ./certbot/conf:/nginx/ssl
      - ./certbot/data:/usr/share/nginx/html/letsencrypt

  server:
    build:
      context: ./
      dockerfile: ./server/Dockerfile
    command: gunicorn config.wsgi -c ./config/gunicorn.py
    volumes:
      - ./server/media:/server/media
    ports:
      - "8000:8000"
    depends_on:
      - db
    environment:
      DEBUG: 'False'
      DATABASE_URL: 'postgres://postgres:@db:5432/postgres'
      BROKER_URL: 'amqp://user:password@rabbitmq:5672/my_vhost'

  db:
    image: postgres:11.2
    environment:
      POSTGRES_DB: postgres
      POSTGRES_USER: postgres

  certbot:
    image: certbot/certbot:latest
    command: certonly --webroot --webroot-path=/usr/share/nginx/html/letsencrypt --email artasdeco.ru@gmail.com --agree-tos --no-eff-email -d englishgame.ru
    volumes:
      - ./certbot/conf:/etc/letsencrypt
      - ./certbot/logs:/var/log/letsencrypt
      - ./certbot/data:/usr/share/nginx/html/letsencrypt
My Dockerfile
FROM python:3.7-slim AS server
RUN mkdir /server
WORKDIR /server
COPY ./server/requirements.txt /server/
RUN pip install -r requirements.txt
COPY ./server /server
RUN python ./manage.py collectstatic --noinput
#########################################
FROM nginx:1.13
RUN rm -v /etc/nginx/nginx.conf
COPY ./nginx/nginx.conf /etc/nginx/
RUN mkdir /nginx
COPY --from=server /server/staticfiles /nginx/static
nginx.conf file
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    server {
        listen 443 ssl http2;
        server_name englishgame.ru;

        ssl on;
        server_tokens off;
        ssl_certificate /etc/nginx/ssl/live/englishgame.ru/fullchain.pem;
        ssl_certificate_key /etc/nginx/ssl/live/englishgame.ru/fullchain.pem;
        ssl_dhparam /etc/nginx/dhparam/dhparam-2048.pem;
        ssl_buffer_size 8k;
        ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
        ssl_prefer_server_ciphers on;
        ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;

        location / {
            return 301 https://englishgame.ru$request_uri;
        }
    }

    server {
        listen 80;
        server_name englishgame.ru;

        location ~ /.well-known/acme-challenge {
            allow all;
            root /usr/share/nginx/html/letsencrypt;
        }

        location /static {
            alias /nginx/static/;
            expires max;
        }

        location /media {
            alias /nginx/media/;
            expires 10d;
        }

        location /robots.txt {
            alias /nginx/static/robots.txt;
        }

        location /sitemap.xml {
            alias /nginx/static/sitemap.xml;
        }

        location / {
            proxy_pass http://server:8000;
            proxy_redirect off;
            proxy_read_timeout 60;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
Thank you for your help!
Alright, so based on the error ERROR: for certbot Cannot start service certbot: network 4d3b22b1f02355c68a900a7dfd80b8c5bb64508e7e12d11dadae11be11ed83dd not found, the issue is not related to any of the other services defined in your compose file, so those and your Dockerfile and nginx configuration should be irrelevant to the problem.
Then, to solve the problem of why the certbot service cannot be created: usually this kind of error happens when a network that was configured for a service has been removed manually. In this case, however, no service even refers to a network; thus only the hash sum is printed, not a network name.
Googling the error brings up a similar problem with Let's Encrypt (https://github.com/BirgerK/docker-apache-letsencrypt/issues/8), which points to an actual Docker Compose issue: https://github.com/docker/compose/issues/5745. The solution there is to run Docker Compose with the --force-recreate option, so the problem should be fixed by running docker-compose up -d --force-recreate.
I have a Django app running in Docker containers (see the docker-compose file and Dockerfile below). I have removed the port exposure from my docker-compose file; however, when I deploy the code onto an Ubuntu server, I can still access the app via port 3000. I am also using Nginx to do the proxying (see the Nginx file below).
services:
  rabbitmq:
    restart: always
    image: rabbitmq:3.7
    ...
  db:
    restart: always
    image: mongo:4
    ...
  cam_dash:
    restart: always
    build: .
    command: python3 manage.py runserver 0.0.0.0:3000
    ...
  celery:
    restart: always
    build: .
    command: celery -A dashboard worker -l INFO -c 200
    ...
  celery_beat:
    restart: always
    build: .
    command: celery beat -A dashboard -l info --scheduler django_celery_beat.schedulers:DatabaseScheduler
    ...
FROM python:3.7
COPY requirements.txt /
RUN pip3 install -r /requirements.txt
ADD ./ /dashboard
WORKDIR /dashboard
COPY ./docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
EXPOSE 3000
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    return 301 https://$host$request_uri;

    root /var/www/html;
    index index.html;
}

server {
    listen 443;
    server_name camonitor.uct.ac.za;

    ssl on;
    ssl_certificate /etc/ssl/certs/wildcard.crt;
    ssl_certificate_key /etc/ssl/private/wildcard.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

    location / {
        root /var/www/html;
        index index.html;
    }

    location /dash/ {
        proxy_pass http://127.0.0.1:3000/dash/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
...
I am expecting that https://example.com:3000/dash/ should not be accessible, while https://example.com/dash/ works just fine. Thanks for the help.
You should prevent access to port 3000 using the system's firewall.
I had the same issue hosting more than one web server on the same machine and proxying with Nginx. I solved it with this port configuration in docker-compose.yml, binding the port only to localhost; you could apply the same configuration to the Python server:
"127.0.0.1:3000:3000"
version: '3'

services:
  myService:
    image: "myService/myService:1"
    container_name: "myService"
    ports:
      - "127.0.0.1:3000:3000"
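The difference is which interface the published port is bound to: "127.0.0.1:3000:3000" binds only to the loopback interface, while "3000:3000" binds to all interfaces (0.0.0.0) and is therefore reachable from outside. A small stdlib sketch of the same distinction:

```python
import socket

# Loopback-only bind: reachable from this machine only.
lo = socket.socket()
lo.bind(("127.0.0.1", 0))
print(lo.getsockname()[0])  # 127.0.0.1

# All-interfaces bind: reachable from any network interface (firewall permitting).
anyif = socket.socket()
anyif.bind(("0.0.0.0", 0))
print(anyif.getsockname()[0])  # 0.0.0.0

lo.close()
anyif.close()
```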
I'm deploying my Django/Nginx/Gunicorn web app to an EC2 instance using docker-compose. The EC2 instance has a static IP that mywebapp.com / www.mywebapp.com point to, and I've completed the Certbot verification (the site works on port 80 over HTTP), but now I'm trying to get it working over SSL.
Right now HTTP (including static files) works, and HTTPS dynamic content (from Django) works, but static files over HTTPS do not. I think my Nginx configuration is wonky.
I tried copying the location /static/ block into the SSL server context in the Nginx conf file, but that caused SSL to stop working altogether, not just static files over SSL.
Here's the final docker-compose.yml:
services:
  certbot:
    entrypoint: /bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'
    image: certbot/certbot
    volumes:
      - /home/ec2-user/certbot/conf:/etc/letsencrypt:rw
      - /home/ec2-user/certbot/www:/var/www/certbot:rw
  nginx:
    command: /bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g "daemon off;"'
    depends_on:
      - web
    image: xxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/xxxxxxxx:latest
    ports:
      - 80:80/tcp
      - 443:443/tcp
    volumes:
      - /home/ec2-user/certbot/conf:/etc/letsencrypt:rw
      - static_volume:/usr/src/app/public:rw
      - /home/ec2-user/certbot/www:/var/www/certbot:rw
  web:
    entrypoint: gunicorn mywebapp.wsgi:application --bind 0.0.0.0:7000
    image: xxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/xxxxxxxx:latest
    volumes:
      - static_volume:/usr/src/app/public:rw
version: '3.0'
volumes:
  static_volume: {}
nginx.prod.conf:
upstream mywebapp {
    # web is the name of the service in the docker-compose.yml
    # 7000 is the port that gunicorn listens on
    server web:7000;
}

server {
    listen 80;
    server_name mywebapp;

    location / {
        proxy_pass http://mywebapp;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /static/ {
        alias /usr/src/app/public/;
    }

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }
}

server {
    # https://github.com/wmnnd/nginx-certbot/blob/master/data/nginx/app.conf
    listen 443 ssl;
    server_name mywebapp;
    server_tokens off;

    location / {
        proxy_pass http://mywebapp;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # generated with help of certbot
    ssl_certificate /etc/letsencrypt/live/mywebapp.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mywebapp.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
and finally the nginx service Dockerfile:
FROM nginx:1.15.12-alpine
RUN rm /etc/nginx/conf.d/default.conf
COPY ./nginx.prod.conf /etc/nginx/conf.d
I build and push to ECR on my local machine, then docker-compose pull and run with docker-compose up -d on the EC2 instance.
The error I see in docker-compose logs is:
nginx_1 | 2019/05/09 02:30:34 [error] 8#8: *1 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xx.xx.xx, server: mywebapp, request: "GET / HTTP/1.1", upstream: "http://192.168.111.3:7000/", host: "ec2-xxx-xxx-xxx-xxx.compute-1.amazonaws.com"
I'm not sure what's going wrong. I'm trying to get both dynamic content (Gunicorn) and static content (from /usr/src/app/public) served correctly over HTTPS using the certs I've generated and verified. Does anyone know what I might be doing wrong?
Check your configuration file with nginx -T: are you seeing the correct configuration? Is your build process pulling in the correct conf?
It's helpful to debug this directly on the remote machine: run docker-compose exec nginx sh to get a shell inside the container, tweak the conf from there, and reload with nginx -s reload. This will speed up your iteration cycles while debugging an SSL issue.