nginx access logs show 502 errors
nginx error logs show: failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: , request: "GET /health HTTP/1.1", upstream: "http://10.0.0.2:3000/health", host: "x.x.x.x"
The Drupal site runs a health module that exposes the path /health. This container runs fine in ECS. The docker-entrypoint.sh script just executes a few drush commands.
Dockerfile:
FROM drupal:9-php7.4-apache
# code that installs soap, drush, composer etc ...
# Assign the drupal web to apache web folder
RUN rm /var/www/html
RUN ln -s /opt/website/web /var/www/html
COPY ./docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
EXPOSE 3000
nginx.conf in .platform/nginx
events {
    worker_connections 1024;
}

http {
    server {
        listen 443 ssl;
        server_name localhost;

        ssl_certificate /etc/pki/tls/certs/server.crt;
        ssl_certificate_key /etc/pki/tls/certs/server.key;
        ssl_prefer_server_ciphers on;

        location / {
            proxy_pass http://localhost;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
I have a couple of .ebextensions files, but those are just for environment variables. The container builds, starts, and connects to the database, but then fails the health check and eventually gets terminated. Any help or pointers would be great.
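A "connection refused" from nginx usually means nothing is listening on the upstream port it proxies to (3000 here). If you can get onto the instance, a quick check along these lines may narrow it down; the container ID is a placeholder, and the upstream IP is the one from the error log above:

sudo docker ps                                              # which container is running and which ports it publishes
sudo docker exec <container-id> cat /etc/apache2/ports.conf # the port Apache inside the drupal image actually listens on
curl -v http://10.0.0.2:3000/health                         # the upstream nginx is proxying to
curl -v http://10.0.0.2:80/health                           # compare with Apache's default port

It is also worth confirming that docker-entrypoint.sh ends by handing control back to Apache (for example with exec apache2-foreground), since overriding ENTRYPOINT replaces the image's default startup command.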
Related
I have a setup with Nginx as web server and Gunicorn as application server serving a Django REST API.
Nginx is listening at port 80 and gunicorn at port 8000 (I launch gunicorn using this command):
gunicorn --bind 0.0.0.0:8000 cdm_api.wsgi -t 200 --workers=3
and when I send the request directly to port 8000, I am able to access the API, for instance:
curl -d "username=<user>&password=<pass>" -X POST http://127.0.0.1:8000/api/concept
However, when I make the same request through Nginx acting as a reverse proxy, I get a 404 Not Found error:
curl -d "username=<user>&password=<pass>" -X POST http://127.0.0.1:80/api/concept
Here is my nginx conf file:
server {
    listen 80;
    listen [::]:80;
    server_name 127.0.0.1;

    location /api/ {
        proxy_pass http://127.0.0.1:8000/api/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_connect_timeout 360s;
        proxy_read_timeout 360s;
    }

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
}
Let me know if more information is needed. Thanks in advance!
So:
gunicorn will create a sock file, and you have to point your proxy_pass at that socket. Right now you are proxying to localhost:8000, and for that to work you would have to run the Django runserver command so that nginx can find a server listening on port 8000.
Something like:
location / {
    include proxy_params;
    # proxy_pass http://unix:/home/abc/backend/social.sock;
    proxy_pass http://unix:/home/abc/backend/backend.sock;
}
Gunicorn makes it possible to serve Django in production; you don't have to run the runserver command every time. Make sure gunicorn restarts every time the server reboots.
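For reference, a minimal sketch of binding gunicorn to a unix socket that matches a proxy_pass like the one above; the socket path is the one from this example and cdm_api.wsgi is the module from the question, so adjust both to your layout:

gunicorn --bind unix:/home/abc/backend/backend.sock cdm_api.wsgi -t 200 --workers=3

The directory containing the socket must be readable and writable by both the gunicorn user and the nginx worker user.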
I have deployed Django with nginx following the DigitalOcean tutorials. Then I blindly followed the "Example Setup" section of the Channels documentation after installation.
My confusions are:
When setting up the configuration file for supervisor, it says to set the directory as
directory=/my/app/path
Should I write down the path where the manage.py is or the path where the settings.py is?
When I reload nginx after changing the nginx configuration file, I get an error saying that
host not found in upstream "channels-backend" in
/etc/nginx/sites-enabled/mysite:18 nginx: configuration file
/etc/nginx/nginx.conf test failed
I did replace "mysite" with the name of my website. I had another error earlier saying
no live upstreams while connecting to upstream
but could not reproduce it.
I am new to using Channels, so any additional information on upstreams would be helpful. Please let me know if I need to provide more information.
Edit:
Here is the nginx.conf file. I changed some sensitive data inside the <>.
upstream channels-backend {
    server localhost:8000;
}

server {
    listen 80;
    server_name <domain name> <ip address>;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root <root to static>;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        proxy_pass http://channels-backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
This passes nginx -t. The error message I have in error.log is:
connect() failed (111: Connection refused) while connecting to upstream, client: <some ip>, server: <domain name>, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8000/", host: "<domain name>"
The problem was actually in the supervisor configuration file.
[fcgi-program:asgi]
# TCP socket used by Nginx backend upstream
socket=tcp://localhost:8000
# Directory where your site's project files are located
directory=/my/app/path
# Each process needs to have a separate socket file, so we use process_num
# Make sure to update "mysite.asgi" to match your project name
command=daphne -u /run/daphne/daphne%(process_num)d.sock --fd 0 --access-log - --proxy-headers mysite.asgi:application
# Number of processes to startup, roughly the number of CPUs you have
numprocs=4
# Give each process a unique name so they can be told apart
process_name=asgi%(process_num)d
# Automatically start and recover processes
autostart=true
autorestart=true
# Choose where you want your log to go
stdout_logfile=/your/log/asgi.log
redirect_stderr=true
To check whether supervisor was running correctly, I ran
sudo supervisorctl status
This gave me a FATAL status. The problem was that I am using a virtual environment, and daphne was only installed inside that virtual environment. Therefore your command should be something like:
command= /my/project/virtualenv/path/bin/daphne -u /run/daphne/daphne%(process_num)d.sock --fd 0 --access-log - --proxy-headers mysite.asgi:application
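After editing the supervisor config, reload it so the change is picked up (assuming the program group is still named asgi as above):

sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl restart asgi:*
sudo supervisorctl status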
I am trying to deploy my ASGI application.
I have followed the docs at https://channels.readthedocs.io/en/latest/deploying.html exactly.
But when I check my logs I end up with:
CRITICAL Listen failure: Couldn't listen on 0.0.0.0:x: [Errno 98] Address already in use.
I have tried changing ports, but I end up with the same thing.
E.g. for port 8005: daphne -p 8005 -b 0.0.0.0 myproject.asgi:application
I then get the following in my asgi.log file:
Configuring endpoint unix:/run/daphne/daphne1.sock
2020-07-16 08:22:10,324 INFO Starting server at fd:fileno=0, tcp:port=8005:interface=0.0.0.0, unix:/run/daphne/daphne0.sock
2020-07-16 08:22:10,324 INFO HTTP/2 support not enabled (install the http2 and tls Twisted extras)
2020-07-16 08:22:10,325 INFO Configuring endpoint fd:fileno=0
2020-07-16 08:22:10,345 INFO Listening on TCP address 127.0.0.1:8005
2020-07-16 08:22:10,345 INFO Configuring endpoint tcp:port=8005:interface=0.0.0.0
My /etc/supervisor/conf.d/abc.conf file:
[fcgi-program:isbuddy]
# TCP socket used by Nginx backend upstream
socket=tcp://localhost:8005
# Directory where your site's project files are located
directory=/home/bhaskar/myprojectdir
# Each process needs to have a separate socket file, so we use process_num
# Make sure to update "mysite.asgi" to match your project name
command=/home/bhaskar/venv/bin/daphne -p 8005 -b 0.0.0.0 -u /run/daphne/daphne%(process_num)d.sock --fd 0 --access-log - --proxy-headers myproject.asgi:application
# Number of processes to startup, roughly the number of CPUs you have
numprocs=2
# Give each process a unique name so they can be told apart
process_name=asgi%(process_num)d
# Automatically start and recover processes
autostart=true
autorestart=true
# Choose where you want your log to go
stdout_logfile=/home/bhaskar/asgi.log
redirect_stderr=true
My /etc/nginx/sites-enabled/abcd file is:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    location / {
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        proxy_pass http://0.0.0.0:8005;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
I have tried every possibility I could find in the docs.
I want my website to be reachable on my public IP.
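One thing that may be worth checking (an educated guess from the config above, not a confirmed fix): with an [fcgi-program] section, supervisor itself binds tcp://localhost:8005 and hands the listening socket to each daphne process as file descriptor 0, so also passing -p 8005 -b 0.0.0.0 makes every process try to bind that port a second time, which would explain "Address already in use" as soon as more than one process starts. A sketch of the command relying only on the inherited socket:

command=/home/bhaskar/venv/bin/daphne -u /run/daphne/daphne%(process_num)d.sock --fd 0 --access-log - --proxy-headers myproject.asgi:application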
I'm deploying my Django/Nginx/Gunicorn webapp to an EC2 instance using docker-compose. The EC2 instance has a static IP that mywebapp.com / www.mywebapp.com point to, and I've completed the certbot verification (the site works on port 80 over HTTP), but now I'm trying to get it working over SSL.
Right now HTTP (including static files) works, and HTTPS dynamic content (from Django) works, but static files over HTTPS do not. I think my nginx configuration is wonky.
I tried copying the location /static/ block into the SSL server context in the nginx conf file, but that caused SSL to stop working altogether, not just static files over SSL.
Here's the final docker-compose.yml:
services:
  certbot:
    entrypoint: /bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'
    image: certbot/certbot
    volumes:
      - /home/ec2-user/certbot/conf:/etc/letsencrypt:rw
      - /home/ec2-user/certbot/www:/var/www/certbot:rw
  nginx:
    command: /bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g "daemon off;"'
    depends_on:
      - web
    image: xxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/xxxxxxxx:latest
    ports:
      - 80:80/tcp
      - 443:443/tcp
    volumes:
      - /home/ec2-user/certbot/conf:/etc/letsencrypt:rw
      - static_volume:/usr/src/app/public:rw
      - /home/ec2-user/certbot/www:/var/www/certbot:rw
  web:
    entrypoint: gunicorn mywebapp.wsgi:application --bind 0.0.0.0:7000
    image: xxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/xxxxxxxx:latest
    volumes:
      - static_volume:/usr/src/app/public:rw
version: '3.0'
volumes:
  static_volume: {}
nginx.prod.conf:
upstream mywebapp {
    # web is the name of the service in the docker-compose.yml
    # 7000 is the port that gunicorn listens on
    server web:7000;
}

server {
    listen 80;
    server_name mywebapp;

    location / {
        proxy_pass http://mywebapp;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /static/ {
        alias /usr/src/app/public/;
    }

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }
}

server {
    # https://github.com/wmnnd/nginx-certbot/blob/master/data/nginx/app.conf
    listen 443 ssl;
    server_name mywebapp;
    server_tokens off;

    location / {
        proxy_pass http://mywebapp;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # generated with help of certbot
    ssl_certificate /etc/letsencrypt/live/mywebapp.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mywebapp.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
and finally the nginx service Dockerfile:
FROM nginx:1.15.12-alpine
RUN rm /etc/nginx/conf.d/default.conf
COPY ./nginx.prod.conf /etc/nginx/conf.d
I simply build and push to ECR on my local machine, then run docker-compose pull and docker-compose up -d on the EC2 instance.
The error I see in docker-compose logs is:
nginx_1 | 2019/05/09 02:30:34 [error] 8#8: *1 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xx.xx.xx, server: mywebapp, request: "GET / HTTP/1.1", upstream: "http://192.168.111.3:7000/", host: "ec2-xxx-xxx-xxx-xxx.compute-1.amazonaws.com"
And I'm not sure what's going wrong. I'm trying to get both dynamic content (gunicorn) and static content (from /usr/src/app/public) served correctly over HTTPS using the certs I've generated and verified.
Anyone know what I might be doing wrong?
Check your configuration file with nginx -T: are you seeing the correct configuration? Is your build process pulling in the correct conf?
It's helpful to debug this directly on the remote machine: docker-compose exec nginx sh to get inside the container, tweak the conf from there, and nginx -s reload. This will speed up your iteration cycle while debugging an SSL issue.
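For example, something along these lines (the service name web and port 7000 are taken from the compose file above):

docker-compose exec nginx nginx -T   # dump the configuration nginx is actually running with
docker-compose exec nginx sh         # open a shell inside the nginx container
# from inside the container: can the upstream be reached at all?
wget -qO- http://web:7000/ || echo "upstream unreachable"
# after tweaking the conf inside the container
nginx -s reload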
I'm using docker-compose to build a project with Django and nginx as services. When I launch the daphne server and a client tries to connect to the websocket server, I get this error:
*1 recv() failed (104: Connection reset by peer) while reading response header from upstream
The client side shows this:
failed: Error during WebSocket handshake: Unexpected response code: 502
Here is my docker-compose.yml
version: '3'
services:
  nginx:
    image: nginx
    command: nginx -g 'daemon off;'
    ports:
      - "1010:80"
    volumes:
      - ./config/nginx/nginx.conf:/etc/nginx/nginx.conf
      - .:/makeup
    links:
      - web
  web:
    build: .
    command: /usr/local/bin/circusd /makeup/config/circus/web.ini
    environment:
      DJANGO_SETTINGS_MODULE: MakeUp.settings
      DEBUG_MODE: 1
    volumes:
      - .:/makeup
    expose:
      - '8000'
      - '8001'
    links:
      - cache
    extra_hosts:
      "postgre": 100.73.138.65
Nginx:
server {
    listen 80;
    server_name thelab518.cloudapp.net;
    keepalive_timeout 15;
    root /makeup/;
    access_log /dev/stdout;
    error_log /dev/stderr;

    location /api/stream {
        proxy_pass http://web:8001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }

    location / {
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://web:8000;
    }
}
And the circusd's web.ini file:
[watcher:web]
cmd = /usr/local/bin/gunicorn MakeUp.wsgi:application -c config/gunicorn.py
working_dir = /makeup/
copy_env = True
user = www-data
[watcher:daphne]
cmd = /usr/local/bin/daphne -b 0.0.0.0 -p 8001 MakeUp.asgi:channel_layer
working_dir = /makeup/
copy_env = True
user = root
[watcher:worker]
cmd = /usr/bin/python3 manage.py runworker
working_dir = /makeup/
copy_env = True
user = www-data
As quite explicitly stated in the fine manual, to successfully run Channels you need a dedicated application server implementing the ASGI protocol, such as the supplied daphne.
The entire Django execution model has been changed with Channels, so that there are separate "interface servers" taking care of receiving and sending messages over, for example, WebSockets or HTTP or SMS, and "worker servers" that run the actual code (potentially on a different server or VM or container or...). The two are connected by a "Channel layer" that carries messages and replies back and forth.
The current implementation supplies 3 channel layers that talk ASGI between an interface server and a worker server:
An In-memory channel layer, used mainly for running the test server (it's single process)
An IPC based channel layer, usable to run different workers on the same server
A redis based channel layer, that should be used for heavy production sites, able to connect interface servers to multiple worker servers.
You configure them like you do for DATABASES:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "ROUTING": "my_project.routing.channel_routing",
        "CONFIG": {
            "hosts": [("redis-channel-1", 6379), ("redis-channel-2", 6379)],
        },
    },
}
Of course this means that your docker config has to change to add one or more interface servers instead of, or in addition to, nginx (even if, in that case, you'll need to accept websocket connections on a different port, with all the possible problems that entails) and, quite likely, a redis instance connecting them all.
This in turn means that until circus and nginx can support ASGI, it won't be possible to use them with django-channels, or that support will only cover the regular HTTP part of your system.
You can find more info in the Deploying section of the official documentation.
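To illustrate that last point, a rough sketch of the extra services such a setup could add under services: in the docker-compose.yml; the commands are lifted from the circus web.ini above, while the service names and the redis host (which must match the hosts entry in CHANNEL_LAYERS) are made up for this example:

  redis-channel-1:
    image: redis
  daphne:
    build: .
    command: /usr/local/bin/daphne -b 0.0.0.0 -p 8001 MakeUp.asgi:channel_layer
    working_dir: /makeup
    volumes:
      - .:/makeup
    links:
      - redis-channel-1
  worker:
    build: .
    command: /usr/bin/python3 manage.py runworker
    working_dir: /makeup
    volumes:
      - .:/makeup
    links:
      - redis-channel-1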
It looks like you started daphne on port 8001 but are exposing both ports 8000 and 8001 in docker-compose. Port 8000 is not pointing to any server (daphne is on 8001). In your nginx config, set the proxy to port 8001 and expose only port 8001 in docker-compose.
I have created a simple example on GitHub of how it can be set up, where nginx proxies to both ASGI and WSGI servers, but you can go with only the ASGI server:
The nginx config:
upstream app {
    server wsgiserver:8000;
}

upstream ws_server {
    server asgiserver:9000;
}

server {
    listen 8000 default_server;
    listen [::]:8000;
    client_max_body_size 20M;

    location / {
        try_files $uri @proxy_to_app;
    }

    location /tasks {
        try_files $uri @proxy_to_ws;
    }

    location @proxy_to_ws {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_pass http://ws_server;
    }

    location @proxy_to_app {
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Url-Scheme $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app;
    }
}
The docker-compose.yml:
version: '2'
services:
  nginx:
    extends:
      file: docker-common.yml
      service: nginx
    ports:
      - 8000:8000
    volumes:
      - ./docker/nginx/default.conf:/etc/nginx/conf.d/default.conf
    volumes_from:
      - asgiserver
  asgiserver:
    extends:
      file: docker-common.yml
      service: backend
    entrypoint: /app/docker/backend/asgi-entrypoint.sh
    links:
      - postgres
      - redis
      - rabbitmq
    expose:
      - 9000
  wsgiserver:
    extends:
      file: docker-common.yml
      service: backend
    entrypoint: /app/docker/backend/wsgi-entrypoint.sh
    links:
      - postgres
      - redis
      - rabbitmq
    expose:
      - 8000
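For completeness, a guess at what an entrypoint script like asgi-entrypoint.sh might contain; the script itself and the project module name are not shown in the thread, so myproject is a placeholder (for Channels 1.x the target would be myproject.asgi:channel_layer rather than :application):

#!/bin/sh
# bind the ASGI server on the port the ws_server upstream expects (9000 above)
exec daphne -b 0.0.0.0 -p 9000 myproject.asgi:application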