I have a Flask application that works on my PC.
When I deployed it on the virtual server (CentOS 6.5) I used nginx, following this article:
https://www.digitalocean.com/community/tutorials/how-to-deploy-flask-web-applications-using-uwsgi-behind-nginx-on-centos-6-4
I had to change the port in /etc/nginx/nginx.conf because it conflicted with the Apache port (nginx failed to start because the port was already in use). So my nginx.conf is:
worker_processes 1;
events {
worker_connections 1024;
}
http {
sendfile on;
gzip on;
gzip_http_version 1.0;
gzip_proxied any;
gzip_min_length 500;
gzip_disable "MSIE [1-6]\.";
gzip_types text/plain text/xml text/css
text/comma-separated-values
text/javascript
application/x-javascript
application/atom+xml;
# Configuration containing list of application servers
upstream uwsgicluster {
server 127.0.0.1:8081;
# server 127.0.0.1:8081;
# ..
# .
}
# Configuration for Nginx
server {
# Running port
listen 81;
# Settings to by-pass for static files
location ^~ /static/ {
# Example:
# root /full/path/to/application/static/file/dir;
root /app/static/;
}
# Serve a static file (ex. favico) outside static dir.
location = /favico.ico {
root /app/favico.ico;
}
# Proxying connections to application servers
location / {
include uwsgi_params;
uwsgi_pass uwsgicluster;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
}
}
}
My application is located under 12.12.15.16/cgi-bin/My_app (the IP address here is a dummy).
I used the following command line to start the app:
env/bin/uwsgi --socket 127.0.0.1:8081 --protocol=http --wsgi-file main.py --callable app
My question is: How can I now call my app from a web browser?
Thank you for your help!
Your Flask app is listening on port 8081 but only for localhost connections (127.0.0.1). Nginx is listening on port 81 for connections and piping them to Flask on 8081. So from your own browser you want to access http://your-digital-ocean-ip:81/
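One extra thing worth double-checking, as a side note rather than part of the answer above: nginx's uwsgi_pass speaks the uwsgi binary protocol, while the uwsgi command shown above adds --protocol=http, which makes the backend speak plain HTTP instead. If http://your-digital-ocean-ip:81/ still fails, either start uwsgi without --protocol=http (so it matches uwsgi_pass and the included uwsgi_params), or switch the location to an ordinary HTTP proxy. A minimal sketch of the second option, reusing the uwsgicluster upstream from the question:
location / {
    # plain HTTP proxying; only appropriate if uwsgi is started with --protocol=http
    proxy_pass http://uwsgicluster;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}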
Related
I'm trying to use nginx as a reverse proxy to receive incoming calls and then, depending on the server_name, redirect those calls to different computers (hosts) running nginx, Django, and Gunicorn. So far, I've tried different configurations for the conf file on the host, but none of them are working. Is there anything wrong with my conf files?
This is the nginx.conf in 192.168.0.13 that will function as a reverse proxy:
server {
listen 80;
server_name www.coding.test;
location / {
proxy_pass http://192.168.0.8:80;
proxy_redirect off;
# app1 reverse proxy follow
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
This is the nginx.conf in 192.168.0.8 that is intended to run the Django app:
upstream django {
server unix:///home/pi/coding-in-dfw/mysocket.sock fail_timeout=0;
}
server {
listen 80 default_server;
server_name www.coding.test
client_max_body_size 4G;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
location /static/ {
alias /home/pi/coding-in-dfw/static/;
}
location /media/ {
alias /home/pi/coding-in-dfw/media/;
}
# Finally, send all non-media requests to the Django server.
location / {
uwsgi_pass django;
include /etc/nginx/uwsgi_params; # the uwsgi_params file you installed
}
location /.well-known {
alias /home/pi/coding-in-dfw/.well-known;
}
}
Finally this is the way I'm running gunicorn:
gunicorn --workers 5 --bind unix:///home/pi/coding-in-dfw/mysocket.sock codingindfw.wsgi:application && sudo service nginx restart
Any help is appreciated.
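A couple of observations on the backend host's config, offered as a sketch rather than a verified fix: the server_name line is missing its terminating semicolon, and gunicorn serves plain HTTP, so nginx normally talks to it with proxy_pass rather than uwsgi_pass (uwsgi_pass expects the uwsgi binary protocol). A minimal version of the backend server block along those lines, reusing the socket path from the question:
upstream django {
    server unix:/home/pi/coding-in-dfw/mysocket.sock fail_timeout=0;
}
server {
    listen 80 default_server;
    server_name www.coding.test;  # note the terminating semicolon
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # gunicorn speaks HTTP, so proxy_pass (not uwsgi_pass) against the socket upstream
        proxy_pass http://django;
    }
}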
I want to redirect HTTP requests to HTTPS on Elastic Beanstalk, with nginx as the proxy system.
I've found a lot of advice on Google but none of it helped; it doesn't redirect.
This is my current test.config file in the .ebextensions directory:
files:
  "/etc/nginx/conf.d/proxy.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      server {
        if ($http_x_forwarded_proto = "http") {
          return 301 https://$host$request_uri;
        }
      }
I've also tried countless other settings, none of them worked.
These are my load balancer settings:
I hope you can help me. :)
Some considerations:
1 - New Amazon Elastic Beanstalk platform versions running Amazon Linux 2 use a different path for the reverse proxy configuration:
~/workspace/my-app/
|-- .platform
| `-- nginx
| `-- conf.d
| `-- elasticbeanstalk
| `-- 00_application.conf
`-- other source files
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/platforms-linux-extend.html
2 - The AWS ELB Health Checker appears to be unable to check HTTPS endpoints.
In particular, if you are using a custom certificate for your own domain, it is unable to perform a check against what it considers an "untrusted site".
For example, https://your-eb-app.eu-west-3.elasticbeanstalk.com published with a certificate registered for your organization under the DNS alias https://your-eb-app.your-organization.com causes an ELB Health Checker error (certificate domain mismatch).
3 - The suggested configuration exposes all locations to ANY client that shows up with an "ELB-HealthChecker*" user-agent on the standard HTTP port (80); not quite what we want :-)
You can configure ELB Health Checker to accept the HTTP 301 status, but it doesn't have much use; a simple redirect response does not mean that our web application is in good health :-)
A more secure solution is a dedicated health check endpoint configuration:
location / {
set $redirect 0;
if ($http_x_forwarded_proto != "https") {
set $redirect 1;
}
if ($redirect = 1) {
return 301 https://$host$request_uri;
}
proxy_pass http://127.0.0.1:5000;
proxy_http_version 1.1;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location = /health-check.html {
set $redirect 0;
if ($http_x_forwarded_proto != "https") {
set $redirect 1;
}
if ($http_user_agent ~* "ELB-HealthChecker") {
set $redirect 0;
}
if ($redirect = 1) {
return 301 https://$host$request_uri;
}
proxy_pass http://127.0.0.1:5000;
proxy_http_version 1.1;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
This is the only solution that worked.
It's necessary to overwrite the default nginx file after AWS has created it. So there have to be two more files:
Write the nginx file.
Create a script that overwrites the default nginx file.
Run the script after AWS has created the default file.
I faced a similar problem when I was trying to redirect all HTTP traffic to HTTPS in my AWS Elastic Beanstalk Go environment using Nginx. This is the solution I was given by the AWS Support team:
Create a file in the below directory structure at the root of the application code.
.ebextensions/nginx/conf.d/elasticbeanstalk/00_application.conf
with the content
location / {
set $redirect 0;
if ($http_x_forwarded_proto != "https") {
set $redirect 1;
}
if ($http_user_agent ~* "ELB-HealthChecker") {
set $redirect 0;
}
if ($redirect = 1) {
return 301 https://$host$request_uri;
}
proxy_pass http://127.0.0.1:5000;
proxy_http_version 1.1;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
For a complete list of AWS provided config files, you should check out this link.
What I did to achieve this: I completely overrode the original nginx.conf with my own custom nginx.conf, along with some custom configuration for the location directives.
.platform/
`-- nginx
    |-- nginx.conf
    `-- conf.d
        `-- elasticbeanstalk
            `-- custom.conf
Here is my nginx.conf
user nginx;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
worker_processes auto;
worker_rlimit_nofile 32153;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
include conf.d/*.conf;
map $http_upgrade $connection_upgrade {
default "upgrade";
}
server {
listen 80 default_server;
access_log /var/log/nginx/access.log main;
client_header_timeout 60;
client_body_timeout 60;
keepalive_timeout 60;
gzip off;
gzip_comp_level 4;
gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
# Include the Elastic Beanstalk generated locations
include conf.d/elasticbeanstalk/*.conf;
}
}
The following line helped me to safely override the configuration:
include conf.d/elasticbeanstalk/*.conf;
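To make the override mechanism concrete: with that include in place, the server block above defines no location blocks of its own. Elastic Beanstalk drops its generated location files into conf.d/elasticbeanstalk/, and anything you put there yourself (such as the custom.conf from the tree above) is merged in alongside them. A small, purely hypothetical custom.conf, just to show the shape (the location name and response are made up; only the include path comes from the config above):
# conf.d/elasticbeanstalk/custom.conf - hypothetical example, included inside the server block
location /health {
    access_log off;
    return 200 "ok";
}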
AWS Beanstalk environment Load balancer
Make sure that under the load balancer settings of the Beanstalk environment both ports (80 and 443) are enabled. If port 80 is disabled you will get a 503 "Service Temporarily Unavailable" error.
I am currently working on deploying my project over HTTPS; however, I am running into some issues. I have it working with HTTP, but when I try to incorporate SSL it breaks. I think I am misconfiguring the gunicorn upstream client in my nginx block, but I am uncertain. Could the issue be in the unix binding in my gunicorn service file? I am very new to gunicorn, so I'm a little lost.
Here is my configuration below.
Gunicorn:
[Unit]
Description=gunicorn daemon
After=network.target
[Service]
Environment=PYTHONHASHSEED=random
User=USER
Group=www-data
WorkingDirectory=/path/to/project
ExecStart=/path/to/project/project_env/bin/gunicorn --workers 3 --bind unix:/path/to/project/project.sock project.wsgi:application
[Install]
WantedBy=multi-user.target
Nginx (working-http):
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name server_domain;
location = /favicon.ico { access_log off; log_not_found off; }
location /static/ {
root /path/to/project;
}
location / {
include proxy_params;
proxy_pass http://unix:/path/to/project/project.sock;
}
}
Nginx (https):
upstream server_prod {
server unix:/path/to/project/project.sock fail_timeout=0;
}
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name server_domain;
}
server {
server_name server_domain;
listen 443;
ssl on;
ssl_certificate /etc/ssl/server_domain.crt;
ssl_certificate_key /etc/ssl/server_domain.key;
location /static/ {
root /path/to/project;
}
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header Host $http_host;
proxy_redirect off;
if (!-f $request_filename) {
proxy_pass http://server_prod;
break;
}
}
}
Your gunicorn systemd unit file seems OK. Your nginx is generally OK too. You have posted too little info for a precise diagnosis; I'm guessing you are not passing the X-Forwarded-Proto header to gunicorn, but it could be something else. Here's an nginx configuration file that works for me:
upstream gunicorn{
# fail_timeout=0 means we always retry an upstream even if it failed
# to return a good HTTP response (in case the Unicorn master nukes a
# single worker for timing out).
# for UNIX domain socket setups:
server unix:/path/to/project/project.sock fail_timeout=0;
# for TCP setups, point these to your backend servers
# server 127.0.0.1:9000 fail_timeout=0;
}
server {
listen 80;
listen 443 ssl http2;
server_name server_domain;
ssl_certificate /etc/ssl/server_domain.crt;
ssl_certificate_key /etc/ssl/server_domain.key;
# path for static files
root /path/to/collectstatic/dir;
location / {
# checks for static file, if not found proxy to app
try_files $uri @proxy_to_app;
}
location @proxy_to_app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# When Nginx is handling SSL it is helpful to pass the protocol information
# to Gunicorn. Many web frameworks use this information to generate URLs.
# Without this information, the application may mistakenly generate http
# URLs in https responses, leading to mixed content warnings or broken
# applications. In this case, configure Nginx to pass an appropriate header:
proxy_set_header X-Forwarded-Proto $scheme;
# pass the Host: header from the client right along so redirects
# can be set properly within the Rack application
proxy_set_header Host $http_host;
# we don't want nginx trying to do something clever with
# redirects, we set the Host: header above already.
proxy_redirect off;
# Try to serve static files from nginx, no point in making an
# *application* server like Unicorn/Rainbows! serve static files.
proxy_pass http://gunicorn;
}
}
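One addition you may still want on top of this (not part of the config above): with listen 80 and listen 443 ssl in the same server block, plain-HTTP visitors are served over HTTP rather than being redirected. If the goal is to force HTTPS, a common pattern is to drop the listen 80 line from the block above and add a separate port-80 server that only redirects; a sketch:
server {
    listen 80;
    listen [::]:80;
    server_name server_domain;
    # send all plain-HTTP traffic to the HTTPS site
    return 301 https://$host$request_uri;
}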
I am setting up a production server with nginx and gunicorn. I used the nginx.conf from the gunicorn examples pages and modified it:
worker_processes 1;
user mypolls webapps;
# 'user nobody nobody;' for systems with 'nobody' as a group instead
pid /var/run/nginx.pid;
error_log /webapps/mypolls/logs/nginx.error.log;
events {
worker_connections 1024; # increase if you have lots of clients
accept_mutex off; # set to 'on' if nginx worker_processes > 1
# 'use epoll;' to enable for Linux 2.6+
# 'use kqueue;' to enable for FreeBSD, OSX
}
http {
# fallback in case we can't determine a type
default_type application/octet-stream;
access_log /webapps/mypolls/logs/nginx.access.log combined;
sendfile on;
upstream app_server {
# fail_timeout=0 means we always retry an upstream even if it failed
# to return a good HTTP response
# for UNIX domain socket setups
server unix:/webapps/mypolls/run/gunicorn.sock fail_timeout=0;
# for a TCP configuration
# server 192.168.0.7:8000 fail_timeout=0;
}
server {
# if no Host match, close the connection to prevent host spoofing
listen 80 default_server;
return 444;
}
server {
# use 'listen 80 deferred;' for Linux
# use 'listen 80 accept_filter=httpready;' for FreeBSD
listen 80;
client_max_body_size 4G;
# set the correct host(s) for your site
server_name 192.168.1.17;
keepalive_timeout 5;
# path for static files
root /webapps/mypolls/app/static/;
location / {
include /etc/nginx/mime.types;
# checks for static file, if not found proxy to app
try_files $uri @proxy_to_app;
}
location @proxy_to_app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# enable this if and only if you use HTTPS
# proxy_set_header X-Forwarded-Proto https;
proxy_set_header Host $http_host;
# we don't want nginx trying to do something clever with
# redirects, we set the Host: header above already.
proxy_redirect off;
proxy_pass http://app_server;
}
error_page 500 502 503 504 /500.html;
location = /500.html {
root /webapps/mypolls/app/static/;
}
}
}
For some reason requests for static files are still passed to gunicorn:
[2017-02-08 15:54:33 +0100] [2207] [DEBUG] GET /static/polls/style.css
The files seem to be found, but empty files are served. Is something wrong with the configuration?
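One thing that stands out in the config above (an observation, not a confirmed diagnosis): nginx prepends root to the full request URI, so with root /webapps/mypolls/app/static/ a request for /static/polls/style.css is looked up at /webapps/mypolls/app/static/static/polls/style.css, is not found, and try_files falls through to the app server. Assuming the collected files actually live under /webapps/mypolls/app/static/, pointing root at the directory that contains static/ would let nginx serve them:
# root must be the directory that contains static/, not static/ itself,
# so that the URI /static/polls/style.css maps onto the filesystem correctly
root /webapps/mypolls/app/;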
I am trying to deploy a Django instance to EC2. I am using a combination of nginx and gunicorn to achieve that. I got the nginx instance and gunicorn to start correctly and I am able to get my instance running. But when I try to upload an image to the database on my application I run into this error in my gunicorn error.log:
connect() failed (111: Connection refused) while connecting to upstream
Also, all my API calls from the front end to the database return a 500 Internal Server Error in the console.
My nginx.conf looks like
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
include /etc/nginx/sites-available/*;
index index.html index.htm;
server {
listen 127.0.0.1:80;
listen [::]:80 default_server;
server_name 127.0.0.1;
root /usr/share/nginx/html;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
location / {
}
# redir
And my sites-enabled/default file is:
upstream app_server_djangoapp {
server 127.0.0.1:8000 fail_timeout=0;
}
server {
#EC2 instance security group must be configured to accept http connections over Port 80
listen 80;
server_name myec2isntance.com;
access_log /var/log/nginx/guni-access.log;
error_log /var/log/nginx/guni-error.log info;
keepalive_timeout 5;
# path for static files
location /static {
alias xxxxxx;
}
location /media {
alias xxxxxx;
}
location / {
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
if (!-f $request_filename) {
proxy_pass http://app_server_djangoapp;
break;
}
}
}
I tried most of the things people talked about: adding the right permissions to the folders, changing localhost to 127.0.0.1, etc. I am relatively new to this topic, so any help would be much appreciated!
Thank you
I would suggest changing the default to this:
upstream app_server_djangoapp {
server 127.0.0.1:8000 max_fails=3 fail_timeout=50;
keepalive 512;
}
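A side note on that suggestion (from the nginx documentation, not from the answer itself): the keepalive directive in an upstream block only takes effect for proxied connections if the location doing the proxy_pass also sets the HTTP version and clears the Connection header:
# inside the location that does proxy_pass http://app_server_djangoapp;
proxy_http_version 1.1;
proxy_set_header Connection "";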
- remove
keepalive_timeout 5;
- why do you have two location / blocks?
location / {
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
if (!-f $request_filename) {
proxy_pass http://app_server_djangoapp;
break;
}
}
}
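For what it's worth, here is how that nested pair would normally be collapsed into a single block, keeping the settings from the inner one; this is an illustrative sketch, not a tested fix for the asker's exact setup:
location / {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    # /static and /media already have their own locations above, so the
    # if (!-f $request_filename) guard is usually unnecessary here;
    # try_files is the preferred idiom if a filesystem fallback is needed
    proxy_pass http://app_server_djangoapp;
}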