I have a Flask app that runs fine on my local machine, with all routes working. However, when I run the same app with gunicorn on an EC2 Ubuntu server, only the "/" route works and the others don't.
app.py
@app.route('/')
def index():
    return render_template("someForm.html")

@app.route('/someForm', methods=['GET', 'POST'])
def interestForm():
    args = request.args.to_dict()
    emailID = args.get('email')
    # <some Python logic here>
    return render_template("someForm.html", somevariable1=value1, somevariable2=value2)

@app.route('/submit', methods=['GET'])
def submit():
    args = request.args.to_dict()
    email = args.get('email')
    # <some Python backend logic that updates the form values of the corresponding email in the database>
    if updateTable(email, updateDic) == 1:
        return redirect("someURL")
    else:
        return render_template("someForm.html", error_message="Issue with updating")

if __name__ == '__main__':
    app.jinja_env.auto_reload = True
    app.config['TEMPLATES_AUTO_RELOAD'] = True
    app.run(debug=True, host='0.0.0.0', port=8000, threaded=True)
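Note that gunicorn imports app.py rather than running it as a script, so nothing in the __main__ block above (debug mode, template auto-reload) takes effect in the deployed app. If you want auto-reload while developing under gunicorn, a rough equivalent, assuming the same app:app entry point, is:
gunicorn --reload -b 0.0.0.0:8000 app:app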
More Info on the deployment
I run the Flask app using the command below:
nohup gunicorn -b 0.0.0.0:8000 app:app &
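One cheap diagnostic, assuming the same layout: give gunicorn an access log so you can see whether requests for /someForm ever reach it.
nohup gunicorn -b 0.0.0.0:8000 --access-logfile /tmp/gunicorn.access.log app:app &
If HTTPS requests for /someForm never show up in that log, whatever is returning the 404 sits in front of gunicorn.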
I also configured the NGINX server to proxy to localhost:8000.
Inside /etc/nginx/sites-available/default
upstream flaskhelloworld {
    server 127.0.0.1:8000;
}
# Some code above
location / {
    try_files $uri $uri/ =404;
    include proxy_params;
    proxy_set_header Host $host;
    proxy_pass http://flaskhelloworld;
}
# some code below
I have configured DNS using Cloudflare and issued an SSL certificate using certbot. All seems fine, as I can access https://www.domain_name.com, but I can't access https://www.domain_name.com/someForm, nor can I access https://www.domain_name.com/someForm?email=kaushal@gmail.com through a browser (tested on Chrome and Edge).
TEST cases and other checks
I tried curl https://www.domain_name.com and it returns the HTML correctly. But when I curl https://www.domain_name.com/someForm, I get the result below:
<!doctype html>
<html lang=en>
<title>404 Not Found</title>
<h1>Not Found</h1>
<p>The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.</p>
Confusing case
Interestingly, when I use HTTP instead of HTTPS in Thunder Client or a curl command, it returns the proper results with a 200 status code.
curl "http://domain_name.com/someForm?email=kaushal@gmail.com"
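To narrow down which layer answers each scheme, comparing the verbose output of both requests may help (same hypothetical address as above):
curl -sv "http://domain_name.com/someForm?email=kaushal@gmail.com" -o /dev/null
curl -sv "https://www.domain_name.com/someForm?email=kaushal@gmail.com" -o /dev/null
The status line and the Server/cf-ray response headers show whether the 404 is produced by nginx, by Cloudflare, or by the Flask app itself.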
The only thing that works with HTTPS is the home route (i.e. "/").
Based on what you have said, I think all of your requests are going to /
Here is a working HTTPS configuration using certbot that routes requests to the correct places:
server {
    server_name mywebsite.com www.mywebsite.com;

    location / {
        include proxy_params;
        # note: no try_files here; the try_files $uri $uri/ =404 line in your config
        # 404s every path that is not a real file on disk before it reaches gunicorn,
        # which is what I think you are hitting
        proxy_pass http://unix:/home/user/project/myProject.sock; # this is from a gunicorn configuration but it could just be a proxy pass
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/mywebsite.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/mywebsite.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = mywebsite.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    server_name mywebsite.com www.mywebsite.com;
    listen 80;
    return 404; # managed by Certbot
}
You should only write this part yourself:
location / {
    include proxy_params;
    proxy_pass http://unix:/home/user/project/myProject.sock; # this is from a gunicorn configuration but it could just be a proxy pass
}
and allow certbot to do the rest
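If you keep the unix-socket proxy_pass shown above, gunicorn has to bind to that socket rather than a TCP port. A minimal sketch, assuming your WSGI entry point is app:app and the socket path from the config above:
gunicorn --workers 3 --bind unix:/home/user/project/myProject.sock app:app
Otherwise, keep your existing -b 0.0.0.0:8000 and use proxy_pass http://127.0.0.1:8000; instead.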
Related
I am very confused by the Nginx .well-known/acme-challenge configuration and how it works with a proxy for Django.
Here is my frontend config, which is working:
server {
    listen 80;
    server_name myserve.com;
    root /var/www/html/myapp_frontend/myapp/;
    index index.html;

    location / {
        try_files $uri$args $uri$args/ /index.html;
    }

    location /.well-known/acme-challenge {
        allow all;
        root /var/www/html/myapp_frontend/myapp/;
    }

    return 301 https://myserver.com$request_uri;
}
So, on the frontend I have no problem defining root /var/www/html/myapp_frontend/myapp/;
Now I can run the acme script like this:
/root/.acme.sh/acme.sh --issue -d myserver.com -w /var/www/html/myapp_frontend/myapp/
It is working fine.
But I have issues with my Django backend because the nginx config uses a proxy:
upstream myapp {
    server backend:8000;
}
server {
    listen 80;
    server_name api.myserver.com;

    location / {
        proxy_pass http://myapp;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    return 301 https://api.myserver.com$request_uri;
}
Notice that I do not really have a configuration for the folder "/var/www/html/myapp_backend/myapp/" as I do in the frontend config, because now I'm using a proxy.
So now I can't run the acme script like this:
/root/.acme.sh/acme.sh --issue -d myserver.com -w /var/www/html/myapp_backend/myapp/
How do I configure the folder for the SSL provider to be able to check my backend?
I don't know if it's possible to do it only in the Nginx configuration, but I came to the conclusion that, when using a proxy, it is Django that should serve the URL /.well-known/acme-challenge/.
I basically set a URL like this:
urlpatterns = [
    re_path(r'^acme-challenge/(?P<file_name>[\w\-]+)$', get_challenge),
]
and
urlpatterns = [
    ...
    path('.well-known/', include('myapp.wellknonw.urls')),
]
Notice that to accept multiple file names I use this: (?P<file_name>[\w-]+)$
And my views are like this:
from django.http import HttpResponse
from django.views.decorators.http import require_GET
from rest_framework import status  # assumed source of the status codes used below
# BASE_DIR is imported from wherever your project settings define it

@require_GET
def get_challenge(request, file_name=None):
    lines = []
    if file_name:
        path = f'{BASE_DIR}/myapp/.well-known/acme-challenge/{file_name}'
        try:
            with open(path) as f:
                lines = f.readlines()
        except OSError:
            return HttpResponse("File not found!",
                                status=status.HTTP_404_NOT_FOUND)
    return HttpResponse("".join(lines), content_type="text/plain")
In Docker, my volume for /myapp/.well-known/acme-challenge/
is mounted to /var/www/html/myapp_backend/myapp/.
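Before pointing acme.sh at it, the route can be sanity-checked with a hypothetical token name:
curl http://api.myserver.com/.well-known/acme-challenge/test-token
Getting the view's "File not found!" body back means Django is answering on that path; nginx's own 404 page means the request never reached the proxy.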
I have a web backend implemented in Django and running on Gunicorn, plus a Vue.js app that uses this backend. I want to serve both of them with nginx and also do the HTTPS configuration.
This is what my /etc/nginx/nginx.conf file looks like:
...
server {
    server_name .website.com;
    listen 80;
    return 307 https://$host$request_uri;
}
server {
    location / {
        proxy_set_header Host $host;
        proxy_pass http://localhost:8080; # where the Django app over gunicorn is running
    }
    location /static {
        root /code/frontend/dist/; # static frontend code created with vite on vue.js
        autoindex on;
        index index.html;
        try_files $uri $uri/ /index.html;
    }

    # ssl configs
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/website.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/website.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
Both of them, the Django and the Vue.js parts, are hosted in a single Docker container. Ports 80 and 8080 of this container are mapped to ports 80 and 8080 of the host PC.
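For reference, the mapping described would correspond to something like this (hypothetical image name):
docker run -d -p 80:80 -p 8080:8080 myapp-image
Note that nothing here publishes port 443, which the listen 443 ssl server block above would need before HTTPS traffic can reach nginx inside the container.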
Ports 443, 8080 and 80 are open on the machine for inbound connections. Sending a POST request to http://website.com:8080/bla does return correct values, meaning the backend seems to be working, but over HTTP only and not over HTTPS.
Still, when I go to website.com, I receive a "This site can't be reached" error. Where exactly am I going wrong, and how can I run both on nginx, both over SSL/HTTPS?
I'm trying to set up Jenkins so that I can build a pipeline for an existing website, but Jenkins does not show up on port 8080.
My project website has been up and running for several months. I'm using Nginx, Gunicorn, Ubuntu 20.04, and Django on an AWS EC2 instance. I'm now trying to set up a pipeline that includes a test/beta environment, which requires Jenkins as per the AWS tutorials. I followed this example and this example from DigitalOcean.
When I try the URL https://theafricankinshipreunion.com:8080/, it says the site cannot be reached. When I try the URL https://theafricankinshipreunion.com (without the port), it takes me to the Unlock Jenkins page. After I enter the password from sudo cat /var/lib/jenkins/secrets/initialAdminPassword, the web browser just goes to a blank page. Looking at the page source, this page is the Setup Wizard[Jenkins] page, but the display is blank.
The result of sudo systemctl status jenkins is active.
The result of sudo ufw status for port 8080 is ALLOW. On AWS, the EC2 inbound rules include port 8080 TCP for 0.0.0.0/0 and ::/0, so it appears that port 8080 is open. Checking for port use, netstat -nlp | grep 8080 printed tcp6 0 0 127.0.0.1:8080 :::* LISTEN -. I killed the process and restarted nginx, gunicorn, and jenkins. Same results: the domain with port 8080 cannot connect, but the domain goes to the Unlock Jenkins page.
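The same check with ss, which newer Ubuntu releases ship by default, makes the binding easier to read:
sudo ss -tlnp | grep 8080
An address of 127.0.0.1:8080 in that output means the listener accepts only local connections, which matches Jenkins working through the nginx proxy but not on :8080 directly.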
I did look up other help pages, such as the reverse proxy page from Jenkins, but I'm not sure how to integrate that into my current setup. Your assistance is greatly appreciated.
My /etc/nginx/sites-available/myproject file is as follows:
server {
    listen 80;
    server_name 3.131.27.142;

    location = /favicon.ico { access_log off; log_not_found off; }
    location /static/ {
        root /home/ubuntu/myprojectdir;
    }
    location /media/ {
        root /home/ubuntu/myprojectdir;
    }
    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
        proxy_connect_timeout 300s;
        proxy_read_timeout 300s;
    }
}
server {
    server_name theafricankinshipreunion.com www.theafricankinshipreunion.com;

    location = /favicon.ico { access_log off; log_not_found off; }
    location /static/ {
        root /home/ubuntu/myprojectdir;
    }
    location /media/ {
        root /home/ubuntu/myprojectdir;
    }
    location / {
        include /etc/nginx/proxy_params;
        # proxy_pass http://unix:/run/gunicorn.sock;
        proxy_pass http://localhost:8080;
        proxy_connect_timeout 300s;
        proxy_read_timeout 300s;
        proxy_redirect http://localhost:8080 https://theafricankinshipreunion.com;
    }

    # SSL Configuration
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/theafricankinshipreunion.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/theafricankinshipreunion.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    access_log /var/log/nginx/jenkins.access.log;
    error_log /var/log/nginx/jenkins.error.log;
}
# skipped lines show similar blocks for other domains
server {
    if ($host = www.theafricankinshipreunion.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    if ($host = theafricankinshipreunion.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name theafricankinshipreunion.com www.theafricankinshipreunion.com;
    return 404; # managed by Certbot
}
And my /etc/default/jenkins file is as follows (with the last line added because of the instructions from DigitalOcean):
# defaults for Jenkins automation server
# pulled in from the init script; makes things easier.
NAME=jenkins
# arguments to pass to java
# Allow graphs etc. to work even when an X server is present
JAVA_ARGS="-Djava.awt.headless=true"
#JAVA_ARGS="-Xmx256m"
# make jenkins listen on IPv4 address
#JAVA_ARGS="-Djava.net.preferIPv4Stack=true"
PIDFILE=/var/run/$NAME/$NAME.pid
# user and group to be invoked as (default to jenkins)
JENKINS_USER=$NAME
JENKINS_GROUP=$NAME
# location of the jenkins war file
JENKINS_WAR=/usr/share/$NAME/$NAME.war
# jenkins home location
JENKINS_HOME=/var/lib/$NAME
# set this to false if you don't want Jenkins to run by itself
# in this set up, you are expected to provide a servlet container
# to host jenkins.
RUN_STANDALONE=true
# log location. this may be a syslog facility.priority
JENKINS_LOG=/var/log/$NAME/$NAME.log
#JENKINS_LOG=daemon.info
# Whether to enable web access logging or not.
# Set to "yes" to enable logging to /var/log/$NAME/access_log
JENKINS_ENABLE_ACCESS_LOG="no"
# OS LIMITS SETUP
# comment this out to observe /etc/security/limits.conf
# this is on by default because http://github.com/jenkinsci/jenkins/commit/2fb288474e980d0e7ff9c4a3b768874835a3e92e
# reported that Ubuntu's PAM configuration doesn't include pam_limits.so, and as a result the # of file
# descriptors are forced to 1024 regardless of /etc/security/limits.conf
MAXOPENFILES=8192
# set the umask to control permission bits of files that Jenkins creates.
# 027 makes files read-only for group and inaccessible for others, which some security sensitive users
# might consider benefitial, especially if Jenkins runs in a box that's used for multiple purposes.
# Beware that 027 permission would interfere with sudo scripts that run on the master (JENKINS-25065.)
#
# Note also that the particularly sensitive part of $JENKINS_HOME (such as credentials) are always
# written without 'others' access. So the umask values only affect job configuration, build records,
# that sort of things.
#
# If commented out, the value from the OS is inherited, which is normally 022 (as of Ubuntu 12.04,
# by default umask comes from pam_umask(8) and /etc/login.defs
# UMASK=027
# port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
# servlet context, important if you want to use apache proxying
PREFIX=/$NAME
# arguments to pass to jenkins.
# --javahome=$JAVA_HOME
# --httpListenAddress=$HTTP_HOST (default 0.0.0.0)
# --httpPort=$HTTP_PORT (default 8080; disable with -1)
# --httpsPort=$HTTP_PORT
# --argumentsRealm.passwd.$ADMIN_USER=[password]
# --argumentsRealm.roles.$ADMIN_USER=admin
# --webroot=~/.jenkins/war
# --prefix=$PREFIX
JENKINS_ARGS="--webroot=/var/cache/$NAME/war --httpPort=$HTTP_PORT --httpListenAddress=127.0.0.1"
Use the following command to change the port when running Jenkins directly from the war file:
java -jar jenkins.war --httpPort=9090
If you want to use HTTPS, use the following command:
java -jar jenkins.war --httpsPort=9090
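For a package installation like the one quoted above, the equivalent persistent change would presumably be editing HTTP_PORT in /etc/default/jenkins and restarting the service:
HTTP_PORT=9090
sudo systemctl restart jenkins
Note also that the quoted JENKINS_ARGS passes --httpListenAddress=127.0.0.1, which binds Jenkins to localhost only; that by itself explains why https://domain:8080/ is unreachable from outside while the nginx proxy on the bare domain reaches the Unlock Jenkins page.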
Currently, I'm deploying a React app with Django as the backend API on an Ubuntu server running nginx. The React app is already online and has an SSL certificate, but the backend API does not. By default, browsers won't load HTTP content from an HTTPS page.
Do I need to get another SSL certificate for the backend API? Or is there another way to do it?
Nginx conf file (for the frontend part; I'm not sure how to configure the backend):
The backend is currently running on xxx.xx.x.xx:8000 (started with gunicorn --daemon --bind xxx.xx.x.xx:8000).
server {
    server_name xxxxxx.com www.xxxxxx.com xxx.xx.x.xx;
    root /var/www/frontend/build;
    index index.html index.htm;

    location / {
        try_files $uri $uri/ /index.html =404;
    }

    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/fromnil.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/fromnil.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
server {
    if ($host = www.xxxxxx.com) {
        return 301 https://$host$request_uri;
    }
    if ($host = xxxxxx.com) {
        return 301 https://$host$request_uri;
    }
    server_name xxxxxx.com www.xxxxxx.com xxx.xx.x.xx;
    listen 80;
    return 404;
}
Thanks
I found this link, but I wasn't able to post a comment because I don't have enough reputation, and I don't really understand it. Can anyone help me?
How to deploy react and django in aws with a ssl and a domain
I'm using Ubuntu 14.04 and running nginx 1.4.6 as a reverse proxy to talk to my Django backend, which runs on uwsgi. I'm unable to get the internal redirect to work; that is, the request does not reach Django at all. Here is my nginx configuration from /etc/nginx/sites-enabled/default. Please let me know what is wrong with my configuration.
server {
    listen 8080;
    listen 8443 default_server ssl;
    server_name localhost;
    client_max_body_size 50M;

    access_log /var/log/nginx/nf.access.log;
    error_log /var/log/nginx/nf.error_log debug;

    ssl_certificate /etc/ssl/nf/nf.crt;
    ssl_certificate_key /etc/ssl/nf/nf.key;

    location / {
        proxy_pass http://localhost:8000;
    }
    location /static/ {
        root /home/northfacing;
    }
    location /media/ {
        internal;
        root /home/northfacing;
    }
}
Here is my uwsgi configuration:
[uwsgi]
chdir=/home/northfacing/reia
module=reia.wsgi:application
master=True
pidfile=/home/northfacing/reia/reia-uwsgi.pid
vacuum=True
max-requests=5000
daemonize=/home/northfacing/reia/log/reia-uwsgi.log
http = 127.0.0.1:8000
And here is my uwsgi startup script:
#!/bin/bash
USER="northfacing"
PIDFILE="/home/northfacing/reia/reia-uwsgi.pid"

function start(){
    su - ${USER} /bin/sh -c "source /home/northfacing/nfenv/bin/activate && exec uwsgi --pidfile=${PIDFILE} --master --ini /etc/init.d/reia-uwsgi.ini"
}

function stop(){
    kill -9 `cat ${PIDFILE}`
}

$1
/home/northfacing/nfenv is my python environment directory.
If you want Django to handle permissions for accessing your media files, the first thing to do is to pass all such requests to Django. I'm assuming that /home/northfacing is your project root dir (the dir where manage.py is placed by default), that your static files are collected into the public/static subdirectory of your project, and that media files are stored in public/media.
Based on those assumptions, here is a basic configuration for that behaviour:
server {
    listen 8080;
    server_name localhost;
    client_max_body_size 50M;

    access_log /var/log/nginx/nf.access.log;
    error_log /var/log/nginx/nf.error_log debug;

    ssl_certificate /etc/ssl/nf/nf.crt;
    ssl_certificate_key /etc/ssl/nf/nf.key;

    root /home/northfacing/public/;

    location @default {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        include /etc/nginx/proxy_params;
        proxy_pass http://softwaremind_server; # replace with your backend, e.g. http://localhost:8000
        break;
    }
    location /static/ {
        try_files $uri @default; # just some simple default action, so you can show django's 404 page instead of nginx's default
    }
    location /media/ {
        internal;
        error_page 401 403 404 = @default;
    }
    location / {
        try_files /maintenance.html @default; # you can disable the whole page with a simple message just by creating maintenance.html with that message
    }
}
A simple explanation: all requests to URLs under /media/ are treated as internal, so nginx will serve a 404, 401 or 403 error if they are entered directly. But in that location our proxy server (Django in this case) is set as the error handler, so it will get the request and can check whether the user has access rights.
If there is no access, Django can throw its own error. If access is granted, Django should return an empty response with X-Accel-Redirect set to the file path. A simple view for that can look like this:
from django.http import Http404, HttpResponse
from django.views.generic import View

class MediaView(View):
    def get(self, request):
        if not request.user.is_authenticated():
            raise Http404
        response = HttpResponse()
        response.status_code = 200
        response['X-Accel-Redirect'] = request.path
        # all these headers are cleared out, so nginx can set its own based on the served file
        del response['Content-Type']
        del response['Content-Disposition']
        del response['Accept-Ranges']
        del response['Set-Cookie']
        del response['Cache-Control']
        del response['Expires']
        return response
And in urls.py:
url(r'^media/', MediaView.as_view(), name="media")
It was my misunderstanding of how internal redirects work. According to the doc below,
http://nginx.org/en/docs/http/ngx_http_core_module.html#internal
an internal setting in the nginx configuration means that any request with that URI from an external source will be served a 404; the URI has to come from an internal redirect. In my case, /media is also requested directly from the client, so those requests were rejected by nginx. The following configuration worked.
In nginx, I have the configuration below. Note that /media is removed.
location /protected/ {
    internal;
    alias /home/northfacing/media/;
}
location / {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://127.0.0.1:8000;
}
And in my Python views:
def protect_uploads(request):
    if not settings.DEBUG:
        response = HttpResponse()
        response.status_code = 200
        protected_uri = request.path_info.replace("/media", "/protected")
        response['X-Accel-Redirect'] = protected_uri
        del response['Content-Type']
        del response['Content-Disposition']
        del response['Accept-Ranges']
        del response['Set-Cookie']
        del response['Cache-Control']
        del response['Expires']
        logger.debug("protected uri served " + protected_uri)
        return response
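To trace the flow with a concrete, hypothetical file name: a client requests /media/report.pdf, the view above returns an empty 200 response with X-Accel-Redirect: /protected/report.pdf, nginx re-enters the internal /protected/ location, and the alias maps the path to /home/northfacing/media/report.pdf, which nginx then serves with its own headers.
curl -i https://example.com/media/report.pdf       # handled by Django, then served by nginx via X-Accel-Redirect
curl -i https://example.com/protected/report.pdf   # 404, because the location is internal-only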
Thanks for all your suggestions. They led to different experiments and eventually a fix.