I have a bunch of different services running across several LXC containers, including Django apps.
What I want to do is install Nginx on the host machine and proxy_pass to the Gunicorn instance running in an LXC container. The idea is that custom domain names and certs are added on the host, and the containers remain unchanged across different installs.
It works if I proxy_pass from the host to Nginx running in a container, but I then have issues with CSRF, which cannot be turned off for the Django admin.
So what I would like to do is run Nginx only on the host, connecting directly to the Gunicorn instance in the LXC container (roughly as sketched below).
I'm not sure whether that will fix the CSRF issues, but it doesn't seem right to run multiple Nginx instances.
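Something like this host-side server block is what I have in mind (the container address 10.0.3.10, the port 8000 and the domain are just placeholders for my actual setup):

server {
    listen 80;
    server_name app.example.com;    # custom domain added on the host (placeholder)

    location / {
        # proxy straight to Gunicorn inside the LXC container (placeholder address/port)
        proxy_pass http://10.0.3.10:8000;

        # forward the original host and scheme so Django's CSRF and
        # allowed-hosts checks see the real request details
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}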
I have Django hosted with Nginx on DigitalOcean. Now I want to install Plausible Analytics. How do I do this? How do I change the Nginx config so I can reach the Plausible dashboard at mydomain/plausible, for example?
Set up Plausible by either running the software directly or in a Docker container - let's say it runs on port 8080.
Then, in your nginx.conf, you should have a server block for your domain.
Within that, add a location block with the path you want Plausible on, and add a proxy_pass directive to forward requests to localhost:8080 (see the sketch after these steps).
Monitor access.log and error.log to debug any issues that come up.
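A minimal sketch of that location block, assuming Plausible listens on 127.0.0.1:8080 and you want it under /plausible/ (depending on how Plausible handles being served from a subpath, you may also need to adjust its own base URL setting):

server {
    server_name mydomain.com;

    # ... existing Django locations ...

    location /plausible/ {
        # trailing slash on proxy_pass strips the /plausible/ prefix before forwarding
        proxy_pass http://127.0.0.1:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}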
I am currently running into an issue with one of my projects, which will run in Docker on my Ubuntu server with an NGINX Docker container managing the reverse proxy for the Django project. The problem is that I already have previous Django projects running on that Ubuntu server, so port 80 is already taken by an NGINX server block running directly on the host.
Is there a workaround to run my Docker NGINX alongside the Ubuntu NGINX and have my Docker image run as an "add-on" site? The Django sites already hosted there are client websites, so I would prefer not to interfere with them if I don't have to.
My project needs HTTPS because it serves data to a React Native app running on Android API 28, which blocks cleartext (non-HTTPS) connections by default. If anyone else has run into an issue like this, I would gladly appreciate advice on how to tackle it.
I have tried running NGINX in Docker on port 81 instead of port 80, and that works perfectly, but I don't think there is a way to make a secure connection to port 81, is there?
Thanks in advance.
You can't just mess with the default HTTP ports for public endpoints - browsers use 80 and 443 by default. If you change those, your users would have to connect to your.server.com:81 or something similar. Nobody would do that for a public server, but it can be an option for a private one.
I think a reasonable way out of this is to use the host's NGINX to proxy requests into Docker's NGINX (if there is any sense in keeping it at all). You can handle HTTPS termination on the host's NGINX and pass plain HTTP to the Docker one, for example:
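A rough sketch of that host-side server block, assuming the Docker NGINX is published on port 81 and using placeholder certificate paths (adjust the domain and cert locations to your setup):

server {
    listen 443 ssl;
    server_name api.example.com;

    # placeholder certificate paths
    ssl_certificate     /etc/letsencrypt/live/api.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.example.com/privkey.pem;

    location / {
        # HTTPS is terminated here; plain HTTP goes to the Docker NGINX on port 81
        proxy_pass http://127.0.0.1:81;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}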
Another reasonable option is to use a separate server, so that everything works with no dirty hacking involved.
I have used nginx as the web server for a Django project, and now I want to switch back to the normal Django local development server.
I tried:
sudo service nginx stop
(It shows that nginx is stopped.)
I killed all the processes too.
But my local server at 127.0.0.1:8000 is still under nginx's control, and so is my computer's IP (it shows the nginx default page).
I want to free that particular port (127.0.0.1:8000). How can I completely stop nginx?
I'm just starting out with EC2. I've pulled down a git repo that I started on my local machine, so I know it works when I run the server there, and it seems to work when I run the server on my EC2 instance, but for some reason, when I visit the instance's Elastic IP, I get a page-not-found. Any idea why that might be?
So, I've now started using nginx and made a conf file following the instructions here: https://code.djangoproject.com/wiki/DjangoAndNginx. It looks like this:
server {
    listen 80;
    server_name ec2-54-242-149-154.compute-1.amazonaws.com;

    access_log /var/log/nginx/USBag.access.log;
    error_log /var/log/nginx/USBag.error.log;

    location /basicMap/ {
        alias /home/www/ec2-54-242-149-154.compute-1.amazonaws.com/basicMap/;
        expires 30d;
    }

    location / {
        include fastcgi_params;
        fastcgi_pass 127.0.0.1:8080;
    }
}
basicMap is a place I have already defined in my Django app, and the linked EC2 address is the one my server is running on. I am having a lot of difficulty finding documentation on how to proceed or how to determine whether my conf file is correct. Using the standard python manage.py runserver doesn't work, however. Any advice on how to proceed?
There is a lot of info out there about setting up a production Django server, and I'll give you my personal preferences below, but before all that, let's back up and see if we can get any response at all from the server.
To start the development server on your EC2 instance, run:
manage.py runserver 0.0.0.0:8000
That command makes runserver bind to all interfaces and serve to the outside world. You'll never want to do this outside of development, but it's a good way to check that your Django app is set up before complicating things. Now try hitting your EC2 instance on port 8000 and see if you get a response.
If that's still not working, make sure you allow incoming connections to the server's port (8000 in the command above, 80 once live). You can test that the port is open using netcat (nc -l).
Once you're satisfied that your app is set up, I'd recommend using nginx as your front-end web server and gunicorn as your Django application server in production. You'll likely want to look into setting up a virtualenv, supervisord, etc. for your production setup (here is a tutorial: http://senko.net/en/django-nginx-gunicorn/), but all of that depends on the specifics of your project.
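As a very rough sketch of the nginx side of that setup (the static directory and socket path below are only placeholders, not anything from your project):

server {
    listen 80;
    server_name ec2-54-242-149-154.compute-1.amazonaws.com;

    # nginx serves static files itself (placeholder path)
    location /static/ {
        alias /home/www/USBag/static/;
    }

    # everything else is proxied to gunicorn listening on a unix socket (placeholder path)
    location / {
        proxy_pass http://unix:/run/gunicorn.sock;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}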
A lot of Django app deployments on Amazon's EC2 use the HTTP servers NGINX and Gunicorn.
I was wondering what they actually do and why both are used in parallel. What is the purpose of running them both?
They aren't used in parallel. NGINX is a reverse proxy. It's first in line: it accepts incoming connections and decides where they should go next. It also (usually) serves static media such as CSS, JS and images, and it can do other things such as SSL termination, caching, etc.
Gunicorn is the next layer and is an application server. NGINX sees that the incoming connection is for www.domain.com and knows (via its configuration files) that it should pass that connection on to Gunicorn. Gunicorn is a WSGI server, which is basically a:
simple and universal interface between web servers and web applications or frameworks
Gunicorn's job is to manage and run the Django instance(s), similar to using django-admin runserver during development.
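As an illustrative sketch of that split (the upstream address, static path and project name are assumptions for the example, not anything prescribed):

upstream gunicorn_app {
    # e.g. started with: gunicorn myproject.wsgi:application --bind 127.0.0.1:8000
    server 127.0.0.1:8000;
}

server {
    listen 80;
    server_name www.domain.com;

    # NGINX answers requests for static media itself (placeholder path)
    location /static/ {
        alias /srv/myproject/static/;
    }

    # everything else is handed to Gunicorn, which runs the Django instance(s)
    location / {
        proxy_pass http://gunicorn_app;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}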
The contrast to this setup is to use Apache with the mod_wsgi module. In this situation, the application server is actually a part of Apache, running as a module.
Nginx and Gunicorn are not used in parallel.
Gunicorn is a Web Server Gateway Interface (WSGI) server implementation that is commonly used to run Python web applications.
NGINX is a free, open-source, high-performance HTTP server and reverse proxy, as well as an IMAP/POP3 proxy server.
Nginx is responsible for serving static content, gzip compression, SSL, proxy buffering and other HTTP concerns, while Gunicorn is a Python HTTP server that sits between nginx and your actual Python web-app code to serve dynamic content.
The following diagram shows how nginx and Gunicorn interact.