Django Download/Upload Files In Production - django

I have a Django project that is currently hosted. I am serving static files but do not know how to handle user file uploads/downloads in the MEDIA folder. I've read a lot about Docker, Nginx, and Gunicorn but have no idea which of them I need or how to set them up. I've followed no fewer than 20 tutorials and watched no fewer than 15 YouTube videos but am still confused (I've visited every link on the first 2 pages of all my Google searches).
My question is: which of these do I need to allow users to upload/download files from a site? On top of that, I have tried getting all three working but can't figure them out. Surely I'm not the only one who has had this much difficulty; is there a good resource/tutorial that would guide me through the process? (I've spent well over 40 hours reading about this stuff and trying to get it to work, and I've reached the point where, so long as it works, I don't care about understanding how it all fits together.)
Thank you.
edit - this is a stripped-down version of what was requested. I haven't included the HTML as it's my first time doing this; I've used Ajax and things, it's a complete mess, and I'm sure it would just confuse you.
settings.py
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'static_files')

MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'static_media')

STAT = os.path.join(BASE_DIR, 'static')
STATICFILES_DIRS = [
    STAT,
    os.path.join('static/'),
    os.path.join('templates/static/'),  # for overridden templates css
]
views.py
class UploadView(View):
    def get(self, request):
        files_list = UploadedFile.objects.all()
        return render(self.request, 'users/modules.html', {'files': files_list})

    def post(self, request, *args, **kwargs):
        data = {}
        form = UploadedFileForm(self.request.POST, self.request.FILES)
        form.instance.data_id = self.request.POST['data_id']
        if form.is_valid():
            uploaded_file = form.save()

So you want to know how to make your project (which works in your development environment) production-ready. Let's start with which components are required:
Web Server (Nginx)
Application Server (uWSGI)
Application (Django)
The Web Server serves users' requests. It knows how to generate the correct output for a request - read a file from the filesystem, pass the request on to the application server, and so on. Nginx is a good choice.
The Application Server is the middleman between the Web Server and the Application. It can spawn application instances (processes), balance the load between those instances, restart dead instances, and many other things. uWSGI is a good choice here.
The Application - in your case, the Django project you have working in your development environment. You have everything ready here, but most likely you will need to adjust the settings a bit. Django communicates with the Application Server through the WSGI protocol.
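For reference, Django's side of that WSGI conversation is the wsgi.py module that django-admin generates. Assuming your project package is called core (as in the configs below), it looks roughly like this:
# core/wsgi.py - the WSGI entry point the Application Server imports
import os

from django.core.wsgi import get_wsgi_application

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'core.settings')
application = get_wsgi_application()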
At this point you should also understand how a web browser will load, render, and display your site. It all starts with a user who wants to open a page on your site, for example http://example.com/uploads. The browser sends an HTTP GET request to your server, and the Web Server program (Nginx) catches this request and decides what to do next.
Since that particular request isn't for a static file (a static HTML file, a static JPEG image, and so on), Nginx will decide to pass the request to the Application Server. uWSGI will get the request and pass it on to Django.
Django will use all the urls.py files to find the right view to generate the response for the http://example.com/uploads page. What will your view do?
def get(self, request):
    files_list = UploadedFile.objects.all()
    return render(self.request, 'users/modules.html', {'files': files_list})
It will return an HTML page (a rendered template). That HTML document is returned to the Application Server, then to the Web Server, and finally to the user's web browser.
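As an aside, the routing that dispatched the request to this view lives in urls.py. A minimal sketch, with the path and module names assumed from the question:
# urls.py - a sketch; UploadView comes from the question's code
from django.urls import path

from users.views import UploadView

urlpatterns = [
    path('uploads', UploadView.as_view(), name='uploads'),
]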
Then the browser will start parsing that HTML document, and most likely it will find some additional resources to load - CSS, JavaScript, images, fonts, ... For each resource it will make an additional GET request to the Web Server. This time the Web Server will not pass the requests on to the Application Server; it will just read those files from the filesystem and return them to the browser.
Those resources are not dynamic; they are static. So you basically store them under the static namespace. For example:
http://example.com/static/main.css
http://example.com/static/main.js
http://example.com/static/logo.png
...
Those files are part of your application; your application ships with them. But there are also files that can be uploaded to your system at runtime. You could save those files anywhere - the filesystem, a database, ... But you must have URLs for them. Usually that's the media namespace:
http://example.com/media/user-file.csv
http://example.com/media/reports/john-02-12-2020.pdf
...
In both cases, those requests are handled by the Web Server directly: it just reads the files from the filesystem.
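To make that concrete: uploaded files usually enter the system through a FileField on a model. A minimal sketch (UploadedFile is named in your question; the fields are assumptions):
# models.py - a sketch; field names are illustrative
from django.db import models

class UploadedFile(models.Model):
    data_id = models.CharField(max_length=64)
    file = models.FileField(upload_to='uploads/')
    # With MEDIA_URL = '/media/', self.file.url resolves to something like
    # /media/uploads/user-file.csv, which Nginx serves from MEDIA_ROOT.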
Why does everything work in your development environment? Most likely because you run the application with python manage.py runserver. In that case, Django is your Web Server as well (and there is no Application Server middleman). It manages its own instances, receives users' requests, returns static files, returns "media" files, returns dynamically generated pages, and so on.
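One detail here: runserver serves media files only if you wire them up in your root urls.py for development. The standard Django helper for that is:
# urls.py - development-only media serving; in production Nginx
# handles /media, so this branch is simply never added
from django.conf import settings
from django.conf.urls.static import static

urlpatterns = [
    # ... your url patterns ...
]

if settings.DEBUG:
    urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)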
Each component described above needs its own configuration file. Let me show you some examples you can use for your project.
Web Server (Nginx)
sites-enabled/default.conf
upstream uwsgi {
    server uwsgi:8080;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    error_log /var/log/nginx/error.log;
    charset utf-8;

    location /media {
        alias /home/web/media;
        expires 7d;
    }

    location /static {
        alias /home/web/static;
        expires 7d;
    }

    location / {
        uwsgi_pass uwsgi;
        include uwsgi_params;
    }
}
Notes:
everything under http://example.com/static will go to filesystem (/home/web/static directory)
for example: http://example.com/static/css/main.css -> /home/web/static/css/main.css
everything under http://example.com/media will go to filesystem (/home/web/media directory)
for example: http://example.com/media/reports/john-02-12-2020.pdf -> /home/web/media/reports/john-02-12-2020.pdf
everything else will be passed to uWSGI (to host uwsgi, to port 8080)
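For those aliases to line up with Django, your production settings should point at the same directories. A sketch, assuming the paths from the config above:
# settings.py - production paths matching the Nginx aliases above
STATIC_URL = '/static/'
STATIC_ROOT = '/home/web/static'  # `python manage.py collectstatic` fills this

MEDIA_URL = '/media/'
MEDIA_ROOT = '/home/web/media'    # user uploads land here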
Application Server (uWSGI)
uwsgi.conf
[uwsgi]
req-logger = file:/var/log/uwsgi-requests.log
logger = file:/var/log/uwsgi-errors.log
workers = %k
# enable-threads = true
# threads = 4
chdir = /home/web/app
module = core.wsgi
master = true
pidfile = /tmp/app.pid
socket = 0.0.0.0:8080
env = DJANGO_SETTINGS_MODULE=core.settings
memory-report = true
harakiri = 60
listen = 10240
Notes:
chdir = /home/web/app - the path to your application
module = core.wsgi - your project should have a main (core) package called core (you should see wsgi.py in it)
pidfile = /tmp/app.pid - just a place for the pid file
socket = 0.0.0.0:8080 - it will listen on port 8080
env = DJANGO_SETTINGS_MODULE=core.settings - again, the main app needs to be called core, and you should have settings.py inside that directory
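Put together, uWSGI expects a project layout roughly like this (a sketch based on the chdir, module, and env values above):
/home/web/app/
├── manage.py
└── core/
    ├── __init__.py
    ├── settings.py
    └── wsgi.py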
Docker / Docker Compose
You might want Docker and Docker Compose to orchestrate all that software, but it's also possible to run everything without them.
docker-compose.yml
version: "2.4"
services:
uwsgi:
build:
context: ./docker
dockerfile: Dockerfile
hostname: uwsgi
sysctls:
net.core.somaxconn: 10240
environment:
- DJANGO_SETTINGS_MODULE=${DJANGO_SETTINGS_MODULE}
- C_FORCE_ROOT=${C_FORCE_ROOT}
volumes:
- ./config/uwsgi/uwsgi.conf:/uwsgi.conf
- ../app:/home/web/app
- ./static:/home/web/static:rw
- ./media:/home/web/media:rw
- ./logs:/var/log/
restart: always
networks:
myapp_backend:
aliases:
- uwsgi
web:
image: nginx
hostname: nginx
sysctls:
net.core.somaxconn: 10240
depends_on:
- uwsgi
volumes:
# - ./config/nginx/nginx.conf:/etc/nginx/nginx.conf
- ./config/nginx/sites-enabled:/etc/nginx/conf.d
- ./media:/home/web/media:ro
- ./static:/home/web/static:ro
- ./logs:/var/log/nginx
ports:
- "80:80"
# - "443:443"
restart: always
networks:
- myapp_backend
networks:
myapp_backend:
Dockerfile
FROM python:3.9.0
RUN export DEBIAN_FRONTEND=noninteractive
ENV DEBIAN_FRONTEND noninteractive
RUN dpkg-divert --local --rename --add /sbin/initctl
RUN apt-get update -y
RUN apt-get install -y --fix-missing \
    python3-pip \
    python3-setuptools
COPY requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
RUN pip install uwsgi
WORKDIR /home/web/app
EXPOSE 8080
CMD ["uwsgi", "--ini", "/uwsgi.conf"]
Directory Structure
You can use the directory structure from this repository: https://github.com/ansysy24/GameEconomy/tree/master/deployment
I also highly recommend this article: https://andrey-borzenko.medium.com/simple-nginx-uwsgi-daphne-reactive-application-part-1-the-big-picture-20d7b9ee5b96
And don't forget to add a .env file like this one: https://github.com/ansysy24/GameEconomy/blob/master/deployment/.env_template (but you should rename it, of course)

Related

cookies not persisting when using dockerized nginx as a proxy to multiple backend docker containers

I have two separate apps - one in python, one in go - both in their own separate docker images. Together these two apps make one website - they need to share session data (cookies) between each other. This works just fine the way it is now (go app on one host, python app on another, nginx on a third proxying to each).
What I want to do is run them both on the same host, using an nginx docker container as a proxy between the two (for consolidation purposes). So I have a docker-compose file (Dockerrun.aws.json when deploying to AWS) set up with nginx, linked to the two images and configured to proxy to each.
I have the proxy working as intended - different routes go to different backends - no issues there. The only problem is, cookies do not persist.
The python app is responsible for logging users in and stores data in a cookie. The go app should be able to read this cookie to get the data it needs. However, the cookie doesn't persist, so I can't log in to either app. I don't think the cookie is making its way to the browser.
Doing some research, I thought setting proxy_cookie_domain would work. My thought was that, as long as cookies are being set under the same host/domain, they should be accessible to both apps.
Any ideas? I can't be the only one doing this.
FYI - I'm using the official nginx docker image; the only thing different is the site config.
site.nginx.conf
---------------
proxy_cookie_domain python $host;
proxy_cookie_domain go $host;
proxy_set_header Host $host;

server {
    listen 80 default_server;
    server_name _;

    location / {
        proxy_pass http://go:3000;
    }

    location /api {
        proxy_pass http://python:5000;
    }
}
docker-compose.yml
------------------
version: "2"
services:
nginx:
image: nginx
links:
- python
- go
ports:
- "80:80"
restart: always
python:
image: python-image
ports:
- "5000:5000"
restart: always
go:
image: go-image
ports:
- "3000:3000"
restart: always
Dockerrun.aws.json
------------------
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "nginx",
      "image": "nginx",
      "essential": true,
      "memoryReservation": 128,
      "links": [
        "python",
        "go"
      ],
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 80
        }
      ]
    },
    {
      "name": "python",
      "image": "python-image",
      "essential": true,
      "memoryReservation": 128,
      "portMappings": [
        {
          "hostPort": 5000,
          "containerPort": 5000
        }
      ]
    },
    {
      "name": "go",
      "image": "go-image",
      "essential": true,
      "memoryReservation": 128,
      "portMappings": [
        {
          "hostPort": 3000,
          "containerPort": 3000
        }
      ]
    }
  ]
}
UPDATE
Using Chrome developer tools, I can see that the request headers are being sent to the server as expected (the Cookie header is present and looks like what I'd expect). I can also see that the Set-Cookie header is sent after a successful login, as expected.
I can log in to the site through the python app. I can even switch from one python page to another with no problem. But once I try to switch over to a go page, I am no longer logged in. And further, when I go back to a python page, I'm not logged in anymore. The cookies are sent with every request.
The reason this is all confusing to me is that when I run everything in its own container, on its own host (proxied by a separate host running nginx), everything works as expected. And it's the exact same code, so it's not like the code is destroying the cookie object.

my nginx cannot load uwsgi on Ubuntu 16.04

I'm trying to run a django app "mysite" through uwsgi with nginx on Ubuntu 16.04, but when I start uwsgi and check in my browser, it just hangs.
I set the django upstream socket to port 8002 and nginx to listen on 8003. In the browser I visit 192.168.0.17:8003 prior to running uwsgi and it throws a 502, which is expected, so I start uwsgi with
uwsgi --http :8002 --module mysite.wsgi --logto /tmp/uwsgi.log --master
and 8003 now hangs when I reload in the browser. I looked through /var/log/nginx/error.log but it's blank (so is access.log).
Here is nginx config, which is symlinked to /etc/nginx/sites-enabled:
sudo nano /etc/nginx/sites-available/mysite_nginx.conf
# mysite_nginx.conf

# the upstream component nginx needs to connect to
upstream django {
    # server unix:///path/to/your/mysite/mysite.sock; # for a file socket
    server 127.0.0.1:8002; # for a web port socket (we'll use this first)
}

# configuration of the server
server {
    # the port your site will be served on
    listen 8003;
    # the domain name it will serve for
    server_name 192.168.0.17; # substitute your machine's IP address or FQDN
    charset utf-8;

    # max upload size
    client_max_body_size 75M; # adjust to taste

    # Django media
    location /media {
        alias /home/myusername/uwsgi-tutorial/mysite/media; # your Django project's media files - amend as required
    }

    location /static {
        alias /home/myusername/uwsgi-tutorial/mysite/static; # your Django project's static files - amend as required
    }

    # Finally, send all non-media requests to the Django server.
    location / {
        uwsgi_pass django;
        include /home/myusername/uwsgi-tutorial/mysite/uwsgi_params; # the uwsgi_params file you installed
    }
}
I know that Django is running because in my app's settings.py I have ALLOWED_HOSTS = ['192.168.0.17','localhost','127.0.0.1'], and when I visit port 8002 in the browser I get the django "Congratulations!" page. And when I remove 192.168.0.17 from ALLOWED_HOSTS, django still runs on that machine from localhost or 127.0.0.1, so it must be something to do with how nginx and uwsgi are talking to each other.
Any ideas??
It turns out systemd does not like lines in config files to be too long. I removed a couple of long comments in /etc/systemd/system/uwsgi.service, restarted the uwsgi service, and all is well.
I found this out by running sudo journalctl -u uwsgi and finding the following error:
[/etc/systemd/system/uwsgi.service:5] Unbalanced quoting, ignoring: "/bin/bash -c 'mkdir -p /run/uwsgi; chown myusername:myusern
In researching "Unbalanced quoting", I found this git issue regarding maximum file line length.

nginx Permission denied on Ubuntu

I'm trying to set up my Django app with uWSGI and nginx by following this guide. I'm able to run my app with Django's development server, as well as being served directly from uWSGI.
I'm running everything on a university managed Ubuntu 16.04 virtual machine, and my user has sudo access.
My problem:
When I get to this bit of the tutorial and try to fetch an image, I get a 403 error from nginx.
The next section results in a 502.
/var/log/nginx/error.log shows
connect() to unix:///me/myproject/media/image.jpg failed (13: Permission denied) while connecting to upstream
connect() to unix:///me/myproject/project.sock failed (13: Permission denied) while connecting to upstream
for the 403 and 502, respectively.
I have read multiple questions and guides (one here, another here and yet another one, and this is not all of them), changed my permissions, and even moved my .sock file to another folder (one of the SO answers recommended that).
What else can I try?
Update:
I mentioned it in a comment, but I've gotten a bit further. Part of the problem was that, apparently, the /home directory on my VM is on NFS, which messes up a good many permissions.
What I've done:
I've set up my project in /var/www/myproject/
Run chown -R me:www-data myproject
Run chmod -R 764 myproject
My new results:
Without nginx running:
uwsgi --http :8000 --module myproject.wsgi
works perfectly
With nginx running:
uwsgi --socket myproject.sock --module myproject.wsgi --chmod-socket=664
gives me a 502
uwsgi --ini myproject.ini
gives me a 502
So now it's not a general permission issue, it's definitely an issue with nginx...
Update #2:
For the moment, everything is working when other has read-write permissions on the socket, and read-execute permissions on the rest of the project.
So nginx is not being recognized as it should be... I've double-checked, and nginx is running as the www-data user, which is the group owner of my entire project and has read-execute permissions, just as other now has.
Here's my (updated) nginx.conf
# myproject_nginx.conf

# the upstream component nginx needs to connect to
upstream django {
    # server unix:///path/to/your/mysite/mysite.sock; # for a file socket
    server unix:///var/www/myproject/myproject.sock;
    # server 127.0.0.1:8001; # for a web port socket (we'll use this first)
}

# configuration of the server
server {
    # the port your site will be served on
    listen 8000;
    # the domain name it will serve for
    server_name my.ip.goes.here; # substitute your machine's IP address or FQDN
    charset utf-8;

    # max upload size
    client_max_body_size 75M; # adjust to taste

    # Django media
    location /media {
        alias /var/www/myproject/media; # your Django project's media files - amend as required
    }

    location /static {
        alias /var/www/myproject/static; # your Django project's static files - amend as required
    }

    # Finally, send all non-media requests to the Django server.
    location / {
        uwsgi_pass django;
        include /var/www/myproject/uwsgi_params; # the uwsgi_params file you installed
    }
}
And here's my (updated) uwsgi.ini
# myproject_uwsgi.ini file
[uwsgi]
# Django-related settings
# the base directory (full path)
chdir = /var/www/myproject
# Django's wsgi file
module = myproject.wsgi
# the virtualenv (full path)
home = /var/www/myenv
# process-related settings
master = true
# maximum number of worker processes
processes = 10
# the socket (full path)
socket = /var/www/myproject/myproject.sock
# ... with appropriate permissions - may be needed
chmod-socket = 666
uid = me
gid = www-data
# clear environment on exit
vacuum = true
From my experience, most permission problems around web servers come from accessing files owned by root while Apache (or nginx) runs under the www-data user.
Try running sudo chown www-data -R /path/to/your/data/folder.
As the tutorial said:
You may also have to add your user to nginx’s group (which is probably
www-data), or vice-versa, so that nginx can read and write to your
socket properly.
Try that and see what happens.
Also, I wouldn't recommend doing things with sudo or as root; do it as a normal user and set the permissions as necessary, otherwise you might end up in a situation where nginx or uWSGI needs to do something with files that are owned by root.

Two or More Django Projects in Same Droplet via Subdomain

I have two django projects. When the person visits, www.example.com I want django project A to be served.
When the person visits, say, blog.example.com, I want django project B to be served.
How can I achieve that using nginx and gunicorn, configuration-wise?
I'm done with the subdomain DNS setup. I need help in the nginx-gunicorn aspect of serving the pages.
I used the One Click install of django by DO, so if the configuration could be along the lines of their setup, that would be great.
No idea if this question belongs here or serverfault.
The principle is to use nginx as a broker for the HTTP requests, proxying them to two Gunicorn instances running your two Django apps in parallel, depending on the request's Host header.
For that you need to set up two different server configurations with nginx, each with a different server_name. Those two servers will proxy to two different Gunicorn instances running on different ports.
Nginx configuration
# Server definition for project A
server {
    listen 80;
    server_name <projectA domain name>;

    location / {
        # Proxy to gUnicorn.
        proxy_pass http://127.0.0.1:<projectA gUnicorn port>;
        # etc...
    }
}

# Server definition for project B
server {
    listen 80;
    server_name <projectB domain name>;

    location / {
        # Proxy to gUnicorn on a different port.
        proxy_pass http://127.0.0.1:<projectB gUnicorn port>;
        # etc...
    }
}
It might be better to split the two definitions into separate files. Also remember to link them in /etc/nginx/sites-enabled/.
Upstart configuration
These two files need to be put in /etc/init/.
projecta_gunicorn.conf:
description "Gunicorn daemon for Django project A"
start on (local-filesystems and net-device-up IFACE=eth0)
stop on runlevel [!12345]
# If the process quits unexpectedly, trigger a respawn
respawn
setuid django
setgid django
chdir /home/django/<path to projectA>
exec /home/django/<path to project A virtualenv>/bin/gunicorn --config /home/django/<path to project A gunicorn.py> <projectA name>.wsgi:application
projectb_gunicorn.conf:
description "Gunicorn daemon for Django project B"
start on (local-filesystems and net-device-up IFACE=eth0)
stop on runlevel [!12345]
# If the process quits unexpectedly, trigger a respawn
respawn
setuid django
setgid django
chdir /home/django/<path to projectB>
exec /home/django/<path to projectB virtualenv>/bin/gunicorn --config /home/django/<path to projectB gunicorn.py> <projectB name>.wsgi:application
Gunicorn configuration
Project A gunicorn.py:
bind = '127.0.0.1:<projectA gUnicorn port>'
raw_env = 'DJANGO_SETTINGS_MODULE=<projectA name>.settings'
Project B gunicorn.py:
bind = '127.0.0.1:<projectB gUnicorn port>'
raw_env = 'DJANGO_SETTINGS_MODULE=<projectB name>.settings'

Serve static files with Nginx and custom service. Dotcloud

I deployed my Django app on Dotcloud.
I'm using websockets with Gevent and django-socketio, so I used a custom service. For now, I'm still using 'runserver_socketio' to make it work.
Now, I would like to use Nginx to serve my static files. I found this: https://github.com/dotcloud/nginx-on-dotcloud
I tried to use it. Here is my dotcloud.yml:
www:
  type: custom
  buildscript: nginx/builder
  processes:
    app: /home/dotcloud/env/bin/python myproject/manage.py runserver_socketio 0.0.0.0:$PORT_WWW
    nginx: nginx
  ports:
    www: http
  systempackages:
    - libevent-dev
    - python-psycopg2
    - libpcre3-dev

db:
  type: postgresql
And I added the folder 'nginx' at the root of my app.
I also added at the end of my postinstall:
nginx_config_template="/home/dotcloud/current/nginx/nginx.conf.in"
if [ -e "$nginx_config_template" ]; then
    sed > $HOME/nginx/conf/nginx.conf < $nginx_config_template \
        -e "s/#PORT_WWW#/${PORT_WWW:-42800}/g"
else
    echo "($nginx_config_template) isn't there!!! Make sure it is in the correct location or else nginx won't be setup correctly."
fi
But when I push it and go to my app, I get the error:
403 Forbidden, nginx/1.0.14
And Nginx does serve 404 error pages.
So I don't know why, but I don't have access to my app anymore. Do you have any idea how I can set up my app with Nginx?
Thank you very much
I think your problem is that you have two different processes fighting for the HTTP port (80). You can only have one process listening on port 80 at a time. Most people work around this by having nginx run on port 80 and reverse proxy all traffic to the other process, which runs on a different port. That wouldn't work for you, because nginx doesn't support web sockets. So you would need to run either nginx or the django app on a port other than 80, which also isn't ideal.
At this point you have two other options:
Use a CDN: put all of your files on Amazon S3 and serve them from there (or from CloudFront).
Use dotCloud's static service: this will be a separate service that just serves the static files. Here is what your dotcloud.yml would look like.
dotcloud.yml
www:
  type: custom
  processes:
    app: /home/dotcloud/env/bin/python myproject/manage.py runserver_socketio 0.0.0.0:$PORT_WWW
  ports:
    www: http
  systempackages:
    - libevent-dev
    - python-psycopg2
    - libpcre3-dev

db:
  type: postgresql

static:
  type: static
  approot: static_media
Basically it adds a new service called static, and this new service looks for your static files in a directory called static_media, located at the root of your project.
If you use the static service, you will need to get the URL from the static service and set STATIC_URL appropriately in your django settings.py.
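For example (a sketch - the hostname is hypothetical; use the URL your static service actually gets):
# settings.py - point STATIC_URL at the static service
STATIC_URL = 'http://myapp-static.example.dotcloud.com/'  # hypothetical URL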
Another gotcha with this setup is if you are using django's staticfiles app. Django's staticfiles app will copy all the static media into one common location. This doesn't work with the static service, because the static service is separate and will most likely live on a different host than your other services, so you will need to manually copy the files into the common static_media directory yourself.
For more information about the dotCloud static service, see these docs: http://docs.dotcloud.com/0.9/services/static/
Because of the gotcha I mentioned for option 2, I would recommend using option 1. Doing this is pretty easy if you use something like https://github.com/jezdez/django_compressor . It can send your files to S3 for you.