I'm having problems serving large file downloads/uploads (3 GB+).
Since I'm using Django, I guess the problem serving the file could come from Django or Nginx.
In my Nginx enabled-site config I have
server {
    ...
    client_max_body_size 4G;
    ...
}
And in Django I'm serving the file in chunks:
import mimetypes
import os
from wsgiref.util import FileWrapper
from django.http import StreamingHttpResponse

def return_file(path):
    filename = os.path.basename(path)
    chunk_size = 8192
    # open in binary mode so large/binary files stream intact
    response = StreamingHttpResponse(FileWrapper(open(path, 'rb'), chunk_size),
                                     content_type=mimetypes.guess_type(path)[0])
    response['Content-Length'] = os.path.getsize(path)
    response['Content-Disposition'] = 'attachment; filename={0}'.format(filename)
    return response
This method got my downloads from ~600 MB up to 2.6 GB, but the downloads seem to be getting truncated at 2.6 GB. I traced the error:
2015/09/04 11:31:30 [error] 831#0: *553 upstream prematurely closed connection while reading upstream, client: 127.0.0.1, server: localhost, request: "GET /chat/download/photorec.zip/ HTTP/1.1", upstream: "http://unix:/web/rsmweb/run/gunicorn.sock:/chat/download/photorec.zip/", host: "localhost", referrer: "http://localhost/chat/2/"
After reading some posts I added the following to my Nginx conf:
proxy_read_timeout 300;
proxy_connect_timeout 300;
proxy_redirect off;
But I got the same error, with an *1 instead of a *553.
I also thought it could be a Django database timeout, so I added:
DATABASE_OPTIONS = {
    'connect_timeout': 14400,
}
But it is not working either (the download over the development server takes about 30 seconds).
PS: Someone already pointed out to me that the problem is Django, but I haven't been able to figure out why. Django is not printing or logging any errors!
Thanks for any help!
Don't use Django to deliver static content, especially not when it's as large as this. Nginx is ideal for delivering it. All you need to do is create a mapping such as this in your Nginx configuration file:
location /static/ {
    try_files $uri =404;
    root /var/www/myapp/;
    gzip on;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
}
Here /var/www/myapp/ is the top-level folder for your Django app. Inside it you will have a folder named static/ into which you need to collect all your static files with Django's manage.py collectstatic command.
Of course you are free to rename these folders any way you like and to use a different file structure altogether. More about how to configure Nginx for static content is at this link: http://nginx.org/en/docs/beginners_guide.html#static
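For completeness, a minimal sketch of the matching Django settings (assuming the /var/www/myapp/ layout above), so that collectstatic writes into the folder Nginx serves:
# settings.py -- a sketch assuming the /var/www/myapp/ layout above
STATIC_URL = '/static/'                 # must match the Nginx location block
STATIC_ROOT = '/var/www/myapp/static/'  # where `manage.py collectstatic` copies files
After running python manage.py collectstatic, requests for /static/... are answered by Nginx directly and never touch Django.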
I ran into a similar problem which was visible in the nginx error log files by lines like this:
<TIMESTAMP> [error] 1221#1221: *913310 upstream prematurely closed connection
while reading upstream, client: <IP>, server: <IP>, request: "GET <URL> HTTP/1.1",
upstream: "http://unix:<LOCAL_DJANGO_APP_DIR_PATH>/run/gunicorn.sock:
<REL_PATH_LOCAL_FILE_TO_BE_DOWNLOADED>", host: "<URL>", referrer: "<URL>/<PAGE>"
This is caused by the --timeout setting in the file
<LOCAL_DJANGO_APP_DIR_PATH>/bin/gunicorn_start
(found at "command:" in /etc/supervisor/conf.d/<APPNAME>.conf)
In the gunicorn_start file change this line:
exec /usr/local/bin/gunicorn [...] \
--timeout <OLD_TIMEOUT> \
[...]
This was set to 300 and I had to change it to 1280 (this is in seconds!).
Transfers of ~5 GB are easily handled this way, without RAM issues, using
django.views.static.serve(request, <LOCAL_FILE_NAME>, <LOCAL_FILE_DIR>)
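A minimal sketch of such a download view (the view name and DOWNLOAD_DIR are hypothetical; django.views.static.serve is the real function):
from django.views.static import serve

DOWNLOAD_DIR = '/srv/downloads'  # hypothetical: wherever the files live

def download(request, filename):
    # serve() guesses the content type and streams the file from disk in chunks
    return serve(request, filename, document_root=DOWNLOAD_DIR)
Note that the Django docs mark serve() as unsuited to production use, so treat this as a convenience rather than a long-term setup.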
Related
I've never configured a production server before. I'm trying to configure nginx and keep getting a 403 Forbidden error, and I can't figure out why it's happening.
Here is a complete error report:
[crit] 25145#25145: *1 connect() to unix:/home/albert/deploy_test/django_env
/run/gunicorn.sock failed (13: Permission denied) while connecting to
upstream, client: 192.168.1.118, server: 192.168.1.118, request: "GET /
HTTP/1.1", upstream: "http://unix:/home/albert/deploy_test/django_env
/run/gunicorn.sock:/", host: "192.168.1.118"
Here is my /etc/nginx/sites-available/deployproject.conf:
(I removed the default config and created a symlink as follows: sudo ln -s /etc/nginx/sites-available/deployproject.conf /etc/nginx/sites-enabled/deployproject.conf)
upstream sample_project_server {
    # fail_timeout=0 means we always retry an upstream even if it failed
    # to return a good HTTP response (in case the Unicorn master nukes a
    # single worker for timing out).
    server unix:/home/albert/deploy_test/django_env/run/gunicorn.sock fail_timeout=0;
}

server {
    listen 80;
    server_name 192.168.1.118;
    client_max_body_size 4G;

    access_log /home/albert/logs/nginx-access.log;
    error_log /home/albert/logs/nginx-error.log;

    location /static/ {
        alias /home/albert/static/;
    }

    location /media/ {
        alias /home/albert/media/;
    }

    location / {
        # an HTTP header important enough to have its own Wikipedia entry:
        # http://en.wikipedia.org/wiki/X-Forwarded-For
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # enable this if and only if you use HTTPS, this helps Rack
        # set the proper protocol for doing redirects:
        # proxy_set_header X-Forwarded-Proto https;

        # pass the Host: header from the client right along so redirects
        # can be set properly within the Rack application
        proxy_set_header Host $http_host;

        # we don't want nginx trying to do something clever with
        # redirects, we set the Host: header above already.
        proxy_redirect off;

        # set "proxy_buffering off" *only* for Rainbows! when doing
        # Comet/long-poll stuff. It's also safe to set if you're
        # only serving fast clients with Unicorn + nginx.
        # Otherwise you _want_ nginx to buffer responses to slow
        # clients, really.
        # proxy_buffering off;

        # Try to serve static files from nginx, no point in making an
        # *application* server like Unicorn/Rainbows! serve static files.
        if (!-f $request_filename) {
            proxy_pass http://sample_project_server;
            break;
        }
    }

    # Error pages
    error_page 500 502 503 504 /500.html;
    location = /500.html {
        root /home/albert/static/;
    }
}
Here is the complete tutorial I'm using to deploy my app. Here I'm just trying to deploy the most primitive, default Django app, but in my real app I'm using Django only as the server side, so there seems to be no need for nginx to serve static files and all that.
File permissions: incorrect file permissions are another cause of the "403 Forbidden" error. The standard setting of 755 for directories and 644 for files is recommended for use with NGINX. The NGINX user also needs to be the owner of the files.
Try changing the permissions on your web directory:
sudo chown -R albert:www-data /webdirectory
sudo chmod -R 0755 /webdirectory
Move all your sites inside the web directory; do not leave the directories and files in your root home.
Have you taken a look at the gunicorn docs, which have an example of how to configure nginx?
http://docs.gunicorn.org/en/stable/deploy.html
Can you try running gunicorn via TCP instead of a unix socket? In your upstream sample_project_server, replace the server line with:
server 192.168.0.7:8000 fail_timeout=0;
What are the settings in gunicorn? You can bind to localhost via TCP with the following, to check that it isn't a problem with your unix socket:
--bind 127.0.0.1:8000
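As a sketch, those settings can also live in a Python config file that gunicorn loads with -c (the worker count and timeout here are illustrative values, not recommendations):
# gunicorn.conf.py -- load with: gunicorn -c gunicorn.conf.py myproject.wsgi:application
bind = '127.0.0.1:8000'  # TCP instead of a unix socket, to rule the socket out
workers = 3              # common rule of thumb: 2 * CPU cores + 1
timeout = 300            # seconds before an unresponsive worker is killed and restarted
If the TCP binding works where the socket did not, the problem is the socket path or its permissions rather than gunicorn itself.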
I'm having trouble with the elixir/phoenix config that I need for a deployment to AWS/Elastic Beanstalk. (Following the guide found here: https://thoughtbot.com/blog/deploying-elixir-to-aws-elastic-beanstalk-with-docker - my Dockerfile looks similar except for updated libraries).
I can run it with eb local run, but am having trouble pushing to production.
However, when I try and deploy to EB, I get the following warning, and it crashes:
Environment health has transitioned from Degraded to Severe.
100.0 % of the requests are failing with HTTP 5xx.
Command failed on all instances.
Incorrect application version found on all instances. Expected version "app-8412-171116_115503" (deployment 5).
ELB processes are not healthy on all instances.
100.0 % of the requests to the ELB are erroring with HTTP 4xx.
Insufficient request rate (0.5 requests/min) to determine application health (5 minutes ago).
ELB health is failing or not available for all instances.
I was wondering if someone could let me know if my configs look right.
I've been trying a bunch of things, but I think I've gotten confused, as I'm just guessing at this point.
config.exs
use Mix.Config

config :newsly,
  ecto_repos: [Newsly.Repo]

config :logger, :console,
  format: "$time $metadata[$level] $message\n",
  metadata: [:request_id]

import_config "#{Mix.env}.exs"
prod.exs
use Mix.Config
config :logger, :console, format: "[$level] $message\n"
config :phoenix, :stacktrace_depth, 5
import_config "prod.secret.exs"
prod.secret.exs
use Mix.Config

config :ex_aws,
  access_key_id: System.get_env("AWS_ACCESS_KEY_ID"),
  secret_access_key: System.get_env("AWS_SECRET_ACCESS_KEY"),
  bucket_name: System.get_env("BUCKET_NAME"),
  s3: [
    scheme: "https://",
    host: System.get_env("BUCKET_NAME"),
    region: "us-west-2"
  ]

config :newsly, Newsly.Repo,
  adapter: Ecto.Adapters.Postgres,
  username: System.get_env("USERNAME"),
  password: System.get_env("PASSWORD"),
  database: System.get_env("DATABASE"),
  hostname: System.get_env("DBHOST"),
  # sometimes hostname is db (like in the docker-compose method - play with this one)
  pool_size: 10

config :newsly, Newsly.Endpoint,
  http: [port: 4000],
  debug_errors: true,
  code_reloader: false,
  url: [scheme: "http", host: System.get_env("HOST"), port: 4000],
  secret_key_base: System.get_env("SECRET_KEY_BASE"),
  pubsub: [adapter: Phoenix.PubSub.PG2, pool_size: 5, name: Newsly.PubSub],
  check_origin: false,
  watchers: [node: ["node_modules/brunch/bin/brunch", "watch", "--stdin",
             cd: Path.expand("../", __DIR__)]]
And in my Dockerfile I set my environment variables like the following:
ENV AWS_ACCESS_KEY_ID=nottelling
ENV AWS_SECRET_ACCESS_KEY=nottelling
ENV BUCKET_NAME=s3 storage bucket (not eb related)
ENV SECRET_KEY_BASE=nottelling
ENV HOST=host name of my eb instance im uploading to
ENV DBHOST=AWS rds host that holds postgres
ENV USERNAME=nottelling
ENV PASSWORD=nottelling
My health report on the instance goes red with the following warning:
Environment health has transitioned from Warning to Severe. 100.0 % of the requests are failing with HTTP 5xx. ELB processes are not healthy on all instances. ELB health is failing or not available for all instances.
NGINX seems to be choking with the lines
2017/11/16 17:59:46 [error] 28815#0: *99 connect() failed (113: No route to host) while connecting to upstream, client: 172.31.20.108, server: , request: "GET / HTTP/1.1", upstream: "http://172.17.0.2:4000/", host: "172.31.38.244"
in nginx logs.
If I look at the eb-activity log I have
duplicate MIME type "text/html" in /etc/nginx/sites-enabled/elasticbeanstalk-nginx-docker-proxy.conf:11
which seems to be killing nginx:
[2017-11-16T18:02:33.927Z] INFO [29355] - [Application update app-8412-171116_115503#5/AppDeployStage1/AppDeployEnactHook/01flip.sh] : Completed activity. Result:
nginx: [warn] duplicate MIME type "text/html" in /etc/nginx/sites-enabled/elasticbeanstalk-nginx-docker-proxy.conf:11
Stopping nginx: [ OK ]
Starting nginx: nginx: [warn] duplicate MIME type "text/html" in /etc/nginx/sites-enabled/elasticbeanstalk-nginx-docker-proxy.conf:11
[ OK ]
iptables: Saving firewall rules to /etc/sysconfig/iptables: [ OK ]
Stopping current app container: e0161742ee69...
Error response from daemon: No such image: aws_beanstalk/current-app:latest
Making STAGING app container current...
Untagged: aws_beanstalk/staging-app:latest
eb-docker start/running, process 1398
Docker container e25f2b562f4f is running aws_beanstalk/current-app.
Does anyone have any ideas?
EDIT:
Digging through the logs for nginx I found
map $http_upgrade $connection_upgrade {
    default "upgrade";
    "" "";
}

server {
    listen 80;

    gzip on;
    gzip_comp_level 4;
    gzip_types text/html text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    if ($time_iso8601 ~ "^(\d{4})-(\d{2})-(\d{2})T(\d{2})") {
        set $year $1;
        set $month $2;
        set $day $3;
        set $hour $4;
    }

    access_log /var/log/nginx/healthd/application.log.$year-$month-$day-$hour healthd;
    access_log /var/log/nginx/access.log;

    location / {
        proxy_pass http://docker;
        proxy_http_version 1.1;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
So,
duplicate MIME type "text/html" in /etc/nginx/sites-enabled/elasticbeanstalk-nginx-docker-proxy.conf:11
seems to be referring to this line:
gzip_types text/html text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
But at this point I would find it surprising if nginx were choking simply because text/html is defined twice; the log calls it a [warn], not an [error]. So now I'm not sure....
EDIT EDIT:
I should mention that my nginx/error.logs look like the following (the last lines repeat ad infinitum):
2017/11/16 17:19:22 [warn] 18445#0: duplicate MIME type "text/html" in /etc/nginx/sites-enabled/elasticbeanstalk-nginx-docker-proxy.conf:11
2017/11/16 17:19:22 [warn] 18460#0: duplicate MIME type "text/html" in /etc/nginx/sites-enabled/elasticbeanstalk-nginx-docker-proxy.conf:11
2017/11/16 17:20:06 [error] 18467#0: *11 connect() failed (113: No route to host) while connecting to upstream, client: 172.31.32.139, server: , request: "GET / HTTP/1.1", upstream: "http://172.17.0.2:4000/", host: "172.31.38.244"
2017/11/16 17:20:15 [error] 18467#0: *13 connect() failed (113: No route to host) while connecting to upstream, client: 172.31.20.108, server: , request: "GET / HTTP/1.1", upstream: "http://172.17.0.2:4000/", host: "172.31.38.244"
2017/11/16 17:20:21 [error] 18467#0: *15 connect() failed (113: No route to host) while connecting to upstream, client: 172.31.32.139, server: , request: "GET / HTTP/1.1", upstream: "http://172.17.0.2:4000/", host: "172.31.38.244"
2017/11/16 17:20:30 [error] 18467#0: *17 connect() failed (113: No route to host) while connecting to upstream, client: 172.31.20.108, server: , request: "GET / HTTP/1.1", upstream:
THIS IS THE HEART OF THE PROBLEM: NGINX fundamentally can't connect to the application upstream, and I don't know why!
UPDATE:
Following Kevin Johnson's advice I successfully pushed my Docker image up to AWS ECR, and the deployment built correctly when I eb deploy'ed my application with a good Dockerrun.aws.json. This is in fact a preferred way to do this. HOWEVER, I still get the same error! I do not know what is going on, but I think I can safely say that my Dockerfile builds successfully.
I think there is something broken in AWS and I'm not sure what.
UPDATE
The problem is related to an NGINX routing issue. More information is in a clean follow-up question: How Do I modify NGINX routing on Elastic Beanstalk AWS?
I highly suspect that the issue you are dealing with is one of the following:
1. Your Dockerrun.aws.json file points to a non-existent docker image in your ECS repository. This is indicated by the error message:
Error response from daemon: No such image: aws_beanstalk/current-app:latest
Making STAGING app container current...
When EB fails to replace the instance with the latest configuration, it will revert to the old one, which could be the AWS Hello World app that you may have used in setting up EB. That container does not have a web service running on port 4000, whereas your Dockerrun.aws.json specifies port 4000.
(I would be surprised, though, if your Dockerrun.aws.json did not also get rolled back by EB to the previous working version.)
So check Dockerrun.aws.json and ensure that the image mentioned therein actually exists. Try pulling it locally and running it accordingly. First clean up your local environment of all images and running docker containers if need be.
2. Your application running within docker crashes immediately upon startup. Again, EB will detect this and replace the crashed container with the previous container, which again does not have port 4000 open.
I'm a bit new to this, but I am trying to deploy a website I built using Django to DigitalOcean using nginx/gunicorn.
My nginx file looks as so:
server {
    listen 80;
    server_name xxx.xxx.xxx.xx;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /static/ {
        alias ~/dev/WebPortfolio/static/;
    }
}
And my settings.py file looks as so:
STATIC_ROOT = '~/dev/WebPortfolio/static/'
STATIC_URL = '/static/'
STATICFILES_DIRS = ()
Every time I run python manage.py collectstatic, the output looks like this:
You have requested to collect static files at the destination
location as specified in your settings:
/root/dev/WebPortfolio/~/dev/WebPortfolio/static
Looking at the nginx error log I see (cut out the repetitive stuff):
2015/10/08 15:12:42 [error] 23072#0: *19 open() "/usr/share/nginx/~/dev/WebPortfolio/static/http:/cdnjs.cloudflare.com/ajax/libs/jquery-easing/1.3/jquery.e asing.min.js" failed (2: No such file or directory), client: xxxxxxxxxxxxxx, server: xxxxxxxxxxxxxx, request: "GET /static/http%3A//cdnjs.cloudflare.com/aj ax/libs/jquery-easing/1.3/jquery.easing.min.js HTTP/1.1", host: "XXXXXXXX.com", referrer: "http://XXXXXXXX.com/"
2015/10/08 15:14:28 [error] 23072#0: *24 connect() failed (111: Connection refused) while connecting to upstream, client: xxxxxxxx, server: 104.236.174.46, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8000/", host: "xxxxxxxx.com"
1) I'm not entirely sure why my destination for static files is '/root/dev/WebPortfolio/~/dev/WebPortfolio/static'
Because you've used '~' in a path. That's a shell thing, not a general path thing, and unless you tell Python specifically, it won't know what to do with it. Use a full absolute path in both Django settings and nginx.
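A sketch of the fix on the Django side (paths taken from the question; BASE_DIR is the usual pattern for building absolute paths):
# settings.py -- build an absolute path instead of relying on '~'
import os

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'static')  # e.g. /root/dev/WebPortfolio/static
# or, if you really want a home-relative path, expand it explicitly:
# STATIC_ROOT = os.path.expanduser('~/dev/WebPortfolio/static')
The alias in the nginx location block then needs the same absolute path, e.g. alias /root/dev/WebPortfolio/static/; rather than a ~-relative one.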
I am using uWSGI to deploy my Django site. Here is my uwsgi.ini:
[uwsgi]
socket=/var/run/uwsgi.sock
virtualenv=/root/edupalm/env/
chdir=/root/edupalm/edupalm
master=True
workers=8
pidfile=/var/run/uwsgi-master.pid
max-requests=5000
module=edupalm.wsgi:application
and nginx. Here is my configuration:
server {
    listen 9000;
    server_name 162.243.146.127;

    access_log /var/log/nginx/edupalm_access.log;
    error_log /var/log/nginx/edupalm_error.log;

    location /static/ {
        alias /root/edupalm/edupalm/static/;
    }

    location / {
        uwsgi_pass unix:///var/run/uwsgi.sock;
    }
}
but I am getting a 502 Bad Gateway.
Here are the logs:
nginx:
2013/11/26 08:31:09 [error] 1758#0: *57 upstream prematurely closed connection while reading response header from upstream, client: 197.160.112.183, server: 162.243.146.127, request: "GET /admin HTTP/1.1", upstream: "uwsgi://unix:///var/run/uwsgi.sock:", host: "162.243.146.127:9000"
uwsgi:
-- unavailable modifier requested: 0 --
nginx is running as user www-data and uwsgi is running as root.
It's advisable to use a new user for your project, not root.
The problem is in the configuration; you should add
plugin=python
For permissions it's better to use the www-data user/group:
uid = www-data
gid = www-data
chmod-socket = 777
chown-socket = www-data
It looks like you are using a distribution package instead of the official uWSGI sources. Just load the python plugin (after having installed it) with plugin = python in your config.
http://uwsgi-docs.readthedocs.org/en/latest/WSGIquickstart.html
location / {
    uwsgi_pass unix:///var/run/uwsgi.sock;
    include uwsgi_params;
    uwsgi_param SCRIPT_NAME '';
}
I similarly had this problem for a combination of Django, uWSGI and nginx running behind CloudFront. It turned out for me that the routing table in CloudFront didn't behave as expected, so some of the callbacks weren't received.
Specifically, route "/" stole traffic from "*".
There was another issue where my Django server was running unexpected code: a user logging in caused their User model to be changed, which I hadn't predicted for some reason. So yeah, don't rule out that your Django server might be legitimately busy, causing a timeout on the socket.
I'm trying to set up a production server that consists of Django + uwsgi + Nginx.
The tutorial I'm following is located here http://www.panta.info/blog/3/how-to-install-and-configure-nginx-uwsgi-and-django-on-ubuntu.html
The production server is working because I can see the admin page when debug is on, but when I turn debug off it displays Server Error (500) again. I don't know what to do. Nginx should be serving the Django requests. I'm clueless right now. Can someone kindly help me, please?
my /etc/nginx/sites-available/mysite.com
server {
    listen 80;
    server_name mysite.com www.mysite.com;

    access_log /var/log/nginx/mysite.com_access.log;
    error_log /var/log/nginx/mysite.com_error.log;

    location / {
        uwsgi_pass unix:///tmp/mysite.com.sock;
        include uwsgi_params;
    }

    location /media/ {
        alias /home/projects/mysite/media/;
    }

    location /static/ {
        alias /home/projects/mysite/static/;
    }
}
my /etc/uwsgi/apps-available/mysite.com.ini
[uwsgi]
vhost = true
plugins = python
socket = /tmp/mysite.com.sock
master = true
enable-threads = true
processes = 2
wsgi-file = /home/projects/mysite/mysite/wsgi.py
virtualenv = /home/projects/venv
chdir = /home/projects/mysite
touch-reload = /home/projects/mysite/reload
my settings.py
root#localhost:~# cat /home/projects/mysite/mysite/settings.py
# Django settings for my site project.
DEBUG = False
TEMPLATE_DEBUG = DEBUG
min/css/base.css" failed (2: No such file or directory), client: 160.19.332.22, server: mysite.com, request: "GET /static/admin/css/base.css HTTP/1.1", host: "160.19.332.22"
2013/06/17 14:33:39 [error] 8346#0: *13 open() "/home/projects/mysite/static/admin/css/login.css" failed (2: No such file or directory), client: 160.19.332.22, server: mysite.com, request: "GET /static/admin/css/login.css HTTP/1.1", host: "174.200.14.200"
2013/06/17 14:33:39 [error] 8346#0: *14 open() "/home/projects/mysite/static/admin/css/base.css" failed (2: No such file or directory), client: 160.19.332.22, server: mysite.com, request: "GET /static/admin/css/base.css HTTP/1.1", host: "174.200.14.2007", referrer: "http://174.200.14.200/admin/"
2013/06/17 14:33:39 [error] 8346#0: *15 open() "/home/projects/mysite/static/admin/css/login.css" failed (2: No such file or directory), client: 160.19.332.22, server: mysite.com, request: "GET /static/admin/css/login.css HTTP/1.1", host: "174.200.14.200", referrer: "http://174.200.14.200/admin/"
I think it's your ALLOWED_HOSTS setting (new in Django 1.5).
Try the following in your settings.py
ALLOWED_HOSTS = ['*']
This will allow everything to connect until you get your domain name sorted.
It's worth saying that when you do get a domain name sorted, make sure you update this value (the list of allowed domain names). As the documentation for ALLOWED_HOSTS states:
This is a security measure to prevent an attacker from poisoning
caches and password reset emails with links to malicious hosts by
submitting requests with a fake HTTP Host header, which is possible
even under many seemingly-safe webserver configurations.
Also (a little aside): I don't know if you have a different setup for your Django settings per environment, but this is what I do.
At the end of your settings.py include:
try:
    from local_settings import *
except ImportError:
    pass
Then in the same directory as settings.py create a local_settings.py file (and an __init__.py file if using a different structure than the initial template) and set your settings per environment there. Also exclude local_settings.py from your version control system.
e.g. I have DEBUG=False in my settings.py (for a secure default) but can override with DEBUG=True in my development local settings.
I also keep all my database info in my local settings file so it's not in version control.
Just a little info if you didn't know it already :-)
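For instance, a development local_settings.py might look like this (a sketch; the database values are placeholders):
# local_settings.py -- development overrides; keep out of version control
DEBUG = True
TEMPLATE_DEBUG = DEBUG

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'myapp_dev',   # placeholder
        'USER': 'myapp',       # placeholder
        'PASSWORD': 'secret',  # placeholder
        'HOST': '127.0.0.1',
        'PORT': '5432',
    }
}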
I had the same issue but in my case it turned out to be that STATICFILES_STORAGE was incorrectly set as:
STATICFILES_STORAGE = 'django.contrib.staticfiles.storage.ManifestStaticFilesStorage'
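ManifestStaticFilesStorage needs the manifest that collectstatic produces, and with DEBUG = False a static file missing from the manifest raises an error that can surface as a 500. If you don't need hashed filenames, one option is to remove the setting or fall back to the default explicitly:
# settings.py -- the non-hashing default storage
STATICFILES_STORAGE = 'django.contrib.staticfiles.storage.StaticFilesStorage'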
This question already has an accepted answer, but I'm leaving this here in case someone gets here in the same situation. You can also see this similar answer for the same error.