I'm having trouble with the Elixir/Phoenix config that I need for a deployment to AWS Elastic Beanstalk. (I'm following the guide found here: https://thoughtbot.com/blog/deploying-elixir-to-aws-elastic-beanstalk-with-docker - my Dockerfile looks similar except for updated libraries.)
The app runs fine with eb local run, but I'm having trouble pushing it to production.
When I try to deploy to EB, I get the following warning, and it crashes:
Environment health has transitioned from Degraded to Severe.
100.0 % of the requests are failing with HTTP 5xx.
Command failed on all instances.
Incorrect application version found on all instances. Expected version "app-8412-171116_115503" (deployment 5).
ELB processes are not healthy on all instances.
100.0 % of the requests to the ELB are erroring with HTTP 4xx.
Insufficient request rate (0.5 requests/min) to determine application health (5 minutes ago).
ELB health is failing or not available for all instances.
I was wondering if someone could let me know if my configs look right.
I've been trying a bunch of things, but I think I've gotten confused, as I'm just guessing at this point.
config.exs
use Mix.Config

config :newsly,
  ecto_repos: [Newsly.Repo]

config :logger, :console,
  format: "$time $metadata[$level] $message\n",
  metadata: [:request_id]

import_config "#{Mix.env}.exs"
prod.exs
use Mix.Config

config :logger, :console, format: "[$level] $message\n"

config :phoenix, :stacktrace_depth, 5

import_config "prod.secret.exs"
prod.secret.exs
use Mix.Config

config :ex_aws,
  access_key_id: System.get_env("AWS_ACCESS_KEY_ID"),
  secret_access_key: System.get_env("AWS_SECRET_ACCESS_KEY"),
  bucket_name: System.get_env("BUCKET_NAME"),
  s3: [
    scheme: "https://",
    host: System.get_env("BUCKET_NAME"),
    region: "us-west-2"
  ]

config :newsly, Newsly.Repo,
  adapter: Ecto.Adapters.Postgres,
  username: System.get_env("USERNAME"),
  password: System.get_env("PASSWORD"),
  database: System.get_env("DATABASE"),
  # sometimes hostname is "db" (like in the docker-compose method - play with this one)
  hostname: System.get_env("DBHOST"),
  pool_size: 10

config :newsly, Newsly.Endpoint,
  http: [port: 4000],
  debug_errors: true,
  code_reloader: false,
  url: [scheme: "http", host: System.get_env("HOST"), port: 4000],
  secret_key_base: System.get_env("SECRET_KEY_BASE"),
  pubsub: [adapter: Phoenix.PubSub.PG2, pool_size: 5, name: Newsly.PubSub],
  check_origin: false,
  watchers: [node: ["node_modules/brunch/bin/brunch", "watch", "--stdin",
                    cd: Path.expand("../", __DIR__)]]
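(A side note on that endpoint block - a sketch only, based on general Phoenix conventions rather than on the guide: the brunch watcher is a development-time asset watcher and normally lives in dev.exs, and the url: option only affects generated URLs, not which port the endpoint binds to. A stripped-down prod endpoint would look something like the following, assuming EB's nginx serves the site on port 80.)

config :newsly, Newsly.Endpoint,
  http: [port: 4000],
  # url: controls generated links only; port 80 assumes nginx fronts the app
  url: [scheme: "http", host: System.get_env("HOST"), port: 80],
  secret_key_base: System.get_env("SECRET_KEY_BASE"),
  pubsub: [adapter: Phoenix.PubSub.PG2, pool_size: 5, name: Newsly.PubSub],
  check_origin: false,
  # server: true is only needed when the endpoint is not started via mix phx.server / mix phoenix.server
  server: true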
And in my Dockerfile I set my environment variables like the following:
ENV AWS_ACCESS_KEY_ID=nottelling
ENV AWS_SECRET_ACCESS_KEY=nottelling
ENV BUCKET_NAME=s3 storage bucket (not eb related)
ENV SECRET_KEY_BASE=nottelling
ENV HOST=host name of my eb instance im uploading to
ENV DBHOST=AWS rds host that holds postgres
ENV USERNAME=nottelling
ENV PASSWORD=nottelling
My health report on the instance goes red with the following warning:
Environment health has transitioned from Warning to Severe. 100.0 % of the requests are failing with HTTP 5xx. ELB processes are not healthy on all instances. ELB health is failing or not available for all instances.
NGINX seems to be choking; its logs contain lines like:
2017/11/16 17:59:46 [error] 28815#0: *99 connect() failed (113: No route to host) while connecting to upstream, client: 172.31.20.108, server: , request: "GET / HTTP/1.1", upstream: "http://172.17.0.2:4000/", host: "172.31.38.244"
If I look at the eb-activity log, I see
duplicate MIME type "text/html" in /etc/nginx/sites-enabled/elasticbeanstalk-nginx-docker-proxy.conf:11
which seems to be killing nginx:
[2017-11-16T18:02:33.927Z] INFO [29355] - [Application update app-8412-171116_115503#5/AppDeployStage1/AppDeployEnactHook/01flip.sh] : Completed activity. Result:
nginx: [warn] duplicate MIME type "text/html" in /etc/nginx/sites-enabled/elasticbeanstalk-nginx-docker-proxy.conf:11
Stopping nginx: [ OK ]
Starting nginx: nginx: [warn] duplicate MIME type "text/html" in /etc/nginx/sites-enabled/elasticbeanstalk-nginx-docker-proxy.conf:11
[ OK ]
iptables: Saving firewall rules to /etc/sysconfig/iptables: [ OK ]
Stopping current app container: e0161742ee69...
Error response from daemon: No such image: aws_beanstalk/current-app:latest
Making STAGING app container current...
Untagged: aws_beanstalk/staging-app:latest
eb-docker start/running, process 1398
Docker container e25f2b562f4f is running aws_beanstalk/current-app.
Does anyone have any ideas?
EDIT:
Digging through the nginx configuration, I found
map $http_upgrade $connection_upgrade {
    default "upgrade";
    ""      "";
}

server {
    listen 80;

    gzip on;
    gzip_comp_level 4;
    gzip_types text/html text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    if ($time_iso8601 ~ "^(\d{4})-(\d{2})-(\d{2})T(\d{2})") {
        set $year $1;
        set $month $2;
        set $day $3;
        set $hour $4;
    }

    access_log /var/log/nginx/healthd/application.log.$year-$month-$day-$hour healthd;
    access_log /var/log/nginx/access.log;

    location / {
        proxy_pass http://docker;
        proxy_http_version 1.1;

        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
So,
duplicate MIME type "text/html" in /etc/nginx/sites-enabled/elasticbeanstalk-nginx-docker-proxy.conf:11
seems to be referring to this line:
gzip_types text/html text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
But at this point I would find it surprising if nginx was choking simply because it defines text/html twice. So now I'm not sure....
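For reference, nginx always compresses text/html responses regardless of gzip_types, so listing text/html there is what produces this duplicate-MIME-type message; it is only a warning, not an error, and should not by itself kill nginx. A version of the directive without the redundant entry would be:

gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;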
EDIT EDIT:
I should mention that my nginx error logs look like the following (the last lines repeat ad infinitum):
2017/11/16 17:19:22 [warn] 18445#0: duplicate MIME type "text/html" in /etc/nginx/sites-enabled/elasticbeanstalk-nginx-docker-proxy.conf:11
2017/11/16 17:19:22 [warn] 18460#0: duplicate MIME type "text/html" in /etc/nginx/sites-enabled/elasticbeanstalk-nginx-docker-proxy.conf:11
2017/11/16 17:20:06 [error] 18467#0: *11 connect() failed (113: No route to host) while connecting to upstream, client: 172.31.32.139, server: , request: "GET / HTTP/1.1", upstream: "http://172.17.0.2:4000/", host: "172.31.38.244"
2017/11/16 17:20:15 [error] 18467#0: *13 connect() failed (113: No route to host) while connecting to upstream, client: 172.31.20.108, server: , request: "GET / HTTP/1.1", upstream: "http://172.17.0.2:4000/", host: "172.31.38.244"
2017/11/16 17:20:21 [error] 18467#0: *15 connect() failed (113: No route to host) while connecting to upstream, client: 172.31.32.139, server: , request: "GET / HTTP/1.1", upstream: "http://172.17.0.2:4000/", host: "172.31.38.244"
2017/11/16 17:20:30 [error] 18467#0: *17 connect() failed (113: No route to host) while connecting to upstream, client: 172.31.20.108, server: , request: "GET / HTTP/1.1", upstream:
THIS IS THE HEART OF THE PROBLEM:
NGINX fundamentally can't connect the entry point to the application, and I don't know why!
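A rough debugging sketch, assuming SSH access to the EB instance (the container ID and IP below are whatever docker ps / docker inspect report, not fixed values):

# list running containers and confirm the app container is actually up
sudo docker ps

# see whether the app is crashing on boot
sudo docker logs <container-id>

# find the container's bridge IP (this is the address nginx proxies to)
sudo docker inspect -f '{{.NetworkSettings.IPAddress}}' <container-id>

# check whether anything answers on port 4000 at that IP
curl -v http://172.17.0.2:4000/

If the container is up but nothing listens on 4000, the problem is inside the app; if the container is not running at all, the "No route to host" from nginx is just a symptom.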
UPDATE:
Following Kevin Johnson's advice, I successfully pushed my Docker image to AWS ECR, and the image built correctly when I ran eb deploy with a good Dockerrun.aws.json. This is in fact a preferred way to do this. HOWEVER, I still get the same error! I do not know what is going on, but I think I can safely say that my Dockerfile builds successfully.
I think something is broken on the AWS side, and I'm not sure what.
UPDATE
The problem is related to an NGINX routing issue. More information here in a clean follow-up question: How Do I modify NGINX routing on Elastic Beanstalk AWS?
I highly suspect that the issue you are dealing with is one of the following:
Your Dockerrun.aws.json file points to a non-existent Docker image in your ECR repository. This is indicated by the error message:
Error response from daemon: No such image: aws_beanstalk/current-app:latest
Making STAGING app container current...
When EB fails to replace the instance with the latest configuration, it falls back to the old one, which could be the AWS "Hello World" sample app you may have used when setting up EB. That container does not have a web service running on port 4000, whereas your Dockerrun.aws.json specifies port 4000.
(I would be surprised, though, if your Dockerrun.aws.json did not also get rolled back by EB to the previous working version.)
So check Dockerrun.aws.json and ensure that the image it references actually exists (a sketch follows after these two points). Try pulling it locally and running it. First clean up your local environment of all images and running containers if need be.
Your application running within Docker crashes immediately on startup. Again, EB will detect this and replace the crashed container with the previous container, which again does not have port 4000 open.
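For reference, a minimal single-container Dockerrun.aws.json along the lines described above - a sketch only; the account ID, region, and repository name are placeholders, not values from this question:

{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "123456789012.dkr.ecr.us-west-2.amazonaws.com/newsly:latest",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "4000"
    }
  ]
}

To verify the image exists, authenticate to ECR and try pulling and running it locally, e.g. docker pull 123456789012.dkr.ecr.us-west-2.amazonaws.com/newsly:latest followed by docker run -p 4000:4000 on the pulled image.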
Related
I don't think this is a very common question; I'm only asking it because I've already started some EC2 instances using the method I'll explain below and succeeded. Maybe EC2 changed something about the right way to connect to an instance over HTTP using the public DNS. Here are the steps I've always done, and I don't know why it isn't working anymore.
public dns: ec2-23-22-52-143.compute-1.amazonaws.com
1 - Set up the default security group, which is open to all traffic
2 - Add the IAM policy to this EC2 instance
3 - Access the instance over SSH and configure nginx. I used PuTTY and could get onto the instance. The nginx configuration is /etc/nginx/sites-available/default:
## default nginx config
server {
    listen 80 default_server;
    server_name _;

    # front-end
    location / {
        root /var/www/html;
        try_files $uri /index.html;
    }

    # node api
    location /api/ {
        proxy_pass http://localhost:3000/;
    }
}
4 - Clone both my front-end and back-end repositories from GitHub
5 - Build for production and move all the frontend dist files to /var/www/html
6 - Start my Node.js server using pm2
7 - Start nginx:
sudo nginx -t
sudo systemctl start nginx
sudo netstat -plant | grep 80
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 21159/nginx: master
As you can see, nginx is listening on port 80.
I have no idea why I can't access the public DNS of this instance. I did everything exactly as I've done in the past, and it has always worked with these steps. If anything has changed with AWS EC2 Ubuntu 20 instances, let me know. Thanks a lot, I'm getting a headache trying to figure this out.
The last step I tried in order to solve it was checking the nginx logs:
cd /var/log/nginx
2022/04/05 09:42:02 [error] 8216#8216: *1 directory index of "/var/www/html/" is forbidden, client: 103.178.236.40, server: _, request: "GET http://example.>
But even doing this has not solved the issue:
sudo chmod -R 777 /var/www/html
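A quick sketch of checks for that 403 ("directory index of ... is forbidden" generally means nginx ended up trying to list the directory itself because no index file was found or served, and autoindex defaults to off), using the paths from this setup:

# confirm the built front-end actually landed in the web root
ls -la /var/www/html/index.html

# confirm the edited config parses and is the one nginx actually loaded
sudo nginx -t
sudo nginx -T | grep -A 3 "root /var/www/html"

# watch the error log while reloading the page
sudo tail -f /var/log/nginx/error.log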
You are accessing the site via https (443) while it's running on http (80).
Here is the result of curl.
root@MSI:~# curl -vk https://ec2-23-22-52-143.compute-1.amazonaws.com
* Rebuilt URL to: https://ec2-23-22-52-143.compute-1.amazonaws.com/
* Trying 23.22.52.143...
* TCP_NODELAY set
* connect to 23.22.52.143 port 443 failed: Connection refused
* Failed to connect to ec2-23-22-52-143.compute-1.amazonaws.com port 443: Connection refused
* Closing connection 0
curl: (7) Failed to connect to ec2-23-22-52-143.compute-1.amazonaws.com port 443: Connection refused
root@MSI:~# curl -vk http://ec2-23-22-52-143.compute-1.amazonaws.com
* Rebuilt URL to: http://ec2-23-22-52-143.compute-1.amazonaws.com/
* Trying 23.22.52.143...
* TCP_NODELAY set
* Connected to ec2-23-22-52-143.compute-1.amazonaws.com (23.22.52.143) port 80 (#0)
> GET / HTTP/1.1
> Host: ec2-23-22-52-143.compute-1.amazonaws.com
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.18.0 (Ubuntu)
< Date: Tue, 05 Apr 2022 13:02:08 GMT
< Content-Type: text/html
< Content-Length: 1676
< Last-Modified: Tue, 05 Apr 2022 09:54:30 GMT
< Connection: keep-alive
< ETag: "624c11d6-68c"
< Accept-Ranges: bytes
<
* Connection #0 to host ec2-23-22-52-143.compute-1.amazonaws.com left intact
<!DOCTYPE html><html class="bg-image" lang="en"><head><meta charset="utf-8"><meta http-equiv="X-UA-Compatible" content="IE=edge"><meta name="viewport" content="width=device-width,initial-scale=1"><link rel="my icon" href="/assets/icone.ico" type="image/x-icon"><title>Lab301mktdigital</title><link rel="preconnect" href="https://fonts.googleapis.com"><link rel="preconnect" href="https://fonts.gstatic.com" crossorigin><link href="https://fonts.googleapis.com/css2?family=Comfortaa:wght#300;400;500;600;700&display=swap" rel="stylesheet"><link href="https://fonts.googleapis.com/css2?family=Dancing+Script:wght#400;500;600;700&display=swap" rel="stylesheet"><link href="https://fonts.googleapis.com/css2?family=Playfair+Display:ital,wght#0,400;0,500;0,600;0,700;0,800;0,900;1,400;1,500;1,600;1,700;1,800;1,900&display=swap" rel="stylesheet"><link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/#mdi/font#latest/css/materialdesignicons.min.css"><link rel="stylesheet" href="./assets/styles/general.css"><link href="/css/app.9ba0b389.css" rel="preload" as="style"><link href="/css/chunk-vendors.f754c4c0.css" rel="preload" as="style"><link href="/js/app.5380592f.js" rel="preload" as="script"><link href="/js/chunk-vendors.1ab5dd1a.js" rel="preload" as="script"><link href="/css/chunk-vendors.f754c4c0.css" rel="stylesheet"><link href="/css/app.9ba0b389.css" rel="stylesheet"></head><body><noscript><strong>We're sorry but freelancer-front-end doesn't work properly without JavaScript enabled. Please enable it to continue.</strong></noscript><div id="app"></div><script src="/js/chunk-vendors.1ab5dd1a.js"></script><script src="/js/app.5380592f.js"></script></body></html>
From the browser: (screenshot not included)
I have spent a good part of the last 3 days trying every solution that is on the internet and feeling desperate. Here's the problem statement:
I have a Dockerized app with three services:
A django application with gunicorn (web)
A Nginx server (nginx)
PostgreSQL (db)
My web application requires users to log in with their GitHub account through a fairly standard OAuth process. This has always worked without nginx: the user clicks the "log in with GitHub" button, is sent to GitHub to authorize the application, and is redirected back to a completion page.
I have "Authorization callback URL" filled in as http://localhost:8000. Without Nginx I can navigate to the app on localhost, click on the button, and upon authorization, get redirected back to my app on localhost.
With Nginx, it always fails with the following error (from the nginx console):
GET /auth/complete/github/?error=redirect_uri_mismatch&error_description=The+redirect_uri+MUST+match+the+registered+callback+URL+for+this+application.&error_uri=https%3A%2F%2Fdeveloper.github.com%2Fapps%2Fmanaging-oauth-apps%2Ftroubleshooting-authorization-request-errors%2F%23redirect-uri-mismatch&state=nmegLb41b959su31nRU4ugFOzAqE8Cbl HTTP/1.1
This is my Nginx configuration:
upstream webserver {
    # docker will automatically resolve this to the correct address
    # because we use the same name as the service: "web"
    server web:8000;
}

# now we declare our main server
server {
    listen 80;
    server_name localhost;

    location / {
        # everything is passed to Gunicorn
        proxy_pass http://webserver;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect off;
    }

    location /static {
        autoindex off;
        alias /static_files/;
    }

    location /media {
        alias /opt/services/starport/media;
    }
}
This is my docker-compose.yml:
version: '3.7'

services:
  web:
    build: .
    command: sh -c "cd starport && gunicorn starport.wsgi:application --bind 0.0.0.0:8000"
    volumes:
      - static_files:/static_files  # <-- bind the static volume
    networks:
      - nginx_network

  nginx:
    image: nginx
    ports:
      - 8000:80
    volumes:
      - ./config/nginx/conf.d:/etc/nginx/conf.d
      - static_files:/static_files  # <-- bind the static volume
    depends_on:
      - web
    networks:
      - nginx_network

networks:
  nginx_network:
    driver: bridge

volumes:
  static_files:
My hunch was that the reason it works without Nginx but not with it has to do with the redirection, since Nginx listens on one port and then forwards to a different one. GitHub's docs specifically say that the redirect URI needs to be exactly the same as the registered callback URL. I've also used the inspector tools, and these are my request headers:
GET /accounts/login/ HTTP/1.1
Cookie: ...
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Upgrade-Insecure-Requests: 1
Host: localhost:8000
User-Agent: Mozilla/5.0
Accept-Language: en-us
Accept-Encoding: gzip, deflate
Connection: keep-alive
The error message I get with Nginx (again, stressing that it works like a charm, without error, 10 out of 10 times without nginx) is:
http://localhost:8000/auth/complete/github/?error=redirect_uri_mismatch&error_description=The+redirect_uri+MUST+match+the+registered+callback+URL+for+this+application.&error_uri=https%3A%2F%2Fdeveloper.github.com%2Fapps%2Fmanaging-oauth-apps%2Ftroubleshooting-authorization-request-errors%2F%23redirect-uri-mismatch
As an additional detail, I'm using the social-auth-app-django package but this should not matter.
Further troubleshooting
After countless hours of probing, I ran this locally in debug mode and this time closely monitored my request information. When the user hits the link to authorize with GitHub via OAuth, this is the request along with all the header information:
CSRF_COOKIE 'abc'
HTTP_ACCEPT 'text/html,application/xhtml+xml'
HTTP_ACCEPT_ENCODING 'gzip, deflate'
HTTP_ACCEPT_LANGUAGE 'en-us'
HTTP_CONNECTION 'close'
HTTP_COOKIE ('csrftoken=...; ')
HTTP_HOST 'localhost'
HTTP_REFERER 'http://localhost:8000/accounts/login/?next=/'
HTTP_USER_AGENT ('Mozilla/5.0')
HTTP_X_FORWARDED_FOR '172.26.0.1'
HTTP_X_FORWARDED_PROTO 'http'
PATH_INFO '/auth/complete/github/'
SERVER_NAME '0.0.0.0'
SERVER_PORT '8000'
QUERY_STRING
'error=redirect_uri_mismatch&error_description=The+redirect_uri+MUST+match+the+registered+callback+URL+for+this+application.&error_uri=https%3A%2F%2Fdeveloper.github.com%2Fapps%2Fmanaging-oauth-apps%2Ftroubleshooting-authorization-request-errors%2F%23redirect-uri-mismatch&state=NwUhVqfOCNb51zpvoFbXVvm1Cr7k3Fda'
RAW_URI
'/auth/complete/github/?error=redirect_uri_mismatch&error_description=The+redirect_uri+MUST+match+the+registered+callback+URL+for+this+application.&error_uri=https%3A%2F%2Fdeveloper.github.com%2Fapps%2Fmanaging-oauth-apps%2Ftroubleshooting-authorization-request-errors%2F%23redirect-uri-mismatch&state=NwUhVqfOCNb51zpvoFbXVvm1Cr7k3Fda'
What immediately stood out to me were the values of HTTP_HOST, HTTP_REFERER and SERVER_NAME. What's also interesting to me is that the error message says:
http://localhost/auth/complete/github/?error=redirect_uri_mismatch&error_description=The+redirect_uri+MUST+match+the+registered+callback+URL+for+this+application.&error_uri=https%3A%2F%2Fdeveloper.github.com%2Fapps%2Fmanaging-oauth-apps%2Ftroubleshooting-authorization-request-errors%2F%23redirect-uri-mismatch&state=NwUhVqfOCNb51zpvoFbXVvm1Cr7k3Fda
Where instead of http://localhost:8000 it only has http://localhost, which looks like a big hint that I am not configuring things correctly. Any leads or assistance would help!
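Following that hint - a sketch only, not a confirmed fix: nginx's $host variable drops the port, while $http_host passes the browser's Host header through unchanged, so one thing worth trying in the location / block is:

location / {
    proxy_pass http://webserver;
    # $http_host preserves "localhost:8000" exactly as the browser sent it;
    # $host would hand Django just "localhost", matching the symptom above
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_redirect off;
}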
Resources I've tried
StackOverflow threads like this one seem promising, but similar questions like this one receive no meaningful response except to explain the error.
I'm having some problems serving large file downloads/uploads (3 GB+).
As I'm using Django, I guess that the problem serving the file could come from Django or Nginx.
In my Nginx enabled-site config I have:
server {
    ...
    client_max_body_size 4G;
    ...
}
And in Django I'm serving the files in chunks:
def return_file(path):
    filename = os.path.basename(path)
    chunk_size = 8192
    response = StreamingHttpResponse(FileWrapper(open(path), chunk_size),
                                     content_type=mimetypes.guess_type(path)[0])
    response['Content-Length'] = os.path.getsize(path)
    response['Content-Disposition'] = 'attachment; filename={0}'.format(filename)
    return response
This method allowed me to go from downloads of ~600 MB to 2.6 GB, but it seems that downloads are getting truncated at 2.6 GB. I traced the error:
2015/09/04 11:31:30 [error] 831#0: *553 upstream prematurely closed connection while reading upstream, client: 127.0.0.1, server: localhost, request: "GET /chat/download/photorec.zip/ HTTP/1.1", upstream: "http://unix:/web/rsmweb/run/gunicorn.sock:/chat/download/photorec.zip/", host: "localhost", referrer: "http://localhost/chat/2/"
After reading some posts I added the following to my NGinx conf:
proxy_read_timeout 300;
proxy_connect_timeout 300;
proxy_redirect off;
But I got the same error, with *1 instead of *553.
I also thought that it could be a Django database timeout, so I added:
DATABASE_OPTIONS = {
    'connect_timeout': 14400,
}
But it is not working either (the download over the development server takes about 30 seconds).
PS: Someone already pointed out to me that the problem is Django, but I haven't been able to figure out why. Django is not printing or logging any error!
Thanks for any help!
Don't use Django to deliver static content, especially not static content as large as this. Nginx is ideal for delivering it. All you need to do is create a mapping such as this in your nginx configuration file:
location /static/ {
    try_files $uri =404;
    root /var/www/myapp/;
    gzip on;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
}
Here /var/www/myapp/ is the top-level folder for your Django app. Inside it you will have a folder named static/ into which you need to collect all your static files with Django's manage.py collectstatic command (a settings sketch follows below).
Of course you are free to rename these folders any way you like and to use a different file structure altogether. More about how to configure nginx for static content at this link: http://nginx.org/en/docs/beginners_guide.html#static
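A minimal sketch of the Django side of that (the setting names are standard Django; the paths are only illustrative and should match the nginx root above):

# settings.py
STATIC_URL = '/static/'
# collectstatic copies every app's static files into this directory,
# which is the folder nginx serves directly
STATIC_ROOT = '/var/www/myapp/static/'

Then run python manage.py collectstatic whenever the static files change.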
I ran into a similar problem, which was visible in the nginx error log files as lines like this:
<TIMESTAMP> [error] 1221#1221: *913310 upstream prematurely closed connection
while reading upstream, client: <IP>, server: <IP>, request: "GET <URL> HTTP/1.1",
upstream: "http://unix:<LOCAL_DJANGO_APP_DIR_PATH>/run/gunicorn.sock:
<REL_PATH_LOCAL_FILE_TO_BE_DOWNLOADED>", host: "<URL>", referrer: "<URL>/<PAGE>"
This is caused by the --timeout setting in the file
<LOCAL_DJANGO_APP_DIR_PATH>/bin/gunicorn_start
(found at "command:" in /etc/supervisor/conf.d/<APPNAME>.conf)
In the gunicorn_start file change this line:
exec /usr/local/bin/gunicorn [...] \
--timeout <OLD_TIMEOUT> \
[...]
This was set to 300 and I had to change it to 1280 (this is in seconds!).
Transfers of ~5GB are easily handled this way without RAM issues using
django.views.static.serve(request, <LOCAL_FILE_NAME>, <LOCAL_FILE_DIR>)
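For completeness, a sketch of how that serve() call can be wired up (the view name and document_root are made up for illustration; django.views.static.serve is documented as a convenience rather than a hardened production file server):

from django.views.static import serve

def download(request, filename):
    # stream a file from a fixed directory; gunicorn's --timeout still has to be
    # long enough for the slowest expected transfer
    return serve(request, filename, document_root='/srv/myapp/downloads')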
I have a problem with an Nginx - Unicorn - Rails 4.1 and Spree production setup, built according to this tutorial.
The site shows up at the IP address (I have yet to get a domain), but it seems the assets are not readable. This is the error log from /var/log/nginx/spree_zaza_error.log:
2014/12/21 23:06:22 [error] 13598#0: *12 open() "/home/user/workplace/spree_zaza/public/assets/spree.js" failed (2: No such file or directory), client: 213.230.83.135, server: , request: "GET /assets/spree.js?body=1 HTTP/1.1", host: "212.111.40.25", referrer: "http://212.111.40.25/t/brand/apache"
2014/12/21 23:06:22 [error] 13598#0: *11 open() "/home/user/workplace/spree_zaza/public/assets/spree/frontend/checkout.js" failed (2: No such file or directory), client: 213.230.83.135, server: , request: "GET /assets/spree/frontend/checkout.js?body=1 HTTP/1.1", host: "212.111.40.25", referrer: "http://212.111.40.25/t/brand/apache"
2014/12/21 23:06:22 [error] 13598#0: *11 open() "/home/user/workplace/spree_zaza/public/assets/logo/spree_50.png" failed (2: No such file or directory), client: 213.230.83.135, server: , request: "GET /assets/logo/spree_50.png HTTP/1.1", host: "212.111.40.25", referrer: "http://212.111.40.25/t/brand/apache"
Although I ran rake assets:precompile and there are a bunch of hashed and gzipped files, some files don't exist; assets/logo/spree_50.png, for example, is there, though.
This is my /etc/nginx/sites-enabled/spree_zaza file:
upstream spree_zaza {
    # fail_timeout=0 means we always retry an upstream even if it failed
    # to return a good HTTP response (in case the Unicorn master nukes a
    # single worker for timing out).
    server unix:/tmp/spree_zaza.socket fail_timeout=0;
}

server {
    # if you're running multiple servers, instead of "default" you should
    # put your main domain name here
    listen 80 default;

    # you could put a list of other domain names this application answers
    #server_name [your server's address];

    root /home/user/workplace/spree_zaza/public;

    access_log /var/log/nginx/spree_zaza_access.log;
    error_log /var/log/nginx/spree_zaza_error.log;
    rewrite_log on;

    location / {
        # all requests are sent to the UNIX socket
        proxy_pass http://spree_zaza;
        proxy_redirect off;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        client_max_body_size 10m;
        client_body_buffer_size 128k;

        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;

        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
    }

    # if the request is for a static resource, nginx should serve it directly
    # and add a far future expires header to it, making the browser
    # cache the resource and navigate faster over the website
    location ~ ^/(system|assets|spree)/ {
        root /home/user/workplace/spree_zaza/public;
        expires max;
        break;
    }
}
And the following is /home/user/workplace/spree_zaza/config/unicorn.rb:
# config/unicorn.rb

# Set environment to development unless something else is specified
#env = ENV["RAILS_ENV"] || "development"
#env = ENV["RAILS_ENV"] || "production"
env = "production"

# See http://unicorn.bogomips.org/Unicorn/Configurator.html for complete documentation.
worker_processes 3

# listen on both a Unix domain socket and a TCP port,
# we use a shorter backlog for quicker failover when busy
listen "/tmp/spree_zaza.socket", backlog: 64

# Preload our app for more speed
preload_app true

# nuke workers after 30 seconds instead of 60 seconds (the default)
timeout 30

pid "/tmp/unicorn.spree_zaza.pid"

# Production specific settings
if env == "production"
  # Help ensure your application will always spawn in the symlinked
  # "current" directory that Capistrano sets up.
  working_directory "/home/user/workplace/spree_zaza"

  # feel free to point this anywhere accessible on the filesystem user 'spree'
  shared_path = "/home/user/workplace/spree_zaza"

  stderr_path "#{shared_path}/log/unicorn.stderr.log"
  stdout_path "#{shared_path}/log/unicorn.stdout.log"
end

before_fork do |server, worker|
  # the following is highly recommended for Rails + "preload_app true"
  # as there's no need for the master process to hold a connection
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.connection.disconnect!
  end

  # Before forking, kill the master process that belongs to the .oldbin PID.
  # This enables 0 downtime deploys.
  old_pid = "/tmp/unicorn.spree_zaza.pid.oldbin"
  if File.exists?(old_pid) && server.pid != old_pid
    begin
      Process.kill("QUIT", File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
      # someone else did our job for us
    end
  end
end

after_fork do |server, worker|
  # the following is *required* for Rails + "preload_app true"
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.establish_connection
  end

  # if preload_app is true, then you may also want to check and
  # restart any other shared sockets/descriptors such as Memcached,
  # and Redis. TokyoCabinet file handles are safe to reuse
  # between any number of forked children (assuming your kernel
  # correctly implements pread()/pwrite() system calls)
end
Also, I uncommented the following switch in config/environments/production.rb
config.action_dispatch.x_sendfile_header = 'X-Accel-Redirect'
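For what it's worth, X-Accel-Redirect only does something if nginx also has an internal location that maps the redirected path onto files on disk; a sketch of such a block (the path names are illustrative, not taken from this setup):

# Rails sets "X-Accel-Redirect: /private_files/<file>"; nginx then serves it directly
location /private_files/ {
    internal;
    alias /home/user/workplace/spree_zaza/private/;
}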
Thanks for your ideas.
I'm trying to set up a production server that consists of Django + uWSGI + Nginx.
The tutorial I'm following is located here: http://www.panta.info/blog/3/how-to-install-and-configure-nginx-uwsgi-and-django-on-ubuntu.html
The production server is working, because I can see the admin page when debug is on, but when I turn debug off it displays Server Error (500) again. I don't know what to do. Nginx should be serving the Django requests. I'm clueless right now; can someone kindly help me, please?
my /etc/nginx/sites-available/mysite.com
server {
    listen 80;
    server_name mysite.com www.mysite.com;

    access_log /var/log/nginx/mysite.com_access.log;
    error_log /var/log/nginx/mysite.com_error.log;

    location / {
        uwsgi_pass unix:///tmp/mysite.com.sock;
        include uwsgi_params;
    }

    location /media/ {
        alias /home/projects/mysite/media/;
    }

    location /static/ {
        alias /home/projects/mysite/static/;
    }
}
my /etc/uwsgi/apps-available/mysite.com.ini
[uwsgi]
vhost = true
plugins = python
socket = /tmp/mysite.com.sock
master = true
enable-threads = true
processes = 2
wsgi-file = /home/projects/mysite/mysite/wsgi.py
virtualenv = /home/projects/venv
chdir = /home/projects/mysite
touch-reload = /home/projects/mysite/reload
my settings.py
root@localhost:~# cat /home/projects/mysite/mysite/settings.py
# Django settings for my site project.
DEBUG = False
TEMPLATE_DEBUG = DEBUG
min/css/base.css" failed (2: No such file or directory), client: 160.19.332.22, server: mysite.com, request: "GET /static/admin/css/base.css HTTP/1.1", host: "160.19.332.22"
2013/06/17 14:33:39 [error] 8346#0: *13 open() "/home/projects/mysite/static/admin/css/login.css" failed (2: No such file or directory), client: 160.19.332.22, server: mysite.com, request: "GET /static/admin/css/login.css HTTP/1.1", host: "174.200.14.200"
2013/06/17 14:33:39 [error] 8346#0: *14 open() "/home/projects/mysite/static/admin/css/base.css" failed (2: No such file or directory), client: 160.19.332.22, server: mysite.com, request: "GET /static/admin/css/base.css HTTP/1.1", host: "174.200.14.2007", referrer: "http://174.200.14.200/admin/"
2013/06/17 14:33:39 [error] 8346#0: *15 open() "/home/projects/mysite/static/admin/css/login.css" failed (2: No such file or directory), client: 160.19.332.22, server: mysite.com, request: "GET /static/admin/css/login.css HTTP/1.1", host: "174.200.14.200", referrer: "http://174.200.14.200/admin/"
I think it's your ALLOWED_HOSTS setting (new in Django 1.5)
Try the following in your settings.py
ALLOWED_HOSTS = ['*']
This will allow everything to connect until you get your domain name sorted.
It's worth saying that when you do get a domain name sorted, make sure you update this value (the list of allowed domain names). As the documentation for ALLOWED_HOSTS states:
This is a security measure to prevent an attacker from poisoning
caches and password reset emails with links to malicious hosts by
submitting requests with a fake HTTP Host header, which is possible
even under many seemingly-safe webserver configurations.
Also (a little aside) - I don't know if you have a different setup for your Django settings per environment, but this is what I do:
At the end of your settings.py include:
try:
    from local_settings import *
except ImportError:
    pass
Then, in the same directory as settings.py, create a local_settings.py file (and an __init__.py file if using a different structure than the initial template) and set your per-environment settings there. Also exclude local_settings.py from your version control system.
e.g. I have DEBUG=False in my settings.py (for a secure default) but can override with DEBUG=True in my development local settings.
I also keep all my database info in my local settings file so it's not in version control.
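A minimal sketch of what such a local_settings.py might contain (all values are placeholders):

# local_settings.py - per-environment overrides, kept out of version control
DEBUG = True
ALLOWED_HOSTS = ['localhost', '127.0.0.1']

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'mysite_dev',
        'USER': 'devuser',
        'PASSWORD': 'changeme',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}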
Just a little info if you didn't know it already :-)
I had the same issue, but in my case it turned out that STATICFILES_STORAGE was incorrectly set as:
STATICFILES_STORAGE = 'django.contrib.staticfiles.storage.ManifestStaticFilesStorage'
This question already has an accepted answer, but I'm leaving this here in case someone ends up in the same situation. You can also see this similar answer for the same error.
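If you hit the same thing, two hedged options, depending on what is actually wrong, are to run collectstatic so the manifest that ManifestStaticFilesStorage expects actually exists, or to fall back to the default backend:

# settings.py - Django's default static files backend; no manifest file required
STATICFILES_STORAGE = 'django.contrib.staticfiles.storage.StaticFilesStorage'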