I want to let nginx serve my Rails app's static assets so they can be cached, but I'm getting "No such file or directory" errors in the nginx error log. It seems the CSS files imported by root-c452663d516929cd4bb4c1cd521971eb.css cannot be served. How can I fix this?
production.rb
# Disable serving static files from the `/public` folder by default since
# Apache or NGINX already handles this.
#config.serve_static_files = ENV['RAILS_SERVE_STATIC_FILES'].present?
config.serve_static_files = false
# Compress JavaScripts and CSS.
config.assets.js_compressor = :uglifier
# config.assets.css_compressor = :sass
# Do not fallback to assets pipeline if a precompiled asset is missed.
config.assets.compile = true
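Two things in this block work against each other: the comment says not to fall back to the asset pipeline, yet `compile = true` enables exactly that fallback, and with nginx intercepting `/assets/` the fallback can never be reached anyway. A sketch of the more conventional production setup (standard Rails 4.x option names; adjust to your Rails version):

```ruby
# config/environments/production.rb (sketch)
# Let nginx serve /public; only flip this on via env when nginx is absent.
config.serve_static_files = ENV['RAILS_SERVE_STATIC_FILES'].present?
# Rely solely on `rake assets:precompile` output; no live compilation.
config.assets.compile = false
```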
Contents of http://localhost/assets/kode/css/root-c452663d516929cd4bb4c1cd521971eb.css:
/* Summernote */
@import url('plugin/summernote/summernote.css');
@import url('plugin/summernote/summernote-bs3.css');
/* Sweet Alert */
@import url('plugin/sweet-alert/sweet-alert.css');
/* Data Tables */
@import url('plugin/datatables/datatables.css');
/* Chartist */
@import url('plugin/chartist/chartist.min.css');
/* Rickshaw */
@import url('plugin/rickshaw/rickshaw.css');
@import url('plugin/rickshaw/detail.css');
@import url('plugin/rickshaw/graph.css');
@import url('plugin/rickshaw/legend.css');
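Those `@import url(...)` lines are plain CSS imports: the browser resolves them relative to the stylesheet's URL and requests `/assets/kode/css/plugin/...`, paths that were never precompiled into `public/assets`. One common fix is to let Sprockets/Sass inline them at precompile time instead, e.g. by renaming the file to `.scss` and using Sass imports (a sketch; the exact file location and load-path setup are assumptions about your app):

```scss
// root.css.scss (sketch) — Sass inlines these at compile time,
// so the browser only ever fetches the single fingerprinted root CSS.
@import 'plugin/summernote/summernote';
@import 'plugin/summernote/summernote-bs3';
@import 'plugin/sweet-alert/sweet-alert';
@import 'plugin/rickshaw/rickshaw';
```

After the rename, a fresh `rake assets:precompile` should leave no un-fingerprinted plugin URLs for nginx to miss.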
nginx config:
location / {
    try_files $uri @sample;
    gzip_static on;
    expires max;
    add_header Cache-Control public;
}

location @sample {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://sample;
}

location ~* ^/assets/ {
    # Per RFC2616 - 1 year maximum expiry
    expires 1y;
    add_header Cache-Control public;
    # Some browsers still send conditional-GET requests if there's a
    # Last-Modified header or an ETag header even if they haven't
    # reached the expiry date sent in the Expires header.
    add_header Last-Modified "";
    add_header ETag "";
    break;
}
Error log
2015/07/22 17:27:02 [error] 9891#0: *9 open() "/var/public/assets/kode/css/plugin/date-range-picker/daterangepicker-bs3.css" failed (2: No such file or directory), client: 118.166.217.131, server: www.localhost, request: "GET /assets/kode/css/plugin/date-range-picker/daterangepicker-bs3.css HTTP/1.1", host: "localhost", referrer: "http://localhost/assets/kode/css/root-c452663d516929cd4bb4c1cd521971eb.css"
2015/07/22 17:27:02 [error] 9891#0: *10 open() "/var/public/assets/kode/css/plugin/rickshaw/legend.css" failed (2: No such file or directory), client: 118.166.217.131, server: www.localhost, request: "GET /assets/kode/css/plugin/rickshaw/legend.css HTTP/1.1", host: "localhost", referrer: "http://localhost/assets/kode/css/root-c452663d516929cd4bb4c1cd521971eb.css"
2015/07/22 17:27:02 [error] 9891#0: *11 open() "/var/public/assets/kode/css/plugin/rickshaw/detail.css" failed (2: No such file or directory), client: 118.166.217.131, server: www.localhost, request: "GET /assets/kode/css/plugin/rickshaw/detail.css HTTP/1.1", host: "localhost", referrer: "http://localhost/assets/kode/css/root-c452663d516929cd4bb4c1cd521971eb.css"
2015/07/22 17:27:02 [error] 9891#0: *5 open() "/var/public/assets/kode/css/plugin/fullcalendar/fullcalendar.css" failed (2: No such file or directory), client: 118.166.217.131, server: www.localhost, request: "GET /assets/kode/css/plugin/fullcalendar/fullcalendar.css HTTP/1.1", host: "localhost", referrer: "http://localhost/assets/kode/css/root-c452663d516929cd4bb4c1cd521971eb.css"
puma.config
app_path = File.expand_path('../', File.dirname(__FILE__))
pidfile "#{app_path}/tmp/pids/puma.pid"
bind "unix:///tmp/puma.lazyair.sock"
stdout_redirect "#{app_path}/log/puma.stdout.log", "#{app_path}/log/puma.stderr.log", true
workers Integer(ENV['WEB_CONCURRENCY'] || 6)
threads_count = Integer(6)
threads threads_count, threads_count
preload_app!
rackup DefaultRackup
port ENV['PORT'] || 3457
# Default to production
rails_env = ENV['RAILS_ENV'] || "production"
environment rails_env
on_worker_boot do
  ActiveRecord::Base.establish_connection
end
activate_control_app
Related
I am running a Django project in a Docker container, with uWSGI as the application server and nginx acting as a reverse proxy.
I am able to restrict Django views to users based on user.is_authenticated().
However, I am not able to restrict media and static files to authenticated users, because nginx serves them directly from the filesystem.
I do not have a second, dedicated server whose only purpose is to identify and authenticate users; since I already have that functionality in my Django project, I want to reuse it.
My nginx configuration:
events {}
daemon off;

http {
    access_log /dev/stdout;
    error_log /var/log/nginx/error.log;

    upstream django {
        server unix:///tmp/nginx/diplab.sock;
    }

    server {
        listen 8080;

        location = /accounts/check-authenticated {
            internal;
            uwsgi_pass django;
            proxy_pass_request_body off;
            proxy_set_header Content-Length "";
        }

        location /static {
            alias /vol/web/static;
            include /etc/nginx/mime.types;
        }

        location /media {
            alias /vol/web/media;
            include /etc/nginx/mime.types;
            auth_request /accounts/check-authenticated;
            auth_request_set $auth_status $upstream_status;
        }

        location / {
            uwsgi_pass django;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            include /etc/nginx/uwsgi_params;
        }
    }
}
My Django setup for the view whose purpose is to report whether a user is authenticated (200) or not (401):
# urls.py
app_name = 'accounts'
urlpatterns = [
    path('check-authenticated/', views.is_authenticated_view, name="check-authenticated"),
    path('login/', views.UserLoginView.as_view(), name='login'),
    path('logout/', auth_views.LogoutView.as_view(), name='logout'),
]

# views.py
def is_authenticated_view(request):
    if request.user.is_authenticated:
        return HttpResponse(status=200)
    return HttpResponse(status=401)
The error that shows up is fairly deep inside Django's core:
Traceback (most recent call last):
  File "/root/miniconda3/envs/myproject/lib/python3.10/site-packages/django/core/handlers/wsgi.py", line 130, in __call__
    request = self.request_class(environ)
  File "/root/miniconda3/envs/myproject/lib/python3.10/site-packages/django/core/handlers/wsgi.py", line 78, in __init__
    self.method = environ["REQUEST_METHOD"].upper()
KeyError: 'REQUEST_METHOD'
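The traceback makes sense once you know what the subrequest sends: nginx's `proxy_set_header` directives add HTTP headers, but a uwsgi backend expects uwsgi *params*, which become the WSGI environ. Without `include uwsgi_params`, the environ arrives without `REQUEST_METHOD`, and any WSGI code fails exactly like this (a minimal illustration, not Django's actual code):

```python
# Minimal illustration of why the WSGI environ must carry REQUEST_METHOD.
def read_method(environ):
    # Django's WSGIRequest does essentially this on every request.
    return environ["REQUEST_METHOD"].upper()

print(read_method({"REQUEST_METHOD": "get"}))  # GET

try:
    read_method({})  # an environ built from an empty uwsgi param set
except KeyError as exc:
    print("KeyError:", exc)
```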
The error.log of nginx shows the following:
2022/12/06 17:42:27 [error] 194#194: *1 upstream prematurely closed connection while reading response header from upstream, client: 172.19.0.1, server: , request: "GET /media/myname/compound_structures/1cf10238af0.png HTTP/1.1", subrequest: "/accounts/check-authenticated", upstream: "uwsgi://unix:///tmp/nginx/diplab.sock:", host: "127.0.0.1:8080", referrer: "http://127.0.0.1:8080/toolbox/"
2022/12/06 17:42:27 [error] 194#194: *1 auth request unexpected status: 502 while sending to client, client: 172.19.0.1, server: , request: "GET /media/myname/compound_structures/1cf10238af0.png HTTP/1.1", host: "127.0.0.1:8080", referrer: "http://127.0.0.1:8080/toolbox/"
I tried to add REQUEST_METHOD plus some other variables like this, but without success; I still get the same error.
location = /accounts/check-authenticated {
    internal;
    uwsgi_pass django;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header REQUEST_METHOD GET;
    proxy_set_header X-Original-URI $request_uri;
    proxy_set_header X-Original-Remote-Addr $remote_addr;
    proxy_set_header X-Original-Host $host;
}
I am by far no expert when it comes to nginx and appreciate any help! This is the first approach I found; I'd also be happy if you recommended a different approach, in case you cannot help me with this one.
Edit 1
As Elgin Cahangirov suggested, I now include uwsgi_params in the location block like this:
location = /accounts/check-authenticated {
    internal;
    uwsgi_pass django;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
    include /etc/nginx/uwsgi_params;
}
This definitely helped, as one of the errors is fixed, but sadly the images still do not show up.
uwsgi:
172.19.0.1 - - [07/Dec/2022:08:33:33 +0000] "GET /media/myname/compound_structures/1cf10238af0.png HTTP/1.1" 500 186 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Firefox/91.0"
[pid: 219|app: 0|req: 19/19] 172.19.0.1 () {52 vars in 936 bytes} [Wed Dec 7 08:33:33 2022] GET /media/myname/compound_structures/1cf10238af0.png => generated 0 bytes in 1 msecs (HTTP/1.1 301) 6 headers in 239 bytes (1 switches on core 0)
nginx error.log
2022/12/07 08:33:33 [error] 195#195: *30 auth request unexpected status: 301 while sending to client, client: 172.19.0.1, server: , request: "GET /media/myname/compound_structures/1cf10238af0.png HTTP/1.1", host: "127.0.0.1:8080"
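The 301 is a strong hint that Django's `APPEND_SLASH` redirect is firing: the subrequest URI is `/accounts/check-authenticated` (no trailing slash), while the URLconf registers `check-authenticated/`. Since `auth_request` accepts only 2xx (allow) or 401/403 (deny), the redirect surfaces as "unexpected status". A sketch of one possible fix, giving the subrequest the same trailing slash the route uses (assuming your `uwsgi_params` derives `PATH_INFO` from the request URI):

```nginx
# Sketch: match the Django route exactly so APPEND_SLASH has nothing to redirect.
location = /accounts/check-authenticated/ {
    internal;
    include /etc/nginx/uwsgi_params;
    uwsgi_pass django;
    uwsgi_pass_request_body off;   # uwsgi equivalent of proxy_pass_request_body
}
```

The media block would then reference `auth_request /accounts/check-authenticated/;`. Note also that `proxy_set_header` and `proxy_pass_request_body` only affect `proxy_pass`; with `uwsgi_pass`, the counterparts are `uwsgi_param` and `uwsgi_pass_request_body`.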
I've set up the one-click Django install on DigitalOcean and added a domain to it. I'm also trying to add a subdomain before the site goes live. I've edited the nginx conf file as below:
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    root /usr/share/nginx/html;
    index index.html index.htm;

    client_max_body_size 4G;
    server_name beta.kazi-connect.com;

    keepalive_timeout 5;

    # Your Django project's media files - amend as required
    location /media {
        alias /home/django/django_project/django_project/media;
    }

    # Your Django project's static files - amend as required
    location /static {
        alias /home/django/django_project/django_project/static;
    }

    # Proxy the static assets for the Django admin panel
    location /static/admin {
        alias /usr/lib/python2.7/dist-packages/django/contrib/admin/static/admin/;
    }

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
        proxy_buffering off;
        proxy_pass http://app_server;
    }
}

upstream app_server {
    server unix:/home/django/gunicorn.socket fail_timeout=0;
}
I then restarted both nginx and gunicorn; however, when I visit the subdomain I get a 502 Bad Gateway error.
The nginx log states there's an issue with gunicorn:
2017/01/24 16:24:19 [error] 6258#6258: *2 upstream prematurely closed connection while reading response header from upstream, client: 105.230.203.101, server: beta.kazi-connect.com, request: "GET / HTTP/1.1", upstream: "http://unix:/home/django/gunicorn.socket:/", host: "beta.kazi-connect.com"
2017/01/24 16:24:20 [error] 6258#6258: *2 upstream prematurely closed connection while reading response header from upstream, client: 105.230.203.101, server: beta.kazi-connect.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://unix:/home/django/gunicorn.socket:/favicon.ico", host: "beta.kazi-connect.com", referrer: "http://beta.kazi-connect.com/"
2017/01/24 16:24:22 [error] 6258#6258: *2 upstream prematurely closed connection while reading response header from upstream, client: 105.230.203.101, server: beta.kazi-connect.com, request: "GET / HTTP/1.1", upstream: "http://unix:/home/django/gunicorn.socket:/", host: "beta.kazi-connect.com"
2017/01/24 16:24:23 [error] 6258#6258: *2 upstream prematurely closed connection while reading response header from upstream, client: 105.230.203.101, server: beta.kazi-connect.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://unix:/home/django/gunicorn.socket:/favicon.ico", host: "beta.kazi-connect.com", referrer: "http://beta.kazi-connect.com/"
2017/01/24 16:25:00 [error] 6258#6258: *23 upstream prematurely closed connection while reading response header from upstream, client: 105.230.203.101, server: beta.kazi-connect.com, request: "GET / HTTP/1.1", upstream: "http://unix:/home/django/gunicorn.socket:/", host: "beta.kazi-connect.com"
2017/01/24 16:25:01 [error] 6258#6258: *23 upstream prematurely closed connection while reading response header from upstream, client: 105.230.203.101, server: beta.kazi-connect.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://unix:/home/django/gunicorn.socket:/favicon.ico", host: "beta.kazi-connect.com", referrer: "http://beta.kazi-connect.com/"
Samuel's answer was right: the problem is that the domain names are not included in ALLOWED_HOSTS. In django_project/settings.py, look for the code below.
# Discover our IP address
ALLOWED_HOSTS = ip_addresses()
Add your domain name to ALLOWED_HOSTS, e.g.
ALLOWED_HOSTS.extend(["xyz.com"])
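Since the question involves a subdomain, a leading dot is handy here: Django treats `".example.com"` as matching the domain and every subdomain. A short sketch (the IP literal stands in for whatever `ip_addresses()` returns):

```python
# Sketch of extending the auto-discovered hosts with the real domains.
ALLOWED_HOSTS = ["104.236.174.46"]            # stand-in for ip_addresses()
# A leading dot matches kazi-connect.com AND beta.kazi-connect.com:
ALLOWED_HOSTS.extend([".kazi-connect.com"])

print(ALLOWED_HOSTS)  # ['104.236.174.46', '.kazi-connect.com']
```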
I keep getting this error in the nginx.error.log:
2016/06/06 20:14:02 [error] 907#0: *1 connect() to unix:///home/user/apps/appname/shared/tmp/sockets/appname-puma.sock failed (111: Connection refused) while connecting to upstream, client: 50.100.162.19, server: , request: "GET / HTTP/1.1", upstream: "http://unix:///home/user/apps/appname/shared/tmp/sockets/appname-puma.sock:/", host: "appname.com"
(here it is with manually added newlines for your convenience)
2016/06/06 20:14:02 [error] 907#0: *1 connect() to
unix:///home/user/apps/appname/shared/tmp/sockets/appname-puma.sock failed
(111: Connection refused) while connecting to upstream, client:
50.100.162.19, server: , request: "GET / HTTP/1.1", upstream:
"http://unix:///home/user/apps/appname/shared/tmp/sockets/appname-
puma.sock:/", host: "appname.com"
This is my nginx.conf:
upstream puma {
    server unix:///home/user/apps/appname/shared/tmp/sockets/appname-puma.sock;
}

server {
    listen 80 default_server deferred;
    # server_name example.com;

    root /home/user/apps/appname/current/public;
    access_log /home/user/apps/appname/current/log/nginx.access.log;
    error_log /home/user/apps/appname/current/log/nginx.error.log info;

    location ^~ /assets/ {
        gzip_static on;
        expires max;
        add_header Cache-Control public;
    }

    try_files $uri/index.html $uri @puma;

    location @puma {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://puma;
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 10M;
    keepalive_timeout 10;
}
What am I doing wrong?
I followed Digital Ocean's tutorial to set up Capistrano, Nginx and Puma.
So the solution was to restart puma.
cap production deploy:restart
Every time I reboot the server, I need to restart puma as well.
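To avoid the manual restart after every reboot, puma can be supervised by systemd so it starts at boot and is restarted if it dies. A sketch only; the user name, paths, and bundler location are assumptions to adapt to your deploy layout:

```ini
# /etc/systemd/system/puma.service (sketch)
[Unit]
Description=Puma app server for appname
After=network.target

[Service]
Type=simple
User=user
WorkingDirectory=/home/user/apps/appname/current
ExecStart=/usr/local/bin/bundle exec puma -C config/puma.rb
Restart=always

[Install]
WantedBy=multi-user.target
```

Then `systemctl enable --now puma` replaces the post-reboot `cap production deploy:restart`.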
My recommendation is to check the
~/apps/appname/shared/log/puma.stderr.log
file; you may find the answer there.
Looking at log/puma_error.log, I saw the error (a LoadError while trying to load bundler); running gem update --system fixed it.
I'm a bit new to this, but I am trying to deploy a website I built using Django to DigitalOcean using nginx/gunicorn.
My nginx file looks like so:
server {
    listen 80;
    server_name xxx.xxx.xxx.xx;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /static/ {
        alias ~/dev/WebPortfolio/static/;
    }
}
And my settings.py file looks like so:
STATIC_ROOT = '~/dev/WebPortfolio/static/'
STATIC_URL = '/static/'
STATICFILES_DIRS = ()
Every time I run python manage.py collectstatic, the output looks like this:
You have requested to collect static files at the destination
location as specified in your settings:
/root/dev/WebPortfolio/~/dev/WebPortfolio/static
Looking at the nginx error log I see (cut out the repetitive stuff):
2015/10/08 15:12:42 [error] 23072#0: *19 open() "/usr/share/nginx/~/dev/WebPortfolio/static/http:/cdnjs.cloudflare.com/ajax/libs/jquery-easing/1.3/jquery.e asing.min.js" failed (2: No such file or directory), client: xxxxxxxxxxxxxx, server: xxxxxxxxxxxxxx, request: "GET /static/http%3A//cdnjs.cloudflare.com/aj ax/libs/jquery-easing/1.3/jquery.easing.min.js HTTP/1.1", host: "XXXXXXXX.com", referrer: "http://XXXXXXXX.com/"
2015/10/08 15:14:28 [error] 23072#0: *24 connect() failed (111: Connection refused) while connecting to upstream, client: xxxxxxxx, server: 104.236.174.46, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8000/", host: "xxxxxxxx.com"
1) I'm not entirely sure why my destination for static files is '/root/dev/WebPortfolio/~/dev/WebPortfolio/static'
Because you've used '~' in a path. That's a shell thing, not a general path thing, and unless you tell Python specifically, it won't know what to do with it. Use a full absolute path in both Django settings and nginx.
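To see why, note that path joining treats `~` as an ordinary character; only the shell (or an explicit `os.path.expanduser` call) expands it. A quick sketch:

```python
import os

# '~' is not special when joining paths — it just gets glued on,
# which is exactly how '/root/dev/WebPortfolio/~/dev/...' was produced:
print(os.path.join("/root/dev/WebPortfolio", "~/dev/WebPortfolio/static"))
# -> /root/dev/WebPortfolio/~/dev/WebPortfolio/static

# expanduser is what actually resolves '~' against the home directory:
print(os.path.expanduser("~/dev/WebPortfolio/static"))
```

In settings.py, a common pattern is to build STATIC_ROOT from the project directory instead, e.g. `os.path.join(BASE_DIR, 'static')`, and then use that same absolute path in the nginx `alias`.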
I am new to Nginx and am trying to use nginx with Thin.
I have tried many sites and blogs, but they are not helping. I am currently following this guide:
http://articles.slicehost.com/2008/5/27/ubuntu-hardy-nginx-rails-and-thin
However, I am getting a 502 Bad Gateway error.
Below is the configuration I have implemented.
nginx conf file:
user www-data;
worker_processes 4;
pid /var/run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;
    gzip_disable "msie6";
    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # nginx-naxsi config
    ##
    # Uncomment it if you installed nginx-naxsi
    ##
    #include /etc/nginx/naxsi_core.rules;

    ##
    # nginx-passenger config
    ##
    # Uncomment it if you installed nginx-passenger
    ##
    #passenger_root /usr;
    #passenger_ruby /usr/bin/ruby;

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
#mail {
# # See sample authentication script at:
# # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
# # auth_http localhost/auth.php;
# # pop3_capabilities "TOP" "USER";
# # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
# server {
# listen localhost:110;
# protocol pop3;
# proxy on;
# }
#
# server {
# listen localhost:143;
# protocol imap;
# proxy on;
# }
#}
nginx default file (/etc/nginx/sites-available):
# You may add here your
# server {
# ...
# }
# statements for each of your virtual hosts to this file
##
# You should look at the following URL's in order to grasp a solid understanding
# of Nginx configuration files in order to fully unleash the power of Nginx.
# http://wiki.nginx.org/Pitfalls
# http://wiki.nginx.org/QuickStart
# http://wiki.nginx.org/Configuration
#
# Generally, you will want to move this file somewhere, and start with a clean
# file but keep this around for reference. Or just disable in sites-enabled.
#
# Please see /usr/share/doc/nginx-doc/examples/ for more detailed examples.
##
server {
    listen 80; ## listen for ipv4; this line is default and implied
    #listen [::]:80 default ipv6only=on; ## listen for ipv6

    server_name 192.168.1.238:8080;
    root /home/woi/Development/public;
    index index.html index.htm;

    # Make site accessible from http://localhost/
    # server_name localhost;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to index.html
        try_files $uri $uri/ /index.html;
        # Uncomment to enable naxsi on this location
        # include /etc/nginx/naxsi.rules
    }

    location /doc/ {
        alias /usr/share/doc/;
        autoindex on;
        allow 127.0.0.1;
        deny all;
    }

    # Only for nginx-naxsi : process denied requests
    #location /RequestDenied {
    #    # For example, return an error code
    #    return 418;
    #}

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    #
    #error_page 500 502 503 504 /50x.html;
    #location = /50x.html {
    #    root /usr/share/nginx/www;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    #    # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
    #
    #    # With php5-cgi alone:
    #    fastcgi_pass 127.0.0.1:9000;
    #    # With php5-fpm:
    #    fastcgi_pass unix:/var/run/php5-fpm.sock;
    #    fastcgi_index index.php;
    #    include fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}
# another virtual host using mix of IP-, name-, and port-based configuration
#
#server {
# listen 8000;
# listen somename:8080;
# server_name somename alias another.alias;
# root html;
# index index.html index.htm;
#
# location / {
# try_files $uri $uri/ /index.html;
# }
#}
# HTTPS server
#
#server {
# listen 443;
# server_name localhost;
#
# root html;
# index index.html index.htm;
#
# ssl on;
# ssl_certificate cert.pem;
# ssl_certificate_key cert.key;
#
# ssl_session_timeout 5m;
#
# ssl_protocols SSLv3 TLSv1;
# ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP;
# ssl_prefer_server_ciphers on;
#
# location / {
# try_files $uri $uri/ /index.html;
# }
#}
nginx domain.com file (/etc/nginx/sites-available):
upstream domain1 {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
}

server {
    listen 80;
    server_name 192.168.1.238;

    # access_log /home/demo/public_html/railsapp/log/access.log;
    # error_log /home/demo/public_html/railsapp/log/error.log;

    root /home/woi/Development/public/;
    index index.html;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;

        if (-f $request_filename/index.html) {
            rewrite (.*) $1/index.html break;
        }

        if (-f $request_filename.html) {
            rewrite (.*) $1.html break;
        }

        if (!-f $request_filename) {
            proxy_pass http://domain1;
            break;
        }
    }
}
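As an aside, the three `if (-f ...)` blocks reimplement what `try_files` does natively, and `if` inside `location` is a well-known nginx pitfall. An equivalent sketch (the `@thin` name is just a label chosen here):

```nginx
location / {
    # Serve $uri, then $uri.html, then a directory index; otherwise hand off.
    try_files $uri $uri.html $uri/index.html @thin;
}

location @thin {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://domain1;
}
```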
After starting nginx with the service nginx start command, I am still not able to reach the Thin server; I get a 502 Bad Gateway error when I hit my IP address, 192.168.1.238.
Update: below is a snippet of my error log:
"GET / HTTP/1.1", upstream: "http://domain1/", host: "192.168.1.238"
2014/01/30 05:14:18 [error] 2029#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.1.142, server: 192.168.1.238, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:3002/", host: "192.168.1.238"
2014/01/30 05:14:18 [error] 2029#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.1.142, server: 192.168.1.238, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:3000/", host: "192.168.1.238"
2014/01/30 05:14:18 [error] 2029#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.1.142, server: 192.168.1.238, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:3001/", host: "192.168.1.238"
2014/01/30 05:14:18 [error] 2029#0: *1 no live upstreams while connecting to upstream, client: 192.168.1.142, server: 192.168.1.238, request: "GET / HTTP/1.1", upstream: "http://domain1/", host: "192.168.1.238"
2014/01/30 05:14:18 [error] 2029#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.1.142, server: 192.168.1.238, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:3002/", host: "192.168.1.238"
2014/01/30 05:14:18 [error] 2029#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.1.142, server: 192.168.1.238, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:3000/", host: "192.168.1.238"
2014/01/30 05:14:18 [error] 2029#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.1.142, server: 192.168.1.238, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:3001/", host: "192.168.1.238"
2014/01/30 05:14:18 [error] 2029#0: *1 no live upstreams while connecting to upstream, client: 192.168.1.142, server: 192.168.1.238, request: "GET / HTTP/1.1", upstream: "http://domain1/", host: "192.168.1.238"
2014/01/30 05:16:24 [error] 2171#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.1.142, server: 192.168.1.238, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:3002/", host: "192.168.1.238"
2014/01/30 05:16:24 [error] 2171#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.1.142, server: 192.168.1.238, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:3000/", host: "192.168.1.238"
2014/01/30 05:16:24 [error] 2171#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.1.142, server: 192.168.1.238, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:3001/", host: "192.168.1.238"
2014/01/30 05:20:04 [error] 2354#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.1.142, server: 192.168.1.238, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:3002/", host: "192.168.1.238"
2014/01/30 05:20:04 [error] 2354#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.1.142, server: 192.168.1.238, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:3000/", host: "192.168.1.238"
2014/01/30 05:20:04 [error] 2354#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.1.142, server: 192.168.1.238, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:3001/", host: "192.168.1.238"
2014/01/30 05:20:09 [error] 2354#0: *1 no live upstreams while connecting to upstream, client: 192.168.1.142, server: 192.168.1.238, request: "GET / HTTP/1.1", upstream: "http://domain1/", host: "192.168.1.238"
The above solution is not helping me. Can someone please help? I have been stuck on this for a long time now.
Thanks
This means that your Thin servers are not running. Try this:
curl -v http://localhost:3000
It is probably not working. Look at your Thin logs (stdout/stderr.log) to identify further problems.
Thin is a separate process and should be restarted individually. It is common to have it set up within your Rails app in the same bundle; in that case you need to run it like:
bundle exec thin start
See: http://code.macournoyer.com/thin/usage/
If you installed your gems as root, including the thin gem, then it should have installed an init script in /etc/init.d. In that case you can restart Thin with the service command.
When you deploy new code, you only need to restart your thin servers and not nginx.
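The upstream block earlier expects three Thin instances on ports 3000-3002, so it helps to keep that in one config file rather than three ad-hoc commands. A sketch (the chdir, pid, and log paths are assumptions matching the question's layout):

```yaml
# config/thin.yml (sketch) — start with: bundle exec thin start -C config/thin.yml
chdir: /home/woi/Development
environment: production
port: 3000
servers: 3          # binds 3000, 3001, 3002 to match the nginx upstream
pid: tmp/pids/thin.pid
log: log/thin.log
daemonize: true
```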