Hi there, I'm trying to put a Django app into production using nginx + gunicorn + supervisor.
Following this guide I was able to reproduce all the steps successfully, but for some reason I can't make it work. I believe the problem is with the nginx part of the project, since I'm not able to serve even a static file for testing. It's my first time using all these tools.
Config files are as follows:
nginx.conf:
worker_processes 1;
user nobody nogroup;
# 'user nobody nobody;' for systems with 'nobody' as a group instead
error_log /home/seba94/log/nginx/nginx.error.log warn;
#pid /run/nginx.pid;
events {
worker_connections 1024; # increase if you have lots of clients
accept_mutex off; # set to 'on' if nginx worker_processes > 1
# 'use epoll;' to enable for Linux 2.6+
# 'use kqueue;' to enable for FreeBSD, OSX
}
http {
include /etc/nginx/mime.types;
# fallback in case we can't determine a type
default_type application/octet-stream;
access_log /home/seba94/log/nginx/nginx.access.log combined;
sendfile on;
upstream app_server {
# fail_timeout=0 means we always retry an upstream even if it failed
# to return a good HTTP response
# for UNIX domain socket setups
server unix:/tmp/gunicorn.sock fail_timeout=10s;
# for a TCP configuration
#server 127.0.0.1:8000 fail_timeout=0;
}
server {
# if no Host match, close the connection to prevent host spoofing
listen 80 default_server;
return 444;
}
server {
# use 'listen 80 deferred;' for Linux
# use 'listen 80 accept_filter=httpready;' for FreeBSD
listen 80;
client_max_body_size 4G;
# set the correct host(s) for your site
server_name reg.rocstar.tv;
keepalive_timeout 5;
# path for static files
root /home/seba94/static;
location /register/ {
# checks for static file, if not found proxy to app
try_files $uri @proxy_to_app;
}
location /media/ {
#path for Django media files
alias /home/seba94/register-page/register_page/media/;
}
location /static/ {
#path for Django static files
alias /home/seba94/register-page/register_page/static/;
}
location /todd-logo.png {
alias /home/seba94/static/todd-logo.png;
}
location @proxy_to_app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
# we don't want nginx trying to do something clever with
# redirects, we set the Host: header above already.
proxy_redirect off;
proxy_pass http://app_server;
}
error_page 500 502 503 504 /500.html;
location = /500.html {
root /home/seba94/static;
}
}
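Before moving on to the other config files: a quick sanity check that seems useful here (a minimal sketch, using the Host header and file path from the config above) is to validate the config and request the test image directly, since the catch-all server block answers requests without a matching Host by closing the connection (444):
# validate the nginx configuration before (re)loading it
sudo nginx -t
# request the test image; the Host header must match server_name,
# otherwise the default_server block closes the connection with 444
curl -I -H "Host: reg.rocstar.tv" http://127.0.0.1/todd-logo.png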
supervisord.conf:
[supervisord]
logfile=/home/seba94/log/supervisord/supervisord.log
[inet_http_server]
port=127.0.0.1:9001
[rpcinterface:supervisor]
supervisor.rpcinterface_factory=supervisor.rpcinterface:make_main_rpcinterface
[program:register-page-django]
command=/home/seba94/.local/share/virtualenvs/register-page-jYLn8mRO/bin/gunicorn register_page.wsgi -c /home/seba94/conf/gunicorn.conf.py
directory=/home/seba94/register-page/register_page
user=seba94
autostart=true
autorestart=true
stdout_logfile=/home/seba94/log/supervisord/register_page.log
stderr_logfile=/home/seba94/log/supervisord/register_page.err.log
[supervisorctl]
gunicorn.conf.py:
import multiprocessing
#Server socket config
bind = "unix:/tmp/gunicorn.sock"
backlog = 2048
#Workers config. Eventlet is an asynchronous worker
workers = multiprocessing.cpu_count() * 2
worker_class = "eventlet"
worker_connections = 1000
#accesslog = "/home/seba94/log/gunicorn/gunicorn.log"
#errorlog = "/home/seba94/log/gunicorn/gunicorn.error.log"
proc_name = "register-page-gunicorn"
#Server mechanics
#daemon = True
I'm able to run all processes successfully, with no errors, from the command line using the following commands:
sudo service nginx start
sudo supervisord -c /home/seba94/conf/supervisord.conf
sudo supervisorctl start register-page-django
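To confirm gunicorn is actually listening on the socket nginx proxies to, a rough check (assuming the /tmp/gunicorn.sock path above and a curl new enough to support --unix-socket) is:
# confirm the unix socket exists once supervisor has started gunicorn
test -S /tmp/gunicorn.sock && echo "gunicorn socket is up"
# talk to gunicorn directly over the socket, bypassing nginx entirely
curl --unix-socket /tmp/gunicorn.sock http://localhost/register/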
Nginx status is the following:
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2019-10-23 20:38:58 UTC; 31min ago
Docs: man:nginx(8)
Process: 5552 ExecStop=/sbin/start-stop-daemon --quiet --stop --retry QUIT/5 --pidfile /run/nginx.pid (code=exited, status=0/SUCCESS)
Process: 5599 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
Process: 5594 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
Main PID: 5601 (nginx)
Tasks: 2 (limit: 1108)
CGroup: /system.slice/nginx.service
├─5601 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
└─5760 nginx: worker process
oct 23 20:38:58 register-page-server systemd[1]: Starting A high performance web server and a reverse proxy server...
oct 23 20:38:58 register-page-server systemd[1]: Started A high performance web server and a reverse proxy server.
Honestly, I can't find any errors, not even in the log files, so I don't know why I can't even see my static todd-logo.png file. Nor can I see the Django app running. Any help is more than welcome.
Edit:
It seems that all the config files and commands in this question are fine; the problem turned out to be a firewall configuration left over from a previous project. This setup could therefore serve as a working example of how to use these tools.
Please share the output of curl -v http://domain-name in the question as well.
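For example, something along these lines (assuming the server_name above, and that ufw is the firewall in use; on a cloud host the security group may be the relevant layer instead):
# verbose request against the public hostname
curl -v http://reg.rocstar.tv/
# check whether port 80 is actually allowed through the local firewall
sudo ufw status verbose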
I have had a lot of trouble setting up Nginx for Django on Debian.
I tried probably every nginx Django conf file I could find on the internet, but none of them worked; I assume I can't see the forest for the trees...
So I am running Django 2.0.4 and daphne 2.1.1.
For Daphne I am using this command:
daphne -b 0.0.0.0 -e ssl:8080:privateKey=privkey.pem:certKey=fullchain.pem share_game.asgi:application -v2
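Before involving Nginx, Daphne can be checked directly (a rough check; with -e ssl:8080 the endpoint speaks TLS, so curl needs https:// and -k because the certificate is not issued for 127.0.0.1):
# hit daphne's TLS endpoint directly, skipping certificate verification
curl -vk https://127.0.0.1:8080/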
And this is my Nginx conf file; I have added a redirect to Google so I can actually see that it is running:
upstream tsg-backend {
server 127.0.0.1:8080;
}
server {
listen 159.69.13.156:80;
server_name thesharegame.com www.thesharegame.com;
if ($host ~* ^thesharegame\.com$) {
rewrite ^(.*)$ https://www.thesharegame.com$1 permanent;
}
}
server{
listen 159.69.13.156:443 ssl http2;
server_name thesharegame.com www.thesharegame.com;
access_log /var/log/nginx/tsg.log;
error_log /var/log/nginx/tsg.log;
return 301 https://google.com$request_uri;
ssl on;
ssl_certificate /home/tsg/fullchain.pem; # managed by Certbot
ssl_certificate_key /home/tsg/privkey.pem; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
client_max_body_size 20M;
if ($host ~* ^thesharegame\.com$) {
rewrite ^(.*)$ https://www.thesharegame.com$1 permanent;
}
location / {
## If you use HTTPS make sure you disable gzip compression
## to be safe against BREACH attack.
proxy_read_timeout 3600;
proxy_connect_timeout 300;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header X-Forwarded-Proto https;
proxy_pass http://tsg-backend;
}
}
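Since that server block currently just redirects to Google, one way to see whether Nginx is handling the request at all is to follow the redirect chain (a sketch, using the domain from server_name):
# follow redirects and print only the status lines and Location headers
curl -sIL http://thesharegame.com/ | grep -iE "^(HTTP|location)"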
Running netstat -nlp | grep 80
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 14925/python3
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 14603/nginx: master
tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN 14925/python3
tcp6 0 0 :::80 :::* LISTEN 14603/nginx: master
Also, /etc/init.d/nginx status says Nginx is running.
nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; disabled; vendor preset: enabled)
Active: active (running) since Mon 2018-06-04 23:10:05 CEST; 12min ago
Docs: man:nginx(8)
Process: 13551 ExecStop=/sbin/start-stop-daemon --quiet --stop --retry QUIT/5 --pidfile /run/nginx.pid (code=exited, status=0/SUCCESS)
Process: 14601 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
Process: 14599 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
Main PID: 14603 (nginx)
Tasks: 9 (limit: 4915)
CGroup: /system.slice/nginx.service
├─14603 nginx: master process /usr/sbin/nginx -g daemon on; master…n;
├─14604 nginx: worker process
├─14605 nginx: worker process
├─14606 nginx: worker process
├─14607 nginx: worker process
├─14610 nginx: worker process
├─14613 nginx: worker process
├─14614 nginx: worker process
└─14616 nginx: worker process
Jun 04 23:10:05 debian-share-game systemd[1]: Starting A high performance we…...
Jun 04 23:10:05 debian-share-game systemd[1]: Started A high performance web…er.
Hint: Some lines were ellipsized, use -l to show in full.
Sites-available and sites-enabled are both linked.
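To double-check that the symlink is in place and that this is the config Nginx actually loads, roughly (nginx -T needs a reasonably recent nginx, 1.9.2+):
# confirm the site is symlinked into sites-enabled
ls -l /etc/nginx/sites-enabled/
# dump the configuration nginx actually parses and test it for errors
sudo nginx -T | grep -n thesharegame
sudo nginx -t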
What am I missing? Does anyone have an idea, or need more information?
I'm building a web server with Django.
Now I want to publish it with uWSGI + Nginx, so I read the documentation (http://uwsgi-docs.readthedocs.io/en/latest/tutorials/Django_and_nginx.html).
While following that doc, I ran into some errors.
When I connect to mydomain.com:8000, it throws a 502 Bad Gateway error.
(When I actually ran this, I replaced mydomain.com with the real domain that I own.)
After the error, /var/log/nginx/error.log shows the following:
2018/02/20 14:56:15 [error] 7548#7548: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.30.1.254, server: mydomain.com, request: "GET / HTTP/1.1", upstream: "uwsgi://127.0.0.1:8001", host: "mydomain.com:8000"
^C
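(A "Connection refused" on the upstream usually means nothing is listening on 127.0.0.1:8001; a quick check, assuming netstat or ss is available, is:)
# check whether any process is bound to the port nginx proxies to
sudo netstat -nlp | grep 8001
# or, with iproute2:
sudo ss -ltnp | grep 8001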
These are my config files.
[project_rest.conf]
upstream django {
# server unix:///path/to/your/mysite/mysite.sock; # for a file socket
server 127.0.0.1:8001; # for a web port socket (we'll use this first)
}
# configuration of the server
server {
# the port your site will be served on
listen 8000;
# the domain name it will serve for
server_name .mydomain.com; # substitute your machine's IP address or FQDN
charset utf-8;
# max upload size
client_max_body_size 75M; # adjust to taste
# Django media
location /media {
alias /home/app/project_rest/media;
}
location /static {
alias /home/app/project_rest/static;
}
# Finally, send all non-media requests to the Django server.
location / {
uwsgi_pass django;
include /home/app/project_rest/uwsgi_params; # the uwsgi_params file you installed
}
}
(I made that conf file in my Django project's folder and symlinked it into /etc/nginx/sites-enabled.)
How can I connect to my server?
I can't find where the error occurs.
Thanks.
Your Nginx configuration is correct, so let's take a look at your uwsgi configuration.
First of all, I assume you have installed uwsgi system-wide via apt-get, yum, etc.
The next thing you have to install (system-wide) is uwsgi-plugin-python3 (or uwsgi-plugin-python if you are planning to run Django with Python 2.7, which I don't recommend).
Then, you can create an ini file with the all the uwsgi configuration:
[uwsgi]
socket = 127.0.0.1:8001
uid = execuser
; Normally nginx, www-data
gid = nginx
chdir = /absolute/path/to/your/project
; Assuming your wsgi module is in chdir/yourmainapp/wsgi.py
module = yourmainapp.wsgi
; Path to your virtualenv. If you are not using virtualenv,
; you should.
home = /absolute/path/to/your/virtualenv
; Enables plugins: python
plugins = python
; Enable the master process
master = true
; Pass django settings module as environment variable
; (it is expected by Django).
; Assuming your settings is in chdir/yourmainapp/settings.py
env = DJANGO_SETTINGS_MODULE=yourmainapp.settings
Then, execute uwsgi:
:# /path/to/uwsgi --ini /path/to/your/config.ini --daemonize /path/to/your/logs
If you have installed uwsgi via apt-get or yum, you have to create the ini file in /etc/uwsgi/apps-enabled/yourproject.ini and simply execute uwsgi using:
:# service uwsgi start|restart
Finally, there are a lot of options to configure uwsgi: number of processes, threads, logs, and a lot of very interesting (and badly documented) stuff.
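For instance, a few of those options can also be passed straight on the command line (a sketch; the values are only examples, not recommendations):
# same ini file, plus explicit process/thread counts and a log destination
/path/to/uwsgi --ini /path/to/your/config.ini \
    --processes 4 --threads 2 \
    --logto /var/log/uwsgi/yourproject.log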
I hope it helps ;)
At /etc/nginx/default.d/xxxx:
upstream django {
server 127.0.0.1:9000; # for a web port socket (we'll use this first)
keepalive 32;
}
Then at /etc/nginx/nginx.conf:
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
include /etc/nginx/default.d/*;
# Settings for a TLS enabled server.
#
server {
listen 80;
listen [::]:80 default_server;
server_name ip;
root /path_prj/;
server_tokens off;
error_log /var/log/bill_error.log;
access_log /var/log/bill_access.log;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
location / {
uwsgi_read_timeout 100;
uwsgi_pass django;
include /var/www/html/uwsgi_params; # the uwsgi_params file you installed
}
location /media/ {
internal;
root /path_proj/;
}
location /static/ {
root /path_proj/;
}
}
Then try this command:
$ sudo uwsgi -s :9000 -M --env DJANGO_SETTINGS_MODULE=sharing.settings --chdir /path_proj/ -w "django.core.wsgi:get_wsgi_application()" --chmod-socket=666 --enable-threads --thunder-lock --daemonize /tmp/uwsgi.log --workers 10 -b 32768
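After starting it, it is worth confirming that uWSGI is really bound to port 9000 before testing through Nginx (note that -s creates a uwsgi-protocol socket, not HTTP, so curl will not talk to it directly):
# verify the uwsgi socket is listening where the nginx upstream expects it
sudo netstat -nlp | grep 9000
# watch the daemonized log for startup errors
tail -f /tmp/uwsgi.log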
I have a Django, Nginx, Gunicorn, and MySQL setup on AWS.
Running a postback from django which calls a stored procedure that takes longer than 30 seconds to complete causes a return of "502 Bad Gateway" nginx/1.4.6 (Ubuntu).
It sure looks like a timeout issue and that this post should resolve it.
But alas, it doesn't seem to be working.
Here is my gunicorn.conf file:
description "Gunicorn application server handling formManagement django app"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
setuid ubuntu
setgid www-data
chdir /home/ubuntu/AARC-ServiceManager/ServerSide/formManagement
exec ve/bin/gunicorn --timeout 300 --workers 3 --bind unix:/home/ubuntu/AARC-ServiceManager/ServerSide/formManagement/formManagement.sock formManagement.wsgi:application
And my Nginx.conf:
user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
# set client body size (max http request size) #
client_max_body_size 50M;
#upping the timeouts to allow time for the DB to return from a long running sproc
proxy_connect_timeout 300s;
proxy_read_timeout 300s;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
Any thoughts?
UPDATE:
This is the error in the nginx error log:
[error] 14316#0: *13 upstream prematurely closed connection while reading response header from upstream ...
I found the resolution!
I was updating the wrong gunicorn.conf file.
I had saved the config file to my source control, and while on the server I was updating that copy.
However, the file I actually needed to change was at:
/etc/init/gunicorn.conf
... and I learned a lesson about having more than one config file on the server.
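A quick way to see which settings the running Gunicorn actually picked up (and therefore which config file is in effect) is to look at its command line, for example:
# the command line of the running master shows the flags it was started with
ps aux | grep [g]unicorn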
Thanks to all who offered help.
It's my first time setting up nginx and unicorn.
My capistrano deployment went through and everything succeeded.
Here is my unicorn.rb:
#app_dir = File.expand_path('../../', __FILE__)
#shared_dir = File.expand_path('../../../shared/', __FILE__)
preload_app true
worker_processes 4
timeout 30
working_directory "home/deploy/appname"
shared_dir = "home/deploy/appname/shared"
# Set up socket location
# by default unicorn listens on 8080
listen "#{shared_dir}/tmp/sockets/unicorn.sock", :backlog => 64
# Logging
stderr_path "#{shared_dir}/log/unicorn.stderr.log"
stdout_path "#{shared_dir}/log/unicorn.stdout.log"
# Set master PID location
pid "#{shared_dir}/tmp/pids/unicorn.pid"
#must set preload app true to use before/after fork
before_fork do |server, worker|
defined?(ActiveRecord::Base) and ActiveRecord::Base.connection.disconnect!
#before forking, this is supposed to kill the master process that belongs to the oldbin
#enables zero-downtime deploys
old_pid = "#{shared_dir}/tmp/pids/unicorn.pid.oldbin"
if File.exists?(old_pid) && server.pid != old_pid
begin
Process.kill("QUIT", File.read(old_pid).to_i)
rescue Errno::ENOENT, Errno::ESRCH
end
end
end
after_fork do |server, worker|
defined?(ActiveRecord::Base) and ActiveRecord::Base.establish_connection
end
# before_exec do |server|
# ENV['BUNDLE_GEMFILE'] = "#{app_dir}/Gemfile"
# end
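(For context, the oldbin/QUIT logic above is what drives Unicorn's zero-downtime restart: sending USR2 to the running master re-executes a new master, the old pid file is renamed to unicorn.pid.oldbin, and the before_fork hook then quits the old master. A rough sketch of triggering it by hand, assuming the pid path configured above:)
# re-exec a new master; the old one becomes unicorn.pid.oldbin and is
# sent QUIT from before_fork once the new workers start
kill -USR2 $(cat /path/to/shared/tmp/pids/unicorn.pid)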
My nginx conf at /etc/nginx/nginx.conf:
user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
gzip on;
gzip_disable "msie6";
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
My default file at /etc/nginx/sites-enabled/default:
upstream app_server {
#path to unicorn sock file, as defined previously
server unix:/home/deploy/appname/shared/tmp/sockets/unicorn.sock fail_timeout=0;
}
server {
listen 80;
root /home/deploy/appname;
try_files $uri/index.html $uri @app;
#click tracking
access_log /var/log/nginx/appname_access.log combined;
error_log /var/log/nginx/appname_error.log;
location @app {
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://app;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
When I do this:
deploy#localhost:~$ sudo nginx -s reload
nginx: [emerg] host not found in upstream "app" in /etc/nginx/sites-enabled/default:46
When I head into
/shared/tmp/sockets
I don't have a file in there. I don't think I should create it manually. I am using Capistrano 3. Am I supposed to generate this file?
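(The socket is created by Unicorn itself when it binds, so it only appears once the master is actually running; a quick check, assuming the paths above:)
# if no unicorn master/worker shows up here, the socket will never appear
ps aux | grep [u]nicorn
# once unicorn is up, the socket should exist where nginx expects it
ls -l /home/deploy/appname/shared/tmp/sockets/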
I am using
require 'capistrano3/unicorn' #in capfile
in deploy.rb
#symbolic files and directories
set :linked_files, %w{config/database.yml config/secrets.yml}
set :linked_dirs, %w{tmp/pids tmp/cache tmp/sockets log bin vendor/bundle public/system}
#just pointing to our unicorn.rb
set :unicorn_config_path, "config/unicorn.rb"
#capistrano tasks and processes
after "deploy", "deploy:cleanup"
namespace :deploy do
desc 'Restart application'
task :restart do
on roles(:app), in: :sequence, wait: 5 do
invoke 'unicorn:restart'
end
end
after :finishing, "deploy:cleanup"
end
I put my Capistrano config here because I noticed there is no log line for a unicorn restart in my cap production deploy log; I am not sure if this helps.
I made sure the working_directory matches the root in the default nginx site.
I made sure the listen socket in unicorn.rb matches the upstream app_server unix socket in the default site.
I made sure nginx.conf includes the default site config from sites-enabled.
Well, this is 6 months old, but I'm going to answer it anyway. The issue is the proxy_pass in location @app in sites-enabled/default. It's trying to pass to the upstream server http://app, but you don't have an upstream with that name; you named it app_server.
You need to rename:
proxy_pass http://app
to:
proxy_pass http://app_server
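After renaming it, something like this validates the change and reloads the running master:
# test the configuration, then reload nginx only if the test passes
sudo nginx -t && sudo nginx -s reload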
I'm using nginx with Django on Ubuntu 10.04. The problem is that when I restart nginx I get this error.
sudo /etc/init.d/nginx restart
Restarting nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
configuration file /etc/nginx/nginx.conf test is successful
[emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
[emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
[emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
[emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
Also, I have tried stop and then start but still get the error.
Here's the output from lsof:
sudo lsof -i tcp:80
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
nginx 27141 root 6u IPv4 245906 0t0 TCP *:www (LISTEN)
nginx 27142 nobody 6u IPv4 245906 0t0 TCP *:www (LISTEN)
If I kill the process with PID 27141 it works. However, I would like to get to the bottom
of why I can't just do a restart.
Here's the nginx.conf:
worker_processes 1;
user nobody nogroup;
pid /tmp/nginx.pid;
error_log /tmp/nginx.error.log;
events {
worker_connections 1024;
accept_mutex off;
}
http {
include mime.types;
default_type application/octet-stream;
access_log /tmp/nginx.access.log combined;
sendfile on;
upstream app_server {
# server unix:/tmp/gunicorn.sock fail_timeout=0;
# For a TCP configuration:
server 127.0.0.1:8000 fail_timeout=0;
}
server {
listen 80 default;
client_max_body_size 4G;
server_name _;
keepalive_timeout 5;
# path for static files
root /home/apps/venvs/app1/app1;
location / {
# checks for static file, if not found proxy to app
try_files $uri @proxy_to_app;
}
location @proxy_to_app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://app_server;
}
error_page 500 502 503 504 /500.html;
location = /500.html {
root /path/to/app/current/public;
}
}
}
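(For reference, instead of hunting down the PID by hand, the stray master can also be stopped through the pid file configured above, which may be worth comparing with the pid file the init script expects:)
# gracefully stop the master nginx recorded in the configured pid file
sudo kill -QUIT $(cat /tmp/nginx.pid)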
Any ideas?
Try:
$ sudo fuser -k 80/tcp ; sudo /etc/init.d/nginx restart
This worked for me
sudo fuser -k 80/tcp
And then
service nginx start
Source: https://rtcamp.com/tutorials/nginx/troubleshooting/emerg-bind-failed-98-address-already-in-use/
Daemontools starts nginx successfully, then nginx daemonizes itself, and then daemontools tries to start nginx again, unsuccessfully, logging an error to the log.
The solution to this problem is to disable daemon mode in the main section of the nginx.conf:
daemon off;
Site: http://wiki.nginx.org/CoreModule
Tired of nginx restart issues and "address in use" faults, I decided to make it work once and for all.
I added just one line at the end of the stop and restart actions in the /etc/init.d/nginx file:
nginx -s quit
so it now looks like this (ensure that the nginx binary is on the PATH, otherwise specify the full path):
stop)
echo -n "Stopping $DESC: "
start-stop-daemon --stop --quiet --pidfile /var/run/$NAME.pid \
--exec $DAEMON || true
echo "$NAME."
nginx -s quit
;;
restart|force-reload)
echo -n "Restarting $DESC: "
start-stop-daemon --stop --quiet --pidfile \
/var/run/$NAME.pid --exec $DAEMON || true
nginx -s quit
sleep 1
test_nginx_config
start-stop-daemon --start --quiet --pidfile \
/var/run/$NAME.pid --exec $DAEMON -- $DAEMON_OPTS || true
echo "$NAME."
;;
Hope that this solution will work for others.
Always test your config first; it will show syntax errors and duplicate directives and point you to them.
nginx -t
Its output will show you what is causing the failure.
It's because you aren't restarting as root.
Change to root:
sudo -i
Restart:
service nginx restart
Or:
/etc/init.d/nginx restart