I have Django running behind nginx and uWSGI. Cached responses load very fast, but at other times the website takes more than 30s to load. I am unable to diagnose the root cause of the slowdown. Here's the info I can provide to help narrow down the issue:
GTMetrix - From the waterfall report I can conclude that the waiting time for static files is too high, along with the initial server response time. Here is a more detailed breakdown:
Links: Lighthouse parameters, waterfall report.
nginx.conf - Here is the nginx config file:
user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
worker_connections 768;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 75;
types_hash_max_size 2048;
client_max_body_size 5M;
sendfile_max_chunk 512;
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format upstream_time '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent"'
'rt="$request_time" uct="$upstream_connect_time"
uht="$upstream_header_time" urt="$upstream_response_time"';
access_log /var/log/nginx/access.log upstream_time;
error_log /var/log/nginx/error.log;
gzip on;
gzip_disable msie6;
# And all the gzip mime types here
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
proxy_cache_path /data/cache levels=1:2 keys_zone=my_cache:10m max_size=10g
inactive=60m use_temp_path=off;
server {
location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
proxy_cache my_cache;
proxy_cache_revalidate on;
proxy_cache_min_uses 3;
proxy_cache_use_stale error timeout updating http_500 http_502 http_503
http_504;
proxy_cache_lock on;
expires 365d;
proxy_pass http://example.net;
}
}
}
Nginx Project Config -
map $sent_http_content_type $expires {
default off;
text/html epoch;
text/css max;
application/javascript max;
~image/ max;
}
server{
listen 80;
server_name example.com;
location = /favicon.ico { access_log off; log_not_found off; }
location /static/ {
root /home/mysite/project_dir/app_dir;
expires $expires;
}
location /images/ {
expires $expires;
root /home/mysite/project_dir/app_dir/static/images/;
}
location /media/ {
expires $expires;
root /home/mysite/project_dir/;
}
location / {
include uwsgi_params;
uwsgi_pass unix:/run/uwsgi/mysite.sock;
gzip_static on;
proxy_buffering off;
proxy_cache my_cache;
proxy_cache_revalidate on;
proxy_cache_min_uses 3;
proxy_cache_use_stale error timeout updating http_500 http_502 http_503
http_504;
proxy_cache_lock on;
expires 365d;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $http_host;
proxy_set_header Connection "";
}
listen 443 ssl http2; # managed by Certbot
#All the subsequent certbot settings not tampered with
}
Logs - When I log requests with the above config, the access logs show upstream_response_time correctly only when the page is served from cache. When it takes >30s to load, all the upstream_* values show a hyphen '-' and only the request time is populated.
UPDATE:
django-debug-toolbar - Resource Usage:
User CPU time: 964.000 msec
System CPU time: 52.000 msec
Total CPU time: 1016.000 msec
Elapsed time: 1019.185 msec
All the SQL queries take minimal time (10.78 ms), and the logger shows 0 errors.
I would highly appreciate it if anyone could help me diagnose the root cause of this slowdown. Thank you!
Phew! So I figured out the solution. I used https://www.webpagetest.org and came to the conclusion that the initial connection time was very high (~30s). When that happens, it is most likely a DNS or firewall issue. My issue was DNS-based: I had two IPs added as A records for my domain, and one of them was a private IP. The browser spent ~30s trying that IP, and once the website loaded, it cached the response, so subsequent response times were low. Simply removing the private IP's A record fixed it for me.
I built a Rails app on an EC2 instance and deployed it using Route 53.
Currently I have succeeded in associating the instance with the domain provided by Amazon.
I can access the page with the domain name.
However, once I create a record and associate the domain with the ALB, I'm no longer able to access the page.
I'm doing this to make HTTPS access available.
I checked the following:
The target group passes its health check
Typing the DNS name of the ALB works with both HTTP and HTTPS
The record on Route 53 shows the DNS name above
The security group used for the ALB allows all access, regardless of HTTP or HTTPS
A listener is configured for both HTTP and HTTPS
I also checked the article below
Unable to Access HTTPs in AWS Application Load Balancer EC2 Instance
I have no idea what else to check.
Please help me.
I've been working on this the whole day...
Here's my config files
ruby_gems_bootcamp.conf (for nginx):
# log directory
error_log /var/www/rails/ruby-gems-bootcamp/log/nginx.error.log;
access_log /var/www/rails/ruby-gems-bootcamp/log/nginx.access.log;
# max body size
client_max_body_size 2G;
upstream app_server {
# for unix domain socket setups
server unix:/var/www/rails/ruby-gems-bootcamp/tmp/sockets/.unicorn.sock fail_timeout=0;
}
server {
listen 80;
server_name 54.248.194.243;
# nginx so increasing this is generally safe ...
keepalive_timeout 5;
# path for static files
root /var/www/rails/ruby-gems-bootcamp/public;
# page cache loading
try_files $uri/index.html $uri.html $uri @app;
location @app {
# http headers
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
# referring upstream app_server
proxy_pass http://app_server;
}
# Rails error pages
error_page 500 502 503 504 /500.html;
location = /500.html {
root /var/www/rails/ruby-gems-bootcamp/public;
}
}
nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 4096;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
server {
listen 80;
listen [::]:80;
server_name _;
root /usr/share/nginx/html;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
error_page 404 /404.html;
location = /404.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
}
route53 configuration
I'm testing a Django website deployment. The site works without any issues when I connect directly to my Gunicorn localhost and run it in debug mode (so that Django handles file uploads itself). When I access the site with debug mode turned off, through nginx (which binds to the same Gunicorn localhost), everything works just as well, except file uploads. Whenever I try to upload a file > 1 MB, the upload freezes at some point (with a 1.3 MB file, my browser freezes at 70%).
I've installed nginx into a conda virtual environment (conda install --no-update-dependencies -c anaconda nginx). Here is the etc/nginx.conf file:
# nginx Configuration File
# https://www.nginx.com/resources/wiki/start/topics/examples/full/
# http://nginx.org/en/docs/dirindex.html
# https://www.nginx.com/resources/wiki/start/
# Run as a unique, less privileged user for security.
# user nginx www-data; ## Default: nobody
# If using the supervisord init system, do not run in daemon mode.
# Bear in mind that non-stop upgrade is not an option with "daemon off".
# daemon off;
# Sets the worker threads to the number of CPU cores available in the system
# for best performance.
# Should be > the number of CPU cores.
# Maximum number of connections = worker_processes * worker_connections
worker_processes auto; ## Default: 1
# Maximum number of open files per worker process.
# Should be > worker_connections.
# http://blog.martinfjordvald.com/2011/04/optimizing-nginx-for-high-traffic-loads/
# http://stackoverflow.com/a/8217856/2127762
# Each connection needs a filehandle (or 2 if you are proxying).
worker_rlimit_nofile 8192;
events {
# If you need more connections than this, you start optimizing your OS.
# That's probably the point at which you hire people who are smarter than
# you as this is *a lot* of requests.
# Should be < worker_rlimit_nofile.
worker_connections 8000;
}
# Log errors and warnings to this file
# This is only used when you don't override it on a server{} level
#error_log logs/error.log notice;
#error_log logs/error.log info;
error_log var/log/nginx/error.log warn;
# The file storing the process ID of the main process
pid var/run/nginx.pid;
http {
# Log access to this file
# This is only used when you don't override it on a server{} level
access_log var/log/nginx/access.log;
# Hide nginx version information.
server_tokens off;
# Controls the maximum length of a virtual host entry (ie the length
# of the domain name).
server_names_hash_bucket_size 64;
# Specify MIME types for files.
include mime.types;
default_type application/octet-stream;
# How long to allow each connection to stay idle.
# Longer values are better for each individual client, particularly for SSL,
# but means that worker connections are tied up longer.
keepalive_timeout 20s;
# Speed up file transfers by using sendfile() to copy directly
# between descriptors rather than using read()/write().
# For performance reasons, on FreeBSD systems w/ ZFS
# this option should be disabled as ZFS's ARC caches
# frequently used files in RAM by default.
sendfile on;
# Don't send out partial frames; this increases throughput
# since TCP frames are filled up before being sent out.
tcp_nopush on;
# Enable gzip compression.
gzip on;
# Compression level (1-9).
# 5 is a perfect compromise between size and CPU usage, offering about
# 75% reduction for most ASCII files (almost identical to level 9).
gzip_comp_level 5;
# Don't compress anything that's already small and unlikely to shrink much
# if at all (the default is 20 bytes, which is bad as that usually leads to
# larger files after gzipping).
gzip_min_length 256;
# Compress data even for clients that are connecting to us via proxies,
# identified by the "Via" header (required for CloudFront).
gzip_proxied any;
# Tell proxies to cache both the gzipped and regular version of a resource
# whenever the client's Accept-Encoding capabilities header varies;
# Avoids the issue where a non-gzip capable client (which is extremely rare
# today) would display gibberish if their proxy gave them the gzipped version.
gzip_vary on;
# Compress all output labeled with one of the following MIME-types.
gzip_types
application/atom+xml
application/javascript
application/json
application/ld+json
application/manifest+json
application/rss+xml
application/vnd.geo+json
application/vnd.ms-fontobject
application/x-font-ttf
application/x-web-app-manifest+json
application/xhtml+xml
application/xml
font/opentype
image/bmp
image/svg+xml
image/x-icon
text/cache-manifest
text/css
text/plain
text/vcard
text/vnd.rim.location.xloc
text/vtt
text/x-component
text/x-cross-domain-policy;
# text/html is always compressed by gzip module
# This should be turned on if you are going to have pre-compressed copies (.gz) of
# static files available. If not it should be left off as it will cause extra I/O
# for the check. It is best if you enable this in a location{} block for
# a specific directory, or on an individual server{} level.
# gzip_static on;
include conf.d/*.conf;
}
This is the original version of my server's configuration file (conf.d/test.conf).
server {
server_name localhost;
listen 8081;
access_log on;
client_max_body_size 32M;
send_timeout 100s;
location /static/ {
alias /Users/user/static/;
autoindex on;
error_log /Users/user/.nginx/labsite.static.error.log warn;
}
location /media/ {
alias /Users/user/media/;
autoindex on;
error_log /Users/user/.nginx/labsite.media.error.log warn;
}
location / {
proxy_pass http://localhost:8001;
proxy_set_header X-Forwarded-Host $server_name;
proxy_set_header X-Real-IP $remote_addr;
add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
}
access_log /Users/user/.nginx/labsite.access.log combined;
error_log /Users/user/.nginx/labsit.error.log warn;
}
I've found several related posts:
Nginx PHP Failing with Large File Uploads (Over 6 GB)
https://serverfault.com/questions/626817/nginx-file-upload-pauses-stalls-in-the-middle-uploads-only-258kb-and-stops
https://easyengine.io/tutorials/php/increase-file-upload-size-limit/
These led me to introduce some modifications:
server {
server_name localhost;
listen 8081;
access_log on;
client_max_body_size 32M;
send_timeout 300s;
gzip_static off;
location /static/ {
alias /Users/user/static/;
autoindex on;
error_log /Users/user/.nginx/labsite.static.error.log warn;
}
location /media/ {
alias /Users/user/media/;
client_body_temp_path /Users/user/media;
autoindex on;
error_log /Users/user/.nginx/labsite.media.error.log warn;
}
location / {
proxy_pass http://localhost:8001;
proxy_set_header X-Forwarded-Host $server_name;
proxy_set_header X-Real-IP $remote_addr;
add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
}
access_log /Users/user/.nginx/labsite.access.log combined;
error_log /Users/user/.nginx/labsit.error.log warn;
}
I've also tried setting sendfile off in my config file, because that's recommended for FreeBSD (and Mac OS X is based on FreeBSD), but to no avail. Am I missing something?
It seems I've figured this out. I had to change the client body temporary directory (I'm not entirely sure why, because there were no permission-related issues) and set/increase the client_body_timeout parameter (that timeout applies between two successive reads of the request body, so a slow upload can trip it).
server {
listen 8081;
server_name localhost;
client_max_body_size 32M;
client_body_timeout 300s;
send_timeout 300s;
client_body_temp_path /Users/user/media;
location = /favicon.ico { access_log off; log_not_found off; }
location /static/ {
root /Users/user;
}
location /media/ {
root /Users/user;
}
location / {
proxy_pass http://localhost:8001;
proxy_set_header X-Forwarded-Host $server_name;
proxy_set_header X-Real-IP $remote_addr;
add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
}
}
I'm currently trying to deploy a Django app on a RHEL 7.4 server using nginx. I've followed these tutorials:
https://simpleisbetterthancomplex.com/tutorial/2017/05/23/how-to-deploy-a-django-application-on-rhel.html
https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-16-04
The virtualenv and the nginx server seem to be all right. However, I'm struggling with two errors:
Either I get a 500 error because of the worker_connections parameter value (logs below):
13494#0: *1021 1024 worker_connections are not enough while connecting to upstream, client: 192.168.1.33, server: 192.168.1.33, request: "GET /Syc/login HTTP/1.0", upstream: "http://192.168.1.33:80/Syc/login", host: "192.168.1.33"
Or I increase the worker_connections value to > 4096 and get a 400 error, as in this thread: 400 Bad Request - request header or cookie too large
Below are my nginx.conf and app.conf. Please let me know if there are configuration mistakes; thanks in advance for any help.
nginx.conf:
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
# set open fd limit to 30000
worker_rlimit_nofile 30000;
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
}
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
root /usr/share/nginx/html;
large_client_header_buffers 4 32k;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
location / {
}
error_page 404 /404.html;
location = /40x.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
app.conf
upstream app_server {
server unix:/opt/sycoma/gunicorn.sock fail_timeout=0;
}
server {
listen 80;
server_name 192.168.1.33; # <- insert here the ip address/domain name
large_client_header_buffers 4 16k;
keepalive_timeout 5;
client_max_body_size 4G;
access_log /opt/sycoma/logs/nginx-access.log;
error_log /opt/sycoma/logs/nginx-error.log;
location /static/ {
alias /opt/sycoma/venv/Sycoma/Syc/static/;
}
location /media/ {
alias /opt/sycoma/venv/Sycoma/media/;
}
location / {
try_files $uri @proxy_to_app;
}
location @proxy_to_app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://192.168.1.33;
}
}
Try removing or commenting out the line:
proxy_set_header Host $http_host;
or increase large_client_header_buffers.
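For the second option, here is a minimal sketch of where that directive could go in your app.conf (the 4 x 32k size is only an example; tune it to the headers and cookies your app actually sends):
server {
listen 80;
server_name 192.168.1.33;
# allow four 32k buffers for oversized request headers/cookies
large_client_header_buffers 4 32k;
# ... keep the existing location blocks unchanged ...
}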
I am writing a Django app which uses an nginx reverse proxy + gunicorn as a webserver in production.
I want to include the capability to stop DDoS attacks from a certain IP (or pool of IPs). This has to be at the nginx level, rather than any deeper in the code. Do I need a web application firewall? If so, how do I integrate it?
My project's nginx file located at sites-available has:
server {
listen 80;
charset utf-8;
underscores_in_headers on;
location = /favicon.ico { access_log off; log_not_found off; }
location /static/ {
root /home/sarahm/djangoproject/djangoapp;
}
location /static/admin {
root /home/sarahm/.virtualenvs/myenv/local/lib/python2.7/site-packages/django/contrib/admin/static/;
}
location / {
proxy_pass_request_headers on;
proxy_buffering on;
proxy_buffers 8 24k;
proxy_buffer_size 2k;
include proxy_params;
proxy_pass http://unix:/home/sarahm/djangoproject/djangoapp/djangoapp.sock;
}
error_page 500 502 503 504 /500.html;
location = /500.html {
root /home/sarahm/djangoproject/djangoapp/templates/;
}
}
Let me know if I should include more information, and what that information should be.
If you want to prevent certain IPs or even subnets from accessing your app, add the following code to your server block:
# Specify addresses that are not allowed to access your server
deny 192.168.1.1/24;
deny 192.168.2.1/24;
allow all;
Also, if you're not using REST, you might want to limit the possible HTTP verbs by adding the following to your server block:
if ($request_method !~ ^(GET|HEAD|POST)$ ) {
return 403;
}
To lessen the possibility of a DoS attack, you might want to limit the number of simultaneous connections from a single host (see http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html) by adding the following to nginx.conf:
limit_conn_zone $binary_remote_addr zone=limitzone:1M;
and the following to your server block:
limit_conn limitzone 20;
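limit_conn caps concurrent connections per address. If you also want to cap the request rate per client, nginx's ngx_http_limit_req_module can be combined with it in the same way; a sketch, where the zone name, size and rate are arbitrary examples:
# in nginx.conf, inside the http block
limit_req_zone $binary_remote_addr zone=req_limit:10m rate=10r/s;
# in your server block
limit_req zone=req_limit burst=20 nodelay;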
Some other useful settings for nginx.conf that help mitigate DoS if set correctly:
server_tokens off;
autoindex off;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
client_body_timeout 10;
client_header_timeout 10;
send_timeout 10;
keepalive_timeout 20 15;
open_file_cache max=5000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
Since it's too broad to explain all of these here, I suggest you look at the docs (http://nginx.org/en/docs/) for details. Choosing the correct values comes down to trial and error on your particular setup.
Django serves error pages itself as templates, so you should remove:
error_page 500 502 503 504 /500.html;
location = /500.html {
root /home/sarahm/djangoproject/djangoapp/templates/;
}
Adding access_log off; log_not_found off; to the static location, if you don't really care about logging static files, is also an option:
location /static/ {
access_log off;
log_not_found off;
root /home/sarahm/djangoproject/djangoapp;
}
This will lower the frequency of filesystem writes, therefore increasing performance.
NGINX is a great web server, and configuring it is a broad topic, so it's best to either read the docs (at least the HOW-TO section) or find an article that describes a setup close to yours.
Currently I'm doing caching with fastcgi_cache for non-logged-in users, and using (if + fastcgi_no_cache + fastcgi_cache_bypass) to pass logged-in users directly to the backend, which is PHP-FPM.
This works well enough, but when PHP-FPM starts hitting 500+ req/s, the slowdown/load starts.
So what I'm thinking about is creating a cache for logged-in users where each user has their own cached files. Is that possible? If yes, can you please give me some tips on how? I've googled a lot but found nothing helpful.
The site runs a custom PHP CMS with MySQL, memcached and APC.
cat /etc/nginx/nginx.conf
user username username;
worker_processes 8;
worker_rlimit_nofile 20480;
pid /var/run/nginx.pid;
events {
worker_connections 10240;
use epoll;
}
http {
include mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log off;
error_log /var/log/nginx/error.log warn;
log_not_found off;
log_subrequest off;
server_tokens off;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 3;
keepalive_requests 50;
send_timeout 120;
connection_pool_size 256;
chunked_transfer_encoding on;
ignore_invalid_headers on;
client_header_timeout 60;
large_client_header_buffers 4 128k;
client_body_in_file_only off;
client_body_buffer_size 512K;
client_max_body_size 4M;
client_body_timeout 60;
request_pool_size 32k;
reset_timedout_connection on;
server_name_in_redirect off;
server_names_hash_max_size 4096;
server_names_hash_bucket_size 256;
underscores_in_headers off;
variables_hash_max_size 4096;
variables_hash_bucket_size 256;
gzip on;
gzip_buffers 4 32k;
gzip_comp_level 1;
gzip_disable "MSIE [1-6]\.";
gzip_min_length 0;
gzip_proxied any;
gzip_types text/plain text/css application/x-javascript text/javascript text/xml application/xml application/xml+rss application/atom+xml;
open_file_cache max=3000 inactive=20s;
open_file_cache_min_uses 1;
open_file_cache_valid 20s;
open_file_cache_errors off;
fastcgi_buffer_size 8k;
fastcgi_buffers 512 8k;
fastcgi_busy_buffers_size 16k;
fastcgi_cache_methods GET HEAD;
fastcgi_cache_min_uses 1;
fastcgi_cache_path /dev/shm/nginx levels=1:2 keys_zone=website:2000m inactive=1d max_size=2000m;
fastcgi_connect_timeout 60;
fastcgi_intercept_errors on;
fastcgi_pass_request_body on;
fastcgi_pass_request_headers on;
fastcgi_read_timeout 120;
fastcgi_send_timeout 120;
proxy_temp_file_write_size 16k;
fastcgi_max_temp_file_size 1024m;
include /etc/nginx/vhosts/*.conf;
}
vhost settings :
server {
listen 80;
server_name domain.com;
access_log off;
error_log /var/log/nginx/error.log warn;
root /home/username/public_html;
location ~ \.php$ {
# pass cache if logged in
set $nocache "";
if ($http_cookie ~ (MyCookieUser*|MyCookiePass*)) {
set $nocache "Y";
}
fastcgi_no_cache $nocache;
fastcgi_cache_bypass $nocache;
fastcgi_cache website;
fastcgi_cache_key $host$uri$is_args$args;
fastcgi_cache_valid 200 301 302 304 40s;
fastcgi_cache_valid any 5s;
fastcgi_cache_use_stale error timeout invalid_header updating http_500 http_503 http_404;
fastcgi_ignore_headers Set-Cookie;
fastcgi_hide_header Set-Cookie;
fastcgi_ignore_headers Cache-Control;
fastcgi_hide_header Cache-Control;
fastcgi_ignore_headers Expires;
fastcgi_hide_header Expires;
fastcgi_no_cache $nocache;
fastcgi_cache_bypass $nocache;
fastcgi_index index.php;
fastcgi_pass 127.0.0.1:8081;
fastcgi_param SCRIPT_FILENAME /home/username/public_html$fastcgi_script_name;
include /etc/nginx/fastcgi_params;
}
location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|ppt|txt|mid|swf|midi|wav|bmp|js)$ {
root /home/username/public_html;
expires max;
add_header Cache-Control cache;
}
}
php-fpm config
emergency_restart_threshold = 10
emergency_restart_interval = 60s
process_control_timeout =10s
rlimit_files = 102400
events.mechanism = epoll
[www]
user = username
group = username
listen = 127.0.0.1:8081
listen.backlog = 10000
pm = dynamic
pm.max_children = 2048
pm.start_servers = 64
pm.min_spare_servers = 20
pm.max_spare_servers = 128
pm.process_idle_timeout = 10s;
pm.max_requests = 50000
request_slowlog_timeout = 40s
request_terminate_timeout = 60s
Also, do I need to change the way the PHP CMS handles its own cookies?
Server: 32 GB DDR3 RAM, dual E5620 processors, CentOS 6 64-bit.
Just a suggestion (and what I'm currently doing)...
Why don't you use a different cache entry for each unique cookie that nginx gets from its upstream CGI server (php-fpm) in the "logged in" section of your site? This more or less means that each logged-in user gets their own cache. It's not optimal, but it will help.
If you want to start using really fancy cache options with cookies/dynamic content etc., you will probably need to put varnish-cache in front of nginx.
I also have certain locations that will clear any cached data (for that URI) when accessed, such as /admin or /system. The last thing I want is nginx serving a cached copy of my admin backend, with all its sensitive information, to a hacker while php-fpm is offline.
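As a rough sketch of that last point, adapted to the vhost above (this reuses your backend settings as an assumption, and it bypasses the cache for those URIs rather than truly purging entries, since purging needs the third-party ngx_cache_purge module):
location ^~ /admin {
# never store or serve a cached copy of the admin backend
fastcgi_no_cache 1;
fastcgi_cache_bypass 1;
fastcgi_index index.php;
fastcgi_pass 127.0.0.1:8081;
fastcgi_param SCRIPT_FILENAME /home/username/public_html$fastcgi_script_name;
include /etc/nginx/fastcgi_params;
}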
You might like this example for WordPress:
set $cs_session "";
if ($http_cookie ~* "wordpress_logged_in_[^=]*=([^%]+)%7C") {
set $cs_session wordpress_logged_in_$1;
}
fastcgi_cache_key "$scheme$request_method$host$request_uri$cs_session";
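Plugged into a location like the one in your vhost, that key would sit roughly like this (a sketch; it assumes you swap the WordPress cookie regex for whatever your CMS uses for MyCookieUser/MyCookiePass, and keep the same website cache zone and backend):
location ~ \.php$ {
set $cs_session "";
if ($http_cookie ~* "wordpress_logged_in_[^=]*=([^%]+)%7C") {
set $cs_session wordpress_logged_in_$1;
}
fastcgi_cache website;
fastcgi_cache_key "$scheme$request_method$host$request_uri$cs_session";
fastcgi_cache_valid 200 301 302 304 40s;
fastcgi_index index.php;
fastcgi_pass 127.0.0.1:8081;
fastcgi_param SCRIPT_FILENAME /home/username/public_html$fastcgi_script_name;
include /etc/nginx/fastcgi_params;
}
Note that with a per-user key you would also have to reconsider your fastcgi_ignore_headers/fastcgi_hide_header lines for Set-Cookie, since logged-in responses usually need their cookies passed through.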