Configure nginx: my browser always displays the IP address instead of the domain name - ruby-on-rails-4

With my web browser: when I go to my_domain.com, I am redirected to XX.XX.XX.XX (the IP),
and the address displayed in my web browser remains XX.XX.XX.XX.
I want my_domain.com to point to XX.XX.XX.XX, but I want the address bar to keep displaying my_domain.com (the same behaviour as on any other website).
My configuration file: /etc/nginx/nginx.conf
user www-data;
worker_processes 4;
pid /var/run/nginx.pid;

events {
    worker_connections 768;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    client_max_body_size 24M;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    gzip on;
    gzip_disable "msie6";
    passenger_root /usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini;
    passenger_ruby /home/deploy/.rvm/gems/ruby-1.9.3-p547/wrappers/ruby;
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
And this is my other configuration file, /etc/nginx/sites-enabled/default:
server {
    listen 80;
    server_name my_domain.com;
    add_header X-Frame-Options "SAMEORIGIN";

    location / {
        root /home/deploy/www/comint/current/public;
        passenger_enabled on;
        rails_env development;
    }

    location /doc/ {
        alias /usr/share/doc/;
        autoindex on;
        allow 127.0.0.1;
        allow ::1;
        deny all;
    }
}
When I try to perform a redirect, I get a redirect loop error.
I have tried many configurations, but it seems I have misunderstood something.
Can someone explain what the problem might be?
Thank you.
PS: I'm a Rails developer, but this is my first web server configuration.
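For comparison, here is a minimal sketch of a server block that serves the app directly on the domain name. It assumes the DNS entry for my_domain.com is a plain A record pointing at XX.XX.XX.XX (not a URL-forwarding/redirect service at the registrar, which is a common reason the address bar switches to the IP), and it reuses the Passenger directives from the question:

server {
    listen 80;
    # answer for both the bare domain and the www variant
    server_name my_domain.com www.my_domain.com;

    root /home/deploy/www/comint/current/public;
    passenger_enabled on;
    rails_env development;

    # no return/rewrite to the IP here, so the browser keeps showing my_domain.com
}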

Related

404 Not Found Nginx Error After SFTP Changes via FileZilla

I just updated my site's files (Django, deployed via AWS Lightsail) via FileZilla, but my website won't load now, giving me a 404 error.
I've done some research, which suggests it is an error with Nginx's conf file, but I'm not sure how I should change it to resolve the issue.
Here is the conf file:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
I've attempted to return to a previous snapshot (one that used to work) by creating a new instance, but when I try to load the backup via its public IP it still gives me a 404.
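As a generic first debugging step (these commands are not specific to this deployment, and the project path below is a placeholder), it helps to confirm that nginx still accepts its configuration and to watch the error log while reproducing the 404:

# check the configuration syntax and reload only if it is valid
sudo nginx -t && sudo systemctl reload nginx

# watch the error log while requesting the page in the browser
sudo tail -f /var/log/nginx/error.log

# verify the uploaded files are where the server block's root/alias expects them
ls -l /path/to/your/project/    # placeholder path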

Nginx status is active but it is not serving

I have attached two files. I am hosting the /var/www/html files and an app on localhost:3000.
Please help me understand why nginx is not serving when I hit the IP of the server.
If there is a solution to my problem, please let me know what changes I should make so that it works.
I have configured port 81 for this application.
nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;
    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    ##
    # Gzip Settings
    ##
    gzip on
    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml>
    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
sites-enabled file
server {
    listen 81 default_server;
    listen [::]:81 default_server;

    root /var/www/html;

    # Add index.php to the list if you are using PHP
    index index.php;

    server_name _;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ /index.php?args;
    }

    location /front/ {
        proxy_pass http://localhost:3000/;
    }

    # pass PHP scripts to FastCGI server
    #
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
        # # With php-cgi (or other tcp sockets):
        # fastcgi_pass 127.0.0.1:9000;
    }

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    location ~ /\.ht {
        deny all;
    }
}
In your NGINX config you've set port 81, but you're trying to hit port 3000?
Other than that, verify:
Do the files work from within the server itself, e.g. with curl or wget?
Make sure you've configured the security group to open the correct port for incoming traffic.
Make sure that you're using the PUBLIC IP of your instance.
Make sure that your instance is reachable (it should be in a public subnet).
If it still doesn't work after all this, update the question with more details, such as the exact error message.
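As an example, the "verify with curl" step could look like the following (the public IP is a placeholder). If the first two commands work but the last one does not, the problem is the security group or firewall rather than the nginx config:

# from inside the instance: does nginx answer on port 81 at all?
curl -I http://localhost:81/

# from inside the instance: does the proxied app answer on port 3000 directly?
curl -I http://localhost:3000/

# from your own machine: is port 81 reachable from outside?
curl -I http://<public-ip>:81/front/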

nginx/django file download permission

I have a Django website running on Ubuntu with nginx, where one user can upload an image and another user can download it.
The problem is that when I upload an image from the frontend, another user can view the image but can't download the original image, whereas when I upload the image from the backend it is downloadable.
I need to change the file permissions every time to make the image downloadable.
nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    client_max_body_size 100M;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;
    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    ##
    # Gzip Settings
    ##
    gzip on;
    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
default settings:
server {
    listen 80;
    server_name 159.65.156.40;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static {
        root /home/tboss/liveimage;
        client_max_body_size 100M;
    }

    location /media/ {
        root /home/tboss/liveimage;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/tboss/liveimage/liveimage.sock;
    }
}
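One way to narrow this down (the file name below is hypothetical; the paths come from the question) is to compare the permissions of a frontend-uploaded file with a backend-uploaded one. nginx runs as www-data, so every directory on the path needs execute permission for it and the file itself needs read permission; Django's FILE_UPLOAD_PERMISSIONS setting (e.g. 0o644) controls what newly uploaded files receive:

# show the permissions of every component of the path (hypothetical file name)
namei -l /home/tboss/liveimage/media/example.jpg

# compare frontend-uploaded and backend-uploaded files side by side
ls -l /home/tboss/liveimage/media/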

AWS ec2 ubuntu+nginx+uwsgi+flask deep learning api: 504 Gateway Time-out

I deployed a deep learning model API on an EC2 Ubuntu server, and I use the following command to send my JSON files:
curl http://ccc/api/ -d '{"image_link":["https://xx/8/52i_b403bb15-0637-4a17-be09-476168ff9a73"], "bb":"100"}' -H 'Content-Type: application/json'
The model takes about 5 minutes to complete a prediction. If I only predict some labels (1 label) rather than all labels (10 labels), the response is OK. If I try to predict all labels, I get this error:
<head><title>504 Gateway Time-out</title></head>
<body bgcolor="white">
<center><h1>504 Gateway Time-out</h1></center>
<hr><center>nginx/1.14.0 (Ubuntu)</center>
</body>
</html>
And my nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 900;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;
    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
I also set the default file in sites-enabled:
server {
    listen 80;                   # listen gate
    server_name xxx;             # aws IP or domain name
    charset utf-8;
    client_max_body_size 75M;
    fastcgi_read_timeout 1200;

    location / {
        include uwsgi_params;                           # import uwsgi
        uwsgi_pass 127.0.0.1:8000;
        uwsgi_param UWSGI_PYTHON /usr/bin/python3;      # Python environment
        uwsgi_param UWSGI_CHDIR /home/ubuntu/xxxx/src;  # project dir
        uwsgi_param UWSGI_SCRIPT app:app;               # main app
    }
}
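One detail worth noting: the location uses uwsgi_pass, but the timeout that was raised is fastcgi_read_timeout, which only applies to fastcgi_pass. A sketch of the same location with the uwsgi-specific timeouts raised instead (600s is an assumption chosen to exceed the ~5 minute prediction) could look like this; the uWSGI side (for example its harakiri setting) may need a matching increase as well:

location / {
    include uwsgi_params;
    uwsgi_pass 127.0.0.1:8000;
    uwsgi_param UWSGI_PYTHON /usr/bin/python3;
    uwsgi_param UWSGI_CHDIR /home/ubuntu/xxxx/src;
    uwsgi_param UWSGI_SCRIPT app:app;

    # give the backend up to 10 minutes to produce a response
    uwsgi_read_timeout 600s;
    uwsgi_send_timeout 600s;
}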

GUnicorn and Django for 500 concurrent requests with limited resources

I was tasked with creating a Django-Gunicorn demo app. In this task, I need to be able to handle 500 concurrent login requests in 1 second.
I have to deploy the app in a VM with 2GB RAM and 2 core CPUs (using Vagrant and VirtualBox, Ubuntu 16.04). I already tried the following for deployment.
gunicorn --workers 5 --bind "0.0.0.0:8000" --worker-class "gevent" --keep-alive 5 project.wsgi
Using a JMeter test from the host machine, the run always takes around 7-10 seconds. Even if the login endpoint only returns an empty response without any database access, the time is almost the same. Can you tell me what's wrong with this?
I use the default settings at /etc/nginx/nginx.conf.
user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
    worker_connections 768;
    multi_accept on;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;
    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    ##
    # Gzip Settings
    ##
    gzip on;
    gzip_disable "msie6";
    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
And here are the reverse proxy settings I put in the sites-available folder.
server {
    listen 80;

    location /static {
        autoindex on;
        alias /vagrant/static/;
    }

    location /media {
        autoindex on;
        alias /vagrant/uploads/;
    }

    location / {
        proxy_redirect http://127.0.0.1:8000/ http://127.0.0.1:8080/;
        proxy_pass http://127.0.0.1:8000;
    }
}
Thanks
The short answer is that you are missing worker connections in gunicorn, so it cannot handle more concurrent requests.
For 500 concurrent login requests, the number of concurrent connections the database can handle also matters. If the database cannot handle the load, you will fail there too. If you're using PostgreSQL, you have to raise max_connections and use a connection pool.
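As an illustration of the worker-connections point (the numbers are a starting point, not tuned values): gevent workers accept a --worker-connections flag, which defaults to 1000, so the suggestion can be made explicit or raised like this:

gunicorn --workers 5 --worker-class gevent --worker-connections 1000 \
         --keep-alive 5 --bind "0.0.0.0:8000" project.wsgi

On the database side, the connection-pool suggestion is commonly implemented with something like PgBouncer in front of PostgreSQL.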