Route 53 isn't letting me access the EC2 instance through the Application Load Balancer

I built a Rails app on an EC2 instance and deployed it using Route 53.
So far I have succeeded in associating the instance with the domain provided by Amazon,
and I can access the page with the domain name.
However, once I create a record and associate the domain with the ALB, I'm no longer able to access the page.
I'm doing this to enable HTTPS access.
I checked the following:
- The target group passes its health check.
- Typing the DNS name of the ALB works with both HTTP and HTTPS.
- The record on Route 53 points to that DNS name.
- The security group used for the ALB allows all access, whether HTTP or HTTPS.
- Listeners are configured for both HTTP and HTTPS.
I also checked the article below:
Unable to Access HTTPs in AWS Application Load Balancer EC2 Instance
I have no idea what else to check. Please help me. I've been working on this the whole day...
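For what it's worth, this is roughly how I have been checking the record and the ALB from the command line (example.com stands in for my actual domain, and the ALB DNS name is a placeholder):

# confirm the Route 53 record resolves to the ALB, not the old instance IP
dig +short example.com

# hit the ALB directly, presenting the domain as the Host header
curl -v -H "Host: example.com" http://my-alb-123456.ap-northeast-1.elb.amazonaws.com/

# hit the domain itself over HTTP and HTTPS
curl -v http://example.com/
curl -v https://example.com/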
Here are my config files.
ruby_gems_bootcamp.conf (for nginx)
# log directory
error_log /var/www/rails/ruby-gems-bootcamp/log/nginx.error.log;
access_log /var/www/rails/ruby-gems-bootcamp/log/nginx.access.log;

# max body size
client_max_body_size 2G;

upstream app_server {
    # for unix domain socket setups
    server unix:/var/www/rails/ruby-gems-bootcamp/tmp/sockets/.unicorn.sock fail_timeout=0;
}

server {
    listen 80;
    server_name 54.248.194.243;

    # nginx so increasing this is generally safe ...
    keepalive_timeout 5;

    # path for static files
    root /var/www/rails/ruby-gems-bootcamp/public;

    # page cache loading
    try_files $uri/index.html $uri.html $uri @app;

    location @app {
        # http headers
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        # referring upstream app_server
        proxy_pass http://app_server;
    }

    # Rails error pages
    error_page 500 502 503 504 /500.html;
    location = /500.html {
        root /var/www/rails/ruby-gems-bootcamp/public;
    }
}
nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 4096;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80;
        listen [::]:80;
        server_name _;
        root /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        error_page 404 /404.html;
        location = /404.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}
route53 configuration (screenshot)

Related

AWS ELB keeps returning 404 Health checks failed with these codes: [404]

I'm serving my Node.js application with pm2.
My nginx.conf file is:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 4096;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80;
        listen [::]:80;
        server_name _;
        root /usr/share/nginx/html;

        # Load configuration files for the default server block.
        #include /etc/nginx/default.d/*.conf;

        error_page 404 /404.html;
        location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}
and inside the conf.d/ folder I have created a configuration file like the one below (named api.conf):
server {
    listen 80;
    server_name dev.xxxxxxx.com;
    underscores_in_headers on;
    # return 301 https://$host$request_uri;

    location / {
        proxy_pass http://localhost:3004/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
When I stop the Node application, the ELB health status is 502; when I start it again (with nginx running), the ELB health status is 404.
I'm not able to find the issue. Could anyone please help me understand and solve it?
I also checked the nginx logs, shown below:
[02/Feb/2021:17:21:29 +0000] "GET / HTTP/1.1" 502 157 "-" "ELB-HealthChecker/2.0" "-"
[02/Feb/2021:17:21:38 +0000] "GET / HTTP/1.1" 404 60 "-" "ELB-HealthChecker/2.0" "-"
When setting up a health check, the ALB does NOT include a Host: header value.
If you look at the logs, you will see that the ELB health check requests come from a private IP address.
So when nginx is running, it gets a request from a private IP that does not match any of your server_name values. Since you don't have a default_server set, nginx falls back to the first server block, which in this case is the proxy to your Node app.
Are there any documents in /usr/share/nginx/html? Try putting an index.html in there, and adding a default_server to the port configuration:
listen 80 default_server;
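As a sketch (the paths follow the stock config quoted above), a minimal catch-all block for the health checker might look like this:

# catch-all server for requests whose Host header matches no server_name,
# including the ELB health checker hitting the instance's private IP
server {
    listen 80 default_server;
    server_name _;
    root /usr/share/nginx/html;   # put an index.html here so "/" returns 200

    location / {
        try_files $uri $uri/ =404;
    }
}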
For REST endpoints, you can define an endpoint for the health check that returns status code 200 in its response.
First we need to configure the health check endpoint in AWS:
EC2 --> Load Balancing --> Target Groups --> select your target group --> Actions --> Edit health check settings.
Define a health check path such as /health and save the changes.
Now create an endpoint /health in your application. I did it in Spring Boot, so I'm pasting that here for reference.
@CrossOrigin(value = "*")
@RequestMapping(value = "/health", method = RequestMethod.GET)
public ResponseEntity<?> health() {
    try {
        return ResponseEntity.status(200).body("Ok");
    } catch (Exception e) {
        return ResponseEntity.internalServerError().body(e.getMessage());
    }
}
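If the target behind the ALB is nginx itself rather than an application, the same idea can be expressed directly in the nginx config (a sketch, not from the original answer):

# answer ALB health checks directly from nginx, no app involved
location = /health {
    access_log off;
    default_type text/plain;
    return 200 'Ok';
}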
I'm late here, but I'm still adding this in case someone gets the same issue. My setup is:
AWS ALB --> AWS EC2. I installed nginx on the EC2 instance and got the same issue as above:
"GET /var/www/html/index.nginx-debian.html HTTP/1.1" 404 134 "-" "ELB-HealthChecker/2.0"
The mistake I made was adding the full filesystem path, /var/www/html/index.nginx-debian.html, in the Health checks tab, and that is why I was getting the error above.
Solution: only add /index.nginx-debian.html in the Health checks tab and the checks will start getting 200s.
Hope it helps someone and gets an upvote :-)

Nginx caching of uwsgi served static files

The Django project is deployed with uWSGI as the application server, which also serves static files from a specified directory (as shown in the command below); nginx is used as a reverse proxy. This is deployed using Docker.
The uwsgi command to run the server is as follows:
uwsgi -b 65535 --socket :4000 --workers 100 --cpu-affinity 1 --module wui.wsgi --py-autoreload 1 --static-map /static=/project/static;
The application is working fine at this point. I would like nginx to cache the static files, so I referred to the blog https://www.nginx.com/blog/maximizing-python-performance-with-nginx-parti-web-serving-and-caching and included the following configuration in my nginx.conf:
location ~* .(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg
|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid
|midi|wav|bmp|rtf)$ {
expires max;
log_not_found off;
access_log off;
}
After adding this to my nginx conf, the nginx container exits with the following error:
[emerg] 1#1: invalid number of arguments in "location" directive in /etc/nginx/nginx.conf:43
Is this how uwsgi-served static files can be cached in nginx? If so, please point out what has gone wrong here.
My complete nginx.conf is as follows:
events {
    worker_connections 1024; ## Default: 1024
}

http {
    include conf/mime.types;

    # the upstream component nginx needs to connect to
    upstream uwsgi {
        server backend:4000; # for a web port socket (we'll use this first)
    }

    # configuration of the server
    server {
        # the port your site will be served on
        listen 8443 ssl http2 default_server;
        # the domain name it will serve for
        server_name _; # substitute your machine's IP address or FQDN
        charset utf-8;

        ssl_certificate /secrets/server.crt;
        ssl_certificate_key /secrets/server.key;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!MD5;
        add_header Strict-Transport-Security "max-age=31536000" always;

        # Redirect HTTP to HTTPS
        error_page 497 https://$http_host$request_uri;

        # max upload size
        client_max_body_size 75M; # adjust to taste
        uwsgi_read_timeout 600s;

        # Finally, send all non-media requests to the Django server.
        location / {
            uwsgi_pass uwsgi;
            include /config/uwsgi_params; # the uwsgi_params file you installed
        }

        location ~* .(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg
        |jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid
        |midi|wav|bmp|rtf)$ {
            expires max;
            log_not_found off;
            access_log off;
        }
    }
}
Nginx version: 1.16
The problem with your config is that the location block has newlines in the list of file extensions. I tried nginx -t -c <filename> with a modified version of your location block (note also the escaped dot, \., so the regex matches a literal dot before the extension):
location ~* \.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
    expires max;
    log_not_found off;
    access_log off;
}
... and this passes the test!
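For reference, this is how a config can be checked without reloading nginx (the path is whatever file you are testing):

nginx -t -c /etc/nginx/nginx.conf
# nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
# nginx: configuration file /etc/nginx/nginx.conf test is successful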

Nginx gunicorn redirect not working and empty files

I am setting up a production server with nginx and gunicorn. I used the nginx.conf from the gunicorn examples page and modified it:
worker_processes 1;

user mypolls webapps;
# 'user nobody nobody;' for systems with 'nobody' as a group instead
pid /var/run/nginx.pid;
error_log /webapps/mypolls/logs/nginx.error.log;

events {
    worker_connections 1024; # increase if you have lots of clients
    accept_mutex off; # set to 'on' if nginx worker_processes > 1
    # 'use epoll;' to enable for Linux 2.6+
    # 'use kqueue;' to enable for FreeBSD, OSX
}

http {
    # fallback in case we can't determine a type
    default_type application/octet-stream;
    access_log /webapps/mypolls/logs/nginx.access.log combined;
    sendfile on;

    upstream app_server {
        # fail_timeout=0 means we always retry an upstream even if it failed
        # to return a good HTTP response
        # for UNIX domain socket setups
        server unix:/webapps/mypolls/run/gunicorn.sock fail_timeout=0;
        # for a TCP configuration
        # server 192.168.0.7:8000 fail_timeout=0;
    }

    server {
        # if no Host match, close the connection to prevent host spoofing
        listen 80 default_server;
        return 444;
    }

    server {
        # use 'listen 80 deferred;' for Linux
        # use 'listen 80 accept_filter=httpready;' for FreeBSD
        listen 80;
        client_max_body_size 4G;

        # set the correct host(s) for your site
        server_name 192.168.1.17;
        keepalive_timeout 5;

        # path for static files
        root /webapps/mypolls/app/static/;

        location / {
            include /etc/nginx/mime.types;
            # checks for static file, if not found proxy to app
            try_files $uri @proxy_to_app;
        }
        location @proxy_to_app {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # enable this if and only if you use HTTPS
            # proxy_set_header X-Forwarded-Proto https;
            proxy_set_header Host $http_host;
            # we don't want nginx trying to do something clever with
            # redirects, we set the Host: header above already.
            proxy_redirect off;
            proxy_pass http://app_server;
        }

        error_page 500 502 503 504 /500.html;
        location = /500.html {
            root /webapps/mypolls/app/static/;
        }
    }
}
For some reason requests for static files are still passed to gunicorn:
[2017-02-08 15:54:33 +0100] [2207] [DEBUG] GET /static/polls/style.css
The files seem to be found, but empty files are served. Is something wrong with the configuration?

Setup pretty local development URLs with nginx on Mac OS X

I have a Django server that I can access at 127.0.0.1:8000.
I have just installed nginx on Mac OS X Yosemite and can access the 'Welcome to nginx' screen at localhost:80.
I want to set up a local development environment on local.ingledow.co.uk, but when I visit that URL it redirects me to my live site, david.ingledow.co.uk.
I did have a wildcard URL forwarder with DNSimple that would redirect like this:
*.ingledow.co.uk > david.ingledow.co.uk
but I have removed that.
I have added this CNAME: local.ingledow.co.uk > 127.0.0.1
This shouldn't affect how I set up my local nginx servers.
So, how do I create an nginx server on Mac that serves my 127.0.0.1:8000 Django project on a custom domain (local.ingledow.co.uk)?
This is my nginx setup:
Default Nginx config at /usr/local/etc/nginx/nginx.conf
#user nobody;
worker_processes 1;

#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;

#pid logs/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log logs/access.log main;

    sendfile on;
    #tcp_nopush on;

    #keepalive_timeout 0;
    keepalive_timeout 65;

    #gzip on;

    server {
        listen 80;
        server_name localhost;

        #charset koi8-r;
        #access_log logs/host.access.log main;

        location / {
            root html;
            index index.html index.htm;
        }

        #error_page 404 /404.html;

        # redirect server error pages to the static page /50x.html
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }

    # INCLUDES
    include /Users/dingledow/etc/nginx/*.conf;
}
ingledow.conf at /Users/dingledow/etc/nginx/ingledow.conf
server {
    listen 80;
    server_name local.ingledow.co.uk;

    location / {
        root /Users/dingledow/Documents/Work/Ingledow/git/my_site2/;
        index index.html index.htm index.php;
        autoindex on;
    }
}
I'm not sure whether Apache is conflicting with nginx, or whether I've just set up nginx incorrectly.
Setting a CNAME with your registrar isn't going to help at all, and you should remove that.
The only way to make this work is to edit your local hosts file at /etc/hosts and add a pointer to localhost:
127.0.0.1 local.ingledow.co.uk
Then you'll either need to run the dev server on port 80 instead of 8000, or configure nginx to serve your site via gunicorn or uwsgi; the documentation tells you how to do that.
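If you just want the pretty URL during development, another option (not from the original answer) is to have nginx proxy to the dev server on port 8000. A minimal sketch using the hostname from the question:

server {
    listen 80;
    server_name local.ingledow.co.uk;

    location / {
        # forward everything to the Django dev server
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}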

How to run multiple Django sites on Nginx and uWSGI?

Is it possible to run multiple Django sites on the same server using nginx and uWSGI?
I suppose it's necessary to run multiple uWSGI instances (one for each site). I copied /etc/init.d/uwsgi to uwsgi2 and changed the port number, but I got the following error:
# /etc/init.d/uwsgi2 start
Starting uwsgi: /usr/bin/uwsgi already running.
How is it possible to run multiple uWSGI instances?
Thanks
You can create multiple virtual hosts that allow you to host multiple sites, independent of each other. More info here: http://wiki.nginx.org/VirtualHostExample.
There is a bit more detailed info here as well on how to set up virtual hosts: http://projects.unbit.it/uwsgi/wiki/RunOnNginx#VirtualHosting.
You can run multiple instances of uwsgi using Emperor Mode.
This handles the creation of new worker instances. These instances are brilliantly and hilariously named vassals. Each vassal just needs a config file, which is usually placed (or symlinked) in /etc/uwsgi/vassals.
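A minimal vassal config might look like this (paths and names are placeholders, assuming a standard Django project layout):

; /etc/uwsgi/vassals/site1.ini
[uwsgi]
chdir = /srv/site1                      ; project directory
module = site1.wsgi:application         ; the project's WSGI entry point
socket = /run/uwsgi/site1.sock          ; socket nginx will uwsgi_pass to
processes = 2
vacuum = true                           ; clean up the socket on exit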
For nginx you'll need to create a server block for each host you wish to serve; just change the server_name directive in each one. Here's an example:
# Simple HTTP server
server {
    listen 80;
    root /usr/share/nginx/www;
    server_name host1.example.com;
}

# Django server
server {
    listen 80;
    server_name host2.example.com;
    # ...upstream config...
}
Important: Make sure you have specified your host names in /etc/hosts. I found that without doing this, my Django site was also served on the default server IP, even though my nginx configuration specified that it should only be served on a specific host name.
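For local testing, that just means lines like these in /etc/hosts (the hostnames are the examples from the blocks above):

127.0.0.1 host1.example.com
127.0.0.1 host2.example.com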
I see many suggestions like @donturner's answer, i.e. setting up two or more different server blocks in the nginx configuration file. But the problem is that each server needs a unique server_name, either a different domain name or a sub-domain name. What about this kind of situation: I want to serve two different Django projects like this:
www.ourlab.cn/site1/ # first Django project
www.ourlab.cn/site2/ # second Django project
This way, we can configure all of the settings in one server block.
This is my setting in /etc/nginx/nginx.conf
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
root /usr/share/nginx/html;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
location / {
}
error_page 404 /404.html;
location = /40x.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
}
This is my setting in /etc/nginx/conf.d/self_configure.conf
# /etc/nginx/conf.d/self_configure.conf
server {
    listen 80;
    server_name www.ourlab.cn;

    # note that these lines are originally from the "location /" block
    root /mnt/data/www/ourlab;
    index index.php index.html index.htm;

    location / {
        try_files $uri $uri/ =404;
    }

    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }

    # Django media
    location /media {
        # your Django project's media files - amend as required
        alias /mnt/data/www/metCCS/metCCS/static/media;
    }

    location /static {
        # your Django project's static files - amend as required
        # first project's static files path
        alias /mnt/data/www/metCCS/metCCS/static/static_dirs;
    }

    location /static_lip {
        # second project's static files path
        alias /mnt/data/www/lipidCCS/lipidCCS/static/static_dirs;
    }

    # match www.ourlab.cn/metccs/*
    location ~* ^/metccs {
        include uwsgi_params;
        uwsgi_pass unix:/run/uwsgi/metCCS.sock;
    }

    # match www.ourlab.cn/lipidccs/*
    location ~* ^/lipidccs {
        include uwsgi_params;
        uwsgi_pass unix:/run/uwsgi/lipidCCS.sock;
    }
}
You also need to change one of the Django projects' settings.py to STATIC_URL = '/static_lip/', so the two projects can use their static files separately.
Another finding is that nginx can serve the static files by itself: even when we shut down uWSGI and Django, these files are still accessible through the browser.
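In other words, the second project's settings.py would contain something like the lines below (the STATIC_ROOT value is my assumption, matched to the alias in the nginx config above; adjust as required):

# lipidCCS settings.py (second project)
STATIC_URL = '/static_lip/'  # must match the nginx location above
STATIC_ROOT = '/mnt/data/www/lipidCCS/lipidCCS/static/static_dirs'  # assumed collectstatic target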