I've installed nginx and PHP 7 on an Amazon EC2 instance.
It works when I access it via the local IP, but it's not reachable via the Elastic IP.
Could somebody help me with this?
server {
    listen 80 default_server;
    root /var/www/html;
    index index.php index.html;
    server_name 52.43.19.61;

    location / {
        try_files $uri $uri/ /index.php?q=$uri&$args;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
A couple of questions:
Is your Elastic IP associated with your instance?
Does your instance's security group permit incoming connections?
Does your instance's firewall permit incoming connections?
Is your application listening on the expected port?
What port are you trying to access? Do you have that port open
in the security groups, and do you have an application listening on that port?
Also make sure the route table of the VPC is set to allow traffic from IP addresses outside
the VPC (0.0.0.0/0) to flow from the subnet to the Internet gateway.
If I had to guess, your security groups are not set up right. Make sure
to open them to the correct IP addresses, or to the world (0.0.0.0/0)
if you are going to access that port from multiple IPs.
If none of that is it, then disassociate the Elastic IP and re-associate it with the instance.
Related
I have some Docker containers deployed on AWS EC2 that listen on HTTP.
My idea is to use nginx as a reverse proxy to pass traffic from HTTPS to http://localhost.
Each container listens on a specific HTTP port. The EC2 instance will accept traffic only on ports 80 and 443, and I will use subdomains to choose the right port.
So I should have:
https://backend.my_ec2instance.com --> http://localhost:4000
https://frontend.my_ec2instance.com --> http://localhost:5000
I've got my free TLS certificate from Let's Encrypt (it's just one file containing the public and private keys), and put it in
/etc/nginx/ssl/letsencrypt.pem
Then I configured nginx this way:
sudo nano /etc/nginx/sites-enabled/custom.conf
and wrote
server {
    listen 443 ssl;
    server_name backend.my_ec2instance;

    # Certificate
    ssl_certificate letsencrypt.pem;
    # Private Key
    ssl_certificate_key letsencrypt.pem;

    # Forward
    location / {
        proxy_pass http://localhost:4000;
    }
}

server {
    listen 443 ssl;
    server_name frontend.my_ec2instance;

    # Certificate
    ssl_certificate letsencrypt.pem;
    # Private Key
    ssl_certificate_key letsencrypt.pem;

    # Forward
    location / {
        proxy_pass http://localhost:5000;
    }
}
then
sudo ln -s /etc/nginx/sites-available/custom.conf /etc/nginx/sites-enabled/
Anyway, if I open https://backend.my_ec2instance in my browser it's not reachable.
http://localhost:80, however, correctly shows the nginx page.
HTTPS default port is port 443. HTTP default port is port 80. So this: https://localhost:80 makes no sense. You are trying to use the HTTPS protocol on an HTTP port.
In either case, I don't understand why you are entering localhost in your web browser at all. You should be trying to open https://frontend.my_ec2instance.com in your web browser. The localhost address in your web browser would refer to your local laptop, not an EC2 server.
Per the discussion in the comments you also need to include your custom Nginx config in the base configuration file.
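As a sketch of that last point (assuming the certificate actually lives at /etc/nginx/ssl/letsencrypt.pem and your nginx.conf does not already include sites-enabled), the http block of /etc/nginx/nginx.conf needs something like:

```nginx
http {
    # ... existing settings ...

    # Pull in per-site server blocks; without this line nginx never
    # reads custom.conf, so the HTTPS vhosts simply don't exist.
    include /etc/nginx/sites-enabled/*.conf;
}
```

The server blocks should also reference the certificate by absolute path, e.g. `ssl_certificate /etc/nginx/ssl/letsencrypt.pem;`, since a bare `letsencrypt.pem` is resolved relative to nginx's configuration prefix, not relative to your custom.conf.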
I am running a few microservices in ECS on Fargate. I need these Docker containers to be able to communicate with each other. Right now both containers simply run NGINX to get a proof of concept up and running.
I have set up a private DNS namespace and created a service for each Docker container, e.g. service1.local and service2.local.
I then created the ECS services and linked the service discovery, each ECS service now has the .local namespace attached.
Right now both ECS services are in a public subnet with public IPs and are reachable individually. However, reaching service B from service A is not possible, i.e. the two containers cannot communicate with each other.
I get the following error service1.local could not be resolved (5: Operation refused).
I attached an EC2 instance to the same VPC and subnets and was able to ping each service via service1.local and service2.local, curl them, etc., so service discovery is working as expected in that situation.
Right now security groups allow traffic from all ports and all IPs (just whilst testing)
Below is a barebones copy of my NGINX config for service1:
server {
    listen 80;
    server_name localhost;

    resolver ${PRIVATE_DNS_RESOLVER} valid=10s;
    set $serverHost http://service2.local:8000;

    location /status {
        access_log off;
        add_header 'Content-Type' 'application/json';
        return 200 '{"status":"OK"}';
    }

    # Proxy to server container
    location /api {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_redirect off;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass $serverHost$request_uri;
    }

    # Route for client
    location / {
        root /usr/share/nginx/html;
        index index.html;
        try_files $uri $uri/ /index.html;
    }

    # Route all errors back to index.html for react router to handle
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
Hitting the public IP of this service followed by /api is what should proxy to the second service via service2.local
tl;dr
Both containers are accessible via public IPs.
Both containers are accessible from within the VPC using the service discovery private namespace.
Container one cannot resolve the hostname of container two.
The VPC has DNS hostnames enabled.
The resolver here is one of the NS records from the private namespace. Setting it to 127.0.0.1 or 127.0.0.11 gave the same error.
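One thing worth trying (a sketch, with the service name taken from the question): point nginx's resolver at the Amazon-provided VPC DNS server rather than at an NS record from the namespace. The Amazon DNS server sits at the VPC network address plus two (10.0.0.2 for a 10.0.0.0/16 VPC) and is also reachable at the fixed link-local address 169.254.169.253; Cloud Map names like service2.local only resolve through that server.

```nginx
server {
    listen 80;
    server_name localhost;

    # Amazon-provided VPC DNS (VPC base address + 2, or the
    # link-local alias below). Private-namespace names such as
    # service2.local are not resolvable through public DNS.
    resolver 169.254.169.253 valid=10s;

    set $serverHost http://service2.local:8000;

    location /api {
        proxy_pass $serverHost$request_uri;
    }
}
```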
I created a service publish structure as below
I don't know why I can't access the domain successfully. Where might the issue be?
The public LB's listener rules are
You can add HTTP to HTTPS redirect config in the Loadbalancer rule itself.
To proxy your HTTPS requests to the application server:
server {
    listen 80 default_server;
    server_name _;

    location / {
        proxy_pass http://<internal_loadbalancer_dns_name>;
    }
}
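If you would rather do the HTTP-to-HTTPS redirect in nginx instead of in the load balancer's listener rule, a minimal sketch looks like this (assuming the public LB's HTTPS listener forwards traffic on to this server):

```nginx
# Redirect all plain-HTTP traffic to HTTPS; the LB's HTTPS
# listener then terminates TLS and forwards the request on.
server {
    listen 80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
}
```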
Here is my scenario.
I'm using ACM to generate 2 SSL certificates. example.com and *.example.com
I have 2 load balancers linked to the same EC2 instance.
1) Linked to my wordpress website - example.com
2) Linked to my app - *.example.com
Checklist I followed to troubleshoot the OutOfService error:
1) Instance State - Running
2) Status Checks - 2/2
3) Security Group Setting - Port 80/443/22 are open
4) Below are my Health Check settings
Ping Target - HTTP:80/
Timeout - 5 seconds
Interval - 30 seconds
Unhealthy threshold - 2
Healthy threshold - 10
I'm using the NGINX web server. I have checked the status, and it shows as active.
Here is my config file for example.com:
server {
    server_name www.example.com;
    return 301 $scheme://example.com$request_uri;
}

server {
    listen 80;
    listen 443;
    server_name example.com;
    root /opt/bitnami/apps/wordpress/htdocs;
}

server {
    listen 80;
    listen 443;
    server_name ~^(.*)\.example\.com$;
    root /opt/bitnami/apps/example_app;
}
What could be the problem here? Is the problem related to NGINX config settings or is it related to Load Balancer settings?
Your nginx settings are definitely not correct. There is no SSL at the instance level; instead, the ELB terminates the SSL connection from the client. The only connections the EC2 instance should accept are on port 80, and only from the ELB. I suggest you remove the three `listen 443` references in nginx and make sure it's configured as above.
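A sketch of that port-80-only layout (assuming the ELB terminates TLS and forwards plain HTTP to the instance; document roots are taken from the question):

```nginx
# The ELB terminates SSL, so the instance only ever speaks plain HTTP.
server {
    listen 80;
    server_name www.example.com;
    return 301 https://example.com$request_uri;
}

server {
    listen 80;
    server_name example.com;
    root /opt/bitnami/apps/wordpress/htdocs;
}

server {
    listen 80;
    server_name ~^(.*)\.example\.com$;
    root /opt/bitnami/apps/example_app;
}
```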
You'll definitely need to configure nginx to run PHP scripts. Since PHP is a preprocessing engine, your nginx configuration needs to know how to handle .php files.
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    root /var/www/html;
    index index.php index.html index.htm index.nginx-debian.html;
    server_name server_domain_or_IP;

    location / {
        try_files $uri $uri/ =404;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }

    location ~ /\.ht {
        deny all;
    }
}
reference: https://www.digitalocean.com/community/tutorials/how-to-install-linux-nginx-mysql-php-lemp-stack-in-ubuntu-16-04
Since you mentioned this was a WordPress website, you'll need to set up a LEMP stack.
Ok, so I'm a website newbie who just finished the django tutorial, and decided to try and publish my polls app on the net. So far I have a godaddy domain name which I'm trying to point to my amazon EC2 instance's elastic IP which is currently hosting my polls website.
Currently what I have set up is:
Amazon route 53: Hosted zone that points to mydomain.com with record sets of: name mydomain.com & www.mydomain.com and Value xx.xxx.xx.x
Godaddy: DNS zone file: A(Host) to my amazon elastic IP xx.xxx.xx.x, Nameservers to the 4 amazon route 53 hosted zone nameservers.
EC2 instance: running nginx and gunicorn to host the app.
My issue is that I can go to the website with Amazon's Elastic IP, but I cannot access it with the domain name (I get a bold "Welcome to nginx!" page whether I go to the home page or the /polls/1 page).
Looks about right. Have you followed the standard Gunicorn deployment configs for nginx?
http://docs.gunicorn.org/en/latest/deploy.html
You probably want something like this on your nginx configs:
http {
    include mime.types;
    default_type application/octet-stream;
    access_log /tmp/nginx.access.log combined;
    sendfile on;

    upstream app_server {
        server unix:/tmp/gunicorn.sock fail_timeout=0;
        # For a TCP configuration:
        # server 192.168.0.7:8000 fail_timeout=0;
    }

    server {
        listen 443 default;
        client_max_body_size 4G;
        server_name _;

        ssl on;
        ssl_certificate /usr/local/nginx/conf/cert.pem;
        ssl_certificate_key /usr/local/nginx/conf/cert.key;

        keepalive_timeout 5;

        # path for static files
        root /path/to/app/current/public;

        location / {
            # checks for static file, if not found proxy to app
            try_files $uri @proxy_to_app;
        }

        location @proxy_to_app {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://app_server;
        }

        error_page 500 502 503 504 /500.html;
        location = /500.html {
            root /path/to/app/current/public;
        }
    }
}
You want to point to your actual SSL cert and key paths (instead of /usr/local/nginx/conf/cert.pem and /usr/local/nginx/conf/cert.key). Also, you should point root at your Django project's static files directory instead of /path/to/app/current/public.
OK, I figured it out.
Nginx was proxying to 127.0.0.1:8000, while Gunicorn was serving on 127.0.0.1:8001, which caused the 502 error.
To fix the DNS issue, I had to go into my Amazon EC2 control panel and open up port 8000.
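The matched-port setup can be sketched as follows (a minimal sketch, assuming Gunicorn is bound to 127.0.0.1:8000 and using a hypothetical /path/to/polls for the Django project's collected static files; mydomain.com is from the question):

```nginx
# Gunicorn side, for reference: gunicorn --bind 127.0.0.1:8000 mysite.wsgi
server {
    listen 80;
    server_name mydomain.com www.mydomain.com;

    # Serve Django's collected static files directly from disk.
    location /static/ {
        root /path/to/polls;
    }

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:8000;  # must match Gunicorn's bind address
    }
}
```

The key point is that the `proxy_pass` address and Gunicorn's `--bind` address have to agree, otherwise nginx returns a 502.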