Ghost website not working on Ec2 with Amazon certificate and Route53 - amazon-web-services

I installed Ghost on my EC2 instance running Ubuntu 18 by following the official guide.
I didn't opt-in for a LetsEncrypt certificate though. I wanted to roll my own with the Amazon Certificate Manager and load-balance requests to the website via Route53 and a CloudFront distribution.
The issue is that the blog doesn't load - instead I am presented with the default nginx homepage.
This is my website config in /etc/nginx/sites-enabled:
server {
    listen 80;
    listen [::]:80;

    server_name paulrberg.com;
    root /var/www/ghost/system/nginx-root; # Used for acme.sh SSL verification (https://acme.sh)

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:2368;
    }

    location ~ /.well-known {
        allow all;
    }

    client_max_body_size 50m;
}
I suspect that the issue is the nginx configuration. The configuration Ghost generates may not be compatible with an Amazon certificate coupled with Route53 and CloudFront.
Is this even doable or do I have to use the LetsEncrypt certificate and give up on my infrastructure of choice?
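In principle the setup described above means CloudFront terminates TLS with the ACM certificate, so the instance only ever sees plain HTTP and nginx can keep listening on port 80. A minimal sketch of such an origin config (assuming paulrberg.com is configured as the distribution's alternate domain name; the hard-coded X-Forwarded-Proto is an assumption about how HTTPS awareness would reach Ghost):

```nginx
# Hypothetical origin config: CloudFront holds the ACM cert and terminates
# TLS, so this server block speaks plain HTTP only. server_name must match
# the Host header CloudFront forwards (the distribution's alternate domain
# name) - otherwise requests fall through to the default nginx server,
# which would show exactly the default nginx homepage.
server {
    listen 80;
    listen [::]:80;
    server_name paulrberg.com;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # $scheme is always "http" here because CloudFront connects over
        # HTTP; hard-coding https assumes viewers are forced to HTTPS at
        # the distribution (viewer protocol policy "Redirect HTTP to HTTPS").
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:2368;
    }
}
```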

Related

Does establishing a proxy pass to CloudFront via NGINX preserve the benefits of CloudFront

I have the following configuration on NGINX:
location ~ ^/app/ {
    proxy_pass_request_headers on;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_pass https://my-cloudfront-distribution.cloudfront.net;
}
Is this sufficient to keep all of CloudFront's caching features working? Or would CloudFront think that all the requests come from the NGINX server?
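As written, CloudFront does see the NGINX host as the client at the TCP level; the original client IP survives only in X-Forwarded-For, and only if the distribution is configured to forward that header to its origin. A hedged variant of the block above (the SNI and Host handling are assumptions about a stock distribution with no alternate domain name configured):

```nginx
location ~ ^/app/ {
    proxy_pass_request_headers on;
    # Append the real client IP to any existing chain instead of
    # overwriting it, so the origin behind CloudFront can recover it
    # (assuming the distribution forwards X-Forwarded-For).
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # The TLS SNI and the Host header must both match the distribution's
    # domain (or a configured alternate domain name), otherwise CloudFront
    # rejects the request with a 403.
    proxy_ssl_server_name on;
    proxy_set_header Host my-cloudfront-distribution.cloudfront.net;
    proxy_pass https://my-cloudfront-distribution.cloudfront.net;
}
```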

Elastic Beanstalk Amazon linux 2 with docker | nginx custom configuration

I am configuring AWS Elastic Beanstalk (Amazon Linux 2) with docker-compose.
I want to configure nginx like this:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;
    server_name _;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        #try_files $uri $uri/ =404;
    }

    location /admin {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:8080;
    }
}
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/platforms-linux-extend.html
So I just followed the docs above. My source code package directory structure is as follows:
.ebextensions
.platform/
    nginx/
        conf.d/
            myconf.conf
docker-compose.yml
After deployment, when I access my Elastic Beanstalk URL, I get a 502 Bad Gateway error.
When I checked the Elastic Beanstalk EC2 instance, my Docker container was running successfully, and I could access localhost:3000 on the instance.
I want to apply my custom nginx configuration in the Elastic Beanstalk Amazon Linux 2 environment, and I followed the settings in the documentation (https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/platforms-linux-extend.html), but it didn't work.
How can I apply my custom nginx configuration on Elastic Beanstalk Amazon Linux 2? Is there a step I am missing?
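One common pitfall worth checking (an assumption about the cause, not a confirmed diagnosis): on Amazon Linux 2, a file dropped into .platform/nginx/conf.d/ is included at the http level, so a full server { listen 80; ... } block there competes with the platform's own default server instead of replacing it. The documented extension points look roughly like this:

```
.platform/
    nginx/
        nginx.conf                  # replaces the platform's nginx.conf entirely
        conf.d/
            myconf.conf             # included at the http level (no server block here)
            elasticbeanstalk/
                myroutes.conf       # included inside the platform's default server block
docker-compose.yml
```

Under that assumption, moving just the two location blocks (without the surrounding server block) into a file under .platform/nginx/conf.d/elasticbeanstalk/ is one way to apply them without two default servers fighting over port 80.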

App service not being found by nginx service after restart on AWS ECS Fargate with service discovery

I have two services in my cluster: myapp-service and an nginx-service. I'm using service discovery to connect the two, and everything works just fine.
The problem happens when I deploy a new version of myapp-service and it comes up with a new private (and public) IP address. After the deploy, I see that the IP is correctly updated in Route 53, but when I try to access myapp through nginx it returns a bad gateway. When I look at the nginx logs in CloudWatch, I can see that nginx is trying to connect to the old private IP address of myapp-service.
Currently I'm not using any load-balancing or auto-scaling configuration.
There aren't any health checks for my containers in the task definition.
"Enable ECS task health propagation" is on.
This is my nginx configuration (default.conf); marketplace-service.local is my registry in Route 53.
upstream channels-backend {
    server marketplace-service.local:8000;
}

server {
    listen 80;

    location / {
        proxy_pass http://channels-backend;
    }
}
Can anybody help me discover what I'm missing here?
Thanks.
I'm not sure about the upstream part.
upstream channels-backend {
    server marketplace-service.local:8000;
}
But I'm sure about the server part. For Fargate, localhost is used, so add server_name localhost;. Then add the four proxy_set_header lines shown below to the location block.
server {
    listen 80;
    server_name localhost;

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://channels-backend;
    }
}
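A separate possibility worth checking (not part of the answer above, just a known nginx behavior that matches the stale-IP symptom in the logs): nginx resolves the hostname in an upstream block once at startup and caches the result indefinitely. A hedged sketch that forces re-resolution through the VPC DNS resolver (169.254.169.253 is the fixed Amazon-provided DNS address inside a VPC; the 10s validity is an arbitrary choice):

```nginx
server {
    listen 80;
    server_name localhost;

    # Re-resolve the service-discovery name instead of caching it forever.
    resolver 169.254.169.253 valid=10s;
    set $backend http://marketplace-service.local:8000;

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        # Using a variable in proxy_pass makes nginx resolve the name at
        # request time, honoring the resolver's TTL, so a redeployed task's
        # new private IP is picked up without reloading nginx.
        proxy_pass $backend;
    }
}
```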

How to get nginx on ec2 to work with https

This is what I have setup on AWS in a nutshell:
I have an EC2 instance (Windows Server), let's call it WebAppInstance, which hosts a .NET-based web API application.
Then I have another EC2 instance (Windows Server) with another instance of the same web app; let's call it WebAppInstanceStaging.
Now, in order to achieve canary deployment, I created another EC2 instance (Ubuntu) to host nginx, which redirects each request to either WebAppInstance or WebAppInstanceStaging based on a request header.
I have put my nginx behind an ELB to make use of the SSL certificate I have in AWS Certificate Manager (ACM), since it cannot be used directly on an EC2 instance. Then I created a Route 53 record set in the domain registered with AWS (*.mydomain.com).
In Route 53 I created a record set for myapp.mydomain.com.
Now when I access http://myapp.mydomain.com I am able to reach the site, but when I try to access https://myapp.mydomain.com I see an error saying This site can't be reached (ERR_CONNECTION_REFUSED).
Below is the configuration of my nginx:
upstream staging {
    server myappstaging.somedomain.com:443;
}

upstream prod {
    server myapp.somedomain.com:443;
}

# map to different upstream backends based on header
map $http_x_server_select $pool {
    default "prod";
    staging "staging";
}

server {
    listen 80;
    server_name <publicIPOfMyNginxEC2> <myapp.mydomain.com>;

    location / {
        proxy_pass https://$pool;

        # standard proxy settings
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-NginX-Proxy true;
        proxy_redirect off;

        proxy_connect_timeout 600;
        proxy_send_timeout 600;
        proxy_read_timeout 600;
        send_timeout 600;
    }
}
I've been trying to figure this out for more than a day. What am I missing?

TLS on Elastic Beanstalk running reverse proxy

I want to add TLS to my AWS Elastic Beanstalk application. It is a Node.js app running behind an nginx proxy server.
Here are the steps I've completed:
1. Get a wildcard certificate from AWS Certificate Manager.
2. Add the certificate in the load balancer configuration section of my EB instance.
The relevant part of my nginx config is:
files:
  /etc/nginx/conf.d/proxy.conf:
    mode: "000644"
    content: |
      upstream nodejs {
        server 127.0.0.1:8081;
        keepalive 256;
      }

      server {
        listen 8080;

        proxy_set_header X-Forwarded-Proto $scheme;
        if ( $http_x_forwarded_proto != 'https' ) {
          return 301 https://$host$request_uri;
        }

        location / {
          proxy_pass http://nodejs;
          proxy_http_version 1.1;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
      }
When I try to access my application using https, I get a 408: Request Timed Out.
It is my understanding that to enable SSL on nginx we need to add the cert along with the pem file and listen on port 443. But since I'm using an ACM certificate, I don't have the cert and pem files.
What am I missing to add in my nginx.conf for this to work?
Thanks
In the load balancer listener configuration, for the port 443 listener, the "Instance Port" setting should be 80.
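For reference, that listener mapping (HTTPS 443 at the load balancer, terminated with the ACM certificate and forwarded as plain HTTP to instance port 80) can also be created from the CLI. A sketch assuming a Classic Load Balancer named my-elb; the name and certificate ARN are placeholders:

```shell
aws elb create-load-balancer-listeners \
    --load-balancer-name my-elb \
    --listeners "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=80,SSLCertificateId=arn:aws:acm:region:account-id:certificate/certificate-id"
```

With this mapping, TLS terminates at the load balancer, so the nginx config on the instance keeps listening on plain HTTP port 80 and needs no certificate files of its own.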