Is an SSL certificate necessary for communication between the load balancer and servers? - google-cloud-platform

I am using the Google Cloud Platform to implement a REST API which is accessible only through HTTPS, using a load balancer.
My setup looks like this:
VM instances:
2 instances which run the same node.js server. One outputs "server1", the other outputs "server2".
Instance groups:
One instance group which contains both VMs.
Back-end services:
One back-end service which uses the instance groups and a simple health check.
Load balancing:
One load balancer.
Frontend: HTTPS PUBLIC_IP:443 my-ssl-certificate
Backend: My back-end service
Host and path rules: All unmatched (default) => My back-end service (default)
I have now configured my domain's (api.domain.com) DNS with an A record for PUBLIC_IP. The output of https://api.domain.com successfully switches between "server1" and "server2". The load balancer and the HTTPS certificate my-ssl-certificate are working great! my-ssl-certificate is a Let's Encrypt SSL certificate for my domain api.domain.com.
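For reference, uploading a certificate like this to GCP typically looks like the following sketch (the file paths assume the default Let's Encrypt locations):
# Sketch: upload an existing Let's Encrypt certificate as a GCP SSL certificate resource.
gcloud compute ssl-certificates create my-ssl-certificate \
  --certificate=/etc/letsencrypt/live/api.domain.com/fullchain.pem \
  --private-key=/etc/letsencrypt/live/api.domain.com/privkey.pem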
Question: Do I need 2 more certificates for my 2 VM instances when they communicate with the load balancer? Or is this communication internal and requires no further SSL certificates? If I do need those certificates, how do I set them up with IPs?
I ask because accessing my 2 VM instances' IPs via https://VM1_PUBLIC_IP results in a Chrome warning that the certificate is not valid.

If you are using the load balancer with SSL certificates, there is no need for public-facing VMs: keep them on internal IPs only (no external IPs) and let the load balancer and the VMs communicate over those internal IPs.
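In other words, the HTTPS load balancer terminates TLS at the edge; if the backend service's protocol is HTTP, the hop from the LB to the instances needs no certificate. A minimal sketch, assuming a global backend service (the name is taken from the question):
# Sketch: terminate TLS at the LB and speak plain HTTP to the instances.
gcloud compute backend-services update my-backend-service \
  --protocol=HTTP \
  --global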

Related

Call Rest API using AWS Load Balancer default DNS

I am new to AWS
I am developing a PoC for AWS server & PC client communication.
My AWS server app (running in an Ubuntu EC2 instance) has exposed a REST API (the API path is /TestAPI).
If I call the REST API in my C# code with "http://EC2 Ubuntu IP:8080/TestAPI", it's working fine; I am getting data.
I have created an Application Load Balancer & attached a target group where the Ubuntu EC2 instance is added as a target.
I want to call the Rest API using Load Balancer default DNS
But if I call it like below, the EC2 instance REST API is not working:
"http://Load Balancer Default DNS:8080/TestAPI"
"http://Load Balancer Default DNS/TestAPI"
Kindly help
You need to check the health check of the target group associated with your load balancer.
The load balancer will not forward traffic to the instances in the target group until it deems them healthy.
Since you are using port 8080 for your application, you need to set the health check to port 8080, and you need to set the health check path. By default it is /; if your application responds on /, that path is fine, otherwise provide a path that is accessible so the ALB can successfully send requests and verify it. See the sketch below.
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/target-group-health-checks.html
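A minimal sketch of that with the AWS CLI, assuming a placeholder target group ARN and that the app answers on /:
# Sketch: point the target group's health check at the application port.
aws elbv2 modify-target-group \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/my-tg/0123456789abcdef \
  --health-check-protocol HTTP \
  --health-check-port 8080 \
  --health-check-path /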

How to add Cloud CDN to GCP VM? Always no load balancer available

I have a running Web server on Google Cloud. It's a Debian VM serving a few sites with low-ish traffic, but I don't like Cloudflare. So, Cloud CDN it is.
I created a load balancer with static IP.
I did all the steps from the guides I've found. But when it comes time to add an origin to Cloud CDN, no load balancer is available because it's "unhealthy", as seen by hovering over the yellow triangle on the LB status page: "1 backend service is unhealthy".
At this point, the only option is to choose Create a Load Balancer.
I've created several load balancers with different attributes, thinking that might be the cause, but no luck. They all get the "1 backend service is unhealthy" tag, and thus are unavailable.
---Edit below---
During LB creation, I don't see anything that would tell the LB about the VM, except in the certificate step (see below). Nowhere does it ask for any field that would point to the VM.
I created another LB just now, and here are those settings. It finishes, then it's marked unhealthy.
Type
HTTP(S) Load Balancing
Internet facing or internal only?
From Internet to my VMs
(my VM is not listed in backend services, so I create one... is this the problem?)
Create backend service
Backend type: Instance group
Port numbers: 80,443
Enable Cloud CDN: checked
Health check: create new: https, check /
Simple host and path rule: checked
New Frontend IP and port
Protocol: HTTPS
IP: v4, static reserved and issued
Port: 443
Certificate: Create New: Create Google-managed certificate, mydomain.com and www.mydomain.com
The load balancer's unhealthy state could mean that your LB's health check probe is unable to reach your backend service (your Debian VM in this case).
If your backend service looks good, the problem is likely your firewall configuration.
Check whether your firewall rules allow the health check probes' IP address ranges.
Refer to the document below for more detailed information.
Required firewall rule
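As a sketch, such a rule could look like this (the rule name and network are assumptions; 130.211.0.0/22 and 35.191.0.0/16 are the documented Google health check probe ranges):
# Sketch: allow Google Cloud health check probes to reach the backend VM.
gcloud compute firewall-rules create allow-health-checks \
  --network=default \
  --action=ALLOW \
  --direction=INGRESS \
  --rules=tcp:80,tcp:443 \
  --source-ranges=130.211.0.0/22,35.191.0.0/16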

Is it possible to run multiple web instances on the same AWS EC2 instance?

Background
I have followed this tutorial https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-cli-tutorial-ec2.html, composed a Docker Compose file, and got website A (composed of 4 containers) up and running, serving one of my clients.
However, I now have another client for whom I need to host another website, website B, using similar strategies as above.
Here is the current running service of ECS / EC2
and here are the containers up and running, serving website A now
Questions & concerns
Website A is now one service in the EC2 instance under my only cluster. Can I use the same EC2 instance to run website B (as another service on that EC2 instance)?
If so, how are the ports / inbound / outbound traffic managed? Website A already occupies ports 80, 443, 27017 and 3002 of the EC2 instance for inbound traffic; if website B's containers also run on the same EC2 instance, can I still use ports 80, 443, 27017 and 3002 for website B? I have read the docs of the ALB (Application Load Balancer); it seems it can fulfill the requirement. Am I on the right track?
And for the domain names: through Route 53 I have registered a domain www.websiteA.com to serve the first website, and I have also registered www.websiteB.com in preparation for website B. In my case, I guess I need to configure the new domain B to point to the same EC2 IP?
During my deployment of website B, I do not want to affect the availability of website A, can it be maintained during the process of deploying website B's containers?
I want to get all the concepts clear before kick-starting the deployment of website B. I appreciate any help, thank you.
Follow-up actions
I eventually decided to use an AWS Application Load Balancer to solve my issue, and have the following configuration set up.
I first looked into the load balancer
and configured it as follows:
I set up a load balancer which listens for requests using the HTTP protocol on incoming port 80; whenever a user accesses the web server (i.e. the frontend container), the listener forwards that request to the target group (i.e. http-port-80-access).
And here is the target group (http-port-80-access), which contains a registered target (currently my EC2 instance running the containers). The host port of the container is 32849, which in turn is used by the associated load balancer (web-access-load-balancer) for dynamic port mapping.
I have also configured 1 more rule on top of the default rule: whenever a user accesses the URL of website A, the load balancer forwards the request to the target group (http-port-80-access).
With all of that set, the health test also passed. I then used the following ecs-cli compose service up command to wire up the load balancer with the service:
ecs-cli compose --file ./docker-compose-aws-prod.yml --cluster my-ecs-cluster-name --ecs-profile my-ecs-profile --cluster-config my-cluster --project-name my-project --ecs-params ./ecs-params.yml service up --target-group-arn arn:aws:elasticloadbalancing:us-east-2:xxxxxxxxx:targetgroup/http-port-80-access/xxxxxxxx --container-name frontend --container-port 80
where frontend is the service name of the frontend container of website A
However, it turns out that when I access www.websiteA.com through the browser, I get nothing but ERR_CONNECTION_REFUSED; www.websiteA.com:32849 is accessible, but that is not what I want.
I am wondering which part I configured wrongly.
If you are sending traffic directly to the instance then you would have to host on a different port. You should consider using an ALB, which would allow you to use dynamic ports in ECS. The ALB can accept traffic from ports 80 and 443 for different domains and route the traffic to different containers based on things like the domain.
Website A is now one service in the EC2 instance under my only cluster. Can I use the same EC2 instance to run website B (as another service on that EC2 instance)?
Indeed. However, as you already found out, you have to split the traffic based on something (hostname, path, ...). That's where the reverse proxy comes into play (either managed - ALB, NLB - or your own - nginx, haproxy, ...).
It's simple for the HTTP traffic (routing based on the host), as in the sketch below.
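For example, a host-based listener rule on an ALB could look like this (all ARNs and the priority are placeholders):
# Sketch: route requests for website B's hostname to its own target group.
aws elbv2 create-rule \
  --listener-arn arn:aws:elasticloadbalancing:us-east-2:111122223333:listener/app/my-alb/0123abcd/4567efgh \
  --priority 20 \
  --conditions Field=host-header,Values=www.websiteB.com \
  --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-2:111122223333:targetgroup/website-b/89abcdef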
If so, how are the ports / inbound / outbound traffic managed? Website A already occupies ports 80, 443, 27017 and 3002 of the EC2 instance for inbound traffic; if website B's containers also run on the same EC2 instance, can I still use ports 80, 443, 27017 and 3002 for website B?
That assumes the ports 27017 and 3002 use their own binary protocols (not HTTP); you will have to handle those separately.
You can in theory define a port mapping (map a different public listening port to these custom ports), but then you need to either use an NLB (Network Load Balancer) or expose the ports on the host's public IP. In the latter case I'm not sure that with ECS you can guarantee which IP is used (e.g. when you have multiple worker nodes).
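As a sketch of that port-mapping idea with an NLB (the listener port 27018 for website B and all ARNs are purely illustrative):
# Sketch: expose website B's binary-protocol service on a different public port.
aws elbv2 create-listener \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-2:111122223333:loadbalancer/net/my-nlb/0123abcd \
  --protocol TCP \
  --port 27018 \
  --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-2:111122223333:targetgroup/website-b-mongo/4567efgh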
I have read the docs of the ALB (Application Load Balancer); it seems it can fulfill the requirement. Am I on the right track?
The ALB is a layer 7 (HTTP) reverse proxy; it is imho the best option for web access, but not for binary protocols.
I guess I need to configure the new domain B to point to the same EC2 IP?
that's the plan
During my deployment of website B, I do not want to affect the availability of website A, can it be maintained during the process of deploying website B's containers?
shouldn't be a problem
Run website B on different ports. To allow end users to interact with website B without specifying port numbers, use a reverse proxy. See AWS CloudFront.

AWS Elastic Load Balancer path_beg rule

I'm using the haproxy service for load balancing Tomcat applications. Since we moved to AWS, I want to use one load balancing service (Network Load Balancer) instead of a haproxy EC2 instance.
Everything works except for two Tomcat microservices which both listen on port 8080. In haproxy it was as simple as setting path_beg (like below), but in ELB I'm not able to find a solution to add both services on port 8080 under the same ELB.
frontend app
    bind *:8080
    acl tool_tomcat path_beg /tool
    use_backend tool_app_backend if tool_tomcat
    acl approval_tomcat path_beg /approval
    use_backend apr_app_backend if approval_tomcat
The Network Load Balancer operates on layer 4 and is not aware of paths. What you want is the Application Load Balancer, which operates on layer 7 and does have path-based routing on its listeners.
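A sketch of the equivalent of the two path_beg rules as ALB listener rules (ARNs and priorities are placeholders):
# Sketch: replicate the haproxy path_beg routing on an ALB listener.
aws elbv2 create-rule \
  --listener-arn arn:aws:elasticloadbalancing:eu-west-1:111122223333:listener/app/my-alb/0123abcd/4567efgh \
  --priority 10 \
  --conditions Field=path-pattern,Values='/tool*' \
  --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:eu-west-1:111122223333:targetgroup/tool-app/89abcdef
aws elbv2 create-rule \
  --listener-arn arn:aws:elasticloadbalancing:eu-west-1:111122223333:listener/app/my-alb/0123abcd/4567efgh \
  --priority 11 \
  --conditions Field=path-pattern,Values='/approval*' \
  --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:eu-west-1:111122223333:targetgroup/apr-app/89abcdef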

AWS API Gateway to .NET Core Web Api running in ECS

EDIT
I now realise that I need to install a certificate on the server and validate the client certificate separately. I'm looking at https://github.com/xavierjohn/ClientCertificateMiddleware
I believe the certificate has to be from one of the CA's listed in AWS doco - http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-supported-certificate-authorities-for-http-endpoints.html
This certificate allows API Gateway to establish a HTTPS connection to the instance and it passes along the client certificate that can be validated.
ORIGINAL POST
I am trying to configure a new microservices environment and I'm having a few issues.
Here is what I'm trying to achieve:
Angular website connects to backend API via API Gateway (URL: gateway.company.com.au)
API Gateway is configured with 4 stages - DEV, UAT, PreProd and PROD
The resources in API Gateway are an HTTP proxy to the back-end services via a Network Load Balancer. Each service in each stage gets a different port allocated: i.e. 30,000, 30,001, etc. for DEV, 31,000, 31,001, etc. for UAT
The network load balancer has a DNS of services.company.com.au
AWS ECS hosts the docker containers for the back-end services. These services are .NET Core 2.0 Web API projects
The ECS task definition specifies the container image to use and has a port mapping configured - Host Port: 0, Container Port: 4430. A host port of 0 is dynamically allocated by ECS (see the sketch after this list).
The network load balancer has a listener for each microservice port and forwards the request to a target group. There is a target group for each service for each environment.
The target group includes both EC2 instances in the ECS cluster and ports are dynamically assigned by ECS
This port is then mapped by ECS/Docker to the container port of 4430
In order to prevent clients from calling services.company.com.au directly, the API Gateway is configured with a Client Certificate.
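As a sketch, the dynamic mapping described above corresponds to a container definition like this (the family, image and memory values are placeholders):
# Sketch: host port 0 makes ECS pick an ephemeral host port for container port 4430.
aws ecs register-task-definition \
  --family microservice \
  --container-definitions '[{
    "name": "microservice",
    "image": "111122223333.dkr.ecr.us-east-2.amazonaws.com/microservice:latest",
    "memory": 512,
    "portMappings": [{ "hostPort": 0, "containerPort": 4430, "protocol": "tcp" }]
  }]'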
In my Web API, I'm building the web host as follows:
// Requires: using System.Net; using System.Text;
//           using System.Security.Cryptography.X509Certificates;
//           using Microsoft.AspNetCore.Server.Kestrel.Https;
.UseKestrel(options =>
{
    options.Listen(new IPEndPoint(IPAddress.Any, 4430), listenOptions =>
    {
        // The certificate body is the public certificate copied from the API
        // Gateway client certificate; note this yields an X509Certificate2
        // without a private key.
        const string certBody = "-----BEGIN CERTIFICATE----- Copied from API Gateway Client certificate -----END CERTIFICATE-----";
        var cert = new X509Certificate2(Encoding.UTF8.GetBytes(certBody));
        var httpsConnectionAdapterOptions = new HttpsConnectionAdapterOptions
        {
            // Accept (but don't require) a client certificate on the connection.
            ClientCertificateMode = ClientCertificateMode.AllowCertificate,
            SslProtocols = System.Security.Authentication.SslProtocols.Tls,
            ServerCertificate = cert
        };
        listenOptions.UseHttps(httpsConnectionAdapterOptions);
    });
})
My Dockerfile is:
FROM microsoft/aspnetcore:2.0
ARG source
WORKDIR /app
EXPOSE 80 443
COPY ${source:-obj/Docker/publish} .
ENTRYPOINT ["dotnet", "microservice.company.com.au.dll"]
When I use Postman to try and access the service, I get a 504 Gateway timeout. The CloudWatch log shows:
(e4d594b7-c8f3-11e7-8458-ef6f94e65b64) Sending request to http://microservice.company.com.au:30000/service
(e4d594b7-c8f3-11e7-8458-ef6f94e65b64) Execution failed due to an internal error
(e4d594b7-c8f3-11e7-8458-ef6f94e65b64) Method completed with status: 504
I've been able to get the following architecture working:
API Gateway
Application Load Balancer - path-based routing to direct to the right container
ECS managing ports on the load balancer
The container listening on HTTP port 80
Unfortunately, this leaves the services open on the DNS of the Application Load Balancer due to API Gateway being able to only access public load balancers.
I'm not sure where it's failing, but I suspect I haven't configured .NET Core/Kestrel correctly to terminate the SSL connection using the Client Certificate.
In relation to this overall architecture, it would make things easier if:
The public Application Load Balancer could be used with a HTTPS listener using the Client Certificate of API Gateway to terminate the SSL connection
API Gateway could connect to internal load balancers without using Lambda as a proxy
Any tips or suggestions will be considered but at the moment, the main goal is to get the first architecture working.
If more information is required, let me know and I will update the question.
The problem was caused by the security group attached to the EC2 instances that formed the ECS cluster not allowing the correct port range. The security group for each EC2 instance in the cluster needs to allow the ECS dynamic port range.
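A sketch of the kind of rule that fixes this (the security group ID and VPC CIDR are placeholders; check your container instance AMI's ephemeral port range, commonly 32768-65535):
# Sketch: allow the ECS dynamic host port range from inside the VPC so the
# load balancer's health checks and traffic can reach the tasks.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 32768-65535 \
  --cidr 10.0.0.0/16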