Call Rest API using AWS Load Balancer default DNS - amazon-web-services

I am new to AWS.
I am developing a PoC for communication between an AWS server app and a PC client.
My AWS server app (running on an Ubuntu EC2 instance) exposes a REST API (the path is /TestAPI).
If I call the REST API in my C# code with "http://EC2 Ubuntu IP:8080/TestAPI", it works fine and I get data.
I have created an Application Load Balancer and attached a target group in which the Ubuntu EC2 instance is registered as a target.
I want to call the REST API using the load balancer's default DNS name.
But if I call it like below, the EC2 instance's REST API does not respond:
"http://Load Balancer Default DNS:8080/TestAPI"
"http://Load Balancer Default DNS/TestAPI"
Kindly help.

You need to check the health check of the target group associated with your load balancer.
The load balancer will not forward traffic to the instances in a target group until it deems them healthy.
Since your application listens on port 8080, you need to set the health check to port 8080 and specify the health check path. By default it is /; if your application responds on /, that path is fine, otherwise provide a path that is accessible, so that the ALB can successfully probe it and mark the target healthy.
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/target-group-health-checks.html
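As a concrete sketch, the health check can be pointed at port 8080 with the AWS CLI. The target group ARN and the /TestAPI path below are placeholders; substitute your own values:

```shell
# Point the target group's health check at the application's port and path.
# ARN and path are placeholders -- substitute your own values.
aws elbv2 modify-target-group \
  --target-group-arn arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/my-targets/abcdef1234567890 \
  --health-check-protocol HTTP \
  --health-check-port 8080 \
  --health-check-path /TestAPI \
  --healthy-threshold-count 2 \
  --unhealthy-threshold-count 2

# Then confirm the target turns healthy before testing through the ALB DNS name:
aws elbv2 describe-target-health \
  --target-group-arn arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/my-targets/abcdef1234567890 \
  --query 'TargetHealthDescriptions[].TargetHealth.State'
```

Also note that the ALB listener decides which URL works: with a listener on port 80, use "http://Load Balancer Default DNS/TestAPI"; the :8080 form only works if you create a listener on 8080.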

Related

gRPC in AWS Elastic Beanstalk load balancer / network setup

I have been at this for a couple of days and just can't figure it out.
I have tried this with gRPC in Node.js and Java on Elastic Beanstalk. On a normal VPS it's quite simple: just create a proxy with grpc_pass and it's set. I would like to move my microservices over to AWS Elastic Beanstalk but can't get gRPC to connect.
What I did:
Created a new Java environment on Elastic Beanstalk and deployed my service. The gRPC server is on port 9086.
I have looked around the net and the closest thing I could find to a tutorial is New – Application Load Balancer Support for End-to-End HTTP/2 and gRPC but it does not cover how to setup the load balancer for gRPC for an instance.
Using the guide I made a few changes to the Target group like so:
Created a Target Group using the instances configuration
I have tried building the target group with both HTTP and HTTPS for port 9086.
After creating the target group I registered the instance on it.
After that I went to the load balancer and created a listener on port 443, forwarding it to the target group. Port 443 is also open in the security group.
The security listener settings point to the AWS certificate allocated to the URL.
I have tried both HTTP and HTTPS on the target group on port 9086, but all my gRPC client calls fail with status 13 or 14, meaning the request is not going through. I have confirmed in the logs that the gRPC server is up and running.
Does anybody know where I am going wrong here? I feel like it's something simple that I am missing; I just can't find any tutorials or documentation on the proper way to set this up. Is what I am trying to do even possible on AWS Elastic Beanstalk?
From what I can see in your screenshots, your ALB targets were added but did not pass the health check, meaning they are not yet allowed to accept any traffic.
You can find a good sample of a gRPC application with an implemented health check in the file attached to this article:
https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-a-grpc-based-application-on-an-amazon-eks-cluster-and-access-it-with-an-application-load-balancer.html#attachments-abf727c1-ff8b-43a7-923f-bce825d1b459
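One thing worth double-checking (a sketch, with placeholder names and IDs): the target group must be created with protocol version GRPC, otherwise the ALB speaks plain HTTP/1.1 to the backend and gRPC calls fail much as described. With the AWS CLI:

```shell
# Create a target group that speaks gRPC to the instance on port 9086.
# The name and VPC ID are placeholders; /AWS.ALB/healthcheck is the default
# health check path for gRPC target groups, and the matcher accepts any
# gRPC status code (0-99) as healthy.
aws elbv2 create-target-group \
  --name grpc-tg \
  --protocol HTTP \
  --protocol-version GRPC \
  --port 9086 \
  --vpc-id vpc-0123456789abcdef0 \
  --health-check-path /AWS.ALB/healthcheck \
  --matcher GrpcCode=0-99
```

The listener in front of a gRPC target group must be HTTPS, since the ALB only negotiates HTTP/2 (which gRPC requires) over TLS.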

See load balancer's health check status

I am using a reverse proxy in front of my load balancer. Currently I just make a TCP connection from the reverse proxy to the LB to check its health; if it succeeds, I send the request to the main load balancer. I want to check whether my main load balancer has any servers running behind it or not. If not, I want to redirect those requests to another server fleet. Is there an API or anything else that the AWS load balancer exposes to report the status of its targets?
Go to the EC2 console, and then to the target groups section. Select your target group. From there, you should be able to see which instances are passing the health check.
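There is also an API for this, which fits the reverse-proxy use case better than the console: the ELBv2 DescribeTargetHealth call. A sketch with the AWS CLI, using a placeholder target group name:

```shell
# Look up the target group ARN by name (placeholder name), then query the
# health state of every registered target.
TG_ARN=$(aws elbv2 describe-target-groups \
  --names my-target-group \
  --query 'TargetGroups[0].TargetGroupArn' --output text)

aws elbv2 describe-target-health \
  --target-group-arn "$TG_ARN" \
  --query 'TargetHealthDescriptions[].{id:Target.Id,state:TargetHealth.State}'
```

A state of "healthy" means the target is receiving traffic; anything else ("unhealthy", "initial", "draining") means it is not.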

How to correctly set up a load balancer in ECS?

I have 3 services that I need to deploy: an API on port 80, a dashboard on port 3333 and a webpage on port 3000.
I'm using ECS and created a cluster. For my API I created a task definition and a service pointing to a load balancer. Everything works fine.
Now I need to deploy also the dashboard and web page: do I need to create a load balancer for each service?
I saw that by creating a task definition for my dashboard everything was working fine, but I couldn't create a custom address (dashboard.example.com) for that service, since in Route 53 I'm able to link a URL only to the load balancer.
So now I've created a new load balancer just for the dashboard service and everything is working fine (well, I have some problems with the ports, but it still seems to work).
So my question is: is what I'm doing correct? Is it normal to have a load balancer for each service, or is that too much? Or should I stick with one load balancer for the entire cluster and find a different way to assign addresses to my services?
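For what it's worth, one common pattern (a sketch, not the only answer) is a single ALB with host-based listener rules: each hostname forwards to a different service's target group, and Route 53 points all the hostnames at the same ALB. With placeholder ARNs and hostname:

```shell
# Forward dashboard.example.com to the dashboard service's target group on a
# shared ALB. Listener and target group ARNs are placeholders.
aws elbv2 create-rule \
  --listener-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/1234567890abcdef/1234567890abcdef \
  --priority 10 \
  --conditions Field=host-header,Values=dashboard.example.com \
  --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/dashboard-tg/1234567890abcdef
```

Each service then gets its own Route 53 alias record pointing at the one ALB, avoiding a load balancer per service.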

How to make a specific port publicly available within AWS

I have my React website hosted on AWS over HTTPS using a Classic Load Balancer and CloudFront, but I now need port 1234 opened as well. When I currently browse my domain on port 1234, the page cannot be displayed. The reason I want port 1234 opened is that this is where my Node.js web server is running, for React to communicate with.
I tried adding port 1234 to my load balancer listener settings, although it made no difference. Noticeably, the load balancer health check panel seems to have only one value, which is currently HTTP:80/index.html. I assume the load balancer can listen on both ports 80 and 1234 (even though it can only perform a health check on one port number)?
Do I need to use security groups or something else to open up the port? Please help; any advice is much appreciated.
Many thanks,
Load balancer settings
Infrastructure
I am using the following:
EC2 (free tier) with the two code projects installed (React website and Node server on the same machine in different directories)
Certificate created (using Certificate Manager)
I have created a CloudFront distribution and verified it using email. My certificate was selected in CloudFront as the custom SSL certificate
I have a Classic Load Balancer (the instance points to my only EC2) and the status is InService. When I visit the load balancer DNS name I see my React website. The load balancer listens on HTTP port 80. I've added port 1234, but this didn't help
Note:
Please note this project is to learn AWS, React and Node.js, so if anything looks strange please point it out
EC2 instance screenshot
Security group screenshot
Load balancer screenshot
Target group screenshot
An attempt to register a target group
Thank you for having clarified your architecture.
I would keep CloudFront out of the game for now and make sure your setup works with just the load balancer. Once everything is configured correctly, you can easily add CloudFront as a next step. In general, for all things in IT, it is easier to build a simple system that works and increase complexity one step at a time than to debug a complex system that does not work.
The idea is to have an Application Load Balancer with two listeners, one for the web (TCP 80) and one for the API (TCP 1234). The ALB will have two target groups (one for each port on your EC2 instance) and you will create listener rules to forward the correct port to the correct target group. Please read "Application Load Balancer components" to understand how ALBs work.
Here are a couple of things to check:
be sure you have two listeners and two target groups on your Application Load Balancer
the load balancer must be in a security group allowing TCP 80 and TCP 1234 from anywhere (0.0.0.0/0) (let's say SG-001)
the EC2 instance must be in a security group allowing TCP connections on ports 1234 (for the API) and 80 (for the web site) only from source SG-001 (just the load balancer)
After having written all this, I realise you are using a Classic Load Balancer. This should work as well; just be sure your EC2 instance has the correct security group (two rules, one for each port)
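The two-listener / two-target-group setup above can be sketched with the AWS CLI as follows (all names, ARNs and the VPC ID are placeholders):

```shell
# One target group per backend port on the EC2 instance.
aws elbv2 create-target-group --name web-tg --protocol HTTP --port 80 \
  --vpc-id vpc-0123456789abcdef0
aws elbv2 create-target-group --name api-tg --protocol HTTP --port 1234 \
  --vpc-id vpc-0123456789abcdef0

# One listener per public port, each forwarding to its own target group.
# $ALB_ARN, $WEB_TG_ARN and $API_TG_ARN are the ARNs returned above.
aws elbv2 create-listener --load-balancer-arn "$ALB_ARN" \
  --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn="$WEB_TG_ARN"
aws elbv2 create-listener --load-balancer-arn "$ALB_ARN" \
  --protocol HTTP --port 1234 \
  --default-actions Type=forward,TargetGroupArn="$API_TG_ARN"
```

Register the EC2 instance in both target groups, and remember the security group rules described above so traffic can actually reach both ports.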

SSL certificate for communication between load balancer and servers necessary?

I am using the Google Cloud Platform to implement a REST API which is accessible through HTTPS only using a load balancer.
My setup looks like this:
VM instances:
2 instances which run the same Node.js server. One outputs "server1", the other outputs "server2".
Instance groups:
One instance group which contains both VMs.
Back-end services:
One back-end service which uses the instance groups and a simple health check.
Load balancing:
One load balancer.
Frontend: HTTPS PUBLIC_IP:443 my-ssl-certificate
Backend: My back-end service
Host and path rules: All unmatched (default) => My back-end service (default)
I then configured my domain's (api.domain.com) DNS with an A record for PUBLIC_IP. The output of https://api.domain.com successfully switches between "server1" and "server2". The load balancer and the HTTPS certificate my-ssl-certificate are working great! my-ssl-certificate is a Let's Encrypt SSL certificate for my domain api.domain.com.
Question: Do I need 2 other certificates for my 2 VM instances, when they communicate with the load balancer? Or is this communication internally and doesn't require further SSL-certificates? If I need those certificates, how do I set them up with IPs?
Because accessing my 2 VM instances' IPs via https://VM1_PUBLIC_IP results in a Chrome warning that the certificate is not valid.
If you are using a load balancer with SSL certificates, then there is no need for public-facing VMs: you should keep them in private subnets, and communication between the LB and the VMs should happen over private IPs.