I have 3 services that I need to deploy: an API on port 80, a dashboard on port 3333 and a webpage on port 3000.
I'm using ECS and created a cluster. For my API I created a task definition and a service pointing to a load balancer. Everything works fine.
Now I also need to deploy the dashboard and the web page: do I need to create a load balancer for each service?
I saw that by creating a task definition for my dashboard everything was working fine, but I couldn't give that service a custom address (dashboard.example.com), since in Route 53 I can only point a record at the load balancer.
So now I've created a new load balancer just for the dashboard service and everything works (well, I have some problems with the ports, but it still seems to work).
So my question is: is what I'm doing correct? Is it normal to have a load balancer for each service, or is it too much? Or should I stick with one load balancer for the entire cluster and find a different way to assign addresses to my services?
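For reference, one ALB can usually serve all three services via host-based listener rules; a minimal AWS CLI sketch (every ARN and name below is a placeholder for your own resources):

```shell
# Assumes: one ALB with an HTTP listener, one target group per ECS service.
# All ARNs are placeholders.

# Send dashboard.example.com traffic to the dashboard target group
aws elbv2 create-rule \
  --listener-arn arn:aws:elasticloadbalancing:us-east-1:111111111111:listener/app/my-alb/abc/def \
  --priority 10 \
  --conditions Field=host-header,Values=dashboard.example.com \
  --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/dashboard/123

# Send www.example.com traffic to the web target group
aws elbv2 create-rule \
  --listener-arn arn:aws:elasticloadbalancing:us-east-1:111111111111:listener/app/my-alb/abc/def \
  --priority 20 \
  --conditions Field=host-header,Values=www.example.com \
  --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/web/456
```

In Route 53 you would then point each hostname at the same load balancer with an alias record, and the ALB routes by Host header.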
I am new to AWS
I am developing a PoC for AWS server & PC client communication.
My AWS server app (running on an Ubuntu EC2 instance) exposes a REST API (the API name is /TestAPI).
If I call the REST API from my C# code as "http://EC2 Ubuntu IP:8080/TestAPI", it works fine and I get data.
I have created an Application Load Balancer and attached a target group in which the Ubuntu EC2 instance is registered as a target.
I want to call the REST API using the load balancer's default DNS name.
But if I call it like below, the EC2 instance's REST API does not work:
"http://Load Balancer Default DNS:8080/TestAPI"
"http://Load Balancer Default DNS/TestAPI"
Kindly help
You need to check the health check of the target group associated with your load balancer.
The load balancer will not forward traffic to the instances in a target group until it deems them healthy.
Since your application listens on port 8080, set the health check to port 8080 and specify a health check path. By default the path is /; if your application responds on /, that is fine, otherwise provide a path that is reachable so the ALB can probe it successfully and mark the target healthy.
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/target-group-health-checks.html
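As a sketch with the AWS CLI (the target group ARN is a placeholder), pointing the health check at the app's actual port and path and then inspecting target health:

```shell
# Placeholder ARN; aim the target group's health check at the app on 8080.
aws elbv2 modify-target-group \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/my-targets/abc123 \
  --health-check-protocol HTTP \
  --health-check-port 8080 \
  --health-check-path /TestAPI

# Targets must show "healthy" here before the ALB forwards any traffic.
aws elbv2 describe-target-health \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/my-targets/abc123
```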
I have my server application deployed in AWS with Beanstalk.
I'm using Beanstalk with Application Loadbalancer.
Beanstalk is very handy in auto-configuring everything for me and I like to use it, but
every Beanstalk instance currently runs NGINX to proxy requests. Since I already have a load balancer that forwards requests to my server and is responsible for SSL certificates, I don't see why I need NGINX, and I want to remove it from the configuration (or at least not use it between the load balancer and the application server).
Moreover, during my load testing under high load, NGINX causes me trouble (it takes a lot of CPU time and complains about worker_connections).
But I can't find any option to use Beanstalk with a load balancer without NGINX.
I've fixed my problem by configuring the load balancer in my Elastic Beanstalk environment. My (Java) application was listening on port 5000, NGINX redirected from 80 to 5000, and the load balancer sent all requests to port 80.
So I had the following configuration by default:
LB -> 80:NGINX -> 5000:Java server
I changed the process port in the LB settings from 80 to 5000, so the configuration now looks like LB -> 5000:Java server, and the LB forwards all requests directly to my service.
You can see the configuration details in the
documentation, #processes paragraph.
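The same process-port change can be made with the AWS CLI via the `aws:elasticbeanstalk:environment:process:default` option namespace (the environment name below is a placeholder):

```shell
# Placeholder environment name; point the default process at port 5000
# so the ALB forwards directly to the Java app instead of NGINX on 80.
aws elasticbeanstalk update-environment \
  --environment-name my-env \
  --option-settings \
    Namespace=aws:elasticbeanstalk:environment:process:default,OptionName=Port,Value=5000
```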
I have a Node-Express website running on a microservices based architecture. I deployed the microservices on Amazon ECS cluster with one EC2 instance. The microservices sit behind an Application Load Balancer that routes external traffic correctly to the services. This system is working as expected except for one problem: I need to make a POST request from one service to the other. I am trying to use axios for this but I don't know what url to post to in axios. When testing locally, I just used axios.post('http://localhost:3000/service2',...) inside service 1 but how should I do it here?
There are various ways.
1. Use an Application Load Balancer in front of the services
In this method, you put your microservices behind one or more load balancers and, to send a request, you use the load balancer URL. You can use path-based routing on a single load balancer, or you can use multiple load balancers.
2. Use Service Discovery
In this method, you let the requester discover the target service. Service discovery can be done in various ways: using an ALB, Route 53, ECS service discovery, a key-value store, configuration management, or third-party software such as Consul.
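A sketch of option 1 with the AWS CLI (all ARNs and the ALB DNS name are placeholders): add a path-based rule so requests to /service2 reach service 2's target group, then have service 1 post to the ALB instead of localhost:

```shell
# Forward /service2* to service 2's target group (placeholder ARNs)
aws elbv2 create-rule \
  --listener-arn arn:aws:elasticloadbalancing:us-east-1:111111111111:listener/app/my-alb/abc/def \
  --priority 5 \
  --conditions Field=path-pattern,Values='/service2*' \
  --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/service2/789

# Service 1 then calls the ALB DNS name instead of localhost, e.g. in Node:
#   axios.post('http://my-alb-123456.us-east-1.elb.amazonaws.com/service2', body)
curl -X POST http://my-alb-123456.us-east-1.elb.amazonaws.com/service2
```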
I have my React website hosted in AWS on https using a classic load balancer and cloudfront but I now need to have port 1234 opened as well. When I currently browse my domain with port 1234 the page cannot be displayed. The reason I want port 1234 opened as this is where my nodeJs web server is running for React to communicate with.
I tried adding port 1234 to my load balancer's listener settings, although it made no difference. Noticeably, the load balancer health check panel only has one value, currently HTTP:80/index.html. I assume the load balancer can listen on both port 80 and port 1234 (even though it can only perform a health check on one port)?
Do I need to use target groups or something else to open up the port? Please help, any advice much appreciated.
Many thanks,
Load balancer settings
Infrastructure
I am using the following
EC2 (free tier) with the two code projects installed (React website and node server on the same machine in different directories)
Certificate created (using Certificate Manager)
I have created a CloudFront distribution and verified it using email. My certificate was selected in CloudFront as the custom SSL certificate.
I have a Classic Load Balancer (the instance points to my only EC2) and its status is InService. When I visit the load balancer's DNS name I see my React website. The load balancer listens on HTTP port 80. I've added port 1234, but this didn't help.
Note:
Please note this project is to learn AWS, React and NodeJs so if things are strange please indicate
EC2 instance screenshot
Security group screenshot
Load balancer screenshot
Target group screenshot
An attempt to register a target group
Thank you for having clarified your architecture.
I would keep CloudFront out of the game for now and make sure your setup works with just the load balancer. Once everything is configured correctly, you can easily add CloudFront as a next step. In general, for all things in IT, it is easier to build a simple system that works and increase complexity one step at a time than to debug a complex system that does not work.
The idea is to have an Application Load Balancer with two listeners, one for the web (TCP 80) and one for the API (TCP 1234). The ALB will have two target groups (one for each port on your EC2 instance), and you will create listener rules to forward each port to the correct target group. Please read "Application Load Balancer components" to understand how ALBs work.
Here are a couple of things to check:
be sure you have two listeners and two target groups on your Application Load Balancer
the load balancer must be in a security group allowing TCP 80 and TCP 1234 from anywhere (0.0.0.0/0) (let's say SG-001)
the EC2 instance must be in a security group allowing TCP connections on port 1234 (for the API) and 80 (for the web site) only from source SG-001 (just the load balancer)
After having written all this, I realise you are using a Classic Load Balancer. This should work as well; just be sure your EC2 instance has the correct security group (two rules, one for each port).
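The two-listener / two-target-group setup described above could be sketched with the AWS CLI like this (the VPC ID and all ARNs are placeholders):

```shell
# One target group per port on the instance (placeholder VPC ID):
aws elbv2 create-target-group --name web-tg --protocol HTTP --port 80 \
  --vpc-id vpc-12345678
aws elbv2 create-target-group --name api-tg --protocol HTTP --port 1234 \
  --vpc-id vpc-12345678

# One listener per port, each forwarding to its own target group
# (placeholder load balancer and target group ARNs):
aws elbv2 create-listener \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:111111111111:loadbalancer/app/my-alb/abc \
  --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/web-tg/111
aws elbv2 create-listener \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:111111111111:loadbalancer/app/my-alb/abc \
  --protocol HTTP --port 1234 \
  --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/api-tg/222
```

Don't forget to register the EC2 instance in both target groups and open both ports in the security groups, as described above.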
I have a small nodejs application containing a web socket server.
The app is hosted in an ECS container, so it is basically a Docker image running on an EC2 instance.
The WebSocket works as expected over ws://. I use port 5000 for this.
In order to use it on my SSL-secured website (HTTPS), I need to use a secured WebSocket connection over wss://.
To achieve that, I created a certificate on AWS (as many times before) and then created a load balancer.
I tried an application load balancer, a network load balancer and the classic load balancer (previous generation).
I read a few answers here on StackOverflow and followed the instructions as well as some tutorials found using google.
I tried a lot without success. Of course, this takes a lot of time, because creating a load balancer and other resources takes quite a while.
How do I create a load balancer on AWS pointing to my instance with wss://? Could someone please provide an example or instructions?
The solution posted at
https://anandhub.wordpress.com/2016/10/06/websocket-ebs/ appears to work well.
Rather than selecting HTTPS and HTTP, select 'SSL' on port 443 and 'TCP' on your application's port (e.g. 5000).
You'll need to load your key/certificate into AWS, and the load balancer will handle the secure part. I suspect you cannot take advantage of the LB's 'sticky' features with this method.
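The SSL-on-443 / TCP-on-5000 listener described above could be added to a Classic Load Balancer with the AWS CLI roughly like this (the load balancer name and certificate ARN are placeholders):

```shell
# Classic Load Balancer: terminate SSL on 443 and pass plain TCP through
# to the WebSocket app on 5000, so wss:// works without changing the app.
aws elb create-load-balancer-listeners \
  --load-balancer-name my-clb \
  --listeners "Protocol=SSL,LoadBalancerPort=443,InstanceProtocol=TCP,InstancePort=5000,SSLCertificateId=arn:aws:acm:us-east-1:111111111111:certificate/abcd-1234"
```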