Adding Load Balancer in ECS (New UI - V2) - amazon-web-services

Where can I add an Application Load Balancer for ECS (Fargate) in the new AWS web UI?
I was able to add one during Create Service in the old UI, but I can't find it in the new V2 UI.

You must select a task definition family first; then the Load Balancing - optional section appears beneath the Networking section.
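If the console layout keeps hiding it, the same attachment can be made through the API. A sketch of the boto3 `ecs.create_service` arguments that attach an existing ALB target group to a Fargate service (all names, ARNs, and subnets below are hypothetical placeholders):

```python
# Sketch: create_service kwargs with a load balancer attached.
# Cluster, family, target group ARN, and subnet IDs are hypothetical.
def service_args(cluster, family, target_group_arn, container, port, subnets):
    """Build boto3 ecs.create_service kwargs including the ALB section."""
    return {
        "cluster": cluster,
        "serviceName": f"{family}-service",
        "taskDefinition": family,          # task definition family (or family:revision)
        "desiredCount": 1,
        "launchType": "FARGATE",
        "loadBalancers": [{
            "targetGroupArn": target_group_arn,
            "containerName": container,    # container that receives ALB traffic
            "containerPort": port,
        }],
        "networkConfiguration": {
            "awsvpcConfiguration": {"subnets": subnets, "assignPublicIp": "ENABLED"}
        },
    }

args = service_args("demo-cluster", "web-api",
                    "arn:aws:elasticloadbalancing:...:targetgroup/web/abc",
                    "web", 80, ["subnet-0abc"])
# import boto3
# boto3.client("ecs").create_service(**args)  # real call needs credentials
```

The `loadBalancers` list is the API equivalent of the console's Load Balancing section.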

Related

Kubernetes deployment deletion logs

I have my application running in an EKS cluster. I have exposed the application using an Ingress with the AWS ALB load balancer controller. The ALB load balancer controller was deleted recently; how can I find out when it was deleted?
If you have configured the ALB ingress controller to write access logs to S3, that's a place to start. The controller's annotation documentation is a good guide to how this is configured.
Here is a pattern of an annotation for the ALB ingress controller that you could use for searching:
alb.ingress.kubernetes.io/load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket=my-access-log-bucket,access_logs.s3.prefix=my-app
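Beyond S3 access logs, CloudTrail records the deletion API call itself, so you can search for when the ALB was removed (assuming CloudTrail was enabled in the account; it keeps roughly 90 days of event history). A sketch of the `lookup_events` parameters:

```python
# Sketch: find when a controller-managed ALB was deleted by searching
# CloudTrail for ELBv2 DeleteLoadBalancer events.
def deletion_lookup(event_name="DeleteLoadBalancer", max_results=50):
    """Parameters for boto3 cloudtrail.lookup_events."""
    return {
        "LookupAttributes": [
            {"AttributeKey": "EventName", "AttributeValue": event_name}
        ],
        "MaxResults": max_results,
    }

# import boto3
# ct = boto3.client("cloudtrail")
# for event in ct.lookup_events(**deletion_lookup())["Events"]:
#     print(event["EventTime"], event.get("Username"))
```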

How to correctly setup load balancer in ECS?

I have 3 services that I need to deploy: an API on port 80, a dashboard on port 3333 and a webpage on port 3000.
I'm using ECS and created a cluster. For my API I created a task definition and a service pointing to a load balancer. Everything works fine.
Now I need to deploy also the dashboard and web page: do I need to create a load balancer for each service?
I saw that by creating a task definition for my dashboard everything worked fine, but I couldn't give that service a custom address (dashboard.example.com), since in Route 53 I can only point a record at a load balancer.
So now I've created a new load balancer just for the dashboard service, and everything works (well, I have some problems with the ports, but it seems to work).
So my question is: is what I'm doing correct? Is it normal to have a load balancer per service, or is that too much? Or should I stick with one load balancer for the entire cluster and find a different way to assign addresses to my services?
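The single-load-balancer alternative the question hints at is one ALB with host-based listener rules, so that dashboard.example.com and the API share the same load balancer and Route 53 points every hostname at it. A sketch of the rule arguments for boto3's `elbv2.create_rule` (hostnames and ARNs are hypothetical):

```python
# Sketch: one ALB, many services, routed by the Host header.
def host_rule(host, target_group_arn, priority):
    """Listener-rule kwargs: forward requests for one hostname to one target group."""
    return {
        "Conditions": [{"Field": "host-header", "Values": [host]}],
        "Actions": [{"Type": "forward", "TargetGroupArn": target_group_arn}],
        "Priority": priority,
    }

rule = host_rule("dashboard.example.com",
                 "arn:aws:elasticloadbalancing:...:targetgroup/dashboard/xyz", 10)
# import boto3
# boto3.client("elbv2").create_rule(ListenerArn="arn:aws:...:listener/...", **rule)
```

Each hostname then gets a Route 53 alias record to the same ALB, and the listener rules fan traffic out to the per-service target groups.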

Google Load Balancers doesn't appear in Monitoring Dashboard

I created a global load balancer with a backend service and enabled logging in my Google Cloud project. The Load Balancer charts and metrics are supposed to appear in the Monitoring dashboard; however, the charts and metrics were not created.
From the Google Cloud documentation, it looks like the load balancer dashboard should be ready to use as soon as a load balancer exists in the project. I also cannot find a way to create the Load Balancers dashboard manually.
Go into Monitoring > Dashboards and create a new dashboard. Then go to Metrics Explorer, type load balancer into the "Find resource type and metric" field, and select your load balancer type (HTTP, TCP, or UDP). Then select a metric (for example, utilization for HTTP) and choose a filtering option (in my case it was the backend service name) - it should pop up in the list.
After that you can save the chart in the dashboard you created. Open this dashboard and you should have a working panel. You can add more charts to observe various metrics.
This solution may vary for a TCP load balancer (different metrics), but generally that is the way you do it.
I could provide a more specific solution, but you would have to update your question with more details (the LB type is the most important).
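The same chart can also be queried programmatically. A sketch of the Cloud Monitoring filter for an HTTP(S) load balancer's request count, filtered by backend (the exact resource label name may vary by LB type, so treat `backend_target_name` as an assumption to verify):

```python
# Sketch: build a Cloud Monitoring filter string for HTTP(S) LB traffic.
def lb_metric_filter(backend_name):
    """Filter for the HTTP(S) LB request-count metric, scoped to one backend."""
    return (
        'metric.type="loadbalancing.googleapis.com/https/request_count"'
        f' AND resource.labels.backend_target_name="{backend_name}"'
    )

# from google.cloud import monitoring_v3
# client = monitoring_v3.MetricServiceClient()
# client.list_time_series(request={"name": "projects/my-project",
#                                  "filter": lb_metric_filter("my-backend-service"),
#                                  "interval": interval,  # a monitoring_v3.TimeInterval
#                                  "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL})
```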

Service discovery on aws ECS with Application Load Balancer

I would like to ask: if you have a microservice architecture (based on Spring Boot) on Amazon Elastic Container Service (ECS) with an Application Load Balancer (ALB), is service discovery performed automatically by the platform, or do you need a special mechanism (such as Eureka or Consul)?
From the documentation (ECS and ALB) it is not clear whether this feature is provided.
I raised this with the Amazon support team and they responded with the following:
"...using Service Discovery on AWS ECS[..] just with ALBs.
So, there could be three options here:
1) Using ALB/ELB as service endpoints (Target groups for ALBs, separate ELBs if using ELBs)
2) Using Route53 and DNS for Service Discovery
3) Using a 3rd Party product like Consul.io in combination with Nginx.
Let me speak about each of these options.
Using ALBs/ELBs
For this option the idea is to use the ELBs or ALB Target groups in front of each service.
We define an Amazon CloudWatch Events filter which listens to all ECS service creation messages from AWS CloudTrail and triggers an Amazon Lambda function.
This function identifies which Elastic Load Balancing load balancer (or an ALB Target group) is used by the new service and inserts a DNS resource record (CNAME) pointing to it, using Amazon Route 53.
The Lambda function also handles service deletion to make sure that the DNS records reflect the current state of applications running in your cluster.
The downside here is that it can incur higher costs if you are using ELBs - as you need an ELB for each service. And it might not be the simplest solution out there.
If you wish to read more on this you can do so here[1]
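The Lambda step described above boils down to an UPSERT into Route 53. A minimal sketch of the change batch such a function would submit (zone ID and names are hypothetical):

```python
# Sketch: the Route 53 change batch a discovery Lambda would submit,
# pointing <service>.<zone_domain> at the service's ALB DNS name.
def cname_change_batch(service, zone_domain, alb_dns, ttl=60):
    """Build a change batch for route53.change_resource_record_sets."""
    return {
        "Comment": "ECS service discovery record",
        "Changes": [{
            "Action": "UPSERT",            # create or update in one call
            "ResourceRecordSet": {
                "Name": f"{service}.{zone_domain}",
                "Type": "CNAME",
                "TTL": ttl,
                "ResourceRecords": [{"Value": alb_dns}],
            },
        }],
    }

batch = cname_change_batch("orders", "internal.example.com",
                           "my-alb-123.eu-west-1.elb.amazonaws.com")
# import boto3
# boto3.client("route53").change_resource_record_sets(
#     HostedZoneId="Z123HYPOTHETICAL", ChangeBatch=batch)
```

Using UPSERT makes the handler idempotent, which matters because CloudWatch Events can deliver duplicates.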
Using Route53
This approach involves the use of Route53 and running a simple agent[2] on your ECS container instances.
As your containers stop/start the agent will update the Route53 DNS records. It creates a SRV record. Likewise it will delete said records once the container is stopped.
Another part of this method is a Lambda function that performs health checks on ECS container instances - and removes them from R53 in case of a failure.
You can read up more on this method, on our blog post here[3].
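What distinguishes this approach from plain CNAMEs is that the SRV records the agent writes carry the container's host port as well as the target host. A sketch of one such record set (all names and ports hypothetical):

```python
# Sketch: the Route 53 SRV record set a discovery agent would register
# for one running container. SRV values are "priority weight port target".
def srv_record_set(name, target_host, port, ttl=30):
    """Build an SRV ResourceRecordSet for route53.change_resource_record_sets."""
    return {
        "Name": name,
        "Type": "SRV",
        "TTL": ttl,
        "ResourceRecords": [{"Value": f"1 1 {port} {target_host}"}],
    }

rr = srv_record_set("_orders._tcp.ecs.internal.example.com",
                    "ip-10-0-1-23.eu-west-1.compute.internal", 32768)
```

Clients then resolve the SRV record to learn both where the container runs and which dynamic host port it was assigned.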
Using a 3rd Party tool like Consul.io
Using tools like Consul.io on ECS will work - but it is not supported by AWS. So you are free to use it, but we - unfortunately - do not offer support for it.
So, in conclusion - there are a few ways of implementing service discovery on AWS ECS - the two ways I showed here that use AWS resources, and of course the way of using 3rd party applications.
"
You don't have an out-of-the-box solution in AWS, although it is possible with some effort, as described in https://aws.amazon.com/es/blogs/compute/service-discovery-an-amazon-ecs-reference-architecture/
You may also install Zuul + Ribbon + Eureka, or Nginx + Consul, and use an ALB to distribute traffic among the Zuul or Nginx instances.

Google Cloud Load Balancing Service expansion

I'm creating an internal load balancer with a single backend service, where this backend service holds a single instance group with four instances running our application (a cluster).
I'm scaling our cluster to 6 nodes (by adding 2 more instances). Now the idea is to update the load balancing setup to include the 2 additional instances.
What is the best and correct way to do this? It seems like I can't just add these 2 new instances to the existing backend service.
Thank you
It seems I forgot to update the tags for the newly added instances. Once that was done, the new instances could be added to the existing backend service.
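Once the tags match, unmanaged instance groups do accept new members through the Compute Engine `instanceGroups.addInstances` call. A sketch of the request body it expects (project, zone, and instance names are hypothetical):

```python
# Sketch: build the body for compute.instanceGroups().addInstances(),
# which takes a list of instance self-link URLs.
def add_instances_body(project, zone, instance_names):
    """Request body referencing each new instance by its self-link URL."""
    prefix = (f"https://www.googleapis.com/compute/v1/projects/{project}"
              f"/zones/{zone}/instances/")
    return {"instances": [{"instance": prefix + name} for name in instance_names]}

body = add_instances_body("my-project", "europe-west1-b", ["app-5", "app-6"])
# from googleapiclient import discovery
# compute = discovery.build("compute", "v1")
# compute.instanceGroups().addInstances(project="my-project", zone="europe-west1-b",
#                                       instanceGroup="app-group", body=body).execute()
```

The backend service itself doesn't change; it keeps pointing at the instance group, which now contains six instances.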