I have a service in AWS ECS, and service discovery maintains domain records like web.local that point to the tasks in that service.
I would like a Network Load Balancer to point at the domain web.local instead of at an IP or instance.
I know that when I create a service I can specify a load balancer and it magically sets everything up for me, but I can't find where web.local or service discovery is specified.
I checked the target group, etc.
There is an option to use service discovery; if you want to enable it, you can do so while creating the ECS service.
The namespace name is the part after the dot (.); in our case it is local.
There is also an option to add the service discovery name; this is the part before the dot (web in web.local).
Ref: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-discovery.html
Update: You don't need to point it at an NLB if you are using the service discovery option of ECS; the target group plays no role in that case. The ECS service will directly point a DNS name at your containers. If you want load-balancer-based service discovery, that's a different story altogether: you have to create a private hosted zone yourself and point it at your load balancer. But in the end, you can only choose one.
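As a sketch of where the web.local mapping comes from: when service discovery is enabled, the service definition carries a serviceRegistries entry pointing at a Cloud Map service, rather than anything on the target group. A hypothetical --cli-input-json skeleton for aws ecs create-service (the cluster, service, task definition, and ARN are all placeholders) might look like:

```json
{
  "cluster": "my-cluster",
  "serviceName": "web",
  "taskDefinition": "web:1",
  "desiredCount": 2,
  "serviceRegistries": [
    {
      "registryArn": "arn:aws:servicediscovery:us-east-1:123456789012:service/srv-example"
    }
  ]
}
```

Here, a Cloud Map service named web in the private namespace local is what produces the web.local records; the load balancer and target group are not involved.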
Related
I have been looking into different ways of connecting multiple microservices, each within its own service/task, using ECS Fargate.
Normally, if all microservices are defined in the same task definition, we can just use the local IP with the corresponding ports, but this means we cannot scale individual microservices. From what I can tell, there are two 'main' ways of enabling this communication when we break them out into multiple services:
Add a load balancer to each service and use the load balancer's public IP as the single point of access from one service to another.
Questions I have on this is:
a. Do all the services that need to communicate need to sit in the same VPC, with each service's incoming rules set to the security group of the load balancer?
b. Say we have now provisioned the entire setup and need to set one of the load balancers' public DNS names in a microservice's code base. What's the best way of attaining this? I'm guessing some sort of Terraform script that 'assumes' the public DNS that will be added to it?
Make use of AWS Service Discovery, meaning we can query service-to-service with a simple built-up identifier.
Question I have for this is:
a. Can we still attach load balancers to the services and STILL use service discovery? Or does service discovery have an under-the-hood load balancer already configured?
Many thanks in advance for any help!
1.a All services in the same VPC and their security groups (SGs)
I assume that you are talking about the case where each service has its own load balancer (LB). Since the LBs are public, they can be in any VPC, region, or account.
SGs are generally set up so that a service's incoming rules allow only connections from the SG of its LB.
1.b DNS
Each task can have environment variables, and this is a good way to pass in the DNS values. If you are talking about Terraform (TF), TF would provision the LBs first and then create the tasks, setting the env variables to the DNS names of the LBs. Thus, you would know the DNS names of the LBs, since they would have been created before your services.
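As a rough Terraform sketch of that ordering (the resource names, subnets variable, image, and environment variable name are all hypothetical), the LB is created first and its DNS name is injected into the task definition:

```hcl
resource "aws_lb" "api" {
  name               = "api-lb"
  load_balancer_type = "application"
  subnets            = var.public_subnet_ids
}

resource "aws_ecs_task_definition" "web" {
  family                = "web"
  container_definitions = jsonencode([
    {
      name  = "web"
      image = "example/web:latest"
      environment = [
        # Terraform resolves this reference after the LB exists,
        # so the task sees the real DNS name at startup.
        { name = "API_BASE_URL", value = "http://${aws_lb.api.dns_name}" }
      ]
    }
  ])
}
```

Because the task definition references the LB resource, Terraform orders the creation automatically.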
2.a Service discovery (SD)
SD is only for private communication between services. No internet is involved, so everything must be in the same VPC or in peered VPCs. It's basically the opposite of the first solution with LBs.
I think you should also be able to use a public LB along with SD.
SD does not use an LB. Instead, when you query the DNS name of a service through SD, you get the private IP addresses of its tasks in random order. The random order approximates load balancing of connections between the tasks in a service.
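The effect described above can be illustrated with a small, self-contained Python sketch (no AWS involved; the IPs are made up): if every DNS answer returns the same task IPs in a random order and a naive client always connects to the first one, connections spread across all tasks over time.

```python
import random

# Hypothetical private IPs of the tasks behind one service.
TASK_IPS = ["10.0.1.10", "10.0.1.11", "10.0.1.12"]

def resolve_service(records):
    """Simulate a service-discovery DNS answer: the same task IPs,
    returned in a random order on every query."""
    shuffled = list(records)  # copy so the source list is untouched
    random.shuffle(shuffled)
    return shuffled

def pick_task(records):
    """A naive client simply connects to the first IP in the answer."""
    return resolve_service(records)[0]

# Over many queries, every task ends up receiving some connections.
counts = {ip: 0 for ip in TASK_IPS}
for _ in range(1000):
    counts[pick_task(TASK_IPS)] += 1
```

This is only an approximation of load balancing: nothing accounts for task health or connection counts, which is exactly the trade-off versus a real LB.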
I've built an AWS CodePipeline to build and deploy containers onto Fargate-managed EC2 instances. Ref: AWS CodePipeline
One of the services is a web server, and I'm attempting to access it publicly. That is possible via the assigned public IP address; however, that's not very useful, as each deployed container receives a fresh IP address.
I understand it's possible to setup Elastic IP addresses or point a domain to the container service but I'd think there is an easier way.
EC2 instances can be launched with the option of providing a Public DNS...
Is it possible to launch container services with a static public DNS record? If so, how?
Most Common Choice: ALB
Although it's not free, normally if you want a public DNS name for an ECS service (Fargate or EC2), you'd front it with a load balancer (which can also do SSL termination, if you so desire).
Because of that, AWS makes it easy to create a load balancer or add your service to an existing target group when you're setting up a service. I don't think you can change that after the fact, so you may need to recreate the service.
Finally, when you have a load balancer in front of the ECS service, you just need to set up a CNAME or an A ALIAS record in Route 53 (if you're using Route 53) to direct a DNS name to that load balancer.
AWS has a walkthrough from 2016 on the AWS Compute Blog quickly describing how to set up an ECS service and expose it using an Application Load Balancer.
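As a sketch of that last Route 53 step, the alias record can be expressed as a change batch like the following (the record name, ALB DNS name, and zone IDs are placeholders; note that the HostedZoneId inside AliasTarget must be the load balancer's canonical zone ID, not your own hosted zone's):

```json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z35SXDOTRQ7X7K",
          "DNSName": "my-alb-1234567890.us-east-1.elb.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
```

This payload is what aws route53 change-resource-record-sets accepts via --change-batch, and it is also what the console's alias option generates for you.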
ECS Service Connect
ECS Service Connect was announced at re:Invent 2022 and seems to let you connect to a load-balanced ECS service without using an ALB or an API Gateway.
CloudMap / Service Discovery / API Gateway
With ECS Service Discovery and AWS CloudMap, you can use an API Gateway. Your load balancing options are more limited, but API Gateways are billed based on usage rather than hours, so it can potentially save costs on lower-volume services. You can also use a single API Gateway in front of multiple ECS services, which some people are going to want to do anyway. This approach is less commonly employed, but might be the right path for some uses.
You can use ECS Service Discovery to register your containers in a private DNS namespace; unfortunately, this is not possible with public DNS.
But what you can do is have a script that:
fetches your containers' public IP after redeployment, and
upserts your public Route 53 record set with that IP.
In this article, we describe how to do exactly that by using a generic lambda function.
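A minimal sketch of such a Lambda's core, assuming boto3 is available at runtime (the zone ID, record name, and the way the IP is obtained are all assumptions; the AWS call is isolated in the second helper):

```python
def build_change_batch(record_name, ip, ttl=60):
    """Build the Route 53 UPSERT payload for a simple A record."""
    return {
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": record_name,
                    "Type": "A",
                    "TTL": ttl,
                    "ResourceRecords": [{"Value": ip}],
                },
            }
        ]
    }

def upsert_record(zone_id, record_name, ip):
    """Push the change to Route 53 (requires AWS credentials)."""
    import boto3  # imported here so the pure builder above works offline
    route53 = boto3.client("route53")
    return route53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch=build_change_batch(record_name, ip),
    )
```

In a Lambda triggered after deployment, one would look up the new task's public IP (for example via the ECS and EC2 APIs) and call upsert_record with it.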
When I set up an ECS Fargate service for the first time, the setup wizard seems to have automatically (?) created a load balancer for me. I was able to access the web app that I created via the URL at Amazon ECS -> Clusters -> {my cluster} -> {my service} -> Target Group Name (under Load Balancing in the Details tab) -> {my target group} -> Load Balancer -> DNS Name
I have a quick question regarding AWS EKS: whenever I create a K8s Service of type LoadBalancer, it provisions a classic ELB backed by the EC2 instances where the services are running. Whenever I try to hit the ELB from the internet, it returns an ERR_EMPTY_RESPONSE error. If I navigate back to the ELB and look at the instances behind it, it shows the status of the EC2 instances as OutOfService.
This happens whether I use my own K8s deployments and services or the ones provided in the documentation. Can anyone help me with this? Moreover, is there any way to provision a different type of load balancer for a K8s Service? Thanks.
This is the default behavior of K8s on cloud providers: a Service of type LoadBalancer spins up a real load balancer, which affects cost.
It's better to use a K8s Ingress as a best practice; it can serve as the endpoint, or you can put it behind an external load balancer.
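A minimal Ingress sketch along those lines (the host, service name, and port are placeholders, and an ingress controller such as the AWS Load Balancer Controller or NGINX must be installed in the cluster for it to take effect):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web          # an existing ClusterIP Service
                port:
                  number: 80
```

With this, many Services can share one load balancer via host or path rules instead of each provisioning its own ELB.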
I'm new to ECS and ALB in AWS's universe, and I'd like to know how I can point my front-end app to a specific ECS service.
Should I give it the :port or a /service_name path?
And what if I'd like to use host-based routing with my own DNS subdomains?
For example:
<service>.hostname.com
How can I point each service to its corresponding one in the ECS cluster through the Application Load Balancer?
With Amazon's Application Load Balancer, you associate your services with Target Groups. You can then create rules on your listeners that say which traffic to send to which Target Group. Application Load Balancer supports two different rule types: Host (eg: service1.hostname.com) and Path (eg: /service1).
So the basic things you need to do are:
Create a target group for each service
Create a rule sending the hosts/paths you want to the target group
Associate each service with its associated target group
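As a sketch of the rule-creation step, a host-based listener rule can be created with aws elbv2 create-rule; a hypothetical --cli-input-json skeleton (both ARNs are truncated placeholders) might look like:

```json
{
  "ListenerArn": "arn:aws:elasticloadbalancing:...:listener/app/my-alb/...",
  "Priority": 10,
  "Conditions": [
    {
      "Field": "host-header",
      "HostHeaderConfig": { "Values": ["service1.hostname.com"] }
    }
  ],
  "Actions": [
    {
      "Type": "forward",
      "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/service1/..."
    }
  ]
}
```

A path rule looks the same except the condition uses "Field": "path-pattern" with a PathPatternConfig such as /service1/*.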
When creating a service of type LoadBalancer on AWS, Kubernetes auto-provisions an elastic load balancer. I am wondering how I can automatically associate that load balancer with a Route 53 alias?
Alternatively, can I make Kubernetes re-use an elastic load balancer (which I've assigned a Route 53 alias to)?
There is a project that accomplishes this: https://github.com/wearemolecule/route53-kubernetes
A side note here: there are some issues with selecting the TLD that this project uses; it seems to use the first matching public record set.
Also, this doesn't work with internal ELBs. There was an issue opened under the project for this request.
K8s cannot automatically associate the ELB with Route 53; you need to configure that yourself. As for how to instruct K8s to reuse an existing ELB, there are two ways:
[Update: this only works on GCE, NOT on AWS] Specify the Service with type=LoadBalancer and set its ExternalIP to the existing ELB's external IP, and K8s should reuse that ELB. I know this works on GCE, but I haven't tried it on AWS. Also, if this all works, when you delete the K8s Service, the ELB will be deleted by K8s as well.
Specify the Service as type=NodePort and set its NodePort to the backend port of your existing ELB. I have more confidence in this approach. Also, with this approach, when the Service is deleted, the ELB will not be deleted by K8s.
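A sketch of the second approach (the service name, ports, and selector are placeholders; the nodePort must match the backend port configured on the existing ELB, and the ELB's registered instances must be the cluster nodes):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80          # in-cluster port
      targetPort: 8080  # container port
      nodePort: 30080   # must match the ELB's backend port
```

Because K8s never created the ELB in this setup, it also never touches it on deletion, which is the behavior described above.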