ECS service unreachable from cluster

We are using ECS on EC2 to orchestrate Docker containers.
We are also using AWS Cloud Map (service discovery) to create endpoints for our services.
In one of my clusters we are not able to reach the endpoint of any service, including services running in the same cluster. It gives me the error below:
closing connection 0
curl: (6) Could not resolve host: xxx-xxx.test.xxx.org.uk
When I try the IP instead of the domain name, e.g. 1x.1XX.7*.2X:port/healthcheck/path, it works for all services.
I have checked all security groups and NACLs and everything looks fine.
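Since the name fails to resolve while the IP works, the usual suspects are the VPC DNS attributes and whether the Cloud Map private namespace is associated with the VPC the tasks run in. A rough boto3 diagnostic sketch, with placeholder VPC and namespace IDs:

import boto3

# Placeholders -- substitute the cluster's VPC and the Cloud Map namespace ID.
VPC_ID = "vpc-0123456789abcdef0"
NAMESPACE_ID = "ns-exampleexampleexam"

ec2 = boto3.client("ec2")
sd = boto3.client("servicediscovery")
r53 = boto3.client("route53")

# 1. The VPC must have DNS support and DNS hostnames enabled, otherwise the
#    Cloud Map private hosted zone is never consulted by the VPC resolver.
for attr, key in (("enableDnsSupport", "EnableDnsSupport"),
                  ("enableDnsHostnames", "EnableDnsHostnames")):
    resp = ec2.describe_vpc_attribute(VpcId=VPC_ID, Attribute=attr)
    print(attr, resp[key]["Value"])

# 2. A DNS_PRIVATE namespace only resolves from VPCs associated with its
#    Route 53 private hosted zone -- confirm the cluster's VPC is listed.
ns = sd.get_namespace(Id=NAMESPACE_ID)["Namespace"]
zone_id = ns["Properties"]["DnsProperties"]["HostedZoneId"]
for vpc in r53.get_hosted_zone(Id=zone_id)["VPCs"]:
    print(vpc["VPCId"], vpc["VPCRegion"])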

Related

Do GCP Internal Load Balancers support gRPC with Serverless NEGs?

I am running a number of Cloud Run services which all have VPC access via a VPC connector, with all egress set to run through this connector. I have an ILB set up which points to a Regional Backend Service with a Serverless Network Endpoint Group backend. When you select this type, you are unable to choose the protocol for the service (HTTP, HTTPS, HTTP/2).
The receiving Cloud Run service allows unauthenticated invocations and its ingress is set to allow internal and cloud load balancing traffic.
When my client tries to send messages to my server via an address that resolves to the ILB, it fails with a very non-descript error: rpc error: code = Unknown desc =.
I have tried using the direct Cloud Run URL instead of going via my ILB, and this does work. I would prefer to use my internal DNS, though, if possible.

Expose an endpoint for an ECS Fargate container that is using port 8545, through AWS Route 53 and an ALB

I would like to expose the endpoint of a tool that's using port 8545 through AWS Route 53, an Application Load Balancer and ECS Fargate. I've created a Dockerfile with the following:
FROM trufflesuit/ganache-cli:latest
EXPOSE 8546
CMD ["--fork", "https://Infura_node_URL"]
For the target group, I've been using protocol HTTP, port 8546;
For the Application Load Balancer, I've set HTTP:80 to be redirected to 443;
For the ECS task definition, I've set the container port to 8545.
When I run the script that connects to this container, an error occurs:
Error: Connection refused or URL couldn't be resolved: https://Infura_node_URL
If I browse to the Route 53 URL I've configured, it keeps loading until it eventually times out.
I am relatively new to networking, but I believe there might be something wrong with the protocol or the port I've set. Can someone please help?
*If I run this Docker container locally, http://localhost:8546 shows '400 Bad Request', which is the proper response.
The problem here is that the Fargate service is not allowing traffic from the load balancer. Make sure to add a rule to the Fargate service's security group that allows HTTP traffic from the ALB's security group. The source in the security group rule will be the ALB's security group ID in this case.
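A minimal boto3 sketch of that rule, assuming placeholder security group IDs for the Fargate service and the ALB, and the port 8546 used by the target group above:

import boto3

ec2 = boto3.client("ec2")

# Placeholders -- the Fargate service's security group and the ALB's security group.
SERVICE_SG = "sg-0123456789abcdef0"
ALB_SG = "sg-0fedcba9876543210"

# Allow traffic on the target group's port from the ALB's security group
# into the service's security group.
ec2.authorize_security_group_ingress(
    GroupId=SERVICE_SG,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 8546,
        "ToPort": 8546,
        "UserIdGroupPairs": [{"GroupId": ALB_SG}],
    }],
)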

Cannot connect two ECS services via Service Discovery

I am new to AWS and I am trying to deploy a simple app to AWS ECS. I have two simple Docker containers running in ECS Fargate:
‘Frontend’: a Vue.js app, which makes a single request to the backend;
‘Backend’: a Django app, which serves the request.
Both services were launched within the same cluster, in the default VPC and the same single public subnet. For ‘Backend’ I configured Service Discovery: namespace – test, service discovery name – backend. The security group is configured to allow all traffic.
So, the problem is when frontend makes request:
axios.get('http://backend.test:8000/api/get-test/')
I get the error: Failed to load resource: net::ERR_NAME_NOT_RESOLVED backend.test:8000/api/get-test/
However, executing dig +short backend.test in AWS Cloud9 returns the correct private IP of the backend container.
When I change the request to something like
axios.get('http://172.17.3.85:8000/api/get-test/')
where 172.17.3.85 is a valid private IP of the backend container, I get the following error:
GET http://172.17.3.85:8000/api/get-test/ net::ERR_CONNECTION_TIMED_OUT
However, if I spin up an EC2 instance in the same VPC and subnet and SSH into it, I can ping the backend container, and requests such as
curl -v http://172.17.3.85:8000/api/get-test/
as well as
curl -v http://backend.test:8000/api/get-test/
return the desired response.
The only case where everything works as expected is when the request is like
axios.get('http://3.18.59.133:8000/api/get-test/'),
where 3.18.59.133 is the valid public IP of the backend container.
I would appreciate any suggestion on where to look further, or on how to connect two containers via service discovery, as right now I am out of ideas.
Based on the discussion in the comments and the description of the problem, the reason is that the ‘Frontend’ Vue.js app executes on the client side, i.e. in the browser.
This explains all the issues described and discussed:
axios.get('http://backend.test:8000/api/get-test/') does not work because the client side cannot resolve the private hosted zone.
axios.get('http://172.17.3.85:8000/api/get-test/') does not work because 172.17.3.85 is valid only inside the VPC, not on the client's network.
Spinning up an EC2 instance in the same VPC and subnet and using SSH works because private hosted zones can be resolved inside the VPC.
axios.get('http://3.18.59.133:8000/api/get-test/') works because a public IP, unlike the private IPs, can be used on the client side.
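A quick way to see the first point for yourself: the short Python check below (using the backend.test name from the question) succeeds only when run inside the VPC, e.g. from the Cloud9 instance, and fails from a machine on your own network, which is exactly what the browser experiences.

import socket

try:
    ip = socket.gethostbyname("backend.test")
    print("backend.test resolves to", ip)               # inside the VPC
except socket.gaierror as err:
    print("backend.test does not resolve here:", err)   # what the browser sees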

How to use an AWS ECS Service Discovery Endpoint in Java Spring Boot

I am new to AWS ECS. I am developing two services in Java Spring Boot, Service 1 and Service 2. I have created two ECS services, with one task each, in the same cluster.
I can see that there is a "Service Discovery Endpoint" Service2.local and a "Service discovery name" Service2. I can also see SRV and Type A records in Route 53 for Service 2. I do not know how to call Service 2 from Service 1. Before trying it from Spring Boot, I tried the following curl command to get the status from Service 2.
curl service2.local/status
I get the error could not resolve host service2.local. I want to understand how to use the service discovery endpoint or name correctly.
Edit:
I have tried to execute the following command, but it returns nothing.
dig +short service2.local
If you have the entry in your hosted zone, and you can curl using the posted IP from inside the VPC (so the ports are correct, the security groups work, and the app is up), then:
Check that your VPC has both DNS hostnames and DNS resolution enabled, otherwise AWS will not resolve the DNS name correctly. (NB: it can take a while to come online; go brew a cuppa while you wait.)
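A small boto3 sketch of that check (and the fix), with a placeholder VPC ID:

import boto3

ec2 = boto3.client("ec2")
VPC_ID = "vpc-0123456789abcdef0"  # placeholder -- the VPC the cluster runs in

# Both attributes must be true for the service discovery names to resolve.
support = ec2.describe_vpc_attribute(VpcId=VPC_ID, Attribute="enableDnsSupport")
hostnames = ec2.describe_vpc_attribute(VpcId=VPC_ID, Attribute="enableDnsHostnames")
print("enableDnsSupport: ", support["EnableDnsSupport"]["Value"])
print("enableDnsHostnames:", hostnames["EnableDnsHostnames"]["Value"])

# If either is false, enable it (one attribute per call):
ec2.modify_vpc_attribute(VpcId=VPC_ID, EnableDnsSupport={"Value": True})
ec2.modify_vpc_attribute(VpcId=VPC_ID, EnableDnsHostnames={"Value": True})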

Eureka with AWS ECS

We are using Eureka with an AWS ECS service that can scale Docker containers.
In ECS, if you leave out the host port in your task definition, or specify it as '0', then the port will be chosen automatically and reported back to the service. After the task is running, describing it should show what port(s) it bound to.
How can Eureka resolve which port to use if we have several EC2 instances? For example, Service A on EC2-A tries to call Service B on EC2-B. Eureka can resolve the hostname, but cannot identify the exposed port.
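For reference, the dynamically chosen host port is visible when you describe the task; a boto3 sketch, with placeholder cluster and task identifiers:

import boto3

ecs = boto3.client("ecs")

# Placeholders -- the cluster name and a task ARN from that cluster.
CLUSTER = "my-cluster"
TASK_ARN = "arn:aws:ecs:eu-west-1:123456789012:task/my-cluster/0123456789abcdef0"

task = ecs.describe_tasks(cluster=CLUSTER, tasks=[TASK_ARN])["tasks"][0]
for container in task["containers"]:
    for binding in container.get("networkBindings", []):
        # containerPort is the fixed port inside the container; hostPort is the
        # port ECS chose on the EC2 instance when the host port was left as 0.
        print(container["name"], binding["containerPort"], "->", binding["hostPort"])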
The issue was resolved: https://github.com/Netflix/eureka/issues/937
Currently the ECS agent knows about the running port.
But I don't recommend using Eureka with ECS, because an Application Load Balancer does the same thing: it works as service registry and discovery. You don't need to run an additional service (Eureka), and an ALB is cheap.
Hi @Aleksandr Filichkin,
I don't think an Application Load Balancer and a service registry do the same thing.
The main difference is that traffic flows over the (application) load balancer, whereas the service registry just gives you a healthy endpoint that your client can address directly (so the network traffic does not flow over the service registry).
Cheap is a very relative term; maybe it's cheap for some, maybe it's an unnecessary overhead for others.
There is another solution.
You can create an Application Load Balancer and a target group in which the Docker containers can be launched.
Every Docker container sets its hostname to the hostname of the load balancer. If you need a pretty URL, you can use Route 53 for DNS routing.
It looks like this (diagrams: Service Discovery with Loadbalancer-Hostname; Request Flow).
If you have two containers of the same task on different hosts, both will communicate the same load balancer hostname to Eureka.
With this solution you can use Eureka with Docker on AWS ECS without losing the advantages and flexibility of dynamic port mapping.
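If a friendlier name than the load balancer's DNS name is wanted, here is a boto3 sketch of the Route 53 part; the ALB name, hosted zone ID and record name are placeholders:

import boto3

elbv2 = boto3.client("elbv2")
r53 = boto3.client("route53")

# Placeholders -- your ALB's name, your hosted zone and the pretty record name.
ALB_NAME = "eureka-services-alb"
HOSTED_ZONE_ID = "Z0123456789EXAMPLE"
RECORD_NAME = "services.example.internal"

alb = elbv2.describe_load_balancers(Names=[ALB_NAME])["LoadBalancers"][0]

# Alias A record pointing the pretty name at the load balancer.
r53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": RECORD_NAME,
            "Type": "A",
            "AliasTarget": {
                "HostedZoneId": alb["CanonicalHostedZoneId"],
                "DNSName": alb["DNSName"],
                "EvaluateTargetHealth": False,
            },
        },
    }]},
)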