I am new to AWS ECS. I am developing two Java Spring Boot services, Service 1 and Service 2. I have created two ECS services, with one task each, in the same cluster.
I can see that there is a "Service Discovery Endpoint" Service2.local and a "Service discovery name" Service2. I can also see SRV and Type A records in Route 53 for Service 2. However, I do not know how to call Service 2 from Service 1. Before trying it from Spring Boot, I ran the following curl command to get the status from Service 2.
curl service2.local/status
I get the error could not resolve host service2.local. I want to understand how to use the service discovery endpoint or name correctly.
Edit:
I have tried to execute the following command, but it returns nothing.
dig +short service2.local
If you have the entry in your hosted zone, and you can curl the posted IP inside your hosted zone (so the ports are correct, the security groups work, and the app is up), then:
Check that your VPC has both DNS hostnames and DNS resolution enabled, otherwise AWS will not resolve the DNS names correctly. (NB it can take a while for it to come online, go brew a cuppa while you wait.)
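A minimal way to verify and fix those two attributes with the AWS CLI (the VPC ID below is a placeholder for your own):
# Check both DNS attributes; each call prints a Value of true or false
aws ec2 describe-vpc-attribute --vpc-id vpc-0abc123 --attribute enableDnsSupport
aws ec2 describe-vpc-attribute --vpc-id vpc-0abc123 --attribute enableDnsHostnames
# Enable whichever came back false
aws ec2 modify-vpc-attribute --vpc-id vpc-0abc123 --enable-dns-support "{\"Value\":true}"
aws ec2 modify-vpc-attribute --vpc-id vpc-0abc123 --enable-dns-hostnames "{\"Value\":true}"
After that, dig +short service2.local should start returning the task's private IP.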
Related
We are using ECS on EC2 for orchestration of Docker containers.
We are also using AWS Cloud Map/Service Discovery to create endpoints for the services.
In one of my clusters we are not able to reach the endpoint of any service, including the service running in the same cluster. It gives me the error below:
closing connection 0
curl: (6) Could not resolve host: xxx-xxx.test.xxx.org.uk
When I try with the IP instead of the domain name, like 1x.1XX.7*.2X:port/healthcheck/path, it works for all services.
I have checked all security groups and NACLs; all look fine.
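Since curl by IP already works, a quick way to confirm this is purely a DNS problem is to test resolution from a host inside the same VPC (using the redacted hostname from above):
dig +short xxx-xxx.test.xxx.org.uk
Empty output points at DNS (the VPC DNS attributes, the Cloud Map namespace, or the private hosted zone's VPC association) rather than security groups or NACLs.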
I am new to AWS and I am trying to deploy a simple app to AWS ECS. I have two simple Docker containers running in ECS Fargate:
‘Frontend’: a Vue.js app, which makes a single request to the backend;
‘Backend’: a Django app, which serves the request.
Both services were launched within the same cluster, in the default VPC and the same single public subnet. For ‘Backend’ I configured Service Discovery: Namespace – test, Service Discovery Name – backend. The security group is configured to allow All Traffic.
So, the problem is when the frontend makes the request:
axios.get('http://backend.test:8000/api/get-test/')
I get the error: Failed to load resource: net::ERR_NAME_NOT_RESOLVED backend.test:8000/api/get-test/
However, executing the command dig +short backend.test in AWS Cloud9 returns the correct private IP of the backend container.
When I change the request to something like
axios.get('http://172.17.3.85:8000/api/get-test/')
where 172.17.3.85 is the valid private IP of the backend container, I get the following error:
GET http://172.17.3.85:8000/api/get-test/ net::ERR_CONNECTION_TIMED_OUT
However, if I spin up an EC2 instance in the same VPC and subnet and SSH into it, I can ping the backend container, and the requests
curl -v http://172.17.3.85:8000/api/get-test/
as well as
curl -v http://backend.test:8000/api/get-test/
return the desired response.
The only case in which everything works as expected is when the request is like
axios.get('http://3.18.59.133:8000/api/get-test/')
where 3.18.59.133 is the valid public IP of the backend container.
I would appreciate any suggestion on where to look further, or on how to connect the two containers via service discovery, as right now I am out of ideas.
Based on the discussion in the comments and the description of the problem, the reason is that the ‘Frontend’ Vue.js app executes on the client side, for example in the browser.
This explains all the issues described and discussed:
axios.get('http://backend.test:8000/api/get-test/') does not work because on the client side you can't resolve a private hosted zone.
axios.get('http://172.17.3.85:8000/api/get-test/') does not work because 172.17.3.85 is valid only inside the VPC, not on the client's network.
Spinning up an EC2 instance in the same VPC and subnet and SSHing into it works because private hosted zones can be resolved inside the VPC.
axios.get('http://3.18.59.133:8000/api/get-test/') works because a public IP, unlike a private one, can be used on the client side.
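You can see the split with the dig command from the question; the same lookup behaves differently depending on where it runs:
# From your laptop (outside the VPC): prints nothing, the private zone is invisible
dig +short backend.test
# From Cloud9 or an EC2 instance inside the VPC: prints the task's private IP, e.g. 172.17.3.85
dig +short backend.test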
Suppose I have a service, say auth (port 8080), which has 3 tasks running, and another service, say config-server (port 8888), with 2 tasks running, from which auth will load its configuration properties, similar to Spring Cloud Config Server.
Launch Type: EC2
auth service running on 8080
config-server service running on 8888
Now, in order to access config-server from auth, do I have to use an ALB to call config-server, or can I call it using the service name, like http://config-server:8888?
I tried, but it's not working. Did I misunderstand a concept here?
I would like to get some insight on this.
This is how my Service Discovery configuration looks.
EDITS:
I created a private namespace test.lo and it is still not working.
curl http://config-server.test.lo
curl: (6) Could not resolve host: config-server.test.lo
These are general things to check.
Ensure that the enableDnsHostnames and enableDnsSupport options for the VPC are enabled.
Don't use local as a private namespace; it's a reserved name.
Check the private hosted zone created in Route 53 and verify that it has all the A (and SRV, if used) records correctly set to the private IP addresses of the service's tasks.
A private hosted zone can be resolved only from inside the same VPC as the ECS service. Thus, to check whether it works, you can create an instance in the VPC and inspect from there.
Use the dig tool to check whether the DNS actually resolves the private DNS name into private IP addresses. It should return multiple addresses, one for each task in the service.
If using awsvpc network mode, you can use either A or SRV record types; thus, if SRV does not work, it could be worth checking with an A record. The sketch below walks through these checks.
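A minimal sketch of those checks, run from an instance inside the same VPC (the service and namespace names are the ones from the question; the VPC ID is a placeholder):
# 1. VPC DNS options
aws ec2 describe-vpc-attribute --vpc-id vpc-0abc123 --attribute enableDnsSupport
aws ec2 describe-vpc-attribute --vpc-id vpc-0abc123 --attribute enableDnsHostnames
# 2. The namespace and its private hosted zone
aws servicediscovery list-namespaces
aws route53 list-hosted-zones
# 3. Resolution: expect one private IP per running task
dig +short config-server.test.lo
dig +short -t SRV config-server.test.lo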
I have deployed a web server via the ecs-cli compose service up command, registered a domain in Route 53, and created a certificate through AWS Certificate Manager. Making use of an ALB (Application Load Balancer), I am able to perform dynamic port mapping and HTTPS for my web application, but here is the problem.
Using Docker Compose as the blueprint for my web application, which consists of 3 containers (frontend, loopback and database (mongo)), my frontend container's dynamic port mapping and HTTPS are up and running fine.
However, the problem comes with the loopback container: there are times when the frontend needs to fetch something via the loopback API server (which uses port 3002), but the loopback container is not configured for HTTPS, which causes the error below when calling the API.
Through the ecs-cli compose service up command, I can configure the target group so the ALB forwards requests to the frontend container (using the --target-group-arn, --container-name and --container-port attributes to tie the frontend container to the specific target group), but this command seems unable to map a 2nd target group to my loopback container. Reading https://docs.aws.amazon.com/AmazonECS/latest/developerguide/register-multiple-targetgroups.html suggests that multiple target groups per service are possible, but I cannot figure out how to use the create-service command to link up my Docker containers without the ecs-cli compose service up command.
Is there a way to
Use the ecs-cli compose service up command to register multiple target groups for my containers?
Apply HTTPS also on my loopback URL (whose domain name is myDomain.com:3002)?
======================================================
Follow-up tasks
Created 2 target groups
Configured rules and listeners
Knowing ecs-cli compose service up cannot register multiple target groups, I tried to do it via the console; still, only 1 container can be registered.
Thanks, and I appreciate all help.
As far as your question is concerned, it is possible to do that using the AWS console, but ecs-cli does not currently support multiple target groups.
You can check ecs-cli compose service up with a load balancer, and also consider amazon-ecs-cli-register-service.
The second error occurs when the frontend application mixes HTTP and HTTPS resources. If you look into the error, there may be static files or API calls that are loaded over HTTP; convert all of these to HTTPS and it should work fine. Your error looks like a static file being loaded from an HTTP site.
Once you have applied HTTPS, it should point to https://example.com or https://api.example.com; the port is not required with an HTTPS call if it is bound to the standard HTTPS port.
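For illustration (api.example.com stands in for your own domain), these two requests are equivalent once the listener is on the standard port:
curl https://api.example.com/
curl https://api.example.com:443/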
Update:
An ALB routes traffic based on the target group, so make sure the target group contains the desired container. Adding a screenshot to make it clearer.
ecs-cli compose service up accepts a --target-groups parameter, allowing you to add multiple target groups at once.
ecs-cli compose --file "../../src/docker-compose.yml" `
--ecs-params "../../src/ecs-params.yml" `
--project-name xxxxx service up `
--target-groups "targetGroupArn=arn:aws:elasticloadbalancing:eu-west-3:xxxxx:targetgroup/xxxx_tg1,containerPort=80,containerName=webapi" `
--target-groups "targetGroupArn=arn:aws:elasticloadbalancing:eu-west-3:xxxxx:targetgroup/xxxx_tg2,containerPort=81,containerName=webapi2" `
--cluster-config myconfig `
--ecs-profile myprofile
See the ecs-cli compose service up documentation.
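If you would rather use the plain AWS CLI create-service route mentioned in the question, a sketch (bash style; the cluster, service and task definition names are placeholders) that registers two target groups looks like this:
aws ecs create-service \
  --cluster mycluster \
  --service-name webapi \
  --task-definition webapi:1 \
  --desired-count 1 \
  --load-balancers \
    "targetGroupArn=arn:aws:elasticloadbalancing:eu-west-3:xxxxx:targetgroup/xxxx_tg1,containerName=webapi,containerPort=80" \
    "targetGroupArn=arn:aws:elasticloadbalancing:eu-west-3:xxxxx:targetgroup/xxxx_tg2,containerName=webapi2,containerPort=81"
Each --load-balancers entry ties one container/port pair to one target group.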
I have been trying to install Pivotal Cloud Foundry on AWS and I am having trouble with it.
The upload-cert section mentions that I need to create SSL certificates for:
*.system.example.com
*.login.system.example.com
*.uaa.system.example.com
*.apps.example.com
So, I've created the domain xxxxx.com in AWS Route 53 and created a certificate in AWS ACM for the domain and subdomains.
So, my questions are:
do I need to create subdomains (system, login, uaa, apps) in AWS Route 53?
do I need to bind my domain and subdomains somehow to PCF, or does the installation process do it for me?
for now, if I open http://login.xxxxx.com/ it responds with a 503. What can be the reason?
what is the correct URL to open the PCF UI?
I have an error in Ops Manager. What can be the reason for such an error?
The same goes for logs: when I tried to download logs for the failed services, that failed too. What can be the reason?
Thank you for the help!
do I need to create subdomains (system, login, uaa, apps) in AWS Route 53?
do I need to bind my domain and subdomains somehow to PCF, or does the installation process do it for me?
You can create a wildcard subdomain (*.xxxxx.com) and alias using the instructions here: https://docs.pivotal.io/pivotalcf/1-10/customizing/cloudform-er-config.html#cname
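A sketch of that wildcard record with the AWS CLI, using a plain CNAME to the load balancer (the hosted zone ID and the ELB DNS name are placeholders for your own values):
aws route53 change-resource-record-sets \
  --hosted-zone-id Z123EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "*.xxxxx.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{"Value": "my-elb-123456.us-east-1.elb.amazonaws.com"}]
      }
    }]
  }'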
what is the correct URL to open the PCF UI?
If you mean Ops Manager, it is whatever DNS entry you created and pointed at the Ops Manager public IP address in this step: https://docs.pivotal.io/pivotalcf/1-10/customizing/cloudform-om-deploy.html#create-dns
For the ERT UI, there is the Pivotal Apps Manager (https://docs.pivotal.io/pivotalcf/1-10/console/index.html), which is usually at apps.system.xxxx.com.
You can see what system apps are deployed by connecting to Cloud Foundry using the CLI and seeing which apps are in the system org, and what their routes are.
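A sketch of that check with the cf CLI (the API endpoint and space name are inferred from the system domain above, so treat them as assumptions):
cf login -a https://api.system.xxxxx.com -u admin   # authenticate against the platform API
cf target -o system -s system                       # target the system org (the space name may differ)
cf apps                                             # list the deployed system apps
cf routes                                           # show the routes mapped to them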
for now, if I open http://login.xxxxx.com/ it responds with a 503. What can be the reason?
If the DNS has not been set up, I'm surprised you're getting any response whatsoever. Usually you get 503s when the routers connected to the load balancers are failing for some reason (http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/ts-elb-error-message.html#ts-elb-errorcodes-http503)
I have an error in Ops Manager. What can be the reason for such an error?
This would explain the 503s if the router is unhealthy. I would SSH into those machines and see what the logs say (in /var/vcap/sys/log), which should tell you what is going wrong.
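A minimal sketch of that inspection with the BOSH CLI (the environment alias, deployment name and instance names vary per install, so these are assumptions):
bosh -e my-env -d cf ssh router/0                                  # SSH into the first router VM
sudo tail -n 100 /var/vcap/sys/log/gorouter/gorouter.stderr.log    # recent router errors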
The reason for the red instances on the Status page was that my AWS account had a limit on the number of instances, and it failed to create VMs for those nodes.
To find more information, open the Changelog (https://ops_manager_host/change_log) and then open the log of the FAILED setup.