Send POST request from one service to another in Amazon ECS

I have a Node-Express website running on a microservices-based architecture. I deployed the microservices on an Amazon ECS cluster with one EC2 instance. The microservices sit behind an Application Load Balancer that routes external traffic correctly to the services. This system is working as expected except for one problem: I need to make a POST request from one service to the other. I am trying to use axios for this, but I don't know what URL to post to. When testing locally, I just used axios.post('http://localhost:3000/service2',...) inside service 1, but how should I do it here?

There are various ways.
1. Use an Application Load Balancer in front of the services
In this approach, you put your microservices behind one or more load balancers and send requests to the load balancer URL. You can use path-based routing on a single load balancer, or you can use multiple load balancers (see the sketch below).
2. Use Service Discovery
In this approach, you let the calling service discover the target itself. Service discovery can be done in various ways: with an ALB, Route 53, ECS Service Discovery, a key-value store, configuration management, or third-party software such as Consul.
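To make that concrete for the axios call in the original question: the only thing that changes compared to local development is the host you point at. A minimal sketch, assuming a hypothetical ALB DNS name with a /service2 path rule for option 1, and a hypothetical Cloud Map name (service2.local) for option 2:

    // Sketch only: the ALB DNS name, the /service2 path rule, and the
    // service2.local Cloud Map name are placeholders for whatever your setup uses.
    import axios from 'axios';

    // Option 1: call through the ALB and let its path-based rule route to service 2.
    const ALB_URL = process.env.ALB_URL || 'http://my-alb-1234567890.eu-west-1.elb.amazonaws.com';

    async function postViaAlb(payload: object) {
      const res = await axios.post(`${ALB_URL}/service2`, payload);
      return res.data;
    }

    // Option 2: with ECS Service Discovery (Cloud Map), service 2 gets a private
    // DNS name that resolves inside the VPC, so you can keep using its own port.
    async function postViaServiceDiscovery(payload: object) {
      const res = await axios.post('http://service2.local:3000/service2', payload);
      return res.data;
    }

Either way, keeping the base URL in an environment variable makes it easy to switch between local development and the deployed cluster.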

Related

Communication between tasks in AWS ECS Fargate

I have three Docker containers, each running in its own task (3 tasks), and each task running in a separate Fargate service (3 services) on ECS Fargate. All the services are behind an application load balancer (ALB) with path-based routing set up. Below is how the path-based routing works:
example.com/app_one forwards traffic to service_one_target_group
example.com/app_two forwards traffic to service_two_target_group
example.com/app_three forwards traffic to service_three_target_group
One thing to note: app_one and app_two are Node.js apps, and app_three is a GraphQL middleware server used to connect to a database.
I need the services for app_one and app_two to be able to discover the app_three service.
I know Service Discovery is an option, but I am unsure how to implement it in a scenario with path-based routing. Any advice would be helpful.
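One way to picture how service discovery fits alongside the existing path rules: the ALB keeps handling external traffic for /app_one, /app_two and /app_three, while internal calls from app_one and app_two go straight to app_three's private DNS name. A minimal sketch, assuming app_three were registered in a hypothetical Cloud Map namespace called internal and listening on port 4000:

    // Sketch only: app-three.internal and port 4000 are placeholders for an
    // ECS Service Discovery (Cloud Map) registration; the ALB path rules stay as they are.
    import axios from 'axios';

    const APP_THREE_URL = process.env.APP_THREE_URL || 'http://app-three.internal:4000/graphql';

    // app_one or app_two calling the GraphQL middleware directly over the VPC,
    // without going through the public ALB.
    async function queryAppThree(query: string, variables: Record<string, unknown>) {
      const res = await axios.post(APP_THREE_URL, { query, variables });
      return res.data;
    }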

How to do gradual traffic migration between two Cloud Run services using Google Cloud HTTP(S) load balancer

I have setup an External HTTP(S) load balancer with the following:
2 Serverless NEGs, each pointing at a different Cloud Run service in their respective region
1 Backend Service, using the 2 NEGs as 2 Backends
1 Host and path rule that sends everything to the Backend Service
1 HTTPS Frontend pointing at the Host and path rule
At this point, I notice that the traffic is routed to the Cloud Run service closest to the region of the client making the request.
I would like to change that to route 100% of the traffic to one Cloud Run service on day 1, 50% on each service on day 2, and on day 3, route 100% of the traffic to the other Cloud Run service.
It's unclear if an External HTTP(S) load balancer can help with that. And if it can, it's unclear if this should be done in the Backend Service or in the Host and Path rule.
Google Cloud load balancer does not support weighted/percent-based load balancing for the external HTTP(S) LB. This is listed at https://cloud.google.com/load-balancing/docs/features#load_balancing_methods.
Maybe I need to create 2 Backend Services, each pointing at one NEG?
Yes, this is how you would do it if external HTTPS GCLB supported it. You need to create separate backendServices for each serverless NEG and list weightedBackendServices in the route rule of the urlMap object. You can find an example here but I believe it only works for internal load balancer (ILB) currently per the link above.
AFAIK, External HTTPS load balancing can only route to the closest location but not dispatch the traffic according to weight.
In addition, your solution requires deploying in 2 different regions, because you can't have 2 backends in the same region in the same backend service.
The easiest solution for now is to use the Cloud Run traffic-splitting feature: route all the traffic to the same service, and then let the Cloud Run load balancer dispatch the requests.

Spring Boot - Different systems (Eureka, Zuul, Ribbon, NGINX) used for what?

I have been working with Spring and would now like to learn Spring Boot and microservices. I understand what a microservice is all about and how it works. While going through the docs, I came across many systems used to develop microservices along with Spring Boot, and I am very confused about them.
I have listed the systems and my questions below:
Netflix Eureka - I understand this is a service discovery platform. All services will be registered to the Eureka server, and all microservices are Eureka clients. Now my doubt is: without having an API gateway, is there any use for this service registry? This is to understand the actual use of a service registry.
Zuul API gateway - I understand Zuul can be used as an API gateway, which is basically a load balancer that calls the appropriate microservice corresponding to the request URL. Is that assumption correct? Will the API gateway interact with Eureka to get the appropriate microservice?
NGINX - I have read that NGINX can also be used as an API gateway. Is that possible? I also read somewhere else that NGINX can be used as a service registry, that is, as an alternative to Eureka! So which is right: API gateway, service registry, or both? I know NGINX is a web server and its reverse proxies can be powerfully configured.
AWS API Gateway - Can this also be used as an alternative to Zuul?
Ribbon - What is Ribbon used for? I didn't understand!
AWS ALB - This can also be used for load balancing. So do we need Zuul if we have an AWS ALB?
Please help
Without having an API gateway, is there any use for this service registry?
Yes. For example, you can use it to locate (find the IP and port of) all your microservices. This comes in handy for devops-type work. For example, on one project I worked on, we used Eureka to find all instances of our microservices and ping them for their status (/health, /info).
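As a rough sketch of that kind of status sweep (the registry URL is a placeholder, and the field names follow the JSON shape of Eureka's /eureka/apps endpoint):

    // Rough sketch: list every instance registered in Eureka and ping its /health endpoint.
    import axios from 'axios';

    const EUREKA_URL = process.env.EUREKA_URL || 'http://eureka.example.com:8761';

    // Eureka can return a single object instead of an array when only one
    // application (or instance) is registered, so normalize both cases.
    const asArray = <T>(x: T | T[]): T[] => (Array.isArray(x) ? x : [x]);

    async function sweepHealth() {
      const { data } = await axios.get(`${EUREKA_URL}/eureka/apps`, {
        headers: { Accept: 'application/json' },
      });
      for (const app of asArray<any>(data.applications.application)) {
        for (const instance of asArray<any>(app.instance)) {
          const url = `http://${instance.hostName}:${instance.port.$}/health`;
          try {
            const res = await axios.get(url, { timeout: 2000 });
            console.log(`${app.name} ${url} -> ${res.status}`);
          } catch {
            console.log(`${app.name} ${url} -> unreachable`);
          }
        }
      }
    }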
I understand Zuul can be used as an API gateway, which is basically a load balancer that calls the appropriate microservice corresponding to the request URL. Is that assumption correct?
Yes, but it can do a lot more. Essentially, because Zuul is more of a framework/library that you turn into a microservice, you can code it to implement any kind of routing logic you can come up with. It is very powerful in that sense. For example, let's say you want to change how you route based on the time of day or any other external factor; with Zuul you can do it.
Will the API gateway interact with Eureka to get the appropriate microservice?
Yes. You configure Zuul to point to Eureka. It becomes a client to Eureka and even subscribes to Eureka for realtime updates (which instances have joined or left).
I have read that NGINX can also be used as an API gateway? I also read somewhere else that NGINX can be used as a service registry, that is, as an alternative to Eureka! So which is right: API gateway, service registry, or both?
NGINX is pretty powerful and can do API gateway type work. But there are some major differences. AFAIK, microservices cannot dynamically register with NGINX the way they can with Eureka (please correct me if I am wrong). Second, while I know NGINX is highly (very highly) configurable, I suspect its configuration abilities do not come close to Zuul's routing capabilities (since you have the whole Java language at your disposal within Zuul to code your routing logic). It could be the case that there are service discovery solutions that work with NGINX. So NGINX would take care of the routing and such, but service discovery would still require a separate solution.
Can this also be used as an alternative to Zuul?
Yes, AWS API Gateway can be used as a Zuul replacement of sorts. The issue here, just like with NGINX, is service discovery. AWS API Gateway lets you apply logic to your routing, though it is not as open-ended as Zuul.
What is Ribbon used for?
While you can use the Ribbon library directly, for the most part consider it an internal dependency of Zuul. It helps Zuul do the simple load balancing that it does. Please note that this project is in maintenance mode and is no longer recommended.
This can also be used for load balancing. So do we need Zuul if we have an AWS ALB?
You can use an ALB with ECS (Elastic Container Service) to replace Eureka/Zuul. ECS will take care of the service discovery for you and will map all instances of a particular service to a target group. Your ALB routing table can then route to target groups based on simple routing rules. The routing rules in ALB are quite simple, though they are improving over time.
Different systems that can be used to run microservices along with Spring Boot:
Eureka:
Probably the first microservice to be up. Eureka is a service registry, meaning it knows which microservices are running and on which ports. Eureka is deployed as a separate application, and we can use the @EnableEurekaServer annotation along with @SpringBootApplication to make that app a Eureka server. So our Eureka service registry is up and running. From now on, all microservices will be registered with this Eureka server by using the @EnableDiscoveryClient annotation along with @SpringBootApplication in all deployed microservices.
Zuul: Zuul is a load balancer, a routing application, and a reverse proxy server as well. That is, where we previously used Apache for reverse proxying, for microservices we can now use Zuul. The advantage is that in Zuul we can set configurations programmatically, e.g. if a request matches /customer/*, send it to this microservice, and so on. Zuul can also act as a load balancer, which will pick the appropriate microservice instance in a round-robin fashion. So how does Zuul know the details of the microservices? The answer is Eureka: it works along with Eureka to get the microservice details. In fact, Zuul is also a Eureka client, which we mark using @EnableDiscoveryClient; that's how these two apps (Eureka and Zuul) are linked.
Ribbon:
Ribbon is used for load balancing. It is already available inside Zuul, which uses Ribbon for its load balancing. Microservices are identified by service name in the properties file. If we run 2 instances of one microservice on different ports, this will be identified by Eureka, and along with Ribbon (inside Zuul), requests will be redirected in a balanced way.
AWS ALB, NGINX, AWS API Gateway, etc.: There are alternatives for all of the above. AWS has its own load balancer, service discovery, API gateway, etc. Not only AWS; all cloud platforms, like Azure, have these. It depends on which one you want to use.
To add a general question as well: how do these microservices communicate with each other? The actual REST APIs can be called using RestTemplate or a Feign client, or message queues like RabbitMQ can be used.
Eureka can be used in conjunction with NGINX, which leads to a very powerful combination.
I am using it in an AWS EC2 environment. Previously, instead of NGINX, I was using Spring Cloud Gateway, and before that Zuul. Depending on the load, Spring Cloud Gateway was running on AWS t3.medium or t3.large instances. After moving to NGINX I am using a t3.micro instance (8 times less memory). I am almost sure that I could do the trick with a t3.nano instance (16 times less memory) as well, but I wanted to be sure that there would be no surprises.
Below are the high-level steps you have to take in order to plug NGINX into the Eureka ecosystem. You can find more details in the article NGINX With Eureka Instead of Spring Cloud Gateway or Zuul.
Create a service that can read the configuration of all applications from Eureka and 'translate' it into NGINX configuration (see the sketch below).
Create a cron job entry that periodically reads the configuration from the above service and triggers the NGINX hot reload.
NGINX then consumes the configuration produced by the service and the cron job and works as the API gateway.
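For the 'translate' step, a sketch of what the generated piece of NGINX configuration might look like, built from the instances pulled out of Eureka (the service name, hosts, and file path are all placeholders):

    // Sketch: render an NGINX upstream block for one Eureka-registered application
    // and write it where nginx.conf can include it.
    import { writeFileSync } from 'fs';

    interface EurekaInstance {
      hostName: string;
      port: { $: number };
    }

    function renderUpstream(serviceName: string, instances: EurekaInstance[]): string {
      const servers = instances
        .map((i) => `    server ${i.hostName}:${i.port.$};`)
        .join('\n');
      return `upstream ${serviceName} {\n${servers}\n}\n`;
    }

    // The cron job from step 2 would regenerate this file and run "nginx -s reload";
    // a matching "location /customer-service/ { proxy_pass http://customer-service/; }"
    // rule in the server block then routes requests to whatever instances Eureka reported.
    writeFileSync(
      '/etc/nginx/conf.d/eureka-upstreams.conf',
      renderUpstream('customer-service', [{ hostName: '10.0.1.12', port: { $: 8080 } }])
    );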

Setting up multiple EC2 instances and multiple subdomain under one parent domain in AWS Route 53

I am developing a set of frontend webapps (for instance vaadin or angular) and backend RESTful services. Each frontend webapp will consume one or more of these backend services. I want both webapps and services to be secured over https.
Now, I want to register a single domain, say mydomain.com, and deploy the backend services such that they are available at
service1.api.mydomain.com, service2.api.mydomain.com etc. The frontend apps should be available at webapp1.mydomain.com, webapp2.mydomain.com etc.
I need to be able to set up two or more EC2 instances for the services, and the same for the webapps. For instance, service1 may be running on instance A, service2 on instance B, webapp1 on instance C, and webapp2 on instance D.
How do I configure this setup in AWS Route 53?
Since there is a limit on the maximum number of Elastic IPs (max 5) that can be allocated for one AWS account, I suppose separate public IPs for all the EC2 instances are not a solution, since I will have more than 5 such subdomains.
I hope you can provide a practical example configuration with two services and two webapps.
You can submit a request to get the Elastic IP (EIP) limit increased for your account. Small increases (e.g. from 5 to 10) should be fairly quick and easy to obtain. Larger increases should be obtainable if you can justify it to AWS support.
https://console.aws.amazon.com/support/home#/case/create?issueType=service-limit-increase&limitType=service-code-vpc
If you're open to using path-based routing instead of subdomain based (e.g. mydomain.com/app1 and mydomain.com/app1/api) or a mix of the two (e.g. app1.mydomain.com and app1.mydomain.com/api), you could look at using an Application Load Balancer (ALB). You would need one ALB per subdomain used.
http://docs.aws.amazon.com/elasticloadbalancing/latest/application/tutorial-load-balancer-routing.html
Note: I expect subdomain-based routing to be available with the ALB in the future, but it hasn't been released yet.
ALBs could be cheaper than using Classic Elastic Load Balancers (ELBs), but if you're not using the load balancing functionality at all, EIPs may be your best bet since they're free when attached to a running instance.

ECS container routing with an application load balancer in AWS

I know application load balancers are new in AWS, and discussions (help) are scarce up to now.
I have a few api containers (docker) running in EC2 Container Service (ECS). I can take advantage of application load balancers to manage routing on an application level rather than a network level. This is exactly what ECS has lacked up until now.
Getting to the point...
I'm trying to get to a point where the load balancer will detect the pattern in the request url and route the request to the correct container, but route the request without the pattern included.
For example:
http://elb.eu-west-1.elb.amazonaws.com/app1/ping
Should route request '/ping' to the app1 container
http://elb.eu-west-1.elb.amazonaws.com/app2/ping
Should route request '/ping' to the app2 container
etc...
Each app has its own target group and corresponding pattern: /app1*, /app2*
The problem
I can successfully get a request to '/app1/ping' to route to the app1 container; however, the request hits the container as '/app1/ping' (obviously), but I only need '/ping' to hit the container. '/app1' is irrelevant to the container.
Any ideas how I can achieve this?
Application Load Balancers do a couple of things very well, but there's an awful lot they do not do. This is true for a lot of AWS services (e.g. SQS only recently, after almost a decade, got FIFO support), and you can either love or hate this.
Your use case seems to fit the AWS API Gateway very well, which is a service that can be used to map certain external endpoints to certain internal endpoints (and a lot more...). There's even a blog post on the AWS blog about how to use Application Load Balancing with the EC2 Container Service and the API Gateway together.
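If changing the containers is an option, another workaround sometimes used is to make each app tolerant of its prefix instead of rewriting the path at the load balancer. A minimal Express sketch (the /app1 prefix and port are placeholders):

    // Sketch: mount the same router at '/' and at '/app1' so the container answers
    // whether or not the ALB forwards the '/app1' prefix with the request.
    import express from 'express';

    const router = express.Router();
    router.get('/ping', (_req, res) => res.send('pong'));

    const app = express();
    app.use('/', router);
    app.use('/app1', router);

    app.listen(80, () => console.log('listening on 80'));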