Having a hard time figuring out a microservices architecture.
Right now I have an ECS Cluster with two services (TodoService, CategoriesService) running in containers. Both of the services have their own Load Balancer. I'm trying to build an API Gateway where /todos would route to the Todo-app-load-balancer and /categories would route to the Categories-app-load-balancer.
First, is this a good approach to microservices? And second, the question from the title: can an API Gateway point to multiple Application Load Balancers?
First, is this a good approach to microservices?
Yes, there is nothing wrong with this approach.
Can an API Gateway point to multiple Application Load Balancers?
Yes, you can point each method of the API Gateway to an entirely different backend resource.
In the case of an Application Load Balancer, there are multiple ways of doing this. Probably the easiest is to have a public Application Load Balancer and to create an HTTP integration for it, specifying the DNS name of the load balancer as the endpoint. For more information, see this support page.
The other option would be to use a VPC Link, which integrates with private load balancers. While this is what I would recommend for production, it is a bit more complex to set up.
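A minimal boto3 sketch of the first (public ALB plus HTTP integration) option, assuming an HTTP API; the API id and ALB DNS names below are hypothetical:

import boto3

apigw = boto3.client("apigatewayv2")

API_ID = "a1b2c3d4"  # hypothetical HTTP API id
ROUTES = {
    "/todos": "todo-alb-1234567890.us-east-1.elb.amazonaws.com",
    "/categories": "categories-alb-0987654321.us-east-1.elb.amazonaws.com",
}

for path, alb_dns in ROUTES.items():
    # HTTP_PROXY integration pointing at the ALB's public DNS name.
    integration = apigw.create_integration(
        ApiId=API_ID,
        IntegrationType="HTTP_PROXY",
        IntegrationMethod="ANY",
        IntegrationUri=f"http://{alb_dns}/{{proxy}}",
        PayloadFormatVersion="1.0",
    )
    # Greedy route so /todos/* and /categories/* are forwarded too.
    apigw.create_route(
        ApiId=API_ID,
        RouteKey=f"ANY {path}/{{proxy+}}",
        Target=f"integrations/{integration['IntegrationId']}",
    )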
Whether this is a good or bad approach is really an architectural decision, but I can suggest that using one ALB (as ingress) with different routing rules can solve your problem. Also note that API Gateway only allows adding ELB services directly in the case of an NLB; direct integration is not allowed for an ALB, but there is a workaround: you can use the ALB's DNS name manually as the endpoint.
Related
The current setup is as follows:
I have a Cloud Run service which acts as the "back-end": it needs to reach external services, but should be reachable ONLY by a second Cloud Run service, which acts as the "front-end" and needs to reach auth0 and the back-end while being reachable by any client with a browser.
I recognize that the setup is not optimal, but I've inherited it as is and we cannot migrate to another solution (maybe k8s) right now. I'm trying to make this work with the least amount of impact on the infrastructure and, ideally, without having to touch the services themselves.
What I've tried is to restrict the ingress of the back-end service to INTERNAL and place two serverless VPC connectors (one per service), so that the front-end service would be able to reach the back-end but no one else could.
But I've encountered a huge issue: if I route all egress of the front-end through the VPC, it works, but then the front-end cannot reach auth0 and therefore users cannot authenticate. If I set the egress to "mixed" (only internal IP ranges go through the VPC), the Cloud Run URL (*.run.app) is resolved outside the VPC, and the request therefore returns a big bad 403.
What I tried so far:
Placing a load balancer in front of the back-end service. But serverless NEGs only support the global HTTP(S) load balancer, and I'd need an internal one if I wanted an internal IP to resolve against.
Trying to see if the serverless VPC connector itself maybe provided an internal (static) IP, but it doesn't seem to.
Someone in another question suggested a "MIG as a proxy" but I haven't managed to figure that out (Can I run Cloud Run applications on a private IP (inside dedicated VPC network)?)
Fooled around with API Gateway, but it seems I'd have to provide an OpenAPI specification for the back-end, and I'm still under the delusion that this might be resolved with a cheaper (in terms of effort) approach.
So, I get that the Cloud Run instance cannot possibly have an internal IP by itself, but is there any kind of GCP product that can act as a proxy? Can someone elaborate on the "MIG as a proxy" approach (Managed Instance Group? Of what, though?), which might be the solution I'm looking for? (Sadly, I do not have the reputation needed to comment on that question or I would have).
Any kind of pointer is, as always, deeply appreciated.
You are designing this wrong. Use Cloud Run's identity-based access control instead of trying to route traffic. Google IAP (Identity Aware Proxy) will block all traffic that is not authorized.
Authenticating service-to-service
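A minimal sketch of that service-to-service pattern, assuming the back-end has its ingress restricted and requires authentication (the back-end URL below is hypothetical): the front-end mints an ID token for the back-end's audience and sends it as a bearer token.

import requests
import google.auth.transport.requests
from google.oauth2 import id_token

# Hypothetical URL of the protected back-end Cloud Run service.
BACKEND_URL = "https://backend-abc123-uc.a.run.app"

def call_backend(path: str) -> requests.Response:
    # Mint an ID token for the back-end's audience using the front-end's
    # own service identity (available out of the box on Cloud Run).
    auth_request = google.auth.transport.requests.Request()
    token = id_token.fetch_id_token(auth_request, BACKEND_URL)
    # Cloud Run verifies the token; grant the front-end's service account
    # the roles/run.invoker role on the back-end service.
    return requests.get(f"{BACKEND_URL}{path}",
                        headers={"Authorization": f"Bearer {token}"})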
We are looking to move our blog platform to a separate EC2 server (running Nginx) for better performance and scalability.
Scenario is:
Web request (www.example.com) -> Load Balancer/Route -> Current EC2 Server
Blog request (www.example.com/blog) -> Load Balancer/Route -> New Separate EC2 Server for blog
Please help: in this case, what is the best option to use?
HAProxy
ALB - AWS
Any other solution?
Also, is it possible to have the load balancer or routing mechanism in a different AWS region? We are currently hosted in AWS.
HAProxy
You would have to set this up on an EC2 server and manage everything yourself. You would be responsible for scaling this correctly to handle all the traffic it gets. You would be responsible for deploying it to multiple availability zones to provide high availability. You would be responsible for installing all security updates on the operating system.
ALB - AWS
Amazon will automatically scale this out to handle any amount of traffic you get. Amazon will handle all security patches of the underlying system. Amazon provides free SSL certificates for ALBs. Amazon will deploy this automatically across multiple availability zones to provide high availability.
Any other solution?
I think AWS Global Accelerator would work here as well, but you would have to weigh the differences between Global Accelerator and ALB to decide which fits your use case and budget the best.
You could also look at placing a CDN in front of everything, like CloudFront or Cloudflare.
Also, is it possible to have the load balancer or routing mechanism in a different AWS region?
AWS Global Accelerator would be the thing to look at if load balancing in different regions is a concern for you. Given the details you have provided I'm not sure why you would want this however.
Probably what you really need is a CDN in front of your websites, with or without the ALB.
Scenario is:
Web request (www.example.com) -> Load Balancer/Route -> Current EC2 Server
Blog request (www.example.com/blog) -> Load Balancer/Route -> New Separate EC2 Server for blog
In my view you can use an ALB deployed across multiple AZs for high availability, for the following reasons:
AWS ALB allows you to route traffic based on various attributes, and the path in the URL is one of them (see the sketch after this list):
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#rule-condition-types
With an AWS ALB you can have two target groups of instances handling traffic: one for the first path (www.example.com) and a second target group for the other path (www.example.com/blog).
ALB also supports SNI (which allows it to serve multiple certificates for multiple domains behind a single ALB), so all you need to do is set up a single HTTPS listener and upload your certificates: https://aws.amazon.com/blogs/aws/new-application-load-balancer-sni/
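A minimal boto3 sketch of the path-based rule, assuming an existing HTTPS listener and a target group for the blog (all ARNs below are hypothetical):

import boto3

elbv2 = boto3.client("elbv2")

# Hypothetical ARNs for the existing listener and the blog target group.
LISTENER_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc/def"
BLOG_TG_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/blog/123abc"

# Requests matching /blog or /blog/* go to the blog target group;
# everything else falls through to the listener's default action.
elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/blog", "/blog/*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": BLOG_TG_ARN}],
)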
I have answered [something similar]; it might help you as well.
This is my opinion, take it as that. I am sure a lot of people won't agree.
If your project is small or personal, you can go with HAProxy (cheap: USD 4 or less if you get a t3a as a spot instance, or free if you run it inside another EC2 instance of yours, perhaps using Docker).
If your project is not personal or small, go with an ALB (more expensive, but simpler and better integrated with other AWS services).
HAProxy can handle tons of connections, but you have to do more things by yourself. ALB can also handle tons of connections and AWS will do most of the work.
I think HAProxy is more suitable for personal/small projects, because if your project doesn't grow you don't have to touch HAProxy. It is set-and-forget, the same as an ALB, but costs less.
You usually won't care about availability zones or disaster tolerance in a personal project, so HAProxy should be easy to configure.
Another consideration: AWS offers a free tier on ALB, so if your project will run for less than a year, an ALB is the way to go.
If you are learning, then an ALB should be considered, because real clients usually prefer to stick with AWS for everything; HAProxy is your call and also your risk (you'd be reducing costs for a company that usually pays a lot more for your salary, so it's not worth the risk).
I have a question about API Gateway, and thanks in advance for all your insights.
AWS API Gateway REST APIs seem to integrate only with a Network Load Balancer out of the box, using a private link. But in my use case I want to use an Application Load Balancer; is the only way around this to make it an internet-facing load balancer?
If yes, what do you all think about this solution architecture? At the end of the day, I want the Application Load Balancer to be accessible only through the API Gateway. What risks do you see with this solution?
Need some serious help here, thanks a lot in advance!
I need to deploy a scalable 3 tier web application on AWS and I am having some doubts/trouble understanding the best practice to design the architecture.
NOTE: As per my understanding, all the backend requests are made through the browser, after the frontend server serves the HTML/CSS/JS to the user.
Let me show you what I have come up with so far:
Assuming the above 'note':
Cons (as per my understanding):
All the backend routes will be exposed to the outside world.
Even though the backend servers are in a private subnet, now that they're being accessed via an external load balancer, the API endpoints can be reached by users.
How would we route a request from one load balancer to another? From what I have seen, you can only route a request to an EC2 instance registered in the target group.
To overcome the cons as I think in the above approach, I came up with this architecture instead:
Pros (as per my understanding):
The backend routes are safe (in a way) because we have a way of connecting internally from the frontend to the backend servers (if required).
Cons:
If the request is made from the browser, the endpoints are again exposed.
Solution that I found online:
REAL BIG DOUBT IN THIS LAST ONE
This breaks all the logic of my understanding that all requests to the backend are made by the user's browser, because here the requests to the backend are routed FROM the frontend servers.
QUESTIONS
What if the backend request (say login) is made by the user from the browser?
How will this work out in such case?
Seems like you have done some good work here.
Let me start by making things easy for you:
Users only interact with the Load Balancer: If you want to keep it simple and not break off your frontend asset serving to an external service like CloudFront (which you should, if you are starting out), you will be hosting the application only via EC2 instances (the application origin, or simply the origin). Your requests would look something like this:
Users <--> ALB <--> EC2
Notice how users never interact with the EC2 instances directly; it's always via the Application Load Balancer (ALB).
If I can oversimplify things, this is how HTTP operates: a request is made to a resource at an IP, and the response is sent back from that same resource or IP. So, as in your diagram, a request will not be answered directly by the EC2 instance, but rather relayed back via the ALB.
You don't need a NAT gateway: NAT gateways exist to let resources in a private subnet access the internet. In this case, unless you want your application itself to access the internet, you don't need one. Many large-scale applications are actually locked down in part by not keeping this resource at all.
You are still protecting the origin: Given that only the ALB can be accessed over the internet and everything else is internal, you can structure things here in any way that you want. You could have a few internal microservices that are used internally without ever being exposed to end users. Note that here a request never leaves the VPC.
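One way to enforce this is at the security-group level, so the instances accept traffic only from the ALB. A minimal boto3 sketch, with hypothetical group IDs:

import boto3

ec2 = boto3.client("ec2")

# Hypothetical security group IDs for the app instances and the ALB.
APP_SG = "sg-0123456789abcdef0"
ALB_SG = "sg-0fedcba9876543210"

# Allow HTTP into the app instances ONLY from the ALB's security group;
# with no CIDR-based rule, there is no direct access from the internet.
ec2.authorize_security_group_ingress(
    GroupId=APP_SG,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 80,
        "ToPort": 80,
        "UserIdGroupPairs": [{"GroupId": ALB_SG}],
    }],
)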
You can read more about this and build a sample application via the official docs here or access AWS tutorials here.
To me, #3 is the correct solution because it does not expose /api to end users (since you mention "I DO NOT want the users to directly access the /api"). In #1, I don't think you could limit access to /api to only the front-end servers, since security groups work on the whole load balancer, not per-target.
Also, being an Internet-facing load balancer, any requests from the front-end servers to the load balancer in #1 will be referencing the load balancer via public IP addresses. This will cause a 1c/GB charge to go "out of" the VPC and then back in again.
Only #3 correctly refers to back-end resources via private IP addresses. The internal load balancer will be referenced via private IP addresses.
I'm trying to put an Amazon API Gateway in front of an Application Load Balancer, which balances traffic to my ECS Cluster, where all my microservices are deployed. The motivation to use the API Gateway is to use a custom authorizer through a lambda function.
System diagram
In Amazon's words (https://aws.amazon.com/api-gateway/faqs/): "Proxy requests to backend operations also need to be publicly accessible on the Internet". This forces me to make the ELB public (internet-facing) instead of internal. I then need a way to ensure that only the API Gateway is able to access the ELB from outside the VPC.
My first idea was to use a Client Certificate in the API Gateway, but the ELB doesn't seem to support it.
Any ideas would be highly appreciated!
This seems to be a huge missing piece of the API Gateway technology, given the way it's pushed. Not being able to call into an internal-facing server in the VPC severely restricts its usefulness as an authentication front door for internet access.
FWIW, in Azure, API Management supports this out of the box - it can accept requests from the internet and call directly into your virtual network which is otherwise firewalled off.
The only way this seems to be possible on AWS is by using Lambdas, which adds a significant layer of complexity, especially if you need to support various binary protocols.
Looks like this support has now been added. Haven't tested, YMMV:
https://aws.amazon.com/about-aws/whats-new/2017/11/amazon-api-gateway-supports-endpoint-integrations-with-private-vpcs/
We decided to use a header check to make sure all traffic is coming through the API Gateway. We save a secret in our app's environment variables and tell the API Gateway to inject it when we create the API. Then we check for that key in our app.
Here is what we are doing for this:
In our base controller we check for the key (we just have a REST API behind the gateway):

// Reject any request that did not come through the API Gateway:
// the gateway injects this header, and its value must match our secret.
string apiGatewayPassthroughHeader = context.HttpContext.Request.Headers["ApiGatewayPassthroughHeader"];
if (apiGatewayPassthroughHeader != Environment.GetEnvironmentVariable("ApiGatewayPassthroughHeader"))
{
    throw new UnauthorizedAccessException(); // or return a 401 via your framework's filter
}
In our swagger file (we are using swagger.json as the source of our APIs)
"x-amazon-apigateway-integration": {
"type": "http_proxy",
"uri": "https://${stageVariables.url}/path/to/resource",
"httpMethod": "post",
"requestParameters": {
"integration.request.header.ApiGatewayPassthroughHeader": "${ApiGatewayPassthroughHeader}"
}
},
In our docker-compose file (we are using Docker, but the same could be done in any settings file):

services:
  example:
    environment:
      - ApiGatewayPassthroughHeader=9708cc2d-2d42-example-8526-4586b1bcc74d
At build time we take the secret from our settings file and substitute it into the swagger.json file. This way we can rotate the key in our settings file, and the API Gateway will be updated to use the key the app is looking for.
I know this is an old issue, but I think they may have just recently added support.
"Amazon API Gateway announced the general availability of HTTP APIs, enabling customers to easily build high performance RESTful APIs that offer up to 71% cost savings and 60% latency reduction compared to REST APIs available from API Gateway. As part of this launch, customers will be able to take advantage of several new features including the ability the route requests to private AWS Elastic Load Balancers (ELB), including new support for AWS ALB, and IP-based services registered in AWS CloudMap. "
https://aws.amazon.com/about-aws/whats-new/2020/03/api-gateway-private-integrations-aws-elb-cloudmap-http-apis-release/
It is possible if you use a VPC Link and a Network Load Balancer.
Please have a look at this post:
https://adrianhesketh.com/2017/12/15/aws-api-gateway-to-ecs-via-vpc-link/
TL;DR
Create an internal Network Load Balancer connected to your target group (instances in a VPC)
In the API Gateway console, create a VPC Link and link it to the above NLB
Create an API Gateway endpoint, choose "VPC Link integration", and specify your NLB's internal URL as the "Endpoint URL"
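A minimal boto3 sketch of the VPC Link step, assuming a REST API and a hypothetical NLB ARN:

import boto3

apigw = boto3.client("apigateway")

# Hypothetical ARN of the internal Network Load Balancer.
NLB_ARN = ("arn:aws:elasticloadbalancing:us-east-1:123456789012:"
           "loadbalancer/net/my-internal-nlb/0123456789abcdef")

# The VPC Link is what lets REST API integrations reach the internal NLB.
vpc_link = apigw.create_vpc_link(name="ecs-vpc-link", targetArns=[NLB_ARN])
print(vpc_link["id"])  # reference this id as the integration's connectionId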
Hope that helps!
It is now possible to add an authorizer directly to Application Load Balancer (ALB) in front of ECS.
This can be configured directly in the rules of a listener. See this blog post for details:
https://aws.amazon.com/de/blogs/aws/built-in-authentication-in-alb/
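A minimal boto3 sketch of such a listener rule, assuming hypothetical listener/target-group ARNs and placeholder OIDC endpoints for your identity provider:

import boto3

elbv2 = boto3.client("elbv2")

# Hypothetical ARNs; replace the OIDC endpoints with your provider's.
LISTENER_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc/def"
TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/ecs-svc/123abc"

elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=1,
    Conditions=[{"Field": "path-pattern", "Values": ["/*"]}],
    Actions=[
        {   # Step 1: authenticate the caller against the OIDC provider.
            "Type": "authenticate-oidc",
            "Order": 1,
            "AuthenticateOidcConfig": {
                "Issuer": "https://example-idp.com",
                "AuthorizationEndpoint": "https://example-idp.com/authorize",
                "TokenEndpoint": "https://example-idp.com/oauth/token",
                "UserInfoEndpoint": "https://example-idp.com/userinfo",
                "ClientId": "my-client-id",
                "ClientSecret": "my-client-secret",
            },
        },
        {   # Step 2: only authenticated requests reach the ECS target group.
            "Type": "forward",
            "Order": 2,
            "TargetGroupArn": TARGET_GROUP_ARN,
        },
    ],
)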
Currently there is no way to put API Gateway in front of a private ELB, so you're right that it has to be internet-facing. The best workaround for your case I can think of would be to put the ELB into TCP pass-through mode and terminate the client certificate on your end hosts behind the ELB.
The ALB should be internal in order to have requests routed to it through the private link. This works perfectly fine in my setup without needing to put an NLB in front of it.
Routes should be as follows:
$default
/
GET (or POST, or whichever methods you want to use)
The integration should be attached to all routes: $default, GET/POST/ANY, etc. (a sketch follows).
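A minimal boto3 sketch of the private integration, assuming an existing HTTP API, an HTTP-API VPC link, and an internal ALB listener (all IDs/ARNs below are hypothetical):

import boto3

apigw = boto3.client("apigatewayv2")

API_ID = "a1b2c3d4"        # hypothetical HTTP API id
VPC_LINK_ID = "vl-abc123"  # hypothetical HTTP-API VPC link id
# For private integrations with an ALB, the IntegrationUri is the listener ARN.
ALB_LISTENER_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/internal-alb/abc/def"

integration = apigw.create_integration(
    ApiId=API_ID,
    IntegrationType="HTTP_PROXY",
    IntegrationMethod="ANY",
    IntegrationUri=ALB_LISTENER_ARN,
    ConnectionType="VPC_LINK",
    ConnectionId=VPC_LINK_ID,
    PayloadFormatVersion="1.0",
)

# Attach the same integration to the catch-all route and any explicit routes.
for route_key in ["$default", "GET /"]:
    apigw.create_route(
        ApiId=API_ID,
        RouteKey=route_key,
        Target=f"integrations/{integration['IntegrationId']}",
    )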