I would like to migrate from AWS ELB to API Gateway with an AWS EKS Cluster.
Reasons: cost savings (the number of requests is not too large), lower latency, caching, and other things. One problem: I will need to keep both running for a while.
Here's the scenario:
I tried using a private NLB with a VPC Link to reach the EKS services, and that works. However, AFAIK I would need one NLB per service, and I have roughly 15 services, which would increase the cost a lot. So I'm looking for a suggestion on how to connect API Gateway to the EKS services that is cost-effective and performs better than the ELB. I've heard of using another ingress as a single entry point, but I'm not sure whether that's possible.
You should opt for this:
User -> API Gateway -> PrivateLink -> VPC endpoint
Inside the VPC, EKS can be configured with the Horizontal Pod Autoscaler (to handle scaling of the workloads) and the Cluster Autoscaler or Karpenter (karpenter.sh) for node scaling.
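One way to avoid one NLB per service is to put a single internal load balancer in front of the cluster's ingress controller and reach it through a single VPC Link; API Gateway routes then map paths to the individual services. A rough boto3 sketch using an HTTP API (the names, subnet/security-group IDs, and listener ARN are all placeholders):

```python
# Sketch: one API Gateway HTTP API reaching all EKS services through a single
# VPC Link and one internal load balancer, instead of one NLB per service.
# All names, ARNs, and IDs below are placeholders/assumptions.
import boto3

apigw = boto3.client("apigatewayv2")

# 1. One VPC Link into the private subnets (created once, shared by all routes).
vpc_link = apigw.create_vpc_link(
    Name="eks-internal",
    SubnetIds=["subnet-aaa", "subnet-bbb"],
    SecurityGroupIds=["sg-123"],
)

api = apigw.create_api(Name="eks-facade", ProtocolType="HTTP")

# 2. One proxy integration pointing at the listener of the single internal
#    load balancer that fronts the cluster's ingress controller.
integration = apigw.create_integration(
    ApiId=api["ApiId"],
    IntegrationType="HTTP_PROXY",
    IntegrationMethod="ANY",
    IntegrationUri="arn:aws:elasticloadbalancing:...:listener/...",  # placeholder
    ConnectionType="VPC_LINK",
    ConnectionId=vpc_link["VpcLinkId"],
    PayloadFormatVersion="1.0",
)

# 3. One route per service path; the ingress behind the LB does the final routing.
apigw.create_route(
    ApiId=api["ApiId"],
    RouteKey="ANY /service-a/{proxy+}",
    Target=f"integrations/{integration['IntegrationId']}",
)
```

With this shape, adding a 16th service is just another route, not another load balancer.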
Related
I have a cluster of private EC2 instances serving HTTP requests behind a public ALB. HTTPS termination happens on the ALB, with authentication on the EC2 instances. I want to move authentication to the ALB, ideally via mTLS, but ALB does not support mTLS. From some initial reading, it sounds like API Gateway can replace the load-balancing/firewall functions of the ALB in this design while also supporting mTLS? Is that correct?
If so, I wonder what would be the best way to implement sticky sessions, which don't seem to be supported by API Gateway but are needed by my app. I guess a client request could initially target an API served by any instance, but subsequent requests would then target an API unique to the instance that replied?
Are there other drawbacks to API Gateway, other than higher cost at high volume? Is there a better approach to this problem?
My initial assumptions were incorrect. API Gateway is not an alternative to ALB except in some specialized circumstances. So, the solution for me is to put an API Gateway in front of a private ALB, e.g., as described in this AWS example:
https://aws.amazon.com/premiumsupport/knowledge-center/api-gateway-alb-integration/
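For the mTLS part specifically, API Gateway enforces client certificates on a custom domain name backed by a truststore in S3 (and AWS recommends disabling the default execute-api endpoint so mTLS can't be bypassed). A rough boto3 sketch, with the domain, certificate ARN, and bucket as placeholders:

```python
# Sketch: enabling mTLS on an API Gateway custom domain that proxies to a
# private ALB. Domain, certificate ARN, and truststore location are placeholders.
import boto3

apigw = boto3.client("apigatewayv2")

apigw.create_domain_name(
    DomainName="api.example.com",
    DomainNameConfigurations=[{
        "CertificateArn": "arn:aws:acm:...:certificate/...",  # server-side cert
        "EndpointType": "REGIONAL",
        "SecurityPolicy": "TLS_1_2",
    }],
    # Client certificates are validated against a CA bundle you upload to S3.
    MutualTlsAuthentication={
        "TruststoreUri": "s3://my-truststore-bucket/ca-bundle.pem",
    },
)
```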
I have deployed my EKS cluster in private subnets. These subnets have internet access through a NAT gateway. I want to find out how much data is transferred from each pod to the NAT gateway.
You don't, really.
There is no metric available at that level of granularity. You can see the total bytes transferred in/out for each NAT gateway, but it won't tell you what percentage of the total each pod (or any other service in the private subnet, for that matter) accounts for.
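For completeness, this is roughly how you'd pull those per-gateway totals from CloudWatch (the gateway ID is a placeholder):

```python
# Sketch: pulling the per-NAT-gateway totals that *are* available, using the
# AWS/NATGateway CloudWatch metrics. The gateway ID is a placeholder.
from datetime import datetime, timedelta
import boto3

cw = boto3.client("cloudwatch")

resp = cw.get_metric_statistics(
    Namespace="AWS/NATGateway",
    MetricName="BytesOutToDestination",   # also: BytesInFromSource, etc.
    Dimensions=[{"Name": "NatGatewayId", "Value": "nat-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
    Period=3600,
    Statistics=["Sum"],
)
total_bytes = sum(p["Sum"] for p in resp["Datapoints"])
print(f"Bytes sent through the NAT gateway in the last 24h: {total_bytes:.0f}")
```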
By default, containers from all pods in an EKS cluster share network interface(s) from the host(s) of the cluster, which is more cost-effective and saves available IP addresses in your VPC, but it means you can't track individual container traffic with flow logs. In theory (I don't recommend this), you could configure your cluster to assign a VPC network interface to each container, track traffic to your NAT gateway(s) independently with VPC flow logs, then filter/aggregate the data and relate it back to the originating pods to determine how much traffic each pod sent to the NAT gateway. In practice, this is difficult and expensive.
See "How can I find the top talkers or contributors to traffic through the NAT gateway in my VPC?" for more detail.
Another option may be to use a proxy container for requests bound for the NAT gateway and have the proxy collect per-pod metrics. You'd have to configure the pods to use the proxy, share pod information with the proxy, and configure the proxy to track/provide the metrics. I don't know of any off-the-shelf tools that do this.
I am trying to understand the use of API Gateway along with AWS ALB (Ingress Controller) for the EKS cluster.
Let's say,
there are 10 microservices in the AWS EKS cluster, running on 10 pods. The EKS cluster is in private subnets.
I can create a Kubernetes Ingress, which will create an ALB and provide rule-based routing. The ALB will be in public subnets and, I believe, AWS will allocate a public IP to the ALB. I can put the ALB behind Route 53 to access it using a domain name. My understanding is that ALB supports multiple features, including host- or path-based routing, TLS (Transport Layer Security) termination, WebSockets, HTTP/2, AWS WAF (Web Application Firewall) integration, integrated access logs, and health checks.
So, security-wise there should not be any challenge. Am I wrong?
Please refer to the link for the above-mentioned solution architecture.
Is there any specific use case where I need to use AWS API Gateway in front of AWS ALB in the above-mentioned architecture?
What additional benefits does AWS API Gateway provide on top of AWS ALB?
Should I put the AWS ALB in private subnets if I decide to use AWS API Gateway in front of it?
With API Gateway you get rate limiting and throttling, and if you want to authenticate and authorize requests based on OAuth or any other auth model, that can be done with API Gateway as well.
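For example, throttling and quotas are attached per client through usage plans and API keys; a rough boto3 sketch (the API/stage IDs and limits are placeholders):

```python
# Sketch: the kind of per-client throttling API Gateway adds in front of the
# ALB. The stage/API IDs and limits below are placeholders.
import boto3

apigw = boto3.client("apigateway")

plan = apigw.create_usage_plan(
    name="partner-tier",
    throttle={"rateLimit": 50.0, "burstLimit": 100},   # requests/second, burst
    quota={"limit": 100000, "period": "MONTH"},        # hard monthly cap
    apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],
)

# Each API key attached to the plan gets its own counted quota/throttle.
key = apigw.create_api_key(name="partner-a", enabled=True)
apigw.create_usage_plan_key(
    usagePlanId=plan["id"], keyId=key["id"], keyType="API_KEY",
)
```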
I have a web app running on my Amazon EC2 instance. How can I integrate a Web Application Firewall with my EC2?
I have tried setting up the WAF, but it can only be associated with either a CloudFront distribution or an Elastic Load Balancer. Do I need to set up a CloudFront distribution and point it at my EC2 instance?
I ended up setting up an elastic load balancer pointing to my single instance and then adding the web application firewall pointing to the load balancer. It works pretty well and doesn't cost too much more per month from AWS.
There are two approaches through which you can connect AWS WAF to your EC2 instance:
AWS CloudFront
Application Load Balancer (ALB)
Each approach has its own pros and cons. If your application serves mostly content that can be cached, then use AWS CloudFront along with WAF. If your application cluster needs to scale but most of the content is dynamic, then going for an ALB is more reasonable.
Note: There is an added fixed monthly cost for an ALB (in addition to a variable cost, which is not significant), while CloudFront pricing is variable and consumption-driven.
It is also possible to have both CloudFront and ALB together, in which case you can attach the WAF to CloudFront only.
This is how you use AWS WAF; it only works in these two scenarios. For an EC2 application it is best to configure an ALB in front of it (even if you only have one instance).
BTW: You might get away with only using the Application Load Balancer (ALB) from AWS, which does more content-validity checks than the classic AWS ELB.
You need to set up at least an application-layer load balancer (ALB) to use AWS WAF.
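Once the ALB exists, attaching a WAFv2 web ACL to it is a single association call; a rough boto3 sketch with placeholder ARNs:

```python
# Sketch: attaching an existing WAFv2 web ACL to the ALB that fronts the EC2
# instance. Both ARNs are placeholders.
import boto3

wafv2 = boto3.client("wafv2")

wafv2.associate_web_acl(
    WebACLArn="arn:aws:wafv2:us-east-1:123456789012:regional/webacl/my-acl/...",
    # Regional resources (ALB, API Gateway stage, AppSync) are associated this
    # way; for CloudFront you set the web ACL on the distribution instead.
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/...",
)
```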
Side note: AWS WAF has a lot of restrictions. For request-count-based blocking you will end up writing Lambda functions to count requests and update the AWS WAF rule set. Also, as far as I know, they don't provide WAF logs. Try looking at cloud WAF solutions like Sophos.
We have several microservices on AWS ECS. We have a single ALB with a different target group for each microservice. We want to expose some endpoints externally while keeping some endpoints just for internal communication.
The problem is that if we put our load balancer in a public subnet, we are exposing all registered endpoints externally. If we move the load balancer to a private subnet, we have to use some sort of proxy in the public subnet, which requires additional infrastructure/cost and a custom implementation of all the security concerns like DDoS, etc.
What possible approaches do we have, or does AWS provide some sort of out-of-the-box solution for this?
I would strongly recommend running two ALBs for this. Sure, it will cost you more (not double, because the traffic costs won't be doubled), but it's much more straightforward to have an internal load balancer and an external load balancer. Work hours cost money too! Running two ALBs will be the least admin and probably the cheapest overall.
Check out WAF. It stands for Web Application Firewall and is available as an AWS service. Follow these steps as guidance:
Create a WAF ACL.
Add "String and regex matching" condition for your private endpoints.
Add "IP addresses" condition for your IP list/range that are allowed to access private endpoints.
Create a rule in your ACL to Allow access if both conditions above are met.
Associate your ALB with the WAF ACL.
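The steps above are phrased in terms of WAF Classic conditions; with the current WAFv2 API the same idea can be expressed as a single block rule (private path AND source not in the allowed IP set), leaving the default action as Allow so public endpoints keep working. A rough sketch with placeholder names and ARNs:

```python
# Sketch of the idea above with the current WAFv2 API: block requests to the
# private paths unless they come from the allowed IP set, and let everything
# else through. Names, scope, and the IP-set ARN are placeholders.
import boto3

wafv2 = boto3.client("wafv2")

wafv2.create_web_acl(
    Name="protect-private-endpoints",
    Scope="REGIONAL",                      # REGIONAL = usable with an ALB
    DefaultAction={"Allow": {}},           # public endpoints stay reachable
    Rules=[{
        "Name": "block-private-from-outside",
        "Priority": 0,
        "Statement": {
            "AndStatement": {"Statements": [
                # Requests whose path starts with /internal/ ...
                {"ByteMatchStatement": {
                    "SearchString": b"/internal/",
                    "FieldToMatch": {"UriPath": {}},
                    "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
                    "PositionalConstraint": "STARTS_WITH",
                }},
                # ... that do NOT come from the allowed IP ranges.
                {"NotStatement": {"Statement": {
                    "IPSetReferenceStatement": {
                        "ARN": "arn:aws:wafv2:...:regional/ipset/office-ips/...",
                    },
                }}},
            ]},
        },
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "blockPrivateFromOutside",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "protectPrivateEndpoints",
    },
)
```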
UPDATE:
In this case you have to use an external-facing ALB in a public subnet, as mentioned by Dan Farrell in the comment below.
I would suggest doing it like this:
one internal ALB
one target group per microservice, as limited by ECS.
one Network Load Balancer (NLB), with one IP-based target group.
The IP-based target group will have the internal ALB's IP addresses. As the private IP addresses of an ALB are not static, you will need to set up a CloudWatch cron rule with this Lambda function (forked from the AWS documentation and modified to work on public endpoints as well); a stripped-down sketch follows the link:
https://github.com/talal-shobaita/populate-nlb-tg-withalb/
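A minimal sketch of what that Lambda does, under a few assumptions: the NLB target group is of target type "ip", the function runs somewhere the internal ALB name resolves (i.e. inside the VPC), both names below are placeholders, and unlike the linked function it doesn't deregister stale IPs:

```python
# Sketch: resolve the internal ALB's current private IPs and register them as
# IP targets of the NLB's target group. Names/ARNs are placeholders.
import socket
import boto3

ALB_DNS_NAME = "internal-my-alb-123456.us-east-1.elb.amazonaws.com"
NLB_TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:...:targetgroup/alb-ips/..."

def handler(event, context):
    # Resolve the ALB's current private IPs (they can change over time), so
    # this must run where the internal name resolves, e.g. a Lambda in the VPC.
    ips = sorted({info[4][0] for info in socket.getaddrinfo(ALB_DNS_NAME, 80)})

    # Register them as IP targets of the NLB's target group.
    elbv2 = boto3.client("elbv2")
    elbv2.register_targets(
        TargetGroupArn=NLB_TARGET_GROUP_ARN,
        Targets=[{"Id": ip, "Port": 80} for ip in ips],
    )
    return {"registered": ips}
```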
Both ALB and NLB are scalable and protected from DDoS by AWS. AWS WAF is another great tool that can be attached directly to your ALB for extended protection.
Alternatively, you can wait for AWS to support multiple target group registrations per service; it is already on their roadmap:
https://github.com/aws/containers-roadmap/issues/104
This is how we eventually solved it.
Two load balancers: one in a private subnet and one in a public subnet.
Some APIs are meant to be public, so they are exposed directly through the public LB.
For some private APIs whose endpoints need to be exposed, we added a proxy behind the public LB and routed those particular paths from the public LB to the private LB through this proxy.
These days API Gateway is the best way to do this. You can have your API serve a number of different endpoints while exposing only the public ones via API Gateway, which proxies back to the API.
I don't see it mentioned yet, so I'll note that we use AWS Cloud Map for internal routing and an ALB for "external" (in our case simply intra-/inter-VPC) communication. I didn't read it in depth, but I think this article describes it.
AWS Cloud Map is a managed solution that lets you map logical names to the components/resources for an application. It allows applications to discover the resources using one of the AWS SDKs, RESTful API calls, or DNS queries. AWS Cloud Map serves registered resources, which can be Amazon DynamoDB tables, Amazon Simple Queue Service (SQS) queues, any higher-level application services that are built using EC2 instances or ECS tasks, or using a serverless stack.
...
Amazon ECS is tightly integrated with AWS Cloud Map to enable service discovery for compute workloads running in ECS. When you enable service discovery for ECS services, it automatically keeps track of all task instances in AWS Cloud Map.
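For illustration, this is roughly how a service inside the VPC could look up an ECS-registered service through the Cloud Map API instead of DNS (the namespace and service names are placeholders):

```python
# Sketch: discover another microservice that ECS registered in Cloud Map.
# Namespace/service names below are placeholders.
import boto3

sd = boto3.client("servicediscovery")

resp = sd.discover_instances(
    NamespaceName="internal.local",
    ServiceName="orders",
    HealthStatus="HEALTHY",   # only return instances passing health checks
)

for inst in resp["Instances"]:
    attrs = inst["Attributes"]
    print(attrs.get("AWS_INSTANCE_IPV4"), attrs.get("AWS_INSTANCE_PORT"))
```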
You want to look at AWS Security Groups.
A security group acts as a virtual firewall for your instance to control inbound and outbound traffic.
For each security group, you add rules that control the inbound traffic to instances, and a separate set of rules that control the outbound traffic.
Even more specific to your use case, though, might be their doc on ELB security groups. These are, as you may expect, security groups applied at the ELB level rather than the instance level.
Using security groups, you can specify who has access to which endpoints.
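For example, a single ingress rule on the load balancer's security group can limit an endpoint to a known CIDR range; a rough boto3 sketch with placeholder values:

```python
# Sketch: restricting who can reach the load balancer with a security group
# rule. The group ID and CIDR below are placeholders.
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0abc1234def567890",          # the ELB/ALB's security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        # Only this CIDR (e.g. the office/VPN range) may hit the endpoint.
        "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "office VPN"}],
    }],
)
```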