Using a VPC for API Gateway and RDS

I am new to AWS. I have a REST API I built with Django that I want to deploy behind AWS API Gateway. It connects to a PostgreSQL database on AWS RDS.
I've heard that it is more secure to put both in a VPC, but I don't really know how that makes it more secure. What does putting them both in a VPC actually do? Thanks!

Since you probably don't want anyone accessing the DB directly, a VPC lets you lock the DB down so it is reachable only by your API. In addition, while your API needs to be accessible from the internet anyway, you get robust logging, traffic filtering, and access control that run separately from the application. That is, even if the application framework turns out to have a security hole, the VPC rules might mitigate it, and even if an attacker manages to get into the API controller and wreak havoc, the logging lives separately and keeps working. Depending on your configuration, it can even alert you to unexpected traffic.
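For illustration, a minimal boto3 sketch of that lock-down (the security group IDs are hypothetical): the DB's security group admits PostgreSQL traffic only from the API's security group, and has no rule admitting the internet.

import boto3

ec2 = boto3.client("ec2")

# Hypothetical IDs: DB_SG protects the RDS instance, API_SG is attached to the API hosts
DB_SG = "sg-0db0000000000000"
API_SG = "sg-0api000000000000"

# Allow PostgreSQL (5432) into the DB security group ONLY from the API's security group.
# With no 0.0.0.0/0 rule, the DB is unreachable from the public internet.
ec2.authorize_security_group_ingress(
    GroupId=DB_SG,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "UserIdGroupPairs": [{"GroupId": API_SG}],
    }],
)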

Related

Reaching GCP Cloud Run instance through VPC with "only internal range" egress

The current setup is as follows:
I have two Cloud Run services. The first acts as the "back-end": it needs to reach external services but should be reachable ONLY by the second service. The second acts as the "front-end": it needs to reach auth0 and the "back-end", and must be reachable by any client with a browser.
I recognize that the setup is not optimal, but I've inherited it as is and we cannot migrate to another solution (maybe k8s). I'm trying to make this work with the least amount of impact on the infrastructure and, ideally, without having to touch the services themselves.
What I've tried is to restrict the ingress of the back-end service to INTERNAL and place two serverless VPC connectors (one per service), so that the front-end service can reach the back-end but no one else can.
But I've encountered a huge issue: if I send all of the front-end's egress through the VPC it works, but then the front-end cannot reach auth0 and therefore the users cannot authenticate. If I set the egress to "mixed" (only internal IP ranges go through the VPC), the Cloud Run URL (*.run.app) is not resolved through the VPC and therefore returns a big bad 403.
What I tried so far:
Placing a load balancer in front of the back-end service. But the serverless NEG only supports the global HTTP load balancer, and I'd need an internal one if I wanted an internal IP to resolve against
Trying to see if the serverless VPC connector itself MAYBE provided an internal (static) IP, but it doesn't seem to
Someone in another question suggested a "MIG as a proxy" but I haven't managed to figure that out (Can I run Cloud Run applications on a private IP (inside dedicated VPC network)?)
Fooled around with the Gateway API, but it seems that I'd have to provide an OpenAPI specification for the back-end, and I'm still under the delusion that this might be resolved with a cheaper (in terms of effort) approach.
So, I get that the Cloud Run instance cannot possibly have an internal IP by itself, but is there any kind of GCP product that can act as a proxy? Can someone elaborate on the "MIG as a proxy" approach (Managed Instance Group? Of what, though?), which might be the solution I'm looking for? (Sadly, I do not have the reputation needed to comment on that question or I would have).
Any kind of pointer is, as always, deeply appreciated.
You are designing this wrong. Use Cloud Run's identity-based access control instead of trying to route traffic. Google IAP (Identity Aware Proxy) will block all traffic that is not authorized.
Authenticating service-to-service
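As a minimal sketch of that service-to-service pattern (assuming the google-auth library; the back-end URL is a placeholder), the front-end fetches an identity token for the back-end and sends it as a bearer token, and Cloud Run rejects callers without a valid one:

import urllib.request

import google.auth.transport.requests
import google.oauth2.id_token

# Placeholder back-end URL; the token's audience must be the receiving service's URL
BACKEND_URL = "https://backend-abc123-uc.a.run.app"

# On Cloud Run this fetches an identity token from the metadata server
auth_req = google.auth.transport.requests.Request()
token = google.oauth2.id_token.fetch_id_token(auth_req, BACKEND_URL)

# Call the back-end; requests without a valid token are rejected before your code runs
req = urllib.request.Request(BACKEND_URL, headers={"Authorization": f"Bearer {token}"})
print(urllib.request.urlopen(req).read())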

Is this possible in API Gateway?

I've been asked to look into an AWS setup for my organisation but this isn't my area of experience so it's a bit of a challenge. After doing some research, I'm hoping that API Gateway will work for us and I'd really appreciate it if someone could tell me if I'm along the right lines.
The plan is:
We create a VPC with several private subnets. The EC2 instances in the subnets will be hosting browser based applications like Apache Guacamole, Splunk etc.
We attach an API Gateway REST API to the VPC, which will allow users access only to the applications on 'their' subnet
Users follow a link to the API Gateway from an external API which will provide OAuth2 credentials.
The API Gateway REST API verifies their credentials and serves them with a page with links to the private IP addresses for the services in 'their' subnet only. They can then click on the links and open the Splunk, Guacamole browser pages etc.
I've also looked at Client VPN as a possible solution but my organisation wants users to be able to connect directly to the individual subnets from an existing API without having to download any other tools (this is due to differing levels of expertise of users and the need to work remotely). If there is a better solution which would provide the same workflow then I'd be happy to implement that instead.
Thanks for any help
This sounds like it could work in theory. My main concern would be whether Apache Guacamole, or any of the other services you are trying to expose, requires long-lived HTTP connections. API Gateway has a hard requirement that all requests complete within 29 seconds.
I would also suggest looking into exposing these services via a public Application Load Balancer instead of API Gateway, since ALB has built-in OIDC authentication support. You'll need to look at the requirements of the specific services you are trying to expose to evaluate whether API Gateway or ALB is the better fit.
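For reference, a hedged boto3 sketch of the ALB route (all ARNs and IdP endpoints are placeholders): an HTTPS listener that authenticates against an OIDC provider before forwarding to the target group.

import boto3

elbv2 = boto3.client("elbv2")

elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:...:loadbalancer/app/example/123",  # placeholder
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:...:certificate/abc"}],  # placeholder
    DefaultActions=[
        {
            # 1) Authenticate against your OIDC provider (placeholder endpoints)...
            "Type": "authenticate-oidc",
            "Order": 1,
            "AuthenticateOidcConfig": {
                "Issuer": "https://idp.example.com",
                "AuthorizationEndpoint": "https://idp.example.com/authorize",
                "TokenEndpoint": "https://idp.example.com/oauth/token",
                "UserInfoEndpoint": "https://idp.example.com/userinfo",
                "ClientId": "my-client-id",
                "ClientSecret": "my-client-secret",
            },
        },
        {
            # 2) ...then forward authenticated requests to the instances
            "Type": "forward",
            "Order": 2,
            "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/example/456",  # placeholder
        },
    ],
)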
I would personally go about this by configuring each of these environments with an Infrastructure as Code tool, in such a way that you can create a new client environment by simply running your IaC tool with a few parameters like the client ID and the domain name or subdomain you want to use. I would actually spin each one up in its own VPC, since it sounds like you want each client's environment to be isolated from the others.

Do I need a WAF (Web Application Firewall) to protect my app?

I have created a micro-service app relying on simple functions as a service. Since this app is API based, I distribute tokens in exchange for some personal login info (OAuth or login/password).
Just to be clear, developers will then access my app using something like: https://example.com/api/get_ressource?token=personal-token-should-go-here
However, my server and application logic still get hit even if the token is not provided, meaning anonymous attackers could flood my services without logging in and take my service down.
I came across WAFs recently, and they promise to act as a middle-man that filters abusive attacks. My understanding is that a WAF just reverse-proxies my API and applies some known attack-pattern filters before delegating a request to my actual backend.
What I don't really get is: what if an attacker has direct access to my backend's IP?! Wouldn't he be able to bypass the WAF and DDoS my backend directly? Does WAF protection rely only on my origin IP not being leaked?
Finally, I have read that a WAF only makes sense if it is able to mitigate DDoS through a CDN, in order to spread Layer 7 DDoS attacks across multiple servers and bandwidth if needed. Is that true? Or can I just implement a WAF myself?
Go with the cloud: you can deploy your app to AWS. There are two plus points to this.
1. Your prod server will be behind a private IP, not a public IP.
2. AWS WAF is a budget-friendly service and is good at blocking DoS, scanner, and flood attacks.
You can also use a captcha on failed attempts and block offending IPs.
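As a concrete illustration of the flood protection, a hedged boto3 sketch of an AWS WAF rate-based rule (the name and limit are placeholders) that blocks any single IP exceeding 2,000 requests per 5 minutes:

import boto3

waf = boto3.client("wafv2")

waf.create_web_acl(
    Name="api-flood-protection",  # placeholder
    Scope="REGIONAL",             # use "CLOUDFRONT" if fronted by a CDN distribution
    DefaultAction={"Allow": {}},  # allow by default; the rule below blocks abusers
    Rules=[{
        "Name": "rate-limit-per-ip",
        "Priority": 0,
        "Statement": {
            # Counted over a rolling 5-minute window, aggregated per source IP
            "RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"},
        },
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "RateLimitPerIp",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "ApiFloodProtection",
    },
)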

How to set up Tomcat session state in AWS EC2 for failover and security

I am setting up a Tomcat application in EC2. For reliability, I am running two or more instances. If one server goes down, my users should be redirected to the other instance. This suggests that session state should be kept in an external source, or mirrored between the servers.
AWS offers a hosted service, ElastiCache, which seems like it would work well. I even found a nice library, memcached-session-manager. However, I soon ran into some issues.
Unless someone can convince me otherwise, I need the session states to be encrypted in transit. Otherwise someone could intercept the network traffic and pretend to be someone else on my site. I don't see any built-in Amazon method to keep traffic off the internet. (Is peering available here?)
The library mentioned earlier does have Redis support with SSL, but it does not support a Redis cluster. Someone put in a pull request for this but it has not been incorporated and this library is a complex build. I may talk myself into living without the cluster, but that puts us back at a single point of failure.
Tomcat is running on EC2 in your VPC, and ElastiCache is in your VPC. Your AWS VPC is an isolated network. Nobody can intercept the traffic between the EC2 and ElastiCache servers unless your VPC network becomes compromised in some way.
If you want to use Redis instead, with SSL connections, then I believe at this time you would need a Tomcat Session Manager implementation that uses Jedis. This one uses Jedis, but you would need to upgrade the version of Jedis it uses in order to use SSL connections.
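The thread is about Tomcat/Jedis, but as a language-neutral illustration of the in-transit encryption in question, here is what a TLS connection to an ElastiCache Redis endpoint looks like with Python's redis-py (the endpoint is a placeholder, and in-transit encryption must be enabled on the cluster):

import redis

r = redis.Redis(
    host="my-cluster.xxxxxx.use1.cache.amazonaws.com",  # placeholder endpoint
    port=6379,
    ssl=True,                  # encrypt in transit so session data can't be sniffed
    ssl_cert_reqs="required",  # verify the server certificate
)
r.set("session:abc123", "serialized-session-state")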

Amazon API Gateway in front of ELB and ECS Cluster

I'm trying to put an Amazon API Gateway in front of an Application Load Balancer, which balances traffic to my ECS Cluster, where all my microservices are deployed. The motivation to use the API Gateway is to use a custom authorizer through a lambda function.
System diagram
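(For context on what such a custom authorizer looks like, here is a minimal sketch of a Lambda TOKEN authorizer in Python; the token check is a placeholder, and a real one would validate a JWT or call an identity provider.)

def handler(event, context):
    # For a TOKEN authorizer, API Gateway passes the Authorization header value here
    token = event.get("authorizationToken", "")

    # Placeholder check only; validate a real token in practice
    effect = "Allow" if token == "allow-me" else "Deny"

    # Return an IAM policy allowing or denying the invocation
    return {
        "principalId": "example-user",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": event["methodArn"],
            }],
        },
    }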
In Amazon's words (https://aws.amazon.com/api-gateway/faqs/): "Proxy requests to backend operations also need to be publicly accessible on the Internet". This forces me to make the ELB public (internet-facing) instead of internal. I then need a way to ensure that only the API Gateway is able to access the ELB from outside the VPC.
My first idea was to use a Client Certificate in the API Gateway, but the ELB doesn't seem to support it.
Any ideas would be highly appreciated!
This seems to be a huge missing piece for the API gateway technology, given the way it's pushed. Not being able to call into an internal-facing server in the VPC severely restricts its usefulness as an authentication front-door for internet access.
FWIW, in Azure, API Management supports this out of the box - it can accept requests from the internet and call directly into your virtual network which is otherwise firewalled off.
The only way this seems to be possible under AWS is using Lambdas, which adds a significant layer of complexity, esp. if you need to support various binary protocols.
Looks like this support has now been added. Haven't tested, YMMV:
https://aws.amazon.com/about-aws/whats-new/2017/11/amazon-api-gateway-supports-endpoint-integrations-with-private-vpcs/
We decided to use a header to make sure all traffic is coming through API Gateway. We save a secret in our app's environment variables and tell API Gateway to inject it when we create the API. Then we check for that key in our app.
Here is what we are doing for this:
In our base controller, we check for the key (we just have a REST API behind the gateway):
// Compare the header injected by API Gateway with the secret from our environment
string apiGatewayPassthroughHeader = context.HttpContext.Request.Headers["ApiGatewayPassthroughHeader"];
if (apiGatewayPassthroughHeader != Environment.GetEnvironmentVariable("ApiGatewayPassthroughHeader"))
{
    // The request did not come through API Gateway; reject it
    throw new UnauthorizedAccessException("Missing or invalid API Gateway header.");
}
In our swagger file (we are using swagger.json as the source of our APIs):
"x-amazon-apigateway-integration": {
  "type": "http_proxy",
  "uri": "https://${stageVariables.url}/path/to/resource",
  "httpMethod": "post",
  "requestParameters": {
    "integration.request.header.ApiGatewayPassthroughHeader": "${ApiGatewayPassthroughHeader}"
  }
},
In our docker compose file (we are using Docker, but the same could be done in any settings file):
services:
  example:
    environment:
      - ApiGatewayPassthroughHeader=9708cc2d-2d42-example-8526-4586b1bcc74d
At build time we take the secret from our settings file and replace it in the swagger.json file. This way we can rotate the key in our settings file and API gateway will update to use the key the app is looking for.
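A minimal sketch of that build-time substitution (file names are assumptions; the secret comes from the environment as in the compose file above):

import os

# Secret populated from the settings file at build time
secret = os.environ["ApiGatewayPassthroughHeader"]

with open("swagger.json") as f:
    spec = f.read()

# Replace the ${ApiGatewayPassthroughHeader} placeholder before importing into API Gateway
with open("swagger.deploy.json", "w") as f:
    f.write(spec.replace("${ApiGatewayPassthroughHeader}", secret))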
I know this is an old issue, but I think they may have just recently added support.
"Amazon API Gateway announced the general availability of HTTP APIs, enabling customers to easily build high performance RESTful APIs that offer up to 71% cost savings and 60% latency reduction compared to REST APIs available from API Gateway. As part of this launch, customers will be able to take advantage of several new features including the ability the route requests to private AWS Elastic Load Balancers (ELB), including new support for AWS ALB, and IP-based services registered in AWS CloudMap. "
https://aws.amazon.com/about-aws/whats-new/2020/03/api-gateway-private-integrations-aws-elb-cloudmap-http-apis-release/
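A hedged boto3 sketch of that HTTP API private integration (all IDs and ARNs are placeholders): create a VPC Link, then point a private integration at the internal ALB's listener.

import boto3

apigw = boto3.client("apigatewayv2")

# VPC Link for HTTP APIs (placeholder subnet and security group IDs)
link = apigw.create_vpc_link(
    Name="ecs-private-link",
    SubnetIds=["subnet-aaa", "subnet-bbb"],
    SecurityGroupIds=["sg-ccc"],
)

# Private integration: proxy requests to the internal ALB's listener
apigw.create_integration(
    ApiId="api-id-placeholder",
    IntegrationType="HTTP_PROXY",
    IntegrationMethod="ANY",
    ConnectionType="VPC_LINK",
    ConnectionId=link["VpcLinkId"],
    IntegrationUri="arn:aws:elasticloadbalancing:...:listener/app/example/123/456",  # placeholder
    PayloadFormatVersion="1.0",
)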
It is possible if you use VPC Link and Network Load Balancer.
Please have a look at this post:
https://adrianhesketh.com/2017/12/15/aws-api-gateway-to-ecs-via-vpc-link/
TL;DR:
1. Create an internal Network Load Balancer connected to your target group (instances in a VPC)
2. In the API Gateway console, create a VPC Link and link it to the above NLB
3. Create an API Gateway endpoint, choose "VPC Link integration", and specify your NLB's internal URL as the "Endpoint URL"
Hope that helps!
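For the REST API variant described above, step 2 might look like this with boto3 (the NLB ARN is a placeholder):

import boto3

apigw = boto3.client("apigateway")

# A REST API VPC Link targets an internal Network Load Balancer
link = apigw.create_vpc_link(
    name="nlb-vpc-link",
    targetArns=["arn:aws:elasticloadbalancing:...:loadbalancer/net/example/123"],  # placeholder
)
print(link["id"])  # reference this VPC Link ID in the method's VPC Link integration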
It is now possible to add an authorizer directly to an Application Load Balancer (ALB) in front of ECS.
This can be configured directly in the rules of a listener. See this blog post for details:
https://aws.amazon.com/de/blogs/aws/built-in-authentication-in-alb/
Currently there is no way to put API Gateway in front of a private ELB, so you're right that it has to be internet-facing. The best workaround for your case that I can think of would be to put the ELB into TCP pass-through mode and terminate the client certificate on your end hosts behind the ELB.
The ALB should be internal in order to have requests routed to it through the VPC Link. This works perfectly fine in my setup without needing to put an NLB in front of it.
Routes should be as follows:
$default
/
GET (or POST or whichever you want to use)
The integration should be attached to all paths: $default, GET/POST/ANY, etc.
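A short boto3 sketch of attaching one integration to those routes (the API and integration IDs are placeholders):

import boto3

apigw = boto3.client("apigatewayv2")

API_ID = "api-id-placeholder"
INTEGRATION_ID = "integration-id-placeholder"

# Point every route key at the same private integration
for route_key in ["$default", "GET /", "POST /"]:
    apigw.create_route(
        ApiId=API_ID,
        RouteKey=route_key,
        Target=f"integrations/{INTEGRATION_ID}",
    )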