Do I need a WAF (Web Application Firewall) to protect my app? - amazon-web-services

I have created a microservice app relying on simple functions-as-a-service. Since this app is API-based, I distribute tokens in exchange for some personal login info (OAuth or login/password).
Just to be clear, developers will then access my app using something like: https://example.com/api/get_ressource?token=personal-token-should-go-here
However, my server and application logic still get hit even if no token is provided, meaning anonymous attackers could flood my services without logging in and take my service down.
I came across WAFs recently, and they promise to act as a middleman, filtering abusive requests. My understanding is that a WAF simply reverse-proxies my API and applies filters for known attack patterns before delegating a request to my actual backend.
What I don't really get is: what if an attacker has direct access to my backend's IP? Wouldn't they be able to bypass the WAF and DDoS my backend directly? Does WAF protection rely only on my origin IP not being leaked?
Finally, I have read that a WAF only makes sense if it can mitigate DDoS through a CDN, spreading Layer 7 DDoS attacks across multiple servers and extra bandwidth if needed. Is that true, or can I just implement a WAF myself?
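On the direct-to-IP bypass concern raised above: a WAF in front of a reachable origin can indeed be side-stepped, so the origin itself has to be restricted at the network layer to accept traffic only from the filtering tier. A hedged boto3 sketch, assuming the API sits behind CloudFront and using a placeholder security group ID:

```python
# Sketch (assumes the API sits behind CloudFront): allow HTTPS to the origin
# only from CloudFront's origin-facing IP ranges, via AWS's managed prefix
# list, so requests sent straight to the backend IP are dropped at the
# network layer. The security group ID below is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Look up the managed prefix list that tracks CloudFront origin-facing IPs.
pls = ec2.describe_managed_prefix_lists(
    Filters=[{"Name": "prefix-list-name",
              "Values": ["com.amazonaws.global.cloudfront.origin-facing"]}]
)
prefix_list_id = pls["PrefixLists"][0]["PrefixListId"]

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder: the backend's security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "PrefixListIds": [{"PrefixListId": prefix_list_id}],
    }],
)
```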

Go with the cloud: you can deploy your app to AWS. There are two plus points to this:
1. Your production server will sit behind a private IP, not a public IP.
2. AWS WAF is an affordable service, and it is good at blocking DoS, scanner, and flood attacks.
You can also serve a CAPTCHA after failed attempts and block the offending IP.
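For the flood scenario specifically, AWS WAF's rate-based rules block any source IP that exceeds a request threshold over a five-minute window. A minimal boto3 sketch (the name, limit, and scope are illustrative, not prescriptive):

```python
# Sketch: a WAFv2 web ACL with a rate-based rule that blocks any single IP
# sending more than 2,000 requests in the 5-minute evaluation window.
# Attach the ACL to your ALB or API Gateway stage afterwards.
import boto3

waf = boto3.client("wafv2", region_name="us-east-1")

waf.create_web_acl(
    Name="api-flood-protection",   # illustrative name
    Scope="REGIONAL",              # use "CLOUDFRONT" for a CloudFront distribution
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "rate-limit-per-ip",
        "Priority": 1,
        "Statement": {
            "RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"},
        },
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "RateLimitPerIP",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "ApiFloodProtection",
    },
)
```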

Related

Reaching GCP Cloud Run instance through VPC with "only internal range" egress

The current setup is as follows:
I have a Cloud Run service that acts as the "back-end": it needs to reach external services, but it should be reachable ONLY by a second Cloud Run service, which acts as the "front-end". The front-end needs to reach auth0 and the back-end, and must be reachable by any client with a browser.
I recognize that the setup is not optimal, but I've inherited it as-is and we cannot migrate to another solution (maybe k8s, eventually). I'm trying to make this work with the least impact on the infrastructure and, ideally, without having to touch the services themselves.
What I've tried is to restrict the ingress of the back-end service to INTERNAL and place two serverless VPC connectors (one per service), so that the front-end service would be able to reach the back-end but no one else could.
But I've encountered a huge issue: if I send all of the front-end's egress through the VPC, it works, but then the front-end cannot reach auth0, so users cannot authenticate. If I set egress to "mixed" (only internal IP ranges go through the VPC), the Cloud Run URL (*.run.app) is not resolved through the VPC, and the request returns a big bad 403.
What I tried so far:
Placing a load balancer in front of the back-end service. But serverless NEGs only support the global HTTP load balancer, and I'd need an internal one if I wanted an internal IP to resolve against
Checking whether the serverless VPC connector itself maybe provides an internal (static) IP, but it doesn't seem so
Someone in another question suggested a "MIG as a proxy" but I haven't managed to figure that out (Can I run Cloud Run applications on a private IP (inside dedicated VPC network)?)
Fooled around with the Gateway API, but it seems I'd have to provide an OpenAPI specification for the back-end, and I'm still under the delusion that this might be solved with a cheaper (in terms of effort) approach.
So, I get that the Cloud Run instance cannot possibly have an internal IP by itself, but is there any kind of GCP product that can act as a proxy? Can someone elaborate on the "MIG as a proxy" approach (Managed Instance Group? Of what, though?), which might be the solution I'm looking for? (Sadly, I do not have the reputation needed to comment on that question or I would have).
Any kind of pointer is, as always, deeply appreciated.
You are designing this wrong. Use Cloud Run's identity-based access control instead of trying to route traffic. Google IAP (Identity-Aware Proxy) will block all traffic that is not authorized.
Authenticating service-to-service
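Concretely, with the back-end set to require authentication, the front-end fetches an ID token whose audience is the back-end's URL and sends it as a Bearer token. A minimal sketch with the google-auth library (the URLs below are placeholders):

```python
# Sketch of Cloud Run service-to-service auth: the front-end obtains an ID
# token for the back-end's URL (the audience) from the metadata server and
# attaches it as a Bearer token. Cloud Run's IAM layer then rejects any
# request without a valid token, regardless of network path.
import urllib.request

import google.auth.transport.requests
import google.oauth2.id_token

BACKEND_URL = "https://backend-abc123-uc.a.run.app"  # placeholder URL

auth_req = google.auth.transport.requests.Request()
token = google.oauth2.id_token.fetch_id_token(auth_req, BACKEND_URL)

req = urllib.request.Request(
    BACKEND_URL + "/internal/endpoint",              # placeholder path
    headers={"Authorization": f"Bearer {token}"},
)
print(urllib.request.urlopen(req).status)
```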

Is this possible in API Gateway?

I've been asked to look into an AWS setup for my organisation, but this isn't my area of expertise, so it's a bit of a challenge. After doing some research, I'm hoping that API Gateway will work for us, and I'd really appreciate it if someone could tell me whether I'm along the right lines.
The plan is:
We create a VPC with several private subnets. The EC2 instances in the subnets will be hosting browser-based applications like Apache Guacamole, Splunk, etc.
We attach an API Gateway with a REST API to the VPC, which will allow users access only to the applications on 'their' subnet
Users follow a link to the API Gateway from an external API, which provides OAuth2 credentials.
The API Gateway REST API verifies their credentials and serves them a page with links to the private IP addresses for the services in 'their' subnet only. They can then click the links and open the Splunk, Guacamole, etc. browser pages.
I've also looked at Client VPN as a possible solution but my organisation wants users to be able to connect directly to the individual subnets from an existing API without having to download any other tools (this is due to differing levels of expertise of users and the need to work remotely). If there is a better solution which would provide the same workflow then I'd be happy to implement that instead.
Thanks for any help
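For the credential-verification step in the plan above, REST APIs on API Gateway typically delegate to a Lambda authorizer, which inspects the caller's token and returns an IAM policy. A minimal sketch with the token check stubbed out (real code would verify the OAuth2/JWT token against the external IdP):

```python
# Sketch of a Lambda token authorizer for API Gateway: it receives the
# caller's token, validates it (stubbed here), and returns an IAM policy
# allowing or denying the invocation.
def handler(event, context):
    token = event.get("authorizationToken", "")

    # Placeholder validation; replace with real OAuth2/JWT verification.
    if token == "valid-token":
        principal_id, effect = "user-123", "Allow"
    else:
        principal_id, effect = "anonymous", "Deny"

    return {
        "principalId": principal_id,
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": event["methodArn"],
            }],
        },
    }
```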
This sounds like it could work in theory. My main concern would be whether Apache Guacamole, or any of the other services you are trying to expose, requires long-lived HTTP connections: API Gateway has a hard requirement that all requests complete within 29 seconds.
I would also suggest looking into exposing these services via a public Application Load Balancer instead of API Gateway; ALB has built-in OIDC authentication support. You'll need to look at the requirements of the specific services you are trying to expose to evaluate whether API Gateway or ALB is the better fit.
I would personally go about this by configuring each of these environments with an Infrastructure-as-Code tool, in such a way that you can create a new client environment simply by running your IaC tool with a few parameters, like the client ID and the domain name or subdomain you want to use. I would actually spin each one up in its own VPC, since it sounds like you want each client's environment to be isolated from the others.
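As a sketch of the ALB route mentioned above, the built-in OIDC action attaches to a listener rule; via boto3 it could look like the following, where every ARN, endpoint, and secret is a placeholder:

```python
# Sketch: an ALB listener rule that first authenticates the user against an
# OIDC provider, then forwards to the target group. All ARNs, endpoints,
# and credentials here are placeholders.
import boto3

elbv2 = boto3.client("elbv2", region_name="eu-west-1")

elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/...",  # placeholder
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/*"]}],
    Actions=[
        {
            "Type": "authenticate-oidc",
            "Order": 1,
            "AuthenticateOidcConfig": {
                "Issuer": "https://idp.example.com",  # placeholder IdP
                "AuthorizationEndpoint": "https://idp.example.com/authorize",
                "TokenEndpoint": "https://idp.example.com/oauth/token",
                "UserInfoEndpoint": "https://idp.example.com/userinfo",
                "ClientId": "my-client-id",
                "ClientSecret": "my-client-secret",
            },
        },
        {
            "Type": "forward",
            "Order": 2,
            "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/...",  # placeholder
        },
    ],
)
```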

AWS 3-Tier Architecture Issue

Need some serious help here, thanks a lot in advance!
I need to deploy a scalable 3-tier web application on AWS, and I'm having some doubts/trouble understanding the best practices for designing the architecture.
NOTE: As per my understanding, all the backend requests are made from the browser, after the frontend server serves the HTML/CSS/JS to the user.
Let me show you what I have come up with so far:
Assuming the above 'note':
Cons (as per my understanding):
All the backend routes will be exposed to the outside world.
Even though the backend servers are in a private subnet, now that they're being accessed via an external load balancer, the API endpoints can be reached by end users.
How would we route a request from one load balancer to another? From what I have seen, you can only route a request to an EC2 instance added to a target group.
To overcome the cons of the above approach (as I understand them), I came up with this architecture instead:
Pros (as per my understanding):
The backend routes are safe (in a way) because we have a way of internally connecting from the frontend to the backend servers (if required).
Cons:
If the request is made from the browser, the endpoints are again exposed.
Solution that I found online:
REAL BIG DOUBT IN THIS LAST ONE
This breaks all the logic of my understanding that all requests are made by the browser, from the user to the backend, because here the requests to the backend are routed FROM the frontend servers.
QUESTIONS
What if a backend request (say, login) is made by the user from the browser?
How would this work out in that case?
Seems like you have done some good work here.
Let me start by making things easy for you:
Users only interact with the Load Balancer: if you want to keep it simple and not break your frontend asset serving off to an external service like CloudFront (which you should, if you are starting out), you will be hosting the application only via EC2 instances (the application origin, or simply origin). Your requests would look something like this:
Users <--> ALB <--> EC2
Notice how users never interact with EC2 instances directly; it's always via the Application Load Balancer (ALB).
To oversimplify, this is how HTTP operates: a request is made to a resource at an IP, and the response is sent back from that same resource or IP. So, as in your diagram, a request will not be answered directly by the EC2 instance but will instead be relayed back via the ALB.
You don't need a NAT gateway: NAT gateways exist to let resources in a private subnet reach the internet. Unless you want your application to access the internet, you don't need one. Many large-scale applications are in part locked down by not keeping this resource at all.
You are still protecting the origin: given that only the ALB can be accessed over the internet and everything else stays internal, you can structure things here any way you want. You could have a few internal microservices that are used internally without ever being exposed to end users. Note that these requests never leave the VPC.
You can read more about this and build a sample application via the official docs here or access AWS tutorials here.
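To make "only the ALB is reachable" concrete: the instances' security group can reference the ALB's security group instead of any CIDR range. A hedged boto3 sketch, with placeholder group IDs:

```python
# Sketch: lock the app instances down so only the ALB can reach them, by
# allowing ingress from the ALB's security group rather than from any IP
# range. Both group IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.authorize_security_group_ingress(
    GroupId="sg-0aaaaaaaaaaaaaaaa",  # placeholder: app instances' security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 80,
        "ToPort": 80,
        # Referencing the ALB's security group means "any resource holding
        # that group", so no public CIDR ever appears in the instances' rules.
        "UserIdGroupPairs": [{"GroupId": "sg-0bbbbbbbbbbbbbbbb"}],  # placeholder: ALB's SG
    }],
)
```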
To me, #3 is the correct solution because it does not expose /api to end users (since you mention "I DO NOT want the users to directly access the /api"). In #1, I don't think you could limit access to /api to only the front-end servers, since security groups work on the whole load balancer, not per-target.
Also, because it is an Internet-facing load balancer, any requests from the front-end servers to the load balancer in #1 will reference the load balancer via public IP addresses. This incurs a 1c/GB data-transfer charge for traffic that goes "out of" the VPC and then back in again.
Only #3 correctly refers to back-end resources via private IP addresses: the internal load balancer is referenced via private IP addresses.
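To illustrate #3: the front-end servers proxy /api calls to the internal load balancer's private DNS name, so the browser only ever talks to the front-end. A minimal Flask sketch (the internal ALB hostname is a placeholder):

```python
# Hypothetical front-end proxy: forwards /api/* to the internal ALB by its
# private DNS name, so back-end endpoints are never exposed to browsers.
import requests
from flask import Flask, Response, request

app = Flask(__name__)

# Placeholder: your internal ALB's DNS name, resolvable only inside the VPC.
BACKEND = "http://internal-backend-alb-123456.us-east-1.elb.amazonaws.com"

@app.route("/api/<path:path>", methods=["GET", "POST", "PUT", "DELETE"])
def proxy(path):
    # Relay the browser's request to the internal load balancer, dropping
    # the Host header so the ALB routes on its own hostname.
    upstream = requests.request(
        method=request.method,
        url=f"{BACKEND}/api/{path}",
        headers={k: v for k, v in request.headers if k.lower() != "host"},
        data=request.get_data(),
        timeout=10,
    )
    return Response(upstream.content, status=upstream.status_code,
                    content_type=upstream.headers.get("Content-Type"))
```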

AWS Application Load Balancer with HTTP2

I have a RESTful app deployed on a number of EC2 instances sitting behind a Load Balancer.
Authentication is handled in part by a custom request header called "X-App-Key".
I have just migrated my Classic Load Balancers to Application Load Balancers, and I'm starting to experience intermittent issues where some valid requests (tested with curl) fail authentication for some users. It looks like the custom request header is only intermittently being passed through. Using ApacheBench, roughly 100 of 500 requests failed.
If I test with a Classic Load Balancer, all 500 succeed.
I looked into this a bit more and found that the users this is failing for are using a slightly newer version of curl; specifically, the failing requests are using HTTP/2. If I add "--http1.1" to the curl request, they all pass fine.
So the issue seems to be specific to using a custom request header with the new-generation Application Load Balancers and HTTP/2.
Am I doing something wrong?!
I found the answer in this post...
AWS Application Load Balancer transforms all headers to lower case
It turns out the headers come through from the ALB in lowercase (HTTP/2 requires lowercase header field names, and the ALB normalizes them accordingly). I needed to update my backend to handle this.
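A backend-side fix is to look headers up case-insensitively, so both "X-App-Key" (HTTP/1.1 clients) and "x-app-key" (HTTP/2 via the ALB) match. A minimal sketch:

```python
# Minimal sketch: case-insensitive header lookup, so the same auth code
# works whether the load balancer delivers "X-App-Key" or "x-app-key".
def get_header(headers: dict, name: str):
    """Return a header's value regardless of the casing it arrived with."""
    wanted = name.lower()
    for key, value in headers.items():
        if key.lower() == wanted:
            return value
    return None

# As delivered over HTTP/2 the name is lowercased; the lookup still works.
incoming = {"x-app-key": "secret123"}
assert get_header(incoming, "X-App-Key") == "secret123"
```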
You probably have to enable sticky sessions on your load balancer.
They are needed to keep the session pinned to the same instance.
That said, needing to keep a session active is an application-level concern, and stickiness is not useful in some kinds of services (depending on the nature of your system, it's not really recommended), as it reduces performance in REST-like systems.

AWS Basic Auth for EC2

I have a small web service serving HTTP traffic on an AWS EC2 instance.
I would like to add a layer of security, so that people need to authenticate in order to actually reach the service.
Something extremely simple, even with a password hard-coded somewhere, just so the web service isn't left completely exposed on the web.
What is the fastest way to accomplish this?
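As a hedged sketch of the "hard-coded password" stopgap described above: HTTP Basic Auth can be checked in a few lines of stdlib Python. Use this only over HTTPS, since Basic Auth merely base64-encodes the credentials:

```python
# Sketch: HTTP Basic Auth with a hard-coded credential in front of a tiny
# stdlib server. Fine only as a stopgap, and only behind TLS.
import base64
from http.server import BaseHTTPRequestHandler, HTTPServer

# "admin:change-me" is a placeholder credential.
EXPECTED = "Basic " + base64.b64encode(b"admin:change-me").decode()

class AuthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Reject any request whose Authorization header doesn't match.
        if self.headers.get("Authorization") != EXPECTED:
            self.send_response(401)
            self.send_header("WWW-Authenticate", 'Basic realm="restricted"')
            self.end_headers()
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hello, authenticated world\n")

HTTPServer(("", 8080), AuthHandler).serve_forever()
```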