AWS Basic Auth for EC2

I have a small web service serving http traffic on an AWS EC2 instance.
I would like to add a layer of security, so that people need to authenticate in order to actually reach the service.
Something extremely simple, even with a password hard-coded somewhere, just so the web service isn't left completely exposed on the web.
What is the fastest way to accomplish this?
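For illustration only, here is a minimal sketch of what "extremely simple, hard-coded" HTTP Basic Auth could look like, assuming the service happens to be Python/Flask (the credentials and route are placeholders; with another stack the same check usually lives in a reverse proxy instead):

```python
# Hypothetical sketch: hard-coded HTTP Basic Auth in front of every route.
import secrets
from flask import Flask, Response, request

app = Flask(__name__)

USERNAME = "admin"       # hard-coded on purpose, as the question asks
PASSWORD = "change-me"   # placeholder

@app.before_request
def require_basic_auth():
    auth = request.authorization  # parsed "Authorization: Basic ..." header
    ok = (
        auth is not None
        and secrets.compare_digest(auth.username or "", USERNAME)
        and secrets.compare_digest(auth.password or "", PASSWORD)
    )
    if not ok:
        # 401 plus WWW-Authenticate makes browsers show a login prompt
        return Response(
            "Unauthorized", 401, {"WWW-Authenticate": 'Basic realm="service"'}
        )

@app.route("/")
def index():
    return "hello, authenticated world"
```

Keep in mind that Basic Auth sends credentials in the clear unless the instance also serves HTTPS.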

Reaching GCP Cloud Run instance through VPC with "only internal range" egress

The current setup is as follows:
I have a Cloud Run service that acts as the "back-end": it needs to reach external services but should be reachable ONLY by a second Cloud Run service, which acts as the "front-end", needs to reach auth0 and the "back-end", and must be reachable by any client with a browser.
I recognize that the setup is not optimal, but I've inherited it as is and we cannot migrate to another solution (maybe k8s). I'm trying to make this work with the least amount of impact on the infrastructure and, ideally, without having to touch the services themselves.
What I've tried is to restrict the ingress of the back-end service to INTERNAL and place two serverless VPC connectors (one per service), so that the front-end service would be able to reach the back-end but no one else could.
But I've encountered a huge issue: if I route all of the front-end's egress through the VPC, it works, but then the front-end cannot reach auth0 and users cannot authenticate. If I set the egress to "mixed" (only internal IP ranges go through the VPC), the Cloud Run URL (*.run.app) is resolved outside the VPC, so the request arrives as external traffic and returns a big bad 403.
What I tried so far:
Placing a load balancer in front of the back-end service. But serverless NEGs only support the global HTTP(S) load balancer, and I'd need an internal one if I wanted an internal IP to resolve against.
Checking whether the serverless VPC connector itself maybe provides an internal (static) IP, but it doesn't seem to.
Someone in another question suggested a "MIG as a proxy", but I haven't managed to figure that out (Can I run Cloud Run applications on a private IP (inside dedicated VPC network)?).
Fooling around with the Gateway API, but it seems I'd have to provide an OpenAPI specification for the back-end, and I'm still under the delusion that this might be resolved with a cheaper (in terms of effort) approach.
So, I get that the Cloud Run instance cannot possibly have an internal IP by itself, but is there any kind of GCP product that can act as a proxy? Can someone elaborate on the "MIG as a proxy" approach (Managed Instance Group? Of what, though?), which might be the solution I'm looking for? (Sadly, I do not have the reputation needed to comment on that question or I would have).
Any kind of pointer is, as always, deeply appreciated.
You are designing this wrong. Use Cloud Run's identity-based access control instead of trying to route traffic: require authentication on the back-end and grant the front-end's service account permission to invoke it. Google's Identity-Aware Proxy (IAP) will likewise block all traffic that is not authorized.
Authenticating service-to-service
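That linked pattern boils down to: the front-end fetches an identity token whose audience is the back-end's URL and sends it as a bearer token, while the back-end requires authentication and grants the front-end's service account the Cloud Run Invoker role. A minimal Python sketch (the back-end URL is a placeholder):

```python
# Sketch of Cloud Run service-to-service auth: fetch an ID token for the
# back-end's URL (on Cloud Run this comes from the metadata server) and
# call the back-end with it. BACKEND_URL is a hypothetical placeholder.
import urllib.request

import google.auth.transport.requests
import google.oauth2.id_token

BACKEND_URL = "https://backend-abc123-uc.a.run.app"  # placeholder

def call_backend(path="/"):
    auth_req = google.auth.transport.requests.Request()
    token = google.oauth2.id_token.fetch_id_token(auth_req, BACKEND_URL)
    req = urllib.request.Request(
        BACKEND_URL + path, headers={"Authorization": f"Bearer {token}"}
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

With that in place, the back-end can keep its ingress open to "all" and still reject every request that doesn't carry a valid token from an authorized identity.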

How can a beginner use AWS services to host a public server and create endpoints for a web application

I have a front-end development background, but this is my first time researching how to use AWS services to host a public server for our web application. Currently, I have trouble understanding how EC2 and API Gateway work with each other, and also how a public server hosts a web application in this case. I have read a number of tutorials, but I have trouble understanding where the API endpoint is generated. I saw that API Gateway can generate an endpoint, but in that case, do I still use EC2 to host the web application? And how do the URLs from these two connect to each other? Yeah, I think I got messy on understanding this web app structure, especially on the server side. Could someone briefly explain these two services and maybe point me to some useful tutorials? As a beginner, everything is so confusing to me. Thank you so much!!
The simple approach is to deploy your web/app server on an EC2 instance and check which port your service is running on, e.g. 8080. Go to the security group attached to that EC2 instance and open port 8080. You can also attach an Elastic IP so that your IP never changes even after restarting the EC2 instance, and then access your application publicly at http://<elastic-ip>:8080/.
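If you'd rather script that than click through the console, here is a boto3 sketch of the same security-group change (the group ID is a placeholder):

```python
# Sketch: open port 8080 to the world on the instance's security group.
import boto3

ec2 = boto3.client("ec2")
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder security group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 8080,
        "ToPort": 8080,
        # 0.0.0.0/0 = reachable from anywhere on the internet
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "app port"}],
    }],
)
```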
BTW, the best approach is to use an ELB on ECS/EKS together with API Gateway, deploy your static content to S3, and serve it via CloudFront.

Using a VPC for API Gateway and RDS

I am new to AWS. I have a REST API I built with Django and want to deploy it on AWS API Gateway. I also have that connecting to a PostgreSQL database on AWS RDS.
I've heard that it is more secure to put both in a VPC. But, I don't really know how that makes it more secure. What does putting them both in a VPC actually do? Thanks!
Since you probably don't want anyone to access the DB directly, a VPC lets you lock down the DB so it is only reachable from your API. In addition, while your API needs to be accessible from the internet anyway, you can have robust logging, traffic filtering, and access control that run separately from the application. That is, even if the application framework turns out to have a security hole, the VPC rules might be able to mitigate it; and even if an attacker manages to get into the API controller and wreak havoc, the logging exists separately and still works. Depending on your configuration, it can even alert you to unforeseen traffic.
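Concretely, "lock down the DB" usually means a security-group rule that allows the database port only from the API's security group instead of from any IP. A boto3 sketch, with both group IDs as placeholders:

```python
# Sketch: allow PostgreSQL (5432) into the RDS security group only from
# the security group the API runs in. Both group IDs are placeholders.
import boto3

ec2 = boto3.client("ec2")
ec2.authorize_security_group_ingress(
    GroupId="sg-0db0000000000000d",  # placeholder: RDS security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "UserIdGroupPairs": [{
            "GroupId": "sg-0ap1000000000000a",  # placeholder: API security group
            "Description": "Postgres reachable from the API tier only",
        }],
    }],
)
```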

Is this possible in API Gateway?

I've been asked to look into an AWS setup for my organisation but this isn't my area of experience so it's a bit of a challenge. After doing some research, I'm hoping that API Gateway will work for us and I'd really appreciate it if someone could tell me if I'm along the right lines.
The plan is:
We create a VPC with several private subnets. The EC2 instances in the subnets will be hosting browser based applications like Apache Guacamole, Splunk etc.
We attach to the VPC an API Gateway with a REST API which will allow users access to only the applications on 'their' subnet
Users follow a link to the API Gateway from an external API which will provide OAuth2 credentials.
The API Gateway REST API verifies their credentials and serves them with a page with links to the private IP addresses for the services in 'their' subnet only. They can then click on the links and open the Splunk, Guacamole browser pages etc.
I've also looked at Client VPN as a possible solution but my organisation wants users to be able to connect directly to the individual subnets from an existing API without having to download any other tools (this is due to differing levels of expertise of users and the need to work remotely). If there is a better solution which would provide the same workflow then I'd be happy to implement that instead.
Thanks for any help
This sounds like it could work in theory. My main concern would be whether Apache Guacamole, or any of the other services you are trying to expose, requires long-lived HTTP connections: API Gateway has a hard requirement that no request may take longer than 29 seconds.
I would suggest also looking into exposing these services via a public Application Load Balancer instead of API Gateway; ALB has built-in OIDC authentication support. You'll need to look at the requirements of the specific services you are trying to expose to evaluate whether API Gateway or ALB is a better fit.
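To give a flavor of the ALB option, here is a boto3 sketch of an HTTPS listener whose default action authenticates users against an OIDC provider before forwarding to the target group (every ARN, endpoint, and credential below is a placeholder):

```python
# Sketch: ALB listener that runs OIDC authentication before forwarding.
# All ARNs, endpoints, and credentials below are placeholders.
import boto3

elbv2 = boto3.client("elbv2")
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:...:loadbalancer/app/example/123",
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:...:certificate/abc"}],
    DefaultActions=[
        {
            "Type": "authenticate-oidc",  # runs first (Order=1)
            "Order": 1,
            "AuthenticateOidcConfig": {
                "Issuer": "https://idp.example.com",
                "AuthorizationEndpoint": "https://idp.example.com/authorize",
                "TokenEndpoint": "https://idp.example.com/token",
                "UserInfoEndpoint": "https://idp.example.com/userinfo",
                "ClientId": "example-client-id",
                "ClientSecret": "example-client-secret",
            },
        },
        {
            "Type": "forward",  # only reached after successful auth
            "Order": 2,
            "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/guacamole/456",
        },
    ],
)
```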
I would personally go about this by configuring each of these environments with an Infrastructure as Code tool, in such a way that you can create a new client environment by simply running your IaC tool with a few parameters like the client ID and the domain name or subdomain you want to use. I would actually spin each one up in its own VPC, since it sounds like you want each client's environment to be isolated from the others.

Do I need a WAF (Web Application Firewall) to protect my app?

I have created a micro-service app relying on simple functions as a service. Since this app is API based, I distribute tokens in exchange for some personal login info (OAuth or login/password).
Just to be clear, developers will then access my app using something like: https://example.com/api/get_ressource?token=personal-token-should-go-here
However, my server and application logic still gets hit even if the token is not provided, meaning anonymous attackers could flood my services without login, taking my service down.
I came across WAFs recently, and they promise to act as a middle-man, filtering abusive attacks. My understanding is that a WAF just reverse-proxies my API and applies some known attack-pattern filters before delegating a request to my actual backend.
What I don't really get is: what if an attacker has direct access to my backend's IP? Wouldn't they be able to bypass the WAF and DDoS my backend directly? Does WAF protection rely only on my origin IP not being leaked?
Finally, I have read that a WAF only makes sense if it can mitigate DDoS through a CDN, spreading Layer 7 DDoS attacks across multiple servers and extra bandwidth if needed. Is that true? Or can I just implement a WAF myself?
Go with the cloud: you can deploy your app to AWS. There are two plus points to this:
1. Your prod server will sit behind a private IP, not a public IP.
2. AWS WAF is a budget-friendly service and is good at blocking DoS, scanner, and flood attacks.
You can also use a CAPTCHA on failed attempts and block the offending IPs.
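As a concrete example of the flood protection, AWS WAF supports rate-based rules that automatically block an IP once it exceeds a request threshold. A boto3 sketch (the names and the limit are placeholders):

```python
# Sketch: WAFv2 web ACL with a single rate-based rule that blocks any IP
# sending more than 2000 requests per 5 minutes. Names are placeholders.
import boto3

wafv2 = boto3.client("wafv2")
wafv2.create_web_acl(
    Name="api-flood-protection",  # placeholder name
    Scope="REGIONAL",             # use "CLOUDFRONT" for a CDN distribution
    DefaultAction={"Allow": {}},  # let normal traffic through
    Rules=[{
        "Name": "rate-limit-per-ip",
        "Priority": 0,
        "Statement": {
            "RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}
        },
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "rateLimitPerIp",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "apiFloodProtection",
    },
)
```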