I'm running a backend app with several endpoints on Cloud Run (fully managed). My endpoints are public by nature, so I don't want to authenticate the users coming through my client app hosted on Netlify.
What I do need is to restrict access to my endpoints so that other applications or malicious users can't abuse them. It is not about scaling; I just don't want to exceed the Free Tier limits, since this is a demo of an open-source application.
I've already set the concurrency and max instance limits to the minimum, but this alone is not enough. There is also a product named Google Cloud Armor, but it seems expensive and is not free.
I was expecting a simple built-in solution for this but couldn't find one.
What other solutions do I have? How can I restrict traffic to only the requests coming from my website on Netlify?
You don't have a lot of solutions:
You don't want to authenticate your users -> so you need to rely on the technical layers.
Netlify is a serverless hosting platform; you don't manage servers/IPs -> so you need to rely on the host name.
To filter on the host name, you can use two products:
External HTTPS Load Balancer only (about $15 per month) with URL/host matching.
The default URL lands on a dummy service.
Only requests where the host matches your Netlify host name are routed to your backend.
Cloud Armor on top of the External HTTPS Load Balancer ($15 + Cloud Armor policy x traffic volume). This time, the load balancer routes the default URL to the correct backend and Cloud Armor checks the request origin (a sketch of such a rule follows).
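For illustration, a Cloud Armor rule for the second option could match the Host header with a CEL expression. This is only a sketch; the policy name and the host value are placeholders taken from the question:

```
# Allow only requests whose Host header matches the Netlify site,
# then turn the policy's default rule into a deny.
gcloud compute security-policies rules create 1000 \
  --security-policy my-policy \
  --expression "request.headers['host'] == 'myNetlifyHost.com'" \
  --action allow

gcloud compute security-policies rules update 2147483647 \
  --security-policy my-policy \
  --action deny-403
```

As explained next, this only inspects a client-controlled header, so treat it as a speed bump, not a real barrier.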
The problem is that this weak solution is easy to bypass. Perform a simple curl with the host set as a header, and the HTTPS Load Balancer and Cloud Armor will think it comes from the correct origin:
```
curl -H 'Host: myNetlifyHost.com' ....
```
The highest protection is authentication. Google Cloud itself says: "Don't trust the network."
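For example, if you did require authentication on the service, a legitimate caller would present an OIDC identity token. A minimal sketch, where the service URL is a placeholder and the caller holds the roles/run.invoker IAM role:

```
# Invoke a private Cloud Run service with an identity token;
# unauthenticated requests are rejected before your code runs.
curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" \
  https://my-service-xyz123-uc.a.run.app/endpoint
```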
I have assets in a Google Cloud Storage bucket. I created a subdomain (aaa.bbb.com) and a load balancer using that subdomain to point to that bucket (e.g. aaa.bbb.com/image-to-read.png). I also have an application running on Google Kubernetes Engine. The goal is to block all users except that GKE application, and all the application does is read the URLs of the assets to display them. How do I achieve that?
Things I've tried:
Setting GCS CORS for the bucket
It turns out this only restricts by domain if people are signed into Google with the domain.
Workload Identity
This has just not worked for me. I also have an API service in the same GKE cluster that uses it, and I'm able to upload fine with it. However, a plain <img /> tag with the GCS bucket URL as its source ignores Workload Identity as far as I can tell.
Cloud Armor
This seems the most promising. I have successfully restricted by IP address, but unfortunately the only IP address I'm able to restrict by is my actual local computer's. I believe that means the request headers are sending my computer's IP address to the load balancer. What I'm trying to do is restrict access to the application's load balancer IP address, or even to the origin domain (preferred).
What I'm asking is probably a basic networking question, but I'm no whiz at devops/infrastructure concepts, so any help would be appreciated. Thanks!
You have two options:
Cloud Storage authorization
Deploy an HTTP(S) Load Balancer + Cloud Armor.
I am not sure what you mean by GKE ingress IP.
The simplest is to add Authorization in your GKE application when accessing Cloud Storage.
Authorization:
Service Account OAuth Access Token
Signed URLs.
Both methods are easy to implement.
Note: Workload Identity Federation also generates service account OAuth access tokens. Use that method if you need to federate credentials from an external OAuth authority to Google. For a GKE application, however, Signed URLs or service account OAuth access tokens are probably the correct solution.
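For illustration, here is roughly what each method looks like from the command line. This is a sketch; the key file path and bucket name are placeholders, and the object name is taken from the question:

```
# Signed URL: a time-limited link the app can embed directly in an <img> tag.
gsutil signurl -d 10m /path/to/sa-key.json gs://my-assets-bucket/image-to-read.png

# OAuth access token: the app fetches the object itself and serves/relays it.
curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://storage.googleapis.com/storage/v1/b/my-assets-bucket/o/image-to-read.png?alt=media"
```

Note the distinction: with access tokens the fetch must happen server-side, because a browser's <img /> tag cannot attach an Authorization header, which is likely why Workload Identity appeared to be ignored in your test.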
Our current setup: the corporate network is connected via VPN to AWS; a Route53 entry points to an ELB, which points to an ECS service (both inside a private VPC subnet).
=> When you request the URL (from inside the corporate network) you see the web application. ✅
Now, what we want is that when the ECS service is not running (maintenance, error, ...), users are directly shown a maintenance page.
At the moment they will see the default AWS 503 error page. We want to provide a simple static HTML page with some maintenance information.
What we tried so far:
Using Route53 with failover to CloudFront serving an S3 bucket with the HTML.
This does work, but:
Route53 does not fail over very fast => until it switches to CloudFront, users will still see the default AWS 503 page.
As this is a DNS failover and browsers (and proxies, local DNS caches, ...) cache resolved entries, users will still see the default AWS 503 page even after Route53 has switched. Only once the new IP address is resolved (which may take minutes, or until a browser or OS restart) will they see the maintenance page; the record TTL bounds this delay, as sketched below.
The same as the two points before, but the other way around: when the service is back up, users will see the maintenance page far longer than they should.
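For reference, the DNS record's TTL is what bounds how long resolvers may keep serving the stale answer after a failover (host name and output here are illustrative):

```
# The second column is the TTL in seconds; resolvers may cache
# the old answer for up to that long after the failover fires.
dig +noall +answer app.example.com
# app.example.com.  60  IN  A  10.0.12.34
```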
As this is not what we were looking for, we next tried:
Using CloudFront with two origins (our ELB and the failover S3 bucket) with a custom error page for 503.
This is not working, as CloudFront needs the origins to be publicly available and our ELB is in a private VPC subnet ❌
We could reconfigure our entire network environment to make it public and restrict access to CloudFront IPs. While this would probably work, we see the following drawbacks:
Security is decreased: someone else could set up a CloudFront distribution with our web application as the target and would have full access to it from outside our corporate network.
To overcome this security issue, we would have to implement a secret header (sent from CloudFront to the application), which means having security code inside our application => why should our application handle that security? What if the code has a bug?
Our current environment is already up and running. We would have to change a lot just for an error page, and it would reduce security overall!
Use a second ECS service (e.g. HAProxy, nginx, Apache, ...) in front of our application, with an errorfile for our maintenance page (a sketch follows below).
While this would work as expected, it also comes with some drawbacks:
The service is a single point of failure: when it is down, you cannot access the web application. To overcome this, you have to put it behind an ELB, place it in at least two AZs and (optionally) make it horizontally scalable to handle larger request volumes.
The service costs money! Maybe you only need one small instance with little memory and CPU, but it (probably) has to scale together with your web application when you have a lot of requests!
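For illustration, the HAProxy variant of that idea is tiny; the backend host, port, and file paths are placeholders:

```
# Minimal haproxy.cfg sketch: proxy to the app; when no backend server
# is up, HAProxy returns 503 and errorfile swaps in our maintenance page
# (the file must be a complete raw HTTP response, headers included).
cat > haproxy.cfg <<'EOF'
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend www
    bind *:80
    default_backend app

backend app
    errorfile 503 /etc/haproxy/errors/maintenance.http
    server app1 my-app.internal:8080 check
EOF
```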
It feels like we are back in the 2000s, not in a cloud environment.
So, long story short: Are there any other ways to implement a f*****g simple maintenance page while keeping our web application private and secure?
I'm wanting to put Cloudflare in front of my API hosted on Cloud Run. I'd like to ensure my Cloud Run app only accepts connections from Cloudflare (to avoid bypassing DDoS mitigation + rate limiting in Cloudflare).
Is there any way to use Cloudflare's Authenticated Origin Pulls with Cloud Run?
Other solutions that achieve the same effect are welcome too; however, the key is that I don't want traffic from non-Cloudflare sources to trigger a Cloud Run invocation (otherwise a DDoS could result in a billing spike). Thus, filtering traffic inside the Cloud Run app is too late; an invocation has already occurred.
It seems like there may be a way to add an HTTPS Load Balancer + Cloud Armor to do IP whitelisting and only allow requests originating from Cloudflare's IPs... but I'd rather not tack on two other services and add $$ just to achieve this.
Google Cloud Run supports two authorization modes: unauthenticated (anyone/public) and IAM-based authentication (OAuth/OIDC identity tokens). Cloudflare's Authenticated Origin Pulls use TLS client certificates, which your Cloud Run application itself would have to verify, as Google's frontends do not support this. That would not accomplish your goal of preventing unauthorized invocations, since the invocation has already happened by the time your code sees the certificate.
In summary, unless your service uses IAM authorization, there is no method to prevent Cloud Run service invocations other than limiting the maximum number of instances. If you have configured unauthenticated access, anyone calling your service endpoint will succeed in invoking your service (or at least in hitting a running instance with a concurrent request).
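For completeness, the load balancer + Cloud Armor allowlist mentioned in the question would look roughly like this. The policy name is a placeholder, and only two of Cloudflare's published ranges are shown; the full, current list is at https://www.cloudflare.com/ips/:

```
# Allow Cloudflare's egress ranges, deny everyone else by default.
gcloud compute security-policies rules create 1000 \
  --security-policy cloudflare-only \
  --src-ip-ranges "173.245.48.0/20,103.21.244.0/22" \
  --action allow

gcloud compute security-policies rules update 2147483647 \
  --security-policy cloudflare-only \
  --action deny-403
```

The Cloud Run service's ingress would also need to be restricted to internal and load balancer traffic, so the default run.app URL can't be hit directly.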
I want to use a GCP load balancer to terminate HTTPS and auto manage HTTPS cert renewal with Lets Encrypt.
The pricing calculator gives me $21.90/month for a single rule. Is this how much it would cost to do HTTPS termination for a single domain? Are there cheaper managed options on GCP?
Before looking at the price, and at other solutions, look at what you need. Are you aware of the Global Load Balancer's capabilities?
It offers you a single IP reachable all over the globe and routes each request to the region closest to your user to reduce latency. If a region is down, or your backend is at capacity (health check failing), the request is routed to the next closest region.
It allows you to rewrite URLs, manage SSL certificates, cache your files in a CDN, scale with your traffic, deploy a security layer on top of it (like IAP), and absorb DDoS attacks without impacting your backend.
And the price is for 5 forwarding rules, not only one.
Now, of course, you can do it differently.
You can use a regional solution. These are often free or affordable, but you don't get all the Global Load Balancer features.
If your backend is on Cloud Run or App Engine, managed SSL certificates on a custom domain are already included for free. Cloud Endpoints is a solution for Cloud Functions (and other API endpoints).
You can deploy and set up your own nginx with your SSL certificate on a Compute Engine instance (a minimal sketch follows below).
If you want to serve only static files, have a look at Firebase Hosting.
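For the nginx option, a minimal sketch on a Debian/Ubuntu VM, assuming your domain already points at the instance (example.com is a placeholder):

```
# nginx terminates TLS; certbot obtains a free Let's Encrypt certificate,
# rewrites the nginx config, and installs a timer for auto-renewal.
sudo apt-get install -y nginx certbot python3-certbot-nginx
sudo certbot --nginx -d example.com

# Verify the automatic renewal path works:
sudo certbot renew --dry-run
```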
We are a website builder SaaS platform, where users are building their FREE websites and are connecting their domains.
We would like to set a limit per domain, e.g. 100 MB/month, so that when the bandwidth limit is hit we can restrict access to that domain with a message like "You have reached your limit".
This limit could be set, for example, via custom headers in the request for the specific domain, or maybe at the load balancer level.
We weren't able to find documentation on that topic, so decided to ask here!
Please help!
Regards,
Gev
It's possible to use Pulse Virtual Traffic Manager to control the network traffic through your domains. It's available in the Google Cloud Marketplace for deployment directly in your infrastructure.
Also, you can consider using the HTTPS Load Balancing features in combination with Cloud DNS forwarding and Cloud CDN (caching) to optimize the network traffic and the redirections to the different domains.