Restrict access to some endpoints on Google Cloud - google-cloud-platform

I have a k8s cluster that runs my app (GCE as an ingress) and I want to restrict access to some endpoints, "/test/*", while all other endpoints stay publicly available. I don't want to restrict to specific IPs, so that I keep some flexibility and can reach the restricted endpoints from any device, such as my phone.
I considered IAP, but it restricts access to the whole service when I need it only for some endpoints, so it does more than I need.
I have also thought about a VPN, but I don't understand how to set one up, or whether it would even solve my problem.
I have heard about using a proxy, but it seems to me that it can't fulfill my requirements (?)
The solution doesn't have to be especially extensible or generic, because only a few people will use this feature.
I want the solution to be light, flexible, and simple while still meeting my needs. If the proper solutions are complex, I would consider restricting access by IP instead, but I worry about how viable the restricted-IP approach is in real life. Wouldn't it be too cumbersome to add my phone's IP every time I change location, and so on?

You can use API Gateway for that. It approximately meets your needs, though it's not especially flexible or simple.
But it's fully managed and can scale with your traffic.
For a more convenient solution, you would have to run a software proxy yourself (or use API Gateway), or go to the bank and pay for Apigee.
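If you go the API Gateway route, a rough sketch of standing it up looks like this (hypothetical names: my-api, my-config, my-gateway, the region and openapi.yaml are placeholders, and the OpenAPI spec is where you would mark the /test/* paths as requiring auth, e.g. an API key or a JWT, while leaving the other paths open):

# Hypothetical gcloud sketch; adjust the names, region and OpenAPI spec to your project.
gcloud api-gateway apis create my-api
gcloud api-gateway api-configs create my-config \
  --api=my-api --openapi-spec=openapi.yaml
gcloud api-gateway gateways create my-gateway \
  --api=my-api --api-config=my-config --location=us-central1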

I set up OpenVPN.
It was not a painless process, because of various small obstacles, but I encourage you to do the same.
Get a host (machine, cluster, or whatever) with a static IP.
Set up an OpenVPN instance. I used Docker: https://hub.docker.com/r/kylemanna/openvpn/ (follow the instructions, but set the host with -u YOUR_IP); see the sketch after these steps.
Ensure that the VPN setup works from your local machine.
For the routes you need to protect, limit IP access to the VPN's IP. Nginx example:
location /test/ {
    allow x.x.x.x;  # the VPN's static egress IP
    deny all;
}
Make sure that nginx sees the right client IP. I had an issue where nginx was treating the Load Balancer's IP as the client IP, so I had to mark it as trusted via the realip module (set_real_ip_from / real_ip_header): http://nginx.org/en/docs/http/ngx_http_realip_module.html
Test the setup
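For the OpenVPN setup step mentioned above, the quick start for that Docker image looks roughly like this. This is a sketch based on the image's README, so double-check it for the current commands; YOUR_IP and CLIENTNAME are placeholders:

# Create a volume holding the OpenVPN config and PKI
OVPN_DATA="ovpn-data"
docker volume create --name $OVPN_DATA
# Generate the server config, pointing clients at your static IP
docker run -v $OVPN_DATA:/etc/openvpn --rm kylemanna/openvpn ovpn_genconfig -u udp://YOUR_IP
docker run -v $OVPN_DATA:/etc/openvpn --rm -it kylemanna/openvpn ovpn_initpki
# Start the VPN server
docker run -v $OVPN_DATA:/etc/openvpn -d -p 1194:1194/udp --cap-add=NET_ADMIN kylemanna/openvpn
# Create a client certificate and export the .ovpn profile for your devices
docker run -v $OVPN_DATA:/etc/openvpn --rm -it kylemanna/openvpn easyrsa build-client-full CLIENTNAME nopass
docker run -v $OVPN_DATA:/etc/openvpn --rm kylemanna/openvpn ovpn_getclient CLIENTNAME > CLIENTNAME.ovpn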

Related

Reaching GCP Cloud Run instance through VPC with "only internal range" egress

The current setup is as follows:
I have a Cloud Run service that acts as the "back-end": it needs to reach external services, but should be reached ONLY by a second Cloud Run instance that acts as the "front-end", which in turn needs to reach auth0 and the "back-end" and be reached by any client with a browser.
I recognize that the setup is not optimal, but I've inherited it as is and we cannot migrate to another solution (maybe k8s). I'm trying to make this work with the least amount of impact on the infrastructure and, ideally, without having to touch the services themselves.
What I've tried is to restrict the ingress of the back-end service to INTERNAL and place two serverless VPC connectors (one per service), so that the front-end service would be able to reach the back-end but no one else could.
But I've encountered a huge issue: if I route all of the front-end's egress through the VPC it works, but then the front-end cannot reach auth0 and the users cannot authenticate. If I set the egress to "mixed" (only internal IP ranges go through the VPC), the Cloud Run URL (*.run.app) is not resolved through the VPC and therefore it returns a big bad 403.
What I tried so far:
Placing a load balancer in front of the back-end service. But the serverless NEG only supports the global HTTP(S) load balancer, and I'd need an internal one if I wanted an internal IP to resolve against.
Trying to see if the VPC connector itself maybe provides an internal (static) IP, but it doesn't seem so.
Someone in another question suggested a "MIG as a proxy" but I haven't managed to figure that out (Can I run Cloud Run applications on a private IP (inside dedicated VPC network)?)
Fooled around with API Gateway, but it seems that I'd have to provide an OpenAPI specification for the back-end, and I'm still under the delusion that this might be resolved with a cheaper (in terms of effort) approach.
So, I get that the Cloud Run instance cannot possibly have an internal IP by itself, but is there any kind of GCP product that can act as a proxy? Can someone elaborate on the "MIG as a proxy" approach (Managed Instance Group? Of what, though?), which might be the solution I'm looking for? (Sadly, I do not have the reputation needed to comment on that question or I would have).
Any kind of pointer is, as always, deeply appreciated.
You are designing this wrong. Use Cloud Run's identity-based access control instead of trying to route traffic. Google IAP (Identity-Aware Proxy) will block all traffic that is not authorized.
Authenticating service-to-service
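As a rough illustration of that service-to-service pattern (a sketch, assuming the back-end's ingress is restricted and the front-end's service account has been granted the invoker role; the URL below is a placeholder): from inside the front-end, fetch an identity token for the back-end's URL from the metadata server and send it as a Bearer token.

# Hypothetical sketch: call the locked-down back-end from the front-end with an identity token.
BACKEND_URL="https://backend-xyz-uc.a.run.app"   # placeholder
TOKEN=$(curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=${BACKEND_URL}")
curl -H "Authorization: Bearer ${TOKEN}" "${BACKEND_URL}/some/endpoint"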

Self hosted VPN with PiHole on AWS

I'm trying to create a setup where all of my (mobile and home) traffic is encrypted and ad-blocked. The idea is a setup wherein all of my traffic, when using the VPN client on my phone or PC, is routed through a custom OpenVPN server running on an AWS EC2 instance. On its way out of the EC2 instance towards the public internet, I want a Pi-hole or equivalent DNS sinkhole filtering requests for blacklisted sites.
It's important that this is configured so that I'm not running a public/open DNS resolver: only traffic coming in through the OpenVPN tunnel (and therefore from an OpenVPN client using one of my keys) should be allowed.
Is this possible? Am I correctly understanding the functionality of all the parts?
How do I set this up? What concepts do I need to understand to make this work?
This tutorial seems like a good place to start. It uses Lightsail rather than EC2, but if you aren't planning to scale this up much, that might be simpler and cheaper.
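On the open-resolver concern specifically: a minimal sketch, assuming OpenVPN's default 10.8.0.0/24 client subnet on tun0 and Pi-hole running on the same instance, is to firewall DNS so it only answers over the tunnel:

# Accept DNS only when it arrives over the VPN interface from VPN clients...
iptables -A INPUT -i tun0 -s 10.8.0.0/24 -p udp --dport 53 -j ACCEPT
iptables -A INPUT -i tun0 -s 10.8.0.0/24 -p tcp --dport 53 -j ACCEPT
# ...and drop DNS from anywhere else, so this never acts as a public resolver.
iptables -A INPUT -p udp --dport 53 -j DROP
iptables -A INPUT -p tcp --dport 53 -j DROP

You would also keep the EC2 security group closed except for the OpenVPN port (1194/udp by default) and SSH, rather than exposing port 53 at all.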

How to expose an API that is running in a Pod and limit access?

I have an API running as a service in my GKE cluster, and it needs to be accessible to some other developers on my team. They are using a VPN, so they have a static IP they can provide to me.
My idea was to just expose the service on a static external IP and restrict access to it with a firewall rule, so that only my colleagues' IP is allowed.
Unfortunately, this only seems to be possible for Compute Engine VMs, because only they can have tags.
Is there a way I can simply deny all traffic to my service except traffic from that specific IP?
I'd appreciate any hints about relevant features, thank you.
Well, you don't need tags; you can create a firewall rule that only allows access from the IP your developers provide you. When you're creating the firewall rule, select "All instances in the network" for Targets, and for the source IP ranges specify the IP with /32 appended.
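For reference, roughly the same rule with the gcloud CLI (a sketch; the rule name, network, port and DEV_VPN_IP are placeholders for your setup):

# Allow only your colleagues' VPN egress IP to reach the exposed port; with no
# target tags specified, the rule applies to all instances in the network.
gcloud compute firewall-rules create allow-dev-vpn \
  --network=NETWORK \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:443 \
  --source-ranges=DEV_VPN_IP/32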
You could give them RBAC access to the pods in the required namespace and allow them to port-forward, assuming you don't want to set up a public endpoint and try to secure it. This does require kubectl to be installed and cluster access, and it will give access to all pods in the namespace.
https://medium.com/@ManagedKube/kubernetes-rbac-port-forward-4c7eb3951e28
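In practice that looks something like this for a developer who has been granted the port-forward permissions (a sketch; the namespace, service name and ports are placeholders):

# Forward the in-cluster service to the developer's machine
kubectl -n my-namespace port-forward svc/my-api 8080:80
# In a second terminal, the API is now reachable locally
curl http://localhost:8080/healthz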
It depends on what level of security and permanence you need, I guess.

Multiple server applications, one public IP on Amazon EC2

I have a single Windows Amazon EC2 instance and one public IP. The instance runs multiple web server EXEs which all sit on port 80, and I want different domain names pointing to each server. On my old dedicated server I achieved this simply by having multiple public IPs, but with Amazon EC2 I want to keep to just one public IP.
I am not using IIS, Apache, etc. otherwise life would be a lot simpler (I would simply bind hostnames accordingly). The web server executables perform unusual "utility" tasks as part of a range of other websites, but still need to be hosted on port 80. There is no configuration other than address to bind to and port #.
I have set up several private IPs and bound each server application to one of them. Is it possible to leverage some of Amazon's networking products to direct the traffic to the correct private IP? For example, I have tried setting up a private DNS zone using Amazon Route 53, and internally at least this seems to point to the correct servers, but not (perhaps logically) when I try to access the sites externally.
In the absence of any other solutions, I decided to solve this with the blunt-hammer approach and use a reverse proxy. The downside is that my servers now see every user's IP as 127.0.0.1, which is less than ideal, but better than nothing at all.
For my reverse proxy I used Redbird (a node.js library), but Nginx may also be an option. Both are free / open source.

Keeping some web services private and others public

Not sure of the best way of achieving something...
We've got a number of web services running on ASP.NET 3.5 on a couple of web servers. They all talk nicely to each other and to the public internet.
Now we'd like to keep some of these web services 'private', i.e. make them unavailable to the public internet, whilst leaving others accessible.
AFAICS the simplest way to do this is simply to run the private services on a different port and keep the public ones on port 80. Our firewall only permits internet access via port 80, so it would drop any requests from the internet to the private web services. Sorted... I think?
Is this idea a reasonable solution? Or is there some drop dead simple IIS mechanism that I ought to use?
Thanks
SAL
You can restrict access to a site via a blacklist/whitelist in the IIS control panel (Directory Security tab). That's what I've done in the past to filter by IP address.
AFAICS the simplest way to do this is simply to run the private services on a different port and keep the public ones on port 80. Our firewall only permits internet access via port 80 so would drop any requests from the internet to the private web services.
This is exactly the approach we take. We also have a VPN so that employees can access the site if they're working remotely.
You can put IP access restrictions onto any site/app you want. We have several internal web services that only allow access from the 10.x.x.x range, for example.
It really depends on how secure you want the internal web services.
If you have sensitive data on the internal web services, you should keep them on a completely different server, even if you block outside access to them by assigning them a different port.
However, if sensitive data isn't an issue, then assigning a different port, or IP address, for internal and external users is a good way to go.
Besides the port, you could also restrict by caller (using IP address filtering, for example).
You could also require authentication for the caller of a web service, which should be easy to configure if you use Active Directory.
In any case, if you have a 'public' web service that is also used internally, you may want to 'publish' it twice: once publicly (with a nice external URL) and once internally, so that your other internal services and/or clients do not have to go via the 'external' URL. Then you can configure the restrictions (client IP, authentication, ...) differently for the different published endpoints of the same service.