Allow traffic from certain machines - Google Cloud Armor - google-cloud-platform

I have a Google Cloud Run service and I need to allow traffic from certain machines only.
I use Google Cloud Armor to allow specific IPs to access the Cloud Run service.
The problem is that the IPs of those machines are dynamic and keep changing. I also looked into allowlisting by MAC address, but Cloud Armor does not have that feature.

You cannot use MAC addresses on the Internet. The service (Cloud Armor) will never see the client's MAC address, only the MAC address of the last router (which would be a Google router). Google Cloud VPCs do not expose layer 2 information.
Cloud Run is a public service with a public URL. Restricting traffic based upon IP address is not supported by Cloud Run. You can put an HTTP Load Balancer and Cloud Armor in front, but that would not prevent traffic that goes directly to the service.
There are much better techniques for controlling access to public services. Google Cloud implements authorization using OAuth via Identity-Aware Proxy (IAP); that is the correct method to use. Given that your clients have changing IP addresses, it is also your best option.
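To make the identity-based approach concrete, here is a minimal sketch of a client calling an access-controlled service with an OIDC ID token instead of relying on its IP address. The service URL is a placeholder, and the sketch assumes the google-auth and requests packages with ambient credentials (e.g. a service account):

```python
SERVICE_URL = "https://my-service-abc123-uc.a.run.app"  # hypothetical URL

def bearer_header(token: str) -> dict:
    """Authorization header carrying the OIDC ID token."""
    return {"Authorization": f"Bearer {token}"}

def call_service(url: str = SERVICE_URL):
    # Deferred imports: requires the google-auth and requests packages.
    import google.auth.transport.requests
    import google.oauth2.id_token
    import requests

    auth_req = google.auth.transport.requests.Request()
    # The token's audience must be the URL of the protected service.
    token = google.oauth2.id_token.fetch_id_token(auth_req, url)
    return requests.get(url, headers=bearer_header(token))
```

The caller's identity travels in the token, so it keeps working no matter how often the client's IP changes.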
If I needed access control based upon IP address, I would run my service on Compute Engine using Container-Optimized OS, Docker, or just natively with Apache/Nginx. You can then dynamically update VPC firewall rules with custom code as the client's IP address changes.
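The "update the firewall rule as the IP changes" part could be sketched like this, assuming the google-cloud-compute package; the project and rule names are placeholders, and the cap of five retained ranges is an arbitrary choice for the example:

```python
def allowed_ranges(current: list[str], new_ip: str, keep: int = 5) -> list[str]:
    """Pure helper: add new_ip as a /32 range, keeping at most `keep`
    of the most recent entries so the rule does not grow without bound."""
    rng = f"{new_ip}/32"
    ranges = [r for r in current if r != rng] + [rng]
    return ranges[-keep:]

def update_firewall(project: str, rule: str, new_ip: str) -> None:
    # Deferred import: requires the google-cloud-compute package.
    from google.cloud import compute_v1

    client = compute_v1.FirewallsClient()
    fw = client.get(project=project, firewall=rule)
    fw.source_ranges = allowed_ranges(list(fw.source_ranges), new_ip)
    # Patch only the changed field back onto the existing rule.
    client.patch(project=project, firewall=rule, firewall_resource=fw)
```

You would trigger `update_firewall` from whatever mechanism detects the client's new address (a heartbeat endpoint, a dynamic-DNS hook, etc.).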

Related

Google Cloud Functions access control based on source IP address

I would like to configure access control based on source IP address with Google Cloud Functions.
(Only allowed IPs should be able to reach Google Cloud Functions.)
I suppose there is no way for Google Cloud Functions itself to limit client IPs.
So my idea is to put some gateway in front of the Cloud Functions, such as Apigee, API Gateway, or Cloud Endpoints.
I found that only Apigee has source IP access control, but I suspect Apigee is too heavyweight for my simple workload.
https://cloud.google.com/apigee/docs/api-platform/reference/policies/access-control-policy?hl=ja
Is it possible to use API Gateway or Cloud Endpoints to configure source IP based access control?
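Whichever product ends up in front, the check it has to perform is small. As a sketch, here is what a self-hosted gateway would do with Python's standard-library ipaddress module; the CIDR ranges are examples only:

```python
import ipaddress

# Example allowlist; a real deployment would load these from configuration.
ALLOWED_RANGES = [ipaddress.ip_network(c)
                  for c in ("203.0.113.0/24", "198.51.100.7/32")]

def is_allowed(client_ip: str) -> bool:
    """True if client_ip falls inside any allowlisted CIDR range.

    Behind a load balancer or gateway, take the client IP from the
    connection itself or from the last trusted X-Forwarded-For entry,
    never from a header the client can set freely.
    """
    try:
        addr = ipaddress.ip_address(client_ip)
    except ValueError:
        return False  # reject anything that isn't a valid IP
    return any(addr in net for net in ALLOWED_RANGES)
```

The hard part is not the check but making sure the IP you check is the one the load balancer saw, which is exactly what the managed products handle for you.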

Firewall issue - egress from GKE to Cloud Function HTTP Trigger

I am developing a solution where a Java application hosted on GKE needs to make an outbound HTTP call to a Cloud Function deployed in a different GCP project. The GKE cluster operates on a shared network that has firewall rules for the CIDR ranges in that network.
For example: the GKE cluster & application are deployed under GCP project A and wish to invoke a serverless GCP function deployed to project B.
A number of firewall rules configured on the shared network the GKE cluster operates on cause my HTTP call to time out, as the HTTP trigger URL is not mapped to an allowed CIDR range (in that shared network).
What have I tried?
I have lightly investigated one or two solutions that use Cloud NAT & a Cloud Router to proxy the HTTP call to the Cloud Function trigger endpoint, but I am wondering if there are any other, simpler suggestions. The address range for Cloud Functions is massive, so allowing that whole range is out of the question.
I was also thinking about deploying the Cloud Function into the same VPC and applying ingress restrictions to it; would that put the HTTP trigger inside the allowed IP range?
Thanks in advance
Serverless VPC Access is a GCP solution specifically designed to achieve what you want. Communication between the serverless environment and the VPC happens over internal IP addresses and is therefore never exposed to the Internet.
For your specific infrastructure, you would need to follow the guide Connecting to a Shared VPC network.
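As a rough sketch of what that guide walks through, the setup is a connector on the Shared VPC plus ingress restrictions on the function. All names below (connector, region, subnet, projects, function) are hypothetical placeholders:

```shell
# Create the connector on a dedicated /28 subnet of the Shared VPC
# (run by a user with the required permissions on the host project).
gcloud compute networks vpc-access connectors create my-connector \
  --region=us-central1 \
  --subnet=connector-subnet \
  --subnet-project=my-host-project

# Deploy the function so it only accepts internal traffic; callers on
# the Shared VPC (such as the GKE pods) then reach it without the call
# ever leaving Google's network.
gcloud functions deploy my-function \
  --runtime=java17 \
  --trigger-http \
  --ingress-settings=internal-only
```

With `internal-only` ingress, the trigger URL is no longer reachable from arbitrary Internet CIDR ranges, which sidesteps the firewall-allowlisting problem entirely.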

How to locate the IP address that Google Workflows uses?

I am trying to use Google Workflows to make HTTP POST requests to a service that uses a whitelisted IP list. How can I find an IP, or range of IPs that I could give to the vendor?
You can whitelist the entire IP range reserved by Google Cloud, but in the end that is like allowing all users: any user (or attacker) who uses Google Cloud services will come from one of the Google IP ranges, so anyone would be able to access your service.
The best solution is to use a Cloud Function or a Cloud Run service as a proxy. Cloud Workflows calls that proxy internally within Google Cloud. Then, on the proxy service, you attach a Serverless VPC Access connector (with the egress parameter set to all traffic) and a Cloud NAT with a reserved static public IP used ONLY BY YOU. You can then allowlist that IP securely, because only you will be able to use it.
Here is the Cloud Run documentation; the setup is very similar for Cloud Functions.
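The Workflows side then just calls the proxy, which is where the static IP lives. A sketch of such a workflow definition, with a hypothetical Cloud Run proxy URL:

```yaml
main:
  steps:
    - callVendorViaProxy:
        call: http.post
        args:
          # Hypothetical proxy URL; the vendor only ever sees the static
          # Cloud NAT IP attached to this proxy's egress, never a
          # Workflows-owned address.
          url: https://ip-proxy-abc123-uc.a.run.app/forward
          auth:
            type: OIDC
          body:
            payload: "example"
        result: proxyResponse
    - returnResult:
        return: ${proxyResponse.body}
```

The OIDC auth block means the proxy can require authenticated invocations, so the static IP is usable only by your workflow, not by anyone who discovers the proxy URL.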

Why is it required to provide external IPs to Cloud SQL services for authorization?

I am taking Google's GCP Fundamentals: Core Infrastructure course on Coursera. In the demonstration video of the Google Storage module, the presenter authorizes a Compute Engine instance to access a MySQL instance via its external IP address.
Aren't these two resources part of the same VPC if they are part of the same project? Why can't this authorization be done using the VM instance's internal IP address?
Aren't these two resources part of the same VPC if they are part of the same project?
A Cloud SQL instance isn't created in one of your project's VPC networks but in a Google-managed project, within its own network.
What happens when you enable private IP is that this network is peered with the network of your choice in your project, where your Compute Engine instance resides.
You can then connect to the Cloud SQL instance from your VM via its internal IP address. The VM is considered trusted if your network configuration allows it to reach the Cloud SQL instance.
When you set an external IP address on the Cloud SQL instance, it means that the instance is accessible to the internet and the connection needs to be authorized. One way to do it is to whitelist the IP address of the caller as you mentioned. This works well if the caller's IP doesn't change. Another (easier) option is to connect via the cloud_sql_proxy, which handles authorization and encryption for you. You then don't need to whitelist the IP.
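For completeness, a hedged sketch of the proxy-based connection: the instance connection name and credentials are placeholders, and PyMySQL is just one driver choice; any MySQL client works the same way once the proxy is listening locally.

```python
# The Cloud SQL proxy is started separately, e.g.:
#   ./cloud_sql_proxy -instances=my-project:us-central1:my-db=tcp:3306
# (hypothetical connection name). It handles authorization and
# encryption, so the instance needs no IP whitelisting at all.

def proxy_params(user: str, password: str, database: str) -> dict:
    """Connection parameters pointing at the locally listening proxy."""
    return {"host": "127.0.0.1", "port": 3306,
            "user": user, "password": password, "database": database}

def connect(user: str, password: str, database: str):
    import pymysql  # deferred: requires the PyMySQL package
    return pymysql.connect(**proxy_params(user, password, database))
```

From the application's point of view the database simply lives on localhost; the proxy tunnels and authorizes the traffic behind the scenes.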

Is it possible to secure a load balanced endpoint in Azure?

Is there any way I can have a load balanced endpoint that does not get exposed publicly in Azure?
My scenario is I have an endpoint running on multiple VMs. I can create a load-balanced endpoint, but this creates a publicly available endpoint.
I only want my load-balanced endpoint to be available to my web applications running in Azure (Web/Worker roles and Azure Websites).
Is there any way to do this?
As Brent pointed out, you can set up ACLs on Virtual Machine endpoints. One thing you mentioned in your question was the ability to restrict inbound traffic to only your web/worker role instances and Web Sites traffic.
You can certainly restrict traffic to web/worker instances, as each cloud service gets an IP address, so you just need to allow that particular IP address. Likewise, you can use ACLs to restrict traffic from other Virtual Machine deployments (especially where you're not using a Virtual Network). Web Sites, on the other hand, don't offer a dedicated outbound IP address, so you won't be able to use ACLs to manage Web Sites traffic to your Virtual Machines.
Yes, Windows Azure IaaS supports ACLs on endpoints. Using this feature, you can restrict who connects to your load-balanced endpoints. For more information, see: https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-acl/