How can I restrict access to my cloud functions? - google-cloud-platform

I have a few cloud functions. I also have a nodejs server running in AppEngine. I make calls to my cloud functions from there. Currently I can make calls to my cloud function from anywhere!
Is there any way I can restrict access to my cloud functions to only available when called from my server running on Google App Engine?

You have 2 solutions.
The first one is to use a service account, as described by AndresMijares, but not by creating a new one. Indeed, if you create a new service account and want to use it with App Engine, you need to generate a service account key file and deploy this secret with your code. That's not very secure, because you also need to store this secret safely, rotate it, and so on.
So, the solution is to use the App Engine default service account, which has this email pattern:
<project_ID>@appspot.gserviceaccount.com
Grant this service account the roles/cloudfunctions.invoker role on all the functions to invoke, or at the project level.
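For example, with the gcloud CLI (the function name, project ID, and region below are placeholders to adapt):
gcloud functions add-iam-policy-binding YOUR-FUNCTION-NAME \
  --member=serviceAccount:YOUR-PROJECT-ID@appspot.gserviceaccount.com \
  --role=roles/cloudfunctions.invoker \
  --region=YOUR-REGION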
The 2nd solution isn't as good as the first one, but it's also possible. You can update your Cloud Functions and set the ingress parameter to internal. That means only traffic coming from the VPCs in your project will be able to reach the Cloud Functions, including from the other resources of your project (like Compute Engine). That's why it's not a very good solution, but in the end the Cloud Functions can no longer be invoked from anywhere.
So, to allow App Engine to use your VPC to call the Cloud Function, you need to use a serverless VPC connector that bridges the serverless world with your VPC. In addition to being less secure, this solution involves an additional cost for the serverless VPC connector.
The advantage of the 2nd solution is that you don't have to update your application code to perform a secure call to your Cloud Function. You only update the deployment configuration, and the function is callable only internally.
For the first solution, you need to update your code to add a security token to your request headers. It's similar to function-to-function authentication. I personally don't like implementing it by calling the metadata server directly, because you can't test it locally: locally, you don't have a metadata server!
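As a sketch of what that header code can look like from Node.js on App Engine (using the google-auth-library package, which fetches the identity token from the metadata server for you; the function URL is a placeholder):
const {GoogleAuth} = require('google-auth-library');

const auth = new GoogleAuth();
// Placeholder URL of the Cloud Function to call.
const functionUrl = 'https://REGION-PROJECT_ID.cloudfunctions.net/myFunction';

async function callFunction() {
  // Gets a client whose ID token audience is the function URL;
  // on App Engine the token comes from the metadata server.
  const client = await auth.getIdTokenClient(functionUrl);
  // The client adds the Authorization: Bearer <token> header itself.
  const res = await client.request({url: functionUrl});
  return res.data;
}
Keep in mind the caveat above: on your workstation there is no metadata server, so this code path behaves differently locally.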
I wrote an article you can take inspiration from; see the part "Avoid metadata server".
EDIT 1
After a deep dive into the App Engine serverless VPC connector and this answer: a Cloud Function (or Cloud Run service) with ingress set to "internal only" can only be reached from Cloud Functions or Cloud Run. App Engine doesn't route public traffic through the serverless VPC connector, and thus the 2nd solution isn't possible in the App Engine case.

There are a few ways to do this. You can create a service account (IAM & Admin -> Service accounts).
You need to apply the Cloud Functions Invoker role to this service account; you can use the gcloud CLI for this:
gcloud beta functions add-iam-policy-binding YOUR-CLOUD-FUNCTION-NAME --member serviceAccount:NAME-OF-YOUR-SERVICE-ACCOUNT@project-name.iam.gserviceaccount.com --role roles/cloudfunctions.invoker --region YOUR-REGION
The command will print the updated bindings, something like this:
bindings:
- members:
  - allUsers
  - serviceAccount:YOUR-SERVICE-ACCOUNT
Ideally, you should also remove the allUsers member:
gcloud beta functions remove-iam-policy-binding YOUR-FUNCTION-NAME --member allUsers --role roles/cloudfunctions.invoker --region us-central1
Then you need to make sure your App Engine instances have access to the service account you just created; that should do the trick. Be aware that you might need more configuration depending on your case, but this can give you a good starting point.

Related

Google Cloud Workflow: Reach Private VPC

Is it possible for Google Cloud Workflows to reach a private VPC (perhaps via a serverless VPC connector)? I can't find anything about it in the documentation. We want to use Workflows to trigger certain things via API on the internal network (no outside access).
Worst case, we'll have to proxy it through a Cloud Function.
Regards,
Niklas
A VPC connector is one of the most requested features for Cloud Workflows, but for now it's not implemented. There is no ETA for this feature.
For now, a proxy through Cloud Run/Functions with a VPC connector is required.
As stated in the first sentences of the Workflows docs, Cloud Workflows is meant to:
link series of serverless tasks together
and it
Combine the power of Google Cloud's APIs, serverless products like Cloud Functions and Cloud Run, and calls to external APIs
So, as you proposed, the workaround is to wrap/proxy the call to your internal API in a call to a Cloud Function or Cloud Run service, with proper authentication/authorisation.
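A minimal sketch of such a proxy, as an HTTP Cloud Function deployed with a Serverless VPC Access connector (the internal endpoint and the node-fetch dependency are assumptions for the example):
const fetch = require('node-fetch');

// Placeholder: an endpoint only reachable through the VPC connector.
const INTERNAL_API = 'http://10.0.0.5/api/trigger';

exports.proxy = async (req, res) => {
  // Forward the (authenticated) Workflows call to the internal network.
  const upstream = await fetch(INTERNAL_API, {
    method: 'POST',
    headers: {'Content-Type': 'application/json'},
    body: JSON.stringify(req.body),
  });
  res.status(upstream.status).send(await upstream.text());
};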
Google Cloud Workflows calls come from an unknown IP, which is difficult to route.
So you're probably looking for Cloud NAT? You can configure it from the console.

How to make my Cloud Function only be called from my GCP service account

As seen below, I am editing the permissions for access to my Cloud Function called task. I am following the advice from GCP that says the following:
This resource is public and can be accessed by anyone on the internet.
To remove public access, remove "allUsers" and "allAuthenticatedUsers"
from the resource's members.
so that my function can only be called from within my GCP project. So I removed the allUsers member from the Cloud Functions Invoker role. But now I am trying to add a new member (a service account) with the Cloud Functions Invoker role:
However, I don't know what service account my Cloud Tasks are fired from. I've created a new service account with only Cloud Tasks permissions, but I don't know how to actually make my Cloud Tasks use this service account when executing. There doesn't seem to be an option for that:
Any idea?
According to the Google docs, "Cloud Tasks can call HTTP Target handlers that require authentication if you have a service account with the appropriate credentials to access the handler."
You may need to create a service account and grant it the appropriate roles.
Please follow the official Google documentation [1], where the steps to follow are detailed.
[1] - https://cloud.google.com/tasks/docs/creating-http-target-tasks#sa
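For illustration, here is what creating such a task can look like in Node.js with an OIDC token attached (project, location, queue, URL, and service account email are placeholders):
const {CloudTasksClient} = require('@google-cloud/tasks');

const client = new CloudTasksClient();

async function enqueueTask() {
  const parent = client.queuePath('my-project', 'us-central1', 'my-queue');
  const task = {
    httpRequest: {
      httpMethod: 'POST',
      url: 'https://us-central1-my-project.cloudfunctions.net/task',
      oidcToken: {
        // The service account you granted roles/cloudfunctions.invoker.
        serviceAccountEmail: 'tasks-invoker@my-project.iam.gserviceaccount.com',
      },
    },
  };
  // Cloud Tasks signs an OIDC token for this account on each dispatch.
  const [response] = await client.createTask({parent, task});
  return response.name;
}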

Cut Cloud Run service from running - safety reasons

Let's assume I run a Google Cloud Run service.
Let's also assume someone wants to really harm you and finds out all the API routes, or is able to send a lot of POST requests by spamming the site.
There is an email notification, which fires at certain limits you set up beforehand.
Is there also a way to automatically cut off the Cloud Run service, or set it temporarily offline? I couldn't find any good resource or solution for this.
There are several solutions to remove a Cloud Run service from traffic, in addition to the authentication solution proposed by Dondi:
Delete the Cloud Run service. It might seem like overkill, but because the service is stateless, you will lose nothing (except the revision history).
If you have your Cloud Run service behind a Load Balancer:
You can remove the serverless NEG that routes the traffic to it.
You can add a Cloud Armor policy that filters on the originator IP to exclude it from the traffic.
You can set the ingress to internal, or to internal and cloud load balancing.
You can deploy a dummy revision (a hello-world container, for example) and route 100% of the traffic to it (traffic splitting feature).
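For example, the last two options roughly look like this with gcloud (the service and revision names are placeholders):
# Lock the service down to internal traffic only.
gcloud run services update MY-SERVICE --ingress=internal

# Or route all the traffic to a previously deployed dummy revision.
gcloud run services update-traffic MY-SERVICE --to-revisions=DUMMY-REVISION=100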
You can't really "turn off" a Cloud Run service, as it's fully managed by Google. A Cloud Run service automatically scales down to zero instances if there are no requests, but it stays deployed and will serve traffic again as soon as requests come in.
To emulate what you want to do, make sure that your service requires authentication, then revoke access from the offending user (or from all users). As mentioned in the docs:
Cloud Run (fully managed) does not offer a direct way to make a service stop serving traffic, but you can achieve a similar result by revoking the permission to invoke the service to identities that are invoking the service. Notably, if your service is "public", remove allUsers from the Cloud Run Invoker role (roles/run.invoker).
Update: Access to a resource is managed through an IAM policy. In order to control access programmatically, you have to get the IAM policy first, then revoke the role from a user or a service account. Here's the documentation that gives an overview.
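For a public service, that revocation boils down to a single command (the service name is a placeholder):
gcloud run services remove-iam-policy-binding MY-SERVICE \
  --member=allUsers --role=roles/run.invoker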

Difference between Google managed service account and default service account in GCP

I've been reading the Google Cloud documentation and can't exactly figure out what the difference between these two is. I know that both of them are automatically created in GCP, but I really don't know much more.
You aren't alone, and that's why Google has started a new video series on this topic. To summarize:
Google-managed service accounts are accounts created on Google's side (managed by Google; you can't delete them) that you can grant roles to on your project, to allow them to perform actions. They are also named service agents. They are used when you use serverless products, such as Cloud Build, or Cloud Run (to pull the image, not to run the instance).
The default service accounts (mainly the Compute Engine default service account and the App Engine default service account) are service accounts created automatically in YOUR project (so managed by you; you can delete them if you want) when you activate some APIs. They are used by default when you create instances of some services.
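You can see the default service accounts that live in your project (the Google-managed service agents, by contrast, don't appear there) with:
gcloud iam service-accounts list
# Typical default entries:
#   PROJECT_ID@appspot.gserviceaccount.com (App Engine default)
#   PROJECT_NUMBER-compute@developer.gserviceaccount.com (Compute Engine default)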

Adding service account to Cloud Function on GCP

So I am trying to deploy a Cloud Function to shut down all VMs in different projects on GCP.
I have added the functionality for a single project using this guide: https://cloud.google.com/scheduler/docs/start-and-stop-compute-engine-instances-on-a-schedule
It shuts down / starts VMs with the correct tag.
Now I want to extend this to all VMs across projects, so I was thinking I need another service account that I can add under the Cloud Function.
I have gotten a service account from the cloud admin that has access to all projects, and added it under IAM with the Owner role. But the issue is that I cannot assign the service account to the function.
Is there something I am missing? Or is there an easier way of doing what I am trying to accomplish?
The easiest way is to give the service account used by that Cloud Function access to the other projects. You just need to go to your other projects, add this service account in the IAM section, and give it the permissions it needs, for example compute.admin in this case.
Note that by default, Cloud Functions uses the App Engine default service account, which may not be convenient for you, since the App Engine app in your Cloud Function's project would then also be granted the compute.admin role in the other projects.
I'd recommend creating a dedicated service account for this use case (in the same project as your Function), assigning it to the Function, and then adding it as a member of the other projects.
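Roughly, with gcloud (all names and project IDs below are placeholders):
# Create the dedicated service account in the Function's project.
gcloud iam service-accounts create vm-scheduler --project=functions-project

# Grant it compute.admin in each target project.
gcloud projects add-iam-policy-binding other-project \
  --member=serviceAccount:vm-scheduler@functions-project.iam.gserviceaccount.com \
  --role=roles/compute.admin

# Deploy the Function with that identity (plus your usual runtime/trigger flags).
gcloud functions deploy stopInstances \
  --service-account=vm-scheduler@functions-project.iam.gserviceaccount.com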
Then, in your Cloud Function, you'll need to run your code for each project you'd like to act upon. You can create a separate client object for each, specifying the project ID as a constructor option, like so:
const Compute = require('@google-cloud/compute');

const compute = new Compute({
  projectId: 'your-project-id'
});
So far, you only loop through the VMs in the current project the Function runs in.
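As an illustration, a minimal sketch of such a loop, assuming the same older @google-cloud/compute client as in the guide above and a hypothetical hard-coded project list:
const Compute = require('@google-cloud/compute');

// Hypothetical list of the projects to act upon.
const PROJECT_IDS = ['project-a', 'project-b'];

async function stopTaggedVms() {
  for (const projectId of PROJECT_IDS) {
    // One client per project, as described above.
    const compute = new Compute({projectId});
    // Same kind of label filter as in the start/stop guide.
    const [vms] = await compute.getVMs({filter: 'labels.env=dev'});
    await Promise.all(vms.map((vm) => vm.stop()));
  }
}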
Another option would be to have such a function defined in each project you'd like to act upon. You'd have a "master" function that you'd call; it would act on the VMs in its own project and call the functions in the other projects to act on theirs.