So I am trying to deploy a cloud function to shut down all VMs in different projects on GCP.
I have added the functionality to a single project using this guide: https://cloud.google.com/scheduler/docs/start-and-stop-compute-engine-instances-on-a-schedule
It shuts down / starts VMs with the correct tag.
Now I want to extend this to all VMs across projects, so I was thinking I need another service account that I can add under the Cloud Function.
I have gotten a service account from the cloud admin that has access to all projects, added it under IAM, and given it the Owner role. But the issue is that I cannot assign that service account to the function.
Is there something I am missing? Or is there an easier way of doing what I am trying to accomplish?
The easiest way is to give the service account used by that Cloud Function access to the other projects. You just need to go to your other projects, add this service account in the IAM section, and give it the role it needs, for example roles/compute.admin in this case.
Note that by default, Cloud Functions uses the App Engine default service account, which may not be convenient for you since the App Engine app in your Cloud Function's project would also be granted the compute.admin role in the other projects.
I'd recommend creating a dedicated service account for this use case (in the same project as your Function), assigning it to the Function, and then adding it as a member of the other projects.
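For illustration, here is a minimal gcloud sketch of that setup; the service account name, project IDs, function name and region below are all placeholders:

gcloud iam service-accounts create vm-scheduler --project FUNCTION_PROJECT_ID
gcloud functions deploy YOUR_FUNCTION_NAME --service-account vm-scheduler@FUNCTION_PROJECT_ID.iam.gserviceaccount.com --region YOUR_REGION
gcloud projects add-iam-policy-binding OTHER_PROJECT_ID --member serviceAccount:vm-scheduler@FUNCTION_PROJECT_ID.iam.gserviceaccount.com --role roles/compute.admin

Repeat the last command for each project whose VMs the function should manage.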
Then, in your Cloud Function, you'll need to run your code for each project you'd like to act upon. You can create a separate client object for each one, specifying the project ID as a constructor option, like so:
const Compute = require('@google-cloud/compute');

const compute = new Compute({
  projectId: 'your-project-id'
});
So far you only loop through the VMs in the project where the Function runs.
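To sketch the multi-project version, assuming the older @google-cloud/compute client that the linked guide uses (the project IDs and the label filter below are placeholders):

const Compute = require('@google-cloud/compute');

// Projects to act upon; the function's service account needs compute
// permissions in each of them.
const projectIds = ['project-a', 'project-b'];

async function stopLabelledVms() {
  for (const projectId of projectIds) {
    const compute = new Compute({ projectId });
    // Same label-based filter as in the single-project guide.
    const [vms] = await compute.getVMs({ filter: 'labels.env=dev' });
    // vm.stop() starts the stop operation; it does not wait for it to finish.
    await Promise.all(vms.map(vm => vm.stop()));
    console.log(`Requested stop for ${vms.length} VM(s) in ${projectId}`);
  }
}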
Another option would be to have such a function defined in each project you'd like to act upon. You'd have a "master" function that you'd call; it would act on the VMs in its own project and call the functions in the other projects to act on theirs.
I'm fairly new to GCP Cloud Functions.
I'm developing a cloud function within a GCP project which needs to access some other resources from the project (such as GCS, for instance). When I set up a cloud function, it gets a service account associated with it, so I'm able to give this service account the required permissions in IAM and it works just fine in production.
I'm handling the required integrations by using the GCP SDKs and identifying the resources relative to the GCP project. For instance, if I need to access a GCS bucket within that project, it looks something like this:
const bucket = storage.bucket("bucket-name"); // storage.bucket() is synchronous, no await needed
The problem with this is that I'm not able to access these resources if I'm running the cloud function locally for development, so, I have to deploy it every time to test it, which is a process that takes some time and makes development fairly unproductive.
So, is there any way I can run this cloud function locally whilst keeping access to the necessary project resources, so that I'm able to test it while developing? I figured that running this function as its service account could work, but I don't know how to do it, and I'm also open to different approaches.
Yes, there is!
The only thing you need to do is set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of a service account JSON key file; the googleapis libraries handle the rest automatically, most of the time.
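For example, assuming you've downloaded a key for the function's service account (the key path and bucket name below are placeholders), a snippet like this runs both locally and when deployed:

// Locally: export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json
// On Cloud Functions: credentials come from the function's service account.
const { Storage } = require('@google-cloud/storage');

async function listFiles() {
  const storage = new Storage(); // credentials are picked up from the environment
  const [files] = await storage.bucket('bucket-name').getFiles();
  files.forEach(file => console.log(file.name));
}

listFiles().catch(console.error);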
I am currently trying to use a service account that is already created in one project and use it again in a different project. We have a cloud function running in project B which needs to be invoked by a service account in project A. Is there a Terraform resource we can use to define permissions across projects? Thanks for your help.
I have a few cloud functions. I also have a Node.js server running in App Engine. I make calls to my cloud functions from there. Currently I can make calls to my cloud functions from anywhere!
Is there any way I can restrict access to my cloud functions to only available when called from my server running on Google App Engine?
You have 2 solutions
The first one is to use a service account as described by AndresMijares, but without creating a new one. Indeed, if you create a new service account and you want to use it with App Engine, you need to generate a service account key file and deploy this secret with your code. It's not very secure, because you also need to store this secret securely, rotate it, and so on.
So, the solution is to use the App Engine default service account, which has this email pattern:
<project_ID>@appspot.gserviceaccount.com
Grant this service account the roles/cloudfunctions.invoker role, either on each function to invoke or at the project level.
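For example, a project-level grant would look roughly like this (the project ID is a placeholder):

gcloud projects add-iam-policy-binding FUNCTIONS_PROJECT_ID --member serviceAccount:<project_ID>@appspot.gserviceaccount.com --role roles/cloudfunctions.invoker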
The 2nd solution isn't as good as the first one, but it's also possible. You can update your Cloud Functions and set the ingress parameter to internal-only. That means only traffic coming from the VPCs in your project will be able to reach the Cloud Functions, including the other resources of your project (like Compute Engine); that's why it's not a very good solution, but at least the Cloud Functions can no longer be invoked from anywhere.
So, to allow App Engine to use your VPC to call the Cloud Function, you need a serverless VPC connector that bridges the serverless world with your VPC. In addition to being less secure, this solution involves additional cost for the serverless VPC connector.
The advantage of the 2nd solution is that you don't have to update your application code to perform a secure call to your cloud function. You only update the deployment configuration, and you get a function callable only internally.
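As a sketch, switching an existing function to internal-only ingress would look something like this (the function name is a placeholder, and the rest of the deploy flags are kept as whatever you already use):

gcloud functions deploy YOUR_FUNCTION_NAME ... --ingress-settings internal-only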
For the first solution, you need to update your code to add a security token to your request's headers. It's similar to function-to-function authentication. I personally don't like implementing it this way because you can't test locally: locally you don't have a metadata server!
I wrote an article from which you can get inspiration, in particular the part "Avoid metadata server".
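For reference, a minimal sketch of that token-based call using google-auth-library (the function URL is a placeholder); as noted above, it relies on the metadata server to mint the identity token, so it won't work as-is on your local machine:

const { GoogleAuth } = require('google-auth-library');

async function callFunction() {
  const url = 'https://REGION-PROJECT_ID.cloudfunctions.net/my-function';
  const auth = new GoogleAuth();
  // Fetches an identity token for the target audience (from the metadata
  // server when running on App Engine) and adds it to the Authorization header.
  const client = await auth.getIdTokenClient(url);
  const res = await client.request({ url });
  console.log(res.data);
}

callFunction().catch(console.error);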
EDIT 1
After a deep dive into the App Engine serverless VPC connector and this answer, it turns out that an ingress "internal only" Cloud Function (or Cloud Run service) can only be reached from Cloud Functions or Cloud Run. App Engine doesn't route public traffic through the serverless VPC connector, and thus the 2nd solution isn't possible in the App Engine case.
There are a few ways to do this. You can create a service account under IAM & Admin -> Service accounts.
You need to apply the Cloud Functions Invoker role to this service account; you can use the gcloud CLI for this:
gcloud beta functions add-iam-policy-binding YOUR-CLOUD-FUNCTION-NAME --member serviceAccount:NAME-OF-YOUR-SERVICE-ACCOUNT@project-name.iam.gserviceaccount.com --role roles/cloudfunctions.invoker --region YOUR-REGION
You will see output like this:
bindings:
- members:
  - allUsers
  - YOUR SERVICE ACCOUNT
Ideally, you need to remove the allUsers member:
gcloud beta functions remove-iam-policy-binding YOUR-FUNCTION-NAME --member allUsers --role roles/cloudfunctions.invoker --region YOUR-REGION
Then you need to make sure your App Engine instances run as the service account you just created; that should do the trick. Be aware you might need more configuration based on your case, but this can give you a good starting point.
I've been reading the Google Cloud documentation and can't exactly figure out what the difference between Google-managed service accounts and default service accounts is. I know that both of them are automatically created in GCP, but I really don't know much more.
You aren't alone, and that's why Google has started a new video series on this topic. To summarize:
The Google-managed service accounts are accounts created on Google's side (managed by Google, you can't delete them) but that you can grant roles on your project to allow them to perform actions. They are also named service agents. They are used when you use serverless products, such as Cloud Build for example, or Cloud Run (to pull the image, not to run the instance).
The default service accounts (mainly the Compute Engine default service account and the App Engine default service account) are service accounts created automatically in YOUR project (so managed by you; you can delete them if you want) when you activate some APIs. They are used by default when you create instances of some services.
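As a rough illustration, the emails typically follow patterns like these (not exhaustive):

Compute Engine default service account: PROJECT_NUMBER-compute@developer.gserviceaccount.com
App Engine default service account: PROJECT_ID@appspot.gserviceaccount.com
Google-managed service agent (Cloud Build, for example): service-PROJECT_NUMBER@gcp-sa-cloudbuild.iam.gserviceaccount.com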
I have a Cloud Function that interacts with Cloud Storage and BigQuery, and they all belong to the same project. The usual way that I have followed when deploying a Cloud Function from the command line is this:
$ gcloud functions deploy my_function ... --set-env-vars GOOGLE_APPLICATION_CREDENTIALS=my_project_credentials.json
Where my_project_credentials.json is a json key file that contains service account and key to allow access to Cloud Storage and BigQuery.
As this is the way I have always done it, what I need is another way to avoid this JSON credentials file altogether (since these interacting services belong to the same Google Cloud project anyway). Is there such a way? I am a bit new to Google Cloud, so I am not familiar with the ins and outs of IAM.
(An additional reason I need this is that I have a client who is not comfortable with me as a developer having access to that JSON key, and he/she also doesn't want that JSON key deployed alongside the Function code. Kindly provide some details on how to do this in IAM, particularly for BigQuery and Cloud Storage, as I don't have control over IAM either.)
When you can, and at least when your application runs on GCP, you mustn't use a service account key file, for 2 reasons:
It's a simple file used for authentication: you can easily copy it, send it by email, and even commit it to your code repository, maybe a public one!!
It's a secret: you have to store it securely and rotate it frequently (Google recommends at least every 90 days). It's hard to manage; do you really want to redeploy your function every 90 days with a new key file?
So, my peers Gabe and Kolban are right. Use the function's identity:
Either you specify the service account email when deploying the function (see the sketch after this list)
Or the default service account will be used (the App Engine default one, which has the Editor role by default. Not really safe, prefer the first solution)
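For the first option, the deploy command from your question would look roughly like this (the service account name and project ID are placeholders, and the other flags stay as they were):

$ gcloud functions deploy my_function ... --service-account my-function-sa@YOUR_PROJECT_ID.iam.gserviceaccount.com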
In your code, use the default credentials mechanism (the name changes slightly according to the language, but the meaning is the same). If you look into the source code, you will see that it performs these steps:
Check whether the GOOGLE_APPLICATION_CREDENTIALS env var exists. If so, use it.
Check whether the "well known file" exists. Depending on the OS, when you perform a gcloud auth application-default login, the credentials are stored in a different place locally; the library looks for them there.
Check whether the metadata server exists. This link references Compute Engine, but other environments follow the same principle.
There is no "magic" here. The metadata server knows the identity of the function and can generate access and identity tokens on demand. The libraries implement calls to it when your code runs on GCP. That's why you never need a service account key file: the metadata server is there to serve you this information!
What Kolban said. When you deploy your Cloud Function you can define a service account to use, and then any API calls that use Application Default Credentials will automatically use that service account without the need for a service account key (the JSON file). Check out the docs here:
https://cloud.google.com/docs/authentication/production#auth-cloud-implicit-nodejs
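Putting it together, a minimal sketch under those assumptions (the bucket name and query below are placeholders); with the function deployed under the right service account, no key file and no GOOGLE_APPLICATION_CREDENTIALS variable are needed:

const { Storage } = require('@google-cloud/storage');
const { BigQuery } = require('@google-cloud/bigquery');

// Both clients use Application Default Credentials: on Cloud Functions that
// is the function's service account, resolved via the metadata server.
const storage = new Storage();
const bigquery = new BigQuery();

exports.myFunction = async (req, res) => {
  const [files] = await storage.bucket('my-bucket').getFiles();
  const [rows] = await bigquery.query('SELECT 1 AS ok');
  res.json({ fileCount: files.length, rows });
};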