Sharing custom images within an organization in GCP - google-cloud-platform

I am trying to share a custom image in GCP between projects in the organization.
1) Project A
2) Project B
All my custom Images are in project A.
I would like to share the images in Project A with Project B.
As per the documentation, I ran the following command to share images with Project B:
gcloud projects add-iam-policy-binding projecta --member serviceAccount:xxxxxx@cloudservices.gserviceaccount.com --role roles/compute.imageUser
I am using Terraform to provision the instances. In terraform, I am specifying to take the image from project A.
boot_disk {
  initialize_params {
    image = "projects/project_A/global/images/custom_image"
  }
}
I am getting the error below:
Error: Error creating instance: googleapi: Error 403: Required 'compute.images.useReadOnly' permission for 'projects/project_A/global/images/custom_image', forbidden
Can someone please help me out....

I guess the documentation you followed is for Deployment Manager, not Terraform. The command you ran granted the role to the service account xxxxxx@cloudservices.gserviceaccount.com, but Terraform does not use that account by default.
You need to make sure Terraform has enough permissions. You can either supply the credentials of xxxxxx@cloudservices.gserviceaccount.com to Terraform, or create a new service account for Terraform and grant roles/compute.imageUser to it.
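As a sketch of that second option (the account ID terraform-deployer and the project IDs project-a/project-b are hypothetical placeholders, not from the question):

```hcl
# Hypothetical names; adjust project IDs and account_id to your environment.

# Service account that Terraform (provisioning into project B) runs as.
resource "google_service_account" "terraform" {
  project      = "project-b"
  account_id   = "terraform-deployer"
  display_name = "Terraform deployer"
}

# Allow that service account to use the custom images stored in project A.
resource "google_project_iam_member" "image_user" {
  project = "project-a"
  role    = "roles/compute.imageUser"
  member  = "serviceAccount:${google_service_account.terraform.email}"
}
```

roles/compute.imageUser includes compute.images.useReadOnly, which is exactly the permission the 403 error complains about.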

You've done the first step, which is granting a service account the proper permission to share your images across your organisation. The roles/compute.imageUser role is required for this.
Your Terraform config also looks OK (just make sure the self_link to your image is correct; refer to this documentation to verify the image value in your Terraform config).
Also make sure you're providing the proper service account credentials to Terraform, as stated in @Ken Hung's answer.

Related

GCP inter-project IAM with terraform

I'm new to GCP and Terraform, and I need some explanation of the topic in the title.
My problem:
I have 2 (or more) GCP projects under the same organization.
I want a cloud run from project A to write on a bucket in project B.
I have two terraform projects, one for each GCP project.
My question is: how can I make things work?
Thanks in advance.
I created the bucket in project B.
I created the cloud run in project A.
I created a service account in project A for the cloudrun.
In project B I created the binding, but something is not clear to me...
Add this to project B's Terraform:
resource "google_storage_bucket_iam_member" "grant_access_to_sa_from_project_a_to_this_bucket" {
  provider = google
  bucket   = "<my_project_b_bucket_name>"
  role     = "roles/storage.objectViewer"
  member   = "serviceAccount:my_service_account@project_a.iam.gserviceaccount.com"
}
Specify the role according to what you need. The list of GCS roles is here.
The docs of gcs buckets IAM policies are here.
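For completeness, the project A side of this setup might look like the following sketch (the service name, image and region are hypothetical placeholders):

```hcl
# Project A: the service account the Cloud Run service runs as.
resource "google_service_account" "run_sa" {
  project      = "project-a"
  account_id   = "my-service-account"
  display_name = "Cloud Run runtime SA"
}

# Attach it to the Cloud Run service, so calls to the bucket in
# project B are made as this identity.
resource "google_cloud_run_v2_service" "svc" {
  project  = "project-a"
  name     = "my-service"
  location = "europe-west1"

  template {
    service_account = google_service_account.run_sa.email
    containers {
      image = "gcr.io/project-a/my-image"
    }
  }
}
```

Note that since the question's service writes to the bucket, roles/storage.objectViewer would not be enough; per the "specify the role according to what you need" advice above, roles/storage.objectCreator or roles/storage.objectAdmin is likely the right choice.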

Error 403: Storage objects forbidden in GCP

I’m trying to create a whole new sandbox project in GCP for easy deployment and upgrade work. Using Terraform, I am creating a GKE cluster. The issue is that the Terraform scripts were written for the service accounts of a project named, let’s say, NP-H. Now I am trying to create clusters using the same scripts in a project named, let’s say, NP-S.
I ran terraform init and am getting:
error 403: XXX.serviceaccount does not have storage.object.create access to google cloud storage objects., forbidden.
storage: object doesn’t exist.
Now, is the problem with Terraform script or service account permissions?
If it is Terraform script, what are the changes I need to make?
PS: I was able to create buckets and upload objects to Cloud Storage…
There are two ways you can provide credentials:
provider "google" {
  credentials = file("service-account-file.json")
  project     = "PROJECT_ID"
  region      = "us-central1"
  zone        = "us-central1-c"
}
or
export GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/service-account-file.json"
Make sure the service account is from project NP-S (Menu > IAM & admin > Service accounts) and that it has the proper permissions (Menu > IAM & admin > IAM > ...). Run
cat service-account-file.json
and verify the email belongs to the correct project. If need be, you can do a quick test with the Owner/Editor role to isolate the issue, as those roles hold most permissions.
If you're using service account impersonation, do this :
terraform {
  backend "gcs" {
    bucket                      = "<your-bucket>"
    prefix                      = "<prefix>/terraform.tfstate"
    impersonate_service_account = "<your-service-account>@<your-project-id>.iam.gserviceaccount.com"
  }
}
Source : Updating remote state files with a service account
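Impersonation can also be set on the provider itself, not only on the backend; a sketch with the same placeholder names:

```hcl
provider "google" {
  project = "<your-project-id>"
  region  = "us-central1"

  # Your own credentials only need roles/iam.serviceAccountTokenCreator
  # on this service account; all API calls are then made as it.
  impersonate_service_account = "<your-service-account>@<your-project-id>.iam.gserviceaccount.com"
}
```

This keeps long-lived keys out of the picture entirely, which sidesteps the key-rotation concerns discussed elsewhere on this page.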

adding existing GCP service account to Terraform root module for cloudbuild to build Terraform configuration

Asking the community if it's possible to do the following (I had no luck finding further information).
I created a CI/CD pipeline with GitHub, Cloud Build and Terraform. Cloud Build applies the Terraform configuration upon a GitHub pull request and merge to a new branch. However, the (default) Cloud Build service account runs with least privilege.
The question is: I would like Terraform to pull its permissions from an existing, least-privileged service account, to prevent any exploits etc., once the Cloud Build trigger fires and runs terraform init. In other words, Terraform would use an existing external SA to obtain the permissions needed to apply the configuration.
I tried using a service account and binding roles to it, but I get an error stating that the service account already exists.
The next step would be to use a module, but I think this would also create a new SA with replicated roles.
If this is confusing I do apologize, I will help in refining the question to be more concise.
You have 2 solutions:
Use the Cloud Build service account when you execute your Terraform. Your provider looks like this:
provider "google" {
  // Useless with Cloud Build
  // credentials = file("${var.CREDENTIAL_FILE}")
  project = var.PROJECT_ID
  region  = "europe-west1"
}
But this solution implies granting several roles to Cloud Build solely for the Terraform process. A custom role is a good choice for granting only what is required.
The second solution is to use a service account key file. Here again there are 2 options:
Cloud Build creates the service account, grants all the roles on it, generates a key and passes it to Terraform. After the Terraform execution, the service account is deleted by Cloud Build. A good solution, but you have to grant the Cloud Build service account the capability to grant itself any role and to generate a JSON key file. That's a lot of responsibility!
Use an existing service account and a key generated for it. But you have to secure the key and rotate it regularly. I recommend storing it securely in Secret Manager, but the rotation you have to manage yourself, today. With this process, Cloud Build downloads the key (from Secret Manager) and passes it to Terraform. Here again, the Cloud Build service account has the right to access secrets, which is a critical privilege. The step in Cloud Build looks something like this:
steps:
  - name: gcr.io/cloud-builders/gcloud:latest
    entrypoint: "bash"
    args:
      - "-c"
      - |
        gcloud beta secrets versions access --secret=test-secret latest > my-secret-file.txt
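If you manage the secret's IAM in Terraform, the grant that lets the Cloud Build service account read the key could be sketched like this (the secret name test-secret is from the step above; the project number placeholder follows the [PROJECT_NUMBER] convention used elsewhere on this page):

```hcl
# Allow the Cloud Build service account to read the stored key.
resource "google_secret_manager_secret_iam_member" "cloudbuild_reader" {
  secret_id = "test-secret"
  role      = "roles/secretmanager.secretAccessor"
  member    = "serviceAccount:[PROJECT_NUMBER]@cloudbuild.gserviceaccount.com"
}
```

Scoping the secretAccessor role to the single secret, rather than project-wide, keeps the critical privilege mentioned above as narrow as possible.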

GCP Cloud Build fails with permissions error even though correct role is granted

I setup a Cloud Build Trigger in my GCP project in order to deploy a Cloud Function from a Cloud Source Repository via a .yaml file. Everything seems to have been setup correctly and permissions granted according to the official documentation, but when I test the trigger by running it manually, I get the following error:
ERROR: (gcloud.functions.deploy) ResponseError: status=[403], code=[Forbidden], message=[Missing necessary permission iam.serviceAccounts.actAs for on resource [MY_SERVICE_ACCOUNT]. Please grant the roles/iam.serviceAccountUser role. You can do that by running 'gcloud iam service-accounts add-iam-policy-binding [MY_SERVICE_ACCOUNT] --member= --role=roles/iam.serviceAccountUser']
Now first of all, running the suggested command doesn't even work because the suggested syntax is bad (missing a value for "member="). But more importantly, I already added that role to the service account the error message is complaining about. I tried removing it, adding it back, both from the UI and the CLI, and still this error always shows.
Why?
I figured it out after a lot of trial and error. The documentation seems to be incorrect (missing some additional necessary permissions). I used this answer to get me there.
In short, you also need to add the cloudfunctions.developer and iam.serviceAccountUser roles to the [PROJECT_NUMBER]@cloudbuild.gserviceaccount.com account, and (I believe) the aforementioned Cloud Build service account also needs to be added as a member of the service account that has permission to deploy your Cloud Function (again shown in the linked SO answer).
The documentation really should be reflecting this.
Good luck!
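The actAs grant from the error message can also be expressed in Terraform; a sketch, assuming the function runs as [MY_SERVICE_ACCOUNT] in a project whose ID is the placeholder my-project:

```hcl
# Let the Cloud Build service account act as the function's runtime SA,
# which is what the missing iam.serviceAccounts.actAs permission requires.
resource "google_service_account_iam_member" "cloudbuild_act_as" {
  service_account_id = "projects/my-project/serviceAccounts/[MY_SERVICE_ACCOUNT]"
  role               = "roles/iam.serviceAccountUser"
  member             = "serviceAccount:[PROJECT_NUMBER]@cloudbuild.gserviceaccount.com"
}
```

Note this binding sits on the target service account itself, not on the project, which is why granting iam.serviceAccountUser at the project level alone may not resolve the error.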

Google Compute Engine: Required 'compute.zones.get' permission error

I am trying to create a Kubernetes cluster in Google Cloud Platform and I receive the following error when I try to create the cluster from the Web app:
An unknown error has occurred in Compute Engine: "EXTERNAL: Google
Compute Engine: Required 'compute.zones.get' permission for
'projects/my-project-198766/zones/us-west1-a'". Error code: "18"
When I use gcloud I receive this response:
(gcloud.container.clusters.create) ResponseError: code=403,
message=Google Compute Engine: Required 'compute.zones.get' permission
for 'projects/my-project-198766/zones/us-west1-a'
Please note that I have the Owner role and I can create VM instances without any issues.
Any ideas?
This sort of issue might arise if your cloudservices robot somehow gets removed as a project editor; my best guess is that this is the issue in your case.
This might happen due to an API call to SetIamPolicy that is missing the cloudservices robot from the "roles/editor" bindings. SetIamPolicy is a straight PUT; it will override the policy with whatever is provided in the request. You can get the list of IAM policies for your project with the command below, as given in this article.
gcloud projects get-iam-policy [project-id]
From the list, you can check whether the service account below has the Editor role or not:
[id]@cloudservices.gserviceaccount.com
To fix the issue, grant the mentioned service account the "Editor" role and check whether that solves it.
Hope this helps.
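If you manage project IAM with Terraform, restoring that binding could be sketched as follows (the project ID is taken from the question; the [id] placeholder stays as-is):

```hcl
# Restore the Google APIs service agent (cloudservices robot) as a
# project editor, which GKE cluster creation relies on.
resource "google_project_iam_member" "cloudservices_editor" {
  project = "my-project-198766"
  role    = "roles/editor"
  member  = "serviceAccount:[id]@cloudservices.gserviceaccount.com"
}
```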
In my case I had deleted the service accounts / IAM bindings, and that very same error message popped up when I tried to create a Kubernetes cluster.
I asked Google to recreate my service accounts, and they mentioned that you can recreate service accounts and their permissions simply by enabling the services again. So, in my case I ran the following two commands to make Kubernetes work again:
gcloud services enable compute
gcloud services enable container
Here is the link they gave me:
https://issuetracker.google.com/64671745#comment2
I think I got it. I tried to follow the advice from GitHub. The permissions I needed to set on my account (called blahblah-compute@developer.gserviceaccount.com) were:
roles/compute.instanceAdmin
roles/editor
roles/iam.serviceAccountUser
The last one seemed to be crucial.
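Granting all three roles in one go could look like this Terraform sketch (the account name is from this answer, the project ID from the question):

```hcl
locals {
  compute_sa_roles = [
    "roles/compute.instanceAdmin",
    "roles/editor",
    "roles/iam.serviceAccountUser",
  ]
}

# One project-level binding per role for the default compute SA.
resource "google_project_iam_member" "compute_sa" {
  for_each = toset(local.compute_sa_roles)
  project  = "my-project-198766"
  role     = each.value
  member   = "serviceAccount:blahblah-compute@developer.gserviceaccount.com"
}
```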
For me, recreating the service account with a new name from the console fixed the issue. I only gave the "Editor" role to the service account.
I had accidentally deleted the compute service account. I had to follow all the steps mentioned above, i.e.:
undelete the compute service account
add the permissions back to the service account: Editor, Service Account User, Compute Instance Admin
enable the compute and container services again. Although these were not disabled, running gcloud services enable compute container recreated some default service accounts for the compute robot, such as service-[PROJECT_NUMBER]@compute-system.iam.gserviceaccount.com and service-[PROJECT_NUMBER]@container-engine-robot.iam.gserviceaccount.com
Hope this helps
As indicated by @Taher, that's most likely due to missing permissions for Google-managed service accounts. If, after checking the IAM policies for your project with gcloud projects get-iam-policy [project-id], you do not see the permissions listed, then you can add the required permissions by running the following:
project_id=[your-project-id]
project_number=$(gcloud projects describe $project_id --format='value(projectNumber)')
gcloud projects add-iam-policy-binding $project_id \
  --member="serviceAccount:service-$project_number@compute-system.iam.gserviceaccount.com" \
  --role="roles/compute.serviceAgent"
gcloud projects add-iam-policy-binding $project_id \
  --member="serviceAccount:service-$project_number@container-engine-robot.iam.gserviceaccount.com" \
  --role="roles/container.serviceAgent"
The full list of Google managed service accounts (service agents) is available here.