Cannot use more granular roles for the Google-managed Container Registry service account, service-[PROJECT_NUMBER]@containerregistry.iam.gserviceaccount.com. Can anyone shed some light on this?
It seems this service account is assigned the primitive "Editor" role by default when you enable the Container Registry API, and you cannot change it to something more granular, the way you can for cloudbuild.gserviceaccount.com.
Google's documentation on Cloud Build:
https://cloud.google.com/cloud-build/docs/securing-builds/set-service-account-permissions
But there is not much information for Container Registry:
https://cloud.google.com/container-registry/docs/overview
Our compliance tool flagged that the Editor role is used by the GCR service account. That is too much permission for just GCR access.
I am getting the error below.
Does anyone have any idea how to solve it?
Failed to create pipeline job. Error: Vertex AI Service Agent
'XXXXX@gcp-sa-aiplatform-cc.iam.gserviceaccount.com' should be granted
access to the image gcr.io/gcp-project-id/application:latest
{PROJECT_NUMBER}@gcp-sa-aiplatform-cc.iam.gserviceaccount.com is Google's AI Platform service agent.
This service agent needs read/pull access to the Docker image in your project's GCR in order to create the container for the pipeline run.
If you have permission to edit IAM roles, you can try adding Artifact Registry roles to the above service agent.
You can start by adding roles/artifactregistry.reader.
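If it helps, here is a minimal sketch of that grant using the Resource Manager API from Python; the project ID and project number below are placeholders, and the same binding can equally be added in the console or with gcloud:

```python
# Hypothetical sketch: grant roles/artifactregistry.reader to the AI Platform
# service agent at the project level. Project ID and project number are placeholders.
from googleapiclient import discovery

PROJECT_ID = "gcp-project-id"  # placeholder project ID
SERVICE_AGENT = "serviceAccount:123456789@gcp-sa-aiplatform-cc.iam.gserviceaccount.com"  # placeholder

crm = discovery.build("cloudresourcemanager", "v1")

# Read the current project policy, append the binding, and write it back.
policy = crm.projects().getIamPolicy(resource=PROJECT_ID, body={}).execute()
policy["bindings"].append({
    "role": "roles/artifactregistry.reader",
    "members": [SERVICE_AGENT],
})
crm.projects().setIamPolicy(resource=PROJECT_ID, body={"policy": policy}).execute()
```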
Hope this helps :)
This error may have occurred due to missing roles or permissions for pulling and pushing images to Container Registry. All users and service accounts that interact with Container Registry must be given the appropriate Cloud Storage permissions. You can give roles/storage.objectViewer, roles/storage.legacyBucketWriter or roles/storage.admin to your service account so it can access your image in Container Registry. You can follow this doc for giving the appropriate roles and permissions to the service account.
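As a hedged illustration (assuming the default gcr.io backing bucket name artifacts.PROJECT_ID.appspot.com and the google-cloud-storage Python client; the service account email is a placeholder), a bucket-level grant might look roughly like this:

```python
# Hypothetical sketch: give a service account read access to the Cloud Storage
# bucket that backs Container Registry. The gcr.io host is assumed to use the
# bucket artifacts.PROJECT_ID.appspot.com; regional hosts use a prefixed name.
from google.cloud import storage

PROJECT_ID = "my-project"  # placeholder
MEMBER = "serviceAccount:my-sa@my-project.iam.gserviceaccount.com"  # placeholder

client = storage.Client(project=PROJECT_ID)
bucket = client.bucket(f"artifacts.{PROJECT_ID}.appspot.com")

policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append({
    "role": "roles/storage.objectViewer",  # pull access; use legacyBucketWriter for pushes
    "members": {MEMBER},
})
bucket.set_iam_policy(policy)
```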
I created a service account, mycustomsa@myproject.iam.gserviceaccount.com.
Following GCP best practices, I would like to use it to run a GCE VM named instance-1 (not yet created).
This VM has to be able to write logs and metrics to Stackdriver.
I identified:
roles/monitoring.metricWriter
roles/logging.logWriter
However:
Do you advise any additional role I should use (e.g. instance admin)?
How should I set up the IAM policy binding at the project level to restrict the usage of this service account to just GCE and instance-1?
For writing logs and metrics to Stackdriver those roles are appropriate; beyond that, you need to define what other activities the instance will be doing. However, as John pointed out in his comment, using a conditional role binding might be useful, as conditions can be added to new or existing IAM policies to further control access to Google Cloud resources.
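For example, a conditional binding could be added through the Resource Manager API roughly as follows; the time-bound condition here is only an illustrative assumption, not something your scenario requires:

```python
# Hypothetical sketch: add a time-bound (conditional) binding for the custom
# service account at the project level. Policies with conditions must use version 3.
from googleapiclient import discovery

PROJECT_ID = "myproject"  # placeholder
MEMBER = "serviceAccount:mycustomsa@myproject.iam.gserviceaccount.com"

crm = discovery.build("cloudresourcemanager", "v1")
policy = crm.projects().getIamPolicy(
    resource=PROJECT_ID,
    body={"options": {"requestedPolicyVersion": 3}},
).execute()

policy["bindings"].append({
    "role": "roles/logging.logWriter",
    "members": [MEMBER],
    "condition": {
        "title": "temporary-grant",  # illustrative condition only
        "expression": 'request.time < timestamp("2025-01-01T00:00:00Z")',
    },
})
policy["version"] = 3

crm.projects().setIamPolicy(resource=PROJECT_ID, body={"policy": policy}).execute()
```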
As for best practices on service accounts, I would recommend making the SA as secure as possible with the following:
-Specify who can act as service accounts. Users who are Service Account Users for a service account can indirectly access all the resources the service account has access to. Therefore, be cautious when granting the serviceAccountUser role to a user (see the sketch after this list).
-Grant the service account only the minimum set of permissions required to achieve their goal. Learn about granting roles to all types of members, including service accounts.
-Create service accounts for each service with only the permissions required for that service.
-Use the display name of a service account to keep track of the service accounts. When you create a service account, populate its display name with the purpose of the service account.
-Define a naming convention for your service accounts.
-Implement processes to automate the rotation of user-managed service account keys.
-Take advantage of the IAM service account API to implement key rotation.
-Audit service accounts and keys using either the serviceAccount.keys.list() method or the Logs Viewer page in the console.
-Do not delete service accounts that are in use by running instances on App Engine or Compute Engine unless you want those applications to lose access to the service account.
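Tying back to the first point in the list above, a minimal sketch of granting the Service Account User role on just this one service account, rather than project-wide, might be (the user email and service account email are placeholders):

```python
# Hypothetical sketch: grant roles/iam.serviceAccountUser to one user on one
# service account only, so only that user can attach the SA to instances.
from googleapiclient import discovery

SA_RESOURCE = "projects/-/serviceAccounts/mycustomsa@myproject.iam.gserviceaccount.com"
MEMBER = "user:alice@example.com"  # placeholder user

iam = discovery.build("iam", "v1")
policy = iam.projects().serviceAccounts().getIamPolicy(resource=SA_RESOURCE).execute()
policy.setdefault("bindings", []).append({
    "role": "roles/iam.serviceAccountUser",
    "members": [MEMBER],
})
iam.projects().serviceAccounts().setIamPolicy(
    resource=SA_RESOURCE, body={"policy": policy}
).execute()
```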
I am trying to use setIamPolicy for the Cloud Build service account @cloudbuild.gserviceaccount.com. I want to grant the App Engine Admin and Cloud Run Admin roles to the Cloud Build service member so that it can do automated releases to App Engine.
Somehow it throws a 404 when I pass the resource of the Cloud Build service account while getting the IAM policy. To confirm, I tried GET https://iam.googleapis.com/v1/{name=projects/*}/serviceAccounts in the API Explorer, and it also does not return the Google-managed service accounts. It seems it only returns the service accounts that were created manually, not the Google-managed default accounts.
How can I set IAM Policy to grant these permissions to Cloud Build?
The general idea is to enable these permissions for both App Engine and Cloud Run. Note that the roles are granted on the project's IAM policy, with the Cloud Build service account as the member; the 404 appears because Google-managed accounts such as @cloudbuild.gserviceaccount.com are not service account resources you can list or set a policy on under your own project.
Also, a common problem is not knowing that cron permissions are needed for App Engine and Cloud Build. For example, this article mentions "Update cron schedules" as "No" for "App Engine Admin". Whether you need that or not depends on how your builds are done. If you end up needing that too, grant the "Cloud Scheduler Admin" role to your @cloudbuild.gserviceaccount.com account. You can apply the same logic to other permissions, and that chart can be useful for working out what is needed depending on your setup.
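A rough sketch of those grants through the Resource Manager API in Python (project ID and project number are placeholders) could look like this; note that the resource is the project, not the service account:

```python
# Hypothetical sketch: grant App Engine Admin and Cloud Run Admin to the
# Cloud Build service account by updating the *project* IAM policy.
from googleapiclient import discovery

PROJECT_ID = "my-project"  # placeholder
CLOUD_BUILD_SA = "serviceAccount:123456789@cloudbuild.gserviceaccount.com"  # placeholder number

crm = discovery.build("cloudresourcemanager", "v1")
policy = crm.projects().getIamPolicy(resource=PROJECT_ID, body={}).execute()

for role in ("roles/appengine.appAdmin", "roles/run.admin"):
    policy["bindings"].append({"role": role, "members": [CLOUD_BUILD_SA]})

crm.projects().setIamPolicy(resource=PROJECT_ID, body={"policy": policy}).execute()
```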
How do you set up multi-account (multi-project) access in GCP? It is possible in AWS by using assume-role; does anyone know how to do it in Google Cloud (GCP)?
I tried to find the AWS equivalent in GCP but was not able to find any documentation.
As documented, AssumeRole in AWS returns a set of temporary security credentials that you can use to access AWS resources that you might not normally have access to.
In AWS you can create one set of long-term credentials in one account. Then you can use temporary security credentials to access all the other accounts by assuming roles in those accounts.
The equivalent of the above in GCP would be creating short-lived credentials for service accounts to impersonate their identities (Documentation link).
Accordingly, in GCP you have the “caller” and the “limited-privilege service account” for whom the credential is created.
To implement this scenario, first go through the documentation on Service Accounts and Cloud IAM roles in GCP to understand how accounts work there, since each account is a service account with specific role permissions.
The link I posted above, provides detailed information on the flows that allow a caller to create short-lived credentials for a service account and the supported credential types.
Additionally, this link can assist you in visualizing and understanding the resource hierarchy architecture in GCP and give you examples on how to structure your project according to your organization’s structure.
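As a minimal sketch (the target service account email and project are placeholders, and the caller needs roles/iam.serviceAccountTokenCreator on that service account), impersonation from Python might look like this:

```python
# Hypothetical sketch of the GCP analogue of AssumeRole: impersonate a
# limited-privilege service account using short-lived credentials.
import google.auth
from google.auth import impersonated_credentials
from google.cloud import storage

source_credentials, _ = google.auth.default()  # the "caller" identity

target_credentials = impersonated_credentials.Credentials(
    source_credentials=source_credentials,
    target_principal="limited-sa@other-project.iam.gserviceaccount.com",  # placeholder
    target_scopes=["https://www.googleapis.com/auth/cloud-platform"],
    lifetime=3600,  # seconds; short-lived, like AWS temporary credentials
)

# Use the short-lived identity against resources in the other project.
client = storage.Client(project="other-project", credentials=target_credentials)
for bucket in client.list_buckets():
    print(bucket.name)
```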
The basic answer is "service accounts and roles". Limited-time (short-lived) service account credentials are available.
For assigning permissions across projects (but still in the same organization), you can create a custom role.
For letting any user assume the role of a service account, use the Service Account user role.
For limited-time authorization tokens, you have OAuth 2.0 for server-to-server calls, particularly with JWT where available.
I'd like to configure our company container registry on GCP to:
Allow staff to push new images with new tags
Not allow existing tags to be replaced
The goal is to avoid using latest tag - or any other mutable tag - and consistently use new, immutable tags for new images.
Is there a set of IAM roles or permissions that can achieve this behaviour?
You don't have to use IAM roles directly; it should be done with a service account.
You need to create a service account, assign it the GCR Editor role, download a JSON key file, and then send it to your staff.
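If it helps, a rough sketch of that setup with the IAM API from Python might look like the following; the account ID, display name, and project ID are placeholders, and the role grant itself is a separate project-level IAM binding:

```python
# Hypothetical sketch: create a registry-push service account and download a
# JSON key for it via the IAM API. Names and project ID are placeholders.
import base64
from googleapiclient import discovery

PROJECT_ID = "my-project"
ACCOUNT_ID = "gcr-pusher"  # becomes gcr-pusher@my-project.iam.gserviceaccount.com

iam = discovery.build("iam", "v1")

sa = iam.projects().serviceAccounts().create(
    name=f"projects/{PROJECT_ID}",
    body={"accountId": ACCOUNT_ID,
          "serviceAccount": {"displayName": "Pushes images to GCR"}},
).execute()

key = iam.projects().serviceAccounts().keys().create(
    name=sa["name"], body={}
).execute()

# privateKeyData is the base64-encoded JSON key file.
with open("gcr-pusher-key.json", "wb") as f:
    f.write(base64.b64decode(key["privateKeyData"]))
```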
A service account JSON key file is a long-lived credential that is scoped to a specific GCP Console project and its resources.
The service account you use to push and pull images must be correctly configured with the required permissions and access scope for interaction with Container Registry.
Service accounts automatically created by GCP, such as the Container Registry service account, are granted the read-write Editor role for the parent project. The Compute Engine default service account is configured with read-only access to storage within the same project. You may wish to grant other service accounts more specific permissions. Pushing and pulling images across projects requires proper configuration of both permissions and access scopes on the service account that interacts with Container Registry.
For more information about the required service account permissions and scopes to push and pull images, refer to the requirements for using Container Registry with Google Cloud Platform.