I am doing a quick inventory of our service accounts within a particular GCP project and I want to find all the resources a specific service account has access to. This seems like it'd be a simple lookup, since a GCP policy is simply an identity given a role on a particular resource; however, it doesn't seem like gcloud has this specific lookup... unless I'm missing something. I can find the service account/role combination via IAM or gcloud beta asset search-all-iam-policies but the final portion of the query seems to be missing.
To find all the resources authorized for a specific account, Cloud Asset Inventory is the right tool.
You can run a request like this:
gcloud beta asset search-all-iam-policies \
--scope=<Where to search> \
--query="policy:<who to search>"
The scope is the perimeter in which you are searching. It can be one of:
organizations/<OrganisationNumber>
folders/<folderNumber>
projects/<ProjectNumber or ProjectID>
The query is what you are searching for: here, a policy containing a specific service account email. Set it and run the request.
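For example, a concrete invocation might look like this (the project ID and service account email are placeholders, not values from your environment):
# Hypothetical example: list every IAM policy in the project that
# mentions this service account.
gcloud beta asset search-all-iam-policies \
  --scope=projects/my-project-id \
  --query="policy:my-sa@my-project-id.iam.gserviceaccount.com"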
Is that what you are looking for?
When I try to create a job in the GCP Cloud Scheduler I get this error:
{"error":{"code":7,"message":"The principal (user or service account) lacks IAM permission \"iam.serviceAccounts.actAs\" for the resource \"[my service account]\" (or the resource may not exist)."}}
When I enabled the GCP Cloud Scheduler the service account was created (and I can see it in my accounts list). I have verified that it has the "Cloud Scheduler Service Agent" role.
I am logged in as an Owner of our project. It is when I try to create the job that I get this error. I tried to add the "Service Account User" role to my principal account, but to no avail.
Does anyone know if I have to add any additional permissions? Or if I have to allow my principal to act (impersonate?) this service account in some way?
Many thanks.
Ben
Ok, I figured this out. The documentation is (sort of, in my view) clear if you read it a certain way and know how GCP IAM works.
You actually need two service accounts: one that you set up yourself (it can have whatever name you like and doesn't require any special permissions), and the one for Cloud Scheduler itself.
Don't confuse the two. And use the one that you created when specifying the service account to generate the OAuth / OIDC tokens.
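To make this concrete, here is a minimal sketch of the setup, assuming hypothetical names (scheduler-invoker, my-project-id, the job URI) that you would replace with your own:
# Create the caller service account for the Scheduler job.
gcloud iam service-accounts create scheduler-invoker \
  --display-name="Scheduler job invoker"
# Allow your own user to act as it; this grant provides the
# iam.serviceAccounts.actAs permission named in the error message.
gcloud iam service-accounts add-iam-policy-binding \
  scheduler-invoker@my-project-id.iam.gserviceaccount.com \
  --member="user:you@example.com" \
  --role="roles/iam.serviceAccountUser"
# Reference that account when creating the job so Scheduler can
# mint OIDC tokens with it.
gcloud scheduler jobs create http my-job \
  --schedule="*/5 * * * *" \
  --uri="https://my-service.example.com/run" \
  --oidc-service-account-email=scheduler-invoker@my-project-id.iam.gserviceaccount.com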
I am trying to understand my options for creating a report in GCP to identify individual user accounts assigned to resources (Org, Folders, projects, Billing Accounts, VPC) inside of my GCP Resource Hierarchy. I would assume this question has been answered but I am unable to find any information on this.
Please let me know if this is the correct forum to ask this type of question or if I need to put this question in one of the other forums.
Thank you
This is actually complicated to do in Google Cloud.
There are two areas that you want to track: IAM members, and IAM members assigned to resources.
There are a number of IAM member types: users, service accounts, groups, G Suite domains, and Cloud Identity domains.
Some resources can be assigned IAM members. An example would be for impersonation or ActAs. You will need to scan every resource that supports member assignment.
To add to this, there are directly assigned members and inherited members.
Then there are organizations, folders and projects.
If your goal is to simply create a report for auditing, buy a commercial product or service. There are many to choose from.
If your goal is to develop a deeper understanding of Google IAM, resources, roles and permissions, select your favorite language and dig into the Google Cloud SDK and the Google Cloud CLI gcloud.
The asset command of the gcloud command line tool can be used to gather the data you are looking for. Before starting, ensure that you have the correct IAM permissions to view assets. If you have multiple projects under one org, this command can gather all IAM policies from the org and all projects in the org's hierarchy:
gcloud asset export --content-type iam-policy --organization your-org-id \
--output-path gs://your-secure-bucket/your-policy-audit
The policy file is saved as a text file to a Google Cloud Storage bucket. The file consists of multiple JSON objects, one per line. You'll still have to process the IAM file to extract the users.
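As a rough sketch of that post-processing, assuming the export uses the newline-delimited JSON layout with an iam_policy field (adjust the field name if your export differs), jq can extract the user members:
# Download the exported policy file and pull out the user: members.
gsutil cp gs://your-secure-bucket/your-policy-audit ./policy-audit.json
jq -r '.iam_policy.bindings[]?.members[]? | select(startswith("user:"))' \
  policy-audit.json | sort -u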
Use case:
I have a GCP setup with :
multiple Google Kubernetes Engine clusters
multiple CloudSQL instances
multiple GCS buckets
Basically, I'd like to give permissions to users with finer granularity than project-wide (i.e. user A can only access bucket B, but can access Cloud SQL instances C and D).
Possible solutions:
For GCS, this is easy and documented.
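For example, bucket-level access can be granted with a single command (the user and bucket names below are placeholders):
# Hypothetical sketch: give user A read access to bucket B only.
gsutil iam ch user:user-a@example.com:objectViewer gs://bucket-b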
However, I couldn't find anything similar for the other two. The Cloud SQL documentation even seems to say it is not possible ("All permissions are applied to the project. You cannot apply different permissions based on the instance or other lower-level object.").
Workarounds explored:
I tried creating custom IAM roles. I was hoping that there would be some way to filter on objects the role is applied to, in a fashion similar to what AWS IAM allows with its Resource filter.
That's apparently not possible here.
For GKE, I can give every user the Kubernetes Engine Cluster Viewer role (which basically just allows listing clusters and basic info about them, as well as logging in with the gcloud CLI tool), and then use the Kubernetes RBAC engine to give very fine-grained permissions. This works fine, but it doesn't let the user use the Google web interface, which is extremely handy, especially for a beginner on k8s.
Similarly, for CloudSQL, I can give the Cloud SQL Client role, and manage my users directly through the postgres access control system. This works fine, but my users are able to connect to other instances (they still need an account on these instances, of course). Moreover, operations such as restoring a backup cannot be allowed only on specific instances.
So, have I missed something obvious, or has anybody found a way to work around these limitations?
For GKE, it seems the only option is using RBAC to give users fine-grained permissions, via a RoleBinding within a namespace or a ClusterRoleBinding for cluster-wide permissions.
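As an illustration, a minimal RBAC grant might look like this (the user, namespace, and binding name are placeholders):
# Hypothetical sketch: give one user read-only access in a single
# namespace, using the built-in view ClusterRole.
kubectl create rolebinding dev-viewer \
  --clusterrole=view \
  --user=user-a@example.com \
  --namespace=dev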
Regarding Cloud SQL, it does not currently support instance-based permissions, but you can track any updates in this link for the feature request.
I have spent the entire day today reading documentation and questions on Stack Exchange, trying to use a service account to log on to a Compute Engine instance, but have gotten nowhere.
I am new to google cloud, so pardon my knowledge.
We are trying to set up a long-running service on a Google Compute Engine instance. We want the service to run under a system account rather than an individual account, so as to allow troubleshooting privileges across the team without tying them to specific users. We thought a GCP service account should be able to accomplish this, but we haven't been able to log on to a Compute Engine instance as a service account. We took the following steps to try this out -
create a service account and give serviceaccountuser permissions to the team. Also create RSA keys for the service account, which were distributed to the team.
use gcloud auth activate-service-account to switch to the service account
run gcloud init as the service account and set up the configuration
use gcloud compute ssh (roughly as sketched below).
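In command form, the sequence was roughly the following (the key file, account, instance, and zone shown are placeholders rather than our exact values):
# Switch gcloud to the service account identity.
gcloud auth activate-service-account my-sa@my-project.iam.gserviceaccount.com \
  --key-file=./my-sa-key.json
gcloud config set project my-project
# Attempt to SSH as that identity.
gcloud compute ssh my-instance --zone=us-central1-a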
We hoped to be able to logon to the instance as the service account since we switched identity before logging on. But we are not getting the desired effect.
questions -
Can service accounts actually be used to log on to Compute Engine?
if not, what is the purpose of the service account that you configure a VM to run as when creating it on GCP?
if not, what is the right way to run a service on a compute engine using a system account that everybody can have access to?
if yes, what are we missing?
Thanks a lot in advance for clearing up the confusion,
The service account allows the Compute Engine instance to access other Google APIs. For example, the instance might need to access private content from Storage buckets or connect to a Datastore. See https://cloud.google.com/iam/docs/service-accounts
In order to give your team members (ssh) access to a compute engine instance, you add them as members to the project by adding their Google accounts. Specify their level of access so they can only list and ssh in, but not create or delete. I think you want a new role with "Compute OS Login" permission. They don't need billing set up either. See https://cloud.google.com/iam/docs/granting-changing-revoking-access
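A minimal sketch of that grant, assuming hypothetical project and user names:
# Let a teammate SSH in via OS Login without broader Compute rights.
gcloud projects add-iam-policy-binding my-project-id \
  --member="user:teammate@example.com" \
  --role="roles/compute.osLogin"
# OS Login also needs to be enabled in instance or project metadata.
gcloud compute project-info add-metadata \
  --metadata enable-oslogin=TRUE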
When creating a new version of an ML Engine Model with the command
gcloud ml-engine versions create 'v1' --model=model_name --origin=gs://path_to_model/1/ --runtime-version=1.4
I receive the following error:
ERROR: (gcloud.ml-engine.versions.create) FAILED_PRECONDITION: Field: version.deployment_uri Error: Read permissions are required for Cloud ML service account cloud-ml-service#**********.iam.gserviceaccount.com to the model file gs://path_to_model/1/saved_model.pb.
- '#type': type.googleapis.com/google.rpc.BadRequest
fieldViolations:
- description: Read permissions are required for Cloud ML service account cloud-ml-service#**********.iam.gserviceaccount.com to the model file gs://path_to_model/1/saved_model.pb.
field: version.deployment_uri
This service account is not listed in the IAM & admin panel and does not belong to my project, so I don't want to grant permissions for this account manually.
Has anyone else also experienced this? Any suggestions on what I should do?
Additional information:
The google storage bucket has storage class regional and location europe-west1.
I already tried to disable (and re-enable) the ML Engine service with the command
gcloud services disable ml.googleapis.com
but this resulted in the following error:
ERROR: (gcloud.services.disable) The operation with ID tmo-acf.********-****-****-****-************ resulted in a failure.
Updated information:
The storage bucket does not belong to a different project.
The command
gcloud iam service-accounts get-iam-policy cloud-ml-service#**********.iam.gserviceaccount.com
gives the error:
ERROR: (gcloud.iam.service-accounts.get-iam-policy) PERMISSION_DENIED: Permission iam.serviceAccounts.getIamPolicy is required to perform this operation on service account projects/-/serviceAccounts/cloud-ml-service#**********.iam.gserviceaccount.com.
The dash in the path projects/-/serviceAccounts/... in this error message seems very wrong to me.
PROBLEM HAS BEEN SOLVED
I was finally able to disable the ML Engine service after removing all my models. After re-enabling the service I got a new service account which shows up in my IAM & admin panel and is able to access my cloud storage.
If someone finds this issue: #freeCris wrote the solution in the question itself. I decided to write this down because I read all the documentation in the answers, found nothing useful, and then realized he had written how to solve it in the question.
For those wanting to fix this, just run (make sure you don't have resources in ML Engine such as models and versions):
gcloud services disable ml.googleapis.com
And then run:
gcloud services enable ml.googleapis.com
You'll get a new service account that this time is listed in your IAM console. Just add it to your GCS bucket and it'll work now.
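If you'd rather grant the access explicitly instead of going through the console, a sketch along these lines should work (substitute the service account email from your error message; the bucket is the one holding your model files):
# Hypothetical sketch: give the ML Engine service account read access
# to the model bucket.
gsutil iam ch serviceAccount:<service-account-email>:objectViewer gs://path_to_model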
I think the problem was that you tried to create the model under a different project, one which was not associated with the bucket you tried to reach. So you used the service account of that different project to access the bucket, which is why it did not have any permissions and did not appear in your IAM.
If that happens again, or if anybody else has this problem, you can check your projects with gcloud projects list and switch with gcloud config set project <project id>.
Yes, that service account doesn't belong to your project; it is the service account for Cloud ML Engine. For deploying on ML Engine, you will need to grant that service account read access to your model files on GCS. Here is the documentation on how you can do that: https://cloud.google.com/ml-engine/docs/access-control#permissions_required_for_storage
This might also be useful: https://cloud.google.com/ml-engine/docs/working-with-data#using_a_cloud_storage_bucket_from_a_different_project