Vault GCP Project Level Role Binding

I am trying to apply the role binding below to grant the Storage Admin Role to a GCP roleset in Vault.
resource "//cloudresourcemanager.googleapis.com/projects/{project_id_number}" {
roles = [
"roles/storage.admin"
]
}
I want to grant access at the project level, not to a specific bucket, so that the GCP roleset can read from and write to the Google Container Registry.
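For context, a roleset with this binding would normally be written to Vault roughly like this (a sketch only; the roleset name, project, and token scope below are placeholders, and the binding can equally be passed from a file with bindings=@binding.hcl):
vault write gcp/roleset/my-roleset \
    project="my-project" \
    secret_type="access_token" \
    token_scopes="https://www.googleapis.com/auth/cloud-platform" \
    bindings=-<<EOF
resource "//cloudresourcemanager.googleapis.com/projects/{project_id_number}" {
  roles = ["roles/storage.admin"]
}
EOF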
When I try to create this roleset in Vault, I get this error:
Error writing data to gcp/roleset/my-roleset: Error making API request.
URL: PUT http://127.0.0.1:8200/v1/gcp/roleset/my-roleset
Code: 400. Errors:
* unable to set policy: googleapi: Error 403: The caller does not have permission
My Vault cluster is running in a GKE cluster which has OAuth scopes for all Cloud APIs, I am the project owner, and the service account Vault is using has the following roles:
Cloud KMS CryptoKey Encrypter/Decrypter
Service Account Actor
Service Account Admin
Service Account Key Admin
Service Account Token Creator
Logs Writer
Storage Admin
Storage Object Admin
I have tried giving the service account both Editor and Owner roles, and I still get the same error.
Firstly, am I using the correct resource to create a roleset for the Storage Admin Role at the project level?
Secondly, if so, what could be causing this permission error?

I had previously recreated the cluster and skipped this step:
vault write gcp/config credentials=@credentials.json
Adding the key file fixed this.
There is also a chance that following the linked steps to create a custom role, and granting that custom role to the service account, played a part.
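For anyone hitting the same 403 on "unable to set policy": a project-level binding means the credentials Vault uses must be able to call setIamPolicy on the project itself, which none of the roles in the list above include. A hedged sketch of a custom role carrying just those two permissions and binding it to the Vault service account (the role ID, project, and service account email here are made up):
gcloud iam roles create vaultRolesetManager \
    --project my-project \
    --title "Vault Roleset Manager" \
    --permissions resourcemanager.projects.getIamPolicy,resourcemanager.projects.setIamPolicy

gcloud projects add-iam-policy-binding my-project \
    --member "serviceAccount:vault-server@my-project.iam.gserviceaccount.com" \
    --role "projects/my-project/roles/vaultRolesetManager"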

Related

Unable to authorize Opensearch Terraform provider without internal database master user

I'm trying to create an index management policy in OpenSearch 1.3 on AWS using Terraform and the elasticsearch provider from phillbaker, but I'm always getting a 403 Forbidden exception when using an IAM master user. After several tries, I changed to an internal database user and it worked straight away once the domain access policy was open for any user.
These are the things I've tried so far:
Creating an IAM user with programmatic credentials, adding this user to the domain access policy and as a master user for the cluster and using the credentials in the provider (using aws_access_key and aws_secret_access_key parameters, not username and password).
Creating an IAM role with administrator access, adding this role as a master user. Configuring a Cognito user pool and identity pool as the identity provider for the cluster and configuring authenticated users to use the role created before. Configuring the domain access policy to allow anyone to use the cluster.
Creating an internal user from the dashboard and adding this user to the all_access role. Configuring the domain access policy to allow anyone to use the cluster.
In all these cases, it didn't work. I tried the last case after changing the configuration to use an internal database user as master, and I verified both had the same role mapping configuration. But only the credentials of the one I assigned through the AWS console worked.
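One way to compare the two mappings directly, rather than through the dashboard, is the security plugin's REST API (a hedged example; the domain endpoint and credentials are placeholders):
# Show who is mapped to the all_access role
curl -s -u 'master-user:master-password' \
  "https://my-domain.eu-west-1.es.amazonaws.com/_plugins/_security/api/rolesmapping/all_access"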
I also tried changing the cluster security configuration on AWS so the domain access policy gets replaced with fine-grained access control. But every time I save the changes and go back to the security tab, the domain access policy is still activated.

Database access denied for AWS RDS proxy

I have set up an RDS proxy for an Aurora DB. I am able to connect to the RDS proxy endpoint but not able to perform any operations.
For example, if I run show processlist; I get the error below:
ERROR 1045 (28000): Database Access denied for user 'admin'@'ip-address' (using password: YES)
Note: I am able to access the RDS endpoint directly and perform all the operations.
Thanks in advance!
I encountered this same issue. Turns out it was related to the auto-generated IAM role permissions.
The secrets manager had 2 user accounts added to it (with verified correct credentials), and both were added to the RDS proxy. However, only the first user account worked. The second user account would get a permission denied error.
Checking the CloudWatch logs, I saw a message similar to:
Credentials couldn't be retrieved. The IAM role "arn:aws:iam::ACCOUNT:role/service-role/rds-proxy-role-TIMESTAMP" is not authorized to read the AWS Secrets Manager secret with the ARN "arn:aws:secretsmanager:REGION:ACCOUNT:secret:SECRET_NAME"
When I looked at the IAM policy for the rds-proxy-role-TIMESTAMP role, it had only been granted access to the secret for the first user. This appears to be an issue with the creation of the IAM role when the proxy is set up.
To resolve it, I modified the policy for the rds-proxy-role-TIMESTAMP role to give it access to the ARN for the second user's secret as well. After a few minutes, I was able to log in as the second user.
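Expressed with the CLI instead of the console, the fix amounts to something like the following (a hedged sketch; the role name, policy name, secret ARNs, and region are placeholders, and if the secrets are encrypted with a customer-managed KMS key the role also needs kms:Decrypt):
# Policy that lets the proxy role read BOTH user secrets
cat > proxy-secrets-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "secretsmanager:GetSecretValue",
    "Resource": [
      "arn:aws:secretsmanager:REGION:ACCOUNT:secret:first-user-secret",
      "arn:aws:secretsmanager:REGION:ACCOUNT:secret:second-user-secret"
    ]
  }]
}
EOF

aws iam put-role-policy \
  --role-name rds-proxy-role-TIMESTAMP \
  --policy-name rds-proxy-secret-access \
  --policy-document file://proxy-secrets-policy.json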
If you are getting a Database access denied error, please check the user permissions in RDS first.
If you can connect to RDS directly with these credentials, check that the credentials in Secrets Manager are the same.
Then check whether your RDS Proxy policy has permission to access all of your Secrets Manager records, as mentioned here: https://stackoverflow.com/a/73649818/4642536
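A quick way to verify exactly what the proxy will pull from Secrets Manager (the secret name is a placeholder):
aws secretsmanager get-secret-value \
  --secret-id my-rds-proxy-user-secret \
  --query SecretString --output text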

How do I access AIP_STORAGE_URI in Vertex AI?

I uploaded a model with
gcloud beta ai models upload --artifact-uri
And in the Docker container I access AIP_STORAGE_URI.
I see that AIP_STORAGE_URI is another Cloud Storage location, so I try to download the files using storage.Client(), but then it says that I don't have access:
google.api_core.exceptions.Forbidden: 403 GET https://storage.googleapis.com/storage/v1/b/caip-tenant-***-***-*-*-***?projection=noAcl&prettyPrint=false: custom-online-prediction@**.iam.gserviceaccount.com does not have storage.buckets.get access to the Google Cloud Storage bucket
I am running this endpoint with the default service account.
https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#artifacts
According to the above link:
The service account that your container uses by default has permission to read from this URI.
What am I doing wrong?
The reason behind the error is that the default service account Vertex AI uses has the "Storage Object Viewer" role, which excludes the storage.buckets.get permission. At the same time, the storage.Client() part of the code makes a storage.buckets.get request against the Vertex AI managed bucket, for which the default service account does not have permission.
To resolve the issue, I would suggest following the steps below -
Make changes in the custom code to access the bucket with the model artifacts in your project, instead of using the environment variable AIP_STORAGE_URI, which points to the model location in the Vertex AI managed bucket.
Create your own service account and grant it all the permissions needed by the custom code. For this specific error, a role with the storage.buckets.get permission, e.g. Storage Admin ("roles/storage.admin"), has to be granted to the service account.
Provide the newly created service account in the "Service Account" field when deploying the model (see the sketch after these steps).
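A hedged sketch of steps 2 and 3 with gcloud (the project, service account, endpoint, and model IDs are placeholders; a narrower custom role works just as well as Storage Admin as long as it carries storage.buckets.get):
gcloud iam service-accounts create vertex-prediction-sa --project my-project

gcloud projects add-iam-policy-binding my-project \
  --member "serviceAccount:vertex-prediction-sa@my-project.iam.gserviceaccount.com" \
  --role "roles/storage.admin"

# Pass the service account when deploying the model to the endpoint
gcloud ai endpoints deploy-model ENDPOINT_ID \
  --region us-central1 \
  --model MODEL_ID \
  --display-name my-deployment \
  --service-account vertex-prediction-sa@my-project.iam.gserviceaccount.com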

ERROR: (gcloud.composer.environments.update) Failed to impersonate when terraform runs impersonating as a second account

I am getting the following error (please see below) when I run terraform apply.
I am running Terraform 12.x.
GCP Cloud Build runs in a different project from project-abcd (where these accounts are).
My terraform code tries to execute a gcloud command in a GCP Cloud Build container. It does so by impersonating composer-bq-sa@prj-abcd.iam.gserviceaccount.com
The service account that terraform runs as is:
terraform_service_account = "org-terraform@abcd.iam.gserviceaccount.com"
(before impersonating)
This IAM account (org-terraform@abcd.iam.gserviceaccount.com) (NOT a service account) has the following role bindings (TOTAL 9):
(There is no Service Account with that email)
Composer Administrator
Compute Network Admin
Service Account Token Creator
Owner
Access Context Manager Admin
Security Admin
Service Account Admin
Logs Configuration Writer
Security Center Notification Configurations Editor
The service account (composer-bq-sa@prj-abcd.iam.gserviceaccount.com) has as one of its members: org-terraform@abcd.iam.gserviceaccount.com
When I look at the screen titled "Members with access to this service account" and look at org-terraform@abcd.iam.gserviceaccount.com, I see that it has the following role bindings (ONLY 4):
Service Account Token Creator
Owner
Security Admin
Service Account Admin
Why am I getting the error below even though the IAM account apparently has the right roles and is one of the members of the service account it is impersonating?
ERROR
module.gcloud_composer_bucket_env_var.null_resource.run_command[0] (local-exec): WARNING: This command
is using service account impersonation. All API calls will be executed as [composer-bq-sa@prj-abcd.iam.gserviceaccount.com].
module.gcloud_composer_bucket_env_var.null_resource.run_command[0] (local-exec): ERROR:
(gcloud.composer.environments.update) Failed to impersonate [composer-bq-sa@prj-abcd.iam.gserviceaccount.com]. Make sure the account that's trying to impersonate it has access to the service account itself and the "roles/iam.serviceAccountTokenCreator" role.
Recapping:
In order to grant a user permission to impersonate a Service Account, follow the instructions listed in this document.
Depending on the use case, you may grant the user the following roles (see the sketch after this list):
roles/iam.serviceAccountUser
roles/iam.serviceAccountTokenCreator
roles/iam.workloadIdentityUser
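Concretely, granting the Token Creator role on the target service account itself looks roughly like this (a sketch using the account names from the question):
gcloud iam service-accounts add-iam-policy-binding \
  composer-bq-sa@prj-abcd.iam.gserviceaccount.com \
  --member "serviceAccount:org-terraform@abcd.iam.gserviceaccount.com" \
  --role "roles/iam.serviceAccountTokenCreator"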

GCP - Not able to list objects inside Bucket even having devstorage.read_only permission?

I have created a GCE instance, which by default runs as the default service account with default permissions. When I checked those default permissions, I found that the service account has all of these scopes:
https://www.googleapis.com/auth/devstorage.read_only
https://www.googleapis.com/auth/logging.write
https://www.googleapis.com/auth/monitoring.write
https://www.googleapis.com/auth/servicecontrol
https://www.googleapis.com/auth/service.management.readonly
https://www.googleapis.com/auth/trace.append
Now I tried to list all the objects inside the bucket by using the command:
gsutil ls gs://mybucketname
I got this error:
AccessDeniedException: 403 XXXX@developer.gserviceaccount.com does not have storage.objects.list access to the Google Cloud Storage bucket.
Why am I getting this error even though my service account has devstorage.read_only?
I am very new to GCP, so let me know.
Please read the official documentation regarding the difference between setting the service account's level of access with IAM roles and setting the GCE instance's access scopes:
Service account permissions
When you set up an instance to run as a service account, you determine the level of access the service account has by the IAM roles that you grant to the service account. If the service account has no IAM roles, then no API methods can be run by the service account on that instance.
Furthermore, an instance's access scopes determine the default OAuth scopes for requests made through the gcloud tool and client libraries on the instance. As a result, access scopes potentially further limit access to API methods when authenticating through OAuth. However, they do not extend to other authentication protocols like gRPC.
Essentially:
IAM restricts access to APIs based on the IAM roles that are granted to the service account.
Access scopes potentially further limit access to API methods when authenticating through OAuth.
Therefore I would recommend adding an IAM role with the storage.objects.list permission to your instance's service account (maybe roles/storage.legacyBucketReader), for example:
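For example, granting that role on just the bucket in question (the service account email is a placeholder for the instance's default service account):
gsutil iam ch \
  serviceAccount:XXXX@developer.gserviceaccount.com:roles/storage.legacyBucketReader \
  gs://mybucketname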