IAM_PERMISSION_DENIED - google-cloud-platform

I am using Terraform to deploy a GCP organization policy at the project level (https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/org_policy_policy)
The service account has 2 roles: Organization Policy Administrator and serviceAccountTokenCreator.
terraform init and terraform plan run fine and show the number of items to be deployed, but terraform apply reports:
"code": 403,
"message": "Permission 'iam.serviceAccounts.getAccessToken' denied on resource (or it may not exist).",
"status": "permission_Denied",
"details":{
"#type": "type.googleapis.com/google.rpc.ErrorInfo",
"reason": "iam_permission_denied",
"domain": "iam.googleapis.com",
"metadata": {
"permission": "iam.serviceAccounts.getAccessToken"
I have tested with several other roles, with no luck. I am not able to figure out what I am missing.
Could you please assist me?
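For reference, this particular error usually means the identity running Terraform is trying to mint a token for (impersonate) a service account without holding roles/iam.serviceAccountTokenCreator on that service account; the role has to be granted on the target service account, not to it. A minimal sketch of what I assume the setup looks like (all names are placeholders, and I'm assuming provider-level impersonation is in play):

provider "google" {
  # Terraform impersonates the deployer service account
  impersonate_service_account = "deployer-sa@my-project.iam.gserviceaccount.com"
}

# The CALLER (your user or CI identity) needs Token Creator ON the target SA
resource "google_service_account_iam_member" "allow_impersonation" {
  service_account_id = "projects/my-project/serviceAccounts/deployer-sa@my-project.iam.gserviceaccount.com"
  role               = "roles/iam.serviceAccountTokenCreator"
  member             = "user:you@example.com"
}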

Related

Best practice to limit what roles and resources service account can provision

I have a service account that's being used in a pipeline for Terraform. Currently, it has the owner role, but I want to restrict what roles it can assign via the Terraform configuration. I want to avoid a scenario where a user can elevate their own role to owner and bypass the pipeline; this restriction should also apply to the users who are responsible for reviewing and approving changes.
I'm following this guide, but it only shows how to assign a role that does nothing except assign other roles; I need the service account to deploy resources as well. https://cloud.google.com/iam/docs/setting-limits-on-granting-roles#use-cases
{
  "members": [
    "finn@example.com"
  ],
  "role": "roles/resourcemanager.projectIamAdmin",
  "condition": {
    "title": "only_billing_roles",
    "description": "Only allows changes to role bindings for billing accounts",
    "expression":
      "api.getAttribute('iam.googleapis.com/modifiedGrantsByRole', []).hasOnly(['roles/billing.admin', 'roles/billing.user'])"
  }
}
Is there a way to specify the owner role on the service account but then limit what roles it can assign and what resources it can deploy?
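One pattern that might fit (a sketch, not a verified answer): give the pipeline's service account a broad deploy role that cannot modify project IAM policy (for example roles/editor), and add roles/resourcemanager.projectIamAdmin with an IAM condition on modifiedGrantsByRole, like the docs example above. In Terraform this could look roughly like the following (the project, member, and allowed role list are placeholders):

resource "google_project_iam_member" "deploy" {
  project = "my-project"
  role    = "roles/editor" # can create resources, cannot change project IAM policy
  member  = "serviceAccount:tf-pipeline@my-project.iam.gserviceaccount.com"
}

resource "google_project_iam_member" "limited_granting" {
  project = "my-project"
  role    = "roles/resourcemanager.projectIamAdmin"
  member  = "serviceAccount:tf-pipeline@my-project.iam.gserviceaccount.com"

  condition {
    title       = "no_privilege_escalation"
    description = "Only allow granting a safe subset of roles"
    expression  = "api.getAttribute('iam.googleapis.com/modifiedGrantsByRole', []).hasOnly(['roles/storage.objectViewer', 'roles/pubsub.subscriber'])"
  }
}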

Permission 'documentai.processors.processOnline' denied on resource (or it may not exist)

I am trying to send a POST request to the Cloud Document AI API using Postman. I have tried sending a POST request with the API key included, and also with an OAuth access token (generated using gcloud auth application-default print-access-token) as the OAuth 2.0 authorization. However, this error is returned:
{
  "error": {
    "code": 403,
    "message": "Permission 'documentai.processors.processOnline' denied on resource '//documentai.googleapis.com/projects/<project id>/locations/us/processors/<processor id>' (or it may not exist).",
    "status": "PERMISSION_DENIED",
    "details": [
      {
        "@type": "type.googleapis.com/google.rpc.ErrorInfo",
        "reason": "IAM_PERMISSION_DENIED",
        "domain": "documentai.googleapis.com",
        "metadata": {
          "resource": "projects/<project id>/locations/us/processors/<processor id>",
          "permission": "documentai.processors.processOnline"
        }
      }
    ]
  }
}
I think this is a problem with the service account permissions. If so, is there any way I can resolve this if I don't have the access to change roles?
Just to give an update on this question: the problem was indeed related to the service account permissions. There is no way to resolve this without setting up your service account with the correct permissions. Once you have set up the service account correctly and authenticate using that account's service account key, the problem goes away.
TL;DR: Follow the documentation (https://cloud.google.com/document-ai/docs/setup#auth) properly. If you don't have access to the Google Account (like I did), try to get access to it. Otherwise, I don't think there is a way around it.
I struggled with this as well. To resolve the issue, go to IAM and change the role of your service account to Document AI service user (the default is Document AI service agent).
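If the project's IAM is managed with Terraform, the grant from the answer above might look roughly like this (a sketch; the project and service account are placeholders, and I'm assuming the console role corresponds to roles/documentai.apiUser):

resource "google_project_iam_member" "docai_user" {
  project = "my-project"
  role    = "roles/documentai.apiUser" # lets the principal call processOnline
  member  = "serviceAccount:my-sa@my-project.iam.gserviceaccount.com"
}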

Google Cloud Storage write with REST API throws "billing not enabled" error for new bucket

I'm attempting to upload files to a GCS bucket from my server. This works perfectly fine for the App Engine bucket that Google App Engine created for the project, but if I create a new bucket and attempt to write to that bucket, I get the following:
{
  "error": {
    "code": 403,
    "message": "The account for bucket \"flow-292019-cdn\" has not enabled billing.",
    "errors": [
      {
        "message": "The account for bucket \"test-project-cdn\" has not enabled billing.",
        "domain": "global",
        "reason": "accountDisabled",
        "locationType": "header",
        "location": "Authorization"
      }
    ]
  }
}
All permissions are exactly the same in the configuration. Billing is definitely enabled for the project. I'm at a loss on this one.
I would recommend following the official documentation for your use case:
Troubleshooting
403: Account Disabled
Issue: I tried to create a bucket but got a 403 Account Disabled error.
Solution: This error indicates that you have not yet turned on billing for the associated project. For steps for enabling billing, see Enable billing for a project.
If billing is turned on and you continue to receive this error message, you can reach out to support with your project ID and a description of your problem.
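If the project itself happens to be managed with Terraform, the billing link the quoted docs refer to can be declared directly on the project resource; a minimal sketch (the project and billing account IDs are placeholders):

resource "google_project" "this" {
  name            = "my-project"
  project_id      = "my-project"
  # enabling billing for a project = linking it to a billing account
  billing_account = "000000-AAAAAA-BBBBBB"
  # org_id or folder_id would also be set when creating the project
}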

Upload to bucket with customer-managed encryption fails

Based on https://cloud.google.com/storage/docs/encryption/using-customer-managed-keys
Steps to reproduce:
1. Create a key ring and key in Cloud KMS in a specific location (us-central-1, for example).
2. Grant the Cloud KMS CryptoKey Encrypter/Decrypter role to the storage service account for the created key.
3. Create a new regional bucket in the same location (us-central-1) and set the created KMS key for encryption.
4. Try to upload a file to the bucket.
Result:
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "forbidden",
        "message": "We're sorry, but the Cloud KMS encryption feature is not available in your location; see https://cloud.google.com/storage/docs/encryption/customer-managed-keys#restrictions for more details."
      }
    ],
    "code": 403,
    "message": "We're sorry, but the Cloud KMS encryption feature is not available in your location; see https://cloud.google.com/storage/docs/encryption/customer-managed-keys#restrictions for more details."
  }
}
I'm quite sure it is a misconfiguration issue, but I couldn't figure out my mistake. The request does not come from a restricted country - https://cloud.google.com/compute/docs/disks/customer-supplied-encryption#general_restrictions
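For comparison, here is a Terraform sketch of the setup described in the steps above (bucket and key names are placeholders). One thing worth double-checking is the location string: GCP regions are spelled us-central1, not us-central-1, and the key ring location must match the bucket location.

# Storage service agent for the project
data "google_storage_project_service_account" "gcs" {}

resource "google_kms_key_ring" "ring" {
  name     = "gcs-ring"
  location = "us-central1"
}

resource "google_kms_crypto_key" "key" {
  name     = "gcs-key"
  key_ring = google_kms_key_ring.ring.id
}

# Step 2: Encrypter/Decrypter for the storage service account on the key
resource "google_kms_crypto_key_iam_member" "gcs_encrypter" {
  crypto_key_id = google_kms_crypto_key.key.id
  role          = "roles/cloudkms.cryptoKeyEncrypterDecrypter"
  member        = "serviceAccount:${data.google_storage_project_service_account.gcs.email_address}"
}

# Step 3: bucket in the same location, with the key as its default
resource "google_storage_bucket" "bucket" {
  name     = "my-cmek-bucket"
  location = "US-CENTRAL1"

  encryption {
    default_kms_key_name = google_kms_crypto_key.key.id
  }

  depends_on = [google_kms_crypto_key_iam_member.gcs_encrypter]
}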

How do you manage your assumed-role IAM policies to access secrets in AWS Secrets Manager?

I am starting to migrate our secrets from AWS Parameter Store to AWS Secrets Manager, and I am currently facing a problem that I don't know how to solve. Could anyone provide any insight?
We have one AWS account (let's call it the identity account) in which we manage all IAM users and groups. And we have another AWS account which hosts our infra (let's call this the infra account). We'd like to manage all users in the identity account and let each user assume a poweruser role in the infra account, so we can manage all users in one place.
In the infra account, we have RDS running, and we want to create DB users for our developers so they can log in to the database for debugging purposes. We also want to audit what they have done in case someone does something bad to our database, so we need to create one DB user per developer. All of those DB credentials are saved as AWS secrets with a naming convention like
/dev/rds/mysql/users/foo
/dev/rds/mysql/users/bar
So here is the question: how can I manage users' IAM policies to restrict their permissions so they can ONLY access their own secret? From this AWS doc, we CANNOT get aws:username when a user is accessing AWS through an assumed role, so the following policy would never work:
actions = [
  "secretsmanager:DescribeSecret",
  "secretsmanager:GetSecretValue",
  "secretsmanager:PutSecretValue",
  "secretsmanager:UpdateSecretVersionStage"
]
resources = [
  "arn:aws:secretsmanager:us-east-1:12345678912:secret:/dev/rds/mysql/users/${aws:username}"
]
The only IAM policy variable I can use for an assumed role is aws:userid, but its value looks like this (assuming user foo's username in the identity account is foo@emaildomain.com):
"AROAJGHLP6KERYI375PJY:foo@emaildomain.com"
It also looks like the role-id (AROAJGHLP6KERYI375PJY in this example) is random, with the prefix AROA, which means I CANNOT use the following policy either (and besides, having AROAJGHLP6KERYI375PJY:foo@emaildomain.com as a secret name in Secrets Manager is pretty ugly):
actions = [
  "secretsmanager:DescribeSecret",
  "secretsmanager:GetSecretValue",
  "secretsmanager:PutSecretValue",
  "secretsmanager:UpdateSecretVersionStage"
]
resources = [
  "arn:aws:secretsmanager:us-east-1:12345678912:secret:/dev/rds/mysql/users/${aws:userid}"
]
Currently, my policy has ended up like this:
actions = [
  "secretsmanager:DescribeSecret",
  "secretsmanager:GetSecretValue",
  "secretsmanager:PutSecretValue",
  "secretsmanager:UpdateSecretVersionStage"
]
resources = [
  "arn:aws:secretsmanager:us-east-1:12345678912:secret:/dev/rds/mysql/users/*"
]
which means that as long as a user has assumed the role into the infra account, they have access to the other developers' DB credentials as well.
I've looked into CloudWatch metric filters to see if I can set up a filter that catches user foo calling the GetSecretValue API to get user bar's credentials, but CloudWatch filters don't support using a REGEX to extract a value from the JSON. Here's an example of a GetSecretValue event from the CloudTrail log:
{
  "eventVersion": "1.05",
  "userIdentity": {
    "type": "AssumedRole",
    "principalId": "AROAJGHLP6KERYI375PJY:foo@emaildomain.com",
    "arn": "arn:aws:sts::12345678912:assumed-role/poweruser/foo@emaildomain.com",
    "accountId": "12345678912",
    "accessKeyId": "ASIATA5XIF7AFC2CQ7NO",
    "sessionContext": {
      "attributes": {
        "mfaAuthenticated": "true",
        "creationDate": "2018-07-11T21:31:20Z"
      },
      "sessionIssuer": {
        "type": "Role",
        "principalId": "AROAJGHLP6KERYI375PJY",
        "arn": "arn:aws:iam::12345678912:role/poweruser",
        "accountId": "12345678912",
        "userName": "poweruser"
      }
    }
  },
  "eventTime": "2018-07-11T21:32:56Z",
  "eventSource": "secretsmanager.amazonaws.com",
  "eventName": "GetSecretValue",
  "awsRegion": "us-east-2",
  "sourceIPAddress": "1.2.3.4",
  "userAgent": "aws-internal/3",
  "requestParameters": {
    "secretId": "/dev/rds/mysql/users/foo"
  },
  "responseElements": null,
  "requestID": "f98ad2c2-8551-11e8-8a3f-751b0a8a6ca5",
  "eventID": "73b8de89-bc8c-41a3-a172-58dd8d79a026",
  "eventType": "AwsApiCall",
  "recipientAccountId": "12345678912"
}
If I could extract foo@emaildomain.com from { $.userIdentity.principalId } and extract foo from { $.requestParameters.secretId }, then I could try some magic to compare the two and trigger an alert when a user tries to get other people's credentials, but I can't.
So, in this case, how can I manage my policies to lock down users' permissions?
From your first example: for each user, can you try hardcoding the secret in their user IAM policy?
So for user foo, the user policy would look something like:
...
actions = [
  "secretsmanager:DescribeSecret",
  "secretsmanager:GetSecretValue",
  "secretsmanager:PutSecretValue",
  "secretsmanager:UpdateSecretVersionStage"
]
resources = [
  "arn:aws:secretsmanager:us-east-1:12345678912:secret:/dev/rds/mysql/users/foo"
]
...
You would have the same structure for bar, and so on. If you have many users, this solution wouldn't scale by hand (though the policies could be generated, as sketched below).
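A sketch of generating the per-user policies with for_each (the user list, account ID, and naming convention are assumptions; note that Secrets Manager appends a random suffix to secret ARNs, hence the trailing wildcard):

variable "db_users" {
  type    = list(string)
  default = ["foo", "bar"]
}

data "aws_iam_policy_document" "per_user" {
  for_each = toset(var.db_users)

  statement {
    actions = [
      "secretsmanager:DescribeSecret",
      "secretsmanager:GetSecretValue",
      "secretsmanager:PutSecretValue",
      "secretsmanager:UpdateSecretVersionStage",
    ]
    # "-*" matches the random suffix Secrets Manager adds to the ARN
    resources = [
      "arn:aws:secretsmanager:us-east-1:12345678912:secret:/dev/rds/mysql/users/${each.key}-*",
    ]
  }
}

resource "aws_iam_user_policy" "per_user" {
  for_each = toset(var.db_users)

  name   = "db-secret-${each.key}"
  user   = each.key # assumes IAM user names match the DB user names
  policy = data.aws_iam_policy_document.per_user[each.key].json
}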
The problem seems to be that you are using a common assumed role for all the users. An alternative would be to create a resource policy on each secret that grants access to the owning user in the identity account. This would let users access their secret directly, without calling AssumeRole. It would not prevent them from still assuming the infra account's poweruser role and accessing the secrets that way, so you would either have to drop the Secrets Manager privileges from that role, or explicitly deny the infra poweruser in the resource policy you add to each secret.
Setting up cross-account access like this also means you cannot use the default KMS encryption key. You will need to set up a custom KMS key that grants the correct access permissions to the identity account, and re-encrypt the secrets with that new key.
Since resource policies and custom KMS keys cannot be set up in the Secrets Manager console, this all requires using the CLI or one of the SDKs.
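A sketch of such a resource policy in Terraform (account IDs, names, and the secret itself are placeholders; per the answer above, the Deny statement keeps the shared infra role out):

resource "aws_secretsmanager_secret" "foo" {
  name = "/dev/rds/mysql/users/foo"
}

data "aws_iam_policy_document" "foo_secret" {
  # Allow user foo from the identity account to read this secret directly
  statement {
    effect = "Allow"
    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::111111111111:user/foo@emaildomain.com"]
    }
    actions   = ["secretsmanager:GetSecretValue", "secretsmanager:DescribeSecret"]
    resources = ["*"] # in a secret's resource policy, "*" means this secret
  }

  # Explicitly deny the shared poweruser role in the infra account
  statement {
    effect = "Deny"
    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::12345678912:role/poweruser"]
    }
    actions   = ["secretsmanager:GetSecretValue"]
    resources = ["*"]
  }
}

resource "aws_secretsmanager_secret_policy" "foo" {
  secret_arn = aws_secretsmanager_secret.foo.arn
  policy     = data.aws_iam_policy_document.foo_secret.json
}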