I am trying to create a storage account via Postman. I created a service principal via the Azure Portal and got an access token with the parameters below:
https://login.microsoftonline.com/mytenant_id/oauth2/v2.0/token
client_id='client_id'
&client_secret='client_secret'
&grant_type=client_credentials
&resource=https://management.azure.com
I tried to create the storage account using the generated access token with the request below:
PUT
https://management.azure.com/subscriptions/subscriptionid/resourceGroups/resourcegroupname/providers/Microsoft.Storage/storageAccounts/storageaccountname?api-version=2018-02-01
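For reference, a minimal Python sketch of the same flow (all ids and names are placeholders; note that the v2.0 token endpoint expects a scope parameter such as https://management.azure.com/.default rather than resource):

```python
# Sketch only: reproduce the token request and the storage account PUT.
# Tenant, client, subscription, resource group and account names are placeholders.
import requests

tenant_id = "<tenant_id>"
token_resp = requests.post(
    f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token",
    data={
        "client_id": "<client_id>",
        "client_secret": "<client_secret>",
        "grant_type": "client_credentials",
        # v2.0 uses 'scope' instead of the v1.0 'resource' parameter
        "scope": "https://management.azure.com/.default",
    },
)
access_token = token_resp.json()["access_token"]

url = (
    "https://management.azure.com/subscriptions/<subscription_id>"
    "/resourceGroups/<resource_group>/providers/Microsoft.Storage"
    "/storageAccounts/<storage_account>?api-version=2018-02-01"
)
resp = requests.put(
    url,
    headers={"Authorization": f"Bearer {access_token}"},
    # minimal body needed to create a storage account
    json={"location": "eastus", "sku": {"name": "Standard_LRS"}, "kind": "StorageV2"},
)
print(resp.status_code, resp.text)
```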
But I got the following error:
{
  "error": {
    "code": "AuthorizationFailed",
    "message": "The client 'XXXXXXXXXXXXXXXXXX' with object id 'XXX' does not have authorization to perform action 'Microsoft.Storage/storageAccounts/read' over scope '/subscriptions/XXXXXXXXXXXXXXXXXX/resourceGroups/resource/providers/Microsoft.Storage/storageAccounts/account' or the scope is invalid. If access was recently granted, please refresh your credentials."
  }
}
I am the Global Admin and have Owner access at the subscription level.
Could anyone suggest what else is needed?
To resolve the error, try assigning the Storage Account Contributor role to the service principal at the subscription level.
I tried to reproduce the same in my environment and got the same error when the service principal didn't have the required permissions.
After granting the permissions, I was able to create the storage account successfully.
To confirm the above, I verified the new storage account in the Portal.
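If you prefer to script the role assignment instead of using the Portal, a rough sketch via the ARM REST API looks like this (the role definition GUID for Storage Account Contributor has to be looked up in the built-in roles documentation, and the principal id is the service principal's object id; both are placeholders here):

```python
# Sketch: assign a role to the service principal at subscription scope.
# All ids below are placeholders; the caller needs permission to write
# role assignments (e.g. Owner or User Access Administrator).
import uuid
import requests

subscription_id = "<subscription_id>"
scope = f"/subscriptions/{subscription_id}"
role_definition_id = (
    f"{scope}/providers/Microsoft.Authorization/roleDefinitions/"
    "<storage-account-contributor-role-definition-guid>"
)
assignment_name = str(uuid.uuid4())  # role assignment names are new GUIDs

resp = requests.put(
    f"https://management.azure.com{scope}"
    f"/providers/Microsoft.Authorization/roleAssignments/{assignment_name}"
    "?api-version=2022-04-01",
    headers={"Authorization": "Bearer <access_token>"},
    json={
        "properties": {
            "roleDefinitionId": role_definition_id,
            "principalId": "<service_principal_object_id>",
        }
    },
)
print(resp.status_code, resp.text)
```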
Reference:
How to create Azure Storage Account with REST API using Postman – A Turning Point (raaviblog.com)
I have a Cloud Function that accesses a storage bucket, and I have assigned the function a service account with the following roles: Cloud Functions Developer and Storage Admin.
When I try to run the function using this service account with these roles, it works fine.
But when I try to fine-grain the access using IAM conditions on the Storage Admin role, I get a "myserviceaccount.iam.gserviceaccount.com does not have storage.objects.get access to the Google Cloud Storage object" error.
The IAM condition I am using on the Storage Admin role for the service account is:
"resource.name.endsWith("us-west1-test") ||
resource.name.endsWith("europe-west2-test")"
From my understanding this should work because the storage bucket name ends in "us-west1-test", so I'm not sure why it's giving me this 403 Forbidden error. P.S. I added the "europe-west2-test" condition just in case the resource being used was the europe-west2 function, but I have tried without it and it gives the same error.
Summary of resources and names below:
Name of function - func-europe-west2-test
Name of storage bucket - buc-us-west1-test
Roles assigned to service account - Cloud Functions Developer & Storage Admin
Appreciate any help or suggestions.
I have two AWS accounts, one personal and one client account.
Personal account:
account id: 789XXXXXX
Client account:
account id: 123XXXXXX
I'm working in the client account and tried to run my Lambda function locally. When I do, I get the following error: AccessDeniedException: User: arn:aws:iam::789XXXXXX:user/amplify-pUDkX is not authorized to perform: secretsmanager:GetSecretValue on resource: postgres-secret because no identity-based policy allows the secretsmanager:GetSecretValue action.
I was a bit confused as this function had been working previously. Once I looked into the error message, I noticed that the user amplify-pUDkX didn't even exist in the client account and that the AWS account id along with the user actually matched up to my personal account. I've already run amplify configure and it's connected to the client account, and I've also been making updates to the resources in the client account through the Amplify CLI, so I know I'm not signed into the wrong account.
Also, just to note: when the function is deployed it works with no problem, so this is only happening on my local machine.
I'd appreciate any help, thanks.
This is due to saved AWS credentials in C:\Users\username\.aws. You can remove the ones that are not required. Also, while setting up the app using the CLI, you get the option to choose a profile to avoid this issue.
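If you want to keep both sets of credentials in C:\Users\username\.aws, pinning the client account's named profile avoids picking up the personal account by accident. A rough sketch of what that looks like from Python (the profile name, region and secret name are placeholders):

```python
# Sketch: force boto3 to use the client account's named profile from
# ~/.aws/credentials instead of whatever the default profile points at.
import boto3

session = boto3.Session(profile_name="client")  # placeholder profile name
secrets = session.client("secretsmanager", region_name="us-east-1")  # placeholder region
value = secrets.get_secret_value(SecretId="postgres-secret")
print(value["SecretString"])
```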
I uploaded a model with
gcloud beta ai models upload --artifact-uri
And in the Docker container I access AIP_STORAGE_URI.
I see that AIP_STORAGE_URI is another Cloud Storage location, so I try to download the files using storage.Client(), but it says that I don't have access:
google.api_core.exceptions.Forbidden: 403 GET https://storage.googleapis.com/storage/v1/b/caip-tenant-***-***-*-*-***?projection=noAcl&prettyPrint=false: custom-online-prediction@**.iam.gserviceaccount.com does not have storage.buckets.get access to the Google Cloud Storage bucket
I am running this endpoint with the default service account.
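A minimal sketch of the kind of download code involved here (assuming the bucket is opened with get_bucket(), which is what issues the failing storage.buckets.get request; the URI parsing is simplified):

```python
# Sketch: read model artifacts from AIP_STORAGE_URI inside the serving container.
# Assumes the variable has the form gs://<bucket>/<prefix>.
import os
from google.cloud import storage

uri = os.environ["AIP_STORAGE_URI"]
bucket_name, _, prefix = uri[len("gs://"):].partition("/")

client = storage.Client()
bucket = client.get_bucket(bucket_name)  # issues storage.buckets.get -> 403 here
for blob in client.list_blobs(bucket_name, prefix=prefix):
    blob.download_to_filename(os.path.basename(blob.name))
```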
https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#artifacts
According to the above link:
The service account that your container uses by default has permission to read from this URI.
What am I doing wrong?
The reason behind the error is that the default service account Vertex AI uses has the "Storage Object Viewer" role, which does not include the storage.buckets.get permission. At the same time, the storage.Client() part of the code makes a storage.buckets.get request against the Vertex AI managed bucket, which the default service account does not have permission to do.
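To illustrate that distinction (a sketch, separate from the suggested steps below): get_bucket() performs a buckets.get call, while bucket() only builds a local reference, and reading objects afterwards needs only object-level permissions.

```python
from google.cloud import storage

client = storage.Client()

# Needs storage.buckets.get on the bucket (denied on the Vertex AI managed bucket):
bucket = client.get_bucket("some-bucket")

# Builds a reference without any API call; the download below only needs
# storage.objects.get, which the default service account does have:
bucket_ref = client.bucket("some-bucket")
bucket_ref.blob("path/to/artifact").download_to_filename("artifact")
```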
To resolve the issue, I would suggest the following steps:
Make changes in the custom code to access the bucket with the model artifacts in your project instead of using the environment variable AIP_STORAGE_URI which points to the model location in the Vertex AI managed bucket.
Create your own service account and grant it all the permissions needed by the custom code. For this specific error, a role that includes the storage.buckets.get permission, e.g. Storage Admin ("roles/storage.admin"), has to be granted to the service account (see the sketch after these steps).
Provide the newly created service account in the "Service Account" field when deploying the model.
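As a rough sketch of the second step, assuming the artifacts are copied to a bucket in your own project as per the first step, granting the role on that bucket could look like this (bucket, project and service account names are placeholders):

```python
# Sketch: grant Storage Admin on your own artifact bucket to the custom
# service account that the model will be deployed with.
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-model-artifacts-bucket")  # placeholder

policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append(
    {
        "role": "roles/storage.admin",
        "members": {"serviceAccount:my-custom-sa@my-project.iam.gserviceaccount.com"},
    }
)
bucket.set_iam_policy(policy)
```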
I am trying to apply the role binding below to grant the Storage Admin Role to a GCP roleset in Vault.
resource "//cloudresourcemanager.googleapis.com/projects/{project_id_number}" {
roles = [
"roles/storage.admin"
]
}
I want to grant access at the project level, not to a specific bucket, so that the GCP roleset can access and read/write to the Google Container Registry.
When I try to create this roleset in Vault, I get this error:
Error writing data to gcp/roleset/my-roleset: Error making API request.
URL: PUT http://127.0.0.1:8200/v1/gcp/roleset/my-roleset
Code: 400. Errors:
* unable to set policy: googleapi: Error 403: The caller does not have permission
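For reference, the write that produces this error is roughly equivalent to the following request against the Vault HTTP API (a sketch; the token, project id, secret_type and scopes are placeholders, with the binding above passed as the bindings string):

```python
# Sketch: create the GCP roleset through Vault's HTTP API, passing the
# HCL binding above as the `bindings` string. Token and project id are placeholders.
import requests

bindings = """
resource "//cloudresourcemanager.googleapis.com/projects/{project_id_number}" {
  roles = ["roles/storage.admin"]
}
"""

resp = requests.put(
    "http://127.0.0.1:8200/v1/gcp/roleset/my-roleset",
    headers={"X-Vault-Token": "<vault_token>"},
    json={
        "project": "<project_id>",
        "secret_type": "access_token",
        "token_scopes": ["https://www.googleapis.com/auth/cloud-platform"],
        "bindings": bindings,
    },
)
print(resp.status_code, resp.json())
```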
My Vault cluster is running in a GKE cluster which has OAuth scopes for all Cloud APIs, I am the project owner, and the service account Vault is using has the following roles:
Cloud KMS CryptoKey Encrypter/Decrypter
Service Account Actor
Service Account Admin
Service Account Key Admin
Service Account Token Creator
Logs Writer
Storage Admin
Storage Object Admin
I have tried giving the service account both Editor and Owner roles, and I still get the same error.
Firstly, am I using the correct resource to create a roleset for the Storage Admin Role at the project level?
Secondly, if so, what could be causing this permission error?
I had previously recreated the cluster and skipped this step:
vault write gcp/config credentials=@credentials.json
Adding the key file fixed this.
There is also a chance that following the steps to create a custom role here and adding that custom role played a part.
I am trying to create resources using Terraform in a new GCP project. As part of that I want to assign roles/storage.legacyBucketWriter on a specific bucket to the Google-managed service account which runs Storage Transfer Service jobs (the pattern is project-[project-number]@storage-transfer-service.iam.gserviceaccount.com). I am using the following config:
resource "google_storage_bucket_iam_binding" "publisher_bucket_binding" {
bucket = "${google_storage_bucket.bucket.name}"
members = ["serviceAccount:project-${var.project_number}#storage-transfer-service.iam.gserviceaccount.com"]
role = "roles/storage.legacyBucketWriter"
}
To clarify, I want to do this so that when I create one-off transfer jobs using the JSON API, they don't fail prerequisite checks.
When I run Terraform apply, I get the following:
Error applying IAM policy for Storage Bucket "bucket":
Error setting IAM policy for Storage Bucket "bucket": googleapi:
Error 400: Invalid argument, invalid
I think this is because the service account in question does not exist yet, as I cannot do this via the console either.
Is there any other service that I need to enable for the service account to be created?
It seems I am able to create/find the service account once I call this:
https://cloud.google.com/storage/transfer/reference/rest/v1/googleServiceAccounts/get
for my project to get the email address.
Not sure if this is the best way, but it works.
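For reference, a rough sketch of making that call from Python (the project id is a placeholder; once it returns, the project-[project-number]@storage-transfer-service.iam.gserviceaccount.com account exists and the Terraform binding applies cleanly):

```python
# Sketch: googleServiceAccounts.get returns (and provisions) the per-project
# Storage Transfer Service account. Requires the Storage Transfer API to be
# enabled and application default credentials.
from googleapiclient import discovery

storagetransfer = discovery.build("storagetransfer", "v1")
account = storagetransfer.googleServiceAccounts().get(projectId="my-project-id").execute()
print(account["accountEmail"])
```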
Soroosh's reply is accurate. Querying the API as per this doc: https://cloud.google.com/storage-transfer/docs/reference/rest/v1/googleServiceAccounts/ will enable the service account and Terraform will run, but now you have to wire that API call into Terraform for it to work, and ain't nobody got time for that.