Image uploading to GCS works properly when I run the project locally. After deploying the Django project on a VM instance behind a load balancer, with SSL and a domain name set up, uploads fail with an access-denied error.
Forbidden at /admin/products/banner/add/
403 POST https://storage.googleapis.com/upload/storage/v1/b/new_zourie_app/o?uploadType=multipart: {
"error": {
"code": 403,
"message": "Access denied.",
"errors": [
{
"message": "Access denied.",
"domain": "global",
"reason": "forbidden"
}
]
}
}
: ('Request failed with status code', 403, 'Expected one of', <HTTPStatus.OK: 200>)
I have added multiple permissions (allUsers, Storage Admin, Storage Object Creator, Storage Object Admin) to that bucket as described in the Google Cloud Platform docs. I also updated the bucket and the bucket keys both locally and on the VM instance. As a result, it works locally but not in production on GCP.
This is because, by default, the VM instance's service account has a read-only access scope for Cloud Storage. You have to change the scope to read/write with the steps below.
Stop the VM where your Django code is deployed.
Open the VM instances page and click "Edit" on the VM.
Go to the service account section and set the access scope for Storage to "Read Write". This will solve your issue.
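If you prefer the command line, roughly the same change can be made with gcloud. This is a sketch with a placeholder instance name and zone; if your VM uses a non-default service account, pass --service-account as well:
# stop the instance (access scopes can only be changed while it is stopped)
gcloud compute instances stop my-django-vm --zone=us-central1-a
# grant the read-write Cloud Storage scope (storage-rw is an alias for devstorage.read_write)
gcloud compute instances set-service-account my-django-vm --zone=us-central1-a --scopes=storage-rw
# start the instance again
gcloud compute instances start my-django-vm --zone=us-central1-a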
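To verify the change from inside the VM, you can query the Compute Engine metadata server for the scopes the instance actually has:
# prints the OAuth scopes granted to the VM's default service account
curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes"
# a working setup should list https://www.googleapis.com/auth/devstorage.read_write (or a broader scope)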
Related
So I'm pulling my hair out over this and reaching out here for help. I'm trying to set up a service account with Cloud Translation and Text-to-Speech enabled, but we keep getting this response:
[error] {
"message": "Cloud IAM permission 'cloudtranslate.generalModels.predict' denied. ",
"code": 7,
"status": "PERMISSION_DENIED",
"details": []
}
I have confirmed that the service account has the "cloudtranslate.generalModels.predict" permission and shows the "Cloud Translation API User" role. We've also confirmed that it works with a different service account that my colleague set up in his personal Google console profile. But we need this set up with an account through our org.
I did verify with the IAM Policy Troubleshooter that the service account has the permission, and my organization's admin sees that the service account is granted access through ancestor policies.
So what else can we check?
Edit: OK, it turned out we had a hard-coded value for the resource location, which was set to the wrong project. So of course it was coming back as permission denied.
OK, it turned out we had a hard-coded value for the resource location, which was set to the wrong project. So of course it was coming back as permission denied.
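For anyone hitting the same thing: a quick way to check which project your requests are actually hitting is to call the v3 translateText endpoint with the project spelled out explicitly in the parent path. A rough sketch, where YOUR_PROJECT_ID is a placeholder and must match the project where the service account holds the Cloud Translation API User role:
# token for the currently active (service) account
export TOKEN="$(gcloud auth print-access-token)"
# the parent path pins the request to a specific project and location
curl -X POST -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" "https://translation.googleapis.com/v3/projects/YOUR_PROJECT_ID/locations/global:translateText" --data '{"contents": ["Hello world"], "targetLanguageCode": "es"}'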
I'm trying to use a 3rd party app to retrieve files and data from Google Drive but am running into an error:
{
"code": 404,
"errors": [
{
"domain": "global",
"location": "fileId",
"locationType": "parameter",
"message": "File not found: .",
"reason": "notFound"
}
],
"message": "File not found: ."
}
I created a service account and enabled the Google Drive API and gave that service account an Owner role, but can't figure out if there's anything else I need to do to grant it full access to Drive. Do I need to add any scopes?
"File not found
Means that the user you have authencated with does not have access to the file.
In the case of a service account, for it to access a file, one of two things must have happened:
If it's a Google Workspace domain account, domain-wide delegation must be configured, and the service account should then impersonate a user on the domain who has access to the file.
The file must be shared with the service account. This can be done by taking the service account's email address, going to the Google Drive web application, and sharing the file with the service account as you would with any other user.
Service accounts are like dummy users; they have their own Drive account. However, for a service account to access a file, it must be preauthorized. In other words, you must configure permission before you use it.
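If you're unsure which email address to share the file with, you can list the service accounts in your project with gcloud (project ID is a placeholder); it is the same value as the client_email field in the downloaded JSON key:
# shows each service account's email, which is what you share the Drive file with
gcloud iam service-accounts list --project=YOUR_PROJECT_ID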
I am trying to create a storage account via Postman. I created a service principal via the Azure Portal and got an access token with the below parameters:
https://login.microsoftonline.com/mytenant_id/oauth2/v2.0/token
client_id='client_id'
&client_secret='client_secret'
&grant_type=client_credentials
&resource=https://management.azure.com
I tried to create the storage account using the generated access token with the below request:
PUT
https://management.azure.com/subscriptions/subscriptionid/resourceGroups/resourcegroupname/providers/Microsoft.Storage/storageAccounts/storageaccountname?api-version=2018-02-01
But I got the error below:
{
"error": {
"code": "AuthorizationFailed",
"message": "The client 'XXXXXXXXXXXXXXXXXX' with object id 'XXX does not have authorization to perform action 'Microsoft.Storage/storageAccounts/read' over scope '/subscriptions/XXXXXXXXXXXXXXXXXX/resourceGroups/resource/providers/Microsoft.Storage/storageAccounts/account' or the scope is invalid. If access was recently granted, please refresh your credentials."
}
}
I am the Global Admin and have Owner access at the subscription level.
Could anyone suggest what else is needed?
To resolve the error, try assigning the Storage Account Contributor role to the service principal at the subscription level.
I tried to reproduce the same in my environment and got the same error when the service principal didn't have the required permissions.
After granting the role, I was able to create the storage account successfully, and I confirmed it by checking the new storage account in the Portal.
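If you'd rather do this from the CLI than the Portal, an assignment along these lines should work (the app ID and subscription ID below are placeholders):
# grant the service principal Storage Account Contributor on the whole subscription
az role assignment create --assignee "<service-principal-app-id>" --role "Storage Account Contributor" --scope "/subscriptions/<subscription-id>"
# confirm the assignment is in place
az role assignment list --assignee "<service-principal-app-id>" --scope "/subscriptions/<subscription-id>" --output table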
Reference:
How to create Azure Storage Account with REST API using Postman – A Turning Point (raaviblog.com)
The Google Cloud IAM principal has the Owner, Reader, and Writer roles on the project.
For Google Cloud Storage, it has the Storage Object Admin role.
The Google Cloud Storage bucket address and credentials are also consistent.
But I get the error below. What should I do?
{ domain: 'global',
reason: 'insufficientPermissions',
message: 'Insufficient Permission' } ],
response: undefined,
message: 'Insufficient Permission' }
You can find instructions on how to upload objects to GCS in the Cloud Storage documentation ("Uploading objects"). Also make sure that you follow the correct authentication process.
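As a rough sketch of the flow (key file, bucket name, and file names are placeholders): authenticate as the service account your code uses, then upload with gsutil. If this works but your application still fails, the problem is in how the application picks up its credentials.
# authenticate as the service account whose key your application uses
gcloud auth activate-service-account --key-file=service-account-key.json
# upload a test object to the bucket
gsutil cp ./test-image.png gs://your-bucket-name/uploads/test-image.png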
So I am able to make a valid request to the Video Intelligence API with the sample video given in the quickstart: https://cloud.google.com/video-intelligence/docs/getting-started. I have tried many different ways of authenticating to the API as well. The API token I am using was created from the Credentials page in the console. There are no options to tie it to the Video API, so I figured it should automatically work. The API has been enabled on my account.
export TOKEN="foobar"
curl -XPOST -s -k -H"Content-Type: application/json" "https://videointelligence.googleapis.com/v1beta1/videos:annotate?key=$TOKEN" --data '{"inputUri": "gs://custom-bucket/IMG_3591.mov", "features": ["LABEL_DETECTION"]}'
{
"error": {
"code": 403,
"message": "The caller does not have permission",
"status": "PERMISSION_DENIED"
}
}
curl -XPOST -s -k -H"Content-Type: application/json" "https://videointelligence.googleapis.com/v1beta1/videos:annotate?key=$TOKEN" --data '{"inputUri": "gs://cloud-ml-sandbox/video/chicago.mp4", "features": ["LABEL_DETECTION"]}'
{
"name": "us-east1.18013173402060296928"
}
Update:
I set the file as public and it worked. But I need to keep it private, so I gave the service account access to the file and tried to get the API key as suggested.
export TOKEN="$(gcloud auth print-access-token)"
curl -XPOST -s -k -H"Content-Type: application/json" "https://videointelligence.googleapis.com/v1beta1/videos:annotate?key=$TOKEN" --data '{"inputUri": "gs://custom-bucket/IMG_3591.mov", "features":["LABEL_DETECTION"]}'
{
"error": {
"code": 400,
"message": "API key not valid. Please pass a valid API key.",
"status": "INVALID_ARGUMENT",
"details": [
{
"#type": "type.googleapis.com/google.rpc.Help",
"links": [
{
"description": "Google developers console",
"url": "https://console.developers.google.com"
}
]
}
]
}
}
It seems like the token returned by print-access-token does not work here. I do have an API key, but it does not have access to the bucket, and I don't see a way to grant an API key access.
Update 2:
So it looks like we were setting our token incorrectly. We were following this example, https://cloud.google.com/video-intelligence/docs/analyze-labels#videointelligence-label-file-protocol, which is where we got the key=$TOKEN query parameter from. But it looks like we needed to set the Authorization: Bearer header instead. We did try this at first, but we were hitting the first issue of not having access to the bucket. So thank you.
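For completeness, the call that eventually worked looked roughly like this: the OAuth access token goes in the Authorization header rather than as an API key in the query string (bucket and file are the same ones from the example above):
export TOKEN="$(gcloud auth print-access-token)"
# same annotate request, but authenticated with a Bearer token instead of ?key=
curl -X POST -s -H "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" "https://videointelligence.googleapis.com/v1beta1/videos:annotate" --data '{"inputUri": "gs://custom-bucket/IMG_3591.mov", "features": ["LABEL_DETECTION"]}'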
TL;DR - The Video Intelligence service is unable to access the file in your Cloud Storage bucket because of a lack of permissions. Since the API uses the permissions of the service account whose token is being passed, you will need to grant your service account permission to read the file in the GCS bucket, or grant it access to the entire bucket.
Long version
The access token you pass should correspond to an IAM service account key. The service account will belong to a project (where you need to enable Video Intelligence API access), and it should have permission to read the GCS object you're pointing the API at.
Each such service account has an associated email address of the form SERVICE_ACCOUNT_NAME@PROJECT_NAME.iam.gserviceaccount.com.
In the Cloud Console, you can go to the Cloud Storage bucket/file and grant Reader permissions to the IAM service account's email address. There is no need to make the bucket public.
If you use gsutil, you can run the following equivalent command:
gsutil acl ch -u SERVICE_ACCOUNT_NAME@PROJECT_NAME.iam.gserviceaccount.com:READ gs://custom-bucket/IMG_3591.mov
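Alternatively, if you'd rather grant read access to the whole bucket instead of a single object, a bucket-level IAM binding along these lines should also work (the service account email is a placeholder):
# grants the service account read access to every object in the bucket
gsutil iam ch serviceAccount:SERVICE_ACCOUNT_NAME@PROJECT_NAME.iam.gserviceaccount.com:roles/storage.objectViewer gs://custom-bucket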
I confirmed this myself with an IAM service account that I created in my project and used to invoke the Video Intelligence API. The file was not made public; Reader permission was granted only to the service account.
I used gcloud to activate the service account and fetch the access token, although you can do this manually as well using the Google OAuth APIs:
gcloud auth activate-service-account --key-file=SERVICE_ACCOUNT_KEY.json
export TOKEN="$(gcloud auth print-access-token)"
The steps for creating the IAM service account using gcloud are on the same page.
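If it helps, creating the service account and its key with gcloud looks roughly like this (the account name and project are placeholders):
# create the service account in your project
gcloud iam service-accounts create video-intel-caller --display-name="Video Intelligence caller" --project=PROJECT_NAME
# download a JSON key for it, to use with gcloud auth activate-service-account
gcloud iam service-accounts keys create SERVICE_ACCOUNT_KEY.json --iam-account=video-intel-caller@PROJECT_NAME.iam.gserviceaccount.com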
I can repro this issue. I believe the problem is that you don't have the proper permissions set up for your video file in your GCS bucket. To test this hypothesis, try sharing it publicly (the checkbox next to the blob in Google Storage) and then run the request again.