Required 'compute.regions.get' permission for 'projects/$project_id/regions/us-central1' - google-cloud-platform

I'm pretty new to Google Cloud, and I'm trying to use the gcloud command line, and I ran into the following problem:
Error: Forbidden access to resources.
Raw response:
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "forbidden",
        "message": "Required 'compute.regions.get' permission for 'projects/$project_id/regions/us-central1'"
      }
    ],
    "code": 403,
    "message": "Required 'compute.regions.get' permission for 'projects/$project_id/regions/us-central1'"
  }
}
Can someone help?
Much appreciated

To troubleshoot your issue, please try the following:
Where are you running the command: Cloud Shell or a local environment?
If it is a local environment, try Cloud Shell instead.
Check that you are using the latest version of the gcloud SDK (262 at the time of writing).
Did you properly initialize gcloud (gcloud init)?
Can you confirm that you have an appropriate role to run the command, such as Editor or Owner?
Check that you are using the same location for all of your products.
If the steps above don't work, can you share your complete gcloud command to give more context? (A few commands covering these checks are sketched below.)
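For example, something along these lines; MY_PROJECT is a placeholder for your project ID:
gcloud version                             # confirm which SDK version is installed
gcloud auth list                           # confirm which account is currently active
gcloud config list                         # confirm the default project, region and zone
gcloud projects get-iam-policy MY_PROJECT  # confirm which roles that account holds on the project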

Oh, I see where the problem is! When I created the storage bucket, I set the region to "Asia". When I configured gcloud via gcloud init, I set it to "us-central1-a". In this context, "permission denied" means I have no permission to access a resource in another region, which is misleading while you are still working out how cloud resources are scoped. However, Pawel's answer is more comprehensive, and it is a very good start to lead you in the correct direction.
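If you want to verify the same kind of mismatch yourself, comparing the bucket's location with the region gcloud was initialized with is quick; a small sketch, with the bucket name as a placeholder:
gsutil ls -L -b gs://my-bucket/         # the "Location constraint" line shows where the bucket lives
gcloud config get-value compute/region  # the default region configured during gcloud init
gcloud config get-value compute/zone    # the default zone (e.g. us-central1-a)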

Related

Unsure how to configure credentials for AWS Amplify CLI user - ready to ditch Amplify

I have a React Amplify app. All I want to do is work on it, push the changes to Amplify, etc., using standard, basic commands like amplify push.
The problem is that shortly after starting to work on my app (a month or two in), I was no longer allowed to push, pull, or work on the app from the command line. There is no explanation, and the only error is this:
An error occurred during the push operation: /
Access Denied
✅ Report saved: /var/folders/8j/db7_b0d90tq8hgpfcxrdlr400000gq/T/storygraf/report-1658279884644.zip
✔ Done
The logs created from the error show this.
error.json
{
  "message": "Access Denied",
  "code": "AccessDenied",
  "region": null,
  "time": "2022-07-20T01:20:01.876Z",
  "requestId": "DRFVQWYWJAHWZ8JR",
  "extendedRequestId": "hFfxnwUjbtG/yBPYG+GW3B+XfzgNiI7KBqZ1vLLwDqs/D9Qo+YfIc9dVOxqpMo8NKDtHlw3Uglk=",
  "statusCode": 403,
  "retryable": false,
  "retryDelay": 60.622127086356855
}
I have two users in my .aws/credentials file. One is the default (which is my work account). The other is called "personal". I have tried to push with
amplify push
amplify push --profile default
amplify push --profile personal
It always results in the same error.
I followed the procedure located here under the title "Create environment variables to assume the IAM role and verify access" and entered a new AWS_ACCESS_KEY_ID and a new AWS_SECRET_ACCESS_KEY. When I then run the command ...
aws sts get-caller-identity
It returns the correct ARN. However, there is an AWS_SESSION_TOKEN variable that the docs say needs to be set, and I have no idea what that is.
Running amplify push under this new profile still results in an error.
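For context, the AWS_SESSION_TOKEN the docs refer to is the temporary token that STS returns alongside the access key pair when you assume a role; roughly like this, with a placeholder role ARN and session name:
aws sts assume-role \
    --role-arn arn:aws:iam::123456789012:role/my-amplify-role \
    --role-session-name amplify-session
# the response JSON contains Credentials.AccessKeyId, .SecretAccessKey and .SessionToken;
# export those as AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN before retrying amplify push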
I have also tried
AWS_PROFILE=personal aws sts get-caller-identity
Again, this returns the correct identity, but amplify push still fails for the same reason.
At this point, I'm ready to drop it and move to something else. I've been debugging this for literally months now, and it would be far easier to set up a standard React app on S3 and stand up my resources manually without dealing with this.
Any help is appreciated.
This is the same issue for me. There seems to be no way to reconfigure the CLI once its authentication method is set to a profile. I'm trying to change it back to Amplify Studio and have not been able to crack the code on updating it. The documentation in this area is awful.
In the amplify folder there is a .config directory. There are three files:
local-aws-info.json
local-env-info.json
project-config.json
project-config.json is required, but the local-* files maintain the state of your local configuration. Delete the local-* files and you can re-init the project and re-authenticate the Amplify CLI for the environment.
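For example, roughly (run from the project root; exact prompts will vary with your setup):
rm amplify/.config/local-aws-info.json amplify/.config/local-env-info.json
amplify init   # re-runs the environment/credential prompts and regenerates the local-* files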

BigQuery Storage Read API, the user does not have 'bigquery.readsessions.create'

I'm trying to use the BigQuery Storage Read API. As far as I can tell, the local script is using an account that has the Owner, BigQuery User, and BigQuery Read Session User roles on the entire project. However, running the code from the local machine yields this error:
google.api_core.exceptions.PermissionDenied: 403 request failed: the user does not have 'bigquery.readsessions.create' permission for 'projects/xyz'
According to the GCP documentation, the API is enabled by default, so the only reason I can think of is that my script is using the wrong account.
How would you go about debugging this issue? Is there a way to know for sure which user/account is running the Python code at runtime, something like print(user.user_name)?
There is a gcloud command to get the current user permissions
$ gcloud projects get-iam-policy [PROJECT_ID]
You can also check the user_email field of your job to find out which user it is using to execute your query.
Example:
{
  # ...
  "user_email": "myemail@company.com",
  "configuration": {
    # ...
    "jobType": "QUERY"
  },
  "jobReference": {
    "projectId": "my-project",
    # ...
  }
}
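To answer the "which account is my local script actually using" part: the client libraries pick up Application Default Credentials, so checking the environment on the machine that runs the script usually settles it. A rough sketch:
echo "$GOOGLE_APPLICATION_CREDENTIALS"               # if set, the script authenticates with this service account key file
gcloud auth list                                     # otherwise, shows which gcloud account(s) exist and which is active
gcloud auth application-default print-access-token   # succeeds only if Application Default Credentials are configured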

Linking to a Google Cloud bucket file in a terminal command?

I'm trying to find my way with Google Cloud.
I have a Debian VM instance that I am running a server on. It is installed and working via an SSH connection in a browser window. The command to start the server is "./ninjamsrv config-file-path.cfg".
I have the config file in my default Google Firebase storage bucket, as I will need to update it regularly.
I want to start the server referencing the cfg file in the bucket, e.g.:
"./ninjamsrv gs://my-bucket/ninjam-config.cfg"
But the file is not found:
error opening configfile 'gs://my-bucket/ninjam-config.cfg'
Error loading config file!
However if I run:
"gsutil acl get gs://my-bucket/"
I see:
[
  {
    "entity": "project-editors-XXXXX",
    "projectTeam": {
      "projectNumber": "XXXXX",
      "team": "editors"
    },
    "role": "OWNER"
  },
  {
    "entity": "project-owners-XXXXX",
    "projectTeam": {
      "projectNumber": "XXXXX",
      "team": "owners"
    },
    "role": "OWNER"
  },
  {
    "entity": "project-viewers-XXXXX",
    "projectTeam": {
      "projectNumber": "XXXXX",
      "team": "viewers"
    },
    "role": "READER"
  }
]
Can anyone advise what I am doing wrong here? Thanks
The first thing to verify is if indeed the error thrown is a permission one. Checking the logs related to the VM’s operations will certainly provide more details in that aspect, and a 403 error code would confirm if this is a permission issue. If the VM is a Compute Engine one, you can refer to this documentation about logging.
If the error is indeed a permission one, then you should verify if the permissions for this object are set as “fine-grained” access. This would mean that each object would have its own set of permissions, regardless of the bucket-level access set. You can read more about this here. You could either change the level of access to “uniform” which would grant access to all objects in the relevant bucket, or make the appropriate permissions change for this particular object.
If the issue is not a permission one, then I would recommend trying to start the server from the same .cfg file hosted on the local directory of the VM. This might point the error at the file itself, and not its hosting on Cloud Storage. In case the server starts successfully from there, you may want to re-upload the file to GCS in case the file got corrupted during the initial upload.
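Also note that a regular binary like ninjamsrv cannot open gs:// paths directly in any case; a simple workaround sketch (bucket and file names taken from the question, local path assumed) is to copy the object to the VM's local disk first and point the server at that copy:
gsutil cp gs://my-bucket/ninjam-config.cfg ~/ninjam-config.cfg
./ninjamsrv ~/ninjam-config.cfg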

Deploy Google Cloud Function from Cloud Function

Solved/invalid - see below
I'm trying to deploy a Google Cloud Function from a Google Cloud Function on demand.
However, whatever I try, I get a 403 Forbidden:
HttpError 403 when requesting https://cloudfunctions.googleapis.com/v1/projects/MY_PROJECT/locations/MY_REGION/functions?alt=json returned "The caller does not have permission"
I ended up granting the Cloud Function's service account the Project Owner role to make sure it can do anything, yet I still get the same error.
Is this limited intentionally (for example to avoid fork bombs or something) or am I doing something wrong?
Has anyone been able to make this work?
For the record: I ran the same (Python) function locally with Flask using my own account, and then it deploys the new cloud function perfectly, so the code itself seems to be OK.
Update
Code snippet of how I'm trying to deploy the cloud function:
from googleapiclient import discovery  # google-api-python-client

cf_client = discovery.build('cloudfunctions', 'v1')
location = "projects/{MYPROJECT}/locations/europe-west1"
request = {
    "name": "projects/{MYPROJECT}/locations/europe-west1/functions/hopper--2376cd24d318cd2d42f000f4f1c31a8f",
    "description": "Hopper hopper--2376cd24d318cd2d42f000f4f1c31a8f",
    "entryPoint": "pubsub_trigger",
    "runtime": "python37",
    "availableMemoryMb": 256,
    "timeout": "60s",
    "sourceArchiveUrl": "gs://staging.{MYPROJECT}.appspot.com/deployment/hopper.zip",
    "eventTrigger": {
        "eventType": "providers/cloud.pubsub/eventTypes/topic.publish",
        "resource": "projects/{MYPROJECT}/topics/hopper-test-input"
    },
    "environmentVariables": {
        "HOPPER_ID": "hopper--2376cd24d318cd2d42f000f4f1c31a8f"
    }
}
response = cf_client.projects() \
    .locations() \
    .functions() \
    .create(location=location, body=request) \
    .execute()
Update
I feel like such an idiot... it turns out that, for some reason, I deployed the master function in a different project than the project I granted permissions on. No wonder it didn't work.
The correct answer should be: check that everything is indeed running how and where you expect it to be. Everything was configured correctly, and deploying a Cloud Function from a Cloud Function is not a problem. The project was simply incorrect, due to a different default project being set on the gcloud utility.
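If you want to double-check the same thing in your own setup, comparing the project gcloud is configured for with the project the code actually runs in only takes a couple of commands; a small sketch:
gcloud config get-value project        # the default project the gcloud CLI and client libraries may fall back to
gcloud config configurations list      # all gcloud configurations and which one is active
# from inside the running Cloud Function, the metadata server reports the project it actually executes in:
curl -s -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/project/project-id"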

Error Code 413 encountered in Admin SDK Directory API (Users: patch)

I am a G Suite Super Admin. I have set up Google Single Sign On (SSO) for our AWS accounts inside our G Suite. As we have several AWS accounts, we need to run the "Users: patch" (https://developers.google.com/admin-sdk/directory/v1/reference/users/patch#try-it) to include other AWS accounts for Google Single Sign On.
While provisioning additional AWS accounts for Google Single Sign-On, we encountered error "Code: 413" after running the above-mentioned patch. Details below:
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "uploadTooLarge",
        "message": "Profile quota is exceeded.: Data is too large for "
      }
    ],
    "code": 413,
    "message": "Profile quota is exceeded.: Data is too large for "
  }
}
What could be the possible cause of this error? Are there any workarounds for it? Otherwise, are there other ways to provision multiple AWS accounts using Google Single Sign-On?
Thank you in advance for your patience and assistance in this.
Notice what the error is saying: you are probably going way beyond the designated limit. Try the solution from this SO post, which is to choose a different SAML IdP. The new limit was said to be somewhere between 2087 and 2315 characters.