Deploy Google Cloud Function from Cloud Function - google-cloud-platform

Solved/invalid - see below
I'm trying to deploy a Google Cloud Function from a Google Cloud Function on demand.
However, whatever I try, I get a 403 Forbidden:
HttpError 403 when requesting https://cloudfunctions.googleapis.com/v1/projects/MY_PROJECT/locations/MY_REGION/functions?alt=json returned "The caller does not have permission"
I ended up granting the cloud function's service account the Project Owner role to make sure it could do anything, yet I still get the same error.
Is this limited intentionally (for example to avoid fork bombs or something) or am I doing something wrong?
Has anyone been able to make this work?
For the record: I ran the same (Python) function locally with Flask using my own account, and it deployed the new cloud function perfectly, so the code itself seems to be OK.
Update
Code snippet of how I'm trying to deploy the cloud function:
from googleapiclient import discovery

cf_client = discovery.build('cloudfunctions', 'v1')
location = "projects/{MYPROJECT}/locations/europe-west1"
request = {
    "name": "projects/{MYPROJECT}/locations/europe-west1/functions/hopper--2376cd24d318cd2d42f000f4f1c31a8f",
    "description": "Hopper hopper--2376cd24d318cd2d42f000f4f1c31a8f",
    "entryPoint": "pubsub_trigger",
    "runtime": "python37",
    "availableMemoryMb": 256,
    "timeout": "60s",
    "sourceArchiveUrl": "gs://staging.{MYPROJECT}.appspot.com/deployment/hopper.zip",
    "eventTrigger": {
        "eventType": "providers/cloud.pubsub/eventTypes/topic.publish",
        "resource": "projects/{MYPROJECT}/topics/hopper-test-input"
    },
    "environmentVariables": {
        "HOPPER_ID": "hopper--2376cd24d318cd2d42f000f4f1c31a8f"
    }
}
response = cf_client.projects() \
    .locations() \
    .functions() \
    .create(location=location, body=request) \
    .execute()
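Note that functions.create returns a long-running operation, not the finished function, so the response above is not the final result; a minimal sketch of polling the operation until it completes (assuming the snippet above):

import time

# Poll the long-running operation returned by functions.create until it is done.
op_name = response['name']  # e.g. "operations/..."
while True:
    op = cf_client.operations().get(name=op_name).execute()
    if op.get('done'):
        if 'error' in op:
            raise RuntimeError("Deployment failed: %s" % op['error'])
        break
    time.sleep(5)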
Update
I feel like such an idiot... it turns out that for some reason I deployed the master function in a different project than the project I granted permissions on. No wonder it didn't work.

The correct answer should be: check that everything is indeed running how/where you expect it to be. Everything was configured correctly, and deploying a Cloud Function from a Cloud Function is not a problem. The project was simply wrong, because a different default project was set in the gcloud utility.
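For anyone debugging the same kind of mismatch, a minimal sketch of printing which project and identity the Google client libraries will actually use (google.auth ships with the client libraries):

import google.auth

# Print the project and credentials that the client libraries resolve by default.
credentials, project_id = google.auth.default()
print("Default project:", project_id)
# Service-account credentials expose service_account_email; user credentials may not.
print("Identity:", getattr(credentials, "service_account_email", "user credentials"))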

Related

Unsure how to configure credentials for AWS Amplify cli user - ready to ditch Amplify

I have a React Amplify app. All I want to do is work on it, push the changes to Amplify, etc., using standard and basic commands like amplify push.
The problem is that shortly after starting to work on my app (a month or two), I was no longer allowed to push, pull, or work on the app from the command line. There is no explanation, and the only error is this ...
An error occurred during the push operation: /
Access Denied
✅ Report saved: /var/folders/8j/db7_b0d90tq8hgpfcxrdlr400000gq/T/storygraf/report-1658279884644.zip
✔ Done
The logs created from the error show this.
error.json
{
    "message": "Access Denied",
    "code": "AccessDenied",
    "region": null,
    "time": "2022-07-20T01:20:01.876Z",
    "requestId": "DRFVQWYWJAHWZ8JR",
    "extendedRequestId": "hFfxnwUjbtG/yBPYG+GW3B+XfzgNiI7KBqZ1vLLwDqs/D9Qo+YfIc9dVOxqpMo8NKDtHlw3Uglk=",
    "statusCode": 403,
    "retryable": false,
    "retryDelay": 60.622127086356855
}
I have two users in my .aws/credentials file. One is the default (which is my work account). The other is called "personal". I have tried to push with
amplify push
amplify push --profile default
amplify push --profile personal
It always results in the same error.
I followed the procedure located here under the title "Create environment variables to assume the IAM role and verify access" and entered a new AWS_ACCESS_KEY_ID and a new AWS_SECRET_ACCESS_KEY. When I then run the command ...
aws sts get-caller-identity
It returns the correct ARN. However, there is an AWS_SESSION_TOKEN variable that the docs say needs to be set, and I have no idea what that is.
Running amplify push under this new profile still results in an error.
I have also tried
AWS_PROFILE=personal aws sts get-caller-identity
Again, this results in the correct settings, but the amplify push still fails for the same reasons.
At this point, I'm ready to drop it and move to something else. I've been debugging this for literally months now, and it would be far easier to set up a standard React app on S3 and stand up my resources manually without dealing with this.
Any help is appreciated.
This is the same issue for me. There seems to be no way to reconfigure the CLI once its authentication method is set to a profile. I'm trying to change it back to Amplify Studio and have not been able to crack the code on updating it. Documentation in this area is awful.
In the amplify folder there is a .config directory. There are three files:
local-aws-info.json
local-env-info.json
project-config.json
project-config.json is required, but the local-* files maintain state for your local configuration. Delete these and you can re-init the project and reauthenticate the Amplify CLI for the environment.
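A minimal sketch of that reset, assuming the default amplify/.config layout described above (the file names come from the answer; everything else is illustrative):

import os

# Remove the local Amplify CLI state files so the project can be re-initialised.
# project-config.json is intentionally left untouched.
config_dir = os.path.join("amplify", ".config")
for name in ("local-aws-info.json", "local-env-info.json"):
    path = os.path.join(config_dir, name)
    if os.path.exists(path):
        os.remove(path)

# Then run `amplify init` again to reauthenticate the CLI for the environment.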

BigQuery Storage Read API, the user does not have 'bigquery.readsessions.create'

I'm trying to use the BigQuery Storage Read API. As far as I can tell, the local script is using an account that has the Owner, BigQuery User, and BigQuery Read Session roles on the entire project. However, running the code from the local machine yields this error:
google.api_core.exceptions.PermissionDenied: 403 request failed: the user does not have 'bigquery.readsessions.create' permission for 'projects/xyz'
According to the GCP documentation the API is enabled by default, so the only reason I can think of is that my script is using the wrong account.
How would you go about debugging this issue? Is there a way to know for sure which user/account is running the Python code at run time, something like print(user.user_name)?
There is a gcloud command to get the IAM policy of the project, which shows which roles each member has been granted:
$ gcloud projects get-iam-policy [PROJECT_ID]
You can also check the user_email field of your job to find out which user it is using to execute your query.
Example:
{
    # ...
    "user_email": "myemail@company.com",
    "configuration": {
        # ...
        "jobType": "QUERY"
    },
    "jobReference": {
        "projectId": "my-project",
        # ...
    }
}
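A minimal Python sketch of that check, assuming the google-cloud-bigquery client library; it runs a trivial query and reads back which account executed it via the job's user_email field (the same field shown above):

from google.cloud import bigquery

# Run a trivial query, then inspect which account actually executed it.
client = bigquery.Client()
job = client.query("SELECT 1")
job.result()  # wait for the job to finish
print("Executed as:", job.user_email)
print("Project:", client.project)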

Required 'compute.regions.get' permission for 'projects/$project_id/regions/us-central1'

I'm pretty new to Google Cloud, and I'm trying to use the gcloud command line, and I ran into the following problem:
Error: Forbidden access to resources.
Raw response:
{
    "error": {
        "errors": [
            {
                "domain": "global",
                "reason": "forbidden",
                "message": "Required 'compute.regions.get' permission for 'projects/$project_id/regions/us-central1'"
            }
        ],
        "code": 403,
        "message": "Required 'compute.regions.get' permission for 'projects/$project_id/regions/us-central1'"
    }
}
Can someone help?
Much appreciated
To troubleshoot your issue, please try the following:
Where are you running the command: Cloud Shell or your local environment?
If it is the local environment, try Cloud Shell instead.
Check that you are using the latest version of the gcloud SDK (262+).
Did you properly initialize gcloud (gcloud init)?
Can you confirm that you have an appropriate role to run the command, like editor/owner?
Check that you are using the same location for your products.
If the above steps don't work, can you share your complete gcloud command to give more context?
Oh, I see where the problem is! When I created the storage, I set the region to "Asia". When I configured it via gcloud init, I set it to "us-central1-a". In this context, the "permission denied" means I had no permission to access a different region; it is misleading if you're still getting your head around the scope of cloud resources. However, Pawel's answer is more comprehensive and is a very good start to lead you in the right direction.
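One hedged way to double-check that kind of mismatch from code, assuming the resource in question is a Cloud Storage bucket (the bucket name below is illustrative):

from google.cloud import storage

# Print where the bucket actually lives before assuming a region in gcloud.
client = storage.Client()
bucket = client.get_bucket("my-bucket-name")  # hypothetical bucket name
print("Bucket location:", bucket.location)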

I'm getting an error creating an AWS AppSync Authenticated DataSource

I'm working through the Build On Serverless | S2 E4 video and I've gotten to the point of creating an authenticated HTTP datasource using the AWS CLI. I'm getting this error.
Parameter validation failed:
Unknown parameter in httpConfig: "authorizationConfig", must be one of: endpoint
I think I'm using the same information provided in the video, repository, and gist, updated for my own AWS account. It seems like it's some kind of formatting or missing-information error, but I'm just not seeing the problem.
When I remove the "authorizationConfig" property from the state-machine-datasource.json the command works.
I've reviewed the code against the information in the video as well as documentation and examples here and here provided by aws
This is the command I'm running.
aws appsync create-data-source --api-id {my app sync app id} --name ProcessBookingStateMachine \
    --type HTTP --http-config file://src/backend/booking/state-machine-datasource.json \
    --service-role-arn arn:aws:iam::{my account}:role/AppSyncProcessBookingState --profile default
This is my state-machine-datasource.json:
{
    "endpoint": "https://states.us-east-2.amazonaws.com",
    "authorizationConfig": {
        "authorizationType": "AWS_IAM",
        "awsIamConfig": {
            "signingRegion": "us-east-2",
            "signingServiceName": "states"
        }
    }
}
Thanks,
I needed to update my AWS CLI to the latest version. The authenticated HTTP datasource is something fairly new, I guess.

Is it possible to deploy a background Function "myBgFunctionInProjectB" in "project-b" that is triggered by my topic "my-topic-project-a" from "project-a"?

It's possible to create a topic "my-topic-project-a" in project "project-a" so that it is publicly visible (this is done by granting the "Pub/Sub Subscriber" role to "allUsers" on it).
Then from project "project-b" I can create a subscription to "my-topic-project-a" and read the events from "my-topic-project-a". This is done using the following gcloud commands:
(these commands are executed on project "project-b")
gcloud pubsub subscriptions create subscription-to-my-topic-project-a --topic projects/project-a/topics/my-topic-project-a
gcloud pubsub subscriptions pull subscription-to-my-topic-project-a --auto-ack
So, OK, this is possible when creating a subscription in "project-b" linked to "my-topic-project-a" in "project-a".
In my use case I would like to be able to deploy a background function "myBgFunctionInProjectB" in "project-b" that is triggered by my topic "my-topic-project-a" from "project-a".
But ... this doesn't seem to be possible since gcloud CLI is not happy when you provide the full topic name while deploying the cloud function:
gcloud beta functions deploy myBgFunctionInProjectB --runtime nodejs8 --trigger-topic projects/project-a/topics/my-topic-project-a --trigger-event google.pubsub.topic.publish
ERROR: (gcloud.beta.functions.deploy) argument --trigger-topic: Invalid value 'projects/project-a/topics/my-topic-project-a': Topic must contain only Latin letters (lower- or upper-case), digits and the characters - + . _ ~ %. It must start with a letter and be from 3 to 255 characters long.
Is there a way to achieve that, or is this actually not possible?
Thanks
So, it seems that this is not actually possible. I found this out by checking it in two different ways:
If you try to create a function through the API explorer, you will need to fill in the location where you want to run it, for example projects/PROJECT_FOR_FUNCTION/locations/PREFERRED-LOCATION, and then provide a request body like this one:
{
    "eventTrigger": {
        "resource": "projects/PROJECT_FOR_TOPIC/topics/YOUR_TOPIC",
        "eventType": "google.pubsub.topic.publish"
    },
    "name": "projects/PROJECT_FOR_FUNCTION/locations/PREFERRED-LOCATION/functions/NAME_FOR_FUNCTION"
}
This will result in a 400 error code, with a message saying:
{
    "field": "event_trigger.resource",
    "description": "Topic must be in the same project as function."
}
It will also say that the source code is missing, but, nonetheless, the API already shows that this is not possible.
There is an already open issue in the Public Issue Tracker for this very same issue. Bear in mind that there is no ETA for it.
I also tried to do this from gcloud, as you did, and obviously got the same result. I then tried to remove the projects/project-a/topics/ bit from my command, but that creates a new topic in the same project where you create the function, so it's not what you want.