Error - functions: failed to create function dialogflowFirebaseFulfillment

When I try to deploy a Firebase function from my local machine, I get this error:
functions: failed to create function dialogflowFirebaseFulfillment
HTTP Error: 400, Default service account 'project-id@appspot.gserviceaccount.com' doesn't exist. Please recreate this account (for example by disabling and enabling the Cloud Functions API), or specify a different account.
The project I'm trying to deploy is https://github.com/actions-on-google/codelabs-nodejs/tree/master/level1-complete

It seems your service account has been removed. You may want to check whether your Firebase and Actions on Google projects have been deleted.
If they haven't, check the service accounts on console.cloud.google.com and make sure all the accounts you deploy with (Firebase, Dialogflow, App Engine, etc.) still exist and match the project you are deploying to. Disabling and re-enabling the Cloud Functions API may also help, as mentioned in the error.
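To check from the command line, something like the following should work (the project ID is a placeholder, and disabling the API may ask you to confirm dependent services):
# my-project-id is a placeholder for your actual project ID
gcloud iam service-accounts list --project=my-project-id
gcloud services disable cloudfunctions.googleapis.com --project=my-project-id
gcloud services enable cloudfunctions.googleapis.com --project=my-project-id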

I notice that your error has 'project-id@appspot.gserviceaccount.com'.
Shouldn't the project-id be your actual {project-id} from the Google Action that you created, and not the literal word project-id?

Serverless VPC Access connector is in a bad state

Our project uses a Serverless VPC Access connector to allow Cloud Functions and Cloud Run services to reach the database over private IP. It worked flawlessly for a few months, but today I tried to deploy one of the functions that uses the connector and got this message:
VPC connector projects/xxxx/locations/us-central1/connectors/vpc-connector is not ready yet or does not exist. Please visit https://cloud.google.com/functions/docs/troubleshooting for in-depth troubleshooting documentation.
I went to the Serverless VPC Access view and found that the connector does indeed have a red marking on it. When I hover over it, it says
Connector is in a bad state, manual deletion recommended
but I don't know why; the link to the logs doesn't show anything for the past 3 months.
I tried to Google this error without success, and searching through the logs didn't turn up anything relevant either.
I'm looking for any hints:
Why did it happen?
How can I fix it? I don't want to recreate the connector, since many functions and Cloud Run services depend on it.
As the issue was blocking us from deploying Cloud Functions, I was forced to recreate the connector.
But this time the API returned an error:
Error: Error waiting to create Connector: Error waiting for Creating Connector: Error code 7, message: Operation failed: Google APIs Service Agent (<PROJECT_NUMBER>@cloudservices.gserviceaccount.com) needs editor role in the project.
After granting that permission, the old connector started to work again.
There was no such requirement before, but it apparently changed in the meantime.
Spooky: one day something works, the next it doesn't.
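For reference, granting that role from the command line looks roughly like this (the project ID and project number are placeholders):
# my-project-id and 123456789012 are placeholders
gcloud projects add-iam-policy-binding my-project-id --member="serviceAccount:123456789012@cloudservices.gserviceaccount.com" --role="roles/editor"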

Turn off the v0.1 and v1beta1 endpoints via GCP console

I have a Flutter + Firebase app and received an email about "Legacy GAE and GCF Metadata Server endpoints will be turned down on April 30, 2020". I updated it to v1, and at the end of the email it suggests turning off the legacy endpoints completely. I'm using Google Cloud Functions, and the email says:
If you are using App Engine Standard or Cloud Functions, set the following environment variable: DISABLE_LEGACY_METADATA_SERVER_ENDPOINTS=true.
Upon further research, this can be done through the console (https://cloud.google.com/compute/docs/storing-retrieving-metadata#custom). It says to add it as custom metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#disable-legacy-endpoints), but I'm not sure if I'm doing this right.
For additional info, the email was triggered by a few Cloud Functions I have that use the Firebase Admin SDK to send push notifications (via Cloud Messaging).
The custom metadata feature you mention is meant to be used with Compute Engine and allows you to pass arbitrary values to your project or instances, and to set startup and shutdown scripts. It's a handy way to pass common environment variables to all the GCE VMs in your project. You can also use that custom metadata in App Engine Flexible instances, because they are actually Compute Engine VMs running in your project with your App Engine code.
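As an aside, project-wide custom metadata on Compute Engine can be set with a single gcloud command; the key and value here are only illustrative placeholders:
# my-key=my-value is a placeholder key/value pair
gcloud compute project-info add-metadata --metadata=my-key=my-value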
Cloud Functions and App Engine Standard are fundamentally different in that they don't run in your project but in a Google-owned project. This makes your project-wide custom metadata unreachable to them.
For this reason, for Cloud Functions you'll need to set a function-specific environment variable by either:
using the --set-env-vars flag when deploying your function with the gcloud functions deploy command (see the example below), or
adding it to the environment variables section of your function when creating it via the Developer Console.
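A rough sketch of the first option; the function name, runtime, and trigger are placeholders, so keep whatever flags your function already deploys with and just add the variable:
# sendPushNotification, the runtime and the trigger are placeholders
gcloud functions deploy sendPushNotification --runtime=nodejs10 --trigger-http --set-env-vars=DISABLE_LEGACY_METADATA_SERVER_ENDPOINTS=true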

Google Cloud Resource Manager - create projects inside folders

I'm trying to create multiple projects inside my Organisation. My use case is:
1. I want to make an API call that creates a new project.
2. I want to create a new DialogFlow agent (chatbot).
The Dialogflow API looks pretty straightforward. Since this is a backend implementation, I am using service accounts to achieve this.
My problem is that when I try to create a service account, it is always scoped to some project. I spent the whole day trying to give that service account every permission I could find, but I'm still getting a Forbidden error.
Can someone explain whether this is possible and, if so, how I should configure it through the Cloud Console so that I end up with a service account that can create projects (scoped to some folder/project if that makes it easier)?
If the answer is yes: can I create multiple chatbots in one project, and what permissions do I need for that?
Thanks!
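For illustration only (not confirmed for this exact setup): a service account always lives inside a project, but its IAM roles can be granted at the folder or organization level, so one common pattern is to give it the Project Creator role on a folder and then create projects under that folder. The folder ID, service account email, and project ID below are placeholders:
# 123456789, project-factory@my-seed-project.iam.gserviceaccount.com and my-new-chatbot-project are placeholders
gcloud resource-manager folders add-iam-policy-binding 123456789 --member="serviceAccount:project-factory@my-seed-project.iam.gserviceaccount.com" --role="roles/resourcemanager.projectCreator"
gcloud projects create my-new-chatbot-project --folder=123456789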

How to set up Ray autoscaling on GCP

I am having real difficulty setting up Ray autoscaling on Google Compute Engine. I can get it to work on AWS with no problem, but I keep running into the following error when running ray up:
googleapiclient.errors.HttpError: https://cloudresourcemanager.googleapis.com/v1/projects?alt=json returned "Service accounts cannot create projects without a parent.">
My project is part of an organization, so I don't understand where this is coming from, or why it would need to create a project in the first place. I have entered my project ID in the YAML file as I normally do for AWS.
Thank you very much. I appreciate any help I can get!!
The error message referring to a service account, together with the fact that the project already exists, suggests that the googleapiclient used by the Ray autoscaler is authenticated as a service account that doesn't have access to the project.
If this is true, then here's what I believe happens. When running the Ray GCP autoscaler, it first checks whether a project with the given ID exists. In your case, this request returns "not found", because no project with that ID is visible to the service account. Since the project appears not to exist, Ray automatically tries to create one for you. If we created a new GCP project with a user account (i.e. a non-service account), the new project would be associated with the user account's default organization; a service account, however, must specify a parent organization explicitly when creating a project. If we look at the ray.autoscaler.config._create_project function, we see that the arguments passed to the projects.create method omit the 'parent' argument, which explains why you see the error.
To verify this (and hopefully fix the problem), you could change the account used to authenticate the googleapiclient. I believe the credentials used for its requests are the same ones used by the Google Cloud SDK, so you should be able to configure the account with the gcloud auth login command.
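Something along these lines; whether the autoscaler picks up your user credentials or the application-default credentials may depend on your Ray version, so both commands are shown, and the project ID is a placeholder:
# my-existing-project-id is a placeholder
gcloud auth login
gcloud auth application-default login
gcloud config set project my-existing-project-id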
I think the Ray autoscaler could be improved by either allowing the user to explicitly specify the parent organization when creating a new project, or at least providing a more elaborate error message for this particular case.
I hope this fixes your problem. If it doesn't, and you believe it's a problem with the autoscaler, don't hesitate to open an issue or feature request on the Ray issues page!

AWS API Gateway: new stage does not work

I created my DEV environment without any problem, and it works fine.
But when I try to create a QA environment (or any other), it does not work.
The only difference between the two environments is the stage variable that refers to the backend (I have tried using the same one and the problem persists).
If I try a method in either environment via the "Test" function, both work. But when I call it from Postman, only DEV works. The only error I see in CloudWatch is the following:
Execution failed due to configuration error: Invalid endpoint address.
Any ideas? Thanks
The problem was the names of the variables in Stage Variables.
I think the problem you're having is that you need to deploy your stage, i.e.
API -> Resources -> Actions (on root of API) -> Deploy API
Then select the stage you want to deploy, get the new endpoint, and test it from Postman.
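If you prefer the CLI, deploying to a stage looks roughly like this (the API ID and stage name are placeholders):
# abc123defg and QA are placeholders
aws apigateway create-deployment --rest-api-id abc123defg --stage-name QA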