I tried to deploy a Cloud Function on Google Cloud Platform using the console. The command I used was:
gcloud functions deploy function_name --runtime=python37 --memory=1024MB --region=asia-northeast1 --allow-unauthenticated --trigger-http
But I am getting this error:
ERROR: (gcloud.functions.deploy) OperationError: code=3, message=Build failed: could not resolve storage source: googleapi: Error 404: Not Found, notFound
I tried googling around, but it seems no one has faced this error message before. I have also tried switching to another project, and deployment works fine there:
gcloud config set project another_project
I would appreciate it if anyone has any idea what is causing this error and how I can solve it. Thanks!
As per the documentation here -
https://cloud.google.com/functions/docs/building
it says: Because Cloud Storage is used directly in your project, the source code directory for your functions is visible in a bucket named:
gcf-sources-<PROJECT_NUMBER>-<REGION>
Therefore, if you deleted this bucket in Cloud Storage, you need to recreate it.
For example, if your project number is 123456789 and the function runs in asia-south1, the bucket name should be:
gcf-sources-123456789-asia-south1
Once you recreate the bucket, you can use the gcloud or Firebase CLI to deploy and it should work normally.
Hope this helps. It worked for me!
Enjoy!
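A minimal sketch of recreating that bucket from the CLI, assuming a hypothetical project ID my-project in asia-south1 (substitute your own values):

```shell
# Illustrative values; replace with your own project and region.
PROJECT_ID="my-project"
REGION="asia-south1"

# Look up the project number, which appears in the bucket name.
PROJECT_NUMBER=$(gcloud projects describe "$PROJECT_ID" --format='value(projectNumber)')

# Recreate the staging bucket Cloud Functions expects.
gsutil mb -p "$PROJECT_ID" -l "$REGION" "gs://gcf-sources-${PROJECT_NUMBER}-${REGION}"
```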
Please check whether a bucket named gcf-sources-**** exists.
If not, you will need to contact Google Cloud support to request that the bucket be rebuilt.
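A quick way to check, sketched with gsutil (requires the Cloud SDK and access to the project):

```shell
# List the current project's buckets and look for the staging bucket.
gsutil ls | grep gcf-sources
```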
Update:
https://issuetracker.google.com/175866925
Update from GCP Support: that did not resolve my problem at all.
First they said they needed to recreate that bucket. Later they said that did not resolve the problem and they were still investigating.
Just for testing, I created that bucket myself, as Oru suggested.
Still the same error. I will update this thread when I get new information.
I'm trying to follow this tutorial:
https://juju.is/docs/olm/google-gce
At the end, when trying to bootstrap, I get this:
$ juju bootstrap google google-controller
ERROR googleapi: Error 403: Required 'compute.projects.get' permission for 'projects/juju-demo-364623', forbidden
I have tried for at least 30 minutes to add more permissions in GCP, but nothing changes. I am lost. I had hoped GCP permissions would be easier to manage than AWS permissions, but they are just as confusing as AWS IAM.
Following this guided documentation page fixed it:
https://cloud.google.com/iam/docs/grant-role-console
I used the Owner role, which is broader, and that worked.
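For reference, granting a broad role like Owner from the CLI could look roughly like this (the service account name here is a made-up placeholder for whichever account juju authenticates as):

```shell
# Grant the Owner role on the project to the credentials juju uses.
gcloud projects add-iam-policy-binding juju-demo-364623 \
  --member="serviceAccount:juju-sa@juju-demo-364623.iam.gserviceaccount.com" \
  --role="roles/owner"
```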
In addition to your answer @Guillaume Chevalier, you can also use the answer by @Maxim in this Stack Overflow link.
It's a good reference for configuring a custom role using the gcloud CLI, and it also explains the configuration step by step.
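As a sketch of that custom-role approach, a narrower role covering the permission from the error could be created like this (the role ID and title are made up; you would likely need to add further compute permissions for a full bootstrap):

```shell
# Create a minimal custom role containing the permission juju complained about.
gcloud iam roles create jujuBootstrap \
  --project=juju-demo-364623 \
  --title="Juju Bootstrap" \
  --permissions=compute.projects.get
```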
When trying to delete my Cloud Composer environment, it gets stuck complaining about insufficient permissions. I have deleted the storage bucket, the GKE cluster, and the deployment according to this post:
Cannot delete Cloud Composer environment
And the service account is the standard compute SA.
DELETE operation on this environment failed 33 minutes ago with the following error message:
Could not configure workload identity: Permission iam.serviceAccounts.getIamPolicy is required to perform this operation on service account projects/-/serviceAccounts/"project-id"-compute@developer.gserviceaccount.com.
Even though I temporarily made the compute account a project Owner and IAM Security Admin, it does not work.
I've tried deleting it through the GUI, the gcloud CLI, and Terraform without success. Any advice or things to try will be appreciated :)
I got help from Google support: instead of addressing the SA projects/-/serviceAccounts/"project-id"-compute@developer.gserviceaccount.com,
it was apparently the default service agent, which has the format
service-"project-nr"@cloudcomposer-accounts.iam.gserviceaccount.com, that needed the
Cloud Composer v2 API Service Agent Extension role.
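Assuming that is the fix, granting the extension role to the Composer service agent might look like this (PROJECT_ID and PROJECT_NUMBER are placeholders):

```shell
# Grant the Cloud Composer v2 API Service Agent Extension role
# to the Composer service agent at the project level.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:service-PROJECT_NUMBER@cloudcomposer-accounts.iam.gserviceaccount.com" \
  --role="roles/composer.ServiceAgentV2Ext"
```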
Thank you for the kind replies!
The iam.serviceAccounts.getIamPolicy issue seems to be more related to credentials: your machine is having trouble retrieving credential data.
You should set your credentials path variable again:
export GOOGLE_APPLICATION_CREDENTIALS=fullpath.json
Another option is to run:
gcloud auth activate-service-account
You can also add the credentials to your Terraform script:
provider "google" {
credentials = file(var.service_account_file_path)
project = var.project_id
}
Don't forget that you need the correct roles to delete the Composer environment.
For more details, see:
https://cloud.google.com/composer/docs/delete-environments#gcloud
https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/composer_environment
https://cloud.google.com/composer/docs/how-to/access-control?hl=es_419
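With credentials and roles in place, the delete itself is just (environment name and location are placeholders):

```shell
# Delete the Composer environment; substitute your own name and location.
gcloud composer environments delete ENVIRONMENT_NAME \
  --location LOCATION
```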
I am new to AWS Step Functions and was trying to follow this tutorial https://docs.aws.amazon.com/step-functions/latest/dg/tutorial-human-approval.html#human-approval-yaml but I am getting permission errors at Step 2 when importing the YAML template in the CloudFormation service.
"The following resource types are not supported for resource import: AWS::ApiGateway::Account,AWS::Lambda::Permission,AWS::StepFunctions::StateMachine"
Our AWS admin granted me the following permissions:
IAM Full Access
SNS Full Access
STS_AssumeRole *
Lambda_FullAccess
AWS StepFunction_FullAccess
APIGatewayAdministrator (Equals Full Access)
He also said that the following services are used:
ApiGateway::RestApi
ApiGateway::Resource
ApiGateway::Method
ApiGateway::Account
ApiGateway::Stage
ApiGateway::Deployment
IAM::Role
Partition
Lambda::Function
Lambda::Permission
StepFunctions::StateMachine
APIGatewayEndpoint
SNS::Topic
But I am still unable to import the YAML template from the tutorial.
What's missing?
Thank you
After logging in again and retrying, the solution worked.
The error indicates that the template was initially submitted via "Import resources", whereas these were new resources being created, so a regular stack creation should be used instead.
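For completeness, creating the tutorial's template as a new stack from the CLI might look roughly like this (the stack and file names are made up):

```shell
# Create a brand-new stack instead of importing existing resources.
# CAPABILITY_IAM is needed because the template creates IAM roles.
aws cloudformation create-stack \
  --stack-name human-approval-tutorial \
  --template-body file://template.yaml \
  --capabilities CAPABILITY_IAM
```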
I set up a Cloud Build trigger in my GCP project in order to deploy a Cloud Function from a Cloud Source Repository via a .yaml file. Everything seems to have been set up correctly and permissions granted according to the official documentation, but when I test the trigger by running it manually, I get the following error:
ERROR: (gcloud.functions.deploy) ResponseError: status=[403], code=[Forbidden], message=[Missing necessary permission iam.serviceAccounts.actAs for on resource [MY_SERVICE_ACCOUNT]. Please grant the roles/iam.serviceAccountUser role. You can do that by running 'gcloud iam service-accounts add-iam-policy-binding [MY_SERVICE_ACCOUNT] --member= --role=roles/iam.serviceAccountUser']
Now first of all, running the suggested command doesn't even work, because the suggested syntax is bad (it is missing a value for --member). But more importantly, I already added that role to the service account the error message complains about. I tried removing it and adding it back, both from the UI and the CLI, and the error still always shows.
Why?
I figured it out after a lot of trial and error. The documentation seems to be incomplete (it is missing some additional necessary permissions). This answer got me there.
In short, you also need to add the cloudfunctions.developer and iam.serviceAccountUser roles to the [PROJECT_NUMBER]@cloudbuild.gserviceaccount.com account, and (I believe) that Cloud Build service account also needs to be added as a member of the service account that has permission to deploy your Cloud Function (again shown in the linked SO answer).
The documentation really should reflect this.
Good luck!
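A sketch of the two grants described above (PROJECT_ID, PROJECT_NUMBER, and RUNTIME_SA_EMAIL are placeholders):

```shell
# Let the Cloud Build service account deploy Cloud Functions.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/cloudfunctions.developer"

# Let the Cloud Build service account act as the function's runtime SA,
# which supplies the missing iam.serviceAccounts.actAs permission.
gcloud iam service-accounts add-iam-policy-binding RUNTIME_SA_EMAIL \
  --member="serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/iam.serviceAccountUser"
```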
I'm trying to import a VM created in VirtualBox; it was exported as an OVA file. I have searched a number of forums to find out how to get around this error:
An error occurred (InvalidParameter) when calling the ImportImage operation: The service role <vmimport> does not exist or
does not have sufficient permissions for the service to continue
I have used the console instead of the AWS CLI to perform the steps described here: http://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-image-import.html
However I am still getting the error. Can anyone guide me on how to troubleshoot?
The command to import looks like this:
aws ec2 import-image --description "Bitnami WordPress CiviCRM" --license-type BYOL --disk-containers file://containers.json
My .aws/credentials and config files are set up correctly, as I can perform other CLI operations, but I am stumped on how to associate the vmimport role with the IAM user.
Never be afraid to start over, which is what I did. I found another "cookbook" and followed its instructions. I was able to import the image, but it did not convert; the message was "ClientError: Unknown OS / Missing OS files".
I'll submit another question on this if I can't figure it out.
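For anyone still troubleshooting the original vmimport error: a sketch of creating the service role per the linked guide, assuming trust-policy.json and role-policy.json follow the documented templates (trusting vmie.amazonaws.com and granting S3/EC2 import permissions, respectively):

```shell
# Create the vmimport role that the VM Import/Export service assumes.
aws iam create-role --role-name vmimport \
  --assume-role-policy-document file://trust-policy.json

# Attach the import permissions policy to the role.
aws iam put-role-policy --role-name vmimport \
  --policy-name vmimport \
  --policy-document file://role-policy.json
```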