Error submitting Cloud Build job from Cloud Functions - google-cloud-platform

Hopefully this is a simple one for someone with a little deeper knowledge than me...
I have a Cloud Function that responds to webhook calls by submitting jobs to Cloud Build using the API. This works fine, except that we now have some jobs that need to use KMS keys from a different project:
secrets:
- kmsKeyName: projects/xxx/locations/global/keyRings/xxx/cryptoKeys/xxx
With this included in cloudbuild.yaml, the API call to submit the Cloud Build job returns:
400 invalid build: failed to check access to "projects/xxx/locations/global/keyRings/xxx/cryptoKeys/xxx"
I've tried granting both the Cloud Functions and Cloud Build service accounts from the calling project every role I could think of, up to and including Owner, on the project that hosts KMS.
This article has simple and clear instructions for accessing Container Registry and other services in another project, but nothing about KMS. The error doesn't turn up any meaningful search results, and it doesn't look familiar to me at all.
Thanks for any help.

The Cloud KMS API was not enabled on the project running Cloud Build. It's unfortunate that the error message was so vague. In fact, I diagnosed the issue by running gcloud kms decrypt ... in a Cloud Build job, which helpfully told me that the API needed to be enabled.
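For reference, a minimal sketch of the fix. All project IDs, key ring, and key names below are placeholders, as is the Cloud Build service account (yours is shown on the Cloud Build settings page); cross-project use also requires the Cloud Build service account to have decrypt access on the key:

```shell
# Enable the Cloud KMS API on the project that runs Cloud Build.
gcloud services enable cloudkms.googleapis.com --project=build-project

# Grant the Cloud Build service account of build-project decrypt access
# to the key hosted in the other project.
gcloud kms keys add-iam-policy-binding my-key \
  --project=kms-project \
  --location=global \
  --keyring=my-keyring \
  --member="serviceAccount:123456789012@cloudbuild.gserviceaccount.com" \
  --role="roles/cloudkms.cryptoKeyDecrypter"
```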

Related

How to give Google Cloud Eventarc correct permission so it can trigger a cloud function?

I have successfully deployed a 2nd-generation Cloud Function with a storage trigger per the Google tutorial.
The Cloud Function works when I run a test command in the shell, but if I try for real by uploading a file to my bucket, the Cloud Function is not invoked.
I can see that the event triggers the pubsub topic:
And in Eventarc I can see signs of the problem:
So, my layman's analysis of why the Cloud Function invocation fails is that Eventarc lacks some permission to receive the message from Pub/Sub (?). I have read the Eventarc troubleshooting and Eventarc access control docs and tried to add the Eventarc Admin role to the Eventarc service account (as seen in the image below), but to no result. (I've also added it to every other service account I can find, made the compute service account a project owner, etc., but no luck.) What am I missing?
(Note: I had an earlier question about this with a broader scope, but I opted for a new, more specific question.)
You used the Compute Engine default Service Account.
You need to grant the needed permissions to this Service Account.
According to the documentation:
Make sure the runtime service account key you are using for your
Application Default Credentials has either the
cloudfunctions.serviceAgent role or the storage.buckets.{get, update}
and the resourcemanager.projects.get permissions. For more information
on setting these permissions, see Granting, changing, and revoking
access to resources.
Please check on the IAM page whether the default Service Account has the following permissions:
cloudfunctions.serviceAgent
storage.buckets.{get, update}
resourcemanager.projects.get
Also, don't hesitate to check Cloud Logging to see the exact error and the missing permissions.
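As a sketch, the serviceAgent role from the list above can be granted from the command line; the project ID and number below are placeholders. (Note that storage.buckets.{get, update} and resourcemanager.projects.get are fine-grained permissions, which arrive bundled inside roles rather than being grantable directly.)

```shell
PROJECT_ID=my-project            # placeholder: your project ID
PROJECT_NUMBER=123456789012      # placeholder: your project number

# The Compute Engine default service account the function runs as.
SA="serviceAccount:${PROJECT_NUMBER}-compute@developer.gserviceaccount.com"

gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="$SA" \
  --role="roles/cloudfunctions.serviceAgent"
```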

Enabling AWS Config with CloudFormation does not work

I registered the delegated administrator account for my AWS organization successfully (I get the notification that I'm the delegated admin every time I'm at the StackSet console).
So I should be able to enable AWS Config across the whole organization with the sample template provided by AWS. But every time I run the StackSet I get the following error:
Cancelled since failure tolerance has exceeded
As I couldn't find any more log information or similar provided by AWS, I'm really confused about what I'm missing.
StackSet Config
Any suggestions?
In the stack instances tab, look for the stacks that failed and debug them separately.
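The same per-instance failure details can be pulled from the CLI; the stack set name and operation ID below are placeholders:

```shell
# List the per-account/per-region stack instances and their statuses.
aws cloudformation list-stack-instances \
  --stack-set-name enable-aws-config

# For a specific failed operation, show the per-instance failure reasons
# (the operation ID appears in the StackSet operations tab).
aws cloudformation list-stack-set-operation-results \
  --stack-set-name enable-aws-config \
  --operation-id <operation-id>
```

The "Cancelled since failure tolerance has exceeded" message is only the aggregate result; the real error is in the individual stack instance results.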

Private AWS credentials being shared with Serverless.com?

I've been having trouble with a deployment with a serverless-component, so I've been trying to debug it. Stepping through the code, I actually thought I'd be able to step into the component itself and see what was going on.
But to my surprise, I couldn't actually debug it, because the component doesn't actually exist on my computer. Apparently the serverless CLI sends a request to a server, and the request seems to include everything serverless needs to build and deploy the actual service, which includes my AWS credentials...
Is this a well-known thing? Is there a way to force serverless to build and deploy locally? This really caught me by surprise, and to be honest I'm not very happy about it.
I haven't used their platform (I had thought the CLI executed only from your local machine; this does seem very risky), but you can make this more secure in the following ways:
First, set up an IAM role that can only perform the deploy actions for your app. Then create a profile that assumes this role when you work on your serverless app and use the CLI.
Second, you can avoid long-lived CLI credentials (IAM users) by using AWS SSO, which generates CLI credentials that last an hour; I believe you can log in from the AWS CLI itself. This means your CLI credentials will live for at most one hour.
If the requests always come from the same IP, you could also restrict that in an IAM policy, but I wouldn't imagine there is any guarantee that their IP will always be the same.
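A sketch of both suggestions as ~/.aws/config profiles; every account ID, role name, and URL below is a placeholder:

```
# ~/.aws/config (sketch; all values are placeholders)

# A profile that assumes a deploy-only role from your default credentials.
[profile serverless-deploy]
role_arn = arn:aws:iam::123456789012:role/serverless-deploy-only
source_profile = default
region = us-east-1

# An SSO-backed profile with short-lived credentials.
[profile sso-deploy]
sso_start_url = https://my-org.awsapps.com/start
sso_region = us-east-1
sso_account_id = 123456789012
sso_role_name = DeployOnly
region = us-east-1
```

With the SSO profile you log in via `aws sso login --profile sso-deploy`, and the serverless CLI can be pointed at a named profile with its `--aws-profile` flag.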

Unable to stage source on AWS Codestar pipeline

I was attempting to change the meta tags in the index.html of my organization's Angular project. Our AWS pipeline retrieves code from our GitHub repository's master branch. Upon pushing the change, CodeStar on AWS fails at the Source stage.
We've reverted back to an older commit but still end up with the same error on CodeStar. The error on aws says:
"Invalid action configuration
Either the GitHub repository "quote-flow-v3" does not exist, or the GitHub access token provided has insufficient permissions to access the repository. Verify that the repository exists and edit the pipeline to reconnect the action to GitHub."
Normally, the code would publish to the live site upon pushing changes. I've looked around, and the closest I got to this issue is here:
AWS CodePipeline doesn't work anymore - GitHub's token insufficient permissions
However, there doesn't seem to be a solution there, and recreating the pipeline is not an option. Any suggestions?
RESOLVED:
This problem occurred after a key member of our team left the GitHub organization. It turns out the OAuth token was attached to his GitHub account. Fixed the issue by assigning the OAuth token to a different admin!
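One way to swap in the new admin's token without recreating the pipeline is to edit the pipeline definition directly; the pipeline name below is a placeholder:

```shell
# Fetch the current pipeline structure (the placeholder name is "my-pipeline").
aws codepipeline get-pipeline --name my-pipeline \
  --query pipeline > pipeline.json

# Edit pipeline.json: in the Source action's configuration,
# replace the OAuthToken value with the new admin's token.

# Push the updated definition back.
aws codepipeline update-pipeline --pipeline file://pipeline.json
```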

Unable to subscribe to a google pub sub topic using a service account

I was trying to understand the example given in the Google Cloud samples at this link:
IAM Example
This example creates a service account, a VM, and a Pub/Sub topic. The VM runs as the service account, and the service account has subscriber access to the Pub/Sub topic, thereby giving services and applications running on the VM access to the Pub/Sub topic.
However, when I try to deploy this example I get the error below:
The fingerprint of the deployment is a-v3HjAHciZeSLuE-vSeZw==
Waiting for create [operation-1525502430976-56b6fb6809800-dbd09909-c5d681b2]...failed.
ERROR: (gcloud.deployment-manager.deployments.create) Error in Operation [operation-1525502430976-56b6fb6809800-dbd09909-c5d681b2]: errors:
- code: RESOURCE_ERROR
location: /deployments/test-dp/resources/my-pubsub-topic
message: '{"ResourceType":"pubsub.v1.topic","ResourceErrorCode":"403","ResourceErrorMessage":{"code":403,"message":"User
not authorized to perform this action.","status":"PERMISSION_DENIED","details":[],"statusMessage":"Forbidden","requestPath":"https://pubsub.googleapis.com/v1/projects/fresh-deck-194307/topics/my-pubsub-topic:setIamPolicy","httpMethod":"POST"}}'
It says that a user is not authorized to perform this action, but I am unable to understand which user it is referring to.
Since my account is the owner of the project, I should be able to deploy a script that sets an IAM policy for subscribing to a Pub/Sub topic.
My understanding above might be wrong, though. Could somebody help me understand why this example is failing?
Also, if any additional configuration were needed for this example to run, I would expect it to be mentioned in the README file, but there are no instructions.
Make sure that the APIs for all resources you're trying to deploy are enabled.
Use the gcloud auth list command to make sure that the account with sufficient permissions is the active one.
Use the gcloud config list command to make sure that the default project and other settings are correct.
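A quick sketch of those checks, plus enabling the APIs this example likely touches (the exact API list is an assumption based on the resources the example creates: a Deployment Manager deployment, a VM, and a Pub/Sub topic):

```shell
# Check the active account and default project.
gcloud auth list
gcloud config list

# Enable the APIs the example deploys resources through.
gcloud services enable \
  deploymentmanager.googleapis.com \
  pubsub.googleapis.com \
  compute.googleapis.com
```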