GCP Deployment Manager: 403 does not have storage.buckets.get access - google-cloud-platform

I am trying to create a bucket using Deployment manager but when I want to create the deployment, I get the following error:
ERROR: (gcloud.deployment-manager.deployments.create) Error in Operation [operation-1525606425901-56b87ed1537c9-70ca4aca-72406eee]: errors:
- code: RESOURCE_ERROR
location: /deployments/posts/resources/posts
message: '{"ResourceType":"storage.v1.bucket","ResourceErrorCode":"403","ResourceErrorMessage":{"code":403,"errors":[{"domain":"global","message":"myprojectid#cloudservices.gserviceaccount.com
does not have storage.buckets.get access to posts.","reason":"forbidden"}],"message":"myprojectid#cloudservices.gserviceaccount.com
does not have storage.buckets.get access to posts.","statusMessage":"Forbidden","requestPath":"https://www.googleapis.com/storage/v1/b/posts","httpMethod":"GET","suggestion":"Consider
granting permissions to myprojectid#cloudservices.gserviceaccount.com"}}'
If I understand it correctly, Deployment Manager uses a service account (as described in the message) to actually create all my resources. I've checked IAM and made sure that this service account (myprojectid@cloudservices.gserviceaccount.com) does have access as "Editor", and I even added "Storage Admin" (which includes storage.buckets.get) to be extra sure. However, I still get the same error message.
Am I assigning the permissions to the wrong IAM user, or what am I doing wrong?
command used:
gcloud deployment-manager deployments create posts --config posts.yml
my deployment template:
bucket.jinja
resources:
- name: {{ properties['name'] }}
  type: storage.v1.bucket
  properties:
    name: {{ properties['name'] }}
    location: europe-west1
    lifecycle:
      rule:
      - action:
          type: Delete
        condition:
          age: 30
          isLive: true
    labels:
      datatype: {{ properties['datatype'] }}
    storageClass: REGIONAL
posts.yml
imports:
- path: bucket.jinja

resources:
- name: posts
  type: bucket.jinja
  properties:
    name: posts
    datatype: posts

I tested your code successfully, and I believe the issue is that you were trying to create/update a bucket owned by a different user in a different project, over which your service account has no power.
Therefore, please try to redeploy after changing the name to one that is likely unique, and let me know if this solves the issue. This can be awkward in some scenarios, because you either have to pick a fairly long name or run the risk that the name is already taken.
Notice that you have to change the name of the bucket, since bucket names must be unique across all projects of all users.
This might seem an excessive requirement, but it is what makes it possible to host static websites or to refer to a file with a standard URL:
https://storage.googleapis.com/nomebucket/folder/nomefile
From the error trace I believe this is the issue: you are trying to create a bucket whose name is already taken by a project you do not own.
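A quick way to check whether a name is already taken is to list the bucket directly (a sketch: a name owned by another project typically comes back as a 403 AccessDenied, while a genuinely free name returns a 404):
# Probe the bucket name from the CLI; the error code tells you whether the name is taken.
gsutil ls -b gs://posts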
Notice that if you remove the permissions from the service account, you do not receive the message telling you that the service account has no power on the bucket:
xxx@cloudservices.gserviceaccount.com does not have storage.buckets.get access to posts.
but instead a message pointing out that the service account has no power on the project:
Service account xxx@cloudservices.gserviceaccount.com is not authorized
to take actions for project xxx. Please add xxx@cloudservices.gserviceaccount.com
as an editor under project xxx using Google Developers Console
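If you ever end up in that state, the Editor role can be granted back from the command line as well (a sketch; PROJECT_ID and PROJECT_NUMBER are placeholders):
# Re-grant the Deployment Manager service account the Editor role on the project.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:PROJECT_NUMBER@cloudservices.gserviceaccount.com" \
  --role="roles/editor"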
Notice that if you try to create a bucket you already own there is no issue.
$ gcloud deployment-manager deployments create posts22 --config posts.yml
The fingerprint of the deployment is xxx==
Waiting for create [operation-xxx-xxx-xxx-xxx]...done.
Create operation operation-xxx-xxx-xxx-xxx completed successfully.
NAME                TYPE               STATE      ERRORS  INTENT
nomebuckettest4536  storage.v1.bucket  COMPLETED  []

Related

Add Snyk Action to CodePipeline with CloudFormation

I wanted to spin up a CodePipeline on AWS with a Snyk Scan action through CloudFormation. The official documentation on how to do this is a little light on details and seems to be missing key bits of information, so I was hoping someone could shed some light on this issue. According to the Snyk action reference, there are only a few variables that need to be configured, so I followed along and set up my CodePipeline CF template with the following configuration:
- Name: Scan
  Actions:
    - Name: Scan
      InputArtifacts:
        - Name: "source"
      ActionTypeId:
        Category: Invoke
        Owner: ThirdParty
        Version: 1
        Provider: Snyk
      OutputArtifacts:
        - Name: "source-scan"
However, it is unclear how CodePipeline authenticates with Snyk with just this configuration. Sure enough, when I tried to spin up this template, I got the following error through the CloudFormation console,
Action configuration for action 'Scan' is missing required configuration 'ClientId'
I'm not exactly sure what the ClientId is in this case, but I assume it is the Snyk ORG id. So, I added ClientId under the Configuration section of the template. When I spun the new template up, I got the following error,
Action configuration for action 'Scan' is missing required configuration 'ClientToken'
Again, there is no documentation (that I could find) on the AWS side for what this ClientToken is, but I assume it is a Snyk API token, so I went ahead and added that. My final template looks like,
- Name: Scan
  Actions:
    - Name: Scan
      InputArtifacts:
        - Name: "source"
      ActionTypeId:
        Category: Invoke
        Owner: ThirdParty
        Version: 1
        Provider: Snyk
      OutputArtifacts:
        - Name: "source-scan"
      Configuration:
        ClientId: <id>
        ClientToken: <token>
The CloudFormation stack now goes up fine and without error, but the CodePipeline itself halts on the Scan stage, stalls for ten or so minutes, and then outputs an error that doesn't give you much information:
There was an error in the scan execution.
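For what it's worth, the stalled stage can also be inspected from the CLI, which sometimes surfaces a bit more error detail (a sketch; the pipeline name is a placeholder):
# Show per-stage/per-action state, including any error details for the Scan action.
aws codepipeline get-pipeline-state --name <pipeline-name>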
I assume I am not authenticating with Snyk correctly. I can set up the scan fine through the console, but that includes an OAuth page where I enter my username/password before Snyk can authorize AWS. Anyway, I need to be able to set up the scan through CloudFormation, as I will not have console access for the project I am working on.
I am looking for a solution and/or some documentation that covers this use case. If anyone could point me in the right direction, I would be much obliged.

How to trigger Cloud Functions when new project is created in org?

I am working on a script that needs to be triggered when a new project is created in the GCP org. However, I simply cannot find where newly created projects are listed (I checked the Stackdriver logs but couldn't find anything at the org level). Is there any other way to trigger Cloud Functions when a new project has been created?
You can create an aggregated sink that publishes a message to a Pub/Sub topic (which in turn can trigger a Cloud Function).
This is how I get a message published to a Pub/Sub topic after a project is created:
export PROJECT_ID=[YOUR_PROJECT_ID_WHICH_WILL_HOST_PUBSUB_TOPIC]
export ORGANIZATION_ID=[YOUR_ORGANIZATION_ID]
export TOPIC_ID=[YOUR_TOPIC_ID]
export SINK_NAME=[YOUR_SINK_NAME]
gcloud pubsub topics create $TOPIC_ID --project $PROJECT_ID
gcloud logging sinks create $SINK_NAME \
  pubsub.googleapis.com/projects/$PROJECT_ID/topics/$TOPIC_ID \
  --organization=$ORGANIZATION_ID \
  --log-filter="logName=\"organizations/$ORGANIZATION_ID/logs/cloudaudit.googleapis.com%2Factivity\" AND protoPayload.methodName=\"CreateProject\" AND protoPayload.\"@type\"=\"type.googleapis.com/google.cloud.audit.AuditLog\" AND resource.type=\"project\""
After creating the sink, gcloud will prompt you to grant the Pub/Sub Publisher role to the service account it will use:
gcloud organizations add-iam-policy-binding $ORGANIZATION_ID --member=serviceAccount:[xxxxxxxxxx]@gcp-sa-logging.iam.gserviceaccount.com --role=roles/pubsub.publisher
After these commands, you'll see the log entries published to the Pub/Sub topic.
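From there, hooking up the function is a standard Pub/Sub trigger; a minimal sketch (the function name, runtime, entry point and source directory are placeholders, not part of the original commands):
# Deploy a function that fires on every message published to the topic.
gcloud functions deploy on-project-created \
  --project=$PROJECT_ID \
  --runtime=python38 \
  --trigger-topic=$TOPIC_ID \
  --entry-point=handle_project_created \
  --source=.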
All triggers for Cloud Functions are listed in the documentation on trigger types. Project creation is not listed or implied there, so it is not a direct trigger type.
In cases like this, I'd recommend looking for a way for your own script to trigger the Cloud Function explicitly after it completes (or even after it starts) the project creation. The Cloud Function can then verify that the project creation was complete, and perform the necessary actions you want.
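For example, the provisioning script could invoke the function directly once the project exists; a sketch, assuming a function name and payload that are entirely made up here:
# Invoke the function explicitly and hand it the ID of the project that was just created.
gcloud functions call handle-new-project --data '{"projectId": "my-new-project"}'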
You can do it, but it's not really simple to set up.
First, the global process (detailed here):
Perform a custom log query
Sink the logs into PubSub topic
Trigger your function on PubSub messages
You can't view organization-level logs in the console; you have to do it from the command line. Here's an example:
gcloud logging read --organization=YOUR_ORG_NUMBER \
'logName:"organizations/YOUR_ORG_NUMBER/logs/cloudaudit.googleapis.com"
AND timestamp>="2020-05-05T23:59:59Z"
AND timestamp<="2020-05-07T23:59:59Z"
AND severity= "NOTICE"
AND protoPayload.resourceName:"projects"
AND protoPayload.methodName="CreateProject"'
A few explanations:
AND severity= "NOTICE": this keeps only creations that completed successfully.
AND protoPayload.resourceName:"projects": this avoids duplicates. The first entry is the long-running-operation response, whose resource name starts with organizations; when the project is actually created, a new entry appears whose resource name starts with projects. Again, this is only because I want successful creations.
AND protoPayload.methodName="CreateProject": because we also move the project into a folder, and we don't care about the updateProject method, only the creation here.
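Putting those pieces together, the org-level sink that feeds Pub/Sub would look something like this (a sketch; the sink and topic names are placeholders, mirroring the commands in the earlier answer):
# Route successful CreateProject audit entries to a Pub/Sub topic at the organization level.
gcloud logging sinks create project-creation-sink \
  pubsub.googleapis.com/projects/$PROJECT_ID/topics/$TOPIC_ID \
  --organization=$ORGANIZATION_ID \
  --log-filter='severity="NOTICE" AND protoPayload.methodName="CreateProject" AND protoPayload.resourceName:"projects"'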
Your payload looks like this:
insertId: -13v21gdcfxy
logName: organizations/YOUR_ORG_NUMBER/logs/cloudaudit.googleapis.com%2Factivity
protoPayload:
  '@type': type.googleapis.com/google.cloud.audit.AuditLog
  authenticationInfo:
    principalEmail: XXXXXXXXXXXXXXXXXXXXXXXXXXX
    serviceAccountDelegationInfo:
    - firstPartyPrincipal:
        principalEmail: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
  authorizationInfo:
  - granted: true
    permission: resourcemanager.projects.create
    resource: organizations/YOUR_ORG_NUMBER
    resourceAttributes: {}
  methodName: CreateProject
  request:
    '@type': type.googleapis.com/google.cloudresourcemanager.v1.CreateProjectRequest
    project:
      createTime: '2020-05-06T06:41:03.605Z'
      lifecycleState: ACTIVE
      name: XXXXXXXXXXXXXXXX
      parent:
        id: 'YOUR_ORG_NUMBER'
        type: organization
      projectId: XXXXXXXXXXXXXXXXXXXXXX
      projectNumber: 'NEW_PROJECT_NUMBER'
  requestMetadata:
    callerIp: 2600:1900:2001:2::19
    callerSuppliedUserAgent: google-api-python-client/1.7.4 (gzip),gzip(gfe)
    destinationAttributes: {}
    requestAttributes: {}
  resourceName: projects/XXXXXXXXXXXXXXXXX
  serviceName: cloudresourcemanager.googleapis.com
  status: {}
receiveTimestamp: '2020-05-06T06:41:07.346181918Z'
resource:
  labels:
    project_id: XXXXXXXXXXXXXXXXX
  type: project
severity: NOTICE
timestamp: '2020-05-06T06:41:06.431Z'
This example of the structure should help you speed up your development.
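If you want to inspect the messages before writing the function, a throwaway pull subscription is handy (a sketch; the subscription name is a placeholder, and jq/base64 are assumed to be available). The sink publishes the LogEntry as the message payload, so the new project ID sits under protoPayload.request.project:
# Create a temporary subscription on the topic and pull one message from it.
gcloud pubsub subscriptions create debug-sub --topic=$TOPIC_ID --project=$PROJECT_ID
# Decode the base64 message data and extract the newly created project's ID.
gcloud pubsub subscriptions pull debug-sub --auto-ack --format=json --project=$PROJECT_ID \
  | jq -r '.[0].message.data' | base64 -d \
  | jq -r '.protoPayload.request.project.projectId'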

How to avoid giving `iam:CreateRole` permission when using existing S3 bucket to trigger Lambda function?

I am trying to deploy an AWS Lambda function that gets triggered when an AVRO file is written to an existing S3 bucket.
My serverless.yml configuration is as follows:
service: braze-lambdas

provider:
  name: aws
  runtime: python3.7
  region: us-west-1
  role: arn:aws:iam::<account_id>:role/<role_name>
  stage: dev
  deploymentBucket:
    name: serverless-framework-dev-us-west-1
    serverSideEncryption: AES256

functions:
  hello:
    handler: handler.hello
    events:
      - s3:
          bucket: <company>-dev-ec2-us-west-2
          existing: true
          events: s3:ObjectCreated:*
          rules:
            - prefix: gaurav/lambdas/123/
            - suffix: .avro
When I run serverless deploy, I get the following error:
ServerlessError: An error occurred: IamRoleCustomResourcesLambdaExecution - API: iam:CreateRole User: arn:aws:sts::<account_id>:assumed-role/serverless-framework-dev/jenkins_braze_lambdas_deploy is not authorized to perform: iam:CreateRole on resource: arn:aws:iam::<account_id>:role/braze-lambdas-dev-IamRoleCustomResourcesLambdaExec-1M5QQI6P2ZYUH.
I see some mentions of Serverless needing iam:CreateRole because of how CloudFormation works but can anyone confirm if that is the only solution if I want to use existing: true? Is there another way around it except using the old Serverless plugin that was used prior to the framework adding support for the existing: true configuration?
Also, what is 1M5QQI6P2ZYUH in arn:aws:iam::<account_id>:role/braze-lambdas-dev-IamRoleCustomResourcesLambdaExec-1M5QQI6P2ZYUH? Is it a random identifier? Does this mean that Serverless will try to create a new IAM role every time I try to deploy the Lambda function?
I've just encountered this, and overcome it.
I also have a lambda for which I want to attach an s3 event to an already existing bucket.
My place of work has recently tightened up AWS Account Security by the use of Permission Boundaries.
So I've encountered a very similar error during deployment:
Serverless Error ---------------------------------------
An error occurred: IamRoleCustomResourcesLambdaExecution - API: iam:CreateRole User: arn:aws:sts::XXXXXXXXXXXX:assumed-role/xx-crossaccount-xx/aws-sdk-js-1600789080576 is not authorized to perform: iam:CreateRole on resource: arn:aws:iam::XXXXXXXXXXXX:role/my-existing-bucket-IamRoleCustomResourcesLambdaExec-LS075CH394GN.
If you read Using existing buckets on the serverless site, it says
NOTE: Using the existing config will add an additional Lambda function and IAM Role to your stack. The Lambda function backs-up the Custom S3 Resource which is used to support existing S3 buckets.
In my case I needed to further customise this extra role that serverless creates so that it is also assigned the permission boundary my employer has defined should exist on all roles. This happens in the resources: section.
If your employer is using permission boundaries, you'll obviously need to know the correct ARN to use:
resources:
  Resources:
    IamRoleCustomResourcesLambdaExecution:
      Type: AWS::IAM::Role
      Properties:
        PermissionsBoundary: arn:aws:iam::XXXXXXXXXXXX:policy/xxxxxxxxxxxx-global-boundary
Some info on the serverless Resources config
Have a look at your own serverless.yml; you may already have a permission boundary defined in the provider section. If so, you'll find it under rolePermissionsBoundary, which was added in (I think) version 1.64 of Serverless:
provider:
  rolePermissionsBoundary: arn:aws:iam::XXXXXXXXXXXX:policy/xxxxxxxxxxxx-global-boundary
If so, you should be able to use that ARN in the resources: sample I've posted here.
For testing purposes we can use:
provider:
  name: aws
  runtime: python3.8
  region: us-east-1
  iamRoleStatements:
    - Effect: Allow
      Action: "*"
      Resource: "*"
For running sls deploy, I would suggest you use a role/user/policy with Administrator privileges.
If you're restricted due to your InfoSec team or the like, then I suggest you have your InfoSec team have a look at docs for "AWS IAM Permission Requirements for Serverless Framework Deploy." Here's a good link discussing it: https://github.com/serverless/serverless/issues/1439. At the very least, they should add iam:CreateRole and that can get you unblocked for today.
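For example, the missing permission could be attached to the deploy role with something like this (a sketch only: the role name is taken from the error message, iam:CreateRole is the minimum mentioned above, and the extra IAM actions and resource pattern are assumptions about what the custom-resource role setup may also need):
# Sketch: allow the deploy role to manage the Serverless custom-resource role.
aws iam put-role-policy \
  --role-name serverless-framework-dev \
  --policy-name allow-serverless-custom-resource-role \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["iam:CreateRole", "iam:PutRolePolicy", "iam:PassRole", "iam:GetRole", "iam:DeleteRole"],
      "Resource": "arn:aws:iam::<account_id>:role/braze-lambdas-dev-*"
    }]
  }'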
Now I will address your individual questions:
can anyone confirm if that is the only solution if I want to use existing: true
Apples and oranges. Your S3 configuration has nothing to do with your error message. iam:CreateRole must be added to the policy of whatever/whoever is doing sls deploy.
Also, what is 1M5QQI6P2ZYUH in arn:aws:iam::<account_id>:role/braze-lambdas-dev-IamRoleCustomResourcesLambdaExec-1M5QQI6P2ZYUH? Is it a random identifier? Does this mean that serverless will try to create a new role every time I try to deploy the function?
Yes, it is a random identifier
No, sls will not create a new role every time. This unique ID is cached and re-used for updates to an existing stack.
If a stack is destroyed/recreated, it will generate a new unique ID.
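If you're curious, the generated role and its suffix can be seen on the deployed stack's resources (a sketch; the stack name of service-stage, here braze-lambdas-dev, is an assumption):
# List the custom-resources execution role that Serverless generated for the stack.
aws cloudformation describe-stack-resources \
  --stack-name braze-lambdas-dev \
  --query "StackResources[?LogicalResourceId=='IamRoleCustomResourcesLambdaExecution']"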

BQ Schedule Queries with Deployment Manager: "P4 service account needs iam.serviceAccounts.getAccessToken permission"

I’m trying to create a deployment manager template for bigquery data transfer to initiate a scheduled query. I’ve created a type provider for transfer configs and when I call the type provider for a scheduled query, I get the following error:
"P4 service account needs iam.serviceAccounts.getAccessToken permission."
However, I've already given it the 'Service Account Token Creator' role with "gcloud projects add-iam-policy-binding ..". How else would I be able to solve this?
Type Provider:
- name: custom-type-provider
  type: deploymentmanager.v2beta.typeProvider
  properties:
    descriptorUrl: "https://bigquerydatatransfer.googleapis.com/$discovery/rest?version=v1"
    options:
      inputMappings:
      - fieldName: Authorization
        location: HEADER
        value: >
          $.concat("Bearer ", $.googleOauth2AccessToken())
Calling the type provider:
- name: test
  type: project_id:custom-type-provider:projects.transferConfigs
  properties:
    parent: project/project_id
    ..
    ..
I think you've hit a limitation of scheduled queries: you have to use user accounts instead of service accounts in order to run these queries.
There is a feature request to allow service accounts to act on a user's behalf for this particular action.

Getting service account key deployed via Google DM

Is it possible to get a service account key that is deployed via Google Deployment Manager (the iam.v1.serviceAccounts.key resource) as a result of a request to DM?
I have seen an option to expose it in outputs (https://cloud.google.com/deployment-manager/docs/configuration/expose-information-outputs), but I can't see any way to get the key in the response of the Deployment Manager insert/update API methods.
To fetch the key you can set up an output or a reference to privateKeyData in the same configuration that creates the key. If there is no reference or output to that field, DM will ignore it.
Example config looks like:
outputs:
- name: key
  value: $(ref.iam-key.privateKeyData)

resources:
- name: iam-account
  type: iam.v1.serviceAccount
  properties:
    accountId: iam-account
    displayName: iam-account-display
- name: iam-key
  type: iam.v1.serviceAccounts.key
  properties:
    parent: $(ref.iam-account.name)
Run the above YAML file with:
gcloud deployment-manager deployments create [DeploymentName] --config key.yaml
This creates a service account with an associated key. You can look at the manifest associated with the configuration, or go to Deployment -> Deployment properties -> Layout in the Cloud Console.
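If you want to read the key back afterwards, the output ends up in the deployment's manifest layout; a sketch of pulling it from the command line ([DeploymentName] and the manifest name are placeholders, and the value is the base64-encoded key file):
# Find the manifest that belongs to the deployment, then print its layout,
# which contains the outputs section with the value of the key.
gcloud deployment-manager manifests list --deployment [DeploymentName]
gcloud deployment-manager manifests describe manifest-XXXXXXXXXX \
  --deployment [DeploymentName] --format="yaml(layout)"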