I was trying to set up a GCS bucket using a Deployment Manager template, but I cannot find how to set up a condition.
https://github.com/GoogleCloudPlatform/cloud-foundation-toolkit/blob/master/dm/templates/gcs_bucket/gcs_bucket.py
As advised by Tanjin, the code snippet is already included in the template you have shared.
You will also need to add an accessControl section to the top-level configuration of each resource to which you want to apply access control policies.
Here is the code sample:
resources:
- name: a-new-pubsub-topic
  type: pubsub.v1.topic
  properties:
    ...
  accessControl:
    gcpIamPolicy:
      bindings:
      - role: roles/pubsub.editor
        members:
        - "user:alice@example.com"
      - role: roles/pubsub.publisher
        members:
        - "user:jane@example.com"
        - "serviceAccount:my-other-app@appspot.gserviceaccount.com"
You can also check this link for the official GCP guide on that setup.
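Applied to the GCS bucket template linked in the question, the same pattern might look like the following sketch (the import path, bucket name, role, and member are hypothetical placeholders; adjust them to your configuration):

```yaml
imports:
- path: templates/gcs_bucket/gcs_bucket.py   # hypothetical local path to the CFT template
  name: gcs_bucket.py

resources:
- name: my-example-bucket                    # hypothetical bucket name
  type: gcs_bucket.py
  properties:
    name: my-example-bucket
    location: us-central1
  accessControl:
    gcpIamPolicy:
      bindings:
      - role: roles/storage.objectViewer
        members:
        - "user:alice@example.com"           # hypothetical member
```

The accessControl block sits alongside properties at the resource level, not inside it.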
I'm trying to deploy my Serverless project for several environments: develop, staging, and production. To make this work I'm using serverless-dotenv-plugin with NODE_ENV=development or NODE_ENV=acceptance (in this case). Everything related to the plugin seems to work: when I deploy for development or acceptance it loads the correct .env file, and it also tries to create the related S3 buckets.
There are two buckets for each environment (as you can see in the attached image) which I want to link to a Route 53 domain. The initial deployment created the correct buckets. When I now deploy again for development there is no issue, but when I deploy for acceptance I get the error An error occurred: BucketGatsbySite - project-bucket-acc-www-gatsby already exists., so the build breaks.
Of course the bucket already exists, but because it was already created it shouldn't be re-created. This seems to work for development but not for acceptance, and I have no clue why. In this AWS documentation I can't find anything related to this, although as you can see below I do have DeletionPolicy: Retain, which I think should mean that no new bucket is created and the old one is retained.
So to summarise: I want to create each bucket once, and on later deploys retain the existing ones rather than try to create new ones.
My config is as follows:
service: project

package:
  individually: true

provider:
  name: aws
  runtime: nodejs12.x
  lambdaHashingVersion: 20201221
  stage: ${env:STAGE}
  region: ${env:REGION}
  environment:
    REGION: ${env:REGION}
    STAGE: ${env:STAGE}
    NODE_ENV: ${env:NODE_ENV}
    CLIENT_ID: ${env:AWS_CLIENT_ID}
    TABLE: "project-db-${env:STAGE}"
    BUCKET: "project-bucket-${env:STAGE}"
    POOL: "project-userpool-${env:STAGE}"
  iam:
    role:
      statements:
        - Effect: Allow
          Action:
            - dynamodb:*
          Resource:
            - !GetAtt projectTable.Arn

resources:
  Resources:
    BucketReactApp:
      Type: AWS::S3::Bucket
      DeletionPolicy: Retain
      Properties:
        AccessControl: PublicRead
        BucketName: "${self:provider.environment.BUCKET}-www-react"
    BucketGatsbySite:
      Type: AWS::S3::Bucket
      DeletionPolicy: Retain
      Properties:
        AccessControl: PublicRead
        BucketName: "${self:provider.environment.BUCKET}-www-gatsby"
Any suggestion would be really appreciated, since I'm kind of stuck on this.
Some changes in CloudFormation (CFN) require an update of the resource. This is mentioned on the AWS::S3::Bucket documentation page in the Update requires note for each property.
And here is the list of all "Update behaviors of stack resources"; Replacement means that the bucket will be recreated.
But it's still strange, because the only two properties that require replacement on update are:
BucketName
ObjectLockEnabled
So maybe some intermediate operation on CFN stack requires recreation of S3 bucket.
Maybe you should be looking at the UpdateReplacePolicy attribute:
BucketGatsbySite:
  Type: AWS::S3::Bucket
  ...
  UpdateReplacePolicy: Retain
I want to use an IAM role that's already been created in another serverless.yml file. It seems as if using the iam property is the only (?) way to do this for all of the functions at once. The source code I've encountered mainly uses iamRoleStatements to apply IAM permissions, but that doesn't seem to offer the option to use already-created roles.
Secondary question, should I use the ARN of the role or create an export for it from the stack where it's being created?
provider:
  name: aws
  runtime: python3.8
  stage: ${opt:stage, 'dev'}
  region: ${opt:region, 'eu-west-1'}
  iam:
    role: arn:aws:iam::123456789012:role/execution-role
  iamRoleStatements:
    - Effect: Allow
      Action:
        - events:PutEvents
      Resource: arn:aws:events:${self:provider.region}:#{AWS::AccountId}:blablabla-${self:provider.stage}
    - Effect: Allow
      Action:
        - states:SendTaskSuccess
      Resource: arn:aws:states:${self:provider.region}:#{AWS::AccountId}:stateMachine:${self:provider.stage}-blablabla
If you want to use one existing role for all functions in a Serverless app, specify iam.role on its own rather than combining it with iamRoleStatements.
I believe the framework is attempting to create a new role based on the iamRoleStatements you provided. You can refer to the documentation for more information.
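As a sketch, a provider section that reuses the existing role and creates nothing new could look like this (the ARN is the placeholder from the question):

```yaml
provider:
  name: aws
  runtime: python3.8
  stage: ${opt:stage, 'dev'}
  region: ${opt:region, 'eu-west-1'}
  iam:
    # All functions share this pre-existing role; with no
    # iamRoleStatements present, the framework creates no new role.
    role: arn:aws:iam::123456789012:role/execution-role
```

On the secondary question: either approach works, but referencing a CloudFormation export (for example via ${cf:other-stack.RoleArnOutput}, where the stack and output names are hypothetical) keeps the dependency explicit and survives role recreation, while a hard-coded ARN is simpler.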
I am deploying a CloudFormation template to AWS. A role for my Lambda invocation is being created by a template that I am importing, and I cannot modify it directly. I wish to modify that role to attach the AWS managed policy AWSLambdaVPCAccessExecutionRole that already exists in my AWS account. So far, all of my searches have come up empty.
I have found instructions for how to create a new role with an existing managed policy
I have found instructions for how to create a new policy and attach it to an existing role.
I have found instructions for how to Update a Stack using the AWS console or the CLI, but not via a template (YAML or JSON)
I have found instructions for calling something called aws_iam_role_policy_attachment in something called Terraform, but that is not available to me
I am hoping for something like the following but I cannot find any evidence of this existing anywhere. Is there anything that can do what I am trying to do?
---
Resources:
  AdditionalRolePermissions:
    Type: "AWS::IAM::RolePolicyAttachment"
    Properties:
      Roles:
        - Ref: ExistingRole
      PolicyName:
        - Ref: ExistingPolicy
The best solution I have come up with so far is to create a new policy that has a manually created PolicyDocument that is the same as the existing one for AWSLambdaVPCAccessExecutionRole and attach it to the role upon creation. I would prefer not to do that though because it will be harder to maintain.
Unfortunately, you cannot do this in pure CloudFormation unless you create a custom resource, but that isn't really pure CloudFormation at that point, as you'd need to create a Lambda function and other resources to implement it. There is no concept of a policy attachment in CloudFormation at present; these attachments only happen when you define a policy or role resource.
The simplest thing would be to go with your solution of creating a policy that duplicates AWSLambdaVPCAccessExecutionRole. That policy is fairly simple and shouldn't clutter up your CloudFormation template too much compared to some other complicated policies.
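As a sketch of that approach, an inline duplicate of AWSLambdaVPCAccessExecutionRole could look like the following. The statement mirrors the managed policy's contents at the time of writing, so verify it against the current policy document in the IAM console before relying on it:

```yaml
Policies:
  - PolicyName: LambdaVPCAccessExecutionCopy   # hypothetical name
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Action:
            # CloudWatch Logs permissions for Lambda logging
            - logs:CreateLogGroup
            - logs:CreateLogStream
            - logs:PutLogEvents
            # ENI management required by VPC-attached Lambdas
            - ec2:CreateNetworkInterface
            - ec2:DescribeNetworkInterfaces
            - ec2:DeleteNetworkInterface
          Resource: "*"
```

This Policies list goes under the Properties of your AWS::IAM::Role resource.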
It is possible as of 2021. Please see: https://aws.amazon.com/premiumsupport/knowledge-center/cloudformation-attach-managed-policy/
Example:
AWSTemplateFormatVersion: '2010-09-09'
Description: something cool
Resources:
  IAM:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - lambda.amazonaws.com
            Action:
              - 'sts:AssumeRole'
      RoleName: some_role_name
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole'
I am currently trying to add both of the AWS SAM policy templates KMSDecryptPolicy and KMSEncryptPolicy to my config YAML, but the KMS key is in a different account and I would need cross-account access to do this.
However, when using the above-mentioned policy templates I can only pass the KeyId, not the AWS account number, which is a placeholder variable.
I am trying to do this using the AWS SAM policy templates.
Would appreciate any support on this.
This is an example of what my current policies look like.
Policies:
  - arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole
  - DynamoDBCrudPolicy:
      TableName: !Ref InvoiceFeaturesTable
  - S3CrudPolicy:
      BucketName: !Ref InvoiceFeaturesBucket
Both KMSDecryptPolicy and KMSEncryptPolicy use ${AWS::AccountId}, which defaults to the current AWS account, and you cannot override it via the policy templates. You can ONLY pass KeyId. Reference
What you can do instead is copy the policy template into your SAM template as an inline policy and modify it as required. Reference
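As a sketch, the inline equivalent of the decrypt template with the key owner's account hard-coded might look like this (the region, account ID, and key ID in the ARN are hypothetical placeholders):

```yaml
Policies:
  - Statement:
      - Effect: Allow
        Action:
          - kms:Decrypt
        # The policy template would substitute ${AWS::AccountId} here;
        # inline, you can point at the key-owning account instead.
        Resource: arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
```

Note that cross-account use also requires the key policy in the owning account to grant your function's role kms:Decrypt.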
I am trying to deploy an AWS Lambda function that gets triggered when an AVRO file is written to an existing S3 bucket.
My serverless.yml configuration is as follows:
service: braze-lambdas

provider:
  name: aws
  runtime: python3.7
  region: us-west-1
  role: arn:aws:iam::<account_id>:role/<role_name>
  stage: dev
  deploymentBucket:
    name: serverless-framework-dev-us-west-1
    serverSideEncryption: AES256

functions:
  hello:
    handler: handler.hello
    events:
      - s3:
          bucket: <company>-dev-ec2-us-west-2
          existing: true
          event: s3:ObjectCreated:*
          rules:
            - prefix: gaurav/lambdas/123/
            - suffix: .avro
When I run serverless deploy, I get the following error:
ServerlessError: An error occurred: IamRoleCustomResourcesLambdaExecution - API: iam:CreateRole User: arn:aws:sts::<account_id>:assumed-role/serverless-framework-dev/jenkins_braze_lambdas_deploy is not authorized to perform: iam:CreateRole on resource: arn:aws:iam::<account_id>:role/braze-lambdas-dev-IamRoleCustomResourcesLambdaExec-1M5QQI6P2ZYUH.
I see some mentions of Serverless needing iam:CreateRole because of how CloudFormation works, but can anyone confirm whether that is the only solution if I want to use existing: true? Is there another way around it, apart from using the old Serverless plugin that was used before the framework added support for the existing: true configuration?
Also, what is 1M5QQI6P2ZYUH in arn:aws:iam::<account_id>:role/braze-lambdas-dev-IamRoleCustomResourcesLambdaExec-1M5QQI6P2ZYUH? Is it a random identifier? Does this mean that Serverless will try to create a new IAM role every time I try to deploy the Lambda function?
I've just encountered this, and overcome it.
I also have a lambda for which I want to attach an s3 event to an already existing bucket.
My place of work has recently tightened up AWS Account Security by the use of Permission Boundaries.
So I've encountered a very similar error during deployment:
Serverless Error ---------------------------------------
An error occurred: IamRoleCustomResourcesLambdaExecution - API: iam:CreateRole User: arn:aws:sts::XXXXXXXXXXXX:assumed-role/xx-crossaccount-xx/aws-sdk-js-1600789080576 is not authorized to perform: iam:CreateRole on resource: arn:aws:iam::XXXXXXXXXXXX:role/my-existing-bucket-IamRoleCustomResourcesLambdaExec-LS075CH394GN.
If you read Using existing buckets on the serverless site, it says
NOTE: Using the existing config will add an additional Lambda function and IAM Role to your stack. The Lambda function backs-up the Custom S3 Resource which is used to support existing S3 buckets.
In my case I needed to further customise this extra role that serverless creates so that it is also assigned the permission boundary my employer has defined should exist on all roles. This happens in the resources: section.
If your employer is using permission boundaries you'll obviously need to know the correct ARN to use
resources:
  Resources:
    IamRoleCustomResourcesLambdaExecution:
      Type: AWS::IAM::Role
      Properties:
        PermissionsBoundary: arn:aws:iam::XXXXXXXXXXXX:policy/xxxxxxxxxxxx-global-boundary
Some info on the serverless Resources config
Have a look at your own serverless.yml; you may already have a permission boundary defined in the provider section. If so, you'll find it under rolePermissionsBoundary, which was added in (I think) version 1.64 of Serverless.
provider:
  rolePermissionsBoundary: arn:aws:iam::XXXXXXXXXXXX:policy/xxxxxxxxxxxx-global-boundary
If so, you should be able to use that ARN in the resources: sample I've posted here.
For testing purposes we can use:
provider:
  name: aws
  runtime: python3.8
  region: us-east-1
  iamRoleStatements:
    - Effect: Allow
      Action: "*"
      Resource: "*"
For running sls deploy, I would suggest you use a role/user/policy with Administrator privileges.
If you're restricted due to your InfoSec team or the like, then I suggest you have your InfoSec team have a look at docs for "AWS IAM Permission Requirements for Serverless Framework Deploy." Here's a good link discussing it: https://github.com/serverless/serverless/issues/1439. At the very least, they should add iam:CreateRole and that can get you unblocked for today.
Now I will address your individual questions:
can anyone confirm if that is the only solution if I want to use existing: true
Apples and oranges. Your S3 configuration has nothing to do with your error message. iam:CreateRole must be added to the policy of whatever/whoever is doing sls deploy.
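As a sketch, the statement to add to the deploying principal's policy might look like this (the action list beyond iam:CreateRole and the resource pattern are assumptions; scope them to your own stack's role naming):

```yaml
- Effect: Allow
  Action:
    - iam:CreateRole
    # A deploy that manages its stack's roles typically also needs these
    - iam:GetRole
    - iam:PutRolePolicy
    - iam:DeleteRolePolicy
    - iam:DeleteRole
    - iam:PassRole
  Resource: arn:aws:iam::<account_id>:role/braze-lambdas-*
```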
Also, what is 1M5QQI6P2ZYUH in arn:aws:iam::<account_id>:role/braze-lambdas-dev-IamRoleCustomResourcesLambdaExec-1M5QQI6P2ZYUH? Is it a random identifier? Does this mean that serverless will try to create a new role every time I try to deploy the function?
Yes, it is a random identifier.
No, sls will not create a new role every time. This unique ID is cached and re-used for updates to an existing stack.
If a stack is destroyed and recreated, it will generate a new unique ID.