I've been working with Serverless (the framework) and I've come across an issue. This might be down to my minimal knowledge of AWS and its architecture, but maybe someone can point me in the right direction.
I've created an S3 bucket with Terraform that uses AWS KMS to give the bucket server-side encryption (SSE). Uploading to this bucket works fine from the CLI, but uploading from a Lambda created by Serverless returns an "Access Denied" error.
The serverless yaml grants permission for uploads to S3, and I've tested this with SSE turned off: it works fine.
What I don't understand is how to specify the key for AWS. I thought adding it at the top of the service definition might work, but to no avail.
Here is the yaml file:
service:
  name: lambdas
  awsKmsKeyArn: [KEY GOES HERE]

custom:
  serverless-offline:
    port: 3000
  bucket:
    name: evidence-bucket
    serverSideEncryption: aws:kms
    sseKMSKeyId: [KEY GOES HERE]

provider:
  name: aws
  runtime: nodejs12.x
  region: eu-west-2
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:ListBucket
        - s3:PutObject
        - s3:PutObjectAcl
      Resource: "arn:aws:s3:::${self:custom.bucket.name}/*"
    - Effect: Allow
      Action:
        - kms:Encrypt
        - kms:Decrypt
        - kms:DescribeKey
      Resource: "[KEY GOES HERE]"

functions:
  storeEvidence:
    handler: handler.storeEvidence
    environment:
      BUCKET: ${self:custom.bucket.name}
    events:
      - http:
          path: store-evidence
          method: post
Do I need an additional plugin? There is a lot of information about creating a bucket with Serverless, but not much about using an existing bucket with SSE. How do I get around this "Access Denied" message?
As jarmod said in the comments, you are missing kms:GenerateDataKey. Here is exactly what you need to add to your existing yaml shown above:
# ...
provider:
  name: aws
  runtime: nodejs12.x
  region: eu-west-2
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:ListBucket
        - s3:PutObject
        - s3:PutObjectAcl
      Resource: "arn:aws:s3:::${self:custom.bucket.name}/*"
    - Effect: Allow
      Action:
        - kms:Encrypt
        - kms:Decrypt
        - kms:DescribeKey
        - kms:GenerateDataKey # <------ this is the new permission
      Resource: "[KEY GOES HERE]"
# ...
It is also worth noting that if your code only uses s3:PutObject to upload, you don't need the kms:Encrypt and kms:DescribeKey permissions. See: https://aws.amazon.com/premiumsupport/knowledge-center/s3-access-denied-error-kms/
If your code performs multipart uploads, you do need kms:DescribeKey, kms:Encrypt and more (such as kms:ReEncrypt* and kms:GenerateDataKey*). See details: https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuAndPermissions.html
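As for how to specify the key: the service-level awsKmsKeyArn setting is used by the framework to encrypt Lambda environment variables, not S3 objects, so for SSE-KMS the key is passed on the upload call itself. A minimal sketch of the handler, assuming the Node.js AWS SDK v2 (the object key here is a placeholder):

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

module.exports.storeEvidence = async (event) => {
  await s3.putObject({
    Bucket: process.env.BUCKET,
    Key: 'evidence.json',            // hypothetical object key
    Body: event.body,
    ServerSideEncryption: 'aws:kms', // request SSE-KMS for this object
    SSEKMSKeyId: '[KEY GOES HERE]',  // the CMK ARN or key id
  }).promise();
  return { statusCode: 200, body: 'stored' };
};

If the bucket has default encryption configured with your CMK, you can omit the two SSE parameters and S3 will apply the key automatically; the KMS permissions above are still required either way.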
Try this:
iamRoleStatements:
  - Effect: Allow
    Action:
      - s3:*
    Resource: "arn:aws:s3:::${self:custom.bucket.name}/*"
  - Effect: Allow
    Action:
      - kms:*
    Resource: "[KEY GOES HERE]"
If this works, you know you were missing some action. It's then a painful process of finding that missing action, or, if you're happy, just leave the *s in.
I'm trying to set a lambda to trigger when an object is created in an S3 bucket.
My serverless.yml includes this:
handleNewRawObjectInS3:
  handler: lambdas/handleNewRawObjectInS3/handleNewRawObjectInS3.handleS3Event
  events:
    - s3:
        bucket: ${file(../evn.${opt:stage, 'dev'}.json):RAW_IMAGE_BUCKET}
        event: s3:ObjectCreated:*
This results in an error:
Error:
CREATE_FAILED: S3Bucketxxxxxxxxxrawimages (AWS::S3::Bucket)
xxxxxxxxx-raw-images already exists in stack arn:aws:cloudformation:us-east-1:xxxx:stack/s3-xxxxxxxxx-raw-images/310e7010-xxx-xxx-xxxx-12f066874c93
I already have the bucket -- created by uploading a CloudFormation template directly (our corporate version of AWS doesn't allow us to use the Serverless Framework to create buckets). "How to add S3 trigger event on AWS Lambda function using Serverless framework?" indicates this was not possible with older versions of Serverless, but following the rabbit hole, you can see a feature request ... and later an answer that shows you need to add 'existing: true'.
So I add that to my serverless framework setup:
service: my-service-events

provider:
  name: aws
  runtime: nodejs14.x
  region: ${file(../evn.${opt:stage, 'dev'}.json):REGION}
  stage: ${opt:stage, 'dev'}
  deploymentBucket: # must name a manually-created bucket for deployment because enterprise doesn't allow automated bucket creation
    name: ${file(../evn.${opt:stage, 'dev'}.json):DEPLOYMENT_BUCKET}
  iam: # must name a role because enterprise doesn't allow automated role creation
    role: myServerlessRole # a reference to the resource name of the role created in the resources -> iam roles section
    deploymentRole: ${file(../evn.${opt:stage, 'dev'}.json):DEPLOYMENT_ROLE}

resources:
  - ${file(../iam-roles.${opt:stage, 'dev'}.yml)}

functions:
  handleNewRawObjectInS3:
    handler: lambdas/handleNewRawObjectInS3/handleNewRawObjectInS3.handleS3Event
    events:
      - s3:
          bucket: ${file(../evn.${opt:stage, 'dev'}.json):RAW_IMAGE_BUCKET}
          event: s3:ObjectCreated:*
          existing: true
The IAM file/role referenced above looks like this:
Resources:
  myServerlessRole:
    Type: AWS::IAM::Role
    Properties:
      PermissionsBoundary: arn:aws:iam::xxx:policy/csr-Developer-Permissions-Boundary
      Path: /my/default/path/
      RoleName: myServerlessRole-${self:service}
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - lambda.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole
      Policies:
        - PolicyName: myPolicyName
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - logs:CreateLogGroup
                  - logs:CreateLogStream
                  - logs:PutLogEvents
                Resource:
                  - 'Fn::Join':
                      - ':'
                      - - 'arn:aws:logs'
                        - Ref: 'AWS::Region'
                        - Ref: 'AWS::AccountId'
                        - 'log-group:/aws/lambda/*:*:*'
              - Effect: "Allow"
                Action:
                  - "s3:*"
                Resource: "arn:aws:s3:::*"
              - Effect: "Allow"
                Action:
                  - "lambda:*"
                Resource: "*"
              - Effect: "Allow"
                Action:
                  - cloudfront:CreateDistribution
                  - cloudfront:GetDistribution
                  - cloudfront:UpdateDistribution
                  - cloudfront:DeleteDistribution
                  - cloudfront:TagResource
                Resource: "arn:aws:cloudfront:::*"
Trying to deploy this gets me the error:
Error:
CREATE_FAILED: CustomDashresourceDashexistingDashs3LambdaFunction (AWS::Lambda::Function)
Resource handler returned message: "The role defined for the function cannot be assumed by Lambda. (Service: Lambda, Status Code: 400, Request ID: 812c5384-1c26-42c9-bdef-1ce4a59f2be4)" (RequestToken: 9cdbb5af-3bc7-d6bf-384b-5126d1048ccd, HandlerErrorCode: InvalidRequest)
How can I deploy this lambda triggered by an s3 event?
The issue comes from the fact that the CustomDashresourceDashexistingDashs3LambdaFunction lambda runs as the deploymentRole, not under the defined role (which is what lambdas run under by default). Because a deployment role doesn't normally need to be assumable by Lambda, my deployment role was missing that trust relationship.
The fix is to ensure the sts:AssumeRole trust relationship for lambda.amazonaws.com has been applied to the deploymentRole, like so:
{
  "Effect": "Allow",
  "Principal": {
    "Service": "lambda.amazonaws.com"
  },
  "Action": "sts:AssumeRole"
}
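In CloudFormation terms, the deployment role's trust policy ends up trusting both CloudFormation (for normal deploys) and Lambda (for the existing-bucket custom resource). A minimal sketch, assuming the role is declared in a template like the iam-roles file above (the logical name is illustrative):

myDeploymentRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - cloudformation.amazonaws.com # normal stack deploys
              - lambda.amazonaws.com         # the custom resource lambda
          Action: sts:AssumeRole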
Endeavouring to apply the principle of least privilege to a CMK I'm creating, the goal is a CloudFormation template that can be deployed via StackSets across the entire organisation. I want a key that can be used (kms:Encrypt, kms:Decrypt etc.) for general encryption tasks in the account, but that cannot be modified by principals in the account (specifically kms:PutKeyPolicy, but not only that).
I have a working policy lifted from a hand-crafted key. The CloudFormation template works fine, and the StackSet initiates the resource creation.
But it only works when I don't restrict the account principal with any conditions, which defeats the least-privilege principle. The CreateKey and PutKeyPolicy API calls both have a BypassPolicyLockoutSafetyCheck option for those of us idiotic enough to think we know better! Except CloudFormation doesn't expose that for AWS::KMS::Key :(
So unless I basically leave the key policy wide open, I get the following error in the Stack:
Resource handler returned message: "The new key policy will not allow you to update the key policy in the future. (Service: Kms, Status Code: 400, Request ID: xxxx, Extended Request ID: null)" (RequestToken: xxxx, HandlerErrorCode: InvalidRequest)
I've tried a variety of options for the principal. With the Condition removed (as below) the key is created; with it in place, no joy.
- Sid: AllowUpdatesByCloudFormation
  Effect: Allow
  Principal:
    AWS: !Sub "arn:aws:iam::${AWS::AccountId}:root"
  Action:
    - kms:Describe*
    - kms:List*
    - kms:PutKeyPolicy
    - kms:CreateAlias
    - kms:UpdateAlias
    - kms:UpdateKeyDescription
    - kms:EnableKey
    - kms:DisableKey
    - kms:EnableKeyRotation
    - kms:DisableKeyRotation
    - kms:GetKey*
    - kms:DeleteAlias
    - kms:TagResource
    - kms:UntagResource
    - kms:ScheduleKeyDeletion
    - kms:CancelKeyDeletion
  Resource: '*'
  # Condition:
  #   StringEquals:
  #     "aws:PrincipalAccount": !Sub ${AWS::AccountId}
  #     "kms:ViaService": !Sub "cloudformation.${AWS::Region}.amazonaws.com"
I've tried different principal settings, including AWS: "*" and a few different Service options, and different settings on the Condition. I've tried with and without the region in the service name. I must be missing something, but I've lost a few hours scratching my head over this one.
Web searches and AWS forum searches have turned up nothing, so I'm hoping the good burghers of Stack Overflow can guide me - and future me's searching for the same help :)
It doesn't seem possible to link to the table section of the AWS KMS API page for the condition keys on CreateKey or PutKeyPolicy, but I don't think I've missed a trick with those.
Thanks in advance - Robert.
Try giving the root user all kms permissions (kms:*). The principle of least privilege still applies when giving root all access: that statement enables IAM User permissions, i.e. it delegates access control for the key to IAM policies, and it is also what gets you past the key-policy lockout safety check.
Then add additional policies to each role, user, or user group: a policy for key administration and a policy for usage. That is where you can fine-tune your access.
Try working with this key policy and tweak it. It's a key I use for RDS encryption in a cfn stack.
(Yes, I know policies should be applied to user groups rather than directly to users for best practice... but this is inside a sandbox environment I have access to from 'A Cloud Guru'.)
KeyPolicy:
  Id: key-consolepolicy-3
  Version: "2012-10-17"
  Statement:
    - Sid: Enable IAM User Permissions
      Effect: Allow
      Principal:
        AWS: !Sub "arn:aws:iam::${AWS::AccountId}:root"
      Action: kms:*
      Resource: '*'
    - Sid: Allow access for Key Administrators
      Effect: Allow
      Principal:
        AWS:
          - !Sub "arn:aws:iam::${AWS::AccountId}:role/admin"
          - !Sub "arn:aws:iam::${AWS::AccountId}:user/cloud_user"
      Action:
        - kms:Create*
        - kms:Describe*
        - kms:Enable*
        - kms:List*
        - kms:Put*
        - kms:Update*
        - kms:Revoke*
        - kms:Disable*
        - kms:Get*
        - kms:Delete*
        - kms:TagResource
        - kms:UntagResource
        - kms:ScheduleKeyDeletion
        - kms:CancelKeyDeletion
      Resource: '*'
    - Sid: Allow use of the key
      Effect: Allow
      Principal:
        AWS: !Sub "arn:aws:iam::${AWS::AccountId}:role/aws-service-role/rds.amazonaws.com/AWSServiceRoleForRDS"
      Action:
        - kms:Encrypt
        - kms:Decrypt
        - kms:ReEncrypt*
        - kms:GenerateDataKey*
        - kms:DescribeKey
      Resource: '*'
    - Sid: Allow attachment of persistent resources
      Effect: Allow
      Principal:
        AWS: !Sub "arn:aws:iam::${AWS::AccountId}:role/aws-service-role/rds.amazonaws.com/AWSServiceRoleForRDS"
      Action:
        - kms:CreateGrant
        - kms:ListGrants
        - kms:RevokeGrant
      Resource: '*'
      Condition:
        Bool:
          kms:GrantIsForAWSResource: "true"
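If it helps, that KeyPolicy block just sits under the key's properties in the template; a minimal sketch of the surrounding resource (the logical name and description are illustrative):

MyKmsKey:
  Type: AWS::KMS::Key
  Properties:
    Description: CMK for general encryption tasks
    EnableKeyRotation: true
    KeyPolicy:
      # the KeyPolicy shown above goes here (Id, Version, Statement)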
Any idea why I’m getting an AccessDenied error when trying to upload to my S3 bucket?
serverless.yml:
service: foo-service

custom:
  bucket: my-bucket-name

provider:
  name: aws
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:PutObject
      Resource: "arn:aws:s3:::${self:custom.bucket}/*"

functions:
  hello:
    handler: handler.hello
    environment:
      BUCKET: ${self:custom.bucket}
I'm trying to add a file to S3 with public-read permissions.
The s3:PutObject permission alone allows you to add an object to the S3 bucket, but if you set any ACL attributes (such as public-read) you'll also need the s3:PutObjectAcl permission.
It should be like this:
provider:
  name: aws
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:PutObject
        - s3:PutObjectAcl
      Resource: "arn:aws:s3:::${self:custom.bucket}/*"
I am configuring an AWS Cognito Identity Pool using the Serverless Framework, and I am editing the yml configuration to add an unauthenticated role so users can upload an image to an S3 bucket.
The code was previously deployed without an unauthenticated role, and the deployment was fine and stable. While looking for a way to control access to the S3 bucket, I discovered that the only way to give write, but not read, permissions on an S3 bucket is to specify it in a user policy, so I have to add an unauthenticated role to the identity pool. However, when I deploy, I get an error stating:
Serverless Error ---------------------------------------
An error occurred: CognitoIdentityPoolRoles - Resource cannot be updated.
I have managed to get around the problem in the dev environment but it required totally deleting the stack and rebuilding it from scratch.
I also do not want to adjust the resources manually in the AWS console: resources should be managed either in CloudFormation or in the console, because doing it both ways leads to chaos.
So, at the moment, the options I see are to delete the entire stack and rebuild it with the new roles, or find a way to update through cloudformation.
Does anyone have a way to avoid the first option and allow me to update the stack without attaching the role in the console?
Relevant section of serverless.yml is below...
Resources:
  # The federated identity for our user pool to auth with
  CognitoIdentityPool:
    Type: AWS::Cognito::IdentityPool
    Properties:
      # Generate a name based on the stage
      IdentityPoolName: ${self:custom.stage}MyIdentityPool
      # Allow unauthenticated users
      AllowUnauthenticatedIdentities: true
      # Link to our User Pool
      CognitoIdentityProviders:
        - ClientId:
            Ref: CognitoUserPoolClient
          ProviderName:
            Fn::GetAtt: [ "CognitoUserPool", "ProviderName" ]

  # IAM roles
  CognitoIdentityPoolRoles:
    Type: AWS::Cognito::IdentityPoolRoleAttachment
    Properties:
      IdentityPoolId:
        Ref: CognitoIdentityPool
      Roles:
        authenticated:
          Fn::GetAtt: [CognitoAuthRole, Arn]
        # Next two lines are the 2 lines of code which break everything
        unauthenticated:
          Fn::GetAtt: [CognitoUnAuthRole, Arn]

  # IAM role for UN-authenticated users
  CognitoUnAuthRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: 'Allow'
            Principal:
              Federated: 'cognito-identity.amazonaws.com'
            Action:
              - 'sts:AssumeRoleWithWebIdentity'
            Condition:
              StringEquals:
                'cognito-identity.amazonaws.com:aud':
                  Ref: CognitoIdentityPool
              'ForAnyValue:StringLike':
                'cognito-identity.amazonaws.com:amr': unauthenticated
      Policies:
        - PolicyName: 'CognitoUnAuthorizedPolicy'
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: 'Allow'
                Action:
                  - 'mobileanalytics:PutEvents'
                  - 'cognito-sync:*'
                  - 'cognito-identity:*'
                Resource: '*'
              # Allow users to upload attachments to their
              # folder inside our S3 bucket
              - Effect: 'Allow'
                Action:
                  - 's3:PutObject'
                Resource:
                  - Fn::Join:
                      - ''
                      - - Fn::GetAtt: [MediafilesBucket, Arn]
                        - '/submissions/'
Fixed.
I commented out the sections of serverless.yml related to the identity pool and deployed (destroying those resources), then uncommented that section, redeployed, and restored from backup.
It seems to be a bit of a hack, but it worked.
I also feel like there should be a way to edit identity pool roles through CloudFormation...
How can I allow a specific lambda to access a particular S3 bucket in serverless.yml?
For example, I am porting file-upload functionality to Lambda using Serverless. To upload a file to a particular S3 bucket, I need to allow the lambda access to that bucket. How can I do this in serverless.yml?
From Serverless Framework - AWS Lambda Guide - IAM:
To add specific rights to this service-wide Role, define statements in provider.iamRoleStatements which will be merged into the generated policy.
service: new-service

provider:
  name: aws
  iam:
    role:
      statements:
        - Effect: 'Allow'
          Action:
            - 's3:ListBucket'
          Resource:
            Fn::Join:
              - ''
              - - 'arn:aws:s3:::'
                - Ref: ServerlessDeploymentBucket
        - Effect: 'Allow'
          Action:
            - 's3:PutObject'
          Resource:
            Fn::Join:
              - ''
              - - 'arn:aws:s3:::'
                - Ref: ServerlessDeploymentBucket
                - '/*'
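The docs example targets the framework's own deployment bucket; for an existing bucket of your own, the same pattern applies with the bucket name inlined. A minimal sketch, using my-upload-bucket as a placeholder name:

provider:
  name: aws
  iam:
    role:
      statements:
        - Effect: 'Allow'
          Action:
            - 's3:PutObject'
          Resource: 'arn:aws:s3:::my-upload-bucket/*' # hypothetical bucket name

Note that s3:ListBucket applies to the bucket ARN itself, while object-level actions like s3:PutObject need the /* suffix.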