How can I recreate a Lambda function via CloudFormation? - amazon-web-services

I am using Serverless Framework to manage IaC, which uses CloudFormation internally. There are a number of Lambdas and roles defined in serverless.yml. It worked very well until I tried to sls remove all resources and sls deploy again. After doing that, I get an error when running the Lambdas: The role defined for the function cannot be assumed by Lambda. (Service: AWSLambda; Status Code: 403; Error Code: AccessDeniedException; Request ID: 0879c203-bec7-480b-81c6-4c7a61e2cb15)
The error says the Lambda doesn't have permission; however, it works if I change the Lambda's role to something else and then change it back. It seems that the Lambda still references the deprecated role.
I wonder what the proper way is to do a remove followed by a deploy.
The role is:
{
  "Role": {
    "Path": "/",
    "RoleName": "getSiteHandlerRole",
    "RoleId": "xxxxx",
    "Arn": "arn:aws:iam::115136697128:role/getSiteHandlerRole",
    "CreateDate": "2020-07-27T03:37:18Z",
    "AssumeRolePolicyDocument": {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Service": "lambda.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    },
    "Description": "",
    "MaxSessionDuration": 3600,
    "Tags": [
      {
        "Key": "STAGE",
        "Value": "user"
      }
    ]
  }
}

Related

In CloudFormation, does "A DependsOn B" ensure that A is deleted before B?

We are using CloudFormation to set up a role and a policy for it. The policy is set to depend on the role using the "DependsOn" property like so:
Role definition:
"LambdaExecutionRole": {
"Type": "AWS::IAM::Role",
"Properties": {
[...]
Policy definition:
"lambdaexecutionpolicy": {
"DependsOn": [
"LambdaExecutionRole"
],
"Roles": [
{
"Ref": "LambdaExecutionRole"
}
],
[...]
From the official documentation, I understand that this DependsOn relation between the two entities should ensure that the policy is always deleted before the role.
Resource A is deleted before resource B.
However, we encounter an error where it appears that the system tries to delete the role before the policy:
Resource Name: [...] (AWS::IAM::Role)
Event Type: delete
Reason: Cannot delete entity, must delete policies first. (Service: AmazonIdentityManagement; Status Code: 409; Error Code: DeleteConflict; Request ID: [...]; Proxy: null)
I'm not sure how that's even possible, as I would have considered the "A DependsOn B" to ensure that the system never tries to delete B before deleting A. Is my understanding wrong here? Can there be a situation where the system tries to delete B before A?
And yes, I understand that in this case the obvious solution is to use an inline policy, as the policy is only used for this specific role. But as this behavior seems to conflict with my intuitive understanding of the official documentation, I want to properly understand what the "DependsOn" property actually means.
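For reference, the inline-policy variant mentioned above might look roughly like the sketch below (resource and policy names are illustrative, carried over from the snippets in the question). Because the policy lives inside the role resource, CloudFormation can never attempt to delete the role while the policy still exists:

```json
"LambdaExecutionRole": {
  "Type": "AWS::IAM::Role",
  "Properties": {
    "AssumeRolePolicyDocument": {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": { "Service": "lambda.amazonaws.com" },
        "Action": "sts:AssumeRole"
      }]
    },
    "Policies": [{
      "PolicyName": "lambdaexecutionpolicy",
      "PolicyDocument": {
        "Version": "2012-10-17",
        "Statement": [{
          "Effect": "Allow",
          "Action": ["logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"],
          "Resource": "*"
        }]
      }
    }]
  }
}
```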
TL;DR Unable to replicate the error. DependsOn does not seem to be the culprit.
I used the CDK to create two versions of a minimum test stack with only two resources, AWS::IAM::Role and AWS::IAM::ManagedPolicy. V1 had no explicit policy dependency set on the role. V2, like the OP, did. The difference made no difference. Both versions deployed and were destroyed without error.
Version 1 (CDK-generated default): no DependsOn in the template.
Version 2 (as in the OP): explicit dependency, with the policy depending on the role. The CDK added one line to the template: "DependsOn": ["TestRole6C9272DF"] under "TestPolicyCC05E598".
That single DependsOn was the only difference between the two versions.
// Resources section of the CDK-generated CloudFormation template
"Resources": {
  "TestRole6C9272DF": {
    "Type": "AWS::IAM::Role",
    "Properties": {
      "AssumeRolePolicyDocument": {
        "Statement": [
          {
            "Action": "sts:AssumeRole",
            "Effect": "Allow",
            "Principal": {
              "Service": "lambda.amazonaws.com"
            }
          }
        ],
        "Version": "2012-10-17"
      }
    },
    "Metadata": {
      "aws:cdk:path": "TsCdkPlaygroundIamDependencyStack/TestRole/Resource"
    }
  },
  "TestPolicyCC05E598": {
    "Type": "AWS::IAM::ManagedPolicy",
    "Properties": {
      "PolicyDocument": {
        "Statement": [
          {
            "Action": [
              "logs:CreateLogGroup",
              "logs:CreateLogStream",
              "logs:PutLogEvents"
            ],
            "Effect": "Allow",
            "Resource": "*"
          }
        ],
        "Version": "2012-10-17"
      },
      "Description": "",
      "Path": "/",
      "Roles": [
        {
          "Ref": "TestRole6C9272DF"
        }
      ]
    },
    "DependsOn": [
      "TestRole6C9272DF" // <-- The difference that makes no difference
    ],
    "Metadata": {
      "aws:cdk:path": "TsCdkPlaygroundIamDependencyStack/TestPolicy/Resource"
    }
  },

Permissions for jobs from a specific AWS Batch queue

How can I allow only jobs from a certain AWS Batch queue (and based on a specific job definition) to publish to a specific SNS topic?
I thought about attaching an IAM policy to the jobs with the statement:
{
  "Effect": "Allow",
  "Action": "sns:Publish",
  "Resource": ["<arn of the specific SNS topic>"],
  "Condition": {"ArnEquals": {"aws:SourceArn": "arn:aws:???"}}
}
But what should the source ARN be? The ARN of the job queue, or the ARN of the job definition? Or should this be set up completely differently?
I had a similar experience when working with AWS Batch jobs executed in Fargate containers, which follow the same principles as ECS with regard to assigning roles and permissions.
If you are going to publish messages to a specific topic from the code executed inside your container, you should create a role with the necessary permissions and then use its ARN in the JobRoleArn property of your job definition.
For example (there may be minor mistakes in the code below, but I am just trying to explain the concept here):
Role cloudformation:
"roleresourceID": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Statement": [
{
"Action": "sts:AssumeRole",
"Effect": "Allow",
"Principal": {
"AWS": "*"
}
}
],
"Version": "2012-10-17"
},
"RoleName": "your-job-role"
}
}
Policy attached to the role:
"policyresourceid": {
"Type": "AWS::IAM::Policy",
"Properties": {
"PolicyDocument": {
"Statement": [
{
"Action": "sns:Publish",
"Effect": "Allow",
"Resource": "<arn of the specific SNS topic>"
}
],
"Version": "2012-10-17"
},
"PolicyName": "your-job-role-policy",
"Roles": [
{
"Ref": "roleresourceID"
}
]
}
}
And finally, attach the role to the job definition:
....other job definition properties
"JobRoleArn": {
  "Fn::GetAtt": [
    "roleresourceID",
    "Arn"
  ]
}
Of course, you may structure and format the roles and policies any way you like; the main idea of this explanation is that you need to attach the proper role using the JobRoleArn property of your job definition.

Upload to S3 failed with the following error: Access Denied - CodeStarConnections

I am building a CI/CD pipeline using AWS CodePipeline. The repository source is on Bitbucket, and I used AWS CodeStar Connections to create a connection between the Bitbucket repository and the pipeline.
The pipeline details are below:
{
  "pipeline": {
    "name": "test_pipeline",
    "roleArn": "arn:aws:iam::<AccountId>:role/PipelineServiceRole",
    "artifactStore": {
      "type": "S3",
      "location": "tadadadada-artifact"
    },
    "stages": [
      {
        "name": "Source",
        "actions": [
          {
            "name": "Source",
            "actionTypeId": {
              "category": "Source",
              "owner": "AWS",
              "provider": "CodeStarSourceConnection",
              "version": "1"
            },
            "runOrder": 1,
            "configuration": {
              "BranchName": "dev",
              "ConnectionArn": "arn:aws:codestar-connections:us-east-2:<AccountId>:connection/4ca7b1cf-2917-4fda-b681-c5239944eb33",
              "FullRepositoryId": "<username>/repository_name",
              "OutputArtifactFormat": "CODE_ZIP"
            },
            "outputArtifacts": [
              {
                "name": "SourceArtifact"
              }
            ],
            "inputArtifacts": [],
            "region": "us-east-2",
            "namespace": "SourceVariables"
          }
        ]
      },
      {
        "name": "Build",
        "actions": [
          {
            ....
          }
        ]
      }
    ],
    "version": 1
  },
  "metadata": {
    "pipelineArn": "arn:aws:codepipeline:us-east-2:<AccountId>:test_pipeline",
    "created": 1611669087.267,
    "updated": 1611669087.267
  }
}
The PipelineServiceRole and the policy attached to it are:
Service Role
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "codepipeline.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
Policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "IamPassRolePolicy",
      "Effect": "Allow",
      "Action": [
        "iam:PassRole"
      ],
      "Resource": "*",
      "Condition": {
        "StringEqualsIfExists": {
          "iam:PassedToService": [
            "cloudformation.amazonaws.com",
            "ec2.amazonaws.com",
            "ecs-tasks.amazonaws.com"
          ]
        }
      }
    },
    {
      "Sid": "CodeBuildPolicy",
      "Effect": "Allow",
      "Action": [
        "codebuild:BatchGetBuilds",
        "codebuild:StartBuild"
      ],
      "Resource": "*"
    },
    {
      "Sid": "S3AccessPolicy",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:GetObjectVersion",
        "s3:GetBucketAcl",
        "s3:GetBucketLocation"
      ],
      "Resource": "*"
    },
    {
      "Sid": "ECRAccessPolicy",
      "Effect": "Allow",
      "Action": [
        "ecr:DescribeImages"
      ],
      "Resource": "*"
    },
    {
      "Sid": "CodeStarConnectionsAccessPolicy",
      "Effect": "Allow",
      "Action": [
        "codestar-connections:UseConnection"
      ],
      "Resource": "*"
    }
  ]
}
The source stage fails with an error:
[Bitbucket] Upload to S3 failed with the following error: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 085999D90C19E650; S3 Extended Request ID: gJ6l08+cX3U6i2Vj0+fW7PiqA/UzM6ZGCfyECmWb+Jit4Knu+gi/L4y3F24uqkFWUfGy9tZo0VE=; Proxy: null) (Service: null; Status Code: 0; Error Code: null; Request ID: null; S3 Extended Request ID: null; Proxy: null) (Service: null; Status Code: 0; Error Code: null; Request ID: null; S3 Extended Request ID: null; Proxy: null)
The error message lacks details. I am not sure which service is trying to access S3; shouldn't it be CodePipeline (which in this case has PutObject permission)?
Resolved this by changing the OutputArtifactFormat from "CODE_ZIP" to "CODEBUILD_CLONE_REF".
CODEBUILD_CLONE_REF is, per the console description, a full clone, in which case:
AWS CodePipeline passes metadata about the repository that allows subsequent actions to do a full git clone. Only supported for AWS CodeBuild actions.
The "CODE_ZIP" option does not include the git metadata about the repository.
This issue appears to be related to a recent change in the CDK's default IAM role for the BitBucketSourceAction.
I found that by adding the "s3:PutObjectAcl" action to the list I was able to successfully integrate the BitBucketSourceAction (for a GitHub version 2 connection). Note: this did not require:
changing the OutputArtifactFormat from "CODE_ZIP" to "CODEBUILD_CLONE_REF", or
full S3 access ("s3:*").
As detailed in this CDK issue, I was using the BitBucketSourceAction to integrate with a GitHub repository. I got the following error when the CodePipeline first attempted the GitHub (version 2) action:
[GitHub] Upload to S3 failed with the following error: Access Denied
On a previous pipeline I released with the BitBucketSourceAction, the wildcarded "s3:PutObject*" action was included in the synthesized template. On reviewing the IAM role generated during my latest cdk deployment (using version 1.91.0), the BitBucketSourceAction only had the "s3:PutObject" action (i.e. not wildcarded). This excludes the "s3:PutObjectAcl" action, which seems to be required to upload the source repository from GitHub to S3 and free it up for use further along in the pipeline.
Adding the s3:PutObjectAcl action permission to the role policy associated with the Pipeline Bucket Store worked for me.
I had to add the following permissions:
s3:GetObject
s3:GetObjectVersion
s3:PutObject
s3:GetBucketVersioning
s3:PutObjectAcl
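Put together, the S3 statement in the pipeline role's policy would look something like the sketch below. The bucket ARNs are illustrative, reusing the artifact bucket name from the question; scoping Resource to the artifact bucket rather than "*" is generally advisable:

```json
{
  "Sid": "S3ArtifactAccessPolicy",
  "Effect": "Allow",
  "Action": [
    "s3:GetObject",
    "s3:GetObjectVersion",
    "s3:PutObject",
    "s3:PutObjectAcl",
    "s3:GetBucketVersioning"
  ],
  "Resource": [
    "arn:aws:s3:::tadadadada-artifact",
    "arn:aws:s3:::tadadadada-artifact/*"
  ]
}
```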
I had the same problem using GitHub.
[GitHub] Upload to S3 failed with the following error: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: foo; S3 Extended Request ID: bar; Proxy: null)
But the object had actually been uploaded to the artifact store S3 bucket.
So I changed the S3 policy to full access:
"s3:PutObject",
"s3:GetObject",
"s3:GetObjectVersion",
"s3:GetBucketVersioning",
↓
"s3:*",
Had this exact problem today, and I don't know why this fixed it, but the policy attached to the PipelineGithubRole had two S3 statements: one contained just the List* action and the other contained all the Read and Put actions. I merged them into a single statement and it started working.

Lambda call fails with no permission error

I have a custom resource in a CloudFormation template that references a Lambda function. Inside the Lambda function, I have written code to push items into a DynamoDB table. However, the operation fails while the CloudFormation stack is being created. The error is as follows:
User: arn:aws:sts::551250655555:assumed-role/custom-resource-stack-CustomResourceLambdaExecutio-1OX3T8494LEP5/custom-resource-stack-CustomResourceFunction-1GLEDE3BEPWDP is not authorized to perform: dynamodb:DescribeTable on resource: arn:aws:dynamodb:us-east-1:551250655555:table/MasterTable1
My Lambda function name is: custom-resource-stack-CustomResourceFunction-1GLEDE3BEPWDP
and the custom role created is: custom-resource-stack-CustomResourceLambdaExecutio-1OX3T8494LEP5
However, in my serverless template file, I have provided the following permissions:
"CustomResourceLambdaExecutionPolicy": {
"DependsOn": ["CustomResourceLambdaExecutionRole"],
"Type": "AWS::IAM::Policy",
"Properties": {
"PolicyName": "CustomResourceLambdaExecutionPolicyDocument",
"Roles": [{
"Ref": "CustomResourceLambdaExecutionRole"
}],
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [{
"Sid": "DynamoDBAccess",
"Action": "dynamodb:*",
"Effect": "Allow",
"Resource": "*"
},
{
"Sid": "CloudwatchLogGroupAccess",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
}
}
which gives access to all DynamoDB operations and tables. Any ideas on what I am doing wrong here?
You are experiencing a race condition.
The Lambda function depends on the IAM role but not on the policy, so the function can be invoked before the IAM policy has been attached to the role.
If you add the policy to the role as part of the IAM role definition (as inline Policies), that should fix it.
You can also make the Lambda function depend on the IAM policy.
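A minimal sketch of the inline-policy fix, reusing the names from the question (the separate AWS::IAM::Policy resource is folded into the role; only the DynamoDB statement is shown):

```json
"CustomResourceLambdaExecutionRole": {
  "Type": "AWS::IAM::Role",
  "Properties": {
    "AssumeRolePolicyDocument": {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": { "Service": "lambda.amazonaws.com" },
        "Action": "sts:AssumeRole"
      }]
    },
    "Policies": [{
      "PolicyName": "CustomResourceLambdaExecutionPolicyDocument",
      "PolicyDocument": {
        "Version": "2012-10-17",
        "Statement": [{
          "Sid": "DynamoDBAccess",
          "Effect": "Allow",
          "Action": "dynamodb:*",
          "Resource": "*"
        }]
      }
    }]
  }
}
```

For the second approach, adding "DependsOn": ["CustomResourceLambdaExecutionPolicy"] to the Lambda function resource ensures the policy is attached before the function can be invoked.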

CloudFormation: The role defined for the function cannot be assumed by Lambda

I've been searching for this error and nothing really answers how I can fix it with my CloudFormation template. From the events log, I can see that the role was created before the Lambda functions.
Could you please help?
You are probably missing an AssumeRolePolicyDocument allowing Lambda (lambda.amazonaws.com) to assume your IAM role.
Example:
...
"LambdaRole": {
  "Type": "AWS::IAM::Role",
  "Properties": {
    "AssumeRolePolicyDocument": {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": ["sts:AssumeRole"]
      }]
    },
    "Path": "/",
    "Policies": [...]
  }
}
...