Error Updating Stack to Add S3 Trigger - amazon-web-services

I successfully created a Lambda function and an S3 bucket using a CloudFormation stack. I then ran an update on the stack to add a trigger to the S3 bucket that invokes the Lambda function.
When I run the update, it gives the following error:
Unable to validate the following destination configurations (Service: Amazon S3; Status Code: 400; Error Code: InvalidArgument; Request ID: XXXXX; S3 Extended Request ID: XXXXX)
This is the update JSON I'm using to add the trigger to the S3 bucket:
"MyBucket": {
"Type": "AWS::S3::Bucket",
"Properties": {
"BucketName": "my-bucket",
"NotificationConfiguration": {
"LambdaConfigurations": [
{
"Event": "s3:ObjectCreated:*",
"Function": "arn:aws:lambda:ap-southeast-2:my-lambda-arn"
}
]
}
I then added an IAM role to give the S3 bucket access to invoke the Lambda function:
"ResourceAccess": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"lambda.amazonaws.com"
]
},
"Action": [
"sts:AssumeRole"
]
}
]
},
"Path": "/",
"Policies": [
{
"PolicyName": "giveaccesstodeltas3",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "s3.amazonaws.com"
},
"Action": "lambda:InvokeFunction",
"Resource": "arn:aws:lambda:ap-southeast-2:my-lambda-arn",
"Condition": {
"StringEquals": {
"AWS:SourceAccount": "123456"
},
"ArnLike": {
"AWS:SourceArn": "arn:aws:s3:::my-bucket"
}
}
}
]
}
}
]
}
It's giving an error saying:
Policy document should not specify a principal. (Service: AmazonIdentityManagement; Status Code: 400; Error Code: MalformedPolicyDocument; Request ID: XXXXXX)

In order to add this trigger, you must give your S3 bucket permission to invoke the Lambda function. In addition, your Lambda must have permission to invoke whatever services it affects. My guess is that you are missing the first of these: permission for your S3 bucket to invoke your Lambda function.
You can create a policy similar to the following to grant that permission:
{
  "Version": "2012-10-17",
  "Id": "default",
  "Statement": [
    {
      "Sid": "<optional>",
      "Effect": "Allow",
      "Principal": {
        "Service": "s3.amazonaws.com"
      },
      "Action": "lambda:InvokeFunction",
      "Resource": "<ArnToYourFunction>",
      "Condition": {
        "StringEquals": {
          "AWS:SourceAccount": "<YourAccountId>"
        },
        "ArnLike": {
          "AWS:SourceArn": "arn:aws:s3:::<YourBucketName>"
        }
      }
    }
  ]
}
See this AWS documentation for more info.
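Since everything here is defined in CloudFormation, the idiomatic way to grant that permission is an AWS::Lambda::Permission resource rather than an IAM role (which is also why the role-based attempt fails: identity policies must not name a Principal). A minimal sketch, reusing the placeholder ARNs and account ID from the question; the resource name is arbitrary, and making the bucket DependsOn this resource helps ensure the permission exists when S3 validates the notification configuration during the update:
"MyBucketInvokePermission": {
  "Type": "AWS::Lambda::Permission",
  "Properties": {
    "Action": "lambda:InvokeFunction",
    "FunctionName": "arn:aws:lambda:ap-southeast-2:my-lambda-arn",
    "Principal": "s3.amazonaws.com",
    "SourceAccount": "123456",
    "SourceArn": "arn:aws:s3:::my-bucket"
  }
}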

Related

Cannot set S3 trigger for Lambda function in AWS

I've been all over the internet looking for a solution to this. I have been trying to set up an AWS Lambda function to send a message to SNS every time a file is uploaded to a particular S3 bucket, according to this tutorial. At this point, I have the function set up and I can invoke it successfully. However, when I attempt to connect the function to S3, I get an error stating "An error occurred (InvalidArgument) when calling the PutBucketNotification operation: Unable to validate the following destination configurations". According to this article, I should be able to add a permission that will let S3 invoke the Lambda function, like this:
aws lambda add-permission \
--function-name my-file-upload \
--principal s3.amazonaws.com \
--statement-id AcceptFromImport \
--action "lambda:InvokeFunction" \
--source-arn arn:aws:s3:::file-import \
--source-account my_account_id
I did this, and noticed that the policy associated with the Lambda function updated and appeared to be correct. However, the error persists. I've looked at a similar question, but none of the solutions there worked.
Execution Role ARN: arn:aws:iam::my_account_id:role/lambda-upload-stream
Execution Role (lambda-upload-stream) trust relationship:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
Execution Role policy (my-file-upload):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AccessObject",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::file-import/*"
    },
    {
      "Sid": "SendUpdate",
      "Effect": "Allow",
      "Action": "sns:Publish",
      "Resource": "arn:aws:sns:ap-northeast-1:my_account_id:comm-in"
    }
  ]
}
Lambda function ARN: arn:aws:lambda:ap-northeast-1:my_account_id:function:my-file-upload
Lambda function role document:
{
  "roleName": "lambda-upload-stream",
  "policies": [
    {
      "name": "my-file-upload",
      "id": "AWS_ACCESS_KEY_ID",
      "type": "managed",
      "arn": "arn:aws:iam::my_account_id:policy/my-file-upload",
      "document": {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Sid": "AccessObject",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::file-import/*"
          },
          {
            "Sid": "SendUpdate",
            "Effect": "Allow",
            "Action": "sns:Publish",
            "Resource": "arn:aws:sns:ap-northeast-1:my_account_id:comm-in"
          }
        ]
      }
    }
  ],
  "resources": {
    "s3": {
      "service": {
        "name": "Amazon S3",
        "icon": "data:image/svg+xml;base64,very_long_base64_string1"
      },
      "statements": [
        {
          "resource": "arn:aws:s3:::file-import/*",
          "service": "s3",
          "effect": "Allow",
          "action": "s3:GetObject",
          "source": {
            "index": "AccessObject",
            "policyName": "my-file-upload",
            "policyType": "managed"
          }
        }
      ]
    },
    "sns": {
      "service": {
        "name": "Amazon SNS",
        "icon": "data:image/svg+xml;base64,very_long_base64_string2"
      },
      "statements": [
        {
          "resource": "arn:aws:sns:ap-northeast-1:my_account_id:comm-in",
          "service": "sns",
          "effect": "Allow",
          "action": "sns:Publish",
          "source": {
            "index": "SendUpdate",
            "policyName": "my-file-upload",
            "policyType": "managed"
          }
        }
      ]
    }
  },
  "trustedEntities": [
    "lambda.amazonaws.com"
  ]
}
Lambda function resource policy:
{
  "Version": "2012-10-17",
  "Id": "default",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "s3.amazonaws.com"
      },
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:ap-northeast-1:my_account_id:function:my-file-upload",
      "Condition": {
        "StringEquals": {
          "AWS:SourceAccount": "my_account_id"
        },
        "ArnLike": {
          "AWS:SourceArn": "arn:aws:s3:::file-import"
        }
      }
    }
  ]
}
My question is: what am I doing wrong here and how do I fix it?
The thing you need to create is called a "Resource-based policy", and is what should be created by aws lambda add-permission.
A resource-based policy gives S3 permission to invoke your Lambda. It is a property of the Lambda function itself and is not part of your Lambda's IAM role: the IAM role controls what your Lambda can do, while the resource-based policy controls who can do what to your Lambda. You can view it in the AWS console by going to your Lambda, clicking "Permissions", and scrolling down to "Resource-based policy". The keyword to look out for is lambda:InvokeFunction, which is what gives other things permission to call your Lambda, including other AWS accounts and other AWS services in your account (like S3).
That being said, the command you ran should have created this policy. Did you make sure to replace my_account_id with your actual account id when you ran the command?
In addition, make sure you replace --source-arn arn:aws:s3:::file-import with the actual ARN of your bucket (I assume you had to create a bucket with a different name because s3 buckets must have globally unique names, and file-import is almost surely already taken)
I figured out what the problem was. My initial command was:
aws s3api put-bucket-notification --bucket azure-erp-import \
--notification-configuration "CloudFunctionConfiguration={Id=file-uploaded,Events=[],Event=s3:ObjectCreated:*,CloudFunction=arn:aws:lambda:ap-northeast-1:my_account_id:function:my-file-upload,InvocationRole=arn:aws:iam::my_account_id:role/lambda-upload-stream}"
This failed because the arn:aws:iam::my_account_id:role/lambda-upload-stream role doesn't have permissions to call lambda:InvokeFunction on the lambda function. Removing this value fixed the error.
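For reference, a sketch of the same notification configuration with the legacy InvocationRole key simply omitted, written as JSON (the --notification-configuration parameter accepts a JSON structure as well as the shorthand syntax, and the non-deprecated Events list is used in place of the singular Event key; Id and ARNs are the ones from the failing command):
{
  "CloudFunctionConfiguration": {
    "Id": "file-uploaded",
    "Events": ["s3:ObjectCreated:*"],
    "CloudFunction": "arn:aws:lambda:ap-northeast-1:my_account_id:function:my-file-upload"
  }
}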

Upload to S3 failed with the following error: Access Denied - CodeStarConnections

I am building a CI/CD pipeline using AWS CodePipeline. The repository source is on Bitbucket, and I used AWS CodeStar Connections to create a connection between the Bitbucket repository and the pipeline.
The pipeline details are below:
{
  "pipeline": {
    "name": "test_pipeline",
    "roleArn": "arn:aws:iam::<AccountId>:role/PipelineServiceRole",
    "artifactStore": {
      "type": "S3",
      "location": "tadadadada-artifact"
    },
    "stages": [
      {
        "name": "Source",
        "actions": [
          {
            "name": "Source",
            "actionTypeId": {
              "category": "Source",
              "owner": "AWS",
              "provider": "CodeStarSourceConnection",
              "version": "1"
            },
            "runOrder": 1,
            "configuration": {
              "BranchName": "dev",
              "ConnectionArn": "arn:aws:codestar-connections:us-east-2:<AccountId>:connection/4ca7b1cf-2917-4fda-b681-c5239944eb33",
              "FullRepositoryId": "<username>/repository_name",
              "OutputArtifactFormat": "CODE_ZIP"
            },
            "outputArtifacts": [
              {
                "name": "SourceArtifact"
              }
            ],
            "inputArtifacts": [],
            "region": "us-east-2",
            "namespace": "SourceVariables"
          }
        ]
      },
      {
        "name": "Build",
        "actions": [
          {
            ....
          }
        ]
      }
    ],
    "version": 1
  },
  "metadata": {
    "pipelineArn": "arn:aws:codepipeline:us-east-2:<AccountId>:test_pipeline",
    "created": 1611669087.267,
    "updated": 1611669087.267
  }
}
The PipelineServiceRole and the policy attached to it are:
Service Role
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "codepipeline.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
Policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "IamPassRolePolicy",
      "Effect": "Allow",
      "Action": [
        "iam:PassRole"
      ],
      "Resource": "*",
      "Condition": {
        "StringEqualsIfExists": {
          "iam:PassedToService": [
            "cloudformation.amazonaws.com",
            "ec2.amazonaws.com",
            "ecs-tasks.amazonaws.com"
          ]
        }
      }
    },
    {
      "Sid": "CodeBuildPolicy",
      "Effect": "Allow",
      "Action": [
        "codebuild:BatchGetBuilds",
        "codebuild:StartBuild"
      ],
      "Resource": "*"
    },
    {
      "Sid": "S3AccessPolicy",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:GetObjectVersion",
        "s3:GetBucketAcl",
        "s3:GetBucketLocation"
      ],
      "Resource": "*"
    },
    {
      "Sid": "ECRAccessPolicy",
      "Effect": "Allow",
      "Action": [
        "ecr:DescribeImages"
      ],
      "Resource": "*"
    },
    {
      "Sid": "CodeStarConnectionsAccessPolicy",
      "Effect": "Allow",
      "Action": [
        "codestar-connections:UseConnection"
      ],
      "Resource": "*"
    }
  ]
}
The source stage fails with an error:
[Bitbucket] Upload to S3 failed with the following error: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 085999D90C19E650; S3 Extended Request ID: gJ6l08+cX3U6i2Vj0+fW7PiqA/UzM6ZGCfyECmWb+Jit4Knu+gi/L4y3F24uqkFWUfGy9tZo0VE=; Proxy: null) (Service: null; Status Code: 0; Error Code: null; Request ID: null; S3 Extended Request ID: null; Proxy: null) (Service: null; Status Code: 0; Error Code: null; Request ID: null; S3 Extended Request ID: null; Proxy: null)
The error message lacks detail. I am not sure which service is trying to access S3; shouldn't it be CodePipeline, which in this case has PutObject permission?
Resolved this by changing the OutputArtifactFormat from "OutputArtifactFormat": "CODE_ZIP" to "OutputArtifactFormat": "CODEBUILD_CLONE_REF".
CODEBUILD_CLONE_REF is, per the console description, a full clone, in which case AWS CodePipeline passes metadata about the repository that allows subsequent actions to do a full git clone. It is only supported for AWS CodeBuild actions.
The "CODE_ZIP" option does not include the git metadata about the repository.
This issue appears to be related to a recent change in the CDK's default IAM Role for the BitBucketSourceAction.
I found that by adding the "s3:PutObjectAcl" action to the list I was able to successfully integrate the BitBucketSourceAction (for a GitHub version 2 connection). Note that this did not require:
changing the OutputArtifactFormat from "CODE_ZIP" to "CODEBUILD_CLONE_REF", or
S3 full access ("s3:*").
As detailed in this CDK issue, I was using the BitBucketSourceAction to integrate with a GitHub repository. I got the following error when the CodePipeline first attempted the GitHub (Version2) action:
[GitHub] Upload to S3 failed with the following error: Access Denied
On a previous pipeline I released with the BitBucketSourceAction, the "s3:PutObject*" wildcarded action was included in the synthesized template. On reviewing the IAM role generated during my latest CDK deployment (using version 1.91.0), the BitBucketSourceAction only had the "s3:PutObject" action (i.e. not wildcarded). This excludes the "s3:PutObjectAcl" action, which seems to be required to upload the source repository from GitHub to S3 and free it up for use further along in the pipeline.
Adding the s3:PutObjectAcl action permission to the role policy associated with the Pipeline Bucket Store worked for me.
I had to add the following permissions:
s3:GetObject
s3:GetObjectVersion
s3:PutObject
s3:GetBucketVersioning
s3:PutObjectAcl
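Put together, a sketch of the amended S3 statement in the pipeline service-role policy above might look like this (same wildcard Resource as in the original policy; scope it to your artifact bucket if you prefer):
{
  "Sid": "S3AccessPolicy",
  "Effect": "Allow",
  "Action": [
    "s3:GetObject",
    "s3:GetObjectVersion",
    "s3:GetBucketVersioning",
    "s3:PutObject",
    "s3:PutObjectAcl"
  ],
  "Resource": "*"
}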
I had the same problem using GitHub.
[GitHub] Upload to S3 failed with the following error: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: foo; S3 Extended Request ID: bar; Proxy: null)
But the object in the artifact store S3 bucket was updated.
So I changed the S3 service policy to full access:
"s3:PutObject",
"s3:GetObject",
"s3:GetObjectVersion",
"s3:GetBucketVersioning",
↓
"s3:*",
I had this exact problem today and I don't know why this fixed it, but the policy attached to the PipelineGithubRole had two S3 statements: one contained just the List* action and the other contained all the Read and Put actions. I moved them into a single statement and it started working.
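I don't know the exact contents of that role, but the shape of the fix was merging the two statements into one; a hypothetical sketch (the action list is illustrative, not the actual role contents):
{
  "Effect": "Allow",
  "Action": [
    "s3:List*",
    "s3:Get*",
    "s3:Put*"
  ],
  "Resource": "*"
}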

Cross Account SNS Subscribe to Lambda in second account

I have used the policy below on the SNS topic (in account 111111111111) to subscribe to it from a Lambda function in account 222222222222. I have also given my Lambda access with a similar policy added to the execution role of the Lambda.
Getting the error below:
An error occurred when creating the trigger: User: arn:aws:sts::222222222222:assumed-role/TSI_Base_FullAccess/AXXXXXXXX is not authorized to perform: SNS:Subscribe on resource: arn:aws:sns:eu-west-1:111111111111:Story-5555 (Service: AmazonSNS; Status Code: 403; Error Code: AuthorizationError; Request ID: 1321942c-25c4-52a1-bacb-c2e9bd641067)
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1582008007178",
      "Action": [
        "sns:GetSubscriptionAttributes",
        "sns:GetTopicAttributes",
        "sns:ListSubscriptions",
        "sns:ListSubscriptionsByTopic",
        "sns:ListTagsForResource",
        "sns:ListTopics",
        "sns:Publish",
        "sns:Subscribe"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:sns:eu-west-1:111111111111:Story-5555",
      "Condition": {
        "ArnEquals": {
          "aws:PrincipalArn": "arn:aws:lambda:eu-west-1:222222222222:function:New_Cross_SNS"
        }
      }
    }
  ]
}
According to the AWS documentation, you should specify a principal in addition to the condition.
So your policy should resemble:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1582008007178",
      "Action": [
        "sns:GetSubscriptionAttributes",
        "sns:GetTopicAttributes",
        "sns:ListSubscriptions",
        "sns:ListSubscriptionsByTopic",
        "sns:ListTagsForResource",
        "sns:ListTopics",
        "sns:Publish",
        "sns:Subscribe"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:sns:eu-west-1:111111111111:Story-5555",
      "Principal": {
        "AWS": ["222222222222"]
      },
      "Condition": {
        "ArnEquals": {
          "aws:PrincipalArn": [
            "arn:aws:lambda:eu-west-1:222222222222:function:New_Cross_SNS",
            "arn:aws:sts::222222222222:assumed-role/TSI_Base_FullAccess/AXXXXXXXX"
          ]
        }
      }
    }
  ]
}
The way to be sure which ARN to specify in the condition section of the policy is to call (and print) get-caller-identity API from your function.
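Also note that the topic policy in account 111111111111 is only half of a cross-account subscription: the identity calling SNS:Subscribe in account 222222222222 (here the TSI_Base_FullAccess assumed role) also needs an identity-based policy allowing that action on the foreign topic. A minimal sketch, assuming it is attached to that role or to the Lambda's execution role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sns:Subscribe",
        "sns:GetTopicAttributes"
      ],
      "Resource": "arn:aws:sns:eu-west-1:111111111111:Story-5555"
    }
  ]
}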

Access denied when calling the AssumeRole operation

I have a CloudFormation template that creates a Lambda function as well as a role for that Lambda function. I try assuming the role in the Lambda function but keep getting the error:
An error occurred (AccessDenied) when calling the AssumeRole operation: Access denied
Is there a step I'm missing? Not sure why I don't have permission to assume the role. I'm assuming I'm missing some sort of permission if the error I'm getting is access denied as opposed to some execution error.
CloudFormation snippet:
"LambdaRoleCustomResource": {
"Type": "AWS::IAM::Role",
"Condition": "CreateWebACL",
"DependsOn": "WAFWebACL",
"Properties": {
"RoleName": {
"Fn::Join": ["-", [{
"Ref": "AWS::StackName"
}, "Custom-Resource"]]
},
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": {
"Service": ["lambda.amazonaws.com"]
},
"Action": ["sts:AssumeRole"]
}]
},
"Path": "/",
"Policies": [{
"PolicyName": "S3Access",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": [
"s3:CreateBucket",
"s3:GetBucketLocation",
"s3:GetBucketNotification",
"s3:GetObject",
"s3:ListBucket",
"s3:PutBucketNotification"
],
"Resource": {
"Fn::Join": ["", ["arn:aws:s3:::", {
"Ref": "AccessLogBucket"
}]]
}
}]
}
}
Lambda Function Snippet:
# Assume the custom-resource role and build a CloudFormation client from the temporary credentials
sts_client = boto3.client('sts')
sts_credentials = sts_client.assume_role(RoleArn='arn:aws:iam::XXXXXXXXX:role/portal-cloudfront-waf-Custom-Resource', RoleSessionName='custom-resource-cf-session')
sts_credentials = sts_credentials['Credentials']
cf = boto3.client('cloudformation', aws_access_key_id=sts_credentials['AccessKeyId'], aws_secret_access_key=sts_credentials['SecretAccessKey'], aws_session_token=sts_credentials['SessionToken'])
stack_name = event['ResourceProperties']['StackName']
cf_desc = cf.describe_stacks(StackName=stack_name)
global waf
# Assume the same role again and build a WAF client from the temporary credentials
sts_client = boto3.client('sts')
sts_credentials = sts_client.assume_role(RoleArn='arn:aws:iam::XXXXXXXX:role/portal-cloudfront-waf-Custom-Resource', RoleSessionName='custom-resource-waf-session')
sts_credentials = sts_credentials['Credentials']
s3 = boto3.client('waf', aws_access_key_id=sts_credentials['AccessKeyId'], aws_secret_access_key=sts_credentials['SecretAccessKey'], aws_session_token=sts_credentials['SessionToken'])
# Note: this client uses the default execution-role credentials, not the assumed role
waf = boto3.client('waf')
Your Lambda function will automatically use the permissions associated with the Role attached to the function. There is no need to create credentials.
So, just use:
cf = boto3.client('cloudformation')
s3 = boto3.client('waf')
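For context on the original error: the AccessDenied from AssumeRole is expected with the template above, because the role's trust policy only trusts the Lambda service principal (lambda.amazonaws.com), not the credentials your code actually runs with (the assumed-role session of the function's execution role). If you genuinely needed the function to assume this role, the trust policy would also have to list the execution role as a principal; a hypothetical sketch, where the execution-role ARN is a placeholder:
{
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::XXXXXXXXX:role/<your-lambda-execution-role>"
  },
  "Action": "sts:AssumeRole"
}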

Allow access to file only from specific Lambda [s3]

An S3 bucket (static web hosting) has a policy that denies everyone access to a certain file.
How can I allow only a specific Lambda function to access it (using only the bucket policy)?
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Authentication",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "NotResource": "arn:aws:s3:::web/auth.html"
    }
  ]
}
UPDATE: Replacing the previous policy with this one gives the desired result:
{
  "Version": "2012-10-17",
  "Id": "Policy1477651215159",
  "Statement": [
    {
      "Sid": "Console administration",
      "Effect": "Allow",
      "NotPrincipal": {
        "AWS": "arn:aws:iam::XXXX:role/role_lambda"
      },
      "Action": "s3:GetObject",
      "NotResource": "arn:aws:s3:::web/auth.html"
    }
  ]
}
Lambda functions run with an execution role. You can make a custom IAM role for your Lambda function; see this.
Then you can use that IAM role to grant access to that S3 object. See this article for steps to follow.
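If you want to stay within the bucket policy itself, as the question asks, you could instead add a statement that explicitly allows the Lambda's role to read the protected file alongside the public statement. A sketch reusing the role ARN from the question's update (the Sid is arbitrary):
{
  "Sid": "AllowLambdaAuthFile",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::XXXX:role/role_lambda"
  },
  "Action": "s3:GetObject",
  "Resource": "arn:aws:s3:::web/auth.html"
}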
This is a CloudFormation snippet. You can allow your Lambda role access to S3 using the following IAM policy statement:
"LambdaRolePolicy" : {
"Type": "AWS::IAM::Policy",
"Properties": {
"PolicyName": "Lambda",
"PolicyDocument": {
"Statement" : [ {
"Action" : [
"s3:PutObject",
"s3:PutObjectAcl"
],
"Effect" : "Allow",
"Resource" : {
"Fn::Join": [ "", [
"arn:aws:s3:::",
{ "Ref": "S3Bucket" },
"/*"
] ]
}
} ]
},
"Roles" : [ { "Ref": "RootRole" } ]
}
}
The S3Bucket resource is your S3 bucket and RootRole is the Lambda role.