Unable to add CloudFront as trigger to Lambda function - amazon-web-services

Hi, I've followed these instructions to resize images with CloudFront and Lambda@Edge. When I try to test a resized image, I keep getting the error message below:
The Lambda function associated with the CloudFront distribution is
invalid or doesn't have the required permissions.
So I checked the Lambda functions created by the CloudFormation template provided in the article mentioned above, and I found there is no trigger on them.
I've tried to set it manually, but I get the error message below:
CloudFront events cannot be associated with $LATEST or Alias. Choose
Actions to publish a new version of your function, and then retry
association.
I followed the instructions in the error message: I published a new version and added CloudFront as a trigger, but there seems to be no way to apply it; the distribution is still running the version without CloudFront as the trigger.
Is there any way to set CloudFront as a trigger and make this work properly?

For people Googling "The Lambda function associated with the CloudFront distribution is invalid or doesn't have the required permissions":
I got that error and struggled to debug it. It turned out there were some programmatic errors inside my Lambda that I had to resolve. But how do you debug it when, on hitting CloudFront, you keep getting "The Lambda function associated with the CloudFront distribution is invalid or doesn't have the required permissions" and there is nothing inside the CloudWatch logs?
My Lambda was defined in CloudFormation inside an AWS::Lambda::Function's ZipFile attribute. I ended up going to the Lambda service inside AWS and creating a Lambda test payload corresponding to my CloudFront event, as documented here: Lambda@Edge Event Structure. Then I could debug the Lambda inside the Lambda console without having to hit CloudFront or navigate to the CloudWatch logs.
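If your handler happens to be Python, you can take the same idea one step further and invoke the handler locally with a hand-rolled event. A minimal sketch, assuming the code lives in index.py and exposes handler (both names hypothetical); the event carries only a few of the documented fields:

# debug_locally.py -- a minimal, hypothetical CloudFront origin-response
# test event; see the Lambda@Edge Event Structure docs for the full shape.
from index import handler  # assumes your code is in index.py

minimal_cf_event = {
    "Records": [{
        "cf": {
            "config": {"distributionId": "EXAMPLE", "eventType": "origin-response"},
            "request": {
                "uri": "/images/photo.jpg",
                "querystring": "width=100",
                "method": "GET",
                "headers": {"host": [{"key": "Host", "value": "example.cloudfront.net"}]},
            },
            "response": {"status": "200", "statusDescription": "OK", "headers": {}},
        }
    }]
}

# Any exception raised here is the kind of "programmatic error" that
# CloudFront hides behind the generic permissions message.
print(handler(minimal_cf_event, None))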

I see a couple of you stating that the root cause of the issue was not a permissions issue but an issue with your code, which is likely the correct root cause. CloudFront tends to use a 403 error for everything; even a basic 404 will show up as a 403 in most cases.
I have also seen some comments above stating that you could not find any logs associated with the error in Lambda. I think this is most likely because you are looking for the logs in us-east-1 while you don't live on the east coast of the USA. The logs land in the region where the function is executed, so choose the region closest to where you are sitting and you will likely find the log group there.
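If you would rather not click through every region in the console, here is a small boto3 sketch (function name hypothetical) that searches each region for the Lambda@Edge log groups, which are named /aws/lambda/us-east-1.<function-name> in whichever region actually served the request:

# find_edge_logs.py -- locate Lambda@Edge log groups across regions.
import boto3
from botocore.exceptions import ClientError

FUNCTION_NAME = "my-edge-function"  # hypothetical
PREFIX = "/aws/lambda/us-east-1." + FUNCTION_NAME

for region in boto3.session.Session().get_available_regions("logs"):
    try:
        logs = boto3.client("logs", region_name=region)
        groups = logs.describe_log_groups(logGroupNamePrefix=PREFIX)["logGroups"]
    except ClientError:
        continue  # region not enabled for this account
    for group in groups:
        print(region, group["logGroupName"])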

For other people suffering from the poor quality of dev articles on the AWS blog: I found it's due to a wrong S3 bucket policy. The article says:
ImageBucketPolicy:
  Type: AWS::S3::BucketPolicy
  Properties:
    Bucket: !Ref ImageBucket
    PolicyDocument:
      Statement:
        - Action:
            - s3:GetObject
          Effect: Allow
          Principal: "*"
          Resource: !Sub arn:aws:s3:::${ImageBucket}/*
        - Action:
            - s3:PutObject
          Effect: Allow
          Principal:
            AWS: !GetAtt EdgeLambdaRole.Arn
          Resource: !Sub arn:aws:s3:::${ImageBucket}/*
        - Action:
            - s3:GetObject
          Effect: Allow
          Principal:
            AWS: !GetAtt EdgeLambdaRole.Arn
          Resource: !Sub arn:aws:s3:::${ImageBucket}/*
It turns out you have to grant permissions for actions besides GetObject and PutObject, because the function also needs to create folders in the bucket.
The problem is simply resolved by changing the Lambda role's actions to s3:*.

For me, the missing CloudFront trigger on the Lambda screen was because I was not in the us-east-1 region (Lambda@Edge functions can only be authored in us-east-1).

I ran into the same error message with no logs in CloudWatch. I finally noticed that my Python runtime handler was index.handler while my index.py defined lambda_handler. After changing the runtime handler to index.lambda_handler, the error went away. HTH.
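In other words, the runtime handler setting must name both the module and the function. A minimal sketch of the matching pair:

# index.py -- with the handler setting "index.handler", the runtime looks
# for a function literally named `handler` in this module and fails;
# with "index.lambda_handler", it finds the function below.
def lambda_handler(event, context):
    return {"status": "200"}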

If you found this answer while googling "The Lambda function associated with the CloudFront distribution is invalid or doesn't have the required permissions": this can also be caused by your function not being wired up correctly in CloudFormation. For example, given the YAML:
Code: ./src/        # or CodeUri: ./src/
Handler: foo.bar
double-check that ./src/foo.js actually has exports.bar = function...
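The same wiring applies to other runtimes. For instance, a hypothetical Python equivalent of that check:

# ./src/foo.py -- with Handler: foo.bar, the runtime imports module `foo`
# from the code root (./src/) and calls its attribute `bar`.
def bar(event, context):
    return {"statusCode": 200}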

When I changed "Include body" in the Lambda function trigger from "Yes" to "No", it started working.
I had to delete and re-create the CloudFront trigger to change that setting.

Just reading an article from here: if you create a Lambda in one region and use it with CloudFront (so that it is later executed in another edge region, wherever the user's request lands), the issue can be that the Lambda's role does not have enough CloudWatch Logs permission.
Check this article; all credit goes to the author:
https://dev.to/aws-builders/authorizing-requests-with-lambdaedge-mjm
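If that is indeed the diagnosis, a rough boto3 sketch of widening the execution role's logging permissions; role and policy names are hypothetical, and the statement mirrors the usual basic-execution permissions with a wildcard region:

# grant_edge_logging.py -- allow the edge function's role to write logs
# in every region, since replicas log wherever they execute.
import json
import boto3

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName="EdgeLambdaRole",           # hypothetical role name
    PolicyName="edge-logs-all-regions",  # hypothetical policy name
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"],
            "Resource": "arn:aws:logs:*:*:*",
        }],
    }),
)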

Related

AWS CloudFormation error: iam:PutRolePolicy

I'm getting this error while modifying the stack change set:
API: iam:PutRolePolicy User: arn:aws:sts::769558805:assumed-role/AWS-QuickSetup-StackSet-Local-AdministrationRole/AWSCloudFormation is not authorized to perform: iam:PutRolePolicy on resource: role test-eu-west-1-lambdaRole because no identity-based policy allows the iam:PutRolePolicy action
Previously, I updated the same thing using the Designer and everything went smoothly, without any errors; the error only appeared this time. Does anyone know what the cause may be?
Questions:
Where should I put this iam:PutRolePolicy policy? In the JSON template, or attached to the AWS-QuickSetup-StackSet-Local-AdministrationRole/AWSCloudFormation in IAM > Policies?
Okay, so my stack has these events:
LogGroup
LambdaFunction
EventsRuleSchedule1
LambdaPermissionEventsRuleSchedule1
which require rules for these services in the policy:
EventBridge
IAM
Lambda
S3
S3 Object Lambda
After several rounds of creating change sets, receiving errors, and fixing them, I finally made it work. So the solution here was to check the errors line by line, type by type, and then adjust the policy accordingly.
However, this is still a bit time-consuming, as I needed to test and run the stack every time I added a new policy. I'm not sure if there is a way to know all of the "required" policies before executing a stack; if anyone knows any references, please comment below.

Serverless Framework AWS StepFunction not working

I have a problem whose root cause I have so far been unable to identify.
I have an AWS state machine that should be invoked once a file is uploaded to an S3 bucket.
So far, when I upload the file to the S3 bucket, the Lambda function that is defined in the StartAt key (StartAt: ImgUploadedEvent) starts, as I can see in the Lambda logs.
Here is the code:
stepFunctions:
  stateMachines:
    ValidateImageStateMachine:
      loggingConfig:
        level: ALL
        includeExecutionData: true
        destinations:
          - Fn::GetAtt: [StepFuncLogGroup, Arn]
      definition:
        Comment: "This state function validates the images after users upload them to S3"
        StartAt: ImgUploadedEvent
        States:
          ImgUploadedEvent:
            Type: Task
            Resource:
              Fn::GetAtt: [ImgUploaded, Arn]
            End: true
Below is the Lambda function that is declared as the start of the state machine.
As I can see from the logs, this Lambda function does indeed get called once I modify an object in S3:
functions:
  ImgUploaded:
    handler: src/stepfunctions/imageWasUploadedEvent.handler
    events:
      - s3:
          bucket: !Ref AttachmentsBucket
          existing: true
    iamRoleStatements:
      - Effect: "Allow"
        Action:
          - "states:StartExecution"
        Resource:
          - "*"
To check that the Step Function was working, I created a log group and added it to the Step Function:
resources:
  Resources:
    StepFuncLogGroup:
      Type: AWS::Logs::LogGroup
      Properties:
        LogGroupName: /aws/stepfunctions/${self:service}-${self:provider.stage}
I can see this CloudWatch log group correctly associated with the Step Function in the AWS Console.
However, when I upload an object to S3, I do see in the Lambda function's logs that it was invoked, but I cannot see any logs in the Step Function's log group.
My questions are:
Is the Step Function indeed working, and is it just an issue with the Step Function's logs?
Or is the Step Function itself not working, with the Lambda function running just as a plain Lambda function, totally independent of the Step Function?
What do I have to do so the Lambda function gets triggered as part of the Step Function?
BR
After studying how Step Functions work, I finally arrived at the conclusion that this is a wrong pattern.
Nowhere in the documentation of the plugins or of Amazon does it say that what I did is a supported pattern.
Step Functions can be started by CloudWatch events, and those events can originate from a change on S3. That is not what I did here.
There is no direct link between an action on S3 and a Step Function.
The AWS documentation could confuse someone into thinking otherwise; the page is titled Starting a State Machine Execution in Response to Amazon S3 Events.
But the state machine does not start because of the S3 event itself; it starts because of the CloudTrail/CloudWatch event that the S3 action generates.
A Lambda proxy function is a good way of invoking a state machine. This is an easy-to-use and very common pattern, as we can also use it with SQS etc.; it is sketched below.
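A minimal sketch of that proxy pattern, assuming boto3 and a hypothetical state machine ARN (the handler needs states:StartExecution, which the iamRoleStatements above already request):

# src/stepfunctions/imageWasUploadedEvent.py -- the S3-triggered function
# explicitly starts the state machine instead of posing as its StartAt task.
import json
import boto3

sfn = boto3.client("stepfunctions")

def handler(event, context):
    resp = sfn.start_execution(
        stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:ValidateImageStateMachine",  # hypothetical
        input=json.dumps(event),
    )
    return {"executionArn": resp["executionArn"]}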
So the correct response to this question is:
The state machine never starts because it is never invoked. We just called a Lambda function that was used as the StartAt of a state machine; this does not mean the state machine was invoked.
That is the reason why there are no logs for the state machine while there are correct logs for the Lambda function.
Hope this response helps.
I will add more details and references to this response.
BR
Or is the Step Function itself not working, with the Lambda function running just as a plain Lambda function, totally independent of the Step Function?
To verify whether the Lambda was invoked as part of the Step Function, can't you just check the execution history in the Step Functions console? Also, unless you have explicitly configured S3 to publish events to Lambda, your Lambda will not be automatically invoked upon uploading files to S3.
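If you prefer code to the console, a quick boto3 sketch (the ARN is hypothetical):

import boto3

sfn = boto3.client("stepfunctions")
# An empty executions list means the state machine was never started,
# even though the Lambda used as its StartAt task may have run on its own.
resp = sfn.list_executions(
    stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:ValidateImageStateMachine",  # hypothetical
    maxResults=10,
)
for execution in resp["executions"]:
    print(execution["name"], execution["status"], execution["startDate"])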
What do I have to do so the Lambda function gets triggered as part of the Step Function?
To trigger a Step Function on file upload to S3, you can follow this tutorial: https://docs.aws.amazon.com/step-functions/latest/dg/tutorial-cloudwatch-events-s3.html
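For orientation, a rough boto3 sketch of what that tutorial wires up; all names and ARNs are hypothetical, and it additionally requires a CloudTrail trail that records S3 data events for the bucket:

# s3_to_sfn_rule.py -- a CloudWatch Events rule that starts the state
# machine when CloudTrail records a PutObject on the bucket.
import json
import boto3

events = boto3.client("events")

events.put_rule(
    Name="start-sfn-on-s3-upload",
    EventPattern=json.dumps({
        "source": ["aws.s3"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["s3.amazonaws.com"],
            "eventName": ["PutObject"],
            "requestParameters": {"bucketName": ["my-upload-bucket"]},  # hypothetical
        },
    }),
)
events.put_targets(
    Rule="start-sfn-on-s3-upload",
    Targets=[{
        "Id": "sfn-target",
        "Arn": "arn:aws:states:us-east-1:123456789012:stateMachine:ValidateImageStateMachine",  # hypothetical
        "RoleArn": "arn:aws:iam::123456789012:role/events-to-sfn",  # needs states:StartExecution
    }],
)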

Lambda@Edge through Go SDK

I am trying to associate a Lambda@Edge function using the AWS Go SDK.
1. Creating the function by hand in the console and assigning it to the CloudFront distro using the SDK => works.
2. Creating the function (using the same IAM role from 1.) in code w/o assigning it to CloudFront => works.
3. Assigning the created function from 2. by hand in the console => fails.
4. Assigning the created function from 2. via the SDK => fails.
5. Deploying the created function from 2. by hand in the Lambda console (Actions => Deploy to Lambda@Edge) => works. => after this, the function can be assigned both by hand and by code w/o problems...
The error in 3. and 4. is the same:
InvalidLambdaFunctionAssociation: Lambda#Edge cannot retrieve the specified Lambda function. Update the IAM policy to add permission: lambda:GetFunction for resource: arn:aws:lambda:us-east-1:123456789:function:example:1 and try again.
What confuses me is that I am reusing the same role that was created during 1.
This is how I create the function in code:
lam := lambda.New(session)
lam.CreateFunction(&lambda.CreateFunctionInput{
    FunctionName: aws.String("example"),
    Handler:      aws.String("index.handler"),
    Runtime:      aws.String("nodejs12.x"),
    Role:         aws.String("arn:aws:iam::123456:role/service-role/existing-role"),
    Code: &lambda.FunctionCode{
        S3Bucket: aws.String("bucket-xyz"),
        S3Key:    aws.String("source.zip"),
    },
}) // works w/o issues

lam.AddPermission(&lambda.AddPermissionInput{
    FunctionName: aws.String("example"),
    StatementId:  aws.String("AllowExecutionFromCloudFront"),
    SourceArn:    aws.String("arn:aws:cloudfront::12333456:distribution/CDNID1234"),
    Principal:    aws.String("edgelambda.amazonaws.com"),
    Action:       aws.String("lambda:GetFunction"),
}) // also works w/o error

// assigning the created Lambda function would now fail
Using go 1.13 and github.com/aws/aws-sdk-go v1.31.8.
I found the issue.
The error has absolutely nothing to do with the actual problem. Very misleading error, if you ask me.
All that was missing was a published version of the Lambda function at hand.
To achieve that using the Go SDK, you have to do:
lam := lambda.New(session)
lam.PublishVersion(&lambda.PublishVersionInput{
    FunctionName: aws.String("example"),
    Description:  aws.String("Dont forget to publish ;)"),
})
Using the CLI, you would do the following:
aws lambda publish-version --function-name example --description "Dont forget to publish"
It actually makes sense that you cannot use a function that hasn't been published. However, the error from AWS didn't really help there.
Hopefully this can help somebody!
This error occurred for me because the IAM user didn't have adequate permissions to access versions of the Lambda function.
Before (only one resource specifying the Lambda function):
arn:aws:lambda:<region>:*:function:<function_name>
After (additional wildcard resource for versions of the Lambda function):
arn:aws:lambda:<region>:*:function:<function_name>
arn:aws:lambda:<region>:*:function:<function_name>:*
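A quick way to verify the fix from code, since the association performs lambda:GetFunction against the versioned ARN; names are hypothetical, and boto3 is used for brevity:

import boto3

lam = boto3.client("lambda", region_name="us-east-1")
# Fails with AccessDenied if the policy only covers the unversioned ARN;
# the extra ":*" resource above is what lets this qualified read succeed.
resp = lam.get_function(FunctionName="example", Qualifier="1")
print(resp["Configuration"]["FunctionArn"])  # ...:function:example:1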

How to avoid giving `iam:CreateRole` permission when using existing S3 bucket to trigger Lambda function?

I am trying to deploy an AWS Lambda function that gets triggered when an AVRO file is written to an existing S3 bucket.
My serverless.yml configuration is as follows:
service: braze-lambdas
provider:
  name: aws
  runtime: python3.7
  region: us-west-1
  role: arn:aws:iam::<account_id>:role/<role_name>
  stage: dev
  deploymentBucket:
    name: serverless-framework-dev-us-west-1
    serverSideEncryption: AES256
functions:
  hello:
    handler: handler.hello
    events:
      - s3:
          bucket: <company>-dev-ec2-us-west-2
          existing: true
          events: s3:ObjectCreated:*
          rules:
            - prefix: gaurav/lambdas/123/
            - suffix: .avro
When I run serverless deploy, I get the following error:
ServerlessError: An error occurred: IamRoleCustomResourcesLambdaExecution - API: iam:CreateRole User: arn:aws:sts::<account_id>:assumed-role/serverless-framework-dev/jenkins_braze_lambdas_deploy is not authorized to perform: iam:CreateRole on resource: arn:aws:iam::<account_id>:role/braze-lambdas-dev-IamRoleCustomResourcesLambdaExec-1M5QQI6P2ZYUH.
I see some mentions of Serverless needing iam:CreateRole because of how CloudFormation works, but can anyone confirm whether that is the only solution if I want to use existing: true? Is there another way around it, apart from using the old Serverless plugin that was used before the framework added support for the existing: true configuration?
Also, what is 1M5QQI6P2ZYUH in arn:aws:iam::<account_id>:role/braze-lambdas-dev-IamRoleCustomResourcesLambdaExec-1M5QQI6P2ZYUH? Is it a random identifier? Does this mean that Serverless will try to create a new IAM role every time I try to deploy the Lambda function?
I've just encountered this, and overcome it.
I also have a Lambda to which I want to attach an S3 event using an already existing bucket.
My place of work has recently tightened up AWS account security through the use of permission boundaries,
so I've encountered a very similar error during deployment:
Serverless Error ---------------------------------------
An error occurred: IamRoleCustomResourcesLambdaExecution - API: iam:CreateRole User: arn:aws:sts::XXXXXXXXXXXX:assumed-role/xx-crossaccount-xx/aws-sdk-js-1600789080576 is not authorized to perform: iam:CreateRole on resource: arn:aws:iam::XXXXXXXXXXXX:role/my-existing-bucket-IamRoleCustomResourcesLambdaExec-LS075CH394GN.
If you read Using existing buckets on the serverless site, it says
NOTE: Using the existing config will add an additional Lambda function and IAM Role to your stack. The Lambda function backs-up the Custom S3 Resource which is used to support existing S3 buckets.
In my case, I needed to further customise this extra role that Serverless creates, so that it is also assigned the permission boundary my employer requires on all roles. This happens in the resources: section.
If your employer is using permission boundaries, you'll obviously need to know the correct ARN to use:
resources:
  Resources:
    IamRoleCustomResourcesLambdaExecution:
      Type: AWS::IAM::Role
      Properties:
        PermissionsBoundary: arn:aws:iam::XXXXXXXXXXXX:policy/xxxxxxxxxxxx-global-boundary
Some info on the serverless Resources config.
Have a look at your own serverless.yml; you may already have a permission boundary defined in the provider section. If so, you'll find it under rolePermissionsBoundary, which was added in (I think) version 1.64 of Serverless:
provider:
  rolePermissionsBoundary: arn:aws:iam::XXXXXXXXXXXX:policy/xxxxxxxxxxxx-global-boundary
If so, you should be able to use that ARN in the resources: sample I've posted here.
For testing purposes we can use:
provider:
  name: aws
  runtime: python3.8
  region: us-east-1
  iamRoleStatements:
    - Effect: Allow
      Action: "*"
      Resource: "*"
For running sls deploy, I would suggest you use a role/user/policy with Administrator privileges.
If you're restricted due to your InfoSec team or the like, then I suggest you have your InfoSec team have a look at docs for "AWS IAM Permission Requirements for Serverless Framework Deploy." Here's a good link discussing it: https://github.com/serverless/serverless/issues/1439. At the very least, they should add iam:CreateRole and that can get you unblocked for today.
Now I will address your individual questions:
can anyone confirm if that is the only solution if I want to use existing: true
Apples and oranges. Your S3 configuration has nothing to do with your error message. iam:CreateRole must be added to the policy of whatever/whoever is doing sls deploy.
Also, what is 1M5QQI6P2ZYUH in arn:aws:iam::<account_id>:role/braze-lambdas-dev-IamRoleCustomResourcesLambdaExec-1M5QQI6P2ZYUH? Is it a random identifier? Does this mean that serverless will try to create a new role every time I try to deploy the function?
Yes, it is a random identifier.
No, sls will not create a new role every time; this unique ID is cached and reused for updates to an existing stack.
If a stack is destroyed and recreated, it will generate a new unique ID.

AWS CodePipeline permission error on Release change action

I have recently started getting the following error on the release change action in the AWS CodePipeline console (screenshot attached):
Action execution failed
Insufficient permissions The provided role does not have permissions
to perform this action. Underlying error: Access Denied (Service:
Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID:
CA26EF93E3DAF8F0; S3 Extended Request ID:
mKkobqLGbj4uco8h9wDOBjPeWrRA2ybCrEsVoSq/MA4IFZqJb6QJSrlNrKk/EQK40TfLbTbqFuQ=)
I can't find any resources online anywhere for this error code.
Your pipeline is trying to access an S3 bucket, but the AWS CodePipeline service role does not have permission to access it. Create an IAM policy that grants access to S3 and attach it to the CodePipeline service role.
As @Jeevagan said, you must create a new IAM policy that grants access to the pipeline buckets.
Do not forget to add the following actions:
Action:
  - "s3:GetObject"
  - "s3:List*"
  - "s3:GetObjectVersion"
I lost a few minutes because of one in particular: GetObjectVersion.
By checking your CodeDeploy output, you'll be able to see that the process downloads a particular version of your artifact with the "versionId" parameter.
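You can reproduce the difference yourself; the bucket, key, and version ID below are hypothetical, and the two reads are governed by two different IAM actions:

import boto3

s3 = boto3.client("s3")
# A plain read only needs s3:GetObject.
s3.get_object(Bucket="my-pipeline-artifacts", Key="artifact.zip")
# Reading an explicit version needs s3:GetObjectVersion -- this is the
# call the pipeline makes when it downloads an artifact by "versionId".
s3.get_object(Bucket="my-pipeline-artifacts", Key="artifact.zip",
              VersionId="3sL4kqtJlcpXroDTDmJ")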
Hope it will help.
You are missing the GetBucketVersioning action in your policy, so the correct example looks like the one below. I don't know why it's not mentioned anywhere in the reference/documentation:
- PolicyName: AccessRequiredByPipeline
  PolicyDocument:
    Version: '2012-10-17'
    Statement:
      - Action:
          - s3:PutObject
          - s3:GetObject
          - s3:GetObjectVersion
        Effect: Allow
        Resource: !Sub ${YouBucket.Arn}/*
      - Action:
          - s3:GetBucketVersioning
        Resource: !Sub ${YouBucket.Arn}
        Effect: Allow
      - Action:
          - kms:GenerateDataKey
          - kms:Decrypt
        Effect: Allow
        Resource: !GetAtt KMSKey.Arn
Another potential culprit that masquerades behind this error referencing S3 is missing KMS permissions on the IAM role for the CodePipeline. If you configured your CodePipeline to use KMS encryption, then the service role associated with the pipeline will also need KMS permissions on that KMS key in order to interact with the KMS-encrypted objects in S3. In my experience, the missing KMS permissions cause the same error message referencing S3.
I just ran into this issue, but the permissions were all set properly; I had used the same CloudFormation template with other projects with no problem. It turned out that the key name I was using in the S3 bucket was too long. Apparently it doesn't like anything longer than 20 characters. Once I changed the key name in my S3 bucket (and all of its associated references in the CloudFormation template files), everything worked properly.
I ran into the same issue when I used CloudFormation to build my CI/CD. My problem was that the CodePipeline ArtifactStore pointed to the wrong location in S3 ("codepipeline", a folder we were not allowed to access in my case). Changing the ArtifactStore to an existing folder fixed my issue.
You can view pipeline details, like where the SourceArtifact points, by following this link