Lambda@Edge not logging on CloudFront request - amazon-web-services

As explained in the docs, I set up Lambda@Edge with a CloudFront trigger for the Viewer Response event.
The Lambda function code:
'use strict';
exports.handler = (event, context, callback) => {
    console.log('----EXECUTED------');
    const response = event.Records[0].cf.response;
    console.log(response);  // log the viewer response object
    callback(null, response);
};
I have set up the trigger appropriately for the Viewer Response event.
Now when I make a request through CloudFront, it should be logged in CloudWatch, but it isn't.
If I run a simple test of the Lambda function (using the Test button), it is logged properly.
What might be the issue here?

When you deploy a Lambda@Edge function, it is replicated to the edge cache regions across the world, each with its own replica version of the function. Regional edge caches are a subset of the main AWS Regions and edge locations.
When a user makes a request to the nearest POP/edge location, the Lambda replica associated with that edge cache region gets called, and its logs go to CloudWatch Logs in that same region.
For example:
If a user's request hits the us-east-1 region, then the associated logs will be in us-east-1.
To find out exactly where (in which regions) your function is logging, you can run this AWS CLI script:
FUNCTION_NAME=function_name_without_qualifiers

# Look in every region for the replicated log group.
# Lambda@Edge log groups are named /aws/lambda/us-east-1.<function-name>
# when the function was created in us-east-1.
for region in $(aws --output text ec2 describe-regions --query 'Regions[].RegionName')
do
    for loggroup in $(aws --output text logs describe-log-groups --log-group-name-prefix "/aws/lambda/us-east-1.$FUNCTION_NAME" --region "$region" --query 'logGroups[].logGroupName')
    do
        echo "$region" "$loggroup"
    done
done
Replace "function_name_without_qualifiers" with the name of your Lambda@Edge function.
Hope it helps.
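If you prefer Python, here is a rough boto3 sketch of the same region sweep. It assumes the function was created in us-east-1 (so the replicated log groups are named /aws/lambda/us-east-1.<function-name>), and the function name below is a placeholder:

import boto3

FUNCTION_NAME = "my-edge-function"  # placeholder; use your Lambda@Edge function name

# Enumerate all regions enabled for the account
regions = [r["RegionName"] for r in boto3.client("ec2").describe_regions()["Regions"]]

for region in regions:
    logs = boto3.client("logs", region_name=region)
    # Lambda@Edge replicas log to /aws/lambda/<creation-region>.<function-name>
    resp = logs.describe_log_groups(
        logGroupNamePrefix=f"/aws/lambda/us-east-1.{FUNCTION_NAME}"
    )
    for group in resp.get("logGroups", []):
        print(region, group["logGroupName"])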

For those who have also searched for the logs and couldn't find them with the script provided by @Kannaiyan:
TL;DR
Use this IAM Role for your Lambda function
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "logs:CreateLogGroup",
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:*:*:log-group:*:*"
            ]
        }
    ]
}
====
Make sure you have the correct IAM role. If you created a Lambda function first and then deployed it to Lambda@Edge, the automatically generated IAM role will only have enough permissions to log data in a single region, into the log group named after the Lambda function. Lambda@Edge, however, tries to log data in different regions, into log groups named "/aws/lambda/<region>.<function-name>". Therefore it is necessary to change the IAM role to allow creating log groups and writing to them in different regions. In the TL;DR section I provided a sample IAM role, but make sure to narrow down the access to the specific list of log groups in production.
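As a rough sketch of attaching that policy with boto3 (the role name and policy name below are placeholders):

import json
import boto3

iam = boto3.client("iam")

# Same document as the TL;DR policy above
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "logs:CreateLogGroup",
            "Resource": "arn:aws:logs:*:*:*",
        },
        {
            "Effect": "Allow",
            "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
            "Resource": ["arn:aws:logs:*:*:log-group:*:*"],
        },
    ],
}

# Attach it as an inline policy on the Lambda@Edge execution role
iam.put_role_policy(
    RoleName="my-edge-function-role",    # placeholder role name
    PolicyName="EdgeLoggingAllRegions",  # placeholder policy name
    PolicyDocument=json.dumps(policy),
)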

According to the AWS documentation for Lambda@Edge functions:
When you check for the log files, be aware that log files are stored in the Region closest to the location where the function is executed. So if you visit a website from, for example, London, you must change the Region to view the CloudWatch Logs for the London Region.

The Lambda@Edge logs, and the region a request was executed in, are available in the AWS CloudFront console, although the path is convoluted and AWS did a really lousy job of documenting the steps.
Here are the steps that work as of this posting:
1. Navigate to the AWS CloudFront console.
2. Click the "Monitoring" link under the "Telemetry" section (not "Logs"; that takes you to the CloudFront logs).
3. Click on the "Lambda@Edge" tab.
4. Choose your Lambda function and then click the "View metrics" button.
You can then use the "Invocations" graph to determine in which region a specific invocation of the Lambda function was executed. Once you have the region, you can, at long last, use the "View function logs" drop-down menu to view the Lambda function's logs for that specific region.
I figured this out by digging around in the console for a long time. The "documentation" for this logging is here, but it doesn't seem to explain how to actually find Lambda@Edge logs for a specific region.
If anyone happens to find proper documentation about this, please update the post.

Following on from @yevhenii-hordashnyk's answer: if you're using the Serverless framework, by default it creates an IAM role with logging permissions for the execution region only, locked to the application name. That does not work for edge functions, because their log groups are prefixed with the region the function was installed in, so different permissions are required.
You have to specify a custom role, and apply that role to your function as per https://www.serverless.com/framework/docs/providers/aws/guide/iam
Note that the following snippet uses `*` instead of `- Ref: 'AWS::Region'`, and adds the `edgelambda.amazonaws.com` service to the trust relationship.
AssumeRolePolicyDocument:
  Version: '2012-10-17'
  Statement:
    - Effect: Allow
      Principal:
        Service:
          - lambda.amazonaws.com
          - edgelambda.amazonaws.com
      Action: sts:AssumeRole
Policies:
  - PolicyName: myPolicyName
    PolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow # note that these rights are given in the default policy and are required if you want logs out of your lambda(s)
          Action:
            - logs:CreateLogGroup
            - logs:CreateLogStream
            - logs:PutLogEvents
          Resource:
            - 'Fn::Join':
                - ':'
                - - 'arn:aws:logs'
                  - '*'
                  - Ref: 'AWS::AccountId'
                  - 'log-group:/aws/lambda/*:*:*'
By default Serverless does add the `AWSLambdaVPCAccessExecutionRole` policy to the Lambda role, but I do not know why that does not create the log stream. Maybe I've missed something, but after doing the above, it works now.

Related

Permission error when using Boto3, but works via aws cli

I'm stuck on a missing permissions issue trying to create a Lambda function.
The execution role I've configured has the following permissions:
$ aws --output=text iam get-role-policy --policy-name=MyRolePolicy --role-name=my-role
<snip>
POLICYDOCUMENT 2012-10-17
STATEMENT Allow
ACTION s3:Get*
ACTION s3:List*
ACTION logs:CreateLogGroup
ACTION logs:CreateLogStream
ACTION logs:PutLogEvents
ACTION ec2:DescribeNetworkInterfaces
ACTION ec2:CreateNetworkInterface
ACTION ec2:DeleteNetworkInterface
And when I create a Lambda function with that role, creation succeeds:
$ aws lambda create-function \
    --function-name=my-test --runtime=java8 \
    --role='arn:aws:iam::1234567890:role/my-role' \
    --handler=MyHandler \
    --code=S3Bucket=my-bucket,S3Key=app.zip
<result successful>
However, when I create the function via Boto3 using the same arguments (esp. the same execution role), I get the following error:
Boto3 Usage
client.create_function(
    FunctionName=function_name,
    Runtime='java8',
    Role=getenv('execution_role_arn'),
    Handler='MyHandler',
    Code={
        "S3Bucket": bucket,
        "S3Key": artifact_name
    },
    Publish=True,
    VpcConfig={
        'SubnetIds': getenv('vpc_subnet_ids').split(','),
        'SecurityGroupIds': getenv('vpc_security_group_ids').split(',')
    }
)
Boto3 Result
{
    'Error': {
        'Message': 'The provided execution role does not have permissions to call CreateNetworkInterface on EC2',
        'Code': 'InvalidParameterValueException'
    },
    'ResponseMetadata': {
        'RequestId': '47b6640a-f3fe-4550-8ac3-38cfb2842461',
        'HTTPStatusCode': 400,
        'HTTPHeaders': {
            'date': 'Wed, 24 Jul 2019 10:55:44 GMT',
            'content-type': 'application/json',
            'content-length': '119',
            'connection': 'keep-alive',
            'x-amzn-requestid': '47b6640a-f3fe-4550-8ac3-38cfb2842461',
            'x-amzn-errortype': 'InvalidParameterValueException'
        },
        'RetryAttempts': 0
    }
}
Creating a function via the console with this execution role works as well, so I must be missing something in how I'm using Boto3, but I'm at a loss to explain.
Hopefully someone can catch a misapplication of Boto3 here, cause I'm at a loss!
Your boto3 code is specifying a VPC:
VpcConfig={
    'SubnetIds': getenv('vpc_subnet_ids').split(','),
    'SecurityGroupIds': getenv('vpc_security_group_ids').split(',')
}
However, the CLI version is not specifying a VPC.
Therefore, the two requests are not identical. That's why one works and the other does not work.
From Configuring a Lambda Function to Access Resources in an Amazon VPC - AWS Lambda:
To connect to a VPC, your function's execution role must have the following permissions.
ec2:CreateNetworkInterface
ec2:DescribeNetworkInterfaces
ec2:DeleteNetworkInterface
These permissions are included in the AWSLambdaVPCAccessExecutionRole managed policy.
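A quick sketch of one way to fix that, assuming the role name is my-role as in the question, is to attach the managed policy to the execution role with boto3:

import boto3

iam = boto3.client("iam")

# AWSLambdaVPCAccessExecutionRole bundles the ec2 network-interface permissions
# needed by VPC-connected Lambda functions, plus CloudWatch Logs access.
iam.attach_role_policy(
    RoleName="my-role",
    PolicyArn="arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole",
)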
The Lambda function has a role that allows ec2:CreateNetworkInterface, but that says nothing about the account executing the script.
The role assigned to the Lambda function only allows the Lambda itself to create the network interfaces for its VpcConfig.
Check that the account running the script that provisions the Lambda is also allowed the ec2:CreateNetworkInterface action.
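One way to check that, sketched with boto3 (the principal ARN below is a placeholder for whichever user or role runs the provisioning script):

import boto3

iam = boto3.client("iam")

# Dry-run the IAM evaluation for the principal that runs the provisioning script
result = iam.simulate_principal_policy(
    PolicySourceArn="arn:aws:iam::1234567890:user/deploy-user",  # placeholder principal
    ActionNames=["ec2:CreateNetworkInterface", "lambda:CreateFunction"],
)

for evaluation in result["EvaluationResults"]:
    print(evaluation["EvalActionName"], "=>", evaluation["EvalDecision"])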

AWS CodePipeline permission error on Release change action

I have recently started getting the following error on the release change action in the AWS CodePipeline console:
Action execution failed
Insufficient permissions The provided role does not have permissions
to perform this action. Underlying error: Access Denied (Service:
Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID:
CA26EF93E3DAF8F0; S3 Extended Request ID:
mKkobqLGbj4uco8h9wDOBjPeWrRA2ybCrEsVoSq/MA4IFZqJb6QJSrlNrKk/EQK40TfLbTbqFuQ=)
I can't find any resources online anywhere for this error code.
Your pipeline is trying to access an S3 bucket, but the AWS CodePipeline service role does not have permission to access it. Create an IAM policy that provides access to S3 and attach it to the CodePipeline service role.
As @Jeevagan said, you must create a new IAM policy that grants access to the pipeline's buckets.
Do not forget to add the following actions:
Action:
  - "s3:GetObject"
  - "s3:List*"
  - "s3:GetObjectVersion"
I lost a few minutes because of this one in particular: GetObjectVersion
By checking your CodeDeploy output, you'll be able to see that the process downloads a particular version of your artefact using the "versionId" parameter.
Hope it will help.
You are missing the GetBucketVersioning action in your policy, so the correct example looks like the one below. I don't know why it's not mentioned anywhere in the reference/documentation:
- PolicyName: AccessRequiredByPipeline
  PolicyDocument:
    Version: '2012-10-17'
    Statement:
      - Action:
          - s3:PutObject
          - s3:GetObject
          - s3:GetObjectVersion
        Effect: Allow
        Resource: !Sub ${YouBucket.Arn}/*
      - Action:
          - s3:GetBucketVersioning
        Resource: !Sub ${YouBucket.Arn}
        Effect: Allow
      - Action:
          - kms:GenerateDataKey
          - kms:Decrypt
        Effect: Allow
        Resource: !GetAtt KMSKey.Arn
Another potential culprit that masquerades behind this error referencing S3 is missing KMS permissions on the IAM role for the CodePipeline. If you configured your CodePipeline to use KMS encryption, then the service role associated with the CodePipeline will also need KMS permissions for that KMS key in order to interact with the KMS-encrypted objects in S3. In my experience, the missing KMS permissions cause the same error message to appear, referencing S3.
I just ran into this issue, but the permissions were all set properly - I used the same CloudFormation template with other projects with no problem. It turned out that the key name I was using in the S3 bucket was too long. Apparently it doesn't like anything more than 20 characters. Once I changed the key name in my S3 bucket (and all of its associated references in the CloudFormation template files), everything worked properly.
I ran into the same issue when I used CloudFormation to build my CI/CD. My problem was that the CodePipeline ArtifactStore pointed to the wrong location in S3 (a "codepipeline" folder I was not allowed to access, in my case). Changing the ArtifactStore to an existing folder fixed my issue.
You can view pipeline details, like where the SourceArtifact points, by following this link.
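If you'd rather check from code, a small boto3 sketch (the pipeline name is a placeholder) can show where the artifact store points:

import boto3

codepipeline = boto3.client("codepipeline")

# Inspect the pipeline definition to see which S3 bucket holds its artifacts
pipeline = codepipeline.get_pipeline(name="my-pipeline")["pipeline"]  # placeholder pipeline name

# Single-region pipelines use "artifactStore"; cross-region ones use "artifactStores"
store = pipeline.get("artifactStore", {})
print("Artifact store type:", store.get("type"))          # e.g. S3
print("Artifact store location:", store.get("location"))  # the bucket name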

Unable to add CloudFront as trigger to Lambda function

Hi, I've followed these instructions to try to resize images with CloudFront and Lambda@Edge. When I try to test the resized image, I keep getting the error message below:
The Lambda function associated with the CloudFront distribution is
invalid or doesn't have the required permissions.
So I checked the Lambda functions created by the CloudFormation template provided in the article I mentioned at the beginning, and I found there's no trigger on them.
I've tried to set it manually, but I get the error message below:
CloudFront events cannot be associated with $LATEST or Alias. Choose
Actions to publish a new version of your function, and then retry
association.
I followed the instructions in the error message: publish a new version and add CloudFront as the trigger. But there seems to be no way to apply it; it's still running the version without CloudFront as the trigger.
Is there any way to set CloudFront as the trigger and make this work properly?
For people Googling "The Lambda function associated with the CloudFront distribution is invalid or doesn't have the required permissions":
I got that error and struggled to debug it. It turned out there were some programming errors inside my Lambda that I had to resolve. But how do you debug it when, every time you hit CloudFront, you keep getting "The Lambda function associated with the CloudFront distribution is invalid or doesn't have the required permissions", and there's nothing in the CloudWatch logs?
My Lambda was defined in CloudFormation inside an AWS::Lambda::Function's ZipFile attribute. I ended up going to the Lambda service inside AWS and creating a Lambda test payload corresponding to my CloudFront event, as documented here: Lambda@Edge Event Structure. Then I could debug the Lambda inside the Lambda console without having to hit CloudFront or navigate to CloudWatch logs.
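For a scriptable variant of that approach, here is a rough boto3 sketch that invokes the function with an abbreviated viewer-request test event. The payload below is trimmed (see the Lambda@Edge event structure docs for the full shape), and the function name is a placeholder:

import json
import boto3

# Lambda@Edge functions are authored in us-east-1
lambda_client = boto3.client("lambda", region_name="us-east-1")

# Abbreviated viewer-request event; real events carry more fields
test_event = {
    "Records": [
        {
            "cf": {
                "config": {"eventType": "viewer-request", "distributionId": "EXAMPLE"},
                "request": {
                    "clientIp": "203.0.113.10",
                    "method": "GET",
                    "uri": "/images/cat_300x300.jpg",
                    "querystring": "",
                    "headers": {
                        "host": [{"key": "Host", "value": "d123.cloudfront.net"}]
                    },
                },
            }
        }
    ]
}

response = lambda_client.invoke(
    FunctionName="my-edge-function",            # placeholder function name
    Payload=json.dumps(test_event).encode(),
)
print(response["StatusCode"])
print(response["Payload"].read().decode())      # what the function returned (or the error)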
I see a couple of you stating that the root cause of the issue was not a permissions issue but an issue with your code, which is likely the correct root cause. CloudFront tends to use a 403 error for everything; even a basic 404 will show up as a 403 in most cases.
I have also seen some comments above stating that you could not find any logs associated with the error in Lambda. This is most likely because you are looking for the logs in us-east-1 and don't live on the east coast of the USA. The logs will be in the region where the function executed, local to the viewer. So choose the region closest to where you are sitting, and you will likely find the log group there.
For other people suffering from the poor quality of dev articles on the AWS blog: I found it's due to a wrong S3 bucket policy. The article says:
ImageBucketPolicy:
  Type: AWS::S3::BucketPolicy
  Properties:
    Bucket: !Ref ImageBucket
    PolicyDocument:
      Statement:
        - Action:
            - s3:GetObject
          Effect: Allow
          Principal: "*"
          Resource: !Sub arn:aws:s3:::${ImageBucket}/*
        - Action:
            - s3:PutObject
          Effect: Allow
          Principal:
            AWS: !GetAtt EdgeLambdaRole.Arn
          Resource: !Sub arn:aws:s3:::${ImageBucket}/*
        - Action:
            - s3:GetObject
          Effect: Allow
          Principal:
            AWS: !GetAtt EdgeLambdaRole.Arn
          Resource: !Sub arn:aws:s3:::${ImageBucket}/*
It turns out you have to grant permissions for other actions besides GetObject and PutObject, because the function needs to create folders in the bucket.
The problem is simply resolved by changing the action to s3:*.
For me, the missing CloudFront trigger on the Lambda screen was because I was not in the us-east-1 region.
I ran into the same error message with no log in CloudWatch. I finally noticed that my Python runtime handler was index.handler while my index.py defined lambda_handler. After changing my Python runtime handler to index.lambda_handler, the error went away. HTH.
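To illustrate the naming: the handler setting is <file-name>.<function-name>, so a handler of index.lambda_handler expects an index.py along these lines (a minimal sketch for a viewer-response trigger):

# index.py
# The handler string "index.lambda_handler" points at this module and this function.
def lambda_handler(event, context):
    # For a viewer-response trigger, pass the response straight through
    return event["Records"][0]["cf"]["response"]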
If you found this answer googling "The Lambda function associated with the CloudFront distribution is invalid or doesn't have the required permissions", this can also be caused by your function not being wired correctly from CloudFormation. For example, given this YAML:
Code: ./src/ # or CodeUri ./src/
Handler: foo.bar
Double check that ./src/foo.js has exports.bar = function...
When I changed "Include body" in Lambda Function Trigger from "Yes" to "No" it started working.
I had to delete and create CloudFront trigger again to change that setting.
Just adding what I learned from an article (linked below).
If you create a Lambda in one region and use it with CloudFront (and it is later invoked by a user in another edge region), the issue is that the Lambda does not have enough CloudWatch Logs permissions.
Check this; all credit goes to the author:
https://dev.to/aws-builders/authorizing-requests-with-lambdaedge-mjm

AWS IAM Roles and policies in simple English?

I've been working with the AWS PHP SDK and I seem to get everything except the IAM Roles and permissions.
Can someone please explain to me in the simplest terms how IAM roles work, and explain the following terms: StatementId, Action, ARN and, most importantly, Principal, in simple English?
To give you the source of my confusion, here is a problem I recently faced. I'm trying to create an API Gateway in which a resource's method triggers a Lambda function. It wasn't working until I copy-pasted this bit:
$lambdaClient->addPermission([
    'FunctionName' => 'fn name',
    'StatementId' => 'ManagerInvokeAccess',
    'Action' => 'lambda:InvokeFunction',
    'Principal' => 'apigateway.amazonaws.com',
]);
But in some other thread someone suggested using the following for the same purpose:
const permissions = {
    FunctionName: target,
    StatementId: 'api-gateway-execute',
    Action: 'lambda:InvokeFunction',
    Principal: 'apigateway.amazonaws.com',
    SourceArn: 'arn:aws:execute-api:' + nconf.get('awsRegion') + ':' + nconf.get('awsAccountId') + ':' + nconf.get('apiGatewayId') + '/*'
};
How come the first one doesn't contain any account info but the second one does? Then there is another person who posted something totally different to get the same thing working for them. There are so many keys in that example (like "Fn::Join") that I don't even know where to begin or what it does.
How does one figure out where to find these policies? Do we just copy-paste them from somewhere, or is there a way to work them out? If so, what keys must always be specified?
Any help will be appreciated because I'm totally confused right now.
First of all, Welcome to the world of AWS !!! :-D
Let me try to clear up your doubts about how IAM works (in general) with an analogy.
Think of an organization called ORG1.
Departments of ORG1: HR-dept, Test-dept, DEV-dept
Employees of ORG1: EMP1, EMP2, EMP3 ... EMP10
Members of the HR dept: HR1, HR2, HR3
Now I want to create a role for the HR dept to give them permission to hire/suspend an employee. The policy will look like this:
{
    "Version": "2012-10-17",  // Version of the policy template. Don't change this. It is NOT a date field for your use.
    "Statement": [
        {
            "Sid": "SOME-RANDOM-ID-WITH-NUMBER-1P1PP43EZUVRM",  // Used as an ID in some cases to identify different statements
            "Principal": "HR-dept",  // The dept that is allowed to assume this role, i.e. who is allowed to invoke it
            "Effect": "Allow",  // Has only 2 values: Allow/Deny. Either you grant the privileges below or you strip them away.
            "Action": [
                "hire",
                "suspend"
            ],  // These are the privileges that are granted
            "Resource": "EMP1",  // The entity on whom you want to apply those actions. In this case employee EMP1.
            "Condition": {
                "ArnLike": {
                    "AWS:SourceArn": "HR*"  // You want anyone from HR-dept whose id starts with HR to be able to execute the action, i.e. HR1, HR2 or HR3.
                }
            }
        }
    ]
}
Now try to understand the code below from the same perspective (internally this code creates a template similar to the one above):
const permissions = {
    FunctionName: target,
    StatementId: 'api-gateway-execute',     // This is just an ID. Don't sweat about it.
    Principal: 'apigateway.amazonaws.com',  // Which entity group the invoker belongs to
    Action: 'lambda:InvokeFunction',        // The privilege you are giving to the API Gateway APIs
    SourceArn: 'arn:aws:execute-api:.. blah blah blah'  // i.e. the exact ID of the API Gateway API that is allowed to invoke the Lambda function
};
In AWS, an ARN is the unique ID of a resource, kind of like an EmployeeId in a company. It is globally unique.
Believe me, at first what you are trying to do in AWS may seem difficult to comprehend, but you will get more comfortable as you clear each hurdle you face. Then you will admire how customizable AWS features are.
How does one figure out where to find these policies?
You need to refer to the AWS documentation for each specific service to find out which principals, actions and statements it supports. For example, if you need to find policies for DynamoDB, check DynamoDB API Permissions. It can be confusing at first, since AWS has to make IAM cover authorization for all of its services, but it becomes straightforward over time.
Let me explain each part of the policy:
StatementId (Sid) - It's just an optional statement identifier (e.g. 1, 2, abcd, etc.), and for some services (e.g. SQS, SNS) it must be unique.
Action - What your policy allows to be done on an AWS service. E.g. for DynamoDB you can allow creating tables, putting new items, etc. For an EC2 instance, it can allow starting and stopping.
ARN (Amazon Resource Name) - A unique name that identifies AWS resources like an EC2 instance, an S3 bucket, a DynamoDB table, and even an IAM policy or role.
Principal - Restricts who is allowed to use this policy. It can be a user (IAM user, federated user, or assumed-role user), an AWS account, an AWS service, or another principal entity that is allowed or denied access to a resource.
In addition, you need to include the Resource parameter, where you can either use a wildcard '*' or an ARN with the account ID in it.
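To tie this back to the question, here is a hedged boto3 sketch of the same add-permission call, with the SourceArn spelled out so you can see where the region, account ID and API Gateway ID fit (all values are placeholders):

import boto3

lambda_client = boto3.client("lambda")

# Placeholder values; substitute your own
region = "us-east-1"
account_id = "123456789012"
api_gateway_id = "abcd1234ef"

lambda_client.add_permission(
    FunctionName="my-function",            # placeholder function name
    StatementId="api-gateway-execute",     # just an identifier for this statement
    Action="lambda:InvokeFunction",        # the privilege being granted
    Principal="apigateway.amazonaws.com",  # the service allowed to invoke the function
    # Restrict the grant to one specific API (any stage, method and path)
    SourceArn=f"arn:aws:execute-api:{region}:{account_id}:{api_gateway_id}/*",
)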
I think most of the answers are correct, but here it is from the horse's mouth, the AWS documentation (full credit):
Role: An IAM role is an IAM identity that you can create in your account that has specific permissions.
Policies: IAM policies define permissions for an action regardless of the method that you use to perform the operation
Typically you have a role and you assign policies to your role.
To answer the last part of your question, "How does one figure out where to find these policies": it all depends on what you are trying to do, but always start with the least amount of permission (same concept as Linux file permissions: don't give 777). As for how you define your policies, there are standard ones already defined in your AWS account, but you can also customize your own policies using the tool below:
https://awspolicygen.s3.amazonaws.com/policygen.html

CloudFormation lifecycle events cannot publish to SNS

I am trying to create a lifecycle hook for an Auto Scaling group in AWS CloudFormation; however, I keep getting a really ambiguous error back when deploying my stack:
Unable to publish test message to notification target
arn:aws:sns:us-east-1:000000000000:example-topic using IAM role arn:aws:iam::000000000000:role/SNSExample. Please check your target and role configuration and try to put lifecycle hook again.
I have tested the SNS topic and it can send emails fine, and my code appears to be in line with what Amazon suggests:
"ASGLifecycleEvent": {
"Type": "AWS::AutoScaling::LifecycleHook",
"Properties": {
"AutoScalingGroupName": "ASG-179ZOVNY8SEFT",
"LifecycleTransition": "autoscaling:EC2_INSTANCE_LAUNCHING",
"NotificationTargetARN": "arn:aws:sns:us-east-1:000000000000:example-topic",
"RoleARN": "arn:aws:iam::000000000000:role/SNSExample"
},
"Metadata": {
"AWS::CloudFormation::Designer": {
"id": "83129091-8efc-477d-86ef-9a08de4d6fac"
}
}
}
And I have granted full access to everything in that IAM role; however, I still get this error message. Does anyone have any other ideas about what could really be causing this error?
Your SNSExample role needs to delegate permissions from the AutoScalingNotificationAccessRole managed policy to the autoscaling.amazonaws.com service via an associated Trust Policy (the AssumeRolePolicyDocument Property in the CloudFormation Resource):
SNSExample:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Statement:
        - Effect: Allow
          Principal:
            Service: [autoscaling.amazonaws.com]
          Action: ['sts:AssumeRole']
    Path: /
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/service-role/AutoScalingNotificationAccessRole
(You can also delegate access to sns:Publish action instead of using the managed policy, but I recommend the managed policy because it will stay up to date if additional permissions are required for this service in the future.)
See the Receive Notification Using Amazon SNS part of the Auto Scaling Lifecycle Hooks section of the Auto Scaling User Guide for more information.
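If you'd rather create the role outside CloudFormation, here is a rough boto3 sketch of the same idea (the role name matches the question; everything else is generic):

import json
import boto3

iam = boto3.client("iam")

# Trust policy letting the Auto Scaling service assume the role
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "autoscaling.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

iam.create_role(
    RoleName="SNSExample",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Grant the permissions Auto Scaling needs to publish lifecycle notifications
iam.attach_role_policy(
    RoleName="SNSExample",
    PolicyArn="arn:aws:iam::aws:policy/service-role/AutoScalingNotificationAccessRole",
)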