I am trying to create a lifecycle hook for an Auto Scaling group in AWS CloudFormation, but I keep getting a really ambiguous error back when deploying my stack:
Unable to publish test message to notification target
arn:aws:sns:us-east-1:000000000000:example-topic using IAM role arn:aws:iam::000000000000:role/SNSExample. Please check your target and role configuration and try to put lifecycle hook again.
I have tested the SNS topic and it can send emails fine, and my code appears to be in line with what Amazon suggests:
"ASGLifecycleEvent": {
"Type": "AWS::AutoScaling::LifecycleHook",
"Properties": {
"AutoScalingGroupName": "ASG-179ZOVNY8SEFT",
"LifecycleTransition": "autoscaling:EC2_INSTANCE_LAUNCHING",
"NotificationTargetARN": "arn:aws:sns:us-east-1:000000000000:example-topic",
"RoleARN": "arn:aws:iam::000000000000:role/SNSExample"
},
"Metadata": {
"AWS::CloudFormation::Designer": {
"id": "83129091-8efc-477d-86ef-9a08de4d6fac"
}
}
}
And I have granted full access to everything in that IAM role, yet I still get this error message. Does anyone have any other ideas about what could be causing this error?
Your SNSExample role needs to allow the autoscaling.amazonaws.com service to assume it via an associated Trust Policy (the AssumeRolePolicyDocument property in the CloudFormation resource), and it needs the AutoScalingNotificationAccessRole managed policy attached:
SNSExample:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Statement:
        - Effect: Allow
          Principal:
            Service: [autoscaling.amazonaws.com]
          Action: ['sts:AssumeRole']
    Path: /
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/service-role/AutoScalingNotificationAccessRole
(You can also grant access to the sns:Publish action directly instead of using the managed policy, but I recommend the managed policy because it will stay up to date if additional permissions are required for this service in the future.)
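For illustration, that inline alternative might look like the following Policies property on the same role (a sketch; the policy name is made up, and the topic ARN is the one from the question):
Policies:
  - PolicyName: PublishToLifecycleTopic # hypothetical name
    PolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Action: ['sns:Publish']
          Resource: 'arn:aws:sns:us-east-1:000000000000:example-topic'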
See the Receive Notification Using Amazon SNS part of the Auto Scaling Lifecycle Hooks section of the Auto Scaling User Guide for more information.
Related
This stack was working at one point... I'm not sure what's going on. This permission is no longer doing what it did before, or has become invalid.
I have a Lambda function that rotates a Secret, so naturally it must be triggered by Secrets Manager. So I built up the Permission as follows:
import * as aws from '@pulumi/aws'
export const accessTokenSecret = new aws.secretsmanager.Secret('accessTokenSecret', {});
export const smPermission = new aws.lambda.Permission(`${lambdaName}SecretsManagerPermission`, {
action: 'lambda:InvokeFunction',
function: rotateKnacklyAccessTokenLambda.name,
principal: 'secretsmanager.amazonaws.com',
sourceArn: accessTokenSecret.arn,
})
And the Policy,
{
  Action: [
    'secretsmanager:GetResourcePolicy',
    'secretsmanager:GetSecretValue',
    'secretsmanager:DescribeSecret',
    'secretsmanager:ListSecrets',
    'secretsmanager:RotateSecret',
  ],
  Resource: 'arn:aws:secretsmanager:*:*:*',
  Effect: 'Allow',
},
Running pulumi up -y yields
aws:secretsmanager:SecretRotation (knacklyAccessTokenRotation):
error: 1 error occurred:
* error enabling Secrets Manager Secret "" rotation: AccessDeniedException: Secrets Manager cannot invoke the specified Lambda function. Ensure that the function policy grants access to the principal secretsmanager.amazonaws.com.
This error confuses me, because the Policy created for the Lambda will not accept a Principal param (which makes sense; the same behaviour happens in the AWS Console), so I'm fairly sure they mean Permission instead of Policy.
Based on the log I can tell that the Permission is being created well after the Lambda/Secrets Manager. I'm not sure if this is a Pulumi issue, similar to how it destroys stacks in the incorrect order (Roles and Policies, for example).
I can see the Permission in the AWS Lambda configuration section, so maybe it's ok?
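If the ordering is the culprit, one way to rule it out is to make the rotation depend explicitly on the Permission (a sketch; the knacklyAccessTokenRotation resource and its rotationRules are assumed from the error output, not shown in my code above):
export const knacklyAccessTokenRotation = new aws.secretsmanager.SecretRotation('knacklyAccessTokenRotation', {
  secretId: accessTokenSecret.id,
  rotationLambdaArn: rotateKnacklyAccessTokenLambda.arn,
  rotationRules: { automaticallyAfterDays: 30 }, // assumed schedule
}, { dependsOn: [smPermission] }) // force Pulumi to create the Permission first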
(Solved)
I missed this mention in the AWS user guide: "You can use the AmazonEC2FullAccess policy to give users complete access to work with Amazon EC2 Auto Scaling resources, launch templates, and other EC2 resources in their AWS account."
Now I have added the same permissions as the AmazonEC2FullAccess policy to my custom policy, and the Lambda is working well.
AmazonEC2FullAccess grants full permissions for CloudWatch, EC2, EC2 Auto Scaling, ELB, ELB v2, and limited IAM write permissions.
@Marcin Thanks! Your comment made me check this part.
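For anyone who wants to avoid granting full access, a narrower custom policy along these lines may be enough for this call, based on the discussion in this thread (a sketch, not an authoritative minimal set; scope the resources down, and point iam:PassRole at the instance profile role used by your launch template):
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "autoscaling:UpdateAutoScalingGroup",
      "ec2:RunInstances",
      "ec2:DescribeLaunchTemplateVersions",
      "iam:PassRole"
    ],
    "Resource": "*"
  }]
}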
I'm trying to update the ASG with the 'updateAutoScalingGroup' API from a Lambda.
But this error is blocking me: "AccessDenied: You are not authorized to use launch template".
At first I applied only the related permissions to the IAM policy, following the documentation, but I have since allowed full permissions for EC2 and Auto Scaling in the policy to try to solve this issue.
But no luck.
On Google I saw some posts saying that this is just a misleading error, or an issue with AMI existence.
But my AMI for the launch template is in the same account and the same region...
Could you give me some hint or reference to solve this?
Thanks
const AWS = require('aws-sdk')

exports.handler = (event) => {
  const autoscaling = new AWS.AutoScaling()
  const { asgName, templateName, version } = event
  const params = {
    AutoScalingGroupName: asgName,
    LaunchTemplate: {
      LaunchTemplateName: templateName,
      Version: version
    },
    MaxSize: 4,
    MinSize: 1,
    DesiredCapacity: 1
  }
  // Callback-style SDK call; this is where the AccessDenied error above gets logged
  autoscaling.updateAutoScalingGroup(params, (err, data) => {
    if (err) console.log("err---", err)
    else console.log("data---", data)
  })
};
Below was added after the comments from Marcin, John Rotenstein, and samtoddler:
Now the policy has full permissions for EC2, EC2 Auto Scaling, EC2 Image Builder, and Auto Scaling, plus some permissions on CloudWatch Logs. But no luck yet.
The AMI is in the same account and the same region. And I added the account number under 'Modify Image Permissions' on it. (I don't know this area well, but I tried it anyway.)
describeLaunchTemplates() shows the launch template which I want to use.
CloudTrail shows 'RunInstances' and 'UpdateAutoScalingGroup' events. 'RunInstances' returned "errorCode": "Client.UnauthorizedOperation", and 'UpdateAutoScalingGroup' returned "errorCode": "AccessDenied", "errorMessage": "An unknown error occurred".
Without the LaunchTemplate part, the API works well. (I tried updating only the min and max counts, and it succeeded.)
Even when I changed the AMI to public, it did not work.
Now I'm researching launch template and AMI related configuration...
Unfortunately, the errors provided by AWS are in some cases very unclear and can be misleading.
Besides checking that you have the proper rights, note that this error is also returned when you are trying to create an Auto Scaling group with an invalid AMI or one that doesn't exist.
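A quick sanity check is to ask EC2 for the AMI referenced by the launch template (the AMI ID and region below are placeholders; substitute your own):
aws ec2 describe-images --image-ids ami-0123456789abcdef0 --region us-east-1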
Actually, the problem is that your EC2 instance has an IAM role that you are not authorized to pass. Add the policy below to the Lambda (or whatever role or IAM user you are using) so it can pass the role that is attached to the EC2 instance. Once that is done, it will start working.
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "iam:GetRole",
      "iam:PassRole"
    ],
    "Resource": "arn:aws:iam::account-id:role/EC2-roles-for-XYZ-*"
  }]
}
I'm stuck on a missing permissions issue trying to create a Lambda function.
The execution role I've configured has the following permissions:
$ aws --output=text iam get-role-policy --policy-name=MyRolePolicy --role-name=my-role
<snip>
POLICYDOCUMENT 2012-10-17
STATEMENT Allow
ACTION s3:Get*
ACTION s3:List*
ACTION logs:CreateLogGroup
ACTION logs:CreateLogStream
ACTION logs:PutLogEvents
ACTION ec2:DescribeNetworkInterfaces
ACTION ec2:CreateNetworkInterface
ACTION ec2:DeleteNetworkInterface
And when I create a Lambda function with that role, creation succeeds:
$ aws lambda create-function \
--function-name=my-test --runtime=java8 \
--role='arn:aws:iam::1234567890:role/my-role' \
--handler=MyHandler \
--code=S3Bucket=my-bucket,S3Key=app.zip
<result successful>
However, when I create the function via Boto3 using the same arguments (esp. the same execution role), I get the following error:
Boto3 Usage
# "client" is a boto3 Lambda client created elsewhere in the script
client.create_function(
    FunctionName=function_name,
    Runtime='java8',
    Role=getenv('execution_role_arn'),
    Handler='MyHandler',
    Code={
        "S3Bucket": bucket,
        "S3Key": artifact_name
    },
    Publish=True,
    VpcConfig={
        'SubnetIds': getenv('vpc_subnet_ids').split(','),
        'SecurityGroupIds': getenv('vpc_security_group_ids').split(',')
    }
)
Boto3 Result
{
    'Error': {
        'Message': 'The provided execution role does not have permissions to call CreateNetworkInterface on EC2',
        'Code': 'InvalidParameterValueException'
    },
    'ResponseMetadata': {
        'RequestId': '47b6640a-f3fe-4550-8ac3-38cfb2842461',
        'HTTPStatusCode': 400,
        'HTTPHeaders': {
            'date': 'Wed, 24 Jul 2019 10:55:44 GMT',
            'content-type': 'application/json',
            'content-length': '119',
            'connection': 'keep-alive',
            'x-amzn-requestid': '47b6640a-f3fe-4550-8ac3-38cfb2842461',
            'x-amzn-errortype': 'InvalidParameterValueException'
        },
        'RetryAttempts': 0
    }
}
Creating a function via the console with this execution role works as well, so I must be missing something in how I'm using Boto3, but I'm at a loss to explain what.
Hopefully someone can catch a misapplication of Boto3 here, because I'm at a loss!
Your boto3 code is specifying a VPC:
VpcConfig={
    'SubnetIds': getenv('vpc_subnet_ids').split(','),
    'SecurityGroupIds': getenv('vpc_security_group_ids').split(',')
}
However, the CLI version is not specifying a VPC.
Therefore, the two requests are not identical. That's why one works and the other does not.
From Configuring a Lambda Function to Access Resources in an Amazon VPC - AWS Lambda:
To connect to a VPC, your function's execution role must have the following permissions.
ec2:CreateNetworkInterface
ec2:DescribeNetworkInterfaces
ec2:DeleteNetworkInterface
These permissions are included in the AWSLambdaVPCAccessExecutionRole managed policy.
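If you would rather use the managed policy than spell out the EC2 actions yourself, you can attach it to the execution role from the CLI (role name taken from the question):
aws iam attach-role-policy \
  --role-name my-role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole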
The Lambda has a role that allows ec2:CreateNetworkInterface, but the account executing the script does not.
The role currently assigned to the Lambda function allows the Lambda itself to create the VPC configuration.
Check that the account running the script to provision the Lambda is also allowed the ec2:CreateNetworkInterface action.
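One way to check this is to simulate the action for the principal that runs the script (a sketch; the ARN below is a placeholder for whatever user or role your Boto3 credentials resolve to):
aws iam simulate-principal-policy \
  --policy-source-arn arn:aws:iam::1234567890:user/deploy-user \
  --action-names ec2:CreateNetworkInterface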
I have recently started getting the following error on the release change action in the AWS CodePipeline console. I am also attaching a screenshot.
Action execution failed
Insufficient permissions The provided role does not have permissions
to perform this action. Underlying error: Access Denied (Service:
Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID:
CA26EF93E3DAF8F0; S3 Extended Request ID:
mKkobqLGbj4uco8h9wDOBjPeWrRA2ybCrEsVoSq/MA4IFZqJb6QJSrlNrKk/EQK40TfLbTbqFuQ=)
I can't find any resources online anywhere for this error code.
Your pipeline is trying to access an S3 bucket, but the AWS CodePipeline service role does not have permission to access it. Create an IAM policy that provides access to S3 and attach it to the CodePipeline service role.
As @Jeevagan said, you must create a new IAM policy that grants access to the pipeline buckets.
Do not forget to add the following actions:
Action:
  - "s3:GetObject"
  - "s3:List*"
  - "s3:GetObjectVersion"
I lost a few minutes because of this one in particular: GetObjectVersion
By checking your codedeploy-output, you'll be able to see that the process is downloading a particular version of your artefact with the parameter "versionId".
Hope it will help.
You are missing the GetBucketVersioning action in your policy, so a correct example looks like the one below. I don't know why it's not mentioned anywhere in the reference/documentation:
- PolicyName: AccessRequiredByPipeline
  PolicyDocument:
    Version: '2012-10-17'
    Statement:
      - Action:
          - s3:PutObject
          - s3:GetObject
          - s3:GetObjectVersion
        Effect: Allow
        Resource: !Sub ${YouBucket.Arn}/*
      - Action:
          - s3:GetBucketVersioning
        Resource: !Sub ${YouBucket.Arn}
        Effect: Allow
      - Action:
          - kms:GenerateDataKey
          - kms:Decrypt
        Effect: Allow
        Resource: !GetAtt KMSKey.Arn
Another potential culprit that masquerades behind this error referencing S3 is missing KMS permissions on the IAM role for the CodePipeline. If you configured your CodePipeline to use KMS encryption, then the service role associated with the pipeline will also need permissions for that KMS key in order to interact with the KMS-encrypted objects in S3. In my experience, the missing KMS permissions cause the same error message to appear, referencing S3.
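In that case, the service role needs a statement along these lines in addition to the S3 permissions (a sketch; the key ARN is a placeholder for your pipeline's KMS key):
{
  "Effect": "Allow",
  "Action": [
    "kms:Decrypt",
    "kms:GenerateDataKey"
  ],
  "Resource": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
}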
I just ran into this issue, but the permissions were all set properly; I had used the same CloudFormation template with other projects with no problem. It turned out that the key name I was using in the S3 bucket was too long. Apparently it doesn't like anything more than 20 characters. Once I changed the key name in my S3 bucket (and all of its associated references in the CloudFormation template files), everything worked properly.
I ran into the same issue when I used CloudFormation to build my CI/CD. My problem was that the CodePipeline ArtifactStore pointed to the wrong location in S3 ("codepipeline", a folder I was not allowed to access, in my case). Changing the ArtifactStore to an existing folder fixed my issue.
You can view pipeline details, like where the SourceArtifact points, by following this link.
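Alternatively, you can dump the pipeline definition from the CLI and inspect the artifact store location directly (the pipeline name is a placeholder):
aws codepipeline get-pipeline --name my-pipeline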
As explained in the docs, I set up Lambda@Edge for a CloudFront Viewer Response trigger.
The Lambda function code:
'use strict';
exports.handler = (event, context, callback) => {
  console.log('----EXECUTED------');
  const response = event.Records[0].cf.response;
  console.log(event.Records[0].cf.response);
  callback(null, response);
};
I have set up the trigger appropriately for the Viewer Response event.
Now when I make a request through CloudFront, it should be logged in CloudWatch, but it isn't.
If I do a simple test of the Lambda function (using the Test button), it is logged properly.
What might be the issue here?
When you deploy a Lambda@Edge function, it is deployed to all edge cache regions across the world, each with its own replica of the function. Regional edge caches are a subset of the main AWS regions and edge locations.
When a user makes a request to the nearest POP/edge, the Lambda associated with that edge cache region gets called, and all of its logs end up in that edge cache region's CloudWatch Logs.
For example:
If a user is hitting the us-east-1 region, then the associated logs will be in us-east-1.
To know exactly where (in which region) your function is logging, you can run this AWS CLI script:
FUNCTION_NAME=function_name_without_qualifiers
for region in $(aws --output text ec2 describe-regions | cut -f 3)
do
  for loggroup in $(aws --output text logs describe-log-groups --log-group-name "/aws/lambda/us-east-1.$FUNCTION_NAME" --region $region --query 'logGroups[].logGroupName')
  do
    echo $region $loggroup
  done
done
where you have to replace "function_name_without_qualifiers" with the name of your Lambda@Edge function. Link
Hope it helps.
For those who have also searched for logs and couldn't find them with the script provided by @Kannaiyan.
TL;DR
Use this IAM Role for your Lambda function
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "logs:CreateLogGroup",
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": [
        "arn:aws:logs:*:*:log-group:*:*"
      ]
    }
  ]
}
====
Make sure you have the correct IAM role. If you created a Lambda first and then deployed it to Lambda@Edge, the automatically generated IAM role will only have enough permissions to log data in a single region, into the log group named after the Lambda function, whereas using Lambda@Edge means it will try to log data in different regions into "/aws/lambda/<region>.<function-name>" log groups. It is therefore necessary to change the IAM role to allow creating log groups and writing to them in different regions. In the TL;DR section I provided a sample IAM role, but make sure to narrow down the access to the specific list of log groups in production.
According to the AWS documentation for Lambda@Edge functions:
When you check for the log files, be aware that log files are stored in the Region closest to the location where the function is executed. So if you visit a website from, for example, London, you must change the Region to view the CloudWatch Logs for the London Region.
The Lambda@Edge logs, and which region a request was executed in, are available in the AWS CloudFront console, although the path is convoluted and AWS did a really lousy job of documenting the steps.
Here are the steps that work as of this posting:
Navigate to the AWS CloudFront console.
Click the "Monitoring" link under the "Telemetry" section (not "Logs", that takes to you CloudFront logs).
Click on the "Lambda#Edge" tab.
Choose your Lambda function and then click the "View metrics" button.
You can then use the "Invocations" graph to determine in which region a specific invocation of the Lambda function was executed. Once you have the region, you can, at long last, use the "View function logs" drop-down menu to view the Lambda function's logs for a specific region.
I figured this out by digging around in the console for a long time. The "documentation" for this logging is here, but doesn't seem to explain how to actually find Lambda#Edge logs for a specific region.
If anyone happens to find proper documentation about this, please update the post.
Following on from @yevhenii-hordashnyk's answer: if you're using the "Serverless" framework, by default it creates an IAM role with logging permissions for the execution region only, locked to the application name (which does not work for Edge functions, because their log groups are prefixed by the region of the installed function, thus requiring different permissions).
You have to specify a custom role, and apply that role to your function as per https://www.serverless.com/framework/docs/providers/aws/guide/iam
Note that the following snippet uses * instead of - Ref: 'AWS::Region', and adds the edgelambda.amazonaws.com service to the Trust Relationships.
AssumeRolePolicyDocument:
  Version: '2012-10-17'
  Statement:
    - Effect: Allow
      Principal:
        Service:
          - lambda.amazonaws.com
          - edgelambda.amazonaws.com
      Action: sts:AssumeRole
Policies:
  - PolicyName: myPolicyName
    PolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow # note that these rights are given in the default policy and are required if you want logs out of your lambda(s)
          Action:
            - logs:CreateLogGroup
            - logs:CreateLogStream
            - logs:PutLogEvents
          Resource:
            - 'Fn::Join':
                - ':'
                - - 'arn:aws:logs'
                  - '*'
                  - Ref: 'AWS::AccountId'
                  - 'log-group:/aws/lambda/*:*:*'
By default it does add the `AWSLambdaVPCAccessExecutionRole` policy to the Lambda role, but I do not know why it does not create the log stream. Maybe I've missed something, but after doing the above, it works now.