CloudFormation removing AWS Cognito Lambda Triggers on update stack operations

I have noticed that whenever a new CloudFormation stack change is deployed, my User Pool triggers are removed and have to be re-added manually in the AWS console or programmatically. This is a concern, as these triggers perform crucial operations in the communication between Cognito and the backend system.
At first I thought it was the deployment framework we are using, but here is a bare-bones CloudFormation template I was able to replicate it with:
Updated to reflect Lambda attachment to User Pool
{
"AWSTemplateFormatVersion": "2010-09-09",
"Resources": {
"UserPool": {
"Type": "AWS::Cognito::UserPool",
"Properties": {
"UserPoolName": "test",
"UsernameAttributes": [
"email"
],
"EmailVerificationMessage": "Your verification code is {####}.",
"EmailVerificationSubject": "Your verification code",
"Policies": {
"PasswordPolicy": {
"MinimumLength": 8,
"RequireLowercase": true,
"RequireNumbers": true
}
}
}
},
"UserPoolClient": {
"Type": "AWS::Cognito::UserPoolClient",
"Properties": {
"ClientName": "Test Client",
"UserPoolId": {
"Ref": "UserPool"
},
"ExplicitAuthFlows": [
"ALLOW_REFRESH_TOKEN_AUTH",
"ALLOW_USER_PASSWORD_AUTH",
"ALLOW_USER_SRP_AUTH"
],
"GenerateSecret": false
}
},
"PreSignUpHandlerLambdaFunction": {
"Type": "AWS::Lambda::Function",
"Properties": {
"Role": "arn:aws:iam::...",
"Code": {
"S3Bucket": "code-bucket",
"S3Key": "code-bucket/functions.zip"
},
"Handler": "handlers/pre-sign-up.default",
"Runtime": "nodejs12.x",
"FunctionName": "test-preSignUpHandler",
"MemorySize": 1024,
"Timeout": 6
}
},
"PreSignUpHandlerCustomCognitoUserPool1": {
"Type": "Custom::CognitoUserPool",
"Version": 1,
"DependsOn": [
"PreSignUpHandlerLambdaFunction"
],
"Properties": {
"ServiceToken": "arn:aws:lambda:...",
"FunctionName": "test-preSignUpHandler",
"UserPoolName": "test",
"UserPoolConfigs": [
{
"Trigger": "PreSignUp"
}
]
}
}
}
}
I have dug into the CloudWatch logs generated by the update, but nothing there sheds light on the User Pool update or the removal of the triggers.
Has anyone else experienced this, and are there any workarounds?

This is the expected behavior of CloudFormation. When it detects configuration drift during a stack update, it brings the resource back in line with your stack template. If you want the triggers to survive updates, declare them in your CloudFormation template (for example via the user pool's LambdaConfig property) rather than attaching them outside the stack. Be sure to grant Cognito access to invoke the function in the Lambda resource policy:
{
"Version": "2012-10-17",
"Id": "default",
"Statement": [
{
"Sid": "lambda-allow-cognito-my-function",
"Effect": "Allow",
"Principal": {
"Service": "cognito-idp.amazonaws.com"
},
"Action": "lambda:InvokeFunction",
"Resource": "arn:aws:lambda:us-east-1:123456789012:function:my-function",
"Condition": {
"StringEquals": {
"AWS:SourceAccount": "123456789012"
},
"ArnLike": {
"AWS:SourceArn": "arn:aws:cognito-idp:us-east-1:123456789012:userpool/us-east-1_myUserPoolId"
}
}
}
]
}
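For reference, here is a minimal sketch of what declaring the trigger inside the stack could look like, adapted to the template above (the logical IDs are illustrative, and the AWS::Lambda::Permission resource generates a resource-policy statement equivalent to the one shown):
"UserPool": {
"Type": "AWS::Cognito::UserPool",
"Properties": {
"UserPoolName": "test",
"LambdaConfig": {
"PreSignUp": {
"Fn::GetAtt": ["PreSignUpHandlerLambdaFunction", "Arn"]
}
}
}
},
"PreSignUpInvokePermission": {
"Type": "AWS::Lambda::Permission",
"Properties": {
"Action": "lambda:InvokeFunction",
"FunctionName": {
"Ref": "PreSignUpHandlerLambdaFunction"
},
"Principal": "cognito-idp.amazonaws.com",
"SourceArn": {
"Fn::GetAtt": ["UserPool", "Arn"]
}
}
}
With the trigger owned by the stack, subsequent updates no longer treat it as drift, and an out-of-band attachment such as the Custom::CognitoUserPool resource is no longer needed.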

Related

Need help configuring DLQ for Lambda triggered by SNS

I'd like to receive an email if my Lambda fails. The Lambda is triggered via SNS (which is triggered by SES).
When I publish to the SNS Topic, the Lambda runs and throws an error (for testing) due to a missing package. I see from the console logs that the Lambda runs 3 times.
I have an SQS queue attached to the Redrive policy (dead-letter queue) of the SNS Topic's subscription (that triggers the lambda).
{
"deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:myproj-sns-topic-dlq"
}
I tested, and things didn't work. I noticed a warning in the AWS console for the SNS Topic's subscription:
Dead-letter queue (redrive policy) permissions: The Amazon SQS queue specified as the dead-letter queue for your subscription (redrive policy) doesn't permit deliveries from topics. To allow an Amazon SNS topic to send messages to an Amazon SQS queue, you must create an Amazon SQS queue policy.
Following the steps in "Subscribing an Amazon SQS queue to an Amazon SNS topic", I added the second statement to my SQS queue's access policy:
{
"Version": "2008-10-17",
"Id": "__default_policy_ID",
"Statement": [
{
"Sid": "__owner_statement",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::123456789012:root"
},
"Action": "SQS:*",
"Resource": "arn:aws:sqs:us-east-1:123456789012:myproj-sns-topic-dlq"
},
{
"Effect": "Allow",
"Principal": "*",
"Action": "sqs:SendMessage",
"Resource": "arn:aws:sqs:us-east-1:123456789012:myproj-sns-topic-dlq",
"Condition": {
"ArnEquals": {
"aws:SourceArn": "arn:aws:sns:us-east-1:123456789012:myproj-snstopic"
}
}
}
]
}
The Principal was {"Service": "sns.amazonaws.com"}, but that results in a warning in the AWS console saying it can't test permissions. I tested anyway and it didn't work. (Lambda runs 3 times, but nothing gets put in the DLQ.)
I set the Principal to * for now (per snippet above). That eliminates the warning in the console, but things still don't work.
My goal is to have the event drop into the SQS DLQ after the Lambda fails. I have an alarm on that queue that will notify me by email...
Edit: added missing condition
According to this article, you can use a CloudWatch Logs filter to parse a Lambda function's log and get an email notification (a simplified sketch follows the list below).
To implement this solution, you must create the following:
An SNS topic
An IAM role
A Lambda function
A CloudWatch log trigger
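A simplified variation of that idea (not the article's exact recipe) is a metric filter on the function's log group plus an alarm that notifies an existing SNS topic. A hedged sketch; the log group name, metric names and topic ARN are placeholders:
"ErrorMetricFilter": {
"Type": "AWS::Logs::MetricFilter",
"Properties": {
"LogGroupName": "/aws/lambda/my-function",
"FilterPattern": "ERROR",
"MetricTransformations": [
{
"MetricNamespace": "Custom/Lambda",
"MetricName": "MyFunctionErrors",
"MetricValue": "1"
}
]
}
},
"ErrorAlarm": {
"Type": "AWS::CloudWatch::Alarm",
"Properties": {
"Namespace": "Custom/Lambda",
"MetricName": "MyFunctionErrors",
"Statistic": "Sum",
"Period": 60,
"EvaluationPeriods": 1,
"Threshold": 0,
"ComparisonOperator": "GreaterThanThreshold",
"TreatMissingData": "notBreaching",
"AlarmActions": ["arn:aws:sns:us-east-1:123456789012:my-alert-topic"]
}
}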
As pointed out by @fedonev, the SNS subscription's DLQ (for the Lambda subscription) is only used when the event cannot be delivered. If the event is delivered but the Lambda fails, you can use the Lambda's async-event DLQ or wire up the Lambda's "on failure" destination.
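The "on failure" destination route can also be declared in CloudFormation via AWS::Lambda::EventInvokeConfig. A minimal sketch, assuming the function and the destination queue (names and ARN are placeholders) already exist and the function's execution role can send to that queue:
"OnFailureConfig": {
"Type": "AWS::Lambda::EventInvokeConfig",
"Properties": {
"FunctionName": "my-function",
"Qualifier": "$LATEST",
"MaximumRetryAttempts": 2,
"DestinationConfig": {
"OnFailure": {
"Destination": "arn:aws:sqs:us-east-1:123456789012:my-function-failures"
}
}
}
}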
I'm using AWS Amplify and decided to use the Lambda's "async" DLQ as opposed to a lambda destination.
Step 1 - Add a custom category to add:
SQS (DLQ) to save the failed attempt's event
CloudWatch Alarm to watch the SQS resource
SNS Topic and Subscription(s) used by the Alarm
And "Output" the SQS queue's ARN, which is needed by the Lambda.
Step 2 - Add a "DeadLetterConfig" to the Lambda that pushes fails into the above queue.
amplify add custom
name: LambdaAlarm
File: amplify/backend/custom/LambdaAlarm/LambdaAlarm-cloudformation-template.json
{
"AWSTemplateFormatVersion": "2010-09-09",
"Parameters": {
"env": {
"Type": "String"
}
},
"Resources": {
"SQSDLQ": {
"Type": "AWS::SQS::Queue",
"Properties": {
"QueueName": {
"Fn::Join": [
"",
[
"myproject-lambdafailed-dlq",
{
"Fn::Select": [
3,
{
"Fn::Split": [
"-",
{
"Ref": "AWS::StackName"
}
]
}
]
},
"-",
{
"Ref": "env"
}
]
]
},
"MessageRetentionPeriod": 1209600,
"VisibilityTimeout": 5432,
"SqsManagedSseEnabled": false
}
},
"SNSTOPIC": {
"Type": "AWS::SNS::Topic",
"Properties": {
"TopicName": {
"Fn::Join": [
"",
[
"myproject-lambda-failed-alarm-topic",
{
"Fn::Select": [
3,
{
"Fn::Split": [
"-",
{
"Ref": "AWS::StackName"
}
]
}
]
},
"-",
{
"Ref": "env"
}
]
]
}
}
},
"SNSSubscriptionEmailJoeAtGmail": {
"Type": "AWS::SNS::Subscription",
"Properties": {
"Protocol": "email",
"TopicArn": {
"Ref": "SNSTOPIC"
},
"Endpoint": "yourname+myprojectalert#gmail.com"
}
},
"SNSSubscriptionEmailJillAtQuad": {
"Type": "AWS::SNS::Subscription",
"Properties": {
"Protocol": "email",
"TopicArn": {
"Ref": "SNSTOPIC"
},
"Endpoint": "jill#stakeholder.com"
}
},
"ALARM": {
"Type": "AWS::CloudWatch::Alarm",
"Properties": {
"AlarmName": {
"Fn::Join": [
"",
[
"myproject-lambda-failed-dlq-alarm",
{
"Fn::Select": [
3,
{
"Fn::Split": [
"-",
{
"Ref": "AWS::StackName"
}
]
}
]
},
"-",
{
"Ref": "env"
}
]
]
},
"AlarmDescription": "There are messages in the 'Lambda Failed' dead letter queue.",
"Namespace": "AWS/SQS",
"MetricName": "ApproximateNumberOfMessagesVisible",
"Dimensions": [
{
"Name": "QueueName",
"Value": {
"Fn::GetAtt": [
"SQSDLQ",
"QueueName"
]
}
}
],
"Statistic": "Sum",
"Period": 60,
"EvaluationPeriods": 1,
"Threshold": 0,
"ComparisonOperator": "GreaterThanThreshold",
"AlarmActions": [
{
"Ref": "SNSTOPIC"
}
]
}
}
},
"Outputs": {
"SQSDLQArn": {
"Value": {
"Fn::GetAtt": [
"SQSDLQ",
"Arn"
]
}
}
},
"Description": ""
}
Next, update and add the new custom resource as a dependency of the Lambda(s) to monitor.
File: backend-config.json
"function": {
"MYLambda": {
"build": true,
"dependsOn": [
{
"attributes": [
"SQSDLQArn"
],
"category": "custom",
"resourceName": "LambdaAlarm"
}
],
"providerPlugin": "awscloudformation",
"service": "Lambda"
},
},
In the Lambda(s) you want to monitor, make three changes to the CloudFormation template:
Pull in the output variable (customLambdaAlarmSQSDLQArn) from your custom category and add it to the Parameters
Add the DeadLetterConfig property to the Lambda
Add a policy to the LambdaExecutionRole
File: amplify/backend/function/MyLambda/MyLambda-cloudformation-template.json
{
"AWSTemplateFormatVersion": "2010-09-09",
"Description": "...",
"Parameters": {
...snip...
"customLambdaAlarmSQSDLQArn": {
"Type": "String"
},
...snip...
},
"Conditions": {...snip...},
"Resources": {
"LambdaFunction": {
"Type": "AWS::Lambda::Function",
"Metadata": {...snip},
"Properties": {
"Code": {...snip...},
"Handler": "index.handler",
"FunctionName": {...snip...},
"Environment": {
"Variables": {...snip...}
},
"Role": {...snip...},
"Runtime": "nodejs18.x",
"Architectures": ["arm64"],
"Layers": [],
"MemorySize": 256,
"Timeout": 120,
"DeadLetterConfig": {
"TargetArn": {
"Ref": "customLambdaAlarmSQSDLQArn"
}
}
}
},
"LambdaExecutionRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"RoleName": {...snip...},
"Policies": [
{
"PolicyName": "custom-lambda-execution-policy",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowSQSSendMessage",
"Effect": "Allow",
"Action": [
"SQS:SendMessage"
],
"Resource": {
"Ref": "customLambdaAlarmSQSDLQArn"
}
}
]
}
}
],
"AssumeRolePolicyDocument": {...snip...}
}
},
"lambdaexecutionpolicy": {...snip...},
"AmplifyResourcesPolicy": {...snip...},
"CustomLambdaExecutionPolicy": {...snip...}
},
"Outputs": {...snip...}
}
Finally, due to an Amplify quirk, you must run amplify env checkout dev because you manually touched the backend-config.json file.
Then you can deploy your changes. Apart from the Amplify wiring (backend-config.json and the env checkout), the CloudFormation above is not specific to AWS Amplify.

SQS API: sqs:CreateQueue Access to the resource https://sqs.us-east-1.amazonaws.com/ is denied on `amplify push` using Cloudformation

I'm implementing an SQS FIFO queue. I have to implement it using a CloudFormation template.
When I do amplify push, I get
Error
API: sqs:CreateQueue Access to the resource https://sqs.us-east-1.amazonaws.com/ is denied
I've added an SQS policy following the AWS docs. Except instead of an account ID, I'm using the service "sqs.amazonaws.com" as the "Principal".
My cloudformation looks like:
{
"AWSTemplateFormatVersion": "2010-09-09",
"Description": "SQS fifo queue",
"Parameters": {
"env": {
"Type": "String"
}
},
"Resources": {
"QueueExecutionRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"RoleName": {
"Fn::Join": [
"",
[
"queue-exec-role-",
{
"Ref": "env"
}
]
]
},
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "sqs.amazonaws.com"
},
"Action": ["sts:AssumeRole"]
}
]
}
}
},
"SQSPolicy": {
"Type": "AWS::SQS::QueuePolicy",
"Properties": {
"Queues": [{ "Ref": "groupingQueue" }],
"PolicyDocument": {
"Statement": [
{
"Action": ["SQS:SendMessage", "SQS:ReceiveMessage"],
"Effect": "Allow",
"Resource": {
"Fn::GetAtt": ["groupingQueue", "Arn"]
},
"Principal": {
"Service": "sqs.amazonaws.com"
}
}
]
}
}
},
"groupingQueue": {
"Type": "AWS::SQS::Queue",
"Properties": {
"FifoQueue": "true",
"QueueName": {
"Fn::Join": [
"",
[
"grouping-queue-",
{
"Ref": "env"
},
".fifo"
]
]
}
}
}
},
"Outputs": {
"QueueURL": {
"Description": "URL of new Amazon SQS Queue",
"Value": { "Ref": "groupingQueue" }
},
"QueueARN": {
"Description": "ARN of new Amazon SQS Queue",
"Value": { "Fn::GetAtt": ["groupingQueue", "Arn"] }
},
"QueueName": {
"Description": "Name new Amazon SQS Queue",
"Value": { "Fn::GetAtt": ["groupingQueue", "QueueName"] }
}
}
}
I do not want to put an account ID in the "Principal"; that is why I used the SQS service principal.
With this exact template, I get access denied on amplify push -y.
I was doing amplify push from a server. When I pushed from my local computer, it worked.
It turned out the AWS profile I had set on the server did not have sqs:CreateQueue permission, while my local profile had administrator access.
So I added AdministratorAccess to my server user from the console, ran amplify push again, and it worked smoothly.
PS: you don't need to grant administrator permission, you can just grant sqs:CreateQueue. I did it because I was testing.
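For the minimal route, a policy statement along these lines attached to the server's CLI user should be enough for the create step (account ID and region are placeholders; stack updates and rollbacks may additionally need actions such as sqs:GetQueueAttributes, sqs:SetQueueAttributes, sqs:TagQueue and sqs:DeleteQueue):
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowCreateQueue",
"Effect": "Allow",
"Action": "sqs:CreateQueue",
"Resource": "arn:aws:sqs:us-east-1:123456789012:*"
}
]
}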

Cloudformation error while creating multiple users

I am trying to create two users using a CloudFormation template. I am very new to CloudFormation; how do we define multiple users? I have tried the below but am getting a CFT error.
{
"Resources": {
"AWSSCRIPTS": {
"Type": "AWS::IAM::User"
},
"AWSSCRIPTSPolicy": {
"Type": "AWS::IAM::ManagedPolicy",
"Properties": {
"Description" : "This policy allows to run scripts in new account.",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "*",
"Resource": "*"
}
]
},
"Users": [{
"Ref": "AWSSCRIPTS"
}]
}
},
"AWSSCRIPTSKeys": {
"Type": "AWS::IAM::AccessKey",
"Properties": {
"UserName": {
"Ref": "AWSSCRIPTS"
}
}
}
},
"ADDUSER": {
"Type": "AWS::IAM::User"
},
"ADDUSERPolicy": {
"Type": "AWS::IAM::ManagedPolicy",
"Properties": {
"Description" : "This policy allows to list IAM Roles for AAD User.",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "*",
"Resource": "*"
}
]
},
"Users": [{
"Ref": "ADDUSER"
}]
}
},
"ADDUSERKeys": {
"Type": "AWS::IAM::AccessKey",
"Properties": {
"UserName": {
"Ref": "ADDUSER"
}
}
},
"Outputs": {
"AccessKey": {
"Value": {
"Ref": "AWSSCRIPTS"
},
"Description": "Access Key ID of AWS Scripts"
},
"SecretKey": {
"Value": {
"Fn::GetAtt": [
"AWSSCRIPTSKeys",
"SecretAccessKey"
]
},
"Description": "Secret Key of AWS Scripts User"
},
"AccessKey2": {
"Value": {
"Ref": "ADDUSER"
},
"Description": "Access Key ID of ADD USER"
},
"SecretKey2": {
"Value": {
"Fn::GetAtt": [
"ADDUSERKeys",
"SecretAccessKey"
]
},
"Description": "Secret Key of ADD User"
}
}
}
I am getting the below error:
Invalid template property or properties [ADDUSERPolicy, ADDUSER, ADDUSERKeys]
Create credentials for the user, depending on the type of access the user requires:
Programmatic access: The IAM user might need to make API calls, use the AWS CLI, or use the Tools for Windows PowerShell. In that case, create an access key (access key ID and a secret access key) for that user.
AWS Management Console access: If the user needs to access the AWS Management Console, create a password for the user.
ADDUSER, ADDUSERPolicy and ADDUSERKeys should be inside Resources, but in your template they sit at the same level as Resources (the Resources block is closed too early, right after AWSSCRIPTSKeys).
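A corrected skeleton (policy documents trimmed to ...snip...) might look like the following; note also that Ref on an AWS::IAM::AccessKey resource returns the access key ID, so the AccessKey outputs probably want to reference the Keys resources rather than the users:
{
"Resources": {
"AWSSCRIPTS": { "Type": "AWS::IAM::User" },
"AWSSCRIPTSPolicy": { "Type": "AWS::IAM::ManagedPolicy", "Properties": { ...snip... } },
"AWSSCRIPTSKeys": {
"Type": "AWS::IAM::AccessKey",
"Properties": { "UserName": { "Ref": "AWSSCRIPTS" } }
},
"ADDUSER": { "Type": "AWS::IAM::User" },
"ADDUSERPolicy": { "Type": "AWS::IAM::ManagedPolicy", "Properties": { ...snip... } },
"ADDUSERKeys": {
"Type": "AWS::IAM::AccessKey",
"Properties": { "UserName": { "Ref": "ADDUSER" } }
}
},
"Outputs": {
"AccessKey": { "Value": { "Ref": "AWSSCRIPTSKeys" }, "Description": "Access Key ID of AWS Scripts" },
"SecretKey": { "Value": { "Fn::GetAtt": ["AWSSCRIPTSKeys", "SecretAccessKey"] }, "Description": "Secret Key of AWS Scripts User" },
"AccessKey2": { "Value": { "Ref": "ADDUSERKeys" }, "Description": "Access Key ID of ADD USER" },
"SecretKey2": { "Value": { "Fn::GetAtt": ["ADDUSERKeys", "SecretAccessKey"] }, "Description": "Secret Key of ADD User" }
}
}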

ECS unable to assume role

From the console, I am invoking a lambda which submits a batch job. The batch job fails, indicating that ECS is unable to assume the role that is provided to execute the job definition.
For the role's trust policy, I've added the Lambda and ECS services.
The error message:
"ECS was unable to assume the role
'arn:aws:iam::749340585813:role/golfnow-invoke-write-progress' that
was provided for this task. Please verify that the role being passed
has the proper trust relationship and permissions and that your IAM
user has permissions to pass this role."
"TrainingJobRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"RoleName": "golfnow-invoke-write-progress",
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"lambda.amazonaws.com",
"ecs.amazonaws.com"
]
},
"Action": [
"sts:AssumeRole"
]
}
]
},
"Path": "/"
}
}
The batch job:
"TrainingJob": {
"Type": "AWS::Batch::JobDefinition",
"Properties": {
"Type": "container",
"JobDefinitionName": {
"Fn::Sub": "c12e-golfnow-${Environment}-job"
},
"ContainerProperties": {
"Image": {
"Fn::Join": [
"",
[
"{{ image omitted }}",
{
"Ref": "AWS::Region"
},
".amazonaws.com/amazonlinux:latest"
]
]
},
"Vcpus": 2,
"Memory": 2000,
"Command": [
"while", "True", ";", "do", "echo", "'hello';", "done"
],
"JobRoleArn": {
"Fn::GetAtt": [
"TrainingJobRole",
"Arn"
]
}
},
"RetryStrategy": {
"Attempts": 1
}
}
},
"JobQueue": {
"Type": "AWS::Batch::JobQueue",
"Properties": {
"Priority": 1,
"ComputeEnvironmentOrder": [
{
"Order": 1,
"ComputeEnvironment": {
"Ref": "ComputeEnvironment"
}
}
]
}
}
Is the issue with the way it's being invoked? My user has admin privileges, so I don't think this is an issue with my user having insufficient permissions.
You have to add the principal "ecs-tasks.amazonaws.com" (not "ecs.amazonaws.com") to the trust policy of the role that is passed to the Batch job.
Revised role:
"TrainingJobRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"RoleName": "golfnow-invoke-write-progress",
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"lambda.amazonaws.com",
"ecs-tasks.amazonaws.com"
]
},
"Action": [
"sts:AssumeRole"
]
}
]
},
"Path": "/"
}
},
And for those writing CDK scripts in Java: when defining the TaskDefinition you don't have to explicitly provide a taskRole or executionRole; CDK will create appropriate roles for you.
You would need to add a trust policy to ECS to call the Batch service.
"Principal": {
"Service": [
"batch.amazonaws.com"
]
},
My issue was resolved by adding a role name in the CDK script.
const ecsFargateServiceRole = new iam.Role(this, 'execution-role', {
assumedBy: new iam.ServicePrincipal('ecs-tasks.amazonaws.com'),
roleName: "execution-role"
});
ecsFargateServiceRole.addToPolicy(executionRolePolicy);

Cannot create encryption key with cloudformation

I am trying to create my encryption key with CloudFormation. So, just to test, I have a very simple template, as follows:
{
"AWSTemplateFormatVersion": "2010-09-09",
"Description": "Creates a KMS key and attaches a policy similar to the default policy. Also, creates two Roles which allow encryption and decryption under this key.",
"UserPrincipal": {
"Type": "String",
"Default": "user/datadog"
}
},
"Resources": {
"DemonstrationKey": {
"Type": "AWS::KMS::Key",
"Properties": {
"KeyPolicy": {
"Id": "DefaultKmsPolicy",
"Version": "2012-10-17",
"Statement": [{
"Sid": "Enable IAM User Permissions",
"Effect": "Allow",
"Principal": {
"AWS": [{
"Fn::Join": [
":", [
"arn:aws:iam:",
{
"Ref": "AWS::AccountId"
},
"root"
]
]
}]
},
"Action": "kms:*",
"Resource": "*"
}]
}
}
}
},
"Outputs": {
"KeyID": {
"Description": "Key ID",
"Value": {
"Ref": "DemonstrationKey"
}
}
}
}
And it works fine, but this is not what I want. Instead, I want to attach an already existing policy to it, for example something like this:
{
"AWSTemplateFormatVersion": "2010-09-09",
"Description": "Creates a KMS key and attaches a policy similar to the default policy. Also, creates two Roles which allow encryption and decryption under this key.",
"UserPrincipal": {
"Type": "String",
"Default": "user/datadog"
}
},
"Resources": {
"DemonstrationKey": {
"Type": "AWS::KMS::Key",
"Properties": {
"KeyPolicy": "arn:aws:iam::******:policy/testtestpol1"
}
}
},
"Outputs": {
"KeyID": {
"Description": "Key ID",
"Value": {
"Ref": "DemonstrationKey"
}
}
}
}
But this does not work and I get the following error:
MalformedPolicyDocumentException
Can anyone help me with that? Is it doable at all?