My CodePipeline is not getting triggered when I upload code.zip to s3-bucket or copy code.zip using the AWS CLI (aws s3 cp).
CloudFormation event rule snippet:
Type: AWS::Events::Rule
Properties:
  EventPattern:
    source:
      - 'aws.s3'
    detail:
      eventSource:
        - 's3.amazonaws.com'
      eventName:
        - 'CopyObject'
        - 'PutObject'
        - 'CompleteMultipartUpload'
      requestParameters:
        bucketName:
          - 's3-bucket'
        key:
          - 'code.zip'
  State: 'ENABLED'
  Targets:
    - Arn: '<CodePipeline ARN>'
      Id: 'Target-1'
      RoleArn: '<trigger role ARN>'
Trigger role policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "codepipeline:StartPipelineExecution"
      ],
      "Resource": "*"
    }
  ]
}
Event pattern:
{
  "source": [
    "aws.s3"
  ],
  "detail": {
    "eventSource": [
      "s3.amazonaws.com"
    ],
    "requestParameters": {
      "bucketName": [
        "s3-bucket"
      ],
      "key": [
        "code.zip"
      ]
    },
    "eventName": [
      "CopyObject",
      "PutObject",
      "CompleteMultipartUpload"
    ]
  }
}
What is missing here? Or does anyone have any pointers on how this can be debugged further?
There are two ways to further debug this.
First, you want to ensure that you have a working event pattern. The easiest way to do this is to get a sample event and then make a test call via https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_TestEventPattern.html.
Next, if you have a working rule, you can check the metrics https://docs.aws.amazon.com/eventbridge/latest/userguide/eventbridge-monitoring-cloudwatch-metrics.html and set up a DLQ https://docs.aws.amazon.com/eventbridge/latest/userguide/rule-dlq.html. Both provide visibility into whether the rule matched and whether the event was successfully delivered.
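For example, a minimal sketch of both checks using boto3 (the sample event is a trimmed, hypothetical "AWS API Call via CloudTrail" event, and the rule name in the metrics call is a placeholder):
import json
from datetime import datetime, timedelta
import boto3

events = boto3.client('events')
cloudwatch = boto3.client('cloudwatch')

pattern = json.dumps({
    "source": ["aws.s3"],
    "detail": {
        "eventSource": ["s3.amazonaws.com"],
        "eventName": ["PutObject"],
        "requestParameters": {
            "bucketName": ["s3-bucket"],
            "key": ["code.zip"],
        },
    },
})

# Hypothetical sample event; TestEventPattern requires the standard
# top-level envelope fields to be present.
sample_event = json.dumps({
    "id": "12345678-1234-1234-1234-123456789012",
    "detail-type": "AWS API Call via CloudTrail",
    "source": "aws.s3",
    "account": "111111111111",
    "time": "2022-01-01T00:00:00Z",
    "region": "us-east-1",
    "resources": [],
    "detail": {
        "eventSource": "s3.amazonaws.com",
        "eventName": "PutObject",
        "requestParameters": {"bucketName": "s3-bucket", "key": "code.zip"},
    },
})

# Step 1: does the pattern match a realistic event?
match = events.test_event_pattern(EventPattern=pattern, Event=sample_event)
print("Pattern matches:", match['Result'])

# Step 2: has the deployed rule ever matched? Check TriggeredRules.
stats = cloudwatch.get_metric_statistics(
    Namespace='AWS/Events',
    MetricName='TriggeredRules',
    Dimensions=[{'Name': 'RuleName', 'Value': 'my-rule-name'}],  # placeholder
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
    Period=3600,
    Statistics=['Sum'],
)
print(stats['Datapoints'])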
The problem was that the events were never being emitted by S3: object-level API calls such as PutObject only reach EventBridge if they are recorded by CloudTrail. Once I enabled data events for the bucket in CloudTrail, it worked.
Here is the related answer I used to resolve it:
S3 object level events are not getting triggered
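For reference, a sketch of enabling S3 data events on an existing trail with boto3 (the trail name is a placeholder):
import boto3

cloudtrail = boto3.client('cloudtrail')
cloudtrail.put_event_selectors(
    TrailName='my-trail',  # placeholder
    EventSelectors=[{
        'ReadWriteType': 'WriteOnly',  # PutObject/CopyObject are write events
        'IncludeManagementEvents': True,
        'DataResources': [{
            'Type': 'AWS::S3::Object',
            # Trailing slash = log data events for all objects in this bucket
            'Values': ['arn:aws:s3:::s3-bucket/'],
        }],
    }],
)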
So I've tried setting up a Lambda function using Python 3.9, which calls my SSM document, which in turn restarts the "ColdFusion 2018 Application Server" Windows service when the named CloudWatch alarm is triggered. I have it set in EventBridge to fire on alarm state change, which means every time the domain goes down ("ColdFusion service stopped") it should run the SSM document and the PowerShell script. But nothing is working at all, and I've tried practically everything I know of.
Below are the default role for my Lambda + inline policy, along with my Lambda function, my SSM document, and my EventBridge rule.
The policy on my default Lambda role is:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "logs:CreateLogGroup",
      "Resource": "arn:aws:logs:ap-southeast-2:727665054500:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": [
        "arn:aws:logs:ap-southeast-2:727665054500:log-group:/aws/lambda/johntest:*"
      ]
    }
  ]
}
And the inline policy attached to the default role, to allow SSM, is:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "ssm:SendCommand",
      "Resource": "*"
    }
  ]
}
Lambda function:
import os
import boto3

ssm = boto3.client('ssm')

def lambda_handler(event, context):
    instance_id = 'i-06692c60000c89460'
    ssm_document = 'johntest'
    # Send Run Command output to this function's own log group
    log_group = os.environ['AWS_LAMBDA_LOG_GROUP_NAME']

    response = ssm.send_command(
        InstanceIds=[instance_id],
        DocumentName=ssm_document,
        DocumentVersion='$DEFAULT',
        CloudWatchOutputConfig={
            'CloudWatchLogGroupName': log_group,
            'CloudWatchOutputEnabled': True
        }
    )
    return response['Command']['CommandId']
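To check whether send_command is even being reached, the command's status can be polled right after it is sent (a debugging sketch to drop at the end of the handler, reusing the names above; get_command_invocation is the standard follow-up call):
import time

command_id = response['Command']['CommandId']
time.sleep(2)  # give the SSM Agent a moment to pick the command up
invocation = ssm.get_command_invocation(
    CommandId=command_id,
    InstanceId=instance_id,
)
print(invocation['Status'], invocation.get('StatusDetails'))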
EventBridge (CloudWatch Events) rule, which is the trigger for the Lambda:
{
  "source": [
    "aws.cloudwatch"
  ],
  "detail-type": [
    "CloudWatch Alarm State Change"
  ],
  "detail": {
    "alarmName": [
      "TASS-john2-Testing-SiteDown for domain https://johntest.tassdev.cloud/tassweb"
    ],
    "state": {
      "value": [
        "ALARM"
      ]
    },
    "previousState": {
      "value": [
        "OK"
      ]
    }
  }
}
SSM document to run the PowerShell script. In SSM, when you create an initial document, I selected the Command/Session document type, which may be wrong? Do I need to make an Automation document? If so, can someone show me the correct code/syntax, please?
---
schemaVersion: "2.2"
description: "Example document"
parameters:
  Message:
    type: "String"
    description: "Example parameter"
    default: "Hello World"
mainSteps:
  - action: "aws:runPowerShellScript"
    name: "example"
    inputs:
      timeoutSeconds: '600'
      runCommand:
        - Restart-Service -DisplayName "ColdFusion 2018 Application Server"
I tried setting up the Lambda function with my instance ID/SSM document name, and the trigger is EventBridge, set to my CloudWatch alarm's state change. I cannot get the SSM document to restart my Windows "ColdFusion" service at all.
I have pasted my code above for the EventBridge rule/SSM document/Lambda, and even my default Lambda role/inline policy, and it still doesn't seem to work. I also have the SSM Agent installed on my instance, but still nothing.
EDIT: I just went to Systems Manager and clicked Run Command, and it ran the PowerShell script and started up the ColdFusion service. So why isn't it triggering from the CloudWatch alarm?
Cheers.
I am using the AWS CDK to create a stack with an AWS MSK cluster and a Lambda function that should be triggered when a new message is available in a specific topic.
I already had it working nicely and then I decided to add clientAuthentication and now I am stuck. I am using SASL/SCRAM for authentication. I have created a custom encryption key via the KMS service and I am using that key in a Secret in the SecretsManager. I have associated that Secret with my MSK cluster and turned on clientAuthentication there.
I have also already created an interface endpoint in my VPC to the Lambda Service in order for the service to be able to access my cluster (again, this already worked when I hadn't activated clientAuthentication).
Now I am defining my Lambda listener handler function like this:
const listener = new aws_lambda.Function(this, 'ListenerHandler', {
  vpc,
  vpcSubnets: { subnetGroupName: 'ListenerPrivate' },
  runtime: aws_lambda.Runtime.NODEJS_14_X,
  code: aws_lambda.Code.fromAsset('lambda'),
  handler: 'listener.handler'
});
listener.addToRolePolicy(new aws_iam.PolicyStatement({
  effect: Effect.ALLOW,
  actions: ['kafka:*', 'kafka-cluster:*', 'secretsmanager:DescribeSecret', 'secretsmanager:GetSecretValue'],
  resources: [cluster.ref]
}));

const secretsFromLambdaAccessRole = new aws_iam.Role(this, 'AccessSecretsFromLambdaRoles', {
  assumedBy: new aws_iam.ServicePrincipal('kafka.amazonaws.com')
});
secretsFromLambdaAccessRole.addToPolicy(new aws_iam.PolicyStatement({
  effect: Effect.ALLOW,
  actions: ['secretsmanager:DescribeSecret', 'secretsmanager:GetSecretValue'],
  resources: [KAFKA_ACCESS_SECRET_ARN]
}));

listener.role?.addManagedPolicy(
  aws_iam.ManagedPolicy
    .fromAwsManagedPolicyName("service-role/AWSLambdaVPCAccessExecutionRole")
);
listener.role?.addManagedPolicy(
  aws_iam.ManagedPolicy
    .fromAwsManagedPolicyName("service-role/AWSLambdaMSKExecutionRole")
);

const kafkaAccessSecret = aws_secretsmanager.Secret
  .fromSecretCompleteArn(this, 'kafkaAccessSecret', KAFKA_ACCESS_SECRET_ARN);
listener.addEventSource(new ManagedKafkaEventSource({
  clusterArn: cluster.ref,
  topic: "MyTopic",
  startingPosition: StartingPosition.LATEST,
  secret: kafkaAccessSecret,
}));
The secret also has policies assigned to it:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AWSLambdaResourcePolicy",
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": [
        "secretsmanager:GetSecretValue",
        "secretsmanager:DescribeSecret",
        "secretsmanager:ListSecretVersionIds"
      ],
      "Resource": "arn:aws:secretsmanager:some-region:some-account:secret:AmazonMSK_some-secret"
    },
    {
      "Sid": "AWSKafkaResourcePolicy",
      "Effect": "Allow",
      "Principal": {
        "Service": "kafka.amazonaws.com"
      },
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "arn:aws:secretsmanager:some-region:some-account:secret:AmazonMSK_some-secret"
    }
  ]
}
Now, when I try to deploy my Lambda function via the CDK, and it comes to the point where it should add the event source mapping, I get this error:
Failed resources:
MskExampleStack | 17:23:22 | CREATE_FAILED | AWS::Lambda::EventSourceMapping | ListenerHandler/KafkaEventSource:MskExampleStackListenerHandler4711MyTopic (ListenerHandlerKafkaEventSourceMskExampleStackListenerHandler4711MyTopic0815)
Resource handler returned message: "Invalid request provided: Cannot access secret manager value arn:aws:secretsmanager:some-region:some-account:secret:AmazonMSK_dev-some-secret.
Please ensure the role can perform the 'secretsmanager:GetSecretValue' action on your broker in IAM. (Service: Lambda, Status Code: 400, Request ID: 123456789, Extended Request ID: null)" (RequestToken: 987654321, HandlerErrorCode: InvalidRequest)
I cannot figure out what I am missing. What role is the error referring to? Where do I need to add the "secretsmanager:GetSecretValue" action? My user has full admin rights.
You need the following:
- kms permissions on the lambda role
- secretsmanager permissions on the lambda role
- (What I was missing) lambda.amazonaws.com in the key policy for your kms key
My setup:
Lambda permissions:
- Effect: "Allow"
  Action:
    - kms:Decrypt
    - kms:GenerateDataKey*
  Resource:
    - "*"
- Effect: "Allow"
  Action:
    - secretsmanager:GetSecretValue
  Resource:
    - "your secret arn"
KMS policy:
{
  "Sid": "Decrypt",
  "Effect": "Allow",
  "Principal": {
    "Service": [
      "lambda.amazonaws.com"
    ]
  },
  "Action": [
    "kms:GenerateDataKey*",
    "kms:Decrypt"
  ],
  "Resource": "*"
}
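To confirm the key policy actually contains the Lambda service principal, it can be fetched and inspected (a sketch; the key ID is a placeholder):
import json
import boto3

kms = boto3.client('kms')
policy = kms.get_key_policy(
    KeyId='1234abcd-12ab-34cd-56ef-1234567890ab',  # placeholder
    PolicyName='default',  # customer-managed keys have one policy, named 'default'
)
for statement in json.loads(policy['Policy'])['Statement']:
    print(statement.get('Sid'), statement.get('Principal'))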
Amazon has a knack for writing about 500 needless words to document a feature while never documenting how KMS interacts with that feature, even though it seems to vary greatly.
I was trying to create an AWS state machine (Step Functions) using AWS SAM, triggered by an S3 event. The following is my AWS SAM YAML snippet.
SampleStateMachine:
  Type: AWS::Serverless::StateMachine
  Properties:
    Name: sample-state-machine
    DefinitionUri: state-machines/my-definition.asl.json
    Events:
      S3PutEvent:
        Type: EventBridgeRule
        Properties:
          Pattern:
            source:
              - "aws.s3"
            detail:
              eventSource:
                - s3.amazonaws.com
              eventName:
                - PutObject
              requestParameters:
                bucketName:
                  - !Ref MyBucketName
On deploying this application, it successfully creates the rule with the pattern I specified in the SAM template (but with a slight change in the order of the JSON key-value pairs):
{
  "source": [
    "aws.s3"
  ],
  "detail": {
    "eventSource": [
      "s3.amazonaws.com"
    ],
    "requestParameters": {
      "bucketName": [
        "my-bucket"
      ]
    },
    "eventName": [
      "PutObject"
    ]
  }
}
Unfortunately, this rule was not capturing any events from the event bus, so I tried changing the order of the JSON key-value pairs:
{
  "source": [
    "aws.s3"
  ],
  "detail": {
    "eventSource": [
      "s3.amazonaws.com"
    ],
    "eventName": [
      "PutObject"
    ],
    "requestParameters": {
      "bucketName": [
        "my-bucket"
      ]
    }
  }
}
and it started receiving events and working fine.
So my questions are:
Does this order really matter for an AWS EventBridge rule pattern?
If so, how can we preserve this order during AWS SAM execution (YAML to JSON)?
Thanks
Order should not matter. If you can reproduce the issue, you should file a bug report with AWS Support so the service team can fix it.
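For what it's worth, this is easy to check with the TestEventPattern API. A sketch that tests both key orderings against the same hypothetical sample event; both calls should print True:
import json
import boto3

events = boto3.client('events')

# Minimal hypothetical event envelope; TestEventPattern requires these fields
event = json.dumps({
    "id": "12345678-1234-1234-1234-123456789012",
    "detail-type": "AWS API Call via CloudTrail",
    "source": "aws.s3",
    "account": "111111111111",
    "time": "2022-01-01T00:00:00Z",
    "region": "us-east-1",
    "resources": [],
    "detail": {
        "eventSource": "s3.amazonaws.com",
        "eventName": "PutObject",
        "requestParameters": {"bucketName": "my-bucket"},
    },
})

# Identical patterns, differing only in key order (raw strings to pin order)
pattern_a = '{"source":["aws.s3"],"detail":{"eventSource":["s3.amazonaws.com"],"requestParameters":{"bucketName":["my-bucket"]},"eventName":["PutObject"]}}'
pattern_b = '{"source":["aws.s3"],"detail":{"eventSource":["s3.amazonaws.com"],"eventName":["PutObject"],"requestParameters":{"bucketName":["my-bucket"]}}}'

for name, pattern in (("original order:", pattern_a), ("reordered:", pattern_b)):
    result = events.test_event_pattern(EventPattern=pattern, Event=event)
    print(name, result['Result'])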
I created an SQS queue and added a policy under the Permissions tab allowing only my account's users to configure notifications.
Policy document:
{
  "Version": "2012-10-17",
  "Id": "arn:aws:sqs:us-east-1:111111111111:sqsqueue/SQSDefaultPolicy",
  "Statement": [
    {
      "Sid": "Sid111111111111",
      "Effect": "Allow",
      "Principal": {
        "AWS": "111111111111"
      },
      "Action": [
        "sqs:SendMessage",
        "sqs:ReceiveMessage"
      ],
      "Resource": "arn:aws:sqs:us-east-1:111111111111:queue"
    }
  ]
}
When I navigate to S3 and try to configure an event notification for the above queue, it throws an error:
Unable to validate the following destination configurations. Permissions on the destination queue do not allow S3 to publish
notifications from this bucket.
(arn:aws:sqs:us-east-1:111111111111:queue)
Am I doing something wrong? Can someone help me, please?
I was able to resolve this issue by adding "Service": "s3.amazonaws.com" in the Principal element.
Here is the policy document:
{
  "Version": "2012-10-17",
  "Id": "arn:aws:sqs:us-east-1:111111111111:sqsqueue/SQSDefaultPolicy",
  "Statement": [
    {
      "Sid": "Sid111111111111",
      "Effect": "Allow",
      "Principal": {
        "Service": "s3.amazonaws.com"
      },
      "Action": [
        "sqs:SendMessage",
        "sqs:ReceiveMessage"
      ],
      "Resource": "arn:aws:sqs:us-east-1:111111111111:queue"
    }
  ]
}
This is explained in https://forums.aws.amazon.com/thread.jspa?threadID=173251
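For reference, a sketch of applying the same fix programmatically with boto3, plus an aws:SourceArn condition so that only one specific bucket can publish (queue URL and bucket name are placeholders; S3 itself only needs sqs:SendMessage):
import json
import boto3

sqs = boto3.client('sqs')

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowS3Publish",
        "Effect": "Allow",
        "Principal": {"Service": "s3.amazonaws.com"},
        "Action": "sqs:SendMessage",
        "Resource": "arn:aws:sqs:us-east-1:111111111111:queue",
        # Restrict the S3 service principal to a single source bucket
        "Condition": {"ArnLike": {"aws:SourceArn": "arn:aws:s3:*:*:my-bucket"}},
    }],
}

sqs.set_queue_attributes(
    QueueUrl='https://sqs.us-east-1.amazonaws.com/111111111111/queue',
    Attributes={'Policy': json.dumps(policy)},
)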
This template file creates a bucket, an SQS queue, and a policy to connect the two:
AWSTemplateFormatVersion: 2010-09-09
Parameters:
  IncomingBucketName:
    Type: 'String'
    Description: 'Incoming Bucket Name'
    Default: 'some-bucket-name-here'
Resources:
  IncomingFileQueue:
    Type: 'AWS::SQS::Queue'
    Properties: {}
  SQSQueuePolicy:
    Type: 'AWS::SQS::QueuePolicy'
    Properties:
      PolicyDocument:
        Id: 'MyQueuePolicy'
        Version: '2012-10-17'
        Statement:
          - Sid: 'Statement-id'
            Effect: 'Allow'
            Principal:
              AWS: "*"
            Action: 'sqs:SendMessage'
            Resource:
              Fn::GetAtt: [ IncomingFileQueue, Arn ]
      Queues:
        - Ref: IncomingFileQueue
  IncomingFileBucket:
    Type: 'AWS::S3::Bucket'
    DependsOn:
      - SQSQueuePolicy
      - IncomingFileQueue
    Properties:
      AccessControl: BucketOwnerFullControl
      BucketName:
        Ref: IncomingBucketName
      NotificationConfiguration:
        QueueConfigurations:
          - Event: s3:ObjectCreated:Put
            Queue:
              Fn::GetAtt: [ IncomingFileQueue, Arn ]
I was getting the same issue but used this page to work out how to connect the three resources in order to successfully deploy the stack:
https://aws.amazon.com/premiumsupport/knowledge-center/unable-validate-destination-s3/
I'm still working on the policy Condition, as the form recommended in the link above doesn't work for SQS. That being the case, the above template is not secure and shouldn't be used in production, as it allows anyone to add messages to the queue.
I'll update this answer once I've figured that bit out...
I am trying to prototype a distributed application using SNS and SQS. I have this topic:
arn:aws:sns:us-east-1:574008783416:us-east-1-live-auction
and this queue:
arn:aws:sqs:us-east-1:574008783416:queue4
I created the queue using the JS Scratchpad. I added the subscription using the console. I called AddPermission on the queue using the Scratchpad. The queue policy is now:
{
  "Version": "2008-10-17",
  "Id": "arn:aws:sqs:us-east-1:574008783416:queue4/SQSDefaultPolicy",
  "Statement": [
    {
      "Sid": "RootPerms",
      "Effect": "Allow",
      "Principal": {
        "AWS": "574008783416"
      },
      "Action": "SQS:*",
      "Resource": "arn:aws:sqs:us-east-1:574008783416:queue4"
    }
  ]
}
I have an email subscription on the same topic and the emails arrive fine but the messages never arrive on the queue. I've tried SendMessage directly to the queue - rather than via SNS - using Scratchpad and it works fine. Any ideas why it won't send to the queue?
This was posted a while back on the AWS forums: https://forums.aws.amazon.com/thread.jspa?messageID=202798
Then I gave the SNS topic the permission to send messages to the SQS queue. The trick here is to allow all principals. SNS doesn't send from your account ID -- it has its own account ID that it sends from.
Adding to Skyler's answer: if, like me, you cringe at the idea of allowing any principal (Principal: '*'), you can restrict the principal to the SNS service:
Principal:
  Service: sns.amazonaws.com
Although this behavior is undocumented, it works.
Most of the answers (besides @spg's answer) propose using Principal: "*". This is a very dangerous practice and it will expose your SQS queue to the whole world.
From the AWS docs:
For resource-based policies, such as Amazon S3 bucket policies, a wildcard (*) in the principal element specifies all users or public access.
We strongly recommend that you do not use a wildcard in the Principal element in a role's trust policy unless you otherwise restrict access through a Condition element in the policy. Otherwise, any IAM user in any account in your partition can access the role.
Therefore using this principal is strongly discouraged.
Instead, you need to specify the SNS service as your principal:
"Principal": {
  "Service": "sns.amazonaws.com"
},
Example policy:
{
  "Version": "2012-10-17",
  "Id": "Policy1596186813341",
  "Statement": [
    {
      "Sid": "Stmt1596186812579",
      "Effect": "Allow",
      "Principal": {
        "Service": "sns.amazonaws.com"
      },
      "Action": [
        "sqs:SendMessage",
        "sqs:SendMessageBatch"
      ],
      "Resource": "Your-SQS-Arn"
    }
  ]
}
With this policy, SNS will be able to send messages to your SQS queues.
There are more permissions for SQS, but from what I see, SendMessage and SendMessageBatch should be enough for an SNS->SQS subscription.
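For completeness, a sketch of creating the subscription itself once such a policy is in place (topic and queue ARNs are placeholders; RawMessageDelivery is optional and just strips the SNS envelope):
import boto3

sns = boto3.client('sns')
sns.subscribe(
    TopicArn='arn:aws:sns:us-east-1:111111111111:my-topic',
    Protocol='sqs',
    Endpoint='arn:aws:sqs:us-east-1:111111111111:my-queue',
    Attributes={'RawMessageDelivery': 'true'},
)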
Here's a full CloudFormation example of Skyler's answer
{
  "Resources": {
    "MyTopic": {
      "Type": "AWS::SNS::Topic"
    },
    "MyQueue": {
      "Type": "AWS::SQS::Queue"
    },
    "Subscription": {
      "Type": "AWS::SNS::Subscription",
      "Properties": {
        "Protocol": "sqs",
        "TopicArn": {"Ref": "MyTopic"},
        "Endpoint": {"Fn::GetAtt": ["MyQueue", "Arn"]}
      }
    },
    "QueuePolicy": {
      "Type": "AWS::SQS::QueuePolicy",
      "Properties": {
        "Queues": [
          {"Ref": "MyQueue"}
        ],
        "PolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Sid": "allow-sns-messages",
              "Effect": "Allow",
              "Principal": {"Service": "sns.amazonaws.com"},
              "Action": "sqs:SendMessage",
              "Resource": {"Fn::GetAtt": ["MyQueue", "Arn"]},
              "Condition": {
                "ArnEquals": {
                  "aws:SourceArn": {"Ref": "MyTopic"}
                }
              }
            }
          ]
        }
      }
    }
  }
}
Amazon has more options in their Sending Amazon SNS Messages to Amazon SQS Queues document.
I just experienced this and it took me a while to figure out why:
If I create an SQS subscription from the SNS console, it does not add the necessary permissions to the SQS access policy.
If I create the subscription to the same topic in the SQS console, it does.
Old question, but if you are using an AWS SDK version > 1.10, check out the docs: SQS-SNS sendMessage Permission.
// Policy, Statement, Principal, Condition and Action come from the
// com.amazonaws.auth.policy package in the AWS SDK for Java v1.
private static void updateQueuePolicy(AmazonSQS sqs, String queueURL, String topicARN) {
    Map<String, String> attributes = new HashMap<String, String>(1);

    Action actions = new Action() {
        @Override
        public String getActionName() {
            return "sqs:SendMessage"; // Action name
        }
    };

    // Allow the SNS service principal to send, but only from this topic
    Statement mainQueueStatements = new Statement(Statement.Effect.Allow)
        .withActions(actions)
        .withPrincipals(new Principal("Service", "sns.amazonaws.com"))
        .withConditions(
            new Condition()
                .withType("ArnEquals")
                .withConditionKey("aws:SourceArn")
                .withValues(topicARN)
        );

    final Policy mainQueuePolicy = new Policy()
        .withId("MainQueuePolicy")
        .withStatements(mainQueueStatements);

    attributes.put("Policy", mainQueuePolicy.toJson());
    // Helper that wraps sqs.setQueueAttributes(queueURL, attributes)
    updateQueueAttributes(sqs, queueURL, attributes);
}
Which outputs a policy similar to:
{
  "Version": "2012-10-17",
  "Id": "MainQueuePolicy",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Principal": {
        "Service": "sns.amazonaws.com"
      },
      "Action": ["sqs:SendMessage"],
      "Condition": {
        "ArnEquals": {
          "aws:SourceArn": ["arn:aws:sns:us-east-1:3232:testSubscription"]
        }
      }
    }
  ]
}
Like the other answers mentioned, you must opt in and grant this SNS topic permission to publish to your SQS queue.
If you use Terraform, you can use the aws_sqs_queue_policy resource.
Here is an example:
resource "aws_sqs_queue_policy" "your_queue_policy" {
  queue_url = "${aws_sqs_queue.your_queue.id}"

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Id": "sqspolicy",
  "Statement": [
    {
      "Sid": "First",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "sqs:SendMessage",
      "Resource": "${aws_sqs_queue.your_queue.arn}",
      "Condition": {
        "ArnEquals": {
          "aws:SourceArn": "${aws_sns_topic.your_topic.arn}"
        }
      }
    }
  ]
}
POLICY
}
If you have enabled encryption on your queue, that can also be a reason for SNS not being able to put messages on the subscriber queue. You need to give SNS access to that KMS key.
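A sketch of what granting that access can look like with boto3, appending a statement for the SNS service principal to the key's existing policy (the key ID is a placeholder; kms:GenerateDataKey* and kms:Decrypt are the actions SNS needs):
import json
import boto3

kms = boto3.client('kms')
key_id = '1234abcd-12ab-34cd-56ef-1234567890ab'  # placeholder

# put_key_policy replaces the whole document, so fetch and extend it
current = json.loads(kms.get_key_policy(KeyId=key_id, PolicyName='default')['Policy'])
current['Statement'].append({
    "Sid": "AllowSNSToUseKey",
    "Effect": "Allow",
    "Principal": {"Service": "sns.amazonaws.com"},
    "Action": ["kms:GenerateDataKey*", "kms:Decrypt"],
    "Resource": "*",
})
kms.put_key_policy(KeyId=key_id, PolicyName='default', Policy=json.dumps(current))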
Here is how to solve this problem: you only need to subscribe the queue to the topic from the queue console.
Step one: select the queue.
Step two: open Queue Actions.
Step three: choose Subscribe Queue to SNS Topic.
Step four: choose the topic.
Done.
It's done via AWS::SQS::QueuePolicy.
You need to define this kind of policy to allow a specific SNS topic to perform actions on a specific SQS queue.
QueuePolicy:
  Type: AWS::SQS::QueuePolicy
  Properties:
    Queues:
      - Ref: MyQueue1
      - Fn::Sub: arn:aws:sqs:us-east-1:${AWS::AccountId}:my-queue-*
    PolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Sid: allow-sns-messages
          Effect: Allow
          Principal:
            Service: sns.amazonaws.com
          Action: sqs:SendMessage
          Resource:
            - Fn::Sub: arn:aws:sqs:us-east-1:${AWS::AccountId}:my-queue-*
          Condition:
            ArnEquals:
              aws:SourceArn:
                - Fn::Sub: arn:aws:sns:us-east-1:${AWS::AccountId}:source-sns-*
With Lambda, this kind of policy doesn't seem to apply. If anyone knows why, please share.