I have an AWS Lambda function deployed in multiple accounts. I'm looking for a way to trigger these Lambda functions on a schedule from the master account via a CloudWatch event bus. Is this possible?
In line with what @amitd is suggesting, you need to implement something like this (using EventBridge and event buses).
To configure cross-account EventBridge communication, the following needs to be done. I am providing sample events and filters; replace them as per your requirements.
Steps to be performed on Account B: Receiver account
Create an event bus named event-bus-b. Put the resource-based policy as shown below.
{
"Version": "2012-10-17",
"Statement": [{
"Sid": "WebStoreCrossAccountPublish",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<account-A>:root"
},
"Action": "events:PutEvents",
"Resource": "arn:aws:events:<your-region>:<Account-B>:event-bus/event-bus-b"
}]
}
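If you prefer to script this, here is a rough boto3 sketch of the same setup (the account ID is a placeholder, and the statement ID simply mirrors the policy above):
import boto3

events = boto3.client("events")

# Create the receiving bus in Account B.
events.create_event_bus(Name="event-bus-b")

# Allow Account A to call PutEvents on this bus (equivalent to the resource policy above).
events.put_permission(
    EventBusName="event-bus-b",
    StatementId="WebStoreCrossAccountPublish",
    Action="events:PutEvents",
    Principal="<account-A>",
)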
Create a rule in Account B; let's call it eb-rule-b. In this rule, select event-bus-b as the source event bus.
Provision the following event filter pattern:
Event pattern:
{
"detail-type": [
"uoe"
],
"source": [
"somesource"
]
}
Also, test the pattern using the test event.
Test Event:
{
"version": "0",
"id": "55fghj-89a9-a0b3-1ccb-79c25c7d6cd2",
"detail-type": "uoe",
"source": "somesource",
"account": "<ACCOUNT_ID>",
"time": "2020-04-24T13:53:21Z",
"region": "<YOUR_REGION>",
"resources": [],
"detail": {
"userOrg" : "OrgName"
}
}
Select the event bus event-bus-b in the drop-down.
Select the target "Lambda".
Put the ARN of the Lambda function which you have created in Account B:
arn:aws:lambda:<your-region>:<AccountB>:function:<AccountBLambda>
Also check the checkbox "Create a new role for this specific resource". This will create the permission in Account B that allows EventBridge to invoke the Lambda function.
Click Create to create the rule.
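The same console steps can be approximated with boto3; here is a sketch, assuming the Lambda ARN from above and hypothetical statement/target IDs:
import json
import boto3

events = boto3.client("events")
lam = boto3.client("lambda")

lambda_arn = "arn:aws:lambda:<your-region>:<AccountB>:function:<AccountBLambda>"

# Rule on event-bus-b matching the event pattern shown earlier.
rule = events.put_rule(
    Name="eb-rule-b",
    EventBusName="event-bus-b",
    EventPattern=json.dumps({"detail-type": ["uoe"], "source": ["somesource"]}),
)

# Point the rule at the Lambda function.
events.put_targets(
    Rule="eb-rule-b",
    EventBusName="event-bus-b",
    Targets=[{"Id": "account-b-lambda", "Arn": lambda_arn}],
)

# Allow EventBridge to invoke the function (what the console permission checkbox does for you).
lam.add_permission(
    FunctionName=lambda_arn,
    StatementId="eb-rule-b-invoke",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"],
)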
Now click on the event bus event-bus-b and click the Send events button.
Send a dummy event (similar to the test event above) and validate that the communication between the event bus and the Lambda function in Account B is working.
If you face any issues with this plumbing, refer to https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-troubleshooting.html#eb-lam-function-not-invoked
Once we are good in Account B (i.e. we are able to invoke the Lambda function by sending events on the event bus), configure the other accounts by following the same steps.
Steps to be performed on Account A: Sender account
Create an event bus event-bus-a in account A.
Create a rule eb-rule-a in account A with the following details:
Event pattern:
{
"detail-type": [
"uoe"
],
"source": [
"somesource"
]
}
Also, test the pattern using the test event.
Test Event:
{
"version": "0",
"id": "55fghj-89a9-a0b3-1ccb-79c25c7d6cd2",
"detail-type": "uoe",
"source": "somesource",
"account": "<ACCOUNT_ID>",
"time": "2020-04-24T13:53:21Z",
"region": "<YOUR_REGION>",
"resources": [],
"detail": {
"userOrg" : "OrgName"
}
}
Select the event bus event-bus-a in the drop-down.
Select the target "Event bus in different account or Region"
Put the ARN of the event bus which you have created in Account B.
arn:aws:events:<your-region>:<Account-B>:event-bus/event-bus-b
Also check the checkbox "Create a new role for this specific resource". This will create a role in Account A that allows Account A to publish to Account B's event bus. The policy below is auto-created, so you don't need to do anything.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"events:PutEvents"
],
"Resource": [
"arn:aws:events:<your-region>:<Account-B>:event-bus/event-bus-b"
]
}
]
}
Click Create to create the rule.
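Again, an equivalent boto3 sketch for Account A (the role ARN is a placeholder for the auto-created role mentioned above):
import json
import boto3

events = boto3.client("events")

target_bus_arn = "arn:aws:events:<your-region>:<Account-B>:event-bus/event-bus-b"

events.put_rule(
    Name="eb-rule-a",
    EventBusName="event-bus-a",
    EventPattern=json.dumps({"detail-type": ["uoe"], "source": ["somesource"]}),
    # For the scheduled trigger asked about in the question, you could instead create a
    # rule on the default bus with ScheduleExpression="rate(1 hour)" and no EventPattern.
)

events.put_targets(
    Rule="eb-rule-a",
    EventBusName="event-bus-a",
    Targets=[{
        "Id": "account-b-bus",
        "Arn": target_bus_arn,
        # Role that allows events:PutEvents on the target bus (auto-created by the console).
        "RoleArn": "arn:aws:iam::<Account-A>:role/<auto-created-role>",
    }],
)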
Now click on the event bus event-bus-a and click the Send events button.
Provide the details and click Send.
Sample event:
{
"version": "0",
"id": "55fghj-89a9-a0b3-1ccb-79c25c7d6cd2",
"detail-type": "uoe",
"source": "somesource",
"account": "<ACCOUNT_ID>",
"time": "2020-04-24T13:53:21Z",
"region": "<YOUR_REGION>",
"resources": [],
"detail": {
"userOrg" : "OrgName"
}
}
The event will propagate to the event bus defined in Account B.
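You can also send the test event programmatically; a minimal boto3 sketch:
import json
import boto3

events = boto3.client("events")

response = events.put_events(
    Entries=[{
        "EventBusName": "event-bus-a",
        "Source": "somesource",
        "DetailType": "uoe",
        "Detail": json.dumps({"userOrg": "OrgName"}),
    }]
)
# FailedEntryCount should be 0 if event-bus-a accepted the entry.
print(response["FailedEntryCount"])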
Repeat steps 4-10 for all other accounts (i.e. create multiple targets in the same rule).
Once configured, a single event in Account A will propagate to multiple accounts and you will achieve the necessary fan-out.
Please refer to the following options and related documentation from AWS:
Using CloudWatch Events:
a. Sending and Receiving Events Between AWS Accounts
b. Cross-Account Delivery of CloudWatch Events
OR
Using Amazon EventBridge:
a. Simplifying cross-account access with Amazon EventBridge
b. Sending and receiving Amazon EventBridge events between AWS accounts
I am trying to "Deny" the create SQS queue (sqs:CreateQueue) permission for all users if they forget to encrypt the queue while creating it. I tried the policy below, but it still allows the user to create a queue whether or not they encrypt it.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "denysqsifnotencrypted",
"Action": "sqs:CreateQueue",
"Effect": "Deny",
"Resource": "*",
"Condition": {
"ForAnyValue:StringNotEquals": {
"aws:CalledVia": ["kms.amazonaws.com"]
}
}
}
]
}
Surprisingly, you will not be able to do this because SQS does not support any Condition keys. From the documentation:
SQS has no service-specific context keys that can be used in the Condition element of policy statements.
As a workaround, you can create an EventBridge rule that gets triggered anytime an SQS queue is created without a KMS key.
Here's what the EventBridge rule would look like:
{
"source": ["aws.sqs"],
"detail-type": ["AWS API Call via CloudTrail"],
"detail": {
"eventSource": ["sqs.amazonaws.com"],
"eventName": ["CreateQueue"],
"requestParameters": {
"attribute": {
"KmsMasterKeyId": [""]
}
}
}
}
You can then configure an AWS Lambda function as a target for the EventBridge rule, and have this function delete the queue immediately (and you could maybe even notify your users via email with SES).
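As a rough sketch (assuming a Python Lambda runtime, and that the CloudTrail record exposes the queue name/URL under requestParameters/responseElements as usual), the target function could look something like this:
import boto3

sqs = boto3.client("sqs")

def handler(event, context):
    detail = event["detail"]
    # Prefer the queue URL from the CloudTrail response; fall back to looking it up by name.
    queue_url = (detail.get("responseElements") or {}).get("queueUrl")
    if not queue_url:
        queue_name = detail["requestParameters"]["queueName"]
        queue_url = sqs.get_queue_url(QueueName=queue_name)["QueueUrl"]

    print(f"Deleting unencrypted queue: {queue_url}")
    sqs.delete_queue(QueueUrl=queue_url)
    # Optionally notify the queue's creator here, e.g. via SES send_email.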
Background
I'm creating a Step Functions state machine that starts an AWS CodeBuild project once a defined AWS CodePipeline pipeline has an execution status of SUCCEEDED. I'm using the .waitForTaskToken feature within the Step Function to wait on the CodePipeline to succeed via a CloudWatch event. Once the pipeline succeeds, the event sends the token back to the step function, which then runs the CodeBuild project.
Here's the step function definition:
{
"StartAt": "PollCP",
"States": {
"PollCP": {
"Next": "UpdateCP",
"Parameters": {
"Entries": [
{
"Detail": {
"Pipeline": [
"bar-pipeline"
],
"State": [
"SUCCEEDED"
],
"TaskToken.$": "$$.Task.Token"
},
"DetailType": "CodePipeline Pipeline Execution State Change",
"Source": "aws.codepipeline"
}
]
},
"Resource": "arn:aws:states:::events:putEvents.waitForTaskToken",
"Type": "Task"
},
"UpdateCP": {
"End": true,
"Parameters": {
"ProjectName": "foo-project"
},
"Resource": "arn:aws:states:::codebuild:startBuild.sync",
"Type": "Task"
}
}
}
The permissions for the step function:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Action": "codebuild:StartBuild",
"Resource": "*"
},
{
"Sid": "",
"Effect": "Allow",
"Action": "codepipeline:*",
"Resource": "*"
}
]
}
and arn:aws:iam::aws:policy/CloudWatchEventsFullAccess
Problem
The CloudWatch event within the step function returns the error:
Error
EventBridge.FailedEntry
Cause
{
"Entries": [
{
"ErrorCode": "NotAuthorizedForSourceException",
"ErrorMessage": "Not authorized for the source."
}
],
"FailedEntryCount": 1
}
Attempts:
Modify the associated CodePipeline and CodeBuild roles to have Step Functions permission to send task statuses. Specifically, the permission is:
{
"Effect": "Allow",
"Action": [
"states:SendTaskSuccess",
"states:SendTaskFailure",
"states:SendTaskHeartbeat"
],
"Resource": "*"
}
Got the same original error mentioned above.
Modify the associated Step Functions state machine's permissions to allow full access to all Step Functions actions and resources. Got the same original error mentioned above.
Test the event rule specified in the PollCP step function task with the default AWS EventBridge bus. The event was:
{
"version": "0",
"detail-type": "CodePipeline Pipeline Execution State Change",
"source": "aws.codepipeline",
"account": "123456789012",
"time": "2021-06-14T00:44:41Z",
"region": "us-west-2",
"resources": [],
"detail": {
"pipeline": "<pipeline-arn>",
"state": "SUCCEED"
}
}
The event outputted the same error mentioned above. This probably means the error is strictly related to the event entry mentioned in the code snippet above.
Are you trying to trigger your State Machine using CloudWatch event when a CodePipeline pipeline completes/succeeds?
If so, you cannot define your trigger within your state machine.
The integration with EventBridge is not there so that the state machine can be triggered by events. Rather, it is there to publish events to an event bus from your state machine or workflow.
Read more here: https://aws.amazon.com/blogs/compute/introducing-the-amazon-eventbridge-service-integration-for-aws-step-functions/
So I suggest you create a CloudWatch rule and target your state machine instead.
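As a minimal sketch of that approach (names and ARNs are placeholders), the rule and target could be created with boto3 like this:
import json
import boto3

events = boto3.client("events")

# Rule on the default bus that fires when the pipeline succeeds.
events.put_rule(
    Name="bar-pipeline-succeeded",
    EventPattern=json.dumps({
        "source": ["aws.codepipeline"],
        "detail-type": ["CodePipeline Pipeline Execution State Change"],
        "detail": {"pipeline": ["bar-pipeline"], "state": ["SUCCEEDED"]},
    }),
)

# Target the state machine; the role must allow states:StartExecution on it.
events.put_targets(
    Rule="bar-pipeline-succeeded",
    Targets=[{
        "Id": "start-state-machine",
        "Arn": "arn:aws:states:<region>:<account>:stateMachine:<name>",
        "RoleArn": "arn:aws:iam::<account>:role/<events-invoke-sfn-role>",
    }],
)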
If you want to use the waitForTaskToken pattern, you will have to explicitly return that token with a send_task_success API call (sample below for Python/boto3).
import json
import boto3

sfn = boto3.client('stepfunctions')
sfn.send_task_success(
    taskToken=task_token,
    output=json.dumps(some_optional_payload)
)
This means that, when the step executes, it will publish the event to the EventBridge bus. You will have to detect this event outside your state machine, most likely with a CloudWatch event rule, and then trigger a Lambda function from the rule. The Lambda function performs the send_task_success API call, which restarts/continues your workflow/state machine.
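A rough sketch of such a Lambda handler, assuming the task token arrives in the event detail exactly as the PollCP entry above publishes it (i.e. under TaskToken):
import json
import boto3

sfn = boto3.client("stepfunctions")

def handler(event, context):
    # The PollCP task publishes the token inside the event's Detail,
    # so it is assumed to arrive here as event["detail"]["TaskToken"].
    task_token = event["detail"]["TaskToken"]
    sfn.send_task_success(taskToken=task_token, output=json.dumps(event["detail"]))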
In my opinion, that is just unnecessary. Like I said, you can simply watch for pipeline execution state changes using a CW event rule, trigger your state machine, and have your state machine start with the CodeBuild stage.
Side note: It's nice to see people using Step Functions for CI/CD pipelines. It just has more flexibility and the ability to do complex branching strategies. Will probably do a blog post around this soon.
Your CodeBuild service role will need permission to use the states:SendTask* (Success, Failure, and Heartbeat) actions so that it can notify the state machine. This page in the docs has more details.
I need to create a CloudWatch event that runs a Lambda function every time my file in S3 gets updated/re-uploaded. What "eventName" should I use? I tried using "ObjectCreated" but it doesn't seem to work. Perhaps the syntax is incorrect.
https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html
{
"source": [
"aws.s3"
],
"detail-type": [
"AWS API Call via CloudTrail"
],
"detail": {
"eventSource": [
"s3.amazonaws.com"
],
"eventName": [ "ObjectCreated:*"],
"requestParameters": {
"bucketName": [
"mynewbucket"
],
"key": [
"file.csv"
]
}
}
}
CloudWatch Events (or EventBridge) does not automatically track data events for S3 objects. You need to either use CloudTrail, which tracks data events on a particular S3 bucket and emits CloudWatch Events (or EventBridge) events for them: https://aws.amazon.com/blogs/compute/using-dynamic-amazon-s3-event-handling-with-amazon-eventbridge/
Or you can use S3 Event Notifications with an SNS topic and use a Lambda subscription on the SNS topic.
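For the CloudTrail route, the rule's eventName needs to match the actual S3 API calls rather than the "ObjectCreated:*" notification name; here is a boto3 sketch, assuming data events are enabled for the bucket (PutObject, CopyObject and CompleteMultipartUpload are the usual upload calls):
import json
import boto3

events = boto3.client("events")

events.put_rule(
    Name="s3-file-updated",
    EventPattern=json.dumps({
        "source": ["aws.s3"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["s3.amazonaws.com"],
            # CloudTrail records API call names, not S3 notification names.
            "eventName": ["PutObject", "CopyObject", "CompleteMultipartUpload"],
            "requestParameters": {
                "bucketName": ["mynewbucket"],
                "key": ["file.csv"],
            },
        },
    }),
)
You would then add your Lambda function as a target of this rule (put_targets) and grant EventBridge permission to invoke it (add_permission), as in the cross-account example earlier.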
I need to programmatically disable a Lambda function's SNS trigger; however, I seem to be unable to do so. I want this to show "Disabled" in the AWS Lambda console for the function.
Here's the code I've tried:
function updateEndpoints(endpoints, enable) {
const promises = [];
endpoints.forEach((endpoint) => {
console.log(`${enable ? 'Enabling' : 'Disabling'} Endpoint: ${endpoint}`);
promises.push(
SNS.setEndpointAttributes({
EndpointArn: endpoint,
Attributes: {
Enabled: enable ? 'True' : 'False',
},
}).promise()
.catch((e) => {
console.error(`Error ${enable ? 'Enabling' : 'Disabling'} Endpoint: ${endpoint}`);
console.error(e);
}));
});
return Promise.all(promises);
}
The endpoint ARN is passed in correctly with a string like (with correct values in place of the <> below):
arn:aws:lambda:<region>:<accountId>:function:<functionName>
This produces an error from AWS for each endpoint I try to enable or disable:
InvalidParameter: Invalid parameter: EndpointArn Reason: Vendor lambda is not of SNS
Is it not possible to disable the trigger/endpoint for a lambda via SNS? How would one go about doing this? I would prefer not to have to unsubscribe/subscribe as this would take the subscription objects out of CloudFormation's scope (correct?). I looked at updateEventSourceMappings, however, per the documentation, that only works with DynamoDB streams, Kinesis Streams, and SQS -- not SNS.
I found the (100%) correct way to do this. While the answer from @John Rotenstein could be used, it's not quite right, but it should still work.
I found that when you click the toggle, the Lambda function's resource policy is actually updated:
Enabled:
{
"Version": "2012-10-17",
"Id": "default",
"Statement": [
{
"Sid": "my-lambda-1552674933742",
"Effect": "Allow",
"Principal": {
"Service": "sns.amazonaws.com"
},
"Action": "lambda:InvokeFunction",
"Resource": "arn:aws:lambda:us-west-2:1234567890:function:my-lambda",
"Condition": {
"ArnLike": {
"AWS:SourceArn": "arn:aws:sns:us-west-2:1234567890:my-lambda"
}
}
}
]
}
Disabled:
{
"Version": "2012-10-17",
"Id": "default",
"Statement": [
{
"Sid": "my-lambda-1552674933742",
"Effect": "Allow",
"Principal": {
"Service": "sns.amazonaws.com"
},
"Action": "lambda:DisableInvokeFunction",
"Resource": "arn:aws:lambda:us-west-2:1234567890:function:my-lambda",
"Condition": {
"ArnLike": {
"AWS:SourceArn": "arn:aws:sns:us-west-2:1234567890:my-lambda"
}
}
}
]
}
Notice Action is lambda:InvokeFunction vs. lambda:DisableInvokeFunction.
My process to do this is as follows (a boto3 sketch follows the notes below):
- Lambda.listFunctions
- for each function, Lambda.removePermission
- for each function, Lambda.addPermission
Notes:
the Lambda api has a default safety throttle of 100 concurrent executions per account per region.
You can only update resource-based policies for Lambda resources within the scope of the AddPermission and AddLayerVersionPermission API actions. You can't author policies for your Lambda resources in JSON, or use conditions that don't map to parameters for those actions. See docs here
Also, you can use Lambda.getPolicy to see the policy of the lambda to ensure it is updated.
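Putting the above together, here is a minimal boto3 sketch of the disable step, assuming the function name, statement ID, and topic ARN from the policies above (re-adding the statement with lambda:InvokeFunction re-enables the trigger):
import boto3

lam = boto3.client("lambda")

FUNCTION = "my-lambda"
STATEMENT_ID = "my-lambda-1552674933742"
TOPIC_ARN = "arn:aws:sns:us-west-2:1234567890:my-lambda"

# Drop the existing statement, then re-add it with the "disabled" action.
lam.remove_permission(FunctionName=FUNCTION, StatementId=STATEMENT_ID)
lam.add_permission(
    FunctionName=FUNCTION,
    StatementId=STATEMENT_ID,
    Action="lambda:DisableInvokeFunction",
    Principal="sns.amazonaws.com",
    SourceArn=TOPIC_ARN,
)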
It appears that there is no capability to "disable" a Lambda subscription to an SNS topic.
I base my reasoning on the following steps I took:
Created an AWS Lambda function
Created an Amazon SNS topic
Subscribed the Lambda function to the SNS topic (done via the SNS console)
Confirmed in the Lambda console that the function subscription to SNS is "enabled"
Ran aws sns list-subscriptions-by-topic --topic-arn arn:aws:sns:ap-southeast-2:123456789012:my-topic
Saw that the Lambda function was subscribed
The response was:
{
"Subscriptions": [
{
"SubscriptionArn": "arn:aws:sns:ap-southeast-2:123456789012:stack:...",
"Owner": "123456789012",
"Protocol": "lambda",
"Endpoint": "arn:aws:lambda:ap-southeast-2:743112987576:function:my-function",
"TopicArn": "arn:aws:sns:ap-southeast-2:123456789012:stack"
}
]
}
I then disabled the trigger in the Lambda console and saved the Lambda function. When I re-ran the above command, the results were empty:
{
"Subscriptions": []
}
When I enabled it again, the subscription returned.
So, my assumption is that, since the "disable/enable" button actually adds and removes a subscription, there does not appear to be any capability to 'disable' a subscription.
I have created a CF script that creates an EC2 instance containing a web service. It also creates an SNS topic and a subscription that uses this web service as its HTTP endpoint.
The script successfully creates the stack; the Topic and the Subscription exist. However, the Subscription remains in the PendingConfirmation state.
What must I do to get my script to confirm this Subscription upon creation?
I had a similar issue and my problem ended up being a misconfigured CloudFormation template. An AWS::SQS::QueuePolicy is required to give your SNS topic permission to send messages to the queue.
"SQSQueuePolicy": {
"Properties": {
"PolicyDocument": {
"Id": "usecase1",
"Statement": [
{
"Action": "SQS:SendMessage",
"Condition": {
"ArnEquals": {
"aws:SourceArn": {
"Ref": "SnsTopic"
}
}
},
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Resource": {
"Fn::GetAtt": [
"SqsQueue",
"Arn"
]
},
"Sid": "1"
}
],
"Version": "2012-10-17"
},
"Queues": [
{
"Ref": "SqsQueue"
}
]
},
"Type": "AWS::SQS::QueuePolicy"
}
You need to confirm the subscription at the endpoint for this to work.
Read the value of SubscribeURL and visit that URL. To confirm the subscription and start receiving notifications at the endpoint, you must visit the SubscribeURL (for example, by sending an HTTP GET request to the URL).
When you visit the URL, you will get back a response that looks like the following XML document. The document returns the subscription ARN for the endpoint within the ConfirmSubscriptionResult element.
<ConfirmSubscriptionResponse xmlns="http://sns.amazonaws.com/doc/2010-03-31/">
<ConfirmSubscriptionResult>
<SubscriptionArn>arn:aws:sns:us-west-2:123456789012:MyTopic:2bcfbf39-05c3-41de-beaa-fcfcc21c8f55</SubscriptionArn>
</ConfirmSubscriptionResult>
<ResponseMetadata>
<RequestId>075ecce8-8dac-11e1-bf80-f781d96e9307</RequestId>
</ResponseMetadata>
</ConfirmSubscriptionResponse>
As an alternative to visiting the SubscribeURL, you can confirm the subscription using the ConfirmSubscription action with the Token set to its corresponding value in the SubscriptionConfirmation message. If you want to allow only the topic owner and subscription owner to be able to unsubscribe the endpoint, you call the ConfirmSubscription action with an AWS signature.
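A minimal boto3 sketch of the ConfirmSubscription route, assuming your endpoint has captured the Token from the SubscriptionConfirmation message it received:
import boto3

sns = boto3.client("sns")

token = "<token-from-the-SubscriptionConfirmation-message>"

sns.confirm_subscription(
    TopicArn="arn:aws:sns:us-west-2:123456789012:MyTopic",
    Token=token,
    # Require an AWS-signed request to unsubscribe (topic/subscription owner only).
    AuthenticateOnUnsubscribe="true",
)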
You can refer to this AWS documentation.
Hope this helps!