Cannot add a Lambda target to an existing CloudWatch rule

I want to add a Lambda target to an existing CloudWatch rule. I used the following to refer to the existing rule:
rule = events.Rule.from_event_rule_arn(self, "Rule", event_rule_arn='')
Later I add a target with:
rule.add_target(targets.LambdaFunction(lambdaFn))
When I execute a cdk synth or deploy, I get the following error:
AttributeError: '+' object has no attribute 'add_target'
I know the IRule element does not have that method, but I cannot find a clear way to achieve what I need.
I also tried using an event source on the Lambda, but got the following error:
Unrecognized event source, must be kinesis, dynamodb stream or sqs.

I do not think this is possible. You need to reference the Lambda function instead, and manage the rule from the stack to which the rule belongs.
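For example, from the stack that owns the rule you can import the existing function by ARN and attach it as a target. A minimal sketch (the ARN here is a placeholder):

from aws_cdk import aws_lambda as _lambda
from aws_cdk import aws_events_targets as targets

# Import the existing function into the rule's stack by ARN (placeholder).
imported_fn = _lambda.Function.from_function_arn(
    self, "ImportedFn",
    "arn:aws:lambda:us-east-1:123456789012:function:my-function",
)
# The concrete Rule construct owned by this stack does have add_target.
rule.add_target(targets.LambdaFunction(imported_fn))

Note that the CDK generally cannot add invoke permissions to a function imported by ARN, so that permission may have to be managed separately.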

As MilanG suggests, it is not possible.
My use case requires creating several Lambda functions and setting the same trigger on all of them, and CloudWatch Events rules are not a fit for that because of the hard limit of five targets per rule. I use SNS instead, as follows:
sns_topic = aws_sns.Topic.from_topic_arn(scope, id, topic_arn=config)
lambdaFn.add_event_source(aws_lambda_event_sources.SnsEventSource(sns_topic))
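A fuller sketch of that approach (CDK v2 imports; topic_arn and the list lambda_functions are assumed to be defined elsewhere in the stack):

from aws_cdk import aws_sns, aws_lambda_event_sources

# Import the existing topic once; SNS has no five-target limit,
# so any number of functions can share the same trigger.
sns_topic = aws_sns.Topic.from_topic_arn(self, "SharedTrigger", topic_arn=topic_arn)
for fn in lambda_functions:
    fn.add_event_source(aws_lambda_event_sources.SnsEventSource(sns_topic))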


Using the 'newUUID()' AWS IoT function in the AWS SiteWise service that returns a random 16-byte UUID to be stored as a partition key

I am trying to use the 'newUUID()' AWS IoT function in the AWS SiteWise service (as part of an alarm action); it returns a random 16-byte UUID to be stored in the partition key column of a DynamoDB table.
With reference to the attached screenshot: in 'PartitionKeyValue' I am trying to use the value returned by the newUUID() function, which will be passed to DynamoDB as part of the action trigger.
However, this gives the following error:
"Invalid Request exception: Failed to parse expression due to: Invalid expression. Unrecognized function: newUUID".
I understand the error, but I am not sure how to solve it and use a random UUID generator. Kindly note that I do not want to use a timestamp, because multiple events could be triggered at the same time and hence share the same timestamp.
Any ideas on how I can use this function, or any other information that helps me achieve the above, would be appreciated.
The docs you refer to say that the function is all lowercase: newuuid().
Perhaps that will work, but I believe that function is only available in IoT Core SQL statements. With alarm notifications, I think you only have these expressions to work with, which is not much; essentially, you need to get what you need from the alarm event itself.
You may need the alarm event to invoke Lambda rather than write directly to DynamoDB. Your Lambda function can create a UUID and write the alarm record to DynamoDB using the SDKs.
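A minimal sketch of such a function in Python, assuming a hypothetical table named AlarmEvents with an eventId partition key:

import json
import uuid
import boto3

# 'AlarmEvents' and 'eventId' are hypothetical names for illustration.
table = boto3.resource("dynamodb").Table("AlarmEvents")

def handler(event, context):
    # Generate the random UUID that the notification expressions cannot,
    # and store the raw alarm event alongside it.
    table.put_item(Item={
        "eventId": str(uuid.uuid4()),
        "alarm": json.dumps(event),
    })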

How to delete a GCF function, or how to change the trigger type of an existing one, using Cloud Custodian?

I have a GCF function deployed using Cloud Custodian (c7n-org), and I have to change the trigger type of the existing function, which has an HTTP trigger. When I tried to delete the function, it was deleted, but when I then applied the rule, the function was created with an HTTP trigger rather than a pub/sub one, even though my rules file specifies a pubsub trigger.
When I create a new policy, it is successfully created.
Am I missing anything?
The issue has been fixed. It was due to incorrect Cloud Custodian documentation: I was trying to create the rule with a pubsub trigger, but the variable should be named target_type rather than trigger_type.

AWS EventBridge - Use output of first target as input to the next

A rule in AWS EventBridge allows us to provide up to 5 targets. For each of these targets, we have some options for choosing the input, based on the event that matched the rule. Is there a way to pass the output of the first target (a Lambda function) as the input to the next (another Lambda function)?
I know we can do this by publishing to SNS at the end of the first Lambda function, but I am looking for a way to do this within EventBridge.
Thanks for your help
A cleaner way to do this would be to have EventBridge hand the event over to a Step Functions state machine and have 5 steps in that state machine.
Step Functions allows you to consume the output of the first step in the second step, and so on.
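As a rough CDK sketch of this idea (fn_a, fn_b, and rule are assumed to exist elsewhere in the stack), each LambdaInvoke step hands its payload to the next, and the EventBridge rule starts the state machine instead of invoking the functions directly:

from aws_cdk import aws_stepfunctions as sfn
from aws_cdk import aws_stepfunctions_tasks as tasks
from aws_cdk import aws_events_targets as targets

# fn_a's return value becomes fn_b's input via output_path.
step_a = tasks.LambdaInvoke(self, "StepA", lambda_function=fn_a, output_path="$.Payload")
step_b = tasks.LambdaInvoke(self, "StepB", lambda_function=fn_b, output_path="$.Payload")
state_machine = sfn.StateMachine(self, "Chain", definition=step_a.next(step_b))

# The rule triggers the state machine rather than the Lambdas.
rule.add_target(targets.SfnStateMachine(state_machine))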
I'd agree with paritosh's answer that Step Functions makes more sense for a workflow, but if you need something more lightweight (and don't want to learn one more thing), you can set EventBridge as the Lambda destination; Lambda then sends the event back to EventBridge via a PutEvents API call: https://aws.amazon.com/blogs/compute/introducing-aws-lambda-destinations/
If you want to change the input before triggering the Lambda, you can use an input transformer: https://docs.aws.amazon.com/eventbridge/latest/userguide/transform-input.html
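For example, a boto3 sketch of attaching a target with an input transformer (the rule, target, and field names here are hypothetical):

import boto3

events_client = boto3.client("events")
# Reshape the matched event before it reaches the target.
events_client.put_targets(
    Rule="my-rule",
    Targets=[{
        "Id": "target-1",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:my-function",
        "InputTransformer": {
            "InputPathsMap": {"detail": "$.detail"},
            "InputTemplate": '{"payload": <detail>}',
        },
    }],
)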

Add a CloudWatch Event dynamically in AWS Lambda code

I have a Lambda function triggered by an S3 file PUT event. Once the Lambda is triggered, I want to attach a CloudWatch Events (cron) rule to the same Lambda from within the code. Is that possible?
You need to do two things to accomplish this:
1. Add a target (the Lambda) to the CloudWatch rule (cron).
2. Add a permission to the Lambda allowing the rule to invoke it.
I don't have an exact code sample to give you, but snippets like the ones below will have to be included in your function to achieve this:
import boto3

# Add the Lambda as a target of the existing rule.
event_client = boto3.client('events')
event_response = event_client.put_targets(
    Rule=RULENAME,
    Targets=[{
        'Id': 'A_UNIQUE_STRING',
        'Arn': 'ARN_LAMBDA'
    }]
)

# Grant the rule permission to invoke the Lambda.
lambda_client = boto3.client('lambda')
lambda_response = lambda_client.add_permission(
    FunctionName="LAMBDA_NAME",
    StatementId="A_UNIQUE_STRING",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn="ARN_RULE"
)
ARN_LAMBDA should be something like - arn:aws:lambda:<aws-region>:<aws-account-number>:function:<lambda-name>
ARN_RULE should be something like - arn:aws:events:<aws-region>:<aws-account-number>:rule/<rule-name>
A_UNIQUE_STRING - you can generate something in your code that is meaningful and unique, or just use a random string.
You can refer to the guides in the boto3 documentation for Lambda and CloudWatch Events for more details:
http://boto3.readthedocs.io/en/latest/reference/services/lambda.html#Lambda.Client.add_permission
http://boto3.readthedocs.io/en/latest/reference/services/events.html#CloudWatchEvents.Client.put_targets
This should be possible for non-streaming triggers, but you need to handle the two different event types in your code.
On the other hand, it may be better to use two separate Lambdas, since you only pay for usage.
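If you keep a single function, the handler can branch on the shape of the incoming event; a rough sketch (handle_scheduled and handle_s3 are hypothetical helpers):

def handler(event, context):
    if event.get("source") == "aws.events":
        # Invoked by the CloudWatch Events (cron) rule.
        handle_scheduled(event)
    elif event.get("Records", [{}])[0].get("eventSource") == "aws:s3":
        # Invoked by the S3 PUT notification.
        handle_s3(event)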

How to move a file from one S3 bucket to another after some time?

I checked this thread: Is it possible to automatically move objects from an S3 bucket to another one after some time?
But I don't want to use the Glacier option, because our files are really small. Is there another option?
EDIT:
The requirement is to mark files as invalid (there is a metadata table where we change an attribute for this) and later on to delete them (invalid meaning, e.g., older than some date, maybe 30 days). After that, these invalid files should be deleted after, say, 120 days.
Why move files?
Separation by business requirement (invalid vs. valid files): we don't have to check this attribute again (whether a file is invalid or valid).
Fewer files to iterate over, if we want to iterate over valid files.
Important: the 'S3 PUT event' of the new bucket can invoke a Lambda function, which can do other things, like changing the attribute (valid/invalid/deleted) in our DynamoDB table.
Yes, we could also forgo moving files, but I don't see a way to execute Lambda functions after a delay of 30 days.
Best regards
It is possible to schedule Lambda function execution using CloudWatch scheduled events.
The design should include:
Implementation of a Lambda function which identifies and moves the required objects in S3.
A scheduled-expression rule with the desired schedule. CloudWatch supports crontab-like expressions: aws events put-rule --schedule-expression "cron(0 0 1 * ? *)" --name ObjectExpirationRule
Assignment of the rule to the Lambda: aws events put-targets --rule ObjectExpirationRule --targets '[{"Id": "1", "Arn": "arn:aws:lambda:us-east-1:123456789012:function:MyLambdaFunction"}]'
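A sketch of the Lambda function from the first step (bucket names and the age threshold are assumptions); note that S3 has no native move operation, so the function copies each object and then deletes the original:

import boto3
from datetime import datetime, timedelta, timezone

s3 = boto3.client("s3")
SOURCE_BUCKET = "valid-files-bucket"   # assumption
DEST_BUCKET = "invalid-files-bucket"   # assumption
MAX_AGE = timedelta(days=30)

def handler(event, context):
    cutoff = datetime.now(timezone.utc) - MAX_AGE
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=SOURCE_BUCKET):
        for obj in page.get("Contents", []):
            if obj["LastModified"] < cutoff:
                # Copy to the destination bucket, then remove the original.
                s3.copy_object(
                    Bucket=DEST_BUCKET,
                    Key=obj["Key"],
                    CopySource={"Bucket": SOURCE_BUCKET, "Key": obj["Key"]},
                )
                s3.delete_object(Bucket=SOURCE_BUCKET, Key=obj["Key"])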