I want to run a CloudWatch rule once the previous Step Function completes.
This needs to be done multiple times (i.e. it should be reusable).
Example: once I trigger a Step Function and its execution completes, the next CloudWatch rule should get triggered, and so on.
Can this be done like this: once a Step Function completes, a message is published to SQS, and then from SQS a CloudWatch event gets triggered?
You can create a CloudWatch rule with Step Functions as the event source and status SUCCEEDED:
{
"source": [
"aws.states"
],
"detail-type": [
"Step Functions Execution Status Change"
],
"detail": {
"status": [
"SUCCEEDED"
],
"stateMachineArn": [
"arn:aws:states:us-east-1:555666611111:stateMachine:my-state-machine-qa"
]
}
}
and add the next Step Function as the target.
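If you prefer to set this up programmatically, here's a boto3 sketch (the rule name, ARNs, and IAM role are placeholders; the role must allow EventBridge to call `states:StartExecution` on the target state machine):

```python
import json

def sfn_success_pattern(state_machine_arn):
    """Event pattern matching SUCCEEDED executions of one state machine."""
    return {
        "source": ["aws.states"],
        "detail-type": ["Step Functions Execution Status Change"],
        "detail": {
            "status": ["SUCCEEDED"],
            "stateMachineArn": [state_machine_arn],
        },
    }

def chain_state_machines(events_client, rule_name, source_arn, target_arn, role_arn):
    """Create a rule on the source machine's success and target the next machine.

    events_client is a boto3 "events" client, e.g. boto3.client("events").
    """
    events_client.put_rule(
        Name=rule_name,
        EventPattern=json.dumps(sfn_success_pattern(source_arn)),
        State="ENABLED",
    )
    events_client.put_targets(
        Rule=rule_name,
        Targets=[{"Id": "next-state-machine", "Arn": target_arn, "RoleArn": role_arn}],
    )
```

Because the pattern builder is parameterized, the same function can be reused for each link in the chain.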
I know you can trigger SageMaker pipelines with all kinds of events. Can you also trigger SageMaker pipelines when another pipeline finishes its execution?
Yes, use an Amazon EventBridge event, such as
{
"source": [
"aws.sagemaker"
],
"detail-type": [
"SageMaker Model Building Pipeline Execution Status Change"
],
"detail": {
"currentPipelineExecutionStatus": [
"Succeeded"
]
}
}
Then set the next pipeline as the target of the EventBridge rule.
You can use any EventBridge event to trigger a pipeline. Since EventBridge supports SageMaker Pipeline Execution Status Change as an event, you should be able to trigger one pipeline from another.
Yes, use an Amazon EventBridge event, such as
{
"source": ["aws.sagemaker"],
"detail-type": ["SageMaker Model Building Pipeline Execution Status Change"],
"detail": {
"currentPipelineExecutionStatus": ["Succeeded"],
"previousPipelineExecutionStatus": ["Executing"],
"pipelineArn": ["your-pipelineArn"]
}
}
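If you're wiring this up with boto3 rather than the console, a sketch might look like the following (rule name, ARNs, and role are placeholders; the role must allow EventBridge to start the downstream pipeline):

```python
import json

def pipeline_success_pattern(pipeline_arn):
    """Event pattern matching successful executions of one SageMaker pipeline."""
    return {
        "source": ["aws.sagemaker"],
        "detail-type": ["SageMaker Model Building Pipeline Execution Status Change"],
        "detail": {
            "currentPipelineExecutionStatus": ["Succeeded"],
            "pipelineArn": [pipeline_arn],
        },
    }

def chain_pipelines(events_client, rule_name, upstream_arn, downstream_arn, role_arn):
    """Trigger the downstream pipeline when the upstream one succeeds.

    events_client is a boto3 "events" client.
    """
    events_client.put_rule(
        Name=rule_name,
        EventPattern=json.dumps(pipeline_success_pattern(upstream_arn)),
        State="ENABLED",
    )
    events_client.put_targets(
        Rule=rule_name,
        Targets=[{
            "Id": "downstream-pipeline",
            "Arn": downstream_arn,
            "RoleArn": role_arn,
            # Optional parameters forwarded to the downstream execution.
            "SageMakerPipelineParameters": {"PipelineParameterList": []},
        }],
    )
```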
I want to be able to set up an AWS CloudWatch event rule that publishes to an SNS topic whenever one of my Step Functions completes (either success or failure). I do not want this to run for all Step Functions, but there will be an indeterminate number of them sharing a common name prefix. Ideally, I'd like to be able to do something like the pattern below, but it appears that wildcards are not allowed in event patterns. Are there any creative ways to work around this?
{
"source": [
"aws.states"
],
"detail-type": [
"Step Functions Execution Status Change"
],
"detail": {
"status": [
"FAILED",
"SUCCEEDED"
],
"stateMachineArn": [
"arn:aws:states:us-west-1:123456789012:stateMachine:Prefix-*"
]
}
}
Wildcards are not supported in CloudWatch event rules, according to the official AWS forum.
You will have to add all the ARNs to the state machine ARN list. To do this easily, you can write a script that:
1. Gets all the state machine names with the specific prefix.
2. Updates the CloudWatch event rule to include all of those state machine ARNs.
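A minimal boto3 sketch of such a script (rule name and prefix are placeholders; pass in boto3 "stepfunctions" and "events" clients):

```python
import json

def list_matching_arns(sfn_client, prefix):
    """Collect ARNs of all state machines whose name starts with prefix."""
    arns = []
    paginator = sfn_client.get_paginator("list_state_machines")
    for page in paginator.paginate():
        for sm in page["stateMachines"]:
            if sm["name"].startswith(prefix):
                arns.append(sm["stateMachineArn"])
    return arns

def status_change_pattern(arns):
    """Event pattern matching FAILED/SUCCEEDED executions of the listed machines."""
    return {
        "source": ["aws.states"],
        "detail-type": ["Step Functions Execution Status Change"],
        "detail": {
            "status": ["FAILED", "SUCCEEDED"],
            "stateMachineArn": arns,
        },
    }

def refresh_rule(events_client, sfn_client, rule_name, prefix):
    """Regenerate the rule's pattern from the current set of matching machines."""
    pattern = status_change_pattern(list_matching_arns(sfn_client, prefix))
    events_client.put_rule(
        Name=rule_name,
        EventPattern=json.dumps(pattern),
        State="ENABLED",
    )
```

You'd need to re-run this whenever a new state machine with the prefix is created, e.g. on a schedule or from your deployment pipeline.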
My solution is below:
{
"source": ["aws.states"],
"detail-type": ["Step Functions Execution Status Change"],
"detail": {
"status": ["SUCCEEDED", "FAILED"],
"stateMachineArn": [ { "prefix": "arn:aws:states::us-west-1:123456789012:stateMachine:prefix-" } ]
}
}
I have a requirement where I need to trigger my Lambda function once all of the Glue crawlers have run and my data is ready in Redshift to be queried.
I have set up the following CloudWatch rule, but it triggers the Lambda if any one of the crawlers succeeds.
{
"detail-type": [
"Glue Crawler State Change"
],
"source": [
"aws.glue"
],
"detail": {
"crawlerName": [
"crw-raw-recon-the-hive-ces-cashflow",
"crw-raw-recon-the-hive-ces-position",
"crw-raw-recon-the-hive-ces-trade",
"crw-raw-recon-the-hive-ces-movement",
"crw-raw-recon-the-hive-ces-inventory"
],
"state": [
"Succeeded"
]
}
}
Now my question: is there a way to ensure the Lambda is triggered only when all of them have succeeded?
Also, I am not sure whether Redshift emits any similar events when it receives data.
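An event rule can't aggregate across multiple events, so one common workaround is to let the rule fire on every crawler success, and have the Lambda itself poll the state of all the crawlers, proceeding only when every one is idle with a successful last run. A sketch, assuming the crawler names above and a boto3 "glue" client passed in:

```python
CRAWLER_NAMES = [
    "crw-raw-recon-the-hive-ces-cashflow",
    "crw-raw-recon-the-hive-ces-position",
    "crw-raw-recon-the-hive-ces-trade",
    "crw-raw-recon-the-hive-ces-movement",
    "crw-raw-recon-the-hive-ces-inventory",
]

def crawler_finished_ok(crawler):
    """True if a crawler (the "Crawler" dict from glue.get_crawler) is idle
    and its most recent run succeeded."""
    last = crawler.get("LastCrawl", {})
    return crawler.get("State") == "READY" and last.get("Status") == "SUCCEEDED"

def all_crawlers_succeeded(glue_client, names=CRAWLER_NAMES):
    """Check every crawler; True only when all have finished successfully."""
    return all(
        crawler_finished_ok(glue_client.get_crawler(Name=n)["Crawler"])
        for n in names
    )
```

The Lambda handler triggered by the rule would call `all_crawlers_succeeded` and exit early when it returns False; effectively, only the event from the last crawler to finish passes the check and runs the downstream logic.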
I created a rule in CloudWatch to trigger a Lambda function when a Glue job's state changes. The rule pattern is defined as:
{
"detail-type": [
"Glue Job State Change"
],
"source": [
"aws.glue"
]
}
In the Show metrics for the rule view, I can see that there is one FailedInvocation, but I can't find a way to see why the invocation failed. I have checked the Lambda function's log, but the function is not being called. So how can I view the log of the failed invocation?
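One way to capture why an invocation failed is to attach a dead-letter queue to the rule's target: EventBridge then delivers events it couldn't hand to the target into the SQS queue, annotated (in my experience) with attributes such as ERROR_CODE and ERROR_MESSAGE. A boto3 sketch, where the rule name and ARNs are placeholders:

```python
def target_with_dlq(target_id, target_arn, dlq_arn):
    """A rule target whose failed invocations are sent to an SQS dead-letter queue."""
    return {
        "Id": target_id,
        "Arn": target_arn,
        "DeadLetterConfig": {"Arn": dlq_arn},
    }

def attach_target(events_client, rule_name, target):
    """events_client is a boto3 "events" client."""
    events_client.put_targets(Rule=rule_name, Targets=[target])
```

The SQS queue's resource policy must allow the events service to send messages to it.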
We have a Lambda function we want to use to remove systems from our monitoring system when they are terminated by Auto Scaling lifecycle events. The function works as expected when we run it manually, but we do not see it being called when an instance is terminated. We've set up the following CloudWatch event with the Lambda function as its target. We've been testing by manually scaling down an ASG; the instances terminate, but the function is never called. Does anyone know what we're missing, or where to look for logs of the issue?
{
"source": [
"aws.autoscaling"
],
"detail-type": [
"EC2 Instance-terminate Lifecycle Action"
],
"detail": {
"AutoScalingGroupName": [
"ASG_NAME"
]
}
}
Realized I didn't have a lifecycle hook on the ASG; after adding one, it's working as expected.
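For anyone scripting the fix, a boto3 sketch of adding the missing termination hook (ASG and hook names are placeholders):

```python
def terminate_hook_params(asg_name, hook_name, heartbeat_seconds=300):
    """Parameters for autoscaling.put_lifecycle_hook.

    The hook pauses instance termination, which is what causes the
    "EC2 Instance-terminate Lifecycle Action" event to be emitted at all.
    """
    return {
        "AutoScalingGroupName": asg_name,
        "LifecycleHookName": hook_name,
        "LifecycleTransition": "autoscaling:EC2_INSTANCE_TERMINATING",
        "HeartbeatTimeout": heartbeat_seconds,
        "DefaultResult": "CONTINUE",  # proceed with termination after the timeout
    }

def add_hook(autoscaling_client, asg_name, hook_name):
    """autoscaling_client is a boto3 "autoscaling" client."""
    autoscaling_client.put_lifecycle_hook(**terminate_hook_params(asg_name, hook_name))
```

Remember that once the hook exists, something (e.g. your Lambda) should call `complete_lifecycle_action`, or the instance will wait for the heartbeat timeout before terminating.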