AWS Step Functions: process output from a Lambda

I have an AWS Step Function that needs to call 3 Lambdas in sequence. At the end of each call, the state machine needs to process the response from the Lambda and determine the next Lambda to call.
How can the step function process the response from a Lambda? Can you show an example, please?

Assume you have called a Lambda function as the first step of your state machine, and based on its response you need to decide which Lambda should be triggered next.
This is straightforward: return an attribute (e.g. next_state) in the Lambda response, create a "Choice" state in the step function, and feed it this next_state attribute as input.
A "Choice" state is essentially an if-else condition that routes the flow to the expected next Lambda.
For example, the state machine definition would look something like this:
{
  "Comment": "A description of my state machine",
  "StartAt": "Lambda Invoke 1",
  "States": {
    "Lambda Invoke 1": {
      "Type": "Task",
      "Resource": "arn:aws:states:::lambda:invoke",
      "OutputPath": "$.Payload",
      "Parameters": {
        "Payload.$": "$",
        "FunctionName": "<your lambda name>"
      },
      "Retry": [
        {
          "ErrorEquals": [
            "Lambda.ServiceException",
            "Lambda.AWSLambdaException",
            "Lambda.SdkClientException"
          ],
          "IntervalSeconds": 2,
          "MaxAttempts": 6,
          "BackoffRate": 2
        }
      ],
      "Next": "Choice"
    },
    "Choice": {
      "Type": "Choice",
      "Choices": [
        {
          "Variable": "$.next_state",
          "StringEquals": "Lambda Invoke 2",
          "Next": "Lambda Invoke 2"
        },
        {
          "Variable": "$.next_state",
          "StringEquals": "Lambda Invoke 3",
          "Next": "Lambda Invoke 3"
        }
      ],
      "Default": "Lambda Invoke 3"
    },
    "Lambda Invoke 2": {
      "Type": "Task",
      "Resource": "arn:aws:states:::lambda:invoke",
      "OutputPath": "$.Payload",
      "Parameters": {
        "Payload.$": "$",
        "FunctionName": "<your lambda name>"
      },
      "Retry": [
        {
          "ErrorEquals": [
            "Lambda.ServiceException",
            "Lambda.AWSLambdaException",
            "Lambda.SdkClientException"
          ],
          "IntervalSeconds": 2,
          "MaxAttempts": 6,
          "BackoffRate": 2
        }
      ],
      "End": true
    },
    "Lambda Invoke 3": {
      "Type": "Task",
      "Resource": "arn:aws:states:::lambda:invoke",
      "OutputPath": "$.Payload",
      "Parameters": {
        "Payload.$": "$",
        "FunctionName": "<your lambda name>"
      },
      "Retry": [
        {
          "ErrorEquals": [
            "Lambda.ServiceException",
            "Lambda.AWSLambdaException",
            "Lambda.SdkClientException"
          ],
          "IntervalSeconds": 2,
          "MaxAttempts": 6,
          "BackoffRate": 2
        }
      ],
      "End": true
    }
  }
}
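Note that because each task sets OutputPath to $.Payload, the Choice state sees the Lambda's return value directly, which is why $.next_state resolves to the attribute returned by the handler.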

There are two ways of handling a Lambda function's response in a Step Function.
1. Use add_retry and add_catch to handle any exception from the Lambda function (Python CDK syntax, as in the references below), e.g.:
.start(record_ip_task
       .add_retry(errors=["States.TaskFailed"],
                  interval=core.Duration.seconds(2),
                  max_attempts=2)
       .add_catch(errors=["States.ALL"], handler=notify_failure_job)) \
2. Return a value from the Lambda function, such as return {"Result": True}; the state machine can then check that value to choose the next task, e.g.:
.next(
    is_block_succeed
    .when(step_fn.Condition.boolean_equals("$.Result", False), notify_failure_job)
    .otherwise(send_slack_task)
)
Ref: https://dev.to/vumdao/aws-guardduty-combine-with-security-hub-and-slack-17eh
https://github.com/vumdao/aws-guardduty-to-slack


AWS Step Functions Consuming messages from SQS

I am consuming messages from SQS to trigger queries.
When I consume a message from SQS in Python, I normally need to delete the message afterwards.
Do I have to manually delete the message from SQS in a Step Function?
What is the best/simplest way to do so?
I believe Step Functions has an SDK integration with SQS:
{
  "Comment": "Run Redshift Queries",
  "StartAt": "ReceiveMessage from SQS",
  "States": {
    "ReceiveMessage from SQS": {
      "Type": "Task",
      "Parameters": {
        "QueueUrl": "******"
      },
      "Resource": "arn:aws:states:::aws-sdk:sqs:receiveMessage",
      "Next": "Run Analysis Queries",
      "ResultSelector": {
        "body.$": "States.StringToJson($.Messages[0].Body)"
      }
    },
    "Run Analysis Queries": {
      "Type": "Task",
      "Parameters": {
        "ClusterIdentifier": "******",
        "Database": "prod",
        "Sql": "select * from ******"
      },
      "Resource": "arn:aws:states:::aws-sdk:redshiftdata:executeStatement",
      "End": true
    }
  },
  "TimeoutSeconds": 3600
}
I just did a test and it seems that the message count goes down temporarily but then goes up again.
Is the best way to handle this to insert a Lambda between the "ReceiveMessage from SQS" stage and the Redshift stage?
This raises another question: so far I have only run this manually. How do I eventually trigger this Step Function to run on any incoming message?
If you must use SQS, then you will need a Lambda function to act as a proxy. Set up the queue as a Lambda trigger, and write a Lambda that parses the SQS message and makes the appropriate call to the Step Functions StartExecution API.
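A minimal sketch of that proxy (Python with boto3; the STATE_MACHINE_ARN environment variable is an assumption, and each SQS body is assumed to already be valid JSON):

import os

import boto3

sfn = boto3.client("stepfunctions")
STATE_MACHINE_ARN = os.environ["STATE_MACHINE_ARN"]  # assumed configuration

def lambda_handler(event, context):
    # An SQS-triggered Lambda receives a batch under "Records". On a
    # successful return, the Lambda service deletes the batch for you,
    # so no manual sqs:DeleteMessage call is needed here.
    for record in event["Records"]:
        sfn.start_execution(
            stateMachineArn=STATE_MACHINE_ARN,
            input=record["body"],
        )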
After you consume a message, you have to delete it using sqs:deleteMessage. The reason you see it reappear in the queue is that once it is read, it becomes hidden for the visibility timeout (30 seconds by default) to prevent other consumers from processing it simultaneously.
Here is an example of how to read, process, and delete a message from the queue. Note that I set MaxNumberOfMessages to 1, gave the downstream tasks a ResultPath other than $, and kept the ReceiptHandle in the ResultSelector so the delete step can still reference it:
"ReceiveMessage from SQS": {
"Type": "Task",
"Parameters": {
"MaxNumberOfMessages": 1,
"QueueUrl": "******"
},
"Resource": "arn:aws:states:::aws-sdk:sqs:receiveMessage",
"Next": "Run Analysis Queries",
"ResultSelector": {
"body.$": "States.StringToJson($.Messages[0].Body)"
}
},
"Run Analysis Queries": {
"Type": "Task",
"Parameters": {
"ClusterIdentifier": "******",
"Database": "prod",
"Sql": "select * from ******"
},
"Resource": "arn:aws:states:::aws-sdk:redshiftdata:executeStatement",
"ResultPath": "$.redshift_output",
"Next": "delete_sqs"
},
"delete_sqs": {
"Comment": "Deletes SQS message",
"Type": "Task",
"Resource": "arn:aws:states:::aws-sdk:sqs:deleteMessage",
"Parameters": {
"ReceiptHandle.$": "$.Messages[0].ReceiptHandle",
"QueueUrl": "******"
},
"ResultPath": null,
"Next": "update_result"
}
You can also read up to 10 messages at a time by setting MaxNumberOfMessages to 10 and fanning out with a Map state, as in this example:
{
  "StartAt": "read_sqs",
  "States": {
    "read_sqs": {
      "Type": "Task",
      "Resource": "arn:aws:states:::aws-sdk:sqs:receiveMessage",
      "Parameters": {
        "MaxNumberOfMessages": 10,
        "QueueUrl": "*******"
      },
      "ResultPath": "$.queueResponse",
      "Next": "check_results"
    },
    "check_results": {
      "Comment": "Checking if queue is empty",
      "Type": "Choice",
      "Choices": [
        {
          "Variable": "$.queueResponse.Messages[0]",
          "IsPresent": true,
          "Next": "map_results"
        }
      ],
      "Default": "exit"
    },
    "map_results": {
      "Comment": "Performs a 'map' operation over each payload",
      "Type": "Map",
      "ItemsPath": "$.queueResponse.Messages",
      "MaxConcurrency": 10,
      "Iterator": {
        "StartAt": "read_request",
        "States": {
          "read_request": {
            "Comment": "Parses and moves the request body into the response",
            "Type": "Pass",
            "Parameters": {
              "requestBody.$": "States.StringToJson($.Body)"
            },
            "ResultPath": "$.map_response",
            "Next": "Run Analysis Queries"
          },
          "Run Analysis Queries": {
            "Type": "Task",
            "Parameters": {
              "ClusterIdentifier": "******",
              "Database": "prod",
              "Sql": "select * from ******"
            },
            "Resource": "arn:aws:states:::aws-sdk:redshiftdata:executeStatement",
            "ResultPath": "$.redshift_output",
            "Next": "delete_sqs"
          },
          "delete_sqs": {
            "Comment": "Deletes SQS message",
            "Type": "Task",
            "Resource": "arn:aws:states:::aws-sdk:sqs:deleteMessage",
            "Parameters": {
              "ReceiptHandle.$": "$.ReceiptHandle",
              "QueueUrl": "*******"
            },
            "ResultPath": null,
            "End": true
          }
        }
      },
      "ResultPath": "$.flowResponse",
      "Next": "exit"
    },
    "exit": {
      "Type": "Pass",
      "End": true
    }
  }
}

AWS Step-Function: pass a specific value from one AWS lambda to another in step function parallel state

I have the state machine below. The requirement is to have a Lambda query the DB and get all the ids. Next, a Parallel state calls more than five Lambdas at once. Instead of passing all the fetched ids to every Lambda, I need to pass the respective id to each Lambda.
In the state language below, the first call is DB_CALL; let's say it returns {id1, id2, id3, id4, id5, id6}. I want to pass only id1 to First_Lambda, id2 to Second_Lambda, and so on.
Currently the entire id object gets passed to all the Lambdas. Please suggest a way to achieve this.
{
  "Comment": "Concurrent Lambda calls",
  "StartAt": "StarterLambda",
  "States": {
    "StarterLambda": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:DB_CALL",
      "Next": "ParallelCall"
    },
    "ParallelCall": {
      "Type": "Parallel",
      "End": true,
      "Branches": [
        {
          "StartAt": "First",
          "States": {
            "First": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:us-east-1:123456789012:function:First_Lambda",
              "TimeoutSeconds": 120,
              "End": true
            }
          }
        },
        {
          "StartAt": "Second",
          "States": {
            "Second": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:us-east-1:123456789012:function:Second_Lambda",
              "Retry": [
                {
                  "ErrorEquals": ["States.TaskFailed"],
                  "IntervalSeconds": 1,
                  "MaxAttempts": 2,
                  "BackoffRate": 2.0
                }
              ],
              "End": true
            }
          }
        },
        {
          "StartAt": "Third",
          "States": {
            "Third": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:us-east-1:123456789012:function:Third_Lambda",
              "Catch": [
                {
                  "ErrorEquals": ["States.TaskFailed"],
                  "Next": "CatchHandler"
                }
              ],
              "End": true
            },
            "CatchHandler": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:us-east-1:123456789012:function:CATCH_HANDLER",
              "End": true
            }
          }
        },
        {
          "StartAt": "Fourth",
          "States": {
            "Fourth": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:us-east-1:123456789012:function:Fourth_Lambda",
              "TimeoutSeconds": 120,
              "End": true
            }
          }
        },
        {
          "StartAt": "Fifth",
          "States": {
            "Fifth": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:us-east-1:123456789012:function:Fifth_Lambda",
              "TimeoutSeconds": 120,
              "End": true
            }
          }
        },
        {
          "StartAt": "Sixth",
          "States": {
            "Sixth": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:us-east-1:123456789012:function:Sixth_Lambda",
              "TimeoutSeconds": 120,
              "End": true
            }
          }
        }
      ]
    }
  }
}
You can use the Step Functions Parameters option, which lets you send a specific value (or JSON subset) to the next Lambda:
"Parameters": {
  "toprocess.$": "$.MetaData.CorrelationId"
},
The input to this Lambda is then a much smaller DTO than what your first Lambda received. Likewise, when returning a value from this Lambda, avoid overwriting the whole Step Functions state; assign the result to its own path:
"OutputPath": "$",
"ResultPath": "$.PartialResult",
What you are looking for is the Map state. With this state, you pass in the iterator path, in your case the path to the ids. The Map state runs its iterator once for each item in the list. Within the Map state you have a full state machine, so you can call a Lambda or any other state. It also has controls (MaxConcurrency) to limit how many iterations run at once, if needed.
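A minimal sketch, assuming the DB Lambda returns its ids as a list under $.ids and a single hypothetical Process_Lambda handles one id per iteration:

"ProcessEachId": {
  "Comment": "Sketch: iterates over $.ids; each iteration receives one id as its input",
  "Type": "Map",
  "ItemsPath": "$.ids",
  "MaxConcurrency": 6,
  "Iterator": {
    "StartAt": "ProcessOneId",
    "States": {
      "ProcessOneId": {
        "Type": "Task",
        "Resource": "arn:aws:lambda:us-east-1:123456789012:function:Process_Lambda",
        "End": true
      }
    }
  },
  "End": true
}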

Is it possible to execute Step Concurrency for AWS EMR through AWS STEP FUNCTION without Lambda?

This is my scenario: I'm trying to create 4 AWS EMR clusters, where each cluster is assigned 2 jobs, so it will be 4 clusters with 8 jobs orchestrated using a Step Function.
My flow should be:
4 clusters start at the same time, running 8 jobs in parallel, where each cluster runs its 2 jobs in parallel.
Recently, AWS launched a feature to run 2 or more jobs in a single cluster simultaneously using StepConcurrencyLevel in EMR, to reduce cluster runtime; it can be configured through the EMR console, the AWS CLI, or AWS Lambda.
But I want to launch 2 or more jobs in parallel in a single cluster using AWS Step Functions and its state machine language, in the format referenced here: https://docs.aws.amazon.com/step-functions/latest/dg/connect-emr.html
I've consulted many sites, where I found solutions for doing this through the console or through boto3 in AWS Lambda, but I couldn't find a solution that works through Step Functions itself.
Is there any solution for this?
Thanks in advance.
So, I went through a few more sites and found a solution for my issue.
The problem was setting StepConcurrencyLevel: I could add it through the AWS console, the AWS CLI, or Python with boto3, but I wanted a solution in the state machine language, and I found one.
While creating the cluster in the state machine definition, specify StepConcurrencyLevel (e.g. 2 or 3; the default is 1), then create the steps under that cluster and run the state machine.
The cluster picks up the configured concurrency level and runs its steps accordingly.
My sample process (the JSON script of my orchestration):
{
  "StartAt": "Create_A_Cluster",
  "States": {
    "Create_A_Cluster": {
      "Type": "Task",
      "Resource": "arn:aws:states:::elasticmapreduce:createCluster.sync",
      "Parameters": {
        "Name": "WorkflowCluster",
        "StepConcurrencyLevel": 2,
        "Tags": [
          { "Key": "Description", "Value": "process" },
          { "Key": "Name", "Value": "filename" },
          { "Key": "Owner", "Value": "owner" },
          { "Key": "Project", "Value": "project" },
          { "Key": "User", "Value": "user" }
        ],
        "VisibleToAllUsers": true,
        "ReleaseLabel": "emr-5.28.1",
        "Applications": [
          { "Name": "Spark" }
        ],
        "ServiceRole": "EMR_DefaultRole",
        "JobFlowRole": "EMR_EC2_DefaultRole",
        "LogUri": "s3://prefix/prefix/log.txt/",
        "Instances": {
          "KeepJobFlowAliveWhenNoSteps": true,
          "InstanceFleets": [
            {
              "InstanceFleetType": "MASTER",
              "TargetSpotCapacity": 1,
              "InstanceTypeConfigs": [
                {
                  "InstanceType": "m4.xlarge",
                  "BidPriceAsPercentageOfOnDemandPrice": 90
                }
              ]
            },
            {
              "InstanceFleetType": "CORE",
              "TargetSpotCapacity": 1,
              "InstanceTypeConfigs": [
                {
                  "InstanceType": "m4.xlarge",
                  "BidPriceAsPercentageOfOnDemandPrice": 90
                }
              ]
            }
          ]
        }
      },
      "Retry": [
        {
          "ErrorEquals": ["States.ALL"],
          "IntervalSeconds": 5,
          "MaxAttempts": 1,
          "BackoffRate": 2.5
        }
      ],
      "Catch": [
        {
          "ErrorEquals": ["States.ALL"],
          "Next": "Fail_Cluster"
        }
      ],
      "ResultPath": "$.cluster",
      "OutputPath": "$.cluster",
      "Next": "Add_Steps_Parallel"
    },
    "Fail_Cluster": {
      "Type": "Task",
      "Resource": "arn:aws:states:::sns:publish",
      "Parameters": {
        "TopicArn": "arn:aws:sns:us-west-2:919490798061:rsac_error_notification",
        "Message.$": "$.Cause"
      },
      "Next": "Terminate_Cluster"
    },
    "Add_Steps_Parallel": {
      "Type": "Parallel",
      "Branches": [
        {
          "StartAt": "Step_One",
          "States": {
            "Step_One": {
              "Type": "Task",
              "Resource": "arn:aws:states:::elasticmapreduce:addStep.sync",
              "Parameters": {
                "ClusterId.$": "$.ClusterId",
                "Step": {
                  "Name": "The first step",
                  "ActionOnFailure": "TERMINATE_CLUSTER",
                  "HadoopJarStep": {
                    "Jar": "command-runner.jar",
                    "Args": [
                      "spark-submit",
                      "--deploy-mode", "cluster",
                      "--master", "yarn",
                      "--conf", "spark.dynamicAllocation.enabled=true",
                      "--conf", "maximizeResourceAllocation=true",
                      "--conf", "spark.shuffle.service.enabled=true",
                      "--py-files", "s3://prefix/prefix/pythonfile.py",
                      "s3://prefix/prefix/pythonfile.py"
                    ]
                  }
                }
              },
              "Retry": [
                {
                  "ErrorEquals": ["States.ALL"],
                  "IntervalSeconds": 5,
                  "MaxAttempts": 1,
                  "BackoffRate": 2.5
                }
              ],
              "Catch": [
                {
                  "ErrorEquals": ["States.ALL"],
                  "ResultPath": "$.err_mgs",
                  "Next": "Fail_SNS"
                }
              ],
              "ResultPath": "$.step1",
              "Next": "Terminate_Cluster_1"
            },
            "Fail_SNS": {
              "Type": "Task",
              "Resource": "arn:aws:states:::sns:publish",
              "Parameters": {
                "TopicArn": "arn:aws:sns:us-west-2:919490798061:rsac_error_notification",
                "Message.$": "$.err_mgs.Cause"
              },
              "ResultPath": "$.fail_cluster",
              "Next": "Terminate_Cluster_1"
            },
            "Terminate_Cluster_1": {
              "Type": "Task",
              "Resource": "arn:aws:states:::elasticmapreduce:terminateCluster.sync",
              "Parameters": {
                "ClusterId.$": "$.ClusterId"
              },
              "End": true
            }
          }
        },
        {
          "StartAt": "Step_Two",
          "States": {
            "Step_Two": {
              "Type": "Task",
              "Resource": "arn:aws:states:::elasticmapreduce:addStep",
              "Parameters": {
                "ClusterId.$": "$.ClusterId",
                "Step": {
                  "Name": "The second step",
                  "ActionOnFailure": "TERMINATE_CLUSTER",
                  "HadoopJarStep": {
                    "Jar": "command-runner.jar",
                    "Args": [
                      "spark-submit",
                      "--deploy-mode", "cluster",
                      "--master", "yarn",
                      "--conf", "spark.dynamicAllocation.enabled=true",
                      "--conf", "maximizeResourceAllocation=true",
                      "--conf", "spark.shuffle.service.enabled=true",
                      "--py-files", "s3://prefix/prefix/pythonfile.py",
                      "s3://prefix/prefix/pythonfile.py"
                    ]
                  }
                }
              },
              "Retry": [
                {
                  "ErrorEquals": ["States.ALL"],
                  "IntervalSeconds": 5,
                  "MaxAttempts": 1,
                  "BackoffRate": 2.5
                }
              ],
              "Catch": [
                {
                  "ErrorEquals": ["States.ALL"],
                  "ResultPath": "$.err_mgs_1",
                  "Next": "Fail_SNS_1"
                }
              ],
              "ResultPath": "$.step2",
              "Next": "Terminate_Cluster_2"
            },
            "Fail_SNS_1": {
              "Type": "Task",
              "Resource": "arn:aws:states:::sns:publish",
              "Parameters": {
                "TopicArn": "arn:aws:sns:us-west-2:919490798061:rsac_error_notification",
                "Message.$": "$.err_mgs_1.Cause"
              },
              "ResultPath": "$.fail_cluster_1",
              "Next": "Terminate_Cluster_2"
            },
            "Terminate_Cluster_2": {
              "Type": "Task",
              "Resource": "arn:aws:states:::elasticmapreduce:terminateCluster.sync",
              "Parameters": {
                "ClusterId.$": "$.ClusterId"
              },
              "End": true
            }
          }
        }
      ],
      "ResultPath": "$.steps",
      "Next": "Terminate_Cluster"
    },
    "Terminate_Cluster": {
      "Type": "Task",
      "Resource": "arn:aws:states:::elasticmapreduce:terminateCluster.sync",
      "Parameters": {
        "ClusterId.$": "$.ClusterId"
      },
      "End": true
    }
  }
}
In this state machine definition, I set StepConcurrencyLevel to 2 while creating the cluster and added 2 Spark jobs as steps under it.
When I ran this in Step Functions, it orchestrated the cluster and ran the 2 steps concurrently in one cluster, without configuring anything directly in the AWS EMR console, through the AWS CLI, or through boto3.
I used only the state machine language to orchestrate 2 concurrent steps in a single cluster under AWS Step Functions, without help from other services like Lambda, the Livy API, or boto3.
This is how the flow diagram looks:
[Image: AWS Step Functions workflow for concurrent step execution]
To be precise about where StepConcurrencyLevel sits in the state machine definition above, it is in the Parameters of Create_A_Cluster:
"Create_A_Cluster": {
  "Type": "Task",
  "Resource": "arn:aws:states:::elasticmapreduce:createCluster.sync",
  "Parameters": {
    "Name": "WorkflowCluster",
    "StepConcurrencyLevel": 2,
    "Tags": [
      {
        "Key": "Description",
        "Value": "process"
      },
Thank You.

Cannot pass array to next task in AWS StepFunction

I am working on an AWS Step Function that gets an array of dates from a Lambda call, then passes it to a task that should take that array as a parameter to pass into another Lambda.
The Get Date Range task works fine and outputs the date array:
{
  "rng": [
    "2019-05-07",
    "2019-05-09"
  ]
}
...and the array gets passed into the ProcessDateRange task, but I cannot assign the array to the range parameter.
It literally passes the string "$.rng" instead of:
[
  "2019-05-07",
  "2019-05-09"
]
Here's the StateMachine:
{
  "StartAt": "Try",
  "States": {
    "Try": {
      "Type": "Parallel",
      "Branches": [
        {
          "StartAt": "Get Date Range",
          "States": {
            "Get Date Range": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:us-east-1:123456789:function:get-date-range",
              "Parameters": {
                "name": "thename",
                "date_query": "SELECT date from sch.tbl_dates;",
                "database": "the_db"
              },
              "ResultPath": "$.rng",
              "TimeoutSeconds": 900,
              "Next": "ProcessDateRange"
            },
            "ProcessDateRange": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:us-east-1:123456789:function:process-date-range",
              "Parameters": {
                "range": "$.rng"
              },
              "ResultPath": "$",
              "Next": "Exit"
            },
            "Exit": {
              "Type": "Succeed"
            }
          }
        }
      ],
      "Catch": [
        {
          "ErrorEquals": ["States.ALL"],
          "ResultPath": "$.Error",
          "Next": "Failed"
        }
      ],
      "Next": "Succeeded"
    },
    "Failed": {
      "Type": "Fail",
      "Cause": "There was an error. Please review the logs.",
      "Error": "error"
    },
    "Succeeded": {
      "Type": "Succeed"
    }
  }
}
This is because you are using the wrong syntax for Lambda tasks. To specify the input you need to set the InputPath key, for example:
"ProcessDateRange": {
"Type": "Task",
"Resource": "arn:aws:lambda:us-east-1:123456789:function:process-date-range",
"InputPath": "$.rng",
"ResultPath": "$",
"Next": "Exit"
},
If you want a parameter to be interpreted as a JSON path instead of a literal string, add ".$" to the end of the parameter name. To modify your example:
"ProcessDateRange": {
"Type": "Task",
"Resource": "arn:aws:lambda:us-east-1:123456789:function:process-date-range",
"Parameters": {
"range.$": "$.rng"
},
"ResultPath": "$",
"Next": "Exit"
},
Relevant docs here: https://docs.aws.amazon.com/step-functions/latest/dg/connectors-parameters.html#connectors-parameters-path

How to reuse state definition across aws state machines?

I have a state machine like the one below. If it has 1000 messages to notify, it spreads the notifications across 15 minutes.
Now, if I have a TwoHourStateMachine with exactly the same state flow but its own set of Lambdas, how can I reuse the states so that I don't duplicate the definition?
State machine:
FifteenMinuteStateMachine:
  Type: "AWS::StepFunctions::StateMachine"
  Properties:
    StateMachineName: "FifteenMinuteStateMachine"
    DefinitionString:
      Fn::Sub: |-
        {
          "Comment": "A 15 minute state machine",
          "StartAt": "Initialize",
          "TimeoutSeconds": 900,
          "States": {
            "Initialize": {
              "Type": "Task",
              "Resource": "${InitFifteenMinuteLambda.Arn}",
              "TimeoutSeconds": 15,
              "Retry": [
                {
                  "ErrorEquals": ["States.Timeout", "Lambda.Unknown"],
                  "IntervalSeconds": 2,
                  "MaxAttempts": 3,
                  "BackoffRate": 2
                }
              ],
              "Catch": [
                {
                  "ErrorEquals": ["States.ALL"],
                  "ResultPath": "$.errorOutput",
                  "Next": "Update Status"
                }
              ],
              "Next": "Notification Job"
            },
            "Notification Job": {
              "Type": "Task",
              "Resource": "${NotificationFifteenMinuteLambda.Arn}",
              "TimeoutSeconds": 15,
              "Retry": [
                {
                  "ErrorEquals": ["States.Timeout", "Lambda.Unknown"],
                  "IntervalSeconds": 2,
                  "MaxAttempts": 3,
                  "BackoffRate": 2
                }
              ],
              "Catch": [
                {
                  "ErrorEquals": ["States.ALL"],
                  "ResultPath": "$.errorOutput",
                  "Next": "Update Status"
                }
              ],
              "Next": "All Notifications sent?"
            },
            "All Notifications sent?": {
              "Type": "Choice",
              "Choices": [
                {
                  "Variable": "$.status",
                  "StringEquals": "IN_PROGRESS",
                  "Next": "Wait X Seconds"
                },
                {
                  "Variable": "$.status",
                  "StringEquals": "SUCCEEDED",
                  "Next": "Update Status"
                }
              ],
              "Default": "Wait X Seconds"
            },
            "Wait X Seconds": {
              "Type": "Wait",
              "SecondsPath": "$.notificationIntervalInSeconds",
              "Next": "Notification Job"
            },
            "Update Status": {
              "Type": "Task",
              "Resource": "${StatusUpdateFifteenMinuteLambda.Arn}",
              "TimeoutSeconds": 15,
              "End": true
            }
          }
        }
    RoleArn:
      Fn::GetAtt: [StepFunctionExecutionRole, Arn]
Ashok,
If you can frame the problem with one set of Lambda functions, I believe the solution is already done for you in your example. Are you required to call different Lambda functions? Ideally you could reuse the same Lambda function in the definition. Unfortunately, you cannot resolve ARN variables at runtime currently, which is what I believe you're asking for.
Hope this helps!
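One deploy-time workaround (not runtime reuse) is to keep the definition body in a single template string and swap in each machine's Lambda ARNs via the two-argument form of Fn::Sub; the shared snippet can then be stamped out per state machine with an include or preprocessor. A minimal sketch, assuming a hypothetical TwoHourStateMachine and InitTwoHourLambda, with every state after Initialize elided:

# Sketch only: the remaining states would mirror FifteenMinuteStateMachine,
# each referencing its own ${...Arn} substitution variable.
TwoHourStateMachine:
  Type: "AWS::StepFunctions::StateMachine"
  Properties:
    StateMachineName: "TwoHourStateMachine"
    DefinitionString:
      Fn::Sub:
        - |-
          {
            "Comment": "A 2 hour state machine",
            "StartAt": "Initialize",
            "TimeoutSeconds": 7200,
            "States": {
              "Initialize": {
                "Type": "Task",
                "Resource": "${InitLambdaArn}",
                "TimeoutSeconds": 15,
                "End": true
              }
            }
          }
        - InitLambdaArn:
            Fn::GetAtt: [InitTwoHourLambda, Arn]
    RoleArn:
      Fn::GetAtt: [StepFunctionExecutionRole, Arn]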