I have a step function that publishes to an SNS topic, which then sends an email notification. The email notification is sent as expected, but then the task gets stuck in "running" state when it should exit and terminate the step function. Does anyone know where I'm going wrong or what might be causing this?
"ErrorNotification": {
  "Type": "Task",
  "Resource": "arn:aws:states:::sns:publish.waitForTaskToken",
  "OutputPath": "$",
  "Parameters": {
    "TopicArn": "<topic-arn>",
    "Message": {
      "Input.$": "$",
      "TaskToken.$": "$$.Task.Token"
    }
  },
  "End": true
},
This specific line:
"Resource": "arn:aws:states:::sns:publish.waitForTaskToken"
implements the "Wait for a Callback with the Task Token" service integration pattern: Step Functions publishes to the topic and then leaves the state running until something sends the task token back.
Call Amazon SNS with Step Functions
The following includes a Task state that publishes to an Amazon SNS topic and then waits for the task token to be returned. See Wait for a Callback with the Task Token.
{
  "StartAt": "Send message to SNS",
  "States": {
    "Send message to SNS": {
      "Type": "Task",
      "Resource": "arn:aws:states:::sns:publish.waitForTaskToken",
      "Parameters": {
        "TopicArn": "arn:aws:sns:us-east-1:123456789012:myTopic",
        "Message": {
          "Input.$": "$",
          "TaskToken.$": "$$.Task.Token"
        }
      },
      "End": true
    }
  }
}
In that case, you need to check that whatever handles the callback (usually a Lambda) sends the appropriate final response, including the task token, back to Step Functions.
For example, I handle my callback functionality via a Lambda roughly like the one below, reporting either success or failure.
import json
import logging
import random

import boto3

LOG = logging.getLogger(__name__)
STEP_FUNCTIONS_CLIENT = boto3.client("stepfunctions")

...
# keep the execution alive while working, then report the outcome
LOG.info(f"Sending task heartbeat for task ID {body['taskToken']}")
STEP_FUNCTIONS_CLIENT.send_task_heartbeat(taskToken=body["taskToken"])
is_task_success = random.choice([True, False])
if is_task_success:
    LOG.info(f"Sending task success for task ID {body['taskToken']}")
    STEP_FUNCTIONS_CLIENT.send_task_success(
        taskToken=body["taskToken"],
        output=json.dumps({"id": body["id"]})
    )
else:
    LOG.info(f"Sending task failure for task ID {body['taskToken']}")
    STEP_FUNCTIONS_CLIENT.send_task_failure(
        taskToken=body["taskToken"],
        cause="Random choice returned False."
    )
...
As mentioned in this PubSub Notifications Attributes Documentation, I should be able to retrieve the Attributes such as eventType for all notifications sent by Cloud Storage to Pub/Sub topic.
However, in my case, I am not seeing any of the payload attributes when an object is added/removed from a cloud storage bucket that has been configured to send notifications.
Below is the message I am getting when an object is added to a cloud storage bucket:
{
  "kind": "storage#object",
  "id": "coral-ethos-xxxx.appspot.com/part-00000-of-00001.avro/1642783080470217",
  "selfLink": "https://www.googleapis.com/storage/v1/b/coral-ethos-xxxxx.appspot.com/o/part-00000-of-00001.avro",
  "name": "part-00000-of-00001.avro",
  "bucket": "coral-ethos-xxxxx.appspot.com",
  "generation": "1642783080470217",
  "metageneration": "1",
  "contentType": "application/octet-stream",
  "timeCreated": "2022-01-21T16:38:00.624Z",
  "updated": "2022-01-21T16:38:00.624Z",
  "storageClass": "STANDARD",
  "timeStorageClassUpdated": "2022-01-21T16:38:00.624Z",
  "size": "202521",
  "md5Hash": "w0QzRMUCOHj42vpME2P/Ww==",
  "mediaLink": "https://www.googleapis.com/download/storage/v1/b/coral-ethos-xxxxxx.appspot.com/o/part-00000-of-00001.avro?generation=1642783080470217&alt=media",
  "contentLanguage": "en",
  "crc32c": "XdhSwg==",
  "etag": "CMmt0e+jw/UCEAE="
}
eventType and notificationConfiguration attributes are not available on the above message.
I cannot identify if the object was added or removed without the 2 mentioned attributes.
Below is the code I used to display the message:
ProjectSubscriptionName subscriptionName =
ProjectSubscriptionName.of(projectId, subscriptionId);
// Instantiate an asynchronous message receiver.
MessageReceiver receiver =
(PubsubMessage message, AckReplyConsumer consumer) -> {
// Handle incoming message, then ack the received message.
System.out.println("Id: " + message.getMessageId());
System.out.println("Data: " + message.getData().toStringUtf8());
consumer.ack();
};
Can someone let me know if I am missing something in configuring the storage bucket?
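For reference, Cloud Storage delivers eventType and the other notification attributes as Pub/Sub message *attributes*, separate from the JSON object metadata carried in the message data, so code that only prints the data will never show them. A minimal Python sketch of the shape (the concrete values and the bucket name are made up):

```python
import base64
import json

# Hypothetical received notification: the object metadata rides in `data`,
# while eventType and friends travel as Pub/Sub message attributes.
message = {
    "data": base64.b64encode(json.dumps({
        "kind": "storage#object",
        "name": "part-00000-of-00001.avro",
    }).encode()).decode(),
    "attributes": {
        "eventType": "OBJECT_FINALIZE",  # OBJECT_DELETE for removals, etc.
        "bucketId": "my-bucket",
        "objectId": "part-00000-of-00001.avro",
    },
}

# The data payload alone never contains eventType:
payload = json.loads(base64.b64decode(message["data"]))
event_type = message["attributes"]["eventType"]
```

In the Java snippet above, that corresponds to reading message.getAttributesMap() in addition to message.getData().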
I am running a step function with many different steps, yet I am still stuck on the 2nd step.
The first step is a Java Lambda that gets all the input parameters and does what it needs to do.
The lambda returns null as it doesn't need to return anything.
The next step is a call to API Gateway which needs to use one of the parameters in the URL.
However, I see that the URL doesn't contain the needed parameter, and the step doesn't receive the parameters at all ("input": null under TaskStateEntered).
The API Gateway step looks as follows (I also tried "Payload.$": "$" instead of "Input.$": "$"):
"API Gateway start": {
  "Type": "Task",
  "Resource": "arn:aws:states:::apigateway:invoke",
  "Parameters": {
    "Input.$": "$",
    "ApiEndpoint": "aaaaaa.execute-api.aa-aaaa-1.amazonaws.com",
    "Method": "GET",
    "Headers": {
      "Header1": [
        "HeaderValue1"
      ]
    },
    "Stage": "start",
    "Path": "/aaa/aaaa/aaaaa/aaaa/$.scenario",
    "QueryParameters": {
      "QueryParameter1": [
        "QueryParameterValue1"
      ]
    },
    "AuthType": "IAM_ROLE"
  },
  "Next": "aaaaaa"
},
But when my step function gets to this stage it fails and I see the following in the logs:
{
  "name": "API Gateway start",
  "input": null,
  "inputDetails": {
    "truncated": false
  }
}
And eventually:
{
  "error": "States.Runtime",
  "cause": "An error occurred while executing the state 'API Gateway start' (entered at the event id #9). Unable to apply Path transformation to null or empty input."
}
What am I missing here? Note that part of the path is a value that I enter at the step function execution. ("Path": "/aaa/aaaa/aaaaa/aaaa/$.scenario")
EDIT:
As requested by @lynkfox, I am adding the Lambda definition that comes before the API Gateway step.
And to answer the question: yes, it's standard, and I see no input.
"Run tasks": {
  "Type": "Task",
  "Resource": "arn:aws:states:::lambda:invoke",
  "OutputPath": "$.Payload",
  "Parameters": {
    "Payload.$": "$",
    "FunctionName": "arn:aws:lambda:aaaaaa-1:12345678910:function:aaaaaaa-aaa:$LATEST"
  },
  "Retry": [
    {
      "ErrorEquals": [
        "Lambda.ServiceException",
        "Lambda.AWSLambdaException",
        "Lambda.SdkClientException"
      ],
      "IntervalSeconds": 2,
      "MaxAttempts": 6,
      "BackoffRate": 2
    }
  ],
  "Next": "API Gateway start"
},
So yes, as I commented, I believe the problem is the OutputPath of your Lambda task definition. What that setting says is: take whatever comes out of this Lambda (which is nothing!) and cut off everything other than the key Payload.
Since you are returning nothing, this causes nothing to be sent to the next task.
I am assuming your incoming event already has a key in the JSON named Payload, so what you want to do is remove the OutputPath from your Lambda task. It doesn't need to return anything, so it doesn't need an OutputPath or ResultPath.
Next, on your API task, assuming again that your initiating event has a key of Payload, you would set "InputPath": "$.Payload" - and if your headers or parameters are in the initiating JSON event, you can reference those keys in the Parameters section of the definition.
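Under those assumptions, a sketch of what the two states might look like after the change (placeholder ARNs and paths kept from the question; "ResultPath": null is one way to discard the Lambda's empty result so the original event passes through unchanged, and a "Path.$" with States.Format is one way to build the URL path from $.scenario inside Payload):

```json
"Run tasks": {
  "Type": "Task",
  "Resource": "arn:aws:states:::lambda:invoke",
  "ResultPath": null,
  "Parameters": {
    "Payload.$": "$",
    "FunctionName": "arn:aws:lambda:aaaaaa-1:12345678910:function:aaaaaaa-aaa:$LATEST"
  },
  "Next": "API Gateway start"
},
"API Gateway start": {
  "Type": "Task",
  "Resource": "arn:aws:states:::apigateway:invoke",
  "InputPath": "$.Payload",
  "Parameters": {
    "ApiEndpoint": "aaaaaa.execute-api.aa-aaaa-1.amazonaws.com",
    "Method": "GET",
    "Stage": "start",
    "Path.$": "States.Format('/aaa/aaaa/aaaaa/aaaa/{}', $.scenario)",
    "AuthType": "IAM_ROLE"
  },
  "Next": "aaaaaa"
}
```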
Every AWS service begins with an Event and ends with an Event, and each Event is a JSON object (which I'm sure you know). With state machines this continues - the state machine/step function is just the controller for passing Events from one Task to the next.
So any given Task can have an InputPath, OutputPath, or ResultPath. These three definition parameters decide what values go into the Task and what is sent on to the next Task. State machines are, by definition, for maintaining state between Tasks, and these parameters help control that state (and there is pretty much only one "state" at any given time: the Event heading to the next Task).
The ResultPath is where, in that overall Event, the Task puts its result. If you set ResultPath: "$.MyResult" by itself, the result is appended under that key on the incoming Event.
If you add OutputPath, ONLY that key from the Task's output Event is passed on to the next step of the step function.
These three give you a lot of control.
Want to take an Event into a Lambda and respond with something completely different because you don't need the incoming data? Combine OutputPath and ResultPath with the same value (and have your Lambda respond with a JSON object) and you replace the Event wholesale.
If you set a ResultPath and leave OutputPath as the default "$", the next step receives the incoming Event with one extra key containing the result of your Task (the key being whatever you set in ResultPath).
InputPath lets you choose what goes into the Task. I am not 100% certain, but I'm pretty sure it does not by itself remove anything from the Event passed to the next Task in the chain.
More information can be found here, but it can get pretty confusing.
My quick guide:
ResultPath by itself if you want to append the Task's result to the Event.
ResultPath + OutputPath with the same value if you want to cut off the input and have only the Task's output continue (the Task must return a JSON-style object).
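As a toy model of the guide above (my own sketch, supporting only one-level paths, not anything from the AWS SDK), here is how ResultPath and OutputPath combine:

```python
def select(path, event):
    """Toy JsonPath: supports only '$' and one-level '$.Key' paths."""
    return event if path == "$" else event[path[2:]]

def run_task(event, task_result, result_path="$", output_path="$"):
    """Toy model of a Task state's output handling (ignores InputPath)."""
    if result_path == "$":
        combined = task_result          # default: the result replaces the whole Event
    else:
        combined = dict(event)          # ResultPath appends the result under a key
        combined[result_path[2:]] = task_result
    return select(output_path, combined)  # OutputPath filters what moves on

# ResultPath alone keeps the input and appends the result:
assert run_task({"a": 1}, {"r": 2}, result_path="$.MyResult") == {"a": 1, "MyResult": {"r": 2}}
# ResultPath + OutputPath with the same value passes only the result on:
assert run_task({"a": 1}, {"r": 2}, result_path="$.X", output_path="$.X") == {"r": 2}
```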
I am using the AWS CDK to create a state machine that sends a message to a fifo queue and waits for a callback from the lambda worker to continue execution.
I would like the messages that get sent to the fifo queue to have a dynamic MessageGroupId assigned to them so I can control the number of lambda workers processing the messages. The only way I can think of to get a dynamic MessageGroupId is to reference some parameter of the step function input with JsonPath, but I have not come across any documentation about it. My initial tests at using JsonPath to dynamically pass the MessageGroupId failed: the string "$.MessageGroupId" was passed through literally, effectively giving every message the same message group id and thus one lambda worker.
Is it possible to dynamically assign a message group id to a sqs message when sent from a step function?
If so, how?
With the help of AWS Support, I managed to do it by either using the Context Object or passing an ID in the initial input and referencing it with $.
Here's an example:
{
  "Comment": "Generate unique MessageGroupId",
  "StartAt": "Start",
  "States": {
    "Start": {
      "Type": "Task",
      "TimeoutSeconds": 60,
      "Resource": "arn:aws:states:::sqs:sendMessage.waitForTaskToken",
      "Parameters": {
        "QueueUrl": "<YOUR_QUEUE_URL>",
        "MessageBody": {
          "Input.$": "$",
          "TaskToken.$": "$$.Task.Token"
        },
        "MessageGroupId.$": "$$.Execution.Id"
      },
      "ResultPath": "$",
      "End": true
    }
  }
}
My problem was that I was trying to set the MessageGroupId like so:
"MessageGroupId": "$$.Execution.Id"
Where I should have done:
"MessageGroupId.$": "$$.Execution.Id"
Appending .$ to the key makes Step Functions resolve the expression "$$.Execution.Id" instead of inserting the literal string "$$.Execution.Id".
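The convention can be illustrated with a toy resolver (my own sketch, not an AWS API): keys ending in .$ are treated as paths into the $$ context object and resolved, while all other values are taken literally.

```python
def resolve_parameters(parameters, context):
    """Toy model of how ASL resolves a Parameters block against $$ context paths."""
    resolved = {}
    for key, value in parameters.items():
        if key.endswith(".$"):
            node = context
            for part in value.lstrip("$").lstrip(".").split("."):
                node = node[part]          # walk e.g. Execution -> Id
            resolved[key[:-2]] = node      # the '.$' suffix is dropped in the output
        else:
            resolved[key] = value          # no '.$' suffix: the string is kept verbatim
    return resolved
```

With a context of {"Execution": {"Id": "exec-123"}}, the key "MessageGroupId.$" resolves to "exec-123", while a plain "MessageGroupId" key keeps the literal "$$.Execution.Id" string - exactly the bug described above.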
Currently, a Wait state in AWS Step Functions can only wait for a defined, fixed period of time.
Let's say my step function checks with an API for a status: if the status is updated it moves ahead; otherwise it waits again for a set period of time.
I would like to make this waiting period dynamic,
i.e. (the backoff rate is set to 2)
1st retry: wait for 3600s
2nd retry: wait for 7200s (3600x2)
3rd retry: wait for 14400s (7200x2)
and so on.
Is there any way I can do this without using any other external computation resource (such as a Lambda)?
Just raise a custom exception (for example, StatusNotUpdated) in your function if the status is not updated; then you can define the step like this:
"Check API Status": {
  "Type": "Task",
  "Resource": "arn:aws:states:us-east-1:123456789012:task:check_api_status",
  "Next": "Status Updated",
  "Retry": [
    {
      "ErrorEquals": ["StatusNotUpdated"],
      "IntervalSeconds": 3600,
      "BackoffRate": 2.0,
      "MaxAttempts": 15
    }
  ],
  "Catch": [
    {
      "ErrorEquals": ["States.ALL"],
      "Next": "Status Update Failed"
    }
  ]
}
Check here for more info
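With "IntervalSeconds": 3600 and "BackoffRate": 2.0 as above, the wait before each retry is simply IntervalSeconds multiplied by BackoffRate on every attempt, which produces exactly the schedule the question asks for:

```python
interval_seconds = 3600
backoff_rate = 2.0

# wait before retry n is IntervalSeconds * BackoffRate ** (n - 1)
waits = [int(interval_seconds * backoff_rate ** (attempt - 1)) for attempt in (1, 2, 3)]
print(waits)  # [3600, 7200, 14400]
```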
I was not able to find an inbuilt tool for this, so I created custom logic in a library.
The library has 2 parts:
a CDK template housing the lambda/compute service
service code housing the exponential-wait logic
The approach I took to solve this problem was: when the request comes into the step function, I append an object with wait-time parameters to it. These parameters are then used by the lambda to calculate the dynamic wait time and to update the JSON path with the new wait-time value.
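I don't have that library at hand, but the compute side of such a scheme might look roughly like this (field names like waitParams are purely illustrative assumptions); the state machine would then read the updated value back with a Wait state's SecondsPath:

```python
def handler(event, context=None):
    """Hypothetical Lambda: bump the wait time carried in the step input.

    The step function appends a wait-parameter object to the event up front;
    each invocation multiplies the wait by the backoff rate, capped at a max.
    """
    params = event["waitParams"]
    params["seconds"] = min(params["seconds"] * params["backoffRate"],
                            params["maxSeconds"])
    return event
```

A Wait state with "SecondsPath": "$.waitParams.seconds" then waits for the freshly computed value on each loop iteration.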
I have an SNS topic in eu-west-1 hosted by AWS.
If I log into the AWS SNS console and publish manually to my endpoint, then the notification(s) are sent correctly to the devices with the correct data.
However, I have a simple Clojure server which uses the Amazonica library to handle the AWS API calls, and regardless of what message I send to my SNS topic, the notification arrives at the device with the last message sent from the SNS console.
example:
log into SNS console and send the following:
{
  "default": "Test data",
  "email": "Test data",
  "sqs": "Test data",
  "lambda": "Test data",
  "http": "Test data",
  "https": "Test data",
  "sms": "Test data",
  "APNS": "{\"aps\":{\"alert\": \"Test data\"} }",
  "APNS_SANDBOX": "{\"aps\":{\"alert\":\"Test data\"}}",
  "APNS_VOIP": "{\"aps\":{\"alert\":\"Test data\"}}",
  "APNS_VOIP_SANDBOX": "{\"aps\":{\"alert\": \"Test data\"} }",
  "MACOS": "{\"aps\":{\"alert\":\"Test data\"}}",
  "MACOS_SANDBOX": "{\"aps\":{\"alert\": \"Test data\"} }",
  "GCM": "{ \"data\": { \"message\": \"Test data\" } }",
  "ADM": "{ \"data\": { \"message\": \"Test data\" } }",
  "BAIDU": "{\"title\":\"Test data\",\"description\":\"Test data\"}",
  "MPNS": "<?xml version=\"1.0\" encoding=\"utf-8\"?><wp:Notification xmlns:wp=\"WPNotification\"><wp:Tile><wp:Count>ENTER COUNT</wp:Count><wp:Title>Test data</wp:Title></wp:Tile></wp:Notification>",
  "WNS": "<badge version=\"1\" value=\"23\"/>"
}
This is generated using the generate-JSON feature of the console. It works as expected: the notification arrives with the message Test data. All is golden.
However if I do the following in clojure:
(defn- sns-push [body]
  (sns/publish (env :sns)
               :topic-arn "arn:aws:sns:eu-west-1:xxxxxxxxxxsecret"
               :subject "Dummy Subject"
               :message "Dummy message"))
where (env :sns) is set correctly, the notifications get sent, but instead of carrying the message Dummy message they arrive with Test data, the last message sent from the console.
I have no idea what is causing this to happen.
Are your queues configured to resend messages that are not deleted from the queue? This is an extremely common configuration for queues. A typical flow is:
receive a message
process it and save the result
if the result was saved, delete the message.
otherwise don't delete it from the queue.
If the message is not deleted, it will be resent at a later time when the queue is read from again, allowing it to be eventually processed. This makes the system reliable in the face of queue consumers that could die halfway through processing a message (which is always a possibility in any real system). Many refer to this arrangement as "at least once delivery".
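That receive/process/delete flow can be sketched with a toy in-memory queue (just the semantics, not the real SQS API) to show why unacknowledged messages come back:

```python
import collections

class ToyQueue:
    """Minimal at-least-once queue: a message stays until explicitly deleted."""
    def __init__(self):
        self._messages = collections.deque()

    def send(self, body):
        self._messages.append(body)

    def receive(self):
        # deliver from the front; the message is NOT removed yet
        return self._messages[0] if self._messages else None

    def delete(self, body):
        self._messages.remove(body)

queue = ToyQueue()
queue.send("notification-1")

msg = queue.receive()
# the consumer crashes before deleting -> the message is redelivered later
assert queue.receive() == "notification-1"

# a later consumer processes it successfully and then deletes it
queue.delete(queue.receive())
assert queue.receive() is None
```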