I have a state machine with multiple steps.
I want to send a notification whenever a step runs successfully or fails.
One solution is to add an SNS step after each step, so that whenever a step succeeds, the next step runs and sends the notification. But what if a step fails? How can I send an email then?
Is there any solution to this problem?
I know we can set up CloudWatch rules, but those send a notification when the complete state machine fails. Here I want notifications at a lower level, i.e. at every step of the state machine.
Thanks for your question.
In the AWS Step Functions console, there's a sample project called Callback pattern example which should illustrate what you're trying to achieve.
Step Functions offers error-catching functionality that allows you to transition to a specific state based on the error thrown. For more information, see this helpful documentation on error handling: Error Handling in Step Functions - AWS Step Functions
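For illustration, here's a minimal Amazon States Language sketch of that pattern: the task catches all errors and routes to an SNS publish state, so you get a notification per step on failure as well as on success. All ARNs and state names below are hypothetical placeholders.

```json
{
  "StartAt": "DoWork",
  "States": {
    "DoWork": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:MyStep",
      "Next": "NotifySuccess",
      "Catch": [
        {
          "ErrorEquals": ["States.ALL"],
          "Next": "NotifyFailure"
        }
      ]
    },
    "NotifySuccess": {
      "Type": "Task",
      "Resource": "arn:aws:states:::sns:publish",
      "Parameters": {
        "TopicArn": "arn:aws:sns:us-east-1:123456789012:StepNotifications",
        "Message": "Step succeeded"
      },
      "End": true
    },
    "NotifyFailure": {
      "Type": "Task",
      "Resource": "arn:aws:states:::sns:publish",
      "Parameters": {
        "TopicArn": "arn:aws:sns:us-east-1:123456789012:StepNotifications",
        "Message": "Step failed"
      },
      "End": true
    }
  }
}
```

Repeating the Catch block on each Task state gives you a per-step failure notification without adding an extra SNS state after every success.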
I intend to write software that posts daily feeds using SubmitFeed, and while planning to do so, I have seen in the documentation that I get some response from Amazon, possibly way before the actual parsing is complete. When I know that the operation has been completed, I need to call GetFeedSubmissionResult, however, the problem is that I need to find out somehow when the submission has finished. I could poll using GetFeedSubmissionList until the status is complete, but this would waste resources and is hacky. The way I would like to go is to use Amazon SNS and get notifications from FeedProcessingFinishedNotification.
However, I don't know how I could use Amazon SNS. Even though I read the docs, I still don't really see how to use it. I suppose something would need to run on my CentOS server or in my WildFly/JBoss instance that would "see" that a message has arrived and, as a result, trigger the code I intend to execute when such a push notification arrives. However, I do not know how to do this. How can I properly receive Amazon SNS push notifications on my CentOS server with WildFly/JBoss so that custom Java code I write gets executed?
P.S.
This is a link which deals with RedHat and Maven: https://access.redhat.com/documentation/en-us/red_hat_jboss_fuse/7.0-tp/html/apache_camel_component_reference/aws-sns-component
However, after reading it, it's not clear to me how I can receive messages from Amazon, e.g. that an order has been placed for a product.
This article about the CLI: https://docs.aws.amazon.com/cli/latest/userguide/cli-services-sns.html
describes how to subscribe using the email protocol. Reading about subscription protocols, I found this article: https://docs.aws.amazon.com/sns/latest/api/API_Subscribe.html
It seems that if I choose an HTTPS address, the messages would arrive as requests to that address. I'm really confused about this.
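For what it's worth, the HTTPS protocol does work roughly that way: SNS POSTs a JSON document to your endpoint, first a SubscriptionConfirmation that you acknowledge by fetching its SubscribeURL, then Notification messages. A minimal sketch of such an endpoint (in Python rather than Java for brevity; the same structure applies to a servlet, and production code should also verify the message signature, which is skipped here):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen


def process_feed_notification(message: str) -> None:
    # Placeholder for the custom logic you want to run on each notification.
    print("Feed processing finished:", message)


def handle_sns_body(message_type: str, body: dict) -> str:
    """Dispatch on the SNS message type; returns which action was taken."""
    if message_type == "SubscriptionConfirmation":
        # Fetching SubscribeURL confirms the subscription with SNS.
        urlopen(body["SubscribeURL"])
        return "confirmed"
    if message_type == "Notification":
        process_feed_notification(body["Message"])
        return "processed"
    return "ignored"


class SnsHandler(BaseHTTPRequestHandler):
    """HTTP handler for SNS POSTs; SNS sets the x-amz-sns-message-type header."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length))
        handle_sns_body(self.headers.get("x-amz-sns-message-type", ""), body)
        self.send_response(200)
        self.end_headers()
```

Serving `SnsHandler` with `HTTPServer` behind TLS (or a reverse proxy that terminates HTTPS) and subscribing that URL with the `https` protocol would route each FeedProcessingFinishedNotification to `process_feed_notification`.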
I have a state machine (AWS Step Functions). I invoke it from Java code to either start or stop it. How do I pause a state machine and resume it later?
To pause the state machine, you can add a manual approval step with API Gateway and call GetActivityTask when you're ready to unpause. See more details in this tutorial: https://aws.amazon.com/blogs/compute/implementing-serverless-manual-approval-steps-in-aws-step-functions-and-amazon-api-gateway/
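As a rough sketch of the activity-based unpause, assuming the state machine contains an Activity state that blocks until a worker replies (shown with boto3 for brevity; the activity ARN and worker name are placeholders):

```python
import json


def build_approval_output(approved: bool) -> str:
    """Build the JSON result the worker sends back to complete the paused state."""
    return json.dumps({"approved": approved})


def resume_state_machine(activity_arn: str) -> None:
    """Poll the activity for a pending task and complete it, unpausing the execution."""
    import boto3  # imported lazily so the sketch loads without AWS credentials

    sfn = boto3.client("stepfunctions")
    # GetActivityTask long-polls (up to ~60 seconds); the response carries an
    # empty taskToken if no execution is currently waiting on this activity.
    task = sfn.get_activity_task(activityArn=activity_arn,
                                 workerName="approval-worker")
    token = task.get("taskToken")
    if token:
        sfn.send_task_success(taskToken=token,
                              output=build_approval_output(True))
```

Calling `resume_state_machine` with the activity's ARN completes the approval state, and the execution proceeds to its next state.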
Alternatively, if the Java code where you need to pause the state machine sends logs to CloudWatch, and unpausing doesn't have to happen immediately after your code completes (it can wait ~5 minutes), you can trigger the Lambda steps to proceed after some event appears in the logs. For more details see https://docs.aws.amazon.com/step-functions/latest/dg/tutorial-cloudwatch-events-target.html
Say I have this:
Step 1: An Azure WebJob triggered by a timer; this job creates 1000 messages and puts them in a queue.
Step 2: Another Azure WebJob triggered by the above message queue; this WebJob processes those messages.
Step 3: The final WebJob should only be triggered when all messages have been processed by step 2.
It looks like Azure Storage queues don't support ordering, and the only way is to use Service Bus. I am wondering, is that really the only way?
What I am thinking is this kind of process:
Put all these messages into an Azure table, with a GUID as the key and a status of 0.
After step 2 finishes a message, change that message's status to 1 (i.e. finished), and trigger step 3 once every message is done.
Will it work? Or maybe there are some NuGet packages I can use to achieve what I want?
The simplest way, I think, is a combination of Azure Logic Apps and Azure Functions.
A Logic App is an automated, scalable workflow that you can trigger with a timer, an HTTP request, etc. Azure Functions is a serverless compute service that enables you to run code on demand without having to explicitly provision or manage infrastructure.
A Logic App can invoke code through Functions, and a Function is used much like a WebJob. So you could create a Logic App with three Functions, and they will run one by one.
As for WebJobs: yes, the QueueTrigger doesn't support ordering. The Service Bus you mentioned does meet some of your requirements with its FIFO feature. However, you would still need to make sure step 3 is triggered only after step 2 actually finishes, because an empty queue doesn't by itself mean all the messages from step 1 have been processed.
Hope my answer helps you.
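The tracking-table idea from the question can also work; the key detail is that the "all done" check must be atomic so step 3 fires exactly once. A minimal Python model of that fan-in check (the `BatchTracker` name is hypothetical; in Azure you would back this with the table's optimistic concurrency or a blob lease instead of an in-process lock):

```python
import threading


class BatchTracker:
    """Model of the 'status table' fan-in: mark each message done and
    fire the final step exactly once when all messages are finished."""

    def __init__(self, total: int, on_complete) -> None:
        self._lock = threading.Lock()
        self._remaining = total
        self._on_complete = on_complete

    def mark_done(self, message_id) -> None:
        """Record one finished message (step 2); trigger step 3 on the last one."""
        with self._lock:
            self._remaining -= 1
            fire = self._remaining == 0
        # Run the completion callback outside the lock to avoid blocking workers.
        if fire:
            self._on_complete()
```

Because the decrement and the zero check happen under the same lock, concurrent workers can't both observe "last message" and double-trigger step 3.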
I'm hoping to clarify what topics I need to subscribe to in order to get the next job off the queue in AWS IoT.
Based on this documentation it looks like I should just need to subscribe to the notify-next topic. However, when I do that I don't actually get the next job when my application starts up, even if I issue the describe request for the job $next. That information comes in on the jobs/$next/get/accepted topic. Do I need to subscribe to that topic too? I'm worried about getting duplicate jobs if I subscribe to both.
Here is some Python code using the AWSIoTMQTTThingJobsClient that does not work: it only notifies me if the next job changes; I don't get the original next job when starting up.
client.createJobSubscription(
    callback=handle_job,
    jobExecutionType=jobExecutionTopicType.JOB_NOTIFY_NEXT_TOPIC)
client.sendJobsDescribe('$next')
Ideally my application does the following:
On startup, get the next job in the queue and execute it
After executing a job, see if there is another job on the queue, if so, execute that next
If new jobs are created while the application is running, asynchronously get the job details and execute it (i.e., don't poll for new jobs)
I am able to make this happen by changing the code above to this:
client.createJobSubscription(
    callback=handle_job,
    jobExecutionType=jobExecutionTopicType.JOB_DESCRIBE_TOPIC,
    jobReplyType=jobExecutionTopicReplyType.JOB_ACCEPTED_REPLY_TYPE,
    jobId='$next')
client.createJobSubscription(
    callback=handle_job,
    jobExecutionType=jobExecutionTopicType.JOB_NOTIFY_NEXT_TOPIC)
client.sendJobsDescribe('$next')
Basically, add another subscription to jobs/$next/get/accepted so the first job can be fetched, and for the rest of the application's lifetime rely on the notify-next topic. It's just that most sample code and documentation indicate that I shouldn't need that extra subscription, so I want to make sure I'm not doing anything wrong.
My AWS Lambda function (in Python) is called when an object 123456 is created in S3's input_bucket; it does a transformation on the object and saves it to output_bucket.
I would like to notify my main application whether the request was successful or unsuccessful. For example, a POST to http://myapp.com/successful/123456 if the processing is successful and to http://myapp.com/unsuccessful/123456 if it's not.
One solution I thought of is to create a second Lambda function that is triggered by a put event in output_bucket and makes the successful POST request. This solves half of the problem, but I can't trigger the unsuccessful POST request.
Maybe AWS has a more elegant solution using a parameter in Lambda or a service that deals with these types of notifications. Any advice or point in the right direction will be greatly appreciated.
A few possible solutions that I see as elegant:
Using an SNS topic: from your transformation Lambda, publish to an SNS topic with a success/failure message, and have SNS call an HTTP/HTTPS endpoint with the message payload. The advantage here is that your transformation Lambda is loosely coupled from the endpoint and connected only through messaging.
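A sketch of that first option, assuming boto3 inside the transformation Lambda (the topic ARN and helper names are placeholders, not part of any real API):

```python
import json


def build_status_message(object_key: str, succeeded: bool) -> str:
    """Build the JSON payload the subscribed HTTP endpoint will receive via SNS."""
    return json.dumps({
        "objectKey": object_key,
        "status": "successful" if succeeded else "unsuccessful",
    })


def notify_main_app(topic_arn: str, object_key: str, succeeded: bool) -> None:
    """Publish the outcome to SNS; called from both the success and error paths."""
    import boto3  # imported lazily so the sketch loads without AWS credentials

    boto3.client("sns").publish(
        TopicArn=topic_arn,
        Message=build_status_message(object_key, succeeded))
```

Wrapping the transformation in a try/except and calling `notify_main_app` with `succeeded=False` in the except branch covers the unsuccessful half that the second-Lambda approach misses.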
Using Step Functions:
You could arrange to run a Lambda function every time a new object is uploaded to an S3 bucket. This function can then kick off a state machine execution by calling StartExecution. The advantage of using Step Functions is that you can coordinate the components of your application as a series of steps in a visual workflow.
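A sketch of kicking off such an execution from the S3-triggered Lambda (the state machine ARN and helper names are placeholders):

```python
import json


def build_execution_input(object_key: str) -> str:
    """Build the JSON input the state machine execution starts with."""
    return json.dumps({"objectKey": object_key})


def start_workflow(state_machine_arn: str, object_key: str) -> None:
    """Start one state machine execution for the newly created S3 object."""
    import boto3  # imported lazily so the sketch loads without AWS credentials

    boto3.client("stepfunctions").start_execution(
        stateMachineArn=state_machine_arn,
        input=build_execution_input(object_key))
```

Inside the state machine, the success and failure POSTs can then be modeled as separate states reached via the normal transition and a Catch, respectively.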
I don't think there is any more elegant AWS solution unless you re-architect: for example, your Lambda sends a message with the status to SQS or some other intermediary messaging service, and the intermediary invokes the POST to your application.
If you still want to go with your way of solving it, you might need to configure a dead-letter queue for error handling in failure cases (note that the use cases described there are not comprehensive, so make sure they cover your case), as described here.