Can Control-M execute an HTTP service endpoint to GET job status? - web-services

I am very new to Control-M and wanted to ask if Control-M supports this scenario:
We have an HTTP web service that runs a long-running job, e.g.
http://myserver/runjob?jobname=A
This will then start job A on the server and return a job id. I return the job id so I can get the status of the job from the server whenever I want to. The job has several statuses, e.g. Waiting, In progress, error.
I want the Control-M job status to be updated as soon as the job on the server updates. For that, I have created a web service URL:
http://localhost/getjobsatus?jobid=1
This URL request will get the job status for job id 1.
Can Control-M poll a web service URL for a job status, and can I call a web service to run a job and get its id back?
Apologies for asking this basic level question. Any help will be really appreciated.

Welcome to the Control-M community :-)
You can implement 2 Control-M WebServices jobs (available with BPI – Business Process Integration Suite), one to submit your job and get its ID, and one to track its status.
Alternatively, you can implement this in one Control-M OS-type job using the ctmsubmit command inside a script…
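To give a rough idea of the OS-type route, the script the job runs could look something like the sketch below (Python, reusing the two example URLs from the question and assuming the submit endpoint returns the job id as plain text; the status names are the ones you listed). A non-zero exit code is what lets Control-M end the job NOTOK.
# Rough sketch only: submit the long-running job via the web service, then poll
# its status until it leaves the Waiting / In progress states. URLs and status
# names are taken from the question; adjust to what your service really returns.
import sys
import time

import requests

SUBMIT_URL = "http://myserver/runjob"
STATUS_URL = "http://localhost/getjobsatus"  # as written in the question
POLL_SECONDS = 30

def main():
    # Start the job and capture the id the service hands back
    resp = requests.get(SUBMIT_URL, params={"jobname": "A"}, timeout=30)
    resp.raise_for_status()
    job_id = resp.text.strip()

    # Poll until the job reaches a terminal state
    while True:
        status = requests.get(STATUS_URL, params={"jobid": job_id}, timeout=30).text.strip()
        if status not in ("Waiting", "In progress"):
            break
        time.sleep(POLL_SECONDS)

    # Exit non-zero on error so Control-M marks the job as failed
    sys.exit(1 if status == "error" else 0)

if __name__ == "__main__":
    main()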
Feel free to join our Control-M online community

Related

Continue request django rest framework

I have a request that takes more than 3 minutes. I want the request to be accepted and immediately return a 200 response, and then deliver the result once the work finishes.
The workflow you've described is called asynchronous task execution.
The main idea is to remove time- or resource-consuming parts of the work from the code that handles HTTP requests and delegate them to some kind of worker. The worker might be a different thread or process, or even a separate service that runs on a different server.
This makes your application more responsive, as the user gets the HTTP response much more quickly. Also, with this approach you can display UI-friendly things such as progress bars and status marks for the task, create retry policies if the task fails, etc.
Example workflow:
user makes HTTP request initiating the task
the server creates the task, adds it to the queue and returns the HTTP response with task_id immediately
the front-end code starts AJAX polling to get the results of the task, passing the task_id
the server handles the polling HTTP requests and looks up the status information for this task_id. It returns the info (either the results or "still waiting") with the HTTP response
the front-end displays a spinner if the server returns "still waiting", or the results once they are ready
The most popular way to do this in Django is to use the Celery distributed task queue.
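A minimal sketch of that workflow with Celery and Django REST framework could look like the following; the task, view and endpoint names are made up for illustration.
# tasks.py -- the slow work runs in a Celery worker, outside the request cycle
from celery import shared_task

@shared_task
def long_running_job(payload):
    ...  # the work that takes minutes goes here
    return {"outcome": "done"}

# views.py -- return 200 with a task_id immediately, then let the client poll
from celery.result import AsyncResult
from rest_framework.decorators import api_view
from rest_framework.response import Response

from .tasks import long_running_job

@api_view(["POST"])
def start_job(request):
    task = long_running_job.delay(request.data)
    return Response({"task_id": task.id})

@api_view(["GET"])
def job_status(request, task_id):
    result = AsyncResult(task_id)
    body = {"state": result.state}  # PENDING / STARTED / SUCCESS / FAILURE
    if result.ready():
        body["result"] = result.result if result.successful() else str(result.result)
    return Response(body)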
Suppose a request comes in: you will have to verify it, then send the response and use some mechanism to complete the work in the background. You will have to be sure the request can actually be completed. You can use pipelining, where you put every task into a pipeline; Django-Celery is an option, but don't use it unless required. Find the simplest way to resolve the issue.

Is there a way to be notified of status changes in Google AI Platform training jobs without polling the REST API?

Right now I monitor my submitted jobs on Google AI Platform (formerly ML Engine) by polling the job REST API. I don't like this solution for a few reasons:
Awareness of status changes is often delayed, or missed altogether if the interval between status changes is smaller than the polling interval
Lots of unnecessary network traffic
Lots of unnecessary function invocations
I would like to be notified as soon as my training jobs complete. It'd be great if there is some way to assign hooks or callbacks to run when the job status changes.
I've also considered adding calls to Cloud Functions directly within the training task Python package that runs on AI Platform. However, I don't think those function calls will occur in cases where the training job is shut down unexpectedly, such as when a job is cancelled or forced to end by GCP.
Is there a better way to go about this?
You can use a Stackdriver sink to read the logs and send them to Pub/Sub. From Pub/Sub, you can connect to a bunch of other providers:
1. Set up a Pub/Sub sink
Make sure you have access to the logs and publish rights to the topic you desire before you get started. Follow the instructions for setting up a Stackdriver -> Pub/Sub sink. You’ll want to use this query to limit the events only to Training jobs:
resource.type = "ml_job"
resource.labels.task_name = "service"
Note that Stackdriver lets you narrow the query further. For example, you can limit it to a particular job by adding a condition like resource.labels.job_id = "..." or to a certain event with a filter like jsonPayload.message : "..."
2. Respond to the Pub/Sub message
In order to tell what changed, the recipient of the Pub/Sub message can either query the job status from the ml.googleapis.com API or read the text of the message
Reading state from ml.googleapis.com
When you receive the message, make a call to https://ml.googleapis.com/v1/<project_id>/jobs/<job_id> to get the job information, replacing <project_id> and <job_id> in the URL with the values of resource.labels.project_id and resource.labels.job_id from the Pub/Sub message, respectively.
The returned Job object contains a field state that, naturally, tells the status of the job.
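A small sketch of that lookup with the Google API Python client (assuming application default credentials and the google-api-python-client package):
# Sketch: fetch the job state via the AI Platform jobs.get method
from googleapiclient import discovery

def get_job_state(project_id, job_id):
    ml = discovery.build("ml", "v1", cache_discovery=False)
    name = "projects/{}/jobs/{}".format(project_id, job_id)
    job = ml.projects().jobs().get(name=name).execute()
    return job["state"]  # e.g. SUCCEEDED, FAILED, CANCELLED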
Reading state from the message text
The Pub/Sub message will contain a string telling what happened to the job. You probably want to react when the job ends. Look for these strings in jsonPayload.message:
"Job completed successfully."
"Job cancelled."
"Job failed."
I implemented a Terraform module that does what @htappen described. I hope it helps you. But my real hope is that Google adds the same feature to AI Platform itself.
https://github.com/sfujiwara/terraform-google-ai-platform-notification
I think you can programmatically publish a PubSub message at the end of your training job code. Something like this:
import json

from google.cloud import pubsub_v1

# publish a job-complete message once training has finished
# (args here is the training script's own argparse namespace)
client = pubsub_v1.PublisherClient()
topic = client.topic_path(args.gcp_project_id, 'topic-name')
data = {
    'ACTION': 'JOB_COMPLETE',
    'SAVED_MODEL_DIR': args.job_dir
}
data_bytes = json.dumps(data).encode('utf-8')
client.publish(topic, data_bytes)
Then you can set up a Cloud Function to be triggered by the same Pub/Sub topic.
You can work around the lack of a callback from the service on a custom TF training job by adding a LambdaCallback to the fit() call. In on_epoch_end you could send yourself a notification on job progress, and in on_train_end a notification when training finishes.
https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/LambdaCallback
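A rough sketch, reusing the Pub/Sub publishing idea from the answer above (the project, topic name and message format here are just illustrative):
# Sketch: publish progress / completion messages from inside the training code.
# Note this still won't fire if GCP kills the job before training finishes.
import json

import tensorflow as tf
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic = publisher.topic_path("my-project", "training-status")  # illustrative names

def _publish(payload):
    publisher.publish(topic, json.dumps(payload).encode("utf-8"))

status_callback = tf.keras.callbacks.LambdaCallback(
    on_epoch_end=lambda epoch, logs: _publish(
        {"event": "epoch_end", "epoch": int(epoch),
         "logs": {k: float(v) for k, v in (logs or {}).items()}}
    ),
    on_train_end=lambda logs: _publish({"event": "train_end"}),
)

# model.fit(x, y, epochs=10, callbacks=[status_callback])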

Running Task In The Background

What is the technology that allows a web application to process a task in the background without making the user wait until the task finishes?
Example, as a user,
1. I want to submit a form which requires heavy processing. (Assume it requires several checks or actions, uploading documents, etc.)
2. After submitting the form, the task runs in the background, and I can go to another page and do something else.
2.1 At the same time, I might submit another form to the server.
The requests can be processed at the same time or queued in a queue system.
3. I will receive a notification from the system whenever the server returns a response. (Regardless of whether it succeeded or failed.)
This feature is similar to Google Cloud Platform.
Try Kue or any other similar library. The term to google is "[language] task queue".
You can of course roll your own, though it will be much easier if you use an existing server such as Redis or RabbitMQ, so that the queuing part is handled for you and you can concentrate on your business logic.
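As one concrete Python example of the existing-server route (rq is not mentioned above; it is just one Redis-backed task queue among many):
# Sketch using the rq library with Redis as the queue backend.
# A worker process runs separately:  rq worker form-processing
from redis import Redis
from rq import Queue
from rq.job import Job

from myapp.processing import handle_form_submission  # hypothetical task function

queue = Queue("form-processing", connection=Redis())

def submit_form(form_data):
    # Enqueue and return immediately; the user can browse other pages meanwhile.
    job = queue.enqueue(handle_form_submission, form_data)
    return job.id

def check_status(job_id):
    job = Job.fetch(job_id, connection=Redis())
    return job.get_status()  # queued / started / finished / failed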

Monitoring the status of a google pub/sub submitted job

I am new to the Google Compute/Google App Engine platform. I am currently migrating a Python Flask application that uses Celery for async tasks to Google Compute/Google App Engine. However, the docs say I should use Google Pub/Sub instead of Celery. In my application, whenever I run an async task I have a page to monitor the status of the job, using the same principle as http://blog.miguelgrinberg.com/post/using-celery-with-flask. I have checked the documentation for Google Pub/Sub, but I am at a loss as to how to implement the same thing with it. Can anybody help or point me in the right direction?
You might be able to use psq for this, which is designed to look like celery. From a general Cloud Pub/Sub perspective, you would follow these steps:
Create a topic for your status update messages.
In the async task whose status you want to monitor, periodically publish a message with the status. This message will be in a format of your choosing that indicates percentage completion or a specific message to display.
Create a subscription for your monitoring page that will receive messages on the topic.
In your monitoring page (or a background process that will supply the data to your monitoring page), pull messages for the subscription.
Process the messages and update the state of your jobs for your monitoring page.
Ack the messages you pulled and processed.
A couple of things to keep in mind in this workflow:
Cloud Pub/Sub guarantees at-least-once delivery. That means you could potentially receive the same message more than once.
Cloud Pub/Sub does not provide any guarantees on ordering. Therefore, if you are periodically publishing status updates, your subscriber could receive them out of order. For your case, you'll probably want to include some sort of timestamp or strictly increasing identifier in your message to sequence your status updates per task. If you keep track of the most recent status update received, then you can disregard older messages and ack them immediately.
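A rough sketch of the publish and pull halves with the google-cloud-pubsub client (the project, topic, subscription and field names are illustrative):
# Sketch: the async task publishes progress, the monitoring side pulls,
# keeps only the newest update per job, and acks what it has processed.
import json
import time

from google.cloud import pubsub_v1

PROJECT = "my-project"  # illustrative

# Inside the async task: publish periodic status updates
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT, "job-status")

def report_progress(job_id, percent):
    payload = {"job_id": job_id, "percent": percent, "ts": time.time()}  # ts for sequencing
    publisher.publish(topic_path, json.dumps(payload).encode("utf-8"))

# In the monitoring page (or its background poller): pull, sequence, ack
subscriber = pubsub_v1.SubscriberClient()
sub_path = subscriber.subscription_path(PROJECT, "job-status-monitor")

def pull_updates(latest):
    response = subscriber.pull(request={"subscription": sub_path, "max_messages": 50})
    ack_ids = []
    for msg in response.received_messages:
        update = json.loads(msg.message.data.decode("utf-8"))
        job_id = update["job_id"]
        # Deliveries can repeat or arrive out of order; keep only the newest per job
        if job_id not in latest or update["ts"] > latest[job_id]["ts"]:
            latest[job_id] = update
        ack_ids.append(msg.ack_id)
    if ack_ids:
        subscriber.acknowledge(request={"subscription": sub_path, "ack_ids": ack_ids})
    return latest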

aws swf - get workflow execution id from within the workflow

I am using Amazon SWF service to automate some recurring tasks.
I am trying to use signals to execute some commands on remote machines. After the commands are finished executing, I'd like to send a signal back to the workflow to indicate success or failure.
The question is how can I find the workflow execution id programmatically? This is required for the remote machines to send a signal.
Thanks
Per http://docs.aws.amazon.com/AWSRubySDK/latest/AWS/SimpleWorkflow/WorkflowExecution.html, shouldn't
your_workflow_execution_variable.run_id
get you exactly what you're looking for?
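If you happen to be on Python/boto3 rather than the Ruby SDK, the same identifiers are available on the task your worker polls for, and the remote machine can use them to signal back; a rough sketch (the domain, task list and signal name are made up):
# Sketch with boto3: the activity worker reads workflowId/runId from the polled
# task and passes them to the remote machine, which later signals the workflow.
import boto3

swf = boto3.client("swf")

# Worker side: the polled activity task carries the execution identifiers
task = swf.poll_for_activity_task(
    domain="my-domain",                    # illustrative
    taskList={"name": "remote-commands"},  # illustrative
)
workflow_id = task["workflowExecution"]["workflowId"]
run_id = task["workflowExecution"]["runId"]

# Remote machine side (given those two ids): report success or failure
swf.signal_workflow_execution(
    domain="my-domain",
    workflowId=workflow_id,
    runId=run_id,
    signalName="command-finished",         # illustrative
    input='{"status": "success"}',
)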