Editing an existing task in the Windows Task Scheduler in C++

I have started working with the Windows 10 Task Scheduler.
I am developing a GUI application in which I need to control the interval, in days, at which a task is triggered.
For example, if I enter 5 days in my GUI, the task should trigger every 5 days; the user can change this value at any time.
I already have a task that exists in the Task Scheduler, and I need to control the number of days before it triggers from the user interface.
I have seen the Task Scheduler examples on MSDN, but all of them are about creating a new task or retrieving the state of an existing task.
I don't want to create a new task; I want to edit the same existing task every time.
I couldn't find anything about editing a task that is already present in the Task Scheduler.
Could anyone please help me edit the time-trigger days in C++ using Task Scheduler 2.0?

Seems like what you can do is get access to all the ITrigger instances defined for a given task. These provide a fairly straightforward interface through which you can set trigger times, limits, repetitions, etc. for the task.
You access these trigger objects through the ITaskDefinition's get_Triggers() method, which returns an ITriggerCollection that you can iterate to find your trigger. You get the ITaskDefinition object by calling IRegisteredTask's get_Definition() method. You get the IRegisteredTask object by calling ITaskFolder's GetTask() method. And you get an ITaskFolder instance, to begin with, by calling ITaskService's GetFolder() method. Phew. Thank MS for this idea of an API.
If you need basic help finding headers and running the code, there's a code example at the bottom of this documentation page that does something different from what you are asking (it creates a new task rather than accessing an existing one), but it provides all the necessary boilerplate to get you going.
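To make the call chain concrete, here is a minimal sketch of the same object chain using the Task Scheduler scripting objects through pywin32 rather than the C++ interfaces named above; the sequence (service → folder → registered task → definition → triggers → re-register) is the same in C++. The task name, folder path, and 5-day interval are made-up values.

```python
# Minimal sketch: walk the Task Scheduler 2.0 object chain and change a daily
# trigger's interval. Shown with the scripting objects via pywin32; "MyTask",
# the root folder and the 5-day interval are placeholder values.
import win32com.client

TASK_TRIGGER_DAILY = 2            # TASK_TRIGGER_TYPE2 value for daily triggers
TASK_CREATE_OR_UPDATE = 6         # TASK_CREATION flag: update the existing task
TASK_LOGON_INTERACTIVE_TOKEN = 3

scheduler = win32com.client.Dispatch("Schedule.Service")
scheduler.Connect()

folder = scheduler.GetFolder("\\")          # ITaskService::GetFolder
task = folder.GetTask("MyTask")             # ITaskFolder::GetTask
definition = task.Definition                # IRegisteredTask::get_Definition
triggers = definition.Triggers              # ITaskDefinition::get_Triggers

# The ITriggerCollection is 1-based; find the daily trigger and set its interval.
for i in range(1, triggers.Count + 1):
    trigger = triggers.Item(i)
    if trigger.Type == TASK_TRIGGER_DAILY:
        trigger.DaysInterval = 5            # IDailyTrigger::put_DaysInterval

# Re-register the modified definition over the existing task to persist the change.
folder.RegisterTaskDefinition(task.Name, definition, TASK_CREATE_OR_UPDATE,
                              "", "", TASK_LOGON_INTERACTIVE_TOKEN, "")
```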

Related

Suppress message - The Mapping task failed to run. Another instance of the task is currently running

I have set up multiple jobs in Informatica Cloud to sync data from Oracle using Informatica objects. The jobs are scheduled to run every 3 minutes as per the business requirements. Sometimes a job runs long due to a secure agent resource crunch, and my team used to receive multiple emails like the one below:
The Mapping task failed to run. Another instance of the task is currently running.
Is there any way to suppress these failure emails in the mapping?
This won't get set at the mapping level but at the session or Integration Service level; see the following: https://network.informatica.com/thread/7312
This type of error occurs when the workflow/session is already running and is re-run. Use a script to check whether it is already running and, if so, wait. If you want to run multiple instances of the same workflow:
In Workflow Properties, enable 'Configure Concurrent Execution' by checking the checkbox.
Once it is enabled, you have two options:
Allow concurrent run with the same instance name
Allow concurrent run only with unique instance names
Notifications configured at the task level override those at the org level, so you could do this by configuring notifications at the task level and only sending out warnings to the broader list. That said, some people should still receive the error-level warning, because if it recurs multiple times within a short period there may be another issue.
Another thought: batch processes that run every three minutes but take longer than three minutes are usually an opportunity to improve the design. Often a business requirement for short batch intervals stems from a "near real time" desire. If you also have the Cloud Application Integration service, you may want to set up an event to trigger the batch run. If there is still overlap based on events, you can use the Cloud Data Integration API to create a dynamic version of the task each time. For really simple integrations you could perform the integration in CAI, which does allow multiple instances to run at the same time.
HTH

Schedule tasks on the same worker using AWS SWF

I am writing a workflow in AWS SWF and at a point it has the following steps:
DownloadFromS3 -> doSomeProcessing -> UploadResults
My idea is to write each step as a different task and let the decider schedule each one. The problem is, how can I guarantee that the worker that receives doSomeProcessing is the same one that downloaded the file? I am running a pool of around 20 workers.
PS1: I know I can create a different task list for every worker and route the tasks to them individually, but this seems like a hack to me and not a proper solution.
PS2: There's an example in the SWF console that has a download and upload task, but it's written in Java (which I cannot understand) and it seems that it's written with a single worker in mind.
PS3: I'm currently using a server written in Go that executes all 3 steps and manages the state in between. However, it would be nice to offload the state management to a decider in SWF, because doSomeProcessing is not a trivial task (engineering CFD simulations) and a lot of things can go wrong.
Thanks
You're right in your assumptions; you basically have two ways to handle this:
- either put all 3 steps in the same SWF activity task; this is what I do at work for the case you describe, because we consider that downloading from/uploading to S3 are trivial things that "just work";
- or split the steps into 3 different activity tasks; then the only way you can guarantee the same node will be used is by changing the task list for the 2nd and 3rd tasks. We also do this for some very long tasks, and it works quite well.
The second option is actually not a hack, and you don't create any extra objects; it's just a routing mechanism. The only downside, in my opinion, is that you have no native way to check whether the backlog on a given task list is getting too large when the lists are dynamic like that, since there is no native way to list those task lists. This can be handled with a wrapping system, though, or you can rely on timeouts to alert you when a node cannot keep up.
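As a rough sketch of that routing mechanism (shown with boto3 rather than the Go SDK; the domain, task-list names, and activity details are assumptions): the worker that handles DownloadFromS3 reports its own task-list name in the result, and the decider schedules doSomeProcessing onto that list.

```python
# Sketch of per-worker task-list routing with boto3/SWF; the domain, task-list
# names and activity details are placeholders.
import json
import socket
import boto3

swf = boto3.client("swf")
my_task_list = "worker-" + socket.gethostname()   # a task list unique to this node

# 1) Poll the shared list for DownloadFromS3 work.
task = swf.poll_for_activity_task(
    domain="cfd-domain",
    taskList={"name": "download-shared"},
    identity=my_task_list,
)
if task.get("taskToken"):
    # ... download the file from S3 to local disk here ...
    local_path = "/tmp/input.dat"                 # placeholder
    # 2) Report success and tell the decider where to route the next step.
    swf.respond_activity_task_completed(
        taskToken=task["taskToken"],
        result=json.dumps({"path": local_path, "taskList": my_task_list}),
    )
# 3) The decider reads that result and schedules doSomeProcessing with
#    taskList={"name": my_task_list} in its ScheduleActivityTask decision,
#    so only this node's poller on that list will pick it up.
```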

Suitability of app with long running tasks for AWS Lambda or AWS Step Functions

I have an application on an AWS EC2 instance that runs once daily. The application fetches some files from a web service, parses the files line by line, updates a database, updates S3 files based on changes in the database, sends notification emails to customers as well as a few other tasks.
This is a series of logical tasks that must take place in sequence, although some of the tasks can be thought of as sub-tasks that can be executed in parallel. All tasks are a combination of Perl scripts and Java programs, with a single Perl script acting as the manager that executes each in turn. Some tasks can take as long as 45 minutes to complete, and the whole process can take up to 3 hours in total.
I'd like to make this whole process serverless. My initial idea was to use AWS Lambda, whereby each task would execute as a Lambda function, until I discovered Lambda functions impose a 5 minute execution timeout. It seems like the AWS Step Functions service is actually a better fit for my use case, but my understanding is that this service is backed by Lambda, so the tasks will still have the 5 min execution limitation.
(I'm also aware that I would have to re-write my Perl scripts to a language supported by Lambda).
I assume I can work around the execution time limit by refactoring my code into smaller functions that are guaranteed to complete in under 5 minutes. In my particular situation, though, this seems inefficient.
Currently the database update task processes lines from a file one at a time. For this to work with Lambda, a Lambda function would need to handle only a single line from the file (or a very small number of lines) in order to guarantee not spilling over 5 minutes execution time. This would involve opening and closing a connection with the database on every invocation of the Lambda function. Also, each line processed should result in an entry written to a file, to be stored in S3. Right now, I just keep a file handle in memory and write the file to S3 when all lines are processed, but with Lambda I would need to keep reading the file, updating it and writing it back to S3.
What I'm asking is:
Is my use case a bad fit for AWS Lambda and/or AWS Step Functions?
Have I misunderstood how these services work?
Is there another AWS service that would be a better fit for my use case?
After further research, I think AWS Batch might be a good idea.
What you want are called Activity Workers. Tl;dr: You register "activities" and each gets an ARN. Then you can put that ARN in the resource field of Task states and then you run some code (the "worker") somewhere (in a Lambda, on EC2, in your basement, wherever) that polls for tasks identified by that ARN, then calls back to report success or failure. Activity Workers can run for up to a year.
Step-by-step details at the AWS docs
In response to RTF's comment, here's a deeper dive: Suppose you have code to color turtles in color_turtles.pl. So what you do is call the CreateActivity API - see http://docs.aws.amazon.com/step-functions/latest/apireference/API_CreateActivity.html - giving the name "ColorTurtles", and it'll give you back an ARN, a string beginning arn:aws... Then in your state machine you make a Task state with that ARN as the value of the resource field. Then you add code to color_turtles.pl to poll the service with http://docs.aws.amazon.com/step-functions/latest/apireference/API_GetActivityTask.html - whenever an execution of your state machine reaches that task, it'll look for activity workers polling. It'll give your polling worker the input for the task; you process the input, generate some output, and call SendTaskSuccess or SendTaskFailure. All of these are just REST HTTP calls, so you can run them anywhere, and I mean anywhere: in a Lambda, on an EC2 instance, or on some computer anywhere on the Internet.
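Here is a minimal sketch of such a polling worker using boto3; the activity ARN, worker name and the actual processing step are placeholders.

```python
# Sketch of a Step Functions activity worker using boto3; the activity ARN,
# worker name and the actual processing are placeholders.
import json
import boto3

sfn = boto3.client("stepfunctions")
ACTIVITY_ARN = "arn:aws:states:us-east-1:123456789012:activity:ColorTurtles"

while True:
    # Long-polls for work; the response has an empty taskToken when there is none.
    task = sfn.get_activity_task(activityArn=ACTIVITY_ARN, workerName="worker-1")
    if not task.get("taskToken"):
        continue
    try:
        payload = json.loads(task["input"])
        # ... color the turtles here (your existing color_turtles.pl logic) ...
        sfn.send_task_success(taskToken=task["taskToken"],
                              output=json.dumps({"done": True}))
    except Exception as exc:
        sfn.send_task_failure(taskToken=task["taskToken"],
                              error="ColorTurtlesFailed", cause=str(exc))
```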
So to answer your questions:
1) Yeah, if you've got something that'll run for around 45 minutes, then whilst you could engineer it with Lambda/Step Functions, you're probably better off getting an EC2 micro instance.
2) Nope, you've pretty much got it.
3) As above, you want to go with EC2 for this. There's a good article on using Data Pipeline to start/stop an EC2 instance here; that way, by starting the instance only when you need it, the cost (if any) is negligible.
I have jobs that run in this fashion; normally you can get away with a t2.micro instance, which is free-tier eligible.
You can also run your Perl scripts on an EC2 instance, so there's no need to rewrite them!
I will start by noting that it seems you are looking for workflow solutions on AWS. SWF and Step Functions are the two most popular ones. Step Functions is the more recent offering and is encouraged by AWS over SWF.
SWF has native capability to handle long-running tasks; the downside is that you have to provide your own execution environment for deciders (you can't use Lambda).
With Step Functions, you can do this in two different ways. One approach is suggested by Tim in his answer. The alternative is to use a job poller in Step Functions: job pollers can call (poll) your resource to find out whether the task is done and, if not, put the execution into a wait state for a specified time. As mentioned above, the maximum execution time currently allowed for any workflow is 1 year; if you have tasks that may take longer than 1 year, you can't use Step Functions in its current form.

Azure Scheduler Implementation

I have written a WebJob that performs multiple tasks running on different schedules, such as once a day, once every hour, and so on, and I achieved this by using a Timer delegate. Now I am thinking of changing that approach and creating a Scheduler job for each scenario. I was able to find some information about schedules by googling, but I was never able to join the pieces into a complete flow.
I learned that we can create a job collection, and each collection can have 'n' jobs based on the pricing tier we are using. After creating a job, how can we bind the program logic that the job must execute to the corresponding job?
Also, how can I link jobs to a job collection?
Thanks
A typical workflow is that you write a message to an Azure message queue, and then have an Azure Cloud Service that reads from that queue and does the processing.
To tie specific jobs to specific program logic, you can either embed information about the job type in the message and have something that generically picks the messages up and turns them into specific operations/classes, or you can have behavior-specific queues, where each job writes to its own queue and a different Cloud Service reads from each queue.
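As a rough illustration of the first variant (embedding the job type in the message), using the azure-storage-queue package; the connection string, queue name, and job types are assumptions:

```python
# Sketch: enqueue a typed job message and dispatch on it in the consumer.
# The connection string, queue name and job types are placeholders.
import json
from azure.storage.queue import QueueClient

CONNECTION_STRING = "<storage-account-connection-string>"
queue = QueueClient.from_connection_string(CONNECTION_STRING, "scheduled-jobs")

# Producer side (e.g. the scheduled job): say which piece of logic should run.
queue.send_message(json.dumps({"type": "hourly-cleanup", "payload": {}}))

# Consumer side (the Cloud Service / WebJob): route on the embedded type.
for msg in queue.receive_messages():
    job = json.loads(msg.content)
    if job["type"] == "hourly-cleanup":
        pass  # ... run the hourly logic ...
    elif job["type"] == "daily-report":
        pass  # ... run the daily logic ...
    queue.delete_message(msg)
```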
I think this will solve my problem, either using API calls or queue processing.
Solution
If I understand your question, you have a WebJob that has multiple methods, each of which needs to be called on a different schedule. Instead of going through the hassle of setting up a Scheduler and having yet another resource that you have to manage, mark each method you need called with a TimerTriggerAttribute.

How to best launch an asynchronous job request in Django view?

One of my view functions is a very long processing job and clearly needs to be handled differently.
Instead of making the user wait a long time, it would be best if I could launch the processing job, which would email the results, and without waiting for completion notify the user that their request is being processed and let them browse on.
I know I can use os.fork, but I was wondering if there is a 'right way' to do this in Django. Perhaps I can return the HTTP response and then carry on with the job somehow?
There are a couple of solutions to this problem, and the best one depends a bit on how heavy your workload will be.
If you have a light workload, you can use the approach taken by django-mailer, which is to define a "jobs" model, save new jobs into the database, and then have cron run a stand-alone script every so often to process the jobs stored in the database (deleting them when done). You can use something like django-chronograph to make managing the job scheduling easier.
If you need help understanding how to write a script to process the jobs, see James Bennett's article Standalone Django Scripts.
If you have a very high workload, meaning you'll need more than a single server to process the jobs, then you want to use a real distributed task queue. There is a lot of competition here, so I can't really detail all the options, but a good one to use with Django apps is Celery.
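For the heavier case, a minimal Celery sketch (the broker URL, task name, and view are assumptions): the view enqueues the work and returns immediately, and a separate Celery worker process runs the task and emails the results.

```python
# tasks.py -- minimal Celery setup; the broker URL and task body are placeholders.
from celery import Celery

app = Celery("myproject", broker="redis://localhost:6379/0")

@app.task
def process_and_email(dataset_id, recipient):
    # ... run the long processing here, then email the results to `recipient` ...
    pass

# views.py -- enqueue the job and respond right away.
from django.http import HttpResponse
from tasks import process_and_email

def start_processing(request):
    process_and_email.delay(request.GET.get("dataset"), request.user.email)
    return HttpResponse("Your request is being processed; the results will be emailed to you.")
```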
Why not simply start a thread to do the processing and then go on to send the response?
Before you select a solution, you need to determine how the process will be run, i.e. is it the same process for every single user, with the same data, so that it can be scheduled regularly? Or does each user request something different, with slightly different results?
As an example, if the data will be the same for every single user and can be run on a schedule you could use cron.
See: http://www.b-list.org/weblog/2007/sep/22/standalone-django-scripts/
or
http://docs.djangoproject.com/en/dev/howto/custom-management-commands/
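If the scheduled-script route fits your case, the custom management command from the second link is just a small class that cron can invoke via manage.py; a minimal sketch (the app name and job logic are placeholders):

```python
# myapp/management/commands/process_jobs.py -- run with "python manage.py process_jobs",
# e.g. from a cron entry; the job-processing logic is a placeholder.
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = "Process pending jobs saved by the views and email the results."

    def handle(self, *args, **options):
        # ... load pending jobs from the database, process them, delete when done ...
        self.stdout.write("Done.")
```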
However, if the requests will be ad hoc and you need something scalable that can handle high load and is asynchronous, what you are actually looking for is a message queuing system. Your view will add a request to the queue, which will then get acted upon.
There are a few options to implement this in Django:
Django Queue Service is pure Django & Python and simple, though the last commit was in April and the project seems to have been abandoned.
http://code.google.com/p/django-queue-service/
The second option, if you need something that scales, is distributed, and makes use of open-source message queuing servers: Celery is what you need.
http://ask.github.com/celery/introduction.html
http://github.com/ask/celery/tree