How to schedule a Template to run multiple times a day in Ansible Tower? - scheduling

There is no option to add multiple runs per day; it seems limited to just once a day.

You can schedule the template to run multiple times a day by setting the Repeat Frequency to Hours instead of Days.
Then, under the Frequency Details section, set the interval between runs within the day.
https://docs.ansible.com/ansible-tower/3.5.0/html/userguide/scheduling.html
Thank you.
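To illustrate what an hourly repeat frequency produces, here is a plain-Python sketch of the resulting run times; the start time and the 8-hour interval are assumptions for the example, not values from a real Tower schedule:

```python
from datetime import datetime, timedelta

# Hypothetical schedule: "Repeat Frequency: Hours", every 8 hours,
# first run at 06:00 — this yields three runs per day.
start = datetime(2024, 1, 1, 6, 0)
interval = timedelta(hours=8)
runs = [start + i * interval for i in range(6)]
for run in runs:
    print(run.isoformat())
```

With these assumed values the template runs at 06:00, 14:00, and 22:00 each day, repeating the next day.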

Related

GCP Big Query scheduled query UTC to local time zone bug

I have scheduled my queries to run at 11:50pm local time, however Big Query runs them an hour early.
Is this a known issue?
I have scheduled them to run at 12:50am the next day but I am worried this will cause more issues.
It turns out it was a daylight saving time issue, and I needed to change the date to the current day before changing the time.
Not very intuitive.
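A quick way to see this kind of daylight saving shift is Python's zoneinfo; the America/New_York zone and the dates here are illustrative, not taken from the question:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

tz = ZoneInfo("America/New_York")  # assumed local zone for the example
# US DST ended on 2024-11-03: the same wall-clock time maps to a
# different UTC offset before and after the change.
before = datetime(2024, 11, 2, 23, 50, tzinfo=tz)
after = datetime(2024, 11, 4, 23, 50, tzinfo=tz)
print(before.utcoffset())  # UTC-4 (EDT)
print(after.utcoffset())   # UTC-5 (EST)
```

If the scheduler stores the local time with the pre-change offset, the job fires an hour off once the offset changes, which matches the "an hour early" symptom described above.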

How to set a GCP Cloud Monitoring (Stackdriver) alert policy period greater than 24 hours?

Currently, 24 hours is the maximum period a Cloud Monitoring (formerly Stackdriver) alert policy can be set to.
However, if you have a daily activity, like a database backup, it might take a little more or less time each day (e.g. 1 hour 10 min one day, 1 hour 12 min the next). In that case, you might not see your completion indicator until 24 hours and 2 minutes after the prior indicator. This will cause Cloud Monitoring to issue an alert (because you are 2 minutes over the alerting window limit).
Is there a way to better handle the variance in these alerts, like a 25 hour look back period?
Currently, there is no way to increase the period time over 24 hours.
However, there is a Feature Request already opened for that.
You can follow it in this public link [1].
Cheers,
[1] https://issuetracker.google.com/175703606
I found a workaround to this problem.
1) Create a metric for when your job starts (e.g. started_metric)
2) Create a metric for when your job finishes (e.g. completed_metric)
3) Create a two-part alert policy:
- Require that started_metric occurs once per 24 hours
- Require that completed_metric occurs once per 24 hours
- Trigger only when both conditions are violated (i.e. both metrics are absent for > 24 hours)
This works around the 24 hour job jitter issue, as the job might take > 24 hours to complete, but it should always start (e.g. cron job) within 24 hours.
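The two-part condition can be sketched as plain logic; this is only a model of the policy's behavior, not Cloud Monitoring API code, and the timestamps are invented for the example:

```python
from datetime import datetime, timedelta

WINDOW = timedelta(hours=24)

def should_alert(now, last_started, last_completed):
    """Fire only when BOTH started_metric and completed_metric are more
    than 24h old. A long-running job leaves completed_metric stale for a
    few extra minutes, but started_metric stays fresh, so no false alert."""
    return (now - last_started > WINDOW) and (now - last_completed > WINDOW)

now = datetime(2024, 1, 10, 12, 0)
# Yesterday's run finished 24h02m ago, but today's run started 1h ago:
print(should_alert(now, now - timedelta(hours=1), now - timedelta(hours=24, minutes=2)))
# Neither metric seen for 25 hours — the job really did not run:
print(should_alert(now, now - timedelta(hours=25), now - timedelta(hours=25)))
```

The first case returns False (jitter tolerated), the second True (genuine failure), which is exactly the behavior the two-part policy gives you.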

Query Scheduling

I am trying to schedule a query with the Google Cloud Platform query scheduler, but it rarely executes. What I did, in steps:
1) Created a dataset with location US
2) Created a table in the same location
3) Wrote a query
4) Scheduled the query. To check it, I set the run time to 3 minutes in the future (not a cron type), just a scheduled time.
In the end, it runs as scheduled only about 1 time in 10. The rest of the time it does not even start, so I cannot even log an error. Please advise.
More clarification could be needed, but I see two possible situations:
1) You may have expected it to run at the exact "Scheduled start time", but it doesn't work like that: it runs according to the schedule you set in the "Repeats" dropdown. You can verify the exact scheduled time by going to "Scheduled queries" and checking "Next Scheduled".
2) If you set a custom schedule, did you consider that the time is in UTC? You can also check the "Next Scheduled" time to see what it corresponds to in your local time.
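For point 2, a "Next Scheduled" UTC timestamp can be converted to local time with Python's zoneinfo; the timestamp and the Europe/Berlin zone below are just examples:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Hypothetical "Next Scheduled" value shown by the UI, in UTC:
next_run_utc = datetime(2024, 6, 1, 14, 30, tzinfo=timezone.utc)
local = next_run_utc.astimezone(ZoneInfo("Europe/Berlin"))  # assumed local zone
print(local.isoformat())  # 14:30 UTC is 16:30 CEST in June
```

Comparing this converted time against when you expected the query to run usually explains an apparently "missed" schedule.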

How control parallel job runs count in AWS batch?

AWS Batch supports up to 10,000 jobs in one array. But what if each job writes to DynamoDB? The write rate needs to be controlled in that situation. How can I do that? Is there a setting that keeps only N jobs in the running state and does not launch the others?
The easiest way would be to send the DynamoDB jobs to an SQS queue and have workers/Lambdas poll the queue at a rate you specify. That is the classic approach to rate limiting in the AWS world. I would calculate what that rate should be in capacity units and configure your table's capacity to match the queue polling rate.
Keep in mind that other processes may also be accessing your DynamoDB table and consuming its capacity, and note the retention time of the queue you set up. You may benefit immensely, speed- and cost-wise, from some caching for read jobs; have a look at DAX for that.
Edit: to address your comments. As you say, if you have 20 units for your table and each job uses 2 units in 1 second, you can only execute 10 jobs per second. Say you submit 10,000 jobs; at 10 jobs per second, that is 1,000 seconds to process them all. If, however, you submit more than 3,456,000 jobs, it will take more than 4 days to process them at 10 jobs per second. The default retention time for SQS is 4 days, so at that rate you would start losing messages/jobs.
And as I mentioned, other processes could access your table and push its usage past 20 units, so you will need to be very careful when approaching your table's limit.
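A back-of-envelope check of those numbers; the capacity figures are the ones assumed in this answer, not real table settings:

```python
# 20 write units provisioned; each job consumes 2 units in its 1-second write.
TABLE_CAPACITY_UNITS = 20
UNITS_PER_JOB = 2

jobs_per_second = TABLE_CAPACITY_UNITS // UNITS_PER_JOB
seconds_for_10k = 10_000 // jobs_per_second
sqs_retention_seconds = 4 * 24 * 3600          # default SQS retention: 4 days
max_jobs_before_loss = jobs_per_second * sqs_retention_seconds

print(jobs_per_second)        # 10 jobs/s sustainable
print(seconds_for_10k)        # 1000 s to drain 10,000 jobs
print(max_jobs_before_loss)   # 3,456,000 — beyond this, messages expire
```

So the queue depth, not the array size, is the real limit: anything over ~3.5M queued jobs at this rate starts expiring before a worker picks it up.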

Control-M run job N days after ordering it

I would like to order a job today in Control-M but have it run 8 days later. How would one go about doing this? Tomorrow the job should be ordered again for tomorrow's date and run 8 days from tomorrow, and so on.
If you want the job to run after 8 days and it is not scheduled on that date, you can order it out on the 7th day, depending on the time. Or, if the requirement is that it should permanently run on certain calendar days, you can add a schedule and move the job to production so that it gets automatically ordered out and run.