I found the thread Automating Linux EBS snapshots backup and clean-up, in which Sergey Romanov gave step-by-step instructions. I tested it and it works great. (Thanks, Sergey.)
The cron entry I used was: * 23 * * * /usr/local/ec2/backup.php
However, I had a problem with adding the cron job (the last step). I got the error crond[478]: (CRON) bad command (/etc/crontab). To fix it, I added a username (root), after which the error no longer appeared.
The updated cron entry is: * 23 * * * root /usr/local/ec2/backup.php
However, the job is still not running. Is there anything I did wrong?
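A sketch of why adding the username fixed the first error: entries in the system-wide /etc/crontab take seven fields (five time fields, then a user, then the command), whereas per-user crontabs take six. A quick sanity check of the field count:

```python
# The entry from the question; in /etc/crontab the sixth field is the user.
line = "* 23 * * * root /usr/local/ec2/backup.php"
fields = line.split()
print(len(fields))  # 7 -> five time fields + user + command
```

Two things worth checking next, since the job still isn't firing: the script needs a shebang line (e.g. #!/usr/bin/php) and the executable bit set, and redirecting its output to a log file (appending >> /var/log/backup.log 2>&1 to the command) makes failures visible. Also note that * 23 * * * fires every minute during hour 23; 0 23 * * * would fire once daily at 23:00.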
Thanks
I have created a Glue crawler set to run every 6 hours, using the "Crawl new folders only" option. Every time the crawler runs, it fails with an "Internal Service Exception" error.
What have I tried so far?
Created another crawler with the "Crawl all the folders" option, set to run every 6 hours. It works perfectly without any issue.
Created another crawler with the "Crawl new folders only" option, but set to "Run on demand". It also works perfectly without any issue.
All three scenarios above point to the same S3 bucket with the same IAM policy. I also tried reducing the 6-hour schedule to 15 minutes / 1 hour, but no luck.
What am I missing? Any help is appreciated.
If you are setting RecrawlPolicy to CRAWL_NEW_FOLDERS_ONLY, then make sure that the UpdateBehavior is LOG only; otherwise you will get an error like: The SchemaChangePolicy for "Crawl new folders only" Amazon S3 target can have only LOG DeleteBehavior value and LOG UpdateBehavior value.
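For anyone configuring this through the API rather than the console, a minimal sketch of the boto3 `create_crawler` parameters that pair CRAWL_NEW_FOLDERS_ONLY with the required LOG-only schema change policy (the crawler name, role ARN, and S3 path below are hypothetical placeholders):

```python
# Sketch of Glue crawler parameters; Name, Role, and Path are placeholders.
params = {
    "Name": "incremental-crawler",
    "Role": "arn:aws:iam::123456789012:role/GlueCrawlerRole",
    "Targets": {"S3Targets": [{"Path": "s3://my-bucket/data/"}]},
    "Schedule": "cron(0 */6 * * ? *)",   # every 6 hours
    "RecrawlPolicy": {"RecrawlBehavior": "CRAWL_NEW_FOLDERS_ONLY"},
    "SchemaChangePolicy": {
        "UpdateBehavior": "LOG",   # anything but LOG fails with CRAWL_NEW_FOLDERS_ONLY
        "DeleteBehavior": "LOG",   # must be LOG as well
    },
}
# boto3.client("glue").create_crawler(**params)  # would create the crawler
print(params["SchemaChangePolicy"])
```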
I want to trigger an AWS Lambda function using CloudWatch Rules, and the requirements are as follows.
Condition 1: Trigger Daily
Condition 2: Every 5 mins
Condition 3: It should NOT trigger between 11PM and 1AM every day (maintenance window).
I read the documentation on cron expressions but was unable to come up with one that fulfills condition 3. The closest I got is the following expression.
0/5 ## * * ? *
How can I fulfill condition 3 mentioned above? I have left that part as ## in the cron expression. I am well aware of the timezones.
You can use online cron tools such as
https://crontab.guru/
http://www.cronmaker.com/
Here is the expression that I created as per your requirement:
*/5 1-22 * * *
I'm answering my own question. Regular cron expressions are not applicable to AWS CloudWatch Rules: AWS CloudWatch Rules use cron expressions with SIX (6) required fields (Link to Documentation). The answer to this question is as follows. I will present different scenarios, because in some cases the timezone matters and we might have to skip different hours instead of midnight.
Both of the scenarios below work; I tested them in AWS CloudWatch Rules.
Scenario 1: Trigger Daily - Every 5 Mins - Skip 11PM to 1AM.
0/5 1-22 * * ? *
Explanation: The first field specifies a trigger every 5 minutes, starting at minute 0. The second field restricts it to the 1st hour through the 22nd hour. Hence, after 22:55 the next trigger is at 1:00; everything from 23:00 through 00:55 is skipped.
Scenario 2: Trigger Daily - Every 5 Mins - Skip 5PM to 7PM
0/5 0-16,19-23 * * ? *
Explanation: The first field specifies a trigger every 5 minutes, starting at minute 0. The second field restricts it to the 0th through 16th hours and the 19th through 23rd hours. Hence, after 16:55 the next trigger is at 19:00; everything from 17:00 through 18:55 is skipped.
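The hour-field logic in both scenarios can be sketched as a small matcher (a simplified illustration covering only '*', single values, lists, and ranges, not the full cron grammar):

```python
def hour_matches(field: str, hour: int) -> bool:
    """Return True if `hour` matches a cron hour field like '1-22' or '0-16,19-23'."""
    for part in field.split(","):
        if part == "*":
            return True
        if "-" in part:
            lo, hi = map(int, part.split("-"))
            if lo <= hour <= hi:
                return True
        elif int(part) == hour:
            return True
    return False

# Scenario 1: hours excluded by "1-22" are exactly the 11PM-to-1AM window
print([h for h in range(24) if not hour_matches("1-22", h)])        # [0, 23]
# Scenario 2: hours excluded by "0-16,19-23" are the 5PM-to-7PM window
print([h for h in range(24) if not hour_matches("0-16,19-23", h)])  # [17, 18]
```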
This command can help you:
*/5 1-22 * * * /usr/bin/php /home/username/public_html/cron.ph >/dev/null 2>&1
Reference : crontab run every 15 minutes between certain hours
The answer is already given, but there are a few things I can add. https://crontab.guru/ creates confusion here, because an AWS cron expression consists of 6 fields while a regular crontab expression consists of 5, as mentioned by #keet.
So first, you can drop the year field and check your cron expression with crontab tools, but note that regular crontab does not understand the ? mark in the day-of-week field.
So my suggestion is to use the AWS console for expression testing, which lists the next 10 trigger times.
I hope this helps someone else check their upcoming triggers.
For more details, you can check the AWS CronExpressions documentation.
I have created a job in Cloud Scheduler as below:
Name: Start_BOT1
Frequency: 0 */15 * * * (Asia/Calcutta)
Target: a topic in Pub/Sub
Per the frequency, the job should start every 15 minutes. But the job is not working as expected: it runs only when we click the "Run now" button.
Can someone explain how the scheduler works in GCP and how timezones work here?
Here you can find detailed information on Configuring Cron Job Schedules with the unix-cron format.
The 1st asterisk represents the minute
The 2nd asterisk represents the hour
The 3rd asterisk represents the day of the month
The 4th asterisk represents the month
The 5th asterisk represents the day of the week
For step values, you correctly used the slash, executing every N steps.
For your case (running the job every 15 minutes) the configuration would be: */15 * * * *. Note that your current schedule, 0 */15 * * *, means minute 0 of every 15th hour (i.e. 00:00 and 15:00), not every 15 minutes.
You can select the time zone for evaluating the schedules either by using the dropdown on the GCP Console Create a job screen or the gcloud --time-zone flag when you create the job. The default time-zone is Etc/UTC.
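The step syntax can be sketched as follows (a simplified expansion of "*/N" fields, not a full unix-cron parser):

```python
def expand_step(field: str, upper: int) -> list:
    """Expand a unix-cron step field like '*/15' over the range [0, upper)."""
    base, _, step = field.partition("/")
    step = int(step) if step else 1
    start = 0 if base == "*" else int(base)
    return list(range(start, upper, step))

print(expand_step("*/15", 60))  # minute field: [0, 15, 30, 45]
print(expand_step("*/15", 24))  # hour field: [0, 15]
```

This also shows why the original schedule misbehaved: in 0 */15 * * * the */15 sits in the hour field, so the job runs only at 00:00 and 15:00 each day.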
I need help with a project that generates snapshots in AWS.
When I enter the cron schedule, it tells me that the expression I typed is not valid. I need it to run Monday to Friday, from 10:00 to 22:00 UTC, every 10 minutes.
Can anyone help me?
I tried this cron expression:
0/10 10-22 * * MON-FRI *
It is not quite clear whether your cron runs on an on-premises Linux box or you are creating the schedule in an AWS-native CloudWatch rule for snapshots. I believe the latter scenario is more AWS-style. If so, the correct syntax is
0/10 10-22 ? * MON-FRI *
(It should be ? when you mean "any" day: AWS requires a ? in one of the day-of-month / day-of-week fields.)
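A quick way to see why the original expression was rejected: AWS cron expressions have six fields, and exactly one of the two day fields must be ?. A sketch of just those two checks (not a full cron validator):

```python
def check_aws_cron(expr: str) -> list:
    """Check the two rules the question tripped over; not a full validator."""
    fields = expr.split()
    if len(fields) != 6:
        raise ValueError("AWS cron needs 6 fields: "
                         "minute hour day-of-month month day-of-week year")
    _, _, day_of_month, _, day_of_week, _ = fields
    if "?" not in (day_of_month, day_of_week):
        raise ValueError("one of day-of-month / day-of-week must be '?'")
    return fields

print(check_aws_cron("0/10 10-22 ? * MON-FRI *"))   # valid: 6 fields, '?' present
# check_aws_cron("0/10 10-22 * * MON-FRI *")  # raises: no '?' in either day field
```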
I have an application deployed to Elastic Beanstalk running as a worker. I wanted to add a periodic task that runs each hour, so I created a cron.yaml with this configuration:
version: 1
cron:
- name: "task1"
url: "/task"
schedule: "00 * * * *"
But during the deploy I always get this error:
[Instance: i-a072e41d] Command failed on instance. Return code: 1 Output: missing required parameter params[:table_name] - (ArgumentError). Hook /opt/elasticbeanstalk/addons/sqsd/hooks/start/02-start-sqsd.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
I added the right permissions to the EB role, and I checked whether cron.yaml might be formatted for Windows (CR/LF), but I always get the same error.
missing required parameter params[:table_name] looks like a DynamoDB table name is missing; where can I define it?
Any idea how I can fix this?
Thanks!
Well, I didn't figure out a solution to this issue, so I moved to another approach: using a CloudWatch Events rule of type "schedule" with an SQS queue target (the one configured with the worker).
Works perfectly!
I encountered this same error when I was dynamically generating a cron.yaml file in a container command instead of already having it in my application root.
The DynamoDB table for the cron is created in the PreInitStage, which occurs before any of your custom code executes, so if there is no cron.yaml file then no DynamoDB table is created. When the file later appears and the cron jobs are being scheduled, it fails because the table was never created.
I solved this problem by having a skeleton cron.yaml in my application root. It must contain a valid cron job (I just hit my health-check URL once a month), but that job doesn't end up scheduled, since job registration happens after your custom commands, which can rewrite the file with only the jobs you need.
This might not be your exact problem, but hopefully it helps you find yours, as the error appears when the DynamoDB table does not get created.
It looks like your YAML formatting is off. That might be the issue here.
version: 1
cron:
  - name: "task1"
    url: "/task"
    schedule: "00 * * * *"
Formatting is critical in YAML. Give this a try at the very least.