I have set up some cron jobs on an EC2 instance and want to be notified whenever one of them fails. In my cron.log I don't see any error or alert even when a cron job fails to execute. How can I capture the failed cron jobs and raise a CloudWatch alarm that can be picked up by SNS?
Thank you.
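One way to capture this yourself is to wrap each cron command in a small script that publishes a custom CloudWatch metric whenever the command exits non-zero, then alarm on that metric with an SNS action. A minimal sketch with boto3 (the namespace, metric, and job names here are illustrative assumptions, not part of the original setup):

    #!/usr/bin/env python3
    # Hypothetical cron wrapper: run the real command and publish a
    # CloudWatch metric on failure. Example crontab entry:
    #   0 * * * * /usr/local/bin/cron_wrapper.py my-backup-job /usr/local/bin/backup.sh
    import subprocess
    import sys

    import boto3

    def main():
        job_name, command = sys.argv[1], sys.argv[2:]
        result = subprocess.run(command)
        if result.returncode != 0:
            # "Cron/Failures" and "FailedJob" are illustrative names.
            boto3.client("cloudwatch").put_metric_data(
                Namespace="Cron/Failures",
                MetricData=[{
                    "MetricName": "FailedJob",
                    "Dimensions": [{"Name": "JobName", "Value": job_name}],
                    "Value": 1,
                }],
            )
        sys.exit(result.returncode)

    if __name__ == "__main__":
        main()

A CloudWatch alarm on the FailedJob metric (threshold >= 1) with an SNS topic as its alarm action would then deliver the notification.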
I have an Eventarc trigger that is supposed to run after each Cloud Scheduler invocation, i.e. the google.cloud.scheduler.v1beta1.CloudScheduler.RunJob event.
However, it is not being triggered at all!
Other triggers, such as force run, are working.
I want to trigger a Cloud Run service after a job execution. Is this possible, or am I facing a bug?
If you are expecting your Cloud Run service to be executed at each scheduled invocation of Cloud Scheduler, that isn't possible through Eventarc and Cloud Audit Logs.
This is because Cloud Scheduler is not in the list of services that write audit logs. In addition, the RunJob event you are filtering on is only written when you execute a job manually (using the API), not when your cron schedule fires.
A manual job run did trigger Eventarc when I tested this scenario, but I had to set my trigger as global.
If you would like to execute the Cloud Run service on a schedule, you can do that by having Cloud Scheduler send a request to the service URL directly. Alternatively, instead of having Eventarc listen to audit logs, have it listen for messages on a Pub/Sub topic that Cloud Scheduler publishes to. Let me know if this was helpful.
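A minimal sketch of the Pub/Sub route with the Python client (the project, region, topic, and job names are assumptions):

    # Sketch: a Cloud Scheduler job that publishes to a Pub/Sub topic,
    # which an Eventarc trigger (or a push subscription) can route to Cloud Run.
    from google.cloud import scheduler_v1

    client = scheduler_v1.CloudSchedulerClient()
    parent = "projects/my-project/locations/us-central1"  # assumed project/region

    job = scheduler_v1.Job(
        name=f"{parent}/jobs/invoke-my-service",  # assumed job name
        schedule="*/15 * * * *",                  # every 15 minutes
        pubsub_target=scheduler_v1.PubsubTarget(
            topic_name="projects/my-project/topics/my-run-trigger",  # assumed topic
            data=b"run",
        ),
    )
    client.create_job(parent=parent, job=job)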
I set up cron jobs on Elastic Beanstalk using a cron.yaml file, and SQS runs my tasks periodically. Is there a way to trigger a cron job manually through SQS so that, for infrequently running tasks, I can easily test the results without waiting for the schedule itself? I tried sending a message to the SQS queue attached to the EB instance, but I can't set the HTTP headers required for the cron job.
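For background: the worker's sqsd daemon invokes periodic tasks by POSTing to the path from cron.yaml with an X-Aws-Sqsd-Taskname header, which ordinary SQS messages can't carry. One way to test a task without waiting for the schedule is to send that request to the worker yourself, from the instance; a sketch (the path and task name are illustrative):

    # Sketch: simulate sqsd's periodic-task request from the worker instance.
    # The URL path and task name must match an entry in your cron.yaml.
    import requests

    resp = requests.post(
        "http://localhost/scheduled/cleanup",            # assumed task URL
        headers={"X-Aws-Sqsd-Taskname": "cleanup-job"},  # header sqsd sets for periodic tasks
    )
    print(resp.status_code)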
I have two elastic beanstalk environments.
One is the 'primary' web server environment and the other is a worker environment that handles cron jobs.
I have 12 cron jobs, setup via a cron.yaml file that all point at API endpoints on the primary web server.
Previously my cron jobs were all running on the web server environment, but of course this created duplicate cron jobs when the environment scaled up.
My new implementation works nicely, but when a cron job fails to run as expected it repeats, generally within a minute or so.
I would rather avoid this behaviour and just attempt to run the cron job again at the next scheduled interval.
Is there a way to configure the worker environment/SQS so that failed jobs do not repeat?
Simply configure a CloudWatch Events rule to take over your cron, and have it create an SQS message (either directly or via a Lambda function).
Your workers will now just have to handle SQS jobs and if needed, you will be able to scale the workers as well.
http://docs.aws.amazon.com/AmazonCloudWatch/latest/events/ScheduledEvents.html
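A sketch of that wiring with boto3 (the rule name, schedule, and queue ARN are assumptions; the queue also needs a resource policy allowing events.amazonaws.com to send messages):

    # Sketch: a scheduled CloudWatch Events rule that drops a message
    # onto an SQS queue, replacing the in-instance cron entry.
    import boto3

    events = boto3.client("events")

    events.put_rule(
        Name="nightly-report",                  # assumed rule name
        ScheduleExpression="cron(0 3 * * ? *)"  # 03:00 UTC daily
    )
    events.put_targets(
        Rule="nightly-report",
        Targets=[{
            "Id": "report-queue",
            "Arn": "arn:aws:sqs:us-east-1:123456789012:worker-queue",  # assumed queue ARN
            "Input": '{"task": "nightly-report"}',  # message body the worker receives
        }],
    )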
Yes, you can set the Max retries parameter in the Elastic Beanstalk environment and the Maximum Receives parameter in the SQS queue to 1. This ensures that the message is processed only once; if the job fails, the message is sent to the dead-letter queue.
With this approach, your environment may turn yellow if any jobs fail, because the messages end up in the dead-letter queue. You can simply observe and ignore them, but it may be annoying if you need all environments to stay green. You can set the Message Retention Period parameter on the dead-letter queue to something short so the failed messages go away sooner.
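Programmatically, that redrive setup looks roughly like this with boto3 (the queue URL and ARN are assumptions):

    # Sketch: point the worker queue at a dead-letter queue with
    # maxReceiveCount=1, so a failed message is moved after one attempt.
    import json

    import boto3

    sqs = boto3.client("sqs")

    sqs.set_queue_attributes(
        QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/worker-queue",  # assumed
        Attributes={
            "RedrivePolicy": json.dumps({
                "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:worker-dlq",  # assumed
                "maxReceiveCount": "1",
            }),
        },
    )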
An alternative approach, if you're interested, is to return a status 200 OK in your code regardless of how the job ran. This will ensure that the SQS daemon deletes the message in the queue, so that it won't get picked up again.
Of course, the downside is that you would have to modify your code, but I can see how this would make sense if you don't care about the result.
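As a sketch of that idea, assuming a Flask app behind the worker (the route and job function are illustrative):

    # Sketch: acknowledge the SQS message no matter what, so sqsd never retries.
    import logging

    from flask import Flask

    app = Flask(__name__)

    @app.route("/scheduled/report", methods=["POST"])  # path from cron.yaml (assumed)
    def run_report():
        try:
            generate_report()  # hypothetical job function
        except Exception:
            logging.exception("report job failed")  # log the failure, but still ack
        return "", 200  # a 200 response makes sqsd delete the message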
Here's a link to AWS documentation that explains all of the parameters.
We have AWS alarms set up to email on alarm, but we would like to keep receiving the notification while the state remains in ALARM, even without a state change. How can I achieve this? (I would be happy to use a Lambda, but I have no idea how to do it.)
Amazon CloudWatch alarm notifications are only sent when the state of the alarm changes. It is not possible to configure CloudWatch to continually send notifications while in the ALARM state.
You would need to write your own code to send such notifications. This could be accomplished via a cron job, a scheduled AWS Lambda function, or your own application.
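A sketch of such a re-notifier, runnable as a scheduled Lambda (the SNS topic ARN is an assumption):

    # Sketch: re-publish a notification for every alarm currently in ALARM.
    # Schedule this handler (e.g. every 15 minutes) via a CloudWatch Events rule.
    import boto3

    cloudwatch = boto3.client("cloudwatch")
    sns = boto3.client("sns")

    TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:alarm-reminders"  # assumed topic

    def handler(event, context):
        paginator = cloudwatch.get_paginator("describe_alarms")
        for page in paginator.paginate(StateValue="ALARM"):
            for alarm in page["MetricAlarms"]:
                sns.publish(
                    TopicArn=TOPIC_ARN,
                    Subject=("Still in ALARM: " + alarm["AlarmName"])[:100],  # SNS subject limit
                    Message=alarm.get("StateReason", "No state reason provided"),
                )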
Try a script that uses the CloudWatch API, for example with Boto3 + Python, or a Lambda running every X minutes. I have a Python script that gets values from CloudWatch which you can adapt: http://www.dbigcloud.com/cloud-computing/230-integrando-metricas-de-aws-cloudwatch-en-zabbix.html
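The general shape of reading a metric with Boto3 looks like this (the metric, instance ID, and period are illustrative):

    # Sketch: fetch the last hour of a metric's datapoints from CloudWatch.
    from datetime import datetime, timedelta, timezone

    import boto3

    cloudwatch = boto3.client("cloudwatch")
    now = datetime.now(timezone.utc)

    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",  # illustrative metric
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # assumed
        StartTime=now - timedelta(hours=1),
        EndTime=now,
        Period=300,             # 5-minute datapoints
        Statistics=["Average"],
    )
    for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], point["Average"])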
One alternative is to create a Lambda function that sends the email, and invoke it from a CloudWatch Rule with the Schedule option, targeting the Lambda function you created. In the Schedule option you can set how frequently you expect to receive the email; at that frequency, the rule will trigger the Lambda function to send it.
I have a business case where, when an EC2 instance runs out of space, we need to spawn a new EBS volume, attach it to the EC2 instance, and format it.
I have created a cron job that keeps sending disk usage to CloudWatch, and I am trying to create an alarm on this custom metric.
Now I am not able to find any information on how to spawn an EBS volume when this alarm triggers.
So I would like to know: is it possible to spawn an EBS volume when a CloudWatch alarm triggers? If yes, please give some steps or point me to documentation where I can find this information.
As of now, all I have found is that we can either spawn new instances or send emails whenever an alarm triggers.
You can fire a notification to an SNS topic when the CloudWatch alarm fires, and have an SQS queue as a subscriber to that topic. Then, an EC2 instance consuming that SQS queue can perform the desired change using the AWS CLI or SDKs.
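A sketch of what that consumer could run when the alarm message arrives (the instance ID, availability zone, size, and device name are assumptions; creating a filesystem and mounting still happen on the instance itself):

    # Sketch: create an EBS volume and attach it to the instance that ran
    # out of space. Run by the worker consuming the SQS queue.
    import boto3

    ec2 = boto3.client("ec2")

    INSTANCE_ID = "i-0123456789abcdef0"  # assumed; could be parsed from the alarm message

    volume = ec2.create_volume(
        AvailabilityZone="us-east-1a",  # must match the instance's AZ (assumed)
        Size=100,                       # GiB, illustrative
        VolumeType="gp3",
    )
    ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

    ec2.attach_volume(
        VolumeId=volume["VolumeId"],
        InstanceId=INSTANCE_ID,
        Device="/dev/sdf",  # assumed device name
    )
    # The instance still needs to create a filesystem and mount the new
    # volume (e.g. mkfs + mount), which you could script on the host.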