I set up a cron job on Elastic Beanstalk using a cron.yaml file, and SQS runs my tasks periodically. Is there a way to trigger a cron job manually through SQS, so that for infrequently running tasks I can easily test the results without waiting for the schedule itself? I tried to send a message to the SQS queue attached to the EB instance, but I can't set the HTTP headers required for the cron job.
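One possible workaround (a sketch only, not from this thread): from inside the worker instance, POST directly to the task URL the way the worker daemon does for periodic tasks, supplying the task-name header yourself. The URL path, task name, and header usage below are assumptions based on how cron.yaml tasks are typically delivered.

```python
import urllib.request

# Placeholders: the url from your cron.yaml entry and its task name.
request = urllib.request.Request(
    "http://localhost/scheduled-task-url",
    data=b"",
    headers={"X-Aws-Sqsd-Taskname": "my-periodic-task"},
    method="POST",
)
urllib.request.urlopen(request)
```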
I have set up some cron jobs on an EC2 instance and I want to be notified whenever one of them fails. In my cron.log, I don't see any error or alert even when a cron job fails to execute. How can I capture the failed crons and send a CloudWatch alarm that can be picked up by SNS?
Thank you.
Is it possible to schedule a DAG run when a message arrives at an SQS queue? I also need the DAG to process the message in the queue. From what I know, this could be done by using the SQSSensor, but I couldn't find any example and I am confused about how to move forward.
Airflow runs DAGs on a fixed interval, while you're looking to trigger DAGs per event. You'll have to do this outside of Airflow, e.g. using a Lambda trigger listening on the queue, which triggers an Airflow DAG via the REST API.
The SQSSensor in Airflow won't allow for event-by-event processing, because it simply polls the queue after a DAG run starts (checking for new messages, pushing them to an XCom with key "messages", and deleting the messages if found). So if your DAG is scheduled once a day, an SQSSensor would only start polling for new messages once a day.
I can't find an SQSOperator in Airflow for reading SQS messages, so to create an event-triggered SQS + Airflow workflow, my best guess is to set up a Lambda that triggers Airflow DAGs via the REST API. The DAG itself would start with an SQSSensor that reads all messages on the queue, and the tasks after that would read and process the values from the XCom created by the SQSSensor task. The schedule_interval of the DAG can be set to None, since it will be triggered via the REST API.
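A minimal sketch of the Lambda side, assuming Airflow 2's stable REST API with basic auth; the Airflow URL, credentials, and DAG id (process_sqs_messages) are placeholders:

```python
import base64
import json
import urllib.request

# Placeholders: your Airflow base URL, credentials, and DAG id.
AIRFLOW_URL = "https://airflow.example.com/api/v1/dags/process_sqs_messages/dagRuns"
AUTH = base64.b64encode(b"airflow_user:airflow_password").decode()

def handler(event, context):
    # With an SQS trigger, Lambda receives a batch of records. Here we just
    # trigger a DAG run and pass the message bodies along as conf; the DAG's
    # SQSSensor (or downstream tasks) can use or ignore them.
    bodies = [record["body"] for record in event.get("Records", [])]
    payload = json.dumps({"conf": {"messages": bodies}}).encode()
    request = urllib.request.Request(
        AIRFLOW_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {AUTH}",
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return {"statusCode": response.status}
```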
In my architecture, when a new file arrives in an S3 bucket, a Lambda function triggers an ECS task.
The problem occurs when I receive multiple files at the same time: the Lambda will trigger multiple instances of the same ECS task, which act on the same shared resources.
I want to ensure only one instance of a specific ECS task is running. How can I do that?
Is there a specific setting that can ensure this?
I tried to query the ECS cluster before running a new instance of the ECS task, but (using the AWS Python SDK) I didn't receive any information while the task was in PROVISIONING status; the SDK only returns data once the task is PENDING or RUNNING.
Thank you
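For reference, the check described above looks roughly like this with boto3; the cluster and task family names are placeholders:

```python
import boto3

ecs = boto3.client("ecs")

def task_already_active(cluster="my-cluster", family="my-task-family"):
    # list_tasks filters on desiredStatus, and a freshly launched task may not
    # be visible while it is still PROVISIONING, which is the gap described above.
    response = ecs.list_tasks(cluster=cluster, family=family, desiredStatus="RUNNING")
    return len(response["taskArns"]) > 0
```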
I don't think you can control that, because your S3 event will keep triggering new tasks. It would be difficult to check whether the task is already running, and you might miss executions if you receive a lot of files.
You should approach this differently to achieve what you want. If you want only one task processing at a time, forget about triggering the ECS task directly from the S3 event. It will work better if you introduce a queue: your S3 event should add the information (via Lambda, maybe?) to an SQS queue.
From there you can have an ECS service doing SQS long polling and processing one message at a time.
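A minimal sketch of that consumer loop (boto3), assuming a placeholder queue URL and a process_file() function standing in for the current ECS task logic:

```python
import boto3

# Placeholder queue URL fed by the S3 -> Lambda event.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/file-events"
sqs = boto3.client("sqs")

def process_file(body):
    ...  # whatever the ECS task currently does with the new S3 object

while True:
    # Long poll: wait up to 20 seconds and take at most one message,
    # so files are processed strictly one at a time.
    response = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=1,
        WaitTimeSeconds=20,
    )
    for message in response.get("Messages", []):
        process_file(message["Body"])
        # Delete only after successful processing so the message is retried on failure.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])
```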
I wonder whether there is a way to use only a Lambda function when I dispatch a job in Laravel.
Here is what I'm using:
Laravel 5.8(PHP 7.2)
AWS SQS
Supervisord
In Laravel, I dispatch a job with the SQS connection, and the job lives in the Laravel project.
I searched for how to use SQS as a trigger for a Lambda function, and I found this document: Using AWS Lambda with Amazon SQS.
If I follow this document, I think I can run the job in Lambda. But the job will also run again in the Laravel project. I want to use only Lambda as the job.
How can I run only the Lambda function as a job?
No, it is not possible. SQS, database, or Redis are just for keeping the serialized (encoded, etc.) version of your Laravel jobs. Here is the closest you may get:
Forget about the SQS queue driver.
Implement your job in AWS Lambda.
Allow Lambda to consume your SQS queue (policies, triggers, etc. are listed in the documentation); a sketch of the Lambda side follows after this list.
Make a request from your Laravel app to your SQS queue, via the AWS PHP SDK or an HTTP request (Guzzle, cURL), and let Lambda consume the queue.
You may use some async driver to trigger your SQS requests asynchronously.
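A minimal sketch of that Lambda (shown here in Python, though any supported runtime works); handle_job() is a placeholder for the logic that currently lives in the Laravel job class:

```python
import json

def handle_job(payload):
    ...  # re-implement the job's logic here instead of in the Laravel job class

def handler(event, context):
    # With an SQS trigger, Lambda receives a batch of records; each body is the
    # raw message your Laravel app sent to the queue.
    for record in event["Records"]:
        payload = json.loads(record["body"])
        handle_job(payload)
```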
If you want to use an SQS delay queue, the maximum delay is 15 minutes (see the SQS documentation).
I was trying to set up a periodic task (using cron.yaml) in an EB worker environment that uses a FIFO SQS queue. When the cron job tries to submit a job to SQS, it fails because it does not set the message group ID, which is required for a FIFO queue.
Is there a way around this (apart from using some other scheduling mechanism or a standard queue)?
scheduler: dropping leader, due to failed to send message for job
'italian-job', because: The request must contain the parameter
MessageGroupId. (Aws::SQS::Errors::MissingParameter)
Update: as a workaround, I created a CloudWatch scheduled trigger to start a Lambda which sends the messages to the SQS queue.
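A rough sketch of that workaround Lambda (boto3), assuming a placeholder FIFO queue URL and message body:

```python
import json
import uuid
import boto3

# Placeholder FIFO queue URL for the EB worker environment's queue.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/worker-queue.fifo"
sqs = boto3.client("sqs")

def handler(event, context):
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"job": "italian-job"}),
        # FIFO queues require a message group ID, which is exactly what the
        # cron.yaml scheduler fails to provide.
        MessageGroupId="scheduled-jobs",
        # Needed unless content-based deduplication is enabled on the queue.
        MessageDeduplicationId=str(uuid.uuid4()),
    )
```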