I want to send notifications based on a user-specified time. For example, in Google Calendar I can receive a text message when my task time is hit.
Is the solution to run a cron job that executes every minute and scans for users whose notification time equals the current time?
Since you tagged your question with celery, I assume you have Celery running. You could use the eta kwarg of apply_async() to schedule a task to run at a specific time; see here:
http://docs.celeryproject.org/en/latest/userguide/calling.html#eta-and-countdown
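For illustration, a minimal sketch of the eta approach (send_notification and the user variable are hypothetical names, not from your project):
from datetime import datetime, timezone

from myapp.tasks import send_notification  # hypothetical Celery task

# Schedule the task to run at the user-specified time; eta takes a datetime
# (use a timezone-aware one to avoid surprises).
notify_at = datetime(2024, 6, 1, 9, 30, tzinfo=timezone.utc)
send_notification.apply_async(args=[user.id], eta=notify_at)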
If you need to use a cron job, I would not check if notification_time == current_time, but rather track unsent notifications with a boolean is_sent field on the model and check for notification_time <= current_time and not is_sent. This seems to be slightly less error prone. You could also add some form of check to prevent mass-sending notifications in case your system goes down for a few hours.
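If you go the cron route, the check could look roughly like this (the Notification model and the send() helper are assumptions for the sketch):
from datetime import timedelta

from django.utils import timezone

from myapp.models import Notification  # hypothetical model with notification_time and is_sent

def send_due_notifications():
    now = timezone.now()
    # Only look back a few hours so a long outage doesn't trigger a mass send.
    cutoff = now - timedelta(hours=6)  # arbitrary safety window
    due = Notification.objects.filter(
        notification_time__lte=now,
        notification_time__gte=cutoff,
        is_sent=False,
    )
    for notification in due:
        send(notification)  # hypothetical delivery helper (SMS, email, ...)
        notification.is_sent = True
        notification.save(update_fields=["is_sent"])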
I have a Step Function with 3 Lambdas; the last Lambda writes an entry into DynamoDB with a timestamp, status = "unpaid" (this is updated to "paid" automatically based on another workflow) and an email address, then closes the execution. Now I want to schedule reminders, sent via email, for any entry in DynamoDB that is unpaid and over 7 days old, a second reminder if the entry is still unpaid after 14 days, and a third and last reminder on the 19th day. So the questions are:
Is there any way to do this scheduling per Step Function execution (so that it monitors that particular entry in DynamoDB for 7, 14 and 19 days and sends reminders accordingly while the status is still "unpaid")?
If yes, would it be too much overhead, since there could be millions of transactions?
The second way I was thinking of was to build another scheduler Lambda sequence: the first Lambda parses the whole DynamoDB table searching for entries due for a reminder (either 7, 14 or 19 days), the second Lambda takes the list from the first Lambda and prepares the reminder based on whether it is the first, second or third one (in a loop), and the third Lambda sends the reminders through SES.
Is there a better or easier way to do this?
I know we can trigger Step Functions or Lambdas through CloudWatch Events, and we also have crons we could use, but they don't quite suit the use case.
Any help here is appreciated!
DynamoDB does not have built-in functionality for a delayed notification based on logic; you would need to design this flow yourself. Luckily, AWS has all the tools you need to do this.
I believe the best option would probably be to create a CloudWatch Events/EventBridge scheduled rule when the item is written to DynamoDB (either via your application or via a Lambda triggered by DynamoDB Streams).
This event would be scheduled for 7 days' time. When it fires, the Lambda checks whether the item has been paid: if it has not been paid, you send out the notification and schedule the next event; if it has been paid, you simply exit the Lambda function. This then continues for the next two time periods.
You could further enhance this by using DynamoDB Streams, so that whenever the DynamoDB table is updated a Lambda is triggered to detect whether the status has changed from "unpaid". If it has, simply remove the scheduled event so it never has to be processed.
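A rough boto3 sketch of scheduling that first 7-day event from the Lambda that handles the DynamoDB write (invoice_id and the reminder Lambda ARN are placeholders, not from the question):
import json
from datetime import datetime, timedelta, timezone

import boto3

events = boto3.client("events")

def schedule_reminder(invoice_id, days_from_now, reminder_lambda_arn):
    # Create a CloudWatch Events/EventBridge rule that fires once, N days from now.
    fire_at = datetime.now(timezone.utc) + timedelta(days=days_from_now)
    rule_name = f"reminder-{invoice_id}-{days_from_now}d"
    # cron(Minutes Hours Day-of-month Month Day-of-week Year) pinned to a single timestamp
    events.put_rule(
        Name=rule_name,
        ScheduleExpression=f"cron({fire_at.minute} {fire_at.hour} {fire_at.day} {fire_at.month} ? {fire_at.year})",
        State="ENABLED",
    )
    events.put_targets(
        Rule=rule_name,
        Targets=[{
            "Id": "reminder-lambda",
            "Arn": reminder_lambda_arn,
            "Input": json.dumps({"invoice_id": invoice_id, "days": days_from_now}),
        }],
    )
(The reminder Lambda also needs a resource-based policy allowing events.amazonaws.com to invoke it, and you would delete the rule once it has fired or the item is paid.)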
I have written a Cloud Storage trigger based Cloud Function. I have 10-15 files landing in the bucket at 5 second intervals, and they load data into a BigQuery table (truncate and load).
While there are 10 files in the bucket, I want the Cloud Function to process them sequentially, i.e. one file at a time, as all the files access the same table.
Currently the Cloud Function is triggered for multiple files at a time and fails in the BigQuery operation because multiple files try to access the same table.
Is there any way to configure this in the Cloud Function?
Thanks in Advance!
You can achieve this by using Pub/Sub and the max instances param on the Cloud Function.
Firstly, use the notification capability of Google Cloud Storage and sink the events into a Pub/Sub topic.
Now you will receive a message every time an event occurs on the bucket. If you want to filter on file creation only (object finalize), you can apply a filter on the subscription. I wrote an article on this.
Then, create an HTTP function (an HTTP function is required if you want to apply a filter) with max instances set to 1. This way, only one instance of the function can run at a time, so there is no concurrency!
Finally, create a Pub/Sub subscription on the topic, with or without a filter, to call your function over HTTP.
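A sketch of that wiring with the Python client libraries (the project, bucket, topic, subscription and function URL names are placeholders; the function itself would be deployed with max instances set to 1):
from google.cloud import pubsub_v1, storage

project = "my-project"

# 1. Send bucket events to a Pub/Sub topic (OBJECT_FINALIZE = new or overwritten files).
bucket = storage.Client(project=project).bucket("incoming-files")
notification = bucket.notification(
    topic_name="gcs-events",
    event_types=["OBJECT_FINALIZE"],
    payload_format="JSON_API_V1",
)
notification.create()

# 2. Push the messages to the HTTP function, keeping only object-finalize events.
subscriber = pubsub_v1.SubscriberClient()
subscriber.create_subscription(
    request={
        "name": f"projects/{project}/subscriptions/gcs-loader",
        "topic": f"projects/{project}/topics/gcs-events",
        "push_config": {"push_endpoint": "https://REGION-my-project.cloudfunctions.net/load-file"},
        "filter": 'attributes.eventType = "OBJECT_FINALIZE"',
    }
)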
EDIT
Thanks to your code, I understood what happens. In fact, BigQuery is a declarative system: when you perform a request or a load job, a job is created and runs in the background.
In Python you can explicitly wait for the end of the job, but with pandas I didn't find how!
I just found a Google Cloud page explaining how to migrate from pandas to the BigQuery client library. As you can see, there is a line at the end
# Wait for the load job to complete.
job.result()
that waits for the end of the job.
You did this correctly in the _insert_into_bigquery_dwh function, but not in the staging _insert_into_bigquery_staging one. This can lead to 2 issues:
The dwh function works on old data, because the staging load hasn't finished yet when you trigger that job.
If the staging load takes, let's say, 10 seconds and runs in the "background" (you don't explicitly wait for it to finish in your code), and the dwh load takes 1 second, the next file is processed at the end of the dwh function even though the staging load is still running in the background. And that leads to your issue.
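For example, a minimal sketch of the staging load done with the client library instead of pandas, waiting explicitly (the function and parameter names are placeholders standing in for your _insert_into_bigquery_staging):
from google.cloud import bigquery

client = bigquery.Client()

def insert_into_bigquery_staging(df, table_id):
    # Truncate-and-load the DataFrame, then block until the load job has finished.
    job_config = bigquery.LoadJobConfig(
        write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE)
    job = client.load_table_from_dataframe(df, table_id, job_config=job_config)
    job.result()  # wait for the load job to complete before processing the next file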
The architecture you describe isn't the same as the one from the documentation you linked. Note that in the flow diagram and the code samples the storage event triggers the Cloud Function, which streams the data directly into the destination table. Since BigQuery allows multiple streaming insert jobs, several functions can execute at the same time without problems. In your use case, the intermediate table loaded with write-truncate for data cleaning makes a big difference, because each execution needs the previous one to finish, thus requiring a sequential processing approach.
I would like to point out that Pub/Sub doesn't let you configure the rate at which messages are delivered: if 10 messages arrive at the topic, they will all be sent to the subscriber, even if they are processed one at a time. Limiting the function to one instance may lead to overhead for the above reason and could increase latency as well. That said, since the expected workload is 15-30 files a day, this may not be a big concern.
If you'd like to have parallel executions, you may try creating a new table for each message and setting a short expiration deadline for it using the table.expires setter (table.expires = exp_datetime), so that multiple executions don't conflict with each other. Here is the related library reference. Otherwise, the great answer from Guillaume would completely get the job done.
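A short sketch of the per-message table idea (the project, dataset and schema names are placeholders):
from datetime import datetime, timedelta, timezone

from google.cloud import bigquery

client = bigquery.Client()

def create_staging_table(suffix, schema):
    # One staging table per message, auto-deleted an hour later so leftovers don't pile up.
    table = bigquery.Table(f"my-project.my_dataset.staging_{suffix}", schema=schema)
    table.expires = datetime.now(timezone.utc) + timedelta(hours=1)
    return client.create_table(table)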
I have to implement functionality that requires delayed sending of a message to a user, once, on a specific date, which can be anytime from tomorrow to a few months from now.
All our code is so far implemented as lambda functions.
I'm considering three options on how to implement this:
Create an entry in DynamoDB with the hash key being the date and the range key being a unique ID. Schedule a Lambda to run once a day, pick up all entries/tasks scheduled for that day and send a message for each of them.
Using the SDK, create a CloudWatch Events rule with a cron expression for a single execution and make it invoke a Lambda function (target) with the ID of the user/message. The Lambda would then be invoked on that specific schedule with the specific user/message to be delivered.
Create a Step Function execution and configure it to sleep, then invoke a step with the logic to send the message when the right moment comes.
Do you have perhaps any recommendation on what would be best practice to implement this kind of business requirement? Perhaps an entirely different approach?
It largely depends on scale. If you'll only have a few messages scheduled at any point in time, then I'd use the CloudWatch Events approach. It's very low overhead and doesn't involve running code just to do nothing.
If you expect a LOT of schedules then the DynamoDB approach is very possibly the best approach. Run the Lambda on a fixed schedule and see which records have not yet been run and are at or before the current time. In this model you'll want to delete the records you've already processed (or mark them in some way) so that you don't process them again. Don't rely on the schedule running at certain intervals and checking for records between the last run and the current time unless you are recording when the last run was (i.e. don't assume you ran a minute ago just because you scheduled it to run every minute).
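A boto3 sketch of that scheduled Lambda, assuming (purely for illustration) a table with a constant partition key and the ISO-8601 send time as the sort key:
from datetime import datetime, timezone

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("scheduled_messages")  # hypothetical table

def process_due_messages(event, context):
    now = datetime.now(timezone.utc).isoformat()
    due = table.query(
        KeyConditionExpression=Key("pk").eq("SCHEDULED") & Key("send_at").lte(now)
    )["Items"]
    for item in due:
        deliver(item)  # hypothetical send helper (SES, SNS, ...)
        # Delete (or mark) the record so it is never processed twice.
        table.delete_item(Key={"pk": item["pk"], "send_at": item["send_at"]})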
Step Functions could work if the time isn't too far out. You can include a delay in a step that causes it to just sit and wait. The delays in Step Functions are just that, delays, not scheduled times, so you'd have to figure out the delay yourself and hope it fires close enough to the time you expect. This isn't a bad option for mid to low volume.
Edit:
Step functions include a wait_until option on wait states now. This is a really good option for what you are describing.
As of November 2022, the cleanest approach would be to use EventBridge Scheduler's one-time schedule.
A one-time schedule will invoke a target only once, at the date and time that you specify using a valid date and timestamp. EventBridge Scheduler supports scheduling in Universal Coordinated Time (UTC), or in the time zone that you specify when you create your schedule. You configure a one-time schedule using an at expression.
Here is an example using the AWS CLI:
aws scheduler create-schedule --schedule-expression "at(2022-11-30T13:00:00)" --name schedule-name \
    --target '{"RoleArn": "role-arn", "Arn": "QUEUE_ARN", "Input": "TEST_PAYLOAD" }' \
    --schedule-expression-timezone "America/Los_Angeles" \
    --flexible-time-window '{ "Mode": "OFF"}'
Reference: Schedule types on EventBridge Scheduler - EventBridge Scheduler User Guide
Instead of using DynamoDB, I would suggest using S3. Store the message and the time to trigger as key-value pairs.
Use S3 to store the date and time as a key-value store.
Use an S3 Lambda trigger to create the CloudWatch rules that would target the specific Lambdas etc.
You can even schedule a cron for a Lambda that reads the files from S3 and updates the required cron for the message to be sent.
Hope this is in line with your requirements.
My tasks are added to the Task Queue, but nothing is executed automatically. I need to click the "Run now" button to run the tasks; then they execute without problems. Have I missed some configuration?
I use the default queue configuration, on standard App Engine with Python 2.7.
from google.appengine.api import taskqueue

taskqueue.add(
    url='/inserturl',
    params={'name': 'tablename'})
This documentation is for the API you are now mentioning. The idea is the same: you need to specify when you want the task to be executed. In this case, you have different options, such as countdown or eta. Here is the specific documentation for the method you are using to add a task to the queue (taskqueue.add).
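For example, with the taskqueue API you are using, the countdown or eta parameter can be passed straight to taskqueue.add (the values below are arbitrary):
from datetime import datetime, timedelta

from google.appengine.api import taskqueue

# Run the task 60 seconds from now ...
taskqueue.add(
    url='/inserturl',
    params={'name': 'tablename'},
    countdown=60)

# ... or at an absolute time in the future (eta takes a datetime;
# don't combine it with countdown).
taskqueue.add(
    url='/inserturl',
    params={'name': 'tablename'},
    eta=datetime.utcnow() + timedelta(minutes=10))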
ORIGINAL ANSWER
If you follow this tutorial to create queues and tasks, you will see it is based on the following GitHub repo. If you go to the file where the tasks are created (create_app_engine_queue_task.py), that is where you should specify the time when the task must be executed. In this tutorial, to finally create the task, they use the following command:
python create_app_engine_queue_task.py --project=$PROJECT_ID --location=$LOCATION_ID --queue=$QUEUE_ID --payload=hello
However, it is missing the time when you want to execute it; it should look like this:
python create_app_engine_queue_task.py --project=$PROJECT_ID --location=$LOCATION_ID --queue=$QUEUE_ID --payload=hello --in_seconds=["countdown" for when the task will be executed, in seconds]
Basically, the key is in this part of the code in create_app_engine_queue_task.py:
if in_seconds is not None:
    # Convert "seconds from now" into an rfc3339 datetime string.
    d = datetime.datetime.utcnow() + datetime.timedelta(seconds=in_seconds)
    # Create Timestamp protobuf.
    timestamp = timestamp_pb2.Timestamp()
    timestamp.FromDatetime(d)
    # Add the timestamp to the tasks.
    task['schedule_time'] = timestamp
If you create the task now and go to your console, you will see that your task executes and disappears from the queue after the number of seconds you specified.
In my Django project, users are allowed to register for a free trial, but if they do not complete a purchase within 15 days, their accounts are locked out until they do complete the purchase. After 13 days (i.e. within 48 hours of expiry) I wish to send an email reminding the registered user to purchase.
Currently, I have a cron job set up to run daily; it checks every trial account and, if 13 days have passed since registration (i.e. the trial expires within 2 days), sends an email.
I was wondering if there is a more elegant solution to do this?
If you don't want to mess with your cron file, you should check out Celery, an asynchronous task queue written in Python. It was originally created with Django in mind but has since been broken out into a separate package. What you want to do then is set up a Celerybeat schedule like this:
from datetime import timedelta

CELERYBEAT_SCHEDULE = {
    "purchase-reminder": {
        "task": "accounts.tasks.remind",
        "schedule": timedelta(hours=24),
    },
}
This will call the task (read: function) accounts.tasks.remind every 24 hours.
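The task itself could then do the date check and send the mail. A rough sketch (the Account model, its fields and the addresses are assumptions, not part of the original answer):
# accounts/tasks.py
from datetime import timedelta

from celery import shared_task
from django.core.mail import send_mail
from django.utils import timezone

from accounts.models import Account  # hypothetical model

@shared_task
def remind():
    target_date = (timezone.now() - timedelta(days=13)).date()
    # Trial accounts registered exactly 13 days ago, i.e. expiring in roughly 2 days.
    for account in Account.objects.filter(purchased=False, registered_at__date=target_date):
        send_mail(
            "Your trial is about to expire",
            "Please complete your purchase within the next 48 hours.",
            "noreply@example.com",
            [account.user.email],
        )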