What is the best strategy for implementing scheduled payments in Django?

I want to implement scheduled jobs/actions in my Django backend.
The actions are basically deducting a monthly recurring payment from the customer, sending a payment link, say, 10 days in advance, and so on. The dates will be based on when the user buys the subscription.
I have never implemented scheduled jobs before. I know there are options like crontabs and Celery.
I want to know what the best strategy/tool for scheduled payments would be.
So basically, what I think I will do is run a scheduled job every day at a particular time, check for eligible candidates, and run the payment module for them.
Is this strategy of running a job every day correct? Are there better methods available? Is there a way for jobs to run automatically when, say, a customer's new billing cycle arrives?

Yes, the strategy you are following is correct. You can use Celery, Redis, and a crontab schedule to run the payment system.
First, specify the schedules using a crontab expression. The .delay() call also lets you trigger a job the moment a customer's new billing cycle arrives.
So the flow will be: a task is triggered with .delay() when a new billing cycle arrives, the Celery worker registers the task and the schedules, Celery beat runs the periodic tasks, and Redis serves as the message broker. A minimal sketch follows after the links below.
Read about .delay(), Celery's configuration settings, and setting up a periodic task scheduler with Celery beat in the Celery documentation.
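To make that flow concrete, here is a minimal sketch of both halves: a Celery beat entry that runs the daily sweep, and .delay() used to fan out (or ad-hoc trigger) individual charges. The Subscription model, the billing app paths, and the schedule time are hypothetical placeholders for your own code.

```python
# tasks.py -- minimal sketch; Subscription, next_billing_date, and the
# payment-gateway call are hypothetical stand-ins for your own billing code.
from celery import shared_task
from django.utils import timezone


@shared_task
def charge_subscription(subscription_id):
    """Charge a single subscription; can also be fired ad hoc with .delay()."""
    from billing.models import Subscription  # hypothetical app/model
    sub = Subscription.objects.get(pk=subscription_id)
    # ... call your payment gateway here ...


@shared_task
def process_due_payments():
    """Daily sweep: find today's candidates and fan out one task per customer."""
    from billing.models import Subscription
    due = Subscription.objects.filter(next_billing_date=timezone.now().date())
    for sub in due:
        charge_subscription.delay(sub.pk)


# settings.py -- register the sweep with Celery beat (assuming the usual
# app.config_from_object("django.conf:settings", namespace="CELERY") setup).
from celery.schedules import crontab

CELERY_BEAT_SCHEDULE = {
    "process-due-payments": {
        "task": "billing.tasks.process_due_payments",
        "schedule": crontab(hour=6, minute=0),  # every day at 06:00
    },
}
```

The same charge_subscription.delay(sub.pk) call is what you would invoke from a view or signal the moment a new billing cycle is created, which covers the "run automatically when the billing cycle arrives" part of the question.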

Related

View the scheduled tasks of celery in Django

I am scheduling some tasks with apply_async(), providing a countdown for each task. Once the tasks execute, I can view their state and logs with django-celery-results' TaskResult model. Now, I know all the scheduled tasks are added to the message queue, from where I should be able to retrieve the scheduled tasks that haven't executed yet. But is there any other way to view them with state/logs etc.? Or will I need to create a model similar to TaskResult and implement the feature myself?
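One detail worth knowing here: tasks queued with a countdown/ETA are delivered to a worker immediately and held in worker memory until they are due, so they never appear in django-celery-results until they actually run. The closest built-in view is asking the workers directly, sketched below (myproject.celery is a hypothetical import path for your Celery app):

```python
# Ask the running workers which tasks they are holding with an ETA/countdown.
from myproject.celery import app  # hypothetical path to your Celery app

inspector = app.control.inspect()
scheduled = inspector.scheduled() or {}  # {worker_name: [task dicts], ...}
for worker, tasks in scheduled.items():
    for task in tasks:
        print(worker, task["request"]["id"], task["eta"])
```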

Cron Jobs vs Task Scheduler table for scheduled emails

Preamble: I have a web app whose backend is based on a serverless architecture. It's basically an Amplify app hosted on AWS with a DynamoDB database. I've learnt it's possible to create a task scheduling system of sorts (more here). A quick summary of the article: it's possible to create a task scheduling table by taking advantage of TTL and DynamoDB streams to execute a Lambda function at specific times. The TTL specifies a set time for a record to be deleted; we can capture this delete event in a DynamoDB stream and run some tasks based on information from the stream.
Problem:
The goal is to send a series of emails to users who sign up for our service. Each user that signs up gets a series of "Getting Started" emails. The first of the emails is sent 24 hours after a user signs up, the second 3 days later and the third exactly 7 days after sign up.
I see how a cron job would be suitable here, but it just seems a bit inefficient to me. I would basically have to search the users table for users whose sign-up time falls within a specific 24-hour window and send the email to them, whereas with a task scheduler table I could add a task to the table (something like "send first email to user300", with a TTL of when I want it sent) and listen for delete events to run the task. No need to run a cron job daily, just a function that handles each task as it comes.
I think this is more of a performance vs. storage problem. Having a task scheduler table would take up space: if we add all the emails to be sent to a user as tasks on the table (each email to a specific user being its own task) each time a user signs up, then the task scheduler table grows by 3n records for every n users signed up. But this may not really be a problem, as tasks are deleted after they run. I do not know the performance cost of using a cron job for this particular task, hence I'm here. I may also be wrong, and the cost of running and updating this task scheduler table may be more than that of the cron job.
I initially thought of setting up a dummy user table, running both the cron job and the task scheduler, and documenting the cost of both, but you can imagine how much time and effort that would take.
So I guess my question is which is a more efficient solution in terms of performance and cost?
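To make the comparison concrete, the task-scheduler-table mechanism described above might be sketched like this with boto3; the table name, attribute names, and send_email are all hypothetical:

```python
# Sketch of the TTL + streams scheduling idea; table and attribute names,
# and send_email, are hypothetical.
import time
import boto3

dynamodb = boto3.client("dynamodb")


def schedule_getting_started_emails(user_id):
    """On sign-up, write one task item per email, expiring when it is due."""
    now = int(time.time())
    for email_number, delay_days in enumerate((1, 3, 7), start=1):
        dynamodb.put_item(
            TableName="EmailTasks",
            Item={
                "taskId": {"S": f"{user_id}#email{email_number}"},
                "userId": {"S": user_id},
                "emailNumber": {"N": str(email_number)},
                "expiresAt": {"N": str(now + delay_days * 86400)},  # TTL attr
            },
        )


def send_email(user_id, email_number):
    """Hypothetical: look up the user and send getting-started email N."""
    ...


def stream_handler(event, context):
    """Lambda on the table's stream: run tasks when TTL deletes them."""
    for record in event["Records"]:
        if record["eventName"] != "REMOVE":
            continue
        # TTL deletions are issued by the DynamoDB service principal;
        # this check skips ordinary application deletes.
        identity = record.get("userIdentity", {})
        if identity.get("principalId") != "dynamodb.amazonaws.com":
            continue
        task = record["dynamodb"]["OldImage"]
        send_email(task["userId"]["S"], int(task["emailNumber"]["N"]))
```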
There is no perfect solution here. Keep in mind that DynamoDB TTL can take up to 48 hours to fire, so it's probably unacceptable for this. Cron jobs with Lambda are cheap and easy to set up. You could also use SQS and populate it with a daily cron. Yan Cui wrote a great article about this problem: https://theburningmonk.com/2019/03/dynamodb-ttl-as-an-ad-hoc-scheduling-mechanism/
This may not exactly be an answer. Based on the Medium article you linked, the author had a plausible reason why TTL and DynamoDB streams would be better than a cron job, which you reiterated. Setting up a cron job is easier and cheaper (free), and I doubt the performance will be much worse unless the database is huge. I don't have any experience doing something like this, so I wouldn't know how large the database would have to be before it makes sense to switch over. Alternatively, you can have as many cron jobs as you want, so I don't see why you couldn't just set up a user-specific cron job whenever someone signs up.
You can set up a CloudWatch Events rule to fire a Lambda function on a regular schedule. The Lambda function can search a database for an applicable result set and perform other actions: send an email, a text message, etc.
Here is an AWS tutorial that covers a very similar use case with step-by-step instructions. The tutorial is implemented with the AWS SDK for Java (but you can implement it using other supported programming languages).
https://github.com/awsdocs/aws-doc-sdk-examples/tree/master/javav2/usecases/creating_scheduled_events
From a cost perspective, Lambda allows 1M free requests per month. Details are here: https://aws.amazon.com/lambda/pricing/
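For reference, the schedule itself is only a couple of API calls if you prefer to create it programmatically; the rule name, cron expression, and Lambda ARN below are placeholders:

```python
# Create a scheduled CloudWatch Events rule and point it at a Lambda.
# Rule name, schedule, and ARN are placeholders.
import boto3

events = boto3.client("events")

events.put_rule(
    Name="daily-email-sweep",
    ScheduleExpression="cron(0 9 * * ? *)",  # every day at 09:00 UTC
    State="ENABLED",
)
events.put_targets(
    Rule="daily-email-sweep",
    Targets=[{
        "Id": "email-sweep-lambda",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:emailSweep",
    }],
)
# The Lambda also needs a resource policy allowing events.amazonaws.com
# to invoke it (lambda add-permission).
```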

How to skip waiting for the response from the db when saving data in Django?

The challenge is that I need to recompute all the data I have in the db after saving a new instance. The computation takes no more than 2 minutes, which is fine for my problem. I have a custom save method, and all I need to do is go through all the items and call item.save(); but as I said, it takes more than 30 seconds, so I run into request timeouts (I'm using Heroku, btw, whose router cuts requests off after 30 seconds). Any ideas on how to deal with this?
@Uladzislau Malinouski, you can use an asynchronous tool like Celery.
The way it works is that the task taking a significant amount of time is wrapped as an asynchronous task, and the computing is done in the background. You can set up Celery on Heroku by following this guide: https://devcenter.heroku.com/articles/celery-heroku
If you are using a free account and have not provided your card details on Heroku, it won't be possible to add the add-on for a broker used alongside Celery, such as Redis or RabbitMQ.
In that case, you can follow this guide to schedule an asynchronous task on Heroku instead: https://devcenter.heroku.com/articles/clock-processes-python
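A minimal sketch of that wrapping, assuming Celery is already configured; the Item model and task name are placeholders:

```python
# tasks.py -- move the slow recompute out of the request/response cycle.
from celery import shared_task


@shared_task
def recompute_all_items():
    from myapp.models import Item  # hypothetical model being recomputed
    for item in Item.objects.all():
        item.save()  # runs the custom save()/recompute logic


# In the view (or the custom save method), enqueue instead of running inline:
#     recompute_all_items.delay()
# The request returns immediately, so Heroku's 30-second limit is never hit.
```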

Django scheduled jobs

I'm writing some auction functionality for a website powered by Django, and there is an option to have the auction be extended by a few seconds every time somebody bids (so that there is no last-minute sniping). When the auction is finished, certain things need to be done like notifying the winner of their victory and notifying the seller to whom they need to send the item.
How, in Django, do I go about running an auction-completion script whenever an auction ends, especially when the auction end time may be a moving target?
Thanks.
It sounds like extending an auction_end DateTimeField could easily be done in the bid() view.
As far as emailing outcomes, you should write a management command (http://docs.djangoproject.com/en/dev/howto/custom-management-commands/) that checks for auctions that have ended but haven't had notifications sent, and schedule it to run from cron at regular intervals (every minute if necessary). An example is Pinax's django-notification emit_notices script:
http://github.com/jtauber/django-notification/blob/master/notification/management/commands/emit_notices.py
Better still, you might be able to leverage django-notification itself.
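A minimal sketch of such a management command, which cron could then run every minute with python manage.py close_auctions; the Auction model and its fields are hypothetical:

```python
# myapp/management/commands/close_auctions.py -- hypothetical sketch.
from django.core.management.base import BaseCommand
from django.utils import timezone


class Command(BaseCommand):
    help = "Close ended auctions and notify the winner and seller."

    def handle(self, *args, **options):
        from myapp.models import Auction  # hypothetical model
        ended = Auction.objects.filter(
            auction_end__lte=timezone.now(), notified=False
        )
        for auction in ended:
            # ... email the winner and the seller here ...
            auction.notified = True
            auction.save()
```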
Django command extensions (django-extensions) provides a jobs system for job scheduling. After you add its runjobs command to the crontab, it is capable of running jobs hourly/daily/weekly/monthly.
I think it should be easy to extend a jobs management command to accomplish your tasks, as sketched below.
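With django-extensions, such a job might look like this; the module path and the auction logic are the hypothetical parts:

```python
# myapp/jobs/hourly/close_auctions.py -- discovered by django-extensions'
# runjobs command, e.g. "python manage.py runjobs hourly" from cron.
from django_extensions.management.jobs import HourlyJob


class Job(HourlyJob):
    help = "Close ended auctions and send notifications."

    def execute(self):
        # ... same auction-closing logic as in the management command ...
        pass
```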
Sounds like you probably want to have whatever code is responsible for closing the auction fire a custom signal to indicate that it's happening.
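A sketch of that signal wiring; the signal and receiver names are illustrative:

```python
# signals.py -- a custom signal decouples "the auction closed" from the
# notification work that must happen as a result.
import django.dispatch

auction_closed = django.dispatch.Signal()  # hypothetical custom signal


def notify_participants(sender, auction, **kwargs):
    """Receiver: email the winner and tell the seller where to ship."""
    ...  # send the notifications here


auction_closed.connect(notify_participants)

# Wherever the auction actually gets closed (management command, view, etc.):
#     auction_closed.send(sender=Auction, auction=auction)
```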

How to best launch an asynchronous job request in Django view?

One of my view functions is a very long processing job and clearly needs to be handled differently.
Instead of making the user wait a long time, it would be best if I were able to launch the processing job (which would email the results), notify the user that their request is being processed without waiting for completion, and let them browse on.
I know I can use os.fork, but I was wondering if there is a 'right way' to do this in Django. Perhaps I can return the HTTP response and then carry on with the job somehow?
There are a couple of solutions to this problem, and the best one depends a bit on how heavy your workload will be.
If you have a light workload you can use the approach used by django-mailer, which is to define a "jobs" model, save new jobs into the database, and then have cron run a stand-alone script every so often to process the jobs stored in the database (deleting them when done). You can use something like django-chronograph to manage the job scheduling more easily; a bare-bones sketch of the jobs-model idea follows below.
If you need help understanding how to write a script to process the jobs, see James Bennett's article Standalone Django Scripts.
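Here is that sketch; the field names and handle() are hypothetical:

```python
# models.py -- a minimal "jobs" table in the spirit of django-mailer.
from django.db import models


class Job(models.Model):
    payload = models.TextField()
    created = models.DateTimeField(auto_now_add=True)


# A stand-alone script (see Bennett's article for the environment setup)
# that cron runs every few minutes:
def handle(payload):
    """Hypothetical per-job work: send the email, crunch the data, etc."""
    ...


def process_jobs():
    for job in Job.objects.all():
        handle(job.payload)
        job.delete()  # remove the job once it has been processed
```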
If you have a very high workload, meaning you'll need more than a single server to process the jobs, then you want a real distributed task queue. There is a lot of competition here, so I can't really detail all the options, but a good one to use with Django apps is Celery.
Why not simply start a thread to do the processing and then go on to send the response?
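That can work for light, best-effort jobs, though a plain thread dies with the web process and gives you no retries, which is why the queue-based answers are usually preferred. A bare-bones illustration (view and helper names are made up):

```python
# views.py -- fire-and-forget processing in a background thread.
import threading

from django.http import HttpResponse


def long_job(data):
    # ... heavy processing, then email the results ...
    ...


def my_view(request):
    thread = threading.Thread(target=long_job, args=(request.POST.dict(),))
    thread.daemon = True  # do not block process shutdown
    thread.start()
    return HttpResponse("Your request is being processed.")
```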
Before you select a solution, you need to determine how the process will be run. That is, is it the same process for every single user, with the same data, so it can be scheduled regularly? Or does each user request something different, with slightly different results?
As an example, if the data will be the same for every single user and can be run on a schedule, you could use cron.
See: http://www.b-list.org/weblog/2007/sep/22/standalone-django-scripts/
or
http://docs.djangoproject.com/en/dev/howto/custom-management-commands/
However, if the requests will be ad hoc and you need something scalable that can handle high load asynchronously, what you are actually looking for is a message queuing system. Your view will add a request to the queue, which will then get acted upon.
There are a few options to implement this in Django:
Django Queue Service is purely Django & Python and simple, though the last commit was in April and the project seems to have been abandoned.
http://code.google.com/p/django-queue-service/
The second option, if you need something that scales, is distributed, and makes use of open-source message queuing servers: Celery is what you need.
http://ask.github.com/celery/introduction.html
http://github.com/ask/celery/tree