APScheduler is triggering twice - Flask

I'm using the apscheduler library to run a cron job every 5 minutes so that it sends push notifications, but it is sending them twice.
job_defaults = {
    'coalesce': True,  # the accumulated task will only run once
    'max_instances': 1000
}
schedule = BackgroundScheduler(job_defaults=job_defaults)

@app.before_first_request
@schedule.scheduled_job('interval', minutes=5)
def timed_job():
    with app.app_context():
        time = datetime.now().replace(microsecond=0)
        print("calling create {0}".format(time))
        schedule_notifications.create()
I have tried this solution: apscheduler in Flask executes twice.
But it is still being called twice.
I have also updated this in the Flask app:
app.run(use_reloader=False)
I suspect the master and child processes are both calling it. Is there a way to stop this?
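One common cause, although it cannot be confirmed from the question alone, is that two processes import the module and each starts its own scheduler: the Flask debug reloader's parent and child, or multiple workers of a pre-forking server such as gunicorn. A minimal sketch of guarding the scheduler start when running under the Werkzeug development server with the reloader enabled (with use_reloader=False or a different server this guard would need adjusting):

import os

from apscheduler.schedulers.background import BackgroundScheduler

schedule = BackgroundScheduler(job_defaults={'coalesce': True, 'max_instances': 1000})

# WERKZEUG_RUN_MAIN is set to "true" only in the reloader child process that
# actually serves requests, so the scheduler starts exactly once instead of in
# both the parent and the reloaded child.
if os.environ.get('WERKZEUG_RUN_MAIN') == 'true':
    schedule.start()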

Related

Mock async_task of Django-q

I'm using django-q and I'm currently working on adding tests using mock for my existing tasks. I could easily create tests for each task without depending on django-q, but one of my tasks calls another one via async_task. Here's an example:
import requests
from django_q.tasks import async_task

def task_a():
    response = requests.get(url)
    # process response here
    if condition:
        async_task('task_b')

def task_b():
    response = requests.get(another_url)
And here's how I test them:
from unittest import mock

import requests
from django.test import TestCase

from .tasks import task_a
from .mock_responses import task_a_response

class TaskTests(TestCase):
    @mock.patch.object(requests, "get")
    @mock.patch("django_q.tasks.async_task")
    def test_async_task(self, mock_async_task, mock_task_a):
        mock_task_a.return_value.status_code = 200
        mock_task_a.return_value.json.return_value = task_a_response
        mock_async_task.return_value = "12345"
        # execute the task
        task_a()
        self.assertTrue(mock_task_a.called)
        self.assertTrue(mock_async_task.called)
I know for a fact that async_task returns the task ID, hence the line mock_async_task.return_value = "12345". However, after running the test, mock_async_task.called is False and the task is actually being added to the queue (I can see a bunch of 01:42:59 [Q] INFO Enqueued 1 messages from the server), which is what I'm trying to avoid. Is there any way to accomplish this?
In order to prevent the task from being added to the queue, you need to set the sync configuration option to True while the tests are running. You can find more info about the configuration in the django-q documentation.
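As a rough sketch, this could look like the following in the settings module used for tests (the project name here is a placeholder):

# Q_CLUSTER configuration for test settings; with sync=True, async_task()
# runs the task immediately in-process and nothing is enqueued on the broker.
Q_CLUSTER = {
    "name": "myproject",  # placeholder project name
    "sync": True,
}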

Not able to execute a task in the background using APScheduler

I used BlockingScheduler before, but I am facing a problem using BackgroundScheduler.
I need to run a scheduled task after returning a value, but the scheduled task is never executed.
import datetime

from apscheduler.schedulers.background import BackgroundScheduler

def my_job(text):
    print(text)

def job1():
    now = datetime.datetime.now()
    sched = BackgroundScheduler()
    sched.add_job(my_job, 'date',
                  run_date=now + datetime.timedelta(seconds=20),
                  args=['text'])
    sched.start()

def fun1():
    try:
        return "hello"
    finally:
        job1()

print(fun1())
I only get "hello" as output and the code exits. The expected output is "hello" followed by "text", printed once after 20 seconds. Please let me know what I messed up!
You may find this FAQ entry enlightening.
To summarize, a Python script will exit once it reaches the end, unless non-daemonic threads are active. The scheduler thread is daemonic by default.
Furthermore, it is bad practice to create a new scheduler in a function and not save the instance in a global variable which could be used to schedule further jobs or to shut down the scheduler. The way your code works now is that it will keep creating new schedulers without shutting down the previous ones.
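A minimal sketch of the pattern the answer describes, with a single module-level scheduler and the main thread kept alive long enough for the job to fire (the final sleep is only there so this standalone example does not exit immediately; in a real application the web server or main loop keeps the process alive):

import datetime
import time

from apscheduler.schedulers.background import BackgroundScheduler

# one module-level scheduler instead of a new one per call
sched = BackgroundScheduler()
sched.start()

def my_job(text):
    print(text)

def fun1():
    try:
        return "hello"
    finally:
        sched.add_job(my_job, 'date',
                      run_date=datetime.datetime.now() + datetime.timedelta(seconds=20),
                      args=['text'])

print(fun1())

# The scheduler thread is daemonic, so keep the main thread alive long enough
# for the job to run, then shut the scheduler down cleanly.
time.sleep(25)
sched.shutdown()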

How to schedule a celery task without blocking Django

I have a Django service that registers a lot of clients and returns a payload containing a timer (let's say 800 s), after which the client should be suspended by the service (its status changed from REGISTERED to SUSPENDED in MongoDB).
I'm running celery with rabbitmq as broker as follows:
celery/tasks.py
@app.task(bind=True, name='suspend_nf')
def suspend_nf(pk):
    collection.update_one({'instanceId': str(pk)},
                          {'$set': {'nfStatus': 'SUSPENDED'}})
and I call the task inside a Django view like this:
api/views.py
def put(self, request, pk):
    now = datetime.datetime.now(tz=pytz.timezone(TIME_ZONE))
    timer = now + datetime.timedelta(seconds=response_data["heartBeatTimer"])
    suspend_nf.apply_async(eta=timer)

    response = Response(data=response_data, status=status.HTTP_202_ACCEPTED)
    response['Location'] = str(request.build_absolute_uri())
What am I missing here?
Are you saying that your view blocks entirely, or that it waits for the ETA before completing execution?
Did you receive any error?
Try using the countdown parameter instead of eta.
In your case it's better because you don't need to manipulate dates.
Like this: suspend_nf.apply_async(countdown=response_data["heartBeatTimer"])
Let's see if your view behaves differently.
I have finally found a workaround: since I'm working on a small project, I don't really need Celery + RabbitMQ; a simple thread does the job.
The task looks like this:
def suspend_nf(pk, timer):
    time.sleep(timer)
    collection.update_one({'instanceId': str(pk)},
                          {'$set': {'nfStatus': 'SUSPENDED'}})
And it is called inside the view like this:
timer = int(response_data["heartBeatTimer"])
thread = threading.Thread(target=suspend_nf, args=(pk, timer), kwargs={})
thread.daemon = True
thread.start()

Django Celery reduce time, 5 hours to complete 1000 tasks

I'm running in a development environment, so this may be different in production, but when I run a task from Django Celery, it seems to only fetch tasks from the broker every 10-20 seconds. I'm only testing at this point, but let's say I'm sending around 1000 tasks; that means it will take over 5 hours to complete.
Is this normal? Should it be quicker? Or am I doing something wrong?
This is my task
class SendMessage(Task):
    name = "Sending SMS"
    max_retries = 10
    default_retry_delay = 3

    def run(self, message_id, gateway_id=None, **kwargs):
        logging.debug("About to send a message.")

        # Because we don't always have control over transactions
        # in our calling code, we will retry up to 10 times, every 3
        # seconds, in order to try to allow for the commit to the database
        # to finish. That gives the server 30 seconds to write all of
        # the data to the database, and finish the view.
        try:
            message = Message.objects.get(pk=message_id)
        except Exception as exc:
            raise SendMessage.retry(exc=exc)

        if not gateway_id:
            if hasattr(message.billee, 'sms_gateway'):
                gateway = message.billee.sms_gateway
            else:
                gateway = Gateway.objects.all()[0]
        else:
            gateway = Gateway.objects.get(pk=gateway_id)

        #response = gateway._send(message)
        print(message_id)

        logging.debug("Done sending message.")
which gets run from my view
for e in Contact.objects.filter(contact_owner=request.user etc etc):
    SendMessage.delay(e.id, message)
Yes, this is normal. This is the default worker behaviour; the default is set so that it does not affect the performance of the app.
There is another way to change it. The task decorator can take a number of options that change the way the task behaves. Any keyword argument passed to the task decorator will actually be set as an attribute of the resulting task class.
You can set a rate limit, which limits the number of tasks that can be run in a given time frame. For example, in your settings:
CELERY_DEFAULT_RATE_LIMIT = "100/m"  # 100 tasks per minute; "/s" (per second) and "/h" (per hour) also work
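The same limit can also be set per task rather than globally. A rough sketch, assuming a decorator-style task (with the class-based task from the question, rate_limit can instead be set as a class attribute):

from celery import shared_task

# rate_limit becomes an attribute of the generated task class, so this task is
# throttled to 100 executions per minute independently of the global default.
@shared_task(rate_limit="100/m")
def send_message(message_id, gateway_id=None):
    ...  # placeholder body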

Django: Getting Django-cron Running

I am trying to get Django-cron running, but it seems to only run after hitting the site once. I am using Virtualenv.
Any ideas why it only runs once?
On my PATH, I added the location of django_cron: '/Users/emilepetrone/Workspace/zapgeo/zapgeo/django_cron'
My cron.py file within my Django app:
from django_cron import cronScheduler, Job
from products.views import index

class GetProducts(Job):
    run_every = 60

    def job(self):
        index()

cronScheduler.register(GetProducts)

class GetLocation(Job):
    run_every = 60

    def job(self):
        index()

cronScheduler.register(GetLocation)
The first possible reason
There is a variable in django_cron/base.py:
# how often to check if jobs are ready to be run (in seconds)
# in reality if you have a multithreaded server, it may get checked
# more often than this number suggests, so keep an eye on it...
# default value: 300 seconds == 5 min
polling_frequency = getattr(settings, "CRON_POLLING_FREQUENCY", 300)
So, the minimum interval at which django-cron checks whether it is time to start your task is polling_frequency. You can change it by setting, in your project's settings.py:
CRON_POLLING_FREQUENCY = 100 # use your custom value in seconds here
To start a job, hit your server at least once after starting the Django web server.
The second possible reason
Your job has an error and it is not queued (the queued flag is set to 'f' if your job raises an exception). In this case the string value 'f' is stored in the 'queued' field of the 'django_cron_job' table. You can check this with the query:
select queued from django_cron_job;
If you change the code of your job, the field may stay at 'f'. So, if you correct the error in your job, you should manually set the queued field back to 't'. Alternatively, the executing flag in the django_cron_cron table may be 't', which means your application server was stopped while your task was in progress. In that case you should manually set it back to 'f'.
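A minimal sketch of resetting those flags from a Django shell (the table and column names are the ones mentioned above; run it only after the job's code has been fixed):

from django.db import connection

# Re-queue jobs that were flagged 'f' after an exception, and clear the
# 'executing' flag left over from a server that was stopped mid-run.
with connection.cursor() as cursor:
    cursor.execute("UPDATE django_cron_job SET queued = 't'")
    cursor.execute("UPDATE django_cron_cron SET executing = 'f'")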