For a scenario with sales orders, I need to execute a task with a given delay.
To accomplish this, I added a task to my tasks.py file like so:
from huey import crontab
from huey.contrib.djhuey import db_task
@db_task(delay=3600)
def do_something_delayed(instance):
    print("Do something delayed...by 3600 seconds")
However, this delay setting doesn't seem to delay anything. The task is just scheduled and executed immediately.
What am I doing wrong?
Thanks to coleifer on the GitHub repo:
https://github.com/coleifer/huey/issues/678#issuecomment-1184540964
The task() decorators do not accept a delay parameter, see https://huey.readthedocs.io/en/latest/api.html#Huey.task
I assume you've already read the docs on scheduling/delaying invocations of tasks: https://huey.readthedocs.io/en/latest/guide.html#scheduling-tasks -- but this applies to individual invocations.
If you want your task to always be delayed by 1 hour, the better way would be:
@db_task()
def do_something(instance):
    print("Do something")

def do_something_delayed(instance):
    return do_something.schedule((instance,), delay=3600)
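If you only need to delay a single invocation rather than every call, the approach from the scheduling guide also works directly at the call site; a small sketch (instance and some_datetime are just placeholders):

# Enqueue one invocation to run roughly an hour from now.
do_something.schedule(args=(instance,), delay=3600)
# Or schedule it for an absolute time instead:
# do_something.schedule(args=(instance,), eta=some_datetime)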
Related
I'm using django-q and I'm currently working on adding tests for my existing tasks using mock. I could easily create tests for each task without depending on django-q, but one of my tasks calls another task via async_task. Here's an example:
import requests
from django_q.tasks import async_task
def task_a():
    response = requests.get(url)
    # process response here
    if condition:
        async_task('task_b')

def task_b():
    response = requests.get(another_url)
And here's how I test them:
import requests
from unittest import mock

from .tasks import task_a
from .mock_responses import task_a_response

@mock.patch.object(requests, "get")
@mock.patch("django_q.tasks.async_task")
def test_async_task(self, mock_async_task, mock_task_a):
    mock_task_a.return_value.status_code = 200
    mock_task_a.return_value.json.return_value = task_a_response
    mock_async_task.return_value = "12345"
    # execute the task
    task_a()
    self.assertTrue(mock_task_a.called)
    self.assertTrue(mock_async_task.called)
I know for a fact that async_task returns the task ID, hence the line mock_async_task.return_value = "12345". However, after running the test, mock_async_task.called is False and the task is actually added to the queue (I can see a bunch of 01:42:59 [Q] INFO Enqueued 1 messages from the server), which is exactly what I'm trying to avoid. Is there any way to accomplish this?
In order to prevent the task from being added to the queue, you need to set the sync configuration option to True while the tests are running. You can find more info about the configuration options here
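For example, with django-q's Q_CLUSTER settings dict, a minimal sketch of a test-settings override might look like this (the cluster name is a placeholder):

# test settings module
Q_CLUSTER = {
    "name": "myproject",  # placeholder cluster name
    "sync": True,         # run async_task calls synchronously during tests
}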
I used BlockingScheduler before, but I am facing a problem using BackgroundScheduler.
I need to run a scheduled task after returning a value, but the scheduled task is never executed.
import datetime

from apscheduler.schedulers.background import BackgroundScheduler

def my_job(text):
    print(text)

def job1():
    now = datetime.datetime.now()
    sched = BackgroundScheduler()
    sched.add_job(my_job, 'date',
                  run_date=now + datetime.timedelta(seconds=20),
                  args=['text'])
    sched.start()

def fun1():
    try:
        return "hello"
    finally:
        job1()

print(fun1())
I am only getting "hello" as output and then the code exits. The expected output is "hello" followed by "text", which should be printed once after 20 seconds. Please let me know what I messed up!
You may find this FAQ entry enlightening.
To summarize, a Python script will exit once it reaches the end, unless non-daemonic threads are still active. The scheduler thread is daemonic by default.
Furthermore, it is bad practice to create a new scheduler in a function and not save the instance in a global variable that could be used to schedule further jobs or to shut down the scheduler. As your code stands now, it will keep creating new schedulers without ever shutting down the previous ones.
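A minimal sketch of that advice (the sleep-based keep-alive at the end is just one way to stop the script from exiting before the job fires):

import datetime
import time

from apscheduler.schedulers.background import BackgroundScheduler

# Create one scheduler at module level and reuse it for all jobs.
sched = BackgroundScheduler()
sched.start()

def my_job(text):
    print(text)

def job1():
    run_date = datetime.datetime.now() + datetime.timedelta(seconds=20)
    sched.add_job(my_job, 'date', run_date=run_date, args=['text'])

def fun1():
    try:
        return "hello"
    finally:
        job1()

print(fun1())

# Keep the main thread alive so the daemonic scheduler thread can run the job.
try:
    time.sleep(30)
finally:
    sched.shutdown()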
I have a Django service that registers a lot of clients and renders a payload containing a timer (let's say 800 s), after which the client should be suspended by the service (change status REGISTERED to SUSPENDED in MongoDB).
I'm running Celery with RabbitMQ as the broker, as follows:
celery/tasks.py
@app.task(bind=True, name='suspend_nf')
def suspend_nf(pk):
    collection.update_one({'instanceId': str(pk)},
                          {'$set': {'nfStatus': 'SUSPENDED'}})
and calling the task inside Django view like:
api/views.py
def put(self, request, pk):
    now = datetime.datetime.now(tz=pytz.timezone(TIME_ZONE))
    timer = now + datetime.timedelta(seconds=response_data["heartBeatTimer"])
    suspend_nf.apply_async(eta=timer)
    response = Response(data=response_data, status=status.HTTP_202_ACCEPTED)
    response['Location'] = str(request.build_absolute_uri())
What am I missing here?
Are you saying that your view blocks entirely, or that it waits for the ETA before completing the execution?
Did you receive any error?
Try using the countdown parameter instead of eta.
In your case it's better because you don't need to manipulate dates.
Like this: suspend_nf.apply_async(countdown=response_data["heartBeatTimer"])
Let's see if your view behaves any differently.
I have finally found a workaround: since I'm working on a small project, I don't really need Celery + RabbitMQ; a simple thread does the job.
The task looks like this:
def suspend_nf(pk, timer):
    time.sleep(timer)
    collection.update_one({'instanceId': str(pk)},
                          {'$set': {'nfStatus': 'SUSPENDED'}})
And it is called inside the view like this:
timer = int(response_data["heartBeatTimer"])
thread = threading.Thread(target=suspend_nf, args=(pk, timer), kwargs={})
thread.daemon = True
thread.start()
I have a Django model which has a column called celery_task_id. I am using RabbitMQ as the broker. There's a Celery task called test_celery which takes a model object as a parameter. Now I have the following lines of code, which create a Celery task.
def create_celery_task():
    celery_task_id = test_celery.apply_async((model_obj,), eta='Future Datetime Object')
    model_obj.celery_task_id = celery_task_id
    model_obj.save()
----
----
Now, inside the Celery task, I am verifying whether the task id is the same as the one stored in the DB:
@app.task
def test_celery(model_obj):
    if model_obj.celery_task_id == test_celery.request.id:
        ## Do something
My problem is that, in a lot of cases, I can see the task being received and succeeding in the log, but the code inside the if condition is never executed.
Is it possible that the Celery task id changes after redistribution? Or are there other reasons?
One of the recommendations is not to pass database/ORM objects into Celery tasks, because they may contain stale data. Try to rewrite the task as:
@app.task
def test_celery(model_obj_id):
    # filter().first() returns None if the row no longer exists
    model_obj = YourModel.objects.filter(id=model_obj_id).first()
    if model_obj:
        if model_obj.celery_task_id == test_celery.request.id:
            ## Do something
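For completeness, the call side would then pass only the primary key. A rough sketch (create_celery_task and run_at are just illustrative names; note that apply_async returns an AsyncResult, whose .id attribute is the task id):

def create_celery_task(model_obj, run_at):
    # Pass the primary key rather than the ORM object itself.
    result = test_celery.apply_async((model_obj.id,), eta=run_at)
    # apply_async returns an AsyncResult; store its .id, not the result object.
    model_obj.celery_task_id = result.id
    model_obj.save()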
I have a bunch of code working correctly in Flask, but these requests can take over 30 minutes to finish. I am using chained generators so that my existing code, which uses yields, can stream its output back to the browser.
Since these tasks take 30 minutes or more to complete, I want to offload them, but I am at a loss. I have not successfully gotten Celery/RabbitMQ/Redis or any other combination to work correctly, and I am looking for how I can accomplish this so that my page returns right away and I can check in the background whether the task is complete.
Here is example code that works for now but takes 4 seconds of processing for the page to return.
I am looking for advice on how to get around this problem. Can Celery/Redis or RabbitMQ deal with generators like this? Should I be looking at a different solution?
Thanks!
import time
import flask
from itertools import chain
class TestClass(object):
    def __init__(self):
        self.a = 4

    def first_generator(self):
        b = self.a + 2
        yield str(self.a) + '\n'
        time.sleep(1)
        yield str(b) + '\n'

    def second_generator(self):
        time.sleep(1)
        yield '5\n'

    def third_generator(self):
        time.sleep(1)
        yield '6\n'

    def application(self):
        return chain(tc.first_generator(),
                     tc.second_generator(),
                     tc.third_generator())

tc = TestClass()
app = flask.Flask(__name__)

@app.route('/')
def process():
    return flask.Response(tc.application(), mimetype='text/plain')

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000, debug=True)
Firstly, it's not clear what it would even mean to "pass a generator to Celery". The whole point of Celery is that it is not directly linked to your app: it's a completely separate thing, maybe even running on a separate machine, to which you would pass some fixed data. You can of course pass the initial parameters and get Celery itself to call the functions that create the generators for processing, but you can't drip-feed data to Celery.
Secondly, this is not at all an appropriate use for Celery in any case. Celery is for offline processing. You can't get it to return stuff to a waiting request. The only thing you could do would be to get it to save the results somewhere accessible by Flask, and then get your template to fire an Ajax request to get those results when they are available.
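A minimal sketch of that pattern, assuming a Celery app configured with a result backend (the broker/backend URLs, route names, import path, and long_running_job are all placeholders; TestClass is the class from the question):

import flask
from itertools import chain
from celery import Celery
from celery.result import AsyncResult

from myapp import TestClass  # the class from the question; adjust the import path

# Placeholder broker/backend URLs - adjust to your own setup.
celery_app = Celery('tasks',
                    broker='redis://localhost:6379/0',
                    backend='redis://localhost:6379/0')

@celery_app.task
def long_running_job():
    # Drain the generators inside the worker and return the collected output.
    tc = TestClass()
    return ''.join(chain(tc.first_generator(),
                         tc.second_generator(),
                         tc.third_generator()))

app = flask.Flask(__name__)

@app.route('/start')
def start():
    # Kick off the work and return immediately with the task id.
    result = long_running_job.delay()
    return {'task_id': result.id}, 202

@app.route('/status/<task_id>')
def status(task_id):
    # The page (or an Ajax call) polls this endpoint until the task is ready.
    result = AsyncResult(task_id, app=celery_app)
    if result.ready():
        return {'state': result.state, 'output': result.get()}
    return {'state': result.state}, 202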