I have been using viewflow for a while, and I managed to create my process without any problem.
But now I need someone to review the work of someone else. I don't want to create roles for that simple task, because I want everybody to be able to review somebody else's work at any time. In other words, there is one task (task1) that can be executed by everybody, but it cannot be executed by the same person who finished the previous task.
task1 = (
    flow.View(
        UpdateProcessView,
        fields=["quality_check", "quality_check_comments"],
        task_description="write comments"
    ).Permission(
        auto_create=True
    ).Next(this.task2)
)

task2 = (
    flow.View(
        UpdateProcessView,
        fields=["quality_check_completed"],
        task_description="Perform a quality control on the work instructions"
    ).Permission(
        auto_create=True
    ).Next(this.check_qa_manual)
)
From "Django-viewflow how to get the current user?" I understand that I can assign the task to the user that created the task or to the owner of a previous task, but I want the opposite. Is there a way to say .Assign(this.start.(not)owner) or .Assign(this.start.(not)created_by)?
You could implement a custom callable for the user selection.
.Assign accepts a callable that should take a process activation and return a user instance, e.g.:
flow.View(...).Assign(lambda act: User.objects.filter(...).first())
https://github.com/viewflow/viewflow/blob/master/demo/shipment/flows.py#L57
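As a minimal sketch of that idea (an assumption on my part: that the previous task and its owner are reachable through the activation's task.previous relation; adjust the lookup to match your flow):

from django.contrib.auth import get_user_model

def anyone_but_previous_owner(activation):
    # Assumption: the task that just finished is linked via `previous`
    # and has an `owner`; exclude that user and hand the review to any
    # other active user.
    User = get_user_model()
    previous = activation.task.previous.first()
    users = User.objects.filter(is_active=True)
    if previous and previous.owner_id:
        users = users.exclude(pk=previous.owner_id)
    return users.first()

flow.View(...).Assign(anyone_but_previous_owner)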
I have this code:
size = Size.objects.get(size = 'XS')
good1.Size.remove(size)
return redirect('/')
time.sleep(600)
good1.Size.add(size)
So, I need to recover a model object after 10 minutes, but the user must be redirected to another page and be able to use other pages of the site during those 10 minutes.
How can I do it?
Your best option would be to delegate the task of recovering the object to a background worker process using something like Celery. By making use of task.apply_async(countdown=60 * 10) you could redirect your user and have Celery take care of recovering the object for you.
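For example (a rough sketch based on your snippet; the Good/Size model names and the restore_size task are assumptions):

# tasks.py -- re-attach the size once the countdown has expired
from celery import shared_task
from .models import Good, Size

@shared_task
def restore_size(good_id, size_id):
    good = Good.objects.get(pk=good_id)
    size = Size.objects.get(pk=size_id)
    good.Size.add(size)

# views.py -- remove the size, schedule the restore, and redirect immediately
from django.shortcuts import redirect
from .models import Good, Size
from .tasks import restore_size

def remove_size(request, good_id):
    good1 = Good.objects.get(pk=good_id)
    size = Size.objects.get(size='XS')
    good1.Size.remove(size)
    restore_size.apply_async(args=[good1.pk, size.pk], countdown=60 * 10)  # 10 minutes
    return redirect('/')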
I have these requirements:
I have a few heavy, resource-consuming tasks - exporting different reports that require big, complex queries and subqueries.
There are a lot of users.
I have built the project in Django and queue the tasks using Celery.
I want to restrict users so that they can request 10 reports per minute. The idea is that they can put in hundreds of requests over 10 minutes, but I want Celery to execute only 10 tasks per user, so that every user gets their turn.
Is there any way Celery can do this?
Thanks
Celery has a setting to control the rate_limit (http://celery.readthedocs.org/en/latest/userguide/tasks.html#Task.rate_limit); it limits the number of tasks that can run in a given time frame.
You could set this to '100/m' (one hundred per minute), meaning your system allows 100 tasks per minute. It's important to note that this setting is neither per user nor per task instance; it's per time frame.
Have you thought about this approach instead of limiting per user?
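For example (a sketch; export_report is just a placeholder name for your report task):

from celery import shared_task

@shared_task(rate_limit='100/m')  # at most 100 executions per minute, per worker instance
def export_report(report_id):
    ...  # run the heavy queries and build the report here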
In order to have a 'rate_limit' per task and user pair you will have to do it yourself. I think (not sure) you could use a TaskRouter or a signal, based on your needs.
TaskRouters (http://celery.readthedocs.org/en/latest/userguide/routing.html#routers) allow you to route tasks to a specific queue by applying some logic.
Signals (http://celery.readthedocs.org/en/latest/userguide/signals.html) allow you to execute code at a few well-defined points of the task's scheduling cycle.
An example of the Router's logic could be:

if task == 'A':
    user_id = args[0]  # in this task the user_id is the first arg
    qty = get_task_qty('A', user_id)
    if qty > LIMIT_FOR_A:
        return
elif task == 'B':
    user_id = args[2]  # in this task the user_id is the third arg
    qty = get_task_qty('B', user_id)
    if qty > LIMIT_FOR_B:
        return
return {'queue': 'default'}
With the approach above, every time a task starts you should increment the user_id/task_type pair by one somewhere (for example in Redis), and every time a task finishes you should decrement that value in the same place.
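As a sketch of that bookkeeping (using Celery's task_prerun/task_postrun signals and a Redis counter; the key layout and the 'A' task name are just examples):

import redis
from celery.signals import task_prerun, task_postrun

r = redis.StrictRedis()

def counter_key(task_name, user_id):
    return 'running:{}:{}'.format(task_name, user_id)

@task_prerun.connect
def increment_running(task_id=None, task=None, args=None, **kwargs):
    if task.name == 'A':  # only track the tasks you want to limit
        r.incr(counter_key(task.name, args[0]))

@task_postrun.connect
def decrement_running(task_id=None, task=None, args=None, **kwargs):
    if task.name == 'A':
        r.decr(counter_key(task.name, args[0]))

The get_task_qty() call in the router above would then just read that counter back.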
It seems kind of complex, hard to maintain, and with a few failure points, in my opinion.
Another approach, which I think could fit, is to implement some kind of 'distributed semaphore' (similar to a distributed lock) per user and task, so that each task which needs to limit the number of running tasks can use it.
The idea is that every time a task which should have 'concurrency control' starts, it has to check whether a resource is available; if not, it just returns.
You could imagine this idea as below:
@shared_task
def my_task_A(user_id, arg1, arg2):
    resource_key = 'my_task_A_{}'.format(user_id)
    available = SemaphoreManager.is_available_resource(resource_key)
    if not available:
        # no resources, so abort
        return
    # the resource could have been acquired just before us by another task
    if not SemaphoreManager.acquire(resource_key):
        return
    try:
        pass  # execute your code here
    finally:
        SemaphoreManager.release(resource_key)
It's hard to say which approach you SHOULD take, because that depends on your application.
Hope it helps you!
Good luck!
cred_query = credits_tbl.query(ancestor=user_key).fetch(1)

for q in cred_query:
    q.total_credits = q.total_credits + credits_bought
    q.put()
I have a task running which is constantly updating a users total_credits in the credits table.
While that task runs the user can also buy additional credits at any point (as shown in the code above) to add to the total. However, when they try to do so, it does not update the total_credits in the credits table.
I guess I don't understand the 'strongly consistent' modelling of appengine (using ndb) as well as I thought.
Do you know why this happens?
My goal is to retrieve all the task_ids from a django celery chord call so that I can revoke the tasks later if needed. However, I cannot figure out the correct method to retrieve the task ids. I execute the chord as:
c = chord((loadTask.s(i) for i in range(0, num_lines, CHUNK_SIZE)), finalizeTask.si())
task_result = c.delay()
# get task_ids
I examined the task_result's children variable, but it is None.
I can manually recreate the chord semantics by using a group and another task as follows, and retrieve the associated task_ids, but I do not like breaking up the call. When this code is run within a task as subtasks, it can cause the main task to hang if the group is revoked before the finalize task begins.
g = group((loadTask.s(i) for i in range(0, num_lines, CHUNK_SIZE)))
task_result = g.delay()
storeTaskIds(task_result.children)
task_result.get()
task_result2 = self.finalizeTask.delay()
storeTaskIds([task_result2.task_id])
Any thoughts would be appreciated!
I'm trying to do something similar, I was hoping I could just revoke the chord with one call and everything within it would be recursively revoked for me.
You could make a chord out of the group and your finalizeTask to keep from breaking up the calls.
I realize this is coming two months after you asked, but maybe it'll help someone and maybe I should just get the task ids of everything in my group.
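For what it's worth, a sketch of that idea (in reasonably recent Celery versions the chord callback's AsyncResult keeps the header GroupResult in .parent, which holds the individual task ids; check that this holds for your version):

from celery import chord, group

header = group(loadTask.s(i) for i in range(0, num_lines, CHUNK_SIZE))
result = chord(header)(finalizeTask.si())

callback_id = result.id                              # id of finalizeTask
header_ids = [r.id for r in result.parent.results]   # ids of the loadTask subtasks
storeTaskIds(header_ids + [callback_id])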
In one of my applications I want to limit users to only a specific number of document conversions each calendar month, and I want to notify them of the conversions they've made and the number of conversions they can still make in that calendar month.
So I do something like the following:
class CustomUser(models.Model):
    # user fields here

    def get_converted_docs(self):
        return self.document_set.filter(date__range=[start, end]).count()

    def remaining_docs(self):
        converted = self.get_converted_docs()
        return LIMIT - converted
Now, document conversion is done in the background using Celery. So there may be a situation where a conversion task is pending; in that case the above methods would let a user make an extra conversion, because the pending task is not included in the count.
How can I get the number of pending tasks for a specific CustomUser object here?
Update
OK, so I tried the following:
from celery.task.control import inspect

def get_scheduled_tasks():
    tasks = []
    scheduled = inspect().scheduled()
    for task in scheduled.values():
        tasks.extend(task)
    return tasks
This gives me a list of scheduled tasks, but now all the values are unicode strings; for the task mentioned above, the args look like this:
u'args': u'(<Document: test_document.doc>, <CustomUser: Test User>)'
Is there a way these can be decoded back into the original Django objects so that I can filter them?
Store the state of your documents somewhere else; don't inspect your queue.
Either create a separate model for that, or e.g. keep a state on your document model, in any case independently from your queue. This has several advantages:
Inspecting the queue might be expensive, depending on the backend used for it, and as you have seen it can also turn out to be difficult.
Your queue might not be persistent. If, for example, your server crashes and you use something like Redis, you would lose this information, so it's a good thing to have a record somewhere else from which you could reconstruct the queue.
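A minimal sketch of that idea (the status field and choice names are just examples): create the Document row with a pending status before queueing the conversion, let the task update it, and count documents instead of asking Celery.

from django.db import models
from celery import shared_task

class Document(models.Model):
    PENDING, CONVERTING, DONE = 'pending', 'converting', 'done'
    STATUS_CHOICES = [(PENDING, 'Pending'), (CONVERTING, 'Converting'), (DONE, 'Done')]

    user = models.ForeignKey(CustomUser, on_delete=models.CASCADE)
    date = models.DateTimeField(auto_now_add=True)
    status = models.CharField(max_length=20, choices=STATUS_CHOICES, default=PENDING)

@shared_task
def convert_document(document_id):
    doc = Document.objects.get(pk=document_id)
    doc.status = Document.CONVERTING
    doc.save(update_fields=['status'])
    # ... do the actual conversion ...
    doc.status = Document.DONE
    doc.save(update_fields=['status'])

Since the Document row already exists (with status 'pending') before the Celery task runs, your existing get_converted_docs() count automatically includes conversions that are still in flight, and no queue inspection is needed.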