With Viewflow, my use case is this:
A user is assigned multiple tasks. He wants to select some of the tasks and apply the same transition (Approve/Reject) to all of them. How can he do this?
Nothing specific is needed. Just activate and complete the activation for each task. For safety, you need to lock all the involved processes up front with .select_for_update().
# Querysets are lazy: force evaluation so the row locks are actually taken.
# (select_for_update must run inside a transaction.)
list(Process.objects.filter(...).select_for_update())
for task in _list_of_tasks_:
    activation = task.activate()
    activation.prepare()
    # do something, e.g.:
    # activation.process.approved = False
    # activation.process.save()
    activation.done()
Related
I have read and write queries in a single Django view function, as in the code below:
def multi_query_function(request):
    model_data = MyModel.objects.all()  # first read command
    # ...(do something)...
    new_data = MyModel(
        id=1234,
        first_property='random value',
        second_property='another value'
    )
    new_data.save()  # second write command
    return render(request, 'index.html')
I need the queries in this function to execute as a unit. If multiple users hit this view at the same time, it should run for them one at a time: one user's read should only be allowed once the previous user has completed both his read and his write, and the two users' queries should never be interleaved.
Should I use the table-locking feature of my PostgreSQL DB, or is there another well-managed way?
Yep, using your database's locks is a good way to do this.
https://github.com/Xof/django-pglocks looks like a good library to give you a lock context manager.
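The property you want, that the whole read+write section runs for one user at a time, can be sketched in-process with a plain lock. This is only a toy stand-in: a threading.Lock serializes threads within one process, whereas django-pglocks' advisory_lock context manager serializes across processes via PostgreSQL advisory locks. The function and user names below are made up.

```python
import threading

# Stand-in for a cross-process advisory lock: within one process a
# threading.Lock gives the same "one critical section at a time" behavior.
view_lock = threading.Lock()
log = []

def multi_query_function(user):
    # Serialize the whole read+write section.
    with view_lock:
        log.append((user, 'read'))   # first read command
        log.append((user, 'write'))  # second write command

threads = [threading.Thread(target=multi_query_function, args=(u,))
           for u in ('alice', 'bob')]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Each user's read is immediately followed by that same user's write:
# the two critical sections never interleave.
assert all(log[i][0] == log[i + 1][0] for i in range(0, len(log), 2))
```

With django-pglocks the body of the view would instead sit inside `with advisory_lock('my-view-lock'):`, giving the same guarantee across all server processes sharing the database.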
I'm building a Django app that will periodically take information from an external source, and use it to update model objects.
What I want to be able to do is create a QuerySet which has all the objects which might match the final list, then check which model objects need to be created, updated, and deleted, and then (ideally) perform the update in the fewest number of transactions, without performing any unnecessary DB operations.
Using update_or_create gets me most of the way to what I want to do.
jobs = get_current_jobs(host, user)
for host, user, name, defaults in jobs:
    obj, _ = Job.objects.update_or_create(host=host, user=user, name=name, defaults=defaults)
The problem with this approach is that it doesn't delete anything that no longer exists.
I could just delete everything up front, or do something dumb like
to_delete = set(Job.objects.filter(host=host, user=user)) - set(current)
(Which is an option) but I feel like there must already be an elegant solution that doesn't require either deleting everything, or reading everything into memory.
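One middle ground is to fetch only the natural keys (and the fields you compare) rather than whole objects, then compute the three sets in memory. A minimal, ORM-free sketch of that diff, where the helper name and dict shapes are made up for illustration:

```python
def diff_jobs(current, desired):
    """Split desired state vs. current DB state into creates, updates
    and deletes, keyed by a natural key such as (host, user, name).
    Both arguments map natural key -> field values."""
    to_create = {k: v for k, v in desired.items() if k not in current}
    to_update = {k: v for k, v in desired.items()
                 if k in current and current[k] != v}
    to_delete = set(current) - set(desired)
    return to_create, to_update, to_delete

current = {('h1', 'u1', 'a'): {'prio': 1}, ('h1', 'u1', 'b'): {'prio': 2}}
desired = {('h1', 'u1', 'a'): {'prio': 9}, ('h1', 'u1', 'c'): {'prio': 3}}
creates, updates, deletes = diff_jobs(current, desired)
# creates: job 'c'; updates: job 'a'; deletes: job 'b'
```

In Django you would then call update_or_create only for the creates/updates and issue a single delete() queryset for the stale keys, which avoids both deleting everything up front and loading full objects into memory.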
You should use Redis for storage and use the redis-py package in your code. For example:
import json
import redis
import requests

pool = redis.StrictRedis('localhost')
time_in_seconds = 3600  # the time period you want to keep your data
response = requests.get("url_to_ext_source")
# Redis stores strings/bytes, not dicts, so serialize the JSON body first.
pool.set("api_response", json.dumps(response.json()), ex=time_in_seconds)
I want to perform some tasks in the background, but with something like "run as". In other words, as if the task were launched by the user from the context of his session.
Something like
def perform
  env['warden'].set_user(@task_owner_user)
  MyService::current_user_dependent_method
end
but I'm not sure it won't collide with other tasks. I'm not very familiar with Sidekiq.
Can I safely perform separate tasks, each with a different user context, somehow?
I'm not sure what you're shooting for with the "run as" context, but I've always set up Sidekiq jobs that need a unique object by passing the id to perform. This way the worker always knows which object it is working on. Maybe this is what you're looking for?
def perform(id)
  user = User.find(id)
  user.current_user_dependent_method
end
Then set up a route in a controller for triggering this worker to start, something like:
def custom_route_for_performing_job
  @users = User.where(your_conditions)
  @users.each do |user|
    YourWorker.perform_async user.id
  end
  redirect_to :back, notice: "Starting background job for users_dependent_method"
end
The proper design is to use server-side middleware plus a thread-local variable to set the current user context per job.
class MyServerMiddleware
  def call(worker, message, queue)
    Thread.current[:current_user] = message['uid'] if message['uid']
    yield
  ensure
    Thread.current[:current_user] = nil
  end
end
You'd create a client-side middleware to capture the current uid and put it in the message. In this way, the logic is encapsulated distinctly from any one type of Worker. Read more:
https://github.com/mperham/sidekiq/wiki/Middleware
I am new to django/python and working my way through my webapp. I need assistance in solving one of my problems.
In my app, I am planning to assign each user (from auth_user) to one of the groups (from auth_group). Each group can have multiple users. I have entries in auth_group, auth_user and auth_user_groups. Here is my question:
At login time I want to check which group the logging-in user belongs to.
I want to keep that group info in the session/cache, so that on every page I can show information about that group only.
Any sample code would be great.
To support @trinchet's very good answer, here is an example of context-processor code.
Put a new file called context_processors.py inside your webapp and write these lines in it:
def user_groups(request):
    """
    Add a `groups` variable to the context with all the
    groups the logged-in user has, so you can access
    it in your templates as: {{ groups }}
    """
    groups = None
    if request.user.is_authenticated():
        groups = request.user.groups.all()
    return {'groups': groups}
Finally, in your settings.py, add 'webbapp.context_processors.user_groups' to TEMPLATE_CONTEXT_PROCESSORS:
TEMPLATE_CONTEXT_PROCESSORS = (
'webbapp.context_processors.user_groups',
)
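If you want to sanity-check the processor outside a full Django stack, you can exercise it with hand-rolled stubs. All the Stub* names below are made up; only user_groups mirrors the processor above (note this targets old Django where is_authenticated was a method).

```python
class StubGroups:
    """Minimal stand-in for the related manager behind user.groups."""
    def __init__(self, names):
        self._names = names
    def all(self):
        return self._names

class StubUser:
    def __init__(self, names, authenticated=True):
        self.groups = StubGroups(names)
        self._auth = authenticated
    def is_authenticated(self):
        return self._auth

class StubRequest:
    def __init__(self, user):
        self.user = user

def user_groups(request):
    # Same logic as the context processor above.
    groups = None
    if request.user.is_authenticated():
        groups = request.user.groups.all()
    return {'groups': groups}

ctx = user_groups(StubRequest(StubUser(['editors', 'admins'])))
# ctx['groups'] -> ['editors', 'admins']
```

An anonymous user gets {'groups': None}, so templates can safely test {% if groups %}.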
1) Let user be an instance of auth.models.User; you can get all the groups the user belongs to through user.groups. If you want to check at login time, do it in your login view.
2) Whether you use the session or the cache for storage is up to you; either way, once you have the group you need to make it available when rendering your pages, i.e. you need to provide the group to the template context. For that I suggest using a custom context processor.
I have a question regarding transactions and celery tasks. It's no mystery to me that if a transaction and a celery task access the same table/records, we can have a race condition.
However, consider the following piece of code:
def f(self):
    # method of a model class that inherits from models.Model
    self.field_a = datetime.now()
    self.save()
    transaction.commit_unless_managed()
    # depending on the configuration of this module
    # this might return None or a datetime object.
    eta = self.get_task_eta()
    if eta:
        celery_task_do_something.apply_async(args=(self.pk, self.__class__),
                                             eta=eta)
    else:
        celery_task_do_something.delay(self.pk, self.__class__)
Here's the celery task:
def celery_task_do_something(pk, cls):
    o = cls.objects.get(pk=pk)
    if o.field_a:
        # perform something
        return True
    return False
As you can see, before creating the task we call transaction.commit_unless_managed and it should commit, since django transaction is not currently managed.
However, when the celery task runs, field_a is not set.
My question:
Since we do commit before creating the task, is it still possible that there's a race condition?
Additional info
We're using Postgres version 9.1
Every transaction is run with READ COMMITTED isolation level
On a different db with engine dowant.lib.db.backends.postgresql_psycopg2_debugger field_a is already set and the task works as expected. With engine dowant.lib.db.backends.postgresql_psycopg2_hstore_ready the described issue appears (not sure if it's related with the engine).
Celery version is 2.2
I tried different databases. Still the same behavior, except when the engines change. So that's why I mentioned this.
Thanks a lot.
Try adding self.__class__.objects.select_for_update().get(pk=self.pk) before the save and see what happens.
It should block other transactions from updating or locking this row until the commit is done.
This is late, but since Django 1.9 you can defer the enqueue until the transaction has committed:
transaction.on_commit(lambda: enqueue_atask())
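Why on_commit fixes the race can be sketched without Django: callbacks registered via on_commit only fire after the data is durably committed, so the worker can never observe the pre-commit state. FakeTransaction below is a toy stand-in for django.db.transaction, not the real API.

```python
class FakeTransaction:
    """Toy stand-in for django.db.transaction: rows written before
    commit are invisible, and on_commit callbacks run only after
    commit() succeeds."""
    def __init__(self):
        self._callbacks = []
        self._pending_rows = {}
        self.committed_rows = {}

    def save(self, pk, field_a):
        # Not yet visible to other connections (e.g. the celery worker).
        self._pending_rows[pk] = field_a

    def on_commit(self, fn):
        self._callbacks.append(fn)

    def commit(self):
        self.committed_rows.update(self._pending_rows)
        for fn in self._callbacks:
            fn()

seen = []

def celery_task_do_something(txn, pk):
    # The worker only ever reads committed data.
    seen.append(txn.committed_rows.get(pk))

txn = FakeTransaction()
txn.save(1, "2024-01-01")
txn.on_commit(lambda: celery_task_do_something(txn, 1))
assert seen == []               # nothing ran before commit
txn.commit()
assert seen == ["2024-01-01"]   # the task sees the committed field_a
```

With the real API, replacing the .delay()/.apply_async() calls in f() with transaction.on_commit(lambda: celery_task_do_something.delay(...)) guarantees the worker's cls.objects.get(pk=pk) sees the saved field_a.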