Our system talks to a middle platform (I am not sure what to call it in English; we call it 中台 in Chinese) that handles authentication: logging in, JWT verification, and so on.
We have run into a problem when we need to roll back actions because of unexpected program errors. In the code below, the program crashes at 1 / 0; the AdminPermission.objects.create call can be rolled back, but do_user_actions cannot, because it is an RPC call. So we need to override transaction.atomic (or something like it) to implement this requirement, but I do not know how. Please give me some suggestions or sample code. Thanks a lot.
# @transaction.atomic  # cannot roll back the remote call do_user_actions
def create(self, request, *args, **kwargs):
    # call a remote function here to create an admin account
    user_dict = do_user_actions(query_kwargs=query_kwargs, action='create-admin')
    user_id = user_dict.get('id')
    permission_code = 'base_permission'
    AdminPermission.objects.create(user_id=user_id, permission_code=permission_code)
    # some unexpected error, like:
    1 / 0
    return Response('success')
Instead of using atomic as a decorator, you could use it as a context manager, something like this:
with transaction.atomic():
    try:
        # do your stuff
        user_dict = do_user_actions(query_kwargs=query_kwargs, action='create-admin')
        user_id = user_dict.get('id')
        permission_code = 'base_permission'
        AdminPermission.objects.create(user_id=user_id, permission_code=permission_code)
        1 / 0  # error
    except SomeError:  # catch the error
        revert_user_actions()  # revert do_user_actions
        transaction.set_rollback(True)  # roll back the local transaction
        return Response(status=status.HTTP_424_FAILED_DEPENDENCY)  # failure
return Response(serializer.data, status=status.HTTP_201_CREATED)  # success
Django Docs - Transactions
UPDATE
As @Gorgine mentioned in the comments, the documentation recommends against handling errors inside atomic. Since it is always a good idea to follow the recommendations, you can put the atomic block inside the try block instead. In that case the atomic block handles the rollback if an error occurs, so in the except block you only need to handle the actions that atomic cannot roll back. Something like this:
try:
    with transaction.atomic():
        # do your stuff
        user_dict = do_user_actions(query_kwargs=query_kwargs, action='create-admin')
        user_id = user_dict.get('id')
        permission_code = 'base_permission'
        AdminPermission.objects.create(user_id=user_id, permission_code=permission_code)
        1 / 0  # error
except SomeError:  # catch the error
    revert_user_actions()  # revert do_user_actions
    return Response(status=status.HTTP_424_FAILED_DEPENDENCY)  # failure
return Response(serializer.data, status=status.HTTP_201_CREATED)  # success
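For completeness, revert_user_actions in the snippets above is just a placeholder. A rough sketch of what such a compensating call could look like, assuming the remote service exposes a matching delete action (the 'delete-admin' action name and the arguments are hypothetical, not part of the original question):

import logging

logger = logging.getLogger(__name__)

def revert_user_actions(user_id):
    # Hypothetical compensating call: undo the remote 'create-admin' action.
    try:
        do_user_actions(query_kwargs={'id': user_id}, action='delete-admin')
    except Exception:
        # If the compensating call itself fails, log it so the orphaned
        # remote account can be cleaned up manually later.
        logger.exception('Failed to revert remote admin creation for user %s', user_id)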
Related
How could one handle exceptions globally with Flask? I have found ways to use the following to handle custom db interactions:
try:
    sess.add(cat2)
    sess.commit()
except sqlalchemy.exc.IntegrityError as exc:
    reason = exc.message
    if reason.endswith('is not unique'):
        print("%s already exists" % exc.params[0])
    sess.rollback()
The problem with try/except is that I would have to wrap every part of my code in it. I can find better ways to do that for custom code. My question is directed more towards global catching and handling for:
apimanager.create_api(
    Model,
    collection_name="models",
    **base_writable_api_settings
)
I have found that this apimanager accepts a validation_exceptions argument, e.g. validation_exceptions=[ValidationError], but I have found no examples of it being used.
I would still like a higher tier of handling that affects all db interactions, with a simple concept of "if this error, show this; if another error, show something else", running on all interactions/exceptions automatically without me adding it to every apimanager call (putting it in my base_writable_api_settings is fine, I guess): IntegrityError, NameError, DataError, DatabaseError, etc.
I tend to set up an error handler on the app that formats the exception into a json response. Then you can create custom exceptions like UnauthorizedException...
import sys
import traceback

from flask import jsonify

class Unauthorized(Exception):
    status_code = 401

@app.errorhandler(Exception)
def _(error):
    trace = traceback.format_exc()
    status_code = getattr(error, 'status_code', 400)
    response_dict = dict(getattr(error, 'payload', None) or ())
    response_dict['message'] = getattr(error, 'message', None)
    response_dict['traceback'] = trace
    response = jsonify(response_dict)
    response.status_code = status_code
    traceback.print_exc(file=sys.stdout)
    return response
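With that handler registered, a view only needs to raise the custom exception and the client gets a JSON 401 back. A minimal sketch (the route and the permission check are made up for illustration):

@app.route('/secret')
def secret():
    if not user_is_allowed():  # hypothetical permission check
        raise Unauthorized('You are not allowed to see this')
    return jsonify(data='top secret')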
You can also handle specific exceptions using this pattern...
@app.errorhandler(ValidationError)
def handle_validation_error(error):
    # Do something...
Error handlers get attached to the app, not the apimanager. You probably have something like
app = Flask(__name__)
apimanager = APIManager(app)
...
Put this somewhere using that app object.
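For the database errors listed in the question, the handlers would hang off that same app object. A sketch, assuming SQLAlchemy's exception classes and purely illustrative response bodies:

from flask import jsonify
from sqlalchemy.exc import DataError, IntegrityError

@app.errorhandler(IntegrityError)
def handle_integrity_error(error):
    return jsonify(message='A record with these values already exists'), 409

@app.errorhandler(DataError)
def handle_data_error(error):
    return jsonify(message='One of the submitted values is invalid'), 400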
My preferred approach uses decorated view-functions.
You could define a decorator like the following:
from functools import wraps

def handle_exceptions(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except ValidationError:
            pass  # do something
        except HTTPException:
            pass  # do something else ...
        except MyCustomException:
            pass  # do a third thing
    return wrapper
Then you can simply decorate your view-functions, e.g.
@app.route('/')
@handle_exceptions
def index():
    # ...
I unfortunately do not know about the hooks Flask-Restless offers for passing view-functions.
I would like to execute a transaction that deletes some values, then counts rows in the db, and rolls back if the result is < 1. I tried the following code:
@login_required
@csrf_exempt
@transaction.atomic
def update_user_groups(request):
    if request.POST:
        userId = request.POST['userId']
        groups = request.POST.getlist('groups[]')
        result = None
        with transaction.atomic():
            try:
                GroupsUsers.objects.filter(user_id=int(userId)).delete()
                for group in groups:
                    group_user = GroupsUsers()
                    group_user.user_id = userId
                    group_user.group_id = group
                    group_user.save()
                count = UsersInAdmin.objects.all().count()
                if count < 1:
                    transaction.commit()
                else:
                    transaction.rollback()
            except Exception as e:
                result = e
    return JsonResponse(result, safe=False)
Thanks,
It's not possible to manually commit or roll back the transaction inside an atomic block.
Instead, you can raise an exception inside the atomic block. If the block completes, the transaction will be committed; if you raise an exception, it will be rolled back. Outside the atomic block, you can catch the exception and carry on with your view.
try:
    with transaction.atomic():
        do_stuff()
        if ok_to_commit():
            pass
        else:
            raise ValueError()
except ValueError:
    pass
I think you should give the Django documentation about db transactions another look.
The most relevant remark is the following:
Avoid catching exceptions inside atomic!
When exiting an atomic block, Django looks at whether it’s exited normally or with an exception to determine whether to commit or roll back. If you catch and handle exceptions inside an atomic block, you may hide from Django the fact that a problem has happened. This can result in unexpected behavior.
Adapting the example from the docs, it looks like you should nest your try and with blocks like this:
@transaction.atomic
def update_user_groups(request):
    # set your variables
    try:
        with transaction.atomic():
            # do your for loop
            if count < 1:
                # You should probably raise a more descriptive, non-generic exception; see
                # http://stackoverflow.com/questions/1319615/proper-way-to-declare-custom-exceptions-in-modern-python for more info
                raise Exception('Count is less than one')
    except:
        handle_exception()
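A minimal version of that more descriptive exception could look like this (the class name is just an example, and the body reuses the models from the question):

class AdminCountTooLowError(Exception):
    """Raised when the update would leave fewer than one admin user."""

try:
    with transaction.atomic():
        # ... delete and recreate the group memberships ...
        if UsersInAdmin.objects.count() < 1:
            raise AdminCountTooLowError('At least one admin user must remain')
except AdminCountTooLowError as e:
    result = str(e)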
Consider this simple example:
# a bank account class
class Account:
    @transaction.commit_on_success
    def withdraw(self, amount):
        # code to withdraw money from the account

    @transaction.commit_on_success
    def add(self, amount):
        # code to add money to the account

# somewhere else
@transaction.commit_on_success
def makeMoneyTransaction(src_account, dst_account, amount):
    src_account.withdraw(amount)
    dst_account.add(amount)
(taken from https://code.djangoproject.com/ticket/2227)
If an exception is raised in Account.add(), the transaction in Account.withdraw() will still be committed and money will be lost, because Django doesn't currently handle nested transactions.
Without applying patches to Django, how can we make sure that the commit is sent to the database, but only when the main function under the @transaction.commit_on_success decorator finishes without raising an exception?
I came across this snippet: http://djangosnippets.org/snippets/1343/ and it seems like it could do the job. Are there any drawbacks I should be aware of if I use it?
Huge thanks in advance if you can help.
P.S. I am copying the previously cited code snippet for purposes of visibility:
def nested_commit_on_success(func):
    """Like commit_on_success, but doesn't commit existing transactions.

    This decorator is used to run a function within the scope of a
    database transaction, committing the transaction on success and
    rolling it back if an exception occurs.

    Unlike the standard transaction.commit_on_success decorator, this
    version first checks whether a transaction is already active. If so
    then it doesn't perform any commits or rollbacks, leaving that up to
    whoever is managing the active transaction.
    """
    commit_on_success = transaction.commit_on_success(func)
    def _nested_commit_on_success(*args, **kwds):
        if transaction.is_managed():
            return func(*args, **kwds)
        else:
            return commit_on_success(*args, **kwds)
    return transaction.wraps(func)(_nested_commit_on_success)
The problem with this snippet is that it doesn't give you the ability to roll back an inner transaction without rolling back the outer transaction as well. For example:
@nested_commit_on_success
def inner():
    # [do stuff in the DB]

@nested_commit_on_success
def outer():
    # [do stuff in the DB]
    try:
        inner()
    except:
        # this did not work, but we want to handle the error and
        # do something else instead:
        # [do stuff in the DB]

outer()
In the example above, even if inner() raises an exception, its content won't be rolled back.
What you need is a savepoint for the inner "transactions". For your code, it might look like this:
# a bank account class
class Account:
    def withdraw(self, amount):
        sid = transaction.savepoint()
        try:
            # code to withdraw money from the account
        except:
            transaction.savepoint_rollback(sid)
            raise

    def add(self, amount):
        sid = transaction.savepoint()
        try:
            # code to add money to the account
        except:
            transaction.savepoint_rollback(sid)
            raise

# somewhere else
@transaction.commit_on_success
def makeMoneyTransaction(src_account, dst_account, amount):
    src_account.withdraw(amount)
    dst_account.add(amount)
As of Django 1.6, the atomic() decorator does exactly that: the outermost use opens a transaction, and any nested use only creates a savepoint.
Django 1.6 introduces @atomic, which does exactly what I was looking for!
Not only does it support "nested" transactions, it also replaces the old, less powerful decorators. And it is good to have a single, consistent behavior for transaction management across different Django apps.
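For reference, this is roughly how the outer/inner example behaves with atomic's nested-savepoint semantics (a sketch based on the documented behavior; the exception class is made up for illustration):

from django.db import transaction

class InnerFailed(Exception):  # made-up exception for the sketch
    pass

@transaction.atomic
def inner():
    # [do stuff in the DB]
    raise InnerFailed()

@transaction.atomic
def outer():
    # [do stuff in the DB]  (runs inside the real, outermost transaction)
    try:
        inner()  # the nested atomic block only creates a savepoint
    except InnerFailed:
        # inner()'s changes were rolled back to the savepoint; the work done
        # directly in outer() is still intact and will be committed when
        # outer() exits normally.
        pass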
In my django piston API, I want to yield/return an HTTP response to the client before calling another function that will take quite some time. How do I make the yield give an HTTP response containing the desired JSON and not a string relating to the creation of a generator object?
My piston handler method looks like so:
def create(self, request):
    data = request.data
    # ... other operations ...
    incident.save()
    response = rc.CREATED
    response.content = {"id": str(incident.id)}
    yield response
    manage_incident(incident)
Instead of the response I want, like:
{"id":"13"}
The client gets a string like this:
"<generator object create at 0x102c50050>"
EDIT:
I realise that using yield was the wrong way to go about this. In essence, what I am trying to achieve is that the client receives a response right away, before the server moves on to the time-costly manage_incident() function.
This doesn't have anything to do with generators or yielding, but I've used the following code and decorator to have things run in the background while returning the client an HTTP response immediately.
Usage:
#postpone
def long_process():
do things...
def some_view(request):
long_process()
return HttpResponse(...)
And here's the code to make it work:
import atexit
import Queue  # Python 2 name; on Python 3 this module is called "queue"
import threading

from django.core.mail import mail_admins

def _worker():
    while True:
        func, args, kwargs = _queue.get()
        try:
            func(*args, **kwargs)
        except:
            import traceback
            details = traceback.format_exc()
            mail_admins('Background process exception', details)
        finally:
            _queue.task_done()  # so we can join at exit

def postpone(func):
    def decorator(*args, **kwargs):
        _queue.put((func, args, kwargs))
    return decorator

_queue = Queue.Queue()
_thread = threading.Thread(target=_worker)
_thread.daemon = True
_thread.start()

def _cleanup():
    _queue.join()  # so we don't exit too soon

atexit.register(_cleanup)
Perhaps you could do something like this (be careful though):
import threading

def create(self, request):
    data = request.data
    # do stuff...
    t = threading.Thread(target=manage_incident, args=(incident,))
    t.setDaemon(True)
    t.start()
    return response
Has anyone tried this? Is it safe? My guess is it's not, mostly because of concurrency issues, but also because if you get a lot of requests you might end up with a lot of threads running at once (since they might run for a while). Still, it might be worth a shot.
Otherwise, you could just add the incident that needs to be managed to your database and handle it later via a cron job or something like that.
I don't think Django is built for concurrency or for very time-consuming operations.
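A minimal sketch of the cron-job idea mentioned above, using a custom management command (the Incident model, its processed field, the import paths and the command name are all assumptions):

# myapp/management/commands/process_incidents.py
from django.core.management.base import BaseCommand

from myapp.models import Incident  # assumed app/model layout
from myapp.utils import manage_incident  # wherever the question's slow function lives

class Command(BaseCommand):
    help = 'Process incidents that have not been handled yet'

    def handle(self, *args, **options):
        for incident in Incident.objects.filter(processed=False):
            manage_incident(incident)  # the slow work from the question
            incident.processed = True
            incident.save(update_fields=['processed'])

Cron can then run python manage.py process_incidents every few minutes.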
Edit
Someone has tried it, and it seems to work.
Edit 2
These kinds of things are often better handled by background jobs. The Django Background Tasks library is nice, but there are of course others.
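A hedged sketch of that route with the django-background-tasks package, assuming its decorator-based API (the task function, the Incident model and the scheduling delay are illustrative):

from background_task import background

from myapp.models import Incident  # assumed app/model layout

@background(schedule=5)  # run roughly 5 seconds after being queued
def manage_incident_later(incident_id):
    incident = Incident.objects.get(pk=incident_id)
    manage_incident(incident)  # the slow function from the question

# In the view, this call only enqueues the task and returns immediately,
# so the HTTP response is not delayed:
# manage_incident_later(incident.id)

A separate worker started with python manage.py process_tasks then picks up the queued tasks.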
You've turned your view into a generator thinking that Django will pick up on that fact and handle it appropriately. Well, it won't.
def create(self, request):
    return HttpResponse(real_create(request))
EDIT:
Since you seem to be having trouble... visualizing it...
def stuff():
    print(1)
    yield 'foo'
    print(2)

for i in stuff():
    print(i)
output:
1
foo
2
I want to check if a user has any new messages each time they load the page. Up until now, I have been doing this inside of my views but it's getting fairly hard to maintain since I have a fair number of views now.
I assume this is the kind of thing middleware is good for: a check that happens on every single page load. What I need it to do is this:
Check if the user is logged in
If they are, check if they have any messages
Store the result so I can reference the information in my templates
Has anyone ever had to write any middleware like this? I've never used middleware before so any help would be greatly appreciated.
You could use middleware for this purpose, but perhaps context processors are more in line with what you want to do.
With middleware, you are attaching data to the request object: you could query the database and find a way to jam the messages into the request. Context processors, on the other hand, let you make extra entries available in the context dictionary for use in your templates.
I think of middleware as providing extra information to your views, while context processors provide extra information to your templates. This is in no way a rule, but in the beginning it can help to think this way (I believe).
def messages_processor(request):
    return {'new_messages': Message.objects.filter(unread=True, user=request.user)}
Include that processor in your settings.py under context processors. Then simply reference new_messages in your templates.
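Registering the processor looks roughly like this in settings.py on newer Django versions (older ones use the TEMPLATE_CONTEXT_PROCESSORS setting instead); the dotted path is an assumption about where you put the function:

# settings.py
TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.contrib.auth.context_processors.auth',
                'django.template.context_processors.request',
                'myapp.context_processors.messages_processor',  # assumed module path
            ],
        },
    },
]

After that, new_messages is available in every template rendered with a RequestContext.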
I have written this middleware on my site for rendering messages. It checks a cookie; if the cookie is not present, it appends the message to the request and sets the cookie. Maybe you can do something similar:
class MyMiddleware:
    def __init__(self):
        # print 'Initialized my Middleware'
        pass

    def process_request(self, request):
        user_id = False
        if request.user.is_active:
            user_id = str(request.user.id)
        self.process_update_messages(request, user_id)

    def process_response(self, request, response):
        self.process_update_messages_response(request, response)
        return response

    def process_update_messages(self, request, user_id=False):
        update_messages = UpdateMessage.objects.exclude(expired=True)
        render_message = False
        request.session['update_messages'] = []
        for message in update_messages:
            if message.expire_time < datetime.datetime.now():
                message.expired = True
                message.save()
            else:
                if request.COOKIES.get(message.cookie(), True) == True:
                    render_message = True
            if render_message:
                request.session['update_messages'].append({'cookie': message.cookie(), 'cookie_max_age': message.cookie_max_age})
                messages.add_message(request, message.level, message)
                break

    def process_update_messages_response(self, request, response):
        try:
            update_messages = request.session['update_messages']
        except:
            update_messages = False
        if update_messages:
            for message in update_messages:
                response.set_cookie(message['cookie'], value=False, max_age=message['cookie_max_age'], expires=None, path='/', domain=None, secure=None)
        return response
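For an old-style middleware class like this, it also has to be registered in settings.py. A sketch (the dotted path is an assumption; on Django 1.10+ you would use the newer MIDDLEWARE setting and middleware style instead):

# settings.py (pre-Django 1.10 style)
MIDDLEWARE_CLASSES = (
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'myapp.middleware.MyMiddleware',  # assumed module path
)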