I had to use transaction.on_commit() for synchronous behaviour in one of the signals of my project. Though it works fine, I couldn't understand how transaction.on_commit() decides which transaction to use. There can be multiple transactions open at the same time, so how does Django know which transaction on_commit() should attach to?
According to the docs
You can also wrap your function in a lambda:
transaction.on_commit(lambda: some_celery_task.delay('arg1'))
The function you pass in will be called immediately after a hypothetical database write made where on_commit() is called would be successfully committed.
If you call on_commit() while there isn’t an active transaction, the callback will be executed immediately.
If that hypothetical database write is instead rolled back (typically when an unhandled exception is raised in an atomic() block), your function will be discarded and never called.
If you are using it in a post_save handler with sender=SomeModel, the on_commit callback is probably registered each time a SomeModel instance is saved. Without the actual code we can't tell the exact case.
If I understand the question correctly, I think the docs on Savepoints explain this.
Essentially, you can nest any number of transactions, but on_commit() is only called after the topmost one commits. However, an on_commit() that's nested within a savepoint will only be called if that savepoint and all the ones above it are committed. So, it's tied to whichever transaction is currently open at the point it's called.
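Concretely, Django keeps the registered callbacks on the database connection together with the savepoint IDs that were open when on_commit() was called, and rolling back a savepoint discards the callbacks registered under it. Here is a toy model of that bookkeeping (not Django's actual code, just an illustration of the idea):

```python
# Toy model of how a connection can track on_commit callbacks per open
# atomic block. Django keeps a similar run_on_commit list on the
# connection, tagged with the savepoint IDs active at registration time.

class Connection:
    def __init__(self):
        self.savepoint_ids = []   # stack of open savepoints (nested atomics)
        self.run_on_commit = []   # (savepoints active at registration, func)
        self.in_atomic = False

    def on_commit(self, func):
        if not self.in_atomic:
            func()                # no transaction: run immediately
        else:
            self.run_on_commit.append((set(self.savepoint_ids), func))

    def savepoint_rollback(self, sid):
        # discard callbacks registered inside the rolled-back savepoint
        self.run_on_commit = [
            (sids, func) for sids, func in self.run_on_commit if sid not in sids
        ]

    def commit(self):
        # only the outermost commit actually fires the callbacks
        callbacks, self.run_on_commit = self.run_on_commit, []
        for _, func in callbacks:
            func()

calls = []
conn = Connection()
conn.in_atomic = True                 # outer atomic block
conn.on_commit(lambda: calls.append("outer"))
conn.savepoint_ids.append("sp1")      # nested atomic -> savepoint
conn.on_commit(lambda: calls.append("inner"))
conn.savepoint_ids.pop()
conn.savepoint_rollback("sp1")        # inner block rolled back
conn.commit()                         # outer transaction commits
print(calls)                          # only the outer callback survives
```

The callback registered inside the rolled-back savepoint never runs, while the one registered directly in the outer transaction fires at the final commit.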
Related
When using Model.objects.bulk_create(), if an exception occurs during insertion, does it roll back the entire operation or continue with the non-conflicting records? And is there any way to know which records were inserted and which threw an error?
If an exception occurs, the entire operation will be rolled back. If you look at the source code you'll see that all database operations are wrapped in transaction.atomic().
There's no way of knowing which values caused the conflict. Such information may be available in the database-specific error message, but that's not part of the API.
Note that as of Django 2.2 there will be an ignore_conflicts parameter that will allow you to explicitly control whether the operation will roll back or whether the conflicts will be ignored.
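The rollback-vs-ignore distinction can be demonstrated with the standard library's sqlite3 module (this is an analogy, not Django itself, though on SQLite Django implements ignore_conflicts with INSERT OR IGNORE):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")

# All-or-nothing: one conflicting row aborts the batch, nothing is kept.
try:
    with conn:  # wraps the block in a transaction, like transaction.atomic()
        conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (2,)])
except sqlite3.IntegrityError:
    pass
print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 0

# ignore_conflicts behaviour: conflicting rows are skipped, the rest
# are inserted.
with conn:
    conn.executemany("INSERT OR IGNORE INTO t VALUES (?)", [(1,), (2,), (2,)])
print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 2
```

Note that even in the ignore case there is no report of which rows were skipped; you only get the surviving rows.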
As the title says, I'm doing a CopySubresourceRegion in a loop, and at some point in there I need to force a wait until it completes. From MSDN's documentation, it looks like I can call ID3D11DeviceContext::Flush, then ID3D11DeviceContext::GetData on an event query created by ID3D11Device::CreateQuery with D3D11_QUERY_EVENT.
I've tried that, and it SEEMS to be working on my tests so far, but there are things I'm uncertain about.
Would it work correctly if I called CreateQuery just once before the loop begins and use that query repeatedly with each GetData call?
Should I destroy the query when I'm done with it to avoid leaking queries? There doesn't seem to be a DestroyQuery method, so maybe call free on my ID3D11Query*?
If I make a call to either ID3D11DeviceContext::Map or Unmap before I need to wait on the copy to finish, do I still need Flush?
Why do you need an explicit wait for completion? D3D11 already shields you internally with regard to resource lifetime and in-flight usage. If you call Map, the system makes sure to wait for completion for you.
Usually we want the opposite behavior: to be able to query for completion in a non-blocking way, using queries, so we know when it is safe to call Map.
For 2: queries are like any other resource in D3D11; you destroy them by calling Release, and you can reuse them. Create a pool of queries, mark one as in use when you issue it, then mark it available again once you have been able to collect its data with GetData.
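The pooling pattern described in point 2 could look roughly like this (pseudocode, not compilable C++; End, Flush, GetData, and Release are the real D3D11 calls involved, and GetData returns S_FALSE while the event is still pending):

```
// Sketch: reuse event queries from a pool instead of creating and
// destroying one per iteration.
pool = []                       // available ID3D11Query objects

acquireQuery(device):
    if pool is not empty: return pool.pop()
    desc = { Query: D3D11_QUERY_EVENT }
    device.CreateQuery(desc, &query)
    return query

// after issuing the copy:
context.End(query)              // insert the event into the command stream
context.Flush()                 // make sure the GPU actually starts the work

// later, poll in a non-blocking way:
while context.GetData(query, &done, sizeof(done), 0) == S_FALSE:
    do other work

pool.push(query)                // mark it available again
// at shutdown: call query.Release() on every query in the pool
```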
The signature for an NDB _post_delete_hook in GAE is:
def _post_delete_hook(cls, key, future):
I am wondering what benefit the future parameter gives. According to the docs on Key.delete, this Future will always be None. The docs even say you cannot use the Future to determine if a delete succeeded. Here they are (from Key.delete in key.py):
"""
This returns a Future, whose result becomes available once the
deletion is complete. If no such entity exists, a Future is still
returned. In all cases the Future's result is None (i.e. there is
no way to tell whether the entity existed or not).
"""
So, my question is, what use is this future parameter? Should I block on it to ensure an NDB delete is done before calling my delete hook? Or is it just a holdover/remnant from the _post_delete_hook's initial implementation and the method now has to take 3 parameters no matter what?
It's a very open-ended question, so I would just like to bolster my App Engine knowledge and see what you guys have in mind/how you've used it in the past.
According to documentation [1]:
If you use post-hooks with asynchronous APIs, the hooks are triggered by calling check_result(), get_result() or yielding (inside a tasklet) an async method's future. Post hooks do not check whether the RPC was successful; the hook runs regardless of failure.
All post- hooks have a Future argument at the end of the call signature. This Future object holds the result of the action. You can call get_result() on this Future to retrieve the result; you can be sure that get_result() won't block, since the Future is complete by the time the hook is called.
To me, the Future argument is just a remnant.
[1] https://cloud.google.com/appengine/docs/standard/python/ndb/creating-entity-models#using_model_hooks
Django 1.6 introduces @transaction.atomic as part of the overhaul of transaction management from 1.5.
I have a function which is called by a Django management command which is in turn called by cron, i.e. no HTTP request triggering transactions in this case. Snippet:
from django.db import transaction

@transaction.commit_on_success
def my_function():
    # code here
In the code block above, commit_on_success uses a single transaction for all the work done in my_function.
Does replacing @transaction.commit_on_success with @transaction.atomic result in identical behaviour? The @transaction.atomic docs state:
Atomicity is the defining property of database transactions. atomic
allows us to create a block of code within which the atomicity on the
database is guaranteed. If the block of code is successfully
completed, the changes are committed to the database. If there is an
exception, the changes are rolled back.
I take it that they result in the same behaviour; correct?
Based on the documentation I have read on the subject, there is a significant difference when these decorators are nested.
Nesting two atomic blocks does not work the same as nesting two commit_on_success blocks.
The problem is that there are two guarantees that you would like to have from these blocks.
You would like the content of the block to be atomic, either everything inside the block is committed, or nothing is committed.
You would like durability, once you have left the block without an exception you are guaranteed, that everything you wrote inside the block is persistent.
It is impossible to provide both guarantees when blocks are nested. If an exception is raised after leaving the innermost block but before leaving the outermost block, you will have to fail in one of two ways:
Fail to provide durability for the innermost block.
Fail to provide atomicity for the outermost block.
Here is where you find the difference. Using commit_on_success would give durability for the innermost block, but no atomicity for the outermost block. Using atomic would give atomicity for the outermost block, but no durability for the innermost block.
Simply raising an exception in case of nesting could prevent you from running into the problem. The innermost block would always raise an exception, thus it never promises any durability. But this loses some flexibility.
A better solution would be to have more granularity about what you are asking for. If you can separately ask for atomicity and durability, then you can perform nesting. You just have to ensure that every block requesting durability is outside those requesting atomicity. Requesting durability inside a block requesting atomicity would have to raise an exception.
atomic is supposed to provide the atomicity part. As far as I can tell, Django 1.6.1 does not have a decorator that can ask for durability. I tried to write one, and posted it on codereview.
Yes. You should use atomic in the places where you previously used commit_on_success.
Since the new transaction system is designed to be more robust and consistent, though, it's possible that you could see different behavior. For example, if you catch database errors and try to continue on you will see a TransactionManagementError, whereas the previous behavior was undefined and probably case-dependent.
But, if you're doing things properly, everything should continue to work the same way.
I am running a calculation in a PL/pgSQL function and I want to use the result of that calculation in my C++ code. What's the best way to do that?
I can insert that result into a table and use it from there, but I'm not sure how well that fares with best practices. Also, I can send a message to stderr with RAISE NOTICE, but I don't know whether I can use that message in my code.
The details here are a bit thin on the ground, so it's hard to say for sure.
Strongly preferable whenever possible is to just get the function's return value directly. SELECT my_function(args) if it returns a single result, or SELECT * FROM my_function(args); if it returns a row or set of rows. Then process the result like any other query result. This is part of the basic use of simple SQL and PL/PgSQL functions.
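As a sketch of that pattern using only the Python standard library (sqlite3 standing in for PostgreSQL, and a registered Python function standing in for the PL/pgSQL function; with PostgreSQL the client side would be libpq or psycopg2, but the shape is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Stand-in for a server-side function; in PostgreSQL this would be a
# PL/pgSQL function defined with CREATE FUNCTION.
conn.create_function("my_function", 2, lambda a, b: a * b)

# The recommended pattern: just SELECT the function and read its return
# value like any other query result.
(result,) = conn.execute("SELECT my_function(?, ?)", (6, 7)).fetchone()
print(result)  # 42
```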
Other options include:
Returning a refcursor. This can be useful in some circumstances where you want to return a dynamic result set or multiple result sets, though it's now mostly superseded by RETURN QUERY and RETURN QUERY EXECUTE. You then FETCH from the refcursor to get the result rows.
LISTENing for an event and having the function NOTIFY when the work is done, possibly with the result as a notify payload. This is useful when the function isn't necessarily called on the same connection as the program that wants to use its results.
Creating a temporary table in the function, then SELECTing from it in the session that called the function.
Emitting log messages via RAISE and setting client_min_messages so you receive them, then processing them. This is a very ugly way to do it and should really be avoided at all costs.
INSERTing the results into an existing non-temporary table, then SELECTing them out once the transaction commits and the rows become visible to other transactions.
Which is better? It depends entirely on what you're trying to do. In almost all cases the correct thing to do is just call the function and process the return value, but there are exceptions in special cases.