I have a view which reads an Excel sheet and saves the data. If any error (500) happens in that view, the database transaction should not be committed; it should be rolled back instead.
I use the following code, but it saves the data before the error occurs. My requirement is that if any error happens in the view, the database should roll back.
from django.db import transaction

@transaction.commit_on_success
def upload_data(request):
    # ..... and so on .....
    obj.save()
    # The error comes here, on this line.
    # I want to roll back the database to the state it was in
    # before this view was called.
    obj1.save()
    # If the error is here, nothing should be saved.
Thanks
Per Django's documentation on transactions, if you're using Django 1.6, I'd wrap the whole view in @transaction.atomic:
from django.db import transaction

@transaction.atomic
def upload_data(request):
    ...
If you want this behavior for your whole app, set ATOMIC_REQUESTS=True in your database configuration, as described in that documentation.
Otherwise, if you're on 1.5 and not getting the behavior you expect, you could switch to @transaction.commit_manually, wrap the whole view in a try block, and do a commit() or rollback() explicitly. It's not elegant, but it might work if you want fine-grained control of when exactly commits happen.
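A rough sketch of that manual approach, using the old Django 1.5 transaction API and the obj/obj1 names from the question's pseudocode:

from django.db import transaction

@transaction.commit_manually
def upload_data(request):
    try:
        # ... read the Excel sheet and build objects, as in the question ...
        obj.save()
        obj1.save()
    except Exception:
        # Any error: undo everything done so far in this view
        transaction.rollback()
        raise
    else:
        # Only reached if no error occurred
        transaction.commit()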
Try putting @transaction.commit_on_success right on top of your view. That way, if you get an error within that view function it will roll back; otherwise it will commit your work.
We have a system consisting of a Django server connected to a PostgreSQL database, and some AWS Lambda functions which need to access the database. When an admin saves a model (PremiumUser - contains premium plan information that needs to be read by the Lambdas), we want to set up a schedule of CloudWatch events based on the saved information. Those events then trigger other Lambdas which also need to directly access the database, as the database state may change at any time and they should work off the most recent state of the database.
The issue is that Django seems to think it has saved the values, but when the Lambdas read the database the expected values aren't there. We have tried using Django's post_save signal and calling the Lambdas inside the triggered function; we have tried overriding Django's default PremiumUser.save method to perform super(PremiumUser, self).save(*args, **kwargs) and only then call the Lambdas (in case the post_save signal was getting triggered too early); and we have tried overriding the PremiumUser.save method and calling super(PremiumUser, self).save(*args, **kwargs) in the context of an atomic transaction (i.e. with transaction.atomic():).
When we call the Lambdas a few seconds after the admin dashboard has updated, they can find the values as expected and work properly, which suggests that somehow Django considers the model as having been 'saved' to the database, while the database has not yet been updated.
Is there a way to force Django to write to the database immediately? This would be the preferred solution, as it would keep Django's model and the database consistent.
An alternative solution we have considered, but would prefer not to resort to, would be to put a sleep in the Lambdas and call them asynchronously so that Django's save is able to complete before the Lambda functions access the database. Obviously this would be a race condition, so we don't want to do this if it can at all be avoided.
Alright, after spending more time poring over Django's documentation we found a solution: using on_commit. It seems that post_save is triggered after Django's model is updated, but before the database has been written to. on_commit, on the other hand, is triggered after the current database transaction has been committed (completed).
For us, the solution involved setting up some code like this:
from django.db import models, transaction


class PremiumUser(models.Model):
    # Normal setup code...

    def save(self, *args, **kwargs):
        # Do some necessary things before the actual save occurs
        super(PremiumUser, self).save(*args, **kwargs)

        # Set up a callback for the on_commit
        def create_new_schedule():
            # Call lambda to create new schedule...
            pass

        # Register the callback with the current transaction's on_commit
        transaction.on_commit(create_new_schedule)
The project uses Django and PostgreSQL 9.5. Sometimes I see the following error in a Celery task.
When an object needs a specified column changed, it uses a Celery task.
This task writes the object's change history to a separate table and updates the column (not via raw SQL, but through the Django ORM).
The task writes the history into a foreign table via the FDW extension.
Thrown exception:
Remote SQL command: COMMIT TRANSACTION\nSQL statement "SELECT 1 FROM ONLY "public"."incident_incident" x WHERE "id" OPERATOR(pg_catalog.=) $1 FOR KEY SHARE OF x"\n',)
I can't understand why it raises the exception. The task is very simple.
Screenshot of the logs (maybe it helps):
In Celery, when you are doing a database transaction, you can use a transaction.atomic block to do that.
For example:
from django.db import transaction

@app.task(bind=True)
def do_task(self):
    try:
        with transaction.atomic():
            # Do DB operations here
            pass
    except (SomeException, Exception) as exc:
        raise self.retry(exc=exc)
There are other approaches as well. You can add a new field to the model that tracks object changes. You can read this article on Medium regarding this approach. Hope it helps!!
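For illustration, a minimal sketch of that tracking-field idea (the model and field names here are hypothetical, not taken from the original code):

from django.db import models

class Incident(models.Model):  # hypothetical model name
    status = models.CharField(max_length=50)
    # Extra fields used purely to track the last change to the object
    last_changed_at = models.DateTimeField(auto_now=True)
    last_changed_field = models.CharField(max_length=50, blank=True)

    def change_status(self, new_status):
        self.status = new_status
        self.last_changed_field = "status"
        self.save()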
from django.db import connection

def executeQuery(query, params):
    cur = connection.cursor()
    cur.execute(query, params)  # this is an UPDATE query
    cur.close()
I have a series of queries and I call this method for each query, but it looks like the entire operation is rolled back if any query (let's say the 3rd query) fails.
I thought that after execute() the statement would be committed immediately and would not depend on the next query.
Shouldn't Django have an autocommit feature?
Database-altering operations are automatically committed. However, if you are using django.middleware.transaction.TransactionMiddleware or something similar, they will only be committed if the page rendering finishes without any error; otherwise a rollback will happen.
For further details refer to the documentation for Django 1.5 (the version used in the question). Check the latest documentation too.
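If you want each statement committed independently of the ones that follow, one option (a sketch for newer Django versions with autocommit enabled, reusing the executeQuery helper from the question) is to give each call its own atomic block, so a failure in a later query does not roll back the earlier ones:

from django.db import connection, transaction

def executeQuery(query, params):
    # Each call runs in its own transaction and commits when the block exits cleanly
    with transaction.atomic():
        with connection.cursor() as cur:
            cur.execute(query, params)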
I'm thinking of creating a log system for my Django web application. The web application is quite comprehensive in its use (it covers all aspects of a business's processes), so I'd like to track every event that happens. Specifically, I'd like to log every view that runs, not just the "main" ones, and potentially log what is happening within the view as it's executed.
While I'm in the "idea" stage of the logging system, I've quickly hit a few questions that leave me unsure how to proceed. Here are the main questions I have:
I'm thinking of logging all of the events in the same MySQL database that holds the main web app's data. The concern I have is bloating the MySQL database into a massive DB. Also, if the DB crashes or is destroyed somehow (yes, I have backups) I'll lose my log too, which blows away any ability to track down the problem. Do I use a separate DB or just go with text files?
How granular do I go? Initially I was thinking of simply logging things like "Date - In view myView". However, as I think about it, it would be nice to log all the stuff that happens within the view. Doing this could make the log massive! It would also make my code ugly, with so many log-entry lines mixed into the code. This kind of detail:
Date - entered view myView
Date - in view myView, retrieved object myObject from the DB
Date - in view myView, setting myObject field myField to myNewValue
Date - leaving myView
Those are my main thoughts at this point. Any advice on this front?
Thanks
I think the best and most correct way is to create your own custom middleware, where you can log literally everything you need.
Here are some links on the subject:
middleware snippets
http://djangosnippets.org/snippets/2624/
http://djangosnippets.org/snippets/290/
http://djangosnippets.org/snippets/264/
django-logging-middleware (pretty old but may give you an idea)
django-request
django.db.backends logging
Is there a Django middleware/plugin that logs all my requests in a organized fashion?
Django verbose request logging
log all sql queries
django orm, how to view (or log) the executed query?
Also, consider using the Sentry error logging and aggregation platform instead of writing logs into the database. FYI, see "using a database for logging".
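If you go the middleware route, a minimal sketch could look like the following (new-style middleware for recent Django versions; the logger name and timing details are only illustrative):

import logging
import time

logger = logging.getLogger("request_audit")  # hypothetical logger name

class RequestLoggingMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        start = time.time()
        response = self.get_response(request)
        duration_ms = (time.time() - start) * 1000
        # One line per request: method, path, status, duration
        logger.info("%s %s %s %.0fms",
                    request.method, request.path,
                    response.status_code, duration_ms)
        return response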
If you want to log each view that runs, you can, for example, replace "entered view A" and "exited view A" with a single line along these lines: view A - 147ms.
As alecxe stated, you can log requests/SQL; there are plenty of ways to do it with middleware. As for database (object) actions, you can tie into individual saves, updates, and deletes with signals.
For bulk updates and deletes, you could (it's not a clean way but it would work) monkey-patch manager and queryset methods to add logging.
This way you can log actions rather than SQL.
I would see lines like this:
[2013/09/11 15:11:12.0153] view app.module.view 200 148ms
[2013/09/11 15:11:12.0189] orm save:auth.User,id=1 3ms
This is a quick and dirty proposal, but, maybe it's worth it.
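A small sketch of the signal-based part of that idea (the logger name is hypothetical; receivers registered without a sender will fire for every model, which is the point here):

import logging
from django.db.models.signals import post_save, post_delete
from django.dispatch import receiver

logger = logging.getLogger("orm_audit")  # hypothetical logger name

@receiver(post_save)
def log_save(sender, instance, created, **kwargs):
    action = "create" if created else "save"
    logger.info("orm %s:%s,id=%s", action, sender.__name__, instance.pk)

@receiver(post_delete)
def log_delete(sender, instance, **kwargs):
    logger.info("orm delete:%s,id=%s", sender.__name__, instance.pk)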
I'd like to use a view I've created in my database as the source for my Django view.
Is this possible, without using custom sql?
UPDATE (13/02/09):
Like many of the answers suggest, you can just make your own view in the database and then use it within the API by defining it in models.py.
Some warnings though:
manage.py syncdb will not work anymore
the view needs the same prefix at the start of its name as all the other models (tables), e.g. if your app is called "thing" then your view will need to be called thing_$viewname
Just an update for those who'll encounter this question (from Google or whatever else)...
Currently Django has a simple "proper way" to define model without managing database tables:
Options.managed
Defaults to True, meaning Django will create the appropriate database tables in syncdb and remove them as part of a reset management command. That is, Django manages the database tables' lifecycles.
If False, no database table creation or deletion operations will be performed for this model. This is useful if the model represents an existing table or a database view that has been created by some other means. This is the only difference when managed is False. All other aspects of model handling are exactly the same as normal.
Since Django 1.1, you can use Options.managed for that.
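For example, a minimal sketch of such an unmanaged model backed by a database view (the model fields and view name here are only illustrative):

from django.db import models

class ThingReport(models.Model):  # hypothetical model for the view
    name = models.CharField(max_length=100)
    total = models.IntegerField()

    class Meta:
        managed = False                   # Django will not create or drop this "table"
        db_table = 'thing_thingreport'    # name of the database view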
For older versions, you can easily define a Model class for a view and use it like your other views. I just tested it using a Sqlite-based app and it seems to work fine. Just make sure to add a primary key field if your view's "primary key" column is not named 'id' and specify the view's name in the Meta options if your view is not called 'app_classname'.
The only problem is that the "syncdb" command will raise an exception, since Django will try to create the table. You can prevent that by defining the 'view models' in a separate Python file, different from models.py. This way, Django will not see them when introspecting models.py to determine the models to create for the app, and therefore will not attempt to create the table.
I just implemented a model using a view with Postgres 9.4 and Django 1.8.
I created a custom migration class like this:
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        ('myapp', '0002_previousdependency'),
    ]

    sql = """
    create VIEW myapp_myview as
    select your view here
    """

    operations = [
        migrations.RunSQL("drop view if exists myapp_myview;"),
        migrations.RunSQL(sql),
    ]
I wrote the model as I normally would. It works for my purposes.
Note- When I ran makemigrations a new migration file was created for the model, which I manually deleted.
Full disclosure- my view is read only, because I am using a view derived from a jsonb data type and have not written an ON UPDATE DO INSTEAD rule.
We've done this quite extensively in our applications with MySQL to work around the single database limitation of Django. Our application has a couple of databases living in a single MySQL instance. We can achieve cross-database model joins this way as long as we have created views for each table in the "current" database.
As far as inserts/updates into views go, with our use cases, a view is basically a "select * from [db.table];". In other words, we don't do any complex joins or filtering, so inserts/updates triggered from save() work just fine. If your use case requires such complex joins or extensive filtering, I suspect you won't have any problems for read-only scenarios, but may run into insert/update issues. I think there are some underlying constraints in MySQL that prevent you from updating into views that cross tables, have complex filters, etc.
Anyway, your mileage may vary if you are using a RDBMS other than MySQL, but Django doesn't really care if its sitting on top of a physical table or view. It's going to be the RDBMS that determines whether it actually functions as you expect. As a previous commenter noted, you'll likely be throwing syncdb out the window, although we successfully worked around it with a post-syncdb signal that drops the physical table created by Django and runs our "create view..." command. However, the post-syncdb signal is a bit esoteric in the way it gets triggered, so caveat emptor there as well.
EDIT: Of course by "post-syncdb signal" I mean "post-syncdb listener"
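A rough sketch of that post-syncdb workaround (for old, pre-1.7 Django; the view name and SELECT statement are hypothetical, not our actual schema):

from django.db import connection
from django.db.models.signals import post_syncdb

import myapp.models

def create_views(sender, **kwargs):
    cursor = connection.cursor()
    # Drop the physical table syncdb just created, then replace it with a view
    cursor.execute("DROP TABLE IF EXISTS myapp_myview")
    cursor.execute(
        "CREATE VIEW myapp_myview AS "
        "SELECT id, name FROM other_db.some_table"  # hypothetical SELECT
    )

# Only fire after our app's models have been synced
post_syncdb.connect(create_views, sender=myapp.models)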
Per the official Django documentation, you could query the view like this:
from django.db import connection

# Create the cursor
cursor = connection.cursor()

# Write the SQL code
sql_string = 'SELECT * FROM myview'

# Execute the SQL
cursor.execute(sql_string)
result = cursor.fetchall()
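If you'd rather get the rows back as dictionaries keyed by column name, the Django docs suggest a small helper along these lines:

def dictfetchall(cursor):
    # Return all rows from a cursor as a list of dicts
    columns = [col[0] for col in cursor.description]
    return [dict(zip(columns, row)) for row in cursor.fetchall()]

cursor.execute(sql_string)
rows = dictfetchall(cursor)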
Hope it helps ;-)