Fixing the auth_permission table after renaming a model in Django - django

Every now and then, you need to rename a model in Django (or, in one recent case I encountered, split one model into two with new/different names). (Yes, proper planning helps to avoid this situation.)
After renaming the corresponding tables in the db and fixing the affected code, one problem remains: any permissions granted to Users or Groups to operate on those models still reference the old model names. Is there any automated or semi-automated way to fix this, or is it just a matter of manual db surgery? (In development you can drop the auth_permission table and syncdb to recreate it, but production isn't so simple.)

Here's a snippet that fills in missing contenttypes and permissions. I wonder if it could be extended to at least do some of the donkey work for cleaning up auth_permissions.

If you happened to have used a South schema migration to rename the table, the following line in the forward migration would have done this automatically:
db.send_create_signal('appname', ['modelname'])
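For context, a minimal sketch of what such a South forward migration might look like; the app, model, and table names are placeholders, not something from the original question:
# 00XX_rename_oldmodel_to_newmodel.py -- illustrative only
from south.db import db
from south.v2 import SchemaMigration

class Migration(SchemaMigration):
    def forwards(self, orm):
        db.rename_table('appname_oldmodel', 'appname_newmodel')
        # re-send the create signal so content types and permissions
        # are (re)created for the renamed model
        db.send_create_signal('appname', ['newmodel'])

    def backwards(self, orm):
        db.rename_table('appname_newmodel', 'appname_oldmodel')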

I got about half-way through a long answer that detailed the plan of attack I would take in this situation, but as I was writing I realized there probably isn't any way around having to do a maintenance downtime in this situation.
You can minimize the downtime by having a prepared loaddata script of course, although care needs to be taken to make sure the auth_perms primary keys are in sync.
Short answer: no automated way to do this that I'm aware of.
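For what it's worth, one way to reduce the primary-key bookkeeping in such a prepared loaddata script is to dump the auth data with natural keys (supported in newer Django versions; older ones use the --natural flag instead). The output file name here is just an example:
python manage.py dumpdata auth --natural-foreign --natural-primary > auth_fixture.json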

I recently had this issue and wrote a function to solve it. You'll typically have a discrepancy in both the ContentType and Permission tables if you rename a model/table. Django has built-in helper functions to resolve this, and you can use them as follows:
from django.contrib.auth.management import create_permissions
from django.contrib.contenttypes.management import update_all_contenttypes
from django.db.models import get_apps

def update_all_content_types_and_permissions():
    for app in get_apps():
        create_permissions(app, None, 2)
    update_all_contenttypes()
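A typical way to run it (assuming the function lives in, say, yourapp/utils.py, which is an illustrative path) is from a manage.py shell:
from yourapp.utils import update_all_content_types_and_permissions
update_all_content_types_and_permissions()
Note that get_apps and update_all_contenttypes come from older Django versions; on newer releases these helpers have moved or been removed, so the imports will differ.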

I changed verbose names in my application, and in Django 2.2.7 this is the only way I found to fix the permission names:
from django.contrib.auth.models import Permission
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = 'Fixes permission names'

    def handle(self, *args, **options):
        for p in Permission.objects.filter(content_type__app_label="your_app_label_here"):
            p.name = "Can %s %s" % (p.codename.split('_')[0], p.content_type.model_class()._meta.verbose_name)
            p.save()
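To use it, save the command under your app, e.g. your_app/management/commands/fix_permission_names.py (the command name here is just an example), and run:
python manage.py fix_permission_names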

Related

How to auto-delete expired data in the database?

If I store a row of data in a database table, and the table has a field named expire_time, I want the row to be deleted once the time passes expire_time.
To do that, I could query the table every so often, traverse every row, and delete the ones that have expired.
But if I don't query I can not realize the requirement.
Is there a method to do that?
I use Python/Django, and the database is MariaDB.
You can write a custom management command to do this for you. Save this in myapp/management/commands/delete_expired.py for example:
from django.core.management.base import BaseCommand
from django.utils import timezone
from myapp.models import MyModel

class Command(BaseCommand):
    help = 'Deletes expired rows'

    def handle(self, *args, **options):
        now = timezone.now()
        MyModel.objects.filter(expire_time__lt=now).delete()
Then either call that command from a cron task or a queue. To do it on the command line you can call:
python manage.py delete_expired
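For example, a crontab entry along these lines (the paths are placeholders for your own virtualenv and project) would run it hourly:
0 * * * * /path/to/venv/bin/python /path/to/project/manage.py delete_expired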
I am not sure what you mean by:
I can not realize the requirement.
But I think you might want to consider:
a custom manage.py command, run via cron with your venv's Python
adding django-cron to routinely check for expired data and delete it (see the sketch after this list)
trying Celery as another alternative to cron, though it could be too complicated for your case
adding an event to MariaDB and scheduling it periodically
The drawback of the custom manage.py command and the MariaDB event is that if you migrate servers you have to remember to add the cron job/event again so the db keeps getting cleaned periodically.
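For the django-cron option, a minimal sketch might look like the following; the class name, job code, and interval are illustrative, and it assumes the django-cron package plus the MyModel/expire_time names from the management-command answer above:
# myapp/cron.py -- a sketch, not a drop-in solution
from django.utils import timezone
from django_cron import CronJobBase, Schedule

class DeleteExpiredCronJob(CronJobBase):
    RUN_EVERY_MINS = 60
    schedule = Schedule(run_every_mins=RUN_EVERY_MINS)
    code = 'myapp.delete_expired_cron'  # unique identifier for this job

    def do(self):
        from myapp.models import MyModel
        MyModel.objects.filter(expire_time__lt=timezone.now()).delete()
You would then add 'django_cron' to INSTALLED_APPS, list the class in CRON_CLASSES, and trigger it with python manage.py runcrons from a single system cron entry.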
I don't know of a database-level approach to do that (maybe you want to add the mariadb tag if you are looking for a database-specific solution).
At the application level, one approach comes to mind: use Celery and, whenever you store a row, schedule a task to delete it at its expire_time. The Celery task should check that expire_time has actually passed (can that field be modified or updated in the meantime?).
You can also (in addition or as an alternative) have a Celery beat job that periodically gets the row with the smallest expire_time; if it has expired, delete it and repeat, otherwise wait for the next beat.
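As a rough sketch of a simpler periodic-cleanup variant of the beat idea (not exactly the smallest-expire_time loop described above), assuming a working Celery-with-Django setup and the same hypothetical MyModel/expire_time names:
# myapp/tasks.py -- a sketch only
from celery import shared_task
from django.utils import timezone

@shared_task
def delete_expired_rows():
    from myapp.models import MyModel
    MyModel.objects.filter(expire_time__lte=timezone.now()).delete()
and a beat schedule entry in settings (assuming the usual setup where CELERY_-prefixed settings are picked up):
CELERY_BEAT_SCHEDULE = {
    'delete-expired-rows': {
        'task': 'myapp.tasks.delete_expired_rows',
        'schedule': 600.0,  # every ten minutes
    },
}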

Running tests with unmanaged tables in django

My Django app works with tables that are not managed, defined in my model like so:
class Meta:
    managed = False
    db_table = 'mytable'
When I run a simple test that imports Person, I get the following:
(person)bob#sh ~/person/dapi $ > python manage.py test
Creating test database for alias 'default'...
DatabaseError: (1060, "Duplicate column name 'db_Om_no'")
The tests.py is pretty simple like so:
from person.management.commands.dorecall import Command
from person.models import Person
from django.test import TestCase
import pdb

class EmailSendTests(TestCase):
    def test_send_email(self):
        person = Person.objects.all()[0]
        Command.send_email()
I did read in django docs where it says "For tests involving models with managed=False, it’s up to you to ensure the correct tables are created as part of the test setup.". So I understand that my problem is that I did not create the appropriate tables. So am I supposed to create a copy of the tables in the test_person db that the test framework created?
Every time I run the tests, the test_person db gets destroyed (I think) and set up again, so how am I supposed to create a copy of the tables in test_person? Am I thinking about this right?
Update:
I saw this question on SO and added the ManagedModelTestRunner() in utils.py. Though ManagedModelTestRunner() does get run (confirmed by inserting pdb.set_trace()), I still get the Duplicate column name error. I do not get errors when I do python manage.py syncdb (though this may not mean much since the tables are already created; I will try removing the table and rerunning syncdb to see if I can get any clues).
I had the same issue, where I had an unmanaged legacy database that also had a custom database name set in the models meta property.
Running tests with a managed model test runner, as you linked to, solved half my problem, but I still had the problem of Django not knowing about the custom_db name:
django.db.utils.ProgrammingError: relation "custom_db" does not exist
The issue was that ./manage.py makemigrations still creates definitions of all models, managed or not, and includes your custom db names in the definition, which seems to blow up tests. By installing:
pip install django-test-without-migrations==0.2
and running tests like this:
./manage.py test --nomigrations
I was able to write tests against my unmanaged model without getting any errors.
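If you prefer not to add a dependency, the test-runner approach referenced in the question can be sketched roughly like this; it is written against reasonably recent Django (older versions would subclass DjangoTestSuiteRunner instead of DiscoverRunner), and simply flips managed to True while the test database is being built:
# utils.py -- a sketch of a runner that treats unmanaged models as managed during tests
from django.apps import apps
from django.test.runner import DiscoverRunner

class ManagedModelTestRunner(DiscoverRunner):
    def setup_test_environment(self, *args, **kwargs):
        self.unmanaged_models = [m for m in apps.get_models() if not m._meta.managed]
        for m in self.unmanaged_models:
            m._meta.managed = True
        super(ManagedModelTestRunner, self).setup_test_environment(*args, **kwargs)

    def teardown_test_environment(self, *args, **kwargs):
        super(ManagedModelTestRunner, self).teardown_test_environment(*args, **kwargs)
        for m in self.unmanaged_models:
            m._meta.managed = False
You would then point TEST_RUNNER in settings at this class.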

Django fixtures for permissions

I'm creating fixtures for permissions in Django. I'm able to get them loaded the way it's needed. However, my question is..say I want to load a fixture for the table auth_group_permissions, I need to specify a group_id and a permission_id, unfortunately fixtures aren't the best way to handle this. Is there an easier way to do this programmatically? So that I can get the id for particular values and have them filled in? How is this normally done?
As of Django 1.7 at least, it is possible to store permissions in fixtures, thanks to the introduction of "natural keys" as a serialization option.
You can read more about natural keys in the Django serialization documentation
The documentation explicitly mentions the use case for natural keys being when..
...objects are automatically created by Django during the database synchronization process, the primary key of a given relationship isn’t easy to predict; it will depend on how and when migrate was executed. This is true for all models which automatically generate objects, notably including Permission, Group, and User.
So for your specific question, regarding auth_group_permissions, you would dump your fixture using the following syntax:
python manage.py dumpdata auth --natural-foreign --natural-primary -e auth.Permission
The auth_permissions table must be explicitly excluded with the -e flag as that table is populated by the migrate command and will already have data prior to loading fixtures.
This fixture would then be loaded in the same way as any other fixtures
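For example (the fixture file name here is just an illustration):
python manage.py loaddata group_permissions.json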
The proper solution is to create the permissions in the same manner the framework itself does.
You should connect to the built-in post_migrate signal either in the module management.py or management/__init__.py and create the permissions there. The documentation does say that any work performed in response to the post_migrate signal should not perform any database schema alterations, but you should also note that the framework itself creates the permissions in response to this signal.
So I'd suggest that you take a look at the management module of the django.contrib.auth application to see how it's supposed to be done.
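As a rough illustration of that pattern (the group name and permission codenames below are made up, and the receiver could equally live in management/__init__.py or be wired up in AppConfig.ready):
# myapp/management/__init__.py -- a sketch only
from django.db.models.signals import post_migrate

def create_groups_and_permissions(sender, **kwargs):
    from django.contrib.auth.models import Group, Permission
    group, _ = Group.objects.get_or_create(name='editors')  # hypothetical group
    perms = Permission.objects.filter(codename__in=['add_article', 'change_article'])  # hypothetical codenames
    group.permissions.add(*perms)

post_migrate.connect(create_groups_and_permissions)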
Just to add to @jonpa's comment, if you are using a multitenant app and you want to directly save the fixtures to a file, you can do:
python manage.py tenant_command dumpdata --schema=<schema_name> --natural-foreign --natural-primary -e auth.Permission --indent 4 > /path/to/fixtures/fixtures.json

Django+Postgres: "current transaction is aborted, commands ignored until end of transaction block"

I've started working on a Django/Postgres site. Sometimes I work in manage.py shell, and accidentally do some DB action that results in an error. Then I am unable to do any database action at all, because for any database action I try to do, I get the error:
current transaction is aborted, commands ignored until end of transaction block
My current workaround is to restart the shell, but I should find a way to fix this without abandoning my shell session.
(I've read this and this, but they don't give actionable instructions on what to do from the shell.)
You can try this:
from django.db import connection
connection._rollback()
A more detailed discussion of this issue can be found here.
This happens to me sometimes; often it's a missing
manage.py migrate
or
manage.py syncdb
as mentioned also here.
It can also happen the other way around, if you have a schema migration pending from your models.py. With South you need to update the schema with:
manage.py schemamigration mymodel --auto
Check this
The quick answer is usually to turn on database level autocommit by adding:
'OPTIONS': {'autocommit': True,}
To the database settings.
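In older Django versions (before 1.6, where autocommit became the default behaviour), that looks roughly like this in settings.py; the connection details are placeholders:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'mydb',
        'USER': 'myuser',
        'PASSWORD': 'secret',
        'HOST': 'localhost',
        'OPTIONS': {'autocommit': True},
    }
}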
I had this error after restoring a backup to a totally empty DB. It went away after running:
./manage.py syncdb
Maybe there were some internal models missing from the dump...
WARNING: the patch below can possibly cause transactions to be left in an open state on the db (at least with postgres). I am not 100% sure about that (or how to fix it), but I highly suggest not applying the patch below on production databases.
As the accepted answer does not solve my problems - as soon as I get any DB error, I cannot do any new DB actions, even with a manual rollback - I came up with my own solution.
When I'm running the Django-shell, I patch Django to close the DB connection as soon as any errors occur. That way I don't ever have to think about rolling back transactions or handling the connection.
This is the code I'm loading at the beginning of my Django-shell-session:
import logging

from django import db
from django.db.backends.util import CursorDebugWrapper

logger = logging.getLogger(__name__)

old_execute = CursorDebugWrapper.execute
old_execute_many = CursorDebugWrapper.executemany

def execute_wrapper(*args, **kwargs):
    try:
        old_execute(*args, **kwargs)
    except Exception as ex:
        logger.error("Database error:\n%s" % ex)
        db.close_connection()

def execute_many_wrapper(*args, **kwargs):
    try:
        old_execute_many(*args, **kwargs)
    except Exception as ex:
        logger.error("Database error:\n%s" % ex)
        db.close_connection()

CursorDebugWrapper.execute = execute_wrapper
CursorDebugWrapper.executemany = execute_many_wrapper
For me it was a test database without migrations. I was using --keepdb for testing. Running it once without it fixed the error.
There are a lot of useful answers on this topic, but it can still be a challenge to figure out the root of the issue. Because of this, I will try to give a little more context on how I was able to figure out the solution for my issue.
For Django specifically, you want to turn on logging of db queries; right before the error is raised you will find the failing query in the console. Run that query directly on the db, and you will see what is wrong.
In my case, one column was missing in db, so after migration everything worked correctly.
I hope this will be helpful.
If you happen to get such an error when running migrate (South), it can be that you have lots of changes in the database schema and want to handle them all at once. Postgres is a bit nasty about that. What always works is to break one big migration into smaller steps. Most likely, you're using a version control system.
Your current version
Commit n1
Commit n2
Commit n3
Commit n4 # db changes
Commit n5
Commit n6
Commit n7 # db changes
Commit n8
Commit n9 # db changes
Commit n10
So, having the situation described above, do as follows:
Checkout repository to "n4", then syncdb and migrate.
Checkout repository to "n7", then syncdb and migrate.
Checkout repository to "n10", then syncdb and migrate.
And you're done. :)
It should run flawlessly.
If you are using a Django version before 1.6, then you should use Christophe's excellent xact module.
xact is a recipe for handling transactions sensibly in Django applications on PostgreSQL.
Note: As of Django 1.6, the functionality of xact will be merged into the Django core as the atomic decorator. Code that uses xact should be able to be migrated to atomic with just a search-and-replace. atomic works on databases other than PostgreSQL, is thread-safe, and has other nice features; switch to it when you can!
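With Django 1.6+, the same idea looks roughly like this: wrapping the risky statements in atomic() means a failure only aborts that block, and the connection stays usable afterwards. The model and the failing insert below are illustrative, not from the answers above:
from django.db import IntegrityError, transaction

try:
    with transaction.atomic():
        MyModel.objects.create(code='duplicate-key')  # hypothetical failing insert
except IntegrityError:
    pass  # the failed block was rolled back; further queries work normally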
I add the following to my settings file, because I like the autocommit feature when I'm "playing around" but don't want it active when my site is running otherwise.
So to get autocommit just in the shell, I do this little hack:
import sys

if 'shell' in sys.argv or sys.argv[0].endswith('pydevconsole.py'):
    DATABASES['default']['OPTIONS']['autocommit'] = True
NOTE: That second part is just because I work in PyCharm, which doesn't directly run manage.py.
I got this error in Django 1.7. When I read in the documentation that
This problem cannot occur in Django’s default mode and atomic()
handles it automatically.
I got a bit suspicious. The errors happened when I tried running migrations. It turned out that some of my models had my_field = MyField(default=some_function). Having this function as a default for a field worked fine with sqlite and mysql (I had some import errors, but I managed to make it work), though it seems not to work for postgresql, and it broke the migrations to the point that I didn't even get a helpful error message, but instead the one from the question's title.

don't load 'initial_data.json' fixture when testing

I'm testing a Django app not written by myself, which uses two fixtures: initial_data.json and testing.json. Both fixture files contain conflicting data (throwing an integrity error).
For testing, I've specified TestCase.fixtures = ['testing.json'], but initial_data.json is loaded too.
How can I avoid loading initial_data.json (not renaming it) in the testcase?
Quoting from Django Website:
If you create a fixture named initial_data.[xml/yaml/json], that fixture will be loaded every time you run syncdb. This is extremely convenient, but be careful: remember that the data will be refreshed every time you run syncdb. So don't use initial_data for data you'll want to edit.
So I guess there's no way to say "okay, don't load initial data just this once". Perhaps you could write a short bash script that would rename the file. Otherwise you'd have to dig into the Django code.
More info here: http://docs.djangoproject.com/en/dev/howto/initial-data/#automatically-loading-initial-data-fixtures
You might want to think about whether initial_data.json is something your app actually needs. It's not hard to "manually" load your production data with ./manage.py loaddata production.json after running a syncdb (how often do you run syncdb in production, anyway?), and it would make loading your testing fixture much easier.
If you want to have tables with no initial data, this code will help you:
Edit tests.py:
from django.core import management
from django.test import TestCase

class FooTest(TestCase):
    @classmethod
    def setUpClass(cls):
        management.call_command('flush', interactive=False, load_initial_data=False)
This will flush your data and sync the db again without loading the initial data.