I'd like to run a custom command in my migration that calls functions from other modules. These functions use some models, and as expected I ran into a schema version mismatch (OperationalError: (1054, "Unknown column 'foo' in 'bar'")).
If I were to use those models directly in the custom command, I'd access the model with apps.get_model('my_app', 'bar'), but since those models are used in the external functions, I can't do that.
I'm sure someone has run into this before, although I couldn't find anything.
I was thinking about using the unittest.mock.patch decorator, but it doesn't feel like the right solution.
I'm wondering if there's a more general solution for this?
The versioned app registries are not globally accessible. You could pass the model as a parameter to the function, and use the current model as the default:
from my_app.models import Bar

def my_function(..., bar_model=Bar):
    ...  # Use bar_model instead of Bar

# Your RunPython function
def migrate_something(apps, schema_editor):
    my_function(bar_model=apps.get_model('my_app', 'bar'))
You don't have to pass the bar_model parameter if you call it from regular code, but when calling it from a migration you can pass the historical model.
If you need multiple models you could pass apps instead:
from django.apps import apps as global_apps

def my_function(..., apps=global_apps):
    Bar = apps.get_model('my_app', 'bar')
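Called from a RunPython operation, that might look like this (a minimal sketch following the same pattern as above; migrate_something is a made-up name):

def migrate_something(apps, schema_editor):
    # Pass the migration's versioned app registry so the function
    # resolves historical models instead of the current ones.
    my_function(apps=apps)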
When using models in migrations in Django, we can use apps.get_model() to make sure that the migration will use the right "historical" version of the model (as it was when the migration was defined).
But how do we deal with "regular" variables (not models) imported from the codebase?
If we import a variable from another module, we will probably face issues in the future. For example:
If someday we delete the variable (because we changed the implementation), this will break the migration, so we won't be able to re-run migrations locally to re-create a database from scratch.
If we modify the variable (e.g. we change the values in a list), this will produce unexpected effects when we run the reverse operation on an existing db.
So the question is: what's the best practice for writing migrations? Should we always hard-code values without importing external variables?
Example
Suppose I simply want to modify the value of a field using a variable that I've defined somewhere in the codebase. For example, I want to turn all normal users into admins. I've stored user roles in an enum (UserRoles). One way to write the migration would be this:
from django.db import migrations

from user_roles import UserRoles


def change_user_role(apps, schema_editor):
    User = apps.get_model('users', 'User')
    users = User.objects.filter(role=UserRoles.NORMAL_USER.value)
    for user in users:
        user.role = UserRoles.ADMIN.value
    User.objects.bulk_update(users, ["role"])


def revert_user_role_changes(apps, schema_editor):
    User = apps.get_model('users', 'User')
    users = User.objects.filter(role=UserRoles.ADMIN.value)
    for user in users:
        user.role = UserRoles.NORMAL_USER.value
    User.objects.bulk_update(users, ["role"])


class Migration(migrations.Migration):
    dependencies = [
        ('users', '0015_auto_20220612_0824'),
    ]
    operations = [
        migrations.RunPython(change_user_role, revert_user_role_changes)
    ]
As you see, this will have the issues I mentioned above.
I've used the enum example, but this applies to any variable referenced inside a migration.
So the question again: what's the best practice for migrations? Should we always hard-code values instead of referencing external variables that might change?
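For illustration, here is what the hard-coded variant of the forward function above might look like (a sketch only; the string literals are assumptions about what the enum held when the migration was written):

def change_user_role(apps, schema_editor):
    User = apps.get_model('users', 'User')
    # Values frozen as literals, so the migration no longer
    # imports user_roles and survives future refactors.
    users = User.objects.filter(role="normal_user")
    for user in users:
        user.role = "admin"
    User.objects.bulk_update(users, ["role"])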
When creating data migrations with Django, you create a function like combine_names(apps, schema_editor).
What are the types of the parameters apps and schema_editor in this case?
The types are django.db.migrations.state.StateApps and django.db.backends.postgresql.schema.DatabaseSchemaEditor, respectively (the exact schema editor class depends on your database backend; this one is PostgreSQL's).
This makes the example method look like the following with type hints.
# …
from django.db.backends.postgresql.schema import DatabaseSchemaEditor
from django.db.migrations.state import StateApps


def combine_names(apps: StateApps, schema_editor: DatabaseSchemaEditor):
    # …
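If you'd rather keep the annotation backend-agnostic, a sketch using the common base class (not part of the original answer, but the class does exist in Django):

from django.db.backends.base.schema import BaseDatabaseSchemaEditor
from django.db.migrations.state import StateApps


def combine_names(apps: StateApps, schema_editor: BaseDatabaseSchemaEditor):
    ...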
What are the best practices for using get_model(), and when should it be imported?
Ref: https://docs.djangoproject.com/en/1.8/ref/applications/
You usually use get_model() when you need to dynamically get a model class.
A practical example: when writing a RunPython operation for a migration, you get the app registry as one of the arguments, and you use apps.get_model('my_app', 'TheModel') to import historical models.
Another example: you have an app with dynamically built serializers and you set their Meta.model to the class you just got with get_model().
Yet another example is importing models in AppConfig.ready() with self.get_model().
An important thing to remember if you are using AppConfig.get_model() or apps.get_models(): they can be used only once the application registry is fully populated.
The other option (from .models import TheModel) is just the default way to import models anywhere in your code.
These are just examples, though; there are many other possible scenarios.
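For instance, the AppConfig.ready() case mentioned above might look like this (a minimal sketch; the app and model names are made up):

from django.apps import AppConfig


class MyAppConfig(AppConfig):
    name = 'my_app'

    def ready(self):
        # Safe here: ready() runs only after the application
        # registry is fully populated, so get_model() won't raise.
        TheModel = self.get_model('TheModel')
        # ... connect signals or do other setup with TheModel ...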
I prefer from .models import, because it's a simple way to get the model object.
But if you work with metaclasses, get_model() may be the better option.
def get_model(self, app_label, model_name=None):
    """
    Returns the model matching the given app_label and model_name.

    As a shortcut, this function also accepts a single argument in the
    form <app_label>.<model_name>.

    model_name is case-insensitive.

    Raises LookupError if no application exists with this label, or no
    model exists with this name in the application. Raises ValueError if
    called with a single argument that doesn't contain exactly one dot.
    """
    self.check_models_ready()
    if model_name is None:
        app_label, model_name = app_label.split('.')
    return self.get_app_config(app_label).get_model(model_name.lower())
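So, per the docstring, both call styles below are equivalent (a small usage sketch; the app and model names are made up):

from django.apps import apps

Bar = apps.get_model('my_app', 'Bar')
Bar = apps.get_model('my_app.Bar')  # single-argument shortcut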
Maybe this SO post can help too.
There are plenty of examples that tell you how to extend the user model, BUT I cannot find a real, complete, documented example of how to extend an existing model without having to follow the "user profile pattern" (and honestly I wonder why).
In short, my use case is the following: I need to extend django-lfs's product model.
In LFS it is registered like this (in lfs.catalog.admin):
from django.contrib import admin
[...]
from lfs.catalog.models import Product
[...]

class ProductAdmin(admin.ModelAdmin):
    prepopulated_fields = {"slug": ("name", )}

admin.site.register(Product, ProductAdmin)
[...]
I tried to register mine (which subclasses it) but I got:
django/contrib/admin/sites.py", line 78, in register
    raise AlreadyRegistered('The model %s is already registered' % model.__name__)
So, someone suggested that I have to unregister that model and register mine.
I did it like this:
from lfs.catalog.models import Product
from lfs.catalog.admin import ProductAdmin
admin.site.unregister(Product)
from lfs_product_highlights.catalog.models import Product
admin.site.register(Product,ProductAdmin)
No errors this time, BUT there's no change: my custom fields are nowhere to be seen.
Any hints?
The reason it's difficult is the object-relational impedance mismatch (love that phrase). Objects and classes do not map perfectly onto relational databases: ORMs like Django's attempt to smooth out the edges, but there are some places where the differences are just too great. Inheritance is one of these: there is simply no way to make one table "inherit" from another, so it has to be simulated via foreign keys or the like.
Anyway, for your actual problem, I can't really see what's going on, but one possible way to fix it would be to subclass ProductAdmin as well and specifically set the model attribute to your subclassed model.
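Something along these lines (a sketch only; the admin subclass name is made up, and the aliased import is just to keep the two Product classes apart):

from django.contrib import admin
from lfs.catalog.models import Product as LfsProduct
from lfs.catalog.admin import ProductAdmin
from lfs_product_highlights.catalog.models import Product


class HighlightsProductAdmin(ProductAdmin):
    # Point the admin at the subclassed model so its
    # extra fields are picked up.
    model = Product


admin.site.unregister(LfsProduct)
admin.site.register(Product, HighlightsProductAdmin)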
I asked this in the users group with no response, so I thought I would try here.
I am trying to set up a custom manager to connect to another database on the same server as my default MySQL connection. I have tried following the examples here and here, but have had no luck. I get an empty tuple when returning MyCustomModel.objects.all().
Here is what I have in manager.py:
from django.db import models
from django.db.backends.mysql.base import DatabaseWrapper
from django.conf import settings


class CustomManager(models.Manager):
    """
    This Manager lets you set the DATABASE_NAME on a per-model basis.
    """
    def __init__(self, database_name, *args, **kwargs):
        models.Manager.__init__(self, *args, **kwargs)
        self.database_name = database_name

    def get_query_set(self):
        qs = models.Manager.get_query_set(self)
        qs.query.connection = self.get_db_wrapper()
        return qs

    def get_db_wrapper(self):
        # Monkeypatch the settings file. This is not thread-safe!
        old_db_name = settings.DATABASE_NAME
        settings.DATABASE_NAME = self.database_name
        wrapper = DatabaseWrapper()
        wrapper._cursor(settings)
        settings.DATABASE_NAME = old_db_name
        return wrapper
and here is what I have in models.py:
from django.db import models
from myproject.myapp.manager import CustomManager
class MyCustomModel(models.Model):
    field1 = models.CharField(max_length=765)
    attribute = models.CharField(max_length=765)

    objects = CustomManager('custom_database_name')

    class Meta:
        abstract = True
But if I run MyCustomModel.objects.all() I get an empty list.
I am pretty new at this stuff, so I am not sure if this works with 1.0.2. I am going to look into the Manager code to see if I can figure it out, but I am just wondering if I am doing something wrong here.
UPDATE:
This is now in Django trunk and will be part of the 1.2 release:
http://docs.djangoproject.com/en/dev/topics/db/multi-db/
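For reference, the supported API that page describes boils down to defining each connection under an alias and routing queries with using() (a minimal sketch; the alias and database names are made up):

# settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'main_db',
    },
    'custom': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'custom_database_name',
    },
}

# elsewhere: route a query to the second database
MyCustomModel.objects.using('custom').all()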
You may want to speak to Alex Gaynor, as he is adding MultiDB support and it's pegged for possible release in Django 1.2. I'm sure he would appreciate feedback and input from those who are going to be using MultiDB. There are discussions about it on the django-developers mailing list. His MultiDB branch may even be usable, I'm not sure.
Since I guess you probably can't wait, and in case the MultiDB branch isn't usable, here are your options.
Follow Eric Florenzano's method, bearing in mind that it's not supported and new releases of Django may break it. Also, some comments suggest it's already been broken. This is going to be hacky.
Your other option would be to use a totally different database access method for one of your databases, perhaps SQLAlchemy for one and the Django ORM for the other. I'm guessing that one is likely to be more Django-centric and the other is a legacy database.
To summarise: I think hacking MultiDB into Django is probably the wrong way to go unless you're prepared to keep maintaining your hacks later on. Therefore I think another ORM or database access layer would give you the cleanest route, as then you are not stepping outside supported features, and at the end of the day it's all just Python.
My company has had success using multiple databases by closely following this blog post: http://www.eflorenzano.com/blog/post/easy-multi-database-support-django/
This probably isn't the answer you're looking for, but it's probably best if you move everything you need into one database.