When I change the verbose_name attribute of a Django model, Django generates a related migration (when running the makemigrations command).
However, even without applying the migration (the migrate command), the change seems to take effect all over the Django project. I think this is because verbose_name is used at the Django level rather than at the database level.
This makes me wonder: what is the purpose of this migration file?
Django abstracts away the database backend used. Indeed, you can use a different backend by altering the settings.py file; in fact, you can define a backend yourself.
Any change to the models can thus have an impact on the database. You could, for example, define a backend that uses the verbose_name of a column as the "comment string" you frequently can add to such a column at the database side. If you define choices, a backend could, for some databases, try to enforce these choices at the database level.
Since, as said before, Django aims to work unaware of the backend used, it aims to be conservative and makes migrations for a lot of model changes. For some of these, the backend will decide to simply do nothing: no query is constructed, and nothing is changed at the database side. You can indeed see such changes as "useless migrations". But keep in mind that if you later use a different backend, these "useless migrations" might in fact do something.
Since such migrations do not do anything on the database side, they are usually not time consuming. You might want to look into "squashing" migration files [Django-doc] together and thus reduce the number of migration files. The changes will still be part of the squashed file, but since no queries are involved, that usually does little harm.
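For example, assuming your app is named app_name and 0004 is the last migration you want to fold in (both names are placeholders), squashing looks like:

python manage.py squashmigrations app_name 0004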
Furthermore, you can in fact patch the deconstruct function on a field, such that the help_text, etc. no longer appear in the result of a deconstruct call and thus are not visible to the "detector". This Gist script shows a way to patch the makemigrations command:
"""
Patch the creation of database migrations in Django
Import this early from `__init__.py``.
- Don't want verbose_name changes in the migrations file.
- Don't want help_text in the migrations file.
"""
from functools import wraps
from django.db.models import Field
def patch_deconstruct(old_func, condition):
"""
Patch the ``Field.deconstruct`` to remove useless information.
This only happens on internal apps, not third party apps.
"""
#wraps(old_func)
def new_deconstruct(self):
name, path, args, kwargs = old_func(self)
# AutoField has no model on creation, but can be skipped
if hasattr(self, 'model') and condition(self):
kwargs.pop('verbose_name', None)
kwargs.pop('help_text', None)
return name, path, args, kwargs
return new_deconstruct
Field.deconstruct = patch_deconstruct(Field.deconstruct, lambda self: self.model.__module__.startswith('apps.'))
I spun up a Django project. Afterwards, I didn't write models.py; instead I created a database from the MySQL command line (independent of Django) and created three tables with the required columns. Finally, I connected my Django app to that database successfully and applied the migrations. But now I am confused: do I need to write models.py with every field name matching the columns?
I remember implementing a basic project in which I did write models.py, created the database, and then inserted values using "python manage.py shell" and "from polls.models import Choice, Question". How do I put data in now, initially and then from Python on some action from the UI?
Do I need models.py even for ready-made MySQL databases?
You do not need to construct models. Some (small) web servers are even completely stateless and thus do not use a database at all. But a large part of how Django can help you is based on models.
You can write your own queries, forms, etc. But by using a ModelForm, for example, Django can remove a large amount of boilerplate code, and it makes mistakes in your code less likely. So although not strictly necessary, models are usually a keystone in how Django can help you.
You can use the inspectdb command [Django-doc] to inspect the database and let Django "sketch" the models for you. Usually you will still have some work left, since Django cannot, for example, derive that a field is an EmailField: a CharField and an EmailField look exactly the same at the database side.
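For example, you can write the generated model definitions to a file and then move them into your app and adjust them:

python manage.py inspectdb > models.py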
You do not need to use inspectdb, however; you can construct your own models. If you write models for tables that already exist at the database side, you might want to set managed = False [Django-doc] in the Meta of your model, to prevent Django from constructing migrations for them.
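A minimal sketch of such a model; the table and column names here are invented, so adjust them to your actual MySQL schema:

from django.db import models


class Question(models.Model):
    # Field definitions must line up with the existing columns.
    question_text = models.CharField(max_length=200)
    pub_date = models.DateTimeField()

    class Meta:
        managed = False        # Django will not create migrations for this table
        db_table = 'question'  # name of the pre-existing table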
I am creating a project in Django and Django REST framework. It's an API for an Angular app. The database setup consists of multiple databases: one is the default database, where all the Django tables reside; the rest of the databases each belong to a type of user, and each user is supposed to have a separate database. So all the user-related data goes to its own separate database. To select the database dynamically, the user object has an extra field that stores the database to write to.
from django.contrib.auth.models import AbstractUser
from django.db import models


class CustomUser(AbstractUser):
    """Custom User model."""

    database = models.CharField(max_length=9)
The reason for doing this was performance: since each database is separate, a ListView and DetailView should work faster than if all the data were stored in one database.
I know I can choose a database by using the using method on the model manager. In the REST API things work fine and data is stored in the separate databases, but I end up overriding methods that Django has defined, which adds development cost to the project. Foreign keys and many-to-many keys need to be resolved against the current database of the user, which is not happening because I have customized the database setup. Also, my code can't be as good as theirs :p, as they have written Django over the course of many years.
I have overridden many querysets already, but Django still uses the default database many times. If only I could use the request object in the model manager of Django models to swap the default database on a per-request basis, things would be different, I think.
My questions are:
Is there a way to access the request object in the model manager? I could do something to the effect of the code below.
class CustomManager(models.Manager):
    def get_queryset(self, request):
        return super(CustomManager, self).get_queryset().using(request.user.database)
The model manager has a _db property that could be used to select the database. Would overriding it be advisable? If yes, how and where in the code?
Is there a better way to implement the separate databases?
Using a database router is recommended in the Django docs, but the problem is that a router only has access to the model class.
I found a couple of questions related to the problem of switching databases dynamically. This post has a solution that solves the problem of passing request.user (or any other parameter) by using a threading.local instance.
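A minimal sketch of that approach, assuming a middleware that stores the current user's database in a threading.local and a router that reads it back (the names THREAD_LOCAL, DynamicDbMiddleware and TenantRouter are made up here):

import threading

THREAD_LOCAL = threading.local()


class DynamicDbMiddleware:
    """Remember the current user's database for the duration of the request."""

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        user = getattr(request, 'user', None)
        THREAD_LOCAL.db = getattr(user, 'database', None) or 'default'
        return self.get_response(request)


class TenantRouter:
    """Route reads and writes to the database stored in the thread local."""

    def db_for_read(self, model, **hints):
        return getattr(THREAD_LOCAL, 'db', 'default')

    def db_for_write(self, model, **hints):
        return getattr(THREAD_LOCAL, 'db', 'default')

The middleware has to be registered in MIDDLEWARE (after the authentication middleware, so request.user is available) and the router in DATABASE_ROUTERS.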
Someone even created a reusable plugin for this: https://github.com/ambitioninc/django-dynamic-db-router
Hope that helps.
I run multiple (more or less) identical Django (1.11) deployments on the exact same schema, but with different settings (I make my own Settings models). The values in these Settings models, of which there are plenty, differ for each deployment, so the sites can, for example, appear differently depending on the settings.
A business requirement came up that requires me to regularly export these Settings models (DisplaySettings, CurrencySettings, etc.) from one stack and import them into another. I know dumpdata and loaddata offer basic functionality in the form of JSON files, but the business side also needs these extra capabilities:
The admin must be able to select which settings to export, including ForeignKey and ManyToManyField relations that may be in these settings.
When importing that file, the admin must be able to choose which settings in the file to import, and how (update the existing settings model, or create a new one).
The same exported file can be re-imported into the same stack to create duplicate copies of these settings.
Here are the solutions I have tried so far:
dumpdata/loaddata: does not meet the "choose which settings to import/export" requirement.
django-import-export: only supports exporting tabular structures, i.e. foreign keys cannot be exported as part of a settings record.
django-easydump: a largely unrelated package that uploads a dump to S3. It lets you specify which models to include, but not which attributes of each model.
Writing custom nested ModelSerializers in djangorestframework: doable but tedious, and it requires a custom front end to handle the first requirement.
My question is: is there already a built-in way to perform imports/exports as described, or, if not, are there any qualifying third-party packages, not listed above, that I have obviously missed?
There's nothing built in that will handle all of your requirements.
If the schema is the same across all your deployments, the easiest thing to do would be to set up DRF endpoints for each model. Unless I'm missing something, they don't need to be nested.
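For illustration, a flat (non-nested) endpoint for one of the settings models might look roughly like this; the module path settings_app.models is an assumption:

from rest_framework import routers, serializers, viewsets

from settings_app.models import CurrencySettings  # assumed app/model location


class CurrencySettingsSerializer(serializers.ModelSerializer):
    class Meta:
        model = CurrencySettings
        fields = '__all__'


class CurrencySettingsViewSet(viewsets.ModelViewSet):
    queryset = CurrencySettings.objects.all()
    serializer_class = CurrencySettingsSerializer


router = routers.DefaultRouter()
router.register(r'currency_settings', CurrencySettingsViewSet)

On the importing stack, the exported data can then be consumed with a function like the one below.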
import requests

# default_domain, api_user and api_pass are assumed to be defined elsewhere.
from settings_app.models import CurrencySettings  # assumed module path


def import_currency_settings(new=False, remove_fields=()):
    endpoint = default_domain + '/currency_settings/'
    settings = requests.get(endpoint, auth=(api_user, api_pass)).json()
    for setting in settings:
        for field in remove_fields:
            setting.pop(field, None)
        if new:
            # Drop the primary key so a brand new row is created.
            setting.pop('id', None)
            CurrencySettings.objects.create(**setting)
        else:
            # Copy first, so the id lookup below still works after the pop.
            updated = dict(setting)
            updated.pop('id', None)
            CurrencySettings.objects.update_or_create(
                id=setting['id'],
                defaults=updated,
            )

import_currency_settings(new=True, remove_fields=['vat'])
How can I specify which database (by its alias name) a Django ModelForm should use?
A Django ModelForm knows its corresponding model, and the fields included.
The ModelForm instance clearly knows how to specify a database, internally. It can validate its fields against the database, and can save a new model instance to the database. This implies its operations have knowledge of which database to use.
I can't find how to specify any database other than the default, either when creating the ModelForm or when it interacts with the database:
import csv

from cumquat_app.forms import CumquatImportForm

db_alias = 'foo'

reader = csv.DictReader(input_file)
for row in reader:
    fields = make_fields_from_input_row(row)
    # Wanted: ‘form = CumquatImportForm(fields, using=db_alias)’.
    form = CumquatImportForm(fields)
    # Wanted: ‘if form.is_valid(using=db_alias)’.
    if form.is_valid():
        # Wanted: ‘form.save(using=db_alias)’.
        form.save()
What I need is to specify the database alias as an external user of the ModelForm, when creating the instance or when calling ModelForm.clean, ModelForm.is_valid, ModelForm.save, etc., the same way I can with the ‘using’ hook of QuerySet.using('foo') or Model.save(using='foo').
Note that this is not a job for multi-database routing policy configuration. The use case is that I need to specify exactly one database, known only at run time. If the connection fails it should not fall back to any other, so database routers are the wrong hammer for this nail.
I can ask the ModelForm.save method not to commit its change (with commit=False) and then use Model.save directly. That does not address the other behaviour of a ModelForm that accesses the database, so it is not a solution to this question.
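For reference, that partial workaround looks like this (a sketch only; note that is_valid() still validates against the default database):

form = CumquatImportForm(fields)
if form.is_valid():  # unique checks still hit the default database
    instance = form.save(commit=False)
    instance.save(using=db_alias)  # only the final write targets the alias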
A ModelManager.db_manager could do the job if I used it to create the model instance. But I'm relying on the form to create the instance; I can't create a model instance myself because I don't have field values to assign yet. That's the job of the form.
If it matters: this is in a management command, where I need to be able to specify from the command line that a particular database alias is the context for the command.
What is the equivalent for using='foo' when instantiating a ModelForm for the model, or calling its methods (ModelForm.clean, ModelForm.save, etc.)?
Unless someone can find a way to do it as requested using the ModelForm interface, I can only conclude Django offers no way to do this with the ModelForm API as it stands.
My old coworker installed Pinax through pip, so it's installed in site-packages. All the apps live there; our own apps are within our Django project structure.
I want to modify Pinax's account app by switching the create_user's is_active flag to False (currently the app sets it to True). I also want to add additional functionality to create_user or whatever function I need to.
from pinax.account import forms

class MyCustomizeForm(forms.SignupForm):
    def create_user(self, *args, **kwargs):  # exact signature depends on the Pinax version
        user = super(MyCustomizeForm, self).create_user(*args, **kwargs)
        user.is_active = False  # needs a second save to persist
        user.save()
        # additional...
        return user
Maybe this?
But doesn't that require me to do at least two commit transactions talking to the DB?
Is that preferable? Also, does doing this require me to change anything else in my Django project (how users sign up, which views it uses, which forms it uses)?
Currently, I have an app in my Django project that is supposed to deal with the extension/customization of the account app. I can't commit site-packages to VCS... I mean, I am not supposed to make any changes there.
Pinax account/models.py:

class Account(models.Model):
    ...

    def its_own_method(...):
        ...

# this is at the same indentation level as class Account
def create_account(sender, instance=None, **kwargs):
    if instance is None:
        return
    account, created = Account.objects.get_or_create(user=instance)

post_save.connect(create_account, sender=User)
You can use Django signals for exactly this situation. Signals are meant for apps that are distributed generally and won't always know how they will be integrated into a project.
The signal of interest to you here is pre_save. You can connect a pre_save receiver to the pinax.account model and be notified when a save is about to happen. This gives you a chance to make changes to that model instance. Signals are synchronous, meaning your change runs serially, right before pinax.account finishes committing the save.
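A minimal sketch of that approach: since the question's goal is flipping is_active on new users, this listens on Django's User model rather than the Pinax Account model, and the receiver name is made up:

from django.contrib.auth.models import User
from django.db.models.signals import pre_save
from django.dispatch import receiver


@receiver(pre_save, sender=User)
def deactivate_new_users(sender, instance, **kwargs):
    # Only touch rows that are about to be created, not later updates.
    if instance.pk is None:
        instance.is_active = False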