Scaling or avoiding migrations when using Django 2.x? - django

I'm just beginning my journey with the Django framework, and I read that the Django developers have made migrations mandatory as of version 2.0. I might be old school, but I have always kept my database separate from my code and models, and I don't think migrations will scale with the size of an engineering team.
So my question is two-fold:
Can you use Django 2.0 without migrations? I don't think they will scale well or fit a CI/CD pipeline.
If we can't avoid the DB migrations, how can we integrate them into a robust CI/CD pipeline where a model can be changed by developers from different teams?

Yes, you can: create your tables manually and tell Django not to manage them.
After your Django project is configured, just run python manage.py inspectdb > models.py in your terminal, and Django will generate models from the configured database. This is particularly useful if your project uses an already existing or legacy database.
Then you can tell Django not to manage your tables via the model's Meta options:
class MyModel(models.Model):
    # your fields here

    class Meta:
        managed = False
See the docs here
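For reference, a rough sketch of what inspectdb typically emits for an existing table (the table and field names here are invented; the real output depends entirely on your schema):
from django.db import models

class Customer(models.Model):
    name = models.CharField(max_length=100)
    email = models.CharField(max_length=254, blank=True, null=True)

    class Meta:
        managed = False  # inspectdb marks introspected tables as unmanaged by default
        db_table = 'customer'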
But unless you have a very good way to keep track of your table changes, I must say this is a mistake. Django migrations help you keep track of your model changes along the way, which is really helpful if you need to roll back or understand your database's history.
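For example, showmigrations lists every migration for an app and marks the ones that have already been applied (myapp here is just a placeholder):
python manage.py showmigrations myapp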

Migrations are not mandatory; it's not clear what you think has changed in 2.0 to make them so.
Migrations are intended for large teams. If you avoid them, you'll make things much, much harder for yourself and your fellow team members.

Related

why does django not combine makemigrations and migrate commands?

I can think of three reasons why:
providing users with the flexibility on "when" to commit model changes
debugging modularity
perhaps resource consumption in larger databases
However, it does seem that migrate always follows shortly after makemigrations (at least in tutorials/YouTube videos).
So is there a philosophy behind this that I'm missing?
Of course there are some reasons.
First of all, makemigrations doesn't touch the real DB; it just records how the models (the DB schema) have changed, so you can see what's going on before you run migrate.
This makes Django safer.
It also gives you a chance to provide default values for new fields and other DB changes.
Another reason is reverting.
If you want to roll back the DB schema, you can just tell Django to roll back to a specific migration file (see the sketch after this answer).
Another reason is the 'reusable app' principle.
If you create an app with Django, it can be reused without any DB interaction: if you deploy your app (or whole project) to another project or server, it only needs the migration files, not the real DB.
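A minimal sketch of that separation in practice (the app name and migration numbers are placeholders):
python manage.py makemigrations blog   # writes blog/migrations/0002_... but touches no tables
python manage.py migrate blog          # applies the pending migration to the database
python manage.py migrate blog 0001     # rolls the schema back to the state after migration 0001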

shared DB across django projects

Our product has a RESTful API and a server-rendered app (the CMS). Both share the database, and both are written in Django.
The fields and models needed in both are not mutually exclusive; some are particular to the API, some to the CMS, and some are common.
My question is: if I run migrations in one of the repos, will they try to drop the fields that aren't present in that repo's models but are needed by the other? Will running the migrations individually in both repos keep the database up to date without posing a problem?
The only other valid option IMHO (besides merging projects) is turning off the automation of Django migrations on the common models (Meta.managed = False) and taking table creation and versioning into your own hands. You can still write migration scripts using django.db.migrations, but the makemigrations command won't do anything for these tables.
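A hand-written migration for such an unmanaged table might look roughly like this (the app label, table, and SQL are made up for illustration):
from django.db import migrations

class Migration(migrations.Migration):

    dependencies = [
        ('common', '0003_previous'),  # hypothetical earlier migration in the shared app
    ]

    operations = [
        # makemigrations ignores managed = False models, so schema changes are
        # written by hand, e.g. as raw SQL with an explicit reverse statement.
        migrations.RunSQL(
            sql="ALTER TABLE common_article ADD COLUMN subtitle varchar(200);",
            reverse_sql="ALTER TABLE common_article DROP COLUMN subtitle;",
        ),
    ]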
This was solved by using a schema migration tool external to Django's own. We use yoyo migrations to migrate our schema now.
Will running the migrations individually in both repos keep the database up to date and not pose a problem?
Unfortunately, no. As you suspected, changes in one will attempt to override the other.
The easiest thing to do is merge the two projects into one so this problem goes away entirely.
If this isn't an option, can the code be organised in such a way that both projects share the same models.py files? You could do this by perhaps having the models.py files and migrations folders only exist in one project. The second project could have a symlink across to each models.py file it uses. The trick (and the difficult part) will be to make sure you never create migrations for the app which uses the symlinks.
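A hypothetical layout for that approach, where the CMS project links to the models.py kept only in the API project (paths are illustrative):
ln -s ../../api_project/blog/models.py cms_project/blog/models.py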
I think the best thing to do would be to have one repo that contains all the fields. That project will be responsible for applying the migrations.
In the other projects, you'll need a DB router with an allow_migrate method that returns False for those model classes (a sketch follows below).
Also, using different DB users with different permissions can prevent the other projects from altering the tables.
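A sketch of such a router (the app label, module path, and settings entry are illustrative, not from the original question):
# cms/db_router.py
class SharedModelsRouter:
    """Stop this project from migrating models owned by the other repo."""

    shared_apps = {'common'}  # hypothetical app label owned by the other project

    def allow_migrate(self, db, app_label, model_name=None, **hints):
        if app_label in self.shared_apps:
            return False  # the other project is responsible for these tables
        return None       # no opinion; defer to other routers / the default

# settings.py
# DATABASE_ROUTERS = ['cms.db_router.SharedModelsRouter']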

Migrate after removing ForeignKey to third-party model

I'm in the process of removing some dead code in a project and I have the opportunity to remove a dependency on a third-party app that we've been using since we started the project. One of our models has a ForeignKey to a model in the third-party app and I'm running into trouble when trying to apply migrations on a fresh instance of the project.
An example model:
from django.db import models
from thirdparty.models import ThirdPartyModel

class MyModel(models.Model):
    fk = models.ForeignKey(ThirdPartyModel)
Removing MyModel.fk is detected by South and the migration is created successfully. Applying the migration and rolling it back works too. I can now remove thirdparty from INSTALLED_APPS and commit the changes (the new migration and settings.py).
The trouble starts when I clone the repository on another machine. ./manage.py syncdb runs as expected and creates all the tables not managed by South but ./manage.py migrate myapp fails when creating the table for (an early version of) MyModel because the foreign key to thirdparty_thirdpartymodel cannot be created (as expected because thirdparty is no longer in INSTALLED_APPS so none of the tables are created).
Are there standard ways to handle removing external dependencies? Is this an appropriate time to reset my migrations?
This is an old question, but it is still valid; the problem is independent of South and would also be an issue with Django's built-in migrations.
You should take care that the migration files are split up in such a way that you can fake the migrations that depend on the now-removed app (the one taken out of INSTALLED_APPS). That way, new installations can fake those migrations, while existing installations actually run them.
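Roughly, a fresh installation would then fake the migrations up to and including the last one that still references the removed app, and run the rest for real (the migration number here is invented):
./manage.py migrate myapp 0007 --fake
./manage.py migrate myapp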
Of course, if you have the possibility to start over (like a complete relaunch), you could wipe your database, remove all existing migration files and simply create completely new migrations. All other developers would have to drop their databases, as well.
In case you have existing production data and still want to start from scratch, there are different possibilities on how to transfer the data.
Which way is best depends on how much data there is, how much the structure has changed, etc.:
plain SQL (change the DB by hand, after running the new migrations, by transferring the data from old tables into new and dropping the tables, and foreign keys etc.)
fixtures (dump the data via Django in the old system and change the JSON to fit the new structure; a sketch follows after this list)
two parallel installations of the old and new system and transfer via Django/Python scripts (slower than plain SQL but you can make use of the Django Model logic, apply validation checks, transformations etc. in a more comfortable way).
Of course, don't do this in production but somewhere else and simply apply the result.
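For the fixtures route above, the rough shape would be something like this (app and file names are illustrative):
python manage.py dumpdata oldapp --indent 2 > oldapp.json   # run on the old installation
# edit oldapp.json by hand so it matches the new models and field names
python manage.py loaddata oldapp.json                        # run on the new installation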

Is there a way to update the database with the changes in my models? [duplicate]

Possible Duplicate:
update django database to reflect changes in existing models
I've used Django in the past, and one of the frustrations I've had with it as an ORM tool is the inability to update an existing database with changes in the model. (Hibernate does this very well and makes it really easy to update or heavily modify a model and apply the changes to an existing database.) Is there a way to do this without wiping the database every time? It gets really old having to regenerate admin users and sites after every change to a model I'd like to play with.
You will want to look into South. It provides a migrations system to migrate both schema changes as well as data from one version to the next.
It's quite powerful, and the vast majority of changes can be handled simply by running
manage.py schemamigration --auto
manage.py migrate
The auto functionality does have its limits, and especially if the change is eventually going to be run on a production system, you should check the code that --auto generated to be sure it's doing what you expect.
South has a great guide to getting started and is well documented. You can find it at http://south.aeracode.org
No.
As the documentation of syncdb command states:
Syncdb will not alter existing tables
syncdb will only create tables for models which have not yet been installed. It will never issue ALTER TABLE statements to match changes made to a model class after installation. Changes to model classes and database schemas often involve some form of ambiguity and, in those cases, Django would have to guess at the correct changes to make. There is a risk that critical data would be lost in the process.
If you have made changes to a model and wish to alter the database tables to match, use the sql command to display the new SQL structure and compare that to your existing table schema to work out the changes.
South seems to be how most people solve this problem, but a really quick and easy way to do this is to change the db directly through your database's interactive shell. Just launch your db shell (usually just dbshell) and manually alter, add, drop the fields and tables you need changed using your db syntax.
You may want to run manage.py sqlall appname to see the sql statements Django would run if it was creating the updated table, and then use those to alter the database tables and fields as required.
The Making Changes to a Database Schema section of the Django book has a few examples of how to do this: http://www.djangobook.com/en/1.0/chapter05/
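For example, compare the output of sqlall against your current schema and apply the difference by hand in dbshell (the app, table, and column names here are made up, and the exact SQL depends on your backend):
manage.py sqlall blog
manage.py dbshell
ALTER TABLE blog_entry ADD COLUMN subtitle varchar(200);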
I manually go into the database - whatever that may be for you: MySQL, PostgreSQL, etc. - to change database info, and then I adjust the models.py accordingly for reference. I know there is Django South, but I didn't want to bother with using another 3rd party application.

Refactoring models from one django app into two

I've got an app which has grown too big and I'd like to split it into two. It's not an easy task code-wise, but I'm more afraid of the database migrations I'll have to perform. Is there a proven way of doing such kind of thing?
Currently my thoughts are something like this:
Use south
Copy models to another app
Make a data migration to copy relevant models from one app to the other
Make another migration that removes the now unused models from the old app
Will have to make a script that does those operations in the correct order
Has anybody done anything like this?
No need to do any database migrations if you don't want to. Just set the db_table Meta attribute on your models to refer to the old names, and everything will be perfectly happy.
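For instance, if a Post model moves from an old blog app into a new content app but should keep using the old table, something like this would work (names here are hypothetical):
# content/models.py
from django.db import models

class Post(models.Model):
    title = models.CharField(max_length=200)

    class Meta:
        db_table = 'blog_post'  # keep pointing at the table the old app created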