After setting up and running my migration I realized I messed up and forgot to add nullable to 4 of my fields.
I ran the migration with:
/**
 * @ORM\Column(type="float")
 */
private $widgets;
It was supposed to be:
/**
 * @ORM\Column(type="float", nullable=true)
 */
private $widgets;
What's the recommended way to fix this? Manually change the migration and force it to run again? I'm not even sure I can do that. Or should I create another migration for it and run it?
It depends on whether you already have data in the DB. If yes, then fix it manually (edit your current migration and apply the change from the MySQL console) or create another migration and run it, because dropping and re-running the migration would remove the data in those columns. If no, then you can drop the tables, remove the current migration, and create a new migration with the nullable option.
I have this model, which I want to move to another app.
After running this migration, I could successfully see the model under the required app.
But when I added another field, it added the entire model to the new migration in new_app.
According to most tutorials, it should have added just the field.
I don't want to fake the migrations, as that can cause issues.
Please point out my mistake.
Problem: on adding a new field to the new_app model, the migration contains a CreateModel operation. How do I avoid this?
Please help.
Migrations are persisted in your DB; there's a dedicated table that keeps track of your changes. You shouldn't modify applied migrations directly.
My suggestion:
For development purposes you can delete your migration files and the migrations table, then create the migration again once you have moved your model.
If you can't delete anything, then you should end up with something like:
users/0001_mymodel.py
users/0002_mymodel_deleted.py
otherapp/0001_mymodel_added.py
otherapp/0002_adding_field.py
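For contrast, when Django detects the change correctly, the second migration in the new app contains only an AddField operation rather than a CreateModel. A minimal sketch of what that file would look like (app, model, field, and migration names here are hypothetical placeholders, not from the question):

```python
# otherapp/migrations/0002_adding_field.py
# Illustrative only: all names below are placeholders.
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ("otherapp", "0001_mymodel_added"),
    ]

    operations = [
        # A correctly detected change adds just the field:
        migrations.AddField(
            model_name="mymodel",
            name="new_field",
            field=models.IntegerField(default=0),
        ),
    ]
```

If makemigrations emits CreateModel instead, Django does not yet consider the model part of the new app's migration state, which is why the answer above suggests recreating the migrations after the move.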
I have a production Django deployment (Django 1.11) with a PostgreSQL database. I'd like to add a non-nullable field to one of my models:
class MyModel(models.Model):
new_field = models.BooleanField(default=False)
In order to deploy, I need to either update the code on the servers or run migrations first, but because this is a production deployment, requests can (and will) happen in between my updating the database and my updating the server. If I update the server first, I will get an OperationalError no such column, so I clearly need to update the database first.
However, when I update the database first, I get the following error from requests made on the server before it is updated with the new code:
django.db.utils.IntegrityError: NOT NULL constraint failed: myapp_mymodel.new_field
On the surface, this makes no sense because the field has a default. Digging into this further, it appears that defaults are provided by Django logic alone and not actually stored on the SQL level. If the server doesn't have the updated code, it will not pass the column to SQL for the update, which SQL interprets as NULL.
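The claim above can be reproduced with plain SQL. A minimal sketch using stdlib sqlite3 (the table and column names mirror the error message, but any names work): the migration leaves the column NOT NULL with no SQL-level DEFAULT, so an INSERT from code that does not know about the column fails.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Mirror what Django's migration leaves behind: NOT NULL, but *no*
# SQL-level DEFAULT -- the default=False lives only in Python code.
conn.execute(
    "CREATE TABLE myapp_mymodel (id INTEGER PRIMARY KEY, name TEXT, "
    "new_field BOOLEAN NOT NULL)"
)

# New application code knows the field and sends the default explicitly:
conn.execute(
    "INSERT INTO myapp_mymodel (name, new_field) VALUES (?, ?)", ("new", 0)
)

# Old application code does not know the column exists, so it omits it.
# SQL treats the missing value as NULL, and the constraint rejects it:
try:
    conn.execute("INSERT INTO myapp_mymodel (name) VALUES (?)", ("old",))
except sqlite3.IntegrityError as exc:
    print(exc)  # NOT NULL constraint failed: myapp_mymodel.new_field
```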
Given this, how do I deploy this new non-nullable field to my application without my users getting any errors?
Migrations should always be run at the beginning of deployments or else you get other problems. The solution to this problem is to split the changes into two deployments.
In deployment 1, the field needs to be nullable (either a NullBooleanField or null=True). You should make a migration for the code in this state and make sure the rest of your code will not crash if the value of the field is None. This is necessary because requests can go to servers that do not yet have the new code; if those servers create instances of the model, they will create it with the field being null.
In deployment 2, you set the field to be not nullable, make a migration for this, and remove any extra code you wrote to handle cases where the value of the field is None. If the field does not have a default, the migration you make for this second deployment will need to fill in values for objects that have None in this field.
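The backfill that deployment 2's migration must perform is just an UPDATE over the NULL rows before the NOT NULL constraint is added. In Django this would typically be a RunPython data migration; a minimal SQL-level sketch with stdlib sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Deployment 1 left the column nullable; old servers wrote NULLs meanwhile.
conn.execute(
    "CREATE TABLE myapp_mymodel (id INTEGER PRIMARY KEY, new_field BOOLEAN)"
)
conn.executemany(
    "INSERT INTO myapp_mymodel (id, new_field) VALUES (?, ?)",
    [(1, None), (2, 1), (3, None)],
)

# Deployment 2's migration backfills the default before the column is
# tightened to NOT NULL (in Django, a RunPython step would do this).
conn.execute("UPDATE myapp_mymodel SET new_field = 0 WHERE new_field IS NULL")

nulls = conn.execute(
    "SELECT COUNT(*) FROM myapp_mymodel WHERE new_field IS NULL"
).fetchone()[0]
print(nulls)  # 0 -- now it is safe to add the NOT NULL constraint
```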
The two deployments technique is needed to safely delete fields as well, although it looks a bit different. For this, use the library django-deprecate-fields. In the first deployment, you deprecate the field in your models file and remove all references to it from your code. Then, in deployment 2, you actually delete the field from the database.
You can accomplish this by starting with a NullBooleanField:
Add new_field = models.NullBooleanField(default=False) to your model
Create schema migration 1 with makemigrations
Change model to have new_field = models.BooleanField(default=False)
Create schema migration 2 with makemigrations
Run schema migration 1
Update production code
Run schema migration 2
If the old production code writes to the table between steps 5 and 6, a null value of new_field will be written. There will be a time between steps 6 and 7 where there can be null values for the BooleanField, and when the field is read, it will be null. If your code can handle this, you'll be ok, and then step 7 will convert all of those null values to False. If your new code can't handle these null values, you can perform these steps:
Add new_field = models.NullBooleanField(default=False) to your model
Create schema migration 1 with makemigrations
Run schema migration 1
Update production code
Change model to have new_field = models.BooleanField(default=False)
Create schema migration 2 with makemigrations
Run schema migration 2
Update production code
Note: these methods were only tested with Postgres.
Typically, a django upgrade process looks as follows:
LOCAL DEVELOPMENT ENV:
Change your model locally
Create migrations for the model (python manage.py makemigrations)
Test your changes locally
Commit & push your changes to (git) server
ON THE PRODUCTION SERVER:
Set ENV parameters
Pull from your version control system (git fetch --all; git reset --hard origin/master)
update python dependencies (eg pip install -r requirements.txt)
migrate the database (python manage.py migrate)
update static files (python manage.py collectstatic)
restart the Django application server (depends on your setup, e.g. reloading gunicorn or uWSGI; 'python manage.py runserver' is only for local development)
I'm working in a development environment on a flask-app with a Postgres 10 database that has ~80 tables. There are lots of relationships and ForeignKeyConstraints networking it all together.
It was working fine with Flask-Migrate. I'd bootstrapped and migrated up to this point with ~80 tables. But, I wanted to test out some new scripts to seed the database tables, and thought it would be quickest to just drop the database and bring it back up again using Flask-Migrate.
In this process, the migration folder was deleted, so I just started over fresh with a db init. Then ran db migrate. I manually fixed a few imports in the migrate script. Finally, I ran db upgrade.
However, now with all these 80 create_table commands in my migration script, when I run db upgrade, I receive an error:
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) relation "items" does not exist
I receive this error for every child table with a ForeignKeyConstraint whose parent table does not appear above it in the migration file.
But the autogenerated script from db migrate sorts the tables alphabetically by table name, and referring to the documentation, I don't see the importance of sort order mentioned.
Bottom line: it seems I'm either forced to write a script to sort all these tables so that each parent table comes before its child tables, or else cut and paste like a jigsaw puzzle until all the tables are in the required order.
What am I missing? Is there an easier way to do this with Flask-Migrate or Alembic?
After researching this, it seems Flask-Migrate and Alembic have no built-in way to resolve this sort-order issue. I fixed it by cutting and pasting the tables into an order that ensured each parent table was above its child tables in the migration file.
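The "script to sort all these tables" mentioned in the question is a topological sort over the foreign-key graph. A minimal sketch using stdlib graphlib (Python 3.9+); the table names are illustrative, and in practice you would extract the dependency pairs from the autogenerated op.create_table(...) calls:

```python
from graphlib import TopologicalSorter

# Each table maps to the set of tables its foreign keys reference.
# These names are placeholders, not the real 80-table schema.
deps = {
    "users": set(),
    "items": set(),
    "orders": {"users"},
    "order_items": {"orders", "items"},
}

# static_order() emits parents before children, which is exactly the
# order the create_table calls need to appear in.
order = list(TopologicalSorter(deps).static_order())
print(order)
```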
I've just encountered this myself, and could not find a better and/or official answer.
My approach was to separate the table creation from the creation of foreign key constraints:
Edit Alembic's auto-generated migration script: In each table create operation, remove all lines creating foreign key constraints
Run Alembic's upgrade command (tables are created, minus the FK constraints, of course)
Run Alembic's migrate command (additional migration script created, that adds all FK constraints)
Run Alembic's upgrade command (FK constraints added to tables)
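The follow-up migration from step 3 can also be written by hand using Alembic's standard operations. A sketch, with placeholder table, column, and constraint names:

```python
# Hypothetical second migration restoring the FK constraints that were
# stripped out in step 1. op.create_foreign_key / op.drop_constraint are
# standard Alembic operations; all names below are placeholders.
from alembic import op


def upgrade():
    op.create_foreign_key(
        "fk_order_items_item_id",   # constraint name
        "order_items",              # source (child) table
        "items",                    # referent (parent) table
        ["item_id"],                # local columns
        ["id"],                     # remote columns
    )


def downgrade():
    op.drop_constraint(
        "fk_order_items_item_id", "order_items", type_="foreignkey"
    )
```

One create_foreign_key call per constraint; repeat for each FK removed in step 1.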
I've faced some problems when using flask-sqlalchemy and flask-migrate, and I solved them using the Python interactive shell.
>>> from yourapp import db, create_app
>>> db.create_all(app=create_app())
Happy coding...
I just tested it myself. I had Django models, and there have already been instances of the models in the database.
Then I added a dummy integer field to a model and ran manage.py syncdb. I checked the database, and nothing happened to the table; the extra field was not added.
Is this the expected behavior? What's the proper way of modifying the model, and how will that alter the data that's already in the database?
Django will not alter already-existing tables; they even say so in the documentation. The reason for this is that Django cannot guarantee that no information will be lost.
You have two options if you want to change existing tables. One is to drop them and run syncdb again, but then you will need to save and restore your existing data somehow. The other is to use a migrations tool to do this for you. Django can show you the SQL for the new database schema, and you can diff that against the current version of the database to create an update script.
You could even update your database manually if it is a small change and you don't want to bother with migration tools, though I would recommend using one.
Please use South for any kind of change to get reflected in your database tables; see the South documentation for how to use it.
So I have a model with a field that originally defaulted to not allowing nulls. I want to change it to allow nulls, but syncdb doesn't make the change. Is it as simple as changing it in the database and reflecting it in models.py, just for the next time it's run against a new database?
To answer the question: yes, it should work if you change it in the model and in your database manually; otherwise, check out django-south or django-evolution to help you evolve your database schema!
Another possibility would be to dump your current DB as a fixture, drop the tables, run syncdb, and reload the fixtures (I guess this would work for changing the null setting, but not for bigger changes).
You can save yourself a whole world of hurt by using some kind of database migration app with Django. Then you can chop and change model fields and their attributes basically as much as you please.
I highly recommend South for its features and friendliness