Flask-Migrate `db upgrade` fails with "relation does not exist"

I'm working in a development environment on a Flask app with a Postgres 10 database that has ~80 tables. There are lots of relationships and ForeignKeyConstraints networking it all together.
It was working fine with Flask-Migrate. I'd bootstrapped and migrated up to this point with ~80 tables. But I wanted to test out some new scripts to seed the database tables, and thought it would be quickest to just drop the database and bring it back up again using Flask-Migrate.
In this process, the migrations folder was deleted, so I just started over fresh with a db init. Then I ran db migrate. I manually fixed a few imports in the migration script. Finally, I ran db upgrade.
However, now with all these 80 create_table commands in my migration script, when I run db upgrade, I receive an error:
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) relation "items" does not exist
I receive this error for every child table that has a ForeignKeyConstraint, if the child table is not ordered below its parent table in the migration file.
But the autogenerated script from db migrate has the tables sorted alphabetically by table name.
Referring to the documentation, I don't see the importance of sort order mentioned anywhere.
Bottom line: it seems I'm either forced to write a script to sort all these tables into an order where each parent table is above its child tables, or else just cut and paste like a jigsaw puzzle until all the tables are in the required order.
What am I missing? Is there an easier way to do this with Flask-Migrate or Alembic?

After researching this, it seems Flask-Migrate and Alembic do not have any built-in way to resolve this sort-order issue. I fixed it by cutting and pasting the tables into an order that ensured each parent table was above its child tables in the migration file.
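If you'd rather not work out the order by hand, here is a small sketch (the yourapp import path is a placeholder for wherever your Flask-SQLAlchemy db object lives): SQLAlchemy's metadata already computes a dependency-ordered list of tables, which you can print and use as a guide when rearranging the migration.

# Sketch: print table names in FK dependency order (parents first),
# using SQLAlchemy's built-in topological sort of the metadata.
from yourapp import db  # placeholder import path

for table in db.metadata.sorted_tables:
    print(table.name)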

I've just encountered this myself, and could not find a better and/or official answer.
My approach was to separate the table creation from the creation of foreign key constraints:
1. Edit Alembic's auto-generated migration script: in each create_table operation, remove all lines that create foreign key constraints (see the sketch below).
2. Run Alembic's upgrade command (tables are created, minus the FK constraints, of course).
3. Run Alembic's migrate command (an additional migration script is created that adds all the FK constraints).
4. Run Alembic's upgrade command (FK constraints are added to the tables).
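For illustration, a minimal sketch of what those edits can look like; the table, column, and constraint names here are invented, not taken from the question:

# Step 1: in the auto-generated script, the FK line inside create_table is removed.
op.create_table('line_items',
    sa.Column('id', sa.Integer(), nullable=False),
    sa.Column('item_id', sa.Integer(), nullable=True),
    # removed: sa.ForeignKeyConstraint(['item_id'], ['items.id']),
    sa.PrimaryKeyConstraint('id')
)

# Steps 3-4: the follow-up migration script then restores the constraint.
op.create_foreign_key('fk_line_items_item_id', 'line_items', 'items', ['item_id'], ['id'])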

I've faced some problems when using flask-sqlalchemy and flask-migrate; I solved them using the Python interactive shell.
>>> from yourapp import db, create_app
>>> db.create_all(app=create_app())
Check this link to get more information.
Happy coding...
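One caveat: in newer Flask-SQLAlchemy releases (3.x), create_all() no longer accepts an app argument, so the equivalent there is to push an application context first:

>>> from yourapp import db, create_app
>>> app = create_app()
>>> with app.app_context():
...     db.create_all()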

Related

How to make the initial migration for a DB that diverged from the corresponding models?

Situation: a project I'm working on had a file corruption or something. Models are the latest version, but the SQLite DB had to be rolled back to before some columns were added/removed/modified. Migration files are all gone. Trying to create migrations anew results in the new columns being present in the initial migration file, so I can neither migrate for real (due to the tables already existing) nor fake it (since the columns are missing in the DB).
Given those circumstances, how can I make an initial migration matching the columns currently present in the DB so I can fake it and then make a real second migration to bring the tables in line with the models? The only thing that comes to mind is manually tweaking the models to match the DB schema, making the initial migration, faking it and then restoring the new version of the models, but I'd much prefer having this done automatically.
django inspectdb to the rescue.
But first, learn how to use git. If you had used proper version control, you would not be facing this difficulty now.
1. First step, add the code to version control.
2. Delete the existing models files.
3. Use inspectdb to generate a models.py from the tables in the database. This is not perfect; you will have to edit the file manually, and you may have to spread it out between different models files manually.
4. Now delete the contents of the django_migrations table.
5. Do a ./manage.py makemigrations (yourapp).
6. Do the fake migration you mentioned.
7. Replace the generated models.py with your current models.py (a git checkout of that file will do the trick nicely).
8. Do a makemigrations and migrate again (the whole sequence is sketched below).
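The whole sequence, sketched as shell commands; the app label yourapp and the models path are placeholders, and the DELETE assumes you only want to reset this one app's migration history:

$ python manage.py inspectdb > yourapp/models.py    # regenerate models from the live DB
$ python manage.py dbshell                          # then, inside the database shell:
DELETE FROM django_migrations WHERE app = 'yourapp';
$ python manage.py makemigrations yourapp           # initial migration matching the DB
$ python manage.py migrate yourapp --fake           # record it without touching the DB
$ git checkout -- yourapp/models.py                 # restore your real models
$ python manage.py makemigrations yourapp           # second migration: DB -> models
$ python manage.py migrate yourapp                  # apply it for real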
Good luck.

Django won't create a table

I have two different databases in Django. Initially, I had a table called cdr in my secondary database. I decided to get rid of the second database and just add the cdr table to the first database.
I deleted references (all of them, I think) to the secondary database in the settings file and throughout my app. I deleted all of the migration files and ran makemigrations fresh.
The table that used to be in the secondary database is not created when I run migrate, even though it doesn't exist in my Postgres database.
I simply cannot for the life of me understand why makemigrations will create the migration file for the table when I add it back into the model definition (and I have verified that it is in the migration file), yet when I run migrate, it tells me there are no migrations to apply.
Why is this so? I have confirmed that I have managed=True. I have confirmed that the model is not in my Postgres database by logging into the first database and running \dt.
Why does Django still think that this table exists, such that it tells me there are no migrations to apply, even though there is a create command in the migration file? I even dropped the secondary database to make sure it wasn't somehow being referenced.
I suspect code isn't needed to explain this to me, but I will post it if needed. I figure I am missing something simple here.
Why does Django still think that this database still exists such that it is telling me no migrations to apply even though it shows a create command in the migrations file?
Because Django maintains a table called django_migrations in your database, which lists all the migrations that have been applied. Since you are almost starting afresh, clear out this table and then run the migrations.
If this still doesn't work (and still assuming you are on a fresh start), it's a simple matter to drop all the tables (or even the whole database) and run the migration again. On the other hand, if you have data you want to save, you need to look at the --fake and --fake-initial options of migrate.
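A minimal sketch, assuming the relevant app is labelled yourapp (a placeholder):

-- inside psql / manage.py dbshell: forget the recorded history for this app
DELETE FROM django_migrations WHERE app = 'yourapp';

$ python manage.py migrate yourapp                 # re-apply from scratch, or:
$ python manage.py migrate yourapp --fake-initial  # skip initial migrations whose tables already exist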

Tables not recognized on restored geodjango postgis backup

So I backed up my GeoDjango PostGIS database using pg_dump before performing some calculations, which I've managed to mess up. I've created a new database using
createdb -T template0 -O proj proj_backup
psql proj_backup < proj_backup.pg
This seemed to work fine (though there were a few errors during the import), and when connecting to the database using psql, all my tables are there and appear to have the correct number of rows, etc.
However, changing my settings.py to connect to my newly imported backup db (proj_backup in my example), gives me the following errors:
DatabaseError: relation "appname_model" does not exist
Any ideas? I'm guessing I did the dump wrong, or that I haven't maintained the ForeignKeys somehow. Thanks very much.
Update
So I figured out my first mistake: I had two similarly named backup databases and connected to the wrong one. Connecting to the correct one seems to have fixed everything. However, it's still quite strange that it didn't recognize the tables in the other backup database, which definitely did exist. Running syncdb on the incorrect database ended up duplicating those tables (if I remember correctly, there were duplicate table names when I listed them all from within psql). Unfortunately, the way I discovered my mistake was by dropping the bad table to recreate it, so in order to reproduce this error I'll probably have to use Time Machine. Still very strange; I'll give that a shot when I can get physical access to my work machine.
So is your appname_model table there, or is it a view? Was it in public or another named schema?
If the table is there, then chances are you have it in a schema that is not in your database search path. Check the search_path of your old database; it might have included something other than the default, or your default search schema is set in postgresql.conf and is non-standard.
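A few psql commands that can confirm this (appname_model taken from the error above):

SHOW search_path;    -- schemas this session actually searches
\dn                  -- list all schemas in the database
SELECT schemaname, tablename FROM pg_tables WHERE tablename = 'appname_model';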

SQL commands generated in Django by running sqlall

In my Django app, I just ran
$ python manage.py sqlall
and I see a lot of SQL statements that look like this, when describing FK relationships:
ALTER TABLE `app1_model1` ADD CONSTRAINT model2_id_refs_id_728de91f FOREIGN KEY (`model2_id`) REFERENCES `app1_model2` (`id`);
Where does "728de91f" come from? I would like to know because I'd like to manually write SQL statements to accompany model changes in the app so that my DBs can be kept up to date.
Why not use a migration app to write all your SQL for you? It's definitely the smart way to go. Check out South -- part of it will be merged into Django core soon. (As for the suffix itself: it's just a short hash Django derives from the table and column names so that generated constraint names stay unique; it isn't meant to be written by hand.)
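A sketch of the South workflow, using the app1 name from the question (South must also be added to INSTALLED_APPS and syncdb run once before this):

$ pip install South
$ python manage.py schemamigration app1 --initial   # first migration for the app
$ python manage.py migrate app1                     # apply / record it
$ python manage.py schemamigration app1 --auto      # after each model change
$ python manage.py migrate app1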

Django unit-testing with loading fixtures for several dependent applications problems

I'm now writing unit tests for already existing code. I faced the following problem:
After running syncdb to create the test database, Django automatically fills several tables like django_content_type or auth_permissions.
Then, imagine I need to run a complex test, like checking the user registration, which will need a lot of data tables and connections between them.
If I try to use my whole existing database for making fixtures (which would be rather convenient for me), I receive an error like here. This happens because Django has already filled tables like django_content_type.
The next possible way is to use the dumpdata --exclude option for the tables already filled by syncdb. But this doesn't work well either, because if I take User and User Group objects from my db together with the User Permissions table that was automatically created by syncdb, I can receive errors, because the primary keys connecting them now point to the wrong rows. This is better described here in the part about 'fixture hell', but the solution shown there doesn't look good.
The next possible scheme I see is this:
1. I run my tests; Django creates the test database and syncdb creates all those tables.
2. In my test setUp I drop this database and create a new blank database.
3. Also in setUp, I load the data dump from the existing database.
That's how the problem was solved:
After syncdb has created the test database, in the setUp part of the tests I use os.system to access the shell from my code. Then I just load the dump of the database which I want to use for the tests.
So this works like this: syncdb fills django_content_type and some other tables with data; then, in the setUp part of the tests, loading the SQL dump clears all the previously created data and I get a nice database.
Maybe not the best solution, but it works =)
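A minimal sketch of that setUp, assuming a Postgres test database named test_mydb and a dump at fixtures/test_dump.sql (both names are made up):

import os
from django.test import TestCase

class RegistrationTest(TestCase):
    def setUp(self):
        # Overwrite whatever syncdb created with the prepared SQL dump.
        # Database name and dump path are placeholders for your own setup.
        os.system('psql test_mydb < fixtures/test_dump.sql')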
My approach would be to first use South to make DB migrations easy (which doesn't help here at all, but is nice), and then use a module of model-creation methods.
When you run
$ manage.py test my_proj
Django, with South installed, will create the test DB and run all your migrations to give you a completely up-to-date test db.
To write tests, first create a Python module called test_model_factory.py. In it, create functions that create your objects.
def mk_user():
    return User.objects.create(...)
Then in your tests you can import your test_model_factory module and create objects for each test.
def test_something(self):
    test_user = test_model_factory.mk_user()
    self.assertTrue(test_user ...)