I'm running Flask with Flask-SQLAlchemy on DigitalOcean Apps. I can't get Flask-Migrate to work properly in the production environment. Calling flask db migrate on my production app does nothing: no changes detected, nothing, just this below:
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
As such, I tried this tutorial, which is basically a way of 'starting again': I'd create the initial migration against a local empty DB, then commit that migration script to source control and push it live. The changes (the initial migration) were detected just fine on the empty local DB. All that changed is that the DB was empty and I switched the DATABASE_URI env variable.
This means there's a migration for the 'first migration' on the production instance. As far as I'm aware, it reflects the state of the production DB.
I'd then run flask db stamp head on production and local. Running flask db migrate on production (with the change I want to migrate) does nothing: no changes detected. flask db upgrade produces the same result. I checked the Migrate instance on production, and it has the correct DB connection string. The web app works, but it's not detecting or able to push through new changes.
I have data in my database that I absolutely cannot drop.
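One way to narrow a "no changes detected" problem down is to compare the revision the production database is stamped at with the newest migration script in the repo; if they already match and the models match the schema, migrate will correctly report nothing. A sketch using Flask-Migrate's own inspection commands:

```shell
# Revision id recorded in the alembic_version table of the connected DB.
flask db current

# Newest revision present in migrations/versions/ in the deployed code.
flask db heads

# If current != heads, the DB is behind the scripts and needs an upgrade,
# not a new migration.
```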
Related
I have a Django application and I'm using MariaDB as the database. Both of them have been deployed to a namespace on Kubernetes. Now I want to add an additional field, so I made changes in the models.py file in the Django app. These changes were made locally: I pulled the code from Git and just made the changes locally. Normally, to apply the changes, I would run manage.py makemigrations and manage.py migrate, and all the changes would be reflected in the DB if the DB were present locally.
So now my questions are:
How can I apply the changes to MariaDb that is there on Kubernetes ?
Will running manage.py makemigrations and manage.py migrate locally and redeploying the django app to kubernetes solve this issue ?
TLDR;
If you don't require multiple replicas, then the simplest way to do it would be to run your migrations when the container starts.
If you require multiple replicas, then you'll have to get creative with jobs and init containers. There's a good article with more info here: https://andrewlock.net/deploying-asp-net-core-applications-to-kubernetes-part-7-running-database-migrations/
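For the single-replica case, running migrations when the container starts usually means putting them in the entrypoint before the app server launches. A minimal sketch, assuming a Django project with manage.py in the working directory and gunicorn as the server (the project name myproject is a placeholder):

```shell
#!/bin/sh
# entrypoint.sh - apply pending migrations, then start the app server.
# set -e makes the container fail fast if the migration step errors out,
# instead of serving requests against a half-migrated schema.
set -e

python manage.py migrate --noinput
exec gunicorn myproject.wsgi:application --bind 0.0.0.0:8000
```

With multiple replicas this pattern races (every pod tries to migrate at once), which is why the linked article reaches for jobs and init containers instead.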
I'm new to Django, but I'm deploying a Django-based website to Heroku, using PostgreSQL. The deployment was successful, and the website is online and has established a connection with the database. However, none of the data from my local database has migrated to the Heroku database, leaving it blank. If I go into the admin section and manually input a datapoint, it appears on my site, so I know the database is correctly serving data. What is the proper way to migrate data from your local database to your online Heroku database? I thought the following code would migrate the data:
heroku run python manage.py makemigrations
heroku run python manage.py migrate
But apparently I'm missing something.
makemigrations will create a migration that contains your schema, but no data. The migrate command applies the migration to the database.
In order to provide data to be sent over as part of the migrate command you need to either create a data migration or use a fixture.
Another option you have is to dump your local database and import the dump into Heroku Postgres.
All in all, it depends on how much local data you want copied over. If it's only a few rows, I would use either a data migration or a fixture; if it's hundreds or thousands of rows, an export/import of your dataset is your best bet.
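For the fixture route, the local dump / remote load can be sketched like this, assuming a Heroku app and an app label of myapp (both placeholders; the fixture file must be committed and deployed so it exists in the Heroku dyno):

```shell
# Locally: export the app's data to a JSON fixture.
python manage.py dumpdata myapp --indent 2 > myapp_fixture.json

# Commit and push the fixture as part of a deploy, then load it remotely.
heroku run python manage.py loaddata myapp_fixture.json
```

For larger datasets, a pg_dump of the local database restored into Heroku Postgres avoids serializing everything through JSON.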
I have a flask setup with development & staging environments.
Now I want to add a production environment with a production database.
I'm having trouble integrating the new database into Flask-Migrate.
I did these steps:
created fresh postgres DB
ran db.create_all() from the flask app
(resulting in a DB reflecting the latest version of the data model)
now all flask-migrate commands fail with errors like
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) column "testfield" of relation "customer_feedback" already exists
because Flask-Migrate seems to think it needs to apply every migration created until today. But they are not necessary, because the DB already fully reflects models.py.
How can I convince flask-migrate to accept the current state as fully migrated?
Or what's the standard workflow for this?
In other words:
I am coming from Django, where the migrate command creates and updates the model if necessary when adding a blank DB. How should it be done with flask?
You need to tell Flask-Migrate that the DB has already been created and that all migrations have effectively been applied. Try the following command:
flask db stamp head
This records the latest revision in the database without running any migration scripts, so Flask-Migrate won't attempt to re-apply changes that already exist in the schema.
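Put together, a sketch of the sequence for bringing a database created with db.create_all() under Flask-Migrate's control (the migration message is illustrative):

```shell
# Mark the existing schema as already migrated: writes the latest
# revision id into the alembic_version table, runs no DDL.
flask db stamp head

# From here on, new model changes are detected and applied normally.
flask db migrate -m "add new field"
flask db upgrade
```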
I have a django based herokuapp site. I set everything up a long time ago and am now unable to get things working with a local instance of postgresql. In my settings file, I updated:
DATABASES['default'] = dj_database_url.config()
to work with a local database:
DATABASES['default'] = dj_database_url.config(default='postgres://localhost/appDB')
When running foreman, I can view the site, but the database is not currently populated (although I did create the empty DB). Running:
heroku run python manage.py dumpdata
Returns the contents of the remote (herokuapp) database, while a syncdb command results in "Installed 0 object(s) from 0 fixture(s)". So it looks like I'm still contacting the remote database. I'm pretty sure the postgresql DB is setup correctly locally; how can I force the app to use it?
I'm sure this is simple, but I haven't seen anything useful yet. I did try
export DATABASE_URL=postgres:///appDB
but that hasn't helped.
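One thing worth checking: foreman reads environment variables from a .env file in the project root when it starts your processes, and values there can shadow what you export in your shell. A sketch, reusing the appDB name from the question:

```shell
# .env (picked up by foreman; keep this file out of version control)
DATABASE_URL=postgres://localhost/appDB
```

With that in place, dj_database_url.config() should resolve to the local database instead of the remote one.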
Cheers
I am trying to create a development server from a production server from which I can test out new ideas.
I created a duplicate of my production server's database by dumping it with Postgres' pg_dump and then importing the dump into a new database.
I then copied my production Django directory and altered all .py files to refer to server_debug rather than server in my import statements.
Using the admin interface to alter some data works in that only the development server has its data altered.
However, when I then try adding a new field in my models.py in my development server, manage.py syncdb fails to create it.
Is there something I am neglecting that could cause manage.py to refer to my production server rather than my development server?
syncdb doesn't touch tables that already exist. You need to either reset the app (easiest if you don't care about the data), modify the table manually (more of a quick hack), or use a migration app that versions your models, such as South.
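With South (the migration tool used before Django 1.7 gained built-in migrations), the workflow for an existing app looks roughly like this (myapp is a placeholder app label):

```shell
# One-time: record an initial migration matching the current schema
# without altering any tables.
python manage.py convert_to_south myapp

# After each model change: generate and apply a schema migration.
python manage.py schemamigration myapp --auto
python manage.py migrate myapp
```

This replaces syncdb for that app's tables, so new fields actually make it into the database.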