I made some updates to a project: added one admin model and one template.
I'm using Wagtail. I pulled the updates to my server and ran migrations successfully. I restarted nginx and gunicorn, and even rebooted the server.
When I go to the Wagtail admin, my admin model is missing (it exists locally). When I go to create a new page, my new template is available, but when I select it I'm taken to a Wagtail 404 page.
Ubuntu 20.04
nginx
gunicorn
django/wagtail
digital ocean vpc
digital ocean postgres database cluster
The site works as normal, except that the new template is available but can't be selected, and the model that supposedly migrated isn't available and isn't showing up in the admin. My local version works perfectly, with no differences. It seems like the server is both updating and not updating; I don't get it. Running makemigrations or migrate reports no changes, even when run against the specific app. Do I need to do something to reboot the database?
It looks like a caching issue with Nginx. Try clearing the cache.
I have two settings files: dev.py and production.py.
dev.py is connected to a local sqlite3 database, and production.py is connected to the DigitalOcean Postgres cluster.
python manage.py automatically uses dev.py (at least in my setup), so my migrations were "working", but they were targeting an old copy of the sqlite database I had on the server, or perhaps creating one, as Django does when the file doesn't exist. Either way, the live site runs on the production.py settings, and this is why the changes were being applied but not reflected.
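For context, the reason a bare python manage.py command hits dev.py is the default that manage.py itself sets. A minimal sketch of that file under a split-settings layout (the app.settings.dev module path is illustrative):
# manage.py (typical split-settings layout; module path is illustrative)
import os
import sys

if __name__ == "__main__":
    # This default wins whenever DJANGO_SETTINGS_MODULE isn't set in the
    # environment, which is why a bare `python manage.py migrate` was
    # quietly targeting the sqlite dev database.
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "app.settings.dev")
    from django.core.management import execute_from_command_line
    execute_from_command_line(sys.argv)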
The correct way to run migrations when you have a production.py file looks like this:
python manage.py migrate --settings=<settings app>.<settings folder>.production
I'll also add that I found you need to run collectstatic each time you update your CSS stylesheets. That is a little tedious, and maybe I'm missing an automation step, but since that's the case, just to be safe, I run it like this:
python manage.py collectstatic --settings=<settings app>.<settings folder>.production
It doesn't cause any issues and it works, so I use the --settings flag to be safe.
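For what it's worth, collectstatic just copies files into whatever STATIC_ROOT the active settings define, which is another reason the --settings flag matters here. A minimal sketch of the relevant production.py lines (the paths are assumptions for my setup):
# production.py (illustrative; adjust the path to your project)
STATIC_URL = "/static/"
# collectstatic gathers every app's static files into this directory;
# nginx should be configured to serve it directly
STATIC_ROOT = "/home/django/app/staticfiles"
With that in place, my full update routine on the server is: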
git pull
python manage.py migrate --settings=app.settings.production
python manage.py collectstatic --settings=app.settings.production
sudo systemctl restart nginx
sudo systemctl restart gunicorn
I'm not sure restarting nginx is strictly necessary, but I believe it fixed an issue with my static files updating.
Related
I have a Django application that uses MariaDB as its database. Both have been deployed to a namespace on Kubernetes. Now I want to add a field, so I changed the models.py file in the Django app. These changes were made locally: I pulled the code from Git and edited it on my machine. Normally, to apply the changes I would run manage.py makemigrations and manage.py migrate, and everything would be reflected in the DB if the DB were present locally.
So now my questions are:
How can I apply the changes to the MariaDB instance that is on Kubernetes?
Will running manage.py makemigrations and manage.py migrate locally and redeploying the Django app to Kubernetes solve this?
TLDR;
If you don't require multiple replicas, then the simplest way would be to run your migrations when the container starts (sketched below).
If you require multiple replicas, then you'll have to get creative with jobs and init containers. There's a good article with more info on it here: https://andrewlock.net/deploying-asp-net-core-applications-to-kubernetes-part-7-running-database-migrations/
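For the single-replica case, here is a sketch of what "run your migrations when the container starts" can look like as a small Python entrypoint (the project module and port are assumptions; note that with multiple replicas every pod would run this, which is exactly the problem the article above addresses):
# entrypoint.py -- illustrative: apply migrations, then start the app server
import subprocess
import sys

# Apply any pending migrations before serving traffic; --noinput avoids prompts
subprocess.run([sys.executable, "manage.py", "migrate", "--noinput"], check=True)

# Hand off to gunicorn (or whatever server your image uses)
subprocess.run(
    ["gunicorn", "myproject.wsgi:application", "--bind", "0.0.0.0:8000"],
    check=True,
)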
There is a really old thread on Stack Overflow here: Getting 'DatabaseOperations' object has no attribute 'geo_db_type' error when doing a syncdb,
but the difference from their issue is that my containers have PostGIS and Postgres installed. Specifically, I used the kartoza/postgis image, like so:
db:
image: kartoza/postgis:13.0
volumes:
- postgis-data:/var/lib/postgresql
So locally I have two Docker images: one is the web app and the other is kartoza/postgis.
I also have this in the settings.py file:
import dj_database_url
# Parse the DATABASE_URL environment variable into Django's DATABASES setting
db_from_env = dj_database_url.config(conn_max_age=500)
DATABASES['default'].update(db_from_env)
which should support the GIS data. I can see all my GIS and geolocation packages installed with no issues, but I am getting the above error when I run heroku run python manage.py migrate.
The website runs with very limited functionality as the geo variables are needed to get you past the landing page.
The steps I took to deploy are:
heroku create appname
heroku stack:set container -a appname
heroku addons:create heroku-postgresql:hobby-dev -a appname
heroku git:remote -a appname
git push heroku main
EDIT: The DB URL on Heroku is postgres://foobar:3242q34rq2rq32rf3q2rfq2q2r3vq23rvq23vr@er3-13-234-91-69.compute-
I have also run the command below, and it shows that the DB now accepts GIS, but I still get the error:
$ heroku pg:psql
create extension postgis;
Try replacing db with localhost:
heroku config:add DATABASE_URL=postgis://foo:bar@localhost:5432/gis
OK, so after 14 or so hours of reading, the issue here is that:
The Heroku database does NOT have PostGIS enabled by default; you have to connect with heroku pg:psql and then run CREATE EXTENSION postgis;.
Having PostGIS installed inside your Docker container doesn't mean commands such as heroku pg:push localdatabase -postgresqlremoteherokudb and heroku config:add DATABASE_URL=postgis://foo:bar@localhost:5432/gis will work on their own. So if anyone is following the Python nearbyshops tutorial, you might end up here as well. I got fixated on this thinking it was a DB issue, but it had more to do with Heroku configuration settings.
If you are coming here from Docker, you might need to install the django_heroku package to modify your settings. There are 2018 and 2019 guides on how to deploy geospatial apps to Heroku, but again, you don't need those (at least I didn't).
PostGIS is now available as an extension, so you no longer need to add buildpacks.
There are still some configuration issues if you are serving static files with gunicorn, so the website won't launch properly at first. I highly suggest you track your Heroku log errors (via manual settings.py logging or Heroku add-ons).
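One more note on the geo_db_type error itself: it usually means Django is talking to the database through the plain PostgreSQL backend rather than the PostGIS one. dj_database_url chooses the engine from the URL scheme (postgres:// vs postgis://), so if your Heroku DATABASE_URL uses the postgres:// scheme, one workaround (an assumption worth testing against your setup) is to force the GeoDjango backend after parsing:
# settings.py -- illustrative: force the GeoDjango backend regardless of scheme
import dj_database_url

db_from_env = dj_database_url.config(conn_max_age=500)
DATABASES['default'].update(db_from_env)
DATABASES['default']['ENGINE'] = 'django.contrib.gis.db.backends.postgis'
dj_database_url.config also accepts an engine= keyword argument that does the same thing.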
My Django app is containerized alongside PostgreSQL. The problem is that migrations do not seem to persist in the migrations directory. Whenever I run docker exec -it <container_id> python manage.py makemigrations forum, the same migrations are detected. If I spin the stack down, spin it back up, and run makemigrations again, I see the same migrations detected. Changes to fields, added models, deleted models: none ever get detected. The migrations that do appear seem to be written to the database, since when I try to migrate I get an error that the fields already exist. But if I look at my migrations folder, only the __init__.py file is present. None of the migration commands add anything to the migrations folder.
I also tried unregistering the Post model from the admin and spinning the stack up again, yet I still see it in the admin. Same with changes to the templates. No change I make from inside Docker seems to stick.
*Note: this problem started after I switched to WSL 2 and enabled it in Docker Desktop (Windows).
**Update: migrations can be created from a bash shell inside the Docker container.
I found out what the problem was: my docker-stack.yml file pointed to a directory that did not exist in the Dockerfile.
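For anyone hitting the same thing: the bind-mount target in the stack file has to match the directory the Dockerfile actually builds and runs from (its WORKDIR), otherwise your edits and freshly generated migrations land in a path the running app never reads. An illustrative fragment (paths are assumptions):
web:
  build: .
  volumes:
    # host source dir -> container dir; must match the Dockerfile's WORKDIR
    - ./app:/code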
I am working on a Django app, and I would like my Database migrations to be run when deploying on Heroku.
So far we have simply put the following command in the Procfile:
python manage.py migrate
When deploying, the migrations are indeed run, but they seem to be run once for each dyno (and we use several dynos). As a consequence, data migrations (as opposed to pure schema migrations) are run several times, and data is duplicated.
Running heroku run python manage.py migrate after the deployment is not satisfactory since we want the database to be in sync with the code at all times.
What is the correct way to do this in Heroku?
Thanks.
This is my Procfile and it is working exactly as you describe:
release: python manage.py migrate
web: run-program waitress-serve --port=$PORT settings.wsgi:application
See Heroku docs on defining a release process:
https://devcenter.heroku.com/articles/release-phase#defining-a-release-command
The release command is run immediately after a release is created, but before the release is deployed to the app’s dyno formation. That means it will be run after an event that creates a new release:
An app build
A pipeline promotion
A config var change
A rollback
A release via the platform API
The app dynos will not boot on a new release until the release command finishes successfully.
If the release command exits with a non-zero exit status, or if it’s shut down by the dyno manager, the release will be discarded and will not be deployed to the app’s formation.
Be aware, however, this feature is still in beta.
Update:
When you have migrations that remove models or content types, Django requires a confirmation in the console:
The following content types are stale and need to be deleted:
...
Any objects related to these content types by a foreign key will also be deleted. Are you sure you want to delete these content types? If you're unsure, answer 'no'. Type 'yes' to continue, or 'no' to cancel:
The migrate command in your Procfile does not respond and the release command fails. In this scenario, remove the migrate line, push live, run the migrate command manually, then add it back for future deploys.
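One possible alternative (an assumption I haven't verified in this exact scenario): migrate accepts a --noinput flag that suppresses prompts. Note that it effectively answers 'no' to the stale content type question, so the stale entries are kept rather than deleted, and you would clean them up manually later:
release: python manage.py migrate --noinput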
Migrations do not run automatically on Heroku, but for now you can safely run them once your dyno is deployed, with heroku run python manage.py migrate.
If it's production, you can put your app into maintenance mode first with heroku maintenance:on --app=<app name here>.
Set up your Procfile as in the docs:
release: python manage.py migrate
web: gunicorn myproject.wsgi --log-file -
documented at https://devcenter.heroku.com/articles/release-phase#specifying-release-phase-tasks
You can create a file bin/post_compile which will run bash commands after the build.
Note that it is still considered experimental.
Read here for more buildpack info.
See here for an example.
Alternatively, Heroku is working on a new Releases feature, which aims to simplify and solve this process. (Currently in Beta).
Good luck!
I need some advice on pushing Django updates, specifically the database updates, from my development server to my production server. I believe that updating the scripts, files, and such will be easy -- simply copy the new files from the dev server to the production server. However, the database updates are what have me unsure.
For Django, I have been using South during initial web app creation to change the database schema. If I were to schedule some downtime on the production server for updates, I could copy all the files over to the production server. These would include any changed models.py files, which describe the database tables. I could then run python manage.py schemamigration my_app --auto and then python manage.py migrate my_app to update the database based on the new files/models.py I've copied over.
Is this an OK solution or are there more appropriate ways to go about updating a database from development to production servers?
Your thoughts?
Thanks
Actually, python manage.py schemamigration my_app --auto will only create the migration based on the changes in models.py. To actually apply the migration to the database, you need to run python manage.py migrate my_app. Another option would be to create migrations (by running schemamigration) on the development server, and then copy over the migration files to the production server and apply migrations by running migrate.
Of course, having a source code repository would be way better than copying files around. You could create migrations on your development server, commit them to the repository, pull the new files from the repository on the production server, and finally apply the migrations there.
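Concretely, the repository-based flow would look something like this (the app name my_app is from the question; the commit message is illustrative):
# on the development server
python manage.py schemamigration my_app --auto
git add my_app/migrations/
git commit -m "Add schema migration for my_app"
git push
# on the production server
git pull
python manage.py migrate my_app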