I'm trying to set up a continuous integration pipeline for my Python 3.5.1 / Django 1.9.7 project.
The project is running fine on Heroku, and the Codeship deployment pipeline for Heroku works well as long as my database is unchanged.
If I want to run migrations, I have to do so manually by entering heroku run python manage.py migrate on my computer, which I would like to avoid.
I added a "Custom Script" in my Codeship deployment pipeline after the "heroku" pipeline containing heroku run python manage.py migrate, but when Codeship attempts to execute it, it fails with the
Cannot run more than 1 Free size dynos.
message. I assume this is because the server is already up and running and I don't have any more worker processes available? (Please correct me if I'm wrong.)
EDIT: This is where I was wrong - I had an additional process running (see answer)
Is there any way to include the database migration step in the heroku deployment pipeline? Or did I do something wrong?
I found the answer here: Heroku: Cannot run more than 1 Free size dynos
My assumption about the web server being the blocking dyno was wrong; I had a zombie process (createsuperuser) running that I did not know about.
I used heroku ps to show all running processes. The output was:
=== web (Free): gunicorn my_app.wsgi --log-file - (1)
web.1: idle 2016/06/07 17:09:06 +0200 (~ 13h ago)
=== run: one-off processes (1)
run.7012 (Free): up 2016/06/07 15:19:13 +0200 (~ 15h ago): python manage.py createsuperuser
I killed the process by typing
heroku ps:stop run.7012
and afterwards my migration via the Codeship custom script worked as expected.
Related
I'm kind of new to Heroku, but I have a question about dynos.
My situation is as follows: I'm running one main application in Django that uses the following dyno command,
"web gunicorn DJANGO_WEBPAGE.wsgi --log-file -", and inside my main project there is a Streamlit app that runs the following command,
"streamlit run geo_dashboard.py", but it seems I can't run both commands on the same dyno.
1. Is there a way to accomplish this?
2. Do they need to be on separate apps?
3. In case of limitations due to being a free user, does the hobby plan cover it?
I've tried writing my Procfile this way:
web: gunicorn DJANGO_WEBPAGE.wsgi --log-file - && web: sh setup.sh && streamlit run geo_dashboard.py
Even though I get no errors, only the Streamlit app appears, leaving the Django app shut down.
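For what it's worth, a Procfile runs exactly one long-running command per process type, and only the web process of an app receives HTTP traffic, so chaining two servers with && cannot work. A hedged sketch of the usual split, assuming the dashboard is deployed as its own Heroku app (the commands are taken from the question):

Procfile of the Django app:

```
web: gunicorn DJANGO_WEBPAGE.wsgi --log-file -
```

Procfile of a second Heroku app that serves only the dashboard:

```
web: sh setup.sh && streamlit run geo_dashboard.py
```

Since each app gets its own URL, the Django app can simply link to the dashboard's URL. The hobby plan does not change the one-web-process-per-app rule, so separate apps are the usual arrangement.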
We are running our stack on Heroku. We are using Django 2.2 with a Postgres 11 DB. Our build pipeline (GitHub Actions) pushes to Heroku (git push https://git.heroku.com...) and immediately afterwards runs the migrations (heroku run python manage.py migrate --app heroku-app-name). All of that was working with a Postgres 9.6 database and is still working in our staging environment (Postgres 11). Now, with production being on Postgres 11, the Django migrate command just gets stuck and doesn't produce any output, even though there are no actual migrations to apply.
The only differences between our production setup and our staging setup are a follower/slave in production attached to the master DB and "production workload".
In order to fix that deployment I have to run
heroku pg:killall -a heroku-app-name
heroku restart -a heroku-app-name
(at this point the migration task in the build pipeline fails), and afterwards migrations can be applied manually without problems:
heroku run python manage.py migrate --app heroku-app-name
So for some reason the migrate command is "waiting" for something, some database lock or whatever, yet I cannot put my finger on it. Especially odd for me is the fact that it is also stuck when there are no migrations to be applied. Why would it be stuck there?
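One way to see what a stuck migrate session is waiting on is to inspect pg_stat_activity (reachable via heroku pg:psql). A diagnostic sketch, assuming Postgres 9.6 or newer, where pg_blocking_pids() reports the sessions holding the locks a backend is waiting for:

```sql
-- Show backends that are currently blocked, the statement they are
-- trying to run, and the PIDs of the sessions blocking them.
SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       wait_event_type,
       state,
       query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;
```

Looking up the query of the blocking PID would show whether it is, for example, a backup job.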
We found the solution. There are actually three things coming together.
We trigger a DB backup before running any migrations. We only do so in production and not on staging, which is why our staging environment had no issues while production did.
A DB migration (even though it looks like there is nothing to apply) actually runs some commands besides the regular SELECTs, UPDATEs, and INSERTs. E.g. in our case a CREATE EXTENSION ... IF NOT EXISTS was always executed at the beginning.
While it was possible with Postgres 9.6 to have a backup job running in parallel (I don't know what Heroku is using under the hood, but I assume a normal pg_dump), the backup job on Postgres 11 (and others?) now takes a more exclusive lock for some operations. I assume that a CREATE EXTENSION ... IF NOT EXISTS (even though the extension already exists) cannot be executed while a backup job is running in parallel.
(I am sure some Postgres internals are missing to explain this more correctly.)
As a result of these three things, the DB blocks the migrate operation, waiting for the backup job to finish. I have moved the daily backup job to a different time and reconfigured our pipeline to wait for the "pre-deploy" backup to finish first.
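The reordering can be sketched as pipeline steps (the app name is a placeholder, and the assumption is that heroku pg:backups:capture blocks until the backup has finished):

```shell
# Take the pre-deploy backup synchronously, so no backup job is
# still running when the migrations start.
heroku pg:backups:capture --app heroku-app-name

# Only after the backup has completed, apply the migrations.
heroku run python manage.py migrate --app heroku-app-name
```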
I have deployed an app from a GitHub repository to my customer's Heroku account as a collaborator, but this time I had to add some new models.
However, I realized that when I deploy my changes from GitHub, Heroku does not run makemigrations and migrate.
I read some answers on Stack Overflow and understood this is how it is supposed to be.
However, my question is: what should I do? What is the best practice for deploying changed models to a Heroku app? (I assume it is not deleting and recreating my app, since the customer already has data there.)
(I am able to run makemigrations and migrate manually from the bash, but when I have 30+ deployments it's a pain.)
Check out the new feature on Heroku called "Release Phase": https://devcenter.heroku.com/articles/release-phase It will allow you to run migrations during the deployment. Just add whatever command you want to your Procfile, like this:
web: your_web_command
release: python manage.py migrate
The release command will run after your app is done building, and before it's launched.
I am building a Python + Django development environment using Docker. I defined Dockerfiles and services in docker-compose.yml for the web server (nginx) and database (postgres) containers, and a container that will run our app using uWSGI. Since this is a dev environment, I am mounting the app code from the host system, so I can easily edit it in my IDE.
The question I have is where/how to run migrate command.
In case you don't know Django: the migrate command creates the database structure and later changes it as needed by the project. I have seen people run migrate as part of the compose command directive (command: python manage.py migrate && uwsgi --ini app.ini), but I do not want migrations to run on every container restart. I only want it to run once when I create the containers and never run again unless I rebuild.
Where/how would I do that?
Edit: there is now an open issue with the Compose team. With any luck, one-time command containers will get supported by Compose. https://github.com/docker/compose/issues/1896
You cannot use RUN because, as you mentioned in the comments, your source is mounted while the container is running.
You cannot use CMD either, since you don't want it to run every time you restart the container.
I recommend using docker exec manually after running the container. I do not think there is a way to automate this inside a dockerfile or docker-compose because of the two reasons I gave above.
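Concretely, that manual step could look like this (the service name uwsgi follows the compose setup described in the question):

```shell
# Bring the stack up in the background, then run the migration once
# inside the already-running app container.
docker-compose up -d
docker-compose exec uwsgi python manage.py migrate
```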
It sounds like what you need is a tool for managing project tasks. dobi is a tool designed to handle these tasks (disclaimer: I am the author of this tool).
You can see an example of how to run a migration here: https://github.com/dnephin/dobi/tree/master/examples/init-db-with-rails. The example uses rails, but it's basically the same idea as django.
You could setup a task called migrate which would run the command in a container and write the data to a volume. Then when you start your docker-compose containers, use that volume as the source for your database service.
https://github.com/docker/compose/issues/1896 is finally resolved by the new service profiles introduced with docker-compose 1.28.0. With profiles you can mark services to be started only in specific profiles:
services:
  nginx:
    # ...
  postgres:
    # ...
  uwsgi:
    # ...
  migrations:
    profiles: ["cli-only"] # profile name chosen freely
    # ...
docker-compose up # start only your app services, no migrations
docker-compose run migrations # run migrations on-demand
docker exec -it container-name bash
Then you will be inside the container and can run any command you would normally run during development without Docker.
I am working on a Django app, and I would like my Database migrations to be run when deploying on Heroku.
So far we have simply put the following command in the Procfile:
python manage.py migrate
When deploying, the migrations are indeed run, but they seem to be run once for each dyno (and we use several dynos). As a consequence, data migrations (as opposed to pure schema migrations) are run several times, and data is duplicated.
Running heroku run python manage.py migrate after the deployment is not satisfactory since we want the database to be in sync with the code at all times.
What is the correct way to do this in Heroku?
Thanks.
This is my Procfile and it is working exactly as you describe:
release: python manage.py migrate
web: run-program waitress-serve --port=$PORT settings.wsgi:application
See Heroku docs on defining a release process:
https://devcenter.heroku.com/articles/release-phase#defining-a-release-command
The release command is run immediately after a release is created, but before the release is deployed to the app’s dyno formation. That means it will be run after an event that creates a new release:
An app build
A pipeline promotion
A config var change
A rollback
A release via the platform API
The app dynos will not boot on a new release until the release command finishes successfully.
If the release command exits with a non-zero exit status, or if it’s shut down by the dyno manager, the release will be discarded and will not be deployed to the app’s formation.
Be aware, however, this feature is still in beta.
Update:
When you have migrations that remove models and content types, Django requires a confirmation in the console:
The following content types are stale and need to be deleted:
...
Any objects related to these content types by a foreign key will also be deleted. Are you sure you want to delete these content types? If you're unsure, answer 'no'. Type 'yes' to continue, or 'no' to cancel:
The migrate command in your Procfile does not respond, and the release command fails. In this scenario, remove the migrate line, push live, run the migrate command manually, then add it back for future deploys.
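An alternative, if you'd rather keep the release command in place: Django's migrate accepts a --noinput flag that suppresses interactive prompts; with it, stale content types are simply left in place instead of blocking the release, and you can delete them later by hand:

```
release: python manage.py migrate --noinput
```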
migrate does not run automatically on Heroku, but for now you can safely do it once your dyno is deployed with heroku run python manage.py migrate.
In production, you can first put your app in maintenance mode with heroku maintenance:on --app=<app name here>
Set up your Procfile like in the docs:
release: python manage.py migrate
web: gunicorn myproject.wsgi --log-file -
documented at https://devcenter.heroku.com/articles/release-phase#specifying-release-phase-tasks
You can create a file bin/post_compile which will run bash commands after the build.
Note that it is still considered experimental.
Read here for more buildpack info.
See here for an example
Alternatively, Heroku is working on a new Releases feature, which aims to simplify and solve this process. (Currently in Beta).
Good luck!