I am hosting a site via Elastic Beanstalk and I have a 01_migrate.sh file in .platform/hooks/postdeploy in order to migrate model changes to a PostgreSQL database on Amazon RDS:
#!/bin/sh
# Activate the app's virtualenv, then run the management commands against the current version.
source /var/app/venv/staging-LQM1lest/bin/activate
python /var/app/current/manage.py migrate --noinput
python /var/app/current/manage.py createsu
python /var/app/current/manage.py collectstatic --noinput
This used to work well, but now when I check the hooks log, although it appears to find the file, there is no output to suggest that the migrate command has been run.
i.e. previously I would get the following even if there were no new migrations:
2022/03/29 05:12:56.530728 [INFO] Running command .platform/hooks/postdeploy/01_migrate.sh
2022/03/29 05:13:11.872676 [INFO] Operations to perform:
Apply all migrations: account, admin, auth, blog, contenttypes, home, se_balance, sessions, sites, socialaccount, taggit, users, wagtailadmin, wagtailcore, wagtaildocs, wagtailembeds, wagtailforms, wagtailimages, wagtailredirects, wagtailsearch, wagtailusers
Running migrations:
No migrations to apply.
Found another file with the destination path 'favicon.ico'. It will be ignored since only the first encountered file is collected. If this is not what you want, make sure every static file has a unique path.
Whereas now I just get:
2022/05/23 08:47:49.602719 [INFO] Running command .platform/hooks/postdeploy/01_migrate.sh
Found another file with the destination path 'favicon.ico'. It will be ignored since only the first encountered file is collected. If this is not what you want, make sure every static file has a unique path.
I don't know what has occurred to make this change. Of potential relevance is that eb deploy stopped being able to find the 01_migrate.sh file, so I had to move the folder and its contents (.platform/hooks/postdeploy/01_migrate.sh) up to the parent directory, and then it became able to find it again.
As per documentation on platform hooks:
All files must have execute permission. Use chmod +x to set execute permission on your hook files. For all Amazon Linux 2 based platforms versions that were released on or after April 29, 2022, Elastic Beanstalk automatically grants execute permissions to all of the platform hook scripts. In this case you don't have to manually grant execute permissions.
The permissions on your script may have changed after moving the file around locally.
Try setting executable permissions again on your script - chmod +x 01_migrate.sh - and redeploying your application.
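If the execute bit was lost when you moved the file, you can both restore it and record it in git so future deploys keep it. A minimal sketch using the path from the question (the commit message is just an example):

chmod +x .platform/hooks/postdeploy/01_migrate.sh
git update-index --chmod=+x .platform/hooks/postdeploy/01_migrate.sh
git commit -m "Restore execute permission on postdeploy hook"

git update-index --chmod=+x marks the file as executable in the index itself, which matters on filesystems that don't track the execute bit.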
Related
I have a Python/Django app and I'm trying to get it deployed to AWS Elastic Beanstalk. I'm using the EB CLI. What I've run into is similar to others (e.g. Deploying Django to Elastic Beanstalk, migrations failed). But I can't get past the migration failure that is blocking a successful eb deploy. I removed the .ebextensions/db-migrate.config file from my project, but it seems like it's cached somewhere since the logs still show this:
...
2022-12-01 22:14:21,238 [INFO] Running config postbuild_0_ikfplatform
2022-12-01 22:14:21,272 [ERROR] Command 01_migrate (source /var/app/venv/*/bin/activate && python3 manage.py migrate) failed
2022-12-01 22:14:21,272 [ERROR] Error encountered during build of postbuild_0_ikfplatform: Command 01_migrate failed
...
My project's .ebextensions folder now only has django.config with the following contents:
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: IFSRestAPI.wsgi:application
I'm baffled how eb deploy is deciding to execute the Django migration when I've deleted the db-migrate.config file from the face of the Earth.
How can I prevent Elastic Beanstalk from somehow magically running the migration command (which is no longer in my project)?
FYI: I can successfully migrate the RDS DB from my local computer, which is fine for now.
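One thing worth verifying (an assumption about the cause, not something the logs confirm): by default eb deploy bundles the most recent git commit, so a config file deleted from disk but not committed can still ship with the application version. You can list what actually goes into the bundle, for example:

git archive HEAD | tar -tf - | grep -i migrate

If db-migrate.config shows up, commit the deletion (or use eb deploy --staged to deploy staged changes) and redeploy.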
I have a Django repository set up in GitLab and I am trying to automate build and deployment on Google Cloud using GitLab CI/CD.
The app has to be deployed on App Engine and has to use Cloud SQL for dynamic data storage.
The issue I am facing is executing migrations on the DB before deploying my application.
I am supposed to run ./manage.py migrate, which connects to Cloud SQL.
I have read that we can use the Cloud SQL Proxy to connect to Cloud SQL and migrate the DB, but it kind of seems like a hack. Is there a way to migrate my DB via the CI/CD pipeline script?
Any help is appreciated. Thanks.
When running Django in the App Engine standard environment, the recommended way of approaching database migration is to run ./manage.py migrate directly from the Cloud Shell or from your local machine (which requires using the Cloud SQL Proxy).
If you want the database migration to be decoupled from your application deployment and run in GitLab CI/CD, you could do something along these lines:
Use google/cloud-sdk:latest as the base image.
Acquire credentials with gcloud auth activate-service-account --key-file $GOOGLE_SERVICE_ACCOUNT_FILE.
Download the Cloud SQL Proxy with wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy and make it executable with chmod +x cloud_sql_proxy.
Start the proxy: ./cloud_sql_proxy -instances="[YOUR_INSTANCE_CONNECTION_NAME]"=tcp:3306.
Finally, run the migration script.
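Putting those steps together, a minimal .gitlab-ci.yml job might look like the sketch below; the job and stage names, the $GOOGLE_SERVICE_ACCOUNT_FILE and $INSTANCE_CONNECTION_NAME variables, and the MySQL-style port 3306 are assumptions to adapt:

migrate:
  stage: deploy
  image: google/cloud-sdk:latest
  script:
    - gcloud auth activate-service-account --key-file $GOOGLE_SERVICE_ACCOUNT_FILE
    - wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
    - chmod +x cloud_sql_proxy
    # Start the proxy in the background and give it a moment to open the tunnel.
    - ./cloud_sql_proxy -instances="$INSTANCE_CONNECTION_NAME"=tcp:3306 &
    - sleep 5
    # Django's DATABASES setting must point at 127.0.0.1:3306 for this to reach Cloud SQL.
    - python manage.py migrate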
You might also create a custom Docker image that already does what's above behind the scenes; the result would be the same.
If you want to read further on the matter I suggest taking a look at the following articles link1 link2.
I'm also trying to find the correct way to do that.
One other hacky way would be to just add a call to it in the settings file that is loaded with your app,
something like what the manage.py file does:
from django.core.management import execute_from_command_line
# Runs "./manage.py migrate" whenever this module is imported.
execute_from_command_line(['./manage.py', 'migrate'])
so every time you deploy a new version of the app, it will also run the migrate.
I want to believe there are other ways, not involving the proxy, especially if you also want to work with a private IP for the SQL instance - then this script must run in the same VPC.
I am working on a Django app, and I would like my Database migrations to be run when deploying on Heroku.
So far we have simply put the following command in the Procfile:
python manage.py migrate
When deploying the migrations are indeed run, but they seem to be run once for each dyno (and we use several dynos). As a consequence, data migrations (as opposed to pure schema migrations) are run several times, and data is duplicated.
Running heroku run python manage.py migrate after the deployment is not satisfactory since we want the database to be in sync with the code at all times.
What is the correct way to do this in Heroku?
Thanks.
This is my Procfile and it is working exactly as you describe:
release: python manage.py migrate
web: run-program waitress-serve --port=$PORT settings.wsgi:application
See Heroku docs on defining a release process:
https://devcenter.heroku.com/articles/release-phase#defining-a-release-command
The release command is run immediately after a release is created, but before the release is deployed to the app’s dyno formation. That means it will be run after an event that creates a new release:
An app build
A pipeline promotion
A config var change
A rollback
A release via the platform API
The app dynos will not boot on a new release until the release command finishes successfully.
If the release command exits with a non-zero exit status, or if it’s shut down by the dyno manager, the release will be discarded and will not be deployed to the app’s formation.
Be aware, however, this feature is still in beta.
Update:
When you have migrations that remove models and content types, Django requires a confirmation in the console:
The following content types are stale and need to be deleted:
...
Any objects related to these content types by a foreign key will also be deleted. Are you sure you want to delete these content types? If you're unsure, answer 'no'. Type 'yes' to continue, or 'no' to cancel:
The migrate command in your Procfile does not respond and the release command fails. In this scenario, remove the migrate line, push live, run the migrate command manually, then add it back for future deploys.
Migrations do not run automatically on Heroku, but for now you can safely run them once your dyno is deployed with heroku run python manage.py migrate.
In production, you can put your app in maintenance mode first with heroku maintenance:on --app=<app name here>
Set up your Procfile as in the docs:
release: python manage.py migrate
web: gunicorn myproject.wsgi --log-file -
documented at https://devcenter.heroku.com/articles/release-phase#specifying-release-phase-tasks
You can create a file bin/post_compile which will run bash commands after the build.
Note that it is still considered experimental.
Read here for more buildpack info.
See here for an example.
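For instance, a minimal bin/post_compile might look like this (a sketch: --noinput is an assumption, and the file must be committed with execute permission):

#!/usr/bin/env bash
# bin/post_compile: the Heroku Python buildpack runs this after the build finishes.
python manage.py migrate --noinput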
Alternatively, Heroku is working on a new Releases feature, which aims to simplify and solve this process. (Currently in Beta).
Good luck!
I purchased an outsourced service to develop a website in Django, to be deployed on Heroku and AWS S3 (boto package).
Unfortunately the developer did not comment the code, despite being asked to, and left the project incomplete to follow up with a bigger client.
I've hired another Django 'expert' to fix a part which was not developed, and he wants to (over)charge for deployment testing, which I think should be a normal matter of good practice! I am working on my own budget and need to work it out myself.
I was able to make the project run locally, and I built the frontend templates which were not fully developed myself, but I am having issues deploying the code to my own staging environment.
I set up a staging environment under my credentials to check that everything is OK before pushing to production.
I think I'm almost there, though:
heroku run python manage.py migrate --all --noinput --app my-app-staging
generates the following in the console:
Running `python manage.py migrate --all --noinput` attached to terminal... up, run.4833
DatabaseError: relation "south_migrationhistory" does not exist
LINE 1: ...gration", "south_migrationhistory"."applied" FROM "south_mig...
In the browser:
DatabaseError at /
relation "django_site" does not exist
LINE 1: ..."django_site"."domain", "django_site"."name" FROM "django_si...
^
Request Method: GET
Request URL: http://my-app-staging.herokuapp.com/
Django Version: 1.5.6
Exception Type: DatabaseError
Exception Value:
relation "django_site" does not exist
LINE 1: ..."django_site"."domain", "django_site"."name" FROM "django_si...
^
Exception Location: /app/.heroku/python/lib/python2.7/site-packages/django/db/backends/postgresql_psycopg2/base.py in execute, line 5
I checked my settings and they look OK:
I checked the AWS S3 bucket and the app is able to write there;
the settings in the Heroku console show that the DB has been created.
I followed:
Heroku created table but when I'll migrate, he says that doesn't created
but it looks like my locals.py is OK too, and in my local git branch .gitignore excludes db.sqlite.
My git and Heroku SSH keys have been generated and added, so I don't think it is an authentication issue.
How could I check that the DB is properly connected to the Django project and that my credentials are not being rejected?
Could you please help me understand what this error means and how to solve it?
Thank you so much.
It sounds like you might not have created the initial South migration tables on your staging server. This is actually done using syncdb:
Once South is added in, you’ll need to run ./manage.py syncdb to make the South migration-tracking tables (South doesn’t use migrations for its own models, for various reasons).
To run this on Heroku, you'll probably want to use something like
heroku run python manage.py syncdb
Once this is done, you should be able to move forward with the South commands.
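With the app name from the question, that would be something like:

heroku run python manage.py syncdb --app my-app-staging
heroku run python manage.py migrate --all --noinput --app my-app-staging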
I have a git repository with two Django 1.5 projects: one for a website, the other for a REST api. The git repository looks like this:
api_project/
www_project/
logs/
manage.py
my_app_1/
my_app_2/
The manage.py file defaults to www_project.settings. To launch the api_project, I run:
DJANGO_SETTINGS_MODULE=api_project.settings ./manage.py shell
I guess I could set up 3 git repositories, one with the common apps, one for the api project and one for the www project, using git submodules and all, but it really seems like overkill. Up to now, everything has worked fine.
But now I'm trying to deploy this setup using Chef. I'd like to use the application and application_python cookbooks, and run my django projects with gunicorn, but these cookbooks seem to be meant to deploy only one project at a time.
Here's what my chef recipe looks like for the www_project:
application "django_app" do
path "/var/django"
owner "django"
group "django"
repository "git.example.com:blabla"
revision "master"
migrate true
packages ["libevent-dev", "libpq5" , "git"]
# libevent-dev for gevent (for gunicorn), libpq5 for postgresql
environment "DJANGO_SETTINGS_MODULE" => "www_project.settings"
# for syncdb and migrate
django do
local_settings_file "www_project/settings.py"
settings_template "settings.py.erb"
purge_before_symlink ["logs"]
symlinks "logs" => "logs"
collectstatic true
database do
database "blabla"
engine "postgresql_psycopg2"
username "django"
password "super!password"
end
database_master_role "blabla_postgresql_master"
migration_command "/var/django/shared/env/bin/python manage.py" +
" syncdb --noinput && /var/django/shared/env/bin/python" +
" manage.py migrate"
end
gunicorn do
app_module "www_project.wsgi:application"
preload_app true
worker_class "egg:gunicorn#gevent"
workers node['cpu']['total'].to_i * 2 + 1
port 8081
proc_name "blabla_www"
end
end
I would just like to know how to add another gunicorn resource for the api_project.
Has anyone run into a similar problem? Would you recommend patching my local copy of the application_python cookbook to allow multiple projects in one git repo? Or should I go through the pain of setting up 3 separate git repositories? Or any other solution?
Thanks!
You can separate your code into two separate "application" blocks, as all the resources defined inside are sub-resources and the actual execution is done at the level of "application".
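For the api_project, that second block could look roughly like the sketch below, reusing values from the question's recipe; the resource name, path, port and proc_name are placeholders:

application "django_api" do
  path "/var/django_api"
  owner "django"
  group "django"
  repository "git.example.com:blabla"
  revision "master"
  # Point this deployment at the API settings module instead.
  environment "DJANGO_SETTINGS_MODULE" => "api_project.settings"
  django do
    local_settings_file "api_project/settings.py"
    settings_template "settings.py.erb"
  end
  gunicorn do
    app_module "api_project.wsgi:application"
    port 8082
    proc_name "blabla_api"
  end
end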
Another solution would be to fork/patch the application_python providers django and gunicorn to allow more complex behaviors, for example allowing more than one application to be deployed, although that is probably not needed by enough users to merit the effort and complexity.