How to prevent AWS Elastic Beanstalk from running migrations - Django

I have a Python/Django app and I'm trying to get it deployed to AWS Elastic Beanstalk. I'm using the EB CLI. What I've run into is similar to others (e.g. Deploying Django to Elastic Beanstalk, migrations failed). But I can't get past the migration failure that is blocking a successful eb deploy. I removed the .ebextensions/db-migrate.config file from my project, but it seems like it's cached somewhere since the logs still show this:
...
2022-12-01 22:14:21,238 [INFO] Running config postbuild_0_ikfplatform
2022-12-01 22:14:21,272 [ERROR] Command 01_migrate (source /var/app/venv/*/bin/activate && python3 manage.py migrate) failed
2022-12-01 22:14:21,272 [ERROR] Error encountered during build of postbuild_0_ikfplatform: Command 01_migrate failed
...
My project's .ebextensions folder now only has django.config with the following contents:
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: IFSRestAPI.wsgi:application
I'm baffled as to how eb deploy is deciding to execute the Django migration when I've deleted the db-migrate.config file from the face of the Earth.
How can I prevent Elastic Beanstalk from somehow magically running the migration command (which is no longer in my project)?
FYI: I can successfully migrate the RDS DB from my local computer which is fine for now.
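For reference, a db-migrate.config that produces the 01_migrate command seen in the log above typically looks something like the sketch below; this is an illustration of the usual shape of such a file, not the exact file that was removed:
container_commands:
  01_migrate:
    command: "source /var/app/venv/*/bin/activate && python3 manage.py migrate"
    leader_only: true
If the command keeps appearing after the file has been deleted, it is worth checking that the deletion is actually part of what eb deploy ships: when the project is a git repository, the EB CLI deploys the most recent commit by default (or a configured artifact), so an uncommitted deletion can keep the old .ebextensions content in the deployed bundle.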

Related

Django 'ModuleNotFoundError: No module named 'blog/wsgi' when deploying to Elastic Beanstalk, as well as 'Error while connecting to Upstream'

I am trying to deploy a static Django website to Elastic Beanstalk via the UI 'upload your code' option and not the EB CLI. I have created a zip file of all of my contents and have tried uploading it a million times, only to be met with the errors 'ModuleNotFoundError: No module named 'blog/wsgi'' as well as 'Error while connecting to Upstream'. I think it is an error with the 'django.config' file in my .ebextensions folder. The contents of the 'django.config' file are:
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: blog/wsgi:application
The contents of my 'wsgi.py' file are:
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'blog.blog.settings')
application = get_wsgi_application()
I have attached a screenshot of my folder structure as well. I can attach any other files if necessary. Thank you.
I faced the same issue when deploying to an Amazon Linux 2 machine, due to the change in format between platform versions. I fixed it by changing the django.config file:
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: ebdjango.wsgi
Credits to this answer
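Applied to the layout described in the question (the wsgi.py that sets DJANGO_SETTINGS_MODULE to 'blog.blog.settings'), the Amazon Linux 2 dotted-path form would look roughly like this; the exact module path is an assumption based on where manage.py sits relative to the zip root:
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: blog.wsgi:application
The key change is from a file path (the blog/wsgi.py style) to a Python module path plus the application callable, which is what the Gunicorn-based Amazon Linux 2 Python platform expects.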

Elastic Beanstalk platform hook no longer running

I am hosting a site via Elastic Beanstalk and I have a 01_migrate.sh file in .platform/hooks/postdeploy in order to migrate model changes to a Postgres database on Amazon RDS:
#!/bin/sh
source /var/app/venv/staging-LQM1lest/bin/activate
python /var/app/current/manage.py migrate --noinput
python /var/app/current/manage.py createsu
python /var/app/current/manage.py collectstatic --noinput
This used to work well, but now when I check the hooks log, although it appears to find the file, there is no output to suggest that the migrate command has been run.
That is, previously I would get the following even if there were no new migrations:
2022/03/29 05:12:56.530728 [INFO] Running command .platform/hooks/postdeploy/01_migrate.sh
2022/03/29 05:13:11.872676 [INFO] Operations to perform:
Apply all migrations: account, admin, auth, blog, contenttypes, home, se_balance, sessions, sites, socialaccount, taggit, users, wagtailadmin, wagtailcore, wagtaildocs, wagtailembeds, wagtailforms, wagtailimages, wagtailredirects, wagtailsearch, wagtailusers
Running migrations:
No migrations to apply.
Found another file with the destination path 'favicon.ico'. It will be ignored since only the first encountered file is collected. If this is not what you want, make sure every static file has a unique path.
Whereas now I just get
2022/05/23 08:47:49.602719 [INFO] Running command .platform/hooks/postdeploy/01_migrate.sh
Found another file with the destination path 'favicon.ico'. It will be ignored since only the first encountered file is collected. If this is not what you want, make sure every static file has a unique path.
I don't know what has occurred to make this change. Of potential relevance is that eb deploy stopped being able to find the 01_migrate.sh file, so I had to move the folder and its contents (.platform/hooks/postdeploy/01_migrate.sh) up a level to the parent directory, and then it became able to find it again.
As per the documentation on platform hooks:
All files must have execute permission. Use chmod +x to set execute permission on your hook files. For all Amazon Linux 2 based platforms versions that were released on or after April 29, 2022, Elastic Beanstalk automatically grants execute permissions to all of the platform hook scripts. In this case you don't have to manually grant execute permissions.
The permissions on your script may have changed after moving the file around locally.
Try setting executable permissions again on your script - chmod +x 01_migrate.sh - and redeploying your application.
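A minimal sketch of that fix, assuming the script still lives at the path from the question; the git steps only matter if the project is deployed from a git repository, since the committed file mode is what ends up in the deployment bundle:
chmod +x .platform/hooks/postdeploy/01_migrate.sh

# If deploying from git, record the executable bit in the index as well
git update-index --chmod=+x .platform/hooks/postdeploy/01_migrate.sh
git commit -m "Restore execute permission on postdeploy hook"

eb deploy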

How to fix Django "ModuleNotFoundError: No module named 'application'" on AWS?

I am trying to redeploy a Django web application on AWS. My Elastic Beanstalk environment has gone red a couple of times. When I run eb logs on the CLI, I get a "ModuleNotFoundError: No module named 'application'" error. I think this has to do with my WSGI configuration.
I have deployed this web app on AWS before. I messed up when I tried deploying a new version then decided to just start over. Here is my wsgi.py configuration:
```
import os
from django.core.wsgi import get_wsgi_application
from django.contrib.staticfiles.handlers import StaticFilesHandler

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")
application = StaticFilesHandler(get_wsgi_application())
```
When I deploy the app, it's giving me a 502: Bad gateway error. Let me know if you would like more info on the issue. Any pointers would be greatly appreciated.
By default, Elastic Beanstalk looks for a file named application.py to start your application. Because this doesn't exist in the Django project that you've created, you need to make some adjustments to your application's environment. You also must set environment variables so that your application's modules can be loaded.
Follow these instructions:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create-deploy-python-django.html#python-django-configure-for-eb
I got this error because in an older version the django.config looked like this:
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: ebdjango/wsgi.py
and now it should look like this:
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: ebdjango.wsgi:application
I faced this issue as well. In my case, I was getting a 502 nginx error after deploying to AWS EB using eb deploy, and my environment's health was red.
My environment was using the Amazon Linux 2 AMI and everything was set correctly, but after a lot of trial and error I realized my .ebextensions folder was named .ebtextensions.
I wish EB or Django would have indicated in its logs that my folder was named incorrectly.
I got this error too and spent days trying to solve it. The problem was in the django.config file.
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: <app name>.wsgi:application
The first value (the app name) in the WSGIPath option needs to be the directory that contains the wsgi.py file.
Check the WSGIPath in the Elastic Beanstalk configuration (Software category) in the AWS console too.
It should be
<app_name>.wsgi:application
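Tying this back to the wsgi.py shown in the question (settings module mysite.settings), the wsgi.py presumably sits in a mysite package next to manage.py, so the Amazon Linux 2 form of the setting would be roughly the following; the package name is an assumption based on the settings module, not a confirmed detail of the project:
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: mysite.wsgi:application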

Running migrations when deploying django app to heroku with codeship

I'm trying to set up a continuous integration pipeline for my Python 3.5.1 / Django 1.9.7 project.
The project is running fine on heroku, and the codeship deployment pipeline for heroku works well as long as my database is unchanged.
If I want to run migrations, I have to do so manually by entering heroku run python manage.py migrate on my computer, which I would like to avoid.
I added a "Custom Script" in my Codeship deployment pipeline after the "heroku" pipeline, containing heroku run python manage.py migrate, but when Codeship attempts to execute it, it fails with the
Cannot run more than 1 Free size dynos.
message. I assume this is because the server is already up and running and I don't have more worker processes available? (please correct me if I'm wrong)
EDIT: This is where I was wrong - I had an additional process running (see answer)
Is there any way to include the database migration step in the heroku deployment pipeline? Or did I do something wrong?
I found the answer here: Heroku: Cannot run more than 1 Free size dynos
My assumption about the web server being the blocking dyno was wrong; I had a zombie process (createsuperuser) running that I did not know about.
I used heroku ps to show all running processes. The output was:
=== web (Free): gunicorn my_app.wsgi --log-file - (1)
web.1: idle 2016/06/07 17:09:06 +0200 (~ 13h ago)
=== run: one-off processes (1)
run.7012 (Free): up 2016/06/07 15:19:13 +0200 (~ 15h ago): python manage.py createsuperuser
I killed the process by typing
heroku ps:stop run.7012
and afterwards my migration via the Codeship custom script worked as expected.
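For completeness, a sketch of the check-and-fix sequence together with the Codeship custom script command; <your-app-name> is a placeholder for the actual Heroku app name:
# List everything running on the app, including one-off dynos
heroku ps --app <your-app-name>

# Stop a lingering one-off dyno using its name from the listing above
heroku ps:stop run.7012 --app <your-app-name>

# The Codeship custom script then only needs the one-off migrate command
heroku run python manage.py migrate --app <your-app-name>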

Correct way to run manage.py commands after deployment on Elastic Beanstalk?

I'm deploying a Django app to EB - my first EB deployment - and I'm confused about the order of things.
My container commands are these:
container_commands:
  01_migrate:
    command: "django-admin.py migrate"
    leader_only: true
  02_collectstatic:
    command: "django-admin.py collectstatic --noinput"
    leader_only: true
What I've noticed, however, is that each time I deploy, these container commands are run on my old codebase. Suppose my current code is app-v1.zip. I update my models.py, and create a migration. I then eb deploy, which creates app-v2.zip. The migrate command is run on the EB environment, but is run on the old codebase (app-v1.zip), before app-v2.zip is unpacked into /var/app/current, and so my migration isn't applied.
If I then run another eb deploy, it will create app-v3.zip, but will run migrate on the code in app-v2.zip. So, it works, but it means I have to run eb deploy twice any time I want to change either data models or static files (the same issue applies to collectstatic).
There is more explanation and a workaround on this blog post and this SO question, but all the "deploy Django to EB" tutorials do things the way I've done it with container_commands.
What's the correct way?
You made me worry, but I have confirmed that eb deploy does run the commands with the new version of the code. It does this in a staging area, before actually releasing to the server, but it does so with the proper version.
You can run eb logs -a after the deployment and look at the eb-activity.log to see how all the commands get executed and that the proper migration happens.
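A quick way to check this yourself (a sketch; the exact local folder the EB CLI saves logs into, and the log file name, can vary between platform and CLI versions):
# Pull the full log bundle down to .elasticbeanstalk/logs/ on your machine
eb logs --all

# Then search the activity log for the container command output
grep -A 5 "01_migrate" .elasticbeanstalk/logs/latest/*/eb-activity.log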
As for the flow, I prefer NOT to call collectstatic as part of the EB flow, as I am releasing that code directly to S3 (and CloudFront) using a gulp-based flow. So I just run migrate as part of the deployment (plus other things particular to my application):
01_migrate:
  command: "django-admin.py migrate --noinput"
  leader_only: true
and everything works as expected.