How to run the collectstatic script after deployment to Amazon Elastic Beanstalk? - Django

I have a Django app that is deployed on AWS Elastic Beanstalk. When I deploy, I need to run the migrate and collectstatic scripts.
I have created 01_build.config in the .ebextensions directory, and this is its content:
commands:
  migrate:
    command: "python manage.py migrate"
    ignoreErrors: true
  collectstatic:
    command: "python manage.py collectstatic --no-input"
    ignoreErrors: true
But still, these scripts are not being run.

Sounds like you want to run these scripts after the app has been set up, in which case you need to use the key container_commands rather than commands. From the docs:
The commands run before the application and web server are set up and the application version file is extracted.
and
Container commands run after the application and web server have been set up and the application version archive has been extracted, but before the application version is deployed. Non-container commands and other customization operations are performed prior to the application source code being extracted.
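For example, a sketch of the corrected 01_build.config, keeping the commands from the question (the leader_only flag is my addition, commonly used so migrations run on only one instance; container_commands run in alphabetical order, hence the numeric prefixes):

container_commands:
  01_migrate:
    command: "python manage.py migrate"
    leader_only: true  # assumption: apply migrations on a single instance only
  02_collectstatic:
    command: "python manage.py collectstatic --no-input"

ignoreErrors was deliberately dropped here so that a failing migration fails the deployment; keep it if you prefer the original behavior.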

Related

Can I download source code that was deployed via GitHub from Heroku?

I have deployed my Django app on Heroku via GitHub.
It is a test server, so I am using an SQLite database.
But because of the dyno manager, my SQLite database resets every day.
So I want to download only the database from Heroku.
I tried this command:
heroku git:clone -a APP-NAME
But this clones an empty repository.
And when I run the heroku run bash -a APP-NAME command, I get an ETIMEOUT error.
Is there any other way to download the source code from Heroku?
What you want to do with git is not possible, because changes to the database are not versioned.
The command to run bash on Heroku is heroku run bash, not heroku bash run. You may have to specify the app using the -a flag: https://devcenter.heroku.com/articles/heroku-cli-commands#heroku-run
I solved this by downloading the application slug.
If you have not used git to deploy your application, or using heroku git:clone has only created an empty repository, you can download the slug that was built when your application was last deployed.
First, install the heroku-slugs CLI plugin with heroku plugins:install heroku-slugs,
then run:
heroku slugs:download -a APP_NAME
This will download and extract your slug into a directory with the same name as your application.

Why not makemigrations in Elastic Beanstalk Config File Django?

Go to AWS's setup guide for deploying a Django app to Elastic Beanstalk here, and find the "Add a database migration configuration file" section.
You'll see that in the db-migrate.config file, they migrate the Django app when it is deployed. I am wondering why they do not run makemigrations as well?
Thanks!
I'm no expert on Django, but from what I understand (e.g. here), makemigrations should be run when you change your existing models.
In the tutorial no changes are made to existing models, so it's not executed.
If you want, you can add a makemigrations command to db-migrate.config and check whether it has any effect.
Possible new version of db-migrate.config:
container_commands:
  10_makemigrate:
    command: "python manage.py makemigrations"
    leader_only: true
  20_migrate:
    command: "django-admin.py migrate"
    leader_only: true
option_settings:
  aws:elasticbeanstalk:application:environment:
    DJANGO_SETTINGS_MODULE: ebdjango.settings
The reason this is not done is that migrate and makemigrations are two very different (though related) operations. While Marcin is right that there are no changes in the tutorial to warrant running makemigrations, that is not why it isn't added to the config file.
makemigrations just generates the migration files from any new or altered models, while migrate actually runs those migrations against the database. Ideally, you should not be generating migration files after deploying your code. These files should be generated before deploying; they are then shipped to the remote instances (just like any other code change), which allows you to run migrate with them and update the remote database.
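A sketch of that workflow, assuming a local development checkout and an app named myapp (an illustrative name):

# Locally, after changing models: generate the migration files
python manage.py makemigrations
# Commit them so they deploy with the rest of the code
git add myapp/migrations/
git commit -m "Add migrations for model changes"

The deployed environment then only ever runs migrate (for example via the 20_migrate container command above), applying the committed migration files to the remote database.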

Issue with Django migrations when deploying in Prod via AWS Beanstalk

I am struggling with Django migrations when deploying to several environments. I have the following ones:
Dev locally on my laptop with sqlite3 database
Test in AWS (deployed via Beanstalk) connected to AWS RDS Production Database
Prod in AWS (deployed via Beanstalk) connected to AWS RDS Production Database.
My workflow is: once I have finished development locally, I deploy to the Test instance, which should apply the backward-compatible changes to the prod database, where I run my tests before deploying to the Prod instance.
In my .gitignore, I have excluded all the migrations folders, because when I used to apply the same migrations I created in dev, it led to inconsistencies and errors during deployment. I thought it would be cleaner to recreate the migrations on the test or prod servers when deploying.
In my .ebextensions I then have the following config, which is executed when the application is deployed:
01_makemigration:
  command: "source /opt/python/run/venv/bin/activate && python manage.py makemigrations --noinput"
  leader_only: true
02_migrate:
  command: "source /opt/python/run/venv/bin/activate && python manage.py migrate --noinput"
  leader_only: true
However, when I deploy to the Test platform after making changes to my models, the migrations don't seem to happen. I have tried running those commands manually, but it tells me that everything is up to date and there are no migrations to apply.
I have also tried to "rebase" the migrations using the following sequence:
On the Database:
> delete from django_migrations;
On the App:
> rm -rf calc/migrations/
> source /opt/python/run/venv/bin/activate
> python manage.py migrate --fake
> python manage.py makemigrations calc
> python manage.py migrate --fake-initial
But it does not seem to work either.
Could someone advise on the right way to apply migrations in this type of scenario? I would prefer to avoid committing dev migrations, and instead create new, clean ones on the test environment (and the prod database), but I don't seem to find the right way to do it.
Thank you
Regards
Yann
You must not exclude your migrations from git. They are part of your code base and need to be deployed with your app. You shouldn't be running makemigrations in prod, only migrate.
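A minimal sketch of what the question's config could be reduced to once makemigrations is removed (the virtualenv activation path is copied from the question; adjust it to your platform):

container_commands:
  01_migrate:
    command: "source /opt/python/run/venv/bin/activate && python manage.py migrate --noinput"
    leader_only: true

The migrations folders then come out of .gitignore, so the migration files generated in dev are committed and deployed like any other code.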

Django Manage.py Migrate from Google Managed VM Dockerfile - How?

I'm working on a simple implementation of Django hosted on Google's Managed VM service, backed by Google Cloud SQL. I'm able to deploy my application just fine, but when I try to issue some Django manage.py commands within the Dockerfile, I get errors.
Here's my Dockerfile:
FROM gcr.io/google_appengine/python
RUN virtualenv /venv -p python3.4
ENV VIRTUAL_ENV /venv
ENV PATH /venv/bin:$PATH
# Install dependencies.
ADD requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
# Add application code.
ADD . /app
# Overwrite the settings file with the PROD variant.
ADD my_app/settings_prod.py /app/my_app/settings.py
WORKDIR /app
RUN python manage.py migrate --noinput
# Use Gunicorn to serve the application.
CMD gunicorn --pythonpath ./my_app -b :$PORT --env DJANGO_SETTINGS_MODULE=my_app.settings my_app.wsgi
# [END docker]
Pretty basic. If I exclude the RUN python manage.py migrate --noinput line, and deploy using the GCloud tool, everything works fine. If I then log onto the VM, I can issue the manage.py migrate command without issue.
However, in the interest of simplifying deployment, I'd really like to be able to issue Django manage.py commands from the Dockerfile. At present, I get the following error if the manage.py statement is included:
django.db.utils.OperationalError: (2002, "Can't connect to local MySQL server through socket '/cloudsql/my_app:us-central1:my_app_prod_00' (2)")
Seems like a simple enough error, but it has me stumped, because the connection is certainly valid. As I said, if I deploy without issuing the manage.py command, everything works fine. Django can connect to the database, and I can issue the command manually on the VM.
I'm wondering if the reason for my problem is that the SQL proxy (/cloudsql/) doesn't exist when the Dockerfile is being deployed. If so, how do I get around this?
I'm new to Docker (this being my first attempt) and newish to Django, so I'm unsure of what the correct approach is for handling a deployment of this nature. Should I instead be positioning this command elsewhere?
There are two steps involved in deploying the application.
In the first step, the Dockerfile is used to build the image, which can happen on your machine or on another machine.
In the second step, the created Docker image is executed on the Managed VM.
The RUN instruction is executed when the image is being built, not when it's being run.
You should move the manage.py command into CMD, which is executed when the image is run:
CMD python manage.py migrate --noinput && gunicorn --pythonpath ./my_app -b :$PORT --env DJANGO_SETTINGS_MODULE=my_app.settings my_app.wsgi
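If the run-time steps grow, an equivalent variant is to move them into a small entrypoint script; the file name docker-entrypoint.sh is illustrative:

#!/bin/sh
# docker-entrypoint.sh -- runs at container start, when the Cloud SQL socket exists
set -e
python manage.py migrate --noinput
exec gunicorn --pythonpath ./my_app -b :$PORT --env DJANGO_SETTINGS_MODULE=my_app.settings my_app.wsgi

In the Dockerfile, ADD the script alongside the application code and replace the CMD with CMD ["/bin/sh", "/app/docker-entrypoint.sh"].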

Django with AWS - Correct way to syncdb and run schema migrations using South

In development, when I was running Django on a local server, I first added South to my installed apps and then ran
python manage.py syncdb
After that, whenever I made a change to the database, I'd do
python manage.py schemamigration appName --auto
python manage.py migrate appName
I now am using AWS elastic beanstalk and do
git add .
git commit "change made"
git aws.push
to update the AWS server. However, I cannot run
python manage.py syncdb
because it says
Unknown command 'syncdb'
so I cannot syncdb and run schema migrations. What is the best way for me to syncdb and run schema migrations with South, now that I am using AWS servers?
You need to make a container command; here's a snippet from the AWS docs:
On your local computer, update your configuration file (e.g., myapp.config) in the .ebextensions directory.
container_commands:
  01_syncdb:
    command: "django-admin.py syncdb --migrate --noinput"
    leader_only: true
See http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_Python_django.html (Step 6, point number 2). Sorry, no anchors in the AWS docs.
EDIT: Added the --migrate flag to syncdb, and changed the AWS doc reference to a more pertinent one.
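Putting it together, a sketch of the full myapp.config, with the settings module wired up the same way as in the db-migrate.config example earlier (the myapp.settings value is a placeholder for your own module):

container_commands:
  01_syncdb:
    command: "django-admin.py syncdb --migrate --noinput"
    leader_only: true
option_settings:
  aws:elasticbeanstalk:application:environment:
    DJANGO_SETTINGS_MODULE: myapp.settings  # placeholder: use your project's settings module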