Django manage.py migrate from Google Managed VM Dockerfile - How?

I'm working on a simple implementation of Django hosted on Google's Managed VM service, backed by Google Cloud SQL. I'm able to deploy my application just fine, but when I try to issue some Django manage.py commands within the Dockerfile, I get errors.
Here's my Dockerfile:
FROM gcr.io/google_appengine/python
RUN virtualenv /venv -p python3.4
ENV VIRTUAL_ENV /venv
ENV PATH /venv/bin:$PATH
# Install dependencies.
ADD requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
# Add application code.
ADD . /app
# Overwrite the settings file with the PROD variant.
ADD my_app/settings_prod.py /app/my_app/settings.py
WORKDIR /app
RUN python manage.py migrate --noinput
# Use Gunicorn to serve the application.
CMD gunicorn --pythonpath ./my_app -b :$PORT --env DJANGO_SETTINGS_MODULE=my_app.settings my_app.wsgi
# [END docker]
Pretty basic. If I exclude the RUN python manage.py migrate --noinput line and deploy using the gcloud tool, everything works fine. If I then log onto the VM, I can issue the manage.py migrate command without issue.
However, in the interest of simplifying deployment, I'd really like to be able to issue Django manage.py commands from the Dockerfile. At present, I get the following error if the manage.py statement is included:
django.db.utils.OperationalError: (2002, "Can't connect to local MySQL server through socket '/cloudsql/my_app:us-central1:my_app_prod_00' (2)")
Seems like a simple enough error, but it has me stumped, because the connection is certainly valid. As I said, if I deploy without issuing the manage.py command, everything works fine. Django can connect to the database, and I can issue the command manually on the VM.
I'm wondering if the reason for my problem is that the Cloud SQL proxy socket (/cloudsql/) doesn't exist while the Docker image is being built. If so, how do I get around this?
I'm new to Docker (this being my first attempt) and newish to Django, so I'm unsure of what the correct approach is for handling a deployment of this nature. Should I instead be positioning this command elsewhere?

There are two steps involved in deploying the application.
In the first step, the Dockerfile is used to build the image, which can happen on your machine or on another machine.
In the second step, the created docker image is executed on the Managed VM.
The RUN instruction is executed when the image is being built, not when it's being run.
You should move the manage.py invocation into the CMD instruction, which is executed when the container runs:
CMD python manage.py migrate --noinput && gunicorn --pythonpath ./my_app -b :$PORT --env DJANGO_SETTINGS_MODULE=my_app.settings my_app.wsgi
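The Cloud SQL proxy socket exists at container runtime on the Managed VM, which is why migrate succeeds there but not during the image build. If that one-liner grows unwieldy, the same sequence can be moved into a small startup script (a minimal sketch; the name entrypoint.sh is my own choice, not part of the original setup). Note that the migration then runs on every container start:
#!/bin/bash
# entrypoint.sh -- apply pending migrations, then hand off to Gunicorn
set -e
python manage.py migrate --noinput
exec gunicorn --pythonpath ./my_app -b :$PORT \
    --env DJANGO_SETTINGS_MODULE=my_app.settings my_app.wsgi
The Dockerfile's last line then becomes CMD /app/entrypoint.sh.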

Related

How to run the collectstatic script after deployment to Amazon Elastic Beanstalk?

I have a Django app deployed on AWS Elastic Beanstalk. When I deploy, I need to run the migrate and collectstatic scripts.
I have created 01_build.config in the .ebextensions directory, and this is its content:
commands:
  migrate:
    command: "python manage.py migrate"
    ignoreErrors: true
  collectstatic:
    command: "python manage.py collectstatic --no-input"
    ignoreErrors: true
but still, it is not running these scripts.
Sounds like you want to run these scripts after the app has been set up, in which case you need to use the key container_commands rather than commands. From the docs:
The commands run before the application and web server are set up and the application version file is extracted.
and
Container commands run after the application and web server have been set up and the application version archive has been extracted, but before the application version is deployed. Non-container commands and other customization operations are performed prior to the application source code being extracted.
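Concretely, a minimal sketch of the corrected config (leader_only is a standard Elastic Beanstalk option that restricts a command to a single instance, which is usually what you want for migrations; container_commands run in alphabetical order, hence the numeric prefixes):
container_commands:
  01_migrate:
    command: "python manage.py migrate --noinput"
    leader_only: true
  02_collectstatic:
    command: "python manage.py collectstatic --no-input"
Dropping ignoreErrors: true also means a failed migration will surface in the deploy logs instead of being silently swallowed.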

Django running in Docker container: makemigrations and migrate do not see app's model on launch

I have Django running in a Docker container. The CMD of my Dockerfile simply runs a script, launch.sh, which inter alia has the following commands:
python manage.py makemigrations --no-input --verbosity 1
python manage.py migrate --no-input --verbosity 1
So, these commands make migrations on my Django project and then apply them (if any) whenever my container launches. This works as intended for the project-level migrations.
However, invariably only the project-level migrations are made; the app-level migrations are never made and so are never applied. But if I log into the container (with docker exec -it ... bash) and execute the same migration commands manually, the app-level migrations are made and applied.
Googling and numerous variations to my code haven't turned up any explanations for this behavior or any fix, so I'm stumped.
Any ideas?
P.S. Here is my project and app structure:
/django/
    project/
    app/
    static/
    manage.py
Also, I tried executing the same set of commands twice in succession in my script, and also running the same set of commands in succession but with my app specified as the target option, but these attempts still produced the same results: only the project migrations are made, not the app migrations.
As asked, here's my Dockerfile:
FROM python:3-slim
ENV PYTHONUNBUFFERED 1
ADD django-requirements.pip .
RUN pip install --upgrade pip && \
    pip install --no-cache-dir -r django-requirements.pip
WORKDIR /
ADD launch.sh .
CMD ./launch.sh
My Django project is mounted at launch at /django, and my launch script cds to /django before running the migration commands.
Check your Django app Dockerfile for WORKDIR:
# In my case it is /app
WORKDIR /app
and change your launch.sh file
# manage.py will be inside working dir
python /app/manage.py migrate --noinput
UPDATE
It depends on where you copied the launch.sh file inside the container.
If you copied all the files of the Django app into the /app dir:
COPY . /app
and copied your launch.sh file outside it, like
COPY ./<path to launch file>/launch.sh /launch.sh
then inside launch.sh you have to call manage.py as:
# should prepend `/app/`
python /app/manage.py migrate --noinput
But if you copied launch.sh inside /app/, as:
COPY ./<path to launch file>/launch.sh /app/launch.sh
then you can run the migrate command the traditional way:
python manage.py migrate --noinput
Now when you run the command using docker exec -it container-id, it runs inside the working dir and locates the manage.py file.
I had exactly the same problem.
I think there must be a migrations/__init__.py in your Django app dir.
Be sure that you copied it to your container.
My solution was to change a line in .dockerignore from app/migrations/* to app/migrations/0*.
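That would explain the behavior: with app/migrations/* ignored, migrations/__init__.py never makes it into the image, so Django doesn't see a migrations package for the app. A sketch of the relevant .dockerignore line (assuming the app layout from the question):
# .dockerignore -- keep the migrations package importable,
# but don't bake generated migration files into the image
app/migrations/0*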

django-cookiecutter, docker-compose run: can't type in terminal

I'm having an issue with a project started with django cookiecutter.
To verify that the issue was not with my project, I'm testing on a blank django cookiecutter project.
The issue is that when I run:
docker-compose -f production.yml run --rm django python manage.py createsuperuser
I get the prompt but can't type in the terminal.
Same thing when I run:
docker-compose -f production.yml run --rm django python manage.py shell
I get the shell prompt, but I can't type.
The app is running on a machine on DigitalOcean created with the docker-machine create command.
Any thoughts on what the issue could be and how I could debug this?
To enable typing in the docker-compose terminal, you need to specify that the terminal session is interactive in docker-compose.yml, because by default the Docker console is not interactive:
stdin_open: true
tty: true
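For context, these keys go under the relevant service definition; a minimal sketch (the service name django comes from the commands in the question):
services:
  django:
    # keep STDIN open and allocate a pseudo-TTY so prompts accept input
    stdin_open: true
    tty: true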
Bash can be accessed inside the Docker container using the following command:
docker-compose -f production.yml exec django bash
This will give you full access to the container running your Django server. There you can run all your normal Django commands with full interactivity.
Create Superuser
python manage.py createsuperuser
For Local ENV
docker-compose -f local.yml exec django bash
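The same exec approach works for other one-off management commands too (a sketch reusing the service name from above):
# interactive Django shell inside the running container
docker-compose -f production.yml exec django python manage.py shell
# apply migrations
docker-compose -f production.yml exec django python manage.py migrate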

Django running fine outside Docker but never starts within Docker, how to find issue?

I had a working Django project that had built and deployed many images over the last few weeks. The Django project kept working fine when run as python manage.py runserver, and the Docker images kept building fine (all successful builds). However, the Django app now doesn't deploy. What could be the issue, and where should I start looking for it? I've tried the logs, but they only say "Starting Django" without actually starting the service.
I use GitHub and have gone back to previous versions of the code, and none of them work now, even though the code is exactly the same. It also fails to deploy the Django server on AWS Elastic Beanstalk infrastructure, which is my ultimate goal with this code.
start.sh:
#!/bin/bash
echo Starting Django
cd TN_API
exec python ../manage.py runserver 0.0.0.0:8000
Dockerfile:
FROM python:2.7.13-slim
# Set the working directory to /TN_API
WORKDIR /TN_API
# Copy the current directory contents into the container at /TN_API
ADD . /TN_API
# COPY startup script into known file location in container
COPY start.sh /start.sh
# Install requirements
RUN pip install -r requirements.txt
# EXPOSE port 8000 to allow communication to/from server
EXPOSE 8000
# CMD specifies the command to execute to start the server.
CMD ["/start.sh"]
Commands:
sudo docker run -d tn-api
sudo docker ps
CONTAINER ID   IMAGE    COMMAND       CREATED          STATUS         PORTS      NAMES
d28115a919f9   tn-api   "/start.sh"   11 seconds ago   Up 8 seconds   8000/tcp   festive_darwin
sudo docker logs [container id]
Starting Django
(doesn't do the whole:
Performing system checks...
System check identified no issues (0 silenced).
August 06, 2017 - 20:54:36
Django version 1.10.5, using settings 'TN_API.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.)
I changed several things, and although it doesn't work locally, it seems to work fine when deployed to AWS. I still don't get the feedback I used to get, such as the output below, but that's OK. I can hit the server and it works. Thank you all for your help.
Performing system checks...
System check identified no issues (0 silenced).
August 06, 2017 - 20:54:36
Django version 1.10.5, using settings 'TN_API.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
It looks like the path is wrong for the manage.py script in /start.sh.
Your start.sh:
#!/bin/bash
echo Starting Django
cd TN_API
exec python ../manage.py runserver 0.0.0.0:8000
Seeing that you set WORKDIR to your project directory in the Dockerfile, the start.sh script is actually run from inside the project directory, which means it is actually doing this:
cd /TN_API # WORKDIR directive in Dockerfile
echo Starting Django # from the start.sh script
cd /TN_API/TN_API # looking for TN_API within your current pwd
exec python /TN_API/manage.py runserver 0.0.0.0:8000 # goes up a level (..) to look for manage.py
So it could be that your context for running runserver is off.
You can avoid this path jumping by rewriting your Dockerfile to use a CMD directive as follows:
FROM python:2.7.13-slim
# Set the working directory to /TN_API
WORKDIR /TN_API
# Copy the current directory contents into the container at /TN_API
ADD . /TN_API
# Install requirements
RUN pip install -r requirements.txt
# EXPOSE port 8000 to allow communication to/from server
EXPOSE 8000
# CMD specifies the command to execute to start the server.
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
Here python manage.py runserver 0.0.0.0:8000 will work since you already set WORKDIR to your project directory, so you wouldn't necessarily need your start.sh script.
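Alternatively, if you'd rather keep the start.sh wrapper, a minimal corrected sketch that stays in the WORKDIR (where manage.py already lives):
#!/bin/bash
echo Starting Django
# WORKDIR is /TN_API and manage.py was copied there by ADD . /TN_API,
# so no cd or ../ path juggling is needed
exec python manage.py runserver 0.0.0.0:8000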

Change path to manage.py when running Git Push Heroku Master

I just deployed the Askbot forum to Heroku successfully, but sometimes when running git push heroku master, the automatic collectstatic process fails (to me it looks like a random failure), prompting:
-----> Python app detected
-----> Installing dependencies with pip
Cleaning up...
-----> Preparing static assets
Collectstatic configuration error. To debug, run:
$ heroku run python ./askbot/setup_templates/manage.py collectstatic --noinput
Well, I don't really know if that's the problem, but the manage.py in ./askbot/setup_templates/ contains the app's native version of the file, not the one I'm using for deployment, which is located in the app's root.
How could I make git push heroku master use the right manage.py file?
Change the path in your Procfile. Typically it's something like this:
web: gunicorn hellodjango.wsgi --log-file -
Adjust it:
# from:
web: gunicorn .askbot/setup_templates/yourApp.wsgi
# to:
web: gunicorn .askbot/yourApp.wsgi
To check if the path was indeed your problem, run this from the terminal:
heroku run python ./manage.py collectstatic
# or
heroku run python ./yourApp.wsgi collectstatic
Deleting or renaming manage.py in askbot/setup_templates/ solved the problem.
git push heroku master has never failed when running collectstatic since.
So I believe that for some reason, probably because of sys.path configuration, git push heroku master sometimes discovered and used ./askbot/setup_templates/manage.py instead of ./manage.py (which was the right one), and encountered an ImportError.
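To illustrate the ambiguity (my own sketch, using the paths from the question), the repository contained two manage.py files, and the buildpack's collectstatic step sometimes resolved the wrong one:
$ find . -name manage.py
./manage.py                          # the app's real manage.py, at the root
./askbot/setup_templates/manage.py   # template copy shipped with Askbot; removing it fixed the builds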