Populating a Django database using migrations

I created an API that allows reading some data, with filters, based on the models I created.
On launch the database is empty, so I created a script to populate it.
I then dockerized the app and deployed it to AWS.
The issue is that every time the container restarts, the script is re-run.
I would like to use migrations to do this instead.
How?
For now the Docker entrypoint is:
python manage.py wait_for_db
python manage.py makemigrations API
python manage.py migrate
# python import_csv.py
uwsgi --socket :9000 --workers 4 --master --enable-threads --module myapi.wsgi
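One way to do this with migrations is a data migration: generate an empty migration in the API app and load the seed data from a RunPython operation. Below is a minimal sketch; the Product model, its fields, and the file name are assumptions, so adapt them to your actual models (the forward function could just as well read your CSV). Generate the empty file with python manage.py makemigrations API --empty --name seed_data, then fill it in:
API/migrations/0002_seed_data.py
from django.db import migrations


def load_seed_data(apps, schema_editor):
    # Use the historical model via apps.get_model(), not a direct import of your models.
    Product = apps.get_model('API', 'Product')  # assumed app label and model name
    Product.objects.bulk_create([
        Product(name='Widget', price=10),
        Product(name='Gadget', price=25),
    ])


def unload_seed_data(apps, schema_editor):
    # Reverse step so the migration can be unapplied cleanly.
    Product = apps.get_model('API', 'Product')
    Product.objects.filter(name__in=['Widget', 'Gadget']).delete()


class Migration(migrations.Migration):

    dependencies = [
        ('API', '0001_initial'),
    ]

    operations = [
        migrations.RunPython(load_seed_data, unload_seed_data),
    ]
Since migrate records applied migrations in the django_migrations table, the data is loaded exactly once per database; on later container restarts the entrypoint's migrate step is a no-op, so the separate import_csv.py step is no longer needed.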

Related

How to run the qcluster process in production (Django-q)?

I have a Django webapp. The app has some scheduled tasks; for these I'm using django-q. In local development you need to run manage.py qcluster to be able to run the scheduled tasks.
How can I automatically run the qcluster process in production?
I'm deploying to a DigitalOcean droplet, using Ubuntu, nginx and gunicorn.
Are you using a Procfile?
My configuration is to have a Procfile that contains:
web: python ./manage.py runserver 0.0.0.0:$PORT
worker: python ./manage.py qcluster
This way, every time the web process is started, another process for django-q is also created.

How to run makemigrations <app> --empty with Django and Docker?

I have a Django app with Docker and Docker-compose.
So I can build, start (up) or stop (down) my containers, but how can I run a manage.py or django-admin command like makemigrations <app> --empty?
In a "classical" Django project architecture there is no problem,
but using Docker I get this error:
django.core.exceptions.ImproperlyConfigured: Requested setting STATIC_URL, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.
I tried django-admin makemigrations myapp --empty --settings=core.settings.dev but got another error: no module named core.
Typically when you have a Django app using docker-compose, you need to issue the command to the correct service/container. If you have a db service and a web service in your compose file, and the web service is the Django app, you would do something like:
docker-compose exec web python manage.py makemigrations
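For reference, here is roughly what an empty migration generated this way looks like, assuming an app called myapp whose latest migration is 0001_initial (the actual file name will differ); you then fill in the operations list yourself:
myapp/migrations/0002_empty_example.py
from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        ('myapp', '0001_initial'),
    ]

    operations = [
        # add RunPython / RunSQL operations here
    ]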

How do I run migrations in Dockerized Django?

I followed a Docker + Django tutorial which was great, in that I could successfully build and run the website following the instructions. However, I can't for the life of me figure out how to successfully run a database migration after changing a model.
Here are the steps I've taken:
Cloned the associated git repo
Set up a virtual machine called dev
with docker-machine create -d virtualbox dev
and pointed to it with eval $(docker-machine env dev)
Built and started it up with:
docker-compose build
and docker-compose up -d
Ran the initial migration (the only time I'm able to run a migration that appears successful):
docker-compose run web python manage.py migrate
Checked that the website works by navigating to the IP address returned by:
docker-machine ip dev
Made a change to a model. I just added this to the Item model in the web/docker_django/apps/todo/models.py file:
name = models.CharField(default='Unnamed', max_length=50, null=False)
Updated the image and restarted the containers with:
docker-compose down --volumes
then docker-compose build
then docker-compose up --force-recreate -d
Migration attempt number 1:
I used:
docker-compose run web python manage.py makemigrations todo
Then:
docker-compose run web python manage.py migrate
After the makemigrations command, it said:
Migrations for 'todo':
0001_initial.py:
- Create model Item
When I ran the migrate command, it gave the following message:
Operations to perform:
Synchronize unmigrated apps: messages, todo, staticfiles
Apply all migrations: contenttypes, admin, auth, sessions
Synchronizing apps without migrations:
Creating tables...
Running deferred SQL...
Installing custom SQL...
Running migrations:
No migrations to apply.
So that didn't work.
Migration attempt number 2:
This time I tried running migrations from directly inside the running web container. This looked like this:
(macbook)$ docker exec -it dockerizingdjango_web_1 bash
root@38f9381f179b:/usr/src/app# ls
Dockerfile docker_django manage.py requirements.txt static tests
root@38f9381f179b:/usr/src/app# python manage.py makemigrations todo
Migrations for 'todo':
0001_initial.py:
- Create model Item
root@38f9381f179b:/usr/src/app# python manage.py migrate
Operations to perform:
Synchronize unmigrated apps: staticfiles, messages
Apply all migrations: contenttypes, todo, admin, auth, sessions
Synchronizing apps without migrations:
Creating tables...
Running deferred SQL...
Installing custom SQL...
Running migrations:
Rendering model states... DONE
Applying todo.0001_initial...Traceback (most recent call last):
File "/usr/local/lib/python3.5/site-packages/django/db/backends/utils.py", line 62, in execute
return self.cursor.execute(sql)
psycopg2.ProgrammingError: relation "todo_item" already exists
Moreover, I couldn't find any migrations folders in that container.
I clearly have very little idea what's happening under the hood here, so if someone could show me how to successfully change models and run database migrations, I would much appreciate it. Bonus points if you can help me conceptualize what's happening, and where, when I run these commands that have to get the web and postgres images to work together.
EDIT: What worked for me
@MazelTov's suggestions will all be helpful for automating the process as I get more used to developing with Docker, but the thing I was missing, which @MazelTov filled me in on in a very helpful discussion, was mounting a volume so that the migrations show up on my local machine.
So basically, my Migration Attempt 1 would have worked just fine if instead of, for example:
docker-compose run web python manage.py makemigrations todo
...I used:
docker-compose run --service-ports -v $(pwd)/web:/usr/src/app web python manage.py makemigrations todo
There are many ways to achieve this.
1) Run ./manage.py migrate before you start your app (uwsgi, runserver, ...) in a bash script
Dockerfile
FROM debian:latest
...
# the entrypoint must be an executable file (chmod +x entrypoint.sh)
COPY entrypoint.sh /home/docker/entrypoint.sh
# what happens when I start the container
CMD ["/home/docker/entrypoint.sh"]
entrypoint.sh
#!/bin/bash
./manage.py collectstatic --noinput
# I commit my migration files to git, so I don't need to run makemigrations on the server
# ./manage.py makemigrations app_name
./manage.py migrate
# this starts nginx and uwsgi via supervisor
supervisord -c /etc/supervisor/supervisord.conf -n
2) If you have a lot of migration files and you don't want any downtime, you could run the migrate command from a separate docker-compose service:
docker-compose.yml
version: '3.3'
services:
  # starts the supervisor (uwsgi + nginx)
  web:
    build: .
    ports: ["80:80"]
  # this service will use the same image, and once the migration is done it will be stopped
  web_migrations:
    build: .
    command: ./manage.py migrate
I solved this by doing:
docker-compose exec web /usr/local/bin/python manage.py makemigrations todo
and then:
docker-compose exec web /usr/local/bin/python manage.py migrate
I got it from this issue.

Deploying Django app to Heroku via CircleCI: How to migrate database?

How can I run python manage.py makemigrations and python manage.py migrate automatically when deploying a Django app to Heroku via CircleCI? It seems all commands run locally on CircleCI, but not on the deployed application in production.
Is there a way?
python manage.py makemigrations should be run locally, and you should commit the migration files along with the code.
Regarding the migrations, open your Procfile and add this line: release: python manage.py migrate.
This will tell Heroku to migrate before deploying a new version of your code.

Django Manage.py Migrate from Google Managed VM Dockerfile - How?

I'm working on a simple implementation of Django hosted on Google's Managed VM service, backed by Google Cloud SQL. I'm able to deploy my application just fine, but when I try to issue some Django manage.py commands within the Dockerfile, I get errors.
Here's my Dockerfile:
FROM gcr.io/google_appengine/python
RUN virtualenv /venv -p python3.4
ENV VIRTUAL_ENV /venv
ENV PATH /venv/bin:$PATH
# Install dependencies.
ADD requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
# Add application code.
ADD . /app
# Overwrite the settings file with the PROD variant.
ADD my_app/settings_prod.py /app/my_app/settings.py
WORKDIR /app
RUN python manage.py migrate --noinput
# Use Gunicorn to serve the application.
CMD gunicorn --pythonpath ./my_app -b :$PORT --env DJANGO_SETTINGS_MODULE=my_app.settings my_app.wsgi
# [END docker]
Pretty basic. If I exclude the RUN python manage.py migrate --noinput line, and deploy using the GCloud tool, everything works fine. If I then log onto the VM, I can issue the manage.py migrate command without issue.
However, in the interest of simplifying deployment, I'd really like to be able to issue Django manage.py commands from the Dockerfile. At present, I get the following error if the manage.py statement is included:
django.db.utils.OperationalError: (2002, "Can't connect to local MySQL server through socket '/cloudsql/my_app:us-central1:my_app_prod_00' (2)")
Seems like a simple enough error, but it has me stumped, because the connection is certainly valid. As I said, if I deploy without issuing the manage.py command, everything works fine. Django can connect to the database, and I can issue the command manually on the VM.
I'm wondering if the reason for my problem is that the SQL proxy (/cloudsql/) doesn't exist when the Dockerfile is being deployed. If so, how do I get around this?
I'm new to Docker (this being my first attempt) and newish to Django, so I'm unsure of what the correct approach is for handling a deployment of this nature. Should I instead be positioning this command elsewhere?
There are two steps involved in deploying the application.
In the first step, the Dockerfile is used to build the image, which can happen on your machine or on another machine.
In the second step, the created docker image is executed on the Managed VM.
The RUN instruction is executed when the image is being built, not when it's being run.
You should move the manage.py migrate call into the CMD instruction, which is run when the container starts:
CMD python manage.py migrate --noinput && gunicorn --pythonpath ./my_app -b :$PORT --env DJANGO_SETTINGS_MODULE=my_app.settings my_app.wsgi