Migrations detected whereas makemigrations/migrate already done (and database changes already applied) - django

stack: Django/Docker/Postgresql
I made some changes to my database models last month and deployed them to preprod:
- removed fields
- added fields
- altered one field constraint
Everything seemed correct: the changes were applied and the app was running.
Yesterday I made some minor changes and re-deployed, but when I rebuilt the project, new migrations were detected.
These migrations are exactly the same as the ones above.
And migrate failed because it tried to remove a field that no longer exists.
Django app update procedure:
- sudo docker-compose -f docker-compose.preprod.yml down -v
- git pull
- sudo docker-compose -f docker-compose.preprod.yml up -d --build --remove-orphans # <= error raised here
- sudo docker-compose -f docker-compose.preprod.yml up
entrypoint.sh
#!/bin/sh
if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."
    while ! nc -z $SQL_HOST $SQL_PORT; do
        sleep 0.1
    done
    echo "PostgreSQL started"
fi
python manage.py makemigrations --noinput
python manage.py migrate
exec "$@"

Your entrypoint must absolutely not have makemigrations in it. If you have been running with this in production, you may be screwed (i.e. you'll have migrations in your production database that are to be found nowhere else).
makemigrations must only be run at development time, and those migrations must be committed to source control.
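For illustration, here is a minimal sketch of the same entrypoint with makemigrations removed (assuming the same $DATABASE/$SQL_HOST/$SQL_PORT environment variables as the script above). Migration files are generated on a development machine with makemigrations, committed to the repository, and the container only applies them:
#!/bin/sh
if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."
    while ! nc -z $SQL_HOST $SQL_PORT; do
        sleep 0.1
    done
    echo "PostgreSQL started"
fi
# Apply only the migrations that were generated and committed at development time.
python manage.py migrate --noinput
exec "$@"
This way the set of migrations applied in preprod always matches what is in source control, and a rebuild cannot invent new, conflicting migrations.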

Related

Django running in Docker container: makemigrations and migrate do not see app's model on launch

I have Django running in a Docker container. The CMD of my Docker file simply runs a script, launch.sh, which inter alia has the following commands:
python manage.py makemigrations --no-input --verbosity 1
python manage.py migrate --no-input --verbosity 1
So, these commands make migrations on my Django project, and then perform the migrations (if any), whenever my container launches. This works as intended specifically for the project-level migrations.
However, inevitably, only the project-level migrations are made — that is, the app-level migrations are never made and so are never performed. But if I log into the container (with docker exec -it ... bash) and execute the same migration commands manually, the app-level migrations are made and performed.
Googling and numerous variations to my code haven't turned up any explanations for this behavior or any fix, so I'm stumped.
Any ideas?
P.S. Here is my project and app structure:
/django/
project/
app/
static/
manage.py
Also, I tried executing the same set of commands twice in succession in my script, and also running them with my app specified as the target option, but these attempts still produced the same result: only the project migrations are made, not the app migrations.
As asked, here's my Dockerfile:
FROM python:3-slim
ENV PYTHONUNBUFFERED 1
ADD django-requirements.pip .
RUN pip install --upgrade pip && \
    pip install --no-cache-dir -r django-requirements.pip
WORKDIR /
ADD launch.sh .
CMD ./launch.sh
My Django project is mounted at launch at /django, and my launch script CDs to /django before running the migration commands.
Check your Django app's Dockerfile for WORKDIR:
# In my case it is /app
WORKDIR /app
and change your launch.sh file:
# manage.py will be inside working dir
python /app/manage.py migrate --noinput
UPDATE
It depends on where you copied the launch.sh file inside the container.
If you copied all files of the Django app inside the /app dir
COPY . /app
and copied your launch.sh file outside it, like
COPY ./<path to launch file>/launch.sh /launch.sh
then inside launch.sh you have to call manage.py as:
# should prepend `/app/`
python /app/manage.py migrate --noinput
But if you copied launch.sh inside /app/, as in:
COPY ./<path to launch file>/launch.sh /app/launch.sh
then you can use the migrate command the traditional way:
python manage.py migrate --noinput
Now when you run the command using docker exec -it container-id, it runs inside the working dir and locates the manage.py file.
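Putting the pieces together, a minimal sketch of a consistent layout (the /app paths follow this answer's example and are an assumption, not the asker's actual setup):
# Dockerfile excerpt (sketch)
WORKDIR /app
COPY . /app
COPY ./launch.sh /app/launch.sh
RUN chmod +x /app/launch.sh
CMD ["/app/launch.sh"]
With that layout, launch.sh can call python manage.py migrate --noinput with a plain relative path, and the same command also works from docker exec, because both resolve manage.py against WORKDIR /app.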
I had exactly the same problem.
I think there must be a migrations/__init__.py in your Django app dir.
Be sure that you copied it to your container.
My solution was to change a line in .dockerignore from
app/migrations/* to app/migrations/0*
so that migrations/__init__.py is no longer excluded from the build context.
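For illustration, assuming an app directory named app/ as above, the relevant part of .dockerignore would look like this (lines starting with # are comments):
# before: also excludes migrations/__init__.py, so the migrations package never reaches the image
# app/migrations/*
# after: keeps __init__.py but still excludes the generated, numbered migration files
app/migrations/0*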

How do I run migrations in Dockerized Django?

I followed a Docker + Django tutorial which was great, in that I could successfully build and run the website following the instructions. However, I can't for the life of me figure out how to successfully run a database migration after changing a model.
Here are the steps I've taken:
Clone the associated git repo
Set up a virtual machine called dev
with docker-machine create -d virtualbox dev
and point to it with eval $(docker-machine env dev)
Built and started it up with:
docker-compose build
and docker-compose up -d
Run initial migration (the only time I'm able to run a migration that appears successful):
docker-compose run web python manage.py migrate
Checked that the website works by navigating to the IP address returned by:
docker-machine ip dev
Make a change to a model. I just added this to the Item model in the web/docker_django/apps/todo/models.py file:
name = models.CharField(default='Unnamed', max_length=50, null=False)
Update the image and restart the containers with:
docker-compose down --volumes
then docker-compose build
then docker-compose up --force-recreate -d
Migration attempt number 1:
I used:
docker-compose run web python manage.py makemigrations todo
Then:
docker-compose run web python manage.py migrate
After the makemigrations command, it said:
Migrations for 'todo':
  0001_initial.py:
    - Create model Item
When I ran the migrate command, it gave the following message:
Operations to perform:
  Synchronize unmigrated apps: messages, todo, staticfiles
  Apply all migrations: contenttypes, admin, auth, sessions
Synchronizing apps without migrations:
  Creating tables...
  Running deferred SQL...
  Installing custom SQL...
Running migrations:
  No migrations to apply.
So that didn't work.
Migration attempt number 2:
This time I tried running migrations from directly inside the running web container. This looked like this:
(macbook)$ docker exec -it dockerizingdjango_web_1 bash
root@38f9381f179b:/usr/src/app# ls
Dockerfile  docker_django  manage.py  requirements.txt  static  tests
root@38f9381f179b:/usr/src/app# python manage.py makemigrations todo
Migrations for 'todo':
  0001_initial.py:
    - Create model Item
root@38f9381f179b:/usr/src/app# python manage.py migrate
Operations to perform:
  Synchronize unmigrated apps: staticfiles, messages
  Apply all migrations: contenttypes, todo, admin, auth, sessions
Synchronizing apps without migrations:
  Creating tables...
  Running deferred SQL...
  Installing custom SQL...
Running migrations:
  Rendering model states... DONE
  Applying todo.0001_initial...Traceback (most recent call last):
  File "/usr/local/lib/python3.5/site-packages/django/db/backends/utils.py", line 62, in execute
    return self.cursor.execute(sql)
psycopg2.ProgrammingError: relation "todo_item" already exists
Moreover, I couldn't find any migrations folders in that container.
I clearly have very little idea what's happening under the hood here, so if someone could show me how to successfully change models and run database migrations, I would much appreciate it. Bonus points if you can help me conceptualize what's happening where when I run these commands that make the web and postgres containers work together.
EDIT: What worked for me
@MazelTov's suggestions will all be helpful for automating the process as I get more used to developing with Docker, but the thing I was missing, which @MazelTov filled me in on in a very helpful discussion, was mounting a volume so that migrations show up on my local machine.
So basically, my Migration Attempt 1 would have worked just fine if instead of, for example:
docker-compose run web python manage.py makemigrations todo
...I used:
docker-compose run --service-ports -v $(pwd)/web:/usr/src/app web python manage.py makemigrations todo
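Equivalently (a sketch based on this layout, assuming the service is named web and the project lives in ./web on the host as in the command above), the bind mount can be declared once in docker-compose.yml so every run and exec sees it:
# docker-compose.yml excerpt (sketch)
services:
  web:
    build: ./web
    volumes:
      - ./web:/usr/src/app
With that in place, a plain docker-compose run web python manage.py makemigrations todo writes the new migration files back into ./web on the host.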
There are many ways to achieve this.
1) Run ./manage.py migrate before you start your app (uwsgi, runserver, ...) in a bash script
Dockerfile
FROM debian:latest
...
# entrypoint, must be executable file chmod +x entrypoint.sh
COPY entrypoint.sh /home/docker/entrypoint.sh
# what happens when I start the container
CMD ["/home/docker/entrypoint.sh"]
entrypoint.sh
#!/bin/bash
./manage.py collectstatic --noinput
# I commit my migration files to git so I don't need to run makemigrations on the server
# ./manage.py makemigrations app_name
./manage.py migrate
# here it start nginx and the uwsgi
supervisord -c /etc/supervisor/supervisord.conf -n
2) If you have a lot of migration files and you don't want any downtime, you could run the migrate command from a separate docker-compose service
docker-compose.yml
version: '3.3'
services:
  # starts the supervisor (uwsgi + nginx)
  web:
    build: .
    ports: ["80:80"]
  # this service uses the same image, and once the migration is done it is stopped
  web_migrations:
    build: .
    command: ./manage.py migrate
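For example (an assumption about usage, not spelled out in the answer), the migration service can be run on its own and removed once it finishes:
docker-compose run --rm web_migrations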
I solved this by doing:
docker-compose exec web /usr/local/bin/python manage.py makemigrations todo
and then :
docker-compose exec web /usr/local/bin/python manage.py migrate
I got it from this issue.

Django build is taking so much time in Jenkins?

I am using Jenkins for continuous integration from GitLab and for continuous deployment. My "execute shell" step consists of the following commands:
#!/bin/bash
source /my_env/bin/activate # Activate the virtualenv
cd /var/lib/jenkins/workspace/Operations_central
#pip install -r requirements.txt # Install or upgrade dependencies
python manage.py makemigrations
python manage.py migrate # Apply South's database
sudo service nginx restart
fuser -n tcp -k 8088
gunicorn applicationfile.wsgi:application --bind=myserverip:portno

Django Manage.py Migrate from Google Managed VM Dockerfile - How?

I'm working on a simple implementation of Django hosted on Google's Managed VM service, backed by Google Cloud SQL. I'm able to deploy my application just fine, but when I try to issue some Django manage.py commands within the Dockerfile, I get errors.
Here's my Dockerfile:
FROM gcr.io/google_appengine/python
RUN virtualenv /venv -p python3.4
ENV VIRTUAL_ENV /venv
ENV PATH /venv/bin:$PATH
# Install dependencies.
ADD requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
# Add application code.
ADD . /app
# Overwrite the settings file with the PROD variant.
ADD my_app/settings_prod.py /app/my_app/settings.py
WORKDIR /app
RUN python manage.py migrate --noinput
# Use Gunicorn to serve the application.
CMD gunicorn --pythonpath ./my_app -b :$PORT --env DJANGO_SETTINGS_MODULE=my_app.settings my_app.wsgi
# [END docker]
Pretty basic. If I exclude the RUN python manage.py migrate --noinput line, and deploy using the GCloud tool, everything works fine. If I then log onto the VM, I can issue the manage.py migrate command without issue.
However, in the interest of simplifying deployment, I'd really like to be able to issue Django manage.py commands from the Dockerfile. At present, I get the following error if the manage.py statement is included:
django.db.utils.OperationalError: (2002, "Can't connect to local MySQL server through socket '/cloudsql/my_app:us-central1:my_app_prod_00' (2)")
Seems like a simple enough error, but it has me stumped, because the connection is certainly valid. As I said, if I deploy without issuing the manage.py command, everything works fine. Django can connect to the database, and I can issue the command manually on the VM.
I'm wondering if the reason for my problem is that the SQL proxy (cloudsql/) doesn't exist when the Dockerfile is being deployed. If so, how do I get around this?
I'm new to Docker (this being my first attempt) and newish to Django, so I'm unsure of what the correct approach is for handling a deployment of this nature. Should I instead be positioning this command elsewhere?
There are two steps involved in deploying the application.
In the first step, the Dockerfile is used to build the image, which can happen on your machine or on another machine.
In the second step, the created docker image is executed on the Managed VM.
The RUN instruction is executed when the image is being built, not when it's being run.
You should move the manage.py migrate call into the CMD instruction, which is run when the container starts.
CMD python manage.py migrate --noinput && gunicorn --pythonpath ./my_app -b :$PORT --env DJANGO_SETTINGS_MODULE=my_app.settings my_app.wsgi
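An alternative with the same effect (a sketch, not from the original answer; the start.sh name is hypothetical) is to move the two commands into a small start script that runs at container start, when the Cloud SQL proxy socket is available:
#!/bin/sh
# start.sh (sketch): executed at container start, not at image build time
set -e
python manage.py migrate --noinput
exec gunicorn --pythonpath ./my_app -b :$PORT --env DJANGO_SETTINGS_MODULE=my_app.settings my_app.wsgi
and then use CMD ["/app/start.sh"] (with the script made executable) in place of the combined CMD line.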

How to simplify migrations in Django 1.7?

There are already similar questions for South, but I have started my project with Django 1.7 and am not using South.
During development a lot of migrations have been created; however, the software has not yet been delivered and no existing database needs to be migrated. Therefore I would like to reset the migrations as if my current model were the original one, and recreate all databases.
What is the recommended way to do that?
EDIT: As of Django 1.8 there is a new command named squashmigrations which more or less solves the problem described here.
I got this. I just figured this out and it is good.
First, to clear migrations table:
./manage.py migrate --fake <app-name> zero
Remove the app-name/migrations/ folder or its contents.
Make the migrations:
./manage.py makemigrations <app-name>
Finally tidy up your migrations without making other database changes:
./manage.py migrate --fake <app-name>
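As a consolidated sketch of those steps (myapp stands in for your app name and is only an illustration):
# 1. Mark all of myapp's migrations as unapplied without touching the schema
./manage.py migrate --fake myapp zero
# 2. Remove the old migration files, keeping migrations/__init__.py
rm myapp/migrations/0*.py
# 3. Regenerate a single initial migration from the current models
./manage.py makemigrations myapp
# 4. Record it as applied without running it, since the tables already exist
./manage.py migrate --fake myapp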
In the Django 1.7 version of migrations the reset functionality that used to be in South has been dropped in favor of new functionality for 'squashing' your migrations. This is supposed to be a good way to keep the number of migrations in check.
https://docs.djangoproject.com/en/dev/topics/migrations/#squashing-migrations
If you still want to really start from scratch, I assume you could still do it by emptying the migrations table and removing the migration files, after which you would run makemigrations again.
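For reference, squashing is driven by a single management command; the app label and migration name below are placeholders for illustration:
# Squash myapp's migrations from 0001 up to and including 0004 into one optimized migration
./manage.py squashmigrations myapp 0004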
I just had the same problem.
Here's my workaround.
#!/bin/sh
echo "Starting ..."
echo ">> Deleting old migrations"
find . -path "*/migrations/*.py" -not -name "__init__.py" -delete
find . -path "*/migrations/*.pyc" -delete
# Optional
echo ">> Deleting database"
find . -name "db.sqlite3" -delete
echo ">> Running manage.py makemigrations"
python manage.py makemigrations
echo ">> Running manage.py migrate"
python manage.py migrate
echo ">> Done"
The find command: http://unixhelp.ed.ac.uk/CGI/man-cgi?find
Assuming this is your project structure,
project_root/
app1/
migrations/
app2/
migrations/
...
manage.py
remove_migrations.py
you can run the script remove_migrations.py from the place indicated above to delete all migration files.
#remove_migrations.py
"""
Run this file from a Django >= 1.7 project root.
Removes all migration files from all apps in a project.
"""
from unipath import Path

this_file = Path(__file__).absolute()
current_dir = this_file.parent
dir_list = current_dir.listdir()

for paths in dir_list:
    migration_folder = paths.child('migrations')
    if migration_folder.exists():
        list_files = migration_folder.listdir()
        for files in list_files:
            split = files.components()
            if split[-1] != Path('__init__.py'):
                files.remove()
Manually deleting can be tiring if you have an elaborate project. This saved me a lot of time. Deleting migration files is safe; I have done this umpteen times without facing any problems... yet.
However, when I deleted the migrations folder itself, makemigrations or migrate did not create the folder back for me. The script makes sure that the migrations folder with its __init__.py stays put, only deleting the migration files.
Delete files:
delete_migrations.py (in the root of the project):
import os

for root, dirs, files in os.walk(".", topdown=False):
    for name in files:
        if '/migrations' in root and name != '__init__.py':
            os.remove(os.path.join(root, name))
DELETE FROM django_migrations WHERE app IN ('app1', 'app2');
./manage.py makemigrations
./manage.py migrate --fake
Or, alternatively, you can write a single migration from all of this.
I tried different commands, and some of the answers helped me. Only this sequence, in my case, fixed both the broken dependencies in MYAPP's migrations and cleaned out all past migrations, starting from scratch.
Before doing this, ensure that the database is already synced (e.g. do not add a new model field here or change Meta options).
rm -Rf MYAPP/migrations/*
python manage.py makemigrations --empty MYAPP
python manage.py makemigrations
python manage.py migrate --fake MYAPP 0002
Where 0002 is the migration number returned by the last makemigrations command.
Now you can run makemigrations / migrate again normally because migration 0002 is stored but not reflected in the already-synced database.
If you don't care about previous migrations, what about just removing all migrations in the migrations/ directory? You will start the migration sequence from scratch, taking your current model as the reference, as if you had written the whole model now.
If you don't trust me enough to remove them, then try moving them away instead.
A simple way is:
Go to every app and delete its migration files.
Then go to the django_migrations table in the database and truncate it (delete all entries).
After that you can create the migrations once again.
cd to the src directory:
cd /path/to/src
Delete the migration directories:
rm -rf your_app/migrations/
Note that this should be done for each app separately.
Migrate:
python3.3 manage.py migrate
If you wish to start again:
python3.3 manage.py makemigrations your_app
If you're in development mode and you just want to reset everything (database, migrations, etc), I use this script based on Abdelhamid Ba's answer. This will wipe the tables of the database (Postgres), delete all migration files, re-run the migrations and load my initial fixtures:
#!/usr/bin/env bash
echo "This will wipe out the database, delete migration files, make and apply migrations and load the initial fixtures."
while true; do
    read -p "Do you wish to continue?" yn
    case $yn in
        [Yy]* ) make install; break;;
        [Nn]* ) exit;;
        * ) echo "Please answer yes or no.";;
    esac
done
echo ">> Deleting old migrations"
find ../../src -path "*/migrations/*.py" -not -name "__init__.py" -delete
# Optional
echo ">> Deleting database"
psql -U db_user -d db_name -a -f ./reset-db.sql
echo ">> Running manage.py makemigrations and migrate"
./migrations.sh
echo ">> Loading initial fixtures"
./load_initial_fixtures.sh
echo ">> Done"
reset-db.sql file:
DO $$ DECLARE
    r RECORD;
BEGIN
    -- if the schema you operate on is not "current", you will want to
    -- replace current_schema() in the query with 'schematodeletetablesfrom'
    -- *and* update the generated 'DROP...' accordingly.
    FOR r IN (SELECT tablename FROM pg_tables WHERE schemaname = current_schema()) LOOP
        EXECUTE 'DROP TABLE IF EXISTS ' || quote_ident(r.tablename) || ' CASCADE';
    END LOOP;
END $$;
migrations.sh file:
#!/usr/bin/env bash
cd ../../src
./manage.py makemigrations
./manage.py migrate
load_initial_fixtures.sh file:
#!/usr/bin/env bash
cd ../../src
./manage.py loaddata ~/path-to-fixture/fixture.json
Just be sure to change the paths to correspond to your app. I personally keep these scripts in a folder called project_root/script/local, and Django's sources are in project_root/src.
After deleting each "migrations" folder in my app (manually), I ran:
./manage.py dbshell
delete from django_migrations;
Then I thought I could just do ./manage.py makemigrations to regenerate them all. However, no changes were detected. I then tried specifying one app at a time: ./manage.py makemigrations foo, ./manage.py makemigrations bar. However, this resulted in circular dependencies that could not be resolved.
Finally, I ran a single makemigrations command that specified ALL of my apps (in no particular order):
./manage.py makemigrations foo bar bike orange banana etc
This time it worked: circular dependencies were automatically resolved (it created additional migration files where necessary).
Then I was able to run ./manage.py migrate --fake and was back in business.
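As a consolidated sketch of that sequence (foo, bar, etc. stand in for your actual app labels):
# (after manually deleting each app's migrations/ folder, as described above)
# 1. Clear the recorded migration history; the schema itself is untouched
./manage.py dbshell    # then, at the SQL prompt: delete from django_migrations;
# 2. Regenerate migrations for all apps in one command so circular dependencies can be resolved
./manage.py makemigrations foo bar bike orange banana
# 3. Record the fresh migrations as applied without running them
./manage.py migrate --fake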