Django Docker error: table does not exist

I had an existing Django REST project with an existing MySQL database (named libraries) that I wanted to Dockerize.
My Dockerfile:
FROM python:2.7
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY . /code/
RUN pip install -r requirements.txt
My docker-compose:
version: '3'
services:
  db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: libraries
      MYSQL_USER: root
      MYSQL_PASSWORD: root
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
Steps:
I ran docker-compose build - the build was successful.
I ran docker-compose up - I had to run this command twice, and then I could access my API at localhost:8000.
However, whenever I hit any API endpoint I get the error Table "XYZ" does not exist, even though all the tables are already present.
Why does this happen?

First of all, it's strange that you had to run docker-compose up twice. I recommend running docker logs after the first run to see what went wrong, then opening another question if you need help.
Regarding your main question, keep in mind that Docker containers are stateless. That means unless you add persistent volume configuration, you'll get the same "fresh" state every time you start a new container.
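For example, a named volume mapped to MySQL's data directory survives container recreation (a sketch; the volume name mysql_data is arbitrary):
services:
  db:
    image: mysql
    volumes:
      - mysql_data:/var/lib/mysql
volumes:
  mysql_data: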
Based on your compose file, there are two containers: a "web" one and a "db" one. A fresh "db" container contains only an empty MySQL instance with the db name, db user, and db password you configured. There's no data in it. You have two options:
Run migrations from your "web" container to set up the db schema in your "db" container.
If you have data in your local/dev setup and want to use it, back it up from your local setup and restore it into your "db" container (sketched below). If you don't know how, consult the MySQL documentation on backing up data, and the "Initializing a fresh instance" section of the MySQL Docker Hub page on starting a new "db" container with existing data.
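A sketch of that second option, assuming you place the dump in a host directory named mysql-init (the MySQL image executes .sql files found in /docker-entrypoint-initdb.d when it initializes a fresh instance):
# On the machine that currently holds the data:
mysqldump -u root -p libraries > ./mysql-init/libraries.sql
Then mount that directory in the "db" service of your compose file:
db:
  image: mysql
  volumes:
    - ./mysql-init:/docker-entrypoint-initdb.d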

First you need to run the Django migrations:
$ docker exec -it [container] bash
# python manage.py migrate
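Equivalently, assuming your compose service is named web as in the file above, you can do it in one step without opening a shell:
$ docker-compose exec web python manage.py migrate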

Related

Docker Compose and Django app using AWS Lightsail containers fail to deploy

I'm trying to get a Django application running on the latest version of Lightsail, which supports deploying Docker containers as of Nov 2020 (AWS Lightsail Container Announcement).
I've created a very small Django application to test this out. However, my container deployment continues to get stuck and fail.
Here are the only logs I'm able to see:
This is my Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED=1
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
And this is my docker-compose.yml:
version: "3.9"
services:
db:
image: postgres
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
web:
build: .
image: argylehacker/app-stats:latest
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
depends_on:
- db
I'm wondering a few things:
Right now I'm only uploading the web container to Lightsail. Should I also be uploading the db container?
Should I create a postgres database in Lightsail and connect to it first?
Do I need to tell Django to run the db migrations before the application starts?
Is there a way to enable more logs from the containers? Or does the lack of logs mean that the containers aren't even able to start?
Thanks for the help!
Docker
This problem stemmed from a bad understanding of Docker. I was previously trying to include image: argylehacker/app-stats:latest in my docker-compose.yml to upload the web container to Docker Hub. This is the wrong way of going about things. From what I understand now, docker-compose is most helpful for orchestrating your local environment, rather than for creating Docker images that can be run in containers.
The most important thing is to upload a container to Lightsail that can start your server. When you're using Docker, this can be specified with a CMD at the end of your Dockerfile. In my case I needed to add this line to my Dockerfile:
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
So now it looks like this:
FROM python:3
ENV PYTHONUNBUFFERED=1
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
CMD ["python", "manage.py", "runserver", "2.0.0.0:8000"]
Finally, I removed the image: argylehacker/app-stats:latest line from my docker-compose.yml file.
At this point you should be able to:
Build your container: docker build -t argylehacker/app-stats:latest .
Upload it to Docker Hub: docker push argylehacker/app-stats:latest
Deploy it in AWS Lightsail, pointing to argylehacker/app-stats:latest
Troubleshooting
I got stuck on this because I couldn't see any meaningful logs in the Lightsail log terminal. This was because my container wasn't actually running anything.
To debug this locally I took the following steps:
Build the image: docker build -t argylehacker/app-stats:latest .
Run the container: docker run -it --rm -p 8000:8000 argylehacker/app-stats:latest
At this point docker should be running the container and you can view the logs. This is exactly what Lightsail is going to do when it runs your container.
Answers to my Original Questions
The Dockerfile is very different from a docker-compose file used to compose services. The purpose of docker-compose is to coordinate containers, whereas a Dockerfile defines how an image is built. All you need to do for Lightsail is build the image: docker build -t <image>:<tag> .
Yes, you'll need to create a Postgres database in AWS Lightsail so that Django can connect to a database and run. You'll modify the settings.py file to include the database credentials once it is available in Lightsail (see the sketch after this list).
Still tracking down the best way to run the db migrations
The lack of logs was because the Dockerfile wasn't starting Django
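For point 2, the settings change could look roughly like this (a sketch; the DB_* environment variable names are hypothetical placeholders for whatever you configure in the Lightsail deployment):
# settings.py (sketch): read the Lightsail database credentials from the
# environment; the DB_* variable names here are hypothetical.
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("DB_NAME", "postgres"),
        "USER": os.environ.get("DB_USER", "postgres"),
        "PASSWORD": os.environ.get("DB_PASSWORD", ""),
        "HOST": os.environ.get("DB_HOST", "localhost"),
        "PORT": os.environ.get("DB_PORT", "5432"),
    }
}
As for point 3, one blunt but common approach is to run the migrations ahead of the server in the image's CMD, e.g. CMD python manage.py migrate && python manage.py runserver 0.0.0.0:8000, so they execute each time the container starts.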

AWS copilot with Django never finishes deploying

I've followed this short guide to create a Django app with Docker:
https://docs.docker.com/compose/django/
and then followed the Copilot instructions to push the container up to ECS:
https://aws.amazon.com/blogs/containers/introducing-aws-copilot/
I've also used this sample to test everything, which works out fine:
https://github.com/aws-samples/aws-copilot-sample-service
The deploy completes and outputs a URL endpoint.
In my case, everything builds successfully, but once the test environment is being deployed it just hangs at this:
72ff4719 size: 3055
⠏ Deploying load-bal:7158348 to test.
and never finishes. I've even downsized my requirements.txt to a bare minimum.
My Dockerfile
FROM python:3.7.4
ENV PYTHONUNBUFFERED=1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
EXPOSE 80
COPY . /code/
docker-compose.yml
version: "3.8"
services:
db:
image: postgres
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
depends_on:
- db
requirements.txt
Django==3.0.8
djangorestframework==3.11.0
gunicorn==20.0.4
pipenv==2020.6.2
psycopg2-binary==2.8.5
virtualenv==16.7.6
Instructions I follow:
sudo docker-compose run web django-admin startproject composeexample .
Successfully creates the Django App
copilot init
Setup naming for app and load balancer
Choose to create test environment
Everything builds successfully and then just sits there. I've tried a number of variations, but the only one that works is doing the Copilot tutorial without Django involved.
6f3494a64128: Pushed
cfe650cc4def: Pushed
a477d6671cc7: Pushed
90df760355a7: Pushed
574ea6c52bdd: Pushed
d1573fad78d1: Pushed
14c1ff636882: Pushed
48ebd1638acd: Pushed
31f78d833a92: Pushed
2ea751c0f96c: Pushed
7a435d49206f: Pushed
9674e3075904: Pushed
831b66a484dc: Pushed
ini: digest: sha256:b7460876bc84b1a26e7513fa6d17b5bffd5560ae958a933984376ed2c9fe53f3 size: 3052
⠏ Deploying aiinterview-lb:ini to test.
tl;dr the Dockerfile that's being used by this tutorial is incomplete for Copilot's purposes. It needs an extra line containing
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
and the EXPOSE directive should be updated to 8000. Because Copilot doesn't recognize Docker Compose syntax and there's no command or entrypoint specified in the Dockerfile, the image will never start with Copilot's configuration settings.
Details
AWS Copilot is designed around "services" consisting of an image, possible sidecars, and additional storage resources. That means that its basic unit of config is the Docker image and the service manifest. It doesn't natively read Docker Compose syntax, so all the config that Copilot knows about is that which is specified in the Dockerfile or image and each service's manifest.yml and addons directory.
In this example, designed for use with Docker Compose, the Dockerfile doesn't have any kind of CMD or ENTRYPOINT directive, so the built image which gets pushed to Amazon ECR by Copilot won't ever start. The tutorial specifies the image's command (python manage.py runserver 0.0.0.0:8000) as an override in docker-compose.yml, so you'll want to update your Dockerfile to the following:
FROM python:3.7.4
ENV PYTHONUNBUFFERED=1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
EXPOSE 8000
COPY . /code/
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
Note here that I've changed the EXPOSE directive to 8000 to match the command from docker-compose.yml and added the command specified in the web section to the Dockerfile as a CMD directive.
You'll also want to run
copilot init --image postgres --name db --port 5432 --type "Backend Service" --deploy
This will create the db service specified in your docker-compose.yml. You may need to run this first so that your web container doesn't fail to start while searching for credentials.
Some other notes:
You can specify your database credentials by adding variables and secrets in the manifest file for db which is created in your workspace at ./copilot/db/manifest.yml. For more on how to add a secret to SSM and make it accessible to your Copilot services, check out our documentation
variables:
  POSTGRES_DB: postgres
  POSTGRES_USER: postgres
secrets:
  POSTGRES_PASSWORD: POSTGRES_PASSWORD
Your database endpoint is accessible over service discovery at db.$COPILOT_SERVICE_DISCOVERY_ENDPOINT; you may need to update the service code that connects to the database to use this endpoint instead of localhost or 0.0.0.0.
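In Django terms that might look like this (a sketch; Copilot injects COPILOT_SERVICE_DISCOVERY_ENDPOINT into each service's environment):
# settings.py (sketch): derive the database host from Copilot's service-
# discovery endpoint; assumes a DATABASES dict is already defined above.
import os

endpoint = os.environ.get("COPILOT_SERVICE_DISCOVERY_ENDPOINT")
DATABASES["default"]["HOST"] = f"db.{endpoint}" if endpoint else "localhost"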

Docker pull Django image and run container

So, I have followed this tutorial by Docker to create a Django image.
It completely works on my local machine by just running a docker-compose up command from the root directory of my project.
But, after pushing the image to docker hub https://hub.docker.com/repository/docker/vivanks/firsttry
I am pulling the image to another machine and then running:
docker run -p 8020:8020 vivanks/firsttry
But it doesn't start, and just shows this status:
EXITED(0)
Can anyone help me on how to pull this image and run it?
My Dockerfile
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
My docker-compose.yml
version: '3'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
As @larsks mentioned in his answer, your problem is that your command is in the Compose file rather than in the Dockerfile.
To run your project on another machine as-is, use the following docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    image: vivanks/firsttry:latest
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
    depends_on:
      - db
If you already added CMD python manage.py runserver 0.0.0.0:8000 to your Dockerfile and rebuilt the image, the above can be further simplified to:
version: '3'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    image: vivanks/firsttry:latest
    ports:
      - "8000:8000"
    depends_on:
      - db
Using docker run will fail in either case, since it won't set up a database.
Edit:
OP, I admire your persistence, but at the same time I do not understand the insistence on using the Docker CLI rather than docker-compose. I recommend using one of the above docker-compose.yml files to start your app.
Nevertheless, I accept the challenge of running it without docker-compose.
Your application fails to start with the docker run command because it tries to connect to the database on host db, which does not exist. In your (and my) docker-compose.yml there is a definition of a service called db. Docker Compose uses that definition to set up a database container for you and makes it available to your application under the hostname db.
To start your application without using docker-compose, you need to do manually everything it does for you automatically (the commands below assume you have added the CMD line to your Dockerfile):
docker network create --driver bridge django-test-network
docker run --detach --env POSTGRES_DB=postgres --env POSTGRES_USER=postgres --env POSTGRES_PASSWORD=postgres --network django-test-network --name db postgres:latest
docker run -it --rm --network django-test-network --publish 8080:8000 vivanks/firsttry:latest
The above three commands create a new bridged network, create and start a detached (background) container with a properly configured database attached to that network, and finally create and start an attached (foreground) container based on your image, also attached to the new network. Since both containers are on the same non-default bridged network, your application can resolve the hostname db to the internal IP address of the database container and start properly.
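To confirm that both containers really share the network, a quick check (illustrative):
docker network inspect django-test-network --format '{{range .Containers}}{{.Name}} {{end}}'
which should print both db and the name of the container running your image.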
Once you shut it down with Ctrl+C, the container with your application will delete itself (as it was started with the --rm option), but you also need to clean up the rest manually. To do so, run the following commands:
docker stop db
docker rm -v db
docker network remove django-test-network
The first command stops the database container, the second removes it along with its anonymous volume, and the third removes the network.
I hope this explains everything.
Your Dockerfile doesn't specify a CMD or ENTRYPOINT. When you run...
docker run -p 8020:8020 vivanks/firsttry
...the container has nothing to do. (It will actually try to start an interactive Python shell, but since you're not allocating a terminal with -t, the shell just exits. Successfully.) In your docker-compose.yml, you're passing in an explicit command:
command: python manage.py runserver 0.0.0.0:8000
So the equivalent docker run command line would look like:
docker run -p 8020:8020 vivanks/firsttry python manage.py runserver 0.0.0.0:8000
But you probably want to bake that into your Dockerfile like this:
CMD python manage.py runserver 0.0.0.0:8000
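Or, if you prefer the exec form, which runs the server without an intermediate shell, an equivalent line would be:
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]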

Duplicate images on docker-compose build. How to properly push two services of docker-compose.yml to Docker hub registry?

I have a docker-compose.yml defined as follows with two services (the database and the app):
version: '3'
services:
  db:
    build: .
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=(adminname)
      - POSTGRES_PASSWORD=(adminpassword)
      - CLOUDINARY_URL=(cloudinarykey)
  app:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      - db
The reason I have build: . in both services is that you can't run docker-compose push unless every service has a build entry. However, this means both services refer to the same Dockerfile, which builds the entire app. So after I run docker-compose build and look at the images available, I see this:
$ docker images
REPOSITORY    TAG      IMAGE ID       CREATED          SIZE
mellon_app    latest   XXXXXXXXXXXX   27 seconds ago   1.14GB
postgres      latest   XXXXXXXXXXXX   27 seconds ago   1.14GB
The IMAGE ID is exactly the same for both images, and so is the size. This makes me think I've definitely done some unnecessary duplication, as they're both just built from the same Dockerfile. I don't want to take up any unnecessary space; how do I do this properly?
This is my Dockerfile:
FROM (MY FRIENDS ACCOUNT)/django-npm:latest
RUN mkdir usr/src/mprova
WORKDIR /usr/src/mprova
COPY frontend ./frontend
COPY backend ./backend
WORKDIR /usr/src/mprova/frontend
RUN npm install
RUN npm run build
WORKDIR /usr/src/mprova/backend
ENV DJANGO_PRODUCTION=True
RUN pip3 install -r requirements.txt
EXPOSE 8000
CMD python3 manage.py collectstatic && \
python3 manage.py makemigrations && \
python3 manage.py migrate && \
gunicorn mellon.wsgi --bind 0.0.0.0:8000
What is the proper way to push the images to my Docker hub registry without this duplication?
The proper way is to:
1. docker build -f {path-to-dockerfile} -t {desired-docker-image-name} .
2. docker tag {desired-docker-image-name}:latest {desired-remote-image-name}:latest (or not latest but whatever you want, like a datetime in int format)
3. docker push {desired-remote-image-name}:latest
and clean up:
4. docker rmi {desired-docker-image-name}:latest {desired-remote-image-name}:latest
The whole purpose of docker-compose is to help with local development, making it easier to start several containers and combine them in a local docker-compose network, etc.
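Concretely, the duplication disappears if only the app service is built from your Dockerfile while db uses the stock postgres image; a sketch of the corrected compose file under that assumption (the Docker Hub repository name is illustrative):
version: '3'
services:
  db:
    image: postgres            # stock image, nothing to build
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=(adminname)
      - POSTGRES_PASSWORD=(adminpassword)
      - CLOUDINARY_URL=(cloudinarykey)
  app:
    build: .                   # only the app is built from the Dockerfile
    image: (yourhubuser)/mellon_app:latest   # docker-compose push uses this tag
    ports:
      - "8000:8000"
    depends_on:
      - db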

Docker Compose, Django: role "_" does not exist

Context
I am trying to run my Django application and Postgres database in a docker development environment using docker-compose (it's my first time using Docker).
I want to use my application with a custom role and database both named teddycrepineau (as opposed to using the default postgres user and db).
Goal
My goal is to deploy a web app powered on the front end by React and on the back end by a Django REST API, the whole thing running in Docker.
System/Version
python: 3.7
django: 2.1
OS: Mac OS High Sierra
What error am I getting
When running docker-compose up with my custom role and db, I get the following error: django.db.utils.OperationalError: FATAL: role "teddycrepineau" does not exist. When running the same command with the default role and db postgres, Django starts normally.
My understanding was that running docker-compose up would create the role and db passed as environment variables.
What I have tried so far
I read multiple threads on this site, GitHub, and Docker:
Tried to delete my container and rebuild it with formatting as suggested here
Went through this GitHub issue
Tried to move my environment variables from the .env file to the environment section inside my docker-compose.yml file and rebuild my container
Files
docker-compose.yml
version: '3'
volumes:
  postgres_data: {}
services:
  postgres:
    image: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data
    env_file: .env
    ports:
      - "5432"
  django:
    build:
      context: teddycrepineau-backend
      dockerfile: teddycrepineau-root/Dockerfile
    command: ./teddycrepineau-backend/teddycrepineau-root/start.sh
    env_file: .env
    volumes:
      - .:/teddycrepineau-backend
    ports:
      - "8000:8000"
    depends_on:
      - postgres
Dockerfile
FROM python:3.7
ENV PYTHONUNBUFFERED 1
WORKDIR /teddycrepineau-backend/
ADD ./teddycrepineau-root/requirements.txt /teddycrepineau-backend/
RUN pip install -r requirements.txt
ADD . /teddycrepineau-backend/
RUN chmod +x ./teddycrepineau-root/start.sh
start.sh
#!/usr/bin/env bash
python3 ./teddycrepineau-backend/teddycrepineau-root/manage.py runserver
.env
POSTGRES_PASSWORD=
POSTGRES_USER=teddycrepineau
POSTGRES_DB=teddycrepineau
EDIT
My file structure is as follows:
root
|___ teddycrepineau-backend
|    |___ teddycrepineau-root
|         |___ teddycrepineau
|         |___ Dockerfile
|         |___ manage.py
|         |___ start.sh
|___ teddycrepineau-frontend
|    |___ React-App
|___ .env
|___ docker-compose.yml
When I move my docker-compose.yml file inside my backend folder, it starts as expected with the custom user and db (though I am not able to access my site when going to 127.0.0.1:8000, but that is mostly a different issue). When I put my docker-compose.yml file in my root folder, I get the error django.db.utils.OperationalError: FATAL: role "teddycrepineau" does not exist.
This happens because your PostgreSQL db was first launched without any envs. The postgres Docker image only uses the envs the first time the container is created; after that it won't recreate the DB and users.
The solution is to remove the postgres volume so that the next time you run docker-compose up you have a fresh db with the envs read. A simple way to do it is docker-compose down -v.
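Assuming the volume naming from the compose file above, the fresh start then looks like:
docker-compose down -v   # removes the containers and the postgres_data volume
docker-compose up        # postgres re-initializes and creates the role from .env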
Change your env order like this:
POSTGRES_DB=teddycrepineau
POSTGRES_USER=teddycrepineau
POSTGRES_PASSWORD=
I found it in this issue. I hope it works.
When you run
sudo docker-compose exec web python manage.py migrate
you will of course receive:
django.db.utils.OperationalError: FATAL: role "user" does not exist
First you need to run:
sudo docker-compose down -v
sudo docker system prune
Check the containers; they should be deleted:
sudo docker ps -a
Then check the images:
sudo docker image ls
Don't forget to delete the images:
sudo docker image rm 3e57319a7a3a
Go to the project folder and then try:
python manage.py migrate
If it didn't work, run:
python manage.py migrate --run-syncdb
and
sudo docker-compose up -d --build
sudo docker-compose exec web python manage.py collectstatic --no-input
sudo docker-compose exec web python manage.py makemigrations
sudo docker-compose exec web python manage.py migrate auth
sudo docker-compose exec web python manage.py migrate --run-syncdb
I encountered this issue due to a mismatch between the $POSTGRES_DB and $POSTGRES_USER variables. By default, psql attempts to connect to a database with the same name as the user logging in, so when the two variables differ it fails with an error along the lines of:
psql: FATAL: database "root" does not exist
I had to edit the init script I was writing to include the -d "$POSTGRES_DB" option, like so:
#!/bin/bash
set -e
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" -d "$POSTGRES_DB" <<-EOSQL
CREATE USER docker;
CREATE DATABASE docker;
GRANT ALL PRIVILEGES ON DATABASE docker TO docker;
EOSQL
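For reference, the postgres image runs such scripts only when it initializes a fresh data directory; a sketch of wiring it in, assuming the script is saved as init-user-db.sh next to the compose file:
postgres:
  image: postgres
  volumes:
    - ./init-user-db.sh:/docker-entrypoint-initdb.d/init-user-db.sh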