I've been trying to find the best method to handle setting up a Django project with Docker. But I'm somewhat confused as to how CMD and ENTRYPOINT function in relation to the compose commands.
When I first set the project up, I need to run createsuperuser and migrate for the database. I've tried using a script to run these commands as the entrypoint in my Dockerfile, but it didn't seem to work consistently. I switched to the configuration shown below, where I override the Dockerfile CMD with a command in my compose file that runs makemigrations, migrate, and createsuperuser.
The issue I'm having is exactly how to set it up so that it does what I need. If I set a command (shown commented out in the code) in my compose file, it should override the CMD in my Dockerfile, from what I understand.
What I'm unsure of is whether I need to use ENTRYPOINT or CMD in my Dockerfile to achieve this. Since CMD is overridden by my compose file and ENTRYPOINT isn't, wouldn't it cause problems if it were set as ENTRYPOINT, since it would try to run gunicorn a second time after the compose command is executed?
Would there be any drawbacks in this approach compared to using an entrypoint script?
Lastly, is there a general best practice approach to handling Django's setup commands when deploying a dockerized Django application? Or am I already doing what is typically done?
Here is my Dockerfile:
FROM python:3.6
LABEL maintainer x#x.com
ARG requirements=requirements/production.txt
ENV DJANGO_SETTINGS_MODULE=site.settings.production_test
WORKDIR /app
COPY manage.py /app/
COPY requirements/ /app/requirements/
RUN pip install -r $requirements
COPY config config
COPY site site
COPY templates templates
COPY logs logs
COPY scripts scripts
EXPOSE 8001
CMD ["/usr/local/bin/gunicorn", "--config", "config/gunicorn.conf", "--log-config", "config/logging.conf", "-e", "DJANGO_SETTINGS_MODULE=site.settings.production_test", "-w", "4", "-b", "0.0.0.0:8001", "site.wsgi:application"]
And my compose file (omitted the nginx and postgres sections as they are unnecessary to illustrate the issue):
version: "3.2"
services:
app:
restart: always
build:
context: .
dockerfile: Dockerfile.prodtest
args:
requirements: requirements/production.txt
#command: bash -c "python manage.py makemigrations && python manage.py migrate && gunicorn --config gunicorn.conf --log-config logging.conf -e DJANGO_SETTINGS_MODULE=site.settings.production_test -w 4 -b 0.0.0.0:8000 site.wsgi"
container_name: dj01
environment:
- DJANGO_SETTINGS_MODULE=site.settings.production_test
- PYTHONDONTWRITEBYTECODE=1
volumes:
- ./:/app
- /static:/static
- /media:/media
networks:
- main
depends_on:
- db
I have the following entrypoint script that will attempt to do the migrate automatically on my Django project:
#!/bin/bash -x
python manage.py migrate --noinput || exit 1
exec "$#"
The only change that would need to happen to your Dockerfile is to ADD it and specify the ENTRYPOINT. I usually put these lines directly above the CMD instruction:
ADD docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod a+x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
(please note that the chmod is only necessary if the docker-entrypoint.sh file in your build environment is not executable already)
I add || exit 1 so that the script will stop the container should the migrate fail for any reason. When starting your project via docker-compose, it's possible that the database may not be 100% ready to accept connections when this migrate command runs. Between the exit on error approach and the restart: always that you have in your docker-compose.yml already, this will handle that race condition properly.
Note that the -x option I specify for bash echoes out what bash is doing, which I find helpful for debugging my scripts. It can be omitted if you want less verbosity in the container logs.
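If you'd prefer not to lean on restart cycles at all, a variation like this is also possible (just a sketch, untested against your project), retrying the migrate a few times before giving up:
#!/bin/bash -x
# Retry the migrate in case the database is still starting up, then give up
# so the container exits non-zero and (with restart: always) gets retried.
for i in 1 2 3 4 5; do
    python manage.py migrate --noinput && exec "$@"
    echo "migrate failed (attempt $i), retrying in 2s..."
    sleep 2
done
exit 1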
Dockerfile:
...
ENTRYPOINT ["entrypoint.sh"]
CMD ["start"]
entrypoint.sh will be executed every time, whilst CMD provides the default arguments for it (docs)
entrypoint.sh:
if ["$1" = "start"]
then
/usr/local/bin/gunicorn --config config/gunicorn.conf \
--log-config config/logging.conf ...
elif ["$1" = "migrate"]
# whatever
python manage.py migrate
fi
Now it is possible to do something like:
version: "3.2"
services:
app:
restart: always
build:
...
command: migrate # if needed
or
docker exec -it <container> bash -c "entrypoint.sh migrate"
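Or, as a one-off container instead of exec-ing into a running one (assuming the service is named app as above):
docker-compose run --rm app migrate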
Related
tl;dr version: how do I do X every time I start a container, instead of every time I build a new image?
I'm building a very basic Docker Django example. When I do docker-compose build, everything works as I want:
version: '3.9'
services:
app:
build:
context: .
command: sh -c "python manage.py runserver 0.0.0.0:8000"
ports:
- 8000:8000
volumes:
- ./app:/app
environment:
- SECRET_KEY=devsecretkey
- DEBUG=1
This runs the Django dev server, however only when the image is being built. The containers created from the image do nothing, but I actually want them to run the Django dev server. So I figure I should just move the command: sh -c "python manage.py runserver 0.0.0.0:8000" from docker-compose to my Dockerfile as an ENTRYPOINT.
Below is my Dockerfile:
FROM python:3.9-alpine3.13
LABEL maintainer="madshit.com"
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /requirements.txt
COPY ./app /app
WORKDIR /app
EXPOSE 8000
RUN python -m venv /py && \
/py/bin/pip install --upgrade pip && \
/py/bin/pip install -r /requirements.txt && \
adduser --disabled-password --no-create-home app
ENV PATH="/py/bin:$PATH"
USER app
# I added this because I thought it would be called every time my docker
# environment was finished setting up. No dice :(
ENTRYPOINT python manage.py runserver
The bottom section of the image below is a screenshot of the logs of my image from Docker Desktop. Strangely, the last command it accepted was to set the user, not anything to do with ENTRYPOINT. Maybe it ignored ENTRYPOINT and that's the problem? The top section shows the logs of the instance created from this image (kinda bare).
What do I need to do to make the Django webserver run in each container when deployed?
Why doesn't ENTRYPOINT seem to get called? (It's not in the logs.)
I would recommend changing your environment variable logic slightly.
environment:
- SECRET_KEY=devsecretkey
- DEBUG=1 <-- replace this
- SERVER='localhost' <-- or other env like staging or live
And then in your settings file you can do:
SERVER = os.environ.get('SERVER')
And then you can set variables based on the string like so:
if SERVER == 'production':
    DEBUG = False
else:
    DEBUG = True
This is a very regular practice so that we can customise all kinds of settings and there are plenty of use cases for this method.
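For example (a sketch; the hostnames and any setting beyond DEBUG are just illustrative), several settings can branch on the same variable:
# settings.py
import os

SERVER = os.environ.get('SERVER', 'localhost')

if SERVER == 'production':
    DEBUG = False
    ALLOWED_HOSTS = ['example.com']  # hypothetical host
else:
    DEBUG = True
    ALLOWED_HOSTS = ['*']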
If that still doesn't work, we may have to look at other issues that might be causing these symptoms.
I have been running an app without docker and have just added a Dockerfile and docker-compose file.
The issue I am having is that after I successfully build the app, runserver produces the below error when I run either that or migrate.
➜ app git:(master) sudo docker-compose run app sh -c "python manage.py runserver"
Error loading shared library libpython3.8.so.1.0: No such file or directory (needed by /usr/local/bin/python)
Error relocating /usr/local/bin/python: Py_BytesMain: symbol not found
failed to resize tty, using default size
➜ app git:(master) sudo docker-compose run app sh -c "python manage.py migrate"
Error loading shared library libpython3.8.so.1.0: No such file or directory (needed by /usr/local/bin/python)
Error relocating /usr/local/bin/python: Py_BytesMain: symbol not found
Dockerfile
FROM python:3.8-alpine
MAINTAINER realize-sec
ENV PYTHONUNBUFFERED 1
COPY requirements.txt /requirements.txt
RUN pip install -r requirements.txt
RUN mkdir /app
WORKDIR /app
COPY ./app /app
RUN adduser -D user
USER user
docker-compose.yml
version: "3"
services:
app:
build:
context: ""
ports:
- "8000:8000"
volumes:
- ./app:/app
command: >
sh -c "python manage.py runserver 0.0.0.0:8000"
What am I doing wrong that is causing this?
When I run without docker using python3 manage.py runserver it works fine.
Because I haven’t tested the build, I don’t know whether any of these things will help you to ultimately build your containers, however here are some observations to hopefully set you on the right path.
Your context is a null string, whereas it is usually a dot (.).
You typically finish the Dockerfile with the following command:
CMD [ "python3", "manage.py", "runserver", "0.0.0.0:8000" ]
So you can remove that from your compose file.
Other than that, on a more general note, although Alpine images are small, they are prone to breaking because of the additional dependencies and packages that you need to add/remove. You're probably better off going for the slim version overall. The original build will take a bit longer but it will be more manageable.
Also, if you’re running a modern version of Docker on your machine, then you can move the syntax version of the compose file to version 3.7 or 3.8, depending upon your version of Docker.
I would like to run a script (to populate my MySQL Docker container) only when my docker containers are first built. I'm running the following docker-compose.yml file, which contains a Django container.
version: '3'
services:
mysql:
restart: always
image: mysql:5.7
environment:
MYSQL_DATABASE: 'maps_data'
# So you don't have to use root, but you can if you like
MYSQL_USER: 'chicommons'
# You can use whatever password you like
MYSQL_PASSWORD: 'password'
# Password for root access
MYSQL_ROOT_PASSWORD: 'password'
ports:
- "3406:3406"
volumes:
- my-db:/var/lib/mysql
web:
restart: always
build: ./web
ports: # to access the container from outside
- "8000:8000"
env_file: .env
environment:
DEBUG: 'true'
command: /usr/local/bin/gunicorn maps.wsgi:application -w 2 -b :8000
depends_on:
- mysql
apache:
restart: always
build: ./apache/
ports:
- "80:80"
#volumes:
# - web-static:/www/static
links:
- web:web
volumes:
my-db:
I have this web/Dockerfile
FROM python:3.7-slim
RUN apt-get update && apt-get install
RUN apt-get install -y libmariadb-dev-compat libmariadb-dev
RUN apt-get update \
&& apt-get install -y --no-install-recommends gcc \
&& rm -rf /var/lib/apt/lists/*
RUN python -m pip install --upgrade pip
RUN mkdir -p /app/
WORKDIR /app/
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
COPY entrypoint.sh /app/
COPY . /app/
RUN ["chmod", "+x", "/app/entrypoint.sh"]
ENTRYPOINT ["/app/entrypoint.sh"]
and these are the contents of my entrypoint.sh file
#!/bin/bash
set -e
python manage.py migrate maps
python manage.py loaddata maps/fixtures/country_data.yaml
python manage.py loaddata maps/fixtures/seed_data.yaml
exec "$#"
The issue is, when I repeatedly run "docker-compose up," the entrypoint.sh script gets run with its commands every time. I would prefer the commands to run only when the docker container is first created, but they seem to always run when the container is restarted. Is there any way to adjust what I have to achieve this?
An approach that I've used before is to wrap your loaddata calls in your own management command, which first checks if there's any data in the database, and if there is, doesn't do anything. Something like this:
# your_app/management/commands/maybe_init_data.py
from django.core.management import call_command
from django.core.management.base import BaseCommand
from address.models import Country
class Command(BaseCommand):
def handle(self, *args, **options):
if not Country.objects.exists():
self.stdout.write('Seeding initial data')
call_command('loaddata', 'maps/fixtures/country_data.yaml')
call_command('loaddata', 'maps/fixtures/seed_data.yaml')
And then change your entrypoint script to (keeping the final exec so the container's command still runs):
python manage.py migrate
python manage.py maybe_init_data
exec "$@"
(Assumption here that you have a Country model - replace with a model that you do actually have in your fixtures.)
The idea of seeding your database on the first run is a very common case. As others have suggested, you can change your entrypoint.sh script and apply some conditional logic to it to make it work the way you want.
But I think it is a much better practice if you separate the logic for seeding the database from the logic for running services, and do not keep them tangled together. That tangling might cause some unwanted behavior in the future.
I was going to suggest a workaround using docker-compose and started searching for some syntax for excluding some services while doing docker-compose up, but found out this is still an open issue. But I found this Stack Overflow answer, which suggests a very nice approach.
version: '3'
services:
all-services:
image: docker4w/nsenter-dockerd # you want to put there some small image
command: sh -c "echo start"
depends_on:
- mysql
- web
- apache
mysql:
restart: always
image: mysql:5.7
environment:
MYSQL_DATABASE: 'maps_data'
# So you don't have to use root, but you can if you like
MYSQL_USER: 'chicommons'
# You can use whatever password you like
MYSQL_PASSWORD: 'password'
# Password for root access
MYSQL_ROOT_PASSWORD: 'password'
ports:
- "3406:3406"
volumes:
- my-db:/var/lib/mysql
web:
restart: always
build: ./web
ports: # to access the container from outside
- "8000:8000"
env_file: .env
environment:
DEBUG: 'true'
command: /usr/local/bin/gunicorn maps.wsgi:application -w 2 -b :8000
depends_on:
- mysql
apache:
restart: always
build: ./apache/
ports:
- "80:80"
#volumes:
# - web-static:/www/static
links:
- web:web
seed:
build: ./web
env_file: .env
environment:
DEBUG: 'true'
entrypoint: /bin/bash -c "/bin/bash -c \"$${@}\""
command: |
/bin/bash -c "
set -e
python manage.py loaddata maps/fixtures/country_data.yaml
python manage.py loaddata maps/fixtures/seed_data.yaml
/bin/bash || exit 0
"
depends_on:
- mysql
volumes:
my-db:
If you use something like the above, you will be able to run the seeding stage before running docker-compose up.
For seeding your database, run:
docker-compose up seed
For running all your stack, use:
docker-compose up -d all-services
I think it is a clean approach and can be extended to many different scenarios and use cases.
UPDATE
If you really want to be able to run the whole stack altogether and also prevent unexpected behaviors caused by running the loaddata command multiple times, I would suggest you define a new Django management command to check for existing data. Look at this:
checkseed.py
from django.core.management.base import BaseCommand, CommandError
from project.models import Country # or whatever model you have seeded

class Command(BaseCommand):
    help = 'Check if seed data already exists'

    def handle(self, *args, **options):
        if Country.objects.all().count() > 0:
            # raising CommandError exits with a non-zero status, so the
            # shell below can skip the loaddata commands
            raise CommandError('Data already exists .. skipping')
        # do all the checks for your data integrity
        self.stdout.write(self.style.SUCCESS('Nothing exists'))
And after this, you can change the seed part of your docker-compose as below:
seed:
build: ./web
env_file: .env
environment:
DEBUG: 'true'
entrypoint: /bin/bash -c "/bin/bash -c \"$${@}\""
command: |
  /bin/bash -c "
  set -e
  if python manage.py checkseed; then
    python manage.py loaddata maps/fixtures/country_data.yaml
    python manage.py loaddata maps/fixtures/seed_data.yaml
  fi
  /bin/bash || exit 0
  "
depends_on:
- mysql
This way, you can be sure that if anyone runs docker-compose up -d by mistake, it will not cause integrity errors and problems like that. (Note that checkseed must exit with a non-zero status when data already exists, hence the CommandError above, and wrapping it in an if keeps set -e from aborting the script in that case.)
Instead of using the entrypoint.sh file, why not just run the commands in the web/Dockerfile?
RUN python manage.py migrate maps
RUN python manage.py loaddata maps/fixtures/country_data.yaml
RUN python manage.py loaddata maps/fixtures/seed_data.yaml
That way these changes will be baked into the image and, when you start the image, these changes will already have been executed.
I had a similar case recently. As the ENTRYPOINT contains the command that will be executed every time the container starts, a solution would be to include some logic in the entrypoint.sh script to avoid applying the updates (in your case the migration and the load of the data) if the effects of these operations are already present in the database.
For example:
#!/bin/bash
set -e
#Function that verifies whether the effects of the migration and the data load are present in the database
function checkEffects() {
IS_UPDATED=0
#Check effects and set IS_UPDATED to 1 if the effects are not present
}
checkEffects
if [[ $IS_UPDATED == 0 ]]
then
echo "Database already initialized. Nothing to do"
else
echo "Database is clean. Initializing it"
python manage.py migrate maps
python manage.py loaddata maps/fixtures/country_data.yaml
python manage.py loaddata maps/fixtures/seed_data.yaml
fi
exec "$#"
However, the scenario is more complex, because verifying the effects that let you decide whether or not to proceed with the updates can be quite difficult when they involve multiple tables and datasets.
Moreover, it becomes very complex if you consider how the containers are upgraded over time.
Example: today you're working with a local Dockerfile for your web service, but in production you'll probably start versioning this service and uploading it to a Docker registry. So when you upload your first release (for example, version 1.0.0), you'll specify the following in your docker-compose.yml:
web:
restart: always
image: <DOCKER_REGISTRY_HOST>:<DOCKER_REGISTRY_PORT>/web:1.0.0
ports: # to access the container from outside
- "8000:8000"
Then you'll release version "1.2.0" of the web service container, which includes other changes to the schema, for example loading additional data in entrypoint.sh:
#1.0.0 updates
python manage.py migrate maps
python manage.py loaddata maps/fixtures/country_data.yaml
python manage.py loaddata maps/fixtures/seed_data.yaml
#1.2.0 updates
python manage.py loaddata maps/fixtures/other_seed_data.yaml
Here you'll have 2 scenarios (let's ignore for now the need to check for effects in the script):
1- You deploy your services for the first time with web:1.2.0: as you start from a clean database, you should be sure that all updates are executed (both 1.0.0 and 1.2.0).
The solution to this case is easy, because you can just execute all updates.
2- You upgrade the web container to 1.2.0 in an existing environment where 1.0.0 was running: as your database has been initialized with the updates from 1.0.0, you should be sure that only the 1.2.0 updates are executed.
This is difficult, because you need to be able to check which version has already been applied to the database in order to skip the 1.0.0 updates. This means you should store the web version somewhere in the database, for example.
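For example, a minimal version of that idea (just a sketch; the schema_info table and its column are made up for illustration) could be:
-- created once, on first deploy
CREATE TABLE IF NOT EXISTS schema_info (version VARCHAR(16) NOT NULL);

-- entrypoint.sh checks this before deciding which updates to apply
SELECT version FROM schema_info;

-- and records the new version after applying them
INSERT INTO schema_info (version) VALUES ('1.2.0');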
Given all this discussion, I think the best solution is to work directly on the scripts that create the schema and populate the data, in order to make these instructions idempotent, paying particular attention to the upgrade ones.
Some examples:
1- Create a table
Instead of creating the table as follows:
CREATE TABLE country
use IF NOT EXISTS to avoid a "table already exists" error:
CREATE TABLE IF NOT EXISTS country
2- Insert default data
Instead of inserting data without the primary key specified:
INSERT INTO maps.country (name) VALUES ("USA");
include the primary key in order to avoid duplicates:
INSERT INTO maps.country (id,name) VALUES (1,"USA");
Usually build and deploy steps are separated.
Your ENTRYPOINT is part of deploy.
If you want to configure manually which deploy runs should run the migrate commands and which should just replace the containers with new ones (maybe from a fresh image), then you can split it into separate commands:
start database (if not running)
docker-compose -p production -f docker-compose.yml up -d mysql
migrate
docker run \
    --rm \
    --network production_default \
    --env-file docker.env \
    --entrypoint python \
    my-backend-image-name:prod manage.py migrate maps
and then deploy fresh image
docker-compose -p production -f docker-compose.yml up -d
And each time, decide manually whether you should run the migrate step or not.
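If you run these often, a small wrapper script can encode that decision (a sketch, reusing the image and project names from the commands above):
#!/bin/bash
# deploy.sh [--migrate]
set -e
docker-compose -p production -f docker-compose.yml up -d mysql
if [ "$1" = "--migrate" ]; then
    docker run --rm \
        --network production_default \
        --env-file docker.env \
        --entrypoint python \
        my-backend-image-name:prod manage.py migrate maps
fi
docker-compose -p production -f docker-compose.yml up -d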
I have a Django project and I've been struggling with automating the static files generation. My project structure has a docker-compose.yml file and a Dockerfile for every container image.
The docker-compose.yml file for my project:
version: '3'
services:
web:
build: ./dispenser
command: gunicorn -c gunicorn.conf.py dispenser.wsgi
volumes:
- ./dispenser:/dispenser
ports:
- "8000:8000"
restart: on-failure
nginx:
build: ./nginx/
depends_on:
- web
command: nginx -g 'daemon off;'
ports:
- "80:80"
volumes:
- ./dispenser/staticfiles:/var/www/static
restart: on-failure
The Dockerfile for the Django project I'm using:
FROM python:3.7.4
ENV PYTHONUNBUFFERED=1 \
PYTHONDONTWRITEBYTECODE=1 \
WEBAPP_DIR=/dispenser \
GUNICORN_LOG_DIR=/var/log/gunicorn
WORKDIR $WEBAPP_DIR
RUN mkdir -p $GUNICORN_LOG_DIR \
mkdir -p $WEBAPP_DIR
ADD pc-requirements.txt $WEBAPP_DIR
RUN pip install -r pc-requirements.txt
ADD . $WEBAPP_DIR
RUN python manage.py makemigrations && \
python manage.py migrate && \
python manage.py collectstatic --no-input
After several hours of testing and research I've found out that running the collectstatic and migration commands from the Dockerfile doesn't produce the same result as doing it via the command argument in the docker-compose.yml file.
If I do it as shown above, when the time comes to run the collectstatic command, only the "staticfiles" folder is generated (no files inside it). Database migrations aren't applied either (note that I'm using the default .sqlite3 db), even though the stdout when creating the container said that migrations were applied and static files generated.
The only workaround I found to make it work was executing bash from the container and then running those commands from there.
But later I found out that if I specify those commands in the docker-compose.yml file, everything works as expected, leaving the files as follows:
docker-compose.yml
version: '3'
services:
web:
build: ./dispenser
command: bash -c "python manage.py makemigrations && python manage.py migrate && python manage.py collectstatic --no-input && gunicorn -c gunicorn.conf.py dispenser.wsgi"
volumes:
- ./dispenser:/dispenser
ports:
- "8000:8000"
restart: on-failure
nginx:
build: ./nginx/
depends_on:
- web
command: nginx -g 'daemon off;'
ports:
- "80:80"
volumes:
- ./dispenser/staticfiles:/var/www/static
restart: on-failure
Dockerfile
FROM python:3.7.4
ENV PYTHONUNBUFFERED=1 \
PYTHONDONTWRITEBYTECODE=1 \
WEBAPP_DIR=/dispenser \
GUNICORN_LOG_DIR=/var/log/gunicorn
WORKDIR $WEBAPP_DIR
RUN mkdir -p $GUNICORN_LOG_DIR \
mkdir -p $WEBAPP_DIR
ADD pc-requirements.txt $WEBAPP_DIR
RUN pip install -r pc-requirements.txt
ADD . $WEBAPP_DIR
Can anyone explain to me why this occurs? And is there another way of achieving what I intend without having to specify the commands in the docker-compose.yml file?
When you mount a host directory into a container, the contents of the host directory shadow the contents of that directory in the container.
volumes:
- ./dispenser:/dispenser
So when you run your container, the initial contents of /dispenser inside the container will be the contents of ./dispenser from the host machine. Any content already at /dispenser inside the container is shadowed. So the content generated at image build time by the RUN instructions inside your Dockerfile will be lost.
In your second approach of using command in the compose file, you are mounting the volume first and then generating the content, and hence it works.
The command instruction in the compose file is used to override the default command of the Docker image, which can be set using the CMD instruction in the Dockerfile. Since you want to use the first approach of generating the content at image build time using RUN instructions, you can RUN them in a different directory (say /tmp/dispenser) and, as part of the command in compose or CMD in the Dockerfile, move the generated content from /tmp/dispenser to /dispenser.
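As a sketch of that last suggestion (the paths are illustrative), the build could generate into a staging directory and the startup command could copy the results into the mounted location. In the Dockerfile:
# generate the static files where the volume cannot shadow them
RUN python manage.py collectstatic --no-input && \
    mv staticfiles /tmp/staticfiles
and in docker-compose.yml:
command: bash -c "cp -r /tmp/staticfiles /dispenser/staticfiles && gunicorn -c gunicorn.conf.py dispenser.wsgi"
The same caveat applies to the .sqlite3 database file if it lives under /dispenser.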
I'm quite new to Docker. I'm trying to run Django on Docker. Following is my docker-compose file.
version: '2'
services:
django:
build:
context: .
dockerfile: ./deploy/dev/Dockerfile
tty: true
command: python manage.py runserver 0.0.0.0:8000
ports:
- "8000:8000"
volumes:
- ./app:/src/app
depends_on:
- "workflow_db"
- "rabbitmq"
env_file:
- ./deploy/dev/envvar.env
workflow_db:
image: postgres:9.6
volumes:
- postgres_data:/var/lib/postgresql/data/
environment:
- POSTGRES_USER=hello_django
- POSTGRES_PASSWORD=hello_django
- POSTGRES_DB=hello_django
rabbitmq:
image: "rabbitmq:3-management"
hostname: "rabbitmq"
environment:
RABBITMQ_ERLANG_COOKIE: "SWQOKODSQALRPCLNMEQG"
RABBITMQ_DEFAULT_USER: "rabbitmq"
RABBITMQ_DEFAULT_PASS: "rabbitmq"
RABBITMQ_DEFAULT_VHOST: "/"
ports:
- "15672:15672"
- "5672:5672"
volumes:
postgres_data:
Dockerfile
FROM python:3.7-alpine
RUN apk update && apk add --no-cache gcc libffi-dev g++ python-dev build-base linux-headers postgresql-dev postgresql postgresql-contrib pcre-dev bash alpine-sdk \
&& pip install wheel
#Copy over application files
COPY ./app /src/app
#Copy over, and grant executable permission to the startup script
COPY ./deploy/dev/entrypoint.sh /
RUN chmod +x /entrypoint.sh
WORKDIR /src/app
#Install requirements pre-startup to reduce entrypoint time
RUN pip install -r requirements.txt
ENTRYPOINT [ "/entrypoint.sh" ]
And finally my entrypoint.sh
#! /bin/bash
cd /src/app || exit
echo "PIP INSTALLATION" && pip install -r requirements.txt
echo "UPGRADE" && python manage.py migrate
# echo "uwsgi" && uwsgi "uwsgi.ini"
I do docker-compose build and it builds the image. But when I do docker-compose up, django_1 exits with code 0.
However, if I uncomment the last line of entrypoint.sh, it runs perfectly well.
Can someone help me understand the reason behind this?
When you have both a command and an entrypoint, Docker runs only the entrypoint, and passes the command to it as arguments. See Understand how CMD and ENTRYPOINT interact in the Dockerfile docs. As soon as the entrypoint exits, the container is over; it can do whatever it likes with the command part, including completely ignoring it.
Typical practice is to end the entrypoint script with
exec "$#"
which causes it to just take its command-line arguments and run them as a command, replacing the entrypoint script as the main container process.
Without this, you get to the end of the entrypoint script, and the container has done everything it's told to do, so it exits successfully (status code 0).
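Applied to the entrypoint.sh from the question, that would mean ending it like this (a sketch):
#! /bin/bash
cd /src/app || exit
echo "PIP INSTALLATION" && pip install -r requirements.txt
echo "UPGRADE" && python manage.py migrate
exec "$@"
With that in place, the command from the compose file (python manage.py runserver 0.0.0.0:8000) becomes the main container process once the migrations finish.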
If you want the container to keep running, you need either to:
run a foreground process
connect to its terminal with -ti
You need to connect with a terminal to see whether the python command is failing when the uwsgi line is not executed, hence stopping the container.
AFAIU, when the last line in the entrypoint is commented out, the container doesn't have a foreground process anymore to keep it up and running, hence it exits with status 0. The entrypoint must end in a foreground process to keep the container up and running. Also, you are doing "pip install" multiple times; this step should just be in the Dockerfile.
Try moving python manage.py runserver 0.0.0.0:8000 into entrypoint.sh itself.
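i.e. something like this (a sketch, with the pip install left to the Dockerfile as suggested above):
#! /bin/bash
cd /src/app || exit
python manage.py migrate
# runserver stays in the foreground and keeps the container alive
exec python manage.py runserver 0.0.0.0:8000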
Update -
In case of a port conflict, the container will not exit with status 0 and the port conflict error will come in STDOUT. Also, when he uncomments the last line, there is no chance of a port conflict. So it seems like the foreground process is not getting executed at all.