Docker doesn't reflect code changes in the container - Django

I have a Django application that runs in a Docker container locally on a Mac. The container is managed via docker-compose, and the Django application is configured to reload when its code changes.
However, and this is the problem: when I change the application code, Django's development server reloads, but the changes are not reflected in the container. The API response doesn't change. Here is my Docker configuration:
Dockerfile
FROM python:2-slim
RUN apt-get update && apt-get install -y build-essential
COPY requirements /requirements
RUN pip install -r /requirements/build.txt
# copy app source into image
COPY service_api /opt/service_api
COPY manage.py /opt
docker-compose.yml
version: '3.7'
services:
  service-django:
    image: service-django
    build:
      dockerfile: Dockerfile
      context: .
    ports:
      - 8000:8000
    volumes:
      - ./service_api/:/opt/service_api/service_api # this path is correct!
    container_name: service-django
    hostname: service-django
    restart: always
Docker desktop: 3.5.0
Docker Engine: 20.10.7
Compose: 1.29.2
macOS Big Sur: 11.4
Any help will be appreciated!

You can either inject your code into the container at build time with that COPY service_api /opt/service_api in your Dockerfile (which is not what you want here, since it essentially bakes the source code into the image), or take the other approach (the desired one here) and bind-mount your source code directory as a volume into the container. The bind mount makes the modifications you make to your source code visible inside the container, so Django's server reload can pick them up, which is exactly what the volume in your compose file is for.
So all you need to do here is remove the copying of the source at build time and let your code reach the container only through that volume.
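For illustration, a minimal sketch of the trimmed Dockerfile, assuming the compose volume from the question stays exactly as it is:
FROM python:2-slim
RUN apt-get update && apt-get install -y build-essential
COPY requirements /requirements
RUN pip install -r /requirements/build.txt
# no COPY of service_api here: the bind mount in docker-compose.yml provides the code at runtime
COPY manage.py /opt
After rebuilding the image (docker-compose up --build), edits on the host should show up inside the container immediately, and Django's reload will serve the new code.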

Related

How to persist database file in project folder?

My application gathers data and stores it in an SQLite database (located in ./data/db.sqlite3) that we want to exchange through our Git repo along with the code. When I dockerize the app, on each machine the data persists, yet not in the project folder itself.
docker-compose.yml:
version: "3.8"
services:
gcs:
build: .
volumes:
- type: bind
source: ./data
target: /data
ports:
- "8000:8000"
Dockerfile:
FROM python:3.8-slim-buster
WORKDIR /gcs
COPY requirements.txt requirements.txt
RUN apt-get update && apt-get install nginx vim binutils libproj-dev gdal-bin -y --no-install-recommends
RUN pip3 install -r requirements.txt
COPY . .
CMD python3 manage.py runserver 0.0.0.0:8000
It mounts the 'data' folder in the project folder to a /data folder in the container. When I use docker inspect, the source folder corresponds to the project's database location:
"Mounts": [
{
"Type": "bind",
"Source": "/Users/rleblon/projects/gcsproject/data",
"Destination": "/data",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}
Yet somehow the persistent data is kept elsewhere; ./data/db.sqlite3 itself never gets updated on the host machine.
As I understand it, Docker creates a regular volume elsewhere and shares that between containers, but then what is the point of specifying source and target when the source file was already copied into the container? How can I get Docker to use the exact sqlite3 file I want, instead of creating a volume elsewhere and persisting a copy there?
It looks like you may be both copying your sqlite3 db into your image - via the line in your Dockerfile
COPY . .
which given your defined working directory in the Dockerfile
WORKDIR /gcs
and the look of your directory structure, may copy the db to
/gcs/data
as well as mounting the same sqlite3 db via your docker-compose file in line
volumes:
  - type: bind
    source: ./data
    target: /data
to
/data
in your running container of this image.
So you may have two versions of your data running in a container of this image: at
/gcs/data <-- copied over to the image in Dockerfile
/data <-- mounted as volume in docker-compose
Make sure you are not copying a version of the db into the image via your Dockerfile.
If your gcs service interacts with the version of your sqlite3 db located in /gcs/data, and not the mounted volume at /data, then any changes to the db would not persist after the container is destroyed.
On the other hand, if your app is pointing to the mounted volume version of your db at /data, then make sure the permissions set on your host / image allow for the db operations you desire.
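As an illustration only, a minimal settings.py sketch assuming you want Django to use the bind-mounted copy; the exact path is my assumption based on the compose file above:
# settings.py (sketch): point Django at the bind-mounted database file
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': '/data/db.sqlite3',  # the mounted location, not the copy baked into /gcs/data
    }
}
With this in place (and the copy of ./data kept out of the image), writes from the container go straight to ./data/db.sqlite3 on the host.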
Remember you are using a bind mount in your compose file - this means the mounted directory is shared between your host and the running container. This statement
As far as I understand Docker simply creates a regular volume elsewhere on my computer and shares that between containers
is true of Docker volumes, but not of the bind mount type you have used. See e.g. the official docs or this summary post for a nice breakdown of the differences between volume types.

Getting "Error processing tar file(exit status 1): open /myenv/include/python3.6m/Python-ast.h: no such file or directory" while docker-compose build

So I am pretty new to Docker and Django. Unfortunately, I get an error while running the command below on my Linux machine, which I am connected to from my Windows machine using PuTTY:
docker-compose build
I am getting an error:
Error processing tar file(exit status 1): open /myenv/include/python3.6m/Python-ast.h: no such file or directory
'myenv' is the virtual environment I have created inside my project folder.
I am getting a container started on port 9000. The app doesn't have anything yet, just a simple project, so I just expect to see the 'congratulations' screen. I don't know where I am going wrong. My final goal is to open the URL in my Windows browser and see the app served from the Docker container.
This is my docker-compose.yml file:
version: '3'
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:9000
    ports:
      - 202.179.92.106:8000:9000
The IP 202.179.92.106 is my public IP. I set up the above binding so that I can access the Docker container from my Windows machine. I would also appreciate input on whether the port binding is correct.
Below is my Dockerfile:
FROM python:3.6.9
RUN mkdir djangotest
WORKDIR djangotest
ADD . /djangotest
RUN pip install -r requirements.txt
Please help me out peeps!
If you have a virtual environment in your normal development tree, you can't copy it into a Docker image. You can exclude it from the build context by mentioning it in a .dockerignore file:
# .dockerignore
myenv
Within the Dockerfile, the RUN pip install line will install your application's dependencies into the Docker image, so you should have a complete self-contained image.
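If it helps, a slightly fuller .dockerignore sketch; the entries beyond myenv are common additions I'm assuming here, not something from the question:
# .dockerignore (sketch)
myenv
__pycache__
*.pyc
.git
Anything listed here never reaches the build context, so COPY and ADD can't trip over it during docker-compose build.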

Docker Compose and Django app using AWS Lighstail containers fail to deploy

I'm trying to get a Django application running on the latest version of Lightsail which supports deploying docker containers as of Nov 2020 (AWS Lightsail Container Announcement).
I've created a very small Django application to test this out. However, my container deployment continues to get stuck and fail.
Here are the only logs I'm able to see:
This is my Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED=1
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
And this is my docker-compose.yml:
version: "3.9"
services:
db:
image: postgres
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
web:
build: .
image: argylehacker/app-stats:latest
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
depends_on:
- db
I'm wondering a few things:
Right now I'm only uploading the web container to Lightsail. Should I also be uploading the db container?
Should I create a postgres database in Lightsail and connect to it first?
Do I need to tell Django to run the db migrations before the application starts?
Is there a way to enable more logs from the containers? Or does the lack of logs mean that the containers aren't even able to start.
Thanks for the help!
Docker
This problem stemmed from a bad understanding of Docker. I was previously trying to include image: argylehacker/app-stats:latest in my docker-compose.yml in order to upload the web container to Docker Hub. This is the wrong way of going about things. From what I understand now, docker-compose is most helpful for orchestrating your local environment, rather than for building the images that get run in remote containers.
The most important thing is to upload a container image to Lightsail that can start your server. When you're using Docker, this is specified with a CMD at the end of your Dockerfile. In my case I needed to add this line to my Dockerfile:
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
So now it looks like this:
FROM python:3
ENV PYTHONUNBUFFERED=1
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
CMD ["python", "manage.py", "runserver", "2.0.0.0:8000"]
Finally, I removed the image: argylehacker/app-stats:latest line from my docker-compose.yml file.
At this point you should be able to:
Build your container docker build -t argylehacker/app-stats:latest .
Upload it to DockerHub docker push argylehacker/app-stats:latest
Deploy it in AWS Lightsail pointing to argylehacker/app-stats:latest
Troubleshooting
I got stuck on this because I couldn't see any meaningful logs in the Lightsail log terminal. This was because my container wasn't actually running anything.
In order to debug this locally I took the following steps:
Build the image docker build -t argylehacker/app-stats:latest .
Run the container docker run -it --rm -p 8000:8000 argylehacker/app-stats:latest.
At this point docker should be running the container and you can view the logs. This is exactly what Lightsail is going to do when it runs your container.
Answers to my Original Questions
A Dockerfile is very different from a docker-compose file used to compose services. The purpose of docker-compose is to coordinate containers, whereas a Dockerfile defines how an image is built. All you need to do for Lightsail is build the image with docker build -t <image>:<tag> .
Yes, you'll need to create a Postgres database in AWS Lightsail so that Django can connect to a database and run. You'll modify the settings.py file to include the database credentials once it is available in Lightsail (a minimal sketch follows this list).
Still tracking down the best way to run the db migrations
The lack of logs was because the Dockerfile wasn't starting Django
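For the settings.py point above, here is a minimal sketch of what that could look like, assuming the Lightsail database endpoint and credentials are passed to the container as environment variables; the variable names are my assumption, not part of the original answer:
# settings.py (sketch): read the Lightsail Postgres connection details from the environment
import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ.get('DB_NAME', 'postgres'),
        'USER': os.environ.get('DB_USER', 'postgres'),
        'PASSWORD': os.environ.get('DB_PASSWORD', ''),
        'HOST': os.environ.get('DB_HOST', ''),   # the Lightsail database endpoint
        'PORT': os.environ.get('DB_PORT', '5432'),
    }
}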

AWS copilot with Django never finishes deploying

I've followed this short guide to create a Django app with Docker:
https://docs.docker.com/compose/django/
and then followed the Copilot tutorial to push the container up to ECS:
https://aws.amazon.com/blogs/containers/introducing-aws-copilot/
I've also used this sample to test everything, which works out fine:
https://github.com/aws-samples/aws-copilot-sample-service
The deploy completes and outputs a URL endpoint.
In my case, everything is built successfully, but once the test environment is being deployed it just hangs at this:
72ff4719 size: 3055
⠏ Deploying load-bal:7158348 to test.
and never finishes. I've even downsized my requirements.txt to a bare minimum.
My Dockerfile
FROM python:3.7.4
ENV PYTHONUNBUFFERED=1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
EXPOSE 80
COPY . /code/
docker-compose.yml
version: "3.8"
services:
db:
image: postgres
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
depends_on:
- db
requirements.txt
Django==3.0.8
djangorestframework==3.11.0
gunicorn==20.0.4
pipenv==2020.6.2
psycopg2-binary==2.8.5
virtualenv==16.7.6
Instructions I follow:
sudo docker-compose run web django-admin startproject composeexample .
Successfully creates the Django App
copilot init
Setup naming for app and load balancer
Choose to create test environment
Everything builds successfully and then it just sits here. I've tried a number of variations, but the only one that works is the Copilot tutorial without Django involved.
6f3494a64128: Pushed
cfe650cc4def: Pushed
a477d6671cc7: Pushed
90df760355a7: Pushed
574ea6c52bdd: Pushed
d1573fad78d1: Pushed
14c1ff636882: Pushed
48ebd1638acd: Pushed
31f78d833a92: Pushed
2ea751c0f96c: Pushed
7a435d49206f: Pushed
9674e3075904: Pushed
831b66a484dc: Pushed
ini: digest: sha256:b7460876bc84b1a26e7513fa6d17b5bffd5560ae958a933984376ed2c9fe53f3 size: 3052
⠏ Deploying aiinterview-lb:ini to test.
tl;dr the Dockerfile that's being used by this tutorial is incomplete for Copilot's purposes. It needs an extra line containing
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
and the EXPOSE directive should be updated to 8000. Because Copilot doesn't recognize Docker Compose syntax and there's no command or entrypoint specified in the Dockerfile, the image will never start with Copilot's configuration settings.
Details
AWS Copilot is designed around "services" consisting of an image, possible sidecars, and additional storage resources. That means that its basic unit of config is the Docker image and the service manifest. It doesn't natively read Docker Compose syntax, so all the config that Copilot knows about is that which is specified in the Dockerfile or image and each service's manifest.yml and addons directory.
In this example, designed for use with Docker Compose, the Dockerfile doesn't have any kind of CMD or ENTRYPOINT directive, so the built image which gets pushed to Amazon ECR by Copilot won't ever start. The tutorial specifies the image's command (python manage.py runserver 0.0.0.0:8000) as an override in docker-compose.yml, so you'll want to update your Dockerfile to the following:
FROM python:3.7.4
ENV PYTHONUNBUFFERED=1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
EXPOSE 8000
COPY . /code/
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
Note here that I've changed the EXPOSE directive to 8000 to match the command from docker-compose.yml and added the command specified in the web section to the Dockerfile as a CMD directive.
You'll also want to run
copilot init --image postgres --name db --port 5432 --type "Backend Service" --deploy
This will create the db service specified in your docker-compose.yml. You may need to run this first so that your web container doesn't fail to start while searching for credentials.
Some other notes:
You can specify your database credentials by adding variables and secrets in the manifest file for db which is created in your workspace at ./copilot/db/manifest.yml. For more on how to add a secret to SSM and make it accessible to your Copilot services, check out our documentation
variables:
  POSTGRES_DB: postgres
  POSTGRES_USER: postgres
secrets:
  POSTGRES_PASSWORD: POSTGRES_PASSWORD
Your database endpoint is accessible over service discovery at db.$COPILOT_SERVICE_DISCOVERY_ENDPOINT. You may need to update the service code that connects to the database to use this endpoint instead of localhost or 0.0.0.0.
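For illustration, a minimal Django settings sketch that builds the host from the service discovery endpoint; reading the values from environment variables (and every variable name other than COPILOT_SERVICE_DISCOVERY_ENDPOINT) is my assumption:
# settings.py (sketch): connect to the db service over Copilot service discovery
import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ.get('POSTGRES_DB', 'postgres'),
        'USER': os.environ.get('POSTGRES_USER', 'postgres'),
        'PASSWORD': os.environ.get('POSTGRES_PASSWORD', ''),
        'HOST': 'db.' + os.environ.get('COPILOT_SERVICE_DISCOVERY_ENDPOINT', ''),
        'PORT': '5432',
    }
}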

Host Dockerized Django project on AWS

I have a Django project which is working fine on my local machine. I want to host the same on AWS, but I'm confused about which service to use and what the best practice is. Do I use EC2, create an Ubuntu instance on it and install Docker, or use ECS?
What is the best practice for transferring my Django project to AWS? Do I create a repository on Docker Hub?
Please help me understand the best workflow for this.
My docker-compose file looks like this:
version: '3'
services:
  db:
    image: mysql:latest
    restart: always
    environment:
      - MYSQL_DATABASE=tg_db
      - MYSQL_ROOT_PASSWORD=password
    volumes:
      - ./dbdata:/var/lib/mysql
  web:
    build: .
    command: bash -c "python manage.py migrate && python manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
Thanks!
UPDATE (Steps I took for deployment)
Dockerfile:
# Start with a python image
FROM python:3
# Some stuff that everyone has been copy-pasting
# since the dawn of time.
ENV PYTHONUNBUFFERED 1
# Install things
RUN apt-get update
# Make folders and locations for project
RUN mkdir /code
COPY . /code
WORKDIR /code/project/t_backend
# Install requirements
RUN pip install -U pip
RUN pip install -Ur requirements.txt
I used sudo docker-compose up -d and the project is running locally.
Now I have pushed my tg_2_web:latest image to ECR.
Where do the database and Apache containers come into play?
Do I have to create separate repositories for the MySQL database and the Apache container?
How will I connect all the containers using ECS?
Thanks!
The answer to this question can be really broad, but just to give you a heads-up on the processes it is supposed to go through:
Packaging Images
You create a Docker image by writing a Dockerfile which copies your Python/Django source code & installs all the dependencies.
This can either be done locally, or you can use any CI/CD tool for the same.
Storing Images
This is the part where you will push & store your Docker image; all the packaged images get pushed in this step.
This could be any registry from which EC2 instances can fetch the Docker image, preferably ECR, but you can opt for Docker Hub as well. In the case of Docker Hub, you need to store your credentials in S3.
Deploying Images
In this part, you will be deploying the images to EC2 instances.
You can use various services depending on your requirements, like ECS, Elastic Beanstalk multi-container, or Fargate (relatively new).
ECS - The most commonly preferred way of deployment, but you need to manage clusters & resources by yourself. Images have to be defined in a task definition, which is a JSON file (a minimal sketch follows this list).
Beanstalk Multi-Container - Relatively newer than plain ECS; it uses ECS in the background to deploy your Docker images to the clusters. You do not have to worry about resources: just feed a JSON file to your environment & the rest is taken care of by Beanstalk.
Fargate - Deploy your containers without worrying about clusters, instance management, etc. Quite new; I never got a chance to look into it.
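To make the task definition point above concrete, here is a minimal illustrative sketch for the web container only; the family name, image URI, and resource values are placeholders I'm assuming, not prescribed values:
{
  "family": "django-web",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "<account-id>.dkr.ecr.<region>.amazonaws.com/tg_2_web:latest",
      "memory": 512,
      "essential": true,
      "portMappings": [
        { "containerPort": 8000, "hostPort": 8000 }
      ]
    }
  ]
}
A similar container definition (or a separate task and service) would cover the MySQL container; the task definition documentation linked below describes the full schema.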
Ref -
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definitions.html
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_ecs.html
https://aws.amazon.com/fargate/