I am confused about how to create and work with Django apps when using Docker. Some tutorials suggest running the startapp command after starting the web container (I'm using docker-compose to bring the containers up). But since the files are then created inside that container, how do I add code to them from my local dev machine? Besides, creating apps like this just to edit the code doesn't seem right...
I've been using the following structure so far, which starts up the containers and works fine, but with just one "app", todo
(taken from https://github.com/realpython/dockerizing-django)
.
├── README.md
├── docker-compose.yml
├── nginx
│   ├── Dockerfile
│   └── sites-enabled
│       └── django_project
├── production.yml
├── tmp.json
└── web
    ├── Dockerfile
    ├── Pipfile
    ├── Pipfile.lock
    ├── docker_django
    │   ├── __init__.py
    │   ├── apps
    │   │   ├── __init__.py
    │   │   └── todo
    │   │       ├── __init__.py
    │   │       ├── admin.py
    │   │       ├── models.py
    │   │       ├── static
    │   │       │   └── main.css
    │   │       ├── templates
    │   │       │   ├── _base.html
    │   │       │   ├── dashboard.html
    │   │       │   └── home.html
    │   │       ├── tests.py
    │   │       ├── urls.py
    │   │       └── views.py
    │   ├── settings.py
    │   ├── urls.py
    │   └── wsgi.py
    ├── manage.py
    ├── requirements.txt
    ├── shell-script.sh
    └── tests
        ├── __init__.py
        └── test_env_settings.py
With the above structure I am not able to create apps locally, because apps have to be created with manage.py, but manage.py is not accessible from the apps folder. When I try giving the full absolute path to manage.py instead, it complains about a SETTINGS_MODULE / SECRET_KEY error.
What is the proper way to work with Django apps when using Docker Compose?
Do I need to change the above structure, or should I change my workflow?
EDIT:
My docker-compose.yml:
version: '3.7'
services:
  web:
    restart: always
    build: ./web
    expose:
      - "8000"
    links:
      - postgres:postgres
      - redis:redis
    volumes:
      - web-django:/usr/src/app
      - web-static:/usr/src/app/static
    env_file: .env
    environment:
      DEBUG: 'true'
    command: /usr/local/bin/gunicorn docker_django.wsgi:application -w 2 -b :8000
  nginx:
    restart: always
    build: ./nginx/
    ports:
      - "80:80"
    volumes:
      - web-static:/usr/src/app/static
    links:
      - web:web
  postgres:
    restart: always
    image: postgres:latest
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data/
  pgadmin:
    restart: always
    image: fenglc/pgadmin4
    ports:
      - "5050:5050"
    volumes:
      - pgadmindata:/var/lib/pgadmin/data/
    environment:
      DEFAULT_USER: 'pgadmin4@pgadmin.org'
      DEFAULT_PASSWORD: 'admin'
  redis:
    restart: always
    image: redis:latest
    ports:
      - "6379:6379"
    volumes:
      - redisdata:/data

volumes:
  web-django:
  web-static:
  pgdata:
  redisdata:
  pgadmindata:
My Dockerfile inside the web folder:
FROM python:3.7-slim
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
ADD Pipfile /usr/src/app
ADD Pipfile.lock /usr/src/app
RUN python -m pip install --upgrade pip
RUN python -m pip install pipenv
COPY requirements.txt requirements.txt
RUN pipenv install --system
COPY . /usr/src/app
Your structure is correct. What you are looking for is a volume: mount your Django project from the host into the container, and then you can create whatever you like in your project locally and the changes will take effect in the container.
For example, say the structure is:
.
├── django
│   ├── Dockerfile
│   └── entireDjangoAppFiles
└── docker-compose.yml
and this is my Django Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
RUN pip install Django psycopg2
EXPOSE 8000
CMD python manage.py runserver 0.0.0.0:8000
and my docker-compose.yml:
version: '3.7'
services:
  django:
    build:
      context: django
      dockerfile: Dockerfile
    ports:
      - "8000:8000"
    volumes:
      - "./django:/code"
Now any change I make in my django directory is applied to the container's /code dir as well.
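With that bind mount in place you can also run manage.py through Compose, and the generated files land in your project on the host. A rough sketch, assuming the service is called django as above (polls is just an example app name):
# create a new app in a throwaway container; the files appear in ./django on the host via the bind mount
docker-compose run --rm django python manage.py startapp polls
# or, if the stack is already up
docker-compose exec django python manage.py startapp polls
Either way the new app directory is immediately editable from your local machine.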
EDIT
Our docker-compose files are not quite the same: you are using named volumes instead of ordinary bind mounts. Those volumes are created inside Docker's own volumes directory and the containers can use them, but nothing tells Docker that you want them to contain your app, so they are empty. To fix this, just remove them from the volumes option in your docker-compose file and use bind mounts instead:
version: '3.7'
services:
  web:
    restart: always
    build: ./web
    expose:
      - "8000"
    links:
      - postgres:postgres
      - redis:redis
    volumes:
      - ./web:/usr/src/app  # mount the project dir
      - ./path/to/static/files/dir:/usr/src/app/static  # mount the static files dir
    env_file: .env
    environment:
      DEBUG: 'true'
    command: /usr/local/bin/gunicorn docker_django.wsgi:application -w 2 -b :8000
  nginx:
    restart: always
    build: ./nginx/
    ports:
      - "80:80"
    volumes:
      - ./path/to/static/files/dir:/usr/src/app/static  # same static dir as the web service
    links:
      - web:web
  postgres:
    restart: always
    image: postgres:latest
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data/
  pgadmin:
    restart: always
    image: fenglc/pgadmin4
    ports:
      - "5050:5050"
    volumes:
      - pgadmindata:/var/lib/pgadmin/data/
    environment:
      DEFAULT_USER: 'pgadmin4@pgadmin.org'
      DEFAULT_PASSWORD: 'admin'
  redis:
    restart: always
    image: redis:latest
    ports:
      - "6379:6379"
    volumes:
      - redisdata:/data

volumes:
  # web-django:
  # web-static:
  pgdata:
  redisdata:
  pgadmindata:
A note about the other named volumes, in case you wondered why you still need them: those are the database volumes, which are supposed to be populated by the containers themselves.
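If you want to check where those named volumes actually live and what ended up inside the container, a few standard commands help (web, pgdata, etc. are the names from the compose file above; Compose prefixes volume names with the project name, usually the directory name):
docker volume ls                          # named volumes Compose created (pgdata, redisdata, ...)
docker volume inspect <project>_pgdata    # shows the mountpoint under Docker's own data directory
docker-compose exec web ls /usr/src/app   # confirm the bind-mounted project is visible inside the container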
Related
I have this project structure:
└── folder
    ├── my_project_folder
    │   ├── my_app
    │   │   ├── __init__.py
    │   │   ├── asgi.py
    │   │   ├── settings.py
    │   │   ├── urls.py
    │   │   └── wsgi.py
    │   └── manage.py
    ├── .env.dev
    ├── docker-compose.yml
    ├── entrypoint.sh
    ├── requirements.txt
    └── Dockerfile
docker-compose.yml:
version: '3.9'
services:
  web:
    build: .
    command: python my_app/manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/usr/src/app/
    ports:
      - 8000:8000
    env_file:
      - .env.dev
    depends_on:
      - db
  db:
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=db_admin
      - POSTGRES_PASSWORD=db_pass
      - POSTGRES_DB=some_db

volumes:
  postgres_data:
Dockerfile:
FROM python:3.10.0-alpine
WORKDIR /usr/src/app
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN pip install --upgrade pip
RUN apk update
RUN apk add postgresql-dev gcc python3-dev musl-dev
COPY ./requirements.txt .
RUN pip install -r requirements.txt
COPY ./entrypoint.sh .
RUN sed -i 's/\r$//g' /usr/src/app/entrypoint.sh
RUN chmod +x /usr/src/app/entrypoint.sh
COPY . .
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]
It's working, but I don't like the line python my_app/manage.py runserver 0.0.0.0:8000 in my docker-compose file.
What should I change to run manage.py from the Docker folder?
I mean, how can I use python manage.py runserver 0.0.0.0:8000 (without the my_app prefix)?
In your Dockerfile, you can use WORKDIR to change the working directory inside the image:
...
COPY . .
WORKDIR "my_app"
...
Then you are inside the my_app dir and you can call your command:
python manage.py ...
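With that change, the command in docker-compose.yml can simply be python manage.py runserver 0.0.0.0:8000, and ad-hoc management commands work without the prefix as well. For example (web is the service name from the compose file above; the commands are just illustrations):
docker-compose run --rm web python manage.py migrate
docker-compose exec web python manage.py createsuperuser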
I run Django in Docker and I want to read a file from the host. I mount a host path into the container using the following entry in my docker-compose file:
volumes:
  - /path/on/host/sharedV:/var/www/project/src/shV
Then I execute mongodump and export a collection into sharedV on the host. After that, I inspect the web container, go to the shV directory inside it, and I can see the backup file. However, when I run os.listdir(path) in Django, the result is an empty list. In other words, I can access the sharedV directory from Django but I cannot see its contents!
Here is the Mounts section from docker inspect for the container:
"Mounts": [
{
"Type": "bind",
"Source": "/path/on/host/sharedV",
"Destination": "/var/www/project/src/shV",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
}
]
Any idea how I can access the host path from the running container?
Thanks
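For reference, this is roughly how I check it from a shell in the container (web is the service name), and I wonder whether the working directory matters when a relative path is passed to os.listdir():
docker-compose exec web ls -la /var/www/project/src/shV   # the backup file shows up here
docker-compose exec web pwd                               # default working directory of the Django process
If the volumes entry was added after the container was first created, recreating it with docker-compose up -d might also be needed.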
This works for me; trying to give you a picture of the setup.
Project Tree
.
├── app
│   ├── dir
│   ├── file.txt
│   └── main.py
├── dir
│   └── demo.txt
├── docker-compose.yml
└── Dockerfile
Dockerfile
# Dockerfile
FROM python:3.7-buster
RUN mkdir -p /app
WORKDIR /app
RUN useradd appuser && chown -R appuser /app
USER appuser
CMD [ "python", "./main.py" ]
docker-compose
version: '2'
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
    restart: unless-stopped
    volumes:
      - ./app:/app/
      - ./dir:/app/dir
main.py
import os

if __name__ == '__main__':
    path = './dir'
    contents = os.listdir(path)
    print('Hello', contents)
Prints
Hello ['demo.txt']
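To reproduce, roughly (assuming the files are laid out as in the tree above):
docker-compose up --build api   # builds the image and runs main.py; the bind mounts provide /app and /app/dir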
I have a project with the following structure.
ProjectName/
├── Dockerfile
├── api/
│   ├── Dockerfile
│   └── manage.py
├── docker-compose.yml
├── frontend/
│   ├── Dockerfile
│   ├── build/
│   └── src/
└── manifests/
    ├── development.yml
    └── production.yml
docker-compose.yml has a database image that's common to both environments, and development.yml and production.yml have similar but slightly different images for production and dev.
Example: in dev the api service uses Django and just runs python manage.py runserver, but in prod it runs gunicorn api.wsgi.
And the frontend runs npm start, but in prod I want it to be based on a different image. Currently each Dockerfile only works with one or the other, since the npm command is only available when I use FROM node and the nginx command only shows up when I use FROM kyma/docker-nginx.
So how can I separate these out for the different environments?
./frontend/Dockerfile:
FROM node
WORKDIR /app/frontend
COPY package.json /app/frontend
RUN npm install
EXPOSE 3000
CMD ["npm", "start"]
# Only run this bit in production environment, and not anything above this line.
#FROM kyma/docker-nginx
#COPY build/ /var/www
#CMD 'nginx'
./api/Dockerfile:
FROM python:3.5
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
postgresql-client \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /app/api
COPY requirements.txt /app/api
RUN pip install -r requirements.txt
EXPOSE 8000
# Run this command in dev
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
# Run this command in prod
#CMD ["gunicorn", "api.wsgi", "-b 0.0.0.0:8000"]
./docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
    restart: always
    ports:
      - "5432:5432"

volumes:
  node-modules:
./manifests/production.yml:
version: '3'
services:
  gunicorn:
    build: ./api
    command: ["gunicorn", "api.wsgi", "-b", "0.0.0.0:8000"]
    restart: always
    volumes:
      - ./api:/app/api
    ports:
      - "8000:8000"
    depends_on:
      - db
  nginx:
    build: ./frontend
    command: ["nginx"]
    restart: always
    volumes:
      - ./frontend:/app/frontend
      - ./frontend:/var/www
      - node-modules:/app/frontend/node_modules
    ports:
      - "80:80"

volumes:
  node-modules:
./manifests/development.yml:
version: '3'
services:
  django:
    build: ./api
    command: ["python", "manage.py", "runserver", "0.0.0.0:8000"]
    restart: always
    volumes:
      - ./api:/app/api
    ports:
      - "8000:8000"
    depends_on:
      - db
  frontend:
    build: ./frontend
    command: ["npm", "start"]
    restart: always
    volumes:
      - ./frontend:/app/frontend
      - node-modules:/app/frontend/node_modules
    ports:
      - "3000:3000"

volumes:
  node-modules:
You could use as an ENTRYPOINT a script that runs one command or the other, depending on an environment variable that you set at run time:
docker run -e env=DEV
# or
docker run -e env=PROD
You can set that same environment variable in a docker compose file.
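A minimal sketch of such an entrypoint script for the api image (the variable name env and the DEV/PROD values follow the docker run examples above; swap in your own commands):
#!/bin/sh
# entrypoint.sh (sketch): pick the command based on the env variable set at run time
if [ "$env" = "PROD" ]; then
    exec gunicorn api.wsgi -b 0.0.0.0:8000
else
    exec python manage.py runserver 0.0.0.0:8000
fi
Another option that avoids the script entirely is to keep one Dockerfile per service and select the command per environment by stacking compose files, e.g. docker-compose -f docker-compose.yml -f manifests/production.yml up -d, which is essentially what the manifests directory is already set up for.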
I am running two different Django projects on my machine, which runs Ubuntu 16.04.
I am very new to Docker. As far as I know, the only way to differentiate between the two project setups is to define different containers, so I have given a different container_name in each docker-compose.yml file. I have also used a different Postgres database name in each project's settings.py.
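(As far as I understand, docker-compose also namespaces containers, networks and named volumes by project name, which defaults to the directory holding the compose file, so starting each project from its own directory should keep them apart; the paths below are just placeholders for my two project folders:)
cd ~/project1 && docker-compose up -d   # Compose project name taken from the directory name
cd ~/project2 && docker-compose up -d
docker-compose -p project1 up -d        # or set the project name explicitly with -p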
Following are two different docker-compose.yml file configurations
PROJECT - 1
version: '3'
services:
  nginx:
    restart: always
    image: nginx:latest
    container_name: NGINX_P1
    ports:
      - "8000:8000"
    volumes:
      - ./src:/src
      - ./config/nginx:/etc/nginx/conf.d
      - /static:/static
    depends_on:
      - web
  web:
    restart: always
    build: .
    container_name: DJANGO_P1
    command: bash -c "python manage.py makemigrations && python manage.py migrate && gunicorn safersit.wsgi -b 0.0.0.0:8000 --reload"
    depends_on:
      - db
    volumes:
      - ./src:/src
      - /static:/static
    expose:
      - "8000"
  db:
    restart: always
    image: postgres:latest
    container_name: PSQL_P1
And the settings.py file for project-1 is:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'postgres_test',  # <--- different name for postgres
        'USER': 'postgres',
        'HOST': 'db',
        'PORT': 5432,
    }
}
For PROJECT - 2
version: '3'
services:
  nginx:
    restart: always
    image: nginx:latest
    container_name: NGINX_P2
    ports:
      - "8000:8000"
    volumes:
      - ./src:/src
      - ./config/nginx:/etc/nginx/conf.d
      - /static:/static
    depends_on:
      - web
  web:
    restart: always
    build: .
    container_name: DJANGO_P2
    command: bash -c "python manage.py makemigrations && python manage.py migrate && gunicorn safersit.wsgi -b 0.0.0.0:8000 --reload"
    depends_on:
      - db
    volumes:
      - ./src:/src
      - /static:/static
    expose:
      - "8000"
  db:
    restart: always
    image: postgres:latest
    container_name: PSQL_P2
And, the settings.py file for project-2 is as follows:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'postgres',
        'USER': 'postgres',
        'HOST': 'db',
        'PORT': 5432,
    }
}
Project-1 is already set up and running well. But when I try to run the second project, it gives the following error:
DJANGO_P2 | users.User.numbers: (models.E006) The field 'numbers' clashes with the field 'numbers' from model 'users.user'.
I don't have any field named numbers in my second project's users model, but it is trying to access it from my first project (project-1). I am very confused right now with this setup. Am I doing it the right way? Why is my second project trying to access the database from the first one, even though I have different container names and database names?
PROJECT-1 folder structure
.
├── config
│   ├── nginx
│   └── requirements.pip
├── docker-compose-backup.yml
├── docker-compose.yml
├── Dockerfile
├── README.md
└── src
    ├── project1
    ├── manage.py
    ├── messages
    └── users
PROJECT-2 folder structure:
.
├── config
│   ├── nginx
│   └── requirements.pip
├── docker-compose.yml
├── Dockerfile
├── README.md
└── src
    ├── manage.py
    ├── project2
    └── users
And the list of running containers from both projects is as follows:
docker ps -a
CONTAINER ID   IMAGE                COMMAND                  CREATED             STATUS                          PORTS      NAMES
1f12e1f78ce3   nginx:latest         "nginx -g 'daemon ..."   21 minutes ago      Exited (0) 11 minutes ago                  NGINX_P1
6c20c4a10a8a   project1server_web   "bash -c 'python m..."   About an hour ago   Up 23 minutes                   8000/tcp   DJANGO_P1
b7781939ce29   postgres:latest      "docker-entrypoint..."   About an hour ago   Up 23 minutes                   5432/tcp   PSQL_P1
4600da6f7d29   nginx:latest         "nginx -g 'daemon ..."   9 hours ago         Exited (0) 9 minutes ago                   NGINX_P2
3069796edfd5   project2server_web   "bash -c 'python m..."   9 hours ago         Restarting (1) 14 minutes ago              DJANGO_P2
3be863184995   postgres:latest      "docker-entrypoint..."   9 hours ago         Up About an hour                5432/tcp   PSQL_P2
I will be grateful for your guidance.
I followed the steps in the documentation line by line, but I keep getting this error:
Your WSGIPath refers to a file that does not exist.
Here is my .config file (minus the app name and the keys):
container_commands:
  01_syncdb:
    command: "python manage.py syncdb --noinput"
    leader_only: true
option_settings:
  - namespace: aws:elasticbeanstalk:container:python
    option_name: WSGIPath
    value: [myapp]/wsgi.py
  - option_name: DJANGO_SETTINGS_MODULE
    value: [myapp].settings
  - option_name: AWS_SECRET_KEY
    value: XXXX
  - option_name: AWS_ACCESS_KEY_ID
    value: XXXX
I googled around and found that someone else had a similar problem and solved it by editing optionsettings.[myapp]. I don't want to delete something I need, but here is what I have:
[aws:autoscaling:asg]
Custom Availability Zones=
MaxSize=1
MinSize=1
[aws:autoscaling:launchconfiguration]
EC2KeyName=
InstanceType=t1.micro
[aws:autoscaling:updatepolicy:rollingupdate]
RollingUpdateEnabled=false
[aws:ec2:vpc]
Subnets=
VPCId=
[aws:elasticbeanstalk:application]
Application Healthcheck URL=
[aws:elasticbeanstalk:application:environment]
DJANGO_SETTINGS_MODULE=
PARAM1=
PARAM2=
PARAM3=
PARAM4=
PARAM5=
[aws:elasticbeanstalk:container:python]
NumProcesses=1
NumThreads=15
StaticFiles=/static/=static/
WSGIPath=application.py
[aws:elasticbeanstalk:container:python:staticfiles]
/static/=static/
[aws:elasticbeanstalk:hostmanager]
LogPublicationControl=false
[aws:elasticbeanstalk:monitoring]
Automatically Terminate Unhealthy Instances=true
[aws:elasticbeanstalk:sns:topics]
Notification Endpoint=
Notification Protocol=email
[aws:rds:dbinstance]
DBDeletionPolicy=Snapshot
DBEngine=mysql
DBInstanceClass=db.t1.micro
DBSnapshotIdentifier=
DBUser=ebroot
The user who solved that problem deleted certain lines and then ran eb start. I deleted the same lines they said they deleted, but when I ran eb start I got exactly the same problem again.
If anybody can help me out, that would be amazing!
I was having this exact problem all day yesterday, and I am using Ubuntu 13.10.
I also tried deleting the options file under .ebextensions to no avail.
What I believe finally fixed the issue was ~/mysite/requirements.txt.
After I was all set and done with eb init and eb start, I double-checked its contents and noticed they were different from what http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_Python_django.html shows at the beginning of the tutorial.
While I was hitting the WSGIPath problem the file was missing the MySQL line, so I simply added:
MySQL-python==1.2.3
and then committed all the changes and it worked.
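For completeness, the redeploy step with the old eb CLI / AWS DevTools setup used in that tutorial looks roughly like this (commit first, since the tool pushes committed code):
git add requirements.txt
git commit -m "add MySQL-python to requirements"
git aws.push   # pushes the committed code to the Elastic Beanstalk environment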
If that doesn't work for you, below are the .config file settings and the directory structure.
My .config file under ~/mysite/.ebextensions is exactly what was in the tutorial, minus the secret key and access key; you need to replace those with your own:
container_commands:
  01_syncdb:
    command: "django-admin.py syncdb --noinput"
    leader_only: true
option_settings:
  - namespace: aws:elasticbeanstalk:container:python
    option_name: WSGIPath
    value: mysite/wsgi.py
  - option_name: DJANGO_SETTINGS_MODULE
    value: mysite.settings
  - option_name: AWS_SECRET_KEY
    value: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
  - option_name: AWS_ACCESS_KEY_ID
    value: AKIAIOSFODNN7EXAMPLE
My requirements.txt:
Django==1.4.1
MySQL-python==1.2.3
argparse==1.2.1
wsgiref==0.1.2
And my tree structure. This starts out in ~/ so if I were to do
cd ~/
tree -a mysite
You should get the following output, including a bunch of directories under .git (I removed them because there are a lot):
mysite
├── .ebextensions
│   └── myapp.config
├── .elasticbeanstalk
│   ├── config
│   └── optionsettings.mysite-env
├── .git
├── .gitignore
├── manage.py
├── mysite
│   ├── __init__.py
│   ├── __init__.pyc
│   ├── settings.py
│   ├── settings.pyc
│   ├── urls.py
│   ├── wsgi.py
│   └── wsgi.pyc
└── requirements.txt