Permission denied: '/app/manage.py' Docker - django

I'm having a permissions issue when trying to run this docker-compose command:
docker-compose run app sh -c "django-admin.py startproject app ."
ERROR:
PermissionError: [Errno 13] Permission denied: '/app/manage.py'
I've done some research and found a similar issue here: docker-compose, PermissionError: [Errno 13] Permission denied: '/manage.py'
However, I think I'm doing something wrong when trying to change permissions in my Dockerfile.
Dockerfile:
FROM python:3.8-alpine
MAINTAINER Nick
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
RUN mkdir /app
WORKDIR /app
COPY ./app /app
RUN adduser -D user
RUN chown user:user -R /app/
RUN chmod +x /app
USER user
I added RUN chown user:user -R /app/ and RUN chmod +x /app.
Running docker build . completes successfully, however I still keep getting the permissions issue (see the diagnostic sketch after the compose file below).
docker-compose.yml
version: "3"
services:
app:
build:
context: .
ports:
- "8000:8000"
volumes:
- ./app /app
command: >
sh -c "python manage.py runserver 0.0.0.0:8000"

Your Django project seems to work well; I've tried to reproduce your error but couldn't.
My directory tree:
.
|____app
| |____app
| | |____asgi.py
| | |______init__.py
| | |____settings.py
| | |____urls.py
| | |____wsgi.py
| |____manage.py
|____requirements.txt
|____Dockerfile
|____docker-compose.yml
requirements.txt
Django==3.1.5
Dockerfile:
FROM python:3.8-alpine
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
RUN mkdir /app
WORKDIR /app
COPY ./app /app
RUN adduser -D user
RUN chown user:user -R /app/
RUN chmod +x /app
USER user
Docker-compose:
version: "3"
services:
app:
build:
context: .
ports:
- "8000:8000"
volumes:
- ./app /app
command: >
sh -c "python manage.py runserver 0.0.0.0:8000"
My steps:
django-admin startproject app
docker-compose build app
docker-compose up
Output:
Creating django-3-1-5_app_1 ... done
Attaching to django-3-1-5_app_1
app_1 | Watching for file changes with StatReloader
app_1 | Performing system checks...
app_1 |
app_1 | System check identified no issues (0 silenced).
app_1 |
app_1 | You have 18 unapplied migration(s). Your project may not work properly until you apply the migrations for app(s): admin, auth, contenttypes, sessions.
app_1 | Run 'python manage.py migrate' to apply them.
app_1 | January 06, 2021 - 21:03:42
app_1 | Django version 3.1.5, using settings 'app.settings'
app_1 | Starting development server at http://0.0.0.0:8000/
app_1 | Quit the server with CONTROL-C.
app_1 | [06/Jan/2021 21:03:52] "GET / HTTP/1.1" 200 16351
app_1 | [06/Jan/2021 21:03:52] "GET /static/admin/css/fonts.css HTTP/1.1" 200 423
app_1 | [06/Jan/2021 21:03:52] "GET /static/admin/fonts/Roboto-Regular-webfont.woff HTTP/1.1" 200 85876
app_1 | [06/Jan/2021 21:03:52] "GET /static/admin/fonts/Roboto-Light-webfont.woff HTTP/1.1" 200 85692
app_1 | [06/Jan/2021 21:03:52] "GET /static/admin/fonts/Roboto-Bold-webfont.woff HTTP/1.1" 200 86184
app_1 | Not Found: /favicon.ico
app_1 | [06/Jan/2021 21:03:52] "GET /favicon.ico HTTP/1.1" 404 1969
^CGracefully stopping... (press Ctrl+C again to force)
Stopping django-3-1-5_app_1 ... done
Browser: (screenshot of the running project omitted)
I'm thinking you have another kind of permission issue. Have you tried to build this image from macOS?
Try the following configuration:
System Preferences -> Security & Privacy -> Privacy tab -> Full Disk Access (in the list on the left) -> click the + -> add the Docker application.
I hope it helps you and other users.

For whatever reason changing my
docker-compose.yml from
version: "3"
services:
app:
build:
context: .
ports:
- "8000:8000"
volumes:
- ./app /app
command: >
sh -c "python manage.py runserver 0.0.0.0:8000"
To
version: "3"
services:
app:
build:
context: .
ports:
- "8000:8000"
volumes:
- ./app:/app
command: >
sh -c "python manage.py runserver 0.0.0.0:8000"
I modified - ./app /app to - ./app:/app and it fixed the issue. If anyone has an explanation, that would be great.
Thanks for the help.
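For what it's worth, a likely explanation (my reading of the Compose short volume syntax, not something confirmed in this thread): the short form is HOST_PATH:CONTAINER_PATH, so with a space instead of a colon Compose has no host-to-container mapping and the host's ./app directory is never bind-mounted writable over /app in the container. A minimal sketch of the intended mapping:
volumes:
  - ./app:/app        # bind-mount the host's ./app over /app in the container
  # - ./app:/app:ro   # an optional mode flag can follow a second colon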

For me, it worked after removing the : from the volume (docker-compose.yml).
Change from:
version: '3.7'
services:
  web:
    build: ./app
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./app/:/usr/src/app
    ports:
      - 8000:8000
    env_file:
      - ./.env.dev
to:
version: '3.7'
services:
  web:
    build: ./app
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./app//usr/src/app
    ports:
      - 8000:8000
    env_file:
      - ./.env.dev

Related

Unable to open unix socket in redis - Permission denied while firing up docker container

I am trying to fire up a separate Redis container which will work as a broker for Celery. Can someone help me understand why the Docker user is not able to open the UNIX socket? I have even tried making the user root, but it doesn't seem to work. Please find the Dockerfile, docker-compose file and redis.conf file below.
Dockerfile:
FROM centos/python-36-centos7
USER root
ENV DockerHOME=/home/django
RUN mkdir -p $DockerHOME
ENV PYTHONWRITEBYCODE 1
ENV PYTHONUNBUFFERED 1
ENV PATH=/home/django/.local/bin:$PATH
COPY ./oracle-instantclient18.3-basiclite-18.3.0.0.0-3.x86_64.rpm /home/django
COPY ./oracle-instantclient18.3-basiclite-18.3.0.0.0-3.x86_64.rpm /home/django
COPY ./oracle.conf /home/django
RUN yum install -y dnf
RUN dnf install -y libaio libaio-devel
RUN rpm -i /home/django/oracle-instantclient18.3-basiclite-18.3.0.0.0-3.x86_64.rpm && \
cp /home/django/oracle.conf /etc/ld.so.conf.d/ && \
ldconfig && \
ldconfig -p | grep client64
COPY ./requirements /home/django/requirements
WORKDIR /home/django
RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r ./requirements/development.txt
COPY . .
RUN chmod 777 /home/django
EXPOSE 8000
ENTRYPOINT ["/bin/bash", "-e", "docker-entrypoint.sh"]
Docker-compose file:
version: '3.8'
services:
  app:
    build: .
    volumes:
      - .:/django
      - cache:/var/run/redis
    image: app_name:django
    container_name: app_name
    ports:
      - 8000:8000
    depends_on:
      - db
      - redis
  db:
    image: postgres:10.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - 5432:5432
    environment:
      - POSTGRES_USER=app_name
      - POSTGRES_PASSWORD=app_password
      - POSTGRES_DB=app_db
    labels:
      description: "Postgres Database"
    container_name: app_name-db-1
  redis:
    image: redis:alpine
    command: redis-server /etc/redis/redis.conf
    restart: unless-stopped
    ports:
      - 6379:6379
    volumes:
      - ./redis/data:/var/lib/redis
      - ./redis/redis-server.log:/var/log/redis/redis-server.log
      - cache:/var/run/redis/
      - ./redis/redis.conf:/etc/redis/redis.conf
    container_name: redis
    healthcheck:
      test: redis-cli ping
      interval: 1s
      timeout: 3s
      retries: 30
volumes:
  postgres_data:
  cache:
  static-volume:
docker-entrypoint.sh:
# run migration first
python manage.py migrate
python manage.py preload_sites -uvasas -l
python manage.py preload_endpoints -uvasas -l
python manage.py collectstatic --noinput
#start celery
export C_FORCE_ROOT='true'
celery multi start 1 -A realm -l INFO -c4
# start the server
python manage.py runserver 0:8000
redis.conf
unixsocket /var/run/redis/redis.sock
unixsocketperm 770
logfile /var/log/redis/redis-server.log
I am new to Docker, so apologies if I have missed something obvious or have not followed best practices.
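Not an answer from the thread, but a quick way to narrow this down (assuming the redis-py package is installed in the app image, which a Redis-backed Celery broker needs anyway): check who owns the socket in the shared cache volume and try opening it directly from the app container.
# Hypothetical diagnostics against the compose file above:
docker-compose exec app ls -ln /var/run/redis/
docker-compose exec app python -c "import redis; print(redis.Redis(unix_socket_path='/var/run/redis/redis.sock').ping())"
If the ls shows a numeric group that the app user is not a member of, the unixsocketperm 770 setting in redis.conf would explain the denial.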

How to run periodic task using crontab inside docker container?

I'm building a Django + Angular web application which is deployed on a server using docker-compose, and I need to periodically run one Django management command. I searched SO a bit and tried the following:
docker-compose:
version: '3.7'
services:
  db:
    restart: always
    image: postgres:12-alpine
    environment:
      POSTGRES_DB: ${DB_NAME}
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    ports:
      - "5432:5432"
    volumes:
      - ./db:/var/lib/postgresql/data
  api:
    restart: always
    image: registry.gitlab.com/*******/price_comparison_tool/backend:${CI_COMMIT_REF_NAME:-latest}
    build: ./backend
    ports:
      - "8000:8000"
    volumes:
      - ./backend:/code
    environment:
      - SUPERUSER_PASSWORD=********
      - DB_HOST=db
      - DB_PORT=5432
      - DB_NAME=price_tool
      - DB_USER=price_tool
      - DB_PASSWORD=*********
    depends_on:
      - db
  web:
    restart: always
    image: registry.gitlab.com/**********/price_comparison_tool/frontend:${CI_COMMIT_REF_NAME:-latest}
    build:
      context: ./frontend
      dockerfile: Dockerfile
    volumes:
      - .:/frontend
    ports:
      - "80:80"
    depends_on:
      - api
volumes:
  backend:
  db:
Dockerfile (backend):
FROM python:3.8.3-alpine
ENV PYTHONUNBUFFERED 1
RUN apk add postgresql-dev gcc python3-dev musl-dev && pip3 install psycopg2
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
ADD entrypoint.sh /entrypoint.sh
ADD crontab_task /crontab_task
ADD run_boto.sh /run_boto.sh
RUN chmod a+x /entrypoint.sh
RUN chmod a+x /run_boto.sh
RUN /usr/bin/crontab /crontab_task
RUN pip install -r requirements.txt
ADD . /code/
RUN mkdir -p db
RUN mkdir -p logs
ENTRYPOINT ["/entrypoint.sh"]
CMD ["gunicorn", "-w", "3", "--timeout", "300", "--bind", "0.0.0.0:8000", "--access-logfile", "-", "price_tool_project.wsgi>
crontab_task:
*/1 * * * * /run_boto.sh > /proc/1/fd/1 2>/proc/1/fd/2
run_boto.sh:
#!/bin/bash -e
cd price_comparison_tool/backend/
python manage.py boto.py
But when I run docker-compose up --build I get the following messages in the terminal:
api_1 | /bin/ash: python manage.py boto > /proc/1/fd/1 2>/proc/1/fd/2: not found
api_1 | /bin/ash: /run_boto.sh: not found
The project structure is as follows:
.
├── backend
├── db
├── docker-compose.yml
└── frontend
Can anybody give me advice on how to fix this issue and run the management command periodically? Thanks in advance!
EDIT
I made the following update:
crontab_task:
*/1 * * * * /code/run_boto.sh > /proc/1/fd/1 2>/proc/1/fd/2
and now the run_boto.sh path is correct, but I get the following error:
/bin/ash: /code/run_boto.sh: Permission denied
If you are running this application as a non-root user, then the problem is that cron/crontab cannot be used by a non-root user.
You can take a look at this answer, which I got when I was facing the same problem.
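Another possibility worth checking (an assumption on my part, not something confirmed above): the chmod a+x /run_boto.sh in the Dockerfile only affects the copy baked into the image at /run_boto.sh, while the updated crontab entry runs /code/run_boto.sh, which comes from the ./backend:/code bind mount and keeps the host file's permissions. Two hedged workarounds:
# make the bind-mounted copy executable on the host (path assumed to be ./backend/run_boto.sh):
chmod +x backend/run_boto.sh
# or leave the permissions alone and let cron invoke it through a shell:
# */1 * * * * /bin/sh /code/run_boto.sh > /proc/1/fd/1 2>/proc/1/fd/2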

How can I connect from Django to Docker Redis Container using BackgroundScheduler?

I am currently working on a Django project which is supposed to send messages to a mobile app via websockets. I used Docker to put the Django project online. Now I want to send scheduled messages for the first time; for this I use APScheduler / django-apscheduler and try to save the jobs in my Redis container. But for some reason the connection is denied. Am I doing something fundamentally wrong, or is it failing somewhere else?
Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
RUN pip install -r requirements.txt
docker-compose.yml
version: '3'
services:
  redis:
    image: redis
    command: redis-server
    ports:
      - '6379:6379'
      - '6380:6380'
  web:
    build: .\experiencesampling
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:\code
    ports:
      - "8000:8000"
#  worker_channels:
#
#    build: .\experiencesampling
#    command: python manage.py runworker channels
#    volumes:
#      - .:\code
#    links:
#      - redis
  channels:
    build: .\experiencesampling
    command: daphne -p 8001 experiencesampling.asgi:application
    volumes:
      - .:\code
    ports:
      - "8001:8001"
    links:
      - redis
jobs.py (trying to connect to Redis); I have already tried 0.0.0.0, localhost and redis://redis for the "host":
jobstores = {
    'default': RedisJobStore(jobs_key='dispatched_trips_jobs', run_times_key='dispatched_trips_running', host='redis', port=6380)
}
executors = {
    'default': ThreadPoolExecutor(20),
    'processpool': ProcessPoolExecutor(5)
}
job_defaults = {
    'coalesce': False,
    'max_instances': 3
}
#jobStore.remove_all_jobs()
scheduler = BackgroundScheduler(jobstores=jobstores, executors=executors, job_defaults=job_defaults)
register_events(scheduler)
scheduler.start()
print("Scheduler started!")
Error (appears multiple times)
web_1 |
web_1 | Scheduler started!
web_1 | Error getting due jobs from job store 'default': Error 111 connecting to redis:6380. Connection refused.
web_1 | System check identified no issues (0 silenced).
web_1 | July 11, 2020 - 19:00:30
channels_1 | 2020-07-11 19:00:29,866 WARNING Error getting due jobs from job store 'default': Error 111 connecting to redis:6380. Connection refused.
web_1 | Django version 3.0.8, using settings 'experiencesampling.settings'
web_1 | Starting ASGI/Channels version 2.3.1 development server at http://0.0.0.0:8000/
web_1 | Quit the server with CONTROL-C.
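No answer is quoted for this one, but one detail stands out (my assumption, consistent with the port-mapping answer further down this page): inside the Compose network the stock redis image listens on its default port 6379; publishing '6380:6380' only exposes an extra host port and does not make the server listen on 6380. A quick hedged check, assuming redis-py is installed in the web image (the job store already depends on it):
docker-compose exec web python -c "import redis; print(redis.Redis(host='redis', port=6379).ping())"
If that prints True, pointing RedisJobStore at host='redis', port=6379 should let the scheduler reach the container.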

Django cant connect to redis in docker

Sorry for my English. I have a Django project in which I want to use Celery for background tasks, and now I need to set up the Docker configuration for this library. This is my Dockerfile:
FROM python:3
MAINTAINER Alex2
RUN apt-get update
# Install wkhtmltopdf
RUN curl -L#o wk.tar.xz https://downloads.wkhtmltopdf.org/0.12/0.12.4/wkhtmltox-0.12.4_linux-generic-amd64.tar.xz \
&& tar xf wk.tar.xz \
&& cp wkhtmltox/bin/wkhtmltopdf /usr/bin \
&& cp wkhtmltox/bin/wkhtmltoimage /usr/bin \
&& rm wk.tar.xz \
&& rm -r wkhtmltox
RUN apt-get install -y cron
# for celery
ENV APP_USER user
ENV APP_ROOT /src
RUN groupadd -r ${APP_USER} \
&& useradd -r -m \
--home-dir ${APP_ROOT} \
-s /usr/sbin/nologin \
-g ${APP_USER} ${APP_USER}
# create directory for application source code
RUN mkdir -p /usr/django/app
COPY requirements.txt /usr/django/app/
WORKDIR /usr/django/app
RUN pip install -r requirements.txt
This is my docker-compose.dev:
version: '2.0'
services:
  web:
    build: .
    container_name: api_dev
    image: img/api_dev
    volumes:
      - .:/usr/django/app/
      - ./static:/static
    expose:
      - "8001"
    env_file: env/dev.env
    command: bash django_run.sh
  nginx:
    build: nginx
    container_name: ng_dev
    image: img/ng_dev
    ports:
      - "8001:8001"
    volumes:
      - ./nginx/dev_api.conf:/etc/nginx/conf.d/api.conf
      - .:/usr/django/app/
      - ./static:/static
    depends_on:
      - web
    links:
      - web:web
  db:
    image: postgres:latest
    container_name: pq01
    ports:
      - "5432:5432"
  redis:
    image: redis:latest
    container_name: rd01
    command: redis-server
    ports:
      - "8004:8004"
  celery:
    build: .
    container_name: cl01
    command: celery worker --app=myapp.celery
    volumes:
      - .:/usr/django/app/
    links:
      - db
      - redis
and I get this error:
cl01 | User information: uid=0 euid=0 gid=0 egid=0
cl01 |
cl01 | uid=uid, euid=euid, gid=gid, egid=egid,
cl01 | [2018-07-31 16:40:00,207: ERROR/MainProcess] consumer: Cannot connect to redis://redis:8004/0: Error 111 connecting to redis:8004. Connection refused..
cl01 | Trying again in 2.00 seconds...
cl01 |
cl01 | [2018-07-31 16:40:02,211: ERROR/MainProcess] consumer: Cannot connect to redis://redis:8004/0: Error 111 connecting to redis:8004. Connection refused..
cl01 | Trying again in 4.00 seconds...
cl01 |
cl01 | [2018-07-31 16:40:06,217: ERROR/MainProcess] consumer: Cannot connect to redis://redis:8004/0: Error 111 connecting to redis:8004. Connection refused..
cl01 | Trying again in 6.00 seconds...
I can't understand why it doesn't connect. My project settings file:
CELERY_BROKER_URL = 'redis://redis:8004/0'
CELERY_RESULT_BACKEND = 'redis://redis:8004/0'
Everything looks good to me, but maybe I haven't added some setting in some file. Please help me solve this problem.
I think the port mapping causes the problem. So, change the redis settings in the docker-compose.dev file as follows (the ports option removed):
redis:
  image: redis:latest
  container_name: rd01
  command: redis-server
and in your settings.py
CELERY_BROKER_URL = 'redis://redis:6379/0'
CELERY_RESULT_BACKEND = 'redis://redis:6379/0'
You don't have to map the ports unless you are using them in your local environment.
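As a hedged sanity check after making that change (assuming the stack from docker-compose.dev is up): confirm the server answers on its default port inside the Compose network before retrying Celery.
docker-compose -f docker-compose.dev exec redis redis-cli ping
docker-compose -f docker-compose.dev exec celery python -c "import redis; print(redis.Redis(host='redis', port=6379).ping())"
The second command assumes redis-py is installed in the celery image, which the redis:// broker URL requires anyway.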

(django) (docker) Django webserver wont start

I do
docker-compose up
I get
$ docker-compose up
Starting asynchttpproxy_postgres_1
Starting asynchttpproxy_web_1
Attaching to asynchttpproxy_postgres_1, asynchttpproxy_web_1
postgres_1 | LOG: database system was interrupted; last known up at 2017-05-01 18:52:29 UTC
postgres_1 | LOG: database system was not properly shut down; automatic recovery in progress
postgres_1 | LOG: invalid record length at 0/150F410: wanted 24, got 0
postgres_1 | LOG: redo is not required
postgres_1 | LOG: MultiXact member wraparound protections are now enabled
postgres_1 | LOG: database system is ready to accept connections
web_1 | Performing system checks...
web_1 |
web_1 | System check identified no issues (0 silenced).
My Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
My docker-compose.yml
postgres:
  image: postgres:latest
  volumes:
    - ./code/
  env_file:
    - .env
  volumes:
    - /usr/src/app/static
  expose:
    - '5432'
web:
  build: .
  command: python3 manage.py runserver 0.0.0.0:8000
  env_file:
    - .env
  volumes:
    - .:/code
  links:
    - postgres
  expose:
    - '8000'
As you can see, the Django server won't start. What am I doing wrong? Thanks in advance.
First, try running in another terminal:
docker ps
to check whether your server really did not start.
Also check whether your Postgres database is ready when your Django application starts; if not, try running a bash script that waits until the Postgres connection is available before initializing the Django container.
wait-bd.sh
#!/bin/bash
while true; do
    COUNT_PG=`psql postgresql://username:password@localhost:5432/name_db -c '\l \q' | grep "name_db" | wc -l`
    if ! [ "$COUNT_PG" -eq "0" ]; then
        break
    fi
    echo "Waiting Database Setup"
    sleep 10
done
and in docker-compose.yml add the command key to the django container:
web:
  build: .
  command: /bin/bash wait-bd.sh && python3 manage.py runserver 0.0.0.0:8000
This script waits for your database setup, and only then does the Django container start up.
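One caveat about that command line, offered as an assumption rather than a correction from the thread: Compose does not run the short command form through a shell, so the && would be passed to wait-bd.sh as an argument instead of chaining the two commands. A minimal sketch that wraps both in a shell:
web:
  build: .
  command: bash -c "bash wait-bd.sh && python3 manage.py runserver 0.0.0.0:8000"
This also assumes wait-bd.sh sits at the project root, which the .:/code volume mounts into the container's working directory.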