How to fix Django logger Permission denied in Docker container?

I'm trying to run my Django project in Docker.
I'm using a logger to write to a .txt file, but I'm getting a permission error:
Django can't write to AdminFileDebug.txt.
This is the settings.py code:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'adminsDebug': {
            'level': 'DEBUG',
            'class': 'logging.FileHandler',
            'filename': 'log/AdminFileDebug.txt',
            'formatter': 'verbose'
            # 'filename': '/path/to/django/debug.log',
        },
    },
    'loggers': {
        'AdminsDebug': {
            'handlers': ['adminsDebug', 'console'],
            'level': 'DEBUG',
            'propagate': True,
        },
    },
    'formatters': {
        'verbose': {
            'format': '{levelname} {asctime} {module} {process:d} {thread:d} {message}',
            'style': '{',
        },
        'simple': {
            'format': '{levelname} {asctime} {message}',
            'style': '{',
        },
    },
}
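As an aside (not part of the original post): a relative 'filename' like 'log/AdminFileDebug.txt' is resolved against the process's current working directory, and logging.FileHandler does not create missing directories. A common pattern, sketched below assuming the usual BASE_DIR in settings.py, is to anchor the path and create the directory up front:

import os

LOG_DIR = os.path.join(BASE_DIR, 'log')
os.makedirs(LOG_DIR, exist_ok=True)  # FileHandler won't create this directory itself

# then point the handler at an absolute path:
# 'filename': os.path.join(LOG_DIR, 'AdminFileDebug.txt'),

Note also that the 'AdminsDebug' logger lists a 'console' handler that is never defined under 'handlers', which will raise its own configuration error once the permission problem is solved.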
This is the docker compose file for my configuration:
version: '3.9'
services:
  db:
    image: postgres
    volumes:
      - ./data/db:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=test
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=test
  app:
    build:
      context: .
    command: sh -c "python manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/app
    ports:
      - 8000:8000
    environment:
      - DJANGO_DEBUG=1
      - POSTGRES_NAME=test
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=test
    depends_on:
      - db
Dockerfile
FROM python:3.10-alpine3.16
ENV PYTHONUNBUFFERED 1
COPY requirements.txt /requirements.txt
RUN apk add --upgrade --no-cache build-base linux-headers && \
    pip install --upgrade pip && \
    pip install -r /requirements.txt
COPY app/ /app
WORKDIR /app
RUN adduser --disabled-password --no-create-home django
USER django
CMD ["uwsgi", "--socket", ":9000", "--workers", "4", "--master", "--enable-threads", "--module", "app.wsgi"]
When I run docker compose up I get this error:
PermissionError: [Errno 13] Permission denied: '/app/log/AdminFileDebug.txt'
Any solutions?
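Not an answer from the original post, but a likely fix given the Dockerfile above: adduser/USER django drop privileges, while /app is copied in as root, so the django user cannot create log/AdminFileDebug.txt. A sketch of one fix is to create the log directory and hand it to the unprivileged user before the USER instruction:

# in the Dockerfile, before "USER django" (sketch, not from the original post)
RUN mkdir -p /app/log && \
    chown -R django:django /app/log

Keep in mind that the compose file bind-mounts .:/app over the image contents, so at runtime the ownership of the host's ./log directory is what actually counts; it must be writable by the UID the container's django user maps to (a blunt host-side workaround is mkdir -p log && chmod a+w log).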

Related

How to set up Gunicorn to display Django exceptions with Heroku?

Edit: added in Django logging; it still does not push errors from Django to gunicorn.
I have a Django project deployed to Heroku in a worker using gunicorn. I have included all the flags/configurations that I am using below. The issue is that the logging in Heroku does not display any output from Django, and my project is running into some 502 errors that I am unsure how to debug. Gunicorn just is not displaying any logs whatsoever from Django/Python.
Procfile:
release: python manage.py migrate backend
web: bin/boot
worker: gunicorn backend.wsgi:application -b 0.0.0.0:$PORT -c /app/gunicorn.conf.py --log-file -
gunicorn.conf.py
# gunicorn.conf.py
# Non logging stuff
workers = 3
# Whether to send Django output to the error log
capture_output = True
# How verbose the Gunicorn error logs should be
loglevel = "debug"
Logging in settings.py
LOGGING = {
    'version': 1,
    'disable_existing_loggers': True,
    'formatters': {
        'verbose': {
            'format': '%(asctime)s %(levelname)s [%(name)s:%(lineno)s] %(module)s %(process)d %(thread)d %(message)s'
        }
    },
    'handlers': {
        'gunicorn': {
            'level': 'DEBUG',
            'class': 'logging.handlers.RotatingFileHandler',
            'formatter': 'verbose',
            'filename': '/opt/djangoprojects/reports/bin/gunicorn.errors',
            'maxBytes': 1024 * 1024 * 100,  # 100 mb
        }
    },
    'loggers': {
        'gunicorn.errors': {
            'level': 'DEBUG',
            'handlers': ['gunicorn'],
            'propagate': True,
        },
    }
}
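An observation on this config (not from the original question): 'disable_existing_loggers': True silences Django's own loggers, and a logger named 'gunicorn.errors' only receives records that code explicitly sends to that logger name, so Django's output never reaches the file. On Heroku, heroku logs also only captures stdout/stderr, not files on the dyno's ephemeral filesystem. A minimal console-based sketch that would surface Django records:

import sys

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,  # keep Django's built-in loggers alive
    'formatters': {
        'verbose': {
            'format': '%(asctime)s %(levelname)s [%(name)s:%(lineno)s] %(message)s'
        }
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'stream': sys.stdout,  # Heroku's log router collects stdout/stderr
            'formatter': 'verbose',
        },
    },
    'root': {
        # catch-all root logger: Django and app records propagate here
        'handlers': ['console'],
        'level': 'INFO',
    },
}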

Django/Docker/Logging: ValueError: Unable to configure handler 'files_debug'

I'm trying to implement logging in my Django project (Django/Postgresql/Docker/Celery) but I get an error when I deploy the project on a Linux server (it works locally):
FileNotFoundError: [Errno 2] No such file or directory: '/usr/src/app/logs/debug.log'
I've read about a solution in this SO post: Django: Unable to configure handler 'syslog'
but first, I do not even understand why this solution should work, and I did not manage to implement it in my project.
I use Docker/docker-compose, so I installed netcat via the Dockerfile and then tried to RUN nc -lU /logs/debug.log /logs/info.log, but it failed (no such file or directory).
Dockerfile
# Pull the official base image
FROM python:3.8.3-alpine
# Set a work directory
WORKDIR /usr/src/app
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apk update && apk add postgresql-dev gcc g++ python3-dev musl-dev netcat-openbsd
# create UNIX socket for logs files
#RUN nc -lU /logs/debug.log /logs/info.log
...
settings.py
...
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'loggers': {
        'django': {
            'handlers': ['files_info', 'files_debug'],
            'propagate': True,
            'level': 'INFO',
        },
    },
    'handlers': {
        'files_info': {
            'level': 'INFO',
            'class': 'logging.FileHandler',
            'filename': './logs/info.log',
            'formatter': 'mereva',
        },
        'files_debug': {
            'level': 'DEBUG',
            'class': 'logging.FileHandler',
            'filename': './logs/debug.log',
            'formatter': 'mereva',
        },
    },
    'formatters': {
        'mereva': {
            'format': '{levelname} {asctime} {module} {message}',
            'style': '{',
        }
    },
}
...
In fact, the logs folder and the .log files were excluded by my .gitignore file.
Switching to a logsfiles folder with debug and info files (without extensions) resolved the problem, and I was able to run the container.
Nevertheless, nothing is written to these files...
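An addition, not from the original post, but the underlying issue generalizes: logging.FileHandler opens its file at configuration time and does not create missing parent directories, so if the directory is absent (gitignored, or never copied into the image), dictConfig fails with exactly this kind of error. A sketch of a common guard in settings.py, assuming BASE_DIR is defined as usual:

import os

LOG_DIR = os.path.join(BASE_DIR, 'logs')
os.makedirs(LOG_DIR, exist_ok=True)  # FileHandler won't create this for you

# and reference absolute paths in the handler config:
# 'filename': os.path.join(LOG_DIR, 'debug.log'),

As for nothing being written: the only logger configured here is 'django' at level INFO, so debug.log will only ever receive INFO-and-above records from Django itself; application modules logging under their own names won't reach these handlers unless a root or per-app logger is added.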

CannotPullContainerError on Deploying Multi-container App on ElasticBeanstalk

I have a multi-container app which I want to deploy on ElasticBeanstalk. Below are my files.
Dockerfile
FROM python:2.7
WORKDIR /app
ADD . /app
RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y \
        apt-utils \
        git \
        python \
        python-dev \
        libpcre3 \
        libpcre3-dev \
        python-setuptools \
        python-pip \
        nginx \
        supervisor \
        default-libmysqlclient-dev \
        python-psycopg2 \
        libpq-dev \
        sqlite3 && \
    pip install -U pip setuptools && \
    rm -rf /var/lib/apt/lists/*
RUN pip install -r requirements.txt
EXPOSE 8000
RUN chmod +x entry_point.sh
docker-compose.yml
version: "2"
services:
db:
restart: always
container_name: docker_test-db
image: postgres:9.6
expose:
- "5432"
mem_limit: 10m
environment:
- POSTGRES_NAME=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- POSTGRES_DB=docker_test
redis:
restart: always
image: redis:3.0
expose:
- "6379"
mem_limit: 10m
web:
# replace username/repo:tag with your name and image details
restart: always
build: .
image: docker_test
container_name: docker_test-container
ports:
- "8000:8000"
environment:
- DATABASE=db
- POSTGRES_NAME=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- POSTGRES_DB=docker_test
mem_limit: 500m
depends_on:
- db
- redis
entrypoint: ./entry_point.sh
command: gunicorn docker_test.wsgi:application -w 2 -b :8000 --timeout 120 --graceful-timeout 120 --worker-class gevent
celery:
image: docker_test
container_name: docker_test-celery
command: celery -A docker_test worker -l info
links:
- db
- redis
mem_limit: 10m
depends_on:
- web
cbeat:
image: docker_test
container_name: docker_test-cbeat
command: celery beat --loglevel=info
links:
- db
- redis
mem_limit: 10m
depends_on:
- web
It works fine when I run it on my local system, but when I upload it to Elastic Beanstalk, it gives me the following error:
ECS task stopped due to: Essential container in task exited. (celery:
db: cbeat: web: CannotPullContainerError: API error (404): pull access
denied for docker_test, repository does not exist or may require
'docker login' redis: )
I transformed docker-compose.yml to Dockerrun.aws.json using container-transform. For the above file, my Dockerrun.aws.json is the following:
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "command": [
        "celery",
        "beat",
        "--loglevel=info"
      ],
      "essential": true,
      "image": "docker_test",
      "links": [
        "db",
        "redis"
      ],
      "memory": 10,
      "name": "cbeat"
    },
    {
      "command": [
        "celery",
        "-A",
        "docker_test",
        "worker",
        "-l",
        "info"
      ],
      "essential": true,
      "image": "docker_test",
      "links": [
        "db",
        "redis"
      ],
      "memory": 10,
      "name": "celery"
    },
    {
      "environment": [
        {
          "name": "POSTGRES_NAME",
          "value": "postgres"
        },
        {
          "name": "POSTGRES_USER",
          "value": "postgres"
        },
        {
          "name": "POSTGRES_PASSWORD",
          "value": "postgres"
        },
        {
          "name": "POSTGRES_DB",
          "value": "docker_test"
        }
      ],
      "essential": true,
      "image": "postgres:9.6",
      "memory": 10,
      "name": "db"
    },
    {
      "essential": true,
      "image": "redis:3.0",
      "memory": 10,
      "name": "redis"
    },
    {
      "command": [
        "gunicorn",
        "docker_test.wsgi:application",
        "-w",
        "2",
        "-b",
        ":8000",
        "--timeout",
        "120",
        "--graceful-timeout",
        "120",
        "--worker-class",
        "gevent"
      ],
      "entryPoint": [
        "./entry_point.sh"
      ],
      "environment": [
        {
          "name": "DATABASE",
          "value": "db"
        },
        {
          "name": "POSTGRES_NAME",
          "value": "postgres"
        },
        {
          "name": "POSTGRES_USER",
          "value": "postgres"
        },
        {
          "name": "POSTGRES_PASSWORD",
          "value": "postgres"
        },
        {
          "name": "POSTGRES_DB",
          "value": "docker_test"
        }
      ],
      "essential": true,
      "image": "docker_test",
      "memory": 500,
      "name": "web",
      "portMappings": [
        {
          "containerPort": 8000,
          "hostPort": 8000
        }
      ]
    }
  ],
  "family": "",
  "volumes": []
}
How can I resolve this problem?
Please push the image "docker_test" to either Docker Hub or ECR for Beanstalk to pull the image from. Currently it's only on your local machine, and the ECS agent doesn't know about it.
Tag and push the docker_test image to a registry like Docker Hub or ECR (example commands below).
Update the image repo URL in Dockerrun.aws.json.
Allow Beanstalk to pull the image.
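For example, with ECR (a sketch, not from the original answer; the account ID and region are placeholders you'd substitute):

# authenticate Docker to your ECR registry (AWS CLI v2 syntax)
aws ecr get-login-password --region us-east-1 | \
    docker login --username AWS --password-stdin <account-id>.dkr.ecr.us-east-1.amazonaws.com

# tag the locally built image with the repository URL and push it
docker tag docker_test:latest <account-id>.dkr.ecr.us-east-1.amazonaws.com/docker_test:latest
docker push <account-id>.dkr.ecr.us-east-1.amazonaws.com/docker_test:latest

The image references in Dockerrun.aws.json would then use the same full repository URL.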
I'm not that familiar with EB, but I am pretty familiar with ECR and ECS.
I usually get that error when I try to pull an image from an empty repo on ECR; in other words, the ECR repo was created but you haven't pushed any docker images to it yet.
This can also happen when you try to pull an image from ECR and it can't find the version number of the image in the tag. I suggest that you change your docker-compose.yml file to use the latest version of the images. This means that everywhere you mention the image docker_test you will need to suffix it with ":latest"
Something like this:
image: docker_test:latest
I will post the whole docker-compose.yml I made for you at the end of this reply.
I would suggest that you have a look at this doc: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.container.console.html and see the section "Using Images from an Amazon ECR Repository", which explains how you can resolve the docker login issue.
I hope that helps. Please reply if you have any questions regarding this.
version: "2"
services:
db:
restart: always
container_name: docker_test-db
image: postgres:9.6
expose:
- "5432"
mem_limit: 10m
environment:
- POSTGRES_NAME=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- POSTGRES_DB=docker_test
redis:
restart: always
image: redis:3.0
expose:
- "6379"
mem_limit: 10m
web:
# replace username/repo:tag with your name and image details
restart: always
build: .
image: docker_test:latest
container_name: docker_test-container
ports:
- "8000:8000"
environment:
- DATABASE=db
- POSTGRES_NAME=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- POSTGRES_DB=docker_test
mem_limit: 500m
depends_on:
- db
- redis
entrypoint: ./entry_point.sh
command: gunicorn docker_test.wsgi:application -w 2 -b :8000 --timeout 120 --graceful-timeout 120 --worker-class gevent
celery:
image: docker_test
container_name: docker_test-celery
command: celery -A docker_test worker -l info
links:
- db
- redis
mem_limit: 10m
depends_on:
- web
cbeat:
image: docker_test:latest
container_name: docker_test-cbeat
command: celery beat --loglevel=info
links:
- db
- redis
mem_limit: 10m
depends_on:
- web

Get Celery logs in the same place as 'normal' Django logs and formatted by a Django formatter

I'm trying to find a way to gather the logs generated from inside async functions called through Celery into the same handler that I use to log 'non-celery' Django functions.
I have created a dummy function that sends a log every 3 seconds:
import logging
from datetime import timedelta

from celery.decorators import periodic_task

logr = logging.getLogger(__name__)

@periodic_task(run_every=timedelta(seconds=3))
def every_3_seconds():
    logr.debug("Hello world: Running (debug) periodic task!")
    logr.info("Hello world: Running (info) periodic task!")
I have also tried something like this:
from celery.utils.log import get_task_logger

clogger = get_task_logger(__name__)  # Celery's per-task logger

@periodic_task(run_every=timedelta(seconds=3))
def every_3_seconds():
    clogger.debug("HelloCelery: Running (debug) periodic task!")
    clogger.info("HelloCelery: Running (info) periodic task!")
The log settings are as follows (the commented-out lines are my previous attempts):
CELERYD_HIJACK_ROOT_LOGGER = False
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'formatter': 'verbose',
        },
    },
    'formatters': {
        'verbose': {
            'format': 'HOPLA123 %(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s'
        }
    },
    'loggers': {
        '': {
            'handlers': ['console'],
            'level': os.getenv('DJANGO_LOG_LEVEL', 'DEBUG'),
        },
        # 'django': {
        #     'handlers': ['console'],
        #     'level': os.getenv('DJANGO_LOG_LEVEL', 'DEBUG'),
        # },
        # # Logger for the myappApp
        # # Use: logr = logging.getLogger(__name__) in myappApp
        # #      logr.debug("....")
        # 'myappApp': {
        #     'handlers': ['console'],
        #     'level': os.getenv('DJANGO_LOG_LEVEL', 'DEBUG'),
        #     'propagate': False,  # To fix duplicate log issue.
        # },
        # # Default Python Logger
        # 'root': {
        #     'handlers': ['console'],
        #     'level': os.getenv('DJANGO_LOG_LEVEL', 'DEBUG'),
        # },
        # 'celery': {
        #     'handlers': ['console'],
        #     'level': os.getenv('DJANGO_LOG_LEVEL', 'DEBUG'),
        #     'propagate': True,
        # },
    },
}
I have a supervisord config that outputs each of the logs in the suitable files:
[program:gunicorn_django]
environment=PYTHONPATH=/opt/myapp/myappServer/myappServer
command = /opt/myapp/venv/bin/gunicorn wsgi -b 0.0.0.0:8000 --timeout 90 --access-logfile /dev/stdout --error-logfile /dev/stderr
directory = /opt/myapp/myappServer
user = root
autostart=true
autorestart=true
stdout_logfile=/var/log/gunicorn.log
stderr_logfile=/var/log/gunicorn.err
[program:redis]
command=redis-server
autostart=true
autorestart=true
stdout_logfile=/var/log/redis.log
stderr_logfile=/var/log/redis.err
[program:django-celery]
command=/opt/myapp/venv/bin/python ./manage.py celery --app=myappServer.celeryapp:app worker -B --loglevel=INFO
directory=/opt/myapp/myappServer
numprocs=1
stdout_logfile=/var/log/celery.log
stderr_logfile=/var/log/celery.err
autostart=true
autorestart=true
startsecs=10
My Hello world logs inside a Celery function are logged into /var/log/celery.err, as specified in the Celery docs:
If no logfile is specified, stderr is used.
I would like to have them in the /var/log/gunicorn.log, mainly to be formatted by the verbose formatter (in order to be correctly interpreted by a LogStash instance afterwards). Is there something wrong in the definitions of my loggers?
Answering my own question:
The code from my question actually works.
It seems that:
CELERYD_HIJACK_ROOT_LOGGER = False
did the trick.
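For reference (an addition, not part of the original answer): CELERYD_HIJACK_ROOT_LOGGER = False stops Celery from replacing the root logger's handlers, so records from tasks propagate up to the '' logger configured above. In newer Celery versions, a common alternative is to take over logging setup explicitly via the setup_logging signal; a sketch reusing Django's LOGGING dict:

import logging.config

from celery.signals import setup_logging

@setup_logging.connect
def configure_logging(*args, **kwargs):
    # Celery skips its own logging setup when this signal has a receiver;
    # reuse Django's LOGGING so workers and the web process share
    # the same handlers and formatters.
    from django.conf import settings
    logging.config.dictConfig(settings.LOGGING)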

django-cms refusing to publish a specific page in production - where should I start debugging?

I have a small problem in my production CMS. One of the pages (there are about 50) refuses to be published. That is: if I click on "publish" in the admin interface or use the publish_page method, I get no errors, and on the page list view there's a green check by this page. But when I browse to it, I get a nice 404 error, and if I refresh the page list view, the green check turns into a red sign (not published).
I don't know where should I start debugging this issue.
>>> from cms.api import publish_page
>>> p = Page.objects.get(pk__exact=66)
>>> r = User.objects.get(pk=2)
>>> p2 = publish_page(p, r)
>>> p2
<cms.models.pagemodel.Page object at 0x3561910>
>>> p2.is_public_published()
True
There are no error traces in my /var/log/httpd/access_log nor /var/log/httpd/error_log (apart of the 404 warning). These are my logging settings:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'verbose': {
            'format': '%(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s'
        },
        'simple': {
            'format': '%(levelname)s %(message)s'
        },
    },
    'filters': {
        'require_debug_false': {
            '()': 'django.utils.log.RequireDebugFalse'
        }
    },
    'handlers': {
        'mail_admins': {
            'level': 'ERROR',
            'filters': ['require_debug_false'],
            'class': 'django.utils.log.AdminEmailHandler'
        },
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
            'formatter': 'verbose'
        },
    },
    'loggers': {
        'django': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': True,
        },
        'django.request': {
            'handlers': ['mail_admins'],
            'level': 'DEBUG',
            'propagate': True,
        },
        'department.models': {
            'handlers': ['console'],
            'level': 'DEBUG'
        },
    }
}
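One thing worth noting (not from the original post): with this config, django.request errors go only to mail_admins, and the require_debug_false filter means they are emailed rather than written anywhere, so a failing ajax publish request can leave no trace in the httpd logs. A small sketch that also routes request errors to the console handler, whose output ends up in the WSGI/Apache error log:

# after the LOGGING dict above; send request errors to the console too
LOGGING['loggers']['django.request'] = {
    'handlers': ['mail_admins', 'console'],
    'level': 'DEBUG',
    'propagate': True,
}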
Could you please suggest where I should start debugging? Thanks!
Roberto
UPDATE:
My virtual environment has the following installed:
Django - 1.5.4 - active
PIL - 1.1.7 - active
Pillow - 2.2.1 - active
Pygments - 1.6 - active
Python - 2.7.3 - active development (/usr/lib/python2.7/lib-dynload)
South - 0.8.2 - active
argparse - 1.2.1 - active development (/usr/lib/python2.7)
bpython - 0.12 - active
cmsplugin-news - 0.4.2 - active
django-autoslug - 1.7.1 - active
django-ckeditor - 4.0.2 - active
django-classy-tags - 0.4 - active
django-cms - 2.4.2 - active
django-country-dialcode - 0.4.8 - active
django-extensions - 1.2.2 - active
django-guardian - 1.1.1 - active
django-hvad - 0.3 - active
django-modeltranslation - 0.6.1 - active
django-mptt - 0.5.2 - active
django-reusableapps - 0.1.1 - active
django-reversion - 1.7.1 - active
django-sekizai - 0.7 - active
djangocms-text-ckeditor - 1.0.10 - active
html5lib - 1.0b3 - active
pip - 1.2.1 - active
psycopg2 - 2.5.1 - active
python-ldap - 2.4.13 - active
python-magic - 0.4.6 - active
pytz - 2013.7 - active
setuptools - 1.1.6 - active
six - 1.4.1 - active
switch2bill-common - 2.8.1 - active
wsgiref - 0.1.2 - active development (/usr/lib/python2.7)
The list-view of pages in Django-CMS is largely powered by ajax requests.
I would take a look at that view using Firebug to see if any of the publishing functions are returning 500 errors from ajax requests that won't cause the view itself to throw a 500.
I've had plugins get corrupted, which in turn caused publishing to fail. In the list-view of pages, the pages appeared to publish correctly, as checkboxes got checked, etc., but in Firebug those ajax POST requests were returning 500 errors.