Redis Server inside Docker Container with nginx + redis_pass - django

I'm developing a simple chat application (based on django-private-chat2) with Django and django-channels. I want the application to be fully containerized and to use nginx for routing inside the container.
So I'm trying to connect through a WebSocket to a redis-server running inside a Docker container. I've tried many things (see below) but still can't get it to work. It may well be that my general approach is wrong.
EDIT:
As @Zeitounator suggested, I've created an MCVE illustrating the problem. It's on my secondary GitHub here; it contains all configuration files for a minimal example (Dockerfile, docker-compose.yaml, nginx.conf, redis.conf, supervisord.ini, ...). The folder 'tests' also contains two tests illustrating that it works locally but not inside the container. The tests are to be run from the root directory.
I've added the important config code at the end.
I still believe my nginx configuration might be off, any help appreciated!
Here is what I've got so far.
The redis-server and the WebSocket connection work outside the Docker container.
Inside the Docker container I compile and run nginx with the 'HTTP Redis' module (here).
This module is loaded via 'load_module *.so' inside nginx.conf; I've verified that it is loaded.
I've configured the redis-server inside the Docker container with 'bind 0.0.0.0' and 'protected-mode no'.
Inside nginx I then route all '/' traffic to a Django application running on port 8000.
I route all traffic for 'chat_ws/' (from the WebSocket) to the redis-server at 127.0.0.1:6379 (via nginx redis_pass).
I've verified that the routing works properly (returning 404 from nginx for chat_ws addresses works).
I can connect to the redis-server through redis-cli from my machine when I use 'redis-cli -h DOCKER_CONTAINER_IP', so the redis_pass setup also seems to work.
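For instance, this sanity check from the host succeeds (DOCKER_CONTAINER_IP as above):
redis-cli -h DOCKER_CONTAINER_IP -p 6379 ping   # replies PONG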
In the Django settings I've specified CHANNEL_LAYERS and set the Redis backend host to 127.0.0.1:6379 (which again works completely fine outside the Docker container).
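A minimal sketch of such a CHANNEL_LAYERS setting (illustrative values; my project's may differ slightly):

# settings.py -- channels_redis backend pointing at the local redis-server
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("127.0.0.1", 6379)],
        },
    },
}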
But if I open the webpage (served through the Docker container) in my browser, everything works except the WebSocket connection to the redis-server.
I'm especially confused that the redis-cli connection to the container works fine but the WebSocket does not, even though it does work locally (outside the container).
(What I've been thinking:
maybe a WebSocket connection with redis_pass through nginx is generally problematic?
maybe the 'HTTP Redis' version is too old? But how do I debug this, since I don't see any log output of the module on nginx stdout.)
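For comparison, my understanding is that a WebSocket-capable nginx proxy block normally needs the Upgrade/Connection headers and points at the ASGI app rather than at Redis directly; a sketch of such a block (not something my config currently does):

# Sketch: proxy the WebSocket to the ASGI upstream and let Django talk to Redis
location /chat_ws {
    proxy_pass http://asgi;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $http_host;
}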
Any debugging recommendation is appreciated, as are ideas for different approaches. Also tell me if I should provide further information or share specific config files. Thanks in advance!
Dockerfile:
Installs requirements, compiles nginx with redis_pass, starts everything.
FROM nginx:alpine AS builder
ADD ./requirements.txt /app/requirements.txt
RUN apk add --update --no-cache python3 py3-pip && ln -sf python3 /usr/bin/python
RUN set -ex \
    && apk add --no-cache --virtual .build-deps postgresql-dev build-base python3-dev python2-dev libffi-dev \
    && python3 -m venv /env \
    && python3 -m pip install --upgrade pip \
    && python3 -m pip install --no-cache --upgrade pip setuptools \
    && python3 -m ensurepip \
    && python3 -m pip install -r /app/requirements.txt \
    && runDeps="$(scanelf --needed --nobanner --recursive /env \
        | awk '{ gsub(/,/, "\nso:", $2); print "so:" $2 }' \
        | sort -u \
        | xargs -r apk info --installed \
        | sort -u)" \
    && apk add --virtual rundeps $runDeps \
    && apk del .build-deps
RUN apk add --no-cache build-base libressl-dev libffi-dev
# rest of code is mounted to the docker container in docker-compose ( only in dev, for local debugging )
WORKDIR /app
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
# Install supervisord to start redis-server, gunicorn and nginx simultaneously
RUN apk add --no-cache supervisor
RUN apk add --no-cache redis
# Packages needed for the custom nginx compilation
RUN apk add --no-cache --virtual .build-deps \
    gcc \
    libc-dev \
    make \
    openssl-dev \
    pcre-dev \
    zlib-dev \
    linux-headers \
    curl \
    gnupg \
    libxslt-dev \
    gd-dev \
    geoip-dev
RUN wget "http://nginx.org/download/nginx-${NGINX_VERSION}.tar.gz" -O nginx.tar.gz
# compile with HTTP redis for nginx
RUN wget "https://people.freebsd.org/~osa/ngx_http_redis-0.3.9.tar.gz" -O redis.tar.gz
# Compile (note the && after the CONFARGS assignment: without it the variable
# is only set in the environment of the tar command and ./configure sees an empty $CONFARGS)
RUN mkdir /usr/src && \
    CONFARGS=$(nginx -V 2>&1 | sed -n -e 's/^.*arguments: //p') && \
    tar -zxC /usr/src -f nginx.tar.gz && \
    tar -xzvf "redis.tar.gz" && \
    REDISDIR="$(pwd)/ngx_http_redis-0.3.9" && \
    cd /usr/src/nginx-$NGINX_VERSION && \
    ./configure --with-compat $CONFARGS --add-dynamic-module=$REDISDIR && \
    make && make install
COPY supervisord.ini /etc/supervisor.d/supervisord.ini
COPY ./nginx.conf /etc/nginx/nginx.conf
RUN mkdir /etc/redis
COPY ./redis.conf /etc/redis/redis.conf
EXPOSE 80
# Start all services (see ./supervisord.ini)
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor.d/supervisord.ini"]
nginx.conf
daemon off;
user nginx;
worker_processes 1;
pid /var/run/nginx.pid;
load_module /usr/local/nginx/modules/ngx_http_redis_module.so;

events {
    worker_connections 1024;
}

http {
    access_log /dev/stdout;

    upstream asgi {
        server 127.0.0.1:8000 fail_timeout=0;
    }

    server {
        listen 80;
        server_name localhost;
        client_max_body_size 4G;

        location /chat_ws {
            set $redis_key $uri;
            redis_pass 127.0.0.1:6379;
            error_page 404 = /fallback;
        }

        location / {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_pass http://asgi;
        }
    }
}
redis.conf
bind 0.0.0.0
protected-mode no
docker-compose.yml
version: '3.7'
services:
  nginxweb:
    image: redis_nginx_docker
    restart: always
    build:
      context: .
      dockerfile: ./Dockerfile
    ports:
      - 8000:80
    env_file:
      - env
    volumes:
      - ./mcve:/app
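To reproduce, roughly (assuming an env file exists next to docker-compose.yml, as the env_file entry requires):

docker-compose up --build
# then open http://localhost:8000 (host port 8000 is mapped to container port 80)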
Remaining configuration may be found in the git repo.

Related

Can't modify files created in docker container

I've got a container with a Django application running in it, and I sometimes go into the container's shell and run ./manage.py makemigrations to create migrations for my app.
The files are created successfully and synchronized between host and container.
However, on my host machine I am not able to modify any file created in the container.
This is my Dockerfile
FROM python:3.8-alpine3.10
LABEL maintainer="Marek Czaplicki <marek.czaplicki>"
WORKDIR /app
COPY ./requirements.txt ./requirements.txt
RUN set -ex; \
    apk update; \
    apk upgrade; \
    apk add libpq libc-dev gcc g++ libffi-dev linux-headers python3-dev musl-dev pcre-dev postgresql-dev postgresql-client swig tzdata; \
    apk add --virtual .build-deps build-base linux-headers; \
    apk del .build-deps; \
    pip install pip -U; \
    pip --no-cache-dir install -r requirements.txt; \
    rm -rf /var/cache/apk/*; \
    adduser -h /app -D -u 1000 -H uwsgi_user
ENV PYTHONUNBUFFERED=TRUE
COPY . .
ENTRYPOINT ["sh", "./entrypoint.sh"]
CMD ["sh", "./run_backend.sh"]
and run_backend.sh
./manage.py collectstatic --noinput
./manage.py migrate && exec uwsgi --strict uwsgi.ini
What can I do to be able to modify these files on my host machine? I don't want to chmod every file or directory every time I create one.
For some reason there is one project in which files created in the container are editable from the host machine, but I cannot find any difference between the two.
By default, Docker containers run as root. This has two issues:
In development, as you can see, the files are owned by root, which is often not what you want.
In production this is a security risk (https://pythonspeed.com/articles/root-capabilities-docker-security/).
For development purposes, docker run --user $(id -u) yourimage or the Compose example given in the other answer will match the container user to your host user.
For production, you'll want to create a user inside the image; see the page linked above for details.
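A minimal sketch of that for an Alpine-based image (the user name and uid are illustrative):

# Create an unprivileged user and run everything from here on as that user
RUN adduser -D -u 1000 appuser
USER appuser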
Usually files created inside a Docker container are owned by the container's root user.
You could try this inside your container:
chown 1000:1000 file-you-want-to-edit-outside
You could add this as the last layer of your Dockerfile as a RUN instruction, as sketched below.
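For example (assuming the code lives under /app; adjust the path to your project):

# Hypothetical last layer: hand the app directory to the host's uid/gid
RUN chown -R 1000:1000 /app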
Edit:
If you are using docker-compose, you can add user to your container:
service:
  container:
    user: ${CURRENT_HOST_USER}
And have CURRENT_HOST_USER equal to $(id -u):$(id -g).
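For example, a sketch of one way to set it before starting Compose:

export CURRENT_HOST_USER="$(id -u):$(id -g)"
docker-compose up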
The solution was to add
USER uwsgi_user
to the Dockerfile and then simply run docker exec -it container-name sh

azure web app for containers uWSGI listen queue of socket full

My app is running in a Docker container on Azure Web Apps for Containers (Linux).
I found out my server gets errors when the listen queue fills up.
log:
uWSGI listen queue of socket "127.0.0.1:37400" (fd: 3) full !!! (101/100)
I have added the --listen 4096 option to increase the queue, but my server still throws the error.
log:
uWSGI listen queue of socket "127.0.0.1:37400" (fd: 3) full !!! (129/128)
Some references say I need to increase net.core.somaxconn, but I couldn't:
log:
sysctl: error: 'net.core.somaxconn' is an unknown key
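(For reference, on a plain Docker host the per-container value would normally be raised like this; whether Azure Web App for Containers exposes anything equivalent is exactly what I can't figure out:)

# Sketch: raise somaxconn for a single container on a normal Docker host
docker run --sysctl net.core.somaxconn=4096 myimage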
Any idea what I am missing?
Thanks
EDIT
Let me share my Dockerfile
FROM python:3.6-alpine
RUN apk update && \
    apk add python3 python3-dev \
        gcc musl-dev linux-headers zlib zlib-dev libffi libffi-dev \
        freetype freetype-dev jpeg jpeg-dev \
        postgresql-dev
WORKDIR /code
COPY . /code/
ENV LANG C.UTF-8
ENV DJANGO_SETTINGS_MODULE myproject.settings.prod
ENV PYTHONUNBUFFERED 1
RUN pip3 install -r requirements.txt
EXPOSE 80
CMD ["uwsgi", "--plugins", "http,python", \
"--http", "0.0.0.0:80", \
"--wsgi-file", "/code/myproject/wsgi.py", \
"--master", \
"--listen", "4096", \
"--die-on-term", \
"--single-interpreter", \
"--harakiri", "30", \
"--reload-on-rss", "512", \
"--post-buffering-bufsize", "8192"]

How to write the dockerfile for haproxy, etcd, and confd together?

I want to use Docker to run haproxy, etcd and confd together in one container. What would be the recommended way to achieve this?
This is what I currently have:
FROM haproxy:1.8
COPY ./haproxyconfig/haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
### install confd
ADD confd-0.11.0-linux-amd64 /usr/local/bin/confd
ADD confd /etc/confd
### install etcd (binaries extracted on the host first with:
###   tar zxvf ./tmp/etcd-v3.2.11-linux-amd64.tar.gz)
ADD ./tmp/etcd /usr/local/bin/
ADD ./tmp/etcdctl /usr/local/bin/
RUN mkdir -p /var/etcd/ /var/lib/etcd/
# NOTE: this runs at build time, so it needs etcd reachable at 127.0.0.1:2379
RUN confd -onetime -backend etcd -node http://127.0.0.1:2379
WORKDIR /usr/local/etc/haproxy
EXPOSE 80 443 1936 2379 2380
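A common pattern for running several long-lived processes in one container is a process supervisor such as supervisord (as in the main question's image). A minimal sketch, with all paths and flags illustrative:

; supervisord.ini: run etcd, confd and haproxy side by side in the foreground
[supervisord]
nodaemon=true

[program:etcd]
command=/usr/local/bin/etcd --data-dir /var/lib/etcd

[program:confd]
command=/usr/local/bin/confd -backend etcd -node http://127.0.0.1:2379 -watch

[program:haproxy]
command=haproxy -f /usr/local/etc/haproxy/haproxy.cfg -db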

How to checkout branches if there are files created in docker image?

In my pet project I set up docker-compose for development. The issue is that I created Django migrations inside the Docker image and committed them. After checking out the main branch I see an error: the files become untracked and I cannot merge the sub-branch into main.
git checkout master
warning: unable to unlink 'apps/app_name/migrations/0001_initial.py': Permission denied
warning: unable to unlink 'apps/app_name/migrations/0002_auto_20190127_1815.py': Permission denied
warning: unable to unlink 'apps/app_name/migrations/__init__.py': Permission denied
Switched to branch 'master'
I also tried it with sudo. All the new files appear untracked on the main branch, but no new commits are added (based on git log).
docker-compose.yml
version: '3'
services:
  db:
    image: postgres
  web:
    build:
      dockerfile: ./compose/Dockerfile.dev
      context: .
    command: /start
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
    links:
      - db:db
Dockerfile
FROM python:3.6.8-alpine
ENV PYTHONUNBUFFERED 1
RUN apk update \
    # psycopg2 dependencies
    && apk add --virtual build-deps gcc python3-dev musl-dev \
    && apk add postgresql-dev \
    # Pillow dependencies
    && apk add jpeg-dev zlib-dev freetype-dev lcms2-dev openjpeg-dev tiff-dev tk-dev tcl-dev \
    # CFFI dependencies
    && apk add libffi-dev py-cffi \
    # Translations dependencies
    && apk add gettext \
    # https://docs.djangoproject.com/en/dev/ref/django-admin/#dbshell
    && apk add postgresql-client
RUN mkdir /code
WORKDIR /code
COPY /requirements /code/requirements/
RUN pip install -r requirements/dev.txt
COPY . /code/
COPY ./compose/start /start
RUN sed -i 's/\r//' /start
RUN chmod +x /start
start.sh
#!/bin/sh
set -o errexit
set -o pipefail
set -o nounset
python manage.py migrate
python manage.py runserver_plus 0.0.0.0:8000
Dockerfile
FROM python:3.6.8-alpine
ENV PYTHONUNBUFFERED 1
ARG CONTAINER_USER="python"
ARG CONTAINER_UID="1000"
ARG CONTAINER_GID="1000"
ARG WORKSPACE=/home/"${CONTAINER_USER}"/code
RUN apk update \
    # psycopg2 dependencies
    && apk add --virtual build-deps gcc python3-dev musl-dev \
    && apk add postgresql-dev \
    # Pillow dependencies
    && apk add jpeg-dev zlib-dev freetype-dev lcms2-dev openjpeg-dev tiff-dev tk-dev tcl-dev \
    # CFFI dependencies
    && apk add libffi-dev py-cffi \
    # Translations dependencies
    && apk add gettext \
    # https://docs.djangoproject.com/en/dev/ref/django-admin/#dbshell
    && apk add postgresql-client && \
    addgroup -g "${CONTAINER_GID}" -S "${CONTAINER_USER}" && \
    adduser -s /bin/ash -u "${CONTAINER_UID}" -G "${CONTAINER_USER}" -h /home/"${CONTAINER_USER}" -D "${CONTAINER_USER}"
USER "${CONTAINER_USER}"
WORKDIR "${WORKSPACE}"
COPY ./requirements/dev.txt "${WORKSPACE}"/requirements.txt
RUN pip install -r requirements.txt
It is bad practice to run anything whatsoever in a Docker container as the root user, just as you wouldn't do it on your own computer. I added a user python that will have the same uid as your host user, assuming your operating-system user has uid 1000, as is normal on Linux machines. If you are on another OS this may not hold, and you will need to find the solution for your specific OS.
docker-compose.yml
version: '3'
services:
  db:
    image: postgres
  web:
    build:
      dockerfile: ./compose/Dockerfile.dev
      context: .
      args:
        CONTAINER_UID: ${UID:-1000}
        CONTAINER_GID: ${GID:-1000}
    command: ./compose/start
    volumes:
      - .:/home/python/code
    ports:
      - "8000:8000"
    depends_on:
      - db
links is deprecated and has been replaced by depends_on, so it is not necessary to use both.
In order to build the container with the same permissions as your user's filesystem, I have added args to the build section of docker-compose and use the OS values for $UID and $GID; if they are not set, they default to 1000.
You can check the values on your Linux OS with id -u for $UID and id -g for $GID, as shown below.
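A sketch of one way to feed them to docker-compose, which reads variables from a .env file next to docker-compose.yml:

# Write the host ids into the .env file that docker-compose reads
echo "UID=$(id -u)" >> .env
echo "GID=$(id -g)" >> .env
docker-compose up --build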
Shell Script
Make it executable in your repo and commit the change, so that you don't need to do it each time you build the Docker image:
chmod 700 ./compose/start
I don't use +x because that is bad practice in terms of security, since it would allow everyone to execute the script.
Summary
Any files created inside the container will now have the uid and gid 1000, so no more permission conflicts should occur.
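A quick way to verify (a sketch; the service and file names are illustrative):

docker-compose run --rm web touch test_file
ls -ln test_file   # the owner columns should now show 1000 1000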

Dockerfile PHP, NGINX and Composer

I'm having a difficult time finding resources on creating a Dockerfile that installs a proper PHP, Composer and nginx environment.
I can create a docker-compose container set, but I cannot get Composer installed that way. If anyone has any good resources to point me to for writing a full PHP, Composer and nginx Dockerfile, I'd appreciate it.
This is my Dockerfile for a similar scenario; I hope it helps. Feedback and ideas are welcome!
FROM php:7.4-fpm
# Install system dependencies
RUN apt-get update && apt-get install -y \
    git \
    curl \
    libpng-dev \
    libonig-dev \
    libxml2-dev \
    libzip-dev \
    zip \
    unzip \
    software-properties-common \
    lsb-release \
    apt-transport-https \
    ca-certificates \
    wget \
    gnupg2
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install PHP extensions (some are already compiled in the PHP base image)
RUN docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd json zip xml
# Get latest Composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
# Create myuser
RUN useradd -G www-data,root -u 1000 -d /home/myuser myuser
RUN mkdir -p /home/myuser/.composer && \
    chown -R myuser:myuser /home/myuser
# Set working directory
WORKDIR /var/www/mypage
USER myuser
You can add nginx to this container, but then I recommend using supervisord to control the multiple processes.
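A minimal sketch of such a supervisord config, assuming both php-fpm and nginx are installed in the image:

; supervisord.conf: keep php-fpm and nginx running in the foreground
[supervisord]
nodaemon=true

[program:php-fpm]
command=php-fpm --nodaemonize

[program:nginx]
command=nginx -g "daemon off;"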