I have an .Rmd file, along with a .sql file that the .Rmd reads, that I'm trying to deploy in ShinyProxy. I am able to run this from within RStudio on my Mac.
The application shows up in ShinyProxy, but when I click on it, it launches, says "please wait", and then fails with the error java.lang.StackOverflowError. I tried increasing the stack size with JAVA_OPTS in the Dockerfile.
I do see this in shinyproxy.log:
java.lang.reflect.InvocationTargetException: null
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_332]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_332]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_332]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_332]
...
Caused by: javax.naming.NoInitialContextException: Need to specify class name in environment or system property, or as an applet parameter, or in an application resource file: java.naming.factory.initial
at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:673) ~[na:1.8.0_332]
at javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:313) ~[na:1.8.0_332]
at javax.naming.InitialContext.getURLOrDefaultInitCtx(InitialContext.java:350) ~[na:1.8.0_332]
at javax.naming.InitialContext.lookup(InitialContext.java:417) ~[na:1.8.0_332]
... 140 common frames omitted
Dockerfile:
FROM openanalytics/r-base
MAINTAINER John Reber "John.Reber@jefferson.edu"
ENV JAVA_HOME /usr/lib/jvm/java-11-openjdk-amd64
RUN export JAVA_HOME
ENV JAVA_OPTS "-Xms4G -Xmx8G -Xss2G"
RUN export JAVA_OPTS
# Install Java for rJava
RUN apt-get update && \
apt-get install -y default-jdk && \
apt-get install -y default-jre && \
apt-get install -y ca-certificates-java && \
rm -rf /var/lib/apt/lists/*
RUN ["java", "-version"]
CMD javareconf
RUN apt-get update && apt-get install -y \
libcurl4-openssl-dev \
# libcurl4-gnutls-dev \
libssl-dev \
libxml2-dev && \
rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get install -y \
libharfbuzz0b && \
rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get install -y \
sudo \
pandoc \
pandoc-citeproc \
libcairo2-dev \
libxt-dev \
libssh2-1-dev && \
rm -rf /var/lib/apt/lists/*
WORKDIR /opt/oracle
RUN apt-get update && apt-get install -y libaio1 wget unzip \
&& wget https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip \
&& unzip instantclient-basiclite-linuxx64.zip \
&& rm -f instantclient-basiclite-linuxx64.zip \
&& cd /opt/oracle/instantclient* \
&& rm -f *jdbc* *occi* *mysql* *README *jar uidrvci genezi adrci \
&& echo /opt/oracle/instantclient* > /etc/ld.so.conf.d/oracle-instantclient.conf \
&& ldconfig
WORKDIR /
RUN apt-get update && apt-get install -y \
libmysql++-dev \
unixodbc-dev \
libpq-dev && \
rm -rf /var/lib/apt/lists/*
#RUN apt-get update && apt-get install -y \
# libxml2 \
# libssl1.1 && \
# rm -rf /var/lib/apt/lists/*
CMD javareconf
RUN ["java", "-version"]
# install needed R packages
#RUN R -e "install.packages(c('flexdashboard', 'knitr', 'plotly', 'httpuv', 'shiny', 'rJava', 'RJDBC', 'dplyr', 'readr', 'DT', 'lubridate', 'rmarkdown'), dependencies = TRUE, repo='http://cran.r-project.org')"
RUN R -e "install.packages(c('shiny'), dependencies = TRUE, repo='https://cloud.r-project.org')"
RUN R -e "install.packages(c('flexdashboard', 'dplyr', 'rJava', 'RJDBC', 'readr', 'DT', 'lubridate', 'rmarkdown'), dependencies = TRUE, repo='https://cloud.r-project.org')"
# 'sysfonts','gifski', 'Cairo', 'tidyverse',
# make directory and copy Rmarkdown flexdashboard file in it
RUN mkdir -p /prmc
COPY prmc/PRMC.Rmd /prmc/PRMC.Rmd
#COPY prmc/PRMC_Local.Rmd /prmc/PRMC_Local.Rmd
COPY prmc/prmc.sql /prmc/prmc.sql
#COPY prmc/PRMC_ACCRUAL.csv /prmc/PRMC_ACCRUAL.csv
COPY prmc/ojdbc11.jar /prmc/ojdbc11.jar
# Copy Rprofile.site to the image
COPY Rprofile.site /usr/local/lib/R/etc/
# make all app files readable (solves issue when dev in Windows, but building in Ubuntu)
RUN chmod -R 755 /prmc
# expose port on Docker container
EXPOSE 3838
# run flexdashboard as localhost and on exposed port in Docker container
CMD ["R", "-e", "rmarkdown::run('/prmc/PRMC.Rmd', shiny_args = list(port = 3838, host = '0.0.0.0'))"]
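Before debugging inside ShinyProxy, it can help to confirm that the image serves the dashboard on its own. A minimal smoke test, assuming the Dockerfile above sits in the current directory and the image is tagged prmc_dashboard3 as in the spec below:

```shell
# Build the image and run it directly, bypassing ShinyProxy entirely.
docker build -t prmc_dashboard3 .
docker run --rm -p 3838:3838 prmc_dashboard3
# Then browse to http://localhost:3838 — if the dashboard renders here,
# the problem is in the ShinyProxy spec, not in the image itself.
```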
application.yml:
proxy:
  # title: Open Analytics Shiny Proxy
  title: SKCC Open Analytics ShinyProxy
  # logo-url: https://www.openanalytics.eu/shinyproxy/logo.png
  logo-url: https://ewebapp01pa.jefferson.edu/includes/images/logo-2014.jpg
  landing-page: /
  heartbeat-rate: 10000
  heartbeat-timeout: 60000
  port: 8081
  # authentication: keycloak
  authentication: simple
  admin-groups: admin
  useForwardHeaders: true
  # Example: 'simple' authentication configuration
  users:
  - name: jack
    password: XXXXXXXX
    groups: scientists, admin
  - name: jeff
    password: XXXXXXXXX
    groups: mathematicians
  # keycloak authentication
  keycloak:
    auth-server-url: https://kc.kcc.tju.edu/auth
    realm: shinyproxy
    public-client: true
    resource: shinyproxy
    credentials-secret: s2NwbneBKh10wG0fHjZjevGnLlNTt44h
    use-resource-role-mappings: false
  # Docker configuration
  docker:
    url: http://localhost:2375
    port-range-start: 20000
  specs:
  - id: 01_hello
    display-name: Hello Application
    description: Application which demonstrates the basics of a Shiny app
    container-cmd: ["R", "-e", "shinyproxy::run_01_hello()"]
    container-image: openanalytics/shinyproxy-demo
    access-groups: [scientists, mathematicians, analyze, admin]
  # - id: 06_tabsets
  #   display-name: 06_tabsets
  #   description: Application 06_tabsets demonstration
  #   container-cmd: ["R", "-e", "shinyproxy::run_06_tabsets()"]
  #   container-image: openanalytics/shinyproxy-demo
  #   access-groups: []
  # - id: euler
  #   display-name: Euler's number
  #   container-cmd: [ "R", "-e", "shiny::runApp('/root/euler')" ]
  #   container-image: openanalytics/shinyproxy-template
  #   access-groups: scientists
  - id: prmc
    display-name: PRMC Dashboard
    description: (Protocol Review Monitoring Committee Dashboard)
    docker-cmd: ["R", "-e rmarkdown::run('/prmc/PRMC.Rmd')"]
    container-image: prmc_dashboard3
    access-groups: [scientists, mathematicians, analyze, admin]
logging:
  file:
    name: shinyproxy.log
  level:
    root: DEBUG
I have a docker-compose service that runs django using gunicorn in an entrypoint shell script.
When I issue CTRL-C after the docker-compose stack has been started, the web and nginx services do not gracefully exit and are not deleted. How do I configure the docker environment so that the services are removed when a CTRL-C is issued?
I have tried using stop_signal: SIGINT but the result is the same. Any ideas?
docker-compose log after CTRL-C issued
^CGracefully stopping... (press Ctrl+C again to force)
Killing nginx ... done
Killing web ... done
docker containers after CTRL-C is issued
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4b2f7db95c90 nginx:alpine "/docker-entrypoint.…" 5 minutes ago Exited (137) 5 minutes ago nginx
cdf3084a8382 myimage "./docker-entrypoint…" 5 minutes ago Exited (137) 5 minutes ago web
Dockerfile
#
# Use poetry to build wheel and install dependencies into a virtual environment.
# This will store the dependencies during compile docker stage.
# In run stage copy the virtual environment to the final image. This will reduce the
# image size.
#
# Install poetry using pip, to allow version pinning. Use --ignore-installed to avoid
# dependency conflicts with poetry.
#
# ---------------------------------------------------------------------------------------
##
# base: Configure python environment and set workdir
##
FROM python:3.8-slim as base
ENV PYTHONDONTWRITEBYTECODE=1 \
PYTHONFAULTHANDLER=1 \
PYTHONHASHSEED=random \
PYTHONUNBUFFERED=1
WORKDIR /app
# configure user pyuser:
RUN useradd --user-group --create-home --no-log-init --shell /bin/bash pyuser && \
chown pyuser /app
# ---------------------------------------------------------------------------------------
##
# compile: Install dependencies from poetry exported requirements
# Use poetry to build the wheel for the python package.
# Install the wheel using pip.
##
FROM base as compile
ARG DEPLOY_ENV=development \
POETRY_VERSION=1.1.7
# pip:
ENV PIP_DEFAULT_TIMEOUT=100 \
PIP_DISABLE_PIP_VERSION_CHECK=1 \
PIP_NO_CACHE_DIR=1
# system dependencies:
RUN apt-get update && \
apt-get install -y --no-install-recommends \
build-essential gcc && \
apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false && \
apt-get clean -y && \
rm -rf /var/lib/apt/lists/*
# install poetry, ignoring installed dependencies
RUN pip install --ignore-installed "poetry==$POETRY_VERSION"
# virtual environment:
RUN python -m venv /opt/venv
ENV VIRTUAL_ENV=/opt/venv
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
# install dependencies:
COPY pyproject.toml poetry.lock ./
RUN /opt/venv/bin/pip install --upgrade pip \
&& poetry install $(if [ "$DEPLOY_ENV" = 'production' ]; then echo '--no-dev'; fi) \
--no-ansi \
--no-interaction
# copy source:
COPY . .
# build and install wheel:
RUN poetry build && /opt/venv/bin/pip install dist/*.whl
# -------------------------------------------------------------------------------------------
##
# run: Copy virtualenv from compile stage, to reduce final image size
# Run the docker-entrypoint.sh script as pyuser
#
# This performs the following actions when the container starts:
# - Make and run database migrations
# - Collect static files
# - Create the superuser
# - Run wsgi app using gunicorn
#
# port: 5000
#
# build args:
#
# GIT_HASH Git hash the docker image is derived from
#
# environment:
#
# DJANGO_DEBUG True if django debugging is enabled
# DJANGO_SECRET_KEY The secret key used for django server, defaults to secret
# DJANGO_SUPERUSER_EMAIL Django superuser email, default=myname@example.com
# DJANGO_SUPERUSER_PASSWORD Django superuser passwd, default=Pa55w0rd
# DJANGO_SUPERUSER_USERNAME Django superuser username, default=admin
##
FROM base as run
ARG GIT_HASH
ENV DJANGO_DEBUG=${DJANGO_DEBUG:-False}
ENV DJANGO_SECRET_KEY=${DJANGO_SECRET_KEY:-secret}
ENV DJANGO_SETTINGS_MODULE=default_project.main.settings
ENV DJANGO_SUPERUSER_EMAIL=${DJANGO_SUPERUSER_EMAIL:-"myname@example.com"}
ENV DJANGO_SUPERUSER_PASSWORD=${DJANGO_SUPERUSER_PASSWORD:-"Pa55w0rd"}
ENV DJANGO_SUPERUSER_USERNAME=${DJANGO_SUPERUSER_USERNAME:-"admin"}
ENV GIT_HASH=${GIT_HASH:-dev}
# install virtualenv from compiled image
COPY --chown=pyuser:pyuser --from=compile /opt/venv /opt/venv
# set PATH and VIRTUAL_ENV to activate the virtualenv
ENV VIRTUAL_ENV="/opt/venv"
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
COPY --chown=pyuser:pyuser ./docker/docker-entrypoint.sh ./
USER pyuser
RUN mkdir /opt/venv/lib/python3.8/site-packages/default_project/staticfiles
EXPOSE 5000
ENTRYPOINT ["./docker-entrypoint.sh"]
Entrypoint
#!/bin/sh
set -e
echo "Making migrations..."
django-admin makemigrations
echo "Running migrations..."
django-admin migrate
echo "Making staticfiles..."
mkdir -p /opt/venv/lib/python3.8/site-packages/default_project/staticfiles
echo "Collecting static files..."
django-admin collectstatic --noinput
# requires gnu text tools
# echo "Compiling translation messages..."
# django-admin compilemessages
# echo "Making translation messages..."
# django-admin makemessages
if [ "$DJANGO_SUPERUSER_USERNAME" ]
then
echo "Creating django superuser"
django-admin createsuperuser \
--noinput \
--username $DJANGO_SUPERUSER_USERNAME \
--email $DJANGO_SUPERUSER_EMAIL
fi
exec gunicorn \
--bind 0.0.0.0:5000 \
--forwarded-allow-ips='*' \
--worker-tmp-dir /dev/shm \
--workers=4 \
--threads=1 \
--worker-class=gthread \
default_project.main.wsgi:application
exec "$@"
docker-compose
version: '3.8'
services:
  web:
    container_name: web
    image: myimage
    init: true
    build:
      context: .
      dockerfile: docker/Dockerfile
    environment:
      - DJANGO_DEBUG=${DJANGO_DEBUG}
      - DJANGO_SECRET_KEY=${DJANGO_SECRET_KEY}
      - DJANGO_SUPERUSER_EMAIL=${DJANGO_SUPERUSER_EMAIL}
      - DJANGO_SUPERUSER_PASSWORD=${DJANGO_SUPERUSER_PASSWORD}
      - DJANGO_SUPERUSER_USERNAME=${DJANGO_SUPERUSER_USERNAME}
    # stop_signal: SIGINT
    volumes:
      - static-files:/opt/venv/lib/python3.8/site-packages/{{ cookiecutter.project_name }}/staticfiles:rw
    ports:
      - 127.0.0.1:${DJANGO_PORT}:5000
  nginx:
    container_name: nginx
    image: nginx:alpine
    volumes:
      - ./docker/nginx:/etc/nginx/conf.d
      - static-files:/static
    depends_on:
      - web
    ports:
      - 127.0.0.1:8000:80
volumes:
  static-files:
You can use docker-compose down, which stops containers and removes the containers and networks created by up (and, with the -v and --rmi flags, the volumes and images as well).
Reference
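In practice, after the Ctrl-C output shown above, a single follow-up command cleans up the exited containers and the default network:

```shell
# Remove the stopped containers and the compose network.
docker-compose down
# Or also remove the static-files named volume declared in the file:
docker-compose down -v
```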
I'm trying to build a Docker container with the following command:
sudo docker build docker_calculadora/
but while it's building, at step 9 the following error appears:
Step 9/27 : RUN set -ex; export GNUPGHOME="$(mktemp -d)"; for key in $GPG_KEYS; do gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; done; gpg --batch --export $GPG_KEYS > /etc/apt/trusted.gpg.d/mariadb.gpg; command -v gpgconf > /dev/null && gpgconf --kill all || :; rm -r "$GNUPGHOME"; apt-key list
---> Running in a80677ab986c
mktemp -d
export GNUPGHOME=/tmp/tmp.TiWBSXwFOS
gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys 177F4010FE56CA3336300305F1656F24C74CD1D8
gpg: keybox '/tmp/tmp.TiWBSXwFOS/pubring.kbx' created
gpg: keyserver receive failed: No name
The command '/bin/sh -c set -ex; export GNUPGHOME="$(mktemp -d)"; for key in $GPG_KEYS; do gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; done; gpg --batch --export $GPG_KEYS > /etc/apt/trusted.gpg.d/mariadb.gpg; command -v gpgconf > /dev/null && gpgconf --kill all || :; rm -r "$GNUPGHOME"; apt-key list' returned a non-zero code: 2
My Dockerfile:
# vim:set ft=dockerfile:
FROM ubuntu:focal
# add our user and group first to make sure their IDs get assigned consistently, regardless of whatever dependencies get added
RUN groupadd -r mysql && useradd -r -g mysql mysql
# https://bugs.debian.org/830696 (apt uses gpgv by default in newer releases, rather than gpg)
RUN set -ex; \
apt-get update; \
if ! which gpg; then \
apt-get install -y --no-install-recommends gnupg; \
fi; \
if ! gpg --version | grep -q '^gpg (GnuPG) 1\.'; then \
# Ubuntu includes "gnupg" (not "gnupg2", but still 2.x), but not dirmngr, and gnupg 2.x requires dirmngr
# so, if we're not running gnupg 1.x, explicitly install dirmngr too
apt-get install -y --no-install-recommends dirmngr; \
fi; \
rm -rf /var/lib/apt/lists/*
# add gosu for easy step-down from root
# https://github.com/tianon/gosu/releases
ENV GOSU_VERSION 1.12
RUN set -eux; \
savedAptMark="$(apt-mark showmanual)"; \
apt-get update; \
apt-get install -y --no-install-recommends ca-certificates wget; \
rm -rf /var/lib/apt/lists/*; \
dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')"; \
wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch"; \
wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch.asc"; \
export GNUPGHOME="$(mktemp -d)"; \
gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4; \
gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu; \
gpgconf --kill all; \
rm -rf "$GNUPGHOME" /usr/local/bin/gosu.asc; \
apt-mark auto '.*' > /dev/null; \
[ -z "$savedAptMark" ] || apt-mark manual $savedAptMark > /dev/null; \
apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; \
chmod +x /usr/local/bin/gosu; \
gosu --version; \
gosu nobody true
RUN mkdir /docker-entrypoint-initdb.d
# install "pwgen" for randomizing passwords
# install "tzdata" for /usr/share/zoneinfo/
# install "xz-utils" for .sql.xz docker-entrypoint-initdb.d files
RUN set -ex; \
apt-get update; \
apt-get install -y --no-install-recommends \
pwgen \
tzdata \
xz-utils \
; \
rm -rf /var/lib/apt/lists/*
ENV GPG_KEYS \
# pub rsa4096 2016-03-30 [SC]
# 177F 4010 FE56 CA33 3630 0305 F165 6F24 C74C D1D8
# uid [ unknown] MariaDB Signing Key <signing-key#mariadb.org>
# sub rsa4096 2016-03-30 [E]
177F4010FE56CA3336300305F1656F24C74CD1D8
RUN set -ex; \
export GNUPGHOME="$(mktemp -d)"; \
for key in $GPG_KEYS; do \
gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; \
done; \
gpg --batch --export $GPG_KEYS > /etc/apt/trusted.gpg.d/mariadb.gpg; \
command -v gpgconf > /dev/null && gpgconf --kill all || :; \
rm -r "$GNUPGHOME"; \
apt-key list
# bashbrew-architectures: amd64 arm64v8 ppc64le
ENV MARIADB_MAJOR 10.5
ENV MARIADB_VERSION 1:10.5.8+maria~focal
# release-status:Stable
# (https://downloads.mariadb.org/mariadb/+releases/)
RUN set -e;\
echo "deb http://ftp.osuosl.org/pub/mariadb/repo/$MARIADB_MAJOR/ubuntu focal main" > /etc/apt/sources.list.d/mariadb.list; \
{ \
echo 'Package: *'; \
echo 'Pin: release o=MariaDB'; \
echo 'Pin-Priority: 999'; \
} > /etc/apt/preferences.d/mariadb
# add repository pinning to make sure dependencies from this MariaDB repo are preferred over Debian dependencies
# libmariadbclient18 : Depends: libmysqlclient18 (= 5.5.42+maria-1~wheezy) but 5.5.43-0+deb7u1 is to be installed
# the "/var/lib/mysql" stuff here is because the mysql-server postinst doesn't have an explicit way to disable the mysql_install_db codepath besides having a database already "configured" (ie, stuff in /var/lib/mysql/mysql)
# also, we set debconf keys to make APT a little quieter
RUN set -ex; \
{ \
echo "mariadb-server-$MARIADB_MAJOR" mysql-server/root_password password 'unused'; \
echo "mariadb-server-$MARIADB_MAJOR" mysql-server/root_password_again password 'unused'; \
} | debconf-set-selections; \
apt-get update; \
apt-get install -y \
"mariadb-server=$MARIADB_VERSION" \
# mariadb-backup is installed at the same time so that `mysql-common` is only installed once from just mariadb repos
mariadb-backup \
socat \
; \
rm -rf /var/lib/apt/lists/*; \
# purge and re-create /var/lib/mysql with appropriate ownership
rm -rf /var/lib/mysql; \
mkdir -p /var/lib/mysql /var/run/mysqld; \
chown -R mysql:mysql /var/lib/mysql /var/run/mysqld; \
# ensure that /var/run/mysqld (used for socket and lock files) is writable regardless of the UID our mysqld instance ends up having at runtime
chmod 777 /var/run/mysqld; \
# comment out a few problematic configuration values
find /etc/mysql/ -name '*.cnf' -print0 \
| xargs -0 grep -lZE '^(bind-address|log|user\s)' \
| xargs -rt -0 sed -Ei 's/^(bind-address|log|user\s)/#&/'; \
# don't reverse lookup hostnames, they are usually another container
echo '[mysqld]\nskip-host-cache\nskip-name-resolve' > /etc/mysql/conf.d/docker.cnf
VOLUME /var/lib/mysql
COPY docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 3306
RUN apt-get update
#RUN apt-get install -y software-properties-common
#RUN apt-get update
RUN apt-get install -y apache2 curl nano php libapache2-mod-php php7.4-mysql
EXPOSE 80
COPY calculadora.html /var/www/html/
COPY calculadora.php /var/www/html/
COPY success.html /var/www/html/
COPY start.sh /
COPY 50-server.cnf /etc/mysql/mariadb.conf.d/
RUN chmod 777 /start.sh
CMD ["/start.sh"]
The error occurs because the keyserver used in the Dockerfile, ha.pool.sks-keyservers.net, is down: the SKS keyserver pool has been decommissioned. You just need to update the Dockerfile to point at a keyserver that is still online.
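For example, swapping keyserver.ubuntu.com into the failing RUN step (keeping the rest of the loop unchanged) would look like this:

```shell
# Inside the failing RUN step: fetch the MariaDB signing key from a
# keyserver that is still online instead of the retired SKS pool.
gpg --batch --keyserver hkps://keyserver.ubuntu.com --recv-keys "$key"
```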
I tried to install Airflow via my own image on a public Docker Hub. It works perfectly locally, but when I tried to use it on OpenShift I got the error below:
ERROR: Could not install packages due to an OSError: [Errno 13]
Permission denied: '/.local' Check the permissions.
My Dockerfile works on Windows and Ubuntu.
# VERSION 2.0.0
# AUTHOR: Bruno
# DESCRIPTION: Basic Airflow container
FROM python:3.8-slim-buster
LABEL maintainer="Bruno"
# Never prompt the user for choices on installation/configuration of packages
ENV DEBIAN_FRONTEND noninteractive
ENV TERM linux
COPY requirements.txt .
RUN pip install --user -r requirements.txt --no-cache-dir
# Airflow
ARG AIRFLOW_VERSION=2.0.0
ARG AIRFLOW_USER_HOME=/usr/local/airflow
ENV AIRFLOW_HOME=${AIRFLOW_USER_HOME}
# Define en_US.
ENV LANGUAGE en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LC_ALL en_US.UTF-8
ENV LC_CTYPE en_US.UTF-8
ENV LC_MESSAGES en_US.UTF-8
# Disable noisy "Handling signal" log messages:
# ENV GUNICORN_CMD_ARGS --log-level WARNING
RUN set -ex \
&& buildDeps=' \
freetds-dev \
libkrb5-dev \
libsasl2-dev \
libssl-dev \
libffi-dev \
libpq-dev \
git \
' \
&& apt-get update -yqq \
&& apt-get upgrade -yqq \
&& apt-get install -yqq --no-install-recommends \
$buildDeps \
freetds-bin \
build-essential \
default-libmysqlclient-dev \
apt-utils \
curl \
rsync \
netcat \
locales \
&& sed -i 's/^# en_US.UTF-8 UTF-8$/en_US.UTF-8 UTF-8/g' /etc/locale.gen \
&& locale-gen \
&& update-locale LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 \
&& useradd -ms /bin/bash -d ${AIRFLOW_USER_HOME} airflow \
&& pip install -U pip setuptools wheel \
&& pip install pytz \
&& pip install pyOpenSSL \
&& pip install ndg-httpsclient \
&& pip install pyasn1 \
&& pip install apache-airflow[crypto,celery,postgres,kubernetes,hive,jdbc,mysql,ssh${AIRFLOW_DEPS:+,}${AIRFLOW_DEPS}]==${AIRFLOW_VERSION} \
&& pip install 'redis==3.2' \
&& if [ -n "${PYTHON_DEPS}" ]; then pip install ${PYTHON_DEPS}; fi \
&& apt-get purge --auto-remove -yqq $buildDeps \
&& apt-get autoremove -yqq --purge \
&& apt-get clean \
&& rm -rf \
/var/lib/apt/lists/* \
/tmp/* \
/var/tmp/* \
/usr/share/man \
/usr/share/doc \
/usr/share/doc-base
COPY entrypoint.sh /entrypoint.sh
COPY airflow.cfg ${AIRFLOW_USER_HOME}/airflow.cfg
RUN chown -R airflow: ${AIRFLOW_USER_HOME}
EXPOSE 8080 5555 8793
USER airflow
WORKDIR ${AIRFLOW_USER_HOME}
ENTRYPOINT ["/entrypoint.sh"]
CMD ["webserver"]
There is one thing you have to be aware of when working with OpenShift: by default, OpenShift runs containers with arbitrary user IDs, so container images that rely on a fixed user ID may fail to start due to permission issues.
Therefore, please make sure your container images are built according to the rules described in
https://docs.openshift.com/container-platform/4.6/openshift_images/create-images.html#images-create-guide-openshift_create-images.
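The guideline most relevant here: directories the application writes to at runtime should be owned by the root group and group-writable, because the arbitrary UID OpenShift assigns is always a member of GID 0. A sketch of the Dockerfile additions, assuming the AIRFLOW_USER_HOME path used above:

```shell
# Make the runtime directory usable by an arbitrary UID in the root group.
RUN chgrp -R 0 ${AIRFLOW_USER_HOME} && \
    chmod -R g=u ${AIRFLOW_USER_HOME}
# Also note: `pip install --user` is what produced the '/.local' error —
# an arbitrary UID has no home directory, so pip falls back to /.local.
```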
I'm deploying Django via gunicorn onto a K8s node from a Docker image.
With a Dockerfile using CMD python manage.py runserver 0.0.0.0:8000, i.e. the standard Django dev server, the backend serves requests fine.
With a Dockerfile using CMD gunicorn ..., i.e. a proper staging/production server, requests are serviced extremely slowly or not at all.
Here's the Dockerfile:
FROM python:3.9-buster
LABEL maintainer="hq#deepspaceprogram.com"
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y gcc && \
apt-get install -y git && \
apt-get install -y libcurl4 && \
apt-get install -y libpq-dev && \
apt-get install -y libssl-dev && \
apt-get install -y python3-dev && \
apt-get install -y librtmp-dev && \
apt-get install -y libcurl4-gnutls-dev && \
apt-get install -y libcurl4-openssl-dev && \
apt-get install -y postgresql-9.3 && \
apt-get install -y python-psycopg2
ENV PROJECT_ROOT /app
WORKDIR /app
# install python packages with poetry
COPY pyproject.toml .
RUN pip3 install poetry && \
poetry config virtualenvs.create false && \
poetry install --no-dev
COPY accounts accounts
COPY analytics analytics
COPY commerce commerce
COPY documents documents
COPY leafsheets leafsheets
COPY leafsheets_django leafsheets_django
COPY marketing marketing
COPY static static
COPY manage.py .
# This should be an empty file if building for staging/production
# Else (image for local dev) it should contain the complete .env
COPY .env-for-docker-image .env
# CMD python manage.py runserver 0.0.0.0:8000
CMD gunicorn \
--bind :8000 \
--workers 3 \
--worker-class gthread \
--worker-tmp-dir /dev/shm \
--timeout 120 \
--log-level debug \
--log-file - \
leafsheets_django.wsgi ;
Logs here show lots of "Connection Closing" messages.
In my settings.py I have CORS setup ok:
# Cors (ref: https://pypi.org/project/django-cors-headers/)
if DEBUG:
# CORS_ORIGIN_ALLOW_ALL = True # TODO: Remove after Django update
CORS_ALLOW_ALL_ORIGINS = True
else:
# CORS_ORIGIN_WHITELIST = ( FRONTEND_URL, ) # TODO: Remove after Django update
CORS_ALLOWED_ORIGINS = ( FRONTEND_URL, )
CORS_ALLOW_CREDENTIALS = True
ALLOWED_HOSTS = ["*"]
What's happening? How to proceed?
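One way to narrow this down is to run the same image outside the cluster and time a request directly against gunicorn; if it responds quickly there, the slowdown lies between the ingress/service and the pod rather than in the gunicorn config. A rough sketch, using a hypothetical image tag:

```shell
# Run the gunicorn image locally, bypassing the K8s service and ingress.
docker run --rm -p 8000:8000 leafsheets-image   # hypothetical tag
# In another terminal, time a request straight at gunicorn:
curl -s -o /dev/null -w 'total: %{time_total}s\n' http://localhost:8000/
```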
I'm using Docker with the python:3.7.6-slim image to dockerize a Django application.
I'm using the django-import-export plugin to import data in the admin panel; it stores the uploaded file in a temporary directory and reads it back while importing.
But on import, it gives this error:
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmppk01nf3d'
The same setup works when not using Docker.
Dockerfile
FROM python:3.7.6-slim
ARG APP_USER=appuser
RUN groupadd -r ${APP_USER} && useradd --no-log-init -r -g ${APP_USER} ${APP_USER}
RUN set -ex \
&& RUN_DEPS=" \
libpcre3 \
mime-support \
default-libmysqlclient-dev \
inkscape \
libcurl4-nss-dev libssl-dev \
" \
&& seq 1 8 | xargs -I{} mkdir -p /usr/share/man/man{} \
&& apt-get update && apt-get install -y --no-install-recommends $RUN_DEPS \
&& pip install pipenv \
&& rm -rf /var/lib/apt/lists/* \
&& mkdir -p /home/${APP_USER}/.config/inkscape \
&& chown -R ${APP_USER} /home/${APP_USER}/.config/inkscape \
# Create directories
&& mkdir /app/ \
&& mkdir /app/config/ \
&& mkdir /app/scripts/ \
&& mkdir -p /static_cdn/static_root/ \
&& chown -R ${APP_USER} /static_cdn/
WORKDIR /app/
COPY Pipfile Pipfile.lock /app/
RUN set -ex \
&& BUILD_DEPS=" \
build-essential \
libpcre3-dev \
libpq-dev \
" \
&& apt-get update && apt-get install -y --no-install-recommends $BUILD_DEPS \
&& pipenv install --deploy --system \
\
&& apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false $BUILD_DEPS \
&& rm -rf /var/lib/apt/lists/*
COPY ./src /app/
COPY scripts/ /app/scripts/
COPY configs/ /app/configs/
EXPOSE 8000
ENV UWSGI_WSGI_FILE=qcg/wsgi.py
ENV UWSGI_HTTP=:8000 UWSGI_MASTER=1 UWSGI_HTTP_AUTO_CHUNKED=1 UWSGI_HTTP_KEEPALIVE=1 UWSGI_LAZY_APPS=1 UWSGI_WSGI_ENV_BEHAVIOR=holy
ENV UWSGI_WORKERS=2 UWSGI_THREADS=4
ENV UWSGI_STATIC_MAP="/static/=/static_cdn/static_root/" UWSGI_STATIC_EXPIRES_URI="/static/.*\.[a-f0-9]{12,}\.(css|js|png|jpg|jpeg|gif|ico|woff|ttf|otf|svg|scss|map|txt) 315360000"
USER ${APP_USER}:${APP_USER}
ENTRYPOINT ["/app/scripts/docker/entrypoint.sh"]
and running the command
docker run my-image uwsgi --show-config
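Since django-import-export writes the upload to the temp directory in one request and re-reads it on the confirmation request, it is worth confirming that the temp directory exists and is writable for the non-root appuser inside the image. A couple of hypothetical diagnostic commands:

```shell
# Confirm /tmp exists and is writable for the user the container runs as.
docker run --rm my-image sh -c 'ls -ld /tmp && touch /tmp/probe && echo writable'
# Check which directory Python will actually use for temporary files:
docker run --rm my-image python -c 'import tempfile; print(tempfile.gettempdir())'
```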