I have a .Rmd file, along with a .sql file read by the .Rmd, that I'm trying to deploy in ShinyProxy. I can run it from within RStudio on my Mac.
The application loads and I can see it in ShinyProxy, but when I click on the application it launches, says "please wait", and then fails with java.lang.StackOverflowError. I tried increasing the stack size with JAVA_OPTS in the Dockerfile.
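One thing to note (an assumption worth verifying): since the stack trace appears in shinyproxy.log, the StackOverflowError seems to come from ShinyProxy's own JVM, and a JAVA_OPTS variable set inside the application image would not be picked up by that process. A sketch of raising the stack size on the ShinyProxy launch command itself (the jar path here is hypothetical):

```shell
# Raise the stack size of the JVM that runs ShinyProxy itself;
# /opt/shinyproxy/shinyproxy.jar is a placeholder for the actual install path.
java -Xss8M -jar /opt/shinyproxy/shinyproxy.jar
```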
I do see this in shinyproxy.log:
java.lang.reflect.InvocationTargetException: null
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_332]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_332]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_332]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_332]
...
Caused by: javax.naming.NoInitialContextException: Need to specify class name in environment or system property, or as an applet parameter, or in an application resource file: java.naming.factory.initial
at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:673) ~[na:1.8.0_332]
at javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:313) ~[na:1.8.0_332]
at javax.naming.InitialContext.getURLOrDefaultInitCtx(InitialContext.java:350) ~[na:1.8.0_332]
at javax.naming.InitialContext.lookup(InitialContext.java:417) ~[na:1.8.0_332]
... 140 common frames omitted
Dockerfile:
FROM openanalytics/r-base
MAINTAINER John Reber "John.Reber@jefferson.edu"
ENV JAVA_HOME /usr/lib/jvm/java-11-openjdk-amd64
RUN export JAVA_HOME
ENV JAVA_OPTS "-Xms4G -Xmx8G -Xss2G"
RUN export JAVA_OPTS
# Install Java for rJava
RUN apt-get update && \
    apt-get install -y default-jdk && \
    apt-get install -y default-jre && \
    apt-get install -y ca-certificates-java && \
    rm -rf /var/lib/apt/lists/*
RUN ["java", "-version"]
RUN R CMD javareconf
RUN apt-get update && apt-get install -y \
    libcurl4-openssl-dev \
    # libcurl4-gnutls-dev \
    libssl-dev \
    libxml2-dev && \
    rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get install -y \
    libharfbuzz0b && \
    rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get install -y \
    sudo \
    pandoc \
    pandoc-citeproc \
    libcairo2-dev \
    libxt-dev \
    libssh2-1-dev && \
    rm -rf /var/lib/apt/lists/*
WORKDIR /opt/oracle
RUN apt-get update && apt-get install -y libaio1 wget unzip \
    && wget https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip \
    && unzip instantclient-basiclite-linuxx64.zip \
    && rm -f instantclient-basiclite-linuxx64.zip \
    && cd /opt/oracle/instantclient* \
    && rm -f *jdbc* *occi* *mysql* *README *jar uidrvci genezi adrci \
    && echo /opt/oracle/instantclient* > /etc/ld.so.conf.d/oracle-instantclient.conf \
    && ldconfig
WORKDIR /
RUN apt-get update && apt-get install -y \
    libmysql++-dev \
    unixodbc-dev \
    libpq-dev && \
    rm -rf /var/lib/apt/lists/*
#RUN apt-get update && apt-get install -y \
# libxml2 \
# libssl1.1 && \
# rm -rf /var/lib/apt/lists/*
RUN R CMD javareconf
RUN ["java", "-version"]
# install needed R packages
#RUN R -e "install.packages(c('flexdashboard', 'knitr', 'plotly', 'httpuv', 'shiny', 'rJava', 'RJDBC', 'dplyr', 'readr', 'DT', 'lubridate', 'rmarkdown'), dependencies = TRUE, repo='http://cran.r-project.org')"
RUN R -e "install.packages(c('shiny'), dependencies = TRUE, repo='https://cloud.r-project.org')"
RUN R -e "install.packages(c('flexdashboard', 'dplyr', 'rJava', 'RJDBC', 'readr', 'DT', 'lubridate', 'rmarkdown'), dependencies = TRUE, repo='https://cloud.r-project.org')"
# 'sysfonts','gifski', 'Cairo', 'tidyverse',
# make directory and copy Rmarkdown flexdashboard file in it
RUN mkdir -p /prmc
COPY prmc/PRMC.Rmd /prmc/PRMC.Rmd
#COPY prmc/PRMC_Local.Rmd /prmc/PRMC_Local.Rmd
COPY prmc/prmc.sql /prmc/prmc.sql
#COPY prmc/PRMC_ACCRUAL.csv /prmc/PRMC_ACCRUAL.csv
COPY prmc/ojdbc11.jar /prmc/ojdbc11.jar
# Copy Rprofile.site to the image
COPY Rprofile.site /usr/local/lib/R/etc/
# make all app files readable (solves issue when dev in Windows, but building in Ubuntu)
RUN chmod -R 755 /prmc
# expose port on Docker container
EXPOSE 3838
# run flexdashboard as localhost and on exposed port in Docker container
CMD ["R", "-e", "rmarkdown::run('/prmc/PRMC.Rmd', shiny_args = list(port = 3838, host = '0.0.0.0'))"]
application.yml:
proxy:
  # title: Open Analytics Shiny Proxy
  title: SKCC Open Analytics ShinyProxy
  # logo-url: https://www.openanalytics.eu/shinyproxy/logo.png
  logo-url: https://ewebapp01pa.jefferson.edu/includes/images/logo-2014.jpg
  landing-page: /
  heartbeat-rate: 10000
  heartbeat-timeout: 60000
  port: 8081
  # authentication: keycloak
  authentication: simple
  admin-groups: admin
  useForwardHeaders: true
  # Example: 'simple' authentication configuration
  users:
  - name: jack
    password: XXXXXXXX
    groups: scientists, admin
  - name: jeff
    password: XXXXXXXXX
    groups: mathematicians
  # keycloak authentication
  keycloak:
    auth-server-url: https://kc.kcc.tju.edu/auth
    realm: shinyproxy
    public-client: true
    resource: shinyproxy
    credentials-secret: s2NwbneBKh10wG0fHjZjevGnLlNTt44h
    use-resource-role-mappings: false
  # Docker configuration
  docker:
    url: http://localhost:2375
    port-range-start: 20000
  specs:
  - id: 01_hello
    display-name: Hello Application
    description: Application which demonstrates the basics of a Shiny app
    container-cmd: ["R", "-e", "shinyproxy::run_01_hello()"]
    container-image: openanalytics/shinyproxy-demo
    access-groups: [scientists, mathematicians, analyze, admin]
  # - id: 06_tabsets
  #   display-name: 06_tabsets
  #   description: Application 06_tabsets demonstration
  #   container-cmd: ["R", "-e", "shinyproxy::run_06_tabsets()"]
  #   container-image: openanalytics/shinyproxy-demo
  #   access-groups: []
  # - id: euler
  #   display-name: Euler's number
  #   container-cmd: [ "R", "-e", "shiny::runApp('/root/euler')" ]
  #   container-image: openanalytics/shinyproxy-template
  #   access-groups: scientists
  - id: prmc
    display-name: PRMC Dashboard
    description: (Protocol Review Monitoring Committee Dashboard)
    docker-cmd: ["R", "-e rmarkdown::run('/prmc/PRMC.Rmd')"]
    container-image: prmc_dashboard3
    access-groups: [scientists, mathematicians, analyze, admin]
logging:
  file:
    name: shinyproxy.log
  level:
    root: DEBUG
I'm trying to build a Docker container with the following command:
sudo docker build docker_calculadora/
but while it's building, at step 9 the following error appears:
Step 9/27 : RUN set -ex; export GNUPGHOME="$(mktemp -d)"; for key in $GPG_KEYS; do gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; done; gpg --batch --export $GPG_KEYS > /etc/apt/trusted.gpg.d/mariadb.gpg; command -v gpgconf > /dev/null && gpgconf --kill all || :; rm -r "$GNUPGHOME"; apt-key list
---> Running in a80677ab986c
mktemp -d
export GNUPGHOME=/tmp/tmp.TiWBSXwFOS
gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys 177F4010FE56CA3336300305F1656F24C74CD1D8
gpg: keybox '/tmp/tmp.TiWBSXwFOS/pubring.kbx' created
gpg: keyserver receive failed: No name
The command '/bin/sh -c set -ex; export GNUPGHOME="$(mktemp -d)"; for key in $GPG_KEYS; do gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; done; gpg --batch --export $GPG_KEYS > /etc/apt/trusted.gpg.d/mariadb.gpg; command -v gpgconf > /dev/null && gpgconf --kill all || :; rm -r "$GNUPGHOME"; apt-key list' returned a non-zero code: 2
My Dockerfile:
# vim:set ft=dockerfile:
FROM ubuntu:focal
# add our user and group first to make sure their IDs get assigned consistently, regardless of whatever dependencies get added
RUN groupadd -r mysql && useradd -r -g mysql mysql
# https://bugs.debian.org/830696 (apt uses gpgv by default in newer releases, rather than gpg)
RUN set -ex; \
    apt-get update; \
    if ! which gpg; then \
        apt-get install -y --no-install-recommends gnupg; \
    fi; \
    if ! gpg --version | grep -q '^gpg (GnuPG) 1\.'; then \
# Ubuntu includes "gnupg" (not "gnupg2", but still 2.x), but not dirmngr, and gnupg 2.x requires dirmngr
# so, if we're not running gnupg 1.x, explicitly install dirmngr too
        apt-get install -y --no-install-recommends dirmngr; \
    fi; \
    rm -rf /var/lib/apt/lists/*
# add gosu for easy step-down from root
# https://github.com/tianon/gosu/releases
ENV GOSU_VERSION 1.12
RUN set -eux; \
    savedAptMark="$(apt-mark showmanual)"; \
    apt-get update; \
    apt-get install -y --no-install-recommends ca-certificates wget; \
    rm -rf /var/lib/apt/lists/*; \
    dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')"; \
    wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch"; \
    wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch.asc"; \
    export GNUPGHOME="$(mktemp -d)"; \
    gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4; \
    gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu; \
    gpgconf --kill all; \
    rm -rf "$GNUPGHOME" /usr/local/bin/gosu.asc; \
    apt-mark auto '.*' > /dev/null; \
    [ -z "$savedAptMark" ] || apt-mark manual $savedAptMark > /dev/null; \
    apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; \
    chmod +x /usr/local/bin/gosu; \
    gosu --version; \
    gosu nobody true
RUN mkdir /docker-entrypoint-initdb.d
# install "pwgen" for randomizing passwords
# install "tzdata" for /usr/share/zoneinfo/
# install "xz-utils" for .sql.xz docker-entrypoint-initdb.d files
RUN set -ex; \
    apt-get update; \
    apt-get install -y --no-install-recommends \
        pwgen \
        tzdata \
        xz-utils \
    ; \
    rm -rf /var/lib/apt/lists/*
ENV GPG_KEYS \
# pub   rsa4096 2016-03-30 [SC]
#       177F 4010 FE56 CA33 3630 0305 F165 6F24 C74C D1D8
# uid           [ unknown] MariaDB Signing Key <signing-key@mariadb.org>
# sub   rsa4096 2016-03-30 [E]
    177F4010FE56CA3336300305F1656F24C74CD1D8
RUN set -ex; \
    export GNUPGHOME="$(mktemp -d)"; \
    for key in $GPG_KEYS; do \
        gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; \
    done; \
    gpg --batch --export $GPG_KEYS > /etc/apt/trusted.gpg.d/mariadb.gpg; \
    command -v gpgconf > /dev/null && gpgconf --kill all || :; \
    rm -r "$GNUPGHOME"; \
    apt-key list
# bashbrew-architectures: amd64 arm64v8 ppc64le
ENV MARIADB_MAJOR 10.5
ENV MARIADB_VERSION 1:10.5.8+maria~focal
# release-status:Stable
# (https://downloads.mariadb.org/mariadb/+releases/)
RUN set -e; \
    echo "deb http://ftp.osuosl.org/pub/mariadb/repo/$MARIADB_MAJOR/ubuntu focal main" > /etc/apt/sources.list.d/mariadb.list; \
    { \
        echo 'Package: *'; \
        echo 'Pin: release o=MariaDB'; \
        echo 'Pin-Priority: 999'; \
    } > /etc/apt/preferences.d/mariadb
# add repository pinning to make sure dependencies from this MariaDB repo are preferred over Debian dependencies
# libmariadbclient18 : Depends: libmysqlclient18 (= 5.5.42+maria-1~wheezy) but 5.5.43-0+deb7u1 is to be installed
# the "/var/lib/mysql" stuff here is because the mysql-server postinst doesn't have an explicit way to disable the mysql_install_db codepath besides having a database already "configured" (ie, stuff in /var/lib/mysql/mysql)
# also, we set debconf keys to make APT a little quieter
RUN set -ex; \
    { \
        echo "mariadb-server-$MARIADB_MAJOR" mysql-server/root_password password 'unused'; \
        echo "mariadb-server-$MARIADB_MAJOR" mysql-server/root_password_again password 'unused'; \
    } | debconf-set-selections; \
    apt-get update; \
    apt-get install -y \
        "mariadb-server=$MARIADB_VERSION" \
# mariadb-backup is installed at the same time so that `mysql-common` is only installed once from just mariadb repos
        mariadb-backup \
        socat \
    ; \
    rm -rf /var/lib/apt/lists/*; \
# purge and re-create /var/lib/mysql with appropriate ownership
    rm -rf /var/lib/mysql; \
    mkdir -p /var/lib/mysql /var/run/mysqld; \
    chown -R mysql:mysql /var/lib/mysql /var/run/mysqld; \
# ensure that /var/run/mysqld (used for socket and lock files) is writable regardless of the UID our mysqld instance ends up having at runtime
    chmod 777 /var/run/mysqld; \
# comment out a few problematic configuration values
    find /etc/mysql/ -name '*.cnf' -print0 \
        | xargs -0 grep -lZE '^(bind-address|log|user\s)' \
        | xargs -rt -0 sed -Ei 's/^(bind-address|log|user\s)/#&/'; \
# don't reverse lookup hostnames, they are usually another container
    echo '[mysqld]\nskip-host-cache\nskip-name-resolve' > /etc/mysql/conf.d/docker.cnf
VOLUME /var/lib/mysql
COPY docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 3306
RUN apt-get update
#RUN apt-get install -y software-properties-common
#RUN apt-get update
RUN apt-get install -y apache2 curl nano php libapache2-mod-php php7.4-mysql
EXPOSE 80
COPY calculadora.html /var/www/html/
COPY calculadora.php /var/www/html/
COPY success.html /var/www/html/
COPY start.sh /
COPY 50-server.cnf /etc/mysql/mariadb.conf.d/
RUN chmod 777 /start.sh
CMD ["/start.sh"]
The error occurs because the SKS keyserver pool that the MariaDB image's Dockerfile relies on (ha.pool.sks-keyservers.net) is down; the keyserver references just need to be updated.
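One way to apply that update (a sketch; keyserver.ubuntu.com is one commonly used replacement, and the Dockerfile path is assumed to be docker_calculadora/Dockerfile):

```shell
# Replace the defunct sks-keyservers pool with a live keyserver in the Dockerfile.
sed -i 's|ha\.pool\.sks-keyservers\.net|keyserver.ubuntu.com|g' docker_calculadora/Dockerfile
```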
I am trying to implement a multi-stage Docker build to deploy a Django web app.
An error occurred while trying to copy files from one Docker stage to another.
I am sharing the Dockerfile and the error traceback for your reference.
The same Docker build worked a day ago; somehow, it is not working today. I have searched for a workaround, but no luck.
My Dockerfile:
FROM node:10 AS frontend
ARG server=local
RUN mkdir -p /front_code
WORKDIR /front_code
ADD . /front_code/
RUN cd /front_code/webapp/app \
    && npm install js-beautify@1.6.12 \
    && npm install --save moment@2.22.2 \
    && npm install --save fullcalendar@3.10.1 \
    && npm install --save pdfjs-dist@2.3.200 \
    && npm install \
    && npm install --save @babel/runtime \
    && yarn list && ls -l /front_code/webapp/app/static \
    && npm run build \
    && rm -rf node_modules \
    && cd /front_code/webapp/market-app \
    && yarn install \
    && yarn list && ls -l /front_code/webapp/market-app/static \
    && yarn build \
    && rm -rf node_modules \
FROM python:3.8-alpine AS base
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ARG server=local
ARG ENV_BUCKET_NAME=""
ARG REMOTE_ENV_FILE_NAME=""
ARG FRONT_END_MANIFEST=""
ARG s3_bucket=""
ARG AWS_ACCESS_KEY_ID=""
ARG AWS_SECRET_ACCESS_KEY=""
ARG RDS_DB_NAME=""
ARG RDS_USERNAME=""
ARG RDS_PASSWORD=""
ENV server="$server" ENV_BUCKET_NAME="$ENV_BUCKET_NAME" REMOTE_ENV_FILE_NAME="$REMOTE_ENV_FILE_NAME" FRONT_END_MANIFEST="$FRONT_END_MANIFEST" s3_bucket="$s3_bucket" AWS_SECRET_ACCESS_KEY="$AWS_SECRET_ACCESS_KEY" AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID" RDS_DB_NAME="$RDS_DB_NAME" RDS_USERNAME="$RDS_USERNAME" RDS_PASSWORD="$RDS_PASSWORD"
RUN mkdir -p /code
WORKDIR /code
ADD requirements/updated_requirements.txt /code/
ADD . /code/
COPY --from=frontend /front_code/webapp/app/static/ /code/webapp/app/static/
COPY --from=frontend /front_code/webapp/market-app/static/ /code/webapp/market-app/static/
COPY --from=frontend /front_code/templates/webapp/index.html /code/templates/webapp/index.html
COPY --from=frontend /front_code/templates/market/market-single-page.html /code/templates/market/market-single-page.html
RUN apk update \
    && apk --no-cache add --virtual build-dependencies \
        build-base \
        zlib-dev \
        jpeg-dev \
        libc-dev \
        libffi-dev \
        musl-dev \
        mariadb-connector-c-dev \
        python3-dev \
        libxslt-dev \
        libxml2-dev \
        supervisor \
        openssh \
    && pip install -qq -r updated_requirements.txt \
    && rm -rf .cache/pip \
    && apk del build-dependencies \
    && apk add --no-cache libjpeg nginx libxml2 libxslt-dev libxml2-dev mariadb-connector-c-dev \
    && cd /code/webapp/app \
    && python format_index_html.py \
    && cd /code/ \
    && python utility/s3_upload_tiny.py \
    && cd /code/webapp/market-app \
    && python format_index_html.py \
    && python format_index_html_vendor.py \
    && cd /code \
    && python utility/s3_upload_tiny_market.py \
    && python utility/cron/setup_initial_env.py \
    && chmod +x utility/cron/cron_job.sh \
    && rm /etc/nginx/conf.d/default.conf \
    && cd /code \
    && touch /var/log/cron.log \
    && mkdir -p /etc/cron.d/ \
    && cp utility/cron/django.cron /etc/cron.d/django.cron \
    && crontab /etc/cron.d/django.cron
COPY nginx.conf /etc/nginx/
COPY django-site-nginx.conf /etc/nginx/conf.d/
COPY uwsgi.ini /etc/uwsgi/
COPY supervisord.conf /etc/supervisor/
#RUN python manage.py collectstatic --noinput && python manage.py migrate --noinput
WORKDIR /code
EXPOSE 80
CMD ["/usr/local/bin/supervisord", "-c", "/etc/supervisor/supervisord.conf"]
Error traceback:
Step 23/35 : ADD . /code/
---> 1b964365c334
Step 24/35 : COPY --from=frontend /front_code/webapp/app/static/ /code/webapp/app/static/
invalid from flag value frontend: pull access denied for frontend, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Notes:
1) I have a free Docker account.
2) I am using AWS CodePipeline to build and deploy.
3) The Docker version is 19.03.3.
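One low-cost check (a suggestion, not a confirmed diagnosis) is to build only the first stage with Docker's --target flag; if the parser is not recognizing the stage name "frontend", this should fail in a similar way:

```shell
# Build just the "frontend" stage to confirm the stage name resolves locally
# (the tag name frontend-only is illustrative).
docker build --target frontend -t frontend-only .
```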
I'm deploying Django via gunicorn onto a K8s node from a Docker image.
With a Dockerfile using CMD python manage.py runserver 0.0.0.0:8000, i.e. the standard Django dev server, the backend services requests fine.
With a Dockerfile using CMD gunicorn ..., i.e. a proper staging/production server, requests are serviced extremely slowly or not at all:
Here's the Dockerfile:
FROM python:3.9-buster
LABEL maintainer="hq@deepspaceprogram.com"
RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y gcc && \
    apt-get install -y git && \
    apt-get install -y libcurl4 && \
    apt-get install -y libpq-dev && \
    apt-get install -y libssl-dev && \
    apt-get install -y python3-dev && \
    apt-get install -y librtmp-dev && \
    apt-get install -y libcurl4-gnutls-dev && \
    apt-get install -y libcurl4-openssl-dev && \
    apt-get install -y postgresql-9.3 && \
    apt-get install -y python-psycopg2
ENV PROJECT_ROOT /app
WORKDIR /app
# install python packages with poetry
COPY pyproject.toml .
RUN pip3 install poetry && \
poetry config virtualenvs.create false && \
poetry install --no-dev
COPY accounts accounts
COPY analytics analytics
COPY commerce commerce
COPY documents documents
COPY leafsheets leafsheets
COPY leafsheets_django leafsheets_django
COPY marketing marketing
COPY static static
COPY manage.py .
# This should be an empty file if building for staging/production
# Else (image for local dev) it should contain the complete .env
COPY .env-for-docker-image .env
# CMD python manage.py runserver 0.0.0.0:8000
CMD gunicorn \
    --bind :8000 \
    --workers 3 \
    --worker-class gthread \
    --worker-tmp-dir /dev/shm \
    --timeout 120 \
    --log-level debug \
    --log-file - \
    leafsheets_django.wsgi
Logs here show lots of "Connection Closing" messages.
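One detail worth checking (an observation, not a confirmed diagnosis): with --worker-class gthread, gunicorn's --threads option defaults to 1, so each of the 3 workers handles only one request at a time; raising the thread count is a common tweak for this worker class. A sketch with illustrative values:

```shell
# gthread workers size their thread pool with --threads (default 1);
# without it, gthread behaves much like the sync worker.
gunicorn --bind :8000 --workers 3 --worker-class gthread --threads 4 \
    --worker-tmp-dir /dev/shm --timeout 120 --log-level debug --log-file - \
    leafsheets_django.wsgi
```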
In my settings.py I have CORS set up correctly:
# Cors (ref: https://pypi.org/project/django-cors-headers/)
if DEBUG:
    # CORS_ORIGIN_ALLOW_ALL = True  # TODO: Remove after Django update
    CORS_ALLOW_ALL_ORIGINS = True
else:
    # CORS_ORIGIN_WHITELIST = ( FRONTEND_URL, )  # TODO: Remove after Django update
    CORS_ALLOWED_ORIGINS = ( FRONTEND_URL, )
CORS_ALLOW_CREDENTIALS = True
ALLOWED_HOSTS = ["*"]
What's happening? How to proceed?
I'm using Docker with the python:3.7.6-slim image to dockerize a Django application.
I'm using the django-import-export plugin to import data in the admin panel; it stores the uploaded file in a temporary directory to read from while importing.
But on import, it raises:
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmppk01nf3d'
The same import works when not running in Docker.
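If the container runs several uWSGI workers or instances, one possibility (an assumption, not confirmed from the traceback) is that the upload step and the confirm step of django-import-export land on different processes or containers, so the default temp-folder file is gone by the second request. The plugin's intermediate storage is configurable in settings.py; a sketch, assuming django-import-export 2.x:

```python
# settings.py — switch django-import-export's intermediate storage from
# TempFolderStorage to the Django cache, which survives across workers
# when a shared cache backend is configured.
IMPORT_EXPORT_TMP_STORAGE = 'import_export.tmp_storages.CacheStorage'
```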
Dockerfile
FROM python:3.7.6-slim
ARG APP_USER=appuser
RUN groupadd -r ${APP_USER} && useradd --no-log-init -r -g ${APP_USER} ${APP_USER}
RUN set -ex \
    && RUN_DEPS=" \
        libpcre3 \
        mime-support \
        default-libmysqlclient-dev \
        inkscape \
        libcurl4-nss-dev libssl-dev \
    " \
    && seq 1 8 | xargs -I{} mkdir -p /usr/share/man/man{} \
    && apt-get update && apt-get install -y --no-install-recommends $RUN_DEPS \
    && pip install pipenv \
    && rm -rf /var/lib/apt/lists/* \
    && mkdir -p /home/${APP_USER}/.config/inkscape \
    && chown -R ${APP_USER} /home/${APP_USER}/.config/inkscape \
    # Create directories
    && mkdir /app/ \
    && mkdir /app/config/ \
    && mkdir /app/scripts/ \
    && mkdir -p /static_cdn/static_root/ \
    && chown -R ${APP_USER} /static_cdn/
WORKDIR /app/
COPY Pipfile Pipfile.lock /app/
RUN set -ex \
    && BUILD_DEPS=" \
        build-essential \
        libpcre3-dev \
        libpq-dev \
    " \
    && apt-get update && apt-get install -y --no-install-recommends $BUILD_DEPS \
    && pipenv install --deploy --system \
    \
    && apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false $BUILD_DEPS \
    && rm -rf /var/lib/apt/lists/*
COPY ./src /app/
COPY scripts/ /app/scripts/
COPY configs/ /app/configs/
EXPOSE 8000
ENV UWSGI_WSGI_FILE=qcg/wsgi.py
ENV UWSGI_HTTP=:8000 UWSGI_MASTER=1 UWSGI_HTTP_AUTO_CHUNKED=1 UWSGI_HTTP_KEEPALIVE=1 UWSGI_LAZY_APPS=1 UWSGI_WSGI_ENV_BEHAVIOR=holy
ENV UWSGI_WORKERS=2 UWSGI_THREADS=4
ENV UWSGI_STATIC_MAP="/static/=/static_cdn/static_root/" UWSGI_STATIC_EXPIRES_URI="/static/.*\.[a-f0-9]{12,}\.(css|js|png|jpg|jpeg|gif|ico|woff|ttf|otf|svg|scss|map|txt) 315360000"
USER ${APP_USER}:${APP_USER}
ENTRYPOINT ["/app/scripts/docker/entrypoint.sh"]
and the command I run:
docker run my-image uwsgi --show-config
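Note that with an ENTRYPOINT set in the image, the words after the image name are passed as arguments to the entrypoint script rather than executed directly, so whether uwsgi actually starts depends on how entrypoint.sh handles its arguments (e.g. whether it ends with exec "$@"). A minimal illustration of that mechanic, outside Docker:

```shell
# An entrypoint-style script simply receives the docker run trailing words
# as its positional arguments (the demo path is hypothetical).
cat > /tmp/entrypoint-demo.sh <<'EOF'
#!/bin/sh
echo "entrypoint got: $@"
EOF
chmod +x /tmp/entrypoint-demo.sh
/tmp/entrypoint-demo.sh uwsgi --show-config
# prints: entrypoint got: uwsgi --show-config
```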