I have a .Rmd file, along with a .sql file that the .Rmd reads, that I'm trying to deploy in ShinyProxy. I am able to run it from within RStudio on my Mac.
The application loads and I can see it in ShinyProxy, but when I click on it, it launches, says "please wait", and then fails with java.lang.StackOverflowError. I tried increasing the stack size with JAVA_OPTS in the Dockerfile.
I do see this in shinyproxy.log:
java.lang.reflect.InvocationTargetException: null
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_332]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_332]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_332]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_332]
...
Caused by: javax.naming.NoInitialContextException: Need to specify class name in environment or system property, or as an applet parameter, or in an application resource file: java.naming.factory.initial
at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:673) ~[na:1.8.0_332]
at javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:313) ~[na:1.8.0_332]
at javax.naming.InitialContext.getURLOrDefaultInitCtx(InitialContext.java:350) ~[na:1.8.0_332]
at javax.naming.InitialContext.lookup(InitialContext.java:417) ~[na:1.8.0_332]
... 140 common frames omitted
Dockerfile:
FROM openanalytics/r-base
MAINTAINER John Reber "John.Reber@jefferson.edu"

ENV JAVA_HOME /usr/lib/jvm/java-11-openjdk-amd64
RUN export JAVA_HOME
ENV JAVA_OPTS "-Xms4G -Xmx8G -Xss2G"
RUN export JAVA_OPTS

# Install Java for rJava
RUN apt-get update && \
    apt-get install -y default-jdk && \
    apt-get install -y default-jre && \
    apt-get install -y ca-certificates-java && \
    rm -rf /var/lib/apt/lists/*
RUN ["java", "-version"]
CMD javareconf

RUN apt-get update && apt-get install -y \
    libcurl4-openssl-dev \
    # libcurl4-gnutls-dev \
    libssl-dev \
    libxml2-dev && \
    rm -rf /var/lib/apt/lists/*

RUN apt-get update && apt-get install -y \
    libharfbuzz0b && \
    rm -rf /var/lib/apt/lists/*

RUN apt-get update && apt-get install -y \
    sudo \
    pandoc \
    pandoc-citeproc \
    libcairo2-dev \
    libxt-dev \
    libssh2-1-dev && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /opt/oracle
RUN apt-get update && apt-get install -y libaio1 wget unzip \
    && wget https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip \
    && unzip instantclient-basiclite-linuxx64.zip \
    && rm -f instantclient-basiclite-linuxx64.zip \
    && cd /opt/oracle/instantclient* \
    && rm -f *jdbc* *occi* *mysql* *README *jar uidrvci genezi adrci \
    && echo /opt/oracle/instantclient* > /etc/ld.so.conf.d/oracle-instantclient.conf \
    && ldconfig
WORKDIR /

RUN apt-get update && apt-get install -y \
    libmysql++-dev \
    unixodbc-dev \
    libpq-dev && \
    rm -rf /var/lib/apt/lists/*

#RUN apt-get update && apt-get install -y \
#    libxml2 \
#    libssl1.1 && \
#    rm -rf /var/lib/apt/lists/*

CMD javareconf
RUN ["java", "-version"]

# install needed R packages
#RUN R -e "install.packages(c('flexdashboard', 'knitr', 'plotly', 'httpuv', 'shiny', 'rJava', 'RJDBC', 'dplyr', 'readr', 'DT', 'lubridate', 'rmarkdown'), dependencies = TRUE, repo='http://cran.r-project.org')"
RUN R -e "install.packages(c('shiny'), dependencies = TRUE, repo='https://cloud.r-project.org')"
RUN R -e "install.packages(c('flexdashboard', 'dplyr', 'rJava', 'RJDBC', 'readr', 'DT', 'lubridate', 'rmarkdown'), dependencies = TRUE, repo='https://cloud.r-project.org')"
# 'sysfonts','gifski', 'Cairo', 'tidyverse',

# make directory and copy Rmarkdown flexdashboard file in it
RUN mkdir -p /prmc
COPY prmc/PRMC.Rmd /prmc/PRMC.Rmd
#COPY prmc/PRMC_Local.Rmd /prmc/PRMC_Local.Rmd
COPY prmc/prmc.sql /prmc/prmc.sql
#COPY prmc/PRMC_ACCRUAL.csv /prmc/PRMC_ACCRUAL.csv
COPY prmc/ojdbc11.jar /prmc/ojdbc11.jar

# Copy Rprofile.site to the image
COPY Rprofile.site /usr/local/lib/R/etc/

# make all app files readable (solves issue when dev in Windows, but building in Ubuntu)
RUN chmod -R 755 /prmc

# expose port on Docker container
EXPOSE 3838

# run flexdashboard as localhost and on exposed port in Docker container
CMD ["R", "-e", "rmarkdown::run('/prmc/PRMC.Rmd', shiny_args = list(port = 3838, host = '0.0.0.0'))"]
application.yml:
proxy:
  # title: Open Analytics Shiny Proxy
  title: SKCC Open Analytics ShinyProxy
  # logo-url: https://www.openanalytics.eu/shinyproxy/logo.png
  logo-url: https://ewebapp01pa.jefferson.edu/includes/images/logo-2014.jpg
  landing-page: /
  heartbeat-rate: 10000
  heartbeat-timeout: 60000
  port: 8081
  # authentication: keycloak
  authentication: simple
  admin-groups: admin
  useForwardHeaders: true
  # Example: 'simple' authentication configuration
  users:
  - name: jack
    password: XXXXXXXX
    groups: scientists, admin
  - name: jeff
    password: XXXXXXXXX
    groups: mathematicians
  # keycloak authentication
  keycloak:
    auth-server-url: https://kc.kcc.tju.edu/auth
    realm: shinyproxy
    public-client: true
    resource: shinyproxy
    credentials-secret: s2NwbneBKh10wG0fHjZjevGnLlNTt44h
    use-resource-role-mappings: false
  # Docker configuration
  docker:
    url: http://localhost:2375
    port-range-start: 20000
  specs:
  - id: 01_hello
    display-name: Hello Application
    description: Application which demonstrates the basics of a Shiny app
    container-cmd: ["R", "-e", "shinyproxy::run_01_hello()"]
    container-image: openanalytics/shinyproxy-demo
    access-groups: [scientists, mathematicians, analyze, admin]
  # - id: 06_tabsets
  #   display-name: 06_tabsets
  #   description: Application 06_tabsets demonstration
  #   container-cmd: ["R", "-e", "shinyproxy::run_06_tabsets()"]
  #   container-image: openanalytics/shinyproxy-demo
  #   access-groups: []
  # - id: euler
  #   display-name: Euler's number
  #   container-cmd: [ "R", "-e", "shiny::runApp('/root/euler')" ]
  #   container-image: openanalytics/shinyproxy-template
  #   access-groups: scientists
  - id: prmc
    display-name: PRMC Dashboard
    description: (Protocol Review Monitoring Committee Dashboard)
    docker-cmd: ["R", "-e rmarkdown::run('/prmc/PRMC.Rmd')"]
    container-image: prmc_dashboard3
    access-groups: [scientists, mathematicians, analyze, admin]

logging:
  file:
    name: shinyproxy.log
  level:
    root: DEBUG
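As an aside, the prmc spec above uses a docker-cmd key, whereas ShinyProxy's documented spec format uses container-cmd, with each command argument as its own list element. A corrected sketch of that entry (same image and groups as above; the port and host arguments mirror the Dockerfile's CMD):

```yaml
  - id: prmc
    display-name: PRMC Dashboard
    description: Protocol Review Monitoring Committee Dashboard
    container-cmd: ["R", "-e", "rmarkdown::run('/prmc/PRMC.Rmd', shiny_args = list(port = 3838, host = '0.0.0.0'))"]
    container-image: prmc_dashboard3
    access-groups: [scientists, mathematicians, analyze, admin]
```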
I tried to install Airflow via my own image from a public Docker Hub repository. It works perfectly locally, but when I tried to use it on OpenShift I got the error below:
ERROR: Could not install packages due to an OSError: [Errno 13]
Permission denied: '/.local' Check the permissions.
My Dockerfile works on Windows and Ubuntu.
# VERSION 2.0.0
# AUTHOR: Bruno
# DESCRIPTION: Basic Airflow container
FROM python:3.8-slim-buster
LABEL maintainer="Bruno"
# Never prompt the user for choices on installation/configuration of packages
ENV DEBIAN_FRONTEND noninteractive
ENV TERM linux
COPY requirements.txt .
RUN pip install --user -r requirements.txt --no-cache-dir
# Airflow
ARG AIRFLOW_VERSION=2.0.0
ARG AIRFLOW_USER_HOME=/usr/local/airflow
ENV AIRFLOW_HOME=${AIRFLOW_USER_HOME}
# Define en_US.
ENV LANGUAGE en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LC_ALL en_US.UTF-8
ENV LC_CTYPE en_US.UTF-8
ENV LC_MESSAGES en_US.UTF-8
# Disable noisy "Handling signal" log messages:
# ENV GUNICORN_CMD_ARGS --log-level WARNING
RUN set -ex \
    && buildDeps=' \
        freetds-dev \
        libkrb5-dev \
        libsasl2-dev \
        libssl-dev \
        libffi-dev \
        libpq-dev \
        git \
    ' \
    && apt-get update -yqq \
    && apt-get upgrade -yqq \
    && apt-get install -yqq --no-install-recommends \
        $buildDeps \
        freetds-bin \
        build-essential \
        default-libmysqlclient-dev \
        apt-utils \
        curl \
        rsync \
        netcat \
        locales \
    && sed -i 's/^# en_US.UTF-8 UTF-8$/en_US.UTF-8 UTF-8/g' /etc/locale.gen \
    && locale-gen \
    && update-locale LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 \
    && useradd -ms /bin/bash -d ${AIRFLOW_USER_HOME} airflow \
    && pip install -U pip setuptools wheel \
    && pip install pytz \
    && pip install pyOpenSSL \
    && pip install ndg-httpsclient \
    && pip install pyasn1 \
    && pip install apache-airflow[crypto,celery,postgres,kubernetes,hive,jdbc,mysql,ssh${AIRFLOW_DEPS:+,}${AIRFLOW_DEPS}]==${AIRFLOW_VERSION} \
    && pip install 'redis==3.2' \
    && if [ -n "${PYTHON_DEPS}" ]; then pip install ${PYTHON_DEPS}; fi \
    && apt-get purge --auto-remove -yqq $buildDeps \
    && apt-get autoremove -yqq --purge \
    && apt-get clean \
    && rm -rf \
        /var/lib/apt/lists/* \
        /tmp/* \
        /var/tmp/* \
        /usr/share/man \
        /usr/share/doc \
        /usr/share/doc-base
COPY entrypoint.sh /entrypoint.sh
COPY airflow.cfg ${AIRFLOW_USER_HOME}/airflow.cfg
RUN chown -R airflow: ${AIRFLOW_USER_HOME}
EXPOSE 8080 5555 8793
USER airflow
WORKDIR ${AIRFLOW_USER_HOME}
ENTRYPOINT ["/entrypoint.sh"]
CMD ["webserver"]
In this context there is one thing you have to be aware of when working with OpenShift: by default, OpenShift runs containers with arbitrary user IDs, so container images that rely on a fixed user ID may fail to start due to permission issues.
Therefore, please make sure your container images are built according to the rules described in
https://docs.openshift.com/container-platform/4.6/openshift_images/create-images.html#images-create-guide-openshift_create-images.
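Following those rules usually means dropping the assumption of a fixed UID. A sketch of the relevant Dockerfile changes (it reuses this image's AIRFLOW_USER_HOME; treat it as a starting point, not a drop-in fix):

```dockerfile
# OpenShift assigns an arbitrary UID at runtime, but that UID is always a
# member of the root group (GID 0) -- so give the group the same rights as
# the owner instead of chown-ing to a fixed "airflow" user.
RUN chgrp -R 0 ${AIRFLOW_USER_HOME} && \
    chmod -R g=u ${AIRFLOW_USER_HOME}

# An arbitrary UID has no passwd entry, so $HOME may be unset and tools such
# as pip fall back to '/' -- likely where the '/.local' in the error above
# comes from, given the `pip install --user` layer. Pin HOME explicitly:
ENV HOME=${AIRFLOW_USER_HOME}
```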
I'm creating a new instance in AWS and adding some user data; part of the job is to create an .sh file and then execute it.
I'm trying:
#!/bin/bash -x
cd /tmp
INSTANCE_ID=$(curl http://169.254.169.254/latest/meta-data/instance-id)
sudo wget -O ethminer.tar.gz https://github.com/ethereum-mining/ethminer/releases/download/v0.18.0/ethminer-0.18.0-cuda-9-linux-x86_64.tar.gz
sudo tar xvfz ethminer.tar.gz
cd bin
cat > runner.sh << __EOF__
#!/bin/bash -x
SERVERS=(us1 us2 eu1 asia1)
while (true); do
PREFERRED_SERVER=\${!SERVERS[\${!RANDOM} % \${!#SERVERS[@]}]}
./ethminer \
-P stratums://xxx.${!INSTANCE_ID}@\${!PREFERRED_SERVER}.ethermine.org:5555 \
-P stratums://xxx.${!INSTANCE_ID}@us1.ethermine.org:5555 \
-P stratums://xxx.${!INSTANCE_ID}@us2.ethermine.org:5555 \
-P stratums://xxx.${!INSTANCE_ID}@eu1.ethermine.org:5555 \
-P stratums://xxx.${!INSTANCE_ID}@asia1.ethermine.org:5555 \
>> /tmp/ethminer.log 2>&1
done
__EOF__
sudo chmod +x runner.sh
sudo nohup ./runner.sh &
Everything works except the .sh part: my command creates the runner.sh script, but it is empty.
The UserData does not work because it was designed for CloudFormation, so it has incorrect syntax for use on a standalone instance. The script with the correct syntax is below, and it will generate your runner.sh. I haven't tested the runner's functionality, only the creation of runner.sh.
#!/bin/bash -x
cd /tmp
INSTANCE_ID=$(curl http://169.254.169.254/latest/meta-data/instance-id)
sudo wget -O ethminer.tar.gz https://github.com/ethereum-mining/ethminer/releases/download/v0.18.0/ethminer-0.18.0-cuda-9-linux-x86_64.tar.gz
sudo tar xvfz ethminer.tar.gz
cd bin
cat > runner.sh << __EOF__
#!/bin/bash -x
SERVERS=(us1 us2 eu1 asia1)
while (true); do
PREFERRED_SERVER=\${SERVERS[\${RANDOM} % \${#SERVERS[@]}]}
./ethminer \
-P stratums://xxx.${INSTANCE_ID}@\${PREFERRED_SERVER}.ethermine.org:5555 \
-P stratums://xxx.${INSTANCE_ID}@us1.ethermine.org:5555 \
-P stratums://xxx.${INSTANCE_ID}@us2.ethermine.org:5555 \
-P stratums://xxx.${INSTANCE_ID}@eu1.ethermine.org:5555 \
-P stratums://xxx.${INSTANCE_ID}@asia1.ethermine.org:5555 \
>> /tmp/ethminer.log 2>&1
done
__EOF__
sudo chmod +x runner.sh
sudo nohup ./runner.sh &
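The fix hinges on which expansions happen when the heredoc is written versus when runner.sh later runs. A minimal, self-contained sketch of that distinction (the variable and file names here are illustrative):

```shell
# Unescaped ${NAME} expands while the heredoc is written (like ${INSTANCE_ID}
# above); escaped \${RANDOM} is written into the file literally and only
# expands when the generated script itself runs (like \${PREFERRED_SERVER}).
NAME=world
demo=$(mktemp)
cat > "$demo" << __EOF__
#!/bin/bash
echo "expanded now: ${NAME}"
echo "expanded later: \${RANDOM}"
__EOF__
cat "$demo"
# -> the file contains "world" already substituted, but keeps ${RANDOM} literal
```

The same logic explains the original failure: CloudFormation's `${!VAR}` syntax is not valid bash indirection here, so the heredoc write aborts and leaves runner.sh empty.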
I am trying to implement a multi-stage Docker build to deploy a Django web app.
An error occurred while trying to copy files from one Docker stage to another.
I am sharing the Dockerfile and error traceback for your reference.
The same Docker build worked one day ago; somehow, it is not working today. I have searched for a workaround, but no luck.
My Dockerfile:
FROM node:10 AS frontend
ARG server=local
RUN mkdir -p /front_code
WORKDIR /front_code
ADD . /front_code/
RUN cd /front_code/webapp/app \
    && npm install js-beautify@1.6.12 \
    && npm install --save moment@2.22.2 \
    && npm install --save fullcalendar@3.10.1 \
    && npm install --save pdfjs-dist@2.3.200 \
    && npm install \
    && npm install --save @babel/runtime \
    && yarn list && ls -l /front_code/webapp/app/static \
    && npm run build \
    && rm -rf node_modules \
    && cd /front_code/webapp/market-app \
    && yarn install \
    && yarn list && ls -l /front_code/webapp/market-app/static \
    && yarn build \
    && rm -rf node_modules \
FROM python:3.8-alpine AS base
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ARG server=local
ARG ENV_BUCKET_NAME=""
ARG REMOTE_ENV_FILE_NAME=""
ARG FRONT_END_MANIFEST=""
ARG s3_bucket=""
ARG AWS_ACCESS_KEY_ID=""
ARG AWS_SECRET_ACCESS_KEY=""
ARG RDS_DB_NAME=""
ARG RDS_USERNAME=""
ARG RDS_PASSWORD=""
ENV server="$server" ENV_BUCKET_NAME="$ENV_BUCKET_NAME" REMOTE_ENV_FILE_NAME="$REMOTE_ENV_FILE_NAME" FRONT_END_MANIFEST="$FRONT_END_MANIFEST" s3_bucket="$s3_bucket" AWS_SECRET_ACCESS_KEY="$AWS_SECRET_ACCESS_KEY" AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID" RDS_DB_NAME="$RDS_DB_NAME" RDS_USERNAME="$RDS_USERNAME" RDS_PASSWORD="$RDS_PASSWORD"
RUN mkdir -p /code
WORKDIR /code
ADD requirements/updated_requirements.txt /code/
ADD . /code/
COPY --from=frontend /front_code/webapp/app/static/ /code/webapp/app/static/
COPY --from=frontend /front_code/webapp/market-app/static/ /code/webapp/market-app/static/
COPY --from=frontend /front_code/templates/webapp/index.html /code/templates/webapp/index.html
COPY --from=frontend /front_code/templates/market/market-single-page.html /code/templates/market/market-single-page.html
RUN apk update \
    && apk --no-cache add --virtual build-dependencies \
        build-base \
        zlib-dev \
        jpeg-dev \
        libc-dev \
        libffi-dev \
        musl-dev \
        mariadb-connector-c-dev \
        python3-dev \
        libxslt-dev \
        libxml2-dev \
        supervisor \
        openssh \
    && pip install -qq -r updated_requirements.txt \
    && rm -rf .cache/pip \
    && apk del build-dependencies \
    && apk add --no-cache libjpeg nginx libxml2 libxslt-dev libxml2-dev mariadb-connector-c-dev \
    && cd /code/webapp/app \
    && python format_index_html.py \
    && cd /code/ \
    && python utility/s3_upload_tiny.py \
    && cd /code/webapp/market-app \
    && python format_index_html.py \
    && python format_index_html_vendor.py \
    && cd /code \
    && python utility/s3_upload_tiny_market.py \
    && python utility/cron/setup_initial_env.py \
    && chmod +x utility/cron/cron_job.sh \
    && rm /etc/nginx/conf.d/default.conf \
    && cd /code \
    && touch /var/log/cron.log \
    && mkdir -p /etc/cron.d/ \
    && cp utility/cron/django.cron /etc/cron.d/django.cron \
    && crontab /etc/cron.d/django.cron
COPY nginx.conf /etc/nginx/
COPY django-site-nginx.conf /etc/nginx/conf.d/
COPY uwsgi.ini /etc/uwsgi/
COPY supervisord.conf /etc/supervisor/
#RUN python manage.py collectstatic --noinput && python manage.py migrate --noinput
WORKDIR /code
EXPOSE 80
CMD ["/usr/local/bin/supervisord", "-c", "/etc/supervisor/supervisord.conf"]
Error traceback:
Step 23/35 : ADD . /code/
---> 1b964365c334
Step 24/35 : COPY --from=frontend /front_code/webapp/app/static/ /code/webapp/app/static/
invalid from flag value frontend: pull access denied for frontend, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Notes:
1) I have a free Docker account.
2) I am using AWS CodePipeline to build and deploy.
3) Docker version is 19.03.3.
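For reference, a minimal two-stage layout in which COPY --from resolves to a build stage rather than a registry image (stage and path names here are illustrative). One thing worth knowing: Docker only recognizes a FROM line as starting a new stage if the previous instruction has ended, so a trailing backslash on the last line of a RUN (as at the end of the frontend stage above) folds the FROM into that RUN; the stage boundary is then never parsed and --from=frontend falls back to a registry pull.

```dockerfile
# Stage 1: build frontend assets under a named stage
FROM node:10 AS frontend
WORKDIR /front_code
COPY . /front_code/
RUN npm run build   # no trailing backslash on the stage's last instruction

# Stage 2: this FROM starts cleanly, so "frontend" is a known stage here
FROM python:3.8-alpine AS base
WORKDIR /code
COPY --from=frontend /front_code/static/ /code/static/
```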
I'm using Docker with the python:3.7.6-slim image to dockerize a Django application.
I'm using the django-import-export plugin to import data in the admin panel; it stores the uploaded file in a temporary directory so it can be read during the import.
But on import, it gives this error:
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmppk01nf3d'
The same works when not using Docker.
Dockerfile
FROM python:3.7.6-slim
ARG APP_USER=appuser
RUN groupadd -r ${APP_USER} && useradd --no-log-init -r -g ${APP_USER} ${APP_USER}
RUN set -ex \
    && RUN_DEPS=" \
        libpcre3 \
        mime-support \
        default-libmysqlclient-dev \
        inkscape \
        libcurl4-nss-dev libssl-dev \
    " \
    && seq 1 8 | xargs -I{} mkdir -p /usr/share/man/man{} \
    && apt-get update && apt-get install -y --no-install-recommends $RUN_DEPS \
    && pip install pipenv \
    && rm -rf /var/lib/apt/lists/* \
    && mkdir -p /home/${APP_USER}/.config/inkscape \
    && chown -R ${APP_USER} /home/${APP_USER}/.config/inkscape \
    # Create directories
    && mkdir /app/ \
    && mkdir /app/config/ \
    && mkdir /app/scripts/ \
    && mkdir -p /static_cdn/static_root/ \
    && chown -R ${APP_USER} /static_cdn/
WORKDIR /app/
COPY Pipfile Pipfile.lock /app/
RUN set -ex \
    && BUILD_DEPS=" \
        build-essential \
        libpcre3-dev \
        libpq-dev \
    " \
    && apt-get update && apt-get install -y --no-install-recommends $BUILD_DEPS \
    && pipenv install --deploy --system \
    \
    && apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false $BUILD_DEPS \
    && rm -rf /var/lib/apt/lists/*
COPY ./src /app/
COPY scripts/ /app/scripts/
COPY configs/ /app/configs/
EXPOSE 8000
ENV UWSGI_WSGI_FILE=qcg/wsgi.py
ENV UWSGI_HTTP=:8000 UWSGI_MASTER=1 UWSGI_HTTP_AUTO_CHUNKED=1 UWSGI_HTTP_KEEPALIVE=1 UWSGI_LAZY_APPS=1 UWSGI_WSGI_ENV_BEHAVIOR=holy
ENV UWSGI_WORKERS=2 UWSGI_THREADS=4
ENV UWSGI_STATIC_MAP="/static/=/static_cdn/static_root/" UWSGI_STATIC_EXPIRES_URI="/static/.*\.[a-f0-9]{12,}\.(css|js|png|jpg|jpeg|gif|ico|woff|ttf|otf|svg|scss|map|txt) 315360000"
USER ${APP_USER}:${APP_USER}
ENTRYPOINT ["/app/scripts/docker/entrypoint.sh"]
and the command used to run it:
docker run my-image uwsgi --show-config
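django-import-export writes the uploaded file through Python's tempfile module, so the path in the error comes from the standard TMPDIR lookup. A quick sandbox check of that mechanism (the /tmp/appscratch path is just for illustration; this assumes python3 is on the PATH):

```shell
# tempfile resolves its directory from TMPDIR (then TEMP, TMP, finally /tmp).
# If the path a worker resolves differs from where the file was written, or
# the directory is not writable for the container user, reads like the one
# above fail with FileNotFoundError.
mkdir -p /tmp/appscratch
TMPDIR=/tmp/appscratch python3 -c 'import tempfile; print(tempfile.gettempdir())'
# prints: /tmp/appscratch
```

Separately, if the app runs with several processes or replicas that do not share a filesystem view, django-import-export's file-based temporary storage can be swapped for its cache-based one via the plugin's IMPORT_EXPORT_TMP_STORAGE_CLASS setting; that is worth checking under uwsgi with multiple workers.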