sdkman does not install java in a dockerfile

I have this docker file:
# We are going to start from the jhipster image
FROM jhipster/jhipster
# install as root
USER root
### Setup docker cli (don't need docker daemon) ###
# Install some packages
RUN apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common -y
# Add Docker's official GPG key:
RUN ["/bin/bash", "-c", "set -o pipefail && curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -"]
# Add a stable repository
RUN add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
# Setup aws credentials as environment variables
ENV AWS_ACCESS_KEY_ID "change it!"
ENV AWS_SECRET_ACCESS_KEY "change it!"
# noninteractive install for tzdata
ARG DEBIAN_FRONTEND=noninteractive
# set timezone for tzdata
ENV TZ=America/Sao_Paulo
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
# Install the latest version of Docker Engine - Community and also aws cli
RUN apt-get update && apt-get install docker-ce docker-ce-cli containerd.io awscli -y
# change back to default user
USER jhipster
# install sdkman and java version 1.8
RUN curl -s "https://get.sdkman.io" | bash
RUN bash $HOME/.sdkman/bin/sdkman-init.sh
RUN bash -c "sdk install java 8.0.222.j9-adpt"
When I run a command to build an image from this dockerfile it fails on the last step with a message:
/bin/sh: 1: sdk: not found
When I install it on my local machine, sdkman (sdk) runs on bash. But this script calls it from sh, not bash. How can I make it call sdkman (sdk) from sh? What I actually want to do is install a specific Java version through sdkman (sdk). Is there another way to do it?

For the sdk command to be available you need to source sdkman-init.sh.
Here is a working sample with Java 11 on CentOS.
FROM centos:latest
ARG CANDIDATE=java
ARG CANDIDATE_VERSION=11.0.6-open
ENV SDKMAN_DIR=/root/.sdkman
# update the image
RUN yum -y upgrade
# install requirements, install and configure sdkman
# see https://sdkman.io/usage for configuration options
RUN yum -y install curl ca-certificates zip unzip openssl which findutils && \
update-ca-trust && \
curl -s "https://get.sdkman.io" | bash && \
echo "sdkman_auto_answer=true" > $SDKMAN_DIR/etc/config && \
echo "sdkman_auto_selfupdate=false" >> $SDKMAN_DIR/etc/config
# Source sdkman to make the sdk command available and install candidate
RUN bash -c "source $SDKMAN_DIR/bin/sdkman-init.sh && sdk install $CANDIDATE $CANDIDATE_VERSION"
# Add candidate path to $PATH environment variable
ENV JAVA_HOME="$SDKMAN_DIR/candidates/java/current"
ENV PATH="$JAVA_HOME/bin:$PATH"
ENTRYPOINT ["/bin/bash", "-c", "source $SDKMAN_DIR/bin/sdkman-init.sh && \"$@\"", "-s"]
CMD ["sdk", "help"]

The problem is that every RUN command in a Dockerfile is executed in a new shell, so anything sourced in one RUN line does not survive into the next. You need to put both of your last two commands on the same line, sourcing the init script in the same shell that runs sdk:
RUN bash -c "source $HOME/.sdkman/bin/sdkman-init.sh && sdk install java 8.0.222.j9-adpt"
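Alternatively (a sketch of my own, not from the answer above), you can change the default shell for subsequent RUN instructions with the SHELL instruction and source the init script explicitly where needed:
SHELL ["/bin/bash", "-c"]
RUN curl -s "https://get.sdkman.io" | bash
RUN source $HOME/.sdkman/bin/sdkman-init.sh && sdk install java 8.0.222.j9-adpt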

Related

Run conda inside singularity

I would like to run a conda command with singularity.
The command is:
singularity exec ~/dockerimage.sif conda
It yields an error:
/.singularity.d/actions/exec: 9: exec: conda: Permission denied
Here is my dockerfile:
FROM ubuntu:20.04
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y apt-utils wget=1.20.3-1ubuntu1 python3.8=3.8.2-1ubuntu1.2 python3-pip=20.0.2-5ubuntu1 python3-yaml=5.3.1-1 git=1:2.25.1-1ubuntu3
RUN wget https://repo.anaconda.com/miniconda/Miniconda3-py38_4.8.3-Linux-x86_64.sh && chmod +x Miniconda3-py38_4.8.3-Linux-x86_64.sh && ./Miniconda3-py38_4.8.3-Linux-x86_64.sh -b && cp /root/miniconda3/bin/conda /usr/bin/conda
RUN wget https://data.qiime2.org/distro/core/qiime2-2020.8-py36-linux-conda.yml && conda env create -n qiime2-2020.8 --file qiime2-2020.8-py36-linux-conda.yml && conda install -y -n qiime2-2020.8 -c conda-forge -c bioconda -c qiime2 -c defaults q2cli q2template q2-types q2-feature-table q2-metadata vsearch snakemake
What should I add to the Dockerfile? How would it work?
You're installing conda with its default settings, which puts it in the home directory of the current user; that user is root. Singularity runs as your current user, so unless you run as root the conda files will not be available. To fix this (see the sketch after these steps):
modify your conda install command to set the install prefix: -p /opt/conda (or some other arbitrary location)
make sure that any user will be able to access the files installed with conda: chmod -R o+rX /opt/conda
update PATH to include conda: export PATH="$PATH:/opt/conda/bin"
when running your image, make sure your environment variables are not overriding those in the container: singularity exec --cleanenv ~/dockerimage.sif conda
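A minimal sketch of the relevant Dockerfile lines with those changes applied (the /opt/conda prefix is just the arbitrary location suggested above):
RUN wget https://repo.anaconda.com/miniconda/Miniconda3-py38_4.8.3-Linux-x86_64.sh && \
    chmod +x Miniconda3-py38_4.8.3-Linux-x86_64.sh && \
    ./Miniconda3-py38_4.8.3-Linux-x86_64.sh -b -p /opt/conda && \
    chmod -R o+rX /opt/conda
ENV PATH="$PATH:/opt/conda/bin"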

How to Install Node 8.15 on alpine:3.9?

I want to install node 8.15 on alpine:3.9
This is my Dockerfile but it is not working.
After docker build I got this error: You need to run "nvm install default" to install it before using it.
Thanks.
FROM alpine:3.9
ENV METEOR_VERSION=1.8.1
ENV METEOR_ALLOW_SUPERUSER true
ENV NODE_VERSION 8.15
ENV NODE_PATH $NVM_DIR/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
ENV NVM_DIR /usr/local/nvm
RUN mkdir $NVM_DIR
# Install dependencies
RUN apk update
RUN apk upgrade
RUN apk add --no-cache bash
RUN apk --no-cache add curl
# Install NVM
RUN curl -o- "https://raw.githubusercontent.com/creationix/nvm/v0.33.8/install.sh" | bash
# Install NODE
RUN echo "source $NVM_DIR/nvm.sh && \
nvm install $NODE_VERSION && \
nvm alias default $NODE_VERSION && \
nvm use default" | bash
# Install METEOR
RUN curl "https://install.meteor.com/?release=${METEOR_VERSION}" | /bin/
Why are you installing with NVM when we have nodejs in the Alpine official repository? Each Docker image should represent one version of Node.js, so I would not suggest NVM in this case; it also keeps the image small.
You can find the version alpine-package-nodejs v8.x.
FROM alpine:3.9
ENV METEOR_VERSION=1.8.1
ENV METEOR_ALLOW_SUPERUSER true
ENV NODE_VERSION 8.15
RUN apk add --no-cache --repository=http://dl-cdn.alpinelinux.org/alpine/v3.8/main/ nodejs=8.14.0-r0 npm
RUN node --version
output
Step 6/6 : RUN node --version
---> Running in 9652a49223fa
v8.14.0

AWSEBCLI not reading env vars

I am attempting to run AWSEBCLI inside a docker container. I am passing the access key and security token as env vars as described in the docs under "Configuration Settings and Precedence"
ERROR: CredentialsError - Operation Denied. You appear to have no credentials
Here is my Dockerfile:
FROM circleci/golang
ADD . /go/src
WORKDIR /go/src
RUN sudo apt-get -y -qq update --assume-yes
RUN sudo apt-get install python-pip python-dev build-essential --assume-yes
RUN sudo pip install awscli=="1.16.9"
RUN sudo pip install awsebcli=="3.14.4"
RUN echo ${AWS_ACCESS_KEY_ID}
RUN echo ${AWS_SECRET_ACCESS_KEY}
CMD sudo eb deploy Circledocker
The environment defined in your user session and the sudo session are not the same.
RUN echo ${AWS_ACCESS_KEY_ID} -> Works
RUN sudo echo ${AWS_ACCESS_KEY_ID} -> Will not provide you the value.
Take a look at man sudo, the -E flag :
-E, --preserve-env
Indicates to the security policy that the user wishes to preserve their
existing environment variables. The security policy may return an error
if the user does not have permission to preserve the environment.
So this normally works:
sudo -E bash -c 'echo $AWS_ACCESS_KEY_ID'
Try your eb deploy command like this:
sudo -E bash -c 'eb deploy Circledocker'
Hope it helps!
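One more hedged note beyond the answer itself: the RUN echo lines execute at build time, where a variable is only visible if declared with ARG, while CMD executes at container start, where the values come from docker run -e. A sketch:
# build time: declare the args so RUN can see them
ARG AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY
RUN echo ${AWS_ACCESS_KEY_ID}
and at runtime pass the values through:
docker run -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY <your-image>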

macOS, Dockerfile mounting a folder cannot change locale

I'm trying to mount a folder with my Dockerfile instead of copying it on build. We use git for development and I don't want to rebuild the image every time I make a change for testing.
My Dockerfile is now as such:
#set base image
FROM centos:centos7.2.1511
MAINTAINER Alex <alex@app.com>
#install yum dependencies
RUN yum -y update \
&& yum -y install yum-plugin-ovl \
&& yum -y install epel-release \
&& yum -y install net-tools \
&& yum -y install gcc \
&& yum -y install python-devel \
&& yum -y install git \
&& yum -y install python-pip \
&& yum -y install openldap-devel \
&& yum -y install gcc gcc-c++ kernel-devel \
&& yum -y install libxslt-devel libffi-devel openssl-devel \
&& yum -y install libevent-devel \
&& yum -y install openldap-devel \
&& yum -y install net-snmp-devel \
&& yum -y install mysql-devel \
&& yum -y install python-dateutil \
&& yum -y install python-pip \
&& pip install --upgrade pip
# Create the DIR
#RUN mkdir -p /var/www/itapp
# Set the working directory
#WORKDIR /var/www/itapp
# Copy the app directory contents into the container
#ADD . /var/www/itapp
# Install any needed packages specified in requirements.txt
#RUN pip install -r requirements.txt
# Make port available to the world outside this container
EXPOSE 8000
# Define environment variable
ENV NAME itapp
# Run server when the container launches
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
I've commented out the creation and copying of the itapp Django files, as I want to mount them instead. (Do I need to rebuild this first?)
Then my command for mounting is:
docker run -it -v /Users/alex/itapp:/var/www/itapp itapp bash
I now get an error:
bash: warning: setlocale: LC_CTYPE: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_COLLATE: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_MESSAGES: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_NUMERIC: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_TIME: cannot change locale (en_US.UTF-8): No such file or directory
and the dev instance does not run.
how would I also set the working directory to the the volume that I'm mounting at runtime?
Try this command. The -w WORKDIR option in docker run sets the working directory inside the container.
docker run -d -v /Users/alex/itapp:/var/www/itapp -w /var/www/itapp itapp
Also, you'll need to map your container port to a host port to be able to reach your app, for example from a browser.
To do this, use the following command.
docker run -d -p 8000:8000 -v /Users/alex/itapp:/var/www/itapp -w /var/www/itapp itapp
After this, your app should be running at localhost:8000.
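As for the setlocale warnings themselves: the centos:7 base image does not ship the en_US.UTF-8 locale. A common fix (my assumption, not part of the answer above) is to generate it at build time:
# generate the missing locale with glibc's localedef, then set it as default
RUN localedef -i en_US -f UTF-8 en_US.UTF-8
ENV LANG en_US.UTF-8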

How to install the Google Cloud SDK in a Docker Image?

How can I build a Docker container with Google's Cloud Command Line Tool/SDK?
The script at the url https://sdk.cloud.google.com appears to require user input, so it doesn't work in a Dockerfile.
Adding the following to my Dockerfile appears to work.
# Downloading gcloud package
RUN curl https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz > /tmp/google-cloud-sdk.tar.gz
# Installing the package
RUN mkdir -p /usr/local/gcloud \
&& tar -C /usr/local/gcloud -xvf /tmp/google-cloud-sdk.tar.gz \
&& /usr/local/gcloud/google-cloud-sdk/install.sh
# Adding the package path to local
ENV PATH $PATH:/usr/local/gcloud/google-cloud-sdk/bin
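If you want to verify the install during the build (optional, my own addition), a version check after the ENV line works because PATH has already been updated:
RUN gcloud --version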
Use this one-liner in your Dockerfile:
RUN curl -sSL https://sdk.cloud.google.com | bash
source:
https://docs.docker.com/v1.8/installation/google/
Doing it with alpine:
FROM alpine:3.6
RUN apk add --update \
python \
curl \
which \
bash
RUN curl -sSL https://sdk.cloud.google.com | bash
ENV PATH $PATH:/root/google-cloud-sdk/bin
RUN curl -sSL https://sdk.cloud.google.com > /tmp/gcl && bash /tmp/gcl --install-dir=~/gcloud --disable-prompts
This will download the google cloud sdk installer into /tmp/gcl, and run it with the parameters as follows:
--install-dir=~/gcloud: Extract the binaries into the folder gcloud in the home folder. Change this to wherever you want, for example /usr/local/bin
--disable-prompts: Don't show any prompts while installing (headless)
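Note (my addition, assuming a root build user so that ~ expands to /root): the installer places a google-cloud-sdk directory inside the install dir, so you still need something like this to get gcloud on PATH:
ENV PATH $PATH:/root/gcloud/google-cloud-sdk/bin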
To install gcloud inside a docker container please follow the instructions here.
Basically you need to run
RUN apt-get update && \
apt-get install -y curl gnupg && \
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] http://packages.cloud.google.com/apt cloud-sdk main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list && \
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key --keyring /usr/share/keyrings/cloud.google.gpg add - && \
apt-get update -y && \
apt-get install google-cloud-sdk -y
inside your Dockerfile. It's important that you are the root user when you run this command, so it may be necessary to add USER root before it.
As an alternative, you could use the Docker image provided by Google, namely google/cloud-sdk. https://hub.docker.com/r/google/cloud-sdk/
Dockerfile:
FROM centos:7
RUN yum update -y && yum install -y \
curl \
which && \
yum clean all
RUN curl -sSL https://sdk.cloud.google.com | bash
ENV PATH $PATH:/root/google-cloud-sdk/bin
Build:
docker build . -t google-cloud-sdk
Then run gcloud:
docker run --rm \
--volume $(pwd)/assets/root/.config:/root/.config \
google-cloud-sdk gcloud
...or run gsutil:
docker run --rm \
--volume $(pwd)/assets/root/.config:/root/.config \
google-cloud-sdk gsutil
The local assets folder will contain the configuration.
apk upgrade --update-cache --available && \
apk add openssl && \
apk add curl python3 py-crcmod bash libc6-compat && \
rm -rf /var/cache/apk/*
curl https://sdk.cloud.google.com | bash > /dev/null
export PATH=$PATH:/root/google-cloud-sdk/bin
gcloud components update kubectl
I was using Python Alpine image python:3.8.6-alpine3.12 as base and this worked for me:
RUN apk add --no-cache bash
RUN wget https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-327.0.0-linux-x86_64.tar.gz \
-O /tmp/google-cloud-sdk.tar.gz
RUN mkdir -p /usr/local/gcloud \
&& tar -C /usr/local/gcloud -xvzf /tmp/google-cloud-sdk.tar.gz \
&& /usr/local/gcloud/google-cloud-sdk/install.sh -q
ENV PATH $PATH:/usr/local/gcloud/google-cloud-sdk/bin
After building and running the image, you can check whether google-cloud-sdk is installed by opening a shell with docker exec -i -t <container_id> /bin/bash and running:
bash-5.0# gcloud --version
Google Cloud SDK 327.0.0
bq 2.0.64
core 2021.02.05
gsutil 4.58
bash-5.0# gsutil --version
gsutil version: 4.58
If you want a specific version of google-cloud-sdk, you can visit https://storage.cloud.google.com/cloud-sdk-release
curl https://sdk.cloud.google.com | bash -s -- --disable-prompts
and export the env (PATH) afterwards; works for me
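Spelled out as Dockerfile lines (my reading of this answer, assuming a root user and the default install location):
RUN curl https://sdk.cloud.google.com | bash -s -- --disable-prompts
ENV PATH $PATH:/root/google-cloud-sdk/bin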
I got this working with Ubuntu 18.04 using:
RUN apt-get install -y curl && curl -sSL https://sdk.cloud.google.com | bash
ENV PATH="$PATH:/root/google-cloud-sdk/bin"
You can use multi-stage builds to make this simpler and more efficient than solutions using curl.
FROM bitnami/google-cloud-sdk:0.392.0 as gcloud
FROM base-image-for-production:tag
# Do what you need to configure your production image
COPY --from=gcloud /opt/bitnami/google-cloud-sdk/ /google-cloud-sdk
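Note (my addition): the copied SDK is not on PATH automatically, so you would presumably also add:
ENV PATH="/google-cloud-sdk/bin:$PATH"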
This works for me.
FROM php:7.2-fpm
RUN apt-get update -y
RUN apt-get install -y python && \
curl -sSL https://sdk.cloud.google.com | bash
ENV PATH $PATH:/root/google-cloud-sdk/bin
An example using debian as the base image:
FROM debian:stretch
RUN apt-get update && apt-get install -y apt-transport-https gnupg curl lsb-release
RUN export CLOUD_SDK_REPO="cloud-sdk-$(lsb_release -c -s)" && \
echo "cloud SDK repo: $CLOUD_SDK_REPO" && \
echo "deb http://packages.cloud.google.com/apt $CLOUD_SDK_REPO main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list && \
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && \
apt-get update -y && apt-get install google-cloud-sdk -y
I used most of these examples in some form (thanks @KJoe), but I had to do several other things to set everything up so gcloud would work in the environment. Note that it is preferable to limit the number of lines (it limits the layers needed to pull).
Here's a more complete example of a Dockerfile with gcloud set up, extending a CircleCI image:
FROM circleci/ruby:2.4.1-jessie-node-browsers
# user is circleci in the FROM image, switch to root for system lib installation
USER root
ENV CCI /home/circleci
ENV GTMP /tmp/gcloud-install
ENV GSDK $CCI/google-cloud-sdk
ENV PATH="${GSDK}/bin:${PATH}"
# do all system lib installation in one-line to optimize layers
RUN curl -sSL https://sdk.cloud.google.com > $GTMP && bash $GTMP --install-dir=$CCI --disable-prompts \
&& rm -rf $GTMP \
&& chmod +x $GSDK/bin/* \
\
&& chown -Rf circleci:circleci $CCI
# change back to the user in the FROM image
USER circleci
# setup gcloud specifics to your liking
RUN gcloud config set core/disable_usage_reporting true \
&& gcloud config set component_manager/disable_update_check true \
&& gcloud components install alpha beta kubectl --quiet
My use case was to generate a Google bearer token using the service account, so I wanted the Docker container to install gcloud. This is how my Dockerfile looks:
FROM google/cloud-sdk
# Setting the default directory in container
WORKDIR /usr/src/app
# copies the app source code to the directory in container
COPY . /usr/src/app
CMD ["/bin/bash","/usr/src/app/token.sh"]
If you need to examine an image after it is built but no container is running, use docker run --rm -it <image-id> bash -il and type in gcloud --version to check whether it installed correctly.
In the Google documentation you can see the best practice:
https://cloud.google.com/sdk/docs/install-sdk
search on the page for "Docker Tip"
e.g. for Debian use:
RUN echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] http://packages.cloud.google.com/apt cloud-sdk main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list && curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key --keyring /usr/share/keyrings/cloud.google.gpg add - && apt-get update -y && apt-get install google-cloud-cli -y
If you're just interested in getting the gcloud CLI available, add this to your Dockerfile:
# Downloading gcloud package
RUN curl https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-cli-409.0.0-linux-x86_64.tar.gz > /tmp/google-cloud-cli.tar.gz
# Installing the gcloud cli
RUN mkdir -p /usr/local/gcloud \
&& tar -C /usr/local/gcloud -xf /tmp/google-cloud-cli.tar.gz \
&& /usr/local/gcloud/google-cloud-sdk/install.sh --quiet
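As in the earlier answers, you will likely also want the CLI on PATH (assuming the install dir above):
ENV PATH $PATH:/usr/local/gcloud/google-cloud-sdk/bin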