Helm installation - Dockerfile

I am trying to install Helm using a Dockerfile. I have tried the following methods:
1.
RUN apt-get update && apt-get -y install apt-transport-https
RUN curl -s https://helm.baltorepo.com/organization/signing.asc | apt-key add -
RUN echo "deb https://baltocdn.com/helm/stable/debian/ all main" | tee /etc/apt/sources.list.d/helm-stable-debian.list
RUN apt-get update && apt-get -y install helm
2.
RUN curl -o helm-v2.10.0-linux-amd64.tgz https://storage.googleapis.com/kubernetes-helm/helm-v2.10.0-linux-amd64.tar.gz && tar -zxvf helm-v2.10.0-linux-amd64.tgz && mv linux-amd64/helm /usr/local/bin/helm
Both of them return helm not found when I run helm -h.

Can you try the following?
Download the binary
curl -OL https://get.helm.sh/helm-version-linux-arm64.tar.gz
Unpack it
tar -zxvf your_downloaded_file
Find the unpacked binary and move it to the desired destination
mv directory/helm /usr/local/bin/helm
You can find Helm binary releases here.
You can find other installation methods here.
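For reference, here is a minimal Dockerfile sketch of that binary-install approach. The Helm version (v3.10.0), the linux-amd64 architecture, and the Debian base image are assumptions; adjust them to your target.
FROM debian:bullseye-slim
# Sketch: download a Helm release tarball, unpack it, and move the binary onto the PATH.
# helm v3.10.0 and linux-amd64 are assumed values; pick the release you actually need.
RUN apt-get update && apt-get install -y --no-install-recommends curl ca-certificates \
    && curl -fsSL -o /tmp/helm.tar.gz https://get.helm.sh/helm-v3.10.0-linux-amd64.tar.gz \
    && tar -zxf /tmp/helm.tar.gz -C /tmp \
    && mv /tmp/linux-amd64/helm /usr/local/bin/helm \
    && rm -rf /tmp/helm.tar.gz /tmp/linux-amd64 /var/lib/apt/lists/*
# Fail the build early if the binary did not land on the PATH.
RUN helm version
Using curl -f also makes the build fail loudly if the URL returns an error page instead of a tarball, which is one common way an apparently successful build ends up with no helm binary on the PATH.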

Related

Install GDAL/GeoDjango on Elastic Beanstalk

What are the correct steps to install GeoDjango on Elastic Beanstalk?
I have an EB instance, installed the environment and made it two instances. Now I want to use GeoDjango on it; I'm already using it on a separate EC2 instance for testing.
This is my django.config file, and it fails:
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: hike.project.wsgi:application
commands:
  01_gdal:
    command: "wget http://download.osgeo.org/gdal/2.1.3/gdal-2.1.3.tar.gz && tar -xzf gdal-2.1.3.tar.gz && cd gdal-2.1.3 && ./configure && make && make install"
Then I tried this instead, and it also failed due to 100% CPU usage and the time limit:
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: hike.project.wsgi:application
commands:
  01_install_gdal:
    test: "[ ! -d /usr/local/gdal ]"
    command: "/tmp/gdal_install.sh"
files:
  "/tmp/gdal_install.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      sudo yum-config-manager --enable epel
      sudo yum -y install make automake gcc gcc-c++ libcurl-devel proj-devel geos-devel
      # Geos
      cd /
      sudo mkdir -p /usr/local/geos
      cd usr/local/geos/geos-3.7.2
      sudo wget geos-3.7.2.tar.bz2 http://download.osgeo.org/geos/geos-3.7.2.tar.bz2
      sudo tar -xvf geos-3.7.2.tar.bz2
      cd geos-3.7.2
      sudo ./configure
      sudo make
      sudo make install
      sudo ldconfig
      # Proj4
      cd /
      sudo mkdir -p /usr/local/proj
      cd usr/local/proj
      sudo wget -O proj-5.2.0.tar.gz http://download.osgeo.org/proj/proj-5.2.0.tar.gz
      sudo wget -O proj-datumgrid-1.8.tar.gz http://download.osgeo.org/proj/proj-datumgrid-1.8.tar.gz
      sudo tar xvf proj-5.2.0.tar.gz
      sudo tar xvf proj-datumgrid-1.8.tar.gz
      cd proj-5.2.0
      sudo ./configure
      sudo make
      sudo make install
      sudo ldconfig
      # GDAL
      cd /
      sudo mkdir -p /usr/local/gdal
      cd usr/local/gdal
      sudo wget -O gdal-2.4.4.tar.gz http://download.osgeo.org/gdal/2.4.4/gdal-2.4.4.tar.gz
      sudo tar xvf gdal-2.4.4.tar.gz
      cd gdal-2.4.4
      sudo ./configure
      sudo make
      sudo make install
      sudo ldconfig
I have no idea what to do.
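A hedged sketch of one way to sidestep the in-deployment compile, assuming prebuilt packages are acceptable for your GDAL version requirements: install the geospatial libraries from EPEL instead of building them from source. The package names below (gdal, gdal-devel, geos, geos-devel, proj, proj-devel) are assumptions about what EPEL provides on your Amazon Linux release.
# .ebextensions/gdal.config (sketch only)
commands:
  01_enable_epel:
    command: "yum-config-manager --enable epel"
  02_install_geo_libs:
    # Skip the install if gdalinfo is already present from a previous deploy.
    test: "[ ! -x /usr/bin/gdalinfo ]"
    command: "yum -y install gdal gdal-devel geos geos-devel proj proj-devel"
Because nothing is compiled, this runs in seconds instead of hitting the CPU and time limits that a ./configure && make build does during a deployment.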

How to install brew into a Dockerfile (`brew: not found`)

Rather than necro-post on a two-year-old thread, I decided to create a new question.
I want to add brew (Homebrew) to a Docker container, but I get a brew: not found error.
The suggested solution in that previous thread doesn't seem to work. This new Dockerfile...
FROM rust:1.63.0-buster
WORKDIR app
RUN apt-get update && \
apt-get install -y -q --allow-unauthenticated \
git \
sudo
RUN useradd -m -s /bin/zsh linuxbrew && \
usermod -aG sudo linuxbrew && \
mkdir -p /home/linuxbrew/.linuxbrew && \
chown -R linuxbrew: /home/linuxbrew/.linuxbrew
USER linuxbrew
RUN /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
USER root
RUN chown -R $CONTAINER_USER: /home/linuxbrew/.linuxbrew
RUN brew install hello
gives this error... What am I missing? Thanks.
=> ERROR [6/6] RUN brew install hello 0.2s
------
> [6/6] RUN brew install hello:
#9 0.181 /bin/sh: 1: brew: not found
------
executor failed running [/bin/sh -c brew install hello]: exit code: 127
This Dockerfile installs brew at /home/linuxbrew/.linuxbrew/bin/brew. Adding that directory to the PATH (with the ENV instruction) does the trick.
...
ENV PATH="/home/linuxbrew/.linuxbrew/bin:${PATH}"
RUN brew install hello
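For completeness, a hedged sketch of the tail of that Dockerfile with the PATH fix applied; switching back to the linuxbrew user before invoking brew is an extra assumption here, since Homebrew normally refuses to run as root:
USER root
RUN chown -R linuxbrew: /home/linuxbrew/.linuxbrew
# Put Homebrew on the PATH for every later RUN instruction.
ENV PATH="/home/linuxbrew/.linuxbrew/bin:${PATH}"
USER linuxbrew
RUN brew --version && brew install hello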

Using Kaniko cache with Google Cloud Build for Google Cloud Kubernetes Deployments

We have been using Google Cloud Build via build triggers for our GitHub repository which holds a C++ application that is deployed via Google Cloud Kubernetes Cluster.
Our build configuration comes from a Dockerfile located in our GitHub repository.
Everything is working as expected; however, our builds last 55+ minutes. I would like to add Kaniko cache support as suggested here, but the Google Cloud documentation only shows how to add it via a YAML file, as below:
steps:
- name: 'gcr.io/kaniko-project/executor:latest'
  args:
  - --destination=gcr.io/$PROJECT_ID/image
  - --cache=true
  - --cache-ttl=XXh
How can I achieve Kaniko builds with a Dockerfile-based trigger?
FROM --platform=amd64 ubuntu:22.10
ENV GCSFUSE_REPO gcsfuse-stretch
RUN apt-get update && apt-get install --yes --no-install-recommends \
ca-certificates \
curl \
gnupg \
&& echo "deb http://packages.cloud.google.com/apt $GCSFUSE_REPO main" \
| tee /etc/apt/sources.list.d/gcsfuse.list \
&& curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - \
&& apt-get update \
&& apt-get install --yes gcsfuse \
&& apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
EXPOSE 80
RUN \
sed -i 's/# \(.*multiverse$\)/\1/g' /etc/apt/sources.list && \
apt-get update && \
apt-get -y upgrade && \
apt-get install -y build-essential && \
apt-get install -y gcc && \
apt-get install -y software-properties-common && \
apt install -y cmake && \
apt-get install -y make && \
apt-get install -y clang && \
apt-get install -y mesa-common-dev && \
apt-get install -y git && \
apt-get install -y xorg-dev && \
apt-get install -y nasm && \
apt-get install -y byobu curl git htop man unzip vim wget && \
rm -rf /var/lib/apt/lists/*
# Update and upgrade repo
RUN apt-get update -y -q && apt-get upgrade -y -q
COPY . /app
RUN cd /app
RUN ls -la
# Set environment variables.
ENV HOME /root
ENV WDIR /app
# Define working directory.
WORKDIR /app
RUN cd /app/lib/glfw && cmake -G "Unix Makefiles" && make && apt-get install libx11-dev
RUN apt-cache policy libxrandr-dev
RUN apt install libxrandr-dev
RUN cd /app/lib/ffmpeg && ./configure && make && make install
RUN cmake . && make
# Define default command.
CMD ["bash"]
Any suggestions are quite welcome.
As I mentioned in the comment, you can only add Kaniko in your cloudbuild.yaml file, as that is also the only option shown in this GitHub link, but you can add the --dockerfile argument to point to your Dockerfile path.
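A hedged sketch of what that cloudbuild.yaml could look like when the Dockerfile sits at the repository root; the destination image name, cache TTL, and Dockerfile path are placeholders to adapt:
steps:
- name: 'gcr.io/kaniko-project/executor:latest'
  args:
  # Path to the Dockerfile relative to the build context (repository root assumed).
  - --dockerfile=Dockerfile
  - --destination=gcr.io/$PROJECT_ID/image
  - --cache=true
  - --cache-ttl=6h
With a Dockerfile-type trigger there is nowhere to pass Kaniko options, so the trigger's configuration has to be switched to "Cloud Build configuration file" and pointed at this cloudbuild.yaml.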

Lifecycle config script takes longer than 5 minutes, nohup command not working

I am trying to install a number of dependencies for my Jupyter notebook, but I would like them to be permanent to save me 20 minutes every time I restart the notebook instance. I have opted for a lifecycle configuration; however, my script takes longer than 5 minutes to run. I found this article (https://aws.amazon.com/premiumsupport/knowledge-center/sagemaker-lifecycle-script-timeout/) to help resolve the problem, but my notebook still fails to start with the following error:
Notebook Instance Lifecycle Config 'arn:aws:sagemaker:eu-west-2:347285168835:notebook-instance-lifecycle-config/nbs-aap-dev-dsar' for Notebook Instance 'arn:aws:sagemaker:eu-west-2:347285168835:notebook-instance/nbs-aap-dev-dsar' took longer than 5 minutes. Please check your CloudWatch logs for more details if your Notebook Instance has Internet access.
Here is the script I am trying to run:
sudo nohup yum install wget &
sudo yum install autoconf &
sudo yum install automake &
sudo yum install libtool &
sudo yum install jpeg &
sudo yum install tiff &
sudo yum install libpng &
sudo yum install tiff2png &
sudo yum install libtiff &
sudo yum install autoconf aclocal automake &
sudo yum install libtool &
sudo yum -y install libjpeg-devel libpng-devel libpng-devel libtiff-devel zlib-devel &
sudo yum install gcc gcc-c++ make &
sudo wget https://github.com/DanBloomberg/leptonica/releases/download/1.82.0/leptonica-1.82.0.tar.gz &
sudo tar xzvf leptonica-1.82.0.tar.gz &
cd leptonica-1.82.0 &
sudo ./configure --prefix=/usr/local/ &
sudo make &
sudo make install &
sudo wget https://codeload.github.com/tesseract-ocr/tesseract/tar.gz/4.1.1 &
sudo tar -zxvf 4.1.1 &
cd tesseract-4.1.1 &
sudo ./autogen.sh &
sudo cp /home/ec2-user/leptonica-1.82.0/lept.pc /usr/lib64/pkgconfig/. &
sudo LIBLEPT_HEADERSDIR=/usr/local/lib ./configure --prefix=/usr/local/ --with-extra-libraries=/usr/local/lib &
sudo make &
sudo make install &
export LD_LIBRARY_PATH=/usr/local/lib &
sudo ldconfig &
sudo wget https://github.com/tesseract-ocr/tessdata_best/raw/main/eng.traineddata &
sudo mv -v eng.traineddata /usr/local/share/tessdata/eng.traineddata &
sudo wget https://github.com/ArtifexSoftware/ghostpdl-downloads/releases/download/gs9550/ghostpdl-9.55.0.tar.gz &
sudo tar -zxvf ghostpdl-9.55.0.tar.gz &
cd ghostpdl-9.55.0 &
sudo ./configure --prefix=/usr/local/ &
sudo make &
sudo make install &
sudo yum -y install poppler-utils &
sudo wget https://github.com/qpdf/qpdf/releases/download/release-qpdf-10.1.0/qpdf-10.1.0.tar.gz &
sudo tar xzvf qpdf-10.1.0.tar.gz &
cd qpdf-10.1.0 &
sudo ./configure --prefix=/usr/local/ &
sudo make &
sudo make install
You do not need to add an ampersand (&) at the end of the lines. It puts them in the background and executes some commands in parallel, which leads to odd conditions. For example, in this code:
sudo ./configure --prefix=/usr/local/ &
sudo make &
sudo make install &
the make command starts before configure has finished, which will not end well in most cases. The same goes for make install: it tries to install the compiled package before make has finished building it.
If you want to put the script in the background, you can group the commands this way:
sudo nohup yum -y install wget autoconf automake libtool jpeg tiff libpng tiff2png libtiff autoconf aclocal automake libtool libjpeg-devel libpng-devel libpng-devel libtiff-devel zlib-devel gcc gcc-c++ make poppler-utils
nohup sudo wget https://github.com/DanBloomberg/leptonica/releases/download/1.82.0/leptonica-1.82.0.tar.gz && sudo tar xzvf leptonica-1.82.0.tar.gz &&cd leptonica-1.82.0 && sudo ./configure --prefix=/usr/local/ && sudo make && sudo make install &
nohup sudo wget https://codeload.github.com/tesseract-ocr/tesseract/tar.gz/4.1.1 &&sudo tar -zxvf 4.1.1 && cd tesseract-4.1.1 && sudo ./autogen.sh && sudo cp /home/ec2-user/leptonica-1.82.0/lept.pc /usr/lib64/pkgconfig/. && sudo LIBLEPT_HEADERSDIR=/usr/local/lib ./configure --prefix=/usr/local/ --with-extra-libraries=/usr/local/lib && sudo make && sudo make install &
export LD_LIBRARY_PATH=/usr/local/lib
nohup sudo ldconfig && sudo wget https://github.com/tesseract-ocr/tessdata_best/raw/main/eng.traineddata && sudo mv -v eng.traineddata /usr/local/share/tessdata/eng.traineddata && sudo wget https://github.com/ArtifexSoftware/ghostpdl-downloads/releases/download/gs9550/ghostpdl-9.55.0.tar.gz && sudo tar -zxvf ghostpdl-9.55.0.tar.gz && cd ghostpdl-9.55.0 && sudo ./configure --prefix=/usr/local/ && sudo make && sudo make install &
nohup sudo wget https://github.com/qpdf/qpdf/releases/download/release-qpdf-10.1.0/qpdf-10.1.0.tar.gz && sudo tar xzvf qpdf-10.1.0.tar.gz && cd qpdf-10.1.0 && sudo ./configure --prefix=/usr/local/ && sudo make && sudo make install &
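Alternatively, the pattern from the linked AWS article is to keep only a launcher in the lifecycle configuration and push all the long-running work into a separate script started with nohup, so the configuration itself returns well within the 5-minute limit. This is only a sketch; the script path and log location are assumptions:
#!/bin/bash
set -e
# Write the long-running installer to disk (paths are assumed).
cat <<'EOF' > /home/ec2-user/SageMaker/install-deps.sh
#!/bin/bash
sudo yum -y install autoconf automake libtool libjpeg-devel libpng-devel libtiff-devel zlib-devel gcc gcc-c++ make wget poppler-utils
# ...build leptonica, tesseract, ghostscript and qpdf here, one step after another...
EOF
chmod +x /home/ec2-user/SageMaker/install-deps.sh
# Launch it in the background so the lifecycle configuration can exit immediately.
nohup /home/ec2-user/SageMaker/install-deps.sh > /home/ec2-user/SageMaker/install-deps.log 2>&1 &
The log file then shows how far the background installation has progressed after the notebook instance comes up.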

macOS, Dockerfile mounting a folder cannot change locale

I'm trying to mount a folder with my Dockerfile instead of copying it on build. We use Git for development, and I don't want to rebuild the image every time I make a change for testing.
My Dockerfile is now as follows:
#set base image
FROM centos:centos7.2.1511
MAINTAINER Alex <alex#app.com>
#install yum dependencies
RUN yum -y update \
&& yum -y install yum-plugin-ovl \
&& yum -y install epel-release \
&& yum -y install net-tools \
&& yum -y install gcc \
&& yum -y install python-devel \
&& yum -y install git \
&& yum -y install python-pip \
&& yum -y install openldap-devel \
&& yum -y install gcc gcc-c++ kernel-devel \
&& yum -y install libxslt-devel libffi-devel openssl-devel \
&& yum -y install libevent-devel \
&& yum -y install openldap-devel \
&& yum -y install net-snmp-devel \
&& yum -y install mysql-devel \
&& yum -y install python-dateutil \
&& yum -y install python-pip \
&& pip install --upgrade pip
# Create the DIR
#RUN mkdir -p /var/www/itapp
# Set the working directory
#WORKDIR /var/www/itapp
# Copy the app directory contents into the container
#ADD . /var/www/itapp
# Install any needed packages specified in requirements.txt
#RUN pip install -r requirements.txt
# Make port available to the world outside this container
EXPOSE 8000
# Define environment variable
ENV NAME itapp
# Run server when the container launches
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
I've commented out the creation and copy of the itapp Django files, as I want to mount them instead (do I need to rebuild this first?).
Then my command for mounting is:
docker run -it -v /Users/alex/itapp:/var/www/itapp itapp bash
I now get an error:
bash: warning: setlocale: LC_CTYPE: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_COLLATE: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_MESSAGES: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_NUMERIC: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_TIME: cannot change locale (en_US.UTF-8): No such file or directory
and the dev instance does not run.
How would I also set the working directory to the volume that I'm mounting at runtime?
Try this command. The -w WORKDIR option in docker run sets the working directory inside the container.
docker run -d -v /Users/alex/itapp:/var/www/itapp -w /var/www/itapp itapp
Also, you'll need to map your container port to your host port to be able to access your app, for example from a browser.
To do this, use the following command.
docker run -d -p 8000:8000 -v /Users/alex/itapp:/var/www/itapp -w /var/www/itapp itapp
After this, your app should be running at localhost:8000.
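The setlocale warnings in the question come from the requested en_US.UTF-8 locale not being present in the centos:7.2 image rather than from the mount itself. A hedged sketch of one common fix, assuming glibc-common and localedef are available in the base image, is to generate and export the locale in the Dockerfile:
# Generate the missing locale and make it the default for the container (sketch).
RUN localedef -i en_US -f UTF-8 en_US.UTF-8
ENV LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8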