I am building my Dockerfile using a Red Hat UBI image, and when I build the image I get wget: unable to resolve host address 'github.com'.
I have tried a different URL that does not point at GitHub and that works, so I'm not sure what the problem is.
Below are the error logs I get when I build the Dockerfile, ending with: wget: unable to resolve host address 'github.com'
Step 11/25 : RUN set -ex; apk update; apk add -f acl dirmngr gpg lsof procps wget netcat gosu tini; rm -rf /var/lib/apt/lists/*; cd /usr/local/bin; wget -nv https://github.com/apangin/jattach/releases/download/v1.5/jattach; chmod 755 jattach; echo >jattach.sha512 "d8eedbb3e192a8596c08efedff99b9acf1075331e1747107c07cdb1718db2abe259ef168109e46bd4cf80d47d43028ff469f95e6ddcbdda4d7ffa73a20e852f9 jattach"; sha512sum -c jattach.sha512; rm jattach.sha512
---> Running in 3ad58c40b25a
+ apk update
fetch https://dl-cdn.alpinelinux.org/alpine/edge/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/edge/community/x86_64/APKINDEX.tar.gz
v20200917-1125-g7274a98dfc [https://dl-cdn.alpinelinux.org/alpine/edge/main]
v20200917-1124-g01e8cb93ff [https://dl-cdn.alpinelinux.org/alpine/edge/community]
OK: 13174 distinct packages available
+ apk add -f acl dirmngr gpg lsof procps wget netcat gosu tini
(1/12) Installing libacl (2.2.53-r0)
(2/12) Installing acl (2.2.53-r0)
(3/12) Installing lsof (4.93.2-r0)
(4/12) Installing libintl (0.20.2-r0)
(5/12) Installing ncurses-terminfo-base (6.2_p20200918-r1)
(6/12) Installing ncurses-libs (6.2_p20200918-r1)
(7/12) Installing libproc (3.3.16-r0)
(8/12) Installing procps (3.3.16-r0)
(9/12) Installing tini (0.19.0-r0)
(10/12) Installing libunistring (0.9.10-r0)
(11/12) Installing libidn2 (2.3.0-r0)
(12/12) Installing wget (1.20.3-r1)
Executing busybox-1.32.0-r3.trigger
OK: 9 MiB in 26 packages
+ rm -rf '/var/lib/apt/lists/*'
+ cd /usr/local/bin
+ wget -nv https://github.com/apangin/jattach/releases/download/v1.5/jattach
wget: unable to resolve host address 'github.com'
The command '/bin/sh -c set -ex; apk update; apk add -f acl dirmngr gpg lsof procps wget netcat gosu tini; rm -rf /var/lib/apt/lists/*; cd /usr/local/bin; wget -nv https://github.com/apangin/jattach/releases/download/v1.5/jattach; chmod 755 jattach; echo >jattach.sha512 "d8eedbb3e192a8596c08efedff99b9acf1075331e1747107c07cdb1718db2abe259ef168109e46bd4cf80d47d43028ff469f95e6ddcbdda4d7ffa73a20e852f9 jattach"; sha512sum -c jattach.sha512; rm jattach.sha512' returned a non-zero code: 4
Here is my docker file that I have which I build to create the image
FROM alpine:edge as BUILD
LABEL maintainer="Project Ranger team <mbyousaf@deloitte.co.uk>"
LABEL repository="https://github.com/docker-solr/docker-solr"
ARG SOLR_VERSION="8.6.2"
ARG SOLR_SHA512="0a43401ecf7946b2724da2d43896cd505386a8f9b07ddc60256cb586873e7e58610d2c34b1cf797323bf06c7613b109527a15105dc2a11be6f866531a1f2cef6"
ARG SOLR_KEYS="E58A6F4D5B2B48AC66D5E53BD4F181881A42F9E6"
# If specified, this will override SOLR_DOWNLOAD_SERVER and all ASF mirrors. Typically used downstream for custom builds
ARG SOLR_DOWNLOAD_URL
# Override the solr download location with e.g.:
# docker build -t mine --build-arg SOLR_DOWNLOAD_SERVER=http://www-eu.apache.org/dist/lucene/solr .
ARG SOLR_DOWNLOAD_SERVER
RUN set -ex; \
apk update; \
apk add -f acl dirmngr gpg lsof procps wget netcat gosu tini; \
rm -rf /var/lib/apt/lists/*; \
cd /usr/local/bin; wget -nv https://github.com/apangin/jattach/releases/download/v1.5/jattach; chmod 755 jattach; \
echo >jattach.sha512 "d8eedbb3e192a8596c08efedff99b9acf1075331e1747107c07cdb1718db2abe259ef168109e46bd4cf80d47d43028ff469f95e6ddcbdda4d7ffa73a20e852f9 jattach"; \
sha512sum -c jattach.sha512; rm jattach.sha512
I would check whether you can resolve github.com on the host where you're doing the build, and I would cat /etc/resolv.conf to see your host's resolvers. If github.com resolves on your host (which you can check with nslookup github.com), then I would try to use those resolvers explicitly, either by configuring the Docker daemon to use them, or at a per-command level, which is kind of creative:
RUN echo "nameserver XX.XX.XX.XX" > /etc/resolv.conf && \
command_depending_on_dns_resolution
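For the daemon-level approach, here is a minimal sketch; the 8.8.8.8/8.8.4.4 addresses are placeholders, substitute your own resolvers:
# write the daemon config (requires root); "dns" sets the resolvers build containers will use
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "dns": ["8.8.8.8", "8.8.4.4"]
}
EOF
# restart the daemon so the setting takes effect before rebuilding
sudo systemctl restart docker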
I'm trying to run the CDK inside a Docker container. Everything works fine until I try to deploy using the command:
cdk deploy myStack --profile testing --require-approval never
Error
❌ MyStack failed: Error: Unable to resolve AWS account to use. It must be either configured when you define your CDK or through the environment
I have created both the config and credentials files under the container's /root/.aws/ folder, since that matches ~/.aws.
This setup works fine on my laptop, where those two files are under /Users/<my user name>/.aws.
My Dockerfile:
FROM openjdk:8-jdk-slim
ARG MAVEN_VERSION=3.6.3
ARG USER_HOME_DIR="/root"
ARG SHA=c35a1803a6e70a126e80b2b3ae33eed961f83ed74d18fcd16909b2d44d7dada3203f1ffe726c17ef8dcca2dcaa9fca676987befeadc9b9f759967a8cb77181c0
ARG BASE_URL=https://apache.osuosl.org/maven/maven-3/${MAVEN_VERSION}/binaries
RUN apt-get update && \
apt-get install -y \
curl procps \
&& rm -rf /var/lib/apt/lists/*
RUN mkdir -p /usr/share/maven /usr/share/maven/ref \
&& curl -fsSL -o /tmp/apache-maven.tar.gz ${BASE_URL}/apache-maven-${MAVEN_VERSION}-bin.tar.gz \
&& echo "${SHA} /tmp/apache-maven.tar.gz" | sha512sum -c - \
&& tar -xzf /tmp/apache-maven.tar.gz -C /usr/share/maven --strip-components=1 \
&& rm -f /tmp/apache-maven.tar.gz \
&& ln -s /usr/share/maven/bin/mvn /usr/bin/mvn
ENV MAVEN_HOME /usr/share/maven
ENV MAVEN_CONFIG "$USER_HOME_DIR/.m2"
RUN apt-get update
RUN apt-get -y install curl gnupg
RUN curl -sL https://deb.nodesource.com/setup_12.x | bash -
RUN apt-get -y install nodejs
RUN npm install
RUN node -v
RUN npm -v
RUN npm install -g aws-cdk
RUN mkdir /usr/local/TestingCDK;
COPY ./src /usr/local/TestingCDK/src/
COPY pom.xml /usr/local/TestingCDK/
COPY cdk.json /usr/local/TestingCDK/
RUN cd /usr/local/TestingCDK/ && mvn compile
RUN mkdir ~/.aws
RUN cd ~ && pwd
COPY config /root/.aws/
COPY credentials /root/.aws/
CMD cdk doctor ; cat ~/.aws/config ; cd /usr/local/TestingCDK/ ; cdk deploy myStack --profile myProfile --require-approval never
You should instead pass the credentials into the container as AWS_ environment variables, for example:
AWS_SECRET_ACCESS_KEY
AWS_ACCESS_KEY_ID
AWS_DEFAULT_REGION
see here:
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html
Saving and copying your access/secret keys into the container is very bad practice.
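A minimal sketch of how that could look; the image name (my-cdk-image) and the region value are placeholders:
# -e VAR with no value forwards the variable from your host environment into the container
docker run --rm \
  -e AWS_ACCESS_KEY_ID \
  -e AWS_SECRET_ACCESS_KEY \
  -e AWS_DEFAULT_REGION=eu-west-2 \
  my-cdk-image \
  cdk deploy myStack --require-approval never
With the environment variables set, the --profile flag is no longer needed.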
I've built a custom Docker image from python:3.6 with the AWS CLI and the Session Manager plugin:
FROM python:3.6
WORKDIR /app
RUN pip3 install -U awscli
RUN apt-get update -y && \
apt-get install groff less curl -y && \
curl "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/ubuntu_64bit/session-manager-plugin.deb" -o "session-manager-plugin.deb" && \
dpkg -i session-manager-plugin.deb && \
rm -f session-manager-plugin.deb
RUN curl "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/ubuntu_64bit/session-manager-plugin.deb" -o "session-manager-plugin.deb" && \
dpkg -i session-manager-plugin.deb && \
rm -f session-manager-plugin.deb
ENTRYPOINT ["aws"]
I've created a custom executable file under /usr/bin/aws:
#!/bin/bash
docker run --rm -v "$(pwd)":"/app" -v "/root/.aws/":"/root/.aws" python-aws "$@"
When I run aws ssm start-session --target i-*** the output is:
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
...
Do you know how to solve the issue?
I found the solution while writing the question.
I added -it (interactive, with a TTY) to the docker run command, so the session gets a proper terminal.
So the command is now:
#!/bin/bash
docker run -it --rm -v "$(pwd)":"/app" -v "/root/.aws/":"/root/.aws" python-aws "$@"
Problem solved.
I'm having a difficult time finding resources for creating a Dockerfile to install a proper PHP, Composer and NGINX environment.
I can create a docker-compose container set, but I cannot get Composer installed that way. Does anyone have good resources to point me to for writing a full PHP, Composer and NGINX Dockerfile?
This is my Dockerfile example for a similar scenario; I hope it helps. Feedback and ideas are welcome!
FROM php:7.4-fpm
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
curl \
libpng-dev \
libonig-dev \
libxml2-dev \
libzip-dev \
zip \
unzip \
software-properties-common \
lsb-release \
apt-transport-https \
ca-certificates \
wget \
gnupg2
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install PHP extensions (some are already compiled in the PHP base image)
RUN docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd json zip xml
# Get latest Composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
# Create myuser
RUN useradd -G www-data,root -u 1000 -d /home/myuser myuser
RUN mkdir -p /home/myuser/.composer && \
chown -R myuser:myuser /home/myuser
# Set working directory
WORKDIR /var/www/mypage
USER myuser
You can add nginx to this container, but then I recommend using supervisord to control the multiple processes.
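A minimal sketch of that approach, assuming you provide a supervisord.conf with [program:php-fpm] and [program:nginx] entries (the config file names are just examples):
# install nginx and supervisor on top of the php-fpm image (Debian-based)
RUN apt-get update && apt-get install -y nginx supervisor && rm -rf /var/lib/apt/lists/*
# your own config files: nginx site definition + supervisord program definitions
COPY nginx.conf /etc/nginx/conf.d/default.conf
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
# run supervisord in the foreground so it stays PID 1 and keeps both processes alive
CMD ["/usr/bin/supervisord", "-n"]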
How can I build a Docker container with Google's Cloud Command Line Tool/SDK?
The script at https://sdk.cloud.google.com appears to require user input, so it doesn't work in a Dockerfile.
Adding the following to my Dockerfile appears to work.
# Downloading gcloud package
RUN curl https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz > /tmp/google-cloud-sdk.tar.gz
# Installing the package
RUN mkdir -p /usr/local/gcloud \
&& tar -C /usr/local/gcloud -xvf /tmp/google-cloud-sdk.tar.gz \
&& /usr/local/gcloud/google-cloud-sdk/install.sh
# Adding the package path to local
ENV PATH $PATH:/usr/local/gcloud/google-cloud-sdk/bin
Use this one-liner in your Dockerfile:
RUN curl -sSL https://sdk.cloud.google.com | bash
source:
https://docs.docker.com/v1.8/installation/google/
Doing it with alpine:
FROM alpine:3.6
RUN apk add --update \
python \
curl \
which \
bash
RUN curl -sSL https://sdk.cloud.google.com | bash
ENV PATH $PATH:/root/google-cloud-sdk/bin
RUN curl -sSL https://sdk.cloud.google.com > /tmp/gcl && bash /tmp/gcl --install-dir=~/gcloud --disable-prompts
This downloads the Google Cloud SDK installer to /tmp/gcl and runs it with the following parameters:
--install-dir=~/gcloud: extract the binaries into a gcloud folder in the home directory. Change this to wherever you want, for example /usr/local/bin.
--disable-prompts: don't show any prompts while installing (headless).
To install gcloud inside a Docker container, please follow the official installation instructions.
Basically you need to run
RUN apt-get update && \
apt-get install -y curl gnupg && \
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] http://packages.cloud.google.com/apt cloud-sdk main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list && \
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key --keyring /usr/share/keyrings/cloud.google.gpg add - && \
apt-get update -y && \
apt-get install google-cloud-sdk -y
inside your Dockerfile. It's important that you are root when you run this command, so it may be necessary to add USER root before it.
As an alternative, you could use the Docker image provided by Google, namely google/cloud-sdk: https://hub.docker.com/r/google/cloud-sdk/
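For example, to run a one-off command with that image (a sketch; mount a config directory as in other answers if you want credentials to persist):
# pull google/cloud-sdk and print the installed SDK version
docker run --rm google/cloud-sdk gcloud version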
Dockerfile:
FROM centos:7
RUN yum update -y && yum install -y \
curl \
which && \
yum clean all
RUN curl -sSL https://sdk.cloud.google.com | bash
ENV PATH $PATH:/root/google-cloud-sdk/bin
Build:
docker build . -t google-cloud-sdk
Then run gcloud:
docker run --rm \
--volume $(pwd)/assets/root/.config:/root/.config \
google-cloud-sdk gcloud
...or run gsutil:
docker run --rm \
--volume $(pwd)/assets/root/.config:/root/.config \
google-cloud-sdk gsutil
The local assets folder will contain the configuration.
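The first run can populate that configuration; for example (my-project is a placeholder):
docker run --rm -it \
    --volume $(pwd)/assets/root/.config:/root/.config \
    google-cloud-sdk gcloud config set project my-project
Subsequent gcloud or gsutil runs then reuse the settings stored under assets/root/.config.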
apk upgrade --update-cache --available && \
apk add openssl && \
apk add curl python3 py-crcmod bash libc6-compat && \
rm -rf /var/cache/apk/*
curl https://sdk.cloud.google.com | bash > /dev/null
export PATH=$PATH:/root/google-cloud-sdk/bin
gcloud components update kubectl
I was using Python Alpine image python:3.8.6-alpine3.12 as base and this worked for me:
RUN apk add --no-cache bash
RUN wget https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-327.0.0-linux-x86_64.tar.gz \
-O /tmp/google-cloud-sdk.tar.gz
RUN mkdir -p /usr/local/gcloud \
&& tar -C /usr/local/gcloud -xvzf /tmp/google-cloud-sdk.tar.gz \
&& /usr/local/gcloud/google-cloud-sdk/install.sh -q
ENV PATH $PATH:/usr/local/gcloud/google-cloud-sdk/bin
After building and running the image, you can check whether google-cloud-sdk is installed by opening a shell with docker exec -i -t <container_id> /bin/bash and running:
bash-5.0# gcloud --version
Google Cloud SDK 327.0.0
bq 2.0.64
core 2021.02.05
gsutil 4.58
bash-5.0# gsutil --version
gsutil version: 4.58
If you want a specific version of google-cloud-sdk, you can visit https://storage.cloud.google.com/cloud-sdk-release
curl https://sdk.cloud.google.com | bash -s -- --disable-prompts
and export the SDK's bin directory on PATH; works for me.
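For example, in a Dockerfile, assuming the installer's default install location under /root:
# the one-liner above installs to the invoking user's home directory by default
ENV PATH $PATH:/root/google-cloud-sdk/bin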
I got this working with Ubuntu 18.04 using:
RUN apt-get install -y curl && curl -sSL https://sdk.cloud.google.com | bash
ENV PATH="$PATH:/root/google-cloud-sdk/bin"
You can use multi-stage builds to make this simpler and more efficient than solutions using curl.
FROM bitnami/google-cloud-sdk:0.392.0 as gcloud
FROM base-image-for-production:tag
# Do what you need to configure your production image
COPY --from=gcloud /opt/bitnami/google-cloud-sdk/ /google-cloud-sdk
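You'll likely also want the copied SDK on your PATH (and note that gcloud needs a Python interpreter in the final image); a minimal sketch, assuming the COPY destination above:
# make gcloud/gsutil from the copied SDK resolvable
ENV PATH $PATH:/google-cloud-sdk/bin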
This works for me:
FROM php:7.2-fpm
RUN apt-get update -y
RUN apt-get install -y python && \
curl -sSL https://sdk.cloud.google.com | bash
ENV PATH $PATH:/root/google-cloud-sdk/bin
An example using debian as the base image:
FROM debian:stretch
RUN apt-get update && apt-get install -y apt-transport-https gnupg curl lsb-release
RUN export CLOUD_SDK_REPO="cloud-sdk-$(lsb_release -c -s)" && \
echo "cloud SDK repo: $CLOUD_SDK_REPO" && \
echo "deb http://packages.cloud.google.com/apt $CLOUD_SDK_REPO main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list && \
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && \
apt-get update -y && apt-get install google-cloud-sdk -y
I used most of these examples in some form (thanks @KJoe), but I had to do several other things to set everything up so gcloud would work in the environment. Note that it is preferable to limit the number of RUN lines (fewer layers to pull).
Here's a more complete example of a Dockerfile with gcloud set up, extending a CircleCI image:
FROM circleci/ruby:2.4.1-jessie-node-browsers
# user is circleci in the FROM image, switch to root for system lib installation
USER root
ENV CCI /home/circleci
ENV GTMP /tmp/gcloud-install
ENV GSDK $CCI/google-cloud-sdk
ENV PATH="${GSDK}/bin:${PATH}"
# do all system lib installation in one-line to optimize layers
RUN curl -sSL https://sdk.cloud.google.com > $GTMP && bash $GTMP --install-dir=$CCI --disable-prompts \
&& rm -rf $GTMP \
&& chmod +x $GSDK/bin/* \
\
&& chown -Rf circleci:circleci $CCI
# change back to the user in the FROM image
USER circleci
# setup gcloud specifics to your liking
RUN gcloud config set core/disable_usage_reporting true \
&& gcloud config set component_manager/disable_update_check true \
&& gcloud components install alpha beta kubectl --quiet
My use case was to generate a Google bearer token using a service account, so I wanted the Docker container to have gcloud installed. This is how my Dockerfile looks:
FROM google/cloud-sdk
# Setting the default directory in container
WORKDIR /usr/src/app
# copies the app source code to the directory in container
COPY . /usr/src/app
CMD ["/bin/bash","/usr/src/app/token.sh"]
If you need to examine the image after it is built but no container is running, use docker run --rm -it <container-build-id> bash -il and type gcloud --version to check whether it installed correctly.
In the Google documentation you can see the best practice:
https://cloud.google.com/sdk/docs/install-sdk
Search the page for "Docker Tip".
E.g. for Debian, use:
RUN echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] http://packages.cloud.google.com/apt cloud-sdk main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list && curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key --keyring /usr/share/keyrings/cloud.google.gpg add - && apt-get update -y && apt-get install google-cloud-cli -y
If you're just interested in getting the gcloud CLI available, add this to your Dockerfile:
# Downloading gcloud package
RUN curl https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-cli-409.0.0-linux-x86_64.tar.gz > /tmp/google-cloud-cli.tar.gz
# Installing the gcloud cli
RUN mkdir -p /usr/local/gcloud \
&& tar -C /usr/local/gcloud -xf /tmp/google-cloud-cli.tar.gz \
&& /usr/local/gcloud/google-cloud-sdk/install.sh --quiet
# Make the CLI available on PATH
ENV PATH $PATH:/usr/local/gcloud/google-cloud-sdk/bin