This is the contents of my Dockerfile:
FROM debian:jessie
RUN mkdir -p /var/www/html && \
    mkdir -p /var/log && \
    mkdir -p /var/lib/mysql && \
    mkdir -p /etc/apache2/sites-enabled && \
    chmod 0777 /var/lib/mysql
VOLUME ["/var/www/html", "/var/log", "/var/lib/mysql", "/etc/apache2/sites-enabled"]
Run with:
docker run --name data \
    -v ~/test/www/:/var/www/html \
    -v ~/test/logs/:/var/log \
    -v ~/test/vhosts/:/etc/apache2/sites-enabled \
    -v ~/test/mysql/:/var/lib/mysql \
    deano87/dockerfiles:data
And it fails to start up; nothing is printed anywhere. I have a far more complicated Docker image built and running, and I followed the same process to build and run this one. I don't see why it simply fails for no apparent reason.
This sounds like a 'data volume container'; see 'Creating and mounting a data volume container' in the Docker documentation. You only create such a container, you do not actually run it, because there is nothing to run: it is just a file system. (The debian:jessie base image's default command is bash, which exits immediately when run non-interactively; that is most likely why the container appears to fail.)
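A minimal sketch of that workflow, using the image above (the consuming image name is hypothetical): create the data container with docker create instead of docker run, then mount its volumes into the container that actually does the work.

docker create --name data \
    -v ~/test/www/:/var/www/html \
    -v ~/test/logs/:/var/log \
    -v ~/test/vhosts/:/etc/apache2/sites-enabled \
    -v ~/test/mysql/:/var/lib/mysql \
    deano87/dockerfiles:data

# Any other container can then reuse those volumes:
docker run --volumes-from data my-lamp-image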
I installed Docker. Now I am trying to run the QuestDB image, but the container keeps crashing.
What should I do?
cd ~
docker run -t -d \
    -p 9000:9000 \
    -p 9009:9009 \
    -p 8012:8012 \
    -p 9003:9003 \
    --name docker_questdb \
    questdb/questdb
I used this command, but it keeps failing.
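A first diagnostic step when a container keeps failing (a generic sketch, not from the original thread) is to read its logs and exit status, using the container name from the command above:

docker logs docker_questdb
docker inspect --format '{{.State.ExitCode}} {{.State.Error}}' docker_questdb

Note also that QuestDB's documented Postgres wire port is 8812, so -p 8012:8012 may be a typo, although an unusual port mapping by itself would not normally stop the container from starting.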
Can someone let me know whether MinIO can run as a non-root user?
I found some articles saying it can run only as root, not as a non-root user.
Please advise if anyone knows how this can be achieved, if it is possible at all.
From the MinIO docs (Run MinIO Docker as a regular user), you can provide the --user argument to the docker run command.
An example for Linux/macOS, from the docs:
mkdir -p ${HOME}/data
docker run -p 9000:9000 \
    --user $(id -u):$(id -g) \
    --name minio1 \
    -e "MINIO_ROOT_USER=AKIAIOSFODNN7EXAMPLE" \
    -e "MINIO_ROOT_PASSWORD=wJalrXUtnFEMIK7MDENGbPxRfiCYEXAMPLEKEY" \
    -v ${HOME}/data:/data \
    minio/minio server /data
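To confirm the server really runs as the requested non-root user (a quick check, not from the docs; container name from the example above), list the container's processes from the host:

# The UID column should show the user passed via --user, not root:
docker top minio1

Note that the host directory mounted at /data must be readable and writable by that UID:GID, since the process no longer has root privileges inside the container.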
I've been trying to put the AWS CLI on an Alpine-based Docker image I have. I found someone's tips for including the appropriate libraries for glibc here, and I was able to make things run smoothly in my docker build. If I call the aws executable in the build process, from the intermediate container it's being built on, it works.
After COPYing the aws binary directory to the destination image, all the files are transferred, but the aws executable no longer works. Whether I invoke it from the CMD or via docker exec, I just get an error that the file doesn't exist.
Does anyone have any idea what's going on?
This is the docker repo I started with: https://hub.docker.com/r/alfg/nginx-rtmp/. I'm just pasting the aws cli build code below after the ffmpeg FROM block, and adding the COPY --from=2 line further down in this question.
Here is the build section for the aws cli:
FROM alpine:3.11
ENV GLIBC_VER=2.31-r0
# install glibc compatibility for alpine
RUN apk --no-cache add \
        binutils \
        curl \
    && curl -sL https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub -o /etc/apk/keys/sgerrand.rsa.pub \
    && curl -sLO https://github.com/sgerrand/alpine-pkg-glibc/releases/download/${GLIBC_VER}/glibc-${GLIBC_VER}.apk \
    && curl -sLO https://github.com/sgerrand/alpine-pkg-glibc/releases/download/${GLIBC_VER}/glibc-bin-${GLIBC_VER}.apk \
    && apk add --no-cache \
        glibc-${GLIBC_VER}.apk \
        glibc-bin-${GLIBC_VER}.apk \
    && curl -sL https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip -o awscliv2.zip \
    && unzip awscliv2.zip \
    && aws/install \
    && rm -rf \
        awscliv2.zip \
        aws \
        /usr/local/aws-cli/v2/*/dist/aws_completer \
        /usr/local/aws-cli/v2/*/dist/awscli/data/ac.index \
        /usr/local/aws-cli/v2/*/dist/awscli/examples \
    && apk --no-cache del \
        binutils \
        curl \
    && rm glibc-${GLIBC_VER}.apk \
    && rm glibc-bin-${GLIBC_VER}.apk \
    && rm -rf /var/cache/apk/*
And here are the final steps of my Dockerfile which copy the files and update PATH:
COPY --from=0 /usr/local/nginx /usr/local/nginx
COPY --from=0 /etc/nginx /etc/nginx
COPY --from=1 /usr/local /usr/local
COPY --from=1 /usr/lib/libfdk-aac.so.2 /usr/lib/libfdk-aac.so.2
COPY --from=2 /usr/local/aws-cli /usr/local/aws-cli
# Add NGINX path, AWS-CLI path, config and static files.
ENV PATH "${PATH}:/usr/local/nginx/sbin:/usr/local/aws-cli/v2/current/bin"
ADD nginx.conf /etc/nginx/nginx.conf.template
RUN mkdir -p /opt/data && mkdir /www
ADD static /www/static
EXPOSE 1935
EXPOSE 80
CMD envsubst "$(env | sed -e 's/=.*//' -e 's/^/\$/g')" < \
    /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf && \
    nginx
The docker build runs successfully and I can launch containers from the image. I've also put things like aws s3 ls s3://my-public-bucket at the end of the aws-cli block in the Dockerfile, and during the build they run successfully and can pull from S3.
The only thing I can see wrong in the build is the warning below. It happens during the glibc/aws stage, but the build still completes successfully and the binary is functional afterwards:
/usr/glibc-compat/sbin/ldconfig: /usr/glibc-compat/lib/ld-linux-x86-64.so.2 is not a symbolic link
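That ldconfig message is widely reported as harmless with the sgerrand glibc package, so it is probably unrelated. On Alpine, a "file does not exist" error for a binary that is plainly present usually means the binary's ELF interpreter (here, the glibc loader) is missing from the final image. A generic check (a sketch; the image tag is hypothetical):

# If the glibc loader was not copied into the final stage, running the aws
# binary fails with "no such file or directory" even though the file exists:
docker run --rm --entrypoint sh my-nginx-rtmp-image -c '
    ls -l /usr/local/aws-cli/v2/current/bin/aws
    ls /usr/glibc-compat/lib/ 2>/dev/null || echo "glibc loader not found"'

If the loader is missing, the final stage likely needs the glibc compatibility files copied over as well (for example, /usr/glibc-compat and the /lib64 symlink pointing into it), not just /usr/local/aws-cli.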
I've built a custom Docker image from python:3.6 with the AWS CLI and the Session Manager plugin:
FROM python:3.6
WORKDIR /app
RUN pip3 install -U awscli
RUN apt-get update -y && \
    apt-get install groff less curl -y && \
    curl "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/ubuntu_64bit/session-manager-plugin.deb" -o "session-manager-plugin.deb" && \
    dpkg -i session-manager-plugin.deb && \
    rm -f session-manager-plugin.deb
RUN curl "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/ubuntu_64bit/session-manager-plugin.deb" -o "session-manager-plugin.deb" && \
dpkg -i session-manager-plugin.deb && \
rm -f session-manager-plugin.deb
ENTRYPOINT ["aws"]
I've created a custom executable file under /usr/bin/aws:
#!/bin/bash
docker run --rm -v "$(pwd)":"/app" -v "/root/.aws/":"/root/.aws" python-aws "$@"
When I run aws ssm start-session --target i-*** the output is:
^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#
^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#
...
Do you know how to solve the issue?
I just found the solution while writing the question: I added -it (interactive, with a pseudo-TTY) to the docker run command.
So the command is now:
#!/bin/bash
docker run -it --rm -v "$(pwd)":"/app" -v "/root/.aws/":"/root/.aws" python-aws "$@"
Problem solved.
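For context (my reading, not stated in the original answer): -i keeps STDIN open and -t allocates a pseudo-TTY. The session-manager-plugin drives an interactive shell, so without a TTY its raw terminal bytes get rendered as the ^#-style noise shown above.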
I am trying to build a Docker image with the Liberty profile, using the Dockerfile from the location below:
https://github.com/WASdev/ci.docker/blob/master/ga/developer/kernel/Dockerfile
FROM ibmjava:8-jre
RUN apt-get update \
    && apt-get install -y --no-install-recommends unzip \
    && rm -rf /var/lib/apt/lists/*
# Install WebSphere Liberty
ENV LIBERTY_VERSION 16.0.0_03
ARG LIBERTY_URL
ARG DOWNLOAD_OPTIONS=""
RUN LIBERTY_URL=${LIBERTY_URL:-$(wget -q -O - https://public.dhe.ibm.com/ibmdl/export/pub/software/websphere/wasdev/downloads/wlp/index.yml | grep $LIBERTY_VERSION -A 6 | sed -n 's/\s*kernel:\s//p' | tr -d '\r' )} \
    && wget $DOWNLOAD_OPTIONS $LIBERTY_URL -U UA-IBM-WebSphere-Liberty-Docker -O /tmp/wlp.zip \
    && unzip -q /tmp/wlp.zip -d /opt/ibm \
    && rm /tmp/wlp.zip
ENV PATH=/opt/ibm/wlp/bin:$PATH
# Set Path Shortcuts
ENV LOG_DIR=/logs \
    WLP_OUTPUT_DIR=/opt/ibm/wlp/output
RUN mkdir /logs \
    && ln -s $WLP_OUTPUT_DIR/defaultServer /output \
    && ln -s /opt/ibm/wlp/usr/servers/defaultServer /config
# Configure WebSphere Liberty
RUN /opt/ibm/wlp/bin/server create \
    && rm -rf $WLP_OUTPUT_DIR/.classCache /output/workarea
COPY docker-server /opt/ibm/docker/
EXPOSE 9080 9443
CMD ["/opt/ibm/docker/docker-server", "run", "defaultServer"]**
When I build the Docker image from this code, I get the error below. It looks like this repository is not active anymore. Can anyone provide a valid repository?
CWWKF1219E: The IBM WebSphere Liberty Repository cannot be reached. Verify that your computer has network access and firewalls are configured correctly, then try the action again. If the connection still fails, the repository server might be temporarily unavailable.
The URL is correct.
As the error message indicates, try checking your network configuration. To do that, you can try to reach this link (taken straight from the script) in a web browser:
https://public.dhe.ibm.com/ibmdl/export/pub/software/websphere/wasdev/downloads/wlp/index.yml
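For example, a quick reachability check from the host with plain curl:

# Prints the HTTP status code; 200 means the repository index is reachable:
curl -sS -o /dev/null -w '%{http_code}\n' https://public.dhe.ibm.com/ibmdl/export/pub/software/websphere/wasdev/downloads/wlp/index.yml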
Also, you could test your connection to the repository outside of the Docker environment by running:
$WLP_HOME/bin/installUtility testConnection
If you are able to ping the repo from your computer, but not within the docker container, then perhaps your docker container has no internet access.
To fix the "docker can't access internet" issue, it looks like the solution from the above link was to do:
service docker restart
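On systemd-based hosts the equivalent is:

sudo systemctl restart docker

Restarting the daemon rebuilds Docker's bridge network, which is a common fix when containers suddenly lose outbound connectivity.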