CMD is not running when used together with ENTRYPOINT - dockerfile

I'm trying to run a container with ENTRYPOINT /sbin/init so that I can run systemctl commands.
After that I use CMD sh-exports.sh to execute some commands from the script.
Dockerfile
FROM registry.redhat.io/ubi8/ubi-init
ADD sh-exports.sh /
ARG S3FS_VERSION=v1.86
ARG MNT_POINT=/var/s3fs
ENV MNT_POINT=${MNT_POINT}
RUN yum install somepackages -y && mkdir -p "$MNT_POINT" && chmod 755 "$MNT_POINT" && chmod 777 /sh-exports.sh
ENTRYPOINT [ "/sbin/init", "$@" ]
CMD "/sh-exports.sh"
sh-exports.sh
#!/bin/bash
echo $EXPORTS > /etc/exports
systemctl restart some.services
sleep infinity
The script sh-exports.sh is not executed.
I can actually log in to the container and simply run sh /sh-exports.sh, and the script runs just fine.
So, is there any way to allow me to use ENTRYPOINT /sbin/init and then run any command via CMD?

Yes, this is by design, and the way to run your command is to have an entrypoint that is able to receive the command as an argument.
Your attempt is the opposite of what is commonly done: you usually have a command passed to an entrypoint, and this command is the one "keeping the container alive".
So, the Dockerfile should have these instructions:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
CMD ["sleep", "infinity"]
ENTRYPOINT ["/entrypoint.sh"]
And this entrypoint.sh should be:
#!/usr/bin/env bash
# start init in the background so the script can continue
/sbin/init &
echo $EXPORTS > /etc/exports
systemctl restart some.services
# execute whatever was passed as CMD (here: sleep infinity)
exec "$@"
If you want to be able to alter the service passed to systemctl in the docker run command, like
docker run my-container other-service
What you can do instead is
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
CMD ["some.services"]
ENTRYPOINT ["/entrypoint.sh"]
And then, the entrypoint will look like:
#!/usr/bin/env bash
# start init in the background, then configure the exports
/sbin/init &
echo $EXPORTS > /etc/exports
# restart whatever service names were passed as CMD
systemctl restart "$@"
sleep infinity
The reason for all this is that, when a command is used in combination with an entrypoint, the command is passed as an argument to the entrypoint, so the responsibility to execute the command (or not) is delegated to the entrypoint.
So there is nothing "magic" happening here really; it is just like a normal shell script: one script (the entrypoint) receives arguments (the command) and can execute the received arguments as a command, or as part of a command, as in the sketch below.
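A minimal sketch (the script and image names here are hypothetical):
#!/usr/bin/env bash
# hypothetical demo-entrypoint.sh: Docker passes the CMD (or any
# arguments given to docker run) to this script as "$@"
echo "entrypoint received: $*"
# replace this shell with the received command
exec "$@"
With this as the image's ENTRYPOINT, docker run my-image ls /tmp would print "entrypoint received: ls /tmp" and then run ls /tmp as the container's main process.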
This table from the Docker documentation explains the different behaviours:
|                            | No ENTRYPOINT              | ENTRYPOINT exec_entry p1_entry | ENTRYPOINT ["exec_entry", "p1_entry"]          |
| No CMD                     | error, not allowed         | /bin/sh -c exec_entry p1_entry | exec_entry p1_entry                            |
| CMD ["exec_cmd", "p1_cmd"] | exec_cmd p1_cmd            | /bin/sh -c exec_entry p1_entry | exec_entry p1_entry exec_cmd p1_cmd            |
| CMD ["p1_cmd", "p2_cmd"]   | p1_cmd p2_cmd              | /bin/sh -c exec_entry p1_entry | exec_entry p1_entry p1_cmd p2_cmd              |
| CMD exec_cmd p1_cmd        | /bin/sh -c exec_cmd p1_cmd | /bin/sh -c exec_entry p1_entry | exec_entry p1_entry /bin/sh -c exec_cmd p1_cmd |
Source: https://docs.docker.com/engine/reference/builder/#understand-how-cmd-and-entrypoint-interact
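To see the interaction at run time, assuming the image above was built with ENTRYPOINT ["/entrypoint.sh"] and CMD ["sleep", "infinity"] (the image name here is hypothetical):
# default CMD: the container runs /entrypoint.sh sleep infinity
docker run my-image
# CMD overridden: the container runs /entrypoint.sh some.services
docker run my-image some.services
# the ENTRYPOINT itself overridden, for a one-off debug shell
docker run -it --entrypoint /bin/bash my-image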

Related

Containerfile entrypoint /bin/sh

I would like to create a container that runs only one shell. For this I have tried the following:
FROM alpine:latest
ENTRYPOINT ["/bin/sh"]
Unfortunately I don't get a shell when I start the container.
podman build -t $IMAGENAME .
podman run --name foobar $IMAGENAME
podman start -ai foobar
But if I start the container as follows, it works:
podman run --name foobar2 -ti $IMAGENAME /bin/sh
/ #
CTRL+C
podman start -ai foobar2
/ #
I had assumed that the entrypoint "/bin/sh" would directly execute a shell that you can work with.
Your Containerfile is fine.
The problem is that your main command is a shell (/bin/sh); an interactive shell needs a pseudo-TTY or it will exit immediately.
You can allocate a pseudo-TTY with the --tty or -t option. It is also a good idea to use --interactive or -i so the main process can receive input.
All the commands below will work for you:
podman build -t $IMAGENAME .
# run (use CTRL + P + Q to exit)
podman run --name foobar -ti $IMAGENAME
# create + start
podman create --name foobar -ti $IMAGENAME
podman start foobar
This is not the case if the main command is something other than a shell, such as a web server like Apache.
The entrypoint needs to be a long-lived process. Using /bin/sh as the entrypoint would cause the container to exit as soon as it starts.
Try using:
FROM alpine:latest
ENTRYPOINT ["sleep", "9999"]
Then you can exec into the container and run your commands.
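For example (reusing the image variable from the question), with the sleep entrypoint keeping the container alive, you can attach a shell with podman exec:
podman run -d --name foobar $IMAGENAME
podman exec -it foobar /bin/sh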

ENTRYPOINT just refuses to exec or even shell run

This is my third tear-your-hair-out day since the weekend and I just cannot get ENTRYPOINT to work via GitLab Runner 13.3.1. This is for something that previously worked with a simple ENTRYPOINT ["/bin/bash"], but that was using local Docker Desktop with docker run followed by docker exec commands, which worked like a cinch. Essentially, at the end of it all, I previously got a WAR file built.
Currently I build my container in GitLab Runner 13.3.1, push it to an S3 bucket, and then use the image localhost:500/my-recently-builtcontainer and try to do whatever it is I want with the container, but I cannot even get ENTRYPOINT to work, in its exec form or in shell form - at least in the shell form I get to see something. In the exec form it just gave opaque "OCI runtime create failed" errors, so I shifted to the shell form just to see where I could get to.
I keep getting
sh: 1: sh: echo HOME=/home/nonroot-user params=#$ pwd=/ whoami=nonroot-user script=sh ENTRYPOINT reached which_sh=/bin/sh which_bash=/bin/bash PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin; ls -alrth /bin/bash; ls -alrth /bin/sh; /usr/local/bin/entrypoint.sh ;: not found
In my Dockerfile I distinctly have
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN bash -c "ls -larth /usr/local/bin/entrypoint.sh"
ENTRYPOINT "echo HOME=${HOME} params=#$ pwd=`pwd` whoami=`whoami` script=${0} ENTRYPOINT reached which_sh=`which sh` which_bash=`which bash` PATH=${PATH}; ls -alrth `which bash`; ls -alrth `which sh`; /usr/local/bin/lse-entrypoint.sh ;"
The output after I build the container in gitlab is - and I made sure anyone has rights to see this file and use it - just so that I can proceed with my work
-rwxrwxrwx 1 root root 512 Apr 11 17:40 /usr/local/bin/entrypoint.sh
So I know it is there, and the permission flags indicate anybody can read and execute it - so I am perplexed why it is saying NOT FOUND:
/usr/local/bin/entrypoint.sh ;: not found
entrypoint.sh is ...
#!/bin/sh
export PATH=$PATH:/usr/local/bin/
clear
echo Script is $0
echo numOfArgs is $#
echo paramtrsPassd is $@
echo whoami is `whoami`
bash --version
echo "About to exec ....."
exec "$@"
It does not even reach inside this entrypoint.sh file.

How to run command inside Docker container

I'm new to Docker and I'm trying to understand the following setup.
I want to debug my docker container to see if it is receiving AWS credentials when running as a task in Fargate. It is suggested that I run the command:
curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
But I'm not sure how to do so.
The setup uses Gitlab CI to build and push the docker container to AWS ECR.
Here is the dockerfile:
FROM rocker/tidyverse:3.6.3
RUN apt-get update && \
    apt-get install -y openjdk-11-jdk && \
    apt-get install -y liblzma-dev && \
    apt-get install -y libbz2-dev && \
    apt-get install -y libnetcdf-dev
COPY ./packrat/packrat.lock /home/project/packrat/
COPY initiate.R /home/project/
COPY hello.Rmd /home/project/
RUN install2.r packrat
RUN which nc-config
RUN Rscript -e 'packrat::restore(project = "/home/project/")'
RUN echo '.libPaths("/home/project/packrat/lib/x86_64-pc-linux-gnu/3.6.3")' >> /usr/local/lib/R/etc/Rprofile.site
WORKDIR /home/project/
CMD Rscript initiate.R
Here is the gitlab-ci.yml file:
image: docker:stable
variables:
  ECR_PATH: XXXXX.dkr.ecr.eu-west-2.amazonaws.com/
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""
services:
  - docker:dind
stages:
  - build
  - deploy
before_script:
  - docker info
  - apk add --no-cache curl jq py-pip
  - pip install awscli
  - chmod +x ./build_and_push.sh
build-rmarkdown-task:
  stage: build
  script:
    - export REPO_NAME=edelta/rmarkdown_report
    - export BUILD_DIR=rmarkdown_report
    - export REPOSITORY_URL=$ECR_PATH$REPO_NAME
    - ./build_and_push.sh
  when: manual
Here is the build and push script:
#!/bin/sh
$(aws ecr get-login --no-include-email --region eu-west-2)
docker pull $REPOSITORY_URL || true
docker build --cache-from $REPOSITORY_URL -t $REPOSITORY_URL ./$BUILD_DIR/
docker push $REPOSITORY_URL
I'd like to run this command on my docker container:
curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
How do I run this command on container startup in Fargate?
To run a command inside a docker container you need to be inside the docker container.
Step 1: Find the container ID / container name that you want to debug:
docker ps
A list of containers will be displayed; pick one of them.
Step 2: Run the following command:
docker exec -it <containerName/containerId> bash
Press enter, wait a few seconds, and you will be inside the docker container with an interactive bash session.
for more info read https://docs.docker.com/engine/reference/commandline/exec/
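In this case you can also skip the interactive shell and run the curl in one shot with docker exec (the container name is a placeholder). Note the single quotes: they ensure $AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is expanded inside the container, where Fargate sets it, rather than on your host:
docker exec <containerName> sh -c 'curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI'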
Short answer: just replace the CMD:
CMD ["sh", "-c", "curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI && Rscript initiate.R"]
Long answer: you need to replace the CMD of the Dockerfile, as it currently only runs the Rscript.
You have two options: add an entrypoint, or change the CMD (for the CMD, see above).
Create an entrypoint.sh that runs the debug command only when you want to debug:
#!/bin/sh
if [ "${IS_DEBUG}" = "true" ]; then
  echo "Container running in debug mode"
  curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
  # uncomment the line below if you still want to execute the R script in debug mode
  # exec "$@"
else
  exec "$@"
fi
The changes required on the Dockerfile side:
WORKDIR /home/project/
ENV IS_DEBUG=true
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["Rscript", "initiate.R"]
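With that in place, the behaviour can be toggled per run, since ENV IS_DEBUG=true only sets a default (the image name here is hypothetical):
# debug mode: prints the credentials response
docker run my-image
# normal mode: overrides the default and runs the R script
docker run -e IS_DEBUG=false my-image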

Docker django runs only if I specify the command

I am new to Docker and I was trying to create an image for my Django application.
I have created the image using the following Dockerfile
FROM python:3.6-slim-buster
WORKDIR /app
COPY . /app
RUN pip install -r Requirements.txt
EXPOSE 8000
ENTRYPOINT ["python", "manage.py"]
CMD ["runserver", '0.0.0.0:8000']
The problem is when I run the image using
docker run -p 8000:8000 <image-tag>
I am unable to access the app at localhost:8000
But if I run the container using the command
docker run -p 8000:8000 <image-tag> runserver 0.0.0.0:8000
I can see my app in localhost:8000
I think that you can use only the ENTRYPOINT instruction.
Try with:
FROM python:3.6-slim-buster
WORKDIR /app
COPY . /app
RUN pip install -r Requirements.txt
EXPOSE 8000
ENTRYPOINT ["python", "manage.py", "runserver", "0.0.0.0:8000"]
Or you can write a script file (entrypoint.sh) containing that line. You could also run makemigrations and migrate in the same file, as sketched below.
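A minimal sketch of such an entrypoint, assuming you keep the runserver invocation as the CMD (the exact migration commands depend on your project):
#!/bin/sh
# hypothetical entrypoint.sh: apply migrations, then hand over to CMD
python manage.py makemigrations
python manage.py migrate
exec "$@"
Paired with ENTRYPOINT ["/entrypoint.sh"] and CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"], the migrations run on every start and the server remains the container's main process.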
You need to change the single quotes to double quotes in your CMD line.
Let's play with this simplified Dockerfile:
FROM alpine
ENTRYPOINT ["echo", "python", "manage.py"]
CMD ["runserver", '0.0.0.0:8000']
Now build it and run it:
$ docker build .
...
Successfully built 24d598ae4182
$ docker run --rm 24d598ae4182
python manage.py /bin/sh -c ["runserver", '0.0.0.0:8000']
Docker is pretty picky about the JSON-array form of the CMD, ENTRYPOINT, and RUN instructions. If something doesn't parse as a JSON array, it will silently fall back to treating it as a plain command, which will get implicitly wrapped in a /bin/sh -c '...' invocation. That's what you're seeing here.
If you edit my Dockerfile to have double quotes in the CMD line and rebuild the image, then you'll see
$ docker run --rm 58114fa1fdb4
python manage.py runserver 0.0.0.0:8000
and if you actually COPY code in, use a Python base image, and delete that debugging echo, that's the command you want to execute.
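Concretely, the corrected lines for the original Dockerfile are simply:
ENTRYPOINT ["python", "manage.py"]
CMD ["runserver", "0.0.0.0:8000"]
which Docker concatenates into python manage.py runserver 0.0.0.0:8000 at startup.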

Not able to start 2 tasks using Dockerfile CMD

I have a question about the Dockerfile CMD command. I am trying to set up a server that needs to run two commands in the docker container at startup. I am able to run either one or the other service just fine on its own, but if I try to script it to run two services at the same time, it fails. I have tried all sorts of variations of nohup, &, and Linux task backgrounding, but I haven't been able to solve it.
Here is my project where I am trying to achieve this:
https://djangofan.github.io/mountebank-with-ui-node/
#entryPoint.sh
#!/bin/bash
nohup /bin/bash -c "http-server -p 80 /ui" &
nohup /bin/bash -c "mb --port 2525 --configfile /mb/imposters.ejs --allowInjection" &
jobs -l
Displays this output but the ports are not listening:
djangofan#MACPRO ~/workspace/mountebank-container (master)*$ ./run-container.sh
f5c50afd848e46df93989fcc471b4c0c163d2f5ad845a889013d59d951170878
f5c50afd848e46df93989fcc471b4c0c163d2f5ad845a889013d59d951170878 djangofan/mountebank-example "/bin/bash -c /scripts/entryPoint.sh" Less than a second ago Up Less than a second 0.0.0.0:2525->2525/tcp, 0.0.0.0:4546->4546/tcp, 0.0.0.0:5555->5555/tcp, 2424/tcp, 0.0.0.0:9000->9000/tcp, 0.0.0.0:2424->80/tcp nervous_lalande
[1]- 5 Running nohup /bin/bash -c "http-server -p 80 /ui" &
[2]+ 6 Running nohup /bin/bash -c "mb --port 2525 --configfile /mb/imposters.ejs --allowInjection" &
And here is my Dockerfile:
FROM node:8-alpine
ENV MOUNTEBANK_VERSION=1.14.0
RUN apk add --no-cache bash gawk sed grep bc coreutils
RUN npm install -g http-server
RUN npm install -g mountebank#${MOUNTEBANK_VERSION} --production
EXPOSE 2525 2424 4546 5555 9000
ADD imposters /mb/
ADD ui /ui/
ADD *.sh /scripts/
# these work when run one or the other
#CMD ["http-server", "-p", "80", "/ui"]
#CMD ["mb", "--port", "2525", "--configfile", "/mb/imposters.ejs", "--allowInjection"]
# this doesn't yet work
CMD ["/bin/bash", "-c", "/scripts/entryPoint.sh"]
One process inside the docker container has to run in the foreground, not in background mode, because a docker container keeps running only while its main process is running.
The /scripts/entryPoint.sh should be:
#!/bin/bash
# start the UI server in the background
nohup /bin/bash -c "http-server -p 80 /ui" &
# run mountebank in the foreground so it keeps the container alive
nohup /bin/bash -c "mb --port 2525 --configfile /mb/imposters.ejs --allowInjection"
Everything else is fine in your Dockerfile.
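If you want to avoid nohup entirely, an equivalent sketch backgrounds the UI server and execs mountebank as the foreground process, so it becomes the container's main process and receives signals directly:
#!/bin/bash
# run the UI server in the background
http-server -p 80 /ui &
# replace the shell with mountebank in the foreground
exec mb --port 2525 --configfile /mb/imposters.ejs --allowInjection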