Not able to start 2 tasks using Dockerfile CMD

I have a question about the Dockerfile CMD instruction. I am trying to set up a server that needs to run two commands in the Docker container at startup. I can run either service fine on its own, but when I script it to run both services at the same time, it fails. I have tried all sorts of variations of nohup, &, and Linux job backgrounding, but I haven't been able to solve it.
Here is my project where I am trying to achieve this:
https://djangofan.github.io/mountebank-with-ui-node/
#entryPoint.sh
#!/bin/bash
nohup /bin/bash -c "http-server -p 80 /ui" &
nohup /bin/bash -c "mb --port 2525 --configfile /mb/imposters.ejs --allowInjection" &
jobs -l
This displays the following output, but the ports are not listening:
djangofan@MACPRO ~/workspace/mountebank-container (master)*$ ./run-container.sh
f5c50afd848e46df93989fcc471b4c0c163d2f5ad845a889013d59d951170878
f5c50afd848e46df93989fcc471b4c0c163d2f5ad845a889013d59d951170878 djangofan/mountebank-example "/bin/bash -c /scripts/entryPoint.sh" Less than a second ago Up Less than a second 0.0.0.0:2525->2525/tcp, 0.0.0.0:4546->4546/tcp, 0.0.0.0:5555->5555/tcp, 2424/tcp, 0.0.0.0:9000->9000/tcp, 0.0.0.0:2424->80/tcp nervous_lalande
[1]- 5 Running nohup /bin/bash -c "http-server -p 80 /ui" &
[2]+ 6 Running nohup /bin/bash -c "mb --port 2525 --configfile /mb/imposters.ejs --allowInjection" &
And here is my Dockerfile:
FROM node:8-alpine
ENV MOUNTEBANK_VERSION=1.14.0
RUN apk add --no-cache bash gawk sed grep bc coreutils
RUN npm install -g http-server
RUN npm install -g mountebank@${MOUNTEBANK_VERSION} --production
EXPOSE 2525 2424 4546 5555 9000
ADD imposters /mb/
ADD ui /ui/
ADD *.sh /scripts/
# these work when run one at a time
#CMD ["http-server", "-p", "80", "/ui"]
#CMD ["mb", "--port", "2525", "--configfile", "/mb/imposters.ejs", "--allowInjection"]
# this doesn't work yet
CMD ["/bin/bash", "-c", "/scripts/entryPoint.sh"]

One process inside the Docker container has to run in the foreground, not the background, because the container only stays alive while its main process is running.
The /scripts/entryPoint.sh should be:
#!/bin/bash
nohup /bin/bash -c "http-server -p 80 /ui" &
nohup /bin/bash -c "mb --port 2525 --configfile /mb/imposters.ejs --allowInjection"
Everything else in your Dockerfile is fine.
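A variant of the same idea, not from the original answer, is to exec the foreground service so it replaces the shell and receives the container's stop signals directly (a sketch, assuming the same image and commands):
#!/bin/bash
# Hypothetical alternative entryPoint.sh: background the UI server,
# then exec the main service so it becomes the container's foreground
# process and receives SIGTERM on "docker stop".
http-server -p 80 /ui &
exec mb --port 2525 --configfile /mb/imposters.ejs --allowInjection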

Related

CMD is not run when used together with ENTRYPOINT

I am trying to run a container with ENTRYPOINT /sbin/init to allow me to run systemctl commands.
After that I use CMD sh-exports.sh to execute some commands from the script.
Dockerfile
FROM registry.redhat.io/ubi8/ubi-init
ADD sh-exports.sh /
ARG S3FS_VERSION=v1.86
ARG MNT_POINT=/var/s3fs
ENV MNT_POINT=${MNT_POINT}
RUN yum install somepackages -y && mkdir -p "$MNT_POINT" && chmod 755 "$MNT_POINT" && chmod 777 /sh-exports.sh
ENTRYPOINT [ "/sbin/init", "$@" ]
CMD "/sh-exports.sh"
sh-exports.sh
#!/bin/bash
echo $EXPORTS > /etc/exports
systemctl restart some.services
sleep infinity
The script sh-exports.sh is not executed.
I can log in to the container and simply run sh /sh-exports.sh, and the script runs just fine.
So, is there any way to allow me to use ENTRYPOINT /sbin/init and then any command as the CMD?
Yes, this is by design, and the way to run your command is to have an entrypoint that is able to receive the command as an argument.
What you are trying is the opposite of what is commonly done: you usually have a command passed to an entrypoint, and this command is the one keeping the container alive.
So, the Dockerfile should have those instructions:
COPY entrypoint.sh entrypoint.sh
CMD ["sleep", "infinity"]
ENTRYPOINT ["entrypoint.sh"]
And this entrypoint.sh should be:
#!/usr/bin/env bash
/sbin/init
echo $EXPORTS > /etc/exports
systemctl restart some.services
exec "$@"
If you want to be able to alter the instruction passed to systemctl in the docker run command, like
docker run my-container other-service
What you can do instead is
COPY entrypoint.sh entrypoint.sh
CMD ["some.services"]
ENTRYPOINT ["entrypoint.sh"]
And then, the entrypoint will look like:
#!/usr/bin/env bash
/sbin/init
echo $EXPORTS > /etc/exports
systemctl restart "$@"
sleep infinity
The reason for all this is that when a command is used in combination with an entrypoint, the command is passed as arguments to the entrypoint, so the responsibility for executing the command (or not) is delegated to the entrypoint.
So there is nothing "magic" happening really; it is just like a normal shell script: one script (the entrypoint) receives arguments (the command) and can execute the received arguments as a command, or as part of a command.
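A plain-shell analogy may make this concrete (hypothetical, for illustration only): the entrypoint is just a script that receives the command as its positional arguments and decides how to run them.

```shell
#!/bin/bash
# Hypothetical illustration: the "entrypoint" receives the "command"
# as its arguments, does its setup, then executes the arguments.
entrypoint() {
  echo "setup step"   # stand-in for e.g. writing /etc/exports
  "$@"                # run the received command, like exec "$@" in Docker
}

# Analogous to: docker run my-container echo hello
entrypoint echo hello
```

Running it prints "setup step" followed by "hello", mirroring how Docker appends CMD to ENTRYPOINT.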
This table from the Docker documentation explains the different behaviours:

                             | No ENTRYPOINT                | ENTRYPOINT exec_entry p1_entry | ENTRYPOINT ["exec_entry", "p1_entry"]
No CMD                       | error, not allowed           | /bin/sh -c exec_entry p1_entry | exec_entry p1_entry
CMD ["exec_cmd", "p1_cmd"]   | exec_cmd p1_cmd              | /bin/sh -c exec_entry p1_entry | exec_entry p1_entry exec_cmd p1_cmd
CMD ["p1_cmd", "p2_cmd"]     | p1_cmd p2_cmd                | /bin/sh -c exec_entry p1_entry | exec_entry p1_entry p1_cmd p2_cmd
CMD exec_cmd p1_cmd          | /bin/sh -c exec_cmd p1_cmd   | /bin/sh -c exec_entry p1_entry | exec_entry p1_entry /bin/sh -c exec_cmd p1_cmd
Source: https://docs.docker.com/engine/reference/builder/#understand-how-cmd-and-entrypoint-interact

Dockerfile needs to install nvm without internet gateway

I'm working on a webapp on AWS CodePipeline, and one of my backend pipeline's stages includes a docker build command. The Dockerfile includes these commands:
RUN curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.37.2/install.sh | bash
RUN /bin/bash -c ". ~/.nvm/nvm.sh && \
nvm install $NODE_VERSION && nvm use $NODE_VERSION && \
npm install -g aws-cdk cdk-assume-role-credential-plugin@1.1.1 && \
nvm alias default node && nvm cache clear"
RUN echo export PATH="\
/root/.nvm/versions/node/${NODE_VERSION}/bin:\
$(python3.8 -m site --user-base)/bin:\
$(python3 -m site --user-base)/bin:\
$PATH" >> ~/.bashrc && \
echo "nvm use ${NODE_VERSION} 1> /dev/null" >> ~/.bashrc
RUN /bin/bash -c ". ~/.nvm/nvm.sh && cdk --version"
ENTRYPOINT [ "/bin/bash", "-c", ". ~/.nvm/nvm.sh && uvicorn cdkproxymain:app --host 0.0.0.0 --port 8080" ]
The problem is that this code runs in a VPC without an internet gateway (the client's policy), so the curl command fails. I have tried to install nvm locally by copying the nvm folder into my src directory, but I lack the skills to script this.
Any advice is welcome. Thank you so much!
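One possible direction (a sketch only; the tarball name, paths, and mirror URL are hypothetical) is to vendor the nvm release into the build context and point nvm at an internal Node mirror reachable from the VPC, since nvm install itself also needs network access:
# Hypothetical sketch: install a pre-downloaded nvm release instead of
# curling it from GitHub. Paths, versions, and the mirror URL are placeholders.
ENV NVM_DIR=/root/.nvm
# nvm-0.37.2.tar.gz was downloaded beforehand and committed next to the Dockerfile
ADD nvm-0.37.2.tar.gz /tmp/nvm/
RUN mkdir -p "$NVM_DIR" && \
    cp -r /tmp/nvm/nvm-0.37.2/. "$NVM_DIR"/ && \
    echo ". $NVM_DIR/nvm.sh" >> ~/.bashrc
# nvm install downloads node from nodejs.org by default; redirect it to an
# internal mirror that is reachable without an internet gateway.
ENV NVM_NODEJS_ORG_MIRROR=https://internal-mirror.example.com/node
RUN /bin/bash -c ". $NVM_DIR/nvm.sh && nvm install $NODE_VERSION"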

Cloud SQL Proxy not working from Docker container

My application runs in a Docker container and is deployed with Google Compute instance groups with autoscaling enabled.
The problem I am facing is connecting to the MySQL instance from the auto-scaled compute instances; it is not working as expected.
Dockerfile
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y software-properties-common && \
    ...install other extensions
RUN curl -sS https://getcomposer.org/installer | \
php -- --install-dir=/usr/bin/ --filename=composer
COPY . /var/www/html
CMD cd /var/www/html
RUN composer install
ADD nginx.conf/default /etc/nginx/sites-available/default
RUN wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
RUN chmod +x cloud_sql_proxy
RUN mkdir /cloudsql
RUN chmod 777 /cloudsql
RUN chmod 777 -R storage bootstrap/cache
EXPOSE 80
CMD service php7.1-fpm start && nginx -g "daemon off;" && ./cloud_sql_proxy -dir=/cloudsql -instances=<connectionname>=tcp:0.0.0.0:3306 -credential_file=file.json &
The last part, ./cloud_sql_proxy -dir=/cloudsql -instances=<connectionname>=tcp:0.0.0.0:3306 -credential_file=file.json &, is not executed when I run the container.
If I run ./cloud_sql_proxy -dir=/cloudsql -instances=<connectionname>=tcp:0.0.0.0:3306 -credential_file=file.json & inside the container (by entering it via a docker command), it works, and when I close the terminal it stops working again.
I even tried to run it in the background, but no luck.
Does anyone have an idea?
This has been fixed by:
Creating a start.sh file and moving all the commands into it.
After starting the SQL proxy, I put sleep 10 and then start nginx and PHP.
Now it works as expected.
Dockerfile
FROM ubuntu:16.04
...other command
ADD start.sh /
RUN chmod +x /start.sh
EXPOSE 80
CMD ["/start.sh"]
and this is start.sh file
#start.sh
#!/bin/sh
./cloud_sql_proxy -dir=/cloudsql -instances=<connectionname>=tcp:0.0.0.0:3306 -credential_file=<file>.json &
sleep 10
service php7.1-fpm start
nginx -g "daemon off;"
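The fixed sleep 10 is only a guess at when the proxy becomes ready. A polling loop (a sketch; it relies on bash's /dev/tcp, so the script would need #!/bin/bash rather than #!/bin/sh) could wait for the proxy port instead:

```shell
#!/bin/bash
# Sketch: poll a TCP port until it accepts connections, up to a timeout,
# instead of sleeping a fixed number of seconds.
wait_for_port() {
  local host="$1" port="$2" timeout="${3:-30}"
  local i
  for ((i = 0; i < timeout; i++)); do
    # bash opens a TCP connection via its /dev/tcp pseudo-device
    if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
      return 0   # port is accepting connections
    fi
    sleep 1
  done
  return 1       # gave up after $timeout seconds
}

# In start.sh, "sleep 10" could become:
# wait_for_port 127.0.0.1 3306 30 || exit 1
```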

ECS Docker container won't start

I have a Docker container with this Dockerfile:
FROM node:8.1
RUN rm -fR /var/lib/apt/lists/*
RUN echo "deb http://ppa.launchpad.net/webupd8team/java/ubuntu trusty main" | tee /etc/apt/sources.list.d/webupd8team-java.list
RUN echo "deb-src http://ppa.launchpad.net/webupd8team/java/ubuntu trusty main" | tee -a /etc/apt/sources.list.d/webupd8team-java.list
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys EEA14886
RUN apt-get update
RUN echo debconf shared/accepted-oracle-license-v1-1 select true | \
debconf-set-selections
RUN echo debconf shared/accepted-oracle-license-v1-1 seen true | \
debconf-set-selections
RUN apt-get install -y oracle-java8-installer
RUN apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN mkdir -p /app
WORKDIR /app
# Install app dependencies
COPY package.json /app/
RUN npm install
# Bundle app source
COPY . /app
# Environment Variables
ENV PORT 8080
# start the SSH daemon service
RUN service ssh start
# create a non-root user & a home directory for them
RUN useradd --create-home --shell /bin/bash tunnel-user
# set their password
RUN echo 'tunnel-user:93wcBjsp' | chpasswd
# Copy the SSH key to authorized_keys
COPY tunnel.pub /app/
RUN mkdir -p /home/tunnel-user/.ssh
RUN cat tunnel.pub >> /home/tunnel-user/.ssh/authorized_keys
# Set permissions
RUN chown -R tunnel-user:tunnel-user /home/tunnel-user/.ssh
RUN chmod 0700 /home/tunnel-user/.ssh
RUN chmod 0600 /home/tunnel-user/.ssh/authorized_keys
# allow the tunnel-user to SSH into this machine
RUN echo 'AllowUsers tunnel-user' >> /etc/ssh/sshd_config
EXPOSE 8080
EXPOSE 22
CMD [ "npm", "start" ]
My ECS task has this definition. I'm using a role that has the AmazonEC2ContainerServiceforEC2Role policy attached.
When I try to start it as a task in my ECS cluster I get this error:
CannotStartContainerError: API error (500): driver failed programming external connectivity on endpoint ecs-ssh-4-ssh-8cc68dbfaa8edbdc0500 (387e024a87752293f51e5b62de9e2b26102d735e8da500c8e7fa5e1b4b4f0983): Error starting userland proxy: listen tcp 0.0.0
How do I fix this?

Starting nginx on Ubuntu 12.04 with init.d and custom configuration file

I have an nginx configuration specific to a project I'm currently working on (Django, to be precise).
It looks like the "right" way to start nginx on Ubuntu is
sudo /etc/init.d/nginx start
However, I want to supply a custom configuration file. Normally I'd do this in the following way:
sudo nginx -c /my/project/config/nginx.conf
Looking at the init.d/nginx file, it doesn't look like the start command passes in any arguments, so I can't do
sudo /etc/init.d/nginx start -c /my/project/config/nginx.conf
What's the best way to solve my problem?
init.d is not the right supervisor to use on Ubuntu anymore; you should use Upstart. Put this in /etc/init/nginx.conf and you will be able to start/stop it with sudo start nginx and sudo stop nginx:
description "nginx http daemon"
author "George Shammas"
start on (filesystem and net-device-up IFACE=lo)
stop on runlevel [!2345]
env DAEMON=/usr/local/nginx/sbin/nginx -c /my/project/config/nginx.conf
env PID=/usr/local/nginx/logs/nginx.pid
expect fork
respawn
respawn limit 10 5
pre-start script
$DAEMON -t
status=$?
if [ $status -ne 0 ]
then exit $status
fi
end script
exec $DAEMON