How to identify why a program is not starting inside Docker - C++

I have a Docker image with a C++ executable and its dependencies packed into it. The executable runs fine outside the Docker environment, and I have tested it multiple times.
Inside Docker, however, it stops immediately as soon as it is started.
To debug, I added a std::cout << "Main 1" << std::endl; at the very start of main(), but even this is not printed when I start the executable inside Docker.
Any tips on how to debug this issue?
Here is the Dockerfile used to build the image:
FROM ubuntu:18.04
# install app dependencies
RUN apt-get -yqq update \
&& apt-get -yqq dist-upgrade \
&& apt-get -yqq install apt-utils libgomp1 libprotobuf10 libboost-thread1.65.1 libboost-filesystem1.65.1 libopencv-core3.2 libopencv-imgproc3.2 libopencv-imgcodecs3.2 libjpeg-turbo8 libpo
&& apt-get -yqq remove systemd cups perl ffmpeg apt-utils \
&& rm -rf /var/lib/apt/lists/*
# create app folder
RUN mkdir -p /opt/aimes
# copy app, dependencies and config
COPY deps/aimes /opt/aimes/
COPY deps/*.* /opt/aimes/
COPY deps/config /opt/aimes/config
# copy wrapper script
COPY run-es.sh /opt/aimes/
# run command
WORKDIR /opt/aimes
ENV LD_LIBRARY_PATH .
ENTRYPOINT ["./run-es.sh"]

Adding --cap-add=SYS_PTRACE to the docker run command made it possible to track down the issue with gdb.
That option also turned out to be the fix itself, since the executable required the extra privileges.
The command below solved my issue:
docker run --cap-add=SYS_PTRACE -it --rm
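For anyone hitting something similar, the debugging sequence looked roughly like this (the image and binary names are placeholders, and it assumes gdb is installed in the image, e.g. via apt-get install gdb):

```
# start an interactive shell instead of the entrypoint, with ptrace allowed
docker run --cap-add=SYS_PTRACE -it --rm --entrypoint /bin/bash <image-name>

# inside the container: run the executable under gdb
gdb ./your-exe
(gdb) run
(gdb) bt    # backtrace once it stops, to see where it dies
```

Overriding the entrypoint keeps the container alive even when the program itself exits immediately, which is what makes the interactive debugging possible.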

Related

Cannot source tmux config file in a Dockerfile

I'm building a Docker image where I need tmux, and rather than having to run tmux source-file ~/.tmux.conf every time I start the container (that way madness lies), I'd like to source the config file at build time. However, this isn't working:
ARG PYTORCH="1.6.0"
ARG CUDA="10.1"
ARG CUDNN="7"
FROM pytorch/pytorch:${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel
RUN apt-get update && apt-get install -y man-db manpages-posix vim screen tmux \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# configuration for tmux
COPY src/.tmux.conf ~/.tmux.conf
RUN tmux source-file ~/.tmux.conf
I get the error:
error connecting to /tmp/tmux-0/default (No such file or directory)
The command '/bin/sh -c tmux source-file ~/.tmux.conf' returned a non-zero code: 1
What's happening? It doesn't seem to be a file not found error.
There's no tmux server running (no server at all has been started yet, hence the missing file). The config file will be loaded automatically when you run tmux in the container, so the failing line can simply be dropped.
Also, docker doesn't expand the ~, so you'll need to provide the absolute path. The resulting Dockerfile should look something like this, assuming you're running as root in the container:
ARG PYTORCH="1.6.0"
ARG CUDA="10.1"
ARG CUDNN="7"
FROM pytorch/pytorch:${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel
RUN apt-get update && apt-get install -y man-db manpages-posix vim screen tmux \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# configuration for tmux
COPY src/.tmux.conf /root/.tmux.conf
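To confirm the config really is picked up at run time, you can start a detached tmux session in the container and list the global options it loaded (a sketch; <image> is a placeholder for your built image tag):

```
docker run --rm <image> bash -c "tmux new-session -d && tmux show-options -g"
```

Options set in /root/.tmux.conf should appear in the show-options output, since tmux reads the config when the server starts.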

Apache Druid Nano Container Service Error

I want to spin up a low-configuration containerized service, for which I created the Dockerfile below and build it with:
docker build -t apache/druid_nano:0.20.2 -f Dockerfile .
FROM ubuntu:16.04
# Install Java JDK 8
RUN apt-get update \
&& apt-get install -y openjdk-8-jdk
RUN mkdir /app
WORKDIR /app
COPY apache-druid-0.20.2-bin.tar.gz /app
RUN tar xvzf apache-druid-0.20.2-bin.tar.gz
WORKDIR /app/apache-druid-0.20.2
EXPOSE <PORT_NUMBERS>
ENTRYPOINT ["/bin/start/start-nano-quickstart"]
When I start the container using the command docker run -d -p 8888:8888 apache/druid_nano:0.20.2, I get the error below:
/bin/start-nano-quickstart: no such file or directory
I removed the ENTRYPOINT command and built the image again just to check if the file exists in the bin directory inside the container. There is a file start-nano-quickstart under the bin directory inside the container.
Am I missing anything here? Please help.
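Not a full answer, but a quick way to check where the script actually lives is to override the ENTRYPOINT so the container just lists files (since WORKDIR is /app/apache-druid-0.20.2, this lists its bin directory):

```
docker run --rm --entrypoint ls apache/druid_nano:0.20.2 -l bin
```

If the script shows up there, the absolute ENTRYPOINT path /bin/start/start-nano-quickstart is likely wrong, and a relative path such as ./bin/start-nano-quickstart (resolved against WORKDIR) may be what was intended.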

Dockerfile throwing error when I try to run it: "AH00111: Config variable ${APACHE_RUN_DIR} is not defined"

I am trying my hand at Docker.
I am trying to install apache2 into an Ubuntu image.
FROM ubuntu
RUN echo "welcome to yellow pages"
RUN apt-get update
RUN apt-get install -y tzdata
RUN apt-get install -y apache2
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
RUN echo 'Hello, docker' > /var/www/index.html
ENTRYPOINT ["/usr/sbin/apache2"]
CMD ["-D", "FOREGROUND"]
I found a reference online.
I added the line RUN apt-get install -y tzdata because the build was prompting for tzdata options and stopping image creation.
Now, when I run my image, I get the error below:
[Thu Jan 07 09:43:57.213998 2021] [core:warn] [pid 1] AH00111: Config variable ${APACHE_RUN_DIR} is not defined
apache2: Syntax error on line 80 of /etc/apache2/apache2.conf: DefaultRuntimeDir must be a valid directory, absolute or relative to ServerRoot
I am new to Docker and it's a bit of a task for me to understand it.
Could anyone help me out with this?
This seems to be an Apache issue, not a Docker issue. Your config has errors: the DefaultRuntimeDir parameter points at a directory which does not exist in the container. Review your config file and ensure the directories you specify there exist in the container.
You can play within docker by simply:
docker build -t my_image_name .
docker run -it --rm --entrypoint /bin/bash my_image_name
# now you are in your docker container, you can check if your directories exist
Without knowing your config, I would simply add an ENV and one more RUN (I made this path up; you can change it to whatever you like):
ENV APACHE_RUN_DIR /var/lib/apache/runtime
RUN mkdir -p ${APACHE_RUN_DIR}
As a side note, I would also combine all the RUN instructions into a single one, like this:
RUN echo "welcome to yellow pages" \
&& apt-get update \
&& apt-get install -y tzdata apache2 \
&& rm -rf /var/lib/apt/lists/* \
&& mkdir -p /var/www \
&& echo 'Hello, docker' > /var/www/index.html
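To verify the fix without starting the server in the foreground, you can run Apache's built-in config check inside the image (a sketch, reusing the image name from the commands above):

```
docker build -t my_image_name .
docker run --rm --entrypoint apache2ctl my_image_name configtest
```

apache2ctl sources /etc/apache2/envvars itself, so the APACHE_* variables are defined for the check; "Syntax OK" means the DefaultRuntimeDir problem is gone.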

How to run the bash when we trigger docker run command without -it?

I have a Dockerfile as follow:
FROM centos
RUN mkdir work
RUN yum install -y python3 java-1.8.0-openjdk java-1.8.0-openjdk-devel tar git wget zip
RUN pip install pandas
RUN pip install boto3
RUN pip install pynt
WORKDIR ./work
CMD ["bash"]
where I am installing some basic dependencies.
Now when I run
docker run imagename
it does nothing but when I run
docker run -it imageName
I land in a bash shell. But I want to get into the bash shell as soon as I trigger the run command, without any extra parameters.
I am using this Docker container in AWS CodeBuild, where I can't specify parameters like -it, but I still want to execute my code inside the container.
Is it possible to modify CMD/ENTRYPOINT in such a way that when running the docker image I land right inside the container?
I checked your image; it will not even build, because pip is missing. So I modified it a bit so that it at least builds:
FROM centos
RUN mkdir glue
RUN yum install -y python3 java-1.8.0-openjdk java-1.8.0-openjdk-devel tar git wget zip python3-pip
RUN pip3 install pandas
RUN pip3 install boto3
RUN pip3 install pynt
WORKDIR ./glue
Build it using, e.g.:
docker build . -t glue
Then you can run command in it using for example the following syntax:
docker run --rm glue bash -c "mkdir a; ls -a; pwd"
I use --rm as I don't want to keep the container.
Hope this helps.
You cannot log in to the Docker container directly.
If you want to run specific commands when the container starts in detached mode, you can put them in the CMD or ENTRYPOINT instruction of the Dockerfile.
If you want to get into the shell directly, you can run
docker run -it imageName
or
docker run imageName bash -c "ls -ltr;pwd"
and it will return the output.
If you have triggered the run command without the -it parameters, you can get into the running container using:
docker exec -it containerName bash
and you will land in the shell.
Now, if you are using AWS CodeBuild custom images and are wondering how commands are submitted to the container, you put them into the buildspec.yml file, under the pre_build, build, or post_build phase, and they will be run in the Docker container.
buildspec.yml:
version: 0.2
phases:
  pre_build:
    commands:
      - pip install boto3  # or any pre-build configuration
  build:
    commands:
      - spark-submit job.py
  post_build:
    commands:
      - rm -rf /tmp/*
More about the buildspec file here

Why does CMD never work in my Dockerfiles?

I have a few Dockerfiles where CMD doesn't seem to run. Here is an example (all the way at the bottom).
##########################################################
# Set the base image to Ansible
FROM ubuntu:16.10
# Install Ansible, Python and Related Deps #
RUN apt-get -y update && \
apt-get install -y python-yaml python-jinja2 python-httplib2 python-keyczar python-paramiko python-setuptools python-pkg-resources git python-pip
RUN mkdir /etc/ansible/
RUN echo '[local]\nlocalhost\n' > /etc/ansible/hosts
RUN mkdir /opt/ansible/
RUN git clone http://github.com/ansible/ansible.git /opt/ansible/ansible
WORKDIR /opt/ansible/ansible
RUN git submodule update --init
ENV PATH /opt/ansible/ansible/bin:/bin:/usr/bin:/usr/local/bin:/sbin:/usr/sbin
ENV PYTHONPATH /opt/ansible/ansible/lib
ENV ANSIBLE_LIBRARY /opt/ansible/ansible/library
RUN apt-get update -y
RUN apt-get install python -y
RUN apt-get install python-dev -y
RUN apt-get install python-setuptools -y
RUN apt-get install python-pip
RUN mkdir /ansible/
WORKDIR /ansible
COPY ./ansible ./
WORKDIR /
RUN ansible-playbook -c local ansible/playbooks/installdjango.yml
ENV PROJECTNAME testwebsite
################## SETUP DIRECTORY STRUCTURE ######################
WORKDIR /home
CMD ["django-admin" "startproject" "$PROJECTNAME"]
EXPOSE 8000
If I build and run the container, I can manually run
django-admin startproject $PROJECTNAME
and it will create a new project as expected, but the CMD in my Dockerfile does not seem to do anything. This is happening with all my other Dockerfiles too, so there must be something I'm not getting.
ENTRYPOINT and CMD define the default command that docker runs when it starts your container, not when the image is built. When ENTRYPOINT isn't defined, docker simply runs the value of CMD. Otherwise, CMD becomes the args to the ENTRYPOINT. When you run your image, you can override the value of CMD by passing args after the container name.
So, in your example above, CMD may be defined as anything, but when you run your container with docker run -it <imagename> /bin/bash, you override any value of CMD and replace it with /bin/bash. To run the defined value of CMD, you would need to run the container with docker run <imagename>.
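Separately, two smaller problems with the CMD in the question are worth noting: the exec (JSON array) form requires commas between elements, and it does not go through a shell, so $PROJECTNAME would be passed literally rather than expanded. Two sketches that avoid this:

```
# shell form: runs via /bin/sh -c, so the variable is expanded
CMD django-admin startproject $PROJECTNAME

# exec form with an explicit shell, same effect
CMD ["sh", "-c", "django-admin startproject $PROJECTNAME"]
```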