Missing output or input when running a C++ binary in Docker

I'm building a C++ binary inside a Docker builder stage with CMake:
FROM ubuntu:focal as builder
WORKDIR /var/lib/project
RUN cmake ... && make ...
Then the built binary is copied into the final image, which is also ubuntu:focal, into a WORKDIR:
WORKDIR /var/lib/project
COPY --from=builder /usr/src/project/bin/binary ./binary
EXPOSE 8080
CMD ["./binary"]
Running the container with docker run hangs (even with -d): there is no input or output. To stop the container, I have to kill it from another terminal.
However, if I exec into that same container while it is hanging and run the binary from a shell inside the container, it works as expected.
Command used to build the image:
docker build . --platform=linux/amd64 --file docker/Dockerfile --progress=plain -c 32 -t mydocker:latest
Tried:
CMD ["/bin/bash", "-c" , "./binary"]
CMD ["/bin/bash", "-c" , "exec", "./binary"]
And same configurations with ENTRYPOINT too.
Same behavior.
I'm guessing there is a problem with how the binary is built, and maybe some specific flags are needed, because if I do docker run ... ls or any other built-in command, it works and writes to my stdout.
I expect my binary's std* streams to be redirected to my std*, just like any other command run inside Docker.
There is another related question, as of now, unanswered.
Update:
The main binary is built with these flags:
CXX_FLAGS= -fno-builtin-memcmp -fPIC -Wfatal-errors -w -msse -msse4.2 -mavx2
Update:
The main part of the binary's code (the Apache Arrow version is 10.0.1):
int main() {
    arf::Location srv_loc = arf::Location::ForGrpcTcp("127.0.01", 8080).ValueUnsafe();
    arf::FlightServerOptions opt(srv_loc);
    auto server = std::make_unique<MyService>();
    ARROW_RETURN_NOT_OK(server->Init(opt));
    std::printf("Starting on port: %i\n", server->port());
    return server->Serve().ok();
}
UPDATE: The problem seems to be here:
While the container was hanging, I entered it and ran ps aux.
The binary gets PID 1. I don't know why this happens, but this is hardly the correct behavior.
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.2 0.0 341520 13372 ? Ssl 07:52 0:00 ./binary
root 12 0.5 0.0 4116 3412 pts/0 Ss 07:52 0:00 /bin/bash
root 20 0.0 0.0 5900 2884 pts/0 R+ 07:52 0:00 ps aux
UPDATE:
./binary is already executable; chmod +x makes no difference.
Adding printfs in multiple places produces no output, unless the last line, server->Serve(), is commented out or the prints are really large; then everything gets printed.
Separating the return statement from server->Serve() makes no difference.
UPDATE:
Running the built container with the docker run --init -d flags works.
Though, is this the right way? And if so, how can this be forced in the Dockerfile?

This problem is related to the PID of the process in the container. You can use the --init flag to avoid the problem. For more information, visit https://docs.docker.com/engine/reference/run/#specify-an-init-process
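--init is a docker run option (Compose has the equivalent init: true), so it cannot be set from the Dockerfile itself. If you want the same behavior baked into the image, one common approach is to install a small init such as tini and make it the entrypoint. A minimal sketch of the final stage, reusing the paths from the question and assuming the tini package is available in the image's repositories (package name and binary path may differ):
FROM ubuntu:focal
RUN apt-get update && apt-get install -y --no-install-recommends tini && rm -rf /var/lib/apt/lists/*
WORKDIR /var/lib/project
COPY --from=builder /usr/src/project/bin/binary ./binary
EXPOSE 8080
# tini runs as PID 1, reaps children and forwards signals to the binary
ENTRYPOINT ["/usr/bin/tini", "--"]
CMD ["./binary"]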

Related

How can I correct this Dockerfile to take arguments properly?

I have this Dockerfile content:
FROM python:latest
ARG b=8
ARG r=False
ARG p=1
ENV b=${b}
ENV r=${r}
ENV p=${p}
# many lines of successful setup
ENTRYPOINT python analyze_all.py /analysispath -b $b -p $p -r $r
My intention was to take three arguments at the command line like so:
docker run -it -v c:\analyze containername -e b=16 -e p=22 -e r=False
But unfortunately, I'm misunderstanding something fundamental and simple here instead of something complicated, so I'm helpless :).
If I understood the question correctly, this Dockerfile should do what is required:
FROM python:latest
# test script that prints argv
COPY analyze_all.py /
ENTRYPOINT ["python", "analyze_all.py", "/analysispath"]
Launch:
$ docker run -it test:latest -b 16 -p 22 -r False
sys.argv=['analyze_all.py', '/analysispath', '-b', '16', '-p', '22', '-r', 'False']
Looks like your Dockerfile is designed to build and run a container on Windows. I tested my Dockerfile on Linux; it probably won't be much different to use this approach on Windows.
I think the ARG instructions aren't needed in this case, because ARG defines a variable that users can pass at build time using the docker build command. I would also suggest that you take a look at the Dockerfile reference for the ENTRYPOINT instruction:
Command line arguments to docker run will be appended after all elements in an exec form ENTRYPOINT, and will override all elements specified using CMD. This allows arguments to be passed to the entry point, i.e., docker run -d will pass the -d argument to the entry point.
Also, this question will probably be useful for you: How do I use Docker environment variable in ENTRYPOINT array?
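If the goal is to drive the script with environment variables at run time (the -e flags from the question), a sketch under that assumption: keep the ENV defaults and let a shell expand the variables when the container starts; note that -e must come before the image name on the docker run command line:
FROM python:latest
ENV b=8 r=False p=1
COPY analyze_all.py /
# a shell expands $b, $p and $r when the container starts
ENTRYPOINT ["/bin/sh", "-c", "python /analyze_all.py /analysispath -b \"$b\" -p \"$p\" -r \"$r\""]
Launch:
$ docker run -it -e b=16 -e p=22 -e r=False test:latest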

qemu quits when pressing ctrl-c in gdb

Debugging my own kernel with qemu and gdb seems to be unnecessarily hard because pressing ctrl-c in gdb to break qemu does not break it, but makes it quit with the message
qemu-system-x86_64: terminating on signal 2
[Inferior 1 (Remote target) exited normally]
qemu command line:
qemu-system-x86_64 -s -no-shutdown -no-reboot -enable-kvm -m 1G -smp cores=1 -cpu qemu64 -drive if=pflash,format=raw,file=ovmf/OVMF.fd -drive file=fat:rw:hda,format=raw -net none -debugcon file:debug.log -global isa-debugcon.iobase=0x402 &
The behavior is the same without KVM. Could someone please help me figure out how to solve this?
qemu-system-x86_64 v3.1.0
gdb v8.2.1
I would prefer not to build the latest versions of these from source, as that seems to be a daunting task.
EDIT: I created a minimal environment where the issue can be reproduced. I may have tracked it down to running the whole thing from a shell script, but can't seem to progress further. Commenting out the gdb call in the script and starting it from a separate terminal solves the issue (however, I like things that work with as few keystrokes as possible).
You can download it here.
Just start the script called qd
(Is there a nicer way to provide files? I will delete this after a while.)
I tested with QEMU 5.0.0 and GDB 9.2: same issue, and same solution, that is, commenting out the GDB call in the script and starting it from a separate terminal. You could probably just modify your script so that QEMU is started in another terminal. Starting QEMU using nohup does not work either.
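A minimal sketch of that workaround, assuming an X session with xterm available (any terminal emulator would do): QEMU runs in its own terminal, so the Ctrl-C typed in GDB is no longer delivered to it.
# start the QEMU command line from the question in its own terminal window
xterm -e sh -c "qemu-system-x86_64 -s -no-shutdown -no-reboot -enable-kvm -m 1G -smp cores=1 -cpu qemu64 -drive if=pflash,format=raw,file=ovmf/OVMF.fd -drive file=fat:rw:hda,format=raw -net none -debugcon file:debug.log -global isa-debugcon.iobase=0x402" &
# attach GDB to the gdbstub opened by -s (TCP port 1234)
gdb -ex "target remote localhost:1234"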
I included the script I usually use for building fresh versions of QEMU and GDB: the latest versions are likely to have fixed bugs. The script works on Ubuntu 20.04, and probably still works on 16.04 and 18.04; you may have to make small adjustments at the beginning of the script. Feel free to report issues, and I would be willing to fix them.
build-qemu-gdb.sh:
#!/bin/bash
set -e
# Xenial/16.04
PERL_MODULES_VERSION=5.22
SPHINX=python-sphinx
# Bionic/18.04
PERL_MODULES_VERSION=5.26
SPHINX=python-sphinx
# Focal/20.04
PERL_MODULES_VERSION=5.30
SPHINX="sphinx-doc sphinx-common"
# Qemu
QEMU_VERSION=5.0.0
PREFIX=/opt/qemu-${QEMU_VERSION}
# GDB
GDB_VERSION=9.2
do_get_gdb()
{
    if [ -f gdb-${GDB_VERSION}.tar.xz ]
    then
        echo "gdb-${GDB_VERSION}.tar.xz is present."
    else
        wget http://ftp.gnu.org/gnu/gdb/gdb-${GDB_VERSION}.tar.xz
    fi
}
do_get_qemu()
{
    if [ -f qemu-${QEMU_VERSION}.tar.xz ]
    then
        echo "qemu-${QEMU_VERSION}.tar.xz is present."
    else
        wget https://download.qemu.org/qemu-${QEMU_VERSION}.tar.xz
    fi
}
do_install_prerequisites()
{
    sudo apt-get install libglib2.0-dev libfdt-dev libpixman-1-dev zlib1g-dev libaio-dev libbluetooth-dev libbrlapi-dev libbz2-dev libcap-dev libcap-ng-dev libcurl4-gnutls-dev libgtk-3-dev libibverbs-dev \
        libjpeg8-dev libncurses5-dev libnuma-dev librbd-dev librdmacm-dev libsasl2-dev libsdl2-dev libseccomp-dev libsnappy-dev libssh2-1-dev libvde-dev libvdeplug-dev libvte-2.91-dev libxen-dev liblzo2-dev \
        valgrind xfslibs-dev liblzma-dev flex bison texinfo gettext perl perl-modules-${PERL_MODULES_VERSION} ${SPHINX}
}
do_configure()
{
    local TARGET_LIST="x86_64-softmmu"
    pushd qemu-${QEMU_VERSION}
    ./configure --target-list="${TARGET_LIST}" --prefix=${PREFIX} --extra-cflags="-I$(pwd)/packages/include" --extra-ldflags="-L$(pwd)/packages/lib"
    popd
}
do_extract_qemu()
{
    echo "extracting QEMU..."
    rm -rf qemu-${QEMU_VERSION}
    tar Jxf qemu-${QEMU_VERSION}.tar.xz
}
# compile step
do_compile_qemu()
{
    echo "building..."
    pushd qemu-${QEMU_VERSION}
    make all
    popd
}
do_install_qemu()
{
    echo "installing..."
    pushd qemu-${QEMU_VERSION}
    sudo make install
    popd
}
# full extract/configure/compile/install sequence
do_build_qemu()
{
    do_extract_qemu
    do_configure
    do_compile_qemu
    do_install_qemu
}
do_extract_gdb()
{
    echo "extracting GDB..."
    rm -rf gdb-${GDB_VERSION}
    tar Jxf gdb-${GDB_VERSION}.tar.xz
}
do_build_gdb()
{
    do_extract_gdb
    rm -rf gdb
    mkdir gdb
    pushd gdb
    ../gdb-${GDB_VERSION}/configure --enable-tui --prefix=/opt/gdb-${GDB_VERSION}-x86_64-none-elf --target=x86_64-none-elf --program-prefix=x86_64-none-elf-
    make all install
    popd
}
# main
do_install_prerequisites
do_get_qemu
do_build_qemu
do_get_gdb
do_build_gdb
The resulting new paths for QEMU and GDB after installation would be:
/opt/qemu-5.0.0/bin/qemu-system-x86_64
/opt/gdb-9.2-x86_64-none-elf/bin/x86_64-none-elf-gdb

Problems running a Docker image built with an ARMv7 base image on a x86 desktop

I am trying to run a Docker image based on an ARMv7 container on an x86 computer. According to this site, it is possible by running this container first.
docker run --rm --privileged hypriot/qemu-register
This command works on Mac OS X and on an Ubuntu 19 virtual machine (with a Windows 10 host). However, when I try to run it on CentOS 7 and on one of the AWS A1 instances, I get the message standard_init_linux.go:211: exec user process caused "exec format error". The CPU of the CentOS 7 machine is an Intel Core i7-8700K, and the AWS A1 instance is based on the Graviton processor.
Anyone know what I'm missing here?
The complaint on the AWS A1 instance is with installing Miniconda. I'm not sure if there is a way to say yes (to continue the install), since the -b flag is already supposed to make Miniconda install silently.
Step 6/11 : RUN /bin/bash /tmp/miniconda.sh -b -p /opt/miniconda
---> Running in ab9b5fef6837
WARNING:
Your processor does not appear to be an armv7l. This software
was sepicically build for the Raspberry Pi 2 running raspbian wheezy
(or above).
Are sure you want to continue the installation? [yes|no]
[no] >>> Aborting installation
AWS A1 instances do support running Armv7 binaries. Using the available Ubuntu 18.04 AMI for A1, run this on the command line:
cat /boot/config-4.15.0-1043-aws | grep "CONFIG_COMPAT=y"
If this succeeds, then the AMI and kernel have been built with support for running 32-bit executables on the 64-bit platform. To test this capability, install using apt-get install gcc:armhf libc6:armhf to get a minimal 32-bit build environment, create an executable and execute readelf -h on it. You should see the Machine listed as ARM, not AArch64. Execution should also succeed.
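A sketch of that test, using the package names given above (on some images you may first need dpkg --add-architecture armhf and apt-get update):
sudo apt-get install gcc:armhf libc6:armhf
# hypothetical test program
printf 'int main(void){return 0;}\n' > hello.c
gcc hello.c -o hello
readelf -h hello | grep Machine   # should report ARM, not AArch64
./hello                           # execution should also succeed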
Testing docker with armv7 images also works out of the box on the Ubuntu 18.04 AMI on A1. I tested via docker pull armhf/ubuntu:latest and then entered interactive mode using bash and tried installing Miniconda3. The problem does appear to be with the Miniconda install script linked above. It tries this unconditionally on line 58:
if [[ `uname -m` != 'armv7l' ]]; then
echo -n "WARNING:
Your processor does not appear to be an armv7l. This software
was sepicically build for the Raspberry Pi 2 running raspbian wheezy
(or above).
Are sure you want to continue the installation? [yes|no]
[no] >>> "
read ans
if [[ ($ans != "yes") && ($ans != "Yes") && ($ans != "YES") &&
($ans != "y") && ($ans != "Y") ]]
then
echo "Aborting installation"
exit 2
fi
fi
Docker does not do any rewriting of what uname -m returns; it will see AArch64 on the A1 instance and trip up there. Commenting this block out should get you going on the A1 instances.
To get this to work on your x86 laptop, you will need to copy qemu-arm-static into the Docker image to enable emulation. I am not sure, but I would suspect that uname would still not return the machine type Miniconda expects.
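A minimal sketch of that approach, assuming qemu-user-static is installed on the x86 host and its qemu-arm-static binary has been copied into the build context (base image and package chosen only for illustration):
FROM armhf/ubuntu:latest
# the statically linked emulator lets the x86 host run the ARM binaries in this image
COPY qemu-arm-static /usr/bin/qemu-arm-static
RUN apt-get update && apt-get install -y wget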

Using regex with run-parts on Alpine 3.9

I'm creating a Docker image FROM alpine:3.9.2 and I need to run run-parts. I used the script below in the past on ubuntu:16.04 without problems.
run-parts --verbose --regex '\.sh$' "$DIR"
However, this time around, I get errors for the options I pass to it, i.e.:
run-parts: unrecognized option: verbose
run-parts: unrecognized option: regex
From my understanding, Alpine 3.9.2 uses run-parts 4.8.6 (https://pkgs.alpinelinux.org/package/edge/main/x86/run-parts), which should come from debianutils (https://manpages.debian.org/testing/debianutils/run-parts.8.en.html) and supports both --verbose and --regex.
Am I missing anything here?
How can I run all the files ending with .sh on Alpine 3.9.2?
There is a very cut-down version of run-parts in the Alpine image by default. It is the BusyBox one:
/ # which run-parts
/bin/run-parts
/ # run-parts --help
BusyBox v1.29.3 (2019-01-24 07:45:07 UTC) multi-call binary.
Usage: run-parts [-a ARG]... [-u UMASK] [--reverse] [--test] [--exit-on-error] DIRECTORY
Run a bunch of scripts in DIRECTORY
-a ARG Pass ARG as argument to scripts
-u UMASK Set UMASK before running scripts
--reverse Reverse execution order
--test Dry run
--exit-on-error Exit if a script exits with non-zero
It can only run a bunch of scripts in a directory.
If you want to use the full run-parts from the debianutils package, you need to install it into the Alpine image first:
/ # apk add --no-cache run-parts
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/community/x86_64/APKINDEX.tar.gz
(1/1) Installing run-parts (4.8.6-r0)
Executing busybox-1.29.3-r10.trigger
OK: 6 MiB in 15 packages
Now there is a full version of run-parts in the Alpine instance:
/ # which run-parts
/usr/bin/run-parts
/ # run-parts --help
Usage: run-parts [OPTION]... DIRECTORY
--test print script names which would run, but don't run them.
--list print names of all valid files (can not be used with
--test)
-v, --verbose print script names before running them.
--report print script names if they produce output.
--reverse reverse execution order of scripts.
--exit-on-error exit as soon as a script returns with a non-zero exit
code.
--lsbsysinit validate filenames based on LSB sysinit specs.
--new-session run each script in a separate process session
--regex=PATTERN validate filenames based on POSIX ERE pattern PATTERN.
-u, --umask=UMASK sets umask to UMASK (octal), default is 022.
-a, --arg=ARGUMENT pass ARGUMENT to scripts, use once for each argument.
-V, --version output version information and exit.
-h, --help display this help and exit.
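Putting it together in a Dockerfile, the original command should then work; a sketch, where /scripts is a hypothetical directory holding the .sh files:
FROM alpine:3.9.2
RUN apk add --no-cache run-parts
COPY scripts/ /scripts/
# the debianutils run-parts understands --verbose and --regex
RUN run-parts --verbose --regex '\.sh$' /scripts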

Running the "exec" command in Jenkins "Execute Shell"

I'm running Jenkins on a Linux host. I'm automating the build of a C++ application. In order to build the application, I need to use version 4.7 of g++, which includes support for C++11. In order to use this version of g++, I run the following command at a command prompt:
exec /usr/bin/scl enable devtoolset-1.1 bash
So I created an "Execute shell" build step and put in the following commands, which properly build the C++ application at the command prompt:
exec /usr/bin/scl enable devtoolset-1.1 bash
libtoolize
autoreconf --force --install
./configure --prefix=/home/tomcat/.jenkins/workspace/project
make
make install
cd procs
./makem.sh /home/tomcat/.jenkins/workspace/project
The problem is that Jenkins will not run any of the commands after the "exec /usr/bin/scl enable devtoolset-1.1 bash" command, but instead just runs the "exec" command, terminates and marks the build as successful.
Any ideas on how I can re-structure the above so that Jenkins will run all the commands?
Thanks!
At the beginning of your "Execute shell" script, execute source /opt/rh/devtoolset-1.1/enable to enable the devtoolset "inside" of your shell.
Which gives:
source /opt/rh/devtoolset-1.1/enable
libtoolize
autoreconf --force --install
./configure --prefix=/home/tomcat/.jenkins/workspace/project
make
make install
cd procs
./makem.sh /home/tomcat/.jenkins/workspace/project
I needed to look up what scl actually does.
Examples
scl enable example 'less --version'
runs command 'less --version' in the environment with collection 'example' enabled
scl enable foo bar bash
runs bash instance with foo and bar Software Collections enabled
So what you are doing is running a bash shell. I guess that the bash shell returns immediately, since you are in non-interactive mode. exec runs the command within the current shell process instead of creating a new one. That means that when the newly started bash ends, it also ends your shell prematurely. I would suggest putting all your build steps into a bash script (e.g. run_my_build.sh) and calling it in the following way:
exec /usr/bin/scl enable devtoolset-1.1 run_my_build.sh
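A sketch of what run_my_build.sh could contain, reusing the build steps from the question (paths as given there); make it executable with chmod +x and reference it by its path:
#!/bin/bash
set -e  # stop at the first failing step
libtoolize
autoreconf --force --install
./configure --prefix=/home/tomcat/.jenkins/workspace/project
make
make install
cd procs
./makem.sh /home/tomcat/.jenkins/workspace/project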
This kind of thing is normally used with "find" commands, but it may work here too. Rather than running two or three processes, you run one "sh" that executes multiple things, like this:
exec sh -c "thing1; thing2; thing3"
If you require each step to succeed before the next step, replace the semi-colons with double ampersands:
exec sh -c "thing1 && thing2 && thing3"
I have no idea which of your steps you wish to run together, so I am hoping you can adapt the concept to fit your needs.
Or you can put the whole lot into a script and exec that.
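For example, following the scl man page excerpt above, the whole build can be passed to scl as a single quoted command string; a sketch using the steps from the question:
exec /usr/bin/scl enable devtoolset-1.1 'libtoolize && autoreconf --force --install && ./configure --prefix=/home/tomcat/.jenkins/workspace/project && make && make install && cd procs && ./makem.sh /home/tomcat/.jenkins/workspace/project'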