Docker does not start cpp application - c++

I am trying to run a C++ application within Docker. After building the executable and creating the Dockerfile, I cannot run it inside Docker for some reason:
main.cpp
#include <iostream>
#include <chrono>
#include <thread>
#include <string>
#include <unistd.h>

int main(int argc, char *argv[])
{
    std::cout << "Started daemon..." << std::endl;
    std::string hostString(argv[1]);
    std::cout << "HostName:" << hostString << std::endl;
    std::cout << "Port:" << std::stoi(argv[2]) << std::endl;
    int i = 0;
    while (true) {
        std::cout << "Iterations:" << i++ << std::endl;
        std::this_thread::sleep_for(std::chrono::seconds(1));
        if (i > 10000) i = 0;
    }
    return 0;
}
Dockerfile
FROM ubuntu:latest
RUN mkdir -p /home/dockerc
COPY . /home/dockerc
ENTRYPOINT ["/home/dockerc/main","127.0.0.1","8350"]
dockerc folder
main.cpp
main.exe
Dockerfile
I run the following:
g++ main main.cpp
docker build app .
docker images (it shows that app image is created)
docker run app
The build is successful, but when I run it, it looks like it blocks. It just does not continue.
What is wrong? Could someone help me? I am new to Docker.
P.S. After waiting for about 10 minutes I get a long error message that begins with the following:
$ docker run cpapp
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: container a575e463f193dbc475aab78c1810486e23981a50c0b731f9c891c4143d0ed5b3 encountered an error during CreateProcess: failure in a Windows system call: The compute system exited unexpectedly. (0xc0370106)

You should put the complete path in the ENTRYPOINT and add the parameters to your program.
This Dockerfile does the job:
FROM ubuntu:latest
RUN mkdir -p /home/dockerc
COPY . /home/dockerc
ENTRYPOINT ["/home/dockerc/main", "hostname", "8000"]
replacing hostname and 8000 with the hostname and port that you need.
Edit
I tested your program on Linux, and to make it run I had to:
1) compile with -std=c++11 (because of chrono)
2) add -t to tag the image in the docker build
This is the complete list of commands to run:
g++ -o main main.cpp -std=c++11
docker build -t app .
docker run app
and the program then runs and prints its output as expected.

Replace your RUN cd ... with a WORKDIR .... Your cd does not do what you expect in this context: each RUN step runs in its own shell, so the directory change is forgotten at the next line.
You can also remove the RUN cd ... line and put the whole path in the ENTRYPOINT line.
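For example, a sketch of the WORKDIR variant of the Dockerfile above (same paths as in the question):

```dockerfile
FROM ubuntu:latest
COPY . /home/dockerc
# Unlike RUN cd, WORKDIR persists for all following instructions
# and for the container's entrypoint.
WORKDIR /home/dockerc
ENTRYPOINT ["./main", "127.0.0.1", "8350"]
```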

Related

Missing output or input when running c++ binary in docker

Building a cpp binary inside a docker builder with cmake
FROM ubuntu:focal as builder
WORKDIR /var/lib/project
RUN cmake ... && make ...
Then the built binary is copied into the final image, which is also ubuntu:focal, into a WORKDIR.
WORKDIR /var/lib/project
COPY --from=builder /usr/src/project/bin/binary ./binary
EXPOSE 8080
CMD ["./binary"]
Running the container with docker run hangs (even with -d): no input and no output. To stop the container, I have to kill it from another terminal.
However, if I exec into that same container while it is hanging and run the binary from a shell inside the container, it works as expected.
Command used to build the image:
docker build . --platform=linux/amd64 --file docker/Dockerfile --progress=plain -c 32 -t mydocker:latest
Tried:
CMD ["/bin/bash", "-c" , "./binary"]
CMD ["/bin/bash", "-c" , "exec", "./binary"]
And same configurations with ENTRYPOINT too.
Same behavior.
I'm guessing there is a problem with how the binary is built, maybe some specific flags are needed, because if I do docker run ... ls or any other built-in command it will work and output to my stdout.
Expecting to have my binary std* redirected to my std* just like any other command inside docker.
There is another related question, as of now, unanswered.
Update:
Main binary is built with these
CXX_FLAGS= -fno-builtin-memcmp -fPIC -Wfatal-errors -w -msse -msse4.2 -mavx2
Update:
Main part of the binary code; the version is Apache Arrow 10.0.1:
int main() {
    arf::Location srv_loc = arf::Location::ForGrpcTcp("127.0.01", 8080).ValueUnsafe();
    arf::FlightServerOptions opt(srv_loc);
    auto server = std::make_unique<MyService>();
    ARROW_RETURN_NOT_OK(server->Init(opt));
    std::printf("Starting on port: %i\n", server->port());
    return server->Serve().ok();
}
UPDATE: The problem seems to be here:
While the container is hanging, I entered into it, and did a ps aux.
The binary gets PID=1. I don't know why this happens, but this hardly seems like the correct behavior.
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.2 0.0 341520 13372 ? Ssl 07:52 0:00 ./binary
root 12 0.5 0.0 4116 3412 pts/0 Ss 07:52 0:00 /bin/bash
root 20 0.0 0.0 5900 2884 pts/0 R+ 07:52 0:00 ps aux
UPDATE:
./binary is executable; chmod +x makes no difference.
Adding printfs in multiple places did not help: nothing is printed unless the last line, server->Serve(), is commented out, or the prints are really large, in which case everything gets printed.
Separating the return statement from server->Serve() makes no difference.
UPDATE:
Running the built container with the docker run --init -d flags works.
Though, is this the right way? And if so, how can this be forced in the Dockerfile?
This problem is related to the PID of the process in the container. You can use the --init flag to avoid the problem. For more information, see https://docs.docker.com/engine/reference/run/#specify-an-init-process
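There is no Dockerfile instruction that turns on --init for you, but you can bake a minimal init such as tini into the image for the same effect. A sketch, assuming an Ubuntu base where the tini package is available, and reusing the builder stage from the question:

```dockerfile
FROM ubuntu:focal
# tini is a tiny PID-1 init that reaps children and forwards signals,
# so ./binary no longer has to act as PID 1 itself.
RUN apt-get update && apt-get install -y --no-install-recommends tini \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /var/lib/project
COPY --from=builder /usr/src/project/bin/binary ./binary
EXPOSE 8080
ENTRYPOINT ["/usr/bin/tini", "--"]
CMD ["./binary"]
```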

lambda running twice, and Runtime exited without providing a reason

I have a Lambda written in Go running in a container; the image was built with alpine-golang and runs on alpine.
When testing, I noticed from the logs that the Lambda runs twice before exiting with the following:
Error: Runtime exited without providing a reason Runtime.ExitError
On my local system the code runs fine without errors. I earlier tried running without a container but still faced runtime issues. The only error-handling and logging mechanisms in my code are log.Println and fmt.Printf. Does anyone have an idea of what is going on?
EDIT:
I trapped the exit code, which turns out to be 0, but Lambda exits with
Runtime exited with error: exit status 1 Runtime.ExitError
I really suggest going for the "without container" path. Just pack your executable into a .zip archive. Don't forget to compile with GOOS=linux for your code to be compatible with AWS Lambda.
On Linux you can use the following commands to get your archive:
GOOS=linux go build -o executableName path/to/main.go
zip archive.zip executableName
Note that you have to set Handler to be executableName in function's Runtime settings.
For handling the Lambda invocation, you have to use the github.com/aws/aws-lambda-go/lambda package and start the handler function in main via lambda.Start(handler).
Full code example:
package main

import (
    "context"
    "log"

    "github.com/aws/aws-lambda-go/lambda"
)

func main() {
    lambda.Start(handler)
}

func handler(ctx context.Context) {
    log.Println("successfully executed")
}
Make sure you are following the recommended guidelines AWS provides for building the container image: https://docs.aws.amazon.com/lambda/latest/dg/go-image.html
Your Dockerfile should look like this to work with Lambda:
FROM public.ecr.aws/lambda/provided:al2 as build
# install compiler
RUN yum install -y golang
RUN go env -w GOPROXY=direct
# cache dependencies
ADD go.mod go.sum ./
RUN go mod download
# build
ADD . .
RUN go build -o /main
# copy artifacts to a clean image
FROM public.ecr.aws/lambda/provided:al2
COPY --from=build /main /main
ENTRYPOINT [ "/main" ]
Lambda is very strange here: if you write the Dockerfile like you would for a local machine, it runs once, ends, then runs a second time and crashes with no reason given.

Use Docker-Windows for Gitlab-runner

I'm trying to use Docker on Windows to create a GitLab Runner to build a C++ application. It works so far, but I guess there are better approaches. Here's what I did:
Here's my initial Docker Container:
FROM mcr.microsoft.com/windows/servercore:2004
# Restore the default Windows shell for correct batch processing.
SHELL ["cmd", "/S", "/C"]
# Download the Build Tools bootstrapper.
ADD https://aka.ms/vs/16/release/vs_buildtools.exe C:\TEMP\vs_buildtools.exe
# Install Build Tools with the Microsoft.VisualStudio.Workload.AzureBuildTools workload, excluding workloads and components with known issues.
RUN C:\TEMP\vs_buildtools.exe --quiet --wait --norestart --nocache `
--installPath C:\BuildTools `
--add Microsoft.VisualStudio.Workload.VCTools `
--add Microsoft.VisualStudio.Component.VC.Tools.x86.x64 `
--add Microsoft.VisualStudio.Component.VC.CMake.Project `
--add Microsoft.VisualStudio.Component.Windows10SDK.19041 `
--locale en-US `
|| IF "%ERRORLEVEL%"=="3010" EXIT 0
# Define the entry point for the docker container.
# This entry point starts the developer command prompt and launches the PowerShell shell.
ENTRYPOINT ["cmd","/k", "C:\\BuildTools\\VC\\Auxiliary\\Build\\vcvars64.bat", "&&", "powershell.exe", "-NoLogo", "-ExecutionPolicy", "Bypass"]
And my .gitlab-ci.yml looks like this:
build Docker Windows:
image: buildtools2019_core
stage: build
tags:
- win-docker
script:
- mkdir build
- cd build
- cmake -DCMAKE_BUILD_TYPE=Release -DenableWarnings=ON -G Ninja -DCMAKE_MAKE_PROGRAM=Ninja ../
- ninja
This works so far and everything builds correctly. The main problem, however, is that if the build fails, the job succeeds anyway. I suspect that my entrypoint is wrong, because PowerShell is executed inside a cmd and only the exit code of cmd is checked, which always succeeds.
So I tried to use PowerShell directly as the entrypoint. I need to set the environment variables via vcvars64.bat, but that is not trivial to do. I tried to execute the "Developer PowerShell for VS 2019", but I can't execute its link in the entrypoint directly, and the link looks like this:
"C:\Windows\SysWOW64\WindowsPowerShell\v1.0\powershell.exe -noe -c "&{Import-Module """C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\Common7\Tools\Microsoft.VisualStudio.DevShell.dll"""; Enter-VsDevShell 6f66c5f6}"
I don't quite understand what it does, and the hash also varies from installation to installation. Simply using this as the entrypoint didn't work either.
I then tried to use the Invoke-Environment Script taken from "https://github.com/nightroman/PowerShelf/blob/master/Invoke-Environment.ps1". This allows me to execute the .bat file from powershell like this:
Invoke-Environment C:\\BuildTools\\VC\\Auxiliary\\Build\\vcvars64.bat
But to do this I need to add this function to my profile, as far as I understood. I did this by copying it to "C:\Windows\system32\WindowsPowerShell\v1.0\profile.ps1" so that it would be accessible by all users.
In my Docker file I added:
COPY Invoke-Environment.ps1 C:\Windows\system32\WindowsPowerShell\v1.0\profile.ps1
and replaced the entrypoint with:
ENTRYPOINT ["C:\\Windows\\system32\\WindowsPowerShell\\v1.0\\powershell.exe", "-NoExit", "-NoLogo", "-ExecutionPolicy", "Bypass", "Invoke-Environment C:\\BuildTools\\VC\\Auxiliary\\Build\\vcvars64.bat"]
But that didn't initialize the environment variables correctly. Also "Invoke-Environment" is not found by the gitlab-runner. My last resort was to write a small script (Init64.ps1) that executes the Invoke-Environment function with vcvars64.bat:
function Invoke-Environment {
    param
    (
        # Any cmd shell command, normally a configuration batch file.
        [Parameter(Mandatory=$true)]
        [string] $Command
    )

    $Command = "`"" + $Command + "`""
    cmd /c "$Command > nul 2>&1 && set" | . { process {
        if ($_ -match '^([^=]+)=(.*)') {
            [System.Environment]::SetEnvironmentVariable($matches[1], $matches[2])
        }
    }}
}
Invoke-Environment C:\BuildTools\VC\Auxiliary\Build\vcvars64.bat
I copied this in docker via:
COPY Init64.ps1 Init64.ps1
and used this entrypoint:
ENTRYPOINT ["C:\\Windows\\system32\\WindowsPowerShell\\v1.0\\powershell.exe"]
In my build script I need to manually call it to setup the variables:
build Docker Windows:
image: buildtools2019_core
stage: build
tags:
- win-docker
script:
- C:\Init64.ps1
- mkdir build
- cd build
- cmake -DCMAKE_BUILD_TYPE=Release -DenableWarnings=ON -G Ninja -DCMAKE_MAKE_PROGRAM=Ninja ../
- ninja
Now everything works as intended: the build works, and the job only succeeds if the build succeeds.
However, I would prefer to setup my environment in the entrypoint so that I don't have to do this in my build script.
Is there a better way to do this? Also feel free to suggest any improvements I could make.
OK, after some struggling, here is my entry.bat that correctly loads the environment and propagates the error level/return value:
REM Load environment
call C:\\BuildTools\\VC\\Auxiliary\\Build\\vcvars64.bat
REM If there are no parameters call cmd
IF [%1] == [] GOTO NOCALL
REM If there are parameters call cmd with /S for it to exit directly
cmd /S /C "%*"
exit %errorlevel%
:NOCALL
cmd
exit %errorlevel%

how to use openCV image to compile a cpp which uses opencv

If I have understood correctly, we can use a Docker image of something instead of installing it, and I don't want to install OpenCV. So I pull the image:
docker pull spmallick/opencv-docker:opencv
I have a really simple cpp file which uses OpenCV. What should I do now to compile my file with this image?
In the terminal, run:
docker run -it --name myopencv spmallick/opencv-docker:opencv
Open a new terminal and copy your cpp file into the running container. For example, this copies the file 1.cpp to the /tmp folder of the myopencv container:
docker cp .\1.cpp myopencv:/tmp
Return to the running container and run g++ <filename.cpp>.
** The image you pulled already has build-essential installed.
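One caveat: a bare g++ <filename.cpp> will only work for code that uses OpenCV if the include and linker flags are passed too. A hedged sketch using pkg-config; the package name here is an assumption, since it may be registered as opencv or opencv4 depending on the version shipped in the image:

```shell
# Inside the container: compile and link against OpenCV via pkg-config.
# `opencv4` is an assumption; older installs register it as `opencv`.
g++ 1.cpp -o app $(pkg-config --cflags --libs opencv4)
./app
```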

Including other Docker files in my Docker file

I would like to build a custom Dockerfile. I start with Ubuntu:
FROM ubuntu
But I would also like to add buildpack-deps:stretch
I understand that I am only allowed to use FROM once, so short of copying the contents of buildpack-deps:stretch into my Dockerfile, how do I add it to my Dockerfile?
AFAIK, simply "including" another Dockerfile does not work. But you actually are allowed to use multiple FROM statements if you use multi-stage builds (cf. the Docker docs).
For example, you could do something like this:
FROM buildpack-deps:stretch AS build
RUN echo "hello world!" > /tmp/foo
FROM ubuntu
COPY --from=build /tmp/foo .
CMD ["cat", "foo"]
Running docker build --tag foo . && docker run --rm foo results in hello world!. You could replace the first RUN statement with the compilation of something or whatever you are planning to do.
There are more ways to use multi-stage builds, e.g. basing a later stage directly on the build stage in our example.
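For instance, a sketch where a later stage starts from the build stage itself (stage names as in the example above; the verify stage is illustrative):

```dockerfile
FROM buildpack-deps:stretch AS build
RUN echo "hello world!" > /tmp/foo

# A stage based on `build` directly: it inherits the full build
# filesystem, which is handy for e.g. checking the artifacts.
FROM build AS verify
RUN cat /tmp/foo

# The slim final image still copies only what it needs.
FROM ubuntu
COPY --from=build /tmp/foo .
CMD ["cat", "foo"]
```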