C++ print line not printing to console in Docker container

I've got a very basic proof-of-concept C++ application, shown below:
#include <iostream>
int main()
{
    std::cout << "test" << std::endl;
    return 0;
}
When this is run locally, it prints test to the console, as expected. However, when run in a Docker container, nothing is printed.
I'm using microsoft/windowsservercore as the base image for my container. Since this is still a proof of concept, my Dockerfile just copies the exe of my C++ application into the image, and I then run it manually in an interactive session.
Am I missing something that prevents C++ applications from printing to the console inside of a Windows Docker image?
Dockerfile:
FROM microsoft/windowsservercore
COPY ./Resources /
Resources folder contains only the exe of the C++ application
Docker command:
docker run --rm -it proofconcept:latest, where proofconcept is the name given during build
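For what it's worth, a quick way to tell whether the program runs at all but its output simply never reaches the console (this is just a diagnostic sketch, not part of the original post; the file path is made up) is to write to a file as well as to std::cout:
#include <fstream>
#include <iostream>

int main()
{
    std::cout << "test" << std::endl;           // console output, if one is attached
    std::ofstream log("C:\\proof.txt");         // hypothetical path inside the container
    log << "the process did run" << std::endl;  // survives even if console output is lost
    return 0;
}
If the file shows up inside the container (e.g. dir C:\ from an interactive prompt) but nothing appears on the console, the problem is how output is being attached; if the file never appears, the exe is not starting at all, which for a copied MSVC build is often a missing runtime DLL.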

Related

How to build a Docker image for a CUDA-based C++ application running on an Nvidia Jetson?

To be more specific, my source code compiles and links successfully when I run the build from inside the container.
However, when I try to build the image from a Dockerfile, it fails.
i.e.:
this works (These lines are from the terminal "inside" the container):
cd AppFolder; make; //success
this does not (These are lines from the dockerfile):
RUN git clone <url> && cd APPFolder && make
Now I get:
/usr/bin/ld: warning: libcuda.so.1 needed by...
How can I build the application from the dockerfile?
The only difference between a container and a layer during the image build is the layers that come after it. Perhaps you are running the RUN directive too early, i.e. before the CUDA library was generated?
Try putting this command as low as you can in the Dockerfile.
Well, adding "-Wl,--allow-shlib-undefined" to the compiler/linker flags (g++) solved this issue. I think it tells the linker to leave such references unresolved so that they are bound only at runtime (i.e., when the Docker image is actually run).
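For illustration, the flag belongs on the link step; roughly like this (the object file and library names here are placeholders, not taken from the post):
# placeholder link line showing where the flag sits
g++ -o app main.o -lcudart -Wl,--allow-shlib-undefined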

Unable to run docker container which has CPP code pthread_setschedparam()

I have a Docker container which has this C++ code in it.
#include <pthread.h>  // pthread_setschedparam, pthread_self
#include <sched.h>    // sched_param
#include <cassert>
#include <cstdio>

// ThreadPriorities is part of the poster's own codebase and is not shown here
void SetRealtimeThreadPriority()
{
    sched_param const param{ThreadPriorities::Priority()};
    int result = pthread_setschedparam(pthread_self(), ThreadPriorities::Policy(), &param);
    printf("SetRealtimeThreadPriority - result checked for assertion %d \n", result);
    assert(result == 0); (void) result;
}
When I run the exe containing this code on an Ubuntu machine it works fine and the printed result is 0 (zero), but when I run it in the container, the assert fails.
I have gone through multiple threads, man pages, the docker run documentation and various articles, and tried running the container with the options below, but no luck.
docker run -it --rm --cap-add SYS_NICE MyContainer
docker run --cap-add=ALL --privileged MyContainer
docker run --cap-add=ALL MyContainer
docker run -it --rm --userns host --cap-add SYS_NICE MyContainer
How can I debug this issue? I'm running Docker on WSL with Ubuntu 16.04.
You could insert some code; perhaps you can tell what is different. For example, #include <sys/capability.h>, link with ... -lcap, and put this:
std::cout << cap_to_text(cap_get_proc(), NULL) << std::endl;
just before the call to pthread_setschedparam(2). Does it display something different inside and outside the container?
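A slightly more complete, self-contained version of that diagnostic (just a sketch; it assumes libcap and its development headers are available, and that the link flag is -lcap):
#include <sys/capability.h>  // cap_get_proc, cap_to_text, cap_free
#include <iostream>

// Print the capability set of the current process.
// Build with something like: g++ caps.cpp -o caps -lcap
int main()
{
    cap_t caps = cap_get_proc();
    char *text = cap_to_text(caps, nullptr);
    std::cout << "capabilities: " << text << std::endl;
    cap_free(text);
    cap_free(caps);
    return 0;
}
Run it on the host and inside the container; with --cap-add SYS_NICE you would expect cap_sys_nice to appear in the container's list, and a difference between the two outputs points at what the container is missing.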

Simple C++ exe std::cout not showing when running within a Windows Server Core container

Some container 101 here, please. I can't see messages written to std::cout in the console as I would expect when the program is run in a Windows Server Core container. I've tried the same scenario with a C# console app and it does produce output, unlike the example below, which I feel narrows it down to something on the C++ side of things.
The code
#include <iostream>
int main()
{
    std::cout << "Hello World\n";
}
The Dockerfile
FROM mcr.microsoft.com/windows/servercore:ltsc2019
ADD Debug/ /
ENTRYPOINT [ "cmd.exe" ]
The commands
docker build -t cppnet .
docker run -it cppnet
The results (screenshots, not reproduced here): first running in the container, second running locally.
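One way to narrow this down (a diagnostic sketch, not from the original post; MyApp.exe stands in for whatever executable is in the Debug folder): from the cmd prompt the container drops into, run the exe by hand and check its exit code. A Debug build of a Visual C++ program depends on the debug CRT DLLs, which a bare servercore image does not ship, and a process that cannot load a required DLL usually exits without printing anything:
REM inside the container started with: docker run -it cppnet
C:\> MyApp.exe
C:\> echo %ERRORLEVEL%
REM a value like -1073741515 (0xC0000135) typically means a required DLL was not found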

Autostart webserver and program

I'm working on a Yocto-based system. My problem is that I can't start my program written in C++ and the webserver (node.js) at the same time right after the boot of my device.
I already tried this in /etc/init.d:
#! /bin/bash
/home/ProjectFolder/myProject
cd /home/myapp && DEBUG=myapp:* npm start
exit 0
After creating the script I made it executable with
chmod +x ./startProg.sh
and then registered it with
update-rc.d startProg.sh defaults
After reboot the system only starts the C++ program. I tried some other possibilities, like separating the two commands into different shell scripts, but that didn't work out any better.
Is there any option I missed or did I make any mistake trying to put those two processes into the autostart?
This of course isn't a C++ or Node.js question. A shell script is a list of commands that are executed in order, unless specified otherwise. So your shell script runs your two programs in the order given: first myProject, and only when that exits will npm start be run.
This is the same as what would happen from the prompt and the solution is the same: /home/ProjectFolder/myProject &
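Put together, the script would look something like this (same paths as in the post; the change from the answer is the trailing & so myProject is started in the background, and a second & on the webserver line is added here on the assumption that the init script should return during boot):
#! /bin/bash
# start the C++ program in the background so the script continues
/home/ProjectFolder/myProject &
# start the node.js webserver; also backgrounded so this script can exit
cd /home/myapp && DEBUG=myapp:* npm start &
exit 0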

Running a basic Qt application in Docker

I am trying to run a basic console application (developed in Qt) in Docker for Windows. The development environment is Windows 10, the compiler is VC2015, and it is a 32-bit application.
It is hello world, and the idea was to find the issues before I try to port the actual application.
The code is the simplest C++ code:
#include <QCoreApplication>
#include <iostream>
using namespace std;
int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);
    std::cout << "Hello world";
    return a.exec();
}
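Worth noting in passing (an observation to test, not something stated in the post): "Hello world" is written without a newline or flush, and a.exec() then enters the Qt event loop and never returns for a program like this, so the process neither flushes on exit nor terminates. A variant that flushes and exits immediately separates that effect from any container issue:
#include <QCoreApplication>
#include <iostream>

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);
    std::cout << "Hello world" << std::endl;  // endl flushes the stream
    return 0;                                 // exit instead of entering the event loop
}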
The Dockerfile is:
# Comment:
#It needs a Microsoft environment to run
FROM microsoft/nanoserver:latest
#Create a folder inside the home folder in the Container Operating System
RUN mkdir -p C:\HelloWorld
#Copy the executable from this folder to the folder inside the Container Operating System.
COPY . /HelloWorld/
#Run the application inside the container operating system.
CMD ["C:\\HelloWorld\\docker_HelloWorld.exe"]
My expected end result was a console/shell output of "Hello world". But I get nothing. Can someone point out what is missing?
Thanks.
I suspect that nanoserver images only support x64 applications; if possible, build your application as x64 and run it inside nanoserver together with its dependencies.
If building as x64 is not possible, you can use windowsservercore containers to run x86 applications.
You should also copy the dependent DLLs along with the application, otherwise it will not work.
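For the Qt side of that last point, one way to gather the dependencies (a sketch; it assumes the Qt bin directory is on PATH and that docker_HelloWorld.exe sits in the folder the Dockerfile copies from) is to let windeployqt drop the required Qt and compiler-runtime DLLs next to the exe before running docker build:
REM run from the folder that COPY . /HelloWorld/ picks up
windeployqt --release --compiler-runtime docker_HelloWorld.exe
REM this copies Qt5Core.dll and the other required DLLs next to the exe,
REM so the existing COPY instruction takes them into the image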