How to access an output file with Docker - C++

I'm writing a C++ program and want to run it with Docker. The Dockerfile looks like the following:
FROM gcc:7.2.0
ENV MYP /repo
WORKDIR ${MYP}
COPY . ${MYP}
RUN /bin/sh -c 'make'
ENTRYPOINT ["./program"]
CMD ["file1", "file2"]
This program needs two input files (file1 and file2) and is built and executed as follows:
docker build -t image .
docker run -v /home/user/tmp/:/repo/dir image dir/file1 dir/file2
These input files are located on the host in /home/user/tmp/. In the original repository (repo/), the executable is located in its root directory, and the generated output file is saved in the same folder (i.e. they look like repo/program and repo/results.md).
When I run the above docker run command, I can see from the standard output that the executable is correctly reading the input files and generating the expected results. However, I expected the output file (generated by the program with std::ofstream) to also be saved in the mounted directory /home/user/tmp/, but it's not.
How can I access this file? Is there a straightforward way to get it using the docker volume mechanism?
Docker version is 18.04.0-ce, build 3d479c0af6.
EDIT
The relevant code regarding how the program saves the output file result.md is the following:
std::string filename ("result.md"); // in the actual code this name is not hard-coded and depends on input, but it will not include / chars
std::ofstream output_file;
output_file.open(filename.data(), std::ios::out);
output_file << some_data << etc << std::endl;
...
output_file.close();
In practice, the program is run as program file1 file2, and the output is saved in the working directory, no matter whether or not that is the directory where program itself is located.

You need to be sure to save your file into the mounted directory. Right now, it looks like your file is being saved as a sibling to your program, which is just outside of the mounted directory.
Since you mount with:
docker run -v /home/user/tmp/:/repo/dir image dir/file1 dir/file2
/repo/dir is the only folder whose changes you will see on the host. If you save files to /repo instead, they are written there inside the container, but they are not visible on the host system after the run.
Consider how you open your output file:
std::string filename ("result.md"); // in the actual code this name is not hard-coded and depends on input, but it will not include / chars
std::ofstream output_file;
output_file.open(filename.data(), std::ios::out);
output_file << some_data << etc << std::endl;
...
output_file.close();
Since you set the output file to "result.md" with no path, it is going to be opened as a sibling to the program.
If you were to run
docker run -it --rm --entrypoint=/bin/bash image
which opens an interactive shell using your image, and then ran ./program some-file.txt some-other-file.txt followed by ls, you would see the output file result.md as a sibling of program. That is outside of your mountpoint, which is why you don't see it on your host machine.
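A minimal sketch of the quick fix (assuming the mount target stays at /repo/dir as in the docker run command above, and hard-coding the data just for illustration) is to prefix the output filename with the mounted subdirectory:
#include <fstream>
#include <string>

int main() {
    // "dir/" is the container-side mount target /repo/dir from the docker run
    // command above, so the file lands in /home/user/tmp/ on the host.
    std::string filename("dir/result.md");
    std::ofstream output_file(filename, std::ios::out);
    output_file << "some data" << std::endl;
    return 0;
}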
Consider the following program, which takes an input file and an output location: it reads each line of the infile and wraps it in <p> tags. /some is the repository directory on the host, and /some/res/ is the folder that will be mounted to /repo/res/.
I provide two arguments to my program through docker run, the infile and the outfile, both of which are relative to the working directory /repo.
My program then saves to the outfile location, which is within the mountpoint (/repo/res/). After docker run finishes, /some/res/out.txt is populated.
Folder structure:
.
├── Dockerfile
├── README.md
├── makefile
├── res
│   └── in.txt
└── test.cpp
Commands run:
docker build -t image .
docker run --rm -v ~/Desktop/some/res/:/repo/res/ image ./res/in.txt ./res/out.txt
Dockerfile:
FROM gcc:7.2.0
ENV MYP /repo
WORKDIR ${MYP}
COPY . ${MYP}
RUN /bin/sh -c 'make'
ENTRYPOINT ["./test"]
CMD ["file1", "file2"]
makefile:
test: test.cpp
	g++ -o test test.cpp

.PHONY: clean
clean:
	rm -f test
test.cpp:
#include <fstream>
#include <iostream>
#include <string>

int main(int argc, char **argv) {
    if (argc < 3) {
        std::cout << "Usage: test [infile] [outfile]" << std::endl;
        return 1;
    }

    std::cout << "All args" << std::endl;
    for (int i = 0; i < argc; i++) {
        std::cout << argv[i] << std::endl;
    }

    std::string line;
    std::ifstream infile(argv[1]);
    std::ofstream outfile(argv[2]);

    if (!(infile.is_open() && outfile.is_open())) {
        std::cerr << "Unable to open files" << std::endl;
        return 1;
    }

    while (getline(infile, line)) {
        outfile << "<p>" << line << "</p>" << std::endl;
    }

    outfile.close();
    return 0;
}
res/in.txt:
hello
world
res/out.txt (after running command):
<p>hello</p>
<p>world</p>

I would like to post the Dockerfile I'm using right now, in the hope it can be useful to somebody. It doesn't need a name or path specified for the output files; they are always written to $PWD.
FROM alpine:3.4
LABEL version="1.0"
LABEL description="some nice description"
LABEL maintainer="user@home.com"
RUN apk update && apk add \
gcc \
g++ \
make \
git \
&& git clone https://gitlab.com/myuser/myrepo.git \
&& cd myrepo \
&& make \
&& cp program /bin \
&& rm -r /myrepo \
&& apk del g++ make git
WORKDIR /tmp
ENTRYPOINT ["program"]
I only need to run:
docker run --rm -v $PWD:/tmp image file1 file2
Inside the image, the working directory is fixed to /tmp by the WORKDIR instruction, and that is the container-side path passed to the -v volume option. After running the image, all output files are saved in the corresponding working directory on the host machine.
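For example, a hypothetical session (file1/file2 and the image name are placeholders), run from the host directory that holds the input files:
cd /home/user/tmp                          # host directory holding the input files
docker run --rm -v $PWD:/tmp image file1 file2
ls                                         # any file the program wrote to its working
                                           # directory (/tmp in the container) is now here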

How about simply copying the files from the container to the host machine? This was the simplest solution that worked in my case.
sudo docker cp container-id:/path_in_container local_machine_path
container-id is the id of your Docker container, which can be found with docker ps if it's running or docker ps -a if it is stopped.
path_in_container is the path of the file in the container.
local_machine_path is the path on the device/computer that you are working with. Please note that this has to be an absolute path.
Here's a working example:
sudo docker cp f1be7fe58g36:/app/Output/output_plan1.pdf '/Users/hissaan/Programming/Clients Data/testing_this.pdf'
The solution was tested on a Mac (M1 Pro).
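Applied to the question above, a hypothetical sequence would be (the container id is a placeholder; this only works if the container was not started with --rm):
docker ps -a                                          # note the CONTAINER ID of the finished run
sudo docker cp <container-id>:/repo/result.md /home/user/tmp/result.md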

Related

Running pgc++ programs on Cluster

I tried to run the OpenACC program below on a cluster:
#include <stdio.h>
#include <iostream>
using namespace std;

int main()
{
    #pragma acc parallel loop
    for (int i = 0; i < 1000; i++)
    {
        //cout << i << endl;
        printf("%d ", i);
    }
    return 0;
}
The PBS Script for the above code:
#PBS -e errorlog1.err
#PBS -o logfilehello.log
#PBS -q rupesh_gpuq
#PBS -l select=1:ncpus=1:ngpus=1
tpdir=`echo $PBS_JOBID | cut -f 1 -d .`
tempdir=$HOME/scratch/job$tpdir
mkdir -p $tempdir
cd $tempdir
cp -R $PBS_O_WORKDIR/* .
module load nvhpc-compiler
#module load cuda10.1
#module load gcc10.3.0
#module load nvhpc-21.11
#module load nvhpc-pgicompiler
#module load gcc920
pgc++ sssp.cpp
./a.out > output.txt
rm *.out
mv * $PBS_O_WORKDIR/.
rmdir $tempdir
After submitting the above job to the queue, I get the following error log:
"sssp.cpp", line 2: catastrophic error: cannot open source file "iostream" #include <iostream>
^
1 catastrophic error detected in the compilation of "sssp.cpp". Compilation terminated.
I tried running C programs with pgcc and they work fine. Running C++ programs with pgc++ throws this error. What could be the reason?
What could be the reason?
In order to be interoperable with g++, pgc++ (aka nvc++) uses the g++ STL and system header files. Since the location of these headers can vary, on installation a configuration file, "localrc", is created to store these locations.
What could be happening here is that a single-system install was selected, and hence the generated localrc is for the system from which the compilers were installed, not for the remote system.
If this is the case, consider re-installing with the "network" option. In that case, the creation of the localrc is delayed until the first compiler invocation, with a unique localrc generated for each system.
Another possibility is that creation of the localrc file failed for some unknown reason, possibly a permission issue. To check, you can run the 'makelocalrc' utility and see if any errors occur.
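As a hedged sketch (the install prefix and the exact makelocalrc invocation are assumptions and vary by NVHPC/PGI version and install location):
# Assumed install prefix; adjust to your site's installation.
NVBIN=/opt/nvidia/hpc_sdk/Linux_x86_64/21.11/compilers/bin
$NVBIN/makelocalrc $NVBIN     # regenerating the config will surface errors such as
                              # missing g++ headers or permission problems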
Note that for newer versions of nvc++ we are moving away from pre-generated config files and instead determine these config items each time the compiler is invoked. The previous concern was the overhead involved in generating the config, but this has become less of a problem.

Getting git commit id on Qt qmake project at make time

On Windows 10 with Qt 5.15, I'm getting the git commit id in a Qt qmake project at qmake time with:
COMMIT = '\\"$$system(git rev-parse --verify master)\\"'
DEFINES += COMMIT_VERSION=\"$${COMMIT}\"
And in main.cpp I can print it with qDebug() << COMMIT_VERSION;
Since some commits do not require a full project rebuild, I'm getting old (not the latest) commit ids.
How can I get that updated on every build?
I use Ubuntu, but to do what you want I tried the following.
What you would normally do in cmd or a terminal is type this command:
git rev-parse --verify master
To do the same from code, I use QProcess, like this:
First of all, I create a shell script and put that command in it.
I always put commands in a shell script file because it's easy, and I just pass the .sh file to the start() call.
test.sh
#!/bin/sh
cd /home/parisa/projectFolderName;git rev-parse --verify master
And this is the code:
// Needs #include <QProcess> and #include <QDebug>
QProcess *myProcess = new QProcess();
myProcess->start("/home/parisa/untitled1/test.sh");
myProcess->waitForFinished();          // block until the script has finished
QString str = myProcess->readAll();    // read the script's output (the commit id)
qDebug() << "git id : " << str;
delete myProcess;
And the output shows the git commit id.
In Qt Creator, the shortcuts Alt+K or Alt+G will show you the git log.

C++ print line not printing to console in Docker container

I've got a very basic proof-of-concept C++ application, shown below:
#include <iostream>

int main()
{
    std::cout << "test" << std::endl;
    return 0;
}
When this is run locally, it prints test to the Console, as expected. However, when run on a Docker container, nothing is printed.
I'm using microsoft/windowsservercore as the base for my container. Since this is still a proof of concept, my Dockerfile just copies the exe of my C++ application into the image, and then I run it manually in an interactive session.
Am I missing something that prevents C++ applications from printing to the console inside of a Windows Docker image?
Dockerfile:
FROM microsoft/windowsservercore
COPY ./Resources /
The Resources folder contains only the exe of the C++ application.
Docker command:
docker run --rm -it proofconcept:latest, where proofconcept is the name given during build

c++ for unix command line operation

I have a remote Unix shell that I often log on to in order to check out files, but the system keeps resetting my local settings whenever I log on.
I was planning to write code to execute a list of commands when I log on.
#include <iostream>
#include <stdlib.h>

int main() {
    char javah[] = "JAVA_HOME=/appl/usr/jdk/jdk1.6.0_20";
    char anth[]  = "ANT_HOME=/appl/usr/ant/instances/1.8.2";
    char path[]  = "PATH=$ANT_HOME/bin:$PATH";
    system("bash");
    system("cd");
    system("cd insurancePPC.11");
    system("0x0C");
    system("ls");
    putenv(javah);
    putenv(anth);
    putenv(path);
    std::cout << "JAVA_HOME=" << getenv("JAVA_HOME");
    std::cout << "\n";
    std::cout << "ANT_HOME=" << getenv("ANT_HOME");
    std::cout << "\n";
    std::cout << "PATH=" << getenv("PATH");
    std::cout << "\n";
    system("cd tools");
    std::cout << "command executed successfully...\n";
    return 0;
}
Can anyone tell me why this wasn't working as expected?
cd is a built-in command of the shell and only affects the current process (i.e. the currently running shell).
When you run system("cd insurancePPC.11"); it starts a new shell, that new shell changes the directory to insurancePPC.11 and exits. Your own process is unaffected by that cd command.
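A minimal sketch that demonstrates this (the /tmp directory is just an example):
#include <cstdlib>    // std::system
#include <unistd.h>   // getcwd
#include <iostream>

int main() {
    std::system("cd /tmp");                 // runs in a child shell, which then exits
    char buf[4096];
    if (getcwd(buf, sizeof buf))
        std::cout << "still in: " << buf << "\n";   // the parent's working directory is unchanged
    return 0;
}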
You are much better off writing these commands in a text file and running it as a shell script via the source command.
Create a file named myenv.sh with this content:
JAVA_HOME=/appl/usr/jdk/jdk1.6.0_20
export JAVA_HOME
ANT_HOME=/appl/usr/ant/instances/1.8.2
export ANT_HOME
PATH=$ANT_HOME/bin:$PATH
export PATH
cd
cd insurancePPC.11
ls
echo JAVA_HOME=$JAVA_HOME
echo ANT_HOME=$ANT_HOME
echo PATH=$PATH
cd tools
And from your command line run source myenv.sh, or if your shell supports it, use the shorthand . myenv.sh.
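If the goal is for this to happen automatically whenever you log on, a common approach (assuming a Bourne-compatible login shell; the file location is just an example) is to source the script from your profile:
# add to ~/.profile (or ~/.bashrc) so myenv.sh is sourced at every login
. $HOME/myenv.sh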
There's no need to write a C program here. Just save the following as mysettings.sh:
export JAVA_HOME=/appl/usr/jdk/jdk1.6.0_20
export ANT_HOME=/appl/usr/ant/instances/1.8.2
PATH=$ANT_HOME/bin:$PATH
cd tools
When you log in, run
. mysettings.sh

PyInstaller not working on simple HelloWorld Program

So I am running 64-bit Windows 7, and I set up PyInstaller with pip and PyWin32. I have Python 2.7.
I made a simple hello world program with this code:
print "hello world!"
I put the file in the same directory as PyInstaller, and ran this command in the command prompt:
pyinstaller.py helloWorld.py
Yet, when I try that, I get this error message:
Error loading Python DLL: C:\PROGRA~1\PYINST~1.1\build\HELLOW~1\python27.dll (error code 126)
What am I doing wrong and how do I fix this?
Run with the -F flag to produce the standalone exe:
pyinstaller -F helloworld.py
It will output to dist/helloworld.exe
NOTE: this is a different location than when -F is not used, so be sure to run the right exe afterwards.
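For reference, the layout PyInstaller typically leaves behind with -F looks roughly like this (names follow PyInstaller's defaults; your paths may differ):
helloworld.py
helloworld.spec      <- generated spec file
build/               <- temporary build files
dist/
    helloworld.exe   <- the standalone executable to run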
Thanks @tul! My version of pyinstaller put it in dist\helloworld.exe though!
If you start it from C:\Python27\Scripts... that'll be C:\Python27\Scripts\dist... as well!
But wherever you have it, I recommend putting a batch file next to your .py so you can recompile any time with just a click:
I set it up so there is nothing but the .exe at the .py location and the temporary stuff goes to the temp dir:
@echo off
:: get name from filename without path and ext
set name=%~n0
echo ========= %name% =========
:: cut away the suffix "_build"
set name=%name:~0,-6%
set pypath=C:\Python27\Scripts
set buildpath=%temp%
if not exist %name%.py (
    echo ERROR: "%name%.py" does not exist here!
    pause
    exit /b
)
%pypath%\pyinstaller.exe --onefile -y %~dp0%name%.py --distpath=%~dp0 --workpath=%buildpath% --specpath=%buildpath%
I name it like the .py file plus "_build" and cut away the suffix in the batch script again.
Voilà.