Boost log output cannot be redirected - c++

I'm having trouble redirecting Boost-generated logs from the shell. We start out simply enough with an application that successfully logs to stdout.
cjholmes$ ./my_application
[2017-03-14 10:21:16.920355] [0x00007fff79497300] [info] Boost log severity set to trace
[2017-03-14 10:21:16.921071] [0x0000000101282000] [trace] Adding "/Users/cjholmes/Desktop/idsconnector/ca/DigiCertHighAssuranceEVRootCA.pem"
[2017-03-14 10:21:16.921233] [0x0000000101282000] [trace] Adding "/Users/cjholmes/Desktop/idsconnector/ca/DigiCertSHA2HighAssuranceServerCA.pem"
Easy, right? So I should be able to do this:
cjholmes$ ./my_application > log.txt
No output is sent to the console (which is expected), but when I look in log.txt, it is empty. The shell creates the file, but it always stays empty.
I have tried all of the usual redirection syntaxes too, and none of them work. Also, I have tried explicitly initializing the log with std::cout:
boost::log::add_console_log(std::cout);
I still get the same result. I can always get the log data on the console, but redirecting the output somehow loses all my log data.
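For reference, the explicit initialization I tried looks roughly like this (a minimal sketch; the auto_flush setting is only there to rule out buffering):

#include <iostream>
#include <boost/log/keywords/auto_flush.hpp>
#include <boost/log/utility/setup/console.hpp>

void init_logging()
{
    // Register a console sink explicitly on std::cout; auto_flush makes the
    // sink flush after every record, so stream buffering can be ruled out.
    boost::log::add_console_log(
        std::cout,
        boost::log::keywords::auto_flush = true);
}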
What am I missing?

Related

Executing a batch file using CFEXECUTE

I tried to run a test.bat file using cfexecute. It shows a timeout error after loading for some time, and the output file is blank. But when I double-click the test.bat file, it works fine. My code is this:
<cfexecute name="C:\Windows\System32\cmd.exe" arguments="/C C:\ColdFusion2018\cfusion\wwwroot\test.bat" timeout="60" outputfile="C:\ColdFusion2018\cfusion\wwwroot\log_output1.txt"></cfexecute>
We recommend using CFX_EXEC (Windows) instead of the built-in CFExecute. When running BAT files, we've encountered many cases where we needed to run them under a separate Windows account with privileges different from those of the CF service. CFX_EXEC lets us specify that account, whereas CFExecute doesn't have the option at all. We also use CFX_EXEC for performing IP/DNS look-ups, as it's a lot faster than Java, honors TTL, and doesn't cache the lookup results "forever".
If you want to run test.bat using cfexecute, test.bat should be the value of the name attribute, not the arguments attribute.
<cfexecute name="C:\ColdFusion2018\cfusion\wwwroot\test.bat"
           arguments="whatever applies"
           timeout="60"
           outputfile="C:\ColdFusion2018\cfusion\wwwroot\log_output1.txt">
</cfexecute>
Thanks for your response.
The batch file executed successfully after suppressing the 'Press any key to continue...' prompt (pause) in the batch file. The pause kept cfexecute loading until the timeout. That was the issue here.
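For anyone who hits the same thing, the fix was simply removing the pause from the batch file, e.g.:

@echo off
rem ... actual commands ...
rem pause   <- removed; pause waits for a keypress, which keeps cfexecute hanging until the timeout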

log4cpp and odd console output

I am trying to use log4cpp in my project, but every time I write a log message, in addition to the message going to the log file, I also get this message on the console window:
/var/log/ngis/mixer.log.3
Here is the relevant section of my log4cpp.properties file:
# This is for the rolling file Appender, this will not send log info
# to the syslog.
log4cpp.rootCategory=INFO, rootAppender
log4cpp.appender.rootAppender=RollingFileAppender
log4cpp.appender.rootAppender.fileName=/var/log/ngis/mixer.log
log4cpp.appender.rootAppender.maxFileSize=10MB
log4cpp.appender.rootAppender.maxBackupIndex=3
log4cpp.appender.rootAppender.BufferedIO=true
log4cpp.appender.rootAppender.BufferSize=100000
log4cpp.appender.rootAppender.layout=PatternLayout
log4cpp.appender.rootAppender.layout.ConversionPattern=%d[%p %c] %m%n
The data is going to the log file. I have tried creating a file called /var/log/ngis/mixer.log.3, but that does not help.
If I change .maxBackupIndex=3 to .maxBackupIndex=2, then the message changes to:
/var/log/ngis/mixer.log.2
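For context, the messages are emitted in the usual log4cpp way, roughly like this (a simplified sketch; the properties file name is an assumption):

#include <log4cpp/Category.hh>
#include <log4cpp/PropertyConfigurator.hh>

int main()
{
    // Load the appender configuration shown above (file name assumed)
    log4cpp::PropertyConfigurator::configure("log4cpp.properties");

    // Every message goes through the root category's rolling file appender
    log4cpp::Category& root = log4cpp::Category::getRoot();
    root.info("mixer started");

    log4cpp::Category::shutdown();
    return 0;
}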

C++ execute temp file as bash script

I have a program that needs to run a program we'll call externalProg in parallel on our Linux (CentOS) cluster - or rather, it needs to run many instances of externalProg, each on different cores. Each "thread" creates 3 files based on a few parameters: the inputs to externalProg, a command file to tell externalProg how to execute my file, and a bash script to set up the environment (it calls a setup script provided by the manufacturer) and actually call externalProg with my inputs.
Since this needs to be parallel with an unknown number of concurrent threads, and I don't want to risk overwriting another thread's files, I am creating temp files for these input files using something like:
char name[] = "PREFIX_XXXXXX";  // mkstemp requires a writable template; it overwrites the Xs in place
int fd = mkstemp(name);
After the external program runs, I extract the relevant data and store it, and close the temp files (therefore deleting them).
We'll call the files created (Which actually have a name based on the template above)
tmpInputs - Inputs to externalProg
tmpCommand - Input that tells externalProg how to execute tmpInputs
tmpBash - bash script to set up and call externalProg with my inputs
The file tmpBash looks something like
source /path/to/setup/script # Sets up environment variables
externalProg < /path/to/tmpCommand
where tmpCommand is just a simple text file.
The problem I'm having is actually executing the bash script. Within my program, I call
std::ostringstream launchcmd;
launchcmd << "bash " << path_to_tmpBash;
system(launchcmd.str().c_str());
But nothing happens. No error, no warning, no 'file not found' or 'permission denied' or anything. I have verified that the files are being created and have the correct content. The rest of the code after system() executes successfully (though it fails, since externalProg wasn't run).
Strangely, if I go back to the terminal and type
bash /path/to/tmpBash
then externalProg is executed successfully. I have also cout'd the launchcmd string and copied and pasted it into the terminal, which also works. For some reason, this only fails when called from within my program.
After a bit of experimentation, I've determined that system() calls /bin/sh on our cluster. If I change launchcmd to look like
/path/to/tmpBash
(so that the full command is /bin/sh /path/to/tmpBash), I get a permission denied error, which is no surprise. The problem is that I can't chmod +x the tmpBash file while it's still open, and if I close the file, it gets deleted - so I'm not sure how to address that.
Is there something obviously wrong I'm doing, or does system() have some nuance that I'm missing?
edit: I wanted to add that I can successfully call things like
system("echo $PATH")
and get the expected results (in this case, my default $PATH).
Two separate ideas:
Change your SHELL environment variable to be /bin/bash, then call system(),
or:
Use execve directly: execve("/bin/bash", argv, environ), where argv holds "/bin/bash" and the path to tmpBash.
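A minimal sketch of the execve idea, assuming the path_to_tmpBash variable from the question (error handling kept to a bare minimum):

#include <unistd.h>
#include <sys/wait.h>

extern char **environ;

void run_script(const char *path_to_tmpBash)
{
    pid_t pid = fork();
    if (pid == 0) {
        // Child: exec bash on the (still open, non-executable) temp file;
        // bash only needs read permission on the script, not execute.
        char *argv[] = { (char *)"/bin/bash", (char *)path_to_tmpBash, (char *)0 };
        execve("/bin/bash", argv, environ);
        _exit(127);   // only reached if execve itself fails
    }
    int status;
    waitpid(pid, &status, 0);   // wait for the script (and externalProg) to finish
}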

bash: Confusion with redirecting output

When I execute this command (where fail.cpp is a simple program filled with compiler errors), the errors are not output directly to the screen but rather end up in the fail.out file:
g++ fail.cpp > fail.out 2>&1
From my introductory understanding of bash, this makes sense: > redirects the program output (stdout, a.k.a. file descriptor 1) to fail.out, while 2>&1 redirects stderr (a.k.a. file descriptor 2) to this new destination for stdout, which is the file. (?)
But changing the order of the command makes things happen differently:
g++ fail.cpp 2>&1 > fail.out
Now, the error messages go directly onto the screen, and fail.out is a blank file.
Why is this? It seems like the same idea as above: redirect the errors that this command will produce to stdout (2>&1), and redirect that, in turn, to the fail.out file. Is it an order of operations thing that I am missing?
2>&1 means "redirect stderr to where stdout is currently connected", and redirections are processed in order from left to right. So the first one does:
Redirect stdout to the fail.out file.
Redirect stderr to stdout's current connection, i.e. the fail.out file
The second one does:
Redirect stderr to stdout's current connection, i.e. the terminal.
Redirect stdout to the fail.out file.
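You can see the same left-to-right behaviour in the dup2() calls a shell roughly performs; here is a sketch of the first form (error handling omitted):

#include <fcntl.h>
#include <unistd.h>

int main()
{
    // "cmd > fail.out 2>&1": redirections are applied left to right
    int fd = open("fail.out", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    dup2(fd, 1);   // "> fail.out": fd 1 now points at the file
    dup2(1, 2);    // "2>&1": fd 2 copies fd 1's current target, the file

    // For "cmd 2>&1 > fail.out" the order is reversed: dup2(1, 2) copies the
    // terminal into fd 2 first, and only then does dup2(fd, 1) move fd 1.
    write(1, "to stdout\n", 10);
    write(2, "to stderr\n", 10);
    return 0;
}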

Condor output file updating

I'm running several simulations using Condor and have coded the program so that it outputs a progress status to the console. This is done at the end of a loop, where it simply prints the current time (this could also be a percentage or elapsed time). The code looks something like this:
printf("START");
while (programNeedsToRum) {
// Run code repetitive code...
// Print program status update
printf("[%i:%i:%i]\r\n", hours, minutes, seconds);
}
printf("FINISH");
When executed normally (i.e. in the terminal/cmd/bash) this works fine, but the Condor nodes don't seem to printf() the status. Only once the simulation has finished are all the status updates written to the output file, by which point they are no longer of use. My *.sub file that I submit to Condor looks like this:
universe = vanilla
executable = program
output = out/out-$(Process)
error = out/err-$(Process)
queue 100
When submitted, the program executes (confirmed in condor_q), and the output files contain only this:
START
Only once the program has finished running does its corresponding output file show (for example):
START
[0:3:4]
[0:8:13]
[0:12:57]
[0:18:44]
FINISH
While the program executes, the output file contains only the START text. So I came to the conclusion that the file is not updated while the node executing the program is busy. So my question is: is there a way of updating the output files manually, or of gathering information on the program's progress in a better way?
Thanks already
Max
What you want to do is use the streaming output options. See the stream_error and stream_output options you can pass to condor_submit as outlined here: http://research.cs.wisc.edu/htcondor/manual/current/condor_submit.html
By default, HTCondor stores stdout and stderr locally on the execute node and transfers them back to the submit node on job completion. Setting stream_output to TRUE will ask HTCondor to instead stream the output as it occurs back to the submit node. You can then inspect it as it happens.
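Applied to the submit file from the question, that would look something like this (only the stream_* lines are new):

universe = vanilla
executable = program
output = out/out-$(Process)
error = out/err-$(Process)
stream_output = TRUE
stream_error = TRUE
queue 100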
Here's something I used a few years ago to solve this problem. It uses condor_chirp, which transfers files from the execute host back to the submit host. I have a Python script that executes the program I really want to run and redirects its output to a file. Then, periodically, I send the output file back to the submit host.
Here's the Python wrapper, stream.py:
#!/usr/bin/python
import os, sys, time

os.environ['PATH'] += ':/bin:/usr/bin:/cygdrive/c/condor/bin'

# Make sure the output file exists before the first transfer
open(sys.argv[1], 'w').close()

pid = os.fork()
if pid == 0:
    # Child: run the real command, redirecting its output to the file
    os.system('%s > %s' % (' '.join(sys.argv[2:]), sys.argv[1]))
else:
    # Parent: every 10 seconds, ship the current output back to the
    # submit host with condor_chirp, until the child has exited
    while True:
        time.sleep(10)
        os.system('condor_chirp put %s %s' % (sys.argv[1], sys.argv[1]))
        try:
            # Non-blocking check; once the exited child has been reaped,
            # the next call raises OSError and we stop
            os.wait4(pid, os.WNOHANG)
        except OSError:
            break
And my submit script. The job ran sh hello.sh and redirected the output to myout.txt:
universe = vanilla
executable = C:\cygwin\bin\python.exe
requirements = Arch=="INTEL" && OpSys=="WINNT60" && HAS_CYGWIN==TRUE
should_transfer_files = YES
transfer_input_files = stream.py,hello.sh
arguments = stream.py myout.txt sh hello.sh
transfer_executable = false
It does send the output in its entirety each time, so take that into account if you have a lot of jobs running at once. Currently it's sending the output every 10 seconds... you may want to adjust that.
With condor_tail you can view the output of a running job.
To see stdout, just add the job ID (and -f if you want to follow the output and see the updates immediately). Example:
condor_tail 314.0 -f