The context
I wrote a logger printing messages for the user. Messages with level "debug", "info" or "warning" are printed to std::cout, and messages with level "error" or "system_error" are printed to std::cerr. My program is not multi-threaded. I work under Linux openSUSE 12.3 with gcc 4.7.2 and CMake 3.1.0.
My problem
I discovered that sometimes, when an error message (printed to std::cerr) follows a long information message (printed to std::cout) and the output is redirected by CTest to the file LastTest.log, the error message appears in the middle of the information message (see the example below). I don't fully understand this behaviour, but I suppose a writing thread is launched for std::cout, then the code continues and another writing thread is launched for std::cerr without waiting for the first one to terminate.
Is it possible to avoid that without using only std::cout?
I don't have the problem in the terminal. It only happens when CTest redirects the output to the LastTest.log file.
Note that my buffers are flushed. It's not a problem of a std::endl coming after the call to std::cerr!
Example
Expected behaviour:
[ 12:06:51.497 TRACE ] Start test
[ 12:06:52.837 WARNING ] This
is
a
very
long
warning
message...
[ 12:06:52.837 ERROR ] AT LINE 49 : 7
[ 12:06:52.841 ERROR ] AT LINE 71 : 506
[ 12:06:52.841 TRACE ] End of test
What happens:
[ 12:06:51.497 TRACE ] Start test
[ 12:06:52.837 WARNING ] This
is
a
very
long
[ 12:06:52.837 ERROR ] AT LINE 49 : 7
warning
message...
[ 12:06:52.841 ERROR ] AT LINE 71 : 506
[ 12:06:52.841 TRACE ] End of test
How I call my logger
Here is an example of how I call std::cout or std::cerr with my logger.
I call the logger with macros like these:
#define LOG_DEBUG(X) { if(Log::debug_is_active()) { std::ostringstream o; o << X; Log::debug(o.str()); } }
#define LOG_ERROR(X) { if(Log::error_is_active()) { std::ostringstream o; o << X; Log::error(o.str()); } }
//...
LOG_DEBUG("This" << std::endl << "is" << std::endl << "a message");
LOG_ERROR("at line " << __LINE__ << " : " << err_id);
with
void Log::debug(const std::string& msg)
{
    Log::write_if_active(Log::DEBUG, msg);
}

void Log::error(const std::string& msg)
{
    Log::write_if_active(Log::ERROR, msg);
}
//...
void Log::write_if_active(unsigned short int state, const std::string& msg)
{
    Instant now;
    now.setCurrentTime();

    // Split msg on '\n' so each line can be written with an aligned prefix.
    std::vector<std::string> lines;
    for(std::size_t k = 0; k < msg.size();)
    {
        std::size_t next_endl = msg.find('\n', k);
        if(next_endl == std::string::npos)
            next_endl = msg.size();
        lines.push_back(msg.substr(k, next_endl - k));
        k = next_endl + 1;
    }

    boost::mutex::scoped_lock lock(Log::mutex);
    for(unsigned long int i = 0; i < Log::chanels.size(); ++i)
        if(Log::chanels[i])
            if(Log::chanels[i]->flags & state)
                Log::chanels[i]->write(state, now, lines);
}
Here, the log channel is the object dedicated to terminal output; its write function is:
void Log::StdOut::write(unsigned short int state, const Instant& t, const std::vector<std::string>& lines)
{
    assert(lines.size() > 0 && "PRE: empty lines");
    std::string prefix = "[ ";
    if(this->withDate || this->withTime)
    {
        std::string pattern = "";
        if(this->withDate)
            pattern += "%Y-%m-%d ";
        if(this->withTime)
            pattern += "%H:%M:%S.%Z ";
        prefix += t.toString(pattern);
    }

    // Select the level label and the target stream.
    std::ostream* out = 0;
    if(state == Log::TRACE)
    {
        prefix += " TRACE";
        out = &std::cout;
    }
    else if(state == Log::DEBUG)
    {
        prefix += " DEBUG";
        out = &std::cout;
    }
    else if(state == Log::INFO)
    {
        prefix += " INFO";
        out = &std::cout;
    }
    else if(state == Log::WARNING)
    {
        prefix += "WARNING";
        out = &std::cout;
    }
    else if(state == Log::ERROR)
    {
        prefix += " ERROR";
        out = &std::cerr;
    }
    else if(state == Log::SYS_ERROR)
    {
        prefix += "SYERROR";
        out = &std::cerr;
    }
    else
        assert(false && "PRE: Invalid Log state");
    prefix += " ] ";

    // The first line carries the full prefix; continuation lines get blank
    // padding of the same width so they stay aligned.
    (*out) << prefix << lines[0] << "\n";
    prefix = std::string(prefix.size(), ' ');
    for(unsigned long int i = 1; i < lines.size(); ++i)
        (*out) << prefix << lines[i] << "\n";
    out->flush();
}
You can see that my buffer is flushed when the log instruction is executed.
I've seen this behavior before in a few forms. The central idea is to remember that std::cout and std::cerr write to two completely separate streams, so any time you see the output from both in the same place, it's because of some mechanism outside of your program that merges the two streams.
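A minimal sketch (not the asker's code) makes this visible. Run it as ./a.out > out.log 2>&1 and the stderr lines will typically all appear before the stdout lines, because stdout becomes block-buffered when redirected to a file while stderr stays unbuffered:
#include <iostream>

int main()
{
    for (int i = 0; i < 3; ++i) {
        std::cout << "to stdout " << i << "\n"; // buffered when redirected
        std::cerr << "to stderr " << i << "\n"; // unbuffered, lands first
    }
    return 0; // stdout's buffer is flushed here, after all the stderr lines
}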
Sometimes, I see this simply due to a mistake, such as
myprogram > logfile &
tail -f logfile
This watches the logfile as it's written but forgets to redirect stderr to the logfile too: writes to stdout go through at least two additional layers of buffering inside tail before being displayed, while writes to stderr go directly to the tty and so can get mixed in.
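The fix in that case is simply to send both streams through the same redirection, for example:
myprogram > logfile 2>&1 &
tail -f logfile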
Other examples I've seen involve an external process merging streams. I don't know anything about CTest but maybe it's doing this. Such processes are under no obligation to sort lines by the exact time at which you originally wrote them to the stream, and probably don't have access to that information even if they wanted to!
You really only have two choices here:
Write both logs to the same stream — e.g. use std::clog instead of std::cout, or std::cout instead of std::cerr (see the sketch after this list); or launch the program with myprogram 2>&1 or similar
Make sure the merging is done by a process that is actually aware of what it's merging, and takes care to do it appropriately. This works better if you're communicating by passing around packets containing logging events rather than writing the formatted log messages themselves.
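As a minimal sketch of the first choice (a hypothetical helper, not the asker's code), route every level through one stream so there is only a single buffer to keep in order:
#include <iostream>

// One stream for every level: with a single buffer there is nothing
// for an external process to merge out of order.
std::ostream& stream_for_level(bool is_error)
{
    (void)is_error;   // the level no longer selects the stream
    return std::clog; // or std::cout; pick one and use it everywhere
}
Note that std::clog writes to the same stderr destination as std::cerr but, unlike std::cerr, is buffered, so per-line logging stays cheap.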
Answering my own question
Finally, it turns out to come from a bug in CMake.
CTest does not manage the ordering of the two streams, and so does not preserve the exact order of the output.
It will be resolved in CMake >= 3.4.
I'm not a C++ expert, but this might help...
I believe that the issue you are seeing here, when redirecting to a file, is caused by the cstdio library trying to be smart. My understanding is that on Linux, the C++ iostreams ultimately send their output to the cstdio library.
At startup, the cstdio library detects whether you are sending the output to a terminal or to a file. If the output is going to a terminal, stdio is line buffered; if it is going to a file, stdio becomes block buffered.
Output to stderr is never buffered, so it will be sent immediately.
As a solution, you could try calling fflush on stdout, or you could look into using the setvbuf function on stdout to force line-buffered output (or even unbuffered output if you like). Something like setvbuf(stdout, NULL, _IOLBF, 0) should force stdout to be line buffered.
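For example, a minimal sketch (assuming your iostreams go through stdout and that changing its buffering globally at startup is acceptable):
#include <cstdio>
#include <iostream>

int main()
{
    // Force line buffering on stdout even when it is redirected to a file,
    // so every '\n' pushes the line out instead of waiting for a full block.
    std::setvbuf(stdout, NULL, _IOLBF, 0);

    std::cout << "a long information message" << std::endl;
    std::cerr << "an error message" << std::endl; // stderr is unbuffered anyway
    return 0;
}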
Related
I'm writing a C++ program, and at some point I need the outputs of two other programs run with dynamically defined arguments.
At the moment I run both external programs as in the following example:
...
system("prog1 arg1".c_str());
system("prog2 arg2".c_str());
if(output1_exist() && output2_exist()){
data1 = loadoutput1();
data2 = loadoutput2();
dosomething(data1, data2);
} else {
cerr << "error" << endl;
exit(1);
}
...
The point is that both external programs can run for a very long time, so I was thinking of running them concurrently and using a while loop with a sleep to wait for both to finish:
...
system("prog1 arg1 &".c_str());
system("prog2 arg2 &".c_str());
while(true){
if(output1_exist() && output2_exist()){
break;
} else {
sleep(60);
}
}
data1 = loadoutput1();
data2 = loadoutput2();
dosomething(data1, data2);
...
The problem with this approach is that if one or both external programs fail for whatever reason I will be locked in the while loop forever.
To avoid this problem, I was thinking of using two threads, as in the following example:
...
void runextprog(string command){
    system(command.c_str());
}
...
vector < thread > threadVect;
threadVect.clear();
threadVect.emplace_back(runextprog, "prog1 arg1");
threadVect.emplace_back(runextprog, "prog2 arg2");
threadVect[0].join();
threadVect[1].join();
if(output1_exist() && output2_exist()){
data1 = loadoutput1();
data2 = loadoutput2();
dosomething(data1, data2);
} else {
cerr << "error" << endl;
exit(1);
}
...
This should work, but it seems a little over-engineered to me.
So, can someone suggest a simpler and more elegant way to do what I'm looking for?
I'm working on a multicore Linux workstation and will probably always run my program in a similar setup, but a portable solution that also runs on Windows/Mac machines would be appreciated.
Thank you.
The application I'm working on needs to execute commands. Commands can be console commands or 'GUI applications' (like notepad).
I need to get the return code in both cases, and in the case of console commands I also need to capture the output from stdout and stderr.
In order to implement this feature, I based my code on the Stack Overflow question 'How to execute a command and get output of command within C++ using POSIX?'.
My code:
int ExecuteCmdEx(const char* cmd, std::string &result)
{
    char buffer[128];
    int retCode = -1; // -1 if an error occurs.
    std::string command(cmd);
    command.append(" 2>&1"); // also redirect stderr to stdout
    result = "";
    FILE* pipe = _popen(command.c_str(), "r");
    if (pipe != NULL) {
        try {
            while (!feof(pipe)) {
                if (fgets(buffer, 128, pipe) != NULL)
                    result += buffer;
            }
        }
        catch (...) {
            retCode = _pclose(pipe);
            throw;
        }
        retCode = _pclose(pipe);
    }
    return retCode;
}
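For reference, a call might look like this (hypothetical usage, not from the original post):
std::string output;
int rc = ExecuteCmdEx("ipconfig", output);
// rc holds the value returned by _pclose; output holds the text captured
// from the command's merged stdout/stderr.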
It works perfectly with console applications, but in the case of 'GUI applications' it doesn't work as expected...
With 'GUI applications', the code blocks on while (!feof(pipe)), expecting to get something from the pipe.
I understand that 'GUI applications' like notepad don't finish until someone interacts with them (the user closes the app, kills the process, etc.),
but when I launch a 'GUI application' from the Windows console, the prompt comes back immediately.
I would like my code to behave the same way with 'GUI applications'...
One possible solution would be to add an isGui variable indicating whether the code should read from the pipe, but I rejected this option, as I don't want to have to specify whether each command is a 'GUI application' or not.
Well, you don't have to indicate isGui yourself; you can detect it by checking the subsystem of the executable (Windows GUI vs. console) prior to executing the command, and in the case of a GUI executable skip waiting on the redirected pipes.
For example, using SHGetFileInfo with the SHGFI_EXETYPE flag:
bool isGuiApplication(const std::string& command)
{
    // Take the first whitespace-delimited token as the executable name.
    auto it = command.find_first_of(" \t");
    const std::string& executable = (it == std::string::npos ? command : command.substr(0, it));
    DWORD_PTR exetype = SHGetFileInfo(executable.c_str(), 0, nullptr, 0, SHGFI_EXETYPE);
    if (!exetype) {
        cerr << "Executable check failed\n";
    }
    // With SHGFI_EXETYPE the high word of the result is nonzero only for
    // Windows GUI executables, which is what this mask tests.
    return ((uintptr_t)exetype & 0xffff0000);
}
Then later in the code...
if (isGuiApplication(command)) {
    cout << "GUI application\n";
    system(command.c_str()); // don't wait on stdin
}
else {
    cout << "Console application\n";
    . . .
    // _popen and stuff
}
I have code that parses a configuration file, which may send output to stdout or stderr in case of errors.
Unfortunately, when I pipe the output of my program to /dev/null, I get an exception: ios_base::clear: unspecified iostream_category error, with strerror reporting Inappropriate ioctl for device.
Here is the relevant part of my code:
try {
    file.exceptions(std::ios::failbit | std::ios::badbit);
    file.open(config_file);
    // file.exceptions(std::ios::failbit);
}
catch (std::ios_base::failure& e) {
    throw std::invalid_argument(std::string("Can't open config file ") + config_file + ": " + strerror(errno));
}
try {
    errno = 0; // reset errno before I/O operation.
    // ...
    while (std::getline(file, line)) {
        if ( [... unknown line ...] ) {
            std::cerr << "Invalid line in config " << config_file << ": " << line << std::endl;
            continue;
        }
        // debug info to STDOUT:
        std::cout << config_file << ": " << line << std::endl;
    }
} catch (std::ios_base::failure& err) {
    std::cout << "Caught exception " << err.what() << std::endl;
    if (errno != 0) {
        char* theerror = strerror(errno);
        file.close();
        throw std::invalid_argument(std::string("Can't read config file ") + config_file + ": " + theerror);
    }
}
try {
    file.close();
}
catch (std::ios_base::failure& e) {
    throw std::invalid_argument(std::string("Can't close config file ") + config_file + ": " + strerror(errno));
}
Here is an example of an exception:
~> ./Application test1.conf > /dev/null
test1.conf: # this is a line in the config file
Caught exception ios_base::clear: unspecified iostream_category error
When I don't pipe to /dev/null (but to stdout or a regular file), all is fine. I first suspected that cout and cerr were causing problems, but I'm not sure.
I finally found that I could resolve this by enabling the following line after opening the file, so that badbit-type exceptions are ignored:
file.exceptions(std::ios::failbit);
Frankly, I'm too much of a novice in C++ to understand what is going on here.
My questions: what is causing the unspecified iostream_category exception? How can I avoid it? Is setting file.exceptions(std::ios::failbit); indeed a proper solution, or does that introduce other pitfalls? (A pointer to a good source detailing best practices for opening files in C++ that includes all proper exception handling, or some background explanation, would be highly appreciated!)
I would recommend the following approach, based on my own experience as well as some of the links provided above. In short:
Don't turn on exceptions when using C++ streams. They are so hard to get right that I find they make my code less readable, which sort of defeats the purpose of exceptions. (IMHO it would be better if the C++ streams used exceptions by default and in a more reasonable manner. But since they aren't built that way, it is better not to force the issue and just follow the pattern that the designers seem to have had in mind.)
Rely on the fact that getline will handle the various stream bits properly. You don't need to check after each call whether the bad or fail bits are set: the stream returned by getline evaluates as false in a boolean context when they are.
Restructure your code to follow the RAII pattern so that you don't need to call open() or close() manually. This not only simplifies your code, but ensures that you don't forget to close it. If you are not familiar with the RAII pattern, see https://en.wikipedia.org/wiki/Resource_acquisition_is_initialization.
Don't repeatedly put things like errno or your filename into the exceptions you generate. Keep them minimal to avoid repetition and use a catch block at the bottom to handle the errors, possibly throwing a new exception that adds the details you wish to report.
With that said, I would recommend rewriting your code to look something like the following:
using namespace std;
try {
    errno = 0;
    // Construct the stream here (part of RAII) then you don't need to call
    // open() or close() manually.
    ifstream file(config_file);
    if (!file.is_open()) {
        throw invalid_argument("Could not open the file");
    }
    while (getline(file, line)) {
        // do all your processing as if no errors will occur (getline will
        // evaluate as false when an error occurs). If there is
        // something wrong with a line (bad format, etc.) then throw an
        // exception without echoing it to cerr.
    }
    if (file.bad()) {
        throw invalid_argument("Problem while reading file");
    }
}
catch (const invalid_argument& e) {
    // Whatever your error handling needs to be. config_file should still
    // be valid so you can add it here, you don't need to add it to each
    // individual exception. Also you can echo to cerr here, you don't need
    // to do it inside the loop. If you want to use errno it should still
    // be set properly at this point. If you want to pass the exception to
    // the next level you can either rethrow it or generate a new one that
    // adds additional details (like strerror and the filename).
}
I've improved on my earlier answer by writing a couple of functions that handle the stream checks, using lambdas to do the actual processing of the file. This has the following advantages:
You don't forget where to put the stream checks.
Your code concentrates on what you want to do (the file processing) and not on the system boilerplate items.
I've created two versions. With the first, your lambda is given the stream and you can process it however you like. With the second, your lambda is given one line at a time. In both cases, if an I/O problem occurs, a system_error exception is thrown. You can also throw your own exceptions in your lambda and they will be passed on properly.
namespace {
    inline void throwProcessingError(const string& filename, const string& what_arg) {
        throw system_error(errno, system_category(), what_arg + " " + filename);
    }
}

void process_file(const string& filename, function<void (ifstream &)> fn) {
    errno = 0;
    ifstream strm(filename);
    if (!strm.is_open()) throwProcessingError(filename, "Failed to open");
    fn(strm);
    if (strm.bad()) throwProcessingError(filename, "Failed while processing");
}

void process_file_line_by_line(const string& filename, function<void (const string &)> fn)
{
    errno = 0;
    ifstream strm(filename);
    if (!strm.is_open()) throwProcessingError(filename, "Failed to open");
    string line;
    while (getline(strm, line)) {
        fn(line);
    }
    if (strm.bad()) throwProcessingError(filename, "Failed while processing");
}
To use them, you would call them as follows...
process_file("myfile.txt", [](ifstream& stream) {
... my processing on stream ...
});
or
process_file_line_by_line("myfile.txt", [](const string& line) {
... process the line ...
});
My program (in C++) uses a libev event loop, and I need to watch a specific folder (say foo) for new files.
I cannot use Inotify::WaitForEvents() in blocking mode because I do not want to block my libev event loop. As suggested in the inotify documentation, I use Inotify::SetNonBlock(true) to make it non-blocking. The inotify file descriptor is then passed to libev EV_STAT to watch (as suggested in the libev documentation).
The libev callback for EV_STAT is indeed called when there are new files in the folder foo. However, when I then use Inotify::WaitForEvents() followed by Inotify::GetEventCount(), I get zero events.
I suspect that libev has already consumed the event and converted it to an EV_STAT event. If this is the case, how can I get the names of the new files?
I know there is an inode number in the EV_STAT callback parameters, but getting a file name from an inode number is not trivial, so it would be better if I could get the file name directly.
Any suggestions?
EDIT
I wrote a small program to reproduce this problem. It seems the events are not lost. Instead, the inotify events have not yet arrived when the libev callback is called. The events re-appear when you copy in a new file.
The program to reproduce the issue:
#include <ev++.h>
#include "inotify-cxx.h"
#include <iostream>
const char * path_to_watch = "/path/to/my/folder";
class ev_inotify_test
{
InotifyWatch m_watch;
Inotify m_notify;
// for watching new files
ev::stat m_folderWatcher;
public:
ev_inotify_test() : m_watch(path_to_watch, IN_MOVED_TO | IN_CLOSE_WRITE),
m_notify()
{
}
void run()
{
try {
start();
// run the loop
ev::get_default_loop().run(0);
}
catch (InotifyException & e) {
std::cout << e.GetMessage() << std::endl;
}
catch (...) {
std::cout << "got an unknown exception." << std::endl;
}
}
private:
void start()
{
m_notify.SetNonBlock(true);
m_notify.Add(m_watch);
m_folderWatcher.set<ev_inotify_test, &ev_inotify_test::cb_stat>(this);
m_folderWatcher.set(path_to_watch);
m_folderWatcher.start();
}
void cb_stat(ev::stat &w, int revents)
{
std::cout << "cb_stat called" << std::endl;
try {
m_notify.WaitForEvents();
size_t count = m_notify.GetEventCount();
std::cout << "inotify got " << count << " event(s).\n";
while (count > 0) {
InotifyEvent event;
bool got_event = m_notify.GetEvent(&event);
std::cout << "inotify confirm got event" << std::endl;
if (got_event) {
std::string filename = event.GetName();
std::cout << "test: inotify got file " << filename << std::endl;
}
--count;
}
}
catch (InotifyException &e) {
std::cout << "inotify exception occurred: " << e.GetMessage() << std::endl;
}
catch (...) {
std::cout << "Unknown exception in inotify processing occurred!" << std::endl;
}
}
};
int main(int argc, char ** argv)
{
ev_inotify_test().run();
}
When I copy in a tiny file (say 300 bytes), the file is detected immediately. But if I copy in a bigger file (say 500 kB), there is no event until I copy in another file, and then I get two events.
The output looks like:
cb_stat called # test_file_1 (300 bytes) is copied in
inotify got 1 event(s).
inotify confirm got event
test: inotify got file test_file_1
cb_stat called # test_file_2 (500 KB) is copied in
inotify got 0 event(s). # no inotify event
cb_stat called # test_file_3 (300 bytes) is copied in
inotify got 2 event(s).
inotify confirm got event
test: inotify got file test_file_2
inotify confirm got event
test: inotify got file test_file_3
I finally figured out the problem: I should use ev::io to watch the file descriptor of inotify, instead of using ev::stat to watch the folder.
In the example code, the definition of m_folderWatcher should be:
ev::io m_folderWatcher;
instead of
ev::stat m_folderWatcher;
And it should be initialized as:
m_folderWatcher.set(m_notify.GetDescriptor(), ev::READ);
instead of
m_folderWatcher.set(path_to_watch);
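Putting the fragments together, the relevant parts of the corrected class might look like this (a sketch; the callback is renamed cb_inotify here, and its signature changes to take an ev::io watcher):
class ev_inotify_test
{
    InotifyWatch m_watch;
    Inotify m_notify;
    ev::io m_folderWatcher; // was ev::stat

    // ...

    void start()
    {
        m_notify.SetNonBlock(true);
        m_notify.Add(m_watch);
        m_folderWatcher.set<ev_inotify_test, &ev_inotify_test::cb_inotify>(this);
        m_folderWatcher.set(m_notify.GetDescriptor(), ev::READ); // watch the inotify fd
        m_folderWatcher.start();
    }

    void cb_inotify(ev::io &w, int revents)
    {
        // same body as cb_stat above: WaitForEvents(), then drain the
        // queued events with GetEventCount()/GetEvent().
    }
};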
I can open a process with popen with no problem (I run a shell script whose output looks like
1443772262175; URL filter
1443772267339; URL blocked ...),
but afterwards it is impossible to read its output.
I tried fscanf and fgets(buff_out, sizeof(buff_out)-1, pf):
int lineno = 0;
if ((pf = popen(files_to_open, "r")) != NULL)
    while (!feof(pf))
    {
        fscanf(pf, "%s %s", buffer, afficher);
        message << buffer << " " << afficher;
    }
or
while (fgets(buff_out, sizeof(buff_out)-1, pf) != NULL) {
}
I tried this in a small test program and it works fine, but in a more complex (parallel) program that I can't debug, it fails: no error, just nothing to scan or read.
Should I abandon popen?
Thanks for your help.