I am redirecting stderr to a log file on Windows Phone Runtime:
int stdError = 0;
FILE* pLogFile = NULL;

// Redirect stderr to a log file
if ( ! m_logFilePath.empty( ) )
{
    // Get a duplicate file descriptor for stderr.
    // This returns -1 on failure.
    stdError = ::_dup( ::_fileno( stderr ) );
    if ( stdError != -1 )
    {
        // Redirect stderr to a log file so we can capture
        // ffmpeg error information.
        // Ignore the return value (nothing we can do if this fails).
        ::freopen_s( &pLogFile, m_logFilePath.c_str( ), "w", stderr );
    }
}
The program intermittently crashes when calling fflush(stderr). When I don't redirect stderr, everything works fine.
It's Windows, so who knows?
Try std::cerr.flush(); because I can totally see Windows doing its own thing again (like sockets not being like files; they like doing their own I/O stuff).
Using the above hands the task to their standard library, rather than assuming stderr behaves like an ordinary file. Remember "abstraction": flush is a method, a verb, and we don't care how it works (or in this case don't want to know), so let's just assume flush does what flush ought to do!
Leave a comment if this doesn't work and I shall have a think.
I don't use Windows or Windows Phones (I am not one of the lucky 24 out there in the world :P), but I do know that there are I/O problems ("differences") on Windows; fortunately MinGW and co hide them from me :)
OR
Change your tactics: if I really wanted to side-step the problem (because it isn't your code), I'd create a new class called my_error_stream or something that extends std::ostream (that way you can use it like std::cerr, which "is a" std::ostream).
Put a static method in it called get_error_stream() or something that returns one of two classes derived from my_error_stream: one forwards straight to std::cerr, the other to a file.
It depends on how you like your code to look and feel; I did it this way because it keeps the implementations separate, each under its own "branch" of the class hierarchy. A rough sketch is below.
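Something like this minimal sketch (my_error_stream, get_error_stream and the log path are placeholder names; treat it as an outline under those assumptions, not a drop-in implementation):

#include <fstream>
#include <iostream>
#include <ostream>

class my_error_stream : public std::ostream
{
public:
    static my_error_stream& get_error_stream();   // factory picks the flavour
protected:
    explicit my_error_stream(std::streambuf* buf) : std::ostream(buf) {}
};

// Forwards straight to std::cerr by sharing its stream buffer.
class console_error_stream : public my_error_stream
{
public:
    console_error_stream() : my_error_stream(std::cerr.rdbuf()) {}
};

// Writes to a log file instead.
class file_error_stream : public my_error_stream
{
public:
    explicit file_error_stream(const char* path)
        : my_error_stream(nullptr), m_file(path) { rdbuf(m_file.rdbuf()); }
private:
    std::ofstream m_file;
};

my_error_stream& my_error_stream::get_error_stream()
{
#ifdef LOG_TO_FILE                                   // hypothetical compile-time switch
    static file_error_stream stream("error.log");   // hypothetical log path
#else
    static console_error_stream stream;
#endif
    return stream;
}

// Usage: my_error_stream::get_error_stream() << "something went wrong\n";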
It doesn't really answer your question, but your code seems fine, and Windows sucks at pipes and sockets.
Related
If you want to spawn a Windows console in an otherwise SUBSYSTEM:WINDOWS application you can use this code:
if (AllocConsole())
{
    FILE* file = nullptr;
    _wfreopen_s(&file, L"CONIN$", L"r", stdin);
    _wfreopen_s(&file, L"CONOUT$", L"w", stdout);
    _wfreopen_s(&file, L"CONOUT$", L"w", stderr);
}
The _wfreopen_s calls map stdin to CONIN$ (and stdout/stderr to CONOUT$) and store the resulting FILE* in the file variable (which we are effectively discarding).
What I'd like to do is instead map an input from something other than stdin, for example, another file stream and then write that stream to CONIN$.
For a larger picture of what I'm trying to do here, I've got a secondary thread running std::getline(std::cin... which blocks. I'd like the thread context object to just send a \n to the console to break the blocking call.
If there are other ideas, I'm open. The alternative currently is that I print a message to the console that says "Shutting down, press ENTER to quit..." Which, I guess, also works ;)
What I tried was FILE* conin = new FILE();, then a memcpy to fill it with a \n, and then WriteFile on that pointer, thinking it might write the stream out to CONIN$. The code compiles, and the contents of the FILE* appear to be correct (0x0a), but it does not appear to send that stream to the console.
I tested this by putting a std::cout above and below the code doing the stream write. If it worked, I'd expect the two outputs to end up on separate lines, but they always show up on the same line, suggesting that I'm not sending the file stream.
Thanks for reading!
You should not discard the FILE* handle; otherwise you won't be able to manipulate it, and in particular you won't be able to properly flush or close it if required.
If you're working with threads, simply give the FILE* to the thread that requires it. Threads share the same memory space.
If you're working with processes, then you should create a pipe between the two processes involved (see the Win32 CreatePipe API for details) and connect one's stdout to the other's stdin.
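Not needed for your thread scenario, but since CreatePipe came up, here is a minimal Win32 sketch of that process-to-process wiring (the child command "more" is just a placeholder, and most error handling is omitted):

#include <windows.h>

int main()
{
    SECURITY_ATTRIBUTES sa{ sizeof(sa), nullptr, TRUE };   // make handles inheritable
    HANDLE readEnd = nullptr, writeEnd = nullptr;
    if (!CreatePipe(&readEnd, &writeEnd, &sa, 0))
        return 1;
    // The parent keeps the write end; don't let the child inherit it.
    SetHandleInformation(writeEnd, HANDLE_FLAG_INHERIT, 0);

    STARTUPINFOW si{};
    si.cb = sizeof(si);
    si.dwFlags = STARTF_USESTDHANDLES;
    si.hStdInput  = readEnd;                                // child's stdin is the pipe
    si.hStdOutput = GetStdHandle(STD_OUTPUT_HANDLE);
    si.hStdError  = GetStdHandle(STD_ERROR_HANDLE);

    PROCESS_INFORMATION pi{};
    wchar_t cmd[] = L"more";                                // placeholder child command
    if (CreateProcessW(nullptr, cmd, nullptr, nullptr,
                       TRUE /*inherit handles*/, 0, nullptr, nullptr, &si, &pi))
    {
        CloseHandle(readEnd);                               // parent no longer needs it
        DWORD written = 0;
        WriteFile(writeEnd, "hello\r\n", 7, &written, nullptr);
        CloseHandle(writeEnd);                              // signals EOF to the child
        WaitForSingleObject(pi.hProcess, INFINITE);
        CloseHandle(pi.hProcess);
        CloseHandle(pi.hThread);
    }
    return 0;
}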
Consider that we have some_function and it prints its result to stdout instead of returning it. Changing its definition is out of our scope and there's no alternative to it. We're left with the option of reading it from stdout. Hence the question:
How do I read the stdout of a C++ program from within that same program?
It is possible to get the pid. I searched for a way to get the fd of the same program's stdout, but I wasn't able to find anything.
#include <unistd.h>
#include <sys/types.h>
#include <iostream>
#include <string>

void some_function(){
    std::cout << "Hello World";
}

int main(){
    int pid = ::getpid();
    std::string s = //What to write here.
    std::cout << "Printing";
    some_function(); // This function prints "Hello World" to the screen
    std::cout << s;  // "PrintingHello World"
    return 0;
}
How do I attach a pipe to the same process, i.e. without creating a child process?
Some might think of creating a child process and calling some_function in it, so that its stdout can be read in the parent process, but no: some_function depends on the process which calls it, so we want to call it in this very process instead of creating a child.
This isn't hard to do, but IMO it's quite a hack, and it won't work with a multithreaded program:
// make a temp file to store the function's stdout
// (mkstemp needs a writable template, not a string literal)
char tmpName[] = "/tmp/stdout.XXXXXX";
int newStdOut = mkstemp( tmpName );
// save the original stdout
int tmpStdOut = dup( STDOUT_FILENO );
// flush anything still buffered in stdout
fflush( stdout );
// now point the stdout file descriptor to the file
dup2( newStdOut, STDOUT_FILENO );
// call the function we want to collect the stdout from
some_function();
// make sure stdout is empty
fflush( stdout );
// restore original stdout
dup2( tmpStdOut, STDOUT_FILENO );
// the tmp file now contains whatever some_function() wrote to stdout
Error checking, proper headers, syncing C stdout with C++ cout, and reading from and cleaning up the temp file are left as exercises... ;-)
Note that you can't safely use a pipe here: the function can write enough to fill up the pipe, and you can't drain the pipe while it's blocked because you're the one who called the function.
How to read stdout of C++ program in itself?
There are very few reasons to do that and that is usually (but not always) a design bug.
Be aware of an important thing (at least in a single-threaded program): if your program is both reading from its "stdout" and writing to it (as usual), it can get stuck in a deadlock, unable to read and hence never reaching any output routine (or unable to write because the pipe is full).
So a program which both reads and writes the same thing (actually, the two ends of the same pipe(7)) should use some multiplexing call like poll(2).
Once you understand that, you'll have some event loop. And before that, you'll make a pipe(7) using pipe(2) (and dup2(2)).
However, a pipe to self is a good thing in some signal(7) handling (see signal-safety(7)). That trick is even recommended for Qt Unix signal handling.
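For illustration, a minimal sketch of that self-pipe trick (POSIX only; the signal and handler choices here are just an example):

#include <poll.h>
#include <signal.h>
#include <unistd.h>
#include <cerrno>
#include <cstdio>

static int self_pipe[2];

static void on_signal(int)
{
    char byte = 1;
    // write(2) is async-signal-safe, per signal-safety(7)
    (void)write(self_pipe[1], &byte, 1);
}

int main()
{
    if (pipe(self_pipe) != 0)
        return 1;
    signal(SIGINT, on_signal);

    std::puts("waiting, press Ctrl-C...");
    pollfd fds[1] = { { self_pipe[0], POLLIN, 0 } };

    // A real event loop would multiplex the pipe's read end with its other
    // descriptors; here we just wait on the pipe alone, retrying on EINTR.
    int ready;
    do {
        ready = poll(fds, 1, -1);
    } while (ready < 0 && errno == EINTR);

    if (ready > 0 && (fds[0].revents & POLLIN)) {
        char byte;
        read(self_pipe[0], &byte, 1);
        std::puts("caught a signal via the self-pipe");
    }
    return 0;
}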
Read more about Unix system programming, e.g. ALP or some newer book. Read also intro(2) & syscalls(2).
I have looked for pipe and it requires fd
Wrong. Read pipe(2) much more carefully: on success it fills an array of two file descriptors. Of course it could fail (see errno(3), perror(3) & strerror(3)).
Maybe you just need popen(3). Or std::ostringstream. Or open_memstream(3).
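If some_function writes through std::cout (rather than through the C stdout FILE*), the std::ostringstream route can be as simple as swapping cout's stream buffer; a minimal sketch under that assumption:

#include <iostream>
#include <sstream>
#include <string>

void some_function() { std::cout << "Hello World"; }

int main()
{
    std::ostringstream capture;
    std::streambuf* old = std::cout.rdbuf(capture.rdbuf()); // redirect cout
    some_function();
    std::cout.rdbuf(old);                                   // restore cout
    std::string s = capture.str();                          // "Hello World"
    std::cout << "Captured: " << s << '\n';
    return 0;
}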
Consider we have some_function and it prints result to stdout instead returning it. Changing it's definition is out of our scope and there's no alternative to it
If some_function is your code, or is some free software, you could and probably should improve it to give a result somewhere....
I'm developing a cross-platform project at the moment. On Windows I had a class that ran a process/script (using a command line), waited for it to end, and read everything from its stdout/stderr into a buffer. I then printed the output to a custom 'console'. Note: this was not a redirection of the child's stdout to the parent's stdout, just a pipe from the child's stdout to the parent.
I'm new to OSX/unix-like APIs, but I understand that the canonical way of doing something like this is forking and piping the stdouts together. However, I don't want to redirect it to my own stdout; I would like to capture the output. It should work pretty much like this (pseudocode; resemblance to unix functions purely coincidental):
class program
{
    string name, cmdline;
    string output;

    program(char * name, char * cmdline)
        : name(name), cmdline(cmdline) {};

    int run()
    {
        // run program - spawn it as a new process
        int pid = exec(name, cmdline);
        // wait for it to finish
        wait(pid);

        char buf[size];
        int n;
        // read output of program's stdout
        // keep appending data until there's nothing left to read
        while (read(pid, buf, size, &n))
            output.append(buf, n);

        // return exit code of process
        return getexitcode(pid);
    }

    const string & getOutput() { return output; }
};
How would I go about doing this on OSX?
Edit:
Okay, so I studied the relevant APIs and it seems that some kind of fork/exec combo is unavoidable. The problem at hand is that my process is very large and forking it really seems like a bad idea (I see that some unix implementations can't do it if the parent process takes up more than 50% of the system RAM).
Can't I avoid this scheme in any way? I see that vfork() might be a possible contender, so maybe I could try to mimic the popen() function using vfork. But then again, most man pages state that vfork may very well just be fork().
You have a library call to do just that: popen. It gives you back a FILE* stream, and you can read that stream until EOF. It's part of stdio, so you can do that on OSX, but on other systems as well. Just remember to pclose() the stream.
#include <stdio.h>
FILE * popen(const char *command, const char *mode);
int pclose(FILE *stream);
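A minimal sketch of how that might look (the command "ls -l" is just a placeholder for whatever you want to run):

#include <cstdio>
#include <string>

int main()
{
    std::string output;
    FILE* pipe = popen("ls -l", "r");      // placeholder command
    if (!pipe)
        return 1;

    char buf[256];
    while (fgets(buf, sizeof buf, pipe))   // read until EOF
        output += buf;

    int status = pclose(pipe);             // also reaps the child
    std::printf("exit status: %d\noutput:\n%s", status, output.c_str());
    return 0;
}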
If you want to keep the output with absolutely no redirection, the only thing I can think of is using something like "tee": a command which splits the output to a file but maintains its own stdout. It's fairly easy to implement that in code as well, but it might not be necessary in this case.
How do I passively listen to stderr and obtain it as a string for sending to a callback? I have seen posts on reading stderr, but I want to listen to it rather than actively reading it.
Background:
I have a cross-platform piece that uses a 3rd-party library (libcurl) which will output verbose info to stderr. This cross-platform piece is to be used by more than one non-cross-platform application.
I would like to log this info, which I can do by providing a FILE* to libcurl. But instead of doing that, I want to see if I can capture (passively listen to) the output on stderr as a string and send it back to the calling main application via a callback. This has two benefits: 1. the main app can keep a single log using whatever logging tool it wants; 2. it keeps this piece cross-platform.
Doing this in a single process is a little tricky, but you can probably do it.
1: Using freopen() you can redirect your stderr to a named file. You can simultaneously open that file for reading on another handle. You might also need to call setvbuf() on stderr to turn off buffering of output to stderr, so that you can read it right away from the second handle. Since it is being written to a file, you can read it at any time, whenever it is convenient. The unix function select() is what you need if you want to be notified when the file changes (see also fileno()).
2: More tricky would be to set up stderr as the write end of a pipe. That should be doable using dup3(), though this isn't exactly cross-platform (to non-unixy OSes). It would also require a second thread reading from the pipe, to prevent the writer from being blocked if it writes very much.
Like:
FILE *stream = freopen("stderr.out", "w", stderr); // Added missing pointer
setvbuf(stream, 0, _IONBF, 0); // No Buffering

FILE *input = fopen("stderr.out", "r");

fprintf(stderr, "Output to stderr dude\n");
//fflush(stderr); // You can explicitly flush instead of setting no buffering.

char buffer[1024];
while (fgets(buffer, 512, input))
{
    printf(">>>%s\n", buffer);
}
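For the second option, a rough sketch of the pipe approach (assumptions: POSIX APIs, dup2() rather than dup3(), and a detached reader thread forwarding chunks to a hypothetical callback):

#include <unistd.h>
#include <cstdio>
#include <functional>
#include <string>
#include <thread>

void capture_stderr(std::function<void(const std::string&)> callback)
{
    int fds[2];
    if (pipe(fds) != 0)
        return;

    setvbuf(stderr, nullptr, _IONBF, 0);   // unbuffered, so writes show up at once
    dup2(fds[1], fileno(stderr));          // stderr now feeds the pipe's write end
    close(fds[1]);

    // Reader thread drains the pipe so writers to stderr never block.
    std::thread([fd = fds[0], callback] {
        char buf[1024];
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0)
            callback(std::string(buf, n));
        close(fd);
    }).detach();
}

int main()
{
    capture_stderr([](const std::string& chunk) {
        std::printf("captured: %s", chunk.c_str());   // hypothetical main-app logger
    });
    fprintf(stderr, "verbose info from the library\n");
    sleep(1);   // toy example: give the reader thread a moment before exiting
    return 0;
}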
I'm new here and my English is not really good. I apologize for any inconvenience!
I'm programming an application for Windows Mobile with native code (MFC). I'm trying to open a file, and it is driving me crazy. I've tried opening it in a thousand different ways... and I do manage to open it, but when I try to read (fread or getline) the program crashes without any explanation:
The program 'x' has exited with code 0 (0x0)
GetLastError(), in some cases, returns 183.
Here is the code I've used to open the file:
std::wifstream file(L"\\Archivos de programa\\Prog\\properties.ini");
wchar_t lol[100];
if (file) {
    if (!file.eof()) {
        file.getline(lol, 99);
    }
}
It enters both ifs, but the getline crashes.
FILE * lol = NULL;
lol = _wfopen(ruta, L"rb");
DWORD a = GetLastError();
if ( lol != NULL )
    return 1;
else
    return -1;
It returns 1 (correct), and afterwards, in a later getline, it stores garbage in the string. However, it doesn't crash!!
fp.open(ruta, ifstream::in);
if ( fp.is_open() ) {
    return 1;
} else {
    return -1;
}
It reaches the return 1, but the later getline() crashes when executed.
I've debugged the getline() method and it crashes inside the fstream library, right here:
if ((_Meta = fgetc(_File)) == EOF)
    return (false);
In the if. In the fgetc(), I suppose.
I'm going completely crazy!! I need some clue, please!!
The path of the file is correct: first, because, in theory, the methods do open the file, and second, because I obtain the path dynamically and it matches.
I should emphasize that fread also crashes.
Thanks in advance!
P.S.:
I should mention that when I use any fopen variant, fp.good() returns FALSE and GetLastError() returns 183. On the other hand, if I use fp.open(path, ifstream::in); or std::wifstream fp(path);, then fp.good() returns TRUE and GetLastError() doesn't report any error (0).
A hint: use the Process Monitor tool to see what goes wrong in the file system calls.
The path accepted by wifstream is lacking a drive ("C:" or the like); I don't know what the ruta variable points to.
Apart from the streams problem itself, you can save yourself a lot of trouble by using the GetProfileString and related functions, when using a windows .ini file.
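If you go that route, a minimal sketch of what it might look like (this assumes desktop Win32; I haven't checked which of the profile functions are available on Windows Mobile, and the section/key names are placeholders):

#include <windows.h>
#include <iostream>

int main()
{
    wchar_t value[100] = {};
    // Reads key "Name" from section [Settings]; "<missing>" is the default
    // returned if the key isn't found.
    GetPrivateProfileStringW(L"Settings", L"Name", L"<missing>",
                             value, 100,
                             L"\\Archivos de programa\\Prog\\properties.ini");
    std::wcout << value << L'\n';
    return 0;
}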
I'm shooting in the dark here, but your description sounds like a runtime mismatch story. Check that MFC and your project use the same runtime link model (static/dynamic). If you link to MFC dynamically, then the restriction is stricter: both MFC and your project have to use dynamic runtime.
I don't know why, but with the CFile class... it works...
Programming mysteries!
Shooting in the dark too.
Unexplained random crashes in MFC often come from a mismatched message handler prototype.
For example, the following code is wrong, but it won't generate any warning during compilation and it may work most of the time:
ON_MESSAGE(WM_LBUTTONDOWN, onClick)
...

void onClick(void) // wrong prototype given the macro used (ON_MESSAGE)
{
    // do some stuff
}
Here the prototype should be:

LRESULT onClick(WPARAM, LPARAM)
{
    return 0;   // ON_MESSAGE handlers must return an LRESULT
}
It often happens when people get confident enough to start modifying the message maps by hand.