I am creating a class that catches all console output and dumps it into one log. I need this because my program uses many 3rd-party libraries that I cannot change, and useful information from these libraries is printed to the console in a handful of ways. I know about replacing cout/cerr with a custom stream buffer using rdbuf(); I don't need help with that. I also know about creating a pipe to capture C-style output, e.g. fprintf(stdout, "Hello, world!"). However, unlike a custom stream buffer, where I can handle output as it comes in, the C-style output is now stuck in this pipe and I have to periodically flush everything and read from it. I would much rather get a notification or install a callback to handle pipe input as it happens.
Qt is in the mix here, too. I've been playing with the QSocketNotifier class, but it doesn't seem to work with the pipe's read or write file descriptors.
Suggestions?
output is now stuck in this pipe and I have to periodically flush everything and read from it. I would much rather get a notification or install a callback to handle pipe input as it happens.
It's unclear what "everything" is or why you would need to do more than flush specific file streams, but this sounds like you are referring to the fact that these streams are buffered, so the pipes you have connected them to aren't written to until a flush condition is met or fflush() is called.
Further, we don't know whether you are manipulating the layer 3 file streams (FILE *) or the layer 2 file descriptors, and we don't know whether you've disabled synchronization between the C++ streams and the layer 3 streams.
All that said, it is possible to disable the C layer 3 buffering with
#include <cstdio>   // setvbuf, _IONBF
setvbuf(stdout, NULL, _IONBF, 0);
setvbuf(stderr, NULL, _IONBF, 0);
This means you won't have to call fflush() any more for, say, fprintf() output to be written to the pipes.
From there, you can set up a poll()/select() loop to check for data on the pipes, or you can simply have threads performing blocking reads from them and transferring the data someplace else.
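For illustration, here is a minimal sketch of the poll() route; drainPipe and readFd are made-up names, and readFd stands for the read end of the pipe you created:

#include <poll.h>
#include <unistd.h>

void drainPipe(int readFd)
{
    struct pollfd pfd = { readFd, POLLIN, 0 };
    char buf[4096];
    // Wait up to 100 ms per iteration; 0 means timeout, < 0 means error.
    while (poll(&pfd, 1, 100) >= 0) {
        if (pfd.revents & POLLIN) {
            ssize_t n = read(readFd, buf, sizeof buf);
            if (n <= 0)
                break;          // writer closed the pipe, or read failed
            // hand buf[0..n) to your logger here
        }
        if (pfd.revents & POLLHUP)
            break;              // every write end has been closed
    }
}

A thread doing plain blocking read() calls is even simpler: read() returns 0 once every write end of the pipe has been closed.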
On Linux, we use freopen() to redirect the standard streams:
freopen (outfile, "a", stdout);
freopen (outfile, "a", stderr);
I don't believe there's any way to get a notification.
Yes, I can't. It seems weird that ostream has no close(), since istream can detect end-of-file.
Here's my situation: I am capturing all the output on POSIX fd2, in this process and its children, by creating a pipe and dup2'ing the pipe's write end onto fd2. A thread then reads the read end of the pipe using an associated C stream (and happens to write each line with a timestamp to the original fd2 via another associated C stream).
When all the children are dead, I write a closing message to cerr, then I need to close it so the thread echoing it to the original error file will close the pipe and terminate.
The thread is not detecting eof(), even though I am closing both stderr and fd2.
I have reproduced my main program's setup in a simplified test program, using C streams instead of C++ iostreams, and everything works just fine there when I fclose stderr (there are no child processes in that simplified test, though).
Edit: hmm .. do I need to close the original pipe fd after dup2'ing it onto channel 2? I didn't do that, so the underlying pipe still has an open fd attached. Aha .. that's the answer!
When you duplicate a file descriptor with dup2 the original descriptor remains a valid reference to the underlying file. The file won't be closed and the associated resources freed until all file descriptors associated with a particular file are closed (with close).
If you are using dup2 to copy a file descriptor to a well known number (such as 2 for stderr), you usually want to call close on the original file descriptor immediately after a successful dup2.
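As a sketch, the usual sequence looks like this (error handling omitted; fds is just an illustrative name):

#include <unistd.h>

int fds[2];
pipe(fds);          // fds[0] = read end, fds[1] = write end
dup2(fds[1], 2);    // fd 2 (stderr) now refers to the pipe's write end
close(fds[1]);      // close the original descriptor right away
// A reader on fds[0] now sees EOF as soon as fd 2 is closed, because no
// other descriptor keeps the write end alive.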
The streams used for the standard C++ streams are the same as those controlled by the corresponding stdio files. That is, if you fclose(stderr) you also close the stream used by std::cerr. ... and since you seem to be playing with the various dup() functions, you can also call close(2) to close this stream.
The best approach is to put a wrapper around your resource and have its destructor close it when it goes out of scope (the RAII idiom; see the presentation from Bjarne Stroustrup).
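For example, a minimal RAII wrapper for a POSIX descriptor might look like this; FdGuard is a hypothetical name, not from any library:

#include <unistd.h>

class FdGuard {
public:
    explicit FdGuard(int fd) : fd_(fd) {}
    ~FdGuard() { if (fd_ >= 0) close(fd_); }    // closed on scope exit
    FdGuard(const FdGuard&) = delete;           // exactly one owner per fd
    FdGuard& operator=(const FdGuard&) = delete;
    int get() const { return fd_; }
private:
    int fd_;
};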
How can I read/write a device in C++? The device is /dev/ttyPA1.
I thought about fstream, but I can't tell whether the device has output I can read without blocking the application.
My goal is to create an application where you write something into the terminal and it gets sent to /dev/ttyPA1. If the device has something to write back, the application will read it from the device and write it to the screen. If not, it will prompt the user to write to the device again.
How can I do this?
Use open(2), read(2), and write(2) to read from and write to the device (and don't forget to close(2) when you're done). You can also use the C stdio functions (fopen(3) and friends) or the C++ fstream classes, but if you do so, you almost definitely want to disable buffering (setvbuf(3) for stdio, or outFile.rdbuf()->pubsetbuf(0, 0) for fstreams).
These will all operate in blocking mode, however. You can use select(2) to test if it's possible to read from or write to a file descriptor without blocking (if it's not possible, you shouldn't do so). Alternatively, you can open the file with the O_NONBLOCK flag (or use fcntl(2) to set the flag after opening) on the file descriptor to make it non-blocking; then, any call to read(2) or write(2) that would block instead fails immediately with the error EWOULDBLOCK.
For example:
#include <fcntl.h>    // open(), O_RDWR, O_NONBLOCK
#include <unistd.h>   // read(), write(), close()
#include <cerrno>     // errno, EWOULDBLOCK

// Open the device in non-blocking mode
int fd = open("/dev/ttyPA1", O_RDWR | O_NONBLOCK);
if (fd < 0)
    ; // handle error

// Try to write some data
ssize_t written = write(fd, "data", 4);
if (written >= 0)
    ; // handle successful write (which might be a partial write!)
else if (errno == EWOULDBLOCK)
    ; // handle case where the write would block
else
    ; // handle real error

// Reading data is similar
You can use fstream, but you're going to have to look up the specification for how your device expects to receive data. Some devices are fine with plain ASCII; others need a specific binary sequence of bits/bytes. You may also have to write custom serialization objects that overload operator<< and operator>> for the data you're trying to write (a tiny sketch follows below). Either that, or you could use the read() and write() methods to move raw binary data between buffer arrays you've allocated in your program and the device.
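For instance, here is what the operator<< idea could look like, with a made-up Packet type standing in for whatever framing your device expects:

#include <ostream>

struct Packet { unsigned char cmd; unsigned char len; };

std::ostream& operator<<(std::ostream& os, const Packet& p)
{
    // Emit the raw two-byte frame rather than a textual representation.
    return os.put(static_cast<char>(p.cmd)).put(static_cast<char>(p.len));
}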
Edit: if you're concerned about blocking behavior, then you have two choices. You can use the POSIX API and check your opened file descriptor with poll() or select() to see if data is available (sketched below), or you can keep any file reading/writing calls in a set of separate threads that act as asynchronous read/write actions. You would send a message to the reader/writer thread, and that thread would block as needed on the fstream calls while the rest of your program continues to function. Your program may not be designed for threads, though, and if that's the case, the POSIX API is the only way to go.
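Here is a rough sketch of the select() check; readable is an illustrative helper, not a standard call:

#include <sys/select.h>

// Returns true if a read() on fd would not block right now.
bool readable(int fd)
{
    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    struct timeval tv = { 0, 0 };   // zero timeout: check and return at once
    return select(fd + 1, &rfds, NULL, NULL, &tv) > 0;
}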
I am programming a shell in C++. It needs to be able to pipe the output from one command to another. For example, in Linux, you can pipe a text file to more by doing cat textfile | more.
My function to pipe one thing to another is declared like this:
void pipeinput(string input, string output);
I send "cat textfile" as the input, and "more" as the output.
In C++ examples that show how to make pipes, fopen() is used. What do I send as my input to fopen()? I have seen C++ examples of piping using dup2 and without using dup2. What is dup2 used for? How do you know whether you need to use it or not?
Take a look at popen(3), which is a way to avoid execvp.
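For instance, a minimal sketch using popen() to run the writer command and read its output, with no fork/exec in your own code:

#include <cstdio>

FILE *p = popen("cat textfile", "r");   // the shell runs the command for us
if (p) {
    char line[256];
    while (fgets(line, sizeof line, p))
        fputs(line, stdout);            // or feed each line to the next stage
    pclose(p);                          // reaps the child, returns its status
}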
For a simple, two-command pipeline, the function interface you propose may be sufficient. For the general case of an N-stage pipeline, I don't think it is flexible enough.
The pipe() system call is used to create a pipe. In context, you will be creating one pipe before forking. One of the two processes will arrange for the write end of the pipe to become its standard output (probably using dup2()), and will then close both of the file descriptors originally returned by pipe(). It will then execute the command that writes to the pipe (cat textfile in your example). The other process will arrange for the read end of the pipe to become its standard input (probably using dup2() again), and will then close both of the file descriptors originally returned by pipe(). It will then execute the command that reads from the pipe (more in your example).
Of course, there will be still a third process around - the parent shell process - which forked off a child to run the entire pipeline. You might decide you want to refine the mechanisms a bit if you want to track the statuses of each process in the pipeline; the process organization is then a bit different.
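A sketch of that sequence for the cat textfile | more example (error handling and wait() omitted for brevity):

#include <unistd.h>

int fds[2];
pipe(fds);

if (fork() == 0) {                  // writer child: "cat textfile"
    dup2(fds[1], STDOUT_FILENO);    // stdout becomes the pipe's write end
    close(fds[0]);
    close(fds[1]);
    execlp("cat", "cat", "textfile", (char *)NULL);
}
if (fork() == 0) {                  // reader child: "more"
    dup2(fds[0], STDIN_FILENO);     // stdin becomes the pipe's read end
    close(fds[0]);
    close(fds[1]);
    execlp("more", "more", (char *)NULL);
}
close(fds[0]);                      // the parent must close both ends too,
close(fds[1]);                      // or "more" never sees EOF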
fopen() is not used to create pipes. fdopen() can be used to wrap the file descriptor in a FILE *, but it is not necessary to do so.
Pipes are created with the pipe(2) call, before forking off the process. The subprocess has a little bit of file descriptor management to do before execing the command. See the example in pipe's documentation.
I have a Windows C program that gets its data through a redirected stdin pipe, sort of like this:
./some-data-generator | ./myprogram
The problem is that I need to be able to read from stdin in a non-blocking manner. The reason is that (1) the input is a data stream with no EOF, and (2) the program needs to be able to abort its stdin-reading thread at any time. fread blocks when there's no data, which makes this very difficult.
In Unix this is no problem, as you can set the blocking mode of a file descriptor with fcntl() and O_NONBLOCK. However, fcntl() doesn't exist on Windows.
I tried using SetNamedPipeHandleState:
DWORD mode = PIPE_READMODE_BYTE | PIPE_NOWAIT;
BOOL ok = SetNamedPipeHandleState(GetStdHandle(STD_INPUT_HANDLE), &mode, NULL, NULL);
DWORD err = GetLastError();
but this fails with ERROR_ACCESS_DENIED (0x5).
I'm not sure what else to do. Is this actually impossible (!) or is it just highly obfuscated? The resources on the net are rather sparse for this particular issue.
The other approach: check whether there is input ready to read first:
For console mode, you can use GetNumberOfConsoleInputEvents().
For pipe redirection, you can use PeekNamedPipe() (see the sketch below).
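A minimal sketch of the PeekNamedPipe() check; stdinHasData is an illustrative name:

#include <windows.h>

// Returns true if the redirected stdin pipe has bytes waiting.
bool stdinHasData()
{
    DWORD avail = 0;
    HANDLE h = GetStdHandle(STD_INPUT_HANDLE);
    if (PeekNamedPipe(h, NULL, 0, NULL, &avail, NULL))
        return avail > 0;
    return false;   // not a pipe, or the pipe is broken
}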
You could use async I/O to read from the handle, such as the Win32 ReadFileEx() call. Use CancelIo() to terminate reading in the absence of input.
See MSDN at http://msdn.microsoft.com/en-us/library/aa365468(VS.85).aspx
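A rough, untested sketch of that approach; note that ReadFileEx() requires a handle opened for overlapped I/O, so check whether your redirected stdin handle qualifies:

#include <windows.h>

static char buf[4096];

static VOID CALLBACK onReadDone(DWORD err, DWORD bytes, LPOVERLAPPED)
{
    if (err == 0 && bytes > 0)
        ;   // process buf[0..bytes) here, then issue the next ReadFileEx()
}

void readAsync(HANDLE h)
{
    OVERLAPPED ov = {};
    if (ReadFileEx(h, buf, sizeof buf, &ov, onReadDone))
        SleepEx(INFINITE, TRUE);    // alertable wait; the routine fires here
    // CancelIo(h) from the issuing thread (or CancelIoEx() from another
    // thread) aborts the outstanding read.
}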
I am wrapping existing C++ code from a BSD project in our own custom wrapper and I want to integrate it to our code with as few changes as possible. This code uses fprintf to print to stderr in order to log / report errors.
I want to redirect this to an alternative place within the same process. On Unix I have done this with a socketpair and a thread: one end of the socket is where I send stderr (via a call to dup2) and the other end is monitored in a thread, where I can then process the output.
This does not work on Windows though because a socket is not the same as a file handle.
All documents I have found on the web show how to redirect output from a child process, which is not what I want. How can I redirect stderr within the same process getting a callback of some sort when output is written? (and before you say so, I've tried SetStdHandle but cannot find any way to make this work)...
You can use a similar technique on Windows; you just need to use different words for the same concepts. :) This article: http://msdn.microsoft.com/en-us/library/ms682499.aspx uses a Win32 pipe to handle I/O from another process; you just have to do the same thing with threads within the same process. Of course, in your case all output to stderr from anywhere in the process will be redirected to your consumer.
Actually, other pieces of the puzzle you may need are _fdopen and _open_osfhandle. In fact, here's a related example from some code I released years ago:
DWORD CALLBACK DoDebugThread(void *)
{
    AllocConsole();
    SetConsoleTitle("Copilot Debugger");
    // The following is a really disgusting hack to make stdin and stdout attach
    // to the newly created console using the MSVC++ libraries. I hope other
    // operating systems don't need this kind of kludge.. :)
    stdout->_file = _open_osfhandle((long)GetStdHandle(STD_OUTPUT_HANDLE), _O_TEXT);
    stdin->_file = _open_osfhandle((long)GetStdHandle(STD_INPUT_HANDLE), _O_TEXT);
    debug();
    stdout->_file = -1;
    stdin->_file = -1;
    FreeConsole();
    CPU_run();
    return 0;
}
In this case, the main process was a GUI process which doesn't start with stdio handles at all. It opens a console, then shoves the right handles into stdout and stdin so the debug() function (which was designed as a stdio interactive function) can interact with the newly created console. You should be able to open some pipes and do the same sort of thing to redirect stderr.
You have to remember that what MSVCRT calls "OS handles" are not Win32 handles, but another layer of handles added just to confuse you. MSVCRT tries to emulate the Unix handle numbers where stdin = 0, stdout = 1, stderr = 2 and so on. Win32 handles are numbered differently and their values always happen to be a multiple of 4. Opening the pipe and getting all the handles configured properly will require getting your hands messy. Using the MSVCRT source code and a debugger is probably a requirement.
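Putting those pieces together, an untested sketch of the stderr redirection itself might look like this (the worker thread that reads from hRead is left out):

#include <windows.h>
#include <io.h>
#include <fcntl.h>
#include <cstdio>

HANDLE hRead, hWrite;
CreatePipe(&hRead, &hWrite, NULL, 0);

// Wrap the Win32 write handle in a CRT descriptor, then install it as fd 2.
int crtWrite = _open_osfhandle((intptr_t)hWrite, _O_TEXT);
_dup2(crtWrite, _fileno(stderr));
_close(crtWrite);                   // fd 2 keeps the pipe's write end alive
setvbuf(stderr, NULL, _IONBF, 0);   // so fprintf(stderr, ...) hits the pipe at once

// A worker thread can now loop on ReadFile(hRead, ...) and process the output.

Note that this only catches writes that go through the C runtime (fprintf, std::cerr); code that calls WriteFile on GetStdHandle(STD_ERROR_HANDLE) directly is unaffected unless you also call SetStdHandle.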
You mention that you don't want to use a named pipe for internal use; it's probably worth pointing out that the documentation for CreatePipe() states, "Anonymous pipes are implemented using a named pipe with a unique name. Therefore, you can often pass a handle to an anonymous pipe to a function that requires a handle to a named pipe." So, I suggest that you just write a function that creates a similar pipe with the correct settings for async reading. I tend to use a GUID as a string (generated using CoCreateGuid() and StringFromIID()) to give me a unique name, and then create the server and client ends of the named pipe with the correct settings for overlapped I/O (more details on this, and code, here: http://www.lenholgate.com/blog/2008/02/process-management-using-jobs-on-windows.html).
Once I have that, I wire up some code that I already have to read from a file using overlapped I/O with an I/O completion port, and then I just get async notifications of the data as it arrives... However, there's a fair amount of well-tested library code in there that makes it all happen...
It's probably possible to set up the named pipe and then just do an overlapped read with an event in your OVERLAPPED structure and check the event to see if data was available... I don't have any code available that does that though.
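For what it's worth, a rough, untested sketch of that idea; hPipe is assumed to be a pipe handle created for overlapped I/O as described above:

#include <windows.h>

char buf[4096];
OVERLAPPED ov = {};
ov.hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);   // manual-reset event

if (!ReadFile(hPipe, buf, sizeof buf, NULL, &ov) &&
    GetLastError() != ERROR_IO_PENDING)
    ;   // handle a real error

// Later (e.g. each trip around an existing loop), see if data arrived:
if (WaitForSingleObject(ov.hEvent, 0) == WAIT_OBJECT_0) {
    DWORD bytes = 0;
    GetOverlappedResult(hPipe, &ov, &bytes, FALSE);
    // buf[0..bytes) holds the data; issue the next ReadFile() here
}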