Send Character TO CONIN$ (Windows Console) - c++

If you want to spawn a Windows console in an otherwise SUBSYSTEM:WINDOWS application you can use this code:
if (AllocConsole())
{
    FILE* file = nullptr;
    _wfreopen_s(&file, L"CONIN$", L"r", stdin);
    _wfreopen_s(&file, L"CONOUT$", L"w", stdout);
    _wfreopen_s(&file, L"CONOUT$", L"w", stderr);
}
The _wfreopen_s function reopens stdin on CONIN$ and stores the resulting stream pointer through the file argument (which we are effectively discarding).
What I'd like to do is instead map an input from something other than stdin, for example, another file stream and then write that stream to CONIN$.
For a larger picture of what I'm trying to do here: I've got a secondary thread running std::getline(std::cin, ...), which blocks. I'd like the thread context object to just send a \n to the console to break the blocking call.
If there are other ideas, I'm open. The alternative currently is that I print a message to the console that says "Shutting down, press ENTER to quit..." Which, I guess, also works ;)
What I tried was FILE* conin = new FILE();, followed by a memcpy to fill it with a \n, and then a WriteFile call on that pointer, thinking that it might write the stream out to CONIN$. The code compiles, and the contents of the FILE* appear to be correct (0x0a), but it does not appear to send anything to the console.
I tested this by placing std::cout calls above and below the code doing the stream write. If it worked, I'd expect the two lines of output to appear on separate lines, but they always show up on the same line, suggesting the \n never reaches the console.
Thanks for reading!

You should not discard the FILE* handle, otherwise you won't be able to manipulate it, in particular you won't be able to properly flush/close it if required.
If you're working with threads, simply give the FILE* to the thread that requires it. Threads share the same memory space.
If you're working with processes, then you should create a pipe between the two processes involved (see Win32 API for CreatePipe for details), and connect one's stdout to the other's stdin.
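If the goal is specifically to unblock that std::getline(std::cin, ...) call from another thread, a different technique than writing to the FILE* may work: inject a synthetic ENTER key event directly into the console input buffer with WriteConsoleInput. A minimal sketch (untested here, error handling trimmed):

#include <windows.h>

bool SendEnterToConsole()
{
    // CONIN$ must be opened with GENERIC_READ | GENERIC_WRITE for WriteConsoleInput.
    HANDLE hConIn = CreateFileW(L"CONIN$", GENERIC_READ | GENERIC_WRITE,
                                FILE_SHARE_READ | FILE_SHARE_WRITE,
                                nullptr, OPEN_EXISTING, 0, nullptr);
    if (hConIn == INVALID_HANDLE_VALUE)
        return false;

    // Build a single key-down event for ENTER.
    INPUT_RECORD rec{};
    rec.EventType = KEY_EVENT;
    rec.Event.KeyEvent.bKeyDown = TRUE;
    rec.Event.KeyEvent.wRepeatCount = 1;
    rec.Event.KeyEvent.wVirtualKeyCode = VK_RETURN;
    rec.Event.KeyEvent.uChar.UnicodeChar = L'\r';

    DWORD written = 0;
    BOOL ok = WriteConsoleInputW(hConIn, &rec, 1, &written);
    CloseHandle(hConIn);
    return ok && written == 1;
}

This also explains why the new FILE() / memcpy attempt had no visible effect: a FILE is only a user-space stdio structure, and writing bytes into it never feeds the console's input queue.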

Related

read stdout of a process in itself using c++

Consider we have some_function and it prints its result to stdout instead of returning it. Changing its definition is out of our scope and there's no alternative to it. We're left with the option of reading it from stdout. So the question:
How to read the stdout of a C++ program in itself?
It is possible to get the pid. I searched whether we can get an fd of the same program, but I wasn't able to find anything.
#include <unistd.h>
#include <sys/types.h>
#include <iostream>
#include <string>

void some_function() {
    std::cout << "Hello World";
}

int main() {
    int pid = ::getpid();
    std::string s = //What to write here.
    std::cout << "Printing";
    some_function(); // This function prints "Hello World" to the screen
    std::cout << s;  // "PrintingHello World"
    return 0;
}
How can I attach a pipe to the same process, i.e. without creating a child process?
Some might think of creating a child process and calling some_function in it, so its stdout can be read in the parent process. But no: some_function depends on the process which calls it, and hence we want to call it in this very process instead of creating a child process.
This isn't hard to do, but IMO it's quite a hack, and it won't work with a multithreaded program:
// make a temp file to store the function's stdout
char tmpl[] = "/tmp/stdout.XXXXXXX"; // mkstemp modifies its template, so it must be a writable array
int newStdOut = mkstemp( tmpl );
// save the original stdout
int tmpStdOut = dup( STDOUT_FILENO );
// clear stdout
fflush( stdout );
// now point the stdout file descriptor to the file
dup2( newStdOut, STDOUT_FILENO );
// call the function we want to collect the stdout from
some_function();
// make sure stdout is empty
fflush( stdout );
// restore original stdout
dup2( tmpStdOut, STDOUT_FILENO );
// the tmp file now contains whatever some_function() wrote to stdout
Error checking, proper headers, syncing C stdout with C++ cout, and reading from and cleaning up the temp file are left as exercises... ;-)
Note that you can't safely use a pipe here - the function can write enough to fill up the pipe, and you can't read from the pipe while the function is running because you're the one who called it.
How to read stdout of C++ program in itself?
There are very few reasons to do that and that is usually (but not always) a design bug.
Be aware of an important thing (at least in a single-threaded program): if your program is both reading from its "stdout" and writing (as usual) to it, it could get stuck in a deadlock - unable to read, so never reaching any output routine, or unable to write because the pipe is full.
So a program which both reads and writes the same thing (actually, the two sides of the same pipe(7)) should use some multiplexing call like poll(2). See also this.
Once you understand that, you'll have some event loop. And before that, you'll make a pipe(7) using pipe(2) (and dup2(2)).
However, pipe to self is a good thing in some signal(7) handling (see signal-safety(7)). That trick is even recommended in Qt Unix signal handling.
Read more about Unix system programming, e.g. ALP or some newer book. Read also intro(2) & syscalls(2).
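A compact sketch of that pipe-to-self trick (names illustrative; the handler does only the async-signal-safe write, and the event loop poll(2)s the read end):

#include <cerrno>
#include <csignal>
#include <cstdio>
#include <poll.h>
#include <unistd.h>

static int self_pipe[2];

extern "C" void on_sigint(int)
{
    char c = 1;
    write(self_pipe[1], &c, 1); // write(2) is async-signal-safe, see signal-safety(7)
}

int main()
{
    pipe(self_pipe);
    signal(SIGINT, on_sigint);

    pollfd pfd{self_pipe[0], POLLIN, 0};
    for (;;)
    {
        if (poll(&pfd, 1, -1) < 0)
        {
            if (errno == EINTR) continue; // poll was interrupted by the signal itself
            break;
        }
        if (pfd.revents & POLLIN)
        {
            char c;
            read(self_pipe[0], &c, 1);
            printf("got SIGINT via the self-pipe\n"); // safe here: we're outside the handler
        }
    }
}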
I have looked for pipe and it requires fd
Wrong. Read pipe(2) much more carefully; on success it fills an array of two file descriptors. Of course it could fail (see errno(3) & perror(3) & strerror(3)).
Maybe you just need popen(3). Or std::ostringstream. Or open_memstream(3).
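For the std::ostringstream route, a minimal sketch: swap std::cout's buffer before the call and restore it afterwards. Note this captures only output written through std::cout, not printf(3) or raw write(2):

#include <iostream>
#include <sstream>
#include <string>

void some_function() {
    std::cout << "Hello World";
}

int main() {
    std::ostringstream capture;
    std::streambuf* old = std::cout.rdbuf(capture.rdbuf()); // redirect cout into the buffer
    some_function();
    std::cout.rdbuf(old);                                   // restore the real cout
    std::string s = capture.str();                          // "Hello World"
    std::cout << "Printing" << s << "\n";                   // "PrintingHello World"
    return 0;
}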
Consider we have some_function and it prints its result to stdout instead of returning it. Changing its definition is out of our scope and there's no alternative to it
If some_function is your code, or is some free software, you could and probably should improve it to give a result somewhere....

Using pipe() and fork() to read from a file and output to the console/new file

I'm trying to learn how to use the pipe() and fork() system calls. I'm using pipe and fork to create parent and child processes: the child reads a character from a text file and sends it through the pipe to the parent, which then outputs the character to the console, with the desired result that the entire text is printed. Later I'll do some text processing on the file, with the child process reading and processing the text and then sending the updated text to the parent, but for now I just want to make sure I'm getting the basics of pipe() correct.
example file:
This is a test file; it is 1 of many.
Others will follow.
Relevant code:
// pipefds[2] is assumed to have been created earlier with pipe(pipefds)
pid = fork();
ifstream fin;
fin.open(inputFilename);
fin.get(inputChar);
if (pid == -1)
{
    perror("Trouble");
    exit(2);
}
else if (pid == 0) // child process: reads the text file and writes to the parent
{
    close(pipefds[0]);
    while (!fin.eof())
    {
        write(pipefds[1], &inputChar, sizeof(inputChar));
        fin.get(inputChar);
    }
    close(pipefds[1]);
    exit(0);
}
else // parent process: reads from the pipe and prints to the console
{
    close(pipefds[1]);
    read(pipefds[0], readbuffer, sizeof(readbuffer));
    cout << readbuffer << endl;
    close(pipefds[0]);
    exit(0);
}
fin.close();
However, when I compile and run, the output is of varying length. Sometimes it will print the whole file, other times just a few letters or half of a line, such as:
This i
I've tried going through the man pages and researching more, but I haven't been able to find any answers. What exactly is going on with my program that it sometimes reads everything from the file but other times doesn't? Any help is greatly appreciated!
It looks as though you're trying to read all the data from the pipe with one call to read(2). But, as with any I/O operation, this may return fewer bytes than you requested. You should always check the return values of the read(2) and write(2) system calls (and others) to make sure they acted as expected.
In this case, you should loop until you get some independent notification from the child process that they're done sending data. This can be signaled in this case by read(2) returning 0, meaning that the child closed their end of the pipe.
You are assuming that the parent can read everything written to the pipe by the child via one read() call. That might be a safe assumption for a pipe if the child were writing everything via a single write() call, as long as the overall data size did not exceed the size of the pipe's internal buffer. It is not at all safe when, as in this case, the child is sending data via many little writes.
How much data the parent actually gets will depend in part on how its one read() call is ordered relative to the child's writes. Inasmuch as the two are separate processes and you're employing no IPC other than the pipe itself, it's basically unpredictable how much data the parent will successfully read.
In the general case, one must assume that the reader will need to perform multiple read() calls to read all data that are sent. It must keep calling read() and processing the resulting data appropriately until read's return value indicates that an I/O error has occurred or that the end of the file has been reached. Note well that end of file does not mean just that no more bytes are available now, but that no more bytes will ever be available. That happens after all processes have closed all copies of the write end of the pipe.
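A minimal sketch of the parent-side loop both answers describe, reusing the question's pipefds (error handling abbreviated):

else // parent process: drain the pipe until EOF
{
    close(pipefds[1]); // close our copy of the write end, or EOF never arrives
    char buf[256];
    ssize_t n;
    while ((n = read(pipefds[0], buf, sizeof(buf))) > 0)
    {
        cout.write(buf, n); // write exactly n bytes; buf is not NUL-terminated
    }
    if (n == -1)
        perror("read");
    close(pipefds[0]);
    exit(0);
}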

Opening /dev/input/* character device always results in a segfault

I have a C++ program that is supposed to be something of a raw keyboard-and-mouse event handler, reading from character device files in Linux.
The problem is, whenever I attempt to read the stream with literally ANY I/O function (e.g. getc, fgetc, read, gets, scanf, etc.), it always produces a segfault. I even check to make sure that the FILE* is not NULL, in which case the program throws a regular error.
Here's exactly what my program does:
FILE* mouseFile;
FILE* kbdFile; // definitions for my streams

mouseFile = fopen("/dev/input/mice", "r");  // open mice stream read-only
kbdFile = fopen("/dev/input/event5", "r");  // in my case, the keyboard is event 5

/*
    A loop here that uses one of the I/O functions I talked about earlier, and
    then simply prints the result to standard output. This is where I assume the
    segfault happens, because I can open the stream just fine.
*/
I would use X or SDL, but I'm planning to eventually port this to NASM assembly or some other very low-level code, and I don't really want to bother with external libraries.
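For reference, a hedged sketch of how reads from /dev/input/event* are usually structured: these devices deliver fixed-size struct input_event records (and typically require root), so read whole records rather than single characters. The event number is the question's; everything else is illustrative:

#include <linux/input.h>
#include <cstdio>

int main()
{
    FILE* kbdFile = fopen("/dev/input/event5", "rb"); // usually needs root
    if (!kbdFile) { perror("fopen"); return 1; }

    input_event ev{};
    while (fread(&ev, sizeof(ev), 1, kbdFile) == 1)
    {
        if (ev.type == EV_KEY) // key press/release events
            printf("key code=%u value=%d\n", (unsigned)ev.code, (int)ev.value);
    }
    fclose(kbdFile);
    return 0;
}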

Will redirecting one process's stdout to a pipe change its running result?

Poco::Pipe outputPipe;
Poco::Pipe errorPipe;
Poco::Process::Env env;
Poco::Process::Args arg;
Poco::Process::launch(exeFile, arg, workDir, 0, &outputPipe, &errorPipe, env);
I use the above code to create two processes; one is right, the other is wrong. Then I change the code:
Poco::Process::launch(exeFile, arg, workDir, 0, 0, 0, env);
The only difference is that I don't redirect stdout and stderr to pipes. Then I create the two processes again, and now both are all right.
In my opinion, redirecting stdout and stderr shouldn't cause a process to run with different results. Is that right?
If I am wrong, what situation would make the difference?
Thanks.
There are at least two ways that the change can make a difference:
You don't show the code that reads the pipes. If one of the pipes fills before the reading code reads the data, the launched process will block writing to the pipe until the data is read from the pipe.
When the output (in particular) is a pipe, the output is likely to be fully buffered rather than line buffered or unbuffered. This can mean that output doesn't appear as swiftly as in the unpiped example. Your program might write a line, and then do some work, and then write another line, but neither of those is necessarily sent to the process reading the pipe (unless the application flushes the output, or sets line-buffered output mode).
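A minimal sketch of the first point, with a hypothetical child.exe: keep a reader draining the output pipe so the launched process never stalls on a full pipe buffer (Poco::PipeInputStream wraps the pipe's read end):

#include "Poco/Pipe.h"
#include "Poco/PipeStream.h"
#include "Poco/Process.h"
#include <iostream>
#include <string>

int main()
{
    Poco::Pipe outputPipe;
    Poco::Process::Args args;
    Poco::ProcessHandle ph =
        Poco::Process::launch("child.exe", args, nullptr, &outputPipe, nullptr);

    Poco::PipeInputStream istr(outputPipe); // reads until the child closes its stdout
    std::string line;
    while (std::getline(istr, line))
        std::cout << line << "\n";

    ph.wait();
    return 0;
}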

How to listen to stderr in C/C++ for sending to callback?

How do I passively listen to stderr and obtain its contents as a string to send to a callback? I have seen posts on reading stderr, but I want to listen to it rather than actively read it.
Background:
I have a cross-platform piece that uses 3rd party library (libcurl) which will output verbose info into stderr. This cross-platform piece is to be used by more than 1 non-cross-platform applications.
I would like to log this info, which I can do by providing a FILE* to libcurl. But instead of doing that, I want to see if I can capture (passively listen to) the stderr output as a string and send it back to the calling main application via a callback. This has two benefits: 1. the main app can keep a single log using whatever logging tool it wants; 2. it keeps this piece cross-platform.
Doing this in a single process is a little tricky, but you can probably do it.
1: Using freopen() you can redirect your stderr to a named file. You can simultaneously open that file for reading on another handle. You might also need to call setvbuf() on stderr to turn off output buffering so that you can read it right away from the second handle. Since it is being written to a file, you can read it at any time - when it is convenient. The unix function select() is what you need if you want to be notified when the file changes. (See also fileno().)
2: More tricky would be to set up stderr as the write end of a pipe. This should be doable using dup3(), though that isn't exactly cross-platform (to non-unixy OSes). It would also require a second thread reading from the pipe to prevent the writer from being blocked if it writes very much.
Like:
FILE* stream = freopen("stderr.out", "w", stderr); // stderr now writes to the file
setvbuf(stream, 0, _IONBF, 0); // no buffering, so output is readable right away

FILE* input = fopen("stderr.out", "r");

fprintf(stderr, "Output to stderr dude\n");
//fflush(stderr); // you can explicitly flush instead of setting no buffering

char buffer[1024];
while (fgets(buffer, sizeof(buffer), input))
{
    printf(">>>%s\n", buffer);
}
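And a rough sketch of option 2, using dup2() rather than dup3() and a detached reader thread; listen_stderr and the callback signature are illustrative, POSIX-only:

#include <unistd.h>
#include <cstdio>
#include <functional>
#include <string>
#include <thread>

void listen_stderr(std::function<void(const std::string&)> callback)
{
    int fds[2];
    if (pipe(fds) != 0) { perror("pipe"); return; }

    fflush(stderr);
    dup2(fds[1], STDERR_FILENO); // stderr now feeds the pipe's write end
    close(fds[1]);

    // A second thread drains the pipe so writers to stderr never block.
    std::thread([fd = fds[0], callback] {
        char buf[1024];
        ssize_t n;
        while ((n = read(fd, buf, sizeof(buf))) > 0)
            callback(std::string(buf, n));
        close(fd);
    }).detach();
}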