FIFO blocks program until another process reads - C++

I am using a FIFO file, created with mkfifo myFIFO in a Linux terminal, in the C++ code below:
#include <iostream>
#include <stdio.h>
using namespace std;

int main(int argc, char** argv) {
    FILE* fp = fopen("/tmp/myFIFO", "w");
    fprintf(fp, "Hello, world!\n");
    fclose(fp);
    cout << "!!!Hello World!!!" << endl; // prints !!!Hello World!!!
    return 0;
}
The program stays blocked until another process reads myFIFO. How can I avoid this? (I want this piece of code to keep processing and writing to the FIFO without caring whether the other process reads it, until the FIFO is full; at that point, if possible, I would like to discard the oldest messages.)

You need to open your FIFO in non-blocking mode. To do so, use ::open instead of fopen, and specify the O_NONBLOCK flag.
You will also need to use ::write instead of fprintf.
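A minimal sketch of that approach, assuming the FIFO already exists at /tmp/myFIFO as in the question. Note that opening a FIFO write-only with O_NONBLOCK fails with ENXIO while no reader has it open, so that case has to be handled:
#include <cerrno>
#include <cstdio>
#include <cstring>
#include <fcntl.h>
#include <unistd.h>

int main() {
    // O_NONBLOCK: do not wait for a reader; fails with ENXIO if none is present.
    int fd = ::open("/tmp/myFIFO", O_WRONLY | O_NONBLOCK);
    if (fd < 0) {
        std::fprintf(stderr, "open failed: %s\n", std::strerror(errno));
        return 1;
    }

    const char msg[] = "Hello, world!\n";
    // ::write may also fail with EAGAIN when the FIFO buffer is full.
    if (::write(fd, msg, sizeof(msg) - 1) < 0) {
        std::fprintf(stderr, "write failed: %s\n", std::strerror(errno));
    }

    ::close(fd);
    return 0;
}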

The best way I found was to use the O_NONBLOCK option and write the contents of an internal queue of the software into the file/pipe. That way the buffer is managed by the writer program, and the pipe file serves only as a message channel. A sketch of that idea follows.
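A rough sketch of that idea, assuming a hypothetical bounded std::deque that drops the oldest entry when full; the FIFO write is attempted non-blockingly and simply skipped when no reader is present or the pipe buffer is full:
#include <cerrno>
#include <deque>
#include <string>
#include <fcntl.h>
#include <unistd.h>

// Hypothetical bounded message queue: the newest messages win.
class MessageQueue {
public:
    explicit MessageQueue(std::size_t limit) : limit_(limit) {}

    void push(const std::string& msg) {
        if (queue_.size() == limit_)
            queue_.pop_front();          // discard the oldest message
        queue_.push_back(msg);
    }

    // Try to drain queued messages into the FIFO without blocking.
    void flushTo(const char* fifoPath) {
        int fd = ::open(fifoPath, O_WRONLY | O_NONBLOCK);
        if (fd < 0)
            return;                      // no reader yet (ENXIO): keep buffering
        while (!queue_.empty()) {
            const std::string& msg = queue_.front();
            if (::write(fd, msg.data(), msg.size()) < 0)
                break;                   // e.g. EAGAIN: pipe full, try again later
            queue_.pop_front();
        }
        ::close(fd);
    }

private:
    std::size_t limit_;
    std::deque<std::string> queue_;
};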


Why does the output come after sleep when there is no newline?

I'm using gcc 7.3 and g++ 7.3; both show the same problem. For example,
#include <stdio.h>
#include <unistd.h>

int main() {
    printf("a");
    sleep(1);
    return 0;
}
'a' prints only after waiting 1 second, but when I use printf("a\n"); it works correctly. It's the same in C++. For example,
#include <iostream>
#include <unistd.h>

int main() {
    std::cout << "a";
    sleep(1);
    return 0;
}
'a' prints only after waiting 1 second here, too. However, when I use std::cout << "a" << std::endl; it works correctly. What's the problem and how do I fix it?
sleep() puts your process to sleep, so it is rescheduled manually, so to speak. printf() puts the data into the stdout stream, not directly onto the screen.
printf("a"); /* data is there in stdout , not flushed */
sleep(1); /* as soon as sleep(1) statement occurs your process(a.out) jumped to waiting state, so data not gets printed on screen */
So you should either call fflush(stdout) or print a \n to flush the stdout stream.
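For example, a minimal variant of the program above where the explicit flush makes 'a' appear before the sleep:
#include <stdio.h>
#include <unistd.h>

int main(void) {
    printf("a");      /* goes into stdout's buffer */
    fflush(stdout);   /* force the buffered "a" out to the terminal */
    sleep(1);         /* "a" is already visible while we sleep */
    return 0;
}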
You are seeing this behaviour because stdout is usually line-buffered when connected to a terminal and fully buffered when redirected to a file. Strings are stored in a buffer and flushed when a newline is printed, when the buffer fills up, or when the program terminates.
You can also override the buffering mode by using setvbuf, as below:
setvbuf(stdout, NULL, _IONBF, 1024);
printf("a");
It will print a without buffering; have a look at https://www.tutorialspoint.com/c_standard_library/c_function_setvbuf.htm for details on using setvbuf.
Also have a look at the different types of buffering used with streams.
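For completeness, a self-contained sketch of the unbuffered variant (_IONBF disables buffering entirely, so the size argument is ignored):
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* switch stdout to unbuffered mode */
    setvbuf(stdout, NULL, _IONBF, 0);
    printf("a");   /* appears immediately, without a newline or fflush */
    sleep(1);
    return 0;
}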
Hope this helps you.

Blocking read from std::ifstream

I am reading from a pipe (Linux) or a pipe-like device object (Windows) using std::ifstream::read. However, when there is no more data, read reads 0 bytes and sets EOF. Is there a way to make a blocking read from an ifstream, such that it only returns when there is some more data?
I'd rather not busy wait for the EOF flag to clear.
If it is not possible with the C++ standard library, what is the closest other option? Can I do it in plain C, or do I have to resort to operating system specific APIs?
Unfortunately, the standard library is very poor on any non-algorithmic functionality like IO, so you always have to rely on third-party solutions. Fortunately, there is Boost, and if you do not mind, I suggest using it to reduce OS-specific code.
#include <boost/iostreams/device/file_descriptor.hpp>
#include <boost/iostreams/stream.hpp>

namespace bs = boost::iostreams;

int fd; // Create, for example, a POSIX file descriptor and specify the necessary flags for it.
bs::file_descriptor_source fds(fd, bs::never_close_handle); // never_close_handle: Boost will not close fd for us
bs::stream<bs::file_descriptor_source> stream(fds);
// Work with the stream as if it were a std stream
In this small example I use Boost IOStreams, specifically file_descriptor_source, which works as the underlying stream device and hides the Windows- or POSIX-specific pipe inside. You open the pipe yourself, so you can configure it however you want.
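A sketch of how the pieces fit together for a FIFO at a hypothetical path /tmp/myFIFO on Linux (note that the stream still reaches EOF once every writer has closed its end):
#include <iostream>
#include <string>
#include <fcntl.h>
#include <unistd.h>
#include <boost/iostreams/device/file_descriptor.hpp>
#include <boost/iostreams/stream.hpp>

namespace bs = boost::iostreams;

int main() {
    // Opening a FIFO read-only blocks here until some writer opens it too.
    int fd = ::open("/tmp/myFIFO", O_RDONLY);
    if (fd < 0)
        return 1;

    bs::file_descriptor_source fds(fd, bs::never_close_handle);
    bs::stream<bs::file_descriptor_source> in(fds);

    std::string line;
    while (std::getline(in, line))
        std::cout << line << "\n";

    ::close(fd);
    return 0;
}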
Well, there seems to be no way to do a blocking read; clearing the error bit does not help. Only re-opening the FIFO works, as in this example:
#include <cstdio>
#include <fstream>
#include <iostream>
#include <string>

int main(int argc, char **argv)
{
    int rc = 0;
    enum FATAL { ERR_ARGV, ERR_OPEN_FILE };
    try
    {
        if (argv[1] == NULL) throw ERR_ARGV;
        std::ifstream fifo;
        while (1)
        {
            fifo.open(argv[1], std::ifstream::in);
            if (!fifo.is_open()) throw ERR_OPEN_FILE;
            std::string line;
            while (std::getline(fifo, line))
            {
                std::cout << line << "\n"; fflush(stdout);
            }
            fifo.close();
        }
        // should never get here
    }
    catch (FATAL e)
    {
        rc = e;
        switch (e)
        {
            case ERR_ARGV:
                std::cerr << "ERROR: argument 1 should be a fifo file name\n";
                break;
            case ERR_OPEN_FILE:
                std::cerr << "ERROR: unable to open file " << argv[1] << "\n";
                break;
        }
    }
    return rc;
}
I have tested this code and it does an endless read from a FIFO.

How to read stdin to the end in Qt?

I have a Qt app which can be invoked with:
cat bla.bin | myapp
What's the easiest way to read the entire input (stdin) into a QByteArray on Windows, Mac and Linux?
I tried several things, but none of them seems to work (on Windows):
#include <QCoreApplication>
#include <QByteArray>
#include <QDebug>
#include <QFile>
#include <iostream>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);
    QByteArray content;

    //---Test 1: hangs forever, reads 0
    while (!std::cin.eof()) {
        char arr[1024];
        int s = std::cin.readsome(arr, sizeof(arr));
        content.append(arr, s);
    }

    //---Test 2: runs into the timeout
    QFile in;
    if (!in.open(stdin, QFile::ReadOnly | QFile::Unbuffered)) {
        qDebug() << in.errorString();
    }
    while (in.waitForReadyRead(1000)) {
        content += in.readAll();
    }
    in.close();

    return app.exec();
}
Am I having a Event-Loop Problem or shouldn't it work without?
The primary problem with actually reading from stdin stems from using readsome. readsome is generally not used to read from files (including stdin); it is generally used for binary data on asynchronous sources. Technically speaking, eof doesn't get set by readsome. read is different in that regard, as it sets eof accordingly. There is an SO question/answer here that may be of interest. If you are supporting Linux and Windows and reading stdin, you have to be aware that on Windows stdin isn't opened in binary mode (and neither is stdout). On Windows you have to use _setmode on stdin. One way to do this is with #ifdefs using Q_OS_WIN32. Using QFile doesn't resolve this issue.
In the code you are trying to create, it doesn't appear you are actually interested in having an event loop. You can still use Qt objects like QByteArray without an event loop. In your code you read data in from stdin (cin) and then executed return app.exec();, which put your console application into a loop waiting for events. You didn't add any events to the Qt event queue prior to app.exec();, so effectively the only thing you can do is end your application with Ctrl-C. If no event loop is needed, then code like this should suffice:
#include <QByteArray>
#include <QCoreApplication>
#include <iostream>

#ifdef Q_OS_WIN32
#include <fcntl.h>
#include <io.h>
#endif

int main()
{
    QByteArray content;

#ifdef Q_OS_WIN32
    _setmode(_fileno(stdin), _O_BINARY);
#endif

    while (!std::cin.eof()) {
        char arr[1024];
        std::cin.read(arr, sizeof(arr));
        int s = std::cin.gcount();
        content.append(arr, s);
    }
}
Notice how we used a QByteArray but didn't create a QCoreApplication app(argc, argv); or make a call to app.exec();

How do I run a program from another program and pass data to it via stdin in C or C++?

Say I have an .exe, let's say sum.exe. Now say the code for sum.exe is:
void main ()
{
    int a, b;
    scanf ("%d%d", &a, &b);
    printf ("%d", a + b);
}
I wanted to know how I could run this program from another C/C++ program and pass input to it via stdin, like online compiler sites such as ideone do: I type the code in, provide the stdin data in a textbox, and that data is accepted by the program using scanf or cin. I also wanted to know if there is any way to read the output of this program from the original program that started it.
The easiest way I know of for doing this is by using the popen() function. It works on Windows and UNIX. On the other hand, popen() only allows unidirectional communication.
For example, to pass information to sum.exe (although you won't be able to read back the result), you can do this:
#include <stdio.h>
#include <stdlib.h>

int main()
{
    FILE *f;

    f = popen ("sum.exe", "w");
    if (!f)
    {
        perror ("popen");
        exit(1);
    }

    printf ("Sending 3 and 4 to sum.exe...\n");
    fprintf (f, "%d\n%d\n", 3, 4);

    pclose (f);
    return 0;
}
In C, on platforms whose names end with X (i.e. not Windows), the key components are:
pipe - Returns a pair of file descriptors, so that what's written to one can be read from the other.
fork - Forks the process to two, both keep running the same code.
dup2 - Renumbers file descriptors. With this, you can take one end of a pipe and turn it into stdin or stdout.
exec - Stop running the current program, start running another, in the same process.
Combine them all, and you can get what you asked for.
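A bare-bones sketch of just the stdin direction, assuming the child is the sum example built as ./sum (error checking omitted for brevity):
#include <sys/wait.h>
#include <unistd.h>

int main() {
    int pfd[2];
    pipe(pfd);                        // pfd[0] = read end, pfd[1] = write end

    if (fork() == 0) {                // child
        dup2(pfd[0], STDIN_FILENO);   // child's stdin now reads from the pipe
        close(pfd[0]);
        close(pfd[1]);
        execlp("./sum", "sum", (char*)0);
        _exit(1);                     // only reached if exec failed
    }

    // parent
    close(pfd[0]);
    const char msg[] = "3 4\n";
    write(pfd[1], msg, sizeof(msg) - 1);  // goes to the child's scanf
    close(pfd[1]);                        // EOF for the child
    wait(0);
    return 0;
}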
This is my solution and it worked:
sum.cpp
#include "stdio.h"
int main (){
int a,b;
scanf ("%d%d", &a, &b);
printf ("%d", a+b);
return 0;
}
test.cpp
#include <stdio.h>
#include <stdlib.h>

int main() {
    system("./sum.exe < data.txt");
    return 0;
}
data.txt
3 4
Try this solution :)
How to do so is platform dependent.
Under Windows, use CreatePipe and CreateProcess. You can find an example on MSDN:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms682499(v=vs.85).aspx
Under Linux/Unix, you can use dup() / dup2().
One simple way to do this is to use a terminal (like Command Prompt on Windows) and use | to redirect input/output.
Example:
program1 | program2
This will redirect program1's output to program2's input.
To retrieve/input data, you can use temporary files. If you don't want to use temporary files, you will have to use a pipe.
For Windows (use Command Prompt):
program1 <input >output
For Linux, you can use the tee utility; you can find detailed instructions by typing man tee in a Linux terminal.
It sounds like you're coming from a Windows environment, so this might not be the answer you are looking for, but from the command line you can use the pipe redirection operator '|' to redirect the stdout of one program to the stdin of another. http://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/redirection.mspx?mfr=true
You're probably better off working in a bash shell, which you can get on Windows with cygwin http://cygwin.com/
Also, your example looks like a mix of C++ and C, and the declaration of main isn't exactly an accepted standard for either.
Here is how to do this (you have to check for errors, i.e. pipe() == -1, dup() != 0, etc.; I'm not doing that in the following snippet).
This code runs your program "sum", writes "2 3" to it, and then reads sum's output. Finally, it writes that output to stdout.
#include <cstdio>
#include <cstdlib>
#include <iostream>
#include <sys/wait.h>
#include <unistd.h>

int main() {
    int parent_to_child[2], child_to_parent[2];
    pipe(parent_to_child);
    pipe(child_to_parent);

    char name[] = "sum";
    char *args[] = {name, NULL};

    switch (fork()) {
    case 0:
        // replace stdin with reading from parent
        close(fileno(stdin));
        dup(parent_to_child[0]);
        close(parent_to_child[0]);
        // replace stdout with writing to parent
        close(fileno(stdout));
        dup(child_to_parent[1]);
        close(child_to_parent[1]);

        close(parent_to_child[1]); // don't write on this pipe
        close(child_to_parent[0]); // don't read from this pipe

        execvp("./sum", args);
        break;
    default: {
        char msg[] = "2 3\n";
        close(parent_to_child[0]); // don't read from this pipe
        close(child_to_parent[1]); // don't write on this pipe

        write(parent_to_child[1], msg, sizeof(msg));
        close(parent_to_child[1]);

        char res[64];
        wait(0);
        ssize_t n = read(child_to_parent[0], res, sizeof(res) - 1);
        res[n > 0 ? n : 0] = '\0'; // null-terminate before printing
        printf("%s", res);
        exit(0);
    }
    }
}
I'm doing what #ugoren suggested in their answer:
Create two pipes for communication between processes
Fork
Replace stdin and stdout with the pipes' ends using dup
Send the data through the pipe
Based on a few answers posted above and various tutorials/manuals, I just did this on Linux using pipe() and shell redirection. The strategy is to first create a pipe, call the other program with its stdout redirected to one end of the pipe, and then read the other end of the pipe. As long as the callee writes to stdout, there is no need to modify it.
In my application, I needed to read a math expression input from the user, call a standalone calculator and retrieve its answer. Here's my simplified solution to demonstrate the redirection:
#include <cstdlib>
#include <iostream>
#include <sstream>
#include <string>
#include <unistd.h>

// this function waits on the pipe and returns whatever arrives
std::string pipeRead(int fd) {
    char data[100];
    ssize_t size = 0;
    while (size == 0) {
        size = read(fd, data, sizeof(data));
    }
    // construct the string from the bytes actually read (the buffer is not null-terminated)
    return std::string(data, size > 0 ? size : 0);
}

int main() {
    // create pipe
    int calculatorPipe[2];
    if (pipe(calculatorPipe) < 0) {
        exit(1);
    }

    std::string answer;
    std::stringstream call;

    // redirect the calculator's output from stdout to one end of the pipe and execute
    // e.g. ./myCalculator 1+1 >&8
    call << "./myCalculator 1+1 >&" << calculatorPipe[1];
    system(call.str().c_str());

    // now read the other end of the pipe
    answer = pipeRead(calculatorPipe[0]);
    std::cout << "pipe data " << answer << "\n";
    return 0;
}
Obviously there are other solutions out there, but this is what I can think of without modifying the callee program. Things might be different on Windows, though.
Some useful links:
https://www.geeksforgeeks.org/pipe-system-call/
https://www.gnu.org/software/bash/manual/html_node/Redirections.html

Send data to another C++ program

Is it possible to send data to another C++ program without being able to modify the other program (since a few people seem to be missing this important restriction)? If so, how would you do it? My current method involves creating a temporary file and starting the other program with the filename as a parameter. The only problem is that this leaves a bunch of temporary files lying around to clean up later, which is not wanted.
Edit: Also, boost is not an option.
Clearly, building a pipe to stdin is the way to go, if the 2nd program supports it. As Fred mentioned in a comment, many programs read stdin if either there is no named file provided, or if - is used as the filename.
If it must take a filename, and you are using Linux, then try this: create a pipe, and pass /dev/fd/<fd-number> or /proc/self/fd/<fd-number> on the command line.
By way of example, here is hello-world 2.0:
#include <string>
#include <sstream>
#include <cstdlib>
#include <cstdio>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main () {
    int pfd[2];
    int rc;

    if( pipe(pfd) < 0 ) {
        perror("pipe");
        return 1;
    }

    switch(fork()) {
    case -1: // Error
        perror("fork");
        return 1;

    case 0: { // Child
        // Close the writing end of the pipe
        close(pfd[1]);
        // Create a filename that refers to the reading end of the pipe
        std::ostringstream path;
        path << "/proc/self/fd/" << pfd[0];
        // Invoke the subject program. "cat" will do nicely.
        execlp("/bin/cat", "cat", path.str().c_str(), (char*)0);
        // If we got here, something went wrong: execlp failed
        perror("exec");
        return 1;
    }

    default: // Parent
        // Close the reading end.
        close(pfd[0]);
        // Write to the pipe. Since "cat" is on the other end, expect to
        // see "Hello, world" on your screen.
        if (write(pfd[1], "Hello, world\n", 13) != 13)
            perror("write");
        // Signal "cat" that we are done writing
        close(pfd[1]);
        // Wait for "cat" to finish its business
        if( wait(0) < 0)
            perror("wait");
        // Everything's okay
        return 0;
    }
}
You could use sockets. It sounds like both applications are on the same host, so you just identify the peers as localhost:portA and localhost:portB. And if you do it this way, you can eventually graduate to doing network IO. No temp files, no mystery parse errors or file deletions. TCP guarantees packet delivery and guarantees the packets will be ordered correctly.
So yes, I would consider creating a synchronous socket server (use an asynchronous one if you anticipate having tons of peers). One benefit over pipe-oriented IPC is that TCP sockets are completely universal, whereas piping varies dramatically based upon what system you are on (consider Windows named pipes vs. implicit and explicit POSIX pipes: very different).
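If you go the socket route, the sending side can be as small as this sketch (plain POSIX sockets; the port number 5555 and the assumption that the other program is listening on localhost are hypothetical, and on Windows the same calls are available via Winsock after WSAStartup):
#include <cstring>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    // Connect to the other program, assumed to be listening on localhost:5555.
    int sock = ::socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0)
        return 1;

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5555);
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    if (::connect(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
        ::close(sock);
        return 1;
    }

    const char msg[] = "Hello, world\n";
    ::send(sock, msg, sizeof(msg) - 1, 0);   // the peer reads this from its socket
    ::close(sock);
    return 0;
}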