I am trying to use an ifstream to open a named pipe that will eventually have data written to it.
std::cout << "Opening " << name << std::endl;
std::ifstream manual_shutdown_file(name.c_str());
std::cout << "Opened " << name << std::endl;
When I run the program, it blocks in the ifstream constructor. I see "Opening name" printed to the console, but the "Opened" statement never appears.
I know that I am connecting to the pipe, because if I execute
$ echo foo > name
from a shell, then the constructor returns and the "Opened" statement is printed. Is there really no way to open a pipe before it has data in it, even if I do not want to read from it immediately?
Calling open on the read end of a pipe will block until the write end is opened.
You can open a file descriptor for the pipe with the O_NONBLOCK flag, but there is no standard way to then use that file descriptor with a std::ifstream.
Guessing at your requirement, I'd say a small class that opens the fd and presents a polling signal interface would suit, something like:
namespace blah
{
    class signal_t
    {
    private:
        int fd;

        // note: define sensible copy/move semantics
        signal_t(const signal_t&) = delete;
        signal_t& operator=(const signal_t&) = delete;

    public:
        signal_t(const char* named_pipe); // open fd, set O_NONBLOCK
        void notify() const;              // write 1 byte to fd as a signal
        bool poll() const;                // attempt to read from fd; return true if signalled
        ~signal_t();                      // close fd
    };
}
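In case it helps, here is one way the members might be filled in — a minimal sketch, assuming the declaration above and plain POSIX calls. Note that opening a FIFO with O_RDWR is technically unspecified by POSIX, but on Linux it conveniently keeps a write end alive so open() never blocks waiting for a writer:

#include <fcntl.h>
#include <unistd.h>
#include <stdexcept>

namespace blah
{
    signal_t::signal_t(const char* named_pipe)
    {
        // O_RDWR: this process holds a write end, so open() returns
        // immediately (Linux behaviour). O_NONBLOCK: poll() never blocks.
        fd = ::open(named_pipe, O_RDWR | O_NONBLOCK);
        if (fd < 0)
            throw std::runtime_error("cannot open named pipe");
    }

    void signal_t::notify() const
    {
        char c = 1;
        ::write(fd, &c, 1); // a single byte is enough to signal
    }

    bool signal_t::poll() const
    {
        char c;
        // read() returns 1 when a signal byte is pending, and -1 with
        // errno == EAGAIN when the pipe is currently empty.
        return ::read(fd, &c, 1) == 1;
    }

    signal_t::~signal_t()
    {
        ::close(fd);
    }
}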
Since opening an input pipe via std::ifstream blocks until someone opens the pipe for writing, you could always just let the ifstream block. To unblock it from another thread, create your own ofstream on the same pipe and immediately close it. This unblocks the ifstream, and its next read will hit end-of-file. This is much easier and less error-prone than messing with platform-specific file-handle controls.
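A minimal sketch of that idea, assuming a FIFO named my_pipe has already been created (e.g. with mkfifo); the pipe name and thread structure here are only illustrative:

#include <fstream>
#include <iostream>
#include <string>
#include <thread>

int main()
{
    std::thread reader([] {
        // Blocks in the constructor until someone opens the pipe for writing.
        std::ifstream in("my_pipe");
        std::string line;
        std::getline(in, line); // writer already closed: immediate EOF
        std::cout << "reader unblocked, eof = " << std::boolalpha
                  << in.eof() << "\n";
    });

    // Open and immediately close our own write end. The blocked
    // constructor returns, and the reader's next read hits EOF.
    {
        std::ofstream unblock("my_pipe");
    } // write end closed here

    reader.join();
}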
You actually can open a std::ifstream on a named pipe without blocking for a writer, but you must set the flags as though you were also going to write to the stream.
Try std::ifstream pipe_stream(filename, std::ifstream::in | std::ifstream::out), or stream.open(filename, std::ifstream::in | std::ifstream::out).
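For example, a sketch assuming a FIFO created beforehand with mkfifo name; on Linux the in|out combination opens the FIFO O_RDWR, which returns immediately:

#include <fstream>
#include <iostream>
#include <string>

int main()
{
    // Does not block waiting for an external writer, because this
    // process itself now holds a write end of the pipe.
    std::ifstream pipe_stream("name", std::ifstream::in | std::ifstream::out);

    std::string line;
    while (std::getline(pipe_stream, line)) // blocks until data arrives
        std::cout << line << std::endl;
}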
Hello, and sorry if the answer is obvious to those out there; I am still fairly new to programming and am asking for some guidance.
This function should write just one of the three string parameters it takes to the txt file I have already generated. When I run the program, the function seems to work fine: the cout statement shows the info is in the string and gets passed successfully. The issue is that after running the program, I check the txt file and find it is still blank.
I am using C++17 on Visual Studio Professional 2015.
#include <string>
#include <fstream>
#include <iostream>
#include <io.h> // for _access on MSVC

void AddNewMagicItem(const std::string& ItemKey,
                     const std::string& ItemDescription,
                     const std::string& filename)
{
    const char* ItemKeyName = ItemKey.c_str();
    const char* ItemDescriptionBody = ItemDescription.c_str();
    const char* FileToAddItemTo = filename.c_str();

    std::ofstream AddingItem(FileToAddItemTo);
    std::ifstream FileCheck(FileToAddItemTo);

    AddingItem.open(FileToAddItemTo, std::ios::out | std::ios::app);

    if (_access(FileToAddItemTo, 0) == 0)
    {
        if (FileCheck.is_open())
        {
            AddingItem << ItemKey;
            std::cout << ItemKey << std::endl;
        }
    }

    AddingItem.close(); // not sure these are necessary
    FileCheck.close();  // not sure these are necessary
}
This should print out a message onto a .txt file when you pass a string into the ItemKey parameter.
Thank you very much for your help and again please forgive me as I am also new to stackoverflow and might have made some mistakes in formatting this question or not being clear enough.
ADD ON: Thank you everyone who has answered this question and for all your help. I appreciate the help and would like to personally thank you all for your help, comments, and input on this topic. May your code compile every time and may your code reviews always be commented.
As mentioned by previous commenters/answerers, your code can be simplified by letting the destructor of the ofstream object close the file for you, and by refraining from using the c_str() conversion function.
This code seems to do what you wanted, on GCC v8 at least:
#include <string>
#include <fstream>
#include <iostream>

void AddNewMagicItem(const std::string& ItemKey,
                     const std::string& ItemDescription,
                     const std::string& fileName)
{
    std::ofstream AddingItem{fileName, std::ios::app};
    if (AddingItem) { // if file successfully opened
        AddingItem << ItemKey;
        std::cout << ItemKey << std::endl;
    }
    else {
        std::cerr << "Could not open file " << fileName << std::endl;
    }
    // implicit close of AddingItem file handle here
}

int main(int argc, char* argv[])
{
    std::string outputFileName{"foobar.txt"};
    std::string desc{"Description"};

    // use implicit conversion of "key*" C strings to std::string objects:
    AddNewMagicItem("key1", desc, outputFileName);
    AddNewMagicItem("key2", desc, outputFileName);
    AddNewMagicItem("key3", desc, outputFileName);
    return 0;
}
Main Problem
std::ofstream AddingItem(FileToAddItemTo);
opened the file. Opening it again with
AddingItem.open(FileToAddItemTo, std::ios::out | std::ios::app);
caused the stream to fail.
Solution
Move the open modes into the constructor (std::ofstream AddingItem(FileToAddItemTo, std::ios::app);) and remove the manual open.
Note that only the app open mode is needed. ofstream implies the out mode is already set.
Note: if the user does not have access to the file, the file simply cannot be opened, so there is no need to test for access separately. I find that testing whether the file opened, followed by a call to perror or a similar target-specific call, is a useful way to provide details on the cause of the failure.
Note that the stream can be in several different states, and is_open sits somewhat apart from them. You want to check all of them to make sure an IO transaction succeeded. In this case the file is open, so if is_open is all you check, you miss the failbit. A common related bug when reading is testing only for EOF and winding up in a loop of failed reads that will never reach the end of the file (or reading past the end of the file by checking too soon).
AddingItem << ItemKey;
becomes
if (!(AddingItem << ItemKey))
{
    // handle failure
}
Sometimes you will need finer granularity to determine exactly what happened in order to handle the error properly. Check the state bits, and possibly perror and target-specific diagnostics, as above.
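To illustrate the read-loop pitfall mentioned above, a small sketch (numbers.txt is a hypothetical input file):

#include <fstream>
#include <iostream>

int main()
{
    std::ifstream in("numbers.txt");
    int x;

    // Correct: the extraction itself is the loop condition, so failbit,
    // badbit and eofbit are all consulted before x is used.
    while (in >> x)
        std::cout << x << '\n';

    // The bug described above looks like this: if the stream ever fails
    // for a reason other than EOF, eof() stays false and the loop spins
    // forever on failed reads.
    //
    // while (!in.eof()) { in >> x; std::cout << x << '\n'; }
}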
Side Problem
Opening a file for simultaneous read and write with multiple fstreams is not recommended. The different streams provide different buffered views of the same file, resulting in instability.
Reading and writing the same file through a single fstream can be done, but it is exceptionally difficult to get right. The standard rule of thumb is: read the file into memory and close the file, edit the memory, then open the file, write the memory, and close the file. Keep the in-memory copy of the file if possible so that you do not have to reread it.
If you need to be certain a file was written correctly, write it, then read it back, parse it, and verify that the information is correct. While verifying, do not allow the file to be written again, and don't try to multithread this.
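The read-edit-write rule of thumb above might look like this in outline (file name and helper names are illustrative):

#include <fstream>
#include <sstream>
#include <string>

// Read the whole file into memory; the stream closes at end of scope.
std::string slurp(const std::string& path)
{
    std::ifstream in(path, std::ios::binary);
    std::ostringstream buf;
    buf << in.rdbuf();
    return buf.str();
}

// Write the edited contents back, truncating the old file.
void spill(const std::string& path, const std::string& contents)
{
    std::ofstream out(path, std::ios::binary | std::ios::trunc);
    out << contents;
}

int main()
{
    std::string text = slurp("data.txt"); // file is closed again here
    text += "a new line\n";               // edit the in-memory copy
    spill("data.txt", text);              // reopen, write, close
}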
Details
Here's a little example to show what went wrong and where.
#include <iostream>
#include <fstream>

int main()
{
    std::ofstream AddingItem("test");
    if (AddingItem.is_open()) // test file is open
    {
        std::cout << "open";
    }
    if (AddingItem) // test stream is writable
    {
        std::cout << " and writable\n";
    }
    else
    {
        std::cout << " and NOT writable\n";
    }

    AddingItem.open("test", std::ios::app);
    if (AddingItem.is_open())
    {
        std::cout << "open";
    }
    if (AddingItem)
    {
        std::cout << " and writable\n";
    }
    else
    {
        std::cout << " and NOT writable\n";
    }
}
Assuming the working directory is valid and the user has permissions to write to test, we will see that the program output is
open and writable
open and NOT writable
This shows that
std::ofstream AddingItem("test");
opened the file and that
AddingItem.open("test", std::ios::app);
left the file open, but put the stream in a non-writable error state, forcing you to deal with the potential logic error of trying to have two files open in the same stream at the same time. Basically it's saying, "I'm sorry, Dave. I'm afraid I can't do that." without Undefined Behaviour or the full HAL 9000 bloodbath.
Unfortunately, to get this message you have to look at the correct error bits. In this case I looked at all of them with if (AddingItem).
As a complement of the already given question comments:
If you want to write data into a file, I do not understand why you used a std::ifstream; only a std::ofstream is needed.
You can write data into a file this way:
const std::string file_path("../tmp_test/file_test.txt"); // path to the file
std::string content_to_write("Something\n");              // content to be written in the file

std::ofstream file_s(file_path, std::ios::app); // construct and open the ostream in appending mode
if (file_s) // if the stream is successfully open
{
    file_s << content_to_write; // write data
    file_s.close();             // close the file (or let the file_s destructor do it for you at the end of the block)
}
else
    std::cout << "Fail to open: " << file_path << std::endl; // write an error message
As you said being quite new to programming, I have explicitly commented each line to make it more understandable.
I hope it helps.
EDIT:
For more explanation: you tried to open the file three times (twice in writing mode and once in reading mode). This is the cause of your problems; you only need to open the file once, in writing mode.
Moreover, checking that the input stream is open does not tell you whether the output stream is open too. Keep in mind that you open a file stream; if you want to check that it is properly open, you have to check it on the related object, not on another one.
I am reading from a pipe (Linux) or a pipe-like device object (Windows) using std::ifstream::read. However, when there is no more data, read reads 0 bytes and sets EOF. Is there a way to make a blocking read from an ifstream, such that it only returns when there is some more data?
I'd rather not busy wait for the EOF flag to clear.
If it is not possible with the C++ standard library, what is the closest other option? Can I do it in plain C, or do I have to resort to operating system specific APIs?
Unfortunately, the standard library is rather poor on non-algorithmic functionality such as IO, so you often have to rely on third-party solutions. Fortunately there is Boost, and if you do not mind using it, I suggest doing so to reduce the OS-specific code.
#include <boost/iostreams/device/file_descriptor.hpp>
#include <boost/iostreams/stream.hpp>

namespace bs = boost::iostreams;

int fd; // create, for example, a POSIX file descriptor and set whatever flags you need on it
bs::file_descriptor_source fds(fd, bs::never_close_handle);
bs::stream<bs::file_descriptor_source> stream(fds);
// work with the stream as if it were a std stream
In this small example I use Boost.Iostreams, specifically file_descriptor_source, which works as the underlying stream device and hides the Windows- or POSIX-specific pipe inside. You open the pipe yourself, so you can configure it however you want.
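Putting it together into something runnable — a sketch assuming a POSIX FIFO at a made-up path /tmp/my_fifo; close_handle tells the device to close the descriptor when it is destroyed:

#include <boost/iostreams/device/file_descriptor.hpp>
#include <boost/iostreams/stream.hpp>
#include <fcntl.h>
#include <iostream>
#include <string>

namespace bs = boost::iostreams;

int main()
{
    // open(2) on a FIFO blocks here until a writer shows up.
    int fd = open("/tmp/my_fifo", O_RDONLY);
    if (fd < 0)
        return 1;

    bs::file_descriptor_source fds(fd, bs::close_handle);
    bs::stream<bs::file_descriptor_source> in(fds);

    std::string line;
    while (std::getline(in, line)) // EOF once every writer has closed
        std::cout << line << '\n';
}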
Well, there seems to be no way to do a blocking read; clearing the error bit will not help. Only a re-open of the fifo works, as in this example:
#include <iostream>
#include <fstream>
#include <string>

int main(int argc, char **argv)
{
    int rc = 0;
    enum FATAL { ERR_ARGV, ERR_OPEN_FILE };
    try
    {
        if (argv[1] == NULL) throw ERR_ARGV;
        std::ifstream fifo;
        while (1)
        {
            fifo.open(argv[1], std::ifstream::in);
            if (!fifo.is_open()) throw ERR_OPEN_FILE;
            std::string line;
            while (std::getline(fifo, line))
            {
                std::cout << line << "\n" << std::flush;
            }
            fifo.close();
            fifo.clear(); // reset the eof/fail bits before the next open
        }
        // should never get here
    }
    catch (FATAL e)
    {
        rc = e;
        switch (e)
        {
        case ERR_ARGV:
            std::cerr << "ERROR: argument 1 should be a fifo file name\n";
            break;
        case ERR_OPEN_FILE:
            std::cerr << "ERROR: unable to open file " << argv[1] << "\n";
            break;
        }
    }
    return rc;
}
I have tested this code, and it works for an endless read from a fifo.
I'm reading from a named pipe on Linux using std::ifstream. If the writing end of the pipe is closed, I cannot continue reading from it through the stream. For some reason I have to clear(), close() and open() the stream again to continue reading. Is this expected? How can I avoid the close()/open() on a pipe when writers close and reopen the pipe at will?
Background: I believe the close()/open() I have to do is causing the writer to sometimes receive SIGPIPE, which I would like to avoid.
More details - I am using this code to read a stream
// read single line
stream_("/tmp/delme", std::ios::in|std::ios::binary);
std::getline(stream_, output_filename_);
std::cout << "got filename: " << output_filename_ << std::endl;
#if 0
// this fixes the problem
stream_.clear();
stream_.close();
stream_.open("/tmp/delme", std::ios::in|std::ios::binary);
// now the read blocks until data is available
#endif
// read more binary data
const int hsize = 4096+4;
std::array<char, hsize> b;
stream_.read(&b[0], hsize);
std::string tmp(std::begin(b), std::begin(b)+hsize);
std::cout << "got header: " << tmp << std::endl;
/tmp/delme is my pipe. I do echo "foo" > /tmp/delme and I get the foo in output_filename_, but the stream does not block there (it should: there is no more data); instead it proceeds to read garbage. If I enable the code within the #if 0 block, it works. Why?
Thanks,
Sebastian
Since you use std::getline(), maybe you need an extra "\n" to signal the end of a line:
echo -e "foo\n" > /tmp/delme
instead of just
echo "foo" > /tmp/delme
This should at least get rid of the garbage reading.
I'm having some problems with named pipes ("FIFOs") in C. I have two executable files: one tries to read, the other one tries to write. The reader is meant to be executed only once; I tried to make simple code that shows my problem, so it reads 10 times and then closes. However, the writer should be executed many times (in my original program it can't be executed twice at once: you have to wait for it to finish before running it again).
The problem with this code is that it only prints an incoming message when another one arrives. It seems to stay blocked until it receives another message. I don't know what is happening, but it seems the read line blocks the program although there is data to read, and it works again when I send new data.
I tried another thing: as you can see, the writer closes the file descriptor, and the reader opens the file descriptor twice, because it would find EOF and get unblocked if it didn't. I tried eliminating those lines (the writer would not close the fd; the reader would open the fd just once, eliminating the second open()). But for some reason it unblocks if I do that. Why does that happen?
This is my code:
Writer:
#include <iostream>
#include <string>
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cstdlib>

int main () {
    int fd;
    static const std::string FILE_FIFO = "/tmp/archivo_fifo";
    mknod ( FILE_FIFO.c_str(), S_IFIFO|0666, 0 );
    std::string mess = "Hii!! Example";
    //open:
    fd = open ( FILE_FIFO.c_str(), O_WRONLY );
    //write:
    write ( fd, mess.c_str(), mess.length() );
    std::cout << "[Writer] I wrote " << mess << std::endl;
    //close:
    close ( fd );
    fd = -1;
    std::cout << "[Writer] END" << std::endl;
    exit ( 0 );
}
Reader:
#include <iostream>
#include <string>
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cstdlib>

int main () {
    int i, fd;
    static const int BUFFSIZE = 100;
    static const std::string name = "/tmp/archivo_fifo";
    mknod ( name.c_str(), S_IFIFO|0666, 0 );
    char buffer[BUFFSIZE];
    i = 0;
    fd = open ( name.c_str(), O_RDONLY );
    while (true) {
        i++;
        std::cout << "Waiting to read Fifo: " << i << std::endl;
        ssize_t bytesLeidos = read ( fd, buffer, BUFFSIZE );
        fd = open ( name.c_str(), O_RDONLY );
        std::string mess;
        if ( bytesLeidos > 0 )
            mess.assign ( buffer, bytesLeidos ); // buffer is not null-terminated
        std::cout << "[Reader] I read: " << mess << std::endl;
        sleep(3);
        if (i == 10) break;
    }
    close ( fd );
    fd = -1;
    unlink ( name.c_str() );
    std::cout << "[Reader] END" << std::endl;
    exit ( 0 );
}
Thanks in advance. And please excuse my poor English
You should use the select() call to find out whether any data is available on the pipe's file descriptor. Have a look at
http://en.wikipedia.org/wiki/Select_(Unix)
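A minimal sketch of that approach, reusing the /tmp/archivo_fifo path from the question. Note that select() also reports the descriptor readable at EOF, so a zero-byte read still needs handling:

#include <sys/select.h>
#include <fcntl.h>
#include <unistd.h>
#include <iostream>

int main()
{
    int fd = open("/tmp/archivo_fifo", O_RDONLY); // blocks until a writer opens

    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(fd, &readfds);

    // Block until the fd becomes readable: either data arrived or
    // every writer closed its end.
    if (select(fd + 1, &readfds, nullptr, nullptr, nullptr) > 0)
    {
        char buf[100];
        ssize_t n = read(fd, buf, sizeof buf);
        if (n > 0)
            std::cout << "read " << n << " bytes\n";
        else if (n == 0)
            std::cout << "all writers closed the pipe\n";
    }
    close(fd);
}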
You've opened the file in blocking mode:
If some process has the pipe open for writing and O_NONBLOCK is clear, read() shall block the calling thread until some data is written or the pipe is closed by all processes that had the pipe open for writing.
Depending on your goals, you should either synchronize the readers and writers of your pipe, or use non-blocking mode for the reader. Read about poll, epoll, and select.
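For example, a poll()-based reader might look like this (a sketch; the FIFO path follows the question). The caveat from the other answers still applies: once every writer closes, read() returns 0, and the FIFO has to be reopened to keep reading:

#include <poll.h>
#include <fcntl.h>
#include <unistd.h>
#include <iostream>

int main()
{
    // O_NONBLOCK lets open() return immediately even with no writer yet.
    int fd = open("/tmp/archivo_fifo", O_RDONLY | O_NONBLOCK);

    pollfd pfd{fd, POLLIN, 0};
    for (;;)
    {
        if (poll(&pfd, 1, -1) <= 0) // wait indefinitely for readability
            break;

        char buf[100];
        ssize_t n = read(fd, buf, sizeof buf);
        if (n > 0)
            std::cout.write(buf, n);
        else if (n == 0)
            break; // all writers closed: reopen the FIFO to continue
        // n < 0 with errno == EAGAIN: nothing to read yet, poll again
    }
    close(fd);
}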
I've been reading more about FIFOs and now I understand the problem. I wrote:
the reader opens the file descriptor twice, because it would find EOF and get unblocked if it didn't. I tried eliminating those lines (the writer wouldn't close the fd, the reader would open the fd just once, eliminating the second "open()"). But for some reason, it unblocks if I do that. Why does that happen?
It unblocks because the other process exits, so the OS closes its file descriptor anyway. That's why, although I didn't write close(fd), it unblocks.
The only ways in which a blocking FIFO read can unblock are:
1) there is data to read
2) the other program closed the file descriptor. If there is no data to read and the writer closed the file descriptor (even if the file descriptor is still open in the reader), read() returns 0 and unblocks.
So my solution was to redesign my program so that the writer's file descriptor stays open all the time, which means there is only one executable file now. I'm pretty sure I could have done it with two executables, but I would probably need semaphores or something similar to synchronize, so that it wouldn't try to read while the writer's fd is closed.
Is it possible to read intermittently-sent data through a named pipe using redirection to stdin?
What I'd like to do is this:
$ mkfifo pipe
$ ./test < pipe
In another terminal:
$ cat datafile > pipe
$ cat datafile > pipe
repeatedly dumping information into the pipe. This only works the first time.
Here's a demonstration program for test that shows the behavior:
#include <iostream>

using std::cin;
using std::cout;
using std::endl;

int main(int argc, char *argv[]) {
    char input_string[30];
    while (1) {
        while (cin.read(input_string, 30) || cin.gcount() != 0) {
            cout << "!" << endl;
        }
    }
    return 1;
}
So, what's going on? Does redirection only provide the contents of a single send to the pipe? I've already written a version of the actual production code that takes in the name of the pipe as a parameter and keeps it open for writing this way, and maybe that's the answer. But I'm wondering if there's a way to do this with redirection.
When you redirect the input from the pipe like this:
./test < pipe
The shell opens the pipe for reading and then starts your program. But opening the pipe does not complete until a writer exists -- that is, open(2) blocks. When another process opens the pipe for writing, the original open call completes, and the two can communicate. When the writer closes its end of the pipe, the read end also closes -- the reader gets an EOF.
Once that cycle completes, you can reopen the pipe for reading and start another cycle, but you have to do it yourself. So if you're reading from stdin, you'll have to restart your program. Alternatively, you can just reopen the pipe on a different file descriptor, e.g.:
// Error checking omitted for expository purposes
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    while (1)
    {
        int fd = open("pipe", O_RDONLY);
        char buffer[30];
        int n;
        while ((n = read(fd, buffer, sizeof(buffer))) > 0)
        {
            // Process input
        }
        close(fd);
    }
    return 0;
}
If you want to wrap the raw I/O in a stdio FILE*, you can use fdopen(3); I'm not aware of a way to wrap a file descriptor in a C++ stream object, though it might be possible.
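A quick sketch of the fdopen(3) route, with error checking again omitted:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main()
{
    int fd = open("pipe", O_RDONLY); // blocks until a writer appears
    FILE* f = fdopen(fd, "r");       // wrap the descriptor in a FILE*

    char line[256];
    while (fgets(line, sizeof line, f))
        fputs(line, stdout);

    fclose(f); // also closes the underlying descriptor
    return 0;
}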
$ cat datafile > pipe
sends the content of datafile to the pipe and then closes the write end, which the reader sees as EOF (end of file). At that point the redirection is over, and data pushed into the pipe afterwards is no longer redirected to ./test.
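One way around this from the shell, if you want the redirection approach to survive multiple cat invocations, is to hold a write end of the pipe open yourself for the duration (fd 3 here is an arbitrary choice):
$ ./test < pipe &
$ exec 3> pipe        # keep a writer on the pipe so EOF never arrives
$ cat datafile > pipe
$ cat datafile > pipe # can be repeated as often as needed
$ exec 3>&-           # finally close it; ./test now sees EOF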