Opening pipe in append mode - c++

I'm trying to open a FIFO pipe that one thread writes into; the synchronization is all handled.
However, for understandable reasons I need it to be opened in append mode.
When I open it as follows:
ret_val = mkfifo(lpipename.c_str(), 0666);
if ((pipehandler = open(lpipename.c_str(), O_RDWR)) < 1)
{
    perror("Failed to open pipe file");
    syslog(LOG_ERR, "Failed to open pipe file");
    exit(1);
}
I don't have any problems, and I can see the pipe highlighted in yellow when I 'ls' the folder.
But when I try to open the pipe as follows, in append mode:
ret_val = mkfifo(lpipename.c_str(), 0666);
if ((pipehandler = open(lpipename.c_str(), O_RDWR | O_APPEND)) < 1)
{
    perror("Failed to open pipe file");
    syslog(LOG_ERR, "Failed to open pipe file");
    exit(1);
}
I can't see the pipe in the folder at all.
For the record, I don't get an error in either case.
Does anyone have any idea why?
Thanks

O_APPEND may lead to corrupted files on NFS file systems if more than one process appends data to a file at once. This is because NFS does not support appending to a file, so the client kernel has to simulate it, which can't be done without a race condition.
It may be due to this; for more details, see the link below:
http://www.kernel.org/doc/man-pages/online/pages/man2/open.2.html

It's a FIFO. How could it do anything but append? Appending is the norm for a pipe, so it will always append no matter how you open it.

Related

Why must the file exist when using the 'r+' mode in fopen?

Why add this constraint when your intentions are to both read and write data to the file?
My application wants to open the file in both reading and writing mode. If I use w+ it will destroy the previous contents of the file, but at the same time it will create the file if it doesn't exist.
However, if I use the r+ mode, my application will work properly, but if the file doesn't exist it will throw an exception about the nonexistence of the file.
Try something like this. If the first fopen fails because the file does not exist, the second fopen will try to create it. If the second fopen fails there are serious problems.
if ((fp = fopen("filename", "r+")) == NULL) {
    if ((fp = fopen("filename", "w+")) == NULL) {
        return 1;
    }
}
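If you'd rather avoid trying two modes, an alternative is to do the create-or-open step with open(2) and then wrap the descriptor in a stdio stream with fdopen. This is only a minimal sketch, assuming a POSIX system; the filename is just a placeholder:
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

FILE *open_rw_create(const char *path)
{
    /* O_RDWR | O_CREAT: read/write, create if missing, keep existing contents */
    int fd = open(path, O_RDWR | O_CREAT, 0666);
    if (fd < 0)
        return NULL;

    FILE *fp = fdopen(fd, "r+");   /* wrap the descriptor in a stdio stream */
    if (fp == NULL)
        close(fd);                 /* don't leak the descriptor if fdopen fails */
    return fp;
}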

How to properly close a socket opened with fdopen?

I have a socket sock:
int sock = socket(...);
connect(sock, ...);
// or sock = accept(sock_listen, 0, 0);
I opened it with fdopen twice, so that I can use the buffered stdio functions such as fwrite, fread, fgets and fprintf.
FILE *f_recv = fdopen(sock, "rb");
FILE *f_send = fdopen(sock, "wb");
// some IO here.
close(sock);
fclose(f_recv);
fclose(f_send);
But as we know, fclose also closes the underlying descriptor, so whichever of close/fclose runs later will fail.
And if I use only close, the memory for the FILE structures is leaked.
How do I close it properly?
UPDATE:
Using fdopen once with "r+" makes reading and writing share the same lock, but I expect sending and receiving to work independently.
Use dup() to obtain a duplicate file descriptor to pass to fdopen(). When you call fclose(), the duplicate will be closed, but the underlying socket will remain open and can be closed with close():
FILE *f_recv = fdopen(dup(sock), "rb");
FILE *f_send = fdopen(dup(sock), "wb");
// some IO here.
fclose(f_recv);
fclose(f_send);
close(sock);
Edit: You can of course combine this with just using a single FILE object for both reading and writing.
I think calling fdopen() twice is a mistake for the reasons you give.
Just open it once with fdopen(), passing the mode string "r+b" to make it read/write and binary.
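If you take that single-stream route, note that when one stdio stream is used for both input and output you generally need an fflush (or another flushing/positioning call) when switching from writing to reading. A minimal sketch, assuming sock is the connected socket from the question and a simple line-based protocol:
#include <stdio.h>

FILE *f = fdopen(sock, "r+b");     /* one buffered stream for both directions */

fprintf(f, "HELLO\n");             /* write a request */
fflush(f);                         /* flush before switching to reading */

char line[256];
if (fgets(line, sizeof line, f) != NULL) {
    /* handle the reply held in line */
}

fclose(f);                         /* also closes the underlying socket */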

End of file on pipe magic during open

I have a C++ application in which I am starting another process (Wireshark), something like the following:
if (fp == NULL) {
    fp = popen(processpath, "r"); // processpath is the process I want to start
    if (!fp) {
        throw std::invalid_argument("Cannot start process");
    }
    fprintf(fp, d_msg); // d_msg is the input I want to provide to the process
} else if (fp != NULL) {
    fprintf(fp, d_msg);
}
The problem is that when I execute my C++ application, it does start Wireshark, but with the error "End of file on pipe magic during open".
What should I do to avoid that?
I also tried using mkfifo to create a named pipe and feeding Wireshark through it. I used something like this:
if (fp == NULL) {
    system("mkfifo /tmp/mine.pcap");
    fp = popen("wireshark -k -i /tmp/mine.pcap", "r");
    if (!fp) {
        dout << "Cannot start wireshark" << std::endl;
        throw std::invalid_argument("Cannot start wireshark");
    }
    input = fopen("/tmp/mine.pcap", "wb");
    fprintf(input, d_msg);
    fclose(input);
} else if (fp != NULL) {
    input = fopen("/tmp/mine.pcap", "wb");
    fprintf(input, d_msg);
    fclose(input);
}
But that didn't work either. With this I get the following error:
The file "/tmp/wireshark_mine.pcap_20130730012654_ndbFzk" is a capture for a network type that Wireshark doesn't support
Any help would be appreciated.
Thank you very much.
The problem is that when I execute my C++ application, it does start Wireshark, but with the error "End of file on pipe magic during open".
What should I do to avoid that?
You should write a pcap file or a pcap-ng file to the pipe, rather than fprintfing something.
Both of those file formats are binary. If you're constructing your own packets, you will have to write a valid pcap file header (or, for pcap-ng, a Section Header Block and at least one Interface Description Block) to the pipe before you can write any packets. Then, for each packet, you will have to write a per-packet pcap header before the raw packet data, or, for pcap-ng, the beginning of an Enhanced Packet Block before the packet data and its end after it. If you're just sending an existing file to Wireshark, you will need to read raw bytes from the file and send those raw bytes down the pipe.
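To make that concrete, here is a rough sketch of writing the plain pcap flavour. It assumes the pipe has been opened for writing in binary mode (for example the named FIFO opened with fopen(..., "wb"), or popen with "w") and that the packets are Ethernet frames (link-layer type 1):
#include <stdio.h>
#include <stdint.h>

struct pcap_global_hdr {
    uint32_t magic;          /* 0xa1b2c3d4 */
    uint16_t version_major;  /* 2 */
    uint16_t version_minor;  /* 4 */
    int32_t  thiszone;       /* GMT offset, usually 0 */
    uint32_t sigfigs;        /* usually 0 */
    uint32_t snaplen;        /* max captured length, e.g. 65535 */
    uint32_t network;        /* link-layer type, 1 = Ethernet */
};

struct pcap_record_hdr {
    uint32_t ts_sec;         /* timestamp, seconds */
    uint32_t ts_usec;        /* timestamp, microseconds */
    uint32_t incl_len;       /* bytes saved in this record */
    uint32_t orig_len;       /* original packet length on the wire */
};

void write_pcap_file_header(FILE *out)
{
    struct pcap_global_hdr hdr = { 0xa1b2c3d4, 2, 4, 0, 0, 65535, 1 };
    fwrite(&hdr, sizeof hdr, 1, out);   /* binary write, not fprintf */
    fflush(out);
}

void write_pcap_packet(FILE *out, const uint8_t *pkt, uint32_t len,
                       uint32_t sec, uint32_t usec)
{
    struct pcap_record_hdr rec = { sec, usec, len, len };
    fwrite(&rec, sizeof rec, 1, out);
    fwrite(pkt, 1, len, out);
    fflush(out);
}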

exec family with a file input

Hey guys, I am trying to write a shell in C++ and I am having trouble with using an input file with the exec commands. For example, the bc shell in Linux can do "bc < text.txt", which evaluates the lines of the file in a batch-like fashion. I am trying to do likewise with my shell, something along the lines of:
char* input = "input.txt";
execlp(input, bc, .....) // I don't really know how to call the execlp command, and all the docs and searches have been kind of cryptic for someone just starting out.
Is this even possible with the exec commands? Or will I have to read the file in line by line and run the exec commands in a for loop?
You can open the file and then dup2() the file descriptor to standard input, or you can close standard input and then open the file (which works because standard input is descriptor 0 and open() returns the lowest numbered available descriptor).
const char *input = "input.txt";
int fd = open(input, O_RDONLY);
if (fd < 0)
    throw "could not open file";
if (dup2(fd, 0) != 0) // Testing that the file descriptor is 0
    throw "could not dup2";
close(fd); // You don't want two copies of the file descriptor
execvp(command[0], &command[0]);
fprintf(stderr, "failed to execvp %s\n", command[0]);
exit(1);
You would probably want cleverer error handling than the throw, not least because this is the child process and it is the parent that needs to know. But the throw sites mark points where errors are handled.
Note the close().
The redirect is being performed by the shell -- it's not an argument to bc. You can invoke bash yourself (the equivalent of bash -c "bc < text.txt").
For example, you can use execvp with a file argument of "bash" and argument list
"bash"
"-c"
"bc < text.txt"

C++: File open call fails

I have the following code in one of my library functions, which I am calling many times in a loop. After a large number of iterations I find that open returns -1, which it shouldn't, since the previous iterations worked fine. What may be the cause? How can I get more details on the error?
int mode;
if (fileLen == 0)
    mode = O_TRUNC | O_RDWR | O_CREAT;
else
    mode = O_RDWR;
myFilDes = open(fName, mode, S_IRUSR | S_IWUSR);
EDIT: After the end of each iteration I am calling a method that the library exposes, which internally calls close(myFilDes);
perror is the standard function to map errno to a string and print it to stderr (it appends its own ": " separator, so you don't need one in the message):
if (myFilDes == -1)
    perror("Unable to open file");
man errno / man perror / man strerror for more info.
Are you closing these handles as well? Do you reach a specific number of open calls before it starts failing?
The errno variable should have additional information as to what the failure is. See: http://linux.die.net/man/2/open
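If you want more detail than perror's one-liner, you can capture errno right after the failing call and check for specific causes; running out of descriptors (a typical result of a missing close in a loop) shows up as EMFILE. A minimal sketch using the names from the question:
#include <errno.h>
#include <stdio.h>
#include <string.h>

myFilDes = open(fName, mode, S_IRUSR | S_IWUSR);
if (myFilDes == -1) {
    int err = errno;  /* save it before any other call can overwrite it */
    fprintf(stderr, "open(%s) failed: %s (errno=%d)\n", fName, strerror(err), err);
    if (err == EMFILE)
        fprintf(stderr, "out of file descriptors - check that every open has a matching close\n");
}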