End of file on pipe magic during open - C++

I have a C++ application in which I am starting another process (Wireshark), something like the following:
if (fp == NULL) {
    fp = popen(processpath, "w"); // processpath is the process I want to start; "w" so the stream feeds its stdin
    if (!fp) {
        throw std::invalid_argument("Cannot start process");
    }
    fprintf(fp, "%s", d_msg); // d_msg is the input I want to provide to the process
} else {
    fprintf(fp, "%s", d_msg);
}
The problem is that when I execute my C++ application, it does start Wireshark, but with the error "End of file on pipe magic during open".
What should I do to avoid that?
I also tried using mkfifo to create a named pipe and feeding Wireshark from that. I used something like this:
if (fp == NULL) {
    system("mkfifo /tmp/mine.pcap");
    fp = popen("wireshark -k -i /tmp/mine.pcap", "r");
    if (!fp) {
        dout << "Cannot start wireshark" << std::endl;
        throw std::invalid_argument("Cannot start wireshark");
    }
    input = fopen("/tmp/mine.pcap", "wb");
    fprintf(input, "%s", d_msg);
    fclose(input);
} else {
    input = fopen("/tmp/mine.pcap", "wb");
    fprintf(input, "%s", d_msg);
    fclose(input);
}
But that didn't work either. With this I get the following error:
The file "/tmp/wireshark_mine.pcap_20130730012654_ndbFzk" is a capture for a network type that Wireshark doesn't support
Any help would be appreciated.
Thank you very much.

The problem is that when I execute my C++ application, it does start Wireshark, but with the error "End of file on pipe magic during open". What should I do to avoid that?
You should write a pcap file or a pcap-ng file to the pipe, rather than fprintfing something.
Both of those file formats are binary. If you're constructing your own packets, you will have to construct and write to the pipe a valid pcap file header, or several valid pcap-ng blocks (a Section Header Block and at least one Interface Description Block), before you can write any packets. Then, for each packet, you will have to write a per-packet pcap header, or the beginning and end of a pcap-ng Enhanced Packet Block, around the raw packet data. If you're just sending an existing capture file to Wireshark, read raw bytes from the file and send those raw bytes down the pipe.
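For illustration, here is a minimal sketch of the pcap route, assuming native-endian headers and Ethernet frames (LINKTYPE_ETHERNET = 1); adjust the network field to match your actual link-layer type, and note that the all-zero frame is just a placeholder. Wireshark can capture from standard input with -i -:

#include <cstdint>
#include <cstdio>

// Classic pcap global file header, written once before any packets.
struct PcapFileHeader {
    uint32_t magic;         // 0xa1b2c3d4 = native byte order, microsecond timestamps
    uint16_t version_major; // 2
    uint16_t version_minor; // 4
    int32_t  thiszone;      // GMT-to-local correction, usually 0
    uint32_t sigfigs;       // timestamp accuracy, usually 0
    uint32_t snaplen;       // maximum bytes captured per packet
    uint32_t network;       // link-layer type, 1 = LINKTYPE_ETHERNET
};

// Per-packet record header, written before each packet's raw bytes.
struct PcapRecordHeader {
    uint32_t ts_sec;   // timestamp, seconds
    uint32_t ts_usec;  // timestamp, microseconds
    uint32_t incl_len; // number of packet bytes that follow this header
    uint32_t orig_len; // original packet length on the wire
};

int main() {
    FILE *fp = popen("wireshark -k -i -", "w"); // "-i -" = capture from stdin
    if (!fp)
        return 1;

    PcapFileHeader hdr = {0xa1b2c3d4, 2, 4, 0, 0, 65535, 1};
    fwrite(&hdr, sizeof hdr, 1, fp);

    uint8_t frame[60] = {0}; // placeholder for a raw Ethernet frame
    PcapRecordHeader rec = {0, 0, sizeof frame, sizeof frame};
    fwrite(&rec, sizeof rec, 1, fp);
    fwrite(frame, 1, sizeof frame, fp);
    fflush(fp); // push the packet to Wireshark immediately

    pclose(fp);
    return 0;
}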

Related

Correct way of using fdopen

I mean to associate a file descriptor with a file pointer and use that for writing.
I put together program io.cc below:
#include <cstdio>
#include <cstring>
#include <iostream>
#include <unistd.h>
using namespace std;

int main() {
    ssize_t nbytes;
    const int fd = 3;
    char c[100] = "Testing\n";
    nbytes = write(fd, (void *) c, strlen(c)); // Line #1
    FILE * fp = fdopen(fd, "a");
    fprintf(fp, "Writing to file descriptor %d\n", fd);
    cout << "Testing alternate writing to stdout and to another fd" << endl;
    fprintf(fp, "Writing again to file descriptor %d\n", fd);
    close(fd); // Line #2
    return 0;
}
I can alternately comment out lines #1 and/or #2, compile, and run
./io 3> io_redirect.txt
and check the contents of io_redirect.txt.
Whenever line #1 is not commented out, it produces in io_redirect.txt the expected line Testing\n.
If line #2 is commented out, I get the expected lines
Writing to file descriptor 3
Writing again to file descriptor 3
in io_redirect.txt.
But if line #2 is not commented out, those lines do not show up in io_redirect.txt.
Why is that?
What is the correct way of using fdopen?
NOTE.
This seems to be the right approach for a (partial) answer to Smart-write to arbitrary file descriptor from C/C++
I say "partial" since I would be able to use C-style fprintf.
I still would like to also use C++-style stream<<.
EDIT:
I was forgetting about fclose(fp).
That "closes" part of the question.
Why is that?
The opened stream (a "stream" is an open FILE*) is block buffered, so nothing gets written to the destination until the stream is flushed. Exiting from an application closes all open streams, which flushes them.
Because you close the underlying file descriptor before flushing the stream, the behavior of your program is undefined. I would really recommend you read POSIX 2.5.1, Interaction of File Descriptors and Standard I/O Streams (which is written in horrible language, nonetheless), from which:
... if two or more handles are used, and any one of them is a stream, the application shall ensure that their actions are coordinated as described below. If this is not done, the result is undefined.
...
For the first handle, the first applicable condition below applies. ...
...
If it is a stream which is open for writing or appending (but not also open for reading), the application shall either perform an fflush(), or the stream shall be closed.
A "handle" is a file descriptor or a stream. An "active handle" is the last handle that you did something with.
The fp stream is the active handle that is open for appending to file descriptor 3. Because fp is an active handle and is not flushed and you switch the active handle to fd with close(fd), the behavior of your program is undefined.
My guess at what most probably happens: your C standard library implementation calls fflush(fp) after main returns, but because fd is already closed, the internal write(3, ...) call returns an error and nothing is written to the output.
What is the correct way of using fdopen?
The usage you presented is the correct way of using fdopen.
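Following the quoted POSIX rule, a minimal sketch of the coordinated shutdown: let fclose() flush the stream and close descriptor 3, and drop the separate close(fd):

#include <cstdio>
#include <cstring>
#include <unistd.h>

int main() {
    const int fd = 3;
    const char c[] = "Testing\n";
    write(fd, c, strlen(c));     // plain descriptor write, as before
    FILE *fp = fdopen(fd, "a");  // fp now owns fd
    if (!fp)
        return 1;
    fprintf(fp, "Writing to file descriptor %d\n", fd);
    if (fclose(fp) != 0)         // flushes, then closes fd; no close(fd) needed
        perror("fclose");
    return 0;
}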

How to properly close a socket opened with fdopen?

I have a socket sock:
int sock = socket(...);
connect(sock, ...);
// or sock = accept(sock_listen, 0, 0);
And I opened it with fdopen twice, so that I can use the buffered reader and writer in stdio, such as fwrite, fread, fgets and fprintf.
FILE *f_recv = fdopen(sock, "rb");
FILE *f_send = fdopen(sock, "wb");
// some IO here.
close(sock);
fclose(f_recv);
fclose(f_send);
But as we know, fclose will call close on the underlying descriptor, so after the close(sock) above both fclose calls will fail.
And if I use only close, the memory of the FILE structures is leaked.
How do I close it properly?
UPDATE:
Using fdopen once with "r+" makes reading and writing share the same lock, but I expect sending and receiving to work independently.
Use dup() to obtain a duplicate file descriptor for passing to fdopen(). When you call fclose() that will be closed but the underlying socket will remain open and can be closed with close():
FILE *f_recv = fdopen(dup(sock), "rb");
FILE *f_send = fdopen(dup(sock), "wb");
// some IO here.
fclose(f_recv);
fclose(f_send);
close(sock);
Edit: You can of course combine this with just using a single FILE object for both reading and writing.
I think calling fdopen() twice is a mistake for the reasons you give.
Just open it once with fdopen(), passing the mode string "r+b" to make it read/write and binary.
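A minimal sketch of that single-stream variant (the request string is purely illustrative). One caveat from the C standard applies: on a stream open for update you must call fflush() when switching from writing to reading, which is the natural thing to do on a socket anyway:

#include <cstdio>

void talk(int sock) {
    FILE *f = fdopen(sock, "r+b"); // one stream for both directions
    if (!f) {
        perror("fdopen");
        return;
    }
    fprintf(f, "GET / HTTP/1.0\r\n\r\n"); // hypothetical request
    fflush(f);                            // required before switching to reading
    char line[256];
    while (fgets(line, sizeof line, f))   // read the peer's response
        fputs(line, stdout);
    fclose(f);                            // flushes and closes sock
}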

Opening pipe in append mode

I'm trying to open a FIFO (named pipe) into which one thread writes; the synchronization is all good.
However, for understandable reasons I need it to be opened in append mode.
When I open it as follows:
ret_val = mkfifo(lpipename.c_str(), 0666);
if ((pipehandler = open(lpipename.c_str(), O_RDWR)) < 0) // open returns -1 on error
{
    perror("Failed to open pipe file");
    syslog(LOG_ERR, "Failed to open pipe file");
    exit(1);
}
I don't have any problems, and I can see the pipe marked in yellow when ls-ing my folder.
But when I try to open the pipe as follows, in append mode:
ret_val = mkfifo(lpipename.c_str(), 0666);
if ((pipehandler = open(lpipename.c_str(), O_RDWR | O_APPEND)) < 0)
{
    perror("Failed to open pipe file");
    syslog(LOG_ERR, "Failed to open pipe file");
    exit(1);
}
I can't see the pipe in the folder at all.
For the record, I get an error in neither of the two cases.
Does anyone have any idea of why?
Thanks
O_APPEND may lead to corrupted files on NFS file systems if more than one process appends data to a file at once. This is because NFS does not support appending to a file, so the client kernel has to simulate it, which can't be done without a race condition.
It may be due to this; for more details, look at the open(2) man page:
http://www.kernel.org/doc/man-pages/online/pages/man2/open.2.html
It's a FIFO; how could it do anything else but append? Appending is the norm for a pipe, so it will always append no matter how you open it.
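For completeness, a minimal sketch of the plain (non-append) pattern, with the mkfifo result checked so that an already-existing FIFO is not treated as an error:

#include <cerrno>
#include <cstdio>
#include <cstdlib>
#include <fcntl.h>
#include <sys/stat.h>

int open_fifo(const char *path) {
    // EEXIST just means the FIFO is still there from a previous run.
    if (mkfifo(path, 0666) == -1 && errno != EEXIST) {
        perror("mkfifo");
        exit(1);
    }
    int fd = open(path, O_RDWR); // O_RDWR, so the open doesn't block waiting for a reader
    if (fd < 0) {
        perror("open");
        exit(1);
    }
    return fd; // data always reaches the read end in write order; no O_APPEND needed
}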

C++ continuous read file

I have a producer/consumer set-up: our client gives us data that our server processes, and the client delivers that data by constantly writing to a file. Our server uses inotify to look for file modifications and processes the new data.
Problem: the file reader in the server has a buffer of size 4096. I have a unit test that simulates the above situation: the test constantly writes to an open file, which the file reader constantly tries to read and process. But I noticed that after the first record is read, which is much smaller than 4096, an error flag is set in the ifstream object, which means that any new data arriving is not being processed. A simple workaround seems to be to call ifstream::clear after every read, and this does solve the issue. But what is going on? Is this the right solution?
First off, depending on your system it may or may not be possible to read a file another process is writing to: on Windows the normal settings when opening a file make the access exclusive (I don't know enough about Windows to tell whether there are other settings). On POSIX systems a file with suitable permissions can be opened for reading and writing by different processes. From the sounds of it you are using Linux, i.e., something following the POSIX specification.
The approach of polling a file upon change isn't entirely ideal, though: as you noticed, you get an "error" every time you reach the end of the current file. Actually, reaching the end of a file isn't really an error, but trying to decode something beyond the end of the file is. Also, reading beyond the end of the file still sets std::ios_base::eofbit and, thus, the stream won't be good(). If you insist on using this approach there isn't much choice but to read up to the end of the file and deal with the incomplete read somehow.
If you have control over creating the file, however, you can do a simple trick: instead of having the file be a normal file, use mkfifo to create a named pipe with the file name the writing program will write to. When opening a file on a POSIX system, an existing file is used rather than a new one created; well, a file or whatever else is addressed by the file name (in addition to files and named pipes you may see directories, character or block special devices, and possibly others).
Named pipes are curious beasts intended to have two processes communicate with each other: what is written to one end by one process is readable at the other end by another process. The named pipe itself doesn't have any content, i.e., if you need both the content of the file and the communication with another process you might need to replicate the content somewhere. A named pipe opened for reading will block whenever it has reached the current end of the data; initially the read blocks until there is a writer. Similarly, writes to the named pipe will block until there is a reader. Once the two processes are communicating, the respective other end will receive an error when reading or writing the named pipe after the other process has exited.
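A minimal sketch of the reading side under this scheme (the FIFO path is illustrative; the writer simply opens the same path and writes to it):

#include <fstream>
#include <iostream>
#include <string>

int main() {
    // Opening a FIFO for reading blocks until a writer opens the other end.
    std::ifstream in("/tmp/feed.fifo");
    std::string line;
    // Each getline blocks until data arrives; it fails (EOF) only once all
    // writers have closed their end, so no polling or clear() is needed.
    while (std::getline(in, line))
        std::cout << "got: " << line << '\n';
    return 0;
}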
If you are OK with opening and closing the file again and again, the right solution to this problem is to store the last read position and start from there once the file is updated. The exact algorithm would be:
1. Set start_pos = 0, end_pos = 0.
2. Update end_pos = infile.tellg().
3. Move the get pointer to start_pos (use seekg()) and read a block of (end_pos - start_pos) bytes.
4. Update start_pos = end_pos and then close the file.
5. Sleep for some time and open the file again.
6. If the file stream is still not good, close the file and jump to step 5.
7. If the file stream is good, jump to step 2.
The istream::seekg reference is at http://www.cplusplus.com/reference/istream/istream/seekg/; the code below adapts the sample given there.
The exact code would be:

#include <fstream>
#include <iostream>
#include <unistd.h> // for sleep()

int main(int argc, char *argv[]) {
    if (argc != 2)
    {
        std::cout << "Please pass the filename with its full path\n";
        return -1;
    }
    std::streamoff end_pos = 0, start_pos = 0;
    std::streamsize length;
    char *buffer;
    char *filePath = argv[1];
    std::ifstream is(filePath, std::ifstream::binary);
    while (1)
    {
        if (is) {
            is.seekg(0, is.end);
            end_pos = is.tellg(); // always update the end pointer to the end of the file
            is.seekg(start_pos, is.beg); // move the read pointer to the new start position
            // allocate memory for the new block:
            length = end_pos - start_pos;
            buffer = new char[length];
            // read the new data as a block of (end_pos - start_pos) bytes:
            is.read(buffer, length);
            is.close();
            // print the content:
            std::cout.write(buffer, length);
            delete[] buffer;
            start_pos = end_pos; // update the start pointer
        }
        // wait and restart with new data
        sleep(1);
        is.clear(); // reset eof/fail bits; pre-C++11, open() does not clear them
        is.open(filePath, std::ifstream::binary);
    }
    return 0;
}

exec family with a file input

Hey guys, I am trying to write a shell in C++ and I am having trouble with using an input file with the exec commands. For example, bc in Linux can do "bc < text.txt", which evaluates the lines of the text file in a batch-like fashion. I am trying to do likewise with my shell, something along the lines of:
char* input = "input.txt";
execlp(input, bc, ...); // I don't really know how to call the execlp command; all the docs and searches have been kind of cryptic for someone just starting out.
Is this even possible with the exec commands, or will I have to read the file line by line and run the exec commands in a loop?
You can open the file and then dup2() the file descriptor to standard input, or you can close standard input and then open the file (which works because standard input is descriptor 0 and open() returns the lowest numbered available descriptor).
// Needs <fcntl.h>, <unistd.h>, <cstdio>, <cstdlib>; command is an argv-style,
// null-terminated array, e.g. std::vector<char *> command = { (char *)"bc", nullptr };
const char *input = "input.txt";
int fd = open(input, O_RDONLY);
if (fd < 0)
    throw "could not open file";
if (dup2(fd, 0) != 0) // testing that the duplicated descriptor is 0 (standard input)
    throw "could not dup2";
close(fd); // you don't want two copies of the file descriptor
execvp(command[0], &command[0]);
fprintf(stderr, "failed to execvp %s\n", command[0]);
exit(1);
You would probably want cleverer error handling than the throw, not least because this is the child process and it is the parent that needs to know. But the throw sites mark points where errors are handled.
Note the close().
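The close-then-open variant mentioned at the top is even shorter, because open() is guaranteed to return the lowest-numbered free descriptor (same fragment conventions and error-handling caveats as above):

close(0);                             // free descriptor 0 (standard input)
if (open("input.txt", O_RDONLY) != 0) // open returns the lowest free fd, i.e. 0
    throw "could not open file";
execvp(command[0], &command[0]);      // the child now reads input.txt as stdin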
The redirect is performed by the shell; it's not an argument to bc. You can invoke bash to do it for you (the equivalent of bash -c "bc < text.txt").
For example, you can use execvp with a file argument of "bash" and the argument list:
"bash"
"-c"
"bc < text.txt"