I have a program in C++ which runs a loop like this, grabbing frames from a video device, using a proprietary driver which I have no access to.
while(true) {
mybuf = getNextFrame(); // blocks
}
I would like to build some logic using other programming languages, so I was thinking of using the following interface. (I need only Linux support)
I was thinking of having a file somewhere, like:
/my/video/device
And every time I call read() on it, it would give me the current frame. If I call read() again, I want it to block until the next frame is available and return that. Also, if I don't call open() for a while, I don't want the in-between frames to be buffered.
What would be the best approach?
I tried using FUSE to implement a filesystem, but when I exposed a regular file, readers would seek inside it and only read up to the size I declared for the file. I then made it a character device, but my read() function was never called; instead I got "permission denied"...
I was thinking about trying CUSE, or something along those lines. Am I overcomplicating things? I just need to be able to work with a stream of frames constantly coming from my C++ loop, but I want to parse them in a different language, like Python or Go. I also do not want to mix the compilation of my C++ code with Go or Python; I want the two to be completely separate. I thought having some sort of file API between the two would make things easier. What would be a good way of handling this?
I would write the program using named pipes. One thing to keep in mind is that if the receiving end disconnects in the middle of a write, the server will receive a SIGPIPE signal, and unless this signal is handled or blocked the server will be terminated.
I have a local file which some process continuously appends to. I would like to serve that file with boost::beast.
So far I'm using boost::beast::http::response<boost::beast::http::file_body> and boost::beast::http::async_write to send the file to the client. That works very well and it is nice that boost::beast takes care of everything. However, when the end of the file is reached, it stops the asynchronous writing. I assume that is because is_done of the underlying serializer returns true at this point.
Is it possible to keep the asynchronous writing ongoing so new contents are written to the client as the local file grows (similar to how tail -f would keep writing the file's contents to stdout)?
I've figured that I might need to use boost::beast::http::response_serializer<boost::beast::http::file_body> for that kind of customization but I'm not sure how to use it correctly. And do I need to use chunked encoding for that purpose?
Note that keeping the HTTP connection open is not the problem, only writing further output as soon as the file grows.
After some research this problem seems not easily solvable, at least not under GNU/Linux which I'm currently focusing on.
It is possible to use chunked encoding as described in boost::beast's documentation. I've implemented serving chunks asynchronously from file contents which are also read asynchronously with the help of boost::asio::posix::stream_descriptor. That works quite well. However, it also stops with an end-of-file error as soon as the end of the file is reached. When using async_wait via the descriptor I'm getting the error "Operation not supported".
So it simply seems not possible to asynchronously wait for more bytes to be written to a regular file. That's strange, considering tail -f does exactly that. So I ran strace on tail -f, and it turns out that it calls inotify_add_watch(4, "path_to_file", IN_MODIFY). Hence I assume one actually needs to use inotify to implement this.
For me it seems easier and more efficient to take control of the process that has been writing to the file and have it print to stdout instead. Then I can stream from the pipe (similarly to how I attempted streaming the file) and write the file myself.
However, if one wanted to go down the road, I suppose using inotify and boost::asio::posix::stream_descriptor is the answer to the question, at least under GNU/Linux.
This is a question about inter process communication via stdin/stdout.
The problem is I have a COM library, which I wasn't able to use with any Java-COM bridge (one particular function always causes core dump). But I was able to use it from a C++ program.
So I decided to make a wrapper server program in C++ to make those calls for me, and communicate with it from Java via stdin/stdout, but I'm facing a problem here.
I've decided to use protobufs for the messages; the main problem is reading input on the C++ side. I need a method that will block until a certain number of bytes has been written to stdin for it to read.
The idea was to use google's protobufs, and set up communication like this:
C program starts an infinite loop, blocking on STDIN input, waiting to get 4 bytes in, which would be the length of the incoming message.
Then it blocks to get the whole message (raw byte count is known)
Parse the message with protobuf
Do work
Write output to stdout (probably in the same manner, prepending the message with the number of bytes incoming)
The Java client reads this using DataInputStream or something like it and decodes it with protobuf as well
Setting up this two-way communication turned out to be quite a lot harder than I would have thought, thanks to my lack of knowledge of C++ and Windows programming (I compile it with MSVS2013 Community, and there are so many Windows-specific macros/typedefs from all this COM code).
Is there some 3rd-party lib that can make creating such a simple server, well, actually simple?
PS: can be C, can be C++, I just need it to run on Windows.
A relatively simple message-handling loop might look like this. Note that read() takes a file descriptor (STDIN_FILENO), not a FILE*, and may return fewer bytes than requested, so real code should loop until the full count arrives:
// Requires <unistd.h>, <arpa/inet.h>, <cstdint>, <cstdlib>
void read_and_process_messages(void) {
    while (true) {
        uint32_t nMessageBytes;  // the protocol says 4 bytes, so use a fixed-width type
        if (read(STDIN_FILENO, &nMessageBytes, sizeof nMessageBytes) != sizeof nMessageBytes)
            break;               // EOF or error
        // Convert from network byte order to host byte order
        nMessageBytes = ntohl(nMessageBytes);
        char *buffer = (char *)malloc(nMessageBytes);
        if (read(STDIN_FILENO, buffer, nMessageBytes) != (ssize_t)nMessageBytes) {
            free(buffer);
            break;               // short read, EOF or error
        }
        // Do something with your buffer and write to stdout.
        free(buffer);
    }
}
I am writing a Linux command-line application that ultimately does data acquisition from a piece of hardware. The nature of the acquisition is that it feeds data to the program continuously at some defined data rate. Once the user enters RxData (the receive loop), we do not want to stop unless we get a command from the terminal telling it to stop. The problem I foresee is that using getchar() will hang the loop on every iteration of the while loop, because the program will wait for user input. Am I wrong about this behavior?
On a side note, I know that when working with embedded devices, you can simply check a register to see if the buffer has grown and use that to decide whether or not to read from the buffer. I do not have that luxury in a Linux application (or do I?). Is there some function (let's call it getCharAvailable) I can run to check whether data has been entered, and THEN signal my program to stop acquiring data?
I can't simply use SIGINT because I need to signal to the hardware to stop data acquisition as well as add a header to the recorded data. There needs to be a signal to stop acquisition.
In Linux (or any other Unix flavour), you can use select to check whether data is available on 2 (or more) file descriptors, sockets, or anything else that can be read. (That is the reason this system call exists...)
Use the ncurses library and call getch() in no-delay mode (see nodelay()).
I have a program that creates pipes between two processes. One process constantly monitors the output of the other and when specific output is encountered it gives input through the other pipe with the write() function. The problem I am having, though is that the contents of the pipe don't go through to the other process's stdin stream until I close() the pipe. I want this program to infinitely loop and react every time it encounters the output it is looking for. Is there any way to send the input to the other process without closing the pipe?
I have searched a bit and found that named pipes can be reopened after closing them, but I wanted to find out if there was another option since I have already written the code to use unnamed pipes and I haven't yet learned to use named pipes.
Take a look at using fflush.
How are you reading the other end? Are you expecting complete strings? You aren't sending terminating NULs in the snippet you posted. Perhaps sending strlen(string)+1 bytes will fix it. Without seeing the code it's hard to tell.
Use fsync. http://pubs.opengroup.org/onlinepubs/007908799/xsh/fsync.html
From http://www.delorie.com/gnu/docs/glibc/libc_239.html:
Once write returns, the data is enqueued to be written and can be read back right away, but it is not necessarily written out to permanent storage immediately. You can use fsync when you need to be sure your data has been permanently stored before continuing. (It is more efficient for the system to batch up consecutive writes and do them all at once when convenient. Normally they will always be written to disk within a minute or less.) Modern systems provide another function fdatasync which guarantees integrity only for the file data and is therefore faster. You can use the O_FSYNC open mode to make write always store the data to disk before returning.
I have two applications running on my machine. One is supposed to hand in the work and the other is supposed to do the work. How can I make sure that the first application/process is in a wait state? I can verify via the resources it is consuming, but that doesn't guarantee it. What tools should I use?
Your 2 applications should communicate. There are a lot of ways to do that:
Send messages through sockets. This way the 2 processes can run on different machines if you use normal network sockets instead of local ones.
If you are using C you can use semaphores with semget/semop/semctl. There should be interfaces for that in other languages.
Named pipes block until there is both a read and a write operation in progress. You can use that for synchronisation.
Signals are also good for this. In C you send one with kill and handle it with sigaction.
DBUS can also be used and has bindings for various languages.
Update: If you can't modify the processing application then it is harder. You have to rely on some signs that indicate progress. (I am assuming your processing application reads a file, does some processing, then writes the result to an output file.) Do you know the final size the result should be? If so, you need to check the size repeatedly (or whenever it changes).
If you don't know the size but you know how the processing works, you may be able to use that. For example, the processing is done when the output file is closed. You can use strace to see all the system calls, including the close. You can interpose your own close() function via the LD_PRELOAD environment variable (on Windows you would have to replace DLLs). This way you can sort of modify the processing program without recompiling it or even having access to its source.
You can use named pipes: the first app will read from it, but the pipe will be empty, so it will keep waiting (blocked). The second app will write into it when it wants the first one to continue.
Nothing can guarantee that your application is in a waiting state. You have to pass it some work and get back a response. It may be transactional or not: the application can confirm that it got the message before it starts processing it, or after it has processed it (successfully or not). If it is not waiting, passing it a piece of work should fail, whether by a failed write to a TCP/IP socket or other transport, or by a timeout. This depends on the implementation, what kind of transport you are using, and other requirements.
There is actually a way of figuring out whether a process (thread) is in a blocking state waiting for data on a socket (or other source), but it requires the client to be on the same computer and to have the necessary access privileges, and it makes little sense outside of debugging, which you can do with any debugger anyway.
Overall, the idea of making sure the application is waiting for data before trying to pass it that data smells bad. Not to mention the race condition: what if you checked and it was OK, but when you actually tried to send the data the application was no longer waiting (even if only by microseconds)?