Single producer and multiple consumers using C++

I'm using C++ and have a simple client .exe that, when handed a file name, processes it and returns a success or error code. I want to create a Windows C++ .exe that does the following, and I was looking for sample code to do it:
Start 4 (or x) client .exe processes (for example, using CreateProcess).
While the list of files is not empty, send work to the clients: each client processes the file name it is sent and returns either a success or an error code.
Once the list of files to process is empty (or the producer .exe shuts down), close the 4 clients so they shut down.
I did some research on this and found that pipes can be used to communicate between processes. I found this sample app that demonstrates communication between a server and a client in C++: https://code.msdn.microsoft.com/windowsapps/CppNamedPipeServer-d1778534
The sample app, however, sends requests from the client to the server and gets responses. I wanted to modify it (or use a different sample app) to do batch processing through a common queue of work (or a pipe that stores this queue or batch of work) and send work to the clients. I want to synchronize this work so that as soon as a client is done with one file, I send it another file to process.
Basically, I want to create a sample application .exe that starts multiple clients and sends them work through inter-process communication. Any sample C++ code to do this is appreciated.
Thanks
Jeff Lacoste

You could have a look at Boost. It has boost::interprocess, where you can read about a lot of the available methods for IPC.
I personally never use boost::interprocess, as I'm a huge fan of boost::asio, and for your purposes it has everything you need (except creating a process).
There are many, many more to be found on Google, and which library to use (or whether to use the native OS API directly) is entirely opinion-based, which is why I wonder why this question has not been closed yet.
As for your request for code samples: those two links contain samples for everything you listed regarding IPC, and both libraries are open source, so you can look at how they communicate with the native OS API.
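If you want a concrete starting point with the native OS API instead, below is a minimal Win32 sketch of the producer side. It assumes a hypothetical client worker.exe that connects to the pipe name shown, reads one file name per message, and writes back a one-byte status (0 for success); a zero-length message tells it to shut down. Error handling is omitted.

#include <windows.h>
#include <string>
#include <vector>

int main() {
    const char* pipeName = "\\\\.\\pipe\\filework";   // placeholder name
    std::vector<std::string> files = { "a.txt", "b.txt", "c.txt" };

    // One pipe instance; for x clients, create x instances (or use
    // PIPE_UNLIMITED_INSTANCES) and serve each on its own thread.
    HANDLE pipe = CreateNamedPipeA(
        pipeName, PIPE_ACCESS_DUPLEX,
        PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
        1, 4096, 4096, 0, nullptr);
    if (pipe == INVALID_HANDLE_VALUE) return 1;

    // Launch the client; it is expected to connect and then loop on the pipe.
    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};
    char cmd[] = "worker.exe";                        // hypothetical client
    if (!CreateProcessA(nullptr, cmd, nullptr, nullptr, FALSE,
                        0, nullptr, nullptr, &si, &pi)) return 1;

    // Wait for the client to connect (fails benignly with
    // ERROR_PIPE_CONNECTED if it connected already).
    ConnectNamedPipe(pipe, nullptr);

    for (const auto& f : files) {
        DWORD n = 0;
        char status = 0;
        WriteFile(pipe, f.c_str(), (DWORD)f.size(), &n, nullptr);  // send work
        ReadFile(pipe, &status, 1, &n, nullptr);       // 0 = success, else error
    }

    DWORD n = 0;
    WriteFile(pipe, "", 0, &n, nullptr);               // zero-length = shut down

    WaitForSingleObject(pi.hProcess, INFINITE);
    CloseHandle(pi.hProcess);
    CloseHandle(pi.hThread);
    CloseHandle(pipe);
    return 0;
}

The "send another file as soon as a client is done" synchronization then falls out naturally: each serving thread blocks on ReadFile for its client's status and pops the next file name from a shared, mutex-protected queue.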

Related

How to use device data from a C++ library in a follow-up analysis in another language

I have an API in C++ which connects via Bluetooth to a device and measures data.
Now I want to use this captured data live and evaluate it in another language like R or Python. How is this done?
So I get live data from my C++ API in a console application within Visual Studio; now I want to pipe this data stream to another "instance" like Python or R (maybe within another IDE) and run my script on the data. Afterwards the data does not need to be piped back.
What is a good or correct way to achieve this? In the beginning I thought I would have to add native support for Python within my C++ project; now, however, I think it would be enough to just take this little bit of data and pipe it to a local server instance where e.g. my R/Shiny application runs and reads it in as a data frame?
Has anyone worked with a C++ library for a device and piped that data live into an analysis setup in a different language? How have you done it?
I think the best way would be to use TCP/IP communication over a socket.
In C++, implement a server which reads the data and publishes it to a socket.
In Python, implement a client which simply listens on the socket and processes the data every time it is published by the C++ server.
If you want an easy C++ library for socket communication, I suggest looking into either ZMQ or nanomsg, but if your use case is simple enough, native sockets can do the job simply and efficiently.
Edit: If you wish to go the ZMQ way, you can start with the ZGuide. You also have this tutorial about sending data between C++ and Python using zmq.
nanomsg is a fork of ZMQ, so most of the concepts of ZMQ will apply to it.
If you want to use native sockets, there are already plenty of tutorials in both C++ and Python; just search on Google.
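To make the ZMQ route concrete, here is a minimal C++ publisher sketch using the cppzmq header (zmq.hpp); the endpoint and the message contents are placeholder assumptions:

#include <zmq.hpp>
#include <string>

int main() {
    zmq::context_t ctx(1);
    zmq::socket_t pub(ctx, zmq::socket_type::pub);
    pub.bind("tcp://*:5556");   // placeholder endpoint

    // Publish each measurement as it arrives from the device API.
    for (int i = 0; i < 100; ++i) {
        std::string sample = "sample " + std::to_string(i);
        pub.send(zmq::buffer(sample), zmq::send_flags::none);
    }
    return 0;
}

The Python side is then a pyzmq SUB socket connected to the same endpoint (with an empty SUBSCRIBE filter so it receives everything), reading messages in a loop and appending them to a data frame.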
If both programs are independent, you can just use a standard system pipe.
You just run both programs from a system terminal, piping the output of the first one into the input of the second one.
The syntax is usually:
cpp_program.exe | python_program.py
Then you just use standard output in the C++ program (functions like printf or std::cout, which write data to the terminal). In the other program you use the standard functions for reading data from the terminal.
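For example, the C++ side can be as small as this sketch (the values are fabricated placeholders; one record per line keeps the reader's parsing trivial):

#include <iostream>

int main() {
    // Write one reading per line; the shell pipe delivers it to the reader.
    for (int i = 0; i < 100; ++i)
        std::cout << "sample," << i << "," << (i * 0.5) << "\n";
    std::cout.flush();   // make sure nothing is left sitting in the buffer
    return 0;
}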
This solution has a few disadvantages:
Input/output streams are usually treated as text. If you want to pipe binary data, there may be problems; for example, on some systems the byte "\n" may be replaced with "\r\n".
You cannot take user input in the second program. (At least not without using some tricks to access real terminal input.)
Pipes have a finite size. If the second program is too slow to process the data as fast as the first program produces it, the first program may be slowed down by print operations that wait for the pipe to drain. (Or maybe it throws an exception; I'm not sure.) In this case it may be a better idea to use a file as a buffer.

Chrome Native Messaging Error when Communicating

I am trying to create an extension for Google Chrome in which I want to process some images.
The extension was previously created using NPAPI, but with that being phased out I need to switch to another alternative; Native Messaging looked best suited for this job.
The native host is written in C++. It reads from stdin a formatted message sent from the extension (something like {action:"name_of_action",buffer:"x0x0",length:"4"}), parses it, extracts the buffer, and does some processing with the image; after that I need to return a message to the extension.
The problem I am facing is that after a few messages (the number is not the same every time), the port disconnects, and in the JavaScript console the message is: Error when communicating with the native messaging host.
My application basically does this:
/* buffer, delimiter and len are defined elsewhere in the real code */
size_t i;
int c;
while (true)
{
    /* read until I reach the delimiter */
    i = 0;
    while ((c = getchar()) != EOF) {
        buffer[i] = (char)c;
        i++;
        if (i >= len && memcmp(buffer + i - len, delimiter, len) == 0)
            break;
    }
    if (c == EOF)
        break;  /* stdin closed: Chrome shut the port down */
    ProcessMessage(buffer);
}
I am sending image buffers from the extension (base64 encoded), then decoding them and processing the buffer in the app. I have also tried (on Windows) using the UrlDownloadToFile function to download the image from C++, but that seems to fail, ending up with the same message: Error when communicating with the native messaging host. Does anybody know why Chrome doesn't allow downloading a file from the messaging host executable?
If you just want to do image processing in native code, then you probably don't need to use Native Messaging. You can most likely use NaCl, or PNaCl, which produces OS-neutral executables that can be run safely within Chrome.
To communicate with your NaCl module, you can PostMessage to and from your extension's JavaScript code. You can even send dictionary objects directly and decompose them in native code using the dictionary interface.
Native Messaging should only be needed when you need to access OS functionality not exposed by PPAPI, or when you need to load/run pre-compiled code (e.g. load a Windows DLL).
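If you do stick with Native Messaging, one detail worth checking against the loop in the question: Chrome frames every message with a 4-byte length prefix in native byte order rather than a delimiter, and on Windows stdin/stdout must be switched to binary mode. A read loop along these lines (a sketch; ProcessMessage stands in for the question's handler) is therefore more robust and also detects the port closing:

#include <cstdio>
#include <cstdint>
#include <vector>
#ifdef _WIN32
#include <io.h>
#include <fcntl.h>
#endif

void ProcessMessage(const char* json) { /* as in the question */ }

int main() {
#ifdef _WIN32
    _setmode(_fileno(stdin), _O_BINARY);    // Chrome sends raw bytes
    _setmode(_fileno(stdout), _O_BINARY);
#endif
    for (;;) {
        uint32_t msgLen = 0;                // 4-byte length prefix
        if (fread(&msgLen, sizeof(msgLen), 1, stdin) != 1)
            break;                          // EOF: Chrome closed the port
        std::vector<char> msg(msgLen + 1, '\0');
        if (fread(msg.data(), 1, msgLen, stdin) != msgLen)
            break;
        ProcessMessage(msg.data());         // JSON payload
    }
    return 0;
}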

How to communicate between two processes

Hi, I'm working on a C++ project that I'm trying to keep OS-independent, and I have two processes which need to communicate. I was thinking about setting up a third process (possibly as a service?) to coordinate the other two, asynchronously.
Client 1 will tell the intermediate process when data is ready, and send the data to it. The intermediate process will then hold this data until client 2 tells it that it is ready for the data. If the intermediate process has not received new data from client 1, it will tell client 2 to wait.
Since I am trying to keep this OS-independent, I don't really know what to use. I have looked into using MPI, but it doesn't really seem to fit this purpose. I have also looked into Boost.Asio, named pipes, RPCs, and RCF. I'm currently programming on Windows, but I'd like to avoid using the Windows API so that the code could potentially be compiled on Linux.
Here's a little more detail on the two processes.
We have a back-end process/model (client 1) that will receive initial inputs from a GUI (client 2, written in Qt) via the intermediate process. The model will then proceed to work until the end condition is met, sending data to the server as it becomes ready. The GUI will ask the intermediate process for data at regular intervals and will be told to wait if the model has not updated the data. As the data becomes available from the model, we also want to be able to keep any previous data from the current session for exporting to a file if the user chooses to do so (i.e., we'll want the GUI to issue a command to the interface to export (or load) the data).
My privileges to modify the back end/model are minimal, other than to adhere to the design outlined above. I have a decent amount of C++ experience but not much parallel/asynchronous application experience. Any help or direction is greatly appreciated.
Standard BSD TCP/IP sockets are mostly platform-independent. They work, with some minor differences, on both Windows and Unices (like Linux).
P.S. Windows does not support AF_UNIX sockets.
I'd check out the Boost.Interprocess library. If the two processes are on the same machine, it has a number of different ways to communicate between processes, and does so in a platform-independent manner.
I am not sure if you have considered the messaging system, but if you are sending structured data between processes you should consider looking at Google Protocol Buffers.
These relate to the content of the messages (what is passed) rather than how they are passed.
boost::asio is platform-independent, although using it doesn't require C++ at both ends. Of course, when you are using C++ at both ends, you can use boost::asio as your form of transport.
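To give a feel for the boost::asio route, here is a minimal sketch of client 1 notifying the intermediate process over TCP on localhost; the port number and the newline-framed message format are placeholder assumptions:

#include <boost/asio.hpp>
#include <string>

int main() {
    boost::asio::io_context io;
    boost::asio::ip::tcp::socket sock(io);

    // Connect to the intermediate process (placeholder port).
    sock.connect({ boost::asio::ip::make_address("127.0.0.1"), 9000 });

    // "Data is ready" notification followed by the payload.
    std::string msg = "DATA_READY\npayload goes here\n";
    boost::asio::write(sock, boost::asio::buffer(msg));
    return 0;
}

The intermediate process would run an acceptor that stores the latest payload and answers either WAIT or the stored data when client 2 (the Qt GUI) polls; the same code compiles unchanged on Windows and Linux.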

Message queue, c++ multi thread

I'm looking for a cross-platform, multithreaded message queue implementation in C++ (not slot/signal), preferably one based on the subject-observer pattern.
ZeroMQ looks like it may be what you are looking for.
It is well documented with lots of examples, such as this one: http://www.zeromq.org/blog:multithreaded-server , which may be what you are trying to implement.
Take a look at the "ISL" open source project (stands for "Internet Server Library", C++), whose SVN repository is located at http://svn.storozhilov.com/isl/ - the isl::AbstractMessageBroker class is a good candidate to use as the basis for your job. This is a quite simple but extensible skeleton for any message-broker subsystem (DBus, JMS, AMQP, etc.). Each client is served by two threads from a pre-started thread pool: one for receiving a message from the transport and processing it, and another for sending a message to the transport. So, in order to implement your messaging system, you have to override at least the following three virtual methods:
isl::AbstractMessageBroker::receiveMessage(...);
isl::AbstractMessageBroker::processMessage(...);
isl::AbstractMessageBroker::sendMessage(...);
An example of use is in the trunk/examples/EchoMessageBroker directory. It responds to the client with the echoed message, terminates the connection on a "bye\r\n" message, and terminates itself on SIGINT.
You can try out Apache ActiveMQ: http://activemq.apache.org. Quite robust. We use it for a FIX messaging platform; it is quite responsive and easy to configure as well.
Have a look at Intel's open-source library Threading Building Blocks. It is cross-platform, and last time I looked it had lock-free containers.
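If pulling in a library is not an option, the core of what these libraries provide can be sketched with the standard library alone; a minimal blocking message queue (observers subscribe by calling pop() on their own threads) might look like this:

#include <condition_variable>
#include <mutex>
#include <queue>

template <typename T>
class MessageQueue {
public:
    void push(T msg) {
        {
            std::lock_guard<std::mutex> lock(m_);
            q_.push(std::move(msg));
        }
        cv_.notify_one();   // wake one waiting consumer
    }
    T pop() {               // blocks until a message is available
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty(); });
        T msg = std::move(q_.front());
        q_.pop();
        return msg;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<T> q_;
};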

DLL Injection/IPC question

I'm working on a build tool that launches thousands of processes (compiles, links, etc.). It also distributes executables to remote machines so that the build can be run across hundreds of slave machines. I'm implementing DLL injection to monitor the child processes of my build process so that I can see that they opened/closed the resources I expected them to. That way I can tell if my users aren't specifying dependency information correctly.
My question is:
I've got the DLL injection working, but I'm not all that familiar with Windows programming. What would be the best/fastest way to call back to the parent build process with all the millions of file I/O reports that the children will be generating? I've thought about having them write to a non-blocking socket, but I've been wondering whether pipes, shared memory, or maybe COM would be better.
First, since you're apparently dealing with communication between machines, not just within one machine, I'd rule out shared memory immediately.
I'd think hard about trying to minimize the amount of data instead of worrying a lot about how fast you can send it. Instead of sending millions of file I/O reports, I'd batch together a few kilobytes of that data (or something on that order) and send a hash of that packet. With a careful choice of packet size, you should be able to reduce your data transmission to the point that you can simply use whatever method you find most convenient, rather than trying to pick the one that's the fastest.
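The batching idea might look like the following sketch; every name here is hypothetical, and sendToParent() stands in for whatever transport ends up being chosen:

#include <string>

class ReportBatcher {
public:
    explicit ReportBatcher(size_t flushSize = 4096) : flushSize_(flushSize) {}
    void add(const std::string& line) {
        batch_ += line;
        batch_ += '\n';
        if (batch_.size() >= flushSize_)
            flush();
    }
    void flush() {
        if (batch_.empty()) return;
        sendToParent(batch_);   // pipe/socket/COM write goes here
        batch_.clear();
    }
private:
    void sendToParent(const std::string& data) { /* transport of choice */ }
    std::string batch_;
    size_t flushSize_;
};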
If you stay in the Windows world (none of your machines is Linux or whatever), named pipes are a good choice, because they are fast and can be accessed across machine boundaries. I think shared memory is out of the race, because it can't cross a machine boundary. Distributed COM allows you to formulate the contract in IDL, but I think XML messages via pipes are also fine. XML messages have the benefit of working completely independently of the channel; if you need Linux later, you can switch to TCP/IP transport and send your XML messages over that.
Some additional techniques, with limitations:
Another forgotten but hot candidate is RPC (remote procedure calls). A lot of Windows services rely on it, but I think RPC is hard to program.
If you are on the same machine and you only need to send some status information, you can register a windows message via RegisterWindowMessage() and send messages via SendMessage(); see the sketch below.
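A sketch of that window-message approach (the message name and the receiver's window class are assumptions; this carries only small status values, same machine only):

#include <windows.h>

int main() {
    // Both processes register the same name and receive the same message id.
    UINT reportMsg = RegisterWindowMessageW(L"MyBuildTool.Status");
    HWND target = FindWindowW(L"BuildMonitorClass", nullptr);  // assumed class
    if (target != nullptr)
        SendMessageW(target, reportMsg, /*status*/ 1, 0);
    return 0;
}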
Apart from all the suggestions from Thomas, you might also just use a common database to store the results, and if that is too slow, use one of the more modern (and fast) key/value databases (like Tokyo Cabinet, memcachedb, etc.).
This sounds like a lot of overkill for the task of verifying the files used in a build. How about just scanning the build files, or capturing the output from the build tools?