I have a C++ program that is constantly generating a large amount of data that needs to be sent to a Rails server. Both the program and the server are on the same machine, running SUSE Linux.
What is the most efficient and simple solution for this?
Sockets are the way to go. If you want good asynchronous and cross-platform sockets in C++, your best bet is probably boost::asio.
You could store the data the way you want (file or database).
The only tricky point is making your Rails app aware that the C++ program has finished.
I'd strongly advise you to store that status in a cache, so that checking it as often as you need stays cheap.
You could use sockets, since both of your programs reside on the same local machine, and in general it is pretty straightforward to send serialized data over a local socket. Since the socket uses an internal buffer, the transfer should be very fast. Your C++ program can either push data to the Rails server, or you can have the Rails server poll the C++ program, provided you set up a cache in your C++ program to hold the data between polling calls. The push method would probably work best, though.
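For illustration, here is a minimal sketch of the push approach over a local (Unix domain) socket. The socket path is hypothetical, it assumes something on the Rails side is listening on that path, and error handling is omitted:

    // push_client.cpp -- sketch: push serialized data over a Unix domain socket
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>
    #include <cstring>
    #include <string>

    int main() {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        sockaddr_un addr{};
        addr.sun_family = AF_UNIX;
        std::strncpy(addr.sun_path, "/tmp/rails_feed.sock", sizeof(addr.sun_path) - 1);
        if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) == 0) {
            std::string payload = "serialized record\n"; // your data, serialized however you like
            write(fd, payload.data(), payload.size());
        }
        close(fd);
        return 0;
    }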
I have an API in C++ which connects via Bluetooth to a device and measures data.
Now I want to use this captured data live and evaluate it in another language like R or Python. How is this done?
So I get live data from my C++ API in a console application within Visual Studio; now I want to pipe this data stream to another "instance" like Python or R (maybe from within another IDE) and run my script on the data. Afterwards the data does not need to be piped back.
What is a good or correct way to achieve this? In the beginning I thought I would have to add native support for Python within my C++ project; now, however, I think it would be enough to just take this little bit of data and pipe it to a local server instance where e.g. my R/Shiny application runs, and read it in as a dataframe.
Has anyone worked with a C++ library for a device and piped that data live into another analysis setup in a different language? How have you done it?
I think the best way would be to use TCP/IPC communication over a socket.
In C++, implement a server which reads the data and publishes it to a socket.
In Python, implement a client which simply listens on the socket and processes the data every time it's published by the C++ server.
If you want an easy C++ library for socket communication, I suggest looking into either ZMQ or nanomsg, but if your use case is simple enough, native sockets can do the job simply and efficiently.
Edit: If you wish to go the ZMQ way, you can start with the ZGuide. You also have this tutorial about sending data between C++ and Python using ZMQ.
Nanomsg is a fork of ZMQ, so most of the ZMQ concepts apply to it as well.
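As a rough illustration of the C++ publisher side, here is a sketch using cppzmq (the C++ binding for ZMQ); the endpoint and payload are hypothetical, and a reasonably recent cppzmq is assumed:

    // publisher.cpp -- sketch: publish measurements on a ZMQ PUB socket
    #include <zmq.hpp>
    #include <chrono>
    #include <string>
    #include <thread>

    int main() {
        zmq::context_t ctx(1);
        zmq::socket_t pub(ctx, zmq::socket_type::pub);
        pub.bind("tcp://*:5556"); // hypothetical port

        while (true) {
            std::string sample = "42.0"; // one measurement from the device API
            pub.send(zmq::buffer(sample), zmq::send_flags::none);
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
        }
    }

The Python side would then connect a SUB socket to tcp://localhost:5556, subscribe to everything, and process each message as it arrives.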
If you want to use native sockets, there are already plenty of tutorials in both C++ and Python; just search on Google.
If both programs are independent, you can just use a standard system pipe.
You run both programs from a terminal, piping the output of the first one into the input of the second one.
The syntax is usually:

    cpp_program.exe | python_program.py

Then you just use standard output in the C++ program (functions like printf or std::cout, which write data to the terminal). In the other program you use the standard functions for reading data from standard input.
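As a tiny illustration, the C++ side needs nothing more than ordinary writes to standard output (the record format here is made up):

    // producer.cpp -- sketch: write one record per line to stdout for piping
    #include <iostream>

    int main() {
        for (int i = 0; i < 1000; ++i) {
            std::cout << "sample," << i << '\n'; // the consumer reads these lines from its stdin
        }
        return 0; // std::cout is flushed automatically on normal exit
    }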
This solution has a few disadvantages:
Input/output streams are usually treated as text. If you want to pipe binary data there may be problems; for example, on some systems the byte "\n" may be replaced with "\r\n".
You cannot take user input in the second program. (At least not without some tricks to access the real terminal input.)
Pipes have a finite size. If the second program is too slow to process data as fast as the first program produces it, the first program may be slowed down by print operations that wait for the pipe to drain. (Or maybe it throws an exception; I'm not sure.) In that case it may be a better idea to use a file as a buffer.
Hi, I'm working on a C++ project that I'm trying to keep OS independent, and I have two processes which need to communicate. I was thinking about setting up a third process (possibly as a service?) to coordinate the other two, asynchronously.
Client 1 will tell the intermediate process when data is ready and send the data to it. The intermediate process will then hold this data until client 2 tells it that it is ready for the data. If the intermediate process has not received new data from client 1, it will tell client 2 to wait.
Since I am trying to keep this OS independent, I don't really know what to use. I have looked into using MPI, but it doesn't really seem to fit this purpose. I have also looked into Boost.Asio, named pipes, RPCs, and RCF. I'm currently programming on Windows, but I'd like to avoid the Win32 API so that the code could potentially be compiled on Linux.
Here's a little more detail on the two processes.
We have a back-end process/model (client 1) that will receive initial inputs from a GUI (client 2, written in Qt) via the intermediate process. The model will then proceed to work until the end condition is met, sending data to the server as it becomes ready. The GUI will ask the intermediate process for data at regular intervals and will be told to wait if the model has not updated the data. As the data becomes available from the model, we also want to be able to keep any previous data from the current session for exporting to a file if the user chooses to do so (i.e., we'll want the GUI to issue a command to the interface to export (or load) the data).
My modification privileges for the back end/model are minimal, other than adhering to the design outlined above. I have a decent amount of C++ experience but not much parallel/asynchronous application experience. Any help or direction is greatly appreciated.
Standard BSD TCP/IP sockets are mostly platform independent. They work, with some minor differences, on both Windows and Unices (like Linux).
PS: Windows does not support AF_UNIX sockets.
I'd check out the Boost.Interprocess library. If the two processes are on the same machine, it has a number of different ways to communicate between processes, and does so in a platform-independent manner.
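For example, Boost.Interprocess has a message_queue that maps well onto the intermediate-process design. The sketch below runs both ends in one process for brevity, and the queue name is hypothetical; in your setup, client 1 and client 2 would open the same named queue from separate processes (create_only on one side, open_only on the other):

    // mq_sketch.cpp -- sketch: a named message queue with Boost.Interprocess
    #include <boost/interprocess/ipc/message_queue.hpp>

    namespace bip = boost::interprocess;

    int main() {
        bip::message_queue::remove("model_data");  // clean up any stale queue
        bip::message_queue mq(bip::create_only, "model_data", 100, sizeof(double));

        double value = 3.14;
        mq.send(&value, sizeof(value), 0);         // client 1 pushes a value

        double received = 0.0;
        bip::message_queue::size_type recvd_size = 0;
        unsigned int priority = 0;
        mq.receive(&received, sizeof(received), recvd_size, priority); // client 2 pops it

        bip::message_queue::remove("model_data");
        return 0;
    }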
I am not sure if you have considered the messaging format, but if you are sending structured data between processes you should consider looking at Google Protocol Buffers.
These relate to the content of the messages (what is passed) rather than how they are passed.
boost::asio is platform independent, although it doesn't require C++ at both ends. Of course, when you are using C++ on both ends you can use boost::asio as your transport.
I am trying to make an AIR application that needs to pass an image (.jpg/.png) to a C++ app that does number crunching (this needs to be done very often, like every 2-3 seconds). I've managed to pass the image by saving it to disk via AIR, then opening this file with the C++ program (and passing the filename as an argument to the C++ program), but this method is really slow because it involves lots of disk I/O.
Is there a method to send an image directly to a native process?
Edit: There is a good Flash-C++ communication example at http://www.marijnspeelman.nl/blog/2008/03/06/face-detection-using-flash-and-c-revisited/ using sockets. The big problem with this method is that some firewall settings can block the communication (I get a Windows Firewall warning when I start the app).
There are several ways to transmit data between two processes.
One of the most efficient, and easiest to set up, is to use TCP sockets.
It means that your C/C++ program will listen for (TCP/HTTP) requests, and that your AIR program will send the requests with all the data inside.
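As a rough sketch of the C++ side, assuming the AIR app frames each image as a 4-byte length prefix followed by the raw bytes (that framing, and the port, are my own choices, not anything AIR mandates). This uses POSIX sockets; on Windows you'd use the equivalent Winsock calls (plus WSAStartup), and binding to the loopback interface only may also help with firewall prompts. Error handling is omitted:

    // image_server.cpp -- sketch: accept one connection, read a length-prefixed image
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    int main() {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(9000);                    // hypothetical port
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);  // localhost only
        bind(srv, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
        listen(srv, 1);
        int cli = accept(srv, nullptr, nullptr);

        uint32_t len_be = 0;
        recv(cli, &len_be, sizeof(len_be), MSG_WAITALL); // 4-byte big-endian length prefix
        uint32_t len = ntohl(len_be);
        std::vector<char> image(len);
        recv(cli, image.data(), len, MSG_WAITALL);       // then the image bytes
        std::printf("received %u bytes\n", len);

        close(cli);
        close(srv);
        return 0;
    }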
I want my server app to be able to send data to be processed by a bunch of various clients, and then have the processed data returned to the server.
Ideally, I'd have some call like some_process = send_to_client_for_calculating(connection, data)
I just need to be able to send a bunch of data to a client, tell the client what to do (preferably in the same message, which can be done with an array [command, data]), and then return the data...
I'm breaking up pieces of a neural network (it's very large) and then assembling them all later.
If I need to be clearer, let me know how.
I'm shocked no one has thrown it out there... how about boost::asio?
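Something along these lines, as a minimal synchronous sketch (the endpoint, message format, and reply handling are all placeholders, and a recent Boost is assumed):

    // asio_client.cpp -- sketch: send a [command, data] message and read the result back
    #include <boost/asio.hpp>
    #include <string>

    int main() {
        boost::asio::io_context io;
        boost::asio::ip::tcp::socket sock(io);
        sock.connect(boost::asio::ip::tcp::endpoint(
            boost::asio::ip::make_address("127.0.0.1"), 9000)); // hypothetical endpoint

        std::string message = "calculate:chunk-0"; // [command, data], serialized however you like
        boost::asio::write(sock, boost::asio::buffer(message));

        char reply[256];                            // fixed-size buffer just for the sketch
        std::size_t n = sock.read_some(boost::asio::buffer(reply));
        return n > 0 ? 0 : 1;
    }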
Why don't you have a look at using Apache ActiveMQ? It's a Java JMS server, but it has C++ bindings and does what you want with a minimum of networking code to write. You basically just subscribe to messages and send responses back; the MQ server takes care of dispatch and message persistence for you.
You could try using beanstalkd, a fast work queue. I don't know if it fits your purposes. There is a client library written in C, which you should be able to use from C++.
I'd suggest looking at gSOAP, which implements SOAP in C++, including networking.
I'm working on a build tool that launches thousands of processes (compiles, links, etc.). It also distributes executables to remote machines so that the build can be run across hundreds of slave machines. I'm implementing DLL injection to monitor the child processes of my build process so that I can see that they opened/closed the resources I expected them to. That way I can tell whether my users aren't specifying dependency information correctly.
My question is:
I've got the DLL injection working, but I'm not all that familiar with Windows programming. What would be the best/fastest way to call back to the parent build process with all the millions of file I/O reports that the children will be generating? I've thought about having them write to a non-blocking socket, but I've been wondering if pipes/shared memory or maybe COM would be better.
First, since you're apparently dealing with communication between machines, not just within one machine, I'd rule out shared memory immediately.
I'd think hard about minimizing the amount of data rather than worrying a lot about how fast you can send it. Instead of sending millions of individual file I/O reports, I'd batch together a few kilobytes of that data (or something on that order) and send a hash of each packet. With a careful choice of packet size, you should be able to reduce the data transmission to the point that you can simply use whatever method you find most convenient, rather than trying to pick the one that's fastest.
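A sketch of that batching idea (the sizes, names, and transport call are all illustrative):

    // batcher.cpp -- sketch: accumulate reports and flush in ~4 KB packets
    #include <cstddef>
    #include <string>

    class ReportBatcher {
    public:
        explicit ReportBatcher(std::size_t flush_threshold = 4096)
            : threshold_(flush_threshold) {}

        void add(const std::string& report) {
            buffer_ += report;
            buffer_ += '\n';
            if (buffer_.size() >= threshold_) flush();
        }

        void flush() {
            if (buffer_.empty()) return;
            send_packet(buffer_); // hypothetical transport call (socket, pipe, ...)
            buffer_.clear();
        }

    private:
        void send_packet(const std::string& /*packet*/) { /* write to your channel here */ }

        std::string buffer_;
        std::size_t threshold_;
    };

    int main() {
        ReportBatcher batcher;
        batcher.add("open C:/project/main.cpp"); // one file I/O report
        batcher.flush();                         // force out the final partial packet
        return 0;
    }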
If you stay in the Windows world (none of your machines is Linux or whatever), named pipes are a good choice, because they are fast and can be accessed across machine boundaries. I think shared memory is out of the race, because it can't cross a machine boundary. Distributed COM allows you to formulate the contract in IDL, but I think XML messages via pipes are also fine. XML messages have the benefit of being completely independent of the channel: if you need Linux later, you can switch to TCP/IP transport and send the same XML messages.
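A bare-bones named pipe server might look like this (the pipe name is hypothetical, and error handling is minimal):

    // pipe_server.cpp -- sketch: a Win32 named pipe server that reads reports
    #include <windows.h>
    #include <cstdio>

    int main() {
        HANDLE pipe = CreateNamedPipe(
            TEXT("\\\\.\\pipe\\BuildReports"), // reachable remotely as \\machine\pipe\BuildReports
            PIPE_ACCESS_INBOUND,
            PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
            PIPE_UNLIMITED_INSTANCES, 4096, 4096, 0, NULL);
        if (pipe == INVALID_HANDLE_VALUE) return 1;

        if (ConnectNamedPipe(pipe, NULL)) { // block until one client connects
            char buf[4096];
            DWORD read = 0;
            while (ReadFile(pipe, buf, sizeof(buf), &read, NULL) && read > 0) {
                std::printf("got %lu bytes\n", static_cast<unsigned long>(read));
            }
        }
        CloseHandle(pipe);
        return 0;
    }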
Some additional techniques, with limitations:
Another forgotten but hot candidate is RPC (remote procedure calls). A lot of Windows services rely on it, but I think RPC is hard to program.
If you are on the same machine and you only need to send some status information, you can register a window message via RegisterWindowMessage() and send messages via SendMessage().
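For the same-machine status case, a sketch (the message name and window title are made up):

    // notify.cpp -- sketch: notify another process via a registered window message
    #include <windows.h>

    int main() {
        // Both processes call RegisterWindowMessage with the same string
        // and get back the same message id.
        UINT msgId = RegisterWindowMessage(TEXT("MyBuildTool.StatusUpdate"));

        // Find the parent's window by its (hypothetical) title and notify it.
        HWND target = FindWindow(NULL, TEXT("Build Monitor"));
        if (target != NULL) {
            SendMessage(target, msgId, /*wParam=*/1, /*lParam=*/0);
        }
        return 0;
    }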
Apart from all the suggestions from Thomas, you might also just use a common database to store the results. And if that is too slow, use one of the more modern (and fast) key/value databases (like Tokyo Cabinet, memcachedb, etc.).
This sounds like a lot of overkill for the task of verifying the files used in a build. How about just scanning the build files, or capturing the output from the build tools?