any good and simple RPC library for inter-process calls? [closed] - c++

I need to send a (probably just one) simple one-way command from client processes to a server process, with arguments of built-in C++ types (so serialization is pretty simple). C++, Windows XP+.
I'm looking for a library that doesn't require complicated configuration, provides a simple interface, doesn't take hours to days of learning, and doesn't have commercial usage restrictions. A simple solution for a simple problem.
Boost.Interprocess is too low-level for this simple task because it doesn't provide an RPC interface. Sockets are probably overkill too because I don't need to communicate between machines. The same goes for DCOM, CORBA et al. Named pipes? Never used them; is there any good library over the WinAPI? OpenMPI?

I don't think sockets are really overkill. The alternatives all have their own problems, and sockets are far better supported than named pipes, shared memory, etc., because almost everyone uses them. The speed of sockets on the local system is probably not an issue.
There's Apache Thrift:
http://incubator.apache.org/thrift/
There are a few RPC implementations wrapped around Google's protobuf library as the marshaling mechanism:
https://github.com/google/protobuf/blob/master/docs/third_party.md#rpc-implementations
There's XML-RPC:
http://xmlrpc-c.sourceforge.net/
If your messages are really simple, I might consider using UDP packets; then there are no connections to manage.
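To make that idea concrete, here is a minimal sketch of the fire-and-forget UDP approach with plain Winsock. The port number 9000 and the command string are just placeholders for whatever your server process expects:
#include <winsock2.h>
#include <cstdio>
#pragma comment(lib, "ws2_32.lib")   // MSVC-specific; otherwise link ws2_32 manually

int main()
{
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
        return 1;

    SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);

    sockaddr_in dest = {};
    dest.sin_family = AF_INET;
    dest.sin_port = htons(9000);                    // hypothetical port the server listens on
    dest.sin_addr.s_addr = inet_addr("127.0.0.1");  // loopback only, never leaves the machine

    // Fire-and-forget: one datagram per command, no connection state to manage.
    const char cmd[] = "DO_SOMETHING 42";
    sendto(s, cmd, (int)sizeof(cmd), 0, reinterpret_cast<sockaddr*>(&dest), sizeof(dest));

    closesocket(s);
    WSACleanup();
    return 0;
}
The server side would bind a SOCK_DGRAM socket to the same port and loop on recvfrom.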

You might like ZeroMQ for something like this. It's perhaps not so much a complete RPC as a raw byte messaging framework you could use to build an RPC. It's simple, lightweight, and has impressive performance. You can easily implement an RPC on top of it. Here's an example server straight from the manual:
//
// Hello World server in C++
// Binds REP socket to tcp://*:5555
// Expects "Hello" from client, replies with "World"
//
#include <zmq.hpp>
#include <unistd.h>
#include <stdio.h>
#include <string.h>

int main () {
    // Prepare our context and socket
    zmq::context_t context (1);
    zmq::socket_t socket (context, ZMQ_REP);
    socket.bind ("tcp://*:5555");

    while (true) {
        zmq::message_t request;

        // Wait for next request from client
        socket.recv (&request);
        printf ("Received Hello");

        // Do some 'work'
        sleep (1);

        // Send reply back to client
        zmq::message_t reply (5);
        memcpy ((void *) reply.data (), "World", 5);
        socket.send (reply);
    }
    return 0;
}
This example uses tcp://*:5555, but ZeroMQ uses more efficient IPC transports if you bind with:
socket.bind("ipc://route.to.ipc");
or the even faster inter-thread transport:
socket.bind("inproc://path.for.client.to.connect");

If you only need to support Windows, I'd use the built-in Windows RPC. I've written two introductory articles about it:
http://www.codeproject.com/KB/IP/rpcintro1.aspx
http://www.codeproject.com/KB/IP/rpcintro2.aspx
You could use the ncalrpc protocol if you only need local inter-process communication.
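As a rough sketch, the client-side ncalrpc binding looks something like the following. The endpoint name "MyLocalEndpoint" is hypothetical, the interface itself would come from your MIDL-generated stubs, and you link against rpcrt4.lib:
#include <windows.h>
#include <rpc.h>

// Build a binding handle for a local-only RPC endpoint over ncalrpc.
RPC_BINDING_HANDLE BindToLocalServer()
{
    RPC_CSTR stringBinding = NULL;
    RPC_BINDING_HANDLE binding = NULL;

    RpcStringBindingComposeA(NULL,
                             (RPC_CSTR)"ncalrpc",         // local RPC protocol sequence
                             NULL,                        // no network address needed
                             (RPC_CSTR)"MyLocalEndpoint", // hypothetical endpoint name
                             NULL,
                             &stringBinding);

    RpcBindingFromStringBindingA(stringBinding, &binding);
    RpcStringFreeA(&stringBinding);
    return binding;   // hand this to the MIDL-generated client stubs
}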

Boost.MPI. Simple, fast, scalable.
#include <boost/mpi/environment.hpp>
#include <boost/mpi/communicator.hpp>
#include <iostream>
#include <sstream>
#include <string>

namespace mpi = boost::mpi;

int main(int argc, char* argv[])
{
    mpi::environment env(argc, argv);
    mpi::communicator world;

    if (world.rank() == 0) {
        std::stringstream ss;
        ss << "Hello, I am process " << world.rank() << " of " << world.size() << ".";
        world.send(1, 0, ss.str());       // rank 0 sends the greeting to rank 1
    } else if (world.rank() == 1) {
        std::string msg;
        world.recv(0, 0, msg);            // rank 1 receives and prints it
        std::cout << msg << std::endl;
    }
}

If you are working on Windows only and really need a C++ interface, use COM/DCOM. It is based on RPC (in turn based on DCE RPC).
It is extremely simple to use -- provided you take the time to learn the basics.
ATL: http://msdn.microsoft.com/en-us/library/3ax346b7(VS.71).aspx
Interface Definition Language: http://msdn.microsoft.com/en-us/library/aa367091(VS.85).aspx

You probably don't even need a library. Windows has an IPC mechanism built deeply into its core APIs (windows.h). You can post a window message into the message queue of a different process's main window. Windows even defines a standard message to do just that: WM_COPYDATA.
MSDN documentation on WM_COPYDATA
MSDN demo code
More demo code in the following StackOverflow response
The sending process basically does:
FindWindow
SendMessage
The receiving process (window):
On Vista and later has to adjust its message filter using ChangeWindowMessageFilter
Overrides its WindowProc
in order to handle the incoming WM_COPYDATA
A rough sketch of both sides follows.
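This is a minimal sketch only; the window title "MyServerWindow" and the dwData value are hypothetical, and the receiver snippet belongs inside an existing WindowProc:
#include <windows.h>
#include <cstring>

// Sender: find the server's main window by title and hand it a block of bytes.
void send_command(const char* payload)
{
    HWND target = FindWindowA(NULL, "MyServerWindow");   // hypothetical window title
    if (!target)
        return;

    COPYDATASTRUCT cds = {};
    cds.dwData = 1;                              // application-defined message id
    cds.cbData = (DWORD)strlen(payload) + 1;     // byte count, including terminator
    cds.lpData = (PVOID)payload;

    // SendMessage blocks until the receiving WindowProc has handled WM_COPYDATA.
    SendMessage(target, WM_COPYDATA, 0, (LPARAM)&cds);
}

int main()
{
    send_command("DO_SOMETHING 42");
    return 0;
}

// Receiver: inside the server window's WindowProc.
//   case WM_COPYDATA: {
//       const COPYDATASTRUCT* cds = (const COPYDATASTRUCT*)lParam;
//       // lpData is only valid during this call, so copy it out immediately.
//       std::string msg((const char*)cds->lpData, cds->cbData);
//       return TRUE;
//   }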

I know we are far from "easy to use" here, but of course you can stick with CORBA, e.g. ACE/TAO.

I'm told RPC with RakNet is nice and simple.

Also, you might look at msgpack-rpc
Update
Thrift and Protobuf are more flexible, I think, but they require you to write some code in a specific format. For example, Protobuf needs a .proto file, which is compiled with a dedicated compiler from the package that generates some classes. In some cases this can be more cumbersome than the rest of the code.
msgpack-rpc is much simpler. It doesn't require writing any extra code. Here is an example:
#include <iostream>
#include <msgpack/rpc/server.h>
#include <msgpack/rpc/client.h>

class Server : public msgpack::rpc::dispatcher {
public:
    typedef msgpack::rpc::request request_;

    Server() {};
    virtual ~Server() {};

    void dispatch(request_ req)
    try {
        std::string method;
        req.method().convert(&method);
        if (method == "id") {
            id(req);
        } else if (method == "name") {
            name(req);
        } else if (method == "err") {
            msgpack::type::tuple<> params;
            req.params().convert(&params);
            err(req);
        } else {
            req.error(msgpack::rpc::NO_METHOD_ERROR);
        }
    }
    catch (msgpack::type_error& e) {
        req.error(msgpack::rpc::ARGUMENT_ERROR);
        return;
    }
    catch (std::exception& e) {
        req.error(std::string(e.what()));
        return;
    }

    void id(request_ req) {
        req.result(1);
    }
    void name(request_ req) {
        req.result(std::string("name"));
    }
    void err(request_ req) {
        req.error(std::string("always fail"));
    }
};

int main() {
    // { run RPC server
    msgpack::rpc::server server;
    std::auto_ptr<msgpack::rpc::dispatcher> dispatcher(new Server);
    server.serve(dispatcher.get());
    server.listen("0.0.0.0", 18811);
    server.start(1);
    // }

    msgpack::rpc::client c("127.0.0.1", 18811);
    int64_t id = c.call("id").get<int64_t>();
    std::string name = c.call("name").get<std::string>();
    std::cout << "ID: " << id << std::endl;
    std::cout << "name: " << name << std::endl;
    return 0;
}
Output
ID: 1
name: name
You can find more complicated examples here: https://github.com/msgpack/msgpack-rpc/tree/master/cpp/test

I'm using XmlRpc C++ for Windows, found here.
Really easy to use :) The only catch is that it's client-only!

There's also Microsoft Message Queuing (MSMQ), which is fairly straightforward to use when all processes are on the local machine.

The simplest solution for inter-process communication is to use the filesystem. Requests and responses can be written as temp files, and you can work out a naming convention for request and response files.
This will not give you the best performance, but maybe it will be good enough.
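A minimal sketch of such a convention; the directory and file names are purely illustrative. The client writes to a .tmp file and renames it when complete, so the server never picks up a half-written request:
#include <cstdio>
#include <fstream>
#include <sstream>
#include <string>

// Drop a request file for the server to pick up; paths/names are hypothetical.
void send_request(int id, const std::string& payload)
{
    std::ostringstream base;
    base << "C:/ipc/req_" << id;

    std::string tmp        = base.str() + ".tmp";
    std::string final_name = base.str() + ".txt";

    std::ofstream out(tmp.c_str());
    out << payload;
    out.close();

    // Rename is (near) atomic, so the server only ever sees complete files.
    std::rename(tmp.c_str(), final_name.c_str());
}

// e.g. send_request(1, "DO_SOMETHING 42");
// The server polls the directory for *.txt files, reads them, and deletes them.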

Related

Turn stream into function argument to use Telnet and Ncurses

Hello fellow programmers.
I'm developing a C/C++ program with sockets that is supposed to connect to the server via Telnet.
To send text and ANSI codes to the remote Telnet terminal I'm using this function:
void writeline(int socketfd, string line)
{
    string tosend = line + "\n";
    write(socketfd, tosend.c_str(), tosend.length());
}
Is there a way to create a stream object like cout, cerr, clog or even a FILE (from C) that sends everything it gets to a function?
For example:
clientout << "Hello World";
would call something like this:
writeline(clientout.str()); //or alike
I did something like this last year when I programmed a microprocessor (with AVR) and redirected stdout into a function that sent everything to the USART connection. I was hoping I could do something like that now.
This is how I did it then:
static int usart_putchar(char c, FILE *stream)
{
    while ( !(UCSR0A & (1<<UDRE0)) );
    UDR0 = c;
    return 0;
}

static FILE usart_stream = FDEV_SETUP_STREAM(usart_putchar, NULL, _FDEV_SETUP_WRITE);

int main()
{
    //(...)
    stdout = &usart_stream;
    //(...)
}
I'm interested in this because, besides being an easier way to print to the remote terminal, I need a stream to even think about using ncurses with my program. To use it, I need this function, which takes one stream for input and another for output.
I apologize for the long question.
Thank you in advance.
Yes, it's possible to do this, but it's quite a bit of work.
Basically, you have to implement a subclass of std::streambuf that overrides std::streambuf's virtual methods to read and write from the socket directly, or to call the wrapper function you showed in your question. It's not really a lot of work (it's only a handful of virtual methods), but you have to understand their somewhat obscure semantics and implement them correctly, 100%. No margin for error.
Once you have your std::streambuf subclass, you can then instantiate a std::istream, std::ostream, and/or std::iostream (which have a constructor that takes a pointer to a std::streambuf).
Now, you have your stream that reads and/or writes to the socket.
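Here's a minimal sketch of the output-only half of that, assuming socketfd is the already-connected descriptor from your writeline function (reading would additionally require overriding underflow):
#include <streambuf>
#include <ostream>
#include <unistd.h>   // write()

// A streambuf that forwards everything written to it straight to a socket.
class socket_streambuf : public std::streambuf {
public:
    explicit socket_streambuf(int socketfd) : fd_(socketfd) {}

protected:
    // Called character-by-character when no output buffer is installed.
    virtual int_type overflow(int_type ch) {
        if (ch != traits_type::eof()) {
            char c = traits_type::to_char_type(ch);
            if (::write(fd_, &c, 1) != 1)
                return traits_type::eof();
        }
        return ch;
    }

    // Called for bulk writes, e.g. operator<< on strings.
    virtual std::streamsize xsputn(const char* s, std::streamsize n) {
        return ::write(fd_, s, static_cast<size_t>(n));
    }

private:
    int fd_;
};

// Usage:
//   socket_streambuf buf(socketfd);
//   std::ostream clientout(&buf);
//   clientout << "Hello World\r\n" << std::flush;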
Could you use Boost.Asio?
See the chat example that uses posix::stream_descriptor: http://www.boost.org/doc/libs/1_49_0/doc/html/boost_asio/example/chat/posix_chat_client.cpp

Read HTML source to string

I hope you don't frown on me too much, but this should be answerable by someone fairly easily. I want to read a file on a website into a string, so I can extract information from it.
I just want a simple way to get the HTML source read into a string. After looking around for hours I see all these libraries and curl and stuff. All I need is the raw HTML data. I don't even need a definite answer. Just something that will help me refine my search.
Just to be clear I want the raw code in a string I can manipulate, don't need any parsing etc.
You need an HTTP client library; one of many is libcurl. You would then issue a GET request to a URL and read the response back however your chosen library provides it.
Here is an example to get you started. It is C, so I am sure you can work it out.
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    CURL *curl;
    CURLcode res;

    curl = curl_easy_init();
    if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com");
        res = curl_easy_perform(curl);

        /* always cleanup */
        curl_easy_cleanup(curl);
    }
    return 0;
}
But you tagged this C++ so if you want a C++ wrapper for libcurl then use curlpp
#include <curlpp/curlpp.hpp>
#include <curlpp/Easy.hpp>
#include <curlpp/Options.hpp>
#include <iostream>

using namespace curlpp::options;

int main(int, char **)
{
    try
    {
        // That's all that is needed to do cleanup of used resources
        curlpp::Cleanup myCleanup;

        // Our request to be sent.
        curlpp::Easy myRequest;

        // Set the URL.
        myRequest.setOpt<Url>("http://example.com");

        // Send request and get a result.
        // By default the result goes to standard output.
        myRequest.perform();
    }
    catch(curlpp::RuntimeError & e)
    {
        std::cout << e.what() << std::endl;
    }
    catch(curlpp::LogicError & e)
    {
        std::cout << e.what() << std::endl;
    }
    return 0;
}
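Note that both examples above just print the page to standard output. To actually capture the HTML into a std::string, as asked, you can install a write callback with plain libcurl; here's a minimal sketch:
#include <curl/curl.h>
#include <iostream>
#include <string>

// libcurl calls this for each chunk of the response body it receives.
static size_t write_to_string(char* ptr, size_t size, size_t nmemb, void* userdata)
{
    std::string* out = static_cast<std::string*>(userdata);
    out->append(ptr, size * nmemb);
    return size * nmemb;     // report everything as consumed
}

int main()
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    std::string html;

    CURL* curl = curl_easy_init();
    if (curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com");
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_to_string);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &html);

        CURLcode res = curl_easy_perform(curl);
        if (res != CURLE_OK)
            std::cerr << curl_easy_strerror(res) << std::endl;
        curl_easy_cleanup(curl);
    }
    curl_global_cleanup();

    std::cout << "received " << html.size() << " bytes" << std::endl;
    return 0;
}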
HTTP is built on top of TCP. If you know socket programming, you can write a simple networking application that opens a socket to the desired server and issues an HTTP GET command. Whatever the server responds with, you'll have to remove the HTTP headers that precede the actual document you want.
If that sounds complicated, then just stick with libcurl.
If it is a hack, then just grab the source from "view source" and save it as a .txt file; then you can open it with a normal file I/O stream.
All those pesky libraries are a hint that it is a common and non-trivial exercise to do it right... :)
If all you want to do is grab the entire HTML code without any kind of parsing or external libraries, my suggestion would be to copy the code into a string with an IO stream.
It is the simplest way that I have in mind, but be aware that it isn't the most efficient way to do it.

Boost: how to create a thread so that you can control all of its standard output, standard errors?

I'm creating a Win32 console app in C++. I use an API (not mine, and I cannot modify its sources). It is written so that it writes some of its info onto the console screen without asking, each time I call it (48 times per second). So I want to put it into some thread and limit its output capabilities, but I need to get notified when that thread tries to output a message that is important to me. I have the message text in a standard string. How can I do such a thing in C++ using Boost?
Here's a crazy idea:
If the lib is using cout/cerr, you could replace the streambuf of these global variables with an implementation of your own. It would, on flush/data, check some thread-local variable to see if the data came from the thread that calls the library, then route it somewhere else (i.e. into a std::string/std::ostringstream) instead of to the regular cout/cerr streambufs (which you should keep around). A rough sketch of the simplest form of this is below.
If it's using C's stdout/stderr, I think it'd be harder to do properly, but it might still be doable. You'd need to create some pipes and route stuff back and forth. More of a C/unixy question then, which I don't know that much about... yet. :)
Hope it helps.
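For the std::cout case, the simplest single-threaded form of that idea looks like this; noisy_library_call() is a hypothetical stand-in for the chatty API, and the thread-local routing described above would live inside a custom streambuf rather than an ostringstream:
#include <iostream>
#include <sstream>
#include <string>

// Hypothetical stand-in for the chatty API call.
void noisy_library_call()
{
    std::cout << "chatter chatter... important: frame dropped\n";
}

int main()
{
    std::ostringstream captured;

    // Swap cout's buffer out, remembering the original so it can be restored.
    std::streambuf* original = std::cout.rdbuf(captured.rdbuf());
    noisy_library_call();                       // its cout output lands in 'captured'
    std::cout.rdbuf(original);                  // put the real buffer back

    // Only forward the messages we actually care about.
    const std::string text = captured.str();
    if (text.find("important") != std::string::npos)
        std::cout << "library said: " << text;
    return 0;
}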
That feature does not exist in Boost. You can, however, use _dup2 to replace the standard out descriptor:
#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <io.h>
#include <iostream>
#include <windows.h>

int main()
{
    HANDLE h = CreateFile(TEXT("test.txt"), GENERIC_WRITE, 0, NULL,
                          CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);

    if (0 == SetStdHandle(STD_OUTPUT_HANDLE, h)) {
        std::fprintf(stderr, "`SetStdHandle` failed with error %d\n", (int)GetLastError());
        return EXIT_FAILURE;
    }

    int h_desc = _open_osfhandle((long)h, 0);
    _dup2(h_desc, STDOUT_FILENO);

    std::printf("test\r\n");                   // This actually writes to `h`.
    std::fflush(stdout);
    std::cout << "another test" << std::endl;  // Also writes to `h`

    CloseHandle(h);
    return EXIT_SUCCESS;
}
Essentially, this trick allows you to redirect all writes to stdout, std::cout, and GetStdHandle(STD_OUTPUT_HANDLE) to a writable handle of your choosing (h). Of course, you can use CreatePipe to create the writable handle (h) and read from the readable end in another thread.
EDIT: If you are looking for a cross-platform solution, note that this trick is even easier on POSIX-compatible systems because dup2 is a standard function in unistd.h and "writable handles" are already descriptors.
I cannot think of a way to achieve what you want in Boost, as the problem is described.
However, this API behaviour is very perplexing. Spitting out reams of output to the console is a bit antisocial. Are you using a Debug build of the API library? Are you sure there's no way to configure the API so that it outputs this data to a different stream, allowing you to filter it without capturing the entire standard output? Is there a way to reduce the amount of output, so that you only see the important events you care about?
If you really need to capture standard output and act on certain strings (events) of interest, then Win32 provides ways to do this, but I'd really take a hard look at whether this output can be modified to meet your needs before resorting to that.

Flushing a boost::iostreams::zlib_compressor. How to obtain a "sync flush"?

Is there some magic required to obtain a "zlib sync flush" when using boost::iostreams::zlib_compressor? Just invoking flush on the filter, or strict_sync on a filtering_ostream containing it, doesn't seem to do the job (i.e. I want the compressor to flush enough that the decompressor can recover all the bytes consumed by the compressor so far, without closing the stream).
Looking at the header, there seem to be some "flush codes" defined (notably sync_flush), but it's unclear to me how they should be used (bearing in mind my compressor is just added into a filtering_ostream).
It turns out there is a fundamental problem: the symmetric_filter that zlib_compressor inherits from isn't itself flushable (which seems rather an oversight).
Possibly adding such support to symmetric_filter would be as simple as adding the flushable_tag and exposing the existing private flush methods, but for now I can live with it.
This C++ zlib wrapper library, of which I'm the author, supports flush functionality and is arguably simpler to use:
https://github.com/rudi-cilibrasi/zlibcomplete
It is as easy as this:
#include <iostream>
#include <zlc/zlibcomplete.hpp>

using namespace zlibcomplete;
using namespace std;

int main(int argc, char **argv)
{
    const int CHUNK = 16384;
    char inbuf[CHUNK];
    int readBytes;
    ZLibCompressor compressor(9, auto_flush);

    for (;;) {
        cin.read(inbuf, CHUNK);
        readBytes = cin.gcount();
        if (readBytes == 0) {
            break;
        }
        string input(inbuf, readBytes);
        cout << compressor.compress(input);
    }
    cout << compressor.finish();
    return 0;
}
The main difference from Boost is that instead of using a template class filter, you simply pass in a string and write out the compressed string that results, as many times as you want. Each string will be flushed (in auto_flush mode), so it can be used in interactive network protocols. At the end, just call finish to get the last bit of compressed data and a termination block.
While the Boost example is shorter, it requires using two other template classes that are not as well-known as std::string, namely filtering_streambuf and the less-standard boost::iostreams::copy. The Boost interface to zlib is incomplete in that it does not support Z_SYNC_FLUSH, which means it is not appropriate for online streaming applications such as interactive TCP protocols. I love Boost and use it as my main C++ support library in all of my C++ projects, but in this particular case it was not usable in my application due to the missing flush functionality.

Passing information between two separate programs

I want to pass the value of an input variable from one program (let's say #1) to another program (#2), and I want #2 to print the data it receives to the screen. Both need to be written in C++. This will be on Linux.
Depending on the platform there are a number of options available. What you are trying to do is typically called inter-process communication (IPC).
Some options include:
Sockets
Pipes
Queues
Shared Memory
Which is easiest probably depends on the platform you are using.
As always, there is a Boost library for that (God, I like Boost).
Nic has covered all the 4 that I wanted to mention (on the same machine):
Sockets
Pipes
Queues
Shared Memory
If writing system calls is troublesome for you, you may want to use the following libraries:
Boost http://www.boost.org/
Poco http://pocoproject.org/blog/
Nokia Qt http://qt.nokia.com/
Something you can read from Qt portable IPC: only QSharedMemory?
If efficiency is not a prime concern, then use normal file I/O; else go for IPC.
As far as Windows is concerned, you have the following options:
Clipboard ,
COM ,
Data Copy ,
DDE ,
File Mapping ,
Mailslots ,
Pipes ,
RPC ,
Windows Sockets
For Linux, you can use named pipes (efficient) or sockets; a minimal FIFO sketch follows.
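A minimal sketch of the named-pipe (FIFO) option on Linux. The path is arbitrary; program #2 is the reader, and program #1 just opens the same path for writing:
#include <sys/stat.h>
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

// Program #2: create the FIFO and print whatever arrives on it.
int main()
{
    const char* path = "/tmp/prog_fifo";    // arbitrary, agreed-upon path
    mkfifo(path, 0666);                     // no-op if it already exists

    int fd = open(path, O_RDONLY);          // blocks until a writer connects
    char buf[256];
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("received: %s\n", buf);
    }
    close(fd);
    return 0;
}

// Program #1 would simply do:
//   int fd = open("/tmp/prog_fifo", O_WRONLY);
//   write(fd, "hello", 5);
//   close(fd);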
If you're on Windows, you can use Microsoft Message Queuing. This is an example of the queue option mentioned previously.
If the data to be passed is just a variable, then one option is to set it as an environment variable (Var1) in program #1 and access it in program #2 (if both are running on the same environment/machine). I guess this is the easiest approach, instead of making it complex by using IPC/sockets etc.
I think most of the answers have addressed the common IPC mechanisms. I'd just like to add that I would probably go for sockets because they are fairly standard across several platforms. I decided on that when I needed to implement IPC that worked both on Symbian Series 60 and Windows Mobile.
The paradigm is straightforward, and apart from a few platform glitches, the model worked the same on both platforms. I would also suggest using Protocol Buffers to format the data you send through; Google uses this a lot in its infrastructure (a rough sketch follows). http://code.google.com/p/protobuf/
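As a rough sketch of that last suggestion, assuming a hypothetical message type Command generated from a small .proto file (the field names and the command.pb.h header are made up for illustration):
// command.proto (hypothetical):
//   message Command {
//     required string name  = 1;
//     required int32  value = 2;
//   }

#include <string>
#include "command.pb.h"   // generated by protoc from the hypothetical command.proto

// Serialize a command into a byte string ready to push through a socket or pipe.
std::string encode_command()
{
    Command cmd;
    cmd.set_name("print");
    cmd.set_value(42);

    std::string wire;
    cmd.SerializeToString(&wire);
    return wire;
}

// The receiving program rebuilds it with:
//   Command cmd;
//   cmd.ParseFromString(wire);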
DBUS
QtDbus
DBus-mm
In response to your comment to Roopesh Majeti's answer, here's a very simple example using environment variables:
First program:
// p1.cpp - set the variable
#include <cstdlib>
using namespace std;;
int main() {
_putenv( "MYVAR=foobar" );
system( "p2.exe" );
}
Second program:
// p2.cpp - read the variable
#include <cstdlib>
#include <iostream>
using namespace std;

int main() {
    char * p = getenv( "MYVAR" );
    if ( p == 0 ) {
        cout << "Not set" << endl;
    }
    else {
        cout << "Value: " << p << endl;
    }
}
Note:
there is no standard way of setting an environment variable
you will need to construct the name=value string from the variable contents
For a very dirty and completely unprofessional solution, you can do it like me:
save the variable to a file, and then read it (in an infinite loop, every x time) from the other program.
fsexample.open("F:/etc etc ...");
fsexample >> data1 >> data2; // etc etc
and on the other side
fsexample.open("F:/etc etc ...");
fsexample << data1 << data2; // etc etc
The trick is that F: is a virtual drive created with a RAM disk, so it is fast and can take the constant rewriting.
You could have a problem with simultaneous access, but you can check for it with
if (!fsexample.is_open()) {
    fsexample_error = 1;
}
and retry on failure.