C++: Calling methods of a remote object (RPC-like)

I'm searching for an RPC library that would allow me to call a member function of an object in another process (on Windows).
The problem I'm currently encountering is that some of the server-side objects already exist and have more than one instance. The server should be able to pass a pointer/identifier to the client, which implements a proxy that then directs the calls to the remote object's instance. So what I basically want is something like this:
Client:
TestProxy test = RemoteTestManager.GetTestById(123);
test.echo("bla");
where the instance of Test already exists on the server, and RemoteTestManager is a manager class on the server that the client obtained in another RPC call. It should preferably run over named pipes, as there can be multiple servers on the same machine (actually, what I want is more like an easy IPC :D).
So my question is: is there something like this for C++ out there, or do I have to code it myself?

For low-level serialization of the messages across the network, Protocol Buffers is a common choice...
http://code.google.com/p/protobuf/
For a more complete RPC stack take a look at Apache Thrift...
http://thrift.apache.org/
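Note that Thrift has no built-in notion of remote object references, so the "pointer/identifier" from the question would have to be passed around explicitly as a handle. A rough IDL sketch (service and method names are made up for illustration):

```thrift
// Hypothetical Thrift IDL for the use case above: the server hands
// out an integer handle that stands in for the live instance.
service RemoteTestManager {
  i32 GetTestById(1: i32 id)                    // returns an object handle
  string echo(1: i32 handle, 2: string msg)     // dispatched to that instance
}
```

The client-side proxy class from the question would then just wrap the handle and forward each call through the generated Thrift client.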

How about COM? Seems to fit your requirements perfectly.

You might have already found a solution. Just for the reference of others: I have created a library that matches what you asked for here. Take a look at the CppRemote library. It has these features that match your description:
get pointer to objects at server by name (std::string).
bind existing object (non-intrusive) at server and then get a proxy to that object from client.
server can bind to more than one instance of existing object.
it has named pipe transport.
lightweight and easy to use.
server code
Test test1, test2;
remote::server svr;
svr.bind<itest>(&test1, "test1");
svr.bind<itest>(&test2, "test2");
svr.start(remote::make_basic_binding<text_serializer, named_pipe_transport>("pid"));
...
client code
remote::session client;
client.start(remote::make_basic_binding<text_serializer, named_pipe_transport>("pid"));
auto test1 = client.get<itest>("test1");
auto test2 = client.get<itest>("test2");
test1->echo("bla");
test2->echo("bla");

ZeroMQ is possibly the best IPC system out at the moment and allows for quite a varied combination of client/server topologies. It's really fast and efficient, too.
How you access the server objects depends on how they're implemented. CORBA had this facility, but I wouldn't try to use CORBA nowadays (or back then, TBH). A lot of RPC systems allow you to create objects as needed, or to connect to a single instance. Connecting to an object that is created for you and kept alive for each call during that session (i.e. an object created for each client) is still reasonably common, as is a pool of objects. However, you have to manage the lifetime of these server objects, and I can't really advise on that as you haven't said how yours are managed.
I doubt you want named pipes; stick to TCP/IP connections - connecting to localhost is a very lightweight operation (COM has practically zero overhead in this configuration).

There are several candidates at the top of the list, but it depends on your problem space. At a quick look, Cap'n Proto, by Kenton Varda, may be a fit. CORBA is a bit old, but it is used in many systems and frameworks such as ACE. One issue is pass-by-copy of capability references; Cap'n Proto also provides pass-by-reference and pass-by-construction. COM has some problems of its own which need their own discussion. ZeroMQ is really cool (I once caught a cold from it), but it does not provide RPC out of the box, which means you have to implement it yourself on top of its messaging layer. Google's Protocol Buffers (also by Kenton Varda) could be a choice if you are not looking for features such as capability security, promise pipelining, and the other nice features provided by Cap'n Proto. I think you had better give them a try and experiment yourself.
As a reminder, RPC is not only about remote object invocation. Concerns such as an adequate level of abstraction and composition, pipelining, message passing, lambda calculus, capability security, and so on are important and have to be paid close attention. So the best solution is the one that fits your problem space efficiently and elegantly.
Hope this is helpful.
Best,
Omid

Related

Can Sente be used in a server-only configuration?

I'm looking to replace an existing Websocket-based server with a new version written in Clojure. It seems like the Sente library might be an appropriate choice for this. One thing that isn't clear to me, however, is to what extent Sente relies on a private internal protocol for its operation.
In my case, I have an existing server and client which use JSON-over-websockets, and I'd like to replace the server without modifying any client code. It seems like Sente has a lot of specific expectations about the nature of client requests -- for example, it expects clients to specify a client-id parameter and to accept :chsk/handshake messages from the server.
Is my use case simply outside of the design space that Sente targets? If so, is there a less opinionated implementation of websockets for Clojure that would be more appropriate?
After more investigation, I found that Sente is a poor fit for server-only, since it has a lot of implicit assumptions about the protocol it uses. I found that HTTP-Kit was more suitable for my use case.

Communication between two C++ applications

There are two C++ applications: application A reads from an interface device, does some processing, and needs to provide the data in a certain format to application B.
I feel this can be done in two ways, as mentioned below:
1. I serialize the data structure in app A and write it to a socket.
2. I inject the packet to an interface.
Please help me evaluate which option would be faster, or whether there is another, faster way to do it.
I'm not sure what you mean by "I inject the packet to an interface."
Anyway, if your 2 applications are or could be on separate machines, go for the socket solution.
If on the same machine, you can implement some type of interprocess communication. I recommend you to use Boost for this: http://www.boost.org/doc/libs/1_56_0/doc/html/interprocess.html
As far as performance is concerned, ideally you want to run some tests to find out which works better in your scenario. Also, if you're already familiar with sockets, it may be simpler to use them.

What shall we replace the DCOM communication with?

We currently have a number of C++/MFC applications that communicate with each other via DCOM. Now we will update the applications, and we also want to replace DCOM with something more modern, something that is easier to work with. But we do not know with what. What do you think?
Edit
The data exchanged is not something that may be of interest to others. It is only status information between the different parts of the program running on different computers.
There are many C++ messaging libraries, from the old ACE to newer ones like Google's Protocol Buffers, Facebook's (now Apache's) Thrift, or Cisco's Etch.
Currently I'm hearing good things about ZeroMQ, which might give you more than you are used to.
DCOM is nothing more than sugar-coating over a messaging system.
Any proper messaging system would do, and would allow you to actually spot where messages are exchanged (which may be important for localizing points of failure/performance bottlenecks in waiting).
There are two typical ways to do this nowadays:
A pure messaging system, for example using Google Protocol Buffers as the exchange format
A webservice (either a full webservice or a JSON REST API)
I've been doing lots of apps in both C++ and Java using REST, and I'm pretty satisfied. Far from the complexity of CORBA and SOAP, REST is easy to implement and flexible. There was a bit of a learning curve to get used to modeling things as CRUD, but now it seems even more intuitive that way.
For the C++ side I don't use a specific REST library, just cURL and an XML parser (in my case, CPPDOM), because the C++ apps are only clients; the servers are Java (using the Restlet framework). If you need one, there's another question here at SO that recommends:
Can anyone recommend a good C/C++ RESTful framework
I'd also mention that my decision to use XML was arbitrary, and I'm seriously considering replacing it with JSON. Unless you have a specific need for XML, JSON is simpler and more lightweight. And the beauty of REST is that you could even support both, along with other representations, if you want to.

Using C++ for backend calculations in a web app

I'm running a PHP front end to an application that does a lot of work with data and uses Cassandra as a data store.
However, I know PHP will not give me the performance I need for some of the calculations (nor the management of the sheer amount of data that needs to be in memory).
I'd like to write the backend in C++ and access it from the PHP application. I'm trying to figure out the best way to interface the two.
Some options I've looked at:
Thrift (A natural choice since I'm already using it for Cassandra)
Google's Protocol Buffers
gSOAP
Apache Axis
The above are only things I looked at, I'm not limiting myself.
The data being transferred to the PHP application is very small, so streaming is not required. Only results of calculations are transferred.
What do you guys think?
If I were you I'd use Thrift; there's no sense pulling in another RPC framework. Go with what you have and already know. Thrift makes it easy (so does Google Protocol Buffers, but you don't really need two different mechanisms).
Are you limiting yourself to having C++ as a separate application? Have you considered interfacing it with the PHP directly? (i.e. link a C++ extension into your PHP application).
I'm not saying the second approach is necessarily better than the first, but you should consider it anyway, because it offers some different tradeoff choices. For example, the latency of passing stuff between the PHP and C++ would surely be higher when the two are separate applications than when they're the same application dynamically linked.
More details about how much data your computations will need would be useful. Thrift does seem like a reasonable choice. You could use it between PHP, your computation node, and the Cassandra backend. If your result is small, your RPC transport between PHP and the computation node won't make too much difference.

How to check if a Server and Client are in the same concurrency model?

The concurrency model can be either apartment-threaded or multi-threaded
Question:
How to ensure that both the Client and Server are operating from within the same concurrency model?
Sometimes you need to know. Two quick examples:
Performance hit of proxy/stub pairs is a problem
You need to pass around "unmarshallable" data or objects
So, the answer -- if you do need to know:
The server and the client must be designed and implemented to support the same or compatible models. Either one of these scenarios will do:
Both should be MTA, or
Both should be STA, or
The server should be "Both" (supports either)
The server should be "free-threaded" (but that doesn't buy you anything extra compared to "Both" in this scenario)
If you need to know, there's something wrong with your design: the client and server need too much information about one another's internals. Part of the point of client-server is to decouple the two.
That said, there is a registry value, ThreadingModel. There's an MSDN article on these things as well.
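For reference, an in-process COM server declares its model via the ThreadingModel named value under its CLSID's InprocServer32 key; the GUID and DLL path below are placeholders:

```
; sketch of a .reg fragment -- the CLSID and path are placeholders
[HKEY_CLASSES_ROOT\CLSID\{00000000-0000-0000-0000-000000000000}\InprocServer32]
@="C:\\path\\to\\server.dll"
"ThreadingModel"="Both"
```

Valid values are "Apartment", "Free", "Both", and "Neutral"; omitting the value means the legacy single-threaded (main STA) model.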