Remote Procedure Call - Service offered by client - C++

I want to develop a Qt5/C++ client-server application using remote procedure calls (RPC).
Idea:
The server listens for incoming connections of multiple clients.
Clients offer a set of procedures/services the server can call in order to collect data from clients and inform other clients about changes.
And here is the catch:
The RPC libs I've seen so far seem to expect the server to offer a service that the clients may call. But I want to do the opposite: clients should offer services that the server may call.
The direction is important, because I want to enable port forwarding on the server side only, not on the client side.
The libs I've checked are:
QtRpc2 (https://github.com/brendan0powers/QtRpc2)
grpc (http://www.grpc.io)
Questions:
Is there a reason these libs offer services on the server side only?
Did I maybe just miss that part of the documentation?
Is there an RPC lib that does support client-side services?

gRPC supports bidirectional streaming, which may meet your needs.
Clients can open a long-lived connection to a server, and then the server can "call" the clients by sending responses on the stream.
The client can respond by sending another message on the stream.
http://www.grpc.io/docs/tutorials/basic/c.html
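For illustration, here is a rough sketch of the client side of such a bidirectional stream. The service, method, and message names below are made up; your generated stubs will differ, but the Read/Write loop is the key idea: the client blocks on Read until the server "calls" it, runs the requested procedure locally, and writes the result back on the same stream.

```cpp
// Hypothetical proto (names are made up):
//   service NodeControl {
//     rpc Attach(stream ClientMsg) returns (stream ServerMsg);
//   }
// #include "nodecontrol.grpc.pb.h"   // generated code for that proto
#include <grpcpp/grpcpp.h>
#include <memory>

void ServeCallsFromServer(const std::shared_ptr<grpc::Channel>& channel) {
  std::unique_ptr<NodeControl::Stub> stub = NodeControl::NewStub(channel);
  grpc::ClientContext ctx;
  auto stream = stub->Attach(&ctx);            // long-lived bidirectional stream

  ServerMsg request;
  while (stream->Read(&request)) {             // blocks until the server "calls" us
    ClientMsg reply = HandleRequest(request);  // HandleRequest: your local dispatch (hypothetical)
    stream->Write(reply);                      // send the result back on the same stream
  }
  stream->WritesDone();
  grpc::Status status = stream->Finish();
}
```

Because the client initiates the connection, only the server needs a reachable (port-forwarded) address, which matches your requirement.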

Related

How to enable gRPC server to support just one client connection

I'm currently considering using gRPC for what is basically inter-process communication between a Java app (client) and a C++ server. The RPC calls will use functionality from a very old C++ code base which is definitely not thread-safe.
Normally the Java client will start several gRPC server instances and have just one connection to each server instance.
Is there any way to make the gRPC server accept just one connection and refuse all other connection attempts? Otherwise I need to introduce some global lock in the RPC functions to have a 100% correct server implementation.
There are plans to provide additional server side APIs that will allow the server to decide whether or not to accept an incoming connection, but this is not done yet. For now, a lock is probably a reasonable option.
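As a minimal sketch of the lock approach for a synchronous gRPC C++ service (the service and message names here are hypothetical placeholders for whatever your .proto generates):

```cpp
// Hypothetical synchronous gRPC service wrapping the old, non-thread-safe code.
// "Legacy", "DoWork", "WorkRequest", "WorkReply" are made-up names from your .proto.
// #include "legacy.grpc.pb.h"
#include <grpcpp/grpcpp.h>
#include <mutex>

class LegacyServiceImpl final : public Legacy::Service {
 public:
  grpc::Status DoWork(grpc::ServerContext* /*ctx*/,
                      const WorkRequest* request,
                      WorkReply* reply) override {
    std::lock_guard<std::mutex> guard(mu_);  // serialize every RPC, whichever connection it came from
    // ... call into the old C++ code base here ...
    return grpc::Status::OK;
  }

 private:
  std::mutex mu_;  // one lock shared by all handler methods
};
```

Holding the same mutex in every handler gives you the correctness guarantee even if a second client does manage to connect.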

Establishing a websocket connection with another server

I'm working on embedding Mongoose into an application, and I need the app to connect to a server when it starts up. How can I do this? On GitHub, I see examples for receiving connections, but none on how to initiate a connection to another server. Any ideas?
Mongoose is a web server. It is designed to accept incoming connections, not make outgoing ones.
If you want to make outgoing connections, the way forward will depend on what you are connecting to and what protocol(s) it may use.
If you want to make outgoing http or https connections, you could use libcurl.
If it uses some other protocol, you may be able to find an appropriate library. Or you can use operating-system-level socket APIs to make your own connection and implement whatever protocol is required on top of that. Here is an example for Linux.
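For the libcurl route, a minimal sketch might look like the following (the URL is a placeholder and error handling is kept to a minimum):

```cpp
// Make an outgoing HTTP(S) request when the application starts up.
#include <curl/curl.h>
#include <cstdio>

int main() {
  curl_global_init(CURL_GLOBAL_DEFAULT);
  CURL* curl = curl_easy_init();
  if (curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://backend.example.com/register");  // placeholder endpoint
    CURLcode res = curl_easy_perform(curl);  // response body goes to stdout by default
    if (res != CURLE_OK)
      std::fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));
    curl_easy_cleanup(curl);
  }
  curl_global_cleanup();
  return 0;
}
```

This can run alongside Mongoose: Mongoose keeps accepting incoming connections while libcurl handles the outgoing one.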

What kind of network protocol should be used in this scenario?

Well...
I am working with a mobile application and a web server.
A characteristic of my web server is that it generates different sets of data at unpredictable times. In other words, I cannot predict when the server will have data ready to send to the mobile app.
On the other hand, the mobile app needs to receive all the data that the server generates. One approach could be to poll the server repeatedly to collect all of this data, but that isn't a good approach, because I don't know when to request the data.
If the mobile app could listen to the server, for example after one initial request or by keeping the connection open, the server could send any set of data at any time.
The question is: what protocol is suitable for this situation? How could I use it? Examples?
Thank you!
You could create a persistent TCP/IP connection to the server and permanently listen for incoming data (using a custom protocol or probably something WebSocket-based). However, such a permanent connection might seriously affect battery life on a mobile device. You will also lose the connection if the operating system shuts down your application because the device is out of memory.
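As a rough illustration of the persistent-connection approach (POSIX sockets, hypothetical host and port, error handling trimmed): connect once, then block until the server pushes data.

```cpp
#include <sys/socket.h>
#include <netdb.h>
#include <unistd.h>
#include <cstring>

int main() {
  addrinfo hints{};
  hints.ai_socktype = SOCK_STREAM;
  addrinfo* res = nullptr;
  if (getaddrinfo("data.example.com", "5000", &hints, &res) != 0) return 1;

  int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
  if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0) return 1;
  freeaddrinfo(res);

  char buf[4096];
  ssize_t n;
  while ((n = recv(fd, buf, sizeof(buf), 0)) > 0) {
    // the server just pushed n bytes; hand them to the app's parser here
  }
  close(fd);  // n == 0 means the server closed the connection
  return 0;
}
```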
The default approach to this problem is push notifications / push services, where your server sends a notification about new data to a server run by the phone's platform provider (e.g. a Microsoft or Apple push server), and that server delivers the notification (along with notifications from other online services) to your phone.
Some info for Windows Phone:
http://msdn.microsoft.com/en-us/library/hh221549.aspx
http://msdn.microsoft.com/en-us/library/windowsphone/develop/ff402558%28v=vs.105%29.aspx
Depending on how often you have new data, both approaches can make sense.
WebSockets could be the answer: http://en.wikipedia.org/wiki/WebSocket
Specifically for Windows Phone, there's also a solution: http://msdn.microsoft.com/en-us/library/windowsphone/develop/ff402558(v=vs.105).aspx
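For a feel of the WebSocket approach, here is a minimal client sketch using Qt's QWebSocket (the URL is a placeholder; the module is enabled with QT += websockets). Once the socket is open, the server can push messages at any time:

```cpp
#include <QCoreApplication>
#include <QtWebSockets/QWebSocket>
#include <QUrl>
#include <QDebug>

int main(int argc, char** argv) {
  QCoreApplication app(argc, argv);

  QWebSocket socket;
  QObject::connect(&socket, &QWebSocket::connected,
                   [] { qDebug() << "connected, waiting for pushed data"; });
  QObject::connect(&socket, &QWebSocket::textMessageReceived,
                   [](const QString& msg) { qDebug() << "server pushed:" << msg; });

  socket.open(QUrl(QStringLiteral("wss://data.example.com/stream")));  // placeholder endpoint
  return app.exec();  // the event loop keeps the connection alive
}
```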

Implementing a server for licence management

I would like to implement the server side of a licence management software package. I use C++ on Linux.
When the software starts, it must connect to a server that checks privileges and allows/disallows running of some features.
My question is about the implementation of the communication between client and server across the internet:
The server will have a static IP on the internet, so is it enough to use a simple TCP/IP socket client that connects to a TCP/IP socket server (given the IP/port)?
I am familiar with socket communication, but less so with communication across the internet, so my question is whether this is the right approach or whether I need to use a different mechanism, such as an HTTP client/server or something else.
Regards
AFG
Here are some benefits to using HTTP as a transport:
Easier to get right, more likely to work in production: yes, you will probably have to add additional dependencies to deal with HTTP (client and server side), but that is still preferable to yet another homegrown protocol, which you would have to implement, maintain, keep backwards compatible, and handle multiplatform issues for (e.g. endianness). In terms of implementation ease, an HTTP-based solution should be far easier in the common case (especially true if you build a REST-style service API for license checking; see the sketch after this list).
More help available: HTTP as the foundation of the web is one of the most widely used technologies today. Most (all?) problems you will run into are probably publicly documented with solutions/workarounds.
Encryption 'for free': Encryption is already a solved problem (HTTPS/SSL), both with regard to transport as well as with regard to what you have to implement on your end, and it's just a matter of setting it up.
Server Authentication 'for free': HTTPS/SSL doesn't only solve encryption but also server authentication, so that the client can verify whether it's actually talking to the right service.
Guaranteed to work on the internet: HTTP/HTTPS traffic is common on the internet, so you won't run into routing problems or firewalls which are hard to traverse. This might be a problem when using your own protocol.
Flexibility out of the box: you also put fewer constraints on clients communicating with your server, as it's very simple to build a client in many different environments, as long as they can talk HTTP (and maybe SSL) and know how to issue the request to your server (i.e. what your service API looks like).
Easy to integrate with administrative webapp: If you want to allow users to manage their accounts associated with licenses in some way (update contact info etc.), then you might even combine the license server with that application. You can also build the license administration UI part into the same app if that's useful.
And as a last remark (this puts additional constraints on your client-side HTTPS/SSL implementation): you can even use client-side SSL certificates, which essentially allow authenticating the client to the server. Depending on how you use them, client-side certificates are harder to manage, but they can, for example, be expired or revoked, so to some extent they actually are licenses (to connect to the server).
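As a sketch of the REST-style licence check mentioned in the first point above (the endpoint, JSON body, and "HTTP 200 = allowed" convention are all assumptions), using libcurl over HTTPS:

```cpp
#include <curl/curl.h>
#include <string>

bool LicenceAllows(const std::string& key) {
  CURL* curl = curl_easy_init();
  if (!curl) return false;

  const std::string body = "{\"key\":\"" + key + "\"}";
  curl_slist* headers = curl_slist_append(nullptr, "Content-Type: application/json");

  curl_easy_setopt(curl, CURLOPT_URL, "https://licence.example.com/api/check");  // placeholder
  curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
  curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());
  // Certificate verification (CURLOPT_SSL_VERIFYPEER) is on by default,
  // which gives you the server authentication mentioned above.

  long status = 0;
  bool ok = curl_easy_perform(curl) == CURLE_OK &&
            curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &status) == CURLE_OK &&
            status == 200;

  curl_slist_free_all(headers);
  curl_easy_cleanup(curl);
  return ok;
}
```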
HTTP is not a different mechanism. It is a protocol that operates over TCP/IP connections.
The internet uses IP transport exclusively. You can use a UDP, TCP, or SCTP session layer on top of it (well, UDP is not much of a session layer). TCP is the usual choice.
Sockets are an operating system interface. They are the only interface to the network on most systems, though some systems have a different interface; they have nothing to do with the transport itself.
IP addresses are in practice tied to network topology, so I strongly discourage hardcoding the server's IP address. If you have to change network provider for any reason, you won't get the same IP address. Use DNS; it's just one gethostbyname call.
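For example, the lookup really is a single call (the hostname below is hypothetical; on current systems getaddrinfo is the preferred, IPv6-capable equivalent, but the idea is the same):

```cpp
#include <netdb.h>
#include <arpa/inet.h>
#include <cstring>
#include <cstdio>

int main() {
  hostent* he = gethostbyname("licence.example.com");  // resolve the server by name
  if (he && he->h_addrtype == AF_INET) {
    in_addr addr;
    std::memcpy(&addr, he->h_addr_list[0], sizeof(addr));
    std::printf("server is at %s\n", inet_ntoa(addr));  // use this address for connect()
  }
  return 0;
}
```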
And don't forget to authenticate the server; even with a hardcoded IP it's too easy to redirect the traffic.

How do I get through proxy server environments for non-standard services?

I'm not really up to speed on exactly what role(s) today's proxy servers can play, and I'm still learning, so go easy on me :-) I have a client/server system that I have written using a homegrown protocol, and I need to enhance the client side so it can negotiate its way out of a proxy environment.
I have an existing client and server system written in C and C++ for speed, with a small amount of MFC in the client to handle the user interface. I have written both the server and the client side of the system on Windows (the people I work for are mainly web developers using Windows for everything - not my choice), sticking to Berkeley sockets, as it were, via wsock32 for efficiency. The clients connect to the server through a non-standard port (using port 80 is an option for getting out of some environments, even though the protocol that goes over it isn't HTTP). The TCP connection(s) stay open for the duration of the clients' participation in real-time conferences.
Our customer base is expanding to all kinds of networked environments. I have been able to solve a lot of problems by adding the ability to connect securely over port 443 using secure sockets, which allows the protocol to pass through a lot of environments since the internal packets can't be sniffed. But more and more of our customers are behind a proxy server environment, and my direct connections don't make it through. My old-school understanding of proxy servers is that they act as a proxy for external HTML content over HTTP, possibly caching popular material locally for faster access, and also allowing the IT staff to blacklist certain destination sites. Customers are complaining that my software doesn't recognize and easily navigate its way through their proxy environments, but I'm finding it difficult to decide what my "best fit" solution should be. My software doesn't tear down the connection after each client request, and on top of that packets can come from either side at any time - basically your typical custom client/server system for a specific niche.
My first reaction is "why can't they just add my server's addresses to their whitelist?", but if there is a programmatic way I can get through without requiring their IT staff's help, that is politically better and arguably a better solution anyway. Plus, maybe I'm still not understanding the role and purpose that proxy servers and environments have grown into these days.
My first attempt at a solution was to use WinInet with its various proxy capabilities to establish a connection over port 80 to my non-standard protocol server (which knows enough to recognize a simple HTTP-looking GET request and answer it with a simple HTTP response page, to get around some environments that employ initial packet sniffing (DPI)). I retrieved the actual SOCKET handle behind WinInet's HINTERNET request object and had hoped to use that in place of my software's existing SOCKET connection, hopefully without needing to change much more on the client side. It initially seemed to be my solution, but on further inspection it appears that the OS gets first chance at the received data on this socket: when I get notified of events via the standard select(...) call on the socket and query the size of the data available via ioctlsocket, the call succeeds but reports 0 bytes available, the reads don't work, and it goes downhill from there.
Can someone tell me of a client-side library (commercial is fine) that will let me get past these proxy server environments with as little user and IT staff help as possible? From what I read, the field has grown past SOCKS, and I figure someone has to have solved this problem before me.
Thanks for reading my long-winded question,
Ripred
If your software can make an SSL connection on port 443, then you are 99% of the way there.
Typically HTTP proxies are set up to proxy SSL-on-443 (for the purposes of HTTPS). You just need to teach your software to use the HTTP proxy. Check the HTTP RFCs for the full details, but the Cliffs Notes version is:
Connect to the HTTP proxy on the proxy port;
Send to the proxy:
CONNECT your.real.server:443 HTTP/1.1\r\n
Host: your.real.server:443\r\n
User-Agent: YourSoftware/1.234\r\n
\r\n
Then parse the proxy response, which will start with an HTTP status line, followed by HTTP headers, followed by a blank line. You'll then be talking to your destination (if the status code indicated success, anyway), and can start talking SSL.
In many corporate environments you'll have to authenticate with the proxy - this is almost always HTTP Basic Authentication, which is pretty easy - again, see the RFCs.
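For illustration, here is a rough sketch of that CONNECT handshake in C++ with Winsock (the proxy and server names are placeholders, WSAStartup is assumed to have been called already, and error handling is minimal). It tunnels through the proxy and returns a socket you can hand to your SSL layer:

```cpp
#include <winsock2.h>
#include <ws2tcpip.h>
#include <string>

SOCKET ConnectViaProxy(const char* proxy_host, const char* proxy_port,
                       const char* dest_host, const char* dest_port) {
  addrinfo hints = {}, *res = nullptr;
  hints.ai_socktype = SOCK_STREAM;
  if (getaddrinfo(proxy_host, proxy_port, &hints, &res) != 0) return INVALID_SOCKET;

  SOCKET s = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
  if (connect(s, res->ai_addr, (int)res->ai_addrlen) != 0) { freeaddrinfo(res); return INVALID_SOCKET; }
  freeaddrinfo(res);

  // Ask the proxy to open a raw tunnel to the real server.
  std::string req = std::string("CONNECT ") + dest_host + ":" + dest_port + " HTTP/1.1\r\n"
                    "Host: " + std::string(dest_host) + ":" + dest_port + "\r\n"
                    // "Proxy-Authorization: Basic <base64 user:pass>\r\n"  // if the proxy requires it
                    "\r\n";
  send(s, req.c_str(), (int)req.size(), 0);

  // Read the proxy's reply up to the blank line and check for a 200 status.
  std::string reply;
  char c;
  while (reply.find("\r\n\r\n") == std::string::npos && recv(s, &c, 1, 0) == 1)
    reply += c;
  if (reply.compare(0, 12, "HTTP/1.1 200") != 0 && reply.compare(0, 12, "HTTP/1.0 200") != 0) {
    closesocket(s);
    return INVALID_SOCKET;
  }
  return s;  // now start your SSL handshake over this socket
}
```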