How to enable a gRPC server to support just one client connection - C++

I'm currently considering using gRPC for what is basically inter-process communication between a Java app (client) and a C++ server. The RPC calls will use functionality from a very old C++ code base which is definitely not thread-safe.
Normally the Java client will start multiple gRPC server instances and hold just one connection to each server instance.
Is there any way to make the gRPC server accept just one connection and refuse all further connection attempts? Otherwise I need to introduce some global lock in the RPC functions to have a 100% correct server implementation.

There are plans to provide additional server-side APIs that will allow the server to decide whether or not to accept an incoming connection, but this is not done yet. For now, a lock is probably a reasonable option.
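A minimal sketch of the lock option, assuming a synchronous C++ gRPC service; the service and message names (LegacyService, WorkRequest, WorkReply) and the generated header are placeholders for whatever the real .proto defines:

    // Serialize all RPC handlers with one mutex so the non-thread-safe
    // legacy code is only ever entered by one thread at a time.
    #include <mutex>
    #include <grpcpp/grpcpp.h>
    #include "legacy.grpc.pb.h"   // hypothetical generated header

    class LegacyServiceImpl final : public LegacyService::Service {
        grpc::Status DoWork(grpc::ServerContext* context,
                            const WorkRequest* request,
                            WorkReply* reply) override {
            std::lock_guard<std::mutex> guard(mu_);  // one RPC inside the legacy code at a time
            // ... call into the old, non-thread-safe C++ code here ...
            return grpc::Status::OK;
        }
        std::mutex mu_;  // shared by every RPC method that touches the legacy code
    };

Every RPC method that touches the legacy code should take the same mutex; the rest of the gRPC machinery (ServerBuilder, thread pool) stays untouched.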

Related

Remote Procedure Call - Service offered by client

I want to develop a Qt5/C++ client-server application using remote procedure calls (RPC).
Idea:
The server listens for incoming connections from multiple clients.
Clients offer a set of procedures/services the server can call in order to collect data from clients and inform other clients about changes.
And here is the catch:
The RPC libs I've seen so far seem to expect the server to offer a service the clients may call. But I want to do the opposite: clients should offer services the server may call.
The direction is important, because I want to enable port forwarding on the server side only, not on the client side.
The libs I've checked are:
QtRpc2 (https://github.com/brendan0powers/QtRpc2)
grpc (http://www.grpc.io)
Questions:
Is there a reason these libs offer services on the server side only?
Did I maybe just miss that part in the documentation?
Is there an RPC lib that does offer client-side services?
gRPC supports bidirectional streaming, which may meet your needs.
Clients can open a long-lived connection to a server, and then the server can "call" the clients by sending responses on the stream.
The client can respond by sending another message on the stream.
http://www.grpc.io/docs/tutorials/basic/c.html
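A minimal server-side sketch of that pattern in C++, assuming a hypothetical proto with a bidirectional stream (service Remote, rpc Channel(stream ClientEvent) returns (stream ServerCommand)); all of those names are made up for illustration:

    #include <grpcpp/grpcpp.h>
    #include "remote.grpc.pb.h"   // generated from the hypothetical proto above

    class RemoteImpl final : public Remote::Service {
        grpc::Status Channel(grpc::ServerContext* context,
                             grpc::ServerReaderWriter<ServerCommand, ClientEvent>* stream) override {
            // The client opened this stream; the server can now "call" the client
            // at any time by writing a ServerCommand and reading the replies.
            ServerCommand cmd;
            cmd.set_name("collect_data");
            stream->Write(cmd);              // server -> client request

            ClientEvent reply;
            while (stream->Read(&reply)) {   // client -> server responses
                // handle the data the client sent back ...
            }
            return grpc::Status::OK;
        }
    };

The client keeps the stream open for as long as it wants to be callable, and only the client needs to reach the server's port, so port forwarding is only required on the server side.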

Tao Client robustness on server down

Context: a server-client setup (using ACE/TAO).
Problem statement: the server might be down while the client is up and attempting to make API calls. To make the client more robust, I want the client to be able to detect the server-down state and, once the server is up again, attempt the rebind and get the new ORB ready for any further API calls.
Any suggestions?
There is only one solution in the case of TCP/IP. You must implement a so-called heartbeat connection (a simple echo connection) and analyze the return codes of the read/write calls.
There is no callback in TCP/IP for the connection state (alive or dead).
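A minimal sketch of such a heartbeat on Linux (POSIX sockets), assuming a dedicated echo connection to the server; the actual rebind of the TAO object reference once the server comes back is application-specific:

    #include <sys/socket.h>
    #include <unistd.h>

    // Returns true while the peer still answers the echo, false once it looks dead.
    bool heartbeat_ok(int fd) {
        const char ping = 0x01;
        if (send(fd, &ping, 1, MSG_NOSIGNAL) <= 0)
            return false;                    // write failed: connection is gone
        char pong = 0;
        ssize_t n = recv(fd, &pong, 1, 0);   // expect the server to echo the byte back
        return n == 1;                       // 0 = orderly close, -1 = error/timeout
    }

Run this periodically from a background thread; in practice you would also set a receive timeout (SO_RCVTIMEO) so recv() cannot block forever. When the heartbeat starts failing, mark the server as down, and once it succeeds again trigger the rebind.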

Establishing a websocket connection with another server

I'm working on embedding Mongoose into an application, and I need the app to connect to another server when it starts up. How can I do this? On GitHub, I see examples for receiving connections, but none showing how to initiate a connection to another server. Any ideas?
Mongoose is a web server. It is designed to accept incoming connections, not make outgoing ones.
If you want to make outgoing connections, the way forward will depend on what you are connecting to and what protocol(s) it may use.
If you want to make outgoing HTTP or HTTPS connections, you could use libcurl.
For some other protocol, you may be able to find an appropriate library. Or you can use the operating system's socket APIs to make your own connection and implement whatever protocol is required on top of that; a minimal Linux sketch follows.
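For the raw-socket route, here is a minimal sketch of an outgoing TCP connection on Linux using getaddrinfo/socket/connect; host and port are placeholders, and whatever protocol the remote server speaks still has to be implemented on top of the returned socket:

    #include <sys/socket.h>
    #include <netdb.h>
    #include <unistd.h>

    // Returns a connected socket descriptor, or -1 on failure.
    int connect_to(const char* host, const char* port) {
        addrinfo hints{}, *res = nullptr;
        hints.ai_family = AF_UNSPEC;         // IPv4 or IPv6
        hints.ai_socktype = SOCK_STREAM;     // TCP
        if (getaddrinfo(host, port, &hints, &res) != 0)
            return -1;

        int fd = -1;
        for (addrinfo* p = res; p != nullptr; p = p->ai_next) {
            fd = socket(p->ai_family, p->ai_socktype, p->ai_protocol);
            if (fd == -1) continue;
            if (connect(fd, p->ai_addr, p->ai_addrlen) == 0) break;  // connected
            close(fd);
            fd = -1;
        }
        freeaddrinfo(res);
        return fd;
    }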

Exposing Service to Clients

We have internal services in our application, which are basically developed as Thrift RPC services. Now, I need to expose these services to the client applications, which are outside of the core system.
Now, the question is:
should I expose these Thrift services directly to the clients? The advantage of doing so is that it requires the least amount of work. The disadvantage is that the clients would need to connect to these Thrift APIs as well as to another, already existing interface, so the client applications would have to open more than one socket to connect to the core system.
An alternative option would be to wrap these Thrift services in another layer, which would ultimately be delivered to the end clients. The disadvantage of doing this: marshalling/unmarshalling the data twice, once with Thrift and again with the other interface.
What should be the preferred way of handling this situation?
We would not expose these services directly to outside clients. We would build or use an application to configure a proxy that the external clients could connect to.
The advantages to this are:
No need to punch a hole in your firewall
Possibility to do an extra security check
Possibility to throttle access to the internal service
Less chance of a hacker being able to exploit the service
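If you do go the wrapping route, a minimal sketch of such a facade in C++, assuming a recent Apache Thrift version (std::shared_ptr based) and a hypothetical generated client InternalServiceClient with a single doWork() call; the externally exposed interface (HTTP, another RPC, ...) is left out:

    #include <memory>
    #include <string>
    #include <thrift/protocol/TBinaryProtocol.h>
    #include <thrift/transport/TSocket.h>
    #include <thrift/transport/TBufferTransports.h>
    #include "InternalService.h"   // hypothetical Thrift-generated client

    using namespace apache::thrift::protocol;
    using namespace apache::thrift::transport;

    class InternalServiceFacade {
    public:
        InternalServiceFacade(const std::string& host, int port)
            : socket_(new TSocket(host, port)),
              transport_(new TBufferedTransport(socket_)),
              protocol_(new TBinaryProtocol(transport_)),
              client_(protocol_) {
            transport_->open();
        }

        // The externally exposed operation: authenticate/throttle here, then forward.
        std::string doWork(const std::string& input) {
            std::string result;
            client_.doWork(result, input);   // Thrift puts the out-parameter first
            return result;
        }

    private:
        std::shared_ptr<TTransport> socket_;
        std::shared_ptr<TTransport> transport_;
        std::shared_ptr<TProtocol> protocol_;
        InternalServiceClient client_;
    };

The facade is where the extra security checks and throttling from the list above would live, while the Thrift service itself stays reachable only from inside the firewall.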

C++ & Boost: I'm trying to find an example TCP program with a server that accepts connections from multiple clients

A chat program would be a good enough example.
Just need a server that can accept multiple connections from the clients, and the server needs to be able to send messages to individual clients.
I plan to turn this into a distributed computing program to work with multiple Neural Networks.
Asio is the Boost library that handles networking. There's a chat server example among the Boost.Asio examples in the documentation.
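Along the same lines, here is a minimal Boost.Asio sketch (assuming Boost 1.66 or newer for io_context) of a server that accepts multiple clients; it only echoes per client rather than implementing chat, and port 12345 is arbitrary:

    #include <boost/asio.hpp>
    #include <memory>

    using boost::asio::ip::tcp;

    // Each connected client gets its own session that echoes back what it receives.
    class Session : public std::enable_shared_from_this<Session> {
    public:
        explicit Session(tcp::socket socket) : socket_(std::move(socket)) {}
        void start() { read(); }

    private:
        void read() {
            auto self = shared_from_this();
            socket_.async_read_some(boost::asio::buffer(data_),
                [this, self](boost::system::error_code ec, std::size_t n) {
                    if (!ec) write(n);       // echo back, then read again
                });
        }
        void write(std::size_t n) {
            auto self = shared_from_this();
            boost::asio::async_write(socket_, boost::asio::buffer(data_, n),
                [this, self](boost::system::error_code ec, std::size_t) {
                    if (!ec) read();
                });
        }
        tcp::socket socket_;
        char data_[1024];
    };

    void accept_loop(tcp::acceptor& acceptor) {
        acceptor.async_accept([&acceptor](boost::system::error_code ec, tcp::socket socket) {
            if (!ec) std::make_shared<Session>(std::move(socket))->start();
            accept_loop(acceptor);           // keep accepting further clients
        });
    }

    int main() {
        boost::asio::io_context io;
        tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 12345));
        accept_loop(acceptor);
        io.run();                            // single-threaded event loop
    }

To send messages to individual clients, keep the Session pointers in a container keyed by client id, much like the chat example does with its room of participants.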
I cannot give you an example program, but to write a server you have to:
1. have the server listen on a port for connections
2. use a thread pool that accepts the connections and serves the requests
3. write the server code in a thread-safe manner
You have to use socket programming; a good guide for that is http://beej.us/guide/bgnet/
You can use the Win32 API on Windows and POSIX on Linux. A sketch of this approach on Linux follows.
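A minimal sketch of those steps using POSIX sockets on Linux, with one std::thread per client standing in for a real thread pool; port 12345 is arbitrary and error handling is reduced to early returns:

    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <thread>

    // Step 3: each client is served on its own thread, touching only its own socket.
    void serve_client(int client_fd) {
        char buf[1024];
        ssize_t n;
        while ((n = recv(client_fd, buf, sizeof(buf), 0)) > 0)
            send(client_fd, buf, n, 0);      // echo back to this client
        close(client_fd);
    }

    int main() {
        // Step 1: listen on a port for connections.
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        if (listener == -1) return 1;

        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = INADDR_ANY;
        addr.sin_port = htons(12345);
        if (bind(listener, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) == -1) return 1;
        if (listen(listener, SOMAXCONN) == -1) return 1;

        // Step 2: accept connections and hand each one to a worker thread.
        for (;;) {
            int client_fd = accept(listener, nullptr, nullptr);
            if (client_fd == -1) continue;
            std::thread(serve_client, client_fd).detach();
        }
    }

Anything the worker threads share (for example, the list of connected neural-network workers) needs to be protected by a mutex, which is what step 3 is about.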