TAO client robustness on server down - C++

Context: In a server-client setup (using ace-tao).
Problem Statement: The server might be down while the client is up and attempting to make API calls. To make the client setup more robust, I want the client to be able to detect the server-down state and, once the server is up again, attempt the rebind and get the ORB ready for any further API calls.
Any suggestions?

In the case of TCP/IP there is only one solution: you must implement a so-called heartbeat connection (a simple echo connection) and analyze the return codes of the read/write calls.
TCP/IP has no callback for connection state (alive or dead).
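At the CORBA level, one way to implement that heartbeat in a TAO client is to ping the object reference periodically with _non_existent() and catch the system exceptions. A minimal sketch (server_alive is a hypothetical helper, not TAO API; the probing interval and the re-resolve step are up to you):

#include <tao/corba.h>

bool server_alive(CORBA::Object_ptr obj)
{
  try {
    // _non_existent() forces a remote round-trip, so it fails
    // promptly when the server is unreachable.
    return !obj->_non_existent();
  }
  catch (const CORBA::TRANSIENT&)        { return false; }  // cannot connect
  catch (const CORBA::COMM_FAILURE&)     { return false; }  // connection dropped
  catch (const CORBA::OBJECT_NOT_EXIST&) { return false; }
}

When server_alive() returns false, the client can sleep, re-resolve the reference (e.g. from the Naming Service or an IOR file), and retry before issuing further API calls.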

Related

Apache Thrift: Terminate Connection from the Server

I am using Thrift to provide an interface between a device and a management console. It is possible for there to be up to 4 active connections to the device at one time, and I have this working using a TThreadPoolServer.
The issue arises around client disconnections. If a client disconnects correctly, there is no issue; however, if one does not (i.e. the client crashes or doesn't call client->close()), then the server seems to keep that client's thread alive. This means that when the next connection attempt is made, the client hangs, as the server has used up its allocated thread pool and so cannot service the new request.
I haven't been able to find any standard, public mechanism by which the server can stop, and hence free up, a client's thread if that client has not used the interface for a set time period.
Is there a standard way to facilitate this in thrift?
Setting the receive/send timeout on the server socket might help. The server will close the connection on timeout.
https://github.com/apache/thrift/blob/129f332d72facda5d06f87e2b4e5e08bea0b6b44/lib/cpp/src/thrift/transport/TServerSocket.h#L103
void setSendTimeout(int sendTimeout);
void setRecvTimeout(int recvTimeout);
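A minimal sketch of wiring those timeouts in (the port and timeout values are arbitrary; depending on your Thrift version the smart-pointer type may be boost::shared_ptr instead of std::shared_ptr):

#include <thrift/transport/TServerSocket.h>
#include <memory>

using apache::thrift::transport::TServerSocket;

auto serverSocket = std::make_shared<TServerSocket>(9090);
serverSocket->setSendTimeout(30000);  // ms: give up on writes to a dead client
serverSocket->setRecvTimeout(30000);  // ms: drop clients idle for 30 s

Pass serverSocket to your TThreadPoolServer as usual; when a timeout fires, the worker thread's blocking read fails, the connection is closed, and the thread is returned to the pool.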

How to enable gRPC server to support just one client connection

I'm currently considering using gRPC for what is basically inter-process communication between a Java app (client) and a C++ server. The RPC calls will use functionality from a very old C++ code base which is definitely not thread-safe.
Normally the Java client will start several gRPC server instances and have just one connection with each server instance.
Is there any way to ensure that the gRPC server accepts just one connection and refuses all other connection attempts? Otherwise I need to introduce some global lock in the RPC functions to have a 100% correct server implementation.
There are plans to provide additional server side APIs that will allow the server to decide whether or not to accept an incoming connection, but this is not done yet. For now, a lock is probably a reasonable option.
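A minimal sketch of that lock in a C++ service implementation (LegacyService, DoWork, and call_legacy_code are placeholder names for your generated service and the old code base):

#include <mutex>
#include <grpcpp/grpcpp.h>

class LegacyServiceImpl final : public LegacyService::Service {
public:
  grpc::Status DoWork(grpc::ServerContext* context,
                      const DoWorkRequest* request,
                      DoWorkReply* reply) override {
    // One global lock: at most one RPC touches the legacy code at a
    // time, no matter how many clients are connected.
    std::lock_guard<std::mutex> guard(legacy_mutex_);
    call_legacy_code(request, reply);  // placeholder for the old C++ code
    return grpc::Status::OK;
  }

private:
  std::mutex legacy_mutex_;
};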

C++ TCP server Class Design

I am developing a TCP server in C++ (Win32/Linux) which caters to multiple clients. The server is for video streaming: a client requests video from the server, and the server gets it from a gateway connected to a camera.
I am stuck on the class design. I have come up with three classes:
Peer
Session
ConnectionMgr
Here, ConnectionMgr is responsible for managing the other classes.
I wanted your feedback on this:
What info do Peer and Session need to have?
How are Peer and Session related?
What information needs to be modeled here?
How should sessions be maintained?
Managing multiple clients will require threads; what information might those need?
Please give your feedback so that I can improve my design.
Looking at the problem space from scratch:
there's some state associated with each client that connects - you seem to split this between Peer and Session, and I see no real value in that if they're 1:1 - you can omit such trivia at the high-level design stage.
"what info Peer and session need to have": socket descriptor is the only crucial thing, assuming you have only one camera and stream to all clients at the same pace (losing data when socket send() blocks/can't complete due to full buffer), otherwise, a buffer too...
you have a ConnectionMgr, well - yes... it must listen and accept clients on the server socket, possibly launch a new thread per client or monitor the set of current client connections and dispatch events
you'll need to make some decisions about the I/O and concurrency model (e.g. select/poll/non-blocking, async, blocking, single threaded, thread-per-client, thread-pool etc)
this will obviously affect your design: you should decide which - or which choices - you need to support...
To get a feel for this problem space, I suggest you create a very simple client/server program - probably using threads if you're familiar and comfortable with multithreading; otherwise you can build upon the GNU libc TCP client/server examples for a select() based solution (http://www.gnu.org/s/libc/manual/html_node/Server-Example.html#Server-Example), or try boost::asio or ACE or whatever. To start, just get it working so you can telnet to the server and whatever you type in any connection is echoed out on all the connections. That should give you enough insight to start asking more concrete questions.
As @nabulke and @Jan Hudec stated in their comments, Boost.Asio is a very good solution for your problem. Have a look at the pretty simple "Async TCP Echo Server" example. It uses just two classes: server and session. There is no session_manager; sessions are managed automatically by smart pointers, a very convenient and simple approach.
Using Boost.Asio you can keep the network part simple (and almost optimal in efficiency, using asynchronous processing). As a bonus, by adding a couple of lines of code you get a multithreaded server without headache (I would recommend this example: "An HTTP server using a single io_service and a thread pool calling io_service::run()" - just ignore the HTTP stuff and pay attention to the boost::asio::io_service::strand used in the connection class).
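A condensed sketch of that echo-server pattern (recent Boost; io_context is the newer name for io_service). Each client gets a session kept alive by shared_ptr for as long as an async operation references it, so no explicit session manager is needed:

#include <boost/asio.hpp>
#include <memory>
#include <utility>

using boost::asio::ip::tcp;

class session : public std::enable_shared_from_this<session> {
public:
  explicit session(tcp::socket socket) : socket_(std::move(socket)) {}
  void start() { do_read(); }

private:
  void do_read() {
    auto self = shared_from_this();  // keeps the session alive across the async op
    socket_.async_read_some(boost::asio::buffer(data_),
        [this, self](boost::system::error_code ec, std::size_t n) {
          if (!ec) do_write(n);      // on error the last shared_ptr dies -> cleanup
        });
  }

  void do_write(std::size_t n) {
    auto self = shared_from_this();
    boost::asio::async_write(socket_, boost::asio::buffer(data_, n),
        [this, self](boost::system::error_code ec, std::size_t) {
          if (!ec) do_read();
        });
  }

  tcp::socket socket_;
  char data_[1024];
};

class server {
public:
  server(boost::asio::io_context& io, unsigned short port)
      : acceptor_(io, tcp::endpoint(tcp::v4(), port)) { do_accept(); }

private:
  void do_accept() {
    acceptor_.async_accept(
        [this](boost::system::error_code ec, tcp::socket socket) {
          if (!ec) std::make_shared<session>(std::move(socket))->start();
          do_accept();  // keep accepting further clients
        });
  }

  tcp::acceptor acceptor_;
};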

Multithreaded Server Issue

I am writing a server in linux that is supposed to serve an API.
Initially, I wanted to make it multi-threaded on a single port, meaning that I'd have multiple threads working on the various requests received on a single port.
One of my friends told me that that is not the way it is supposed to work. He told me that when a request is received, I first have to follow a handshake procedure, create a thread that listens on some other port dedicated to the request, and then redirect the requesting client to the new port.
Theoretically, it's very interesting but I could not find any information on how to implement the handshake and do the redirection. Can someone help?
If I'm not wrong in interpreting your responses: once I create a multithreaded server with a main thread listening on a port and creating a new thread to handle each request, I'm essentially making it multithreaded on a single port?
Consider the scenario where I get a large number of requests every second. Isn't it true that every request on the port would then have to wait for the "current" request to complete? If not, how would the communication still be done: say a browser sends a request, so the thread handling it has to first listen on the port, block it, process it, respond, and then unblock it.
By this reasoning, even though I have "multiple threads", I'm only ever using one thread at a time apart from the main thread, because the port is being blocked.
What your friend told you is similar to passive FTP - a client tells the server that it needs a connection, the server sends back the port number and the client creates a data connection to that port.
But all you wanted to do is a multithreaded server. All you need is one server socket listening and accepting connections on a given port. As soon as the automatic TCP handshake is finished, you'll get a new socket from the accept function - that socket will be used for communication with the client that has just connected. So now you only have to create a new thread, passing that client socket to the thread function. In your server thread, you will then call accept again in order to accept another connection.
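A minimal sketch of that accept-loop-plus-thread-per-client pattern with POSIX sockets (the echo body and port are placeholders; error handling is trimmed):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <thread>

void handle_client(int client_fd) {
  char buf[4096];
  ssize_t n;
  while ((n = read(client_fd, buf, sizeof buf)) > 0)
    write(client_fd, buf, n);    // echo; replace with your real protocol
  close(client_fd);              // read() returned 0 or -1: client is gone
}

int main() {
  int listen_fd = socket(AF_INET, SOCK_STREAM, 0);

  sockaddr_in addr{};
  addr.sin_family = AF_INET;
  addr.sin_addr.s_addr = INADDR_ANY;
  addr.sin_port = htons(8080);
  bind(listen_fd, reinterpret_cast<sockaddr*>(&addr), sizeof addr);
  listen(listen_fd, SOMAXCONN);

  for (;;) {
    int client_fd = accept(listen_fd, nullptr, nullptr);  // TCP handshake already done
    if (client_fd < 0) continue;
    std::thread(handle_client, client_fd).detach();       // one thread per client
  }
}

Note that the listening port is never "blocked" while a request is processed: accept() hands each client its own socket, and the main thread immediately goes back to accepting.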
TCP/IP does the handshake for you; if you can't think of any reason to do an application-level handshake, then your application doesn't need one.
An example of an application specific handshake could be for user authentication.
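A minimal sketch of such an authentication handshake, done by the per-client thread right after accept() (check_credentials is a hypothetical validator; the "user:pass" wire format is an assumption):

#include <string>
#include <unistd.h>

bool check_credentials(const std::string& line);  // placeholder: validate "user:pass"

bool authenticate(int client_fd) {
  char buf[256];
  ssize_t n = read(client_fd, buf, sizeof buf - 1);  // expect one "user:pass\n" line
  if (n <= 0) return false;                          // client vanished mid-handshake
  buf[n] = '\0';
  return check_credentials(buf);
}

If authenticate() returns false, the thread simply closes the socket and exits.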
What your colleague is suggesting sounds like the way FTP works. This is not a good thing to do - the internet these days is more or less built around protocols that use a single port, and having a separate command port is bad. One of the reasons is that stateful firewalls aren't designed for multi-port applications; they have to be extended for each individual application that does things this way.
Look at ASIO's tutorial on async TCP. There, one part accepts connections on TCP and spawns handlers that each communicate with a single client. That's how TCP servers usually work (including HTTP/web, the most common TCP protocol).
You may disregard the asynchronous parts of ASIO if you're set on creating a thread per connection; they don't apply to your question. (Going fully async with one worker thread per core is nice, but it might not integrate well with the rest of your environment.)

Detect RPC connection loss from server-side on Windows

Is there any way to check the status of the RPC connection from the server-side? I am looking for a way to detect if the connection from the client is lost, be it client crash or other connectivity issues.
Use context handles for managing server state between calls for a particular client. RPC uses keep-alives to detect client disconnects and will execute your context handle rundown routine if the client disconnects.
Mo Flanagan's answer is the best IMHO. Some more context.
If you're using binding handles, there is no way of tracking state across RPC calls and the concept of a "client disconnect" is essentially meaningless - you still need to return from the RPC call.
If you're using context handles, then the RPC runtime library will call the _rundown function when the client disconnects.
When that routine is called, the server can clean up whatever it needs to.
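A minimal sketch of the pieces involved (MY_CONTEXT and ClientState are placeholder names; the rundown routine's name is derived from the handle type by the MIDL convention):

/* In the .idl file, declare a context handle type: */
typedef [context_handle] void* MY_CONTEXT;

/* In the C++ server, the RPC runtime calls the matching rundown
   routine when a client holding an open handle disconnects or crashes: */
struct ClientState { /* per-client server state (placeholder) */ };

void __RPC_USER MY_CONTEXT_rundown(MY_CONTEXT hContext)
{
    // Clean up whatever the handle points to; this runs even if the
    // client never called the explicit close RPC.
    delete static_cast<ClientState*>(hContext);
}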