COM Server hang detection and resolution - C++

I have an application that sends requests to an out-of-proc COM server, which handles the requests and sends the results back to the requesting application.
The client application controls the start and stop of this out-of-proc COM server and so determines its lifetime.
Because the application has many hundreds of requests in flight at any given time, it usually runs at least four instances of the same COM server to handle them.
The problem is that one of these COM servers sometimes gets hung up handling a request. The requesting application detects this and kills the out-of-proc COM server, but this does not always work.
What sometimes happens is that the client application requests a COM server shutdown, which results in the client releasing all references to the COM server, but the COM server ends up sitting at around 25% CPU and never dies. It seems to simply hang while consuming CPU constantly.
The client has a mechanism to forcibly kill the COM server process if it fails to exit, but even that does not seem to work in the cases where the COM server gets into this CPU-spinning state.
Has anybody experienced something similar, or does anyone have advice on how to resolve a situation like this?

You need to design every call in the COM server so that it completes within a reasonably short time. When a new call arrives from the client, COM spawns a separate thread and dispatches the call onto it. There is no reliable way to interrupt the call from the outside; the call has to end on its own (simply return), and you achieve that by designing your algorithm appropriately.
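A minimal sketch of that pattern (the class, method, and 5-second budget below are hypothetical, not taken from the question): do the work in small, bounded steps and check a deadline, so the call always returns instead of spinning forever.

    // Hypothetical COM method: processes work in bounded chunks and enforces a deadline.
    #include <windows.h>

    STDMETHODIMP CRequestHandler::ProcessRequest(LONG itemCount, LONG* processed)
    {
        const ULONGLONG deadline = GetTickCount64() + 5000;   // assumed 5-second budget per call

        LONG done = 0;
        while (done < itemCount)
        {
            // ... process one small, bounded unit of work here ...
            ++done;

            if (GetTickCount64() > deadline)
            {
                *processed = done;                             // report partial progress
                return HRESULT_FROM_WIN32(ERROR_TIMEOUT);      // fail the call rather than hang
            }
        }

        *processed = done;
        return S_OK;
    }

The client then treats the timeout HRESULT as a failed request and can retry or give up, instead of having to kill a server that never returns.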

Related

Apache Thrift: Terminate Connection from the Server

I am using Thrift to provide an interface between a device and a management console. Up to four connections to the device can be active at one time, and I have this working using a TThreadPoolServer.
The issue arises around client disconnections: if a client disconnects correctly there is no issue, but if one does not (i.e. the client crashes or never calls client->close()) then the server seems to keep that client's thread alive. This means that when the next connection attempt is made, the new client hangs, as the server has used up its allocated thread pool and cannot service the new request.
I haven't been able to find any standard, public mechanism by which the server can stop, and hence free up, a client's thread if that client has not used the interface for a set time period.
Is there a standard way to facilitate this in Thrift?
Setting the receive/send timeouts on the server socket might help; the server will close the connection on timeout.
https://github.com/apache/thrift/blob/129f332d72facda5d06f87e2b4e5e08bea0b6b44/lib/cpp/src/thrift/transport/TServerSocket.h#L103
void setSendTimeout(int sendTimeout);
void setRecvTimeout(int recvTimeout);
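For example, a small sketch of how those setters could be used (the port number and the 30-second values are assumptions, and this assumes a recent Thrift C++ build that uses std::shared_ptr):

    #include <memory>
    #include <thrift/transport/TServerSocket.h>

    using apache::thrift::transport::TServerSocket;

    // Build the listening socket with idle timeouts before handing it to the server.
    auto serverSocket = std::make_shared<TServerSocket>(9090);
    serverSocket->setRecvTimeout(30000);  // milliseconds: drop a client that sends nothing for 30 s
    serverSocket->setSendTimeout(30000);  // milliseconds: drop a client that stops reading replies

    // Pass serverSocket to your existing TThreadPoolServer in place of the plain socket;
    // a timed-out connection is closed and its worker thread goes back to the pool.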

Design a multi-client server application where clients send messages infrequently

I have to design a server that can send the same objects to many clients. Clients may send requests to the server if they want to update something in the database.
Things that are confusing me:
1. My server should start the program (where I perform some operation and produce 'results', which will be sent to the clients).
2. My server should listen for incoming connections from clients; if there are any, it should accept them and start sending the 'results'.
3. The server should accept as many clients as possible (not more than 100).
4. My 'results' should be secured. I don't want someone to take my 'results' and see what my program logic looks like.
I thought point 1 would be one thread, and point 2 another thread that creates multiple threads within its scope to serve point 3. Point 4 should be handled by my application logic while serialising the 'results', rather than by the server.
Is this a bad idea? If so, where can I improve?
Thanks
Putting every connection on its own thread is very bad, and is apparently a common mistake that beginners make. Every thread costs about 1 MB of memory, and this will weigh your program down for no good reason. I asked the very same question before and got a very good answer: I used Boost.Asio, and the server/client project was finished months ago and has been running beautifully since.
If you use C++ and SSL (to secure your connection), no one will see your logic, since your programs are compiled. But you will have to write your own communication protocol/serialization in that case.
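As a rough illustration of the thread-free approach (a minimal sketch only; the port, buffer size, and echo behaviour are placeholders, and it assumes Boost 1.66 or later for the move-accepting async_accept):

    #include <boost/asio.hpp>
    #include <array>
    #include <memory>

    using boost::asio::ip::tcp;

    // One lightweight Session object per client instead of one thread per client.
    class Session : public std::enable_shared_from_this<Session> {
    public:
        explicit Session(tcp::socket socket) : socket_(std::move(socket)) {}
        void start() { read(); }
    private:
        void read() {
            auto self = shared_from_this();
            socket_.async_read_some(boost::asio::buffer(buf_),
                [self](boost::system::error_code ec, std::size_t n) {
                    if (!ec) {  // echo the bytes back as a stand-in for real request handling
                        boost::asio::async_write(self->socket_,
                            boost::asio::buffer(self->buf_, n),
                            [self](boost::system::error_code, std::size_t) { self->read(); });
                    }           // on error the Session is destroyed and the connection closes
                });
        }
        tcp::socket socket_;
        std::array<char, 4096> buf_{};
    };

    void accept_loop(tcp::acceptor& acceptor) {
        acceptor.async_accept([&acceptor](boost::system::error_code ec, tcp::socket socket) {
            if (!ec) std::make_shared<Session>(std::move(socket))->start();
            accept_loop(acceptor);  // keep accepting further clients
        });
    }

    int main() {
        boost::asio::io_context io;
        tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 5555));
        accept_loop(acceptor);
        io.run();  // a single thread services all connections
    }

One io_context thread comfortably handles a hundred mostly-idle connections, which fits the "infrequent messages" requirement.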

Handing over an established TCP connection from one process to another

I am writing a simple web server in C++ that handles long-lived connections. However, I need to reload my web server from time to time. I wonder if there is a way to hand the established connections over from one process to another so that they are retained after the reload.
Would it be enough to pass only the file descriptors? What would happen to the connection state?
Is there any similar open source project that does the same thing?
Any thoughts or ideas?
Thanks,
I really have no idea whether this is possible, but I think not. If you fork(), then the child will "inherit" the descriptors, but I don't know whether they behave like they should (though I suspect that they do). And with forking, you can't run new code (can you?). Plain descriptor numbers are process-specific, so just passing them to a new, unrelated process won't work either, and they will be closed when your process terminates anyway.
One solution (in the absence of a simpler one) is to break your server into two processes:
Front-end: a very simple process that just accepts the connections, keeps them open, and forwards any data it receives to the second process, and vice versa.
Server: the real web server, which does all the logic and processing but does not communicate with the clients directly.
The first and second processes communicate via a simple protocol. One feature this protocol must have is that it supports the second process being terminated and relaunched.
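For instance, a hedged sketch of what such a protocol could look like (the frame layout and names are mine, not a standard): every message carries the front-end's id for the client connection plus a length-prefixed payload, so a freshly relaunched server process can simply resume receiving frames for the same connection ids.

    #include <cstdint>
    #include <string>
    #include <unistd.h>

    struct FrameHeader {
        uint32_t connection_id;  // front-end's identifier for the client connection
        uint32_t payload_size;   // number of payload bytes that follow
    };

    // A real implementation would also loop on partial reads/writes; omitted for brevity.
    bool write_frame(int fd, uint32_t connection_id, const std::string& payload) {
        FrameHeader h{connection_id, static_cast<uint32_t>(payload.size())};
        if (write(fd, &h, sizeof h) != sizeof h) return false;
        return write(fd, payload.data(), payload.size()) ==
               static_cast<ssize_t>(payload.size());
    }

    bool read_frame(int fd, uint32_t& connection_id, std::string& payload) {
        FrameHeader h;
        if (read(fd, &h, sizeof h) != sizeof h) return false;  // the peer went away
        connection_id = h.connection_id;
        payload.resize(h.payload_size);
        return read(fd, &payload[0], payload.size()) ==
               static_cast<ssize_t>(payload.size());
    }

The front-end and the server would exchange these frames over a local (e.g. Unix domain) socket, with the front-end mapping connection ids to the real client sockets.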
Now you can reload the actual server process without losing the client connections (since they are handled by the front-end process). And since this front-end is extremely simple and probably has very few configuration options and bugs, you rarely need to reload it at all. (I'm assuming that you need to reload your server process because it runs into bugs that need to be fixed, or because you need to change configuration and such.)
Another important and helpful feature this system can have is the ability to transition between server processes "gradually". That is, you already have a front-end and a server running, but you decide to reload the server. You launch another server process that connects to the front-end (while the old server is still running and connected), and the front-end forwards all new client connections to the new server process (or even all new requests coming from the existing client connections). When the old server finishes processing everything it has in flight, it exits gracefully and cleanly.
As I said, this is a solution you might want to try only if nothing easier and simpler is found.

What happens to queued FastCGI requests when my server goes down?

I understand that FastCGI queues requests and acts on them one by one. I was wondering what would happen if there are multiple requests queued and, for some reason, my server goes down. Will it still remember the requests and continue acting on them when the server comes back up, or will I lose all of those queued requests?
You will lose the queued requests. They are held in memory, not on disk.
Unless you have documentation for your FastCGI application saying otherwise, I would assume that when the OS or hardware fails or shuts down, the requests in flight will be lost. If you want to be certain, you can set up a test where some requests are queued, and then shut down or unplug as needed to simulate the situation you want to test.

webservice dispatcher

Here is my problem: I have a C++ application that consists of a Qt GUI and quite a lot of backend code. Currently it is linked into one executable and runs on Solaris. Now I would like to run the GUI on Windows and leave the rest of the code running on Solaris (porting it would be a huge effort). The interface between the GUI and the backend is pretty clean and consists of one C++ abstract class (it also uses some STL containers). This is the part I would like to turn into a web service.
The problem is that our backend code is not thread-safe, so I will need to run a separate process on Solaris for every GUI instance on Windows. However, for performance reasons I cannot start and stop a process for every request from the GUI.
This design means that I need to take care of several problems:
there must be a single point of contact for the GUI code,
the communication must happen with the instance started during the first call (either the calls should be routed, or the first call should return the address of the actual server instance),
there must be some keep-alive messages sent between the GUI and the server process to manage the server process's lifetime (the server process cannot run forever).
Could you recommend a framework that would take care of these details (message routing/dispatching and lifetime management)?
You could technically configure Apache httpd to spawn a new instance per connection. The configuration also allows you to manage the time the processes stay alive when idle, and how many processes to leave running at a minimum. This would work well as long as the web service is stateless. A little weird, but technically feasible.
If you use something like gSOAP, you can compile your C++ classes on Solaris directly into a gSOAP module and won't have to adapt them to any front-end like PHP or Java. It will just plug into Apache httpd and start working.
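To give a feel for the Solaris side, here is a heavily hedged sketch of a gSOAP standalone service loop (the header and .nsmap names, the port, and the timeout values are placeholders that depend on your soapcpp2-generated code; the Apache module route mentioned above wires the same generated code into httpd instead of this loop):

    #include "soapH.h"            // generated by soapcpp2 from your service header
    #include "webservice.nsmap"   // generated namespace table (name depends on your project)

    int main()
    {
        struct soap soap;
        soap_init(&soap);
        soap.recv_timeout = 60;   // seconds: drop GUI clients that go silent
        soap.send_timeout = 60;
        if (!soap_valid_socket(soap_bind(&soap, NULL, 8080, 10)))
            return 1;             // could not listen on the port
        for (;;)
        {
            if (!soap_valid_socket(soap_accept(&soap)))
                break;
            soap_serve(&soap);    // dispatches to your generated service functions
            soap_destroy(&soap);  // free deserialized C++ objects for this request
            soap_end(&soap);      // free remaining temporary data
        }
        soap_done(&soap);
    }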
Edit:
I just thought about it, and you could probably use HTTP/1.1 keep-alives to manage the lifetime of the process too. Apache lets you configure how long it will allow a keep-alive connection to remain open, which keeps the thread/process for that connection active.