Object pool design pattern problem - C++

For object pools, we say that whenever a client asks for a resource, we give it one from the pool. Suppose I check out a resource, change its state, and check it back in. What happens on the next request: does the pool let a client check out this resource, or is the resource now invalid for the pool?

If an object released to the pool became invalid for re-use, the pool would be somewhat pointless. If a class requires initialization or re-initialization, you could do it in the pool's get() or release() methods. If reinitialization requires much more than simple assignments (e.g. a pool of socket objects that must not be re-used for 5 minutes), then you may have to resort to a dedicated pool-manager thread that effectively splits the pool into a couple of puddles: those objects available for re-use and those awaiting reinitialization.
Rgds,
Martin
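
A minimal sketch of a pool along those lines, with re-initialization done in release() so that everything sitting in the pool is always ready for re-use. The Resource type and its reset() method are placeholders for illustration:

#include <memory>
#include <mutex>
#include <vector>

// Hypothetical pooled resource whose re-initialization is cheap.
struct Resource {
    void reset() { /* restore the object to its original state */ }
};

class ObjectPool {
public:
    // Hand out a pooled object, creating a fresh one if the pool is empty.
    std::unique_ptr<Resource> get() {
        std::lock_guard<std::mutex> lock(mutex_);
        if (free_.empty())
            return std::make_unique<Resource>();
        auto r = std::move(free_.back());
        free_.pop_back();
        return r;
    }

    // Re-initialize on check-in, so a changed state never escapes back
    // to the next client.
    void release(std::unique_ptr<Resource> r) {
        r->reset();
        std::lock_guard<std::mutex> lock(mutex_);
        free_.push_back(std::move(r));
    }

private:
    std::mutex mutex_;
    std::vector<std::unique_ptr<Resource>> free_;
};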

Alternatively, you should not return the resource to the pool until it is back in its original state. For example, imagine you have a web server with a listener thread and a pool of 10 worker threads. The listener thread accepts incoming HTTP requests and dispatches them to the worker threads for processing. Worker threads in the pool (not checked out) are in their "original" state, i.e. idle, not processing a request. Once the listener thread checks out a worker thread and gives it an HTTP request, the worker thread begins processing the request; in other words, its state is "working". Once it is done processing the request and has sent the HTTP reply to the client, it is "idle" again and goes back into the pool. That way, all threads not currently checked out of the pool are always in their original state, "idle".
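
As a sketch of that arrangement (the names are illustrative, not from any particular library): workers block on the request queue while idle, so every thread "in the pool" is idle by construction, and a thread is "checked out" exactly while it runs a request.

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class WorkerPool {
public:
    explicit WorkerPool(std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            workers_.emplace_back([this] { run(); });
    }

    ~WorkerPool() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            done_ = true;
        }
        cv_.notify_all();
        for (auto& w : workers_) w.join();
    }

    // Called by the listener thread to hand a request to some idle worker.
    void dispatch(std::function<void()> request) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(request));
        }
        cv_.notify_one();
    }

private:
    void run() {
        for (;;) {
            std::function<void()> request;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                // "Idle": blocked here until there is work (or shutdown).
                cv_.wait(lock, [this] { return done_ || !queue_.empty(); });
                if (done_ && queue_.empty()) return;
                request = std::move(queue_.front());
                queue_.pop();
            }
            request();  // "working"; back to idle when this returns
        }
    }

    std::mutex mutex_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> queue_;
    std::vector<std::thread> workers_;
    bool done_ = false;
};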


gRPC C++ async client completion queue drain

I have created a gRPC async client, written in C++, which makes both streaming and unary requests to a server using a completion queue.
In the destructor of the client class, the completion queue's Shutdown method is called; I then thought I could call Next to drain the queue and obtain the pending tags, but the call to Next blocks everything instead.
The pending tags are needed because they are objects created with new and must be deleted to avoid leaks.
What is the correct way to drain a queue used for an async client?
It should be one tag into the completion queue, one tag out, so all the pending operations will get their tags returned from Next (even if the RPC gets canceled).
If Next blocks, it is most likely because there are pending events that have not finished yet.
You may want to use TryCancel to terminate the calls quickly.
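
The usual shape of the drain, then, is: cancel what is still in flight, shut the queue down, and loop on Next until it returns false. A sketch, assuming every tag is a heap-allocated object of one common type (CallData and active_contexts_ are placeholder names):

void Client::Drain() {
    // Cancel outstanding calls first so their pending events complete
    // quickly instead of waiting on the server.
    for (auto* context : active_contexts_)
        context->TryCancel();  // grpc::ClientContext::TryCancel

    cq_.Shutdown();  // no new work may be added after this

    void* tag = nullptr;
    bool ok = false;
    // After Shutdown, Next keeps returning the already-queued events
    // (with ok reflecting cancellation) and only returns false once the
    // queue is completely drained.
    while (cq_.Next(&tag, &ok))
        delete static_cast<CallData*>(tag);
}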

Launch dialog on main thread waiting for result of worker thread

I have an app which sends off HTTP requests and processes the received response. The main thread is blocked until a response comes back, else we couldn't process the data. To send these requests, the user must be authenticated. I wish to catch a 401 response and before returning the response for processing by my app, prompt the user for authentication. Depending on the success, I want to retry to send the original request and return that response instead, or, if authentication fails, return the original 401 response.
I'm using the C++ REST SDK to send HTTP requests. These happen in another thread (pplx::task). I'm also using MFC modal dialog to prompt for authentication. Some of you may see the deadlock that occurs. If not, let me explain more.
The main thread waits for the HTTP request to complete. Inside that thread, I catch a 401 and wish to launch a dialog. To do so, I use a boost::signal. This signal calls SendMessage to the handle I wish to display the dialog. After the message is processed by the MFC message loop, it will launch the dialog (on the main thread). This relies on the MFC message loop, which is blocked waiting for the HTTP request. In short, the main thread is already waiting for the request to finish so it can't run its message loop to receive the call from SendMessage.
Main thread is waiting on worker thread. Worker thread needs to launch a dialog on main thread before it can continue. Deadlock. Does anyone have any clever solutions for ways around this?
I think the simplest solution here is to redesign the way you are handling your threads.
I suggest that instead of having a single thread for requests, you spawn a new thread for each request and have it return the status code (no matter what it is). You can then handle any retry-with-authentication logic on your main thread (i.e. display an authentication dialog, then respawn the request thread with the credentials).
This also allows you to encapsulate your request handler better, which is a big plus. In order to properly encapsulate this logic (so you don't have to check every request) you should define some kind of request handler, either a class or a function. For example:
StatusCode make_request(...) {
    // Deal with the authentication logic here
}
Where StatusCode is a type for an HTTP status code.
Of course this doesn't solve the issue of your UI thread potentially waiting for your worker thread to finish, so you also need some kind of UI refresh method that is called every so often and checks the status of all the worker threads (i.e. by checking the returned std::futures). You would also want to change my example above to spawn a separate thread and return an std::future, as sketched below.
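
A rough sketch of that design, assuming a make_request like the one above (here a free function declared elsewhere) and using std::async plus a periodic, non-blocking check of the futures from the UI thread:

#include <chrono>
#include <future>
#include <string>
#include <vector>

// Hypothetical status type and request function, for illustration only.
enum class StatusCode { Ok = 200, Unauthorized = 401 };
StatusCode make_request(const std::string& url);  // defined elsewhere

std::vector<std::future<StatusCode>> pending;  // touched only by UI thread

// Spawn a worker per request instead of blocking the UI thread.
void start_request(const std::string& url) {
    pending.push_back(std::async(std::launch::async, make_request, url));
}

// Called periodically on the UI thread (e.g. from a timer): poll the
// futures without blocking, and prompt for credentials on a 401.
void poll_requests() {
    for (auto it = pending.begin(); it != pending.end();) {
        if (it->wait_for(std::chrono::seconds(0)) == std::future_status::ready) {
            StatusCode status = it->get();
            if (status == StatusCode::Unauthorized) {
                // Safe to launch the MFC login dialog here: we are on
                // the UI thread and its message loop is running. On
                // success, call start_request again with credentials.
            }
            it = pending.erase(it);
        } else {
            ++it;
        }
    }
}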

Create workers dynamically (ActiveMQ)

I want to create a web application where a client calls a REST web service. This returns an OK status to the client (with a link to the result) and creates a new message on an ActiveMQ queue. On the listener side of ActiveMQ there should be workers who process the messages.
I am stuck on my concept, because I don't really know how to determine the number of workers I need. The workers only have to call web service interfaces, so no high computation power is needed for the workers themselves. Most of the time a worker just waits for results to come back from the web service it called. But one worker cannot handle all requests, so if some limit of requests in the queue is exceeded (I don't know the limit yet), another worker should service the queue.
What is the best practice for doing this job? Should I create one worker per request and destroy it when the work is done? How do I dynamically create workers based on the queue size? Is it better to run these workers all the time or to create them when the queue requires it?
I think a topic/subscriber architecture is not reasonable, because only one worker should handle each request. Imagine an average of 100 requests per minute and 500 requests under high load.
My intention is to get results fast, so that no client has to wait for its answer just because resources are not being used properly...
Thank you
Why don't you figure out the maximum number of workers you could realistically support, create that many, and leave them running forever? I'd use a prefetch of either 0 or 1, to avoid piling up a bunch of messages in one worker's prefetch buffer while the others sit idle. With prefetch=0, a worker pulls the next message only when the current one is finished; with prefetch=1, a single message sits "on deck", available to be processed without a round-trip to the network, but a consumer might be free to take a message and still unable to, because the message is sitting in another consumer's prefetch buffer waiting for that consumer to be ready for it. I'd use prefetch=0 as long as the time to download your messages from the broker isn't unreasonable, since it spreads the workload as evenly as possible.
Then whenever there are messages to be processed, either a worker is available to process the next message (so no delay), or all the workers are busy (so of course you're going to have to wait because you're at capacity, but as soon as a worker is available it will take the next message from the queue).
Also, you're right that you want queues (where a message will be consumed by only a single worker), not topics (where a message will be consumed by every worker).
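
For illustration, here is roughly what each worker's loop could look like with the ActiveMQ-CPP (CMS) client. This is a hedged sketch: the API names follow the JMS-style CMS interface, the "jobs" queue name is made up, and you should check both against your client version. The consumer.prefetchSize=0 destination option gives the pull-on-demand behaviour described above.

#include <activemq/core/ActiveMQConnectionFactory.h>
#include <cms/Connection.h>
#include <cms/Destination.h>
#include <cms/Message.h>
#include <cms/MessageConsumer.h>
#include <cms/Session.h>
#include <memory>

// One long-lived worker; start a fixed number of these and leave them
// running. With prefetchSize=0, a worker asks the broker for the next
// message only when it has finished the current one.
void run_worker(cms::Connection* connection) {
    std::unique_ptr<cms::Session> session(
        connection->createSession(cms::Session::AUTO_ACKNOWLEDGE));
    std::unique_ptr<cms::Destination> queue(
        session->createQueue("jobs?consumer.prefetchSize=0"));
    std::unique_ptr<cms::MessageConsumer> consumer(
        session->createConsumer(queue.get()));

    for (;;) {
        std::unique_ptr<cms::Message> message(consumer->receive());  // blocks
        if (!message) break;  // consumer/connection closed
        // Call the downstream web service here; the worker is mostly
        // waiting on I/O, so a fixed set of threads is fine.
    }
}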

Why do agents have a pool of threads?

In the Clojure documentation I see that agents use a pool of threads to process data. But I also read (again in the documentation):
The actions of all Agents get interleaved amongst threads in a thread pool. At any point in time, at most one action for each Agent is being executed.
Why does an agent have a pool of threads, and not a single thread to process the "queue" of sent functions?
Thanks.
An agent does not 'have a pool of threads'. There are two thread pools (for send and send-off actions), to which agent actions get assigned.
This design decision is the optimal choice for CPU-bound tasks, and a best-effort approach for IO-bound tasks.
For the latter case, providing your own pool with send-via will be the optimal choice (assuming you know what you're doing).

C++ Singleton Threading problem

I have a C++ singleton that runs as a separate thread. This singleton is derived from a base class provided by a library, and it overrides the method onLogon(...). The onLogon method is synchronous: it wants to know right away whether we accept the logon attempt.
The problem is that we need to pass the logon information, via a message, to a security server. We can register a callback with the security server listener (a separate thread) to get the results of the logon authentication message we sent. My question is: how do I block in the onLogon method such that the thread is woken up by the callback I've registered with the security server listener thread, and how can I then access the response returned from the security server in a thread-safe way (i.e. able to handle multiple concurrent logon requests)?
I'm totally stumped.
Use an empty semaphore. After you send the credentials to the security server, take the semaphore; since it is empty, this blocks execution. Then have the callback function post to the semaphore, which resumes execution on the original thread.
Since callbacks typically allow an anonymous value to be passed as a parameter, you can register a pointer to a data structure that can be filled with the response.
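
A sketch of that approach with a C++20 std::binary_semaphore. The send function and the callback signature are assumptions standing in for the real security-server API; what matters is that each logon attempt gets its own context, which keeps concurrent requests thread-safe:

#include <semaphore>  // C++20
#include <string>

// One per logon attempt: the callback fills in the result and posts the
// semaphore to wake the blocked onLogon thread.
struct LogonRequest {
    std::binary_semaphore done{0};  // starts empty, so acquire() blocks
    bool accepted = false;
};

// Callback registered with the security-server listener thread; ctx is
// the anonymous pointer supplied at registration time.
void on_auth_reply(void* ctx, bool accepted) {
    auto* request = static_cast<LogonRequest*>(ctx);
    request->accepted = accepted;
    request->done.release();  // wake the waiting thread
}

// Hypothetical stand-in for the real "send to security server" call.
void send_auth_message(const std::string& user, const std::string& pass,
                       void (*callback)(void*, bool), void* ctx);

bool on_logon(const std::string& user, const std::string& pass) {
    LogonRequest request;  // stack-local, so each call has its own state
    send_auth_message(user, pass, &on_auth_reply, &request);
    request.done.acquire();  // block until the callback posts
    return request.accepted;
}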
I ended up going with boost::promise and boost::unique_future. It was perfect for what I needed.
http://www.boost.org/doc/libs/1_43_0/doc/html/thread/synchronization.html#thread.synchronization.futures
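
For reference, the same pattern with std::promise/std::future, the standard-library descendants of boost::promise and boost::unique_future. register_auth_callback is a made-up stand-in for the real registration API:

#include <functional>
#include <future>
#include <string>

// Hypothetical registration call: the security-server listener thread
// invokes on_reply when the authentication response arrives.
void register_auth_callback(const std::string& user, const std::string& pass,
                            std::function<void(bool)> on_reply);

bool on_logon(const std::string& user, const std::string& pass) {
    std::promise<bool> reply;
    std::future<bool> result = reply.get_future();

    // Capturing the promise by reference is safe here because this
    // function blocks in get() until set_value has run.
    register_auth_callback(user, pass, [&reply](bool accepted) {
        reply.set_value(accepted);
    });

    return result.get();  // blocks until the listener thread answers
}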