Can we verify that Boost.Log core removed a sink? - C++

I am using Boost.Log to build a logging system for my program.
I understand the Boost.Log mechanism like this:
the core singleton registers the sink, which raises the sink's shared pointer use count by 1; attaching the backend raises the count to 2, on top of the sink's original shared pointer count of 0.
In my code I remove the sink from the core and expect the use count of this frontend sink's shared pointer to drop to 1; I then test the shared pointer for uniqueness and, if it is unique, I reset it.
I use multiple threads and use a mutex to protect the Boost.Log code working with this specific sink (I also have a cout sink, which I do not protect).
The problem is: sometimes I find that the frontend sink's shared pointer count is not 2 but 3.
I do not know why this happens, since every sink is registered with the core once, making its count 1; then adding the backend should give a count of only 2.
Is there any way I can verify that the core has removed the frontend sink?
Is there any way to know where each instance of the shared pointer is held in the code?
Thanks a lot.
Update:
If core.remove_sink is executed on one thread while, at the same time, the core logs to cout on another thread (the cout sink is not protected by a mutex), I can see on the console that a message is written in the wrong position: a message that ought to come after core.remove_sink appears before it. BUT here the frontend sink's shared pointer count is not reduced!
Did the core discard the remove_sink that arrived at the same time as logging to another sink?

Is there any way I can verify that the core has removed the frontend sink?
The sink is considered removed when remove_sink returns. That is, it will not receive any future log records.
It may not be released by the library at that point, because there may be log records in progress at the time of the remove_sink call, and remove_sink may return before those log records are fully processed. Log record processing will continue and may involve the sink that is being removed. Eventually, when all log records are processed and remove_sink has returned, the sink will have been released by the core and, if no more references are left, destroyed.
You can detect when the sink is no longer present by using weak_ptr, which you can construct from shared_ptr referencing the sink. When the last shared_ptr referencing the sink object is destroyed or reset, the weak_ptr::lock method will return a null shared_ptr. Note that this includes any shared_ptrs to the sink that you may be holding in your code.
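A minimal sketch of the weak_ptr technique described above. FakeSink is a hypothetical stand-in for a Boost.Log sink frontend; the mechanism depends only on shared_ptr/weak_ptr semantics, not on anything Boost.Log-specific.

```cpp
#include <cassert>
#include <memory>

// Hypothetical stand-in for a Boost.Log sink frontend.
struct FakeSink {};

// Keep a weak_ptr next to the shared_ptr you register with the core.
// The weak_ptr does not contribute to the use count.
std::weak_ptr<FakeSink> make_watcher(const std::shared_ptr<FakeSink>& sink) {
    return std::weak_ptr<FakeSink>(sink);
}

// Returns true once every shared_ptr to the sink, including the core's
// and your own, has been destroyed or reset.
bool sink_destroyed(const std::weak_ptr<FakeSink>& watcher) {
    return watcher.lock() == nullptr;
}
```

In a real program you would poll sink_destroyed (or check it at a known synchronization point) after calling core.remove_sink and resetting your own references.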
Is there any way to know where each instance of the shared pointer is present in the code?
Generally, no. You will have to manually track where you pass and save pointers to objects.

Related

How to implement long running gRPC async streaming data updates in C++ server

I'm creating an async gRPC server in C++. One of the methods streams data from the server to clients - it's used to send data updates to clients. The frequency of the data updates isn't predictable. They could be nearly continuous or as infrequent as once per hour. The model used in the gRPC example with the "CallData" class and the CREATE/PROCESS/FINISH states doesn't seem like it would work very well for that. I've seen an example that shows how to create a 'polling' loop that sleeps for some time and then wakes up to check for new data, but that doesn't seem very efficient.
Is there another way to do this? If I use the "CallData" method can it block in the 'PROCESS' state until there's data (which probably wouldn't be my first choice)? Or better, can I structure my code so I can notify a gRPC handler when data is available?
Any ideas or examples would be appreciated.
In a server-side streaming example, you probably need more states, because you need to track whether there is currently a write already in progress. I would add two states, one called WRITE_PENDING that is used when a write is in progress, and another called WRITABLE that is used when a new message can be sent immediately. When a new message is produced, if you are in state WRITABLE, you can send immediately and go into state WRITE_PENDING, but if you are in state WRITE_PENDING, then the newly produced message needs to go into a queue to be sent after the current write finishes. When a write finishes, if the queue is non-empty, you can grab the next message from the queue and immediately start a write for it; otherwise, you can just go into state WRITABLE and wait for another message to be produced.
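The two-state write logic above can be sketched without any gRPC types at all; start_write_ below stands in for something like ServerAsyncWriter::Write, and all names here are illustrative, not part of the gRPC API.

```cpp
#include <cassert>
#include <functional>
#include <queue>
#include <string>

// Sketch of the WRITABLE / WRITE_PENDING state machine. The callback
// injected in the constructor represents issuing an async gRPC write.
class WriteStateMachine {
public:
    enum State { WRITABLE, WRITE_PENDING };

    explicit WriteStateMachine(std::function<void(const std::string&)> start_write)
        : start_write_(std::move(start_write)) {}

    // Called when the application produces a new message.
    void OnNewMessage(const std::string& msg) {
        if (state_ == WRITABLE) {
            state_ = WRITE_PENDING;
            start_write_(msg);          // no write in flight: send immediately
        } else {
            pending_.push(msg);         // a write is in flight: queue it
        }
    }

    // Called when the completion queue reports the previous write finished.
    void OnWriteDone() {
        if (!pending_.empty()) {
            start_write_(pending_.front());  // keep the stream busy
            pending_.pop();
        } else {
            state_ = WRITABLE;          // idle until the next message
        }
    }

    State state() const { return state_; }

private:
    State state_ = WRITABLE;
    std::queue<std::string> pending_;
    std::function<void(const std::string&)> start_write_;
};
```

In a real async server these methods would be driven from the completion-queue loop, with appropriate locking if producers run on other threads.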
There should be no need to block here, and you probably don't want to do that anyway, because it would tie up a thread that should otherwise be polling the completion queue. If all of your threads wind up blocked that way, you will be blind to new events (such as new calls coming in).
An alternative here would be to use the C++ sync API, which is much easier to use. In that case, you can simply write straight-line blocking code. But the cost is that it creates one thread on the server for each in-progress call, so it may not be feasible, depending on the amount of traffic you're handling.
I hope this information is helpful!

How to wait for unknown number of processes to end

The scenario:
There are several processes running on a machine. Names and handles unknown, but they all have a piece of code running in them that's under our control.
A command line process is run. It signals to the other processes that they need to end (SetEvent), which our code picks up and handles within the other processes.
The goal:
The command line process needs to wait until the other processes have ended. How can this be achieved?
All that's coming to mind is to set up some shared memory or something and have each process write its handle into it so the command line process can wait on them, but this seems like so much effort for what it is. There must be some kernel level reference count that can be waited on?
Edit 1:
I'm thinking maybe assigning the processes to a job object, so the command line process can wait on that? Not ideal, though...
Edit 2:
Can't use job objects, as that would interfere with other things that use jobs. So now I'm thinking that the processes would obtain a handle to some/any sync object (semaphore, event, etc.), and the command line process would poll for its existence. It would have to poll, because waiting on it would keep the object alive. The sync object gets cleaned up by Windows when the processes die, so the next poll would indicate that there are no processes left. Not the nicest, cleanest method, but simple enough for the job it needs to do. Any advance on that?
You can do it in any of the following ways.
Shared memory (memory-mapped object): CreateFileMapping, then MapViewOfFile; process the request, then UnmapViewOfFile and close the handle.
Named pipe: create a named pipe for each application and keep a thread running to read from it, so your application can send an "end" message by connecting to that pipe. (You can implement a small protocol on top of this in the same way.)
WinSock: don't use this if you have a large number of processes, since you need to send the end request to each one; either the process must bind to your application or it must be listening on a port.
Shared file/DB: share a file between the processes (you can have multiple files if needed), and take a lock before reading or writing.
I would consider a solution using two objects:
a shared semaphore object, created by the main (controller?) app with an initial count of 0, just before it requests the other processes to terminate (by calling SetEvent()) - I assume that the other processes don't create this event object themselves, nor do they fail if it has not been created yet.
a mutex object, created by the other (child?) processes, used not for waiting on, but to let the main process check for its existence (once all child processes terminate, it should be destroyed). Mutex objects have the distinction that they can be "created" by more than one process (according to the documentation).
Synchronization would be as follows:
The child processes on initialization should create the Mutex object (set initial ownership to FALSE).
The child processes upon receiving the termination request should increase the semaphore count by one (ReleaseSemaphore()) and then exit normally.
The main process would enter a loop calling WaitForSingleObject() on the semaphore with a reasonably small timeout (e.g. 250 msec), and then check not whether the object was granted or a timeout occurred, but whether the mutex still exists - if not, all child processes have terminated.
This setup avoids making an interprocess communication scheme (eg having the child processes communicating their handles back - the number of which is unknown anyway), while it's not strictly speaking "polling" either. Well, there is some timeout involved (and some may argue that this alone is polling), but the check is also performed after each process has reported that it's terminating (you can employ some tracing to see how many times the timeout has actually elapsed).
The simple approach: you already have an event object that every subordinate process has open, so you can use that. After setting the event in the master process, close the handle, and then poll until you discover that the event object no longer exists.
The better approach: named pipes as a synchronization object, as already suggested. That sounds complicated, but it isn't.
The idea is that each of the subordinate processes creates an instance of the named pipe (i.e., all with the same name) when starting up. There's no need for a listening thread, or indeed any I/O logic at all; you just need to create the instance using CreateNamedPipe, then throw away the handle without closing it. When the process exits, the handle is closed automatically, and that's all we need.
To see whether there are any subordinate processes, the master process would attempt to connect to that named pipe using CreateFile. If it gets a file not found error, there are no subordinate processes, so we're done.
If the connection succeeded, there's at least one subordinate process that we need to wait for. (When you attempt to connect to a named pipe with more than one available instance, Windows chooses which instance to connect you to. It doesn't matter to us which one it is.)
The master process would then call ReadFile (just a simple synchronous read, one byte will do) and wait for it to fail. Once you've confirmed that the error code is ERROR_BROKEN_PIPE (it will be, unless something has gone seriously wrong) you know that the subordinate process in question has exited. You can then loop around and attempt another connection, until no more subordinate processes remain.
(I'm assuming here that the user will have to intervene if one or more subordinates have hung. It isn't impossible to keep track of the process IDs and do something programmatically if that is desirable, but it's not entirely trivial and should probably be a separate question.)
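The master-process side of the named-pipe approach could look roughly like this. This is a Windows-only sketch; the pipe name is hypothetical, and the subordinates are assumed to have created instances of the same pipe with CreateNamedPipe without ever closing the handle.

```cpp
#include <windows.h>

// Hypothetical pipe name; subordinates create instances of this pipe.
static const wchar_t* kPipeName = L"\\\\.\\pipe\\my_shutdown_pipe";

// Blocks until no subordinate process holds a pipe instance anymore.
void WaitForSubordinates() {
    for (;;) {
        HANDLE h = CreateFileW(kPipeName, GENERIC_READ, 0, NULL,
                               OPEN_EXISTING, 0, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            if (GetLastError() == ERROR_FILE_NOT_FOUND)
                return;                       // no instances left: all exited
            if (GetLastError() == ERROR_PIPE_BUSY) {
                // An instance exists but is momentarily unavailable; retry.
                WaitNamedPipeW(kPipeName, NMPWAIT_WAIT_FOREVER);
                continue;
            }
            return;                           // unexpected error; give up
        }
        // Block until the owning process exits, which breaks the pipe;
        // ReadFile then fails with ERROR_BROKEN_PIPE.
        char byte;
        DWORD read = 0;
        while (ReadFile(h, &byte, 1, &read, NULL)) { /* ignore any data */ }
        CloseHandle(h);
        // Loop around and try to connect to the next instance.
    }
}
```

This is untested sketch code; in production you would also want to handle ERROR_ACCESS_DENIED and distinguish genuinely unexpected errors rather than silently returning.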

How can I avoid the cost of a conditional by swapping pointers when finished with a producer consumer queue?

I have a logger class with a tbb::concurrent_queue as a member field. Threads using a logger object call a method to push a message onto this internal queue. The logger has an internal thread that consumes these messages until it receives a sentinel message, at which point the internal thread exits. The thing is, if clients of this logger object try to log more messages (after the call to shutdown has sent the sentinel), the messages are pushed onto the queue but never picked up on the other end, and are silently lost. Ideally I would like to report when this happens, but using a flag, set by the internal consumer thread on exit and checked each time I'm about to push a new message onto the queue, would add a branch cost to what is a critical path for me.
One idea I heard was that maybe the consumer thread, right before exiting, could somehow atomically swap out the pointer to the queue, so that the next call to it would invoke something else that could handle the message differently.
i.e. the call m_buffer->push(message), where m_buffer is a pointer to a tbb::concurrent_queue, should still look like m_buffer->push(message) after the consumer thread is done, except it goes somewhere else: a custom handler of mine or so...
How can I do this, though? I can't make m_buffer point to any other custom class unless I inherit from tbb::concurrent_queue... is there another way around this?
Thanks
Sounds more like a design problem to me. What do clients of a logger want? To log a message. They don't care whether it goes onto a queue, or is written out to the screen, or is written onto a piece of paper, inserted into a glass bottle and thrown into the ocean.
So your logger need only expose a log() method, nothing else. Queue swapping would happen internally, by keeping two queues in an array and atomically switching a pointer or index around.
Also, if the speed of your logging code is critical, you might be doing too much of it...
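One way to sketch the internal swap: hide the sink behind an interface and an atomic pointer. All names here are illustrative; in real code the live sink would wrap the tbb::concurrent_queue, and you would also need to make sure no thread is still mid-call into the old sink before destroying it (the virtual call also replaces the branch with an indirect call, which may or may not be cheaper in practice).

```cpp
#include <atomic>
#include <cassert>
#include <string>

// Abstract sink: the logger only knows about push().
struct Sink {
    virtual void push(const std::string& msg) = 0;
    virtual ~Sink() = default;
};

struct CountingSink : Sink {            // stands in for the live queue
    std::atomic<int> pushed{0};
    void push(const std::string&) override { ++pushed; }
};

struct DroppingSink : Sink {            // installed after shutdown
    std::atomic<int> dropped{0};
    void push(const std::string&) override { ++dropped; }
};

class Logger {
public:
    explicit Logger(Sink* initial) : sink_(initial) {}

    // Hot path: no explicit branch, just an indirect call through
    // whichever sink the atomic pointer currently designates.
    void log(const std::string& msg) {
        sink_.load(std::memory_order_acquire)->push(msg);
    }

    // Called by the consumer thread right before it exits.
    void swap_sink(Sink* replacement) {
        sink_.store(replacement, std::memory_order_release);
    }

private:
    std::atomic<Sink*> sink_;
};
```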
Why would the logging thread have to set the 'no more logging' flag? The 'shutdown' method of the logger should set it to prevent any more messages getting queued up, just before it issues the sentinel that will eventually signal the thread to shut down, so that the sentinel is always the last item queued. What you do with logging requests when the flag is set is up to your logger.

Resetting Threaded Events - C++

Let's say that I have a switch statement in my thread function that evaluates for triggered events. Each case is a different event. Is it better to put the call to ResetEvent at the end of the case, or at the beginning? It seems to me that it should go at the end, so that the event cannot be triggered again, until the thread has finished processing the previous event. IF it is placed at the beginning, the event could be triggered again, while being processed.
Yes, I think that is the way to go. Create a manual-reset event (second parameter of the CreateEvent API) so that the event is not automatically reset after being set.
If you handle incoming traffic using a single Event object (implying you have no inbound queue), you will miss events. Is this really what you want?
If you want to catch all events, a full-blown producer-consumer queue would be a better bet. Reference implementation for Boost.Thread here.
One problem that comes up time and again with multi-threaded code is how to transfer data from one thread to another. For example, one common way to parallelize a serial algorithm is to split it into independent chunks and make a pipeline: each stage in the pipeline can be run on a separate thread, and each stage adds the data to the input queue for the next stage when it's done. For this to work properly, the input queue needs to be written so that data can safely be added by one thread and removed by another thread without corrupting the data structure.
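The thread-safe queue the quoted text describes can be sketched with the standard library (the quote references Boost.Thread, but std::mutex and std::condition_variable express the same idea):

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>

// Minimal producer-consumer queue: push() from any thread,
// wait_and_pop() blocks until an element is available.
template <typename T>
class ConcurrentQueue {
public:
    void push(T value) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(value));
        }                                 // release the lock before notifying
        cond_.notify_one();               // wake one waiting consumer
    }

    T wait_and_pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        cond_.wait(lock, [this] { return !queue_.empty(); });
        T value = std::move(queue_.front());
        queue_.pop();
        return value;
    }

private:
    std::queue<T> queue_;
    std::mutex mutex_;
    std::condition_variable cond_;
};
```

The predicate passed to cond_.wait guards against spurious wakeups; this sketch omits extras a production queue would want, such as a shutdown mechanism or bounded capacity.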

Looking for design advice - Statistics reporter

I need to implement a statistics reporter: an object that prints a bunch of statistics to the screen.
This info is updated by 20 threads.
The reporter must be a thread itself that wakes up every second, reads the info, and prints it to the screen.
My design so far: InfoReporterElement - one element of info; it has two functions, PrintInfo and UpdateData.
InfoReporterRow - one row on screen. A row holds a vector of InfoReporterElement.
InfoReporterModule - a module composed of a header and a vector of rows.
InfoReporter - the reporter, composed of a vector of modules and a header. The reporter exports the function 'PrintData', which goes over all modules/rows/basic elements and prints the data to the screen.
I think I should have an object responsible for receiving updates from the threads and updating the basic info elements.
The main problem is how to update the info - should I use one mutex for the whole object, or a mutex per basic element?
Also, which object should be a thread - the reporter itself, or the one that receives updates from the threads?
I would say that first of all, the Reporter itself should be a thread. It's basic, in terms of decoupling, to isolate the drawing part from the active code (MVC).
The structure itself is of little use here. When you reason in terms of multithreading, it's not so much the structure as the flow of information that you should check.
Here you have 20 active threads that will update the information, and 1 passive thread that will display it.
The problem here is that you risk introducing delay into the work to be done, because an active thread cannot acquire the lock (used for display). Reporting (or logging) should never block (or should block as little as possible).
I propose to introduce an intermediate structure (and thread) to separate the GUI from the work: a queuing thread.
active threads post events to the queue
the queuing thread updates the structure above
the displaying thread shows the current state
You can avoid some synchronization issues by using the same idea that is used for Graphics. Use 2 buffers: the current one (that is displayed by the displaying thread) and the next one (updated by the queuing thread). When the queuing thread has processed a batch of events (up to you to decide what a batch is), it asks to swap the 2 buffers, so that next time the displaying thread will display fresh info.
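The two-buffer idea could be sketched like this. The names are illustrative; note that a real implementation must also ensure the displaying thread is not mid-read when the flip happens (for example, by flipping only between display cycles, or by using a seqlock-style scheme).

```cpp
#include <array>
#include <atomic>
#include <cassert>
#include <string>

// Two buffers and an atomic index: the queuing thread fills the back
// buffer, then publishes it by flipping the index; the displaying
// thread only ever reads the front buffer.
class DoubleBuffer {
public:
    // Written by the queuing thread between publishes.
    std::string& back() { return buffers_[1 - front_.load()]; }

    // Read by the displaying thread.
    const std::string& front() const { return buffers_[front_.load()]; }

    // Called by the queuing thread after processing a batch of events.
    void publish() { front_.store(1 - front_.load()); }

private:
    std::array<std::string, 2> buffers_;
    std::atomic<int> front_{0};
};
```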
Note: On a more personal note, I don't like your structure. The working thread has to know exactly where on the screen the element it should update is displayed, this is a clear breach of encapsulation.
Once again, look up MVC.
And since I am neck deep in patterns: look up Observer too ;)
The main problem is how to update the info - should I use one mutex for the object or use a mutex per basic element?
Put a mutex around the basic unit of update action. If this is an InfoReporterElement object, you'd need a mutex per such object. Otherwise, if a row is updated at a time by any one of the threads, then put the mutex around the row, and so on.
Also, which object should be a thread - the reporter itself, or the one that receives updates from the threads?
You can put all of them in separate threads -- multiple writer threads that update the information and one reader thread that reads the value.
You seem to have a pretty good grasp of the basics of concurrency.
My initial thought would be a queue with a mutex that locks for writes and deletes. If you have the time, I would look at lock-free access.
For your second concern, I would have just one reader thread.
A piece of code would be nice to operate on.
Attach a mutex to every InfoReporterElement. As you've written in a comment, you not only need to get and set the element's value, but also to increment it and probably do other operations, so what I'd do is make a mutex-protected member function for every interlocked operation I need.
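A minimal sketch of that per-element mutex, reusing the InfoReporterElement name from the question; the member functions shown are illustrative examples of "one locked member function per interlocked operation":

```cpp
#include <cassert>
#include <mutex>

// Each element owns its mutex; every operation that must be atomic
// with respect to the other threads is a locked member function.
class InfoReporterElement {
public:
    void set(long v) {
        std::lock_guard<std::mutex> lock(mutex_);
        value_ = v;
    }
    long get() const {
        std::lock_guard<std::mutex> lock(mutex_);
        return value_;
    }
    void increment() {
        std::lock_guard<std::mutex> lock(mutex_);
        ++value_;
    }

private:
    mutable std::mutex mutex_;  // mutable so get() can be const
    long value_ = 0;
};
```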