Phones Pro: Why are rollover calls classified as missed? - phone-call

If CSR1 is already on a call and another call comes in, they are unable to answer the new call and it rolls over to CSR2. If CSR2 answers the call, the call is not missed. Why does CSR1 receive a missed call metric for that call if:
They were on another call and physically unable to answer
The call was answered by someone else so it wasn't missed
Is it expected behavior to penalize CSR1 for already being on a call?

Related

Can an MPI_WAITALL call be skipped if all requests are already complete?

Let's say I have an array of non-blocking MPI requests initiated in a series of calls to MPI_ISEND (or MPI_IRECV). I store the request info in the array xfer_rqst. It is my understanding that when a request completes, its corresponding value in xfer_rqst will be set to MPI_REQUEST_NULL. Further, it's my understanding that if a call is made to MPI_WAITALL, it will have work to do only if some requests in xfer_rqst still have some value other than MPI_REQUEST_NULL. If I have it right, then, if all requests complete before we get to the MPI_WAITALL call, then MPI_WAITALL will be a no-op for all intents and purposes. If I am right about all this (hoo boy), then the following Fortran code should work and maybe occasionally save a useless function call:
IF (ANY(xfer_rqst /= MPI_REQUEST_NULL)) &
CALL MPI_WAITALL(num_xfers, xfer_rqst, xfer_stat, ierr)
I have actually run this several times without an issue. But still I wonder, is this code proper and safe?
It is indeed often the case that you can decide from the structure of your code that certain requests are satisfied. So you think you can skip the wait call.
And indeed you often can in the sense that your code will work. The only thing you're missing is that the wait call deallocates your request object. In other words, by skipping the wait call you have created a memory leak. If your wait call is in a region that gets iterated many times, this is a problem. (As noted in the comments, the standard actually states that the wait call is needed to guarantee completion.)
In your particular case, I think you're wrong about the null request: it's the wait call that sets the request to null. Try it: see if any requests are null before the wait call.
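To make that concrete, here is a minimal, self-contained Fortran sketch (not the poster's code; the variable names, the self-send, and the single-rank setup are made up for illustration) that checks the handles against MPI_REQUEST_NULL before and after the wait call:
program waitall_demo
   use mpi
   implicit none
   integer :: ierr, me, sendbuf, recvbuf
   integer :: xfer_rqst(2)

   CALL MPI_INIT(ierr)
   CALL MPI_COMM_RANK(MPI_COMM_WORLD, me, ierr)

   ! Post a receive and a matching send to this same rank so the example
   ! runs on a single process.
   sendbuf = 42
   CALL MPI_IRECV(recvbuf, 1, MPI_INTEGER, me, 0, MPI_COMM_WORLD, xfer_rqst(1), ierr)
   CALL MPI_ISEND(sendbuf, 1, MPI_INTEGER, me, 0, MPI_COMM_WORLD, xfer_rqst(2), ierr)

   ! Even if the transfer has already completed internally, the handles are
   ! not MPI_REQUEST_NULL until a completion call (test/wait) has run on them.
   PRINT *, 'any non-null before WAITALL? ', ANY(xfer_rqst /= MPI_REQUEST_NULL)

   ! MPI_WAITALL completes the requests if necessary, frees them, and sets
   ! them to MPI_REQUEST_NULL; on already-null requests it returns at once.
   CALL MPI_WAITALL(2, xfer_rqst, MPI_STATUSES_IGNORE, ierr)

   PRINT *, 'any non-null after WAITALL?  ', ANY(xfer_rqst /= MPI_REQUEST_NULL)

   CALL MPI_FINALIZE(ierr)
end program waitall_demo
Since no completion call has run on the requests before the wait, the first print shows T (handles still live); after MPI_WAITALL it shows F, and the guard around the wait call buys essentially nothing.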

DetectMultiScale never returns when calling overload with rejectLevels and levelWeights

I am trying to modify my OpenCV-based code to get confidences along with each detected object from a cascade classifier. When I call the overload of the CascadeClassifier's detectMultiScale method that takes output parameters for rejectLevels and levelWeights (and pass true for outputRejectLevels), the call never completes. Internally, the call to detectMultiScaleNoGrouping finishes quickly but returns millions of objects. When I don't pass either of the extra output parameters and set outputRejectLevels to false, that same call returns 60 objects and the rest of the function works fine.
Am I not supposed to call this overload? Or is there a different reason that it is returning so many values that the function never finishes? How can I do this correctly?
Note: I see another question that appears to be referencing the same problem, but it does not include much information about the problem itself and hasn't gotten any answers.
It turns out that this has been fixed in OpenCV's master branch, but not released yet (as of April 18, 2016). I have confirmed that manually applying the patch and rebuilding fixes the problem.
The relevant discussion is here and the PR that must be included to fix the problem is this one. It is a one-line change, so it should be easy to manually implement until they release an official build with the change included.
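For reference, here is a sketch of the two call shapes being compared, written against the OpenCV 3.x C++ API (the cascade file, image path, and detection parameters are placeholders):
#include <opencv2/objdetect.hpp>
#include <opencv2/imgcodecs.hpp>
#include <vector>

int main() {
    // Placeholder file names; error handling omitted for brevity.
    cv::CascadeClassifier classifier("cascade.xml");
    cv::Mat gray = cv::imread("frame.png", cv::IMREAD_GRAYSCALE);

    // Plain overload: works, but provides no confidence information.
    std::vector<cv::Rect> objects;
    classifier.detectMultiScale(gray, objects);

    // Overload with reject levels and level weights: levelWeights acts as a
    // per-detection confidence. On builds without the fix mentioned above,
    // this is the call that never returns.
    std::vector<cv::Rect> detections;
    std::vector<int> rejectLevels;
    std::vector<double> levelWeights;
    classifier.detectMultiScale(gray, detections, rejectLevels, levelWeights,
                                1.1, 3, 0, cv::Size(), cv::Size(),
                                /*outputRejectLevels=*/true);
    return 0;
}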

ReportEvent - when should I close HANDLE?

I've seen many examples of ReportEvent function on the web. In those examples, the handle that is sent to ReportEvent (first argument) is created right before the call to ReportEvent and destroyed right after the call (with RegisterEventSource and DeregisterEventSource respectively).
Is there a reason why the HANDLE is alive only for a short time? Why would it be better than just creating the HANDLE at the beginning of the program and destroying it at the end? (After all, it's only one HANDLE, and the maximum is around 16 million.) Is there any overhead in creating and destroying the HANDLE each time we call ReportEvent?
Thanks in advance,
Dror
Yes, you can. At the beginning of wmain, you may do:
HANDLE hEventLog = RegisterEventSource(NULL, PROVIDER_NAME);
And, at the end of wmain:
if (hEventLog)
    DeregisterEventSource(hEventLog);
This should do, and you can reuse the same handle throughout the program.
The more interesting aspect of this question is the following: can we call RegisterEventSource only, and then assume that exiting the process is enough to auto-close all related handles, including the one obtained via RegisterEventSource?
In other words, if we know for sure that the process will exit at some point, can we get away with calling RegisterEventSource only and nothing more?
To put the same question even more directly: if we never call DeregisterEventSource(), what problems can that cause? Would there be some kind of internal leak in the OS, or any other consequences?
For example:
I need to call ReportEvent() in a lot of different DLLs; as a result, in every DLL I always have to call three functions: RegisterEventSource, ReportEvent, and DeregisterEventSource. I would like to keep the handle in an internal static variable, call RegisterEventSource only once, and then never call DeregisterEventSource, in the hope that when the process exits all related handles will be closed automatically.
Would that work?
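Here is a minimal sketch of that pattern (the event source name and event ID are made up). Whether skipping the explicit DeregisterEventSource and relying on process exit to reclaim the handle is acceptable is exactly the open question above, so an optional cleanup function is shown as well:
#include <windows.h>

#define PROVIDER_NAME L"MyAppEventSource"   /* placeholder source name */

static HANDLE g_hEventLog = NULL;

static HANDLE GetEventLogHandle(void)
{
    /* Lazily register on first use and cache the handle for the lifetime
       of the process. Not thread-safe as written; a real module would use
       a once-init or a lock. */
    if (g_hEventLog == NULL)
        g_hEventLog = RegisterEventSourceW(NULL, PROVIDER_NAME);
    return g_hEventLog;
}

void LogInfoEvent(LPCWSTR message)
{
    HANDLE hEventLog = GetEventLogHandle();
    LPCWSTR strings[1];

    if (hEventLog == NULL)
        return;                              /* registration failed */

    strings[0] = message;
    ReportEventW(hEventLog,
                 EVENTLOG_INFORMATION_TYPE,  /* wType                   */
                 0,                          /* wCategory               */
                 0,                          /* dwEventID (placeholder) */
                 NULL,                       /* lpUserSid               */
                 1,                          /* wNumStrings             */
                 0,                          /* dwDataSize              */
                 strings,                    /* lpStrings               */
                 NULL);                      /* lpRawData               */
}

/* Optional explicit cleanup, to be called once at shutdown; the question
   above is whether omitting this and letting process termination reclaim
   the handle is good enough. */
void CloseEventLogHandle(void)
{
    if (g_hEventLog != NULL)
    {
        DeregisterEventSource(g_hEventLog);
        g_hEventLog = NULL;
    }
}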

What use is the future param in NDB's _post_delete_hook method?

The signature for an NDB _post_delete_hook in GAE is:
def _post_delete_hook(cls, key, future):
I am wondering what benefit the future parameter gives. According to the docs on Key.delete, this Future will always be None. The docs even say you cannot use the Future to determine if a delete succeeded. Here they are (from Key.delete in key.py):
"""
This returns a Future, whose result becomes available once the
deletion is complete. If no such entity exists, a Future is still
returned. In all cases the Future's result is None (i.e. there is
no way to tell whether the entity existed or not).
"""
So, my question is, what use is this future parameter? Should I block on it to ensure an NDB delete is done before calling my delete hook? Or is it just a holdover/remnant from the _post_delete_hook's initial implementation and the method now has to take 3 parameters no matter what?
It's a very open-ended question, so I would just like to bolster my App Engine knowledge and see what you have in mind and how you've used it in the past.
According to documentation [1]:
If you use post-hooks with asynchronous APIs, the hooks are triggered by calling check_result(), get_result() or yielding (inside a tasklet) an async method's future. Post hooks do not check whether the RPC was successful; the hook runs regardless of failure.
All post- hooks have a Future argument at the end of the call signature. This Future object holds the result of the action. You can call get_result() on this Future to retrieve the result; you can be sure that get_result() won't block, since the Future is complete by the time the hook is called.
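As an illustration of that behavior, here is a minimal sketch (the model and the logging are made up) of a _post_delete_hook that calls get_result() on the future; it does not block, and for deletes the result is always None:
import logging

from google.appengine.ext import ndb


class Account(ndb.Model):
    """Illustrative model; the name is made up."""
    name = ndb.StringProperty()

    @classmethod
    def _post_delete_hook(cls, key, future):
        # The hook only fires once the delete RPC has completed, so this call
        # returns immediately. For deletes the result is always None, so it
        # tells us nothing beyond "the RPC finished".
        result = future.get_result()
        logging.info('post-delete hook for %r, result: %r', key, result)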
For me, the Future argument is just a remnant.
[1] https://cloud.google.com/appengine/docs/standard/python/ndb/creating-entity-models#using_model_hooks

What is the best way to store function calls and their arguments when an exception occurs?

This is a followup to Clojure: Compile time insertion of pre/post functions
My goal is to call a debug function instead of throwing an exception. I am looking for the best way to store a list of stack frames, function calls and their arguments, to accomplish this.
I want to have a function (my-uber-debug), so that when I call it (instead of throwing an exception), the following things happen:
a new Java window pops up
there is a record of the current clojure stack frame
for each stack frame, there is a record of the argument passed to the function
This is so that I can move up/down the stack frames, and examine the arguments passed to get to this current point. [If somehow, magically, we can get the variables defined in "let" environments, that'd be awesome too.]
Current Idea
I'm going to have a thread-local variable uber-debug, which has type:
List of StackFrames
where StackFrame = function + arguments
At each function call, it will push (cons) the current function and its arguments onto uber-debug; at the end of the function call, it will remove the first element from uber-debug.
Then, when I call (my-uber-debug), it just pops up a new Java window and lets me interact with uber-debug.
Question
The ideas I've had so far are probably not ideal for setting this up. What is the right way to solve this problem?
Edit:
The question is NOT about the Swing/GUI part. It's about how to store the stack frames.
Thanks!
Your answer may depend on a lot of factors, so I am going to answer this by giving you my thoughts.
If you merely want to store function calls and their parameters when an exception occurs, then either write a macro or function as a wrapper to accomplish this. You would then have to pass all functions to be called to this wrapper. The wrapper would perform the try catch operation and whatever else you need.
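For instance, here is a minimal sketch of such a wrapper (the names *call-stack*, debug-handler, and call-traced are all made up; debug-handler stands in for the (my-uber-debug) window):
(def ^:dynamic *call-stack* '())

(defn debug-handler
  "Stand-in for (my-uber-debug): here it just prints the recorded frames."
  [frames exception]
  (doseq [frame frames]
    (println (:name frame) (:args frame)))
  (println "exception:" (.getMessage exception)))

(defn call-traced
  "Call f with args, recording the call on *call-stack* for the duration.
  Dynamic bindings are thread-local, so each thread sees its own stack."
  [fname f & args]
  (binding [*call-stack* (conj *call-stack* {:name fname :args args})]
    (try
      (apply f args)
      (catch Exception e
        (debug-handler *call-stack* e)))))

;; Usage: (call-traced "divide" / 10 0) records the frame, catches the
;; ArithmeticException, and hands the frames to debug-handler instead of
;; letting the exception propagate.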
You might also want to look into Clojure meta data in addition to writing the wrapper, because your running code could look at its meta-data and make some decisions based on that as well. I have never used meta data, but the information at the link looks promising.
As a final thought, it might help to delineate further what you want to accomplish by editing your original post and adding that information.
For example, are these stack traces for a library or a main program?
As to storing all this information, are multiple threads going to need it, or just one?
Can you get by storing the information in a let binding at the highest level of your program, or do you need something like a ref?