Determine resolved input type of IMFSampleGrabberSinkCallback - c++

I have an IMFSampleGrabberSinkCallback in my Media Foundation application, for which I create an activator using MFCreateSampleGrabberSinkActivate. That function receives only a partially filled media type. Within the sample grabber callback, an NVENC encoder is running (I know that there are out-of-the-box transforms for NVENC, but the software is not running on Windows 10). In order to use the encoder, I need the full media type that results after topology resolution (mainly the size of the frames, which is not known when the sample grabber is created).
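For illustration, this is roughly what that setup looks like (a minimal sketch; the NV12 subtype is just an example, error handling abbreviated):

    #include <mfapi.h>
    #include <mfidl.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    HRESULT CreateGrabberActivate(IMFSampleGrabberSinkCallback *callback,
                                  IMFActivate **activate)
    {
        // Partially specified video type: major type and subtype only; the
        // frame size is left open and is filled in during topology resolution.
        ComPtr<IMFMediaType> partialType;
        HRESULT hr = MFCreateMediaType(&partialType);
        if (FAILED(hr)) return hr;

        partialType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
        partialType->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_NV12); // example subtype

        return MFCreateSampleGrabberSinkActivate(partialType.Get(), callback, activate);
    }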
There seems to be no obvious way to obtain this information in the IMFSampleGrabberSinkCallback, so the idea would be getting the full topology from the session once I receive MF_TOPOSTATUS_READY.
First question: Is this the only way to get the full type the grabber will receive or did I miss something?
After getting the full topology, the idea is to iterate over all nodes, search for the one with the sample grabber, and retrieve its input type.
Second question: What would be the best way of doing that? There seems to be no way to retrieve my IMFSampleGrabberSinkCallback from the topology node.
I would solve that using the topo node ID, but I am not sure whether this will work in all cases. The documentation of IMFTopologyNode::GetTopoNodeID states:
When a node is first created, it is assigned an identifier. Node identifiers are unique within a topology, but can be reused across several topologies. The topology loader uses the identifier to look up nodes in the previous topology, so that it can reuse objects from the previous topology.
To me, it is not clear whether the ID will be reused or whether it can be reused.
Third question: Is it guaranteed that, if I obtain the topo node ID of the node into which I insert my IMFActivate obtained from MFCreateSampleGrabberSinkActivate (and as long as I do not change the ID manually using IMFTopologyNode::SetTopoNodeID), the very same ID will be used for any subsequent topology created during the topology resolution process?
Thanks in advance,
Christoph
Edit: The question is not how to get NVENC working and resolve its input type; the key questions are (1) whether the topo IDs are guaranteed to be preserved during the resolution process and (2) whether there is a better way to do this than the following (which is susceptible to users actively changing the topo ID): First, I create an activator for my grabber callback. Once I have added the node containing the activator as object to the topology, I retrieve its ID. When the session reports that the topology is ready, I retrieve the node with the ID saved before, obtain its object, query its IMFStreamSink interface, retrieve its IMFMediaTypeHandler, get the current media type and obtain the actual frame size and frame rate for NVENC (see the sketch below). But: this only works if the ID cannot change and is not actively changed.
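To make that workflow concrete, here is a minimal sketch (error handling abbreviated; it assumes savedTopoId was stored via IMFTopologyNode::GetTopoNodeID right after the grabber node was added to the partial topology):

    #include <mfapi.h>
    #include <mfidl.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    // Called after MESessionTopologyStatus has reported MF_TOPOSTATUS_READY.
    HRESULT GetGrabberInputFormat(IMFMediaSession *session, TOPOID savedTopoId,
                                  UINT32 *width, UINT32 *height,
                                  UINT32 *fpsNum, UINT32 *fpsDen)
    {
        ComPtr<IMFTopology> fullTopology;
        HRESULT hr = session->GetFullTopology(MFSESSION_GETFULLTOPOLOGY_CURRENT,
                                              0, &fullTopology);
        if (FAILED(hr)) return hr;

        // This lookup is exactly the step that relies on the TOPOID being preserved.
        ComPtr<IMFTopologyNode> node;
        hr = fullTopology->GetNodeByID(savedTopoId, &node);
        if (FAILED(hr)) return hr;

        ComPtr<IUnknown> object;
        hr = node->GetObject(&object);
        if (FAILED(hr)) return hr;

        // In the resolved topology, the output node's object exposes IMFStreamSink.
        ComPtr<IMFStreamSink> streamSink;
        hr = object.As(&streamSink);
        if (FAILED(hr)) return hr;

        ComPtr<IMFMediaTypeHandler> handler;
        hr = streamSink->GetMediaTypeHandler(&handler);
        if (FAILED(hr)) return hr;

        ComPtr<IMFMediaType> type;
        hr = handler->GetCurrentMediaType(&type);
        if (FAILED(hr)) return hr;

        MFGetAttributeSize(type.Get(), MF_MT_FRAME_SIZE, width, height);
        MFGetAttributeRatio(type.Get(), MF_MT_FRAME_RATE, fpsNum, fpsDen);
        return S_OK;
    }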
I have extracted the topology resolution stages from my test code:
The topology resolver finds out that a colour conversion is required and adds the blue transform to account for this. Coming back to the questions, in this case they would be (1) whether it is guaranteed that the ID of the red node cannot change during resolution, and (2) whether there is an alternative way to implement this which is not susceptible to someone using IMFTopologyNode::SetTopoNodeID on the red node.

From my experience, the ID of a topology node looks like the result of a hash function. I think the ID generator has a complex algorithm and cannot guarantee a constant value from one session to another, but the ID is stable during the current session. Maybe you can try another way. You wrote that you "have added the node containing the activator as object to the topology", but what do you do with the original pointer to "an activator for my grabber callback"? IMFTopologyNode::SetObject increments the reference count of the IUnknown. I think you release the original pointer to the activator, but you can keep the pointer to "an activator for my grabber callback". In that case there is no need to "retrieve the node with the ID saved before, obtain its object". After the topology has been resolved, you ALREADY have a pointer to the activator with the fully resolved media type; you can query its IMFStreamSink interface and retrieve its IMFMediaTypeHandler without saving the ID of the topology node.
The activator is a proxy for the IMFMediaSink object, which is obtained by calling IMFActivate::ActivateObject with IID_IMFMediaSink. The first time it is called by topology resolution, the activator creates the object with the IMFMediaSink interface (which in turn creates the IMFStreamSink with the IMFSampleGrabberSinkCallback inside itself), but every subsequent call returns a reference: the activator KEEPS the reference to the resolved IMFMediaSink. According to MSDN on IMFActivate::ActivateObject, "After the first call to ActivateObject, subsequent calls return a pointer to the same instance, until the client calls either ShutdownObject or IMFActivate::DetachObject." This means that after topology resolution Microsoft's code DOES NOT detach or shut down the object; only closing the session executes ShutdownObject or IMFActivate::DetachObject.
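A minimal sketch of this alternative (assuming grabberActivate is the IMFActivate you kept alive since MFCreateSampleGrabberSinkActivate, and that the session has reported MF_TOPOSTATUS_READY):

    #include <mfidl.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    HRESULT GetResolvedGrabberType(IMFActivate *grabberActivate, IMFMediaType **type)
    {
        // After resolution this returns the instance the topology loader
        // already activated, not a new object.
        ComPtr<IMFMediaSink> sink;
        HRESULT hr = grabberActivate->ActivateObject(IID_PPV_ARGS(&sink));
        if (FAILED(hr)) return hr;

        // The sample grabber sink exposes a single stream sink.
        ComPtr<IMFStreamSink> streamSink;
        hr = sink->GetStreamSinkByIndex(0, &streamSink);
        if (FAILED(hr)) return hr;

        ComPtr<IMFMediaTypeHandler> handler;
        hr = streamSink->GetMediaTypeHandler(&handler);
        if (FAILED(hr)) return hr;

        return handler->GetCurrentMediaType(type);
    }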
Regards

Related

How can I pass value through process variable in Camunda to subflow from main flow

Colleagues,
Can you please advise me a bit about the following.
I cannot figure out how to pass a value through a process variable from the main flow to its subflow in Camunda. I am putting the value into a process variable in one task of the main flow via execution.setVariable("toolId", toolId);
where execution is an instance of DelegateExecution. I am trying to retrieve it in another task of the subflow via
Long toolId = (Long) execution.getVariable("toolId");
However, I am getting null.
By subflow I assume you mean a call activity (otherwise the data would be available).
A call activity references a technically independent process instance with its own data. Therefore you have to explicitly map the in data, which is copied from the source (parent) to the target (sub process), and also the out data in the other direction.
Please see: https://docs.camunda.io/docs/components/modeler/bpmn/call-activities/#variable-mappings and https://docs.camunda.io/docs/components/concepts/variables/#inputoutput-variable-mappings

Can we verify that the Boost.Log core did remove a sink?

I am using Boost.Log to build the logging system for my program.
I understand the Boost.Log mechanism like this:
the core singleton registers the sink, which raises the shared pointer use count of the sink by 1; then attaching the backend raises this count to 2, on top of the main shared pointer of the sink that I hold.
In my code I remove the sink from the core, and I expect the shared pointer count of this frontend sink to decrease to 1; then I test whether this shared pointer is unique and, if so, I reset it.
I use multiple threads and use a mutex to protect the Boost.Log code working with this specific sink (I have a cout sink that I do not protect).
The problem is: sometimes I find that the frontend sink shared pointer count is not 2, it becomes 3.
I do not know why this is happening, as every sink is registered with the core once, making its count 1, and adding the backend should give a count of only 2.
Is there any way I can verify that the core has removed the frontend sink?
Is there any way to know where each instance of the shared pointer is present in the code?
Thanks a lot.
Update:
If core.remove_sink is executed on one thread and, at the same time, the core logs to cout on another thread (the cout sink is not protected by the mutex), I can see on the console that a message is written in the wrong position, where a certain message after core.remove_sink ought to appear. BUT here the frontend sink shared pointer count is not reduced!
Did the core discard the remove_sink that came at the same time as logging to another sink?
Is there any way I can verify that the core has removed the frontend sink?
The sink is considered removed when remove_sink returns. That is, it will not receive any future log records.
It may not be released by the library at that point, because there may be log records in progress at the time of the remove_sink call, and remove_sink may return before those log records are fully processed. Log record processing will continue and may involve the sink that is being removed. Eventually, when all log records are processed and remove_sink has returned, the sink will have been released by the core and, if no more references are left, destroyed.
You can detect when the sink is no longer present by using weak_ptr, which you can construct from shared_ptr referencing the sink. When the last shared_ptr referencing the sink object is destroyed or reset, the weak_ptr::lock method will return a null shared_ptr. Note that this includes any shared_ptrs to the sink that you may be holding in your code.
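A minimal sketch of that approach (the synchronous text_ostream_backend sink here is just an example frontend/backend combination):

    #include <boost/log/core.hpp>
    #include <boost/log/sinks/sync_frontend.hpp>
    #include <boost/log/sinks/text_ostream_backend.hpp>
    #include <boost/smart_ptr.hpp>

    namespace sinks = boost::log::sinks;
    typedef sinks::synchronous_sink<sinks::text_ostream_backend> sink_t;

    void remove_and_watch(boost::shared_ptr<sink_t> sink)
    {
        boost::weak_ptr<sink_t> watcher(sink);

        boost::log::core::get()->remove_sink(sink);
        sink.reset(); // drop your own reference, otherwise lock() can never return null

        // At some later point, after in-flight records have been processed,
        // a null result here means the sink has been fully released and destroyed.
        if (!watcher.lock())
        {
            // sink object is gone
        }
    }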
Is there any way to know where each instance of the shared pointer is present in the code?
Generally, no. You will have to manually track where you pass and save pointers to objects.

DirectShow: How to synchronize stream time to system time for video capture devices

I am creating a program where I show some graphical content, and I record the face of the viewer with the webcam using DirectShow. It is very important that I know the time difference between what's on the screen to when the webcam records a frame.
I don't care at all about reducing latency or anything like that, it can be whatever it's going to be, but I need to know the capture latency as accurately as possible.
When frames come in, I can get the stream times of the frames, but all those times are relative to some particular stream start time. How can I access the stream start time for a capture device? That value is obviously somewhere in the bowels of DirectShow, because the filter graph computes it for every frame, but how can I get at it? I've searched through the docs but haven't found its secret yet.
I've created my own IBaseFilter and IReferenceClock implementing classes, which do little more than report tons of debugging info. Those seem to be doing what they need to be doing, but they don't provide enough information.
For what it is worth, I have tried to investigate this by inspecting the DirectShow Event Queue, but no events concerning the starting of the filter graph seem to be triggered, even when I start the graph.
The following image recorded using the test app might help understand what I'm doing. The graphical content right now is just a timer counting seconds.
The webcam is recording the screen. At the particular moment that frame was captured, the system time was about 1.35 seconds or so. The time of the sample recorded in DirectShow was 1.1862 seconds (ignore the caption in the picture). How can I account for the difference of .1637 seconds in this example? The stream start time is key to deriving that value.
The system clock and the reference clock both use the QueryPerformanceCounter() function, so I would not expect it to be timer wonkiness.
Thank you.
Filters in the graph share a reference clock (unless you remove it, which is not what you want anyway), and stream times are relative to a certain base start time of this reference clock. The start time corresponds to a stream time of zero.
Normally, the controlling application does not have access to this start time, as the filter graph manager chooses the value itself internally and passes it to every filter in the graph as a parameter in the IBaseFilter::Run call. If you have at least one filter of your own, you can get the value.
Getting the absolute capture time in this case is a matter of simple math: frame time is base time + stream time, and you can always call IReferenceClock::GetTime to check the current effective time.
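For example, with a hypothetical transform filter of your own in the graph (based on the DirectShow base classes), it could look roughly like this:

    #include <streams.h> // DirectShow base classes

    class CMyFilter : public CTransInPlaceFilter
    {
        REFERENCE_TIME m_baseTime; // base start time passed to Run()
    public:
        CMyFilter(LPUNKNOWN pUnk, HRESULT *phr)
            : CTransInPlaceFilter(NAME("MyFilter"), pUnk, GUID_NULL, phr)
            , m_baseTime(0) {}

        STDMETHODIMP Run(REFERENCE_TIME tStart)
        {
            m_baseTime = tStart; // stream time zero corresponds to this clock time
            return CTransInPlaceFilter::Run(tStart);
        }

        HRESULT Transform(IMediaSample *pSample)
        {
            REFERENCE_TIME tSampleStart = 0, tSampleEnd = 0;
            if (SUCCEEDED(pSample->GetTime(&tSampleStart, &tSampleEnd)))
            {
                // Absolute capture time on the shared reference clock, 100 ns units.
                REFERENCE_TIME tAbsolute = m_baseTime + tSampleStart;
                (void)tAbsolute; // log it, or compare against IReferenceClock::GetTime
            }
            return S_OK;
        }
    };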
If you don't have access to the start time anyway and you don't want to add your own filter to the graph, there is a trick you can employ to define the base start time yourself. This is what the filter graph manager is doing anyway.
Starting the graphs in sync means using IMediaFilter::Run instead of IMediaControl::Run... Call IMediaFilter::Run on all graphs, passing this time... as the parameter.
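A sketch of that trick (pGraph is the IGraphBuilder of an already built graph; the 500 ms offset is an arbitrary safety margin):

    IMediaFilter *pMediaFilter = NULL;
    HRESULT hr = pGraph->QueryInterface(IID_IMediaFilter, (void**)&pMediaFilter);

    IReferenceClock *pClock = NULL;
    pMediaFilter->GetSyncSource(&pClock);

    REFERENCE_TIME tNow = 0;
    pClock->GetTime(&tNow);

    // Start slightly in the future so every filter starts in sync; the value
    // you pass is now your known base start time (stream time zero).
    const REFERENCE_TIME startOffset = 500 * 10000LL; // 500 ms in 100 ns units
    pMediaFilter->Run(tNow + startOffset);

    pClock->Release();
    pMediaFilter->Release();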
Try IReferenceClock::GetTime.
Reference Clocks: https://msdn.microsoft.com/en-us/library/dd377506(v=vs.85).aspx
For more information, see:
https://social.msdn.microsoft.com/Forums/windowsdesktop/en-US/1dc4123a-05cf-4036-a17e-a7648ca5db4e/how-do-i-know-current-time-stamp-referencetime-on-directshow-source-filter?forum=windowsdirectshowdevelopment

Wrapper for MessageQ C++ with QT

I have a small issue regarding an existing MQ wrapper.
The thing is, in each part of the program we have to interrogate each message that is being sent/received to determine which type it belongs to, resulting in a massive switch scenario for each component.
Each type of message has to be processed accordingly (update a GUI progress bar, update a specific file, connect specific signals from where the interrogation happens, and so on).
What would be the best approach to move it into a single component?
For now it uses the Factory Method pattern to create each of the needed objects and, like I said before, the drawback is that you have to ask what type of object was created in order to implement the needed logic => big switches.
Instead of a message id that you process in a switch statement, you can easily send a code chunk to be executed, say, a lambda object. Then you can merely execute the code chunk in the "slot", without checking and reacting to the message id.
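A sketch of this idea in Qt (requires Qt 5.10 or newer for the functor overload of QMetaObject::invokeMethod; all names are hypothetical):

    #include <QMetaObject>
    #include <QObject>
    #include <functional>

    using Task = std::function<void()>;

    // Post the work itself instead of a message id; 'receiver' lives in the
    // GUI thread and the task is executed there through its event loop.
    void postToGui(QObject *receiver, Task task)
    {
        QMetaObject::invokeMethod(receiver, std::move(task), Qt::QueuedConnection);
    }

    // Usage: the sender already knows what has to happen, so no switch is needed:
    //   postToGui(window, [=] { progressBar->setValue(percent); });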

Which is the most suitable stage in the DirectShow pipeline to initialize a resource?

I'm programming a DirectShow filter that reads data from a cam. I wonder which is the most suitable point in the whole DirectShow pipeline to initialize this cam: in the filter's Pause method, in the overridden OnThreadCreate method...?
It depends on what exactly "cam initialization" is. If it is something trivial and simple, you can do it any time. Should this be related to runtime delay and/or exclusive resource management, then you don't want to do it too early because you don't want an idling instantiated filter to produce errors and cause unexpected freezes. It makes sense to do this sort of initialization on UI action (filter or pin property pages) or transition from stopped state (CSourceStream::OnThreadCreate looks good), whatever takes place first.
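For example, inside a CSourceStream-derived pin (OpenCamera/CloseCamera are hypothetical stand-ins for the actual device code):

    #include <streams.h> // DirectShow base classes

    bool OpenCamera();   // hypothetical: claims the device
    void CloseCamera();  // hypothetical: releases it

    class CMyCamStream : public CSourceStream
    {
    public:
        CMyCamStream(HRESULT *phr, CSource *pFilter)
            : CSourceStream(NAME("CamStream"), phr, pFilter, L"Out") {}

        HRESULT FillBuffer(IMediaSample *) { return S_OK; } // elided

        HRESULT OnThreadCreate()
        {
            // Runs when the streaming thread starts (transition out of Stopped),
            // so an idling instantiated filter never holds the device.
            return OpenCamera() ? CSourceStream::OnThreadCreate() : E_FAIL;
        }

        HRESULT OnThreadDestroy()
        {
            CloseCamera(); // release the device when streaming stops
            return CSourceStream::OnThreadDestroy();
        }
    };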
If it's a straightforward initialization, you should do it as early as possible: in the constructor.
Make sure the factory function for your filter checks that the constructor was successful by exposing an "IsOK" function. The camera might be disconnected, and you want to catch that early, before the filter is connected.
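A sketch of that check, following the base-classes CreateInstance pattern (IsOK and ProbeCamera are hypothetical):

    #include <streams.h> // DirectShow base classes

    bool ProbeCamera(); // hypothetical: verifies the device is present

    class CMyCamFilter : public CBaseFilter
    {
        CCritSec m_lock;
        bool m_ok;
    public:
        CMyCamFilter(LPUNKNOWN pUnk, HRESULT *phr)
            : CBaseFilter(NAME("CamFilter"), pUnk, &m_lock, GUID_NULL)
            , m_ok(ProbeCamera()) // the constructor records whether init succeeded
        {}

        bool IsOK() const { return m_ok; }

        int GetPinCount() { return 0; }        // elided
        CBasePin *GetPin(int) { return NULL; } // elided

        static CUnknown *WINAPI CreateInstance(LPUNKNOWN pUnk, HRESULT *phr)
        {
            CMyCamFilter *pFilter = new CMyCamFilter(pUnk, phr);
            if (pFilter == NULL)
            {
                *phr = E_OUTOFMEMORY;
            }
            else if (!pFilter->IsOK()) // e.g. the camera is disconnected
            {
                delete pFilter;
                pFilter = NULL;
                *phr = E_FAIL;
            }
            return pFilter;
        }
    };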