I have written a video aggregator plugin based on GstAggregator.
When I use the plugin with multiple queues (each queue branch also does some video scaling), the pipeline gets stuck and the 'aggregate' function is never called.
If I reduce the frame rate of the video input, everything works perfectly.
Suggestions would be greatly appreciated.
This got resolved after handling the QoS event.
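For anyone hitting the same stall: the post doesn't show the exact change, but one way to "handle" the QoS event in a GstAggregator subclass is to intercept it in the src_event vfunc so it is not forwarded upstream, where it could make the queue/videoscale branches drop the very buffers the aggregator is waiting for. A minimal sketch under that assumption (the subclass names my_agg/MyAggClass and the usual parent_class pointer are placeholders for your own plugin code):

#include <gst/base/gstaggregator.h>

static gboolean
my_agg_src_event (GstAggregator * agg, GstEvent * event)
{
  if (GST_EVENT_TYPE (event) == GST_EVENT_QOS) {
    /* Swallow the QoS event instead of sending it upstream, so the
     * scaling branches keep delivering buffers to the aggregator. */
    gst_event_unref (event);
    return TRUE;
  }
  /* Everything else goes through the base class handler. */
  return GST_AGGREGATOR_CLASS (parent_class)->src_event (agg, event);
}

static void
my_agg_class_init (MyAggClass * klass)
{
  GstAggregatorClass *agg_class = GST_AGGREGATOR_CLASS (klass);
  agg_class->src_event = my_agg_src_event;
}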
I'm new to GStreamer, so sorry if it's kind of a dumb question, but how can I stream audio data through my pipeline as fast as possible?
I use a playbin, and I have a bunch of songs that I'd like to send through it at the greatest speed available. I'm aware of the concept of frame stepping, I also read this tutorial: https://gstreamer.freedesktop.org/documentation/tutorials/basic/playback-speed.html, but still can't figure out how to do this simply without keyboard handling. Can someone explain it to me?
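For reference, the keyboard handling in that tutorial only exists to decide when to change the rate; the rate change itself is a single seek event that you can issue from code as soon as the playbin is playing. A minimal sketch, assuming playbin is your already-playing playbin element and you want, say, 4x speed:

gint64 pos = 0;

/* Query the current position so playback continues from the same point. */
gst_element_query_position (playbin, GST_FORMAT_TIME, &pos);

/* Issue a flushing seek with the desired rate (4x forward here). */
gst_element_seek (playbin,
    4.0,                                  /* new playback rate */
    GST_FORMAT_TIME,
    (GstSeekFlags) (GST_SEEK_FLAG_FLUSH | GST_SEEK_FLAG_ACCURATE),
    GST_SEEK_TYPE_SET, pos,               /* start at the current position */
    GST_SEEK_TYPE_NONE, GST_CLOCK_TIME_NONE);

If the real goal is to push audio through the pipeline faster than real time (processing rather than listening), note that it is usually the clock-synchronised audio sink that throttles the pipeline, so disabling sync on the sink (or using a non-realtime sink) is another route; that changes the pipeline rather than the playback rate.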
First of all, you should know this question is titled that way because that's where I ended up stuck after narrowing down my problem for quite a while. Since there are probably better approaches to my problem, I'm also explaining below what the problem is and what I've been doing to try to solve it. Suggestions on other approaches would be very welcome.
The problem
I'm using a GStreamer port for Android to render video from remote cameras over RTSP (UDP is the transport).
Using playbin, things worked quite well until they stopped working for a subset of these cameras.
Unfortunately I don't have access to the cameras themselves since they belong to our company's client, but the first thing that sprang to mind was that it had to be a problem with them.
However, there's another Android app, which we use as a reference, that can still play video from these cameras normally, so I'm now doing my best to investigate the issue further on my end (our Android app).
The problem has been quite deterministic: some cameras always fail, others always work. When they fail, the reported reason is sometimes not-linked.
I managed to dump the pipeline graph associated with each of these cameras when the application tries to play video from them. I noticed that for each of the failing cameras, the associated pipeline is always missing something. Some are missing just the sink element, others are missing both the source and the sink:
Dump of pipeline with source only:
Dump of pipeline without a source or a sink:
Dump of pipeline with both (these are the cases where we can indeed play):
These are dumps of pipelines built by the playbin.
Attempted solution
I've been testing what happens if I build the pipeline manually from scratch (the same pipeline playbin builds in the third image above) and force the video from all cameras to be processed by it. Since all the cameras used to work, my guess is that negotiation is now failing for some of them, so playbin no longer builds the pipeline properly for those cameras; if I assemble it myself, everything should eventually work as expected (I'm assuming that rtspsrc in combination with glimagesink was also the pipeline playbin chose for playing video from these cameras).
This is how I'm trying to build this pipeline myself:
priv->pipeline = gst_pipeline_new("rtspstreamer");

source = gst_element_factory_make("rtspsrc", NULL);
if (!source) {
    GST_DEBUG("Source could not be created");
}

sink = gst_element_factory_make("glimagesink", NULL);
if (!sink) {
    GST_DEBUG("Sink could not be created");
}

if (!gst_bin_add(GST_BIN(priv->pipeline), source)) {
    GST_DEBUG("Could not add source to pipeline");
}
if (!gst_bin_add(GST_BIN(priv->pipeline), sink)) {
    GST_DEBUG("Could not add sink to pipeline");
}

if (!gst_element_link(source, sink)) {
    GST_DEBUG("Source and sink could not be linked");
}

g_object_set(source, "location", uri, NULL);
So, running the code above, I get the following error:
Source and sink could not be linked
This is where I'm stuck. How can I investigate further why these elements cannot be linked? Maybe there should be some other element between them in the pipeline, but judging by the dump of the successful pipeline (third image) above, that doesn't seem to be the case.
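For reference, one detail worth checking regardless of the camera differences: rtspsrc creates its source pads dynamically, only after the RTSP session has been set up, so a plain gst_element_link() right after creating the elements can fail simply because there is no source pad to link yet. A purely illustrative sketch of deferring the link to the pad-added signal (the callback is hypothetical, and a real pipeline will typically still need depayloading/decoding elements between rtspsrc and glimagesink, which playbin inserts for you):

static void
on_pad_added (GstElement * src, GstPad * new_pad, gpointer user_data)
{
  GstElement *next_element = GST_ELEMENT (user_data);
  GstPad *sink_pad = gst_element_get_static_pad (next_element, "sink");

  if (!gst_pad_is_linked (sink_pad)) {
    if (gst_pad_link (new_pad, sink_pad) != GST_PAD_LINK_OK)
      GST_DEBUG ("Failed to link the dynamic rtspsrc pad");
  }
  gst_object_unref (sink_pad);
}

/* ... instead of gst_element_link (source, sink): */
g_signal_connect (source, "pad-added", G_CALLBACK (on_pad_added), sink);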
Thanks in advance for any help.
I'm creating a video conferencing application using Media Foundation, and I'm having an issue decoding the H.264 video frames I receive over the network.
The Design
Currently my network source queues a token on every RequestSample call, unless a stored sample is available. If a sample arrives over the network and no token is available, the sample is stored in a linked list; otherwise it is delivered with the MEMediaSample event. I also have the decoder set to low latency.
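To make that design concrete, here is a rough, hypothetical sketch of the token/sample bookkeeping described above (member names are made up; delivery uses IMFMediaEventQueue::QueueEventParamUnk, which is how an MEMediaSample event is normally raised from a media stream):

#include <mfidl.h>
#include <atlbase.h>
#include <deque>
#include <mutex>

std::mutex m_lock;
std::deque<CComPtr<IMFSample>> m_samples;    // samples that arrived with no request pending
LONG m_tokenCount = 0;                       // requests that arrived with no sample ready
CComPtr<IMFMediaEventQueue> m_pEventQueue;   // the stream's event queue

// Called from IMFMediaStream::RequestSample
void OnRequestSample()
{
    std::lock_guard<std::mutex> guard(m_lock);
    if (!m_samples.empty()) {
        CComPtr<IMFSample> pSample = m_samples.front();
        m_samples.pop_front();
        m_pEventQueue->QueueEventParamUnk(MEMediaSample, GUID_NULL, S_OK, pSample);
    } else {
        ++m_tokenCount;                      // remember that the pipeline asked for a sample
    }
}

// Called when a sample arrives from the network
void OnNetworkSample(IMFSample *pSample)
{
    std::lock_guard<std::mutex> guard(m_lock);
    if (m_tokenCount > 0) {
        --m_tokenCount;
        m_pEventQueue->QueueEventParamUnk(MEMediaSample, GUID_NULL, S_OK, pSample);
    } else {
        m_samples.push_back(pSample);        // no outstanding request: store it
    }
}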
My Issue
When I run the topology with my network source, I immediately see the first frame rendered to the screen. I then experience a long pause until the live stream begins to play perfectly. After a few seconds the stream appears to pause, but on closer inspection it is just looping through the same frames over and over, mixing in a live frame every couple of seconds that disappears immediately before the old loop resumes.
Why is this happening? I'm by no means an expert in H.264, or Media Foundation for that matter, but I've been trying to fix this issue for weeks with no success and have no idea where the problem might be. Please help me!
The timestamp is created by starting at 0 and adding the duration to it for every new sample. The other data is retrieved from an IMFSampleGrabberSinkCallback.
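In code terms, that is roughly (hypothetical variable names; Media Foundation timestamps and durations are in 100-ns units):

LONGLONG rtTimestamp = 0;                      // running timestamp, starts at 0

// for every new sample:
pSample->SetSampleTime(rtTimestamp);           // IMFSample::SetSampleTime
pSample->SetSampleDuration(rtDuration);        // duration reported by the grabber callback
rtTimestamp += rtDuration;                     // next sample starts where this one ends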
I've also posted some of my MFTrace output on the MSDN Media Foundation forums: Link
I mentioned there that the presentation clock doesn't seem to change in the trace, but I'm unsure whether that's the cause or how to fix it.
EDIT:
Could you share the video and a full mftrace log for this issue? It's not clear to me what really happens: do you see the live video after a while?
The current log does not contain enough information to trace sample processing. From your description it looks like only keyframes are rendered. Also, the duration of the rendered keyframe is odd:
Sample #00A74970, Time 6733ms, Duration 499ms. <- Duration is not 33ms.
I would like to see what happened to that sample.
In any case, if you are using a standard encoder and decoder, the issue is likely in your media source and how it buffers frames. An incorrect circular buffer implementation, perhaps? You may want to try caching a second or two of samples before starting to give them to the decoder.
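A rough sketch of the kind of pre-buffering suggested above (purely illustrative; names like DeliverToDecoder are placeholders for however your source hands samples onward): hold incoming samples until roughly a second of media has accumulated, and only then start releasing them.

std::deque<CComPtr<IMFSample>> m_preBuffer;
LONGLONG m_buffered = 0;                       // accumulated duration, 100-ns units
bool m_started = false;
const LONGLONG kPreroll = 10000000;            // about 1 second

void OnIncomingSample(IMFSample *pSample)
{
    LONGLONG duration = 0;
    pSample->GetSampleDuration(&duration);

    m_preBuffer.push_back(pSample);
    m_buffered += duration;

    if (!m_started && m_buffered < kPreroll)
        return;                                // keep buffering, do not feed the decoder yet

    m_started = true;
    while (!m_preBuffer.empty()) {
        DeliverToDecoder(m_preBuffer.front()); // placeholder for your delivery path
        m_preBuffer.pop_front();
    }
}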
I have a DirectShow video renderer derived from CBaseVideoRenderer. The renderer is used in a graph that receives data from a live source (BDA). The connections appear to be established properly, but the video rendering ends immediately because there is no sample. Audio rendering works, however, i.e. I can hear the sound while DoRenderSample of my renderer is never called.
Stepping through the code in the debugger, I found out that in CBaseRenderer::StartStreaming the stream ends immediately, because the member m_pMediaSample is NULL. If I replace my renderer with the EVR, it shows frames, i.e. the stream does not end before the first frame for the EVR, only for my renderer.
Why is that, and how can I fix it? Following the sample from http://www.codeproject.com/Articles/152317/DirectShow-Filters-Development-Part-Video-Render, I implemented what I understand to be the basic interface (CheckMediaType, SetMediaType and DoRenderSample), so I do not see any way to influence what is happening here...
Edit: This is the graph as seen from the ROT:
What I am basically trying to do is capture a DVB stream that uses VIDEOINFOHEADER2, which is not supported by the standard Sample Grabber. The channel is a public German TV channel without encryption; could this still be a DRM issue?
Edit 2: I have attached my renderer to another source (a Blackmagic Intensity Shuttle). It seems that the source causes the issue, because I do get samples in the other graph.
Edit 3: Following Roman's suggestion, I have created a transform filter. The graph looks like
and unfortunately has the same problem, i.e. I do not get any samples (Transform is never called).
You have possibly chosen the wrong path for fetching video frames out of the media pipeline. Essentially you are implementing a 'network renderer': something that terminates the pipeline in order to send the data further out to the network.
A renderer which accepts the feed sounds appropriate. Implementing a custom renderer, however, is an atypical task, and there is not much information around on it. Additionally, a fully featured renderer typically comes with sample scheduling and end-of-stream delivery - things that are relatively easy to break when you customize it by inheriting from the base classes. That is, while the approach sounds good, you might want to compare it to another option you have, which is...
A combination of Sample Grabber + Null Renderer, two standard filters, which you can attach your callback to and get frames from while having the pipeline properly terminated. The problem here is that the standard Sample Grabber does not support VIDEOINFOHEADER2. With another video decoder you could possibly have the feed decoded into VIDEOINFOHEADER, which is one option. Improving the Sample Grabber itself is another solution: the DirectX SDK Extras February 2005 release (dxsdk_feb2005_extras.exe) included a filter similar to the standard Sample Grabber, called Grabber (\DirectShow\Samples\C++\DirectShow\Filters\Grabber). It was available in source code and came with a good description text file. It is relatively easy to extend so that it accepts VIDEOINFOHEADER2 and makes the payload data available to your application this way.
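For reference, a rough sketch of how the Sample Grabber + Null Renderer combination is usually wired up (error handling omitted; pMyCallback is a placeholder for your ISampleGrabberCB implementation, and the media type is restricted to plain VIDEOINFOHEADER/RGB, which is exactly where the VIDEOINFOHEADER2 limitation bites):

#include <dshow.h>
#include <qedit.h>    // ISampleGrabber / CLSID_SampleGrabber (shipped with older SDKs)

CComPtr<IGraphBuilder>  pGraph;               // graph already built up to the video decoder
CComPtr<IBaseFilter>    pGrabberFilter, pNullRenderer;
CComPtr<ISampleGrabber> pGrabber;

pGrabberFilter.CoCreateInstance(CLSID_SampleGrabber);
pNullRenderer.CoCreateInstance(CLSID_NullRenderer);
pGraph->AddFilter(pGrabberFilter, L"Sample Grabber");
pGraph->AddFilter(pNullRenderer, L"Null Renderer");
pGrabberFilter->QueryInterface(IID_PPV_ARGS(&pGrabber));

// Accept only uncompressed video with a VIDEOINFOHEADER format -
// a VIDEOINFOHEADER2-only feed will simply refuse to connect here.
AM_MEDIA_TYPE mt = {};
mt.majortype  = MEDIATYPE_Video;
mt.subtype    = MEDIASUBTYPE_RGB24;
mt.formattype = FORMAT_VideoInfo;
pGrabber->SetMediaType(&mt);

// 1 = call ISampleGrabberCB::BufferCB for every buffer that passes through.
pGrabber->SetCallback(pMyCallback, 1);

// Finally connect: decoder -> Sample Grabber -> Null Renderer
// (for example via IGraphBuilder::Connect on the respective pins).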
The easiest way to get data out of a DirectShow graph, if you're not going to use MultiMedia Streaming, is probably to write your own TransInPlace filter, a sub-variety of a Transform filter. Then connect this filter to the desired stream of data you wish to monitor, and then run, pause, seek, or otherwise control the graph. The data, as it passes through the transform filter, can be manipulated however you want. We call this kind of filter a "sample grabber". Microsoft released a limited-functionality sample grabber with DX8.0. This filter is limited because it doesn't deal with DV Data or mediatypes with a format of VideoInfo2. It doesn't allow the user to receive prerolled samples. (What's a preroll sample? See the DX8.1 docs.) Its "OneShot" mode also has some problems.
To add to this, the Grabber sample is pretty simple itself - perhaps 1000 lines of code all together, including comments.
It looks like your decoder or splitter isn't demuxing the video frames. Look further up the chain to see which filters are supplying your renderer pin with data; chances are it's only recognising the audio.
Try dropping the file into GraphEdit (there's a better one on the web, BTW) and see what filters it creates.
Then look at the samples in the DirectShow SDK.
I'm struggling with mixing two audio streams into a single output stream. MFNode has an AudioMixerMFT, but TopoEdit crashes when I try to build a topology like this and execute it:
Note: I tried the TopoEdit that comes with the Windows SDK 7.1 and also the one with a few fixes by the author of "Developing Microsoft® Media Foundation Applications".
I thought it could be some issue with TopoEdit, so I built the topology in code (by modifying the code from Ch. 9 of "Developing Microsoft® Media Foundation Applications"), but it still failed with 'E_UNEXPECTED Catastrophic failure' on mediaEvent->GetStatus(&hrStatus) inside HRESULT CPlayer::ProcessEvent(CComPtr<IMFMediaEvent>& mediaEvent) on the Session Start event.
At this point I thought it could be some issue with the AudioMixerMFT, so I wrote a custom MFT with 2 inputs that acts like a simple pass-through (it only sends the 1st input and ignores the 2nd one). I built a topology like this in TopoEdit and it worked:
But when I connected 'Audio 2.wav' to the MFT, it crashed. I then tried to use this custom MFT in my own code; it worked again with a single input but failed with 'E_UNEXPECTED Catastrophic failure' when given two inputs.
Not sure what the problem could be, I started to doubt whether multi-input MFTs are supported at all, and I came across a post http://social.msdn.microsoft.com/Forums/windowsdesktop/en-US/21596e11-c4e2-480a-b28f-9e2f5fa8820d/mutlinput-and-multioutput (yes, it is quite old) which says they are not.
Is there anyone out there who was able to run the AudioMixerMFT from MFNode successfully? Any alternatives to Microsoft Media Foundation? Any hint would be appreciated. Thanks.
MFNode is my open source project.
If you read MFNode's documentation, you will see that TopoEdit does not handle more than one input stream on an MFT. And yes, TopoEdit crashes. You can fix the bug in the TopoEdit source code; it is just a null pointer that TopoEdit does not check. Unfortunately, that does not solve the problem: TopoEdit is not able to call ProcessInput twice (once per input stream) before calling ProcessOutput.
You have to provide a custom media session to make it work (implement IMFMediaSession).
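For what it's worth, the call pattern that the session (custom or otherwise) has to drive for a two-input mixer looks roughly like this (pMixer, pSample0 and pSample1 are placeholders; depending on the MFT's MFT_OUTPUT_STREAM_INFO flags the caller may also have to allocate output.pSample itself):

// Feed one sample to each input stream before asking for output.
pMixer->ProcessInput(0, pSample0, 0);
pMixer->ProcessInput(1, pSample1, 0);

MFT_OUTPUT_DATA_BUFFER output = {};
output.dwStreamID = 0;
DWORD status = 0;

HRESULT hr = pMixer->ProcessOutput(0, 1, &output, &status);
// MF_E_TRANSFORM_NEED_MORE_INPUT here means some input stream has not
// been fed yet - which is exactly what happens when TopoEdit only calls
// ProcessInput on one of the two streams.

if (output.pEvents)
    output.pEvents->Release();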
In a future update of the MFNode project, I will provide a player that exercises all of the MFNode filters, and especially the MFNode audio mixer.
EDIT: In tededit.cpp, TopoEdit crashes in CTedEditorVisualObjectEventHandler::NotifyObjectDeleted:
...
CTedTopologyNode* pNode = m_pEditor->FindNode(pConn->GetOutputNodeID());
...
pNode can be a null pointer and TopoEdit does not check for it.
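In other words, the crash itself can be avoided with a guard around that lookup, something along these lines (only the relevant lines are shown; the surrounding TopoEdit code is unchanged):

...
CTedTopologyNode* pNode = m_pEditor->FindNode(pConn->GetOutputNodeID());
if (pNode == NULL)
    return;   // or whatever early-out fits that method - anything but dereferencing the null pointer
...

As noted above, though, this only stops the crash; it does not make TopoEdit drive both input streams correctly.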
EDIT
I've updated my project. Check MFNodePlayer. I use a custom MediaSession to handle the wave mixer topology.
It works well, but it is not perfect because of two things. First, if you stop the topology and then replay it, it fails (because I need to stop all the sources, and perhaps reset the clock and bytestream). Second, there is a function which handles IMFTransform in a recursive way, and it is hard to debug.
I will fix this later.
PS: Special thanks to the "Developing Microsoft Media Foundation Applications" book. It helped me a lot in creating a custom MediaSession.