GStreamer has autoaudiosink, autovideosink, autoaudiosrc, and autovideosrc.
How does this work when there are multiple sources or sinks that match?
For example, a video sink on Windows could be OpenGL- or DirectX-based.
How does GStreamer decide which one to use?
Is there any possibility to alter this?
How does gstreamer decide which one to use?
GStreamer has a very general autoplugging mechanism so that it can do the right thing. The documentation is quite terse, but let's go over it for the autovideosink case.
In a first step, autoplugging will filter the relevant elements on your system: for example, if the input of a decodebin element is an H.264 stream, it will find only elements that advertise video/x-h264 caps on their sink pads. In the case of autovideosink, it will filter for all elements that have added explicit "Sink" and "Video" tags, to find the relevant elements.
In a second step, it still needs to select the best match out of the set of elements that we just collected. For that, GStreamer picks the plugin with the highest "rank". Plugins have a default rank, but you can modify it programmatically (how to do this has been answered elsewhere).
Note: some elements (like decodebin) also provide extra API for even more fine-grained control.
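The selection logic of that second step can be sketched in isolation (hypothetical types and function; the real code lives inside GStreamer's autodetect elements):

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical stand-in for a GstElementFactory: a name plus a rank.
// GStreamer's default ranks: NONE=0, MARGINAL=64, SECONDARY=128, PRIMARY=256.
struct Factory {
    std::string name;
    int rank;
};

// Pick the factory with the highest rank out of the already-filtered
// candidates, the way autovideosink does after matching the "Sink" and
// "Video" tags. Ties are broken by name here only to keep this sketch
// deterministic.
const Factory* pick_best(const std::vector<Factory>& candidates) {
    if (candidates.empty()) return nullptr;
    return &*std::max_element(
        candidates.begin(), candidates.end(),
        [](const Factory& a, const Factory& b) {
            if (a.rank != b.rank) return a.rank < b.rank;
            return a.name > b.name;  // smaller name wins on equal rank
        });
}
```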
Is there any possibility to alter this?
So the short answer is: by modifying the plugin rank.
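For illustration, a minimal sketch of raising a factory's rank at runtime so that autovideosink prefers it; "glimagesink" is just an example factory name, and this assumes GStreamer 1.x:

```cpp
#include <gst/gst.h>

// Raise the rank of a preferred sink so autovideosink picks it.
static void prefer_sink(const char* factory_name) {
    GstRegistry* registry = gst_registry_get();
    GstPluginFeature* feature =
        gst_registry_lookup_feature(registry, factory_name);
    if (feature != nullptr) {
        // One above PRIMARY outranks every default sink.
        gst_plugin_feature_set_rank(feature, GST_RANK_PRIMARY + 1);
        gst_object_unref(feature);
    }
}

int main(int argc, char** argv) {
    gst_init(&argc, &argv);
    prefer_sink("glimagesink");
    // ... build a pipeline using autovideosink as usual ...
    return 0;
}
```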
It's also good to note that application developers will generally pick a given sink (for example glimagesink) and optimize for that case, by configuring the properties of that element.
I'm trying to evaluate functionality in gstreamer for applicability in a new application.
The application should be able to dynamically play videos and images depending on a few criteria (user input, ...) not really relevant for this question. The main thing I was not able to figure out was how I can achieve seamless crossfading/blending between successive content.
I was thinking about using the videomixer plugin and programmatically transitioning the sink pads' alpha values. However, I'm not sure whether this would work, nor whether it is a good idea to do so.
A GStreamer solution would be preferred because of its availability on both the development and target platforms. Furthermore, a custom video sink implementation may be used in the end for rendering the content to proprietary displays.
Edit: I was able to code up a prototype using two file sources fed into a videomixer, using GstInterpolationControlSource and GstTimedValueControlSource to bind and interpolate the videomixer's alpha control inputs. The fades look perfect; however, what I did not quite have on the radar was that I cannot dynamically change the file sources' location while the pipeline is running. Furthermore, it feels like misusing functions not intended for the job at hand.
Any feedback on how to tackle this use case would still be very much appreciated. Thanks!
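For reference, the control-source binding described in the edit might look roughly like this (a sketch assuming GStreamer 1.x and a mixer sink pad exposing an "alpha" property; for swapping content at runtime, elements such as input-selector are the usual starting point):

```cpp
#include <gst/gst.h>
#include <gst/controller/gstinterpolationcontrolsource.h>
#include <gst/controller/gstdirectcontrolbinding.h>

// Fade one videomixer/compositor sink pad from opaque to transparent
// between t0 and t1 (nanoseconds).
static void schedule_fade_out(GstObject* pad, GstClockTime t0, GstClockTime t1) {
    GstControlSource* cs = gst_interpolation_control_source_new();
    g_object_set(cs, "mode", GST_INTERPOLATION_MODE_LINEAR, NULL);

    // With a direct control binding, values are normalized to [0,1]
    // relative to the property's range.
    GstTimedValueControlSource* tvcs = GST_TIMED_VALUE_CONTROL_SOURCE(cs);
    gst_timed_value_control_source_set(tvcs, t0, 1.0);
    gst_timed_value_control_source_set(tvcs, t1, 0.0);

    gst_object_add_control_binding(
        pad, gst_direct_control_binding_new(pad, "alpha", cs));
    gst_object_unref(cs);
}
```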
I'm currently working on retrieving image data from a video capturing device.
It is important for me to have raw output data in a rather specific format, and I need a continuous data stream. Therefore I figured I would use the IMFSourceReader. I pretty much understand how it works. For the whole pipeline to work, I checked the output formats of the camera and looked at the available Media Foundation Transforms (MFTs).
The critical function here is IMFSourceReader::SetCurrentMediaType. I'd like to elaborate on one critical behavior I discovered. If I just call the function with the parameters of my desired output format, it changes some parameters like fps or resolution, but the call succeeds. If I first call the function with a native media type carrying my desired parameters but a wrong subtype (like MJPG or similar), and then call it again with my desired parameters and the correct subtype, the call succeeds and I end up with my correct parameters. I suspect this is only true if fitting MFTs (decoders) are available.
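For reference, the kind of format request described above might be sketched like this (error handling trimmed; RGB32 is just an example target subtype):

```cpp
#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>

// Ask the source reader for RGB32 based on the first video stream's
// native media type. `reader` is an already-created IMFSourceReader.
HRESULT RequestRgb32(IMFSourceReader* reader) {
    IMFMediaType* native = nullptr;
    HRESULT hr = reader->GetNativeMediaType(
        (DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM, 0, &native);
    if (FAILED(hr)) return hr;

    // Clone the native type but request an RGB32 subtype; the reader
    // inserts decoder/converter MFTs if it can find a path.
    IMFMediaType* desired = nullptr;
    hr = MFCreateMediaType(&desired);
    if (SUCCEEDED(hr)) hr = native->CopyAllItems(desired);
    if (SUCCEEDED(hr)) hr = desired->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_RGB32);
    if (SUCCEEDED(hr))
        hr = reader->SetCurrentMediaType(
            (DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM, nullptr, desired);

    if (desired) desired->Release();
    native->Release();
    return hr;
}
```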
So far I've pretty much beaten WMF into giving me what I want. The problem now is that the second call to IMFSourceReader::SetCurrentMediaType takes a long time. The duration depends heavily on the camera used, varying from 0.5 s to 10 s. To be honest, I don't really know why it's taking so long, but I suspect the calculation of the correct transformation paths and/or the initialization of the transformations is the problem. I noticed an excessive amount of loading and unloading of the same DLLs (ntasn1.dll, ncrypt.dll, igd10iumd32.dll), but loading them once myself didn't change anything.
So does anybody know this issue and has a quick fix for it?
Or does anybody know a work around to:
Get raw image data via Media Foundation without the use of IMFSourceReader?
Select and load the transformations myself, to support the source reader call?
You basically described the way the Source Reader is supposed to work in the first place. The underlying media source has its own media types, and the reader can supply a conversion whenever it needs to fit the requested media type to the closest original one.
Video capture devices tend to expose many [native] media types (I have a webcam which enumerates 475 of them!), so if format fitting does not go well, source reader might take some time to try one conversion or another.
Note that you can disable the source reader's conversions by applying certain attributes like MF_READWRITE_DISABLE_CONVERTERS, in which case the inability to set a video format directly on the source would result in a failure.
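A minimal sketch of creating a reader with conversions disabled (assuming an already-activated IMFMediaSource):

```cpp
#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>

// Create a source reader with automatic format converters disabled, so
// SetCurrentMediaType only succeeds for formats the device exposes natively.
HRESULT CreateStrictReader(IMFMediaSource* source, IMFSourceReader** reader) {
    IMFAttributes* attrs = nullptr;
    HRESULT hr = MFCreateAttributes(&attrs, 1);
    if (SUCCEEDED(hr))
        hr = attrs->SetUINT32(MF_READWRITE_DISABLE_CONVERTERS, TRUE);
    if (SUCCEEDED(hr))
        hr = MFCreateSourceReaderFromMediaSource(source, attrs, reader);
    if (attrs) attrs->Release();
    return hr;
}
```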
You can also read data in the device's original format and decode/convert/process it yourself by feeding the data into one MFT or a chain of them. Typically, when you set the respective format on the source reader, the source reader manages the MFTs for you. If you prefer, however, you can do it yourself too. Unfortunately, you cannot build a chain of MFTs for the source reader to manage: either you leave it to the source reader completely, or you set the native media type, read the data in the original format from the reader, and then manage the MFTs on your side using IMFTransform::ProcessInput, IMFTransform::ProcessOutput and friends. This is not as easy as using the source reader, but it is doable.
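The manual route might be sketched like this (a single MFT, heavily simplified; real code must also handle stream changes, draining, and output sample allocation according to the MFT's stream info):

```cpp
#include <mfapi.h>
#include <mftransform.h>

// Push one input sample through an MFT and ask for one output sample.
// Returns MF_E_TRANSFORM_NEED_MORE_INPUT when the MFT wants more data
// before it can produce output.
HRESULT ProcessOneSample(IMFTransform* mft, IMFSample* in, IMFSample* out) {
    HRESULT hr = mft->ProcessInput(0, in, 0);
    if (FAILED(hr)) return hr;

    MFT_OUTPUT_DATA_BUFFER buf = {};
    buf.pSample = out;  // caller-allocated sample with a big enough buffer
    DWORD status = 0;
    return mft->ProcessOutput(0, 1, &buf, &status);
}
```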
Since VuVirt does not want to write any answer, I'd like to add one for him and everybody who has the same issue.
Under some conditions, the call to IMFSourceReader::SetCurrentMediaType takes a long time when the target format is RGB of some kind and is not natively available. So to get rid of it, I adjusted my image pipeline to be able to interpret YUV (YUY2). I still have no idea why this is the case, but it is a working workaround for me. I don't know of any alternative to speed the call up.
Additional hint: I've found that there are usually several IMFTransforms available to decode many natively available formats to YUY2. So if you are able to use YUY2, you are safe. NV12 is another working alternative, and there are probably more.
Thanks for your answer anyways
I am working on an rmvb playback plugin for GStreamer. I wrote the demuxer and decoders, and they work fine when I link them manually in a pipeline.
But the playback application uses playbin2 to play the videos,
so I wonder if it is possible to add my elements to playbin2, so that playbin2 can play rmvb files.
But I don't know what to do.
So my question is:
1. Is it possible to do that?
2. If it is possible, what are the keywords I should search for?
3. If it is impossible, is there any other way to play rmvb files at the least cost? (It is hard to change the playback application's source code.)
It would be appreciated if anyone could help.
Thanks a lot.
Yes
Elements have ranks; playbin will look for the elements with the highest rank to use. So you need to make sure your element reports rmvb caps (as reported by gst-typefind) on its sink pads and that it has a high enough rank. Ranks are set when registering the element in the plugin.
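A sketch of the registration call where the rank is set (names are placeholders, GStreamer 1.x style):

```cpp
#include <gst/gst.h>

// Hypothetical GType of your demuxer element; normally produced by
// G_DEFINE_TYPE in the element's source file. For playbin to consider
// the element, its sink pad template must also advertise the RealMedia
// caps (application/vnd.rn-realmedia).
GType my_rm_demux_get_type(void);

// Plugin entry point: the rank passed here is what playbin compares
// when autoplugging.
static gboolean plugin_init(GstPlugin* plugin) {
    return gst_element_register(plugin, "myrmdemux",
                                GST_RANK_PRIMARY + 1,
                                my_rm_demux_get_type());
}
```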
There should already be rmvb support in GStreamer; maybe you're just missing the proper plugin in your installation? You shouldn't need to write a new one. It should be in gst-plugins-ugly (realmedia is the name of the plugin, IIRC).
Unasked, but please move to 1.0: 0.10 has been dead/obsolete/unmaintained for years now. By using it you won't be getting much, if any, help from the community these days.
I experienced a weird effect concerning DirectShow and splitters.
I was not able to track it, so maybe somebody can help?
In my application I extract a few frames from movies.
I do this by DirectShow's IMediaDet interface.
(BTW: It's XP SP3, DirectShow 9.0).
Everything works fine as long as there is no media splitter involved
(a splitter is needed for mp4, mkv, flv, ...).
Concerning codecs I use the K-Lite distribution.
Since some time there are two splitters: LAV and Haali.
The Gabest splitter was removed some time ago,
but only with the latter one (Haali) activated did everything work fine!
OK - the effect:
It's about IMediaDet::GetBitmapBits:
With some (most) media that use splitters, it always extracts the very first frame.
And with some other media with splitters, this effect only occurs when I have
called get_StreamLength before (although GetBitmapBits should switch
back to bitmap-grab mode, as the documentation says).
As said, everything works fine as long as no splitter is involved (mpg, wmv, ...).
Does someone experienced a similar effect?
Where may be the bug: In DShow, in the splitters, in my code?
Any help appreciated ... :-)
Your assumption is not quite correct. IMediaDet::GetBitmapBits builds a filter graph internally and attempts to navigate playback to the position of interest, then starts streaming to get a valid image onto its Sample Grabber filter ("bit bucket").
It does not matter whether the splitter is a separate filter or combined with the source. The important part is the ability of the graph to seek; a faulty filter might be an obstacle there, in which case a snapshot is still taken, but from the wrong position. This is the symptom you are describing.
For instance, the internal graph might contain a dedicated demultiplexer, and the snapshot is then taken from the correct position.
I need some help with software design. Let's say I have a camera that acquires images, sends them to a filter, and displays the images one at a time.
Now, what I want is to wait for two images, then send both of them to the filter and both of them to the screen.
I thought of two options and I'm wondering which one to choose:
In my Acquisitioner (or whatever) class, should I put a queue which waits for two images before sending them to the Filterer class?
Or should I put an Accumulator class between the Acquisitioner and the Filterer?
Both would work in the end, but which one do you think would be better?
Thanks!
To give a direct answer, I would implement the Accumulator policy in a separate object. Here's the why:
Having worked on similar designs in the past, I found it very helpful to think of the different 'actors' in this model as sources and sinks. A source object is capable of producing or outputting an image to the attached sink object. Filters or accumulators in this system are designed as pipes; in other words, they implement the interfaces of both a sink and a source. Once you come up with a mechanism for connecting generic sources, pipes, and sinks, it's very easy to implement an accumulation policy as a pipe which, for every nth image received, holds on to it if n is odd and outputs both of them if n is even.
Once you have this system, it will be trivial for you to change out sources (image file readers, movie decoders, camera capture interfaces), sinks (image file or movie encoders, display viewers, etc), and pipes (filters, accumulators, encoders, multiplexers) without disrupting the rest of your code.
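A minimal sketch of the accumulator-as-pipe idea (hypothetical names; images reduced to ints for brevity):

```cpp
#include <functional>
#include <optional>
#include <utility>

// An Image is just an int here; downstream is any callable that consumes
// a pair of images (e.g. the Filterer's input).
using Image = int;

// The accumulator pipe: holds every odd-numbered image, and forwards the
// held image plus the new one when the even-numbered image arrives.
class AccumulatorPipe {
public:
    explicit AccumulatorPipe(std::function<void(Image, Image)> downstream)
        : downstream_(std::move(downstream)) {}

    void consume(const Image& img) {
        if (!pending_) {
            pending_ = img;           // first of the pair: hold it
        } else {
            Image first = *pending_;  // second of the pair: emit both
            pending_.reset();
            downstream_(first, img);
        }
    }

private:
    std::optional<Image> pending_;
    std::function<void(Image, Image)> downstream_;
};
```

Swapping the accumulator for a pass-through pipe (or a filter) then requires no change in the Acquisitioner or the Filterer, since both only see the generic source/sink interfaces.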
It depends. But if all your queue does is wait for a second image to come, I reckon you could simply implement it right in the Acquisitioner.
On the other hand, if you want to incorporate additional functionality there, then the added modularity and all the benefits that come hand in hand with it would not hurt one tiny bit.
I don't think it matters all that much in this particular case.