Is it possible to add a custom demuxer or decoder to playbin2 - gstreamer

I am working on an RMVB playback plugin for GStreamer. I wrote the demuxer and the decoders, and they work fine when I link them manually in a pipeline.
But the playback application uses playbin2 to play the videos.
So I wonder whether it is possible to add them to playbin2, so that playbin2 can play RMVB files.
But I don't know how to do that.
So my question is:
1. Is it possible to do that?
2. If it is possible, what keywords should I search for?
3. If it is impossible, is there any other way to play RMVB files at the lowest cost? (It is hard to change the playback application's source code.)
Any help would be appreciated.
Thanks a lot.

Yes
Elements have ranks, and playbin looks for the elements with the highest rank to use. So you need to make sure your element reports the RMVB caps (as reported by gst-typefind) on its sink pads and that it has a high enough rank. Ranks are set when registering the element to the plugin, as in the sketch below.
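A minimal sketch of what that registration could look like in GStreamer 1.x (the element name "myrmdemux" and the GST_TYPE_MY_RM_DEMUX macro are hypothetical placeholders for your own demuxer's boilerplate):

    /* plugin_init sketch: register a custom RealMedia demuxer with a rank
     * high enough that playbin/decodebin will prefer it. */
    #include <gst/gst.h>

    static gboolean
    plugin_init (GstPlugin * plugin)
    {
      /* GST_RANK_PRIMARY + 1 outranks the stock elements */
      return gst_element_register (plugin, "myrmdemux",
          GST_RANK_PRIMARY + 1, GST_TYPE_MY_RM_DEMUX);
    }

    GST_PLUGIN_DEFINE (GST_VERSION_MAJOR, GST_VERSION_MINOR,
        myrm, "Custom RealMedia demuxer and decoders",
        plugin_init, "1.0", "LGPL", "myrm", "https://example.org/")

The same idea applies to 0.10, but see the version advice below.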
There should already be RMVB support in GStreamer; maybe you're just missing the proper plugin in your installation? You shouldn't need to write a new one. It should be in gst-plugins-ugly (realmedia is the name of the plugin, IIRC).
Unasked, but please move to 1.0; 0.10 has been dead/obsolete/unmaintained for years now. By using it you won't get much, if any, help from the community these days.

Related

How to alter the behaviour of gstreamer autoaudiosink, autovideosink, etc.

GStreamer has autoaudiosink, autovideosink, autoaudiosrc, and autovideosrc.
How does this work when there are multiple sources or sinks that match?
For example, a video sink on Windows can be OpenGL or DirectX based.
How does GStreamer decide which one to use?
Is there any possibility to alter this?
How does gstreamer decide which one to use?
GStreamer has a very general autoplugging mechanism so that it can do the right thing. The documentation is quite terse, but let's go over it for the autovideosink case.
In a first step, autoplugging will narrow the search down to the relevant elements on your system: for example, if the input of a decodebin element is an H.264 stream, it will only consider elements that advertise video/x-h264 caps on their sink pads. In the case of autovideosink, it will filter for elements that carry explicit "Sink" and "Video" classifiers in their metadata.
In a second step, it still needs to select the best match out of the set of elements that was just collected. For that, GStreamer picks the element with the highest "rank". Plugins have a default rank, but you can modify it programmatically (how to do this has been answered elsewhere).
Note: some elements (like decodebin) also provide extra API for even more fine-grained control.
Is there any possibility to alter this?
So the short answer is: by modifying the plugin's rank, as in the sketch below.
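For example, a minimal sketch of bumping a specific element's rank from application code (GStreamer 1.x; glimagesink is just an illustration):

    #include <gst/gst.h>

    int
    main (int argc, char *argv[])
    {
      gst_init (&argc, &argv);

      /* Look up the element factory in the registry and raise its rank so
       * the auto-pluggers (autovideosink, playbin, decodebin) prefer it. */
      GstRegistry *registry = gst_registry_get ();
      GstPluginFeature *feature =
          gst_registry_lookup_feature (registry, "glimagesink");
      if (feature != NULL) {
        gst_plugin_feature_set_rank (feature, GST_RANK_PRIMARY + 1);
        gst_object_unref (feature);
      }

      /* ... build and run the pipeline as usual ... */
      return 0;
    }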
It's also good to note that application developers will generally pick a given sink (for example glimagesink) and optimize for that case by configuring that element's properties.

What is the path from BITMAP[+WAVE(s)] to RTSP (Twitch) via C/C++ in Windows?

So I'm trying to build a basic tool that outputs video/audio to Twitch. I'm new to this side (AV) of programming, so I'm not even sure what to look for. I'm trying to use mainly Windows infrastructure, and third-party libraries where that isn't available.
What are the steps for getting raw bitmap and wave data into a codec, then into an RTSP client, and finally showing up on Twitch? I'm not looking for code; I'm looking for concepts I can search for, as I'm not absolutely sure what to search for. I'd rather not go through the OBS source code to figure it out; I'll use that as a last resort.
So I capture the monitor via Output Duplication, the system sound as one wave stream, and the microphone as another. I'm trying to push this to Twitch. I know there's Media Foundation on Windows, but I don't know how far toward streaming it can get, as I assume there is no network code integrated into it? There's also the libav* collection in FFmpeg.
What are the basic steps of sending bitmap/wave data to Twitch via any of the above libraries, or even others, as long as they work on Windows? Please don't add code; I just need a fairly short conceptual explanation and I'll take it from there. Also try to cover how bitrate and framerate get regulated (do I have to do it, or does the codec do it)?
Assume absolute noob level in this area (concept-wise, not code-wise).

Windows Media Foundation - Right speaker doesn't work

I am using Windows Media Foundation in C++ to play audio and video files.
My application is pretty much based on the Media Foundation guide - http://msdn.microsoft.com/en-us/library/ms697062%28v=VS.85%29.aspx.
My problem is that when I play a media file, the audio is rendered only from the left speaker.
Some more info:
The problem happens for both Audio and Video files.
My topology is a classic Input-Node -> Transform-Node -> Output-Node.
The audio stream looks okay at the output of the Output-Node (it's a float32 stream and it has no interleaved zeros for the right speaker).
The Transform-Node in the topology is intended for a future equalizer, but currently it does nothing. Even if I remove it from the topology, the problem still occurs.
I suppose the problem might be caused by some misconfiguration of Media Foundation, but I haven't found anything out of order with respect to the Media Foundation Guide.
Any idea what might be the problem?
I would be happy to share relevant code samples or give any other relevant info about my implementation.
Thanks.
It sounds like either your source node is providing a single-channel data stream, or the input media type of the output node is single channel. If it's the latter, then the media session is injecting a transform that downmixes the input stream to a single channel to conform with that media type.
I would check the media types of both nodes and see if this is the issue; a minimal check is sketched below.
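A tiny hedged helper for that check (how you obtain the IMFMediaType from the source's stream descriptor or the output node's input is up to your code):

    #include <mfapi.h>
    #include <mfidl.h>

    // Returns the channel count advertised by a media type, or 0 if not set.
    // A value of 1 here would explain a mono/downmixed render.
    UINT32 GetChannelCount(IMFMediaType *pType)
    {
        UINT32 channels = 0;
        if (FAILED(pType->GetUINT32(MF_MT_AUDIO_NUM_CHANNELS, &channels)))
            channels = 0;   // attribute not present on this type
        return channels;
    }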
I've found the problem.
It was a misuse of the waveOutSetVolume() function that muted my right speaker (I called it with the value 0xFFFF instead of 0xFFFFFFFF).
Somehow I missed it in the multiple code reviews I did while debugging this issue :(
So it was not related to Media Foundation at all.
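For reference, a small sketch of the corrected call: waveOutSetVolume() packs the left-channel volume in the low-order word and the right-channel volume in the high-order word, so 0x0000FFFF maxes the left channel and mutes the right one.

    #include <windows.h>
    #include <mmsystem.h>
    #pragma comment(lib, "winmm.lib")

    // Set both channels of a waveOut device to full volume.
    void SetFullVolume(HWAVEOUT hwo)
    {
        // 0xFFFFFFFF = left word 0xFFFF + right word 0xFFFF.
        // The buggy call used 0x0000FFFF, leaving the right channel at 0.
        waveOutSetVolume(hwo, 0xFFFFFFFF);
        // Equivalent: waveOutSetVolume(hwo, MAKELONG(0xFFFF, 0xFFFF));
    }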

video/audio encoding/decoding/playback

I've always wanted to try to make a media player, but I don't understand how. I found FFmpeg and GStreamer, and I seem to be favoring FFmpeg despite its weaker documentation, even though I haven't written anything at all yet. That being said, I feel I would understand things better if I knew what they were actually doing. I have no idea how video/audio streams work, nor the several media types involved, so that doesn't help. At the end of the day, I'm just 'emulating' some of the code samples.
Where do I start to learn how to encode/decode/play back video/audio streams without having to read hundreds of pages of several 'standards'? Perhaps, to a certain extent, enough knowledge to play back media without relying on another API. Googling 'basic video audio decoding encoding' doesn't seem to help. :(
This seems to be a black art that nobody is willing to explain.
The first part is extracting the streams from the container. From there, you need to decode the streams into raw media. I recommend finding a small Theora video and seeing how the pieces relate there.
You want us to write one answer that you read and then you're a master of the multimedia domain? That cannot be done in one answer.
First of all, understand this terminology by googling it:
1. container -- muxer/demuxer
2. codec -- coder/decoder
If you like FFmpeg, then start with its basic video player tutorial. It is well documented here: http://dranger.com/ffmpeg/ . It shows how to demux a container and decode an elementary stream with the FFmpeg API. More about this is at http://ffmpeg.org/ffplay.html
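For orientation (the dranger tutorial predates the current API), here is a rough, hedged sketch of the demux-then-decode loop with today's libavformat/libavcodec calls; error handling is mostly omitted:

    extern "C" {
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>
    }

    int main(int argc, char *argv[])
    {
        const char *filename = argv[1];          // path to any media file
        AVFormatContext *fmt = NULL;

        // 1. Demuxing: open the container and discover the streams inside it.
        if (avformat_open_input(&fmt, filename, NULL, NULL) < 0)
            return 1;
        avformat_find_stream_info(fmt, NULL);

        const AVCodec *dec = NULL;
        int vstream = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, &dec, 0);
        if (vstream < 0)
            return 1;

        // 2. Decoding: set up a decoder from the stream's codec parameters.
        AVCodecContext *ctx = avcodec_alloc_context3(dec);
        avcodec_parameters_to_context(ctx, fmt->streams[vstream]->codecpar);
        avcodec_open2(ctx, dec, NULL);

        AVPacket *pkt = av_packet_alloc();
        AVFrame *frame = av_frame_alloc();

        // 3. Read compressed packets from the demuxer, feed the decoder,
        //    and pull out raw frames (rendering/playback would go here).
        while (av_read_frame(fmt, pkt) >= 0) {
            if (pkt->stream_index == vstream) {
                avcodec_send_packet(ctx, pkt);
                while (avcodec_receive_frame(ctx, frame) == 0) {
                    // frame->data now holds a raw (e.g. YUV) picture
                }
            }
            av_packet_unref(pkt);
        }

        av_frame_free(&frame);
        av_packet_free(&pkt);
        avcodec_free_context(&ctx);
        avformat_close_input(&fmt);
        return 0;
    }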
I like GStreamer more than FFmpeg; it has good documentation. It would be a good choice if you start with GStreamer.

Video mixer filter

I need to find a video filter in order to mix multiple video streams (let's say, at most 4).
I've found a video mixer filter from MediaLooks and it's OK, but the problem is that I'm trying to use it in a school project (for the entire semester), so the 30-day trial is kind of unacceptable.
So my question is: are you aware of a free DirectShow filter that could help? If not, it means I must write one, and the problem there is that I don't know where to start.
If you need output to the display, you can use the VMR. If you need output to a file, then I think you will need to write something. The standard solution to this is to write an allocator/presenter plugin for the VMR that lets you get the mixed video back and then save it somewhere. This is more efficient than a fully software-only mixer filter.
I finally ended up implementing my own filter.
The Video Mixing Renderer 9 (or 7) will do the trick for you. You can set the opacity and the output area of each video going into the VMR-9; a rough sketch is below. I suggest playing with it from within GraphEdit first.
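A hedged sketch of that configuration (pVmr9 is assumed to be the CLSID_VideoMixingRenderer9 filter already added to your graph, and mixing mode must be enabled before its input pins are connected):

    #include <dshow.h>
    #include <d3d9.h>
    #include <vmr9.h>
    #pragma comment(lib, "strmiids.lib")

    // Switch the VMR-9 into mixing mode and position/fade one input stream.
    HRESULT ConfigureMixing(IBaseFilter *pVmr9)
    {
        IVMRFilterConfig9 *pConfig = NULL;
        HRESULT hr = pVmr9->QueryInterface(IID_IVMRFilterConfig9, (void **)&pConfig);
        if (FAILED(hr)) return hr;
        hr = pConfig->SetNumberOfStreams(4);   // enables mixing mode, up to 4 inputs
        pConfig->Release();
        if (FAILED(hr)) return hr;

        IVMRMixerControl9 *pMixer = NULL;
        hr = pVmr9->QueryInterface(IID_IVMRMixerControl9, (void **)&pMixer);
        if (FAILED(hr)) return hr;

        // Stream 0: half opacity, rendered into the top-left quadrant.
        pMixer->SetAlpha(0, 0.5f);
        VMR9NormalizedRect rect = { 0.0f, 0.0f, 0.5f, 0.5f };  // left, top, right, bottom
        hr = pMixer->SetOutputRect(0, &rect);

        pMixer->Release();
        return hr;
    }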
I would also like to suggest skipping that altogether. If you use WPF, you will get far more media capabilities, much more easily.
If you want low-level DirectShow support, you can try my project, WPF MediaKit. I have a control called MediaUriElement that is similar to WPF's MediaElement.