Video Mixing Options - GStreamer

I am working on a larger video-wall project and want to display multiple video sources on a single display, tiled together into something like a grid.
What are all my options? So far I have come across:
Java with JMF
Python with GStreamer bindings
Before committing to a technology, I want to get a clear picture of the available resources and their limitations.

With GStreamer you can realize this. You would use four uridecodebin instances and feed them into a videomixer. On each videomixer pad you can set xpos, ypos, z-order, and alpha. Between the uridecodebins and the videomixer you will probably want to plug in scaling and framerate adaptation; a sketch follows below.
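A minimal sketch of that 2x2 layout, assuming GStreamer 1.x with its C API (the file URIs and the 640x360 tile size are placeholders; on current releases the compositor element is the successor to videomixer):

    #include <gst/gst.h>

    int main(int argc, char *argv[]) {
        gst_init(&argc, &argv);

        /* Four uridecodebin branches, each scaled and rate-adapted to a
         * fixed tile format, feeding one videomixer. Pad properties in
         * parse-launch syntax (sink_N::xpos) need GStreamer >= 1.10. */
        GError *err = NULL;
        GstElement *pipeline = gst_parse_launch(
            "videomixer name=mix "
            "  sink_0::xpos=0   sink_0::ypos=0 "
            "  sink_1::xpos=640 sink_1::ypos=0 "
            "  sink_2::xpos=0   sink_2::ypos=360 "
            "  sink_3::xpos=640 sink_3::ypos=360 "
            "  ! videoconvert ! autovideosink "
            "uridecodebin uri=file:///path/a.mp4 ! videoconvert ! videoscale "
            "  ! videorate ! video/x-raw,width=640,height=360,framerate=30/1 ! mix. "
            "uridecodebin uri=file:///path/b.mp4 ! videoconvert ! videoscale "
            "  ! videorate ! video/x-raw,width=640,height=360,framerate=30/1 ! mix. "
            "uridecodebin uri=file:///path/c.mp4 ! videoconvert ! videoscale "
            "  ! videorate ! video/x-raw,width=640,height=360,framerate=30/1 ! mix. "
            "uridecodebin uri=file:///path/d.mp4 ! videoconvert ! videoscale "
            "  ! videorate ! video/x-raw,width=640,height=360,framerate=30/1 ! mix.",
            &err);
        if (!pipeline) {
            g_printerr("Parse error: %s\n", err->message);
            return 1;
        }

        gst_element_set_state(pipeline, GST_STATE_PLAYING);

        /* Block until an error or end-of-stream. */
        GstBus *bus = gst_element_get_bus(pipeline);
        GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
            (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
        if (msg)
            gst_message_unref(msg);

        gst_element_set_state(pipeline, GST_STATE_NULL);
        gst_object_unref(bus);
        gst_object_unref(pipeline);
        return 0;
    }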


Video and audio blending/fading with GStreamer

I'm trying to evaluate functionality in gstreamer for applicability in a new application.
The application should be able to dynamically play videos and images depending on a few criteria (user input, ...) that are not really relevant to this question. The main thing I was not able to figure out was how to achieve seamless crossfading/blending between successive content.
I was thinking about using the videomixer plugin and programmatically transitioning the sink pads' alpha values. However, I'm not sure whether this would work, nor whether it is a good idea.
A GStreamer solution would be preferred because it is available on both the development and target platforms. Furthermore, a custom videosink implementation may be used in the end for rendering the content to proprietary displays.
Edit: I was able to code up a prototype using two file sources fed into a videomixer, using GstInterpolationControlSource and GstTimedValueControlSource to bind and interpolate the videomixer's alpha control inputs (see the sketch below). The fades look perfect; however, what I did not quite have on the radar was that I cannot dynamically change a file source's location while the pipeline is running. Furthermore, it feels like misusing functions not intended for the job at hand.
Any feedback on how to tackle this use case would still be very much appreciated. Thanks!
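For reference, the control-source binding described in the edit looks roughly like this; a minimal sketch, assuming GStreamer 1.x and the gstreamer-controller library (the two-second fade and the pad handle are illustrative):

    #include <gst/gst.h>
    #include <gst/controller/gstinterpolationcontrolsource.h>
    #include <gst/controller/gstdirectcontrolbinding.h>

    /* Fade a videomixer sink pad's "alpha" from opaque to transparent
     * over two seconds, starting at `start` (pipeline running time). */
    static void fade_out_pad(GstPad *mixer_sink_pad, GstClockTime start) {
        GstControlSource *cs = gst_interpolation_control_source_new();
        g_object_set(cs, "mode", GST_INTERPOLATION_MODE_LINEAR, NULL);

        /* Attach the control source directly to the pad property. */
        gst_object_add_control_binding(GST_OBJECT(mixer_sink_pad),
            gst_direct_control_binding_new(GST_OBJECT(mixer_sink_pad),
                                           "alpha", cs));

        /* Timed values are normalized to the property's range (0..1). */
        GstTimedValueControlSource *tv = (GstTimedValueControlSource *) cs;
        gst_timed_value_control_source_set(tv, start, 1.0);
        gst_timed_value_control_source_set(tv, start + 2 * GST_SECOND, 0.0);

        gst_object_unref(cs);
    }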

What is the path from BITMAP[+WAVE(s)] to RTMP (Twitch) via C/C++ in Windows?

So I'm trying to get a basic tool to output video/audio(s) to Twitch. I'm new to this (AV) side of programming, so I'm not even sure what to look for. I'm trying to use mainly Windows infrastructure, and third-party libraries only where that is not available.
What are the steps to get raw bitmap and wave data into a codec, then into an RTMP client, and finally showing up on Twitch? I'm not looking for code; I'm looking for concepts I can search for, as I'm not absolutely sure what to search for. I'd rather not go through the OBS source code to figure it out, and will use that only as a last resort.
So I capture the monitor via Output Duplication, and also the system sound as one wave and the microphone as another wave. I'm trying to push this to Twitch. I know there's Media Foundation on Windows, but I don't know how far toward streaming it can get, as I assume there is no network code integrated into it. And there is also the libav* collection from FFmpeg.
What are the basic steps of sending bitmap/wave data to Twitch via any of the above libraries, or even others, as long as they work on Windows? Please don't add code; I just need a not-very-long conceptual explanation, and I'll take it from there. Try to also cover how bitrate and framerate get regulated (do I have to do it, or does the codec do it?).
Assume absolute noob level in this area (concept-wise, not code-wise).
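Since the question asks for concepts rather than code, the following is only a signpost: a hedged outline of those stages with the FFmpeg libav* libraries (Twitch ingest speaks RTMP carrying FLV, which expects H.264 video and AAC audio; the URL, stream key, and format values below are placeholders):

    extern "C" {
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>
    }

    int main() {
        // 1. Protocol + container: FLV over RTMP to the ingest endpoint.
        AVFormatContext *out = NULL;
        avformat_alloc_output_context2(&out, NULL, "flv",
                                       "rtmp://live.twitch.tv/app/STREAM_KEY");

        // 2. Codec: the encoder consumes raw frames (your BGRA bitmaps
        //    converted to YUV420P, your PCM waves resampled/mixed) and
        //    emits compressed packets. Bitrate and framerate are set by
        //    you, here on the encoder context; the codec does not pick
        //    them for you.
        const AVCodec *venc = avcodec_find_encoder(AV_CODEC_ID_H264);
        AVCodecContext *vctx = avcodec_alloc_context3(venc);
        vctx->width     = 1920;                  // your capture size
        vctx->height    = 1080;
        vctx->pix_fmt   = AV_PIX_FMT_YUV420P;
        vctx->time_base = AVRational{1, 30};     // 30 fps, your choice
        vctx->bit_rate  = 3000000;               // ~3 Mbit/s, your choice
        avcodec_open2(vctx, venc, NULL);

        // 3. Streams + loop: one AVStream per codec, then
        //    avformat_write_header() opens the RTMP connection, and the
        //    capture loop is: fill AVFrame -> avcodec_send_frame() ->
        //    avcodec_receive_packet() -> av_interleaved_write_frame().
        AVStream *vs = avformat_new_stream(out, NULL);
        avcodec_parameters_from_context(vs->codecpar, vctx);
        // ... audio (AAC) stream, write_header, capture/encode loop,
        //     write_trailer ...
        return 0;
    }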

How to use an MFT in a Windows application without the Media Foundation pipeline

I am a newbie in Media Foundation programming, and in Windows programming as well.
It might look like a very silly question, but I didn't get a clear answer anywhere.
My application captures the screen, scales it, encodes it, and sends the data over the network. I am looking to improve the performance of my pipeline, so I want to swap out some of the intermediate libraries, such as the scaling or encoding libraries.
After a lot of searching for better scaling and encoding options, I ended up at some MFTs (Media Foundation Transforms), e.g. the Video Processor MFT and the H.264 Video Encoder MFT.
My application already implements a pipeline, and I don't want to change the complete architecture.
Can I use an MFT directly as a library and add it to my project, or do I have to build a complete pipeline with a source and a sink?
As per the Media Foundation architecture, an MFT is an intermediate block; it exposes IMFTransform::GetInputStreamInfo and IMFTransform::GetOutputStreamInfo.
Is there any way to call the MFT's APIs directly to perform scaling and encoding without creating a complete pipeline?
Please provide a link if a similar question has already been asked.
Yes, you can create this IMFTransform directly and use it in isolation from the pipeline. That is a very typical usage model for an encoder MFT.
You will need to configure the input/output media types, start streaming, feed input frames, and grab output frames.
Depending on whether your transform is synchronous or asynchronous (which may differ between hardware and software implementations of your MFT), you may need to use the basic (https://msdn.microsoft.com/en-us/library/windows/desktop/aa965264(v=vs.85).aspx) or asynchronous (https://msdn.microsoft.com/en-us/library/windows/desktop/dd317909(v=vs.85).aspx) processing model.
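A minimal sketch of the synchronous model, assuming the software H.264 encoder MFT created standalone via CoCreateInstance (the 1280x720@30 NV12 format and the bitrate are placeholders; error handling is mostly elided). For an encoder, the output type must be set before the input type:

    #include <mfapi.h>
    #include <mfidl.h>
    #include <mftransform.h>
    #include <wmcodecdsp.h>   // CLSID_CMSH264EncoderMFT (SW encoder)

    HRESULT ConfigureEncoder(IMFTransform *mft) {
        // Output (compressed) type first...
        IMFMediaType *outType = NULL;
        MFCreateMediaType(&outType);
        outType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
        outType->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_H264);
        outType->SetUINT32(MF_MT_AVG_BITRATE, 3000000);
        outType->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive);
        MFSetAttributeSize(outType, MF_MT_FRAME_SIZE, 1280, 720);
        MFSetAttributeRatio(outType, MF_MT_FRAME_RATE, 30, 1);
        HRESULT hr = mft->SetOutputType(0, outType, 0);
        outType->Release();
        if (FAILED(hr)) return hr;

        // ...then the input (uncompressed) type.
        IMFMediaType *inType = NULL;
        MFCreateMediaType(&inType);
        inType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
        inType->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_NV12);
        MFSetAttributeSize(inType, MF_MT_FRAME_SIZE, 1280, 720);
        MFSetAttributeRatio(inType, MF_MT_FRAME_RATE, 30, 1);
        hr = mft->SetInputType(0, inType, 0);
        inType->Release();
        return hr;
    }

    // Streaming then alternates IMFTransform::ProcessInput (one raw
    // frame in) with IMFTransform::ProcessOutput (compressed samples
    // out); ProcessOutput returning MF_E_TRANSFORM_NEED_MORE_INPUT is
    // not an error, it just means "feed the next frame".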

C++ convert/play videos and images

I'm looking for a built-in library for converting videos/images. I heard something about DirectShow. Do you know any library you have used to convert videos/images?
For transcoding (converting one video format to another), using DirectShow is a bit tricky; you want to use Media Foundation for this job.
There is a Transcode API available in Media Foundation to achieve this task. This link has more details on the Transcode API, with tutorials and samples to get you started.
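A hedged sketch of what that looks like, assuming an already-created media session and media source (the output name and MP4 container are placeholders; the profile's audio/video attributes are elided):

    #include <mfapi.h>
    #include <mfidl.h>

    HRESULT TranscodeToMp4(IMFMediaSession *session, IMFMediaSource *source) {
        // Describe the target: container plus audio/video attributes.
        IMFTranscodeProfile *profile = NULL;
        MFCreateTranscodeProfile(&profile);

        IMFAttributes *container = NULL;
        MFCreateAttributes(&container, 1);
        container->SetGUID(MF_TRANSCODE_CONTAINERTYPE,
                           MFTranscodeContainerType_MPEG4);
        profile->SetContainerAttributes(container);
        // ... SetVideoAttributes / SetAudioAttributes likewise ...

        // Build the whole source -> transforms -> sink topology in one
        // call and hand it to the session; then Start() the session.
        IMFTopology *topo = NULL;
        HRESULT hr = MFCreateTranscodeTopology(source, L"out.mp4",
                                               profile, &topo);
        if (SUCCEEDED(hr))
            hr = session->SetTopology(0, topo);
        return hr;
    }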
You can use DirectShow for grabbing images from a video stream. For that you must create your own filter node. It is a complex task, because a filter is a COM object that works within a chain (the DirectShow filter graph) of other filter nodes, such as codecs. After creating your filter, you need to register it in the system. I still think it is worth trying, because you can use all the codecs registered in the system and, as a result, get the decompressed/final image into your filter; a sketch using the stock SampleGrabber filter follows below. As another solution, you could try to use modules from some open-source media player, for example VideoLAN, but as far as I know that is a big codebase and not easy to use.
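As a simpler alternative to writing a custom filter, older SDKs ship a stock SampleGrabber filter that can sit in the graph and hand you decoded frames; a sketch of configuring it (qedit.h is deprecated and absent from newer Windows SDKs, so treat this as illustrative):

    #include <dshow.h>
    #include <qedit.h>   // ISampleGrabber; deprecated, older SDKs only

    HRESULT ConfigureGrabber(IBaseFilter *grabberFilter) {
        ISampleGrabber *grabber = NULL;
        HRESULT hr = grabberFilter->QueryInterface(IID_PPV_ARGS(&grabber));
        if (FAILED(hr)) return hr;

        // Force upstream codecs to decode to plain RGB for us.
        AM_MEDIA_TYPE mt = {};
        mt.majortype = MEDIATYPE_Video;
        mt.subtype   = MEDIASUBTYPE_RGB24;
        grabber->SetMediaType(&mt);

        // Keep a copy of each sample; fetch it with GetCurrentBuffer().
        hr = grabber->SetBufferSamples(TRUE);
        grabber->Release();
        return hr;
    }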
Good luck!

Video mixer filter

I need to find a video filter in order to mix multiple video streams (let's say four at most).
I've found a video mixer filter from MediaLooks and it is OK, but the problem is that I'm trying to use it in a school project (for the entire semester), so the 30-day trial is kind of unacceptable.
So my question is: are you aware of a free DirectShow filter that could help? If not, then I must write one, and the problem there is that I don't know where to start.
If you need output to the display, you can use the VMR. If you need output to a file, then I think you will need to write something. The standard solution is to write an allocator/presenter plugin for the VMR that allows you to get back the mixed video and then save it somewhere. This is more efficient than a fully software-only mixer filter.
I finally ended up implementing my own filter.
The Video Mixing Renderer 9 (and 7) will do the trick for you. You can set the opacity and the area of each video going into the VMR9, as in the sketch below. I suggest playing with it from within GraphEdit.
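A minimal sketch of that, assuming the VMR9 is already in the graph with at least two connected inputs so mixing mode is active (the stream IDs and the rectangle are illustrative):

    #include <dshow.h>
    #include <d3d9.h>
    #include <vmr9.h>

    // Place one input stream and set its opacity on the VMR9 mixer.
    HRESULT LayoutStream(IBaseFilter *vmr9, DWORD stream,
                         float alpha, const VMR9NormalizedRect *rect) {
        IVMRMixerControl9 *mixer = NULL;
        HRESULT hr = vmr9->QueryInterface(IID_PPV_ARGS(&mixer));
        if (FAILED(hr)) return hr;

        mixer->SetAlpha(stream, alpha);          // 0.0 clear .. 1.0 opaque
        hr = mixer->SetOutputRect(stream, rect); // coords in 0..1 space
        mixer->Release();
        return hr;
    }

    // e.g. top-left quadrant, fully opaque:
    //   VMR9NormalizedRect r = { 0.0f, 0.0f, 0.5f, 0.5f };
    //   LayoutStream(vmr9, 0, 1.0f, &r);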
I would also like to suggest skipping that altogether. If you use WPF, you will get far more media capabilities, much more easily.
If you want low-level DirectShow support, you can try my project, WPF MediaKit. I have a control called MediaUriElement that is similar to WPF's MediaElement.