Is it necessary to sink unneeded streams into a fakesink?

Suppose I have a GStreamer pipeline with a source that provides both audio and video and I am interested in only one of those. Can I ignore the other or is it best practice to route the unneeded stream into a fakesink?
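For reference, a minimal sketch of the fakesink idiom the question describes, using the GStreamer C API from C++ (the file URI and the choice of sinks are placeholders):

```cpp
#include <gst/gst.h>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    // Hypothetical pipeline: keep the video branch, park the audio branch
    // in a fakesink. sync=false keeps the discarded branch from waiting
    // on the pipeline clock.
    GError *err = nullptr;
    GstElement *pipeline = gst_parse_launch(
        "uridecodebin uri=file:///tmp/example.mp4 name=dec "
        "dec. ! queue ! videoconvert ! autovideosink "
        "dec. ! queue ! fakesink sync=false",
        &err);
    if (!pipeline) {
        g_printerr("Parse error: %s\n", err->message);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    // Block until EOS or an error.
    GstBus *bus = gst_element_get_bus(pipeline);
    gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
        (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(bus);
    gst_object_unref(pipeline);
    return 0;
}
```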

Related

Best video format / codec to optimise 'seeking' with Videogular

I am using the Videogular2 library within my Ionic 3 application. A major feature of the application is the ability to seek to different places within a video.
I noticed that some formats have a very quick seek response, while others take seconds to get there, even if the video is already in the buffer; I assume this depends on the decoding process being used.
What would the best compromise be in order to speed up seek time while still keeping the file size reasonably small so that the video can be streamed from a server?
EDIT
Well, I learned that the seek delays were caused by the format my video was originally recorded in (MOV). Transcoding it afterwards didn't help; apparently the damage had already been done. After screen-capturing the video and encoding it as a regular MP4, seeking happens almost instantaneously.
What would the best compromise be in order to speed up seek time while still keeping the file size reasonably small so that the video can be streamed from a server?
Decrease the key-frame distance when encoding the video. With key frames closer together, the decoder can rebuild a full frame sooner after a seek, with less scanning; the exact benefit depends on the codec.
This will increase the file size at the same quality parameters, so the compromise is to reduce quality slightly at the same time.
The actual effect depends on the codec itself, how it builds intermediate frames, and how it is supported and implemented in the browser, together with the general loading/caching strategy (you can control some of the latter via Media Source Extensions).
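As an illustration of that trade-off, here is a hedged sketch using FFmpeg's libavcodec (usable from C++): a shorter GOP, i.e. key-frame distance, paired with a slightly lower quality setting. The resolution, frame rate, and CRF value are placeholders.

```cpp
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/opt.h>
}

AVCodecContext *make_seek_friendly_encoder()
{
    const AVCodec *codec = avcodec_find_encoder_by_name("libx264");
    AVCodecContext *ctx = avcodec_alloc_context3(codec);

    ctx->width     = 1280;
    ctx->height    = 720;
    ctx->time_base = AVRational{1, 30};
    ctx->pix_fmt   = AV_PIX_FMT_YUV420P;
    ctx->gop_size  = 30;   // a key frame at least once per second at 30 fps

    // Compensate for the larger file: slightly lower quality
    // (libx264's default CRF is 23; higher means smaller/worse).
    av_opt_set(ctx->priv_data, "crf", "26", 0);

    avcodec_open2(ctx, codec, nullptr);
    return ctx;
}
```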

How to use an MFT in a Windows application without the media transform pipeline

I am a newbie in Media Foundation and Windows programming.
It might look like a very silly question, but I didn't get a clear answer anywhere.
My application captures the screen, scales, encodes, and sends the data over the network. I am looking to improve the performance of my pipeline, so I want to swap out some intermediate libraries, such as the scaling or encoding libraries.
After a lot of searching for better scaling and encoding options, I ended up with some MFTs (Media Foundation Transforms), e.g. the Video Processor MFT and the H.264 Video Encoder MFT.
My application already has an implemented pipeline and I don't want to change the whole architecture.
Can I use an MFT directly as a library in my project, or do I have to build a complete pipeline with a source and sink?
As per the Media Foundation architecture, an MFT is an intermediate block; it exposes IMFTransform::GetInputStreamInfo and IMFTransform::GetOutputStreamInfo.
Is there any way to call an MFT's APIs directly to perform scaling and encoding without creating a complete pipeline?
Please provide a link if a similar question has already been asked.
Yes, you can create this IMFTransform directly and use it in isolation from a pipeline. That is a very typical usage model for an encoder MFT.
You will need to configure the input/output media types, start streaming, feed input frames, and grab output frames.
Depending on whether your transform is synchronous or asynchronous (which may differ between hardware and software implementations of your MFT), you will need to use either the basic (https://msdn.microsoft.com/en-us/library/windows/desktop/aa965264(v=vs.85).aspx) or the async (https://msdn.microsoft.com/en-us/library/windows/desktop/dd317909(v=vs.85).aspx) processing model.
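A rough sketch of the synchronous model with the software H.264 encoder MFT, used standalone, might look like the following. The resolution, frame rate, and bitrate are placeholders, and error handling is omitted for brevity.

```cpp
#include <windows.h>
#include <mfapi.h>
#include <mftransform.h>
#include <mferror.h>
#include <wmcodecdsp.h>   // CLSID_CMSH264EncoderMFT

void EncodeStandalone()
{
    CoInitializeEx(nullptr, COINIT_MULTITHREADED);
    MFStartup(MF_VERSION);

    IMFTransform *encoder = nullptr;
    CoCreateInstance(CLSID_CMSH264EncoderMFT, nullptr, CLSCTX_INPROC_SERVER,
                     IID_PPV_ARGS(&encoder));

    // The H.264 encoder MFT wants its output type set before its input type.
    IMFMediaType *outType = nullptr;
    MFCreateMediaType(&outType);
    outType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
    outType->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_H264);
    outType->SetUINT32(MF_MT_AVG_BITRATE, 4000000);
    outType->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive);
    MFSetAttributeSize(outType, MF_MT_FRAME_SIZE, 1920, 1080);
    MFSetAttributeRatio(outType, MF_MT_FRAME_RATE, 30, 1);
    encoder->SetOutputType(0, outType, 0);

    IMFMediaType *inType = nullptr;
    MFCreateMediaType(&inType);
    inType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
    inType->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_NV12);
    MFSetAttributeSize(inType, MF_MT_FRAME_SIZE, 1920, 1080);
    MFSetAttributeRatio(inType, MF_MT_FRAME_RATE, 30, 1);
    encoder->SetInputType(0, inType, 0);

    encoder->ProcessMessage(MFT_MESSAGE_NOTIFY_BEGIN_STREAMING, 0);

    // Per frame: wrap the raw NV12 buffer in an IMFSample and call
    //   encoder->ProcessInput(0, sample, 0);
    // then drain compressed samples with
    //   encoder->ProcessOutput(0, 1, &outBuffer, &status);
    // until it returns MF_E_TRANSFORM_NEED_MORE_INPUT.
}
```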

Partial decoding h264 stream

I'm trying to get information about the frames in an H.264 bitstream, especially the motion vectors of macroblocks. I think I have to use the ffmpeg code for this, but it's really huge and hard to understand.
So, can someone give me some tips or examples of partially decoding a single frame from the raw data of an H.264 stream?
Thank you.
Unfortunately, to get that level of information from the bitstream you have to decode every macroblock; there's no quick option, like there would be for getting information from the slice header.
One option is to use the H.264 reference software and turn on the verbose debug output and/or add your own printf's where needed, but this is also a large code base to navigate:
http://iphome.hhi.de/suehring/tml/
(You can also use ffmpeg and add output where needed, as you said, but that would require some understanding of its code base too.)
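If you do go the FFmpeg route, the libraries can export per-block motion vectors as frame side data when the decoder is opened with the export_mvs flag. The sketch below condenses the idea behind FFmpeg's doc/examples/extract_mvs.c; decoder setup is only outlined in the comment.

```cpp
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/motion_vector.h>
}
#include <cstdio>

// Decoder setup (outline): request motion-vector export before opening.
//   AVDictionary *opts = nullptr;
//   av_dict_set(&opts, "flags2", "+export_mvs", 0);
//   avcodec_open2(dec_ctx, decoder, &opts);

void print_motion_vectors(const AVFrame *frame)
{
    AVFrameSideData *sd =
        av_frame_get_side_data(frame, AV_FRAME_DATA_MOTION_VECTORS);
    if (!sd)
        return;  // intra-only frames carry no motion vectors

    const AVMotionVector *mvs = (const AVMotionVector *)sd->data;
    for (size_t i = 0; i < sd->size / sizeof(*mvs); i++) {
        const AVMotionVector *mv = &mvs[i];
        std::printf("source %2d, %2dx%2d block: (%4d,%4d) -> (%4d,%4d)\n",
                    mv->source, mv->w, mv->h,
                    mv->src_x, mv->src_y, mv->dst_x, mv->dst_y);
    }
}
```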
There are graphical tools for analyzing video bitstreams that will show you this type of information on a per-macroblock basis; many are expensive, but free trial versions are sometimes available.

Reading Matroska (MKV) metadata and extracting streams

I am attempting to write a program in C++ that will need to display metadata from an MKV file and extract streams from the file.
I looked at libmatroska and found it difficult to use because it is very low level. There are also no usage examples apart from mkvtoolnix itself, which makes it hard to use without bringing in most of mkvinfo and mkvextract. libmediainfo covers most of what I need, although it would be nice to have more flexibility if possible.
In addition, I have found no simple way of extracting the video stream without using mkvextract to write it to a temporary file.
Any advice would be appreciated.
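For the metadata side, a minimal sketch using libmediainfo's C++ API could look like the following; the file name and the queried fields are placeholders, and the String type is std::wstring in Unicode builds.

```cpp
#include "MediaInfo/MediaInfo.h"
#include <iostream>

int main()
{
    MediaInfoLib::MediaInfo mi;
    if (!mi.Open(__T("example.mkv")))
        return 1;

    // Full human-readable report, similar to the mediainfo CLI output.
    std::wcout << mi.Inform() << std::endl;

    // Individual fields can be queried per stream kind and index.
    std::wcout << mi.Get(MediaInfoLib::Stream_Video, 0, __T("Format")) << std::endl;
    std::wcout << mi.Get(MediaInfoLib::Stream_Audio, 0, __T("Duration")) << std::endl;

    mi.Close();
    return 0;
}
```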

How do I packetize a video frame with JRTP

I am trying to take a video frame that I have and packetize it into various RTP packets. I am using jrtp and working in C++; can this be done with this library? If so, how do I go about it?
Thank you,
First, know what codec you have (H.263, H.264, MPEG-2, etc.). Then find the IETF AVT RFC for packetizing that codec (RFC 3984 for H.264, for example). Then look for libraries or implementations of that RFC (and look in jrtp), or code it yourself.
jrtplib provides only basic RTP/RTCP functionality; you have to do any media-type-specific packetization yourself. If you look at the RTPPacket constructor, it takes payload data and payload length parameters (amongst others). The RTPPacketBuilder class could also be of interest to you.
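For instance, a minimal jrtplib session that sends one pre-packetized payload might look like this sketch; the ports, destination address, payload type, and timestamp increment are illustrative values.

```cpp
#include <jrtplib3/rtpsession.h>
#include <jrtplib3/rtpsessionparams.h>
#include <jrtplib3/rtpudpv4transmitter.h>
#include <jrtplib3/rtpipv4address.h>
#include <cstdint>
#include <vector>

using namespace jrtplib;

int main()
{
    RTPSessionParams sessparams;
    sessparams.SetOwnTimestampUnit(1.0 / 90000.0);   // 90 kHz RTP clock for video

    RTPUDPv4TransmissionParams transparams;
    transparams.SetPortbase(5000);

    RTPSession session;
    if (session.Create(sessparams, &transparams) < 0)
        return 1;

    const uint8_t destIp[4] = {127, 0, 0, 1};        // placeholder receiver
    session.AddDestination(RTPIPv4Address(destIp, 6000));

    // One RTP packet per call: payload bytes, length, payload type,
    // marker bit, and timestamp increment.
    std::vector<uint8_t> payload = {0x00, 0x01, 0x02};  // stand-in payload
    session.SendPacket(payload.data(), payload.size(),
                       96 /* dynamic PT */, true /* marker */,
                       3000 /* 90000 / 30 fps */);

    session.BYEDestroy(RTPTime(2, 0), nullptr, 0);
    return 0;
}
```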
If you decide to do this yourself, you need to read the corresponding RFCs and implement according to them, as jesup stated.
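If you do implement the H.264 payload format yourself (RFC 3984, since superseded by RFC 6184), the FU-A fragmentation at its core is fairly small. A hedged sketch, with an illustrative MTU limit and an abstract send callback:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <vector>

constexpr size_t kMaxPayload = 1400;  // rough MTU minus RTP/UDP/IP headers

void fragment_fu_a(const uint8_t *nal, size_t len,
                   const std::function<void(const std::vector<uint8_t>&,
                                            bool last)> &send)
{
    if (len <= kMaxPayload) {           // fits: single-NAL-unit packet
        send(std::vector<uint8_t>(nal, nal + len), true);
        return;
    }

    const uint8_t fu_indicator = (nal[0] & 0xE0) | 28;  // keep NRI, type = FU-A
    const uint8_t nal_type     =  nal[0] & 0x1F;

    size_t offset = 1;                  // original NAL header byte is not resent
    bool first = true;
    while (offset < len) {
        size_t chunk = std::min(kMaxPayload - 2, len - offset);
        bool last = (offset + chunk == len);

        uint8_t fu_header = nal_type;
        if (first) fu_header |= 0x80;   // S (start) bit
        if (last)  fu_header |= 0x40;   // E (end) bit

        std::vector<uint8_t> payload;
        payload.push_back(fu_indicator);
        payload.push_back(fu_header);
        payload.insert(payload.end(), nal + offset, nal + offset + chunk);
        send(payload, last);            // RTP marker set on a frame's last packet

        offset += chunk;
        first = false;
    }
}
```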
FYI, the C++ live555 Streaming Media library handles packetization of many video formats for you, but it is also a lot more complex.