How to set up WebRTC with the VP9 codec and lossless compression

I have been trying to figure out whether it is possible to set up WebRTC with the VP9 codec and lossless compression.
So far I have figured out how to select VP9 in the SDP and how to set the coding profile (0-3). However, my understanding is that setting the encoder profile to index 3 does not by itself have an impact on the compression.
I also looked at the RTP payload specification for VP9, but its section on SDP parameters only shows how to set the codec and the coding profile.
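For reference, the relevant SDP lines look roughly like this (the payload type 98 and the profile value are illustrative; profile-id is the parameter defined by the VP9 RTP payload specification):

    m=video 9 UDP/TLS/RTP/SAVPF 98
    a=rtpmap:98 VP9/90000
    a=fmtp:98 profile-id=3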
So my question is: is it possible at all to set up WebRTC with VP9 lossless compression? If so, where would I set it, and does it have to be set in the SDP at all?

Is this using the browser WebRTC API? I don't believe this is possible.
WebCodecs is a proposal to give developers more control over things like this.
You could use MediaRecorder today and then send the media over a different transport, but I don't believe you can choose lossless encoding there either.

Related

Windows Media Foundation - Right speaker doesn't work

I am using Windows Media Foundation from C++ to play audio and video files.
My application is pretty much based on the Media Foundation guide - http://msdn.microsoft.com/en-us/library/ms697062%28v=VS.85%29.aspx.
My problem is that when I play a media file, the audio is rendered only from the left speaker.
Some more info:
The problem happens for both Audio and Video files.
My topology is a classic Input-Node -> Transfer-Node -> Output-Node.
The audio stream looks okay at the output of the Output-Node (it's a float32 stream, and there are no interleaved zeros in place of the right-speaker samples).
The Transfer-Node in the topology is for a future equalizer, but currently it does nothing. Even if I remove it from the topology, the problem still occurs.
I suppose the problem might be caused by some misconfiguration of Media Foundation, but I haven't found anything out of the ordinary with respect to the Media Foundation guide.
Any idea what might be the problem?
I would be happy to share relevant code samples or give any other relevant info about my implementation.
Thanks.
It sounds like either your source node is providing a single channel data stream or the input media type for the output node is single channel. If it's the latter case then the media session is injecting a transform that downmixes the input stream to single channel to conform with the media type.
I would check the media types of both nodes and see if this is the issue.
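As a quick check, a sketch along these lines (assuming you can get hold of the negotiated IMFMediaType, e.g. from the stream descriptor's IMFMediaTypeHandler) will tell you whether a mono type has sneaked in:

    #include <mfapi.h>
    #include <mfidl.h>
    #include <stdio.h>

    // pType: the negotiated audio media type of the node you are inspecting.
    void LogChannelCount(IMFMediaType *pType)
    {
        UINT32 channels = 0;
        if (SUCCEEDED(pType->GetUINT32(MF_MT_AUDIO_NUM_CHANNELS, &channels)))
        {
            // A value of 1 here would mean a downmix to mono is
            // happening somewhere upstream of this node.
            wprintf(L"Channels: %u\n", channels);
        }
    }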
I've found the problem.
It was a misuse of the waveOutSetVolume() function that muted my right speaker (I used it with value 0xFFFF instead of 0xFFFFFFFF).
Somehow I missed it in the multiple code reviews I did while debugging this issue :(
So not related to Media Foundation at all.
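For anyone hitting the same thing: waveOutSetVolume() packs the left-channel volume in the low-order word of the DWORD and the right-channel volume in the high-order word, so 0xFFFF means "left full, right muted". The fix is simply:

    // hWaveOut is the application's HWAVEOUT handle.
    // Low-order word = left channel, high-order word = right channel.
    // 0xFFFF     -> left full, right muted (the bug above)
    // 0xFFFFFFFF -> both channels at full volume
    waveOutSetVolume(hWaveOut, 0xFFFFFFFF);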

How to use an MFT in a Windows application without building a Media Foundation pipeline

I am a newbie in Media Foundation programming, and in Windows programming as well.
It might look like a very silly question, but I didn't find a clear answer anywhere.
My application captures the screen, then scales, encodes, and sends the data to the network. I am looking to improve the performance of my pipeline, so I want to swap out some of the intermediate libraries, such as the scaling or encoding libraries.
After a lot of searching for better scaling and encoding options, I ended up with some MFTs (Media Foundation transforms), e.g. the Video Processor MFT and the H.264 Video Encoder MFT.
My application already has an implemented pipeline, and I don't want to change the whole architecture.
Can I use an MFT directly as a library and add it to my project, or do I have to build a complete pipeline with a source and a sink?
As per the Media Foundation architecture, an MFT is an intermediate block; it exposes IMFTransform::GetInputStreamInfo and IMFTransform::GetOutputStreamInfo.
Is there any way to call the MFT APIs directly to perform scaling and encoding without creating a complete pipeline?
Please provide a link if a similar question has already been asked.
Yes, you can create the IMFTransform directly and use it in isolation from the pipeline. That is a very typical usage model for an encoder MFT.
You will need to configure the input/output media types, start streaming, feed input frames, and grab output frames.
Depending on whether your transform is synchronous or asynchronous (which may differ between HW and SW implementations of your MFT), you may need to use the basic (https://msdn.microsoft.com/en-us/library/windows/desktop/aa965264(v=vs.85).aspx) or the async (https://msdn.microsoft.com/en-us/library/windows/desktop/dd317909(v=vs.85).aspx) processing model.
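As a rough illustration of the synchronous model (error handling and media-type setup are omitted; for an encoder you typically set the output type before the input type, send MFT_MESSAGE_NOTIFY_BEGIN_STREAMING before the first frame, and skip the sample allocation below if the MFT reports MFT_OUTPUT_STREAM_PROVIDES_SAMPLES):

    #include <mfapi.h>
    #include <mftransform.h>
    #include <mferror.h>

    // Feed one uncompressed frame into a synchronous encoder MFT
    // and drain whatever compressed output it has ready.
    HRESULT EncodeOneFrame(IMFTransform *pEncoder, IMFSample *pInFrame)
    {
        HRESULT hr = pEncoder->ProcessInput(0, pInFrame, 0);
        if (FAILED(hr)) return hr;

        for (;;)
        {
            MFT_OUTPUT_STREAM_INFO info = {};
            pEncoder->GetOutputStreamInfo(0, &info);

            // Allocate an output sample of the size the MFT asked for.
            IMFSample *pOut = NULL;
            IMFMediaBuffer *pBuf = NULL;
            MFCreateSample(&pOut);
            MFCreateMemoryBuffer(info.cbSize, &pBuf);
            pOut->AddBuffer(pBuf);

            MFT_OUTPUT_DATA_BUFFER out = {};
            out.pSample = pOut;
            DWORD status = 0;
            hr = pEncoder->ProcessOutput(0, 1, &out, &status);

            if (hr == MF_E_TRANSFORM_NEED_MORE_INPUT)
            {
                // The encoder is buffering; feed it the next frame.
                pBuf->Release(); pOut->Release();
                return S_OK;
            }
            if (FAILED(hr)) { pBuf->Release(); pOut->Release(); return hr; }

            // ... out.pSample now holds compressed data for your sender ...

            pBuf->Release(); pOut->Release();
        }
    }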

From C++ image frames to an HTML5 <video> tag in the client browser

In my C++ application I have video image frames coming from a web camera.
I wish to send those image frames to an HTML5 <video> element for live playback of the camera feed. How can I do this?
For a starting point you are going to want to look into WebM and H.264/MPEG-4 AVC. Both of these technologies are used for HTML5 media streams. It used to be that Firefox only supported WebM while Safari and Chrome both supported H.264. I am not sure about their current state, but you will probably have to implement both.
Your C++ application will then have to implement a web server that can stream these formats on the fly, which may require significant work. If you choose this route, this Microsoft document may be of some use. Also, the WebM page has developer documentation. It is possible that H.264 must be licensed for a cost; WebM allows royalty-free usage.
If I am not mistaken, neither of these formats has to be completely downloaded in order to work, so you would just have to encode and flush the current frame over and over again.
Then, as far as the video tag in HTML5 goes, you just have to provide it the URLs your C++ server will respond to. Here is some documentation on that. You may also want to see if there is some service to mirror these streams, so as not to overload your application.
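In its simplest form that ends up as something like this (the URL is hypothetical; it is whatever streaming endpoint your C++ server exposes):

    <!-- Point the tag at the stream endpoint served by your C++ application -->
    <video src="http://your-server:8080/live.webm" autoplay></video>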
An easier way to stream your webcam could be simply to use FFmpeg.
Another useful document can be found at:
http://www.cecs.uci.edu/~papers/aspdac06/pdf/p736_7D-1.pdf
I am no expert, but I hope that at least helps you get your start.

What MPEG4 encoder library to use?

So I have a WIN32 app that records videos using DirectShow. Now I want to convert the .AVI files to .MP4.
I'd rather not use a custom filter in the source, since I don't want to have to register filters (admin rights needed). I also don't want to use a standalone application, since the conversion should be automated. Preferably I just want a library with a well-documented API, since I'm rather new to this; then I can use it from my app to convert the .AVI files when they are done being recorded. Can anyone point me in a direction, or comment on my method of choice?
I'd be most grateful for any help, and thanks in advance!
Because MPEG-4 codecs are not royalty free, finding a suitable encoder might not be as easy as you would think. Microsoft does not ship an MPEG-4 encoder with Windows, except for an H.264 (MPEG-4 Part 10) encoder in some editions of Windows 7, and only within Media Foundation (as opposed to DirectShow). If you are OK with being limited to those Windows 7 editions, Media Foundation might be a good option: MSDN offers samples that transcode file to file, and it is reasonably easy and well documented.
There are also third-party solutions and half-made libraries you can build an encoder on. There is FFmpeg, which offers an MPEG-4 Part 2 video encoder under the LGPL and MPEG-4 Part 10 through libx264 under the GPL; my understanding is that you might still be expected to pay royalties to MPEG-LA. FFmpeg might be a good option for converting file to file, because its command-line interface is well documented (as opposed to the libavformat/libavcodec APIs, which are not).
Another option is to use Windows Media codecs and compress into ASF/WMV files.
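If you go the FFmpeg route, a minimal command line looks something like this (file names are illustrative; the libx264 variant requires a GPL build of FFmpeg):

    ffmpeg -i input.avi -c:v mpeg4 -q:v 5 output.mp4
    ffmpeg -i input.avi -c:v libx264 -crf 23 output.mp4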
Libavcodec and FFmpeg -- an everything-to-everything media converter (a library plus a command-line application): http://ffmpeg.org/

How do I packetize a video frame with JRTP

I am trying to take a video frame that I have and packetize it into various RTP packets. I am using jrtplib and working in C++. Can this be done with this library? If so, how do I go about it?
Thank you,
First, know what codec you have (H.263, H.264, MPEG-2, etc.). Then find the IETF AVT RFC for packetizing that codec (RFC 3984 for H.264, for example). Then look for libraries or implementations of that RFC (and look in jrtp), or code it yourself.
jrtplib provides only basic RTP/RTCP functionality; you have to do any media-type-specific packetization yourself. If you look at the RTPPacket constructor, it takes payload data and payload length parameters (amongst others). The RTPPacketBuilder class could also be of interest to you.
If you decide to do this yourself, you need to read the corresponding RFCs and implement according to them, as jesup stated.
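To illustrate, here is a rough sketch of naive fragmentation on top of jrtplib's RTPSession (this does not implement any codec-specific RFC; the payload type 96, the 1400-byte payload limit, and the timestamp increment are illustrative values):

    #include <stdint.h>
    #include "rtpsession.h"

    using namespace jrtplib;  // newer jrtplib versions use this namespace

    // Split one encoded frame into MTU-sized RTP payloads and send them.
    // Assumes session.Create(...) has already been called successfully.
    void SendFrame(RTPSession &session, const uint8_t *frame, size_t len)
    {
        const size_t maxPayload = 1400;  // stay under a typical Ethernet MTU

        size_t offset = 0;
        while (offset < len)
        {
            size_t chunk = (len - offset > maxPayload) ? maxPayload : len - offset;
            bool last = (offset + chunk == len);

            // Marker bit set on the last packet of the frame; the timestamp
            // increment is applied once per frame, on the last fragment
            // (3600 = one frame at 25 fps with a 90 kHz RTP clock).
            session.SendPacket(frame + offset, chunk, 96, last, last ? 3600u : 0u);
            offset += chunk;
        }
    }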
FYI, the C++ live555 Streaming Media library handles packetization of many video formats for you, but it is also a lot more complex.