Reading a video stream and writing it to a buffer - C++

I'm writing a program in C++ using the DirectShow API in order to accelerate the video encoding part. This program should read the video stream from a video capture card and pass it to the encoder, without the intermediate raw data file which is usually passed.
But the encoder is not my software; in fact, it was bought.
This encoder used to accept a raw data file with its details and give an encoded file as output. So I've decided to read the video stream from the video capture card, save it to a buffer, and when the size of the buffer is appropriate (what counts as appropriate is specified by the encoder) pass it to the encoder.
But I'm new to DirectShow, and to multimedia programming as a whole, so what I'm asking for is advice about which function to use to read a stream, a complete solution, or any useful links.
Thanks in Advance
EDIT 1: What I meant by "accelerate" is to read the video stream into the encoder directly, instead of creating an intermediate YUV file and making the encoder read that YUV file.
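EDIT 2: The approach I'm currently considering is the Sample Grabber filter with a callback that copies every sample into my own buffer. A rough sketch of just the callback (ISampleGrabberCB lives in the deprecated qedit.h; the class name and the buffer handling are only my placeholders):

#include <windows.h>
#include <dshow.h>
#include <qedit.h>      // ISampleGrabberCB (deprecated header, may need an older SDK)
#include <vector>

// Copies every captured sample into an in-memory buffer; when the buffer is
// large enough it can be handed to the encoder.
class BufferGrabberCB : public ISampleGrabberCB
{
    std::vector<BYTE> m_buffer;
public:
    // Minimal IUnknown: the object is owned by the application, not ref-counted.
    STDMETHODIMP_(ULONG) AddRef()  { return 2; }
    STDMETHODIMP_(ULONG) Release() { return 1; }
    STDMETHODIMP QueryInterface(REFIID riid, void **ppv)
    {
        if (riid == IID_IUnknown || riid == IID_ISampleGrabberCB) { *ppv = this; return S_OK; }
        *ppv = NULL;
        return E_NOINTERFACE;
    }

    // Called for every sample when ISampleGrabber::SetCallback(this, 1) is used.
    STDMETHODIMP BufferCB(double /*SampleTime*/, BYTE *pBuffer, long BufferLen)
    {
        m_buffer.insert(m_buffer.end(), pBuffer, pBuffer + BufferLen);
        // TODO: when m_buffer reaches the size the encoder expects, pass it on.
        return S_OK;
    }
    STDMETHODIMP SampleCB(double, IMediaSample *) { return E_NOTIMPL; }
};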

Related

Create I Frame out of P and B frames

I've written a C++ converter based on FFmpeg which can receive a link to an HLS stream and convert it into a local .mp4 video. So far, so good; the converter works like a charm, no questions about that.
PROBLEM: No matter what input source I provide to the converter, at the end of the conversion I need to receive a video with key-frames ONLY. I need such a video for perfect seeking, forward and reverse.
It's a well-known fact that dependent video frames (P and B) rely on their reference frame (the I-frame), because that frame contains the full pixel map. Based on that, we should be able to recreate an I-frame for each P and B frame by merging its data with its I-frame. That's why a command like ffmpeg -i video.mp4 output%4d.jpg works.
QUESTION: How can I implement an algorithm for merging frames in order to end up with a key-frames-ONLY video? What kind of quirks do I need to know about when merging the data of AVPackets?
Thanks.
You cannot "merge" P and B-frames of a compressed stream (e.g. one encoded with the H.264 codec) to obtain I-frames.
What ffmpeg does with
ffmpeg -i video.mp4 output%4d.jpg
is decode each frame (it needs to start from an I-frame and then decode all subsequent P and B-frames in the stream), compress each one to JPEG, and output a JPEG image for each frame in the original input stream.
If you want to convert an input stream with P/B frames to an intra-only stream (with all I-frames), you need to transcode the stream.
That means decode all frames from the original stream and encode them back to an intra-only stream.
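For example, with the ffmpeg command-line tool, such a transcode to an intra-only file might look like this (assuming the libx264 encoder is available; -g 1 forces a GOP size of one so every output frame is a key-frame, and the output name is just a placeholder):

ffmpeg -i video.mp4 -c:v libx264 -g 1 -keyint_min 1 -c:a copy video_intra.mp4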

Stream OpenGL framebuffer over HTTP (via FFmpeg)

I have an OpenGL application whose rendered images need to be streamed over the internet to mobile clients. Previously, it sufficed to simply record the rendering into a video file, which is already working; now this should be extended to streaming.
What is working right now:
Render a scene to an OpenGL framebuffer object
Capture the FBO content using NvIFR
Encode it to H.264 using NvENC (no CPU round trip required)
Download the encoded frame to host memory as a byte array
Append this frame to a video file
None of these steps involves FFmpeg or any other library so far. I now want to replace the last step with "Stream the current frame's byte array over the internet" and I assume that using FFmpeg and FFserver would be a reasonable choice for this. Am I correct? If not, what would be the proper way?
If so, how do I approach this within my C++ code? As pointed out, the frame is already encoded. Also, there is no sound or other stuff, simply an H.264-encoded frame as a byte array that is updated irregularly and should be converted into a steady video stream. I assume that this would be FFmpeg's job and that the subsequent streaming via FFserver would be simple from there. What I don't know is how to feed my data to FFmpeg in the first place, as all FFmpeg tutorials I found (in a non-exhaustive search) work on a file or a webcam/capture device as the data source, not volatile data in main memory.
The file mentioned above that I am already able to create is a C++ file stream to which I append each single frame, meaning that different frame rates of the video and the rendering are not treated correctly. This also needs to be taken care of at some point.
Can somebody point me in the right direction? Can I forward data from my application to FFmpeg to build a proper video feed without writing to the hard disk? Tutorials are greatly appreciated. By the way, FFmpeg/FFserver is not mandatory. If you have a better idea for streaming OpenGL framebuffer contents, I'm eager to know.
You can feed the ffmpeg process readily encoded H.264 data (-f h264) and tell it to simply copy the stream into the output multiplexer (-c:v copy). To actually get the data into ffmpeg, just launch it as a child process with a pipe connected to its stdin and specify stdin as the input source:
FILE *ffmpeg_in = popen("ffmpeg -f h264 -i /dev/stdin -c:v copy ...", "w");
You can then write your encoded H.264 stream to ffmpeg_in.
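In code that might look roughly like this (encodedFrame and encodedSize are placeholder names for your NvENC output buffer; error handling omitted):

// Push one encoded H.264 access unit into the ffmpeg child process.
size_t written = fwrite(encodedFrame, 1, encodedSize, ffmpeg_in);
if (written != encodedSize)
    /* ffmpeg exited or the pipe broke */;
fflush(ffmpeg_in);      // keep latency low for live streaming

// When rendering stops, close the pipe so ffmpeg can finalize the stream.
pclose(ffmpeg_in);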

Compress Raw buffer data of video (C++)

I have raw video buffer data which I need to stream over the net as and when requested by clients. This data is very large, so I will need to compress it and save it at my server location.
One way I understand of doing this is to convert the data into an ".avi" video file, save it, and then stream it frame by frame as and when requested.
I have used the VideoWriter class of OpenCV to convert the buffer to a Mat and then to an ".avi" file with MPEG-4 encoding.
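For reference, that step looks roughly like this in my code (using the OpenCV 3.x API; width, height and fps are placeholders for my actual stream parameters, and I assume the buffer holds tightly packed BGR24 data):

#include <opencv2/opencv.hpp>

// Wrap the raw buffer in a Mat (no copy) and append it to an MPEG-4 compressed AVI.
void writeFrame(cv::VideoWriter &writer, unsigned char *rawBgr, int width, int height)
{
    cv::Mat frame(height, width, CV_8UC3, rawBgr);
    writer.write(frame);
}

int main()
{
    const int width = 640, height = 480;
    const double fps = 25.0;
    cv::VideoWriter writer("compressed.avi",
                           cv::VideoWriter::fourcc('M', 'P', '4', 'V'),
                           fps, cv::Size(width, height));
    // ... call writeFrame() for every raw buffer received ...
    return 0;
}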
However, I want to know if there is any way I can compress the raw buffer data and save it in a file. Can anyone tell me if this is possible?
Thank You.

Use DirectShow to capture to an AVI from a non DirectShow source

This may be a dumb question, but I'm having a hard time conceptualizing what I need to do here... In the past I've used DirectShow to connect to a camera and capture an AVI using a source filter, AVI Mux, compression filter, run the graph, etc. Piece of cake.
In this particular case I am getting notified when my non-DirectShow camera driver has a buffer ready. I get the notification, then go and grab the BYTE* and render it using GDI. I now also need to create an AVI from these buffers.
Conceptually it makes sense for me to use something like VFW and write to an AVI stream every time I receive a buffer; of course VFW is old technology, and I was also having some problems getting that to work (as I posted in a different forum).
How can I push these buffers into a DirectShow AVI Mux and write to a file? Do I have to create my own source filter to receive these buffers, then add my source filter and the AVI Mux to a filter graph?
Thanks for any tips
So you have a BYTE* with video frame data. It is very close to what you supposed. Your choices are either to use VFW's AVIFileOpen and friends to write into an AVI file, or to inject the data into a DirectShow pipeline. To do the latter, you typically make your own PushSource-like filter and push the video frames from there (through the AVI Mux to the File Writer).
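For the first option, a bare-bones sketch of the VFW route looks roughly like this (uncompressed RGB frames, error checks omitted; width, height and fps are placeholders, link against vfw32.lib):

#include <windows.h>
#include <vfw.h>        // AVIFile* / AVIStream* API

// Append raw 24-bit RGB frames to an AVI file via the old VFW API (no compression).
void writeAvi(const char *fileName, BYTE **frames, int frameCount,
              int width, int height, int fps)
{
    AVIFileInit();

    PAVIFILE file = NULL;
    AVIFileOpenA(&file, fileName, OF_WRITE | OF_CREATE, NULL);

    BITMAPINFOHEADER bih = { sizeof(BITMAPINFOHEADER) };
    bih.biWidth = width; bih.biHeight = height;
    bih.biPlanes = 1; bih.biBitCount = 24; bih.biCompression = BI_RGB;
    bih.biSizeImage = width * height * 3;

    AVISTREAMINFOA si = { 0 };
    si.fccType = streamtypeVIDEO;
    si.dwScale = 1;
    si.dwRate = fps;                       // frames per second = dwRate / dwScale
    si.dwSuggestedBufferSize = bih.biSizeImage;
    SetRect(&si.rcFrame, 0, 0, width, height);

    PAVISTREAM stream = NULL;
    AVIFileCreateStreamA(file, &stream, &si);
    AVIStreamSetFormat(stream, 0, &bih, sizeof(bih));

    for (int i = 0; i < frameCount; ++i)
        AVIStreamWrite(stream, i, 1, frames[i], bih.biSizeImage,
                       AVIIF_KEYFRAME, NULL, NULL);

    AVIStreamRelease(stream);
    AVIFileRelease(file);
    AVIFileExit();
}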

How to read .avi files C++

I want to read in an .avi video file for a program that I am making. I have the file location saved as a string. Are there any good tutorials on using .avi files in C++, or does anyone know how to read one in? Is it the same as reading normal files?
I have a previously asked SO question that goes into more detail, but here is what I want to do:
I am making a program that will detect faces (through OpenCV). As of now I have been given a video processor program that will detect each face on a frame and return the frame as an image along with the CvRect of each face. I want to take these faces and test them to validate that they are all actually faces.
After I have all the faces (tested) I want to then take the images and test them together. I test the faces on each frame for size and distance changes. If the faces pass this for a frame length of two seconds, then I want to crop the face and make it the subject of each frame.
After each frame is cropped I then want to save the new video file for the user.
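The cropping step I have in mind would be something like this (faceRect is a placeholder for the rectangle the detector gives me, already converted to a cv::Rect and clamped to the frame):

#include <opencv2/opencv.hpp>

// Crop the detected face out of the full frame so it can become the new frame.
cv::Mat cropFace(const cv::Mat &frame, const cv::Rect &faceRect)
{
    // clone() detaches the crop from the original frame's memory
    return frame(faceRect).clone();
}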
Hopefully that helps. If anyone needs a better explanation please let me know.
First of all, a little background.
What is AVI?
AVI stands for Audio Video Interleave. It is a special case of RIFF (the Resource Interchange File Format). AVI was defined by Microsoft and is one of the most common container formats for audio/video data on Windows.
I assume you want to read an AVI file and decode the compressed video frames it contains. An AVI file is just like any other file, and you can use fread() (in C) or iostreams (in C++) to open it and read its contents. But those contents are video frames in a compressed format; the compression allows larger video content to be packed efficiently into less memory. To make any sense of this compressed data you have to decode it: study the specification that describes how the AVI container and the codec's encoding work, then extract and decode the frames. This raw video data, when fed to a video device, will be displayed as video.
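For illustration, here is a tiny sketch that opens an AVI "like a normal file" and only inspects the 12-byte RIFF header; everything after that header is container/codec data that you would still have to parse and decode (the file name is a placeholder):

#include <cstdio>
#include <cstring>
#include <cstdint>

int main()
{
    FILE *f = std::fopen("video.avi", "rb");
    if (!f) return 1;

    char riff[4], form[4];
    std::uint32_t size = 0;
    std::fread(riff, 1, 4, f);     // "RIFF"
    std::fread(&size, 4, 1, f);    // file size minus 8, little-endian
    std::fread(form, 1, 4, f);     // "AVI " for an AVI file
    std::fclose(f);

    bool isAvi = std::memcmp(riff, "RIFF", 4) == 0 &&
                 std::memcmp(form, "AVI ", 4) == 0;
    return isAvi ? 0 : 2;
}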
It seems you are staying within OpenCV, so things are easy. If OpenCV is compiled properly it is capable of delegating I/O and encoding/decoding to other libraries, QuickTime and others for example, but the best choice is ffmpeg. You open, read, and decode everything using the OpenCV API, which gives you the video frame by frame.
Make sure your OpenCV is compiled with ffmpeg support and then read the OpenCV tutorial on how to read/write AVI files. It's really easy.
Getting OpenCV built with ffmpeg support might be hard, though. You might want to switch to an older version of OpenCV if you can't get ffmpeg working with the current one.
Personally, I would not spend time trying to read the video yourself; delegate the task to OpenCV. That's how it is supposed to be used.
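For example, a minimal read loop with the OpenCV C++ API looks like this (the file name is a placeholder for the string you already have; it only works if OpenCV was built with a suitable backend such as ffmpeg):

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap("faces.avi");     // your stored file location goes here
    if (!cap.isOpened())
        return 1;                          // file missing or no suitable decoder

    cv::Mat frame;
    while (cap.read(frame))                // decodes one frame per iteration
    {
        // run the face detection / cropping on 'frame' here
    }
    return 0;
}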