C++ video processing frame by frame

I am stuck with a project in which I am required to write a C++ program that reads every frame of a raw .yuv video file and calculates the signal-to-noise ratio.
I can't find where to start. Is there a guide, tutorial, or anything written on how to do this? How do I read a video and get its frames in C++?

Check out the ffmpeg libraries https://www.ffmpeg.org/about.html for extracting frames from a video stream.
There are other libraries that may also help with the image-analysis part, such as OpenCV, as well as Windows-specific APIs.
For measuring signal-to-noise ratio, you'll need a mathematical model for noise detection, such as autocorrelation.
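Since a raw .yuv file is headerless, you can also read the frames directly once you know the resolution and chroma subsampling. Below is a minimal sketch, under the assumptions (not stated in the question) that the file is YUV420p, that the resolution is known, and that "signal-to-noise ratio" means per-frame PSNR of the luma plane against a reference file; the file names and dimensions are placeholders:

```cpp
#include <cmath>
#include <cstdio>
#include <fstream>
#include <iostream>
#include <vector>

int main() {
    const int WIDTH = 352, HEIGHT = 288;             // adjust to your clip
    const size_t frameSize = WIDTH * HEIGHT * 3 / 2; // YUV420p: Y + U/4 + V/4
    const size_t lumaSize  = WIDTH * HEIGHT;

    std::ifstream ref("reference.yuv", std::ios::binary);   // placeholder names
    std::ifstream dist("distorted.yuv", std::ios::binary);
    std::vector<unsigned char> a(frameSize), b(frameSize);

    for (int frame = 0; ; ++frame) {
        ref.read(reinterpret_cast<char*>(a.data()), frameSize);
        dist.read(reinterpret_cast<char*>(b.data()), frameSize);
        if (ref.gcount() < (std::streamsize)frameSize ||
            dist.gcount() < (std::streamsize)frameSize)
            break;                                   // end of either file

        double mse = 0.0;                            // mean squared error, luma only
        for (size_t i = 0; i < lumaSize; ++i) {
            double d = double(a[i]) - double(b[i]);
            mse += d * d;
        }
        mse /= lumaSize;

        double psnr = (mse == 0.0) ? INFINITY
                                   : 10.0 * std::log10(255.0 * 255.0 / mse);
        std::cout << "frame " << frame << ": PSNR = " << psnr << " dB\n";
    }
    return 0;
}
```

The key point is that each YUV420p frame occupies exactly width * height * 3/2 bytes, which is what lets the loop step through the file frame by frame without any container parsing.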

Related

C++ ffmpeg real-time video transmission

I am a student currently working on my final project. Our project focuses on research into a new type of network coding. My task is to build a real-time video transmission system to test the network coding. I have learned some ffmpeg and OpenCV and have finished a C++ program that can divide a video into frames and send it frame by frame. However, done this way, the transmitted data (the frames) is considerably larger than the original video file. My professor advised me to find the keyframes and the inter-frame differences of the video (MJPEG format), so that only the keyframes and inter-frame diffs are transmitted instead of all the frames with their large amount of redundancy, thereby reducing the transmitted data. I have no idea how to do this in C++ with ffmpeg or OpenCV. Can anyone give any advice?
For my old program, please refer to here: C++ Video streaming and transmission.
I would recommend against using ffmpeg/libav* at all, and instead recommend using libx264 directly. By using x264 you can have greater control over NALU slice sizes, as well as lower encoder latency by utilizing callbacks.
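For illustration, here is a rough sketch of what driving libx264 directly can look like. The preset, tune, and slice-size values are assumptions for a low-latency network use case, not settings from the answer, and error handling is omitted; consult x264.h for the authoritative API:

```cpp
#include <x264.h>
#include <cstdint>

// Open an encoder tuned for low-latency streaming.
x264_t* open_encoder(int width, int height, int fps) {
    x264_param_t param;
    // "zerolatency" disables lookahead and frame reordering for low delay.
    x264_param_default_preset(&param, "veryfast", "zerolatency");
    param.i_width   = width;
    param.i_height  = height;
    param.i_fps_num = fps;
    param.i_fps_den = 1;
    param.i_csp = X264_CSP_I420;
    param.b_repeat_headers = 1;      // emit SPS/PPS with each keyframe
    param.b_annexb = 1;              // start-code delimited NAL units
    param.i_slice_max_size = 1400;   // cap NALU size, e.g. to fit a UDP MTU
    x264_param_apply_profile(&param, "baseline");
    return x264_encoder_open(&param);
}

// Encode one I420 frame; encoded NAL units come back through *nal.
// Returns the encoded size in bytes, or a negative value on error.
int encode_frame(x264_t* enc, uint8_t* i420, int width, int height,
                 int64_t pts, x264_nal_t** nal, int* nal_count) {
    x264_picture_t pic, pic_out;
    x264_picture_init(&pic);
    pic.i_pts = pts;
    pic.img.i_csp   = X264_CSP_I420;
    pic.img.i_plane = 3;
    pic.img.plane[0] = i420;                                  // Y plane
    pic.img.plane[1] = i420 + width * height;                 // U plane
    pic.img.plane[2] = pic.img.plane[1] + width * height / 4; // V plane
    pic.img.i_stride[0] = width;
    pic.img.i_stride[1] = width / 2;
    pic.img.i_stride[2] = width / 2;
    return x264_encoder_encode(enc, nal, nal_count, &pic, &pic_out);
}
```

Capping i_slice_max_size is what gives the per-NALU size control mentioned above, since each slice then fits in a single network packet.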
Two questions whose answers may already help you:
How are you interfacing with ffmpeg from C++? "ffmpeg" generally refers to the command-line tool; from C++ you generally use the individual libraries that make up FFmpeg. You should use libavcodec to encode your frames and possibly libavformat to packetize them into a container format.
Which codec do you use?
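For reference, a hedged sketch of the libavcodec encode loop mentioned above, using the send/receive API from more recent FFmpeg releases (older releases used avcodec_encode_video2 instead); the codec choice and GOP size are placeholders:

```cpp
extern "C" {
#include <libavcodec/avcodec.h>
}

AVCodecContext* open_h264_encoder(int width, int height, int fps) {
    const AVCodec* codec = avcodec_find_encoder(AV_CODEC_ID_H264);
    AVCodecContext* ctx = avcodec_alloc_context3(codec);
    ctx->width     = width;
    ctx->height    = height;
    ctx->time_base = AVRational{1, fps};
    ctx->framerate = AVRational{fps, 1};
    ctx->pix_fmt   = AV_PIX_FMT_YUV420P;
    ctx->gop_size  = 25;             // one keyframe per 25 frames (illustrative)
    avcodec_open2(ctx, codec, nullptr);
    return ctx;
}

// Push one frame in, drain any packets that come out.
void encode(AVCodecContext* ctx, AVFrame* frame /* or nullptr to flush */) {
    AVPacket* pkt = av_packet_alloc();
    avcodec_send_frame(ctx, frame);
    while (avcodec_receive_packet(ctx, pkt) == 0) {
        // pkt->data / pkt->size hold one compressed packet;
        // hand it to libavformat or your network layer here.
        av_packet_unref(pkt);
    }
    av_packet_free(&pkt);
}
```

The encoder itself produces keyframes plus difference-coded frames, which is exactly the redundancy reduction the professor asked for; you transmit the packets rather than the raw frames.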

Writing video to memory in OpenCV 2

We're currently developing some functionality for our program that needs OpenCV. One of the ideas being tossed around is the use of a "buffer" that keeps one minute of video data in memory, from which we extract a roughly 13-second video file on every event trigger.
Currently we don't have enough experience with OpenCV, so we don't know whether this is possible. Looking at the documentation, the only functions that write to memory are imencode and imdecode, but those operate on images. If we can find a way to write sequences of images to a video file, that would be neat, but for now our idea is to use a video buffer.
We're also working against the OpenCV 2 API.
TL;DR We want to know if it is possible to write a portion of a video to memory.
In OpenCV, every video is treated as a collection of frames (images). Depending on your camera's FPS you can capture frames periodically and fill a buffer with them, destroying the oldest frame (the one taken a minute earlier) as you go; a FIFO data structure achieves exactly that. Getting a 13-second sample is then easy: jump to the desired frame and write 13 * FPS frames sequentially to a video file (see the sketch below).
But there will be some sync and timing problems, as far as I know and as far as I've used OpenCV.
Here is the link to the OpenCV documentation on video I/O. The last chunk of code in particular is what you'd use for writing.
TL;DR: There is no video; there are sequential images with small differences between them. So you need to treat them as such.
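A minimal sketch of the FIFO idea above, written against the OpenCV 2 API; the camera index, codec, output file name, and the eventTriggered() hook are placeholders, not OpenCV facilities:

```cpp
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <deque>

// Placeholder for your own trigger logic.
bool eventTriggered() { return false; }

int main() {
    cv::VideoCapture cap(0);                      // camera index, or a file path
    double fps = cap.get(CV_CAP_PROP_FPS);
    if (fps <= 0) fps = 30;                       // cameras often report 0 here
    const size_t bufferCap = (size_t)(60 * fps);  // keep one minute of frames

    std::deque<cv::Mat> buffer;                   // FIFO of the latest frames
    cv::Mat frame;
    while (cap.read(frame)) {
        buffer.push_back(frame.clone());          // clone: cap reuses its buffer
        if (buffer.size() > bufferCap)
            buffer.pop_front();                   // destroy the oldest frame

        if (eventTriggered()) {
            // Write the most recent 13 seconds out as a video file.
            size_t n = std::min(buffer.size(), (size_t)(13 * fps));
            cv::VideoWriter out("clip.avi",
                                CV_FOURCC('M','J','P','G'), fps,
                                buffer.back().size());
            for (size_t i = buffer.size() - n; i < buffer.size(); ++i)
                out.write(buffer[i]);
        }
    }
    return 0;
}
```

Note the clone() call: VideoCapture reuses its internal frame buffer, so storing the Mat without a deep copy would leave the whole deque pointing at the same pixels.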

How to find an object in video using OpenCV

To track an object across video frames, I first extract the image frames from the video and save them to a folder. Then I am supposed to process those images to find the object. I do not know whether this is a practical approach, since detection algorithms seem to do all of this in one step. Is this correct?
Well, your approach will consume a lot of disk space, depending on the size of the video and of the frames, and you will spend a considerable amount of time reading frames back from disk.
Have you tried performing real-time video processing instead? If your algorithm is not too slow, these posts show the things that you need to do:
This post demonstrates how to use the C interface of OpenCV to convert frames captured by the webcam to grayscale on the fly and display them on the screen;
This post shows a simple way to detect a square in an image using the C++ interface;
This post is a slight variation of the one above, and shows how to detect a paper sheet;
This thread shows several different ways to perform advanced square detection.
I trust you are capable of converting code from the C interface to the C++ interface.
There is no point in storing the frames of a video if you're using OpenCV, as it has really handy methods for capturing frames from a camera or a stored video in real time.
In this post you have an example code for capturing frames from a video.
Then, if you want to detect objects in those frames, you need to process each frame with a detection algorithm. OpenCV ships with some sample code on the topic. You could try the SIFT algorithm to detect a given picture, for example.
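For completeness, here is a minimal capture-and-process loop of the kind these posts describe, using the OpenCV 2 C++ interface; the grayscale conversion is just a stand-in for whatever detection step you run per frame, and the file name is a placeholder:

```cpp
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap("input.avi");       // or cv::VideoCapture cap(0) for a webcam
    if (!cap.isOpened()) return 1;

    cv::Mat frame, gray;
    while (cap.read(frame)) {                // grabs and decodes the next frame
        cv::cvtColor(frame, gray, CV_BGR2GRAY);
        // run your detection algorithm on `gray` (or `frame`) here
        cv::imshow("frames", gray);
        if (cv::waitKey(30) >= 0) break;     // ~30 ms delay; any key quits
    }
    return 0;
}
```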

Best way to load a video and grab images using C++

I am looking for a fast way to load a video file and create images from it at certain intervals (every second, every minute, every hour, etc.).
I tried using DirectShow, but it ran too slowly for me to open the video file, seek to a certain location, grab the data, and save it out to an image, even with the reference clock disabled. I tried OpenCV, but it has trouble opening an AVI file unless I know the exact codec information; if there is a way to get the codec information out of OpenCV, I may give it another shot. I tried FFmpeg, but I don't have as much control over it as I would wish.
Any advice would be greatly appreciated. This is being developed on a Windows box, since it has to be hosted on one.
MPEG-4 is not an intra-only format, so you can't just jump to an arbitrary frame and decode it on its own; most frames only encode the differences from one or more other frames. I suspect your decoding is slow because when you land on such a frame, several other frames that it depends on have to be decoded first.
One way to improve performance would be to determine which frames are keyframes (or sometimes also called 'sync' points) and limit your decoding to those frames, since these can be decoded on their own.
I'm not very familiar with DirectShow capabilities, but I would expect it has some API to expose sync points.
Also, I should mention that the QuickTime SDK on Windows is possibly another good option for decoding frames from movies. You should first check that your AVI files play correctly in QuickTime Player. The QT SDK does expose sync points; see the section "Finding Interesting Times" in the QT SDK documentation.
ffmpeg's libavformat might work for ya...
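If you go the libavformat route, the keyframe-limited approach described above looks roughly like this: av_seek_frame with AVSEEK_FLAG_BACKWARD lands on the sync point at or before the requested time, which can then be decoded on its own. Setup, teardown, and error handling are omitted (see FFmpeg's demuxing/decoding examples), and the function name here is just illustrative:

```cpp
extern "C" {
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
}

// Seek near `seconds` and decode the first frame that comes out.
// `fmt` and `dec` are an already-opened demuxer and video decoder.
bool grab_frame_at(AVFormatContext* fmt, AVCodecContext* dec,
                   int video_stream, double seconds, AVFrame* out) {
    AVStream* st = fmt->streams[video_stream];
    int64_t ts = (int64_t)(seconds / av_q2d(st->time_base));
    av_seek_frame(fmt, video_stream, ts, AVSEEK_FLAG_BACKWARD); // keyframe <= ts
    avcodec_flush_buffers(dec);               // discard half-decoded state

    AVPacket* pkt = av_packet_alloc();
    bool got = false;
    while (!got && av_read_frame(fmt, pkt) >= 0) {
        if (pkt->stream_index == video_stream) {
            avcodec_send_packet(dec, pkt);
            got = (avcodec_receive_frame(dec, out) == 0);
        }
        av_packet_unref(pkt);
    }
    av_packet_free(&pkt);
    return got;
}
```

Because the seek lands on a keyframe, the very first decodable frame needs no dependent frames, which avoids the slow decode chains described above.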

Extract sound from video

I'm currently doing my multimedia assignment, where I have to create a new video using one video as the foreground and another as the background. OpenCV lets me do just that: extract the images from each frame of a video, process them, and put the results back into video form. However, OpenCV is only a computer vision library. Is there a library that lets me do the same for sound? I'd like to extract the sound (music, actually) from one of the videos I'm using and put it into the final video.
You can use the libavcodec library, which is part of FFmpeg.
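As an illustration, the audio track can be pulled out of a video without re-encoding by remuxing (stream copy) with libavformat rather than decoding with libavcodec; the file names here are placeholders and error checks are trimmed, much as in FFmpeg's own remuxing example:

```cpp
extern "C" {
#include <libavformat/avformat.h>
}

int main() {
    AVFormatContext* in = nullptr;
    avformat_open_input(&in, "movie.avi", nullptr, nullptr);
    avformat_find_stream_info(in, nullptr);
    int audio = av_find_best_stream(in, AVMEDIA_TYPE_AUDIO, -1, -1, nullptr, 0);

    // Matroska audio (.mka) can hold most codecs without transcoding.
    AVFormatContext* out = nullptr;
    avformat_alloc_output_context2(&out, nullptr, nullptr, "sound.mka");
    AVStream* ost = avformat_new_stream(out, nullptr);
    avcodec_parameters_copy(ost->codecpar, in->streams[audio]->codecpar);
    ost->codecpar->codec_tag = 0;   // let the muxer pick its own tag

    avio_open(&out->pb, "sound.mka", AVIO_FLAG_WRITE);
    avformat_write_header(out, nullptr);

    AVPacket pkt;
    while (av_read_frame(in, &pkt) >= 0) {
        if (pkt.stream_index == audio) {            // keep audio packets only
            pkt.stream_index = 0;
            av_packet_rescale_ts(&pkt, in->streams[audio]->time_base,
                                 ost->time_base);
            av_interleaved_write_frame(out, &pkt);
        }
        av_packet_unref(&pkt);
    }
    av_write_trailer(out);
    avformat_close_input(&in);
    return 0;
}
```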
Try Tuna Audio Extracter (http://github.com/tuna74/TunaAudioExtracter). You can use the extraction part of that program.