Copy frames from a video file that is being recorded - C++

I start a recording session from the webcam, and then I want to create another file that begins with the last 200 frames already recorded and continues with the realtime frames coming from the webcam.
I wrote this to read frames from the "Blob.avi" file that is still being recorded:
VideoWriter writeVideo;
VideoCapture savedVideo;
savedVideo.open("E:\\Blob.avi");

// total frames written so far, then jump back 200 frames
int startFrame = savedVideo.get(CV_CAP_PROP_FRAME_COUNT);
cout << " startFrame:" << startFrame;
startFrame -= 200;
savedVideo.set(CV_CAP_PROP_POS_FRAMES, startFrame);
cout << "-" << startFrame << " ms: " << savedVideo.get(CV_CAP_PROP_POS_MSEC) << endl;

char filename[150];
sprintf_s(filename, 150, "E:\\FramesTest.avi");
int xvid = CV_FOURCC('X', 'V', 'I', 'D');
writeVideo.open(filename, xvid, 10, Size(640, 480));

// copy the last 200 recorded frames into the new file
Mat tempPic;
for (int i = 0; i < 200; i++) {
    savedVideo >> tempPic;
    writeVideo.write(tempPic);
}
// then add realtime frames.
The problem is that it cannot read the total number of frames of the file that is still being recorded: CV_CAP_PROP_FRAME_COUNT returns 0, and so does the millisecond position. As a result, the new file starts recording the realtime frames without first copying the 200 frames from the other file.
I think the problem is that the video file is still in use.
So how can I copy some frames from a video file that has not been released yet?
EDIT:
I have to add at least 30 ms of delay:
for (int i = 0; i < 200; i++) {
    savedVideo >> tempPic;
    writeVideo.write(tempPic);
    Sleep(30);
}
This way it works, even though it starts from frame 0 minus 200 instead of subtracting 200 from the last frame.
But it is very slow: I lose about 10 seconds of realtime capture, partly because each frame is re-encoded.
So I think it would be better to record the realtime capture first and then add the 200 frames at the beginning of the newly created file.
How can I achieve this? Possibly without re-encoding, ideally at file level.
The purpose of all this is to create a video whenever blob detection triggers, including a few seconds of video from before the blob was detected.
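An approach that avoids reading the in-use file altogether is to keep the last 200 frames in a memory buffer and flush them to the new file the moment the blob triggers. A minimal sketch of that idea follows; the codec, fps and frame size are taken from the code above, while the buffer handling and the RunBlobDetection() placeholder are assumptions:
#include <deque>
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;

int main()
{
    VideoCapture cam(0);                         // webcam
    deque<Mat> preBuffer;                        // rolling buffer of pre-trigger frames
    const size_t bufferSize = 200;
    VideoWriter writeVideo;

    Mat frame;
    while (cam.read(frame))
    {
        bool blobDetected = false;               // = RunBlobDetection(frame); (hypothetical)

        if (!writeVideo.isOpened())
        {
            // not recording yet: keep only the most recent 200 frames in memory
            preBuffer.push_back(frame.clone());
            if (preBuffer.size() > bufferSize)
                preBuffer.pop_front();

            if (blobDetected)
            {
                writeVideo.open("E:\\FramesTest.avi", CV_FOURCC('X', 'V', 'I', 'D'),
                                10, Size(640, 480));
                for (size_t i = 0; i < preBuffer.size(); i++)
                    writeVideo.write(preBuffer[i]);  // flush the pre-trigger frames
                preBuffer.clear();
            }
        }
        else
        {
            writeVideo.write(frame);             // then add realtime frames
        }
    }
    return 0;
}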

Related

Audio and Generated Video Keep Desyncing Despite Audio Being 48 kHz

So I'm writing a C++ program that will take a wav file, generate a visualization, and export the video alongside the audio using ffmpeg (via a pipe). I've been able to get output to ffmpeg just fine, and a video with the visualization and audio is created by ffmpeg.
The problem is the video and audio are desyncing. The video is just too fast and ends before the song is completed (the video file is the correct length; the waveform just flatlines and ends, indicating that ffmpeg reached the end of the video and is just using the last frame it received until the audio ends). So I'm not sending enough frames to ffmpeg.
Below is a truncated version of the source code:
// Example code
int main()
{
    // LoadAudio();
    uint32_t frame_max = audio.sample_rate / 24; // 24 frames per second
    uint32_t frame_counter = 0;
    // InitializePipe2FFMPEG();
    // (Left channel and right channel are always equal in size)
    for (uint32_t i = 0; i < audio.left_channel.size(); ++i)
    {
        // UpdateWaveform4Image();
        if (frame_counter % frame_max == 0)
        {
            // DrawImageAndSend2Pipe();
            frame_counter = 1;
        }
        else
        {
            ++frame_counter;
        }
    }
    // FlushAndClosePipe();
    return 0;
}
The commented-out functions are fake and irrelevant. I know this because "UpdateWaveform4Image()" updates the waveform used to generate the image every sample. (I know that's inefficient, but I'll worry about optimization later.) The waveform is a std::vector in which each element stores the y-coordinate of each sample. It has no effect on when the program will generate a new frame for the video.
Also, ffmpeg is set to output 24 frames per second--trust me, I thought that was the problem too, because by default ffmpeg outputs at 25 fps.
My line of thinking for the modulus check is that frame_counter is incremented every sample. frame_max equals 2000 because 48000 / 24 = 2000. I know the audio is clocked at 48kHz because I created the file myself. So it SHOULD generate a new image every 2000 samples.
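As a standalone sanity check of that arithmetic (a throwaway sketch, using only the values stated above), counting emitted frames this way does come out to 24 per second of audio:
// Throwaway sanity check: how many frames does the modulus scheme emit?
#include <cstdint>
#include <iostream>

int main()
{
    const uint32_t sample_rate = 48000;                   // 48 kHz audio
    const uint32_t fps = 24;                              // target video frame rate
    const uint32_t samples_per_frame = sample_rate / fps; // 2000 samples per frame

    const uint32_t seconds = 180;                         // e.g. a 3-minute track
    const uint32_t total_samples = sample_rate * seconds;

    uint32_t frames_emitted = 0;
    for (uint32_t i = 0; i < total_samples; ++i)
        if (i % samples_per_frame == 0)                   // one frame every 2000 samples
            ++frames_emitted;

    // expect seconds * fps = 4320 frames for 180 s of audio
    std::cout << frames_emitted << " frames for " << seconds << " s of audio\n";
    return 0;
}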
Here is a link to the output video: [output]
Any advice would be helpful.
EDIT: Skip to 01:24 to see the waveform flatline.

OpenCV C++: How to split a video into several parts by time?

I would like to ask how to split a video (.mp4) into several videos by time using OpenCV (C++). For example, I have a 10-second video and I want to create two videos from it: the first one contains the frames of the original video between 0 and 5 seconds, and the second one contains the frames between 6 and 10 seconds.
Does anybody know the answer?
Just read the inputVideo and calculate how many frames you need.
Then write that number of frames to the first output video and the rest to the second.
Something like this should work:
for (;;) // read frames until the input ends
{
    inputVideo >> src;            // read
    if (src.empty()) { break; }   // check if at end

    if (countFirstVideo++ < myDesignatedSize)
    {
        outputVideo1 << src;      // first part of the clip
    }
    else
    {
        outputVideo2 << src;      // remaining frames
    }
}
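To turn a split point in seconds into myDesignatedSize, the source frame rate can be read from the capture. A small sketch (the 5-second split point is just the example from the question):
// hypothetical helper: how many frames make up the first part of the clip
double fps = inputVideo.get(CV_CAP_PROP_FPS);        // frame rate of the source video
int splitSeconds = 5;                                // split point from the question
int myDesignatedSize = cvRound(fps * splitSeconds);  // frames belonging to the first output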

OpenCV VideoCapture sometimes returns blank frames

I am using the following code to capture video frames from a USB webcam with OpenCV 3 in MS VC++ 2012. The problem is that sometimes I can display the captured frames at 30 fps, but sometimes I get black frames with a very low fps (or a high delay). In other words, the program behaves randomly. Do you know how I can solve this problem? I tried different solutions suggested on Stack Overflow and elsewhere, but none of them solved the problem.
VideoCapture v(1);
v.set(CV_CAP_PROP_FRAME_WIDTH, 720);
v.set(CV_CAP_PROP_FRAME_HEIGHT, 480);
if (!v.isOpened()) {
    cout << "Error opening video stream or file" << endl;
    return;
}
Mat Image;
namedWindow("win", 1);
while (1) {
    v >> Image;
    imshow("win", Image);
}
try this:
while (1) {
    v >> Image;
    imshow("win", Image);
    char c = waitKey(10); // add a 10 ms delay per frame to sync with the cam fps
    if (c == 'b')
    {
        break; // break when 'b' is pressed
    }
}
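If blank frames still slip through, one further tweak is to skip grabs that come back empty instead of displaying them. A variation on the loop above (reusing v and Image from the question's code); just a sketch, not a guaranteed cure for driver-level problems:
while (1) {
    v >> Image;
    if (Image.empty()) {      // the driver failed to deliver this frame; try again
        continue;
    }
    imshow("win", Image);
    char c = waitKey(10);     // 10 ms delay per frame
    if (c == 'b') {
        break;                // break when 'b' is pressed
    }
}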

How to show different frames per second of a video in two windows in OpenCV

I am using OpenCV to show frames from a camera. I want to show those frames in two separate windows: the first window shows the real frames from the camera (a new frame every 30 milliseconds), and the second window shows the frames with some delay (a new frame every 1 second). Is it possible to do that? I tried with the code below, but it does not work well. Please give me a solution for this task using OpenCV and Visual Studio 2012. Thanks in advance.
This is my code:
VideoCapture cap(0);
if (!cap.isOpened())
{
    cout << "exit" << endl;
    return -1;
}
namedWindow("Window 1", 1);
namedWindow("Window 2", 2);
long count = 0;
Mat face_algin;
while (true)
{
    Mat frame;
    Mat original;
    cap >> frame;
    if (!frame.empty()) {
        original = frame.clone();
        cv::imshow("Window 1", original);
    }
    if (waitKey(30) >= 0) break; // delay 30 ms for the first window
}
You could write the loop that displays frames as a single function taking the video source and the frame rate as arguments, and call it simultaneously from two threads.
The pseudo code would look like this:
void* play_video(void* frame_rate)
{
    // play at the specified frame rate
}

main()
{
    create_thread(thread1, play_video, normal_frame_rate);
    create_thread(thread2, play_video, delayed_frame_rate);
    join_thread(thread1);
    join_thread(thread2);
}
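Note that highgui calls such as imshow/waitKey are not guaranteed to be thread-safe on every platform, so if the threaded version misbehaves, a single-threaded alternative is to refresh the second window only every Nth frame. A rough sketch, assuming roughly 30 ms per loop iteration (about 33 iterations per second):
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    VideoCapture cap(0);
    if (!cap.isOpened()) return -1;

    namedWindow("Window 1", 1);
    namedWindow("Window 2", 1);

    long count = 0;
    Mat frame;
    while (true)
    {
        cap >> frame;
        if (frame.empty()) break;

        imshow("Window 1", frame);     // realtime view, updated every frame
        if (count % 33 == 0)           // roughly once per second at ~30 ms per iteration
            imshow("Window 2", frame); // slower view
        ++count;

        if (waitKey(30) >= 0) break;   // ~30 ms per frame
    }
    return 0;
}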

OpenCV - Dramatically increase frame rate from playback

In OpenCV, is there a way to dramatically increase the frame rate of a video (.mp4)? I have tried two methods to increase the playback speed of a video, including:
Increasing the frame rate:
cv.SetCaptureProperty(cap, cv.CV_CAP_PROP_FPS, int XFRAMES)
Skipping frames:
for( int i = 0; i < playbackSpeed; i++ ){originalImage = frame.grab();}
&
video.set (CV_CAP_PROP_POS_FRAMES, (double)nextFrameNumber);
Is there another way to achieve the desired result? Any suggestions would be greatly appreciated.
Update
Just to clarify, the playback speed is NOT slow; I am just searching for a way to make it much faster.
You're using the old API (cv.CaptureFromFile) to capture from a video file.
If you use the new C++ API, you can grab frames at the rate you want. Here is a very simple code sample:
#include "opencv2/opencv.hpp"
using namespace cv;
int main(int, char**)
{
VideoCapture cap("filename"); // put your filename here
namedWindow("Playback",1);
int delay = 15; // 15 ms between frame acquisitions = 2x fast-forward
while(true)
{
Mat frame;
cap >> frame;
imshow("Playback", frame);
if(waitKey(delay) >= 0) break;
}
return 0;
}
Basically, you just grab a frame in each loop iteration and wait between frames using waitKey(). The number of milliseconds you wait between frames sets your speedup. Here I put 15 ms, so this example will play a 30 fps video at approximately twice the speed (minus the time needed to grab a frame).
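If decoding itself becomes the bottleneck and waitKey(1) is still not fast enough, one further option (a sketch, building on the frame-skipping idea already mentioned in the question) is to display only every Nth frame, using grab() to discard the frames in between:
#include "opencv2/opencv.hpp"
using namespace cv;

int main(int, char**)
{
    VideoCapture cap("filename");    // put your filename here
    namedWindow("Playback", 1);
    const int skip = 4;              // show every 4th frame -> roughly 4x speedup
    while (true)
    {
        for (int i = 0; i < skip - 1; ++i)
            cap.grab();              // decode and discard intermediate frames
        Mat frame;
        cap >> frame;
        if (frame.empty()) break;    // end of file
        imshow("Playback", frame);
        if (waitKey(1) >= 0) break;  // minimal delay between displayed frames
    }
    return 0;
}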
Another option, if you really want control over how and what you grab from video files, is to use the GStreamer API to grab the images and then convert them to OpenCV for image treatment. You can find some info about this in this post: MJPEG streaming and decoding