I want to record video from a webcam using OpenCV.
I put the following code inside a timer event handler, which is called every 50 ms:
IplImage *image = cvQueryFrame(camera);
IplImage *resizeImage = cvCreateImage( size, 8, 3);
cvResize(image, resizeImage);
cvWriteFrame(writer, resizeImage );
Here, writer is created using cvCreateVideoWriter, and the video file is finalized when I call cvReleaseVideoWriter(&writer).
The problem is that I cannot release the memory allocated by cvWriteFrame until cvReleaseVideoWriter is called. That becomes a big issue when I need to record a long video.
How can I handle this situation?
I suppose the best solution in your case (if you don't want to modify the OpenCV code) is to write several video files.
As far as I can tell, each frame is kept in RAM as-is, without any compression. So you can calculate the number of frames you can buffer before the allocated memory exceeds a particular size, then finish writing to the current file and start a new one.
Related
I am trying to capture images from several cameras using the camera drivers, OpenCV, and C++. My goal is to get as many FPS as possible, and to this end I have found saving the images to the hard drive to be the slowest operation. In order to speed up the process, I do each save in a separate thread. The problem is, I still have to wait for the save to complete to avoid the captured image being overwritten. Doing this gives good results, but for unknown reasons, every 30-40 frames the write takes about 10x longer.
I am addressing this by creating a ring buffer where I store the images, as these sudden drops in write speed are very short. I have obtained very good results using this approach, but unfortunately for more than 3 cameras the camera driver can't handle the stress and my program halts, waiting for the first image of the 4th camera to be saved. I checked and it's not the CPU, as 3 cameras + a thread writing random data in the disk works fine.
Now, seeing how using OpenCV reduced the stress on the camera driver, I would like to create an OpenCV Mat buffer to hold the images while they are saved, without my camera overwriting them (well, not until the buffer has done a whole lap, which I will make sure won't happen).
I know I can do
cv::Mat colorFrame(cv::Size(width, height),CV_8UC3,pointerToMemoryOfCamera);
to initialize a frame from the memory written by the camera. This does not solve my problem, as it will only point to the data, and the moment the camera overwrites it, it will corrupt the image saved.
How do I create a matrix with a given size and type, and then copy the contents of the memory to this matrix?
You need to create a deep copy. You can use clone:
cv::Mat colorFrame = cv::Mat(height, width, CV_8UC3, pointerToMemoryOfCamera).clone();
You can also speed up the process of saving the images using custom matwrite and matread helpers that dump the Mat's raw bytes instead of going through an image encoder (these are not built into OpenCV).
I want to create a vector of matrices to store as many images as possible.
I know that it is possible as written below:
vector<Mat> images1;
and during image acquisition from the camera I would save the images at 100 fps with a resolution of 1600x800 as below:
images1.push_back(InputImage.clone());
Where InputImage is the Mat given by the camera. Creating the video during the acquisition process either leads to missing frames in the video or to a reduction in acquisition speed.
Later after stopping the image acquisition and before stopping the program, I would write the images into video as written below:
VideoWriter writer;
writer = VideoWriter("video.avi", -1, 100, Size(1600, 800), false);
for (vector<Mat>::iterator iter = images1.begin(); iter != images1.end(); iter++)
writer.write(*iter);
Is this correct? I am not sure that images1 can store around 1500 images without overflowing.
You don't really have to worry about "overflow", whatever that means in your context.
The bigger problem is memory. A single frame takes (at 8 bits per color, with 3 colors) 3 * 1600 * 800 == 3.84 MB. At 100 fps, one second of footage requires 0.384 GB of memory, so 8 GB of memory holds only about 20 seconds of footage. You'd need almost 24 GB before you could hold a whole minute. There's a reason the vast, vast, vast majority of video-encoding software keeps only a few frames of video data in memory at any given time, and dumps the rest to the hard drive (or discards it, depending on what purpose the software is serving).
What you should probably be doing (which is what programs like FRAPS do) is dumping frames to the hard drive as soon as you receive them. Then, when recording finishes, you can either call it a day (if raw video footage is what you need) or begin a process of reading the file back and encoding it into a more compressed format.
Pre-allocate your image vector in memory so that you just need to copy the frames without real-time allocation.
If you have memory problems, try dumping the frames to a file; the OS will hopefully be able to handle the I/O. If not, try memory-mapped files.
My aim is to capture all the frames (RGB) from Kinect at 30 fps and save them to my hard drive. For doing this I took the following approach.
Get the frames from Kinect and store them in an array buffer. Since writing to disk (using imwrite()) takes a bit of time and I may miss some frames while doing so, instead of directly saving them to disk I store them in an array. Another parallel thread then accesses this array and writes the individual frames to disk as images.
Now I have used a static array of size 3000 and type Mat. This will suffice since I need to store frames for 1.5-minute videos (1.5 minutes at 30 fps = 2700 frames). I have declared the array as follows:
#define NUM_FRAMES 3000
Mat rgb[NUM_FRAMES];
I have already tested this limit by reading images and saving them to the array using the following code:
for(int i=0; i<NUM_FRAMES; i++)
{
Mat img = imread("image.jpg", CV_LOAD_IMAGE_COLOR);
rgb[i] = img;
imshow("Image", img);
cvWaitKey(10);
}
The above code executed flawlessly.
But one problem is that the code I am using for capturing image using Kinect, captures the image in an IplImage. Thus I need to convert the image to cv::Mat format before using it. I convert it using the following command:
IplImage* color = cvCreateImageHeader(cvSize(COLOR_WIDTH, COLOR_HEIGHT), IPL_DEPTH_8U, 4);
cvSetData(color, colorBuffer, colorLockedRect.Pitch); // colorBuffer and colorLockedRect.Pitch is something that Kinect uses. Not related to OpenCv
rgb[rgb_read++] = Mat(color, FLAG);
Now here lies my problem. Whenever I set #define FLAG true, it causes memory leaks and gives me an OpenCV Error: Insufficient memory (failed to allocate 1228804 bytes).
But if I use #define FLAG false it works correctly, but the frames I am getting are erroneous, as shown below. They are three consecutive frames.
I was moving around my arm and the image got cut in between as can be seen from above.
Can someone please point out the reason for this weird behavior, or suggest an alternate way of obtaining the desired result? I have been struggling with this for a few days now. Please ask if any further clarification is required.
I am using OpenCV 2.4.8, Kinect SDK for Windows version-1.8.0 and Microsoft Visual Studio 2010.
Also, can someone please explain the role of the copyData parameter in Mat::Mat? I have already gone through this link, but it's still not completely clear. Maybe that's why I could not solve the above error in the first place, since how it works is not very clear to me.
Thanks in advance.
first, do not use IplImages, stick with cv::Mat, please.
the equivalent code for that would be:
Mat img_borrowed = Mat( height, width, CV_8UC4, colorBuffer, colorLockedRect.Pitch );
note, that this does not do any allocation on its own, it's still the kinect's pixels, so you will have to clone() it:
rgb[rgb_read++] = img_borrowed.clone();
this is the same as setting the flag in your code above to 'true'. (deep-copy the data)
[edit] maybe it's a good idea to skip the useless 4th channel (also less mem required), so , instead of the above you could do:
cvtColor( img_borrowed, rgb[rgb_read++], CV_BGRA2BGR); // will make a 'deep copy', too.
so, here's the bummer: if you don't save a deep copy in your array, you'll end up with garbled (and all identical!) images, probably even with undefined behaviour due to the locking/unlocking of the kinect buffer; if you do copy it (and you must), you will need a lot of memory.
it's unlikely that you can keep 3000 * 1024 * 768 * 4 = 9,437,184,000 bytes (~9.4 GB) in memory, you'll have to cut it down one way or another.
I am finding a memory leak in this simple OpenCV code:
VideoCapture* capture = new VideoCapture(0);
Mat frame;
while (true) {
capture->set( CV_CAP_PROP_FRAME_WIDTH, 1600 );
capture->set(CV_CAP_PROP_FRAME_HEIGHT, 1200 );
capture->read(frame);
}
This is the whole code. Every time through the while loop, several MB are leaked. I have tried frame.release() just after the read, but it doesn't help. Removing the set-size lines fixes the problem, but in my real code I want to vary the size, so that isn't a solution. It is getting the image at the correct size.
Am I doing something stupid?
By the way, I am using a Logitech B910 webcam.
Thanks!
Do you need to change the readout size on every frame?
Once it is set, the camera will produce the same size until you reset it.
I am writing a program that involves real-time processing of video from a network camera using OpenCV. I want to be able to capture (at any time during processing) previous images (e.g. say ten seconds worth) and save to a video file.
I am currently doing this using a queue as a buffer (pushing cv::Mat data), but this is obviously not efficient, as a few seconds' worth of images soon uses up all the PC's memory. I tried compressing images using cv::imencode, but with PNG that doesn't make much difference. I need a solution that uses hard-drive space and is efficient enough for real-time operation.
Can anyone suggest a very simple and efficient solution?
EDIT:
Just so that everyone understands what I'm doing at the moment; here's the code for a 10 second buffer:
void run()
{
    cv::VideoCapture cap(0);
    double fps = cap.get(CV_CAP_PROP_FPS);
    int buffer_length = 10; // in seconds
    int wait = 1000.0 / fps;
    QTime time;
    forever {
        time.restart();
        cv::Mat image;
        bool read = cap.read(image);
        if (!read)
            break;
        bool locked = _mutex.tryLock(10);
        if (locked) {
            if (image.data) {
                _buffer.push(image);
                if ((int)_buffer.size() > (fps * buffer_length))
                    _buffer.pop();
            }
            _mutex.unlock();
        }
        int time_taken = time.elapsed();
        if (time_taken < wait)
            msleep(wait - time_taken);
    }
    cap.release();
}
queue<cv::Mat> _buffer and QMutex _mutex are global variables. If you're familiar with Qt, signals and slots etc., I've got a slot that grabs the buffer and saves it as a video using cv::VideoWriter.
EDIT:
I think the ideal solution would be for my queue<cv::Mat> _buffer to use hard-drive space rather than PC memory. Not sure on which planet this is possible? :/
I suggest looking into real-time compression with x264 or similar. x264 is regularly used for real-time encoding of video streams and, with the right settings, can encode multiple streams or a 1080p video stream on a moderately powered processor.
I suggest asking in doom9's forum or similar forums.
x264 is a free h.264 encoder which can achieve 100:1 or better (vs raw) compression. The output of x264 can be stored in your memory queue with much greater efficiency than uncompressed (or losslessly compressed) video.
UPDATED
One thing you can do is store the images to the hard disk using imwrite and push their filenames onto the queue. When the queue is full, delete the image files as you pop their filenames.
In your video-writing slot, load the images as their filenames are popped from the queue and write them to your VideoWriter instance.
You mentioned you needed to use Hard Drive Memory
In that case, consider using the OpenCV HighGUI VideoWriter. You can create an instance of VideoWriter as below:
VideoWriter record("RobotVideo.avi", CV_FOURCC('D','I','V','X'),
30, frame.size(), true);
And write captured images to it as below:
record.write(image);
Find the documentation and the sample program on the website.