I am using the
haarcascade_frontalface_alt2.xml
file for face detection in OpenCV 2.4.3 under Visual Studio 2010.
I am using
Mat frame;
cv::VideoCapture capture("C:\\Users\\Xavier\\Desktop\\AVI\\Video 6_xvid.avi");
capture.set(CV_CAP_PROP_FPS,30);
for(;;)
{
    capture >> frame;
    // face detection code
}
The problem I'm facing is that, because Haar face detection is computationally heavy, OpenCV is missing a few frames at the
capture >> frame;
instruction. To check this, I wrote a counter to a text file and found only 728 frames out of 900 for a 30-second, 30 fps video.
Could someone please tell me how to fix it?
I am not an experienced OpenCV user, but you could try flushing the capture's output stream to disk. Unfortunately, the VideoCapture class does not appear to support such an operation. Note that flushing to disk will have an impact on performance, since everything is flushed before execution continues. It might therefore not be the best solution, but it is the easiest one, if it is possible at all.
Another approach that requires more work, but that should fix the problem, is to create a separate low-priority thread that writes each frame to disk. Your current thread then only needs to hand the data to this low-priority thread each time a frame is captured. Depending on whether the higher-priority thread might change the data while the low-priority thread is still writing it to disk, you may want to copy the data to a separate buffer first.
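A minimal sketch of that second approach, assuming a C++11-capable compiler (Boost threads would look much the same); the queue, mutex, and worker names are just illustrative, and the heavy work (the Haar detection, or the disk write) goes where the comment indicates:
#include <opencv2/opencv.hpp>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

std::queue<cv::Mat>     g_frames;   // frames waiting for the slow work
std::mutex              g_mutex;
std::condition_variable g_cond;
bool                    g_done = false;

// Low-priority worker: pops frames and does the heavy part
// (the Haar detection, or writing the frame to disk).
void worker()
{
    for (;;) {
        cv::Mat frame;
        {
            std::unique_lock<std::mutex> lock(g_mutex);
            g_cond.wait(lock, [] { return !g_frames.empty() || g_done; });
            if (g_frames.empty() && g_done)
                return;
            frame = g_frames.front();
            g_frames.pop();
        }
        // heavy work on 'frame' goes here, outside the lock
    }
}

int main()
{
    cv::VideoCapture capture("C:\\Users\\Xavier\\Desktop\\AVI\\Video 6_xvid.avi");
    std::thread consumer(worker);

    cv::Mat frame;
    while (capture.read(frame)) {               // never blocked by the heavy work
        std::lock_guard<std::mutex> lock(g_mutex);
        g_frames.push(frame.clone());           // clone so 'frame' can be reused
        g_cond.notify_one();
    }

    { std::lock_guard<std::mutex> lock(g_mutex); g_done = true; }
    g_cond.notify_one();
    consumer.join();
}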
Related
I'm using OpenCV4 to read from a camera. Similar to a webcam. Works great, code is somewhat like this:
cv::VideoCapture cap(0);
cap.set(cv::CAP_PROP_FRAME_WIDTH , 1600);
cap.set(cv::CAP_PROP_FRAME_HEIGHT, 1200);
while (true)
{
    cv::Mat mat;
    // wait for some external event here so I know it is time to take a picture...
    cap >> mat;
    process_image(mat);
}
Problem is, this gives many video frames, not a single image. This is important because in my case I don't want nor need to be processing 30 FPS. I actually have specific physical events that trigger reading the image from the camera at certain times. Because OpenCV is expecting the caller to want video -- not surprising considering the class is called cv::VideoCapture -- it has buffered many seconds of frames.
What I see in the image is always from several seconds ago.
So my questions:
Is there a way to flush the OpenCV buffer?
Or to tell OpenCV to discard the input until I tell it to take another image?
Or to get the most recent image instead of the oldest one?
The other option I'm thinking of investigating is using V4L2 directly instead of OpenCV. Will that let me take individual pictures or only stream video like OpenCV?
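For reference, one workaround that is often suggested (a sketch only, assuming the OpenCV 4 C++ API; CAP_PROP_BUFFERSIZE is not honoured by every backend, and the drain count of 5 is arbitrary) is to discard the buffered frames just before retrieving:
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);
    cap.set(cv::CAP_PROP_FRAME_WIDTH , 1600);
    cap.set(cv::CAP_PROP_FRAME_HEIGHT, 1200);
    cap.set(cv::CAP_PROP_BUFFERSIZE, 1);   // ask for a tiny buffer; not all backends honour it

    while (true)
    {
        // ... wait for the external trigger here ...

        for (int i = 0; i < 5; ++i)        // drain whatever was buffered while waiting
            cap.grab();                    // (5 is an arbitrary count for this sketch)

        cv::Mat mat;
        cap.retrieve(mat);                 // the freshest frame we could get
        // process_image(mat);
    }
}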
So I'm actually working on an augmented reality project.
I use OpenCV to take pictures from 2 cameras.
Those cameras aren't really efficient; I think their maximum frame rate is around 30 fps.
If I refresh the camera frame (via the read method) on every iteration of the program, the application runs at about 25 fps. If I don't refresh it, it runs at about 55 fps.
I suppose this latency occurs because OpenCV waits for a new frame to be generated by the cameras before moving on to the next step of the program.
But I need all virtual objects to be rendered at at least 55 fps for immersion. Is there a way to tell OpenCV to skip to the next call if there is no new frame in the VideoCapture object?
And if there isn't, is there another, more efficient cross-platform API for camera control?
Thanks!
I have never used OpenCV in C++, but I think the situation is the same. I am using OpenCV4Android and need to do something when a frame comes in; it will actually lower the fps if you put your processing in the onCameraFrame() function (which I guess is like the read() function in C++), because the next frame only comes in once the current one has been returned.
My solution is to use another thread to process the frame. What you can do in your read() function is set a flag indicating whether a new frame is available in the VideoCapture object; the processing thread then checks the flag and, if a frame is there, processes it. The fps will be better.
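A minimal C++ sketch of that flag/extra-thread idea, assuming C++11 threads (the names latestFrame and hasNewFrame are illustrative, not part of any OpenCV API):
#include <opencv2/opencv.hpp>
#include <atomic>
#include <mutex>
#include <thread>

cv::Mat           latestFrame;              // most recent frame from the camera
std::mutex        frameMutex;
std::atomic<bool> hasNewFrame{false};       // the "flag" mentioned above
std::atomic<bool> running{true};

// Capture thread: this is the only place that blocks waiting for the camera.
void captureLoop(cv::VideoCapture& cap)
{
    cv::Mat frame;
    while (running && cap.read(frame)) {
        std::lock_guard<std::mutex> lock(frameMutex);
        frame.copyTo(latestFrame);
        hasNewFrame = true;
    }
}

int main()
{
    cv::VideoCapture cap(0);
    std::thread grabber(captureLoop, std::ref(cap));

    cv::Mat current;
    while (running) {                           // render loop, e.g. 55 fps
        if (hasNewFrame) {                      // only copy when something new arrived
            std::lock_guard<std::mutex> lock(frameMutex);
            latestFrame.copyTo(current);
            hasNewFrame = false;
        }
        // render the AR scene with 'current' here; set running = false to quit
    }

    grabber.join();
}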
I'm writing a program that reads multiple webcams, stitches the pictures together and saves them into a video file.
Should I use threads to capture the images and to write the resulting large image into a file?
If yes, should I use C++11 or Boost?
Do you maybe have some simple example code which shows how to avoid race conditions?
I already found this thread, which uses threads for writing, but it doesn't seem to avoid race conditions.
pseudocode of my program:
camVector //vector with cameras
VideoWriter outputVideo //outputs the video
while(true){
    for(camera : camVector){
        camera.grab() // send signal to grab picture
    }
    picVector = getFrames(camVector) // collect frames of all cameras into a vector
    Mat bigImg = stitchPicTogether(picVector)
    outputVideo.write(bigImg)
}
That's what I'd do:
camVector //vector with cameras
VideoWriter outputVideo //outputs the video
for(camera:camVector) camera.grab(); // request first frame
while(true){
    parallel for(camera : camVector) {
        frame = getFrame(camera);
        pasteFrame(/* destination */ bigImg,
                   /* geometry    */ geometry[camera],
                   /* source      */ frame
        );
        camera.grab(); // request next frame, so the webcam driver can start working
                       // while you write to disk
    }
    outputVideo.write(bigImg)
}
This way stitching is done in parallel, and if your webcam drivers have different timing, you can begin stitching what you received from one webcam while you wait for the other webcam's frame.
About the implementation, you could simply go with OpenMP, which is already used in OpenCV, with something like #pragma omp parallel. If you prefer something more C++-like, give Intel TBB a try; it's cool.
Or, as you said, you could go with native C++11 threads or Boost. Choosing between the two mainly depends on your work environment: if you intend to work with older compilers, Boost is safer.
In all cases, no locking is required, except a join at the end, to wait for all threads to finish their work.
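A rough OpenMP version of the pseudocode above (a sketch only: the camera indices, frame sizes, geometry rectangles, and codec are assumptions, and it must be compiled with OpenMP enabled, e.g. -fopenmp):
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    // Two cameras and a fixed side-by-side layout are assumptions for this sketch.
    std::vector<cv::VideoCapture> cams{cv::VideoCapture(0), cv::VideoCapture(1)};
    std::vector<cv::Rect> geometry{
        cv::Rect(0,   0, 640, 480),
        cv::Rect(640, 0, 640, 480)
    };

    cv::Mat bigImg(480, 1280, CV_8UC3);
    cv::VideoWriter outputVideo("out.avi",
                                cv::VideoWriter::fourcc('M','J','P','G'),
                                30, bigImg.size());

    for (auto& cam : cams) cam.grab();          // request the first frames

    while (true)
    {
        // Each camera is retrieved and pasted by its own thread; the ROIs do not
        // overlap, so no locking is needed.
        #pragma omp parallel for
        for (int i = 0; i < (int)cams.size(); ++i)
        {
            cv::Mat frame;
            cams[i].retrieve(frame);            // get the frame requested earlier
            cv::Mat roi = bigImg(geometry[i]);  // view into the stitched image
            cv::resize(frame, roi, roi.size());
            cams[i].grab();                     // already request the next frame
        }
        outputVideo.write(bigImg);              // single writer, runs after the join
    }
}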
If your bottleneck is writing the output, the limiting factor is either disk I/O or video compression speed. For disk, you can try changing the codec and/or compressing more. For compression speed, have a look at GPU codecs like this.
I am writing a program that involves real-time processing of video from a network camera using OpenCV. I want to be able to capture (at any time during processing) previous images (e.g. say ten seconds worth) and save to a video file.
I am currently doing this using a queue as a buffer (pushing 'cv::Mat' data), but this is obviously not efficient, as a few seconds' worth of images soon uses up all the PC's memory. I tried compressing images using 'cv::imencode', but that doesn't make much difference with PNG. I need a solution that uses hard-drive storage and is efficient for real-time operation.
Can anyone suggest a very simple and efficient solution?
EDIT:
Just so that everyone understands what I'm doing at the moment; here's the code for a 10 second buffer:
void run()
{
    cv::VideoCapture cap(0);
    double fps = cap.get(CV_CAP_PROP_FPS);
    int buffer_length = 10; // in seconds
    int wait = 1000.0/fps;
    QTime time;
    forever {
        time.restart();
        cv::Mat image;
        bool read = cap.read(image);
        if(!read)
            break;
        bool locked = _mutex.tryLock(10);
        if(locked){
            if(image.data){
                _buffer.push(image);
                if((int)_buffer.size() > (fps*buffer_length))
                    _buffer.pop();
            }
            _mutex.unlock();
        }
        int time_taken = time.elapsed();
        if(time_taken < wait)
            msleep(wait - time_taken);
    }
    cap.release();
}
queue<cv::Mat> _buffer and QMutex _mutex are global variables. If you're familiar with QT, signals and slots etc, I've got a slot that grabs the buffer and saves it as a video using cv::VideoWriter.
EDIT:
I think the ideal solution would be for my queue<cv::Mat> _buffer to use hard-drive storage rather than PC memory. Not sure on which planet this is possible? :/
I suggest looking into real-time compression with x264 or similar. x264 is regularly used for real-time encoding of video streams and, with the right settings, can encode multiple streams or a 1080p video stream on a moderately powered processor.
I suggest asking in doom9's forum or similar forums.
x264 is a free H.264 encoder which can achieve 100:1 or better (vs. raw) compression. The output of x264 can be stored in your memory queue with much greater efficiency than uncompressed (or losslessly compressed) video.
UPDATED
One thing you can do is store the images to the hard disk using imwrite and push their filenames onto the queue. When the queue is full, delete the image files as you pop their filenames.
In your video-writing slot, load the images as they are popped from the queue and write them to your VideoWriter instance.
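A rough sketch of that disk-backed queue (the filename scheme and helper functions are illustrative; error handling is omitted):
#include <opencv2/opencv.hpp>
#include <cstdio>   // std::remove
#include <queue>
#include <string>

std::queue<std::string> _fileBuffer;   // filenames instead of cv::Mat
int frameCounter = 0;

// Called for every captured frame: write it to disk, remember only its name.
void pushFrame(const cv::Mat& image, int maxFrames)
{
    std::string name = "buffer_" + std::to_string(frameCounter++) + ".png";
    cv::imwrite(name, image);
    _fileBuffer.push(name);

    if ((int)_fileBuffer.size() > maxFrames) {
        std::remove(_fileBuffer.front().c_str());   // drop the oldest image file
        _fileBuffer.pop();
    }
}

// Called from the video-writing slot: reload the buffered images and encode them.
void saveBuffer(cv::VideoWriter& record)
{
    while (!_fileBuffer.empty()) {
        cv::Mat image = cv::imread(_fileBuffer.front());
        if (!image.empty())
            record.write(image);
        std::remove(_fileBuffer.front().c_str());
        _fileBuffer.pop();
    }
}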
You mentioned you needed to use hard-drive storage.
In that case, consider using the OpenCV HighGUI VideoWriter. You can create an instance of VideoWriter as below:
VideoWriter record("RobotVideo.avi", CV_FOURCC('D','I','V','X'),
                   30, frame.size(), true);
And write captured images to it as below:
record.write(image);
Find the documentation and the sample program on the website.
I am trying to stitch an image from a live video camera (more like a panorama) using OpenCV. The stitching is working fine. My problem is, I want the stitching to be done in real time, say around 30 mph, but the processing of the stitching is slow.
I want to use threads to improve the speed, but in order to use them do I need to store my live video stream, or is there a way to use threads directly on the live stream?
Here is a sample code:
SapAcqDevice *pAcq=new SapAcqDevice("Genie_HM1400_1", false);
SapBuffer *pBuffer = new SapBuffer(20,pAcq);
SapView *pView=new SapView(pBuffer,(HWND)-1);
SapAcqDeviceToBuf *pTransfer = new SapAcqDeviceToBuf(pAcq, pBuffer, XferCallback, pView);
pAcq->Create();
pBuffer->Create();
pView->Create();
pTransfer->Create();
pTransfer->Grab();
printf("Press any key to stop grab\n");
getch();
pTransfer->Freeze();
pTransfer->Wait(5000);
printf("Press any key to terminate\n");
getch();
The code above is used to capture the live stream. The XferCallback function does the processing of the frames; in this function I call my stitching engine. Since the engine's processing is slow, I want to use threads.
Here is a sample code of the callback function:
SapView *pView = (SapView *) pInfo->GetContext();
SapBuffer *pBuffer;
pBuffer = pView->GetBuffer();
void *pData=NULL;
pBuffer->GetAddress(&pData);
int width=pBuffer->GetWidth();
int height=pBuffer->GetHeight();
int depth=pBuffer->GetPixelDepth();
IplImage *fram;
fram = cvCreateImage(cvSize(width,height),depth,1);
cvSetImageData(fram,pData,width);
stitching(frame_num , fram);
cvWaitKey(1);
frame_num++;
I want many threads working on the stitch engine.
If you think you can get the stitching fast enough using threads, then go for it.
do I need to store my live video stream, or is there a way to
use threads directly on the live stream?
You might benefit from setting up a ring buffer with preallocated frames. You know the image size isn't going to change. So your Sapera acquisition callback simply pushes a frame into the buffer.
You then have another thread that sits there stitching as fast as it can and maintaining state information to help optimize the next stitch. You have not given much information about the stitching process, but presumably you can make it parallel with OpenMP. If that is fast enough to keep up with frame acquisition then you'll be fine. If not, then you will start dropping frames because your ring buffer is full.
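A minimal sketch of such a ring buffer, assuming a single producer (the Sapera callback) and a single consumer (the stitching thread); the class and member names are illustrative:
#include <opencv2/opencv.hpp>
#include <condition_variable>
#include <cstring>
#include <mutex>
#include <vector>

// Fixed-size ring of preallocated frames: the acquisition callback copies raw
// data into the next free slot, the stitching thread consumes the slots in order.
class FrameRing
{
public:
    FrameRing(int slots, cv::Size size, int type)
        : frames_(slots), head_(0), tail_(0), count_(0)
    {
        for (auto& f : frames_) f.create(size, type);   // allocate once, up front
    }

    // Producer side (e.g. the Sapera XferCallback). Returns false when the ring
    // is full, i.e. the stitcher cannot keep up and this frame is dropped.
    bool push(const void* data, size_t bytes)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        if (count_ == (int)frames_.size()) return false;
        std::memcpy(frames_[head_].data, data, bytes);
        head_ = (head_ + 1) % frames_.size();
        ++count_;
        cond_.notify_one();
        return true;
    }

    // Consumer side (the stitching thread). Blocks until a frame is available.
    // The returned header shares the ring's memory, so copy it (or finish with
    // it) before the producer wraps around to this slot again.
    cv::Mat pop()
    {
        std::unique_lock<std::mutex> lock(mutex_);
        cond_.wait(lock, [this] { return count_ > 0; });
        cv::Mat out = frames_[tail_];
        tail_ = (tail_ + 1) % frames_.size();
        --count_;
        return out;
    }

private:
    std::vector<cv::Mat> frames_;
    int head_, tail_, count_;
    std::mutex mutex_;
    std::condition_variable cond_;
};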
As hinted above, you can probably predict where the stitching for the next frame ought to begin. This is on the basis that movement between one frame and the next should be reasonably small and/or smooth. This way you narrow your search and greatly improve the speed.