Slow video capture with OpenCV 2.3.1 - C++

Is there a way to stream video with OpenCV faster?
I'm using:
Mat img;
VideoCapture cap(".../video.avi");
for (;;) {
    cap >> img;
    // ... some calculations happen here ...
}
Thanks

Since the frame-grabbing procedure itself is pretty straightforward, the slowness you are experiencing could be caused by the calculations consuming your CPU, decreasing the FPS displayed by your application.
It's hard to tell without looking at the code that does this.
But a simple test to pinpoint the origin of the problem would be to remove the calculations and write a minimal application that just reads frames from the video and displays them. Simple as that! If this test runs smoothly, you know the performance is being dragged down by the calculations you perform on each frame.
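A minimal sketch of such a test (the file name is a placeholder for your own video):

#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    VideoCapture cap("video.avi");   // placeholder path
    if (!cap.isOpened()) return -1;
    Mat img;
    for (;;) {
        cap >> img;                  // grab the next frame
        if (img.empty()) break;      // end of the video
        imshow("frames", img);
        if (waitKey(1) == 27) break; // Esc quits
    }
    return 0;
}

If this loop keeps up with the video's native FPS, the bottleneck is in your processing code, not in the capture.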
Good luck.

Related

OpenCV stitcher mode:SCANS crashes with certain properties

I am trying to set up a planar image-stitching app, but if I give the stitcher below a PlaneWarper, the app crashes with a bad access exception. I also learned that ORB feature finding is best for planar stitching, but using an OrbFeatureFinder also causes the app to crash inside the stitch function. I know I am not fully aware of how the stitching pipeline works, so if someone could help me understand the issue here, I would be grateful.
vector<Mat> imgs;

cv::Mat stitch(vector<Mat>& images)
{
    imgs = images;
    Mat pano;
    Ptr<Stitcher> stitcher = Stitcher::create(Stitcher::SCANS, true);
    stitcher->setPanoConfidenceThresh(0.8f);
    stitcher->setFeaturesMatcher(makePtr<cv::detail::AffineBestOf2NearestMatcher>(true, true, 0.8f));
    Stitcher::Status status = stitcher->stitch(imgs, pano);
    if (status != Stitcher::OK)
    {
        cout << "Can't stitch images, error code = " << int(status) << endl;
        //return 0;
    }
    return pano;
}
I have tested the stitching_detailed program on my Mac with ORB feature finding and planar warping, and it gave me great results, so I attempted to run stitching_detailed.cpp through the iOS app interface, but that caused all kinds of crashes, so I am trying this way now.
The stitching works well, but there is some distortion here and there; using ORB feature finding with the planar warping eliminated it on my Mac.
I only did a cursory look, but I suspect your issue lies in how OpenCV is structured. When running on a Mac, it can use the GPU via OpenCL. However, when running on an iOS device, it cannot, since OpenCL is unsupported there. Because of this, it must fall back to the CPU-based implementation found here:
https://github.com/opencv/opencv/blob/808ba552c532408bddd5fe51784cf4209296448a/modules/stitching/src/stitcher.cpp
You will see the variable try_use_gpu used extensively, and based on the way it configures and runs, this is likely the culprit. While I cannot say for certain in your case, I have found previously that there is iOS-specific functionality that is broken, or simply non-existent. With that said, you may want to file an issue with the project in the hope that someone can pick it up and fix it.
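If the GPU path really is the problem, a quick experiment (just a sketch, using the same Stitcher::create call your code already makes) would be to pass false for try_use_gpu so everything stays on the CPU code path:

Ptr<Stitcher> stitcher = Stitcher::create(Stitcher::SCANS, false /* try_use_gpu */);

If the crash disappears with this change, that would confirm the GPU-related code is the culprit.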
Use the OpenCV 2.4.9 version of stitching for the iOS app. Also, use this code, it works great for an iOS app:
https://github.com/foundry/OpenCVSwiftStitch
I already spent too much time fixing the crash, but now it is fixed.

Detect bad frames in OpenCV 2.4.9

I know the title is a bit vague but I'm not sure how else to describe it.
CentOS with ffmpeg + OpenCV 2.4.9. I'm working on a simple motion detection system which uses a stream from an IP camera (h264).
Once in a while the stream hiccups and throws in a "bad frame" (see pic-bad.png link below). The problem is that these frames vary largely from the previous frames and cause a "motion" event to be triggered even though no actual motion occurred.
The pictures below will explain the problem.
Good frame (motion captured):
Bad frame (no motion, just a broken frame):
The bad frame gets caught randomly. I guess I could write a bad-frame detector by looping through the pixels downward from a certain position to see if they are all the same, but I'm wondering if there is another, more efficient, "by the book" approach to detecting these types of bad frames and just skipping over them.
Thank You!
EDIT UPDATE:
The frame is grabbed by a C++ motion detection program via cvQueryFrame(camera), so I do not directly interface with ffmpeg; OpenCV does that on the backend. I'm using the latest version of ffmpeg compiled from git source. All of the libraries are also up to date (h264, etc., all downloaded and compiled yesterday). The data is coming from an RTSP stream (ffserver). I've tested over multiple cameras (Dahua 1-3 MP models) and the frame glitch is pretty persistent across all of them; it doesn't happen continuously, just once in a while (e.g. once every 10 minutes).
What comes to mind as a first approach is to measure the dissimilarity between an example of a valid frame and the frame being tested by counting the pixels that differ. Dividing this count by the frame area gives a percentage; above, say, 0.5 we can declare the tested frame invalid because it differs too much from the valid example.
This assumption is only appropriate if the camera is static (it does not move) and moving objects never get too close to it (how close depends on the focal length, but with wide lenses, for example, objects should not appear closer than about 30 cm, to prevent an object from "jumping" into the frame out of nowhere and covering more than 50% of the frame area).
Here is an OpenCV function that does what I described. You can raise the dissimilarity threshold if you expect motion changes to be more rapid. Please note that the first parameter should be an example of a valid frame.
bool IsBadFrame(const cv::Mat &goodFrame, const cv::Mat &nextFrame) {
    // assert(goodFrame.size() == nextFrame.size())
    cv::Mat g, g2;
    cv::cvtColor(goodFrame, g, CV_BGR2GRAY);
    cv::cvtColor(nextFrame, g2, CV_BGR2GRAY);
    cv::Mat diff = g2 != g;  // non-zero wherever the two frames differ
    float dissimilarity = (float)cv::countNonZero(diff) / (goodFrame.size().height * goodFrame.size().width);
    return dissimilarity > 0.5f;  // more than half the pixels changed
}
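A possible way to use it inside the grab loop (a sketch; the VideoCapture setup is assumed):

cv::Mat lastGood, frame;
cap >> lastGood;                               // seed with the first frame
for (;;) {
    cap >> frame;
    if (frame.empty()) break;
    if (IsBadFrame(lastGood, frame)) continue; // skip the glitched frame
    lastGood = frame.clone();                  // update the reference frame
    // ... run motion detection on frame ...
}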
You do not mention whether you use the ffmpeg command line or the libraries, but in the latter case you can check the bad-frame flag (I forgot its exact name) and simply ignore those frames.
Remove waitKey(50) or change it to waitKey(1). I think OpenCV does not spawn a new thread to perform capture, so when there is a pause it confuses the buffer management routines, causing bad frames... maybe?
I have Dahua cameras and observed that bad frames appear with higher delays, and they go away completely with waitKey(1). The pause does not necessarily have to come from waitKey; calls to other routines also cause such pauses and produce bad frames if they take long enough.
This means there should be a minimal pause between consecutive frame grabs. The solution would be to use two threads to perform capture and processing separately.
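A minimal sketch of that two-thread layout (C++11; the RTSP URL and the processing step are placeholders):

#include <opencv2/opencv.hpp>
#include <thread>
#include <mutex>
#include <atomic>

std::mutex mtx;
cv::Mat latest;                  // most recent frame, shared between threads
std::atomic<bool> running(true);

void captureLoop(cv::VideoCapture &cap) {
    cv::Mat frame;
    while (running) {
        cap >> frame;            // grab as fast as possible, no pauses here
        if (frame.empty()) continue;
        std::lock_guard<std::mutex> lock(mtx);
        frame.copyTo(latest);
    }
}

int main() {
    cv::VideoCapture cap("rtsp://..."); // placeholder stream URL
    std::thread grabber(captureLoop, std::ref(cap));
    cv::Mat work;
    while (running) {
        { std::lock_guard<std::mutex> lock(mtx); latest.copyTo(work); }
        if (!work.empty()) {
            // ... motion detection on 'work' can take its time here ...
            cv::imshow("view", work);
        }
        if (cv::waitKey(1) == 27) running = false; // Esc quits
    }
    grabber.join();
    return 0;
}

This way the grabbing thread never waits on the processing, so the decoder's buffers are drained at a steady rate.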

OpenCV is missing frames while face detection takes place

I am using the
haarcascade_frontalface_alt2.xml
file for face detection in OpenCV 2.4.3 under the Visual Studio 10 framework.
I am using
Mat frame;
cv::VideoCapture capture("C:\\Users\\Xavier\\Desktop\\AVI\\Video 6_xvid.avi");
capture.set(CV_CAP_PROP_FPS, 30);
for (;;)
{
    capture >> frame;
    //face detection code
}
The problem I'm facing is that, as Haar face detection is computationally heavy, OpenCV is missing a few frames at the
capture >> frame;
instruction. To check this, I wrote a counter to a txt file and found only 728 frames out of 900 for a 30-second 30 fps video.
Please, someone tell me how to fix it.
I am not an experienced OpenCV user, but you could try flushing the output stream of the capture to disk. Unfortunately, the VideoCapture class does not seem to support such an operation. Note that flushing to disk would have an impact on your performance, since it would first flush everything and only then continue executing. Therefore it might not be the best solution, but it would be the easiest one, if it were possible at all.
Another approach that requires more work, but that should fix the problem, is to create a separate low-priority thread that writes each frame to disk. Your current thread then only needs to hand its frame over to this low-priority thread each time it wants the data captured. Depending on whether the high-priority thread might change the data while the low-priority thread is still writing it to disk, you might want to copy the data into a separate buffer first.
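A rough sketch of that idea (C++11; the file naming and the busy-wait loop are simplifications of this example, not a full solution):

#include <opencv2/opencv.hpp>
#include <thread>
#include <mutex>
#include <queue>
#include <atomic>

std::mutex qMtx;
std::queue<cv::Mat> pending;  // frames waiting to be written
std::atomic<bool> done(false);

void writerLoop() {
    int n = 0;
    for (;;) {
        cv::Mat f;
        {
            std::lock_guard<std::mutex> lock(qMtx);
            if (!pending.empty()) { f = pending.front(); pending.pop(); }
            else if (done) break;
        }
        if (!f.empty())
            cv::imwrite(cv::format("frame_%05d.png", n++), f); // slow I/O off the main thread
    }
}

// In the capture loop, clone the frame so the writer owns its own copy:
//   { std::lock_guard<std::mutex> lock(qMtx); pending.push(frame.clone()); }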

Image Stitching from a live Video Stream in OpenCv

I am trying to stitch an image from a live video camera (more like a panorama) using OpenCV. The stitching is working fine. My problem is that I want the stitching to be done in real time, say around 30 fps, but the stitching processing is slow.
I want to use threads to improve the speed, but in order to use them, do I need to store my live video stream, or is there a way to use threads directly on the live stream?
Here is a sample code:
SapAcqDevice *pAcq = new SapAcqDevice("Genie_HM1400_1", false);
SapBuffer *pBuffer = new SapBuffer(20, pAcq);
SapView *pView = new SapView(pBuffer, (HWND)-1);
SapAcqDeviceToBuf *pTransfer = new SapAcqDeviceToBuf(pAcq, pBuffer, XferCallback, pView);

pAcq->Create();
pBuffer->Create();
pView->Create();
pTransfer->Create();
pTransfer->Grab();

printf("Press any key to stop grab\n");
getch();
pTransfer->Freeze();
pTransfer->Wait(5000);
printf("Press any key to terminate\n");
getch();
The code above is used to capture the live stream. The XferCallback function does the processing of the frames; in it I call my stitching engine. Since the engine's processing is slow, I want to use threads.
Here is a sample code of the callback function:
SapView *pView = (SapView *) pInfo->GetContext();
SapBuffer *pBuffer = pView->GetBuffer();
void *pData = NULL;
pBuffer->GetAddress(&pData);

int width  = pBuffer->GetWidth();
int height = pBuffer->GetHeight();
int depth  = pBuffer->GetPixelDepth();

IplImage *fram = cvCreateImage(cvSize(width, height), depth, 1);
cvSetImageData(fram, pData, width);
stitching(frame_num, fram);
cvWaitKey(1);
frame_num++;
I want many threads working on the stitch engine.
If you think you can get the stitching fast enough using threads, then go for it.
do I need to store my live video stream or is there any way to directly use threads for the live stream?
You might benefit from setting up a ring buffer with preallocated frames. You know the image size isn't going to change, so your Sapera acquisition callback can simply push each frame into the buffer.
You then have another thread that sits there stitching as fast as it can and maintaining state information to help optimize the next stitch. You have not given much information about the stitching process, but presumably you can make it parallel with OpenMP. If that is fast enough to keep up with frame acquisition then you'll be fine. If not, then you will start dropping frames because your ring buffer is full.
As hinted above, you can probably predict where the stitching for the next frame ought to begin. This is on the basis that movement between one frame and the next should be reasonably small and/or smooth. This way you narrow your search and greatly improve the speed.
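A minimal sketch of such a ring buffer (C++11; the capacity and locking strategy are choices made for this example):

#include <opencv2/opencv.hpp>
#include <mutex>
#include <vector>

class FrameRing {
public:
    FrameRing(int capacity, cv::Size size, int type)
        : slots_(capacity), cap_(capacity), head_(0), count_(0) {
        for (int i = 0; i < cap_; ++i)
            slots_[i].create(size, type);   // preallocate every slot up front
    }
    // Called from the acquisition callback: copy the frame into the next slot.
    void push(const cv::Mat &frame) {
        std::lock_guard<std::mutex> lock(mtx_);
        frame.copyTo(slots_[head_]);
        head_ = (head_ + 1) % cap_;
        if (count_ < cap_) ++count_;        // else: the oldest frame was dropped
    }
    // Called from the stitching thread: copy the oldest unread frame out.
    bool pop(cv::Mat &out) {
        std::lock_guard<std::mutex> lock(mtx_);
        if (count_ == 0) return false;
        int tail = (head_ - count_ + cap_) % cap_;
        slots_[tail].copyTo(out);
        --count_;
        return true;
    }
private:
    std::vector<cv::Mat> slots_;
    int cap_, head_, count_;
    std::mutex mtx_;
};

The acquisition callback calls push(), the stitching thread loops on pop(); when the buffer fills up, the oldest frames are overwritten, which is exactly the frame-dropping behavior described above.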

How to find object on video using OpenCV

To track an object across video frames, I first extract the image frames from the video and save them to a folder. Then I am supposed to process those images to find the object. I do not know if this is a practical approach, because the algorithms I have seen do all of this in a single step. Is this correct?
Well, your approach will consume a lot of disk space, depending on the size of the video and of the frames, and you will spend a considerable amount of time reading the frames back from disk.
Have you tried performing real-time video processing instead? If your algorithm is not too slow, there are some posts that show what you need to do:
This post demonstrates how to use the C interface of OpenCV to execute a function to convert frames captured by the webcam (on-the-fly) to grayscale and displays them on the screen;
This post shows a simple way to detect a square in an image using the C++ interface;
This post is a slight variation of the one above, and shows how to detect a paper sheet;
This thread shows several different ways to perform advanced square detection.
I trust you are capable of converting code from the C interface to the C++ interface.
There is no point in storing the frames of a video if you're using OpenCV, as it has really handy methods for capturing frames from a camera or a stored video in real time.
In this post you have an example code for capturing frames from a video.
Then, if you want to detect objects in those frames, you need to process each frame with a detection algorithm. OpenCV ships with some sample code related to the topic. You can try the SIFT algorithm to detect a known picture, for example.
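A minimal sketch of per-frame feature detection (this example uses ORB, which ships in the main features2d module; SIFT works the same way but lives in the nonfree module in OpenCV 2.4; the file name is a placeholder):

#include <opencv2/opencv.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <vector>

int main()
{
    cv::VideoCapture cap("video.avi");   // placeholder path; pass 0 for a webcam
    if (!cap.isOpened()) return -1;
    cv::ORB orb;                         // default ORB detector
    cv::Mat frame, gray, vis;
    std::vector<cv::KeyPoint> keypoints;
    for (;;) {
        cap >> frame;
        if (frame.empty()) break;
        cv::cvtColor(frame, gray, CV_BGR2GRAY);
        orb.detect(gray, keypoints);     // detect features in this frame
        cv::drawKeypoints(frame, keypoints, vis);
        cv::imshow("detections", vis);
        if (cv::waitKey(1) == 27) break; // Esc quits
    }
    return 0;
}

Matching those keypoints against a reference image of the object (with a BFMatcher or FlannBasedMatcher) would be the next step toward actual object detection.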