How processing-hungry are OpenCV's BackgroundSubtractor MOG, MOG2, GMG? - C++

I am testing OpenCV's background subtractors MOG, MOG2 and GMG in my application.
All three reduce my application's FPS a lot: with MOG and MOG2 I get around 17 FPS, and with GMG I drop to 5!
I have re-examined my code and I know I pass the images around as references. There is one deep copy involved when grabbing the image from the camera, but other than that I work with pointers.
Are the algorithms really that power-hungry?
Note that I only run one subtraction method at a time, on the CPU (not the GPU).
The processor is a Core i7, though, and it seems reasonable to assume it should be able to cope.
Or am I wrong?
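The mixture models genuinely are per-pixel heavy: every frame updates several Gaussian components per pixel, so the cost scales with resolution. The cheapest first step is usually to downscale before apply(). Below is a minimal benchmark sketch, assuming OpenCV 3+ (where MOG2 ships in the core video module; MOG and GMG moved to opencv_contrib's bgsegm module); the camera index and the 0.5 scale factor are arbitrary placeholders.

    #include <opencv2/opencv.hpp>
    #include <chrono>
    #include <cstdio>

    int main()
    {
        cv::VideoCapture cap(0); // hypothetical camera index
        auto mog2 = cv::createBackgroundSubtractorMOG2();

        cv::Mat frame, small, fgMask;
        while (cap.read(frame))
        {
            // Downscale first: the per-pixel mixture update dominates,
            // so cost falls roughly linearly with pixel count.
            cv::resize(frame, small, cv::Size(), 0.5, 0.5, cv::INTER_AREA);

            auto t0 = std::chrono::steady_clock::now();
            mog2->apply(small, fgMask);
            auto t1 = std::chrono::steady_clock::now();
            std::printf("apply: %.2f ms\n",
                        std::chrono::duration<double, std::milli>(t1 - t0).count());
        }
        return 0;
    }

Timing apply() in isolation like this separates the subtractor's cost from the rest of the pipeline, which should confirm whether the FPS drop really comes from the algorithm.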

Related

Most efficient way to blur an image in OpenCV

I am blurring the background of an image using the blur method. All the tutorials I have seen show a kernel size of at most (7,7), but that is not blurred enough for what I need.
I have used Size(33,33) and it works all right, but I would like to go higher, so currently I am using Size(77,77). Is this the most efficient way of blurring an image in OpenCV? And is it okay to go that high at all?
Another idea is to run the blur method more than once with a kernel size of (7,7), but that doesn't seem like it would be more efficient.
EDIT:
OpenCV version 3.2
Try cv::stackBlur().
It was added in v4.7.0, so you would need to upgrade from 3.2. Its performance is almost flat, i.e. nearly independent of kernel size. The pull request contains performance figures: https://github.com/opencv/opencv/pull/20379
GaussianBlur(sigmaX=22) (30 ms)
stackBlur(ksize=(101,101)) (0.4 ms)
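A minimal usage sketch, assuming OpenCV >= 4.7.0 (the file names are placeholders; ksize must be odd):

    #include <opencv2/imgproc.hpp>
    #include <opencv2/imgcodecs.hpp>

    int main()
    {
        cv::Mat src = cv::imread("input.jpg");
        cv::Mat dst;
        // Huge kernel at near-constant cost, per the benchmark above.
        cv::stackBlur(src, dst, cv::Size(101, 101));
        cv::imwrite("blurred.jpg", dst);
        return 0;
    }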

Fast Panorama Image Compositing

I am working on a live panorama algorithm in C++. Basically, I took Stitching_detailed.cpp from OpenCV as a reference and started modifying it to my needs. I have been carefully studying the stitching pipeline it is based on (as detailed in Images stitching by OpenCV and Automatic Panoramic Image Stitching using Invariant Features).
My main problem now is execution time. I implemented as much as I could in CUDA, but I ran into problems with the compositing block: there seems to be no CUDA implementation of the Graph Cut Seam Finder algorithm.
I am aiming to stitch the images from 6 different cameras in ~60 ms (~15 FPS). However, in my current implementation the CPU version of GraphCutSeamFinder takes about 90 ms for only 3 images (0.1 MP each).
I have tried other, less computationally expensive seam finders such as Voronoi and DP, but unfortunately they produce an unpleasant stitched image.
I am kind of lost here. What could I do to speed this part up? Are there any other seam-finding/blending techniques I could make use of?
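Two standard mitigations: Stitching_detailed.cpp itself runs the seam finder on downscaled warped images (its seam_megapix parameter) and upscales the resulting masks; and for a fixed camera rig the seams only need recomputing when the scene overlap changes, so you can compute them once (or every N frames) and reuse the masks, removing graph cut from the per-frame loop entirely. A sketch of the seam-finding call with interchangeable finders, assuming the variable names from Stitching_detailed.cpp:

    #include <opencv2/core.hpp>
    #include <opencv2/stitching/detail/seam_finders.hpp>
    #include <vector>

    using namespace cv;
    using namespace cv::detail;

    // images_warped_f: warped images converted to CV_32F,
    // corners: per-image top-left corners from the warper,
    // masks_warped: warped masks, updated in place by find().
    void findSeams(std::vector<UMat>& images_warped_f,
                   std::vector<Point>& corners,
                   std::vector<UMat>& masks_warped)
    {
        Ptr<SeamFinder> seam_finder =
            makePtr<GraphCutSeamFinder>(GraphCutSeamFinderBase::COST_COLOR_GRAD);
        // Cheaper alternatives, roughly decreasing quality / increasing speed:
        // seam_finder = makePtr<DpSeamFinder>(DpSeamFinder::COLOR_GRAD);
        // seam_finder = makePtr<VoronoiSeamFinder>();
        seam_finder->find(images_warped_f, corners, masks_warped);
    }

Since only the construction line changes, it is cheap to benchmark all three finders at several seam resolutions and pick the coarsest one that still looks acceptable after blending.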

Feed GStreamer sink into OpenPose

I have a custom USB camera with a custom driver on a custom Nvidia Jetson TX2 board that is not detected by the OpenPose examples. I access the data using a custom GStreamer source. I currently pull frames into a cv::Mat, color-convert them and feed them into OpenPose on a per-picture basis; it works fine but 30-40% slower than a comparable video stream from a plug-and-play camera. I would like to explore features like tracking that are available for streams, since I'm trying to maximize the FPS. I believe the stream feed is superior due to better (continuous) use of the GPU.
In particular, the speedup would come at the expense of confidence and would be addressed later: one frame goes through pose estimation and 3-4 subsequent frames just track the objects with decreasing confidence levels. I tried that with a plug-and-play camera and the OpenPose example, and the results were somewhat satisfactory.
The point where I stumbled is that I can put the video stream into a cv::VideoCapture, but I do not know how to hand that capture to OpenPose for processing.
If there is a better way to do it, I am happy to try different things, but the bottom line is that the custom camera stays (I know ;/). Solutions to the issue described or different ideas are welcome.
Things I already tried:
Lower the resolution of the camera (the camera crops below a certain resolution instead of binning, so I can't really go below 1920x1080; it's a 40+ megapixel video camera, by the way)
Use CUDA to shrink the image before feeding it to OpenPose (the shrink + pose estimation time was virtually equivalent to pose estimation on the original image)
Since the camera view is static, check for changes between frames, crop the image down to the changed area and run pose estimation on that section (10% speedup, high risk of missing something)
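One way to wire a cv::VideoCapture into OpenPose is the asynchronous Wrapper API from the tutorial_api_cpp examples. A rough sketch, assuming an OpenPose build around v1.7 (where cvInputData is an op::Matrix and OP_CV2OPCONSTMAT wraps a cv::Mat; older versions take the cv::Mat directly) and an OpenCV build with GStreamer support; the pipeline string is a placeholder for the custom source:

    #include <opencv2/opencv.hpp>
    #include <openpose/headers.hpp>
    #include <memory>
    #include <vector>

    int main()
    {
        // Placeholder pipeline; substitute the real custom source element.
        cv::VideoCapture cap(
            "v4l2src ! videoconvert ! video/x-raw,format=BGR ! appsink",
            cv::CAP_GSTREAMER);

        op::Wrapper opWrapper{op::ThreadManagerMode::Asynchronous};
        opWrapper.start(); // default pose configuration

        cv::Mat frame;
        while (cap.read(frame))
        {
            // Wrap the frame in a Datum and queue it for processing.
            auto datums = std::make_shared<
                std::vector<std::shared_ptr<op::Datum>>>();
            datums->emplace_back(std::make_shared<op::Datum>());
            datums->at(0)->cvInputData = OP_CV2OPCONSTMAT(frame);
            opWrapper.waitAndEmplace(datums);

            std::shared_ptr<std::vector<std::shared_ptr<op::Datum>>> result;
            if (opWrapper.waitAndPop(result) && result != nullptr)
            {
                const auto& keypoints = result->at(0)->poseKeypoints;
                // ... consume keypoints ...
            }
        }
        return 0;
    }

The asynchronous wrapper runs pose estimation on its own threads, decoupled from capture, which is likely where the "continuous GPU use" advantage of stream inputs comes from.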

calcOpticalFlowPyrLK function in OpenCV 3.0

I'm trying to track something across some frames. I know calcOpticalFlowPyrLK is meant for sparse tracking problems, but I thought it wouldn't hurt to try tracking all pixels in the frames.
My video frames are actually very stable (motion is barely visible to the eye), and calcOpticalFlowPyrLK works well for most pixels. But for some pixels it returns really big flow vectors (like [200,300]), which doesn't make sense.
I also found a MATLAB implementation that uses the same pyramidal Lucas-Kanade algorithm, but this MATLAB version doesn't return any crazy values.
So I'm wondering what causes the OpenCV function to return huge, unreasonable values. Is it because the matrix inversion is done differently?
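Without seeing the code, the usual culprit is low-texture pixels: Lucas-Kanade inverts a 2x2 gradient matrix per point, and where that matrix is near-singular (flat or edge-only regions) the solution explodes. OpenCV exposes a minEigThreshold for exactly this, plus per-point status and err outputs; a MATLAB implementation may simply apply stricter defaults. A filtering sketch (the thresholds are guesses to tune, not recommended values):

    #include <opencv2/video/tracking.hpp>
    #include <cmath>
    #include <vector>

    void trackFiltered(const cv::Mat& prev, const cv::Mat& next,
                       std::vector<cv::Point2f>& prevPts,
                       std::vector<cv::Point2f>& nextPts,
                       std::vector<uchar>& valid)
    {
        std::vector<uchar> status;
        std::vector<float> err;
        cv::calcOpticalFlowPyrLK(
            prev, next, prevPts, nextPts, status, err,
            cv::Size(21, 21), 3,
            cv::TermCriteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 30, 0.01),
            0, /*minEigThreshold=*/1e-3); // stricter than the 1e-4 default

        valid.assign(status.size(), 0);
        for (size_t i = 0; i < status.size(); ++i)
        {
            const float dx = nextPts[i].x - prevPts[i].x;
            const float dy = nextPts[i].y - prevPts[i].y;
            // Reject failed tracks, high residuals, and displacements
            // that are implausible for near-static video.
            valid[i] = status[i] && err[i] < 10.f && std::hypot(dx, dy) < 5.f;
        }
    }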

Blurry Saved Images of Detected Objects using OpenCV

I have C++ code running on the Parrot AR.Drone 2.0 that detects objects and then saves images of the detected objects to the controller (computer). As you may know, the AR.Drone has a 720p high-definition camera. However, the saved images are very blurry. I cannot find any OpenCV function that increases the resolution of the saved images; I believe the resolution is set to 95/100 by default in OpenCV. Does anyone know a solution to this problem?
Any input or comment would be helpful.
I think you mean a JPEG quality of 95/100. You can change the third parameter of cv::imwrite as described in the OpenCV documentation:
cv::imwrite("name.jpg", image, CV_IMWRITE_JPEG_QUALITY=100); //100 instead of default 95
But this only increases the compression quality, not the resolution... and there shouldn't be much difference between 95% and 100%.
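If the blurriness does come from compression artifacts rather than the sensor, writing a lossless format sidesteps JPEG quality entirely, at the cost of larger files (the file name is a placeholder):

    cv::imwrite("detection.png", image); // PNG is lossless; JPEG quality flags don't apply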