OpenCV -- how to optimize a color tracking program? - C++

I want to optimize my program, in which I am using the color object tracking algorithm described here. The only difference is that I am using the cvBlob library instead of cv::moments (cvBlob was faster and more accurate). Using a profiler (valgrind + kcachegrind) I found that ~29% of the time is taken by the colorspace conversion method (cv::cvtColor; I am tracking objects in three colors). I am converting from BGR to HSV.
I've read in some papers that using the YCbCr colorspace is even better for color tracking. Is it a good idea to convert from BGR to YCbCr? It should be slightly faster, as it requires fewer multiplications (I am not sure about that -- I do not know how OpenCV does it internally). Does the algorithm need changes, or can I just convert the lower and upper boundaries for each tracked color from HSV to YCbCr and then use the inRangeS method, as I did with HSV?
Is there any way to get the frame from the driver in YCbCr (or YUV)? I am not asking about HSV, because that is not supported by v4l2, AFAIR.
Do you have any other ideas? I don't want to use IPP or GPU.

Check out the OpenCV documentation for cvtColor. It covers conversion from BGR to YCbCr with cvtColor (note that OpenCV's constant is COLOR_BGR2YCrCb, with Cr before Cb).
(Please try that and comment here about the result, i.e. what percentage of total time it takes in YCbCr mode, because it will help a lot of people in the future.)
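Here is a minimal sketch of what the YCrCb variant could look like with the C++ API, assuming the bounds are re-derived for each tracked color (the Scalar values below are placeholders, not working bounds):

    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>

    // Threshold a BGR frame in YCrCb instead of HSV.
    cv::Mat trackColorYCrCb(const cv::Mat& frameBGR)
    {
        cv::Mat ycrcb, mask;

        // Note: OpenCV's constant is BGR2YCrCb (Cr before Cb).
        cv::cvtColor(frameBGR, ycrcb, cv::COLOR_BGR2YCrCb);

        // Placeholder bounds for a reddish object: high Cr, moderate Cb.
        // These must be measured for your colors; they do NOT follow from
        // the HSV bounds by any simple per-channel mapping.
        cv::Scalar lower(0, 150, 90);    // Y, Cr, Cb
        cv::Scalar upper(255, 200, 130);

        cv::inRange(ycrcb, lower, upper, mask); // same role as inRangeS in the C API
        return mask;                            // feed this mask to cvBlob as before
    }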

Related

Most efficient way to blur an image in OpenCV

I am blurring the background of an image using the blur method. All the tutorials I have seen use a kernel size of at most (7,7), but that is not blurred enough for what I need.
I have used Size(33,33) and it works alright, but I would like to go higher, so currently I am using Size(77,77). Is this the most efficient way of blurring an image in OpenCV? And is it okay to go that high at all?
Another idea is to run the blur method more than once with a kernel size of (7,7), but that doesn't seem more efficient.
EDIT:
OpenCV version 3.2
Try cv::stackBlur().
It was added in v4.7.0. Its performance is almost flat, i.e. independent of kernel size. The pull request contains performance figures: https://github.com/opencv/opencv/pull/20379
GaussianBlur(sigmaX=22): 30 ms
stackBlur(ksize=(101,101)): 0.4 ms
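A minimal usage sketch, assuming you can upgrade from 3.2 to a 4.7.0+ build (the file names are placeholders):

    #include <opencv2/imgcodecs.hpp>
    #include <opencv2/imgproc.hpp>

    int main()
    {
        cv::Mat src = cv::imread("input.png");
        cv::Mat dst;

        // Requires OpenCV >= 4.7.0; kernel dimensions must be odd.
        // Runtime stays roughly flat as ksize grows, unlike blur()/GaussianBlur().
        cv::stackBlur(src, dst, cv::Size(77, 77));

        cv::imwrite("blurred.png", dst);
        return 0;
    }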

Is there a SURF_CUDA implementation for colored images?

I recently started playing around with OpenCV, trying the SURF algorithm. It is really slow on the CPU, it does not work with color images on the GPU (there is an assertion that checks for type==CV_8UC1), and converting the images to grayscale gives some pretty bad results.
I'm wondering if there is a color implementation on the GPU, in OpenCV or somewhere else, or if there is some kind of tricky workaround, like running the algorithm on all 3 channels and then magically merging them?
Thanks.
There's no special handling of color images in OpenCV's non-GPU version of SURF; the code shows that it just calls cvtColor(img, img, COLOR_BGR2GRAY) if it gets an image with more than one channel.
You might try converting the image to HSV and using one or more of the H, S, and/or V channels. More discussion at this question.
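A rough sketch of that per-channel workaround, assuming a build with opencv_contrib and CUDA support (the hessian threshold of 400 is illustrative). Note that the pooled keypoints come from different channels, so their descriptors are not directly comparable:

    #include <vector>
    #include <opencv2/core.hpp>
    #include <opencv2/core/cuda.hpp>
    #include <opencv2/imgproc.hpp>
    #include <opencv2/xfeatures2d/cuda.hpp> // SURF_CUDA lives in opencv_contrib

    std::vector<cv::KeyPoint> surfPerChannel(const cv::Mat& bgr)
    {
        cv::Mat hsv;
        cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);

        std::vector<cv::Mat> channels;
        cv::split(hsv, channels); // each channel is CV_8UC1, so the assertion passes

        cv::cuda::SURF_CUDA surf(400.0); // hessian threshold, tune per scene

        std::vector<cv::KeyPoint> all;
        for (const cv::Mat& ch : channels)
        {
            cv::cuda::GpuMat gpuImg(ch); // uploads to the GPU
            cv::cuda::GpuMat gpuKeypoints;
            surf(gpuImg, cv::cuda::GpuMat(), gpuKeypoints);

            std::vector<cv::KeyPoint> kps;
            surf.downloadKeypoints(gpuKeypoints, kps);
            all.insert(all.end(), kps.begin(), kps.end());
        }
        return all; // pooled keypoints from the H, S and V channels
    }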

background extraction using OpenCv

I want to extract the background from a video, but I don't want to use cv::bgsegm::BackgroundSubtractorMOG or cv::BackgroundSubtractorMOG2, because those methods build the model from frame means. Instead I plan to use a frame comparison method: use the first frame as the background model, compare the pixel values of the following frames with the first frame's pixel values, and treat a pixel as background if there is no change, or a change less than a threshold. How can I implement this using OpenCV and C++?
Your question is too vague, I think. I can only give you some hints.
First, your approach is very simplistic. That's not bad, but from my experience it won't give great results, even if you have a lot of control over your scene. Nevertheless, I do not want to hold you back if you want to gain your own experience.
You probably want to take a look at
Operations on Arrays in OpenCV
Basic Threshold Operations in OpenCV
Everything you need should be there. In particular, the absdiff operation and the threshold function (with binary threshold type) should be of interest.
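A minimal sketch of the first-frame comparison described in the question, built from exactly those two calls (the video path and threshold value are placeholders):

    #include <opencv2/core.hpp>
    #include <opencv2/imgproc.hpp>
    #include <opencv2/videoio.hpp>

    int main()
    {
        cv::VideoCapture cap("input.avi"); // placeholder path
        cv::Mat background, frame, gray, diff, mask;

        // Background model = first frame, in grayscale.
        if (!cap.read(background)) return 1;
        cv::cvtColor(background, background, cv::COLOR_BGR2GRAY);

        const double thresh = 30.0; // illustrative; tune for your scene

        while (cap.read(frame))
        {
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

            cv::absdiff(gray, background, diff); // per-pixel |frame - background|
            cv::threshold(diff, mask, thresh, 255, cv::THRESH_BINARY);
            // mask == 0   -> background (change below threshold)
            // mask == 255 -> foreground
        }
        return 0;
    }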

Can libjpeg be used to change contrast of images in C++?

If not, where can I find an algorithm to adjust the contrast of an image? I will have to code it in C++, and I have access to the libjpeg and libjpeg-turbo libraries.
http://en.wikipedia.org/wiki/Image_editing#Contrast_change_and_brightening
Is this a good starting point for color images?
The simplest I can think of is the ImageMagick library, or do it yourself*.
* I know that the code in that answer is not C++, but if you know C or C++, you should be able to understand it.
You might like this one for starters: Processing in the 8-bit YUV Color Space
There the contrast adjustment is covered. With an image in a YUV pixel format, contrast adjustment is quite easy: it is just an update of the Y component of each pixel.
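For illustration, a minimal sketch of that kind of adjustment on a raw 8-bit plane; the function name and the midpoint-based formula are mine, not something libjpeg provides:

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>

    // Scale pixels away from (factor > 1) or toward (factor < 1) the
    // midpoint 128. Works on a Y plane, or on each channel of RGB data
    // decoded by libjpeg (with somewhat different visual results).
    void adjustContrast(uint8_t* pixels, size_t count, float factor)
    {
        for (size_t i = 0; i < count; ++i)
        {
            float v = (pixels[i] - 128.0f) * factor + 128.0f;
            pixels[i] = static_cast<uint8_t>(std::min(255.0f, std::max(0.0f, v)));
        }
    }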
libjpeg is not quite the tool for image processing, unless you are decoding/encoding JPEGs and you need some processing on the way.

image color conversion

I need to convert 24bppRGB to 16bppRGB, 8bppRGB, 4bppRGB, 8bpp grayscale and 4bpp grayscale. Any good links or other suggestions?
preferably using Windows/GDI+
[EDIT] Speed is more critical than quality. Source images are screenshots.
[EDIT1] The color conversion is required to minimize space.
You're better off getting yourself a library, as others have suggested. Aside from ImageMagick, there are others, such as OpenCV. The benefits of leaving this to a library are:
Save yourself some time -- by cutting out dev and testing time for the algorithm
Speed. Most libraries out there are optimized to a level far greater than a standard developer (such as ourselves) could achieve
Standards compliance. There are many image formats out there, and using a library cuts the problem of standards compliance out of the equation.
If you're doing this yourself, then your problem can be divided into the following sub-problems:
Simple color quantization. As @Alf P. Steinbach pointed out, this is just "downscaling" the number of colors. RGB24 has 8 bits each for the R, G, and B channels. For RGB16 you can do a number of conversions (a bit-packing sketch follows this list):
Equal number of bits for each of R, G, B. This typically means 4 or 5 bits each.
Favor the green channel (human eyes are more sensitive to green) and give it 6 bits. R and B get 5 bits.
You can even do the same thing for RGB24 to RGB8, but the results won't be as pretty as a palettized image:
4 bits green, 2 red, 2 blue.
3 bits green, 5 bits shared between red and blue
Palettization (indexed color). This is for going from RGB24 to RGB8 and RGB4. It is a hard problem to solve by yourself.
Color-to-grayscale conversion. Very easy. Convert your RGB24 to the Y'UV color space and keep the Y' channel. That will give you 8bpp grayscale. If you want 4bpp grayscale, then you either quantize or do palettization.
Also be sure to check out chroma subsampling. Often, you can decrease the bitrate by a third without visible losses to image quality.
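A sketch of the quantization parts (problems 1 and 3); the function names are mine, but the bit layouts and luma weights are the standard ones:

    #include <cstdint>

    // RGB24 -> RGB16 (5-6-5): green keeps 6 bits, red and blue get 5 each.
    uint16_t packRGB565(uint8_t r, uint8_t g, uint8_t b)
    {
        return static_cast<uint16_t>(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
    }

    // RGB24 -> 8bpp grayscale: an integer approximation of the luma
    // Y' = 0.299 R + 0.587 G + 0.114 B.
    uint8_t toGray8(uint8_t r, uint8_t g, uint8_t b)
    {
        return static_cast<uint8_t>((299 * r + 587 * g + 114 * b) / 1000);
    }

    // 8bpp -> 4bpp grayscale by simple quantization: keep the top 4 bits.
    uint8_t toGray4(uint8_t gray)
    {
        return gray >> 4;
    }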
With that breakdown, you can divide and conquer. Problems 1 and 3 you can solve pretty quickly. That will allow you to see the quality you can get simply by doing coarser color quantization.
Whether or not you want to solve Problem 2 will depend on the result from above. You said that speed is more important, so if the quality from color quantization alone is good enough, don't bother with palettization.
Finally, you never mentioned WHY you are doing this. If this is for reducing storage space, then you should be looking at image compression. Even lossless compression will give you better results than reducing the color depth alone.
EDIT
If you're set on using PNG as the final format, then your options are quite limited, because neither RGB16 nor RGB8 is a valid combination in the PNG header.
So what this means is: regardless of bit depth, you will have to switch to indexed color if you want RGB color images below 24bpp (8 bits per channel). This means you will NOT be able to take advantage of the color quantization and chroma subsampling that I mentioned above -- neither is supported in PNG. So you will have to solve Problem 2 -- palettization.
But before you think about that, some more questions:
What are the dimensions of your images?
What sort of ideal file-size are you after?
How close to that ideal file size do you get with straight RGB24 + PNG compression?
What is the source of your images? You've mentioned screenshots, but since you're so concerned about disk space, I'm beginning to suspect that you might be dealing with image sequences (video). If this is so, then you could do better than PNG compression.
Oh, and if you're serious about doing things with PNG, then definitely have a look at this library.
Find yourself a copy of the ImageMagick library. It's very configurable, so you can teach it about the details of whatever binary format you need to process...
See: ImageMagick, which has a very practical license.
I received acceptable (preliminary) results with GDI+ v1.1, which ships with Vista and Win7. It allows conversion to 16bpp (I used PixelFormat16bppRGB565) and to 8bpp and 4bpp using standard palettes. Better quality could be achieved with an "optimal palette" -- GDI+ would calculate an optimal palette for each screenshot -- but the conversion is two times slower. Grayscale was achieved by specifying a simple custom palette, e.g. as demonstrated here, except that I didn't need to modify the pixels manually; Bitmap::ConvertFormat() did it for me.
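For reference, a minimal sketch of that GDI+ 1.1 call; the dither and palette parameters here are illustrative, so check the ConvertFormat documentation for your exact case:

    // GDIPVER must be 0x0110 before including gdiplus.h, and GdiplusStartup
    // must have been called, as for any GDI+ use.
    #define GDIPVER 0x0110
    #include <windows.h>
    #include <gdiplus.h>
    #pragma comment(lib, "gdiplus.lib")

    bool convertTo565(Gdiplus::Bitmap& bmp)
    {
        // A direct-color target like 565 needs no palette; the dither type
        // is a quality/speed trade-off worth experimenting with.
        Gdiplus::Status s = bmp.ConvertFormat(
            Gdiplus::PixelFormat16bppRGB565,
            Gdiplus::DitherTypeSolid,
            Gdiplus::PaletteTypeCustom,
            NULL,
            0.0f);
        return s == Gdiplus::Ok;
    }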
[EDIT] The results were really acceptable until I decided to check the solution on WinXP. Surprisingly, Microsoft decided not to ship GDI+ v1.1 (required for Bitmap::ConvertFormat) with WinXP. Nice move! So I continue researching...
[EDIT] I had to reimplement this in plain GDI, hard-coding the palettes from GDI+.