Getting Hue value from the peaks in histogram opencv - c++

I am trying to use color information for the detection of rectangles. Some of my rectangles overlap and are multicolored. I found a solution to detect these rectangles using Hue values: I check inRange against the Hue ranges of the colors
Orange 0-22
Yellow 22- 38
Green 38-75
Blue 75-130
Violet 130-160
Red 160-179
However, I do not know in advance exactly which colors will appear. For example, in one image the rectangles can be orange, red, and blue, and in another image they can be other colors.
I tried looking at the histogram, but my background is not just white or black, so the histogram is confusing.
I would appreciate any ideas on how to handle this problem.

You can try a brute-force approach, where you try all the color ranges and then use findContours (example) to see whether you can find a contour that is plausibly a rectangle. If the background is very noisy, you can require a minimum size for the contour (contourArea). You could also check how rectangular the contour is by dividing the contour area by the area of its minAreaRect; for a well-detected rectangle the result should approach 1.
Whether this could possibly work depends on several factors, and overlapping rectangles will quickly break it.
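A minimal sketch of that brute-force loop, assuming a BGR input image named rects.png and the hue ranges from the question; the saturation/value bounds, the 500-pixel area floor, and the 0.85 ratio are placeholder values to tune:

    #include <opencv2/opencv.hpp>
    #include <utility>
    #include <vector>

    int main() {
        cv::Mat src = cv::imread("rects.png"), hsv;
        cv::cvtColor(src, hsv, cv::COLOR_BGR2HSV);

        // Hue ranges from the question: orange, yellow, green, blue, violet, red.
        const std::vector<std::pair<int, int>> hueRanges =
            {{0, 22}, {22, 38}, {38, 75}, {75, 130}, {130, 160}, {160, 179}};

        for (const auto& hue : hueRanges) {
            cv::Mat mask;
            cv::inRange(hsv, cv::Scalar(hue.first, 50, 50),
                        cv::Scalar(hue.second, 255, 255), mask);

            std::vector<std::vector<cv::Point>> contours;
            cv::findContours(mask, contours, cv::RETR_EXTERNAL,
                             cv::CHAIN_APPROX_SIMPLE);
            for (const auto& c : contours) {
                double area = cv::contourArea(c);
                if (area < 500) continue;             // skip background noise
                cv::RotatedRect box = cv::minAreaRect(c);
                if (area / box.size.area() > 0.85) {
                    // Contour nearly fills its rotated bounding box:
                    // likely a rectangle in this hue range.
                }
            }
        }
        return 0;
    }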

So if I understand correctly, you have a variety of images, each of which contains multiple rectangles in a variety of colors, the background of each image is non-uniform, and you're trying to segment out the rectangles using a histogram?
Using histograms for image segmentation works best on grayscale images with a uniform background, so that the peaks in the histogram tell you the primary intensities of the objects you are trying to segment out. This method is not going to translate well to your application, because the shapes you are attempting to segment are non-uniform in shade. Without seeing example images I would say this probably isn't going to work, but you might be able to get away with it if the shade variation of the rectangles is relatively small: if your rectangles fall in the range 15-30 you might be alright, but if they vary from 20-100 you're going to be out of luck. The same goes for the variation of the background.
If the background and the rectangles have very clearly defined borders, and the background colors transition very smoothly, you may be able to get away with some sort of region growing on the background to obtain a list of all the background pixels, then set those to black or something to allow better analysis of the rectangles in the foreground. But I can only speculate so much with the information you've given in your post.
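For what it's worth, a minimal sketch of that region-growing idea using cv::floodFill seeded from the image corners; the per-channel tolerance of 4 is a guess that would need tuning, and it presumes all four corners actually belong to the background:

    #include <opencv2/opencv.hpp>
    #include <vector>

    int main() {
        cv::Mat img = cv::imread("scene.png");

        // Grow the background from each corner. Without FLOODFILL_FIXED_RANGE,
        // each pixel is compared to its neighbours, which tolerates the smooth
        // background gradient described above.
        const std::vector<cv::Point> seeds = {
            {0, 0}, {img.cols - 1, 0},
            {0, img.rows - 1}, {img.cols - 1, img.rows - 1}};
        for (const cv::Point& seed : seeds) {
            cv::floodFill(img, seed, cv::Scalar(0, 0, 0), nullptr,
                          cv::Scalar(4, 4, 4), cv::Scalar(4, 4, 4));
        }
        // The background is now (approximately) black; the rectangles remain.
        cv::imwrite("foreground.png", img);
        return 0;
    }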

Related

SimpleBlobDetection keypoints not visually centered (opencv C++)

I am trying to use OpenCV's simple blob detector to determine the colors on the face of a Rubik's cube. I have been using this fantastic resource so far, and it has proved to be very helpful and descriptive. After a fair bit of tweaking, I have been successful in making good filters for each color. I look at the x and y position of each color blob, and since the cubies are evenly spaced, I do a quick rounding division to determine which row and column each blob belongs in, with groups of two being split and assigned to two different rows/columns respectively.
This is more of a curiosity question than anything else. To my eye, it looks like the centroids are being calculated incorrectly... shouldn't the drawn circle be more centered in each blob? Yet both seem to be sticking out arbitrarily to one side.
Below, I have the original image of the cube, and two of the color filters, the top for green, and the bottom for blue.
As you can see, the green and blue blobs are positioned correctly, and should be far enough away from each other to be classified into separate rows, but the centroids visually seem to be skewed from the centers of the blobs (green centroid should be more to the right, and blue more to the left). Is there something that I am not picking up on here? Is this just a quirk of the system?
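One way to sanity-check the detector is to recompute each blob's centroid from contour moments on the same binary filter and compare it with keypoint.pt. A minimal sketch, where mask stands in for one of the color filters from the question:

    #include <opencv2/opencv.hpp>
    #include <iostream>
    #include <vector>

    // Compare SimpleBlobDetector keypoints against moment-based centroids
    // computed on the same binary color filter.
    void checkCentroids(const cv::Mat& mask) {
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(mask.clone(), contours, cv::RETR_EXTERNAL,
                         cv::CHAIN_APPROX_SIMPLE);
        for (const auto& c : contours) {
            cv::Moments m = cv::moments(c);
            if (m.m00 <= 0) continue;
            cv::Point2f centroid(static_cast<float>(m.m10 / m.m00),
                                 static_cast<float>(m.m01 / m.m00));
            // Print (or draw) and compare with the corresponding keypoint.pt.
            std::cout << "moment centroid: " << centroid << std::endl;
        }
    }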

Python: Reduce rectangles on images to their border

I have many grayscale input images which contain several rectangles. Some of them overlap and some go over the border of the image. An example image could look like this:
Now I have to reduce the rectangles to their border. My idea was to make every non-white pixel that is more than N (e.g. 3) pixels away (in Manhattan distance) from the image border or from a white pixel white. The output should look like this (sorry for the different-sized borders):
It is not very hard to implement this. Unfortunately the implementation must be fast, because the input may contain very many images (e.g. 100,000) and the user has to wait until this step is finished.
I thought about using fromimage and then doing everything with NumPy, but I did not find a good solution.
Maybe someone has an idea or a hint on how this problem could be solved efficiently?
Calculate the distance transform of the image (OpenCV distanceTransform, http://docs.opencv.org/2.4/modules/imgproc/doc/miscellaneous_transformations.html).
In the resulting image, zero all the pixels that have a value greater than 3.
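A minimal C++ sketch of those two steps, assuming dark rectangles on a near-white background and N = 3; the 250 binarization threshold is a placeholder:

    #include <opencv2/opencv.hpp>

    int main() {
        cv::Mat img = cv::imread("rects.png", cv::IMREAD_GRAYSCALE);

        // Binary mask: rectangle (non-white) pixels become 255, background 0.
        cv::Mat mask;
        cv::threshold(img, mask, 250, 255, cv::THRESH_BINARY_INV);

        // Distance of every rectangle pixel to the nearest background pixel.
        // DIST_L1 is the Manhattan distance mentioned in the question.
        cv::Mat dist;
        cv::distanceTransform(mask, dist, cv::DIST_L1, 3);

        // Pixels deeper than N = 3 are interior: paint them white,
        // which leaves only the borders.
        img.setTo(255, dist > 3);
        cv::imwrite("borders.png", img);
        return 0;
    }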

Detecting color range from "average" in OpenCV

Currently I am using a Haar cascade to detect a face in a picture, which is working: I get a rect of where the face is.
Now I want to get the "average" (skin) color in that rect and use it as the base for a color range to search for other skin in the photo. How should I go about that?
I have found the inRange function, which searches for colors in a range, but I am not quite sure how to get the average color of my skin into it. It seems that the inRange function needs HSV values? However, I don't know quite what that format is. It doesn't seem to be the same as HSB in Photoshop (which I tried for testing purposes).
My question boils down to this: how can I get the "average" color in a rect and find other colors in that range (e.g. lighter and darker than that color, but the same shade)?
Thanks.
Get the 3D color histogram within the detected region of interest; that is, not three 1D histograms, one per channel, but one 3D histogram for all three channels together. OpenCV's calcHist has options for that. Here's an example which does that; it uses the Python bindings for OpenCV, but it shows how to set the parameters.
Also, set the range of the histogram to a reasonable range for skin color. As MSalters suggested in the comments, HSV is a better color space for things like this. Perhaps you can disregard the S and V channels and only do a 1D histogram for H. Try it out.
The bin with the highest count is going to be your "average" skin color.
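A minimal C++ sketch of that 3D histogram and peak lookup, where faceRect stands in for the rect returned by the Haar cascade and the bin counts are arbitrary choices:

    #include <opencv2/opencv.hpp>

    int main() {
        cv::Mat frame = cv::imread("photo.jpg");
        cv::Rect faceRect(100, 100, 120, 120);  // placeholder: from the Haar cascade
        cv::Mat hsv;
        cv::cvtColor(frame(faceRect), hsv, cv::COLOR_BGR2HSV);

        // One 3D histogram over H, S and V together (not three 1D histograms).
        int channels[] = {0, 1, 2};
        int histSize[] = {30, 32, 32};
        float hRange[] = {0, 180}, sRange[] = {0, 256}, vRange[] = {0, 256};
        const float* ranges[] = {hRange, sRange, vRange};
        cv::Mat hist;
        cv::calcHist(&hsv, 1, channels, cv::Mat(), hist, 3, histSize, ranges);

        // The fullest bin is the "average" skin color.
        double maxVal;
        int maxIdx[3];
        cv::minMaxIdx(hist, nullptr, &maxVal, nullptr, maxIdx);
        int hue = maxIdx[0] * 180 / histSize[0];  // lower edge of the peak bin
        int sat = maxIdx[1] * 256 / histSize[1];
        int val = maxIdx[2] * 256 / histSize[2];
        // Widen a range around (hue, sat, val) and feed it to cv::inRange.
        return 0;
    }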

Extending a contour in OpenCv

I have several contours that consist of several black regions in my image. Directly adjacent to these black regions are some brighter regions that do not belong to my contours. I want to add these brighter regions to my black regions and thereby extend my contours in OpenCV.
Is there a convenient way to extend a contour? I thought about looking at the intensity change in my gradient image created with cv::Sobel and extending until the gradient changes again, meaning the pixel intensities return to the neither-black-nor-bright regions of the image.
Thanks!
Here are example images. The first picture shows the raw image, the second the extracted contour using Canny & findContours, and the last one the Sobel gradient intensity image of the same area.
I want to include the bright boundaries in the first image in the contour.
Update: I've now used some morphological operations on the Sobel gradients and added a contour around them (see the image below). The next step could be to find each adjacent pair of purple & red contours, but it seems very much like a waste of processing time to actually have to search for directly adjacent contours. Any better ideas?
Update 2: My solution for now is to search for morphed-gradient (red) contours in a bounding box around each of my (purple) contours and pick the one with the correct orientation & size. This works for gradient contours where the morphological operation closes the "rise" and "fall" gradient areas, as in Figure 3. But it is still a bad solution for cases in which the lit area is wider than in the image above. Any idea is still very much appreciated, thanks!
What you're trying to do is find two different features and merge them. It's not terribly difficult, but you have to use multiple copies of the image to make it happen.
Make one copy, and threshold it for the dark portion
Make another copy and threshold it for the light portion
Merge both thresholded images into a new image
Apply a morphological operation like opening or closing (depending on how you thresholded). This will connect nearby components.
Find contours in the resultant image
Use those contours on your original image. This will work since all the images are the same size and all based on the original.
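A minimal sketch of those steps; the two threshold values and the kernel size are placeholders that would need tuning to the images in question:

    #include <opencv2/opencv.hpp>
    #include <vector>

    int main() {
        cv::Mat gray = cv::imread("scene.png", cv::IMREAD_GRAYSCALE);

        // 1) + 2) Threshold one copy for the dark regions, another for the bright.
        cv::Mat dark, bright;
        cv::threshold(gray, dark, 60, 255, cv::THRESH_BINARY_INV);  // dark -> white
        cv::threshold(gray, bright, 200, 255, cv::THRESH_BINARY);   // bright -> white

        // 3) Merge both thresholded images.
        cv::Mat merged = dark | bright;

        // 4) Closing connects the nearby dark and bright components.
        cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5));
        cv::morphologyEx(merged, merged, cv::MORPH_CLOSE, kernel);

        // 5) Find contours in the merged image...
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(merged, contours, cv::RETR_EXTERNAL,
                         cv::CHAIN_APPROX_SIMPLE);

        // 6) ...and use them on the original image (same size, same coordinates).
        cv::Mat vis;
        cv::cvtColor(gray, vis, cv::COLOR_GRAY2BGR);
        cv::drawContours(vis, contours, -1, cv::Scalar(0, 0, 255), 2);
        cv::imwrite("extended_contours.png", vis);
        return 0;
    }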

Laser light detection with OpenCV and C++

I want to track a laser light dot (which is on a wall) with a webcam, and I am using OpenCV for this task. Can anybody suggest a way to do it with C++?
Thank you!
You have three options, depending on the stability of your background and what else you want to do with the image.
You can make your image so dark that the only thing you can see is the laser point. You can do this by closing the diaphragm and/or reducing the shutter time; even with cheap webcams this can usually be done in the driver. Once you've done this, the job of finding the laser point is very easy. You make the image as dark as possible because the point where the laser shines is usually far too bright for the camera to pick up. This means (as you have experienced) that you can't discern between the light laser dot and other light objects in the image; by making it darker, you can.
If you have other uses for your image (showing it to people) and your background is stable, you can also use the average of the last few video frames as a "background" and then find the spot where there is a large difference between that background and the newest frame. This is usually where the laser is pointed (again, provided your background is stable enough).
Finally, if your background is not stable and you don't want to make your image very dark, your final option is to look for all pixels that are both very bright and brighter in the red channel than in green and blue (if you are using a red laser). This system will still be distracted by white spots, but not as much as simply finding the bright pixels. If the center of your laser-pointer spot shows up as bright white regardless of laser color, this technique will at least find "rings" around that bright spot (the outer part of the dot, where the laser is not as bright as in the center, so it shows up with the actual color of the laser). You can then use simple morphological operations (closing is probably enough) to fill these rings.
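A minimal sketch of that third option for a red laser; the brightness (220) and dominance (40) margins are placeholder values:

    #include <opencv2/opencv.hpp>
    #include <vector>

    int main() {
        cv::Mat frame = cv::imread("frame.png");
        std::vector<cv::Mat> bgr;
        cv::split(frame, bgr);  // bgr[0] = blue, bgr[1] = green, bgr[2] = red

        // How much redder each pixel is than green/blue (saturates at 0 for 8-bit).
        cv::Mat redVsGreen, redVsBlue;
        cv::subtract(bgr[2], bgr[1], redVsGreen);
        cv::subtract(bgr[2], bgr[0], redVsBlue);

        // Very bright in red AND clearly red-dominant.
        cv::Mat mask = (bgr[2] > 220) & (redVsGreen > 40) & (redVsBlue > 40);

        // Closing fills the "ring" left around a saturated white centre.
        cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));
        cv::morphologyEx(mask, mask, cv::MORPH_CLOSE, kernel);

        // Centroid of the remaining pixels = dot position.
        cv::Moments m = cv::moments(mask, /*binaryImage=*/true);
        if (m.m00 > 0) {
            cv::Point dot(static_cast<int>(m.m10 / m.m00),
                          static_cast<int>(m.m01 / m.m00));
            cv::circle(frame, dot, 10, cv::Scalar(0, 255, 0), 2);
        }
        cv::imwrite("detected.png", frame);
        return 0;
    }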
Let's say you use a laser of one of these colors: red, green, or blue.
If the laser dot appears very bright (at least in one channel, e.g. red), then simply thresholding that image/channel at, say, a grey value of 200 will leave you with a few candidates for the laser light. If the other channels are dark(er) in this area, then you know it is a bright light of the right color. A little filtering by size, and you have a good chance of finding the spot.
If you stick an IR filter on your webcam, your projection will not be picked up, making the detection of the laser point much easier (using simple background subtraction, etc.). That's assuming the laser pointer emits IR light...
As suggested in other answers, searching for the color might be a good idea.
You should consider looking for a specific color range. The best way to do this is to convert the picture to the HSL or HSV color space.
cv::cvtColor(src, hsv, cv::COLOR_BGR2HSV);
More information on Wikipedia.
Then you have three channels: hue (= color), saturation, and lightness (or value).
With cv::inRange(hsv, cv::Scalar(159, 135, 165), cv::Scalar(179, 255, 200), mask); you can now generate a black-and-white image which shows which pixels are in the color range.
The scalars are the low and high values for each channel.
In this example you will get pixels with a hue between 159 and 179, a saturation between 135 and 255, and a value between 165 and 200.
Maybe this can improve your tracking.
How about this code:
https://www.youtube.com/watch?v=MKUWnz_obqQ
https://github.com/niitsuma/detect_laser_pointer
In this code, the observed HSV color is compared to the registered color using Hotelling's T-squared test.
Try template matching.
First you "point the pointer" at a specific place so the template can be captured. Then you just look for it.
Or, as jilles de wit said, you can take the difference of the last two frames; the difference will probably show you the pointer.
Convert the last two frames to grayscale, then apply the subtract function.
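A minimal sketch of that frame-differencing idea, using cv::absdiff for the subtraction; the threshold of 50 is a placeholder:

    #include <opencv2/opencv.hpp>

    int main() {
        cv::VideoCapture cap(0);
        cv::Mat frame, gray, prev, diff, mask;

        if (!cap.read(frame)) return 1;
        cv::cvtColor(frame, prev, cv::COLOR_BGR2GRAY);

        while (cap.read(frame)) {
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

            // Difference of the last two frames: a moving bright dot stands out.
            cv::absdiff(gray, prev, diff);
            cv::threshold(diff, mask, 50, 255, cv::THRESH_BINARY);

            // The strongest change is a good candidate for the dot position.
            double maxVal;
            cv::Point maxLoc;
            cv::minMaxLoc(diff, nullptr, &maxVal, nullptr, &maxLoc);
            if (maxVal > 50)
                cv::circle(frame, maxLoc, 10, cv::Scalar(0, 255, 0), 2);

            cv::imshow("laser", frame);
            gray.copyTo(prev);                // current frame becomes "previous"
            if (cv::waitKey(1) == 27) break;  // Esc quits
        }
        return 0;
    }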