C++ OpenCV custom threshold

I need to apply a custom threshold to an image: where a pixel's value is less than thr, it should keep its original value, but where a pixel is greater than thr, it should be set to the value of thr.
I checked the threshold method in OpenCV, but it gives me black and white, which is not what I want; I need the behaviour I described above.
Thanks in advance!

OpenCV offers some basic thresholding operations. There are 5 types of thresholding operation:
Threshold Binary:
If the intensity of the pixel src(x,y) is higher than thresh, the new pixel intensity is set to MaxVal. Otherwise, the pixel is set to 0.
Threshold Binary, Inverted:
If the intensity of the pixel src(x,y) is higher than thresh, the new pixel intensity is set to 0. Otherwise, it is set to MaxVal.
Truncate:
The maximum intensity value for the pixels is thresh: if src(x,y) is greater, its value is truncated to thresh.
Threshold to Zero:
If src(x,y) is lower than thresh, the new pixel value will be set to 0.
Threshold to Zero, Inverted:
If src(x,y) is greater than thresh, the new pixel value will be set to 0.
So you can do exactly that using the Truncate type; check this:
double threshold(InputArray src, OutputArray dst, double thresh, double maxval, int type)
src – input array (single-channel, 8-bit or 32-bit floating point).
dst – output array of the same size and type as src.
thresh – threshold value.
maxval – maximum value to use with the THRESH_BINARY and THRESH_BINARY_INV thresholding types.
type – thresholding type (see the details below).
Example:
/* threshold_type
0: Binary
1: Binary Inverted
2: Threshold Truncated
3: Threshold to Zero
4: Threshold to Zero Inverted
*/
threshold( src_gray, dst, threshold_value, max_BINARY_value, threshold_type );
//In your case threshold_type = 2
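Putting it together, here is a minimal self-contained sketch (the input file name and the threshold of 128 are placeholders for your own data):

#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat src_gray = imread("input.png", IMREAD_GRAYSCALE); // placeholder file
    if (src_gray.empty()) return -1;

    Mat dst;
    double thr = 128; // placeholder threshold
    // THRESH_TRUNC (type 2): pixels > thr become thr, pixels <= thr keep
    // their original value -- exactly the behaviour asked for.
    // maxval is ignored for this thresholding type.
    threshold(src_gray, dst, thr, 255, THRESH_TRUNC);

    imshow("truncated", dst);
    waitKey();
    return 0;
}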

Related

How to detect the intensity gradient direction

Given a Mat that is a square area of grayscale pixels: how can one create a straight line whose direction is perpendicular to the dominant direction in which the pixel values change (the average gradient, averaged over the whole Mat, so the result is just one direction, which can then be drawn as a line)?
For example, given a sample input image, the result would be a single line drawn over it (example images omitted).
How can one do such a thing in OpenCV (in Python or C++)?
An OpenCV implementation would look something like the following. It solves the problem in a similar fashion to the one explained in the answer by Mark Setchell, except that normalising the image has no effect on the resulting direction.
Mat img = imread("img.png", IMREAD_GRAYSCALE);
// compute the image derivatives for both the x and y direction
Mat dx, dy;
Sobel(img, dx, CV_32F, 1, 0);
Sobel(img, dy, CV_32F, 0, 1);
Scalar average_dx = mean(dx);
Scalar average_dy = mean(dy);
// negate dy: image rows grow downwards, while atan2 assumes y grows upwards
double average_gradient = atan2(-average_dy[0], average_dx[0]);
cout << "average_gradient = " << average_gradient << endl;
And to display the resulting direction:
Point center = Point(img.cols/2, img.rows/2);
Point direction = Point(cos(average_gradient) * 100, -sin(average_gradient) * 100);
Mat img_rgb = imread("img.png"); // read the image in colour
line(img_rgb, center, center + direction, Scalar(0,0,255));
imshow("image", img_rgb);
waitKey();
I can't easily tell you how to do it with OpenCV, but I can tell you the method and demonstrate using ImageMagick just at the command-line.
First, I think you need to convert the image to grayscale and normalise it to the full range of black to white - like this:
convert gradient.png -colorspace gray -normalize stage1.png
Then you need to calculate the X-gradient and the Y-gradient of the image using a Sobel filter and then take the inverse tan of the Y-gradient over the X-gradient:
convert stage1.png -define convolve:scale='50%!' -bias 50% \
\( -clone 0 -morphology Convolve Sobel:0 \) \
\( -clone 0 -morphology Convolve Sobel:90 \) \
-fx '0.5+atan2(v-0.5,0.5-u)/pi/2' result.jpg
Then the mean value of the pixels in result.jpg is the direction of your line.
You can see the coefficients used in the convolution for the X- and Y-gradients like this:
convert xc: -define morphology:showkernel=1 -morphology Convolve Sobel:0 null:
Kernel "Sobel" of size 3x3+1+1 with values from -2 to 2
Forming a output range from -4 to 4 (Zero-Summing)
0: 1 0 -1
1: 2 0 -2
2: 1 0 -1
convert xc: -define morphology:showkernel=1 -morphology Convolve Sobel:90 null:
Kernel "Sobel#90" of size 3x3+1+1 with values from -2 to 2
Forming a output range from -4 to 4 (Zero-Summing)
0: 1 2 1
1: 0 0 0
2: -1 -2 -1
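For reference, the same x-gradient kernel can be applied in OpenCV with filter2D and should match cv::Sobel exactly. A small sketch (the file name is a placeholder; note the kernel is written mirrored relative to the listing above, because filter2D computes a correlation while "Convolve" flips the kernel):

#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;

int main()
{
    Mat img = imread("img.png", IMREAD_GRAYSCALE); // placeholder file
    if (img.empty()) return -1;

    // OpenCV's 3x3 x-derivative (Sobel) kernel
    Mat kx = (Mat_<float>(3, 3) << -1, 0, 1,
                                   -2, 0, 2,
                                   -1, 0, 1);
    Mat dx_manual, dx_builtin;
    filter2D(img, dx_manual, CV_32F, kx);
    Sobel(img, dx_builtin, CV_32F, 1, 0); // the built-in equivalent

    // prints 0 if the two results agree everywhere
    std::cout << "max abs diff = " << norm(dx_manual, dx_builtin, NORM_INF) << std::endl;
    return 0;
}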
See the Wikipedia article on the Sobel operator for more detail.
Convert the image to grayscale and classify its pixels based on the gray level. For classification, you can use something like Otsu's method or k-means with 2 clusters. Then take the morphological gradient to detect the boundary.
Here are the classified pixels and the boundary obtained using Otsu's method (images omitted).
Now find the non-zero pixels of the boundary image and fit a 2D line to those pixels using the fitLine function, which finds a weighted least-squares line, or use this RANSAC implementation. fitLine gives a normalized vector collinear to the line; using this vector, you can find a vector orthogonal to it (see the sketch after the code below).
I get [0.983035, -0.183421] for the collinear vector using the code below. So, [0.183421 0.983035] is orthogonal to this vector.
Here, in the left image, the red line is the least-squares line and the blue line is perpendicular to it. In the right image, the red line is the least-squares line and the green one is the line fitted using the RANSAC library mentioned above (images omitted).
Mat im = imread("LP24W.png", 0);
Mat bw, gr;
// binarize with Otsu's threshold, then take the morphological gradient
// of the result to obtain the boundary
threshold(im, bw, 0, 255, CV_THRESH_BINARY|CV_THRESH_OTSU);
Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(3, 3));
morphologyEx(bw, gr, CV_MOP_GRADIENT, kernel);
// collect all boundary pixels into a single point set for line fitting
vector<vector<Point>> contours;
findContours(gr, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
vector<Point> points;
for (vector<Point>& cont: contours)
{
points.insert(points.end(), cont.begin(), cont.end());
}
Vec4f line;
fitLine(points, line, CV_DIST_L2, 0, 0.01, 0.01);
cout << line << endl;
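To get the perpendicular direction mentioned above: the Vec4f that fitLine fills is (vx, vy, x0, y0), a unit vector along the line plus a point on it, so (-vy, vx) is orthogonal to it. A sketch with made-up example values (the canvas size and the 100-pixel line length are arbitrary):

#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    // stand-in for the Vec4f produced by fitLine above
    Vec4f lp(0.983035f, -0.183421f, 60.f, 60.f);
    float vx = lp[0], vy = lp[1], x0 = lp[2], y0 = lp[3];

    Mat vis(120, 120, CV_8UC3, Scalar::all(0)); // placeholder canvas
    Point p(cvRound(x0), cvRound(y0));
    Point d(cvRound(100 * vx), cvRound(100 * vy));  // along the line
    Point n(cvRound(-100 * vy), cvRound(100 * vx)); // perpendicular

    line(vis, p - d, p + d, Scalar(0, 0, 255)); // least-squares line in red
    line(vis, p - n, p + n, Scalar(255, 0, 0)); // its perpendicular in blue
    imshow("lines", vis);
    waitKey();
    return 0;
}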

Compare intensity pixel value Vec3b in OpenCV

I have a 3-channel Mat image of type CV_8UC3.
I want to compare, in a loop, the intensity value of a pixel with those of its neighbours, and then set 0 or 1 depending on whether the neighbour is greater or not.
I can get the intensity by calling Img.at<Vec3b>(x,y).
But my question is: how can I compare two Vec3b?
Should I compare the pixel values for every channel (B, G and R, i.e. Vec3b[0], Vec3b[1] and Vec3b[2]) and then merge the three channels' results into a single Mat object?
Me again :)
If you want to compare (greater or less) two RGB values you need to project the 3-dimensional RGB space onto a plane or axis.
Of course, there are many possibilities to do this, but an easy way would be to use the HSV color space. The hue (H), however, is not appropriate as a linear order function because it is circular (i.e. the value 1.0 is identical to 0.0, so you cannot decide whether 0.5 > 0.0 or 0.5 < 0.0). However, the saturation (S) or the value (V) are appropriate projection functions for your purpose:
If you want to have colored pixels "larger" than monochrome pixels, you will prefer S.
If you want to have lighter pixels larger than darker pixels, you will probably prefer V.
Also, any combination of S and V would be a valid projection function, e.g. S+V.
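A sketch of that idea (projectToV is a hypothetical helper; swap the returned channel for S, or return S + V, to get the other orderings described above):

#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;

// project a BGR pixel onto a single scalar via HSV; V (brightness) here
static int projectToV(const Vec3b& bgr)
{
    Mat pix(1, 1, CV_8UC3, Scalar(bgr[0], bgr[1], bgr[2]));
    Mat hsv;
    cvtColor(pix, hsv, COLOR_BGR2HSV);
    return hsv.at<Vec3b>(0, 0)[2]; // the V channel
}

int main()
{
    Vec3b a(10, 200, 30), b(90, 90, 90); // arbitrary example pixels
    // 1 if a is "larger" than b under the chosen projection, else 0
    std::cout << (projectToV(a) > projectToV(b) ? 1 : 0) << std::endl;
    return 0;
}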
As far as I understand, you want a measure of the distance/similarity between two Vec3b pixels. This maps onto the general problem of finding the distance between two vectors in an n-dimensional space.
One of the famous measures (and I think this is what you're asking for), is the Euclidean distance.
If you are using Opencv then you can simply use:
cv::Vec3b a(1, 1, 1);
cv::Vec3b b(5, 5, 5);
double dist = cv::norm(a, b, CV_L2);
You can refer to the documentation of cv::norm for its options.
Edit: If you are doing this to measure color similarity, it's recommended to use the LAB color space as it's proved that Euclidean distance in LAB space is a good approximation for human perception of colors.
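A sketch of that (labDistance is a hypothetical helper, not an OpenCV function):

#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;

// Euclidean distance between two BGR pixels measured in Lab space,
// as an approximation of perceived color difference
static double labDistance(const Vec3b& p1, const Vec3b& p2)
{
    Mat pix(2, 1, CV_8UC3);
    pix.at<Vec3b>(0, 0) = p1;
    pix.at<Vec3b>(1, 0) = p2;
    Mat lab;
    cvtColor(pix, lab, COLOR_BGR2Lab);
    return norm(lab.at<Vec3b>(0, 0), lab.at<Vec3b>(1, 0), NORM_L2);
}

int main()
{
    // pure red vs. a darker red (BGR order); arbitrary example values
    std::cout << labDistance(Vec3b(0, 0, 255), Vec3b(0, 0, 200)) << std::endl;
    return 0;
}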
Edit 2: I see what you mean; for this you can get the magnitude of each vector and then compare them, something like this:
double a_magnitude = cv::norm(a, CV_L2);
double b_magnitude = cv::norm(b, CV_L2);
if (a_magnitude > b_magnitude) {
    // do something
} else {
    // do something else
}

matrix multiplication resulting in values greater than 255

If I am performing matrix multiplication on two 8UC1 images, or a per-element multiplication, what happens if one of the resulting pixel values is greater than 255? For example, if a certain pixel in image A has the value 100 and the same pixel in image B has the value 150 (in the per-element case), then clearly 100*150 > 255. Does that pixel simply get truncated to the value 255? And if so, is there some transformation I can make to preserve that information without having it truncated?
OpenCV will saturate the result for a uchar img.
To avoid that, use e.g. the dtype flag in multiply and specify a type larger than your input:
Mat a, b; // input, CV_8U
Mat c;    // output, yet unspecified
multiply( a, b, c, 1, CV_32S ); // c will be of int type, untruncated results
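A quick self-contained check using the numbers from the question:

#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;

int main()
{
    Mat a(1, 1, CV_8UC1, Scalar(100));
    Mat b(1, 1, CV_8UC1, Scalar(150));

    Mat saturated, widened;
    multiply(a, b, saturated);          // stays CV_8U: 15000 saturates to 255
    multiply(a, b, widened, 1, CV_32S); // int output: keeps 15000

    std::cout << (int)saturated.at<uchar>(0, 0) << " "
              << widened.at<int>(0, 0) << std::endl; // prints "255 15000"
    return 0;
}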

How to scale the pixel values to the range [0,1] in OpenCV

How can I scale the pixel values of a uchar Mat to the range [0,1] and store them in a Mat of type float?
When I try to divide all pixels by 255 and store them in a Mat of type float, I do not get values between [0,1] but only the integer values zero and one.
See: Convert uchar Mat to float Mat in OpenCV?
After this, you can simply divide by 255 to get values in the range from 0 to 1.
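In fact convertTo can do both steps in one call, since it takes an optional scale factor. A minimal sketch (the file name is a placeholder):

#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat img = imread("input.png", IMREAD_GRAYSCALE); // placeholder file
    if (img.empty()) return -1;

    // cast to float and scale by 1/255 in one step, so the division happens
    // in floating point and the values land in [0,1]
    Mat scaled;
    img.convertTo(scaled, CV_32F, 1.0 / 255.0);
    return 0;
}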

Taking the integral image of a Color Distance result (logical error)

I'm debugging a robot project and have found an error which I'm not quite sure how to fix theoretically.
I must calculate a color distance map, and after that I must take the integral image of the result and do some calculations with it.
Using the A and B channels of a Lab colorspace image, I obtain the color distance from, for example, the color red (pA = 255, pB = 127) using the formula sqrt((A-pA)^2 + (B-pB)^2):
// subtract straight into float matrices: subtracting in CV_8U would clamp
// negative differences to zero
subtract(mA, Scalar(pA), t32A, noArray(), CV_32FC1);
subtract(mB, Scalar(pB), t32B, noArray(), CV_32FC1);
// squared distance per channel, summed, then the square root
// (a fractional pow requires floating-point data)
pow(t32A, 2.0, powA);
pow(t32B, 2.0, powB);
add(powA, powB, sq);
pow(sq, 0.5, res);
//res.convertTo(result, CV_8UC1);
I needed the conversion to a wider float type because of the limitations of CV_8U handling values above 255 (and because the fractional pow, i.e. the square root, needs floating-point data).
Now I must feed the result into the integral image, and this expects only an image of CV_8UC1.
The problem I'm facing is that the aforementioned color distance function may produce pixel values above 255.
For example:
the distance from (0, 0) to red (255, 127):
sqrt((0-255)^2 + (0-127)^2) ≈ 285
or from (0, 255) to red (255, 127):
sqrt((0-255)^2 + (255-127)^2) ≈ 285
Does anybody have any suggestions on how I can feed the result into the integral image without any loss of information?
Thank you.
How about using sqrt(2) as a normalization factor? The largest possible distance is sqrt(255^2 + 255^2) = 255*sqrt(2) ≈ 360.6, so dividing by sqrt(2) maps every possible result back into [0, 255].
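A sketch of that normalization, continuing from the question's pipeline (the 4x4 matrix stands in for the distance map res):

#include <opencv2/opencv.hpp>
#include <cmath>
using namespace cv;

int main()
{
    // stand-in for the CV_32F distance map "res" from the question
    Mat res(4, 4, CV_32F, Scalar(285.0f));

    // the largest possible distance is sqrt(255^2 + 255^2) = 255*sqrt(2),
    // so scaling by 1/sqrt(2) fits the range into [0,255] without clipping
    Mat res8u;
    res.convertTo(res8u, CV_8UC1, 1.0 / std::sqrt(2.0));

    Mat integralImg;
    integral(res8u, integralImg); // CV_32S sum image, one extra row/column
    return 0;
}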