Set colour limit axis in OpenCV 4 (C++) akin to MATLAB's CAXIS

MATLAB offers the ability to set colour limits for the current axis using CAXIS. OpenCV has applyColorMap, which can be used to highlight differences in pixel intensity in a greyscale image and which, I believe, maps pixels from 0 to 255.
I am new to MATLAB and image processing and have been asked to port a simple program from MATLAB which uses the CAXIS function to change the "brightness" of a colour map. I have no experience with MATLAB, but it appears that this function is used to "lower" the intensity a pixel needs in order to be mapped to a more intense colour on the map,
e.g. with the "JET" colour map:
When brightness = 1, a pixel must be 255 to map to red
When brightness = 10, any pixel >= 25 maps to red
The MATLAB program allows 16-bit images to be read in and displayed, which obviously gives higher pixel values, whereas everything I've read and done indicates OpenCV only supports 8-bit images (for colour maps).
Therefore my question is: is it possible to provide similar functionality in OpenCV? How do you set the axis limit for a colourmap, or scale the colour map lookup table, so that "less" intense pixels are mapped to the more intense regions?
A similar question was asked with a reply stating the array needs to be "normalised", but unfortunately I don't quite know how to achieve this and can't reply to the answer as I don't have enough rep!
I have gone ahead and used cv::normalize to set the max value in the array to be maxPixelValue/brightness, but that doesn't work at all.
I have also experimented with converting my 16-bit image into CV_8UC1 with a scale factor, to no avail. Any help would be greatly appreciated!

In my opinion you can use cv::normalize to "crop" the values in the source picture to the part of the colour map you are interested in. Say you want your image to be mapped to the blue-ish region of the Jet colormap; then you could do something like:
int minVal = 0, maxVal = 80;
cv::normalize(src,dst, minVal, maxVal, cv::NORM_MINMAX);
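To see the end-to-end effect, the normalised result can then be fed to applyColorMap. A minimal sketch, assuming src is already an 8-bit single-channel image:
cv::Mat scaled, coloured;
cv::normalize(src, scaled, 0, 80, cv::NORM_MINMAX, CV_8U); // squeeze intensities into 0-80
cv::applyColorMap(scaled, coloured, cv::COLORMAP_JET);     // 0-80 stays in the blue-ish part of Jet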
If you plan to apply some kind of custom map, it's fairly easy for a 1- or 3-channel 8-bit image: you only need to create a LUT with 256 values (with the proper number of channels) and apply it using cv::LUT. More about it in this blog; also see the docs about LUT.
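As a rough sketch of that LUT idea (the blue-to-red ramp used here is just an arbitrary example map, and src_gray is assumed to be CV_8UC1):
cv::Mat lut(1, 256, CV_8UC3);
for (int i = 0; i < 256; ++i)
    lut.at<cv::Vec3b>(0, i) = cv::Vec3b(255 - i, 0, i);   // BGR: fade from blue to red
cv::Mat src3, coloured;
cv::cvtColor(src_gray, src3, cv::COLOR_GRAY2BGR);         // cv::LUT wants matching channel counts
cv::LUT(src3, lut, coloured);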
If the image you are working with is of a different depth, 16-bit or even floating-point data, I guess all you need to do is write a function like:
template<class T>
T customColorMapper(T input_pixel)
{
    T output_pixel = 0;
    // compute output_pixel from input_pixel here
    return output_pixel;
}
and apply it to each source image pixel like:
cv::Mat dst_image = src_image.clone(); //copy data
dst_image.forEach<TYPE>([](TYPE& input_pixel, const int* pos_row_col) -> void {
    input_pixel = customColorMapper<TYPE>(input_pixel);
});
Of course, TYPE needs to be a valid type. Maybe a specialized version of this function taking cv::Scalar or a cv::Vec3-something would be nice if you need to work with multiple channels.
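For instance, a 3-channel version might look roughly like this (a sketch, assuming CV_8UC3 data; the per-channel mapping is just a placeholder):
dst_image.forEach<cv::Vec3b>([](cv::Vec3b& pixel, const int* /*pos_row_col*/) -> void {
    for (int c = 0; c < 3; ++c)
        pixel[c] = cv::saturate_cast<uchar>(pixel[c] * 1.2); // placeholder per-channel mapping
});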
Hope this helps!

I managed to replicate the MATLAB behaviour but had to resort to manually iterating over each pixel and setting the value to the maximum value for the image depth or scaling the value where needed.
My code looked something like this:
// dst is assumed to be a 16-bit unsigned (CV_16U) image; contrastLevel comes from the slider described below
double min, max;
cv::minMaxLoc(dst, &min, &max);
double axisThreshold = std::floor(max / contrastLevel);
for (int i = 0; i < dst.rows; i++)
{
    for (int j = 0; j < dst.cols; j++)
    {
        double pixel = dst.at<ushort>(i, j);
        if (pixel >= axisThreshold)
        {
            pixel = USHRT_MAX;                      // clamp to the maximum for the image depth
        }
        else
        {
            pixel *= (USHRT_MAX / axisThreshold);   // rescale towards the top of the range
        }
        dst.at<ushort>(i, j) = cv::saturate_cast<ushort>(pixel);
    }
}
In my example I had a slider which adjusted the contrast/brightness (we called it contrast, the original implementation called it brightness).
When the contrast/brightness was changed, the program would retrieve the maximum pixel value and then compute the axis limit by doing:
calculatedThreshold = Max pixel value / contrast
Each pixel at or above the threshold gets set to MAX; each pixel below the threshold gets multiplied by a scale factor calculated as:
scale = Max pixel value / calculatedThreshold
TBH I can't say I fully understand the maths behind it. I just used trial and error until it worked; any help in that department would be appreciated. However, it seems to do what I want!
My understanding of the initial MATLAB implementation and the terminology "brightness" is that it is in fact an attempt to scale the colourmap so that the "brighter" the image, the less intense each pixel has to be to map to a particular colour in the colourmap.
Since applyColorMap only works on 8-bit images, when the brightness increases and the colourmap axis limit decreases, we need to ensure the pixel values scale accordingly so that they now match up with the "higher" intensity values in the map.
I have seen numerous OpenCV tutorials which use this approach to changing the contrast/brightness, but they often promote the use of the optimised convertTo (especially if you're trying to use the GPU). However, as far as I can see, convertTo applies the alpha/beta values uniformly and not on a pixel-by-pixel basis, so I couldn't use that approach.
I will update this question if I find more suitable OpenCV functions to achieve what I want.
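One follow-up thought for anyone reading this later: because cv::saturate_cast clamps anything above the target type's maximum, the clamp-and-scale loop above might collapse into a single convertTo call. This is only a sketch I would want to verify against the original MATLAB output (assuming a CV_16U image):
double minVal, maxVal;
cv::minMaxLoc(dst, &minVal, &maxVal);
double axisThreshold = std::floor(maxVal / contrastLevel);
// Anything at or above the threshold scales past USHRT_MAX and is clamped there by the saturation
dst.convertTo(dst, CV_16U, double(USHRT_MAX) / axisThreshold);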

Related

Deciphering a code [duplicate]

Please explain what happens to an image when we use the histeq function in MATLAB. A mathematical explanation would be really helpful.
Histogram equalization seeks to flatten your image histogram. Basically, it models the image as a probability density function (or, in simpler terms, a histogram where you normalize each entry by the total number of pixels in the image) and tries to make every intensity equally probable for a pixel to take on.
Histogram equalization is intended for images that have poor contrast. Images that look too dark, too washed out, or too bright are good candidates. If you plot the histogram of such an image, the spread of the pixels is limited to a very narrow range. Histogram equalization flattens the histogram and gives you a better-contrast image; the effect is that it stretches the dynamic range of your histogram.
In terms of the mathematical definition, I won't bore you with all of the details; I defer you to this link, which explains it in more depth: http://www.math.uci.edu/icamp/courses/math77c/demos/hist_eq.pdf
However, the final equation that you get for performing histogram equalization is essentially a 1-to-1 mapping. For each pixel in your image, you extract its intensity, then run it through this function. It then gives you an output intensity to be placed in your output image.
Suppose that p_i is the probability of encountering a pixel with intensity i in your image (take the histogram bin count for intensity i and divide by the total number of pixels in the image). Given that you have L intensities in your image, the output intensity for an input intensity of i is:
g_i = floor( (L-1) * sum_{n=0}^{i} p_n )
You add up all of the probabilities from pixel intensity 0, then 1, then 2, all the way up to intensity i. This is familiarly known as the Cumulative Distribution Function.
MATLAB essentially performs histogram equalization using this approach. However, if you want to implement this yourself, it's actually pretty simple. Assume that you have an input image im that is of an unsigned 8-bit integer type.
function [out] = hist_eq(im, L)
    if (~exist('L', 'var'))
        L = 256;
    end
    h = imhist(im) / numel(im);   % probability of each intensity
    cdf = cumsum(h);              % cumulative distribution function
    out = (L-1)*cdf(double(im)+1);
    out = uint8(out);
end
This function takes in an image that is assumed to be of unsigned 8-bit integer type. You can optionally specify the number of levels for the output. Usually, L = 256 for an 8-bit image, so if you omit the second parameter, L is assumed to be 256. The first line after the default handling computes the probabilities. The next line computes the Cumulative Distribution Function (CDF). The two lines after that compute the output using histogram equalization and then convert it back to unsigned 8-bit integer. Note that the uint8 cast implicitly performs the floor operation for us. You'll also need to note that we add an offset of 1 when accessing the CDF; the reason is that MATLAB starts indexing at 1, while the intensities in your image start at 0.
The MATLAB command histeq pretty much does the same thing, except that if you call histeq(im) with no second argument, it uses 64 output gray levels by default. You can override this by specifying an additional parameter that sets how many intensity values are used, just like we did above: histeq(im, 256);. Calling this in MATLAB and using the function I wrote above should give you identical results.
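For readers coming to this from the OpenCV side of these threads, the same CDF-based mapping can be sketched in C++ roughly as follows. This is only an illustrative version for an 8-bit single-channel cv::Mat; in practice cv::equalizeHist already does this for you.
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

cv::Mat hist_eq(const cv::Mat& im)                    // im is assumed to be CV_8UC1
{
    // Histogram of the 256 possible intensities
    std::vector<int> hist(256, 0);
    for (int r = 0; r < im.rows; ++r)
        for (int c = 0; c < im.cols; ++c)
            ++hist[im.at<uchar>(r, c)];

    // Build the 1-to-1 mapping g_i = floor((L-1) * cdf_i) as a lookup table
    std::vector<uchar> lut(256);
    double total = static_cast<double>(im.total());
    double cdf = 0.0;
    for (int i = 0; i < 256; ++i)
    {
        cdf += hist[i] / total;
        lut[i] = cv::saturate_cast<uchar>(std::floor(255.0 * cdf));
    }

    cv::Mat out;
    cv::LUT(im, lut, out);                            // apply the mapping to every pixel
    return out;
}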
As a bit of an exercise, let's use an image that is part of the MATLAB distribution called pout.tif. Let's also show its histogram.
im = imread('pout.tif');
figure;
subplot(2,1,1);
imshow(im);
subplot(2,1,2);
imhist(im);
As you can see, the image has poor contrast because most of the intensity values fit in a narrow range. Histogram equalization will flatten the image and thus increase the contrast of the image. As such, try doing this:
out = histeq(im, 256); %//or you can use my function: out = hist_eq(im);
figure;
subplot(2,1,1);
imshow(out);
subplot(2,1,2);
imhist(out);
This is what we get:
As you can see the contrast is better. Darker pixels tend to move towards the darker end, while lighter pixels get pushed towards the lighter end. Successful result I think! Bear in mind that not all images will give you a good result when you try and do histogram equalization. Image processing is mostly a trial and error thing, and so you put a mishmash of different techniques together until you get a good result.
This should hopefully get you started. Good luck!

OpenCV - odd HSV range detection

I have a Qt app where I have to find the HSV range of a couple of pixels around click coordinates, to track later on. This is how I do it:
cv::Mat temp;
cv::cvtColor(frame, temp, CV_BGR2HSV); //frame is pulled from a video or jpeg
cv::Vec3b hsv=temp.at<cv::Vec3b>(frameX,frameY); //sometimes SIGSEGV?
qDebug() << hsv.val[0]; //look up H
qDebug() << hsv.val[1]; //look up S
qDebug() << hsv.val[2]; //look up V
//just base values so far, will work on range later
emit hsvDownloaded(hsv.val[0], hsv.val[0]+5, hsv.val[1], 255, hsv.val[2], 255); //send to GUI which automatically updates worker thread
Now, things are odd. These are the results (the red circle indicates the click location):
With red it's weird: the upper half of the shape is detected correctly, the lower half is not, despite it being a solid mass of the same colour.
And for an actual test:
It detects HSV {95,196,248}, which is frankly absurd (base values way too high). The pixel that was detected isn't even the one that was clicked. The best values to detect that ball 100% of the time are H: 35-141, S: 0-238, V: 65-255. I wanted to get an HSV range from a normalized histogram, but I can't even get the base values right. What's up? When OpenCV pulls a frame using kalibrowanyPlik.read(frame);, the default colour scheme is BGR, right?
Why would the colour detection work so randomly?
As berak has mentioned, it looks like your code uses the indices to access pixels in the wrong order.
That means your pixel locations are wrong, except for pixels that lie on the diagonal, so clicked objects near the diagonal will be detected correctly, while all the others won't.
To avoid getting confused again and again, I want you to understand why OpenCV uses (row,col) ordering for indices:
OpenCV uses matrices to represent images. In mathematics, 2D matrices use (row,col) indexing; have a look at http://en.wikipedia.org/wiki/Index_notation#Two-dimensional_arrays and look at the indices. So for matrices, it is typical to use the row index first, followed by the column index.
Unfortunately, images and pixels typically use (x,y) indexing, which corresponds to the x/y axes in mathematical graphs and coordinate systems. So here the x position comes first, followed by the y position.
Luckily, OpenCV provides two versions of the .at method: one that takes pixel positions and one that takes matrix indices (which address exactly the same elements in the end).
matrix.at<type>(row,column) // matrix indexing to access elements
// which equals
matrix.at<type>(y,x)
and
matrix.at<type>(cv::Point(x,y)) // pixel/position indexing to access elements
Since the first version should be slightly more efficient, it should be preferred if the positions aren't already given as cv::Point objects. So the best approach is often to remember that OpenCV uses matrices to represent images, and matrix index notation to access elements.
btw, I've seen people wondering why matrix.at<type>(cv::Point(y,x)) doesn't work as intended after they've learned that OpenCV images use the "wrong" ordering. I hope this question doesn't come up after my explanation.
One more btw: back in school I already wondered why matrices index rows first while graphs of functions index the x axis first. I found it silly not to use the "same" ordering for both, but I still had to live with it :D (and in the end, the two don't have much to do with each other).
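Applied to the snippet from the question, the fix would look something like this (assuming frameX and frameY hold the click's x and y coordinates):
cv::Vec3b hsv = temp.at<cv::Vec3b>(frameY, frameX);              // row = y, column = x
// or, equivalently, let cv::Point take care of the (x,y) ordering:
cv::Vec3b hsv2 = temp.at<cv::Vec3b>(cv::Point(frameX, frameY));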

Ranking pixels by colour with OpenCV

I am beginning a project about detection.
My idea is to rank every pixel of an image (Mat).
Then I will be able to tell which colour is dominant.
The difficulty is that a colour is not unique. For example, green is rgb(0, 255, 0), but rgb(10, 240, 20) is almost green too.
The goal of my ranking is to pick out pixels which are almost the same colour. Then, with a percentage, I think I can locate my object.
So, my question: is there a way to rank pixels by colour?
Thanks a lot in advance for your answers.
There isn't a straightforward method for ranking pixels by colour, as you put it.
However, you can find an approximation to the most dominant one.
There are several ways in which you can do it:
You can calculate a histogram for each colour channel: split the image into its R, G, B channels and compute the histogram of each. Then you can see where the peaks of the resulting graphs are.
You can k-means cluster the pixels of the image: represent each pixel as a 3D point with coordinates (R, G, B), then segment the pixels into the k most frequently occurring colours.
If you resize the image to a 1x1 pixel image, you get the average of all pixel values. If there is a dominant colour that the majority of the pixels are close to, this gives a good approximation.
These, however, are all approximations. Your best choice would be to use k-means and find the cluster that either has the most elements or is the most dense; a sketch of this is given below.
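A rough OpenCV (C++) sketch of that k-means route could look like this (k and the termination criteria are arbitrary choices):
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

// Returns the centre (in BGR) of the most populated k-means cluster.
cv::Vec3f dominantColour(const cv::Mat& bgr, int k = 3)
{
    // One row per pixel, three float columns (B, G, R); assumes a continuous 8-bit BGR image
    cv::Mat samples = bgr.reshape(1, static_cast<int>(bgr.total()));
    samples.convertTo(samples, CV_32F);

    cv::Mat labels, centres;
    cv::kmeans(samples, k, labels,
               cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 10, 1.0),
               3, cv::KMEANS_PP_CENTERS, centres);

    // Count how many pixels fell into each cluster and keep the largest one
    std::vector<int> counts(k, 0);
    for (int i = 0; i < labels.rows; ++i)
        ++counts[labels.at<int>(i)];
    int best = static_cast<int>(std::max_element(counts.begin(), counts.end()) - counts.begin());

    return cv::Vec3f(centres.at<float>(best, 0),
                     centres.at<float>(best, 1),
                     centres.at<float>(best, 2));
}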
In case you are looking for a way to locate an object with a specific colour, you can use maximum likelihood estimation, something like this approach, which was used to classify different objects such as grass, cars, buildings and pavement in satellite images. You can use it with a single colour and get a heat map of the likelihood (the probability) of each pixel belonging to your object.
In an ordinary image there is always a large number of colours involved. The best way to merge pixels carrying almost the same colour is colour quantization, which reduces the number of colours in an image using techniques like k-means clustering. This is explained well, with Python code, here:
https://www.pyimagesearch.com/2014/07/07/color-quantization-opencv-using-k-means-clustering/
After successful quantization, you can just try the following code to rank the colors based on their frequencies in the image.
top_n_colors = []
n = 3
colors_count = {}
(channel_b, channel_g, channel_r) = cv2.split(_processed_image)

# Flatten the 2D single-channel arrays to make them easier to iterate over
channel_b = channel_b.flatten()
channel_g = channel_g.flatten()
channel_r = channel_r.flatten()

for i in range(len(channel_b)):
    RGB = str(channel_r[i]) + " " + str(channel_g[i]) + " " + str(channel_b[i])
    if RGB in colors_count:
        colors_count[RGB] += 1
    else:
        colors_count[RGB] = 1

# take the top n colors from the dictionary
_top_colors = sorted(colors_count.items(), key=lambda x: x[1], reverse=True)[0:n]
for _color in _top_colors:
    _rgb = tuple([int(value) for value in _color[0].split()])
    top_n_colors.append(_rgb)

print(top_n_colors)

Smooth color transition algorithm

I am looking for a general algorithm to smoothly transition between two colors.
For example, this image is taken from Wikipedia and shows a transition from orange to blue.
When I try to do the same in my code (C++), the first idea that came to mind was to use the HSV color space, but the annoying in-between colors show up.
What is a good way to achieve this? It seems to be related to reducing the contrast, or maybe to using a different color space?
I have done tons of these in the past. The smoothing can be performed many different ways, but the way they are probably doing it here is a simple linear approach. That is to say, for each R, G, and B component, they simply figure out the "y = m*x + b" line that connects the two endpoints and use it to compute the components in between.
m[RED] = (ColorRight[RED] - ColorLeft[RED]) / PixelsWidthAttemptingToFillIn
m[GREEN] = (ColorRight[GREEN] - ColorLeft[GREEN]) / PixelsWidthAttemptingToFillIn
m[BLUE] = (ColorRight[BLUE] - ColorLeft[BLUE]) / PixelsWidthAttemptingToFillIn
b[RED] = ColorLeft[RED]
b[GREEN] = ColorLeft[GREEN]
b[BLUE] = ColorLeft[BLUE]
Any new color in between is now:
NewCol[pixelXFromLeft][RED] = m[RED] * pixelXFromLeft + ColorLeft[RED]
NewCol[pixelXFromLeft][GREEN] = m[GREEN] * pixelXFromLeft + ColorLeft[GREEN]
NewCol[pixelXFromLeft][BLUE] = m[BLUE] * pixelXFromLeft + ColorLeft[BLUE]
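A compact C++ rendering of this per-channel linear scheme might look like the following (the endpoint colors and the image dimensions are placeholder values, not taken from the image above):
#include <opencv2/opencv.hpp>

int main()
{
    const int width = 256, height = 50;              // placeholder dimensions
    const cv::Vec3b left(0, 128, 255);               // BGR: an orange-ish endpoint
    const cv::Vec3b right(255, 128, 0);              // BGR: a blue-ish endpoint
    cv::Mat gradient(height, width, CV_8UC3);

    for (int x = 0; x < width; ++x)
    {
        cv::Vec3b colour;
        for (int c = 0; c < 3; ++c)
        {
            double m = (right[c] - left[c]) / double(width - 1);   // slope per channel
            colour[c] = cv::saturate_cast<uchar>(m * x + left[c]); // y = m*x + b
        }
        for (int y = 0; y < height; ++y)
            gradient.at<cv::Vec3b>(y, x) = colour;
    }

    cv::imwrite("gradient.png", gradient);
    return 0;
}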
There are many mathematical ways to create a transition; what we really need to do is understand what transition you actually want to see. If you want the exact transition from the above image, it is worth looking at the color values of that image. I wrote a program a while back to examine such images and output their values graphically. Here is the output of my program for the above pseudocolor scale.
Based upon the graph, it IS more complex than the linear scheme I described above. The blue component looks mostly linear, the red could be approximated as linear, but the green looks to have a more rounded shape. We could perform a mathematical analysis of the green channel to better understand its function and use that instead. You may find that a piecewise linear interpolation, with an increasing slope between 0 and ~70 pixels and a decreasing slope after pixel 70, is good enough.
If you look at the bottom of the screen, this program gives some statistical measures of each color component, such as min, max, and average, as well as how many pixels wide the image read was.
A simple linear interpolation of the R,G,B values will do it.
trumpetlicks has shown that the image you used is not a pure linear interpolation. But I think a linear interpolation gives you the effect you're looking for. Below I show an image with a linear interpolation on top and your original image on the bottom.
And here's the (Python) code that produced it:
# r1,g1,b1 and r2,g2,b2 are the two endpoint colors; pix is the output image's pixel access object
for y in range(height/2):
    for x in range(width):
        p = x / float(width - 1)
        r = int((1.0-p) * r1 + p * r2 + 0.5)
        g = int((1.0-p) * g1 + p * g2 + 0.5)
        b = int((1.0-p) * b1 + p * b2 + 0.5)
        pix[x,y] = (r,g,b)
The HSV color space is not a very good color space to use for smooth transitions. This is because the h value, hue, is just used to arbitrarily define different colors around the 'color wheel'. That means if you go between two colors far apart on the wheel, you'll have to dip through a bunch of other colors. Not smooth at all.
It would make a lot more sense to use RGB (or CMYK). These 'component' color spaces are better defined to make smooth transitions because they represent how much of each 'component' a color needs.
A linear transition (see trumpetlicks' answer) for each component value, R, G and B, should look 'pretty good'. Anything more than 'pretty good' is going to require an actual human to tweak the values, because there are differences and asymmetries in how our eyes perceive color values in different color groups that aren't represented in either RGB or CMYK (or any standard).
The wikipedia image is using the algorithm that Photoshop uses. Unfortunately, that algorithm is not publicly available.
I've been researching into this to build an algorithm that takes a grayscale image as input and colorises it artificially according to a color palette:
[Images: grayscale input and colorised output]
Just like many of the other solutions, the algorithm uses linear interpolation to make the transition between colours. With your example, smooth_color_transition() should be invoked with the following arguments:
QImage input("gradient.jpg");
QVector<QColor> colors;
colors.push_back(QColor(242, 177, 103)); // orange
colors.push_back(QColor(124, 162, 248)); // blue-ish
QImage output = smooth_color_transition(input, colors);
output.save("output.jpg");
A comparison of the original image VS output from the algorithm can be seen below:
(output)
(original)
The visual artefacts that can be observed in the output are already present in the input (grayscale). The input image got these artefacts when it was resized to 189x51.
Here's another example that was created with a more complex color palette:
[Images: grayscale input and colorised output]
Seems to me like it would be easier to create the gradient using RGB values. You should first calculate the change in color for each value based on the width of the gradient. The following pseudocode would need to be done for R, G, and B values.
redDifference = (redValue2 - redValue1) / widthOfGradient
You can then render each pixel with these values like so:
for (int i = 0; i < widthOfGradient; i++) {
    int r = round(redValue1 + i * redDifference);
    // ...repeat for green and blue
    drawLine(i, r, g, b);
}
I know you specified that you're using C++, but I created a JSFiddle demonstrating this working with your first gradient as an example: http://jsfiddle.net/eumf7/

Suggested algorithm for diverging color mapping visualization

I am attempting to write a piece of code that is supposed to map data to RGB values, and one of the types of visualizations I am attempting to use is a diverging color map.
I am not exactly sure what the best way is to go about applying the colors. The current algorithm I am using is:
// F is the data point being checked; given the key points below, F is assumed to run from 0 to 1
if (F <= 0.5) {
    RGB[0] = F * 510;
    RGB[1] = F * 510;
    RGB[2] = F * 254 + 128;
} else {
    RGB[0] = 255 - (F - 0.5) * 254;
    RGB[1] = 255 - (F - 0.5) * 510;
    RGB[2] = 255 - (F - 0.5) * 510;
}
Where the key points for the curve are:
F=0: (0,0,128)
F=0.5: (255,255,255)
F=1: (128, 0, 0)
Are there any suggested algorithms out there for use instead of this, or is this hacked together piecewise function alright?
This is the image generated by this current algorithm.
I think you should use a bar to test your function, as it makes it easier to see the transition 'speed' on linear data; see the sketch below.
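For example, such a test bar could be generated like this (a minimal sketch; the bar dimensions are arbitrary and the mapping is the one from the question):
#include <opencv2/opencv.hpp>

int main()
{
    const int width = 512, height = 60;              // arbitrary bar size
    cv::Mat bar(height, width, CV_8UC3);

    for (int x = 0; x < width; ++x)
    {
        double F = x / double(width - 1);            // linear ramp from 0 to 1
        double R, G, B;
        if (F <= 0.5)
        {
            R = F * 510;  G = F * 510;  B = F * 254 + 128;
        }
        else
        {
            R = 255 - (F - 0.5) * 254;
            G = 255 - (F - 0.5) * 510;
            B = 255 - (F - 0.5) * 510;
        }
        bar.col(x).setTo(cv::Scalar(B, G, R));       // OpenCV stores channels as BGR
    }

    cv::imwrite("diverging_bar.png", bar);
    return 0;
}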
Here is a really good article for using the diverging colour maps: http://www.sandia.gov/~kmorel/documents/ColorMaps/
It describes the mathematics behind it. I know it seems like overkill to go through the Lab and Msh colour spaces for such a simple task, but if you want good-quality colour maps it's really worth it.
Other than that, I don't know of any 'manual' implementation of the function (i.e. one not using existing functions from MATLAB or R).
I think it may be more useful to use the HSV color space as opposed to RGB, and show your data using the Hue component. This way all the values of your function will map to a nice rainbow color and will be evenly saturated.
From the links provided above you should be able to derive the formula for converting a hue value to RGB.
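If OpenCV is already in play, that hue-based idea can be sketched without deriving the conversion by hand. This is a minimal sketch under my own assumptions (a 120-degree hue sweep from blue to red, full saturation and value):
#include <opencv2/opencv.hpp>

// Map a value F in [0, 1] to a BGR colour by sweeping the hue from blue to red.
cv::Vec3b hueColour(double F)
{
    uchar hue = cv::saturate_cast<uchar>((1.0 - F) * 120.0); // OpenCV's 8-bit hue runs 0-179
    cv::Mat hsv(1, 1, CV_8UC3, cv::Scalar(hue, 255, 255));
    cv::Mat bgr;
    cv::cvtColor(hsv, bgr, cv::COLOR_HSV2BGR);
    return bgr.at<cv::Vec3b>(0, 0);
}
With this, hueColour(0.0) comes out blue and hueColour(1.0) red, with the rest of the rainbow evenly saturated in between.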