Controlling Brightness of an Image using numericupdown - c++

I have implemented increasing the brightness, but after a lot of trying I cannot decrease it correctly. For example, the RGB values of a pixel are r=100, g=200, b=125. I'm using a NumericUpDown control to increase and decrease the value. When I add, say, 100 with the NumericUpDown, the new values become r=200, g=300, b=225, but g=300 gets clamped to g=255 because we can't go above 255. When I then decrease the value by 100, the pixel should return to r=100, g=200, b=125. However, because g was clamped to 255, it no longer comes back to 200: 255-100=155, which is not 200. I'm looking for a way to restore the original pixel values when decreasing.
P.S.: I'm a learner.

Store the original image and display a copy. Every time you run your algorithm you read the pixel values of the original and write the modified pixel values into the copy.
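A minimal sketch of that idea, assuming OpenCV's cv::Mat (the file name and function are illustrative; the same pattern applies to whatever image class you are using):

#include <opencv2/opencv.hpp>

cv::Mat original;   // loaded once, never modified
cv::Mat display;    // what you actually show in the UI

// Called whenever the NumericUpDown value changes.
void applyBrightness(int offset)
{
    // Always recompute from the untouched original so that
    // lowering the value restores the exact starting pixels.
    original.convertTo(display, -1, 1.0, offset);  // per-channel add, saturated to [0,255]
}

int main()
{
    original = cv::imread("input.png");  // hypothetical file name
    applyBrightness(+100);               // brighten
    applyBrightness(0);                  // back to the original values
    cv::imwrite("output.png", display);
}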

Note: this is a very simple approach. Brightness is a well-discussed subject with a lot of options. Sophisticated solutions often also involve saturation and much more. Per-pixel adjustments may not be the best approach, but for the sake of this post I have constructed an answer below that solves your specific problem.
// edit 2
Thinking about this some more, I did not consider that the solution to the equation is not unique. You do indeed need to store the original and recalculate from the original image. I would still advise using an established brightness equation like the ones found in the link above. Simply modifying the R, G, and B channels might not be what your users expect.
The below answer must be combined with working on the original image and displaying a modified copy as mentioned in other answers.
I would not increase the R, G, and B channels directly but go with a perceived-brightness option like the one found here.
Let's say you take:
L = (0.299*R + 0.587*G + 0.114*B)
You know the min(L) will be 0, and the max(L) will be 255. This is where your numeric up/down will be limited to [0,255]. Next you simply increase/decrease L and calculate the RGB using the formula.
//edit
Your case as an example:
r=100, g=200, b=125
L = (0.299*100 + 0.587*200 + 0.114*125)
L = 161.5
Now let's go to the maximum (the limit) to get the extreme case and check that this still works:
L = 255
L = (0.299*255 + 0.587 * 255 + 0.114 * 255)
RGB = (255,255,255)
Going back will also always work: 0 gives black, and everything in between has a guaranteed RGB with R, G, B in [0,255].
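One way to turn this into code is to scale R, G and B by the ratio between the target L and the pixel's current L; this is only a sketch of one possible mapping back to RGB, and it always starts from the original pixel:

#include <algorithm>
#include <cmath>

struct RGB { double r, g, b; };

// Perceived brightness, as in the formula above.
double luma(const RGB& c) { return 0.299 * c.r + 0.587 * c.g + 0.114 * c.b; }

// Scale the original pixel so that its luma becomes targetL (channels clamped to [0,255]).
RGB setLuma(const RGB& original, double targetL)
{
    double currentL = luma(original);
    if (currentL <= 0.0)                       // black pixel: fall back to a grey of the target level
        return { targetL, targetL, targetL };
    double k = targetL / currentL;
    auto clamp255 = [](double v) { return std::min(255.0, std::max(0.0, v)); };
    return { clamp255(original.r * k), clamp255(original.g * k), clamp255(original.b * k) };
}

Because the scaling always starts from the stored original pixel, moving L back to 161.5 restores r=100, g=200, b=125 exactly; note that clamping can still prevent very bright targets from being met precisely.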

Another solution, possibly more elegant, would be to map your RGB values to the HSV color space.
Once you are in the HSV color space you can increase and decrease the value (V) to control brightness without losing hue or saturation information.
This question gives some pointers on how to do the conversion from RGB to HSV.
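With OpenCV, that could look roughly like this (a sketch assuming an 8-bit BGR input; delta is the amount coming from the NumericUpDown and is applied to the original image every time):

#include <opencv2/opencv.hpp>
#include <vector>

// Adjust brightness by adding 'delta' to the V channel, starting from the original.
cv::Mat adjustValue(const cv::Mat& originalBgr, int delta)
{
    cv::Mat hsv;
    cv::cvtColor(originalBgr, hsv, cv::COLOR_BGR2HSV);

    std::vector<cv::Mat> channels;
    cv::split(hsv, channels);
    channels[2] += cv::Scalar(delta);        // V channel; 8-bit add/subtract saturates

    cv::merge(channels, hsv);
    cv::Mat resultBgr;
    cv::cvtColor(hsv, resultBgr, cv::COLOR_HSV2BGR);
    return resultBgr;
}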

Related

Set colour limit axis in OpenCV 4 (c++) akin to Matlab's CAXIS

Matlab offers the ability to set colour limits for the current axis using CAXIS. OpenCV has applyColorMap, which can be used to highlight differences in pixel intensity in a greyscale image; I believe it maps pixels from 0 to 255.
I am new to Matlab/image processing and have been asked to port a simple program from Matlab which uses the CAXIS function to change the "brightness" of a colour map. I have no experience in Matlab, but it appears that this function is used to "lower" the intensity required for a pixel to be mapped to a more intense colour on the map,
i.e. Colour map using "JET"
When brightness = 1, red = 255
When brightness = 10, red >= 25
The Matlab program allows 16-bit images to be read in and displayed, which obviously gives higher pixel values, whereas everything I've read and done indicates OpenCV only supports 8-bit images (for colour maps).
Therefore my question is: is it possible to provide similar functionality in OpenCV? How do you set the axis limit for a colourmap / how do you scale the colourmap lookup table so that "less" intense pixels are scaled to the more intense regions?
A similar question was asked, with a reply stating the array needs to be "normalised", but unfortunately I don't quite know how to achieve this and can't reply to the answer as I don't have enough rep!
I have gone ahead and used cv::normalize to set the max value in the array to be maxPixelValue/brightness but that doesn't work at all.
I have also experimented and tried converting my 16bit image into a CV_8UC1 with a scale factor to no avail. Any help would be greatly appreciated!
In my opinion you can use cv::normalize to "crop" values in the source picture to the corresponding ones in the colour map you are interested in. Say you want your image to be mapped to the blue-ish region of the Jet colormap; then you should do something like:
int minVal = 0, maxVal = 80;
cv::normalize(src,dst, minVal, maxVal, cv::NORM_MINMAX);
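For instance, to push a 16-bit source into the lower (blue-ish) part of the Jet map, something along these lines might work (a sketch; the file name and the 0-80 range are just examples):

#include <opencv2/opencv.hpp>

int main()
{
    // Hypothetical 16-bit input; IMREAD_UNCHANGED keeps the original depth.
    cv::Mat src16 = cv::imread("scan.tif", cv::IMREAD_UNCHANGED);

    // Squeeze the full range into [0, 80] so even the brightest pixel
    // lands in the blue-ish part of the Jet map.
    cv::Mat scaled;
    cv::normalize(src16, scaled, 0, 80, cv::NORM_MINMAX);

    // applyColorMap expects an 8-bit, 1- or 3-channel image.
    cv::Mat scaled8u, colored;
    scaled.convertTo(scaled8u, CV_8U);
    cv::applyColorMap(scaled8u, colored, cv::COLORMAP_JET);

    cv::imwrite("colored.png", colored);
}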
If you plan to apply some kind of custom map, it's fairly easy for a 1- or 3-channel 8-bit image: you only need to create a LUT with 256 values (with the proper number of channels) and apply it using cv::LUT. More about it in this blog; also see the docs about LUT.
If the image you are working with is of a different depth, 16-bit or even floating-point data, I guess all you need to do is write a function like:
template<class T>
T customColorMapper(T input_pixel)
{
    T output_pixel = 0;
    // do something with output_pixel based on input_pixel
    return output_pixel;
}
and apply it to each source image pixel like:
cv::Mat dst_image = src_image.clone(); // copy data
dst_image.forEach<TYPE>([](TYPE& input_pixel, const int* pos_row_col) -> void {
    input_pixel = customColorMapper<TYPE>(input_pixel);
});
Of course, TYPE needs to be a valid type. Maybe a specialized version of this function taking cv::Scalar or cv::Vec3-something would be nice if you need to work with multiple channels.
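As a concrete (hypothetical) instantiation, here is a per-pixel gamma tweak on an 8-bit BGR image using forEach with cv::Vec3b; the file names and gamma value are only examples:

#include <opencv2/opencv.hpp>
#include <cmath>

int main()
{
    cv::Mat src = cv::imread("input.png");     // 8-bit, 3-channel BGR
    cv::Mat dst = src.clone();

    const double gamma = 0.8;                  // example value, adjust to taste
    dst.forEach<cv::Vec3b>([gamma](cv::Vec3b& pixel, const int* /*pos*/) {
        for (int c = 0; c < 3; ++c)            // apply the curve to each channel
            pixel[c] = cv::saturate_cast<uchar>(
                255.0 * std::pow(pixel[c] / 255.0, gamma));
    });

    cv::imwrite("mapped.png", dst);
}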
Hope this helps!
I managed to replicate the MATLAB behaviour but had to resort to manually iterating over each pixel and setting the value to the maximum value for the image depth or scaling the value where needed.
My code looked something like this:
double min, max;
cv::minMaxLoc(dst, &min, &max);
double axisThreshold = floor(max / contrastLevel);   // contrastLevel comes from the slider

for (int i = 0; i < dst.rows; i++)
{
    for (int j = 0; j < dst.cols; j++)
    {
        double pixel = dst.at<ushort>(i, j);         // dst is a 16-bit (CV_16U) image
        if (pixel >= axisThreshold)
        {
            pixel = USHRT_MAX;
        }
        else
        {
            pixel *= (USHRT_MAX / axisThreshold);
        }
        dst.at<ushort>(i, j) = cv::saturate_cast<ushort>(pixel);
    }
}
In my example I had a slider which adjusted the contrast/brightness (we called it contrast, the original implementation called it brightness).
When the contrast/brightness was changed, the program would retrieve the maximum pixel value and then compute the axis limit by doing
calculatedThreshold = Max pixel value / contrast
Each pixel more than the threshold gets set to MAX, each pixel lower than the threshold gets multiplied by a scale factor calculated by
scale = MAX Pixel Value / calculatedThreshold.
TBH I can't say I fully understand the maths behind it. I just used trial and error until it worked; any help in that department would be appreciated. However, it seems to do what I want!
My understanding of the original Matlab implementation and the terminology "brightness" is that it is an attempt to scale the colourmap so that the "brighter" the image, the less intense a pixel has to be to map to a particular colour in the colourmap.
Since applyColorMap only works on 8-bit images, when the brightness increases and the colourmap axis values decrease, we need to ensure the pixel values scale accordingly so that they match up with the "higher" intensity values in the map.
I have seen numerous OpenCV tutorials which use this approach to changing the contrast/brightness, but they often promote the use of the optimised convertTo (especially if you're trying to use the GPU). However, as far as I can see, convertTo applies the alpha/beta values uniformly and not on a pixel-by-pixel basis, so I can't use that approach.
I will update this question if I find more suitable OpenCV functions to achieve what I want.

Disparity Map Block Matching

I am writing a disparity matching algorithm using block matching, but I am not sure how to find the corresponding pixel values in the secondary image.
Given a square window of some size, what techniques exist to find the corresponding pixels? Do I need to use feature matching algorithms or is there a simpler method, such as summing the pixel values and determining whether they are within some threshold, or perhaps converting the pixel values to binary strings where the values are either greater than or less than the center pixel?
I'm going to assume you're talking about Stereo Disparity, in which case you will likely want to use a simple Sum of Absolute Differences (read that wiki article before you continue here). You should also read this tutorial by Chris McCormick before you read more here.
side note: SAD is not the only method, but it's really common and should solve your problem.
You already have the right idea. Make windows, move windows, sum pixels, find minimums. So I'll give you what I think might help:
To start:
If you have color images, first you will want to convert them to black and white. In python you might use a simple function like this per pixel, where x is a pixel that contains RGB.
def rgb_to_bw(x):
    return int(x[0]*0.299 + x[1]*0.587 + x[2]*0.114)
You will want this to be black and white to make the SAD easier to compute. If you're wondering why you don't lose significant information from this, you might be interested in learning what a Bayer filter is. The Bayer filter, which is typically RGGB, also explains the multiplication ratios of the red, green, and blue portions of the pixel.
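If you later do this part in C++ with OpenCV instead, the conversion is a one-liner (a sketch; cv::cvtColor uses essentially the same 0.299/0.587/0.114 weighting for the R, G and B channels, and the file names are illustrative):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat left = cv::imread("left.png");       // hypothetical stereo image
    cv::Mat leftGray;
    cv::cvtColor(left, leftGray, cv::COLOR_BGR2GRAY);
    cv::imwrite("left_gray.png", leftGray);
}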
Calculating the SAD:
You already mentioned that you have a window of some size, which is exactly what you want to do. Let's say this window is n x n in size. You would also have some window in your left image WL and some window in your right image WR. The idea is to find the pair that has the smallest SAD.
So, for each left-window pixel pl at some location (x,y) in the window, you take the absolute value of the difference with the right-window pixel pr located at the same (x,y). You also keep a running value, which is the sum of these absolute differences. In pseudocode:
SAD = 0
from x = 0 to n:
    from y = 0 to n:
        SAD = SAD + |pl(x,y) - pr(x,y)|
After you calculate the SAD for this pair of windows, WL and WR you will want to "slide" WR to a new location and calculate another SAD. You want to find the pair of WL and WR with the smallest SAD - which you can think of as being the most similar windows. In other words, the WL and WR with the smallest SAD are "matched". When you have the minimum SAD for the current WL you will "slide" WL and repeat.
Disparity is calculated by the distance between the matched WL and WR. For visualization, you can scale this distance to be between 0-255 and output that to another image. I posted 3 images below to show you this.
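A minimal C++ sketch of this windowed SAD search along a scanline, assuming rectified 8-bit grayscale images and an illustrative window size and disparity range (unoptimized, only meant to show the loop structure):

#include <opencv2/opencv.hpp>
#include <climits>
#include <cstdlib>

int main()
{
    cv::Mat left  = cv::imread("left.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);

    const int half = 3;          // 7x7 window
    const int maxDisparity = 64; // search range, depends on the scene

    cv::Mat disparity(left.size(), CV_8U, cv::Scalar(0));

    for (int y = half; y < left.rows - half; ++y) {
        for (int x = half; x < left.cols - half; ++x) {
            int bestDisp = 0;
            long bestSad = LONG_MAX;
            // Slide the right window leftwards along the same row.
            for (int d = 0; d <= maxDisparity && x - d >= half; ++d) {
                long sad = 0;
                for (int wy = -half; wy <= half; ++wy)
                    for (int wx = -half; wx <= half; ++wx)
                        sad += std::abs(left.at<uchar>(y + wy, x + wx) -
                                        right.at<uchar>(y + wy, x - d + wx));
                if (sad < bestSad) { bestSad = sad; bestDisp = d; }
            }
            // Scale to 0-255 for visualization.
            disparity.at<uchar>(y, x) =
                cv::saturate_cast<uchar>(bestDisp * 255 / maxDisparity);
        }
    }
    cv::imwrite("disparity.png", disparity);
}

For real use, OpenCV's cv::StereoBM implements block matching far more efficiently.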
Typical Results:
Left Image:
Right Image:
Calculated Disparity (from the left image):
You can get test images here: http://vision.middlebury.edu/stereo/data/scenes2003/

Ranking pixels by colour with OpenCV

I am starting a project about detection.
My idea is to rank every pixel of an image (Mat).
Then I will be able to extract which colour is dominant.
The difficulty is that a colour is not unique. For example, green is rgb(0, 255, 0), but rgb(10, 240, 20) is almost green too.
The goal of my ranking is to group pixels which are almost the same colour. Then, with a percentage, I think I can locate my object.
So, my question: is there a way to rank pixels by colour?
Thanks a lot in advance for your answers.
There isn't a straightforward method for ranking pixels by colour as you describe.
However, you can find an approximation to the most dominant one.
There are several ways in which you can do it:
You can calculate the histogram for each colour channel: split the image into its R, G, B channels and compute a histogram for each. Then you can see where the peaks of the resulting graphs are.
You can k-means cluster the pixels of the image; in other words, represent each pixel as a 3D point with coordinates (R, G, B). Then you can segment the pixels into the k most occurring colours.
If you resize the image to a 1x1 pixel image, you'll get the average of all pixel values. If there is a dominant colour, where the majority of the pixels are in close proximity, this will give a good approximation.
These, however, are all approximations. Your best choice would be to use k-means and to find the cluster that either has the most elements or is the most dense (a sketch follows below).
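A possible OpenCV sketch of that k-means route, assuming an 8-bit BGR image (the file name and k = 5 are illustrative):

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <iostream>
#include <vector>

int main()
{
    cv::Mat img = cv::imread("input.png");

    // One row per pixel, 3 float columns (B, G, R), as cv::kmeans expects.
    cv::Mat samples = img.reshape(1, img.rows * img.cols);
    samples.convertTo(samples, CV_32F);

    const int k = 5;
    cv::Mat labels, centers;
    cv::kmeans(samples, k, labels,
               cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::MAX_ITER, 10, 1.0),
               3, cv::KMEANS_PP_CENTERS, centers);

    // Count how many pixels fell into each cluster; the largest is the dominant colour.
    std::vector<int> counts(k, 0);
    for (int i = 0; i < labels.rows; ++i)
        counts[labels.at<int>(i)]++;

    int dominant = static_cast<int>(
        std::max_element(counts.begin(), counts.end()) - counts.begin());

    std::cout << "dominant BGR: "
              << centers.at<float>(dominant, 0) << " "
              << centers.at<float>(dominant, 1) << " "
              << centers.at<float>(dominant, 2) << std::endl;
}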
In case you are looking for a way to locate an object with a specific colour, you can use maximum likelihood estimation. Something like this, which was used to classify different objects such as grass, cars, buildings and pavement from satellite images. You can use it with a single colour and get a heat-map of where the object is, in terms of the likelihood (probability) of each pixel belonging to your object.
In an ordinary image there is always a number of colours involved. The best way to average pixels carrying almost the same colours is colour quantization, which reduces the number of colours in an image using techniques like k-means clustering. This is explained well, with Python code, here:
https://www.pyimagesearch.com/2014/07/07/color-quantization-opencv-using-k-means-clustering/
After successful quantization, you can just try the following code to rank the colors based on their frequencies in the image.
import cv2   # _processed_image is assumed to be the quantized image from the step above

top_n_colors = []
n = 3
colors_count = {}
(channel_b, channel_g, channel_r) = cv2.split(_processed_image)

# Flatten the 2D single-channel arrays to make them easier to iterate over
channel_b = channel_b.flatten()
channel_g = channel_g.flatten()
channel_r = channel_r.flatten()

for i in range(len(channel_b)):
    RGB = str(channel_r[i]) + " " + str(channel_g[i]) + " " + str(channel_b[i])
    if RGB in colors_count:
        colors_count[RGB] += 1
    else:
        colors_count[RGB] = 1

# take the top n colors from the dictionary
_top_colors = sorted(colors_count.items(), key=lambda x: x[1], reverse=True)[0:n]

for _color in _top_colors:
    _rgb = tuple([int(value) for value in _color[0].split()])
    top_n_colors.append(_rgb)

print(top_n_colors)

Smooth color transition algorithm

I am looking for a general algorithm to smoothly transition between two colors.
For example, this image is taken from Wikipedia and shows a transition from orange to blue.
When I try to do the same using my code (C++), the first idea that comes to mind is using the HSV color space, but annoying in-between colors show up.
What is a good way to achieve this? Is it related to reducing contrast, or should I maybe use a different color space?
I have done tons of these in the past. The smoothing can be performed in many different ways, but the way they are probably doing it here is a simple linear approach. That is to say, for each R, G, and B component they simply figure out the "y = m*x + b" equation that connects the two endpoints and use it to compute the components in between.
m[RED] = (ColorRight[RED] - ColorLeft[RED]) / PixelsWidthAttemptingToFillIn
m[GREEN] = (ColorRight[GREEN] - ColorLeft[GREEN]) / PixelsWidthAttemptingToFillIn
m[BLUE] = (ColorRight[BLUE] - ColorLeft[BLUE]) / PixelsWidthAttemptingToFillIn
b[RED] = ColorLeft[RED]
b[GREEN] = ColorLeft[GREEN]
b[BLUE] = ColorLeft[BLUE]
Any new color in between is now:
NewCol[pixelXFromLeft][RED] = m[RED] * pixelXFromLeft + ColorLeft[RED]
NewCol[pixelXFromLeft][GREEN] = m[GREEN] * pixelXFromLeft + ColorLeft[GREEN]
NewCol[pixelXFromLeft][BLUE] = m[BLUE] * pixelXFromLeft + ColorLeft[BLUE]
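A C++ sketch of that per-channel y = m*x + b approach (the end colours and image size are illustrative, and OpenCV is only used here as the image container):

#include <opencv2/opencv.hpp>

int main()
{
    const int width = 256, height = 50;                 // illustrative size
    const double colorLeft[3]  = {103, 177, 242};       // BGR, an orange-ish left end
    const double colorRight[3] = {248, 162, 124};       // BGR, a blue-ish right end

    double m[3], b[3];
    for (int ch = 0; ch < 3; ++ch) {
        m[ch] = (colorRight[ch] - colorLeft[ch]) / (width - 1);  // slope per channel
        b[ch] = colorLeft[ch];                                   // intercept per channel
    }

    cv::Mat gradient(height, width, CV_8UC3);
    for (int x = 0; x < width; ++x) {
        cv::Scalar c(m[0] * x + b[0], m[1] * x + b[1], m[2] * x + b[2]);
        gradient.col(x).setTo(c);                        // fill the whole column
    }
    cv::imwrite("gradient.png", gradient);
}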
There are many mathematical ways to create a transition; what we really need to understand is which transition you actually want to see. If you want to reproduce the exact transition from the above image, it is worth looking at the color values of that image. I wrote a program a while back to inspect such images and output their values graphically. Here is the output of my program for the above pseudocolor scale.
Looking at the graph, it IS more complex than the purely linear approach I described above. The blue component looks mostly linear, the red could be approximated as linear, but the green has a more rounded shape. We could analyse the green channel mathematically to better understand its function and use that instead. You may find that a linear interpolation with an increasing slope between 0 and ~70 pixels and a decreasing slope after pixel 70 is good enough.
If you look at the bottom of the screen, this program gives some statistical measures of each color component, such as min, max, and average, as well as how many pixels wide the image read was.
A simple linear interpolation of the R,G,B values will do it.
trumpetlicks has shown that the image you used is not a pure linear interpolation. But I think an interpolation gives you the effect you're looking for. Below I show an image with a linear interpolation on top and your original image on the bottom.
And here's the (Python) code that produced it:
for y in range(height/2):
    for x in range(width):
        p = x / float(width - 1)
        r = int((1.0-p) * r1 + p * r2 + 0.5)
        g = int((1.0-p) * g1 + p * g2 + 0.5)
        b = int((1.0-p) * b1 + p * b2 + 0.5)
        pix[x,y] = (r,g,b)
The HSV color space is not a very good color space to use for smooth transitions. This is because the hue value, h, is just used to arbitrarily arrange the different colors around the 'color wheel'. That means that if you go between two colors far apart on the wheel, you have to pass through a bunch of other colors. Not smooth at all.
It would make a lot more sense to use RGB (or CMYK). These 'component' color spaces are better defined to make smooth transitions because they represent how much of each 'component' a color needs.
A linear transition (see trumpetlicks' answer) for each component value R, G and B should look 'pretty good'. Anything more than 'pretty good' will require an actual human to tweak the values, because there are differences and asymmetries in how our eyes perceive color values in different color groups that are not represented in either RGB or CMYK (or any standard).
The wikipedia image is using the algorithm that Photoshop uses. Unfortunately, that algorithm is not publicly available.
I've been researching into this to build an algorithm that takes a grayscale image as input and colorises it artificially according to a color palette:
Grayscale input -> Output
Just like many of the other solutions, the algorithm uses linear interpolation to make the transition between colours. With your example, smooth_color_transition() should be invoked with the following arguments:
QImage input("gradient.jpg");
QVector<QColor> colors;
colors.push_back(QColor(242, 177, 103)); // orange
colors.push_back(QColor(124, 162, 248)); // blue-ish
QImage output = smooth_color_transition(input, colors);
output.save("output.jpg");
A comparison of the original image VS output from the algorithm can be seen below:
(output)
(original)
The visual artefacts that can be observed in the output are already present in the input (grayscale). The input image got these artefacts when it was resized to 189x51.
Here's another example that was created with a more complex color palette:
Grayscale input -> Output
Seems to me like it would be easier to create the gradient using RGB values. You should first calculate the change in color for each value based on the width of the gradient. The following pseudocode would need to be done for R, G, and B values.
redDifference = (redValue2 - redValue1) / widthOfGradient
You can then render each pixel with these values like so:
for (int i = 0; i < widthOfGradient; i++) {
    int r = round(redValue1 + i * redDifference);
    // ...repeat for green and blue
    drawLine(i, r, g, b);
}
I know you specified that you're using C++, but I created a JSFiddle demonstrating this working with your first gradient as an example: http://jsfiddle.net/eumf7/

Determine difference in stops between images with no EXIF data

I have a set of images of the same scene but shot with different exposures. These images have no EXIF data so there is no way to extract useful info like f-stop, shutter speed etc.
What I'm trying to do is determine the difference in stops between the images, i.e. Image1 is +1.3 stops relative to Image0.
My current approach is to first calculate luminance from the image's RGB values using the equation
L = 0.2126 * R + 0.7152 * G + 0.0722 * B
I've seen different coefficients being used in this equation, but generally they should not affect the end result L too much.
After that I derive the log-average luminance of the image.
exp(avg of log(luminance of image))
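A minimal C++ sketch of that log-average computation, assuming an 8-bit BGR cv::Mat and a small epsilon to avoid log(0):

#include <opencv2/opencv.hpp>
#include <cmath>

// Log-average (geometric mean) of the per-pixel luminance.
double logAverageLuminance(const cv::Mat& bgr)
{
    const double eps = 1e-4;
    double sumLog = 0.0;
    for (int y = 0; y < bgr.rows; ++y) {
        for (int x = 0; x < bgr.cols; ++x) {
            cv::Vec3b p = bgr.at<cv::Vec3b>(y, x);
            double L = 0.2126 * p[2] + 0.7152 * p[1] + 0.0722 * p[0];  // R, G, B weights
            sumLog += std::log(L + eps);
        }
    }
    return std::exp(sumLog / (double(bgr.rows) * bgr.cols));
}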
But somehow the log-average luminance doesn't seem to give much indication of the exposure difference between the images.
Any ideas on how to determine exposure difference?
Edit: in C/C++.
You have to generally solve two problems:
1. Linearize your image data
(In case it's not obvious what is meant: two times more light collected by your pixel shall result in two times the intensity value in your linearized image.)
Your image input might be (sufficiently) linearized already -> you may skip to part 2. If your content came from a camera and it's a JPEG, then this will most certainly not be the case.
The real 'solution' to this problem is finding the camera response function, which you want to invert and apply to your image data to get linear intensity values. This is by no means a trivial task. The EMoR model is widely used in all sorts of software (Photoshop, PTGui, Photomatix, etc.) to describe camera response functions. Some open source software solving this problem (but using a different model iirc) is PFScalibrate.
Having said that, you may get away with a simple inverse gamma application. A rough guesstimate for the right gamma value might be found by doing this (a small sketch of the idea follows the list below):
capture an evenly lit, static scene with two exposure times e and e/2
apply a couple of inverse gamma transforms (e.g. for 1.8 to 2.4 in 0.1 steps) on both images
multiply all the short exposure images with 2.0 and subtract them from the respective long exposure images
pick the gamma that leads to the smallest overall difference
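A sketch of the inverse-gamma step and the comparison described above (assuming 8-bit inputs; the function names and the L1 error measure are illustrative choices, not the only option):

#include <opencv2/opencv.hpp>

// Apply an inverse gamma to an 8-bit image and return a linearized float image in [0, 1].
cv::Mat inverseGamma(const cv::Mat& img8u, double gamma)
{
    cv::Mat f;
    img8u.convertTo(f, CV_32F, 1.0 / 255.0);
    cv::pow(f, gamma, f);     // undo the display gamma: linear = encoded^gamma
    return f;
}

// Error for one candidate gamma, following the steps above:
// the short exposure (e/2), doubled, should match the long exposure (e).
double gammaError(const cv::Mat& longExp, const cv::Mat& shortExp, double gamma)
{
    cv::Mat a = inverseGamma(longExp, gamma);
    cv::Mat b = inverseGamma(shortExp, gamma) * 2.0;
    return cv::norm(a, b, cv::NORM_L1);       // smaller is better
}

Trying gammas from 1.8 to 2.4 in 0.1 steps and keeping the one with the smallest error implements the procedure described above.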
2. Find the actual difference of irradiation in stops, i.e. log2(scale factor)
Presuming the scene was static (no moving objects or camera), this is relatively easy:
sum1 = sum2 = 0
foreach pixel pair (p1, p2) from the two images:
    if p1 or p2 is close to 0 or 255:
        skip this pair
    sum1 += p1 and sum2 += p2
return log2(sum1 / sum2)
On large images this will certainly work just as well and a lot faster if you sub-sample the images.
If the camera was static but the scene was not (moving objects), this starts to work less well. I produced acceptable results in this case by simply repeating the above procedure several times, using the output of the previous run as an estimate for the correct scale factor and then discarding pixel pairs whose quotient is too far away from the current estimate. So basically, replace the above if line with the following:
if <see above> or if abs(log2(p1/p2) - estimate) > 0.5:
I'd stop the repetition after a fixed number of iterations or if two consecutive estimates are sufficiently close to each other.
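A compact C++ version of the basic procedure (without the iterative refinement), assuming two linearized 8-bit grayscale images of the same size; the near-0/near-255 cut-offs are illustrative:

#include <opencv2/opencv.hpp>
#include <cmath>

// Returns the exposure difference in stops: log2(sum1 / sum2).
double stopDifference(const cv::Mat& img1, const cv::Mat& img2)
{
    double sum1 = 0.0, sum2 = 0.0;
    for (int y = 0; y < img1.rows; ++y) {
        for (int x = 0; x < img1.cols; ++x) {
            int p1 = img1.at<uchar>(y, x);
            int p2 = img2.at<uchar>(y, x);
            // Skip pixels near the ends of the range (under-/over-exposed).
            if (p1 < 5 || p1 > 250 || p2 < 5 || p2 > 250)
                continue;
            sum1 += p1;
            sum2 += p2;
        }
    }
    return std::log2(sum1 / sum2);
}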
EDIT: A note about conversion to luminance
You don't need to do that at all (as Tony D mentioned already) and if you insist, then do it after the linearization step (as Mark Ransom noted). In a perfect setting (static scene, no noise, no de-mosaicing, no quantization) every channel of every pixel would have the same ratio p1/p2 (if neither is saturated). Therefore the relative weighting of the different channels is irrelevant. You may sum over all pixels/channels (weighing R, G and B equally) or maybe only use the green channel.