Converting byte number to grayscale and vice versa - c++

I have a number between 0 and 255 and need to convert it to an RGB grayscale color. And how do I convert an RGB color to a grayscale value between 0 and 255?

The common formula is luminosity = 0.30 * red + 0.59 * green + 0.11 * blue. It matches the human eye's color perception but doesn't otherwise correct for display device gamma.

If you have a number 0 <= x <= 255 representing a grayscale value, the corresponding RGB tuple is simply (x,x,x).
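For example, a minimal sketch of both directions (the struct and the function names here are just for illustration):

#include <cstdint>

struct RGB { uint8_t r, g, b; };

// Grayscale value 0-255 -> RGB: replicate the value into all three channels.
RGB grayToRgb(uint8_t x)
{
    return RGB{ x, x, x };
}

// RGB -> grayscale 0-255 using the luminosity weights above.
uint8_t rgbToGray(const RGB& c)
{
    return static_cast<uint8_t>(0.30 * c.r + 0.59 * c.g + 0.11 * c.b + 0.5);
}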

Related

Trouble creating grayscale image from array of pixel intensities with ImageMagick

I'm trying to output a PNG grayscale image from an array of intensity values using ImageMagick.
I've used the Image constructor to try to do this, but the image it is creating does not exactly match the given array.
Image grayscaleImage(256, 256, "I", DoublePixel, inputPtr);
grayscaleImage.write("test.png");
The image that's being created has the correct values for all of the black pixels (intensity of 0) but for the non-zero pixels, I'm getting only white; no gray. How can I correct this issue? Or am I using the constructor incorrectly? Thank you!
As emcconville stated, the numbers in the array of integers need to be scaled to be between 0.0 and 1.0 for Magick::DoublePixel. I achieved this by using the following function:
f(x) = (b - a)(x - min) / (max - min) + a
Where a == 0, b == 1, x == inputPtr[index], min == 0, and max == 255.
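As a small sketch of that scaling in C++ (the helper names are made up; only the formula comes from the answer above):

#include <cstddef>
#include <vector>

// Map x from [min, max] to [a, b]; here [0, 255] -> [0.0, 1.0] for DoublePixel.
double scaleToUnit(double x, double min = 0.0, double max = 255.0,
                   double a = 0.0, double b = 1.0)
{
    return (b - a) * (x - min) / (max - min) + a;
}

// Scale an integer intensity buffer before handing it to the
// Image(width, height, "I", DoublePixel, ptr) constructor.
std::vector<double> scaleBuffer(const int* input, std::size_t count)
{
    std::vector<double> scaled(count);
    for (std::size_t i = 0; i < count; ++i)
        scaled[i] = scaleToUnit(input[i]);
    return scaled;
}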

C++ Color calculation from grayscale to color

I have a program that generates a grayscale image. The gray shade for every pixel is set with the following code snippet:
// loop over all pixels
*PixelColorPointer = Number * 0x010101;
In this case, Number is an integer between 0 and 255, which generates all grayscale colors from black to white.
What I'm trying to do is produce a colored image (in order to have false colors), but I don't really understand the calculation with the hex value. I figured out that if I assign e.g. Number * 0xFFFFFF I get a gradient/variety from white to yellow.
Can someone explain to me how the calculation of these colors works? Please remember that (as said already) I want/have to pass the Number variable to get the variety.
RGB colors are stored byte by byte (0 to 255).
When you say 0x010203, it is (in hex) 01 red, 02 green, and 03 blue. It can also be inverted (03 red, 02 green, 01 blue) depending on your endianness.
Here you have to separate your 3 colors. Then you should multiply every color by its coefficient.
You can store your color in a union; it's the easiest way.
union Color
{
    struct
    {
        unsigned char b;
        unsigned char g;
        unsigned char r;
        unsigned char a; // Useless here
    };
    unsigned int full;
};
Color c;
c.full = 0x102030;
unsigned char res = 0.299 * c.r + 0.587 * c.g + 0.114 * c.b;
Color grey;
grey.r = grey.g = grey.b = res;
0.299, 0.587, and 0.114 are the relative luminance coefficients.
Remember, you might need to invert the RGBA order and move the a field accordingly :)
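As a sketch of how you could then build a false color from Number by packing the channels yourself (the blue-to-red mapping below is just an arbitrary example, not the only choice):

// Pack three 0-255 channel values into a 0x00RRGGBB integer.
unsigned int packRgb(unsigned char r, unsigned char g, unsigned char b)
{
    return (static_cast<unsigned int>(r) << 16) |
           (static_cast<unsigned int>(g) << 8) |
            static_cast<unsigned int>(b);
}

// Example false-color mapping: Number 0 -> blue, Number 255 -> red.
unsigned int falseColor(unsigned char Number)
{
    return packRgb(Number, 0, 255 - Number);
}

// Usage, following the question's code:
// *PixelColorPointer = falseColor(Number);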
You need to give a little more information. What library or function are you using to do this?
But just from seeing the problem, I think the hexadecimal number refers to the color as a hex color code, and Number refers to the brightness, 0 being black and 255 maximum color intensity (white).
EDIT: Actually, I think the whole number resulting from:
Number * 0x010101
is the hexadecimal color code; in the particular case of 0x010101, Number works as the intensity. But any other hexadecimal value would give you some weird result.
Use a hex color code table, choose any random color, and just input:
*PixelColorPointer = 0XHEXCODE;
If the output is the desired color, then I'm right.

Converting an OpenCV BGR 8-bit Image to CIE L*a*b*

I am trying to convert a given Mat representing an RGB image with 8-bit depth to Lab using the function provided in the documentation:
cvtColor(source, destination, <conversion code>);
I have tried the following conversion codes:
CV_RGB2Lab
CV_BGR2Lab
CV_LBGR2Lab
I have received bizarre results each time around, with an "L" value of greater than 100 for some samples, literally <107, 125, 130>.
I am also using Photoshop to check the results - but given that 107 is beyond the accepted range of 0 ≤ L ≤ 100, I can not comprehend what my error is.
Update:
I'll post my overall results here:
Given an image (Mat) represented by 8-bit BGR, the image can be converted by the following:
cvtColor(source, destination, CV_BGR2Lab);
The pixel values can then be accessed in the following manner:
int step = destination.step;
int channels = destination.channels();
for (int i = 0; i < destination.rows; i++) {
    for (int j = 0; j < destination.cols; j++) {
        Point3_<uchar> pixelData;
        // L*: 0-255 (elsewhere represented as 0 to 100)
        pixelData.x = destination.data[step * i + channels * j + 0];
        // a*: 0-255 (elsewhere represented as -127 to 127)
        pixelData.y = destination.data[step * i + channels * j + 1];
        // b*: 0-255 (elsewhere represented as -127 to 127)
        pixelData.z = destination.data[step * i + channels * j + 2];
    }
}
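For reference, a shorter equivalent (assuming destination is CV_8UC3 after the conversion and the cv namespace is in scope, as in the snippet above) is to use Mat::at:

for (int i = 0; i < destination.rows; i++) {
    for (int j = 0; j < destination.cols; j++) {
        Vec3b lab = destination.at<Vec3b>(i, j);
        uchar L = lab[0]; // L*: 0-255 here (0-100 elsewhere)
        uchar a = lab[1]; // a*: 0-255 here (-127 to 127 elsewhere)
        uchar b = lab[2]; // b*: 0-255 here (-127 to 127 elsewhere)
    }
}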
If anyone is interested in the range of the other variables a and b, I made a small program to test their range.
If you convert all the colors that can be represented in RGB to the CIE Lab used in OpenCV, the ranges are:
0 <= L <= 255
42 <= a <= 226
20 <= b <= 223
And if you're using RGB values in float mode instead of uint8, the ranges will be:
0.0 <= L <= 100.0
-86.1813 <= a <= 98.2352
-107.862 <= b <= 94.4758
P.S. If you want to see how distinguishable (in terms of human perception) one Lab value is from another, you should use floating point. The scaling used to keep the Lab values in the uint8 range messes up their Euclidean distance.
This is the code I used (python):
import cv2
import numpy as np

L = [0] * 256**3
a = [0] * 256**3
b = [0] * 256**3
i = 0
for r in xrange(256):
    for g in xrange(256):
        for bb in xrange(256):
            im = np.array((bb, g, r), np.uint8).reshape(1, 1, 3)
            cv2.cvtColor(im, cv2.COLOR_BGR2LAB, im)  # transform it to Lab
            L[i] = im[0, 0, 0]
            a[i] = im[0, 0, 1]
            b[i] = im[0, 0, 2]
            i += 1
print min(L), '<=L<=', max(L)
print min(a), '<=a<=', max(a)
print min(b), '<=b<=', max(b)
That's because the L value is in the range [0..255] in OpenCV. You can simply scale this value to the needed interval ([0..100] in your case).
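A minimal sketch of that scaling (assuming destination is the CV_8UC3 Lab image from the question, with the cv namespace in scope):

std::vector<Mat> lab;
split(destination, lab); // lab[0] = L, lab[1] = a, lab[2] = b
Mat L100;
lab[0].convertTo(L100, CV_32F, 100.0 / 255.0); // L channel now in [0, 100]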
I am not sure about João Abrantes's ranges for a and b.
The OpenCV documentation clearly states the CIE L*a*b* ranges. For 8-bit images the values are scaled as L = L * 255/100, a = a + 128, b = b + 128, thus leading to a range of
0 <= L <= 255
0 <= a <= 255
0 <= b <= 255
In case anyone runs into the same issue:
Please note that in OpenCV (2.4.13), you can not convert CV_32FC3 BGR images into the Lab color space. That is to say:
//this->xImage is CV_8UC3
this->xImage.convertTo(FloatPrecisionImage, CV_32FC3);
Mat result;
cvtColor(FloatPrecisionImage, result, COLOR_BGR2Lab);
this->xImage = result;
will not work
while
Mat result;
cvtColor(this->xImage, result, COLOR_BGR2Lab);
result.convertTo(this->xImage, CV_32FC3);
works like a charm.
I did not track down the reason for said behavior; however, it seems off to me, because this in effect puts limits on the image's quality.

OpenCV double mat shows up as all white

I have a type 6 (double-valued, single channel) mat with data ranging from 0 to 255. I can print out the data using the following code:
double* data = result.ptr<double>();
for (int i = 0; i < rows; i++)
    for (int j = 0; j < cols; j++)
        std::cout << data[i * step + j] << "\t";
And this appears perfectly normal--in the range from 0 to 255 and the size that I'd expect. However, when I try to show the image:
imshow(window_name, result);
waitKey();
I just get a white image. Just white pixels. Nothing else.
Loading other images from files and displaying in the window works fine.
Using Windows 7, OpenCV 2.3.3
cv::imshow works in the following ways:
If the image is 8-bit unsigned, it is displayed as is.
If the image is 16-bit unsigned or 32-bit integer, the pixels are divided by 256. That is, the value range [0,255*256] is mapped to [0,255].
If the image is 32-bit floating-point, the pixel values are multiplied by 255. That is, the value range [0,1] is mapped to [0,255].
Your matrix lies in the third category, where imshow expects the values to be between 0 and 1 and so multiplies them by 255. Since your values are already between 0 and 255, you are getting an unwanted result. So normalizing the pixels to between 0 and 1 will work.
You need to normalize your floating-point image so that the values are between 0.0 and 1.0 if you're using imshow. I bet your values are over 1.0, and thus those pixels are all set to 255, giving you the white image.
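A minimal sketch of that normalization (assuming result is the CV_64FC1 matrix with values in [0, 255], as in the question):

Mat display;
result.convertTo(display, CV_64F, 1.0 / 255.0); // scale [0, 255] -> [0, 1]
imshow(window_name, display);
waitKey();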

Taking the integral image of a Color Distance result (logical error)

I'm debugging a robot project and have found an error which I'm not quite sure how to fix theoretically.
I must calculate a color distance map, and following this I must take the integral image of the result and do some calculation with it.
Using the a and b channels from a Lab color space image, I obtain the color distance to, for example, the color red (pA = 255, pB = 127) using the formula sqrt((A - pA)^2 + (B - pB)^2):
subtract(mA, Scalar(pA), tA);
subtract(mB, Scalar(pB), tB);
tA.convertTo(t32A, CV_32SC1);
tB.convertTo(t32B, CV_32SC1);
pow(t32A, 2.0, powA);
pow(t32B, 2.0, powB);
add(powA, powB, sq);
pow(sq, 0.5, res);
//res.convertTo(result, CV_8UC1);
I needed the conversion to CV_32S because CV_8U cannot hold values above 255.
Now I must feed the result into the integral image, which expects only an image of CV_8UC1.
The problem I'm facing, is that the aforementioned color distance function might produce pixels with values above 255.
For example:
distance from (0,0) to red (255,127):
sqrt((0-255)^2 + (0-127)^2) = 285
Or from (0,255) to red (255,127):
sqrt((0-255)^2 + (255-127)^2) = 285
Does anybody have any suggestions on how I can feed the result into the integral image without any loss of information?
Thank you
How about using sqrt(2) as a normalization factor?
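For example, a sketch of that suggestion, reusing the names from the question: the largest possible distance is sqrt(255^2 + 255^2) = 255 * sqrt(2), so dividing by sqrt(2) maps the result back into [0, 255] before the 8-bit conversion.

Mat res8u;
res.convertTo(res8u, CV_8UC1, 1.0 / std::sqrt(2.0)); // needs <cmath>
Mat integralImage;
integral(res8u, integralImage); // res8u is CV_8UC1, as required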