I have a jpeg image with the colors encoded in the YCCK color space. I have already decoded it in C++ using libjpeg. How can I convert it to RGB?
Converting it to CMYK would also be useful for me, since I know how to convert from CMYK to RGB using ICC color profiles.
Have a look here.
First, the conversion to RGB is done as:
R = Y + 1.402*Cr - 179.456
G = Y - 0.34414*Cb - 0.71414*Cr + 135.45984
B = Y + 1.772*Cb - 226.816
After that, conversion to CMYK image is performed as follows:
C = 255 - R
M = 255 - G
Y = 255 - B
The values of K channel are written without modification.
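The two steps above can be sketched as a small helper, assuming 8-bit channel values and the usual clamping to [0, 255] (the function name and the rounding choice are mine):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Clamp a rounded float to the 8-bit range.
static unsigned char clampToByte(float v) {
    long r = std::lround(v);
    return static_cast<unsigned char>(std::max(0L, std::min(255L, r)));
}

// One YCCK pixel -> CMYK, following the two steps above:
// YCC -> RGB, then C = 255 - R, M = 255 - G, Y = 255 - B; K is copied.
void ycck_to_cmyk(unsigned char y, unsigned char cb, unsigned char cr,
                  unsigned char k, unsigned char& c, unsigned char& m,
                  unsigned char& yel, unsigned char& kOut) {
    float r = y + 1.402f * cr - 179.456f;
    float g = y - 0.34414f * cb - 0.71414f * cr + 135.45984f;
    float b = y + 1.772f * cb - 226.816f;
    c = 255 - clampToByte(r);
    m = 255 - clampToByte(g);
    yel = 255 - clampToByte(b);
    kOut = k;  // the K channel is written without modification
}
```

If you only need RGB, stop after the first step (keeping the clamp).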
I am trying to sample colors in MATLAB using the Color Thresholder app and then use the L*a*b* output in OpenCV. But there seems to be a scale mismatch. The following are the L*a*b* scales in MATLAB and OpenCV:
MATLAB: 0 <= L <= 100; -100 <= a <= 100; and -100 <= b <= 100
OpenCV: 0 <= L <= 100; -127 <= a <= 127; and -127 <= b <= 127
according to these two sources: Source 1 and Source 2.
It seems like we need the following L*a*b* ranges for 8-bit images in OpenCV:
0 <= L <= 255; 0 <= a <= 255; and 0 <= b <= 255
How do we convert from the MATLAB to the OpenCV L*a*b* color scale for 8-bit images?
Matlab uses the International Color Consortium's specifications for color representation in its Image Processing Toolbox. The ICC profile specifications are nearly universally used for color specification and conversion.
The ICC specifies Lab as a profile connection space at various bit depths. For 8 bits, unsigned 8-bit values are used: L* is mapped so that 0 -> 0 and 100 -> 255, while a* and b* are limited to between -128 and +127. The encoding therefore adds 128 to a* and b* to produce unsigned values between 0 and 255.
Representation for these and other, larger bit sizes can be found on tables 12 and 13 in section 6.3.4.2 in the specification here:
http://color.org/specification/ICC1v43_2010-12.pdf
Most programs (such as Photoshop) and file formats (such as TIFF) use the ICC encodings. Additionally, Matlab includes conversion functions such as lab2uint8(lab), which converts floating-point Lab values to their proper fixed-size representation, such as unsigned 8-bit integers.
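As a sketch, the 8-bit ICC Lab encoding described above could look like this (an illustration of the mapping only, not Matlab's actual lab2uint8 implementation; names are mine):

```cpp
#include <cassert>
#include <cmath>

// 8-bit ICC Lab encoding: L* in [0, 100] maps linearly to 0..255;
// a* and b* in [-128, +127] are offset by +128 into 0..255.
struct Lab8 { unsigned char L, a, b; };

Lab8 encodeLab8(double L, double a, double b) {
    Lab8 out;
    out.L = static_cast<unsigned char>(std::lround(L * 255.0 / 100.0));
    out.a = static_cast<unsigned char>(std::lround(a + 128.0));
    out.b = static_cast<unsigned char>(std::lround(b + 128.0));
    return out;
}
```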
I want to color k clusters of points in a 2D grid. Right now I'm using a naive approach.
I'm using RGB to set a color: the G component is fixed, R is counted down gradually, and B is counted up gradually. So the first cluster has R set to 255 and the last to 0, and vice versa for B.
int r = 255, g = 80, b = 0;
// do stuff
int step = 255 / k;
// loop over data
int cluster = getCurrentCluster();
int currentR = r - (cluster * step);
int currentG = g;
int currentB = b + (cluster * step);
The current solution works and is effective: it's possible to differentiate the clusters by color.
But I don't like it, and would prefer rainbow colors or at least a richer spectrum.
How can I achieve that? How can I map an integer in interval [0, k) to a color that meets my requirements?
Another approach that came to my mind was to map the integer to a wave length in a given interval, e.g. 400 nm to 800 nm (should roughly be the rainbow spectrum, if I recall correctly) and convert the wavelength to RGB.
If you want to map a linear range to a rainbow-like spectrum, then you are better off starting with a color space like HSV and then converting to RGB.
You can find the details of the conversion here.
HSV will give the nicest results, but its conversion to RGB takes a bit more work.
Instead, consider three functions:
R: r = x < 256 ? 255 - x : x < 512 ? 0 : x - 512
G: g = x < 256 ? x : x < 512 ? 511 - x : 0
B: b = x < 256 ? 0 : x < 512 ? x - 256 : 767 - x
These may be easier and faster, although less aesthetically pleasing (no nice yellow, orange, etc.).
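A minimal sketch of applying those three ramps to a cluster index in [0, k), with segment boundaries chosen so every channel stays within 0..255; the function name and the choice to spread indices over the full 768-value domain are mine:

```cpp
#include <cassert>

struct Rgb { int r, g, b; };

// Map a cluster index in [0, k) to a color by spreading the indices over
// the 768-value domain of the three piecewise-linear ramps
// (red -> green -> blue sweep).
Rgb clusterColor(int cluster, int k) {
    int x = cluster * 768 / k;  // 0 <= x < 768 for cluster in [0, k)
    Rgb c;
    c.r = x < 256 ? 255 - x : x < 512 ? 0 : x - 512;
    c.g = x < 256 ? x : x < 512 ? 511 - x : 0;
    c.b = x < 256 ? 0 : x < 512 ? x - 256 : 767 - x;
    return c;
}
```

For k = 3 this yields pure red, green, and blue; larger k fills in the blends between them.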
I'm using the OpenCV C++ API and I'm trying to convert a camera buffer (YUV_NV12) to an RGB format. However, the dimensions of my image changed (the width shrank from 720 to 480) and the colors are wrong (kind of purple/green-ish).
unsigned char* myYUVBufferPointer = (something passed as argument);
int myYUVBufferHeight = 1280;
int myYUVBufferWidth = 720;
cv::Mat yuvMat(myYUVBufferHeight, myYUVBufferWidth, CV_8U, myYUVBufferPointer);
cv::Mat rgbMat;
cv::cvtColor(yuvMat, rgbMat, CV_YUV2RGB_NV12);
cv::imwrite("path/to/my/image.jpg",rgbMat);
Any ideas? *(I'm more interested about the size changed than the color, since I will eventually convert it to CV_YUV2GRAY_NV12 and thats working, but the size isn't).*
Your code constructs a single-channel (grayscale) image called yuvMat out of the series of unsigned chars. When you then force a conversion of this single-channel image from YUV 4:2:0 to multi-channel RGB, OpenCV assumes the input contains the complete NV12 data: height * 3/2 rows (height x width bytes of Y followed by (1/2) x height x width bytes of interleaved U and V), and it produces an output with 2/3 as many rows as the input. Since the Mat you constructed has only height rows rather than height * 3/2, the destination shrinks to 2/3 of that dimension, and the chroma is read from the wrong part of the buffer, which explains both the size change and the wrong colors.
If your uchar buffer were instead a full-resolution interleaved YUV image (width x height pixels, 3 bytes each), you could construct yuvMat as CV_8UC3 and convert with CV_YUV2BGR. Most probably this is not the case though. YUV_NV12 data comes as width x height uchars of Y, followed by (1/2) width x (1/2) height pairs of uchars representing interleaved U and V. You can either construct the Mat with height * 3/2 rows and a single channel and use CV_YUV2BGR_NV12, or write your own reader that reads Y, U, and V separately, upsamples U and V to full resolution (filling in the horizontal and vertical gaps), builds three single-channel Mats of size width x height, combines them with cv::merge(), and then converts with cv::cvtColor() using the CV_YUV2BGR option. Notice the use of BGR instead of RGB.
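To illustrate the NV12 layout described above, here is a per-pixel sketch without OpenCV. It assumes full-range BT.601-style coefficients; a real camera buffer may use video-range or BT.709 coefficients, so treat the constants as placeholders:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstdint>

struct Rgb8 { uint8_t r, g, b; };

static uint8_t clampByte(double v) {
    long r = std::lround(v);
    return static_cast<uint8_t>(std::max(0L, std::min(255L, r)));
}

// Read one pixel from an NV12 buffer: width*height bytes of Y, followed by
// (width/2)*(height/2) interleaved U,V pairs -- width*height*3/2 bytes total.
Rgb8 nv12Pixel(const uint8_t* buf, int width, int height, int x, int y) {
    const uint8_t* uv = buf + width * height;  // UV plane follows the Y plane
    int Y = buf[y * width + x];
    int U = uv[(y / 2) * width + (x / 2) * 2 + 0] - 128;  // each UV pair covers a 2x2 block
    int V = uv[(y / 2) * width + (x / 2) * 2 + 1] - 128;
    Rgb8 p;
    p.r = clampByte(Y + 1.402 * V);
    p.g = clampByte(Y - 0.344 * U - 0.714 * V);
    p.b = clampByte(Y + 1.772 * U);
    return p;
}
```

Note the buffer must hold width * height * 3/2 bytes, which is exactly why a Mat with only height rows is too small for the NV12 conversion.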
It could be that "something passed as argument" does not have enough data to fill 720 lines. With some video cameras, not all three channels are represented using the same number of bits; for example, when capturing video on an iPhone, the three channels use 8-4-4 bits instead of 8-8-8. I haven't used this type of camera with OpenCV, but most likely the problem is here.
I am trying to convert a given Mat representing an RGB image with 8-bit depth to Lab using the function provided in the documentation:
cvtColor(source, destination, <conversion code>);
I have tried the following conversion codes:
CV_RGB2Lab
CV_BGR2Lab
CV_LBGR2Lab
I have received bizarre results each time, with an "L" value greater than 100 for some samples, literally <107, 125, 130>.
I am also using Photoshop to check the results, but given that 107 is beyond the accepted range of 0 ≤ L ≤ 100, I cannot comprehend what my error is.
Update:
I'll post my overall results here:
Given an image (Mat) represented by 8-bit BGR, the image can be converted by the following:
cvtColor(source, destination, CV_BGR2Lab);
The pixel values can then be accessed in the following manner:
int step = destination.step;
int channels = destination.channels();
for (int i = 0; i < destination.rows; i++) {
    for (int j = 0; j < destination.cols; j++) {
        Point3_<uchar> pixelData;
        //L*: 0-255 (elsewhere is represented by 0 to 100)
        pixelData.x = destination.data[step*i + channels*j + 0];
        //a*: 0-255 (elsewhere is represented by -127 to 127)
        pixelData.y = destination.data[step*i + channels*j + 1];
        //b*: 0-255 (elsewhere is represented by -127 to 127)
        pixelData.z = destination.data[step*i + channels*j + 2];
    }
}
If anyone is interested in the ranges of the other variables, a and b, I made a small program to test them.
If you convert all the colors representable in RGB to the CIELab used in OpenCV, the ranges are:
0 <=L<= 255
42 <=a<= 226
20 <=b<= 223
And if you're using RGB values in the float mode instead of uint8 the ranges will be:
0.0 <=L<= 100.0
-86.1813 <=a<= 98.2352
-107.862 <=b<= 94.4758
P.S. If you want to judge how distinguishable (in terms of human perception) one Lab value is from another, you should use the floating-point representation. The scaling used to fit Lab values into the uint8 range distorts their Euclidean distances.
This is the code I used (Python):
import cv2
import numpy as np

L = [0] * 256**3
a = [0] * 256**3
b = [0] * 256**3
i = 0
for r in range(256):
    for g in range(256):
        for bb in range(256):
            im = np.array((bb, g, r), np.uint8).reshape(1, 1, 3)
            cv2.cvtColor(im, cv2.COLOR_BGR2LAB, im)  # transform it to Lab in place
            L[i] = im[0, 0, 0]
            a[i] = im[0, 0, 1]
            b[i] = im[0, 0, 2]
            i += 1
print(min(L), '<=L<=', max(L))
print(min(a), '<=a<=', max(a))
print(min(b), '<=b<=', max(b))
That's because the L value is in the range [0..255] in OpenCV. You can simply scale it to the needed interval ([0..100] in your case).
I am not sure about João Abrantes's range on A and B.
The OpenCV documentation clearly states the CIE L*a*b* ranges. For 8-bit images the conversion is scaled as L <- L*255/100, a <- a+128, b <- b+128,
thus leading to a range of
0 <= L <= 255
0 <= a <= 255
0 <= b <= 255
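Recovering approximate floating-point CIELab values from OpenCV's 8-bit encoding is just the inverse of its documented scaling (a small illustrative helper, not an OpenCV API):

```cpp
#include <cassert>

// Invert OpenCV's 8-bit Lab scaling (L <- L*255/100, a <- a+128, b <- b+128)
// to get back approximate floating-point CIELab values.
struct LabF { double L, a, b; };

LabF decodeLab8(unsigned char L8, unsigned char a8, unsigned char b8) {
    return { L8 * 100.0 / 255.0, a8 - 128.0, b8 - 128.0 };
}
```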
In case anyone runs into the same issue:
Please note that in OpenCV (2.4.13), you cannot simply convert CV_32FC3 BGR images obtained this way into the Lab color space. That is to say:
//this->xImage is CV_8UC3
this->xImage.convertTo(FloatPrecisionImage, CV_32FC3);
Mat result;
cvtColor(FloatPrecisionImage, result, COLOR_BGR2Lab);
this->xImage = result;
will not work
while
Mat result;
cvtColor(this->xImage, result, COLOR_BGR2Lab);
result.convertTo(this->xImage, CV_32FC3);
works like a charm.
I did not track down the root cause at the time, but the likely explanation is that for floating-point images cvtColor expects pixel values scaled to [0, 1]; convertTo without a scale factor of 1.0/255.0 leaves the data in [0, 255], which is out of range for the conversion. The second variant works because the conversion happens while the image is still 8-bit.
I've got a number between 0 and 255 and need to convert it into an RGB grayscale color. And how do I convert an RGB color back to a grayscale value between 0 and 255?
The common formula is luminosity = 0.30 * red + 0.59 * green + 0.11 * blue. It matches the human eye's color sensitivity but doesn't otherwise correct for display-device gamma.
If you have a number 0 <= x <= 255 representing a grayscale value, the corresponding RGB tuple is simply (x,x,x).
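Both directions can be sketched in a few lines (the helper names are mine; the weights are the 0.30/0.59/0.11 values quoted above):

```cpp
#include <cassert>
#include <cmath>

// RGB -> grayscale using the luma weights quoted above.
int rgbToGray(int r, int g, int b) {
    return static_cast<int>(std::lround(0.30 * r + 0.59 * g + 0.11 * b));
}

// Grayscale -> RGB: the value is simply repeated on all three channels.
void grayToRgb(int x, int& r, int& g, int& b) {
    r = g = b = x;
}
```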