How to convert YUY2 to a BITMAP in C++

I'm using a security camera DLL to retrieve the image from the camera. The DLL calls a function of my program, passing the image buffer as a parameter, but the image is in YUY2 format. I need to convert this buffer to RGB, but every formula I have found on the Internet gives me wrong colors, including the one at http://msdn.microsoft.com/en-us/library/aa904813(VS.80).aspx#yuvformats_2.
I'm able to convert the buffer to a B&W image using only the Y component of each pixel, but I really need the color picture. I debugged (assembly only) the DLL that shows the image on the screen, and it uses DirectDraw to do this.

Using the information from the Microsoft link in the question:
for (int i = 0; i < width / 2; ++i)
{
    int y0 = ptrIn[0];
    int u0 = ptrIn[1];
    int y1 = ptrIn[2];
    int v0 = ptrIn[3];
    ptrIn += 4;

    int c = y0 - 16;
    int d = u0 - 128;
    int e = v0 - 128;
    ptrOut[0] = clip((298 * c + 516 * d + 128) >> 8);           // blue
    ptrOut[1] = clip((298 * c - 100 * d - 208 * e + 128) >> 8); // green
    ptrOut[2] = clip((298 * c + 409 * e + 128) >> 8);           // red

    c = y1 - 16;
    ptrOut[3] = clip((298 * c + 516 * d + 128) >> 8);           // blue
    ptrOut[4] = clip((298 * c - 100 * d - 208 * e + 128) >> 8); // green
    ptrOut[5] = clip((298 * c + 409 * e + 128) >> 8);           // red
    ptrOut += 6;
}

This formula worked:
int C = luma - 16;
int D = cb - 128; // blue-difference chroma (Cb, i.e. U)
int E = cr - 128; // red-difference chroma (Cr, i.e. V)
r = (298 * C + 409 * E + 128) / 256;
g = (298 * C - 100 * D - 208 * E + 128) / 256;
b = (298 * C + 516 * D + 128) / 256;
I got this from a MATLAB example. (Note that D must come from Cb and E from Cr, matching the MSDN formulas above; with the 516 coefficient feeding the blue channel, swapping them gives wrong colors.)
The gotcha is: in memory, Windows bitmaps aren't RGB, they are BGR. If you are writing to a memory buffer, you need to do something like this:
rgbbuffer[rgbindex] = (char)b;
rgbbuffer[rgbindex + 1] = (char)g;
rgbbuffer[rgbindex + 2] = (char)r;
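One more sketch (my addition): if the destination is a 24 bpp Windows DIB section, rows are also padded to 4-byte multiples and stored bottom-up by default, so computing rgbindex from (x, y) looks roughly like this:

```cpp
// Byte offset of pixel (x, y) in a bottom-up 24 bpp DIB section.
int stride   = (width * 3 + 3) & ~3;              // each row padded to 4 bytes
int rgbindex = (height - 1 - y) * stride + x * 3; // row 0 is stored last

rgbbuffer[rgbindex]     = (char)b; // BGR order in memory
rgbbuffer[rgbindex + 1] = (char)g;
rgbbuffer[rgbindex + 2] = (char)r;
```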

If you were already using DirectShow to get video data out of the security camera, then you could simply add the "Color Space Converter Filter" to your DirectShow graph. But if you aren't using DirectShow already (it sounds like you're not) then it will be much, much simpler to just convert the data to RGB yourself using the formulas that Daniel linked to. Adding DirectShow to a project is very complicated.

You will have to write your own converter; GDI+ doesn't know how to deal with YUY2 bitmaps.
Please note that each pair of pixels shares the same color (U/V) values but has two different luminance (Y) values.
The standard conversion formulas (for example, the MSDN ones linked in the question) are enough to write the converter.

Related

How to calculate the RGB values of a pixel from the luminance?

I want to compute the RGB values from the luminance.
The data that I know are:
the new luminance (the value that I want to apply)
the old luminance
the old RGB values.
We can compute the luminance from the RGB values like this:
uint8_t luminance = R * 0.21 + G * 0.71 + B * 0.07;
My code is:
// We create a function to set the luminance of a pixel
void jpegImage::setLuminance(uint8_t newLuminance, unsigned int x, unsigned int y) {
    // If the X or Y value is out of range, we throw an error
    if(x >= width) {
        throw std::runtime_error("Error : in jpegImage::setLuminance : The X value is out of range");
    }
    else if(y >= height) {
        throw std::runtime_error("Error : in jpegImage::setLuminance : The Y value is out of range");
    }

    // If the image is monochrome
    if(pixelSize == 1) {
        // We set the pixel value to the luminance
        pixels[y][x] = newLuminance;
    }
    // Else if the image is colored
    else if(pixelSize == 3) {
        // I don't know how to proceed
        // My image is stored in a std::vector<std::vector<uint8_t>> pixels;
        // This is a list that contains the lines of the image
        // Each line contains the RGB values of its pixels
        // For example, an image with 2 columns and 3 lines:
        // [[R, G, B, R, G, B], [R, G, B, R, G, B], [R, G, B, R, G, B]]
        // For example, the R value at x = 23, y = 12 is:
        // pixels[12][23 * pixelSize];
        // For example, the B value at x = 23, y = 12 is:
        // pixels[12][23 * pixelSize + 2];
        // (If the image is colored, pixelSize will be 3 (R, G and B))
        // (If the image is monochrome, pixelSize will be 1 (just the luminance value))
    }
}
How can I proceed?
Thanks!
You don't need the old luminance if you have the original RGB.
Referencing https://www.fourcc.org/fccyvrgb.php for YUV to RGB conversion.
Compute U and V from original RGB:
```
V = (0.439 * R) - (0.368 * G) - (0.071 * B) + 128
U = -(0.148 * R) - (0.291 * G) + (0.439 * B) + 128
```
Y is the new luminance normalized to a value between 0 and 255
Then just convert back to RGB:
```
B = 1.164(Y - 16) + 2.018(U - 128)
G = 1.164(Y - 16) - 0.813(V - 128) - 0.391(U - 128)
R = 1.164(Y - 16) + 1.596(V - 128)
```
Make sure you clamp the computed value of each equation to the range 0..255; some of these formulas can produce values less than 0 or higher than 255.
There are also multiple formulas for converting between YUV and RGB (different constants). I noticed the page listed above has a different computation for Y than the one you cited. They are all relatively close, with different precisions and adjustments. For just changing the brightness of a pixel, almost any formula will do.
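Putting the pieces together for the asker's jpegImage::setLuminance, the pixelSize == 3 branch might look like the following sketch (untested, my addition; it uses the formulas above and the row layout described in the question's comments):

```cpp
else if(pixelSize == 3) {
    // Index of the R byte for pixel (x, y) in the layout described above.
    const size_t base = x * pixelSize;
    const double R = pixels[y][base];
    const double G = pixels[y][base + 1];
    const double B = pixels[y][base + 2];

    // Keep the original chroma, computed from the old RGB values.
    const double V =  (0.439 * R) - (0.368 * G) - (0.071 * B) + 128;
    const double U = -(0.148 * R) - (0.291 * G) + (0.439 * B) + 128;
    const double Y = newLuminance; // already normalized to 0..255

    // Convert back to RGB, clamping each channel to 0..255.
    auto clamp8 = [](double v) -> uint8_t {
        return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));
    };
    pixels[y][base]     = clamp8(1.164 * (Y - 16) + 1.596 * (V - 128));
    pixels[y][base + 1] = clamp8(1.164 * (Y - 16) - 0.813 * (V - 128) - 0.391 * (U - 128));
    pixels[y][base + 2] = clamp8(1.164 * (Y - 16) + 2.018 * (U - 128));
}
```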
Updated
I originally deleted this answer after the OP suggested it wasn't working for him. I was too busy for the last few days to investigate, but I wrote some sample code to confirm my hypothesis. At the bottom of this answer is a snippet of GDI+-based code that increases the luminance of an image by a variable amount. Along with the code is an image that I tested this on, plus two conversions: one at 130% brightness, another at 170% brightness.
Here's a sample conversion
Original Image
Updated Image (at 130% Y)
Updated Image (at 170% Y)
Source:
#define CLAMP(val) {val = (val > 255) ? 255 : ((val < 0) ? 0 : val);}

void Brighten(Gdiplus::BitmapData& dataIn, Gdiplus::BitmapData& dataOut, const double YMultiplier = 1.3)
{
    if (((dataIn.PixelFormat != PixelFormat24bppRGB) && (dataIn.PixelFormat != PixelFormat32bppARGB)) ||
        ((dataOut.PixelFormat != PixelFormat24bppRGB) && (dataOut.PixelFormat != PixelFormat32bppARGB)))
    {
        return;
    }
    if ((dataIn.Width != dataOut.Width) || (dataIn.Height != dataOut.Height))
    {
        // image sizes aren't the same
        return;
    }

    const size_t incrementIn = dataIn.PixelFormat == PixelFormat24bppRGB ? 3 : 4;
    const size_t incrementOut = dataOut.PixelFormat == PixelFormat24bppRGB ? 3 : 4;
    const size_t width = dataIn.Width;
    const size_t height = dataIn.Height;

    for (size_t y = 0; y < height; y++)
    {
        auto ptrRowIn = (BYTE*)(dataIn.Scan0) + (y * dataIn.Stride);
        auto ptrRowOut = (BYTE*)(dataOut.Scan0) + (y * dataOut.Stride);
        for (size_t x = 0; x < width; x++)
        {
            uint8_t B = ptrRowIn[0];
            uint8_t G = ptrRowIn[1];
            uint8_t R = ptrRowIn[2];
            uint8_t A = (incrementIn == 3) ? 0xFF : ptrRowIn[3];

            // RGB -> YUV
            auto Y = (0.257 * R) + (0.504 * G) + (0.098 * B) + 16;
            auto V = (0.439 * R) - (0.368 * G) - (0.071 * B) + 128;
            auto U = -(0.148 * R) - (0.291 * G) + (0.439 * B) + 128;

            // scale the luminance
            Y *= YMultiplier;

            // YUV -> RGB, clamped to 0..255
            auto newB = 1.164 * (Y - 16) + 2.018 * (U - 128);
            auto newG = 1.164 * (Y - 16) - 0.813 * (V - 128) - 0.391 * (U - 128);
            auto newR = 1.164 * (Y - 16) + 1.596 * (V - 128);
            CLAMP(newR);
            CLAMP(newG);
            CLAMP(newB);

            ptrRowOut[0] = (BYTE)newB;
            ptrRowOut[1] = (BYTE)newG;
            ptrRowOut[2] = (BYTE)newR;
            if (incrementOut == 4)
            {
                ptrRowOut[3] = A; // keep original alpha
            }

            ptrRowIn += incrementIn;
            ptrRowOut += incrementOut;
        }
    }
}
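For completeness, here is roughly how the function might be driven via GDI+ LockBits (a sketch, my addition; GDI+ startup/shutdown and error handling are omitted, and the file name is a placeholder):

```cpp
Gdiplus::Bitmap in(L"input.png"); // placeholder file name
Gdiplus::Bitmap out((INT)in.GetWidth(), (INT)in.GetHeight(), PixelFormat24bppRGB);
Gdiplus::Rect rect(0, 0, (INT)in.GetWidth(), (INT)in.GetHeight());

Gdiplus::BitmapData dataIn = {}, dataOut = {};
in.LockBits(&rect, Gdiplus::ImageLockModeRead, PixelFormat24bppRGB, &dataIn);
out.LockBits(&rect, Gdiplus::ImageLockModeWrite, PixelFormat24bppRGB, &dataOut);

Brighten(dataIn, dataOut, 1.3); // 130% luminance

in.UnlockBits(&dataIn);
out.UnlockBits(&dataOut);
```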

BGR -> YCbCr conversion not working correctly

I am trying to manually convert an image from RGB (BGR in OpenCV) to the YCbCr color space.
My image is a PNG color image, 800 pixels wide and 600 pixels high, 3 channels, 16-bit depth.
Here's how I tried solving this.
cv::Mat convertToYCbCr(cv::Mat image) {
    // converts an RGB image to YCbCr
    // cv::Mat: B-G-R
    std::cout << "Converting image to YCbCr color space." << std::endl;
    int i, j;
    for (i = 0; i <= image.cols; i++) {
        for (j = 0; j <= image.rows; j++) {
            // R, G, B values
            auto R = image.at<cv::Vec3d>(j, i)[2];
            auto G = image.at<cv::Vec3d>(j, i)[1];
            auto B = image.at<cv::Vec3d>(j, i)[0];

            // Y'
            auto Y = image.at<cv::Vec3d>(j, i)[0] = 0.299 * R + 0.587 * G + 0.114 * B + 16;
            // Cb
            auto Cb = image.at<cv::Vec3d>(j, i)[1] = 128 + (-0.169 * R - 0.331 * G + 0.5 * B);
            // Cr
            auto Cr = image.at<cv::Vec3d>(j, i)[2] = 128 + (0.5 * R - 0.419 * G - 0.081 * B);

            std::cout << "At conversion: Y = " << Y << ", Cb = " << Cb << ", "
                      << Cr << std::endl;
        }
    }
    std::cout << "Converting finished." << std::endl;
    return image;
}
The image I receive looks like this:
What I am expecting is this (using OpenCV method):
Do the vertical lines maybe hint at something? Is my loop wrong? Can I even just "replace" the RGB values with YCbCr values and expect the image to look like the example? typeid() returns the same value for both images, N2cv3MatE.
The primary reason for incorrect results being observed is the incorrect data-type used to access the image. The correct type for accessing 16 bit unsigned pixels is cv::Vec3w (not cv::Vec3d).
The next issue is that the coefficients being used for the conversion are designed for analog signals (YPbPr). For digital images, we have to use coefficients designed for digital images (YCbCr). You can find more details in the Wikipedia article on YCbCr, in the section "ITU-R BT.601 conversion".
The piece of information missing from the article is how the coefficients change if the image has 16-bit unsigned or 32-bit floating-point depth. The answer is that we have to scale the coefficients according to the bit depth of the image.
For images with 16 bit unsigned depth, the scaling should be performed as follows:
auto Y = (R * 65.481f * scale) + (G * 128.553f * scale) + (B * 24.966f * scale) + (16.0f * offset);
auto Cb = (R * -37.797f * scale) + (G * -74.203f * scale) + (B * 112.0f * scale) + (128.0f * offset);
auto Cr = (R * 112.0f * scale) + (G * -93.786f * scale) + (B * -18.214f * scale) + (128.0f * offset);
where scale is equal to 257.0/65535.0 and offset is equal to 257.0.
This conversion technique has been adopted from the MATLAB source code for the rgb2ycbcr function, which references the following book describing the scaling:
C.A. Poynton, "A Technical Introduction to Digital Video", John Wiley & Sons, Inc., 1996, Chapter 9, Page 175
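As a sanity check (my own arithmetic, not from the answer): plugging full-scale white, R = G = B = 65535, into the Y equation gives

```
Y = 65535 * (65.481 + 128.553 + 24.966) * (257 / 65535) + 16 * 257
  = 219.0 * 257 + 4112
  = 60395   (= 235 * 257, the 8-bit BT.601 white level 235 scaled to 16 bits)
```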
Now that the conversion has been done, the third issue we face is the visualization of image similar to that of OpenCV. When we perform color conversion with OpenCV, the output image is stored in the order YCrCb instead of the usual YCbCr. So to get the same image with our custom conversion logic, we have to store values in the relevant order.
A sample conversion code may look like this:
if(image.type() == CV_16UC3)
{
    const float scale = 257.0f / 65535.0f;
    const float offset = 257.0f;
    for (int i = 0; i < image.cols; i++)
    {
        for (int j = 0; j < image.rows; j++)
        {
            auto R = image.at<cv::Vec3w>(j, i)[2];
            auto G = image.at<cv::Vec3w>(j, i)[1];
            auto B = image.at<cv::Vec3w>(j, i)[0];

            auto Y = (R * 65.481f * scale) + (G * 128.553f * scale) + (B * 24.966f * scale) + (16.0f * offset);
            auto Cb = (R * -37.797f * scale) + (G * -74.203f * scale) + (B * 112.0f * scale) + (128.0f * offset);
            auto Cr = (R * 112.0f * scale) + (G * -93.786f * scale) + (B * -18.214f * scale) + (128.0f * offset);

            image.at<cv::Vec3w>(j, i)[0] = (unsigned short)Y;
            image.at<cv::Vec3w>(j, i)[1] = (unsigned short)Cr;
            image.at<cv::Vec3w>(j, i)[2] = (unsigned short)Cb;
        }
    }
}
You should use cv::cvtColor:
cvtColor(src, target_image, cv::COLOR_RGB2YCrCb);
Then just flip the second and third channels.
Though you could also be getting that result because you're not casting the computed values to ints.
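A minimal sketch of that suggestion (my addition; it assumes the source is BGR as loaded by cv::imread, hence COLOR_BGR2YCrCb):

```cpp
cv::Mat ycrcb;
cv::cvtColor(src, ycrcb, cv::COLOR_BGR2YCrCb); // OpenCV outputs Y, Cr, Cb

// Swap the 2nd and 3rd channels to get YCbCr order.
std::vector<cv::Mat> ch;
cv::split(ycrcb, ch);
std::swap(ch[1], ch[2]);
cv::Mat ycbcr;
cv::merge(ch, ycbcr);
```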

Transform images with bezier curves

I'm using this article: nonlingr as a source to understand non-linear transformations. In the section GLYPHS ALONG A PATH the author explains how to use a parametric curve to transform an image. I'm trying to apply a cubic Bezier to an image, but I have been unsuccessful. This is my code:
OUT.aloc(IN.width(), IN.height());

// get the control points...
wVector p0(values[vindex], values[vindex + 1], 1);
wVector p1(values[vindex + 2], values[vindex + 3], 1);
wVector p2(values[vindex + 4], values[vindex + 5], 1);
wVector p3(values[vindex + 6], values[vindex + 7], 1);

// this is to calculate t based on x
double trange = 1 / (OUT.width() - 1);

// curve coefficients
double A = (-p0[0] + 3 * p1[0] - 3 * p2[0] + p3[0]);
double B = (3 * p0[0] - 6 * p1[0] + 3 * p2[0]);
double C = (-3 * p0[0] + 3 * p1[0]);
double D = p0[0];
double E = (-p0[1] + 3 * p1[1] - 3 * p2[1] + p3[1]);
double F = (3 * p0[1] - 6 * p1[1] + 3 * p2[1]);
double G = (-3 * p0[1] + 3 * p1[1]);
double H = p0[1];

// apply the transformation
for(long i = 0; i < OUT.height(); i++){
    for(long j = 0; j < OUT.width(); j++){
        // t = x / width
        double t = trange * j;

        // apply the article's formulas
        double x_path_d = 3 * t * t * A + 2 * t * B + C;
        double y_path_d = 3 * t * t * E + 2 * t * F + G;
        double angle = 3.14159265 / 2.0 + std::atan(y_path_d / x_path_d);
        mapped_point.Set((t * t * t) * A + (t * t) * B + t * C + D + i * std::cos(angle),
                         (t * t * t) * E + (t * t) * F + t * G + H + i * std::sin(angle),
                         1);

        // test if the point is inside the image
        if(mapped_point[0] < 0 ||
           mapped_point[0] >= OUT.width() ||
           mapped_point[1] < 0 ||
           mapped_point[1] >= IN.height())
            continue;

        OUT.setPixel(
            long(mapped_point[0]),
            long(mapped_point[1]),
            IN.getPixel(j, i));
    }
}
Applying this code to a 300x196 RGB image, all I get is a black screen, no matter what control points I use. It is hard to find information about this kind of transformation: searching for parametric curves, all I find is how to draw them, not how to apply them to images. Can someone help me transform an image with a Bezier curve?
IMHO applying a curve to an image sounds like using a LUT. You check the value of the curve for each possible image value and then replace the image value with the one on the curve. So, create a look-up table covering every possible value in the image (e.g. 0, 1, ..., 255 for an 8-bit grayscale image): that is a 2x256 matrix, where the first column holds the values 0 to 255 and the second column holds the corresponding value of the curve.
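A minimal sketch of that LUT idea (my addition; the gamma curve here is just a placeholder for whatever tone curve you want to apply):

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <cstdint>

// Build a 256-entry LUT from a tone curve f: [0,1] -> [0,1].
std::array<uint8_t, 256> buildLut(double gamma)
{
    std::array<uint8_t, 256> lut{};
    for (int v = 0; v < 256; ++v) {
        double y = std::pow(v / 255.0, gamma);          // evaluate the curve
        y = std::clamp(y, 0.0, 1.0);
        lut[v] = static_cast<uint8_t>(y * 255.0 + 0.5); // round back to 8 bits
    }
    return lut;
}

// usage: auto lut = buildLut(0.8);
//        for (auto& px : grayImage) px = lut[px];
```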

c++ YUYV 422 Horizontal and Vertical Flipping

I have a uint8_t YUYV 422 (Interleaved) image array in memory and I want to be able to flip it both vertically and horizontally. I have successfully implemented a vertical flip but I'm having a problem with flipping both horizontally and vertically at the same time.
My code for the vertical flip, below, works perfectly.
int counter = 0;
int array_width = 2; // YUYV

for (int h = (m_Width * m_Height * array_width) - m_Width * array_width; h > 0; h -= m_Width * array_width)
{
    for (int w = 0; w < m_Width * array_width; w++)
    {
        flipped[counter] = buffer[h + w];
        counter++;
    }
}
However, the following vertical and horizontal flip code appears to work but there is a loss of definition. To better understand what I am referring to, please see my sample images.
int x = 0;
for (int n = m_Width * m_Height * 2 - 1; n >= 0; n -= 4)
{
    flipped[x] = buffer[n - 3];     // Y0
    flipped[x + 1] = buffer[n - 2]; // U
    flipped[x + 2] = buffer[n - 1]; // Y1
    flipped[x + 3] = buffer[n];     // V
    x += 4;
}
As you can see, I am moving the YUYV components and keeping them in the same order. I don't believe that I am dropping pixels so I don't understand why I am losing definition. To reiterate, I don't see this problem when flipping vertically (Using the first code snippet).
Here is the reference image, please note the stem of the lamp:
This is the flipped image, the stem of the lamp has lost definition:
You also need to swap Y0 and Y1 in your loop.
int x = 0;
for (int n = m_Width * m_Height * 2 - 1; n >= 3; n -= 4)
{
    flipped[x] = buffer[n - 1];     // Y1 -> Y0
    flipped[x + 1] = buffer[n - 2]; // U
    flipped[x + 2] = buffer[n - 3]; // Y0 -> Y1
    flipped[x + 3] = buffer[n];     // V
    x += 4;
}
While I was at it, since you're accessing n - 3 I changed the loop condition to be absolutely sure it was safe.
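To see why the swap is needed: U and V apply to the whole two-pixel macropixel, but mirroring the row reverses the order of the two luma samples within each macropixel:

```
last source macropixel:    [Y0 U Y1 V]  ->  pixels ... P0 P1
first flipped macropixel:  [Y1 U Y0 V]  ->  pixels P1 P0 ...
```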
m_Width * m_Height * 2 may not be a multiple of 4 (the size of one data block in YUYV format). Try changing the '2' into a '4', and also array_width.
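If you want to guard against a frame size that isn't a whole number of macropixels (my addition; this holds automatically whenever m_Width is even), a quick check could be:

```cpp
#include <cassert>

// 4 bytes = one YUYV macropixel (two pixels); the flip loops assume the
// buffer is a whole number of macropixels.
assert((m_Width * m_Height * 2) % 4 == 0);
```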

Show RGB888 content

I have to show RGB888 content using the ShowRGBContent function.
The function below is the ShowRGBContent function for YV12->RGB565 and UYVY->RGB565:
static void ShowRGBContent(UINT8 * pImageBuf, INT32 width, INT32 height)
{
    LogEntry(L"%d : In %s Function \r\n", ++abhineet, __WFUNCTION__);
    UINT16 * temp;
    BYTE rValue, gValue, bValue;

    // this is to refresh the background desktop
    ShowWindow(GetDesktopWindow(), SW_HIDE);
    ShowWindow(GetDesktopWindow(), SW_SHOW);

    for(int i = 0; i < height; i++)
    {
        for (int j = 0; j < width; j++)
        {
            temp = (UINT16 *)(pImageBuf + i * width * PP_TEST_FRAME_BPP + j * PP_TEST_FRAME_BPP);
            bValue = (BYTE)((*temp & RGB_COMPONET0_MASK) >> RGB_COMPONET0_OFFSET) << (8 - RGB_COMPONET0_WIDTH);
            gValue = (BYTE)((*temp & RGB_COMPONET1_MASK) >> RGB_COMPONET1_OFFSET) << (8 - RGB_COMPONET1_WIDTH);
            rValue = (BYTE)((*temp & RGB_COMPONET2_MASK) >> RGB_COMPONET2_OFFSET) << (8 - RGB_COMPONET2_WIDTH);
            SetPixel(g_hDisplay, SCREEN_OFFSET_X + j, SCREEN_OFFSET_Y + i, RGB(rValue, gValue, bValue));
        }
    }

    Sleep(2000); // sleep here to review the result
    LogEntry(L"%d :Out %s Function \r\n", ++abhineet, __WFUNCTION__);
}
I have to modify this for RGB888.
In the above function:

RGB_COMPONET0_WIDTH = 5
RGB_COMPONET1_WIDTH = 6
RGB_COMPONET2_WIDTH = 5

RGB_COMPONET0_MASK = 0x001F // 31 in decimal
RGB_COMPONET1_MASK = 0x07E0 // 2016 in decimal
RGB_COMPONET2_MASK = 0xF800 // 63488 in decimal

RGB_COMPONET0_OFFSET = 0
RGB_COMPONET1_OFFSET = 5
RGB_COMPONET2_OFFSET = 11

SCREEN_OFFSET_X = 100
SCREEN_OFFSET_Y = 0
Also, PP_TEST_FRAME_BPP = 2 for YV12->RGB565 and UYVY->RGB565:

iOutputBytesPerFrame = iOutputStride * iOutputHeight;
// where iOutputStride = (iOutputWidth * PP_TEST_FRAME_BPP), i.e. (112 * 2)
// & iOutputHeight = 160
// These are in case of RGB565

pOutputFrameVirtAddr = (UINT32 *) AllocPhysMem( iOutputBytesPerFrame,
                                                PAGE_EXECUTE_READWRITE,
                                                0,
                                                0,
                                                (ULONG *) &pOutputFramePhysAddr);
// PAGE_EXECUTE_READWRITE = 0x40, defined in winnt.h
// Width = 112 & Height = 160 in all the formats for i/p & o/p
Now my task is to do the same for RGB888.
Please guide me on what I should do here.
Thanks in advance.
Conversion from YUV444 to RGB888 is pretty simple, since all of the components fall on byte boundaries, so no bit masking should even be needed. According to the Wikipedia article nobugz referred to in the comments section, the conversion can be done in fixed point as follows:
UINT8* pimg = pImageBuf;
for(int i = 0; i < height; i++)
{
    for (int j = 0; j < width; j++)
    {
        INT16 Y = pimg[0];
        INT16 Cb = (INT16)pimg[1] - 128;
        INT16 Cr = (INT16)pimg[2] - 128;

        // The shifts must be parenthesized: '>>' binds more loosely than '+',
        // so without the parentheses these expressions compute the wrong values.
        // (Results may still need clamping to 0..255 before calling RGB().)
        rValue = Y + Cr + (Cr >> 2) + (Cr >> 3) + (Cr >> 5);
        gValue = Y - ((Cb >> 2) + (Cb >> 4) + (Cb >> 5)) -
                     ((Cr >> 1) + (Cr >> 3) + (Cr >> 4) + (Cr >> 5));
        bValue = Y + Cb + (Cb >> 1) + (Cb >> 2) + (Cb >> 6);

        SetPixel(g_hDisplay, SCREEN_OFFSET_X + j, SCREEN_OFFSET_Y + i,
                 RGB(rValue, gValue, bValue));
        pimg += 3;
    }
}
This assumes that your YUV444 is 8 bits per sample (24 bits per pixel). The conversion can also be done in floating point, but fixed point should be quicker since your source and destination are both fixed point. I'm also not sure the conversion to INT16 is necessary, but I did it to be safe.
Note that the 444 in YUV444 is not referring to the same thing as the 888 in RGB888. The 444 refers to the subsampling that often occurs when using the YUV colorspace: for instance, in YUV420, Cb and Cr are decimated by two in both directions, while YUV444 just means that all three components are sampled the same (no subsampling). The 888 in RGB888 refers to the bits per sample (8 bits for each of the three color components).
I have not actually tested this code, but it should at least give you an idea where to start.
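For reference (my addition), the shift sums above approximate the usual floating-point constants: Cr + (Cr>>2) + (Cr>>3) + (Cr>>5) = 1.40625·Cr versus the nominal 1.402, and similarly for the others. A floating-point version to cross-check against would look roughly like this:

```cpp
// Floating-point cross-check (same JPEG/full-range YCbCr constants that the
// shift sequences approximate; Cb and Cr are already centered at 0 here):
int r = (int)(Y + 1.402 * Cr);
int g = (int)(Y - 0.344 * Cb - 0.714 * Cr);
int b = (int)(Y + 1.772 * Cb);
// clamp r, g, b to 0..255 before calling RGB()
```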