Scaling from 10-bit YUV to RGB - C++

I finally managed, with the aid of libyuv, to convert a sample of type MFVideoFormat_P010 (read with Media Foundation), and I got a buffer of 10-bit values in a 32-bit structure like this:
struct ar30
{
    unsigned b : 10;
    unsigned g : 10;
    unsigned r : 10;
    unsigned a : 2;
};
Now I want to convert these RGB values to display into my HDR Direct2D context which has a DXGI_FORMAT_R16G16B16A16_FLOAT format and accepts WIC bitmaps created in a GUID_WICPixelFormat128bppPRGBAFloat format.
My problem is how to scale these values. Scaling to [0..1] misses the point of HDR anyway, but scaling to [0..4] creates an image that is far too bright, which makes me think that the mapping between these 10-bit values and floating point cannot be linear. Looking at the images suggests that something about the luma is wrong.
Any clue on the proper conversion?
Thanks a lot.
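If the P010 stream is HDR10, the 10-bit code values are normally not linear light but PQ-encoded (SMPTE ST 2084), which would explain why any linear scale looks wrong. Below is a minimal sketch of the PQ decode, assuming PQ-encoded content and the usual scRGB interpretation of DXGI_FORMAT_R16G16B16A16_FLOAT (1.0 = 80 nits); verify the stream's actual transfer characteristics before relying on it:

```cpp
#include <algorithm>
#include <cmath>

// Sketch only: assumes the 10-bit values are PQ-encoded (SMPTE ST 2084),
// which is typical for HDR10/P010 content.
float pqToLinearNits(unsigned v10)   // v10 in [0, 1023]
{
    const float m1 = 2610.0f / 16384.0f;
    const float m2 = 2523.0f / 4096.0f * 128.0f;
    const float c1 = 3424.0f / 4096.0f;
    const float c2 = 2413.0f / 4096.0f * 32.0f;
    const float c3 = 2392.0f / 4096.0f * 32.0f;

    float e = v10 / 1023.0f;                 // normalized PQ signal
    float p = std::pow(e, 1.0f / m2);
    float num = std::max(p - c1, 0.0f);
    float den = c2 - c3 * p;
    return 10000.0f * std::pow(num / den, 1.0f / m1);   // luminance in nits
}

// scRGB (the usual interpretation of R16G16B16A16_FLOAT) maps 1.0 to 80 nits.
float pqToScRgb(unsigned v10)
{
    return pqToLinearNits(v10) / 80.0f;
}
```

Note that HDR10 content is normally in BT.2020 primaries, so a BT.2020-to-709 gamut conversion matrix would usually be needed on top of the transfer-function decode.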

Related

How to convert an image with 16 bit integers into a QPixmap

I am working with software that has a proprietary image format. I need to be able to display a modified version of these images in a Qt GUI. There is a method (Image->GetPixel(x,y)) that returns a 16-bit integer (16 bits per pixel). To be clear, the 16-bit number does not represent an RGB color; it literally represents a measurement (a height map value) at that particular point on the part being photographed. I need to take the range of integers in the image, apply a scale, and represent it in colors. Then I need to use that information to build an image for a QPixmap that can be displayed in a QLabel. Here is the general gist...
QByteArray Arr;
unsigned int Temp;
unsigned char bytes[2];
for (int N = 0; N < Image->GetWidth(); N++) {
    for (int M = 0; M < Image->GetHeight(); M++) {
        Temp = Image->GetPixel(N, M);
        bytes[0] = (Temp >> 8) & 0xFF;   // high byte
        bytes[1] = Temp & 0xFF;          // low byte
        Arr.push_back(bytes[0]);
        Arr.push_back(bytes[1]);
    }
}
// Take the range of 16 bit integers. Example (14,982 to 16,010)
// Apply a color scheme to the values
// Display the image
QPixmap Image2;
Image2.loadFromData(Arr);
ui->LabelPic->setPixmap(Image2);
Thoughts?
This screenshot is an example of what I am trying to replicate. It is important to note that the coloration of the image is not inherent to the underlying data in the image. It is the result of an application scaling the height values and applying a color scheme to the range of integers.
The information on the proprietary image format is limited, so the following is a guess (as requested) based on the explanation above:
QImage img(/*raw image data*/ (const uchar*) qbyteArr.data(),
           /*width*/ width, /*height*/ rows,
           /*bytes per line*/ width * sizeof(quint16),
           /*format*/ QImage::Format_RGB16); // the format likely matches the request
QPixmap pixmap = QPixmap::fromImage(img); // if the pixmap is needed
I found pieces of the answer here.
Algorithm to convert any positive integer to an RGB value
As for the actual format, I chose to convert the 16 bit integer into a QImage::Format_RGB888 to create a heat map. This was accomplished by applying a scale to the range of integers and using the scale to plot different color equations.
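As a rough illustration of that scaling step, here is a sketch that maps one 16-bit height sample onto a blue-to-green-to-red gradient; the struct, function name, and exact gradient are made up for illustration, not taken from the proprietary SDK:

```cpp
#include <algorithm>
#include <cstdint>
#include <cmath>

// Sketch only: map one 16-bit height value onto a blue -> green -> red ramp.
// The range endpoints (e.g. 14982..16010) would come from scanning the image.
struct Rgb { uint8_t r, g, b; };

Rgb heightToColor(uint16_t v, uint16_t lo, uint16_t hi)
{
    uint16_t c = std::min(std::max(v, lo), hi);
    float t = (hi > lo) ? float(c - lo) / float(hi - lo) : 0.0f;

    // Piecewise ramp: t=0 -> blue, t=0.5 -> green, t=1 -> red.
    float rf = std::min(std::max(2.0f * t - 1.0f, 0.0f), 1.0f);
    float bf = std::min(std::max(1.0f - 2.0f * t, 0.0f), 1.0f);
    float gf = 1.0f - std::fabs(2.0f * t - 1.0f);

    return { uint8_t(rf * 255.0f + 0.5f),
             uint8_t(gf * 255.0f + 0.5f),
             uint8_t(bf * 255.0f + 0.5f) };
}
```

Each resulting triple would then be written into a buffer wrapped by a QImage created with QImage::Format_RGB888 and converted to a QPixmap for display.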

Converting float to unsigned char causes wrong values

I've created a function that creates a BMP image using RGB values.
The RGB values are stored as floats that range from 0.0 to 1.0.
When writing the values to the BMP file they need to range from 0 to 255, so I multiply the floats by 255.0.
They also need to be unsigned chars.
EDIT: Unless one of you can think of a better type.
So basically what I do is this
unsigned char pixel[3];
// BMP expects BGR
pixel[0] = image.b * 255.0;
pixel[1] = image.g * 255.0;
pixel[2] = image.r * 255.0;
fwrite(&pixel, 1, 3, file);
Where image.r is a float.
There seems to be some kind of loss of data in the conversion because some parts of the image are black when they shouldn't be.
The BMP image is set to 24 bits per pixel
I was going to post images but I don't have enough reputation.
edit:
BMP image
http://tinypic.com/r/2qw3cdv/8
Printscreen
http://tinypic.com/r/2q3rm07/8
Basically light blue parts become black.
If I multiply by 128 instead, the image is darker but otherwise accurate. It starts getting weird around 180 or so.
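The symptoms above (bright areas turning black, multiplying by 128 looking fine) are consistent with channel values above 1.0 overflowing in the float-to-unsigned-char conversion, which is undefined behaviour for out-of-range values in C++ and commonly wraps to small, dark values. A sketch of clamping before the conversion:

```cpp
#include <algorithm>

// Sketch: clamp each channel before the float -> unsigned char conversion.
// Converting a float outside [0, 255] to unsigned char is undefined behaviour,
// which would explain bright channels wrapping around to dark values.
unsigned char toByte(float channel)           // channel nominally in [0, 1]
{
    float scaled = channel * 255.0f;
    scaled = std::min(std::max(scaled, 0.0f), 255.0f);
    return static_cast<unsigned char>(scaled + 0.5f);   // round, not truncate
}
```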

Overlaying/merging two (and more) YUV images in OpenCV

I investigated and stripped down my previous question (Is there a way to avoid conversion from YUV to BGR?). I want to overlay a few images (in YUV format) on a resulting, bigger image (think of it as a canvas) and send it forward via a network library (OPAL) without converting it to BGR.
Here is the code:
Mat tYUV;
Mat tClonedYUV;
Mat tBGR;
Mat tMergedFrame;
int tMergedFrameWidth = 1000;
int tMergedFrameHeight = 800;
int tMergedFrameHalfWidth = tMergedFrameWidth / 2;
tYUV = Mat(tHeader->height * 1.5f, tHeader->width, CV_8UC1, OPAL_VIDEO_FRAME_DATA_PTR(tHeader));
tClonedYUV = tYUV.clone();
tMergedFrame = Mat(Size(tMergedFrameWidth, tMergedFrameHeight), tYUV.type(),
                   cv::Scalar(0, 0, 0));
tYUV.copyTo(tMergedFrame(cv::Rect(0, 0,
        std::min(tYUV.cols, tMergedFrameWidth),
        std::min(tYUV.rows, tMergedFrameHeight))));
tClonedYUV.copyTo(tMergedFrame(cv::Rect(tMergedFrameHalfWidth, 0,
        std::min(tYUV.cols, tMergedFrameHalfWidth),
        std::min(tYUV.rows, tMergedFrameHeight))));
namedWindow("merged frame", 1);
imshow("merged frame", tMergedFrame);
waitKey(10);
The result of above code looks like this:
I guess the image is not correctly interpreted, so the pictures stay black and white (the Y component) and below them we can see the U and V components. Here are images which describe the problem well (http://en.wikipedia.org/wiki/YUV):
and: http://upload.wikimedia.org/wikipedia/en/0/0d/Yuv420.svg
Is there a way for these values to be correctly read? I guess I should not copy the whole images (their Y, U, V components) straight to the calculated positions. The U and V components should be below them and in the proper order, am I right?
First, there are several YUV formats, so you need to be clear about which one you are using.
According to your image, it seems your YUV format is Y'UV420p.
Regardless, it is a lot simpler to convert to BGR, do the work there, and then convert back.
If that is not an option, you pretty much have to manage the ROIs yourself. YUV is commonly a planar format, where the channels are not (completely) interleaved - and some are of different sizes and depths. If you do not use the internal color conversions, then you will have to know the exact YUV format and manage the pixel-copying ROIs yourself.
With a YUV image, the CV_8UC* format specifier does not mean much beyond the actual memory requirements. It certainly does not specify the pixel/channel muxing.
For example, if you wanted to only use the Y component, then the Y is often the first plane in the image so the first "half" of whole image can just be treated as a monochrome 8UC1 image. In this case using ROIs is easy.
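To make the plane handling concrete, here is a sketch of an I420 (Y'UV420p) blit done with raw pointers rather than OpenCV ROIs; it assumes tightly packed planes and even destination offsets, and would need adapting to the actual OPAL buffer layout:

```cpp
#include <algorithm>
#include <cstdint>
#include <cstring>

// Sketch: blit a small I420 (Y'UV420p) image into a larger I420 canvas at
// (dstX, dstY). Each plane is copied separately, because in a planar format
// the Y, U and V samples live in three consecutive blocks, and the chroma
// planes are half the width and height of the luma plane.
// dstX/dstY are assumed even so chroma stays aligned.
void blitI420(const uint8_t* src, int srcW, int srcH,
              uint8_t* dst, int dstW, int dstH,
              int dstX, int dstY)
{
    const uint8_t* srcU = src + srcW * srcH;
    const uint8_t* srcV = srcU + (srcW / 2) * (srcH / 2);
    uint8_t* dstU = dst + dstW * dstH;
    uint8_t* dstV = dstU + (dstW / 2) * (dstH / 2);

    // Luma plane: full resolution.
    for (int y = 0; y < srcH && dstY + y < dstH; ++y)
        std::memcpy(dst + (dstY + y) * dstW + dstX,
                    src + y * srcW,
                    std::min(srcW, dstW - dstX));

    // Chroma planes: half resolution in both directions.
    for (int y = 0; y < srcH / 2 && dstY / 2 + y < dstH / 2; ++y) {
        int n = std::min(srcW / 2, dstW / 2 - dstX / 2);
        std::memcpy(dstU + (dstY / 2 + y) * (dstW / 2) + dstX / 2,
                    srcU + y * (srcW / 2), n);
        std::memcpy(dstV + (dstY / 2 + y) * (dstW / 2) + dstX / 2,
                    srcV + y * (srcW / 2), n);
    }
}
```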

How to convert yuy2 video samples to rgb samples?

I know the formula to convert YUY2 to RGB, as described here:
Convert yuy2 to bitmap
My problem is that I don't know how to apply it in a DirectShow filter:
In DirectShow I have a buffer and a header, but how do I convert these into RGB?
The formula is:
int C = luma - 16;
int D = cb - 128;
int E = cr - 128;
r = (298*C + 409*E + 128) / 256;
g = (298*C - 100*D - 208*E + 128) / 256;
b = (298*C + 516*D + 128) / 256;
How do I get these values, and how do I write them into the output buffer?
This is how I copy the buffer at the moment:
BYTE *newBuffer = nullptr;
BYTE *sampleBuffer = nullptr;
long lSizeSample = sample->GetSize();
long lSizeOutSample = outsample->GetSize();
outsample->GetPointer(&newBuffer);
sample->GetPointer(&sampleBuffer);
memcpy((void *)newBuffer, (void *)sampleBuffer, lSizeSample);
So I just copy the buffer. But how do I modify it?
Instead of memcpy you are expected to convert pixel by pixel, taking into consideration strides, planar/packed formatting, etc. In most cases this needs to be well optimized, such as by using SIMD, for decent performance.
You can do the math yourself, of course, but you can also have the conversion done for you by Color Converter DSP, if Vista+ is OK for you.
The DSP is available as DMO, or you can use DMO Wrapper Filter and use it as a readily available DirectShow filter.
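For illustration, here is a per-scanline sketch of the formula above applied to packed YUY2. It assumes the common Y0 U Y1 V byte order and ignores strides and the DirectShow buffer plumbing, which a real filter must handle:

```cpp
#include <algorithm>
#include <cstdint>

static uint8_t clampByte(int v) { return (uint8_t)std::min(std::max(v, 0), 255); }

// Sketch: convert one YUY2 (Y0 U Y1 V) scanline to packed RGB using the
// integer formula; each 4-byte group yields two pixels sharing one U/V pair.
void yuy2RowToRgb(const uint8_t* src, uint8_t* dst, int width)
{
    for (int x = 0; x < width; x += 2) {            // 4 bytes -> 2 pixels
        int y0 = src[0], u = src[1], y1 = src[2], v = src[3];
        int d = u - 128;                            // Cb offset
        int e = v - 128;                            // Cr offset
        for (int i = 0; i < 2; ++i) {
            int c = (i == 0 ? y0 : y1) - 16;
            dst[0] = clampByte((298 * c + 409 * e + 128) >> 8);           // R
            dst[1] = clampByte((298 * c - 100 * d - 208 * e + 128) >> 8); // G
            dst[2] = clampByte((298 * c + 516 * d + 128) >> 8);           // B
            dst += 3;
        }
        src += 4;
    }
}
```

A real filter would loop this over every row of the input buffer, honoring the strides from the media type, and write into the output sample's buffer instead of memcpy'ing.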

getting Y value[Ycbcr] of one Pixel in opencv

I'm trying to get the Y value of a pixel from a frame that's in the YCbCr color mode.
Here is what I wrote:
cv::Mat frame, yCbCrFrame, helpFrame;
........
cvtColor(frame, yCbCrFrame, CV_RGB2YCrCb); // converting to YCbCr
Vec3b intensity = yCbCrFrame.at<uchar>(YPoint);
uchar yv = intensity.val[0]; // I thought this was my Y value, but it's not; I think it gives me the blue channel of the RGB color space
Any idea what the correct way to do that is?
What about the following code?
Vec3f Y_pix = YCbCrframe.at<Vec3f>(rows, cols);
int pixelval = Y_pix[0];
(P.S. I haven't tried it yet)
You need to know both the depth (the numerical format and precision of the channel samples) and the channel count (typically 3, but it can also be 1 for monochrome or 4 when alpha is present) ahead of time.
For 3-channel, 8-bit unsigned integer (a.k.a. byte or uchar) pixel format, each pixel can be accessed with
mat8UC3.at<cv::Vec3b>(pt);
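As a sanity check for the value that `mat8UC3.at<cv::Vec3b>(pt)[0]` should return on a converted frame, the luma cvtColor computes for full-range 8-bit RGB follows the BT.601 weights. A standalone sketch of the same math (not an OpenCV API; OpenCV's own fixed-point rounding may differ by a least significant bit):

```cpp
#include <cstdint>

// Sketch: Y = 0.299 R + 0.587 G + 0.114 B, in 16-bit fixed point.
// This is the full-range BT.601 luma used by cvtColor(CV_RGB2YCrCb).
uint8_t lumaFromRgb(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint8_t)((19595 * r + 38470 * g + 7471 * b + 32768) >> 16);
}
```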