Converting a 16-bit unsigned int array to a 32-bit float array - c++

I am using DirectX 9 with 64-bit render targets, and I need to read the data on the render target surfaces. Each color component (a, r, g, b) is encoded with 2 bytes (16 bits x 4 = 64 bits per pixel). How do I convert each 16-bit color component to a 32-bit floating-point variable? Here is what I've tried:
BYTE *pData = ( BYTE* )renderTargetData;
for( UINT y = 0; y < Height; ++y )
{
    for( UINT x = 0; x < width; ++x )
    {
        // declare a 4-component vector to hold the 4 floats
        D3DXVECTOR4 vColor;
        // convert the pixel color from 16 to 32 bits
        D3DXFloat16To32Array( ( FLOAT* )&vColor, ( D3DXFLOAT16* )&pData[ y + 8 * x ], 4 );
    }
}
For some reason this is incorrect. In one case after conversion, where the actual renderTargetData for one pixel is (0, 0, 0, 65535), I get this result: (0, 0, 0, -131008.00).

In general, converting an integer v in the range [0..n] to a float in the range [0.0..1.0] is done as:
float f = v/(float)n;
So, in your case, a loop that does:
vColor.x = pData[ 4 * ( y * width + x ) + 0 ] / 65535.0f;
vColor.y = pData[ 4 * ( y * width + x ) + 1 ] / 65535.0f;
// ... etc.
should work, if we change BYTE *pData = ( BYTE* )renderTargetData; into WORD *pData = ( WORD* )renderTargetData; so that pData is indexed in 16-bit words (note the index: each pixel is 4 words, and each row is width pixels).
But there may be some clever way for DX to do this for you that I don't know of.
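For completeness, here is a minimal sketch of the full read-back loop under that scheme. It is untested and makes assumptions: the surface was locked with LockRect, pitch is D3DLOCKED_RECT::Pitch in bytes, and the component order in memory is r, g, b, a (as for D3DFMT_A16B16G16R16; verify against your actual format):
// Sketch only: read back a 64-bit (16 bits per channel) render target.
BYTE* pBits = ( BYTE* )renderTargetData;
for( UINT y = 0; y < Height; ++y )
{
    // Rows can be padded, so step by the locked pitch, not by the width.
    const WORD* row = ( const WORD* )( pBits + y * pitch );
    for( UINT x = 0; x < width; ++x )
    {
        D3DXVECTOR4 vColor;
        vColor.x = row[ 4 * x + 0 ] / 65535.0f; // r
        vColor.y = row[ 4 * x + 1 ] / 65535.0f; // g
        vColor.z = row[ 4 * x + 2 ] / 65535.0f; // b
        vColor.w = row[ 4 * x + 3 ] / 65535.0f; // a
        // ... use vColor ...
    }
}
Incidentally, the -131008.00 you saw is exactly 0xFFFF decoded as a 16-bit float (-2^16 x 1.999...), which confirms the original code was reinterpreting the unsigned integers as half-floats.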

Related

Algorithm for adjustment of image levels

I need to implement in C++ an algorithm for adjusting image levels that works like the Levels function in Photoshop or GIMP. I.e. the inputs are: the color RGB image to be adjusted, the white point, black point, midtone point, and the output from/to values. But I haven't yet found any info on how to perform this adjustment. Perhaps someone can recommend an algorithm description or materials to study.
So far I've come up with the following code myself, but it doesn't give the expected result: compared with what I see in GIMP, for example, the image becomes too light. Below is the current fragment of my code:
const int normalBlackPoint = 0;
const int normalMidtonePoint = 127;
const int normalWhitePoint = 255;
const double normalLowRange = normalMidtonePoint - normalBlackPoint + 1;
const double normalHighRange = normalWhitePoint - normalMidtonePoint;
int blackPoint = 53;
int midtonePoint = 110;
int whitePoint = 168;
int outputFrom = 0;
int outputTo = 255;
double outputRange = outputTo - outputFrom + 1;
double lowRange = midtonePoint - blackPoint + 1;
double highRange = whitePoint - midtonePoint;
double fullRange = whitePoint - blackPoint + 1;
double lowPart = lowRange / fullRange;
double highPart = highRange / fullRange;
int dim(256);
cv::Mat lut(1, &dim, CV_8U);
for(int i = 0; i < 256; ++i)
{
    double p = i > normalMidtonePoint
        ? (static_cast<double>(i - normalMidtonePoint) / normalHighRange) * highRange * highPart + lowPart
        : (static_cast<double>(i + 1) / normalLowRange) * lowRange * lowPart;
    int v = static_cast<int>(outputRange * p) + outputFrom - 1;
    if(v < 0) v = 0;
    else if(v > 255) v = 255;
    lut.at<uchar>(i) = v;
}
....
cv::Mat sourceImage = cv::imread(inputFileName, CV_LOAD_IMAGE_COLOR);
if(!sourceImage.data)
{
    std::cerr << "Error: couldn't load image " << inputFileName << "." << std::endl;
    continue;
}
#if 0
const int forwardConversion = CV_BGR2YUV;
const int reverseConversion = CV_YUV2BGR;
#else
const int forwardConversion = CV_BGR2Lab;
const int reverseConversion = CV_Lab2BGR;
#endif
cv::Mat convertedImage;
cv::cvtColor(sourceImage, convertedImage, forwardConversion);
// Extract the L channel
std::vector<cv::Mat> convertedPlanes(3);
cv::split(convertedImage, convertedPlanes);
cv::LUT(convertedPlanes[0], lut, convertedPlanes[0]);
//dst.copyTo(convertedPlanes[0]);
cv::merge(convertedPlanes, convertedImage);
cv::Mat resImage;
cv::cvtColor(convertedImage, resImage, reverseConversion);
cv::imwrite(outputFileName, resImage);
Pseudocode for Photoshop's Levels Adjustment
First, calculate the gamma correction value to use for the midtone adjustment (if desired). The following roughly simulates Photoshop's technique, which applies gamma 9.99-1.00 for midtone values 0-128, and 1.00-0.01 for 128-255.
Apply gamma correction:
Gamma = 1
MidtoneNormal = Midtones / 255
If Midtones < 128 Then
    MidtoneNormal = MidtoneNormal * 2
    Gamma = 1 + ( 9 * ( 1 - MidtoneNormal ) )
    Gamma = Min( Gamma, 9.99 )
Else If Midtones > 128 Then
    MidtoneNormal = ( MidtoneNormal * 2 ) - 1
    Gamma = 1 - MidtoneNormal
    Gamma = Max( Gamma, 0.01 )
End If
GammaCorrection = 1 / Gamma
Then, for each channel value R, G, B (0-255) for each pixel, do the following in order.
Apply the input levels:
ChannelValue = 255 * ( ( ChannelValue - ShadowValue ) /
( HighlightValue - ShadowValue ) )
Apply the midtones:
If Midtones <> 128 Then
    ChannelValue = 255 * ( Pow( ( ChannelValue / 255 ), GammaCorrection ) )
End If
Apply the output levels:
ChannelValue = ( ChannelValue / 255 ) *
( OutHighlightValue - OutShadowValue ) + OutShadowValue
Where:
All channel and adjustment parameter values are integers, 0-255 inclusive
Shadow/Midtone/HighlightValue are the input adjustment values (defaults 0, 128, 255)
OutShadow/HighlightValue are the output adjustment values (defaults 0, 255)
You should optimize this and make sure values are kept in bounds (0-255 for each channel) at every step.
For a more accurate simulation of Photoshop, you can use a non-linear interpolation curve if Midtones < 128. Photoshop also chops off the darkest and lightest 0.1% of the values by default.
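For reference, here is a rough, untested C++ translation of the pseudocode above into a 256-entry lookup table (the function name and signature are illustrative only):
#include <algorithm>
#include <cmath>

// Build a LUT implementing the Levels pseudocode above.
// shadow/midtones/highlight: input adjustment values (defaults 0, 128, 255)
// outShadow/outHighlight: output adjustment values (defaults 0, 255)
void BuildLevelsLut(unsigned char lut[256],
                    int shadow, int midtones, int highlight,
                    int outShadow, int outHighlight)
{
    // Midtone value -> gamma correction, as described above.
    double gamma = 1.0;
    double mid = midtones / 255.0;
    if (midtones < 128) {
        mid *= 2.0;
        gamma = std::min(1.0 + 9.0 * (1.0 - mid), 9.99);
    } else if (midtones > 128) {
        mid = mid * 2.0 - 1.0;
        gamma = std::max(1.0 - mid, 0.01);
    }
    double gammaCorrection = 1.0 / gamma;

    for (int i = 0; i < 256; ++i) {
        // Apply the input levels.
        double v = 255.0 * (i - shadow) / (double)(highlight - shadow);
        v = std::min(255.0, std::max(0.0, v));
        // Apply the midtones.
        if (midtones != 128)
            v = 255.0 * std::pow(v / 255.0, gammaCorrection);
        // Apply the output levels.
        v = (v / 255.0) * (outHighlight - outShadow) + outShadow;
        lut[i] = (unsigned char)std::min(255.0, std::max(0.0, v + 0.5));
    }
}
Wrapping the resulting table in a 1x256 CV_8U cv::Mat makes it usable with cv::LUT, as in the question's code.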
Ignoring the midtone/Gamma, the Levels function is a simple linear scaling.
All input values are first linearly scaled so that all values less than or equal to the "black point" are set to 0, and all values greater than or equal to the white point are set to 255.
Then all values are linearly scaled from 0-255 to the output range.
As for the mid-point, it depends what you actually mean by that.
In GIMP, there is a Gamma value. The Gamma value is a simple exponent of the input values (after restricting to the black/white points).
For Gamma == 1, the values are not changed.
For Gamma < 1, the values are darkened; for Gamma > 1, they are lightened.
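In code, that description amounts to something like this hypothetical helper (assuming 8-bit channels and GIMP's out = in^(1/Gamma) convention):
#include <algorithm>
#include <cmath>

// Sketch: GIMP-style levels for a single 8-bit channel value.
// black/white are the input points, outLow/outHigh the output range.
unsigned char ApplyLevels(unsigned char v, int black, int white,
                          double gamma, int outLow, int outHigh)
{
    double t = (double)(v - black) / (double)(white - black);
    t = std::min(1.0, std::max(0.0, t)); // clip at the black/white points
    t = std::pow(t, 1.0 / gamma);        // Gamma == 1 leaves values unchanged
    return (unsigned char)(t * (outHigh - outLow) + outLow + 0.5);
}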

Access R,G and B pixel values using a pointer to image data

I have read an image in Mat format.
Mat image = imread("image.png", 1);
I declare a pointer to its data using
unsigned char *ptr_source = image.data;
Now, I want to access the R, G and B values at each pixel in a for loop. I already know how to do it with img.at<Vec3b>(i,j) and similar methods, but now I have to do it using a pointer of unsigned char type.
uchar R_value = ptr_source[ i*?? + ??? ];
uchar G_value = ptr_source[ i*?? + ??? ];
uchar B_value = ptr_source[ i*?? + ??? ];
IMPORTANT: Some people here have suggested using the following:
unsigned char *input = (unsigned char*)(img.data);
for(int j = 0; j < img.rows; j++){
    for(int i = 0; i < img.cols; i++){
        unsigned char b = input[img.step * j + i ];
        unsigned char g = input[img.step * j + i + 1];
        unsigned char r = input[img.step * j + i + 2];
    }
}
which makes sense to me as per the OpenCV docs, but unfortunately it is not working in my case. The other method posted on SO says to use the following:
uchar b = frame.data[frame.channels()*(frame.cols*y + x) + 0];
uchar g = frame.data[frame.channels()*(frame.cols*y + x) + 1];
uchar r = frame.data[frame.channels()*(frame.cols*y + x) + 2];
Basic question: it seems to be working, but I do not understand it logically. Why do we need to multiply (frame.cols*y + x) by frame.channels()?
The cv::Mat::channels() method returns the number of channels in an image.
In an 8UC3 three-channel color image, channels() returns 3, and the pixels are stored in consecutive byte-triplets: BGRBGRBGRBGRBGR....
To access pixel (x,y) given an unsigned char* ptr_source pointer, you need to calculate the pixel offset. The image width is frame.cols. Each pixel is channels() == 3 bytes wide, so the pixel's unsigned char* offset will be ptr_source + frame.channels()*(frame.cols*y + x). That unsigned char would usually be the blue channel, with the following 2 chars the green and red.
For example, given an image 3 pixels wide and 4 rows tall, the pixels in memory would look like this (spaces for clarity only):
r\c    0     1     2
 0    BGR   BGR   BGR
 1    BGR   BGR   BGR
 2    BGR  >BGR<  BGR
 3    BGR   BGR   BGR
So if you count bytes you'll see that the blue channel byte of pixel (1,2) is exactly at byte offset 3*(2*3+1) = 21
It is actually advisable to use img.step instead of the raw computation, since some images have padding at the end of each pixel row, so it is not always true that img.step[0] == img.channels()*img.cols.
In that case you should use ptr_source[img.step[0]*y + img.channels()*x].
Additionally, your question assumes that the pixel depth is 8U, which may not be correct for all images. If it is not, you will also need to multiply everything by the size of a single channel element (the element depth in bytes).
And this is essentially what cv::Mat::at<> does...
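Putting the answer together, a minimal row-pointer loop over an 8UC3 image might look like this (sketch only; variable names are illustrative):
// Sketch: iterate an 8-bit, 3-channel BGR cv::Mat through raw pointers,
// using step so any row padding is handled correctly.
cv::Mat img = cv::imread("image.png", 1);
for (int y = 0; y < img.rows; ++y)
{
    const unsigned char* row = img.data + img.step * y;
    for (int x = 0; x < img.cols; ++x)
    {
        unsigned char b = row[3 * x + 0];
        unsigned char g = row[3 * x + 1];
        unsigned char r = row[3 * x + 2];
        // ... use b, g, r ...
    }
}
cv::Mat::ptr<uchar>(y) performs the same row computation for you.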

Convert 16 bit stereo sound to 16 bit mono sound

I'm trying to convert 16-bit stereo sound from a WAVE file to 16-bit mono sound, but I'm struggling. I tried converting 8-bit stereo sound to mono first, and that works great. Here's the piece of code for that:
if( bitsPerSample == 8 )
{
    dataSize /= 2;
    openALFormat = AL_FORMAT_MONO8;
    for( SizeType i = 0; i < dataSize; i++ )
    {
        pData[ i ] = static_cast<Uint8>(
            ( static_cast<Uint16>( pData[ i * 2 ] ) +
              static_cast<Uint16>( pData[ i * 2 + 1 ] ) ) / 2
        );
    }
}
But now I'm trying to do pretty much the same with 16-bit audio, and I just can't get it to work; all I hear is some kind of weird noise. I've tried setting monoSample to just the left channel (Uint16 monoSample = left;) and the audio data from that channel plays very well, and the right channel likewise. Can anyone see what I'm doing wrong?
Here's the code (pData is an array of bytes):
if( bitsPerSample == 16 )
{
    dataSize /= 2;
    openALFormat = AL_FORMAT_MONO16;
    for( SizeType i = 0; i < dataSize / 2; i++ )
    {
        Uint16 left = static_cast<Uint16>( pData[ i * 4 ] ) |
            ( static_cast<Uint16>( pData[ i * 4 + 1 ] ) << 8 );
        Uint16 right = static_cast<Uint16>( pData[ i * 4 + 2 ] ) |
            ( static_cast<Uint16>( pData[ i * 4 + 3 ] ) << 8 );
        Uint16 monoSample = static_cast<Uint16>(
            ( static_cast<Uint32>( left ) +
              static_cast<Uint32>( right ) ) / 2
        );
        // Set the new mono sample.
        pData[ i * 2 ] = static_cast<Uint8>( monoSample );
        pData[ i * 2 + 1 ] = static_cast<Uint8>( monoSample >> 8 );
    }
}
In a 16-bit stereo WAV file, each sample is 16 bits, and the left and right samples are interleaved. I'm not sure why you're using a bitwise OR; you can just retrieve the data directly without having to shift. The non-portable code below (it assumes sizeof(short) == 2) illustrates this.
unsigned size = header.data_size;
char *data = new char[size];
// Read the contents of the WAV file into data
for (unsigned i = 0; i < size; i += 4)
{
    short left = *(short *)&data[i];
    short right = *(short *)&data[i + 2];
    short monoSample = (int(left) + right) / 2;
}
Also, while 8-bit WAV files are unsigned, 16-bit WAV files are signed. To average them, make sure you store the samples in an appropriately sized signed type. Note that one of the samples is promoted to an int temporarily to prevent overflow.
As has been pointed out in the comments below by Stix, simple averaging may not give the best results. Your mileage may vary.
In addition, Greg Hewgill correctly noted that this assumes that the machine is little-endian.
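Applied to the code in the question, a signed version might look like this (untested sketch; Int16/Int32 are assumed to be signed 16/32-bit typedefs in the asker's framework, and the data is assumed little-endian):
// Sketch: same loop, but the samples are treated as signed 16-bit values.
if( bitsPerSample == 16 )
{
    dataSize /= 2;
    openALFormat = AL_FORMAT_MONO16;
    for( SizeType i = 0; i < dataSize / 2; i++ )
    {
        Int16 left  = static_cast<Int16>( pData[ i * 4 ] |
                                          ( pData[ i * 4 + 1 ] << 8 ) );
        Int16 right = static_cast<Int16>( pData[ i * 4 + 2 ] |
                                          ( pData[ i * 4 + 3 ] << 8 ) );
        // Promote to 32 bits for the sum so the average cannot overflow.
        Int16 monoSample = static_cast<Int16>(
            ( static_cast<Int32>( left ) + static_cast<Int32>( right ) ) / 2 );
        pData[ i * 2 ]     = static_cast<Uint8>( monoSample & 0xFF );
        pData[ i * 2 + 1 ] = static_cast<Uint8>( ( monoSample >> 8 ) & 0xFF );
    }
}
This is the same averaging as the answer's snippet, just written back into the buffer in place.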

How to Convert a 12-bit Image to 8-bit in C/C++?

All right, so I have been very frustrated trying to convert a 12-bit buffer to an 8-bit one.
The image source is 12-bit grayscale (decompressed from JPEG2000), whose value range goes from 0-4095. Now I have to reduce that to 0-255. Common sense tells me that I should simply divide each pixel value by 16, like this. But when I try this, the image comes out too light.
void
TwelveToEightBit(
    unsigned char * charArray,
    unsigned char * shortArray,
    const int num )
{
    short shortValue = 0;   //Will contain the two bytes in the shortArray.
    double doubleValue = 0; //Will contain intermediary calculations.
    for( int i = 0, j = 0; i < num; i++, j += 2 )
    {
        // Bitwise manipulations to fit two chars onto one short.
        shortValue = (shortArray[j]<<8);
        shortValue += (shortArray[j+1]);
        charArray[i] = (( unsigned char)(shortValue/16));
    }
}
Now I can tell that some contrast adjustment needs to be made. Any ideas, anyone?
Many thanks in advance
In actuality, it was merely some simple contrast adjustment that needed to be made. I realized this as soon as I loaded up the result image in Photoshop and did auto-contrast: the result very closely resembled the expected output image.
I found an algorithm that does the contrast adjustment and will post it here for others' convenience:
#include <math.h>

short shortValue = 0;   //Will contain the two bytes in the shortBuffer.
double doubleValue = 0; //Will contain intermediary calculations.

//Contrast adjustment necessary when converting;
//setting 50 as the contrast seems to be the real sweet spot.
double contrast = pow( ((100.0f + 50.0f) / 100.0f), 2);

for ( int i = 0, j = 0; i < num; i++, j += 2 )
{
    //Bitwise manipulations to fit two chars onto one short.
    shortValue = (shortBuffer[j]<<8);
    shortValue += (shortBuffer[j+1]);
    doubleValue = (double)shortValue;
    //Divide by 16 to bring down to 0-255 from 0-4095 (12 to 8 bits)
    doubleValue /= 16;
    //Flatten it out to 0-1
    doubleValue /= 255;
    //Center pixel values at 0, so that the range is -0.5 to 0.5
    doubleValue -= 0.5f;
    //Multiply by the contrast ratio; this spreads the color
    //distribution out from the center....see histogram for further details
    doubleValue *= contrast;
    //Change back to a 0-1 range
    doubleValue += 0.5f;
    //...and back to 0-255
    doubleValue *= 255;
    //If the pixel values clip a little, clamp them.
    if (doubleValue > 255)
        doubleValue = 255;
    else if (doubleValue < 0)
        doubleValue = 0;
    //Finally, put back into the char buffer.
    charBuffer[i] = (( unsigned char)(doubleValue));
}
The main problem, as I understand it, is converting a 12-bit value to an 8-bit one.
Range of a 12-bit value = 0 - 4095 (4096 values)
Range of an 8-bit value = 0 - 255 (256 values)
I would try to convert a 12-bit value x to an 8-bit value y like this:
First, scale down to the range 0-1, and
then, scale back up to the 8-bit range.
Some C-ish code:
uint16_t x = some_value;
uint8_t y = (uint8_t) (((double) x / 4096) * 256); // cast after multiplying, not before
Update
Thanks to Kriss's comment, I realized that I disregarded the speed issue. The above solution, due to the floating-point operations, might be slower than pure integer arithmetic.
Then I started considering another solution. How about constructing y with the 8 most significant bits of x? In other words, by trimming off the 4 least significant bits.
y = x >> 4;
Will this work?
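Over a whole buffer, that shift reduces to a few lines (sketch, assuming the 12-bit samples have already been assembled into 16-bit values):
#include <stdint.h>

// Sketch: convert num 12-bit samples (one per uint16_t) to 8-bit
// by dropping the 4 least significant bits.
void TwelveToEight(const uint16_t* src, uint8_t* dst, int num)
{
    for (int i = 0; i < num; ++i)
        dst[i] = (uint8_t)(src[i] >> 4); // 0-4095 -> 0-255
}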
If you just want to drop the bottom 4 least significant bits, you can do the following:
unsigned int value = SOMEVALUE;  // starting 12-bit value
value = (value & 0xFF0);         // keep only the top 8 of the 12 bits
unsigned char final_value = (unsigned char)(value >> 4); // shift down to 8 bits
Note the "unsigned". You don't want the sign bit mucking with your values.
Like this:
// Image is stored in 'data'
unsigned short* I = (unsigned short*)data;
for(int i = 0; i < imageSize; i++) {
    // 'color' is the 8-bit value
    unsigned char color = (unsigned char)((double)(255*I[i])/(double)(1<<12));
    /*...*/
}
Wild guess: your code assumes a big-endian machine (most significant byte first). A Windows PC is little-endian. So perhaps try
shortValue = (shortArray[j+1]<<8);
shortValue += (shortArray[j]);
If endianness is indeed the problem, then the code you presented would just shave off the 4 most significant bits of every value and expand the rest to the intensity range. Hm, EDIT, 2 secs later: no, that was a thinko. But try it anyway?
Cheers & hth.,
– Alf

Show RGB888 content

I have to show RGB888 content using the ShowRGBContent function.
The function below is a ShowRGBContent function for yv12->rgb565 & UYVY->RGB565:
static void ShowRGBContent(UINT8 * pImageBuf, INT32 width, INT32 height)
{
    LogEntry(L"%d : In %s Function \r\n",++abhineet,__WFUNCTION__);
    UINT16 * temp;
    BYTE rValue, gValue, bValue;
    // this is to refresh the background desktop
    ShowWindow(GetDesktopWindow(),SW_HIDE);
    ShowWindow(GetDesktopWindow(),SW_SHOW);
    for(int i=0; i<height; i++)
    {
        for (int j=0; j< width; j++)
        {
            temp = (UINT16 *) (pImageBuf + i*width*PP_TEST_FRAME_BPP + j*PP_TEST_FRAME_BPP);
            bValue = (BYTE) ((*temp & RGB_COMPONET0_MASK) >> RGB_COMPONET0_OFFSET) << (8 - RGB_COMPONET0_WIDTH);
            gValue = (BYTE) ((*temp & RGB_COMPONET1_MASK) >> RGB_COMPONET1_OFFSET) << (8 - RGB_COMPONET1_WIDTH);
            rValue = (BYTE) ((*temp & RGB_COMPONET2_MASK) >> RGB_COMPONET2_OFFSET) << (8 - RGB_COMPONET2_WIDTH);
            SetPixel(g_hDisplay, SCREEN_OFFSET_X + j, SCREEN_OFFSET_Y + i, RGB(rValue, gValue, bValue));
        }
    }
    Sleep(2000); //sleep here to review the result
    LogEntry(L"%d :Out %s Function \r\n",++abhineet,__WFUNCTION__);
}
I have to modify this for RGB888
Here in the above function:
************************
RGB_COMPONET0_WIDTH = 5
RGB_COMPONET1_WIDTH = 6
RGB_COMPONET2_WIDTH = 5
************************
************************
RGB_COMPONET0_MASK = 0x001F //31 in decimal
RGB_COMPONET1_MASK = 0x07E0 //2016 in decimal
RGB_COMPONET2_MASK = 0xF800 //63488 in decimal
************************
************************
RGB_COMPONET0_OFFSET = 0
RGB_COMPONET1_OFFSET = 5
RGB_COMPONET2_OFFSET = 11
************************
************************
SCREEN_OFFSET_X = 100
SCREEN_OFFSET_Y = 0
************************
Also, PP_TEST_FRAME_BPP = 2 for yv12 -> RGB565 & UYVY -> RGB565:
iOutputBytesPerFrame = iOutputStride * iOutputHeight;
// where iOutputStride = (iOutputWidth * PP_TEST_FRAME_BPP) i.e. (112 * 2)
// & iOutputHeight = 160
// These are in case of RGB565
pOutputFrameVirtAddr = (UINT32 *) AllocPhysMem( iOutputBytesPerFrame,
                                                PAGE_EXECUTE_READWRITE,
                                                0,
                                                0,
                                                (ULONG *) &pOutputFramePhysAddr);
// PAGE_EXECUTE_READWRITE = 0x40, mentioned in winnt.h
// Width = 112 & Height = 160 in all the formats for i/p & o/p
Now my task is RGB888. Please guide me on what I should do for this.
Thanks in advance.
Conversion from yuv444 to rgb888 is pretty simple, since all of the components fall on byte boundaries, so no bit masking should even be needed. According to the Wikipedia article nobugz referred to in the comments section, the conversion can be done in fixed point as follows:
UINT8* pimg = pImageBuf;
for(int i=0; i<height; i++)
{
    for (int j=0; j< width; j++)
    {
        INT16 Y  = pimg[0];
        INT16 Cb = (INT16)pimg[1] - 128;
        INT16 Cr = (INT16)pimg[2] - 128;
        // Note: + binds tighter than >> in C, so each shift must be parenthesized.
        INT16 rValue = Y + Cr + (Cr >> 2) + (Cr >> 3) + (Cr >> 5);
        INT16 gValue = Y - ((Cb >> 2) + (Cb >> 4) + (Cb >> 5))
                         - ((Cr >> 1) + (Cr >> 3) + (Cr >> 4) + (Cr >> 5));
        INT16 bValue = Y + Cb + (Cb >> 1) + (Cb >> 2) + (Cb >> 6);
        // Clamp to 0-255 before building the COLORREF.
        if (rValue < 0) rValue = 0; else if (rValue > 255) rValue = 255;
        if (gValue < 0) gValue = 0; else if (gValue > 255) gValue = 255;
        if (bValue < 0) bValue = 0; else if (bValue > 255) bValue = 255;
        SetPixel(g_hDisplay, SCREEN_OFFSET_X + j, SCREEN_OFFSET_Y + i,
                 RGB(rValue, gValue, bValue));
        pimg += 3;
    }
}
This assumes that your yuv444 is 8 bits per sample (24 bits per pixel). The conversion can also be done in floating point, but this fixed-point version should be quicker, if it works, since your source and destination are both integer formats. I'm also not sure the conversion to INT16 is necessary, but I did it to be safe.
Note that the 444 in yuv444 is not referring to the same thing as the 888 in rgb888. The 444 refers to the chroma subsampling that often occurs when using the YUV colorspace. For instance, in YUV420, Cb and Cr are decimated by two in both directions. yuv444 just means that all three components are sampled the same (no subsampling). The 888 in rgb888 refers to the bits per sample (8 bits for each of the three color components).
I have not actually tested this code, but it should at least give you an idea where to start.
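If, on the other hand, the buffer already contains RGB888 (PP_TEST_FRAME_BPP == 3), no masking is needed at all. A sketch of the modified loop (the R/G/B byte order is an assumption; swap the indices if your pipeline emits BGR):
// Sketch: display an RGB888 buffer, three bytes per pixel, no bit masks.
UINT8* p = pImageBuf;
for (int i = 0; i < height; i++)
{
    for (int j = 0; j < width; j++)
    {
        BYTE rValue = p[0]; // assumed byte order: R, G, B
        BYTE gValue = p[1];
        BYTE bValue = p[2];
        SetPixel(g_hDisplay, SCREEN_OFFSET_X + j, SCREEN_OFFSET_Y + i,
                 RGB(rValue, gValue, bValue));
        p += 3;
    }
}
PP_TEST_FRAME_BPP, the stride, and the allocation size would all change from 2 to 3 bytes per pixel accordingly.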