Creating a UINT32 from 4 floats - c++

I'm working with the FW1FontWrapper code for use with DirectX: https://archive.codeplex.com/?p=fw1
This has removed my need to use an outdated and useless texture-based font engine.
However, the DrawString function within this wrapper has a peculiar requirement for its colour representation:
UINT32 Color : In the format 0xAaBbGgRr
The data I am given for this task is a constant Alpha value: 1.0f.
And 3 variable float values for R, G and B ranging from 0.0f to 1.0f.
Given the peculiar arrangement of colours within the UINT32, I'm attempting to write a function that will create this UINT32 from the 3 float values I am given.
My Attempt
UINT32 TextClassA::getColour(SentenceType* sentence)
{
//Convert each float value to its percentage of 255
int colorb = 255 * sentence->blue;
int colorg = 255 * sentence->green;
int colorr = 255 * sentence->red;
//truncate each int to an 8-bit value
UINT8 ucolorb = colorb;
UINT8 ucolorg = colorg;
UINT8 ucolorr = colorr;
//Pack each byte into a UINT32
UINT32 color = 0xFF + (ucolorb << 6) + (ucolorg << 4) + (ucolorr << 2);
return color;
}
SentenceType
red, green and blue are simply floats for each RGB value, ranging from 0.0f to 1.0f.
My Idea
...was roughly that I could:
convert each float value to its percentage of 255 (not too worried about perfect accuracy),
Convert those integer values to UINT8s
Then push those back onto a UINT32

The implementation can be made clearer by avoiding all the temporary variables and using something like the code below. That said, any reasonable optimizing compiler should generate the same code in both cases.
UINT32 TextClassA::getColour(SentenceType* sentence)
{
//Convert color components to value between 0 and 255.
UINT32 r = 255 * sentence->red;
UINT32 b = 255 * sentence->blue;
UINT32 g = 255 * sentence->green;
//Combine the color components in a single value of the form 0xAaBbGgRr
return 0xFF000000 | r | (b << 16) | (g << 8);
}
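One caveat: 255 * value truncates toward zero when converted to an integer, so a component like 0.999f maps to 254 rather than 255. If that matters, here is a minimal sketch that rounds and clamps first (packComponent is a hypothetical helper, assuming the same SentenceType fields as above):
//Sketch only: round to nearest and clamp to [0, 255] before packing.
static UINT32 packComponent(float c)
{
int v = static_cast<int>(c * 255.0f + 0.5f); //round to nearest
if (v < 0) v = 0;
if (v > 255) v = 255;
return static_cast<UINT32>(v);
}
UINT32 TextClassA::getColour(SentenceType* sentence)
{
//0xAaBbGgRr: alpha in the top byte, red in the bottom byte.
return 0xFF000000
| (packComponent(sentence->blue) << 16)
| (packComponent(sentence->green) << 8)
| packComponent(sentence->red);
}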

I figured it out!!!
UINT32 TextClassA::getColour(SentenceType* sentence)
{
//Convert each float value to its percentage of 255
int colorb = 255 * sentence->blue;
int colorg = 255 * sentence->green;
int colorr = 255 * sentence->red;
//truncate each int to an 8-bit value
UINT8 ucolorb = 0x00 + colorb;
UINT8 ucolorg = 0x00 + colorg;
UINT8 ucolorr = 0x00 + colorr;
//Convert each UINT8 to a UINT32
UINT32 u32colorb = ucolorb;
UINT32 u32colorg = ucolorg;
UINT32 u32colorr = ucolorr;
//Create final UINT32s and push the converted UINT8s back onto each.
UINT32 u32finalcolorb = 0x00000000 | (u32colorb << 16);
UINT32 u32finalcolorg = 0x00000000 | (u32colorg << 8);
UINT32 u32finalcolorr = 0x00000000 | (u32colorr);
//0xAaBbGgRr
//OR each piece into the final UINT32
UINT32 color = 0xFF000000 | u32finalcolorb | u32finalcolorg |
u32finalcolorr;
return color;
}
My Mistake
...I believe, was in the shift distances: I shifted by 2, 4 and 6 bits instead of 8 and 16, and used 0xFF instead of 0xFF000000 for the alpha byte. (Strictly speaking the UINT8-to-UINT32 conversions are not required, since C++ promotes a UINT8 to int before shifting, so no overflow occurs, but they make the intent explicit.)
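For what it's worth, a small standalone sketch showing why the intermediate UINT32s are optional: integral promotion widens a UINT8 to int before the shift, so no bits are lost.
#include <cstdint>
#include <cstdio>
int main()
{
std::uint8_t b = 0xAB;
std::uint32_t shifted = b << 16; //b is promoted to int before shifting
std::printf("%08X\n", shifted); //prints 00AB0000
return 0;
}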

Related

Convert Pixels Buffer type from 1555 to 5551 (C++, OpenGL ES)

I'm having a problem converting an OpenGL video plugin to support GLES 3.0.
So far everything went well, except glTexSubImage2D: the original code uses GL_UNSIGNED_SHORT_1_5_5_5_REV as the pixel type, which is not supported in GLES 3.0.
The type that worked is GL_UNSIGNED_SHORT_5_5_5_1, but the colors and pixels are broken, so I thought converting the pixel buffer would fix it.
But due to my limited understanding of GL and C++, I didn't succeed in doing that.
Pixel processing:
The pixels are converted internally to 16-bit ABGR, as described in the shader comments:
// Take a normalized color and convert it into a 16bit 1555 ABGR
// integer in the format used internally by the Playstation GPU.
uint rebuild_psx_color(vec4 color) {
uint a = uint(floor(color.a + 0.5));
uint r = uint(floor(color.r * 31. + 0.5));
uint g = uint(floor(color.g * 31. + 0.5));
uint b = uint(floor(color.b * 31. + 0.5));
return (a << 15) | (b << 10) | (g << 5) | r;
}
it will be received by this method after processing by vGPU:
static void Texture_set_sub_image_window(struct Texture *tex, uint16_t top_left[2], uint16_t resolution[2], size_t row_len, uint16_t* data)
{
uint16_t x = top_left[0];
uint16_t y = top_left[1];
/* TODO - Am I indexing data out of bounds? */
size_t index = ((size_t) y) * row_len + ((size_t) x);
uint16_t* sub_data = &( data[index] );
glPixelStorei(GL_UNPACK_ROW_LENGTH, (GLint) row_len);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glBindTexture(GL_TEXTURE_2D, tex->id);
glTexSubImage2D(GL_TEXTURE_2D, 0,
(GLint) top_left[0], (GLint) top_left[1],
(GLsizei) resolution[0], (GLsizei) resolution[1],
GL_RGBA, GL_UNSIGNED_SHORT_1_5_5_5_REV /* Not supported in GLES */,
(void*)sub_data);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
}
As for row_len, it gets its value from #define VRAM_WIDTH_PIXELS 1024.
What I tried to do:
First, I replaced the type with another one:
glTexSubImage2D(GL_TEXTURE_2D, 0,
(GLint) top_left[0], (GLint) top_left[1],
(GLsizei) resolution[0], (GLsizei) resolution[1],
GL_RGBA, GL_UNSIGNED_SHORT_5_5_5_1 /* <- Here new type */,
(void*)sub_data);
Second, I converted sub_data using this method:
uint16_t* ABGRConversion(const uint16_t* pixels, int row_len, int x, int y, int width, int height) {
/* allocate row_len * height 16-bit pixels: the offset below indexes by row_len, not width */
uint16_t *frameBuffer = (uint16_t*)malloc(sizeof(uint16_t) * row_len * height);
int i, j;
for (j=0; j < height; j++)
{
for (i=0; i < width; i++)
{
int offset = j * row_len + i;
uint16_t pixel = pixels[offset];
frameBuffer[offset] = Convert1555To5551(pixel); //<- stuck here
}
}
return frameBuffer;
}
I have no idea what Convert1555To5551 should look like.
Note: sorry if some of the descriptions are wrong, I don't really have a full understanding of the whole process.
Performance is not a major problem; I just need to know how to deal with the current pixel buffer.
Side note: I had to replace glFramebufferTexture with glFramebufferTexture2D, so I hope that's not involved in the issue.
Thanks.
This should be what you're looking for.
uint16_t Convert1555To5551(uint16_t pixel)
{
// extract rgba from 1555 (1 bit alpha, 5 bits blue, 5 bits green, 5 bits red)
uint16_t a = pixel >> 15;
uint16_t b = (pixel >> 10) & 0x1f; // mask lowest five bits
uint16_t g = (pixel >> 5) & 0x1f;
uint16_t r = pixel & 0x1f;
// compress rgba into 5551 (5 bits red, 5 bits green, 5 bits blue, 1 bit alpha)
return (r << 11) | (g << 6) | (b << 1) | a;
}
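If it helps, a hedged usage sketch that converts the whole sub-buffer in place before the glTexSubImage2D call (ConvertBuffer1555To5551 is a hypothetical wrapper; count would be row_len * height, as in the question's loop):
static void ConvertBuffer1555To5551(uint16_t* pixels, size_t count)
{
for (size_t i = 0; i < count; ++i)
pixels[i] = Convert1555To5551(pixels[i]);
}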

Pixels Overlay With transparency

I have 2 pixels in B8G8R8A8 (32-bit) format.
Both pixels (top and bottom) have transparency (alpha channel < 255).
What is the way (formula) to overlay the top pixel on the bottom one (without using 3rd parties)?
I tried to do something like this:
struct FColor
{
public:
// Variables.
#if PLATFORM_LITTLE_ENDIAN
#ifdef _MSC_VER
// Win32 x86
union { struct{ uint8 B,G,R,A; }; uint32 AlignmentDummy; };
#else
// Linux x86, etc
uint8 B GCC_ALIGN(4);
uint8 G,R,A;
#endif
#else // PLATFORM_LITTLE_ENDIAN
union { struct{ uint8 A,R,G,B; }; uint32 AlignmentDummy; };
#endif
//...
};
FORCEINLINE FColor AlphaBlendColors(FColor pixel1, FColor pixel2)
{
FColor blendedColor;
//Calculate new Alpha:
uint8 newAlpha = 0;
newAlpha = pixel1.A + pixel2.A * (255 - pixel1.A);
//get FColor as uint32
uint32 colora = pixel1.DWColor();
uint32 colorb = pixel2.DWColor();
uint32 rb1 = ((0x100 - newAlpha) * (colora & 0xFF00FF)) >> 8;
uint32 rb2 = (newAlpha * (colorb & 0xFF00FF)) >> 8;
uint32 g1 = ((0x100 - newAlpha) * (colora & 0x00FF00)) >> 8;
uint32 g2 = (newAlpha * (colorb & 0x00FF00)) >> 8;
blendedColor = FColor(((rb1 | rb2) & 0xFF00FF) + ((g1 | g2) & 0x00FF00));
blendedColor.A = newAlpha;
return blendedColor;
}
But the result is far from what I want :-)
I looked at some alpha blending formulas (I never understood how to calculate the new alpha of the overlay), so perhaps I was going in the wrong direction?
Edit:
Changing newAlpha to newAlpha = FMath::Min(pixel1.A + pixel2.A, 255); actually gives a much better result, but is it right to calculate it like this? Am I missing something here?
Working example (based on the accepted answer):
FORCEINLINE FColor AlphaBlendColors(FColor BottomPixel, FColor TopPixel)
{
FColor blendedColor;
//Calculate new Alpha:
float normA1 = 0.003921568627451f * (TopPixel.A); // 1/255: normalize to [0,1]
float normA2 = 0.003921568627451f * (BottomPixel.A);
uint8 newAlpha = (uint8)((normA1 + normA2 * (1.0f - normA1)) * 255.0f);
if (newAlpha == 0)
{
return FColor(0,0,0,0);
}
//Going By Straight Alpha formula
float dstCoef = normA2 * (1.0f - normA1);
float multiplier = 255.0f / float(newAlpha);
blendedColor.R = (uint8)((TopPixel.R * normA1 + BottomPixel.R * dstCoef) * multiplier);
blendedColor.G = (uint8)((TopPixel.G * normA1 + BottomPixel.G * dstCoef) * multiplier);
blendedColor.B = (uint8)((TopPixel.B * normA1 + BottomPixel.B * dstCoef) * multiplier);
blendedColor.A = newAlpha;
return blendedColor;
}
Start by assuming that there is a third pixel below that happens to be opaque.
For the further notations, I will assume that alpha values are in [0,1].
Given: three pixels with the first one being on top, colors c_1, c_2, c_3, alpha values a_1, a_2, a_3 = 1
Then the resulting alpha value is obviously 1 and the color is
(a_1)*c_1 + (1-a_1)*(a_2)*c_2 + (1-a_1)*(1-a_2)*c_3
Now, we want to find some values c_k, a_k so that the formula above equals
(a_k)*c_k + (1-a_k)*c_3
We can solve this in two steps:
(1-a_k) = (1-a_1)*(1-a_2)
->
a_k = 1-(1-a_1)*(1-a_2)
and
(a_k)*c_k = (a_1)*c_1 + (1-a_1)*(a_2)*c_2
->
c_k = [(a_1)*c_1 + (1-a_1)*(a_2)*c_2] / a_k
Use those formulas (with a different range for your alpha values) and you get your desired color.
(Don't forget to catch a_k = 0)
edit: explanation of the third pixel:
Whenever you use your two pixels to display something, they will eventually be put over some existing color that is opaque. For example, this might be the background color, but it could also be a color that is itself the result of applying many more transparent pixels to some background color.
What I do to combine your two colors is find a single color that behaves just like the two of them together: putting it on top of some opaque color must give the same result as putting the original two colors on top of that color. This demand is exactly what produces the formulas above.
In other words, the formula is nothing more than the result of applying the two colors in succession to the third one.
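A minimal C++ sketch of those formulas, assuming normalized float channels in [0,1] and a hypothetical PixelF type (not the FColor above):
struct PixelF { float r, g, b, a; };
//Combine 'top' over 'bottom' into one pixel that composites identically.
PixelF CombineOver(PixelF top, PixelF bottom)
{
PixelF out{};
out.a = 1.0f - (1.0f - top.a) * (1.0f - bottom.a); //a_k
if (out.a == 0.0f)
return out; //fully transparent: avoid dividing by zero
float topW = top.a; //a_1
float botW = (1.0f - top.a) * bottom.a; //(1-a_1)*(a_2)
out.r = (topW * top.r + botW * bottom.r) / out.a;
out.g = (topW * top.g + botW * bottom.g) / out.a;
out.b = (topW * top.b + botW * bottom.b) / out.a;
return out;
}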

Change DWORD color alpha channel value

I have a starting color: 0xffff00ff, which is a:255, r:255, g:0, b:255.
The goal is to change the alpha channel of the color to make it less opaque, based on a percentage; e.g. 50% opacity for that color is roughly 0x80ff00ff.
How I've tried to reach the solution:
DWORD cx = 0xffff00ff;
DWORD cn = .5;
DWORD nc = cx*cn;
DWORD cx = 0xffff00ff;
float cn = .5;
DWORD alphaMask=0xff000000;
DWORD nc = (cx|alphaMask)&((DWORD)(alphaMask*cn)|(~alphaMask));
This should do the trick. All I'm doing here is setting the top 8 bits of the DWORD to 1's with the OR (symbolized by '|'), and then ANDing those bits with the value you want them to be, which is the alpha mask times cn. Of course, I cast the result of the multiplication to make it a DWORD again.
This is tested code (on Linux). However, you might find a simpler answer. Note: this is RGBA, not ARGB as referenced in your question.
double transparency = 0.500;
unsigned char *current_image_data_iterator = reinterpret_cast<unsigned char*>( const_cast<char *>( this->data.getCString() ) );
unsigned char *new_image_data_iterator = reinterpret_cast<unsigned char*>( const_cast<char *>( new_image_data->data.getCString() ) );
size_t x;
//cout << "transparency: " << transparency << endl;
for( x = 0; x < data_length; x += 4 ){
//rgb data is the same
*(new_image_data_iterator + x) = *(current_image_data_iterator + x);
*(new_image_data_iterator + x + 1) = *(current_image_data_iterator + x + 1);
*(new_image_data_iterator + x + 2) = *(current_image_data_iterator + x + 2);
//multiply the current opacity by the applied transparency
*(new_image_data_iterator + x + 3) = uint8_t( double(*(current_image_data_iterator + x + 3)) * transparency );
//cout << "Current Alpha: " << dec << static_cast<int>( *(current_image_data_iterator + x + 3) ) << endl;
//cout << "New Alpha: " << double(*(current_image_data_iterator + x + 3)) * transparency << endl;
//cout << "----" << endl;
}
#include <cstdint>
#include <iostream>
typedef std::uint32_t DWORD; //assuming the usual Windows typedef
union ARGB
{
std::uint32_t Colour;
struct { std::uint8_t B, G, R, A; } Channels; //little-endian: A is the high byte of 0xAARRGGBB
};
int main()
{
DWORD cx = 0xffff00ff;
reinterpret_cast<ARGB*>(&cx)->Channels.A /= 2;
std::cout << std::hex << cx; //prints 7fff00ff
}
The solution I chose to go with:
DWORD changeOpacity(DWORD color, float opacity) {
int alpha = (color >> 24) & 0xff;
int r = (color >> 16) & 0xff;
int g = (color >> 8) & 0xff;
int b = color & 0xff;
int newAlpha = ceil(alpha * opacity);
UINT newColor = r << 16;
newColor += g << 8;
newColor += b;
newColor += (newAlpha << 24);
return (DWORD)newColor;
}
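A quick sanity check with the question's own example values (hedged, but the arithmetic is straightforward):
DWORD original = 0xffff00ff;
DWORD half = changeOpacity(original, 0.5f);
//half == 0x80ff00ff, since ceil(255 * 0.5) = 128 = 0x80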
I understand your question as: I wish to change a given rgba color component by a certain factor while keeping the same overall transparency.
For a color with full alpha (1.0 or 255), this is trivial: simply multiply the component without touching the others:
//typedef unsigned char uint8
enum COMPONENT {
RED,
GREEN,
BLUE,
ALPHA
};
struct rgba {
uint8 components[4];
// uint8 alpha, blue, green, red; // little endian
uint8 &operator[](int index){
return components[index];
}
};
rgba color;
if (color[ALPHA] == 255)
color[RED] *= factor;
else
ComponentFactor(color, RED, factor);
There's probably not a single answer to that question in the general case. Consider that colors may alternatively be encoded in HSL or HSV. You might want to keep some of these parameters fixed and allow others to change.
My approach to this problem would be to first find the hue distance between the source and target colors at full alpha, then convert the real source color to HSV, apply the change in hue, and convert back to RGBA. Obviously, that second step is not necessary if the alpha is actually 1.0.
In pseudo code:
rgba ComponentFactor(rgba color, int component, double factor){
rgba fsrc = color, ftgt;
fsrc.alpha = 1.0; // set full alpha
ftgt = fsrc;
ftgt[component] *= factor; // apply factor
hsv hsrc = fsrc, htgt = ftgt; // convert to hsv color space
int distance = htgt.hue - hsrc.hue; // find the hue difference
hsv tmp = color; // convert actual color to hsv
tmp.hue += distance; // apply change in hue
rgba res = tmp; // convert back to RGBA space
return res;
}
Note how the above relies on the types rgba and hsv having implicit conversion constructors. Algorithms for conversion may easily be found with a web search. It should also be easy to derive struct definitions for hsv from the rgba one, or to include individual component access as field members (rather than using the [] operator).
For instance:
//typedef DWORD uint32;
struct rgba {
union {
uint8 components[4];
struct {
uint8 alpha, blue, green, red; // little endian platform
};
uint32 raw;
};
uint8 &operator[](int index){
return components[3 - index]; // map RED..ALPHA onto the reversed little-endian layout
}
rgba (uint32 raw_): raw(raw_) {}
rgba (uint8 r, uint8 g, uint8 b, uint8 a) {
red = r; green = g; blue = b; alpha = a;
}
};
Perhaps you will have to find a hue factor rather than a distance, or tweak other HSV components to achieve the desired result.
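For what it's worth, a hedged sketch of the hue-extraction half of such a conversion (a standard RGB-to-hue computation; HueFromRgb is a hypothetical helper name):
#include <algorithm>
#include <cmath>
#include <cstdint>
//Hue in degrees [0, 360) from 8-bit RGB components.
float HueFromRgb(std::uint8_t r, std::uint8_t g, std::uint8_t b)
{
float fr = r / 255.0f, fg = g / 255.0f, fb = b / 255.0f;
float mx = std::max({fr, fg, fb});
float mn = std::min({fr, fg, fb});
float delta = mx - mn;
if (delta == 0.0f)
return 0.0f; //achromatic: hue is undefined, pick 0
float h;
if (mx == fr)
h = std::fmod((fg - fb) / delta, 6.0f);
else if (mx == fg)
h = (fb - fr) / delta + 2.0f;
else
h = (fr - fg) / delta + 4.0f;
h *= 60.0f;
return h < 0.0f ? h + 360.0f : h;
}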

ARGB and kCGImageAlphaPremultipliedFirst format. Why are the pixel colors stored as (255-data)?

I create an image using
UIGraphicsBeginImageContextWithOptions(image.size, NO, 0);
[image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
// more code - not relevant - removed for debugging
image = UIGraphicsGetImageFromCurrentImageContext(); // the image is now ARGB
UIGraphicsEndImageContext();
Then I try to find the color of a pixel (using the code by Minas Petterson from here: Get Pixel color of UIImage).
But since the image is now in ARGB format, I had to modify the code to this:
alpha = data[pixelInfo];
red = data[(pixelInfo + 1)];
green = data[pixelInfo + 2];
blue = data[pixelInfo + 3];
However, this did not work.
The problem is that (for example) a red pixel, which in RGBA would be represented as 1001 (actually 255 0 0 255, but for simplicity I use 0-to-1 values), is represented in the image as 0011 and not (as I thought) 1100.
Any ideas why? Am I doing something wrong?
PS: the code that appears to work looks like it has to be this:
alpha = 255-data[pixelInfo];
red = 255-data[(pixelInfo + 1)];
green = 255-data[pixelInfo + 2];
blue = 255-data[pixelInfo + 3];
There are some problems arising here:
"In some contexts, primarily OpenGL, the term "RGBA" actually means the colors are stored in memory such that R is at the lowest address, G after it, B after that, and A last. OpenGL describes the above format as "BGRA" on a little-endian machine and "ARGB" on a big-endian machine." (wiki)
Graphics hardware is backed by OpenGL on OS X/iOS, so I assume we are dealing with little-endian data (Intel/ARM processors). So when the format is kCGImageAlphaPremultipliedFirst (ARGB) on a little-endian machine, it's BGRA in memory. But don't worry, there is an easy way to fix that.
Assuming that it's ARGB, kCGImageAlphaPremultipliedFirst, 8 bits per component, 4 components per pixel (that's what UIGraphicsGetImageFromCurrentImageContext() returns), don't-care endianness:
- (void)parsePixelValuesFromPixel:(const uint8_t *)pixel
intoBuffer:(out uint8_t[4])buffer {
static NSInteger const kRedIndex = 0;
static NSInteger const kGreenIndex = 1;
static NSInteger const kBlueIndex = 2;
static NSInteger const kAlphaIndex = 3;
int32_t *wholePixel = (int32_t *)pixel;
int32_t value = OSSwapHostToBigConstInt32(*wholePixel);
// Now we have the value in big-endian format, regardless of our machine endianness (ARGB now).
buffer[kAlphaIndex] = value & 0xFF;
buffer[kRedIndex] = (value >> 8) & 0xFF;
buffer[kGreenIndex] = (value >> 16) & 0xFF;
buffer[kBlueIndex] = (value >> 24) & 0xFF;
}

How to change RGB values in SDL surface?

In my application, once I load an image into an SDL_Surface object, I need to go through each RGB value in the image and replace it with another RGB value from a lookup function.
(rNew, gNew, bNew) = lookup(rCur, gCur, bCur);
It seems surface->pixels gets me the pixels. I would appreciate it if someone could explain how to obtain the R, G, and B values from a pixel and replace them with the new RGB values.
Use built-in functions SDL_GetRGB and SDL_MapRGB
#include <stdint.h>
/*
...
*/
short int x = 200 ;
short int y = 350 ;
uint32_t pixel = *( ( uint32_t * )screen->pixels + y * screen->w + x ) ;
uint8_t r ;
uint8_t g ;
uint8_t b ;
SDL_GetRGB( pixel, screen->format , &r, &g, &b );
screen->format deals with the format so you don't have to.
You can also use SDL_Color instead of writing r,g,b variables separately.
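Putting the two together, a hedged sketch of the full read-modify-write round trip (assuming a locked 32-bit surface with no row padding, i.e. pitch == w * 4, and the lookup signature used further below):
Uint32* pixels = (Uint32*)screen->pixels;
Uint32 pixel = pixels[y * screen->w + x];
Uint8 r, g, b;
SDL_GetRGB(pixel, screen->format, &r, &g, &b);
Uint8 rNew, gNew, bNew;
lookup(r, g, b, &rNew, &gNew, &bNew); //apply the replacement lookup
pixels[y * screen->w + x] = SDL_MapRGB(screen->format, rNew, gNew, bNew);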
Depending on the format of the surface, the pixels are arranged as an array in the buffer.
For typical 32 bit surfaces, it is R G B A R G B A
where each component is 8 bit, and every 4 are a pixel
First of all, you need to lock the surface to safely access its data for modification. To manipulate the pixel array you need to know the number of bits per pixel and the ordering of the channels (A, R, G, B). As Photon said, if it is 32 bits per pixel the array can be RGBARGBA...; if it is 24, it can be RGBRGB... (it can also be BGR,BGR..., blue first).
//i assume the signature of lookup to be
int lookup(Uint8 r, Uint8 g, Uint8 b, Uint8 *rnew, Uint8* gnew, Uint8* bnew);
SDL_LockSurface( surface );
/* Surface is locked */
/* Direct pixel access on surface here */
Uint8 byteincrement = surface->format->BytesPerPixel;
int position;
/* note: this assumes surface->pitch == surface->w * byteincrement (no row padding) */
for(position = 0; position < surface->w * surface->h * byteincrement; position += byteincrement )
{
Uint8* curpixeldata = (Uint8*)surface->pixels + position;
/* assuming RGB channel order; you need to know the position of the channels, otherwise the code is overly complex. For instance, it can be BGR. */
Uint8* rdata = curpixeldata + 0;
Uint8* gdata = curpixeldata + 1;
Uint8* bdata = curpixeldata + 2;
/* those pointers point to r, g, b; use them as you want */
lookup(*rdata, *gdata, *bdata, rdata, gdata, bdata);
}
SDL_UnlockSurface( surface );