I'm trying to create an algorithm to overlay an image with transparency on top of a fully opaque image.
In the following sample I have a fully opaque back image and a front image that is a blue frame with diffuse edges.
The problem I'm having is that my implementation overlays the semi-transparent areas incorrectly, producing darkish pixels.
Here is my implementation:
#define OPAQUE 0xFF
#define TRANSPARENT 0
#define ALPHA(argb) (uint8_t)(argb >> 24)
#define RED(argb) (uint8_t)(argb >> 16)
#define GREEN(argb) (uint8_t)(argb >> 8)
#define BLUE(argb) (uint8_t)(argb)
#define ARGB(a, r, g, b) (a << 24) | ((r & 0xff) << 16) | ((g & 0xff) << 8) | (b & 0xff)
#define BLEND(a, b, alpha) ((a * alpha) + (b * (255 - alpha))) / 255
void ImageUtil::overlay(const uint32_t* front, uint32_t* back, const unsigned int width, const unsigned int height)
{
const size_t totalPixels = width * height;
for (unsigned long index = 0; index < totalPixels; index++)
{
const uint32_t alpha = ALPHA(*front);
const uint32_t R = BLEND(RED(*front), RED(*back), alpha);
const uint32_t G = BLEND(GREEN(*front), GREEN(*back), alpha);
const uint32_t B = BLEND(BLUE(*front), BLUE(*back), alpha);
*back++ = ARGB(OPAQUE, R, G, B);
front++;
}
}
UPDATE:
Following the tips in the comments from gman and interjay, I've investigated further and, yes, the data is being loaded with pre-multiplied alpha.
That premultiplication produces the darkening when blending. The solution was to un-multiply the front pixels, and with that I finally got the expected result.
Unmultiply formula:
((0xFF * color) / alpha)
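For a concrete illustration (made-up numbers: a 50%-alpha pure-blue front pixel over a white background), blending the stored premultiplied channel directly gives a darker value than intended, while un-multiplying first restores the expected one:
Stored (premultiplied) blue: 255 * 128 / 255 = 128, with alpha = 128
Blending the stored value as-is: (128 * 128 + 255 * (255 - 128)) / 255 = 191 (too dark)
Un-multiplied blue: (0xFF * 128) / 128 = 255
Blending the un-multiplied value: (255 * 128 + 255 * (255 - 128)) / 255 = 255 (the expected result)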
Final code:
#define OPAQUE 0xFF
#define TRANSPARENT 0
#define ALPHA(rgb) (uint8_t)(rgb >> 24)
#define RED(rgb) (uint8_t)(rgb >> 16)
#define GREEN(rgb) (uint8_t)(rgb >> 8)
#define BLUE(rgb) (uint8_t)(rgb)
#define UNMULTIPLY(color, alpha) ((0xFF * (color)) / (alpha))
#define BLEND(back, front, alpha) ((((front) * (alpha)) + ((back) * (255 - (alpha)))) / 255)
#define ARGB(a, r, g, b) ((((a) & 0xFF) << 24) | (((r) & 0xFF) << 16) | (((g) & 0xFF) << 8) | ((b) & 0xFF))
void ImageUtil::overlay(const uint32_t* front, uint32_t* back, const unsigned int width, const unsigned int height)
{
const size_t totalPixels = width * height;
for (unsigned long index = 0; index < totalPixels; index++)
{
const uint32_t frontAlpha = ALPHA(*front);
if (frontAlpha == TRANSPARENT)
{
back++;
front++;
continue;
}
if (frontAlpha == OPAQUE)
{
*back++ = *front++;
continue;
}
const uint8_t backR = RED(*back);
const uint8_t backG = GREEN(*back);
const uint8_t backB = BLUE(*back);
const uint8_t frontR = UNMULTIPLY(RED(*front), frontAlpha);
const uint8_t frontG = UNMULTIPLY(GREEN(*front), frontAlpha);
const uint8_t frontB = UNMULTIPLY(BLUE(*front), frontAlpha);
const uint32_t R = BLEND(backR, frontR, frontAlpha);
const uint32_t G = BLEND(backG, frontG, frontAlpha);
const uint32_t B = BLEND(backB, frontB, frontAlpha);
*back++ = ARGB(OPAQUE, R , G, B);
front++;
}
}
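For completeness, a minimal usage sketch (the dimensions and the loading step are hypothetical; the assumption is simply that both buffers hold width * height ARGB pixels, with the front buffer premultiplied as described above):
#include <cstdint>
#include <vector>

const unsigned int width = 640, height = 480;        // hypothetical dimensions
std::vector<uint32_t> frontPixels(width * height);   // front layer, premultiplied ARGB
std::vector<uint32_t> backPixels(width * height);    // fully opaque background, ARGB
// ... fill both buffers from your image loader of choice ...
ImageUtil::overlay(frontPixels.data(), backPixels.data(), width, height);
// backPixels now holds the composited, fully opaque result.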
I would like to convert a hardware pixel buffer that is in the X8B8G8R8 format into an unsigned int 24-bit memory buffer.
Here is my attempt:
// pixels is uint32_t;
src.pixels = new pixel_t[src.width*src.height];
readbuffer->lock( Ogre::HardwareBuffer::HBL_DISCARD );
const Ogre::PixelBox &pb = readbuffer->getCurrentLock();
/// Update the contents of pb here
/// Image data starts at pb.data and has format pb.format
uint32 *data = static_cast<uint32*>(pb.data);
size_t height = pb.getHeight();
size_t width = pb.getWidth();
size_t pitch = pb.rowPitch; // Skip between rows of image
for ( size_t y = 0; y<height; ++y )
{
for ( size_t x = 0; x<width; ++x )
{
src.pixels[pitch*y + x] = data[pitch*y + x];
}
}
This should do it:
uint32_t BGRtoRGB(uint32_t col) {
return (col & 0x0000ff00) | ((col & 0x000000ff) << 16) | ((col & 0x00ff0000) >> 16);
}
With
src.pixels[pitch*y + x] = BGRtoRGB(data[pitch*y + x]);
Note: BGRtoRGB here converts both ways if you want it to, but remember that it throws away whatever you have in the X8 bits (alpha?) while keeping the color values themselves.
To convert the other way around, with an alpha of 0xff:
uint32_t RGBtoXBGR(uint32_t col) {
return 0xff000000 | (col & 0x0000ff00) | ((col & 0x000000ff) << 16) | ((col & 0x00ff0000) >> 16);
}
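A quick sanity check with an arbitrary test value (the X byte is 0xFF here purely for illustration):
uint32_t xbgr = 0xFF112233;      // X = 0xFF, B = 0x11, G = 0x22, R = 0x33
uint32_t rgb  = BGRtoRGB(xbgr);  // 0x00332211: R = 0x33, G = 0x22, B = 0x11, X dropped
uint32_t back = RGBtoXBGR(rgb);  // 0xFF112233 again, with the X byte forced to 0xFF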
I am developing an application using C++. I am facing a problem when trying to capture the screen, edit some of its pixels, and save the image.
My code works absolutely fine when I select the platform as Win32, but as soon as I change the platform from Win32 to x64 the code fails: it starts giving an access violation when trying to access the pixels.
I checked that under both platforms the size of int is 4 bytes and imageData.Stride comes out as -5528, and when I do (row*stride/4 + col) I get the same value on both platforms. imageData.getPixelFormat() returns 139273, which is PixelFormat32bppRGB, under both platforms.
I am posting the code below. Please help me out; I have done a lot of googling, but nothing helps.
The access violation error comes at this line, when the row value is > 0:
UINT curColor = pixels[row * iStride / 4 + col];
void BitmapToJpg(HBITMAP hbmpImage, int width, int height)
{
p_bmp = Bitmap::FromHBITMAP(hbmpImage, NULL);
CLSID pngClsid;
int result = GetEncoderClsid(L"image/jpeg", &pngClsid);
if (result != -1)
std::cout << "Encoder succeeded" << std::endl;
else
std::cout << "Encoder failed" << std::endl;
//***************************Testing Lockbits********************************//
// successful result, and the position is also correct
BitmapData imageData;
Rect rect(0, 0, width, height);
p_bmp->LockBits(
&rect,
ImageLockModeWrite,
p_bmp->GetPixelFormat(),
//PixelFormat24bppRGB,
&imageData);
cout << p_bmp->GetPixelFormat();
UINT* pixels;
pixels = (UINT*)imageData.Scan0;
int iStride = imageData.Stride;
int x = sizeof(int);
byte red = 0;
byte green = 0;
byte blue = 255;
byte alpha = 0;
for (int row = 0; row < height; ++row)
{
for (int col = 0; col < width; ++col)
{
///Some code to get color
UINT curColor = pixels[row * iStride / 4 + col];
int b = curColor & 0xff;
int g = (curColor & 0xff00) >> 8;
int r = (curColor & 0xff0000) >> 16;
int a = (curColor & 0xff000000) >> 24;
//result_pixels[col][row] = RGB(r, g, b);
if (b>15 && b < 25 && g<5 && r>250)
{
//Red found
//Code to change color, generate ARGB from provided RGB values
UINT32 rgb = (alpha << 24) + (red << 16) + (green << 8) + (blue);
curColor = rgb;
b = curColor & 0xff;
g = (curColor & 0xff00) >> 8;
r = (curColor & 0xff0000) >> 16;
a = (curColor & 0xff000000) >> 24;
cout << "Red found" << endl;
pixels[row * iStride / 4 + col]=rgb;
}
}
}
p_bmp->UnlockBits(&imageData);
//*****************************Till Here*************************************//
p_bmp->Save(L"screen.jpg", &pngClsid, NULL);
delete p_bmp;
}
The aim of the following function is to get the R,G,B values of each pixel from a Bitmap loaded from file and increase them by 10.
void PerformTransformation(Gdiplus::Bitmap* bitmap, LPCTSTR SaveFileName) {
Gdiplus::BitmapData* bitmapData = new Gdiplus::BitmapData;
UINT Width = bitmap->GetWidth();
UINT Height = bitmap->GetHeight();
Gdiplus::Rect rect(0, 0,Width,Height );
bitmap->LockBits(&rect, Gdiplus::ImageLockModeRead, PixelFormat32bppARGB, bitmapData);
byte* pixels = (byte*)bitmapData->Scan0;
INT iStride = abs(bitmapData->Stride);
for (UINT col = 0; col < Width; ++col)
for (UINT row = 0; row < Height; ++row)
{
unsigned int curColor = pixels[row * iStride / 4 + col];
int b = curColor & 0xff;
int g = (curColor & 0xff00) >> 8;
int r = (curColor & 0xff0000) >> 16;
if ((r + 10) > 255) r = 255; else r += 10;
if ((g + 10) > 255) g = 255; else g += 10;
if ((b + 10) > 255) b = 255; else b += 10;
pixels[curColor & 0xff ] = b;
pixels[curColor & 0xff00 >> 8] = g;
pixels[curColor & 0xff0000 >> 16] = r;
}
bitmap->UnlockBits(bitmapData);
CLSID pngClsid;
GetEncoderClsid(L"image/png", &pngClsid);
bitmap->Save(SaveFileName, &pngClsid, NULL);
}
However, when checking the saved file, the brightness has not increased. I have tried setting each R,G,B value to 100, but the image remains the same; it seems like I'm not setting the new values correctly.
Can anyone show me what I'm doing wrong?
EDIT:
After following some guidance, I now have the image brightening, but only a quarter of the image brightens.
Changed Code
void PerformTransformation(Gdiplus::Bitmap* bitmap, LPCTSTR SaveFileName) {
Gdiplus::BitmapData* bitmapData = new Gdiplus::BitmapData;
UINT Width = bitmap->GetWidth();
UINT Height = bitmap->GetHeight();
Gdiplus::Rect rect(0, 0,Width,Height );
// Lock the whole bitmap for writing.
bitmap->LockBits(&rect, Gdiplus::ImageLockModeWrite,
PixelFormat32bppARGB, bitmapData);
byte* Pixels = (byte*)bitmapData->Scan0;
INT stride_bytes_count = abs(bitmapData->Stride);
UINT row_index, col_index;
byte pixel[4];
for (col_index = 0; col_index < Width; ++col_index) {
for (row_index = 0; row_index < Height; ++row_index)
{
unsigned int curColor = Pixels[row_index * stride_bytes_count /
4 + col_index];
int b = curColor & 0xff;
int g = (curColor & 0xff00) >> 8;
int r = (curColor & 0xff0000) >> 16;
if ((r + 10) > 255) r = 255; else r += 10;
if ((g + 10) > 255) g = 255; else g += 10;
if ((b + 10) > 255) b = 255; else b += 10;
pixel[0] = b;
pixel[1] = g;
pixel[2] = r;
Pixels[row_index * stride_bytes_count / 4 + col_index] = *pixel;
}
}
bitmap->UnlockBits(bitmapData);
::DeleteObject(bitmapData);
CLSID pngClsid;
GetEncoderClsid(L"image/png", &pngClsid);
bitmap->Save(SaveFileName, &pngClsid, NULL);
}
You never check return codes.
You access the bitmap data in read-only mode (Gdiplus::ImageLockModeRead) even though you write to it.
You index the pixel channel values by color value (pixels[curColor & 0xff]) instead of by position; see the sketch below.
You never delete the allocated bitmapData object.
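For reference, a minimal sketch of how the inner loops could address the pixels byte by byte. It assumes the bitmap was locked with Gdiplus::ImageLockModeWrite and PixelFormat32bppARGB (in-memory byte order B, G, R, A), and it keeps the Stride signed and in bytes, since it can be negative for bottom-up bitmaps:
byte* pixels = (byte*)bitmapData->Scan0;
INT stride = bitmapData->Stride;                  // bytes per row, possibly negative
for (UINT row = 0; row < Height; ++row)
{
    byte* line = pixels + (INT)row * stride;      // start of this row
    for (UINT col = 0; col < Width; ++col)
    {
        byte* px = line + col * 4;                // 4 bytes per pixel: B, G, R, A
        px[0] = (byte)(px[0] + 10 > 255 ? 255 : px[0] + 10);   // blue
        px[1] = (byte)(px[1] + 10 > 255 ? 255 : px[1] + 10);   // green
        px[2] = (byte)(px[2] + 10 > 255 ? 255 : px[2] + 10);   // red
        // px[3] is alpha and is left untouched
    }
}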
I'm trying to create an algorithm in C/C++ which applies a uniform transparency gradient from left to right to a pixel buffer, as seen in the next image:
Below is my implementation so far, but the resulting image is not even close to what I need to achieve. Can anyone spot what I'm doing wrong? Thanks.
void alphaGradient(uint32_t* pixelsBuffer, const int width, const int height)
{
const short OPAQUE = 255;
int pixelOffsetY, pixelIndex;
short A, R, G, B;
for (int y = 0; y < height; y++)
{
A = OPAQUE;
pixelOffsetY = y * height;
for (int x = 0; x < width; x++)
{
pixelIndex = pixelOffsetY + x;
A = (int)(OPAQUE - ((OPAQUE * x) / width));
R = (pixelsBuffer[pixelIndex] & 0x00FF0000) >> 16;
G = (pixelsBuffer[pixelIndex] & 0x0000FF00) >> 8;
B = (pixelsBuffer[pixelIndex] & 0x000000FF);
pixelsBuffer[pixelIndex] = (A << 24) + (R << 16) + (G << 8) + B;
}
}
}
I haven't tried this code out, but something like this should work:
void alphaGradient(uint32_t* pixelBuffer, const int width, const int height)
{
for (int i = 0; i < width; i++)
{
for (int j = 0; j < height; j++)
{
const DWORD src = pixelBuffer[i + j * width];
const DWORD dst = MYBACKGROUNDCOLOR;
const unsigned char src_A = (width - i) * 255 / width;
const unsigned char src_R = (src & 0x00FF0000) >> 16;
const unsigned char src_G = (src & 0x0000FF00) >> 8;
const unsigned char src_B = (src & 0x000000FF);
//const unsigned char dst_Alpha = (src & 0xFF000000) >> 24;
const unsigned char dst_R = (dst & 0x00FF0000) >> 16;
const unsigned char dst_G = (dst & 0x0000FF00) >> 8;
const unsigned char dst_B = (dst & 0x000000FF);
const unsigned char rlt_R = (src_R * src_A + dst_R * (255 - src_A)) / 255;
const unsigned char rlt_G = (src_G * src_A + dst_G * (255 - src_A)) / 255;
const unsigned char rlt_B = (src_B * src_A + dst_B * (255 - src_A)) / 255;
pixelBuffer[i + j * width] = (DWORD)((255 << 24) | ((rlt_R & 0xff) << 16) | ((rlt_G & 0xff) << 8) | (rlt_B & 0xff));
// or, if you want to keep the transparency instead of flattening it:
//pixelBuffer[i + j * width] = (DWORD)((src_A << 24) | ((src_R & 0xff) << 16) | ((src_G & 0xff) << 8) | (src_B & 0xff));
}
}
}
But personally, I would try to use DirectX or OpenGL for this and write a good pixel shader. It would make this a lot faster.
As a suggestion, since you only want to modify the alpha channel, you do not need to do anything with the colors. So the following would work too:
unsigned char *b((unsigned char *) pixelBuffer + 3);  // alpha is the high byte of 0xAARRGGBB, i.e. offset 3 on a little-endian machine
for(int j = 0; j < height; ++j)
{
for(int i = 0; i < width; ++i, b += 4)
{
*b = (width - i) * 255 / width;
}
}
That's it. You could also eliminate the computation for each line by duplicating the data of the first line in the following lines:
// WARNING: code expects height > 0!
unsigned char *b((unsigned char *) pixelBuffer + 3);  // again, start at the alpha byte (little-endian 0xAARRGGBB)
for(int i = 0; i < width; ++i, b += 4)
{
*b = (width - i) * 255 / width;
}
int offset = width * -4;
for(int j = 1; j < height; ++j)
{
for(int i = 0; i < width; ++i, b += 4)
{
*b = b[offset];
}
}
I will leave it as an exercise for you to change this double for() loop into a single for() loop, which would make it a little faster still (because you'd have a single counter, the variable b, instead of three).
Note that I do not understand how Mikael's answer would work as he uses the * 255 in the wrong place in his computation of the alpha channel. With integer arithmetic, that's very important. So this should return 0 or 255:
(width - i) / width * 255
because if value < width, then value / width == 0, and (width - i) is either width or a value smaller than width.
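A quick numeric check with arbitrary values shows why the order matters:
int wrong = (100 - 30) / 100 * 255;    // 70 / 100 == 0, so this yields 0
int correct = (100 - 30) * 255 / 100;  // 17850 / 100 == 178, the intended gradient value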
How can I set a specific byte in a 4-byte DWORD variable?
DWORD color_argb;
unsigned char a = 11; // first byte
unsigned char r = 22; // second byte
unsigned char g = 33; // third byte
unsigned char b = 44; // fourth byte
zumalifeguard, if I understand you correctly, I can use the following macros:
#define SET_COLOR_A(color, a) color |= (a << 24)
#define SET_COLOR_R(color, r) color |= (r << 16)
#define SET_COLOR_G(color, g) color |= (g << 8)
#define SET_COLOR_B(color, b) color |= (b << 0)
?
Try these macros instead:
#define SET_COLOR_A(color, a) color = (DWORD(color) & 0x00FFFFFF) | ((DWORD(a) & 0xFF) << 24)
#define SET_COLOR_R(color, r) color = (DWORD(color) & 0xFF00FFFF) | ((DWORD(r) & 0xFF) << 16)
#define SET_COLOR_G(color, g) color = (DWORD(color) & 0xFFFF00FF) | ((DWORD(g) & 0xFF) << 8)
#define SET_COLOR_B(color, b) color = (DWORD(color) & 0xFFFFFF00) | (DWORD(b) & 0xFF)
The important thing is to preserve existing bits that are not being manipulated, while removing existing bits that are being replaced. Simply OR'ing the new bits is not enough if there are already bits present in the location being assigned to.
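As a quick usage sketch with the values from the question (the intermediate results in the comments are just for illustration):
DWORD color_argb = 0x44556677;    // arbitrary pre-existing value
SET_COLOR_A(color_argb, 11);      // color_argb == 0x0B556677
SET_COLOR_R(color_argb, 22);      // color_argb == 0x0B166677
SET_COLOR_G(color_argb, 33);      // color_argb == 0x0B162177
SET_COLOR_B(color_argb, 44);      // color_argb == 0x0B16212C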
DWORD color_argb;
unsigned char a = 11; // first byte
unsigned char r = 22; // second byte
unsigned char g = 33; // third byte
unsigned char b = 44; // fourth byte
color_argb = 0;
int byte_number; // first byte = 1, second byte = 2, etc.
// Set first byte to a;
byte_number = 1;
color_argb |= ( a << (8 * (4 - byte_number) ) );
// Set the second byte to r
byte_number = 2;
color_argb |= ( r << (8 * (4 - byte_number) ) );
// Set the third byte to g
byte_number = 3;
color_argb |= ( g << (8 * (4 - byte_number) ) );
// Set the fourth byte to b
byte_number = 4;
color_argb |= ( b << (8 * (4 - byte_number) ) );
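For completeness, the same shift arithmetic can be wrapped in a single helper. This SET_BYTE macro is hypothetical (it is not from either answer); unlike the plain OR above, it also clears the target byte first, so it is safe to use on a DWORD that already holds a value:
// byte_number runs 1..4, where 1 is the most significant byte.
#define SET_BYTE(value, byte_number, new_byte) \
    ((value) = ((value) & ~(DWORD(0xFF) << (8 * (4 - (byte_number))))) \
             | ((DWORD(new_byte) & 0xFF) << (8 * (4 - (byte_number)))))
// Example: SET_BYTE(color_argb, 2, r); has the same effect as SET_COLOR_R(color_argb, r) above.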