Grab Mac OS Screen using GL_RGB format - c++

I'm using the glgrab code to try and grab a full-screen screenshot of the Mac screen. However, I want the bitmap data to be in the GL_RGB format. That is, each pixel should be in the format:
0x00RRGGBB
The original code specified the GL_BGRA format, but changing that to GL_RGB gives me a completely blank result. The full source code I'm using is:
CGImageRef grabViaOpenGL(CGDirectDisplayID display, CGRect srcRect)
{
    CGContextRef bitmap;
    CGImageRef image;
    void *data;
    long bytewidth;
    GLint width, height;
    long bytes;
    CGColorSpaceRef cSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
    CGLContextObj glContextObj;
    CGLPixelFormatObj pixelFormatObj;
    GLint numPixelFormats;

    //CGLPixelFormatAttribute
    int attribs[] =
    {
        // kCGLPFAClosestPolicy,
        kCGLPFAFullScreen,
        kCGLPFADisplayMask,
        NULL,                   /* Display mask bit goes here */
        kCGLPFAColorSize, 24,
        kCGLPFAAlphaSize, 0,
        kCGLPFADepthSize, 32,
        kCGLPFASupersample,
        NULL
    };

    if (display == kCGNullDirectDisplay)
        display = CGMainDisplayID();
    attribs[2] = CGDisplayIDToOpenGLDisplayMask(display);

    /* Build a full-screen GL context */
    CGLChoosePixelFormat((CGLPixelFormatAttribute *)attribs, &pixelFormatObj, &numPixelFormats);
    if (pixelFormatObj == NULL) // No full screen context support
    {
        // GL didn't find any suitable pixel formats. Try again without the
        // supersample bit: kCGLPFASupersample sits at index 9, so terminate
        // the attribute list there.
        attribs[9] = NULL;
        CGLChoosePixelFormat((CGLPixelFormatAttribute *)attribs, &pixelFormatObj, &numPixelFormats);
        if (pixelFormatObj == NULL)
        {
            qDebug("Unable to find an OpenGL pixel format that meets constraints");
            return NULL;
        }
    }

    CGLCreateContext(pixelFormatObj, NULL, &glContextObj);
    CGLDestroyPixelFormat(pixelFormatObj);
    if (glContextObj == NULL)
    {
        qDebug("Unable to create OpenGL context");
        return NULL;
    }

    CGLSetCurrentContext(glContextObj);
    CGLSetFullScreen(glContextObj);

    glReadBuffer(GL_FRONT);

    width = srcRect.size.width;
    height = srcRect.size.height;
    bytewidth = width * 4;            // Assume 4 bytes/pixel for now
    bytewidth = (bytewidth + 3) & ~3; // Align to 4 bytes
    bytes = bytewidth * height;       // width * height

    /* Build bitmap context */
    data = malloc(height * bytewidth);
    if (data == NULL)
    {
        CGLSetCurrentContext(NULL);
        CGLClearDrawable(glContextObj);  // disassociate from full screen
        CGLDestroyContext(glContextObj); // and destroy the context
        qDebug("OpenGL drawable clear failed");
        return NULL;
    }
    bitmap = CGBitmapContextCreate(data, width, height, 8, bytewidth,
                                   cSpace, kCGImageAlphaNoneSkipFirst /* XRGB */);
    CFRelease(cSpace);

    /* Read framebuffer into our bitmap */
    glFinish();                          /* Finish all OpenGL commands */
    glPixelStorei(GL_PACK_ALIGNMENT, 4); /* Force 4-byte alignment */
    glPixelStorei(GL_PACK_ROW_LENGTH, 0);
    glPixelStorei(GL_PACK_SKIP_ROWS, 0);
    glPixelStorei(GL_PACK_SKIP_PIXELS, 0);

    /*
     * Fetch the data in XRGB format, matching the bitmap context.
     */
    glReadPixels((GLint)srcRect.origin.x, (GLint)srcRect.origin.y, width, height,
                 GL_RGB,
#ifdef __BIG_ENDIAN__
                 GL_UNSIGNED_INT_8_8_8_8_REV, // for PPC
#else
                 GL_UNSIGNED_INT_8_8_8_8,     // for Intel! http://lists.apple.com/archives/quartz-dev/2006/May/msg00100.html
#endif
                 data);

    /*
     * glReadPixels generates a quadrant I raster, with origin in the lower left.
     * This isn't a problem for signal processing routines such as compressors,
     * as they can simply use a negative 'advance' to move between scanlines.
     * CGImageRef and CGBitmapContext assume a quadrant III raster, though, so we
     * need to invert it. Pixel reformatting can also be done here.
     */
    swizzleBitmap(data, bytewidth, height);

    /* Make an image out of our bitmap; does a cheap vm_copy of the bitmap */
    image = CGBitmapContextCreateImage(bitmap);

    /* Get rid of bitmap */
    CFRelease(bitmap);
    free(data);

    /* Get rid of GL context */
    CGLSetCurrentContext(NULL);
    CGLClearDrawable(glContextObj);  // disassociate from full screen
    CGLDestroyContext(glContextObj); // and destroy the context

    /* Returned image has a reference count of 1 */
    return image;
}
I'm completely new to OpenGL, so I'd appreciate some pointers in the right direction. Cheers!
Update:
After some experimentation, I have managed to narrow my problem down. While I don't want the alpha component, I do want each pixel to be packed to 4-byte boundaries. When I specify the GL_RGB or GL_BGR format in the glReadPixels call, I get the bitmap data packed in 3-byte blocks. When I specify GL_RGBA or GL_BGRA, I get 4-byte blocks, but always with the alpha channel component last.
I then tried changing the value passed to
bitmap = CGBitmapContextCreate(data, width, height, 8, bytewidth,cSpace, kCGImageAlphaNoneSkipFirst /* XRGB */);
however, no variation of AlphaNoneSkipFirst or AlphaNoneSkipLast puts the alpha channel at the start of the pixel byte block.
Any ideas?
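For what it's worth, GL_RGB combined with the packed GL_UNSIGNED_INT_8_8_8_8 types is an invalid format/type pair (glReadPixels raises GL_INVALID_OPERATION and writes nothing), which would explain the blank result. One way to get the 0x00RRGGBB layout described above (a sketch of mine, not from the thread): read packed BGRA and clear the alpha byte afterwards, since with GL_UNSIGNED_INT_8_8_8_8_REV each pixel arrives as a native 32-bit value laid out 0xAARRGGBB.
// Sketch: read packed BGRA, then mask off the alpha byte.
// Assumes the same data/width/height/srcRect as the code above.
glReadPixels((GLint)srcRect.origin.x, (GLint)srcRect.origin.y, width, height,
             GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, data);

unsigned int *px = (unsigned int *)data;
for (long i = 0; i < (long)width * height; ++i)
    px[i] &= 0x00FFFFFF; // 0xAARRGGBB -> 0x00RRGGBB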

I'm not a Mac guy, but if you can get RGBA data and want XRGB, can't you just bit-shift each pixel down eight bits?
// Works when each 32-bit pixel reads as 0xRRGGBBAA (which is what
// GL_RGBA with GL_UNSIGNED_INT_8_8_8_8 delivers); pixbuf points at the
// first pixel and pixelCount is width * height.
for (unsigned int *RGBA_pixel = pixbuf; RGBA_pixel < pixbuf + pixelCount; ++RGBA_pixel)
    *RGBA_pixel >>= 8; // 0xRRGGBBAA -> 0x00RRGGBB

Try GL_UNSIGNED_BYTE instead of GL_UNSIGNED_INT_8_8_8_8_REV / GL_UNSIGNED_INT_8_8_8_8.
Although it seems you want GL_RGBA instead -- then it should work with either 8_8_8_8_REV or 8_8_8_8.
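For instance, a minimal sketch of that suggestion (same buffer and rectangle as the question's glReadPixels call):
glReadPixels((GLint)srcRect.origin.x, (GLint)srcRect.origin.y, width, height,
             GL_RGBA, GL_UNSIGNED_BYTE, data); // 4 bytes per pixel: R, G, B, A in memory order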

When I use GL_BGRA the data is returned pre-swizzled, which is confirmed because the colors look correct when I display the result in a window.
Contact me if you want the project I created. Hope this helps.

Related

Correctly displaying a 32-bit transparent PNG file in a DC

This is my method for loading a transparent PNG file into a buffer:
/* static */ void CRibbonButton::LoadImageFromRelativeFilespec(HGLOBAL& rhDIB, bool bLarge,
    const CString& rstrImageRelFilespec, UINT32& ruDIBW, int& ruDIBH)
{
    USES_CONVERSION;
    using namespace RibbonBar;

    // Clear any existing image away.
    if (rhDIB != NULL)
        ::GlobalFree(rhDIB);

    // Build the correct filespec.
    CString strThisEXE = _T("");
    ::GetModuleFileName(AfxGetInstanceHandle(),
        strThisEXE.GetBuffer(_MAX_PATH + 1), _MAX_PATH);
    strThisEXE.ReleaseBuffer();
    LPCTSTR lpszPath = (LPCTSTR)strThisEXE;
    LPTSTR lpszFilename = ::PathFindFileName(lpszPath);
    CString strPath = strThisEXE.Left((int)(lpszFilename - lpszPath));
    CString strFilespec = strPath;
    ::PathAppend(strFilespec.GetBuffer(_MAX_PATH + 1), rstrImageRelFilespec);
    strFilespec.ReleaseBuffer();

    HISSRC hSrc = is6_OpenFileSource(CT2A((LPCTSTR)strFilespec));
    if (hSrc)
    {
        // read it
        UINT32 w, h;
        rhDIB = is6_ReadImage(hSrc, &w, &h, 2, 0); // the "2" = load directly to DIB, in the lowest bit depth possible.
        if (rhDIB)
        {
            // get the dimensions
            is6_DIBWidth((BITMAPINFOHEADER *)rhDIB, &ruDIBW);
            is6_DIBHeight((BITMAPINFOHEADER *)rhDIB, &ruDIBH);
            UINT32 bc;
            is6_DIBBitCount((BITMAPINFOHEADER *)rhDIB, &bc);
            is6_ClearJPGInputMarkers();
        }
        else
        {
            AfxMessageBox(_T("Can't read that image"));
        }
        is6_CloseSource(hSrc);
    }
}
And this is the rendering code:
/* virtual */ void CRibbonButton::PaintData(CDC& rDC)
{
    CDC dcMem;
    dcMem.CreateCompatibleDC(NULL); // Screen.
    const CRect& rrctImage = GetImageBounds();
    if (m_hDIB)
    {
        // draw to a memory DC
        CDC memDC;
        if (memDC.CreateCompatibleDC(&rDC))
        {
            CBitmap bmp;
            if (bmp.CreateCompatibleBitmap(&rDC, rrctImage.Width(), rrctImage.Height()))
            {
                CBitmap *ob = memDC.SelectObject(&bmp);
                if (ob)
                {
                    // dark red background
                    memDC.FillSolidRect(CRect(rrctImage.left, rrctImage.top, rrctImage.Width(), rrctImage.Height()), RibbonBar::kBackColour);
                    // stretchDrawDIB is typically the fastest way to draw an image from ImgSource.
                    BOOL ok = is6_StretchDrawDIB(memDC.m_hDC, (BITMAPINFOHEADER *)m_hDIB, 0, 0, m_uDIBW, m_uDIBH);
                    if (!ok)
                    {
                        memDC.SetBkMode(TRANSPARENT);
                        memDC.SetTextColor(RGB(255, 255, 255));
                        memDC.TextOut(rrctImage.left, rrctImage.top, _T("X"));
                    }
                    // copy this to the window
                    rDC.BitBlt(rrctImage.left, rrctImage.top, rrctImage.Width(), rrctImage.Height(), &memDC, 0, 0, SRCCOPY);
                    memDC.SelectObject(ob);
                }
            }
        }
    }
    dcMem.DeleteDC();
}
It is not drawing the transparent PNG file correctly. I always end up with a black background.
I am using the ISSource libraries (version 6) for rendering, but the company is now out of business.
Update
Based on the answer I am now loading and rendering the image like this:
CRect rct;
CImage img;
img.Load(_T("d:\\Publishers.png"));
rct.SetRect(rrctImage.left, rrctImage.top, rrctImage.left + img.GetWidth(), rrctImage.top + img.GetHeight());
img.TransparentBlt(rDC.GetSafeHdc(), rct, RGB(255,255,255));
But why do I still get black where the transparency was set? If I don't pass RGB(255,255,255) as the last parameter and use the default, I get an exception.
Update
According to the documentation for TransparentBlt:
TransparentBlt is supported for source bitmaps of 4 bits per pixel and 8 bits per pixel. Use CImage::AlphaBlend to specify 32 bits-per-pixel bitmaps with transparency.
So, I have stopped using:
img.TransparentBlt(rDC.GetSafeHdc(), rct);
Now I am using:
img.AlphaBlend(rDC.GetSafeHdc(), rct.left, rct.top, rct.Width(), rct.Height(), rct.left, rct.top, rct.Width(), rct.Height(), 0xff, AC_SRC_OVER);
I don't see anything. I confirm the coordinates are right by doing:
CBrush br;
br.CreateStockObject(BLACK_BRUSH);
rDC.FrameRect(rct, &br);
Why do I not see anything?
This is much too complicated. There are existing methods in CImage.
Check out CImage::AlphaBlend or CImage::TransparentBlt.
AlphaBlend: the Dst fields are the coordinates in your DC; the Src values are inside your picture. Usually they start at 0,0 and have the width and height as values. If xSrc/ySrc are not 0, you have an offset into the source.
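For example, a sketch of that convention applied to the question's call (the source rectangle starting at 0,0; other values as in the question):
// Destination: where to draw in the DC. Source: the region inside the
// image, which normally starts at (0,0) and spans the full image size.
img.AlphaBlend(rDC.GetSafeHdc(),
               rct.left, rct.top, img.GetWidth(), img.GetHeight(), // xDst, yDst, dstW, dstH
               0, 0, img.GetWidth(), img.GetHeight(),              // xSrc, ySrc, srcW, srcH
               0xff, AC_SRC_OVER);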

DirectX Partial Screen Capture

I am trying to create a program that will capture a full-screen DirectX application, look for a specific set of pixels on the screen, and draw an image on the screen if it finds them.
I have been able to set up the application to capture the screen with the DirectX libraries, using the code from the answer to this question: Capture screen using DirectX
In that example the code saves to the hard drive using the IWIC libraries. I would rather manipulate the pixels instead of saving them.
After I have captured the screen and have an LPBYTE of the entire screen's pixels, I am unsure how to crop it to the region I want and how to manipulate the pixel array. Is it just a multi-dimensional byte array?
The way I think I should do it is (a sketch of steps 2-4 follows below):
1. Capture screen to IWIC bitmap (done).
2. Convert IWIC bitmap to ID2D1 bitmap using ID2D1RenderTarget::CreateBitmapFromWicBitmap.
3. Create a new ID2D1 bitmap to store the partial image.
4. Copy a region of the first ID2D1 bitmap into the new bitmap using ID2D1Bitmap::CopyFromBitmap.
5. Render back onto the screen using ID2D1.
Any help on any of this would be so much appreciated.
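For steps 2-4, here is a minimal sketch of the Direct2D calls involved (my illustration; it assumes an existing ID2D1RenderTarget named rt and the captured frame as an IWICBitmapSource named wicBmp in 32bppPBGRA):
// Wrap the WIC capture in a D2D bitmap.
ID2D1Bitmap *full = nullptr;
HRESULT hr = rt->CreateBitmapFromWicBitmap(wicBmp, nullptr, &full);

// Create an empty D2D bitmap for the cropped region.
D2D1_BITMAP_PROPERTIES props = D2D1::BitmapProperties(
    D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED));
ID2D1Bitmap *crop = nullptr;
hr = rt->CreateBitmap(D2D1::SizeU(100, 100), props, &crop);

// Copy the region of interest, e.g. (10,10)-(110,110), into it.
D2D1_POINT_2U dstPt = D2D1::Point2U(0, 0);
D2D1_RECT_U srcRc = D2D1::RectU(10, 10, 110, 110);
hr = crop->CopyFromBitmap(&dstPt, full, &srcRc);

// Render it back onto the target.
rt->BeginDraw();
rt->DrawBitmap(crop, D2D1::RectF(0.0f, 0.0f, 100.0f, 100.0f));
hr = rt->EndDraw();
The cropped pixels are not a multi-dimensional array: they are a single block of rows, each stride bytes apart, as the answer below demonstrates.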
Here is a modified version of the original code that captures only a portion of the screen into a buffer, and also gives back the stride. As a sample usage of the returned buffer, it then walks all the pixels and dumps their colors.
In this sample, the buffer is allocated by the function, so you must free it once you've used it:
// sample usage
int main()
{
    LONG left = 10;
    LONG top = 10;
    LONG width = 100;
    LONG height = 100;
    LPBYTE buffer;
    UINT stride;
    RECT rc = { left, top, left + width, top + height };
    Direct3D9TakeScreenshot(D3DADAPTER_DEFAULT, &buffer, &stride, &rc);

    // In 32bppPBGRA format, each pixel is represented by 4 bytes,
    // with one byte each for blue, green, red, and the alpha channel, in that order.
    // But don't forget this is all modulo endianness...
    // So, on Intel architecture, if we read a pixel from memory
    // as a DWORD, it's reversed (ARGB). The macros below handle that.

    // browse every pixel, line by line
    for (int h = 0; h < height; h++)
    {
        LPDWORD pixels = (LPDWORD)(buffer + h * stride);
        for (int w = 0; w < width; w++)
        {
            DWORD pixel = pixels[w];
            wprintf(L"#%02X#%02X#%02X#%02X\n", GetBGRAPixelAlpha(pixel), GetBGRAPixelRed(pixel), GetBGRAPixelGreen(pixel), GetBGRAPixelBlue(pixel));
        }
    }

    // get pixel at 50, 50 in the buffer, as #ARGB
    DWORD pixel = GetBGRAPixel(buffer, stride, 50, 50);
    wprintf(L"#%02X#%02X#%02X#%02X\n", GetBGRAPixelAlpha(pixel), GetBGRAPixelRed(pixel), GetBGRAPixelGreen(pixel), GetBGRAPixelBlue(pixel));

    SavePixelsToFile32bppPBGRA(width, height, stride, buffer, L"test.png", GUID_ContainerFormatPng);
    LocalFree(buffer);
    return 0;
}

#define GetBGRAPixelBlue(p)   (LOBYTE(p))
#define GetBGRAPixelGreen(p)  (HIBYTE(p))
#define GetBGRAPixelRed(p)    (LOBYTE(HIWORD(p)))
#define GetBGRAPixelAlpha(p)  (HIBYTE(HIWORD(p)))
#define GetBGRAPixel(b,s,x,y) (((LPDWORD)(((LPBYTE)b) + y * s))[x])
HRESULT Direct3D9TakeScreenshot(UINT adapter, LPBYTE *pBuffer, UINT *pStride, const RECT *pInputRc = nullptr)
{
    if (!pBuffer || !pStride) return E_INVALIDARG;

    HRESULT hr = S_OK;
    IDirect3D9 *d3d = nullptr;
    IDirect3DDevice9 *device = nullptr;
    IDirect3DSurface9 *surface = nullptr;
    D3DPRESENT_PARAMETERS parameters = { 0 };
    D3DDISPLAYMODE mode;
    D3DLOCKED_RECT rc;

    *pBuffer = NULL;
    *pStride = 0;

    // init D3D and get screen size
    d3d = Direct3DCreate9(D3D_SDK_VERSION);
    HRCHECK(d3d->GetAdapterDisplayMode(adapter, &mode));
    LONG width = pInputRc ? (pInputRc->right - pInputRc->left) : mode.Width;
    LONG height = pInputRc ? (pInputRc->bottom - pInputRc->top) : mode.Height;

    parameters.Windowed = TRUE;
    parameters.BackBufferCount = 1;
    parameters.BackBufferHeight = height;
    parameters.BackBufferWidth = width;
    parameters.SwapEffect = D3DSWAPEFFECT_DISCARD;
    parameters.hDeviceWindow = NULL;

    // create device & capture surface (note it needs desktop size, not our capture size)
    HRCHECK(d3d->CreateDevice(adapter, D3DDEVTYPE_HAL, NULL, D3DCREATE_SOFTWARE_VERTEXPROCESSING, &parameters, &device));
    HRCHECK(device->CreateOffscreenPlainSurface(mode.Width, mode.Height, D3DFMT_A8R8G8B8, D3DPOOL_SYSTEMMEM, &surface, nullptr));

    // get pitch/stride to compute the required buffer size
    HRCHECK(surface->LockRect(&rc, pInputRc, 0));
    *pStride = rc.Pitch;
    HRCHECK(surface->UnlockRect());

    // allocate buffer
    *pBuffer = (LPBYTE)LocalAlloc(0, *pStride * height);
    if (!*pBuffer)
    {
        hr = E_OUTOFMEMORY;
        goto cleanup;
    }

    // get the data
    HRCHECK(device->GetFrontBufferData(0, surface));

    // copy it into our buffer
    HRCHECK(surface->LockRect(&rc, pInputRc, 0));
    CopyMemory(*pBuffer, rc.pBits, rc.Pitch * height);
    HRCHECK(surface->UnlockRect());

cleanup:
    if (FAILED(hr))
    {
        if (*pBuffer)
        {
            LocalFree(*pBuffer);
            *pBuffer = NULL;
        }
        *pStride = 0;
    }
    RELEASE(surface);
    RELEASE(device);
    RELEASE(d3d);
    return hr;
}

OpenGL game screen capture

I'm trying to get a screenshot from a Quake 3 based game (Wolfenstein: Enemy Territory) that uses OpenGL, but without any results: I always get black screens, and I don't know why. I wanted to use WinAPI (GDI+) at first, but I read that Windows Vista & 7 have their own anti-aliasing which blocks screenshots in such apps (always black screens), so I switched to OpenGL, again without results. These are the references I based my code on:
testMemIO &
How to take screenshot in opengl
typedef void (WINAPI qglReadPixels_t)(GLint x, GLint y, GLsizei width, GLsizei height, GLenum format, GLenum type, GLvoid *pixels);
typedef void (WINAPI qglReadBuffer_t)(GLenum mode);

qglReadPixels_t *qaglReadPixels;
qglReadBuffer_t *qaglReadBuffer;

void GetScreenData()
{
    // Initialize FreeImage library
    FreeImage_Initialise(false);

    FIBITMAP *image2, *image1;
    DWORD ImageSize = 0;
    TCPSocketConnection FileServer;
    EndPoint ServerAddress;
    screen_struct ss_data;

    int Width = 1366;
    int Height = 768;
    BYTE *pixels = new BYTE[3 * Width * Height];
    BYTE *Data = NULL;
    DWORD Size = 0;
    FIMEMORY *memstream = FreeImage_OpenMemory();

    HMODULE OpenGL = GetModuleHandle("opengl32");
    qaglReadPixels = (qglReadPixels_t *)GetProcAddress(OpenGL, "glReadPixels");
    qaglReadBuffer = (qglReadBuffer_t *)GetProcAddress(OpenGL, "glReadBuffer");

    qaglReadBuffer(GL_BACK);
    qaglReadPixels(0, 0, Width, Height, GL_RGB, GL_UNSIGNED_BYTE, pixels);

    // Convert raw data into jpeg with the FreeImage library
    image1 = FreeImage_ConvertFromRawBits(pixels, Width, Height, 3 * Width, 24, 0x0000FF, 0xFF0000, 0x00FF00, false);
    image2 = FreeImage_ConvertTo24Bits(image1);

    // retrieve image data
    FreeImage_SaveToMemory(FIF_JPEG, image2, memstream, JPEG_QUALITYNORMAL);
    FreeImage_AcquireMemory(memstream, &Data, &Size);

    memset(&ss_data, 0x0, sizeof(screen_struct));
    ss_data.size = Size;

    // Send image size to server
    FileServer.Connect(Server->GetAddress(), 30003);

    // Send entire image
    FileServer.Send((char *)&ss_data, sizeof(screen_struct));
    FileServer.SendAll((char *)Data, Size);
    FileServer.Close();

    FreeImage_Unload(image1);
    FreeImage_Unload(image2);
    FreeImage_CloseMemory(memstream);
    delete[] pixels;
    FreeImage_DeInitialise();
}
Problem is solved: I'm now calling GetScreenData(...) before SwapBuffers(...) and it works correctly. But there is still a weird thing: on some computers I get shifted screens, for example: Screen #1. I don't know why it happens; as far as I know it happens on Nvidia 5xxx(m) and 7xxx(m) series.
Big thanks to @AndonM.Coleman
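A likely cause of the shifted screens (my assumption, not confirmed in the thread): glReadPixels defaults to GL_PACK_ALIGNMENT 4, while the code above assumes tightly packed rows of 3 * Width bytes. With Width = 1366, a row is 4098 bytes, which is not a multiple of 4, so the driver pads each row and every scanline drifts sideways. Requesting byte alignment before the read should fix it:
// Hypothetical fix; glPixelStorei would have to be fetched via
// GetProcAddress like the other entry points above.
glPixelStorei(GL_PACK_ALIGNMENT, 1); // rows of 3*1366 = 4098 bytes are not 4-byte aligned
qaglReadPixels(0, 0, Width, Height, GL_RGB, GL_UNSIGNED_BYTE, pixels);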

OpenGL Set Transparent Color for textures

My question: how do I make the color (255,200,255) transparent in OpenGL? (By transparent I mean removing pixels of color (255,200,255), or whatever works.)
My texture loading functions are from this tutorial: http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=33
Note that I can't rely on alpha channels: I have a set of pre-made images where the custom color (255,200,255) must become transparent/removed pixels.
Some .tga loading functions from my program:
typedef struct
{
    GLubyte *imageData; // Image Data (Up To 32 Bits)
    GLuint bpp;         // Image Color Depth In Bits Per Pixel
    GLuint width;       // Image Width
    GLuint height;      // Image Height
    GLuint texID;       // Texture ID Used To Select A Texture
    GLuint type;        // Image Type (GL_RGB, GL_RGBA)
} Texture;

Texture AllTextures[1000]; // declared after the typedef so the type is known
bool LoadUncompressedTGA(Texture *texture, char *filename, FILE *fTGA) // Load an uncompressed TGA (note, much of this code is based on NeHe's
{                                                                      // TGA loading code, nehe.gamedev.net)
    if (fread(tga.header, sizeof(tga.header), 1, fTGA) == 0) // Read TGA header
    {
        MessageBox(NULL, "Could not read info header", "ERROR", MB_OK); // Display error
        if (fTGA != NULL) // if file is still open
        {
            fclose(fTGA); // Close it
        }
        return false; // Return failure
    }

    texture->width  = tga.header[1] * 256 + tga.header[0]; // Determine the TGA width (highbyte*256 + lowbyte)
    texture->height = tga.header[3] * 256 + tga.header[2]; // Determine the TGA height (highbyte*256 + lowbyte)
    texture->bpp    = tga.header[4]; // Determine the bits per pixel
    tga.Width  = texture->width;  // Copy width into local structure
    tga.Height = texture->height; // Copy height into local structure
    tga.Bpp    = texture->bpp;    // Copy BPP into local structure

    if ((texture->width <= 0) || (texture->height <= 0) || ((texture->bpp != 24) && (texture->bpp != 32))) // Make sure all information is valid
    {
        MessageBox(NULL, "Invalid texture information", "ERROR", MB_OK); // Display error
        if (fTGA != NULL) // Check if file is still open
        {
            fclose(fTGA); // If so, close it
        }
        return false; // Return failed
    }

    if (texture->bpp == 24) // If the BPP of the image is 24...
    {
        texture->type = GL_RGB; // Set image type to GL_RGB
    }
    else // Else if it's 32 BPP
    {
        texture->type = GL_RGBA; // Set image type to GL_RGBA
    }

    tga.bytesPerPixel = (tga.Bpp / 8); // Compute the number of BYTES per pixel
    tga.imageSize = (tga.bytesPerPixel * tga.Width * tga.Height); // Compute the total amount of memory needed to store data
    texture->imageData = (GLubyte *)malloc(tga.imageSize); // Allocate that much memory

    if (texture->imageData == NULL) // If no space was allocated
    {
        MessageBox(NULL, "Could not allocate memory for image", "ERROR", MB_OK); // Display error
        fclose(fTGA); // Close the file
        return false; // Return failed
    }

    if (fread(texture->imageData, 1, tga.imageSize, fTGA) != tga.imageSize) // Attempt to read image data
    {
        MessageBox(NULL, "Could not read image data", "ERROR", MB_OK); // Display error
        if (texture->imageData != NULL) // If imagedata has data in it
        {
            free(texture->imageData); // Delete data from memory
        }
        fclose(fTGA); // Close file
        return false; // Return failed
    }

    // Byte swapping (BGR -> RGB), based on Steve Thomas' optimization;
    // written as a plain swap to avoid the undefined chained-^= idiom
    for (GLuint cswap = 0; cswap < (GLuint)tga.imageSize; cswap += tga.bytesPerPixel)
    {
        GLubyte temp = texture->imageData[cswap];
        texture->imageData[cswap] = texture->imageData[cswap + 2];
        texture->imageData[cswap + 2] = temp;
    }

    fclose(fTGA); // Close file
    return true;  // Return success
}
void LoadMyTextureTGA(int id, char *texturename)
{
    // texturename ex: "Data/Uncompressed.tga"
    if (LoadTGA(&AllTextures[id], texturename))
    {
        // success
        glGenTextures(1, &AllTextures[id].texID); // Create The Texture ( CHANGE )
        glBindTexture(GL_TEXTURE_2D, AllTextures[id].texID);
        glTexImage2D(GL_TEXTURE_2D, 0, 3, AllTextures[id].width, AllTextures[id].height, 0, GL_RGB, GL_UNSIGNED_BYTE, AllTextures[id].imageData);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        if (AllTextures[id].imageData) // If Texture Image Exists ( CHANGE )
        {
            free(AllTextures[id].imageData); // Free The Texture Image Memory ( CHANGE )
        }
    }
    else
    {
        MessageBoxA(0, "Textures Loading Fail! Game will close now", "Game Problem", 0);
        exit(1);
    }
}
If texture->bpp is 24 instead of 32 (which means there is no built-in alpha channel), you have to generate a 32-bit OpenGL texture and set the alpha value of each texel to 0 if the TGA pixel is (255,200,255), and to 255 otherwise.
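A minimal sketch of that conversion (a hypothetical helper, assuming tightly packed RGB data as produced by the loader above after its byte swap):
// Expand 24-bit RGB texels to 32-bit RGBA, making the key colour
// (255,200,255) fully transparent and everything else opaque.
GLubyte *AddColorKeyAlpha(const GLubyte *rgb, int width, int height)
{
    GLubyte *rgba = (GLubyte *)malloc(width * height * 4);
    if (rgba == NULL) return NULL;
    for (int i = 0; i < width * height; ++i)
    {
        GLubyte r = rgb[i * 3 + 0], g = rgb[i * 3 + 1], b = rgb[i * 3 + 2];
        rgba[i * 4 + 0] = r;
        rgba[i * 4 + 1] = g;
        rgba[i * 4 + 2] = b;
        rgba[i * 4 + 3] = (r == 255 && g == 200 && b == 255) ? 0 : 255; // key colour -> transparent
    }
    return rgba;
}
Upload the result with glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, rgba), then render with either blending (glEnable(GL_BLEND) + glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)) or alpha testing (glEnable(GL_ALPHA_TEST) + glAlphaFunc(GL_GREATER, 0.5f)) for hard cut-outs.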
Well, if you're using textures, then I'm assuming you already have a fragment shader.
It's pretty easy actually.
First in the main program turn on alpha blending:
glEnable (GL_BLEND);
glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Then in the fragment shader add this code:
vec3 chromaKeyColor = texture(myTextureSampler, UV.xy).xyz;
float alpha;
if ((chromaKeyColor.x <= 0.01) && (chromaKeyColor.y <= 0.01) && (chromaKeyColor.z <= 0.01)) {
    alpha = 0.0;
} else {
    alpha = 1.0;
}
color = vec4(chromaKeyColor, alpha);
The above code uses black as the chroma key (within a threshold of 0.01). You can choose another colour by looking up its RGB value; the asker's key of (255,200,255) is roughly (1.0, 0.784, 1.0) in normalized units. If the texel is not the chroma-key colour, the alpha is set to full (1.0).
OpenGL and color-keying can be a little tricky. Using 1-bit alpha is probably easier, but I've seen SDL used for color-keying like you describe. This thread might help you out.

How to draw 32-bit alpha channel bitmaps?

I need to create a custom control to display bmp images with alpha channel. The background can be painted in different colors and the images have shadows so I need to truly "paint" the alpha channel.
Does anybody know how to do it?
I also want, if possible, to create a mask using the alpha channel information, to know whether the mouse has been clicked on the image or on the transparent area.
Any kind of help will be appreciated!
Thanks.
Edited (JDePedro): As some of you have suggested, I've been trying to use alpha blending to paint the bitmap with its alpha channel. This is just a test I've implemented where I load a 32-bit bitmap from resources and try to paint it using the AlphaBlend function:
void CAlphaDlg::OnPaint()
{
    CClientDC dc(this);
    CDC dcMem;
    dcMem.CreateCompatibleDC(&dc);

    CBitmap bitmap;
    bitmap.LoadBitmap(IDB_BITMAP);
    BITMAP BitMap;
    bitmap.GetBitmap(&BitMap);
    int nWidth = BitMap.bmWidth;
    int nHeight = BitMap.bmHeight;
    CBitmap *pOldBitmap = dcMem.SelectObject(&bitmap);

    BLENDFUNCTION m_bf;
    m_bf.BlendOp = AC_SRC_OVER;
    m_bf.BlendFlags = 0;
    m_bf.SourceConstantAlpha = 255;
    m_bf.AlphaFormat = AC_SRC_ALPHA;
    AlphaBlend(dc.GetSafeHdc(), 100, 100, nWidth, nHeight, dcMem.GetSafeHdc(), 0, 0, nWidth, nHeight, m_bf);

    dcMem.SelectObject(pOldBitmap);
    CDialog::OnPaint();
}
This is just a test so I put the code in the OnPaint of the dialog (I also tried the AlphaBlend function of the CDC object).
The non-transparent areas are being painted correctly but I get white where the bitmap should be transparent.
Any help???
This is a screenshot; it's not easy to see, but there is a white rectangle around the blue circle: http://img385.imageshack.us/img385/7965/alphamh8.png
Ok, I got it! I have to pre-multiply every pixel by its alpha value. Can someone suggest an optimized way to do that?
For future Google users, here is a working pre-multiply function. Note that this was taken from http://www.viksoe.dk/code/alphatut1.htm.
inline void PremultiplyBitmapAlpha(HDC hDC, HBITMAP hBmp)
{
    BITMAP bm = { 0 };
    GetObject(hBmp, sizeof(bm), &bm);
    BITMAPINFO *bmi = (BITMAPINFO *)_alloca(sizeof(BITMAPINFOHEADER) + (256 * sizeof(RGBQUAD)));
    ::ZeroMemory(bmi, sizeof(BITMAPINFOHEADER) + (256 * sizeof(RGBQUAD)));
    bmi->bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
    BOOL bRes = ::GetDIBits(hDC, hBmp, 0, bm.bmHeight, NULL, bmi, DIB_RGB_COLORS);
    if (!bRes || bmi->bmiHeader.biBitCount != 32) return;
    LPBYTE pBitData = (LPBYTE)::LocalAlloc(LPTR, bm.bmWidth * bm.bmHeight * sizeof(DWORD));
    if (pBitData == NULL) return;
    LPBYTE pData = pBitData;
    ::GetDIBits(hDC, hBmp, 0, bm.bmHeight, pData, bmi, DIB_RGB_COLORS);
    for (int y = 0; y < bm.bmHeight; y++) {
        for (int x = 0; x < bm.bmWidth; x++) {
            pData[0] = (BYTE)((DWORD)pData[0] * pData[3] / 255);
            pData[1] = (BYTE)((DWORD)pData[1] * pData[3] / 255);
            pData[2] = (BYTE)((DWORD)pData[2] * pData[3] / 255);
            pData += 4;
        }
    }
    ::SetDIBits(hDC, hBmp, 0, bm.bmHeight, pBitData, bmi, DIB_RGB_COLORS);
    ::LocalFree(pBitData);
}
So then your OnPaint becomes:
void MyButton::OnPaint()
{
    CPaintDC dc(this);
    CRect rect(0, 0, 16, 16);

    static bool pmdone = false;
    if (!pmdone) {
        PremultiplyBitmapAlpha(dc, m_Image);
        pmdone = true;
    }

    BLENDFUNCTION bf;
    bf.BlendOp = AC_SRC_OVER;
    bf.BlendFlags = 0;
    bf.SourceConstantAlpha = 255;
    bf.AlphaFormat = AC_SRC_ALPHA;

    HDC src_dc = m_Image.GetDC();
    ::AlphaBlend(dc, rect.left, rect.top, 16, 16, src_dc, 0, 0, 16, 16, bf);
    m_Image.ReleaseDC();
}
And the loading of the image (in the constructor of your control):
if ((HBITMAP)m_Image == NULL) {
    m_Image.LoadFromResource(::AfxGetResourceHandle(), IDB_RESOURCE_OF_32_BPP_BITMAP);
}
The way I usually do this is via a DIBSection - a device-independent bitmap that you can modify the pixels of directly. Unfortunately there isn't any MFC support for DIBSections: you have to use the Win32 function CreateDIBSection() to use it.
Start by loading the bitmap as 32-bit RGBA (that is, four bytes per pixel: one red, one green, one blue and one for the alpha channel). In the control, create a suitably sized DIBSection. Then, in the paint routine (sketched after this list):
1. Copy the bitmap data into the DIBSection's bitmap data, using the alpha channel byte to blend the bitmap image with the background colour.
2. Create a device context and select the DIBSection into it.
3. Use BitBlt() to copy from the new device context to the paint device context.
You can create a mask given the raw bitmap data simply by looking at the alpha channel values - I'm not sure what you're asking here.
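A minimal sketch of those steps (my illustration, assuming a 32-bit RGBA source buffer src, its width/height, a background colour in bgRed/bgGreen/bgBlue, and a target paintDC):
// 1. Create a top-down 32-bit DIBSection we can write pixels into.
BITMAPINFO bmi = { 0 };
bmi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
bmi.bmiHeader.biWidth = width;
bmi.bmiHeader.biHeight = -height; // negative height = top-down rows
bmi.bmiHeader.biPlanes = 1;
bmi.bmiHeader.biBitCount = 32;
bmi.bmiHeader.biCompression = BI_RGB;
void *bits = NULL;
HBITMAP hDib = CreateDIBSection(NULL, &bmi, DIB_RGB_COLORS, &bits, NULL, 0);

// 2. Blend each RGBA source pixel over the background colour
//    (DIB pixels are stored B, G, R, X in memory).
BYTE *dst = (BYTE *)bits;
for (int i = 0; i < width * height; ++i, dst += 4)
{
    BYTE a = src[i * 4 + 3];
    dst[0] = (BYTE)((src[i * 4 + 2] * a + bgBlue  * (255 - a)) / 255);
    dst[1] = (BYTE)((src[i * 4 + 1] * a + bgGreen * (255 - a)) / 255);
    dst[2] = (BYTE)((src[i * 4 + 0] * a + bgRed   * (255 - a)) / 255);
    dst[3] = 255;
}

// 3. Select it into a memory DC and BitBlt to the paint DC.
HDC memDC = CreateCompatibleDC(paintDC);
HGDIOBJ old = SelectObject(memDC, hDib);
BitBlt(paintDC, 0, 0, width, height, memDC, 0, 0, SRCCOPY);
SelectObject(memDC, old);
DeleteDC(memDC);
DeleteObject(hDib);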
You need to do an alpha blend with your background color, then take out the alpha channel to paint it to the control.
The alpha channel should just be every 4th byte of your image. You can use that directly for your mask, or you can just copy every 4th byte to a new mask image.
Painting it is very easy with the AlphaBlend function.
As for you mask, you'll need to get the bits of the bitmap and examine the alpha channel byte for each pixel you're interested in.
An optimised way to pre-multiply the RGB channels with the alpha channel is to set up a [256][256] array containing the calculated multiplication results. The first dimension is the alpha value, the second is the R/G/B value, the values in the array are the pre-multiplied values you need.
With this array set up correctly, you can calculate the value you need like this:
R = multiplicationLookup[alpha][R];
G = multiplicationLookup[alpha][G];
B = multiplicationLookup[alpha][B];
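A sketch of how that table could be built and applied (names follow the snippet above; this is an illustration, not code from the thread):
static BYTE multiplicationLookup[256][256];

void InitPremultiplyTable()
{
    for (int alpha = 0; alpha < 256; ++alpha)
        for (int value = 0; value < 256; ++value)
            multiplicationLookup[alpha][value] = (BYTE)(alpha * value / 255);
}

// For a 32-bit BGRA pixel p (bytes B, G, R, A), the premultiply pass then
// becomes three table lookups instead of three multiply/divides:
// p[0] = multiplicationLookup[p[3]][p[0]];
// p[1] = multiplicationLookup[p[3]][p[1]];
// p[2] = multiplicationLookup[p[3]][p[2]];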
You are on the right track, but need to fix two things. First, use ::LoadImage( .. LR_CREATEDIBSECTION ..) instead of CBitmap::LoadBitmap. Second, you have to "pre-multiply" the RGB values of every pixel in the bitmap by their respective A value. This is a requirement of the AlphaBlend function; see the AlphaFormat description on this MSDN page.
libpng has working code that does the premultiplication of the DIB data.