How to draw a transparent BMP with GDI+ - C++

I'm currently editing some old GDI code to use GDI+ and ran into a problem when it comes to drawing a BMP file with a transparent background. The old GDI code did not use any obvious extra code to draw the background transparently, so I'm wondering how to achieve this with GDI+.
My current code looks like this:
HINSTANCE hinstance = GetModuleHandle(NULL);
bmp = Gdiplus::Bitmap::FromResource(hinstance, MAKEINTRESOURCEW(IDB_BMP));
Gdiplus::Graphics graphics(pDC->m_hDC);
graphics.DrawImage(bmp, posX, posY);
I also tried creating a new bitmap from the resource using the Clone method, and drawing the bitmap into a newly created one, but neither helped. Both times I used PixelFormat32bppPARGB.
Then I tried to use alpha blending but this way the whole image gets transparent and not only the background:
Gdiplus::ColorMatrix clrMatrix = {
1.0f, 0.0f, 0.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f, 0.0f, 0.0f,
0.0f, 0.0f, 1.0f, 0.0f, 0.0f,
0.0f, 0.0f, 0.0f, 0.5f, 0.0f,
0.0f, 0.0f, 0.0f, 0.0f, 1.0f
};
Gdiplus::ImageAttributes imgAttr;
imgAttr.SetColorMatrix(&clrMatrix);
graphics.DrawImage(bmp, destRect, 0, 0, width(), height(), Gdiplus::UnitPixel, &imgAttr);
The transparency information is already contained in the image but I don't have a clue how to apply it when drawing the image. How does one achieve this?

A late answer but:
ImageAttributes imAtt;
imAtt.SetColorKey(Color(255,255,255), Color(255,255,255), ColorAdjustTypeBitmap);
Will make white (255,255,255) transparent on any bitmap you use this image attribute with.

The simplest solution is to use some format other than BMP.
You need the bits to contain alpha data, and you need the Bitmap to be in a format that has alpha data. When you load a BMP with GDI+, it will always use a format without alpha, even if the BMP has an alpha channel. I believe the data is there in the image bits, but it's not being used.
The problem when you clone or draw to a PixelFormat32bppPARGB Bitmap is that GDI+ will convert the image data to the new format, which means discarding the alpha data.
Assuming it's loading the bits correctly, what you need to do is copy the bits over directly to another bitmap with the correct format. You can do this with Bitmap::LockBits and Bitmap::UnlockBits. (Make sure you lock each bitmap with its native pixel format so no conversion is done.)
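The copy between two locked bitmaps boils down to a row-by-row memcpy that respects each bitmap's stride (scanlines may be padded). A minimal sketch of that inner loop in plain C++, with raw buffers standing in for the Scan0/Stride fields of Gdiplus::BitmapData (the buffer setup in the test is hypothetical):

```cpp
#include <cstdint>
#include <cstring>

// Copy 'height' rows of 'rowBytes' meaningful bytes from src to dst,
// where each buffer may have a different stride (bytes per scanline,
// possibly padded past the pixel data).
void CopyRows(const uint8_t* src, int srcStride,
              uint8_t* dst, int dstStride,
              int rowBytes, int height)
{
    for (int y = 0; y < height; ++y)
        std::memcpy(dst + y * dstStride, src + y * srcStride, rowBytes);
}
```

With GDI+, src/srcStride would come from LockBits on the source bitmap (locked with its native pixel format, so no conversion happens) and dst/dstStride from LockBits on the PixelFormat32bppARGB destination, followed by UnlockBits on both.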

I had the same problem. Transparent BMPs were not shown correctly, and unfortunately PNGs cannot be loaded directly from resources (except by adding quite a bit of code that copies them into a stream and loads them from there). I wanted to avoid that code.
The bitmaps that I'm using also use only two colours (background and logo). Having an alpha channel means that I would need to save them with a much higher colour depth instead of only 2 bit colour depth.
Evan's answer was exactly what I was looking for :-)
Instead of white, I'm using the colour of the top-left pixel as the transparent colour:
Gdiplus::Color ColourOfTopLeftPixel;
Gdiplus::Status eStatus = m_pBitmap->GetPixel(0, 0, &ColourOfTopLeftPixel);
_ASSERTE(eStatus == Gdiplus::Ok);
// The following makes every pixel with the same colour as the top left pixel (ColourOfTopLeftPixel) transparent.
Gdiplus::ImageAttributes ImgAtt;
ImgAtt.SetColorKey(ColourOfTopLeftPixel, ColourOfTopLeftPixel, Gdiplus::ColorAdjustTypeBitmap);
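For reference, the per-pixel effect of a colour key with identical low and high colours is easy to state: any pixel whose RGB equals the key becomes fully transparent. A plain C++ sketch of that rule on 0xAARRGGBB values (the packing helpers are illustrative, not GDI+ API):

```cpp
#include <cstdint>

// 0xAARRGGBB packing, matching GDI+'s Color::MakeARGB layout.
inline uint32_t MakeARGB(uint8_t a, uint8_t r, uint8_t g, uint8_t b) {
    return (uint32_t(a) << 24) | (uint32_t(r) << 16) | (uint32_t(g) << 8) | b;
}

// Emulates SetColorKey with equal low/high colours: a pixel whose RGB
// matches the key gets alpha forced to 0; anything else is untouched.
inline uint32_t ApplyColorKey(uint32_t pixel, uint32_t keyRgb) {
    return ((pixel & 0x00FFFFFFu) == (keyRgb & 0x00FFFFFFu))
               ? (pixel & 0x00FFFFFFu)  // alpha cleared
               : pixel;
}
```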

Related

Drawing a simple rectangle in OpenGL 4

According to this wikibook it used to be possible to draw a simple rectangle as easily as this (after creating and initializing the window):
glColor3f(0.0f, 0.0f, 0.0f);
glRectf(-0.75f,0.75f, 0.75f, -0.75f);
This has been removed, however, in OpenGL 3.2 and later versions.
Is there some other simple, quick and dirty, way in OpenGL 4 to draw a rectangle with a fixed color (without using shaders or anything fancy)?
Is there some ... way ... to draw a rectangle ... without using shaders ...?
Yes. In fact, AFAIK, it is supported on all OpenGL versions in existence: you can draw a solid rectangle by enabling the scissor test and clearing the framebuffer:
glEnable(GL_SCISSOR_TEST);
glScissor(x, y, width, height);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
This is different from glRect in multiple ways:
The coordinates are specified in pixels relative to the window origin.
The rectangle must be axis aligned and cannot be transformed in any way.
Most of the per-sample processing is skipped. This includes blending, depth and stencil testing.
However, I'd rather discourage you from doing this. You're likely to be better off by building a VAO with all the rectangles you want to draw on the screen, then draw them all with a very simple shader.

2D Texture morph in Orthographic Projection

I'm having a hard time figuring out what's going on with my texture:
Basically I am fetching a webcam stream as my underlying 2d texture canvas in OpenGL, and in my paintGL() I'm drawing stuff on it (as RGBA images with GL_BLEND).
Since I'm using a Kinect as a data source, I'm also getting the depth values from a tracked skeleton (a person), and converting them into GL values (XYZ varying between 0.0f and 1.0f).
So my goal is that, for instance, a loaded 2D Texture like a shirt, is now properly tracking the person in my RGB output display. But it seems my understanding of orthographic projection is wrong:
I'm constantly loading the 4 converted vertices into a VBO, but whenever I put the texture on top of this dynamic quad, it's always facing the screen.
I thought that putting this dynamic quad between the "background" canvas and the camera would result in a proper projection of the quad onto the canvas, which would give me the impression of a warping 2D texture, that seems to "bend" whenever the person rotates.
But the texture is always facing the camera and doesn't rotate.
I've also tried rotating manually via a matrix and setting that in my shader, but again, it only rotates the vertex quad itself (rotation simply makes the texture smaller) and THEN puts the texture on top, instead of rotating the texture with it.
So, is it somehow possible to properly apply this to the texture?
I've thought about mixing a perspective projection in, but actually have no idea how to implement this...
EDIT:
I've actually already set my projection matrix up like the following:
In resizeGL():
projection.setToIdentity();
projection.ortho(0.0f, 1.0f, 0.0f, 1.0f, 2.0f, -5.0f);
projection.translate(0.0f, 0.0f, 3.0f);
In paintGL():
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glDisable(GL_DEPTH_TEST); // turning this on/off makes no difference
glEnable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureID);
program.setUniformValue("mvp_matrix", projection);
program.setUniformValue("texture", 0);
//draw 2d background quad
drawQuad();
glClear(GL_DEPTH_BUFFER_BIT);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// switch to frustum to give perspective view
projection.setToIdentity();
projection.frustum(0.0f, 1.0f, 0.0f, 1.0f, 2.0f, -5.0f);
projection.translate(0.0f, 0.0f, 3.0f);
// bind cloth texture and draw ontop 2d quad
clothTexture->bind();
program.setUniformValue("mpv_matrix", projection);
drawShirtQuad();
// reset to ortho view
projection.setToIdentity();
projection.ortho(0.0f, 1.0f, 0.0f, 1.0f, 2.0f, -5.0f);
// release texture
clothTexture->release();
glDisable(GL_BLEND);
clothTexture is a QOpenGLTexture that has successfully loaded an RGBA image from a file.
Result: whenever I activate the frustum perspective, it results in a black screen. I think everything is correctly set up: POV is traversed towards positive z-axis in resizeGL(), and all the cloth vertices vary between 0 and 1 in XYZ, while the background is positioned at:
(0.0f, 0.0f, -1.0f), (1.0f, 0.0f, -1.0f), (1.0f, 1.0f, -1.0f), (0.0f, 1.0f, -1.0f).
So the cloth object is always positioned between the background plane and the POV. Am I missing something in the frustum setup? I've simply set it up the same way as ortho...
EDIT:
Sorry for not mentioning it; the matrix I'm using is of type QMatrix4x4:
Frustum
These functions multiply the current matrix by the one you pass as an argument, which should yield the same result as if I defined a view matrix and then set my shader uniform "mvp_matrix" to projection * view, if I'm not mistaken. Maybe something like lookAt will do the trick; I'll just try messing around more. :)
You need to use a perspective projection to achieve the desired result. Look here for example code that creates a perspective projection matrix with glm.
Moving the vertices wouldn't be needed, as you will get the proper positions with the rotation applied in your model matrix.
EDIT: in your code, where can I look at the .frustum and .translate methods, or see which library the projection object comes from? It doesn't look like you are doing Projection * View by moving the frustum matrix. Some info about the roles of the standard matrices.
Regarding debugging: if you get black on screen instead of your clear color, the problem is not with the matrix but earlier. I also recommend printing your perspective matrix and comparing it to a correct one (which you can get, for example, from the glm library).
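To make the comparison concrete, here is the standard glFrustum matrix written out in plain C++ (no GL context needed), which you can use to print and check against what QMatrix4x4::frustum produces. Note that glFrustum-style matrices require 0 < near < far, both positive; a non-positive far plane yields a degenerate matrix:

```cpp
#include <array>

// Column-major 4x4, like OpenGL; m[col][row].
using Mat4 = std::array<std::array<float, 4>, 4>;

// Standard glFrustum matrix (see the OpenGL reference pages).
// Requires 0 < n < f.
Mat4 Frustum(float l, float r, float b, float t, float n, float f) {
    Mat4 m{};
    m[0][0] = 2.0f * n / (r - l);
    m[1][1] = 2.0f * n / (t - b);
    m[2][0] = (r + l) / (r - l);
    m[2][1] = (t + b) / (t - b);
    m[2][2] = -(f + n) / (f - n);
    m[2][3] = -1.0f;                  // w = -z_eye: the perspective divide
    m[3][2] = -2.0f * f * n / (f - n);
    return m;
}
```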

OpenGL - PBuffer render to Texture

After my last post, when someone recommended that I use pbuffers, I dug around on Google and found some good examples of offscreen rendering using pbuffers. One example, available on NVIDIA's website, does a simple offscreen render: it draws into the pbuffer context, reads the pixels into an array, and then calls glDrawPixels to display them.
I changed this example to create a texture from the pixels read: render offscreen, read the pixels into the array, then initialize a texture with this color array. But this looks very redundant to me: we render the image, copy it from graphics card memory into our memory (the array), and later copy it back to the graphics card to display it on screen, just in a different rendering context. The copies I'm making just to display the rendered texture seem wasteful, so I tried a different approach using glCopyTexImage2D(), which unfortunately doesn't work. Here is the code and some explanation:
mypbuffer.Initialize(256, 256, false, false);
- The false values are for sharing the context and sharing objects. They are false because this fantastic graphics card doesn't support it.
Then I perform the usual initializations, to enable Blending, and GL_TEXTURE_2D.
CreateTexture();
mypbuffer.Activate();
int viewport[4];
glGetIntegerv(GL_VIEWPORT,(int*)viewport);
glViewport(0,0,xSize,ySize);
DrawScene(hDC);
//save data to texture using glCopyTexImage2D
glBindTexture(GL_TEXTURE_2D,texture);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
0,0, xSize, ySize, 0);
glClearColor(.0f, 0.5f, 0.5f, 1.0f); // Set The Clear Color To Medium Blue
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glViewport(viewport[0],viewport[1],viewport[2],viewport[3]);
// glBindTexture(GL_TEXTURE_2D,texture);
first = false;
mypbuffer.Deactivate();
- The DrawScene function is very simple; it just renders a triangle and a rectangle, which is supposed to be rendered offscreen (I HOPE). CreateTexture() creates an empty texture. The function should work, as it was tested in the previous approach I described, and it works.
After this, in the main loop, I just do the following:
glClear(GL_COLOR_BUFFER_BIT);
glBindTexture(GL_TEXTURE_2D,texture);
glRotatef(theta, 0.0f, 0.0f, 0.01f);
glBegin(GL_QUADS);
//Front Face
glTexCoord2f(0.0f, 0.0f);
glVertex3f(-0.5, -0.5f, 0.5f);
glTexCoord2f(1.0f, 0.0f);
glVertex3f( 0.5f, -0.5f, 0.5f);
glTexCoord2f(1.0f, 1.0f);
glVertex3f( 0.5f, 0.5f, 0.5f);
glTexCoord2f(0.0f, 1.0f);
glVertex3f(-0.5f, 0.5f, 0.5f);
glEnd();
SwapBuffers(hDC);
theta = 0.10f;
Sleep (1);
The final result is just a window with a blue background; nothing actually got rendered. Any idea why this is happening? My graphics card doesn't support the WGL_ARB_render_texture extension, but that shouldn't be a problem when calling glCopyTexImage2D(), right?
My card doesn't support FBOs either.
What you must do is, in a sense, "connect" your two OpenGL contexts so that the textures of your PBuffer context also show up in the main render context. The term to look for is "display list sharing". On Windows you connect the contexts retroactively using wglShareLists; on X11 and Mac OS X you must supply the handle of the context to be shared at context creation.
An entirely different possibility and working just as well is reusing the same context on the PBuffer. It's a little known fact, that you can use OpenGL render contexts not only on the drawable it has been created with first, but on any drawable with compatible settings. So if your PBuffer matches your main window's pixel format, you can detach the render context from the main window and attach it to the PBuffer. Of course you then need low level access to the main window's device context/drawable, which is normally hidden behind a framework.
You should check whether your OpenGL implementation supports framebuffer objects: these objects can be render targets, and they can have textures attached as color buffers, letting you render directly into a texture.
This would be the way to go; otherwise your current method is the alternative.

How do I shift the hue of a LWJGL texture?

I am trying to do a Photoshop-like hue shift on a texture.
My code is somewhat like this:
glColor4f(0.0f, 1.0f, 1.0f, 1.0f);
//bind texture, draw quad, etc.
Here is a picture describing what happens:
I can't post images yet, so here's a link.
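A constant glColor4f only multiplies each texture channel, which tints the image rather than rotating its hue. An actual hue shift rotates the colour around the grey axis; a hedged CPU-side sketch in C++ (illustrative helper, not LWJGL API; in practice you would do this per fragment in a shader) via RGB -> HSV -> rotate -> RGB:

```cpp
#include <algorithm>
#include <cmath>

struct Rgb { float r, g, b; };  // components in [0, 1]

// Shift hue by 'degrees': convert RGB to HSV, rotate H, convert back.
Rgb ShiftHue(Rgb c, float degrees) {
    float mx = std::max({c.r, c.g, c.b});
    float mn = std::min({c.r, c.g, c.b});
    float d = mx - mn;
    float h = 0.0f;
    if (d > 0.0f) {
        if (mx == c.r)      h = std::fmod((c.g - c.b) / d, 6.0f);
        else if (mx == c.g) h = (c.b - c.r) / d + 2.0f;
        else                h = (c.r - c.g) / d + 4.0f;
        h *= 60.0f;
    }
    float s = (mx > 0.0f) ? d / mx : 0.0f;
    float v = mx;

    h = std::fmod(h + degrees + 360.0f, 360.0f);  // rotate hue

    float C = v * s;
    float X = C * (1.0f - std::fabs(std::fmod(h / 60.0f, 2.0f) - 1.0f));
    float m = v - C;
    Rgb o{};
    if      (h <  60.0f) o = {C, X, 0};
    else if (h < 120.0f) o = {X, C, 0};
    else if (h < 180.0f) o = {0, C, X};
    else if (h < 240.0f) o = {0, X, C};
    else if (h < 300.0f) o = {X, 0, C};
    else                 o = {C, 0, X};
    o.r += m; o.g += m; o.b += m;
    return o;
}
```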

C++ OpenGL load image in GL_QUAD, glVertex2f

Using WIN32_FIND_DATA and FindFirstFile, I'm searching for files in a directory, and with fileName.find(".jpg") != std::string::npos I filter out the jpg images.
I'm using OpenGL for creating Boxes with a red color:
glBegin( GL_QUADS );
glColor4f( 1.0f, 0.0f, 0.0f, 0.0f ); glVertex2f( 0.35f, 0.7f );
glColor4f( 1.0f, 0.0f, 0.0f, 0.0f ); glVertex2f( -0.35f, 0.7f );
glColor4f( 1.0f, 0.0f, 0.0f, 0.0f ); glVertex2f( -0.35f, -0.3f );
glColor4f( 1.0f, 0.0f, 0.0f, 0.0f ); glVertex2f( 0.35f, -0.3f );
This is the box in the center with a red color.
My question is: how can I load each image into a box instead of using the red color (glColor4f)?
I think this is not the best way to do this, but the code is not my own; I'm trying to improve it for a friend.
Thank you!
You need to learn about texturing. See NeHe's tutorial on the subject as an example.
However, that tutorial is a bit old (as is your code, since you use glVertex()), so it might not matter to you right now... :)
Anyway, starting from OpenGL 3.1 and OpenGL ES 2.0, you should do it using GLSL, fragment shaders, and samplers instead. See another tutorial for that. It's actually simpler than learning all the fixed-function stuff.
It's not really good practice to use the WinAPI together with OpenGL applications unless you really have reasons to - and loading textures from disk is not a good reason.
Think of it this way: OpenGL is a platform-independent API, so why diminish this advantage by using non-portable subroutines when portable alternatives exist and are more convenient to use in most cases?
For loading textures, I recommend the SOIL library. It is likely to be a much better solution than what the NeHe tutorials recommend.
For finding files on the disk, you might want to use boost::filesystem if you want to get rid of the WinAPI dependency. But that's not a priority now.
When you have the texture loaded by SOIL (a GLuint value being the texture ID), you can do the following:
enable 2D texturing (glEnable(GL_TEXTURE_2D)),
bind the texture as active 2D texture (glBindTexture(GL_TEXTURE_2D,tex);),
set the active color to pure white so that the texture image will be full-bright,
draw the vertices as usual, but for each vertex you'll need to specify a texture coordinate (glTexCoord2f) instead of a color. (0,0) is upper left coord of the texture image, (1,1) is the lower right.
Note that the texture image must have dimensions being powers of two (like 16x16 or 256x512). If you want to use any texture size, switch to a newer OpenGL version which supports GL_TEXTURE_RECTANGLE.
Not really a lot of explaining, as far as the basics are concerned. :)
BTW- +1 for what Marcus said in his answer. You're learning an outdated OpenGL version right now; while you can do a lot of fun things with it, you can do more with at least OpenGL 2 and shaders... and it's usually easier with shaders too.
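As a footnote to the power-of-two note above: if a loaded image has arbitrary dimensions and you are stuck on an older OpenGL, one common workaround is to pad the image up to the next power of two. A small hypothetical helper for that:

```cpp
#include <cstdint>

// Smallest power of two >= n, for n >= 1 (e.g. 300 -> 512, 256 -> 256).
// Pad the image to NextPowerOfTwo(width) x NextPowerOfTwo(height) and
// adjust the upper texture coordinates accordingly.
uint32_t NextPowerOfTwo(uint32_t n) {
    uint32_t p = 1;
    while (p < n) p <<= 1;
    return p;
}
```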