After loading an image, I have the individual bytes for each channel in an array of unsigned chars. That array is passed to a function that projects it as a texture onto a quad. Everything seems to work properly except the alpha channel, which shows up as the background color. I'm using OpenGL to draw the image. Would I benefit from adding a layering mechanism? And how can I achieve the transparency I want?
Note: this is the code that I suspect needs to be changed:
void SetUpView()
{
    // Set color and depth clear value
    glClearDepth(1.f);
    glClearColor(1.f, 0.f, 0.f, 0.f);

    // Enable Z-buffer read and write
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_TEXTURE_2D);
    glDepthMask(GL_TRUE);

    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

    // Setup a perspective projection
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(90.f, 1.f, 1.f, 500.f);
}
Also, here's the code to render the quad.
void DrawTexturedRect(RectBounds *Verts, Image *Texture)
{
    glBindTexture(GL_TEXTURE_2D, GetTextureID(Texture));
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glPixelZoom(1, -1);

    glBegin(GL_QUADS);
        glTexCoord2f(0.0, 1.0);
        glVertex3f(Verts->corners[0].x, Verts->corners[0].y, Verts->corners[0].z);
        glTexCoord2f(1.0, 1.0);
        glVertex3f(Verts->corners[1].x, Verts->corners[1].y, Verts->corners[1].z);
        glTexCoord2f(1.0, 0.0);
        glVertex3f(Verts->corners[2].x, Verts->corners[2].y, Verts->corners[2].z);
        glTexCoord2f(0.0, 0.0);
        glVertex3f(Verts->corners[3].x, Verts->corners[3].y, Verts->corners[3].z);
    glEnd();
}
The Image class holds an array of unsigned chars, obtained using OpenIL.
Relevant code:
loaded = ilLoadImage(filename.c_str());
ilConvertImage(IL_RGBA, IL_UNSIGNED_BYTE);
unsigned char *bytes = ilGetData();
//NewImage is an instance of an Image. This is returned and passed to the above function.
NewImage->data = bytes;
Before drawing anything transparent you should call:
glDepthMask(false);
And then afterwards:
glDepthMask(true);
Also, all transparent objects must be drawn after all opaque ones.
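A minimal sketch of that ordering; DrawOpaqueGeometry is a hypothetical stand-in for whatever opaque rendering you do, and DrawTexturedRect is the function from the question:

    // 1. Opaque geometry first, with depth writes enabled.
    glDepthMask(GL_TRUE);
    DrawOpaqueGeometry();            // hypothetical stand-in for your opaque passes

    // 2. Transparent geometry afterwards, with depth writes off so one
    //    transparent surface can't block another from being blended.
    glDepthMask(GL_FALSE);
    DrawTexturedRect(verts, image);  // the textured quad from the question

    // 3. Restore depth writes for the next frame's clear and opaque pass.
    glDepthMask(GL_TRUE);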
glClearColor(1.f, 0.f, 0.f, 0.f);
Those arguments are in RGBA order, so you're setting the red component of the clear color to 1.0.
How about a better explanation of what you're trying to accomplish?
Try:
glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Edit: I see what is happening in that screenshot. It looks like you are drawing the blue square (which, by its Z position, should be behind the cursor) AFTER the cursor. When alpha blending, you have to draw things in the correct back-to-front Z-order, or you get errors like the ones you're seeing.
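For reference, GL_ONE / GL_ONE_MINUS_SRC_ALPHA is the blend for premultiplied alpha; the RGBA data DevIL hands you is, as far as I know, straight (non-premultiplied) alpha, so the source color needs to be scaled by the source alpha. A sketch of the relevant lines in SetUpView, with everything else unchanged:

    glEnable(GL_BLEND);
    // Straight (non-premultiplied) alpha from the texture:
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);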
The code below is nearly identical to the code from this NeHe tutorial. The only difference between my code and the code in the tutorial is that I am using SFML for the window context, which should not be relevant. To view the entire source code, go here. A snippet of the relevant code is below (the comments are from NeHe):
// Clip Plane Equations
double eqr[] = {0.0f,-1.0f, 0.0f, 0.0f}; // Plane Equation

glColorMask(0,0,0,0); // Set Color Mask
glEnable(GL_STENCIL_TEST); // Enable Stencil Buffer For "marking" The Floor
glStencilFunc(GL_ALWAYS, 1, 1); // Always Passes, 1 Bit Plane, 1 As Mask
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE); // We Set The Stencil Buffer To 1 Where We Draw Any Polygon
// Keep If Test Fails, Keep If Test Passes But Buffer Test Fails
// Replace If Test Passes
glDisable(GL_DEPTH_TEST); // Disable Depth Testing
DrawFloor(); // Draw The Floor (Draws To The Stencil Buffer)
// We Only Want To Mark It In The Stencil Buffer
glEnable(GL_DEPTH_TEST); // Enable Depth Testing
glColorMask(1,1,1,1); // Set Color Mask to TRUE, TRUE, TRUE, TRUE
glStencilFunc(GL_EQUAL, 1, 1); // We Draw Only Where The Stencil Is 1
// (I.E. Where The Floor Was Drawn)
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP); // Don't Change The Stencil Buffer
glEnable(GL_CLIP_PLANE0); // Enable Clip Plane For Removing Artifacts
// (When The Object Crosses The Floor)
glClipPlane(GL_CLIP_PLANE0, eqr); // Equation For Reflected Objects
glPushMatrix(); // Push The Matrix Onto The Stack
glScalef(1.0f, -1.0f, 1.0f); // Mirror Y Axis
glLightfv(GL_LIGHT0, GL_POSITION, LightPos); // Set Up Light0
glTranslatef(0.0f, height, 0.0f); // Position The Object
DrawObject(); // Draw The Sphere (Reflection)
glPopMatrix(); // Pop The Matrix Off The Stack
glDisable(GL_CLIP_PLANE0); // Disable Clip Plane For Drawing The Floor
glDisable(GL_STENCIL_TEST); // We Don't Need The Stencil Buffer Any More (Disable)
glLightfv(GL_LIGHT0, GL_POSITION, LightPos); // Set Up Light0 Position
glEnable(GL_BLEND); // Enable Blending (Otherwise The Reflected Object Wont Show)
glDisable(GL_LIGHTING); // Since We Use Blending, We Disable Lighting
glColor4f(1.0f, 1.0f, 1.0f, 0.8f); // Set Color To White With 80% Alpha
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // Blending Based On Source Alpha And 1 Minus Dest Alpha
DrawFloor(); // Draw The Floor To The Screen
glEnable(GL_LIGHTING); // Enable Lighting
glDisable(GL_BLEND); // Disable Blending
glTranslatef(0.0f, height, 0.0f); // Position The Ball At Proper Height
DrawObject();
The final result of this code can be seen below:
How do I alter the above code so that the bottom (reflected) sphere appears only on the plane instead of outside of it?
Well, do you actually create a GL context with a stencil buffer? The only relevant line for context creation in your code seems to be
sf::RenderWindow window(sf::VideoMode(800, 600, 32), "Test");
and that is not very specific. I don't know SFML, but why do you assume that the code for context creation isn't relevant here?
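As a sketch, in SFML 2.x you can explicitly request stencil (and depth) bits through sf::ContextSettings when constructing the window; SFML 1.6 has a similar sf::WindowSettings parameter. I'm assuming 2.x here:

    sf::ContextSettings settings;
    settings.depthBits   = 24;   // bits for the depth buffer
    settings.stencilBits = 8;    // request an 8-bit stencil buffer
    sf::RenderWindow window(sf::VideoMode(800, 600, 32), "Test",
                            sf::Style::Default, settings);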
I am creating a 3D game. I have objects in my game. When an enemy hits my position I want my screen to go red for a short time. I have chosen to do this by trying to render a full screen red square at my camera position. This is my attempt which is in my render method.
RenderQuadTerrain();
//Draw the skybox
CreateSkyBox(vNewPos.x, vNewPos.y, vNewPos.z,3500,3000,3500);
DrawCoins();
CollisionTest(g_Camera.Position().x, g_Camera.Position().y, g_Camera.Position().z);
DrawEnemy();
DrawEnemy1();
//Draw SecondaryObjects models
DrawSecondaryObjects();
//Apply lighting effects
LightingEffects();
escapeAttempt();
if (hitbyenemy == true) {
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE); // additive blending
    float blendFactor = 1.0;
    glColor3f(blendFactor, 0, 0); // when blendFactor = 0, the quad won't be visible. When blendFactor = 1, the scene will be bathed in redness
    glBegin(GL_QUADS);                  // Draw A Quad
        glVertex3f(-1.0f,  1.0f, 0.0f); // Top Left
        glVertex3f( 1.0f,  1.0f, 0.0f); // Top Right
        glVertex3f( 1.0f, -1.0f, 0.0f); // Bottom Right
        glVertex3f(-1.0f, -1.0f, 0.0f); // Bottom Left
    glEnd();
}
All this does, however, is turn all of the objects in my game a transparent colour, and I can't see the square anywhere. I don't even know how to position the quad. I'm very new to OpenGL.
How my game looks without an attempt to render a quad:
How my game looks after my attempt:
With Kevin's code and glDisable(GL_DEPTH_TEST);
EDIT: I have changed the code to the paste below... it still looks like image 1.
http://pastebin.com/eiVFcQqM
There are several possible contributions to the problem:
You probably want regular blending, not additive blending; additive blending will not turn white, yellow, or purple objects red. Change the blend func to glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); and use a color of glColor4f(1, 0, 0, blendFactor);
You should glDisable(GL_DEPTH_TEST); while drawing the overlay, to prevent it from being hidden by other geometry, and reenable it afterward (or use glPush/PopAttrib(GL_ENABLE_BIT)).
The projection and modelview matrices should be the identity, to ensure a quad with those coordinates covers the entire screen. (However, you may have that implicitly already, since you say it is affecting the full screen, just not in the right way.)
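Putting those suggestions together, a minimal sketch of the overlay pass (hitbyenemy and blendFactor are from your code; the matrix pushes are only needed if your matrices aren't already the identity at this point):

    if (hitbyenemy) {
        glPushAttrib(GL_ENABLE_BIT | GL_COLOR_BUFFER_BIT | GL_CURRENT_BIT);
        glDisable(GL_DEPTH_TEST);       // don't let scene geometry hide the overlay
        glDisable(GL_TEXTURE_2D);
        glDisable(GL_LIGHTING);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  // regular alpha blending

        glMatrixMode(GL_PROJECTION);    // identity matrices so (-1..1) covers the screen
        glPushMatrix();
        glLoadIdentity();
        glMatrixMode(GL_MODELVIEW);
        glPushMatrix();
        glLoadIdentity();

        float blendFactor = 0.5f;       // 0 = invisible, 1 = solid red
        glColor4f(1.0f, 0.0f, 0.0f, blendFactor);
        glBegin(GL_QUADS);
            glVertex3f(-1.0f,  1.0f, 0.0f);
            glVertex3f( 1.0f,  1.0f, 0.0f);
            glVertex3f( 1.0f, -1.0f, 0.0f);
            glVertex3f(-1.0f, -1.0f, 0.0f);
        glEnd();

        glMatrixMode(GL_PROJECTION);
        glPopMatrix();
        glMatrixMode(GL_MODELVIEW);
        glPopMatrix();
        glPopAttrib();                  // restores depth test, blend, lighting, etc.
    }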
If these suggestions do not fix it, please edit your question showing screenshots of your game with and without the red flash so we can understand the problem better.
I just started working with OpenGL, but I ran into a problem after implementing a Font system.
My plan is to simply visualize several Pathfinding Algorithms.
Currently OpenGL gets set up like this (OnSize is called manually once on window creation):
void GLWindow::OnSize(GLsizei width, GLsizei height)
{
    // set size
    glViewport(0, 0, width, height);

    // orthographic projection
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, width, height, 0.0, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);

    m_uiWidth = width;
    m_uiHeight = height;
}
void GLWindow::InitGL()
{
    // enable 2D texturing
    glEnable(GL_TEXTURE_2D);
    // choose a smooth shading model
    glShadeModel(GL_SMOOTH);
    // set the clear color to black
    glClearColor(0.0, 0.0, 0.0, 0.0);

    glEnable(GL_ALPHA_TEST);
    glAlphaFunc(GL_GREATER, 0.0f);
}
In theory I don't need blending, because I will only use untextured quads to visualize obstacles, and lines etc. to draw paths... So everything will be untextured, except the fonts...
The font class has a push and a pop function that look like this (if I remember correctly, my font system is based on a NeHe tutorial I followed quite a while ago):
inline void GLFont::pushScreenMatrix()
{
    glPushAttrib(GL_TRANSFORM_BIT);
    GLint viewport[4];
    glGetIntegerv(GL_VIEWPORT, viewport);
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho(viewport[0], viewport[2], viewport[1], viewport[3], -1.0, 1.0);
    glPopAttrib();
}

inline void GLFont::popProjectionMatrix()
{
    glPushAttrib(GL_TRANSFORM_BIT);
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glPopAttrib();
}
So, the problem:
If I don't draw any text, I can see the quads I want to draw, but they are quite dark, so there must be something wrong with my general OpenGL matrix properties.
If I do draw text (so the font-related push and pop functions get called), I can't see any quads at all.
The question:
How do I solve this problem? Some background on why it happens would also be nice, because I am still a beginner/student who has just started.
If you draw untextured quads while texturing is still enabled, you get effectively undefined results. What will probably happen is that whatever texture was previously bound is still used, and the colour at texture coordinate (0,0) will be applied, which could be what is making them invisible.
Really, you need to disable texturing with glDisable(GL_TEXTURE_2D) before drawing untextured quads. If you don't, OpenGL will just use the previously bound texture and texture coordinates, which, without seeing your draw() loop, I'm assuming are undefined.
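A sketch of that idea inside the draw loop (DrawObstacleQuads and DrawPathLines are hypothetical stand-ins for your own drawing code; font is an instance of your GLFont class):

    glDisable(GL_TEXTURE_2D);    // untextured geometry: obstacle quads, path lines
    DrawObstacleQuads();
    DrawPathLines();

    glEnable(GL_TEXTURE_2D);     // re-enable texturing only for the textured font
    font.pushScreenMatrix();
    // ... render text ...
    font.popProjectionMatrix();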
Just wondering if someone can help me track down my issue with the following code, where the text color is not being set correctly (it's just rendering in whatever color is in the background).
void RenderText(int x, int y, const char *string)
{
    int i, len;

    glUseProgram(0);
    glLoadIdentity();
    glColor3f(1.0f, 1.0f, 1.0f);
    glTranslatef(0.0f, 0.0f, -5.0f);
    glRasterPos2i(x, y);

    glDisable(GL_TEXTURE_2D);
    for (i = 0, len = strlen(string); i < len; i++)
    {
        glutBitmapCharacter(GLUT_BITMAP_8_BY_13, (int)string[i]);
    }
    glEnable(GL_TEXTURE_2D);
}
I've checked all the usual things (I think): disabling texturing, setting the color before calling glRasterPos, etc. I've also disabled shaders, but I'm still having issues.
Looks like you've forgotten to glDisable(GL_LIGHTING) before drawing your string.
No color is stored with an OpenGL bitmap (which is what glutBitmapCharacter creates); the bitmap is monochrome and stores only shape.
When the bitmap is drawn (e.g. with glBitmap or glCallLists), the current raster color is used. The raster color is not always the same as the current color; see http://www.opengl.org/wiki/Coloring_a_bitmap.
Color is usually controlled with the glColor3f function, so if the text is white and shouldn't be, the following change should help:
glLoadIdentity();
glColor3f(0.5f, 0.5f, 0.5f); //<-- this line controls the color (now text is gray)
glTranslatef(0.0f, 0.0f, -5.0f);
glRasterPos2i(x, y);
Also, calling glDisable(GL_TEXTURE_2D) and glEnable(GL_TEXTURE_2D) is unnecessary. Instead you can just call glBindTexture(GL_TEXTURE_2D,0) to disable textures and then use the same function to set the active texture. Just make sure to call glEnable(GL_TEXTURE_2D) in your initialization function.
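Combining that with the lighting point above, a sketch of RenderText that pins down the raster color before glRasterPos (the gray color is just an example; GLUT bitmap fonts as in your original code):

    void RenderText(int x, int y, const char *string)
    {
        glUseProgram(0);              // make sure the fixed-function path is used
        glDisable(GL_LIGHTING);       // lighting would otherwise modulate the color
        glDisable(GL_TEXTURE_2D);

        glLoadIdentity();
        glColor3f(0.5f, 0.5f, 0.5f);  // set the color BEFORE glRasterPos: the raster
        glTranslatef(0.0f, 0.0f, -5.0f);
        glRasterPos2i(x, y);          // color is latched when glRasterPos is called

        for (const char *c = string; *c != '\0'; ++c)
            glutBitmapCharacter(GLUT_BITMAP_8_BY_13, *c);

        glEnable(GL_LIGHTING);        // re-enable only if your scene uses lighting
        glEnable(GL_TEXTURE_2D);
    }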
I'm in the process of writing a wrapper for some OpenGL functions. The goal is to wrap the context used by the game Neverwinter Nights, in order to apply post-processing shader effects. After learning OpenGL (this is my first attempt to use it) and much playing with DLLs and redirection, I have a somewhat working system.
However, when the post-processing fullscreen quad is active, all texturing and transparency drawn by the game are lost. This shouldn't be possible, because all my functions take effect after the game has completely finished its own rendering.
The code does not use renderbuffers or framebuffers (both refused to compile on my system in any way, with or without GLEW or GLee, despite being supported and usable by other programs). Eventually, I put together this code to handle copying the texture from the buffer and rendering a fullscreen quad:
extern "C" SEND BOOL WINAPI hook_wglSwapLayerBuffers(HDC h, UINT v)
{
if ( frameCount > 250 )
{
frameCount++;
if ( frameCount == 750 ) frameCount = 0;
if ( nwshader->thisframe == NULL )
{
createTextures();
}
glBindTexture(GL_TEXTURE_2D, nwshader->thisframe);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0, nwshader->width, nwshader->height, 0);
glClearColor(0.0f, 0.5f, 0.0f, 0.5f);
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_TEXTURE_2D);
glDisable(GL_DEPTH_TEST);
glBlendFunc(GL_ONE, GL_ZERO);
glEnable(GL_BLEND);
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrtho( 0, nwshader->width , nwshader->height , 0, -1, 1 );
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
glBegin(GL_POLYGON);
glTexCoord2f(0.0f, 1.0f);
glVertex2d(0, 0);
glTexCoord2f(0.0f, 0.0f);
glVertex2d(0, nwshader->height);
glTexCoord2f(1.0f, 0.0f);
glVertex2d(nwshader->width, nwshader->height);
glTexCoord2f(1.0f, 1.0f);
glVertex2d(nwshader->width, 0);
glEnd();
glMatrixMode( GL_PROJECTION );
glPopMatrix();
glMatrixMode( GL_MODELVIEW );
glPopMatrix();
glEnable(GL_DEPTH_TEST);
glDisable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 0);
} else {
frameCount++;
}
if ( h == grabbedDevice )
{
Log->logline("Swapping buffer on cached device.");
}
return wglSwapLayerBuffers(h,v);
}
This code functions almost perfectly and has no notable slow-down. However, when it is active (I added the frameCount condition to turn it on and off every ~5 seconds), all alpha and texturing are completely ignored by the game renderer. I'm not turning off any kind of blending or texturing before this function (the only other OpenGL calls are the ones that create the nwshader->thisframe texture).
I was able to catch a few screenshots of what's happening:
Broken A: http://i4.photobucket.com/albums/y145/peachykeen000/outside_brokenA.png
Broken B: http://i4.photobucket.com/albums/y145/peachykeen000/outside_brokenB.png
(note, in B, the smoke in the back is not broken, it is correctly transparent. So is the HUD.)
Broken Interior: http://i4.photobucket.com/albums/y145/peachykeen000/transparency_broken.png
Correct Interior (for comparison): http://i4.photobucket.com/albums/y145/peachykeen000/transparency_proper.png
The drawing of the quad also breaks menus, turning the whole thing into a black surface with a single white box. I suspect it is a problem with either depth or how the game is drawing certain objects, or a state that is not being reset properly. I've used GLintercept to dump a full log of all calls in a frame, and didn't see anything wrong (the call to wglSwapLayerBuffers is always last).
Being brand new to working with OpenGL, I really have no clue what's going wrong (or how to fix it) and nothing I've tried has helped. What am I missing?
I don't quite understand how your code is supposed to integrate with the Neverwinter Nights code. However...
It seems like you're most likely changing some setting that the existing code didn't expect to change.
Based on the description of the problem, I'd try removing the following line:
glDisable(GL_TEXTURE_2D);
That line disables textures, which certainly sounds like the problem you're seeing.
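If removing that one line isn't enough, a more defensive sketch is to save and restore everything the hook touches with glPushAttrib/glPopAttrib, so the game's own enables, blend state, current color, and texture binding survive your pass (GL_ALL_ATTRIB_BITS is the heavier alternative; the mask below covers what this code changes):

    // At the top of hook_wglSwapLayerBuffers, before changing any GL state:
    glPushAttrib(GL_ENABLE_BIT | GL_COLOR_BUFFER_BIT | GL_CURRENT_BIT | GL_TEXTURE_BIT);

    // ... existing copy-texture and fullscreen-quad drawing code ...

    // Just before calling the real wglSwapLayerBuffers:
    glPopAttrib();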