OpenGL ignore object when blending? - c++

I'm rendering a quad with an outline. I do this by rendering a quad of the required size in black, then rendering the "inner" quad smaller than the first. I also want this outlined quad to be transparent.
My current attempt renders like this:
However, because I'm essentially rendering two quads on top of each other, the black "main" quad blends through to the smaller "inner" quad. I don't want that; I only want the background and the "inner" quad colors to blend.
Other than rendering separate quads/triangles for the outline, is there a way (ideally without shaders, since I'm using the fixed pipeline) to make the blending ignore, or simply not render, the "main" quad where the "inner" quad is drawn?
Relevant code:
Blend mode:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glEnable( GL_BLEND );
Rendering the box:
struct vertex
{
    GLfloat x, y;
};

void draw_box()
{
    glPushMatrix();
    glTranslatef( -3.0f, 1.12f, 1.0f );

    // Draw a huge black quad
    const vertex vertices2[] =
    {
        { -1.0f, -1.0f },
        {  0.0f,  0.0f },
        { -1.0f,  0.0f },
        { -1.0f, -1.0f },
        {  0.0f,  0.0f },
        {  0.0f, -1.0f },
    };
    glVertexPointer(2, GL_FLOAT, 0, vertices2);
    glColor4f(0.0f, 0.0f, 0.0f, 0.5f);
    glDrawArrays(GL_TRIANGLES, 0, 6);

    // Draw a smaller grey quad
    const GLfloat space = 0.04f;
    const vertex vertices3[] =
    {
        { -1.0f+space, -1.0f+space },
        {  0.0f-space,  0.0f-space },
        { -1.0f+space,  0.0f-space },
        { -1.0f+space, -1.0f+space },
        {  0.0f-space,  0.0f-space },
        {  0.0f-space, -1.0f+space },
    };
    glVertexPointer(2, GL_FLOAT, 0, vertices3);
    glColor4f(0.5f, 0.5f, 0.5f, 0.5f);
    glDrawArrays(GL_TRIANGLES, 0, 6);

    glPopMatrix();
}

The best way to do this is probably to split up the border into four trapezoidal quads, but since you explicitly mentioned you didn't want to go down that path, there are a couple of other options you could pursue.
The next best method is probably to use the depth buffer: glEnable(GL_DEPTH_TEST). Draw the inside quad first (with blending enabled). Then draw the outer quad, but using coordinates such that its depth is greater than the first quad's (it is "farther away"). With depth testing enabled when the border quad is drawn, the test will fail wherever the first quad was already drawn, since those pixels have a smaller depth value. Note that with this method, if you are writing to the depth buffer when rendering the things behind these quads, you may run into issues with parts of the background being considered closer than the new quads. You can solve this for the inner quad by making the depth test always pass for it: glDepthFunc(GL_ALWAYS). This won't help for the outer quad, however; for that case, the stencil method described below might be helpful.
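Roughly, the depth-buffer version could look like this, as a sketch based on the question's draw_box(). It assumes the usual camera looking down the negative Z axis (so a more negative z is farther away); the draw_quad helper and the -0.01f offset are my own inventions, not part of the original code:

// Hypothetical helper: draws an axis-aligned quad at depth z (immediate mode for brevity).
static void draw_quad(GLfloat x0, GLfloat y0, GLfloat x1, GLfloat y1, GLfloat z)
{
    glBegin(GL_QUADS);
    glVertex3f(x0, y0, z);
    glVertex3f(x1, y0, z);
    glVertex3f(x1, y1, z);
    glVertex3f(x0, y1, z);
    glEnd();
}

void draw_box_depth()
{
    const GLfloat space = 0.04f;

    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_DEPTH_TEST);

    // Inner quad first; GL_ALWAYS so it isn't rejected by whatever the rest
    // of the scene already wrote into the depth buffer.
    glDepthFunc(GL_ALWAYS);
    glColor4f(0.5f, 0.5f, 0.5f, 0.5f);
    draw_quad(-1.0f + space, -1.0f + space, 0.0f - space, 0.0f - space, 0.0f);

    // Outer quad slightly farther away; the depth test now rejects it
    // wherever the inner quad has already written, so nothing blends twice.
    glDepthFunc(GL_LESS);
    glColor4f(0.0f, 0.0f, 0.0f, 0.5f);
    draw_quad(-1.0f, -1.0f, 0.0f, 0.0f, -0.01f);
}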
You can use the stencil buffer in a similar manner to the depth buffer. When rendering the inside quad, you set the stencil value (or one bit of it at least) to a specific value:
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 1, GLuint(-1));
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
Then when you draw the outer quad, configure the stencil test to succeed only if a fragment does not have that value:
glStencilFunc(GL_NOTEQUAL, 1, 1);
Essentially, this is the same thing the version above was doing with the depth buffer, but uses the stencil buffer instead. You may want to take a look at the documentation for glStencilFunc and glStencilOp.
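For reference, here is roughly how the stencil version could slot into the question's draw_box(). This is an untested sketch; it assumes the framebuffer actually has a stencil buffer, that it is cleared each frame with GL_STENCIL_BUFFER_BIT, that GL_VERTEX_ARRAY is already enabled, and that vertices2/vertices3 are the arrays from the question:

void draw_box_stencil()
{
    glPushMatrix();
    glTranslatef(-3.0f, 1.12f, 1.0f);

    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_STENCIL_TEST);

    // Inner quad first: always pass, and tag its pixels with stencil value 1.
    glStencilFunc(GL_ALWAYS, 1, GLuint(-1));
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    glColor4f(0.5f, 0.5f, 0.5f, 0.5f);
    glVertexPointer(2, GL_FLOAT, 0, vertices3);   // inner quad vertices from the question
    glDrawArrays(GL_TRIANGLES, 0, 6);

    // Outer quad second: only draw where the stencil is NOT 1, i.e. outside
    // the inner quad, so the black quad never blends underneath the grey one.
    glStencilFunc(GL_NOTEQUAL, 1, 1);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    glColor4f(0.0f, 0.0f, 0.0f, 0.5f);
    glVertexPointer(2, GL_FLOAT, 0, vertices2);   // outer quad vertices from the question
    glDrawArrays(GL_TRIANGLES, 0, 6);

    glDisable(GL_STENCIL_TEST);
    glPopMatrix();
}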
There may be other ways to achieve this behavior, but all the other solutions I can think of involve using custom shaders, which you indicated is not acceptable. Personally, I'd just split up the border into 4 quads. Sure, it's 3 more quads than the other methods, but it doesn't require any extra OpenGL features or state changes.

Related

Outlining an object using a stencil buffer give wrong result

I'm trying to use the stencil buffer to draw outline border behind a model.
I'm using two render passes: first I render the model with GL_ALWAYS, writing 1 to the stencil buffer, and then I render a slightly scaled-up version of the model with GL_NOTEQUAL and a reference value of 1, so only the fragments outside the original cube pass, which produces the highlight.
I'm getting a weird result: the outline is not fully rendered, and when I move my camera enough it disappears completely, as if the stencil test for the scaled-up cube doesn't pass at all.
Here's the gist of the rendering loop (shader3 just outputs a color for the border):
while (!glfwWindowShouldClose(window))
{
    glClearColor(0.2f, 0.3f, 0.3f, 1.0f);
    glClearStencil(0);
    renderer.Clear();
    ...
    glm::mat4 projection = glm::perspective(glm::radians(45.0f), (float)SCREEN_WIDTH / (float)SCREEN_HEIGHT, 0.1f, 100.0f);
    UniformMat4f projectionUniform("u_Projection");
    projectionUniform.SetValues(projection);

    glm::mat4 view = camera.GetView();
    UniformMat4f viewUniform("u_View");
    viewUniform.SetValues(view);

    modelUniform.SetValues(glm::translate(glm::mat4(1.0f), glm::vec3(5.0f, 0.0f, 0.0f)));
    shader1.SetUniform(projectionUniform);
    shader1.SetUniform(viewUniform);
    shader1.SetUniform(modelUniform);

    glStencilFunc(GL_ALWAYS, 1, 0xFF);
    glStencilMask(0xFF);
    renderer.Draw(model, shader1);

    modelUniform.SetValues(glm::scale(glm::translate(glm::mat4(1.0f), glm::vec3(5.0f, 0.0f, 0.0f)), glm::vec3(1.1f, 1.1f, 1.1f)));
    shader3.SetUniform(projectionUniform);
    shader3.SetUniform(viewUniform);
    shader3.SetUniform(modelUniform);

    glStencilFunc(GL_NOTEQUAL, 1, 0xFF);
    glStencilMask(0x00);
    glDisable(GL_DEPTH_TEST);
    renderer.Draw(model, shader3);
    glEnable(GL_DEPTH_TEST);
}
I first tried it with the nanosuit model from Crysis, and it worked as I described: not showing all the parts and eventually disappearing when moving the camera. I decided to try simple geometry to test this, and outlining a simple cube gave the same result:
The issue is the call to glStencilMask(0x00); at the end of the main loop. The default value of the stencil mask is 0xFF (for an 8-bit stencil buffer; see glStencilMask). OpenGL is a state engine: a state is kept until it is changed again, even across frames.
glClear respects the state of the color, depth and stencil write masks. If the stencil mask is 0x00, the stencil buffer isn't cleared at all.
Set the stencil mask to 0xFF before you clear the stencil buffer to solve the issue:
glStencilMask(0xFF);
renderer.Clear();
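In context, the top of the render loop then becomes the following (only the glStencilMask call before the clear is new; the rest is the question's own code, abbreviated):

while (!glfwWindowShouldClose(window))
{
    glClearColor(0.2f, 0.3f, 0.3f, 1.0f);
    glClearStencil(0);
    glStencilMask(0xFF);   // stencil writes must be enabled, or glClear can't touch the stencil buffer
    renderer.Clear();

    // ... first pass:  glStencilFunc(GL_ALWAYS, 1, 0xFF);   glStencilMask(0xFF); ...
    // ... second pass: glStencilFunc(GL_NOTEQUAL, 1, 0xFF); glStencilMask(0x00); ...
}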

Why does my OpenGL reflection fail to clip to the stenciled area?

The below code is nearly identical to the code retrieved from this NeHe tutorial. The only difference between my code and the code on the tutorial is that I am using SFML for window context, which should not be relevant. To view the entire source code, go here. A snippet of the relevant code is below (the comments are from NeHe):
// Clip Plane Equations
double eqr[] = {0.0f,-1.0f, 0.0f, 0.0f};                    // Plane Equation
glColorMask(0,0,0,0);                                       // Set Color Mask
glEnable(GL_STENCIL_TEST); // Enable Stencil Buffer For "marking" The Floor
glStencilFunc(GL_ALWAYS, 1, 1); // Always Passes, 1 Bit Plane, 1 As Mask
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE); // We Set The Stencil Buffer To 1 Where We Draw Any Polygon
// Keep If Test Fails, Keep If Test Passes But Buffer Test Fails
// Replace If Test Passes
glDisable(GL_DEPTH_TEST); // Disable Depth Testing
DrawFloor(); // Draw The Floor (Draws To The Stencil Buffer)
// We Only Want To Mark It In The Stencil Buffer
glEnable(GL_DEPTH_TEST); // Enable Depth Testing
glColorMask(1,1,1,1); // Set Color Mask to TRUE, TRUE, TRUE, TRUE
glStencilFunc(GL_EQUAL, 1, 1); // We Draw Only Where The Stencil Is 1
// (I.E. Where The Floor Was Drawn)
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP); // Don't Change The Stencil Buffer
glEnable(GL_CLIP_PLANE0); // Enable Clip Plane For Removing Artifacts
// (When The Object Crosses The Floor)
glClipPlane(GL_CLIP_PLANE0, eqr); // Equation For Reflected Objects
glPushMatrix(); // Push The Matrix Onto The Stack
glScalef(1.0f, -1.0f, 1.0f); // Mirror Y Axis
glLightfv(GL_LIGHT0, GL_POSITION, LightPos); // Set Up Light0
glTranslatef(0.0f, height, 0.0f); // Position The Object
DrawObject(); // Draw The Sphere (Reflection)
glPopMatrix(); // Pop The Matrix Off The Stack
glDisable(GL_CLIP_PLANE0); // Disable Clip Plane For Drawing The Floor
glDisable(GL_STENCIL_TEST); // We Don't Need The Stencil Buffer Any More (Disable)
glLightfv(GL_LIGHT0, GL_POSITION, LightPos); // Set Up Light0 Position
glEnable(GL_BLEND); // Enable Blending (Otherwise The Reflected Object Wont Show)
glDisable(GL_LIGHTING); // Since We Use Blending, We Disable Lighting
glColor4f(1.0f, 1.0f, 1.0f, 0.8f); // Set Color To White With 80% Alpha
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // Blending Based On Source Alpha And 1 Minus Dest Alpha
DrawFloor(); // Draw The Floor To The Screen
glEnable(GL_LIGHTING); // Enable Lighting
glDisable(GL_BLEND); // Disable Blending
glTranslatef(0.0f, height, 0.0f); // Position The Ball At Proper Height
DrawObject();
The final result of this code can be seen below:
How do I alter the above code to cause the bottom (reflected) sphere to appear only on the plane instead of outside of it?
Well, do you actually create a GL context with a stencil buffer? The only relevant line for context creation in your code seems to be
sf::RenderWindow window(sf::VideoMode(800, 600, 32), "Test");
and that is not very specific. I don't know SFML, but why do you think the way the context is created isn't relevant here? That is exactly the point where a stencil buffer has to be requested.
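If SFML works like most windowing libraries, the stencil buffer has to be requested when the window/context is created. With SFML 2.x that would look roughly like this (untested; treat it as a hint rather than a drop-in fix):

sf::ContextSettings settings;
settings.depthBits   = 24;   // depth buffer
settings.stencilBits = 8;    // request an 8-bit stencil buffer
sf::RenderWindow window(sf::VideoMode(800, 600, 32), "Test",
                        sf::Style::Default, settings);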

ShadowMap confuse

I am confused about shadow mapping. Here's what I've understood (the following steps are not working :) )
Here is how I set it up (please don't get confused by the code, it is rough because I write in Java):
1. Create empty depthTexture (mine is 1024x1024) with parameters
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE) and
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, GL_NULL)
2. Create FBO and attach that texture to it
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, depthTexture, 0)
3. Setup new projection and view matrix for lighting camera
I used the same kind of matrices as for my main camera, just with different coordinates, because the scene looks better that way (I tried it out, so there's no problem there... I guess).
4. Create a new little shader that determines gl_Position for the FBO pass, and modify the main shader with the fancy stuff (bias matrix * lightCamera matrix * vertex, sampler2Dshadow and more)
5. Of course uniform everything and do something.
And now RENDER LOOP
1. Of course uniform everything and do something(again).
2. Bind FBO, bind little shader, glViewport(0, 0, 1024, 1024), colorMask to false, clear depth buffer, glCullFace(GL_FRONT), glBindTexture(GL_TEXTURE_2D, 0)
3. Render everything using only vertex position attribs completely for nothing
4. Unbind FBO, rebind program to main shader (that one with fancy stuff), glViewport, enable colorMask, glCullFace(GL_BACK)
5. Render the scene normally, without ever thinking about "how the hell is the main shader going to get its sampler2DShadow, since we don't bind it, don't set its uniform, and don't even touch it"
6. Watch countless glitches, bugs and pixel orgy
Actually, I tried setting the sampler uniform to the depthTexture, but I only got a black screen. And even if I don't render to the FBO at all, I get the same picture as when I do.
Can someone explain what I am missing?
I feel like the shader uses the diffuse texture twice, once as the diffuse texture and once as the depthTexture, but I don't know how to hand it the actual depthTexture.
here is a good link to read: http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-16-shadow-mapping/
and here: http://www.paulsprojects.net/tutorials/smt/smt.html (although the second link uses old fixed function opengl)
in general:
create one depth texture
attach this texture to FBO
bind FBO, setup proper viewport
render scene from light pos, save only depth values to your texture
unbind FBO and setup final scene viewport and camera position
render scene normally using shadow test (sampler2DShadow)
you can render your depth map always (in render loop) or only when light pos changes.
I do not know why you are rendering your scene normally twice... are you using a Z-prepass or something? Just try the basic version, I think.
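To make that list concrete, the depth-texture and FBO setup might look roughly like this in C++ (a sketch for a GL 3.2+ context; the names depthTex and shadowFbo are my own):

GLuint depthTex = 0, shadowFbo = 0;

// Depth texture the light pass renders into, and the main pass samples as sampler2DShadow.
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 1024, 1024, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// Needed for sampler2DShadow (GL_COMPARE_REF_TO_TEXTURE is the core name for GL_COMPARE_R_TO_TEXTURE):
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);

// Depth-only FBO.
glGenFramebuffers(1, &shadowFbo);
glBindFramebuffer(GL_FRAMEBUFFER, shadowFbo);
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, depthTex, 0);
glDrawBuffer(GL_NONE);   // no color attachment, depth only
glReadBuffer(GL_NONE);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    { /* handle error */ }
glBindFramebuffer(GL_FRAMEBUFFER, 0);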
my old code with shadow maps:
void RenderShadowMap()
{
    currDepth->Bind();
    glClear(GL_DEPTH_BUFFER_BIT);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gLightCam.SetProjectionMatrix();
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gLightCam.SetViewMatrix();

    glColorMask(false, false, false, false);
    glUseProgram(0);                    // draw without any shaders... just default depth

    glEnable(GL_POLYGON_OFFSET_FILL);
    glPolygonOffset(offFactor, offUnits);
    SimpleScene(false);                 // floor does not cast shadow so do not render it
    glDisable(GL_POLYGON_OFFSET_FILL);

    glColorMask(true, true, true, true);
}
render scene:
// compose shadow matrix:
MATRIX4X4 bias(0.5f, 0.0f, 0.0f, 0.0f,
               0.0f, 0.5f, 0.0f, 0.0f,
               0.0f, 0.0f, 0.5f, 0.0f,
               0.5f, 0.5f, 0.5f, 1.0f);
MATRIX4X4 *invCam = gSphericalCam.GetInvViewMatrix();
MATRIX4X4 smMat = (*gLightCam.GetViewProjMatrix()) * (*invCam);
gShaderProgramManager->GetProgram("shadow")->Use();
gShaderProgramManager->GetProgram("shadow")->SetMatrix("shadowMat", &smMat);
gShaderProgramManager->GetProgram("shadow")->SetBool("useShadow", currCam != &gLightCam && useShadow);
gShaderProgramManager->GetProgram("shadow")->SetFloat("shadowMapSize", (float)currDepth->GetWidth());
glActiveTexture(GL_TEXTURE0);
glEnable(GL_TEXTURE_2D);
glColor3f(1.0f, 1.0f, 1.0f);
gTextureManager->Bind("default");
glActiveTexture(GL_TEXTURE1);
currDepth->BindDepthAsTexture();
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB, GL_COMPARE_R_TO_TEXTURE_ARB);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC_ARB, GL_LEQUAL);
SimpleScene();
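The part the question seems to be missing is that a sampler2DShadow uniform is just an integer naming a texture unit; the depth texture has to be bound to that unit and the uniform set once. Stripped of the wrappers used above, that could look like the sketch below (mainShaderProgram, diffuseTex, depthTex and the uniform names are assumptions, not names from the code above):

glUseProgram(mainShaderProgram);

glActiveTexture(GL_TEXTURE0);                    // diffuse texture on unit 0
glBindTexture(GL_TEXTURE_2D, diffuseTex);
glUniform1i(glGetUniformLocation(mainShaderProgram, "u_Diffuse"), 0);

glActiveTexture(GL_TEXTURE1);                    // shadow map (depth texture) on unit 1
glBindTexture(GL_TEXTURE_2D, depthTex);
glUniform1i(glGetUniformLocation(mainShaderProgram, "u_ShadowMap"), 1);
// In the fragment shader: uniform sampler2DShadow u_ShadowMap;  // reads unit 1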

How to draw a full screen quad and still see the objects behind it

I am creating a 3D game. I have objects in my game. When an enemy hits my position I want my screen to go red for a short time. I have chosen to do this by trying to render a full screen red square at my camera position. This is my attempt which is in my render method.
RenderQuadTerrain();
//Draw the skybox
CreateSkyBox(vNewPos.x, vNewPos.y, vNewPos.z,3500,3000,3500);
DrawCoins();
CollisionTest(g_Camera.Position().x, g_Camera.Position().y, g_Camera.Position().z);
DrawEnemy();
DrawEnemy1();
//Draw SecondaryObjects models
DrawSecondaryObjects();
//Apply lighting effects
LightingEffects();
escapeAttempt();
if (hitbyenemy == true)
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);        // additive blending
    float blendFactor = 1.0;
    glColor3f(blendFactor, 0, 0);       // when blendFactor = 0, the quad won't be visible; when blendFactor = 1, the scene will be bathed in redness
    glBegin(GL_QUADS);                  // Draw A Quad
    glVertex3f(-1.0f,  1.0f, 0.0f);     // Top Left
    glVertex3f( 1.0f,  1.0f, 0.0f);     // Top Right
    glVertex3f( 1.0f, -1.0f, 0.0f);     // Bottom Right
    glVertex3f(-1.0f, -1.0f, 0.0f);     // Bottom Left
    glEnd();
}
All this does, however, is turn all of the objects in my game a transparent colour, and I can't see the square anywhere. I don't even know how to position the quad. I'm very new to OpenGL.
How my game looks without an attempt to render a quad:
How my game looks after my attempt:
With Kevin's code and glDisable(GL_DEPTH_TEST);
EDIT: I have changed the code to the below paste..still looks like image 1.
http://pastebin.com/eiVFcQqM
There are several possible contributions to the problem:
You probably want regular blending, not additive blending; additive blending will not turn white, yellow, or purple objects red. Change the blend func to glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); and use a color of glColor4f(1, 0, 0, blendFactor);
You should glDisable(GL_DEPTH_TEST); while drawing the overlay, to prevent it from being hidden by other geometry, and reenable it afterward (or use glPush/PopAttrib(GL_ENABLE_BIT)).
The projection and modelview matrices should be the identity, to ensure a quad with those coordinates covers the entire screen. (However, you may have that implicitly already, since you say it is affecting the full screen, just not in the right way.)
If these suggestions do not fix it, please edit your question showing screenshots of your game with and without the red flash so we can understand the problem better.
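Putting those three points together, the overlay pass might look roughly like this (a sketch reusing the question's hitbyenemy flag; the blendFactor of 0.5 is arbitrary):

if (hitbyenemy)
{
    glPushAttrib(GL_ENABLE_BIT | GL_CURRENT_BIT);
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();                   // identity projection -> the quad is in clip space
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();

    glDisable(GL_DEPTH_TEST);           // never hidden behind scene geometry
    glDisable(GL_LIGHTING);
    glDisable(GL_TEXTURE_2D);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);   // regular alpha blending

    float blendFactor = 0.5f;           // 0 = invisible, 1 = solid red
    glColor4f(1.0f, 0.0f, 0.0f, blendFactor);
    glBegin(GL_QUADS);
    glVertex3f(-1.0f,  1.0f, 0.0f);
    glVertex3f( 1.0f,  1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glEnd();

    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);
    glPopMatrix();
    glPopAttrib();
}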

Putting a texture on one surface of a cube isn't working

I'm trying to put a texture on one surface of a cube (if facing the XY plane the texture would be facing you).
No texture is getting drawn, only the wireframe, and I'm wondering what I'm doing wrong. I think it's the vertex coordinates?
Here's some code:
struct paperVertex
{
    D3DXVECTOR3 pos;
    DWORD color;            // The vertex color
    D3DXVECTOR2 texCoor;

    paperVertex(D3DXVECTOR3 p, DWORD c, D3DXVECTOR2 t) { pos = p; color = c; texCoor = t; }
    paperVertex() { pos = D3DXVECTOR3(0,0,0); color = 0; texCoor = D3DXVECTOR2(0,0); }
};
D3DCOLOR color1 = D3DCOLOR_XRGB(255, 255, 255);
D3DCOLOR color2 = D3DCOLOR_XRGB(200, 200, 200);
vertices[0] = paperVertex(D3DXVECTOR3(-1.0f, -1.0f, -1.0f), color1, D3DXVECTOR2(1,0)); // bottom left corner of tex
vertices[1] = paperVertex(D3DXVECTOR3(-1.0f, 1.0f, -1.0f), color1, D3DXVECTOR2(0,0)); // top left corner of tex
vertices[2] = paperVertex(D3DXVECTOR3( 1.0f, 1.0f, -1.0f), color1, D3DXVECTOR2(0,1)); // top right corner of tex
vertices[3] = paperVertex(D3DXVECTOR3(1.0f, -1.0f, -1.0f), color1, D3DXVECTOR2(1,1)); // bottom right corner of tex
vertices[4] = paperVertex(D3DXVECTOR3(-1.0f, -1.0f, 1.0f), color1, D3DXVECTOR2(0,0));
vertices[5] = paperVertex(D3DXVECTOR3(-1.0f, 1.0f, 1.0f), color2, D3DXVECTOR2(0,0));
vertices[6] = paperVertex(D3DXVECTOR3(1.0f, 1.0f, 1.0f), color2, D3DXVECTOR2(0,0));
vertices[7] = paperVertex(D3DXVECTOR3(1.0f, -1.0f, 1.0f), color1, D3DXVECTOR2(0,0));
D3DXCreateTextureFromFile( md3dDev, "texture.bmp", &gTexture);
md3dDev->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);
md3dDev->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
md3dDev->SetTexture(0, gTexture);
md3dDev->SetStreamSource(0, mVtxBuf, 0, sizeof(paperVertex));
md3dDev->SetVertexDeclaration(paperDecl);
md3dDev->SetRenderState(D3DRS_FILLMODE, D3DFILL_WIREFRAME);
md3dDev->SetIndices(mIndBuf);
md3dDev->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0, VTX_NUM, 0, NUM_TRIANGLES);
disclaimer: I have no Direct3D experience, but solid OpenGL and general computer graphics experience. And since the underlying concepts don't really differ, I attempt an answer, of whose correctness I'm 99% sure.
You call md3dDev->SetRenderState(D3DRS_FILLMODE, D3DFILL_WIREFRAME) immediately before rendering and wonder why only the wireframe is drawn?
Keep in mind that using a texture doesn't magically turn a wireframe model into a solid model. It is still a wireframe model with the texture only applied to the wireframe. You can only draw the whole primitve as wireframe or not.
Likewise, using texture coordinates of (0,0) does not magically disable texturing for individual faces. You can only draw the whole primitive textured or not, though you might play with the texture coordinates and the texture's wrapping mode (and maybe the texture border) to make the "non-textured" faces use a uniform color from the texture and thus look untextured.
But in general, to achieve such differing render styles (like textured/non-textured, but especially wireframe/solid) within a single primitive, you won't get around splitting the primitive into multiple ones and drawing each one with its dedicated render style.
EDIT: According to your comment: If you don't need wireframe, why enable it then? Besides disabling wireframe, with your current texture coordinates the other faces won't just have a single color from the texture but some strange distorted version of the texture. This is because your vertices (and their texture coordinates) are shared between different faces, but the texture coordinates at the moment are created only for the front face to look reasonable.
In such a situation you won't get around duplicating vertices, so that each face uses a set of 4 unique vertices. In the case of a cube you won't actually need an index array anymore, because each face needs its own vertices. This is because a vertex conceptually represents all of the vertex's attributes (position, color, texCoord, ...), and you cannot have two vertices that share a position but have different texture coordinates (well, you can, but then they are two distinct vertices). Once you've duplicated the vertices accordingly, you can give each of the corner vertices its respective texture coordinates (the usual [0,1] quad if you want the face textured normally, or all 0s if you want it to have a single color, in this case the color of the bottom left (or top left in D3D?) corner of the texture).
The same problem arises if you want to light the cube and need per-face normals instead of interpolated per-vertex normals. In this case you also have to introduce duplicate vertices deviating only in their normal attribute. Always keep in mind that a vertex conceptually consists of all its attributes, and if two vertices have the same position but a different color/normal/texCoord/..., they are conceptually (and practically) different vertices.
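As a sketch of what that means for the question's front face: give it four vertices of its own with a full [0,1] texture quad, and draw solid instead of wireframe (the remaining five faces would get their own four vertices the same way; untested, and note that in D3D (0,0) is the top-left of the texture):

// Front face only (z = -1), duplicated vertices with their own texture coordinates.
paperVertex front[4] =
{
    paperVertex(D3DXVECTOR3(-1.0f, -1.0f, -1.0f), color1, D3DXVECTOR2(0.0f, 1.0f)), // bottom left
    paperVertex(D3DXVECTOR3(-1.0f,  1.0f, -1.0f), color1, D3DXVECTOR2(0.0f, 0.0f)), // top left
    paperVertex(D3DXVECTOR3( 1.0f,  1.0f, -1.0f), color1, D3DXVECTOR2(1.0f, 0.0f)), // top right
    paperVertex(D3DXVECTOR3( 1.0f, -1.0f, -1.0f), color1, D3DXVECTOR2(1.0f, 1.0f)), // bottom right
};

// And draw solid instead of wireframe:
md3dDev->SetRenderState(D3DRS_FILLMODE, D3DFILL_SOLID);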