Is OpenGL supposed to set GL_BLEND to false when calling glClear()?

I spent ages trying to figure out why the GL_BLEND (alpha blending) state was changing in my program, and I found that OpenGL sets the GL_BLEND state to false after calling glClear to clear the render buffer.
GLboolean boolValue;
glEnable(GL_BLEND);
glGetBooleanv(GL_BLEND, &boolValue);
assert(boolValue == true); // PASSES
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glGetBooleanv(GL_BLEND, &boolValue);
assert(boolValue == true); // FAILS
I'm wondering if this is supposed to happen and if it's mentioned anywhere in the documentation. I'd hate to find out what other states OpenGL changes without my knowing about it, after spending ages trying to track down why my program is failing.
Edit: I just found out that it does this only on the first call to glClear(). Can anybody verify that this is the proper behavior?
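One way to narrow this down is to snapshot a few enable flags around the glClear call with glIsEnabled and see exactly which call flips them. This is just a minimal debugging sketch, not from the original post; the particular flags checked are an arbitrary selection.
// Needs <cstdio> for printf; assumes a current GL context.
GLenum flags[] = { GL_BLEND, GL_DEPTH_TEST, GL_CULL_FACE, GL_SCISSOR_TEST };
GLboolean before[4], after[4];

for (int i = 0; i < 4; ++i)
    before[i] = glIsEnabled(flags[i]);          // snapshot state before the clear

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

for (int i = 0; i < 4; ++i) {
    after[i] = glIsEnabled(flags[i]);           // snapshot state after the clear
    if (before[i] != after[i])
        printf("enable flag 0x%04X changed across glClear\n", flags[i]);
}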

Related

glEnable(GL_ALPHA_TEST) gives invalid enum (seems to be deprecated - code works though - but why?)

Quick question - title says it all:
In my OpenGL code (3.3), I'm using the line
glEnable(GL_ALPHA_TEST);
I've been using my code for weeks now and never checked for errors (via glGetError()) because it works perfectly. Now that I did (because something else isn't working), this line gives me an invalid enum error. Google revealed that glEnable(GL_ALPHA_TEST) seems to be deprecated since OpenGL 3 (core profile?) or so, and I guess that is the reason for the error.
But that part of the code still does exactly what I want. Some more code:
glDisable(GL_CULL_FACE);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_ALPHA_TEST);
// buffer-stuff
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 9, NumParticles);
So, did I put something redundant in there? I'm drawing particles (instanced) on screen using two triangles each (to give a quad), and in the alpha channel of the particle color I'm basically encoding a circle (so 1.0f inside the circle, otherwise 0.0f). Depth testing is there so that particles further back aren't drawn in front of particles closer to the camera, and glBlendFunc() (and, as I understood it, glEnable(GL_ALPHA_TEST)) is there to remove the bits outside the circle. I'm still learning OpenGL and am trying to understand why that code actually works (for once) and why I apparently don't need glEnable(GL_ALPHA_TEST)...
Yes, I'm using discard in the fragment shader. Otherwise I just use the code above, so I guess only one depth value (the default?).
discard is the replacement for glEnable(GL_ALPHA_TEST);.
So, did I put something redundant in there?
Yes. discard and glEnable(GL_ALPHA_TEST); would be redundant if you were using a profile in which glEnable(GL_ALPHA_TEST); still exists and if you used discard on every fragment whose alpha the configured glAlphaFunc would reject anyway.
Since you are using a profile in which GL_ALPHA_TEST no longer exists, glEnable(GL_ALPHA_TEST); has no effect in your code and can be removed.
Alpha testing is a long-deprecated method of drawing fragments only when they pass some alpha comparison. Nowadays this can easily be done inside a shader by simply discarding the fragments. Alpha testing in itself is also very limited, because all it can decide is whether a fragment is drawn or not.
In general, enabling GL_ALPHA_TEST without setting a proper glAlphaFunc does nothing anyway, since the default comparison function is GL_ALWAYS, which means that all fragments pass the test.
Your code doesn't seem to rely on alpha testing but on blending (I assume so, since you are setting the glBlendFunc). Somewhere in your code there's probably also a glEnable(GL_BLEND).
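For comparison, here is a minimal sketch of the two routes mentioned above: the legacy fixed-function alpha test (compatibility profile only) and its discard-based replacement in a fragment shader. The 0.5 threshold and the uv/particleTexture names are made-up placeholders, not the poster's actual code.
// Legacy route (compatibility profile only): let the fixed-function alpha test reject fragments.
glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.5f);   // keep only fragments with alpha > 0.5

// Core-profile route: do the same test yourself in the fragment shader with discard.
const char* fragmentSrc = R"(
    #version 330 core
    in vec2 uv;
    out vec4 fragColor;
    uniform sampler2D particleTexture;   // hypothetical texture holding the circle alpha
    void main()
    {
        vec4 color = texture(particleTexture, uv);
        if (color.a <= 0.5)              // same cutoff as glAlphaFunc(GL_GREATER, 0.5f)
            discard;                     // fragment is dropped, no color/depth write
        fragColor = color;
    }
)";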

OpenGL ping pong works with one pass, not with two

This might be a more basic OpenGL mistake than the title suggests.
I am doing segmentation using fragment shaders in OpenGL, which requires multiple rendering passes to do successive operations (e.g. Gaussian blur + edge detection + segmentation).
As far as I understood, there is a common technique called ping pong which takes two framebuffers (FBOs) and simply renders to one FBO using the other as input.
The thing is, one pass (shader_0 outputting to FBO_1 using FBO_0 as input) works, but when I try to use shader_1 with FBO_0 as input and render into FBO_1, I get a completely transparent image.
I checked both shaders and they do work individually, yet together they produce this transparent output.
Here is the set of calls I do for each pass, with segmentationBuffers containing the two FBOs, respectively used as input and output for this pass:
glBindFramebuffer(
    GL_FRAMEBUFFER,
    segmentationBuffers[lastSegmentationFboRenderedTo]->FramebufferName
);

glViewport(0, 0, windowWidth, windowHeight);

glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);

currentStepShader->UseProgram();

glClearColor(0, 0, 0, 0.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// Enable blending
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

lastSegmentationFboRenderedTo = (lastSegmentationFboRenderedTo + 1) % 2;

glActiveTexture(GL_TEXTURE0);
glBindTexture(
    GL_TEXTURE_2D,
    segmentationBuffers[lastSegmentationFboRenderedTo]->renderedTexture
);

glUniform1i(glGetUniformLocation(shader->shaderPtr, "inputTexture"), 0);
glUniform2fv(
    glGetUniformLocation(shader->shaderPtr, "texCoordOffsets"),
    25,
    texCoordOffsets
);

quad->Draw(GL_TRIANGLES, shader,
    orthographicProjection,
    glm::mat4(1.0f),
    getOverlayModelMatrix()
);
And as stated above, doing one pass yields correct intermediate results, but doing two in a row gives a transparent frame. I suspect this is a more basic OpenGL mistake than it seems, but any help is appreciated!
I solved the issue by removing the call to glEnable(GL_DEPTH_TEST);.
I suspect that by enabling depth testing, OpenGL was discarding fragments from subsequent computation steps since they had the same depth value.
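For reference, here is a minimal sketch of the ping-pong pattern with depth testing left disabled for the full-screen passes, which is essentially the fix described above. The names fbo, tex, passProgram, numPasses and drawFullScreenQuad() are placeholders, not the poster's actual code; the FBOs and their color textures are assumed to be created and attached elsewhere.
// Two FBOs, each with one color texture attached; src is the input, dst the output.
GLuint fbo[2], tex[2];              // assumed to be created and attached elsewhere
int src = 0, dst = 1;

glDisable(GL_DEPTH_TEST);           // full-screen image passes: depth testing only rejects fragments
glDisable(GL_BLEND);                // each pass fully overwrites its target

for (int pass = 0; pass < numPasses; ++pass)
{
    glBindFramebuffer(GL_FRAMEBUFFER, fbo[dst]);    // write into the destination FBO
    glViewport(0, 0, windowWidth, windowHeight);
    glClear(GL_COLOR_BUFFER_BIT);

    glUseProgram(passProgram[pass]);                // one shader program per operation
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, tex[src]);         // read the previous pass's result
    glUniform1i(glGetUniformLocation(passProgram[pass], "inputTexture"), 0);

    drawFullScreenQuad();                           // placeholder for the actual quad draw

    std::swap(src, dst);                            // previous output becomes the next input (<utility>)
}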

QQuickFramebufferObject redraw only shows clear color

I'm using a QQuickFramebufferObject object to render a red triangle to a framebuffer, which itself gets drawn to the QML scene.
To do that I overrode the render function of the associated QQuickFramebufferObject::Renderer class.
This render function looks like following:
void GLRenderEngine::render()
{
    glClearColor(0, 0, 0, 1);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

    glColor3d(1, 0, 0);
    glBegin(GL_TRIANGLES);
    glVertex2d(0, 0);
    glVertex2d(1, 0);
    glVertex2d(0, 1);
    glEnd();
    glFlush();

    // QQuickWindow context of the encapsulating QQuickFramebufferObject
    // is set in the overridden synchronize() call
    if (m_pWindow)
    {
        m_pWindow->resetOpenGLState();
        update();
    }
}
The problem I experience is that the first frame gets drawn correctly, while all other frames only show the clear color.
I've analyzed the OpenGL API calls with vogl and posted the results on pastebin:
Frame0 (correct Frame): https://pastebin.com/aWu4ee6m
Frame1: https://pastebin.com/4EmWmnMv
The only differences I noticed were the initialization calls, where Qt queries the state machine's state, so I'm curious what else I did wrong.
Thanks for your help in advance.
Small update:
If I remove glClear(...), the frames show the correct image, though I doubt this is correct behaviour.
The framebuffer bound when I use glClear is the one Qt created for me to use. It is bound with the GL_FRAMEBUFFER target, which also enables drawing to it.
After I return from the function, the default framebuffer (0) is bound and cleared. This procedure can be seen pretty well in Frame 1.
What I've been wondering about is whether glBlitFramebuffer is being called. Vogl doesn't seem to catch that call, and in its preview of the individual framebuffers I couldn't see my red triangle in Frame 1, while it is visible in Frame 0.
I solved the problem when I compared the state machine states and saw that the bound shader program had switched from 0 to 1.
Changing it back to 0, and thus disabling shader programs, at the start of every render call resulted in the expected behaviour.
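In code, the fix described above amounts to something like the following sketch; the rest of render() stays as shown earlier, and the comment about Qt's scene graph is my reading of the observed program switch rather than something stated in the post.
void GLRenderEngine::render()
{
    // Qt's scene graph may leave one of its own shader programs bound;
    // switch back to the fixed-function pipeline before any glBegin/glEnd drawing.
    glUseProgram(0);

    glClearColor(0, 0, 0, 1);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

    // ... same immediate-mode triangle, resetOpenGLState() and update() as above ...
}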

dll injection: drawing simple game overlay with opengl

I'm trying to draw a custom OpenGL overlay (Steam does that, for example) in a 3D desktop game.
This overlay should basically be able to show the status of some variables which the user can affect by pressing some keys. Think of it like a game trainer.
The goal, in the first place, is to draw a few primitives at a specific point on the screen. Later I want to have a nice looking little "GUI" component in the game window.
The game uses the SwapBuffers function from gdi32.dll.
Currently I'm able to inject a custom DLL into the game and hook the SwapBuffers function.
My first idea was to insert the drawing of the overlay into that function. This could be done by switching the game's 3D drawing mode to 2D, drawing the 2D overlay on the screen, and then switching back again, like this:
//SwapBuffers_HOOK (HDC)
glPushMatrix();
glLoadIdentity();
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glOrtho(0.0, 640, 480, 0.0, 1.0, -1.0);
//"OVERLAY"
glBegin(GL_QUADS);
glColor3f(1.0f, 1.0f, 1.0f);
glVertex2f(0, 0);
glVertex2f(0.5f, 0);
glVertex2f(0.5f, 0.5f);
glVertex2f(0.0f, 0.5f);
glEnd();
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
SwapBuffers_OLD(HDC);
However, this does not have any effect on the game at all.
Is my approach correct and reasonable (also considering my 3D-to-2D switching code)?
I would like to know the best way to design and display a custom overlay in the hooked function. (Should I use something like Windows Forms, or should I assemble my component from OpenGL primitives: lines, quads, ...?)
Is the SwapBuffers hook the best place to draw my overlay?
Any hint, source code or tutorial on something similar is appreciated too.
The game, by the way, is Counter-Strike 1.6, and I don't intend to cheat online.
Thanks.
EDIT:
I managed to draw a simple rectangle into the game's window by using a new OpenGL context, as proposed by 'derHass'. Here is what I did:
// 1. At the beginning of the hooked gdiSwapBuffers(HDC hdc) function, save the old context.
GLboolean gdiSwapBuffersHOOKED(HDC hdc) {
    HGLRC oldContext = wglGetCurrentContext();

    // 2. If the new context has not been created yet, create it
    //    (we need the "hdc" parameter for the current window, so the initialization
    //    happens in this function - does anyone have a better solution?).
    //    Then make the new context current.
    if (!contextCreated) {
        thisContext = wglCreateContext(hdc);
        wglMakeCurrent(hdc, thisContext);
        initContext();
    }
    else {
        wglMakeCurrent(hdc, thisContext);
    }

    // Draw the quad in the new context and switch back to the old one.
    drawContext();
    wglMakeCurrent(hdc, oldContext);
    return gdiSwapBuffersOLD(hdc);
}

GLvoid drawContext() {
    glColor3f(1.0f, 0, 0);
    glBegin(GL_QUADS);
    glVertex2f(0, 190.0f);
    glVertex2f(100.0f, 190.0f);
    glVertex2f(100.0f, 290.0f);
    glVertex2f(0, 290.0f);
    glEnd();
}

GLvoid initContext() {
    contextCreated = true;
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, 640, 480, 0.0, 1.0, -1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glClearColor(0, 0, 0, 1.0);
}
Here is the result:
cs overlay example
It is still very simple but I will try to add some more details, text etc. to it.
Thanks.
If the game is using OpenGL, then hooking into SwapBuffers is the way to go, in principle. In theory, there might be several different drawables, and you might have to decide in your swap buffer function which one(s) are the right ones to modify.
There are a couple of issues with this kind of OpenGL interception, though:
OpenGL is a state machine. The application might have modified any GL state variable there is. The code you provided is far from complete enough to guarantee that something is drawn. For example, if the application happens to have shaders enabled, all your matrix setup might have no effect, and what really appears on the screen depends on the shaders.
If depth testing is on, your fragments might lie behind what was already drawn. If polygon culling is on, your primitive might have the wrong winding for the current culling mode. If the color masks are set to GL_FALSE or the draw buffer is not set to where you expect it, nothing will appear.
Also note that your attempt to "reset" the matrices is wrong. You seem to assume that the current matrix mode is GL_MODELVIEW, but this doesn't have to be the case. It could just as well be GL_PROJECTION or GL_TEXTURE. You also apply glOrtho to the current projection matrix without loading identity first, so this alone is a good reason for nothing to appear on the screen.
As OpenGL is a state machine, you also must restore all the state you touched. You already attempt this with the matrix stack push/pop, but you failed, for example, to restore the exact matrix mode. As you have seen above, a lot more state changes will be required, so restoring it all will be more complex. Since you use legacy OpenGL, glPushAttrib() might come in handy here (see the sketch after this answer).
SwapBuffers is not a GL function, but part of the operating system's API. It gets a drawable as parameter and only indirectly refers to any GL context. It might be called while another GL context is bound to the thread, or with none at all. If you want to play it safe, you'll also have to intercept the GL context creation functions as well as MakeCurrent. In the worst (though very unlikely) case, the application has the GL context bound to another thread while it is calling SwapBuffers, so there is no chance for you in the hooked function to get to the context at all.
Putting this all together opens up another alternative: you can create your own GL context, bind it temporarily during the hooked SwapBuffers call and restore the original binding again. That way, you don't interfere with the GL state of the application at all. You can still augment the image content the application has rendered, since the framebuffer is part of the drawable, not of the GL context. Doing so might have a negative impact on performance, but it might be so small that you would never even notice it.
Since you want to do this only for a single specific application, another approach would be to find out the minimal set of state changes that are necessary by observing which GL state the application has actually set at the time of the SwapBuffers call. A tool like apitrace can help you with that.
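As a rough illustration of the save-and-restore point above, here is a minimal sketch assuming a legacy (compatibility) context; the exact set of attribute bits and enables you need depends on what your overlay code actually touches.
// Save the fixed-function state covered by the attribute stack
// (this includes the matrix mode, but not the matrices themselves).
glPushAttrib(GL_ALL_ATTRIB_BITS);

// Set up a known 2D state for the overlay.
glDisable(GL_DEPTH_TEST);
glDisable(GL_CULL_FACE);
glUseProgram(0);                          // only if the context is >= 2.0 and a program might be bound

glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrtho(0.0, 640.0, 480.0, 0.0, -1.0, 1.0);

glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();

// ... draw the overlay quads here ...

// Restore the matrices, then the saved attribute state (including the matrix mode).
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glPopAttrib();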

Opengl surface rendering issue

I just started loading some OBJ files and rendering them with OpenGL. When I render these meshes I get this result (see pictures).
I think it's some kind of depth problem, but I can't figure it out by myself.
These are the parameters for rendering:
// Dark blue background
glClearColor(0.0f, 0.0f, 0.4f, 0.0f);
// Enable depth test
glEnable( GL_DEPTH_TEST );
// Cull triangles which normal is not towards the camera
glEnable(GL_CULL_FACE);
I used this tutorial code as a template: https://code.google.com/p/opengl-tutorial-org/source/browse/#hg%2Ftutorial08_basic_shading
The problem is simple: you are doing FRONT or BACK face culling.
The object file contains faces wound either CCW (counter-clockwise) or CW (clockwise), i.e. their vertices are written in one order or the other.
Your OpenGL code expects the opposite winding, so it hides the surfaces that it considers to be facing away from you.
To check whether this is really your problem, just take out glEnable(GL_CULL_FACE);, as this seems to be exactly what is producing it.
Additionally, you can use glCullFace(ENUM);, where ENUM has to be GL_FRONT or GL_BACK.
If you can't see the complete mesh in at least one of those two cases (meaning that with both GL_FRONT and GL_BACK you only ever see part of the mesh), then either there is a problem with your code for interpreting the .obj, or the .obj does not use a consistent winding (a mix of CCW and CW).
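A minimal sketch of the knobs involved; the winding convention here is an assumption (GL_CCW is the OpenGL default, but the .obj exporter may have used the opposite):
glEnable(GL_CULL_FACE);      // turn face culling on at all
glCullFace(GL_BACK);         // cull back faces (the usual choice)
glFrontFace(GL_CCW);         // which winding counts as "front"; try GL_CW if the mesh disappears

// To rule culling out entirely while debugging:
glDisable(GL_CULL_FACE);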
I am actually unsure what you mean; however, glEnable(GL_CULL_FACE); followed by glCullFace(GL_BACK); will cull (i.e. remove) the back faces of the object. This can greatly reduce rendering cost, and visually it only makes a difference if you are inside or "behind" the object.
Also, have you tried glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); before your render code?