GL_DEPTH_TEST fail in middle of program - opengl

I was following a tutorial on learnopengl.com, moving the camera around, and suddenly GL_DEPTH_TEST fails.
GL_DEPTH_TEST WORKS AT FIRST, THEN FAILS
The program looks like this:
int main() {
    glEnable(GL_DEPTH_TEST);
    while (!glfwWindowShouldClose(window))
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glDrawArrays(GL_TRIANGLES, 0, 36); // some draw function
    }
}

void key_callback(GLFWwindow* window, int key, int scancode, int action, int mode)
{
    handler();
}
It actually fails in some other programs as well (meaning other tutorials I am building). If I place glEnable(GL_DEPTH_TEST) inside the loop, then it does not fail, so I suspect that GL_DEPTH_TEST has somehow been disabled or reset during runtime.
Is there a reason for this to happen?
How to prevent it?
Is placing glEnable(GL_DEPTH_TEST) in the loop the correct solution?
Is it hardware related? I am using a Phenom X6 AMD CPU with a Radeon 6850 card on my Windows PC.
EDIT:
I think my window setup was actually quite standard:
#include <GLFW/glfw3.h>
int main() {
    glfwInit();
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);
    GLFWwindow* window = glfwCreateWindow(WIDTH, HEIGHT, "LearnOpenGL", nullptr, nullptr);
    glfwMakeContextCurrent(window);
    glewInit();
    while (!glfwWindowShouldClose(window)) {}
}
EDIT:
I used glIsEnabled() to check, and indeed GL_DEPTH_TEST was disabled after some time. This happens in two of the programs I built: one just pans around on key press (changing the camera position), the other rotates using glfwGetTime(). The line if (!glIsEnabled(GL_DEPTH_TEST)) std::cout << "time: " << glfwGetTime() << " no depth!!" << std::endl; did produce output.
Does Google Maps using WebGL in the background have anything to do with that?
I guess I shall have to resort to putting glEnable(GL_DEPTH_TEST) in the loop.

Is there a reason for this to happen?
Normally not. OpenGL state is not supposed to change suddenly. However, you may have additional software installed that injects DLLs and does "things" to your OpenGL context: programs like FRAPS (screen capture software), stereoscopic/virtual-reality wrappers, debugging overlays, etc.
How to prevent it?
Writing correct code ;) – and by that I mean the full stack: your program, the OS written by someone, the GPU drivers written by someone else. Bugs happen.
Is placing glEnable(GL_DEPTH_TEST) in the loop the correct solution?
Yes. In fact you should set all drawing-related state anew with each drawing iteration, not only for correctness reasons, but because with more advanced rendering techniques you'll eventually have to do this anyway. For example, if you're going to render shadow maps you'll have to use FBOs, which require setting glViewport several times while rendering a frame. Or say you want to draw a minimap and/or a HUD; then you'll have to disable depth testing in between.
If your program is structured like this from the very beginning, things get much easier.
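As an illustration, here is a minimal sketch of such a loop, based on the loop from the question (WIDTH/HEIGHT and the draw call are taken from the question; the depth function and clear color are just placeholder values, and what exactly you re-set each frame depends on your renderer):
while (!glfwWindowShouldClose(window))
{
    // re-establish all drawing-related state at the start of every frame
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);
    glViewport(0, 0, WIDTH, HEIGHT);
    glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glDrawArrays(GL_TRIANGLES, 0, 36); // scene pass

    // a later HUD/minimap pass would start with glDisable(GL_DEPTH_TEST) here

    glfwSwapBuffers(window);
    glfwPollEvents();
}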
Is it hardware related?
No. OpenGL is a software-level specification, and a conforming implementation must do whatever the specification says, regardless of the underlying hardware.

It may be your window declaration. Can you post your window and OpenGL initialization?
EDIT
I can see you are requesting OpenGL 3.3; you have to put
glewExperimental = GL_TRUE;
before glewInit() to make it work correctly.
Try putting it in and check for any errors returned by glewInit():
GLenum err = glewInit();
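For example, a small sketch of that check (glewGetErrorString() is the standard GLEW helper for turning the error code into a message; assumes <iostream> is included):
glewExperimental = GL_TRUE;
GLenum err = glewInit();
if (err != GLEW_OK)
{
    // report why GLEW could not initialize and bail out
    std::cerr << "glewInit failed: " << glewGetErrorString(err) << std::endl;
    return -1;
}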
Does Google Maps using WebGL in the background have anything to do with that?
No, it shouldn't, because OpenGL doesn't share data between processes.

Related

glCopyImageSubData with glfw explicit version and secondary context interaction

First of all, sorry for the title, I didn't know what I should call this.
I have found out that when I call glCopyImageSubData on a secondary thread, with a secondary context created with GLFW and an explicit version, subsequent render calls just don't work like they should. In my program I create two textures, texture 2 bigger than texture 1. I upload pixels to texture 1 but not to texture 2; I just allocate space. Then I use glCopyImageSubData. After that I delete the textures. All this happens on a secondary context sharing resources with the window's context.
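For reference, a sketch of the described sequence written as C-style GL calls (the actual project is Java/LWJGL; the sizes, the pixels pointer and the 2D targets are only assumptions for illustration):
// assumes the secondary, shared context is current on this thread
GLuint tex[2];
glGenTextures(2, tex);

glBindTexture(GL_TEXTURE_2D, tex[0]); // texture 1: gets real pixel data
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 64, 64, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

glBindTexture(GL_TEXTURE_2D, tex[1]); // texture 2: bigger, storage only
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 128, 128, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

// copy texture 1 into the lower-left corner of texture 2
glCopyImageSubData(tex[0], GL_TEXTURE_2D, 0, 0, 0, 0,
                   tex[1], GL_TEXTURE_2D, 0, 0, 0, 0,
                   64, 64, 1);

glDeleteTextures(2, tex);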
In theory I do not modify anything on the rendering side of things, I just create two textures, copy data and delete them. My rendering loop looks like this:
glViewport(0, 0, width, height);
glClearColor(1F, 0F, 0F, 0.5F);
glClear(GL_COLOR_BUFFER_BIT);
glfwSwapBuffers(window);
Following render calls on the window don't work like they should, I don't know what happens or why it is happening.
If I do not specify the context version, it works just fine, which is a mystery to me because I do not think it should make a difference... If I specify the version of GL running on my computer, the same context is created as if I did not specify the version, right?
I specify the version like this:
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 6);
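One way to see which context was actually created is to query it after creation (both queries are standard GLFW/GL calls; shown here in C style, the LWJGL bindings are equivalent):
int major = glfwGetWindowAttrib(window, GLFW_CONTEXT_VERSION_MAJOR);
int minor = glfwGetWindowAttrib(window, GLFW_CONTEXT_VERSION_MINOR);
printf("context %d.%d, GL_VERSION: %s\n", major, minor, (const char*)glGetString(GL_VERSION));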
Here is a video of the problem:
https://youtu.be/5fc6m1BEKyc
Here is the github with the code: https://github.com/DasBabyPixel/lwjgl-testing
(In the code, mode=1 with disableExplicitVersion=false is what I want to get to work)
My System info just to confirm things
EDIT
The copy process works just fine, confirmed by writing the textures to file.
I don't know, maybe I'm being unclear: I am trying to figure out why this is happening and how to fix it. Leaving the version hints out is not an option because I want to be able to use GLES 3.2.
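For reference, requesting a GLES 3.2 context with GLFW looks roughly like this (just a sketch; whether the driver actually provides such a context is another matter):
glfwWindowHint(GLFW_CLIENT_API, GLFW_OPENGL_ES_API);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
GLFWwindow* window = glfwCreateWindow(640, 480, "GLES 3.2", nullptr, nullptr);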

OpenGL window artifacts

I recently started to learn OpenGL using tutorials and books. In most of them I can find a notice like this:
When an application draws in a single buffer the resulting image might display flickering issues. ... To circumvent these issues, windowing applications apply a double buffer for rendering.
Sounds reasonable, so for that purpose I should use glfwSwapBuffers(window); in my render loop.
Unfortunately I still get these artifacts.
But if I clear the color buffer bit before swapping, then it works well.
So the question is: why are those artifacts gone only after clearing the buffer? I would be pleased if you could help me figure out why that's happening.
Here's a code snippet of that render loop:
// Render loop
// -----------
while (!glfwWindowShouldClose(window)) {
    processInput(window);
    // Rendering commands
    // ------------------
    glClearColor(0, 0, 0, 0);
    glClear(GL_COLOR_BUFFER_BIT);
    glfwSwapBuffers(window);
    glfwPollEvents();
}

Why is glDrawElements() working without using any shader?

I'm trying to debug some shaders but I can't change the one currently loaded. I tried to run without loading any shader, or linking any program, and it still works.
I already tried completely deleting the shaders from my HDD. I tried to just call glUseProgram (with any random number, including 0) just before calling glDrawElements and it still works. And even if I load a shader it just doesn't have any effect. It still shows linking and compile errors if I make mistakes in the files, but when I run the executable it just ignores what is in the shaders.
I draw the vertices with this:
void Renderer::renderFrame() {
    vao.bind();
    glUseProgram(0);
    glDrawElements(GL_LINE_LOOP, 3, GL_UNSIGNED_INT, nullptr);
}
and these are my window hints:
void App::start() {
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 4);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 4);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
    window = SDL_CreateWindow(title.c_str(), SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 500, 500, SDL_WINDOW_RESIZABLE | SDL_WINDOW_OPENGL);
    this->context = SDL_GL_CreateContext(window);
    glewInit();
    glClearColor(0.5, 1.0, 1.0, 1.0);
    renderer.init();
}
SDL_GL_SetAttribute() only affects the next SDL_CreateWindow() call.
From the doc wiki:
Use this function to set an OpenGL window attribute before window creation.
So right now you're most likely getting a Compatibility context where shader-less draws are perfectly valid. You can check the value of GL_VERSION to see what you're getting.
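For example, with a current context you could print (GL_CONTEXT_PROFILE_MASK is only available from GL 3.2 on):
GLint mask = 0;
glGetIntegerv(GL_CONTEXT_PROFILE_MASK, &mask);
printf("GL_VERSION: %s, core profile: %s\n",
       (const char*)glGetString(GL_VERSION),
       (mask & GL_CONTEXT_CORE_PROFILE_BIT) ? "yes" : "no");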
If you want a Core context make those SDL_GL_SetAttribute() calls before your SDL_CreateWindow():
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 4);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 4);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
window = SDL_CreateWindow(title.c_str(), SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 500,500, SDL_WINDOW_RESIZABLE|SDL_WINDOW_OPENGL);
this->context = SDL_GL_CreateContext(window);
In case no valid shader is bound, the default fixed-function pipeline is usually used (you know, GL 1.0 backward compatibility, sometimes even on a core profile, depending on vendor/driver).
So in case your attribute locations match the fixed-function ones, your CPU-side code still renders an image; see:
What are the Attribute locations for fixed function pipeline in OpenGL 4.0++ core profile?
However, the locations are not defined by any standard, so they differ between vendors (and can change with time/driver version). Only nVidia defined them and is still using them after years...
So it's a good idea to check the GLSL compiler/linker log for any shader in development to avoid confusion. For more info on how to obtain them see:
complete GL+GLSL+VAO/VBO C++ example
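For completeness, a typical snippet for pulling the compile log out of a shader object (here shader is assumed to be the handle you passed to glCompileShader; assumes <vector> and <iostream>):
GLint ok = GL_FALSE, logLen = 0;
glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &logLen);
if (!ok && logLen > 1)
{
    // fetch and print the driver's compile log
    std::vector<char> log(logLen);
    glGetShaderInfoLog(shader, logLen, nullptr, log.data());
    std::cerr << "shader compile failed:\n" << log.data() << std::endl;
}
// the program object works the same way with glGetProgramiv/glGetProgramInfoLog after glLinkProgram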
By the way, some gfx drivers support logging, and if it is enabled they will save the GLSL logs into a file on their own... That can be done, for example, with nVidia drivers and the NVEmulate utility.

dll injection: drawing simple game overlay with opengl

I'm trying to draw a custom OpenGL overlay (Steam does that, for example) in a 3D desktop game.
This overlay should basically be able to show the status of some variables which the user can affect by pressing some keys. Think of it like a game trainer.
The goal is, in the first place, to draw a few primitives at a specific point on the screen. Later I want to have a nice looking little "gui" component in the game window.
The game uses the "SwapBuffers" method from the GDI32.dll.
Currently I'm able to inject a custom DLL file into the game and hook the "SwapBuffers" method.
My first idea was to insert the drawing of the overlay into that function. This could be done by switching the game's drawing mode from 3D to 2D, drawing the 2D overlay on the screen, and switching back again, like this:
//SwapBuffers_HOOK (HDC)
glPushMatrix();
glLoadIdentity();
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glOrtho(0.0, 640, 480, 0.0, 1.0, -1.0);
//"OVERLAY"
glBegin(GL_QUADS);
glColor3f(1.0f, 1.0f, 1.0f);
glVertex2f(0, 0);
glVertex2f(0.5f, 0);
glVertex2f(0.5f, 0.5f);
glVertex2f(0.0f, 0.5f);
glEnd();
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
SwapBuffers_OLD(HDC);
However, this does not have any effect on the game at all.
Is my approach correct and reasonable (also considering my 3d to 2d switching code)?
I would like to know what the best way is to design and display a custom overlay in the hooked function. (Should I use something like Windows Forms, or should I assemble my component from OpenGL functions - lines, quads, ...?)
Is the SwapBuffers method the best place to draw my overlay?
Any hint, source code or tutorial on something similar is appreciated too.
The game by the way is counterstrike 1.6 and I don't intend to cheat online.
Thanks.
EDIT:
I managed to draw a simple rectangle into the game's window by using a new OpenGL context, as proposed by 'derHass'. Here is what I did:
// 1. At the beginning of the hooked gdiSwapBuffers(HDC hdc) method, save the old context
GLboolean gdiSwapBuffersHOOKED(HDC hdc) {
    HGLRC oldContext = wglGetCurrentContext();
    // 2. If the new context has not been created yet, create it
    //    (we need the "hdc" parameter for the current window, so the initialization
    //    process happens in this method - anyone have a better solution?)
    //    Then make the new context current.
    if (!contextCreated) {
        thisContext = wglCreateContext(hdc);
        wglMakeCurrent(hdc, thisContext);
        initContext();
    }
    else {
        wglMakeCurrent(hdc, thisContext);
    }
    // Draw the quad in the new context and switch back to the old one.
    drawContext();
    wglMakeCurrent(hdc, oldContext);
    return gdiSwapBuffersOLD(hdc);
}
GLvoid drawContext() {
    glColor3f(1.0f, 0, 0);
    glBegin(GL_QUADS);
    glVertex2f(0, 190.0f);
    glVertex2f(100.0f, 190.0f);
    glVertex2f(100.0f, 290.0f);
    glVertex2f(0, 290.0f);
    glEnd();
}
GLvoid initContext() {
    contextCreated = true;
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, 640, 480, 0.0, 1.0, -1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glClearColor(0, 0, 0, 1.0);
}
Here is the result:
cs overlay example
It is still very simple but I will try to add some more details, text etc. to it.
Thanks.
If the game is using OpenGL, then hooking into SwapBuffers is the way to go, in principle. In theory, there might be several different drawables, and you might have to decide in your swap-buffer function which one(s) are the right ones to modify.
There are a couple of issues with this kind of OpenGL interception, though:
OpenGL is a state machine. The application might have modified any GL state variable there is. The code you provided is far from complete enough to guarantee that something is drawn. For example, if the application happens to have shaders enabled, all your matrix setup might be without effect, and what really appears on the screen depends on the shaders.
If depth testing is on, your fragments might lie behind what was already drawn. If polygon culling is on, your primitive might be incorrectly wound for the current culling mode. If the color masks are set to GL_FALSE or the draw buffer is not set to where you expect it, nothing will appear.
Also note that your attempt to "reset" the matrices is wrong. You seem to assume that the current matrix mode is GL_MODELVIEW, but this doesn't have to be the case. It could just as well be GL_PROJECTION or GL_TEXTURE. You also apply glOrtho to the current projection matrix without loading identity first, so this alone is a good reason for nothing to appear on the screen.
As OpenGL is a state machine, you also must restore all the state you touched. You already try this with the matrix stack push/pop, but you for example failed to restore the exact matrix mode. As you have seen above, a lot more state changes will be required, so restoring it will be more complex. Since you use legacy OpenGL, glPushAttrib() might come in handy here.
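A rough sketch of that save/restore for the legacy-GL case (glPushAttrib() does not cover everything: the current program binding is saved manually here, and client state would need glPushClientAttrib(); the ortho range is just an example):
GLint prevProgram = 0;
glGetIntegerv(GL_CURRENT_PROGRAM, &prevProgram); // in case the game uses shaders

glPushAttrib(GL_ALL_ATTRIB_BITS); // enables, depth/cull/blend state, viewport, matrix mode, ...
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrtho(0.0, 640.0, 480.0, 0.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();

glUseProgram(0);
glDisable(GL_DEPTH_TEST);
glDisable(GL_CULL_FACE);

// ... draw the overlay here ...

glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
glPopAttrib();
glUseProgram(prevProgram);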
SwapBuffers is not a GL function, but part of the operating system's API. It gets a drawable as parameter and only indirectly refers to any GL context. It might be called while another GL context is bound to the thread, or with none at all. If you want to play it safe, you'll also have to intercept the GL context creation functions as well as MakeCurrent. In the worst (though very unlikely) case, the application has the GL context bound to another thread while it is calling SwapBuffers, so there is no chance for you in the hooked function to get to the context.
Putting this all together opens up another alternative: You can create your own GL context, bind it temporarily during the hooked SwapBuffers call and restore the original binding again. That way, you don't interfere with the GL state of the application at all. You still can augment the image content the application has rendered, since the framebuffer is part of the drawable, not the GL context. Doing so might have a negative impact on performance, but it might be so small that you never would even notice it.
Since you want to do this only for a single specific application, another approach would be to find out the minimal state changes which are necessary by observing what GL state the application actually set during the SwapBuffers call. A tool like apitrace can help you with that.

nsight - OpenGL 4.2 debugging incompatibility

Whenever I attempt to debug a shader in nvidia nsight I get the following incompatibility in my nvcompatlog.
glDisable (cap = 0x00008620)
glMatrixMode
glPushMatrix
glLoadIdentity
glOrtho
glBegin
glColor4f
glVertex2f
glEnd
glPopMatrix
This is confusing since I am using a 4.2 core profile and not using any deprecated or fixed-function calls. At this stage I am just drawing a simple 2D square to the screen and I can assure you none of the functions listed above are being used.
My real concern is that, being new to SDL & GLEW, I am not sure what functions they are using behind the scenes. I have been searching around the web and have found others who are using SDL, GLEW, & Nvidia Nsight. This leads me to believe I am overlooking something. Below is a shortened version of how I am initializing SDL & GLEW.
SDL_Init(SDL_INIT_EVERYTHING);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 4);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 2);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
SDL_GL_SetAttribute(SDL_GL_ACCELERATED_VISUAL, 1);
SDL_Window *_window;
_window = SDL_CreateWindow("Red Square", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED , 200, 200, SDL_WINDOW_OPENGL);
SDL_GLContext glContext = SDL_GL_CreateContext(_window);
glewExperimental = GL_TRUE;
GLenum status = glewInit();
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
In the implementation I have error checking pretty much after every call. I excluded it from the example to reduce the amount of clutter. All the above produce no errors and return valid objects.
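For reference, the kind of per-call check meant here could look like this (checkGL is just a hypothetical helper name; assumes <cstdio> and the GLEW header):
// drains the GL error queue and reports where the check was made
static void checkGL(const char* where)
{
    for (GLenum e = glGetError(); e != GL_NO_ERROR; e = glGetError())
        fprintf(stderr, "GL error 0x%04X after %s\n", e, where);
}

// usage, once the context exists:
glClearColor(0.5f, 1.0f, 1.0f, 1.0f);
checkGL("glClearColor");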
After the initialization, glGetString(GL_VERSION) returns 4.2.0 NVIDIA 344.75, glewGetString(GLEW_VERSION) returns 1.11.0, and GLEW_VERSION_4_2 returns true.
Any idea how I can use SDL & GLEW and not have either of these frameworks call deprecated functions?
** Edit **
I have been experimenting with the Dependency Walker here. Looking at the calls through Opengl32.dll, none of what is listed is shown as a called module.
For anyone interested: Nsight captures all commands issued to the OpenGL server, not just those issued through your application. If you have any FPS or recording software enabled, these tend to use deprecated methods for drawing to the framebuffer. In my case it was Riva Tuner, which displays the FPS on screen for any running game. Disabling it resolved my issue.