OpenGL window artifacts - C++

I recently started learning OpenGL using tutorials and books. In most of them I find a note like this:
When an application draws in a single buffer the resulting image might display flickering issues. ... To circumvent these issues, windowing applications apply a double buffer for rendering.
Sounds reasonable, so for that purpose I should use glfwSwapBuffers(window); in my render loop.
Unfortunately I still get these artifacts.
But if I clear the color buffer before swapping, it works fine.
So the question is: why do those artifacts disappear only after clearing? I would appreciate help figuring out why this happens.
Here's a code snippet of that render loop:
// Render loop
// -----------
while (!glfwWindowShouldClose(window))
{
    processInput(window);

    // Rendering commands
    // ------------------
    glClearColor(0, 0, 0, 0);
    glClear(GL_COLOR_BUFFER_BIT);

    glfwSwapBuffers(window);
    glfwPollEvents();
}

Related

GL_DEPTH_TEST fail in middle of program

I was following a tutorial on learnopengl.com, moving the camera around, when GL_DEPTH_TEST suddenly failed.
GL_DEPTH_TEST WORKS AT FIRST, THEN FAILS
The program looks like this:
int main()
{
    glEnable(GL_DEPTH_TEST);
    while (!glfwWindowShouldClose(window))
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glDrawArrays(GL_TRIANGLES, 0, 36); // some draw function
    }
}

void key_callback(GLFWwindow* window, int key, int scancode, int action, int mode)
{
    handler();
}
It actually fails in some other programs as well (meaning other tutorials I am building). If I place glEnable(GL_DEPTH_TEST) inside the loop, it does not fail, so I suspect that GL_DEPTH_TEST has somehow been disabled or failed during runtime.
Is there a reason for this to happen?
How can I prevent it?
Is placing glEnable(GL_DEPTH_TEST) in the loop the correct solution?
Is it hardware related? I am using a Phenom X6 AMD CPU with a Radeon 6850 card on my Windows PC.
EDIT:
I think my window setup is actually quite standard:
#include <GL/glew.h>    // missing from the original excerpt; glewInit() is called below
#include <GLFW/glfw3.h>

int main()
{
    glfwInit();
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);
    GLFWwindow* window = glfwCreateWindow(WIDTH, HEIGHT, "LearnOpenGL", nullptr, nullptr);
    glfwMakeContextCurrent(window);
    glewInit();
    while (!glfwWindowShouldClose(window)) {}
}
EDIT:
I used glIsEnabled() to check, and indeed GL_DEPTH_TEST was disabled after some time. This happens in two of the programs I built: one pans around via key presses (changing the camera position), the other rotates using glfwGetTime(). The line if(!glIsEnabled(GL_DEPTH_TEST)) std::cout << "time: " << glfwGetTime() << " no depth!!" << std::endl; did produce output.
Does Google Maps running WebGL in the background have anything to do with it?
I guess I shall have to resort to putting glEnable(GL_DEPTH_TEST) inside the loop.
Is there reason for this to happen?
Normally not. OpenGL state is not supposed to change suddenly. However, you may have additional software installed that injects DLLs and does "things" to your OpenGL context: programs like FRAPS (screen capture software), stereoscopic/virtual-reality wrappers, debugging overlays, etc.
how to prevent it?
Writing correct code ;) – and by that I mean the full stack: your program, the OS written by someone, and the GPU drivers written by someone else. Bugs happen.
is placing glEnable(GL_DEPTH_TEST) in the loop the correct solution?
Yes. In fact you should always set all drawing-related state anew with each drawing iteration, not only for correctness reasons, but because with more advanced rendering techniques you will eventually have to do this anyway. For example, if you are going to render shadow maps you will have to use FBOs, which require setting glViewport several times while rendering a frame. Or say you want to draw a minimap and/or a HUD; then you will have to disable depth testing in between.
If your program is structured like this from the very beginning, things get much easier.
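As a rough sketch of what "set everything anew" can look like with GLFW (drawScene()/drawHUD() and the width/height variables are placeholders, not code from the question):
while (!glfwWindowShouldClose(window))
{
    // Re-establish every piece of state this frame depends on instead of
    // assuming it survived from the previous iteration.
    glViewport(0, 0, width, height);
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);
    glDisable(GL_BLEND);

    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    drawScene();               // placeholder for the actual scene draw calls

    // Example of a per-pass state change: a HUD drawn without depth testing.
    glDisable(GL_DEPTH_TEST);
    drawHUD();                 // placeholder

    glfwSwapBuffers(window);
    glfwPollEvents();
}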
is it hardware related?
No. OpenGL is a software level specification and a conforming implementation must do whatever the specification says, regardless of the underlying hardware.
It may be your window setup. Can you post your initialization code for the window and OpenGL?
EDIT
I can see you are requesting OpenGL 3.3; you have to put
glewExperimental = GL_TRUE;
before glewInit() to make it work correctly.
Try adding it, and check the error code returned by glewInit():
GLenum err = glewInit();
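A minimal sketch of that initialization check could look like this (the logging and early return are assumptions, not part of the original answer):
glewExperimental = GL_TRUE;            // as noted above, needed before glewInit() with a core profile
GLenum err = glewInit();
if (err != GLEW_OK)
{
    // requires <iostream>; glewGetErrorString() turns the code into a readable message
    std::cerr << "glewInit failed: " << glewGetErrorString(err) << std::endl;
    glfwTerminate();
    return -1;
}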
Does Google Maps running WebGL in the background have anything to do with it?
No, it shouldn't, because OpenGL does not share state between processes.

dll injection: drawing simple game overlay with opengl

I'm trying to draw a custom OpenGL overlay (Steam does that, for example) in a 3D desktop game.
This overlay should basically be able to show the status of some variables which the user can affect by pressing some keys. Think of it like a game trainer.
The goal, in the first place, is to draw a few primitives at specific points on the screen. Later I want to have a nice-looking little "gui" component in the game window.
The game uses the SwapBuffers function from gdi32.dll.
Currently I'm able to inject a custom DLL into the game and hook the SwapBuffers function.
My first idea was to insert the drawing of the overlay into that function. This could be done by switching the game's drawing mode from 3D to 2D, drawing the 2D overlay on the screen, and then switching back again, like this:
//SwapBuffers_HOOK (HDC)
glPushMatrix();
glLoadIdentity();
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glOrtho(0.0, 640, 480, 0.0, 1.0, -1.0);
//"OVERLAY"
glBegin(GL_QUADS);
glColor3f(1.0f, 1.0f, 1.0f);
glVertex2f(0, 0);
glVertex2f(0.5f, 0);
glVertex2f(0.5f, 0.5f);
glVertex2f(0.0f, 0.5f);
glEnd();
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
SwapBuffers_OLD(HDC);
However, this does not have any effect on the game at all.
Is my approach correct and reasonable (also considering my 3D-to-2D switching code)?
I would like to know what the best way is to design and display a custom overlay in the hooked function. (Should I use something like Windows Forms, or should I assemble my component from OpenGL functions - lines, quads, ...?)
Is the SwapBuffers method the best place to draw my overlay?
Any hint, source code or tutorial on something similar is appreciated too.
The game, by the way, is Counter-Strike 1.6, and I don't intend to cheat online.
Thanks.
EDIT:
I managed to draw a simple rectangle into the game's window by using a new OpenGL context, as proposed by 'derHass'. Here is what I did:
// 1. At the beginning of the hooked gdiSwapBuffers(HDC hdc) method, save the old context.
GLboolean gdiSwapBuffersHOOKED(HDC hdc)
{
    HGLRC oldContext = wglGetCurrentContext();

    // 2. If the new context has not been created yet, create it
    //    (we need the "hdc" parameter for the current window, so the initialization
    //    happens in this method - does anyone have a better solution?).
    //    Then make the new context current.
    if (!contextCreated)
    {
        thisContext = wglCreateContext(hdc);
        wglMakeCurrent(hdc, thisContext);
        initContext();
    }
    else
    {
        wglMakeCurrent(hdc, thisContext);
    }

    // Draw the quad in the new context and switch back to the old one.
    drawContext();
    wglMakeCurrent(hdc, oldContext);
    return gdiSwapBuffersOLD(hdc);
}
GLvoid drawContext()
{
    glColor3f(1.0f, 0, 0);
    glBegin(GL_QUADS);
    glVertex2f(0, 190.0f);
    glVertex2f(100.0f, 190.0f);
    glVertex2f(100.0f, 290.0f);
    glVertex2f(0, 290.0f);
    glEnd();
}

GLvoid initContext()
{
    contextCreated = true;
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, 640, 480, 0.0, 1.0, -1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glClearColor(0, 0, 0, 1.0);
}
Here is the result:
cs overlay example
It is still very simple but I will try to add some more details, text etc. to it.
Thanks.
If the game is using OpenGL, then hooking into SwapBuffers is the way to go, in principle. In theory, there might be several different drawables, and you might have to decide in your swap-buffer function which one(s) are the right ones to modify.
There are a couple of issues with such kind of OpenGL interceptions, though:
OpenGL is a state machine. The application might have modified any GL state variable there is. The code you provided is far from complete enough to guarantee that something is drawn. For example, if the application happens to have shaders enabled, all your matrix setup might have no effect, and what actually appears on the screen depends on those shaders.
If depth testing is on, your fragments might lie behind what was already drawn. If polygon culling is on, your primitive might be wound incorrectly for the current culling mode. If the color masks are set to GL_FALSE or the draw buffer is not set to where you expect it, nothing will appear.
Also note that your attempt to "reset" the matrices is wrong. You seem to assume that the current matrix mode is GL_MODELVIEW, but this doesn't have to be the case. It could just as well be GL_PROJECTION or GL_TEXTURE. You also apply glOrtho to the current projection matrix without loading identity first, so this alone is a good reason for nothing to appear on the screen.
As OpenGL is a state machine, you also must restore all the state you touched. You already try this with the matrix stack push/pop, but you failed, for example, to restore the exact matrix mode. As seen above, a lot more state changes will be required, so restoring everything will be more complex. Since you use legacy OpenGL, glPushAttrib() might come in handy here.
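As a rough illustration of that save/restore idea, a legacy-GL sketch might look like this (the attribute groups listed and the drawOverlay() call are assumptions, not a complete inventory of the state an overlay can touch):
// Save the attribute groups and matrices we are about to modify.
glPushAttrib(GL_ENABLE_BIT | GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_TRANSFORM_BIT);
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrtho(0.0, 640.0, 480.0, 0.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();

glUseProgram(0);          // in case the app uses shaders; requires a context >= 2.0
glDisable(GL_DEPTH_TEST);
glDisable(GL_CULL_FACE);

drawOverlay();            // hypothetical overlay drawing function

// Restore what we touched, in reverse order.
glPopMatrix();            // modelview
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glPopAttrib();            // restores the enables, masks, and matrix mode saved above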
SwapBuffers is not a GL function, but part of the operating system's API. It takes a drawable as parameter and refers to a GL context only indirectly. It might be called while another GL context is bound to the thread, or with none at all. If you want to play it safe, you'll also have to intercept the GL context creation functions as well as MakeCurrent. In the worst (though very unlikely) case, the application has the GL context bound to another thread while it is calling SwapBuffers, so there is no chance for you in the hooked function to get at the context.
Putting this all together opens up another alternative: You can create your own GL context, bind it temporarily during the hooked SwapBuffers call and restore the original binding again. That way, you don't interfere with the GL state of the application at all. You still can augment the image content the application has rendered, since the framebuffer is part of the drawable, not the GL context. Doing so might have a negative impact on performance, but it might be so small that you never would even notice it.
Since you want to do this only for a single specific application, another approach would be to find out the minimal state changes which are necessary by observing what GL state the application actually set during the SwapBuffers call. A tool like apitrace can help you with that.

How to make a step by step display animation in openGL?

I'm doing a RepRap printer project to read a G-code file and interpret it into graphics.
Now I'm having difficulty making a step-by-step animation of drawing the whole object.
I need to draw many short lines to make up the whole object.
for example:
|-----|
|     |
|     |
|-----|
the square is made up of many short lines, and each line is generated by code like:
glPushMatrix();
.....
for (int i = 0; i < instruction.size(); i++)
{
    ....
    glBegin(GL_LINES);
    glVertex3f(oldx, oldy, oldz);
    glVertex3f(x, y, z);
    glEnd();
}
glPopMatrix();
Now I want to make a step animation to show how this square is built up. I tried to refresh the screen each time a new line is drawn, but it doesn't work; the whole square just comes out at once. Does anyone know how to do this?
Typical OpenGL implementations will queue up a large number of calls and batch them together into bursts of activity, to make optimal use of the available communication bandwidth and GPU time.
What you want to do is basically the opposite of double-buffered rendering, i.e. rendering where each drawing step is immediately visible. One way to do this is to render to a single-buffered window and call glFinish() after each step. Major drawback: this is likely not to work well on modern systems, which use compositing window managers and the like.
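For completeness, a minimal sketch of that single-buffered approach with GLFW (the window creation, the lines container, and drawLine() are assumptions for illustration):
glfwWindowHint(GLFW_DOUBLEBUFFER, GLFW_FALSE);   // request a single-buffered window
GLFWwindow* window = glfwCreateWindow(640, 480, "step by step", nullptr, nullptr);
glfwMakeContextCurrent(window);

glClear(GL_COLOR_BUFFER_BIT);
for (size_t i = 0; i < lines.size() && !glfwWindowShouldClose(window); ++i)
{
    drawLine(lines[i]);   // hypothetical helper drawing one short line
    glFinish();           // force the drawing to actually reach the screen
    glfwPollEvents();
}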
Another approach, which I recommend, is to use a separate buffer for incremental drawing and to constantly refresh the main framebuffer from it. The key topics are Frame Buffer Objects (FBOs) and render-to-texture.
First you create an FBO (there are tons of tutorials out there, and answers here on Stack Overflow). An FBO is basically an abstraction to which you can attach target buffers, like textures or renderbuffers, and which can be bound as the destination of drawing calls.
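A minimal creation sketch could look like this (the names animFBO and animFBOTexture match the snippets below; the size, format, and error handling are assumptions):
GLuint animFBO = 0, animFBOTexture = 0;

// Color target: an ordinary texture we can later draw to the screen.
glGenTextures(1, &animFBOTexture);
glBindTexture(GL_TEXTURE_2D, animFBOTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glBindTexture(GL_TEXTURE_2D, 0);

// The FBO itself, with the texture as its color attachment.
glGenFramebuffers(1, &animFBO);
glBindFramebuffer(GL_FRAMEBUFFER, animFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, animFBOTexture, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    std::cerr << "animFBO is incomplete" << std::endl;   // requires <iostream>

// Clear it once here; as described below, it is never cleared again.
glClearColor(0, 0, 0, 1);
glClear(GL_COLOR_BUFFER_BIT);
glBindFramebuffer(GL_FRAMEBUFFER, 0);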
So how do you solve your problem with them? First, you should not do the animation by delaying a drawing loop. There are several reasons for this, but the main issue is that you lose program interactivity. Instead, maintain a (global) counter for the step your animation is currently at. Let's call it step:
int step = 0;
Then your drawing function has two phases: 1) texture update, 2) screen refresh.
Phase one consists of binding your framebuffer object as the render target. For this to work, the target texture must be unbound:
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebuffer(GL_FRAMEBUFFER, animFBO);
glViewport(0, 0, fbo.width, fbo.height);
set_animFBO_projection();
The trick now is that you clear the animFBO only once, namely right after creation, and then never again. Now you draw your lines according to the animation step:
draw_lines_for_step(step);
and increment the step counter (you could do this as a compound statement, but this is more explicit):
step++;
After updating the animation FBO it's time to update the screen. First unbind the animFBO
glBindFramebuffer(GL_FRAMEBUFFER, 0);
We're now on the main, on-screen framebuffer
glViewport(0, 0, window.width, window.height);
set_window_projection(); //most likely a glMatrixMode(GL_PROJECTION); glOrtho(0, 1, 0, 1, -1, 1);
Now bind the FBO-attached texture and draw it as a full-viewport quad:
glBindTexture(GL_TEXTURE_2D, animFBOTexture);
draw_full_viewport_textured_quad();
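That helper, under the glOrtho(0, 1, 0, 1, -1, 1) projection mentioned above and legacy immediate-mode GL, could be sketched as follows (a hypothetical helper, not part of the original answer):
// Hypothetical helper: draw the texture currently bound to GL_TEXTURE_2D over the whole viewport.
void draw_full_viewport_textured_quad()
{
    glEnable(GL_TEXTURE_2D);
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(0, 0);
    glTexCoord2f(1, 0); glVertex2f(1, 0);
    glTexCoord2f(1, 1); glVertex2f(1, 1);
    glTexCoord2f(0, 1); glVertex2f(0, 1);
    glEnd();
}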
Finally do the buffer swap to show the animation step iteration
SwapBuffers();
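Putting the pieces together, the per-frame function might look roughly like this (the set_* and draw_* helpers are the same hypothetical placeholders used in the snippets above):
void display()
{
    // Phase 1: texture update - accumulate one more step into the animation FBO.
    glBindTexture(GL_TEXTURE_2D, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, animFBO);
    glViewport(0, 0, fbo.width, fbo.height);
    set_animFBO_projection();
    draw_lines_for_step(step);
    step++;

    // Phase 2: screen refresh - show the accumulated result.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glViewport(0, 0, window.width, window.height);
    set_window_projection();
    glBindTexture(GL_TEXTURE_2D, animFBOTexture);
    draw_full_viewport_textured_quad();

    SwapBuffers();
}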
You should have SwapBuffers called after each draw call.
Make sure you don't mess up the matrix stack, and you'll probably need something to "pause" the rendering, like a breakpoint.
If you only want the lines to appear one after another, and you don't have to be nit-picky about efficiency or good programming style, try something like this:
(in your drawing routine)
if (timer > 100)
{
    // draw the next line
    timer = 0;
}
else
    timer++;

// draw all the other lines (you have to remember which ones have already appeared,
// for example using a boolean array "lineDrawn[10]")
The timer is an integer that counts how many times you have drawn the scene since the last new line appeared. If you make the threshold (100 here) larger, things happen more slowly on the screen when you run your program.
Of course this only works if you have a draw routine. If not, I strongly suggest using one.
There are plenty of tutorials pretty much everywhere, e.g.
http://nehe.gamedev.net/tutorial/creating_an_opengl_window_%28win32%29/13001/
Good luck to you!
PS: I think you had done nearly the same thing, but without a timer; that's why everything was drawn so fast that you thought it all appeared at the same time.

gwen + opengl can't see anything

I'm trying to use GWEN to draw some GUI elements on top of my OpenGL scene. It seems to be set up correctly, but nothing from GWEN is actually being drawn (visibly, at least). I'm using a custom renderer, which is essentially GWEN's stock OpenGL renderer but with a different function for loading textures, and with OpenGL::Begin() and OpenGL::End() replaced with these:
void coRenderer::Begin()
{
    glUseProgram(0);
    glDisable(GL_DEPTH_TEST);
    glDepthMask(0);
    glEnable(GL_BLEND);

    glMatrixMode(GL_PROJECTION); // Select The Projection Matrix
    glPushMatrix();              // Store The Projection Matrix
    glLoadIdentity();
    glOrtho(0, screen->w, screen->h, 0, -1, 1);

    glMatrixMode(GL_MODELVIEW);
    glActiveTexture(GL_TEXTURE0);
}

void coRenderer::End()
{
    Flush();
    glMatrixMode(GL_PROJECTION); // Select The Projection Matrix
    glPopMatrix();               // Restore The Old Projection Matrix

    glDisable(GL_BLEND);
    glEnable(GL_DEPTH_TEST);
    glDepthMask(1);
    glEnable(GL_TEXTURE_2D);
}
The code for GWEN's stock OpenGL renderer is here:
http://gwen.googlecode.com/svn/trunk/trunk/gwen/Renderers/OpenGL/OpenGL.cpp
BTW, I'm using OpenGL 2.1, not 3.0+.
Ah GWEN. That frustrating GUI library.
When I started using it and integrating it into the engine we wrote in school, I had the same issue as you, although I was using the stock OpenGL renderer. It turned out the GUI was being positioned wrong; calling glLoadIdentity() to reset the matrix to identity seemed to resolve it.
The issue you are having could well turn out to be the same as mine, or there could be a problem with your custom OpenGL renderer. I'm not sure how much you know about GWEN or how it works, but it runs on a single texture that skins the GUI. Are you loading that in? Perhaps your texture loader isn't loading it correctly.
Try using your debugger and stepping through your program. Areas of interest would be where you're attempting to load the GUI skin, where you're assigning the screen space that GWEN can use, and where you're actually attempting to render the GUI.
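If it does turn out to be the same positioning issue, one minimal adjustment (just a guess mirroring the glLoadIdentity() suggestion above, not a confirmed fix) would be to reset the modelview matrix in your Begin() after switching to it:
void coRenderer::Begin()
{
    glUseProgram(0);
    glDisable(GL_DEPTH_TEST);
    glDepthMask(0);
    glEnable(GL_BLEND);

    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho(0, screen->w, screen->h, 0, -1, 1);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();   // reset the modelview so the GUI isn't transformed by whatever the scene left behind
    glActiveTexture(GL_TEXTURE0);
}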

Android NDK OpenGL ES 1.0 Simple Rendering

I'm starting out with the Android NDK and OpenGL. I know I'm doing a few things wrong here, and since I keep getting a black screen when I test, I know the rendering isn't reaching the screen.
On the Java side I have a GLSurfaceView.Renderer that calls these two native methods. They are being called correctly, but nothing is drawn to the device screen.
Could someone point me in the right direction with this?
Here are the native method implementations:
int init()
{
    sendMessage("init()");

    glGenFramebuffersOES(1, &framebuffer);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);

    glGenRenderbuffersOES(1, &colorRenderbuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderbuffer);
    glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_RGBA8_OES, 854, 480);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, colorRenderbuffer);

    GLuint depthRenderbuffer;
    glGenRenderbuffersOES(1, &depthRenderbuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthRenderbuffer);
    glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH_COMPONENT16_OES, 854, 480);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, depthRenderbuffer);

    GLenum status = glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES);
    if (status != GL_FRAMEBUFFER_COMPLETE_OES)
        sendMessage("Failed to make complete framebuffer object");

    return 0;
}

void draw()
{
    sendMessage("draw()");

    GLfloat vertices[] = {1,0,0, 0,1,0, -1,0,0};

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, vertices);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableClientState(GL_VERTEX_ARRAY);

    glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);
    glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderbuffer);
}
The log output is:
init()
draw()
draw()
draw()
etc..
I don't think this is a real solution at all.
I'm having the same problem here, using framebuffer objects inside native code, and by doing
framebuffer = (GLuint) 0;
you're only using the default framebuffer, which always exists and is reserved as object 0.
Technically, you could erase all your framebuffer-related code and your app should still work properly, since framebuffer 0 always exists and is the one bound by default.
But you should be able to generate multiple framebuffers and swap between them with the binding function (glBindFramebuffer) as you please. That doesn't seem to be working on my end, though, and I haven't found the real solution yet. There's not much documentation on the Android side, and I'm starting to wonder whether FBOs are really supported in native code. They do work properly from Java code; I've tested that with success!
Oh, and I just noticed that your buffer dimensions are not powers of two, which is usually expected for texture- and buffer-like structures in OpenGL.
UPDATE:
Now I'm fairly sure you cannot use FBOs with Android 2.2 or lower and NDK r5b or lower. It's a whole different game if you use the new 3.1 release, though, where you can write your entire app in native code (no more JNI wrapper necessary), but I haven't tested that yet.
On the other hand, I've managed to make stencil buffers and textures work flawlessly!
So my workaround will be to use those for my rendering logic and just forget about FBO off-screen rendering.
I finally found the problem after MUCH tinkering.
It turns out that because I was calling the code from a GLSurfaceView.Renderer in Java, the framebuffer already existed, so by calling
glGenFramebuffersOES(1, &framebuffer);
I was unintentionally allocating a NEW buffer that was not attached to the target display. By removing this line and replacing it with
framebuffer = (GLuint) 0;
it now renders to the correct buffer and displays properly on the screen. Note that even though I don't really use the buffer in this snippet, changing it is what messed up the display.
I had similar issues when moving from iOS to the Android NDK; here is my complete solution too:
OpenGL ES 1.1 with FrameBuffer / ColorBuffer / DepthBuffer for Android with NDK r7b