I have the hints:
glfwWindowHint(GLFW_DEPTH_BITS, GL_TRUE);
then later on I have:
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
and when drawing I have
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
but the result clearly doesn't use the depth buffer...
I feel like
glfwWindowHint(GLFW_DEPTH_BITS, GL_TRUE);
might not be correct?
I am using GLFW 3, OpenGL 4.1, macOS El Capitan 10.11.6
According to the GLFW window guide [1], GLFW_DEPTH_BITS cannot be used to enable or disable depth testing; it sets the size of the depth buffer in bits.
So what glfwWindowHint(GLFW_DEPTH_BITS, GL_TRUE); actually does is request a depth buffer size of 1 bit, since GL_TRUE is defined as 1. A 1-bit depth buffer is probably not even supported by the OpenGL implementation, which you could check using glGetError [2].
On Windows you usually don't have to set the depth buffer format at all; it just works out of the box with GLFW. I remember, however, that it was necessary to configure the framebuffer on iOS. So just delete the line glfwWindowHint(GLFW_DEPTH_BITS, GL_TRUE); and see what happens.
[1] http://www.glfw.org/docs/latest/window_guide.html
[2] https://www.khronos.org/registry/OpenGL-Refpages/es2.0/xhtml/glGetError.xml
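If you do want to request a particular depth buffer size explicitly, a minimal sketch for GLFW 3 on macOS could look like the following (window size and title are placeholders, and the verification query is my own assumption on top of the answer; GL 4.1 core also needs the forward-compat hint on macOS):
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 1);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE); /* needed for core contexts on macOS */
glfwWindowHint(GLFW_DEPTH_BITS, 24);                 /* a size in bits, not a boolean */
GLFWwindow *window = glfwCreateWindow(800, 600, "depth test", NULL, NULL);
glfwMakeContextCurrent(window);

/* In a core profile the old glGetIntegerv(GL_DEPTH_BITS, ...) query is gone;
 * query the default framebuffer instead to verify what you actually got. */
GLint depthBits = 0;
glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_DEPTH,
                                      GL_FRAMEBUFFER_ATTACHMENT_DEPTH_SIZE, &depthBits);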
I'm trying to debug some shaders but I can't change the one currently loaded. I tried running without loading any shader, or without linking any program, and it still works.
I already tried deleting the shaders from my HDD entirely. I tried calling glUseProgram (with any random number, including 0) just before calling glDrawElements and it still works. And even if I do load a shader, it has no effect. I still get linking and compile errors if I make mistakes in the files, but when I run the executable it just ignores what is in the shaders.
I draw the vertices with this:
void Renderer::renderFrame() {
    vao.bind();
    glUseProgram(0);
    glDrawElements(GL_LINE_LOOP, 3, GL_UNSIGNED_INT, nullptr);
}
and these are my window hints:
void App::start() {
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 4);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 4);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
    window = SDL_CreateWindow(title.c_str(), SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 500, 500, SDL_WINDOW_RESIZABLE | SDL_WINDOW_OPENGL);
    this->context = SDL_GL_CreateContext(window);
    glewInit();
    glClearColor(0.5, 1.0, 1.0, 1.0);
    renderer.init();
}
SDL_GL_SetAttribute() only affects the next SDL_CreateWindow() call.
From the doc wiki:
Use this function to set an OpenGL window attribute before window creation.
So right now you're most likely getting a Compatibility context where shader-less draws are perfectly valid. You can check the value of GL_VERSION to see what you're getting.
If you want a Core context make those SDL_GL_SetAttribute() calls before your SDL_CreateWindow():
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 4);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 4);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
window = SDL_CreateWindow(title.c_str(), SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 500,500, SDL_WINDOW_RESIZABLE|SDL_WINDOW_OPENGL);
this->context = SDL_GL_CreateContext(window);
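As a sanity check you can print what you actually got right after SDL_GL_CreateContext(); a minimal sketch (nothing assumed beyond SDL_Log and a GL 3.2+ header set such as GLEW provides):
const char *version = (const char *) glGetString(GL_VERSION);
SDL_Log("GL_VERSION: %s", version);

GLint profileMask = 0;
glGetIntegerv(GL_CONTEXT_PROFILE_MASK, &profileMask);
SDL_Log("core profile: %s", (profileMask & GL_CONTEXT_CORE_PROFILE_BIT) ? "yes" : "no");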
In case no valid shader is bound, the default fixed-function pipeline is usually used (GL 1.0 backward compatibility, even on a core profile sometimes, depending on vendor/driver).
So if your attribute locations happen to match the ones used by the fixed-function pipeline, your CPU-side code still renders an image; see:
What are the Attribute locations for fixed function pipeline in OpenGL 4.0++ core profile?
However, those locations are not defined by any standard, so they differ between vendors (and can change over time/driver versions). Only nVidia defined them, and is still using them after all these years...
So it's a good idea to check the GLSL compiler/linker log for any shader in development to avoid confusion. For more info on how to obtain them, see:
complete GL+GLSL+VAO/VBO C++ example
btw some gfx drivers support logging, and if enabled they will save the GLSL logs into a file on their own... that can be done for example with nVidia drivers and the NVEmulate utility
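A minimal sketch of pulling those logs in plain C/C++ (shader and program stand for whatever GL object names your loader created; they are placeholders, not part of the question's code):
GLint ok = GL_FALSE;
char log[4096];
GLsizei len = 0;

glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
if (ok != GL_TRUE)
{
    glGetShaderInfoLog(shader, sizeof(log), &len, log);
    /* print or save the compile log */
}

glGetProgramiv(program, GL_LINK_STATUS, &ok);
if (ok != GL_TRUE)
{
    glGetProgramInfoLog(program, sizeof(log), &len, log);
    /* print or save the link log */
}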
In my game, I am trying to create a glfw window with no depth buffer, stencil buffer or alpha buffer, because all I want it to do is render a 2D image to the screen, the result of a previous framebuffer.
So I use the following initialization code:
glfwDefaultWindowHints();
glfwWindowHint(GLFW_DEPTH_BITS, 0);
glfwWindowHint(GLFW_STENCIL_BITS, 0);
glfwWindowHint(GLFW_ALPHA_BITS, 0);
However, when I create my window and call glGetInteger(GL_ALPHA_BITS), it returns 8. The depth bits and stencil bits are 0, however.
My question is: when I specify a 'hint' using glfwWindowHint(), is it a recommendation for how the window should be created, or something that must be honored exactly?
Yes, it is a recommendation. But having more alpha bits is not a problem, since alpha blending can be enabled and disabled with glEnable(GL_BLEND)/glDisable(GL_BLEND).
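If you want to confirm what you actually got and make sure the extra alpha bits stay inert, a minimal sketch (assuming a compatibility or ES-style context; on a modern core profile the GL_*_BITS queries were removed and you'd use glGetFramebufferAttachmentParameteriv instead):
GLint alphaBits = 0, depthBits = 0, stencilBits = 0;
glGetIntegerv(GL_ALPHA_BITS,   &alphaBits);
glGetIntegerv(GL_DEPTH_BITS,   &depthBits);
glGetIntegerv(GL_STENCIL_BITS, &stencilBits);

glDisable(GL_BLEND); /* with blending off, the extra alpha bits have no visible effect */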
I would like it so that when mesh A (the character), is behind mesh B (a wall), it is still rendered but with a solid gray color.
I'm beginning OpenGL ES 2.0 and I'm still unsure how to go about this. From what I understand, the depth buffer lets meshes fight out which one will be seen in the fragments they cover; there are also various blend functions that could possibly be involved; and finally the stencil buffer looks like it would also offer this functionality.
So is there a way to output a different color from the shader based on a failed depth test? Is there a way to do this through blending? Or must I use the stencil buffer somehow?
And what is this technique called for future reference? I've seen it used in a lot of video games.
This can be done using the stencil buffer. The stencil buffer gives each pixel some additional bits which can be used as a bitmask or a counter. In your case you'd configure the stencil test unit to set a specific bitmask when the depth test for the character fails (because it's obstructed by the wall). Then you switch the stencil test operation to pass the stencil test for that specific bitmask, and render a full-viewport, solid quad in the desired color, with depth testing and depth writes disabled.
Code
I strongly recommend you dive deep into the documentation for the stencil test unit. It's a very powerful mechanism, often overlooked. Your particular problem would be solved by the following. I suggest you take this example code and read it in parallel with the references for the stencil test functions glStencilFunc and glStencilOp.
You must add a stencil buffer to your framebuffer's pixel format; how you do that is platform dependent. For example, if you're using GLUT, you'd add |GLUT_STENCIL to the format bitmask of glutInitDisplayMode; on iOS you'd set a property on your GLKView; etc. Once you've added a stencil buffer, you should clear it along with your other render buffers by adding |GL_STENCIL_BUFFER_BIT to the initial glClear call of each frame.
GLint const silouhette_stencil_mask = 0x1;

void display()
{
    /* ... */

    /* Draw the wall with normal depth testing, leaving the stencil
     * buffer untouched. */
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);
    glDisable(GL_STENCIL_TEST);
    /* The following two are not necessary according to the specification.
     * But drivers can be buggy and this makes sure we don't run into
     * issues caused by not wanting to change the stencil buffer, but
     * it happening anyway due to a buggy driver.
     */
    glStencilFunc(GL_NEVER, 0, 0);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);

    draw_the_wall();

    /* Draw the character; wherever it fails the depth test (i.e. is
     * hidden behind the wall) write the mask into the stencil buffer. */
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_ALWAYS, silouhette_stencil_mask, 0xffffffff);
    glStencilOp(GL_KEEP, GL_REPLACE, GL_KEEP);

    draw_the_character();

    /* Finally draw a full-viewport solid quad only where the mask was
     * set, with depth testing and depth writes disabled. */
    glStencilFunc(GL_EQUAL, silouhette_stencil_mask, 0xffffffff);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    glDisable(GL_DEPTH_TEST);
    glDepthMask(GL_FALSE);

    draw_full_viewport_solid_color();

    /* ... */
}
I try to use the stencil buffer of a FBO in OpenGL, but I can't get it to work. I bound a depth24_stencil8 texture to the FBO both for the depth and stencil targets. As a simple test, I tried:
/* Enable FBO */
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_NEVER, 1, 0xff);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
glBegin(GL_TRIANGLES);
/* Draw some triangles */
glEnd();
glDisable(GL_STENCIL_TEST);
/* Disable FBO and render it on screen as a texture. */
Since I use GL_NEVER, nothing should be rendered at all, but I can still see the triangles. It is as if there were no stencil buffer at all, and I cannot understand why. Trying this code without FBOs works, so I think I am using the stencil functions correctly. I don't have any idea how to solve this problem. Has anyone already used a stencil buffer with FBOs?
My bad, I was not attaching the stencil buffer to my FBO correctly. The strange thing is that my FBO status did not indicate any error or bad attachment, so I was convinced it was OK... this problem drove me crazy, but now it seems to work.
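For reference, a minimal sketch of a combined depth/stencil attachment that works on desktop GL 3.0+ (fbo, width and height are placeholders; the texture size must match the color attachment):
GLuint depthStencilTex = 0;
glGenTextures(1, &depthStencilTex);
glBindTexture(GL_TEXTURE_2D, depthStencilTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, width, height, 0,
             GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, NULL);

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
/* one call attaches both the depth and the stencil aspect */
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                       GL_TEXTURE_2D, depthStencilTex, 0);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
{
    /* handle the incomplete framebuffer */
}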
I'm starting out with the Android NDK and OpenGL. I know I'm doing something (probably a few things) wrong here, and since I keep getting a black screen when I test, I know the rendering isn't being sent to the screen.
On the Java side I have a GLSurfaceView.Renderer that calls these two native methods. They are being called correctly, but not drawing to the device screen.
Could someone point me in the right direction with this?
Here are the native method implementations:
int init()
{
    sendMessage("init()");

    glGenFramebuffersOES(1, &framebuffer);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);

    glGenRenderbuffersOES(1, &colorRenderbuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderbuffer);
    glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_RGBA8_OES, 854, 480);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, colorRenderbuffer);

    GLuint depthRenderbuffer;
    glGenRenderbuffersOES(1, &depthRenderbuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthRenderbuffer);
    glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH_COMPONENT16_OES, 854, 480);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, depthRenderbuffer);

    GLenum status = glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES);
    if (status != GL_FRAMEBUFFER_COMPLETE_OES)
        sendMessage("Failed to make complete framebuffer object");

    return 0;
}
void draw()
{
    sendMessage("draw()");

    GLfloat vertices[] = {1,0,0, 0,1,0, -1,0,0};

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, vertices);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableClientState(GL_VERTEX_ARRAY);

    glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);
    glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderbuffer);
}
The log output is:
init()
draw()
draw()
draw()
etc..
I don't think that this is a real solution at all.
I'm having the same problem here, using framebuffer objects inside native code, and by doing
framebuffer = (GLuint) 0;
you're only using the default framebuffer, which always exists and is reserved as 0.
Technically, you could erase all your framebuffer-related code and your app should still work properly, as framebuffer 0 is always there and is the one bound by default.
But you should be able to generate multiple framebuffers and swap between them using the binding function (glBindFramebuffer) as you please. That doesn't seem to be working on my end, though, and I haven't found the real solution yet. There's not much documentation on the Android side, and I'm starting to wonder if FBOs are really supported in native code. They do work properly from Java code, though; I've tested that with success!
Oh! And I just noticed that your buffer dimensions are not powers of 2... that usually should be the case for all texture/buffer-like structures in OpenGL.
UPDATE:
Now I'm fairly sure you cannot use FBOs with Android (2.2 or lower) and the NDK (r5b or lower). It's a whole different game if you use the new 3.1 release, though, where you can write all of your app in native code (no more JNI wrapper necessary), but I haven't tested that yet!
On the other hand, I've managed to make stencil buffers and textures work flawlessly!
So the workaround will be to use those for my rendering logic, and just forget about FBO offscreen rendering.
I finally found the problem after MUCH tinkering.
Turns out that because I was calling the code from a GLSurfaceView.Renderer in Java, the framebuffer already existed, so by calling:
glGenFramebuffersOES(1, &framebuffer);
I was unintentionally allocating a NEW buffer that was not attached to the target display. By removing this line and replacing it with:
framebuffer = (GLuint) 0;
It now renders to the correct buffer and displays properly on the screen. Note that even though I don't really use the buffer in this snippet, changing it is what messed up the proper display.
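If you'd rather not hard-code 0, a small sketch (my assumption, using the same OES_framebuffer_object entry points as above) is to query whichever framebuffer GLSurfaceView already has bound when init() runs and reuse that name:
GLint boundFramebuffer = 0;
glGetIntegerv(GL_FRAMEBUFFER_BINDING_OES, &boundFramebuffer);
framebuffer = (GLuint) boundFramebuffer; /* render into the surface GLSurfaceView set up */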
I had similar issues when moving from iOS to the Android NDK; here is my complete solution too.
OpenGLES 1.1 with FrameBuffer / ColorBuffer / DepthBuffer for Android with NDK r7b