I'm trying to debug some shaders but I can't change the one currently loaded. I tried running without loading any shader or linking any program, and it still works.
I already tried deleting the shaders from my HDD completely. I tried calling glUseProgram (with any random number, including 0) just before calling glDrawElements and it still works. Even if I load a shader it has no effect. It still shows compile and link errors if I make mistakes in the files, but when I run the executable it just ignores what is in the shaders.
I draw the vertices with this:
void Renderer::renderFrame() {
vao.bind();
glUseProgram(0);
glDrawElements(GL_LINE_LOOP, 3, GL_UNSIGNED_INT, nullptr);
}
and these are my window hints:
void App::start() {
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 4);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 4);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
window = SDL_CreateWindow(title.c_str(), SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 500,500, SDL_WINDOW_RESIZABLE|SDL_WINDOW_OPENGL);
this->context = SDL_GL_CreateContext(window);
glewInit();
glClearColor(0.5,1.0,1.0,1.0);
renderer.init();
}
SDL_GL_SetAttribute() only affects the next SDL_CreateWindow() call.
From the doc wiki:
Use this function to set an OpenGL window attribute before window creation.
So right now you're most likely getting a Compatibility context where shader-less draws are perfectly valid. You can check the value of GL_VERSION to see what you're getting.
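For example, a quick sanity check might look like this (a minimal sketch; assumes GLEW is initialized and a context is current, and the profile mask query is only meaningful on a 3.2+ context):
#include <cstdio>
#include <GL/glew.h>

// Minimal sketch: query what context you actually got
void printContextInfo() {
    printf("GL_VERSION : %s\n", glGetString(GL_VERSION));
    printf("GL_RENDERER: %s\n", glGetString(GL_RENDERER));
    GLint mask = 0;
    glGetIntegerv(GL_CONTEXT_PROFILE_MASK, &mask); // valid on GL 3.2+ contexts only
    if (mask & GL_CONTEXT_CORE_PROFILE_BIT)          printf("core profile\n");
    if (mask & GL_CONTEXT_COMPATIBILITY_PROFILE_BIT) printf("compatibility profile\n");
}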
If you want a Core context make those SDL_GL_SetAttribute() calls before your SDL_CreateWindow():
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 4);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 4);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
window = SDL_CreateWindow(title.c_str(), SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 500,500, SDL_WINDOW_RESIZABLE|SDL_WINDOW_OPENGL);
this->context = SDL_GL_CreateContext(window);
In case no valid shader is bound, the default fixed-function pipeline is usually used (you know, GL 1.0 backward compatibility, even on a core profile sometimes, depending on vendor/driver).
So if your attribute locations match the ones the fixed-function pipeline uses, your CPU-side code still renders an image; see:
What are the Attribute locations for fixed function pipeline in OpenGL 4.0++ core profile?
However, the locations are not defined by any standard, so they differ between vendors (and can change over time/driver versions). Only nVidia defined them and is still using them after all these years...
So it's a good idea to check the GLSL compiler/linker log for any shader in development to avoid confusion... For more info on how to obtain the logs see:
complete GL+GLSL+VAO/VBO C++ example
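As a rough sketch (standard glGetShaderiv/glGetProgramiv calls; fixed-size log buffers just for brevity), obtaining the logs could look like this:
#include <cstdio>
#include <GL/glew.h>

// Minimal sketch: dump GLSL compile/link logs
void checkShader(GLuint shader) {
    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (ok != GL_TRUE) {
        char log[4096];
        glGetShaderInfoLog(shader, sizeof(log), nullptr, log);
        fprintf(stderr, "shader compile failed:\n%s\n", log);
    }
}
void checkProgram(GLuint program) {
    GLint ok = GL_FALSE;
    glGetProgramiv(program, GL_LINK_STATUS, &ok);
    if (ok != GL_TRUE) {
        char log[4096];
        glGetProgramInfoLog(program, sizeof(log), nullptr, log);
        fprintf(stderr, "program link failed:\n%s\n", log);
    }
}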
By the way, some gfx drivers support logging, and if enabled they will save the GLSL logs into a file on their own... that can be done, for example, with nVidia drivers and the NVEmulate utility.
Related
First of all, sorry for the title, I didn't know what I should call this.
I have found out that when I call glCopyImageSubData on a secondary thread, with a secondary context created with GLFW and an explicit version, the following render calls just don't work like they should. In my program I create two textures, texture 2 bigger than texture 1. I upload pixels to texture 1 but not texture 2; I just allocate space. Then I use glCopyImageSubData. After that I delete the textures. All this happens on a secondary context sharing resources with the window.
In theory I do not modify anything on the rendering side of things; I just create two textures, copy data and delete them. My rendering loop looks like this:
glViewport(0, 0, width, height);
glClearColor(1F, 0F, 0F, 0.5F);
glClear(GL_COLOR_BUFFER_BIT);
glfwSwapBuffers(window);
The following render calls on the window don't work like they should; I don't know what happens or why it is happening.
If I do not specify the context version, it works just fine, which is a mystery to me because I do not think it should make a difference... If I specify the version of GL running on my computer, the same context is created as if I did not specify the version, right?
I specify the version like this:
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 6);
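To double-check the assumption that an explicit hint yields the same context, I can query what was actually created (a minimal sketch in C-style GLFW; the LWJGL bindings expose the same functions):
#include <cstdio>
#include <GLFW/glfw3.h>

// Minimal sketch: compare the requested context with the one actually created
void logContextInfo(GLFWwindow* window) {
    int major   = glfwGetWindowAttrib(window, GLFW_CONTEXT_VERSION_MAJOR);
    int minor   = glfwGetWindowAttrib(window, GLFW_CONTEXT_VERSION_MINOR);
    int profile = glfwGetWindowAttrib(window, GLFW_OPENGL_PROFILE);
    printf("got context %d.%d, profile 0x%x\n", major, minor, profile);
    printf("GL_VERSION: %s\n", glGetString(GL_VERSION)); // needs the context current on this thread
}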
Here is a video of the problem:
https://youtu.be/5fc6m1BEKyc
Here is the github with the code: https://github.com/DasBabyPixel/lwjgl-testing
(In the code, mode=1 with disableExplicitVersion=false is what I want to get to work)
My System info just to confirm things
EDIT
The copy process works just fine, confirmed by writing the textures to file.
I don't know, maybe I'm being unclear: I am trying to figure out why this is happening and how to fix it. Leaving the version hints out is not an option, because I want to be able to use GLES 3.2.
I was following a tutorial on learnopengl.com, moving the camera around, and suddenly the GL_DEPTH_TEST fails.
GL_DEPTH_TEST WORKS AT FIRST, THEN FAILS
The program looks like this:
int main(){
glEnable(GL_DEPTH_TEST);
while (!glfwWindowShouldClose(window))
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glDrawArrays(GL_TRIANGLES, 0, 36); //some draw function
}
}
void key_callback(GLFWwindow* window, int key, int scancode, int action, int mode)
{
handler();
}
It actually fails in some other programs as well (meaning other tutorials I am building). If I place glEnable(GL_DEPTH_TEST) in the loop, then it does not fail, so I suspect that GL_DEPTH_TEST has somehow been disabled / failed during runtime.
Is there a reason for this to happen?
How can I prevent it?
Is placing glEnable(GL_DEPTH_TEST) in the loop the correct solution?
Is it hardware related? I am using a Phenom X6 AMD CPU with a Radeon 6850 card on my Windows PC.
EDIT:
I think my window setup was actually quite standard stuff:
#include <GL/glew.h>   // GLEW header has to come before the GLFW header, since the code calls glewInit()
#include <GLFW/glfw3.h>
int main(){
glfwInit();
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);
GLFWwindow* window = glfwCreateWindow(WIDTH, HEIGHT, "LearnOpenGL", nullptr, nullptr);
glfwMakeContextCurrent(window);
glewInit();
while(!glfwWindowShouldClose(window)){}
}
EDIT:
I used the function glIsEnabled() to check; indeed, GL_DEPTH_TEST was disabled after some time. This happens in 2 of the built programs: one just pans around by key press (changing the camera position), the other one rotates by glfwGetTime(). The line if(!glIsEnabled(GL_DEPTH_TEST)) std::cout << "time: " << glfwGetTime() << " no depth!!" << std::endl; gave output.
Does Google Maps' WebGL running in the background have anything to do with that?
I guess I shall have to resort to putting glEnable(GL_DEPTH_TEST) in the loop.
Is there a reason for this to happen?
Normally not. OpenGL state is not supposed to suddenly change. However, you may have additional software installed that injects DLLs and does "things" to your OpenGL context: programs like FRAPS (screen capture software), stereoscopic/virtual-reality wrappers, debugging overlays, etc.
How can I prevent it?
Writing correct code ;) and by that I mean the full stack: your program, the OS written by someone, the GPU drivers written by someone else. Bugs happen.
Is placing glEnable(GL_DEPTH_TEST) in the loop the correct solution?
Yes. In fact, you should always set all drawing-related state anew with each drawing iteration, not only for correctness reasons, but because with more advanced rendering techniques you'll eventually have to do this anyway. For example, if you're going to render shadow maps you'll have to use FBOs, which require setting glViewport several times while rendering a frame. Or say you want to draw a minimap and/or a HUD, then you'll have to disable depth testing in between.
If your program is structured like this from the very beginning, things get much easier.
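As a rough sketch of that structure (not your exact code, just one way the per-frame state setup might look; window, width and height are placeholders):
// Minimal sketch: re-establish all per-frame state at the top of every frame
while (!glfwWindowShouldClose(window))
{
    glViewport(0, 0, width, height);          // may change when the window is resized
    glEnable(GL_DEPTH_TEST);                  // don't rely on it staying enabled
    glDepthFunc(GL_LESS);
    glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glDrawArrays(GL_TRIANGLES, 0, 36);        // scene draw calls go here

    glfwSwapBuffers(window);
    glfwPollEvents();
}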
Is it hardware related?
No. OpenGL is a software-level specification, and a conforming implementation must do whatever the specification says, regardless of the underlying hardware.
It may be your window declaration. Can you post your window and OpenGL initialization?
EDIT
I can see you are requesting OpenGL 3.3; you have to put
glewExperimental = GL_TRUE;
before glewInit() to make it work correctly.
Try adding it and check the error returned by glewInit():
GLenum err = glewInit();
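Put together, the initialization order described above might look like the following sketch (the helper name and the error message are just examples):
#include <GL/glew.h>    // must come before the GLFW header
#include <GLFW/glfw3.h>
#include <iostream>

// Minimal sketch: GLEW initialization for a core-profile context, with error check
bool initGlew(GLFWwindow* window) {
    glfwMakeContextCurrent(window); // GLEW needs a current context
    glewExperimental = GL_TRUE;     // expose core-profile entry points
    GLenum err = glewInit();
    if (err != GLEW_OK) {
        std::cerr << "glewInit failed: " << glewGetErrorString(err) << std::endl;
        return false;
    }
    return true;
}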
Does Google Maps' WebGL in the background have anything to do with that?
No, it shouldn't, because OpenGL doesn't share data between processes.
I'm making a simple application using OpenGL, GLEW and SDL2 which draws a simple quad on screen (I'm following the example on the lazyfoo site with modern OpenGL).
When I use OpenGL version 3.1 everything works fine, but if I use OpenGL version 3.2+ the draw commands don't work (the triangle doesn't appear). Does someone know what I am doing wrong?
This is how I setup everything:
if (SDL_Init(SDL_INIT_VIDEO) != 0) {
return false;
}
SDL_GL_SetAttribute(SDL_GL_ACCELERATED_VISUAL, 1);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 2);
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 24);
m_pWindow = SDL_CreateWindow("Hello World!", 100, 100, 800, 600, SDL_WINDOW_OPENGL | SDL_WINDOW_SHOWN );
m_pGLContext = SDL_GL_CreateContext(m_pWindow);
glewExperimental = GL_TRUE;
GLenum glewError = glewInit();
if (glewError != GLEW_OK)
{
return false;
}
//Use Vsync
if (SDL_GL_SetSwapInterval(1) < 0)
{
return false;
}
If you want to check it, grab the one-file cpp source from http://lazyfoo.net/tutorials/SDL/51_SDL_and_modern_opengl/index.php (at the bottom of the page) and try changing the OpenGL version from 3.1 to 3.2.
In OpenGL 3.2, the OpenGL profiles were introduced. The core profile actually removes all of the deprecated functions, which breaks compatibility with older GL code. The compatibility profile retains backwards compatibility.
To create a "modern" OpenGL context, extensions like GLX_ARB_create_context (Unix/X11) or WGL_ARB_create_context (Windows) have to be used (and SDL does that for you internally). Citing these extension specifications gives the answer to your question:
The attribute name GLX_CONTEXT_PROFILE_MASK_ARB requests an OpenGL context supporting a specific profile of the API. If the GLX_CONTEXT_CORE_PROFILE_BIT_ARB bit is set in the attribute value, then a context implementing the core profile of OpenGL is returned. If the GLX_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB bit is set, then a context implementing the compatibility profile is returned. If the requested OpenGL version is less than 3.2, GLX_CONTEXT_PROFILE_MASK_ARB is ignored and the functionality of the context is determined solely by the requested version.
[...]
The default value for GLX_CONTEXT_PROFILE_MASK_ARB is GLX_CONTEXT_CORE_PROFILE_BIT_ARB. All OpenGL 3.2 implementations are required to implement the core profile, but implementation of the compatibility profile is optional.
Since you did not explicitly request a compatibility profile (and SDL does not either), you got a core profile, and it seems like your code is invalid in a core profile.
You might try requesting a compatibility profile by adding the
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_COMPATIBILITY);
hint before creating the context.
But be warned that this is not universally supported: macOS generally supports only GL up to 2.1, or GL >= 3.2 as core profile only. The open source drivers on Linux likewise support OpenGL >= 3.2 only in the core profile. So my recommendation is that you actually fix your code and switch to a core profile.
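One common reason 3.1-era code stops drawing on a 3.2+ core profile is that core profiles require a vertex array object to be bound before vertex attribute setup and draw calls. Whether that is the exact issue in the lazyfoo sample is an assumption on my part, but a core-profile-friendly setup looks roughly like this (vertexData and the attribute location are placeholders):
// Minimal sketch: core profiles need a VAO bound before attribute setup and draws
GLfloat vertexData[] = { -0.5f,-0.5f,  0.5f,-0.5f,  0.5f,0.5f,  -0.5f,0.5f }; // example quad
GLuint vao = 0, vbo = 0;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertexData), vertexData, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);                                // location 0 in the vertex shader (assumed)
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, nullptr);

// per frame, with the shader program in use:
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);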
Whenever I attempt to debug a shader in Nvidia Nsight I get the following incompatibility list in my nvcompatlog.
glDisable (cap = 0x00008620)
glMatrixMode
glPushMatrix
glLoadIdentity
glOrtho
glBegin
glColor4f
glVertex2f
glEnd
glPopMatrix
This is confusing since I am using a 4.2 core profile and am not using any deprecated or fixed-function calls. At this stage I am just drawing a simple 2D square to the screen and can assure you none of the functions listed above are being used.
My real concern is that, being new to SDL & GLEW, I am not sure what functions they are using behind the scenes. I have been searching around the web and have found others who are using SDL, GLEW, & Nvidia Nsight. This leads me to believe I am overlooking something. Below is a shortened version of how I am initializing SDL & GLEW.
SDL_Init(SDL_INIT_EVERYTHING);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 4);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 2);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
SDL_GL_SetAttribute(SDL_GL_ACCELERATED_VISUAL, 1);
SDL_Window *_window;
_window = SDL_CreateWindow("Red Square", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED , 200, 200, SDL_WINDOW_OPENGL);
SDL_GLContext glContext = SDL_GL_CreateContext(_window);
glewExperimental = GL_TRUE;
GLenum status = glewInit();
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
In the implementation I have error checking pretty much after every call. I excluded it from the example to reduce the amount of clutter. All the above produce no errors and return valid objects.
After the initialization, glGetString(GL_VERSION) returns 4.2.0 NVIDIA 344.75, glewGetString(GLEW_VERSION) returns 1.11.0, and GLEW_VERSION_4_2 returns true.
Any idea how I can use SDL & GLEW and not have either of these frameworks call deprecated functions?
** Edit **
I have been experimenting with Dependency Walker here. Looking at the calls through Opengl32.dll, none of what is listed shows up as a called module.
For anyone interested: Nsight captures all commands issued to the OpenGL server, not just those issued by your application. If you have any FPS or recording software enabled, these tend to use deprecated methods for drawing to the framebuffer. In my case it was RivaTuner, which displays the FPS on screen for any running game. Disabling it resolved my issue.
I'm starting out with the Android NDK and OpenGL. I know I'm doing something (probably a few things) wrong here, and since I keep getting a black screen when I test, I know the rendering isn't being sent to the screen.
In the Java code I have a GLSurfaceView.Renderer that calls these two native methods. They are being called correctly but not drawing to the device screen.
Could someone point me in the right direction with this?
Here are the native method implementations:
int init()
{
sendMessage("init()");
glGenFramebuffersOES(1, &framebuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);
glGenRenderbuffersOES(1, &colorRenderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderbuffer);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_RGBA8_OES, 854, 480);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, colorRenderbuffer);
GLuint depthRenderbuffer;
glGenRenderbuffersOES(1, &depthRenderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthRenderbuffer);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH_COMPONENT16_OES, 854, 480);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, depthRenderbuffer);
GLenum status = glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES);
if(status != GL_FRAMEBUFFER_COMPLETE_OES)
sendMessage("Failed to make complete framebuffer object");
return 0;
}
void draw()
{
sendMessage("draw()");
GLfloat vertices[] = {1,0,0, 0,1,0, -1,0,0};
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertices);
glDrawArrays(GL_TRIANGLES, 0, 3);
glDisableClientState(GL_VERTEX_ARRAY);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);
glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderbuffer);
}
The log output is:
init()
draw()
draw()
draw()
etc..
I don't think that this is a real solution at all.
I'm having the same problem here, using framebuffer objects inside native code, and by doing
framebuffer = (GLuint) 0;
you're only using the default framebuffer, which always exists and is reserved as 0.
Technically, you could erase all your code related to framebuffers and your app should work properly, since framebuffer 0 always exists and is the one bound by default.
But you should be able to generate multiple framebuffers and swap between them using the binding function (glBindFramebuffer) as you please, as sketched below. That doesn't seem to be working on my end, though, and I haven't found the real solution yet. There's not much documentation on the Android side, and I'm starting to wonder if FBOs are really supported in native code. They do work properly from Java code though, I've tested it with success!
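For reference, what I mean by swapping (in principle, per the GL_OES_framebuffer_object extension; whether a given Android driver honors this from native code is exactly what seems broken on my end) would look roughly like this:
// Sketch only: switch between an offscreen FBO and the default framebuffer (ES 1.1 + OES extension)
GLuint offscreenFbo = 0;
glGenFramebuffersOES(1, &offscreenFbo);
// ...attach color/depth renderbuffers to offscreenFbo here (omitted for brevity)...

glBindFramebufferOES(GL_FRAMEBUFFER_OES, offscreenFbo); // render offscreen
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// ...offscreen draw calls...

glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0);            // back to the surface GLSurfaceView provides
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// ...on-screen draw calls...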
Oh, and I just noticed that your buffer dimensions are not powers of 2... that usually should be the case for all texture/buffer-like structures in OpenGL.
UPDATE:
Now I'm fairly sure you cannot use FBOs with Android (2.2 or lower) and the NDK (version r5b or lower). It is a whole different game if you use the new 3.1 release though, where you can code all of your app in native code (no more JNI wrapper necessary), but I haven't tested it yet!
On the other hand, I've managed to make stencil buffers and textures work flawlessly!
So the workaround will be to use those for my rendering logic, and just forget about FBO offscreen rendering.
I finally found the problem after MUCH tinkering.
It turns out that because I was calling the code from a GLSurfaceView.Renderer in Java, the framebuffer already existed, so by calling:
glGenFramebuffersOES(1, &framebuffer);
I was unintentionally allocating a NEW buffer that was not attached to the target display. By removing this line and replacing it with:
framebuffer = (GLuint) 0;
it now renders to the correct buffer and displays properly on the screen. Note that even though I don't really use the buffer in this snippet, changing it is what messed up the proper display.
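Applied to the init()/draw() pair above, the idea looks roughly like this (just a sketch of the change, not my exact final code; note the clear now happens before the draw calls):
// Sketch: render into the default framebuffer that GLSurfaceView already set up
GLuint framebuffer = 0; // 0 = the window-system framebuffer; no glGenFramebuffersOES needed

void draw()
{
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    GLfloat vertices[] = {1,0,0, 0,1,0, -1,0,0};
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, vertices);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableClientState(GL_VERTEX_ARRAY);
}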
I had similar issues when moving from iOS to the Android NDK; here is my complete solution too:
OpenGLES 1.1 with FrameBuffer / ColorBuffer / DepthBuffer for Android with NDK r7b