Nsight - OpenGL 4.2 debugging incompatibility - SDL

Whenever I attempt to debug a shader in NVIDIA Nsight, I get the following incompatibilities listed in my nvcompatlog:
glDisable (cap = 0x00008620)
glMatrixMode
glPushMatrix
glLoadIdentity
glOrtho
glBegin
glColor4f
glVertex2f
glEnd
glPopMatrix
This is confusing, since I am using a 4.2 core profile and not making any deprecated or fixed-function calls. At this stage I am just drawing a simple 2D square to the screen, and I can assure you that none of the functions listed above is called by my code.
My real concern is that, being new to SDL and GLEW, I am not sure what functions they are using behind the scenes. I have been searching around the web and have found others who use SDL, GLEW, and NVIDIA Nsight together, which leads me to believe I am overlooking something. Below is a shortened version of how I am initializing SDL and GLEW.
SDL_Init(SDL_INIT_EVERYTHING);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 4);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 2);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
SDL_GL_SetAttribute(SDL_GL_ACCELERATED_VISUAL, 1);
SDL_Window *_window;
_window = SDL_CreateWindow("Red Square", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED , 200, 200, SDL_WINDOW_OPENGL);
SDL_GLContext glContext = SDL_GL_CreateContext(_window);
glewExperimental = GL_TRUE;
GLenum status = glewInit();
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
In the actual implementation I have error checking after pretty much every call; I excluded it from the example to reduce clutter. All of the above produces no errors and returns valid objects.
After initialization, glGetString(GL_VERSION) returns 4.2.0 NVIDIA 344.75, glewGetString(GLEW_VERSION) returns 1.11.0, and GLEW_VERSION_4_2 evaluates to true.
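For reference, a minimal sketch of the kind of checks and queries described above (assuming <GL/glew.h> and <cstdio> are included, and that status is the value returned by glewInit() in the snippet):
// Sketch only: run right after the initialization code shown above.
if (status != GLEW_OK)
    fprintf(stderr, "glewInit failed: %s\n", (const char*)glewGetErrorString(status));
printf("GL_VERSION:   %s\n", (const char*)glGetString(GL_VERSION));      // "4.2.0 NVIDIA 344.75"
printf("GLEW_VERSION: %s\n", (const char*)glewGetString(GLEW_VERSION));  // "1.11.0"
printf("OpenGL 4.2 available: %s\n", GLEW_VERSION_4_2 ? "yes" : "no");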
Any idea how I can use SDL and GLEW without either of these libraries calling deprecated functions?
** Edit **
I have been experimenting with Dependency Walker. Looking at the calls going through Opengl32.dll, none of the functions listed above shows up as being called.

For anyone interested: Nsight captures all commands issued to the OpenGL server, not just those issued by your application. If you have any FPS-overlay or recording software running, it tends to use deprecated calls to draw to the framebuffer. In my case it was RivaTuner, which displays the FPS on screen for any running game. Disabling it resolved my issue.

Related

Why is glDrawElements() working without using any shader?

I'm trying to debug some shaders, but I can't change the one currently loaded. I tried running without loading any shader or linking any program, and it still works.
I even tried deleting the shader files from my HDD completely, and calling glUseProgram (with any random number, including 0) just before calling glDrawElements, and it still works. Loading a shader has no effect either. I still get linking and compile errors if I make mistakes in the files, but when I run the executable it just ignores whatever is in the shaders.
I draw the vertices with this:
void Renderer::renderFrame() {
    vao.bind();
    glUseProgram(0);
    glDrawElements(GL_LINE_LOOP, 3, GL_UNSIGNED_INT, nullptr);
}
and these are my window hints:
void App::start() {
    window = SDL_CreateWindow(title.c_str(), SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 500, 500, SDL_WINDOW_RESIZABLE|SDL_WINDOW_OPENGL);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 4);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 4);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
    this->context = SDL_GL_CreateContext(window);
    glewInit();
    glClearColor(0.5, 1.0, 1.0, 1.0);
    renderer.init();
}
SDL_GL_SetAttribute() only affects the next SDL_CreateWindow() call.
From the doc wiki:
Use this function to set an OpenGL window attribute before window creation.
So right now you're most likely getting a Compatibility context where shader-less draws are perfectly valid. You can check the value of GL_VERSION to see what you're getting.
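For example (a sketch using only standard GL queries, run after SDL_GL_CreateContext()/glewInit(); assumes <GL/glew.h> and <cstdio>), you can print both the version string and the profile mask to see what you actually got:
printf("GL_VERSION: %s\n", (const char*)glGetString(GL_VERSION));
GLint mask = 0;
glGetIntegerv(GL_CONTEXT_PROFILE_MASK, &mask);   // meaningful on GL 3.2+ contexts
if (mask & GL_CONTEXT_CORE_PROFILE_BIT)
    printf("core profile\n");
else if (mask & GL_CONTEXT_COMPATIBILITY_PROFILE_BIT)
    printf("compatibility profile\n");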
If you want a Core context make those SDL_GL_SetAttribute() calls before your SDL_CreateWindow():
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 4);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 4);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
window = SDL_CreateWindow(title.c_str(), SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 500,500, SDL_WINDOW_RESIZABLE|SDL_WINDOW_OPENGL);
this->context = SDL_GL_CreateContext(window);
In case no valid shader is bound, the fixed-function pipeline is usually used instead (old GL 1.0-style backward compatibility, sometimes even on a core profile, depending on vendor/driver).
So if your attribute locations happen to match the ones used by the fixed-function pipeline, your CPU-side code still renders an image; see:
What are the Attribute locations for fixed function pipeline in OpenGL 4.0++ core profile?
However, these locations are not defined by any standard, so they differ between vendors (and can change over time with driver versions). Only NVIDIA documented them and has kept using them over the years.
So it is a good idea to check the GLSL compile/link logs for any shader in development to avoid confusion. For more info on how to obtain them, see:
complete GL+GLSL+VAO/VBO C++ example
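A sketch of how those logs are typically fetched with plain GL calls (here shader and program stand for whatever IDs your own code created):
// Sketch: query compile/link status and print the info logs.
GLint ok = GL_FALSE;
glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
if (ok != GL_TRUE) {
    char log[4096]; GLsizei len = 0;
    glGetShaderInfoLog(shader, sizeof(log), &len, log);
    fprintf(stderr, "shader compile log:\n%.*s\n", (int)len, log);
}
glGetProgramiv(program, GL_LINK_STATUS, &ok);
if (ok != GL_TRUE) {
    char log[4096]; GLsizei len = 0;
    glGetProgramInfoLog(program, sizeof(log), &len, log);
    fprintf(stderr, "program link log:\n%.*s\n", (int)len, log);
}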
By the way, some graphics drivers support logging and, if enabled, will save the GLSL logs to a file on their own; that can be done, for example, with NVIDIA drivers and the NVEmulate utility.

GL_DEPTH_TEST fails in middle of program

I was following a tutorial on learnopengl.com, moving the camera around, and suddenly GL_DEPTH_TEST fails.
GL_DEPTH_TEST works at first, then fails.
The program looks like this:
int main(){
    glEnable(GL_DEPTH_TEST);
    while (!glfwWindowShouldClose(window))
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glDrawArrays(GL_TRIANGLES, 0, 36); //some draw function
    }
}

void key_callback(GLFWwindow* window, int key, int scancode, int action, int mode)
{
    handler();
}
It actually fails in some other programs as well (meaning other tutorials I am building). If I place the glEnable(GL_DEPTH_TEST) in the loop, then it does not fail, so I suspect that GL_DEPTH_TEST has somehow been disabled or failed at runtime.
Is there a reason for this to happen?
How to prevent it?
Is placing glEnable(GL_DEPTH_TEST) in the loop the correct solution?
Is it hardware related? I am using an AMD Phenom X6 CPU with a Radeon 6850 card on my Windows PC.
EDIT:
I think my window setup was actually quite standard stuff:
#include <GL/glew.h>      // needed for glewInit()
#include <GLFW/glfw3.h>
int main(){
    glfwInit();
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);
    GLFWwindow* window = glfwCreateWindow(WIDTH, HEIGHT, "LearnOpenGL", nullptr, nullptr);
    glfwMakeContextCurrent(window);
    glewInit();
    while(!glfwWindowShouldClose(window)){}
}
EDIT:
I used glIsEnabled() to check, and indeed GL_DEPTH_TEST was disabled after some time. This happens in two of the programs I built: one just pans around on key press (changing the camera position), the other rotates using glfwGetTime(). The line if(!glIsEnabled(GL_DEPTH_TEST)) std::cout << "time: " << glfwGetTime() << " no depth!!" << std::endl; produced output.
Does Google Maps using WebGL in the background have anything to do with that?
I guess I shall have to resort to putting glEnable(GL_DEPTH_TEST) in the loop.
Is there a reason for this to happen?
Normally, no. OpenGL state is not supposed to change suddenly. However, you have additional software installed that injects DLLs and does "things" to your OpenGL context: programs like FRAPS (screen-capture software), stereoscopic/virtual-reality wrappers, debugging overlays, etc.
How to prevent it?
Writing correct code ;) – and by that I mean the full stack: your program, the OS written by someone, the GPU drivers written by someone else. Bugs happen.
Is placing glEnable(GL_DEPTH_TEST) in the loop the correct solution?
Yes. In fact, you should always set all drawing-related state anew with each drawing iteration, not only for correctness reasons, but also because with more advanced rendering techniques you'll eventually have to do this anyway. For example, if you're going to render shadow maps you'll have to use FBOs, which require you to set glViewport several times while rendering a frame. Or say you want to draw a minimap and/or a HUD; then you'll have to disable depth testing in between.
If your program is structured like this from the very beginning, things get much easier.
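As a rough sketch of that structure (reusing window, WIDTH and HEIGHT from the question; the HUD part is only indicated by a comment):
while (!glfwWindowShouldClose(window))
{
    // Re-assert the state this pass depends on instead of trusting it to persist.
    glEnable(GL_DEPTH_TEST);
    glViewport(0, 0, WIDTH, HEIGHT);   // will be set per-pass anyway once FBOs/shadow maps appear
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glDrawArrays(GL_TRIANGLES, 0, 36); // scene geometry

    glDisable(GL_DEPTH_TEST);          // e.g. for a minimap/HUD drawn on top
    // ... HUD / minimap drawing would go here ...

    glfwSwapBuffers(window);
    glfwPollEvents();
}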
Is it hardware related?
No. OpenGL is a software level specification and a conforming implementation must do whatever the specification says, regardless of the underlying hardware.
It may be your window declaration. Can you post your window and OpenGL initialization?
EDIT
I can see you are requesting OpenGL 3.3; you have to put
glewExperimental = GL_TRUE;
before glewInit() to make it work correctly.
Add it and check any errors returned by glewInit():
GLenum err = glewInit();
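If glewInit() reports a failure, glewGetErrorString() gives a readable message; a minimal sketch:
if (err != GLEW_OK)
    fprintf(stderr, "glewInit failed: %s\n", (const char*)glewGetErrorString(err));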
Does Google Maps using WebGL in the background have anything to do with that?
No, it shouldn't, because OpenGL doesn't share data between processes.

OpenGL glutInit() : XOpenDisplay() causing segmentation fault

I'm carrying out a project on virtualization of the CUDA API. The project is based on the QEMU hypervisor; I'm using the latest version, 2.6.0rc3. I have completed the core module, and this question is about demoing it. QEMU 2.6.0rc3 has OpenGL support.
I ran the following program on the VM to test OpenGL support & it executed without any issue.
#include <GL/glew.h>      // needed for glewInit()
#include <GL/freeglut.h>
#include <GL/gl.h>

void renderFunction()
{
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glClear(GL_COLOR_BUFFER_BIT);
    glColor3f(1.0, 1.0, 1.0);
    glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0);
    glBegin(GL_POLYGON);
    glVertex2f(-0.5, -0.5);
    glVertex2f(-0.5, 0.5);
    glVertex2f(0.5, 0.5);
    glVertex2f(0.5, -0.5);
    glEnd();
    glFlush();
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE);
    glutInitWindowSize(500, 500);
    glutInitWindowPosition(100, 100);
    glutCreateWindow("OpenGL - First window demo");
    glutDisplayFunc(renderFunction);
    glewInit();
    glutMainLoop();
    return 0;
}
I also used the NVIDIA sample graphics demo named "simpleGL", available with the CUDA 6.5 toolkit at https://developer.nvidia.com/cuda-toolkit-65. The demo uses OpenGL to depict a waveform and CUDA for the underlying calculations that simulate it. When I run this demo program, a segmentation fault occurs at the glutInit() call. Here's the related code segment from the demo.
bool initGL(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE);
    glutInitWindowSize(window_width, window_height);
    glutCreateWindow("Cuda GL Interop (VBO)");
    glutDisplayFunc(display);
    glutKeyboardFunc(keyboard);
    glutMotionFunc(motion);
    glutTimerFunc(REFRESH_DELAY, timerEvent, 0);

    // initialize necessary OpenGL extensions
    glewInit();
    if (!glewIsSupported("GL_VERSION_2_0 "))
    {
        fprintf(stderr, "ERROR: Support for necessary OpenGL extensions missing.");
        fflush(stderr);
        return false;
    }

    // default initialization
    glClearColor(0.0, 0.0, 0.0, 1.0);
    glDisable(GL_DEPTH_TEST);

    // viewport
    glViewport(0, 0, window_width, window_height);

    // projection
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, (GLfloat)window_width / (GLfloat)window_height, 0.1, 10.0);

    SDK_CHECK_ERROR_GL();

    return true;
}
Here's the gdb call stack.
#0 0x00007ffff57d2872 in XOpenDisplay ()
from /usr/lib/x86_64-linux-gnu/libX11.so.6
#1 0x00007ffff76af2a3 in glutInit ()
from /usr/lib/x86_64-linux-gnu/libglut.so.3
#2 0x000000000040394d in initGL(int, char**) ()
#3 0x0000000000403b6a in runTest(int, char**, char*) ()
#4 0x00000000004037dc in main ()
According to my research, the segmentation fault occurs when an attempt is made to open a window. My knowledge of the internal workings of OpenGL is very limited; some help in this regard is much appreciated. Thanks.
I'm carrying out a project for virtualization of CUDA API
Without support from NVidia I doubt you can do this on your own.
You're doing a few things that clash in a crass way:
First off, you're running everything in a QEMU environment, which means that, if you don't pass the GPU through into the VM via IOMMU virtualization, there's nothing a CUDA runtime in there could work with. CUDA is designed to talk directly to the GPU.
Next, you're using the Mesa OpenGL implementation inside the VM. Mesa has a dedicated backend for passing OpenGL commands through QEMU to an OpenGL implementation "outside" of it. This is more or less a remote procedure call, and it piggybacks on the very same code paths that also implement indirect GLX over X11 transport.
CUDA internally links against libGL.so, but the libGL.so it expects to see is the one from the NVIDIA drivers, not some arbitrary libGL.so, since libcuda.so and libGL.so ship as part of the same driver package, namely the NVIDIA drivers. The corresponding libcuda.so has certain internal "knowledge" about that particular libGL.so and tries to use it. Without the right libGL.so it won't work.
If you want to use CUDA in a VM (perfectly possible), you have to pass the whole GPU through into the VM. You can do this by loading the pci_stub kernel module, configuring the NVIDIA GPU as a device to be attached to the stub, and then launching the QEMU VM with pass-through of the GPU device (it should actually also be possible to hot-plug passthrough it, but I never tried that). For this to work, the nvidia kernel module must not have taken ownership of the GPU, so if you have multiple NVIDIA GPUs and want to pass through only a subset of them, you have to attach those to pci_stub before loading the nvidia kernel module. Then inside the VM you can use the NVIDIA drivers as usual.

Can't draw with OpenGL version greater than 3.1 with SDL

I'm making a simple application using OpenGL, GLEW and SDL2 which draws a simple quad on screen (I'm following the modern-OpenGL example on the Lazy Foo site).
When I use OpenGL version 3.1 everything works fine, but if I use OpenGL version 3.2+ the draw commands don't work (the triangle doesn't appear). Does anyone know what I am doing wrong?
This is how I set up everything:
if (SDL_Init(SDL_INIT_VIDEO) != 0) {
    return false;
}

SDL_GL_SetAttribute(SDL_GL_ACCELERATED_VISUAL, 1);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 2);
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 24);

m_pWindow = SDL_CreateWindow("Hello World!", 100, 100, 800, 600, SDL_WINDOW_OPENGL | SDL_WINDOW_SHOWN);
m_pGLContext = SDL_GL_CreateContext(m_pWindow);

glewExperimental = GL_TRUE;
GLenum glewError = glewInit();
if (glewError != GLEW_OK)
{
    return false;
}

//Use Vsync
if (SDL_GL_SetSwapInterval(1) < 0)
{
    return false;
}
If you want to check it, grab the one-file .cpp source from http://lazyfoo.net/tutorials/SDL/51_SDL_and_modern_opengl/index.php (at the bottom of the page), and try changing the OpenGL version from 3.1 to 3.2.
In OpenGL 3.2, the OpenGL profiles were introduced. The core profile actually removes all of the deprecated functions, which breaks compatibility with older GL code. The compatibility profile retains backwards compatibility.
To create a "modern" OpenGL context, extensions like GLX_ARB_create_context (Unix/X11) or WGL_ARB_create_context (Windows) have to be used (and SDL does that for you internally). Quoting these extension specifications gives the answer to your question:
The attribute name GLX_CONTEXT_PROFILE_MASK_ARB requests an OpenGL
context supporting a specific profile of the API. If the
GLX_CONTEXT_CORE_PROFILE_BIT_ARB bit is set in the attribute value,
then a context implementing the core profile of OpenGL is
returned. If the GLX_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB bit is
set, then a context implementing the compatibility profile is
returned. If the requested OpenGL version is less than 3.2,
GLX_CONTEXT_PROFILE_MASK_ARB is ignored and the functionality of the
context is determined solely by the requested version.
[...]
The default value for GLX_CONTEXT_PROFILE_MASK_ARB is
GLX_CONTEXT_CORE_PROFILE_BIT_ARB. All OpenGL 3.2 implementations are
required to implement the core profile, but implementation of the
compatibility profile is optional.
Since you did not explicitly request a compatibility profile (and SDL does not either), you got a core profile, and it seems your code is not valid in a core profile.
You might try requesting a compatibility profile by adding the
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_COMPATIBILITY);
hint before creating the context.
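For concreteness, a sketch of where that hint would sit in the initialization posted in the question (same variable names as above):
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 2);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_COMPATIBILITY);

m_pWindow = SDL_CreateWindow("Hello World!", 100, 100, 800, 600, SDL_WINDOW_OPENGL | SDL_WINDOW_SHOWN);
m_pGLContext = SDL_GL_CreateContext(m_pWindow);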
But be warned that this is not universally supported: macOS generally supports only GL up to 2.1, or GL >= 3.2 in a core profile only. The open-source drivers on Linux likewise support OpenGL >= 3.2 only in a core profile. So my recommendation is that you actually fix your code and switch to a core profile.

SDL with OpenGL (freeglut) crashes on call to glutBitmapCharacter

I have a program using OpenGL through freeglut under SDL. The SDL/OpenGL initialization is as follows:
// Initialize SDL
SDL_Init(SDL_INIT_VIDEO);
// Create the SDL window
SDL_SetVideoMode(SCREEN_W, SCREEN_H, SCREEN_DEPTH, SDL_OPENGL);
// Initialize OpenGL
glClearColor(BG_COLOR_R, BG_COLOR_G, BG_COLOR_B, 1.f);
glViewport(0, 0, SCREEN_W, SCREEN_H);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0f, SCREEN_W, SCREEN_H, 0.0f, -1.0f, 1.0f);
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
I've been using glBegin() ... glEnd() blocks to draw primitives without any trouble. However, in this program, when I call any glutBitmap* function the program simply exits without an error status. The code I'm using to draw text is:
glColor3f(1.f, 1.f, 1.f);
glRasterPos2f(x, y);
glutStrokeString(GLUT_BITMAP_8_BY_13, (const unsigned char*)"test string");
In previous similar programs I've used glutBitmapCharacter and glutStrokeString to draw text, and it seemed to work. The only difference is that I'm now using freeglut with SDL instead of just GLUT as in previous programs. Is there some fundamental problem with my setup that I'm not seeing, or is there a better way of drawing text?
From the GLUT API documentation, Section 2, Initialization:
Routines beginning with the glutInit- prefix are used to initialize
GLUT state. The primary initialization routine is glutInit that should
only be called exactly once in a GLUT program. No non-glutInit-
prefixed GLUT or OpenGL routines should be called before glutInit.
The other glutInit- routines may be called before glutInit. The reason
is these routines can be used to set default window initialization
state that might be modified by the command processing done in
glutInit. For example, glutInitWindowSize(400, 400) can be called
before glutInit to indicate 400 by 400 is the program's default window
size. Setting the initial window size or position before glutInit
allows the GLUT program user to specify the initial size or position
using command line arguments.
Don't try to mix-n-match GLUT and SDL. It will end in tears and/or non-functioning event loops. Pick one framework and stick with it.
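For illustration only, a minimal freeglut-only sketch of the call order the quoted section requires (window title, font and text are placeholders; glutBitmapString is freeglut's convenience call for whole strings):
#include <GL/freeglut.h>

static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glColor3f(1.f, 1.f, 1.f);
    glRasterPos2f(-0.9f, 0.f);
    glutBitmapString(GLUT_BITMAP_8_BY_13, (const unsigned char*)"test string");
    glutSwapBuffers();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);                  // must run before any other GLUT or GL call
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutInitWindowSize(640, 480);
    glutCreateWindow("bitmap text");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}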
You have likely corrupted the heap.