Dedicated Nvidia GPU won't draw OpenGL - C++

My setup includes an on-board Intel integrated GPU for everyday tasks and a high-performance Nvidia GPU for graphics-intensive applications. I'm developing an OpenGL 3.3 (core profile) application using shaders, not the fixed-function pipeline. By default my app runs on the Intel GPU and works just fine, but as soon as I try to run it on the Nvidia GPU, it only shows a black screen.
Now here's the interesting part. The OpenGL context gets created correctly, and the world coordinate axes I draw for debugging actually get drawn (GL_LINES). For some reason, the Nvidia GPU doesn't draw any GL_POLYGONs or GL_QUADs.
Has anyone experienced something similar, and what do you think is the culprit here?

It turns out GL_POLYGON, GL_QUADS and GL_QUAD_STRIP are removed from the OpenGL 3.3 core profile. For some reason the Intel driver draws them regardless, but Nvidia started drawing as well as soon as I substituted GL_TRIANGLES and friends for them. Always check for removed features when problems like this arise.
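For reference, here is a minimal sketch of the substitution, assuming a VAO and VBO for the four quad vertices are already set up (quadVao and quadEbo are hypothetical names, not from the original code):
// The four quad corners stay in the VBO exactly as before (0 = bottom-left,
// 1 = bottom-right, 2 = top-right, 3 = top-left); only the draw call changes.
GLuint quadIndices[] = { 0, 1, 2,   2, 3, 0 };   // two CCW triangles covering the quad
glBindVertexArray(quadVao);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, quadEbo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(quadIndices), quadIndices, GL_STATIC_DRAW);
// GL_QUADS is gone in the core profile; GL_TRIANGLES is the replacement.
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, nullptr);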

Related

Why can I use GL_QUADS with glDrawArrays?

The glDrawArrays documentation does not list GL_QUADS as a valid mode:
https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glDrawArrays.xhtml
But when I call glDrawArrays(GL_QUADS, 0, 4), the rectangle is drawn. My PC is running OpenGL 4.3.
Why is this?
Probably because your driver implements the compatibility profile.
However, the OpenGL spec is just a spec. Vendors are free to implement additional functionality to keep users happy, for example by continuing to support legacy software.
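If you want to verify this at runtime, a minimal sketch (assuming an OpenGL 3.2+ context is current) is to query the profile mask:
GLint profileMask = 0;
glGetIntegerv(GL_CONTEXT_PROFILE_MASK, &profileMask);   // available since OpenGL 3.2
if (profileMask & GL_CONTEXT_COMPATIBILITY_PROFILE_BIT)
    printf("compatibility profile: GL_QUADS will most likely still work\n");
else if (profileMask & GL_CONTEXT_CORE_PROFILE_BIT)
    printf("core profile: GL_QUADS is removed\n");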

Qt OpenGL OSX rendering slow, Windows fast on same machine

I am rendering a mesh using OpenGl through Qt. (Qt 5.4).
On my OSX computer the rendering is relatively slow. When I rotate the mesh I can see that the rendering can't keep up with my mouse input.
On the same OSX computer, when running my application in a Windows 7 virtual machine, the rendering is silky smooth. It almost looks like the Mac version is rendering in software mode instead of using acceleration.
I used glGetString to check the vendor and renderer being used, and they look OK:
"NVIDIA Corporation"
"NVIDIA GeForce GT 650M OpenGL Engine"
Any idea why the natively built OSX code would run so much slower?
BTW: I am rendering a mesh composed of about 150,000 vertices using a GL_ARRAY_BUFFER.
I am quite new to OpenGL, any ideas?
I'm answering this question so that it can be closed.
As Kuba Ober indicated in the comments above, the problem was caused by an OpenGL mistake that Windows appeared to be hiding. In my case I forgot to call QOpenGLShaderProgram::disableAttributeArray(), e.g.:
program->enableAttributeArray(texcoorLocation);
glVertexAttribPointer(texcoorLocation, 2, GL_FLOAT, GL_FALSE, 0, uv);
glDrawElements(GL_TRIANGLES, elementCount, GL_UNSIGNED_SHORT, indices);
program->disableAttributeArray(texcoorLocation); //<-- this line was missing
It seems Windows forgave this problem, while OSX did not.
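One way to make that omission structurally impossible (a hypothetical helper, not part of the original fix) is a small RAII guard that pairs the enable/disable calls:
#include <QOpenGLShaderProgram>
// Enables an attribute array on construction and disables it on destruction,
// so the disableAttributeArray() call can no longer be forgotten.
class ScopedAttributeArray {
public:
    ScopedAttributeArray(QOpenGLShaderProgram *program, int location)
        : m_program(program), m_location(location)
    {
        m_program->enableAttributeArray(m_location);
    }
    ~ScopedAttributeArray()
    {
        m_program->disableAttributeArray(m_location);
    }
private:
    QOpenGLShaderProgram *m_program;
    int m_location;
};
// Usage in the draw code above:
{
    ScopedAttributeArray texcoord(program, texcoorLocation);
    glVertexAttribPointer(texcoorLocation, 2, GL_FLOAT, GL_FALSE, 0, uv);
    glDrawElements(GL_TRIANGLES, elementCount, GL_UNSIGNED_SHORT, indices);
} // attribute array is disabled here, even on early returns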

Are shaders used very early in the latest OpenGL?

When I look at the 4th edition of the book "OpenGL SuperBible", it starts with drawing points, lines and polygons, and shaders are only discussed later on. The 6th edition starts directly with shaders as the very first example. I haven't used OpenGL for a long time, but is starting with shaders now the way to go?
Why the shift? Is it because of the move from the fixed pipeline to shaders?
To a limited extent it depends on exactly which branch of OpenGL you're talking about. OpenGL ES 2.0 has no path to the screen other than shaders: there's no matrix stack, no option to draw without shaders, and none of the fixed-pipeline-bound built-in variables. WebGL is based on OpenGL ES 2.0, so it inherits all of that behaviour.
As per derhass' comment, all of the fixed stuff is deprecated in modern desktop GL and you can expect it to vanish over time. The quickest thing to check is probably the OpenGL 4.4 quick reference card. If the functionality you want isn't on there, it's not in the latest OpenGL.
As per your comment, Khronos defines OpenGL to be:
the only cross-platform graphics API that enables developers of
software for PC, workstation, and supercomputing hardware to create
high-performance, visually-compelling graphics software applications,
in markets such as CAD, content creation, energy, entertainment, game
development, manufacturing, medical, and virtual reality.
It more or less just exposes the hardware. The hardware can't do anything without shaders. Nobody in the industry wants to be maintaining shaders that emulate the old fixed functionality forever.
"About the only rendering you can do with OpenGL without shaders is clearing a window, which should give you a feel for how important they are when using OpenGL." - From OpenGL official guide
With OpenGL 3.1, the fixed-function pipeline was removed from the core specification and shaders became mandatory.
So the SuperBible and the OpenGL Red Book now begin by describing the programmable pipeline early in the discussion, and then show how to write and use vertex and fragment shader programs.
For your shader objects you now have to:
Create the shader (glCreateShader, glShaderSource)
Compile the shader source into an object (glCompileShader)
Verify the shader (glGetShaderInfoLog)
Then you link the shader objects into your shader program:
Create shader program (glCreateProgram)
Attach the shader objects (glAttachShader)
Link the shader program (glLinkProgram)
Verify (glGetProgramInfoLog)
Use the shader (glUseProgram)
There is more to do before you can render than there was with the old fixed-function pipeline. No doubt the programmable pipeline is more powerful, but it does make it harder just to begin rendering, and shaders are now a core concept to learn.
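A minimal sketch of the steps above in C++ (assuming an OpenGL 3.3 context is current and vertexSrc / fragmentSrc are hypothetical C strings holding the GLSL source):
GLuint compileShader(GLenum type, const char *source)
{
    GLuint shader = glCreateShader(type);            // create the shader object
    glShaderSource(shader, 1, &source, nullptr);     // hand it the GLSL source
    glCompileShader(shader);                         // compile

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);   // verify compilation
    if (ok != GL_TRUE) {
        char log[1024];
        glGetShaderInfoLog(shader, sizeof(log), nullptr, log);
        fprintf(stderr, "shader compile error: %s\n", log);
    }
    return shader;
}

GLuint vs = compileShader(GL_VERTEX_SHADER, vertexSrc);
GLuint fs = compileShader(GL_FRAGMENT_SHADER, fragmentSrc);

GLuint program = glCreateProgram();                  // create the shader program
glAttachShader(program, vs);                         // attach the shader objects
glAttachShader(program, fs);
glLinkProgram(program);                              // link the shader program

GLint linked = GL_FALSE;
glGetProgramiv(program, GL_LINK_STATUS, &linked);    // verify linking
if (linked != GL_TRUE) {
    char log[1024];
    glGetProgramInfoLog(program, sizeof(log), nullptr, log);
    fprintf(stderr, "program link error: %s\n", log);
}

glUseProgram(program);                               // use the shader program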

OpenGL 3.+ GLSL compatibility mess?

So, I've googled a lot of OpenGL 3.+ tutorials, all of which use shaders (GLSL 330 core). However, I don't have a graphics card that supports these newer GLSL versions; maybe I just have to update my driver, but I'm not sure whether my card is intrinsically able to support it.
Currently my OpenGL version is 3.1, and on Windows with C++ I created a modern context with backwards compatibility. My GLSL version is 1.30 via the NVIDIA Cg compiler (full definition), and GLSL 1.30 corresponds to #version 130.
The problem is: version 130 is still based on the legacy OpenGL pipeline, because it contains things like the built-in modelview and projection matrices. How am I supposed to use those when I am using core functions in my client app (OpenGL 3+)?
This is really confusing; please give me concrete examples.
Furthermore, I want my app to run on most OpenGL implementations, so could you tell me where the border between legacy GLSL and modern GLSL lies? Is GLSL 330 the modern GLSL, and is OpenGL 3.+ compatible with older GLSL versions?
I would say OpenGL 3.1 is modern OpenGL.
Any hardware that supports OpenGL 3.1 is capable of supporting OpenGL 3.3; whether the driver actually supports it is another matter. Updating your graphics driver will probably bump you up to OpenGL 3.3.
Just to clear this up: OpenGL 3.1 is not legacy OpenGL. Legacy OpenGL would be:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(90.0, 0.0, 1.0, 0.0);
glTranslatef(0.0, 0.0, -5.0);
OpenGL 3.1 with a compatibility context supports this, but that doesn't mean it should be used. If you are developing for OpenGL 3 capable hardware, you should most definitely not be using it. You can disable the legacy functionality by requesting a core context.
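For illustration, requesting a core context with GLFW could look like this (GLFW is just an example library here, not something from the original question; if you create the context by hand on Windows, the equivalent is the WGL_ARB_create_context_profile extension with the core profile bit):
#include <GLFW/glfw3.h>

glfwInit();
// Ask for an OpenGL 3.3 core-profile context; legacy calls such as
// glMatrixMode() will then generate errors instead of silently working.
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
GLFWwindow *window = glfwCreateWindow(800, 600, "Core profile", nullptr, nullptr);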
If you are using shaders then you have already moved away from the legacy fixed-function pipeline, so GLSL 130 is not legacy :P.
Working on my Linux laptop with my Intel CPU's integrated graphics, where the latest stable drivers only expose OpenGL 3.1 (yes, the OpenGL 3.3 commits are in place, but I'm waiting for Mesa 10 ;) ), I have without much effort been able to get the OpenGL 3.3 tutorials to run on my machine without touching legacy OpenGL.
One of the wonderful things about OpenGL is that you can extend the functionality with OpenGL extensions. Even if your hardware isn't capable of handling OpenGL 4.4, with updated drivers you can still use the extensions that don't require OpenGL 4 hardware!
See https://developer.nvidia.com/opengl-driver and http://developer.amd.com/resources/documentation-articles/opengl-zone/ for info on which features are exposed on older hardware; if you are uncertain, all you have to do is test it on your hardware.
And I'll finish off by saying legacy OpenGL also has its place.
In my opinion legacy OpenGL might be easier to learn than modern OpenGL, since you don't need knowledge of shaders and OpenGL buffers to draw your first triangle, but I don't think you should be using it in a modern production application.
If you need to support old hardware you might need to use an older OpenGL version, but even modern CPUs with integrated graphics support OpenGL 3, so I would not worry about this too much.
Converting from OpenGL 3.3 to OpenGL 3.0
I tested this on the tutorials from http://www.opengl-tutorial.org/. I cannot post the converted code, as most of it is taken as-is from the tutorials and I don't have permission to reproduce it here.
The author asked about OpenGL 3.1, but since he is capped at GLSL 130 (OpenGL 3.0), I am converting to 3.0.
1. First of all, change the context version to OpenGL 3.0 (just change the minor version to 0 if you're working from the tutorials). Also, don't request a core context if you're using OpenGL 3.0, since as far as I know ARB_compatibility is only available from OpenGL 3.1.
2. Change the shader version to
#version 130
3. Remove all layout bindings in the shaders, i.e. change
layout(location = #) in vec2 #myVarName;
to
in vec2 #myVarName;
4. Use glBindAttribLocation to bind the in variables to the locations they had in step 3, e.g.
glBindAttribLocation(#myProgramName, #, "#myVarName");
5. Use glBindFragDataLocation to bind the out variables to the locations they had in step 3 (a combined sketch of steps 4 and 5 follows after this list), e.g.
glBindFragDataLocation(#myProgramName, #, "#myVarName");
6. glFramebufferTexture doesn't exist in OpenGL 3.0 (it's used for shadow mapping, deferred rendering, etc.). Instead you need to use glFramebufferTexture2D (it takes an extra texture-target parameter, but the documentation is sufficient).
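Putting steps 4 and 5 together, a hypothetical sketch (the program, attribute and output names are placeholders, not the tutorial's actual code) might look like this; note that both bindings must be made before glLinkProgram:
GLuint program = glCreateProgram();
glAttachShader(program, vertexShader);
glAttachShader(program, fragmentShader);

// Replaces layout(location = 0) in vec3 vertexPosition; etc. in the vertex shader...
glBindAttribLocation(program, 0, "vertexPosition");
glBindAttribLocation(program, 1, "vertexUV");

// ...and layout(location = 0) out vec3 color; in the fragment shader.
glBindFragDataLocation(program, 0, "color");

// The bindings only take effect when the program is (re)linked.
glLinkProgram(program);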
Here is a screenshot of tutorial16 (I thought this one covered the most areas and used it as a test to see if that is all that's needed).
There is a mistake in the source of tutorial16 (at the time of writing). The FBO is set to have no color output, but the fragment shader still outputs a color value, causing a segfault (trying to write to nothing usually does that). Simply changing the depth fragment shader to output nothing fixes it. (It doesn't produce a segfault on more tolerant drivers, but that's not something you should bank on.)

Determine which renderer is used for vertex shader

Apple's OpenGL Shader Builder lets you drop in your vertex (or fragment) shader; it will link and validate it, then tell you which GL_RENDERER is used for that shader. For me it shows either Apple Software Renderer (in red, because it means the shader will be dog slow) or AMD Radeon HD 6970M OpenGL Engine (i.e. my GPU's renderer, which is what I usually want to run the shader).
How can I also determine this at runtime in my own software?
Edit:
Querying GL_RENDERER in my CPU code always seems to return AMD Radeon HD 6970M OpenGL Engine, regardless of where I place the query in the draw loop, even though I'm using a shader that OpenGL Shader Builder says is running on the Apple Software Renderer (and I believe it, because it's very slow). Is it a matter of querying GL_RENDERER at just the right time? If so, when?
The renderer used is tied to the OpenGL context, and a proper OpenGL implementation should not switch the renderer in between. Of course, an OpenGL implementation may be built on some infrastructure that dynamically switches between backend renderers, but this must then be reflected in the frontend context's renderer string.
So what you do is indeed the correct method.
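For reference, the runtime query looks like this (assuming the context you care about is current on the calling thread):
const GLubyte *renderer = glGetString(GL_RENDERER);   // e.g. "AMD Radeon HD 6970M OpenGL Engine"
printf("renderer: %s\n", renderer);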