Determine which renderer is used for a vertex shader - OpenGL

Apple's OpenGL Shader Builder lets you drop in your vertex (or fragment) shader; it links and validates it, then tells you which GL_RENDERER is used for that shader. For me it shows either Apple Software Renderer (in red, because it means the shader will be dog slow) or AMD Radeon HD 6970M OpenGL Engine (i.e. my GPU's renderer, which is the one I usually want running the shader).
How can I also determine this at runtime in my own software?
Edit:
Querying GL_RENDERER in my CPU code always seems to return AMD Radeon HD 6970M OpenGL Engine, regardless of where I place the query in the draw loop, even though I'm using a shader that OpenGL Shader Builder says runs on the Apple Software Renderer (and I believe it, because it's very slow). Is it a matter of querying GL_RENDERER at just the right time? If so, when?

The renderer used is tied to the OpenGL context, and a proper OpenGL implementation should not switch renderers mid-context. Of course an OpenGL implementation may be built on infrastructure that dynamically switches between backend renderers, but that must then be reflected in the frontend context's renderer string.
So what you do is indeed the correct method.
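For reference, a minimal sketch of that runtime query, assuming the context you care about is already current (the include path shown is the macOS one and differs on other platforms):
#include <stdio.h>
#include <OpenGL/gl.h>

void printRendererInfo(void)
{
    // These strings are fixed for the lifetime of the current context.
    printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));
    printf("GL_VENDOR:   %s\n", (const char *)glGetString(GL_VENDOR));
    printf("GL_VERSION:  %s\n", (const char *)glGetString(GL_VERSION));
}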

Related

Dedicated Nvidia GPU won't draw OpenGL

My setup includes an on-board Intel integrated GPU for everyday tasks and a high-performance Nvidia GPU for graphics-intensive applications. I'm developing an OpenGL 3.3 (core profile) application, using shaders rather than the fixed-function pipeline. By default my app runs on the Intel GPU and works just fine, but when I try to run it on the Nvidia GPU, it only shows a black screen.
Now here's the interesting part. The OpenGL context gets created correctly, and the world coordinate axes I draw for debugging actually get drawn (GL_LINES). For some reason, Nvidia doesn't draw any GL_POLYGON or GL_QUADS primitives.
Has anyone experienced something similar, and what do you think is the culprit here?
It turns out GL_POLYGON, GL_QUADS, and GL_QUAD_STRIP are removed from the OpenGL 3.3 core profile. For some reason the Intel driver draws them regardless, but Nvidia started drawing as well as soon as I substituted them with GL_TRIANGLES and friends. Always check for removed features when problems like this arise.
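For illustration, a sketch of that substitution: a quad that old code would submit as GL_QUADS drawn instead as two triangles through an index buffer, which is valid in the 3.3 core profile. The buffer/VAO upload is assumed to have happened already and the names are made up for the example.
static const GLfloat quadVertices[] = {
    -0.5f, -0.5f,   // 0: bottom left
     0.5f, -0.5f,   // 1: bottom right
     0.5f,  0.5f,   // 2: top right
    -0.5f,  0.5f,   // 3: top left
};
static const GLuint quadIndices[] = {
    0, 1, 2,        // first triangle
    2, 3, 0,        // second triangle
};
// ...upload both arrays into a VAO with GL_ARRAY_BUFFER and
// GL_ELEMENT_ARRAY_BUFFER bound, then draw:
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
// replaces the removed glBegin(GL_QUADS) ... glEnd() path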

Are shaders used very early in the latest OpenGL?

When I look at the 4th edition of the book "OpenGL SuperBible", it starts with drawing points, lines, and polygons, and shaders are discussed later on. In the 6th edition, the very first example starts directly with shaders. I haven't used OpenGL for a long time, but is starting with shaders now the way to go?
Why the shift? Is it because of the move from the fixed pipeline to shaders?
To a limited extent it depends on exactly which branch of OpenGL you're talking about. OpenGL ES 2.0 has no path to the screen other than shaders: there's no matrix stack, no option to draw without shaders, and none of the fixed-pipeline built-in variables. WebGL is based on OpenGL ES 2.0, so it inherits all of that behaviour.
As per derhass' comment, all of the fixed stuff is deprecated in modern desktop GL and you can expect it to vanish over time. The quickest thing to check is probably the OpenGL 4.4 quick reference card. If the functionality you want isn't on there, it's not in the latest OpenGL.
As per your comment, Khronos defines OpenGL to be:
the only cross-platform graphics API that enables developers of software for PC, workstation, and supercomputing hardware to create high-performance, visually-compelling graphics software applications, in markets such as CAD, content creation, energy, entertainment, game development, manufacturing, medical, and virtual reality.
It more or less just exposes the hardware. The hardware can't do anything without shaders. Nobody in the industry wants to be maintaining shaders that emulate the old fixed functionality forever.
"About the only rendering you can do with OpenGL without shaders is clearing a window, which should give you a feel for how important they are when using OpenGL." - From OpenGL official guide
As of OpenGL 3.1, the fixed-function pipeline was removed from the core specification and shaders became mandatory.
So the SuperBible and the OpenGL Red Book now introduce the programmable pipeline early on, and then explain how to write and use vertex and fragment shader programs.
For your shader objects you now have to:
Create the shader (glCreateShader, glShaderSource)
Compile the shader source into an object (glCompileShader)
Verify the shader (glGetShaderInfoLog)
Then you link the shader objects into your shader program:
Create shader program (glCreateProgram)
Attach the shader objects (glAttachShader)
Link the shader program (glLinkProgram)
Verify (glGetProgramInfoLog)
Use the shader (glUseProgram)
There is more to do now before you can render than with the old fixed-function pipeline. The programmable pipeline is no doubt more powerful, but it does make it harder just to get something on screen, and shaders are now a core concept to learn.
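As an illustration of the steps listed above, here is a minimal sketch in C, assuming a 3.3 core context is current. The GLSL sources, names, and fixed-size log buffers are placeholders made up for the example; real code should handle errors more thoroughly.
#include <stdio.h>
#include <GL/gl.h>   // or your loader's header (GLEW, glad, ...)

static const char *vsSource =
    "#version 330 core\n"
    "layout(location = 0) in vec3 position;\n"
    "void main() { gl_Position = vec4(position, 1.0); }\n";

static const char *fsSource =
    "#version 330 core\n"
    "out vec4 fragColor;\n"
    "void main() { fragColor = vec4(1.0, 0.5, 0.2, 1.0); }\n";

static GLuint compileShader(GLenum type, const char *source)
{
    GLuint shader = glCreateShader(type);               // create
    glShaderSource(shader, 1, &source, NULL);
    glCompileShader(shader);                            // compile
    GLint status = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &status);  // verify
    if (status != GL_TRUE) {
        char log[1024];
        glGetShaderInfoLog(shader, sizeof log, NULL, log);
        fprintf(stderr, "shader compile error: %s\n", log);
    }
    return shader;
}

static GLuint buildProgram(void)
{
    GLuint program = glCreateProgram();                 // create program
    glAttachShader(program, compileShader(GL_VERTEX_SHADER, vsSource));
    glAttachShader(program, compileShader(GL_FRAGMENT_SHADER, fsSource));
    glLinkProgram(program);                             // link
    GLint status = GL_FALSE;
    glGetProgramiv(program, GL_LINK_STATUS, &status);   // verify
    if (status != GL_TRUE) {
        char log[1024];
        glGetProgramInfoLog(program, sizeof log, NULL, log);
        fprintf(stderr, "program link error: %s\n", log);
    }
    return program;
}
// At render time, call glUseProgram(program) before issuing draw calls.  // use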

OpenGL compute shader extension

I have a problem with creating compute shader.
My program doesn't seem to recognize the GLenum value GL_COMPUTE_SHADER when I try to create a shader with glCreateShader().
My graphics card is kind of low-end, but when I check for the GL_ARB_compute_shader extension it is present, so that shouldn't be the problem, I guess.
Is there something I have to do to enable this extension, or is there another problem and I have to use OpenCL instead?
OpenGL compute shaders are new in version 4.3. I'm guessing you have headers that predate that version. However, even if you get newer headers, your GPU or driver may be too old to support OpenGL 4.3. What version does your hardware return for glGetString(GL_VERSION)?
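A small sketch of that runtime check, assuming a current context (GL_MAJOR_VERSION/GL_MINOR_VERSION themselves need a 3.0+ context, so the GL_VERSION string is printed as a fallback too):
GLint major = 0, minor = 0;
glGetIntegerv(GL_MAJOR_VERSION, &major);
glGetIntegerv(GL_MINOR_VERSION, &minor);
printf("GL_VERSION: %s\n", (const char *)glGetString(GL_VERSION));

if (major > 4 || (major == 4 && minor >= 3)) {
    GLuint cs = glCreateShader(GL_COMPUTE_SHADER);   // enum only present in 4.3+ headers
    // ...glShaderSource / glCompileShader / glLinkProgram as usual...
} else {
    // Compute shaders are not available on this context.
}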

OpenGL: "Fragment Shader not supported by HW" on old ATI card

In our OpenGL game we've got a shader link failure on an ATI Radeon x800 card. glGetProgramInfoLog reports:
Fragment shader(s) failed to link, vertex shader(s) linked.
Fragment Shader not supported by HW
Some googling suggests that we may be hitting an ALU instruction limit, due to a very long fragment shader. Any way to verify that?
I wasn't able to find detailed specs for the X800, nor any way to query the instruction limit at runtime. And even if I were able to query it, how do I determine the number of instructions in my shader?
There are several limits you may hit:
maximum shader length
maximum number of texture indirections (this is the limit most easily crossed)
using unsupported features
Technically the X800 is a Shader Model 2 GPU, which is about what GLSL 1.20 covers. When I started shader programming with a Radeon 9800 (and the X800 is technically just an upscaled 9800), I quickly abandoned the idea of doing it with GLSL; it was just too limited. And as so often when the hardware has only limited resources and capabilities, the way out was assembly, in this case the assembly provided by the ARB_fragment_program extension.
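If ARB_fragment_program is present, its limits can at least be queried at runtime. A hedged sketch follows; the enum names come from that extension, and the values describe the assembly path rather than GLSL directly, so treat them as a rough budget:
GLint aluInstructions = 0, texInstructions = 0, texIndirections = 0;
glGetProgramivARB(GL_FRAGMENT_PROGRAM_ARB,
                  GL_MAX_PROGRAM_NATIVE_ALU_INSTRUCTIONS_ARB, &aluInstructions);
glGetProgramivARB(GL_FRAGMENT_PROGRAM_ARB,
                  GL_MAX_PROGRAM_NATIVE_TEX_INSTRUCTIONS_ARB, &texInstructions);
glGetProgramivARB(GL_FRAGMENT_PROGRAM_ARB,
                  GL_MAX_PROGRAM_NATIVE_TEX_INDIRECTIONS_ARB, &texIndirections);
printf("ALU instructions: %d, TEX instructions: %d, TEX indirections: %d\n",
       aluInstructions, texInstructions, texIndirections);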
GLview is a great tool for easily viewing all the limits and supported GL extensions of a GPU/driver combination. If I recall correctly, I previously used AMD's GPU ShaderAnalyzer, which lets you see the compiled assembly version of a GLSL shader. Nvidia offers similar functionality with its nvemulate tool.
The X800 is very limited in shader power compared to current GPUs. You would probably have to cut back on shader complexity for this lower-end GPU anyway to achieve acceptable performance. If you already have a GLSL version running, simply selecting simpler fragment shaders for the X800 is probably the most sensible approach.

OpenGL drawPixels with fragment shader

I'm confused about the OpenGL pipeline. I have an OpenGL method where I'm trying to use glDrawPixels with a fragment shader, so my code looks like:
// I setup the shader before this
glUseProgram(myshader);
glDrawPixels(...);
On some graphics cards the shader gets applied, but on others it does not. I have no problems with Nvidia, but problems with various ATI cards. Is this a bug in the ATI drivers, or is Nvidia just more flexible and I'm misunderstanding the pipeline? Are there alternatives or workarounds (other than texture mapping)?
thanks,
Jeff
glDrawPixels should have fragment shaders applied; figure 3.1 on page 203 of the compatibility profile specification makes this clear.
Note, however, that the core profile removes DrawPixels. Which GL version are you using?
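For completeness, a sketch of the compatibility-profile path being described here; width, height, and pixels are placeholders for your image data:
glUseProgram(myshader);                       // fragment shader should process these fragments
glWindowPos2i(0, 0);                          // destination of the pixel rectangle
glDrawPixels(width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glUseProgram(0);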