An OpenGL error I get seems to be linked to MAX_VERTEX_UNIFORM_COMPONENTS_ARB (as suggested in the answer here).
What determines this constant (graphics hardware, graphics driver, OpenGL version?), and how can I check its value under Linux (NVIDIA hardware)?
This limit depends on the graphics hardware in use and its driver. In OpenGL you can query such values on the CPU side like this:
GLint x;
glGetIntegerv(GL_MAX_VERTEX_UNIFORM_COMPONENTS_ARB,&x);
On my setup:
Vendor: NVIDIA Corporation
OpenGL 4.5.0 NVIDIA 368.22
Render: GeForce GTX 550 Ti/PCIe/SSE2
it returns:
max vertex uniform components: 4096
To use this you need a working OpenGL 1.0 context or better, and you need the numeric value of the queried enum. That value is usually defined in gl.h or glext.h; in your case it is the latter. If you do not want to include glext.h, just add this to your code (C++) before use:
#define GL_MAX_VERTEX_UNIFORM_COMPONENTS_ARB 0x8B4A
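For completeness, here is a minimal sketch (assuming an OpenGL context is already current, e.g. created via GLFW or SDL) that queries the vertex uniform limit together with its fragment-shader counterpart, which is included purely for illustration:

#include <cstdio>
#include <GL/gl.h>

#ifndef GL_MAX_VERTEX_UNIFORM_COMPONENTS_ARB
#define GL_MAX_VERTEX_UNIFORM_COMPONENTS_ARB 0x8B4A
#endif
#ifndef GL_MAX_FRAGMENT_UNIFORM_COMPONENTS_ARB
#define GL_MAX_FRAGMENT_UNIFORM_COMPONENTS_ARB 0x8B49
#endif

void print_uniform_limits()
{
    GLint vert = 0, frag = 0;
    glGetIntegerv(GL_MAX_VERTEX_UNIFORM_COMPONENTS_ARB, &vert);   // vertex shader limit
    glGetIntegerv(GL_MAX_FRAGMENT_UNIFORM_COMPONENTS_ARB, &frag); // fragment shader limit
    printf("max vertex uniform components:   %d\n", vert);
    printf("max fragment uniform components: %d\n", frag);
}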
I'm trying to compile a fragment shader using:
#extension ARB_draw_buffers : require
but compilation fails with the following error:
extension 'ARB_draw_buffers' is not supported
However, when I check for the availability of this particular extension, either by calling glGetString(GL_EXTENSIONS) or using OpenGL Extension Viewer, I get positive results.
The OpenGL version is 3.1, and the graphics card is Intel HD Graphics 3000.
What might be the cause of that?
Your driver in this scenario is 3.1; it is not clear what your targeted OpenGL version is.
If you can establish OpenGL 3.0 as the minimum required version, you can write your shader using #version 130 and avoid the extension directive altogether.
The ARB extension mentioned in the question is only there for drivers that cannot implement all of the features required by OpenGL 3.0, but have the necessary hardware support for this one feature.
That was its intended purpose, but there do not appear to be many driver / hardware combinations in the wild that actually have this problem. You probably do not want the headache of writing code that supports them anyway ;)
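As an illustration, here is a minimal sketch (shader source embedded as a C++ string; the output names outColor0/outColor1 and the bindFragOutputs helper are made up for the example) of a #version 130 fragment shader that writes to two draw buffers without any #extension directive:

#include <GL/glew.h>  // assumes GL 3.0+ function pointers are loaded, e.g. via GLEW

// With #version 130, multiple color outputs are plain 'out' variables; no
// "#extension GL_ARB_draw_buffers : require" directive is needed.
static const char* kFragSrc =
    "#version 130\n"
    "out vec4 outColor0;\n"
    "out vec4 outColor1;\n"
    "void main() {\n"
    "    outColor0 = vec4(1.0, 0.0, 0.0, 1.0);\n"
    "    outColor1 = vec4(0.0, 1.0, 0.0, 1.0);\n"
    "}\n";

// Call after attaching the compiled shader but before glLinkProgram():
// maps each 'out' variable to a color attachment of the bound framebuffer.
void bindFragOutputs(GLuint program)
{
    glBindFragDataLocation(program, 0, "outColor0");
    glBindFragDataLocation(program, 1, "outColor1");
}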
I am using Visual Studio 13 with NVIDIA Nsight 4.0. In my application I do a mix of different types of rendering, but for the purpose of testing the profiler I did a simple rendering of a scene. I opened the graphics debugger and, when I open the GUI and press spacebar to capture the frame, I get this error:
Cannot enter frame debugger. Nsight only supports frame debugging for
D3D9, D3D10, D3D11, and OpenGL 4.2.
Reason: glEnd
I am using a GT540m and I checked my OpenGL version and it is 4.3
If I then try to use the performance analysis tool and trace OpenGL (following the instructions), I always get some percentage of CPU frames and 0 GPU frames.
I have no idea what I am doing wrong. Is there any solution to this, or alternative ways to profile OpenGL?
Are you using immediate mode drawing? I.e. glBegin(...); glVertex*(); ...; glEnd()
From the Nsight User Guide's Supported OpenGL Functions page:
NVIDIA® Nsight™ Visual Studio Edition 4.0 frame debugging supports the set of OpenGL operations, which are defined by the OpenGL 4.2 core profile. Note that it is not necessary to create a core profile context to make use of the frame debugger. An application that uses a compatibility profile context, but restricts itself to using the OpenGL 4.2 core subset, will also work. A few OpenGL 4.2 compatibility profile features, such as support for alpha testing and a default vertex array object, are also supported.
So, replace the immediate mode rendering with newer drawing functions like glDrawArrays and glDrawElements that use vertex array objects.
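For instance, a minimal sketch (assuming a GL 3.x context with function pointers loaded via GLEW and a suitable shader program bound, with attribute 0 as a vec2 position) that draws one triangle through a VAO/VBO instead of glBegin/glEnd:

#include <GL/glew.h>

GLuint vao = 0, vbo = 0;

void initTriangle()
{
    const GLfloat verts[] = {
        -0.5f, -0.5f,
         0.5f, -0.5f,
         0.0f,  0.5f,
    };

    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);

    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);
}

void drawTriangle()
{
    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLES, 0, 3); // replaces glBegin(GL_TRIANGLES)...glEnd()
}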
Better yet, create a core profile context to ensure you aren't using deprecated functionality.
My advice: stay away from outdated tutorials online and read the latest edition of the Red book (OpenGL Programming Guide), which only covers modern OpenGL.
You can also try the more basic GPUView tool that can be found in the Windows 8 SDK.
UPDATE:
As for why 0 GPU frames are retrieved: are you sure that your GPU is on the list of supported hardware? I had the same problem where Nsight was mostly working (I was able to profile other aspects) but 0 GPU frames were collected. I later realized that my card was not officially supported.
Nsight 4.5 RC1 is now available; it works with CUDA SDK 7 RC and, among its features, now supports OpenGL 4.3!
My project should greatly benefit from arbitrary/atomic read and write operations on a texture from GLSL shaders. The image load store extension is what I need. The only problem is that my target platform does not support OpenGL 4.
Is there an extension for OGL 3 that achieves similar results? I mean, atomic read/write operations in a texture or shared buffer of some sort from fragment shaders.
Image load store and, especially, atomic operations are features that must be backed by specific hardware capabilities, very similar to the ones used in compute shaders. Only some GL3 hardware can handle them, and only in a limited way.
Image load store has been in the core profile since 4.2, so if your hardware (and driver) is capable of OpenGL 4.2, you don't need any extensions at all.
If your hardware (and driver) capabilities are lower than GL 4.2 but higher than GL 3.0, you can probably use the ARB_shader_image_load_store extension.
quote: OpenGL 3.0 and GLSL 1.30 are required
Obviously, not all 3.0 hardware (and drivers) will support this extension, so you must check for its support before using it (see the sketch at the end of this answer).
I believe most NVIDIA GL 3.3 hardware supports it, but not AMD or Intel (that's my subjective observation ;) ).
If your hardware is lower than GL 4.2 and not capable of this extension, there is not much you can do. Either have an alternative code path that uses texture sampling and render-to-texture without atomics (as I understand it, this is possible, just without the "great benefit" of atomics), or simply report an error to those users who have not yet upgraded their rigs.
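A minimal query sketch (assuming a GL 3.0+ context and function pointers loaded via GLEW; the helper name hasExtension is made up for the example):

#include <cstring>
#include <GL/glew.h>

// Scan the extension list for GL_ARB_shader_image_load_store before
// enabling the image-load-store / atomics code path.
bool hasExtension(const char* name)
{
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; ++i) {
        const char* ext = (const char*)glGetStringi(GL_EXTENSIONS, (GLuint)i);
        if (ext && std::strcmp(ext, name) == 0)
            return true;
    }
    return false;
}

// Usage:
// if (hasExtension("GL_ARB_shader_image_load_store")) { /* atomic path */ }
// else { /* fallback: render-to-texture path without atomics */ }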
Hope it helps.
I started writing programs in C (for now) using GLFW and OpenGL. The question I have is: how do I know which version of OpenGL my program will use? My laptop says that my video card supports OpenGL 3.3. Typing "glxinfo | grep -i opengl" returns:
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce 9600M GT/PCI/SSE2
OpenGL version string: 3.3.0 NVIDIA 285.05.09
OpenGL shading language version string: 3.30 NVIDIA via Cg compiler
OpenGL extensions:
So is OpenGL 3.3 automatically being used ?
Just call glGetString(GL_VERSION) (once the context is initialized, of course) and print the result (which is actually the same thing glxinfo does, I suppose):
printf("%s\n", glGetString(GL_VERSION));
Your program should automatically use the highest version your hardware and driver support, which in your case seems to be 3.3. For creating a core-profile context for OpenGL 3+ (one where deprecated functionality has been completely removed), however, you have to take special measures. Since version 2.7, GLFW provides a way to do this via the glfwOpenWindowHint function. If you don't want to explicitly disallow deprecated functionality, you can just use the context given to you by GLFW's default context creation functions, which, as said, will support the highest possible version for your hardware and drivers.
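A minimal sketch of the hinted, core-profile path (assuming the GLFW 2.7 API mentioned above; in GLFW 3 the equivalent calls are glfwWindowHint and glfwCreateWindow, with differently named hint constants):

#include <stdio.h>
#include <GL/glfw.h>

int main(void)
{
    if (!glfwInit())
        return 1;

    // Request a 3.3 core-profile context instead of the default one.
    glfwOpenWindowHint(GLFW_OPENGL_VERSION_MAJOR, 3);
    glfwOpenWindowHint(GLFW_OPENGL_VERSION_MINOR, 3);
    glfwOpenWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

    if (!glfwOpenWindow(640, 480, 8, 8, 8, 8, 24, 8, GLFW_WINDOW)) {
        glfwTerminate();
        return 1;
    }

    // The same information glxinfo prints as "OpenGL version string".
    printf("%s\n", glGetString(GL_VERSION));

    glfwTerminate();
    return 0;
}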
But also keep in mind that for using OpenGL functionality higher than version 1.1 you need to retrieve the corresponding function pointers or use a library that handles this for you, like GLEW.
I have many OpenGL shaders. We try to use as much different hardware as possible to evaluate the portability of our product. One of our customers recently ran into some rendering issues; it seems the target machine only provides shader model 2.0, while all our development/build/test machines (even the oldest ones) run version 4.0. Everything else (OpenGL version, GLSL version, ...) seems identical.
I didn't find a way to downgrade the shader model version, since it's automatically provided by the graphics card driver.
Is there a way to manually install or select the OpenGL/GLSL/shader model version in use, for development/test purposes?
NOTE: the main targets are Windows XP SP2 / 7 (32- and 64-bit) for both ATI and NVIDIA cards.
OpenGL does not have the concept of "shader models"; that's a Direct3D thing. It only has versions of GLSL: 1.10, 1.20, etc.
Every OpenGL version matches a specific GLSL version. GL 2.1 supports GLSL 1.20. GL 3.0 supports GLSL 1.30. For GL 3.3 and above, they stopped fooling around and just used the same version number, so GL 3.3 supports GLSL 3.30. So there's an odd version number gap between GLSL 1.50 (maps to GL 3.2) and GLSL 3.30.
Technically, OpenGL implementations are allowed to refuse to compile shader versions older than the ones that match the current version. As a practical matter, however, you can pretty much shove any GLSL shader into any OpenGL implementation, as long as the shader's version is less than or equal to the version the OpenGL implementation supports. This hasn't been tested on Mac OS X Lion's implementation of GL 3.2 core.
There is one exception: core contexts. If you try to feed a shader through a core OpenGL context that uses functionality removed from the core, it will complain.
There is no way to force OpenGL to provide you with a particular OpenGL version. You can ask for one, with wglCreateContextAttribsARB/glXCreateContextAttribsARB, but the implementation is allowed to give you any version higher than the one you ask for, so long as that version is backwards compatible with what you asked for.
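For reference, a minimal sketch of such a request on the GLX side (the WGL path is analogous; assumes GLX_ARB_create_context is available and that display/fbconfig come from the usual GLX setup):

#include <GL/glx.h>
#include <GL/glxext.h>

GLXContext createContext33Core(Display* display, GLXFBConfig fbconfig)
{
    // The entry point is an extension function, so fetch it at runtime.
    PFNGLXCREATECONTEXTATTRIBSARBPROC glXCreateContextAttribsARB =
        (PFNGLXCREATECONTEXTATTRIBSARBPROC)glXGetProcAddressARB(
            (const GLubyte*)"glXCreateContextAttribsARB");

    const int attribs[] = {
        GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
        GLX_CONTEXT_MINOR_VERSION_ARB, 3,
        GLX_CONTEXT_PROFILE_MASK_ARB,  GLX_CONTEXT_CORE_PROFILE_BIT_ARB,
        None
    };

    // The driver may still return a context newer than 3.3, as long as it is
    // backwards compatible with the requested version.
    return glXCreateContextAttribsARB(display, fbconfig, 0, True, attribs);
}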