I started writing programs in C (for now) using GLFW and OpenGL. The question I have is: how do I know which version of OpenGL my program will use? My laptop says that my video card supports OpenGL 3.3. Typing "glxinfo | grep -i opengl" returns:
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce 9600M GT/PCI/SSE2
OpenGL version string: 3.3.0 NVIDIA 285.05.09
OpenGL shading language version string: 3.30 NVIDIA via Cg compiler
OpenGL extensions:
So is OpenGL 3.3 automatically being used?
Just call glGetString(GL_VERSION) (once the context is initialized, of course) and print the result (which is actually the same thing glxinfo reports, I suppose):
printf("%s\n", glGetString(GL_VERSION));
Your program should automatically use the highest version your hardware and driver support, which in your case seems to be 3.3. However, to create a core-profile context for OpenGL 3+ (one where deprecated functionality has been completely removed) you have to take special measures. Since version 2.7, GLFW has had the means for doing this, via the glfwOpenWindowHint function. If you don't want to explicitly disallow deprecated functionality, you can just use the context given to you by GLFW's default context creation functions, which, as said, will support the highest version possible for your hardware and drivers.
Also keep in mind that to use OpenGL functionality above version 1.1 you need to retrieve the corresponding function pointers yourself or use a library that handles this for you, like GLEW.
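Putting the answer together, here's a minimal sketch using the current GLFW 3 API (glfwCreateWindow and friends, rather than the older glfwOpenWindowHint mentioned above) together with GLEW; it creates a default context and prints the version actually obtained:

/* Minimal version check: default GLFW context + GLEW.
   Build (Linux): gcc main.c -lglfw -lGLEW -lGL */
#include <GL/glew.h>     /* must be included before other GL headers */
#include <GLFW/glfw3.h>
#include <stdio.h>

int main(void)
{
    if (!glfwInit())
        return 1;

    /* No version hints: GLFW hands us the default (compatibility)
       context, i.e. the highest version the driver offers. */
    GLFWwindow *window = glfwCreateWindow(640, 480, "version test", NULL, NULL);
    if (!window) {
        glfwTerminate();
        return 1;
    }
    glfwMakeContextCurrent(window);

    if (glewInit() != GLEW_OK) {   /* resolves pointers for GL > 1.1 */
        glfwTerminate();
        return 1;
    }

    printf("GL_VERSION:  %s\n", glGetString(GL_VERSION));
    printf("GL_RENDERER: %s\n", glGetString(GL_RENDERER));

    glfwDestroyWindow(window);
    glfwTerminate();
    return 0;
}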
Related
Someone was asking me for a test with OpenGL 2.x since they have hardware that supports only up to OpenGL 2.1.
I figured I'd try it out by setting the window hints in GLFW to use the major/minor version of 2 and 0.
The problem is I'm still using #version 330 in my shaders, and it works. However, it would not let me use the version-2 hints while I was (by accident) leaving the core-profile hint on. This seems to indicate that my version choice is doing something, just not what I expect.
I want to restrict myself to 2.1 to see if my application would run, and if it doesn't, see what I can change to make it work. The problem is I don't have any 2.1-only hardware, since my computers are all from 2015 or later.
Is there a way I can emulate 2.1 (on Windows) somehow and have it crash/die if I try using features it doesn't support? Apparently the hints I'm using are not helping.
As far as I know, the major/minor version flags don't set the version of your OpenGL context but the required feature set. So if you set the flags to 3.3, for example, you will usually get a 4.5 or 4.6 context, as those versions are typically the latest your GPU supports that are compatible with OpenGL 3.3. Getting an OpenGL 2.1 core context should be impossible, as the defining feature of the core profile is that it drops some OpenGL 1.0-2.1 functionality. So this isn't really surprising.
I think your best option here is to use headers that only contain OpenGL 2.1 functions. GLAD, for example, allows you to specify which version you want to generate headers for.
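To illustrate, a sketch of how that plays out, assuming you generated a GLAD loader restricted to OpenGL 2.1 (the web generator at https://glad.dav1d.de/ lets you pick the version) and use GLFW for the window; anything newer than 2.1 simply isn't declared in the header, so using it becomes a compile error:

/* Sketch assuming glad/glad.h was generated for OpenGL 2.1 only. */
#include <glad/glad.h>   /* must come before GLFW's header */
#include <GLFW/glfw3.h>
#include <stdio.h>

int main(void)
{
    if (!glfwInit())
        return 1;
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 2);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 1);
    GLFWwindow *window = glfwCreateWindow(640, 480, "GL 2.1 test", NULL, NULL);
    if (!window) {
        glfwTerminate();
        return 1;
    }
    glfwMakeContextCurrent(window);
    if (!gladLoadGLLoader((GLADloadproc)glfwGetProcAddress))
        return 1;

    /* The context may still report something newer than 2.1... */
    printf("GL_VERSION: %s\n", glGetString(GL_VERSION));

    /* ...but the restricted header keeps you honest:
       glCreateShader(GL_VERTEX_SHADER);   // fine, GL 2.0
       glGenVertexArrays(1, &vao);         // compile error, GL 3.0 */

    glfwDestroyWindow(window);
    glfwTerminate();
    return 0;
}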
For OpenGL ES the OES_EGL_image extension provides a function EGLImageTargetTexture2DOES to create a texture from an EGLImage. Is there an equivalent extension/function for desktop OpenGL (not ES)?
I think GL_OES_EGL_image should work with the Mesa drivers and desktop GL. glxinfo shows the extension as supported with both core and compatibility profiles. I did not see any checks for ES with a quick look at the implementation.
A grep through the Mesa provided GL headers shows no other occurrence of EGLImage, so GL_OES_EGL_image is probably your only choice with the Mesa drivers.
I am not sure, though, whether this behavior is specific to Mesa or whether other drivers also follow it.
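For what it's worth, a hedged sketch of what using it from desktop GL could look like; image stands for an EGLImage created elsewhere, and the entry point is resolved at runtime since desktop GL headers may not declare it:

/* Bind an existing EGLImage to a desktop GL texture via
   GL_OES_EGL_image (check the extension string first). */
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GL/gl.h>

typedef void (*PFN_EGLImageTargetTexture2DOES)(GLenum target, EGLImageKHR image);

GLuint texture_from_egl_image(EGLImageKHR image)
{
    PFN_EGLImageTargetTexture2DOES img2tex =
        (PFN_EGLImageTargetTexture2DOES)
            eglGetProcAddress("glEGLImageTargetTexture2DOES");
    if (!img2tex)
        return 0;                      /* extension not available */

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    img2tex(GL_TEXTURE_2D, image);     /* texture now aliases the EGLImage */
    return tex;
}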
I'm trying to create an OpenGL 3.2 context on a netbook running Ubuntu 13. Since the hardware isn't capable of OpenGL 3.2 in hardware, I'm wondering if the software rasterizer could provide such functionality.
I'm aware that software mode can be utterly slow, but I just need to test and practice some simple examples.
I couldn't find any definitive information on the Internet that would say whether it's possible or not, and my knowledge of Mesa is very limited. So my question is: is it possible to create a software-based OpenGL 3.2 context with Mesa or not?
Currently, it isn't. When using one of the software rasterizer backends (the old, deprecated swrast, or the more modern, Gallium-based softpipe or llvmpipe drivers), only GL 2.1 will be advertised. The issue is that Mesa's software rasterizers do not yet support multisampling, which is a requirement of GL 3.x. There might also be some other minor features missing that are required for GL 3.x.
However, you can still use most of the GL 3.2 features via the extension mechanism, without having a 3.2 context. This also means that you won't be able to get a core profile context, but this shouldn't be a problem either - nothing forces you to actually use the deprecated functionality.
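Concretely, on such a 2.1 context you would probe for the individual ARB extensions backing the 3.x features you need (GL_EXTENSIONS as one big string is the pre-3.0 style of query; the extension names below are just two examples):

/* Probe GL 3.x functionality exposed as extensions on a 2.1 context. */
#include <GL/gl.h>
#include <stdio.h>
#include <string.h>

void probe_gl3_features(void)
{
    const char *exts = (const char *)glGetString(GL_EXTENSIONS);
    if (!exts)
        return;
    /* Note: strstr can match prefixes of longer names; good enough
       for a quick check. */
    printf("GL_ARB_framebuffer_object: %s\n",
           strstr(exts, "GL_ARB_framebuffer_object") ? "yes" : "no");
    printf("GL_ARB_sync:               %s\n",
           strstr(exts, "GL_ARB_sync") ? "yes" : "no");
}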
I know Windows comes with OpenGL drivers. If I also have an NVIDIA driver, how do I determine the OpenGL version?
I know Windows comes with OpenGL drivers.
Actually, it doesn't. Windows comes with an OpenGL emulation (a software implementation of OpenGL 1.1). Actual OpenGL drivers are only available through the vendors' original drivers.
how do I determine the OpenGL version?
Create an OpenGL context and use the glGetString function to retrieve the identifying values. Of most interest are GL_VERSION and GL_RENDERER.
You can use glGetString(GL_VERSION) to retrieve the currently executing OpenGL version.
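For example, once a context has been made current:

printf("GL_VERSION:  %s\n", glGetString(GL_VERSION));
printf("GL_RENDERER: %s\n", glGetString(GL_RENDERER));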
I have many OpenGL shaders. We try to use as many different hardware configurations as possible to evaluate the portability of our product. One of our customers recently ran into some rendering issues: it seems that the target machine only provides shader model 2.0, while all our development/build/test machines (even the oldest ones) run version 4.0. Everything else (OpenGL version, GLSL version, ...) seems identical.
I didn't find a way to downgrade the shader model version, since it's automatically provided by the graphics card driver.
Is there a way to manually install or select the OpenGL/GLSL/shader model version in use, for development/test purposes?
NOTE: the main targets are Windows XP SP2 and Windows 7 (32- and 64-bit), for both ATI and NVIDIA cards.
OpenGL does not have the concept of "shader models"; that's a Direct3D thing. It only has versions of GLSL: 1.10, 1.20, etc.
Every OpenGL version matches a specific GLSL version. GL 2.1 supports GLSL 1.20. GL 3.0 supports GLSL 1.30. For GL 3.3 and above, they stopped fooling around and just used the same version number, so GL 3.3 supports GLSL 3.30. So there's an odd version number gap between GLSL 1.50 (maps to GL 3.2) and GLSL 3.30.
Technically, OpenGL implementations are allowed to refuse to compile shader versions older than the ones matching the current version. As a practical matter, however, you can pretty much shove any GLSL shader into any OpenGL implementation, as long as the shader's version is less than or equal to the version the OpenGL implementation supports. (This hasn't been tested on Mac OS X Lion's implementation of GL 3.2 core.)
There is one exception: core contexts. If you try to feed a core OpenGL context a shader that uses functionality removed from core, it will complain.
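As an illustration (assuming the GL 2.0+ entry points are already loaded, e.g. via GLEW), this sketch feeds an old #version 120 vertex shader to whatever context is current and checks whether it was accepted; the gl_Vertex built-in is exactly the kind of removed functionality a core context will reject:

/* Compile an old-style shader and read back the result. On a
   compatibility context this should succeed; on a core context the
   removed gl_Vertex built-in makes it fail. */
static const char *old_src =
    "#version 120\n"
    "void main() { gl_Position = gl_Vertex; }\n";

GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 1, &old_src, NULL);
glCompileShader(vs);

GLint ok = GL_FALSE;
glGetShaderiv(vs, GL_COMPILE_STATUS, &ok);
if (!ok) {
    char log[1024];
    glGetShaderInfoLog(vs, sizeof log, NULL, log);
    fprintf(stderr, "shader rejected: %s\n", log);
}
glDeleteShader(vs);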
There is no way to force OpenGL to provide you with a particular OpenGL version. You can ask for one, with wglCreateContextAttribsARB/glXCreateContextAttribsARB. But the implementation is allowed to give you any version higher than the one you ask for, so long as that version is backwards compatible with what you asked for.
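For completeness, here's a sketch of the GLX side (the WGL version is analogous); dpy and fbconfig are assumed to come from the usual GLX setup, and whatever comes back may be any version backwards compatible with the requested 3.2:

/* Request "at least GL 3.2" via GLX_ARB_create_context. The creation
   function itself has to be fetched at runtime. */
#include <GL/glx.h>

GLXContext create_gl32_context(Display *dpy, GLXFBConfig fbconfig)
{
    PFNGLXCREATECONTEXTATTRIBSARBPROC createContextAttribs =
        (PFNGLXCREATECONTEXTATTRIBSARBPROC)glXGetProcAddressARB(
            (const GLubyte *)"glXCreateContextAttribsARB");
    if (!createContextAttribs)
        return NULL;

    int attribs[] = {
        GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
        GLX_CONTEXT_MINOR_VERSION_ARB, 2,
        None
    };
    /* May return a 3.3/4.x context, as long as it is backwards
       compatible with what was asked for. */
    return createContextAttribs(dpy, fbconfig, NULL, True, attribs);
}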