I've been hearing about the DSA (Direct State Access) extension and I'm looking to try it out. I use GLAD to load OpenGL. I first tried the GL_ARB_direct_state_access extension and called:
if(!GLAD_GL_ARB_direct_state_access) appFatal("Direct State Access Extension Unsupported.\n");
This check passes without any problems, but for some reason I don't have access to functions like:
glProgramUniform...
I then tried GL_EXT_direct_state_access, which does give me access to those functions, but the GLAD_GL_EXT_direct_state_access check fails and I get an error.
On the other hand, my computer supports up to OpenGL 4.5 Core, which is odd: DSA has been core since 4.5, so there should be support.
Extended renderer info (GLX_MESA_query_renderer):
Vendor: Intel Open Source Technology Center (0x8086)
Device: Mesa DRI Intel(R) HD Graphics 5500 (Broadwell GT2) (0x1616)
Version: 17.2.8
Accelerated: yes
Video memory: 3072MB
Unified memory: yes
Preferred profile: core (0x1)
Max core profile version: 4.5
Max compat profile version: 3.0
Max GLES1 profile version: 1.1
Max GLES[23] profile version: 3.1
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) HD Graphics 5500 (Broadwell GT2)
OpenGL core profile version string: 4.5 (Core Profile) Mesa 17.2.8
OpenGL core profile shading language version string: 4.50
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
What's the issue here? And how can I access those DSA functions, if I even can...
glProgramUniform isn't part of GL_ARB_direct_state_access but GL_ARB_separate_shader_objects. As such, you must check for GLAD_GL_ARB_separate_shader_objects (or GL 4.1) before you can use the glProgramUniform...() family of functions.
Since you seem to have generated a GL loader for 3.3 core, you must also explicitly add the GL_ARB_separate_shader_objects extension when generating your loader with GLAD.
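A minimal sketch of what that looks like, assuming a GLAD loader generated with both extensions selected; prog and loc are placeholders for your own program handle and uniform location:
/* After the GLAD loader has run: */
if (!GLAD_GL_ARB_separate_shader_objects)
    appFatal("GL_ARB_separate_shader_objects unsupported.\n");
if (!GLAD_GL_ARB_direct_state_access)
    appFatal("GL_ARB_direct_state_access unsupported.\n");
/* glProgramUniform*() sets a uniform without binding the program first;
   'prog' and 'loc' come from your own shader setup. */
glProgramUniform1f(prog, loc, 0.5f);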
Theoretically, there can be 3.x implementations supporting these extensions. But in practice, GPU vendors seldom add such new functionality to really old drivers (and 3.x-only GPUs have been end-of-life for several years, supported only by "legacy" branches of the various vendor drivers). GL_ARB_direct_state_access won't be available on macOS in general, and it will be lacking from most Windows drivers that don't support GL 4.5 anyway. The only notable exception might be Mesa itself, where many driver backends share the same base infrastructure, and where a lot of effort still goes into supporting older GPUs.
So while it doesn't hurt to use 3.3 + some extensions that are core in 4.x, the increase (relative to using GL 4.x directly) in the number of potential implementations that can run your code might not be as big as you hope. YMMV.
From my understanding, modern OpenGL starts with version 3.3, which is mostly the same as 4.0. Both versions were released at the same time, and 3.3 is just for backward compatibility with older graphics cards. In my application I need to use tessellation control. All of the sources I have found online say tessellation starts with 4.0, but also say 3.3 is virtually the same.
Does OpenGL 3.3 have tessellation or not? Should I even bother with 3.3 in 2021, or should I just use the latest version, 4.6? Is 4.6 compatible with Mac?
Tessellation shaders have been in the standard since OpenGL 4.0. However, you can test whether your hardware supports the ARB_tessellation_shader extension, which is written against OpenGL 3.2.
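For example, on a 3.2+ core context you could query the extension list at runtime via the standard glGetStringi mechanism (a sketch; context creation and loader setup omitted):
#include <string.h>  /* strcmp */

int hasTessellation(void)
{
    GLint n = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &n);
    for (GLint i = 0; i < n; ++i) {
        const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, (GLuint)i);
        if (ext && strcmp(ext, "GL_ARB_tessellation_shader") == 0)
            return 1;  /* tessellation usable on this pre-4.0 context */
    }
    return 0;
}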
I'm trying to use Xming to render software using OpenGL, running on the same machine in WSL / Windows bash.
This works fine for some really small demos; however, once I try something like glmark2, it fails because the OpenGL version seems to be reported incorrectly.
glxinfo | grep OpenGL reports this:
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce GTX 970M/PCIe/SSE2
OpenGL version string: 1.4 (4.5.0 NVIDIA 382.05)
If I let xming run on my internal graphics card (using a laptop), it reports
OpenGL vendor string: Intel
OpenGL renderer string: Intel(R) HD Graphics 4600
OpenGL version string: 1.4 (4.3.0 - Build 20.19.15.4568)
The weird part is the 1.4 in front of 4.5.0 NVIDIA 382.05.
The OpenGL support is definitely at least 3.x, because a demo using GLSL shaders that require newer OpenGL runs, but the version string is kind of garbage.
The problem you're running into is that the GLX portion of Xming supports only up to OpenGL 1.4. The part inside the parentheses is the version string as reported by the system's native OpenGL implementation. However, since Xming (so far) lacks the capability to reliably pass on anything beyond OpenGL 1.4, it will simply tell you "all I guarantee to support is OpenGL 1.4, but the system I'm running on could actually do …".
Maybe some day someone will go through the effort of implementing a fully featured dynamic GLX←→WGL wrapper.
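A small sketch of what that means in practice: the version you can actually rely on is the number before the parentheses, which you can parse straight out of glGetString(GL_VERSION) (context creation omitted):
#include <stdio.h>

int major = 0, minor = 0;
const char *ver = (const char *)glGetString(GL_VERSION);
/* For "1.4 (4.5.0 NVIDIA 382.05)" this yields major=1, minor=4;
   1.4 is all the GLX connection guarantees, whatever the parentheses say. */
if (ver && sscanf(ver, "%d.%d", &major, &minor) == 2)
    printf("Usable GL version: %d.%d\n", major, minor);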
I have difficulties interpreting glxinfo and glewinfo.
glxinfo gives me this:
OpenGL version string: 3.0 Mesa 9.2.2
OpenGL shading language version string: 1.30
whereas glewinfo goes up to version 4.0:
GL_VERSION_4_0: OK
---------------
glBlendEquationSeparatei: OK
glBlendEquationi: OK
glBlendFuncSeparatei: OK
glBlendFunci: OK
glMinSampleShading: OK
I know that I cannot use GLSL newer than 1.30, but I'm wondering, is that a driver issue?
My GPU is
VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
I'm using Arch Linux and SDL, and have Mesa 9.2 and Mesa-libgl 9.2.2 installed.
When glewinfo reports certain functions as being there, that just means it could retrieve a function pointer for them. However, an available function pointer does not mean that the corresponding extension/version is actually supported. Since OpenGL follows a client-server model, the client-side interface may very well expose newer functionality while the server side doesn't support it.
The list of supported extensions and the reported version are the authoritative information on that, and you must rely on those alone.
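A sketch of what a safe check could look like with GLEW (assuming a context is current; the GL_MAJOR_VERSION/GL_MINOR_VERSION queries exist since GL 3.0, which your 3.0 context provides):
GLint major = 0, minor = 0;
glGetIntegerv(GL_MAJOR_VERSION, &major);
glGetIntegerv(GL_MINOR_VERSION, &minor);
/* Only call a 4.0 entry point if the context actually reports >= 4.0;
   a non-NULL function pointer alone proves nothing. */
if (major >= 4)
    glMinSampleShading(1.0f);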
I have had a lot of problems / confusion setting up my laptop to work for OpenGL programming / the running of OpenGL programs.
My laptop has one of these very clever (too clever for me) designs where the Intel CPU has a graphics processor on chip, and there is also a dedicated graphics card. Specifically, the CPU is a 3630QM, with "HD Graphics 4000" (a very exciting name, I am sure), and the "proper" Graphics Processor is a Nvidia GTX 670MX.
Theoretically, according to Wikipedia, the HD Graphics Chip (Intel), under Linux, supports OpenGL 3.1, if the correct drivers are installed. (They probably aren't.)
According to NVIDIA, the 670MX can support OpenGL 4.1, so ideally I would like to develop and execute on this GPU.
Do I have drivers installed to enable me to execute OpenGL 4.1 code on the NVIDIA GPU? Answer: probably not; currently I use this "optirun" thing to execute OpenGL programs on the dedicated GPU. See this link for the process I followed to set up my computer.
My question is: I know how to run a compiled program on the 670MX; that would be 'optirun ./programname'. But how can I find out what OpenGL version the installed graphics drivers on my system will support? Running 'glxinfo | grep -i opengl' in a terminal tells me that the Intel chip supports OpenGL version 3.0. See the information below:
ed#kubuntu1304-P151EMx:~$ glxinfo | grep -i opengl
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) Ivybridge Mobile
OpenGL version string: 3.0 Mesa 9.1.3
OpenGL shading language version string: 1.30
OpenGL extensions:
How do I do the same or similar thing to find out what support is available under 'optirun', and what version of OpenGL is supported?
Update
Someone suggested I use glGetString() to find this information: I am now completely confused!
Without optirun, the supported OpenGL version is '3.0 Mesa 9.1.3', so version 3, which is what I expected. However, under optirun, the supported OpenGL version is '4.3.0 NVIDIA 313.30', so version 4.3?! How can it be version 4.3 if the hardware specification from NVIDIA states only version 4.1 is supported?
You can just run glxinfo under optirun:
optirun glxinfo | grep -i opengl
Both cards have different features, so it's normal to get different OpenGL versions. As for the 4.3 vs. 4.1 discrepancy: NVIDIA's spec page simply lists the highest version supported at the time it was written; newer drivers routinely expose newer OpenGL versions on the same hardware, so a driver reporting 4.3 on a GPU once advertised as 4.1 is expected.
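If you want to confirm this from inside your own program (the glGetString() suggestion from the update), a minimal sketch is to print the strings of whichever context is current, once plain and once under optirun (context creation via SDL/GLX omitted):
#include <stdio.h>

/* Run after context creation: './programname' vs. 'optirun ./programname'. */
printf("GL_VENDOR:   %s\n", (const char *)glGetString(GL_VENDOR));
printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));
printf("GL_VERSION:  %s\n", (const char *)glGetString(GL_VERSION));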
This is the OpenGL version I have:
Video Card Vendor: Intel
Renderer: Intel(R) HD Graphics
OpenGL Version: 2.1.0 - Build 8.15.10.2622
GLU Version: 1.2.2.0 Microsoft Corporation
I'd like to learn the latest OpenGL API, but my card supports only 2.1 (and I can't update). Is it possible to program with the latest API even without card support?
AshleysBrain's answer is not quite correct. You can use a software implementation of OpenGL such as Mesa3D, which can execute newer code using your CPU instead of your GPU. It will be slower, but it will allow you to compile and run your 4.x code against it.
Edit: just checked; it seems Mesa only supports up to OpenGL 3.1. Still, that's pretty good for free.
It's more to do with what version the driver supports. Try updating your driver and see if it supports a newer version. Otherwise, I'm afraid you're stuck with that version; there's nothing else you can do!
Send money/time Mesa's way in the hope they'll support GL4 in their software rasterizer sometime soon.
Or pester Khronos for a reference implementation.
As of Mesa 20.2, llvmpipe supports OpenGL 4.5.
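If your Mesa is recent enough, you can usually force the llvmpipe software rasterizer for a single run with Mesa's documented LIBGL_ALWAYS_SOFTWARE environment variable, for example:
LIBGL_ALWAYS_SOFTWARE=1 glxinfo | grep "OpenGL version"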
Intel(R) HD Graphics is usually an onboard GPU, so no, you can't use GPU-accelerated OpenGL 4.2 with it. You need a dedicated card like those from Nvidia. The highest version of OpenGL I have seen on such onboard chips is 3.1, which still delivers the fixed-pipeline API with shader support. But OpenGL 4.2 has completely removed all of this and uses a programmable way of doing things, which requires programmable hardware too.