I have an Intel HD4400 graphics card in my laptop. When I run glxinfo | grep "OpenGL", I get
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) Haswell Mobile
OpenGL core profile version string: 4.5 (Core Profile) Mesa 18.0.0-rc5
OpenGL core profile shading language version string: 4.50
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 3.0 Mesa 18.0.0-rc5
OpenGL shading language version string: 1.30
OpenGL context flags: (none)
OpenGL extensions:
OpenGL ES profile version string: OpenGL ES 3.1 Mesa 18.0.0-rc5
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.10
OpenGL ES profile extensions:
This should mean that my graphics card supports GLSL version 4.30, right? However, my shader compilation fails with the following message:
error: GLSL 4.30 is not supported. Supported versions are: 1.10, 1.20, 1.30, 1.00 ES, 3.00 ES, and 3.10 ES
In my code, I do set the context profile to core with the following statement:
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE)
after which I set the context major and minor versions to 4 and 3, respectively.
Any ideas? I am on Ubuntu 18.04. I thought the graphics drivers might be out of date, but if they were, would glxinfo still tell me that version 4.50 is supported? Ubuntu also seems to have the latest Intel graphics drivers installed already, and I do not want to risk installing drivers that might break my display.
Additional Info:
I ran this exact code on another machine, and it worked perfectly (and I am quite sure that machine does not support a GL 4.3 compatibility profile). I therefore do not believe the problem is the way I created my context. Here is the code where I set the context profile and version:
void set_gl_attribs()
{
    if (SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE))
    {
        cout << "Could not set core profile" << endl;
    }
    if (SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 4))
    {
        cout << "Could not set GL major version" << endl;
    }
    if (SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 3))
    {
        cout << "Could not set GL minor version" << endl;
    }
}
And this is where I call the code (before I call any other SDL or OpenGL functions):
SDL_Init(SDL_INIT_EVERYTHING);
window = SDL_CreateWindow("Sphere", 100, 100, width, height, SDL_WINDOW_OPENGL);
gl_context = SDL_GL_CreateContext(window);
//SDL_HideWindow(window);
set_gl_attribs(); // where I set the context, as shown above
SDL_GL_SetSwapInterval(1);
glewExperimental = GL_TRUE;
GLuint glew_return = glewInit();
The setup code is incorrect. You must choose a core profile context before creating the context; changing the profile after the context exists is too late and will not work. Also, according to the SDL2 documentation, all calls to SDL_GL_SetAttribute must be made before the window is created via SDL_CreateWindow.
Move all calls to SDL_GL_SetAttribute so they come before SDL_CreateWindow.
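For example, a minimal sketch of the corrected ordering, reusing the same calls and variables from the question:
// Set every context attribute first, while no window or context exists yet:
SDL_Init(SDL_INIT_EVERYTHING);
set_gl_attribs(); // the SDL_GL_SetAttribute calls shown above
window = SDL_CreateWindow("Sphere", 100, 100, width, height, SDL_WINDOW_OPENGL);
gl_context = SDL_GL_CreateContext(window); // now actually creates a 4.3 core context
SDL_GL_SetSwapInterval(1);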
The reason this incorrect code appears to work on other machines is that different drivers provide different versions of OpenGL depending on whether you ask for a compatibility or a core profile context:
Mesa will only provide 3.0 for compatibility contexts,
macOS will only provide 2.1 for compatibility contexts,
NVidia and AMD drivers on Windows will provide the latest version they support.
Related
I've been hearing about DSA, the Direct_State_Access extension, and I am looking to try it out. I used GLAD to load OpenGL. I first tried the GL_ARB_direct_state_access extension and called:
if(!GLAD_GL_ARB_direct_state_access) appFatal("Direct State Access Extension Unsupported.\n");
This passes without any problems, but for some reason I don't have access to functions like:
glProgramUniform...
I then tried GL_EXT_direct_state_access, which does give me access to those functions, but the GLAD_GL_ext_direct_state_access check fails and I get an error...
On the other hand, my computer supports up to OpenGL 4.5 core, which is odd, since DSA has been core since 4.5, and therefore there should be support:
Extended renderer info (GLX_MESA_query_renderer):
Vendor: Intel Open Source Technology Center (0x8086)
Device: Mesa DRI Intel(R) HD Graphics 5500 (Broadwell GT2) (0x1616)
Version: 17.2.8
Accelerated: yes
Video memory: 3072MB
Unified memory: yes
Preferred profile: core (0x1)
Max core profile version: 4.5
Max compat profile version: 3.0
Max GLES1 profile version: 1.1
Max GLES[23] profile version: 3.1
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) HD Graphics 5500 (Broadwell GT2)
OpenGL core profile version string: 4.5 (Core Profile) Mesa 17.2.8
OpenGL core profile shading language version string: 4.50
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
What's the issue here? And how can I access those DSA functions, if I even can...
glProgramUniform isn't part of GL_ARB_direct_state_access but of GL_ARB_separate_shader_objects. As such, you must check for GLAD_GL_ARB_separate_shader_objects (or GL 4.1) before you can use the glProgramUniform...() family of functions.
Since you seem to have generated a GL loader for 3.3 core, you must also explicitly add the GL_ARB_separate_shader_objects extension when generating your loader with GLAD.
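As a sketch, assuming a loader regenerated that way (program and location stand in for your own shader state, and appFatal is the question's own helper):
// Guard the glProgramUniform...() family behind GL 4.1 or the extension:
if (!GLAD_GL_VERSION_4_1 && !GLAD_GL_ARB_separate_shader_objects)
    appFatal("Separate Shader Objects unsupported.\n");
glProgramUniform1i(program, location, 0); // safe to call past the check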
Theoretically, there can be 3.x implementations supporting these extensions. But in practice, GPU vendors seldom add such new functionality to really old drivers (and 3.x-only GPUs have been end-of-life for several years, supported only by "legacy" branches of the various vendor drivers). GL_ARB_direct_state_access won't be available on macOS in general, and it will be lacking from most Windows drivers not supporting GL 4.5 anyway. The only notable exception might be Mesa itself, where many driver backends use the same base infrastructure, and where a lot of effort still goes into supporting older GPUs.
So while it doesn't hurt to use 3.3 plus some extensions which are core in 4.x, the increase (relative to using GL 4.x directly) in the number of potential implementations that can run your code might not be as big as you hope. YMMV.
I'm trying to get a very basic OpenGL application up and running. I have a GTX 770, and I've installed the nvidia-361 drivers. When I run glxinfo | grep version, I get:
OpenGL core profile version string: 4.5.0 NVIDIA 361.42
OpenGL core profile shading language version string: 4.5.0 NVIDIA
OpenGL version string: 4.5.0 NVIDIA
This would lead one to believe that the drivers support OpenGL 4.5, right?
Now, I'm using GLEW in my basic application. I get the version string and print it:
const GLubyte* version = glGetString(GL_VERSION);
printf("version: %s\n", version);
and when I run the program, I get:
version: 3.2.0 NVIDIA 361.42
?????????????????????????
What's happening here? I checked my version of libglew-dev, and it's 1.13.0. OpenGL 4.5 support was added in GLEW 1.11.0, so I don't think GLEW is the problem, but I can't figure out what's going on.
glGetString(GL_VERSION) returns the version the current GL context provides, not necessarily the highest version your GL implementation supports.
GLEW has nothing to do with this; it just loads GL function pointers. What matters is the way you created the context. What you see here is the normal behavior of the NVIDIA driver in recent versions: when you ask it for some GL x.y context, it returns version x.y, not a higher version it would still support.
If you want a 4.5 context, just request a GL 4.5 context. How to do that depends on the way you create the context. If you use some libraries like GLFW, GLUT, SDL, Qt, ..., just consult the documentation on how to request a specific context version. If you manually create the context via glX, use glXCreateContextAttribsARB with proper GLX_CONTEXT_MAJOR_VERSION_ARB and GLX_CONTEXT_MINOR_VERSION_ARB attributes.
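For instance, a minimal sketch with GLFW, one of the libraries mentioned above (adapt the version numbers to what you need):
// Request a 4.5 core profile context before creating the window:
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 5);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
GLFWwindow *window = glfwCreateWindow(800, 600, "GL 4.5", NULL, NULL);
glfwMakeContextCurrent(window);
// glGetString(GL_VERSION) should now report a 4.5 context.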
I am using Linux and trying to learn OpenGL. I am referring to the website learnopengl.com and have compiled the first program available here.
I seem to compile the program without problems with the command:
g++ -Wall -lGLEW -lglfw -lGL -lX11 -lpthread -lXrandr -lXi ./foo.cpp
But when I run the program, it shows whatever was behind the window instead of a blank window, like this:
Note, however, that this window is not transparent: if I move the window, the background stays the same. But if I minimize and reopen the window, the background is created anew. This happens for both the dedicated and the integrated GPU.
What do I need to do to make it work properly?
Here is some info
System: Host: aditya-pc Kernel: 4.4.13-1-MANJARO x86_64 (64 bit gcc: 6.1.1)
Desktop: KDE Plasma 5.6.4 (Qt 5.6.0) Distro: Manjaro Linux
Machine: System: HP (portable) product: HP Notebook v: Type1ProductConfigId
Mobo: HP model: 8136 v: 31.36
Bios: Insyde v: F.1F date: 01/18/2016
CPU: Dual core Intel Core i5-6200U (-HT-MCP-) cache: 3072 KB
flags: (lm nx sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx) bmips: 9603
clock speeds: max: 2800 MHz 1: 583 MHz 2: 510 MHz 3: 670 MHz 4: 683 MHz
Graphics: Card-1: Intel Skylake Integrated Graphics bus-ID: 00:02.0
Card-2: Advanced Micro Devices [AMD/ATI] Sun XT [Radeon HD 8670A/8670M/8690M / R5 M330]
bus-ID: 01:00.0
Display Server: X.Org 1.17.4 drivers: ati,radeon,intel
Resolution: 1920x1080@60.06hz
GLX Renderer: Mesa DRI Intel HD Graphics 520 (Skylake GT2)
GLX Version: 3.0 Mesa 11.2.2 Direct Rendering: Yes
Info for the two graphics cards:
AMD card
$ DRI_PRIME=1 glxinfo|grep OpenGL
OpenGL vendor string: X.Org
OpenGL renderer string: Gallium 0.4 on AMD HAINAN (DRM 2.43.0, LLVM 3.8.0)
OpenGL core profile version string: 4.1 (Core Profile) Mesa 11.2.2
OpenGL core profile shading language version string: 4.10
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 3.0 Mesa 11.2.2
OpenGL shading language version string: 1.30
OpenGL context flags: (none)
OpenGL extensions:
OpenGL ES profile version string: OpenGL ES 3.0 Mesa 11.2.2
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.00
OpenGL ES profile extensions:
Intel integrated
$ DRI_PRIME=0 glxinfo|grep OpenGL
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) HD Graphics 520 (Skylake GT2)
OpenGL core profile version string: 3.3 (Core Profile) Mesa 11.2.2
OpenGL core profile shading language version string: 3.30
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 3.0 Mesa 11.2.2
OpenGL shading language version string: 1.30
OpenGL context flags: (none)
OpenGL extensions:
OpenGL ES profile version string: OpenGL ES 3.1 Mesa 11.2.2
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.10
OpenGL ES profile extensions:
Technically, what you experience is undefined behaviour. Here's what's happening: your program asks the graphics system to create a new window, and because this window is going to be used with OpenGL, the library you're using (GLFW) asks the graphics system not to clear the window on its own. (The reason is that when playing animations, the graphics system clearing the window between frame draws would cause flicker, which is unwanted.)
So that new window is created, and it needs graphics memory for that. These days there are two ways of presenting a window to the user. One is off-screen rendering with on-screen composition. The other is on-screen framebuffer windowing with pixel ownership testing. In your case you're getting the latter of the two, which means your window is essentially just a rectangle in the main screen framebuffer: when it is freshly created, whatever was there before will remain in the window until it gets drawn over with the desired contents.
Of course this "pixels left behind" behaviour is specified nowhere; it's an artifact of how the most reasonable implementation behaves (it behaves the same on Microsoft Windows with Aero disabled).
What is specified is that when a window gets moved around, its contents move along with it. Graphics hardware, even as old as the late 1980s, has special circuitry built in to do this efficiently. But this works only as long as the window stays within the screen framebuffer. If you drag the window partially outside of the screen area and back again, you'll see the border pixels getting replicated along the edges.
So what do you have to do to get a defined result? You'll have to actually draw something to the window (which might be as simple as clearing it), and if it's a double-buffered window, swap the buffers.
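A minimal sketch of such a loop with GLFW, which the learnopengl.com program already uses:
// Give the window defined contents every frame: clear, then swap.
while (!glfwWindowShouldClose(window))
{
    glClearColor(0.2f, 0.3f, 0.3f, 1.0f); // any color will do
    glClear(GL_COLOR_BUFFER_BIT);
    glfwSwapBuffers(window); // present the freshly cleared back buffer
    glfwPollEvents();
}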
I am trying to integrate freetype_gl, and the shaders it uses are for an older version. I can convert them to work with 3.3+; I am just wondering if that could be causing nothing to appear on the screen.
Short answer
On Windows, it will work. On OS X and Linux/Mesa it will not work.
Long answer
If you are using the compatibility profile, yes, it will work. If you are using the core profile, then GLSL versions before 1.40 will not be supported.
OpenGL implementations on Windows tend to have strong support for legacy applications, including the compatibility profile. Other implementations, such as Mesa and the OS X implementations, do not support newer versions of the compatibility profile. For non-core profiles, OS X only supports version 2.1 and Mesa supports 3.0.
This means that if you want to run your program on Linux/Mesa or OS X, you will have to port your shader to at least 1.40 (at which point you might as well use 3.30), but you don't have to change anything if you only care about Windows.
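As a hedged illustration of what that port looks like at the top of a vertex shader (the variable names here are hypothetical):
// GLSL 1.20 declared "attribute" inputs and "varying" outputs;
// under #version 330 core these become "in" and "out":
const char *vertex_src =
    "#version 330 core\n"
    "in vec2 position;\n"   // was: attribute vec2 position;
    "out vec2 tex_coord;\n" // was: varying vec2 tex_coord;
    "void main() {\n"
    "    tex_coord = position;\n"
    "    gl_Position = vec4(position, 0.0, 1.0);\n"
    "}\n";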
OpenGL and GLSL versions
Every OpenGL version since 2.0 has been released with a corresponding GLSL version. However, the GLSL version numbers were not always in sync with the GL version. Here is a table:
OpenGL Version    GLSL Version
2.0               1.10
2.1               1.20
3.0               1.30
3.1               1.40
3.2               1.50
For all versions of OpenGL 3.3 and above, the corresponding GLSL version matches the OpenGL version. So GL 4.1 uses GLSL 4.10.
In my case, I am using GLSL 1.30 with OpenGL 3.3, but I think you might not be able to use GLSL 1.20 there.
I downloaded the drivers for Intel(R) HD Graphics version 8.15.10.2993, which are known to support shader model 3.0, but after installation I checked with the Geeks3D caps viewer and by calling gl.glGetString(GL2.GL_VERSION), and it showed just 2.1.0. Thanks!
GL 2.1.0 / GLSL 1.20 is fine for shader model 3. GL 3.x/GLSL >=1.30 requires shader model 4.
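To see what the current context actually provides at runtime, a minimal sketch:
// Query the context's GL and GLSL versions:
printf("GL version:   %s\n", glGetString(GL_VERSION));
printf("GLSL version: %s\n", glGetString(GL_SHADING_LANGUAGE_VERSION));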