I am using Linux and trying to learn OpenGL. I am referring to the website learnopengl.com and have compiled the first program available here.
I seem to compile the program without problems with the command
g++ -Wall -lGLEW -lglfw -lGL -lX11 -lpthread -lXrandr -lXi ./foo.cpp
But when I run the program, the window shows whatever was behind it instead of a blank window, like this:
Note, however, that this window is not transparent: if I move it, the captured "background" stays the same. But if I minimize and reopen the window, the background is captured anew. This happens with both the dedicated and the integrated GPU.
What do I need to do to make it work properly?
Here is some info
System: Host: aditya-pc Kernel: 4.4.13-1-MANJARO x86_64 (64 bit gcc: 6.1.1)
Desktop: KDE Plasma 5.6.4 (Qt 5.6.0) Distro: Manjaro Linux
Machine: System: HP (portable) product: HP Notebook v: Type1ProductConfigId
Mobo: HP model: 8136 v: 31.36
Bios: Insyde v: F.1F date: 01/18/2016
CPU: Dual core Intel Core i5-6200U (-HT-MCP-) cache: 3072 KB
flags: (lm nx sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx) bmips: 9603
clock speeds: max: 2800 MHz 1: 583 MHz 2: 510 MHz 3: 670 MHz 4: 683 MHz
Graphics: Card-1: Intel Skylake Integrated Graphics bus-ID: 00:02.0
Card-2: Advanced Micro Devices [AMD/ATI] Sun XT [Radeon HD 8670A/8670M/8690M / R5 M330]
bus-ID: 01:00.0
Display Server: X.Org 1.17.4 drivers: ati,radeon,intel
Resolution: 1920x1080@60.06hz
GLX Renderer: Mesa DRI Intel HD Graphics 520 (Skylake GT2)
GLX Version: 3.0 Mesa 11.2.2 Direct Rendering: Yes
Info for the two graphics cards:
AMD card:
$ DRI_PRIME=1 glxinfo|grep OpenGL
OpenGL vendor string: X.Org
OpenGL renderer string: Gallium 0.4 on AMD HAINAN (DRM 2.43.0, LLVM 3.8.0)
OpenGL core profile version string: 4.1 (Core Profile) Mesa 11.2.2
OpenGL core profile shading language version string: 4.10
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 3.0 Mesa 11.2.2
OpenGL shading language version string: 1.30
OpenGL context flags: (none)
OpenGL extensions:
OpenGL ES profile version string: OpenGL ES 3.0 Mesa 11.2.2
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.00
OpenGL ES profile extensions:
Intel integrated card:
$ DRI_PRIME=0 glxinfo|grep OpenGL
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) HD Graphics 520 (Skylake GT2)
OpenGL core profile version string: 3.3 (Core Profile) Mesa 11.2.2
OpenGL core profile shading language version string: 3.30
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 3.0 Mesa 11.2.2
OpenGL shading language version string: 1.30
OpenGL context flags: (none)
OpenGL extensions:
OpenGL ES profile version string: OpenGL ES 3.1 Mesa 11.2.2
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.10
OpenGL ES profile extensions:
Technically, what you experience is undefined behaviour. Here's what's happening: your program asks the graphics system to create a new window, and because this window is going to be used with OpenGL, the library you're using (GLFW) asks the graphics system not to clear the window on its own. (The reason is that if the graphics system cleared the window between frame draws of an animation, it would cause unwanted flicker.)
So the new window is created, and it needs graphics memory for that. These days there are two ways of presenting a window to the user: one is off-screen rendering with on-screen composition; the other is on-screen framebuffer windowing with pixel ownership testing. In your case you're getting the latter, which means your window is essentially just a rectangle in the main screen framebuffer, and when it is newly created, whatever was there before remains in the window until it gets drawn with the desired contents.
Of course this "pixels left behind" behaviour is specified nowhere; it's an artifact of how the most reasonable implementation behaves (it behaves the same on Microsoft Windows with Aero disabled).
What is specified is that when a window gets moved around, its contents move along with it. Graphics hardware, even hardware as old as the late 1980s, has special circuitry built in to do this efficiently. But this works only as long as the window stays within the screen framebuffer; if you drag the window partially off-screen and back again, you'll see the border pixels getting replicated along the edges.
So what do you have to do to get a defined result? You have to actually draw something to the window (it might be as simple as clearing it), and if it's a double-buffered window, swap the buffers.
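For example, with GLFW (which the learnopengl.com code uses) the render loop could look roughly like this. This is only a minimal sketch; it assumes window is the GLFWwindow* your program already created:
while (!glfwWindowShouldClose(window))
{
    glClearColor(0.2f, 0.3f, 0.3f, 1.0f);  // any colour will do
    glClear(GL_COLOR_BUFFER_BIT);          // give the window defined contents
    glfwSwapBuffers(window);               // present the freshly cleared back buffer
    glfwPollEvents();
}
Once the window is cleared (or drawn to) every frame and the buffers are swapped, its contents no longer depend on whatever happened to be in that part of the framebuffer before.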
Related
I have an Intel HD4400 graphics card in my laptop. When I run glxinfo | grep "OpenGL", I get
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) Haswell Mobile
OpenGL core profile version string: 4.5 (Core Profile) Mesa 18.0.0-rc5
OpenGL core profile shading language version string: 4.50
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 3.0 Mesa 18.0.0-rc5
OpenGL shading language version string: 1.30
OpenGL context flags: (none)
OpenGL extensions:
OpenGL ES profile version string: OpenGL ES 3.1 Mesa 18.0.0-rc5
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.10
OpenGL ES profile extensions:
This should mean that my graphics card can support GLSL version 4.30, right? However, my shader compilation fails with the following message
error: GLSL 4.30 is not supported. Supported versions are: 1.10, 1.20, 1.30, 1.00 ES, 3.00 ES, and 3.10 ES
In my code, I do set the context profile to core, with the following statement
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE)
after which I set the context major and minor versions to 4 and 3, respectively.
Any ideas? I am on Ubuntu 18.04. I thought it might be the graphics drivers that are not up to date, but if they are not up to date, then would glxinfo still tell me that version 4.50 is supported? It also seems like Ubuntu has the latest Intel graphics drivers installed already, and I do not want to risk installing graphics drivers that might break my display.
Additional Info:
I ran this exact code on another machine, and it worked perfectly (and I am quite sure that that machine is not GL 4.3 compatible). I therefore do not believe that it is the way I created my context. Here is the code where I set my profile context version:
void set_gl_attribs()
{
    if (SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE))
    {
        cout << "Could not set core profile" << endl;
    }
    if (SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 4))
    {
        cout << "Could not set GL major version" << endl;
    }
    if (SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 3))
    {
        cout << "Could not set GL minor version" << endl;
    }
}
And this is where I call the code (before I call any other SDL or OpenGL functions):
SDL_Init(SDL_INIT_EVERYTHING);
window = SDL_CreateWindow("Sphere", 100, 100, width, height, SDL_WINDOW_OPENGL);
gl_context = SDL_GL_CreateContext(window);
//SDL_HideWindow(window);
set_gl_attribs(); // where I set the context, as shown above
SDL_GL_SetSwapInterval(1);
glewExperimental = GL_TRUE;
GLuint glew_return = glewInit();
The setup code is incorrect. You must choose a core profile before creating the context; changing the profile after the context already exists is too late and will not work. Also, according to the SDL2 documentation, all calls to SDL_GL_SetAttribute must be made before the window is created via SDL_CreateWindow.
Move all calls to SDL_GL_SetAttribute so they come before SDL_CreateWindow.
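A minimal sketch of the corrected order, reusing the names from the question (window, gl_context, width, height and set_gl_attribs are the asker's own):
SDL_Init(SDL_INIT_EVERYTHING);

set_gl_attribs();  // all SDL_GL_SetAttribute calls happen here, before the window exists
window = SDL_CreateWindow("Sphere", 100, 100, width, height, SDL_WINDOW_OPENGL);
gl_context = SDL_GL_CreateContext(window);  // the context is now actually created as 4.3 core

SDL_GL_SetSwapInterval(1);
glewExperimental = GL_TRUE;
GLenum glew_return = glewInit();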
The reason this incorrect code appears to work on other machines is that different drivers provide different OpenGL versions depending on whether you ask for a compatibility or a core profile context:
Mesa will only provide 3.0 for compatibility contexts,
macOS will only provide 2.1 for compatibility contexts,
NVidia and AMD drivers on Windows will provide the latest version they support.
I've been hearing about DSA, the Direct State Access extension, and I am looking to try it out. I use GLAD to load OpenGL. I first tried the GL_ARB_direct_state_access extension and called:
if(!GLAD_GL_ARB_direct_state_access) appFatal("Direct State Access Extension Unsupported.\n");
That passes without any problems, but for some reason I don't have access to functions like:
glProgramUniform...
I then tried GL_EXT_direct_state_access, which does give me access to those functions, but the GLAD_GL_EXT_direct_state_access check fails and I get an error.
On the other hand, my machine supports up to OpenGL 4.5 Core, which is odd, since DSA has been core since 4.5, so there should be support:
Extended renderer info (GLX_MESA_query_renderer):
Vendor: Intel Open Source Technology Center (0x8086)
Device: Mesa DRI Intel(R) HD Graphics 5500 (Broadwell GT2) (0x1616)
Version: 17.2.8
Accelerated: yes
Video memory: 3072MB
Unified memory: yes
Preferred profile: core (0x1)
Max core profile version: 4.5
Max compat profile version: 3.0
Max GLES1 profile version: 1.1
Max GLES[23] profile version: 3.1
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) HD Graphics 5500 (Broadwell GT2)
OpenGL core profile version string: 4.5 (Core Profile) Mesa 17.2.8
OpenGL core profile shading language version string: 4.50
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
What's the issue here, and how can I access those DSA functions, if I even can?
glProgramUniform isn't part of GL_ARB_direct_state_access but of GL_ARB_separate_shader_objects. As such, you must check for GLAD_GL_ARB_separate_shader_objects (or GL 4.1) before you can use the glProgramUniform...() family of functions.
Since you seem to have generated a GL loader for 3.3 core, you must also explicitly add the GL_ARB_separate_shader_objects extension when generating your loader with GLAD.
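As a rough sketch of the check (assuming the loader was regenerated with that extension ticked; program and location are hypothetical handles from your own setup code):
if (GLAD_GL_ARB_separate_shader_objects)
{
    glProgramUniform1f(program, location, 0.5f);  // DSA-style: no bind needed
}
else
{
    glUseProgram(program);        // classic path: bind, then set
    glUniform1f(location, 0.5f);
}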
Theoretically, there can be 3.x implementations supporting these extensions. But in practice, GPU vendors seldom add such new functionality to really old drivers (3.x-only GPUs have been end-of-life for several years and are only supported by "legacy" branches of the various vendor drivers). GL_ARB_direct_state_access won't be available on macOS in general, and it will be lacking from most Windows drivers that don't support GL 4.5 anyway. The only notable exception might be Mesa itself, where many driver backends share the same base infrastructure and where a lot of effort still goes into supporting older GPUs.
So while it doesn't hurt to use 3.3 plus some extensions that are core in 4.x, the increase (relative to requiring GL 4.x directly) in the number of potential implementations that can run your code might not be as big as you hope. YMMV.
I've deployed a desktop application that uses QML.
According to the Qt docs, Qt automatically chooses the most appropriate rendering mode.
When I run the app from Qt Creator, Qt picks the correct mode (I've checked it using Fraps and LogView).
But if I run the application from its deployment folder (with the DLL binaries), the app uses software rendering. If I delete opengl32.dll, the app uses native OpenGL mode.
The comparison of the two logs is below.
Run from the Creator:
qt.qpa.gl: Basic wglCreateContext gives version 4.6
qt.qpa.gl: OpenGL 2.0 entry points available
qt.qpa.gl: GPU features: QSet()
qt.qpa.gl: supportedRenderers GpuDescription(vendorId=0x10de, deviceId=0x1201, subSysId=0x0, revision=161, driver: "nvd3dum.dll", version=23.21.13.8843, "NVIDIA GeForce GTX 560") renderer: QFlags(0x1|0x2|0x4|0x8|0x20)
qt.qpa.gl: Qt: Using WGL and OpenGL from "opengl32.dll"
qt.qpa.gl: create OpenGL: "NVIDIA Corporation","GeForce GTX 560/PCIe/SSE2" default ContextFormat: v4.6 profile: 0 options: QFlags(0x4),SampleBuffers, Extension-API present Extensions: 326
Run from the folder:
qt.qpa.gl: Basic wglCreateContext gives version 3.0
qt.qpa.gl: OpenGL 2.0 entry points available
qt.qpa.gl: GPU features: QSet()
qt.qpa.gl: supportedRenderers GpuDescription(vendorId=0x10de, deviceId=0x1201, subSysId=0x0, revision=161, driver: "nvd3dum.dll", version=23.21.13.8843, "NVIDIA GeForce GTX 560") renderer: QFlags(0x1|0x2|0x4|0x8|0x20)
qt.qpa.gl: Qt: Using WGL and OpenGL from "opengl32.dll"
qt.qpa.gl: create OpenGL: "VMware, Inc.","Gallium 0.4 on llvmpipe (LLVM 3.4, 128 bits)" default ContextFormat: v3.0 profile: 0 options: QFlags(0x4),SampleBuffers, Extension-API present Extensions: 184
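For what it's worth, Qt's automatic choice can also be overridden explicitly in code. A minimal sketch, assuming a recent Qt 5 on Windows (main.qml is a hypothetical QML entry point):
#include <QGuiApplication>
#include <QQmlApplicationEngine>
#include <QUrl>

int main(int argc, char *argv[])
{
    // Request the desktop OpenGL driver instead of the ANGLE/software fallback.
    // Must be set before the QGuiApplication is constructed.
    QCoreApplication::setAttribute(Qt::AA_UseDesktopOpenGL);
    QGuiApplication app(argc, argv);

    QQmlApplicationEngine engine;
    engine.load(QUrl(QStringLiteral("qrc:/main.qml")));
    return app.exec();
}
Recent Qt versions also honour the QT_OPENGL=desktop environment variable, which has the same effect without recompiling.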
I downloaded the drivers for Intel(R) HD Graphics, version 8.15.10.2993, which are known to support Shader Model 3.0. But after installation I checked with the Geeks3D caps viewer and by calling gl.glGetString(GL2.GL_VERSION), and both showed just 2.1.0. Thanks!
GL 2.1.0 / GLSL 1.20 is fine for Shader Model 3 hardware. GL 3.x / GLSL >= 1.30 requires Shader Model 4.
I have had a lot of problems and confusion setting up my laptop for OpenGL programming and for running OpenGL programs.
My laptop has one of these very clever (too clever for me) designs where the Intel CPU has a graphics processor on chip, and there is also a dedicated graphics card. Specifically, the CPU is a 3630QM with "HD Graphics 4000" (a very exciting name, I am sure), and the "proper" graphics processor is an Nvidia GTX 670MX.
Theoretically, according to Wikipedia, the HD Graphics Chip (Intel), under Linux, supports OpenGL 3.1, if the correct drivers are installed. (They probably aren't.)
According to NVIDIA, the 670MX can support OpenGL 4.1, so ideally I would like to develop and execute on this GPU.
Do I have drivers installed that let me execute OpenGL 4.1 code on the NVIDIA GPU? Probably not; currently I use this "optirun" thing to execute OpenGL programs on the dedicated GPU. See this link for the process I followed to set up my computer.
My question: I know how to run a compiled program on the 670MX; that would be optirun ./programname. But how can I find out what OpenGL version the installed graphics drivers on my system support? Running glxinfo | grep -i opengl in a terminal tells me that the Intel chip supports OpenGL version 3.0. See the output below:
ed#kubuntu1304-P151EMx:~$ glxinfo | grep -i opengl
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) Ivybridge Mobile
OpenGL version string: 3.0 Mesa 9.1.3
OpenGL shading language version string: 1.30
OpenGL extensions:
How do I do the same (or a similar) check under optirun to find out what OpenGL version is supported there?
Update
Someone suggested I use glGetString() to find this information: I am now completely confused!
Without optirun, the supported OpenGL version is '3.0 MESA 9.1.3', so version 3, which is what I expected. However, under optirun, the supported OpenGL version is '4.3.0 NVIDIA 313.30', so version 4.3?! How can it be Version 4.3 if the hardware specification from NVIDIA states only Version 4.1 is supported?
You can just run glxinfo under optirun:
optirun glxinfo | grep -i opengl
Both cards have different capabilities, so it's normal to get different OpenGL versions. As for the 4.3: the installed NVIDIA driver simply exposes a newer OpenGL version than the one listed in the GPU's original specification; driver updates regularly add support for newer GL versions on existing hardware.
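If you want to query this from inside a program, as was suggested in the update, here is a minimal sketch (assuming GLFW 3 is installed); run it plainly and again under optirun to compare:
#include <GLFW/glfw3.h>
#include <cstdio>

int main()
{
    if (!glfwInit()) return 1;
    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);  // no visible window needed for the query
    GLFWwindow* win = glfwCreateWindow(64, 64, "gl-probe", nullptr, nullptr);
    if (!win) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(win);

    std::printf("Vendor:   %s\n", reinterpret_cast<const char*>(glGetString(GL_VENDOR)));
    std::printf("Renderer: %s\n", reinterpret_cast<const char*>(glGetString(GL_RENDERER)));
    std::printf("Version:  %s\n", reinterpret_cast<const char*>(glGetString(GL_VERSION)));

    glfwTerminate();
    return 0;
}
Compiled with something like g++ -std=c++11 probe.cpp -lglfw -lGL, the vendor and renderer strings make it obvious which of the two GPUs actually provided the context.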