I am rendering a mesh using OpenGL through Qt (Qt 5.4).
On my OSX computer the rendering is relatively slow. When I rotate the mesh I can see that the rendering can't keep up with my mouse input.
On the same OSX computer, when I run my application in a Windows 7 virtual machine, the rendering is silky smooth. It almost looks like the Mac version is rendering in software mode instead of using hardware acceleration.
I used glGetString to check the vendor and renderer being used, and they look OK:
"NVIDIA Corporation"
"NVIDIA GeForce GT 650M OpenGL Engine"
Any ideas why the native OSX build would run so much slower?
BTW: I am rendering a mesh composed of about 150,000 vertices using a GL_ARRAY_BUFFER.
I am quite new to OpenGL, so any pointers would be appreciated.
I'm answering this question so that it can be closed.
As Kuba Ober indicated in the comments above, the problem was caused by an OpenGL mistake that Windows appeared to be hiding. In my case I forgot to call QOpenGLShaderProgram::disableAttributeArray(), e.g.:
program->enableAttributeArray(texcoorLocation);
glVertexAttribPointer(texcoorLocation, 2, GL_FLOAT, GL_FALSE, 0, uv);
glDrawElements(GL_TRIANGLES, elementCount, GL_UNSIGNED_SHORT, indices);
program->disableAttributeArray(texcoorLocation); //<-- this line was missing
It seems Windows forgave this problem, while OSX did not.
I'm using SDL2 on Windows 10 to create an OpenGL context, but when I try to get the framebuffer attachment color encoding on an Intel UHD 630, I get an Invalid Operation error instead. On my Nvidia Geforce 1070, it's returning the correct value (GL_LINEAR).
According to Khronos, my code should work:
checkGlErrors(); // no error
auto params = GLint{ 0 };
glGetFramebufferAttachmentParameteriv(GL_DRAW_FRAMEBUFFER,
GL_BACK,
GL_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING,
&params);
checkGlErrors(); // invalid operation
What I'm reading is that Intel drivers are notoriously unreliable, but I'd be surprised if they didn't support an sRGB framebuffer. For context: I'm using GL_FRAMEBUFFER_SRGB for gamma correction, and this also doesn't work on my integrated GPU even though it works perfectly on my Nvidia GPU.
Am I doing something wrong? Is there a reliable way of enabling SRGB on the default framebuffer? My drivers are up to date (27.10.100.8425). The output is gamma-corrected on my Geforce GPU, but the integrated Intel GPU renders the default framebuffer without gamma correction, so I'm assuming there's something unique about the default framebuffer in the Intel drivers that I don't know about.
Edit: The correct value should be GL_SRGB, not GL_LINEAR. The correct code for getting that parameter:
// default framebuffer, using glGetFramebuffer
glGetFramebufferAttachmentParameteriv(GL_DRAW_FRAMEBUFFER, GL_FRONT_LEFT, GL_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING, &params);
// default framebuffer, using glGetNamedFramebuffer
glGetNamedFramebufferAttachmentParameteriv(GL_ZERO, GL_FRONT_LEFT, GL_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING, &params);
// named framebuffer
glGetNamedFramebufferAttachmentParameteriv(namedFramebuffer.getId(), GL_COLOR_ATTACHMENT0, GL_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING, &params);
It looks like it's just a driver bug. 100.8425 (the current beta driver) works well except for the SRGB issue; the current stable driver, along with several other newer drivers, all display flickering green horizontal lines in the window. I reverted to 100.8190 and now the window renders correctly, identical to the GeForce GPU.
This sums up the situation: "Intel drivers suck, don't trust anything they do."
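In case it helps anyone else, this is roughly how an sRGB-capable default framebuffer is requested with SDL2: the attribute has to be set before the window and context are created, and GL_FRAMEBUFFER_SRGB enabled afterwards. A minimal sketch (window size and GL version are arbitrary):
#include <SDL.h>
#include <SDL_opengl.h>

int main(int argc, char** argv) {
    SDL_Init(SDL_INIT_VIDEO);

    // Request an sRGB-capable default framebuffer *before* creating the window/context.
    SDL_GL_SetAttribute(SDL_GL_FRAMEBUFFER_SRGB_CAPABLE, 1);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 3);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);

    SDL_Window* window = SDL_CreateWindow("srgb test",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 800, 600, SDL_WINDOW_OPENGL);
    SDL_GLContext context = SDL_GL_CreateContext(window);

    // If the driver honored the attribute, writes to the default framebuffer now go
    // through linear-to-sRGB conversion while this is enabled.
    glEnable(GL_FRAMEBUFFER_SRGB);

    // ... render loop ...

    SDL_GL_DeleteContext(context);
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}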
The glDrawArrays documentation does not list GL_QUADS as a valid mode.
https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glDrawArrays.xhtml
But on my PC, glDrawArrays(GL_QUADS, 0, 4) still draws the rectangle. My PC is running OpenGL 4.3.
Why is this?
Probably because your PC gives you a compatibility profile context. GL_QUADS was removed from core OpenGL, but it remains valid in the compatibility profile.
However, the OpenGL spec is just a spec. Vendors are free to implement other things as well to make users happy, for example by supporting legacy software.
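If you want to check which profile your context actually got, you can query it at runtime. A small sketch (only meaningful on GL 3.2+ contexts, and assumes a loader that defines these enums):
GLint profileMask = 0;
glGetIntegerv(GL_CONTEXT_PROFILE_MASK, &profileMask);   // GL 3.2+ contexts only

if (profileMask & GL_CONTEXT_COMPATIBILITY_PROFILE_BIT) {
    // Compatibility profile: deprecated features such as GL_QUADS are still accepted.
} else if (profileMask & GL_CONTEXT_CORE_PROFILE_BIT) {
    // Core profile: GL_QUADS is gone, and glDrawArrays(GL_QUADS, ...) should give GL_INVALID_ENUM.
}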
My setup includes an on-board Intel integrated GPU for everyday tasks and a high-performance Nvidia GPU for graphics-intensive applications. I'm developing an OpenGL 3.3 (core profile) application (using shaders, not the fixed-function pipeline). By default, my app runs on the Intel GPU and works just fine. But when I try to run it on the Nvidia GPU, it only shows a black screen.
Now here's the interesting part: the OpenGL context gets created correctly, and the world coordinate axes I draw for debugging actually get drawn (GL_LINE). For some reason, Nvidia doesn't draw any GL_POLYGONs or GL_QUADs.
Has anyone experienced something similar, and what do you think is the culprit here?
It appears GL_POLYGON, GL_QUADS, and GL_QUAD_STRIP have been removed from the OpenGL 3.3 core profile. For some reason Intel draws them regardless, but Nvidia started drawing as well as soon as I substituted them with GL_TRIANGLES etc. Always check for removed features if problems like this arise.
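A minimal sketch of that substitution, drawing one rectangle as two triangles (assumes a VAO with the position attribute already set up; buffer names are placeholders):
// A quad with corner indices 0,1,2,3 becomes the triangles (0,1,2) and (2,3,0).
const GLushort indices[] = { 0, 1, 2,   2, 3, 0 };

GLuint ebo = 0;
glGenBuffers(1, &ebo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);   // core profile: indices must come from a buffer object
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, nullptr);   // 6 indices = 2 triangles = 1 former quad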
I have a problem with creating a compute shader.
My program doesn't seem to know the GLenum value GL_COMPUTE_SHADER when I try to create a shader with the glCreateShader() function.
My graphics card is kinda low-end, but when I check for the GL_ARB_compute_shader extension it is present, so that shouldn't be a problem, I guess.
Is there something I have to do to enable this extension, or is there another problem and I have to use OpenCL instead?
OpenGL compute shaders are new in version 4.3. I'm guessing you have headers that predate that version. However, even if you get newer headers, your GPU or driver may be too old to support OpenGL 4.3. What version does your hardware return for glGetString(GL_VERSION)?
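To narrow down whether it's the headers or the driver, dumping the version and extension list at runtime helps. A sketch (the function name is made up; assumes a current context and a loader such as glad or GLEW):
#include <cstdio>
#include <cstring>

void reportComputeShaderSupport() {
    std::printf("GL_VERSION:  %s\n", reinterpret_cast<const char*>(glGetString(GL_VERSION)));
    std::printf("GL_RENDERER: %s\n", reinterpret_cast<const char*>(glGetString(GL_RENDERER)));

    GLint major = 0, minor = 0, extensionCount = 0;
    glGetIntegerv(GL_MAJOR_VERSION, &major);
    glGetIntegerv(GL_MINOR_VERSION, &minor);
    glGetIntegerv(GL_NUM_EXTENSIONS, &extensionCount);

    bool hasArbComputeShader = false;
    for (GLint i = 0; i < extensionCount; ++i) {
        const char* ext = reinterpret_cast<const char*>(glGetStringi(GL_EXTENSIONS, i));
        if (ext && std::strcmp(ext, "GL_ARB_compute_shader") == 0) {
            hasArbComputeShader = true;
            break;
        }
    }

    // Compute shaders need GL 4.3+ or GL_ARB_compute_shader, *and* headers/loader new
    // enough to define GL_COMPUTE_SHADER so that glCreateShader(GL_COMPUTE_SHADER) compiles.
    bool supported = (major > 4 || (major == 4 && minor >= 3)) || hasArbComputeShader;
    std::printf("Compute shaders supported by the driver: %s\n", supported ? "yes" : "no");
}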
Apple's OpenGL Shader Builder lets you drop in your vertex (or fragment) shader; it will link and validate it, then tell you which GL_RENDERER is used for that shader. For me it shows either Apple Software Renderer (in red, because it means the shader will be dog slow) or AMD Radeon HD 6970M OpenGL Engine (i.e. my GPU's renderer, which is what I usually want running the shader).
How can I also determine this at runtime in my own software?
Edit:
Querying GL_RENDERER in my CPU code always seems to return AMD Radeon HD 6970M OpenGL Engine, regardless of where I place the query in the draw loop, even though I'm using a shader that OpenGL Shader Builder says is running on the Apple Software Renderer (and I believe it, because it's very slow). Is it a matter of querying GL_RENDERER at just the right time? If so, when?
The renderer used is tied to the OpenGL context, and a proper OpenGL implementation should not switch renderers in between. Of course, an OpenGL implementation may be built on infrastructure that dynamically switches between backend renderers, but that must then be reflected in the renderer string reported by the frontend context.
So what you do is indeed the correct method.
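For reference, the query itself is just this (a current context is required; <cstdio> assumed for the printout):
// The strings are tied to the current OpenGL context, as described above.
const char* renderer = reinterpret_cast<const char*>(glGetString(GL_RENDERER));
const char* vendor   = reinterpret_cast<const char*>(glGetString(GL_VENDOR));
std::printf("GL_RENDERER: %s\nGL_VENDOR:   %s\n", renderer, vendor);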