Swapping buffers with core profile uses invalid operations - c++

Whenever I call a function to swap buffers I get tons of errors from glDebugMessageCallback saying:
glVertex2f has been removed from OpenGL Core context (GL_INVALID_OPERATION)
I've tried both GLFW and freeglut, and neither works correctly.
I haven't used glVertex2f, of course. I even went as far as deleting all my rendering code to see if I could find what's causing it, but the error is still there, right after glutSwapBuffers/glfwSwapBuffers.
With single-buffering, no errors appear at all.
I've initialized the context to 4.3, core profile, and flagged forward-compatibility.
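For reference, a 4.3 core, forward-compatible (and debug-flagged) context of the kind described is typically requested with GLFW hints roughly like this; this is a minimal sketch of a typical setup, not the asker's actual code, with window size and title as placeholders:

#include <GLFW/glfw3.h>

int main()
{
    if (!glfwInit())
        return -1;
    // Request a 4.3 core, forward-compatible context; the debug flag asks for a
    // debug context so that glDebugMessageCallback reliably receives messages.
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GLFW_TRUE);
    glfwWindowHint(GLFW_OPENGL_DEBUG_CONTEXT, GLFW_TRUE);
    GLFWwindow* window = glfwCreateWindow(800, 600, "core profile test", nullptr, nullptr);
    if (!window) { glfwTerminate(); return -1; }
    glfwMakeContextCurrent(window);
    while (!glfwWindowShouldClose(window))
    {
        // ... rendering would go here ...
        glfwSwapBuffers(window);   // the reported errors appear right after this call
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}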

As explained in the comments, the problem here is actually third-party software and not any code you yourself wrote.
When software such as the Steam overlay or FRAPS needs to draw something on top of an OpenGL application, it usually does so by hooking/injecting some code into your application's SwapBuffers implementation at run-time.
You are dealing with a piece of software (RivaTuner) that still uses immediate mode to draw its overlay and that is the source of the unexplained deprecated API calls on every buffer swap.
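If you want to verify that yourself (a diagnostic step added here, not part of the answer above), you can make the debug output synchronous: the callback then runs inside the GL call that triggered the message, so a breakpoint placed in it shows the injected overlay code on the call stack. A minimal sketch, assuming a 4.3 debug context with GLEW already initialized:

#include <GL/glew.h>
#include <cstdio>

// Invoked synchronously, i.e. from inside the GL call that triggered the message.
static void APIENTRY onGlDebugMessage(GLenum source, GLenum type, GLuint id,
                                      GLenum severity, GLsizei length,
                                      const GLchar* message, const void* userParam)
{
    std::fprintf(stderr, "GL debug: %s\n", message);
    // Put a breakpoint here: whoever issued the deprecated glVertex2f call
    // (e.g. an injected overlay) is still on the call stack.
}

void installDebugCallback()   // call once after the context is current
{
    glEnable(GL_DEBUG_OUTPUT);
    glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);
    glDebugMessageCallback(onGlDebugMessage, nullptr);
}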

Do you have code you can share? Either the driver is buggy or something tries to call glVertex in your process. You could try using glloadgen to build a loader library that covers OpenGL 4.3 symbols only (and only those symbols), so that any use of symbols outside the 4.3 spec causes errors when your program is linked.

Related

How to purge OpenGL shader cache

Current OpenGL drivers keep a compiled shader cache, located in
c:/users/name/appdata/roaming/amd|nvidia/glcache/...
Unfortunately, it causes program crashes almost every time I change some of the shaders, which I currently fix by manually deleting the shader cache.
The question is: is there any good way of purging the cache when I ship a new version of the program? Any OpenGL extension to control the caching? Or some magical API from the operating system? Or, at least, a proper way to find the folder?
Another question: what keys do the drivers use to identify individual shaders? So that I can somehow change the key every time I change a shader.
Unfortunately, it causes program crashes almost every time I change some of the shaders, which I currently fix by manually deleting the shader cache.
If that is happening, there's something seriously broken with your system and/or your driver installation. This must not happen, and if it does then it's not something an OpenGL program should have to concern itself with.
Another question: what keys do the drivers use to identify individual shaders?
Usually some hash derived from the shader source's AST (i.e. just adding whitespace or renaming a symbol will not do the trick).
The question is: is there any good way of purging the cache when I ship a new version of the program?
Not that I know of. Shaders are a "black box" in the OpenGL specification. You send in GLSL source text, it gets compiled and linked and that's it. Things like a shader cache or the internal representation are not specified by OpenGL.
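To make the "black box" concrete, this is essentially the entire interface the specification gives you; everything behind these calls, caching included, is up to the driver. A minimal compile-and-link sketch (error handling abbreviated):

#include <GL/glew.h>

// Build a program from vertex/fragment source strings. What the driver does
// internally with the source (and whether it caches the result) is unspecified.
GLuint buildProgram(const char* vsSource, const char* fsSource)
{
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vsSource, nullptr);
    glCompileShader(vs);
    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fsSource, nullptr);
    glCompileShader(fs);
    GLuint program = glCreateProgram();
    glAttachShader(program, vs);
    glAttachShader(program, fs);
    glLinkProgram(program);
    // In real code, check GL_COMPILE_STATUS / GL_LINK_STATUS and read the info logs.
    glDeleteShader(vs);
    glDeleteShader(fs);
    return program;
}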
Any OpenGL extension to control the caching?
Nope. Technically vendors could add a vendor-specific extension for that, but none have.
Or some magical API from the operating system?
Nothing officially specified for that.
Or, at least, a proper way to find the folder?
Again nothing about this is properly specified.

Tessellation in Go-GL

I'm trying to tessellate a simple triangle using the Golang OpenGL bindings.
The library doesn't claim support for the tessellation shaders, but I looked through the source code, and adding the correct bindings didn't seem terribly tricky. So I branched it and tried adding the correct constants in gl_defs.go.
The bindings still compile just fine and so does my program; it's when I actually try to use the new bindings that things go strange. The program goes from displaying a nicely circling triangle to a black screen whenever I actually try to include the tessellation shaders.
I'm following along with the OpenGL Superbible (6th edition) and using their shaders for this project, so I don't imagine I'm using broken shaders (they don't spit out an error log, anyway). But in case the shaders themselves could be at fault, they can be found in the setupProgram() function here.
I'm pretty sure my graphics card supports tessellation, because printing the OpenGL version returns 4.4.0 NVIDIA 331.38.
So my questions:
Is there any reason adding Go bindings for tessellation wouldn't work? The bindings seem quite straightforward.
Am I adding the new bindings incorrectly?
If it should work, why is it not working for me?
What am I doing wrong here?
Steps that might be worth taking:
Your driver and video card may support tessellation shaders, but the GL context that your binding is returning for you might be for an earlier version of OpenGL. Try glGetString(GL_VERSION) and see what you get (see the sketch after these steps).
Are you calling glGetError basically everywhere and actually checking its values? Does this binding provide error return values? If so, are you checking those?
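Those two checks look roughly like the following; shown in C-style OpenGL (as C++) rather than Go, since the binding mirrors the C API one-to-one. A minimal sketch, not the asker's code:

#include <GL/glew.h>
#include <cstdio>

// Call with a current context: prints the version the context actually provides
// and drains the error queue so individual failures don't go unnoticed.
void reportContextState(const char* where)
{
    const GLubyte* version = glGetString(GL_VERSION);
    std::printf("[%s] GL_VERSION: %s\n", where, version ? (const char*)version : "(null)");
    GLint major = 0, minor = 0;
    glGetIntegerv(GL_MAJOR_VERSION, &major);   // tessellation shaders require >= 4.0
    glGetIntegerv(GL_MINOR_VERSION, &minor);
    std::printf("[%s] context version: %d.%d\n", where, major, minor);
    for (GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError())
        std::printf("[%s] glGetError: 0x%04X\n", where, err);
}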

GL Image and loading OpenGL functions

Why do some GL Image functions require using GL Load to initialize the OpenGL context? Is it possible to fully utilize GL Image using GLEW to initialize the OpenGL context?
If you are talking about the Unofficial SDK's GL Image system, the answer is no. The SDK is intended to be a package deal; if you're using one part of it, you ought to be using the rest. After all, if you can build and include GL Image, you could also use GL Load, since they're bundled together.
And GL Load does exactly what GLEW does; it's better in many ways, as it doesn't require that "experimental" thing to make it work on core contexts. Through the C interface, you could just swap out all of the #include <GL/glew.h> parts with #include <glload/gl_[Insert Version Here].h>. You wouldn't need to modify any code other than that (as well as the initialization code of course).
That being said, you ought to be able to use GL Load and GLEW simultaneously, as long as:
You initialize both of them. This means calling glewInit and LoadFunctions after creating the OpenGL context. Their variables shouldn't interact or anything.
You never try to include both of their non-system headers in the same file. GL Image's TextureGenerator.h is actually designed specifically to not require the inclusion of OpenGL headers (that is, it doesn't directly use GL types like GLint or GLenum).
It's obviously wasteful, as they do the exact same job. But it ought to work.
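A rough sketch of that dual initialization follows. The GLEW part is standard; the GL Load header and entry-point names used here (glload/gl_load.hpp, glload::LoadFunctions) are assumptions based on how the answer refers to them and may differ between SDK versions, and per the point above the two loaders are kept in separate source files:

// --- file: init_glew.cpp (includes only the GLEW header) ---
#include <GL/glew.h>

bool initGlew()
{
    glewExperimental = GL_TRUE;        // GLEW's workaround for core contexts
    return glewInit() == GLEW_OK;      // must run after the context is current
}

// --- file: init_glload.cpp (includes only the GL Load header) ---
// Assumed names; check them against the SDK version you actually use.
#include <glload/gl_load.hpp>

bool initGlLoad()
{
    return glload::LoadFunctions();    // also requires a current context
}

// Called once, right after creating and binding the OpenGL context:
//     if (!initGlew() || !initGlLoad()) { /* bail out */ }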

How to implement 3d texturing in OpenGL 3.3

So I have just realized that the code I was working on for 3d textures was for OpenGL 1.1 or something and is no longer supported in OpenGL 3.3. Is there another way to do this without glTexture3D? Perhaps through a library or another function in OpenGL 3.3 that I do not know about?
EDIT:
I am not sure where I read that 3d texturing was taken out of OpenGL in newer versions (been googling a lot today), but consider this:
I have been following the tutorial/guide here. The program compiles without a hitch. Now read the following quote from the article:
The potential exists that the environment the program is being run on does not support 3D texturing, which would cause us to get a NULL address back, and attempting to use a NULL pointer is A Bad Thing so make sure to check for it and respond appropriately (the provided example exits with an error).
That quote is referring to the following function:
glTexImage3D = (PFNGLTEXIMAGE3DPROC) wglGetProcAddress("glTexImage3D");
When running my program on my computer (which has OpenGL 3.3) that same function returns null for me. When my friend runs it on his computer (which has OpenGL 1.2) it does not return null.
The way one uploads 3D textures has not changed since OpenGL 1.2. The functions for this are still named:
glTexImage3D
glTexSubImage3D
glCopyTexSubImage3D
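So on a 3.3 core context the upload itself is unchanged; the only modern difference is that the function pointer comes from a loader (GLEW, GL Load, etc.) rather than a hand-rolled wglGetProcAddress call. A minimal sketch, assuming GLEW and a current context, with size and data as placeholders:

#include <GL/glew.h>
#include <vector>

// Create and fill a small RGBA8 3D texture; this works as-is on a 3.3 core context.
GLuint createVolumeTexture(int width, int height, int depth)
{
    std::vector<unsigned char> voxels(width * height * depth * 4, 255);  // placeholder data
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_3D, tex);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8, width, height, depth, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, voxels.data());
    return tex;
}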

no-render with OpenGL --> contexts

I have a program that does some GPU computing with optional OpenGL rendering.
The usage flow is as follows:
Init function (initializing GLEW being the most relevant part).
Load mesh from file to the GPU (using glGenBuffers and related functions to make a VBO).
Process this mesh in parallel (GPU computing API).
Save mesh to file.
My problem is that when the mesh is loading I use OpenGL calls, and without a context created I just get a segmentation fault.
Edit: Evolution of the problem:
I was missing GL/glx.h; I thought that GL/glxew.h included it. Thanks to the answers, that got fixed.
I was missing glXMakeCurrent, and therefore there was no current context.
After these fixes, it works :).
Also thanks for the tool suggestions; I would gladly use them, it's just that I needed the low-level code for this particular case.
I tried making a context with this code (I am using GLEW, so I changed the header to GL/glxew.h, but the rest of the code remains the same).
Don't do it. glxew is used for loading GLX functions. You probably don't need it.
If you want to use GLEW, replace GL/gl.h with GL/glew.h and leave GL/glx.h as it is.
X11 and GLX are quite complex; consider using SDL or GLFW instead.
Just wildly guessing here, but could it be that GLEW redefined glXChooseFBConfig with something custom? Something in the call of glXChooseFBConfig dereferences an invalid pointer. So either glXChooseFBConfig itself is invalid, or fbcount is too small, or visual_attribs is not properly terminated.
GLEW has nothing to do with context creation. It is an OpenGL loading library; it loads OpenGL functions. It needs you to have an OpenGL context in order for it to function.
Since you're not really using this context for drawing stuff, I would suggest using an off-the-shelf tool for context creation. GLFW or FreeGLUT would be the most light-weight alternatives. Just use them to create a context, do what you need to do, then destroy the windows they create.
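A minimal sketch of that approach, assuming GLFW 3 and GLEW (the window is created hidden because nothing is ever rendered):

#include <GL/glew.h>
#include <GLFW/glfw3.h>

int main()
{
    if (!glfwInit())
        return -1;
    // Create an invisible window purely to obtain an OpenGL context.
    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);
    GLFWwindow* window = glfwCreateWindow(64, 64, "offscreen", nullptr, nullptr);
    if (!window) { glfwTerminate(); return -1; }
    glfwMakeContextCurrent(window);          // the context must be current...
    glewExperimental = GL_TRUE;
    if (glewInit() != GLEW_OK) { glfwTerminate(); return -1; }   // ...before GLEW loads functions
    // Now glGenBuffers & friends are safe to call: load the mesh into a VBO,
    // run the GPU-compute pass, read the results back, and save them to file.
    glfwDestroyWindow(window);
    glfwTerminate();
    return 0;
}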