glEnableClientState was not declared in OpenGL v4.5

I am using OpenGL version 4.5.0 and getting this error:
error: ‘glEnableClientState’ was not declared in this scope
I have read that glEnableClientState is deprecated in this version, but I need to write code that uses this method, as this is a homework assignment and the class requires us to use it. Is there any way I could get this working in OpenGL 4.5.0?
Including this has had no effect:
glutInitContextVersion (3,3);
glutInitContextProfile (GLUT_COMPATIBILITY_PROFILE);

glutInitContextProfile (GLUT_CORE_PROFILE);
That's the opposite of what you need to do. If you need compatibility OpenGL features, then you have to use GLUT_COMPATIBILITY_PROFILE.
However:
error: ‘glEnableClientState’ was not declared in this scope
That suggests that the OpenGL loading library you're using doesn't even declare this function. Which means you need to move to one that can expose compatibility profile OpenGL functions.
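For example, a minimal sketch with freeglut plus GLEW (assuming both are installed; GLEW declares the legacy entry points as well as the modern ones):
#include <GL/glew.h>     // must come before other GL headers
#include <GL/freeglut.h>

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutInitContextVersion(3, 3);
    glutInitContextProfile(GLUT_COMPATIBILITY_PROFILE); // keep deprecated functions available
    glutCreateWindow("legacy client arrays");
    glewInit(); // resolve entry points for the current context
    // the deprecated client-state API is now declared and callable
    glEnableClientState(GL_VERTEX_ARRAY);
    glutMainLoop();
}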

glEnableVertexAttribArray and glVertexAttribPointer are the "modern" replacements for glEnableClientState/glVertexPointer. The new generic variants have been available since GL 2.0.
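For comparison, a minimal sketch of the two paths (the vertices array is a placeholder for your own data):
// legacy fixed-function client arrays (compatibility profile only)
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertices);

// modern generic attributes (GL 2.0+), using attribute index 0 here
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, vertices);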

Related

Why is linking to opengl32.dll required?

In order to use modern OpenGL functions beyond legacy version 1.1, a loading library is required (unless you manually load the function pointers from the GPU drivers yourself, of course). If opengl32.dll only contains a legacy software OpenGL 1.1 implementation for Windows, why does it still have to be linked by loading libraries like GLEW, or loaded internally by GLAD?
As alluded to in the comments above, opengl32.dll contains the function wglGetProcAddress. This is pretty much the only function you need from it for modern OpenGL: it lets you query a pointer to every post-1.1 GL function, while the OpenGL 1.1 entry points are exported by the DLL directly.
The loader libraries (e.g. GLEW, GLAD, et al.) are basically a big collection of function pointers. Those function pointers need to be initialised at startup, hence the need to call glewInit, which in practice just makes a load of calls to wglGetProcAddress.
Internally, GLEW essentially boils down to something along these lines:
#include <windows.h> //< required for wglGetProcAddress
#include <GL/gl.h>   //< required for GLenum, APIENTRY

// declare a function pointer type (APIENTRY, because GL uses stdcall on Windows)
typedef void (APIENTRY *glActiveTexturePointer)(GLenum texture);
// a global function pointer
glActiveTexturePointer glActiveTexture = 0;

void glewInit() {
    // now repeat this process for every post-1.1 GL function you need...
    // (wglGetProcAddress returns null for the GL 1.1 entry points;
    //  those are exported directly by opengl32.dll)
    glActiveTexture = (glActiveTexturePointer)wglGetProcAddress("glActiveTexture");
}
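Once glewInit has run (and assuming the driver returned a non-null pointer), the global is then called exactly like an ordinary GL function:
glActiveTexture(GL_TEXTURE0); // GL_TEXTURE0 comes from glext.h / your loader's header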

Is there a way to check if a platform supports an OpenGL function?

I want to load some textures using glTexStorageXX(), but also fall back to glTexImageXX() if that feature isn't available to the platform.
Is there a way to check if those functions are available on a platform? I think glew.h might try to load the GL_ARB_texture_storage extensions into the same function pointer if using OpenGL 3.3, but I'm not sure how to check if it succeeded. Is it as simple as checking the function pointer, or is it more complicated?
(Also, I'm making some guesses at how glew.h works that might be wrong, it might not use function pointers and this might not be a run-time check I can make? If so, would I just... need to compile executables for different versions of OpenGL?)
if (glTexStorage2D) {
    // ... calls that assume all glTexStorageXX also exist,
    // ... either as core functions or as ARB extensions
} else {
    // ... calls that fall back to glTexImage2D() and such.
}
You need to check whether the OpenGL extension is supported. The number of extensions supported by the GL implementation can be queried with glGetIntegerv(GL_NUM_EXTENSIONS, ...).
The name of an extension can be queried with glGetStringi(GL_EXTENSIONS, ...).
Read the extension names into a std::set:
#include <set>
#include <string>

GLint no_of_extensions = 0;
glGetIntegerv(GL_NUM_EXTENSIONS, &no_of_extensions);

std::set<std::string> ogl_extensions;
for (int i = 0; i < no_of_extensions; ++i)
    ogl_extensions.insert((const char*)glGetStringi(GL_EXTENSIONS, i));
Check if an extension is supported:
bool texture_storage =
    ogl_extensions.find("GL_ARB_texture_storage") != ogl_extensions.end();
glTexStorage2D has been in core since OpenGL 4.2, so if you've created at least an OpenGL 4.2 context, there's no need to look for the extension at all.
When an extension is supported, all of the features and functions specified in the extension specification are supported. (see GL_ARB_texture_storage)
GLEW makes this a little easier because it provides a Boolean state for each extension.
(see GLEW - Checking for Extensions) e.g.:
if (GLEW_ARB_texture_storage)
{
    // [...]
}
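Putting it together, a minimal sketch of the run-time fallback (using GLEW; tex, width, height and pixels are placeholders):
GLuint tex = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
if (GLEW_VERSION_4_2 || GLEW_ARB_texture_storage) {
    // immutable storage, then upload the pixel data
    glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, width, height);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);
} else {
    // legacy mutable storage
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}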

What values correspond to OpenGL terms (e.g. GL_TRUE or GL_TEXTURE_2D)?

I'm writing a header. I need to know the values of things like those. Copying and pasting this information or linking to a source would be sufficient. Do not simply answer for GL_TRUE or GL_TEXTURE_2D; I am asking for the values corresponding to every OpenGL term in existence.
It's still unclear. Why don't you just use the GL symbolic name in your header? For instance:
enum PrimitiveType {
    Triangles = GL_TRIANGLES,
    TriangleStrip = GL_TRIANGLE_STRIP
};
Anyhow, the OpenGL specification doesn't mandate any specific value for the tokens. It instead refers to the Implementers' Guide and to the official headers released by Khronos, which in turn are generated from the XML registry files (such as gl.xml) in the Khronos OpenGL-Registry.
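For example, the registry entry <enum value="0x0DE1" name="GL_TEXTURE_2D"/> in gl.xml becomes the familiar header line:
#define GL_TEXTURE_2D 0x0DE1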
The only reason I've found so far to hardcode values (instead of putting the symbolic names) is for allowing the code to compile on platforms which don't expose such values. For instance, suppose I'm writing a piece of code that draws primitives, and I'm defining the enum above. I could then continue defining more primitive types, and then I get to:
Patches = GL_PATCHES
The actual usage of such a primitive type would still be guarded by a runtime check on the version, but this particular line won't compile on an OpenGL 3 implementation (as GL_PATCHES is currently used for tessellation, i.e. OpenGL 4). That is:
you can't compile it on a GL3 machine even if you run your application on a GL4 machine;
you can't compile it on a GL3 machine even if you don't actually use patches.
For this exact reason I chose sometimes to hardcode the values and not use the symbolic names.
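For example, a header along these lines compiles even against GL3 headers; 0x000E is the value GL_PATCHES has in the official Khronos headers:
enum PrimitiveType {
    Triangles = GL_TRIANGLES,
    TriangleStrip = GL_TRIANGLE_STRIP,
    Patches = 0x000E // hardcoded GL_PATCHES, so no GL4 header is needed
};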
OK, I'll answer the question.
If you have a Linux box,
% grep '^#define GL_.*0x' /usr/include/GL/gl.h
will list all the #define name/value pairs in the OpenGL header file (and skip the other stuff, like function declarations).
On Mac OS X it's the same command, but with /System/Library/Frameworks/OpenGL.framework/Headers/gl.h.

opengl c++: glBlendEquationOES was not declared in this scope error

I assume I am forgetting to include something, but it is odd because I am using other OpenGL functions. How do I find out what I am not including?
Looks like glBlendEquationOES is only an extension for OpenGL ES 1.1.
In version 2.0 it's a core function, just called glBlendEquation.
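So under OpenGL ES 2.0 (or desktop GL) the call is simply, for example:
// ES 1.1 extension form (requires GL_OES_blend_subtract):
// glBlendEquationOES(GL_FUNC_ADD_OES);
// ES 2.0 / desktop core form:
glBlendEquation(GL_FUNC_ADD);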

"glUniform1fARB"?

What does glUniform1fARB do?
Are there any detailed online reference materials?
What about a Google search for "OpenGL reference"?
Note that the base name for such functions is glUniform, so look for that. The 1f suffix means it takes one float parameter; the ARB suffix means the function came from an ARB extension and was not yet part of core OpenGL. I'm not sure in which exact version the function was promoted, but I assume it was something around 3.0, and thus you'd now just use glUniform1f.
Edit: The spec says the promotion happened in 2.0: "glUniform is available only if the GL version is 2.0 or greater."
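For reference, a minimal usage sketch of the promoted core call (program and the uniform name "u_alpha" are placeholders):
GLint loc = glGetUniformLocation(program, "u_alpha");
glUniform1f(loc, 0.5f);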