'glCreateShader' was not declared in this scope? - c++

Why am I getting these errors?
error: 'GL_VERTEX_SHADER' was not declared in this scope
error: 'glCreateShader' was not declared in this scope
Code:
GLuint vs = glCreateShader(GL_VERTEX_SHADER);
And yes, I do have the includes for GLUT.

What does glGetString(GL_VERSION) return?
glCreateShader is not part of GLUT; it's an OpenGL 2.0 function. If your GLUT includes are not pulling in gl.h for some reason, or your GL version is less than 2.0, the headers will not declare it.
I'd also check your gl.h to see whether glCreateShader is actually declared there.
Edit: This OpenGL header version issue seems to be a general problem on Windows, where the stock gl.h only covers OpenGL 1.1. Most people suggest using GLEW or another extension loader library to get around it.
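As a quick check, here is a minimal sketch (assuming a context has already been created, e.g. by glutCreateWindow):
#include <GL/glut.h>
#include <iostream>

// Call this after the context exists (e.g. after glutCreateWindow);
// glGetString returns NULL when no context is current.
void printGLVersion()
{
    const char *ver = reinterpret_cast<const char *>(glGetString(GL_VERSION));
    std::cout << "GL_VERSION: " << (ver ? ver : "(no current context)") << std::endl;
}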

You need to either use an OpenGL loading library to load OpenGL functions, or manually load the functions yourself. You can't just use gl.h and expect to get everything.
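For illustration, a minimal sketch of the manual route on Windows (assuming a GL context is already current; PFNGLCREATESHADERPROC and GL_VERTEX_SHADER come from glext.h when your gl.h is too old to declare them):
#include <windows.h>
#include <GL/gl.h>
#include <GL/glext.h> // provides PFNGLCREATESHADERPROC and GL_VERTEX_SHADER

GLuint createVertexShader()
{
    // Resolve the entry point at runtime; requires a current GL context.
    PFNGLCREATESHADERPROC pglCreateShader =
        (PFNGLCREATESHADERPROC)wglGetProcAddress("glCreateShader");
    return pglCreateShader ? pglCreateShader(GL_VERTEX_SHADER) : 0;
}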

Make sure your OpenGL context is created and your loader is initialized before you try to call anything in the GL namespace. E.g. with GLAD:
// after initializing the window and making its context current
if (!gladLoadGLLoader((GLADloadproc)glfwGetProcAddress))
{
    std::cout << "Failed to initialize GLAD" << std::endl;
    return -1;
}

Related

ImGui is not being displayed

I'm trying to make an OpenGL program on Windows. Since the main exe file was getting bigger and bigger, I decided to split it into DLLs, and this is how my problem started.
For the ImGui functions, I created a class. Here is the render() function of my class:
cout << "render" << endl;
imgui_newFrame();
{
ImGui::SetNextWindowSize(ImVec2(30, 30), ImGuiSetCond_FirstUseEver);
ImGui::Begin("Test", &show_another_window);
ImGui::Button("Test Window");
ImGui::End();
}
glClearColor(clear_color.x, clear_color.y, clear_color.z, clear_color.w);
ImGui::Render();
Before calling the render() function, I initialize ImGui in another function of my class with this:
if (!imgui_init(glfw_window)) {
    return false;
}
And here is my main glfw loop:
while (!glfwWindowShouldClose(window)) {
    glClear(GL_COLOR_BUFFER_BIT);
    glfwPollEvents();
    MyMyGuiClass.render(); // here I ask my class to render ImGui
    glfwSwapBuffers(window);
}
With this code, ImGui clears the window color (the glClearColor call works and my console prints "render"), but nothing else shows up.
By the way, here is the code that works perfectly when I run it directly in the main loop:
while (!glfwWindowShouldClose(window)) {
    glClear(GL_COLOR_BUFFER_BIT);
    glfwPollEvents();
    imgui_newFrame();
    {
        ImGui::SetNextWindowSize(ImVec2(30, 30), ImGuiSetCond_FirstUseEver);
        ImGui::Begin("", &show_another_window);
        ImGui::Button("Test Window");
        ImGui::End();
    }
    glClearColor(clear_color.x, clear_color.y, clear_color.z, clear_color.w);
    ImGui::Render();
    glfwSwapBuffers(window);
}
I'm using VS2017 and the compiler shows no warnings or errors when I compile this. I also tried making my class's functions static, with no result.
So, simply put: is ImGui unable to render when called from inside a class?
The problem you're running into is that ImGui maintains a global state and that this state has to be kept somewhere. ImGui keeps it around in a module-local global symbol.
Note the "module-local" here! It means that every DLL (and the main EXE) gets its very own copy of that state. So doing things with ImGui in DLL "A" (or EXE "1" for that matter) will operate on its very own instance of ImGui state.
There is a solution to this: make that pesky ImGui global state shared across the DLLs. How to share data between DLLs is described on MSDN here: https://msdn.microsoft.com/en-us/library/h90dkhs0(v=vs.90).aspx. As for the details on the ImGui side, it mostly boils down to the ImGuiContext that's being used. For now this is a module-local global variable, but the ImGui devs plan on making it explicit per call and user-managed eventually.
Comment from the ImGui code:
// Default context storage + current context pointer. Implicitly used by all
// ImGui functions. Always assumed to be != NULL. Change to a different
// context by calling ImGui::SetCurrentContext().
// ImGui is currently not thread-safe because of this variable. If you want
// thread-safety to allow N threads to access N different contexts, you might
// work around it by:
// - Having multiple instances of the ImGui code compiled inside different
//   namespaces (easiest/safest, if you have a finite number of contexts)
// - or: Changing this variable to be TLS.
// You may #define GImGui in imconfig.h for further custom hackery.
// Future development aims to make this context pointer explicit to all
// calls. Also read https://github.com/ocornut/imgui/issues/586
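Until then, one way to work around it is to have the EXE create the context and pass the pointer into each DLL, which then installs it with ImGui::SetCurrentContext(). A minimal sketch (the exported function name is made up for this example):
#include "imgui.h"

// In the DLL (my_dll_set_imgui_context is a hypothetical name):
extern "C" __declspec(dllexport)
void my_dll_set_imgui_context(ImGuiContext *ctx)
{
    // Point this module's copy of the ImGui global state at the EXE's context.
    ImGui::SetCurrentContext(ctx);
}

// In the EXE, after initializing ImGui:
// my_dll_set_imgui_context(ImGui::GetCurrentContext());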

Code::Blocks does not find the SDL_FreeTexture command in the SDL library

I am trying to make a simple "PickStick" game to learn SDL2 with C++. Everything works fine: I can create a window, load a texture, create a renderer, etc. But when I try to use the command SDL_FreeTexture("the texture name here");, the compiler/Code::Blocks simply does not find it in the library and outputs:
Error : 'SDL_FreeTexture' was not declared in this scope
There is no SDL_FreeTexture in SDL2; the function that releases a texture is SDL_DestroyTexture. Use this instead:
SDL_DestroyTexture(texture); // takes the SDL_Texture* you created, not a string
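A minimal sketch of the create/destroy pairing, which also shows where the confusion likely comes from (renderer and surface stand in for objects you created earlier):
#include <SDL2/SDL.h>

// renderer and surface are placeholders for objects created earlier.
SDL_Texture *texture = SDL_CreateTextureFromSurface(renderer, surface);
SDL_FreeSurface(surface);    // surfaces are released with SDL_FreeSurface...
SDL_RenderCopy(renderer, texture, NULL, NULL);
SDL_DestroyTexture(texture); // ...but textures use SDL_DestroyTexture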

Call glewInit once for each rendering context? or exactly once for the whole app?

I have a question about how to (correctly) use glewInit().
Assume I have a multiple-window application; should I call glewInit() exactly once at application (i.e., global) level, or call glewInit() for each window (i.e., each OpenGL rendering context)?
Depending on the GLEW build being used, the watertight method is to call glewInit after each and every context change!
With X11/GLX, function pointers are invariant.
But on Windows, OpenGL function pointers are specific to each context. Some builds of GLEW are multi-context aware, while others are not. So to cover that case, technically you have to call it every time the context changes.
(EDIT: due to request for clarification)
for each window (i.e., each OpenGL rendering context)?
First things first: OpenGL contexts are not tied to windows. It is perfectly fine to have a single window but multiple rendering contexts. In Microsoft Windows what matters to OpenGL is the device context (DC) associated with a window. But it also works the other way round: You can have a single OpenGL context, but multiple windows using it (as long as the window's pixelformat is compatible with the OpenGL context).
So this is legitimate:
HWND wnd = create_a_window();
HDC dc = GetDC(wnd);
PIXELFORMATDESCRIPTOR pf = select_pixelformat();
SetPixelFormat(dc, pf);
HGLRC rc0 = create_opengl_context(dc);
HGLRC rc1 = create_opengl_context(dc);

wglMakeCurrent(dc, rc0);
draw_stuff(); // uses rc0
wglMakeCurrent(dc, rc1);
draw_stuff(); // uses rc1
And so is this:
HWND wnd0 = create_a_window();
HDC dc0 = GetDC(wnd0);
HWND wnd1 = create_a_window();
HDC dc1 = GetDC(wnd1);
PIXELFORMATDESCRIPTOR pf = select_pixelformat();
SetPixelFormat(dc0, pf);
SetPixelFormat(dc1, pf);
HGLRC rc = create_opengl_context(dc0); // works also with dc1

wglMakeCurrent(dc0, rc);
draw_stuff();
wglMakeCurrent(dc1, rc);
draw_stuff();
Here's where extensions enter the picture. A function like glActiveTexture is not part of the OpenGL specification that has been pinned down into the Windows Application Binary Interface (ABI). Hence you have to get a function pointer to it at runtime. That's what GLEW does. Internally it looks like this:
First it defines types for the function pointers, declares them as extern variables and uses a little bit of preprocessor magic to avoid namespace collisions.
typedef void (*PFNGLACTIVETEXTURE)(GLenum);
extern PFNGLACTIVETEXTURE glew_ActiveTexture;
#define glActiveTexture glew_ActiveTexture
In glewInit, the function pointer variables are set to the values obtained from wglGetProcAddress (for the sake of readability I omit the type casts).
int glewInit(void)
{
    /* ... */
    if (openglsupport >= gl1_2) {
        /* ... */
        glew_ActiveTexture = wglGetProcAddress("glActiveTexture");
        /* ... */
    }
    /* ... */
}
Now the important part: wglGetProcAddress works with the OpenGL rendering context that is current at the time of the call, i.e. whichever context the most recent wglMakeCurrent made current. As already explained, extension function pointers are tied to their OpenGL context, and different OpenGL contexts may give different function pointers for the same function.
So if you do this
wglMakeCurrent(…, rc0);
glewInit();
wglMakeCurrent(…, rc1);
glActiveTexture(…);
it may fail. So in general, with GLEW, every call to wglMakeCurrent must immediately be followed by a glewInit. Some builds of GLEW are multi-context aware and do this internally; others are not. However, it is perfectly safe to call glewInit multiple times, so the safe way is to call it again, just to be sure.
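In other words, the watertight pattern looks like this (a sketch reusing dc, rc0 and rc1 from above):
wglMakeCurrent(dc, rc1);
glewInit();                   // re-resolve pointers for the now-current context
glActiveTexture(GL_TEXTURE0); // safe: this pointer belongs to rc1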
It should not be necessary to get multiple function pointers, one per context, according to this 2016 discussion: https://github.com/nigels-com/glew/issues/38
nigels-com answers this question from kest-relm…
 do you think it is correct to call glewInit() for every context change?
 Is the above the valid way to go for handling multiple opengl contexts?
…with…
I don't think calling glewInit for each context change is desirable, or even necessary, depending on the circumstances.
Obviously this scheme would not be appropriate for multi-threading, anyway.
Kest-relm then says…
From my testing it seems like calling glewInit() repeatedly is not required; the code runs just fine with multiple contexts
It is documented here:
https://www.opengl.org/wiki/Load_OpenGL_Functions
where it states:
"In practice, if two contexts come from the same vendor and refer to the same GPU, then the function pointers pulled from one context will work in the other."
I assume this should be true for most mainstream Windows GL drivers?

GLX/GLEW order of initialization catch-22: GLXEW_ARB_create_context, glXCreateContextAttribsARB, glXCreateContext

Currently I'm working on an application that uses GLEW and GLX (on X11).
The logic works as follows...
glewInit(); /* <- needed so 'GLXEW_ARB_create_context' is set! */

if (GLXEW_ARB_create_context) {
    /* opengl >= 3.0 */
    .. get fb_config ..
    context = glXCreateContextAttribsARB(...);
}
else {
    /* legacy context */
    context = glXCreateContext(...);
}
The problem I'm running into is that GLXEW_ARB_create_context is initialized by GLEW, but initializing GLEW calls glGetString, which crashes if it's called before a context exists (i.e. before glXCreateContextAttribsARB / glXCreateContext).
Note that this only happens with Mesa's software rasterizer (libGL.so compiled with swrast). So it's possibly a problem with Mesa too.
Correction: this works on Mesa-SWRast and NVIDIA's proprietary OpenGL drivers, but segfaults with Intel's OpenGL.
Though it's possible this is a bug in the Intel drivers. Need to check how other projects handle this.
The cause, in the case of Intel, is that glXGetCurrentDisplay() returns NULL before GLX is initialized (another catch-22).
So for now, as far as I can tell, it's best to avoid GLEW before the GLX context is created, and instead use GLX directly, e.g.:
if (glXQueryExtension(display, NULL, NULL)) {
    const char *glx_ext = glXGetClientString(display, GLX_EXTENSIONS);
    if (... check_string_for_extension(glx_ext, "GLX_SOME_EXTENSION")) {
        printf("We have the extension!\n");
    }
}
Old answer...
Found the solution (seems obvious in retrospect!):
1. First call glxewInit().
2. Check GLXEW_ARB_create_context.
3. Create the context with glXCreateContextAttribsARB or glXCreateContext.
4. Call glewInit().
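Putting that order into a sketch (display, fb_config, visual_info and window are assumed to exist already; error handling omitted, and glxewInit availability depends on the GLEW build):
glxewInit(); /* resolves only the GLX entry points; no GL context needed yet */

GLXContext ctx;
if (GLXEW_ARB_create_context) {
    int attribs[] = { GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
                      GLX_CONTEXT_MINOR_VERSION_ARB, 0,
                      None };
    ctx = glXCreateContextAttribsARB(display, fb_config, NULL, True, attribs);
} else {
    ctx = glXCreateContext(display, visual_info, NULL, True);
}

glXMakeCurrent(display, window, ctx);
glewInit(); /* now safe: glGetString has a current context to work with */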

wglGetProcAddress returns NULL

I was trying to use WGL_ARB_pbuffer for offscreen rendering with OpenGL, but it failed during initialization.
Here is my code:
wglGetExtensionsStringARB = (PFNWGLGETEXTENSIONSSTRINGARBPROC) wglGetProcAddress("wglGetExtensionsStringARB");
if(!wglGetExtensionsStringARB) return;
const GLubyte* extensions = (const GLubyte*) wglGetExtensionsStringARB(wglGetCurrentDC());
So execution actually ends at the second line, because wglGetExtensionsStringARB got NULL.
I have no idea why wglGetProcAddress doesn't work.
I included "wglext.h", and I also defined the following in the header:
PFNWGLGETEXTENSIONSSTRINGARBPROC pwglGetExtensionsStringARB = 0;
#define wglGetExtensionsStringARB pwglGetExtensionsStringARB
Why can't I use wglGetProcAddress as I intended?
wglGetProcAddress requires an OpenGL rendering context; you need to call your wglCreateContext and wglMakeCurrent prior to calling wglGetProcAddress. If you have not already setup an OpenGL context, wglGetProcAddress will always return NULL. If you're not sure if you have an OpenGL context yet (for example, if you're using a 3rd party framework/library), call wglGetCurrentContext and check to make sure it's not returning NULL.
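A minimal sketch of the required ordering (hwnd stands in for your existing window, and its pixel format is assumed to be set already):
HDC dc = GetDC(hwnd);            // hwnd: your window (placeholder name)
HGLRC rc = wglCreateContext(dc); // pixel format must already be set on dc
wglMakeCurrent(dc, rc);

// Only now will the extension lookup succeed:
wglGetExtensionsStringARB = (PFNWGLGETEXTENSIONSSTRINGARBPROC)
    wglGetProcAddress("wglGetExtensionsStringARB");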