wglGetProcAddress returns NULL - c++

I was trying to use WGL_ARB_pbuffer for offscreen rendering with OpenGL, but it failed during initialization.
Here is my code:
wglGetExtensionsStringARB = (PFNWGLGETEXTENSIONSSTRINGARBPROC) wglGetProcAddress("wglGetExtensionsStringARB");
if(!wglGetExtensionsStringARB) return;
const GLubyte* extensions = (const GLubyte*) wglGetExtensionsStringARB(wglGetCurrentDC());
So this actually returns at the second line, because wglGetExtensionsStringARB is NULL.
I have no idea why wglGetProcAddress doesn't work.
I included "wglext.h" and also defined the following in the header:
PFNWGLGETEXTENSIONSSTRINGARBPROC pwglGetExtensionsStringARB = 0;
#define wglGetExtensionsStringARB pwglGetExtensionsStringARB
Why can't I use wglGetProcAddress as I intended?

wglGetProcAddress requires an OpenGL rendering context; you need to call wglCreateContext and wglMakeCurrent before calling wglGetProcAddress. If you have not already set up an OpenGL context, wglGetProcAddress will always return NULL. If you're not sure whether you have an OpenGL context yet (for example, if you're using a 3rd-party framework/library), call wglGetCurrentContext and check that it's not returning NULL.

Related

SDL - Difference between SDL_GetRenderer and SDL_CreateRenderer

Both SDL_GetRenderer(SDL_Window*) and SDL_CreateRenderer(SDL_Window*, int, Uint32) seem to do the same thing: return a pointer to an SDL_Renderer for the window. However, which method is more appropriate for the task? The SDL wiki does not provide much information on where each method should be used, so please explain what each method does, how they differ, and where each should be used.
SDL_CreateRenderer allows you to create a renderer for a window by specifying some options. It's stored in the window-specific data, which you can query with SDL_GetRenderer (so the latter is equivalent to (SDL_Renderer *)SDL_GetWindowData(window, SDL_WINDOWRENDERDATA)).
If you call SDL_GetRenderer without having created it beforehand, you'll get a NULL pointer.
If you call SDL_CreateRenderer on a window twice, the second call will fail with SDL_SetError("Renderer already associated with window"); (see line 805).

QOpenGLWidget's makeCurrent does not work?

I am trying to use QOpenGLWidget without subclassing.
When I try to make OpenGL calls outside of QOpenGLWidget's methods or signals, nothing seems to happen. For example, the following code clears the window to black despite my setting glClearColor to white:
MainWindow::MainWindow(QWidget *parent)
    : QMainWindow(parent)
{
    auto glw = new QOpenGLWidget( this );
    glw->makeCurrent();
    glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
    glw->doneCurrent();
    connect( glw, &QOpenGLWidget::aboutToCompose, [=] {
        glClear( GL_COLOR_BUFFER_BIT );
    });
    setCentralWidget( glw );
}
However, when I move glClearColor inside the lambda connected to the aboutToCompose signal, the widget is cleared with the white color.
As essentially explained in the comments section by Fabio and G.M., QOpenGLWidget::makeCurrent won't work if called before things are setup enough.
As of Qt 5.11 and probably other releases, QOpenGLWidget::makeCurrent works by calling the QOpenGLContext::makeCurrent method. That call only happens if the QOpenGLWidget is already in the initialized state, however. Additionally, QOpenGLContext::makeCurrent can fail. The latter at least gives some indication of failure via its bool return value. Unfortunately, QOpenGLWidget::makeCurrent gives no indication at all; it fails silently.
Besides knowing this & heeding the advice in the comments, one can use the QOpenGLWidget::context method to determine whether the QOpenGLWidget is in the initialized state. According to the linked documentation (and as seen in practice), the context method returns "0 if not yet initialized" or a non-null pointer otherwise. So it's a means to determine whether or not QOpenGLWidget::makeCurrent calls QOpenGLContext::makeCurrent and a means to partially work around QOpenGLWidget::makeCurrent returning void. That's probably not particularly useful in this case, but can be useful in other related contexts so I thought this worth mentioning.
So for QOpenGLWidget::makeCurrent to actually succeed, it has to be called after the QOpenGLWidget has been initialized.
Reading between the lines of this question, it sounds as if the author is wondering what needs to be done for GL calls to work. And as the question's author recognizes, delaying the GL calls until the aboutToCompose signal has been fired works for that (at least in the context of this user's code). Another way is to make the QOpenGLWidget visible first, then call the GL code.
Hope this answers your question completely, or at least helpfully.
You can call auto *ctx = QOpenGLContext::currentContext(); to check if makeCurrent succeeded.

Call glewInit once for each rendering context? or exactly once for the whole app?

I have a question about how to (correctly) use glewInit().
Assume I have a multiple-window application; should I call glewInit() exactly once at the application (i.e., global) level, or call glewInit() for each window (i.e., each OpenGL rendering context)?
Depending on the GLEW build being used, the watertight method is to call glewInit after each and every context change!
With X11/GLX, function pointers are invariant.
But on Windows, OpenGL function pointers are specific to each context. Some builds of GLEW are multi-context aware, while others are not. So to cover that case, technically you have to call it every time the context changes.
(EDIT: due to request for clarification)
for each window (i.e., each OpenGL rendering context)?
First things first: OpenGL contexts are not tied to windows. It is perfectly fine to have a single window but multiple rendering contexts. In Microsoft Windows what matters to OpenGL is the device context (DC) associated with a window. But it also works the other way round: You can have a single OpenGL context, but multiple windows using it (as long as the window's pixelformat is compatible with the OpenGL context).
So this is legitimate:
HWND wnd = create_a_window();
HDC dc = GetDC(wnd);
PIXELFORMATDESCRIPTOR pf = select_pixelformat();
SetPixelFormat(dc, pf);
HGLRC rc0 = create_opengl_context(dc);
HGLRC rc1 = create_opengl_context(dc);
wglMakeCurrent(dc, rc0);
draw_stuff(); // uses rc0
wglMakeCurrent(dc, rc1);
draw_stuff(); // uses rc1
And so is this
HWND wnd0 = create_a_window();
HDC dc0 = GetDC(wnd0);
HWND wnd1 = create_a_window();
HDC dc1 = GetDC(wnd1);
PIXELFORMATDESCRIPTOR pf = select_pixelformat();
SetPixelFormat(dc0, pf);
SetPixelFormat(dc1, pf);
HGLRC rc = create_opengl_context(dc0); // works also with dc1
wglMakeCurrent(dc0, rc);
draw_stuff();
wglMakeCurrent(dc1, rc);
draw_stuff();
Here's where extensions enter the picture. A function like glActiveTexture is not part of the OpenGL specification that has been pinned down into the Windows Application Binary Interface (ABI). Hence you have to get a function pointer to it at runtime. That's what GLEW does. Internally it looks like this:
First it defines types for the function pointers, declares them as extern variables and uses a little bit of preprocessor magic to avoid namespace collisions.
typedef void (*PFNGLACTIVETEXTURE)(GLenum);
extern PFNGLACTIVETEXTURE glew_ActiveTexture;
#define glActiveTexture glew_ActiveTexture
In glewInit the function pointer variables are set to the values obtained using wglGetProcAddress (for the sake of readability I omit the type castings).
int glewInit(void)
{
/* ... */
if( openglsupport >= gl1_2 ) {
/* ... */
glew_ActiveTexture = wglGetProcAddress("glActiveTexture");
/* ... */
}
/* ... */
}
Now the important part: wglGetProcAddress works with the OpenGL rendering context that is current at the time of the call, i.e. whichever context the very last wglMakeCurrent call before it made current. As already explained, extension function pointers are tied to their OpenGL context, and different OpenGL contexts may give different function pointers for the same function.
So if you do this
wglMakeCurrent(…, rc0);
glewInit();
wglMakeCurrent(…, rc1);
glActiveTexture(…);
it may fail. So in general, with GLEW, every call to wglMakeCurrent must immediately be followed by a glewInit. Some builds of GLEW are multi-context aware and do this internally; others are not. However, it is perfectly safe to call glewInit multiple times, so the safe way is to call it every time, just to be sure.
It should not be necessary to get multiple function pointers, one per context, according to this GLEW issue from 2016: https://github.com/nigels-com/glew/issues/38
nigels-com answers this question from kest-relm…
 do you think it is correct to call glewInit() for every context change?
 Is the above the valid way to go for handling multiple opengl contexts?
…with…
I don't think calling glewInit for each context change is desirable, or even necessary, depending on the circumstances.
Obviously this scheme would not be appropriate for multi-threading, anyway.
Kest-relm then says…
From my testing it seems like calling glewInit() repeatedly is not required; the code runs just fine with multiple contexts
It is documented here:
https://www.opengl.org/wiki/Load_OpenGL_Functions
where it states:
"In practice, if two contexts come from the same vendor and refer to the same GPU, then the function pointers pulled from one context will work in the other."
I assume this should be true for most mainstream Windows GL drivers?

GLX/GLEW order of initialization catch-22: GLXEW_ARB_create_context, glXCreateContextAttribsARB, glXCreateContext

Currently I'm working on an application that uses GLEW and GLX (on X11).
The logic works as follows...
glewInit(); /* <- needed so 'GLXEW_ARB_create_context' is set! */
if (GLXEW_ARB_create_context) {
/* opengl >= 3.0*/
.. get fb_config ..
context = glXCreateContextAttribsARB(...);
}
else {
/* legacy context */
context = glXCreateContext(...);
}
The problem I'm running into is that GLXEW_ARB_create_context is initialized by GLEW, but initializing GLEW calls glGetString, which crashes if it's called before a context is created with glXCreateContextAttribsARB / glXCreateContext.
Note that this only happens with Mesa's software rasterizer (libGL.so compiled with swrast), so it's possibly a problem with Mesa too.
Correction: this works on Mesa-SWRast and NVidia's proprietary OpenGL drivers, but segfaults with Intel's OpenGL.
Though it's possible this is a bug in the Intel drivers. Need to check how other projects handle this.
The cause in the Intel case is that glXGetCurrentDisplay() returns NULL before GLX is initialized (another catch-22).
So for now, as far as I can tell, it's best to avoid GLEW before the GLX context is created, and instead use GLX directly, e.g.:
if (glXQueryExtension(display, NULL, NULL)) {
const char *glx_ext = glXGetClientString(display, GLX_EXTENSIONS);
if (... check_string_for_extension(glx_ext, "GLX_SOME_EXTENSION")) {
printf("We have the extension!\n");
}
}
Old answer...
Found the solution (seems obvious in retrospect!)
First call glxewInit()
check GLXEW_ARB_create_context
create the context with glXCreateContextAttribsARB or glXCreateContext.
call glewInit()

SDL2 - Check if OpenGL context is created

I am creating an application using SDL2 & OpenGL, and it worked fine on 3 different computers. But on another computer (an up-to-date Arch Linux), it doesn't, and it crashes with this error:
OpenGL context already created
So my question is: How do I check if the OpenGL context has already been created? And then, if it is already created, how do I get a handle for it?
If I can't do this, how do I bypass this issue?
SDL2 does not in fact create an OpenGL context without you asking to make one. However, if you ask it to create an OpenGL context when OpenGL doesn't work at all, SDL2 likes to, erm, freestyle a bit. (The actual reason is that it does a bad job of error checking, so if X fails to create an OpenGL context, it assumes that it's because a context was already created.)
So, to answer the third question ("how do I bypass this issue"), you have to fix OpenGL before attempting to use it. Figures, right?
To answer the first and second, well, no API call that I know of... but you can do it a slightly different way:
SDL_Window* window = NULL;
SDL_GLContext context = NULL; // NOTE: SDL_GLContext is itself a pointer type (void*), so no extra * here
...
int main(int argc, char** argv) {
// Stuff here, initialize 'window'
context = SDL_GL_CreateContext(window);
// More stuff here
if (context) {
// context is initialized!! yay!
}
return 2; // Just to confuse people a bit =P
}