SDL2 - Check if OpenGL context is created - c++

I am creating an application using SDL2 & OpenGL, and it worked fine on 3 different computers. But on another computer (an up-to-date Arch Linux), it doesn't, and it crashes with this error:
OpenGL context already created
So my question is: How do I check if the OpenGL context has already been created? And then, if it is already created, how do I get a handle for it?
If I can't do this, how do I bypass this issue?

SDL2 does not, in fact, create an OpenGL context without you asking it to make one. However, if you ask it to create an OpenGL context when OpenGL doesn't work at all, SDL2 likes to, erm, freestyle a bit. (The actual reason is that it does a bad job of error checking: if X fails to create an OpenGL context, it assumes it's because a context was already created.)
So, to answer the third question ("how do I bypass this issue"), you have to fix OpenGL before attempting to use it. Figures, right?
To answer the first and second: there is no API call for it that I know of, but you can track it yourself in a slightly different way:
SDL_Window* window = NULL;
SDL_GLContext context = NULL; // NOTE: SDL_GLContext is itself a pointer type
...
int main(int argc, char** argv) {
    // Stuff here, initialize 'window'
    context = SDL_GL_CreateContext(window);
    // More stuff here
    if (context) {
        // context is initialized!! yay!
    }
    return 2; // Just to confuse people a bit =P
}
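Since the underlying failure is GL context creation, it also helps to surface SDL's error string at the point of failure instead of the misleading "already created" message. A minimal sketch; create_context_or_report is a hypothetical helper, not part of SDL:
#include <SDL2/SDL.h>
#include <cstdio>

// Hypothetical helper: tries to create a context and prints SDL's error
// string on failure, which usually points at the real GL problem.
SDL_GLContext create_context_or_report(SDL_Window* window) {
    SDL_GLContext ctx = SDL_GL_CreateContext(window);
    if (!ctx) {
        std::fprintf(stderr, "SDL_GL_CreateContext failed: %s\n", SDL_GetError());
    }
    return ctx;
}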

Related

What is the correct way to get OpenCL to play nice with OpenGL in Qt5?

I have found several unofficial sources for how to get OpenCL to play nice with OpenGL and Qt5, each with different levels of complexity:
https://github.com/smistad/Qt-OpenGL-OpenCL-Interoperability
https://github.com/petoknm/QtOpenCLGLInterop
http://www.krazer.com/?p=109
Having these examples is nice, however they don't answer the following question: What exact steps are the minimum required to have a Qt5 widgets program display the result of a calculation made in an OpenCL kernel and then transferred directly to an attached OpenGL context initiated by Qt5?
Sub-questions include:
What is the correct way to expose the OpenGL context in Qt5 to OpenCL?
How do I initiate my Qt5 app in the first place to make sure that the OpenGL context is correctly set up for use with OpenCL?
How should OpenCL be initiated to be compatible with the OpenGL context in Qt5?
What quirks must I look out for to have this working across the platforms that Qt5 supports?
Is there such a thing as an "official" way to do this, or is Digia working on one?
Note, I am primarily interested in using OpenGL as a widget as opposed to a window/full-screen.
I have used Qt5 and OpenCL together on Mac, Linux and Windows with the following strategy:
Create a QGLWidget and a GL context (this example creates two GL contexts: one for Qt/visualization in the QGLWidget and one for OpenCL called mainGLContext, which is useful when doing multi-threading. These two contexts will be able to share data.)
QGLWidget* widget = new QGLWidget;
QGLContext* mainGLContext = new QGLContext(QGLFormat::defaultFormat(), widget);
mainGLContext->create();
Create an OpenCL context using the OpenGL context. This is platform specific: on Linux you use GLX, on Windows WGL, and on Mac CGL share groups. Below is the function I use to create the context properties for interoperability. The display variable is used on Linux and Windows; you can get it with glXGetCurrentDisplay() and wglGetCurrentDC() respectively.
cl_context_properties* createInteropContextProperties(
        const cl::Platform &platform,
        cl_context_properties OpenGLContext,
        cl_context_properties display) {
#if defined(__APPLE__) || defined(__MACOSX)
    // Mac: share via the CGL share group
    CGLSetCurrentContext((CGLContextObj)OpenGLContext);
    CGLShareGroupObj shareGroup = CGLGetShareGroup((CGLContextObj)OpenGLContext);
    if(shareGroup == NULL)
        throw Exception("Not able to get sharegroup");
    cl_context_properties * cps = new cl_context_properties[3];
    cps[0] = CL_CONTEXT_PROPERTY_USE_CGL_SHAREGROUP_APPLE;
    cps[1] = (cl_context_properties)shareGroup;
    cps[2] = 0;
#else
#ifdef _WIN32
    // Windows: share via WGL
    cl_context_properties * cps = new cl_context_properties[7];
    cps[0] = CL_GL_CONTEXT_KHR;
    cps[1] = OpenGLContext;
    cps[2] = CL_WGL_HDC_KHR;
    cps[3] = display;
    cps[4] = CL_CONTEXT_PLATFORM;
    cps[5] = (cl_context_properties) (platform)();
    cps[6] = 0;
#else
    // Linux: share via GLX
    cl_context_properties * cps = new cl_context_properties[7];
    cps[0] = CL_GL_CONTEXT_KHR;
    cps[1] = OpenGLContext;
    cps[2] = CL_GLX_DISPLAY_KHR;
    cps[3] = display;
    cps[4] = CL_CONTEXT_PLATFORM;
    cps[5] = (cl_context_properties) (platform)();
    cps[6] = 0;
#endif
#endif
    return cps;
}
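For completeness, a minimal usage sketch on Linux (an assumption-laden example: the Qt GL context from above must be current on the calling thread, and the first platform/GPU device is assumed to be the interop-capable one):
// Assumes mainGLContext->makeCurrent() has been called on this thread.
// Needs GL/glx.h for glXGetCurrentContext()/glXGetCurrentDisplay().
std::vector<cl::Platform> platforms;
cl::Platform::get(&platforms);
cl_context_properties* cps = createInteropContextProperties(
        platforms[0],
        (cl_context_properties)glXGetCurrentContext(),
        (cl_context_properties)glXGetCurrentDisplay());
std::vector<cl::Device> devices;
platforms[0].getDevices(CL_DEVICE_TYPE_GPU, &devices);
cl::Context clContext(devices, cps);
delete[] cps;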
Often you want to do multi-threading, having one thread do the Qt event handling while doing some OpenCL processing in another thread. Remember to make the GL context "current" in each thread. Use the makeCurrent and moveToThread functions on the QGLContext object for this. You can find details on how I have done this here: https://github.com/smistad/FAST/blob/master/source/FAST/Visualization/Window.cpp
I don't know of a Qt OpenCL wrapper to create the OpenCL context.
After working on this some more, I felt compelled to add some more info. Erik Smistad's answer is correct and will remain accepted; however, it is only one of several ways to do this.
Based on this article, there are at least three ways to do OpenGL/OpenCL interop:
Sharing an OpenGL texture directly with OpenCL (see the sketch after this list). PROS: Fastest route since everything is zero-copy. CONS: Severely limits the supported data formats.
Sharing an OpenGL PBO with OpenCL and copying from it into a texture. PROS: Second quickest. CONS: Forces a memory copy.
Mapping the OpenCL output buffer to host memory and uploading the texture from there. PROS: Fastest when running OpenCL on the CPU; most flexible with respect to available data formats. CONS: Slowest when running OpenCL on the GPU; forces data to be copied to host memory and back.
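As an illustration of the first option, here is a minimal sketch using the OpenCL C API (OpenCL 1.2; older versions use clCreateFromGLTexture2D). It is only a sketch under assumptions: clContext and queue are an interop-enabled cl::Context and cl::CommandQueue created as above, and tex is an existing, fully allocated GL_TEXTURE_2D.
// Needs CL/cl_gl.h for the GL-sharing entry points.
cl_int err;
cl_mem clTex = clCreateFromGLTexture(clContext(), CL_MEM_WRITE_ONLY,
                                     GL_TEXTURE_2D, 0, tex, &err);
glFinish();                                       // GL must be done with the texture
clEnqueueAcquireGLObjects(queue(), 1, &clTex, 0, NULL, NULL);
// ... enqueue a kernel that writes into clTex ...
clEnqueueReleaseGLObjects(queue(), 1, &clTex, 0, NULL, NULL);
clFinish(queue());                                // CL must be done before GL samples it
clReleaseMemObject(clTex);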

Call glewInit once for each rendering context? or exactly once for the whole app?

I have a question about how to (correctly) use glewInit().
Assume I have an multiple-window application, should I call glewInit() exactly once at application (i.e., global) level? or call glewInit() for each window (i.e., each OpenGL rendering context)?
Depending on the GLEW build being used, the watertight method is to call glewInit after each and every context change!
With X11/GLX, function pointers are invariant.
But on Windows, OpenGL function pointers are specific to each context. Some builds of GLEW are multi-context aware, while others are not. So to cover that case, technically you have to call it every time the context changes.
(EDIT: due to request for clarification)
for each window (i.e., each OpenGL rendering context)?
First things first: OpenGL contexts are not tied to windows. It is perfectly fine to have a single window but multiple rendering contexts. In Microsoft Windows what matters to OpenGL is the device context (DC) associated with a window. But it also works the other way round: You can have a single OpenGL context, but multiple windows using it (as long as the window's pixelformat is compatible with the OpenGL context).
So this is legitimate:
HWND wnd = create_a_window();
HDC dc = GetDC(wnd);
PIXELFORMATDESCRIPTOR pf = select_pixelformat();
SetPixelFormat(dc, pf);
HGLRC rc0 = create_opengl_context(dc);
HGLRC rc1 = create_opengl_context(dc);
wglMakeCurrent(dc, rc0);
draw_stuff(); // uses rc0
wglMakeCurrent(dc, rc1);
draw_stuff(); // uses rc1
And so is this
HWND wnd0 = create_a_window();
HDC dc0 = GetDC(wnd0);
HWND wnd1 = create_a_window();
HDC dc1 = GetDC(wnd1);
PIXELFORMATDESCRIPTOR pf = select_pixelformat();
SetPixelFormat(dc0, pf);
SetPixelFormat(dc1, pf);
HGLRC rc = create_opengl_context(dc0); // works also with dc1
wglMakeCurrent(dc0, rc);
draw_stuff();
wglMakeCurrent(dc1, rc);
draw_stuff();
Here's where extensions enter the picture. A function like glActiveTexture is not part of the OpenGL specification that has been pinned down into the Windows Application Binary Interface (ABI). Hence you have to get a function pointer to it at runtime. That's what GLEW does. Internally it looks like this:
First it defines types for the function pointers, declares them as extern variables and uses a little bit of preprocessor magic to avoid namespace collisions.
typedef void (*PFNGLACTIVETEXTURE)(GLenum);
extern PFNGLACTIVETEXTURE glew_ActiveTexture;
#define glActiveTexture glew_ActiveTexture
In glewInit the function pointer variables are set to the values obtained using wglGetProcAddress (for the sake of readability I omit the type castings).
int glewInit(void)
{
    /* ... */
    if( openglsupport >= gl1_2 ) {
        /* ... */
        glew_ActiveTexture = wglGetProcAddress("glActiveTexture");
        /* ... */
    }
    /* ... */
}
Now the important part: wglGetProcAddress works with the OpenGL rendering context that is current at the time of calling, i.e. whatever the very last wglMakeCurrent call before it made current. As already explained, extension function pointers are tied to their OpenGL context, and different OpenGL contexts may give different function pointers for the same function.
So if you do this
wglMakeCurrent(…, rc0);
glewInit();
wglMakeCurrent(…, rc1);
glActiveTexture(…);
it may fail. So in general, with GLEW, every call to wglMakeCurrent must immediately be followed by a glewInit. Some builds of GLEW are multi-context aware and do this internally; others are not. However, it is perfectly safe to call glewInit multiple times, so the safe way is to call it anyway, just to be sure.
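A minimal sketch of that safe pattern, reusing the hypothetical dc, rc0 and rc1 handles from the snippets above:
wglMakeCurrent(dc, rc0);
if (glewInit() != GLEW_OK) { /* handle the error */ }
glActiveTexture(GL_TEXTURE0); // pointers loaded for rc0

wglMakeCurrent(dc, rc1);
if (glewInit() != GLEW_OK) { /* handle the error */ }
glActiveTexture(GL_TEXTURE0); // pointers (re)loaded for rc1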
It should not be necessary to get multiple function pointers, one per context, according to this 2016 discussion: https://github.com/nigels-com/glew/issues/38
nigels-com answers this question from kest-relm…
 do you think it is correct to call glewInit() for every context change?
 Is the above the valid way to go for handling multiple opengl contexts?
…with…
I don't think calling glewInit for each context change is desirable, or even necessary, depending on the circumstances.
Obviously this scheme would not be appropriate for multi-threading, anyway.
Kest-relm then says…
From my testing it seems like calling glewInit() repeatedly is not required; the code runs just fine with multiple contexts
It is documented here:
https://www.opengl.org/wiki/Load_OpenGL_Functions
where it states:
"In practice, if two contexts come from the same vendor and refer to the same GPU, then the function pointers pulled from one context will work in the other."
I assume this should be true for most mainstream Windows GL drivers?

Integrating C++ OpenGL project with another C++ project

I was working on a project that reads a data file, performs some calculations, and shows the results on standard output. Later I wanted to give a 3D graphical view of the results, so I made a new OpenGL project that shows the data as a 3D object.
Now the problem is that I cannot figure out a way to integrate these two independent projects, because the main() in my OpenGL project enters the non-terminating glutMainLoop(), and I cannot figure out where to put that loop in the main() of my first project!
/**** Proj1 ****/
int main()
{
    while(ESC key not pressed)
    {
        // read data file
        // do some processing
        // show results on standard output
    }
}

/**** Proj2 ****/
int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    Init();
    glutDisplayFunc(Display);
    glutKeyboardFunc(Keyboard);
    glutMouseFunc(Mouse);
    glutIdleFunc(Idle);
    glutMainLoop();
}
Minimal mixing of code between Proj1 and Proj2 is desired.
Is it possible to do something like:
/**** Proj1 ****/
#include <filesFromProj2>
int main()
{
    while(ESC key not pressed)
    {
        // read data file
        // do some processing
        proj2.showResult(result); // how to do this ?
    }
}
The simplest solution would be to ditch GLUT and use an OpenGL windowing framework that lets you implement the event loop yourself. GLFW would be the immediate choice. Then, instead of an opaque glutMainLoop that never returns, you call glfwPollEvents alongside your stdio processing.
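A minimal sketch of that structure with GLFW 3 (do_processing() and draw_results() are hypothetical stand-ins for Proj1's processing and Proj2's rendering):
#include <GLFW/glfw3.h>

void do_processing();  // Proj1: read data file, do some processing
void draw_results();   // Proj2: render the results with OpenGL

int main()
{
    if (!glfwInit())
        return 1;
    GLFWwindow* window = glfwCreateWindow(800, 600, "Results", NULL, NULL);
    if (!window) {
        glfwTerminate();
        return 1;
    }
    glfwMakeContextCurrent(window);
    while (!glfwWindowShouldClose(window)) {
        do_processing();
        draw_results();
        glfwSwapBuffers(window);
        glfwPollEvents(); // you own the loop; no glutMainLoop needed
    }
    glfwTerminate();
    return 0;
}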
GLUT decouples your event handling code from your display code. It feels strange if you're used to the paradigm where you have full control over the loop, but it's not really hard to deal with. Basically, you need to maintain a state that your glutDisplayFunc will react to, and update that state in your glutKeyboardFunc. So in pseudocode (it seems like you have the C++ down):
displayFunc:
    if state.showProj1
        proj1.showResult
    else if state.showProj2
        proj2.showResult

keyboardFunc:
    if keyPressed(esc)
        state.showProj1 = false
        state.showProj2 = true
        glutPostRedisplay()
Ok, that is some pretty naive code, but it should get across the idea of changing your state in response to user input, which in turn affects what you are rendering.
As mentioned in the previous answer, if you want explicit control of the program loop (as opposed to the event-based paradigm), you have some good options in GLFW and SDL, but of course there will be some ramp-up with those since GLUT does things in a pretty different way.
Found a solution, so I am answering it myself for everyone's reference:
I desperately needed a workaround that did not require changing my existing GLUT-based code to GLFW, SDL, etc.
Digging more on the internet, I found that freeglut provides a function glutMainLoopEvent() that "causes freeglut to process one iteration's worth of events in its event loop. This allows the application to control its own event loop and still use the freeglut windowing system."
Also, freeglut supports all the functions of GLUT (or at least all the GLUT functions used in my program), so I didn't have to change my GLUT-based code.
The pseudo-code for the workaround is as below. I welcome your comments.
#include <GL/freeglut.h>
#include <filesFromProj2>
int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    Init();
    glutDisplayFunc(Display);
    glutKeyboardFunc(Keyboard);
    glutMouseFunc(Mouse);
    glutIdleFunc(Idle);
    // glutMainLoop(); // Do not use this
    while(ESC key not pressed)
    {
        // read data file
        // do some processing
        proj2.showresults(results);
        glutMainLoopEvent(); // One iteration only
        Display();           // Call the func used with glutDisplayFunc()
    }
    glutLeaveMainLoop();
}
I also thought that multi-threading might solve this problem: one thread for glutMainLoop() and another for data processing!
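For reference, a minimal sketch of that multi-threading idea with freeglut (an assumption-heavy example: the worker thread only touches shared data and never calls OpenGL, since GL calls must stay on the thread that owns the context):
#include <GL/freeglut.h>
#include <atomic>
#include <thread>

std::atomic<bool> done{false};

// Hypothetical worker thread: does the Proj1-style data processing and
// publishes results for the display callback to read. It never calls GL.
void processData() {
    while (!done) {
        // read data file, do some processing, store results
    }
}

void display() {
    glClear(GL_COLOR_BUFFER_BIT);
    // draw the latest results here
    glutSwapBuffers();
}

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutCreateWindow("Results");
    glutDisplayFunc(display);
    // Let glutMainLoop() return instead of exiting when the window closes.
    glutSetOption(GLUT_ACTION_ON_WINDOW_CLOSE, GLUT_ACTION_GLUTMAINLOOP_RETURNS);

    std::thread worker(processData); // processing runs off the GL thread
    glutMainLoop();                  // GL + event handling stay on this thread

    done = true;
    worker.join();
    return 0;
}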

GLX/GLEW order of initialization catch-22: GLXEW_ARB_create_context, glXCreateContextAttribsARB, glXCreateContext

Currently I'm working on an application that uses GLEW and GLX (on X11).
The logic works as follows...
glewInit(); /* <- needed so 'GLXEW_ARB_create_context' is set! */

if (GLXEW_ARB_create_context) {
    /* opengl >= 3.0 */
    .. get fb_config ..
    context = glXCreateContextAttribsARB(...);
}
else {
    /* legacy context */
    context = glXCreateContext(...);
}
The problem I'm running into is that GLXEW_ARB_create_context is initialized by GLEW, but initializing GLEW calls glGetString, which crashes if it is called before a context exists (i.e. before glXCreateContextAttribsARB / glXCreateContext).
Note that this only happens with Mesa's software rasterizer (libGL.so compiled with swrast), so it's possibly a problem with Mesa too.
Correction: this works with Mesa-SWRast and NVIDIA's proprietary OpenGL drivers, but segfaults with Intel's OpenGL.
Though it's possible this is a bug in the Intel drivers; I need to check how other projects handle this.
The cause, in the Intel case, is that glXGetCurrentDisplay() returns NULL before GLX is initialized (another catch-22).
So for now, as far as I can tell, it's best to avoid GLEW before the GLX context is created, and instead use GLX directly, e.g.:
if (glXQueryExtension(m_display, NULL, NULL)) {
    const char *glx_ext = glXGetClientString(m_display, GLX_EXTENSIONS);
    if (check_string_for_extension(glx_ext, "GLX_SOME_EXTENSION")) {
        printf("We have the extension!\n");
    }
}
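check_string_for_extension is not spelled out above; here is a minimal sketch of it. Since the GLX extension string is a space-separated list, a plain strstr can match prefixes of longer names, so this version checks word boundaries:
#include <cstring>

// Returns true if 'name' appears as a whole word in the space-separated
// extension string 'ext_string'.
static bool check_string_for_extension(const char *ext_string, const char *name)
{
    if (!ext_string || !name)
        return false;
    const size_t len = std::strlen(name);
    const char *p = ext_string;
    while ((p = std::strstr(p, name)) != NULL) {
        const bool starts_ok = (p == ext_string) || (p[-1] == ' ');
        const bool ends_ok = (p[len] == ' ') || (p[len] == '\0');
        if (starts_ok && ends_ok)
            return true;
        p += len;
    }
    return false;
}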
Old answer...
Found the solution (seems obvious in retrospect!)
First call glxewInit()
check GLXEW_ARB_create_context
create the context with glXCreateContextAttribsARB or glXCreateContext.
call glewInit()
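For illustration, a sketch of that ordering (display, fb_config, visual_info and drawable are assumed to exist already; glxewInit() is only exposed by some GLEW builds/versions, and as noted above this ordering can still fail on drivers where GLX state is unavailable before a context is current):
glxewInit();  /* fills in the GLXEW_* flags */

GLXContext context;
if (GLXEW_ARB_create_context) {
    const int attribs[] = {
        GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
        GLX_CONTEXT_MINOR_VERSION_ARB, 0,
        None
    };
    context = glXCreateContextAttribsARB(display, fb_config, NULL, True, attribs);
} else {
    context = glXCreateContext(display, visual_info, NULL, True);
}

glXMakeCurrent(display, drawable, context);
glewInit();   /* now a context is current, so glGetString works */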

How can I initialize glut without the command line arguments?

In C++, we usually see OpenGL being initialized in the main function, such as:
int main(int argc, char **argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    ..............................
}
But instead of doing it in the main function, how can we initialize OpenGL in another sub-function?
Such as:
int main() {
    ...........
}

int installopengl(int argc, char **argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    ..............................
}
Maybe I misunderstood, but why can't you directly call the function like below?
int installopengl(int argc, char **argv);

int main(int argc, char **argv) {
    installopengl(argc, argv);
    ...........
}

int installopengl(int argc, char **argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    ..............................
}
Please get your terminology right. OpenGL is not installed in a program; it gets initialized.
Also, the pattern you quoted is GLUT initialization. GLUT is not OpenGL, but a simple windowing framework that creates an OpenGL context for you to use to draw to the window. There are several other frameworks as well.
Then you seem to completely misunderstand what the main function does. main is the program entry point, the very first function that gets called in a process after the runtime environment has been set up. main can call any function, and you can simply call a dedicated framework initialization function there. If it needs parameters from main, you simply pass them along.
While it is definitely not recommended, you can always do this:
int argc = 1;
char *argv[] = { (char *)"app", NULL }; // dummy argv; some GLUT implementations dereference argv[0]
glutInit(&argc, argv);
The issue with doing it this way is that you won't be able to pass any information to the glut library from the command line.