OpenGL / SDL2: stencil buffer bits always 0 on PC - C++

I'm writing an app using SDL2 / OpenGL, and doing some stencil operations.
Everything works as expected on the Mac; on the PC, however, the stenciling doesn't work.
Upon closer inspection I realized that the following code provides different outcomes on my Mac and PC:
SDL_Init(SDL_INIT_VIDEO);
SDL_GL_SetAttribute( SDL_GL_STENCIL_SIZE, 8 );
SDL_CreateWindow( ... );
SDL_CreateRenderer( ... )
... do stuff ...
When I print out the stencil bits (SDL_GL_STENCIL_SIZE) on the Mac I get 8; when I do the same on the PC, I get 0.
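For reference, the value above is presumably read back after the window and renderer are created with something like this (my reconstruction, not the original code):

int stencil_bits = 0;
if (SDL_GL_GetAttribute(SDL_GL_STENCIL_SIZE, &stencil_bits) != 0)
    printf("SDL_GL_GetAttribute failed: %s\n", SDL_GetError());
printf("stencil bits: %d\n", stencil_bits);   // 8 on the Mac, 0 on the PC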
The same happens whether I run it on an actual PC, or on a PC emulator on the Mac.
What am I missing? How can I force SDL2 to request a context with a stencil buffer?
It looks to me like the Mac's OpenGL implementation has different defaults than the PC one, so I'm probably forgetting to do something to specifically request a stencil buffer, but I can't find any good information online ...
Help ^_^' ?

Never mind, I found the answer:
On the PC, SDL2 was defaulting to Direct3D (which would explain why my OpenGL stencil buffer was not there ^_^').
To force SDL2 to use a specific render driver, pass its index as the second parameter of SDL_CreateRenderer (see the sketch below).
Problem solved :D
StackOverflow, the biggest rubber duck available... ^-^'
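For reference, a minimal sketch of forcing the OpenGL backend by looking up its driver index (my own illustration, not part of the original answer; calling SDL_SetHint(SDL_HINT_RENDER_DRIVER, "opengl") before creating the renderer is an alternative):

#include <SDL.h>
#include <cstring>

// Hypothetical helper: pick the "opengl" render driver explicitly instead of
// letting SDL_CreateRenderer(window, -1, ...) choose Direct3D on Windows.
SDL_Renderer* create_opengl_renderer(SDL_Window* window)
{
    int driver_index = -1;
    for (int i = 0; i < SDL_GetNumRenderDrivers(); ++i) {
        SDL_RendererInfo info;
        if (SDL_GetRenderDriverInfo(i, &info) == 0 &&
            std::strcmp(info.name, "opengl") == 0) {
            driver_index = i;   // found the OpenGL backend
            break;
        }
    }
    // Passing the driver index (instead of -1) forces that backend; the
    // SDL_GL_* attributes set earlier (e.g. the stencil size) then apply.
    return SDL_CreateRenderer(window, driver_index, SDL_RENDERER_ACCELERATED);
}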

Related

How can I get a valid QGLContext in Windows 16 bit color?

I have an existing, working Qt 5 QGLWidget that uses desktop OpenGL and GLSL shader version 1.5 on Windows 7. When my monitor/display driver is set to 32-bit True Color it works, but when I change to 16-bit mode it does not.
QGLFormat::openGLVersionFlags()
In 16-bit mode this reports that only OpenGL version 1.1 is supported, whereas in 32-bit True Color it reports 4.2. My video card is an nVidia GTX 670.
I believe this has to do with requesting the right QGLContext to be created, but I'm not sure which options I need to request, and testing has been a shot in the dark. Here is what I've currently tried from searching online:
QGLFormat f = QGLFormat::defaultFormat();
f.setVersion(4, 2); // also tried "3, 2" which coincides with GLSL 1.5
f.setProfile(QGLFormat::CoreProfile);
f.setDepth(true);
QGLFormat::setDefaultFormat(f);
I have had success with the underlying window by creating the context myself with winapi, but I cannot find what's wrong with the Qt version of the code.
I would at least like to be able to detect the situation and avoid crashing when a context that supports my required OpenGL version was not created.
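In the meantime, one way to at least detect the downgrade and avoid the crash is to inspect the context Qt actually created rather than the one that was requested. A minimal sketch, using the same (now deprecated) QGL classes as the question:

#include <QGLWidget>
#include <QGLFormat>
#include <QtDebug>

// Returns true if the context Qt actually created (not merely the one that
// was requested) supports at least OpenGL 3.2 / GLSL 1.50.
bool contextIsUsable(QGLWidget& w)
{
    w.makeCurrent();                       // openGLVersionFlags() needs a current context
    const QGLFormat actual = w.format();   // the created format, not the requested one
    if (QGLFormat::openGLVersionFlags() & QGLFormat::OpenGL_Version_3_2)
        return true;
    qWarning("Only GL %d.%d is available in this display mode.",
             actual.majorVersion(), actual.minorVersion());
    return false;                          // caller can fall back instead of crashing
}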

Getting a pixelformat/context with stencil buffer with Mesa OpenGL

I need to change a very old application to work through Remote Desktop Connection (which only supports a subset of OpenGL 1.1). It only needs various OpenGL 1.x functions, so I'm trying the trick of placing a Mesa opengl32.dll file in the application folder. The application only makes sparse use of OpenGL, so it's fine to go with a low-performance software renderer.
Anyway, I obtained a precompiled Mesa opengl32.dll from https://wiki.qt.io/Cross_compiling_Mesa_for_Windows, but I can't get a pixel format/context with a stencil buffer enabled. If I disable stencil buffer use, everything else works, but it would really be best if I could figure out how to get a pixel format/context with a stencil buffer enabled.
Here's the pixelformat part of context creation code:
function gl_context_create_init(adevice_context:hdc):int;
var
  pfd,pfd2:tpixelformatdescriptor;
begin
  mem_zero(pfd,sizeof(pfd));
  pfd.nSize:=sizeof(pfd);
  pfd.nVersion:=1;
  pfd.dwFlags:=PFD_DRAW_TO_WINDOW or PFD_SUPPORT_OPENGL or PFD_DOUBLEBUFFER;
  pfd.iPixelType:=PFD_TYPE_RGBA;
  pfd.cColorBits:=32;
  pfd.iLayerType:=PFD_MAIN_PLANE;
  pfd.cStencilBits:=4;
  gl_pixel_format:=choosepixelformat(adevice_context,@pfd);
  if gl_pixel_format=0 then
    gl_error('choosepixelformat');
  if not setpixelformat(adevice_context,gl_pixel_format,@pfd) then
    gl_error('setpixelformat');
  describepixelformat(adevice_context,gl_pixel_format,sizeof(pfd2),pfd2);
  if ((pfd.dwFlags and pfd2.dwFlags)<>pfd.dwFlags) or
     (pfd.iPixelType<>pfd2.iPixelType) or
     (pfd.cColorBits<>pfd2.cColorBits) or
     (pfd.iLayerType<>pfd2.iLayerType) or
     (pfd.cStencilBits>pfd2.cStencilBits) then
    gl_error('describepixelformat');
  ...
end;
The error happens at the (pfd.cStencilBits>pfd2.cStencilBits) check: I can't seem to find a pixel format through Mesa whose cStencilBits is non-zero, so I can't get a context that supports stenciling.
It turns out that ChoosePixelFormat cannot choose a pixel format that is only available through the Mesa opengl32.dll; however, wglChoosePixelFormat can. So my problem is solved: I have now been able to get stencil buffers working over Remote Desktop Connection with this old program.
The thing I don't understand, but don't have time to look into (if you know the answer, please post it in the comments of this answer), is that SetPixelFormat and DescribePixelFormat both work perfectly fine with pixel formats that are only available through Mesa. I expected either all three of ChoosePixelFormat/SetPixelFormat/DescribePixelFormat to work or none of them to, but this is how it is.
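For reference, here is a C++ sketch of the same idea (my translation, not the original Pascal): resolve wglChoosePixelFormat from the opengl32.dll sitting next to the executable (the Mesa drop-in) and use it in place of GDI's ChoosePixelFormat.

#include <windows.h>

typedef int (WINAPI *PFN_wglChoosePixelFormat)(HDC, const PIXELFORMATDESCRIPTOR*);

// Hypothetical helper: choose and set a pixel format with a stencil buffer,
// preferring the wglChoosePixelFormat exported by the (Mesa) opengl32.dll.
int pick_format_with_stencil(HDC dc)
{
    PIXELFORMATDESCRIPTOR pfd = {};
    pfd.nSize        = sizeof(pfd);
    pfd.nVersion     = 1;
    pfd.dwFlags      = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType   = PFD_TYPE_RGBA;
    pfd.cColorBits   = 32;
    pfd.iLayerType   = PFD_MAIN_PLANE;
    pfd.cStencilBits = 8;                  // ask for a stencil buffer

    // LoadLibrary searches the application directory first, so this picks up
    // the Mesa drop-in rather than the system opengl32.dll.
    HMODULE gl = LoadLibraryA("opengl32.dll");
    PFN_wglChoosePixelFormat choose = gl
        ? (PFN_wglChoosePixelFormat)GetProcAddress(gl, "wglChoosePixelFormat")
        : NULL;

    int format = choose ? choose(dc, &pfd) : ChoosePixelFormat(dc, &pfd);
    if (format == 0 || !SetPixelFormat(dc, format, &pfd))   // SetPixelFormat (GDI) still works
        return 0;
    return format;
}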

What buffers to use for stereo in Qt using QOpenGLWidget?

I'm trying to do a stereoscopic visualization in Qt. I have found some tutorials but all of them use the older QGLWidget and the buffers GL_FRONT_LEFT and GL_FRONT_RIGHT.
As I'm using the newer QOpenGLWidget, I tried drawing images to the same buffers, but the call to glDrawBuffer(GL_FRONT_LEFT) generates a GL_INVALID_ENUM error.
I also saw that the default buffer is GL_COLOR_ATTACHMENT0 instead of GL_FRONT_LEFT, so I imagine I need to use a different set of buffers to enable stereo.
Which buffers should I use?
You should use:
glDrawBuffer(GL_BACK_RIGHT);
glDrawBuffer(GL_BACK_LEFT);
See this link.
I am working on the same thing with an Nvidia Quadro 4000. No luck yet: I get two slightly offset images, the IR transmitter lights up, BUT the screen flickers!
GOT IT: the refresh rate was 60 Hz; I set it to 120 Hz and everything works fine.
I still need to work on the left/right frustum before I can say eureka.
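For what it's worth, here is a minimal sketch of that approach (my own illustration, not from the answer above). It requests a quad-buffered stereo format up front and uses QOpenGLWindow, which paints straight into the window's default framebuffer; QOpenGLWidget normally redirects rendering into an internal FBO, where GL_BACK_LEFT/GL_BACK_RIGHT are not valid targets. It assumes a desktop OpenGL build of Qt, a driver that actually exposes quad-buffered stereo, and linking against the system GL library for the direct GL 1.1 calls.

#include <QGuiApplication>
#include <QOpenGLWindow>
#include <QSurfaceFormat>
#include <qopengl.h>                       // GL enums and 1.1 entry points on desktop builds

class StereoWindow : public QOpenGLWindow
{
protected:
    void paintGL() override
    {
        glDrawBuffer(GL_BACK_LEFT);        // left-eye image
        glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        // ... draw the scene with the left-eye frustum ...

        glDrawBuffer(GL_BACK_RIGHT);       // right-eye image
        glClearColor(0.0f, 0.0f, 1.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        // ... draw the scene with the right-eye frustum ...
    }
};

int main(int argc, char** argv)
{
    QGuiApplication app(argc, argv);

    QSurfaceFormat fmt;
    fmt.setStereo(true);                   // request a quad-buffered default framebuffer
    QSurfaceFormat::setDefaultFormat(fmt); // must happen before the window is created

    StereoWindow w;
    w.resize(800, 600);
    w.show();
    return app.exec();
}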

OpenGL version stuck at 1.1 software

I've been trying to use OpenGL 4, and the first obstacle was actually loading GL 4 instead of the disgusting software-only GL 1.1 that comes with MS Windows. I tried using GLEW and then updated my drivers a second time, and still GL kept reporting the version as 1.1.0. It turns out that it was not a problem with GLEW (nor did it even require GLEW, it seems), nor was it SDL breaking something internally (which I considered, since that is what I use to create the context).
Andon M. Coleman brought up the issue of pixel format, which is something I had totally overlooked to begin with. It turns out that I'd been using 8 bits for each of red, green, blue, and alpha, plus 32 bits for depth. I thought that was a nice even 64 bits: 32 for the color and 32 for the depth. However, since SDL assumes you want a stencil buffer too (which I realize now is actually needed), it was actually making the pixel format 72 bits, which is not allowed for an accelerated context, since GPUs typically handle 64 bits at most. So it was falling back to the ancient GL 1.1 support that Windows provides in the absence of drivers, which is also software-only.
The code has a lot of other stuff in it, so I have put together a basic sample of what I was trying to do. It is valid C++ and compiles on MinGW, assuming you link it properly.
#include <GL/gl.h>
#include <SDL2/SDL.h>
#include <stdio.h>

#define SDL_main main

int main ()
{
    SDL_Init(SDL_INIT_EVERYTHING);
    SDL_GL_SetAttribute(SDL_GL_RED_SIZE,8);
    SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE,8);
    SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE,8);
    SDL_GL_SetAttribute(SDL_GL_ALPHA_SIZE,8);
    /// What I *was* doing...
    /* SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE,32); */
    // And then I didn't even set SDL_GL_STENCIL_SIZE at all.
    // Results in the entire size for each pixel being more than 64.
    /// What I *am* doing...
    SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE,24);
    SDL_GL_SetAttribute(SDL_GL_STENCIL_SIZE,8);
    // A nice even 32 bits for this (depth + stencil) and 32 bits for the color is 64.
    SDL_Window* window = SDL_CreateWindow(
        "Hay GPU y is u no taek pixelz ovar 64 bitz?",
        SDL_WINDOWPOS_UNDEFINED,SDL_WINDOWPOS_UNDEFINED,
        1024,768,
        SDL_WINDOW_FULLSCREEN | SDL_WINDOW_OPENGL
    );
    SDL_GLContext context = SDL_GL_CreateContext(window);
    printf("GL Version [%s]\n",glGetString(GL_VERSION));
    SDL_GL_DeleteContext(context);   // delete the context before destroying its window
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}
Hopefully other people who have been having a similar issue can learn from my mistake, or at least be able to mark it off a list of possible problems.
From SDL docs:
SDL_GL_CONTEXT_PROFILE_MASK determines the type of context created,
while both SDL_GL_CONTEXT_MAJOR_VERSION and
SDL_GL_CONTEXT_MINOR_VERSION determine which version. All three
attributes must be set prior to creating the first window, and in
general you can't change the value of SDL_GL_CONTEXT_PROFILE_MASK
without first destroying all windows created with the previous
setting.
Try setting the right values for SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_MAJOR_VERSION and SDL_GL_CONTEXT_MINOR_VERSION; that will probably solve the issue (assuming your drivers actually support OpenGL 4.x contexts).
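Concretely, that means something like the following before SDL_CreateWindow (the 4.2/core values here are illustrative):

// Must be set before the window is created for them to affect the context.
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 4);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 2);
// Use SDL_GL_CONTEXT_PROFILE_COMPATIBILITY instead if legacy GL calls are still needed.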

Weird results when running my program in gDEBugger

I'm trying to run an OpenGL program through gDEBugger (http://www.gremedy.com) and I'm seeing a couple of strange things:
The frames seem to be rendering MUCH faster with gDEBugger. For example, if I update some object's position every frame, it'll just fly across the screen really fast, but when the program is run without gDEBugger it moves at a much slower speed.
Strangely, gDEBugger reports 8 GL frames/second, which doesn't seem realistic: the FPS is clearly higher than 8 (and I have checked every possible OpenGL render frame terminator in the Debug Settings dialog).
My program uses SDL to create an OpenGL rendering context:
Uint32 flags = SDL_HWSURFACE | SDL_DOUBLEBUF | SDL_OPENGL;
if(fullscreen) flags |= SDL_FULLSCREEN;
// Initialize SDL's video subsystem
if(SDL_Init(SDL_INIT_VIDEO) == -1) {
    // handle initialization failure (see SDL_GetError())
}
// Request a double-buffered GL context, then attempt to set the video mode
SDL_GL_SetAttribute( SDL_GL_DOUBLEBUFFER, 1 );
SDL_Surface* s = SDL_SetVideoMode(width, height, 0, flags);
I'm using Windows 7 and an NVIDIA graphics card (GeForce GTX 660M).
My question is: how does one explain the strange behavior I'm seeing in 1) and 2)? Could it be that for some reason the rendering is being performed in software instead of on the graphics card?
UPD: Obviously, I'm calling SDL_GL_SwapBuffers (which isn't listed as one of the render frame terminators) at the end of each frame, but I assume it should just call the Windows SwapBuffers function.
Regarding issue 1: apparently gDEBugger disables wait-for-vsync, which is why the framerate is much higher than 60 fps.
Regarding issue 2: for some reason, when working with SDL, two OpenGL contexts are created. One can see the correct number by adding performance counters for the second context.
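If the goal is a consistent animation speed whether or not the debugger forces vsync off, the swap interval can also be requested explicitly. With the SDL 1.2 API used above, a sketch would be to add, before SDL_SetVideoMode:

SDL_GL_SetAttribute( SDL_GL_SWAP_CONTROL, 1 );   // SDL 1.2: request vsynced buffer swaps

(Time-based movement rather than per-frame movement is the more robust fix, but this at least makes in-debugger and standalone runs comparable.)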