Getting a pixelformat/context with stencil buffer with Mesa OpenGL

I need to change a very old application so that it works over Remote Desktop Connection (which only supports a subset of OpenGL 1.1). It only needs various OpenGL 1.x functions, so I'm trying the trick of placing a Mesa opengl32.dll in the application folder. The application makes only sparse use of OpenGL, so a low-performance software renderer is fine.
Anyway, I obtained a precompiled Mesa opengl32.dll from https://wiki.qt.io/Cross_compiling_Mesa_for_Windows, but I can't get a pixel format/context with a stencil buffer. If I disable stencil buffer use then everything else works, but ideally I'd like to figure out how to get a pixel format/context with a stencil buffer enabled.
Here's the pixelformat part of context creation code:
function gl_context_create_init(adevice_context:hdc):int;
var
  pfd,pfd2:tpixelformatdescriptor;
begin
  mem_zero(pfd,sizeof(pfd));
  pfd.nSize:=sizeof(pfd);
  pfd.nVersion:=1;
  pfd.dwFlags:=PFD_DRAW_TO_WINDOW or PFD_SUPPORT_OPENGL or PFD_DOUBLEBUFFER;
  pfd.iPixelType:=PFD_TYPE_RGBA;
  pfd.cColorBits:=32;
  pfd.iLayerType:=PFD_MAIN_PLANE;
  pfd.cStencilBits:=4;
  // let GDI pick the closest matching pixel format
  gl_pixel_format:=choosepixelformat(adevice_context,@pfd);
  if gl_pixel_format=0 then
    gl_error('choosepixelformat');
  if not setpixelformat(adevice_context,gl_pixel_format,@pfd) then
    gl_error('setpixelformat');
  // read back what was actually set and check it satisfies the request
  describepixelformat(adevice_context,gl_pixel_format,sizeof(pfd2),pfd2);
  if ((pfd.dwFlags and pfd2.dwFlags)<>pfd.dwFlags) or
     (pfd.iPixelType<>pfd2.iPixelType) or
     (pfd.cColorBits<>pfd2.cColorBits) or
     (pfd.iLayerType<>pfd2.iLayerType) or
     (pfd.cStencilBits>pfd2.cStencilBits) then
    gl_error('describepixelformat');
  ...
end;
The error happens at the line (pfd.cStencilBits>pfd2.cStencilBits): I can't seem to find a pixel format through Mesa whose cStencilBits is non-zero, so I can't get a context that supports stencil.

Well, it turns out that choosepixelformat cannot choose a pixel format that is only available through the Mesa opengl32.dll; however, wglchoosepixelformat can. So my problem is solved: I have now been able to get stencil buffers to work while using Remote Desktop Connection with this old program.
The thing I don't understand, but don't have time to look into (if you know the answer, please post it in the comments of this answer), is that setpixelformat and describepixelformat both work perfectly fine with pixel formats that are only available through Mesa. I expected choosepixelformat/setpixelformat/describepixelformat to either all work or all fail, but this is how it is.
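For reference, here is a minimal C-style sketch of the substitution described above: resolve wglChoosePixelFormat from the already-loaded opengl32.dll and use it in place of GDI's ChoosePixelFormat. The helper name and the 8-bit stencil request are illustrative choices, not taken from the code above.

#include <windows.h>

// wglChoosePixelFormat is exported by opengl32.dll but not declared in the
// standard headers, so resolve it at runtime (assumption: the Mesa DLL
// exports it with the classic PIXELFORMATDESCRIPTOR-based signature).
typedef int (WINAPI *PFN_wglChoosePixelFormat)(HDC, const PIXELFORMATDESCRIPTOR *);

int choose_stencil_format(HDC dc)
{
    PIXELFORMATDESCRIPTOR pfd = {};
    pfd.nSize        = sizeof(pfd);
    pfd.nVersion     = 1;
    pfd.dwFlags      = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType   = PFD_TYPE_RGBA;
    pfd.cColorBits   = 32;
    pfd.cStencilBits = 8;                 // ask for a real stencil buffer
    pfd.iLayerType   = PFD_MAIN_PLANE;

    HMODULE gl = GetModuleHandleA("opengl32.dll");
    PFN_wglChoosePixelFormat wglChoose =
        (PFN_wglChoosePixelFormat)GetProcAddress(gl, "wglChoosePixelFormat");

    // Fall back to the GDI chooser if the export is missing.
    int fmt = wglChoose ? wglChoose(dc, &pfd) : ChoosePixelFormat(dc, &pfd);
    if (fmt != 0 && SetPixelFormat(dc, fmt, &pfd))
        return fmt;                       // caller continues with wglCreateContext
    return 0;
}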

Related

Encode OpenGL rendered video without leaving the GPU memory

I am doing some preliminary work on a rendering pipeline and am investigating whether OpenGL is a good option for my use case: from a markup language I need to generate a video, ideally using OpenGL, which already implements most of the primitives I need.
Is there a way, instead of (or in addition to) updating a framebuffer, to produce an mp4 video file using NVENC, without copying data back and forth between GPU memory and main memory?
The NVENC SDK page[1] on the NVIDIA website suggests that it can, as the current header graphic is of a game being streamed. (Even if it's a Direct3D game, it's the same chip underneath.) A quick search for "nvenc share buffer with OpenGL" turned up a number of people apparently combining the two.
NVENC runs on Linux and MS Windows only, so no joy if you have a Mac.
Hope this helps.
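To make the zero-copy idea concrete, here is a hedged sketch of the OpenGL-to-CUDA side of that path (NVENC accepts CUDA device memory as input): register the render texture with CUDA so the encoder can read it without a round trip through system RAM. The NVENC session setup, the device-to-device copy into the encoder's input buffer, and the unmap/unregister calls are omitted; the function names are illustrative.

#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

// Register an OpenGL texture once; the pixels never leave GPU memory,
// CUDA (and hence NVENC) just gains read access to them.
cudaGraphicsResource_t register_render_texture(GLuint tex)
{
    cudaGraphicsResource_t res = nullptr;
    cudaGraphicsGLRegisterImage(&res, tex, GL_TEXTURE_2D,
                                cudaGraphicsRegisterFlagsReadOnly);
    return res;
}

// Per frame: map the resource (after the GL rendering has finished) and get
// a CUDA array that can be copied device-to-device into the encoder input.
cudaArray_t map_frame(cudaGraphicsResource_t res)
{
    cudaArray_t frame = nullptr;
    cudaGraphicsMapResources(1, &res, 0);
    cudaGraphicsSubResourceGetMappedArray(&frame, res, 0, 0);
    // ... feed 'frame' to the encoder, then cudaGraphicsUnmapResources(1, &res, 0);
    return frame;
}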

OpenGL Qt 4.8 Render to texture floating point

So I'm working on a project based on Qt 4.8, which means that when using OpenGL I have to go through the QGL classes.
My goal is to write data into a floating point texture to perform per-pixel picking (three values are written to each pixel: two integers and a float).
So I used QGLFramebufferObject and the offscreen rendering is happening, but I'm having issues retrieving my data. The first thing I looked into was specifying the internal format for the FBO, but when trying to use the format I need, GL_RGB32F, the compiler can't find it. I checked the context and it is a 3.1 core profile, so the constant should be there. My second problem is with clamping: values read back from the buffer are normalized, so I know I have to disable clamping with glClampColorARB, but the compiler doesn't find that either.
So I guess my question is: how do I load what's missing so I can find the constant for the internal format and the clamping function?
Thanks
I would guess that you are compiling with an older OpenGL header file. On MS Windows the default GL/gl.h is for something like version 1.1 :-(
AFAIK the Qt headers for the GL related classes don't include everything in OpenGL, just the minimum to work. You should get your own copy of glcorearb.h, say from www.opengl.org, and include that in your source code.
(What you are attempting can be done: I have a Linux/MSWin/Mac program built with Qt 4.8.6 that renders to offscreen floating point buffers. I'd offer code but I created the framebuffer directly with OpenGL calls rather than using the Qt class.)
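For illustration only (this is not the answerer's code), a sketch of what that can look like with Qt 4.8's QGL classes once a standalone glcorearb.h supplies the missing constants. It assumes the platform exposes glClampColor directly (with GL_GLEXT_PROTOTYPES defined); on Windows you would resolve it through QGLContext::getProcAddress instead.

#define GL_GLEXT_PROTOTYPES          // so glcorearb.h also declares glClampColor
#include <QGLFramebufferObject>
#include "glcorearb.h"               // provides GL_RGB32F, GL_CLAMP_READ_COLOR, ...

// Float colour attachment: per-pixel picking data survives without clamping.
QGLFramebufferObject *makeFloatFbo(int w, int h)
{
    QGLFramebufferObjectFormat fmt;
    fmt.setInternalTextureFormat(GL_RGB32F);
    fmt.setAttachment(QGLFramebufferObject::CombinedDepthStencil);
    return new QGLFramebufferObject(w, h, fmt);
}

// Read the raw float values back; disabling read clamping keeps them
// from being squeezed into [0,1].
void readBackUnclamped(QGLFramebufferObject *fbo, int w, int h, float *out)
{
    fbo->bind();
    glClampColor(GL_CLAMP_READ_COLOR, GL_FALSE);
    glReadPixels(0, 0, w, h, GL_RGB, GL_FLOAT, out);   // out holds w*h*3 floats
    fbo->release();
}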

Remote off-screen rendering (Linux / no GUI)

The situation is as follows:
There is a remote Linux server (no GUI), which builds the OpenGL scene.
Objective: transfer the generated image(s) to a client Windows machine.
There are some things about offscreen rendering that I cannot understand; I have read a lot of literature, but it is still not clear:
Using GLUT implies setting the DISPLAY variable, which, if I understand correctly, means remote rendering via X11. If I run an X11 server (XWin) on the Windows machine, everything works. If I try to run without a rendering server, I get: freeglut (./WFWorkspace): failed to open display 'localhost:11.0'. Either way, X11 is not suitable.
Do I need to create a graphics context (hardware rendering support is required)?
How can I create a graphics context on Linux server without GLUT/x11?
Framebuffer object - is it suitable for my task, and is a graphics context necessary for it?
What is the most efficient way to solve this problem (rendering requires hardware support)?
A less important issue, but nevertheless:
Pixel buffer object - I plan to use it to speed up reads from GPU memory. Is it worthwhile for my task?
You need to modify your program to use OSMesa - it's a "null display" driver used by Mesa for software rendering. As a starting point, consider this answer to a near-duplicate question:
https://stackoverflow.com/a/8442800/2702398
For a full example, you can check out the examples in the Mesa distribution itself, such as this: http://cgit.freedesktop.org/mesa/demos/tree/src/osdemos/osdemo.c
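For a flavour of what the linked osdemo.c does, here is a minimal OSMesa sketch (software rendering into a plain memory buffer, no X server required). Note that OSMesa is a software path; if hardware rendering is a hard requirement, see the X11/GLX route further below.

#include <GL/osmesa.h>
#include <GL/gl.h>
#include <vector>

bool render_offscreen(int width, int height, std::vector<unsigned char> &image)
{
    // Create a software context that renders straight into client memory.
    OSMesaContext ctx = OSMesaCreateContext(OSMESA_RGBA, NULL);
    if (!ctx)
        return false;

    image.resize(width * height * 4);               // RGBA8 destination buffer
    if (!OSMesaMakeCurrent(ctx, image.data(), GL_UNSIGNED_BYTE, width, height)) {
        OSMesaDestroyContext(ctx);
        return false;
    }

    glClearColor(0.f, 0.f, 0.f, 1.f);
    glClear(GL_COLOR_BUFFER_BIT);
    // ... normal OpenGL drawing goes here ...
    glFinish();                                     // pixels are now in 'image'

    OSMesaDestroyContext(ctx);
    return true;
}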
Update
It appears that VirtualGL (http://www.virtualgl.org) supports remote rendering of the OpenGL/GLX protocol and serves the rendered pixmaps to the client over VNC (and the VNC head can trivially be made virtual).
If you want to use the full OpenGL spec, use X11 to create the context. Here is a tutorial showing how you can do this:
http://arrayfire.com/remote-off-screen-rendering-with-opengl/
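As a rough sketch of that X11/GLX route (assuming an X display is reachable, e.g. a headless Xorg, Xvfb or VirtualGL setup): pick a pbuffer-capable framebuffer config, create a context and a small pbuffer, make it current, and then render into your own framebuffer object as usual.

#include <X11/Xlib.h>
#include <GL/glx.h>

bool make_headless_glx_context()
{
    Display *dpy = XOpenDisplay(NULL);              // honours $DISPLAY
    if (!dpy)
        return false;

    int fbAttribs[] = { GLX_RENDER_TYPE,   GLX_RGBA_BIT,
                        GLX_DRAWABLE_TYPE, GLX_PBUFFER_BIT,
                        None };
    int count = 0;
    GLXFBConfig *cfgs = glXChooseFBConfig(dpy, DefaultScreen(dpy), fbAttribs, &count);
    if (!cfgs || count == 0)
        return false;

    GLXContext ctx = glXCreateNewContext(dpy, cfgs[0], GLX_RGBA_TYPE, NULL, True);

    // A tiny pbuffer just to have a valid drawable; real rendering can go
    // into an FBO at any resolution afterwards.
    int pbAttribs[] = { GLX_PBUFFER_WIDTH, 32, GLX_PBUFFER_HEIGHT, 32, None };
    GLXPbuffer pbuf = glXCreatePbuffer(dpy, cfgs[0], pbAttribs);
    XFree(cfgs);

    return glXMakeContextCurrent(dpy, pbuf, pbuf, ctx);
}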

Cinder: How to get a pointer to data\frame generated but never shown on screen?

There is that great lib I want to use called libCinder. I looked through its docs but cannot work out whether, and how, it is possible to render something without ever showing it.
Say we want to create a simple random-color 640x480 canvas with 3 red, white and blue circles on it, and get RGB\HSL\any char * to the raw image data out of it, without ever showing any window to the user (say we have a console application project type). I want to use such a feature for server-side live video stream generation, and for video streaming I would prefer to use ffmpeg, which is why I want a pointer to some RGB\HSV or whatever buffer with the actual image data. How can I do such a thing with libCinder?
You will have to use off-screen rendering. libcinder seems to be just a wrapper for OpenGL, as far as graphics go, so you can use OpenGL code to achieve this.
Since OpenGL does not have a native mechanism for off-screen rendering, you'll have to use an extension. A tutorial for using such an extension, called Framebuffer Rendering, can be found here. You will have to modify renderer.cpp to use this extension's commands.
An alternative to using such an extension is to use Mesa 3D, an open-source implementation of OpenGL. Mesa has a software rendering engine which allows it to render into memory without using a video card. This means you don't need a video card, but on the other hand the rendering might be slow. Mesa has an example of rendering to a memory buffer at src/osdemos/ in the Demos zip file. This solution will probably require you to write a complete Renderer class, similar to Renderer2d and RendererGl, which uses Mesa's instructions instead of Windows's or Mac's.
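For the first route, here is a hedged sketch of the framebuffer-object idea (using core GL names; the EXT-suffixed calls from the linked tutorial work the same way). It renders into an off-screen texture and reads the raw RGB bytes back, which is exactly the kind of buffer you could hand to ffmpeg; the function name is illustrative, and a current context plus an extension loader are assumed.

#include <GL/gl.h>     // assumes an extension loader / recent header is in place
#include <vector>

std::vector<unsigned char> render_to_memory(int w, int h)
{
    GLuint fbo = 0, tex = 0;

    // Colour attachment that will receive the off-screen drawing.
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);

    glViewport(0, 0, w, h);
    glClearColor(0.2f, 0.4f, 0.6f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    // ... draw the circles here with normal OpenGL / cinder calls ...

    // Raw pixels, ready to be pushed to ffmpeg.
    std::vector<unsigned char> pixels(w * h * 3);
    glReadPixels(0, 0, w, h, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glDeleteFramebuffers(1, &fbo);
    glDeleteTextures(1, &tex);
    return pixels;
}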

OpenGL: How to select correct mipmapping method automatically?

I'm having a problem with mipmapping textures on different hardware. I use the following code:
char *exts = (char *)glGetString(GL_EXTENSIONS);
if(strstr(exts, "SGIS_generate_mipmap") == NULL){
    // use gluBuild2DMipmaps()
}else{
    // use GL_GENERATE_MIPMAP
}
But on some cards it says GL_GENERATE_MIPMAP is supported when it's not, so the graphics card tries to read memory from where the mipmap is supposed to be and ends up rendering other textures into those mip levels.
I tried glGenerateMipmapEXT(GL_TEXTURE_2D), but it makes all my textures white; I enabled GL_TEXTURE_2D before using that function (as instructed).
I could just as well use gluBuild2DMipmaps() for everyone, since it works. But I don't want to make new cards load 10x slower because there are 2 users who have really old cards.
So how do you choose the mipmap method correctly?
glGenerateMipmap is supported at least since OpenGL 3.3 as part of core functionality, not as an extension.
You have the following options:
Check the OpenGL version; if it is more recent than the first one that supported glGenerateMipmap, use glGenerateMipmap (a sketch of this selection follows below).
(I'd recommend this one) OpenGL 1.4..2.1 supports glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE), which will generate mipmaps from the base level. This probably became "deprecated" in OpenGL 3, but you should still be able to use it.
Use GLee or GLEW and use the gleeIsSupported / glewIsSupported call to check for the extension.
Also, instead of relying on extension strings, I think it is easier to stick with the OpenGL specification. A lot of hardware supports OpenGL 3, so you should be able to get most of the required functionality (shaders, mipmaps, framebuffer objects, geometry shaders) as part of the OpenGL specification, not as extensions.
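A sketch of that version-based selection, assuming GLEW is used as the loader (glewInit() already called) and that the texture data is a simple RGBA image; the helper name is illustrative:

#include <GL/glew.h>
#include <GL/glu.h>

void build_mipmaps(GLuint tex, int w, int h, const void *rgba)
{
    glBindTexture(GL_TEXTURE_2D, tex);

    if (GLEW_VERSION_3_0) {
        // glGenerateMipmap is core here, no extension-string parsing needed.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, rgba);
        glGenerateMipmap(GL_TEXTURE_2D);
    } else if (GLEW_VERSION_1_4) {
        // The driver generates the levels as the base image is uploaded.
        glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, rgba);
    } else {
        // Last resort for genuinely old cards (slow, CPU-side).
        gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGBA, w, h, GL_RGBA, GL_UNSIGNED_BYTE, rgba);
    }
}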
If drivers lie, there's not much you can do about it. Also remember that glGenerateMipmapEXT is part of the GL_EXT_framebuffer_object extension.
What you are doing wrong is checking for the SGIS_generate_mipmap extension and then using GL_GENERATE_MIPMAP, since that enum belongs to core OpenGL, but that's not really the problem.
The issue you describe sounds like a very nasty OpenGL implementation bug; I would bypass it by using gluBuild2DMipmaps on those cards (keep a list and check at startup).