OpenGL Qt 4.8 Render to texture floating point - c++

So I'm working on a project based on Qt 4.8, which means that when using OpenGL I have to go through the QGL classes.
My goal is to write data into a floating-point texture to perform per-pixel picking (three values are written for each pixel: two integers and a float).
I used QGLFramebufferObject and the offscreen rendering happens, but I'm having trouble retrieving my data. The first thing I looked into was specifying the internal format for the FBO, but when trying to use the format I need, GL_RGB32F, the compiler can't find it. I checked the context and it is a 3.1 core profile, so the format should be there. My second problem is clamping: values read back from the buffer are normalized, so I know I have to disable clamping with glClampColorARB, but the compiler doesn't find that either.
So I guess my question is: how do I load what's missing so the compiler can find the internal-format constant and the clamping function?
Thanks

I would guess that you are compiling against an old OpenGL header. On MS Windows the default GL/gl.h covers something like version 1.1 :-(
AFAIK the Qt headers for the GL-related classes don't include everything in OpenGL, just the minimum needed to work. You should get your own copy of glcorearb.h, say from www.opengl.org, and include that in your source code.
(What you are attempting can be done: I have a Linux/MSWin/Mac program built with Qt 4.8.6 that renders to offscreen floating-point buffers. I'd offer code, but I created the framebuffer directly with OpenGL calls rather than using the Qt class.)
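For reference, here is a rough sketch of the Qt-class route. It assumes GL_RGB32F and glClampColor are visible once glcorearb.h (or an extension loader) is included, and that on Windows the post-1.1 entry points have been resolved, e.g. through QGLContext::getProcAddress; the function names here are just illustrative.

#include <QGLFramebufferObject>
#include <QGLFramebufferObjectFormat>

// Create an offscreen buffer whose color attachment stores unclamped 32-bit floats.
QGLFramebufferObject *createPickBuffer(int w, int h)
{
    QGLFramebufferObjectFormat fmt;
    fmt.setInternalTextureFormat(GL_RGB32F);         // constant comes from glcorearb.h, not GL/gl.h
    fmt.setAttachment(QGLFramebufferObject::Depth);
    return new QGLFramebufferObject(w, h, fmt);
}

// Read the three picking values back without normalization.
void readPickPixel(QGLFramebufferObject *fbo, int x, int y, float out[3])
{
    fbo->bind();
    glClampColor(GL_CLAMP_READ_COLOR, GL_FALSE);     // disable clamping of read-back values
    glReadPixels(x, y, 1, 1, GL_RGB, GL_FLOAT, out);
    fbo->release();
}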

Is there any equivalent for gluScaleImage function?

I am trying to load a texture with non-power-of-two (NPOT) dimensions in an application that uses the OGLplus library. I use images::Image to load an image as a texture, and when I call the Context::Bound function to set the texture, it throws an exception. When the input image has POT dimensions, it works fine.
I checked the OGLplus source code and it seems to use the glTexImage2D function. I know I could use gluScaleImage to scale my input image, but it is dated and I want to avoid it. Are there any functions in newer libraries such as GLEW or OGLplus with the same functionality?
It has been 13 years (OpenGL 2.0) since the power-of-two restriction on texture sizes was lifted. Just load the texture with glTexImage and, if needed, generate the mipmaps with glGenerateMipmap.
EDIT: If you truly want to scale the image prior to uploading to an OpenGL texture, I can recommend stb_image_resize.h — a one-file public domain library that does that for you.
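For example, a plain NPOT upload needs nothing special beyond (possibly) relaxing the row alignment. A minimal sketch, where width, height and pixels are placeholders:

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);               // NPOT widths often break the default 4-byte row alignment
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8,
             width, height, 0,                       // e.g. 640 x 427 -- no power-of-two requirement
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glGenerateMipmap(GL_TEXTURE_2D);                     // core since OpenGL 3.0
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);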

Getting a pixelformat/context with stencil buffer with Mesa OpenGL

I need to change a very old application so that it works over Remote Desktop Connection (which only supports a subset of OpenGL 1.1). It only needs various OpenGL 1.x functions, so I'm trying the trick of placing a Mesa opengl32.dll file in the application folder. The application only makes sparse use of OpenGL, so a low-performance software renderer is acceptable.
Anyway, I obtained a precompiled Mesa opengl32.dll from https://wiki.qt.io/Cross_compiling_Mesa_for_Windows, but I can't get a pixel format/context with a stencil buffer enabled. If I disable stencil buffer use then everything else works, but it would be best if I could figure out how to get a pixel format/context with a stencil buffer.
Here's the pixel-format part of the context creation code:
function gl_context_create_init(adevice_context: hdc): int;
var
  pfd, pfd2: tpixelformatdescriptor;
begin
  mem_zero(pfd, sizeof(pfd));
  pfd.nSize := sizeof(pfd);
  pfd.nVersion := 1;
  pfd.dwFlags := PFD_DRAW_TO_WINDOW or PFD_SUPPORT_OPENGL or PFD_DOUBLEBUFFER;
  pfd.iPixelType := PFD_TYPE_RGBA;
  pfd.cColorBits := 32;
  pfd.iLayerType := PFD_MAIN_PLANE;
  pfd.cStencilBits := 4;
  gl_pixel_format := choosepixelformat(adevice_context, #pfd);
  if gl_pixel_format = 0 then
    gl_error('choosepixelformat');
  if not setpixelformat(adevice_context, gl_pixel_format, #pfd) then
    gl_error('setpixelformat');
  describepixelformat(adevice_context, gl_pixel_format, sizeof(pfd2), pfd2);
  if ((pfd.dwFlags and pfd2.dwFlags) <> pfd.dwFlags) or
     (pfd.iPixelType <> pfd2.iPixelType) or
     (pfd.cColorBits <> pfd2.cColorBits) or
     (pfd.iLayerType <> pfd2.iLayerType) or
     (pfd.cStencilBits > pfd2.cStencilBits) then
    gl_error('describepixelformat');
  ...
end;
The error happens at the line (pfd.cStencilBits>pfd2.cStencilBits): I can't seem to find a pixel format through Mesa whose cStencilBits is not 0, so I can't get a context that supports stencils.
Well, it turns out that choosepixelformat cannot choose a pixel format that is only available through the Mesa opengl32.dll; however, wglchoosepixelformat can. So my problem is solved, and I have now been able to get the stencil buffers to work while using Remote Desktop Connection with this old program.
The thing I don't understand, but don't have time to look into (if you know the answer, please post it in the comments of this answer), is that setpixelformat and describepixelformat both work perfectly fine with pixel formats that are only available through Mesa. I expected choosepixelformat/setpixelformat/describepixelformat to either all work or all fail, but this is how it is.
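For anyone hitting the same wall, here is an untested C++ sketch of the same workaround (the Pascal version is the direct equivalent): resolve wglChoosePixelFormat from the opengl32.dll that is actually loaded (here, the Mesa one) and use it in place of GDI's ChoosePixelFormat.

#include <windows.h>

typedef int (WINAPI *PFN_wglChoosePixelFormat)(HDC, const PIXELFORMATDESCRIPTOR *);

// Returns a pixel format index chosen by the GL driver rather than by GDI,
// or 0 on failure.
int choose_format_via_driver(HDC hdc, const PIXELFORMATDESCRIPTOR *pfd)
{
    HMODULE gl = LoadLibraryA("opengl32.dll");       // the Mesa DLL sitting next to the exe
    if (!gl)
        return 0;
    PFN_wglChoosePixelFormat choose =
        (PFN_wglChoosePixelFormat)GetProcAddress(gl, "wglChoosePixelFormat");
    return choose ? choose(hdc, pfd) : 0;
}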

What is the use case of cudaGraphicsRegisterFlagsWriteDiscard in cudaGraphicsGLRegisterImage?

I'm fairly new to CUDA, but I've managed to display something generated by a kernel on the screen using OpenGL. I've tried several approaches:
Using a PBO and an OpenGL texture (old style);
Using a OpenGL texture as a CUDA surface and rendering on a quad (new style);
Using a renderbuffer as a CUDA surface and rendering using glBlitFramebuffer.
All of them worked, but, while implementing #2, I erroneously set the hint as cudaGraphicsRegisterFlagsWriteDiscard. Since all of the data would be generated by CUDA, I thought this was the correct option. However, I later realized that I needed a CUDA surface to write to an OpenGL texture, and when you use a surface you are required to register the resource with the SurfaceLoadStore flag.
So basically my question is this: since I absolutely need a CUDA surface to write to an OpenGL texture in CUDA, what is the use case of cudaGraphicsRegisterFlagsWriteDiscard in cudaGraphicsGLRegisterImage?
The documentation description seems pretty straightforward. It is for one-way delivery of data from CUDA to OpenGL.
This online book excerpt provides a similar explanation:
Applications where CUDA is the producer and OpenGL is the consumer should register the objects with a write-discard flag...
If you want to see an example, take a look at the postProcessGL cuda sample. In that case, OpenGL is rendering an image, and it's being post-processed (blur added) by cuda, before display. In this case, there are two separate pathways for data flow. In the OpenGL->CUDA case, the data is handled by the createTextureSrc function, and the flag specified is read-only. For the CUDA->OpenGL case (delivery of the post-processed frame) the function is handled in createTextureDst, where a call is made to cudaGraphicsGLRegisterImage with the cudaGraphicsMapFlagsWriteDiscard flag specified, since on this path, CUDA is producing and OpenGL is consuming.
To understand how the textures are handled (populated with data from the cuda operations via a cudaArray) you probably want to study the sequence of operations in processImage().
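As a rough illustration of that one-way path (this is not the sample's exact code; glTex, d_output, width and height are placeholders), registering with the write-discard flag and then copying kernel output into the texture's underlying cudaArray looks something like this:

#include <cuda_gl_interop.h>

cudaGraphicsResource_t res;
cudaGraphicsGLRegisterImage(&res, glTex, GL_TEXTURE_2D,
                            cudaGraphicsRegisterFlagsWriteDiscard);   // CUDA produces, OpenGL consumes

// Each frame, after a kernel has filled d_output (tightly packed RGBA8):
cudaGraphicsMapResources(1, &res, 0);
cudaArray_t arr;
cudaGraphicsSubResourceGetMappedArray(&arr, res, 0, 0);
cudaMemcpy2DToArray(arr, 0, 0, d_output,
                    width * 4, width * 4, height,                     // pitch and row width in bytes
                    cudaMemcpyDeviceToDevice);
cudaGraphicsUnmapResources(1, &res, 0);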

How to implement 3d texturing in OpenGL 3.3

So I have just realized that the code I was working on for 3d textures was for OpenGL 1.1 or something and is no longer supported in OpenGL 3.3. Is there another way to do this without glTexture3D? Perhaps through a library or another function in OpenGL 3.3 that I do not know about?
EDIT:
I am not sure where I read that 3d texturing was taken out of OpenGL in newer versions (been googling a lot today), but consider this:
I have been following the tutorial/guide here. The program compiles without a hitch. Now read the following quote from the article:
The potential exists that the environment the program is being run on does not support 3D texturing, which would cause us to get a NULL address back, and attempting to use a NULL pointer is A Bad Thing so make sure to check for it and respond appropriately (the provided example exits with an error).
That quote is referring to the following function:
glTexImage3D = (PFNGLTEXIMAGE3DPROC) wglGetProcAddress("glTexImage3D");
When running my program on my computer (which has OpenGL 3.3) that same function returns null for me. When my friend runs it on his computer (which has OpenGL 1.2) it does not return null.
The way one uploads 3D textures has not changed since OpenGL 1.2. The functions for this are still named:
glTexImage3D
glTexSubImage3D
glCopyTexSubImage3D
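On Windows these are post-1.1 entry points, so they have to be resolved at run time (with GLEW, glad, or wglGetProcAddress), and wglGetProcAddress only works once a GL context has been made current, which is a common reason for getting NULL back. A minimal upload that is valid on a 3.3 core context, with width, height, depth and voxels as placeholders, would be:

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_3D, tex);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glTexImage3D(GL_TEXTURE_3D, 0, GL_R8,                // one 8-bit channel per voxel
             width, height, depth, 0,
             GL_RED, GL_UNSIGNED_BYTE, voxels);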

Rendering Vector Graphics in OpenGL? [duplicate]

This question already has answers here: Displaying SVG in OpenGL without intermediate raster (5 answers).
Is there a way to load a vector graphics file and then render it using OpenGL? This is a vague question, as I don't know much about file formats for vector graphics; I do know of SVG.
Converting it to a raster image isn't really helpful, as I want to zoom in on the objects in real time.
I see most of the answers are about Qt somehow, even though the original question doesn't mention it. Here's my answer in terms of OpenGL alone (which also benefits greatly from the passage of time, as it could not have been given in 2010):
Since 2011, the state of the art is Mark Kilgard's baby, NV_path_rendering, which is currently only a vendor (Nvidia) extension, as you might have guessed from its name. There is plenty of material on it:
https://developer.nvidia.com/nv-path-rendering Nvidia hub, but some material on the landing page is not the most up-to-date
http://developer.download.nvidia.com/devzone/devcenter/gamegraphics/files/opengl/gpupathrender.pdf Siggraph 2012 paper
http://on-demand.gputechconf.com/gtc/2014/presentations/S4810-accelerating-vector-graphics-mobile-web.pdf GTC 2014 presentation
http://www.opengl.org/registry/specs/NV/path_rendering.txt official extension doc
NV_path_rendering is now used by Google's Skia library behind the scenes, when available. (Nvidia contributed the code in late 2013 and 2014.)
You can of course load SVGs and the like (https://www.youtube.com/watch?v=bCrohG6PJQE). The extension also supports the PostScript syntax for paths. You can also mix path rendering with other OpenGL (3D) stuff, as demoed at:
https://www.youtube.com/watch?v=FVYl4o1rgIs
https://www.youtube.com/watch?v=yZBXGLlmg2U
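To give a flavor of the API, here is a small untested fragment of the basic "stencil, then cover" idiom; it assumes the extension's entry points are already loaded and the stencil state is configured as the spec describes:

const char *svg = "M100,180 L40,10 L190,120 L10,120 L160,10 Z";    // a star, in SVG path syntax
GLuint path = glGenPathsNV(1);
glPathStringNV(path, GL_PATH_FORMAT_SVG_NV, (GLsizei)strlen(svg), svg);
glStencilFillPathNV(path, GL_COUNT_UP_NV, 0xFF);   // rasterize coverage into the stencil buffer
glCoverFillPathNV(path, GL_BOUNDING_BOX_NV);       // shade the covered pixels
glDeletePathsNV(path, 1);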
An upstart having even less (or downright no) vendor support or academic glitz is NanoVG, which is currently developed and maintained. (https://github.com/memononen/nanovg) Given the number of 2D libraries over OpenGL that have come and gone over time, you're taking a big bet using something not supported by a major vendor, in my humble opinion.
This isn't an implementation, but very relevant to your question and viewers.
Chapter 25. Rendering Vector Art on the GPU
https://developer.nvidia.com/gpugems/GPUGems3/gpugems3_ch25.html
Let me expand on Greg's answer.
It's true that Qt has an SVG renderer class, QSvgRenderer. Also, any drawing that you do in Qt can be done on any "QPaintDevice", and here we're interested in the following "paint devices":
A Qt widget;
In particular, a GL-based Qt widget (QGLWidget);
A Qt image
So, if you decide to use Qt, your options are:
Stop using your current method of setting up the window (and GL context), and start using QGLWidget for all your rendering, including the SVG rendering. This might be a pretty small change, depending on your needs. QGLWidget isn't particularly limiting in its capabilities.
Use QSvgRenderer to render to a QImage, then put the data from that QImage into a GL texture (as you normally would) and render it any way you want (e.g. onto a textured quad); a sketch of this path follows below. It might perform worse than the other method, but it requires the least change to your code.
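Here is a rough Qt 4.x sketch of that second option (the function name and size are made up for illustration):

#include <QSvgRenderer>
#include <QImage>
#include <QPainter>
#include <QGLWidget>

// Renders an SVG file into a QImage and uploads it as a texture
// for the given QGLWidget's context.
GLuint svgToTexture(QGLWidget *glWidget, const QString &svgPath, const QSize &size)
{
    QSvgRenderer svg(svgPath);
    QImage image(size, QImage::Format_ARGB32);
    image.fill(Qt::transparent);                     // keep the SVG's alpha
    QPainter painter(&image);
    painter.setRenderHint(QPainter::Antialiasing);
    svg.render(&painter);
    painter.end();
    return glWidget->bindTexture(image, GL_TEXTURE_2D, GL_RGBA);
}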
Wondering what QGLWidget does exactly? Well, when you issue Qt rendering commands to a QGLWidget, they're translated to GL calls for you. And this also happens when the rendering commands are issued by the SVG renderer. So in the end, your SVG is going to end up being rendered via a bunch of GL primitives (lines, polygons, etc).
This has a disadvantage. Different video cards implement OpenGL slightly differently, and Qt does not (and cannot) account for all those differences. So, for example, if your user has a cheap on-board Intel video card, it may not support OpenGL antialiasing, which means your SVG will also look aliased (jagged) if you render it directly to a QGLWidget. Going through a QImage avoids such problems.
You can use the QImage method when you're zooming in realtime, too. It just depends on how fast you need it to be. You may need careful optimizations such as reusing the same QImage, and enabling clipping for your QPainter.
Qt has good support for directly rendering SVG images using OpenGL functionality (see the documentation for QSvgRenderer).
I hope that helps.
OpenGL has primitives like GL_LINES and GL_LINE_STRIP for drawing lines in space, if that's what you mean. Edit: This site has some information: http://www.falloutsoftware.com/tutorials/gl/gl2p5.htm
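If a quick test is all you want, the legacy immediate-mode form is the shortest way to put a line strip on screen (modern code would put the vertices in a vertex buffer instead); the coordinates here are arbitrary:

glColor3f(1.0f, 1.0f, 1.0f);
glBegin(GL_LINE_STRIP);
glVertex2f(-0.5f, -0.5f);
glVertex2f( 0.0f,  0.5f);
glVertex2f( 0.5f, -0.5f);
glEnd();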