In this example code, framebuffers are dealt with before the context is set up.
I've read the man pages of the functions, but I still don't understand exactly what's going on.
So my question is, what exactly is a framebuffer in GLX and how significant is configuring it?
A framebuffer is an area of memory that holds a displayable image. You need one when creating an OpenGL context so that OpenGL has a place to store the image it renders.
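To make that concrete, here is a minimal sketch of the usual GLX order of operations: choose a framebuffer configuration first, then create the context from it (dpy is assumed to be an already-open X Display*; error checking is omitted):

#include <GL/glx.h>

static const int fbAttribs[] = {
    GLX_RENDER_TYPE,   GLX_RGBA_BIT,
    GLX_DRAWABLE_TYPE, GLX_WINDOW_BIT,
    GLX_DOUBLEBUFFER,  True,
    GLX_RED_SIZE, 8, GLX_GREEN_SIZE, 8, GLX_BLUE_SIZE, 8,
    GLX_DEPTH_SIZE, 24,
    None
};

int count = 0;
GLXFBConfig *configs = glXChooseFBConfig(dpy, DefaultScreen(dpy), fbAttribs, &count);

/* Both the window and the context are created from the chosen config,
   which is why the framebuffer configuration has to be settled first. */
GLXContext ctx = glXCreateNewContext(dpy, configs[0], GLX_RGBA_TYPE, NULL, True);
XFree(configs);

Configuring it matters because the chosen GLXFBConfig fixes the pixel format (color depth, depth/stencil buffers, double buffering, multisampling) that both the window and the context will use, so everything created afterwards has to be compatible with it.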
I am interested in writing a real-time ray tracing application in C++ and I heard that using OpenCL-OpenGL interoperability is a good way to do this (to make good use of the GPU), so I have started writing a C++ project using this interoperability and using GLFW for window management. I should mention that although I have some coding experience, I do not have much in C++ and have not worked with OpenCL or OpenGL before attempting this project, so I would appreciate it if answers are given with this in mind (that is, beginner-friendly terminology is preferred).
So far I have been able to get OpenCL-OpenGL interoperability working with an example using a vertex buffer object. I have also demonstrated that I can create image data with an RGBA array (at least on the CPU), send this to an OpenGL texture with glTexImage2D() and display it using glBlitFramebuffer().
My problem is that I don't know how to create an OpenCL kernel that can calculate pixel data suitable for the data parameter of glTexImage2D(). I understand that to use the interoperability, we must first create OpenGL objects and then create OpenCL objects from them, since these objects share memory. So I am assuming I must first create an empty OpenGL array object, then create an OpenCL array object from it, apply an appropriate kernel that writes the pixel data, and finally use the OpenGL array object as the data parameter in glTexImage2D(). But I am not sure what kind of object to use and have not seen any examples demonstrating this. A simple example showing how OpenCL can create pixel data for an OpenGL texture image (assuming a valid OpenCL-OpenGL context) would be much appreciated. Please do not leave any line out as I might not be able to fill in the blanks!
It's also very possible that the method I described above for implementing a ray tracer is not possible, or at least not recommended, so if this is the case please outline a recommended alternative method for sending OpenCL-computed pixel data to OpenGL and subsequently drawing it to the screen. The answer to this similar question does not go into enough detail for me, and the CL/GL interop link in it is not working. The answer mentions that this can be achieved using a renderbuffer rather than a texture, but the Khronos OpenGL wiki page for Renderbuffer Objects says at the bottom that the only way to send pixel data to them is via pixel transfer operations, and I cannot find any straightforward explanation of how to initialize data this way.
Note that I am using OpenCL C (no C++ bindings).
From your second paragraph, you are already creating an OpenCL context with a platform-specific combination of properties (CL_GLX_DISPLAY_KHR or CL_WGL_HDC_KHR, plus CL_GL_CONTEXT_KHR) to interoperate with OpenGL, and you can create a vertex buffer object that can be read and written as necessary by both OpenGL and OpenCL.
That's most of the work. In OpenGL you can copy any VBO into a texture with
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, myVBO);
glTexSubImage2D(GL_TEXTURE_2D, level, x, y, width, height, format, type, NULL);
When a pixel unpack buffer is bound, the last parameter is interpreted as a byte offset into that buffer, so the NULL at the end means "copy from the start of the bound buffer in GPU memory" rather than from CPU memory.
As with copying from regular CPU memory, you might also need to change the unpack alignment (glPixelStorei(GL_UNPACK_ALIGNMENT, ...)) if your rows aren't 4-byte aligned.
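Putting those pieces together with your interop setup, a rough sketch of the whole flow could look like the following. Names such as clCtx, queue, kernel, tex, width and height are placeholders for the objects you already have, the kernel is assumed to write one RGBA byte pixel per work item, and all error checking is omitted:

// 1. A GL buffer big enough for width*height RGBA bytes (no initial data).
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBufferData(GL_PIXEL_UNPACK_BUFFER, width * height * 4, NULL, GL_DYNAMIC_DRAW);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

// 2. Wrap the same buffer as an OpenCL memory object (shared storage, no copy).
cl_int err;
cl_mem clPixels = clCreateFromGLBuffer(clCtx, CL_MEM_WRITE_ONLY, pbo, &err);

// 3. Each frame: let OpenCL fill the buffer, then hand it back to OpenGL.
glFinish();                                           // GL must be done with the buffer
clEnqueueAcquireGLObjects(queue, 1, &clPixels, 0, NULL, NULL);
clSetKernelArg(kernel, 0, sizeof(cl_mem), &clPixels);
size_t global[2] = { (size_t)width, (size_t)height };
clEnqueueNDRangeKernel(queue, kernel, 2, NULL, global, NULL, 0, NULL, NULL);
clEnqueueReleaseGLObjects(queue, 1, &clPixels, 0, NULL, NULL);
clFinish(queue);                                      // CL must be done before GL reads

// 4. Copy the buffer into the texture; NULL means "read from the bound unpack buffer".
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

Inside the kernel the buffer would be declared as __global uchar4 * and indexed as y * width + x. Another route, if your OpenCL implementation supports images, is to share the texture itself with clCreateFromGLTexture and write to it with write_imagef from the kernel, which avoids the glTexSubImage2D copy entirely.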
The short form of the question: How can I draw in my QGLWidget using my FBO as a texture, without just getting a blank white image?
And, some background and details... I am using Qt 5.1 for an app that does some image processing on the GPU. I have a “compositor” class which uses a QOffscreenSurface, with a QOpenGLContext, and a QOpenGLFramebufferObject. It renders some things to the FBO. If the app is running in a render-only mode, the result gets written to a file. Run interactively, it gets shown on my subclass of QGLWidget, the “viewer.”
If I make a QImage from the FBO and draw that to the viewer, it works. However, this requires round-tripping from the GPU to a QImage and back to the GPU as a texture. In theory, I should be able to use the FBO as a texture directly, which is what I want.
I am trying to share between my QOpenGLContext and the QGLWidget’s QGLContext like so:
viewer = new tl::ui::glViewer(this);
compositor = new tl::playback::glCompositor(1280, 720, this);
viewer->context()->contextHandle()->setShareContext(compositor->context);
Is it possible to share between the two types of contexts? Is this the way to do it? Do I need to do something else to draw in the viewer using the FBO in the compositor? I’m just getting solid white when I draw the FBO directly instead of the QImage, so I’m clearly doing something wrong.
So I have figured out my problem. I was misinterpreting the documentation for setShareContext(), which notes that it "won't take effect until create() is called". I mistakenly thought that meant you had to share the context after it was created; instead, sharing has to be established before create() is called:
viewer = new tl::ui::glViewer(this);
compositor = new tl::playback::glCompositor(512, 512, viewer->context()->contextHandle(), this);
and the new constructor for my glCompositor:
// Off-screen surface the compositor context will render to
offscreenSurface = new QOffscreenSurface();
QSurfaceFormat format;
format.setMajorVersion(4);
format.setMinorVersion(0);
format.setProfile(QSurfaceFormat::CompatibilityProfile);
format.setSamples(0);
offscreenSurface->setFormat(format);
offscreenSurface->create();

// Share with the viewer's context *before* create() is called
context = new QOpenGLContext();
context->setShareContext(srcCtx);
context->setFormat(format);
context->create();
context->makeCurrent(offscreenSurface);

// FBO with a floating-point color attachment, created in the sharing context
QOpenGLFramebufferObjectFormat f;
f.setSamples(0);
f.setInternalTextureFormat(GL_RGBA32F);
frameBuffer = new QOpenGLFramebufferObject(w, h, f);
This creates a new FBO in a new context that shares with the viewer's context. When it comes time to draw, I just bind it with glBindTexture(GL_TEXTURE_2D, frameBuffer->texture());
I am marking the answer I got correct, even though it didn't directly solve my problem. It was very informative.
I believe your problem has to do with the fact that FBOs themselves cannot be shared across contexts. What you can, however, do is share the attached data stores (renderbuffers, textures, etc.).
It really boils down to the fact that FBOs are little more than a pretty front-end for managing collections of read/draw buffers; the FBOs themselves are not actually a sharable resource. In fact, despite the name, they are not even Buffer Objects as far as the rest of the API is concerned; have you ever wondered why you do not use glBufferData (...), etc. on them? :)
Critically Important Point:
FrameBuffer Objects are not Buffer Objects; they contain no data stores; they only manage state for attachment points and provide an interface for validation and binding semantics.
This is why they cannot be shared, the same way that Vertex Array Objects cannot be shared, but their constituent Vertex Buffer Objects can.
The takeaway message here is:
If it does not store data, it is generally not a sharable resource.
The solution you will have to pursue will involve using the renderbuffers and textures that are attached to your FBO. Provided you do not do anything crazy like try and render in both contexts at once, sharing these resources should not present too much trouble. It could get really ugly if you started trying to read from the renderbuffer or texture while the other context is drawing into it, so do not do this :P
Due to the following language in the OpenGL specification, you will probably have to detach the texture before using it in your other context:
OpenGL 3.2 (Core Profile) - 4.4.3 Feedback Loops Between Textures and the Framebuffer - pp. 230:
A feedback loop may exist when a texture object is used as both the source and destination of a GL operation. When a feedback loop exists, undefined behavior results. This section describes rendering feedback loops (see section 3.8.9) and texture copying feedback loops (see section 3.8.2) in more detail.
To put this bluntly, your white textures could be the result of feedback loops (OpenGL did not give this situation a name in versions of the spec. prior to 3.1, so proper discussion of "feedback loops" will be found in 3.1+ literature only). Because this invokes undefined behavior it will behave differently between vendors. Detaching will eliminate the source of undefined behavior and you should be good to go.
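For illustration only, a sketch of what "detach, then use" could look like at the raw GL level (fbo and sharedTex are placeholder names for the FBO in the compositor context and its shared color texture):

// In the compositor context: finish rendering, then detach the texture so it is
// no longer a potential render target (attaching texture name 0 detaches).
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, 0, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glFinish();   // crude but effective way to make the results visible to the other context

// In the viewer context: the texture name is valid here because textures,
// unlike FBOs, are shared objects.
glBindTexture(GL_TEXTURE_2D, sharedTex);
// ... draw a textured quad with it ...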
Instead of using glGenTextures() to get an unused texture ID, can I randomly choose a number, say 99999, and use it?
I will, of course, query:
glIsTexture( m_texture )
and make sure it's false before proceeding.
Here's some background:
I am developing an image slideshow app for the mac. Previewing the slideshow is flawless. To save the slideshow, I am rendering to an FBO. I create an AGL context, instantiate a texture with glGenTextures() and render to a frame buffer. All's well except for a minor issue.
After I've saved the slideshow and return to the main window, all my image thumbnails are grey, i.e. the textures have been cleared.
I have investigated it and found out that the image thumbnails and my FBO texture somehow have the same texture ID. When I delete my FBO texture at the end of the slideshow saving operation, the thumbnail textures are also lost. This is weird because I have an AGL context during saving, and the main UI has another AGL context, presumably created in the background by Core Image, over which I have no control.
So my options, as I see it now, is to:
Not delete the FBO texture.
Randomly choose a high texture ID in the hopes that the main UI will not be using it.
Actually, I read that you do not necessarily need to delete your texture if you are deleting the AGL OpenGL context, because deleting your OpenGL context automatically deletes all associated textures. Is this true? If yes, option 1 makes more sense.
I do realize that there are funny things happening here that I can't really explain. For example, after I've saved my image slideshow to an .mov file, I delete the context, which was created in the same class. By rights, it should not affect textures created in another context. But it does, and I seriously can't explain that.
Allow me to answer your basic question first: yes, it is legal in OpenGL 2.1 to simply pick a texture name arbitrarily and use it as a texture. Note that it is not legal in 3.2 core. The fact that the OpenGL ARB removed this ability should help illustrate whether it is a good idea or not.
Now, you seem to have run into an odd bug. glGenTextures should never return a texture name that is already in use. After all that is what it is for. You appear to somehow be getting textures that are already in use. I don't know how that happens.
An OpenGL context, when destroyed, will delete all resources specific to that context. If you have created a second context that shares resources with the first, deleting the second context will not delete the shared resources (textures, buffers, etc). It will delete un-shared resources (VAOs, etc), but everything else will remain.
I would suggest not creating and deleting the FBO texture constantly. Just create it on application startup. Worst-case, you'll have to reallocate storage for the object as needed (via glTexImage2D), if you need a different size or something.
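A rough sketch of that approach, assuming a member GLuint m_fboTexture and the target size in width/height:

// At startup: create the texture once and keep the name for the lifetime of the app.
glGenTextures(1, &m_fboTexture);
glBindTexture(GL_TEXTURE_2D, m_fboTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Whenever the slideshow resolution changes: reallocate storage under the same name;
// no glDeleteTextures needed.
glBindTexture(GL_TEXTURE_2D, m_fboTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);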
To anybody who has this problem: make sure your context is fully initialized; until it is, glGenTextures will always return 0. I didn't realize what was happening at first because it seems 0 is still a valid texture ID.
So what I need is simple: imagine we have no GUI at all, just SSH access to some Linux box where we are going to build and host our app, and that app should generate a video stream. We have an SDL app with an OpenGL shader in it. All we want is to get the rendering (as we would normally have in the SDL window) as a char* (of size W*H*3). How do I do such a thing? How do I make SDL render not onto its GUI window but into some swappable pointer?
To be of any use, OpenGL should be hardware accelerated, so first check if your server does have a GPU that meets your requirements. If you're on a rented virtual server or some standard root server, then you very likely don't have a GPU.
If you have a GPU, then there are two possible methods:
Method 1 -- the easy one
You'll (unfortunately) have to configure and start the X server for it, and this X server must also be the current virtual terminal (i.e. it must be the active thing on the graphics card). Then you give the user who'll be running the video generator access to that X display (read man xauth and what it references).
The next step is independent of SDL; it's an OpenGL thing: create a Framebuffer Object onto which the desired graphics is rendered. A PBuffer would work as well, and actually I'd prefer it in this situation; however, I have found Framebuffer Objects to be more reliable than PBuffers on current Linux and its drivers.
Then render to this Framebuffer Object or PBuffer as usual and retrieve the content using glReadPixels.
Method 2 -- the flexible one
On the low level this is quite similar to Method 1, but things get abstracted away for you: get VirtualGL (http://www.virtualgl.org/) to perform the actual OpenGL rendering on the GPU. Instead of starting your application on a secondary X server, you make direct use of the VirtualGL server it provides, sending it the GLX stream and getting a JPEG image stream back. You could also use a secondary X server running a virtual framebuffer and take a continuous screen capture of that. Or, probably most elegant: write your own X.Org video driver that passes the video to the video streamer directly.
You cannot directly render to a byte array in OpenGL.
There are two ways to work with this. The first way is the simplest and doesn't require context gimmickery, and the second way does.
So first, the simple way.
In order for OpenGL to work, you need to have a window. That doesn't mean the window needs to be visible, but you need to create one to get a valid OpenGL context. Therefore Step 1: Create a window and minimize it.
Now, in order to get valid rendering, the pixels in the framebuffer must pass the "pixel ownership test." When rendering to the framebuffer that holds the screen itself, pixels of the window that are not actually visible on screen fail the pixel ownership test. So the values of those pixels are undefined if you use glReadPixels.
However, this only pertains to the default framebuffer that is associated with the window. Framebuffer objects always pass the pixel ownership test. Therefore, Step 2: Create a framebuffer object and the associated renderbuffers for your needs.
From there, it's pretty simple. Just render as normal and do a glReadPixels when you want to get the data. Pixel buffer objects can be used to asynchronously transfer pixel data, if performance is a concern. Step 3: Render and use glReadPixels to get the data.
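A minimal sketch of Steps 2 and 3, assuming a context is already current and the frame dimensions are W and H (completeness checks and error handling omitted):

// Step 2: an FBO with a single RGB color renderbuffer. Its pixels always pass
// the ownership test, no matter what the (minimized) window looks like.
GLuint fbo, rbo;
glGenFramebuffers(1, &fbo);
glGenRenderbuffers(1, &rbo);
glBindRenderbuffer(GL_RENDERBUFFER, rbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB8, W, H);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rbo);

// Step 3: render as usual, then read the result into a plain byte array.
// ... draw the frame here ...
unsigned char *pixels = (unsigned char *)malloc(W * H * 3);   // the char* of size W*H*3
glPixelStorei(GL_PACK_ALIGNMENT, 1);                          // rows are tightly packed
glReadPixels(0, 0, W, H, GL_RGB, GL_UNSIGNED_BYTE, pixels);
// Note: OpenGL's origin is the bottom-left corner, so the rows come out bottom-up.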
The second way is more widely available (FBOs require extension support or OpenGL 3.0), but more platform-specific.
Instead of creating an FBO in step 2, you instead have Step 2: use glXCreatePbuffer to create a pbuffer. A pbuffer is an off-screen render target that acts like the default framebuffer. You glXMakeContextCurrent to tell OpenGL to render to the pbuffer instead of the default framebuffer.
Steps 1 and 3 are the same as above.
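For completeness, a rough GLX sketch of that second path (dpy, fbConfig and ctx are placeholders for an existing display, framebuffer config and context; fbConfig must have been chosen with GLX_PBUFFER_BIT in its GLX_DRAWABLE_TYPE):

// An off-screen drawable that behaves like a default framebuffer.
const int pbAttribs[] = {
    GLX_PBUFFER_WIDTH,  W,
    GLX_PBUFFER_HEIGHT, H,
    None
};
GLXPbuffer pbuffer = glXCreatePbuffer(dpy, fbConfig, pbAttribs);

// Render into the pbuffer instead of a window, then read back as in Step 3.
glXMakeContextCurrent(dpy, pbuffer, pbuffer, ctx);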
I ran into an issue with some OpenGL code. The thing is that I want to achieve full-scene anti-aliasing and I don't know how. I turned on forced anti-aliasing in the Nvidia control panel and that is the result I actually want to achieve in my own code. Right now I do it with GL_POLYGON_SMOOTH, which is obviously neither efficient nor good-looking. Here are the questions:
1) Should I use multisampling?
2) Where in the pipeline does OpenGL blend the colors for anti-aliasing?
3) What alternatives exist besides GL_*_SMOOTH and multisampling?
GL_POLYGON_SMOOTH is not a method to do Full-screen AA (FSAA).
Not sure what you mean by "not efficient" in this context, but it certainly is not good looking, because of its tendency to blend in the middle of meshes (at the triangle edges).
Now, with respect to FSAA and your questions:
Multisampling (aka MSAA) is the standard way today to do FSAA. The usual alternative is super-sampling (SSAA), which consists of rendering at a higher resolution and downsampling at the end. It's much more expensive.
The specification says that logically, the GL keeps a sample buffer (4x the size of the pixel buffer, for 4xMSAA), and a pixel buffer (for a total of 5x the memory), and on each sample write to the sample buffer, updates the pixel buffer with the resolved value from the current 4 samples in the sample buffer (It's not called blending, by the way. Blending is what happens at the time of the write into the sample buffer, controlled by glBlendFunc et al.). In practice, this is not what happens in hardware though. Typically, you write only to the sample buffer (and the hardware usually tries to compress the data), and when comes the time to use it, the GL implementation will resolve the full buffer at once, before the usage happens. This also helps if you actually use the sample buffer directly (no need to resolve at all, then).
I covered SSAA and its cost. The latest technique is called Morphological anti-aliasing (MLAA), and is actively being researched. The idea is to do a post-processing pass on the fully rendered image, and anti-alias what looks like sharp edges. Bottom line is, it's not implemented by the GL itself, you have to code it as a post-processing pass. I include it for reference, but it can cost quite a lot.
I wrote a post about this here: Getting smooth, big points in OpenGL
You have to specify WGL_SAMPLE_BUFFERS and WGL_SAMPLES (or their GLX equivalents on X.Org/GLX) before creating your OpenGL context, when selecting a pixel format or visual.
On Windows, make sure that you use wglChoosePixelFormatARB() if you want a pixel format with extended traits, NOT ChoosePixelFormat() from GDI/GDI+. wglChoosePixelFormatARB has to be queried with wglGetProcAddress from the ICD driver, so you need to create a dummy OpenGL context beforehand. WGL function pointers are valid even after the OpenGL context is destroyed.
WGL_SAMPLE_BUFFERS is a boolean (1 or 0) that toggles multisampling. WGL_SAMPLES is the number of samples per pixel you want, typically 2, 4 or 8.
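For example, on Windows the attribute list for a 4x multisampled pixel format might look roughly like this (hdc is the window's device context, and the wglChoosePixelFormatARB pointer is assumed to have been fetched via wglGetProcAddress on the dummy context mentioned above):

const int attribs[] = {
    WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
    WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
    WGL_DOUBLE_BUFFER_ARB,  GL_TRUE,
    WGL_PIXEL_TYPE_ARB,     WGL_TYPE_RGBA_ARB,
    WGL_COLOR_BITS_ARB,     32,
    WGL_DEPTH_BITS_ARB,     24,
    WGL_SAMPLE_BUFFERS_ARB, 1,    // enable multisampling
    WGL_SAMPLES_ARB,        4,    // 4x MSAA
    0                             // terminator
};

int pixelFormat = 0;
UINT numFormats = 0;
wglChoosePixelFormatARB(hdc, attribs, NULL, 1, &pixelFormat, &numFormats);
// Then SetPixelFormat(hdc, pixelFormat, &pfd) and create the real context on this HDC.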