I would like to perform some 3D rendering on my Debian machine and bring the result into client-side memory.
I've created a C++, GLX, and GLEW based application that does not require a window. I get a display with XOpenDisplay, use it to find a suitable framebuffer configuration with glXChooseFBConfig (passing the DefaultScreen of the display), obtain visual info with glXGetVisualFromFBConfig, and pass the relevant information to glXCreateContext. I make that context current and initialize GLEW.
As a starting test, I'm simply clearing the default framebuffer with a variety of colors; I would like to now query the result pixel-by-pixel, presumably with glReadPixels.
But this is where I seem to be fundamentally misunderstanding something: What are the dimensions of the default framebuffer? I never define an initial height or a width for it, and I'm not seeing a way to do so.
Answers such as this one imply the window "defines" the dimensions. In my application, is the DefaultScreen defining the dimensions? If that's the case, what can I do to make the default framebuffer larger than a particularly small screen?
and pass the relevant information to glXCreateContext. I then initialize GLEW.
Just because you have a context does not mean that you can immediately use it. Consider what happens if you have more than one context. Before you can make OpenGL calls, you have to make a context active on the current thread, using glXMakeCurrent or glXMakeContextCurrent. If you look at those functions' signatures, you'll see that they take a Drawable as a parameter in addition to the OpenGL context. So you need one of those, too.
For windowless operation GLX offers PBuffers, which are windowless drawables. Alternatively you could use a window that you never map to the screen. PBuffers allow you to do offscreen rendering without the use of framebuffer objects, but the use of their main framebuffer is a bit finicky. My recommendation is to use a 0×0 sized PBuffer together with a framebuffer object.
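A minimal sketch of that PBuffer setup might look like the following (here dpy, fbConfig, and ctx are assumed to be the Display*, a GLXFBConfig chosen earlier with glXChooseFBConfig, and the created context; the 1×1 size and error handling are illustrative only):
// The chosen FBConfig should include GLX_PBUFFER_BIT in its GLX_DRAWABLE_TYPE.
// The PBuffer only exists so the context has a drawable to be made current with;
// actual rendering then goes to a framebuffer object.
int pbAttribs[] = {
    GLX_PBUFFER_WIDTH,  1,
    GLX_PBUFFER_HEIGHT, 1,
    None
};
GLXPbuffer pbuf = glXCreatePbuffer(dpy, fbConfig, pbAttribs);
// Bind the context to this thread, using the PBuffer as both draw and read drawable
if (!glXMakeContextCurrent(dpy, pbuf, pbuf, ctx)) {
    /* handle the error */
}
glewInit(); // OpenGL calls (including GLEW initialization) are only valid from here on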
You need to use framebuffer objects. These use textures whose dimensions you have specified. For example:
// Generate framebuffer ID
GLuint fb;
glGenFramebuffers(1, &fb);
// Make framebuffer active
glBindFramebuffer(GL_FRAMEBUFFER, fb);
// Attach color and depth textures (must have same dimensions)
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, color_tex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depth_tex, 0);
// Check framebuffer is valid
assert(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE);
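The color_tex and depth_tex attachments above are ordinary textures that you allocate yourself, and their dimensions are what effectively define the size of your render target. A rough sketch of that allocation (the 512×512 size is just an example):
GLuint color_tex, depth_tex;
glGenTextures(1, &color_tex);
glBindTexture(GL_TEXTURE_2D, color_tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glGenTextures(1, &depth_tex);
glBindTexture(GL_TEXTURE_2D, depth_tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 512, 512, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
With the FBO bound, set glViewport to the same dimensions before drawing; glReadPixels will then read back from this attachment rather than from any screen-sized buffer.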
Related
In my application I have a module that manages rendering using OpenGL, and renders the results into QOpenGLFramebufferObjects. I know this much works, because I'm able to save their contents using QOpenGLFramebufferObject::toImage() and see the render.
I am now attempting to take the texture ID returned from QOpenGLFramebufferObject::texture(), bind it as a normal OpenGL texture, and render it into a QOpenGLWidget viewport using a fullscreen quad. (I'm using this two-step method because I'm not aware of a way to get around the fact that QOpenGLWidgets each work in their own context, but that's a different story.)
The problem here is that glBindTexture() generates InvalidOperation when I call it. According to the OpenGL documentation, this is because "[The] texture was previously created with a target that doesn't match that of [the input]." However, I created the framebuffer object by passing GL_TEXTURE_2D into the constructor, and am passing the same in as the target to glBindTexture(), so I'm not sure where I'm going wrong. There isn't much documentation online about how to correctly use QOpenGLFramebufferObject::texture().
Other supplementary information, in case it helps:
The creation of the frame buffer object doesn't set any special formats. They're left at whatever defaults Qt uses. As far as I know, this means it also has no depth or stencil attachments as of yet, though once I've got the basics working this will probably change.
Binding the FBO before binding its texture doesn't seem to make a difference.
QOpenGLFramebufferObject::isValid() returns true.
Calling glIsTexture() on the texture handle returns false, but I'm not sure why this would be given that it's a value provided to me by Qt for the purposes of binding an OpenGL texture. The OpenGL documentation does mention that "a name returned by glGenTextures, but not yet associated with a texture by calling glBindTexture, is not the name of a texture", but here I can't bind it anyway.
I'm attempting to bind the texture in a different context to the one the FBO was created in (i.e. the QOpenGLWidget's context instead of the render module's context).
I'll provide some code, but a lot of what I have is specific to the systems that exist in the rendering module, so there's only a small amount of relevant OpenGL code.
In the render module context:
QOpenGLFramebufferObject* fbo = new QOpenGLFramebufferObject(QSize(...), GL_TEXTURE_2D);
// Do some rendering in this context later
In the QOpenGLWidget context, after having rendered to the frame buffer in the rendering module:
GLuint textureId = fbo->texture();
glBindTexture(GL_TEXTURE_2D, textureId); // Invalid operation
EDIT: It turns out the culprit was that my contexts weren't actually being shared, as I'd misinterpreted what the Qt::AA_ShareOpenGLContexts application attribute did. Once I made them properly shared the issue was fixed.
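For reference, the sharing setup that ended up working might look roughly like this (a sketch only; the renderContext name and the assumption that the render module creates its own QOpenGLContext are mine, not from the original code):
// In main(), before the QApplication is constructed:
QCoreApplication::setAttribute(Qt::AA_ShareOpenGLContexts);
QApplication app(argc, argv);
// A context created manually (e.g. in the render module) still has to opt in to sharing
// with the application's global share context:
QOpenGLContext renderContext;
renderContext.setFormat(QSurfaceFormat::defaultFormat());
renderContext.setShareContext(QOpenGLContext::globalShareContext());
renderContext.create();
Once the contexts are in the same share group, the texture name returned by QOpenGLFramebufferObject::texture() is valid in the QOpenGLWidget's context as well.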
I created a renderbuffer that is then modified in OpenCL.
//OpenGL
glGenFramebuffers(1, &frameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
glGenRenderbuffers(1, &colorRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, 600, 600);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRenderbuffer);
//OpenCL
renderEngine = new OpenCLProgram("render.cl");
renderEngine->addArgumentGLRBuffer(colorRenderbuffer);
How would I then proceed to draw my OpenCL creation, i.e. the buffer, to the screen? I could bind it to a texture and draw a quad the size of my window, but I'm not sure that this is the most efficient way. Also, if there is a better way of drawing to the screen from OpenCL, that would help!
The call you're looking for is glBlitFramebuffer(). To use this, you bind your FBO as the read framebuffer, and the default framebuffer as the draw framebuffer:
glBindFramebuffer(GL_READ_FRAMEBUFFER, srcFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, srcWidth, srcHeight, 0, 0, dstWidth, dstHeight,
GL_COLOR_BUFFER_BIT, GL_NEAREST);
Adjust the parameters for your specific use based on the linked man page.
This is preferable over writing your own shader and rendering a screen sized quad. Not only is it simpler, and requires fewer state changes, it can also be more efficient. Knowing that a blit operation needs to be performed gives the implementation a chance to use a more efficient path. For example, where present, it could use a dedicated blit engine that can run asynchronously to the general rendering functionality of the GPU.
Whether you should use a renderbuffer or texture is not as clear cut. Chances are that it won't make much of a difference. Still, I would recommend using a renderbuffer as long as that's all you need. Because it has more limited functionality, the driver has the option to create a memory allocation that is more optimized for the purpose. Rendering to a renderbuffer can potentially be more efficient than rendering to a texture on some hardware, particularly if your rendering is pixel-output limited.
Don't make it a renderbuffer.
OpenGL renderbuffers exist for the sole purpose of being render targets. The only OpenGL operations that read from them are per-sample operations during rendering to the framebuffer, framebuffer blits, and pixel transfer operations.
Use a texture instead. There is no reason you couldn't create a 600x600 GL_RGBA8 2D texture.
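A minimal sketch of that texture-based setup, reusing the frameBuffer name from the question (the colorTex name and the filter settings are just illustrative):
GLuint colorTex;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 600, 600, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
On the OpenCL side the texture would then be shared with clCreateFromGLTexture (or clCreateFromGLTexture2D on pre-1.2 OpenCL) instead of clCreateFromGLRenderbuffer, and can afterwards be drawn with a textured quad.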
This is totally driving me nuts!
I've created my FBO with:
QOpenGLFramebufferObjectFormat format;
format.setAttachment(QOpenGLFramebufferObject::CombinedDepthStencil);
format.setMipmap(false);
format.setSamples(4);
format.setTextureTarget(GL_TEXTURE_2D);
format.setInternalTextureFormat(GL_RGBA32F_ARB);
qfbo=new QOpenGLFramebufferObject(QSize(w,h),format);
OK, it seems to be working and Qt doesn't complain about it.
The problem is that I need that QOpenGLFramebufferObject as read/write. Writing to it seems to be easy, a simple qfbo->bind() does the job, but I cannot bind its texture: if I ask it for the GLuint handle, it always returns 0 either way:
qDebug()<<qfbo->texture();
qDebug()<<qfbo->takeTexture();
Obviously my intention is to bind the texture myself, like:
glBindTexture(GL_TEXTURE_1D, qfbo->texture());
Any tip about this? I've been googling for a long time without luck :(
I forgot to say that if I don't use setSamples(4), I get a non-zero GLuint, but the result is totally black.
If you use multisampling, no texture gets allocated; instead a multisampled renderbuffer is used.
You need to create another, non-multisampled FBO of matching dimensions and use blitFramebuffer to resolve the multisampled data into single-sampled data.
Then you'll have the texture you can bind (to the 2D target, not to the 1D!). If this texture is still full black, run apitrace on your code and see what you're doing wrong.
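A hedged sketch of that resolve step using Qt's wrapper (the resolveFbo name and the reuse of w/h from the question are assumptions):
// Single-sampled FBO of the same size; no setSamples() call, so it is backed by a texture
QOpenGLFramebufferObjectFormat resolveFormat;
resolveFormat.setAttachment(QOpenGLFramebufferObject::CombinedDepthStencil);
resolveFormat.setTextureTarget(GL_TEXTURE_2D);
resolveFormat.setInternalTextureFormat(GL_RGBA32F_ARB);
QOpenGLFramebufferObject resolveFbo(QSize(w, h), resolveFormat);
// ... render into the multisampled qfbo as before, then resolve:
QOpenGLFramebufferObject::blitFramebuffer(&resolveFbo, qfbo);
glBindTexture(GL_TEXTURE_2D, resolveFbo.texture()); // now a non-zero, usable texture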
I'm using an OpenGL FBO to do offscreen rendering. The main code fragment creating the FBO is listed below:
glGenFramebuffers(1, &fbo);
// Create new renderbuffers with the new size.
GLint maxSize;
glGetIntegerv(GL_MAX_RENDERBUFFER_SIZE, &maxSize);
glGenRenderbuffers(1, &clrRbo);
glBindRenderbuffer(GL_RENDERBUFFER,clrRbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, (std::min)(maxSize,m_localvp.pix_width),(std::min)(maxSize, m_localvp.pix_height));
glGenRenderbuffers(1,&stencilRbo);
glBindRenderbuffer(GL_RENDERBUFFER,stencilRbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_STENCIL_INDEX8, (std::min)(maxSize,m_localvp.pix_width),(std::min)(maxSize, m_localvp.pix_height));
// Bind new framebuffers to FBO;
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, clrRbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, stencilRbo);
// Check FBO status
checkFramebufferStatus();
The code above works on some graphics hardware but fails on others. The error reported by glCheckFramebufferStatus() is GL_FRAMEBUFFER_UNSUPPORTED, which I understand to mean that the color or stencil internal format is not renderable.
How can I find color- or stencil-renderable internal formats that are supported on the graphics hardware the code runs on? Or, how can I make my code portable across different OpenGL versions and hardware implementations?
My research:
glGetInternalformat() is only supported in OpenGL 4.1 or higher. Can I get the same functionality in older OpenGL versions?
This says GL_STENCIL_INDEX8 is the only stencil-renderable format; is that correct? It fails to work in my code.
This doc shows all the supported format enums. What's the difference between internal formats and base formats?
glGetInternalformativ(), the actual entry point behind that query, was new functionality in OpenGL 4.2. There is no way to get the same information in earlier versions.
I'm not convinced that it would give you all you need anyway. While it provides a ton of information about each format, which looks partly helpful for the purpose, it still does not tell you which combinations of formats are valid as FBO attachments. GL_FRAMEBUFFER_UNSUPPORTED is not based on each format individually being renderable, but the combination of them. From the spec:
The combination of internal formats of the attached images does not violate an implementation-dependent set of restrictions.
If you used an attachment that does not use a renderable format, you would in fact get GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT.
To find a combination of formats that is supported by a given implementation, your best bet is to try a few combinations that can support the functionality you need. I would try them in order of preference, until you find a combination where glCheckFramebufferStatus() succeeds.
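One way to structure that probing, reusing the fbo/clrRbo/stencilRbo names from the question (the candidate list, width/height names, and the loop itself are only a sketch):
struct FormatPair { GLenum color; GLenum depthStencil; };
const FormatPair candidates[] = {
    { GL_RGBA8, GL_DEPTH24_STENCIL8 },   // packed depth/stencil, widely supported
    { GL_RGBA8, GL_STENCIL_INDEX8   },   // stencil-only renderbuffer
    { GL_RGB8,  GL_DEPTH24_STENCIL8 },
};
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
bool complete = false;
for (size_t i = 0; i < sizeof(candidates) / sizeof(candidates[0]) && !complete; ++i) {
    // Re-allocate the renderbuffers with the candidate formats
    glBindRenderbuffer(GL_RENDERBUFFER, clrRbo);
    glRenderbufferStorage(GL_RENDERBUFFER, candidates[i].color, width, height);
    glBindRenderbuffer(GL_RENDERBUFFER, stencilRbo);
    glRenderbufferStorage(GL_RENDERBUFFER, candidates[i].depthStencil, width, height);
    // Detach whatever a previous attempt may have attached
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, clrRbo);
    // Packed depth/stencil formats go to GL_DEPTH_STENCIL_ATTACHMENT, stencil-only to GL_STENCIL_ATTACHMENT
    GLenum attachPoint = (candidates[i].depthStencil == GL_STENCIL_INDEX8)
                             ? GL_STENCIL_ATTACHMENT : GL_DEPTH_STENCIL_ATTACHMENT;
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, attachPoint, GL_RENDERBUFFER, stencilRbo);
    complete = (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE);
}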
I'm interested in knowing the right way to mimic the low resolution of older games (like the Atari 2600) in OpenGL for an FPS game. I imagine the best way to do it is to render into a small texture, put it onto a quad, and display that at the screen resolution.
Take a look at http://www.youtube.com/watch?v=_ELRv06sa-c, for example (great game!)
Any advice, help or sample-code will be welcome.
I think the best way to do it would be like you said: render everything into a low-res texture (best done using FBOs) and then just display the texture by drawing a screen-sized quad (of course using GL_NEAREST as the magnification filter for the texture). Maybe you can also use glBlitFramebuffer for copying directly from the low-res FBO into the high-res framebuffer, although I don't know if you can copy directly into the default framebuffer (the displayed one) this way.
EDIT: After looking up the specification for framebuffer_blit it seems you can just copy from the low-res FBO into the high-res default framebuffer using glBlitFramebuffer(EXT/ARB). This might be faster than using a texture mapped quad as it completely bypasses the vertex-fragment-pipeline (although this would have been a simple one). And another advantage is that you also get the low-res depth and stencil buffers if needed and can this way render high-res content on top of the low-res background which might be an interesting effect. So it would happen somehow like this:
generate FBO with low-res renderbuffers for color and depth (and stencil)
...
glBindFramebuffer(GL_FRAMEBUFFER, lowFBO);
render_scene();
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, 640, 480, 0, 0, 1024, 768,
GL_COLOR_BUFFER_BIT [| GL_DEPTH_BUFFER_BIT], GL_NEAREST);
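For completeness, the FBO generation from the first pseudo-code step might look roughly like this (a sketch; 640×480 matches the blit source rectangle above and all names are illustrative):
GLuint lowFBO, lowColorRb, lowDepthRb;
glGenFramebuffers(1, &lowFBO);
glBindFramebuffer(GL_FRAMEBUFFER, lowFBO);
glGenRenderbuffers(1, &lowColorRb);
glBindRenderbuffer(GL_RENDERBUFFER, lowColorRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, 640, 480);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, lowColorRb);
glGenRenderbuffers(1, &lowDepthRb);
glBindRenderbuffer(GL_RENDERBUFFER, lowDepthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, 640, 480);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, lowDepthRb);
glViewport(0, 0, 640, 480); // remember to match the viewport to the low resolution before render_scene()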