I've run into some mysteries while implementing simple color picking with Qt's built-in OpenGL tools.
Context:
I have an application that owns its own OpenGL widget. For several reasons (multiple widgets), I had to replace my QGLWidget with a QOpenGLWidget, which lets me have several OpenGL contexts without (a priori) any problems. This change broke my color picking, so I'm investigating:
Previously, I did this to identify the object under the cursor:
glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
// render every object in our scene
ShaderLib::bindShader(ShaderLib::PICKING_SHADER);
{
for(auto const& _3dobject : model_->getObjects())
_3dobject.second->draw(projection_, cameraview_, true);
}
ShaderLib::unbind();
glFlush();
glFinish();
// get color information from frame buffer
float pixel[4];
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);
glReadPixels(event->x(), viewport[3] - event->y(), 1, 1, GL_RGBA, GL_FLOAT, pixel);
This worked perfectly with the QGLWidget: I could read the pixel, then find the matching object. I saved the pixels to a QImage to confirm, and they were exactly what I expected.
After replacing QGLWidget with QOpenGLWidget:
With the QOpenGLWidget, the code above no longer works. Worse: glReadPixels no longer seems to read the back framebuffer. How do I know? I dumped the whole buffer supposedly read by glReadPixels, as before, and it gave me a partial screenshot of my application, but not of my QOpenGLWidget. In other words, glReadPixels now behaves differently depending on whether I use QGLWidget or QOpenGLWidget!
Well. Never give up!
I tried to grab the framebuffer through QOpenGLWidget::grabFramebuffer();
it creates a QImage of the... I don't know what buffer.
glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
// render every object in our scene
ShaderLib::bindShader(ShaderLib::PICKING_SHADER);
{
for(auto const& _3dobject : model_->getObjects())
_3dobject.second->draw(projection_, cameraview_, true);
}
ShaderLib::unbind();
glFlush();
glFinish();
QImage fb = grabFramebuffer();
This gives me an image of the framebuffer drawn in my paintGL() function, which differs from the one rendered in my mousePressEvent() (where I render with a specific 'picking' shader).
I hope you followed everything. To sum up:
Does anyone understand why glReadPixels gives different results with the two 'painters'? I've certainly missed something.
Does anyone understand how double buffering works with QOpenGLWidget? It seems the user can't really control what happens.
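One note for reference: Qt's documentation says that QOpenGLWidget renders into an internal framebuffer object rather than a true window back buffer, and that makeCurrent() also binds that backing FBO. Under that assumption, a minimal sketch of how the picking read could be scoped looks like this (renderPickingPass() stands for the picking code above; this is a sketch, not a verified fix):
// Keep the whole picking pass inside the widget's context, so the
// read targets the widget's backing FBO rather than the screen.
makeCurrent();              // makes the context current and binds the backing FBO
renderPickingPass();        // hypothetical helper: the picking render shown above
glFinish();
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);
float pixel[4];
glReadPixels(event->x(), viewport[3] - event->y(), 1, 1, GL_RGBA, GL_FLOAT, pixel);
doneCurrent();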
Goal
I'd like to implement an actual widget for Qt3D since QWidget::createWindowContainer just doesn't cut it for me.
Problem Description
My first approach, subclassing both QWidget and QSurface in a new class, was unsuccessful because the Qt3D code expects either a QWindow or a QOffscreenSurface in multiple places, and I don't want to recompile the whole Qt3D base.
My second idea was to render the Qt3D content to an offscreen surface and then draw the texture on a quad in a QOpenGLWidget. When I use a QRenderCapture framegraph node to save the image rendered to the offscreen texture, load that image into a QOpenGLTexture, and draw it in the QOpenGLWidget's paintGL function, I can see the rendered image, i.e. rendering works properly both in Qt3D and in the OpenGL widget. But this is extremely slow compared to rendering the Qt3D content directly.
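For illustration, that slow path amounts to a full GPU-to-CPU round trip every frame. A rough sketch of the capture loop, assuming a QRenderCapture node named m_renderCapture already sits in the framegraph (the member names are mine, not from the project):
// Each frame is copied back to the CPU as a QImage, which is the expensive part.
Qt3DRender::QRenderCaptureReply *reply = m_renderCapture->requestCapture();
connect(reply, &Qt3DRender::QRenderCaptureReply::completed, this, [this, reply]() {
    m_capturedImage = reply->image();  // GPU -> CPU copy
    reply->deleteLater();
    update();                          // repaint the QOpenGLWidget with the new image
});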
Now, when I use the GLuint returned by the QTexture2D to bind the texture while rendering the QOpenGLWidget, everything stays black.
Of course this would make sense if the contexts of the QOpenGLWidget and Qt3D were completely separate. But by retrieving the AbstractRenderer from the QRenderAspectPrivate I was able to obtain the context that Qt3D uses. In my main.cpp I set
QApplication::setAttribute(Qt::AA_ShareOpenGLContexts);
The context of the QOpenGLWidget and that of Qt3D both reference the same shared context; I verified this by printing both with qDebug, and they are the same object.
Shouldn't this allow me to use the texture from Qt3D?
Or does anyone have other suggestions on how to implement such a widget? I just thought this would be the easiest way.
Implementation Details / What I've tried so far
This is what the paintGL function in my QOpenGLWidget looks like:
glClearColor(1.0, 1.0, 1.0, 1.0);
glDisable(GL_BLEND);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
d->m_shaderProgram->bind();
{
QMatrix4x4 m;
m.ortho(0, 1, 1, 0, 1.0f, 3.0f);
m.translate(0.0f, 0.0f, -2.0f);
QOpenGLVertexArrayObject::Binder vaoBinder(&d->m_vao);
d->m_shaderProgram->setUniformValue("matrix", m);
glBindTexture(GL_TEXTURE_2D, d->m_colorTexture->handle().toUInt());
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
}
d->m_shaderProgram->release();
m_colorTexture is the QTexture2D that is attached to the QRenderTargetOutput/QRenderTarget that Qt3D renders the scene offscreen to.
I have a QFrameAction in place to trigger draw updates on the QOpenGLWidget:
connect(d->m_frameAction, &Qt3DLogic::QFrameAction::triggered, this, &Qt3DWidget::paintGL);
I have verified that this indeed calls the paintGL function. So every time I draw the QOpenGLWidget, the frame should be ready and present in the texture.
I've also tried to replace the m_colorTexture with a QSharedGLTexture. Then I created this texture with the context of the QOpenGLWidget like this
m_texture = new QOpenGLTexture(QOpenGLTexture::Target2D);
m_texture->setFormat(QOpenGLTexture::RGBA8_UNorm);
// w and h are width and height of the widget
m_texture->setSize(w, h);
// m_colorTexture is the QSharedGLTexture
m_colorTexture->setTextureId(m_texture->textureId());
In the resizeEvent function of the QOpenGLWidget I set the appropriate size on this texture and also on all offscreen resources of Qt3D. This also shows just a black screen. Placing qDebug() << glGetError(); directly after binding the texture simply prints 0 every time, so I assume there are no errors.
The code can be found here in my GitHub project.
Update (10th May 2021, since I stumbled upon my answer again):
My Qt3DWidget implementation works perfectly now, the issue was that I had to call update() when the frame action was triggered instead of paintGL (duh, silly me, I actually know that).
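In other words, the frame action should schedule a repaint instead of invoking paintGL directly. A minimal sketch of the corrected connection (the lambda wrapper is my choice, since QWidget::update() is overloaded):
connect(d->m_frameAction, &Qt3DLogic::QFrameAction::triggered,
        this, [this](float) { update(); });  // schedule a proper repaint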
Although I didn't find an exact solution to my question, I'll post an answer here since I succeeded in creating a Qt3D widget.
The code can be found here. It's not the cleanest solution, because I think it should be possible to use the shared texture somehow. Instead, I'm now setting the QOpenGLWidget's context on Qt3D, for which I have to use Qt3D's private classes. This means that Qt3D draws directly into the framebuffer bound by the OpenGL widget. Unfortunately, the widget now has to drive rendering and perform manual updates on the QAspectEngine by calling processFrame. Ideally, I would have liked to leave all processing loops to Qt3D, but at least the widget works now as it is.
Edit:
I found an example for QSharedGLTexture in the manual tests here. It works the other way round, i.e. OpenGL renders to the texture and Qt3D uses it, so I assume it should be possible to reverse the direction. Unfortunately, QSharedGLTexture seems to be a bit unstable, as resizing the OpenGL window sometimes crashes the app. That's why I'll stick with my solution for now. But if anyone has news regarding this issue, feel free to post an answer!
I'm making a game in libGDX.
The only way I have ever known how to use shaders is to have the batch affect the given textures one after another. This is what I normally do in my code:
shader = new ShaderProgram(Gdx.files.internal("shaders/shader.vert"), Gdx.files.internal("shaders/shader.frag"));
batch.setShader(shader);
And that's about all of the needed code.
Anyway, I don't want this separation between textures. However, I can't find any way to affect the whole screen at once with a shader, as if the whole screen were just one big texture. To me, that seems like the most logical way to use a shader.
So, does anyone know how to do something like this?
Draw all textures (players, actors, landscape, ...) with the same batch. If you also want the shader to affect the background, draw a screen-sized static texture in the background with that same batch.
This is quite easy with FBOs; you can get "the whole screen as just one big texture", as you said in your question.
First of all, before any rendering, create your FBO and begin it:
FrameBuffer fbo = new FrameBuffer(Format.RGBA8888, Width, Height, false);
fbo.begin();
Then do all of your normal rendering:
Gdx.gl.glClearColor(0.2f, 0.2f, 0.2f, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
...
Batch b = new SpriteBatch(...
//Whatever rendering code you have
Finally, end the FBO, grab its color buffer as a texture or sprite, do any transformations needed on it, and prepare and use your shader on it.
fbo.end();
SpriteBatch b = new SpriteBatch();
Sprite s = new Sprite(fbo.getColorBufferTexture());
s.flip(false,true); //Coordinate system in the buffer differs from the screen
b.setShader(your_shader);
b.begin();
your_shader.setUniformMatrix("u_projTrans",camera.combined); //if you have camera
viewport.apply(); //if you have viewport
b.draw(s,0,0,viewportWidth,viewportHeight);
b.end();
b.setShader(null);
And this is all!
Essentially, what you are doing is rendering all your assets, game scene, and stages into a buffer, then saving that buffer image into a texture, and finally rendering that texture with the shader effect you want.
As you may notice, this is highly inefficient, since you are copying the whole screen into a buffer. Also note that some older drivers only support power-of-two sizes for the FBO, so you may have to keep that in mind; check here for more information on the topic.
I'm using a QQuickFramebufferObject object to render a red triangle to a framebuffer, which itself gets drawn to the QML scene.
To do that, I overrode the render function of the associated QQuickFramebufferObject::Renderer class.
The render function looks like the following:
void GLRenderEngine::render()
{
glClearColor(0,0,0,1);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
glColor3d(1,0,0);
glBegin(GL_TRIANGLES);
glVertex2d(0,0);
glVertex2d(1,0);
glVertex2d(0,1);
glEnd();
glFlush();
//QQuickWindow context of the encapsulating QQuickFramebufferObject
//is set in the overridden synchronize call
if(m_pWindow)
{
m_pWindow->resetOpenGLState();
update();
}
}
The problem I experience is that the first frame gets drawn correctly, while all other frames only show the clear color.
I've analyzed the OpenGL API calls with vogl and posted the results on Pastebin:
Frame0 (correct Frame): https://pastebin.com/aWu4ee6m
Frame1: https://pastebin.com/4EmWmnMv
The only differences I noticed were the initialization calls, where Qt queries the state machine's state, so I'm curious what else I did wrong.
Thanks in advance for your help.
Small update:
If I remove glClear(...), the frames show the correct image, though I doubt this is correct behaviour.
The framebuffer bound when I use glClear is the one Qt created for me to use. It is bound with the target GL_FRAMEBUFFER, which also makes it the draw framebuffer.
After I return from the function, the default framebuffer (0) is bound and cleared. This procedure can be seen pretty well in Frame 1.
What I've been wondering is whether glBlitFramebuffer is being called. vogl doesn't seem to catch that call; also, in vogl's preview of the individual framebuffers, I couldn't see my red triangle in Frame 1, while it is visible in Frame 0.
I solved the problem when I compared the state machine's states and saw that the bound shader program had switched from 0 to 1.
Changing it back to 0 at the start of every render call, thus disabling shader programs, resulted in the expected behaviour.
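A minimal sketch of that fix, assuming desktop OpenGL (the fixed-function glBegin/glEnd path only takes effect when no shader program is bound):
void GLRenderEngine::render()
{
    // Qt's scene graph may leave its own shader program bound;
    // unbind it so the fixed-function calls below take effect.
    glUseProgram(0);
    glClearColor(0, 0, 0, 1);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
    // ... fixed-function drawing and state reset as above ...
}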
I'm having an issue while using FBO.
My window size is 1200x300.
When I create a FBO that's 1200x300, everything is fine.
However, when I create the FBO at 2400x600 (twice as big on both axes) and render the exact same primitives, only one quarter of the FBO's area gets used.
FBO same size as window:
FBO twice bigger (triangle clipping can be noticed):
I render these two triangles into the FBO, then render a fullscreen quad with the FBO's texture on it. I clear the FBO with this pine-green color, so I know for sure that all the empty space in the second picture actually comes from the FBO.
// init() of the program
albedo = new RenderTarget(2400, 600, 24 /*depth*/); // in first case, params are 1200, 300, 24
// draw()
RenderTarget::set(albedo); // render to fbo
RenderTarget::clearColor(0.0f, 0.3f, 0.3f, 1.0f);
RenderTarget::clear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// render triangles ...
glDrawArrays(GL_TRIANGLES, 0, 6);
// now it's time to render a fullscreen quad
RenderTarget::set(); // render to back-buffer
RenderTarget::clearColor(0.3f, 0.0f, 0.0f, 1.0f);
RenderTarget::clear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, albedo->texture());
glUniform1i(albedoUnifLoc, 0);
RenderTarget::drawFSQ(); // draw fullscreen quad
I have no cameras of any kind, I don't use glViewport anywhere, and I always send the coordinates of the primitives to be drawn in unit-square space (both x and y coordinates are in the [-1,1] range).
The question is: what am I doing wrong, and how do I fix it?
A side question: is glViewport related in any way to the currently bound framebuffer? As far as I understand, that function just sets the rectangular area of the window in which drawing will occur.
Any suggestion would be greatly appreciated. I tried searching for the problem online; the only similar thing I found was this SO question, but it hasn't helped me.
You need to call glViewport() with the size of your render target. The only time you can get away without calling it is when you render to the window, and the window is never resized. That's because the default viewport matches the initial window size. From the spec:
In the initial state, w and h are set to the width and height, respectively, of the window into which the GL is to do its rendering.
If you want to render to an FBO with a size different from your window, you have to call glViewport() with the size of the FBO. And when you go back to rendering to the window, you need to call glViewport() with the window size again.
The viewport dimensions are not per-framebuffer state. I always thought that would have made sense, but it is not defined that way. So whenever you call glViewport(), you are changing global (i.e. per-context) state, independent of the currently bound framebuffer.
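Applied to the code in the question (reusing the asker's RenderTarget helper, so these names are not a library API), the fix might look like this sketch:
// Render into the 2400x600 FBO: the viewport must match the FBO size.
RenderTarget::set(albedo);
glViewport(0, 0, 2400, 600);
// ... clear and draw the triangles ...

// Render the fullscreen quad into the 1200x300 window back buffer:
RenderTarget::set();
glViewport(0, 0, 1200, 300);
// ... clear, bind the FBO texture, and draw the quad ...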
This question has changed a lot since it was first asked, because I didn't understand how little I knew about what I was asking. And one issue, regarding resizing, was clouding my ability to understand the larger issue of creating and using the framebuffer. If you just need a framebuffer, jump to the answer... for history, I've left the original question intact.
Newbie question. I've got a GL project I'm working on and am trying to develop a selection strategy using unique colors. Most discussions/tutorials revolve around drawing the selectable entities into the back buffer and calculating the selection when the user clicks somewhere. I want the selection buffer to be persistent, so I can quickly calculate hits on any mouse movement, and I will not redraw the selection buffer unless the display or object geometry changes.
It would seem that the best choice is a dedicated framebuffer object. Here's my issue. On top of being completely new to framebuffer objects, I am curious: am I better off deleting and recreating the framebuffer object on window-size events, or creating it once at the maximum screen resolution and then using what may be just a small portion of it? I've got my events working properly to call the framebuffer routine only once for what could be a stream of many resize events, yet I'm concerned about GPU memory fragmentation, or other issues, from recreating the buffer, possibly many times.
Also, will a framebuffer object (texture & depth) even behave coherently when using just a portion of it?
Ideas? Am I completely off base?
EDIT:
I've got my framebuffer object set up and working now at the window's dimensions, and I resize it with the window. I think my issue was classic overthinking. While it is certainly true that deleting/recreating objects on the GPU should be avoided when possible, as long as it is handled correctly the resizes are relatively few.
What I found works is to set a flag and mark the buffer as dirty on window resize, then wait for a normal mouse event before resizing the buffer. A normal mouse enter or move signals that you're done dragging the window to size and are ready to get back to work. The buffer is recreated once. Also, since the main framebuffer is generally resized for every window-size event in the pipeline, it stands to reason that resizing a framebuffer isn't going to burn a hole in your laptop.
Crisis averted, carry on!
I mentioned in the question that I was overthinking the problem. The main reason is that the problem was bigger than the question: not only did I not know how to control the framebuffer, I didn't know how to create one. There are so many options, and none of the web resources seemed to specifically address what I was trying to do, so I struggled with it. If you're also struggling with how to move your selection routine to a unique-color scheme with a persistent buffer, or are just at a complete loss as to framebuffers and offscreen rendering, read on.
I've got my OpenGL canvas defined as a class, and I needed a "Selection Buffer Object." I added this to the private members of the class.
unsigned int sbo;
unsigned int sbo_pixels;
unsigned int sbo_depth;
bool sbo_dirty;
void setSelectionBuffer();
In both my resize handler and OpenGL initialization I set the dirty flag for the selection buffer.
sbo_dirty = true;
At the beginning of my mouse handler I check the dirty bit and call setSelectionBuffer() if appropriate.
if(sbo_dirty) setSelectionBuffer();
This tackles my initial concerns about multiple delete/recreates of the buffer. The selection buffer isn't resized until the mouse pointer reenters the client area, after resizing the window. Now I just had to figure out the buffer...
void BFX_Canvas::setSelectionBuffer()
{
if(sbo != 0) // delete current selection buffer if it exists
{
glDeleteFramebuffersEXT(1, &sbo);
glDeleteRenderbuffersEXT(1, &sbo_depth);
glDeleteRenderbuffersEXT(1, &sbo_pixels);
sbo = 0;
}
// create depth renderbuffer
glGenRenderbuffersEXT(1, &sbo_depth);
// bind to new renderbuffer
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, sbo_depth);
// Set storage for depth component, with width and height of the canvas
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT, canvas_width, canvas_height);
// Set it up for framebuffer attachment
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, sbo_depth);
// rebind to default renderbuffer
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, 0);
// create pixel renderbuffer
glGenRenderbuffersEXT(1, &sbo_pixels);
// bind to new renderbuffer
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, sbo_pixels);
// Create RGB storage space(you might want RGBA), with width and height of the canvas
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_RGB, canvas_width, canvas_height);
// Set it up for framebuffer attachment
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_RENDERBUFFER_EXT, sbo_pixels);
// rebind to default renderbuffer
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, 0);
// create framebuffer object
glGenFramebuffersEXT(1, &sbo);
// Bind our new framebuffer
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, sbo);
// Attach our pixel renderbuffer
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_RENDERBUFFER_EXT, sbo_pixels);
// Attach our depth renderbuffer
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, sbo_depth);
// Check that the wheels haven't come off
GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
if (status != GL_FRAMEBUFFER_COMPLETE_EXT)
{
// something went wrong
// Output an error to the console
cout << "Selection buffer creation failed" << endl;
// reestablish a coherent state and return
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
sbo_dirty = false;
sbo = 0;
return;
}
// rebind back to default framebuffer
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
// cleanup and go home
sbo_dirty = false;
Refresh(); // force a screen draw
}
Then at the end of my render function I test for the sbo, and draw to it if it seems to be ready.
if((sbo) && (!sbo_dirty)) // test that sbo exists and is ready
{
// disable anything that's going to affect color such as...
glDisable(GL_LIGHTING);
glDisable(GL_LINE_SMOOTH);
glDisable(GL_POINT_SMOOTH);
glDisable(GL_POLYGON_SMOOTH);
// bind to our selection buffer
// it inherits current transforms/rotations
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, sbo);
// clear it
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// draw selectables
// for now i'm just drawing my object
if (object) object->draw();
// reenable that stuff from before
glEnable(GL_POLYGON_SMOOTH);
glEnable(GL_POINT_SMOOTH);
glEnable(GL_LINE_SMOOTH);
glEnable(GL_LIGHTING);
// blit to default framebuffer just to see what's going on
// delete this bit once selection is setup and working properly.
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, sbo);
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, 0);
glBlitFramebufferEXT(0, 0, canvas_width, canvas_height,
0, 0, canvas_width/3, canvas_height/3,
GL_COLOR_BUFFER_BIT, GL_LINEAR);
// We're done here, bind back to default buffer.
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
}
That gives me this...
At this point I believe everything is in place to actually draw selectable items to the buffer, and use mouse move events to test for hits. And I've got an onscreen thumbnail to show how bad things are blowing up.
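For completeness, a sketch of what the hit test on mouse move might look like, using the same EXT entry points as above (the id-decoding scheme is a placeholder, not part of the original code):
// Read the color under the cursor from the persistent selection buffer.
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, sbo);
unsigned char pixel[3];
// GL's origin is bottom-left; mouse coordinates are usually top-left.
glReadPixels(mouse_x, canvas_height - mouse_y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixel);
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, 0);
// Decode the unique color back into an object id (the scheme is up to you):
unsigned int id = pixel[0] | (pixel[1] << 8) | (pixel[2] << 16);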
I hope this is as big a help to you as it would have been to me a week ago. :)