I'm using a QQuickFramebufferObject to render a red triangle to a framebuffer, which itself gets drawn to the QML scene.
To do that, I overrode the render function of the associated QQuickFramebufferObject::Renderer class.
The render function looks like the following:
void GLRenderEngine::render()
{
    glClearColor(0, 0, 0, 1);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

    glColor3d(1, 0, 0);
    glBegin(GL_TRIANGLES);
    glVertex2d(0, 0);
    glVertex2d(1, 0);
    glVertex2d(0, 1);
    glEnd();
    glFlush();

    // QQuickWindow context of the encapsulating QQuickFramebufferObject
    // is set in the overridden synchronize() call
    if (m_pWindow)
    {
        m_pWindow->resetOpenGLState();
        update();
    }
}
The problem I experience is that the first frame is drawn correctly, while all subsequent frames only show the clear color.
I've analyzed the OpenGL API calls with vogl and posted the results on pastebin:
Frame0 (correct Frame): https://pastebin.com/aWu4ee6m
Frame1: https://pastebin.com/4EmWmnMv
The only differences I noticed were the initializing calls, where Qt queries the state machine's state, so I'm curious what else I did wrong.
Thanks in advance for any help.
Small update:
If I remove glClear(...), the frames show the correct image, though I doubt this is correct behaviour.
The framebuffer bound when I use glClear is the one Qt created for me to use. It is bound with the GL_FRAMEBUFFER target, which also enables drawing to it.
After I return from the function, the default framebuffer (0) is bound and cleared. This procedure can be seen quite well in Frame 1.
What I've been wondering about is whether glBlitFramebuffer is being called. Vogl doesn't seem to catch that call; also, in the previews of the individual framebuffers provided by Vogl, I couldn't see my red triangle in Frame1, while it is visible in Frame0.
I solved the problem when I compared the state machine's states and saw that the current shader program had switched from 0 to 1.
Changing it back to 0, and thus disabling shader programs, at the start of every render call resulted in the expected behaviour.
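A minimal sketch of what that fix looks like (assuming nothing beyond the snippet above; glUseProgram(0) is the standard call for disabling shader programs):

void GLRenderEngine::render()
{
    // Qt's scene graph renderer may leave its own shader program bound;
    // unbind it so the fixed-function glColor/glVertex calls take effect.
    glUseProgram(0);

    glClearColor(0, 0, 0, 1);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
    // ... triangle drawing and window state reset as above ...
}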
Goal
I'd like to implement an actual widget for Qt3D since QWidget::createWindowContainer just doesn't cut it for me.
Problem Description
My first approach of letting a new class subclass QWidget and QSurface was not successful, since the Qt3D code expects either a QWindow or a QOffscreenSurface in multiple places, and I don't want to recompile the whole Qt3D base.
My second idea was to render the Qt3D content to an offscreen surface and then draw the texture on a quad in a QOpenGLWidget. When I use a QRenderCapture framegraph node to save the image rendered to the offscreen texture, then load the image into a QOpenGLTexture and draw it in the QOpenGLWidget's paintGL function, I can see the rendered image - i.e. rendering in Qt3D and also in the OpenGL widget works properly. It is just extremely slow compared to rendering the content from Qt3D directly.
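For reference, the capture-based path looks roughly like this (a sketch; m_renderCapture and m_image are assumed member names, and error handling is omitted):

Qt3DRender::QRenderCaptureReply* reply = m_renderCapture->requestCapture();
connect(reply, &Qt3DRender::QRenderCaptureReply::completed, this, [this, reply]() {
    m_image = reply->image();  // GPU-to-CPU copy of the offscreen frame (the slow part)
    reply->deleteLater();
    update();                  // schedule a repaint that re-uploads m_image
});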
Now, when I use the GLuint returned by the QTexture2D to bind the texture during rendering of the QOpenGLWidget, everything stays black.
Of course, this would make sense if the contexts of the QOpenGLWidget and Qt3D were completely separate. But by retrieving the AbstractRenderer from the QRenderAspectPrivate, I was able to obtain the context that Qt3D uses. In my main.cpp I set
QApplication::setAttribute(Qt::AA_ShareOpenGLContexts);
The context of the QOpenGLWidget and that of Qt3D both reference the same shared context - I verified this by printing both using qDebug; they are the same object.
Shouldn't this allow me to use the texture from Qt3D?
Or does anyone have other suggestions on how to implement such a widget? I just thought this would be the easiest way.
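As an aside, pointer equality of the share context is a weaker check than asking Qt directly; QOpenGLContext::areSharing exists for exactly this (a sketch, where qt3dContext stands for however you obtained Qt3D's context):

// true if the two contexts belong to the same share group
bool sharing = QOpenGLContext::areSharing(context(), qt3dContext);
qDebug() << "contexts sharing:" << sharing;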
Implementation Details / What I've tried so far
This is what the paintGL function in my QOpenGLWidget looks like:
glClearColor(1.0, 1.0, 1.0, 1.0);
glDisable(GL_BLEND);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

d->m_shaderProgram->bind();
{
    QMatrix4x4 m;
    m.ortho(0, 1, 1, 0, 1.0f, 3.0f);
    m.translate(0.0f, 0.0f, -2.0f);

    QOpenGLVertexArrayObject::Binder vaoBinder(&d->m_vao);
    d->m_shaderProgram->setUniformValue("matrix", m);
    glBindTexture(GL_TEXTURE_2D, d->m_colorTexture->handle().toUInt());
    glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
}
d->m_shaderProgram->release();
m_colorTexture is the QTexture2D that is attached to the QRenderTargetOutput/QRenderTarget that Qt3D renders the scene offscreen to.
I have a QFrameAction in place to trigger draw updates on the QOpenGLWidget:
connect(d->m_frameAction, &Qt3DLogic::QFrameAction::triggered, this, &Qt3DWidget::paintGL);
I have verified that this indeed calls the paintGL function. So every time I draw the QOpenGLWidget, the frame should be ready and present in the texture.
I've also tried to replace m_colorTexture with a QSharedGLTexture. I then created this texture with the context of the QOpenGLWidget, like this:
m_texture = new QOpenGLTexture(QOpenGLTexture::Target2D);
m_texture->setFormat(QOpenGLTexture::RGBA8_UNorm);
// w and h are width and height of the widget
m_texture->setSize(w, h);
// m_colorTexture is the QSharedGLTexture
m_colorTexture->setTextureId(m_texture->textureId());
In the resizeEvent function of the QOpenGLWidget, I set the appropriate size on this texture and also on all offscreen resources of Qt3D. This also shows just a black screen. Placing qDebug() << glGetError(); directly after binding the texture simply prints 0 every time, so I assume there aren't any errors.
The code can be found here in my GitHub project.
Update (10th May 2021, since I stumbled upon my answer again):
My Qt3DWidget implementation works perfectly now; the issue was that I had to call update() when the frame action was triggered, instead of calling paintGL directly (duh, silly me, I actually knew that).
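In code, the fix amounts to something like this (a sketch; the member names follow the snippets above):

connect(d->m_frameAction, &Qt3DLogic::QFrameAction::triggered,
        this, [this](float) { update(); });  // schedule a repaint instead of calling paintGL directly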
Although I didn't find an exact solution to my question, I'll post an answer here, since I succeeded in creating a Qt3D widget.
The code can be found here. It's not the cleanest solution because I think it should be possible to use the shared texture somehow. Instead, now I'm setting the QOpenGLWidget's context on Qt3D for which I have to use Qt3D's private classes. This means that Qt3D draws directly onto the frame buffer bound by the OpenGL widget. Unfortunately, now the widget has to be the render driver and performs manual updates on the QAspectEngine by calling processFrame. Ideally, I would have liked to leave all processing loops to Qt3D but at least the widget works now as it is.
Edit:
I found an example for QSharedGLTexture in the manual tests here. It works the other way round, i.e. OpenGL renders to the texture and Qt3D uses it, so I assume it should be possible to reverse the direction. Unfortunately, QSharedGLTexture seems to be a bit unstable, as resizing the OpenGL window sometimes crashes the app. That's why I'll stick with my solution for now. But if anyone has news regarding this issue, feel free to post an answer!
There are some mysteries in implementing simple color picking with Qt's built-in OpenGL tools.
Context :
I have an application that owns its own OpenGL widget. For some reasons (multiple widgets), I had to replace my QGLWidget with a QOpenGLWidget, which easily allows me to have many OpenGL contexts without (a priori) any problems. This change actually broke my color picking, so I am now investigating:
I previously did the following to pick my object:
glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);

// render every object in our scene
ShaderLib::bindShader(ShaderLib::PICKING_SHADER);
{
    for (auto const& _3dobject : model_->getObjects())
        _3dobject.second->draw(projection_, cameraview_, true);
}
ShaderLib::unbind();

glFlush();
glFinish();

// get color information from the framebuffer
float pixel[4];
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);
glReadPixels(event->x(), viewport[3] - event->y(), 1, 1, GL_RGBA, GL_FLOAT, pixel);
This worked perfectly with the QGLWidget. I could get the pixel, then the matching object. I saved my pixels in a QImage to confirm, and I got exactly what was expected.
After replacing QGLWidget with QOpenGLWidget:
With the QOpenGLWidget, the above code didn't work. Worse: glReadPixels does not seem to read from the back framebuffer. How do I know? I simply displayed the whole buffer supposedly read via glReadPixels, as before, and it gave me a partial screenshot of my application, but not of my QOpenGLWidget. This means that glReadPixels behaves differently depending on whether a QGLWidget or a QOpenGLWidget is used!
Well. Never give up !
I tried to get the framebuffer through QOpenGLWidget::grabFramebuffer();
It creates a QImage of... I don't know which buffer.
glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);

// render every object in our scene
ShaderLib::bindShader(ShaderLib::PICKING_SHADER);
{
    for (auto const& _3dobject : model_->getObjects())
        _3dobject.second->draw(projection_, cameraview_, true);
}
ShaderLib::unbind();

glFlush();
glFinish();

QImage fb = grabFramebuffer();
This gives me an image of the framebuffer drawn in my paintGL() function, which is different from the one rendered in my mousePressEvent() (where I render with a specific 'picking' shader...).
Hope you followed everything. To sum up:
Does anyone understand why glReadPixels gives a different result between the two 'painters' used? I certainly missed something.
Does anyone understand how double buffering works with a QOpenGLWidget? It seems that the user cannot really choose what happens.
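One detail that may matter here: a QOpenGLWidget does not render to the window's back buffer but into an internal FBO, so code that runs outside paintGL (like a picking pass in mousePressEvent) has to make the context current and bind that FBO before drawing and reading. A hedged sketch (MyGLWidget is a placeholder name):

void MyGLWidget::mousePressEvent(QMouseEvent* event)
{
    makeCurrent();  // bind this widget's context
    glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebufferObject());  // the widget's internal FBO
    // ... render with the picking shader and call glReadPixels as in the code above ...
    doneCurrent();
}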
I'm trying to draw a custom opengl overlay (steam does that for example) in a 3d desktop game.
This overlay should basically be able to show the status of some variables which the user can affect by pressing some keys. Think of it like a game trainer.
The goal is, in the first place, to draw a few primitives at a specific point on the screen. Later, I want to have a nice looking little "gui" component in the game window.
The game uses the "SwapBuffers" method from the GDI32.dll.
Currently I'm able to inject a custom DLL file into the game and hook the "SwapBuffers" method.
My first idea was to insert the drawing of the overlay into that function. This could be done by switching the game's 3D drawing mode to 2D, drawing the 2D overlay on the screen, and then switching back again, like this:
//SwapBuffers_HOOK (HDC)
glPushMatrix();
glLoadIdentity();
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glOrtho(0.0, 640, 480, 0.0, 1.0, -1.0);
//"OVERLAY"
glBegin(GL_QUADS);
glColor3f(1.0f, 1.0f, 1.0f);
glVertex2f(0, 0);
glVertex2f(0.5f, 0);
glVertex2f(0.5f, 0.5f);
glVertex2f(0.0f, 0.5f);
glEnd();
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
SwapBuffers_OLD(HDC);
However, this does not have any effect on the game at all.
Is my approach correct and reasonable (also considering my 3d to 2d switching code)?
I would like to know the best way to design and display a custom overlay in the hooked function (should I use something like Windows Forms, or should I assemble my component from OpenGL functions - lines, quads, ...?).
Is the SwapBuffers method the best place to draw my overlay?
Any hint, source code or tutorial for something similar is appreciated too.
The game by the way is counterstrike 1.6 and I don't intend to cheat online.
Thanks.
EDIT:
I managed to draw a simple rectangle into the game's window by using a new OpenGL context, as proposed by 'derHass'. Here is what I did:
//1. At the beginning of the hooked gdiSwapBuffers(HDC hdc) method, save the old context
GLboolean gdiSwapBuffersHOOKED(HDC hdc) {
    HGLRC oldContext = wglGetCurrentContext();
    //2. If the new context has not been created yet - create it
    //(we need the "hdc" parameter for the current window, so the initialization
    //process happens in this method - does anyone have a better solution?)
    //Then make the new context current.
    if (!contextCreated) {
        thisContext = wglCreateContext(hdc);
        wglMakeCurrent(hdc, thisContext);
        initContext();
    }
    else {
        wglMakeCurrent(hdc, thisContext);
    }
    //Draw the quad in the new context and switch back to the old one.
    drawContext();
    wglMakeCurrent(hdc, oldContext);
    return gdiSwapBuffersOLD(hdc);
}
GLvoid drawContext() {
    glColor3f(1.0f, 0, 0);
    glBegin(GL_QUADS);
    glVertex2f(0, 190.0f);
    glVertex2f(100.0f, 190.0f);
    glVertex2f(100.0f, 290.0f);
    glVertex2f(0, 290.0f);
    glEnd();
}

GLvoid initContext() {
    contextCreated = true;
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, 640, 480, 0.0, 1.0, -1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glClearColor(0, 0, 0, 1.0);
}
Here is the result:
cs overlay example
It is still very simple but I will try to add some more details, text etc. to it.
Thanks.
If the game is using OpenGL, then hooking into SwapBuffers is the way to go, in principle. In theory, there might be several different drawables, and you might have to decide in your swap buffer function which one(s) are the right ones to modify.
There are a couple of issues with such kind of OpenGL interceptions, though:
1. OpenGL is a state machine. The application might have modified any GL state variable there is. The code you provided is far from complete enough to guarantee that something is drawn. For example, if the application happens to have shaders enabled, all your matrix setup might be without effect, and what really appears on the screen depends on the shaders. If depth testing is on, your fragments might lie behind what was already drawn. If polygon culling is on, your primitive might be wound incorrectly for the current culling mode. If the color masks are set to GL_FALSE, or the draw buffer is not set to where you expect it, nothing will appear.
2. Also note that your attempt to "reset" the matrices is wrong. You seem to assume that the current matrix mode is GL_MODELVIEW, but this doesn't have to be the case. It could just as well be GL_PROJECTION or GL_TEXTURE. You also apply glOrtho to the current projection matrix without loading identity first, so this alone is a good reason for nothing to appear on the screen. (A corrected sketch follows this list.)
3. As OpenGL is a state machine, you also must restore all the state you touched. You already try this with the matrix stack push/pop, but you failed, for example, to restore the exact matrix mode. As you have seen in point 1, a lot more state changes will be required, so restoring it all will be more complex. Since you use legacy OpenGL, glPushAttrib() might come in handy here.
4. SwapBuffers is not a GL function, but part of the operating system's API. It gets a drawable as parameter and only indirectly refers to any GL context. It might be called while another GL context is bound to the thread, or with none at all. If you want to play it safe, you'll also have to intercept the GL context creation function as well as MakeCurrent. In the worst (though very unlikely) case, the application has the GL context bound to another thread while it is calling SwapBuffers, so there is no chance for you in the hooked function to get at the context.
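To illustrate point 2, a corrected version of the matrix handling might look like this (a sketch in legacy GL, deliberately ignoring all the other state discussed above):

glPushAttrib(GL_TRANSFORM_BIT | GL_ENABLE_BIT);  // saves matrix mode, depth test enable, ...
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrtho(0.0, 640.0, 480.0, 0.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glDisable(GL_DEPTH_TEST);

// ... draw the overlay ...

glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
glPopAttrib();  // restores the original matrix mode and enables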
Putting this all together opens up another alternative: you can create your own GL context, bind it temporarily during the hooked SwapBuffers call, and restore the original binding again. That way, you don't interfere with the GL state of the application at all. You can still augment the image content the application has rendered, since the framebuffer is part of the drawable, not of the GL context. Doing so might have a negative impact on performance, but it might be so small that you would never even notice it.
Since you want to do this only for a single specific application, another approach would be to find out the minimal state changes which are necessary by observing what GL state the application actually set during the SwapBuffers call. A tool like apitrace can help you with that.
I am trying to create a framebuffer with two textures attached to it (multiple render targets). Then, in every time step, both textures should be cleared and painted, as in the following code. (Some parts are replaced with pseudocode to keep it short.)
Version 1
//beginning of the 1st time step
initialize(framebufferID12)
//^ I am quite sure this is done correctly
//^ Note: there is no glDrawBuffers() call
loop, executed once every time step {
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, framebufferID12);
    //(#1#) a line will be added here in version 2 (see below) <------------
    glClearColor(0.5f, 0.0f, 0.5f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // paint a lot of objects here, using GLSL shaders (.frag, .vert)
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
}
All objects are painted correctly to both textures, but only the first texture (ATTACHMENT0) is cleared every frame, which is wrong.
Version 2
I tried inserting the line
glDrawBuffers({ATTACHMENT0, ATTACHMENT1});
at (#1#), and it works as expected, i.e. both textures are cleared.
(image http://s13.postimg.org/66k9lr5av/gl_Draw_Buffer.jpg)
Version 3
Starting from version 2, I moved the glDrawBuffers() statement into the framebuffer initialization, like this:
initialize(int framebufferID12){
    int nameFBO = glGenFramebuffersEXT();
    int nameTexture0 = glGenTextures();
    int nameTexture1 = glGenTextures();
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, nameFBO);
    glBindTexture(nameTexture0);
    glTexImage2D( .... ); glTexParameteri(...);
    glFramebufferTexture2DEXT( ATTACHMENT0, nameTexture0);
    glBindTexture(nameTexture1);
    glTexImage2D( .... ); glTexParameteri(...);
    glFramebufferTexture2DEXT( ATTACHMENT1, nameTexture1);
    glDrawBuffers({ATTACHMENT0, ATTACHMENT1}); //<--- moved here ---
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
    return nameFBO;
}
It no longer works (the symptoms are like version 1) - why?
The OpenGL documentation says that "changes to context state will be stored in this object", so the state modification from glDrawBuffers() should be stored in "framebufferID12", right? Then why do I have to call it every time step (or every time I change the FBO)?
I may have misunderstood some OpenGL concept - someone please enlighten me.
Edit 1: Thanks j-p. I agree that it makes sense, but shouldn't the state be recorded in the FBO already?
Edit 2 (accepted answer): Reto Koradi's answer is correct! I am using a not-so-standard library called LWJGL.
Yes, the draw buffers setting is part of the framebuffer state. If you look at for example the OpenGL 3.3 spec document, it is listed in table 6.23 on page 299, titled "Framebuffer (state per framebuffer object)".
The default value for FBOs is a single draw buffer, which is GL_COLOR_ATTACHMENT0. From the same spec, page 214:
For framebuffer objects, in the initial state the draw buffer for fragment color zero is COLOR_ATTACHMENT0. For both the default framebuffer and framebuffer objects, the initial state of draw buffers for fragment colors other than zero is NONE.
So it's expected that if you have more than one draw buffer, you need the explicit glDrawBuffers() call.
Now, why it doesn't seem to work for you when you make the glDrawBuffers() call part of the FBO setup is somewhat mysterious. One thing I notice in your code is that you're using the EXT form of the FBO calls. I suspect this might have something to do with your problem.
FBOs have been part of standard OpenGL since version 3.0. If there's any way for you to use OpenGL 3.0 or later, I would strongly recommend using the standard entry points. While extensions normally still work even after the functionality has become standard, I would always be skeptical about how they interact with other features. In particular, there were multiple extensions for FBO functionality before 3.0, with different behavior. I wouldn't be surprised if some of them interact differently with other OpenGL calls compared to the standard FBO functionality.
So, try using the standard entry points (the ones without the EXT in their name). That will hopefully solve your problem.
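For illustration, a setup with the core entry points could look like this (a C-style sketch; w and h are assumed dimensions - the question itself uses LWJGL, where the calls are equivalent):

GLuint fbo, tex[2];
glGenFramebuffers(1, &fbo);
glGenTextures(2, tex);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
for (int i = 0; i < 2; ++i) {
    glBindTexture(GL_TEXTURE_2D, tex[i]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, tex[i], 0);
}
const GLenum bufs[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, bufs);  // stored as part of this FBO's state
glBindFramebuffer(GL_FRAMEBUFFER, 0);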
I have a huge problem with using FBOs.
I have a multi-pass display using FBOs and multitexturing. Everything seems to work fine until the end of the first execution of display.
I set the render target back to the screen using glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0) at the end of my display function, but the passes after that do not take effect. The screen seems to freeze.
What might be the cause? Any guesses?
I suggest you add the
glPushAttrib(GL_VIEWPORT_BIT | GL_COLOR_BUFFER_BIT);
before binding the FBO, and
glPopAttrib();
after you release it.
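In context, the placement would look roughly like this (a sketch around a generic multi-pass display function; fboId is a placeholder for your FBO's name):

glPushAttrib(GL_VIEWPORT_BIT | GL_COLOR_BUFFER_BIT);  // save viewport and draw-buffer state
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fboId);
// ... render the offscreen pass ...
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);          // back to the screen
glPopAttrib();                                        // restore what the FBO pass changed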