I'm currently working with Qt5.1 and trying to draw some OpenGL stuff within a QGLWidget:
void Widget::paintGL() {
    startClipping(10, height() - 110, 100, 100);

    qglColor(Qt::red);
    glBegin(GL_QUADS);
    glVertex2d(0, 0);
    glVertex2d(500, 0);
    glVertex2d(500, 500);
    glVertex2d(0, 500);
    glEnd();

    qglColor(Qt::green);
    this->renderText(50, 50, "SCISSOR TEST STRING");

    endClipping();
}
The quad gets clipped correctly but the text doesn't.
I tried three ways of implementing the startClipping method: a scissor test, setting the viewport to the clipping area, and a stencil buffer.
None of them worked; the whole string was drawn instead of being cut off at the edges of the clipping area.
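For illustration, the scissor-test variant of startClipping/endClipping looked roughly like this (a sketch from memory, so the details may differ from my actual code):

void Widget::startClipping(int x, int y, int w, int h) {
    // glScissor expects window coordinates with the origin in the lower-left corner
    glEnable(GL_SCISSOR_TEST);
    glScissor(x, y, w, h);
}

void Widget::endClipping() {
    glDisable(GL_SCISSOR_TEST);
}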
Now my question is: is this behavior a bug in Qt, is there something I missed, or is there another approach I could try?
After a week of experimenting, I suddenly found a very simple way to achieve what I was looking for.
Using a QPainter and its methods instead of the QGLWidget's renderText() simply makes text clipping work:
QPainter *painter = new QPainter();
painter->begin(this); // begin() needs a paint device, e.g. the QGLWidget itself
painter->setClipping(true);
painter->setClipPath(...); // or
painter->setClipRect(...); // or
painter->setClipRegion(...);
painter->drawText(...);
painter->end();
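Put together in paintGL, a minimal sketch (the clip rectangle and text position are placeholder values, not my real ones):

void Widget::paintGL() {
    // ... OpenGL drawing as before ...

    QPainter painter(this);                  // paints over the GL content
    painter.setClipRect(10, 10, 100, 100);   // clipping region in widget coordinates
    painter.setPen(Qt::green);
    painter.drawText(50, 50, "SCISSOR TEST STRING");
    painter.end();
}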
As I understand it, this is by design. According to the documentation ( https://qt-project.org/doc/qt-4.8/qglwidget.html#renderText ):
Note: This function clears the stencil buffer.
Note: This function temporarily disables depth-testing when the text is drawn.
However, for the overload that takes x, y and z coordinates (the "xyz version"):
Note: If depth testing is enabled before this function is called, then the drawn text will be depth-tested against the models that have already been drawn in the scene. Use glDisable(GL_DEPTH_TEST) before calling this function to annotate the models without depth-testing the text.
So, if you use the second version (by including a z-value, e.g. 0) in your original code, I think you get what you want. You would typically want this when the scene is "real" 3D (e.g. axis labels on a 3D plot).
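For example, the call from the original paintGL would become something like this (a sketch; note that x, y and z are scene coordinates here, not window pixels):

glEnable(GL_DEPTH_TEST);                                  // text gets depth-tested against the scene
this->renderText(50.0, 50.0, 0.0, "SCISSOR TEST STRING");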
The documentation also mentions using drawText.
Goal
I'd like to implement an actual widget for Qt3D since QWidget::createWindowContainer just doesn't cut it for me.
Problem Description
My first approach, letting a new class subclass both QWidget and QSurface, was not successful, since the Qt3D code expects either a QWindow or a QOffscreenSurface in multiple places and I don't want to recompile the whole Qt3D base.
My second idea was to render the Qt3D content to an offscreen surface and then draw the texture on a quad in a QOpenGLWidget. When I use a QRenderCapture framegraph node to save the image rendered to the offscreen texture, then load that image into a QOpenGLTexture and draw it in the QOpenGLWidget's paintGL function, I can see the rendered image - i.e. rendering in Qt3D and in the OpenGL widget both work properly. But this is extremely slow compared to rendering the content from Qt3D directly.
Now, when I use the GLuint returned by the QTexture2D to bind the texture during rendering of the QOpenGLWidget, everything stays black.
Of course this would make sense if the contexts of the QOpenGLWidget and Qt3D were completely separate. But by retrieving the AbstractRenderer from the QRenderAspectPrivate I was able to obtain the context that Qt3D uses. In my main.cpp I set
QApplication::setAttribute(Qt::AA_ShareOpenGLContexts);
The context of the QOpenGLWidget and the context of Qt3D both reference the same shared context - I verified this by printing both using qDebug; they are the same object.
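A more direct check than comparing the printed pointers is QOpenGLContext::areSharing; a sketch (widget and qt3dContext stand in for my actual variables):

// true if the two contexts share resources such as textures
qDebug() << QOpenGLContext::areSharing(widget->context(), qt3dContext);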
Shouldn't this allow me to use the texture from Qt3D?
Or does anyone have other suggestions on how to implement such a widget? I just thought this would be the easiest way.
Implementation Details / What I've tried so far
This is what the paintGL function in my QOpenGLWidget looks like:
glClearColor(1.0, 1.0, 1.0, 1.0);
glDisable(GL_BLEND);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

d->m_shaderProgram->bind();
{
    // Orthographic projection that maps the unit quad onto the whole widget
    QMatrix4x4 m;
    m.ortho(0, 1, 1, 0, 1.0f, 3.0f);
    m.translate(0.0f, 0.0f, -2.0f);

    QOpenGLVertexArrayObject::Binder vaoBinder(&d->m_vao);

    d->m_shaderProgram->setUniformValue("matrix", m);
    // Bind the texture Qt3D renders into, using its raw GL handle
    glBindTexture(GL_TEXTURE_2D, d->m_colorTexture->handle().toUInt());
    glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
}
d->m_shaderProgram->release();
m_colorTexture is the QTexture2D that is attached to the QRenderTargetOutput/QRenderTarget that Qt3D renders the scene offscreen to.
I have a QFrameAction in place to trigger draw updates on the QOpenGLWidget:
connect(d->m_frameAction, &Qt3DLogic::QFrameAction::triggered, this, &Qt3DWidget::paintGL);
I have verified that this indeed calls the paintGL function. So every time I draw the QOpenGLWidget, the frame should be ready and present in the texture.
I've also tried to replace the m_colorTexture with a QSharedGLTexture. Then I created this texture with the context of the QOpenGLWidget like this:
m_texture = new QOpenGLTexture(QOpenGLTexture::Target2D);
m_texture->setFormat(QOpenGLTexture::RGBA8_UNorm);
// w and h are width and height of the widget
m_texture->setSize(w, h);
// m_colorTexture is the QSharedGLTexture
m_colorTexture->setTextureId(m_texture->textureId());
In the resizeEvent function of the QOpenGLWidget I set the appropriate size on this texture and also on all offscreen resources of Qt3D. This also shows just a black screen. Placing qDebug() << glGetError(); directly after binding the texture simply shows 0 every time, so I assume that there aren't any errors.
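Roughly, that resize handling looks like this (a sketch; the member names are mine and the rest of the Qt3D-side resizing is abbreviated):

void Qt3DWidget::resizeEvent(QResizeEvent *event)
{
    QOpenGLWidget::resizeEvent(event);

    const QSize s = event->size();
    // Resize the Qt3D offscreen color attachment to match the widget
    d->m_colorTexture->setSize(s.width(), s.height());
    // ... same for the depth attachment, camera aspect ratio, etc.
}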
The code can be found here in my GitHub project.
Update (10th May 2021, since I stumbled upon my answer again):
My Qt3DWidget implementation works perfectly now, the issue was that I had to call update() when the frame action was triggered instead of paintGL (duh, silly me, I actually know that).
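That is, the connect from above becomes something like:

connect(d->m_frameAction, &Qt3DLogic::QFrameAction::triggered,
        this, [this]() { this->update(); });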
Although I didn't find an exact solution to my question I'll post an answer here since I succeeded in creating a Qt3D widget.
The code can be found here. It's not the cleanest solution because I think it should be possible to use the shared texture somehow. Instead, now I'm setting the QOpenGLWidget's context on Qt3D for which I have to use Qt3D's private classes. This means that Qt3D draws directly onto the frame buffer bound by the OpenGL widget. Unfortunately, now the widget has to be the render driver and performs manual updates on the QAspectEngine by calling processFrame. Ideally, I would have liked to leave all processing loops to Qt3D but at least the widget works now as it is.
Edit:
I found an example for QSharedGLTexture in the manual tests here. It works the other way round, i.e. OpenGL renders to the texture and Qt3D uses it, so I assume it should be possible to reverse the direction. Unfortunately, QSharedGLTexture seems to be a bit unstable, as resizing the OpenGL window sometimes crashes the app. That's why I'll stick with my solution for now. But if anyone has news regarding this issue, feel free to post an answer!
I'm creating a box and placing "magnets" on the bottom. The sides are slightly see-through (alpha is somewhere between .2 and .5) and the bottom is solid. I'm trying to use gluUnProject() to select where the "magnet" is placed, but when the sides of the box are rendered, I can't get my magnets into the box.
Is there any way to still have the sides of the box rendered but ignore them for the sake of mouse clicks?
I've tried GL_CULL_FACE but at first glance that doesn't seem to be what I'm looking for.
So if I understand correctly, you have semi-transparent boxes, and when the magnet is inside a box you want to see it through the sides according to their semi-transparency.
My guess is that when you're drawing the boxes you have depth writes turned on. This way, if a box happens to get drawn before the magnet, the magnet will fail the depth test and the part that's inside won't get drawn.
The easiest way to do this is:
Draw all the solid objects first
Disable depth writes:
glDepthMask(GL_FALSE);
Use an order-independent blending function when drawing the semi-transparent objects, for example:
glBlendFunc(GL_ONE, GL_ONE)
Draw all your transparent objects
Enable depth writes again
glDepthMask(GL_TRUE);
Bear in mind this simple method will only work if you can get away with using a commutative blending equation; if not, consider order-independent transparency. A good article is "Efficient Layered Fragment Buffer Techniques" by Pyarelal Knowles, Geoff Leach, and Fabio Zambetta.
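Putting the steps together, a minimal sketch (drawSolids and drawTransparents stand in for your own drawing code):

// 1. solid geometry first, with normal depth writes
drawSolids();

// 2. transparent geometry: depth test stays on, but no depth writes,
//    and a commutative (order-independent) blend function
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);   // additive blending is commutative
glDepthMask(GL_FALSE);
drawTransparents();

// 3. restore state for the next frame
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);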
I'm trying to translate an OpenGL renderer into DirectX9. It mostly seems to work, but the two don't seem to agree on the settings for alpha blending. In OpenGL, I'm using:
glDepthFunc(GL_LEQUAL);
glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
and never actually setting the GL_DEST_ALPHA, so it's whatever the default is. This works fine. Translating to DirectX, I get:
device->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
which should do about the same thing, but totally doesn't. The closest I can get is:
device->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_DESTALPHA);
which is almost right, but if the geometry overlaps itself, the alpha in front overrides the alpha in back, and makes the more distant faces invisible. For the record, the other potentially related render states I've got going on are:
device->SetRenderState(D3DRS_LIGHTING, FALSE);
device->SetRenderState(D3DRS_ZENABLE, TRUE);
device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
device->SetTextureStageState(0, D3DTSS_ALPHAOP, D3DTOP_MODULATE);
At this point, I feel like I'm just changing states at random to see which combination gives the best results, but nothing is working as well as it did in OpenGL. Not sure what I'm missing here...
The alpha blending itself is performed correctly; otherwise, every particle would look strange. The reason why some parts of some particles are not drawn is that they are behind the transparent parts of other particles.
To solve this problem you have two options:
Turn off ZWriteEnable for the particles. With that, every object drawn after a particle will appear in front of it. This could lead to problems if you have objects that should actually be behind the particles but are drawn afterwards.
Enable alpha testing for the particles. Alpha testing is a technique that removes transparent pixels (below a certain threshold) from the render target, including the Z-buffer.
By the way, when rendering transparent objects it is almost always necessary to sort them to avoid Z-buffer issues; the above solutions only work for some special cases.
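In D3D9 render-state terms, the two options look roughly like this (a sketch; the alpha-test threshold of 0x80 is just an example value):

// Option 1: keep depth testing, but don't write depth for the transparent geometry
device->SetRenderState(D3DRS_ZENABLE, TRUE);
device->SetRenderState(D3DRS_ZWRITEENABLE, FALSE);

// Option 2: alpha testing - discard pixels whose alpha is below the threshold
device->SetRenderState(D3DRS_ALPHATESTENABLE, TRUE);
device->SetRenderState(D3DRS_ALPHAREF, 0x80);
device->SetRenderState(D3DRS_ALPHAFUNC, D3DCMP_GREATEREQUAL);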
I am developing a paint-like application using C++ and OpenGL. But every time I draw objects like circles, lines etc. they don't stay on the page. By this I mean that every new object I draw gets placed on a blank page. How do I get my drawn objects to persist?
OpenGL has no geometry persistency. Basically it gives you pencils, brushes and paint with which you draw on a canvas called the "framebuffer". So after you have drawn something and cleared the framebuffer, it will not magically reappear.
There are two solutions:
You keep a list of all drawing operations and at each redraw you repaint everything from that list.
After drawing something, copy the framebuffer image to a texture, and instead of glClear you fill the background with that texture.
Both techniques can be combined.
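A minimal sketch of the first approach (Shape and drawShape are placeholders for your own primitive types and rendering code):

#include <vector>

struct Shape { /* type, coordinates, color, ... */ };
void drawShape(const Shape &s);   // issues the GL calls for one primitive

std::vector<Shape> shapes;        // grows as the user draws

void display() {
    glClear(GL_COLOR_BUFFER_BIT);
    for (const Shape &s : shapes)
        drawShape(s);             // re-issue every stored drawing operation
    // swap buffers here, depending on your toolkit
}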
Just don't clear the framebuffer and anything you draw will stay on the screen. This is the same method I use to allow users to draw on my OpenGL models. This is only good for marking up an image, since by using this method you can't erase what you've drawn, unless your method of erasing is to draw using your background color.
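For example (a sketch assuming a single-buffered context; with double buffering the front and back buffers would get out of sync):

void display() {
    // no glClear here - previously drawn strokes remain in the framebuffer
    drawNewStroke();   // placeholder for whatever the user just drew
    glFlush();         // single-buffered: flush instead of swapping
}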
I am rendering an OpenGL scene that includes some bitmap text. It is my understanding that the order I draw things in determines which items are on top.
However, my bitmap text, even though I draw it last, is not on top!
For instance, I am drawing:
1) Background
2) Buttons
3) Text
All at the same z depth. The buttons appear above the background, but the text is invisible. If I change the z depth of the text, I can see it, but then I have other problems.
I am using the bitmap text method from Nehe's Tutorials.
How can I make the text visible without changing the z depth?
You can simply disable the z-test via
glDisable(GL_DEPTH_TEST);
If you do so, the Z of your text primitives will be ignored; primitives are drawn in the same order as you call the GL functions.
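So a typical frame would look roughly like this (drawScene and drawText stand in for your own code):

drawScene();                  // background, buttons, ... with depth testing

glDisable(GL_DEPTH_TEST);     // text ignores the depth buffer from here on
drawText();                   // e.g. the NeHe-style bitmap text
glEnable(GL_DEPTH_TEST);      // restore for the next frame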
Another way would be to apply a constant z-offset via glPolygonOffset (not recommended) or to set the depth-compare function to GL_LEQUAL via glDepthFunc (the EQUAL part is the important one). That makes sure that primitives drawn at the same depth are rendered on top of each other.
Hope that helps.
You can also use glDepthFunc (GL_ALWAYS).