OpenSceneGraph, HUDs, & Textured Qt Widget Problems - c++

So I have a program which is a Qt app. I have some basic Qt GUI on the outside, but then I have a Qt widget that uses OpenSceneGraph to render a 3D scene. To make things more complicated, inside that scene I have a HUD that toggles on and off. This HUD consists of some graphic elements and a Qt widget rendered to a texture.
I basically have that working, but I am having some size issues with the HUD/Qt widget. When I first toggle the HUD to visible, the Qt widget is there but way too big. I can change the size, but regardless, the first time I press a key the Qt widget is automatically resized to fit the textured area I give it (which is what I expect); however, this auto-resized widget doesn't fit the area correctly. It's impossible to read the table that the widget contains.
To help, I have two screenshots. The first is before I type a key:
http://cerrnim.com/wp-content/uploads/2012/12/before.png
And the second is after I type a key:
http://cerrnim.com/wp-content/uploads/2012/12/after.png
Additionally, here are some code fragments showing how I create the HUD. It's of course part of a much larger program, but hopefully this is enough information.
/* AUX function to create HUD geo. */
osg::Geode* HUDGeometry( int x1, int y1, int x2, int y2,
                         std::string model, HUDEvents event, NetworkViewer* viewer ) {
    osg::Geometry* quad = osg::createTexturedQuadGeometry(osg::Vec3(x1, y1, 0),
        osg::Vec3(x2 - x1, 0, 0), osg::Vec3(0, y2 - y1, 0), 1, 1);
    osg::Geode* geode = new osg::Geode();
    geode->setName( model );
    geode->setUserData( new HUDEvent( event, viewer ) );
    osg::Texture2D* HUDTexture = new osg::Texture2D();
    HUDTexture->setDataVariance(osg::Object::DYNAMIC);
    osg::Image* hudImage = osgDB::readImageFile(model);
    HUDTexture->setImage(hudImage);
    geode->getOrCreateStateSet()->setTextureAttributeAndModes(
        0, HUDTexture, osg::StateAttribute::ON);
    geode->addDrawable( quad );
    return geode;
}
/* Creates the HUD but does not display it yet. */
void NetworkViewer::initHUD( ) {
    osg::MatrixTransform* mt = new osg::MatrixTransform;
    osg::Camera* hudCamera = new osg::Camera;
    hudCamera->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
    hudCamera->setViewMatrix(osg::Matrix::identity());
    //hudCamera->setProjectionResizePolicy(osg::Camera::FIXED);
    hudCamera->setProjectionMatrixAsOrtho2D(0,100,0,100);
    hudCamera->setClearMask(GL_DEPTH_BUFFER_BIT);
    hudCamera->setRenderOrder(osg::Camera::POST_RENDER);
    hudCamera->setAllowEventFocus(true);
    QWidget* widget = new QWidget;
    layout = new QVBoxLayout( );
    widget->setLayout(layout);
    widget->layout()->addWidget(((GUI*)view)->getTabs( ));
    //widget->setGeometry(0, 0, 500, 400);
    osg::ref_ptr<osgQt::QWidgetImage> widgetImage = new osgQt::QWidgetImage(widget);
    osg::Geometry* quad = osg::createTexturedQuadGeometry(osg::Vec3(30,32,0),
        osg::Vec3(40,0,0), osg::Vec3(0,35,0), 1, 1);
    osg::Geode* geode = new osg::Geode;
    geode->addDrawable(quad);
    osg::Texture2D* texture = new osg::Texture2D(widgetImage.get());
    texture->setResizeNonPowerOfTwoHint(false);
    texture->setFilter(osg::Texture::MIN_FILTER, osg::Texture::LINEAR);
    texture->setWrap(osg::Texture::WRAP_S, osg::Texture::CLAMP_TO_EDGE);
    texture->setWrap(osg::Texture::WRAP_T, osg::Texture::CLAMP_TO_EDGE);
    mt->getOrCreateStateSet()->setTextureAttributeAndModes(0, texture, osg::StateAttribute::ON);
    mt->addChild(hudCamera);
    osgViewer::InteractiveImageHandler* handler =
        new osgViewer::InteractiveImageHandler(widgetImage.get(), texture, hudCamera);
    mt->getOrCreateStateSet()->setMode(GL_LIGHTING, osg::StateAttribute::OFF);
    mt->getOrCreateStateSet()->setMode(GL_BLEND, osg::StateAttribute::ON);
    mt->getOrCreateStateSet()->setRenderingHint(osg::StateSet::TRANSPARENT_BIN);
    mt->getOrCreateStateSet()->setAttribute(new osg::Program);
    quad->setEventCallback(handler);
    quad->setCullCallback(handler);
    hudCamera->addChild( geode );
    hudCamera->addChild( HUDGeometry(73, 73, 75, 75,
        "models/quit.png", EXIT_OBJ, this) );
    hudCamera->addChild( HUDGeometry(25, 25, 75, 75,
        "models/hud.png", NO_EVENT, this) );
    osg::Group* overlay = new osg::Group;
    overlay->addChild(mt);
    root->addChild(overlay);
    HUD = hudCamera;
    disableHUD( );
}

I think the main problem is that you do not adapt the dimensions of the widget to the dimensions of the quad you render it on.
I'm not sure what QWidgetImage is doing internally, but I guess it's just rendering the widget onto a canvas of an appropriate size and converting the result into an image. In your code you map the complete image (regardless of its dimensions or aspect ratio) onto that quad. If you want the widget to fit the quad, you need to resize the widget before creating an image of it.
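A minimal sketch of that fix, assuming the quad keeps the 40×35 proportions used in initHUD() (the exact pixel size here is hypothetical; only the aspect ratio needs to match):
// Give the widget a fixed size with the quad's 40:35 aspect ratio
// before it is captured by osgQt::QWidgetImage.
QWidget* widget = new QWidget;
widget->setLayout(layout);
widget->layout()->addWidget(((GUI*)view)->getTabs());
widget->setGeometry(0, 0, 400, 350); // 400:350 == 40:35, matches the quad
osg::ref_ptr<osgQt::QWidgetImage> widgetImage = new osgQt::QWidgetImage(widget);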


Qt3d. Draw transparent QSphereMesh over triangles

I have a function that draws triangles through OpenGL.
I draw two triangles by pressing a button (function on_drawMapPushButton_clicked()).
Then I draw a sphere that is placed above these triangles. Now I see that the sphere is drawn correctly over the first triangle, but the second triangle is drawn over the sphere and not vice versa.
If I press the button a second time, the sphere is drawn correctly over both triangles.
When I press the button a third time, the second triangle is drawn over the sphere again.
When I press the button a fourth time, the sphere is drawn correctly over both triangles, and so on.
If I use QPhongMaterial instead of QPhongAlphaMaterial for the sphere mesh, the sphere is always drawn correctly over both triangles, as it should be.
I can't understand what I am doing wrong that keeps my sphere from always being drawn over the triangles.
Code that draws the transparent sphere:
selectModel_ = new Qt3DExtras::QSphereMesh(selectEntity_);
selectModel_->setRadius(75);
selectModel_->setSlices(150);
selectMaterial_ = new Qt3DExtras::QPhongAlphaMaterial(selectEntity_);
selectMaterial_->setAmbient(QColor(28, 61, 136));
selectMaterial_->setDiffuse(QColor(11, 56, 159));
selectMaterial_->setSpecular(QColor(10, 67, 199));
selectMaterial_->setShininess(0.8f);
selectEntity_->addComponent(selectModel_);
selectEntity_->addComponent(selectMaterial_);
Function drawTriangles:
void drawTriangles(QPolygonF triangles, QColor color){
    int numOfVertices = triangles.size();
    // Create and fill vertex buffer
    QByteArray bufferBytes;
    bufferBytes.resize(3 * numOfVertices * static_cast<int>(sizeof(float)));
    float *positions = reinterpret_cast<float*>(bufferBytes.data());
    for(auto point : triangles){
        *positions++ = static_cast<float>(point.x());
        *positions++ = 0.0f; // We need to draw only on the surface
        *positions++ = static_cast<float>(point.y());
    }
    geometry_ = new Qt3DRender::QGeometry(mapEntity_);
    auto *buf = new Qt3DRender::QBuffer(geometry_);
    buf->setData(bufferBytes);
    positionAttribute_ = new Qt3DRender::QAttribute(mapEntity_);
    positionAttribute_->setName(Qt3DRender::QAttribute::defaultPositionAttributeName());
    positionAttribute_->setVertexBaseType(Qt3DRender::QAttribute::Float); // In our buffer we will have only floats
    positionAttribute_->setVertexSize(3); // Size of a vertex
    positionAttribute_->setAttributeType(Qt3DRender::QAttribute::VertexAttribute); // Attribute type
    positionAttribute_->setByteStride(3 * sizeof(float));
    positionAttribute_->setBuffer(buf);
    geometry_->addAttribute(positionAttribute_); // Add attribute to our Qt3DRender::QGeometry
    // Create and fill an index buffer
    QByteArray indexBytes;
    indexBytes.resize(numOfVertices * static_cast<int>(sizeof(unsigned int))); // start to end
    unsigned int *indices = reinterpret_cast<unsigned int*>(indexBytes.data());
    for(unsigned int i = 0; i < static_cast<unsigned int>(numOfVertices); ++i) {
        *indices++ = i;
    }
    auto *indexBuffer = new Qt3DRender::QBuffer(geometry_);
    indexBuffer->setData(indexBytes);
    indexAttribute_ = new Qt3DRender::QAttribute(geometry_);
    indexAttribute_->setVertexBaseType(Qt3DRender::QAttribute::UnsignedInt); // In our buffer we will have only unsigned ints
    indexAttribute_->setAttributeType(Qt3DRender::QAttribute::IndexAttribute); // Attribute type
    indexAttribute_->setBuffer(indexBuffer);
    indexAttribute_->setCount(static_cast<unsigned int>(numOfVertices)); // Set count of our vertices
    geometry_->addAttribute(indexAttribute_); // Add the attribute to our Qt3DRender::QGeometry
    shape_ = new Qt3DRender::QGeometryRenderer(mapEntity_);
    shape_->setPrimitiveType(Qt3DRender::QGeometryRenderer::Triangles);
    shape_->setGeometry(geometry_);
    // Create material
    material_ = new Qt3DExtras::QPhongMaterial(mapEntity_);
    material_->setAmbient(color);
    trianglesEntity_ = new Qt3DCore::QEntity(mapEntity_);
    trianglesEntity_->addComponent(shape_);
    trianglesEntity_->addComponent(material_);
}
Button press handler on_drawMapPushButton_clicked():
void on_drawMapPushButton_clicked()
{
    clearMap(); // Implementation is below
    QPolygonF triangle1;
    triangle1 << QPointF(0, -1000) << QPointF(0, 1000) << QPointF(1000, -1000);
    drawTriangles(triangle1, Qt::black);
    QPolygonF triangle2;
    triangle2 << QPointF(-1000, -1000) << QPointF(-100, 1000) << QPointF(-100, -1000);
    drawTriangles(triangle2, Qt::red);
}
Map clearing function clearMap():
void clearMap()
{
    if(mapEntity_){
        delete mapEntity_;
        mapEntity_ = nullptr;
        mapEntity_ = new Qt3DCore::QEntity(view3dRootEntity_);
    }
}
OK, here comes the extended answer.
Whether this happens or not depends on the order of your entities. If you experiment with two simple spheres, one transparent and one not, you will see that when the transparent sphere is added later, it is drawn above the opaque object, just like you want it to be.
This happens because the opaque object is drawn first (it comes first in the scene graph) and the transparent object later, which gives you the result you want. In the other case, where the transparent object gets drawn first, the opaque object is drawn above it because the QPhongAlphaMaterial has a QNoDepthMask render state which tells it not to write to the depth buffer. Thus, the opaque object always passes the depth test, even where the transparent object has already drawn. You have to do some more work to properly draw transparent objects for arbitrary scene graphs and camera positions.
The Qt3D Rendering Graph
To understand what you have to do you should understand how the Qt3D rendering graph is laid out. If you know this already you can skip this part.
In the following text, italic words reference items in the graph image.
If you use a Qt3DWindow, you can't access the root node of the rendering graph; it is maintained by the window. You can access the QRenderSettings and the root node of your framegraph through the functions renderSettings() and activeFrameGraph(), which you can both call on the window. You can also set the root node of the scene graph through the setRootEntity() function of Qt3DWindow. The window internally has a QAspectEngine, where it sets the root node of the whole graph, which is the root node of the rendering graph in the graph image above.
If you want to insert a framegraph node into the existing framegraph of the 3D window, you have to add it as the parent of the active framegraph, which I will explain in the next section. If you have your own custom framegraph which you set on the window through setActiveFrameGraph(), then just append it to the end; this should suffice.
Using QSortPolicy
As you already found out in your other question, you can use QSortPolicy in your framegraph to sort the entities by distance to the camera. You can add a sort policy as follows (assuming that view is your Qt3DWindow and scene is the root entity of your scene graph, although I don't understand why it has to be):
Qt3DRender::QFrameGraphNode *framegraph = view.activeFrameGraph();
Qt3DRender::QSortPolicy *sortPolicy = new Qt3DRender::QSortPolicy(scene);
framegraph->setParent(sortPolicy);
QVector<Qt3DRender::QSortPolicy::SortType> sortTypes =
    QVector<Qt3DRender::QSortPolicy::SortType>() << Qt3DRender::QSortPolicy::BackToFront;
sortPolicy->setSortTypes(sortTypes);
view.setActiveFrameGraph(framegraph);
The issue with this code is that this sort policy sorts the entities by the distance of their centers to the camera. If one of the opaque objects is closer to the camera than the transparent object, it gets drawn later anyway and occludes the transparent object. See the images below for a graphical explanation.
The red and black spheres are further away from the camera than the torus; that's why they get drawn first and don't occlude the torus.
Now the center of the red sphere is closer to the camera than the center of the torus. It gets rendered later than the torus and occludes it.
Using Two Framegraph Branches
You can tackle the issue above by using two framegraph branches: one which draws all opaque entities and one which draws all transparent ones. To achieve this you have to make use of QLayer and QLayerFilter. You can attach layers to entities and then add layer filters to your framegraph. This way you can exclude entities from entering a certain branch of your framegraph.
Let's say you create two layers, one for opaque objects and one for transparent ones:
Qt3DRender::QLayer *transparentLayer = new Qt3DRender::QLayer;
Qt3DRender::QLayer *opaqueLayer = new Qt3DRender::QLayer;
You have to attach the transparent layer to each transparent object and the opaque layer to each opaque object as a component (using addComponent()).
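For example, with the entities from the question's code (the sphere uses the alpha material and is transparent; the triangles are opaque):
selectEntity_->addComponent(transparentLayer);
trianglesEntity_->addComponent(opaqueLayer);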
Unfortunately, you need a special framegraph tree to include the two corresponding layer filters (again, assuming that view is your Qt3DWindow):
Qt3DRender::QRenderSurfaceSelector *renderSurfaceSelector
    = new Qt3DRender::QRenderSurfaceSelector();
renderSurfaceSelector->setSurface(&view);
Qt3DRender::QClearBuffers *clearBuffers
    = new Qt3DRender::QClearBuffers(renderSurfaceSelector);
clearBuffers->setBuffers(Qt3DRender::QClearBuffers::AllBuffers);
clearBuffers->setClearColor(Qt::white);
This is the first branch to clear the buffers. Now you add the following code:
Qt3DRender::QViewport *viewport = new Qt3DRender::QViewport(renderSurfaceSelector);
Qt3DRender::QCameraSelector *cameraSelector = new Qt3DRender::QCameraSelector(viewport);
Qt3DRender::QCamera *camera = new Qt3DRender::QCamera(cameraSelector);
// set your camera parameters here
cameraSelector->setCamera(camera);
Since you create the QViewport as a child of the QRenderSurfaceSelector, it is now a sibling of the QClearBuffers in your framegraph. You can see an illustration of the example framegraphs here.
Now you have to create the two leaf nodes that contain the layer filters. The Qt3D engine always executes a whole branch when it reaches a leaf. This means that the opaque objects are drawn first and then the transparent ones.
// not entirely sure why transparent filter has to go first
// I would have expected the reversed order of the filters but this works...
Qt3DRender::QLayerFilter *transparentFilter = new Qt3DRender::QLayerFilter(camera);
transparentFilter->addLayer(transparentLayer);
Qt3DRender::QLayerFilter *opaqueFilter = new Qt3DRender::QLayerFilter(camera);
opaqueFilter->addLayer(opaqueLayer);
The two layer filters are now leaf nodes in your framegraph branch, and Qt3D will first draw the opaque objects and then, since it uses the same viewport and everything else, draw the transparent objects above them. They will be drawn correctly (i.e. not in front of parts of opaque objects that the transparent object actually lies behind), because we did not clear the depth buffers again; the framegraph only splits at the camera node.
Now set the new framegraph on your Qt3DWindow:
view.setActiveFrameGraph(renderSurfaceSelector);
Edit (26.03.21): As Patrick B. pointed out correctly, using the suggested solution with two layers you will have to add both layers as components to any lights in the scene. You can get around this by setting the filter mode on the QLayerFilters to QLayerFilter::FilterMode::DiscardAnyMatching and then reverse the order of the filters. This way, the transparentFilter discards any entities with the transparentLayer attached - but not the lights because they don't have the transparentLayer. Vice versa for the opaqueFilter.
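A rough, untested sketch of that variant (note the enum value is spelled DiscardAnyMatchingLayers in Qt3DRender, and the filter order is reversed relative to the accept-mode code above):
// Each filter now discards its matching layer instead of accepting it;
// lights carry no layer component, so they pass through both branches.
Qt3DRender::QLayerFilter *opaqueFilter = new Qt3DRender::QLayerFilter(camera);
opaqueFilter->setFilterMode(Qt3DRender::QLayerFilter::DiscardAnyMatchingLayers);
opaqueFilter->addLayer(opaqueLayer);
Qt3DRender::QLayerFilter *transparentFilter = new Qt3DRender::QLayerFilter(camera);
transparentFilter->setFilterMode(Qt3DRender::QLayerFilter::DiscardAnyMatchingLayers);
transparentFilter->addLayer(transparentLayer);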
My mistake was that I created and deleted the Triangles and Sphere entities in the wrong order.
In pseudocode, the right order is as follows:
clearTriangles();
clearSphere();
drawTriangles();
drawSphere();
If you are using Qt3D with QML and want to control the order in which elements are drawn, you can control it by the order of the layers in your QML file.
Something like:
Layer {
    objectName: "firstLayer"
    id: firstLayer
}
Layer {
    objectName: "secondLayer"
    id: secondLayer
}
The order you add them to layer filters will then control which is drawn first:
RenderSurfaceSelector {
    CameraSelector {
        id: cameraSelector
        camera: mainCamera
        FrustumCulling {
            ClearBuffers {
                buffers: ClearBuffers.AllBuffers
                clearColor: "#04151c"
                NoDraw {}
            }
            LayerFilter {
                objectName: "firstLayerFilter"
                id: firstLayerFilter
                layers: [firstLayer]
            }
            LayerFilter {
                id: secondLayerFilter
                objectName: "secondLayerFilter"
                layers: [secondLayer]
            }
        }
    }
}
Then anything you add to the secondLayer will be drawn on top of the first layer. I used this to make sure text always showed up in front of shapes, but it can be used similarly with transparencies.

How to create polygons to display running number in cocos2dx

I'm trying to create a node that is simply a rectangle with a number in it. And this is how I'm doing it now:
int size = 100, fontSize = 64;
auto node = DrawNode::create();
Vec2 vertices[] =
{
    Vec2(0, size),
    Vec2(size, size),
    Vec2(size, 0),
    Vec2(0, 0)
};
node->drawPolygon(vertices, 4, Color4F(1.0f, 0.3f, 0.3f, 1), 0, Color4F(1.0f, 1.0f, 1.0f, 1));
auto texture = new Texture2D();
int numberToDisplay = 2000;
std::string s = std::to_string(numberToDisplay);
texture->initWithString(s.c_str(), "fonts/Marker Felt.ttf", fontSize, Size(size, size), TextHAlignment::CENTER, TextVAlignment::CENTER);
auto textSprite = Sprite::createWithTexture(texture);
node->addChild(textSprite);
textSprite->setPosition(size/2, size/2);
Every time I want to change the number I have to re-create the texture sprite, remove the current child, and add the new one. Is there a better way to do it?
I wonder whether you need some special features; if not, why not use LayerColor and LabelTTF?
LayerColor* node = LayerColor::create(Color4B(255, 85, 85, 255), 100, 100);
LabelTTF* label = LabelTTF::create(s, "fonts/Marker Felt.ttf", fontSize);
node->addChild(label);
Just change the content of the LabelTTF; there is no need to re-create the sprite.
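For example, assuming the label is kept around (e.g. as a member variable) so it can be updated later:
label->setString(std::to_string(2048)); // hypothetical new number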
You could use two different techniques to achieve this; to me, both of them are good.
1. Use the texture cache to cache textures and change the sprite's texture at run time (good if you know exactly how many textures there are and they all have the same size). In your .h file define the textures like:
Texture2D *startTexture, *endTexture, *middleTexture;
In your .cpp file initialize them like:
startTexture = Director::getInstance()->getTextureCache()->addImage("start.png");
endTexture = Director::getInstance()->getTextureCache()->addImage("end.png");
middleTexture = Director::getInstance()->getTextureCache()->addImage("middle.png");
After that, when you want to change the texture of any sprite, simply do:
textSprite->setTexture(startTexture);
For this to work, declare "textSprite" in your .h file as well for quick access.
Pitfall: changing the texture doesn't change the sprite's initial bounding box. If the initial sprite texture was 32*32 and the changed texture is 50*50, the extra texture will be cropped automatically starting from the origin point, which might look bad. To overcome this you need to change the rect as well, using:
textSprite->setTextureRect(
    Rect(0, 0, startTexture->getContentSize().width,
         startTexture->getContentSize().height));
2. Using the sprite frame cache: put all your textures in a sprite sheet, then load it into memory like:
SpriteFrameCache *spriteCache = SpriteFrameCache::getInstance();
spriteCache->addSpriteFramesWithFile("test.plist", "test.png");
Now whenever you want to change your texture, do it like this:
textSprite->setSpriteFrame(
    SpriteFrameCache::getInstance()->getSpriteFrameByName("newImage.png"));
This will first check the sprite frame cache for an image named "newImage.png"; if it is found in memory, it will return that sprite frame, otherwise it will return nullptr.

Insane CPU usage in QT 5.0

I'm having trouble with the Qt framework, particularly with the paintEvent of QWidget. I have a QWidget set up and am overriding its paintEvent. I need to render a bunch of rectangles (a grid system), 51 by 19, leading to 969 rectangles being drawn. This is done in a for loop. Then I also need to draw an image on each of these grid cells. The QWidget is added to a QMainWindow, which is shown.
This works nicely, but it's using up 47% of the CPU per open window! And I want to allow the user to open multiple windows like this, likely having 3-4 open at a time, which puts the CPU close to 150%.
Why does this happen? Here are the paintEvent contents. The JNI calls don't cause the CPU usage; commenting them out doesn't lower it, but commenting out the p.fillRect and Renderer::renderString (which draws the image) lowers the CPU to about 5%.
// Background
QPainter p(this);
p.fillRect(0, 0, this->width(), this->height(), QBrush(QColor(0, 0, 0)));
// Lines
for (int y = 0; y < Global::terminalHeight; y++) {
    // Line and color method ID
    jmethodID lineid = Manager::jenv->GetMethodID(this->javaClass, "getLine", "(I)Ljava/lang/String;");
    error();
    jmethodID colorid = Manager::jenv->GetMethodID(this->javaClass, "getColorLine", "(I)Ljava/lang/String;");
    error();
    // Values
    jstring jl = (jstring) Manager::jenv->CallObjectMethod(this->javaObject, lineid, jint(y));
    error();
    jstring cjl = (jstring) Manager::jenv->CallObjectMethod(this->javaObject, colorid, jint(y));
    error();
    // Convert to C values
    const char *l = Manager::jenv->GetStringUTFChars(jl, 0);
    const char *cl = Manager::jenv->GetStringUTFChars(cjl, 0);
    QString line = QString(l);
    QString color = QString(cl);
    // Render line
    for (int x = 0; x < Global::terminalWidth; x++) {
        QColor bg = Renderer::colorForHex(color.mid(x + color.length() / 2, 1));
        // Cell location on widget
        int cellx = x * Global::cellWidth + Global::xoffset;
        int celly = y * Global::cellHeight + Global::yoffset;
        // Background
        p.fillRect(cellx, celly, Global::cellWidth, Global::cellHeight, QBrush(bg));
        // String
        // Renders the image to the grid
        Renderer::renderString(p, tc, text, cellx, celly);
    }
    // Release
    Manager::jenv->ReleaseStringUTFChars(jl, l);
    Manager::jenv->ReleaseStringUTFChars(cjl, cl);
}
The problem you're having is that every time you call a draw function on QPainter, there is an overhead in setting up the painter for what it needs to draw, especially when the pen and brush need changing. As you're calling the function over 900 times, that's over 900 times the painter needs to change its internal properties for rendering, which would explain why commenting out the fillRect and renderString calls reduces the CPU usage.
To solve this problem you need to batch up all the paint calls of the same type, where a type uses the same brush and pen. To do this you can use the class QPainterPath and add the objects that you need with functions such as addRect(..), ensuring you do this outside of the paint function!
If, for example, you were going to draw a chess board pattern, you would create two QPainterPath objects and add all the white squares to one and all the black squares to the other. Then when it comes to drawing, use the QPainter function drawPath. This would require only two calls to draw the entire board.
Of course, if you need each square to be a different colour, then you're still going to have the same problem, in which case I suggest generating an image of what you need and rendering that instead.
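A minimal sketch of the batching idea for the two-colour chess-board case (the widget is hypothetical; the grid dimensions are taken from the question):
#include <QPainter>
#include <QPainterPath>
#include <QWidget>

// The two paths are built once in the constructor, so paintEvent()
// needs only two fill calls instead of ~1000 fillRect calls.
class Board : public QWidget {
public:
    explicit Board(QWidget *parent = nullptr) : QWidget(parent) {
        const int cell = 20;
        for (int y = 0; y < 19; ++y)
            for (int x = 0; x < 51; ++x)
                ((x + y) % 2 ? blackPath : whitePath)
                    .addRect(x * cell, y * cell, cell, cell);
    }
protected:
    void paintEvent(QPaintEvent *) override {
        QPainter p(this);
        p.fillPath(whitePath, Qt::white); // one call for all white squares
        p.fillPath(blackPath, Qt::black); // one call for all black squares
    }
private:
    QPainterPath whitePath, blackPath;
};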

MiniMap/PIP for OpenSceneGraph

So I am trying to create a mini-map/PIP. I have an existing program with a scene that runs inside a Qt widget. I have a class, NetworkViewer, which extends CompositeViewer. In NetworkViewer's constructor I call the following function. Notice that root is the scene, which is populated elsewhere.
void NetworkViewer::init() {
    root = new osg::Group();
    viewer = new osgViewer::View();
    viewer->setSceneData( root );
    osg::Camera* camera;
    camera = createCamera(0, 0, 100, 100);
    viewer->setCamera( camera );
    viewer->addEventHandler( new NetworkGUIHandler( (GUI*)view ) );
    viewer->setCameraManipulator( new osgGA::TrackballManipulator );
    viewer->getCamera()->setClearColor(
        osg::Vec4( LIGHT_CLOUD_BLUE_F, 0.0f ) );
    addView( viewer );
    osgQt::GraphicsWindowQt* gw =
        dynamic_cast<osgQt::GraphicsWindowQt*>( camera->getGraphicsContext() );
    QWidget* widget = gw ? gw->getGLWidget() : NULL;
    QGridLayout* grid = new QGridLayout();
    grid->addWidget( widget );
    grid->setSpacing(1);
    grid->setMargin(1);
    setLayout( grid );
    initHUD();
}
The create camera function is as follows:
osg::Camera* createCamera( int x, int y, int w, int h ) {
    osg::DisplaySettings* ds = osg::DisplaySettings::instance().get();
    osg::ref_ptr<osg::GraphicsContext::Traits> traits
        = new osg::GraphicsContext::Traits;
    traits->windowName = "";
    traits->windowDecoration = false;
    traits->x = x;
    traits->y = y;
    traits->width = w;
    traits->height = h;
    traits->doubleBuffer = true;
    traits->alpha = ds->getMinimumNumAlphaBits();
    traits->stencil = ds->getMinimumNumStencilBits();
    traits->sampleBuffers = ds->getMultiSamples();
    traits->samples = ds->getNumMultiSamples();
    osg::ref_ptr<osg::Camera> camera = new osg::Camera;
    camera->setGraphicsContext( new osgQt::GraphicsWindowQt(traits.get()) );
    camera->setViewport( new osg::Viewport(0, 0, traits->width, traits->height) );
    camera->setViewMatrix( osg::Matrix::translate(-10.0f, -10.0f, -30.0f) );
    camera->setProjectionMatrixAsPerspective(
        20.0f,
        static_cast<double>(traits->width) / static_cast<double>(traits->height),
        1.0f, 10000.0f );
    return camera.release();
}
I have been looking at several camera examples and searching for a solution for a while, to no avail. What I am really looking for is for the background to be my main camera, which takes up most of the screen and displays the scene graph, while my mini-map appears in the bottom right. It has the same scene as the main camera but is overlaid on top of it and has its own set of controls for selection etc., since it will have different functionality.
I was thinking that perhaps by adding another camera as a slave I would be able to do this:
camera = createCamera(40, 40, 50, 50);
viewer->addSlave(camera);
But this doesn't seem to work. If I disable the other camera I do see a clear area that this camera was apparently supposed to render into (its viewport), but that doesn't help. I've played around with rendering order, thinking it could be that, to no avail.
Any ideas? What is the best way to do such a minimap? What am I doing wrong? Also, is there any way to make the rendering of the minimap circular instead of rectangular?
I am not personally using OpenSceneGraph, so I can't advise you on your code. I think the best is to ask in the official forums. But I have some ideas on the minimap:
First, you have to specify the basic features of your minimap. Do you want it to emphasize some elements of the scene, or just show the scene (i.e., an RTS-like minimap vs. a simple top-down view of the scene)?
I assume you do not want to emphasize some elements of the scene. I also assume your main camera is not really 'minimap-friendly'. So the simplest is to create a camera with the following properties:
direction = (0,-1,0) (Y is the vertical axis)
mode = orthographic
position = controlled by what you want, for example your main camera
Now, for the integration of the image, you can:
set the viewport of the minimap camera to what you want, if your minimap is rectangular (see the sketch after this list).
render the minimap to a texture (RTT) and then blend it through an extra rendering pass.
You can also search other engines forums (like Irrlicht and Ogre) to see how they're doing minimaps.
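For the first option, here is a rough, untested sketch in OpenSceneGraph terms (the viewport numbers assume the 100×100 window created in createCamera(); since NetworkViewer is already a CompositeViewer, a second View can share the main camera's graphics context and scene):
// Untested sketch: overlay a small top-down view in the bottom-right corner.
osgViewer::View* miniView = new osgViewer::View;
miniView->setSceneData( root ); // same scene as the main view
osg::Camera* cam = miniView->getCamera();
cam->setGraphicsContext( viewer->getCamera()->getGraphicsContext() ); // share, don't create a new window
cam->setViewport( new osg::Viewport( 70, 0, 30, 30 ) ); // bottom-right corner, in pixels
cam->setRenderOrder( osg::Camera::POST_RENDER ); // draw after the main view
cam->setClearMask( GL_DEPTH_BUFFER_BIT ); // keep the main view's colour buffer
cam->setProjectionMatrixAsOrtho( -100, 100, -100, 100, 1, 1000 ); // orthographic, as suggested above
cam->setViewMatrix( osg::Matrix::lookAt( // above the scene, looking along (0,-1,0)
    osg::Vec3(0, 100, 0), osg::Vec3(0, 0, 0), osg::Vec3(0, 0, 1) ) );
addView( miniView );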