Draw a circle on top of Renderer2D - xtk

I created an X.renderer2D so the user can click to pick the centre of a lesion, and I want to show the user where they clicked. My current idea is to freeze the renderer (so the slice and the zoom stay the same) and then use the canvas to draw a circle.
Here is my code:
centerpick2D = new X.renderer2D();
centerpick2D.container = 'pick_center_segment';
centerpick2D.orientation = 'Z';
centerpick2D.init();
centerpick2D.add(volumeT1DCM);
centerpick2D.render();
centerpick2D.interactor.onMouseDown = function() {
  tumorCenter = centerpick2D.xy2ijk(centerpick2D.interactor.mousePosition[0],
                                    centerpick2D.interactor.mousePosition[1]);
  centerpick2D.interactor.config.MOUSEWHEEL_ENABLED = false;
  centerpick2D.interactor.config.MOUSECLICKS_ENABLED = false;
  $('canvas').attr('id', 'xtkCanvas');
  var myCanvas = document.getElementById('xtkCanvas');
  var ctx = myCanvas.getContext('2d');
  ctx.fillStyle = '#FF0000';
  ctx.beginPath();
  ctx.arc(centerpick2D.interactor.mousePosition[0],
          centerpick2D.interactor.mousePosition[1],
          20, 0, Math.PI * 2, true);
  ctx.closePath();
  ctx.fill();
};
I have two problems:
1. Setting MOUSEWHEEL_ENABLED = false and MOUSECLICKS_ENABLED = false has no effect. Calling centerpick2D.init() again makes them take effect, but it adds a second canvas on top of the previous one.
2. My circle does not appear anywhere.
Any help would be greatly appreciated. :-D

Sorry it took me a while to get this uploaded. Here's a quick overview of how I copy the XTK canvas's contents into my own canvas and then do my own custom drawing on top of it. The actual code for my project is all over the place, so I am just pasting formatted snippets here. Again, there is a definite performance drawback here (due to the copying of pixels), so I think it would be better to build all of this into the XTK code in the first place and do all the drawing in one canvas element.
// initialise animation loop with requestAnimationFrame
startDraw: function() {
  var _this = this;
  var time = new Date().getTime();
  function draw() {
    // update time
    var now = new Date().getTime();
    // only draw the frame if 25 milliseconds have passed!
    if (now > (time + 25)) {
      // call drawing function here
      drawFrame();
      time = now;
    }
    requestAnimationFrame(draw);
  }
  requestAnimationFrame(draw);
},
//...
// actual drawing function: for each frame, copy the pixel contents from an
// XTK canvas into the custom canvas and do custom drawing on top of it,
// so the drawing actually happens every frame
drawFrame: function() {
  // ...
  // this.ctx is the context of my own custom canvas element;
  // drawImage() copies the pixels from this.srcCanvas, a predefined XTK canvas
  this.ctx.drawImage(this.srcCanvas, 1, 1);
  // custom function to draw on top of this same canvas (i.e. this.ctx) with
  // standard HTML Canvas functionality; this is where you could draw your own
  // circle based on the user's mouse coords
  this.drawAnnotation();
  // ...
}
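For illustration, here is a minimal sketch of what such a drawAnnotation function might look like; the property names this.annotationX and this.annotationY are hypothetical and stand in for wherever you stored the picked mouse coordinates:
// hypothetical sketch, not from the project above
drawAnnotation: function() {
  // skip until the user has actually picked a point
  if (this.annotationX === undefined) return;
  this.ctx.fillStyle = '#FF0000';
  this.ctx.beginPath();
  this.ctx.arc(this.annotationX, this.annotationY, 20, 0, Math.PI * 2, true);
  this.ctx.closePath();
  this.ctx.fill();
}
Because drawFrame() re-copies the XTK pixels every frame, the circle survives slice changes and zooming without any manual redraw logic.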
Let me know if you have any more questions. The full code is available here:
https://github.com/davidbasalla/IndividualProject


Qt3d. Draw transparent QSphereMesh over triangles

I have a function that draws triangles through OpenGL.
I draw two triangles by pressing a button (the handler on_drawMapPushButton_clicked()).
Then I draw a sphere placed above these triangles. Now I see that the sphere is drawn correctly over the first triangle, but the second triangle is drawn over the sphere, not vice versa.
If I press the button a second time, the sphere is drawn correctly over both triangles.
When I press the button a third time, the second triangle is drawn over the sphere again.
When I press the button a fourth time, the sphere is drawn correctly over both triangles, and so on.
If I use QPhongMaterial instead of QPhongAlphaMaterial for the sphere mesh, the sphere is always drawn correctly over both triangles, like it should be.
I can't understand what I am doing wrong that keeps the sphere from always being drawn over the triangles.
The code that draws the transparent sphere:
selectModel_ = new Qt3DExtras::QSphereMesh(selectEntity_);
selectModel_->setRadius(75);
selectModel_->setSlices(150);
selectMaterial_ = new Qt3DExtras::QPhongAlphaMaterial(selectEntity_);
selectMaterial_->setAmbient(QColor(28, 61, 136));
selectMaterial_->setDiffuse(QColor(11, 56, 159));
selectMaterial_->setSpecular(QColor(10, 67, 199));
selectMaterial_->setShininess(0.8f);
selectEntity_->addComponent(selectModel_);
selectEntity_->addComponent(selectMaterial_);
Function drawTriangles:
void drawTriangles(QPolygonF triangles, QColor color){
    int numOfVertices = triangles.size();
    // Create and fill vertex buffer
    QByteArray bufferBytes;
    bufferBytes.resize(3 * numOfVertices * static_cast<int>(sizeof(float)));
    float *positions = reinterpret_cast<float*>(bufferBytes.data());
    for(auto point : triangles){
        *positions++ = static_cast<float>(point.x());
        *positions++ = 0.0f; // We need to draw only on the surface
        *positions++ = static_cast<float>(point.y());
    }
    geometry_ = new Qt3DRender::QGeometry(mapEntity_);
    auto *buf = new Qt3DRender::QBuffer(geometry_);
    buf->setData(bufferBytes);
    positionAttribute_ = new Qt3DRender::QAttribute(mapEntity_);
    positionAttribute_->setName(Qt3DRender::QAttribute::defaultPositionAttributeName());
    positionAttribute_->setVertexBaseType(Qt3DRender::QAttribute::Float); // Our buffer holds only floats
    positionAttribute_->setVertexSize(3); // Size of a vertex
    positionAttribute_->setAttributeType(Qt3DRender::QAttribute::VertexAttribute); // Attribute type
    positionAttribute_->setByteStride(3 * sizeof(float));
    positionAttribute_->setBuffer(buf);
    geometry_->addAttribute(positionAttribute_); // Add the attribute to our Qt3DRender::QGeometry
    // Create and fill an index buffer
    QByteArray indexBytes;
    indexBytes.resize(numOfVertices * static_cast<int>(sizeof(unsigned int))); // start to end
    unsigned int *indices = reinterpret_cast<unsigned int*>(indexBytes.data());
    for(unsigned int i = 0; i < static_cast<unsigned int>(numOfVertices); ++i) {
        *indices++ = i;
    }
    auto *indexBuffer = new Qt3DRender::QBuffer(geometry_);
    indexBuffer->setData(indexBytes);
    indexAttribute_ = new Qt3DRender::QAttribute(geometry_);
    indexAttribute_->setVertexBaseType(Qt3DRender::QAttribute::UnsignedInt); // Our buffer holds only unsigned ints
    indexAttribute_->setAttributeType(Qt3DRender::QAttribute::IndexAttribute); // Attribute type
    indexAttribute_->setBuffer(indexBuffer);
    indexAttribute_->setCount(static_cast<unsigned int>(numOfVertices)); // Set the count of our vertices
    geometry_->addAttribute(indexAttribute_); // Add the attribute to our Qt3DRender::QGeometry
    shape_ = new Qt3DRender::QGeometryRenderer(mapEntity_);
    shape_->setPrimitiveType(Qt3DRender::QGeometryRenderer::Triangles);
    shape_->setGeometry(geometry_);
    // Create material
    material_ = new Qt3DExtras::QPhongMaterial(mapEntity_);
    material_->setAmbient(color);
    trianglesEntity_ = new Qt3DCore::QEntity(mapEntity_);
    trianglesEntity_->addComponent(shape_);
    trianglesEntity_->addComponent(material_);
}
Button press handler on_drawMapPushButton_clicked():
void on_drawMapPushButton_clicked()
{
    clearMap(); // Implementation is below
    QPolygonF triangle1;
    triangle1 << QPointF(0, -1000) << QPointF(0, 1000) << QPointF(1000, -1000);
    drawTriangles(triangle1, Qt::black);
    QPolygonF triangle2;
    triangle2 << QPointF(-1000, -1000) << QPointF(-100, 1000) << QPointF(-100, -1000);
    drawTriangles(triangle2, Qt::red);
}
Map clearing function clearMap():
void clearMap()
{
    if(mapEntity_){
        delete mapEntity_;
        mapEntity_ = nullptr;
        mapEntity_ = new Qt3DCore::QEntity(view3dRootEntity_);
    }
}
OK, here comes the extended answer.
Whether this happens or not depends on the order of your entities. If you experiment with two simple spheres, one transparent and one not, you will see that when the transparent sphere is added later it will be drawn above the opaque object - just like you want it to be.
This happens because the opaque object is drawn first (it comes first in the scene graph) and the transparent object later, which gives you the result you want. In the other case, where the transparent object gets drawn first, the opaque object is drawn above it because the QPhongAlphaMaterial has a QNoDepthMask render state which tells it not to write to the depth buffer. Thus, the opaque object always passes the depth test, even where the transparent object has already drawn. You have to do some more work to properly draw transparent objects for arbitrary scene graphs and camera positions.
The Qt3D Rendering Graph
To understand what you have to do, you should first understand how the Qt3D rendering graph is laid out. If you know this already, you can skip this part.
In the following text, italicized words reference items in the image of the rendering graph.
If you use a Qt3DWindow, you can't access the root node of the rendering graph; it is maintained by the window. You can access the QRenderSettings and the root node of your framegraph through the functions renderSettings() and activeFrameGraph(), which you can both call on the window. You can also set the root node of the scene graph through the setRootEntity() function of Qt3DWindow. The window internally has a QAspectEngine, where it sets the root node of the whole graph, which is the root node of the rendering graph in the image above.
If you want to insert a framegraph node into the existing framegraph of the 3D window, you have to add it as the parent of the active framegraph, which I will explain in the next section. If you have your own custom framegraph which you set on the window through setActiveFrameGraph(), then just appending your node to the end should suffice.
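As a small sketch of how those entry points look in code (view and sceneRoot are assumed names; this only demonstrates the accessors mentioned above):
Qt3DExtras::Qt3DWindow view;
// root node of the framegraph, maintained by the window
Qt3DRender::QFrameGraphNode *activeFg = view.activeFrameGraph();
// render settings, also maintained by the window
Qt3DRender::QRenderSettings *settings = view.renderSettings();
// root node of the scene graph
view.setRootEntity(sceneRoot); // sceneRoot: your root Qt3DCore::QEntity*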
Using QSortPolicy
As you already found out in your other question, you can use a QSortPolicy in your framegraph to sort the entities by distance to the camera. You can add a sort policy as follows (assuming that view is your Qt3DWindow and scene is the root entity of your scene graph, although I don't understand why it has to be):
Qt3DRender::QFrameGraphNode *framegraph = view.activeFrameGraph();
Qt3DRender::QSortPolicy *sortPolicy = new Qt3DRender::QSortPolicy(scene);
framegraph->setParent(sortPolicy);
QVector<Qt3DRender::QSortPolicy::SortType> sortTypes =
    QVector<Qt3DRender::QSortPolicy::SortType>() << Qt3DRender::QSortPolicy::BackToFront;
sortPolicy->setSortTypes(sortTypes);
view.setActiveFrameGraph(framegraph);
The issue with this code is that the sort policy sorts entities by the distance of their centers to the camera. If one of the opaque objects is closer to the camera than the transparent object, it gets drawn later anyway and occludes the transparent object. See the images below for a graphical explanation.
The red and black spheres are further away from the camera than the torus; that's why they get drawn first and don't occlude the torus.
Now the center of the red sphere is closer to the camera than the center of the torus. It gets rendered later than the torus and occludes it.
Using Two Framegraph Branches
You can tackle the issue above by using two framegraph branches: one that draws all opaque entities and one that draws all transparent ones. To achieve this you have to make use of QLayer and QLayerFilter. You can attach layers to entities and then add layer filters to your framegraph. This way you can exclude entities from entering a certain branch of your framegraph.
Let's say you create two layers, one for opaque objects and one for transparent ones:
Qt3DRender::QLayer *transparentLayer = new Qt3DRender::QLayer;
Qt3DRender::QLayer *opaqueLayer = new Qt3DRender::QLayer;
You have to attach the transparent layer to each transparent object and the opaque layer to each opaque object as a component (using addComponent()).
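As a minimal sketch with the entity names from the question's snippets:
// the transparent sphere entity gets the transparent layer...
selectEntity_->addComponent(transparentLayer);
// ...and each opaque triangle entity gets the opaque layer
trianglesEntity_->addComponent(opaqueLayer);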
Unfortunately, you need a special framegraph tree to include the two corresponding layer filters (again, assuming that view is your Qt3DWindow):
Qt3DRender::QRenderSurfaceSelector *renderSurfaceSelector
        = new Qt3DRender::QRenderSurfaceSelector();
renderSurfaceSelector->setSurface(&view);
Qt3DRender::QClearBuffers *clearBuffers
        = new Qt3DRender::QClearBuffers(renderSurfaceSelector);
clearBuffers->setBuffers(Qt3DRender::QClearBuffers::AllBuffers);
clearBuffers->setClearColor(Qt::white);
This is the first branch to clear the buffers. Now you add the following code:
Qt3DRender::QViewport *viewport = new Qt3DRender::QViewport(renderSurfaceSelector);
Qt3DRender::QCameraSelector *cameraSelector = new Qt3DRender::QCameraSelector(viewport);
Qt3DRender::QCamera *camera = new Qt3DRender::QCamera(cameraSelector);
// set your camera parameters here
cameraSelector->setCamera(camera);
Since you create the QViewport as a child of the QRenderSurfaceSelector, it is now a sibling of the QClearBuffers in your framegraph. You can see an illustration of the example framegraphs here.
Now you have to create the two leaf nodes that contain the layer filters. The Qt3D engine always executes a whole branch when it reaches a leaf. This means that first the opaque objects are drawn and then the transparent ones.
// not entirely sure why transparent filter has to go first
// I would have expected the reversed order of the filters but this works...
Qt3DRender::QLayerFilter *transparentFilter = new Qt3DRender::QLayerFilter(camera);
transparentFilter->addLayer(transparentLayer);
Qt3DRender::QLayerFilter *opaqueFilter = new Qt3DRender::QLayerFilter(camera);
opaqueFilter->addLayer(opaqueLayer);
The two layer filters are now leaf nodes in your framegraph branch, and Qt3D will first draw the opaque objects and afterwards, since it uses the same viewport and everything else, draw the transparent objects above them. They will be drawn correctly (i.e. not in front of parts of opaque objects that the transparent object actually lies behind), because we did not clear the depth buffers again; the framegraph splits only at the camera node.
Now set the new framegraph on your Qt3DWindow:
view.setActiveFrameGraph(renderSurfaceSelector);
Result: (screenshot not reproduced here)
Edit (26.03.21): As Patrick B. pointed out correctly, with the suggested two-layer solution you will have to add both layers as components to any lights in the scene. You can get around this by setting the filter mode on the QLayerFilters to QLayerFilter::FilterMode::DiscardAnyMatchingLayers and then reversing the order of the filters. This way, the transparentFilter discards any entities with the transparentLayer attached - but not the lights, because they don't have the transparentLayer. Vice versa for the opaqueFilter.
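A sketch of that variant, assuming the full enum name Qt3DRender::QLayerFilter::DiscardAnyMatchingLayers (available since Qt 5.11) is what is meant:
// reversed order relative to the accept-mode snippet above
Qt3DRender::QLayerFilter *opaqueFilter = new Qt3DRender::QLayerFilter(camera);
opaqueFilter->addLayer(opaqueLayer); // this branch discards opaque entities
opaqueFilter->setFilterMode(Qt3DRender::QLayerFilter::DiscardAnyMatchingLayers);
Qt3DRender::QLayerFilter *transparentFilter = new Qt3DRender::QLayerFilter(camera);
transparentFilter->addLayer(transparentLayer); // this branch discards transparent entities
transparentFilter->setFilterMode(Qt3DRender::QLayerFilter::DiscardAnyMatchingLayers);
// lights carry neither layer, so they pass through both branches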
My mistake was that I created and deleted the Triangles and Sphere entities in the wrong order.
In pseudocode, the right order is as follows:
clearTriangles();
clearSphere();
drawTriangles();
drawSphere();
If you are using Qt3D with QML and want to control the order in which elements are drawn, you can do it through the order of layers in your QML file.
Something like:
Layer {
    objectName: "firstLayer"
    id: firstLayer
}
Layer {
    objectName: "secondLayer"
    id: secondLayer
}
The order you add them to layer filters will then control which is drawn first:
RenderSurfaceSelector {
    CameraSelector {
        id: cameraSelector
        camera: mainCamera
        FrustumCulling {
            ClearBuffers {
                buffers: ClearBuffers.AllBuffers
                clearColor: "#04151c"
                NoDraw {}
            }
            LayerFilter {
                objectName: "firstLayerFilter"
                id: firstLayerFilter
                layers: [firstLayer]
            }
            LayerFilter {
                id: secondLayerFilter
                objectName: "secondLayerFilter"
                layers: [secondLayer]
            }
        }
    }
}
Then anything you add to secondLayer will get drawn on top of the first layer. I used this to make sure text always showed up in front of shapes, but it can be used similarly with transparencies.

cocos2d-x v3 c++ Drop shadow cocos2d::Sprite

As far as I've found out, Cocos doesn't offer simple filter handling the way AS3 does, for example.
My situation:
I want to add a realtime shadow to a cocos2d::Sprite.
For example I would like to do something like this (similar to AS3):
auto mySprite = Sprite::createWithSpriteFrameName("myCharacter.png");
DropShadowFilter* dropShadow = new DropShadowFilter();
dropShadow->distance = 0;
dropShadow->angle = 45;
dropShadow->color = 0x333333;
dropShadow->alpha = 1;
dropShadow->blurX = 10;
dropShadow->blurY = 10;
dropShadow->strength = 1;
dropShadow->quality = 15;
mySprite->addFilter(dropShadow);
This should add a shadow to my sprite and achieve a result like this:
Adobe Drop Shadow Example
Could you help me please?
There isn't any built in support for shadows on Sprites in Cocos2D-X.
The best option, performance-wise, would be to place your shadows in your sprite images already, instead of calculating and drawing them in the code.
Another option is to sub-class Sprite and override the draw method so that you duplicate the sprite and apply your effects and draw it below the original.
One possible way to achieve that is with this snippet from this thread on the Cocos forum. I can't say that I completely follow what this code does with the GL transforms, but you can use this as a starting point to experiment.
void CMySprite::draw()
{
    // is_shadow is true if this sprite is to be considered a shadow sprite,
    // false otherwise.
    if (is_shadow)
    {
        ccBlendFunc blend;
        // Change the default blending factors to this one.
        blend.src = GL_SRC_ALPHA;
        blend.dst = GL_ONE;
        setBlendFunc(blend);
        // Change the blending equation to this one in order to subtract from
        // the values already written in the frame buffer the ones of the sprite.
        glBlendEquationOES(GL_FUNC_REVERSE_SUBTRACT_OES);
    }
    CCSprite::draw();
    if (is_shadow)
    {
        // The default blending equation of cocos2d-x is GL_FUNC_ADD.
        glBlendEquationOES(GL_FUNC_ADD_OES);
    }
}
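If the subtractive blending above is more than you need, a simpler way to apply the "duplicate the sprite" idea in cocos2d-x v3 is to add a dark, semi-transparent copy of the sprite below the original. A rough sketch, where parentNode and the offset values are placeholders:
// fake a drop shadow with a tinted duplicate of the sprite
auto mySprite = Sprite::createWithSpriteFrameName("myCharacter.png");
auto shadow = Sprite::createWithSpriteFrameName("myCharacter.png");
shadow->setColor(Color3B::BLACK);  // tint the duplicate dark
shadow->setOpacity(90);            // make it semi-transparent
shadow->setPosition(mySprite->getPosition() + Vec2(6.0f, -6.0f)); // offset it
parentNode->addChild(shadow, -1);  // z-order below the original
parentNode->addChild(mySprite, 0);
This won't give you the soft blur of the AS3 filter, but it is cheap and needs no custom draw code.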

Drawing App, How to Erase

(I'm a newbie in AS3.) I found some useful code at http://www.flashandmath.com/advanced/smoothdraw/ and I'm trying to create an Erase button. I'm trying to use .cacheAsBitmap = true;, but this works only if there is a mask (I saw this in an AS2 app where you could erase a bitmap and then redraw).
P.S. I changed the code slightly to change the background; the drawBackground function now looks like this:
function drawBackground():void {
    // We draw a background with a very subtle gradient effect so that the
    // canvas darkens towards the edges.
    var mesh:Shape = new Shape();
    var gradMat:Matrix = new Matrix();
    gradMat.createGradientBox(700, 500, 0, 0, 0);
    var bg:Sprite = new Sprite();
    mesh.graphics.beginBitmapFill(new lignes(), null, true, false);
    //bg.graphics.beginGradientFill("radial",[0xDDD0AA,0xC6B689],[1,1],[1,255],gradMat);
    //bg.graphics.drawRect(0,0,700,500);
    //bg.graphics.endFill();
    mesh.graphics.drawRect(0, 0, 700, 500);
    mesh.graphics.endFill();
    boardBitmapData.draw(mesh);
    mesh.cacheAsBitmap = true;
    // We clear out the undo buffer with a copy of just a blank background:
    undoStack = new Vector.<BitmapData>;
    var undoBuffer:BitmapData = new BitmapData(boardWidth, boardHeight, false);
    undoBuffer.copyPixels(boardBitmapData, undoBuffer.rect, new Point(0, 0));
    undoStack.push(undoBuffer);
}
I just want to create an erase method; thanks for your help.
Edit: I meant to erase only the clicked area.
To clear the Shape, try:
mesh.graphics.clear();
I don't know what else you are doing with the boardBitmapData variable; maybe it will be necessary to do something about it as well.
clear() - Clears the graphics that were drawn to this Graphics object, and resets fill and line style settings.
You need to convert your canvas to a bitmap, then use a mask to delete (or the blend mode ERASE). To be able to draw again, create a new bitmap from the masked bitmap and finally use this as the background.
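A rough sketch of the ERASE idea: keep the user's strokes on their own transparent BitmapData layered above the background (here called drawingBitmapData, a hypothetical name), and punch a hole in it at the clicked spot:
function eraseAt(px:Number, py:Number, brushRadius:Number):void {
    var brush:Shape = new Shape();
    // the fill color is irrelevant; only its alpha matters for erasing
    brush.graphics.beginFill(0x000000);
    brush.graphics.drawCircle(px, py, brushRadius);
    brush.graphics.endFill();
    // ERASE removes destination alpha wherever the brush has alpha
    drawingBitmapData.draw(brush, null, null, BlendMode.ERASE);
}
Since the background lives on its own layer underneath, it shows through the erased area untouched.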

Does the object remain fixed when scrolling the background in cocos2d

I have one question about infinite background scrolling: does the object remain fixed (like the doodle in Doodle Jump, or Papi in Papi Jump), or does the object really move? Does only the background move, or do both the background and the object move? I have been searching for a solution for 4-5 days but can't find one, so please, someone help me. And if the object does not move, how do you create the illusion of the object moving?
If you add the object to the same layer as the scrolling background, then it will scroll as the background scrolls.
If you're looking for an effect like the hero in Doodle Jump, you may want to look at having two or more layers in a scene.
Layer 1: Scrolling Background Layer
Layer 2: Sprite layer
SomeScene.m
CCLayer *backgroundLayer = [[CCLayer alloc] init];
CCLayer *spriteLayer = [[CCLayer alloc] init];
[self addChild:backgroundLayer z:0];
[self addChild:spriteLayer z:1];
// Hero stays in one spot regardless of background scrolling.
CCSprite *squidHero = [[CCSprite alloc] initWithFile:@"squid.png"];
[spriteLayer addChild:squidHero];
If you want objects to scroll with the background, add them to the background layer:
// Platform moves with the background.
CCSprite *bouncePlatform = [[CCSprite alloc] initWithFile:@"bouncePlatform.png"];
[backgroundLayer addChild:bouncePlatform];
Another alternative is to use a CCFollow action. You would code as if the background is static (which it will be) and the player is moving (which it will be), but add a CCFollow action to the player. This essentially moves the camera so that it tracks your player.
You can also modify the classes so that you can get the CCFollow action to follow with an offset (i.e., so the player is not in the middle of the screen) as well as to have a smoothing effect to it, so that when the player moves, the follow action is not jerky. See the below code:
Note: I am using cocos2d-x, the C++ port. The methods are similar in cocos2d, and you should be able to modify these to fit the cocos2d syntax. Or search around - I found these for cocos2d and then ported them to C++.
//defines the action to constantly follow the player (in my case, "runner.p_sprite is the sprite pointing to the player)
FollowWithOffset* followAction = FollowWithOffset::create(runner.p_sprite, CCRectZero);
runAction(followAction);
And separately, I have copied the class definition of CCFollow to create my own class, FollowWithOffset. This also has a smoothing effect (you can look this up online) so that when the player moves, the follow action is not jerky. I modified initWithTarget to take an offset into account, and step to add the smoothing. You can see the modifications in the comments below.
bool FollowWithOffset::initWithTarget(CCNode *pFollowedNode, const CCRect& rect/* = CCRectZero*/)
{
    CCAssert(pFollowedNode != NULL, "");
    pFollowedNode->retain();
    m_pobFollowedNode = pFollowedNode;
    if (rect.equals(CCRectZero))
    {
        m_bBoundarySet = false;
    }
    else
    {
        m_bBoundarySet = true;
    }
    m_bBoundaryFullyCovered = false;
    CCSize winSize = CCDirector::sharedDirector()->getWinSize();
    m_obFullScreenSize = CCPointMake(winSize.width, winSize.height);
    //m_obHalfScreenSize = ccpMult(m_obFullScreenSize, 0.5f);
    // modified to take a fixed offset into account
    m_obHalfScreenSize = CCPointMake(m_obFullScreenSize.x/2 + RUNNER_FOLLOW_OFFSET_X,
                                     m_obFullScreenSize.y/2 + RUNNER_FOLLOW_OFFSET_Y);
    if (m_bBoundarySet)
    {
        m_fLeftBoundary = -((rect.origin.x + rect.size.width) - m_obFullScreenSize.x);
        m_fRightBoundary = -rect.origin.x;
        m_fTopBoundary = -rect.origin.y;
        m_fBottomBoundary = -((rect.origin.y + rect.size.height) - m_obFullScreenSize.y);
        if (m_fRightBoundary < m_fLeftBoundary)
        {
            // screen width is larger than world's boundary width
            // set both in the middle of the world
            m_fRightBoundary = m_fLeftBoundary = (m_fLeftBoundary + m_fRightBoundary) / 2;
        }
        if (m_fTopBoundary < m_fBottomBoundary)
        {
            // screen height is larger than world's boundary height
            // set both in the middle of the world
            m_fTopBoundary = m_fBottomBoundary = (m_fTopBoundary + m_fBottomBoundary) / 2;
        }
        if ((m_fTopBoundary == m_fBottomBoundary) && (m_fLeftBoundary == m_fRightBoundary))
        {
            m_bBoundaryFullyCovered = true;
        }
    }
    return true;
}
void FollowWithOffset::step(float dt)
{
    CC_UNUSED_PARAM(dt);
    if (m_bBoundarySet)
    {
        // whole map fits inside a single screen, no need to modify the
        // position - unless map boundaries are increased
        if (m_bBoundaryFullyCovered)
            return;
        CCPoint tempPos = ccpSub(m_obHalfScreenSize, m_pobFollowedNode->getPosition());
        m_pTarget->setPosition(ccp(clampf(tempPos.x, m_fLeftBoundary, m_fRightBoundary),
                                   clampf(tempPos.y, m_fBottomBoundary, m_fTopBoundary)));
    }
    else
    {
        // custom written code to add in support for a smooth CCFollow action
        CCPoint tempPos = ccpSub(m_obHalfScreenSize, m_pobFollowedNode->getPosition());
        CCPoint moveVect = ccpMult(ccpSub(tempPos, m_pTarget->getPosition()), 0.25); // 0.25 is the smoothing constant
        CCPoint newPos = ccpAdd(m_pTarget->getPosition(), moveVect);
        m_pTarget->setPosition(newPos);
    }
}