How to pick a point with the mouse in a 3D world - OpenGL

I have a question about how to pick objects with the mouse in a 3D world.
I have several points in the 3D world, and I use these points to draw a curve.
I want to choose a point on this curve with the mouse. The viewport of the 3D world can change, so I want to know, for any viewport, when I click the mouse button near the curve, which point is nearest to the mouse, i.e. which point I have chosen.
I have no idea how to implement this; please help me if you know how.
Thanks. Good luck.

If you are trying to click discrete points (perhaps nodal points on a curve), another approach is to project them back into screen coordinates and check for a close match to the mouse location. Since this comparison is done in integer screen coordinates, it only needs to be roughly accurate (assuming your points are more than a few screen pixels apart).
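A minimal sketch of that approach, assuming legacy OpenGL with GLU and a mouse position already flipped to OpenGL's bottom-left window origin (pickNearestPoint and its parameters are placeholders, not part of the question's code):

#include <GL/glu.h>
#include <cmath>

// Returns the index of the point whose projection lies closest to the mouse,
// or -1 if no point is within maxPixelDist pixels.
int pickNearestPoint(const double (*pts)[3], int count,
                     double mouseX, double mouseY, double maxPixelDist)
{
    GLdouble model[16], proj[16];
    GLint view[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, view);

    int best = -1;
    double bestDist = maxPixelDist;
    for (int i = 0; i < count; ++i) {
        GLdouble wx, wy, wz;
        if (gluProject(pts[i][0], pts[i][1], pts[i][2],
                       model, proj, view, &wx, &wy, &wz) == GL_TRUE) {
            double d = std::hypot(wx - mouseX, wy - mouseY);
            if (d < bestDist) { bestDist = d; best = i; }
        }
    }
    return best;
}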
Another trick used to be to redraw the scene into an off-screen buffer with each point (or position on the curve) drawn in a different RGB color, then read back the color under the mouse and look up the point ID from it. This doesn't work in complex scenes with fog, or with anything else (blending, antialiasing, lighting) that alters the colors actually written.
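For example, a sketch of that color-ID variant in legacy OpenGL (the point array and the bottom-left-origin mouse coordinates are assumed as in the previous sketch):

// Draw each point with a unique RGB encoding of its index (0 = background).
void drawPointsForPicking(const double (*pts)[3], int count)
{
    glDisable(GL_LIGHTING);   // anything that alters colors corrupts the IDs
    glDisable(GL_FOG);
    glDisable(GL_BLEND);
    glDisable(GL_DITHER);
    glBegin(GL_POINTS);
    for (int i = 0; i < count; ++i) {
        int id = i + 1;
        glColor3ub(id & 0xFF, (id >> 8) & 0xFF, (id >> 16) & 0xFF);
        glVertex3dv(pts[i]);
    }
    glEnd();
}

// Read the pixel under the mouse and decode the index (-1 = nothing hit).
int readPickedId(int mouseX, int mouseY)
{
    unsigned char rgb[3];
    glReadPixels(mouseX, mouseY, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, rgb);
    return (rgb[0] | (rgb[1] << 8) | (rgb[2] << 16)) - 1;
}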

OpenGL has a selection mode, where you can take the position of a mouse click and create a matrix that restricts drawing to some arbitrary (small) area around the mouse (e.g., a 7x7 pixel block). You can then draw your "stuff" (curve, points, whatever) and it creates a record of every object that fell inside the block you defined. If more than one item fell inside that block, you get one record per item, each carrying minimum and maximum depth values so you can order the hits from front to back.
When you're done, you get a record of the points that were close to the mouse click. Most of the time, you'll just use the closest to the front, but once in a while you'll want to do a bit more work to figure out which of them (if there was more than one, obviously) was really closest to the original mouse click point. gluProject and (possibly) gluUnProject can be handy if you decide to do that.
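A sketch of that selection-mode flow (legacy OpenGL only; selection mode was removed from core profiles). Here mouseX, mouseY, count, fovY, aspect, zNear, zFar and drawPoint() are placeholders for your own state, projection and drawing code:

GLuint hitBuffer[256];
glSelectBuffer(256, hitBuffer);
glRenderMode(GL_SELECT);
glInitNames();
glPushName(0);

// Restrict drawing to a 7x7 pixel block around the mouse click.
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
GLint view[4];
glGetIntegerv(GL_VIEWPORT, view);
gluPickMatrix((GLdouble)mouseX, (GLdouble)(view[3] - mouseY), 7.0, 7.0, view);
gluPerspective(fovY, aspect, zNear, zFar); // your normal projection

glMatrixMode(GL_MODELVIEW);
for (int i = 0; i < count; ++i) {
    glLoadName(i);   // tag the following geometry with its index
    drawPoint(i);    // placeholder for drawing one pickable point
}

glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);

// Each hit record in hitBuffer is {name count, min depth, max depth, names...};
// scan for the record with the smallest min depth to get the frontmost hit.
GLint hits = glRenderMode(GL_RENDER);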

The Java 3D code below can pick primitive shapes (lines, points, etc.) or 3D geometries (cube, sphere, etc.) in a 3D scene and prints what is selected.
First build a main class (here tuval1), then a public class (here secim2).
public class tuval1 {
    public static void main(String[] args) {
        new secim2();
    }
}

import java.awt.GraphicsConfiguration;
import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;
import javax.media.j3d.Appearance;
import javax.media.j3d.BranchGroup;
import javax.media.j3d.Canvas3D;
import javax.media.j3d.PolygonAttributes;
import javax.media.j3d.Shape3D;
import javax.media.j3d.Transform3D;
import javax.media.j3d.TransformGroup;
import javax.swing.JFrame;
import javax.vecmath.Vector3f;
import com.sun.j3d.utils.geometry.ColorCube;
import com.sun.j3d.utils.geometry.Primitive;
import com.sun.j3d.utils.geometry.Sphere;
import com.sun.j3d.utils.picking.PickCanvas;
import com.sun.j3d.utils.picking.PickResult;
import com.sun.j3d.utils.universe.SimpleUniverse;

public class secim2 extends MouseAdapter {
    private PickCanvas pickCanvas;

    public secim2() {
        JFrame pencere = new JFrame("3D Box Select");
        pencere.setSize(300, 300);
        pencere.setVisible(true);
        GraphicsConfiguration config = SimpleUniverse.getPreferredConfiguration();
        Canvas3D canvas = new Canvas3D(config);
        canvas.setSize(400, 400);
        SimpleUniverse universe = new SimpleUniverse(canvas);
        BranchGroup group = new BranchGroup();

        // create a color cube
        Vector3f vector = new Vector3f(-0.3f, 0.0f, 0.0f);
        Transform3D transform = new Transform3D();
        transform.setTranslation(vector);
        TransformGroup transformGroup = new TransformGroup(transform);
        ColorCube cube = new ColorCube(0.2f);
        transformGroup.addChild(cube);
        group.addChild(transformGroup);

        // create a wireframe sphere
        Vector3f vector2 = new Vector3f(+0.5f, 0.0f, 0.0f);
        Appearance app = new Appearance();
        app.setPolygonAttributes(
                new PolygonAttributes(PolygonAttributes.POLYGON_LINE,
                        PolygonAttributes.CULL_BACK, 0.0f));
        Transform3D transform2 = new Transform3D();
        transform2.setTranslation(vector2);
        TransformGroup transformGroup2 = new TransformGroup(transform2);
        Sphere sphere = new Sphere(0.2f, app);
        transformGroup2.addChild(sphere);
        group.addChild(transformGroup2);

        universe.getViewingPlatform().setNominalViewingTransform();
        universe.addBranchGraph(group);

        // pick on bounds intersection with the mouse ray
        pickCanvas = new PickCanvas(canvas, group);
        pickCanvas.setMode(PickCanvas.BOUNDS);
        pencere.add(canvas);
        canvas.addMouseListener(this);
    }

    public void mouseClicked(MouseEvent o) {
        pickCanvas.setShapeLocation(o);
        PickResult result = pickCanvas.pickClosest();
        if (result == null) {
            System.out.println("Nothing picked");
        } else {
            Primitive p = (Primitive) result.getNode(PickResult.PRIMITIVE);
            Shape3D s = (Shape3D) result.getNode(PickResult.SHAPE3D);
            if (p != null) {
                System.out.println(p.getClass().getName());
            } else if (s != null) {
                System.out.println(s.getClass().getName());
            } else {
                System.out.println("null");
            }
        }
    }
}


Qt3d. Draw transparent QSphereMesh over triangles

I have a function that draws triangles through OpenGL.
I draw two triangles by pressing a button (the handler on_drawMapPushButton_clicked()). Then I draw a sphere placed above these triangles, and I see that the sphere is drawn correctly over the first triangle, but the second triangle is drawn over the sphere instead of the other way around.
If I press the button a second time, the sphere is drawn correctly over both triangles.
When I press the button a third time, the second triangle is drawn over the sphere again.
When I press the button a fourth time, the sphere is drawn correctly over both triangles, and so on.
If I use QPhongMaterial instead of QPhongAlphaMaterial for the sphere mesh, the sphere is always drawn correctly over both triangles, as it should be.
I can't understand what I am doing wrong that keeps the sphere from always being drawn over the triangles.
Code that draws the transparent sphere:
selectModel_ = new Qt3DExtras::QSphereMesh(selectEntity_);
selectModel_->setRadius(75);
selectModel_->setSlices(150);
selectMaterial_ = new Qt3DExtras::QPhongAlphaMaterial(selectEntity_);
selectMaterial_->setAmbient(QColor(28, 61, 136));
selectMaterial_->setDiffuse(QColor(11, 56, 159));
selectMaterial_->setSpecular(QColor(10, 67, 199));
selectMaterial_->setShininess(0.8f);
selectEntity_->addComponent(selectModel_);
selectEntity_->addComponent(selectMaterial_);
Function drawTriangles:
void drawTriangles(QPolygonF triangles, QColor color){
    int numOfVertices = triangles.size();

    // Create and fill vertex buffer
    QByteArray bufferBytes;
    bufferBytes.resize(3 * numOfVertices * static_cast<int>(sizeof(float)));
    float *positions = reinterpret_cast<float*>(bufferBytes.data());
    for(auto point : triangles){
        *positions++ = static_cast<float>(point.x());
        *positions++ = 0.0f; // We need to draw only on the surface
        *positions++ = static_cast<float>(point.y());
    }
    geometry_ = new Qt3DRender::QGeometry(mapEntity_);
    auto *buf = new Qt3DRender::QBuffer(geometry_);
    buf->setData(bufferBytes);
    positionAttribute_ = new Qt3DRender::QAttribute(mapEntity_);
    positionAttribute_->setName(Qt3DRender::QAttribute::defaultPositionAttributeName());
    positionAttribute_->setVertexBaseType(Qt3DRender::QAttribute::Float); // Only floats in this buffer
    positionAttribute_->setVertexSize(3); // Size of a vertex
    positionAttribute_->setAttributeType(Qt3DRender::QAttribute::VertexAttribute); // Attribute type
    positionAttribute_->setByteStride(3 * sizeof(float));
    positionAttribute_->setBuffer(buf);
    geometry_->addAttribute(positionAttribute_); // Add attribute to our Qt3DRender::QGeometry

    // Create and fill an index buffer
    QByteArray indexBytes;
    indexBytes.resize(numOfVertices * static_cast<int>(sizeof(unsigned int))); // start to end
    unsigned int *indices = reinterpret_cast<unsigned int*>(indexBytes.data());
    for(unsigned int i = 0; i < static_cast<unsigned int>(numOfVertices); ++i) {
        *indices++ = i;
    }
    auto *indexBuffer = new Qt3DRender::QBuffer(geometry_);
    indexBuffer->setData(indexBytes);
    indexAttribute_ = new Qt3DRender::QAttribute(geometry_);
    indexAttribute_->setVertexBaseType(Qt3DRender::QAttribute::UnsignedInt); // Only unsigned ints in this buffer
    indexAttribute_->setAttributeType(Qt3DRender::QAttribute::IndexAttribute); // Attribute type
    indexAttribute_->setBuffer(indexBuffer);
    indexAttribute_->setCount(static_cast<unsigned int>(numOfVertices)); // Set count of our vertices
    geometry_->addAttribute(indexAttribute_); // Add the attribute to our Qt3DRender::QGeometry

    shape_ = new Qt3DRender::QGeometryRenderer(mapEntity_);
    shape_->setPrimitiveType(Qt3DRender::QGeometryRenderer::Triangles);
    shape_->setGeometry(geometry_);

    // Create material
    material_ = new Qt3DExtras::QPhongMaterial(mapEntity_);
    material_->setAmbient(color);

    trianglesEntity_ = new Qt3DCore::QEntity(mapEntity_);
    trianglesEntity_->addComponent(shape_);
    trianglesEntity_->addComponent(material_);
}
Push button handler on_drawMapPushButton_clicked():
void on_drawMapPushButton_clicked()
{
    clearMap(); // Implementation is below
    QPolygonF triangle1;
    triangle1 << QPointF(0, -1000) << QPointF(0, 1000) << QPointF(1000, -1000);
    drawTriangles(triangle1, Qt::black);
    QPolygonF triangle2;
    triangle2 << QPointF(-1000, -1000) << QPointF(-100, 1000) << QPointF(-100, -1000);
    drawTriangles(triangle2, Qt::red);
}
Map clearing function clearMap():
void clearMap()
{
    if(mapEntity_){
        delete mapEntity_;
        mapEntity_ = nullptr;
        mapEntity_ = new Qt3DCore::QEntity(view3dRootEntity_);
    }
}
OK, here comes the extended answer.
Whether this happens depends on the order of your entities. If you experiment with two simple spheres, one transparent and one not, you will see that when the transparent sphere is added later, it is drawn above the opaque object, just like you want.
This happens because the opaque object is drawn first (it comes first in the scene graph) and the transparent object later, which gives you the result you want. In the other case, where the transparent object is drawn first, the opaque object is drawn above it, because the QPhongAlphaMaterial has a QNoDepthMask render state which tells it not to write to the depth buffer. The opaque object therefore always passes the depth test, even where the transparent object has already drawn. You have to do some more work to properly draw transparent objects for arbitrary scene graphs and camera positions.
The Qt3D Rendering Graph
To understand what you have to do you should understand how the Qt3D rendering graph is laid out. If you know this already you can skip this part.
Italic words reference items in the graph image in the following text.
If you use a Qt3DWindow, you can't access the root node of the rendering graph; it is maintained by the window. You can access the QRenderSettings and the root node of your framegraph through the functions renderSettings() and activeFrameGraph(), which you can both call on the window. You can also set the root node of the scene graph through the setRootEntity() function of Qt3DWindow. The window internally has a QAspectEngine, where it sets the root node of the whole graph, which is the root node of the rendering graph in the graph image above.
If you want to insert a framegraph node into the existing framegraph of the 3D window, you have to add it as the parent of the active framegraph, which I will explain in the next section. If you have your own custom framegraph which you set on the window through setActiveFrameGraph(), just append it to the end; this should suffice.
Using QSortPolicy
As you already found out in your other question, you can use QSortPolicy in your framegraph to sort the entities by distance to the camera. You can add a sort policy as follows (assuming that view is your Qt3DWindow and scene is the root entity of your scene graph, although I don't understand why it has to be):
Qt3DRender::QFrameGraphNode *framegraph = view.activeFrameGraph();
Qt3DRender::QSortPolicy *sortPolicy = new Qt3DRender::QSortPolicy(scene);
framegraph->setParent(sortPolicy);
QVector<Qt3DRender::QSortPolicy::SortType> sortTypes =
    QVector<Qt3DRender::QSortPolicy::SortType>() << Qt3DRender::QSortPolicy::BackToFront;
sortPolicy->setSortTypes(sortTypes);
view.setActiveFrameGraph(framegraph);
The issue with this code is that the sort policy sorts the entities by the distance of their centers to the camera. If one of the opaque objects is closer to the camera than the transparent object, it is drawn later anyway and occludes the transparent object. See the images below for a graphical explanation.
The red and black spheres are further away from the camera than the torus; that's why they are drawn first and don't occlude the torus.
Now the center of the red sphere is closer to the camera than the center of the torus. It is rendered later than the torus and occludes it.
Using Two Framegraph Branches
You can tackle the issue above if you use two framegraph branches. One which draws all opaque entities and one which draws all transparent ones. To achieve this you have to make use of QLayer and QLayerFilter. You can attach layers to entities and then add layer filters to your framegraph. This way you can exclude entities from entering a certain branch of your framegraph.
Let's say you create two layers, one for opaque objects and one for transparent ones:
Qt3DRender::QLayer *transparentLayer = new Qt3DRender::QLayer;
Qt3DRender::QLayer *opaqueLayer = new Qt3DRender::QLayer;
You have to attach the transparent layer to each transparent object and the opaque layer to each opaque object as a component (using addComponent()).
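For example, with the entity pointers from the question's code (the sphere entity carries the QPhongAlphaMaterial, the triangle entities the QPhongMaterial):

// Attach the matching layer to each entity as a component.
selectEntity_->addComponent(transparentLayer);   // transparent sphere
trianglesEntity_->addComponent(opaqueLayer);     // opaque triangles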
Unfortunately, you need a special framegraph tree to include the two corresponding layer filters (again, assuming that view is your Qt3DWindow):
Qt3DRender::QRenderSurfaceSelector *renderSurfaceSelector =
    new Qt3DRender::QRenderSurfaceSelector();
renderSurfaceSelector->setSurface(&view);
Qt3DRender::QClearBuffers *clearBuffers =
    new Qt3DRender::QClearBuffers(renderSurfaceSelector);
clearBuffers->setBuffers(Qt3DRender::QClearBuffers::AllBuffers);
clearBuffers->setClearColor(Qt::white);
This is the first branch to clear the buffers. Now you add the following code:
Qt3DRender::QViewport *viewport = new Qt3DRender::QViewport(renderSurfaceSelector);
Qt3DRender::QCameraSelector *cameraSelector = new Qt3DRender::QCameraSelector(viewport);
Qt3DRender::QCamera *camera = new Qt3DRender::QCamera(cameraSelector);
// set your camera parameters here
cameraSelector->setCamera(camera);
Since you create the QViewport as a child of the QRenderSurfaceSelector it is now a sibling in your framegraph with respect to the QClearBuffers. You can see an illustration of the example framegraphs here.
Now you have to create the two leaf nodes that contain the layer filters. The Qt3D engine always executes a whole branch when it reaches a leaf. This means that first the opaque objects are drawn and then the transparent ones.
// not entirely sure why transparent filter has to go first
// I would have expected the reversed order of the filters but this works...
Qt3DRender::QLayerFilter *transparentFilter = new Qt3DRender::QLayerFilter(camera);
transparentFilter->addLayer(transparentLayer);
Qt3DRender::QLayerFilter *opaqueFilter = new Qt3DRender::QLayerFilter(camera);
opaqueFilter->addLayer(opaqueLayer);
The two layer filters are now leaf nodes in your framegraph branch, and Qt3D will first draw the opaque objects and then, since it uses the same viewport and everything, draw the transparent objects above them. They are drawn correctly (i.e. not in front of parts of opaque objects that the transparent object actually lies behind), because we did not clear the depth buffers again; splitting the framegraph happens only at the camera node.
Now set the new framegraph on your Qt3DWindow:
view.setActiveFrameGraph(renderSurfaceSelector);
Result:
Edit (26.03.21): As Patrick B. pointed out correctly, with the suggested two-layer solution you will have to add both layers as components to any lights in the scene. You can get around this by setting the filter mode on the QLayerFilters to QLayerFilter::FilterMode::DiscardAnyMatchingLayers and then reversing the order of the filters. This way the transparentFilter discards any entities with the transparentLayer attached, but not the lights, because they don't have the transparentLayer. Vice versa for the opaqueFilter.
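A sketch of that variant, under the same assumptions as above (camera, transparentLayer, opaqueLayer); note that setFilterMode() requires Qt 5.11 or later:

// Reversed order, each pass discarding the *other* kind of entity.
// Lights carry neither layer, so they pass both filters.
Qt3DRender::QLayerFilter *opaquePass = new Qt3DRender::QLayerFilter(camera);
opaquePass->setFilterMode(Qt3DRender::QLayerFilter::DiscardAnyMatchingLayers);
opaquePass->addLayer(transparentLayer);   // draws everything but transparent entities

Qt3DRender::QLayerFilter *transparentPass = new Qt3DRender::QLayerFilter(camera);
transparentPass->setFilterMode(Qt3DRender::QLayerFilter::DiscardAnyMatchingLayers);
transparentPass->addLayer(opaqueLayer);   // draws everything but opaque entities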
My mistake was the wrong order of creating and deleting the triangle and sphere entities.
In pseudocode the right order is as follows:
clearTriangles();
clearSphere();
drawTriangles();
drawSphere();
If you are using Qt3D with QML and want to control the order in which elements are drawn, you can do it through the order of the layers in your QML file.
Something like:
Layer {
    objectName: "firstLayer"
    id: firstLayer
}
Layer {
    objectName: "secondLayer"
    id: secondLayer
}
The order you add them to layer filters will then control which is drawn first:
RenderSurfaceSelector {
    CameraSelector {
        id: cameraSelector
        camera: mainCamera
        FrustumCulling {
            ClearBuffers {
                buffers: ClearBuffers.AllBuffers
                clearColor: "#04151c"
                NoDraw {}
            }
            LayerFilter {
                objectName: "firstLayerFilter"
                id: firstLayerFilter
                layers: [firstLayer]
            }
            LayerFilter {
                id: secondLayerFilter
                objectName: "secondLayerFilter"
                layers: [secondLayer]
            }
        }
    }
}
Then anything you add to the secondLayer will be drawn on top of the first layer. I used this to make sure text always showed up in front of shapes, but it can be used the same way with transparencies.

OpenGL object real movement simulation

I need to simulate the movement of an oar. The oar object is loaded in Eclipse with the min3D library, which works with OpenGL.
At the moment I can move the oar on the three axes x, y and z, but I'm not able to control this movement so that the oar moves in the desired way.
(Don't mind the values; they aren't real values.)
This is the class which loads the oar, places it on the screen and moves it:
public class Obj3DView extends RendererFragment {
    private Object3dContainer rowObject3D;

    /** Called when the activity is first created. */
    @Override
    public void initScene() {
        scene.lights().add(new Light());
        scene.lights().add(new Light());
        Light myLight = new Light();
        myLight.position.setZ(150);
        scene.lights().add(myLight);
        IParser myParser = Parser.createParser(Parser.Type.OBJ, getResources(),
                "com.masermic.rowingsoft:raw/row_obj", true);
        myParser.parse();
        rowObject3D = myParser.getParsedObject();
        rowObject3D.position().x = rowObject3D.position().y = rowObject3D.position().z = 0;
        rowObject3D.scale().x = rowObject3D.scale().y = rowObject3D.scale().z = 0.28f;
        scene.addChild(rowObject3D);
    }

    // THIS MAKES THE OAR MOVE
    @Override
    public void updateScene() {
        rowObject3D.rotation().x += 1;   // pitch
        rowObject3D.rotation().z += 1;   // roll
        rowObject3D.rotation().y += 0.5; // yaw
    }
}
The rotation() method is documented as: X/Y/Z rotation of the object, using Euler angles. Units should be in degrees, to match OpenGL usage.
So the question is: how can I define the values that make the oar simulate a realistic movement?
This looks more like a mathematics question.
I'll present some general tips.
On positioning:
The fixed point of the oar is where the oar is held on the boat, so the oar's rotation is relative to that point, not to the center of the oar.
And on top of that, the boat is moving, and so is the oar's "fixed" point.
The order for positioning should be (see the sketch after this list):
Translate to the boat position.
Translate the oar so its center is relative to the correct spot of the boat.
Apply the oar rotation.
Draw it.
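A minimal sketch of that order in fixed-function OpenGL calls (boatPos, oarlockOffset and the angles are placeholder values; min3D composes the same kind of matrix stack for you in Java):

// Each call multiplies the current matrix, so the transform written last
// (the rotation) is applied to the oar's vertices first.
glPushMatrix();
glTranslatef(boatPos.x, boatPos.y, boatPos.z);                   // 1. to the boat
glTranslatef(oarlockOffset.x, oarlockOffset.y, oarlockOffset.z); // 2. to the oarlock
glRotatef(yawDeg,   0.0f, 1.0f, 0.0f);                           // 3. the oar rotation
glRotatef(pitchDeg, 1.0f, 0.0f, 0.0f);
glRotatef(rollDeg,  0.0f, 0.0f, 1.0f);
drawOar();                                                       // 4. draw it
glPopMatrix();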
On animation:
It will be easier to animate if you alter your model so the origin is at the point where the oar is fixed, but it may complicate other animations/calculations if you later intend to do more complex manipulations on the oar.
Interpolation of Euler rotations is a mess; I suggest quaternions. You can grab the angles from that nice picture and interpolate. (If you still need Euler angles, you can convert the end result back.)
For simple animations (say you just want the oar to repeatedly rotate in some pattern), you will hardly find a better method than key-frames: create a list of coordinates/angles along the path you want the oar to follow, and iterate through them, interpolating.
With enough points, a simple linear interpolation will do just fine.
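A standalone C++ sketch of such a key-frame loop (the key values are made up; sample() returns interpolated angles for a cycle parameter t in [0, 1)):

#include <vector>
#include <cmath>

struct Key { float pitch, roll, yaw; };

// Hypothetical angles along one oar stroke; the list loops.
std::vector<Key> keys = {
    {  0.0f,  0.0f,  0.0f },
    { 20.0f,  5.0f, 30.0f },
    {  0.0f, 10.0f, 60.0f },
    {-20.0f,  5.0f, 30.0f },
};

Key sample(float t)                         // t in [0, 1) over one full cycle
{
    int n = static_cast<int>(keys.size());
    float scaled = t * n;
    int i = static_cast<int>(scaled) % n;
    int j = (i + 1) % n;                    // wrap around so the motion loops
    float a = scaled - std::floor(scaled);  // blend factor inside the segment
    auto lerp = [a](float x, float y) { return x + a * (y - x); };
    return { lerp(keys[i].pitch, keys[j].pitch),
             lerp(keys[i].roll,  keys[j].roll),
             lerp(keys[i].yaw,   keys[j].yaw) };
}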

Draw a circle on top of Renderer2D

I created an X.renderer2D so the user can click to pick the centre of a lesion, and I want to show the user where they clicked. My current idea is to freeze the renderer (so the slice and the zoom stay the same) and then use the canvas to draw a circle.
Here is my code:
centerpick2D = new X.renderer2D();
centerpick2D.container = 'pick_center_segment';
centerpick2D.orientation = 'Z';
centerpick2D.init();
centerpick2D.add(volumeT1DCM);
centerpick2D.render();
centerpick2D.interactor.onMouseDown = function(){
    tumorCenter = centerpick2D.xy2ijk(centerpick2D.interactor.mousePosition[0],
                                      centerpick2D.interactor.mousePosition[1]);
    centerpick2D.interactor.config.MOUSEWHEEL_ENABLED = false;
    centerpick2D.interactor.config.MOUSECLICKS_ENABLED = false;
    $('canvas').attr('id', 'xtkCanvas');
    var myCanvas = document.getElementById("xtkCanvas");
    var ctx = myCanvas.getContext("2d");
    ctx.fillStyle = "#FF0000";
    ctx.beginPath();
    ctx.arc(centerpick2D.interactor.mousePosition[0],
            centerpick2D.interactor.mousePosition[1],
            20, 0, Math.PI*2, true);
    ctx.closePath();
    ctx.fill();
};
I have two problems:
The MOUSEWHEEL_ENABLED = false and MOUSECLICKS_ENABLED = false assignments do not work. I tried adding a centerpick2D.init(), which works but adds a second canvas on top of the previous one.
My circle does not appear anywhere.
Any help would be greatly appreciated. :-D
Sorry it took me a while to get this uploaded. Here's a quick overview of how I copy the XTK canvas' contents into my own canvas and then do my own custom drawing on top of it. The actual code for my project is all over the place, so I'm just pasting formatted snippets here. Again, there's a definite performance drawback (due to the copying of pixels), so I think it would be better to build all this into the XTK code in the first place and do all the drawing in one canvas element.
// initialise animation loop with requestAnimationFrame
startDraw: function(){
    var _this = this;
    var time = new Date().getTime();
    function draw() {
        // update time
        var now = new Date().getTime();
        // only draw the frame if 25 milliseconds have passed!
        if(now > (time + 25)){
            // call drawing function here
            drawFrame();
            time = now;
        }
        requestAnimationFrame(draw);
    }
    requestAnimationFrame(draw);
};

// ...
// actual drawing function: for each frame, copy the pixel contents of an XTK
// canvas into the custom canvas and do the custom drawing on top of it, so the
// drawing happens at every frame
drawFrame: function(){
    // ...
    // this.ctx is the context of my own custom canvas element;
    // I use the drawImage() function to copy the pixels from this.srcCanvas,
    // a predefined XTK canvas
    this.ctx.drawImage(this.srcCanvas, 1, 1);

    // custom func to draw on top of this same canvas (i.e. this.ctx) with
    // standard HTML Canvas functionality; this is where you could draw your
    // own circle based on the user mouse coords
    this.drawAnnotation();
    // ...
}
Let me know if you have any more questions. The full code is available here:
https://github.com/davidbasalla/IndividualProject

Box2D creating rectangular bounding boxes around angled line bodies

I'm having a lot of trouble detecting collisions in a zero-G space game. Hopefully this image will help me explain:
http://i.stack.imgur.com/f7AHO.png
The white rectangle is a static body with a b2PolygonShape fixture attached, as such:
// Create the line physics body definition
b2BodyDef wallBodyDef;
wallBodyDef.position.Set(0.0f, 0.0f);
// Create the line physics body in the physics world
wallBodyDef.type = b2_staticBody; // Set as a static body
m_Body = world->CreateBody(&wallBodyDef);

// Create the vertex array which will be used to make the physics shape
b2Vec2 vertices[4];
vertices[0].Set(m_Point1.x, m_Point1.y); // Point 1
vertices[1].Set(m_Point1.x + (sin(angle - 90*(float)DEG_TO_RAD)*m_Thickness),
                m_Point1.y - (cos(angle - 90*(float)DEG_TO_RAD)*m_Thickness)); // Point 2
vertices[2].Set(m_Point2.x + (sin(angle - 90*(float)DEG_TO_RAD)*m_Thickness),
                m_Point2.y - (cos(angle - 90*(float)DEG_TO_RAD)*m_Thickness)); // Point 3
vertices[3].Set(m_Point2.x, m_Point2.y); // Point 4
int32 count = 4; // Vertex count

b2PolygonShape wallShape; // Create the line physics shape
wallShape.Set(vertices, count); // Set the physics shape using the vertex array above

// Define the fixture
b2FixtureDef fixtureDef;
fixtureDef.shape = &wallShape; // Set the line shape
fixtureDef.density = 0.0f; // Set the density
fixtureDef.friction = 0.0f; // Set the friction
fixtureDef.restitution = 0.5f; // Set the restitution

// Add the shape to the body
m_Fixture = m_Body->CreateFixture(&fixtureDef);
m_Fixture->SetUserData("Wall");
You'll have to trust me that that makes the shape in the image. The physics simulation works perfectly, the player (small triangle) collides with the body with pixel perfect precision. However, I come to a problem when I try to determine when a collision takes place so I can remove health and what-not. The code I am using for this is as follows:
/*------ Check for collisions ------*/
if (m_Physics->GetWorld()->GetContactCount() > 0)
{
    if (m_Physics->GetWorld()->GetContactList()->GetFixtureA()->GetUserData() == "Player" &&
        m_Physics->GetWorld()->GetContactList()->GetFixtureB()->GetUserData() == "Wall")
    {
        m_Player->CollideWall();
    }
}
I'm aware there are probably better ways to do collisions, but I'm just a beginner and haven't found anywhere that explains how to do listeners and callbacks well enough for me to understand. The problem I have is that GetContactCount() shows a contact whenever the player body enters the purple box above. Obviously there is a rectangular bounding box being created that encompasses the white rectangle.
I've tried making the fixture an EdgeShape, and the same thing happens. Does anyone have any idea what is going on here? I'd really like to get collision nailed so I can move on to other things. Thank you very much for any help.
The bounding box is an AABB (axis-aligned bounding box), which means it will always be aligned with the Cartesian axes. AABBs are normally used for broad-phase collision detection because that is a relatively simple (and inexpensive) computation.
You need to make sure that you're testing against the OBB (oriented bounding box) for the objects if you want more accurate (but not pixel perfect, as Micah pointed out) results.
Also, I agree with Micah's answer that you will most likely need a more general system for handling collisions. Even if you only ever have just walls and the player, there's no guarantee that which object will be A and which will be B. And as you add other object types, this will quickly unravel.
Creating the contact listener isn't terribly difficult, from the docs (added to attempt to handle your situation):
class MyContactListener : public b2ContactListener
{
private:
    PlayerClass *m_Player;
public:
    MyContactListener(PlayerClass *player) : m_Player(player)
    { }

    void BeginContact(b2Contact* contact)
    { /* handle begin event */ }

    void EndContact(b2Contact* contact)
    {
        if (contact->GetFixtureA()->GetUserData() == m_Player
            || contact->GetFixtureB()->GetUserData() == m_Player)
        {
            m_Player->CollideWall();
        }
    }

    /* we're not interested in these for the time being */
    void PreSolve(b2Contact* contact, const b2Manifold* oldManifold)
    { /* handle pre-solve event */ }
    void PostSolve(b2Contact* contact, const b2ContactImpulse* impulse)
    { /* handle post-solve event */ }
};
This requires you to assign m_Player to the player's fixture's user data field. Then you can use the contact listener like so:
m_Physics->GetWorld()->SetContactListener(new MyContactListener(m_Player));
How do you know GetFixtureA is the player and B is the wall? Could it be reversed? Could there be a FixtureC? I would think you would need a more generic solution.
I've used a similar graphics framework (Qt) that had something that let you grab any two objects and call something like 'hasCollided', which would return a bool. You could get away with not using a callback and just calling it in drawScene() or checking it periodically.
In Box2D the existence of a contact just means that the AABBs of two fixtures overlaps. It does not necessarily mean that the shapes of the fixtures themselves are touching.
You can use the IsTouching() function of a contact to check if the shapes are actually touching, but the preferred way to deal with collisions is to use the callback feature to have the engine tell you whenever two fixtures start/stop touching. Using callbacks is much more efficient and easier to manage in the long run, though it may be a little more effort to set up initially and there are a few things to be careful about - see here for an example: http://www.iforce2d.net/b2dtut/collision-callbacks
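For completeness, a small sketch of the polling variant (assuming world is your b2World* and reusing the question's string user-data scheme, which only works while the same string literal pointer is used everywhere):

// A contact exists as soon as two fixtures' AABBs overlap;
// IsTouching() tells you whether the shapes themselves overlap.
for (b2Contact* c = world->GetContactList(); c; c = c->GetNext())
{
    if (!c->IsTouching())
        continue; // AABBs overlap, but the shapes do not; ignore

    b2Fixture* a = c->GetFixtureA();
    b2Fixture* b = c->GetFixtureB();
    // Check both orderings, since which fixture is A or B is not guaranteed.
    if (a->GetUserData() == "Player" || b->GetUserData() == "Player")
        m_Player->CollideWall();
}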