Detect the surfaces of a 3D mesh selected by mouse - C++

I have an application that is used to display objects in 3D. Now I want to improve it: if I double-click on an area of my mesh, I want to retrieve the edges and surfaces in that area, so that I can then subdivide it. Is it possible to retrieve them? Thanks.

Convert the click on the viewport to a ray in world space.
Then query your scene with the ray to find intersecting objects (based on an axis-aligned bounding box search using your scene's octree, if you have one).
Then, if you need individual triangles, test the ray against all triangles of the objects returned by the scene query. You could optimize this step, if necessary, by building an octree over each object's mesh. The intersection closest to the ray origin is the hit point.
For each object, you can transform the ray into that object's local coordinate system.
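A minimal sketch of the triangle-testing step, assuming the pick ray has already been built from the double-click (e.g. by unprojecting the mouse position with the inverse view-projection matrix) and transformed into the object's local space; the Vec3 type and the flat triangle array are placeholders for whatever your application already uses:

```cpp
#include <cmath>
#include <cfloat>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin, dir; };

static Vec3  sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3  cross(Vec3 a, Vec3 b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Möller–Trumbore ray/triangle intersection: returns true and the distance t
// along the ray if the ray hits triangle (v0, v1, v2).
bool intersectTriangle(const Ray& ray, Vec3 v0, Vec3 v1, Vec3 v2, float& t)
{
    const float EPS = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p  = cross(ray.dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < EPS) return false;       // ray parallel to the triangle
    float inv = 1.0f / det;
    Vec3 s = sub(ray.origin, v0);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float v = dot(ray.dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * inv;
    return t > EPS;                               // hit must be in front of the origin
}

// Test the ray against every candidate triangle (3 vertices per triangle) and
// keep the nearest hit; returns -1 if nothing was hit.
int pickTriangle(const Ray& ray, const Vec3* tris, int triCount, float& bestT)
{
    int best = -1;
    bestT = FLT_MAX;
    for (int i = 0; i < triCount; ++i) {
        float t;
        if (intersectTriangle(ray, tris[3*i], tris[3*i+1], tris[3*i+2], t) && t < bestT) {
            bestT = t;
            best  = i;                            // closest triangle so far
        }
    }
    return best;
}
```

From the index of the hit triangle you can then look up its edges and the neighbouring faces that share them, which is what you need for the local subdivision.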

Related

Get minimum oriented bounding box from partial point cloud from depth sensor

How can I compute the oriented bounding box from a partial point cloud?
My use case is I have a depth camera looking down at a table with one object (which can have irregular geometry). The camera does not get the full point cloud because the bottom of the object is occluded.
From the limited point cloud information I can get, how do I fit an oriented bounding box around it?
I intend to use the bounding box to calculate the center of mass, assuming uniform density, and to use this as a grasping heuristic.
Similar question in 2d: minimal bounding box of a clipped point cloud
If I understand you right, why not do the following:
Reproject the visible points onto a planar surface (the table).
Get the bounding rectangle around the reprojected points; the unseen points must be inside it and cannot be outside it if the camera is orthogonal to the projection surface (the table in this case). This gives the bounding box's profile.
The bounding box's bottom is the table, and its top is parallel to it and contains the point nearest to the camera.
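A minimal sketch of that idea, assuming the cloud has already been transformed into a coordinate frame where the table is the plane z = 0 and the camera looks straight down; the Point type is a placeholder, and the axis-aligned rectangle used here could be replaced by a minimum-area rectangle (convex hull plus rotating calipers) for a tighter fit:

```cpp
#include <vector>
#include <algorithm>
#include <limits>

struct Point { float x, y, z; };

struct Box {
    float minX, maxX;   // rectangle on the table plane
    float minY, maxY;
    float height;       // from the table up to the point nearest the camera
};

Box fitBoxOnTable(const std::vector<Point>& cloud)
{
    Box b;
    b.minX = b.minY = std::numeric_limits<float>::max();
    b.maxX = b.maxY = -std::numeric_limits<float>::max();
    b.height = 0.0f;

    for (const Point& p : cloud) {
        // Drop z to "reproject" the point onto the table plane.
        b.minX = std::min(b.minX, p.x);
        b.maxX = std::max(b.maxX, p.x);
        b.minY = std::min(b.minY, p.y);
        b.maxY = std::max(b.maxY, p.y);
        // The top face is parallel to the table and passes through the
        // highest point (the one nearest the downward-looking camera).
        b.height = std::max(b.height, p.z);
    }
    return b;
}

// Under the uniform-density assumption, the center-of-mass estimate is then
// simply the box center: ((minX+maxX)/2, (minY+maxY)/2, height/2).
```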

Retrieving occluded faces given a rectangular region

I am trying to do a click-and-drag selection to select all the visible faces of a model (similar to those in 3D modelling software such as Blender).
Initially I am thinking of using line intersection to find all the occluded faces in the scene: for every pixel in the viewport, trace a line into the scene and find the first intersection. The occluded faces would then be the ones that never get intersected. After experimenting, however, I realized that this method is very slow.
I heard of another method which goes something like:
Assign a unique color to each primitive.
Project all of them onto a virtual plane that coincides with the viewport.
From the projected pixels, if the colors corresponding to a primitive are not present, then that primitive is occluded.
The problem is that I have no idea how to go about creating such a "virtual" plane without revealing it to the end user. Any help, or a better idea for solving this?
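For what it's worth, the "virtual plane" in this scheme is usually just an offscreen render target (a framebuffer object in OpenGL), so the user never sees it. A minimal sketch of the colour-ID read-back, assuming each face was rendered flat-shaded into that FBO with a colour encoded from its index and the FBO was cleared to a reserved colour (white here):

```cpp
#include <GL/glew.h>
#include <set>
#include <vector>

// Encode a face index into an RGB colour (supports up to 2^24 - 1 faces;
// the all-white colour is reserved for the background / clear colour).
void faceIndexToColor(unsigned int i, unsigned char rgb[3])
{
    rgb[0] =  i        & 0xFF;
    rgb[1] = (i >> 8)  & 0xFF;
    rgb[2] = (i >> 16) & 0xFF;
}

// After rendering the colour-ID pass into the offscreen FBO, read back the
// drag rectangle (window coordinates, origin at the bottom-left) and collect
// the indices of every face whose colour appears, i.e. every visible face.
// Occluded faces are exactly the ones that never show up in this set.
std::set<unsigned int> visibleFacesInRect(GLuint fbo, int x, int y, int w, int h)
{
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);          // rows are tightly packed
    std::vector<unsigned char> pixels(w * h * 3);
    glReadPixels(x, y, w, h, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());
    glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);

    std::set<unsigned int> faces;
    for (size_t p = 0; p < pixels.size(); p += 3) {
        unsigned int i = pixels[p] | (pixels[p + 1] << 8) | (pixels[p + 2] << 16);
        if (i == 0xFFFFFF) continue;              // background, not a face
        faces.insert(i);
    }
    return faces;
}
```

The ID pass itself must be drawn with flat colours (no lighting, anti-aliasing or blending), otherwise the read-back colours will not decode to valid indices.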

View frustum culling for animated meshes

I implemented frustum culling in my system; it tests the frustum planes against every object's bounding sphere, and it works great. (I find the PlaneVsAabb check unneeded.)
However, the mesh's bounding sphere is fitted to its bind pose, so when the mesh starts moving (e.g. the player attacks with his sword) some vertices can end up outside the sphere.
This often results in the mesh getting culled even though some vertices should still be rendered (e.g. the player's sword that went outside the sphere).
I could think of two possible solutions for this:
For every mesh in every frame, calculate its new bounding sphere based on bone changes. (I have no idea how to start with this...) Could this be too inefficient?
Add a fixed offset for every sphere radius (based on the entire mesh size maybe?), so there could be no chance of the mesh getting culled even when animated.
Yes, (1) would be inefficient in real time. However, you can do a mixture of both by computing the largest possible bounding sphere statically, i.e. when you load the mesh. Using that in (2) would guarantee a better result than some arbitrary offset you make up.
For (1), you can add locators to key elements (e.g. a dummy bone on the tip of the sword) and transform their origins while animating. You can do this on the CPU on each update and then compute a bounding box or bounding sphere from them, as in the sketch below. Alternatively, you can precompute bounding volumes for each frame of the animation offline; Doom 3 uses the second approach.
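A minimal sketch of the per-update CPU variant, assuming each locator stores the index of the bone it follows and an offset in that bone's space; the Vec3 type, the column-major matrix layout and transformPoint() stand in for whatever your math/animation code already provides. Note that the sphere only bounds the locators, so either place them at the extremities of the mesh (sword tip, hands, feet) or pad the radius by a margin computed offline.

```cpp
#include <vector>
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

struct Locator {
    int  boneIndex;   // e.g. a dummy bone on the tip of the sword
    Vec3 offset;      // position in that bone's local space
};

struct Sphere { Vec3 center; float radius; };

// Applies a 4x4 column-major bone matrix (in model space) to a point.
static Vec3 transformPoint(const float m[16], const Vec3& p)
{
    return { m[0]*p.x + m[4]*p.y + m[8]*p.z  + m[12],
             m[1]*p.x + m[5]*p.y + m[9]*p.z  + m[13],
             m[2]*p.x + m[6]*p.y + m[10]*p.z + m[14] };
}

// Recomputed once per animation update on the CPU: transform each locator by
// its bone and wrap a sphere around the resulting points.
Sphere boundingSphereFromLocators(const std::vector<Locator>& locators,
                                  const std::vector<const float*>& boneMatrices)
{
    if (locators.empty()) return { {0.0f, 0.0f, 0.0f}, 0.0f };

    std::vector<Vec3> pts;
    pts.reserve(locators.size());
    for (const Locator& l : locators)
        pts.push_back(transformPoint(boneMatrices[l.boneIndex], l.offset));

    // Center: average of the locator positions (simple and conservative).
    Vec3 c = { 0.0f, 0.0f, 0.0f };
    for (const Vec3& p : pts) { c.x += p.x; c.y += p.y; c.z += p.z; }
    c.x /= pts.size(); c.y /= pts.size(); c.z /= pts.size();

    // Radius: distance to the farthest locator.
    float r2 = 0.0f;
    for (const Vec3& p : pts) {
        float dx = p.x - c.x, dy = p.y - c.y, dz = p.z - c.z;
        r2 = std::max(r2, dx*dx + dy*dy + dz*dz);
    }
    return { c, std::sqrt(r2) };
}
```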

DirectX 9: mouse over an object

OK, when I used to draw things with GDI+ I knew the coordinates of my objects, but now that I'm using meshes in DirectX 9 I have no idea how to get an object's coordinates so I can tell whether the mouse is over it. Any idea how to find the coordinates?
You need to cast the mouse position into the world and convert it to world-space coordinates, which are then tested against the various objects. You may be able to find a library to do this for you; I know OpenGL supports picking, and most wrappers offer enhanced functions for that. The principle is:
Find the mouse coordinates in the window.
Using those coordinates, cast a ray into the world (whether you actually use a ray in your system or simply do equivalent math isn't a big deal here). You'll use the current view matrix ("camera" angle and position) to calculate the ray's origin and direction.
Using that ray, test against your objects, their bounding boxes or their geometry (whichever you choose).
Using the intersection coordinates, find the object at that location.
You can also use the depth buffer for this very easily, if your scene is relatively static. Simply render with a depth texture set as the Z buffer, then use the depth, mouse position and view matrix to find the point of intersection.
It may be possible to do this in reverse, that is, map each object to the appropriate screen coordinates, but you will likely run into issues with depth sorting and overlapping areas. Also, it can be unnecessarily slow mapping every object to window areas each frame.
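A minimal sketch of the ray approach using the D3DX utility library, assuming fixed-function view/projection transforms are set on the device and the mouse position is in client (viewport) coordinates; PickMesh is a hypothetical helper name:

```cpp
#include <d3dx9.h>

bool PickMesh(IDirect3DDevice9* device, ID3DXMesh* mesh,
              const D3DXMATRIX& world,              // the mesh's world matrix
              int mouseX, int mouseY,
              DWORD* hitFace, float* hitDist)
{
    D3DVIEWPORT9 vp;
    device->GetViewport(&vp);
    D3DXMATRIX view, proj;
    device->GetTransform(D3DTS_VIEW, &view);
    device->GetTransform(D3DTS_PROJECTION, &proj);

    // Unproject the mouse position at the near (z = 0) and far (z = 1) planes.
    // Passing the mesh's world matrix yields points in the mesh's local space,
    // which is the space D3DXIntersect works in.
    D3DXVECTOR3 nearPt((float)mouseX, (float)mouseY, 0.0f);
    D3DXVECTOR3 farPt ((float)mouseX, (float)mouseY, 1.0f);
    D3DXVECTOR3 rayOrigin, rayEnd;
    D3DXVec3Unproject(&rayOrigin, &nearPt, &vp, &proj, &view, &world);
    D3DXVec3Unproject(&rayEnd,    &farPt,  &vp, &proj, &view, &world);

    D3DXVECTOR3 rayDir = rayEnd - rayOrigin;
    D3DXVec3Normalize(&rayDir, &rayDir);

    // Test the ray against the mesh; D3DXIntersect reports the closest hit.
    BOOL  hit = FALSE;
    DWORD faceIndex = 0;
    float u = 0.0f, v = 0.0f, dist = 0.0f;
    D3DXIntersect(mesh, &rayOrigin, &rayDir, &hit,
                  &faceIndex, &u, &v, &dist, NULL, NULL);

    if (hit) { *hitFace = faceIndex; *hitDist = dist; }
    return hit == TRUE;
}
```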

OpenGL: getting clipping planes that will bound the entire scene

I am looking for a way to display my entire scene on the screen. This involves a call to glOrtho() with my clipping plane bounds.
However, the size of my scene is dynamic and as such, I need to find a way to determine a projection box that will contain the whole scene.
Any suggestions?
You will need to know the bounding box of every object in your scene. Then you can keep expanding the scene's bounding box by each object in it. You can see an example of this in OpenSceneGraph, using its BoundingBox class.
If you need to get the bounding box for a particular object, you can just store the minimum and maximum values along each axis as you load the model (since bounding boxes are axis-aligned).
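A minimal sketch, assuming every object already stores an axis-aligned bounding box in world space and the modelview matrix is the identity (i.e. world space and eye space coincide); the Vec3/BoundingBox types are placeholders playing the role of OpenSceneGraph's osg::BoundingBox:

```cpp
#include <GL/gl.h>
#include <vector>
#include <limits>
#include <algorithm>

struct Vec3        { float x, y, z; };
struct BoundingBox { Vec3 min, max; };

// Grow one box to enclose another, much like repeatedly calling
// osg::BoundingBox::expandBy on every object in the scene.
void expandBy(BoundingBox& box, const BoundingBox& other)
{
    box.min.x = std::min(box.min.x, other.min.x);
    box.min.y = std::min(box.min.y, other.min.y);
    box.min.z = std::min(box.min.z, other.min.z);
    box.max.x = std::max(box.max.x, other.max.x);
    box.max.y = std::max(box.max.y, other.max.y);
    box.max.z = std::max(box.max.z, other.max.z);
}

void setOrthoToScene(const std::vector<BoundingBox>& objects)
{
    const float inf = std::numeric_limits<float>::max();
    BoundingBox scene = { {  inf,  inf,  inf },
                          { -inf, -inf, -inf } };
    for (const BoundingBox& b : objects)
        expandBy(scene, b);

    // Use the scene box as the clipping volume. glOrtho's near/far values are
    // distances along -z from the eye, so with an identity modelview they map
    // to -max.z and -min.z here.
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(scene.min.x, scene.max.x,
            scene.min.y, scene.max.y,
            -scene.max.z, -scene.min.z);
    glMatrixMode(GL_MODELVIEW);
}
```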