I'd like to compute the intersection between a ray and the models the user sees on screen in my app. I'm following the OpenGL guide, but I have many models in the scene that I draw independently and dynamically inside a push/pop block, applying a translate to the current modelview matrix (i.e. for each model I do a
glPushMatrix,
glTranslatef,
draw the model,
glPopMatrix
).
Do I also need to compute different rays, one for each object (for later testing whether it intersects the model's bounding sphere)?
Maybe some concept of ray picking is not clear to me yet. Any link, guide, etc. would be much appreciated ^^
The picking ray has nothing to do with the model matrix. If the objects in your scene move (i.e. the model matrices change), the ray itself won't move.
Which of the methods proposed in the link do you plan to use: GL_SELECTION, rendering with a special color, or home-made intersections? I'll edit this answer accordingly.
EDIT: let's go with the home-made one, then.
First, you need to generate a ray. You usually want to express it in world space (i.e. "my ray starts at the same position as my camera, and its direction is (x, y, z)"). There are two methods to do that, both pretty well explained in the OpenGL FAQ ("One way to generate a pick ray"... "Another method"...). Just copy and paste the code.
If the mouse is in the middle of the screen, your ray should point in the same direction as your camera. You can check that against your arguments to gluLookAt.
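For reference, here is a hedged sketch of the gluUnProject route (one of the two FAQ methods; mouseX/mouseY are assumed to be the cursor position in window coordinates): unproject the cursor at the near plane (winZ = 0) and at the far plane (winZ = 1), and the ray runs from the first point towards the second.

GLdouble model[16], proj[16];
GLint view[4];
glGetDoublev(GL_MODELVIEW_MATRIX, model);   // read while only the camera transform is current
glGetDoublev(GL_PROJECTION_MATRIX, proj);
glGetIntegerv(GL_VIEWPORT, view);

GLdouble winY = view[3] - mouseY;           // OpenGL's window origin is bottom-left
GLdouble nx, ny, nz, fx, fy, fz;
gluUnProject(mouseX, winY, 0.0, model, proj, view, &nx, &ny, &nz);  // point on the near plane
gluUnProject(mouseX, winY, 1.0, model, proj, view, &fx, &fy, &fz);  // point on the far plane
// ray origin: (nx, ny, nz); ray direction: normalize((fx, fy, fz) - (nx, ny, nz))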
Now, what you want to do is intersect your ray with your meshes' triangles, but there is a problem: the triangles are not expressed in the same coordinate system as the ray.
So you have to express the ray in the triangles' system.
The triangles' system is the model system; the ray's system is the world system. Fortunately, you have a matrix that goes from model to world: the Model matrix, the one you set after your glPushMatrix. So inverse(ModelMatrix) is the matrix that goes from world to model.
ray = Ray(cam_pos, dir)
foreach (mesh)
    ModelMatrix = whatever you want
    InverseModelMatrix = inverse(ModelMatrix)
    rayInModelSpace = Ray(InverseModelMatrix x ray.pos, InverseModelMatrix x ray.dir)
    Intersect(rayInModelSpace, mesh)
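In C++ with GLM, that loop might look like the following sketch (Ray, Mesh, meshes and intersectMesh are hypothetical names; note the w components: 1 for the ray origin, which is a point and must be translated, 0 for the direction, which is a vector and must not be):

#include <glm/glm.hpp>

struct Ray { glm::vec3 pos, dir; };    // hypothetical helper type

bool pick(const Ray& ray)              // ray is in world space
{
    for (const Mesh& mesh : meshes)    // your scene's meshes
    {
        glm::mat4 worldToModel = glm::inverse(mesh.modelMatrix);
        Ray local;
        local.pos = glm::vec3(worldToModel * glm::vec4(ray.pos, 1.0f)); // point: w = 1
        local.dir = glm::vec3(worldToModel * glm::vec4(ray.dir, 0.0f)); // vector: w = 0
        if (intersectMesh(local, mesh))
            return true;
    }
    return false;
}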
Your Intersect method could look like this:
foreach (triangle in mesh)
    if (ray.intersect(triangle)) return true
Triangle-ray intersection code can be found at geometrictools.com.
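If you want something inline instead, here is a compact sketch of the classic Möller-Trumbore test (the standard algorithm, not the geometrictools code; t returns the hit distance along the ray):

#include <glm/glm.hpp>
#include <cmath>    // std::fabs

bool intersectTriangle(const glm::vec3& orig, const glm::vec3& dir,
                       const glm::vec3& v0, const glm::vec3& v1,
                       const glm::vec3& v2, float& t)
{
    const float EPS = 1e-6f;
    glm::vec3 e1 = v1 - v0, e2 = v2 - v0;
    glm::vec3 p = glm::cross(dir, e2);
    float det = glm::dot(e1, p);
    if (std::fabs(det) < EPS) return false;   // ray parallel to the triangle
    float invDet = 1.0f / det;
    glm::vec3 s = orig - v0;
    float u = glm::dot(s, p) * invDet;        // first barycentric coordinate
    if (u < 0.0f || u > 1.0f) return false;
    glm::vec3 q = glm::cross(s, e1);
    float v = glm::dot(dir, q) * invDet;      // second barycentric coordinate
    if (v < 0.0f || u + v > 1.0f) return false;
    t = glm::dot(e2, q) * invDet;             // hit distance along the ray
    return t >= 0.0f;
}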
This will be quite slow, but it will work. There are several ways to improve the performance, but the easiest is to compute the AABB of each mesh and to intersect the ray with the AABB first. No intersection with the AABB => no intersection with the mesh.
Code can be found on geometrictools too.
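For the AABB pre-test, here is a sketch of the usual "slab" method (assuming bbMin/bbMax are the mesh's axis-aligned bounds stored in model space, so the model-space ray from above can be used directly):

#include <glm/glm.hpp>
#include <algorithm>   // std::min, std::max, std::swap
#include <cfloat>      // FLT_MAX

bool intersectAABB(const glm::vec3& orig, const glm::vec3& dir,
                   const glm::vec3& bbMin, const glm::vec3& bbMax)
{
    float tMin = 0.0f, tMax = FLT_MAX;
    for (int i = 0; i < 3; ++i)                // test the x, y and z slabs in turn
    {
        float invD = 1.0f / dir[i];            // +/-inf for axis-parallel rays is fine (IEEE)
        float t0 = (bbMin[i] - orig[i]) * invD;
        float t1 = (bbMax[i] - orig[i]) * invD;
        if (invD < 0.0f) std::swap(t0, t1);
        tMin = std::max(tMin, t0);
        tMax = std::min(tMax, t1);
        if (tMax < tMin) return false;         // slab intervals don't overlap: no hit
    }
    return true;
}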
I'll be away for a few days, so good luck :)
I'm currently using LWJGL, but if you have a solution for plain OpenGL I can use that too.
Now I'm trying to apply a selection area to a plane that I can move around with my mouse (like my terrible drawing above). I'm trying to keep it flat against the plane so it can move over any obstacles. I've considered a projected texture, but I don't know how to implement it. Is this a good way of solving the problem, or is there a better alternative?
What would be the best way to implement a selection area?
Alternative options, pros and cons.
Edit: This will be moving over another texture if that makes a difference.
When you already know the intersection point in world space, there is a relatively simple solution that doesn't require projected textures:
In the fragment shader, calculate the world-space distance between the intersection point and the current fragment. When the distance is smaller than the desired radius of the circle, draw the selection color; otherwise draw the normal plane.
uniform vec3 intersection_ws;  // picked point, world space
uniform float circle_radius;   // selection radius, world units
in vec3 current_ws;            // interpolated fragment position, world space

float dist = length(current_ws - intersection_ws);
if (dist < circle_radius)
    // draw the selection overlay color
else
    // draw the normal plane color
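On the application side, the picked point and radius would be fed to the shader as uniforms; a hedged sketch (the uniform names match the snippet above, everything else is illustrative):

#include <glm/gtc/type_ptr.hpp>   // glm::value_ptr

GLint locPoint  = glGetUniformLocation(program, "intersection_ws");
GLint locRadius = glGetUniformLocation(program, "circle_radius");
glUseProgram(program);
glUniform3fv(locPoint, 1, glm::value_ptr(intersectionWs));  // re-upload whenever the pick moves
glUniform1f(locRadius, radius);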
Given a human 3D model, I want to change its shape by giving parameters, like height, waist, bust etc.
From what I gathered, the 3D model should have some 'hooks' around the areas I can change.
Any pointers for this would be very helpful, through OpenGL, Three.js or any other means. I don't want to do it in Blender or other 3D manipulation tools. I want it done programmatically.
Here's a Sample 3D model
What you should do is "tag" a group of vertices together.
Then apply a vertex shader to those groups, which changes the position of the vertices to shrink/expand the mesh.
One way to do this is to place a point inside the mesh, and give it a radius. This pretty much means you're creating a sphere.
Run the shader on all the vertices inside the sphere.
What the shader should do is "inflate" the sphere - moving the vertices away from the center point.
Just transform each vertex away from the center by a certain amount.
(Make a vector from the center to the current vertex, extend the vector, and move the vertex there.)
This should work well for the belly.
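A CPU-side C++ sketch of that inflate step (the same math ports directly to a vertex shader; the linear falloff toward the sphere's edge is an assumption to avoid a hard seam at the boundary):

#include <glm/glm.hpp>
#include <vector>

// Push every vertex inside the sphere (center, radius) outwards, by up to 'amount'.
void inflate(std::vector<glm::vec3>& verts,
             const glm::vec3& center, float radius, float amount)
{
    for (glm::vec3& v : verts)
    {
        glm::vec3 d = v - center;
        float dist = glm::length(d);
        if (dist > 0.0f && dist < radius)
        {
            float falloff = 1.0f - dist / radius;   // 1 at the center, 0 at the edge
            v += (d / dist) * amount * falloff;     // move along the center->vertex direction
        }
    }
}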
Another shader you can do is to stretch the mesh vertically (for the person's height).
This is more straightforward.
Just run on all vertices and add to their height.
How much to add is what you should figure out. It can't be a constant: a constant offset would just translate the whole model upwards. It has to be a linear function of height, i.e. scale each vertex's height about a fixed point such as the feet.
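A sketch of that height version, scaling about the feet (feetY and the scale factor are illustrative):

// Stretch the model vertically: each height grows linearly away from the feet,
// so scale = 1.1f makes the model 10% taller without moving the feet.
void stretchHeight(std::vector<glm::vec3>& verts, float feetY, float scale)
{
    for (glm::vec3& v : verts)
        v.y = feetY + (v.y - feetY) * scale;
}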
I am trying to implement an overlay for a particular screen saver that I inject my application into.
This screen saver uses 3d objects and a dynamic camera.
I have the camera position and direction, the 3D object positions, and the fov value, and I'd like to create an audio-based overlay that adds icons to the objects.
I have the 2D overlay in place and can successfully iterate the objects; however, I can't figure out how to calculate a frustum from the data I have.
Basically: how can one create a frustum from the camera direction? Is a world-to-screen matrix required to create the frustum? I don't have the W2S matrix, so would that make the problem impossible to solve?
There is an excellent resource at http://web.archive.org/web/20120531231005/http://crazyjoke.free.fr/doc/3D/plane%20extraction.pdf which should explain everything you need; it outlines how to extract the view frustum planes from the combined projection and modelview matrices.
In regards to the world-to-screen (W2S) matrix, you can calculate it on the fly by utilising a variant of the following:
ScreenPosition = ProjectionMatrix * (ViewMatrix * WorldPosition)
(followed by the perspective divide and viewport mapping to get actual pixel coordinates). That way you can calculate it on the fly as required.
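A sketch of the plane-extraction trick from that paper, using GLM: the six frustum planes fall straight out of the rows of the combined matrix, each as a vec4 holding (a, b, c, d) of the plane equation ax + by + cz + d = 0.

#include <glm/glm.hpp>
#include <glm/gtc/matrix_access.hpp>   // glm::row
#include <array>

std::array<glm::vec4, 6> extractFrustumPlanes(const glm::mat4& proj, const glm::mat4& view)
{
    glm::mat4 m = proj * view;                   // world -> clip space
    std::array<glm::vec4, 6> planes = {
        glm::row(m, 3) + glm::row(m, 0),         // left
        glm::row(m, 3) - glm::row(m, 0),         // right
        glm::row(m, 3) + glm::row(m, 1),         // bottom
        glm::row(m, 3) - glm::row(m, 1),         // top
        glm::row(m, 3) + glm::row(m, 2),         // near
        glm::row(m, 3) - glm::row(m, 2),         // far
    };
    for (glm::vec4& p : planes)                  // normalize so d is a true distance
        p /= glm::length(glm::vec3(p));
    return planes;
}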
There is also a frustum tutorial at Lighthouse aimed at frustum culling, which explains the implementation quite nicely.
I have a 3D model consisting of point vertices (XYZ) and eventually triangular faces.
Using OpenGL or a camera view/projection matrix, I can project the 3D model onto a 2D plane, i.e. a view window or an image with m×n resolution.
The question is how I can determine the correspondence between a pixel on the 2D projection plane and its corresponding vertex (or face) in the original 3D model.
Namely,
What is the closest vertex in the 3D model for a given pixel in the 2D projection?
It sounds like picking in OpenGL, or a ray-tracing problem. Is there an easy solution, though?
With the ray-tracing idea, it is really about finding the first vertex/face intersected by the ray from the viewpoint. Can someone show me a tutorial or some examples? I would like an algorithm that is independent of OpenGL.
Hit testing in OpenGL is usually done without ray tracing. Instead, as each primitive is rendered, a plane of the output is used to store the unique ID of the primitive. Hit testing is then as simple as reading the ID plane at the cursor location.
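A minimal sketch of the color-ID variant of that idea (Object, objects, drawObject, mouseX/mouseY and viewportHeight are illustrative names; lighting, texturing and dithering must be disabled so the color reads back exactly):

// Render pass: draw each object in a flat, unique ID color.
for (const Object& obj : objects)
{
    unsigned id = obj.id + 1;                    // reserve 0 for the background
    glColor3ub((GLubyte)(id & 0xFF),
               (GLubyte)((id >> 8) & 0xFF),
               (GLubyte)((id >> 16) & 0xFF));
    drawObject(obj);
}
// Read back the pixel under the cursor (GL's window origin is bottom-left).
unsigned char px[3];
glReadPixels(mouseX, viewportHeight - mouseY, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, px);
unsigned hitId = px[0] | (px[1] << 8) | (px[2] << 16);
// hitId == 0 -> background; otherwise the object with id hitId - 1 was hit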
My (possibly naive) thought would be to create an array of the vertices and then sort them by their distance (or distance squared, for speed) once projected to your screen point. The first item in the list will be the closest. Note that sorting makes this O(n log n) for n vertices; the edit below gets it down to O(n).
Edit: Better for speed and memory: simply loop through all vertices and keep track of the vertex whose projection is closest (in squared distance) to your viewport pixel. This is O(n) and assumes that you are able to perform the projection yourself, without relying on OpenGL.
For example, in pseudo-code:
function findPointFromViewPortXY( pointOnViewport )
    closestPoint = null
    bestDistance = infinity
    for (each point in points)
        projectedXY = projectOntoViewport(point)
        distanceSquared = distanceBetween(projectedXY, pointOnViewport)
        if (distanceSquared < bestDistance)
            closestPoint = point
            bestDistance = distanceSquared
    return closestPoint
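In C++ with GLM, the projection step the pseudo-code leaves open could look like this sketch (glm::project mirrors gluProject; the matrices and viewport are whatever you render with):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>   // glm::project
#include <limits>
#include <vector>

// Returns the vertex whose window-space projection is closest to the cursor.
glm::vec3 closestVertex(const std::vector<glm::vec3>& verts,
                        const glm::mat4& modelView, const glm::mat4& proj,
                        const glm::vec4& viewport, const glm::vec2& cursor)
{
    glm::vec3 best(0.0f);
    float bestD2 = std::numeric_limits<float>::max();
    for (const glm::vec3& v : verts)
    {
        glm::vec3 win = glm::project(v, modelView, proj, viewport);  // window coordinates
        glm::vec2 d = glm::vec2(win) - cursor;
        float d2 = glm::dot(d, d);                                   // squared distance
        if (d2 < bestD2) { bestD2 = d2; best = v; }
    }
    return best;
}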
In addition to Ben Voigt's answer:
If you do a separate pass over pickable objects, then you can set the viewport to contain only a single pixel that you will read.
You can also encode the triangle ID by using a geometry shader (gl_PrimitiveID).
As I have understood it, using glTranslate/glRotate is recommended over gluLookAt. I am not going to question the reasons beyond the obvious HW vs. SW computation argument, but will just go with the wave. However, this is giving me some headaches, as I do not know exactly how to efficiently stop the camera from breaking through walls. I am only interested in point-plane intersections, not AABBs or anything else.
So, using glTranslate and glRotate means that the viewpoint stays still (at (0,0,0) for simplicity) while the world revolves around it. To me this means that in order to check for any intersection points, I now need to recompute the world's vertex coordinates (which was not needed with the gluLookAt approach) for every camera movement.
As there is no way to obtain the needed new coordinates from GPU land, they have to be calculated by hand in CPU land. For every camera movement... :(
It seems I need to retain the current rotations about each of the 3 axes, and the same for translations. There is no scaling used in my program. My questions:
1 - is the above reasoning flawed ? How ?
2 - if not, there has to be a way to avoid such recalculations.
The way I see it (and by looking at http://www.glprogramming.com/red/appendixf.html), it needs one matrix multiplication for translations and another for rotations (only rotation about the y axis is needed). However, having to compute so many additions/multiplications, and especially the sines/cosines, will certainly kill the FPS. There are going to be thousands or even tens of thousands of vertices to compute on. Every frame... all the maths...
After having computed the new coordinates of the world, things seem to be very easy: just see if any plane changed the sign of its 'd' (from the plane equation ax + by + cz + d = 0). If it did, use a lightweight cross-product approach to test whether the point is inside each 'moving' triangle of that plane.
Thanks
edit: I have found out about glGet and I think it is the way to go, but I do not know how to use it properly:
// Retrieve the current modelview matrix
//glPushMatrix();
glGetFloatv(GL_MODELVIEW_MATRIX, m_vt16CurrentMatrixVerts);
//glPopMatrix();
m_vt16CurrentMatrixVerts is a float[16] which gets filled with 0.f or 8.67453e-13 or something similar. Where am I screwing up?
gluLookAt is a very handy function with absolutely no performance penalty. There is no reason not to use it, and, above all, no "HW vs SW" consideration involved. As Mk12 stated, glRotatef is also done on the CPU. The GPU part is: gl_Position = ProjectionMatrix x ViewMatrix x ModelMatrix x VertexPosition.
"using glTranslates and glRotates means that the viewpoint stays still" -> same thing for gluLookAt
"at (0,0,0) for simplicity" -> not for simplicity, it's a fact. However, this (0,0,0) is in the Camera coordinate system. It makes sense : relatively to the camera, the camera is at the origin...
Now, if you want to prevent the camera from going through walls, the usual method is to trace a ray from the camera. I suspect this is what you're talking about ("to check for any intersection points"). But there is no need to do this in camera space; you can do it in world space. Here's a comparison:
Tracing rays in camera space: the ray always starts at (0,0,0) and goes towards (0,0,-1). The geometry must be transformed from model space to world space, and then to camera space, which is what annoys you.
Tracing rays in world space: the ray starts at the camera position (in world space) and goes towards (eyeCenter - eyePos).normalize(). The geometry only has to be transformed from model space to world space.
Note that there is no third option (tracing rays in model space) which would avoid transforming the geometry from model space to world space altogether. However, you have a pair of workarounds:
First, your game's world is probably static: the Model matrix is probably always the identity, so transforming its geometry from model to world space is equivalent to doing nothing at all.
Secondly, for all other objects, you can take the opposite approach. Instead of transforming the entire geometry in one direction, transform only the ray the other way around: take your Model matrix, invert it, and you've got a matrix which goes from world space to model space. Multiply your ray's origin and direction by this matrix: your ray is now in model space. Intersect the normal way. Done.
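A GLM sketch of that second workaround (eyePos/eyeCenter are the vectors you would pass to gluLookAt; note w = 1 for the origin, which is a point, and w = 0 for the direction, which is a vector):

glm::vec3 rayPos = eyePos;                              // camera position, world space
glm::vec3 rayDir = glm::normalize(eyeCenter - eyePos);  // viewing direction, world space

glm::mat4 worldToModel = glm::inverse(modelMatrix);     // this object's Model matrix, inverted
glm::vec3 posModel = glm::vec3(worldToModel * glm::vec4(rayPos, 1.0f)); // point: w = 1
glm::vec3 dirModel = glm::vec3(worldToModel * glm::vec4(rayDir, 0.0f)); // vector: w = 0
// intersect (posModel, dirModel) with the object's triangles the usual way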
Note that everything I've said is standard technique. No hacks or other weird stuff, just math :)