I'm trying to find the mouse position in world coordinates but am having trouble finding the right code. At the moment I use this to determine the ray:
float pointX, pointY;
D3DXMATRIX projectionMatrix, viewMatrix, inverseViewMatrix, worldMatrix, translateMatrix, inverseWorldMatrix;
D3DXVECTOR3 direction, origin, rayOrigin, rayDirection;
bool intersect, result;
// Move the mouse cursor coordinates into the -1 to +1 range.
pointX = ((2.0f * (float)mouseX) / (float)m_screenWidth) - 1.0f;
pointY = (((2.0f * (float)mouseY) / (float)m_screenHeight) - 1.0f) * -1.0f;
// Adjust the points using the projection matrix to account for the field of view and aspect ratio of the viewport.
m_Direct3D->GetProjectionMatrix(projectionMatrix);
pointX = pointX / projectionMatrix._11;
pointY = pointY / projectionMatrix._22;
// Get the inverse of the view matrix.
m_Camera->GetViewMatrix(viewMatrix);
D3DXMatrixInverse(&inverseViewMatrix, NULL, &viewMatrix);
// Calculate the direction of the picking ray in world space.
direction.x = (pointX * inverseViewMatrix._11) + (pointY * inverseViewMatrix._21) + inverseViewMatrix._31;
direction.y = (pointX * inverseViewMatrix._12) + (pointY * inverseViewMatrix._22) + inverseViewMatrix._32;
direction.z = (pointX * inverseViewMatrix._13) + (pointY * inverseViewMatrix._23) + inverseViewMatrix._33;
// Get the origin of the picking ray which is the position of the camera.
origin = m_Camera->GetPosition();
This gives me the origin and direction of the ray.
But...
I use a custom mesh (not the one from DirectX) with a heightmap, split into a quadtree, and I don't know if my logic is correct. I tried using the frustum to determine which nodes of the quadtree are visible and to check the triangle intersections only on those nodes. Here is that code:
Note: m_mousepos is a vector (D3DXVECTOR3).
bool QuadTreeClass::getTriangleRay(NodeType* node, FrustumClass* frustum, ID3D10Device* device, D3DXVECTOR3 vPickRayDir, D3DXVECTOR3 vPickRayOrig){
bool result;
int count, i, j, indexCount;
unsigned int stride, offset;
float fBary1, fBary2;
float fDist;
D3DXVECTOR3 v0, v1, v2;
float p1, p2, p3;
// Check to see if the node can be viewed.
result = frustum->CheckCube(node->positionX, 0.0f, node->positionZ, (node->width / 2.0f));
if(!result)
{
return false;
}
// If it can be seen then check all four child nodes to see if they can also be seen.
count = 0;
for(i=0; i<4; i++)
{
if(node->nodes[i] != 0)
{
count++;
// Recurse into the child node; the argument order must match the
// declaration (direction first, then origin).
if(getTriangleRay(node->nodes[i], frustum, device, vPickRayDir, vPickRayOrig))
{
return true;
}
}
}
// If there were any child nodes, the triangles live in the children, so don't continue here.
if(count != 0)
{
return false;
}
// Now intersect each triangle in this node
j = 0;
for(i=0; i<node->triangleCount; i++){
j = i * 3;
v0 = D3DXVECTOR3( node->vertexArray[j].x, node->vertexArray[j].y, node->vertexArray[j].z);
j++;
v1 = D3DXVECTOR3( node->vertexArray[j].x, node->vertexArray[j].y, node->vertexArray[j].z);
j++;
v2 = D3DXVECTOR3( node->vertexArray[j].x, node->vertexArray[j].y, node->vertexArray[j].z);
result = IntersectTriangle( vPickRayOrig, vPickRayDir, v0, v1, v2, &fDist, &fBary1, &fBary2);
if(result){
// Intersection found, so compute an approximate center of the triangle in the world
p1 = (v0.x + v1.x + v2.x)/3;
p2 = (v0.y + v1.y + v2.y)/3;
p3 = (v0.z + v1.z + v2.z)/3;
m_mousepos = D3DXVECTOR3(p1, p2, p3);
return true;
}
}
// No children and no triangle was hit in this node.
return false;
}
bool QuadTreeClass::IntersectTriangle( const D3DXVECTOR3& orig, const D3DXVECTOR3& dir,D3DXVECTOR3& v0, D3DXVECTOR3& v1, D3DXVECTOR3& v2, FLOAT* t, FLOAT* u, FLOAT* v ){
// Find vectors for two edges sharing vert0
D3DXVECTOR3 edge1 = v1 - v0;
D3DXVECTOR3 edge2 = v2 - v0;
// Begin calculating determinant - also used to calculate U parameter
D3DXVECTOR3 pvec;
D3DXVec3Cross( &pvec, &dir, &edge2 );
// If determinant is near zero, ray lies in plane of triangle
FLOAT det = D3DXVec3Dot( &edge1, &pvec );
D3DXVECTOR3 tvec;
if( det > 0 )
{
tvec = orig - v0;
}
else
{
tvec = v0 - orig;
det = -det;
}
if( det < 0.0001f )
return FALSE;
// Calculate U parameter and test bounds
*u = D3DXVec3Dot( &tvec, &pvec );
if( *u < 0.0f || *u > det )
return FALSE;
// Prepare to test V parameter
D3DXVECTOR3 qvec;
D3DXVec3Cross( &qvec, &tvec, &edge1 );
// Calculate V parameter and test bounds
*v = D3DXVec3Dot( &dir, &qvec );
if( *v < 0.0f || *u + *v > det )
return FALSE;
// Calculate t, scale parameters, ray intersects triangle
*t = D3DXVec3Dot( &edge2, &qvec );
FLOAT fInvDet = 1.0f / det;
*t *= fInvDet;
*u *= fInvDet;
*v *= fInvDet;
return TRUE;
}
Is this code right? If it is, then my problem must be related to the quadtree.
Thanks!
Iterating over all visible triangles to find the intersection is very expensive, and the cost rises further as your heightmap gets finer.
For my heightmap I use a different approach:
I do a step-by-step search along the click ray, starting at its origin. At every step the current position is moved along the ray and tested against the height of the heightmap (for this you need a height function). If the current position falls below the heightmap, the last interval is searched again with an additional, finer iteration to refine the hit position. This works as long as the height values of your heightmap don't vary too quickly relative to the step size (otherwise you could jump over a peak).
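A minimal sketch of that search, assuming a getHeight(x, z) function for the heightmap (the names and step counts here are hypothetical):
// March along the ray in fixed steps; when the ray dips below the
// terrain, bisect the last interval to refine the hit position.
// Assumes getHeight(x, z) returns the terrain height at (x, z).
bool RayHeightmapIntersect(D3DXVECTOR3 origin, D3DXVECTOR3 dir,
                           float maxDistance, float stepSize,
                           D3DXVECTOR3& hit)
{
    D3DXVec3Normalize(&dir, &dir);
    D3DXVECTOR3 prev = origin;
    for(float t = stepSize; t <= maxDistance; t += stepSize)
    {
        D3DXVECTOR3 pos = origin + dir * t;
        if(pos.y < getHeight(pos.x, pos.z))
        {
            // The ray crossed the surface between prev and pos:
            // bisect the interval a few times for a finer position.
            for(int i = 0; i < 16; i++)
            {
                D3DXVECTOR3 mid = (prev + pos) * 0.5f;
                if(mid.y < getHeight(mid.x, mid.z))
                    pos = mid;
                else
                    prev = mid;
            }
            hit = (prev + pos) * 0.5f;
            return true;
        }
        prev = pos;
    }
    return false; // no hit within maxDistance
}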
Related
I need to implement an arcball camera. I have something similar, but it works very crookedly: the angle changes sharply, and when turning right/left the camera pitches up/down strongly.
Here is my source code; can you tell me where I went wrong?
bool get_arcball_vec(double x, double y, glm::vec3& a)
{
glm::vec3 vec = glm::vec3((2.0 * x) / window.getWidth() - 1.0, 1.0 - (2.0 * y) / window.getHeight(), 0.0);
if (glm::length(vec) >= 1.0)
{
vec = glm::normalize(vec);
}
else
{
vec.z = sqrt(1.0 - pow(vec.x, 2.0) - pow(vec.y, 2.0));
}
a = vec;
return true;
}
...
void onMouseMove(double x, double y) {
if (rightMouseButtonPressed) {
glm::vec3 a,b;
cur_mx = x;
cur_my = y;
if (cur_mx != last_mx || cur_my != last_my)
if (get_arcball_vec(last_mx, last_my, a) && get_arcball_vec(cur_mx, cur_my, b))
viewport.getCamera().orbit(a,b);
last_mx = cur_mx;
last_my = cur_my;
...
void Camera::orbit(glm::vec3 a, glm::vec3 b)
{
forward = calcForward();
right = calcRight();
double alpha = acos(glm::min(1.0f, glm::dot(b, a)));
glm::vec3 axis = glm::cross(a, b);
glm::mat4 rotationComponent = glm::mat4(1.0f);
rotationComponent[0] = glm::vec4(right, 0.0f);
rotationComponent[1] = glm::vec4(up, 0.0f);
rotationComponent[2] = glm::vec4(forward, 0.0f);
glm::mat4 toWorldCameraSpace = glm::transpose(rotationComponent);
axis = toWorldCameraSpace * glm::vec4(axis, 1.0);
glm::mat4 orbitMatrix = glm::rotate(glm::mat4(1.0f), (float)alpha, axis);
eye = glm::vec4(target, 1.0) + orbitMatrix * glm::vec4(eye - target, 1.0f);
up = orbitMatrix * glm::vec4(up, 1.0f);
}
I use this code to map 2D mouse position to the sphere:
Vector3 GetArcBallVector(const Vector2f & mousePos) {
float radiusSquared = 1.0; //squared radius of the sphere
//compute mouse position from the centre of screen to interval [-half, +half]
Vector3 pt = Vector3(
mousePos.x - halfScreenW,
halfScreenH - mousePos.y,
0.0f
);
//if length squared is smaller than sphere diameter
//point is inside
float lengthSqr = pt.x * pt.x + pt.y * pt.y;
if (lengthSqr < radiusSquared){
//inside
pt.z = std::sqrtf(radiusSquared - lengthSqr);
}
else {
pt.z = 0.0f;
}
pt.z *= -1;
return pt;
}
To calculate rotation, I use the last (startPt) and current (endPt) mapped position and do:
Quaternion actRot = Quaternion::Identity();
Vector3 axis = Vector3::Cross(endPt, startPt);
if (axis.LengthSquared() > MathUtils::EPSILON) {
float angleCos = Vector3::Dot(endPt, startPt);
actRot = Quaternion(axis.x, axis.y, axis.z, angleCos);
}
I prefer to use quaternions over matrices, since they are easy to multiply (for accumulated rotation) and to interpolate (for some smoothing).
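For example, with glm the accumulation and smoothing could look like this (just a sketch; accumulatedRot and the interpolation factor are hypothetical names/values):
#include <glm/gtc/quaternion.hpp>

// Running orientation, starts as the identity quaternion (w, x, y, z).
glm::quat accumulatedRot(1.0f, 0.0f, 0.0f, 0.0f);

// Accumulate one drag step: quaternion multiplication composes rotations.
void applyDrag(const glm::quat& actRot)
{
    accumulatedRot = glm::normalize(actRot * accumulatedRot);
}

// Smooth towards a target orientation: slerp interpolates on the sphere.
glm::quat smoothed(const glm::quat& current, const glm::quat& target)
{
    return glm::slerp(current, target, 0.25f);
}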
Could anyone please help me with calculating vertex normals in OpenGL?
I am loading an obj file and adding Gouraud shading by calculating vertex normals, without using glNormal3f or the glLight functions.
I have declared functions like operators, cross product, inner product, etc.
I understand that in order to get the vertex normals, I first need to calculate the surface normal of each face (the normal vector) with the cross product. Since I am loading an obj file, I place the three vertex indices of each face in id1, id2, id3.
I would be grateful if anyone could help me write the code or give me a guideline on how to start. Thanks!
This is the drawing code:
for(size_t i = 0; i < cube.face.size(); i++)
{
FACE cur_face = cube.face[i];
glColor3f(cube.vertex_color[cur_face.id1].x,cube.vertex_color[cur_face.id1].y,cube.vertex_color[cur_face.id1].z);
glVertex3f(cube.vertex[cur_face.id1].x,cube.vertex[cur_face.id1].y,cube.vertex[cur_face.id1].z);
glColor3f(cube.vertex_color[cur_face.id2].x,cube.vertex_color[cur_face.id2].y,cube.vertex_color[cur_face.id2].z);
glVertex3f(cube.vertex[cur_face.id2].x,cube.vertex[cur_face.id2].y,cube.vertex[cur_face.id2].z);
glColor3f(cube.vertex_color[cur_face.id3].x,cube.vertex_color[cur_face.id3].y,cube.vertex_color[cur_face.id3].z);
glVertex3f(cube.vertex[cur_face.id3].x,cube.vertex[cur_face.id3].y,cube.vertex[cur_face.id3].z);
}
This is the code for the color calculation:
VECTOR kd;
VECTOR ks;
kd=VECTOR(0.8, 0.8, 0.8);
ks=VECTOR(1.0, 0.0, 0.0);
double inner = kd.InnerProduct(ks);
int i, j;
for(i=0;i<cube.vertex.size();i++)
{
VECTOR n = cube.vertex_normal[i];
VECTOR l = VECTOR(100,100,0) - cube.vertex[i];
VECTOR v = VECTOR(0,0,1) - cube.vertex[i];
float xl = n.InnerProduct(l)/n.Magnitude();
VECTOR x = (n * (1.0/ n.Magnitude())) * xl;
VECTOR r = x - (l-x);
VECTOR color = kd * (n.InnerProduct(l)) + ks * pow((v.InnerProduct(r)),10);
cube.vertex_color[i] = color;
}
This answer is for a triangular mesh and can be extended to a poly mesh as well.
tempVertices stores the list of all vertices.
vertexIndices stores the faces (triangles) of the mesh as a flat vector of 1-based indices.
std::vector<glm::vec3> v_normal;
// initialize vertex normals to 0
for (size_t i = 0; i != tempVertices.size(); i++)
{
v_normal.push_back(glm::vec3(0.0f, 0.0f, 0.0f));
}
// For each face calculate normals and append to the corresponding vertices of the face
for (unsigned int i = 0; i < vertexIndices.size(); i += 3)
{
// vertexIndices[i], vertexIndices[i+1], vertexIndices[i+2] are the three vertices of one triangle
glm::vec3 A = tempVertices[vertexIndices[i] - 1];
glm::vec3 B = tempVertices[vertexIndices[i + 1] - 1];
glm::vec3 C = tempVertices[vertexIndices[i + 2] - 1];
glm::vec3 AB = B - A;
glm::vec3 AC = C - A;
glm::vec3 ABxAC = glm::cross(AB, AC);
v_normal[vertexIndices[i] - 1] += ABxAC;
v_normal[vertexIndices[i + 1] - 1] += ABxAC;
v_normal[vertexIndices[i + 2] - 1] += ABxAC;
}
Now normalize each v_normal and use.
Note that the number of vertex normals is equal to the number of vertices of the mesh.
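The final normalization pass could look like this (a short sketch matching the names above):
// Each accumulated normal is the sum of the normals of all faces
// sharing that vertex; normalizing yields the average direction.
for (size_t i = 0; i < v_normal.size(); i++)
{
    v_normal[i] = glm::normalize(v_normal[i]);
}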
This code works fine on my machine
glm::vec3 computeFaceNormal(glm::vec3 p1, glm::vec3 p2, glm::vec3 p3) {
// Uses p2 as a new origin for p1,p3
auto a = p3 - p2;
auto b = p1 - p2;
// Compute the cross product a X b to get the face normal
return glm::normalize(glm::cross(a, b));
}
void Mesh::calculateNormals() {
this->normals = std::vector<glm::vec3>(this->vertices.size());
// For each face calculate normals and append it
// to the corresponding vertices of the face
for (unsigned int i = 0; i < this->indices.size(); i += 3) {
glm::vec3 A = this->vertices[this->indices[i]];
glm::vec3 B = this->vertices[this->indices[i + 1LL]];
glm::vec3 C = this->vertices[this->indices[i + 2LL]];
glm::vec3 normal = computeFaceNormal(A, B, C);
this->normals[this->indices[i]] += normal;
this->normals[this->indices[i + 1LL]] += normal;
this->normals[this->indices[i + 2LL]] += normal;
}
// Normalize each normal
for (unsigned int i = 0; i < this->normals.size(); i++)
this->normals[i] = glm::normalize(this->normals[i]);
}
It seems all you need to implement is a function to get the average of N vectors. This is one way to do it:
struct Vector3f {
float x, y, z;
};
typedef struct Vector3f Vector3f;
Vector3f averageVector(Vector3f *vectors, int count) {
Vector3f toReturn;
toReturn.x = .0f;
toReturn.y = .0f;
toReturn.z = .0f;
// sum all the vectors
for(int i=0; i<count; i++) {
Vector3f toAdd = vectors[i];
toReturn.x += toAdd.x;
toReturn.y += toAdd.y;
toReturn.z += toAdd.z;
}
// divide with number of vectors
// TODO: check (count == 0)
float scale = 1.0f/count;
toReturn.x *= scale;
toReturn.y *= scale;
toReturn.z *= scale;
return toReturn;
}
I am sure you can port that to your C++ class. The result should then be normalized, unless its length is zero.
Find all surface normals for every vertex you have. Then use averageVector and normalize the result to get the smooth normals you are looking for.
Still, as already mentioned, you should know that this is not appropriate for the edged parts of a shape. In those cases you should use the surface vectors directly. You can probably handle most such cases by simply ignoring any surface normals that are too different from the others. Extremely edgy shapes like a cube, for instance, are impossible with this procedure. For the three faces meeting at a cube corner, for instance, you would get the normals:
{
1.0f, .0f, .0f,
.0f, 1.0f, .0f,
.0f, .0f, 1.0f
}
The normalized average is {.58f, .58f, .58f}. The result would pretty much be an extremely low-resolution sphere rather than a cube.
I have a square area on which I have to determine where the mouse is pointing.
With D3DXIntersectTri I can tell IF the mouse is pointing at it, but I have trouble calculating the x, y, z coordinates of the hit.
I draw from a vertex buffer, which is initialized with this vertex array:
vertices[0].position = D3DXVECTOR3(-10, 0, -10);
vertices[1].position = D3DXVECTOR3(-10, 0, 10);
vertices[2].position = D3DXVECTOR3( 10, 0, -10);
vertices[3].position = D3DXVECTOR3( 10, 0, -10);
vertices[4].position = D3DXVECTOR3(-10, 0, 10);
vertices[5].position = D3DXVECTOR3( 10, 0, 10);
I have this method so far; it is not giving me the right coordinates (it works only on a small part of the area, near two of the edges, and is less accurate inside):
BOOL Area::getcoord( Ray& ray, D3DXVECTOR3& coord)
{
D3DXVECTOR3 rayOrigin, rayDirection;
rayDirection = ray.direction;
rayOrigin = ray.origin;
float d;
D3DXMATRIX matInverse;
D3DXMatrixInverse(&matInverse, NULL, &matWorld);
// Transform ray origin and direction by inv matrix
D3DXVECTOR3 rayObjOrigin,rayObjDirection;
D3DXVec3TransformCoord(&rayOrigin, &rayOrigin, &matInverse);
D3DXVec3TransformNormal(&rayDirection, &rayDirection, &matInverse);
D3DXVec3Normalize(&rayDirection,&rayDirection);
float u, v;
BOOL isHit1, isHit2;
D3DXVECTOR3 p1, p2, p3;
p1 = vertices[3].position;
p2 = vertices[4].position;
p3 = vertices[5].position;
isHit1 = D3DXIntersectTri(&p1, &p2, &p3, &rayOrigin, &rayDirection, &u, &v, &d);
isHit2 = FALSE;
if(!isHit1)
{
p1 = vertices[0].position;
p2 = vertices[1].position;
p3 = vertices[2].position;
isHit2 = D3DXIntersectTri(&p1, &p2, &p3, &rayOrigin, &rayDirection, &u, &v, &d);
}
if(isHit1)
{
coord.x = 1 * ((1-u-v)*p3.x + u*p3.y + v*p3.z);
coord.y = 0.2f;
coord.z = -1 * ((1-u-v)*p1.x + u*p1.y + v*p1.z);
D3DXVec3TransformCoord(&coord, &coord, &matInverse);
}
if(isHit2)
{
coord.x = -1 * ((1-u-v)*p3.x + u*p3.y + v*p3.z);
coord.y = 0.2f;
coord.z = 1 * ((1-u-v)*p1.x + u*p1.y + v*p1.z);
D3DXVec3TransformCoord(&coord, &coord, &matWorld);
}
return isHit1 || isHit2;
}
Barycentric coordinates don't work the way you used them. u and v are the weights of the triangle's vertices, so if you want to calculate the hit point, you have to compute
coord = (1 - u - v) * p1 + u * p2 + v * p3
Alternatively you can use the d ray parameter:
coord = rayOrigin + d * rayDirection
Both ways should result in the same coordinate.
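Applied to your getcoord, the hit-point computation could look like this (a sketch; p1, p2, p3 must hold the vertices of whichever triangle was actually hit):
if(isHit1 || isHit2)
{
    // Weight the triangle's vertices by the barycentric coordinates.
    coord = (1.0f - u - v) * p1 + u * p2 + v * p3;
    // Equivalently: coord = rayOrigin + d * rayDirection;
    // Transform the object-space hit point back into world space.
    D3DXVec3TransformCoord(&coord, &coord, &matWorld);
}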
Using MonoTouch and OpenTK I am trying to get the screen coordinate of one 3D point. I have my world-view-projection matrix set up, and OpenGL makes sense of it and projects my 3D model perfectly, but how do I use the same matrix to project just one point from 3D to 2D?
I thought I could simply use:
Vector3.Transform(ref input3Dpos, ref matWorldViewProjection, out projected2Dpos);
Then I'd have the projected screen coordinate in projected2Dpos. But the resulting Vector4 does not seem to represent the proper projected screen coordinate, and I do not know how to calculate it from there.
I found I need to divide by Vector4.W; however, I am still getting wrong values. I am using this method now:
private static bool GluProject(OpenTK.Vector3 objPos, OpenTK.Matrix4 matWorldViewProjection, int[] viewport, out OpenTK.Vector3 screenPos)
{
OpenTK.Vector4 _in;
_in.X = objPos.X;
_in.Y = objPos.Y;
_in.Z = objPos.Z;
_in.W = 1f;
Vector4 _out = OpenTK.Vector4.Transform(_in, matWorldViewProjection);
if (_out.W <= 0.0)
{
screenPos = OpenTK.Vector3.Zero;
return false;
}
_out.X /= _out.W;
_out.Y /= _out.W;
_out.Z /= _out.W;
/* Map x, y and z to range 0-1 */
_out.X = _out.X * 0.5f + 0.5f;
_out.Y = -_out.Y * 0.5f + 0.5f;
_out.Z = _out.Z * 0.5f + 0.5f;
/* Map x,y to viewport */
_out.X = _out.X * viewport[2] + viewport[0];
_out.Y = _out.Y * viewport[3] + viewport[1];
screenPos.X = _out.X;
screenPos.Y = _out.Y;
screenPos.Z = _out.Z;
return true;
}
I cannot see any errors though... :S
In your first attempt you're missing the last step: mapping from NDC (Normalized Device Coordinates) to viewport coordinates. That's what the lines
/* Map x,y to viewport */
_out.X = _out.X * viewport[2] + viewport[0];
_out.Y = _out.Y * viewport[3] + viewport[1];
in your GluProject do.
You have two options. You can calculate it yourself, or use the Glu.Project function. I prefer the first.
Number 1:
private Vector2 Convert(
Vector3 pos,
Matrix4 viewMatrix,
Matrix4 projectionMatrix,
int screenWidth,
int screenHeight)
{
pos = Vector3.Transform(pos, viewMatrix);
pos = Vector3.Transform(pos, projectionMatrix);
pos.X /= pos.Z;
pos.Y /= pos.Z;
pos.X = (pos.X + 1) * screenWidth / 2;
pos.Y = (pos.Y + 1) * screenHeight / 2;
return new Vector2(pos.X, pos.Y);
}
Number 2:
public Vector2 form3Dto2D(Vector3 our3DPoint)
{
Vector3 our2DPoint;
float[] modelviewMatrix = new float[16];
float[] projectionMatrix = new float[16];
int[] viewport = new int[4];
GL.GetFloat(GetPName.ModelviewMatrix, modelviewMatrix);
GL.GetFloat(GetPName.ProjectionMatrix, projectionMatrix);
GL.GetInteger(GetPName.Viewport, viewport);
OpenTK.Graphics.Glu.Project(our3DPoint, convertFloatsToDoubles(modelviewMatrix),
convertFloatsToDoubles(projectionMatrix), viewport, out our2DPoint);
return new Vector2(our2DPoint.X, our2DPoint.Y);
}
public static double[] convertFloatsToDoubles(float[] input)
{
if (input == null)
{
return null; // Or throw an exception - your choice
}
double[] output = new double[input.Length];
for (int i = 0; i < input.Length; i++)
{
output[i] = input[i];
}
return output;
}
How do you go about creating a sphere with meshes in Direct-x? I'm using C++ and the program will be run on windows, only.
Everything is currently rendered through an IDirect3DDevice9 object.
You could use the D3DXCreateSphere function.
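For example (a minimal sketch; error handling mostly omitted, and device stands for your IDirect3DDevice9 pointer):
#include <d3dx9.h>

// Create a sphere mesh of radius 1.0 with 20 slices and 20 stacks.
LPD3DXMESH sphereMesh = NULL;
HRESULT hr = D3DXCreateSphere(device, 1.0f, 20, 20, &sphereMesh, NULL);
if(SUCCEEDED(hr))
{
    sphereMesh->DrawSubset(0); // draw it inside your render loop
}
// ... later, when you no longer need it:
// sphereMesh->Release();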
There are lots of ways to create a sphere.
One is to use polar coordinates to generate slices of the sphere.
struct Vertex
{
float x, y, z;
float nx, ny, nz;
};
Given that struct, you'd generate the sphere as follows (I haven't tested this, so I may have got it slightly wrong).
std::vector< Vertex > verts;
int count = 0;
while( count < numSlices )
{
const float phi = -0.5f * M_PI + M_PI * (float)count / (float)(numSlices - 1); // latitude: -pi/2 to +pi/2
int count2 = 0;
while( count2 < numSegments )
{
const float theta = 2.0f * M_PI * (float)count2 / (float)numSegments; // longitude: 0 to 2*pi
const float xzRadius = fabsf( sphereRadius * cosf( phi ) );
Vertex v;
v.x = xzRadius * cosf( theta );
v.y = sphereRadius * sinf( phi );
v.z = xzRadius * sinf( theta );
const float fRcpLen = 1.0f / sqrtf( (v.x * v.x) + (v.y * v.y) + (v.z * v.z) );
v.nx = v.x * fRcpLen;
v.ny = v.y * fRcpLen;
v.nz = v.z * fRcpLen;
verts.push_back( v );
count2++;
}
count++;
}
This is how D3DXCreateSphere does it, I believe. Of course the code above does not form the faces, but that's not a particularly complex bit of code if you set your mind to it :)
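Forming the faces could look something like this (a sketch, assuming the vertex order produced by the loops above, i.e. numSlices rows of numSegments vertices each):
// Two triangles per grid cell; the segment index wraps around the ring.
std::vector<unsigned int> indices;
for(int slice = 0; slice < numSlices - 1; slice++)
{
    for(int seg = 0; seg < numSegments; seg++)
    {
        const int nextSeg = (seg + 1) % numSegments;
        const unsigned int i0 = slice * numSegments + seg;
        const unsigned int i1 = slice * numSegments + nextSeg;
        const unsigned int i2 = (slice + 1) * numSegments + seg;
        const unsigned int i3 = (slice + 1) * numSegments + nextSeg;
        indices.push_back(i0); indices.push_back(i2); indices.push_back(i1);
        indices.push_back(i1); indices.push_back(i2); indices.push_back(i3);
    }
}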
The other, and in my opinion more interesting, way is through surface subdivision.
If you start with a cube whose normals are defined the same way as in the code above, you can recursively subdivide each side. Basically you find the center of the face, generate a vector from the cube's center to that new point, and normalise it. Then push the vertex out to the radius of the sphere as follows (assuming v.n* is the normalised normal):
v.x = v.nx * sphereRadius;
v.y = v.ny * sphereRadius;
v.z = v.nz * sphereRadius;
You then repeat this process for the mid point of each edge of the face you are subdividing.
Now you can split each face into 4 new quadrilateral faces. You can then subdivide each of those quads into 4 new quads and so on until you get to the refinement level you require.
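As a sketch, the normalise-and-push-out step for a newly created vertex could be wrapped up like this (reusing the Vertex struct from above; the cube is assumed to be centred on the origin):
// Project a newly created vertex (face centre or edge midpoint) onto
// the sphere: its normal is the normalised position, and the position
// is that normal scaled out to the sphere's radius.
void ProjectOntoSphere(Vertex& v, float sphereRadius)
{
    const float len = sqrtf(v.x * v.x + v.y * v.y + v.z * v.z);
    v.nx = v.x / len;
    v.ny = v.y / len;
    v.nz = v.z / len;
    v.x = v.nx * sphereRadius;
    v.y = v.ny * sphereRadius;
    v.z = v.nz * sphereRadius;
}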
Personally I find this process provides a nicer vertex distribution on the sphere than the first method.