Could anyone please help me calculate vertex normals in OpenGL?
I am loading an OBJ file and adding Gouraud shading by calculating vertex normals myself, without using glNormal3f or the glLight functions.
I have already declared helper functions such as operators, CrossProduct, InnerProduct, and so on.
I understand that to get vertex normals, I first need to calculate each surface (face) normal with a cross product.
Since I am loading an OBJ file, I store the three vertex indices of each face in id1, id2, and id3.
I would be grateful if anyone could help me write the code or give me a guideline on how to start.
Thanks.
This is the drawing code:
for (int i = 0; i < cube.face.size(); i++) {
FACE cur_face = cube.face[i];
glColor3f(cube.vertex_color[cur_face.id1].x,cube.vertex_color[cur_face.id1].y,cube.vertex_color[cur_face.id1].z);
glVertex3f(cube.vertex[cur_face.id1].x,cube.vertex[cur_face.id1].y,cube.vertex[cur_face.id1].z);
glColor3f(cube.vertex_color[cur_face.id2].x,cube.vertex_color[cur_face.id2].y,cube.vertex_color[cur_face.id2].z);
glVertex3f(cube.vertex[cur_face.id2].x,cube.vertex[cur_face.id2].y,cube.vertex[cur_face.id2].z);
glColor3f(cube.vertex_color[cur_face.id3].x,cube.vertex_color[cur_face.id3].y,cube.vertex_color[cur_face.id3].z);
glVertex3f(cube.vertex[cur_face.id3].x,cube.vertex[cur_face.id3].y,cube.vertex[cur_face.id3].z);
}
This is the equation I use for the per-vertex color calculation:
VECTOR kd;
VECTOR ks;
kd = VECTOR(0.8, 0.8, 0.8); // diffuse reflectance
ks = VECTOR(1.0, 0.0, 0.0); // specular reflectance
for (int i = 0; i < cube.vertex.size(); i++)
{
VECTOR n = cube.vertex_normal[i];
VECTOR l = VECTOR(100, 100, 0) - cube.vertex[i]; // vector toward the light
VECTOR v = VECTOR(0, 0, 1) - cube.vertex[i]; // vector toward the viewer
float xl = n.InnerProduct(l) / n.Magnitude(); // length of l projected onto n
VECTOR x = (n * (1.0 / n.Magnitude())) * xl; // component of l along n
VECTOR r = x - (l - x); // reflection of l about n, i.e. 2x - l
VECTOR color = kd * (n.InnerProduct(l)) + ks * pow((v.InnerProduct(r)), 10);
cube.vertex_color[i] = color;
}
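For comparison, here is a minimal sketch of the same diffuse-plus-specular evaluation with all vectors normalized, which the standard Phong model expects. I am using glm purely for brevity (the VECTOR class above would work the same way), and the function and parameter names are my own:

#include <cmath>
#include <glm/glm.hpp>

// Sketch of per-vertex Phong shading with unit vectors; names are illustrative.
glm::vec3 shadeVertex(const glm::vec3& pos, const glm::vec3& normal,
                      const glm::vec3& lightPos, const glm::vec3& eyePos,
                      const glm::vec3& kd, const glm::vec3& ks, float shininess)
{
    glm::vec3 n = glm::normalize(normal);
    glm::vec3 l = glm::normalize(lightPos - pos); // unit vector toward the light
    glm::vec3 v = glm::normalize(eyePos - pos);   // unit vector toward the viewer
    glm::vec3 r = 2.0f * glm::dot(n, l) * n - l;  // reflection of l about n
    float diff = glm::max(glm::dot(n, l), 0.0f);
    float spec = std::pow(glm::max(glm::dot(v, r), 0.0f), shininess);
    return kd * diff + ks * spec;
}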
This answer is for a triangular mesh and can be extended to a polygonal mesh as well.
tempVertices stores the list of all vertices.
vertexIndices stores the faces (triangles) of the mesh as a flat vector of vertex indices.
std::vector<glm::vec3> v_normal;
// initialize vertex normals to 0
for (std::size_t i = 0; i < tempVertices.size(); i++)
{
v_normal.push_back(glm::vec3(0.0f, 0.0f, 0.0f));
}
// For each face calculate normals and append to the corresponding vertices of the face
for (unsigned int i = 0; i < vertexIndices.size(); i += 3)
{
// vertexIndices[i], vertexIndices[i+1], vertexIndices[i+2] index the three vertices of a triangle
// (OBJ indices are 1-based, hence the -1 below)
glm::vec3 A = tempVertices[vertexIndices[i] - 1];
glm::vec3 B = tempVertices[vertexIndices[i + 1] - 1];
glm::vec3 C = tempVertices[vertexIndices[i + 2] - 1];
glm::vec3 AB = B - A;
glm::vec3 AC = C - A;
glm::vec3 ABxAC = glm::cross(AB, AC);
v_normal[vertexIndices[i] - 1] += ABxAC;
v_normal[vertexIndices[i + 1] - 1] += ABxAC;
v_normal[vertexIndices[i + 2] - 1] += ABxAC;
}
Now normalize each v_normal and use it.
Note that the number of vertex normals is equal to the number of vertices of the mesh.
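A minimal normalization pass might look like this (the zero-length guard is my addition, for vertices that no face references):

for (std::size_t i = 0; i < v_normal.size(); i++)
{
    // Guard against zero-length sums before normalizing
    if (glm::length(v_normal[i]) > 0.0f)
        v_normal[i] = glm::normalize(v_normal[i]);
}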
This code works fine on my machine:
glm::vec3 computeFaceNormal(glm::vec3 p1, glm::vec3 p2, glm::vec3 p3) {
// Uses p2 as a new origin for p1,p3
auto a = p3 - p2;
auto b = p1 - p2;
// Compute the cross product a X b to get the face normal
return glm::normalize(glm::cross(a, b));
}
void Mesh::calculateNormals() {
this->normals = std::vector<glm::vec3>(this->vertices.size());
// For each face calculate normals and append it
// to the corresponding vertices of the face
for (unsigned int i = 0; i < this->indices.size(); i += 3) {
glm::vec3 A = this->vertices[this->indices[i]];
glm::vec3 B = this->vertices[this->indices[i + 1LL]];
glm::vec3 C = this->vertices[this->indices[i + 2LL]];
glm::vec3 normal = computeFaceNormal(A, B, C);
this->normals[this->indices[i]] += normal;
this->normals[this->indices[i + 1LL]] += normal;
this->normals[this->indices[i + 2LL]] += normal;
}
// Normalize each normal
for (unsigned int i = 0; i < this->normals.size(); i++)
this->normals[i] = glm::normalize(this->normals[i]);
}
It seems all you need to implement is a function that returns the average of N vectors. This is one way to do it:
struct Vector3f {
float x, y, z;
};
typedef struct Vector3f Vector3f;
Vector3f averageVector(const Vector3f *vectors, int count) {
Vector3f toReturn;
toReturn.x = .0f;
toReturn.y = .0f;
toReturn.z = .0f;
// guard against an empty input
if (count == 0) {
return toReturn;
}
// sum all the vectors
for (int i = 0; i < count; i++) {
Vector3f toAdd = vectors[i];
toReturn.x += toAdd.x;
toReturn.y += toAdd.y;
toReturn.z += toAdd.z;
}
// divide by the number of vectors
float scale = 1.0f / count;
toReturn.x *= scale;
toReturn.y *= scale;
toReturn.z *= scale;
return toReturn;
}
I am sure you can port that to your C++ class. The result should then be normalized, unless its length is zero.
Find all the surface normals for every vertex you have, then use averageVector and normalize the result to get the smooth normals you are looking for.
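For illustration, a hypothetical call site (the face normals here are made-up values):

#include <cmath>

// Hypothetical usage: average the face normals adjacent to one vertex, then normalize.
Vector3f faceNormals[3] = { {0.0f, 0.0f, 1.0f}, {0.0f, 0.6f, 0.8f}, {0.0f, -0.6f, 0.8f} };
Vector3f n = averageVector(faceNormals, 3);
float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; } // smooth vertex normal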
Still, as already mentioned, you should know that this is not appropriate for the edged parts of a shape. In those cases you should use the surface normals directly. You could probably handle most such cases by simply ignoring surface normals that differ too much from the others. Extremely edgy shapes like a cube, for instance, will be impossible with this procedure. For a cube corner, what you would get is:
{
1.0f, .0f, .0f,
.0f, 1.0f, .0f,
.0f, .0f, 1.0f
}
with the normalized average of {.58f, .58f, .58f}. The result would pretty much be an extremely low-resolution sphere rather than a cube.
I adopted the separating axis theorem (SAT) to implement OBB collision detection.
As shown below, SAT requires three elements:
1. The coordinates (x, y, z) of the midpoint
2. The length of each axis
3. The direction vector of each axis
// Initialized
SATOBB::SATOBB(glm::vec3 &pos, std::vector<glm::vec3> &dir, glm::vec3 &len)
{
i_Pos = pos;
i_Dir = dir;
i_Len = len;
m_Dir.push_back(glm::vec3(0,0,0)); // Yeah... I know this is strange code. Thanks to tkausel
}
// i_... is before change, m_... is after change
void SATOBB::update(
glm::mat4 &Rotate,
glm::mat4 &Trans,
glm::mat4 &Scale
)
{
glm::vec3 m_Pos = Trans * glm::vec4(i_Pos, 1.0f);
for (int i=0; i<i_Dir.size(); i++){
glm::vec3 m_Dir = Rotate * glm::vec4(i_Dir[i], 1.0f);
}
glm::vec3 m_Len = Scale * glm::vec4(i_Len, 1.0f);
}
I think the code for calculating "3." is wrong, so please let me know the correct calculation code.
For the calculation I wanted to use the mat4 functions, so vec3 is used for "1." and "2." (for reasons of expediency).
"3." was calculated using vec3.
Is it really enough to multiply the vector by the rotation matrix?
That is the problem.
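Not an authoritative answer, but a sketch of how the three elements might be transformed: positions with w = 1, direction vectors with w = 0 (so the translation part of a matrix cannot affect them), and the per-axis results stored instead of discarded:

// Sketch under the stated assumptions; m_Dir is filled per axis.
glm::vec3 m_Pos = glm::vec3(Trans * glm::vec4(i_Pos, 1.0f));
std::vector<glm::vec3> m_Dir(i_Dir.size());
for (std::size_t i = 0; i < i_Dir.size(); i++)
    m_Dir[i] = glm::normalize(glm::vec3(Rotate * glm::vec4(i_Dir[i], 0.0f))); // w = 0: rotation only
glm::vec3 m_Len = glm::vec3(Scale * glm::vec4(i_Len, 1.0f));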
I am trying to create my own quaternion class and I get weird results: either the cube I am trying to rotate flickers like crazy, or it gets warped.
This is my code:
void Quaternion::AddRotation(vec4 v)
{
Quaternion temp(v.x, v.y, v.z, v.w);
*this = temp * (*this);
}
mat4 Quaternion::GenerateMatrix(Quaternion &q)
{
q.Normalize();
//Row order
mat4 m( 1 - 2*q.y*q.y - 2*q.z*q.z, 2*q.x*q.y - 2*q.w*q.z, 2*q.x*q.z + 2*q.w*q.y, 0,
2*q.x*q.y + 2*q.w*q.z, 1 - 2*q.x*q.x - 2*q.z*q.z, 2*q.y*q.z + 2*q.w*q.x, 0,
2*q.x*q.z - 2*q.w*q.y, 2*q.y*q.z - 2*q.w*q.x, 1 - 2*q.x*q.x - 2*q.y*q.y, 0,
0, 0, 0, 1);
//Col order
// mat4 m( 1 - 2*q.y*q.y - 2*q.z*q.z,2*q.x*q.y + 2*q.w*q.z,2*q.x*q.z - 2*q.w*q.y,0,
// 2*q.x*q.y - 2*q.w*q.z,1 - 2*q.x*q.x - 2*q.z*q.z,2*q.y*q.z - 2*q.w*q.x,0,
// 2*q.x*q.z + 2*q.w*q.y,2*q.y*q.z + 2*q.w*q.x,1 - 2*q.x*q.x - 2*q.y*q.y,0,
// 0,0,0,1);
return m;
}
When I create the entity I give it a quaternion:
entity->Quat.AddRotation(vec4(1.0f, 1.0f, 0.0f, 45.f));
And each frame I try to rotate it additionally by a small amount:
for (int i = 0; i < Entities.size(); i++)
{
if (Entities[i] != NULL)
{
Entities[i]->Quat.AddRotation(vec4(0.5f, 0.2f, 1.0f, 0.000005f));
Entities[i]->DrawModel();
}
else
break;
}
And finally this is how I draw each cube:
void Entity::DrawModel()
{
glPushMatrix();
//Rotation
mat4 RotationMatrix;
RotationMatrix = this->Quat.GenerateMatrix(this->Quat);
//Position
mat4 TranslationMatrix = glm::translate(mat4(1.0f), this->Pos);
this->Trans = TranslationMatrix * RotationMatrix;
glMultMatrixf(value_ptr(this->Trans));
if (this->shape != NULL)
this->shape->DrawShape();
glPopMatrix();
}
EDIT: This is the tutorial I used to learn quaternions:
http://www.cprogramming.com/tutorial/3d/quaternions.html
Without studying your rotation matrix to the end, there are two possible bugs I can think of. The first is that your rotation matrix R is not orthogonal, i.e. the inverse of R is not equal to its transpose; this could cause warping of the object. The second place for a bug to hide is inside the multiplication of your quaternions.
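For reference, here is a sketch of a Hamilton product and an axis-angle conversion to compare against; the (x, y, z, w) member layout matches the question, but this is a standalone sketch, not the poster's class. Note that AddRotation above feeds the raw (axis, angle) components straight into the quaternion, so a conversion like fromAxisAngle below may be the missing step:

#include <cmath>

struct Quat { float x, y, z, w; }; // standalone sketch type

// Hamilton product a * b (order matters)
Quat mul(const Quat& a, const Quat& b)
{
    Quat r;
    r.w = a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z;
    r.x = a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y;
    r.y = a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x;
    r.z = a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w;
    return r;
}

// Axis-angle (unit axis, angle in radians) to quaternion
Quat fromAxisAngle(float ax, float ay, float az, float angle)
{
    float s = std::sin(angle * 0.5f);
    Quat q = { ax * s, ay * s, az * s, std::cos(angle * 0.5f) };
    return q;
}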
There's a mistake in the rotation matrix. Try exchanging the element (2,3) with element (3,2).
So I currently have a triangle mesh (made with bezier curves) that can be changed dynamically. The problem I am facing is figuring out which triangles to actually render based on where the camera is. The camera always looks towards the origin (0, 0, 0), so I am finding each triangle's normal and taking its dot product with my camera vector, then, based on the result, determining whether the triangle should be "visible" or not.
The following is the code I am using for the calculations:
void bezier_plane()
{
for (int i = 0; i < 20; i++) {
for (int j = 0; j < 20; j++) {
grid[i][j].x = 0;
grid[i][j].y = 0;
grid[i][j].z = 0;
}
}
//Creates the grid using bezier calculation
CalcBezier();
for (int i = 0; i < 19; i++) {
for (int j = 0; j < 19; j++) {
Vector p1, p2, p3, normal;
p1.x = grid[i+1][j+1].x - grid[i][j].x; p1.y = grid[i+1][j+1].y - grid[i][j].y; p1.z = grid[i+1][j+1].z - grid[i][j].z;
p2.x = grid[i+1][j].x - grid[i][j].x; p2.y = grid[i+1][j].y - grid[i][j].y; p2.z = grid[i+1][j].z - grid[i][j].z;
normal = CalcNormal(p2, p1);
double first = dotproduct(normal, Camera);
p3.x = grid[i][j+1].x - grid[i][j].x; p3.y = grid[i][j+1].y - grid[i][j].y; p3.z = grid[i][j+1].z - grid[i][j].z;
normal = CalcNormal(p1, p3);
double second = dotproduct(normal, Camera);
if (first < 0 && second < 0) {
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
glColor3f(0, 1, 0);
glBegin(GL_TRIANGLE_STRIP);
glVertex3f(grid[i][j].x, grid[i][j].y, grid[i][j].z);
glVertex3f(grid[i][j+1].x, grid[i][j+1].y, grid[i][j+1].z);
glVertex3f(grid[i+1][j].x, grid[i+1][j].y, grid[i+1][j].z);
glVertex3f(grid[i+1][j+1].x, grid[i+1][j+1].y, grid[i+1][j+1].z);
glEnd();
} else if (first < 0 && second > 0) {
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
glColor3f(0, 1, 0);
glBegin(GL_TRIANGLE_STRIP);
glVertex3f(grid[i][j].x, grid[i][j].y, grid[i][j].z);
glVertex3f(grid[i+1][j].x, grid[i+1][j].y, grid[i+1][j].z);
glVertex3f(grid[i+1][j+1].x, grid[i+1][j+1].y, grid[i+1][j+1].z);
glEnd();
} else if (first > 0 && second < 0) {
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
glColor3f(0, 1, 0);
glBegin(GL_TRIANGLE_STRIP);
glVertex3f(grid[i][j].x, grid[i][j].y, grid[i][j].z);
glVertex3f(grid[i][j+1].x, grid[i][j+1].y, grid[i][j+1].z);
glVertex3f(grid[i+1][j+1].x, grid[i+1][j+1].y, grid[i+1][j+1].z);
glEnd();
}
}
}
}
Here is CalcNormal:
Vector CalcNormal(Vector p1, Vector p2)
{
Vector normal;
normal.x = (p1.y * p2.z) - (p1.z * p2.y);
normal.y = (p1.z * p2.x) - (p1.x * p2.z);
normal.z = (p1.x * p2.y) - (p1.y * p2.x);
return normal;
}
double dotproduct(Vector normal, Vector Camera)
{
return (normal.x * Camera.x + normal.y * Camera.y + normal.z * Camera.z);
}
Right now, my code gives this result. The part circled in red should NOT be displayed (I believe those are the triangles at the back).
Your approach of testing the normals will still have visual artifacts, because triangles facing the camera could also be obscured. Imagine if that bulge were at the corner closest to the camera.
You will also have triangles that are partially visible and partially obscured.
A solution that would work at the pixel level (sketched below) would be:
1. glEnable(GL_DEPTH_TEST)
2. Draw the surface first with solid triangles instead of wireframe
3. Clear the frame buffer, but not the depth buffer
4. Now draw your entire scene; the depth buffer will prevent obscured pixels from being drawn
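A sketch of those steps in legacy OpenGL; drawSurface() is a hypothetical helper that issues your triangles:

glEnable(GL_DEPTH_TEST);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// Pass 1: fill the depth buffer with solid triangles
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
drawSurface();

// Pass 2: keep the depth, clear only the color, draw the wireframe on top
glClear(GL_COLOR_BUFFER_BIT);
glDepthFunc(GL_LEQUAL); // let lines at equal depth pass the test
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
drawSurface();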
"Normal is a global variable" - could it be that that is already your problem? This looks like the worst application of global data I can think of! Instead, calling this thing crossproduct and returning a vector sounds like a good idea, no? Also, the dotproduct should take two vectors as parameter.
That said, your approach is sound. If you always have the same direction for the corners of triangles, the cross product of two sides will give you the normal. Further, if the angle between the normal and the view is less than 90 degrees, it looks away from the view and should be made invisible. Therefore the problem must be in your implementation, and using global state that could be stored in CPU registers anyway is the first thing you should fix.
Edit: You could use operator overloading to the reader's advantage here:
class Vector
{
public:
Vector() {}
Vector(float x0, float y0, float z0): x(x0), y(y0), z(z0) {}
float x, y, z;
};
Vector operator-(Vector const& v1, Vector const& v2)
{
return Vector(v1.x - v2.x, v1.y - v2.y, v1.z - v2.z);
}
Then, start the loop body like this:
Vector const point1 = grid[i][j];
Vector const point2 = grid[i + 1][j];
Vector const point3 = grid[i][j + 1];
Vector const point4 = grid[i + 1][j + 1];
These will easily be optimized out by the compiler, while they ease debugging and improve readability. Also note that they are constant, which makes the compiler verify that you don't change them accidentally. Then, you compute the two normals of the two triangles:
Vector const norm1 = crossproduct(point2 - point1, point3 - point1);
Vector const norm2 = crossproduct(point4 - point2, point4 - point3);
Then, you can check the dotproduct for visibility:
bool const visible1 = dotproduct(norm1, Camera) > 0;
bool const visible2 = dotproduct(norm2, Camera) > 0;
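For completeness, the two free functions used above could look like this (a sketch consistent with the Vector class sketched earlier):

Vector crossproduct(Vector const& a, Vector const& b)
{
    return Vector(a.y * b.z - a.z * b.y,
                  a.z * b.x - a.x * b.z,
                  a.x * b.y - a.y * b.x);
}

float dotproduct(Vector const& a, Vector const& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}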
Lastly, you could overload glVertex3f() to take a Vector, but I'd stay away from overloading other libraries' functions.
I'm trying to find the mouse position in world coordinates but am having trouble finding the right code. At the moment I use this to determine the ray:
float pointX, pointY;
D3DXMATRIX projectionMatrix, viewMatrix, inverseViewMatrix, worldMatrix, translateMatrix, inverseWorldMatrix;
D3DXVECTOR3 direction, origin, rayOrigin, rayDirection;
bool intersect, result;
// Move the mouse cursor coordinates into the -1 to +1 range.
pointX = ((2.0f * (float)mouseX) / (float)m_screenWidth) - 1.0f;
pointY = (((2.0f * (float)mouseY) / (float)m_screenHeight) - 1.0f) * -1.0f;
// Adjust the points using the projection matrix to account for the aspect ratio of the viewport.
m_Direct3D->GetProjectionMatrix(projectionMatrix);
pointX = pointX / projectionMatrix._11;
pointY = pointY / projectionMatrix._22;
// Get the inverse of the view matrix.
m_Camera->GetViewMatrix(viewMatrix);
D3DXMatrixInverse(&inverseViewMatrix, NULL, &viewMatrix);
// Calculate the direction of the picking ray in view space.
direction.x = (pointX * inverseViewMatrix._11) + (pointY * inverseViewMatrix._21) + inverseViewMatrix._31;
direction.y = (pointX * inverseViewMatrix._12) + (pointY * inverseViewMatrix._22) + inverseViewMatrix._32;
direction.z = (pointX * inverseViewMatrix._13) + (pointY * inverseViewMatrix._23) + inverseViewMatrix._33;
// Get the origin of the picking ray which is the position of the camera.
origin = m_Camera->GetPosition();
This gives me the origin and direction of the ray.
But...
I use a custom mesh (not the one from DirectX) with a heightmap, separated into quadtrees, and I don't know if my logic is correct. I tried using the frustum to determine which nodes of the quadtree are visible, and to do the triangle intersection checks only on those nodes. Here is that code:
Note: m_mousepos is a vector.
bool QuadTreeClass::getTriangleRay(NodeType* node, FrustumClass* frustum, ID3D10Device* device, D3DXVECTOR3 vPickRayDir, D3DXVECTOR3 vPickRayOrig){
bool result;
int count, i, j, indexCount;
unsigned int stride, offset;
float fBary1, fBary2;
float fDist;
D3DXVECTOR3 v0, v1, v2;
float p1, p2, p3;
// Check to see if the node can be viewed.
result = frustum->CheckCube(node->positionX, 0.0f, node->positionZ, (node->width / 2.0f));
if(!result)
{
return false;
}
// If it can be seen then check all four child nodes to see if they can also be seen.
count = 0;
for(i=0; i<4; i++)
{
if(node->nodes[i] != 0)
{
count++;
getTriangleRay(node->nodes[i], frustum, device, vPickRayDir, vPickRayOrig);
}
}
// If there were any child nodes then don't continue
if(count != 0)
{
return false;
}
// Now intersect each triangle in this node
j = 0;
for(i=0; i<node->triangleCount; i++){
j = i * 3;
v0 = D3DXVECTOR3( node->vertexArray[j].x, node->vertexArray[j].y, node->vertexArray[j].z);
j++;
v1 = D3DXVECTOR3( node->vertexArray[j].x, node->vertexArray[j].y, node->vertexArray[j].z);
j++;
v2 = D3DXVECTOR3( node->vertexArray[j].x, node->vertexArray[j].y, node->vertexArray[j].z);
result = IntersectTriangle( vPickRayOrig, vPickRayDir, v0, v1, v2, &fDist, &fBary1, &fBary2);
if(result == true){
// intersection == true, so get an approximate center of the triangle in the world
p1 = (v0.x + v1.x + v2.x)/3;
p2 = (v0.y + v1.y + v2.y)/3;
p3 = (v0.z + v1.z + v2.z)/3;
m_mousepos = D3DXVECTOR3(p1, p2, p3);
return true;
}
}
return false;
}
bool QuadTreeClass::IntersectTriangle( const D3DXVECTOR3& orig, const D3DXVECTOR3& dir,D3DXVECTOR3& v0, D3DXVECTOR3& v1, D3DXVECTOR3& v2, FLOAT* t, FLOAT* u, FLOAT* v ){
// Find vectors for two edges sharing vert0
D3DXVECTOR3 edge1 = v1 - v0;
D3DXVECTOR3 edge2 = v2 - v0;
// Begin calculating determinant - also used to calculate U parameter
D3DXVECTOR3 pvec;
D3DXVec3Cross( &pvec, &dir, &edge2 );
// If determinant is near zero, ray lies in plane of triangle
FLOAT det = D3DXVec3Dot( &edge1, &pvec );
D3DXVECTOR3 tvec;
if( det > 0 )
{
tvec = orig - v0;
}
else
{
tvec = v0 - orig;
det = -det;
}
if( det < 0.0001f )
return FALSE;
// Calculate U parameter and test bounds
*u = D3DXVec3Dot( &tvec, &pvec );
if( *u < 0.0f || *u > det )
return FALSE;
// Prepare to test V parameter
D3DXVECTOR3 qvec;
D3DXVec3Cross( &qvec, &tvec, &edge1 );
// Calculate V parameter and test bounds
*v = D3DXVec3Dot( &dir, &qvec );
if( *v < 0.0f || *u + *v > det )
return FALSE;
// Calculate t, scale parameters, ray intersects triangle
*t = D3DXVec3Dot( &edge2, &qvec );
FLOAT fInvDet = 1.0f / det;
*t *= fInvDet;
*u *= fInvDet;
*v *= fInvDet;
return TRUE;
}
Is this code right? If it is, then my problem must be related to the quadtree.
Thanks!
Iterating over all visible triangles to find the intersection is very expensive, and the cost will rise as your heightmap gets finer.
For my heightmap I use a different approach:
I do a step-by-step search along the click ray, starting at its origin. At every step the current position is moved along the ray and tested against the height of the heightmap (so you need a height function). If the current position is below the heightmap, the last interval is searched again with an additional iteration to find a finer position. This works as long as your heightmap does not have too high a frequency in its height values relative to the step size (otherwise you could jump over a peak).
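A sketch of that search, using glm vectors for brevity; getHeight(x, z) is the assumed height function and the step parameters are made up:

#include <glm/glm.hpp>

// March along the ray until we drop below the heightmap, then bisect the last interval.
bool rayHeightmapIntersect(const glm::vec3& origin, const glm::vec3& dir,
                           float maxDist, float step, glm::vec3& hit)
{
    glm::vec3 prev = origin;
    for (float t = step; t <= maxDist; t += step) {
        glm::vec3 cur = origin + dir * t;
        if (cur.y < getHeight(cur.x, cur.z)) { // went below the surface
            for (int i = 0; i < 16; i++) {     // refine within [prev, cur]
                glm::vec3 mid = (prev + cur) * 0.5f;
                if (mid.y < getHeight(mid.x, mid.z)) cur = mid; else prev = mid;
            }
            hit = cur;
            return true;
        }
        prev = cur;
    }
    return false;
}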
I'm new to C++ 3D programming, so I may just be missing something obvious, but how do I convert from 3D to 2D and (for a given z location) from 2D to 3D?
You map 3D to 2D via projection. You map 2D to 3D by inserting the appropriate value in the Z element of the vector.
It is a matter of casting a ray from the screen onto a plane which is parallel to x-y and is at the required z location. You then need to find out where on the plane the ray is colliding.
Here's one example, considering that screen_x and screen_y range over [0, 1], where 0 is the left-most or top-most coordinate and 1 is the right-most or bottom-most, respectively:
Vector3 point_of_contact(-1.0f, -1.0f, -1.0f);
Matrix4 view_matrix = camera->getViewMatrix();
Matrix4 proj_matrix = camera->getProjectionMatrix();
Matrix4 inv_view_proj_matrix = (proj_matrix * view_matrix).inverse();
float nx = (2.0f * screen_x) - 1.0f;
float ny = 1.0f - (2.0f * screen_y);
Vector3 near_point(nx, ny, -1.0f);
Vector3 mid_point(nx, ny, 0.0f);
// Get ray origin and ray target on near plane in world space
Vector3 ray_origin, ray_target;
ray_origin = inv_view_proj_matrix * near_point;
ray_target = inv_view_proj_matrix * mid_point;
Vector3 ray_direction = ray_target - ray_origin;
ray_direction.normalise();
// Check for collision with the plane z = z_pos
Vector3 plane_normal(0.0f, 0.0f, 1.0f);
float denom = plane_normal.dotProduct(ray_direction);
if (fabs(denom) >= std::numeric_limits<float>::epsilon())
{
float num = plane_normal.dotProduct(ray_origin) - z_pos; // signed distance of the ray origin from the plane
float distance = -(num / denom);
if (distance > 0)
{
point_of_contact = ray_origin + (ray_direction * distance);
}
}
return point_of_contact;
Disclaimer Notice: This solution was taken from bits and pieces of Ogre3D graphics library.
The simplest way is to do a divide by z. Therefore:
screenX = projectionX / projectionZ;
screenY = projectionY / projectionZ;
That does perspective projection based on distance. The thing is, it is often better to use homogeneous coordinates, as this simplifies matrix transformations (everything becomes a multiply), and it is what D3D and OpenGL use. Understanding how to use non-homogeneous coordinates (i.e. an (x, y, z) coordinate triple) will be very helpful for things like shader optimisations, however.
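As a minimal illustration of the homogeneous route (a sketch using glm; proj, view, worldPos, and the viewport sizes are assumed names):

#include <glm/glm.hpp>

// World position -> clip space -> NDC -> pixel coordinates
glm::vec4 clip = proj * view * glm::vec4(worldPos, 1.0f);
glm::vec3 ndc = glm::vec3(clip) / clip.w; // the divide-by-w step
float screenX = (ndc.x * 0.5f + 0.5f) * viewportWidth;
float screenY = (1.0f - (ndc.y * 0.5f + 0.5f)) * viewportHeight; // y flipped for screen space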
One lame solution:
^ y
|
|
| /z
| /
+/--------->x
Angle is the angle between the Ox and Oz axes (π/4 in the code below):
#include <cmath>
typedef struct {
double x, y, z;
} Point3D;
typedef struct {
double x, y;
} Point2D;
const double angle = M_PI/4; // can be changed
Point2D projection(const Point3D& point) {
Point2D p;
p.x = point.x + point.z * sin(angle);
p.y = point.y + point.z * cos(angle);
return p;
}
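For example, projecting the corner (1, 1, 1) with the function above:

Point3D corner = {1.0, 1.0, 1.0};
Point2D p = projection(corner);
// p.x == 1 + sin(M_PI/4) ≈ 1.707, p.y == 1 + cos(M_PI/4) ≈ 1.707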
However there are lots of tutorials on this on the net... Have you googled for it?