OpenGL pModel translation

When I load an .obj file in GMax it is positioned in the center of the space (0,0,0).
How can I change this position? Is there any special function?
I don't want to use glTranslatef. Rather, I would like the whole pModel to move (that is, for the pModel structure itself to change). I found the function glmScale. Is there anything similar for translating or rotating?
When I load the .obj I do something like this:
pModelScaun=glmReadOBJ(filename);
glmUnitize(pModelScaun);
glmFacetNormals(pModelScaun);
glmVertexNormals(pModelScaun,90.0);
and then I use the triangles to determine the light vector and the shadow frustum:
for (unsigned int i = 0; i < pModelScaun->numtriangles; i++)
{
    //compute the light vector (between the center of the current
    //triangle and the position of the light, converted to object space)
    for (unsigned int j = 0; j < 3; j++)
    {
        fvIncidentLightDir[j] = (pModelScaun->vertices[3*pModelScaun->triangles[i].vindices[0]+j] +
                                 pModelScaun->vertices[3*pModelScaun->triangles[i].vindices[1]+j] +
                                 pModelScaun->vertices[3*pModelScaun->triangles[i].vindices[2]+j]) / 3.0 - lp[j];
    }
Can you point me to a way in which I could use transformation matrices in this situation?

Since you are using the vertices in object space, and have transformed the light into that object space, you can multiply fvIncidentLightDir by your object's transformation matrix to transform it back into world space.
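As far as I know there is no glmTranslate counterpart to glmScale in that loader, so a translation (or rotation) has to be baked into the model by hand. Below is a sketch of a hypothetical glmTranslateModel helper, assuming the GLMmodel vertex layout that the triangle loop above already relies on (vertex i stored at vertices[3*i + 0..2], with indices starting at 1):

void glmTranslateModel(GLMmodel* model, GLfloat tx, GLfloat ty, GLfloat tz)
{
    //shift every vertex in place, so the stored geometry itself moves
    for (GLuint i = 1; i <= model->numvertices; i++) {
        model->vertices[3 * i + 0] += tx;
        model->vertices[3 * i + 1] += ty;
        model->vertices[3 * i + 2] += tz;
    }
}

//usage, right after loading:
pModelScaun = glmReadOBJ(filename);
glmUnitize(pModelScaun);
glmTranslateModel(pModelScaun, 0.0f, 1.5f, 0.0f); //example offset
glmFacetNormals(pModelScaun);
glmVertexNormals(pModelScaun, 90.0);

A rotation works the same way, except each vertex is multiplied by a 3x3 rotation matrix instead of being offset; remember to re-run glmFacetNormals/glmVertexNormals afterwards so the normals match the rotated geometry.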

Rotate a std::vector<Eigen::Vector3d> as a rigid transformation?

I have a few 3d points, stored in a std::vector<Eigen::Vector3d>. I need to rigidly rotate and translate these points, without changing their relationship to one another. As if moving the cloud as a whole.
Based on this question:
https://stackoverflow.com/questions/50507665/eigen-rotate-a-vector3d-with-a-quaternion
I have this code:
std::vector<Eigen::Vector3d> pts, ptsMoved;
Eigen::Quaterniond rotateBy = Eigen::Quaterniond(0.1,0.5,0.08,0.02);
Eigen::Vector3d translateBy(1, 2.5, 1.5);
for (int i = 0; i < pts.size(); i++)
{
    //transform point
    Vector3d rot = rotateBy * (pts[i] + translateBy);
    ptsMoved.push_back(rot);
}
When I view the points and compare them to the original points, however, I get this: (white are the original, green are the transformed).
What I expect is the cloud as a whole to look the same, just in a different position and orientation. What I get is a moved, rotated and scaled cloud that looks different to the original. What am I doing wrong?
EDIT:
If I apply the inverse transform to the adjusted points, using:
std::vector<Eigen::Vector3d> pntsBack;
for (int i = 0; i < ptsMoved.size(); i++)
{
    //transform point
    Vector3d rot = rotateBy.inverse() * (ptsMoved[i] - translateBy);
    pntsBack.push_back(rot);
}
It gives me an even worse result. (dark green = original points, white = transformed, light green = transformed inverse)
Your quaternion is not a unit quaternion, so you will get unspecified results.
If you are not sure your quaternion is normalized, just write
rotateBy.normalize();
before using it. Additionally, if you want to rotate more than one vector, it is more efficient to convert the quaternion to a rotation matrix first:
Eigen::Matrix3d rotMat = rotateBy.toRotationMatrix();
// ...
// inside for loop:
Vector3d rot = rotMat * (ptsMoved[i] - translateBy);
Also, instead of .inverse() you can use .conjugate() for unit quaternions, and .adjoint() or .transpose() for orthogonal matrices.
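Putting the two points together, here is a minimal self-contained sketch (my own, with made-up sample points) that applies a rigid transform in the rotate-then-translate convention p' = R*p + t and then undoes it exactly:

#include <vector>
#include <Eigen/Dense>

int main()
{
    std::vector<Eigen::Vector3d> pts;
    pts.push_back(Eigen::Vector3d(0, 0, 0));   //made-up sample points
    pts.push_back(Eigen::Vector3d(1, 0, 0));
    pts.push_back(Eigen::Vector3d(0, 1, 0));

    Eigen::Quaterniond rotateBy(0.1, 0.5, 0.08, 0.02);
    rotateBy.normalize();                      //unit quaternion -> pure rotation, no scaling
    Eigen::Vector3d translateBy(1, 2.5, 1.5);
    Eigen::Matrix3d R = rotateBy.toRotationMatrix();

    //forward: rotate about the origin, then translate (p' = R*p + t)
    std::vector<Eigen::Vector3d> ptsMoved;
    for (size_t i = 0; i < pts.size(); i++)
        ptsMoved.push_back(R * pts[i] + translateBy);

    //inverse: undo the translation first, then the rotation (p = R^T * (p' - t))
    std::vector<Eigen::Vector3d> ptsBack;
    for (size_t i = 0; i < ptsMoved.size(); i++)
        ptsBack.push_back(R.transpose() * (ptsMoved[i] - translateBy));
    //ptsBack now matches pts up to floating-point error

    return 0;
}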

Raytracing - Rays shot from camera through screen don't deviate on the y axis

So I am trying to write a raytracer as a personal project, and I have the basic recursion, mesh geometry, and ray-triangle intersection down.
I am trying to get a plausible image out of it, but encounter the problem that all pixel rows are the same, giving me straight vertical lines.
I found that all pixel positions generated from the camera function are the same on the y axis, but cannot find the problem with my vector math here (I use my Vertex structure as vectors too, it's lazy, I know):
void Renderer::CameraShader()
{
    //compute the width and height of the screen based on angle and distance of the near clip plane
    double widthRad = tan(0.5*m_Cam.angle)*m_Cam.nearClipPlane;
    double heightRad = ((double)m_Cam.pixelRows / (double)m_Cam.pixelCols)*widthRad;
    //get the horizontal vector of the camera by crossing the direction vector with an up vector (0, 1, 0)
    Vertex cross = ((m_Cam.direction - m_Cam.origin).CrossProduct(Vertex(0, 1, 0)).Normalized(0.0001))*widthRad;
    //get the up/down vector of the camera by crossing the horizontal vector with the direction vector
    Vertex crossDown = m_Cam.direction.CrossProduct(cross).Normalized(0.0001)*heightRad;
    //generate rays per pixel row and column
    for (int i = 0; i < m_Cam.pixelCols; i++)
    {
        for (int j = 0; j < m_Cam.pixelRows; j++)
        {
            Vertex pixelPos = m_Cam.origin + (m_Cam.direction - m_Cam.origin).Normalized(0.0001)*m_Cam.nearClipPlane //vector to the screen center
                - cross + (cross*((i / (double)m_Cam.pixelCols)*widthRad*2))          //horizontal vector based on i
                + crossDown - (crossDown*((j / (double)m_Cam.pixelRows)*heightRad*2)); //vertical vector based on j
            //cast a ray through the corresponding screen pixel to get its color
            m_Image[i][j] = raycast(m_Cam.origin, pixelPos - m_Cam.origin, p_MaxBounces);
        }
    }
}
I hope the comments in the code make clear what is happening.
If anyone sees the problem, any help would be appreciated.
The problem was that I had to subtract the camera origin from the direction point. It now actually renders silhouettes, so I guess I can say it's fixed :)
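For reference, the only line in the snippet above that still treats m_Cam.direction as a direction rather than as a point is the crossDown one, so the fix presumably looks like this (a sketch based on the sentence above, not verified against the full project):

//get the up/down vector of the camera by crossing the horizontal vector with the
//actual direction vector (direction point minus camera origin this time)
Vertex crossDown = (m_Cam.direction - m_Cam.origin).CrossProduct(cross).Normalized(0.0001)*heightRad;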

How to ensure particles will always be aligned in the center

I'm having trouble figuring out how to ensure particles aligned in a square will always be placed in the middle of the screen, regardless of the size of the square. The square is created with:
for(int i = 0; i < (int)sqrt(d_MAXPARTICLES); i++) {
    for(int j = 0; j < (int)sqrt(d_MAXPARTICLES); j++) {
        Particle particle;
        glm::vec2 d2Pos = glm::vec2(j*0.06, i*0.06) + glm::vec2(-17.0f, -17.0f);
        particle.pos = glm::vec3(d2Pos.x, d2Pos.y, -70);
        particle.life = 1000.0f;
        particle.cameradistance = -1.0f;
        particle.r = d_R;
        particle.g = d_G;
        particle.b = d_B;
        particle.a = d_A;
        particle.size = d_SIZE;
        d_particles_container.push_back(particle);
    }
}
The most important part is the glm::vec2(-17.0f, -17.0f), which correctly positions the square in the center of the screen for the current particle count.
The problem is that my program supports any number of particles, so a hard-coded offset like glm::vec2(-17.0f, -17.0f) only centers the square for that one count; with a different number of particles the square is off center. How can I compute the offset so it accounts for different numbers of particles?
Do not make the position dependent on the "i" and "j" indices if you want a fixed position.
glm::vec2 d2Pos = glm::vec2(centerOfScreenX, centerOfScreenY); //much better
But how do you compute centerOfScreen? It depends on whether you are using a 2D or a 3D camera.
If you use a fixed 2D camera, then the center is (Width/2, Height/2).
If you use a moving 3D camera, you need to launch a ray from the center of the screen and take any point along that ray (so you just use X, Y and then set Z as you wish), as sketched below.
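For the moving-camera case, here is a sketch of that center-of-screen ray using glm::unProject; view, projection, screenWidth and screenHeight are placeholders for whatever your camera setup provides:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>   //glm::unProject

//unproject the center pixel on the near and far planes to get a world-space ray
glm::vec4 viewport(0.0f, 0.0f, (float)screenWidth, (float)screenHeight);
glm::vec3 winNear(screenWidth * 0.5f, screenHeight * 0.5f, 0.0f);
glm::vec3 winFar (screenWidth * 0.5f, screenHeight * 0.5f, 1.0f);
glm::vec3 rayStart = glm::unProject(winNear, view, projection, viewport);
glm::vec3 rayEnd   = glm::unProject(winFar,  view, projection, viewport);
glm::vec3 rayDir   = glm::normalize(rayEnd - rayStart);
//any point rayStart + t*rayDir can serve as the square's center; pick t to set the Z you want
glm::vec3 squareCenter = rayStart + rayDir * 70.0f;   //e.g. 70 units in front of the camera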
Edit:
Now that the question is clearer, here is the answer:
int maxParticles = (int)sqrt(d_MAXPARTICLES);
float factorx = (i - (float)maxParticles/2.0f) / (float)maxParticles;
float factory = (j - (float)maxParticles/2.0f) / (float)maxParticles;
glm::vec2 particleLocaleDelta = glm::vec2(extentX*factorx, extentY*factory);
glm::vec2 d2Pos = glm::vec2(centerOfScreenX, centerOfScreenY);
d2Pos += particleLocaleDelta;
where
extentX, extentY
are the dimensions of the "big square" and the factors are the per-particle offsets derived from "i" and "j". The code is not optimized; it is just meant to work (assuming you have a 2D camera whose world units correspond to pixel units).
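Putting that together with the original loop, here is a sketch that centers the grid for any particle count (spacing 0.06 and depth -70 are the values from the question; gridCenter is wherever the square should sit, e.g. the unprojected screen center from above):

//rewrite of the original particle loop so the grid is centered for any count;
//gridCenter replaces the hard-coded (-17, -17)
int   n       = (int)sqrt((double)d_MAXPARTICLES); //particles per side
float spacing = 0.06f;
glm::vec2 gridCenter(0.0f, 0.0f);                  //wherever the square should sit
glm::vec2 halfExtent(0.5f * (n - 1) * spacing);    //half the square's width/height

for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
        Particle particle;
        glm::vec2 d2Pos = gridCenter - halfExtent + glm::vec2(j * spacing, i * spacing);
        particle.pos  = glm::vec3(d2Pos.x, d2Pos.y, -70);
        particle.life = 1000.0f;
        particle.cameradistance = -1.0f;
        particle.r = d_R; particle.g = d_G; particle.b = d_B; particle.a = d_A;
        particle.size = d_SIZE;
        d_particles_container.push_back(particle);
    }
}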

Why do I have to divide by Z?

I needed to implement 'choosing an object' in a 3D environment. So instead of going with a robust, accurate approach such as raycasting, I decided to take the easy way out. First, I transform the object's world position into screen coordinates:
glm::mat4 modelView, projection, accum;
glGetFloatv(GL_PROJECTION_MATRIX, (GLfloat*)&projection);
glGetFloatv(GL_MODELVIEW_MATRIX, (GLfloat*)&modelView);
accum = projection * modelView;
glm::vec4 transformed = accum * glm::vec4(objectLocation, 1);
Followed by some trivial code to transform from the OpenGL coordinate system to normal window coordinates, and a simple distance-from-the-mouse check. BUT that doesn't quite work. In order to translate from world space to screen space, I need one more calculation added at the end of the function shown above:
transformed.x /= transformed.z;
transformed.y /= transformed.z;
I don't understand why I have to do this. I was under the impression that once you multiplied your vertex by the accumulated modelViewProjection matrix, you had your screen coordinates. But I have to divide by Z to get it to work properly. In my OpenGL 3.3 shaders, I never have to divide by Z. Why is this?
EDIT: The code to transform from the OpenGL coordinate system to screen coordinates is this:
int screenX = (int)((trans.x + 1.f)*640.f); //640 = 1280/2
int screenY = (int)((-trans.y + 1.f)*360.f); //360 = 720/2
And then I test if the mouse is near that point by doing:
float length = glm::distance(glm::vec2(screenX, screenY), glm::vec2(mouseX, mouseY));
if(length < 50) {//you can guess the rest
EDIT #2
This method is called upon a mouse click event:
glm::mat4 modelView;
glm::mat4 projection;
glm::mat4 accum;
glGetFloatv(GL_PROJECTION_MATRIX, (GLfloat*)&projection);
glGetFloatv(GL_MODELVIEW_MATRIX, (GLfloat*)&modelView);
accum = projection * modelView;
float nearestDistance = 1000.f;
gameObject* nearest = NULL;
for(uint i = 0; i < objects.size(); i++) {
    gameObject* o = objects[i];
    o->selected = false;
    glm::vec4 trans = accum * glm::vec4(o->location, 1);
    trans.x /= trans.z;
    trans.y /= trans.z;
    int clipX = (int)((trans.x + 1.f) * 640.f);
    int clipY = (int)((-trans.y + 1.f) * 360.f);
    float length = glm::distance(glm::vec2(clipX, clipY), glm::vec2(mouseX, mouseY));
    if(length < 50) {
        nearestDistance = trans.z;
        nearest = o;
    }
}
if(nearest) {
    nearest->selected = true;
}
mouseRightPressed = true;
The code as a whole is incomplete, but the parts relevant to my question work fine. The 'objects' vector contains only one element for my tests, so the loop doesn't get in the way at all.
I've figured it out. As Mr David Lively pointed out,
Typically in this case you'd divide by .w instead of .z to get something useful, though.
My .w values were very close to my .z values, so in my code I changed the statement:
transformed.x /= transformed.z;
transformed.y /= transformed.z;
to:
transformed.x /= transformed.w;
transformed.y /= transformed.w;
And it still worked just as before.
https://stackoverflow.com/a/10354368/2159051 explains that division by w will be done later in the pipeline. Obviously, because my code simply multiplies the matrices together, there is no 'later pipeline'. I was just getting lucky in a sense, because my .z value was so close to my .w value, there was the illusion that it was working.
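In other words, on the CPU the perspective divide has to be done by hand. Here is a minimal sketch of the whole world-to-window transform with the explicit divide by w, using the same 1280x720 viewport as above:

#include <glm/glm.hpp>

//project a world-space point to window coordinates by hand; the divide by w
//is explicit because there is no later pipeline stage doing it for us here
glm::vec2 worldToScreen(const glm::vec3& worldPos,
                        const glm::mat4& modelView,
                        const glm::mat4& projection)
{
    glm::vec4 clip = projection * modelView * glm::vec4(worldPos, 1.0f); //clip space
    glm::vec3 ndc  = glm::vec3(clip) / clip.w;                           //normalized device coords, [-1, 1]
    //note: points behind the camera (clip.w <= 0) need separate handling
    float screenX = (ndc.x + 1.f) * 640.f;   //640 = 1280/2, as in the question
    float screenY = (-ndc.y + 1.f) * 360.f;  //360 = 720/2, Y flipped for window coords
    return glm::vec2(screenX, screenY);
}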
The divide-by-Z step effectively applies the perspective transformation. Without it, you'd have an iso view. Imagine two view-space vertices: A(-1,0,1) and B(-1,0,100).
Without the divide by Z step, the screen coordinates are equal (-1,0).
With the divide-by-Z, they are different: A(-1,0) and B(-0.01,0). So, things farther away from the view-space origin (camera) are smaller in screen space than things that are closer. IE, perspective.
That said: if your projection matrix (and matrix multiplication code) is correct, this should already be happening, as the projection matrix will contain 1/Z scaling components which do this. So, some questions:
Are you really using the output of a projection transform, or just the view transform?
Are you doing this in a pixel/fragment shader? Screen coordinates there are normalized (-1,-1) to (+1,+1), not pixel coordinates, with the origin at the middle of the viewport. Typically in this case you'd divide by .w instead of .z to get something useful, though.
If you're doing this on the CPU, how are you getting this information back to the host?
I guess it is because you are going from three dimensions to two dimensions, so you are projecting the 3D world onto 2D coordinates.
A point P = (X, Y, Z) in 3D becomes q = (x, y) in 2D, where x = X/Z and y = Y/Z.
So a circle in 3D will not be a circle in 2D.
You can check this video out:
https://www.youtube.com/watch?v=fVJeJMWZcq8
I hope I understand your question correctly.

OpenGL Frustum visibility test with sphere : Far plane not working

I am writing a program to test sphere-frustum intersection and determine the sphere's visibility. I am extracting the frustum's clipping planes into camera space and checking for intersection. It works perfectly for all planes except the far plane, and I cannot figure out why. I keep pulling the camera back, but my program still claims the sphere is visible, despite it having been clipped long ago. If I go far enough it eventually determines that it is not visible, but only some distance after it has exited the frustum.
I am using a unit sphere at the origin for the test. I am using the OpenGL Mathematics (GLM) library for vector and matrix data structures and for its built in math functions. Here is my code for the visibility function:
void visibilityTest(const struct MVP *mvp) {
    static bool visLastTime = true;
    bool visThisTime;
    const glm::vec4 modelCenter_worldSpace = glm::vec4(0, 0, 0, 1); //at origin
    const int negRadius = -1; //unit sphere
    //Get cam space model center
    glm::vec4 modelCenter_cameraSpace = mvp->view * mvp->model * modelCenter_worldSpace;
    //---------Get Frustum Planes--------
    //extract projection matrix row vectors
    //NOTE: since glm stores its matrices in column-major order, we extract columns
    glm::vec4 rowVec[4];
    for(int i = 0; i < 4; i++) {
        rowVec[i] = glm::vec4( mvp->projection[0][i], mvp->projection[1][i], mvp->projection[2][i], mvp->projection[3][i] );
    }
    //determine frustum clipping planes (in camera space)
    glm::vec4 plane[6];
    //NOTE: recall that indices start at zero. So M4 + M3 will be rowVec[3] + rowVec[2]
    plane[0] = rowVec[3] + rowVec[2]; //near
    plane[1] = rowVec[3] - rowVec[2]; //far
    plane[2] = rowVec[3] + rowVec[0]; //left
    plane[3] = rowVec[3] - rowVec[0]; //right
    plane[4] = rowVec[3] + rowVec[1]; //bottom
    plane[5] = rowVec[3] - rowVec[1]; //top
    //extend view frustum by 1 in all directions; near/far along local z, left/right along local x, bottom/top along local y
    // -Ax' -By' -Cz' + D = D'
    plane[0][3] -= plane[0][2]; // <x',y',z'> = <0,0,1>
    plane[1][3] += plane[1][2]; // <0,0,-1>
    plane[2][3] += plane[2][0]; // <-1,0,0>
    plane[3][3] -= plane[3][0]; // <1,0,0>
    plane[4][3] += plane[4][1]; // <0,-1,0>
    plane[5][3] -= plane[5][1]; // <0,1,0>
    //----------Determine Frustum-Sphere intersection--------
    //if any of the dot products between the model center and a frustum plane is less than -r,
    //then the object falls outside the view frustum
    visThisTime = true;
    for(int i = 0; i < 6; i++) {
        if( glm::dot(plane[i], modelCenter_cameraSpace) < static_cast<float>(negRadius) ) {
            visThisTime = false;
        }
    }
    if(visThisTime != visLastTime) {
        printf("Sphere is %svisible\n", (visThisTime) ? "" : "NOT ");
        visLastTime = visThisTime;
    }
}
The polygons appear to be clipped by the far plane properly, so the projection matrix seems to be set up correctly, but the calculations make it seem as if the far plane is much farther out than it should be. Perhaps I am not calculating something correctly, or have a fundamental misunderstanding of the calculations that are required?
The calculations that deal specifically with the far clipping plane are:
plane[1] = rowVec[3] - rowVec[2]; //far
and
plane[1][3] += plane[1][2]; // <0,0,-1>
I'm setting the plane equal to the 4th row (or, in this case, column) of the projection matrix minus the 3rd row. Then I'm extending the far plane one unit further out (due to the sphere's radius of one; D' = D - C·(-1)).
I've looked over this code many times and I can't see why it shouldn't work. Any help is appreciated.
EDIT:
I can't answer my own question as I don't have the rep, so I will post it here.
The problem was that I wasn't normalizing the plane equations. This didn't seem to make much of a difference for any of the clip planes besides the far one, so I hadn't even considered it (but that didn't make it any less wrong). After normalization everything works properly.
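For reference, the missing step is normalizing each plane equation before it is used in the distance tests (and before the one-unit extension, since that extension also assumes a unit-length normal). A sketch, matching the (A, B, C, D) layout used in the code above:

//normalize the plane equations: divide the whole vec4 by the length of its normal,
//so that dot(plane, point) is a true signed distance comparable to the sphere's radius
for (int i = 0; i < 6; i++) {
    float mag = glm::length(glm::vec3(plane[i])); // |(A, B, C)|
    plane[i] /= mag;
}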