Scale a billboard matrix in DX11 - C++

I have constructed a billboard matrix using the code below:
XMFLOAT4X4 translationMatrix = XMFLOAT4X4();
translationMatrix._11 = right.x; translationMatrix._21 = up.x; translationMatrix._31 = look.x; translationMatrix._41 = worldposition.x;
translationMatrix._12 = right.y; translationMatrix._22 = up.y; translationMatrix._32 = look.y; translationMatrix._42 = worldposition.y;
translationMatrix._13 = right.z; translationMatrix._23 = up.z; translationMatrix._33 = look.z; translationMatrix._43 = worldposition.z;
translationMatrix._14 = 0; translationMatrix._24 = 0; translationMatrix._34 = 0; translationMatrix._44 = 1;
And this works correctly. However, I want the billboard to be scalable. How can I achieve this, given that the matrix is built entirely from unit vectors and so isn't inherently scalable?
Trying to scale using XMMatrixScalingFromVector() causes the billboard to start moving when the camera approaches it.
Any help is much appreciated.

If my understanding of what you're doing is correct, then right, up, and look simply represent the world-space basis vectors of the local coordinate system of the billboard. In this case, it should be sufficient to just scale the right and up vectors (assuming those correspond to the x and y axes of the billboard) to your heart's desire and that's all there is to it…
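As a minimal sketch (scaleX and scaleY are hypothetical per-axis scale factors you would choose; right, up, look, and worldposition are the same vectors as in the question), you would bake the scale into the basis vectors before writing them into the matrix:
XMFLOAT3 scaledRight = XMFLOAT3(right.x * scaleX, right.y * scaleX, right.z * scaleX);
XMFLOAT3 scaledUp = XMFLOAT3(up.x * scaleY, up.y * scaleY, up.z * scaleY);
// Build the matrix exactly as before, but with the scaled basis vectors.
// The translation elements (_41.._43) are untouched, so the billboard stays put.
XMFLOAT4X4 billboardMatrix = XMFLOAT4X4();
billboardMatrix._11 = scaledRight.x; billboardMatrix._21 = scaledUp.x; billboardMatrix._31 = look.x; billboardMatrix._41 = worldposition.x;
billboardMatrix._12 = scaledRight.y; billboardMatrix._22 = scaledUp.y; billboardMatrix._32 = look.y; billboardMatrix._42 = worldposition.y;
billboardMatrix._13 = scaledRight.z; billboardMatrix._23 = scaledUp.z; billboardMatrix._33 = look.z; billboardMatrix._43 = worldposition.z;
billboardMatrix._14 = 0; billboardMatrix._24 = 0; billboardMatrix._34 = 0; billboardMatrix._44 = 1;
Because the scale lives in the basis vectors rather than in a separate matrix multiplied in afterwards, the translation is unaffected and the billboard no longer drifts as the camera approaches.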

Related

Rotate a std::vector<Eigen::Vector3d> as a rigid transformation?

I have a few 3D points, stored in a std::vector<Eigen::Vector3d>. I need to rigidly rotate and translate these points without changing their relationship to one another, as if moving the cloud as a whole.
Based on this question:
https://stackoverflow.com/questions/50507665/eigen-rotate-a-vector3d-with-a-quaternion
I have this code:
std::vector<Eigen::Vector3d> pts, ptsMoved;
Eigen::Quaterniond rotateBy = Eigen::Quaterniond(0.1,0.5,0.08,0.02);
Eigen::Vector3d translateBy(1, 2.5, 1.5);
for (int i = 0; i < pts.size(); i++)
{
    //transform point
    Vector3d rot = rotateBy * (pts[i] + translateBy);
    ptsMoved.push_back(rot);
}
When I view the points and compare them to the original points, however, I get this: (white are the original, green are the transformed).
What I expect is the cloud as a whole to look the same, just in a different position and orientation. What I get is a moved, rotated, and scaled cloud that looks different from the original. What am I doing wrong?
EDIT:
If I apply the inverse transform to the adjusted points, using:
std::vector<Eigen::Vector3d> pntsBack;
for (int i = 0; i < ptsMoved.size(); i++)
{
    //transform point
    Vector3d rot = rotateBy.inverse() * (ptsMoved[i] - translateBy);
    pntsBack.push_back(rot);
}
It gives me an even worse result. (dark green = original points, white = transformed, light green = transformed inverse)
Your quaternion is not a unit quaternion, therefore you will get unspecified results.
If you are not sure your quaternion is normalized, just write
rotateBy.normalize();
before using it. Additionally, if you want to rotate more than one vector it is more efficient to convert the quaternion to a rotation matrix first:
Eigen::Matrix3d rotMat = rotateBy.toRotationMatrix();
// ...
// inside for loop:
Vector3d rot = rotMat * (ptsMoved[i] - translateBy);
Also, instead of .inverse() you can use .conjugate() for unit quaternions and .adjoint() or .transpose() for orthogonal matrices.
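A minimal end-to-end sketch of the fixed transform (and its exact inverse) might look like the following; note that the inverse of rot = R * (p + t) is p = R^T * rot - t, not R^T * (rot - t) as in the question's EDIT:
#include <Eigen/Geometry>
#include <vector>

std::vector<Eigen::Vector3d> pts, ptsMoved, ptsBack;
Eigen::Quaterniond rotateBy(0.1, 0.5, 0.08, 0.02);
rotateBy.normalize();                       // make it a unit quaternion
Eigen::Vector3d translateBy(1, 2.5, 1.5);
Eigen::Matrix3d rotMat = rotateBy.toRotationMatrix();
for (const auto& p : pts)
    ptsMoved.push_back(rotMat * (p + translateBy));           // forward transform
for (const auto& q : ptsMoved)
    ptsBack.push_back(rotMat.transpose() * q - translateBy);  // exact inverse
With a unit quaternion and the matching inverse, ptsBack reproduces pts up to floating-point error.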

How to ensure particles will always be aligned in the center

I'm having trouble figuring out how to ensure particles aligned in a square will always be placed in the middle of the screen, regardless of the size of the square. The square is created with:
for(int i = 0; i < (int)sqrt(d_MAXPARTICLES); i++) {
    for(int j = 0; j < (int)sqrt(d_MAXPARTICLES); j++) {
        Particle particle;
        glm::vec2 d2Pos = glm::vec2(j*0.06, i*0.06) + glm::vec2(-17.0f, -17.0f);
        particle.pos = glm::vec3(d2Pos.x, d2Pos.y, -70);
        particle.life = 1000.0f;
        particle.cameradistance = -1.0f;
        particle.r = d_R;
        particle.g = d_G;
        particle.b = d_B;
        particle.a = d_A;
        particle.size = d_SIZE;
        d_particles_container.push_back(particle);
    }
}
The most important part is the glm::vec2(-17.0f, -17.0f), which correctly positions the square in the center of the screen. The problem is that my program supports any number of particles, and with a different particle count the square ends up off center. How can I change glm::vec2(-17.0f, -17.0f) to account for a different number of particles?
Do not make the position dependent on the i and j indices if you want a fixed position.
glm::vec2 d2Pos = glm::vec2(centerOfScreenX, centerOfScreenY); //much better
But how to compute centerOfScreen? It depends on whether you are using a 2D or a 3D camera.
If you use a fixed 2D camera, then the center is (Width/2, Height/2).
If you use a moving 3D camera, you need to launch a ray from the center of the screen and take any point on the ray (i.e. use its X,Y and then set Z as you wish).
Edit:
Now that the question is clearer here is the answer:
int maxParticles = (int)sqrt(d_MAXPARTICLES);
float factorx = (i - (float)maxParticles/2.0f) / (float)maxParticles;
float factory = (j - (float)maxParticles/2.0f) / (float)maxParticles;
glm::vec2 particleLocalDelta = glm::vec2(extentX*factorx, extentY*factory);
glm::vec2 d2Pos = glm::vec2(centerOfScreenX, centerOfScreenY);
d2Pos += particleLocalDelta;
where
extentX, extentY
are the dimensions of the "big square" and the factors rescale the offset by i and j. The code is not optimized; it is just meant to work (assuming you have a 2D camera with world units corresponding to pixel units).
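Putting that into the original loop, a minimal sketch (assuming centerOfScreenX/Y and extentX/Y are values supplied by the caller, and Particle and d_particles_container are as in the question) might look like:
int maxParticles = (int)sqrt((float)d_MAXPARTICLES);
for (int i = 0; i < maxParticles; i++) {
    for (int j = 0; j < maxParticles; j++) {
        // offsets symmetric around zero, roughly in [-0.5, 0.5)
        float factorx = (i - (float)maxParticles / 2.0f) / (float)maxParticles;
        float factory = (j - (float)maxParticles / 2.0f) / (float)maxParticles;
        glm::vec2 d2Pos = glm::vec2(centerOfScreenX + extentX * factorx,
                                    centerOfScreenY + extentY * factory);
        Particle particle;
        particle.pos = glm::vec3(d2Pos.x, d2Pos.y, -70);
        d_particles_container.push_back(particle);
    }
}
Because the offsets are symmetric around zero, the grid stays centered no matter what d_MAXPARTICLES is.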

Why do I have to divide by Z?

I needed to implement 'choosing an object' in a 3D environment. So instead of going with a robust, accurate approach such as raycasting, I decided to take the easy way out. First, I transform the object's world position into screen coordinates:
glm::mat4 modelView, projection, accum;
glGetFloatv(GL_PROJECTION_MATRIX, (GLfloat*)&projection);
glGetFloatv(GL_MODELVIEW_MATRIX, (GLfloat*)&modelView);
accum = projection * modelView;
glm::vec4 transformed = accum * glm::vec4(objectLocation, 1);
This is followed by some trivial code to transform from the OpenGL coordinate system to normal window coordinates, and a simple distance-from-the-mouse check. BUT that doesn't quite work. In order to translate from world space to screen space, I need one more calculation added to the end of the function shown above:
transformed.x /= transformed.z;
transformed.y /= transformed.z;
I don't understand why I have to do this. I was under the impression that, once you multiplied your vertex by the accumulated modelViewProjection matrix, you had your screen coordinates. But I have to divide by Z to get it to work properly. In my OpenGL 3.3 shaders, I never have to divide by Z. Why is this?
EDIT: The code to transform from the OpenGL coordinate system to screen coordinates is this:
int screenX = (int)((trans.x + 1.f)*640.f); //640 = 1280/2
int screenY = (int)((-trans.y + 1.f)*360.f); //360 = 720/2
And then I test if the mouse is near that point by doing:
float length = glm::distance(glm::vec2(screenX, screenY), glm::vec2(mouseX, mouseY));
if(length < 50) {//you can guess the rest
EDIT #2
This method is called upon a mouse click event:
glm::mat4 modelView;
glm::mat4 projection;
glm::mat4 accum;
glGetFloatv(GL_PROJECTION_MATRIX, (GLfloat*)&projection);
glGetFloatv(GL_MODELVIEW_MATRIX, (GLfloat*)&modelView);
accum = projection * modelView;
float nearestDistance = 1000.f;
gameObject* nearest = NULL;
for(uint i = 0; i < objects.size(); i++) {
    gameObject* o = objects[i];
    o->selected = false;
    glm::vec4 trans = accum * glm::vec4(o->location, 1);
    trans.x /= trans.z;
    trans.y /= trans.z;
    int clipX = (int)((trans.x + 1.f) * 640.f);
    int clipY = (int)((-trans.y + 1.f) * 360.f);
    float length = glm::distance(glm::vec2(clipX, clipY), glm::vec2(mouseX, mouseY));
    if(length < 50) {
        nearestDistance = trans.z;
        nearest = o;
    }
}
if(nearest) {
    nearest->selected = true;
}
mouseRightPressed = true;
The code as a whole is incomplete, but the parts relevant to my question work fine. The 'objects' vector contains only one element for my tests, so the loop doesn't get in the way at all.
I've figured it out. As David Lively pointed out,
Typically in this case you'd divide by .w instead of .z to get something useful, though.
My .w values were very close to my .z values, so in my code I changed the statement:
transformed.x /= transformed.z;
transformed.y /= transformed.z;
to:
transformed.x /= transformed.w;
transformed.y /= transformed.w;
And it still worked just as before.
https://stackoverflow.com/a/10354368/2159051 explains that the division by w is done later in the pipeline. Obviously, because my code simply multiplies the matrices together, there is no 'later pipeline'. I was just getting lucky in a sense: because my .z value was so close to my .w value, there was the illusion that it was working.
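For reference, a minimal sketch of the corrected world-to-screen conversion (assuming the same 1280×720 viewport and the projection/modelView matrices fetched above):
glm::vec4 clip = projection * modelView * glm::vec4(objectLocation, 1.0f);
// perspective divide: clip space -> normalized device coordinates (NDC)
glm::vec3 ndc = glm::vec3(clip) / clip.w;
// NDC [-1,1] -> pixel coordinates, with y flipped for a top-left origin
int screenX = (int)((ndc.x + 1.f) * 640.f);  // 640 = 1280/2
int screenY = (int)((-ndc.y + 1.f) * 360.f); // 360 = 720/2
A point behind the camera has clip.w < 0, so a real picker would also reject those before doing the distance test.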
The divide-by-Z step effectively applies the perspective transformation. Without it, you'd have an isometric view. Imagine two view-space vertices: A(-1,0,1) and B(-1,0,100).
Without the divide-by-Z step, the screen coordinates are equal: (-1,0).
With the divide-by-Z, they are different: A(-1,0) and B(-0.01,0). So things farther away from the view-space origin (camera) are smaller in screen space than things that are closer, i.e., perspective.
That said: if your projection matrix (and matrix multiplication code) is correct, this should already be happening, as the projection matrix will contain 1/Z scaling components which do this. So, some questions:
Are you really using the output of a projection transform, or just the view transform?
Are you doing this in a pixel/fragment shader? Screen coordinates there are normalized (-1,-1) to (+1,+1), not pixel coordinates, with the origin at the middle of the viewport. Typically in this case you'd divide by .w instead of .z to get something useful, though.
If you're doing this on the CPU, how are you getting this information back to the host?
I guess it is because you are going from three dimensions to two dimensions, so you are normalizing the 3D world into 2D coordinates.
A point P = (X,Y,Z) in 3D becomes q = (x,y) in 2D, where x = X/Z and y = Y/Z.
So a circle in 3D will not generally be a circle in 2D.
You can check this video out:
https://www.youtube.com/watch?v=fVJeJMWZcq8
I hope I understand your question correctly.

Opengl Billboard matrix

I am writing a viewer for a proprietary mesh & animation format in OpenGL.
During rendering a transformation matrix is created for each bone (node) and is applied to the vertices that bone is attached to.
It is possible for a bone to be marked as "billboarded", which, as most everyone knows, means it should always face the camera.
So the idea is to generate a matrix for that bone which when used to transform the vertices it's attached to, causes the vertices to be billboarded.
On my test model the billboard should come out with a particular orientation; currently, however, it comes out oriented incorrectly.
Note that despite the incorrect orientation, it is billboarded: no matter which direction the camera looks, those vertices always face that direction at that orientation.
My code for generating the matrix for bones marked as billboarded is:
mat4 view;
glGetFloatv(GL_MODELVIEW_MATRIX, (float*)&view);
vec4 camPos = vec4(-view[3].x, -view[3].y, -view[3].z,1);
vec3 camUp = vec3(view[0].y, view[1].y, view[2].y);
// zero the translation in the matrix, so we can use the matrix to transform
// the camera position to world coordinates using the view matrix
view[3].x = view[3].y = view[3].z = 0;
// the view matrix is how to get to the gluLookAt pos from what we gave as
// input for the camera position, so to go the other way we need to reverse
// the rotation. Transposing the matrix will do this.
{
float * matrix = (float*)&view;
float temp[16];
// copy this into temp
memcpy(temp, matrix, sizeof(float) * 16);
matrix[1] = temp[4]; matrix[4] = temp[1];
matrix[2] = temp[8]; matrix[8] = temp[2];
matrix[6] = temp[9]; matrix[9] = temp[6];
}
// get the correct position of the camera in world space
camPos = view * camPos;
//vec3 pos = pivot;
vec3 look = glm::normalize(vec3(camPos.x-pos.x,camPos.y-pos.y,camPos.z-pos.z));
vec3 right = glm::cross(camUp,look);
vec3 up = glm::cross(look,right);
mat4 bmatrix;
bmatrix[0].x = right.x;
bmatrix[0].y = right.y;
bmatrix[0].z = right.z;
bmatrix[0].w = 0;
bmatrix[1].x = up.x;
bmatrix[1].y = up.y;
bmatrix[1].z = up.z;
bmatrix[1].w = 0;
bmatrix[2].x = look.x;
bmatrix[2].y = look.y;
bmatrix[2].z = look.z;
bmatrix[2].w = 0;
bmatrix[3].x = pos.x;
bmatrix[3].y = pos.y;
bmatrix[3].z = pos.z;
bmatrix[3].w = 1;
I am using GLM to do the math involved.
Though this part of the code is based on the tutorial here, other parts of the code are based on an open-source program similar to the one I'm building. However, that program was written for DirectX and I haven't had much luck directly converting it. The (working) DirectX code for billboarding looks like this:
D3DXMatrixRotationY(&CameraRotationMatrixY, -Camera.GetPitch());
D3DXMatrixRotationZ(&CameraRotationMatrixZ, Camera.GetYaw());
D3DXMatrixMultiply(&CameraRotationMatrix, &CameraRotationMatrixY, &CameraRotationMatrixZ);
D3DXQuaternionRotationMatrix(&CameraRotation, &CameraRotationMatrix);
D3DXMatrixTransformation(&CameraRotationMatrix, NULL, NULL, NULL, &ModelBaseData->PivotPoint, &CameraRotation, NULL);
D3DXMatrixDecompose(&Scaling, &Rotation, &Translation, &BaseMatrix);
D3DXMatrixTransformation(&RotationMatrix, NULL, NULL, NULL, &ModelBaseData->PivotPoint, &Rotation, NULL);
D3DXMatrixMultiply(&TempMatrix, &CameraRotationMatrix, &RotationMatrix);
D3DXMatrixMultiply(&BaseMatrix, &TempMatrix, &BaseMatrix);
Note that the results are stored in BaseMatrix in the DirectX version.
EDIT2: Here's the code I came up with when I tried to modify my code according to datenwolf's suggestions. I'm pretty sure I still made some mistakes. This attempt produces heavily distorted results, with one end of the object directly in the camera.
mat4 view;
glGetFloatv(GL_MODELVIEW_MATRIX, (float*)&view);
vec3 pos = vec3(calculatedMatrix[3].x,calculatedMatrix[3].y,calculatedMatrix[3].z);
mat4 inverted = glm::inverse(view);
vec4 plook = inverted * vec4(0,0,0,1);
vec3 look = vec3(plook.x,plook.y,plook.z);
vec3 right = orthogonalize(vec3(view[0].x,view[1].x,view[2].x),look);
vec3 up = orthogonalize(vec3(view[0].y,view[1].y,view[2].y),look);
mat4 bmatrix;
bmatrix[0].x = right.x;
bmatrix[0].y = right.y;
bmatrix[0].z = right.z;
bmatrix[0].w = 0;
bmatrix[1].x = up.x;
bmatrix[1].y = up.y;
bmatrix[1].z = up.z;
bmatrix[1].w = 0;
bmatrix[2].x = look.x;
bmatrix[2].y = look.y;
bmatrix[2].z = look.z;
bmatrix[2].w = 0;
bmatrix[3].x = pos.x;
bmatrix[3].y = pos.y;
bmatrix[3].z = pos.z;
bmatrix[3].w = 1;
calculatedMatrix = bmatrix;
vec3 orthogonalize(vec3 toOrtho, vec3 orthoAgainst) {
    float bottom = (orthoAgainst.x*orthoAgainst.x) + (orthoAgainst.y*orthoAgainst.y) + (orthoAgainst.z*orthoAgainst.z);
    float top = (toOrtho.x*orthoAgainst.x) + (toOrtho.y*orthoAgainst.y) + (toOrtho.z*orthoAgainst.z);
    return toOrtho - top/bottom*orthoAgainst;
}
Creating a parallel-to-view billboard matrix is as simple as setting the upper-left 3×3 submatrix of the total modelview matrix to identity. There are only some cases where you actually require the look vector itself.
Anyway, you're making this far more complicated than it is. All your tinkering with the matrix completely misses the point: the modelview transformation assumes that the camera is always at (0,0,0) and moves the world and models in the opposite direction. What you are trying to do is find the vector in model space that points toward the camera, which is simply the vector that will point toward (0,0,0) after transformation.
So all we have to do is invert the modelview matrix and transform (0,0,0,1) with it. That's your look vector. For your right and up vectors, orthogonalize the 1st (X) and 2nd (Y) columns of the modelview matrix against that look vector.
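A minimal sketch of the first suggestion (the parallel-to-view billboard), assuming view already holds the full modelview matrix for the bone as in the question's code:
mat4 billboarded = view;
// Overwrite the upper-left 3x3 (the rotation part) with identity so the
// geometry is always parallel to the view plane; the translation column
// is preserved, so the billboard stays at the bone's position.
billboarded[0] = vec4(1, 0, 0, 0);
billboarded[1] = vec4(0, 1, 0, 0);
billboarded[2] = vec4(0, 0, 1, 0);
Note this also discards any scaling stored in those columns; if the bone carries a scale, you would put it back on the diagonal.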
Figured it out myself. It turns out the model format I'm using uses different axes for billboarding. Most billboarding implementations (including the one I used) use the X and Y axes to position the billboarded object; the format I was reading uses Y and Z.
The thing to look for is that there was a billboarding effect, but it was facing the wrong direction. To fix this I played with the different camera vectors until I arrived at the correct matrix calculation:
bmatrix[1].x = right.x;
bmatrix[1].y = right.y;
bmatrix[1].z = right.z;
bmatrix[1].w = 0;
bmatrix[2].x = up.x;
bmatrix[2].y = up.y;
bmatrix[2].z = up.z;
bmatrix[2].w = 0;
bmatrix[0].x = look.x;
bmatrix[0].y = look.y;
bmatrix[0].z = look.z;
bmatrix[0].w = 0;
bmatrix[3].x = pos.x;
bmatrix[3].y = pos.y;
bmatrix[3].z = pos.z;
bmatrix[3].w = 1;
My attempts to follow datenwolf's advice did not succeed, and at this time he hasn't offered any additional explanation, so I'm unsure why. Thanks anyway!

3d Alternative for D3DXSPRITE for billboarding

I am looking to billboard a sun image in my 3D world (DirectX 9).
Creating a D3DXSPRITE is great in some cases, but it is only a 2D object and cannot exist in my "world" as a 3D object. What is an alternative method for billboarding, similar to D3DXSPRITE, and how can I implement it?
The only alternative I have found so far is this link: http://www.two-kings.de/tutorials/dxgraphics/dxgraphics17.html which does not seem to work.
Take the center of your object, vCenter. The object has a width and height of (w, h).
First you need your camera-to-billboard vector. This is calculated as vCamToCen = normalise( vCamera - vCenter ).
You then need an appropriate rough up vector. This can be extracted from the view matrix (handily described here, i.e. the second column). You can then calculate the side vector with vSide = vCamToCen x vUp, and then the real up vector with vUp = vCamToCen x vSide, where 'x' is the cross product.
You now have all the info you need to do your billboarding.
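In D3DX terms, a sketch of that basis construction might look like this (assuming vCamera is your camera position and vUp starts out as the rough up vector pulled from the view matrix):
D3DXVECTOR3 vCamToCen, vSide;
D3DXVECTOR3 camToCen = vCamera - vCenter;
D3DXVec3Normalize(&vCamToCen, &camToCen);
D3DXVec3Cross(&vSide, &vCamToCen, &vUp);  // vSide = vCamToCen x vUp
D3DXVec3Normalize(&vSide, &vSide);
D3DXVec3Cross(&vUp, &vCamToCen, &vSide);  // real up = vCamToCen x vSide
D3DXVec3Normalize(&vUp, &vUp);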
You can then form your 4 verts as follows.
const float halfW = w / 2.0f;
const float halfH = h / 2.0f;
const D3DXVECTOR3 vHalfSide = vSide * halfW;
const D3DXVECTOR3 vHalfUp = vUp * halfH;
vertex[0].pos = vCenter;
vertex[1].pos = vCenter;
vertex[2].pos = vCenter;
vertex[3].pos = vCenter;
vertex[0].pos -= vHalfSide;
vertex[0].pos -= vHalfUp;
vertex[1].pos += vHalfSide;
vertex[1].pos -= vHalfUp;
vertex[2].pos += vHalfSide;
vertex[2].pos += vHalfUp;
vertex[3].pos -= vHalfSide;
vertex[3].pos += vHalfUp;
Build your two triangles out of those verts and pass them through your pipeline as normal (i.e. with your usual view and projection matrices).
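For instance, a sketch of the two triangles as an indexed list (the winding order may need flipping depending on your cull mode):
const WORD indices[6] = { 0, 1, 2,   // bottom-left, bottom-right, top-right
                          0, 2, 3 }; // bottom-left, top-right, top-left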