The camera's up, focus and position values after interaction - xtk

After interacting with the XTK camera in some way -- translation, rotation, zooming -- is there a way to retrieve the new values of the position, the focus and the up vector from the camera? It looks like the getters and setters are defined in the camera JavaScript, but the corresponding attributes are not updated during the interaction. The value returned by camera.position, for instance, is not updated even after a translation.
Is there either a mechanism that can provide these values, or a way to add an additional watcher to all interactions that modify the camera?

The position, up vector and focus are used to configure the 3D space initially; after that, all interactions just modify the resulting view matrix.
You can query the view matrix like this:
var ren = new X.renderer3D();
console.log(ren.camera.view + ''); // prints the view matrix as a string

I see a few solutions, depending on who does the job.
First: add an option to the cameras to enable tracking (disabled by default). An option that updates up/position/focus would be easy, no? It just requires multiplying the previous vectors by the transformation matrix at the same moment we multiply the view matrix by it. But it may add some extra cost in operations. Or we can compute it as in my "Second" option.
Second: if I remember correctly, the transformation matrix T in a basis B(O(x,y,z), i, j, k) has a well-known structure, no? Something like this (maybe I'm forgetting a transposition):
i1 j1 k1 u1
i2 j2 k2 u2
i3 j3 k3 u3
0 0 0 1
Where:
[u1, u2, u3] = T(O(x,y,z)), i.e. the translation in the basis B
[i1, i2, i3] = T(i)
[j1, j2, j3] = T(j)
[k1, k2, k3] = T(k)
Then if you want the 3 angles it is pretty harsh (see the Euler angle calculations), but if you want something else it can be easier. For example, the up vector is the image of the j vector of the basis B under our transformation T, so it is [j1, j2, j3], no? You cannot easily get the focus point, but you can easily get the focus vector: it is [k1, k2, k3] (actually maybe -1 * [k1, k2, k3]). If you look at the lookAt_ method of X.camera3D, it does not give a focus point to WebGL but a normalized focus vector: the exact point does not matter, you just need one point on the focus line, and you can compute it now, no? It's just the sum of the current position and the current focus vector coordinates.
I hope my memory serves me well and I'm not talking complete nonsense.
Just one question: there is a setter for the view matrix, so why do you want to store and set up/focus/position instead of directly storing and setting the view matrix?
PS: caution, there could be a scale factor k, in which case the matrix would be different, but I don't think there is one in XTK.
PS 2: i, j and k denote the basis vectors; [number, number, number] is a 3D vector given by its coordinates.
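For illustration only, here is a rough sketch (in C++ with GLM, not XTK code) of recovering the camera vectors from a lookAt-style view matrix laid out as above; depending on the library a transpose may be needed, exactly as the caveat above says, and the names here are just placeholders.

#include <glm/glm.hpp>

// Rough sketch (C++/GLM, not XTK code). Assumes a lookAt-style view matrix whose
// upper-left 3x3 rows are the camera's right / up / negated view-direction axes
// and whose last column is the translation.
struct CameraVectors { glm::vec3 position, up, viewDir; };

CameraVectors extractFromView(const glm::mat4& view)
{
    // GLM is column-major: view[c][r] is column c, row r.
    glm::mat3 rot(view);              // upper-left 3x3
    glm::vec3 t(view[3]);             // translation column

    CameraVectors cam;
    cam.up       = glm::vec3(view[0][1], view[1][1], view[2][1]);  // second row
    cam.viewDir  = -glm::vec3(view[0][2], view[1][2], view[2][2]); // negated third row
    cam.position = -(glm::transpose(rot) * t); // eye = -R^T * t for an orthonormal R
    return cam;
}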

Related

OpenCV estimate distance & normal vector from homography

I'm matching a template to which I know my distance and my normal vector, i.e. if my homography is the identity matrix then my camera is at a distance of 1.0 m and my normal is at 0.
Now I have a second image in which I successfully aligned my template, giving a homography:
H = [0.82072, 0.05685, 66.75024]
    [0.02006, 0.86092, 39.34907]
    [0.00003, 0.00017,  1.00000]
I also have my camera matrix.
The OpenCV function
cv::decomposeHomographyMat()
gives me 4 solutions for the rotation (3x3 matrix), translation (3x1 matrix) and normal vector (3x1).
cv::warpPerspective()
is able to map the current view of the camera onto my template nearly perfectly.
So it should be possible to get the actual scaling (template to alignment) & the normal vector.
But I can't figure out how to actually choose the correct solution from cv::decomposeHomographyMat(). Am I missing something?
EDIT: Posted "question" without the question...
I figured it out.
Step one:
I create a set of points in the ROI that I can map to my template (points in the area defined by the corners of the ROI).
Step two:
Warp the points in the ROI (from step one; 8 points were enough in all my tests and use cases) with each of the solutions from cv::decomposeHomographyMat() (a rough sketch of this filtering is shown after these steps).
Exclude every solution that gives a point3D(x, y, z) with a z value < 0 (i.e. a point behind the camera).
Step three:
At this point you should have one or two solutions left.
All rotation matrices should be the same; only the normal and translation matrices should differ.
The translation matrices should satisfy:
Translation_Solution1 = -1 * Translation_Solution2
Then compare your ROI area to your template area.
If your ROI area is smaller than your template, your template has been "scaled down", i.e. your camera translated along z in the negative direction.
Otherwise your camera translated along z in the positive direction.
Choose the appropriate solution.
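For reference, here is a rough, untested sketch of the filtering in step two. The names filterDecompositions and roiPoints are just placeholders, H and K are assumed to be CV_64F, and the test used here is the usual "reference points must lie in front of the camera" constraint on the candidate plane normals rather than an exact reproduction of my warping check.

#include <opencv2/opencv.hpp>
#include <vector>

std::vector<int> filterDecompositions(const cv::Mat& H, const cv::Mat& K,
                                      const std::vector<cv::Point2f>& roiPoints)
{
    std::vector<cv::Mat> Rs, ts, normals;
    cv::decomposeHomographyMat(H, K, Rs, ts, normals);

    std::vector<int> kept;
    cv::Mat Kinv = K.inv();
    for (size_t i = 0; i < Rs.size(); ++i)
    {
        bool inFront = true;
        for (const cv::Point2f& p : roiPoints)
        {
            // Back-project the pixel to a normalized ray and require that it
            // points towards the candidate plane (positive "depth").
            cv::Mat pt = (cv::Mat_<double>(3, 1) << p.x, p.y, 1.0);
            cv::Mat ray = Kinv * pt;
            if (ray.dot(normals[i]) <= 0.0) { inFront = false; break; }
        }
        if (inFront)
            kept.push_back(static_cast<int>(i));
    }
    return kept; // typically one or two candidates survive, as described above
}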
My error was to think that warpPerspective() was actually solving the homography decomposition, but it's not.
This is covered in the paper: Faugeras O. D., Lustman F., "Motion and Structure from Motion in a Piecewise Planar Environment", 1988, page 9: https://www.researchgate.net/publication/243764888_Motion_and_Structure_from_Motion_in_a_Piecewise_Planar_Environment

How to calculate the new global position of a child object based on a relative change in the parent?

I have a spatial structure where I keep a few objects. All the positions of the objects are global.
I'm now trying to create a parent/child system, but I'm having trouble with the math. What I tried at first was: every time I move an object, I also move all of its children by the same amount. That works, but I also need rotation, so I tried using matrices. I built a model matrix for the child, using the position/rotation/scale relative to the parent:
glm::mat4 child_model;
//"this" is the parent
child_model = glm::translate(child_model, child_spatial.position - this->m_position);
child_model = child_model * glm::toMat4(glm::inverse(this->m_rotation) * child_spatial.rotation);
child_model = glm::scale(child_model, child_spatial.scale - this->m_scale);
I would then rotate/translate/scale the child matrix by the amount the parent was rotated/moved/scaled and then I would decompose the resulting matrix back to the global child:
child_model = child_model * glm::toMat4(this->m_rotation * rotation);
child_model = glm::translate(child_model, this->m_position + position);
child_model = glm::scale(child_model, this->m_scale * scale);
where position/rotation/scale are defined as:
//How much the parent changed
position = this->position - m_position;
rotation = glm::inverse(m_rotation) * this->rotation;
scale = this->scale - m_scale;
and:
glm::decompose(child_model, d_scale, d_rotation, d_translation, d_skew, d_perspective);
child_spatial.position = d_translation;
child_spatial.rotation = d_rotation;
child_spatial.scale = d_scale;
But it doesn't work and I'm not sure what is wrong. Everything just spins/moves out of control. What am I missing here?
This problem is similar to the joint animation used for computer animation. First you have to use or construct transformation matrices for each object. However, the transformation matrix of each child object should be relative to its parent's coordinate system (i.e., the child's coordinates are expressed in the parent's coordinate space). We'll call this matrix the ToParentMatrix. In addition, each object needs another matrix which transforms it all the way to the root (which is in world space). We'll call this matrix the ToRootMatrix. When we multiply these two matrices we get a ToWorldMatrix, which contains the position and orientation in world space. So the transformation of each object to world space is as follows in a 2-joint hierarchy (root, child1 (of root) and child2 (of child1)):
RootTransform = ToWorldMatrix; // The root object is already in world space
Child1.ToRootMatrix = RootTransform;
Child1.ToWorldMatrix = Child1.ToRootMatrix*Child1.ToParentMatrix;
Child2.ToRootMatrix = Child1.ToWorldMatrix;
Child2.ToWorldMatrix = Child2.ToRootMatrix*Child2.ToParentMatrix;
For more information, search for joint (or skeletal) animation and forward kinematics. Also Frank Luna's book has two nice chapters about skeletal animation.
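A minimal sketch of this idea in C++ with GLM (the struct and member names here are illustrative, not from the question): each node stores its transform relative to its parent, and world matrices are composed up the chain.

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>   // glm::translate, glm::scale
#include <glm/gtc/quaternion.hpp>         // glm::quat, glm::mat4_cast

struct Node
{
    glm::vec3 localPosition{0.0f};
    glm::quat localRotation{1.0f, 0.0f, 0.0f, 0.0f}; // identity (w, x, y, z)
    glm::vec3 localScale{1.0f};
    Node* parent = nullptr;

    // Transform from this node's space into its parent's space (TRS order).
    glm::mat4 toParentMatrix() const
    {
        return glm::translate(glm::mat4(1.0f), localPosition)
             * glm::mat4_cast(localRotation)              // same as glm::toMat4
             * glm::scale(glm::mat4(1.0f), localScale);
    }

    // Compose up the chain: parent-to-world first, then this node's local matrix.
    glm::mat4 toWorldMatrix() const
    {
        const glm::mat4 parentToWorld = parent ? parent->toWorldMatrix() : glm::mat4(1.0f);
        return parentToWorld * toParentMatrix();
    }
};

With this layout, moving or rotating a parent automatically carries its children along, and the child's world transform never has to be decomposed and reassembled by hand.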
You are way overthinking things because you are missing some math fundamentals. There are tons of linear algebra for 3D graphics tutorials and courses just a Google search away!
You need to read through some of them. Your mistake is that you think of a matrix as "how to apply scale/translate/rotate". You need to change that point of view. Matrices describe local spaces. It does not matter how you set them up. So stop thinking about "how things move". Think of every matrix as a space (base). Multiplying a vector by a matrix moves it into that space. And then you move it to the next space. Think of a space as a relative coordinate system.
This is probably not very helpful, but what I am trying to say is: Spend some time to study how linear algebra works! You will need it and it is quite easy in the end. I have a hard time finding a good tutorial but maybe https://www.khanacademy.org/math/linear-algebra looks good.

'Ray' creation for raypicking not fully working

I'm trying to implement a 'raypicker' for selecting objects within my project. I do not fully understand how to implement this, but I understand conceptually how it should work. I've been trying to learn how to do this, but most tutorials I find go way over my head. My current code is based on one of the recent tutorials I found, here.
After several hours of revisions, I believe the problem I'm having with my raypicker is actually the creation of the ray in the first place. If I substitute/hardcode my near/far planes with a coordinate that would indisputably be located within the region of a triangle, the picker identifies it correctly.
My problem is this: my ray creation doesn't seem to fully take my current "camera" or perspective into account, so camera rotation won't affect where my mouse is.
I believe that to remedy this I need something like gluUnProject(), but whenever I used it, the x, y, z coordinates returned were incredibly small.
My current ray creation is a mess. I tried to use methods that others proposed initially, but it seemed like whatever method I tried it never worked with my picker/intersection function.
Here's the code for my ray creation:
void oglWidget::mousePressEvent(QMouseEvent *event)
{
    QVector3D nearP = QVector3D(event->x()+camX, -event->y()-camY, -1.0);
    QVector3D farP = QVector3D(event->x()+camX, -event->y()-camY, 1.0);
    int i = -1;
    for (int x = 0; x < tileCount; x++)
    {
        bool rayInter = intersect(nearP, farP, tiles[x]->vertices);
        if (rayInter == true)
            i = x;
    }
    if (i != -1)
    {
        tiles[i]->showSelection();
    }
    else
    {
        for (int x = 0; x < tileCount; x++)
            tiles[x]->hideSelection();
    }
    //tiles[0]->showSelection();
}
To repeat: I used to load the viewport, model and projection matrices and unproject the mouse coordinates, but within a 1920x1080 window all I got were values in the range of -2 to 2 for x, y and z for each mouse event. That is why I'm trying this method, but this method doesn't work with camera rotation and zoom.
I don't want to do pixel color picking, because who knows, I may need this technique later on, and I'd rather not give up after the amount of effort I've put in so far.
As you seem to have problems constructing your rays, here's how I would do it (this has not been tested directly). Make sure that all vectors are in the same space. If you use multiple model matrices (or stacks thereof), the calculation needs to be repeated separately for each of them.
use pos = gluUnProject(winx, winy, near, ...) to get the position of the mouse coordinate on the near plane in model space, near being the value given to glFrustum() or gluPerspective()
the origin of the ray is the camera position in model space: rayorig = inv(modelmat) * camera_in_worldspace
the direction of the ray is the normalized vector from the ray origin to the position from step 1: raydir = normalize(pos - rayorig) (see the sketch after this list)
On the website linked they use two points for the ray and they don't seem to normalize the ray direction vector, so this is optional.
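Here is an untested sketch of those three steps using glm::unProject; the model, view, projection and viewport arguments are assumed to be the same values used for rendering, and the function name is just illustrative.

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>   // glm::unProject

glm::vec3 pickRayDirection(float mouseX, float mouseY,
                           const glm::mat4& model, const glm::mat4& view,
                           const glm::mat4& projection, const glm::vec4& viewport,
                           const glm::vec3& cameraPosWorld)
{
    // Window y grows downwards, OpenGL's grows upwards, so flip it.
    glm::vec3 win(mouseX, viewport.w - mouseY, 0.0f); // winZ = 0 -> near plane

    // 1. Mouse position on the near plane, unprojected into model space.
    glm::vec3 pos = glm::unProject(win, view * model, projection, viewport);

    // 2. Ray origin: the camera position brought into the same model space.
    glm::vec3 rayorig = glm::vec3(glm::inverse(model) * glm::vec4(cameraPosWorld, 1.0f));

    // 3. Normalized direction from the ray origin through the near-plane point.
    return glm::normalize(pos - rayorig);
}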
Ok, so this is the beginning of my trail of breadcrumbs.
I was somehow having issues with the Qt data types for the matrices, and with the logic pertaining to matrix transformations.
This particular problem in this question resulted from not actually performing any transformations whatsoever.
Steps to solving this problem were:
Converting the mouse coordinates into NDC space (within the range of -1 to 1: x / screen width * 2 - 1, (height - y) / height * 2 - 1)
Grabbing the 4x4 view matrix (it can be the one used when rendering, or recalculated)
Computing a new matrix equal to the inverse view matrix multiplied by the inverse projection matrix.
In order to build the ray, I had to do the following:
Take the previously calculated product of the inverse matrices and multiply it by a vec4 holding the previously calculated x and y coordinates, z = -1 and w = 1.
Then divide this vector by its last component (w).
Create another vec4 just like the first one, but with z = 1 instead of -1.
Once again divide it by its last component.
Now the ray's endpoints on the near and far planes have been created, so it can be intersected with anything along it in the scene (a sketch putting these steps together follows below).
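Putting those steps together, a minimal sketch with Qt's math types might look like this; the matrices are assumed to be the same ones used for rendering, and buildPickRay and the parameter names are just illustrative.

#include <QMatrix4x4>
#include <QVector3D>
#include <QVector4D>

void buildPickRay(int mouseX, int mouseY, int width, int height,
                  const QMatrix4x4& projection, const QMatrix4x4& view,
                  QVector3D& nearPoint, QVector3D& farPoint)
{
    // Mouse position in normalized device coordinates, with y flipped so that
    // +y points up instead of down.
    float ndcX = 2.0f * mouseX / width - 1.0f;
    float ndcY = (height - mouseY) / float(height) * 2.0f - 1.0f;

    QMatrix4x4 invViewProj = (projection * view).inverted();

    // Unproject onto the near (z = -1) and far (z = +1) planes, then divide by w.
    QVector4D nearClip = invViewProj * QVector4D(ndcX, ndcY, -1.0f, 1.0f);
    QVector4D farClip  = invViewProj * QVector4D(ndcX, ndcY,  1.0f, 1.0f);
    nearPoint = nearClip.toVector3D() / nearClip.w();
    farPoint  = farClip.toVector3D()  / farClip.w();
}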
I opened a series of questions (because of great uncertainty with my series of problems), so parts of my problem overlap in them too.
In here, I learned that I needed to take the screen height into consideration for switching the origin of the y axis to a Cartesian system, since window coordinates have the y axis start at the top left. Additionally, retrieval of the matrices was redundant, but also wrong since they were never declared "properly".
In here, I learned that unProject wasn't working because I was trying to pull the model and view matrices using OpenGL functions, but I never actually set them in the first place, because I built the matrices by hand. I solved that problem in two ways: I did the math manually, and I made all the matrices the same data type (they were mixed data types earlier, which led to issues as well).
And lastly, in here, I learned that my order of operations was slightly off (need to multiply matrices by a vector, not the reverse), that my near plane needs to be -1, not 0, and that the last value of the vector which would be multiplied with the matrices (value "w") needed to be 1.
Credit goes to the individuals who helped me solve these problems:
srobins of facepunch, in this thread
derhass from here, in this question, and this discussion
Take a look at
http://www.realtimerendering.com/intersections.html
It is a lot of help in determining intersections between various kinds of geometry.
http://geomalgorithms.com/code.html also has some C++ functions, one of which serves your purpose.

Getting the velocity vector from position vectors

I looked at a bunch of similar questions, and I cannot seem to find one that particularly answers my question. I am coding a simple 3d game, and I am trying to allow the player to pick up and move entities around my map. I essentially want to get a velocity vector that will "push" the physics object a distance from the player's eyes, wherever they are looking. Here's an example of this being done in another game (the player is holding a chair entity in front of his eyes).
To do this, I find out the player's eye angles, then get the forward vector from the angles, then calculate the velocity of the object. Here is my working code:
void Player::PickupOtherEntity( Entity& HoldingEntity )
{
    QAngle eyeAngles = this->GetPlayerEyeAngles();
    Vector3 vecPos = this->GetEyePosition();
    Vector3 vecDir = eyeAngles.Forward();
    Vector3 holdingEntPos = HoldingEntity.GetLocation();
    // update object by holding it a distance away
    vecPos.x += vecDir.x * DISTANCE_TO_HOLD;
    vecPos.y += vecDir.y * DISTANCE_TO_HOLD;
    vecPos.z += vecDir.z * DISTANCE_TO_HOLD;
    Vector3 vecVel = vecPos - holdingEntPos;
    vecVel = vecVel.Scale(OBJECT_SPEED_TO_MOVE);
    // set the entity's velocity as to "push" it to be in front of the player's eyes
    // at a distance of DISTANCE_TO_HOLD away
    HoldingEntity.SetVelocity(vecVel);
}
All that is great, but I want to convert my math so that I can apply an impulse. Instead of setting a completely new velocity to the object, I want to "add" some velocity to its existing velocity. So supposing I have its current velocity, what kind of math do I need to "add" velocity? This is essentially a game physics question. Thank you!
A very simple implementation could be like this:
velocity(t+delta) = velocity(t) + delta * acceleration(t)
acceleration(t) = force(t) / mass of the object
velocity, acceleration and force are vectors; t, delta and mass are scalars.
This only works reasonably well for small and equally spaced deltas. What you are essentially trying to achieve with this is a simulation of bodies using classical mechanics.
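As a minimal sketch of that update in code (using glm::vec3 just for illustration; dt is the time step in seconds, force and mass come from the game):

#include <glm/glm.hpp>

glm::vec3 integrateVelocity(const glm::vec3& velocity, const glm::vec3& force,
                            float mass, float dt)
{
    glm::vec3 acceleration = force / mass;  // acceleration(t) = force(t) / mass
    return velocity + acceleration * dt;    // velocity(t + delta) = velocity(t) + delta * acceleration(t)
}

"Adding" velocity then just means feeding the object's current velocity into this update instead of overwriting it with a brand-new value.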
An impulse is technically F*∆t for a constant F. Here we might want to use a*∆t instead, because mass is irrelevant. If you want to animate an impulse you have to decide what the change in velocity should be and how long it needs to take. It gets complicated fast.
To be honest, an impulse isn't the correct thing to do here. Instead it would be preferable to set a constant pick_up_velocity (people don't tend to pick things up using an impulse), and update the position by velocity.y each step until the object reaches the correct level:
while(entPos.y < holdingEntPos.y)
{
    entPos.y += pickupVel.y;
    //some sort of short delay
}
And as for floating in front of the player's eyes, set an EyeMovementEvent of some sort that also sends the correct change in position to any entity the player is holding.
And if I missed something and that's what you are already doing, remember that when humans apply an impulse, it is generally really high acceleration for a really short time, much less than a frame. You wouldn't see it in-game anyways.
Basic Newtonian/d'Alembert physics dictates:
derivative(position) = velocity
derivative(velocity) = acceleration
and also backwards:
integrate(acceleration) = velocity
integrate(velocity) = position
So for your engine you can use:
rectangle summation instead of integration (a numerical solution of the integral). Define a time constant dt [seconds], which is the interval between updates (a timer or 1/fps). The update code (which must be called periodically every dt):
vx+=ax*dt;
vy+=ay*dt;
vz+=az*dt;
x+=vx*dt;
y+=vy*dt;
z+=vz*dt;
where:
a{x,y,z} [m/s^2] is the actual acceleration (in your case the direction vector scaled so that a = Force/mass)
v{x,y,z} [m/s] is the actual velocity
x,y,z [m] is the actual position
These values have to be initialized: a, v to zero and x, y, z to the initial position
All objects/players have their own variables
A full stop is done by v = 0; a = 0;
Driving of objects is done only by changing a
In case of a collision, mirror the v vector about the collision normal,
and maybe multiply it by some k < 1.0 (0.95 for example) to account for energy loss on impact (a small sketch of this follows at the end of this answer)
You can add gravity or any other force field by adding a g vector:
vx+=ax*dt+gx*dt;
vy+=ay*dt+gy*dt;
vz+=az*dt+gz*dt;
You can also add friction and anything else you need.
PS. The same goes for angles: just use angle/omega/epsilon/I instead of x/v/a/m.
To be clear, by angles I mean rotation (pitch, yaw, roll) around the mass center.
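For the collision case mentioned above, a small sketch of mirroring the velocity about the collision normal with an energy-loss factor might look like this (names are illustrative, glm::vec3 is used just for the example):

#include <glm/glm.hpp>

glm::vec3 reflectVelocity(const glm::vec3& v, const glm::vec3& collisionNormal,
                          float restitution = 0.95f)
{
    glm::vec3 n = glm::normalize(collisionNormal);
    glm::vec3 mirrored = v - 2.0f * glm::dot(v, n) * n; // mirror v about the normal
    return restitution * mirrored;                      // k < 1.0 energy loss on impact
}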

Physically-based fracture simulation with opengl/c++

I am trying to implement the ideas in this paper for modeling fracture:
http://graphics.berkeley.edu/papers/Obrien-GMA-1999-08/index.html
I am stuck at a point (essentially page 4...) and would really appreciate any help. The part I am stuck on involves the deformation of a tetrahedron (using FEM).
I have a single tetrahedron defined by four nodes (each node has an x, y, z position), from which I calculate the following matrices:
u: each column is a vector containing material coordinates (x, y, z, 1) for each node (so 4 columns total), a 4x4 matrix
B: inverse(u), he calls this the basis matrix, a 4x4 matrix
P: each column is a vector containing real-world coordinates (x, y, z) for each node; I set P initially equal to u since the object is not deformed in the rest state, a 3x4 matrix
V: some initial velocities for (x, y, z) in each node, a 3x4 matrix
delta: basically an identity matrix, {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}, {0, 0, 0}}
I get x(u) = P*B*u and v(u) = V*B*u, but I'm not sure where to use these...
Also, I get dx = P*B*delta and dv = V*B*delta
I then get strain by Green's strain tensor, epsilon = 1/2(dx+transpose(dx)) - Identity_3x3
And then stress, sigma = lambda*trace(epsilon)*Identity_3x3 + 2*mu*epsilon
I get the elastic force by equation (24) on page 4 of the paper. It's just a big summation.
I then use explicit integration to update the real-world coordinates P. The idea is that the velocity update involves the force on the node of the tetrahedron and therefore affects the real-world coordinate position, making the object deform.
The problem, however, is that the force is incredibly small... something times 10^-19, etc., so C++ usually rounds it to 0. I've stepped through the calculations and can't figure out why.
I know I'm missing something here, just can't figure out what. What update am I not doing correctly?
A common reason why the force is small is that your Young's modulus (lambda) is too small. If you are using a scale of meters, a macro-scale object might have a Young's modulus of 10^5 and a 0.3 to 0.4 Poisson's ratio.
It sounds like what might be happening is that your tet is still in the rest configuration. In the absence of any deformation, the strain will be zero, and so in turn the stress and force will also be about zero. You can perturb the vertices in various ways and make sure your strain (epsilon) is being computed correctly. One simple test is to scale by 2 about the centroid, which should give you a positive strain; if you scale by 0.5 about the centroid you will get a negative strain (see the sketch below). If you translate the vertices uniformly you will get no change in strain (a common FEM invariant). If you rotate them you probably will get a change, but a co-rotational constitutive model wouldn't.
Note you might think that gravity would cause deformation, but unless one of the vertices is constrained, the uniform force on all vertices will cause a uniform translation which will not change the strain from being zero.
You definitely should not need to use arbitrary-precision arithmetic for the examples in the paper. In fact, floats are typically sufficient for these types of simulations.
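As a small sketch of the scale-about-the-centroid test suggested above (using glm::vec3 for the vertex positions; the function name is just illustrative):

#include <glm/glm.hpp>
#include <array>

std::array<glm::vec3, 4> scaleAboutCentroid(const std::array<glm::vec3, 4>& verts,
                                            float factor)
{
    // Average the four vertices to get the centroid.
    glm::vec3 centroid(0.0f);
    for (const glm::vec3& v : verts)
        centroid += v;
    centroid /= 4.0f;

    // Push each vertex away from (or towards) the centroid by the given factor.
    std::array<glm::vec3, 4> scaled;
    for (std::size_t i = 0; i < verts.size(); ++i)
        scaled[i] = centroid + factor * (verts[i] - centroid);
    return scaled;
}

Feeding the scaled positions back into the strain computation with factor = 2 should give a clearly positive strain, factor = 0.5 a negative one, and factor = 1 should leave it at zero.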
I might be mistaken, but C++ doubles only give about 15 significant decimal digits (at least that's what my std::numeric_limits says), so you're way out of precision.
So you might end up needing a library for arbitrary-precision arithmetic, e.g. http://gmplib.org/