MVP matrix not working outside of shader? - C++

Odd problem here, I've been converting my current project from Qt's native matrix/vector classes to Eigen's, but I've come across an issue that I can't work out.
I calculate the MVP for the shader thus:
DiagonalMatrix< double, 4 > diag( Vector4d( 1.0, 1.0, -1.0, 1.0 ) );
scrMatrix_.noalias() = projMatrix_ * diag * camMatrix_.inverse();
The diag matrix inverts the Z axis because all my maths assumes the camera's aim vector points into the screen, whereas OpenGL assumes the opposite. Anyway, this works: the OpenGL side of the viewports appears and operates fine.
The other side of my viewport output is 2D overlay painting via Qt's paintEvent() system, grid labelling for example. So I use the same matrix to project a 3D location into the camera's clip space:
Vector4d outVec( scrMatrix_ * ( Vector4d() << inVec, 1.0 ).finished() );
Except I get totally wrong results:
inVec: 0 0 10
outVec: 11.9406 -7.20796
In this example I expected something more like outVec: 0.55 -0.15. My GLSL vertex shader performs the calculation like this:
gl_Position = scrMatrix_ * transform * vec4( inVec, 1.0 );
In the examples above transform is the identity, so I can't see any difference between the two projections, and yet the outcomes are totally different! I know this is a long shot, but can anyone see where I'm going wrong?
Update:
I reimplemented the old (working) Qt code for comparison purposes:
QVector3D qvec( vector( 0 ), vector( 1 ), vector( 2 ) );
QMatrix4x4 qmat( Affine3d( scrMatrix_ ).data() );
QPointF pnt = ( qvec * qmat ).toPointF() / 2.0;
Vs:
Vector4d vec( scrMatrix_ * ( Vector4d() << vector, 1.0 ).finished() );
QPointF pnt = QPointF( vec( 0 ), vec( 1 ) ) / 2.0;
To me they are identical, but only the Qt version works!

Well, I sussed it out: you need to divide the X, Y and Z components of the resulting vector by its W component (the clue's in the name "clip space"...). This is the perspective divide that Qt and OpenGL perform for you.
It's amazing how much Qt and OpenGL do in the background for you.
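For reference, here's a minimal sketch of that divide with Eigen (the helper name projectToNdc is mine; scrMatrix_ and inVec as in the question, using namespace Eigen assumed):
// Apply the MVP, then the perspective divide that Qt's
// QVector3D * QMatrix4x4 operator and OpenGL's rasteriser do behind the scenes.
Vector3d projectToNdc( const Matrix4d &scrMatrix, const Vector3d &inVec )
{
    Vector4d clip = scrMatrix * ( Vector4d() << inVec, 1.0 ).finished();
    return clip.head< 3 >() / clip.w(); // normalised device coordinates
}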

Related

Why does fwidth behave differently?

I'm working on a WebGL project to draw isolines on a 3D surface, developed on macOS with an AMD GPU. My idea is to colour the pixels based on elevation in the fragment shader. With some optimizations I can achieve a relatively consistent line width, and I am happy with that. However, when I tested it on Windows, it behaved differently.
Then I figured out it's because of fwidth(). I use fwidth() to prevent the fragment shader from colouring a whole horizontal plane when it happens to lie exactly at an isolevel. Please see the screenshot:
I solved this issue by adding the following GLSL line:
if ( fwidth( vPositionZ ) < 0.001 ) { /* then do not colour isolines on these pixels */ }
It works very well on macOS.
However, on Windows with an NVIDIA GPU, all the isolines are gone because fwidth(vPositionZ) always evaluates to 0.0, which doesn't make sense to me.
What am I doing wrong? Is there any better way to solve the issue presented in the first screenshot? Thank you all!
EDIT:
Here I attach my fragment shader. It's simplified, but I think it covers everything relevant. I know looping is slow, but for now I'm not worried about it.
uniform float zmin;       // min elevation
uniform vec3 lineColor;   // isoline colour
varying float vPositionZ; // elevation value for each vertex

// COUNT, interval, lineWidth and finalColor are set up earlier in the
// real shader; they are simplified out here.
float interval;
vec3 originColor = finalColor.rgb; // original surface colour
for ( int i = 0; i < COUNT; i ++ ) {
    float elevation = zmin + float( i + 1 ) * interval;
    // suppress isolines where the surface is flat (fwidth ~ 0 there);
    // written to a local because uniforms are read-only
    vec3 color = mix( originColor, lineColor, step( 0.001, fwidth( vPositionZ ) ) );
    if ( vPositionZ <= elevation + lineWidth && vPositionZ >= elevation - lineWidth ) {
        finalColor.rgb = color;
    }
    // same thing but without the condition:
    // finalColor.rgb = mix( mix( originColor, color, step( elevation - lineWidth, vPositionZ ) ),
    //                       originColor,
    //                       step( elevation + lineWidth, vPositionZ ) );
}
gl_FragColor = finalColor;
Environment: WebGL 2.0, ES version 300, Chrome browser.
Putting fwidth(vPositionZ) before the loop works - e.g. compute float fw = fwidth( vPositionZ ); once above the loop and use fw inside it. Otherwise, fwidth() evaluates to 0 for anything inside a loop.
I suspect this is a bug in the NVIDIA driver.

Kinect Scaling in C++

I am experimenting with the Kinect, however I am having some problems with scaling. The code below is from the kinect-kcb and, although the face tracking works for the 'mesh', I am having problems returning the scaling value for my own classes. The code returns a correct rotation and translation which function perfectly, but the scale only ever returns 1 for a long period (despite the mesh clearly changing size) and then slowly gets smaller - 0.98... etc. - which are clearly not correct scaling values.
float scale;
float rotation[ 3 ];
float translation[ 3 ];
hr = mResult->Get3DPose( &scale, rotation, translation );
if ( SUCCEEDED( hr ) ) {
    Vec3f r( rotation[ 0 ], rotation[ 1 ], rotation[ 2 ] );
    Vec3f t( translation[ 0 ], translation[ 1 ], translation[ 2 ] );
    face.mPoseMatrix.translate( t );
    face.mPoseMatrix.rotate( r );
    face.mPoseMatrix.translate( -t );
    face.mPoseMatrix.translate( t );
    face.mPoseMatrix.scale( Vec3f::one() * scale );
}
This scale value is used repeatedly throughout the code, but does not seem to change often enough (example functions - not in order):
hr = mModel->Get3DShape( shapeUnits, numShapeUnits, animationUnits, numAnimationUnits, scale, rotation, translation, pts, numVertices );
hr = mModel->GetProjectedShape( &mConfigColor, mSensorData.ZoomFactor, viewOffset, shapeUnits, numShapeUnits, animationUnits,
numAnimationUnits, scale, rotation, translation, pts, numVertices );
The Kinect has a function FaceModel.Scale(); however, this only returns a constant value, which I assume is the initial scaling value for the 3D model. I assumed the scaling value above would then change as the user moved closer to and further from the camera.
The method IFTResult::Get3DPose, among other things, gives you the face scale value. If it is equal to 1.0, then the face scale is equal to that of the loaded 3D model (so nothing to do?).
If, when reloading the 3D model, the scale value is not equal to 1.0, then you need to do work on the model.
Have you tried outputting some debug info of what IFTResult::Get3DPose assigns to pScale?
It's also possible that the system is failing to track; you can check this with IFTResult::GetStatus.
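For example, a minimal sketch (assuming the Face Tracking SDK's IFTResult interface and the mResult pointer from the question):
// GetStatus returns S_OK when the face was tracked this frame; otherwise
// the pose values are stale and the scale cannot be trusted.
if ( SUCCEEDED( mResult->GetStatus() ) ) {
    float scale, rotation[ 3 ], translation[ 3 ];
    if ( SUCCEEDED( mResult->Get3DPose( &scale, rotation, translation ) ) ) {
        char buf[ 64 ];
        sprintf_s( buf, "pScale = %f\n", scale ); // watch this frame to frame
        OutputDebugStringA( buf );
    }
} else {
    OutputDebugStringA( "Face tracking lost this frame\n" );
}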
It may be that what you are after is the magnitude of the face rectangle; this would scale with the proximity of the image subject.
Here's a relevant Code Project link.

Cannot both rotate and translate my scene - Direct3D

I have drawn a cube onto the screen and I want to both rotate and translate the scene:
// Translation
XMStoreFloat4x4( &m_constantBufferData.model, XMMatrixTranspose( XMMatrixTranslation( placement->GetPosX(), placement->GetPosY(), placement->GetPosZ() ) ) );
// Rotation
XMStoreFloat4x4( &m_constantBufferData.model, XMMatrixTranspose(XMMatrixRotationX( placement->GetRotX() ) ) );
XMStoreFloat4x4( &m_constantBufferData.model, XMMatrixTranspose(XMMatrixRotationY( placement->GetRotY() ) ) );
XMStoreFloat4x4( &m_constantBufferData.model, XMMatrixTranspose(XMMatrixRotationZ( placement->GetRotZ() ) ) );
The problem is, only the translation is working... Do I have to set something before doing the rotations too?
I have used the default Windows Phone 8 Direct3D C++ project in Visual Studio 2012.
I have passed in a few more variables and, thanks to IntelliSense, found out there was an XMMatrixTranslation function.
I added my positioning to this matrix and also hooked up the rotation to some custom variables.
The cube will move (translate), but I am guessing I need to save this movement somehow and THEN do the rotation.
Anything I can add to this to help solve the issue?
You are overwriting the contents of m_constantBufferData.model every time. You need to call XMMatrixMultiply on the four matrices to combine the transformations into a single matrix, then store the final result. For example:
// Rotation
XMMATRIX m = XMMatrixRotationX( placement->GetRotX() );
m = XMMatrixMultiply( m, XMMatrixRotationY( placement->GetRotY() ) );
m = XMMatrixMultiply( m, XMMatrixRotationZ( placement->GetRotZ() ) );
// Translation
m = XMMatrixMultiply(m, XMMatrixTranslation( placement->GetPosX(), placement->GetPosY(), placement->GetPosZ() ) );
XMStoreFloat4x4( &m_constantBufferData.model, XMMatrixTranspose(m) );
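Equivalently - a sketch, assuming DirectXMath's XMMATRIX operator* overloads are enabled (they are by default) - the same composition reads left to right, since DirectXMath uses row vectors:
// Rotate about X, then Y, then Z, then translate.
XMMATRIX world = XMMatrixRotationX( placement->GetRotX() )
               * XMMatrixRotationY( placement->GetRotY() )
               * XMMatrixRotationZ( placement->GetRotZ() )
               * XMMatrixTranslation( placement->GetPosX(),
                                      placement->GetPosY(),
                                      placement->GetPosZ() );
XMStoreFloat4x4( &m_constantBufferData.model, XMMatrixTranspose( world ) );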

Precision issue - viewpoint far from origin - OpenGL C++

I have a camera class for controlling the camera, with the main function:
void PNDCAMERA::renderMatrix()
{
    float dttime = getElapsedSeconds();
    GetCursorPos( &cmc.p_cursorPos );
    ScreenToClient( hWnd, &cmc.p_cursorPos );
    double d_horangle = ( (double)cmc.p_cursorPos.x - (double)cmc.p_origin.x ) / (double)screenWidth * PI;
    double d_verangle = ( (double)cmc.p_cursorPos.y - (double)cmc.p_origin.y ) / (double)screenHeight * PI;
    cmc.horizontalAngle = d_horangle + cmc.d_horangle_prev;
    cmc.verticalAngle = d_verangle + cmc.d_verangle_prev;
    if ( cmc.verticalAngle > PI / 2 ) cmc.verticalAngle = PI / 2;
    if ( cmc.verticalAngle < -PI / 2 ) cmc.verticalAngle = -PI / 2;
    changevAngle( cmc.verticalAngle );
    changehAngle( cmc.horizontalAngle );
    rightVector = glm::vec3( sin( horizontalAngle - PI / 2.0f ), 0, cos( horizontalAngle - PI / 2.0f ) );
    directionVector = glm::vec3( cos( verticalAngle ) * sin( horizontalAngle ), sin( verticalAngle ), cos( verticalAngle ) * cos( horizontalAngle ) );
    upVector = glm::vec3( glm::cross( rightVector, directionVector ) );
    // glm::normalize returns a copy, so the result must be assigned
    upVector = glm::normalize( upVector );
    directionVector = glm::normalize( directionVector );
    rightVector = glm::normalize( rightVector );
    if ( moveForw == true )
    {
        cameraPosition = cameraPosition + directionVector * (float)C_SPEED * dttime;
    }
    if ( moveBack == true )
    {
        cameraPosition = cameraPosition - directionVector * (float)C_SPEED * dttime;
    }
    if ( moveRight == true )
    {
        cameraPosition = cameraPosition + rightVector * (float)C_SPEED * dttime;
    }
    if ( moveLeft == true )
    {
        cameraPosition = cameraPosition - rightVector * (float)C_SPEED * dttime;
    }
    glViewport( 0, 0, screenWidth, screenHeight );
    glScissor( 0, 0, screenWidth, screenHeight );
    projection_matrix = glm::perspective( 60.0f, float( screenWidth ) / float( screenHeight ), 1.0f, 40000.0f );
    view_matrix = glm::lookAt(
        cameraPosition,
        cameraPosition + directionVector,
        upVector );
    gShader->bindShader();
    gShader->sendUniform4x4( "model_matrix", glm::value_ptr( model_matrix ) );
    gShader->sendUniform4x4( "view_matrix", glm::value_ptr( view_matrix ) );
    gShader->sendUniform4x4( "projection_matrix", glm::value_ptr( projection_matrix ) );
    gShader->sendUniform( "camera_position", cameraPosition.x, cameraPosition.y, cameraPosition.z );
    gShader->sendUniform( "screen_size", (GLfloat)screenWidth, (GLfloat)screenHeight );
};
It runs smoothly; I can control the angle with my mouse in the X and Y directions, but not around the Z axis (Y is the "up" direction in world space).
In my rendering method I render the terrain grid with one VAO call. The grid itself is a quad at the center (highest LOD), and the others are L-shaped grids scaled by powers of 2. It is always repositioned in front of the camera, scaled into world space, and displaced by a heightmap.
rcampos.x = round((camera_position.x)/(pow(2,6)*gridscale))*(pow(2,6)*gridscale);
rcampos.y = 0;
rcampos.z = round((camera_position.z)/(pow(2,6)*gridscale))*(pow(2,6)*gridscale);
vPos = vec3(uv.x,0,uv.y)*pow(2,LOD)*gridscale + rcampos;
vPos.y = texture(hmap,vPos.xz/horizontal_scale).r*vertical_scale;
The problem:
The camera starts at the origin, at (0,0,0). When I move it far away from that point, the rotation about the X axis becomes discontinuous. It feels as if the mouse cursor were snapped to a grid in screen space, and only the positions at grid points were registered as cursor movement.
I've recorded the camera position where it gets quite noticeable: about 1,000,000 from the origin in the X or Z directions. I've noticed that this 'lag' increases linearly with distance from the origin.
There is also a little Z-fighting at this point (or a similar effect), even if I use a single plane with no displacement, and no planes can overlap. (I use tessellation shaders and render patches.) Black spots appear on the patches. It may be caused by the fog:
float fc = (view_matrix*vec4(Pos,1)).z/(view_matrix*vec4(Pos,1)).w;
float fResult = exp(-pow(0.00005f*fc, 2.0));
fResult = clamp(fResult, 0.0, 1.0);
gl_FragColor = vec4(mix(vec4(0.0,0.0,0.0,0),vec4(n,1),fResult));
Another strange behavior is a slight rotation about the Z axis; this increases with distance too, but I don't apply this kind of rotation anywhere.
Variable formats:
The vertices are in unsigned short format; the indices are in unsigned int format.
The cmc struct is the camera/cursor struct with double members.
PI and C_SPEED are #define constants.
Additional information:
The grid is created from the above-mentioned ushort array, with a spacing of 1. In the shader I scale it with a constant, then use tessellation to achieve the best performance and the largest view distance.
The final position of a vertex is calculated in the tessellation evaluation shader.
mat4 MVP = projection_matrix*view_matrix*model_matrix;
As you can see, I send my matrices to the shader using the glm library.
+Q:
How can the precision of a float (or any other format) cause this kind of 'precision loss', or whatever causes the problem? The view_matrix could be the cause, but I still cannot output it to the screen at runtime.
PS: I don't know if this helps, but the view matrix at about the 'lag start location' is
-0.49662 -0.49662 0.863129 0
0.00514956 0.994097 0.108373 0
-0.867953 0.0582648 -0.493217 0
1.62681e+006 16383.3 -290126 1
EDIT
Comparing the camera position and view matrix:
view matrix = 0.967928 0.967928 0.248814 0
-0.00387854 0.988207 0.153079 0
-0.251198 -0.149134 0.956378 0
-2.88212e+006 89517.1 -694945 1
position = 2.9657e+006, 6741.52, -46002
It's a long post, so I might not answer everything.
I think it is most likely a precision issue. Let's start with the camera rotation problem. I think the main problem is here:
view_matrix = glm::lookAt(
cameraPosition,
cameraPosition+directionVector,
upVector);
As you said, the position is quite a big number, like 2.9657e+006 - and look at what glm does in glm::lookAt:
GLM_FUNC_QUALIFIER detail::tmat4x4<T> lookAt
(
    detail::tvec3<T> const & eye,
    detail::tvec3<T> const & center,
    detail::tvec3<T> const & up
)
{
    detail::tvec3<T> f = normalize(center - eye);
    detail::tvec3<T> u = normalize(up);
    detail::tvec3<T> s = normalize(cross(f, u));
    u = cross(s, f);
In your case, eye and center are these big (and very similar) numbers, and glm subtracts them to compute f. This is bad, because when you subtract two almost equal floats, the most significant digits cancel out, leaving you with only the insignificant (most erroneous) digits. You then use this result for further computations, which only amplifies the error. Check this link for some details.
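You can see the effect in isolation with a minimal sketch (the magnitude is taken from your camera position; the printed value assumes standard IEEE-754 single-precision floats):
#include <cstdio>

int main()
{
    float eye    = 2965700.0f;   // camera far from the origin
    float center = eye + 0.37f;  // look target ~0.37 units ahead
    // Floats near 3e6 are spaced 0.25 apart, so the true difference of
    // 0.37 cannot be represented; the subtraction snaps to a multiple of 0.25.
    printf( "center - eye = %f\n", center - eye ); // prints 0.250000
    return 0;
}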
The z-fighting is a similar issue. The z-buffer is not linear; it has the best resolution near the camera because of the perspective divide. The z-buffer range is set according to your near and far clipping plane values, and you always want the smallest possible ratio between the far and near values (generally far/near should not be greater than 30000). With your values (1.0 and 40000.0), simply pushing the near plane out to 10.0 would cut the ratio to 4000. There is a very good explanation of this on the OpenGL wiki, I suggest you read it :)
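To make the non-linearity concrete, a small sketch (using the [0,1] depth-buffer convention; the helper name is mine) showing how little depth resolution is left far from the camera with your near = 1 and far = 40000:
#include <cstdio>

// Depth-buffer value in [0,1] that a standard perspective projection
// produces for an eye-space distance z in front of the camera.
double depthValue( double z, double n, double f )
{
    return ( f * ( z - n ) ) / ( z * ( f - n ) );
}

int main()
{
    printf( "z = 100:   depth = %.6f\n", depthValue( 100.0,   1.0, 40000.0 ) ); // ~0.990025
    printf( "z = 20000: depth = %.6f\n", depthValue( 20000.0, 1.0, 40000.0 ) ); // ~0.999975
    return 0; // nearly the whole [0,1] range is spent on the first 100 units
}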
Back to the camera issue - first, I would consider whether you really need such a huge scene. I don't think you do, but if so, you could try computing your view matrix differently: compute the rotation and translation separately, which could help in your case. The way I usually handle the camera:
glm::vec3 cameraPos;
glm::vec3 cameraRot;
glm::vec3 cameraPosLag;
glm::vec3 cameraRotLag;
glm::mat4 g_CameraViewMatrix; // assumed global, not shown in the original snippet
bool mouseRotationEnabled;    // assumed global, not shown in the original snippet
int ox, oy;
const float inertia = 0.08f;     // mouse inertia
const float rotateSpeed = 0.2f;  // mouse rotate speed (sensitivity)
const float walkSpeed = 0.25f;   // walking speed (WASD)

void updateCameraViewMatrix() {
    // camera inertia
    cameraPosLag += (cameraPos - cameraPosLag) * inertia;
    cameraRotLag += (cameraRot - cameraRotLag) * inertia;
    // view transform
    g_CameraViewMatrix = glm::rotate(glm::mat4(1.0f), cameraRotLag[0], glm::vec3(1.0, 0.0, 0.0));
    g_CameraViewMatrix = glm::rotate(g_CameraViewMatrix, cameraRotLag[1], glm::vec3(0.0, 1.0, 0.0));
    g_CameraViewMatrix = glm::translate(g_CameraViewMatrix, cameraPosLag);
}

void mousePositionChanged(int x, int y) {
    float dx = (float) (x - ox);
    float dy = (float) (y - oy);
    ox = x;
    oy = y;
    if (mouseRotationEnabled) {
        cameraRot[0] += dy * rotateSpeed;
        cameraRot[1] += dx * rotateSpeed;
    }
}

void keyboardAction(int key, int action) {
    switch (key) {
    case 'S': // backwards
        cameraPos[0] -= g_CameraViewMatrix[0][2] * walkSpeed;
        cameraPos[1] -= g_CameraViewMatrix[1][2] * walkSpeed;
        cameraPos[2] -= g_CameraViewMatrix[2][2] * walkSpeed;
        break;
    ...
    }
}
This way, the position does not affect your rotation. I should add that I adapted this code from the NVIDIA CUDA samples v5.0 (Smoke Particles) - I really like it :)
Hope at least some of this helps.

Processing & OpenGL - Changing the camera position?

I'm doing a small project where I plot data sets onto a world. I've got the plotting done. Now I want to implement camera movement.
I have some code where if a user holds down c and drags the mouse, the camera position is changed. The thing is, I'm not sure how to calculate the camera movement from the mouse movement.
This is the camera code for the default position: camera(width/2.0, height/2.0, (height/2.0) / tan(PI*60.0 / 360.0), width/2.0, height/2.0, 0, 0, 1, 0);
How can I change the camera position in relation to the mouse dragging? (I've tried using mouseX and mouseY to offset the camera eye position, but it doesn't work well.)
If you have a direction vector, you can set the position of your camera as follows (abstract code):
pos += speed * normalize( direction );
That's for moving forward. If you want to move backward, just multiply your normalized direction vector by -1. For strafing left and right, use something like this:
pos += speed * normalize( cross_product( direction, upvector ) ); // strafing right
pos += speed * normalize( cross_product( upvector, direction ) ); // strafing left
Here are some notes on vector operations (from one of my "HelloWorld" applications =) ):
normalize( vec ) returns vec scaled to length 1; it "cuts" vec to the needed length.
cross_product( vec_a, vec_b ) returns vec_c, which is perpendicular to both vec_a and vec_b (see this article for more).
My version of cross_product() looks like this:
Vector Vector::CrossProduct(const Vector &v)
{
    double k1 = (y * v.z) - (z * v.y);
    double k2 = (z * v.x) - (x * v.z);
    double k3 = (x * v.y) - (y * v.x);
    return Vector(NumBounds(k1), NumBounds(k2), NumBounds(k3));
    // NumBounds(v) returns 0 when v is less than 10 ^ -8
}
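As a concrete version of the abstract code above, here is a minimal sketch using glm (the function and parameter names are mine; pos, direction and upvector are whatever your camera maintains):
#include <glm/glm.hpp>

// Move the camera: positive amounts move forward / strafe right,
// negative amounts move backward / strafe left.
void moveCamera( glm::vec3 &pos, const glm::vec3 &direction,
                 const glm::vec3 &upvector, float forwardAmount, float strafeAmount )
{
    glm::vec3 forward = glm::normalize( direction );
    glm::vec3 right   = glm::normalize( glm::cross( forward, upvector ) );
    pos += forwardAmount * forward;
    pos += strafeAmount  * right;
}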
Hope this will help =)
I think by far the easiest thing to do would be to use the PeasyCam library:
http://mrfeinberg.com/peasycam/
This library gives you access to your camera using the mouse, which you can constrain in various ways, as well as various getters that make it easy to access information about the camera and its current state.