Reflect camera in a plane - C++

I have a camera that is defined by an up vector, a position and a reference point (the camera looks at this point). From these I can of course also calculate the view direction.
Now I am trying to reflect this camera in a plane (e.g. z = 0). My first attempt was to reflect each vector in the plane with the corresponding reflection matrix, and it looked like this:
mat4 mReflection = mat4(1, 0, 0, 0,
                        0, 1, 0, 0,
                        0, 0, -1, 0,
                        0, 0, 0, 1);
up = mReflection * up;
position = mReflection * position;
lookAt = mReflection * lookAt;
But this didn't work very well and I don't know why. What is wrong with this method?
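For reference, here is a minimal sketch of the same approach written with GLM (an assumption on my part; the question's vector and matrix types are not shown). It also marks the two details that commonly break a mirrored camera: the point/direction distinction and the handedness flip.
#include <glm/glm.hpp>

// Sketch (assuming GLM types): reflect the camera vectors in the plane z = 0.
// position and lookAt are points (w = 1); up is a direction (w = 0). For the
// plane z = 0 the reflection matrix has no translation part, so the w value
// does not matter here, but it does for planes not through the origin.
void reflectCamera(glm::vec3& position, glm::vec3& lookAt, glm::vec3& up)
{
    const glm::mat4 mReflection(1, 0, 0, 0,
                                0, 1, 0, 0,
                                0, 0, -1, 0,
                                0, 0, 0, 1);
    position = glm::vec3(mReflection * glm::vec4(position, 1.0f));
    lookAt   = glm::vec3(mReflection * glm::vec4(lookAt, 1.0f));
    up       = glm::vec3(mReflection * glm::vec4(up, 0.0f));
    // A reflection is an improper transform: it flips the handedness of the
    // camera frame, so the scene appears with reversed triangle winding and
    // the front-face setting usually has to be swapped, e.g. glFrontFace(GL_CW).
}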

Related

OpenGL overlapping ugly rendering

I'm trying to render a scene with OpenGL 2.1, but the borders of overlapping shapes look wrong. I tested some OpenGL initialisation settings, but nothing changed. I reduced my issue to a simple test application with 2 spheres that shows the same result.
I tried several things with GL_DEPTH_TEST and enabling/disabling smoothing, without success.
Here is my result with 2 gluSphere:
You can see some sort of aliasing where a simple line should be enough to separate the blue and red faces...
I use SharpGL, but I don't think that is significant (I use it only as a thin OpenGL wrapper). Here is my simplest code to render the same thing (you can copy it into a Form to test it):
OpenGL gl;
IntPtr hdc;
int cpt;
private void Init()
{
    cpt = 0;
    hdc = this.Handle;
    gl = new OpenGL();
    gl.Create(SharpGL.Version.OpenGLVersion.OpenGL2_1, RenderContextType.NativeWindow, 500, 500, 32, hdc);
    gl.Enable(OpenGL.GL_DEPTH_TEST);
    gl.DepthFunc(OpenGL.GL_LEQUAL);
    gl.ClearColor(1.0F, 1.0F, 1.0F, 0);
    gl.ClearDepth(1);
    gl.MatrixMode(OpenGL.GL_PROJECTION);
    gl.Perspective(30, 1, 0.1F, 1.0E+7F);
    gl.MatrixMode(OpenGL.GL_MODELVIEW);
    gl.LookAt(0, 3000, 0, 0, 0, 0, 0, 0, 1);
}
private void Render(int angle)
{
    gl.Clear(OpenGL.GL_COLOR_BUFFER_BIT | OpenGL.GL_DEPTH_BUFFER_BIT | OpenGL.GL_STENCIL_BUFFER_BIT);
    RenderSphere(gl, 0, 0, 0, 0, 300, Color.Red);
    RenderSphere(gl, 0, 0, 100, angle, 300, Color.Blue);
    gl.Blit(hdc);
}
private void RenderSphere(OpenGL gl, int x, int y, int z, int angle, int radius, Color col)
{
    IntPtr obj = gl.NewQuadric();
    gl.PushMatrix();
    gl.Translate(x, y, z);
    gl.Rotate(angle, 0, 0);
    gl.Color(new float[] { col.R / 255f, col.G / 255f, col.B / 255f, col.A / 255f });
    gl.QuadricDrawStyle(obj, OpenGL.GLU_FILL);
    gl.Sphere(obj, radius, 20, 10);
    gl.Color(new float[] { 0, 0, 0, 1 });
    gl.QuadricDrawStyle(obj, OpenGL.GLU_SILHOUETTE);
    gl.Sphere(obj, radius, 20, 10);
    gl.DeleteQuadric(obj);
    gl.PopMatrix();
}
Thanks in advance for your advice!
EDIT: I tested this without success:
gl.Enable(OpenGL.GL_LINE_SMOOTH);
gl.Enable(OpenGL.GL_POLYGON_SMOOTH);
gl.ShadeModel(OpenGL.GL_SMOOTH);
gl.Hint(OpenGL.GL_LINE_SMOOTH_HINT, OpenGL.GL_NICEST);
gl.Hint(OpenGL.GL_POLYGON_SMOOTH_HINT, OpenGL.GL_NICEST);
gl.Hint(OpenGL.GL_PERSPECTIVE_CORRECTION_HINT, OpenGL.GL_NICEST);
EDIT2: With more faces, images with and without lines:
It is ... different... but not pleasing.
The issue has 2 causes.
The first one is indeed a Z-fighting issue, caused by the enormous distance between the near and far plane
gl.Perspective(30, 1, 0.1F, 1.0E+7F);
and by the fact that with perspective projection the depth is not linear. See also How to render depth linearly ....
This can be improved by moving the near plane as close as possible to the geometry. Since the distance to the objects is 3000.0 and the radius of a sphere is 300, the near plane distance has to be less than 2700.0:
e.g.
gl.Perspective(30, 1, 2690.0F, 5000.0F);
The second issue is caused by the fact that the spheres consist of triangle primitives. As you suggested in your answer, you can improve that by increasing the number of primitives.
I will provide an alternative solution using a clip plane: clip the red sphere at the bottom and the blue sphere at the top, exactly in the plane where the spheres intersect, so that a cap is cut off from each sphere.
A clip plane can be set by glClipPlane and has to be enabled by glEnable.
The parameters of the clipping plane are interpreted as a plane equation a·x + b·y + c·z + d = 0; geometry for which the equation evaluates to a negative value is clipped away.
The first 3 components of the plane equation form the normal vector of the clipping plane. The 4th component is the distance to the origin.
So the clip plane equation for the red sphere has to be {0, 0, -1, 50} and for the blue sphere {0, 0, 1, -50}.
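As a quick check: the plane {0, 0, -1, 50} keeps all points with -z + 50 >= 0, i.e. z <= 50, and the plane {0, 0, 1, -50} keeps all points with z - 50 >= 0, i.e. z >= 50. Since the sphere centers sit at z = 0 and z = 100 with equal radii, their intersection circle lies exactly in the plane z = 50, so each clip plane cuts off exactly the cap that would otherwise fight with the other sphere.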
Note that when glClipPlane is called, the equation is transformed by the inverse of the current modelview matrix. So the clip plane has to be set before the model transformations such as rotation, translation and scale.
e.g.
private void Render(int angle)
{
    gl.Clear(OpenGL.GL_COLOR_BUFFER_BIT | OpenGL.GL_DEPTH_BUFFER_BIT | OpenGL.GL_STENCIL_BUFFER_BIT);
    double[] plane1 = new double[] { 0, 0, -1, 50 };
    RenderSphere(gl, 0, 0, 0, 0, 300, Color.Red, plane1);
    double[] plane2 = new double[] { 0, 0, 1, -50 };
    RenderSphere(gl, 0, 0, 100, angle, 300, Color.Blue, plane2);
    gl.Blit(hdc);
}
private void RenderSphere(
    OpenGL gl, int x, int y, int z, int angle, int radius,
    Color col, double[] plane)
{
    IntPtr obj = gl.NewQuadric();
    gl.ClipPlane(OpenGL.GL_CLIP_PLANE0, plane);
    gl.Enable(OpenGL.GL_CLIP_PLANE0);
    gl.PushMatrix();
    gl.Translate(x, y, z);
    gl.Rotate(angle, 0, 0);
    gl.Color(new float[] { col.R / 255f, col.G / 255f, col.B / 255f, col.A / 255f });
    gl.QuadricDrawStyle(obj, OpenGL.GLU_FILL);
    gl.Sphere(obj, radius, 20, 10);
    gl.Color(new float[] { 0, 0, 0, 1 });
    gl.QuadricDrawStyle(obj, OpenGL.GLU_SILHOUETTE);
    gl.Sphere(obj, radius, 20, 10);
    gl.DeleteQuadric(obj);
    gl.PopMatrix();
    gl.Disable(OpenGL.GL_CLIP_PLANE0);
}
Solution 1 (not a good one): applying gl.Scale(0.0001, 0.0001, 0.0001); to the modelview matrix.
Solution 2: the near plane has to be as far away as possible to avoid compressing the z values into a small range. In this case, using 10 instead of 0.1 is enough. The best approach is to compute an adapted value depending on the objects' distance (in this case the nearest object is at 2700).
I think we can focus on the fact that z is stored non-linearly, as explained in the link #PikanshuKumar posted, and on its implicit consequences.
Result:
Only the faces are cut by a line: there is a straight separator line at the equator.
Those lines disappear, as expected, when the number of faces is increased.
You're killing depth buffer precision with the way you set up your projection matrix:
gl.MatrixMode(OpenGL.GL_PROJECTION);
gl.Perspective(30, 1, 0.1F, 1.0E+7F);
Essentially this compresses almost all of the depth buffer precision into the range 0.1 to 0.2 or so (I didn't do the math, just eyeballing it here).
In general you should choose the distance for the near clip plane to be as far away as possible while still keeping all the objects in your scene. The distance of the far plane doesn't matter that much (in fact, with the right matrix magic you can place it at infinity), but in general it's also a good idea to keep it as close as possible.
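For illustration (not part of the original answer), the standard projection math makes this compression easy to quantify. A small C++ sketch that evaluates the window-space depth for a few eye-space distances:
#include <cstdio>

// Window-space depth in [0, 1] produced by a standard perspective projection
// for a point at eye-space distance d in front of the camera, with near
// plane n and far plane f.
double windowDepth(double d, double n, double f)
{
    double zNdc = (f + n) / (f - n) - 2.0 * f * n / ((f - n) * d);
    return 0.5 * zNdc + 0.5;
}

int main()
{
    const double n = 0.1, f = 1.0e7;
    for (double d : { 0.1, 0.2, 2700.0, 3000.0, 3300.0 })
        std::printf("d = %8.1f -> depth = %.9f\n", d, windowDepth(d, n, f));
}
With n = 0.1 and f = 1e7, depth 0.5 is already reached at a distance of 0.2, and the spheres (roughly 2700 to 3300 units away) all map into a band of depth values narrower than 1e-5 just below 1.0; a 24-bit depth buffer, with steps of about 6e-8, resolves only on the order of a hundred distinct values across the whole scene.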

How to rotate an object using glm::lookAt()?

I'm working on a scenario that involves some cone meshes that are to be used as spot lights in a deferred renderer. I need to scale, rotate and translate these cone meshes so that they point in the correct direction. According to one of my lecturers, I can rotate the cones to align with a direction vector and move them to the correct position by multiplying their model matrix with the matrix returned by this:
glm::inverse(glm::lookAt(spot_light_direction, spot_light_position, up));
However, this doesn't seem to work: doing this causes all of the cones to be placed at the world origin. If I then translate the cones manually using another matrix, it seems that the cones aren't even facing the right direction.
Is there a better way to rotate objects so that they face a specific direction?
Here is my current code, which gets executed for each cone:
// Move the cone to the correct place
glm::mat4 model = glm::mat4(1, 0, 0, 0,
                            0, 1, 0, 0,
                            0, 0, 1, 0,
                            spot_light_position.x, spot_light_position.y, spot_light_position.z, 1);
// Calculate rotation matrix
model *= glm::inverse(glm::lookAt(spot_light_direction, spot_light_position, up));
float missing_angle = 180 - (spot_light_angle / 2 + 90);
// Note: sin() expects radians, so the degree angles have to be converted.
float scale = (spot_light_range * sin(glm::radians(missing_angle))) / sin(glm::radians(spot_light_angle / 2));
// Scale the cone to the correct dimensions
model *= glm::mat4(scale, 0, 0, 0,
                   0, scale, 0, 0,
                   0, 0, spot_light_range, 0,
                   0, 0, 0, 1);
// The origin of the cones is at the flat end; offset their position so that they rotate around the point.
model *= glm::mat4(1, 0, 0, 0,
                   0, 1, 0, 0,
                   0, 0, 1, 0,
                   0, 0, -1, 1);
I've noted this in the comments, but I'll mention again that the cones' origin is at the center of the flat end of the cone. I don't know whether this makes a difference; I just thought I'd bring it up.
Your order of the matrices seems correct, but the lookAt function expects:
glm::mat4 lookAt ( glm::vec3 eye, glm::vec3 center, glm::vec3 up )
Here eye is the location of the camera and center is the location of the object you are looking at (in your case, if you don't have that location, you can use spot_light_direction + spot_light_position).
So just change
glm::lookAt(spot_light_direction, spot_light_position, up)
to
glm::lookAt(spot_light_position, spot_light_direction + spot_light_position, up)
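Putting that into a single transform (a sketch; the variable names follow the question): inverse(lookAt(eye, center, up)) is the rigid transform that places an object at eye oriented towards center, i.e. the exact inverse of the camera transform, so it already contains the translation, and the separate translation matrix at the start of the question's code becomes redundant.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Sketch: orientation + position for one cone with the corrected arguments.
glm::mat4 coneModelMatrix(const glm::vec3& spot_light_position,
                          const glm::vec3& spot_light_direction,
                          const glm::vec3& up)
{
    return glm::inverse(glm::lookAt(spot_light_position,
                                    spot_light_position + spot_light_direction,
                                    up));
}
The scale and origin-offset matrices from the question are then multiplied onto this result, exactly as before.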

OpenGL - reconstruct position from depth in VS

I am trying to reconstruct the position from a depth texture in the vertex shader. Usually this is done in the pixel shader, but for some reason I need it in the VS to transform some geometry.
So here is my approach.
1) I calculate the view frustum corners in view space.
I use these NDC values as input. They are transformed via Inverse(view * proj) to put them into world space, and then transformed via the view matrix.
// GL - left handed - need to "swap" front and back Z coordinate
MyMath::Vector4 cornersVector4[] =
{
    // front
    MyMath::Vector4(-1, -1,  1, 1), // A
    MyMath::Vector4( 1, -1,  1, 1), // B
    MyMath::Vector4( 1,  1,  1, 1), // C
    MyMath::Vector4(-1,  1,  1, 1), // D
    // back
    MyMath::Vector4(-1, -1, -1, 1), // E
    MyMath::Vector4( 1, -1, -1, 1), // F
    MyMath::Vector4( 1,  1, -1, 1), // G
    MyMath::Vector4(-1,  1, -1, 1), // H
};
If I print debug output, it seems correct (the camera position is at distance zNear from the near plane, and the far plane is far enough away).
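For reference, the unprojection in step 1 could look like this with GLM (a sketch; the question uses its own MyMath types instead):
#include <glm/glm.hpp>

// Sketch: unproject one NDC corner into world space. The perspective divide
// by w after the inverse transform is essential.
glm::vec3 unprojectCorner(const glm::vec4& ndcCorner,
                          const glm::mat4& view, const glm::mat4& proj)
{
    glm::vec4 world = glm::inverse(proj * view) * ndcCorner;
    return glm::vec3(world) / world.w;
}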
2) Pass the values to the shader.
3) In the shader I do this:
vec3 _cornerPos0 = cornerPos0.xyz * mat3(viewInv);
vec3 _cornerPos1 = cornerPos1.xyz * mat3(viewInv);
vec3 _cornerPos2 = cornerPos2.xyz * mat3(viewInv);
vec3 _cornerPos3 = cornerPos3.xyz * mat3(viewInv);
float x = (TEXCOORD1.x / 100.0); //TEXCOORD1.x = <0, 100>
float y = (TEXCOORD1.y / 100.0); //TEXCOORD1.y = <0, 100>
vec3 ray = mix(mix(_cornerPos0, _cornerPos1, x),
               mix(_cornerPos2, _cornerPos3, x),
               y);
float depth = texture2D(depthTexture, vec2(x, y)).r; // .r: texture2D returns a vec4
//depth is created in the draw pass before with depth = vertexViewPos.z / farClipPlane;
vec3 reconstructed_posWS = camPos + (depth * ray);
But if I do this and translate my geometry from [0,0,0] to reconstructed_posWS, only part of the screen is covered. What could be incorrect?
PS: some calculations are redundant (transforming to a space and then back again), but speed is not a concern at the moment.

3D - Rotation Matrix from direction vector (Forward, Up, Right)

I need to get a rotation matrix from a direction vector (vForward). I also have vRight and vUp vectors. All of these are unit vectors.
I just need to get the rotation matrix.
To get the rotation matrix for rotation in only one plane (xy, parallel to the ground), I do this:
XMMATRIX xmResult;
Vec3f vFwd = pPlayer->VForward;
vFwd.z = 0;
vFwd.Normalize();
xmResult = XMMatrixSet(vFwd.y, -vFwd.x, 0, 0,
                       vFwd.x,  vFwd.y, 0, 0,
                       0,       0,      1, 0,
                       0,       0,      0, 1);
The above code only produces a rotation matrix for rotation around the Z axis:
I would like the code to rotate around all axes.
This is the coordinate system I'm forced to use. I know it is strange:
This is how I use my matrix later in the code:
XMStoreFloat3((XMFLOAT3*)&vStart, XMVector3Transform(XMLoadFloat3((XMFLOAT3*)&vStart), xmTransformation));
XMStoreFloat3((XMFLOAT3*)&vEnd, XMVector3Transform(XMLoadFloat3((XMFLOAT3*)&vEnd), xmTransformation));
Depending on how you use your matrices, Right, Up and Forward should correspond to the rows or columns of your matrix.
xmResult = XMMatrixSet(vRight.x, vRight.y, vRight.z, 0,
                       vFwd.x,   vFwd.y,   vFwd.z,   0,
                       vUp.x,    vUp.y,    vUp.z,    0,
                       0,        0,        0,        1);
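To make the row/column point concrete (an illustration using the question's Vec3f type): DirectXMath's XMVector3Transform multiplies row vectors, v' = v * M, so the basis vectors go into the rows as above; under a column-vector convention, v' = M * v, you would use the transpose instead.
#include <DirectXMath.h>
using namespace DirectX;

// Sketch: the same basis matrix in both vector conventions.
XMMATRIX BasisMatrix(const Vec3f& vRight, const Vec3f& vFwd, const Vec3f& vUp)
{
    // Row-vector convention (DirectXMath): basis vectors as rows.
    XMMATRIX rowConvention = XMMatrixSet(vRight.x, vRight.y, vRight.z, 0,
                                         vFwd.x,   vFwd.y,   vFwd.z,   0,
                                         vUp.x,    vUp.y,    vUp.z,    0,
                                         0,        0,        0,        1);
    // For a column-vector convention, use XMMatrixTranspose(rowConvention).
    return rowConvention;
}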

gluLookAt alternative doesn't work

I'm trying to calculate a lookat matrix myself, instead of using gluLookAt().
My problem is that my matrix doesn't work; using the same parameters with gluLookAt does work, however.
My way of creating a lookat matrix:
Vector3 Eye, At, Up; //these should be parameters =)
Vector3 zaxis = At - Eye; zaxis.Normalize();
Vector3 xaxis = Vector3::Cross(Up, zaxis); xaxis.Normalize();
Vector3 yaxis = Vector3::Cross(zaxis, xaxis); yaxis.Normalize();
float r[16] =
{
    xaxis.x, yaxis.x, zaxis.x, 0,
    xaxis.y, yaxis.y, zaxis.y, 0,
    xaxis.z, yaxis.z, zaxis.z, 0,
    0,       0,       0,       1,
};
Matrix Rotation;
memcpy(Rotation.values, r, sizeof(r));
float t[16] =
{
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    -Eye.x, -Eye.y, -Eye.z, 1,
};
Matrix Translation;
memcpy(Translation.values, t, sizeof(t));
View = Rotation * Translation; // I tried reversing this as well (Translation * Rotation)
Now, when I try to use this matrix by calling glMultMatrixf, nothing shows up in my engine, while using the same eye, lookat and up values with gluLookAt works perfectly, as I said before.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glMultMatrixf(View);
The problem must be somewhere in the code I posted here. I know the problem is not in my Vector3/Matrix classes, because they work fine when creating a projection matrix.
I assume you have a right-handed coordinate system (it is the default in OpenGL).
Try the following code. I think you forgot to normalize Up, and you have to put -zaxis in the matrix.
Vector3 Eye, At, Up; //these should be parameters =)
Vector3 zaxis = At - Eye; zaxis.Normalize();
Up.Normalize();
Vector3 xaxis = Vector3::Cross(Up, zaxis); xaxis.Normalize();
Vector3 yaxis = Vector3::Cross(zaxis, xaxis); yaxis.Normalize();
float r[16] =
{
    xaxis.x, yaxis.x, -zaxis.x, 0,
    xaxis.y, yaxis.y, -zaxis.y, 0,
    xaxis.z, yaxis.z, -zaxis.z, 0,
    0,       0,       0,        1,
};
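If you want to fold the Rotation * Translation product into a single matrix, the translation column becomes dot products with Eye. A sketch following the answer's construction (it assumes a Vector3::Dot helper, which your class may or may not have):
// Sketch: the complete view matrix in one column-major array,
// equivalent to Rotation * Translation above.
Vector3 zaxis = At - Eye;                     zaxis.Normalize();
Up.Normalize();
Vector3 xaxis = Vector3::Cross(Up, zaxis);    xaxis.Normalize();
Vector3 yaxis = Vector3::Cross(zaxis, xaxis); // already unit length

float view[16] =
{
    xaxis.x, yaxis.x, -zaxis.x, 0,
    xaxis.y, yaxis.y, -zaxis.y, 0,
    xaxis.z, yaxis.z, -zaxis.z, 0,
    -Vector3::Dot(xaxis, Eye), -Vector3::Dot(yaxis, Eye), Vector3::Dot(zaxis, Eye), 1,
};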