OpenGL overlapping ugly rendering - opengl

I'm trying to render a scene with OpenGL 2.1, but the borders of overlapping shapes look weird. I tested various OpenGL initialisation settings without any change. I reduced my issue to a simple test application with two spheres, with the same result.
I tried several things with GL_DEPTH_TEST and enabling/disabling smoothing, without success.
Here is my result with two gluSphere objects:
You can see some sort of aliasing where a clean line should be enough to separate the blue and red faces...
I use SharpGL, but I don't think that's significant (I only use it as an OpenGL wrapper). Here is my simplest code to render the same thing (you can copy it into a Form to test it):
OpenGL gl;
IntPtr hdc;
int cpt;
private void Init()
{
cpt = 0;
hdc = this.Handle;
gl = new OpenGL();
gl.Create(SharpGL.Version.OpenGLVersion.OpenGL2_1, RenderContextType.NativeWindow, 500, 500, 32, hdc);
gl.Enable(OpenGL.GL_DEPTH_TEST);
gl.DepthFunc(OpenGL.GL_LEQUAL);
gl.ClearColor(1.0F, 1.0F, 1.0F, 0);
gl.ClearDepth(1);
gl.MatrixMode(OpenGL.GL_PROJECTION);
gl.Perspective(30, 1, 0.1F, 1.0E+7F);
gl.MatrixMode(OpenGL.GL_MODELVIEW);
gl.LookAt(0, 3000, 0, 0, 0, 0, 0, 0, 1);
}
private void Render(int angle)
{
gl.Clear(OpenGL.GL_COLOR_BUFFER_BIT | OpenGL.GL_DEPTH_BUFFER_BIT | OpenGL.GL_STENCIL_BUFFER_BIT);
RenderSphere(gl, 0, 0, 0, 0, 300, Color.Red);
RenderSphere(gl, 0, 0, 100, angle, 300, Color.Blue);
gl.Blit(hdc);
}
private void RenderSphere(OpenGL gl, int x, int y, int z, int angle, int radius, Color col)
{
IntPtr obj = gl.NewQuadric();
gl.PushMatrix();
gl.Translate(x, y, z);
gl.Rotate(angle, 0, 0);
gl.Color(new float[] { col.R / 255f, col.G / 255f, col.B / 255f, col.A / 255f });
gl.QuadricDrawStyle(obj, OpenGL.GLU_FILL);
gl.Sphere(obj, radius, 20, 10);
gl.Color(new float[] { 0, 0, 0, 1 });
gl.QuadricDrawStyle(obj, OpenGL.GLU_SILHOUETTE);
gl.Sphere(obj, radius, 20, 10);
gl.DeleteQuadric(obj);
gl.PopMatrix();
}
Thanks in advance for your advice!
EDIT:
I tested the following without success:
gl.Enable(OpenGL.GL_LINE_SMOOTH);
gl.Enable(OpenGL.GL_POLYGON_SMOOTH);
gl.ShadeModel(OpenGL.GL_SMOOTH);
gl.Hint(OpenGL.GL_LINE_SMOOTH_HINT, OpenGL.GL_NICEST);
gl.Hint(OpenGL.GL_POLYGON_SMOOTH_HINT, OpenGL.GL_NICEST);
gl.Hint(OpenGL.GL_PERSPECTIVE_CORRECTION_HINT, OpenGL.GL_NICEST);
EDIT2: With more faces, images with and without lines:
It is ... different... but not pleasing.

The issue has two causes.
The first is indeed a z-fighting issue, caused by the enormous distance between the near and far planes
gl.Perspective(30, 1, 0.1F, 1.0E+7F);
and by the fact that with a perspective projection the depth value is not linear. See also How to render depth linearly ....
This can be improved by putting the near plane as close as possible to the geometry. Since the distance to the object is 3000.0 and the radius of the sphere is 300, the near plane has to be less than 2700.0:
e.g.
gl.Perspective(30, 1, 2690.0F, 5000.0F);
The second issue is caused by the fact that the sphere consists of triangle primitives. As you suggested in your answer, you can improve that by increasing the number of primitives.
I will provide an alternative solution using a clip plane: clip the red sphere at the bottom and the blue sphere at the top, exactly in the plane where the spheres intersect, so that a cap is cut off from each sphere.
A clip plane can be set with glClipPlane and has to be enabled with glEnable.
The parameters to the clipping plane are interpreted as a Plane Equation.
The first 3 components of the plane equation are the normal vector to the clipping plane. The 4th component is the distance to the origin.
So the clip plane equation for the red sphere has to be {0, 0, -1, 50} and for the blue sphere {0, 0, 1, -50}.
Note that when glClipPlane is called, the equation is transformed by the inverse of the modelview matrix. So the clip plane has to be set before the model transformations such as rotation, translation and scale.
e.g.
private void Render(int angle)
{
gl.Clear(OpenGL.GL_COLOR_BUFFER_BIT | OpenGL.GL_DEPTH_BUFFER_BIT | OpenGL.GL_STENCIL_BUFFER_BIT);
double[] plane1 = new double[] {0, 0, -1, 50};
RenderSphere(gl, 0, 0, 0, 0, 300, Color.Red, plane1);
double[] plane2 = new double[] {0, 0, 1, -50};
RenderSphere(gl, 0, 0, 100, angle, 300, Color.Blue, plane2);
gl.Blit(hdc);
}
private void RenderSphere(
OpenGL gl, int x, int y, int z, int angle, int radius,
Color col, double[] plane)
{
IntPtr obj = gl.NewQuadric();
gl.ClipPlane(OpenGL.GL_CLIP_PLANE0, plane);
gl.Enable(OpenGL.GL_CLIP_PLANE0);
gl.PushMatrix();
gl.Translate(x, y, z);
gl.Rotate(angle, 0, 0);
gl.Color(new float[] { col.R / 255f, col.G / 255f, col.B / 255f, col.A / 255f });
gl.QuadricDrawStyle(obj, OpenGL.GLU_FILL);
gl.Sphere(obj, radius, 20, 10);
gl.Color(new float[] { 0, 0, 0, 1 });
gl.QuadricDrawStyle(obj, OpenGL.GLU_SILHOUETTE);
gl.Sphere(obj, radius, 20, 10);
gl.DeleteQuadric(obj);
gl.PopMatrix();
gl.Disable(OpenGL.GL_CLIP_PLANE0);
}

Solution 1 (not a good one): Applying gl.Scale(0.0001, 0.0001, 0.0001); to the ModelView matrix
Solution 2: The near plane has to be placed as far from the camera as possible to avoid compressing the z values into a small range. In this case, using 10 instead of 0.1 is enough. The best approach is to compute an adapted value depending on the distance of the objects (in this case the nearest object is at 2700).
I think we can focus on the fact that z is stored non-linearly, as explained in the link posted by #PikanshuKumar, and on its implicit consequences.
Result:
Only the faces are now cut by a line: there is a straight separator line at the equator.
Those lines disappear, as expected, when we increase the number of faces.

You're killing depth buffer precision with the way you set up your projection matrix
gl.MatrixMode(OpenGL.GL_PROJECTION);
gl.Perspective(30, 1, 0.1F, 1.0E+7F);
Essentially this compresses almost all of the depth buffer precision into the range 0.1 to 0.2 or so (I didn't do the math, just eyeballing it here).
In general you should choose the distance to the near clip plane to be as far away as possible while still keeping all the objects in your scene in front of it. The distance of the far plane doesn't matter that much (in fact, with the right matrix magic you can place it at infinity), but in general it's also a good idea to keep it as close as possible.
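For illustration, here is a minimal sketch of that advice in plain C/GLU calls (the SharpGL wrapper exposes the same functions as gl.* methods); the distances come from the sphere example above, and the safety margins are arbitrary assumptions:
// Derive the clip planes from the scene instead of using a huge fixed range.
float distToNearest  = 3000.0f - 300.0f;   // camera distance minus sphere radius
float distToFarthest = 3000.0f + 300.0f;   // camera distance plus sphere radius
float nearPlane = distToNearest  * 0.9f;   // small safety margin
float farPlane  = distToFarthest * 1.1f;
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(30.0, 1.0, nearPlane, farPlane);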

Related

opengl - camera cannot see object when glOrtho is used

I'm new to OpenGL and I'm trying to understand how the projection matrix works in it.
To create a simple case, I define a triangle in the world space and its coordinates are:
(0,1,0), (1,0,0), (-1,0,0)
I set the modelview matrix and projection matrix as below:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(
0, 0, 2,
0, 0, 0,
0, 1, 0);
glMatrixMode(GL_PROJECTION);
glOrtho(-2, 2, -2, 2, -0.1, -2.0); // does not work
// glOrtho(-2, 2, -2, 2, 0.1, 2.0); // works
From my understanding, gluLookAt() is used to set the viewing matrix. OpenGL does not have a concept of a "camera", so it transforms the entire world to achieve the effect of a camera. In the above code, I assume the "camera" is at (0,0,2), looking at (0,0,0). So OpenGL internally moves the triangle backwards along the z axis to z=-2.
To define a view frustum, glOrtho() takes 6 parameters. To make the triangle visible in the frustum, I set the near and far values to -0.1 and -2.0 respectively, which should indicate that the frustum includes [-0.1, -2.0] on the z axis.
I searched for similar questions and found someone stating that the last two parameters of glOrtho() are in fact -near and -far. But if that is correct, the following code should work (but it doesn't):
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(
0, 0, -2, // changed 2 to -2, thus the triangle should be transformed to z=2?
0, 0, 0,
0, 1, 0);
glMatrixMode(GL_PROJECTION);
glOrtho(-2, 2, -2, 2, -0.1, -2.0); // -near=-0.1, -far=-2.0, thus the frustum should include [0.1, 2.0], thus include the triangle
If I'm correct, the triangle should be drawn on the screen, so there must be something wrong with my code. Can anyone help?
First of all, note that the fixed function pipeline matrix stack and drawing by glBegin/glEnd sequences have been deprecated for more than 10 years.
Read about Fixed Function Pipeline and see Vertex Specification for a state of the art way of rendering.
If you use a view matrix like this:
gluLookAt(0, 0, 2, 0, 0, 0, 0, 1, 0);
Then the values for the near and the far plane have to be positive when you set up the projection matrix,
glOrtho(-2, 2, -2, 2, 0.1, 2.0);
because gluLookAt transforms the vertices to view space (in view space the z axis points out of the viewport), but the projection matrix inverts the z-axis.
But be careful: since the triangle is at z=0
(0,1,0), (1,0,0), (-1,0,0)
and the distance from the camera to the triangle is 2 (because of the view matrix), the triangle is placed exactly on the far plane (which is 2.0 too). I recommend increasing the distance to the far plane from 2.0 to (e.g.) 3.0:
glOrtho(-2, 2, -2, 2, 0.1, 3.0);
If you change the view matrix,
gluLookAt(0, 0, -2, 0, 0, 0, 0, 1, 0);
then the (view-space) z-axis still points out of the viewport, but you look at the "back" side of the triangle. The triangle is still in the center of the view (0, 0, 0), but the camera position has changed. The triangle is still in front of the camera.
If you would do
gluLookAt(0, 0, 2, 0, 0, 4, 0, 1, 0);
then you would look away from the triangle. You would have to project the backside of the view to the viewport to "see" the triangle (glOrtho(-2, 2, -2, 2, -0.1, -3.0);).
Further note that glOrtho multiplies the current matrix by the orthographic projection matrix. This means you should load the identity matrix before you use glOrtho, as you do with the modelview matrix:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-2, 2, -2, 2, 0.1, 2.0);
Explanation
The projection, view and model matrix interact together to present the objects (meshes) of a scene on the viewport.
The model matrix defines the position, orientation and scale of a single object (mesh) in the world space of the scene.
The view matrix defines the position and viewing direction of the observer (viewer) within the scene.
The projection matrix defines the area (volume) with respect to the observer (viewer) projected onto the viewport.
At orthographic projection, this area (volume) is defined by 6 distances (left, right, bottom, top, near and far) to the viewer's position.
View matrix
The view coordinate system describes the direction and position from which the scene is looked at. The view matrix transforms from world space to view (eye) space.
If the coordinate system of the view space is a right-handed system, then the X-axis points to the right, the Y-axis up and the Z-axis out of the view (note that in a right-handed system the Z-axis is the cross product of the X-axis and the Y-axis).
Projection matrix
The projection matrix describes the mapping from the 3D points of a scene, as seen from the view, to 2D points on the viewport. It transforms from eye space to clip space, and the coordinates in clip space are transformed to normalized device coordinates (NDC) by dividing by the w component of the clip coordinates. The NDC are in the range (-1,-1,-1) to (1,1,1). Every geometry which is outside of the clip space is clipped.
With orthographic projection, the coordinates in view space are linearly mapped to clip space coordinates, and the clip space coordinates are equal to the normalized device coordinates, because the w component is 1 (for a Cartesian input coordinate).
The values for left, right, bottom, top, near and far define a box. All the geometry which is inside the volume of the box is "visible" on the viewport.
The Orthographic Projection Matrix looks like this:
r = right, l = left, b = bottom, t = top, n = near, f = far
2/(r-l)        0              0              0
0              2/(t-b)        0              0
0              0              -2/(f-n)       0
-(r+l)/(r-l)   -(t+b)/(t-b)   -(f+n)/(f-n)   1
The z-axis is inverted by the projection matrix.
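To make the (column-major) memory layout above concrete, here is a small sketch that builds the same matrix by hand and loads it with glLoadMatrixf; the left/right/bottom/top/near/far values are simply the ones from the glOrtho example above:
GLfloat l = -2.0f, r = 2.0f, b = -2.0f, t = 2.0f, n = 0.1f, f = 2.0f;
// Column-major storage, written one column per row, matching the table above.
GLfloat ortho[16] = {
    2.0f/(r-l),   0.0f,         0.0f,         0.0f,
    0.0f,         2.0f/(t-b),   0.0f,         0.0f,
    0.0f,         0.0f,        -2.0f/(f-n),   0.0f,
   -(r+l)/(r-l), -(t+b)/(t-b), -(f+n)/(f-n),  1.0f
};
glMatrixMode(GL_PROJECTION);
glLoadMatrixf(ortho);  // equivalent to glLoadIdentity(); glOrtho(l, r, b, t, n, f);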

How to decrease first person shooting camera object in OpenGL

I am making a 3D OpenGL project which contains a camera object used as a shooting sight, but it is rendered at a very big size and fills the whole screen with white lines, like this:
I want to display the object at the center of the camera view, at a small size. How can I do this?
The code is here:
static GLdouble ort1[] = { -200, 200, -33, 140 };
static GLdouble viewer[] = { 525, 25, -180 };
static GLdouble up[] = { 0, 1, 0 };
static GLdouble objec[] = { 525.0, 25, -350 };
glClear(GL_COLOR_BUFFER_BIT);
glLoadIdentity();
gluLookAt(viewer[0], viewer[1], viewer[2], objec[0], objec[1], objec[2], 0, 1, 0);
glMatrixMode(GL_PROJECTION);
//glOrtho(-1, 1, -1, 1, -1, 100);
glLoadIdentity();
//gluPerspective(fov, 1.333, n, f);
gluPerspective(fov, 1, 0.001, 1000);
//gluPerspective(50, screenWidth / screenHeight, 0.000001, 2000);
glPointSize(2.0);
glMatrixMode(GL_MODELVIEW);
//cube.drawFace(10, 20, 10, 22);
drawFlorr();
glPushMatrix();
glTranslatef(viewer[0], viewer[1], viewer[2]); // Translation to the camera center
glRotatef(camAngle * 57.2957795, 0, 1, 0); // Rotate to correspond to the camera
//glTranslatef(0.016, 0, -0.05); // Offset to draw the object
glutWireCone(0.005, 1, 20, 20);
glPopMatrix();
I am new to game programming and am stuck on this problem.
You're not setting up the projection matrix correctly.
You need to set the mode to GL_PROJECTION, then set up the projection matrix so that it gives a correct perspective with the right field of view for looking at the target (the shooter's object of attention).
Then set the modelview matrix, mode GL_MODELVIEW.
The gun sight needs to be placed so that it is looking at the camera and the camera is looking at it; that is, on the line between the shooter's eyes and his object of attention, and perpendicular to that line. Do this in the modelview matrix, and call gluLookAt again, on the model.
(Ultimately the projection and modelview matrices get multiplied together, but OpenGL handles that for you.)
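One common way to order these calls, reusing the variables from the question and keeping its translate/rotate placement of the sight (the sight offset and cone size below are placeholder guesses, not tested values):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(fov, 1.0, 0.1, 1000.0);          // projection first, with a sensible near plane

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(viewer[0], viewer[1], viewer[2],      // camera position
          objec[0], objec[1], objec[2],         // the shooter's object of attention
          0, 1, 0);

drawFlorr();                                    // scene geometry

glPushMatrix();                                 // the sight, between the eye and the target
glTranslatef(viewer[0], viewer[1], viewer[2]);
glRotatef(camAngle * 57.2957795f, 0, 1, 0);
glTranslatef(0.0f, 0.0f, -2.0f);                // placeholder offset in front of the camera
glutWireCone(0.05, 0.2, 20, 20);                // much smaller cone (placeholder size)
glPopMatrix();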

How to rotate an object using glm::lookAt()?

I'm working on a scenario that involves some cone meshes that are to be used as spot lights in a deferred renderer. I need to scale, rotate and translate these cone meshes so that they point in the correct direction. According to one of my lecturers I can rotate the cones to align with a direction vector and move them to the correct position by multiplying its model matrix with the matrix returned by this,
glm::inverse(glm::lookAt(spot_light_direction, spot_light_position, up));
However, this doesn't seem to work; doing it causes all of the cones to be placed at the world origin. If I then translate the cones manually using another matrix, it seems that the cones aren't even facing the right direction.
Is there a better way to rotate objects so that they face a specific direction?
Here is my current code that gets executed for each cone,
//Move the cone to the correct place
glm::mat4 model = glm::mat4(1, 0, 0, 0,
0, 1, 0, 0,
0, 0, 1, 0,
spot_light_position.x, spot_light_position.y, spot_light_position.z, 1);
// Calculate rotation matrix
model *= glm::inverse(glm::lookAt(spot_light_direction, spot_light_position, up));
float missing_angle = 180 - (spot_light_angle / 2 + 90);
float scale = (spot_light_range * sin(missing_angle)) / sin(spot_light_angle / 2);
// Scale the cone to the correct dimensions
model *= glm::mat4(scale, 0, 0, 0,
0, scale, 0, 0,
0, 0, spot_light_range, 0,
0, 0, 0, 1);
// The origin of the cones is at the flat end, offset their position so that they rotate around the point.
model *= glm::mat4(1, 0, 0, 0,
0, 1, 0, 0,
0, 0, 1, 0,
0, 0, -1, 1);
I've noted this in the comments, but I'll mention again that the cones' origin is at the center of the flat end of the cone. I don't know whether this makes a difference or not, I just thought I'd bring it up.
Your order of the matrices seems correct, but the lookAt function expects:
glm::mat4 lookAt ( glm::vec3 eye, glm::vec3 center, glm::vec3 up )
Here eye is the location of the camera and center is the location of the object you are looking at (in your case, if you don't have that location, you can use
spot_light_direction + spot_light_position).
so just change
glm::lookAt(spot_light_direction, spot_light_position, up)
to
glm::lookAt(spot_light_position, spot_light_direction + spot_light_position, up)

dynamically render a 2d board in 3d view

I am a beginner in OpenGL. I am currently working on a program which takes as inputs the width and the length of a board. Given those inputs, I want to dynamically position my camera so that I get a view of the whole board. Let's say that my window size is 1024x768.
Is there any mathematical formula to compute the different parameters of the OpenGL function gluLookAt to make this possible?
The view I want to have of the board should look like this:
It doesn't matter if a board that is too big makes things look tiny. What matters most here is to position the camera in a way that makes a view of the whole board possible.
So far I am hopelessly changing the parameters of my gluLookAt call at random until I run into something decent for a given width X and height Y.
My gluPerspective call:
gluPerspective(70 ,1024 / 768,1,1000)
My gluLookAt call for a 40 * 40 board:
gluLookAt(20, 20, 60, 20, -4, -20, 0, 1, 0);
How I draw my board (a plane):
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
glMatrixMode( GL_MODELVIEW );
glLoadIdentity();
gluLookAt(20, 20, 60, 20, -4, -20, 0, 1, 0);
glBindTexture(GL_TEXTURE_2D, texture_sol);
glBegin(GL_QUADS);
glTexCoord2i(0, 0); glVertex3i(width, 0, height);
glTexCoord2i(10, 0); glVertex3i(0, 0, height);
glTexCoord2i(10, 10); glVertex3i(0, 0, 0);
glTexCoord2i(0, 10); glVertex3i(width, 0, 0);
glEnd();
The output looks as follows:
gluLookAt takes two points and a vector: the eye and centre positions, and the up vector. There's no issue with the last parameter. The first two are relevant to your question.
I see that your board in world space extends along the positive X and Z axes with some arbitrary width and height values. Let's take width = height = 1.0 for instance. So the board spans (0, 0), (1, 0), (1, 1), (0, 1); the Y value is ignored here since the board lies on the Y = 0 plane and has the same value for all vertices; these are just (X, Z) values.
Now coming to gluLookAt: eye is where the camera is in world space and centre is the point you want the camera to be looking at (in world space).
Say you want the camera to look at the centre of the board, I presume, so
centre = (width / 2.0f, 0, height / 2.0f);
Now you have to position the camera at its vantage point: say somewhere above the board but towards the positive Z direction, since that's where the user is (assuming your world space is right-handed and the positive Z direction is towards the viewer), so
eye = (width / 2.0f, 5.0f, 1.0f);
Since the farthest point on Z is 0, I just added one more to be slightly farther than that. Y is how far above you want to see the board from; I just chose 5.0 as an example. These are arbitrary values I came up with, you'll still have to experiment with them. But I hope you got the essence of how gluLookAt works.
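Putting those example values into an actual gluLookAt call might look like this (a sketch only; width and height are the board dimensions assumed above):
GLdouble cx = width / 2.0, cz = height / 2.0;   // centre of the board
gluLookAt(width / 2.0, 5.0, 1.0,                // eye: above the board, towards +Z
          cx, 0.0, cz,                          // centre: the middle of the board
          0.0, 1.0, 0.0);                       // up vector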
Though this is written as an XNA tutorial, the basic technique and math behind it should carry over to OpenGL and your project:
Positioning the Camera to View All Scene Objects
Also see
OpenGL FAQ
8.070 How can I automatically calculate a view that displays my entire model? (I know the bounding sphere and up vector.)
Edit in response to the comment question
A bounding sphere is simply a sphere that completely encloses your model. It can be described as:
A bounding sphere, S, of a point set P with n points is described by
a center point, c, and a radius, r.
So,
P = the vertices of your model (the board in this case)
c = origin of your model
r = the distance from the origin to the vertex in P that is farthest from the origin
So the Bounding Sphere for your board would be composed of the origin location (c) and the distance from one corner to the origin (r) assuming the board is a square and all points are equidistant.
For more complicated models, you may employ pre-created solutions [1] or implement your own calculations [2] [3]
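For the board, the FAQ 8.070 idea boils down to a few lines. This is only a sketch under the assumptions above (vertical field of view of 70°, bounding-sphere radius taken as half the board diagonal, eye placed at a 45° elevation):
const double PI = 3.14159265358979;
double fovY = 70.0 * PI / 180.0;                                  // must match gluPerspective
double r    = 0.5 * sqrt((double)(width * width + height * height)); // bounding-sphere radius
double dist = r / sin(fovY / 2.0);                                // eye distance that fits the sphere
double cx = width / 2.0, cz = height / 2.0;                       // bounding-sphere centre

double ex = cx;
double ey = dist * 0.7071;                                        // 45 degree elevation, chosen so that
double ez = cz + dist * 0.7071;                                   // the eye is exactly 'dist' from the centre

gluPerspective(70.0, 1024.0 / 768.0, 1.0, dist + r);              // far plane behind the board
gluLookAt(ex, ey, ez, cx, 0.0, cz, 0.0, 1.0, 0.0);
Since the window is wider than it is tall, the vertical field of view is the limiting one, so fitting the bounding sphere to it is enough.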

setting up an opengl perspective projection

I am having an issue setting up the viewing projection. I am drawing a cube with the vertices (0, 0, 0) (0, 0, 1) (0, 1, 1) (0, 1, 0) (1, 0, 0) (1, 1, 0) (1, 1, 1) and (1, 0, 1). This is how I am initializing the view:
void initGL(int x,int y, int w, int h)
{
glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB | GLUT_DEPTH );
glutInitWindowPosition( x, y );
glutInitWindowSize( w, h );
glutCreateWindow( "CSE328 Project 1" );
glutDisplayFunc(draw);
glFrontFace(GL_FRONT_AND_BACK);
glMatrixMode(GL_PROJECTION);
glFrustum(-10.0, 10.0, -10.0, 10.0, 2.0, 40.0);
glMatrixMode(GL_MODELVIEW);
gluLookAt(10, 10, 10, 0.5, 0.5, 0, 0, 1.0, 0);
glutMainLoop();
}
For some reason, the cube is filling the entire screen. I have tried changing the values of the frustum and lookAt methods, and either the cube is not visible at all, or it fills the entire viewport. In glLookAt I assume the 'eye' is positioned at (10, 10, 10) and looking at the point (0.5, 0.5, 0), which is on the surface of the cube. I thought this would give enough distance so the whole cube would be visible. Am I thinking about this in the wrong way? I have also tried moving the cube in the z direction so that it lies from z = 10 to z = 11, and is therefore within the clipping volume, but it gives similar results.
The cube has an edge length of 1, while the viewing volume spans 20 units in the x and y dimensions. The cube should therefore only occupy some pixels in the middle, even with an orthographic projection, unless some other transformation is applied during drawing.
I suggest making the frustum smaller (e.g. +/- 2.0f) and moving the camera closer, e.g. (4.0f, 4.0f, 4.0f).
Moving the eye position further from the cube by changing the first 3 parameters of gluLookAt() should make it smaller.
You could also replace your call to glFrustum() with a call to gluPerspective() which would make it easier to configure the perspective projection to your liking.
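A sketch of what that suggested setup could look like (the exact numbers are only examples; gluPerspective is shown as the alternative mentioned above, and w/h are the window size parameters from initGL):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(-2.0, 2.0, -2.0, 2.0, 2.0, 40.0);   // smaller frustum
// or: gluPerspective(45.0, (double)w / (double)h, 2.0, 40.0);

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(4.0, 4.0, 4.0,                      // eye moved closer than (10, 10, 10)
          0.5, 0.5, 0.5,                      // look at the centre of the cube
          0.0, 1.0, 0.0);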