I'm using the Assimp (Open Asset Import) library to import a .3ds file. Meshes are rendered with normals and materials. I'm using Qt, and drivers are up to date on all the computers we tried.
When I rotate the camera around the objects, I can see some mesh faces flickering. The same happens when using Assimp's render() method (the sample code downloaded from the Assimp website).
1) The strange thing is that it usually happens with small .3ds files, while it never happens with big ones.
2) If I am really close there are no artifacts. The farther away I am, the more artifacts I see.
Is this a problem with the .3ds files or with my code?
Example of big .3ds (20MB)
Example of small .3ds (3MB)
Here is my Draw() function (it uses display lists, but I can't get rid of them):
void Preview::BuildObjectsLists(Scene *sc, GLenum mode){
    QHash<QString, SceneObject*>& hash = sc->getObj();
    int counter = 0;
    for (QHash<QString, SceneObject*>::ConstIterator i = hash.begin(); i != hash.end(); ++i) {
        glNewList(index - counter, GL_COMPILE);
        Mesh* p = dynamic_cast<Mesh*>(i.value());
        if (p) {
            Matrix4x4& a = p->getTrasformation();
            a.transpose();
            if (mode == GL_SELECT) {
                glPushName(counter);
            }
            glPushMatrix();
            glMultMatrixf((float*) &(a.values));
            applyMaterial(p->getMat());
            QList<Face>& faccie = p->getFaces();
            int numerofacce = faccie.count();
            QList<Vector3D>& normals = p->getNormals();
            bool hasNormals = (!(normals.isEmpty()));
            if (hasNormals) glEnable(GL_LIGHTING);
            else glDisable(GL_LIGHTING);
            for (int t = 0; t < numerofacce; ++t) {
                Face& f = faccie[t];
                GLenum face_mode;
                Vector3D* lista = f.arrayVertici;
                int* listaNorm = f.normalIndex;
                switch (f.numVertici) {
                    case 1:
                        face_mode = GL_POINTS;
                        glBegin(face_mode);
                        if (hasNormals)
                            glNormal3fv(&((normals[listaNorm[0]]).pos[0]));
                        glVertex3fv(&lista[0].pos[0]);
                        break;
                    case 2:
                        face_mode = GL_LINES;
                        glBegin(face_mode);
                        if (hasNormals) {
                            glNormal3fv(&((normals[(f.normalIndex)[0]]).pos[0]));
                            glVertex3fv(&lista[0].pos[0]);
                            glNormal3fv(&((normals[(f.normalIndex)[1]]).pos[0]));
                            glVertex3fv(&lista[1].pos[0]);
                        }
                        else {
                            glVertex3fv(&lista[0].pos[0]);
                            glVertex3fv(&lista[1].pos[0]);
                        }
                        break;
                    case 3:
                        face_mode = GL_TRIANGLES;
                        glBegin(face_mode);
                        if (hasNormals) {
                            glNormal3fv(&normals[(f.normalIndex)[0]].pos[0]);
                            glVertex3fv(&lista[0].pos[0]);
                            glNormal3fv(&normals[(f.normalIndex)[1]].pos[0]);
                            glVertex3fv(&lista[1].pos[0]);
                            glNormal3fv(&normals[(f.normalIndex)[2]].pos[0]);
                            glVertex3fv(&lista[2].pos[0]);
                        }
                        else {
                            glVertex3fv(&lista[0].pos[0]);
                            glVertex3fv(&lista[1].pos[0]);
                            glVertex3fv(&lista[2].pos[0]);
                        }
                        break;
                    default: face_mode = GL_POLYGON; break;
                }
                glEnd();
            }
            glPopMatrix();
        }
        if (mode == GL_SELECT) glPopName();
        glEndList();
        counter++;
    }
}
12.040 Depth buffering seems to work, but polygons seem to bleed through polygons that are in front of them. What's going on?
You may have configured your zNear and zFar clipping planes in a way that severely limits your depth buffer precision. Generally, this is caused by a zNear clipping plane value that's too close to 0.0.
http://www.opengl.org/archives/resources/faq/technical/depthbuffer.htm
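In fixed-function terms, that means pushing the near plane out as far as the scene tolerates when you set up the projection. A minimal sketch (the field of view, aspect and plane values are illustrative, not taken from the question's code; aspect is an assumed variable):

// The zFar/zNear ratio governs how much depth precision is left for distant
// geometry, so a zNear very close to 0 wastes almost all of it and distant
// faces start to z-fight (flicker) exactly as described above.
gluPerspective(60.0, aspect, 0.001, 10000.0);   // bad: zNear far too close to 0.0
gluPerspective(60.0, aspect, 1.0,   5000.0);    // better: near plane pushed out, far plane pulled in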
Related
I'm writing code that draws a polygon, gives it two feet, and has it walk in from the right until it reaches the middle, do a flip, then land and walk off to the left. I'm having a lot of trouble figuring out how to animate its feet. All I want to do is make one foot go up, then come down, then the other foot go up and come down. I know all I have to do is change the Y values of the feet, but I can't figure out how.
My professor talks about key frames a lot, but wouldn't every step that my "Polyman" takes be a key frame, leading to an infinite number of cases? Here is my timer function:
void TimerFunction(int value) //float plx = 7.0, ply=-3.0, linet=0.00;
{
switch(frame)
{
case 1:
dx-=0.15;
plx-=0.15; //dx=polygon, plx = one foot, pl2x = other foot
pl2x-=0.15;
if(dx<=0.0)
{
plx=0.0; //this case makes polyman walk to the middle
dx=0.0;
pl2x=0.0;
frame=2;
}
break;
case 2:
dxt+=0.05;
if (dxt<=-0.00) //this is a triangle I translate over polyman appearing as if he's opening his mouth
{
dxt=0.00;
frame=3;
}
break;
case 3:
dy+=0.2;
theta+=10.0;
thetat+=10.0;
dyt+=0.2; //this is the flip with polyman's mouth open
ply+=0.2;
pl2y+=0.2;
linet2+=10.0;
linet+=10.0;
if(dy>5.0 || theta>360.00)
{
dy=5.0;
dyt=5.0;
ply=5.0;
pl2y=5.0;
linet2=0.0;
theta=0.0;
thetat=0.0;
linet=0.0;
frame=4;
}
break;
case 4:
dy-=0.2;
dyt-=0.2;
ply-=0.2;
pl2y-=0.2;
if(dy<=-3.0) //this is polyman coming back down to earth
{
dy=-3.0;
dyt=-3.0;
ply=-3.0;
pl2y=-3.0;
frame=5;
}
break;
case 5:
dxt-=0.2;
if (dxt<-3)
{ //this is the triangle slowly translating left appearing as if he's closing his mouth
dxt-=3.0;
}
if (dxt<=-8)
{
dxt = -8;
frame = 6;
}
break;
case 6:
dx-= 0.15;
plx-= 0.15;
pl2x-=0.15; //this is polyman walking off the stage to the left
if(dx<=-8.0)
{
dx=-8.0;
plx=-8.0;
pl2x=-8.0;
}
break;
}
glutPostRedisplay();
glutTimerFunc(30, TimerFunction, 1);
}
All the variables used in my timer function are global. Thanks for your time! If you need any more of my code, just ask and I'll append it.
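For the feet specifically, you don't need a key frame for every step: the step cycle is periodic, so one "foot up / foot down" cycle repeated while walking is enough. Here is a hedged sketch of how that could slot into the walking cases above, reusing the question's globals (ply/pl2y are assumed to be the feet's y positions; footPhase, stepHeight and baseFootY are new, hypothetical names, with baseFootY matching the -3.0 resting height):

// Hypothetical extra globals for the walk cycle (needs <math.h> for sinf).
int footPhase = 0;          // timer ticks elapsed within the current step cycle
float stepHeight = 0.3f;    // how high a foot lifts; illustrative value
float baseFootY = -3.0f;    // resting height of both feet

// Inside case 1 (and case 6), after updating dx/plx/pl2x:
footPhase = (footPhase + 1) % 20;   // one full left+right cycle every 20 ticks
if (footPhase < 10) {               // first half of the cycle: one foot steps
    ply  = baseFootY + stepHeight * sinf(3.14159f * footPhase / 10.0f);  // rises, then falls
    pl2y = baseFootY;
} else {                            // second half: the other foot steps
    pl2y = baseFootY + stepHeight * sinf(3.14159f * (footPhase - 10) / 10.0f);
    ply  = baseFootY;
}

The same two key frames (foot at baseFootY, foot at baseFootY + stepHeight) simply repeat, so there is no explosion of cases.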
I need to create two boxes that both rotate at the same speed and in the same way; only their positions should be different. All I got is this:
http://i.stack.imgur.com/JMua9.png
I have used the following code:
float rotatevalue;
void setup()
{
rotatevalue = 0;
size(500, 500, OPENGL);
if (frame != null) {
frame.setResizable(true);
}
}
void draw()
{
background(245, 238, 184);
fill(246, 225, 65);
rotatevalue = rotatevalue + 2;
pushMatrix();
translate(width/4, height/4);
rotateX(radians(rotatevalue));
rotateY(radians(rotatevalue));
box(50);
popMatrix();
pushMatrix();
translate(3*width/4, height/4);
rotateX(radians(rotatevalue));
rotateY(radians(rotatevalue));
box(50);
popMatrix();
}
What is wrong that makes them rotate differently?
I'm not used to using the OpenGL matrix stack, so this may be a little off base. I calculate my own model matrices to pass to the vertex shader, and when I do, I apply the rotations before the translation.
If you want to draw a 3D object inside a 2D sketch, you must use some type of projection, just as your eye projects the real world. For more information, you should read up on perspective and projection.
So your boxes are rotating in the same way! I will try to demonstrate it with this basic example. Here you can see 5 boxes around the middle of the sketch:
void setup(){
size(500, 500, OPENGL);
fill(246, 225, 65);
//ortho();
}
void draw(){
background(245, 238, 184);
translate(width/2, height/2);
draw_box(0);
draw_box(1);
draw_box(2);
draw_box(3);
draw_box(4);
}
void draw_box(int pos){
pushMatrix();
switch(pos){
case 0: translate( 0, 0); break;
case 1: translate( 0,-100); break;
case 2: translate( 0, 100); break;
case 3: translate( 100, 0); break;
case 4: translate(-100, 0); break;
}
box(50);
popMatrix();
}
There is no rotation, so they should all look the same? No! It is the same as with railway tracks: they are parallel, but far in the distance you can almost see them touching (img).
You can try an orthographic projection to get more similar-looking boxes; for more info see ortho(). Also, you should keep the objects closer to the center if you want better results with a perspective projection. The raw-OpenGL sketch below shows the same idea.
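For comparison, the same idea in raw OpenGL terms (a C++ sketch, not Processing code; width and height are assumed window dimensions): the difference is only which projection matrix you set before drawing.

// Perspective projection: off-center boxes are viewed at an angle, so two
// identical, identically rotating boxes end up looking different on screen.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0, width / (double)height, 0.1, 1000.0);

// Orthographic projection: parallel projection rays, so identical boxes look
// identical wherever they sit on screen (the same idea as Processing's ortho()).
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, width, height, 0.0, -500.0, 500.0);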
I am really new to OpenGL and I am trying to make a surface from two triangles. I don't know where I am going wrong with this code. I know that all the positions and colors are getting into the Triangle class and that the triangles are being made, but nothing is being output. Can someone help?
I tried to get just the output of the Triangle class, but it doesn't seem to be working. I don't think there's anything wrong with the way I am calling the Display function.
Code:
#include<GL/gl.h>
#include<GL/glu.h>
#include<GL/glut.h>
#include<iostream>
#include<vector>
using namespace std;
class Triangle
{
public:
float position[9],color[3];
Triangle()
{}
Triangle(float position_t[], float color_t[])
{
for(int i=0;i<9;i++)
{position[i] = position_t[i];}
for(int i=0;i<3;i++)
{color[i]= color_t[i];}
}
void makeTriangle()
{
glBegin(GL_TRIANGLES);
glColor3f(color[0],color[1],color[2]);glVertex3f(position[0],position[1],position[2]);
glColor3f(color[0],color[1],color[2]);glVertex3f(position[3],position[4],position[5]);
glColor3f(color[0],color[1],color[2]);glVertex3f(position[6],position[7],position[8]);
glEnd();}
};
class Mesh
{
public:
/*float center[3],position[9],color[3];
float size;*/
vector<Triangle> elements;
float center[3],position[9],color[3];
float size;
Mesh(){}
Mesh(float center_in[3], float color_in[3])
{
for (int i=0;i<3;i++)
{
color[i] = color_in[i];
center[i] = center_in[i];
}
}
void getPositions()
{
position[0] = 1;position[1] = 1; position[2] = 1;
position[3] = -1;position[4] = -1; position[5] = 1;
position[6] = 1;position[7] = -1; position[8] = 1;
}
void getColor()
{
color[0] = 1; color[1]=0; color[2]=0;
}
static Mesh makeMesh()
{
Mesh a;
a.elements.resize(2);
a.getPositions();
a.getColor();
Triangle T(a.position,a.color);
a.elements[0] = T;
//Triangle O(2);
//a.elements[1] = 0;
return a;
}
};
void render()
{
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
Mesh a;
a.elements.resize(2);
a.getPositions();
a.getColor();
Triangle T(a.position,a.color);
//vector<Mesh> m;
//m.push_back(Mesh::makeMesh());
glPushMatrix();
T.makeTriangle();
glPopMatrix();
glFlush();
glutSwapBuffers();
glutPostRedisplay();
}
Full Code: http://pastebin.com/xa3B7166
As I suggested in the comments, you are not calling gluLookAt(). Everything is being drawn, but you are just not looking at it!
Docs: https://www.opengl.org/sdk/docs/man2/xhtml/gluLookAt.xml
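For reference, a minimal sketch of what that looks like with the fixed-function pipeline (the eye position and projection values are arbitrary illustrative choices, and windowWidth/windowHeight are assumed variables):

// At the top of render(), set up a projection and a camera before drawing:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0, windowWidth / (double)windowHeight, 0.1, 100.0);

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0.0, 0.0, 5.0,    // eye: a few units back from the origin
          0.0, 0.0, 0.0,    // look at the origin, where the triangle sits
          0.0, 1.0, 0.0);   // up vector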
Your code does not specify any transformations. Therefore, your coordinates need to be within the default view volume, which is [-1, 1] in all coordinate directions.
Or more technically, the model/view/projection transformations (or all the transformations applied in your vertex shader if you use the programmable pipeline) transform the coordinates into the clip coordinate space, and after perspective division into the normalized device coordinate (aka NDC) space. The range of the NDC space is [-1, 1] for all coordinates.
If you don't apply any transformations, as is the case in your code, your original coordinates already have to be in NDC space.
With your current coordinates:
position[0] = 1;position[1] = 1; position[2] = 1;
position[3] = -1;position[4] = -1; position[5] = 1;
position[6] = 1;position[7] = -1; position[8] = 1;
all the z-coordinates have values of 1, which means that the whole triangle is right on the boundary of the clip volume. To make it visible, you can simply set the z-coordinates to 0:
position[0] = 1;position[1] = 1; position[2] = 0;
position[3] = -1;position[4] = -1; position[5] = 0;
position[6] = 1;position[7] = -1; position[8] = 0;
This centers it within the NDC space in z-direction, with the vertices being on 3 of the corners in the xy-plane. You will therefore see half of your window covered by the triangle, cutting it in half along the diagonal.
It's of course common in OpenGL to have the original coordinates in a different coordinate space, and then apply transformations to place them within the view volume.
You're probably already aware of this, but I thought I'd mention it anyway: If you're just starting to learn OpenGL, I would suggest that you learn what people often call "modern OpenGL". This includes the OpenGL Core Profile, or OpenGL ES 2.0 or later. The calls you are using now are mostly deprecated in newer versions of OpenGL, and not available anymore in the Core Profile and ES. The initial hurdle is somewhat higher for "modern OpenGL", particularly since you have to write your own shaders, but you will get on the path to acquiring knowledge that is still current.
I'm creating a 3D room which you can walk around in with a first-person camera.
I have defined the eyeX, eyeY and eyeZ position as shown below:
float eyeX = 0;
float eyeY = 100;
float eyeZ = 75;
Here is my look-at code:
D3DXMatrixLookAtLH( &g_matView, &D3DXVECTOR3( eyeX, eyeY,eyeZ ),
&D3DXVECTOR3( LookatX, LookatY, LookatZ ),
&D3DXVECTOR3( 0.0f, 1.0f, 0.0f ) );
g_pd3dDevice->SetTransform( D3DTS_VIEW, &g_matView );
My code allows me to move the camera around, but not like a first-person camera, and I am struggling to achieve this.
// forwards = UP ARROW
// Backwards = DOWN ARROW
// rotate left = LEFT ARROW
// rotate right = RIGHT ARROW
case WM_KEYDOWN:
{
// Handle any non-accelerated key commands
switch (wparam)
{
case VK_RIGHT:
if(eyeX >=-50)
{
--eyeX;
}
return (0);
case VK_LEFT:
if(eyeX <=50)
{
++eyeX;
}
return (0);
case VK_DOWN:
if(eyeZ >=-50)
{
--eyeZ;
}
return (0);
case VK_UP:
if(eyeZ <=50)
{
++eyeZ;
}
return (0);
case VK_SPACE:
if(eyeY >=-50)
{
--eyeY;
}
return (0);
case VK_SHIFT:
if(eyeY <=50)
{
++eyeY;
}
return (0);
}
break;
}
LookatX = eyeX + 5.0f;
LookatY = eyeY;
LookatZ = eyeZ;
case WM_DESTROY:
{
// kill the application
PostQuitMessage(0);
return(0);
}
default:
break;
} // end switch
Could anyone suggest some changes which would allow me to move around my room like a first-person camera?
Instead of using D3DXMatrixLookAtLH, you could keep a view matrix.
Set up
(Note that I am making up function names; you might have to create these yourself.)
Start with something like
Matrix view = Matrices.createIdentity();
Then, every frame, you set the view matrix (just like you are doing with the matrix you get from D3DXMatrixLookAtLH).
Moving around
Normally, modifying a model matrix looks like this:
model = Matrix.multiply(model, transformation)
However, you manipulate the camera the other way around:
view = Matrix.multiply(transformation, view)
Simply run your switch statement, generate a transformation and update the view matrix.
e.g.:
if (key == 'w')
    view = Matrix.multiply(Matrices.createTranslate(0, 0, -5), view);
if (key == 'j') // key to turn
    view = Matrix.multiply(Matrices.createRotateY(.1), view);
Formulas for generating these matrices can be found on Wikipedia (or DirectX might provide them on its own).
(This is all based on a simple software renderer I made a while ago, but the same idea applies to DirectX.)
EDIT: It looks like DirectX already provides all of these functions for you: http://msdn.microsoft.com/en-us/library/windows/desktop/bb281696(v=vs.85).aspx
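With those D3DX helpers, the approach above looks roughly like this (a hedged sketch: the step sizes are arbitrary, and the exact signs may need flipping depending on your conventions):

// Keep g_matView around and initialize it once, instead of rebuilding it
// with D3DXMatrixLookAtLH every frame.
D3DXMatrixIdentity(&g_matView);

// In the WM_KEYDOWN handler, build a small camera-space transformation...
D3DXMATRIX transform;
D3DXMatrixIdentity(&transform);
switch (wparam)
{
case VK_UP:    D3DXMatrixTranslation(&transform, 0.0f, 0.0f, -1.0f); break; // step forward
case VK_DOWN:  D3DXMatrixTranslation(&transform, 0.0f, 0.0f,  1.0f); break; // step back
case VK_LEFT:  D3DXMatrixRotationY(&transform, -0.05f);              break; // turn left
case VK_RIGHT: D3DXMatrixRotationY(&transform,  0.05f);              break; // turn right
}

// ...and fold it into the view matrix. D3DX uses row vectors, so the camera-space
// transformation is post-multiplied; this is the same idea as the column-vector
// "view = transformation * view" above. Flip signs/order if movement comes out mirrored.
D3DXMatrixMultiply(&g_matView, &g_matView, &transform);
g_pd3dDevice->SetTransform(D3DTS_VIEW, &g_matView);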
I have gotten my basic L-system working and I decided to try to optimize the rendering of the application. Previously I was looping through the whole string of the L-system with a switch statement and drawing as I went. Rather than describe it, I will show you what I was doing:
for(unsigned int stringLoop = 0; stringLoop < _buildString.length(); stringLoop++)
{
switch(_buildString.at(stringLoop))
{
case'X':
//Do Nothing
//X is there just to facilitate the Curve of the plant
break;
case'F':
_prevState = _currState;
_currState.position += _currState.direction * stdBranchLength;
//Set the previous state to the current state
_graphics.SetColour3f(0.0f, 1.0f, 0.0f);
_graphics.Begin(OGLFlags::LINE_STRIP);
_graphics.Vertex3f(_prevState.position.X(), _prevState.position.Y(), _prevState.position.Z());
_graphics.Vertex3f(_currState.position.X(), _currState.position.Y(), _currState.position.Z());
_graphics.End();
break;
case'[':
_prevStack.push(_currState);
break;
case']':
_prevState = _currState;
_currState = _prevStack.top();
_prevStack.pop();
break;
case'-':
_currState.direction = _currState.direction.RotatedAboutZ(-(ROTATION) * Math::DegreesToRadians);
break;
case'+':
_currState.direction = _currState.direction.RotatedAboutZ(ROTATION * Math::DegreesToRadians);
break;
};
}
I removed all of this because I was literally re-evaluating the tree every single frame. I changed the loop so that it saves all of the vertices in a std::vector instead.
for(unsigned int stringLoop = 0; stringLoop < _buildString.length(); stringLoop++)
{
switch(_buildString.at(stringLoop))
{
case'X':
break;
case'F':
//_prevState = _currState;
_currState.position += _currState.direction * stdBranchLength;
_vertexVector.push_back(_currState.position);
break;
case'[':
_prevStack.push(_currState);
break;
case']':
_currState = _prevStack.top();
_prevStack.pop();
break;
case'-':
_currState.direction = _currState.direction.RotatedAboutZ(-(ROTATION) * Math::DegreesToRadians);
break;
case'+':
_currState.direction = _currState.direction.RotatedAboutZ(ROTATION * Math::DegreesToRadians);
break;
};
}
Then I changed my render loop so that it reads straight from the vector:
DesignPatterns::Facades::OpenGLFacade _graphics = DesignPatterns::Facades::OpenGLFacade::Instance();
_graphics.Begin(OGLFlags::LINE_STRIP);
for(unsigned int i = 0; i < _vertexVector.size(); i++)
{
_graphics.Vertex3f(_vertexVector.at(i).X(), _vertexVector.at(i).Y(), _vertexVector.at(i).Z());
}
_graphics.End();
Now my problem is that when I use the vector with a line strip, I get extra artifacts.
The first image is the original, unoptimized render; the second is the newer render, which runs faster; and the third is a render of a dragon curve, which uses no pushes and pops like the first two do (I am pretty sure the push and pop are where the problems come from).
Is the problem with my logic for storing the vertices, or is it because I am using a line strip? I would just use plain lines, but that doesn't really work at all; it ends up looking more like a line stipple.
Also, sorry about the length of this post.
Original render: http://img197.imageshack.us/img197/8030/bettera.jpg
Buggy optimized render: http://img23.imageshack.us/img23/3924/buggyrender.jpg
Dragon curve: http://img8.imageshack.us/img8/627/dragoncurve.jpg
You are getting those extra lines because you are using a LINE_STRIP: a strip connects every vertex to the previous one, so when you pop back to a saved turtle state, the strip draws a stray segment from the tip of the finished branch back to the restored position.
In your 'F' case, push both end points of your line into the vector (like you were doing originally).
_vertexVector.push_back(_prevState.position);
_vertexVector.push_back(_currState.position);
And when you draw, use LINE_LIST (GL_LINES in raw OpenGL) instead, so each pair of vertices forms its own independent segment.
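A sketch of what that change looks like with the facade calls from the question (OGLFlags::LINES is an assumed name for whatever flag maps to GL_LINES in the wrapper):

// In the 'F' case: store both endpoints of the segment, not just the new tip.
case 'F':
    _prevState = _currState;
    _currState.position += _currState.direction * stdBranchLength;
    _vertexVector.push_back(_prevState.position);
    _vertexVector.push_back(_currState.position);
    break;

// When rendering, draw independent segments; pops of the turtle state no longer
// produce a stray connecting line because consecutive pairs are unrelated.
_graphics.Begin(OGLFlags::LINES);   // GL_LINES in raw OpenGL; flag name assumed
for (unsigned int i = 0; i + 1 < _vertexVector.size(); i += 2)
{
    _graphics.Vertex3f(_vertexVector.at(i).X(), _vertexVector.at(i).Y(), _vertexVector.at(i).Z());
    _graphics.Vertex3f(_vertexVector.at(i + 1).X(), _vertexVector.at(i + 1).Y(), _vertexVector.at(i + 1).Z());
}
_graphics.End();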