Warped scene with two sets of Geodes - C++

I have a few objects that I want to combine into a scene graph.
Street inherits from Geode and has a Geometry child drawable made up of a GL_LINE_STRIP.
Pointer inherits from PositionAttitudeTransform and contains a Geode which contains two Geometry polygons.
When I add a bunch of Streets to a Group, it looks just fine. When I add only the Pointer to a Group, it also looks fine. But if I somehow have them both in the scene, the second one is screwed up. See the two pictures.
In the above picture, the street network is as desired, and in the picture below, the pointer is as desired.
I'd appreciate any help! If you need to see the code, let me know and I'll update my question.
Update 1: Since nothing has happened so far, here is the minimal amount of code necessary to produce the phenomenon. I have put two pointers next to each other with no problem, so I'm starting to suspect that I made the streets wrong... next update will be some street generation code.
Update 2: The code now contains the street drawing code.
Update 3: The code now contains the pointer drawing code as well, and the street drawing code has been simplified.
// My libraries:
#include <asl/util/color.h>
using namespace asl;
#include <straph/point.h>
#include <straph/straph.h>
using namespace straph;
// Standard and OSG libraries:
#include <utility>
#include <boost/tuple/tuple.hpp> // tie
using namespace std;
#include <osg/ref_ptr>
#include <osg/Array>
#include <osg/Geometry>
#include <osg/Geode>
#include <osg/Group>
#include <osg/LineWidth>
using namespace osg;
#include <osgUtil/Tessellator>
#include <osgViewer/Viewer>
using namespace osgViewer;
/*
* Just FYI: A Polyline looks like this:
*
* typedef std::vector<Point> Polyline;
*
* And a Point basically is a simple struct:
*
* struct Point {
* double x;
* double y;
* };
*/
inline osg::Vec3d toVec3d(const straph::Point& p, double elevation=0.0)
{
return osg::Vec3d(p.x, p.y, elevation);
}
Geometry* createStreet(const straph::Polyline& path)
{
ref_ptr<Vec3dArray> array (new Vec3dArray(path.size()));
for (unsigned i = 0; i < path.size(); ++i) {
(*array)[i] = toVec3d(path[i]);
}
Geometry* geom = new Geometry;
geom->setVertexArray(array.get());
geom->addPrimitiveSet(new osg::DrawArrays(GL_LINE_STRIP, 0, array->size()));
return geom;
}
Geode* load_streets()
{
unique_ptr<Straph> graph = read_shapefile("mexico/roads", 6);
Geode* root = new Geode();
boost::graph_traits<straph::Straph>::edge_iterator ei, ee;
for (boost::tie(ei, ee) = edges(*graph); ei != ee; ++ei) {
const straph::Segment& s = (*graph)[*ei];
root->addDrawable(createStreet(s.polyline));
}
return root;
}
Geode* createPointer(double width, const Color& body_color, const Color& border_color)
{
float f0 = 0.0f;
float f3 = 3.0f;
float f1 = 1.0f * width;
float f2 = 2.0f * width;
// Create vertex array
ref_ptr<Vec3Array> vertices (new Vec3Array(4));
(*vertices)[0].set( f0 , f0 , f0 );
(*vertices)[1].set( -f1/f3, -f1/f3 , f0 );
(*vertices)[2].set( f0 , f2/f3 , f0 );
(*vertices)[3].set( f1/f3, -f1/f3 , f0 );
// Build the geometry object
ref_ptr<Geometry> polygon (new Geometry);
polygon->setVertexArray( vertices.get() );
polygon->addPrimitiveSet( new DrawArrays(GL_POLYGON, 0, 4) );
// Set the colors
ref_ptr<Vec4Array> body_colors (new Vec4Array(1));
(*body_colors)[0] = body_color.get();
polygon->setColorArray( body_colors.get() );
polygon->setColorBinding( Geometry::BIND_OVERALL );
// Start the tessellation work
osgUtil::Tessellator tess;
tess.setTessellationType( osgUtil::Tessellator::TESS_TYPE_GEOMETRY );
tess.setWindingType( osgUtil::Tessellator::TESS_WINDING_ODD );
tess.retessellatePolygons( *polygon );
// Create the border-lines
ref_ptr<Geometry> border (new Geometry);
border->setVertexArray( vertices.get() );
border->addPrimitiveSet(new DrawArrays(GL_LINE_LOOP, 0, 4));
border->getOrCreateStateSet()->setAttribute(new LineWidth(2.0f));
ref_ptr<Vec4Array> border_colors (new Vec4Array(1));
(*border_colors)[0] = border_color.get();
border->setColorArray( border_colors.get() );
border->setColorBinding( Geometry::BIND_OVERALL );
// Create Geode object
ref_ptr<Geode> geode (new Geode);
geode->addDrawable( polygon.get() );
geode->addDrawable( border.get() );
return geode.release();
}
int main(int, char**)
{
Group* root = new Group();
Geode* str = load_streets();
root->addChild(str);
Geode* p = createPointer(6.0, TangoColor::Scarlet3, TangoColor::Black);
root->addChild(p);
Viewer viewer;
viewer.setSceneData(root);
viewer.getCamera()->setClearColor(Color(TangoColor::White).get());
viewer.run();
}

In the function createStreet I use a Vec3dArray for the vertex array, whereas in the createPointer function I use a Vec3Array. I guess the library expects all nodes to be composed of floats or of doubles, but not both. Changing these two functions solves the problem:
inline osg::Vec3 toVec3(const straph::Point& p, float elevation=0.0)
{
return osg::Vec3(float(p.x), float(p.y), elevation);
}
Geometry* createStreet(const straph::Polyline& path)
{
ref_ptr<Vec3Array> array (new Vec3Array(path.size()));
for (unsigned i = 0; i < path.size(); ++i) {
(*array)[i] = toVec3(path[i]);
}
Geometry* geom = new Geometry;
geom->setVertexArray(array.get());
geom->addPrimitiveSet(new osg::DrawArrays(GL_LINE_STRIP, 0, array->size()));
return geom;
}
Here is a comment by Robert Osfield:
I can only provide a guess, and that would be that the Intel OpenGL doesn't handle double vertex data correctly, so you are stumbling across a driver bug.
In general OpenGL hardware is based around floating point maths so the drivers normally convert any double data you pass it into floats before passing it to the GPU. Even if the driver does this correctly this conversion process slows performance down so it's best to keep osg::Geometry vertex/texcoord/normal etc. data all in float arrays such as Vec3Array.
You can retain precision by translating your data to a local origin prior to conversion to float, then placing a MatrixTransform above your data to put it in the correct 3D position. The OSG by default uses doubles for all internal matrices, so that when it accumulates the modelview matrix during the cull traversal, double precision is maintained for as long as possible before the final modelview matrix is passed to OpenGL. Using this technique the OSG can handle whole-earth data without any jitter/precision problems.
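To illustrate that suggestion, here is a minimal sketch (my own adaptation of the createStreet function above, not something from the original answer): pick a local origin per polyline, store the small offsets as floats, and let a double-precision MatrixTransform carry the large translation.
// Additional include needed for this sketch:
#include <osg/MatrixTransform>

// Sketch only: keep per-vertex data in floats, restore the world position with a
// double-precision MatrixTransform above the Geode (assumes toVec3d/straph as above).
osg::MatrixTransform* createStreetAtLocalOrigin(const straph::Polyline& path)
{
    // Use the first point of the polyline as the local origin.
    osg::Vec3d origin = toVec3d(path[0]);
    osg::ref_ptr<osg::Vec3Array> array(new osg::Vec3Array(path.size()));
    for (unsigned i = 0; i < path.size(); ++i) {
        osg::Vec3d d = toVec3d(path[i]) - origin;   // small difference, safe to store as float
        (*array)[i] = osg::Vec3(float(d.x()), float(d.y()), float(d.z()));
    }
    osg::Geometry* geom = new osg::Geometry;
    geom->setVertexArray(array.get());
    geom->addPrimitiveSet(new osg::DrawArrays(GL_LINE_STRIP, 0, array->size()));
    osg::Geode* geode = new osg::Geode;
    geode->addDrawable(geom);
    // The large translation stays in double precision inside the transform.
    osg::MatrixTransform* xform = new osg::MatrixTransform(osg::Matrixd::translate(origin));
    xform->addChild(geode);
    return xform;
}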

Related

Trying to load animations in OpenGL from an MD5 file using Assimp GLM

I'm trying to follow the tutorial here (at ogldev) mentioned in this answer.
I am, however, facing a few issues which I believe to be related to the row-major order of Assimp vs. the column-major order of GLM, although I am not quite sure.
I've tried a few variations and orders to see if any of those work, but to no avail.
Here (Gist) is the class which I use to load the complete MD5 file, and the current result I have.
And this is the part where I think it is going wrong, when I try to update the bone transformation matrices.
void SkeletalModel::ReadNodeHierarchyAnimation(float _animationTime, const aiNode* _node,
const glm::mat4& _parentTransform)
{
std::string node_name = _node->mName.data;
const aiAnimation * p_animation = scene->mAnimations[0];
glm::mat4 node_transformation(1.0f);
convert_aimatrix_to_glm(node_transformation, _node->mTransformation);
// Transpose it.
node_transformation = glm::transpose(node_transformation);
const aiNodeAnim * node_anim = FindNodeAnim(p_animation, node_name);
if (node_anim) {
//glm::mat4 transformation_matrix(1.0f);
glm::mat4 translation_matrix(1.0f);
glm::mat4 rotation_matrix(1.0f);
glm::mat4 scaling_matrix(1.0f);
aiVector3D translation;
CalcInterpolatedPosition(translation, _animationTime, node_anim);
translation_matrix = glm::translate(translation_matrix, glm::vec3(translation.x, translation.y, translation.z));
aiQuaternion rotation;
CalcInterpolatedRotation(rotation, _animationTime, node_anim);
// Transpose the matrix after this.
convert_aimatrix_to_glm(rotation_matrix, rotation.GetMatrix());
//rotation_matrix = glm::transpose(rotation_matrix);
aiVector3D scaling;
CalcInterpolatedScaling(scaling, _animationTime, node_anim);
scaling_matrix = glm::scale(scaling_matrix, glm::vec3(scaling.x, scaling.y, scaling.z));
node_transformation = scaling_matrix * rotation_matrix * translation_matrix;
//node_transformation = translation_matrix * rotation_matrix * scaling_matrix;
}
glm::mat4 global_transformation = node_transformation * _parentTransform;
if (boneMapping.find(node_name) != boneMapping.end()) {
// Update the Global Transformation.
auto bone_index = boneMapping[node_name];
//boneInfoData[bone_index].finalTransformation = globalInverseTransform * global_transformation * boneInfoData[bone_index].boneOffset;
boneInfoData[bone_index].finalTransformation = boneInfoData[bone_index].boneOffset * global_transformation * globalInverseTransform;
//boneInfoData[bone_index].finalTransformation = globalInverseTransform;
}
for (auto i = 0; i < _node->mNumChildren; i++) {
ReadNodeHierarchyAnimation(_animationTime, _node->mChildren[i], global_transformation);
}
}
My Current Output:
I tried going through each matrix used in the code to check whether I should transpose it or not, and whether I should change the matrix multiplication order or not, but I could not find my issue.
If anyone can point out my mistakes here or direct me to a different tutorial that would help me load animations, that would be great.
Also, I see suggestions to use a basic model in the initial stages of learning this. But I was told the OBJ format doesn't support animations, and I have been using just OBJ before this. Can I use any other formats that Blender exports in a manner similar to the MD5 file shown in this tutorial?
I built an animated scene a few years ago using Assimp library, basically following these tutorials. http://ogldev.atspace.co.uk/www/tutorial38/tutorial38.html and http://sourceforge.net/projects/assimp/forums/forum/817654/topic/3880745
While I was using an old X format (Blender can work with X, using an extension), I can definitely confirm you need to transpose the Assimp animation matrices for use with GLM.
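For reference, a minimal sketch of what such a conversion can look like (aiToGlm is my own helper name, assumed to do roughly what convert_aimatrix_to_glm does in the question): copy the 16 floats and transpose, since Assimp stores matrices row-major while GLM stores them column-major.
#include <assimp/matrix4x4.h>
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// Sketch only: aiMatrix4x4 is row-major, glm::mat4 is column-major, so after copying
// the 16 contiguous floats we transpose to get the same transformation in GLM.
inline glm::mat4 aiToGlm(const aiMatrix4x4& m)
{
    return glm::transpose(glm::make_mat4(&m.a1));
}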
Regarding using other formats, you can use whatever you like, provided it is supported by Blender (import, editing, export) and by Assimp. Be prepared for a fair bit of trial and error when changing formats!
Rather than me trying to understand your code, I will post the relevant fragments from my working system that show the calculation of the bone matrices. Hopefully this will help you, as I remember having the same problem you describe and taking some time to track it down. The code is plain 'C'.
You can see where the transposition takes place at the end of the code.
// calculateAnimPose() calculates the bone transformations for a mesh at a particular time in an animation (in scene)
// Each bone transformation is relative to the rest pose.
void calculateAnimPose(aiMesh* mesh, const aiScene* scene, int animNum, float poseTime, mat4 *boneTransforms) {
if(mesh->mNumBones == 0 || animNum < 0) { // animNum = -1 for no animation
boneTransforms[0] = mat4(1.0); // so, just return a single identity matrix
return;
}
if(scene->mNumAnimations <= (unsigned int)animNum)
failInt("No animation with number:", animNum);
aiAnimation *anim = scene->mAnimations[animNum]; // animNum = 0 for the first animation
// Set transforms from bone channels
for(unsigned int chanID=0; chanID < anim->mNumChannels; chanID++) {
aiNodeAnim *channel = anim->mChannels[chanID];
aiVector3D curPosition;
aiQuaternion curRotation; // interpolation of scaling purposefully left out for simplicity.
// find the node which the channel affects
aiNode* targetNode = scene->mRootNode->FindNode( channel->mNodeName );
// find current positionKey
size_t posIndex = 0;
for(posIndex=0; posIndex+1 < channel->mNumPositionKeys; posIndex++)
if( channel->mPositionKeys[posIndex + 1].mTime > poseTime )
break; // the next key lies in the future - so use the current key
// This assumes that there is at least one key
if(posIndex+1 == channel-> mNumPositionKeys)
curPosition = channel->mPositionKeys[posIndex].mValue;
else {
float t0 = channel->mPositionKeys[posIndex].mTime; // Interpolate position/translation
float t1 = channel->mPositionKeys[posIndex+1].mTime;
float weight1 = (poseTime-t0)/(t1-t0);
curPosition = channel->mPositionKeys[posIndex].mValue * (1.0f - weight1) +
channel->mPositionKeys[posIndex+1].mValue * weight1;
}
// find current rotationKey
size_t rotIndex = 0;
for(rotIndex=0; rotIndex+1 < channel->mNumRotationKeys; rotIndex++)
if( channel->mRotationKeys[rotIndex + 1].mTime > poseTime )
break; // the next key lies in the future - so use the current key
if(rotIndex+1 == channel-> mNumRotationKeys)
curRotation = channel->mRotationKeys[rotIndex].mValue;
else {
float t0 = channel->mRotationKeys[rotIndex].mTime; // Interpolate using quaternions
float t1 = channel->mRotationKeys[rotIndex+1].mTime;
float weight1 = (poseTime-t0)/(t1-t0);
aiQuaternion::Interpolate(curRotation, channel->mRotationKeys[rotIndex].mValue,
channel->mRotationKeys[rotIndex+1].mValue, weight1);
curRotation = curRotation.Normalize();
}
aiMatrix4x4 trafo = aiMatrix4x4(curRotation.GetMatrix()); // now build a rotation matrix
trafo.a4 = curPosition.x; trafo.b4 = curPosition.y; trafo.c4 = curPosition.z; // add the translation
targetNode->mTransformation = trafo; // assign this transformation to the node
}
// Calculate the total transformation for each bone relative to the rest pose
for(unsigned int a=0; a<mesh->mNumBones; a++) {
const aiBone* bone = mesh->mBones[a];
aiMatrix4x4 bTrans = bone->mOffsetMatrix; // start with mesh-to-bone matrix to subtract rest pose
// Find the bone, then loop through the nodes/bones on the path up to the root.
for(aiNode* node = scene->mRootNode->FindNode(bone->mName); node!=NULL; node=node->mParent)
bTrans = node->mTransformation * bTrans; // add each bone's current relative transformation
boneTransforms[a] = mat4(vec4(bTrans.a1, bTrans.a2, bTrans.a3, bTrans.a4),
vec4(bTrans.b1, bTrans.b2, bTrans.b3, bTrans.b4),
vec4(bTrans.c1, bTrans.c2, bTrans.c3, bTrans.c4),
vec4(bTrans.d1, bTrans.d2, bTrans.d3, bTrans.d4)); // Convert to mat4
}
}

How to store a vector field with VTK? C++, VTKWriter

Let's say I have a vector field u, with components ux, uy and uz, defined at (unstructured) positions in space rx, ry and rz.
All I want is to store this vector field in the VTK format, i.e. with the class "vtkwriter" from libvtk, to enable visualization with ParaView.
I think I got the code for incorporating the positions right, but somehow I can't figure out how to incorporate the data:
#include <vtkPoints.h>
#include <vtkPolyDataWriter.h>
#include <vtkSmartPointer.h>
void write_file (double* rx, double* ry, double* rz,
double* ux, double* uy, double* uz,
int n, const char* filename)
{
vtkSmartPointer<vtkPoints> points =
vtkSmartPointer<vtkPoints>::New ();
points->SetNumberOfPoints(n);
for (int i = 0; i < n; ++i) {
points->SetPoint(i, rx[i], ry[i], rz[i]);
}
// how to incorporate the vector field u?
vtkSmartPointer<vtkPolyDataWriter> writer =
vtkSmartPointer<vtkPolyDataWriter>::New ();
writer->SetFileName (filename);
// how to tell the writer what to write?
writer->Write ();
}
The first question is: is the general way correct, i.e. the coordinates' treatment with vtkPoints?
When searching the internet, I find many results on how the final file should look.
I could probably generate that format by hand, but that isn't really what I want to do.
On the other hand, I'm somehow not able to understand VTK's documentation. Whenever I look up the documentation of a class, it refers to the documentation of some other classes, and those classes' documentation refers back to the first one.
The same holds for the examples.
So far, I haven't found one that explains how to handle vector-valued data defined at arbitrary positions, and the other examples are so complicated that I'm completely stuck here.
I think the solution somehow uses vtkPolyData, but I can't figure out how to insert the data.
I think it needs a vtkDoubleArray, but I haven't found out so far how to make it vector-valued.
Thanks in advance.
Ok, I got it done after enough trial and error.
The coordinates where the vector field is defined should go into a vtkPoints object, and the data of interest into a vtkDoubleArray.
The incorporation into the final vtkPolyData object is done via vtkPolyData::GetPointData()->SetVectors(...).
Finally, the cell type needs to be set to vtkVertex:
#include <vtkCellArray.h>
#include <vtkDoubleArray.h>
#include <vtkPointData.h>
#include <vtkPoints.h>
#include <vtkPolyData.h>
#include <vtkPolyDataWriter.h>
#include <vtkSmartPointer.h>
#include <vtkVertex.h>
void VTKWriter::write_file(double* rx, double *ry, double *rz,
double* ux, double *uy, double *uz,
int n, const char* filename)
{
vtkSmartPointer<vtkPoints> points =
vtkSmartPointer<vtkPoints>::New();
points->SetNumberOfPoints(n);
vtkSmartPointer<vtkCellArray> vertices =
vtkSmartPointer<vtkCellArray>::New();
vertices->SetNumberOfCells(n);
for (int i = 0; i < n; ++i) {
points->SetPoint(i, rx[i], ry[i], rz[i]);
vtkSmartPointer<vtkVertex> vertex =
vtkSmartPointer<vtkVertex>::New();
vertex->GetPointIds()->SetId(0, i);
vertices->InsertNextCell(vertex);
}
vtkSmartPointer<vtkDoubleArray> u =
vtkSmartPointer<vtkDoubleArray>::New();
u->SetName("u");
u->SetNumberOfComponents(3);
u->SetNumberOfTuples(n);
for (int i = 0; i < n; ++i) {
u->SetTuple3(i, ux[i], uy[i], uz[i]);
}
vtkSmartPointer<vtkPolyData> polydata =
vtkSmartPointer<vtkPolyData>::New();
polydata->SetPoints(points);
polydata->SetVerts(vertices);
polydata->GetPointData()->SetVectors(u);
vtkSmartPointer<vtkPolyDataWriter> writer =
vtkSmartPointer<vtkPolyDataWriter>::New();
writer->SetFileName(filename);
writer->SetInputData(polydata);
writer->Write ();
}
The reason why I didn't get this at first is that the interaction between points, cells, vertices, point data and polydata isn't easy to grasp when one is new to VTK; the tutorials do not really cover this at all, and VTK's Doxygen documentation is also somewhat unhelpful on this point.

std::vector memory, vector of unwanted 0's

My code works in my purely GLUT implementation, but I am trying to get it to work in Qt.
I have a vector of masspoints for a wire mesh system
std::vector<masspoint> m_particles;
The problem is that in my Qt version none of what I write really sticks, and I am left with an array of zeros. Basically I am confused why the GLUT version has correct values but the Qt one does not, given that it is basically identical code. What is wrong with the Qt code?
Yes, I only see zeros when using qDebug. When I am calling my drawing function in the Qt version, all vertex points turn out to be 0 in all components, so nothing is seen.
int myboog = 1;
int county = 0;
// Constructors
Cloth::Cloth(float width, float height, int particles_in_width, int particles_in_height):
m_width(particles_in_width),
m_height(particles_in_height),
m_dimensionWidth(width),
m_dimensionHeight(height),
m_distanceX(width/(float)particles_in_width),
m_distanceY(height/(float)particles_in_height)
{
//Set the particle array to the given size
//Height by width
//mparticles is the name of our vector
m_particles.resize(m_width*m_height);
qDebug() << m_particles.size();
// Create the point masses to simulate the cloth
for (int x = 0; x < m_width; ++x)
{
for (int y=0; y < m_height; ++y)
{
// Place the pointmass of the cloth, lift the edges to give the wind more effect as the cloth falls
Vector3f position = Vector3f(m_dimensionWidth * (x / (float)m_width),
((x==0)||(x==m_width-1)||(y==0)||(y==m_height-1)) ? m_distanceY/2.0f:0,
m_dimensionHeight * (y / (float)m_height));
// The gravity effect is applied to new pmasspoints
m_particles[y * m_width + x] = masspoint(position,Vector3f(0,-0.06,0));
}
}
int num = (int)m_particles.size();
for (int i=0; i<num; ++i)
{
masspoint* p = &m_particles[i];
if(myboog)
{
qDebug() << "test " << *p->getPosition().getXLocation() << county;
county++;
}
}
myboog = 0;
// Calculate the normals for the first time so the initial draw is correctly lit
calculateClothNormals();
}
Code for masspoint involved in the constructor for Cloth:
#ifndef MASSPOINT_H
#define MASSPOINT_H
#include <QGLWidget>
#include "vector3f.h"
class masspoint
{
private:
Vector3f m_position; // Current Location of the pointmass
Vector3f m_velocity; // Direction and speed the pointmass is traveling in
Vector3f m_acceleration; // Speed at which the pointmass is accelerating (used for gravity)
Vector3f m_forceAccumulated; // Force that has been accumulated since the last update
Vector3f m_normal; // Normal of this pointmass, used to light the cloth when drawing
float m_damping; // Amount of velocity lost per update
bool m_stationary; // Whether this pointmass is currently capable of movement
public:
masspoint& operator= (const masspoint& particle);
//Some constructors
masspoint();
masspoint(const masspoint& particle);
masspoint(Vector3f position, Vector3f acceleration);
//Like Euler integration
void integrate(float duration);
// Accessor functions
//Get the position of the point mass
inline Vector3f getPosition() const {return m_position;}
Vector stuff involved in the constructor for Cloth:
#ifndef VECTOR3F_H
#define VECTOR3F_H
#include <math.h>
// Vector library to be used
class Vector3f
{
private:
float m_x, m_y, m_z;
public:
const float* getXLocation() const { return &m_x; }

OpenGL Camera vectors

I have a very rudimentary camera which generates 3 vectors for use with gluLookAt(...). The problem is I'm not sure if this is correct; I adapted code from something my lecturer showed us (I think he got it from somewhere).
This actually works, until you spin the mouse round in circles; then the camera starts to rotate around the z-axis. This shouldn't happen, as the mouse coords are only attached to the pitch and yaw, not the roll.
Camera
// Camera.hpp
#ifndef MOOT_CAMERA_INCLUDE_HPP
#define MOOT_CAMERA_INCLUDE_HPP
#include <GL/gl.h>
#include <GL/glu.h>
#include <boost/utility.hpp>
#include <Moot/Platform.hpp>
#include <Moot/Vector3D.hpp>
namespace Moot
{
class Camera : public boost::noncopyable
{
protected:
Vec3f m_position, m_up, m_right, m_forward, m_viewPoint;
uint16_t m_height, m_width;
public:
Camera()
{
m_forward = Vec3f(0.0f, 0.0f, -1.0f);
m_right = Vec3f(1.0f, 0.0f, 0.0f);
m_up = Vec3f(0.0f, 1.0f, 0.0f);
}
void setup(uint16_t setHeight, uint16_t setWidth)
{
m_height = setHeight;
m_width = setWidth;
}
void move(float distance)
{
m_position += (m_forward * distance);
}
void addPitch(float setPitch)
{
m_forward = (m_forward * cos(setPitch) + (m_up * sin(setPitch)));
m_forward.setNormal();
// Cross Product
m_up = (m_forward / m_right) * -1;
}
void addYaw(float setYaw)
{
m_forward = ((m_forward * cos(setYaw)) - (m_right * sin(setYaw)));
m_forward.setNormal();
// Cross Product
m_right = m_forward / m_up;
}
void addRoll(float setRoll)
{
m_right = (m_right * cos(setRoll) + (m_up * sin(setRoll)));
m_right.setNormal();
// Cross Product
m_up = (m_forward / m_right) * -1;
}
virtual void apply() = 0;
}; // Camera
} // Moot
#endif
Snippet from update cycle
// Mouse movement
m_camera.addPitch((float)input().mouseDeltaY() * 0.001);
m_camera.addYaw((float)input().mouseDeltaX() * 0.001);
apply() in the camera class is defined in an inherited class, which is called from the draw function of the game loop.
void apply()
{
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(40.0,(GLdouble)m_width/(GLdouble)m_height,0.5,20.0);
m_viewPoint = m_position + m_forward;
gluLookAt( m_position.getX(), m_position.getY(), m_position.getZ(),
m_viewPoint.getX(), m_viewPoint.getY(), m_viewPoint.getZ(),
m_up.getX(), m_up.getY(), m_up.getZ());
}
Don't accumulate the transforms in your vectors; store the angles and generate the vectors on-the-fly.
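A minimal sketch of that idea (the struct and member names are mine, purely for illustration): the camera stores only the pitch and yaw angles and rebuilds forward/right/up from them on demand, so rounding errors cannot accumulate in the vectors.
#include <cmath>

// Sketch only: the angles are the source of truth; the basis vectors are derived each time.
// Assumes a right-handed, y-up convention with forward = -z at pitch = yaw = 0.
struct AngleCamera
{
    float m_pitch;
    float m_yaw;

    AngleCamera() : m_pitch(0.0f), m_yaw(0.0f) {}

    void addPitch(float d) { m_pitch += d; }
    void addYaw(float d)   { m_yaw += d; }

    void basis(float forward[3], float right[3], float up[3]) const
    {
        forward[0] =  std::cos(m_pitch) * std::sin(m_yaw);
        forward[1] =  std::sin(m_pitch);
        forward[2] = -std::cos(m_pitch) * std::cos(m_yaw);
        right[0] = std::cos(m_yaw);
        right[1] = 0.0f;
        right[2] = std::sin(m_yaw);
        // up = right x forward
        up[0] = right[1] * forward[2] - right[2] * forward[1];
        up[1] = right[2] * forward[0] - right[0] * forward[2];
        up[2] = right[0] * forward[1] - right[1] * forward[0];
    }
};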
EDIT: Floating-point stability. Compare the output of a and b:
#include <iostream>
using namespace std;
int main()
{
const float small = 0.00001;
const unsigned int times = 100000;
float a = 0.0f;
for( unsigned int i = 0; i < times; ++i )
{
a += small;
}
cout << a << endl;
float b = 0.0f;
b = small * times;
cout << b << endl;
return 0;
}
Output:
1.00099
1
I am not sure where to start, as you are posting only small snippets, not enough to fully reproduce the problem.
In your methods you update all parameters, and your parameters depend on previous values. I am not sure what exactly you call, because you posted that you call only these two:
m_camera.addPitch((float)input().mouseDeltaY() * 0.001);
m_camera.addYaw((float)input().mouseDeltaX() * 0.001);
You should somehow break that circle by adding new parameters, and the output should depend on the input (for example, m_position shouldn't depend on m_forward).
You should also initialize all variables in the constructor, and I see you are initializing only m_forward, m_right and m_up (by the way, use an initialization list).
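For example, here is a sketch of the constructor from the question with every member in the initialization list (the zero defaults are my own illustrative choices):
// Sketch only: initialize every member, in declaration order, so nothing is indeterminate.
Camera()
    : m_position(0.0f, 0.0f, 0.0f),
      m_up(0.0f, 1.0f, 0.0f),
      m_right(1.0f, 0.0f, 0.0f),
      m_forward(0.0f, 0.0f, -1.0f),
      m_viewPoint(0.0f, 0.0f, 0.0f),
      m_height(0),
      m_width(0)
{
}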
You might want to reconsider your approach in favor of using quaternion rotations as described in this paper. This has the advantage of representing all of your accumulated rotations as a single rotation about a single vector (you only need to keep track of a single quaternion), which you can apply to the canonical orientation vectors (up, norm and right) describing the camera orientation. Furthermore, since you're using C++, you can use the Boost quaternion class to manage the math of most of it.
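A minimal sketch of that idea using boost::math::quaternion (all function and variable names here are my own, not from the paper): accumulate each mouse rotation into one unit quaternion and apply it to the canonical forward/right/up vectors only when you need them.
#include <boost/math/quaternion.hpp>
#include <cmath>

typedef boost::math::quaternion<float> quatf;

// Unit quaternion for a rotation of 'angle' radians about the unit axis (x, y, z).
quatf axisAngle(float angle, float x, float y, float z)
{
    float s = std::sin(angle * 0.5f);
    return quatf(std::cos(angle * 0.5f), x * s, y * s, z * s);
}

// Rotate a vector by q: treat the vector as a pure quaternion, v' = q * v * conj(q).
void rotate(const quatf& q, float v[3])
{
    quatf p(0.0f, v[0], v[1], v[2]);
    quatf r = q * p * boost::math::conj(q);
    v[0] = r.R_component_2();
    v[1] = r.R_component_3();
    v[2] = r.R_component_4();
}

// Accumulate pitch (about the camera's current right axis) and yaw (about world up)
// into a single orientation quaternion; renormalize to fight drift.
void addPitchYaw(quatf& orientation, float pitch, float yaw, const float right[3])
{
    orientation = axisAngle(yaw, 0.0f, 1.0f, 0.0f)
                * axisAngle(pitch, right[0], right[1], right[2])
                * orientation;
    orientation /= boost::math::abs(orientation);
}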

OpenGL skeleton animation

I am trying to add animation to my program.
I have a human model created in Blender with skeletal animation, and I can skip through the keyframes to see the model walking.
Now I've exported the model to an XML (Ogre3D) format, and in this XML file I can see the rotation, translation and scale assigned to each bone at a specific time (t=0.00000, t=0.00040, ... etc.)
What I've done is find which vertices are assigned to each bone. Now I'm assuming all I need to do is apply the transformations defined for the bone to each one of these vertices. Is this the correct approach?
In my OpenGL draw() function (rough pseudo-code):
for (Bone b : bones){
gl.glLoadIdentity();
List<Vertex> v= b.getVertices();
rotation = b.getRotation();
translation = b.getTranslation();
scale = b.getScale();
gl.glTranslatef(translation);
gl.glRotatef(rotation);
gl.glScalef(scale);
gl.glDrawElements(v);
}
Vertices are usually affected by more than one bone -- it sounds like you're after linear blend skinning. My code's in C++ unfortunately, but hopefully it'll give you the idea:
void Submesh::skin(const Skeleton_CPtr& skeleton)
{
/*
Linear Blend Skinning Algorithm:
P = (\sum_i w_i * M_i * M_{0,i}^{-1}) * P_0 / (sum i w_i)
Each M_{0,i}^{-1} matrix gets P_0 (the rest vertex) into its corresponding bone's coordinate frame.
We construct matrices M_n * M_{0,n}^-1 for each n in advance to avoid repeating calculations.
I refer to these in the code as the 'skinning matrices'.
*/
BoneHierarchy_CPtr boneHierarchy = skeleton->bone_hierarchy();
ConfiguredPose_CPtr pose = skeleton->get_pose();
int boneCount = boneHierarchy->bone_count();
// Construct the skinning matrices.
std::vector<RBTMatrix_CPtr> skinningMatrices(boneCount);
for(int i=0; i<boneCount; ++i)
{
skinningMatrices[i] = pose->bones(i)->absolute_matrix() * skeleton->to_bone_matrix(i);
}
// Build the vertex array.
RBTMatrix_Ptr m = RBTMatrix::zeros(); // used as an accumulator for \sum_i w_i * M_i * M_{0,i}^{-1}
int vertCount = static_cast<int>(m_vertices.size());
for(int i=0, offset=0; i<vertCount; ++i, offset+=3)
{
const Vector3d& p0 = m_vertices[i].position();
const std::vector<BoneWeight>& boneWeights = m_vertices[i].bone_weights();
int boneWeightCount = static_cast<int>(boneWeights.size());
Vector3d p;
if(boneWeightCount != 0)
{
double boneWeightSum = 0;
for(int j=0; j<boneWeightCount; ++j)
{
int boneIndex = boneWeights[j].bone_index();
double boneWeight = boneWeights[j].weight();
boneWeightSum += boneWeight;
m->add_scaled(skinningMatrices[boneIndex], boneWeight);
}
// Note: This is effectively p = m*p0 (if we think of p0 as (p0.x, p0.y, p0.z, 1)).
p = m->apply_to_point(p0);
p /= boneWeightSum;
// Reset the accumulator matrix ready for the next vertex.
m->reset_to_zeros();
}
else
{
// If this vertex is unaffected by the armature (i.e. no bone weights have been assigned to it),
// use its rest position as its real position (it's the best we can do).
p = p0;
}
m_vertArray[offset] = p.x;
m_vertArray[offset+1] = p.y;
m_vertArray[offset+2] = p.z;
}
}
void Submesh::render() const
{
glPushClientAttrib(GL_CLIENT_VERTEX_ARRAY_BIT);
glPushAttrib(GL_ENABLE_BIT | GL_POLYGON_BIT);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_DOUBLE, 0, &m_vertArray[0]);
if(m_material->uses_texcoords())
{
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_DOUBLE, 0, &m_texCoordArray[0]);
}
m_material->apply();
glDrawElements(GL_TRIANGLES, static_cast<GLsizei>(m_vertIndices.size()), GL_UNSIGNED_INT, &m_vertIndices[0]);
glPopAttrib();
glPopClientAttrib();
}
Note in passing that real-world implementations usually do this sort of thing on the GPU to the best of my knowledge.
Your code assumes that each bone has an independent transformation matrix (you reset your matrix at the start of each loop iteration). But in reality, bones form a hierarchical structure that you must preserve when you do your rendering. Consider that when your upper arm rotates your forearm rotates along, because it is attached. The forearm may have its own rotation, but that is applied after it is rotated with the upper arm.
The rendering of the skeleton is then done recursively. Here is some pseudo-code:
function renderBone(Bone b) {
setupTransformMatrix(b);
draw(b);
foreach c in b.getChildren()
renderBone(c);
}
main() {
gl.glLoadIdentity();
renderBone(rootBone);
}
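In fixed-function OpenGL the same recursion is usually expressed with the matrix stack, so each child starts from its parent's accumulated transform; here is a sketch in C++ (Bone, setupTransformMatrix and draw stand in for your own types and functions):
// Sketch only: push before applying this bone's local transform, pop after its subtree,
// so siblings are unaffected and children inherit the parent's transform.
void renderBone(const Bone& b)
{
    glPushMatrix();
    setupTransformMatrix(b);                 // glTranslatef / glRotatef / glScalef for this bone
    draw(b);                                 // draw the geometry attached to this bone
    for (const Bone& c : b.getChildren())    // children accumulate on top of this transform
        renderBone(c);
    glPopMatrix();
}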
I hope this helps.