Problems loading .dae files, indices and vertices load correctly - C++

So after reading this I thought it would be better to change the format I load my files in, and decided to use .dae files. I wrote this very simple XML loader:
void Model::loadModelFromDae(std::string source)
{
    using namespace std;

    string name = source.substr(0, source.find("."));
    cout << "+++LOADING::" << name << "+++" << endl;

    ifstream file;
    file.open(source);
    string line;
    size = 0;
    indexSize = 0;

    while (getline(file, line))
    {
        /* possible float_array's: vertices and normals */
        if (std::string::npos != line.find("<float_array"))
        {
            if (std::string::npos != line.find("positions"))
            {
                // get the size of the vertices float_array
                // to explain the hardcoded values: count=" is 7 chars
                string count_substr = line.substr(line.find("count"), line.find(">"));
                count_substr = count_substr.substr(7, count_substr.find("\""));
                size = atoi(count_substr.c_str());
                cout << "\tSIZE:" << size << endl;

                vertices = new float[size];
                memset(vertices, 0, sizeof(float) * size);

                string values = line.substr(line.find("count") + 11, line.size());
                values = values.substr(0, values.size() - 14);

                std::stringstream converter(values);
                vector<float> temp;
                // Iterate over the istream, using >> to grab floats
                // and push_back to store them in the vector
                std::copy(std::istream_iterator<float>(converter),
                          std::istream_iterator<float>(),
                          std::back_inserter(temp));
                for (size_t i = 0; i < temp.size(); i++) vertices[i] = temp.at(i);
            }
            //TODO: handle normal array
        }
        /* Handling indices */
        if (std::string::npos != line.find("<p>"))
        {
            string values = line.substr(line.find(">") + 1, line.length());
            values = values.substr(0, values.length() - 4);

            std::stringstream converter(values);
            vector<short> temp;
            // Iterate over the istream, using >> to grab shorts
            // and push_back to store them in the vector
            std::copy(std::istream_iterator<short>(converter),
                      std::istream_iterator<short>(),
                      std::back_inserter(temp));

            indices = new short[temp.size()];
            indexSize = temp.size();
            cout << "\tINDEXSIZE:" << indexSize << endl;
            for (size_t i = 0; i < temp.size(); i++) indices[i] = temp.at(i);
        }
    }
    cout << "+++ENDED LOADING +DAE+ FILE:" << name << "+++" << endl;
}
I still have a few problems: I'm trying to load a cube, and I checked each vertex/index and they exactly match the XML file, but the output is not at all what I expect: the cube renders its triangles, but not in the order they should be. So I have a few questions:
Does .dae save its indices in a different format than OpenGL expects?
(Optional, because it requires a lot of debugging) Can you spot my mistake? My initial thought was that the facing of the vertices is wrong, but adding glFrontFace(GL_CCW); or glFrontFace(GL_CW); doesn't change anything.

After messing around with the indices, I came to this conclusion: the indices are not only the indices of the vertices, but also of the normals and the UVs. So when loading the indices you have to be careful about which index belongs to which element.
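To illustrate that conclusion: in a typical COLLADA triangles block, the values inside <p> interleave one index per <input> (for example VERTEX, NORMAL, TEXCOORD), so consecutive values belong to different attributes. A minimal sketch of splitting such an interleaved stream, assuming exactly three inputs with offsets 0, 1, 2 (the real stride and input order must be read from the <input> elements of the actual file):

#include <vector>

// Hypothetical helper: split the flat <p> index stream into per-attribute
// index lists. 'stride' is the number of <input> elements per vertex
// (assumed here to be 3: VERTEX, NORMAL, TEXCOORD, with offsets 0, 1, 2).
void splitColladaIndices(const std::vector<short>& p, int stride,
                         std::vector<short>& vertexIdx,
                         std::vector<short>& normalIdx,
                         std::vector<short>& uvIdx)
{
    for (size_t i = 0; i + stride <= p.size(); i += stride)
    {
        vertexIdx.push_back(p[i + 0]); // offset of the VERTEX input
        normalIdx.push_back(p[i + 1]); // offset of the NORMAL input
        uvIdx.push_back(p[i + 2]);     // offset of the TEXCOORD input
    }
}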

Related

Building the set of unique vertices gives trashed result with random triangles around initial mesh

I have a problem with creating a set of unique vertices containing position coordinates, texture coordinates and normals. I decided to use std::unordered_set for this kind of problem. The problem is that it somehow does not create the correct set of vertices. The data I loaded from the .obj file is 100% correct, as it renders the mesh as I expected. But the new set of data just creates a blob of triangles.
Correct result using only positions of vertices (expected result):
And the generated data:
The code routine:
struct vertex
{
    glm::vec3 Position;
    glm::vec2 TextureCoord;
    glm::vec3 Normal;

    bool operator==(const vertex& rhs) const
    {
        bool Res1 = this->Position == rhs.Position;
        bool Res2 = this->TextureCoord == rhs.TextureCoord;
        bool Res3 = this->Normal == rhs.Normal;
        return Res1 && Res2 && Res3;
    }
};

namespace std
{
    template<>
    struct hash<vertex>
    {
        size_t operator()(const vertex& VertData) const
        {
            size_t res = 0;
            const_cast<hash<vertex>*>(this)->hash_combine(res, hash<glm::vec3>()(VertData.Position));
            const_cast<hash<vertex>*>(this)->hash_combine(res, hash<glm::vec2>()(VertData.TextureCoord));
            const_cast<hash<vertex>*>(this)->hash_combine(res, hash<glm::vec3>()(VertData.Normal));
            return res;
        }
    private:
        // NOTE: got from glm library
        void hash_combine(size_t& seed, size_t hash)
        {
            hash += 0x9e3779b9 + (seed << 6) + (seed >> 2);
            seed ^= hash;
        }
    };
}
void mesh::BuildUniqueVertices()
{
    std::unordered_set<vertex> UniqueVertices;
    //std::unordered_map<u32, vertex> UniqueVertices;
    u32 IndexCount = CoordIndices.size();
    std::vector<u32> RemapedIndices(IndexCount);

    for(u32 VertexIndex = 0;
        VertexIndex < IndexCount;
        ++VertexIndex)
    {
        vertex Vert = {};

        v3 Pos = Coords[CoordIndices[VertexIndex]];
        Vert.Position = glm::vec3(Pos.x, Pos.y, Pos.z);

        if(NormalIndices.size() != 0)
        {
            v3 Norm = Normals[NormalIndices[VertexIndex]];
            Vert.Normal = glm::vec3(Norm.x, Norm.y, Norm.z);
        }

        if(TextCoords.size() != 0)
        {
            v2 TextCoord = TextCoords[TextCoordIndices[VertexIndex]];
            Vert.TextureCoord = glm::vec2(TextCoord.x, TextCoord.y);
        }

        // NOTE: think about something faster
        auto Hint = UniqueVertices.insert(Vert);
        if (Hint.second)
        {
            RemapedIndices[VertexIndex] = VertexIndex;
        }
        else
        {
            RemapedIndices[VertexIndex] = static_cast<u32>(std::distance(UniqueVertices.begin(), UniqueVertices.find(Vert)));
        }
    }

    VertexIndices = std::move(RemapedIndices);
    Vertices.reserve(UniqueVertices.size());
    for(auto it = UniqueVertices.begin();
        it != UniqueVertices.end();)
    {
        Vertices.push_back(std::move(UniqueVertices.extract(it++).value()));
    }
}
What could the error be? I suspect that the hash function is not doing its job right and therefore I am getting really trashed results of random triangles around the initial mesh bounds. Thanks.
This construction shows that you're going down the wrong path:
RemapedIndices[VertexIndex] = static_cast<u32>(std::distance(UniqueVertices.begin(), UniqueVertices.find(Vert)));
It's clear that you want the "offset" of the vertex within UniqueVertices, but you're modifying the container in the same loop with UniqueVertices.insert(Vert);. That means the std::distance result you calculate is potentially invalidated by the next iteration. Every index except the very last one you calculate may be garbage by the time you finish the outer loop.
Just use a vector<vertex> and stop trying to de-dupe vertices. If you later find you need to optimize for better performance, you're going to be better off making sure that the mesh is optimized for cache locality.
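A minimal sketch of that simpler approach, assuming the same member containers as in the question (Coords, CoordIndices, Normals, TextCoords, Vertices, and the u32/v3/v2 aliases): just expand every index into a flat vertex array and draw without an index buffer.

// Sketch only, not the original code: build a flat, non-deduplicated vertex list.
void mesh::BuildFlatVertices()
{
    u32 IndexCount = CoordIndices.size();
    Vertices.clear();
    Vertices.reserve(IndexCount);

    for (u32 VertexIndex = 0; VertexIndex < IndexCount; ++VertexIndex)
    {
        vertex Vert = {};
        v3 Pos = Coords[CoordIndices[VertexIndex]];
        Vert.Position = glm::vec3(Pos.x, Pos.y, Pos.z);
        if (!NormalIndices.empty())
        {
            v3 Norm = Normals[NormalIndices[VertexIndex]];
            Vert.Normal = glm::vec3(Norm.x, Norm.y, Norm.z);
        }
        if (!TextCoords.empty())
        {
            v2 TextCoord = TextCoords[TextCoordIndices[VertexIndex]];
            Vert.TextureCoord = glm::vec2(TextCoord.x, TextCoord.y);
        }
        Vertices.push_back(Vert);
    }
    // No remapping: vertex i of the buffer is simply referenced as index i,
    // so you can draw without an index buffer at all.
}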
If you really, really feel the need to dedupe, you need to do it in a multi-step process, something like this...
First map the original indices to some kind of unique key (NOT the full vertex... you want something that can be used as both a map key and a map value, with good hashing and equality behaviour, like std::string). For instance, you could take the 3 source indices and convert them to fixed-length hex strings concatenated together. Just make sure the transformation is reversible. Here I've stubbed it out with std::string to_key(uint32_t CoordIndex, uint32_t NormalIndex, uint32_t TextCoordIndex).
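One possible implementation of the to_key / from_key helpers the steps below assume (a sketch only; any reversible encoding of the three indices would do):

#include <cstdint>
#include <cstdio>
#include <string>

// Sketch: pack the three source indices into a fixed-length hex string.
std::string to_key(uint32_t CoordIndex, uint32_t NormalIndex, uint32_t TextCoordIndex)
{
    char buf[3 * 8 + 1];
    std::snprintf(buf, sizeof(buf), "%08x%08x%08x", CoordIndex, NormalIndex, TextCoordIndex);
    return std::string(buf);
}

// Sketch: recover the three indices from the key.
void from_key(const std::string& key, uint32_t& CoordIndex, uint32_t& NormalIndex, uint32_t& TextCoordIndex)
{
    CoordIndex     = static_cast<uint32_t>(std::stoul(key.substr(0, 8),  nullptr, 16));
    NormalIndex    = static_cast<uint32_t>(std::stoul(key.substr(8, 8),  nullptr, 16));
    TextCoordIndex = static_cast<uint32_t>(std::stoul(key.substr(16, 8), nullptr, 16));
}

With something like that in place, the first pass looks like this: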
std::unordered_map<uint32_t, std::string> originalIndexToKey;
std::unordered_set<std::string> uniqueKeys;
uint32_t IndexCount = CoordIndices.size();
for (u32 VertexIndex = 0; VertexIndex < IndexCount; ++VertexIndex) {
    uint32_t CoordIndex = UINT32_MAX, NormalIndex = UINT32_MAX, TextCoordIndex = UINT32_MAX;
    CoordIndex = CoordIndices[VertexIndex];
    if (NormalIndices.size() != 0) {
        NormalIndex = NormalIndices[VertexIndex];
    }
    if (TextCoords.size() != 0) {
        TextCoordIndex = TextCoordIndices[VertexIndex];
    }
    std::string key = to_key(CoordIndex, NormalIndex, TextCoordIndex);
    originalIndexToKey[VertexIndex] = key;
    uniqueKeys.insert(key);
}
Next, take all the unique keys and construct the unique vertices with them in a vector, so they have fixed positions. Here you need to get the sub-indices back from the key with a void from_key(const std::string& key, uint32_t & CoordIndex, uint32_t & NormalIndex, uint32_t & TextCoordIndex) function.
std::unordered_map<std::string, uint32_t> keyToNewIndex;
std::vector<vertex> uniqueVertices;
for (const auto& key : uniqueKeys) {
    uint32_t NewIndex = uniqueVertices.size();
    keyToNewIndex[key] = NewIndex;
    // convert the key back into 3 indices
    uint32_t CoordIndex, NormalIndex, TextCoordIndex;
    from_key(key, CoordIndex, NormalIndex, TextCoordIndex);
    vertex Vert = {};
    v3 Pos = Coords[CoordIndex];
    Vert.Position = glm::vec3(Pos.x, Pos.y, Pos.z);
    if (NormalIndex != UINT32_MAX) {
        v3 Norm = Normals[NormalIndex];
        Vert.Normal = glm::vec3(Norm.x, Norm.y, Norm.z);
    }
    if (TextCoordIndex != UINT32_MAX) {
        v2 TextCoord = TextCoords[TextCoordIndex];
        Vert.TextureCoord = glm::vec2(TextCoord.x, TextCoord.y);
    }
    uniqueVertices.push_back(Vert);
}
Finally, you need to map from the original index out to the new index, preserving the triangle geometry.
std::vector<uint32_t> RemapedIndices;
RemapedIndices.reserve(IndexCount);
for (u32 OriginalVertexIndex = 0; OriginalVertexIndex < IndexCount; ++OriginalVertexIndex) {
    auto key = originalIndexToKey[OriginalVertexIndex];
    auto NewIndex = keyToNewIndex[key];
    RemapedIndices.push_back(NewIndex);
}
The image is not completely randomly messed up. If you combine the two images, they fit perfectly:
The only problem: a mesh means triangles, and you tell Vulkan how to draw each triangle. Having a mesh composed of 8 triangles and 9 unique vertices, you pass the vertices of each of the 8 triangles separately, see the screenshot below:
1:[1,2,4], 2:[2,5,4], 3:[2,3,5], 4:[3,6,5], 5:[4,5,7], 6:[5,8,7], 7:[5,6,8], 8:[6,9,8] (see red numbers).
So vertex 5 will be repeated 6 times, because triangles 2,3,4,5,6,7 share vertex 5. The same goes for normals and textures. The vast majority of vertices, with all their coordinates, normals, textures and colors set, will be repeated exactly 6 times, or 12 times if the mesh is double-sided. The boundary vertices will be repeated 3 times (6 if double-sided), and the corner vertices 1 or 2 times (see the left and right edge), respectively 2 or 4 times if double-sided:
If you try to remove the duplicates, you turn the mesh into a mess, regardless of how well the vertices are sorted: the order no longer makes any sense.
If you want to use a unique set of coordinates, Vulkan supports index buffers just as OpenGL does. So you can pass unique coordinates, but you have to build the index array correctly: all vertices will be unique, but the indices in the index array will not.
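For reference, a minimal sketch of indexed drawing in Vulkan, assuming indexBuffer already holds the uint32_t index array and cmd is a command buffer currently being recorded (names are illustrative):

// Bind the index buffer and issue an indexed draw; the vertex buffer holds only
// the unique vertices, while the index buffer repeats indices as often as needed.
vkCmdBindIndexBuffer(cmd, indexBuffer, 0, VK_INDEX_TYPE_UINT32);
vkCmdDrawIndexed(cmd, indexCount, 1, 0, 0, 0);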

C++ obj loader texture coordinates messed up

I have written a simple OBJ parser in C++ that loads the vertices, indices and texture coordinates (that's all the data I need).
Here is the function:
Model* ModelLoader::loadFromOBJ(string objFile, ShaderProgram *shader, GLuint texture)
{
    fstream file;
    file.open(objFile);
    if (!file.is_open())
    {
        cout << "ModelLoader: " << objFile << " was not found";
        return NULL;
    }

    int vertexCount = 0;
    int indexCount = 0;
    vector<Vector3> vertices;
    vector<Vector2> textureCoordinates;
    vector<Vector2> textureCoordinatesFinal;
    vector<unsigned int> vertexIndices;
    vector<unsigned int> textureIndices;

    string line;
    while (getline(file, line))
    {
        vector<string> splitLine = Common::splitString(line, ' ');
        // v - vertex
        if (splitLine[0] == "v")
        {
            Vector3 vertex(stof(splitLine[1]), stof(splitLine[2]), stof(splitLine[3]));
            vertices.push_back(vertex);
            vertexCount++;
        }
        // vt - texture coordinate
        else if (splitLine[0] == "vt")
        {
            Vector2 textureCoordinate(stof(splitLine[1]), 1 - stof(splitLine[2]));
            textureCoordinates.push_back(textureCoordinate);
        }
        // f - face
        else if (splitLine[0] == "f")
        {
            vector<string> faceSplit1 = Common::splitString(splitLine[1], '/');
            vector<string> faceSplit2 = Common::splitString(splitLine[2], '/');
            vector<string> faceSplit3 = Common::splitString(splitLine[3], '/');
            unsigned int vi1 = stoi(faceSplit1[0]) - 1;
            unsigned int vi2 = stoi(faceSplit2[0]) - 1;
            unsigned int vi3 = stoi(faceSplit3[0]) - 1;
            unsigned int ti1 = stoi(faceSplit1[1]) - 1;
            unsigned int ti2 = stoi(faceSplit2[1]) - 1;
            unsigned int ti3 = stoi(faceSplit3[1]) - 1;
            vertexIndices.push_back(vi1);
            vertexIndices.push_back(vi2);
            vertexIndices.push_back(vi3);
            textureIndices.push_back(ti1);
            textureIndices.push_back(ti2);
            textureIndices.push_back(ti3);
            indexCount += 3;
        }
    }

    // rearranging textureCoordinates into textureCoordinatesFinal based on textureIndices
    for (int i = 0; i < indexCount; i++)
        textureCoordinatesFinal.push_back(textureCoordinates[textureIndices[i]]);

    Model *result = new Model(shader, vertexCount, &vertices[0], NULL, texture, indexCount, &textureCoordinatesFinal[0], &vertexIndices[0]);
    models.push_back(result);
    return result;
}
As you can see, I take into account the 1 - texCoord.y (because Blender and OpenGL use different coordinate systems for textures).
I also arrange the texture coordinates based on the texture indices after the while loop.
However, the models I try to render have their textures messed up. Here is an example:
Texture messed up
I even tried it with a single cube which I unwrapped myself in Blender and applied a very simple brick texture to. On 1 or 2 faces the texture was fine and working; on some other faces, one of the triangles had a correct texture and the others appeared stretched out (same as in the picture above).
To define a mesh, there is only one index list that indexes the vertex attributes. The vertex attributes (in your case the vertex positions and the texture coordinates) form a record set, which is referred to by these indices.
This means that each vertex coordinate may occur several times in the list and each texture coordinate may occur several times in the list, but each combination of vertex coordinate and texture coordinate is unique.
Take the vertexIndices and textureIndices and create unique pairs of vertices and texture coordinates (verticesFinal, textureCoordinatesFinal).
Create new attribute_indices, which index the pairs.
Use a temporary container attribute_pairs to manage the unique pairs and to identify their indices:
#include <vector>
#include <map>

// input
std::vector<Vector3> vertices;
std::vector<Vector2> textureCoordinates;
std::vector<unsigned int> vertexIndices;
std::vector<unsigned int> textureIndices;

std::vector<unsigned int> attribute_indices;   // final indices
std::vector<Vector3> verticesFinal;            // final vertices buffer
std::vector<Vector2> textureCoordinatesFinal;  // final texture coordinate buffer

// map a pair of indices to the final attribute index
std::map<std::pair<unsigned int, unsigned int>, unsigned int> attribute_pairs;

// vertexIndices.size() == textureIndices.size()
for ( size_t i = 0; i < vertexIndices.size(); ++ i )
{
    // pair of vertex index and texture index
    auto attr = std::make_pair( vertexIndices[i], textureIndices[i] );

    // check if the pair already is a member of "attribute_pairs"
    auto attr_it = attribute_pairs.find( attr );
    if ( attr_it == attribute_pairs.end() )
    {
        // "attr" is a new pair
        // add the attributes to the final buffers
        verticesFinal.push_back( vertices[attr.first] );
        textureCoordinatesFinal.push_back( textureCoordinates[attr.second] );

        // the new final index is the next index
        unsigned int new_index = (unsigned int)attribute_pairs.size();
        attribute_indices.push_back( new_index );

        // add the new map entry
        attribute_pairs[attr] = new_index;
    }
    else
    {
        // the pair "attr" already exists: add the index which was found in the map
        attribute_indices.push_back( attr_it->second );
    }
}
Note that the number of vertex coordinates (verticesFinal.size()) is equal to the number of texture coordinates (textureCoordinatesFinal.size()). But the number of indices (attribute_indices.size()) is something completely different.
// verticesFinal.size() == textureCoordinatesFinal.size()
Model *result = new Model(
    shader,
    verticesFinal.size(),
    verticesFinal.data(),
    NULL, texture,
    attribute_indices.size(),
    textureCoordinatesFinal.data(),
    attribute_indices.data() );

Bone weights to GPU in vectors

I've been working on skeletal animation using OpenGL, sending bone weight influences to the GPU in a struct containing a pair of arrays. However, changing these arrays to vectors doesn't seem to work (as in, the model ceases to render, as if the bone information were missing or wrong in some manner).
Aside from the declaration of the vectors within the struct, the code path is identical. I've debugged through and ensured the sizes/values of the elements are okay. The only thing I can think of is that C++ is freeing the contents of the vectors before they can be uploaded to the GPU, but I've seen no evidence to support that claim.
It's worth noting that the resizes are there to make the structure compatible with the existing array functionality, and will be removed to allow for dynamic scaling after this issue is resolved.
Here's the structure as it stands, using vectors:
struct VertexBoneData
{
    std::vector<glm::uint> IDs;
    std::vector<float> weights;

    VertexBoneData()
    {
        Reset();
        IDs.resize(NUM_BONES_PER_VEREX);
        weights.resize(NUM_BONES_PER_VEREX);
    };

    void Reset()
    {
        IDs.clear();
        weights.clear();
    }

    void AddBoneData(glm::uint p_boneID, float p_weight);
};
Edit: Additional information.
Here's the loop to get the bone information into the struct:
void Skeleton::LoadBones(glm::uint baseVertex, const aiMesh* mesh, std::vector<VertexBoneData>& bones)
{
    for (glm::uint i = 0; i < mesh->mNumBones; i++) {
        glm::uint boneIndex = 0;
        std::string boneName(mesh->mBones[i]->mName.data);

        if (_boneMap.find(boneName) == _boneMap.end()) {
            // Allocate an index for a new bone
            boneIndex = _numBones;
            _numBones++;
            BoneInfo bi;
            _boneInfo.push_back(bi);
            SetMat4x4(_boneInfo[boneIndex].boneOffset, mesh->mBones[i]->mOffsetMatrix);
            _boneMap[boneName] = boneIndex;
        }
        else {
            boneIndex = _boneMap[boneName];
        }

        for (glm::uint j = 0; j < mesh->mBones[i]->mNumWeights; j++) {
            glm::uint vertId = baseVertex + mesh->mBones[i]->mWeights[j].mVertexId;
            float weight = mesh->mBones[i]->mWeights[j].mWeight;
            bones[vertId].AddBoneData(boneIndex, weight);
        }
    }
}
The code to add the ID/weights to the vertex:
void VertexBoneData::AddBoneData(glm::uint p_boneID, float p_weight)
{
    for (glm::uint i = 0; i < ARRAY_SIZE_IN_ELEMENTS(IDs); i++) {
        if (weights[i] == 0.0) {
            IDs[i] = p_boneID;
            weights[i] = p_weight;
            return;
        }
    }
    assert(0); // Should never get here - more bones than we have space for
}
Loop to load the Skeleton and insert to the VBO object from the Mesh class:
if (_animator.HasAnimations())
{
    bones.resize(totalIndices);
    for (unsigned int i = 0; i < _scene->mNumMeshes; ++i){
        _animator.LoadSkeleton(_subMeshes[i].baseVertex, _scene->mMeshes[i], bones);
    }
    _vertexBuffer[2].AddData(&bones);
}
Insert to the VBO object:
void VertexBufferObject::AddData(void* ptr_data)
{
    data = *static_cast<std::vector<BYTE>*>(ptr_data);
}
Finally the code to add the information to the GPU from the Mesh class:
_vertexBuffer[2].BindVBO();
_vertexBuffer[2].UploadDataToGPU(GL_STATIC_DRAW);
glEnableVertexAttribArray(BONE_ID_LOCATION);
glVertexAttribIPointer(BONE_ID_LOCATION, 4, GL_INT, sizeof(VertexBoneData), (const GLvoid*)0);
glEnableVertexAttribArray(BONE_WEIGHT_LOCATION);
glVertexAttribPointer(BONE_WEIGHT_LOCATION, 4, GL_FLOAT, GL_FALSE, sizeof(VertexBoneData), (const GLvoid*)(sizeof(VertexBoneData) / 2));
This can't possibly work. A std::vector instance is an object that internally contains a pointer to dynamically allocated data.
So with this definition:
struct VertexBoneData
{
    std::vector<glm::uint> IDs;
    std::vector<float> weights;
the values for the IDs and weights aren't stored directly in the structure. They are in dynamically allocated heap memory. The structure only contains the vector objects, which hold pointers and some bookkeeping attributes.
You can't store these structs directly in an OpenGL buffer and have the GPU access the data. OpenGL won't know what to do with vector objects.
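A sketch of the kind of layout that does work here, assuming 4 bone influences per vertex (the fixed-size arrays live inside the struct itself, so an array of these structs is one contiguous block that the existing glVertexAttribPointer calls can address):

// Plain-old-data layout (sketch, assuming NUM_BONES_PER_VEREX == 4): the IDs and
// weights are stored inline, so sizeof(VertexBoneData) is exactly 4 uints + 4 floats
// and an array of these can be memcpy'd / uploaded to a GL buffer directly.
struct VertexBoneData
{
    glm::uint IDs[NUM_BONES_PER_VEREX];
    float     weights[NUM_BONES_PER_VEREX];

    void Reset()
    {
        for (int i = 0; i < NUM_BONES_PER_VEREX; ++i) { IDs[i] = 0; weights[i] = 0.0f; }
    }

    void AddBoneData(glm::uint p_boneID, float p_weight)
    {
        for (int i = 0; i < NUM_BONES_PER_VEREX; ++i) {
            if (weights[i] == 0.0f) { IDs[i] = p_boneID; weights[i] = p_weight; return; }
        }
    }
};

With this layout, the (const GLvoid*)(sizeof(VertexBoneData) / 2) offset in the existing glVertexAttribPointer call again points at the weights half of the struct.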

VW3D Model Format

Lately, I have been working to resurrect an old open-source game. My main problem is that it uses a custom model format: VW3D. The following code snippet is used to load the model from its file system. Is it possible to reconstruct the model format (and convert it) from the snippet below? I have no idea how I would go about this and would appreciate any pointers. My hope is to write a script to convert to/from this format (planning for vw3d to obj and vice versa).
void eModel3D::ReadVW3D(const char *nName)
{
    eFILE *file = 0;
    file = vw_fopen(nName);
    if (file == 0) return;

    size_t SizeB = strlen(nName)+1;
    Name = new char[SizeB];
    strcpy(Name, nName);

    // пропускаем заголовок / skip header
    file->fread(&DrawObjectCount, 4, 1);

    // читаем, сколько объектов / read how many objects
    file->fread(&DrawObjectCount, sizeof(int), 1);

    DrawObjectList = new eObjectBlock[DrawObjectCount];

    unsigned int GlobalRangeStart = 0;

    // для каждого объекта / for each object
    for (int i=0; i<DrawObjectCount; i++)
    {
        DrawObjectList[i].RangeStart = GlobalRangeStart;

        // FVF_Format
        file->fread(&(DrawObjectList[i].FVF_Format), sizeof(int), 1);
        // Stride
        file->fread(&(DrawObjectList[i].Stride), sizeof(int), 1);
        // VertexCount - на самом деле, это кол-во индексов на объект / in fact, this is the index count of the object
        file->fread(&(DrawObjectList[i].VertexCount), sizeof(int), 1);
        GlobalRangeStart += DrawObjectList[i].VertexCount;
        // Location
        file->fread(&(DrawObjectList[i].Location), sizeof(float)*3, 1);
        // Rotation
        file->fread(&(DrawObjectList[i].Rotation), sizeof(float)*3, 1);

        // рисуем нормально, не прозрачным / draw normally, not transparent
        DrawObjectList[i].DrawType = 0;

        // вертексный буфер / vertex buffer
        DrawObjectList[i].VertexBufferDestrType = 0;
        DrawObjectList[i].VertexBuffer = 0;
        DrawObjectList[i].VertexBufferVBO = 0;
        // индексный буфер / index buffer
        DrawObjectList[i].IndexBuffer = 0;
        DrawObjectList[i].IndexBufferVBO = 0;
    }

    // получаем сколько всего вертексов / get how many vertices in total
    int VCount = 0;
    file->fread(&VCount, sizeof(int), 1);

    // собственно данные / actual data
    GlobalVertexBuffer = new float[VCount*DrawObjectList[0].Stride];
    file->fread(GlobalVertexBuffer, VCount*DrawObjectList[0].Stride*sizeof(float), 1);

    // индекс буфер / index buffer
    GlobalIndexBuffer = new unsigned int[GlobalRangeStart];
    file->fread(GlobalIndexBuffer, GlobalRangeStart*sizeof(unsigned int), 1);

    // делаем общее VBO / make the shared VBO
    GlobalVertexBufferVBO = new unsigned int;
    if (!vw_BuildVBO(VCount, GlobalVertexBuffer, DrawObjectList[0].Stride, GlobalVertexBufferVBO))
    {
        delete GlobalVertexBufferVBO; GlobalVertexBufferVBO=0;
    }

    // делаем общий индекс VBO / make the shared index VBO
    GlobalIndexBufferVBO = new unsigned int;
    if (!vw_BuildIndexVBO(GlobalRangeStart, GlobalIndexBuffer, GlobalIndexBufferVBO))
    {
        delete GlobalIndexBufferVBO; GlobalIndexBufferVBO=0;
    }

    // устанавливаем правильные указатели на массивы / set up the correct pointers to the arrays
    for (int i=0; i<DrawObjectCount; i++)
    {
        DrawObjectList[i].VertexBuffer = GlobalVertexBuffer;
        DrawObjectList[i].VertexBufferVBO = GlobalVertexBufferVBO;
        DrawObjectList[i].IndexBuffer = GlobalIndexBuffer;
        DrawObjectList[i].IndexBufferVBO = GlobalIndexBufferVBO;
    }

    vw_fclose(file);
}
The code you've posted is pretty straightforward. I especially enjoy the Cyrillic/English comment pairs. You probably can't fully convert back and forth between the VW3D and OBJ formats. It looks like VW3D is exclusively about geometry, while the OBJ format also provides for textures, normals, and material properties. Furthermore, I think OBJ stores one object per file, while VW3D stores multiple objects, potentially re-using the same object geometry multiple times with different translations and rotations.
The functions vw_BuildVBO and vw_BuildIndexVBO are doing some key work; although I can guess at what they're doing, it'd be better to have access to them.
The other potential problem is the 'Rotation' variable. It seems to be an array holding 3 floats. That suggests they're using Euler rotation, but there are multiple ways to do that. If you wanted to make sure to orient the objects correctly, you'd need to understand a little more about how those three floats are used when drawing the object.
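Reading ReadVW3D back, the on-disk layout appears to be roughly the following. This is a reconstruction from the loader only, not an official spec, and the field sizes assume 32-bit int and float:

// Inferred VW3D layout (32-bit fields), reconstructed from ReadVW3D:
//
//   int32      header            // skipped by the loader (read into DrawObjectCount and overwritten)
//   int32      objectCount
//   repeated objectCount times:
//       int32      FVF_Format    // flexible-vertex-format flags
//       int32      Stride        // floats per vertex
//       int32      IndexCount    // called VertexCount in the code, but it is the index count
//       float32[3] Location
//       float32[3] Rotation      // presumably Euler angles, order unknown
//   int32      totalVertexCount
//   float32[totalVertexCount * Stride]   vertex data (shared by all objects)
//   uint32[sum of all IndexCount]        index data  (shared by all objects)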

Trying to make an array of DirectX vertices without knowing until run time what type they will be

Bit of background for those who don't know DirectX: a vertex is not just an XYZ position, it can have other data in it as well. DirectX uses a system known as the Flexible Vertex Format, FVF, to let you define what format you want your vertices to be in. You define these by passing DirectX a number built up with bitwise OR, e.g. (D3DFVF_XYZ | D3DFVF_DIFFUSE) means you are going to start using (from when you tell DirectX) vertices that have an XYZ component (three floats) and an RGB component (DWORD / unsigned long).
In order to pass your vertices to the graphics card, you basically lock the memory on the graphics card where your buffer is, and use memcpy to transfer your array over.
Your array is an array of a struct you define yourself, so in this case you would have made a struct like...
struct CUSTOMVERTEX {
    FLOAT X, Y, Z;
    DWORD COLOR;
};
You then make an array of type CUSTOMVERTEX and fill in the data fields.
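For context, the lock/copy/unlock pattern described above looks roughly like this in D3D9. This is a sketch; pVertexBuffer is assumed to be an already-created IDirect3DVertexBuffer9* large enough for the data:

// Lock the vertex buffer, copy the CUSTOMVERTEX array into it, then unlock.
CUSTOMVERTEX vertices[3] = { /* ... fill in positions and colors ... */ };

void* pData = nullptr;
if (SUCCEEDED(pVertexBuffer->Lock(0, sizeof(vertices), &pData, 0)))
{
    memcpy(pData, vertices, sizeof(vertices));
    pVertexBuffer->Unlock();
}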
I think my best approach is to let my class build up an array of each component type, so an array of struct pos { float x, y, z; };, an array of struct colour { DWORD colour; };, etc.
But I will then need to merge these together so that I have an array of structs like CUSTOMVERTEX.
Now, I think I have made a function that will merge two arrays together, but I am not sure if it is going to work as intended. Here it is (currently missing the ability to actually return this 'interlaced' array):
void Utils::MergeArrays(char *ArrayA, char *ArrayB, int elementSizeA, int elementSizeB, int numElements)
{
    char *packedElements = (char*)malloc(numElements* (elementSizeA, elementSizeB));
    char *nextElement = packedElements;
    for(int i = 0; i < numElements; ++i)
    {
        memcpy(nextElement, (void*)ArrayA[i], elementSizeA);
        nextElement += elementSizeA;
        memcpy(nextElement, (void*)ArrayB[i], elementSizeB);
        nextElement += elementSizeB;
    }
}
When calling this function, you pass in the two arrays you want merged, the size of the elements in each array, and the number of elements in your arrays.
I was asking about this in chat for a while whilst SO was down. A few things to say.
I am dealing with fairly small data sets, like 100 tops, and this (in theory) is more of an initialisation task, so it should only get done once; a bit of time is OK by me.
My final array that I want to be able to use memcpy on to transfer into the graphics card needs to have no padding; it has to be contiguous data.
EDIT The combined array of vertex data will be transferred to the GPU. This is done by asking the GPU to set a void* to the start of the memory I have access to, requesting space the size of my customVertex * NumOfVertex. So if my mergeArray function loses what the types within it are, that is OK, just as long as I get my single combined array to transfer in one block. /EDIT
Finally, there is a damn good chance I am barking up the wrong tree with this, so there may well be a much simpler way to just not have this problem in the first place, but part of me has dug my heels in and wants to get this system working, so I would appreciate knowing how to get such a system to work (the interlacing-arrays thing).
Thank you so much... I need to soothe my head now, so I look forward to hearing any ideas on the problem.
No, no, no. The FVF system has been deprecated for years and isn't even available in D3D10 or later. D3D9 uses the VertexElement system. Sample code:
D3DVERTEXELEMENT9 VertexColElements[] =
{
    {0, 0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0},
    {0, 12, D3DDECLTYPE_D3DCOLOR, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_COLOR, 0},
    D3DDECL_END(),
};
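For completeness, a sketch of how such an element array is typically turned into a declaration and bound on a D3D9 device (assuming device is a valid IDirect3DDevice9*):

// Create a vertex declaration from the element array and bind it
// before drawing, instead of calling SetFVF.
IDirect3DVertexDeclaration9* pDecl = nullptr;
if (SUCCEEDED(device->CreateVertexDeclaration(VertexColElements, &pDecl)))
{
    device->SetVertexDeclaration(pDecl);
}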
The FVF system has a number of fundamental flaws - for example, which order the bytes go in.
On top of that, if you want to make a runtime-variant vertex data format, then you will need to write a shader for every possible variant that you may want to have, and compile them all, and spend your life swapping them around. And, the effects on the final product would be insane - for example, how could you possibly write a competitive rendering engine if you decide to take out the lighting data you need to Phong shade?
The reality is that a runtime-variant vertex format is more than a tad insane.
However, I guess I'd better lend a hand. What you really need is a polymorphic function object and some plain memory- D3D takes void*s or somesuch so that's not a big deal. When you call the function object, it adds to the FVF declaration and copies data into the memory.
class FunctionBase {
public:
    virtual ~FunctionBase() {}
    virtual void Call(std::vector<std::vector<char>>& vertices, std::vector<D3DVERTEXELEMENT9>& vertexdecl, int& offset) = 0;
};

// Example implementation
class Position : public FunctionBase {
    virtual void Call(std::vector<std::vector<char>>& vertices, std::vector<D3DVERTEXELEMENT9>& vertexdecl, int& offset) {
        std::for_each(vertices.begin(), vertices.end(), [&](std::vector<char>& data) {
            float x[3] = {0};
            char* ptr = (char*)x;
            for(unsigned i = 0; i < sizeof(x); i++) {
                data.push_back(ptr[i]);
            }
        });
        vertexdecl.push_back({0, (WORD)offset, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0});
        offset += sizeof(float) * 3;
    }
};
std::vector<std::vector<char>> vertices;
std::vector<D3DVERTEXELEMENT9> vertexdecl;
vertices.resize(vertex_count);
std::vector<std::shared_ptr<FunctionBase>> functions;

// add to functions here

int offset = 0;
std::for_each(functions.begin(), functions.end(), [&](std::shared_ptr<FunctionBase>& ref) {
    ref->Call(vertices, vertexdecl, offset);
});
vertexdecl.push_back(D3DDECL_END());
Excuse my use of lambdas, I use a C++0x compiler.
Your solution looks fine. But if you want something a bit more C++ish, you could try something like this:
Edit My previous solution basically recreated something that already existed, std::pair. I don't know what I was thinking, here's the even more C++ish solution:
template<typename InIt_A, typename InIt_B, typename OutIt>
void MergeArrays(InIt_A ia, InIt_B ib, OutIt out, std::size_t size)
{
    for(std::size_t i=0; i<size; i++)
    {
        *out = std::make_pair(*ia, *ib);
        ++out;
        ++ia;
        ++ib;
    }
}

int main()
{
    pos p[100];
    color c[100];

    typedef std::pair<pos, color> CustomVertex;
    CustomVertex cv[100];

    MergeArrays(p, c, cv, 100);
}
You shouldn't have to worry about padding, because all elements in a D3D vertex are either 32 bit floats, or 32 bit integers.
Edit
Here's a solution that might work. It will do all your mergings at once, and you don't need to worry about passing around the size:
// declare a different struct for each possible vertex element
struct Position { FLOAT x,y,z; };
struct Normal { FLOAT x,y,z; };
struct Diffuse { BYTE a,r,g,b; };
struct TextureCoordinates { FLOAT u,v; };
// etc...

// I'm not all too sure about all the different elements you can have in a vertex,
// but you would want a parameter for each one in this function. Any element that
// you didn't use, you would just pass in a null pointer. Since it's properly
// typed, you won't be able to pass in an array of the wrong type without casting.
std::vector<char> MergeArrays(Position * ppos, Normal * pnorm, Diffuse * pdif, TextureCoordinates * ptex, int size)
{
    int element_size = 0;
    if(ppos) element_size += sizeof(Position);
    if(pnorm) element_size += sizeof(Normal);
    if(pdif) element_size += sizeof(Diffuse);
    if(ptex) element_size += sizeof(TextureCoordinates);

    std::vector<char> packed(element_size * size);
    std::vector<char>::iterator it = packed.begin();
    while(it != packed.end())
    {
        if(ppos)
        {
            it = std::copy_n(reinterpret_cast<char*>(ppos), sizeof(Position), it);
            ppos++;
        }
        if(pnorm)
        {
            it = std::copy_n(reinterpret_cast<char*>(pnorm), sizeof(Normal), it);
            pnorm++;
        }
        if(pdif)
        {
            it = std::copy_n(reinterpret_cast<char*>(pdif), sizeof(Diffuse), it);
            pdif++;
        }
        if(ptex)
        {
            it = std::copy_n(reinterpret_cast<char*>(ptex), sizeof(TextureCoordinates), it);
            ptex++;
        }
    }
    return packed;
}
// Testing it out. We'll create an array of 10 each of some of the elements.
// We'll use Position, Normal, and Texture Coordinates. We'll pass in a NULL
// for Diffuse.
int main()
{
    Position p[10];
    Normal n[10];
    TextureCoordinates tc[10];

    // Fill in the arrays with dummy data that we can easily read. In this
    // case, what we'll do is cast each array to a char*, and fill in each
    // successive element with an incrementing value.
    for(int i=0; i<10*sizeof(Position); i++)
    {
        reinterpret_cast<char*>(p)[i] = i;
    }
    for(int i=0; i<10*sizeof(Normal); i++)
    {
        reinterpret_cast<char*>(n)[i] = i;
    }
    for(int i=0; i<10*sizeof(TextureCoordinates); i++)
    {
        reinterpret_cast<char*>(tc)[i] = i;
    }

    std::vector<char> v = MergeArrays(p, n, NULL, tc, 10);

    // Output the vector. It should be interlaced:
    // Position-Normal-TexCoordinates-Position-Normal-TexCoordinates-etc...
    std::for_each(v.begin(), v.end(),
        [](const char & c) { std::cout << (int)c << std::endl; });
    std::cout << std::endl;
}
Altering your code, this should do it:
void* Utils::MergeArrays(char *ArrayA, char *ArrayB, int elementSizeA, int elementSizeB, int numElements)
{
    char *packedElements = (char*)malloc(numElements * (elementSizeA + elementSizeB));
    char *nextElement = packedElements;
    for(int i = 0; i < numElements; ++i)
    {
        memcpy(nextElement, ArrayA + i*elementSizeA, elementSizeA);
        nextElement += elementSizeA;
        memcpy(nextElement, ArrayB + i*elementSizeB, elementSizeB);
        nextElement += elementSizeB;
    }
    return packedElements;
}
Note that you probably want some code that merges all the attributes at once, rather than 2 at a time (think position+normal+texture coordinate+color+...). Also note that you can do that merging at the time you fill out your vertex buffer, so that you don't ever need to allocate packedElements.
Something like:
//pass the Locked buffer in as destArray
void Utils::MergeArrays(char* destArray, char **Arrays, int* elementSizes, int numArrays, int numElements)
{
char* nextElement = destArray;
for(int i = 0; i < numElements; ++i)
{
for (int array=0; array<numArrays; ++array)
{
int elementSize = elementSizes[array];
memcpy(nextElement, Arrays[array] + i*elementSize, elementSize);
nextElement += elementSize;
}
}
}
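A hypothetical usage sketch of that generalized version, interleaving position, colour and texture-coordinate arrays straight into a locked buffer. The names lockedPtr and the attribute structs are illustrative, not from the original post:

// Three parallel attribute arrays, interleaved directly into the locked vertex buffer.
struct pos    { float x, y, z; };
struct colour { DWORD c; };
struct uv     { float u, v; };

pos    positions[100];
colour colours[100];
uv     texcoords[100];

char* arrays[]       = { (char*)positions, (char*)colours, (char*)texcoords };
int   elementSizes[] = { sizeof(pos), sizeof(colour), sizeof(uv) };

// lockedPtr would come from IDirect3DVertexBuffer9::Lock on a buffer of
// 100 * (sizeof(pos) + sizeof(colour) + sizeof(uv)) bytes.
Utils::MergeArrays((char*)lockedPtr, arrays, elementSizes, 3, 100);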
I don't know DirectX, but the exact same sort of concept exists in OpenGL, and in OpenGL you can specify the location and stride of each vertex attribute. You can have alternating attributes (like your first struct) or you can store them in different blocks. In OpenGL you use glVertexPointer to set these things up. Considering that DirectX is ultimately running on the same hardware underneath, I suspect there's some way to do the same thing in DirectX, but I don't know what it is.
Some Googling with DirectX and glVertexPointer as keywords turns up SetFVF and SetVertexDeclaration.
MSDN on SetFVF, gamedev discussion comparing them