Bullet Physics - How can I convert btHeightfieldTerrainShape to btTriangleMeshShape? - c++

I found that a btStridingMeshInterface is required to create a btBvhTriangleMeshShape.
But all I have is the (Width * Height) height samples used by btHeightfieldTerrainShape. There is no index information in that data.
Then I found this post on the forum: https://pybullet.org/Bullet/phpBB3/viewtopic.php?p=38852&hilit=btTriangleIndexVertexArray#p38852
so I modified that code like this:
auto TerrainShape = new btHeightfieldTerrainShape(mWidth * mTerrainScale.x, mDepth * mTerrainScale.z, mHeightmapData, minHeight, maxHeight, 1, false);
TerrainShape->setLocalScaling(btVector3(1, mTerrainScale.y, 1));
btVector3 aabbMin, aabbMax;
for (int k = 0; k < 3; k++)
{
aabbMin[k] = -BT_LARGE_FLOAT;
aabbMax[k] = BT_LARGE_FLOAT;
}
std::vector<XMFLOAT3> vertices;
std::vector<unsigned short> Targetindices;
btTriangleCollector collector;
collector.m_pVerticesOut = &vertices;
collector.m_pIndicesOut = &Targetindices;
TerrainShape->processAllTriangles(&collector, aabbMin, aabbMax);
mVertexArray = std::make_shared<btTriangleIndexVertexArray>();
btIndexedMesh tempMesh;
mVertexArray->addIndexedMesh(tempMesh, PHY_FLOAT);
btIndexedMesh& mesh = mVertexArray->getIndexedMeshArray()[0];
const int32_t VERTICES_PER_TRIANGLE = 3;
size_t numIndices = Targetindices.size();
mesh.m_numTriangles = numIndices / VERTICES_PER_TRIANGLE;
if (numIndices < std::numeric_limits<int16_t>::max())
{
mesh.m_triangleIndexBase = new unsigned char[sizeof(int16_t) * (size_t)numIndices];
mesh.m_indexType = PHY_SHORT;
mesh.m_triangleIndexStride = VERTICES_PER_TRIANGLE * sizeof(int16_t);
}
else
{
mesh.m_triangleIndexBase = new unsigned char[sizeof(int32_t) * (size_t)numIndices];
mesh.m_indexType = PHY_INTEGER;
mesh.m_triangleIndexStride = VERTICES_PER_TRIANGLE * sizeof(int32_t);
}
mesh.m_numVertices = vertices.size();
mesh.m_vertexBase = new unsigned char[VERTICES_PER_TRIANGLE * sizeof(btScalar) * (size_t)mesh.m_numVertices];
mesh.m_vertexStride = VERTICES_PER_TRIANGLE * sizeof(btScalar);
btScalar* vertexData = static_cast<btScalar*>((void*)(mesh.m_vertexBase));
for (int32_t i = 0; i < mesh.m_numVertices; ++i)
{
int32_t j = i * VERTICES_PER_TRIANGLE;
const XMFLOAT3& point = vertices[i];
vertexData[j] = point.x;
vertexData[j + 1] = point.y;
vertexData[j + 2] = point.z;
}
if (numIndices < std::numeric_limits<int16_t>::max())
{
int16_t* indices = static_cast<int16_t*>((void*)(mesh.m_triangleIndexBase));
for (int32_t i = 0; i < numIndices; ++i) {
indices[i] = (int16_t)Targetindices[i];
}
}
else
{
int32_t* indices = static_cast<int32_t*>((void*)(mesh.m_triangleIndexBase));
for (int32_t i = 0; i < numIndices; ++i) {
indices[i] = Targetindices[i];
}
}
btBvhTriangleMeshShape* shape = new btBvhTriangleMeshShape(mVertexArray.get(), true);
btTransform btTerrainTransform;
btTerrainTransform.setIdentity();
btTerrainTransform.setOrigin(btVector3(0, 0, 0));
mBtRigidBody = physics->CreateRigidBody(0.0f, btTerrainTransform, shape);
What I am trying to do is simple: create a btHeightfieldTerrainShape, collect all of its triangle data through a TriangleCallback (the collector), and build a btTriangleIndexVertexArray from those data sets.
The problem is that the data I collected seems wrong.
If this code worked correctly, mBtRigidBody's AABB should be min(-511.5, 4, -511.5), max(511.5, 242, 511.5),
but what I actually get is min(-511.5, -105.389..., -511.5), max(511.5, 25.8..., -500.5):
definitely wrong values.
I think the btTriangleIndexVertexArray doesn't hold the right terrain data, but I can't find where it goes wrong.

Related

assimp: Loading "mTransformation" and "mOffsetMatrix" the right way. How to load skeletal animation from assimp?

I am currently trying to load a skeletal animation from a Collada file (.dae),
but there seems to be a bug in my code that I cannot find myself.
The bug shows up as the last children of the hierarchy not being displayed at all.
They are loaded, but transformed in a way that makes them disappear.
I try to give code snippets as dense as needed to understand what I am doing.
If you need more context, please ask!
The code I am using for loading the bone transformations,
where bone_map is the map that translates from an assimp joint name to an index into my joint array:
mesh.joint_local_transformations = std::make_unique<glm::mat4[]>(ai_mesh->mNumBones);
mesh.joint_global_inverse_transformations = std::make_unique<glm::mat4[]>(ai_mesh->mNumBones);
mesh.joint_count = ai_mesh->mNumBones;
for (uint32_t i = 0; i < ai_mesh->mNumBones; ++i)
{
const auto *joint = ai_mesh->mBones[i];
if (bone_map.contains(joint->mName.data))
{
const auto joint_index = bone_map.at(joint->mName.data);
auto joint_transformation_invert = fromAiMat4(joint->mOffsetMatrix);
const auto *node = find_node(joint->mName, ai_scene);
auto joint_transformation = fromAiMat4(node->mTransformation);
mesh.joint_global_inverse_transformations[joint_index] = joint_transformation_invert;
mesh.joint_local_transformations[joint_index] = joint_transformation;
}
}
The code I am using for loading animation transformations:
anim.joint_transformations = std::make_unique<Animation::JointTransformation[]>(anim.key_frame_count * anim.joint_count);
for (size_t i = 0; i < anim.key_frame_count; ++i)
{
for (size_t j = 0; j < ai_anim->mNumChannels; ++j)
{
if (bone_map.contains(ai_anim->mChannels[j]->mNodeName.data))
{
const auto bone_index = bone_map.at(ai_anim->mChannels[j]->mNodeName.data);
const auto r = ai_anim->mChannels[j]->mRotationKeys[i].mValue;
const auto s = ai_anim->mChannels[j]->mScalingKeys[i].mValue;
const auto p = ai_anim->mChannels[j]->mPositionKeys[i].mValue;
Animation::JointTransformation t{};
t.rotation = {r.w, r.x, r.y, r.z};
t.scale = {s.x, s.y, s.z};
t.translation = {p.x, p.y, p.z};
anim.joint_transformations[anim.joint_count * i + bone_index] = t;
}
}
}
The code I am using to animate:
Where i is the current key frame index and j is the current joint index.
Where joints is the array of transformations that is passed into the shader (the result).
Note that a parent always has a lower index than its child; therefore a parent is always processed first.
const auto *f0 = &(anim->joint_transformations[(i - 1) * anim->joint_count]);
const auto *f1 = &(anim->joint_transformations[(i - 0) * anim->joint_count]);
joints[0] = Animation::mix(f0[0], f1[0], current_time);
for (uint32_t j = 1; j < mesh->joint_count; ++j)
{
const auto &parent = joints[mesh->joint_parent_indices[j]];
joints[j] = parent * Animation::mix(f0[j], f1[j], current_time);
}
for (uint32_t j = 0; j < mesh->joint_count; ++j)
{
joints[j] = joints[j] * mesh->joint_global_inverse_transformations[j];
}
The data structure used (probably not relevant for the problem):
struct Mesh
{
std::unique_ptr<glm::vec3[]> vertices;
std::unique_ptr<uint32_t[]> indices;
std::unique_ptr<glm::vec3[]> normals;
std::unique_ptr<glm::vec2[]> tex_coords;
uint32_t vertex_count;
uint32_t indices_count;
};
struct AnimatedMesh
: public Mesh
{
std::unique_ptr<uint32_t[]> joint_indices;
std::unique_ptr<float[]> joint_weights;
std::unique_ptr<glm::mat4[]> joint_local_transformations;
std::unique_ptr<glm::mat4[]> joint_global_inverse_transformations;
std::unique_ptr<uint32_t[]> joint_parent_indices;
uint32_t joint_count;
uint32_t joints_per_vertex = 3;
};
struct Animation
{
struct JointTransformation
{
glm::vec3 translation;
glm::quat rotation;
glm::vec3 scale;
};
std::unique_ptr<JointTransformation[]> joint_transformations;
std::unique_ptr<float[]> joint_timestamps;
uint32_t joint_count;
uint32_t key_frame_count;
static glm::mat4 mix (const JointTransformation &t0, const JointTransformation &t1, float delta)
{
const auto t = glm::translate(glm::identity<glm::mat4>(), glm::mix(t0.translation, t1.translation, delta));
const auto r = glm::toMat4(glm::slerp(t0.rotation, t1.rotation, delta));
const auto s = glm::scale(glm::identity<glm::mat4>(), glm::mix(t0.scale, t1.scale, delta));
return t * r * s;
}
};
Thanks to everyone who tries to help!

Simple way to iterate and transfer data from linear array to array of vec3

Simple way of transferring data from linear array to an array of vec3?
int stride = 0;
for (int i = 0; i < vertex_count; i++)
{
s_vertex[i].active = false;
s_vertex[i].x = vertex_array[0+stride];
s_vertex[i].y = vertex_array[1+stride];
s_vertex[i].z = vertex_array[2+stride];
stride += 3;
}
Obviously this overshoots an index into the array, and the goal here is to not overshoot.
You can resolve this in a couple of ways.
Adjust the index into the std::vector.
int stride = 3;
for (int i = 0; i < vertex_count; i += stride )
{
// Making up the class name of the elements of the vector.
ElementType element;
element.active = false;
element.x = vertex_array[0+i];
element.y = vertex_array[1+i];
element.z = vertex_array[2+i];
// Index for the vector.
int i2 = i/stride;
s_vertex[i2] = element;
}
PS. Please note a slight change to how stride is used.
Use std::vector::push_back instead of indexing into it.
int stride = 3;
for (int i = 0; i < vertex_count; i += stride )
{
ElementType element;
element.active = false;
element.x = vertex_array[0+i];
element.y = vertex_array[1+i];
element.z = vertex_array[2+i];
s_vertex.push_back(element);
}
To minimize the number of memory allocations and deallocations, it's a good idea to use std::vector::reserve first before loop begins so that the calls to push_back don't need to allocate memory.
int stride = 3;
s_vertex.reserve(vertex_count/3);
for (int i = 0; i < vertex_count; i += stride )
{
ElementType element;
element.active = false;
element.x = vertex_array[0+i];
element.y = vertex_array[1+i];
element.z = vertex_array[2+i];
s_vertex.push_back(element);
}
struct vertexs {
bool active;
int x, y, z;
};
if ((vertex_count % 3) != 0)
printf("ERROR: Your array is not evenly distributed....");
int i = 0;
while (i < vertex_count)
{
vertexs v;
v.active = false;
v.x = vertex_array[i++];
v.y = vertex_array[i++];
v.z = vertex_array[i++];
s_vertex.push_back(v);
}
For something cheap to default construct like a vec3 you can resize the target array to the required size and then loop though and assign values much like you have done:
s_vertex.resize(vertex_array.size() / 3);
int i = 0;
for (auto& v : s_vertex) {
v.x = vertex_array[i];
v.y = vertex_array[i + 1];
v.z = vertex_array[i + 2];
i += 3;
}

load cv::mat to faster rcnn blob

Currently I am working with Faster R-CNN using C++. I am trying to load a cv::Mat object (a color image) into net_->blob_by_name("data"). I followed the instructions given here https://github.com/YihangLou/FasterRCNN-Encapsulation-Cplusplus but the result is really bad.
I didn't change anything from the original code, so I suspect loading the data into the blob might be the issue.
Code:
float im_info[3];
float data_buf[height*width*3];
float *boxes = NULL;
float *pred = NULL;
float *pred_per_class = NULL;
float *sorted_pred_cls = NULL;
int *keep = NULL;
const float* bbox_delt;
const float* rois;
const float* pred_cls;
int num;
for (int h = 0; h < cv_img.rows; ++h )
{
for (int w = 0; w < cv_img.cols; ++w)
{
cv_new.at<cv::Vec3f>(cv::Point(w, h))[0] = float(cv_img.at<cv::Vec3b>(cv::Point(w, h))[0])-float(102.9801);
cv_new.at<cv::Vec3f>(cv::Point(w, h))[1] = float(cv_img.at<cv::Vec3b>(cv::Point(w, h))[1])-float(115.9465);
cv_new.at<cv::Vec3f>(cv::Point(w, h))[2] = float(cv_img.at<cv::Vec3b>(cv::Point(w, h))[2])-float(122.7717);
}
}
cv::resize(cv_new, cv_resized, cv::Size(width, height));
im_info[0] = cv_resized.rows;
im_info[1] = cv_resized.cols;
im_info[2] = img_scale;
for (int h = 0; h < height; ++h )
{
for (int w = 0; w < width; ++w)
{
data_buf[(0*height+h)*width+w] = float(cv_resized.at<cv::Vec3f>(cv::Point(w, h))[0]);
data_buf[(1*height+h)*width+w] = float(cv_resized.at<cv::Vec3f>(cv::Point(w, h))[1]);
data_buf[(2*height+h)*width+w] = float(cv_resized.at<cv::Vec3f>(cv::Point(w, h))[2]);
}
}
net_->blob_by_name("data")->Reshape(1, 3, height, width);
net_->blob_by_name("data")->set_cpu_data(data_buf);
net_->blob_by_name("im_info")->set_cpu_data(im_info);
net_->ForwardFrom(0);
bbox_delt = net_->blob_by_name("bbox_pred")->cpu_data();
num = net_->blob_by_name("rois")->num();
Any advice?
Can you please modify the code and check ...
cv::resize(cv_new, cv_resized, cv::Size(width, height));
im_info[0] = cv_resized.rows;
im_info[1] = cv_resized.cols;
im_info[2] = img_scale;
net_->blob_by_name("data")->Reshape(1, 3, height, width);
const shared_ptr<Blob<float> >& data_blob = net_->blob_by_name("data");
float* data_buf = data_blob->mutable_cpu_data();
for (int h = 0; h < height; ++h )
{
for (int w = 0; w < width; ++w)
{
data_buf[(0*height+h)*width+w] = float(cv_resized.at<cv::Vec3f>(cv::Point(w, h))[0]);
data_buf[(1*height+h)*width+w] = float(cv_resized.at<cv::Vec3f>(cv::Point(w, h))[1]);
data_buf[(2*height+h)*width+w] = float(cv_resized.at<cv::Vec3f>(cv::Point(w, h))[2]);
}
}
net_->Forward();

C++ error with accessing private data to get pixels from images

I'm writing a program that creates a mosaic using filler images, and I'm having an issue with accessing the private pixel data for each filler image. I can get the pixels from the source image (image* displayed), but when I try to size the filler image down to the entered block size, I lose the data. The filler images need to be resized to blockHeight by blockWidth so that they properly fit the source image's height and width; that is the goal of the resizeFillerImage function. I am using linked lists when loading the folder with all the filler images, and I do get the averages right when I load them into the program. The pixel** array is private data within a main Image class that the fillerImage class builds on, and I cannot change this, so I need to be able to get the pixels for the filler images and set them into the temporary copy image I create.
Here is my code:
Globals:
image* displayed;
fillerNode* head = NULL;
fillerNode* head2 = NULL;
fillerNode* temp1;
fillerNode* shrinkedList;
//This function will resize each filler image based on the block size
void resizeFillerImage(int blockWidth, int blockHeight)
{
int blockSize = blockWidth * blockHeight;
int avgRed = 0, avgGreen = 0, avgBlue = 0;
image* temp = new image;
temp->createNewImage(displayed->getWidth() / blockWidth, displayed->getHeight() / blockHeight);
pixel** shrinkPix = temp->getPixels(); //THIS IS WHERE I GET THE ERROR
//The pixel numbers get reset instead of being assigned to temp
for (int i = 0; i < displayed->getHeight() / blockHeight; i++)
{
for (int j = 0; j < displayed->getWidth() / blockWidth; j++)
{
for (int k = 0; k < blockHeight; k++)
{
for (int m = 0; m < blockWidth; m++)
{
avgRed = avgRed + shrinkPix[i * blockHeight + k][j * blockWidth + m].red;
avgGreen = avgGreen + shrinkPix[i * blockHeight + k][j * blockWidth + m].green;
avgBlue = avgBlue + shrinkPix[i * blockHeight + k][j * blockWidth + m].blue;
}
}
avgRed = avgRed / blockSize;
avgGreen = avgGreen / blockSize;
avgBlue = avgBlue / blockSize;
shrinkPix[i][j].red = avgRed;
shrinkPix[i][j].green = avgGreen;
shrinkPix[i][j].blue = avgBlue;
avgRed = 0;
avgGreen = 0;
avgBlue = 0;
}
}
for (int i = 0; i < displayed->getHeight(); i++)
{
for (int j = 0; j < displayed->getWidth(); j++)
{
if ((i > displayed->getHeight() / blockHeight) || (j > displayed->getWidth() / blockWidth))
{
shrinkPix[i][j].red = 0;
shrinkPix[i][j].green = 0;
shrinkPix[i][j].blue = 0;
}
}
}
displayed = temp;
return;
}
//This function should generate the photomosaic from the loaded image. Each filler image should represent a section of the original image
//that is blockWidth by blockHeight big.
void generateMosaic(int blockWidth, int blockHeight)
{
for (int i = FIRST_FILLER_NUMBER; i <= LAST_FILLER_NUMBER; i++)
{
shrinkedList = new fillerNode;
fillerImage* tempImage = new fillerImage();
resizeFillerImage(blockHeight, blockHeight);
}
}
Here is the fillerImage class:
fillerImage::fillerImage() //constructor
{
this->timesUsed = 0;
this->averageColor.red = 0;
this->averageColor.green = 0;
this->averageColor.blue = 0;
}
fillerImage::fillerImage(string filename) : image(filename)
{
timesUsed = 0;
}
fillerImage::~fillerImage() //destructor
{
timesUsed = NULL;
averageColor.red = NULL;
averageColor.green = NULL;
averageColor.blue = NULL;
}
void fillerImage::calculateAverageColor() //calculate the average color and assign it to the private member averageColor
{
pixel** pix = getPixels();
int area = FILLER_IMAGE_HEIGHT * FILLER_IMAGE_WIDTH;
int totRed = 0, totGreen = 0, totBlue = 0;
for (int i = 0; i < FILLER_IMAGE_HEIGHT; i++)
{
for (int j = 0; j < FILLER_IMAGE_WIDTH; j++)
{
totRed += pix[i][j].red;
totGreen += pix[i][j].green;
totBlue += pix[i][j].blue;
}
}
averageColor.red = totRed / area;
averageColor.green = totGreen / area;
averageColor.blue = totBlue / area;
}
pixel fillerImage::getAverageColor() //return the average color
{
return this->averageColor;
}
int fillerImage::getTimesUsed() //return how many times this fillerimage has been used
{
return this->timesUsed;
}
void fillerImage::setTimesUsed(int used) //assign "used" to timesUsed
{
this->timesUsed = used;
}
FillerNode struct:
struct fillerNode{
fillerImage *data;
fillerNode *pre;
fillerNode *next;
};
Image Class header:
class image {
public:
image(); //the image constructor (initializes everything)
image(string filename); //a image constructor that directly loads an image from disk
~image(); //the image destructor (deletes the dynamically created pixel array)
void createNewImage(int width, int height); //this function deletes any current image data and creates a new blank image
//with the specified width/height and allocates the needed number of pixels
//dynamically.
bool loadImage(string filename); //load an image from the specified file path. Return true if it works, false if it is not a valid image.
//Note that we only accept images of the RGB 8bit colorspace!
void saveImage(string filename); //Save an image to the specified path
pixel** getPixels(); //return the 2-dimensional pixels array
int getWidth(); //return the width of the image
int getHeight(); //return the height of the image
void viewImage(CImage* myImage); //This function is called by the windows GUI. It returns the image in format the GUI understands.
private:
void pixelsToCImage(CImage* myImage); //this function is called internally by the image class.
//it converts our pixel object array to a standard BGR uchar array with word spacing.
//(Don't worry about what this does)
pixel** pixels; // pixel data array for image
int width, height; // stores the image dimensions
};
The error was that I created a copy of the source image and then tried to get pixels from an empty copy, so I created another 2D pixel array and got the pixels there, and used each array accordingly.
pixel** myPix = displayed->getPixels();
int height = displayed->getHeight();
int width = displayed->getWidth();
image* temp = new image;
temp->createNewImage(displayed->getWidth() / blockWidth, displayed->getHeight() / blockHeight);
pixel** shrinkPix = temp->getPixels();
for (int i = 0; i < displayed->getHeight() / blockHeight; i++)
{
for (int j = 0; j < displayed->getWidth() / blockWidth; j++)
{
for (int k = 0; k < blockHeight; k++)
{
for (int m = 0; m < blockWidth; m++)
{
avgRed = avgRed + myPix[i * blockHeight + k][j * blockWidth + m].red;
avgGreen = avgGreen + myPix[i * blockHeight + k][j * blockWidth + m].green;
avgBlue = avgBlue + myPix[i * blockHeight + k][j * blockWidth + m].blue;
}
}
avgRed = avgRed / blockSize;
avgGreen = avgGreen / blockSize;
avgBlue = avgBlue / blockSize;
shrinkPix[i][j].red = avgRed;
shrinkPix[i][j].green = avgGreen;
shrinkPix[i][j].blue = avgBlue;
avgRed = 0;
avgGreen = 0;
avgBlue = 0;
}
}
for (int i = 0; i < displayed->getHeight(); i++)
{
for (int j = 0; j < displayed->getWidth(); j++)
{
if ((i > displayed->getHeight() / blockHeight) || (j > displayed->getWidth() / blockWidth))
{
myPix[i][j].red = 0;
myPix[i][j].green = 0;
myPix[i][j].blue = 0;
}
}
}

Creating manual mesh in Ogre3d?

I am getting a bit confused as to why my manually created mesh is not appearing correctly. I have created the vertex and index buffers and they seem (although I am not 100% sure) to contain the correct values.
Essentially I am creating a grid of mapSize * mapSize vertices at a height of 0, then creating the triangles out of them.
void TerrainGeneration::createTerrainMesh() {
/// Create the mesh via the MeshManager
Ogre::MeshPtr msh = Ogre::MeshManager::getSingleton().createManual("TerrainTest", "General");
Ogre::SubMesh* sub = msh->createSubMesh();
const size_t nVertices = mapSize*mapSize;
const size_t vbufCount = 3*2*nVertices;
float vertices[vbufCount];
size_t vBufCounter = 0;
for(int z = 0; z < mapSize; z++) {
for(int x = 0; x < mapSize; x++) {
//Position
vertices[vBufCounter] = x;
vertices[vBufCounter+1] = 0;
vertices[vBufCounter+2] = z;
//Normal
vertices[vBufCounter+3] = 0;
vertices[vBufCounter+4] = 1;
vertices[vBufCounter+5] = 0;
vBufCounter += 6;
}
}
Ogre::RenderSystem* rs = Ogre::Root::getSingleton().getRenderSystem();
Ogre::RGBA colours[nVertices];
Ogre::RGBA *pColour = colours;
//Create triangles
const size_t ibufCount = 6*(mapSize - 1)*(mapSize - 1);
unsigned int faces[ibufCount];
size_t iBufCounter = 0;
for(int x=0; x <= mapSize -2; x++) {
for(int y=0; y <= mapSize -2; y++) {
faces[iBufCounter] = vertices[(y*mapSize) + x];
faces[iBufCounter+1] = vertices[((y+1)*mapSize) + x];
faces[iBufCounter+2] = vertices[((y+1)*mapSize) + (x+1)];
faces[iBufCounter+3] = vertices[(y*mapSize) + x];
faces[iBufCounter+4] = vertices[((y+1)*mapSize) + (x+1)];
faces[iBufCounter+5] = vertices[(y*mapSize) + (x+1)];
iBufCounter += 6;
}
}
/// Create vertex data structure for n*n vertices shared between submeshes
msh->sharedVertexData = new Ogre::VertexData();
msh->sharedVertexData->vertexCount = nVertices;
/// Create declaration (memory format) of vertex data
Ogre::VertexDeclaration* decl = msh->sharedVertexData->vertexDeclaration;
size_t offset = 0;
// 1st buffer
decl->addElement(0, offset, Ogre::VET_FLOAT3, Ogre::VES_POSITION);
offset += Ogre::VertexElement::getTypeSize(Ogre::VET_FLOAT3);
decl->addElement(0, offset, Ogre::VET_FLOAT3, Ogre::VES_NORMAL);
offset += Ogre::VertexElement::getTypeSize(Ogre::VET_FLOAT3);
/// Allocate vertex buffer of the requested number of vertices (vertexCount)
/// and bytes per vertex (offset)
Ogre::HardwareVertexBufferSharedPtr vbuf =
Ogre::HardwareBufferManager::getSingleton().createVertexBuffer(
offset, msh->sharedVertexData->vertexCount, Ogre::HardwareBuffer::HBU_STATIC_WRITE_ONLY);
/// Upload the vertex data to the card
vbuf->writeData(0, vbuf->getSizeInBytes(), vertices, true);
/// Set vertex buffer binding so buffer 0 is bound to our vertex buffer
Ogre::VertexBufferBinding* bind = msh->sharedVertexData->vertexBufferBinding;
bind->setBinding(0, vbuf);
/// Allocate index buffer of the requested number of vertices (ibufCount)
Ogre::HardwareIndexBufferSharedPtr ibuf = Ogre::HardwareBufferManager::getSingleton().
createIndexBuffer(
Ogre::HardwareIndexBuffer::IT_16BIT,
ibufCount,
Ogre::HardwareBuffer::HBU_STATIC_WRITE_ONLY);
/// Upload the index data to the card
ibuf->writeData(0, ibuf->getSizeInBytes(), faces, true);
/// Set parameters of the submesh
sub->useSharedVertices = true;
sub->indexData->indexBuffer = ibuf;
sub->indexData->indexCount = ibufCount;
sub->indexData->indexStart = 0;
/// Set bounding information (for culling)
msh->_setBounds(Ogre::AxisAlignedBox(-5000,-5000,-5000,5000,5000,5000));
//msh->_setBoundingSphereRadius(Ogre::Math::Sqrt(3*100*100));
/// Notify the Mesh object that it has been loaded
msh->load();
}
I initialise the mesh and load it as follows
Ogre::Entity* thisEntity = mSceneMgr->createEntity("cc", "TerrainTest", "General");
thisEntity->setMaterialName("Examples/Rockwall");
Ogre::SceneNode* thisSceneNode = mSceneMgr->getRootSceneNode()->createChildSceneNode();
thisSceneNode->setPosition(0, 0, 0);
thisSceneNode->attachObject(thisEntity);
Any insight would be greatly appreciated.
Ok, so I got an answer off the Ogre3d forums from a very helpful person called bstone.
It turns out that, when creating my index list for the faces, I was mistakenly passing coordinates from the vertex list rather than indices of the vertices.
faces[iBufCounter] = vertices[(y*mapSize) + x];
faces[iBufCounter+1] = vertices[((y+1)*mapSize) + x];
faces[iBufCounter+2] = vertices[((y+1)*mapSize) + (x+1)];
faces[iBufCounter+3] = vertices[(y*mapSize) + x];
faces[iBufCounter+4] = vertices[((y+1)*mapSize) + (x+1)];
faces[iBufCounter+5] = vertices[(y*mapSize) + (x+1)];
Should have been
faces[iBufCounter] = (y*mapSize) + x;
faces[iBufCounter+1] = ((y+1)*mapSize) + x;
faces[iBufCounter+2] = ((y+1)*mapSize) + (x+1);
faces[iBufCounter+3] = (y*mapSize) + x;
faces[iBufCounter+4] = ((y+1)*mapSize) + (x+1);
faces[iBufCounter+5] = (y*mapSize) + (x+1);
However I still have a problem in my code somewhere, although from what others have said it probably isn't in the code that I've posted.
Another user also proposed that I create the terrain in a much simpler way, and posted the following code:
int mapSize = 16;
Ogre::ManualObject *man = m_sceneManager->createManualObject("TerrainTest");
man->begin("Examples/Rockwall",Ogre::RenderOperation::OT_TRIANGLE_LIST);
for(int z = 0; z < mapSize; ++z)
{
for(int x = 0; x < mapSize; ++x)
{
man->position(x,0,z);
man->normal(0,1,0);
man->textureCoord(x,z);
}
}
for(int z = 0; z < mapSize-1; ++z)
{
for(int x = 0; x < mapSize-1; ++x)
{
man->quad((x) + (z) * mapSize, (x) + (z + 1) * mapSize, (x + 1) + (z + 1) * mapSize, (x + 1) + (z) * mapSize);
}
}
man->end();
m_sceneManager->getRootSceneNode()->attachObject(man);