QGLBuffer and VBO - C++

I have a problem with QGLBuffer. I'm trying to implement a dynamic VBO with Qt + OpenGL.
In the .h file:
struct CVert {
float x;
float y;
};
...
typedef struct CVert CVert;
CVert* m_data;
QGLBuffer* m_bufferData;
int m_size;
In the .cpp, in the constructor:
m_size = numberOfVertex;
m_bufferData = new QGLBuffer(QGLBuffer::VertexBuffer);
m_bufferData->create();
m_bufferData->bind();
m_bufferData->setUsagePattern(QGLBuffer::DynamicDraw);
m_bufferData->allocate(2 * sizeof(float) * m_size);
m_data = (CVert*)m_bufferData->map(QGLBuffer::ReadWrite);
While the program runs I change some m_data values:
m_data[pos].x = X1;
m_data[pos].y = y1;
In the draw method:
glEnableClientState(GL_VERTEX_ARRAY);
if (m_bufferData->bind ()) {
glVertexPointer(2, GL_FLOAT, 0, (char*) NULL);
glDrawArrays(GL_LINES, 0, m_size);
glDisableClientState(GL_VERTEX_ARRAY);
}
But nothing is being drawn.
I've checked that m_data is not null, and m_bufferData->bind() returns true.
What am I doing wrong?

I think I've solved it. Every time I have to edit the VBO, I do:
m_data = (CVert*)m_bufferData->map(QGLBuffer::ReadWrite);
m_data[pos].x = X1;
m_data[pos].y = y1;
m_bufferData->unmap();
It doesn't work if I map only once in the constructor: the pointer returned by map() is only valid until the matching unmap(), and the buffer cannot be used for drawing while it is mapped.
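For reference, a minimal sketch of that per-edit cycle, using the same names as above (the buffer must be bound in the current context before mapping):
m_bufferData->bind();
CVert* data = (CVert*)m_bufferData->map(QGLBuffer::ReadWrite);
if (data) {
    // Write only while the buffer is mapped.
    data[pos].x = X1;
    data[pos].y = y1;
    m_bufferData->unmap(); // the pointer must not be used after this
}
m_bufferData->release();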


Update SSBO in Compute shader

I am currently trying to update an SSBO linked/bound to a compute shader. Doing it this way, I only write the first 32 bytes into out_picture, because I only memcpy that many bytes (sizeof(Pstruct)).
Compute shader:
#version 440 core
struct Pstruct{
float picture[1920*1080*3];
float factor;
};
layout(std430, binding = 0) buffer Result{
float out_picture[];
};
layout(std430, binding = 1) buffer In_p1{
Pstruct in_p1;
};
layout(local_size_x = 1000) in;
void main() {
out_picture[gl_GlobalInvocationID.x] = out_picture[gl_GlobalInvocationID.x] +
in_p1.picture[gl_GlobalInvocationID.x] * in_p1.factor;
}
C++ host code:
struct Pstruct{
std::vector<float> picture;
float factor;
};
Pstruct tmp;
tmp.factor = 1.0f;
for(int i = 0; i < getNUM_PIX(); i++){
tmp.picture.push_back(5.0f);
}
SSBO ssbo;
glGenBuffers(1, &ssbo.handle);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, ssbo.handle);
glBufferData(GL_SHADER_STORAGE_BUFFER, (getNUM_PIX() + 1) * sizeof(float), NULL, GL_DYNAMIC_DRAW);
...
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo.handle);
Pstruct* ptr = (Pstruct *) glMapBuffer(GL_SHADER_STORAGE_BUFFER, GL_WRITE_ONLY);
memcpy(ptr, &tmp, sizeof(tmp));
glUnmapBuffer(GL_SHADER_STORAGE_BUFFER);
...
glUseProgram(program);
glDispatchCompute(getNUM_PIX() / getWORK_GROUP_SIZE(), 1, 1);
glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
How can I copy both my picture array and my float factor at the same time?
Do I have to split the memcpy call into the array part and the float part? If so, how? I can copy the first part, but I am not allowed to add an offset to the ptr.
First of all,
float picture[1920*1080*3];
clearly should be either a texture (you're only reading from it anyway) or at least an image.
Second:
struct Pstruct{
std::vector<float> picture;
float factor;
};
This definition does not match the definition in your shader in any way. The std::vector object is just a meta object that internally manages the data storage used by the vector. memcpying that into a GL buffer and passing it to the GPU does not make sense at all.
The correct approach would be to either copy the contents of that vector separately into the appropriate place inside the buffer, or to just use a struct definition on the client side which actually matches the one you're using in the shader (taking all the rules of std430 into account). But, as my first point already said, the correct solution here is most likely to use a texture or image object instead.
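For the first option, a minimal sketch using the names from the question (offsets assume the std430 layout of Pstruct above, where factor sits directly after the picture array):
GLsizeiptr pictureBytes = (GLsizeiptr)(tmp.picture.size() * sizeof(float));
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo.handle);
// Upload the float data the vector actually owns...
glBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, pictureBytes, tmp.picture.data());
// ...then the factor immediately after it.
glBufferSubData(GL_SHADER_STORAGE_BUFFER, pictureBytes, sizeof(float), &tmp.factor);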

Why is DirectX skipping every second shader call?

I'm getting a bit frustrated trying to figure out why I have to call a DirectX 11 shader twice to see the desired result.
Here's my current state:
I have a 3D object built from a vertex and an index buffer. This object is then instanced several times. Later, there will be a lot more than just one object, but for now, I'm testing with this one only. In my render routine, I iterate over all instances, change the world matrices (so that all object instances are put together and form "one big, whole object") and call the shader method to render the data to the screen.
Here's the code so far, that doesn't work:
m_pLevel->Simulate(0.1f);
std::list<CLevelElementInstance*>& lst = m_pLevel->GetInstances();
float x = -(*lst.begin())->GetPosition().x, y = -(*lst.begin())->GetPosition().y, z = -(*lst.begin())->GetPosition().z;
int i = 0;
for (std::list<CLevelElementInstance*>::iterator it = lst.begin(); it != lst.end(); it++)
{
// Extract base element from current instance
CLevelElement* elem = (*it)->GetBaseElement();
// Write vertex and index buffer to video memory
elem->Render(m_pDirect3D->GetDeviceContext());
// Call shader
m_pTextureShader->Render(m_pDirect3D->GetDeviceContext(), elem->GetIndexCount(), XMMatrixTranslation(x, y, z + (i * 8)), viewMatrix, projectionMatrix, elem->GetTexture());
++i;
}
My std::list consists of 4 3d objects, which are all the same. They only differ in their position in 3d space. All of the objects are 8.0f x 8.0f x 8.0f, so for simplicity I just line them up. (As can be seen on the shader render line, where I just add 8 units to the Z dimension)
The result is the following: I only see two elements rendered on the screen. And between them, there's an empty space the size of an element.
At first, I thought I had made some errors with the 3D math, but after a lot of time spent debugging my code, I could not find any.
And here is now the confusing part:
If I change the content of the for-loop and add another call to the shader, I suddenly see all four elements, and they're all at their correct positions in 3D space:
m_pLevel->Simulate(0.1f);
std::list<CLevelElementInstance*>& lst = m_pLevel->GetInstances();
float x = -(*lst.begin())->GetPosition().x, y = -(*lst.begin())->GetPosition().y, z = -(*lst.begin())->GetPosition().z;
int i = 0;
for (std::list<CLevelElementInstance*>::iterator it = lst.begin(); it != lst.end(); it++)
{
// Extract base element from current instance
CLevelElement* elem = (*it)->GetBaseElement();
// Write vertex and index buffer to video memory
elem->Render(m_pDirect3D->GetDeviceContext());
// Call shader
m_pTextureShader->Render(m_pDirect3D->GetDeviceContext(), elem->GetIndexCount(), XMMatrixTranslation(x, y, z + (i * 8)), viewMatrix, projectionMatrix, elem->GetTexture());
// Call shader a second time - this seems to have no effect except to allow the next iteration to perform its shader rendering...
m_pTextureShader->Render(m_pDirect3D->GetDeviceContext(), elem->GetIndexCount(), XMMatrixTranslation(x, y, z + (i * 8)), viewMatrix, projectionMatrix, elem->GetTexture());
++i;
}
Does anybody have an idea of what is going on here?
If it helps, here's the code of the shader:
bool CTextureShader::Render(ID3D11DeviceContext* _pDeviceContext, const int _IndexCount, XMMATRIX& _pWorldMatrix, XMMATRIX& _pViewMatrix, XMMATRIX& _pProjectionMatrix, ID3D11ShaderResourceView* _pTexture)
{
bool result = SetShaderParameters(_pDeviceContext, _pWorldMatrix, _pViewMatrix, _pProjectionMatrix, _pTexture);
if (!result)
return false;
RenderShader(_pDeviceContext, _IndexCount);
return true;
}
bool CTextureShader::SetShaderParameters(ID3D11DeviceContext* _pDeviceContext, XMMATRIX& _WorldMatrix, XMMATRIX& _ViewMatrix, XMMATRIX& _ProjectionMatrix, ID3D11ShaderResourceView* _pTexture)
{
HRESULT result;
D3D11_MAPPED_SUBRESOURCE mappedResource;
MatrixBufferType* dataPtr;
unsigned int bufferNumber;
_WorldMatrix = XMMatrixTranspose(_WorldMatrix);
_ViewMatrix = XMMatrixTranspose(_ViewMatrix);
_ProjectionMatrix = XMMatrixTranspose(_ProjectionMatrix);
result = _pDeviceContext->Map(m_pMatrixBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
if (FAILED(result))
return false;
dataPtr = (MatrixBufferType*)mappedResource.pData;
dataPtr->world = _WorldMatrix;
dataPtr->view = _ViewMatrix;
dataPtr->projection = _ProjectionMatrix;
_pDeviceContext->Unmap(m_pMatrixBuffer, 0);
bufferNumber = 0;
_pDeviceContext->VSSetConstantBuffers(bufferNumber, 1, &m_pMatrixBuffer);
_pDeviceContext->PSSetShaderResources(0, 1, &_pTexture);
return true;
}
void CTextureShader::RenderShader(ID3D11DeviceContext* _pDeviceContext, const int _IndexCount)
{
_pDeviceContext->IASetInputLayout(m_pLayout);
_pDeviceContext->VSSetShader(m_pVertexShader, NULL, 0);
_pDeviceContext->PSSetShader(m_pPixelShader, NULL, 0);
_pDeviceContext->PSSetSamplers(0, 1, &m_pSampleState);
_pDeviceContext->DrawIndexed(_IndexCount, 0, 0);
}
If it helps, I can also post the code from the shaders here.
Any help would be appreciated - I'm totally stuck here :-(
The problem is that you are transposing the caller's matrices in place on every call; since transposing twice gives back the original matrix, your data is only "right" every other call:
_WorldMatrix = XMMatrixTranspose(_WorldMatrix);
_ViewMatrix = XMMatrixTranspose(_ViewMatrix);
_ProjectionMatrix = XMMatrixTranspose(_ProjectionMatrix);
Instead, you should be doing something like:
XMMATRIX worldMatrix = XMMatrixTranspose(_WorldMatrix);
XMMATRIX viewMatrix = XMMatrixTranspose(_ViewMatrix);
XMMATRIX projectionMatrix = XMMatrixTranspose(_ProjectionMatrix);
result = _pDeviceContext->Map(m_pMatrixBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
if (FAILED(result))
return false;
dataPtr = (MatrixBufferType*)mappedResource.pData;
dataPtr->world = worldMatrix;
dataPtr->view = viewMatrix;
dataPtr->projection = projectionMatrix;
_pDeviceContext->Unmap(m_pMatrixBuffer, 0);
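One way to keep this from regressing - a sketch of a reworked function, not the original code - is to take the matrices by const reference, so the accidental in-place transpose no longer compiles:
bool CTextureShader::SetShaderParameters(ID3D11DeviceContext* _pDeviceContext, const XMMATRIX& _WorldMatrix, const XMMATRIX& _ViewMatrix, const XMMATRIX& _ProjectionMatrix, ID3D11ShaderResourceView* _pTexture)
{
    D3D11_MAPPED_SUBRESOURCE mappedResource;
    if (FAILED(_pDeviceContext->Map(m_pMatrixBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource)))
        return false;
    MatrixBufferType* dataPtr = (MatrixBufferType*)mappedResource.pData;
    // Transpose into the constant buffer; the caller's matrices stay untouched.
    dataPtr->world = XMMatrixTranspose(_WorldMatrix);
    dataPtr->view = XMMatrixTranspose(_ViewMatrix);
    dataPtr->projection = XMMatrixTranspose(_ProjectionMatrix);
    _pDeviceContext->Unmap(m_pMatrixBuffer, 0);
    _pDeviceContext->VSSetConstantBuffers(0, 1, &m_pMatrixBuffer);
    _pDeviceContext->PSSetShaderResources(0, 1, &_pTexture);
    return true;
}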
I wonder if the problem is that the temporary created by the call to XMMatrixTranslation(x, y, z + (i * 8)) is being passed into a function by reference, and then passed into another function by reference where it is modified.
My knowledge of the C++ spec isn't complete enough for me to tell whether that's undefined behaviour or not (but I know that there are tricky rules in this area - binding a temporary to a non-const reference is not allowed, for example). Regardless, it's close enough to being suspicious that even if it is well-defined C++, it still might be a dusty corner that could trip up a non-compliant compiler.
To rule out this possibility, try doing it like:
XMMATRIX worldMatrix = XMMatrixTranslation(x, y, z + (i * 8));
m_pTextureShader->Render(m_pDirect3D->GetDeviceContext(), elem->GetIndexCount(), worldMatrix, viewMatrix, projectionMatrix, elem->GetTexture());

Importing and Displaying .fbx files in OpenGL

I have been trying to import and display an .fbx file using the FBX SDK. I managed to load the file, but I got stuck at the part where I have to display it.
The questions:
What exactly are those indices?
How should I display the vertices?
Here is the class that I made:
3dModelBasicStructs.h
struct vertex
{
float x,y,z;
};
struct texturecoords
{
float a,b;
};
struct poligon
{
int a,b,c;
};
Model.h
#ifndef MODEL_H
#define MODEL_H
#define FBXSDK_NEW_API
#define MAX_VERTICES 80000
#define MAX_POLIGONS 80000
#include <fbxsdk.h>
#include "3dModelBasicStructs.h"
#include <iostream>
#include <GL/glut.h>
using namespace std;
class Model
{
public:
Model(char*);
~Model();
void ShowDetails();
char* GetModelName();
void SetModelName( char* );
void GetFbxInfo( FbxNode* );
void RenderModel();
void InitializeVertexBuffer( vertex* );
private:
char Name[25];
vertex vertices[MAX_VERTICES];
poligon poligons[MAX_POLIGONS];
int *indices;
int numIndices;
int numVertices;
};
#endif
Model.cpp
#include "Model.h"
Model::Model(char *filename)
{
cout<<"\nA model has been built!";
numVertices=0;
numIndices=0;
FbxManager *manager = FbxManager::Create();
FbxIOSettings *ioSettings = FbxIOSettings::Create(manager, IOSROOT);
manager->SetIOSettings(ioSettings);
FbxImporter *importer=FbxImporter::Create(manager,"");
importer->Initialize(filename,-1,manager->GetIOSettings());
FbxScene *scene = FbxScene::Create(manager,"tempName");
importer->Import(scene);
importer->Destroy();
FbxNode* rootNode = scene->GetRootNode();
this->SetModelName(filename);
if(rootNode) { this->GetFbxInfo(rootNode); }
}
Model::~Model()
{
cout<<"\nA model has been destroied!";
}
void Model::ShowDetails()
{
cout<<"\nName:"<<Name;
cout<<"\nVertices Number:"<<numVertices;
cout<<"\nIndices which i never get:"<<indices;
}
char* Model::GetModelName()
{
return Name;
}
void Model::SetModelName(char *x)
{
strcpy(Name,x);
}
void Model::GetFbxInfo( FbxNode* Node )
{
int numKids = Node->GetChildCount();
FbxNode *childNode = 0;
for ( int i=0 ; i<numKids ; i++)
{
childNode = Node->GetChild(i);
FbxMesh *mesh = childNode->GetMesh();
if ( mesh != NULL)
{
//================= Get Vertices ====================================
int numVerts = mesh->GetControlPointsCount();
for ( int j=0; j<numVerts; j++)
{
FbxVector4 vert = mesh->GetControlPointAt(j);
vertices[numVertices].x=(float)vert.mData[0];
vertices[numVertices].y=(float)vert.mData[1];
vertices[numVertices++].z=(float)vert.mData[2];
cout<<"\n"<<vertices[numVertices-1].x<<" "<<vertices[numVertices- 1].y<<" "<<vertices[numVertices-1].z;
this->InitializeVertexBuffer(vertices);
}
//================= Get Indices ====================================
int *indices = mesh->GetPolygonVertices();
numIndices+=mesh->GetPolygonVertexCount();
}
this->GetFbxInfo(childNode);
}
}
void Model::RenderModel()
{
glDrawElements(GL_TRIANGLES,36,GL_INT,indices);
}
void Model::InitializeVertexBuffer(vertex *vertices)
{
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3,GL_FLOAT,0,vertices);
//glDrawArrays(GL_TRIANGLES,0,36);
}
Sadly, when I try to use glDrawElements I get this error:
Unhandled exception at 0x77e215de in A new begging.exe: 0xC0000005: Access violation reading location 0xcdcdcdcd.
2) How should I display the vertices?
Questions like these indicate that you should work through some OpenGL tutorials. Those are the basics and you need to know them.
This is a good start regarding your problem, but you'll need to work through the whole tutorial:
http://opengl.datenwolf.net/gltut/html/Basics/Tut01%20Following%20the%20Data.html
1) What exactly are those indices?
You have a list of vertices. The index of a vertex is its position in that list. You can draw vertex arrays by their indices using glDrawElements.
Update due to comment
Say you have a cube with shared vertices (uncommon in OpenGL, but I'm too lazy to write down 24 vertices).
I have them in my program in an array that forms a list of their positions. You load them from a file; I'm writing them down as a C array:
GLfloat vertices[][3] = {
{-1,-1, 1},
{ 1,-1, 1},
{ 1, 1, 1},
{-1, 1, 1},
{-1,-1,-1},
{ 1,-1,-1},
{ 1, 1,-1},
{-1, 1,-1},
};
This gives the vertices indices (their position in the array).
To draw a cube we have to tell OpenGL which vertices, in which order, make up each face. So let's have a look at the faces:
We're going to build that cube out of triangles. 3 consecutive indices make up a triangle. For the cube this is
GLuint face_indices[][3] = {
{0,1,2},{2,3,0},
{1,5,6},{6,2,1},
{5,4,7},{7,6,5},
{4,0,3},{3,7,4},
{3,2,6},{6,7,2},
{4,5,0},{1,0,5}
};
You can then draw this by pointing OpenGL to the vertex array
glVertexPointer(3, GL_FLOAT, 0, &vertices[0][0]);
and issuing a batched draw call on the index array. There are 6*2 = 12 triangles, each consisting of 3 vertices, which makes a list of 36 indices.
glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_INT, &face_indices[0][0]);
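Putting it together with the client-state setup the question's code already uses, a complete minimal draw looks like this (a sketch; note that glDrawElements accepts GL_UNSIGNED_BYTE/SHORT/INT, not the GL_INT used in the question's RenderModel):
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, &vertices[0][0]);
glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_INT, &face_indices[0][0]);
glDisableClientState(GL_VERTEX_ARRAY);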

Texture problems with librocket, Ogre & iOS

We are trying to use librocket http://librocket.com/ together with Ogre http://www.ogre3d.org/. They're both part of gamekit http://code.google.com/p/gamekit/ which I use for this project.
This all works fine together as long as I don't load an image with librocket. As soon as I do, the viewport on the iPad is no longer fullscreen but small in the lower corner. Like this: http://uploads.undef.ch/machine/ipad.png
I can't make a connection between loading/rendering a texture and resizing of the viewport. And I can't find anything wrong with the RenderInterface. http://uploads.undef.ch/machine/RenderInterfaceOgre3D.cpp
Is there any OpenGL ES command that could have an effect on the active viewport size?
This is the relevant code that loads an image and displays it:
// Called by Rocket when a texture is required by the library.
bool RenderInterfaceOgre3D::LoadTexture(Rocket::Core::TextureHandle& texture_handle, Rocket::Core::Vector2i& texture_dimensions, const Rocket::Core::String& source)
{
Ogre::TextureManager* texture_manager = Ogre::TextureManager::getSingletonPtr();
Ogre::TexturePtr ogre_texture = texture_manager->getByName(Ogre::String(source.CString()));
if (ogre_texture.isNull())
{
ogre_texture = texture_manager->load(Ogre::String(source.CString()),
DEFAULT_ROCKET_RESOURCE_GROUP,
Ogre::TEX_TYPE_2D,
0);
}
if (ogre_texture.isNull())
return false;
texture_dimensions.x = ogre_texture->getWidth();
texture_dimensions.y = ogre_texture->getHeight();
texture_handle = reinterpret_cast<Rocket::Core::TextureHandle>(new RocketOgre3DTexture(ogre_texture));
return true;
}
// Called by Rocket when it wants to render geometry that it does not wish to optimise.
void RenderInterfaceOgre3D::RenderGeometry(Rocket::Core::Vertex* vertices, int num_vertices, int* indices, int num_indices, Rocket::Core::TextureHandle texture, const Rocket::Core::Vector2f& translation)
{
// We've chosen to not support non-compiled geometry in the Ogre3D renderer.
// But if you want, you can uncomment this code, so borders will be shown.
/*
Rocket::Core::CompiledGeometryHandle gh = CompileGeometry(vertices, num_vertices, indices, num_indices, texture);
RenderCompiledGeometry(gh, translation);
ReleaseCompiledGeometry(gh);
*/
}
// Called by Rocket when it wants to compile geometry it believes will be static for the foreseeable future.
Rocket::Core::CompiledGeometryHandle RenderInterfaceOgre3D::CompileGeometry(Rocket::Core::Vertex* vertices, int num_vertices, int* indices, int num_indices, Rocket::Core::TextureHandle texture)
{
RocketOgre3DCompiledGeometry* geometry = new RocketOgre3DCompiledGeometry();
geometry->texture = texture == NULL ? NULL : (RocketOgre3DTexture*) texture;
geometry->render_operation.vertexData = new Ogre::VertexData();
geometry->render_operation.vertexData->vertexStart = 0;
geometry->render_operation.vertexData->vertexCount = num_vertices;
geometry->render_operation.indexData = new Ogre::IndexData();
geometry->render_operation.indexData->indexStart = 0;
geometry->render_operation.indexData->indexCount = num_indices;
geometry->render_operation.operationType = Ogre::RenderOperation::OT_TRIANGLE_LIST;
// Set up the vertex declaration.
Ogre::VertexDeclaration* vertex_declaration = geometry->render_operation.vertexData->vertexDeclaration;
size_t element_offset = 0;
vertex_declaration->addElement(0, element_offset, Ogre::VET_FLOAT3, Ogre::VES_POSITION);
element_offset += Ogre::VertexElement::getTypeSize(Ogre::VET_FLOAT3);
vertex_declaration->addElement(0, element_offset, Ogre::VET_COLOUR, Ogre::VES_DIFFUSE);
element_offset += Ogre::VertexElement::getTypeSize(Ogre::VET_COLOUR);
vertex_declaration->addElement(0, element_offset, Ogre::VET_FLOAT2, Ogre::VES_TEXTURE_COORDINATES);
#if GK_PLATFORM == GK_PLATFORM_APPLE_IOS
// Create the vertex buffer.
Ogre::HardwareVertexBufferSharedPtr vertex_buffer = Ogre::HardwareBufferManager::getSingleton().createVertexBuffer(vertex_declaration->getVertexSize(0), num_vertices, Ogre::HardwareBuffer::HBU_DYNAMIC_WRITE_ONLY_DISCARDABLE,true);
geometry->render_operation.vertexData->vertexBufferBinding->setBinding(0, vertex_buffer);
// Fill the vertex buffer.
RocketOgre3DVertex* ogre_vertices = (RocketOgre3DVertex*) vertex_buffer->lock(0, vertex_buffer->getSizeInBytes(), Ogre::HardwareBuffer::HBL_DISCARD);
#else
Ogre::HardwareVertexBufferSharedPtr vertex_buffer = Ogre::HardwareBufferManager::getSingleton().createVertexBuffer(vertex_declaration->getVertexSize(0), num_vertices, Ogre::HardwareBuffer::HBU_STATIC_WRITE_ONLY);
geometry->render_operation.vertexData->vertexBufferBinding->setBinding(0, vertex_buffer);
// Fill the vertex buffer.
RocketOgre3DVertex* ogre_vertices = (RocketOgre3DVertex*) vertex_buffer->lock(0, vertex_buffer->getSizeInBytes(), Ogre::HardwareBuffer::HBL_NORMAL);
#endif
for (int i = 0; i < num_vertices; ++i)
{
ogre_vertices[i].x = vertices[i].position.x;
ogre_vertices[i].y = vertices[i].position.y;
ogre_vertices[i].z = 0;
Ogre::ColourValue diffuse(vertices[i].colour.red / 255.0f, vertices[i].colour.green / 255.0f, vertices[i].colour.blue / 255.0f, vertices[i].colour.alpha / 255.0f);
render_system->convertColourValue(diffuse, &ogre_vertices[i].diffuse);
ogre_vertices[i].u = vertices[i].tex_coord[0];
ogre_vertices[i].v = vertices[i].tex_coord[1];
}
vertex_buffer->unlock();
#if GK_PLATFORM == GK_PLATFORM_APPLE_IOS
// Create the index buffer.
Ogre::HardwareIndexBufferSharedPtr index_buffer = Ogre::HardwareBufferManager::getSingleton().createIndexBuffer(Ogre::HardwareIndexBuffer::IT_16BIT, num_indices, Ogre::HardwareBuffer::HBU_STATIC_WRITE_ONLY);
geometry->render_operation.indexData->indexBuffer = index_buffer;
geometry->render_operation.useIndexes = true;
#else
Ogre::HardwareIndexBufferSharedPtr index_buffer = Ogre::HardwareBufferManager::getSingleton().createIndexBuffer(Ogre::HardwareIndexBuffer::IT_32BIT, num_indices, Ogre::HardwareBuffer::HBU_STATIC_WRITE_ONLY);
geometry->render_operation.indexData->indexBuffer = index_buffer;
geometry->render_operation.useIndexes = true;
#endif
// Fill the index buffer.
unsigned short * ogre_indices = (unsigned short*)index_buffer->lock(0, index_buffer->getSizeInBytes(), Ogre::HardwareBuffer::HBL_NORMAL);
#if GK_PLATFORM == GK_PLATFORM_APPLE_IOS
//unsigned short short_indices[num_indices];
for(int i=0;i<num_indices;i++)
ogre_indices[i] = indices[i];
//memcpy(ogre_indices, short_indices, sizeof(unsigned short) * num_indices);
#else
memcpy(ogre_indices, indices, sizeof(unsigned int) * num_indices);
#endif
index_buffer->unlock();
return reinterpret_cast<Rocket::Core::CompiledGeometryHandle>(geometry);
}
// Called by Rocket when it wants to render application-compiled geometry.
void RenderInterfaceOgre3D::RenderCompiledGeometry(Rocket::Core::CompiledGeometryHandle geometry, const Rocket::Core::Vector2f& translation)
{
Ogre::Matrix4 transform;
transform.makeTrans(translation.x, translation.y, 0);
render_system->_setWorldMatrix(transform);
render_system = Ogre::Root::getSingleton().getRenderSystem();
RocketOgre3DCompiledGeometry* ogre3d_geometry = (RocketOgre3DCompiledGeometry*) geometry;
if (ogre3d_geometry->texture != NULL)
{
render_system->_setTexture(0, true, ogre3d_geometry->texture->texture);
// Ogre can change the blending modes when textures are disabled - so in case the last render had no texture,
// we need to re-specify them.
render_system->_setTextureBlendMode(0, colour_blend_mode);
render_system->_setTextureBlendMode(0, alpha_blend_mode);
}
else
render_system->_disableTextureUnit(0);
render_system->_render(ogre3d_geometry->render_operation);
}

Access Violation when writing to pointer variables in dynamically allocated classes

OK, so the title doesn't explain my situation very well, so I'll try to explain a little better here:
Here is part of my class structure:
ObjectView (abstract class)
ShipView : ObjectView (child of ObjectView)
In a method I create a new ShipView:
ShipView *shipview; (in the header file)
shipview = new ShipView(); (in the main part of the code)
I then run shipview->Initialise(); to set everything up in the new class.
But when I get to any line of code that tries to write to a pointer declared in the ObjectView class, it won't let me and gives me an access violation message.
The message that I get is below:
"Unhandled exception at 0x00a0cf1c in AsteroidGame.exe: 0xC0000005: Access violation writing location 0xbaadf011."
For example this line:
_ObjectData = new Model[mesh->mNumVertices];
will give me the error.
Just FYI, I have put this in the header file:
struct Model{
GLfloat x,y,z;
GLfloat nX,nY,nZ;
GLfloat u,v;
};
Model *_ObjectData;
However, if I were to do something along the lines of
Model *_ObjectData = new Model[mesh->mNumVertices];
(declare and initialise all at once), it would work....
It's like it doesn't know the header file is there, or the class has not been properly constructed and therefore the memory has not been allocated properly.
Any help would be greatly appreciated.
EDIT
Header File:
class ObjectView
{
public:
ObjectView(void);
virtual ~ObjectView(void);
void Initialise(std::string objectpath, std::string texturepath);
void InitialiseVBO(const aiScene* sc);
void RenderObject();
virtual void ScaleObject() = 0;
virtual void TranslateObject() = 0;
virtual void RotateObject() = 0;
protected:
struct Model{
GLfloat x,y,z;
GLfloat nX,nY,nZ;
GLfloat u,v;
};
Model *_ObjectData;
struct Indices{
GLuint x,y,z;
};
Indices *_IndicesData;
TextureLoader _textureloader;
GLuint _objectTexture;
GLuint _objectVBO;
GLuint _indicesVBO;
int _numOfIndices;
};
Code:
void ObjectView::InitialiseVBO(const aiScene* sc)
{
const aiMesh* mesh = sc->mMeshes[0];
_ObjectData = new Model[mesh->mNumVertices];
for(unsigned int i = 0; i < mesh->mNumVertices; i++)
{
_ObjectData[i].x = mesh->mVertices[i].x;
_ObjectData[i].y = mesh->mVertices[i].y;
_ObjectData[i].z = mesh->mVertices[i].z;
_ObjectData[i].nX = mesh->mNormals[i].x;
_ObjectData[i].nY = mesh->mNormals[i].y;
_ObjectData[i].nZ = mesh->mNormals[i].z;
_ObjectData[i].u = mesh->mTextureCoords[0][i].x;
_ObjectData[i].v = 1-mesh->mTextureCoords[0][i].y;
}
glGenBuffers(1, &_objectVBO);
glBindBuffer(GL_ARRAY_BUFFER, _objectVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(Model) * mesh->mNumVertices, &_ObjectData[0].x, GL_STATIC_DRAW);
_IndicesData = new Indices[mesh->mNumFaces];
for(unsigned int i = 0; i < mesh->mNumFaces; ++i)
{
for (unsigned int a = 0; a < 3; ++a)
{
unsigned int temp = mesh->mFaces[i].mIndices[a];
if(a == 0)
_IndicesData[i].x = temp;
else if(a == 1)
_IndicesData[i].y = temp;
else
_IndicesData[i].z = temp;
}
}
glGenBuffers(1, &_indicesVBO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indicesVBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(Indices) * mesh->mNumFaces, _IndicesData, GL_STATIC_DRAW);
_numOfIndices = sizeof(Indices) * mesh->mNumFaces;
glBindBuffer(GL_ARRAY_BUFFER, 0);
delete _ObjectData;
delete _IndicesData;
}
If
_ObjectData = new Model[mesh->mNumVertices];
crashes while
Model *_ObjectData = new Model[mesh->mNumVertices];
doesn't, it certainly looks like your object hasn't been initialized.
My psychic debugger suggests that you have declared a different variable, also called shipview, which is what you're calling Initialize() on.
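A hypothetical illustration of that mistake (class and method names invented to match the description):
class ShipView { public: void Initialise() {} void RenderObject() {} }; // stub for illustration
class Game {
    ShipView* shipview; // member from the header, never initialised
public:
    void Setup() {
        ShipView* shipview = new ShipView(); // local variable shadows the member!
        shipview->Initialise(); // initialises the local object only
    }
    void Render() {
        shipview->RenderObject(); // the member still holds debug-heap fill garbage
    }
};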
0xbaadf011 is probably an offset from 0xbaadf00d, a hexadecimal pun on "bad food" that the Windows debug heap uses to fill uninitialized allocations. It is almost certainly an uninitialized pointer.
You are obviously not setting your pointers before using them. I suggest looking at the call stack when it crashes: set a breakpoint at one of the functions above the crash, restart the program, and step through line by line until you find the pointer that is set to 0xbaadf011 or 0xbaadf00d. Then figure out where you were supposed to set that pointer.