C2259 : 'ID3D11ShaderResourceView' Cannot instantiate abstract class - DX11 - c++

I'm trying to make a texture handler so that I can load texture file names from a text file, load the textures, store them in a vector, and fetch them whenever I need to draw.
The problem is, I'm getting error C2259, which stops the project from compiling, and I was wondering if anyone could help me out.
TextureManager.h
class TextureManager{
private:
std::vector<ID3D11ShaderResourceView> * textures;
public:
TextureManager();
~TextureManager();
void TMLoadTexture(ID3D11Device* d);
ID3D11ShaderResourceView * TMgetTexture(int index);
};
TextureManager.cpp - TMLoadTexture / TMGetTexture
void TextureManager::TMLoadTexture(ID3D11Device* d)
{
std::vector<std::string> files;
files = readFile("textures");
D3DX11_IMAGE_LOAD_INFO loadInfo;
ZeroMemory(&loadInfo, sizeof(D3DX11_IMAGE_LOAD_INFO));
loadInfo.BindFlags = D3D11_BIND_SHADER_RESOURCE;
loadInfo.Format = DXGI_FORMAT_BC1_UNORM;
for(int i = 0; i < files.size(); i++)
{
std::wstring stemp = std::wstring(files.at(i).begin(), files.at(i).end());
LPCWSTR sw = stemp.c_str();
ID3D11ShaderResourceView* temp;
D3DX11CreateShaderResourceViewFromFile(d, sw, &loadInfo, NULL, &temp, NULL);
textures->push_back(*temp);
delete temp;
}
}
ID3D11ShaderResourceView * TextureManager::TMgetTexture(int index)
{
return &textures->at(index);
}
Thanks :)

Since ID3D11ShaderResourceView is an interface (an abstract class), you must access these kinds of objects through pointers. So:
std::vector<ID3D11ShaderResourceView*> * textures;
By the way, are you sure you want a pointer to a vector? I see no reason why a plain std::vector<...> member wouldn't be sufficient.
Then, when loading the texture, put the pointer itself in the vector (assuming the plain vector member suggested above):
textures.push_back(temp);
And don't delete the texture you just created. It is a COM object, so call Release() on it when you are finished with it instead.
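For completeness, here is a minimal sketch of what the corrected manager could look like. It assumes a plain vector member as suggested above and reuses the readFile() helper and D3DX11 call from the question; error handling is kept to a bare SUCCEEDED check.
class TextureManager
{
private:
    std::vector<ID3D11ShaderResourceView*> textures;     // COM interface pointers, not objects
public:
    ~TextureManager()
    {
        for (std::size_t i = 0; i < textures.size(); ++i)
            if (textures[i]) textures[i]->Release();      // Release(), never delete, COM objects
    }
    void TMLoadTexture(ID3D11Device* d)
    {
        std::vector<std::string> files = readFile("textures");
        for (std::size_t i = 0; i < files.size(); ++i)
        {
            std::wstring name(files[i].begin(), files[i].end());
            ID3D11ShaderResourceView* temp = NULL;
            HRESULT hr = D3DX11CreateShaderResourceViewFromFile(
                d, name.c_str(), NULL /* or &loadInfo as in the question */, NULL, &temp, NULL);
            if (SUCCEEDED(hr))
                textures.push_back(temp);                 // store the pointer; no copy, no delete
        }
    }
    ID3D11ShaderResourceView* TMgetTexture(int index)
    {
        return textures.at(index);
    }
};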

Related

Unexpected behaviour by pointer (when manipulated) but defined behaviour when using a double pointer

I'm quite confused about the behaviour of pointers to a variable. I would have thought that if I append a pointer to a vector and later access it through the vector, it should still operate on the same variable, even if I change the pointer itself afterwards. What I mean is this: say I have a vector of integer pointers and I modify an integer, defined somewhere else, whose address has been appended to the vector. Printing through the vector wouldn't literally update the vector, but it should print the new value of that integer.
I'm trying to apply the same idea to an SDL_Texture* in SDL2, but it doesn't quite work. I modify the texture and do "things" with it, but at the end of the loop, when I render, the vector still iterates over the SDL_Texture* that was originally appended to it. My problem is that as I change and modify the texture, it doesn't show up when I render it. This isn't because the texture isn't properly loaded (I have tested that by drawing it directly instead of through the vector); it just doesn't work when going through the vector. Here is the code:
void Main::Mainloop()
{
Screen_Object m_Screen_Object;
TTF_Font* font = TTF_OpenFont("Anonymous_Pro.ttf", 30);
SDL_Surface* surf = TTF_RenderText_Blended(font, "Starting fps", {0,0,0});
SDL_Texture* tex = SDL_CreateTextureFromSurface(Render::Get_Renderer(), surf);
SDL_FreeSurface(surf);
SDL_Rect rct = {20, 100, 0,0};
SDL_QueryTexture(tex, NULL, NULL, &rct.w, &rct.h);
m_Screen_Object.Add_Texture(tex);
Uint32 start, finish, counter;
counter = 0;
start = SDL_GetTicks();
finish = SDL_GetTicks();
bool running = true;
while (running)
{
Events::Event_Loop();
if (Events::Quit_Application()){
running = false;
break;
}
///Clear display to color
SDL_SetRenderDrawColor(Render::Get_Renderer(), 0,255,0,255);
SDL_RenderClear(Render::Get_Renderer());
///Do stuff here
m_Screen_Object.Draw_Textures();
finish = SDL_GetTicks();
counter += 2;
if (finish - start >= 500)
{
start = SDL_GetTicks();
SDL_DestroyTexture(tex);
std::string fps = std::to_string(counter);
surf = TTF_RenderText_Blended(font, fps.c_str(), {0,0,0});
tex = SDL_CreateTextureFromSurface(Render::Get_Renderer(), surf);
SDL_FreeSurface(surf);
SDL_QueryTexture(tex, NULL, NULL, &rct.w, &rct.h);
counter = 0;
}
SDL_RenderPresent(Render::Get_Renderer());
}
SDL_DestroyTexture(tex);
TTF_CloseFont(font);
}
int main(int argc, char* argv[])
{
Main::Mainloop();
return 0;
}
and here is the declaration of Screen_Object:
In the header:
std::vector < SDL_Texture* > m_Textures;
In the .cpp:
void Screen_Object::Add_Texture(SDL_Texture* p_Texture)
{
m_Textures.push_back(p_Texture);
}
void Screen_Object::Draw_Textures()
{
for (unsigned int i=0; i < m_Textures.size(); i++)
{
SDL_RenderCopy(Render::Get_Renderer(), m_Textures[i], NULL, &m_Rect);
}
}
Now this code doesn't work the way I believe it should, and I can't understand why. But when I change the vector's element type to SDL_Texture**, the code works fine. What on earth is wrong with the code without the **? I just can't logically see why it won't work properly.
The issue seems to be that you are storing pointers in the vector, but outside the vector you are invalidating what those pointers refer to.
void Screen_Object::Add_Texture(SDL_Texture* p_Texture)
{
m_Textures.push_back(p_Texture);
}
void Main::Mainloop()
{
Screen_Object m_Screen_Object;
//...
SDL_Texture* tex = SDL_CreateTextureFromSurface(Render::Get_Renderer(), surf);
//...
m_Screen_Object.Add_Texture(tex); // <-- add pointer to vector
//...
tex = SDL_CreateTextureFromSurface(Render::Get_Renderer(), surf); // <-- This changes the local pointer, but has no effect on the copy stored in the vector
//...
So after that line, if you access the pointer in the m_Textures vector, that pointer is no longer valid (the texture it pointed to was destroyed with SDL_DestroyTexture), or worse, it is still "valid" but points to an old SDL_Texture.
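In miniature (a standalone illustration, not code from the question), the same thing happens with any pointer type:
std::vector<int*> v;
int a = 1, b = 2;
int* p = &a;
v.push_back(p);   // the vector stores a copy of p, i.e. the address of a
p = &b;           // only the local p changes; v[0] still points at a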
A very simple solution is to make sure you always go through the pointer you stored in m_Textures, by obtaining a reference to that pointer:
void Main::Mainloop()
{
Screen_Object m_Screen_Object;
TTF_Font* font = TTF_OpenFont("Anonymous_Pro.ttf", 30);
SDL_Surface* surf = TTF_RenderText_Blended(font, "Starting fps", {0,0,0});
SDL_Texture* tex = SDL_CreateTextureFromSurface(Render::Get_Renderer(), surf);
m_Screen_Object.Add_Texture(tex); // <-- Immediately do this
auto& texRef = m_Screen_Object.back(); // <-- Get a reference to the stored pointer (this assumes Screen_Object forwards a back() accessor; see the sketch below)
Then you use texRef after that. It is an actual reference to the pointer you added to the vector, not a copy of the pointer.
Then the loop would be simply:
texRef = SDL_CreateTextureFromSurface(Render::Get_Renderer(), surf);
This changes the actual pointer stored in the vector.
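Since the Screen_Object in the question doesn't show a back() member, here is a hypothetical sketch of the forwarding accessor the code above assumes:
SDL_Texture*& Screen_Object::back()    // hypothetical accessor, not part of the question's class
{
    return m_Textures.back();          // a reference to the pointer element stored in the vector
}
With that in place, assigning to texRef in the fps block really does replace the element that Draw_Textures() iterates over, so the newly created texture is the one that gets rendered.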

How to get all vertex coordinates from the DirectXTK (ToolKit) DirectX::Model class to use for collision detection

I'm doing some basic rendering with DirectXToolKit and I would like to be able to get the vertex coordinates for each model in order to compute collisions between models.
Currently, I have some test code to load the model, but the ID3D11Buffer is filled internally by CreateFromSDKMESH:
void Model3D::LoadSDKMESH(ID3D11Device* p_device, ID3D11DeviceContext* device_context, const wchar_t* file_mesh)
{
mAlpha = 1.0f;
mTint = DirectX::Colors::White.v;
mStates.reset(new DirectX::CommonStates(p_device));
auto fx = new DirectX::EffectFactory(p_device);
fx->SetDirectory(L"media");
mFxFactory.reset(fx);
mBatch.reset(new DirectX::PrimitiveBatch<DirectX::VertexPositionColor>(device_context));
mBatchEffect.reset(new DirectX::BasicEffect(p_device));
mBatchEffect->SetVertexColorEnabled(true);
{
void const* shaderByteCode;
size_t byteCodeLength;
mBatchEffect->GetVertexShaderBytecode(&shaderByteCode, &byteCodeLength);
HR(p_device->CreateInputLayout(DirectX::VertexPositionColor::InputElements,
DirectX::VertexPositionColor::InputElementCount,
shaderByteCode, byteCodeLength,
mBatchInputLayout.ReleaseAndGetAddressOf()));
}
mModel = DirectX::Model::CreateFromSDKMESH(p_device, file_mesh, *mFxFactory);
}
I know there is a way to get vertices from the ID3D11Buffer, answered here:
How to read vertices from vertex buffer in Direct3d11
But they suggest not reading vertices back from GPU memory, so I assume it's better to load the vertices ahead of time into a separate container.
I looked into CreateFromSDKMESH, and there are a few functions that are publicly accessible without making changes to DirectXTK.
In order to get the vertices while loading a model, replace the line mModel = DirectX::Model::CreateFromSDKMESH(p_device, file_mesh, *mFxFactory); from the question above with:
size_t data_size = 0;
std::unique_ptr<uint8_t[]> v_data;
HRESULT hr = DirectX::BinaryReader::ReadEntireFile(file_mesh, v_data, &data_size);
if (FAILED(hr))
{
DirectX::DebugTrace("CreateFromSDKMESH failed (%08X) loading '%ls'\n", hr, file_mesh);
throw std::exception("CreateFromSDKMESH");
}
uint8_t* mesh_data = v_data.get();
mModel = DirectX::Model::CreateFromSDKMESH(p_device, v_data.get(), data_size, *mFxFactory, false, false);
mModel->name = file_mesh;
auto v_header = reinterpret_cast<const DXUT::SDKMESH_HEADER*>(mesh_data);
auto vb_array = reinterpret_cast<const DXUT::SDKMESH_VERTEX_BUFFER_HEADER*>(mesh_data + v_header->VertexStreamHeadersOffset);
if(v_header->NumVertexBuffers < 1)
throw std::exception("Vertex Buffers less than 1");
auto& vertex_header = vb_array[0];
uint64_t buffer_data_offset = v_header->HeaderSize + v_header->NonBufferDataSize;
uint8_t* buffer_data = mesh_data + buffer_data_offset;
auto verts_pairs = reinterpret_cast<std::pair<Vector3,Vector3>*>(buffer_data + (vertex_header.DataOffset - buffer_data_offset));
There, accessing a coordinate should be as simple as
float x = verts_pairs[0].first.x;
and the total number of vertices is stored in
vertex_header.NumVertices
Don't forget that the loaded file data (and with it this view of the vertices) is freed once loading finishes, so you may want to copy the vertices out first, for example:
memcpy(vertexBuffer, reinterpret_cast<std::pair<Vector3,Vector3>*>(buffer_data + (vertex_header.DataOffset - buffer_data_offset)), vertexCnt * sizeof(std::pair<Vector3,Vector3>)); // size in bytes, for vertexCnt vertices
Also, the vertex buffer doesn't get transformed by the draw functions, so you will need to apply the transforms yourself.
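As one example of feeding this data into collision detection, the positions (the .first of each pair) can be folded into an axis-aligned bounding box. This is only a sketch; it assumes Vector3 above is DirectX::SimpleMath::Vector3 and that <cfloat> is included for FLT_MAX.
Vector3 box_min(FLT_MAX, FLT_MAX, FLT_MAX);
Vector3 box_max(-FLT_MAX, -FLT_MAX, -FLT_MAX);
for (uint64_t i = 0; i < vertex_header.NumVertices; ++i)
{
    const Vector3& p = verts_pairs[i].first;    // position; .second holds the normal
    box_min = Vector3::Min(box_min, p);
    box_max = Vector3::Max(box_max, p);
}
// box_min/box_max bound the mesh in model space; transform them with the model's world
// matrix before testing them against another model's box.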
Thanks,

Why is this CCSpriteFrame null?

The code below compiles and runs fine in the init() section of the class, yet when I try to move it into a separate method, a CCSpriteFrame ends up null. I'd like to learn what conceptual assumption I've gotten myself into this time =s
void SceneView::runZoo(Animal& animal, std::string attack) {
//
// Animation using Sprite BatchNode
//
std::string spriteName;
CCSpriteFrameCache* cache = CCSpriteFrameCache::sharedSpriteFrameCache();
CCSize iSize = CCDirector::sharedDirector()->getWinSize();
spriteName = "killerRabbit.plist";
cache->addSpriteFramesWithFile(spriteName.c_str());
spriteName = "killerRabbit1.png";
sprite = CCSprite::createWithSpriteFrameName(spriteName.c_str());
sprite->setPosition( ccp(iSize.width/2 - 160, iSize.height/2 - 40) );
spriteName = "killerRabbit.png";
CCSpriteBatchNode* spritebatch = CCSpriteBatchNode::create(spriteName.c_str());
spritebatch->addChild(sprite);
this->addChild(spritebatch);
CCArray* animFrames = CCArray::createWithCapacity(15);
spriteName = "killerRabbit";
char str[100] = {0};
for(int i = 1; i <= 9; i++) {
sprintf(str, (spriteName + "%d.png").c_str(), i);
CCSpriteFrame* frame = cache->spriteFrameByName(str);
//Null here
animFrames->addObject(frame);
//Null here
}
CCAnimation* animation = CCAnimation::createWithSpriteFrames(animFrames, 0.15f);
sprite->runAction( CCRepeatForever::create(CCAnimate::create(animation)) );
}
The actual error:
/** Appends an object. Behavior undefined if array doesn't have enough capacity. */
void ccArrayAppendObject(ccArray *arr, CCObject* object)
{
CCAssert(object != NULL, "Invalid parameter!");
object->retain();
arr->arr[arr->num] = object;
arr->num++;
}
That means that the CCSpriteFrameCache might be handing back null for some reason. Any ideas?
I haven't worked much with cocos2d in a C++ environment, but I know that in Objective-C, before you can use an array you have to do this:
CCArray *array = [[CCArray alloc] initWithCapacity:15];
If you do not do that it will always be null. I would assume it is the same with C++.
frame will be null when the specified image does not exist. I think you'd better look through the plist file, or work out which frame name caused the error.
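Either way, a defensive version of the loop from runZoo() makes the failing frame obvious. This is just a sketch; CCLOG is the standard cocos2d-x logging macro.
char str[100] = {0};
for (int i = 1; i <= 9; i++) {
    sprintf(str, (spriteName + "%d.png").c_str(), i);
    CCSpriteFrame* frame = cache->spriteFrameByName(str);
    if (frame == NULL) {
        CCLOG("sprite frame '%s' is not in the cache", str);   // missing from the .plist, or the plist was never loaded in this path
        continue;                                              // skip it instead of passing NULL to addObject()
    }
    animFrames->addObject(frame);
}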

Access Violation when writing to pointer variables in dynamically allocated classes

OK, the title doesn't explain my situation very well, so I'll try to explain a little better here:
Here is part of my class structure:
ObjectView (abstract class)
ShipView : ObjectView (child of object view)
In a method I create a new ShipView:
ShipView *shipview; (in the header file)
shipview = new ShipView(); (in the main part of the code)
I then run shipview->Initialise(); to set everything up in the new object.
But whenever I reach a line of code that tries to write through a pointer declared in the ObjectView class, it fails with an access violation.
The message that I get is below:
"Unhandled exception at 0x00a0cf1c in AsteroidGame.exe: 0xC0000005: Access violation writing location 0xbaadf011."
For example this line:
_ObjectData = new Model[mesh->mNumVertices];
will give me the error.
Just fyi I have put this in the header file:
struct Model{
GLfloat x,y,z;
GLfloat nX,nY,nZ;
GLfloat u,v;
};
Model *_ObjectData;
However, if I were to do something along the lines of
Model *_ObjectData = new Model[mesh->mNumVertices];
(declare and initialise all at once)
it would work....
It's as if it doesn't know the header file is there, or the class has not been properly constructed and therefore its memory has not been allocated properly.
Any help would be greatly appreciated.
EDIT
Header File:
class ObjectView
{
public:
ObjectView(void);
virtual ~ObjectView(void);
void Initialise(std::string objectpath, std::string texturepath);
void InitialiseVBO(const aiScene* sc);
void RenderObject();
virtual void ScaleObject() = 0;
virtual void TranslateObject() = 0;
virtual void RotateObject() = 0;
protected:
struct Model{
GLfloat x,y,z;
GLfloat nX,nY,nZ;
GLfloat u,v;
};
Model *_ObjectData;
struct Indices{
GLuint x,y,z;
};
Indices *_IndicesData;
TextureLoader _textureloader;
GLuint _objectTexture;
GLuint _objectVBO;
GLuint _indicesVBO;
int _numOfIndices;
};
Code:
void ObjectView::InitialiseVBO(const aiScene* sc)
{
const aiMesh* mesh = sc->mMeshes[0];
_ObjectData = new Model[mesh->mNumVertices];
for(unsigned int i = 0; i < mesh->mNumVertices; i++)
{
_ObjectData[i].x = mesh->mVertices[i].x;
_ObjectData[i].y = mesh->mVertices[i].y;
_ObjectData[i].z = mesh->mVertices[i].z;
_ObjectData[i].nX = mesh->mNormals[i].x;
_ObjectData[i].nY = mesh->mNormals[i].y;
_ObjectData[i].nZ = mesh->mNormals[i].z;
_ObjectData[i].u = mesh->mTextureCoords[0][i].x;
_ObjectData[i].v = 1-mesh->mTextureCoords[0][i].y;
}
glGenBuffers(1, &_objectVBO);
glBindBuffer(GL_ARRAY_BUFFER, _objectVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(Model) * mesh->mNumVertices, &_ObjectData[0].x, GL_STATIC_DRAW);
_IndicesData = new Indices[mesh->mNumFaces];
for(unsigned int i = 0; i < mesh->mNumFaces; ++i)
{
for (unsigned int a = 0; a < 3; ++a)
{
unsigned int temp = mesh->mFaces[i].mIndices[a];
if(a == 0)
_IndicesData[i].x = temp;
else if(a == 1)
_IndicesData[i].y = temp;
else
_IndicesData[i].z = temp;
}
}
glGenBuffers(1, &_indicesVBO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indicesVBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(Indices) * mesh->mNumFaces, _IndicesData, GL_STATIC_DRAW);
_numOfIndices = sizeof(Indices) * mesh->mNumFaces;
glBindBuffer(GL_ARRAY_BUFFER, 0);
delete _ObjectData;
delete _IndicesData;
}
If
_ObjectData = new Model[mesh->mNumVertices];
crashes while
Model *_ObjectData = new Model[mesh->mNumVertices];
doesn't, it certainly looks like your object hasn't been initialized.
My psychic debugger suggests that you have declared a different variable, also called shipview, which is what you're calling Initialize() on.
0xbaadf011 is probably an offset from 0xbaadf00d, a hexadecimal spelling of "bad food" that the Windows debug heap uses to fill uninitialized allocated memory. It is almost certainly an uninitialized pointer.
You are clearly not setting your pointers before using them. I suggest looking at the call stack when it crashes, setting a breakpoint in one of the functions above the crash, then restarting the program and stepping line by line until you find the pointer whose value is 0xbaadf011 or 0xbaadf00d. Then figure out where you were supposed to set that pointer.
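If the shadowing guess above is right, the pattern would look something like the following. This is a hypothetical reconstruction (the owner class, method split, and file names are invented for illustration), not code from the question.
class Game                                              // hypothetical owner class
{
    ShipView* shipview;                                 // the member declared in the header, never assigned
public:
    void Setup()
    {
        ShipView* shipview = new ShipView();            // BUG: a local that shadows the member
    }
    void Load()
    {
        shipview->Initialise("ship.obj", "ship.png");   // uses the member, which still holds the
    }                                                   // debug-heap fill (0xbaadf00d), so the first
};                                                      // write inside InitialiseVBO() faults
Removing the local declaration, so that Setup() assigns the member (shipview = new ShipView();), makes both calls act on the same, constructed object.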

Direct2D - Nothing is being drawn to the screen, something wrong in the way I'm handling arrays in C++

Hello everyone and thank you for looking. This is a follow-up to the original question posted here.
I have a base class that I define thusly:
class DrawableShape
{
public:
virtual HRESULT DrawShape(ID2D1HwndRenderTarget* m_pRenderTarget)
{
return S_OK;
}
};
I have two classes that extend this class, both are similar, so I'm listing one:
class MyD2DEllipse : public DrawableShape
{
private:
D2D1_ELLIPSE data;
public:
MyD2DEllipse();
HRESULT DrawShape(ID2D1HwndRenderTarget* m_pRenderTarget);
};
The DrawShape function is implemented like this:
HRESULT MyD2DEllipse::DrawShape(ID2D1HwndRenderTarget* m_pRenderTarget)
{
HRESULT hr = E_FAIL;
ID2D1SolidColorBrush* m_pBrush;
hr = m_pRenderTarget->CreateSolidColorBrush(
D2D1::ColorF(D2D1::ColorF::OrangeRed),
&m_pBrush
);
m_pRenderTarget->DrawEllipse(&data, m_pBrush, 10.f);
return hr;
}
I want to draw a random number of ellipses and rectangles to the screen. So I first pick those random counts, create an array of DrawableShape of that size (since I can't allocate the objects dynamically in C++), replace the parent objects with the child objects, and then call the draw function on each element of the array. Here's what my code looks like:
HRESULT Demo::OnRender()
{
HRESULT hr = S_OK;
hr = CreateDeviceResources();
if (SUCCEEDED(hr))
{
m_pRenderTarget->BeginDraw();
m_pRenderTarget->SetTransform(D2D1::Matrix3x2F::Identity());
m_pRenderTarget->Clear(D2D1::ColorF(D2D1::ColorF::White));
// Decide on # of primitives
randEllipse = 1 + (rand() % 5);
randRectangle = 1 + (rand() % 5);
totalPrimitives = randEllipse + randRectangle;
DrawableShape *shapes;
shapes = new MyShape[totalPrimitives];
for (int i=0; i<randEllipse; i++)
{
MyEllipse ellipse1;
shapes[i] = ellipse1;
}
for (int i=randEllipse; i<(randEllipse + randRectangle); i++)
{
MyRectangle rect1;
shapes[i] = rect1;
}
for (int i=0; i<totalPrimitives; i++)
{
hr = shapes[i].DrawMyShape(m_pRenderTarget);
}
hr = m_pRenderTarget->EndDraw();
}
}
That should've worked, but it doesn't. Also, after writing this out, I realize that I'm better off creating the array in some sort of init function, and calling the draw on the array in the OnRender function. Please help!!
EDIT: Okay I've got the shapes working with pointers, the problem is the construction of the array. So I have something like this:
MyD2DRectangle rect1;
MyD2DEllipse ell1;
DrawableShape *shape1 = &rect1;
DrawableShape *shape2 = &ell1;
shape1->DrawShape(m_pRenderTarget);
shape2->DrawShape(m_pRenderTarget);
That seems to work by itself. How can I create the array of DrawableShape without slicing?
shapes is an array of MyShape instances. When you say shapes[i] = ellipse1; or shapes[i] = rect1;, you are losing the subclass data as part of this assignment, which is known as slicing in C++.
As such, each call to shapes[i].DrawMyShape(m_pRenderTarget); is just returning S_OK as defined in MyShape.
In order to properly use polymorphism in C++, you need to use pointers or references to MyShape instances (usually allocated using new). If you are not allowed to do this (homework?), then you need to find a way to do this without polymorphism.
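To answer the edit directly: one way to hold the shapes without slicing is a container of base-class pointers. The following is a minimal sketch using the declared class names from the question (DrawableShape, MyD2DEllipse, MyD2DRectangle); BuildShapes and DrawShapes are illustrative names, and std::unique_ptr takes care of deleting the shapes.
#include <memory>
#include <vector>

std::vector<std::unique_ptr<DrawableShape>> shapes;

void BuildShapes(int randEllipse, int randRectangle)
{
    for (int i = 0; i < randEllipse; ++i)
        shapes.emplace_back(new MyD2DEllipse());     // stores a pointer to the derived object, nothing is sliced
    for (int i = 0; i < randRectangle; ++i)
        shapes.emplace_back(new MyD2DRectangle());
}

HRESULT DrawShapes(ID2D1HwndRenderTarget* renderTarget)
{
    HRESULT hr = S_OK;
    for (std::size_t i = 0; i < shapes.size(); ++i)
        hr = shapes[i]->DrawShape(renderTarget);     // virtual dispatch reaches each derived override
    return hr;
}
Because DrawShape is virtual in DrawableShape, the call through the pointer dispatches to MyD2DEllipse::DrawShape or MyD2DRectangle::DrawShape, which is exactly what the by-value array of base objects cannot do.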