A problem with returning a pointer from a class
I'm working on a project for a 3D course that uses DirectX 11. In earlier assignments, which were much smaller in scope, we were encouraged to just put all the code in and around the main file. For this project I wanted more structure, so I decided to subdivide the code into three classes: WindowHandler, D3DHandler and PipelineHandler.
D3DHandler sits in WindowHandler, and PipelineHandler sits in D3DHandler like so:
class WindowHandler
{
private:
    HWND window;
    MSG message;

    class D3DHandler
    {
    private:
        // Member names here are illustrative; the originals live in the real headers.
        ID3D11Device* device;
        ID3D11DeviceContext* immediateContext;
        IDXGISwapChain* swapChain;
        ID3D11RenderTargetView* rtv;
        ID3D11Texture2D* dsTexture;
        ID3D11DepthStencilView* dsView;
        D3D11_VIEWPORT viewport;

        class PipelineHandler
        {
        private:
            ID3D11VertexShader* vShader;
            ID3D11PixelShader* pShader;
            ID3D11InputLayout* inputLayout;
            ID3D11Buffer* vertexBuffer;
        };

        PipelineHandler ph;
    };

    D3DHandler d3d;
};
(The classes are actually divided into their own .h files; I've just compressed the code a bit here.)
Everything ran smoothly until I tried to bind my VertexBuffer in my Render() function.
void D3DHandler::Render()
{
    float clearColor[] = { 0.0f, 0.0f, 0.0f, 1.0f };
    this->immediateContext->ClearRenderTargetView(this->rtv, clearColor); // Clear backbuffer
    this->immediateContext->ClearDepthStencilView(this->dsView, D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0);

    UINT32 vertexSize = sizeof(Vertex); // Will be the size of 8 floats: x y z, r g b, u v
    UINT32 offset = 0;
    this->immediateContext->VSSetShader(this->ph.GetVertexShader(), nullptr, 0);
    this->immediateContext->IASetVertexBuffers(0, 1, this->ph.GetVertexBuffer(), &vertexSize, &offset);
    this->immediateContext->IASetInputLayout(this->ph.GetInputLayout());
    this->immediateContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    this->immediateContext->PSSetShader(this->ph.GetPixelShader(), nullptr, 0);

    // Draw geometry
    this->immediateContext->Draw(3, 0);
}
The problem occurs at IASetVertexBuffers, where this->ph.GetVertexBuffer() gives the error
"ID3D11Buffer*" is incomaptible with parameter of type "ID3D11Buffer
*const *"
The Get-function in PipelineHandler returns a pointer to the COM object, and for VertexBuffer it looks like this:
ID3D11Buffer * PipelineHandler::GetVertexBuffer() const
{
    return this->vertexBuffer;
}
From what documentation I've found, IASetVertexBuffers expects an array of buffers, i.e. a double pointer. However, in the code I've worked with before, where the ID3D11Buffer* was global, the same code generated no errors (and yes, most of the code I'm working with so far has been wholesale copied from the last assignment in this course, just placed into classes).
I have tried:
- Removing the this-> pointer in front of ph.
- Creating a new function ID3D11Buffer & GetVertexBufferRef() const, which returns *this->vertexBuffer, and using that instead.
I have yet to try:
- Moving vertexBuffer from the PipelineHandler up to D3DHandler. I have a feeling that would work, but it would also disconnect the vertex buffer from the other objects that I consider part of the pipeline rather than the context, which is something I would like to avoid.
I am grateful for any feedback I can get. I'm fairly new to programming, so any comments that aren't a solution to my problem are just as welcome.
The problem you are having is that this function signature does not take an ID3D11Buffer* interface pointer. It takes an array of ID3D11Buffer* interface pointers.
this->immediateContext->IASetVertexBuffers(0, 1, this->ph.GetVertexBuffer(), &vertexSize, &offset);
Can be any of the following:
this->immediateContext->IASetVertexBuffers(0, 1, &this->ph.GetVertexBuffer(), &vertexSize, &offset);
-or-
ID3D11Buffer* vb = this->ph.GetVertexBuffer();
this->immediateContext->IASetVertexBuffers(0, 1, &vb, &vertexSize, &offset);
-or-
ID3D11Buffer* vb[] = { this->ph.GetVertexBuffer() };
this->immediateContext->IASetVertexBuffers(0, 1, vb, &vertexSize, &offset);
This is because the Direct3D 11 API supports 'multi-stream' rendering, where you have more than one Vertex Buffer referenced in the Input Layout, and the streams are merged as the geometry is drawn. The function therefore also takes an array of vertex strides and an array of offsets, which is why you had to take the address of the values vertexSize and offset instead of just passing them by value. See Microsoft Docs for full details on the function signature.
This feature was first introduced back in Direct3D 9, so the best introduction to it is found here. With Direct3D 11, depending on the hardware feature level, you can use up to 16 or 32 slots at once. That said, most basic rendering uses a single stream/VertexBuffer.
You should take a look at this page as to why the 'first' option above can get you into trouble with smart-pointers. (Note also that taking the address of a temporary return value like that is a Microsoft compiler extension rather than standard C++.)
For the 'second' option, I'd recommend using C++11's auto keyword, so it would be: auto vb = this->ph.GetVertexBuffer();
The 'third' option obviously makes most sense if you are using more than one VertexBuffer for multi-stream rendering.
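As a rough sketch of that case (positionBuffer and colorBuffer are hypothetical ID3D11Buffer* members, and the input layout is assumed to declare two streams), a multi-stream bind would look like:
// Two vertex streams, merged by the input layout at draw time.
ID3D11Buffer* buffers[2] = { positionBuffer, colorBuffer };
UINT strides[2] = { sizeof(float) * 3, sizeof(float) * 4 }; // per-stream vertex sizes
UINT offsets[2] = { 0, 0 };
immediateContext->IASetVertexBuffers(0, 2, buffers, strides, offsets);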
Note that you can never have more than one ID3D11Buffer active as the Index Buffer, so that function does in fact take an ID3D11Buffer* rather than an array. See Microsoft Docs.
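For example, with a hypothetical indexBuffer:
// A single index buffer is bound directly; no array involved.
immediateContext->IASetIndexBuffer(indexBuffer, DXGI_FORMAT_R32_UINT, 0);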
Related
When I toggle between these two ways of drawing a vector of DrawArraysIndirectCommand, the gl_BaseInstance read in the vertex shader is different. In the latter case, gl_BaseInstance is always 0.
To my understanding the result should be the same: each command's gl_InstanceID should start from zero, and gl_BaseInstance should be the baseInstance member of the DrawArraysIndirectCommand struct.
Am I wrong here, should there be a difference passing each command one at a time versus passing all commands at once?
auto commands = std::vector<DrawArraysIndirectCommand>{...};

// A - Pass each command one at a time
for (const auto& command : commands) {
    glMultiDrawArraysIndirect(GL_TRIANGLES, &command, 1, 0);
}

// B - Pass all commands as an array
glMultiDrawArraysIndirect(GL_TRIANGLES, commands.data(), commands.size(), 0);

typedef struct {
    GLuint count;
    GLuint instanceCount;
    GLuint first;
    GLuint baseInstance;
} DrawArraysIndirectCommand;
Note: I'm using OpenGL 4.6
I sent my application to several people for testing. The first tester got the same result as mine, but the other two see something strange. For some reason, the image, which should appear at its original size in the lower left corner, appears stretched to fullscreen for them. They also don't see the GUI elements (though the buttons still work if they are found with the mouse). To be clear, it isn't the stretched image overlapping the buttons: I sent them a version with a transparent image and the buttons still weren't drawn. For the GUI drawing I use the Nuklear library. Below are screenshots and the code responsible for positioning the problem image. What could cause this?
[ Good behavior / Bad behavior ]
int width, height;
{
    fs::path const path = fs::current_path() / "gamedata" / "images" / "logo.png";
    unsigned char *const texture = stbi_load(path.u8string().c_str(), &width, &height, nullptr, STBI_rgb_alpha);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, texture);
    stbi_image_free(texture);
}
...
{
    float const x = -1.0f + width * 2.0f / xResolution;
    float const y = -1.0f + height * 2.0f / yResolution;
    float const vertices[30] = {
        /* Position */        /* UV */
        -1.0f, -1.0f, 0.0f,   0.0f, 0.0f,
        -1.0f,  y,    0.0f,   0.0f, 1.0f,
         x,     y,    0.0f,   1.0f, 1.0f,
        -1.0f, -1.0f, 0.0f,   0.0f, 0.0f,
         x,    -1.0f, 0.0f,   1.0f, 0.0f,
         x,     y,    0.0f,   1.0f, 1.0f
    };
    glBufferData(GL_ARRAY_BUFFER, 30 * sizeof(float), vertices, GL_STATIC_DRAW);
}
[ UPDATE ]
By trial and error, I realized that the problem is caused by the classes responsible for rendering the background and the logo, and both of them are wrong. Separately each works as it should, but as soon as something else is added to the game loop, everything breaks.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
background.render();
glEnable(GL_BLEND);
glDepthMask(GL_FALSE);
logo.render();
nk_glfw3_render();
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);
I wrote these classes myself, so most likely I missed something. Whatever mistake I find in one is also in the other, since they're almost identical. So far I can't determine what exactly in these classes is wrong…
[ Background.hpp / Background.cpp ]
I found some errors in the posted code. In dramatic fashion, I will reveal the culprit last.
NULL used as an integer
For example,
glBindTexture(GL_TEXTURE_2D, NULL); // Incorrect
The glBindTexture function accepts an integer parameter, not a pointer. Here is the correct version:
glBindTexture(GL_TEXTURE_2D, 0); // Correct
This also applies to glBindBuffer and glBindVertexArray.
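The same fix applies there:
glBindBuffer(GL_ARRAY_BUFFER, 0); // integer zero, not the NULL pointer macro
glBindVertexArray(0);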
Nullary constructor is defined explicit
The explicit keyword only affects unary constructors (constructors which take one parameter). It does not affect constructors with any other number of parameters.
explicit Background() noexcept; // "explicit" does not do anything.
Background() noexcept; // Exact same declaration as above.
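For contrast, here is a minimal sketch (with a hypothetical Shader type) of what explicit actually does on a unary constructor:
// Hypothetical unary constructor, where explicit matters:
struct Shader {
    explicit Shader(const char* name) {}
};

void use(const Shader& s) {}

int main() {
    // use("image");       // error: explicit blocks the implicit conversion
    use(Shader("image"));  // OK: the conversion is spelled out
}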
Constructor is incorrectly defined as noexcept
The noexcept keyword means "this function will never throw an exception". However, the constructor contains the following code, which can throw:
new Shader("image")
According to the standard, this can throw a std::bad_alloc. So the noexcept annotation is incorrect. See Can the C++ `new` operator ever throw an exception in real life?
On a more practical note, the constructor reads an image from disk. This is likely to fail, and throwing an exception is a reasonable way of handling this.
The noexcept keyword is not particularly useful here. Perhaps the compiler can generate slightly less code at the call site, but this is likely to make at most an infinitesimal difference because the constructor is cold code (cold = not called often). The noexcept qualifier is mostly just useful for choosing between different generic algorithms, see What is noexcept useful for?
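To illustrate that last point, here is a small self-contained sketch: std::vector only moves elements during reallocation when the move constructor is noexcept (via std::move_if_noexcept); otherwise it copies to preserve the strong exception guarantee.
#include <vector>

struct Widget {
    Widget() = default;
    Widget(const Widget&) { /* copying may throw */ }
    Widget(Widget&&) noexcept {} // moving never throws
};

int main() {
    std::vector<Widget> v(4);
    // Growing the vector relocates elements with the noexcept move constructor.
    // Remove the noexcept and reallocation falls back to copying instead.
    v.resize(100);
}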
No error handling
Remember that stbi_load will return NULL if an error occurs. This case is not handled.
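A minimal sketch of handling it, reusing the loading code from the question:
unsigned char *const texture = stbi_load(path.u8string().c_str(), &width, &height, nullptr, STBI_rgb_alpha);
if (texture == nullptr) {
    // stbi_failure_reason() returns a short description of the last stb_image failure.
    throw std::runtime_error(std::string("failed to load logo.png: ") + stbi_failure_reason());
}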
Rule of three is violated
The Background class does not define a copy constructor or copy assignment operator even though it defines a destructor. While this is not guaranteed to make your program incorrect, it is kind of like leaving a loaded and cocked gun on the kitchen counter and hoping nobody touches it. This is called the Rule of Three and it is simple to fix. Add a deleted copy constructor and copy assignment operator.
// These three go together, either define all of them or none.
// Hence, "rule of three".
Background(const Background &) = delete;
Background &operator=(const Background &) = delete;
~Background();
See What is The Rule of Three?
Buffer is incorrectly deleted
Here is the line:
glDeleteBuffers(1, &VBO);
The short version... you should move this into Background::~Background().
The long version... when you delete the buffer, it is not removed from the VAO but the name can get reused immediately. According to the OpenGL 4.6 spec 5.1.2:
When a buffer, texture, or renderbuffer object is deleted, it is ... detached from any attachments of container objects that are bound to the current context, ...
So, because the VAO is not currently bound, deleting the VBO does not remove the VBO from the VAO (if the VAO were bound, this would be different). But, section 5.1.3:
When a buffer, texture, sampler, renderbuffer, query, or sync object is deleted, its name immediately becomes invalid (e.g. is marked unused), but the underlying object will not be deleted until it is no longer in use.
So the VBO will remain, but the name may be reused. This means that a later call to glGenBuffers might give you the same name. Then, when you call glBufferData, it overwrites the data in both your background and your logo. Or, glGenBuffers might give you a completely different buffer name. This is completely implementation-dependent, and this explains why you see different behavior on different computers.
As a rule, I would avoid calling glDeleteBuffers until I am done using the buffer. You can technically call glDeleteBuffers earlier, but it means you can get the same buffer back from glGenBuffers.
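A sketch of that fix, assuming Background holds the buffer and vertex array in members named VBO and VAO (names hypothetical):
Background::~Background()
{
    // Deleting only after the object is done rendering means the names
    // cannot be recycled by a later glGenBuffers/glGenVertexArrays call.
    glDeleteBuffers(1, &VBO);
    glDeleteVertexArrays(1, &VAO);
}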
I have a texture, along with its shader resource view, to which I render my scene's original image by using it as a render target.
Like millions before me, I then use it as an input to my next shader so I can blur, do HDR, all of that.
The simple but "new to me" problem is that when I go to call PSSetShaderResources and put the texture into the shader's texture/constant buffer I get this error:
"Resource being set to PS shader resource slot 0 is still bound on output"
...which makes sense, as it may still well be. My question is how do I unbind the texture from being the render target of the initial pass?
I'm already doing this, which I presume sets it back to the original RenderTargetView that DXUT was using at the beginning, but it doesn't seem to free up the texture:
ID3D11RenderTargetView* renderTargetViewArrayTQ[1] = { DXUTGetD3D11RenderTargetView() };
pd3dImmediateContext->OMSetRenderTargets(1, renderTargetViewArrayTQ, nullptr);
Assuming your render target is in slot 0:
ID3D11RenderTargetView* nullRTV = nullptr;
d3dContext->OMSetRenderTargets(1, &nullRTV, nullptr);
You need to pass an array of ID3D11RenderTargetView* of length 1, the first of which needs to be nullptr.
When unbinding the texture from the pixel shader, this does NOT work:
ID3D11ShaderResourceView *const pSRV[1] = { NULL };
pd3dImmediateContext->PSSetShaderResources(0, 0, pSRV);
However, this DOES work:
ID3D11ShaderResourceView *const pSRV[1] = { NULL };
pd3dImmediateContext->PSSetShaderResources(0, 1, pSRV);
Put more simply, when setting the resource back to null, you still have to tell it you're passing in one pointer, albeit a null one. Passing a null along with a zero count appears to do nothing.
I have a question related to memory usage when using a dynamic constant buffer vs. a constant buffer updated frequently (using the DEFAULT usage type) vs. a dynamic vertex buffer.
I have always defined the constant buffer usage as DEFAULT and updated the changes in real time, like so:
Eg 1
D3D11_BUFFER_DESC desc;
desc.Usage = D3D11_USAGE_DEFAULT;
// irrelevant code omitted

void Render()
{
    WORLD = XMMatrixTranslation(x, y, z); // x, y, z are changed dynamically
    ConstantBuffer cb;
    cb.world = WORLD;
    devcon->UpdateSubresource(constantBuffer, 0, nullptr, &cb, 0, 0);
    // bind with VSSetConstantBuffers / PSSetConstantBuffers
}
But recently I came across a tutorial from Rastertek that used devcon->Map() and devcon->Unmap() to update them, and it defined the usage as DYNAMIC:
Eg 2
void CreateBuffer()
{
    D3D11_BUFFER_DESC desc;
    desc.Usage = D3D11_USAGE_DYNAMIC;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE; // required so Map() with WRITE_DISCARD works
    // irrelevant code omitted
}

void Render()
{
    WORLD = XMMatrixTranslation(x, y, z); // x, y, z are changed dynamically
    D3D11_MAPPED_SUBRESOURCE mappedRes;
    ConstantBuffer *cbPtr;
    devcon->Map(constantBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedRes);
    cbPtr = (ConstantBuffer*)mappedRes.pData;
    cbPtr->World = WORLD;
    devcon->Unmap(constantBuffer, 0);
}
So the question is: is there any performance gain or hit from using a dynamic constant buffer (Eg 2) over the default-usage constant buffer updated at runtime (Eg 1)?
Please do help me clear up this doubt.
Thanks
The answer here, like most performance advice, is "it depends". Both are valid, and it really depends on your content and rendering pattern.
The classic performance reference here is Windows to Reality: Getting the Most out of Direct3D 10 Graphics in Your Games from Gamefest 2007.
If you are dealing with lots of constants, Map of a DYNAMIC constant buffer is better if your data is scattered about and is collected as part of the update cycle. If all your constants are already laid out correctly in system memory, then UpdateSubresource is probably better. If you are reusing the same CB many times a frame and Map/Locking it, then you might run into 'rename' limits with Map/Lock that are less problematic with UpdateSubresource, so "it depends" is really the answer here.
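To make the first case concrete, here is a sketch (object and camera, and the view/proj fields, are hypothetical) of gathering scattered constants directly into the mapped memory:
D3D11_MAPPED_SUBRESOURCE mapped;
devcon->Map(constantBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
ConstantBuffer* cb = (ConstantBuffer*)mapped.pData;
// Constants living in scattered places are collected straight into the
// mapped region, with no intermediate system-memory staging copy.
cb->world = object.worldMatrix;
cb->view  = camera.viewMatrix;
cb->proj  = camera.projMatrix;
devcon->Unmap(constantBuffer, 0);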
And of course, all of this goes out the window with DirectX 12 which has an entirely different mechanism for handling the equivalent of dynamic updates.
Hi, I'm working with shaders and I've just got a quick question about whether or not I can do something here. When mapping data to a buffer, I normally see it done like this. Define a class or struct to represent what's in the buffer:
class MatrixBuffer
{
public:
    D3DXMATRIX worldMatrix;
    D3DXMATRIX viewMatrix;
    D3DXMATRIX projectionMatrix;
};
then set up an input element description when you create the input layout; I got all that. Now, when I see people update the buffer before rendering, it's mostly done like this:
D3D11_MAPPED_SUBRESOURCE mappedResource;
MatrixBuffer * dataPtr;
if (FAILED(deviceContext->Map(m_matrixBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource)))
    return false;
dataPtr = (MatrixBuffer *) mappedResource.pData;
dataPtr->worldMatrix = worldMatrix;
dataPtr->viewMatrix = viewMatrix;
dataPtr->projectionMatrix = projectionMatrix;
deviceContext->Unmap(m_matrixBuffer, 0);
where the values assigned are actually valid D3DXMATRIXes. Seeing as that's okay, would anything go wrong if I just did it like this?
D3D11_MAPPED_SUBRESOURCE mappedResource;
MatrixBuffer* dataPtr;
if (FAILED(deviceContext->Map(m_matrixBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource)))
    return false;
dataPtr = (MatrixBuffer *) mappedResource.pData;
*dataPtr = matrices;
deviceContext->Unmap(m_matrixBuffer, 0);
where matrices is a valid, filled-out MatrixBuffer object? Would there be dire consequences later because of some special way DirectX handles this, or is it fine?
I don't see anything wrong with your version, from a technical perspective.
It might be slightly harder to grasp what actually gets copied there, but from a theoretical viewpoint the resulting code should be pretty much the same -- I would expect any decent optimizer to notice that the "long" version just copies several successive members, and lump them all into one larger copy.
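For what it's worth, the same copy can also be spelled as an explicit memcpy, which makes the bulk-copy intent obvious (a sketch, assuming MatrixBuffer stays trivially copyable):
#include <cstring>

// One bulk copy of all three matrices; equivalent to *dataPtr = matrices.
std::memcpy(mappedResource.pData, &matrices, sizeof(MatrixBuffer));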
Try to avoid using mapping, as it can stall the pipeline. For data that gets updated once a frame, it's better to have a D3D11_USAGE_DEFAULT buffer and then use the UpdateSubresource method to push the new data in.
More on that here: http://blogs.msdn.com/b/shawnhar/archive/2008/04/14/stalling-the-pipeline.aspx