When I toggle between these two ways of drawing a vector of DrawArraysIndirectCommand, the gl_BaseInstance read in the vertex shader differs. In the latter case, gl_BaseInstance is always 0.
To my understanding the result should be the same: each command's gl_InstanceID should start from zero, and gl_BaseInstance should be the baseInstance member of the DrawArraysIndirectCommand struct.
Am I wrong here? Should there be a difference between passing each command one at a time and passing all commands at once?
auto commands = std::vector<DrawArraysIndirectCommand>{...};

// A - Pass each command one at a time
for (const auto& command : commands) {
    glMultiDrawArraysIndirect(GL_TRIANGLES, &command, 1, 0);
}

// B - Pass all commands as an array
glMultiDrawArraysIndirect(
    GL_TRIANGLES,
    commands.data(),
    static_cast<GLsizei>(commands.size()),
    0
);
where DrawArraysIndirectCommand is:

typedef struct {
    GLuint count;
    GLuint instanceCount;
    GLuint first;
    GLuint baseInstance;
} DrawArraysIndirectCommand;
Note: I'm using OpenGL 4.6
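For reference, a buffer-backed variant of case B would look like this (a minimal sketch; indirectBuffer is an illustrative name):

GLuint indirectBuffer = 0;
glCreateBuffers(1, &indirectBuffer);
glNamedBufferStorage(
    indirectBuffer,
    static_cast<GLsizeiptr>(commands.size() * sizeof(DrawArraysIndirectCommand)),
    commands.data(),
    0);
glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);

// With a buffer bound to GL_DRAW_INDIRECT_BUFFER, the second argument is a
// byte offset into that buffer instead of a client pointer:
glMultiDrawArraysIndirect(GL_TRIANGLES, nullptr,
                          static_cast<GLsizei>(commands.size()), 0);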
For example, when I do
glBindBuffer(GL_ARRAY_BUFFER, _id);
glNamedBufferData(_id, size, data, static_cast<GLenum>(usage));
then the program works as expected. But if I delete that first line, my program crashes and prints:
ERROR 1282 in glNamedBufferData
Likewise, if I do
glBindVertexArray(_id);
GLuint attribIndex = 0;
GLuint offset = 0;
for (const GlslType type : layout) {
    const auto& attrib = GLSL_TYPES.at(type);
    glVertexArrayAttribFormat(_id, attribIndex, attrib.size, static_cast<GLenum>(attrib.type), GL_FALSE, offset);
    glEnableVertexArrayAttrib(_id, attribIndex);
    glVertexArrayAttribBinding(_id, attribIndex, 0);
    offset += attrib.size_bytes();
    ++attribIndex; // advance to the next attribute slot
}
It works fine, but if I delete the glBindVertexArray then it doesn't work and prints:
ERROR 1282 in glVertexArrayAttribFormat
ERROR 1282 in glEnableVertexArrayAttrib
ERROR 1282 in glVertexArrayAttribBinding
I figured that by "naming" the VBO or VAO when calling those functions, I wouldn't have to bind it beforehand. But if I have to bind regardless, what's the benefit of these functions taking the extra name argument?
The glGen* functions create an integer name representing an object, but they don't create the object's state itself (well, most of them don't). Only when you bind those objects do they gain their state, and only after they have state can you call any function that requires them to have state; in particular, the direct state access functions.
This is why ARB_direct_state_access also includes the glCreate* functions. These create both an integer name and the state data for the object. Therefore, you don't have to bind anything before calling direct state access functions, so long as you properly created the object beforehand.
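A minimal sketch of the difference (size, data and GL_STATIC_DRAW stand in for the question's arguments):

GLuint generated = 0, created = 0;

glGenBuffers(1, &generated);               // reserves a name; no buffer state yet
// Calling glNamedBufferData(generated, ...) here fails with GL_INVALID_OPERATION (1282)
glBindBuffer(GL_ARRAY_BUFFER, generated);  // the first bind creates the state
glNamedBufferData(generated, size, data, GL_STATIC_DRAW);  // now legal

glCreateBuffers(1, &created);              // creates the name *and* the state
glNamedBufferData(created, size, data, GL_STATIC_DRAW);    // legal without any bind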
A problem with returning a pointer from a class
I am making a project for a 3D course that involves utilizing DirectX 11. In assignments that were much smaller in scope, we were encouraged to just put all the code in and around the main file. For this project I wanted more structure, so I decided to subdivide the code into three classes: WindowHandler, D3DHandler and PipelineHandler.
D3DHandler sits in WindowHandler, and PipelineHandler sits in D3DHandler like so:
class WindowHandler
{
private:
    HWND window;
    MSG message;

    class D3DHandler
    {
    private:
        ID3D11Device* device;
        ID3D11DeviceContext* immediateContext;
        IDXGISwapChain* swapChain;
        ID3D11RenderTargetView* rtv;
        ID3D11Texture2D* dsBuffer;
        ID3D11DepthStencilView* dsView;
        D3D11_VIEWPORT viewport;

        class PipelineHandler
        {
        private:
            ID3D11VertexShader* vShader;
            ID3D11PixelShader* pShader;
            ID3D11InputLayout* inputLayout;
            ID3D11Buffer* vertexBuffer;
        };

        PipelineHandler ph;
    };

    D3DHandler d3d;
};
(The classes are divided into their own .h files; I just wanted to compress the code a bit.)
Everything ran smoothly until I tried to bind my VertexBuffer in my Render() function.
void D3DHandler::Render()
{
    float clearColor[] = { 0.0f, 0.0f, 0.0f, 1.0f };
    this->immediateContext->ClearRenderTargetView(this->rtv, clearColor); // Clear backbuffer
    this->immediateContext->ClearDepthStencilView(this->dsView, D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0);

    UINT32 vertexSize = sizeof(Vertex); // Will be the size of 8 floats: x y z, r g b, u v
    UINT32 offset = 0;

    this->immediateContext->VSSetShader(this->ph.GetVertexShader(), nullptr, 0);
    this->immediateContext->IASetVertexBuffers(0, 1, this->ph.GetVertexBuffer(), &vertexSize, &offset);
    this->immediateContext->IASetInputLayout(this->ph.GetInputLayout());
    this->immediateContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    this->immediateContext->PSSetShader(this->ph.GetPixelShader(), nullptr, 0);

    // Draw geometry
    this->immediateContext->Draw(3, 0);
}
The problem occurs at IASetVertexBuffers, where this->ph.GetVertexBuffer() gives the error:
argument of type "ID3D11Buffer *" is incompatible with parameter of type "ID3D11Buffer *const *"
The Get-function in PipelineHandler returns a pointer to the COM object, and for VertexBuffer it looks like this:
ID3D11Buffer* PipelineHandler::GetVertexBuffer() const
{
    return this->vertexBuffer;
}
From what documentation I've found, IASetVertexBuffers expects an array of buffers, so a double pointer. However, in the code I've worked with before, where the ID3D11Buffer* was global, the same code generated no errors (and yes, most of the code I'm working with so far has been wholesale copied from the last assignment in this course, just placed into classes).
I have tried:
removing the this-> pointer in front of ph
creating a new function ID3D11Buffer& GetVertexBufferRef() const which returns *this->vertexBuffer, and using that instead
I have yet to try:
Moving vertexBuffer from the PipelineHandler up to the D3DHandler. I have a feeling that would work, but it would also disconnect the vertex buffer from the other objects that I consider part of the pipeline rather than the context, which is something I would like to avoid.
I am grateful for any feedback I can get. I'm fairly new to programming, so any comments that aren't a solution to my problem are just as welcome.
The problem you are having is that this function signature does not take an ID3D11Buffer* interface pointer. It takes an array of ID3D11Buffer* interface pointers.
this->immediateContext->IASetVertexBuffers(0, 1, this->ph.GetVertexBuffer(), &vertexSize, &offset);
Can be any of the following:
this->immediateContext->IASetVertexBuffers(0, 1, &this->ph.GetVertexBuffer(), &vertexSize, &offset);
-or-
ID3D11Buffer* vb = this->ph.GetVertexBuffer();
this->immediateContext->IASetVertexBuffers(0, 1, &vb, &vertexSize, &offset);
-or-
ID3D11Buffer* vb[] = { this->ph.GetVertexBuffer() };
this->immediateContext->IASetVertexBuffers(0, 1, vb, &vertexSize, &offset);
This is because the Direct3D 11 API supports 'multi-stream' rendering where you have more than one Vertex Buffer referenced in the Input Layout which is merged as it is drawn. The function also takes an array of vertex sizes and offsets, which is why you had to take the address of the values vertexSize and offset instead of just passing by value. See Microsoft Docs for full details on the function signature.
This feature was first introduced back in Direct3D 9, so the best introduction to it is found here. With Direct3D 11, depending on Direct3D hardware feature level you can use up to 16 or 32 slots at once. That said, most basic rendering uses a single stream/VertexBuffer.
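For example, a two-stream setup might look like this (positionVB, colorVB and the per-element types are illustrative): slot 0 feeds positions, slot 1 feeds colors.

ID3D11Buffer* vbs[2] = { positionVB, colorVB };
UINT strides[2] = { sizeof(DirectX::XMFLOAT3), sizeof(DirectX::XMFLOAT4) };
UINT offsets[2] = { 0, 0 };
this->immediateContext->IASetVertexBuffers(0, 2, vbs, strides, offsets);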
You should take a look at this page as to why the 'first' option above can get you into trouble with smart-pointers.
For the 'second' option, I'd recommend using C++11's auto keyword, so it would be: auto vb = this->ph.GetVertexBuffer();
The 'third' option obviously makes most sense if you are using more than one VertexBuffer for multi-stream rendering.
Note that you can never have more than one ID3D11Buffer active as the Index Buffer, so the function signature does in fact take an ID3D11Buffer* in that case and not an array. See Microsoft Docs.
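For example (indexBuffer is a hypothetical name):

// Only one index buffer can be bound, so it is passed directly, not as an array:
this->immediateContext->IASetIndexBuffer(indexBuffer, DXGI_FORMAT_R16_UINT, 0);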
I was separating my main function from its loop by creating a new function, loop. Now I have the problem that some objects I created in main can't be accessed in loop. Giving loop function parameters for everything is not an option, because it would end up at 30 or more parameters for all the shaders and other objects. So I made the objects global:
main.cpp
Shader light("Shaders/light.shader");
Shader depth("Shaders/depth.shader");
Shader.cpp
Shader::Shader(const std::string& filePath)
{
    source = parseShader(filePath);
    if (shaderGeometry)
        ID = createShader(source.vertexSource, source.fragmentSource, source.geometrySource);
    else
        ID = createShader(source.vertexSource, source.fragmentSource);
    use();
}
The problem here is that constructing the shaders outside of a function causes an exception at:
int Shader::createShader(const std::string& vertexShader, const std::string& fragmentShader)
{
    program = glCreateProgram(); //<-- HERE
    //some further code...
}
The vertexShader and fragmentShader sources are parsed correctly, so that part is not the problem.
I guess the exception is thrown because the shaders need to be created inside a function that runs after the GL function pointers have been loaded.
I tried to play around with extern, but something like this just produces a compile error:
Global variables:
extern Shader light;
extern Shader depth;
In the function:
int main()
{
    Window wnd(width, height);
    Shader light("Shaders/light.shader");
    Shader depth("Shaders/depth.shader");
    wnd.loop();
    //some further code
    return 0;
}
Switching the two around (the extern declarations inside the function and the definitions at global scope) ends in the same error as before.
I could maybe access the shaders through a getter method and some kind of array that stores every vertex, fragment and geometry shader, but perhaps there is a simpler way and I just missed something.
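One workaround that should avoid the problem is deferring construction by making the globals pointers, so the Shader constructors only run after the window (and with it the GL context) exists. A sketch, reusing the names from above:

// Shared header would declare: extern Shader* light; extern Shader* depth;
Shader* light = nullptr;
Shader* depth = nullptr;

int main()
{
    Window wnd(width, height);  // creates the context and loads the GL pointers
    light = new Shader("Shaders/light.shader");
    depth = new Shader("Shaders/depth.shader");
    wnd.loop();
    delete light;
    delete depth;
    return 0;
}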
I have a texture, along with its shader resource view, to which I render my scene's original image by using it as a render target.
Like millions before me, I then use it as an input to my next shader so I can blur, do HDR, all of that.
The simple but "new to me" problem is that when I go to call PSSetShaderResources and put the texture into the shader's texture/constant buffer I get this error:
"Resource being set to PS shader resource slot 0 is still bound on output"
...which makes sense, as it may well still be. My question is: how do I unbind the texture from being the render target of the initial pass?
I'm already doing this, which I presume sets it back to the original RenderTargetView that DXUT was using at the beginning, but it doesn't seem to free up the texture:
ID3D11RenderTargetView* renderTargetViewArray[1] = { DXUTGetD3D11RenderTargetView() };
pd3dImmediateContext->OMSetRenderTargets(1, renderTargetViewArray, nullptr);
Assuming your render target is in slot 0:
ID3D11RenderTargetView* nullRTV = nullptr;
d3dContext->OMSetRenderTargets(1, &nullRTV, nullptr);
You need to pass an array of ID3D11RenderTargetView* of length 1, the first of which needs to be nullptr.
When unbinding the texture from the pixel shader, this does NOT work:
ID3D11ShaderResourceView *const pSRV[1] = { NULL };
pd3dImmediateContext->PSSetShaderResources(0, 0, pSRV);
However, this DOES work:
ID3D11ShaderResourceView *const pSRV[1] = { NULL };
pd3dImmediateContext->PSSetShaderResources(0, 1, pSRV);
Put more simply, when setting the resource back to null, you still have to tell it you're passing in one pointer, albeit a null one. Passing a null alone with a zero count appears to do nothing.
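Putting the two together, a sketch of the per-frame ordering (sceneRTV, sceneSRV and dsv are illustrative names for the offscreen target and its views):

ID3D11ShaderResourceView* nullSRV[1] = { nullptr };
ID3D11RenderTargetView*   nullRTV[1] = { nullptr };

pd3dImmediateContext->PSSetShaderResources(0, 1, nullSRV);    // free the SRV slot
pd3dImmediateContext->OMSetRenderTargets(1, &sceneRTV, dsv);  // render scene to the texture
// ... draw the scene ...
pd3dImmediateContext->OMSetRenderTargets(1, nullRTV, nullptr); // unbind the texture as RTV
pd3dImmediateContext->PSSetShaderResources(0, 1, &sceneSRV);   // now legal to sample it
// ... draw the post-process pass ...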
I'm trying to use Uniform Buffer Objects to share my projection matrix across different shaders (e.g., a deferred pass for solid objects and a forward pass for transparent ones). I expect to add more data to the UBO later on, as the complexity grows. My problem is that the Red Book says:
To explicitly control a uniform block's binding, call glUniformBlockBinding() before calling glLinkProgram().
But the online documentation says:
When a program object is linked or re-linked, the uniform buffer object binding point assigned to each of its active uniform blocks is reset to zero.
What am I missing? Should I bind the uniform block before or after linking?
Thanks.
glUniformBlockBinding needs the program name and the index under which the block is found in that particular program.
void glUniformBlockBinding( GLuint program, GLuint uniformBlockIndex, GLuint uniformBlockBinding );
uniformBlockIndex can be obtained by calling glGetUniformBlockIndex on the program.
GLuint glGetUniformBlockIndex( GLuint program, const char *uniformBlockName );
From the API (http://www.opengl.org/wiki/GLAPI/glGetUniformBlockIndex):
program must be the name of a program object for which the command glLinkProgram must have been called in the past, although it is not required that glLinkProgram must have succeeded. The link could have failed because the number of active uniforms exceeded the limit.
So the correct order is:
void glLinkProgram(GLuint program);
GLuint glGetUniformBlockIndex( GLuint program, const char *uniformBlockName );
void glUniformBlockBinding( GLuint program, GLuint uniformBlockIndex, GLuint uniformBlockBinding );
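Concretely, a minimal sketch of the whole sequence (the block name "Matrices" and the program and ubo objects are illustrative):

glLinkProgram(program);

// Re-linking resets every block's binding to zero, so redo this after each link:
GLuint blockIndex = glGetUniformBlockIndex(program, "Matrices");
glUniformBlockBinding(program, blockIndex, 1);

// Attach the UBO itself to the same binding point so the block reads from it:
glBindBufferBase(GL_UNIFORM_BUFFER, 1, ubo);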