I am currently creating a texture class for a project I am working on, and I am trying to set things up well from the start to prevent future headaches.
Currently, the way I load a texture's information to the GPU is as follows:
void Texture::load_to_GPU(GLuint program)
{
    if(program == Rendering_Handler->shading_programs[1].programID)
        exit(EXIT_FAILURE);

    glUseProgram(program);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, textureID);

    GLint loc = glGetUniformLocation(program, "text");
    // glGetUniformLocation() returns -1 when the uniform is not found;
    // GL_INVALID_VALUE and GL_INVALID_OPERATION are glGetError() codes, not locations.
    if(loc == -1)
    {
        cerr << "Error returned when trying to find texture uniform."
             << "\nuniform: text"
             << "\nError num: " << loc
             << endl;
        return;
    }

    glUniform1i(loc, 0);
}
However, I would like to be able to determine the texture unit dynamically.
For example, rather than hard coding the uniform name "text", I would like to pass the string as an argument, and do something similar to glGetUniformLocation() but for texture units.
In other words I want to select the texture unit to which the texture is to be bound dynamically rather than hard coding it.
For this I need to find a texture unit that is not currently in use, ideally from smallest to largest texture unit.
What set of OpenGL functions could I use to achieve this behaviour?
EDIT:
An important piece of the behaviour I want, which I believe was not clear from the original post, is:
Once a texture unit is bound to a sampler uniform, I'd like to be able to get the texture unit bound to that uniform.
So if texture unit 5 is bound to the uniform "sampler2D texture_5",
I want a function that takes the uniform label and returns the texture unit bound to that label.
I assume you have all texture binding/unbinding wrapped.
If so, you can use the following approach to allocate and free texture units in O(1) time, using O(n) memory.
(I've not seen this approach anywhere else and don't know the name of this data structure. If anyone knows what's it called, I'd appreciate the information.)
#include <cassert>
#include <numeric> // std::iota
#include <vector>

constexpr int capacity = 64; // Total number of units.
int size = 0;                // Number of allocated units.

// Invariant: pool[0..size) holds the allocated units, pool[size..capacity)
// holds the free ones, and indices[u] is the position of unit u in pool.
std::vector<int> pool, indices;

void init()
{
    pool.resize(capacity);
    std::iota(pool.begin(), pool.end(), 0);
    indices.resize(capacity);
    std::iota(indices.begin(), indices.end(), 0);
}

int alloc()
{
    if (size >= capacity)
        return -1; // No more texture units.
    return pool[size++];
}

void free(int unit)
{
    assert(indices[unit] < size); // If this fails, you have a double free.
    size--;
    int last_unit = pool[size];
    std::swap(pool[indices[unit]], pool[size]);
    std::swap(indices[unit], indices[last_unit]);
}
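To tie this back to the question's EDIT, one way to remember which unit a sampler uniform got is to record the allocation in a map keyed by the uniform name. Below is a minimal sketch, assuming the alloc()/free() pool above; the names bind_sampler() and unit_of() are hypothetical helpers, not existing OpenGL calls:
#include <string>
#include <unordered_map>

std::unordered_map<std::string, int> unit_of_uniform;

// Bind 'textureID' for the sampler uniform 'name' in 'program', allocating
// the lowest free unit. Assumes 'program' is bound with glUseProgram().
int bind_sampler(GLuint program, const std::string& name, GLuint textureID)
{
    int unit = alloc();
    if (unit < 0)
        return -1; // Pool exhausted.
    glActiveTexture(GL_TEXTURE0 + unit);
    glBindTexture(GL_TEXTURE_2D, textureID);
    glUniform1i(glGetUniformLocation(program, name.c_str()), unit);
    unit_of_uniform[name] = unit;
    return unit;
}

// "glGetUniformLocation(), but for texture units": look up the unit
// previously bound for a given uniform label.
int unit_of(const std::string& name)
{
    auto it = unit_of_uniform.find(name);
    return it == unit_of_uniform.end() ? -1 : it->second;
}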
I am currently facing a problem while making a texture function in OpenGL/C++. To use a texture, you have to bind it by ID, and before that you need to set the active texture unit, as shown below:
void Texture::UseTexture()
{
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, textureID);
}
In order for the texture class to be more versatile, I wish to add an argument to my UseTexture() function so that you could slot in a constant such as GL_TEXTURE0. Is there a typename that would work, or is const enough?
The usual way this is done is taking an integer parameter (uint32_t for example) and adding it to GL_TEXTURE0:
void Texture::UseTexture(uint32_t unit)
{
    if(unit >= MaxTextureUnit) {
        // Handle invalid texture unit.
    }
    else {
        glActiveTexture(GL_TEXTURE0 + unit);
        glBindTexture(GL_TEXTURE_2D, textureID);
    }
}
This can be done because the documentation for glActiveTexture states that
texture must be one of GL_TEXTUREi, where 0 <= i < GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS
and that
It is always the case that GL_TEXTUREi = GL_TEXTURE0 + i.
MaxTextureUnit is the maximum number of texture units and can be queried with glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &MaxTextureUnit). It's more of a symbolic value here to show how it could work; feel free to implement error handling however you like.
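For reference, the one-time query might look like the following sketch (glGetIntegerv writes into a GLint):
GLint MaxTextureUnit = 0;
glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &MaxTextureUnit);
// The spec guarantees at least 48 units on GL 3.3+; modern GPUs expose more.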
I was originally using glDrawElementsInstancedBaseVertex to draw the scene meshes. All the meshes' vertex attributes are interleaved in a single buffer object. In total there are only 30 unique meshes, so I've been calling draw 30 times with instance counts, etc., but now I want to batch the draw calls into one using glMultiDrawElementsIndirect. Since I have no experience with this function, I've been reading articles here and there to understand the implementation, with little success. (For testing purposes all meshes are instanced only once.)
The command structure from the OpenGL reference page.
struct DrawElementsIndirectCommand
{
    GLuint vertexCount;
    GLuint instanceCount;
    GLuint firstVertex;
    GLuint baseVertex;
    GLuint baseInstance;
};
DrawElementsIndirectCommand commands[30];

// Populate commands.
for (size_t index { 0 }; index < 30; ++index)
{
    const Mesh* mesh{ m_meshes[index] };
    commands[index].vertexCount   = mesh->elementCount;
    commands[index].instanceCount = 1; // Just testing with 1 instance, ATM.
    commands[index].firstVertex   = mesh->elementOffset();
    commands[index].baseVertex    = mesh->verticeIndex();
    commands[index].baseInstance  = 0; // Shouldn't impact testing?
}
// Create and populate the GL_DRAW_INDIRECT_BUFFER buffer... bla bla
Then later down the line, after setup I do some drawing.
// Some prep before drawing like bind VAO, update buffers, etc.

// Draw?
if (RenderMode == MULTIDRAW)
{
    // Bind, Draw, Unbind.
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, m_indirectBuffer);
    glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT, nullptr, 30, 0);
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, 0);
}
else
{
    for (size_t index { 0 }; index < 30; ++index)
    {
        const Mesh* mesh { m_meshes[index] };
        glDrawElementsInstancedBaseVertex(
            GL_TRIANGLES,
            mesh->elementCount,
            GL_UNSIGNED_INT,
            reinterpret_cast<GLvoid*>(mesh->elementOffset()),
            1,
            mesh->verticeIndex());
    }
}
The glDrawElements... path still works fine as before when I switch to it. But glMultiDraw... gives indistinguishable meshes; when I set firstVertex to 0 for all commands, the meshes look almost correct (at least distinguishable), but they are still largely wrong in places. I feel I'm missing something important about indirect multi-drawing.
//Indirect data
commands[index].firstVertex = mesh->elementOffset();
//Direct draw call
reinterpret_cast<GLvoid*>(mesh->elementOffset()),
That's not how it works for indirect rendering. firstVertex is not a byte offset; it is measured in indices (the reference struct calls this field firstIndex). So you have to divide the byte offset by the size of an index to compute firstVertex:
commands[index].firstVertex = mesh->elementOffset() / sizeof(GLuint);
The result of that should be a whole number. If it wasn't, then you were doing unaligned reads, which probably hurt your performance. So fix that ;)
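Applied to the population loop from the question (assuming the same Mesh accessors), the corrected commands would be filled like this:
for (size_t index { 0 }; index < 30; ++index)
{
    const Mesh* mesh{ m_meshes[index] };
    commands[index].vertexCount   = mesh->elementCount;
    commands[index].instanceCount = 1;
    // Convert the byte offset into the index buffer to a count of indices.
    commands[index].firstVertex   = mesh->elementOffset() / sizeof(GLuint);
    commands[index].baseVertex    = mesh->verticeIndex();
    commands[index].baseInstance  = 0;
}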
So, I need a way to render multiple objects (not instances) using one draw call. I actually know how to do this: just place the data into a single VBO/IBO and render using glDrawElements.
The question is: what is an efficient way to update uniform data without setting it up for every single object using glUniform...?
How can I set up one buffer containing all the uniform data of dozens of objects, including MVP matrices, bind it, and perform the render using a single draw call?
I tried to use UBOs, but it's not what I need at all.
For rendering instances we just place the uniform data, including matrices, in another VBO and set up an attribute divisor using glVertexAttribDivisor, but that only works for instances.
Is there a way to do what I want in OpenGL? If not, what can I do to overcome the overhead of setting uniform data for dozens of objects?
For example like this:
{
    // Setting up the VBO.
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, data_size, nullptr, GL_STATIC_DRAW);

    // Fill the buffer, one object at a time.
    for(int i = 0; i < objects_num; i++)
        glBufferSubData(GL_ARRAY_BUFFER, offset, size, &(objects[i]));

    // The same for the IBO.
    .........

    // Then set up some buffer that will store all the uniforms, for every object.
    .........

    glDrawElements(...);
}
Thanks in advance for helping.
If you're ok with requiring OpenGL 4.3 or higher, I believe you can render this with a single draw call using glMultiDrawElementsIndirect(). This allows you to essentially make multiple draw calls with a single API call. Each sub-call is defined by values in a struct of the form:
typedef struct {
    GLuint count;
    GLuint instanceCount;
    GLuint firstIndex;
    GLuint baseVertex;
    GLuint baseInstance;
} DrawElementsIndirectCommand;
Since you do not want to draw multiple instances of the same vertices, you use 1 for the instanceCount in each draw call. The key idea is that you can still use instancing by specifying a different baseInstance value for each one. Note that in OpenGL baseInstance does not affect gl_InstanceID (which restarts at 0 for each sub-draw), but it does offset where instanced vertex attributes are fetched from. So you can use instanced attributes for the values (matrices, etc.) that you want to vary per object.
So if you currently have a rendering loop:
for (int k = 0; k < objectCount; ++k) {
    // Set uniforms for object k.
    glDrawElements(GL_TRIANGLES, object[k].indexCount, GL_UNSIGNED_INT,
                   (const void*)(object[k].indexOffset * sizeof(GLuint)));
}
you would instead fill an array of the struct defined above with the arguments:
std::vector<DrawElementsIndirectCommand> cmds(objectCount);
for (int k = 0; k < objectCount; ++k) {
    cmds[k].count         = object[k].indexCount;
    cmds[k].instanceCount = 1;
    cmds[k].firstIndex    = object[k].indexOffset;
    cmds[k].baseVertex    = 0;
    cmds[k].baseInstance  = k;
}

// Rest of setup.

glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT, nullptr, objectCount, 0);
I didn't provide code for the full setup above. The key steps include:
Drop the cmds array into a buffer, and bind it as GL_DRAW_INDIRECT_BUFFER.
Store the per-object values in a VBO. Set up the corresponding vertex attributes, which includes marking them as instanced with glVertexAttribDivisor(attribIndex, 1); see the sketch after this list.
Set up the per-vertex attributes as usual.
Set up the index buffer as usual.
For this to work, the indices for all the objects will have to be in the same index buffer, and the values for each attribute will have to be in the same VBO across all objects.
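As an illustration of the per-object attribute setup (the buffer name matrixVBO and the attribute locations are assumptions, not part of the original answer): a mat4 occupies four consecutive attribute slots, and each slot needs its own divisor:
// Assumes 'matrixVBO' holds one tightly packed mat4 per object and the
// vertex shader declares 'layout(location = 4) in mat4 modelMatrix;'.
glBindBuffer(GL_ARRAY_BUFFER, matrixVBO);
for (int col = 0; col < 4; ++col) {
    GLuint loc = 4 + col; // One vec4 column per attribute slot.
    glEnableVertexAttribArray(loc);
    glVertexAttribPointer(loc, 4, GL_FLOAT, GL_FALSE, sizeof(float) * 16,
                          (const void*)(sizeof(float) * 4 * col));
    glVertexAttribDivisor(loc, 1); // Advance once per instance, not per vertex.
}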
glBindTextures is a nice function, not only because it binds multiple textures in one call, but also because it knows to bind each texture to "the target [...] that was specified when the object was created". This way I can specify the target only at texture creation and then forget about it, which helps in generic code.
Unfortunately, I must know the target when calling functions like glGetTexParameter. Is there a way to retrieve the texture target from the texture id? Widely supported extensions are also ok.
As far as I know, there isn't.
A possible workaround could be querying the current binding for every texture target used by your application and compare the current texture against the id you have.
GLuint currentTex;
glGetIntegerv(GL_TEXTURE_BINDING_1D, (GLint*)&currentTex);
if (currentTex == testTex)
{
    target = GL_TEXTURE_1D;
    return;
}

glGetIntegerv(GL_TEXTURE_BINDING_2D, (GLint*)&currentTex);
if (currentTex == testTex)
{
    target = GL_TEXTURE_2D;
    return;
}

// and so on ...
Of course that you must have a texture bound for this to work. If binding with glBindTexture then you need the target anyway.
But this solution is so clumsy and non-scalable that it is generally much easier to just keep an extra int together with the id for the texture target.
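For example, a minimal wrapper along those lines might look like this (the struct name is just illustrative):
// Carry the target alongside the texture name from creation onwards.
struct TextureHandle {
    GLuint id;     // From glGenTextures()/glCreateTextures().
    GLenum target; // e.g. GL_TEXTURE_2D, fixed at creation time.
};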
Since OpenGL 4.5 this can be done by:
GLenum target;
glGetTextureParameteriv(textureId, GL_TEXTURE_TARGET, (GLint*)&target);
It's also true that since the introduction of the direct state access API (DSA) in OpenGL 4.5, knowing the target of a texture has become less important.
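For example, with DSA you can set or query texture state by object name alone, never mentioning the target:
// GL 4.5 DSA: the texture object is addressed directly by its name.
glTextureParameteri(textureId, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

GLint minFilter = 0;
glGetTextureParameteriv(textureId, GL_TEXTURE_MIN_FILTER, &minFilter);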
There really isn't a pretty way to do this that I could find, even after looking at the state tables in the specs. Two possibilities that are both far from attractive:
Try binding it to various targets, and see if you get a GL_INVALID_OPERATION error:
glBindTexture(GL_TEXTURE_1D, texId);
if (glGetError() != GL_INVALID_OPERATION) {
    return GL_TEXTURE_1D;
}

glBindTexture(GL_TEXTURE_2D, texId);
if (glGetError() != GL_INVALID_OPERATION) {
    return GL_TEXTURE_2D;
}
...
This is similar to what #glamplert suggested. Bind the texture to a given texture unit with glBindTextures(), and then query the textures bound to the various targets for that unit:
glBindTextures(texUnit, 1, &texId);
glActiveTexture(GL_TEXTURE0 + texUnit);

GLuint boundId = 0;
glGetIntegerv(GL_TEXTURE_BINDING_1D, (GLint*)&boundId);
if (boundId == texId) {
    return GL_TEXTURE_1D;
}

glGetIntegerv(GL_TEXTURE_BINDING_2D, (GLint*)&boundId);
if (boundId == texId) {
    return GL_TEXTURE_2D;
}
But I think you would be much happier if you simply store away which target is used for each texture when you first create it.
I have a fairly simple DirectX 11 framework setup that I want to use for various 2D simulations. I am currently trying to implement the 2D Wave Equation on the GPU. It requires I keep the grid state of the simulation at 2 previous timesteps in order to compute the new one.
How I went about it was this - I have a class called FrameBuffer, which has the following public methods:
bool Initialize(D3DGraphicsObject* graphicsObject, int width, int height);
void BeginRender(float clearRed, float clearGreen, float clearBlue, float clearAlpha) const;
void EndRender() const;
// Return a pointer to the underlying texture resource
const ID3D11ShaderResourceView* GetTextureResource() const;
In my main draw loop I have an array of 3 of these buffers. Every loop I use the textures from the previous 2 buffers as inputs to the next frame buffer and I also draw any user input to change the simulation state. I then draw the result.
int nextStep = simStep + 1;
if (nextStep > 2)
    nextStep = 0;

mFrameArray[nextStep]->BeginRender(0.0f, 0.0f, 0.0f, 1.0f);
{
    mGraphicsObj->SetZBufferState(false);
    mQuad->GetRenderer()->RenderBuffers(d3dGraphicsObj->GetDeviceContext());

    ID3D11ShaderResourceView* texArray[2] = { mFrameArray[simStep]->GetTextureResource(),
                                              mFrameArray[prevStep]->GetTextureResource() };
    result = mWaveShader->Render(d3dGraphicsObj, mQuad->GetRenderer()->GetIndexCount(), texArray);
    if (!result)
        return false;

    // Perform any extra input.
    I_InputSystem *inputSystem = ServiceProvider::Instance().GetInputSystem();
    if (inputSystem->IsMouseLeftDown()) {
        int x, y;
        inputSystem->GetMousePos(x, y);
        int width, height;
        mGraphicsObj->GetScreenDimensions(width, height);
        float xPos = MapValue((float)x, 0.0f, (float)width, -1.0f, 1.0f);
        float yPos = MapValue((float)y, 0.0f, (float)height, -1.0f, 1.0f);
        mColorQuad->mTransform.position = Vector3f(xPos, -yPos, 0);
        result = mColorQuad->Render(&viewMatrix, &orthoMatrix);
        if (!result)
            return false;
    }
    mGraphicsObj->SetZBufferState(true);
}
mFrameArray[nextStep]->EndRender();

prevStep = simStep;
simStep = nextStep;

ID3D11ShaderResourceView* currTexture = mFrameArray[nextStep]->GetTextureResource();

// Render texture to screen.
mGraphicsObj->SetZBufferState(false);
mQuad->SetTexture(currTexture);
result = mQuad->Render(&viewMatrix, &orthoMatrix);
if (!result)
    return false;
mGraphicsObj->SetZBufferState(true);
The problem is nothing is happening. Whatever I draw appears on the screen (I draw using a small quad), but no part of the simulation is actually run. I can provide the shader code if required, but I am certain it works, since I've implemented this before on the CPU using the same algorithm. I'm just not certain how D3D render targets work, and whether I'm simply drawing incorrectly every frame.
EDIT 1:
Here is the code for the begin and end render functions of the frame buffers:
void D3DFrameBuffer::BeginRender(float clearRed, float clearGreen, float clearBlue, float clearAlpha) const
{
    ID3D11DeviceContext *context = pD3dGraphicsObject->GetDeviceContext();

    context->OMSetRenderTargets(1, &(mRenderTargetView._Myptr), pD3dGraphicsObject->GetDepthStencilView());

    // Set up the color to clear the buffer to.
    float color[4];
    color[0] = clearRed;
    color[1] = clearGreen;
    color[2] = clearBlue;
    color[3] = clearAlpha;

    // Clear the back buffer.
    context->ClearRenderTargetView(mRenderTargetView.get(), color);

    // Clear the depth buffer.
    context->ClearDepthStencilView(pD3dGraphicsObject->GetDepthStencilView(), D3D11_CLEAR_DEPTH, 1.0f, 0);
}

void D3DFrameBuffer::EndRender() const
{
    pD3dGraphicsObject->SetBackBufferRenderTarget();
}
EDIT 2: OK, after I set up the DirectX debug layer I saw that I was using an SRV as a render target while it was still bound to the Pixel Shader stage in one of the shaders. I fixed that by setting the shader resources to NULL after I render with the wave shader, but the problem still persists - nothing actually gets run or updated. I took the render target code from here and slightly modified it, if it's any help: http://rastertek.com/dx11tut22.html
Okay, as I understand it, you need multipass rendering to texture.
Basically you do it like I've described here: link
You create the textures with both the D3D11_BIND_SHADER_RESOURCE and D3D11_BIND_RENDER_TARGET bind flags.
You create render target views and shader resource views from those textures.
You set the first texture as input (*SetShaderResources()) and the second texture as output (OMSetRenderTargets()).
You Draw().
Then you bind the second texture as input and the third as output.
Draw().
etc.
Additional advice:
If your target GPU is capable of writing to UAVs from non-compute shaders, you can use that. It is much simpler and less error prone.
If your target GPU is suitable, consider using a compute shader. It is a pleasure.
Don't forget to enable the DirectX debug layer (a sketch follows after this list). Sometimes we make obvious errors, and the debug output can point to them.
Use a graphics debugger to review your textures after each draw call.
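For reference, the debug layer is enabled at device creation. A minimal sketch, assuming you create the device yourself somewhere in the framework:
ID3D11Device* device = nullptr;
ID3D11DeviceContext* context = nullptr;

// Pass D3D11_CREATE_DEVICE_DEBUG so the runtime validates calls and
// prints messages to the debugger output window.
UINT flags = 0;
#if defined(_DEBUG)
flags |= D3D11_CREATE_DEVICE_DEBUG;
#endif
D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, flags,
                  nullptr, 0, D3D11_SDK_VERSION,
                  &device, nullptr, &context);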
Edit 1:
From what I can see, you call BeginRender and OMSetRenderTargets only once, so all rendering goes into mRenderTargetView. But what you need is to interleave:
SetSRV(texture1);
SetRT(texture2);
Draw();
SetSRV(texture2);
SetRT(texture3);
Draw();
SetSRV(texture3);
SetRT(backBuffer);
Draw();
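In concrete D3D11 calls, one step of that ping-pong might look like the following sketch (the slot numbers and the names rtv2, srv1, and vertexCount are placeholders, not from your code). Note the explicit SRV unbind, which is exactly the read/write hazard you hit in Edit 2:
ID3D11ShaderResourceView* nullSRV = nullptr;

// A texture cannot be an input and a render target at the same time,
// so unbind the SRV before binding its texture as the new target.
context->PSSetShaderResources(0, 1, &nullSRV);
context->OMSetRenderTargets(1, &rtv2, nullptr);
context->PSSetShaderResources(0, 1, &srv1); // Previous pass's result as input.
context->Draw(vertexCount, 0);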
Also, we don't know what mRenderTargetView is yet.
So, before
result = mColorQuad->Render(&viewMatrix,&orthoMatrix);
there must be an OMSetRenderTargets call somewhere.
It's probably better to review your Begin()/End() design to make the resource binding more clearly visible.
Happy coding! =)