Get texture target by texture id - OpenGL

glBindTextures is a nice function, not only because it binds multiple textures in one call, but also because it knows to bind each texture to "the target [...] that was specified when the object was created". This way I can specify the target only at texture creation and then forget about it, which helps in generic code.
Unfortunately, I must know the target when calling functions like glGetTexParameter. Is there a way to retrieve the texture target from the texture id? Widely supported extensions are also OK.

As far as I know, there isn't.
A possible workaround is to query the current binding for every texture target your application uses and compare the bound texture against the id you have:
GLint currentTex = 0; // glGetIntegerv takes a GLint*, not a GLuint*
glGetIntegerv(GL_TEXTURE_BINDING_1D, &currentTex);
if ((GLuint)currentTex == testTex)
{
    target = GL_TEXTURE_1D;
    return;
}
glGetIntegerv(GL_TEXTURE_BINDING_2D, &currentTex);
if ((GLuint)currentTex == testTex)
{
    target = GL_TEXTURE_2D;
    return;
}
// and so on ...
Of course, the texture must be bound for this to work, and if you bind it yourself with glBindTexture you need the target anyway.
But this solution is so clumsy and non-scalable that it is generally much easier to just keep an extra int holding the texture target alongside the texture id.
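For instance, a tiny wrapper like the following is enough (a sketch, not a library type; the name TextureHandle is made up):

// Minimal bookkeeping: remember the target at creation time.
struct TextureHandle {
    GLuint id;      // name returned by glGenTextures / glCreateTextures
    GLenum target;  // e.g. GL_TEXTURE_2D, set once in the creation path
};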

Since OpenGL 4.5 this can be done by:
GLenum target;
glGetTextureParameteriv(textureId, GL_TEXTURE_TARGET, (GLint*)&target);
It's also true that since the introduction of the direct-state-access API (DSA) in OpenGL 4.5, knowing a texture's target has become much less important, because most DSA functions take the texture id directly.
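For instance, parameter queries can then name the texture directly, with no target in sight (a minimal sketch using the GL 4.5 DSA entry points):

// No target needed: DSA getters and setters take the texture id itself.
GLint minFilter = 0;
glGetTextureParameteriv(textureId, GL_TEXTURE_MIN_FILTER, &minFilter);
glTextureParameteri(textureId, GL_TEXTURE_MAG_FILTER, GL_LINEAR);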

There really isn't a pretty way to do this that I could find, even after looking at the state tables in the specs. Two possibilities that are both far from attractive:
Try binding it to various targets, and see if you get a GL_INVALID_OPERATION error:
// make sure the error queue is empty before probing
while (glGetError() != GL_NO_ERROR) {}
glBindTexture(GL_TEXTURE_1D, texId);
if (glGetError() != GL_INVALID_OPERATION) {
    return GL_TEXTURE_1D;
}
glBindTexture(GL_TEXTURE_2D, texId);
if (glGetError() != GL_INVALID_OPERATION) {
    return GL_TEXTURE_2D;
}
...
This is similar to what @glamplert suggested. Bind the texture to a given texture unit with glBindTextures(), and then query the textures bound to the various targets for that unit:
glBindTextures(texUnit, 1, &texId);
glActiveTexture(GL_TEXTURE0 + texUnit);
GLint boundId = 0; // glGetIntegerv takes a GLint*
glGetIntegerv(GL_TEXTURE_BINDING_1D, &boundId);
if ((GLuint)boundId == texId) {
    return GL_TEXTURE_1D;
}
glGetIntegerv(GL_TEXTURE_BINDING_2D, &boundId);
if ((GLuint)boundId == texId) {
    return GL_TEXTURE_2D;
}
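Both probes generalize to a small table walk; a sketch, assuming only the listed targets are in play and texId was bound with glBindTextures() as above:

// Hypothetical helper: probe common targets via their binding queries.
static const struct { GLenum target; GLenum binding; } kTargets[] = {
    { GL_TEXTURE_1D,       GL_TEXTURE_BINDING_1D },
    { GL_TEXTURE_2D,       GL_TEXTURE_BINDING_2D },
    { GL_TEXTURE_3D,       GL_TEXTURE_BINDING_3D },
    { GL_TEXTURE_CUBE_MAP, GL_TEXTURE_BINDING_CUBE_MAP },
    // ... extend with array/rectangle/buffer targets as needed
};

GLenum FindTarget(GLuint texId)
{
    for (const auto& t : kTargets) {
        GLint boundId = 0;
        glGetIntegerv(t.binding, &boundId);
        if ((GLuint)boundId == texId)
            return t.target;
    }
    return GL_NONE; // not bound to any probed target
}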
But I think you would be much happier if you simply store away which target is used for each texture when you first create it.

Related

What would be a suitable typename for an OpenGL constant?

Currently I am facing a problem when making a texture function in OpenGL C++. To use a texture, you have to bind it by its id, and before that you need to set the active texture unit, as shown below:
void Texture::UseTexture()
{
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, textureID);
}
In order for the texture class to be more versatile, I wish to add an argument to my UseTexture() function where you could slot in a constant such as GL_TEXTURE0. Are there any typenames that would work, or is const enough?
The usual way this is done is taking an integer parameter (uint32_t for example) and adding it to GL_TEXTURE0:
void Texture::UseTexture(uint32_t unit)
{
    if (unit >= MaxTextureUnit) {
        // Handle invalid texture unit
    }
    else {
        glActiveTexture(GL_TEXTURE0 + unit);
        glBindTexture(GL_TEXTURE_2D, textureID);
    }
}
This can be done because the documentation for glActiveTexture states that
texture must be one of GL_TEXTUREi, where 0 <= i < GL_MAX_TEXTURE_UNITS
and that
It is always the case that GL_TEXTUREi = GL_TEXTURE0+i .
MaxTextureUnit is the maximum number of texture units and can be queried with glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &MaxTextureUnit). It is used here as a symbolic value to show how this could work; feel free to implement error handling however you like.
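For completeness, a minimal usage sketch (the program id and the sampler uniform name "tex" are assumptions):

// Query the limit once at startup:
GLint MaxTextureUnit = 0;
glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &MaxTextureUnit);

// Bind a texture to unit 2 and point a sampler uniform at it:
texture.UseTexture(2);
glUniform1i(glGetUniformLocation(program, "tex"), 2); // unit index, not texture id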

How to dynamically keep track of texture units in OpenGL?

I am currently creating a texture class for a project I am working on, and I am trying to do things well from the start to prevent future headaches.
Currently, the way I load a texture's data to the GPU is as follows:
void Texture::load_to_GPU(GLuint program)
{
    if (program == Rendering_Handler->shading_programs[1].programID)
        exit(EXIT_FAILURE);

    glUseProgram(program);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, textureID);

    GLint loc = glGetUniformLocation(program, "text");
    if (loc == -1) // glGetUniformLocation returns -1 when the uniform is not found
    {
        cerr << "Error returned when trying to find texture uniform."
             << "\nuniform: text"
             << "\nError num: " << loc
             << endl;
        return;
    }
    glUniform1i(loc, 0);
}
I however would like to be able to determine the texture unit dynamically.
For example, rather than hard coding the uniform name "text", I would like to pass the string as an argument, and do something similar to glGetUniformLocation() but for texture units.
In other words I want to select the texture unit to which the texture is to be bound dynamically rather than hard coding it.
For this I need to find a texture unit that is not currently in use, ideally from smallest to largest texture unit.
What set of OpenGL functions could I use to achieve this behaviour?
EDIT:
An important tool I need to achieve the behaviour I want, which I believe was not clear from the original post, is:
Once a texture unit is bound to a sampler uniform, I'd like to be able to get the texture unit bound to the uniform.
So if texture unit 5 is bound to the uniform "sampler2D texture_5",
I want a function that takes the uniform label and returns the texture unit bound to that label.
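(For illustration: the integer value of a sampler uniform is exactly this unit index, so the readback half of the request could be sketched with glGetUniformiv; program is assumed, and the uniform name is taken from the example above.)

// Sketch: read back which texture unit a sampler uniform currently holds.
GLint unit = -1;
GLint loc = glGetUniformLocation(program, "texture_5");
if (loc != -1)
    glGetUniformiv(program, loc, &unit); // sampler uniforms store the unit index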
I assume you have all texture binding/unbinding wrapped.
If so, you can use the following approach to allocate and free texture units in O(1) time, using O(n) memory.
(I've not seen this approach anywhere else and don't know the name of this data structure. If anyone knows what it's called, I'd appreciate the information.)
#include <cassert>
#include <numeric>
#include <utility>
#include <vector>

constexpr int capacity = 64; // Total number of texture units managed.
int size = 0;                // Number of currently allocated units.

// pool[0..size) holds allocated units; pool[size..capacity) holds free ones.
// indices[u] is the position of unit u within pool.
std::vector<int> pool, indices;

void init()
{
    pool.resize(capacity);
    std::iota(pool.begin(), pool.end(), 0);
    indices.resize(capacity);
    std::iota(indices.begin(), indices.end(), 0);
}

int alloc()
{
    if (size >= capacity)
        return -1; // No more texture units.
    return pool[size++];
}

void free(int unit)
{
    assert(indices[unit] < size); // If this fails, you have a double free.
    size--;
    int last_unit = pool[size];
    std::swap(pool[indices[unit]], pool[size]);
    std::swap(indices[unit], indices[last_unit]);
}
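A minimal usage sketch (textureID and samplerLoc are assumed to exist in your texture/shader wrappers):

init(); // once at startup

int unit = alloc();                // grab a free unit
if (unit >= 0) {
    glActiveTexture(GL_TEXTURE0 + unit);
    glBindTexture(GL_TEXTURE_2D, textureID);
    glUniform1i(samplerLoc, unit); // point the sampler at that unit
    // ... draw ...
    free(unit);                    // return the unit to the pool
}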

OpenGL - Is vertex attribute state bound to specific VBOs?

As I understand VAOs/VBOs currently, a VAO retains all the attribute information that has been set up since it was bound, eg. the offset, stride, number of components, etc. of a given vertex attribute within a VBO.
What I seem to be unclear on is how VAOs and VBOs work together. A lot of the examples I have seen specify the vertex attributes with respect to the currently bound VBO, and when the VAO is bound the data in the VBO become accessible. One way of using VAOs like this would be to have one per object (where each object uses its own VBO), but I've read that this performs poorly because of switching between many VAOs unnecessarily. I would also rather avoid storing all my object data in one monolithic VBO, because I need to add and remove objects within my scene at any time; for a 3D editor, I feel the application is much better suited to having each geometry object own its own buffer than to one large, preallocated VBO. (Is this a correct assumption?)
My question therefore is whether one VAO can store vertex attribute configurations independently of the VBOs? Would I be able to configure a VAO to expect data in a certain format (eg. position, normal, UV) and then "swap in" different VBOs as I draw the different geometry objects, or is the format information essentially bound only to the VBO itself? If the latter, is it worth me using VAOs at all?
ARB_vertex_attrib_binding allows you to separate the VAO's attribute format from its buffer bindings.
https://www.opengl.org/wiki/Vertex_Specification#Separate_attribute_format
Internally, when you configure your VAO the classic way, the bound vertex buffer is automatically associated with the attribute index. With ARB_vertex_attrib_binding, you get new GL functions to define attribute formats independently of the bound buffer, which can then be switched with the BindVertexBuffer functions.
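In raw GL calls the split looks like this (a minimal sketch; vboA and vboB are hypothetical position-only buffers):

// Define the format once, on a bound VAO:
glEnableVertexAttribArray(0);
glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, 0); // attrib 0 = vec3 position
glVertexAttribBinding(0, 0);                       // attrib 0 reads binding point 0

// ...then swap buffers per object without touching the format:
glBindVertexBuffer(0, vboA, 0, 3 * sizeof(float));
// draw object A ...
glBindVertexBuffer(0, vboB, 0, 3 * sizeof(float));
// draw object B ...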
Here is some code in C# with OpenTK (full source: https://github.com/jpbruyere/GGL/tree/ottd/Tetra ):
The solution here is to build one VAO with all your meshes concatenated, keeping for each of them only:
BaseVertex = the vertex offset in the VAO
IndicesOffset = the offset in the element buffer (EBO)
IndicesCount = the total index count of the model
protected void CreateVAOs()
{
    // normal VAO binding
    vaoHandle = GL.GenVertexArray();
    GL.BindVertexArray(vaoHandle);

    GL.EnableVertexAttribArray(0);
    GL.BindBuffer(BufferTarget.ArrayBuffer, positionVboHandle);
    GL.VertexAttribPointer(0, 3, VertexAttribPointerType.Float, true, Vector3.SizeInBytes, 0);
    // ... other attrib bindings come here

    // ARB_vertex_attrib_binding used for fast instance buffer switching;
    // note that I use 4 attrib indices to bind a matrix
    GL.VertexBindingDivisor(instanceBufferIndex, 1);
    for (int i = 0; i < 4; i++) {
        GL.EnableVertexAttribArray(instanceBufferIndex + i);
        GL.VertexAttribBinding(instanceBufferIndex + i, instanceBufferIndex);
        GL.VertexAttribFormat(instanceBufferIndex + i, 4, VertexAttribType.Float, false, Vector4.SizeInBytes * i);
    }

    if (indices != null)
        GL.BindBuffer(BufferTarget.ElementArrayBuffer, eboHandle);

    GL.BindVertexArray(0);
}
Then I define instances of a mesh with just a matrix array for each; that's a normal buffer creation, but not statically bound to the VAO:
instancesVboId = GL.GenBuffer ();
GL.BindBuffer (BufferTarget.ArrayBuffer, instancesVboId);
GL.BufferData<Matrix4> (BufferTarget.ArrayBuffer,
new IntPtr (modelMats.Length * Vector4.SizeInBytes * 4),
modelMats, BufferUsageHint.DynamicDraw);
GL.BindBuffer (BufferTarget.ArrayBuffer, 0);
To render such a VAO, I loop over my mesh list:
public void Bind()
{
    GL.BindVertexArray(vaoHandle);
}

public void Render(PrimitiveType _primitiveType)
{
    foreach (VAOItem item in Meshes) {
        GL.ActiveTexture(TextureUnit.Texture1);
        GL.BindTexture(TextureTarget.Texture2D, item.NormalMapTexture);
        GL.ActiveTexture(TextureUnit.Texture0);
        GL.BindTexture(TextureTarget.Texture2D, item.DiffuseTexture);

        // Here I bind the instance buffer with my matrices;
        // that's a fast switch without changing the VAO config.
        GL.BindVertexBuffer(instanceBufferIndex, item.instancesVboId, IntPtr.Zero, Vector4.SizeInBytes * 4);

        // Here I draw instanced with base vertex.
        GL.DrawElementsInstancedBaseVertex(_primitiveType, item.IndicesCount,
            DrawElementsType.UnsignedShort, new IntPtr(item.IndicesOffset * sizeof(ushort)),
            item.modelMats.Length, item.BaseVertex);
    }
}
The final VAO is bound only once.

QGLBuffer::map returns NULL?

I'm trying to use QGLBuffer to display an image.
The sequence is something like:
initializeGL() {
    glbuffer = QGLBuffer(QGLBuffer::PixelUnpackBuffer);
    glbuffer.create();
    glbuffer.bind();
    glbuffer.allocate(image_width * image_height * 4); // RGBA
    glbuffer.release();
}
// Attempting to write an image directly to graphics memory.
// map() should map the texture into the address space and give me an address
// to write directly to, but it always returns NULL:
unsigned char* dest = (unsigned char*)glbuffer.map(QGLBuffer::WriteOnly); // FAILS
MyGetImageFunction(dest);
glbuffer.unmap();
paint() {
    glbuffer.bind();
    glBegin(GL_QUADS);
    glTexCoord2i(0,0); glVertex2i(0, height());
    glTexCoord2i(0,1); glVertex2i(0, 0);
    glTexCoord2i(1,1); glVertex2i(width(), 0);
    glTexCoord2i(1,0); glVertex2i(width(), height());
    glEnd();
    glbuffer.release();
}
There aren't any examples of using QGLBuffer in this way; it's pretty new.
Edit --- for searchers, here is the working solution -------
// Where glbuffer is defined as
glbuffer = QGLBuffer(QGLBuffer::PixelUnpackBuffer);

// Sequence to get a pointer into a PBO, write data to it, and copy it to a texture
glbuffer.bind(); // bind before doing anything
unsigned char *dest = (unsigned char*)glbuffer.map(QGLBuffer::WriteOnly);
MyGetImageFunction(dest);
glbuffer.unmap(); // need to unmap before the rest of OpenGL can access the PBO
glBindTexture(GL_TEXTURE_2D, texture);
// Note 'NULL' because the data is now onboard the card
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, image_width, image_height, glFormatExt, glType, NULL);
glbuffer.release(); // but don't release until the copy is finished

// PaintGL function
glBindTexture(GL_TEXTURE_2D, texture);
glBegin(GL_QUADS);
glTexCoord2i(0,0); glVertex2i(0, height());
glTexCoord2i(0,1); glVertex2i(0, 0);
glTexCoord2i(1,1); glVertex2i(width(), 0);
glTexCoord2i(1,0); glVertex2i(width(), height());
glEnd();
You should bind the buffer before mapping it!
In the documentation for QGLBuffer::map:
It is assumed that create() has been called on this buffer and that it has been bound to the current context.
In addition to VJovic's comments, I think you are missing a few points about PBOs:
A pixel unpack buffer does not give you a pointer to the graphics texture. It is a separate piece of memory allocated on the graphics card to which you can write to directly from the CPU.
The buffer can be copied into a texture by a glTexSubImage2D(....., 0) call, with the texture being bound as well, which you do not do. (0 is the offset into the pixel buffer). The copy is needed partly because textures have a different layout than linear pixel buffers.
See this page for a good explanation of PBO usages (I used it a few weeks ago to do async texture upload).
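Spelled out in raw GL, the upload path those points describe looks roughly like this (pbo, texture, width, height and the pixel format are assumptions):

glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBindTexture(GL_TEXTURE_2D, texture);
// The last argument is an offset into the bound PBO, not a CPU pointer:
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_BGRA, GL_UNSIGNED_BYTE, (void*)0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);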
create will return false if the GL implementation does not support buffers, or there is no current QGLContext.
bind returns false if binding was not possible, usually because type() is not supported on this GL implementation.
You are not checking whether these two calls succeeded.
I got the same thing: map returned NULL. When I used the following order, it was solved:
bool success = mPixelBuffer->create();
mPixelBuffer->setUsagePattern(QGLBuffer::DynamicDraw);
success = mPixelBuffer->bind();
mPixelBuffer->allocate(sizeof(imageData));
void* ptr = mPixelBuffer->map(QGLBuffer::ReadOnly);

GLSL change uniform texture for each object

I'm currently trying to draw simple meshes using different textures (using C# and OpenTK). I read a lot about TextureUnit and bindings, and here is my current implementation (not working as expected):
private void ApplyOpaquePass()
{
    GL.UseProgram(this.shaderProgram);
    GL.CullFace(CullFaceMode.Back);
    while (this.opaqueNodes.Count > 0)
        Draw(this.opaqueNodes.Pop());
    GL.UseProgram(0);
}
And my draw method :
private void Draw(Assets.Model.Geoset geoset)
{
    GL.ActiveTexture(TextureUnit.Texture1);
    GL.BindTexture(TextureTarget.Texture2D, geoset.TextureId /*buffer id returned by GL.GenTextures*/ );
    GL.Uniform1(GL.GetUniformLocation(this.shaderProgram, "Texture1"), 1 /*see note below*/ );
    //Note: if I'm correct, it should be 1 when using TextureUnit.Texture1
    // (2 for Texture2...). Note that this doesn't seem to work, since no
    // texture at all is sent to the shader; however, a texture is shown
    // when specifying any other number (0, 2, 3...)

    // Draw vertices & indices buffers...
}
And my shader code (which shouldn't be the problem, since UV mapping is OK):
uniform sampler2D Texture1;

void main(void)
{
    gl_FragColor = texture2D(Texture1, gl_TexCoord[0].st);
}
What's the problem:
Since geoset.TextureId can vary from one geoset to another, I'm expecting different textures to be sent to the shader.
Instead, the same texture is always applied to all objects (geosets).
Ideas:
Using a different TextureUnit for each texture works well, but what happens if we have 2000 different textures? If my understanding is right, we must use multiple TextureUnits only if we want to use multiple textures at the same time in the shader.
I first thought that uniforms couldn't be changed once defined, but a test with a boolean uniform told me that it was actually possible:
private void Draw(Assets.Model.Geoset geoset)
{
    GL.ActiveTexture(TextureUnit.Texture1);
    GL.BindTexture(TextureTarget.Texture2D, geoset.TextureId);
    GL.Uniform1(GL.GetUniformLocation(this.shaderProgram, "Texture1"), 1);
    //added line...
    GL.Uniform1(GL.GetUniformLocation(this.shaderProgram, "UseBaseColor"), (geoset.Material.FilterMode == Assets.Model.Material.FilterType.Blend) ? 1 : 0);
    // Draw vertices & indices buffers...
}
Shader code:
uniform sampler2D Texture1;
uniform bool UseBaseColor;

void main(void)
{
    gl_FragColor = texture2D(Texture1, gl_TexCoord[0].st);
    if (UseBaseColor)
        gl_FragColor = mix(vec4(0,1,1,1), gl_FragColor, gl_FragColor.a);
}
This code works great, drawing some geosets with a base color instead of transparency, which (should?) prove that uniforms can be changed here. So why isn't this working with my textures?
Should I use a different shader program per geoset ?
Thanks in advance for your answers :)
Regards,
Bruce
EDIT: here's how I generate textures in the renderer:
override public uint GenTexture(Bitmap bmp)
{
    uint texture;
    GL.GenTextures(1, out texture);

    //I disabled this line because I now bind the texture before drawing a geoset
    //Anyway, uncommenting this line doesn't show a better result
    //GL.BindTexture(TextureTarget.Texture2D, texture);

    System.Drawing.Imaging.BitmapData data = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height), System.Drawing.Imaging.ImageLockMode.ReadOnly, System.Drawing.Imaging.PixelFormat.Format32bppArgb);
    GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba, data.Width, data.Height, 0,
        OpenTK.Graphics.OpenGL.PixelFormat.Bgra, PixelType.UnsignedByte, data.Scan0);
    bmp.UnlockBits(data);

    //temp settings
    GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Linear);
    GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Linear);
    return texture;
}
I finally solved my problem!
All the answers sharpened my understanding and led me to the solution, which lay in two major problems:
1) As Calvin1602 said, it is very important to bind a newly created texture before calling glTexImage2D.
2) UncleZeiv also drew my attention to the last parameter of GL.Uniform1. The OpenTK tutorial is very misleading because the author passes the id of the texture object to that function, which happens to work there only because the order of texture generation exactly matches the texture unit indices used.
As I was unsure that my understanding was exact, I had wrongly changed this parameter back to geoset.TextureId.
Thanks!
You don't need multiple shader programs if the only thing you are changing is the texture. Also uniform locations are constant throughout the lifetime of a shader program, so there is no need to retrieve those each frame. However, you do need to rebind the texture each time you change it, and you will need to bind each distinct texture to a separate texture ID.
As a result, I would conclude that what you posted ought to work and so the problem is likely somewhere else in your code.
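For reference, the pattern described above, in raw GL calls (names assumed from the post):

// At init: pick one unit for the sampler and set the uniform once.
glUseProgram(prog);
glUniform1i(glGetUniformLocation(prog, "Texture1"), 1); // unit index, set once

// Per object: rebind a different texture to that same unit.
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, textureId);
// draw ...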
EDIT: After the updated version it should still work. However I am concerned about why the following line is commented out:
//GL.BindTexture(TextureTarget.Texture2D, texture);
This should be in there. Otherwise you will keep overwriting the same texture (which is ridiculous). You need to bind the texture before you initialize it. Now it is entirely conceivable that something else is broken, but given what I see now, this is the only error that jumps out at me.
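The essential ordering, in raw GL terms (a sketch; the pixel-data parameters are assumptions):

GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture); // without this, glTexImage2D targets whatever was bound last
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_BGRA, GL_UNSIGNED_BYTE, pixels);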