When I render to a texture (stored in a bound framebuffer object), do any of the following texture parameters matter?
GL_TEXTURE_WRAP_S
GL_TEXTURE_WRAP_T
GL_TEXTURE_MIN_FILTER
GL_TEXTURE_MAG_FILTER
It's also redundant to generate mipmaps, right? (Might be a stupid question, but I'm just making sure!)
What about data types (the type parameter)?
Does type have to be GL_FLOAT? If not, what's the difference between specifying type as GL_FLOAT and GL_UNSIGNED_BYTE?
Also, every doc I find on the web regarding Texture2D (e.g. https://www.opengl.org/sdk/docs/man/html/glTexImage2D.xhtml) is missing some info (namely the GL_DEPTH_COMPONENT16/24/32 and GL_RGB16 flags; other sources miss a lot more).
Is there a source with complete info on this stuff? (Preferably one specialized for the render-to-texture technique.)
When I render to a texture (stored in a bound framebuffer object), do any of the following texture parameters matter?
No. While those parameters can be set in a way that breaks Texture Completeness, they do not affect Framebuffer Object Completeness. No texture or sampling parameters can affect framebuffer completeness.
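For illustration, here is a minimal sketch (the 512x512 size and GL_RGBA8 format are arbitrary assumptions): the texture keeps all of its default wrap/filter state, which actually leaves it incomplete for sampling (the default minification filter expects mipmaps), yet the framebuffer is still complete and can be rendered to.
GLuint tex, fbo;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
// No glTexParameteri calls at all -- the defaults are left untouched.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
// Completeness depends only on the attachments, not on sampling state.
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) { /* handle error */ }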
What about data types (the type parameter)? Does type have to be GL_FLOAT? If not, what's the difference between specifying type as GL_FLOAT and GL_UNSIGNED_BYTE?
Even if you are transferring no bytes of data (i.e., passing nullptr), you must still provide legal values for the pixel transfer parameters. If you don't, your call to glTexImage2D will fail with an error.
For example, from the OpenGL Specification version 4.5:
An INVALID_OPERATION error is generated if one of the base internal
format and format is DEPTH_COMPONENT or DEPTH_STENCIL, and the other
is neither of these values.
So your internal format and pixel transfer format must at least be able to talk to one another, even if they're never actually used. For example, if you're making a depth/stencil texture, you must use GL_DEPTH_STENCIL as the pixel transfer format.
Or you can just stop screwing around with bad APIs and use glTexStorage.
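If glTexStorage is available (GL 4.2+, or ARB_texture_storage), a sketch of a depth/stencil allocation looks like the following; the immutable-storage call takes no pixel transfer format/type at all, which is exactly why the problem above disappears (width and height are assumed to be defined elsewhere):
GLuint depthTex;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_DEPTH24_STENCIL8, width, height);
// The equivalent mutable-storage call must pass a compatible format/type
// even though no data is uploaded:
// glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, width, height, 0,
//              GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, nullptr);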
I can't find a proper description of the texture types. The documentation (https://docs.rs/sdl2/0.34.3/sdl2/render/struct.TextureCreator.html#method.create_texture) mentions static, streaming and target textures, but gives little information on how they differ.
If I want to update texture completely on each frame (the texture is 100% of the canvas in size), which texture should I use?
It took me a bit of time to understand the difference between them, but:
A static texture is one that is rarely changed (like sprites).
A target texture is one that can be used as a 'drawing place' (i.e. as a surface you draw onto with the SDL drawing primitives). It's intended to be updated often.
A streaming texture is a special kind of texture that assumes a full update from an external source of data. It was designed for video players and the like (render each new frame of the video into the same texture). It's also intended to be updated often.
A streaming texture is updated with the with_lock method, which takes a closure that performs the update. The closure receives the texture's writable byte array as a parameter.
So the key difference is that a 'target' texture lets you 'draw' on the texture (fill, draw a line, blit, etc.), while a 'streaming' texture lets you update it as a byte array (an even lower level than a pixel array).
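The Rust sdl2 crate is a thin wrapper over the SDL2 C API, so here is a sketch in C/C++ of what a streaming-texture update looks like underneath (with_lock corresponds to SDL_LockTexture/SDL_UnlockTexture; renderer, W and H are assumed to exist already):
SDL_Texture* tex = SDL_CreateTexture(renderer,
                                     SDL_PIXELFORMAT_ARGB8888,
                                     SDL_TEXTUREACCESS_STREAMING,   // "streaming" access
                                     W, H);
void* pixels = nullptr;
int pitch = 0;                                   // bytes per row (may be padded)
SDL_LockTexture(tex, nullptr, &pixels, &pitch);
// ... write the new frame into `pixels` as raw bytes; this is the byte array
// the closure passed to with_lock receives ...
SDL_UnlockTexture(tex);
SDL_RenderCopy(renderer, tex, nullptr, nullptr); // draw it covering the canvas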
Is it possible to find the internal format of a texture within the shader (glsl)?
For example, if I have a texture with the format GL_RG, is it possible to recognize in the shader that the blue and alpha values are "constant" and can be ignored?
I know I can use a uniform to pass the texture type from C++ to the shaders. But is there an "intrinsic" way to find out from within the shader?
No, I don't believe there is anything that would give you this information directly.
Looking at the latest GLSL spec (4.50 at this time), I would expect a hypothetical function to get this information to be listed in section "8.9.1. Texture Query Functions" starting on page 158. But the only functions listed there are:
textureSize: Get size of texture.
textureQueryLod: Get the level of detail used for the given texture coordinates.
textureQueryLevels: Get the number of mipmap levels in the texture.
textureSamples: Get the number of samples for a multisampled texture.
So unless there is something completely different I missed, what you're looking for does not exist.
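The usual workaround is the one you already mention: query the internal format on the CPU side and hand it to the shader as an ordinary uniform. A sketch (the uniform name u_texFormat is made up):
GLint internalFormat = 0;
glBindTexture(GL_TEXTURE_2D, tex);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &internalFormat);
glUseProgram(program);
glUniform1i(glGetUniformLocation(program, "u_texFormat"), internalFormat);
// In the shader, compare u_texFormat against the formats you care about and
// branch accordingly. Note the driver may report a sized format (e.g. GL_RG8
// rather than GL_RG), so check for both.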
I've found a lot of resources that tell you what to type to get a texture on screen, but would like a higher level conceptual understanding of what the openGL API is "doing" and what all of the differences in terminology "mean".
I'm going to do my best to explain what I've picked up, but would love any corrections/additions, or pointers to resources where I can read further (and just a note that I've found the documentation of the actual API calls to just reference themselves in circles and be conceptually lacking).
glGenTextures- this won't actually allocate any memory for the data of a texture on the graphics card (you just tell it "how many" textures you want it to generate, so it doesn't know anything about the size...), but instead sets kind of a "name" aside so you can reference given textures consistently (I've been thinking of it as kind of "allocating a pointer").
glBindTexture- use the "name" generated in glGenTexture to specify that "we're now talking about this texture for future API calls until further notice", and further, we're specifying some metadata about that "pointer" we've allocated saying whether the texture it points to (/will point to) is of type GL_TEXTURE_2D or ..._3D or whatever. (Is it just me, or is it weird that this call has those two seemingly totally different functionalities?)
glTexParameter- sets other specified metadata about the currently "bound" texture. (I like this API as it seems pretty self explanatory and lets you set metadata explicitly... but I wonder why letting OpenGL know that it's a GL_TEXTURE_2D isn't part of THIS call, and not the previous? Especially because you have to specify that it's a GL_TEXTURE_2D every time you call this anyways? And why do you have to do that?)
glTexImage2D- allocates the memory for the actual data for the texture on the graphics card (and optionally uploads it). It further specifies some metadata regarding how it ought to be read: its width, height, formatting (GL_RGB, GL_RGBA, etc...). Now again, why do I again have to specify that it's a GL_TEXTURE_2D when I've done it in all the previous calls? Also, I guess I can understand why this includes some metadata (rather than offloading ALL the texture metadata calls to glTexParameter as these are pretty fundamental/non-optional bits of info, but there are also some weird parameters that seem like they oughtn't have made the cut? oh well...)
glActiveTexture- this is the bit that I really don't get... So I guess graphics cards are capable of having only a limited number of "texture units"... what is a texture unit? Is it that there can only be N texture buffers? Or only N texture pointers? Or (this is my best guess...) there can only be N pointers being actively read by a given draw call? And once I get that, where/how often do I have to specify the "Active Texture"? Does glBindTexture associate the bound texture with the currently active texture? Or is it the other way around (bind, then set active)? Or does uploading/allocating the graphics card memory do that?
sampler2D- now we're getting into glsl stuff... So, a sampler is a thing that can reference a texture from within a shader. I can get its location via glGetUniformLocation, so I can set which texture that sampler is referencing- does this correspond to the "Active Texture"? So if I want to talk about the texture I've specified as GL_TEXTURE0, I'd call glUniform1i(location_of_sampler_uniform,0)? Or are those two different things?
I think that's all I got... if I'm obviously missing some intuition or something, please let me know! Thanks!
Let me apologize for answering with what amounts to a giant wall of text. I could not figure out how to format this in any less obnoxious a way ;)
glGenTextures
this won't actually allocate any memory for the data of a texture on the graphics card (you just tell it "how many" textures you want it to generate, so it doesn't know anything about the size...), but instead sets kind of a "name" aside so you can reference given textures consistently (I've been thinking of it as kind of "allocating a pointer").
You can more or less think of it as "allocating a pointer." What it really does is reserve a name (handle) in the set of textures. Nothing is allocated at all at this point, basically it just flags GL to say "you can't hand out this name anymore." (more on this later).
glBindTexture
use the "name" generated in glGenTexture to specify that "we're now talking about this texture for future API calls until further notice", and further, we're specifying some metadata about that "pointer" we've allocated saying whether the texture it points to (/will point to) is of type GL_TEXTURE_2D or ..._3D or whatever. (Is it just me, or is it weird that this call has those two seemingly totally different functionalities?)
If you will recall, glGenTextures (...) only reserves a name. This function is what takes the reserved name and effectively finalizes it as a texture object (the first time it is called). The type you pass here is immutable, once you bind a name for the first time, it has to use the same type for every successive bind.
Now you have finally finished allocating a texture object, but it has no data store at this point -- it is just a set of states with no data.
glTexParameter
sets other specified metadata about the currently "bound" texture. (I like this API as it seems pretty self explanatory and lets you set metadata explicitly... but I wonder why letting OpenGL know that it's a GL_TEXTURE_2D isn't part of THIS call, and not the previous? Especially because you have to specify that it's a GL_TEXTURE_2D every time you call this anyways? And why do you have to do that?)
I am actually not quite clear what you are asking here -- maybe my explanation of the previous function call will help you? But you are right, this function sets the state associated with a texture object.
glTexImage2D
allocates the memory for the actual data for the texture on the graphics card (and optionally uploads it). It further specifies some metadata regarding how it ought to be read: its width, height, formatting (GL_RGB, GL_RGBA, etc...). Now again, why do I again have to specify that it's a GL_TEXTURE_2D when I've done it in all the previous calls? Also, I guess I can understand why this includes some metadata (rather than offloading ALL the texture metadata calls to glTexParameter as these are pretty fundamental/non-optional bits of info, but there are also some weird parameters that seem like they oughtn't have made the cut? oh well...)
This is what allocates the data store and (optionally) uploads texture data (you can supply NULL for the data here and opt to finish the data upload later with glTexSubImage2D (...)).
You have to specify the texture target here because there are half a dozen different types of textures that use 2D data stores. The simplest way to illustrate this is a cubemap.
A cubemap has type GL_TEXTURE_CUBE_MAP, but you cannot upload its texture data using GL_TEXTURE_CUBE_MAP -- that is nonsensical. Instead, you call glTexImage2D (...) while the cubemap is bound to GL_TEXTURE_CUBE_MAP and then you pass something like GL_TEXTURE_CUBE_MAP_POSITIVE_X to indicate which of the 6 2D faces of the cubemap you are referencing.
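A hedged sketch of that (cubemap, faceData and size are assumed to exist; the six face enums are consecutive, so they can be iterated):
glBindTexture(GL_TEXTURE_CUBE_MAP, cubemap);              // bind target
for (int i = 0; i < 6; ++i) {
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i,      // per-face upload target
                 0, GL_RGBA8, size, size, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, faceData[i]);
}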
glActiveTexture
this is the bit that I really don't get... So I guess graphics cards are capable of having only a limited number of "texture units"... what is a texture unit? Is it that there can only be N texture buffers? Or only N texture pointers? Or (this is my best guess...) there can only be N pointers being actively read by a given draw call? And once I get that, where/how often do I have to specify the "Active Texture"? Does glBindTexture associate the bound texture with the currently active texture? Or is it the other way around (bind, then set active)? Or does uploading/allocating the graphics card memory do that?
This is an additional level of indirection for texture binding (GL did not always have multiple texture units and you would have to do multiple render passes to apply multiple textures).
Once multi-texturing was introduced, binding a texture actually started to work this way:
glBindTexture (target, name) => ATIU.targets [target].bound = name
Where:
* ATIU is the active texture image unit
* targets is an array of all possible texture types that can be bound to this unit
* bound is the name of the texture bound to ATIU.targets [target]
The rule since OpenGL 3.0 has been that you get a minimum of 16 of these for every shader stage in the system.
This requirement gives you enough binding locations to maintain a set of 16 different textures for each stage of the programmable pipeline (vertex, geometry, fragment in 3.x; tessellation control/evaluation added in 4.0). Most implementations can only use 16 textures in a single shader invocation (a pass, basically), but you have a total of 48 (GL3) or 80 (GL4) places you can select from.
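In code, the indirection looks like this (a sketch; diffuseTex and normalTex are placeholder names for textures created earlier). glBindTexture always binds to whichever unit is currently active:
glActiveTexture(GL_TEXTURE0);               // select texture image unit 0
glBindTexture(GL_TEXTURE_2D, diffuseTex);   // bound to unit 0's 2D target
glActiveTexture(GL_TEXTURE1);               // select texture image unit 1
glBindTexture(GL_TEXTURE_2D, normalTex);    // bound to unit 1's 2D target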
sampler2D
now we're getting into glsl stuff... So, a sampler is a thing that can reference a texture from within a shader. I can get its location via glGetUniformLocation, so I can set which texture that sampler is referencing- does this correspond to the "Active Texture"? So if I want to talk about the texture I've specified as GL_TEXTURE0, I'd call glUniform1i(location_of_sampler_uniform,0)? Or are those two different things?
Yes, the samplers in GLSL store indices that correspond to GL_TEXTUREn, where n is the value you have assigned to this uniform.
These are not regular uniforms, mind you, they are called opaque types (the value assigned cannot be changed/assigned from within a shader at run-time). You do not need to know that, but it might help to understand that if the question ever arises:
"Why can't I dynamically select a texture image unit for my sampler at run-time?" :)
In later versions of OpenGL, samplers actually became state objects of their own. They decouple some of the state that used to be tied directly to texture objects but had nothing to do with interpreting how the texture's data was stored. The decoupled state includes things like texture wrap mode, min/mag filter and mipmap levels. Sampler objects store no data.
This decoupling takes place whenever you bind a sampler object to a texture image unit - that will override the aforementioned states that are duplicated by every texture object.
So effectively, a GLSL sampler* references neither a texture nor a sampler; it references a texture image unit (which may have one or both of those things bound to it). GLSL will pull sampler state and texture data accordingly from that unit based on the declared sampler type.
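A sketch tying it together (program, tex and the uniform name u_diffuse are assumptions). The sampler uniform is set to a unit index, not a texture name, and an optional sampler object bound to the same unit overrides the texture's own wrap/filter state:
glUseProgram(program);
glUniform1i(glGetUniformLocation(program, "u_diffuse"), 0); // sampler2D u_diffuse -> unit 0
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex);          // texture data for unit 0
// Optional (GL 3.3+): a sampler object bound to the same unit
GLuint sampler;
glGenSamplers(1, &sampler);
glSamplerParameteri(sampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glBindSampler(0, sampler);                  // 0 = texture image unit index, not GL_TEXTURE0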
For standard OpenGL textures, the filtering state is part of the texture, and must be defined when the texture is created. This leads to code like:
glGenTextures(1,&_texture_id);
glBindTexture(GL_TEXTURE_2D,_texture_id);
glTexParameterf(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
glTexImage2D(...);
This works perfectly. I am trying to make a multisampled texture (for use in a FBO). The code is very similar:
glGenTextures(1,&_texture_id);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE,_texture_id);
glTexParameterf(GL_TEXTURE_2D_MULTISAMPLE,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D_MULTISAMPLE,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
glTexImage2DMultisample(...);
I am using a debug context, and with this code the first glTexParameterf(...) call causes:
GL_INVALID_ENUM error generated. multisample texture targets doesn't support sampler state
I don't know what this is supposed to mean. Notice that multisampled textures only support nearest filtering. I am specifying this. I noticed that for some of the calls (in particular glTexParameterf(...)), the GL_TEXTURE_2D_MULTISAMPLE is not a listed input in the documentation (which would indeed explain the invalid enum error if they're actually invalid, not just forgotten). However, if it is not accepted, then how am I supposed to set nearest filtering?
You do not need to set nearest filtering because multisample textures are not filtered at all. The specification (section 8.10) does list GL_TEXTURE_2D_MULTISAMPLE as a valid target for glTexParameteri (which you should use instead of glTexParameterf for integer parameters), but lists among possible errors:
An INVALID_ENUM error is generated if target is either TEXTURE_2D_MULTISAMPLE
or TEXTURE_2D_MULTISAMPLE_ARRAY, and pname is any sampler
state from table 23.18.
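So the fix is simply to drop the glTexParameter* calls. A sketch (4 samples, an assumed width/height, attached to an FBO as in a typical render-to-texture setup):
glGenTextures(1, &_texture_id);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, _texture_id);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, 4, GL_RGBA8,
                        width, height, GL_TRUE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D_MULTISAMPLE, _texture_id, 0);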
I want to determine the size (width, height) of a framebuffer object.
I created a framebuffer object via
// create the FBO.
glGenFramebuffers(1, &fboId);
How can I get the size of the first color attachment given only the framebuffer object id (fboId)?
Is this possible, or do I have to store the size of the color attachment in an external variable in order to know the size of the FBO later?
Your question is somewhat confused, as you ask for two different things.
Here's the easy question:
How can I get the size of the first color attachment given only the framebuffer object id (fboId)?
That's simple: get the texture/renderbuffer attached to that attachment, get what mipmap level and array layer is attached, then query the texture/renderbuffer for how big it is.
The first two steps are done with glGetFramebufferAttachmentParameter (note the key word "Attachment") for GL_COLOR_ATTACHMENT0. You query the GL_FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE to get whether it's a renderbuffer or a texture. You can get the renderbuffer/texture name with GL_FRAMEBUFFER_ATTACHMENT_OBJECT_NAME.
If the object is a renderbuffer, you can then bind the renderbuffer and use glGetRenderbufferParameter to fetch the renderbuffer's GL_RENDERBUFFER_WIDTH and GL_RENDERBUFFER_HEIGHT.
If the object is a texture, you'll need to do more work. You need to query the attachment parameter GL_FRAMEBUFFER_ATTACHMENT_TEXTURE_LEVEL to get the mipmap level.
Of course, now you need to know how to bind it. If you're using OpenGL versions before 4.4 or without certain extensions, then this is complicated. See, you need to know which texture target type to use. Silly and annoying as this may seem, the only way to determine the target from just the texture object name is to... try everything. Go through each target and check glGetError. The one for which GL_INVALID_OPERATION isn't returned is the right one.
If you have GL 4.4 or ARB_multi_bind available, you can just use glBindTextures (note the "s"), which does not require that you specify the target. And if you have 4.5 or ARB_direct_state_access, you don't need to bind the texture at all. The DSA-style functions don't need the texture target, and it also provides glBindTextureUnit, which binds a texture to its natural internal target.
Once you have the texture bound and its mipmap level, you use glGetTexLevelParameter to query the GL_TEXTURE_WIDTH and GL_TEXTURE_HEIGHT for that level.
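A sketch of that procedure (it assumes the attachment turns out to be a plain GL_TEXTURE_2D; see the caveats above about discovering the real target):
GLint type = 0, name = 0, level = 0, w = 0, h = 0;
glBindFramebuffer(GL_FRAMEBUFFER, fboId);
glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
    GL_FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE, &type);
glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
    GL_FRAMEBUFFER_ATTACHMENT_OBJECT_NAME, &name);
if (type == GL_TEXTURE) {
    glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
        GL_FRAMEBUFFER_ATTACHMENT_TEXTURE_LEVEL, &level);
    glBindTexture(GL_TEXTURE_2D, (GLuint)name);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, level, GL_TEXTURE_WIDTH, &w);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, level, GL_TEXTURE_HEIGHT, &h);
} else if (type == GL_RENDERBUFFER) {
    glBindRenderbuffer(GL_RENDERBUFFER, (GLuint)name);
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &w);
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &h);
}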
Now, that's the easy problem. The hard problem is what your title asks:
I want to determine the size (width, height) of a framebuffer object.
The size of the renderable area of an FBO is not the same as the size of GL_COLOR_ATTACHMENT0. The renderable area of an FBO is the intersection of all of the sizes of all of the images attached to the FBO.
Unless you have special knowledge of this FBO, you can't assume that the FBO contains only one image or that all of the images have the same size (and if you have special knowledge of the FBO, then quite frankly you should also have special knowledge of how big it is). So you'll need to repeat the above procedure for every attachment (if the type is GL_NONE, then nothing is attached). Then take the intersection of the returned values (ie: the smallest width and height).
In general, you shouldn't have to ask an FBO that you created how big it is. Just as you don't have to ask textures how big they are. You made them; by definition, you know how big they are. You put them in the FBO, so again you know how big it is.