What's the difference between static, streaming, and target textures in SDL?

I can't find a proper description of the texture types. The documentation (https://docs.rs/sdl2/0.34.3/sdl2/render/struct.TextureCreator.html#method.create_texture) mentions static, streaming, and target textures, but gives little information on how they differ.
If I want to update the texture completely on each frame (the texture is 100% of the canvas in size), which type should I use?

It took me a bit of time to understand the difference between them, but:
A static texture is one that rarely changes (like sprites).
A target texture can be used as a 'drawing place' (treated like a surface you render onto with SDL's drawing primitives). It's intended to be updated often.
A streaming texture is a special type of texture that assumes a full update from an external source of data. It was designed for video players and the like (each new frame of the video is rendered into the same texture). It's also intended to be updated often.
A streaming texture is updated with the with_lock method, which takes a closure to perform the update. The closure receives the texture's writable byte array as a parameter.
So the key difference is that 'target' lets you 'draw' on the texture (fill, draw a line, blit, etc.), while 'streaming' lets you update it as a raw byte array (a level even lower than a pixel array).
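Under the hood, the sdl2 crate's with_lock wraps SDL's C-level SDL_LockTexture/SDL_UnlockTexture pair. A minimal sketch of a full per-frame update through that C API (assuming an ARGB8888 streaming texture and an existing renderer, width and height):

// Sketch: rewrite every pixel of a streaming texture each frame.
SDL_Texture* tex = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_ARGB8888,
                                     SDL_TEXTUREACCESS_STREAMING, width, height);
void* pixels = nullptr;
int pitch = 0;                                 // bytes per row, possibly padded
if (SDL_LockTexture(tex, nullptr, &pixels, &pitch) == 0) {
    for (int y = 0; y < height; ++y) {
        Uint32* row = reinterpret_cast<Uint32*>(static_cast<Uint8*>(pixels) + y * pitch);
        for (int x = 0; x < width; ++x)
            row[x] = 0xFF0000FFu;              // opaque blue in ARGB
    }
    SDL_UnlockTexture(tex);                    // uploads the modified bytes to the GPU
}
SDL_RenderCopy(renderer, tex, nullptr, nullptr);

For the question's case (replacing 100% of the canvas every frame), streaming is the intended choice.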

Related

OpenGL texture terminology/conceptual confusion

I've found a lot of resources that tell you what to type to get a texture on screen, but I would like a higher-level conceptual understanding of what the OpenGL API is "doing" and what all of the differences in terminology "mean".
I'm going to do my best to explain what I've picked up, but would love any corrections/additions, or pointers to resources where I can read further (and just a note: I've found that the documentation of the actual API calls just references itself in circles and is conceptually lacking).
glGenTextures- this won't actually allocate any memory for the data of a texture on the graphics card (you just tell it "how many" textures you want it to generate, so it doesn't know anything about the size...), but instead sets kind of a "name" aside so you can reference given textures consistently (I've been thinking of it as kind of "allocating a pointer").
glBindTexture- use the "name" generated in glGenTextures to specify that "we're now talking about this texture for future API calls until further notice", and further, we're specifying some metadata about that "pointer" we've allocated saying whether the texture it points to (/will point to) is of type GL_TEXTURE_2D or ..._3D or whatever. (Is it just me, or is it weird that this call has those two seemingly totally different functionalities?)
glTexParameter- sets other specified metadata about the currently "bound" texture. (I like this API as it seems pretty self-explanatory and lets you set metadata explicitly... but I wonder why letting OpenGL know that it's a GL_TEXTURE_2D isn't part of THIS call, and not the previous? Especially because you have to specify that it's a GL_TEXTURE_2D every time you call this anyways? And why do you have to do that?)
glTexImage2D- allocates the memory for the actual data for the texture on the graphics card (and optionally uploads it). It further specifies some metadata regarding how it ought to be read: its width, height, formatting (GL_RGB, GL_RGBA, etc...). Now again, why do I again have to specify that it's a GL_TEXTURE_2D when I've done it in all the previous calls? Also, I guess I can understand why this includes some metadata (rather than offloading ALL the texture metadata calls to glTexParameter, as these are pretty fundamental/non-optional bits of info, but there are also some weird parameters that seem like they oughtn't have made the cut? oh well...)
glActiveTexture- this is the bit that I really don't get... So I guess graphics cards are capable of having only a limited number of "texture units"... what is a texture unit? Is it that there can only be N texture buffers? Or only N texture pointers? Or (this is my best guess...) there can only be N pointers being actively read by a given draw call? And once I get that, where/how often do I have to specify the "Active Texture"? Does glBindTexture associate the bound texture with the currently active texture? Or is it the other way around (bind, then set active)? Or does uploading/allocating the graphics card memory do that?
sampler2D- now we're getting into glsl stuff... So, a sampler is a thing that can reference a texture from within a shader. I can get its location via glGetUniformLocation, so I can set which texture that sampler is referencing -- does this correspond to the "Active Texture"? So if I want to talk about the texture I've specified as GL_TEXTURE0, I'd call glUniform1i(location_of_sampler_uniform,0)? Or are those two different things?
I think that's all I got... if I'm obviously missing some intuition or something, please let me know! Thanks!
Let me apologize for answering with what amounts to a giant wall of text. I could not figure out a less obnoxious way to format this ;)
glGenTextures
this won't actually allocate any memory for the data of a texture on the graphics card (you just tell it "how many" textures you want it to generate, so it doesn't know anything about the size...), but instead sets kind of a "name" aside so you can reference given textures consistently (I've been thinking of it as kind of "allocating a pointer").
You can more or less think of it as "allocating a pointer." What it really does is reserve a name (handle) in the set of textures. Nothing is allocated at all at this point; basically it just flags GL to say "you can't hand out this name anymore" (more on this later).
glBindTexture
use the "name" generated in glGenTexture to specify that "we're now talking about this texture for future API calls until further notice", and further, we're specifying some metadata about that "pointer" we've allocated saying whether the texture it points to (/will point to) is of type GL_TEXTURE_2D or ..._3D or whatever. (Is it just me, or is it weird that this call has those two seemingly totally different functionalities?)
If you will recall, glGenTextures (...) only reserves a name. This function is what takes the reserved name and effectively finalizes it as a texture object (the first time it is called). The type you pass here is immutable: once you bind a name for the first time, it has to use the same type for every successive bind.
Now you have finally finished allocating a texture object, but it has no data store at this point -- it is just a set of states with no data.
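In code, those first two steps amount to nothing more than this (a sketch; the variable name is illustrative):

GLuint name = 0;
glGenTextures(1, &name);             // reserves a name; nothing is allocated yet
glBindTexture(GL_TEXTURE_2D, name);  // first bind finalizes it as a 2D texture object
// From now on 'name' may only ever be bound to GL_TEXTURE_2D; binding it to
// another target (e.g. GL_TEXTURE_3D) would generate GL_INVALID_OPERATION.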
glTexParameter
sets other specified metadata about the currently "bound" texture. (I like this API as it seems pretty self-explanatory and lets you set metadata explicitly... but I wonder why letting OpenGL know that it's a GL_TEXTURE_2D isn't part of THIS call, and not the previous? Especially because you have to specify that it's a GL_TEXTURE_2D every time you call this anyways? And why do you have to do that?)
I am actually not quite clear on what you are asking here -- maybe my explanation of the previous function call will help you? But you are right, this function sets the state associated with a texture object.
glTexImage2D
allocates the memory for the actual data for the texture on the graphics card (and optionally uploads it). It further specifies some metadata regarding how it ought to be read: its width, height, formatting (GL_RGB, GL_RGBA, etc...). Now again, why do I again have to specify that it's a GL_TEXTURE_2D when I've done it in all the previous calls? Also, I guess I can understand why this includes some metadata (rather than offloading ALL the texture metadata calls to glTexParameter, as these are pretty fundamental/non-optional bits of info, but there are also some weird parameters that seem like they oughtn't have made the cut? oh well...)
This is what allocates the data store and (optionally) uploads texture data (you can supply NULL for the data here and opt to finish the data upload later with glTexSubImage2D (...)).
You have to specify the texture target here because there are half a dozen different types of textures that use 2D data stores. The simplest way to illustrate this is a cubemap.
A cubemap has type GL_TEXTURE_CUBE_MAP, but you cannot upload its texture data using GL_TEXTURE_CUBE_MAP -- that is nonsensical. Instead, you call glTexImage2D (...) while the cubemap is bound to GL_TEXTURE_CUBE_MAP and then you pass something like GL_TEXTURE_CUBE_MAP_POSITIVE_X to indicate which of the six 2D faces of the cubemap you are referencing.
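A sketch of that cubemap case ('size' and 'facePixels' are placeholders for your own data):

GLuint cube = 0;
glGenTextures(1, &cube);
glBindTexture(GL_TEXTURE_CUBE_MAP, cube);            // bound via the cubemap target...
for (int face = 0; face < 6; ++face)                 // ...but uploaded per 2D face
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, GL_RGBA8,
                 size, size, 0, GL_RGBA, GL_UNSIGNED_BYTE, facePixels[face]);
// The six face enums are consecutive, so POSITIVE_X + face walks all of them.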
glActiveTexture
this is the bit that I really don't get... So I guess graphics cards are capable of having only a limited number of "texture units"... what is a texture unit? Is it that there can only be N texture buffers? Or only N texture pointers? Or (this is my best guess...) there can only be N pointers being actively read by a given draw call? And once I get that, where/how often do I have to specify the "Active Texture"? Does glBindTexture associate the bound texture with the currently active texture? Or is it the other way around (bind, then set active)? Or does uploading/allocating the graphics card memory do that?
This is an additional level of indirection for texture binding (GL did not always have multiple texture units and you would have to do multiple render passes to apply multiple textures).
Once multi-texturing was introduced, binding a texture actually started to work this way:
glBindTexture(target, name)  =>  ATIU.targets[target].bound = name
Where:
* ATIU is the active texture image unit
* targets is an array of all possible texture types that can be bound to this unit
* bound is the name of the texture bound to ATIU.targets[target]
The rule since OpenGL 3.0 has been that you get a minimum of 16 of these for every shader stage in the system.
This requirement gives you enough binding locations to maintain a set of 16 different textures for each stage of the programmable pipeline (vertex, geometry, fragment in 3.x; tessellation control/evaluation added in 4.0). Most implementations can only use 16 textures in a single shader invocation (pass, basically), but you have a total of 48 (GL3) or 80 (GL4) places you can select from.
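In code, the indirection works like this (a sketch; the texture names are illustrative):

glActiveTexture(GL_TEXTURE0);           // select texture image unit 0
glBindTexture(GL_TEXTURE_2D, diffuse);  // binds to unit 0's GL_TEXTURE_2D target
glActiveTexture(GL_TEXTURE1);           // select texture image unit 1
glBindTexture(GL_TEXTURE_2D, normals);  // binds to unit 1's GL_TEXTURE_2D target
// glBindTexture always acts on the currently active unit, so the order is:
// select the unit first, then bind.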
sampler2D
now we're getting into glsl stuff... So, a sampler is a thing that can reference a texture from within a shader. I can get its location via glGetUniformLocation, so I can set which texture that sampler is referencing -- does this correspond to the "Active Texture"? So if I want to talk about the texture I've specified as GL_TEXTURE0, I'd call glUniform1i(location_of_sampler_uniform,0)? Or are those two different things?
Yes, the samplers in GLSL store indices that correspond to GL_TEXTUREn, where n is the value you have assigned to this uniform.
These are not regular uniforms, mind you; they are called opaque types (the value assigned cannot be changed/assigned from within a shader at run-time). You do not need to know that, but it might help to understand it if the question ever arises:
"Why can't I dynamically select a texture image unit for my sampler at run-time?" :)
In later versions of OpenGL, samplers actually became state objects of their own. They decouple some of the state that used to be tied directly to texture objects but had nothing to do with interpreting how the texture's data was stored. The decoupled state includes things like texture wrap mode, min/mag filter and mipmap levels. Sampler objects store no data.
This decoupling takes place whenever you bind a sampler object to a texture image unit -- that will override the aforementioned states that are duplicated by every texture object.
So effectively, a GLSL sampler* references neither a texture nor a sampler; it references a texture image unit (which may have one or both of those things bound to it). GLSL will pull sampler state and texture data accordingly from that unit based on the declared sampler type.
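Wiring a sampler to a texture image unit therefore looks like this (a sketch; 'prog' and the uniform name are placeholders):

// GLSL side:  uniform sampler2D u_diffuse;
GLint loc = glGetUniformLocation(prog, "u_diffuse");
glUseProgram(prog);
glUniform1i(loc, 0);                    // u_diffuse reads from texture unit 0...
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, diffuse);  // ...which is where this texture is bound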

How to determine the width and height of a GL framebuffer object given only the corresponding id

I want to determine the size (width, height) of a framebuffer object.
I created a framebuffer object via
// create the FBO.
glGenFramebuffers(1, &fboId);
How can I get the size of the first color attachment given only the framebuffer object id (fboId)?
Is this possible, or do I have to store the size of the color attachment in an external variable to know the size of the FBO later?
Your question is somewhat confused, as you ask for two different things.
Here's the easy question:
How can I get the size of the first color attachment given only the framebuffer object id (fboId)?
That's simple: get the texture/renderbuffer attached to that attachment, get what mipmap level and array layer is attached, then query the texture/renderbuffer for how big it is.
The first two steps are done with glGetFramebufferAttachmentParameter (note the key word "Attachment") for GL_COLOR_ATTACHMENT0. You query the GL_FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE to get whether it's a renderbuffer or a texture. You can get the renderbuffer/texture name with GL_FRAMEBUFFER_ATTACHMENT_OBJECT_NAME.
If the object is a renderbuffer, you can then bind the renderbuffer and use glGetRenderbufferParameter to fetch the renderbuffer's GL_RENDERBUFFER_WIDTH and GL_RENDERBUFFER_HEIGHT.
If the object is a texture, you'll need to do more work. You need to query the attachment parameter GL_FRAMEBUFFER_ATTACHMENT_TEXTURE_LEVEL to get the mipmap level.
Of course, now you need to know how to bind it. If you're using OpenGL versions before 4.4 or without certain extensions, then this is complicated. See, you need to know which texture target type to use. Silly and annoying as this may seem, the only way to determine the target from just the texture object name is to... try everything. Go through each target and check glGetError. The one for which GL_INVALID_OPERATION isn't returned is the right one.
If you have GL 4.4 or ARB_multi_bind available, you can just use glBindTextures (note the "s"), which does not require that you specify the target. And if you have 4.5 or ARB_direct_state_access, you don't need to bind the texture at all. The DSA-style functions don't need the texture target, and it also provides glBindTextureUnit, which binds a texture to its natural internal target.
Once you have the texture bound and its mipmap level, you use glGetTexLevelParameter to query the GL_TEXTURE_WIDTH and GL_TEXTURE_HEIGHT for that level.
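Put together, the whole procedure looks roughly like this (a sketch; it assumes the FBO is currently bound and, in the texture branch, that the attachment is a plain GL_TEXTURE_2D -- see the target caveat above):

GLint type = 0, name = 0, level = 0, w = 0, h = 0;
glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
    GL_FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE, &type);
glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
    GL_FRAMEBUFFER_ATTACHMENT_OBJECT_NAME, &name);
if (type == GL_RENDERBUFFER) {
    glBindRenderbuffer(GL_RENDERBUFFER, name);
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH,  &w);
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &h);
} else if (type == GL_TEXTURE) {
    glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
        GL_FRAMEBUFFER_ATTACHMENT_TEXTURE_LEVEL, &level);
    glBindTexture(GL_TEXTURE_2D, name);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, level, GL_TEXTURE_WIDTH,  &w);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, level, GL_TEXTURE_HEIGHT, &h);
}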
Now, that's the easy problem. The hard problem is what your title asks:
I want to determine the size (width, height) of a framebuffer object.
The size of the renderable area of an FBO is not the same as the size of GL_COLOR_ATTACHMENT0. The renderable area of an FBO is the intersection of all of the sizes of all of the images attached to the FBO.
Unless you have special knowledge of this FBO, you can't assume that the FBO contains only one image or that all of the images have the same size (and if you have special knowledge of the FBO, then quite frankly you should also have special knowledge of how big it is). So you'll need to repeat the above procedure for every attachment (if the type is GL_NONE, then nothing is attached). Then take the intersection of the returned values (i.e., the smallest width and height), as sketched below.
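Sketched with a hypothetical attachment_size() helper wrapping the queries above (needs <algorithm> and <climits>):

// attachment_size() is a hypothetical helper that runs the queries from the
// previous snippet and returns false when the attachment type is GL_NONE.
int fboW = INT_MAX, fboH = INT_MAX;
for (GLenum att : { GL_COLOR_ATTACHMENT0, GL_DEPTH_ATTACHMENT, GL_STENCIL_ATTACHMENT }) {
    int w = 0, h = 0;
    if (attachment_size(att, &w, &h)) {
        fboW = std::min(fboW, w);   // the renderable area is the intersection,
        fboH = std::min(fboH, h);   // i.e. the smallest width and height
    }
}
// A complete version would also walk the remaining color attachments.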
In general, you shouldn't have to ask an FBO that you created how big it is. Just as you don't have to ask textures how big they are. You made them; by definition, you know how big they are. You put them in the FBO, so again you know how big it is.

Applying a shader to a framebuffer object to get a fisheye effect

Let's say I have an application (the details of the application should be irrelevant for solving the problem). Instead of rendering to the screen, I am somehow able to force the application to render to a framebuffer object instead (by messing with GLEW or intercepting a call in a DLL).
Once the application has rendered its content to the FBO, is it possible to apply a shader to the contents of the FBO? My knowledge is limited here, so from what I understand, at this stage all information about vertices is no longer available and all the necessary tests have been applied, so what's left in the buffer is just pixel data. Is this correct?
If it is possible to apply a shader to the FBO, is it possible to get a fisheye effect? (like this, for example: http://idea.hosting.lv/a/gfx/quakeshots.html)
The technique used in the link above is to create 6 different viewports, render each viewport to a cubemap face, and then apply the texture to a mesh.
Thanks
A framebuffer object encapsulates several other buffers, specifically those that are implicitly indexed by fragment location. So a single framebuffer object may bundle together a colour buffer, a depth buffer, a stencil buffer and a bunch of others. The individual buffers are known as renderbuffers.
You're right -- there's no geometry in there. For the purposes of reading back the scene you get only final fragment values, which, if you're hijacking an existing app, will probably be a 2D pixel image of the frame and some other things that you don't care about.
If your GPU has render-to-texture support (originally an extension circa OpenGL 1.3, but you'd be hard pressed to find a GPU without it nowadays, even in mobile phones) then you can attach a texture as a colour buffer within a framebuffer. So the rendering code is exactly as it would be normally but ends up writing the results to a texture that you can then use as a source for drawing.
Fragment shaders can programmatically decide which location of a texture map to sample in order to create their output. So you can write a fragment shader that applies a fisheye lens, though you're restricted to the field of view rendered in the original texture, obviously. Which would probably be what you'd get in your Quake example if you had just one of the sides of the cube available rather than six.
In summary: the answer is 'yes' to all of your questions. There's a brief introduction to framebuffer objects here.
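A minimal render-to-texture setup along those lines (a sketch; the 1024x1024 size is arbitrary):

GLuint fbo = 0, colorTex = 0;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);  // no mipmaps here
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1024, 1024, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);   // allocate storage, upload nothing
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);
// ... render the scene as usual; the fragments land in colorTex ...
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// Now draw a full-screen quad sampling colorTex with a fisheye fragment shader.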
Look here for some relevant info:
http://www.opengl.org/wiki/Framebuffer_Object
The short, simple explanation is that an FBO is the 3D equivalent of a software frame buffer. You have direct access to individual pixels, instead of having to modify a texture and upload it. You can get shaders to point to an FBO. The link above gives an overview of the procedure.

How to create textures within GPU

Can anyone please tell me how to use hardware memory to create textures in OpenGL? Currently I'm running my game in window mode; do I need to switch to fullscreen to get the use of the hardware?
If I can create textures in hardware, is there a limit on the number of textures (other than the hardware memory)? And then how can I cache my textures in hardware? Thanks.
This should be covered by almost all texture tutorials for OpenGL. For example here, here and here.
For every texture you first need a texture name. A texture name is like a unique index for a single texture. Every name points to a texture object that can have its own parameters, data, etc. glGenTextures is used to get new names. I don't know if there is any limit besides the uint range (2^32). If there is, then you will probably get 0 for all new texture names (and a GL error).
The next step is to bind your texture (see glBindTexture). After that, all operations that use or affect textures will use the texture specified by the texture name you passed to glBindTexture. You can now set parameters for the texture (glTexParameter) and upload the texture data with glTexImage2D (for 2D textures). After calling glTexImage you can also free the system memory holding your texture data.
For static textures all this has to be done only once. If you want to use the texture you just need to bind it again and enable texturing (glEnable(GL_TEXTURE_2D)).
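Condensed into code, the whole static-texture path is (a sketch; 'pixels', 'width' and 'height' are your own image data):

GLuint tex = 0;
glGenTextures(1, &tex);                    // 1. get a texture name
glBindTexture(GL_TEXTURE_2D, tex);         // 2. bind it
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);   // 3. upload the data
free(pixels);                              // safe now: GL keeps its own copy
// ...later, when drawing (fixed-function pipeline):
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex);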
The size (width/height) of a single texture is limited by GL_MAX_TEXTURE_SIZE. This is normally 4096, 8192 or 16384. It is also limited by the available graphics memory because it has to fit into it together with some other resources like the framebuffer or vertex buffers. All textures together can be bigger than the available memory, but then they will be swapped.
In most cases the graphics driver should decide which textures are stored in system memory and which in graphics memory. You can however give certain textures a higher priority with either glPrioritizeTextures or with glTexParameter.
Edit:
I wouldn't worry too much about where textures are stored because the driver normally does a very good job with that. Textures that are used often are more likely to be stored in graphics memory. If you set a priority, that's just a "hint" for the driver on how important it is for the texture to stay on the graphics card. It's also possible that the priority is completely ignored. You can also check where textures currently are with glAreTexturesResident.
Usually when you talk about generating a texture on the GPU, you're not actually creating texture images and applying them like normal textures. The simpler and more common approach is to use fragment shaders to procedurally calculate the colors for each pixel in real time, from scratch, for every single frame.
The canonical example for this is to generate a Mandelbrot pattern on the surface of an object, say a teapot. The teapot is rendered with its polygons and texture coordinates by the application. At some stage of the rendering pipeline every pixel of the teapot passes through the fragment shader which is a small program sent to the GPU by the application. The fragment shader reads the 2D texture coordinates and calculates the Mandelbrot set color of the 2D coordinates and applies it to the pixel.
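A sketch of such a fragment shader, embedded here as a C++ string (old-style GLSL with varying/gl_FragColor, to match the era of the tutorials below; the varying name is illustrative):

// Colors each fragment by Mandelbrot escape time, computed purely from its
// 2D texture coordinates -- no texture image is ever uploaded.
const char* mandelbrotFS = R"GLSL(
    varying vec2 texCoord;                 // interpolated 2D texture coordinates
    void main() {
        vec2 c = texCoord * 3.0 - vec2(2.0, 1.5);  // map [0,1]^2 onto the complex plane
        vec2 z = vec2(0.0);
        int i;
        for (i = 0; i < 64; ++i) {
            z = vec2(z.x * z.x - z.y * z.y, 2.0 * z.x * z.y) + c;  // z = z*z + c
            if (dot(z, z) > 4.0) break;                            // |z| > 2: escaped
        }
        gl_FragColor = vec4(vec3(float(i) / 64.0), 1.0);           // grayscale escape time
    }
)GLSL";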
Fullscreen mode has nothing to do with it. You can use shaders and generate textures even if you're in window mode. As I mentioned, the textures you create never actually occupy space in texture memory; they are created on the fly. One could probably think of a way to capture and cache the generated texture, but this can be somewhat complex and require multiple rendering passes.
You can learn more about it if you look up "GLSL" on Google - the OpenGL shading language.
This somewhat dated tutorial shows how to create a simple fragment shader which draws the Mandelbrot set (page 4).
If you can get your hands on the book "OpenGL Shading Language, 2nd Edition", you'll find it contains a number of simple examples on generating sky, fire and wood textures with the help of an external 3D Perlin noise texture from the application.
To create a texture on the GPU, look into "render to texture" tutorials. There are two common methods: binding a PBuffer context as a texture, or using Frame Buffer Objects. PBuffer render-to-texture is the older method and has wider support; Frame Buffer Objects are easier to use.
Also, you don't have to switch to "fullscreen" mode for OpenGL to be HW accelerated. In fact, OpenGL doesn't know about windows at all. A fullscreen OpenGL window is just that: a top-level window on top of all other windows, with no decorations and the input focus grabbed. Some drivers bypass window masking and clipping code, and employ a simpler, faster buffer-swap method if the window with the active OpenGL context covers the whole screen, thus gaining a little performance; but with current hardware and software the effect is very small compared to other influences.

How do I set the color of a single pixel in a Direct3D texture?

I'm attempting to draw a 2D image to the screen in Direct3D, which I'm assuming must be done by mapping a texture to a rectangular billboard polygon projected to fill the screen. (I'm not interested in, or cannot use, Direct2D.) All the texture information I've found in the SDK describes loading a bitmap from a file and assigning a texture to use that bitmap, but I haven't yet found a way to manipulate a texture as a bitmap, pixel by pixel.
What I'd really like is a function such as
void TextureBitmap::SetBitmapPixel(int x, int y, DWORD color);
If I can't set the pixels directly in the texture object, do I need to keep around a DWORD array that is the bitmap and then assign the texture to that every frame?
Finally, while I'm initially assuming that I'll be doing this on the CPU, the per-pixel color calculations could probably also be done on the GPU. Is there HLSL code that sets the color of a single pixel in a texture, or are pixel shaders only useful for modifying the display pixels?
Thanks.
First, your direct question:
You can, technically, set pixels in a texture. That would require use of the LockRect and UnlockRect APIs.
In D3D context, 'locking' usually refers to transferring a resource from GPU memory to system memory (thereby disabling its participation in rendering operations). Once locked, you can modify the populated buffer as you wish, and then unlock - i.e., transfer the modified data back to the GPU.
Generally locking was considered a very expensive operation, but since PCIe 2.0 that is probably not a major concern anymore. You can also specify a small (even 1-pixel) RECT as a 2nd argument to LockRect, thereby requiring the memory-transfer of a negligible data volume, and hope the driver is indeed smart enough to transfer just that (I know for a fact that in older nVidia drivers this was not the case).
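A sketch of that direct approach, shaped like the SetBitmapPixel the question asks for (assumes a lockable D3DFMT_A8R8G8B8 texture, e.g. one created with D3DUSAGE_DYNAMIC, or one living in D3DPOOL_MANAGED):

// Hypothetical helper: write one ARGB pixel into mip level 0.
void SetBitmapPixel(IDirect3DTexture9* tex, int x, int y, DWORD color)
{
    D3DLOCKED_RECT lr;
    RECT r = { x, y, x + 1, y + 1 };       // lock only the 1-pixel region
    if (SUCCEEDED(tex->LockRect(0, &lr, &r, 0))) {
        *reinterpret_cast<DWORD*>(lr.pBits) = color;  // pBits points at the locked region
        tex->UnlockRect(0);                // transfer the modified data back to the GPU
    }
}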
The more efficient (and code-intensive) way of achieving that is indeed to never leave the GPU. If you create your texture as a render target (that is, specify D3DUSAGE_RENDERTARGET as its usage argument), you can then set it as the destination of the pipeline before making any draw calls, and write a shader (perhaps passing parameters) to paint your pixels. Such usage of render targets is considered standard, and you should be able to find many code samples around -- but unless you're already facing performance issues, I'd say that's overkill for a single 2D billboard.
HTH.