Resizing an OpenGL texture without changing UV coordinates

I have a texture atlas that gets generated during gameplay. Textures are loaded and unloaded depending on which textures are visible when the player moves around.
The problem arises when there are too many textures to fit in the atlas and I need to resize it. If I simply resize the atlas by copying it to a larger texture, I would have to re-render everything to update the UV coordinates of the quads.
Is it possible to somehow create an OpenGL texture and define its width and height in UV coordinates to be something other than 1?

One solution would be to use texel coordinates rather than normalized coordinates (assuming you are not going to scale or move the images around within the atlas when you make it bigger). If you can live with their limitations (no wrapping, no mipmaps, only linear filtering), you could just use Rectangle Textures and call it a day. Alternatively, you could simply apply scaling factors to the texture coordinates in your vertex shader, either by passing them as uniforms or directly querying the texture dimensions in the shader using the textureSize() GLSL function.
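For instance, a vertex shader along these lines keeps all vertex data in texel coordinates (a minimal sketch; the attribute and uniform names are hypothetical):

static const char *atlas_vs =
    "#version 330 core\n"
    "layout(location = 0) in vec2 aPos;\n"
    "layout(location = 1) in vec2 aTexelCoord;  // stored in texels, never updated\n"
    "uniform sampler2D uAtlas;\n"
    "out vec2 vUV;\n"
    "void main() {\n"
    "    vUV = aTexelCoord / vec2(textureSize(uAtlas, 0)); // normalize at draw time\n"
    "    gl_Position = vec4(aPos, 0.0, 1.0);\n"
    "}\n";

Resizing the atlas then changes nothing but the value textureSize() returns.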
Alternatively, you might consider using an Array Texture to hold multiple atlases. Instead of enlarging an existing atlas to make room for more subimages, you could just start a new atlas in another array slice. That way, the texture coordinates of all existing subimages would stay the same…
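A sketch of that variant, assuming GL 4.2+ for glTexStorage3D (the sizes, layer count, and the x/y/w/h/pixels of the sub-image upload are placeholders):

GLuint atlasArray;
glGenTextures(1, &atlasArray);
glBindTexture(GL_TEXTURE_2D_ARRAY, atlasArray);
glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, 1024, 1024, 4); // 4 atlas layers
/* When layer 0 fills up, start placing sub-images in layer 1; the coordinates
   (u, v, layer) of already-packed sub-images never change. */
glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, x, y, 1, w, h, 1,
                GL_RGBA, GL_UNSIGNED_BYTE, pixels);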

Related

render two images to the screen separately

I want to render two textures on the screen at the same time at different positions, but I'm confused about the vertex coordinates.
How could I write a vertex shader to meet my goal?
Just to address the "two images to the screen separately" bit...
A texture maps image colours onto geometry. To be pedantic, you can't draw a texture, but you can blit one, and you can draw geometry with a texture mapped onto it (using per-vertex texture coordinates).
You can bind two textures at once while drawing, but you'll need both a second set of texture coordinates and to handle how they blend (or don't in your case). Even then the shader will be quite specific and because the images are separate there'll be unnecessary code running for each pixel to handle the other image. What happens when you want to draw 3 images, or 100?
Instead, just draw a quad with one image twice (binding each texture in turn before drawing). The overhead will be tiny unless you're drawing lots, at which point you might look at texture atlases and drawing all the geometry with one draw call (really getting towards the "at the same time" part of the question).
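A sketch of that approach (quadVAO, the two textures, and the uOffset uniform are assumed to exist):

glUseProgram(spriteProgram);
glBindVertexArray(quadVAO);
glBindTexture(GL_TEXTURE_2D, textureA);
glUniform2f(uOffset, -0.5f, 0.0f);       // place the first image on the left
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glBindTexture(GL_TEXTURE_2D, textureB);
glUniform2f(uOffset, 0.5f, 0.0f);        // place the second image on the right
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);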

How many depth textures can I bind to a framebuffer?

I am trying to create shadow maps of many objects in a sceneRoom, with their shadows being projected on the sceneRoom. Until now I've been able to project the shadows of the sceneRoom onto itself, but I want to project the shadows of other objects in the sceneRoom onto the sceneRoom's floor.
Is it possible to create multiple depth textures in one framebuffer, or should I use several framebuffers, each with one depth texture?
There is only one GL_DEPTH_ATTACHMENT point, so you can have at most one attached depth buffer at any time; you have to use some other method.
No, there is only one attachment point (well, technically two if you count GL_DEPTH_STENCIL_ATTACHMENT) for depth in an FBO. You can only attach one thing to the depth, but that does not mean you are limited to a single image.
You can use an array texture to store multiple depth images and then attach this array texture to GL_DEPTH_ATTACHMENT.
However, the only way to draw into an explicit array layer of this texture would be to use a Geometry Shader to do layered rendering. Since it sounds like each of the depth images you are interested in covers a completely different set of geometry, this does not sound like the approach you want: with a Geometry Shader you would process the same set of geometry for each layer.
One thing you could consider is actually using a single depth buffer, but packing your shadow maps into an atlas. If each of your shadow maps is 512x512, you could store 4 of them in a single texture with dimensions 1024x1024 and adjust texture coordinates (and viewport when you draw into the atlas) appropriately. The reason you might consider doing this is because changing the render target (FBO state) tends to be the most expensive thing you would do between draw calls in a series of depth-only draws. You might change a few uniforms or vertex pointers, but those are dirt cheap to change.
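In code, the atlas idea might look like this (a sketch; shadowFBO, renderDepthOnly, and lightViewProj are hypothetical names):

glBindFramebuffer(GL_FRAMEBUFFER, shadowFBO);    // FBO bound once for all four maps
glClear(GL_DEPTH_BUFFER_BIT);
for (int i = 0; i < 4; ++i) {
    /* select one 512x512 quadrant of the 1024x1024 depth texture */
    glViewport((i % 2) * 512, (i / 2) * 512, 512, 512);
    renderDepthOnly(lightViewProj[i]);           // depth-only draw for shadow map i
}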

Placing multiple images on a 3D surface

If I was to place a texture on the surface of a 3D object, for example a cube, I could use the vertices of that cube to describe the placement of this texture.
But what if I want to place multiple separate images on the same flat surface? Or suppose it is just one image, but I don't want it to appear at the edges of the surface, where the vertices are, but rather somewhere small and in the middle of the surface. I want the actual images to be chosen and placed dynamically at runtime, otherwise I could condense them offline as a single texture.
I have an approach but I want to seek advice as to whether there is a better method, or if this is perfectly acceptable:
My guess is to create multiple separate 2D quads (with a depth of 0), each with a texture associated with it and placed on it (the textures could of course come from a texture atlas, using different texture coordinates).
Then, I transform these quads such that they appear to be on the surface of a 3D object, such as a cube. Of course I'd have to maintain a matrix hierarchy so these quads are transformed appropriately whenever the cube is transformed, such that they appear to be attached to the cube.
While this isn't necessarily hard, I am new to texturing and would like to know if this is a normal practice for something like this.
You could try rendering a scene and saving that as a texture, then using that texture on the surface.
Check out glCopyTexImage2D() or glCopyTexSubImage2D().
Or perhaps try using frame buffer objects.
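For the copy route, a minimal sketch (assuming the scene has just been rendered and the bound texture already has matching dimensions; 256x256 is a placeholder size):

glBindTexture(GL_TEXTURE_2D, tex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0,   // copy into mip level 0
                    0, 0,               // destination offset within the texture
                    0, 0, 256, 256);    // source rectangle in the framebuffer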
But what if I want to place multiple separate images on the same flat surface?
Use multiple textures, maybe each with its own set of texture coordinates. Your OpenGL implementation will offer you a number of texture units. Each of them can supply a different texture.
for (int i = 0; i < numTextures; ++i) {
    glActiveTexture(GL_TEXTURE0 + i);           // select texture unit i
    glBindTexture(GL_TEXTURE_2D, textures[i]);  // bind the i-th texture to that unit
    glUniform1i(texturesampler[i], i);          // texturesampler[i] contains the sampler uniform location of the bound program
}
Or suppose it is just one image, but I don't want it to appear at the edges of the surface, where the vertices are, but rather somewhere small and in the middle of the surface.
That's where GL_CLAMP… texture wrap modes get their use.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_{S,T,R}, GL_CLAMP[_TO_{EDGE,BORDER}]);
With those, you specify texture coordinates at the vertices that lie outside the [0, 1] interval; instead of repeating, the image shows up only once, with only its edge pixels repeated across the rest of the surface. If you make the edge pixels transparent, it's as if there were no image there.
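For example, with a transparent border (a sketch for a texture bound to GL_TEXTURE_2D):

const GLfloat transparent[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, transparent);
/* Quad corners may now use coordinates like (-1, 2): the image appears once,
   small and centered, and everything outside it samples the transparent border. */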

OpenGL: Accurate textures possible?

This is for a 2D game with OpenGL:
Is it possible with OpenGL to display a texture absolutely unfiltered, not stretched or blurred?
So that when I have a BMP and convert it into an OpenGL texture, and then retrieve that texture and convert it back, I have no modifications or quality / data loss?
Sure, just disable filtering; that is done by setting GL_TEXTURE_MIN_FILTER and GL_TEXTURE_MAG_FILTER to GL_NEAREST. Also make sure that you draw the texture at an appropriate size, so that texels are the same size as pixels.
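In code, for the currently bound texture:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // no minification filtering
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); // no magnification filtering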
As Matias said previously, one thing is to set GL_TEXTURE_MIN_FILTER and GL_TEXTURE_MAG_FILTER to GL_NEAREST (via glTexParameter*).
But for pixel-perfect rendering, there's another important thing: you don't want your texture to be rescaled to power-of-two dimensions. The easiest way is to specify the texture via the binding target GL_TEXTURE_RECTANGLE instead of GL_TEXTURE_2D. On a texture bound this way, the texture coordinates are not in the range (0..1, 0..1) as usual, but (0..w, 0..h) instead. You get per-texel indexing easily this way.
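A sketch of that setup (width, height, and pixels stand in for your BMP data):

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_RECTANGLE, tex);
glTexImage2D(GL_TEXTURE_RECTANGLE, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);  // no power-of-two rounding
glTexParameteri(GL_TEXTURE_RECTANGLE, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_RECTANGLE, GL_TEXTURE_MAG_FILTER, GL_NEAREST);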

Avoiding glBindTexture() calls?

My game renders lots of cubes, each of which randomly has 1 of 12 textures. I already Z-order the geometry, so I can't just render all the cubes with texture 1, then 2, then 3, etc., because that would defeat the Z-ordering. I already keep track of the previous texture and skip glBindTexture if the two are ==, but there are still way too many calls to it. What else can I do?
Thanks
The ultimate and fastest way would be to have an array texture (of normal 2D textures or cubemaps), then dynamically fetch the texture slice according to an id stored in each cube's instance data, or in the cube-face data if you want a different texture on a per-cube-face basis, using the GLSL built-ins gl_InstanceID or gl_PrimitiveID.
With this implementation you would bind your texture array just once.
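The fragment-shader side of the idea could look like this (a hypothetical sketch, here written as a GLSL 3.30 string; the texture index would be passed down from the vertex stage, derived from gl_InstanceID or a per-instance attribute):

static const char *cube_fs =
    "#version 330 core\n"
    "uniform sampler2DArray uTextures;  // all 12 textures, bound once\n"
    "flat in int vTexId;                // per-instance texture index\n"
    "in vec2 vUV;\n"
    "out vec4 fragColor;\n"
    "void main() {\n"
    "    fragColor = texture(uTextures, vec3(vUV, float(vTexId)));\n"
    "}\n";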
This would of course require use of the gpu_shader4 and texture_array extensions:
http://developer.download.nvidia.com/opengl/specs/GL_EXT_gpu_shader4.txt
http://developer.download.nvidia.com/opengl/specs/GL_EXT_texture_array.txt
I have used this mechanism (in D3D10, but the same principle applies) and it worked very well.
I had to map different textures onto sprites (3D points with a constant screen size of 9x9 or 15x15 pixels, IIRC), each indicating a different meaning to the user.
Edit:
If you don't feel comfy with all the shader stuff, I would simply sort the cubes by texture instead of Z-ordering the geometry, then measure the performance gains.
Also, I would try adding a pre-Z pass where you render all your cubes to the Z-buffer only, then render the normal scene, and see if it speeds things up (if you are fragment-bound, it could help).
You can pack your textures into one texture and offset the texture coordinates accordingly.
glMatrixMode(GL_TEXTURE) will also allow you to perform transformations on the texture space (to avoid changing all the texture coordinates by hand).
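For example (the offset and scale are placeholders for the sub-image's position within the packed texture):

glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glTranslatef(0.25f, 0.5f, 0.0f);   // move to the sub-image's corner
glScalef(0.25f, 0.25f, 1.0f);      // shrink to the sub-image's extent
glMatrixMode(GL_MODELVIEW);        // restore the usual matrix mode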
Also from NVIDIA:
Bindless Graphics