Slicing Image to map different textures - opengl

In OpenGL fixed-function programming, can I map different textures onto different objects when those textures are all generated from a single image? For example, I have a 1024 x 1024 image and four rectangles in my scene. I want to slice the image into four 256 x 256 pieces and map those slices as textures.
How can I do this? One option is, of course, to pre-slice the image. But can this be done using glTexSubImage2D or some similar/different API?

Yes, you can use texture coordinates, which specify which part of the texture you wish to map onto your object, rather than mapping the whole thing.
Read more: http://www.glprogramming.com/red/chapter09.html#name6
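For example, to show only the 256 x 256 tile at the lower-left corner of the 1024 x 1024 image, give the rectangle texture coordinates spanning 0..0.25 instead of 0..1. A minimal fixed-function sketch (the vertex positions and the texture id tex are placeholders):

glBindTexture(GL_TEXTURE_2D, tex);  /* tex holds the full 1024x1024 image */
glBegin(GL_QUADS);
glTexCoord2f(0.00f, 0.00f); glVertex2f(-1.0f, -1.0f);
glTexCoord2f(0.25f, 0.00f); glVertex2f( 0.0f, -1.0f);
glTexCoord2f(0.25f, 0.25f); glVertex2f( 0.0f,  0.0f);
glTexCoord2f(0.00f, 0.25f); glVertex2f(-1.0f,  0.0f);
glEnd();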

EDIT: Shaders are required to use array textures: http://www.opengl.org/registry/specs/EXT/texture_array.txt. I'll leave the answer up as it might still be useful info.
I guess you can use an Array Texture for this: http://www.opengl.org/wiki/Array_Texture
You make a 256 x 1024 texture (the four 256 x 256 tiles stacked vertically). When loading the texture you specify a layer size (256 x 256) and a layer count (4). Your texture will then be split up into four layers.
You can access the texture using three-component UVs: [u, v, layerId].
Note that if you want to generate mipmaps, you need to define the number of levels when you allocate the storage with glTexStorage3D.
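A sketch of that allocation with glTexStorage3D and glTexSubImage3D (assumes GL 4.2+ or ARB_texture_storage, and that tiles[i] points to the RGBA pixels of the i-th 256 x 256 slice):

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, 256, 256, 4); /* 1 mip level, 4 layers */
for (int i = 0; i < 4; ++i)
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0,  /* mip level          */
                    0, 0, i,                 /* x, y, layer        */
                    256, 256, 1,             /* one 256x256 layer  */
                    GL_RGBA, GL_UNSIGNED_BYTE, tiles[i]);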

I think you have three options:
1. Use different texture coordinates for each rectangle.
2. Transform the texture coordinates using glMatrixMode(GL_TEXTURE) and a different matrix between drawing each rectangle (a sketch follows after this list).
3. Create four different OpenGL textures from your original big texture. I don't think OpenGL offers you much help here: you either use a paint package to pre-slice the image (the easiest option if you only have to do this a few times), or copy parts of the image into a new buffer before calling glTexImage2D.
I think the first option is the easiest, with the advantage that you don't have to change any state between drawing the rectangles.
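For completeness, the second option might look like this (a sketch; DrawRectangle() stands in for your own drawing code, and every rectangle uses the same 0..1 base texture coordinates):

glMatrixMode(GL_TEXTURE);
for (int i = 0; i < 4; ++i) {
    glLoadIdentity();
    glScalef(0.25f, 0.25f, 1.0f);                       /* shrink 0..1 down to one tile  */
    glTranslatef((float)(i % 2), (float)(i / 2), 0.0f); /* then shift to tile (i%2, i/2) */
    DrawRectangle(i);
}
glLoadIdentity();            /* restore the texture matrix when done */
glMatrixMode(GL_MODELVIEW);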

Related

How do I render two different images to two different primitives in OpenGL? 2D Texture arrays?

So I have a simple OpenGL viewer where the user can draw any number of boxes. I've also added the ability to take a PNG or JPG image and texture-map it onto a primitive.
I want the user to be able to pick any of the cubes on the screen and apply different textures to them. I'm fairly new to OpenGL. Right now I can easily map an image onto a single primitive, but I'm wondering what's the best way to map 2 separate images (which may be different sizes) onto 2 separate primitives.
I've done a fair amount of reading up on 2D texture arrays, and it would seem this is the way I want to go, since I can store multiple textures in one texture unit, but I'm not sure it's possible given what I mentioned above: if the images have different dimensions, then I don't think I can do this (at least I don't think so). I know I can just store each image in a separate texture unit, but doing it with an array seemed like the cleaner way to do it.
What would be the best way to do this? Can you in fact store different-size images in a 2D texture array, and if so, how? Or am I better off just storing them in separate texture units?
Texture arrays are mainly meant for drawing a single primitive (or a whole mesh) with the shader able to select between images without exhausting the available texture sampling units; note also that every layer of a texture array must have the same dimensions, so differently sized images would need padding or rescaling. You can use them in the way you thought, but I doubt it will benefit you. Another approach (similar to texture arrays) is a texture atlas, i.e. creating a patchwork of images that constitutes a single texture and using appropriate texture coordinates to select the subimage.
In your case, I suggest simply loading each picture into a separate texture and binding the appropriate texture before drawing each cube.
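A minimal sketch of that suggestion (cubeTexture[] and drawCube() are placeholders for your own data and drawing code):

for (int i = 0; i < cubeCount; ++i) {
    glBindTexture(GL_TEXTURE_2D, cubeTexture[i]); /* each texture can have its own size */
    drawCube(i);                                  /* draw the i-th cube with its texture */
}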

How many depth textures can I bind to a framebuffer?

I am trying to create shadow maps of many objects in a sceneRoom, with their shadows projected onto the sceneRoom. Until now I've been able to project the sceneRoom's shadows onto itself, but I want to project the shadows of the other objects in the sceneRoom onto the sceneRoom's floor.
Is it possible to create multiple depth textures in one framebuffer, or should I use several framebuffers, each with one depth texture?
There is only one GL_DEPTH_ATTACHMENT point, so you can have at most one attached depth buffer at any time. You will have to use some other method.
No, there is only one attachment point (well, technically two if you count GL_DEPTH_STENCIL_ATTACHMENT) for depth in an FBO. You can only attach one thing to the depth, but that does not mean you are limited to a single image.
You can use an array texture to store multiple depth images and then attach this array texture to GL_DEPTH_ATTACHMENT.
However, the only way to draw into an explicit array layer of this texture would be to use a geometry shader to do layered rendering. Since it sounds like each of the depth images you are interested in covers a completely different set of geometry, this does not sound like the approach you want: with a geometry shader you would process the same set of geometry for each layer.
One thing you could consider is actually using a single depth buffer, but packing your shadow maps into an atlas. If each of your shadow maps is 512x512, you could store 4 of them in a single texture with dimensions 1024x1024 and adjust texture coordinates (and viewport when you draw into the atlas) appropriately. The reason you might consider doing this is because changing the render target (FBO state) tends to be the most expensive thing you would do between draw calls in a series of depth-only draws. You might change a few uniforms or vertex pointers, but those are dirt cheap to change.
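A sketch of the atlas idea (FBO setup omitted; shadowFbo and renderCasters() are placeholders for your own shadow-pass code): render each 512x512 shadow map into its own quadrant of a single 1024x1024 depth texture by moving the viewport between draws.

/* One 1024x1024 depth texture attached to GL_DEPTH_ATTACHMENT of shadowFbo. */
glBindFramebuffer(GL_FRAMEBUFFER, shadowFbo);
glClear(GL_DEPTH_BUFFER_BIT);
for (int i = 0; i < 4; ++i) {
    glViewport((i % 2) * 512, (i / 2) * 512, 512, 512); /* select quadrant i */
    renderCasters(i);  /* placeholder: draw the shadow casters for light i */
}
/* When sampling shadow map i, remap its coordinates into the quadrant:
   uv * 0.5 + vec2(0.5 * (i % 2), 0.5 * (i / 2)) */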

Placing multiple images on a 3D surface

If I was to place a texture on the surface of a 3D object, for example a cube, I could use the vertices of that cube to describe the placement of this texture.
But what if I want to place multiple separate images on the same flat surface? Or suppose it is just one image, but I don't want it to appear at the edges of the surface, where the vertices are, but rather somewhere small and in the middle of the surface. I want the actual images to be chosen and placed dynamically at runtime, otherwise I could condense them offline as a single texture.
I have an approach but I want to seek advice as to whether there is a better method, or if this is perfectly acceptable:
My guess is to create multiple separate 2D quads (with a depth of 0), each with its own texture placed on it (the quads could of course share a texture atlas and use different texture coordinates).
Then, I transform these quads such that they appear to be on the surface of a 3D object, such as a cube. Of course I'd have to maintain a matrix hierarchy so these quads are transformed appropriately whenever the cube is transformed, such that they appear to be attached to the cube.
While this isn't necessarily hard, I am new to texturing and would like to know if this is a normal practice for something like this.
You could try rendering a scene, saving that as a texture, and then using that texture on the surface.
Check out glCopyTexImage2D() or glCopyTexSubImage2D().
Or perhaps try using frame buffer objects.
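A sketch of the copy route (it assumes tex was already allocated at w x h with glTexImage2D, and that the scene has just been rendered into the current read buffer):

/* Copy the lower-left w x h pixels of the framebuffer into the texture. */
glBindTexture(GL_TEXTURE_2D, tex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0,  /* mip level                       */
                    0, 0,              /* destination offset in texture   */
                    0, 0, w, h);       /* source rectangle in framebuffer */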
But what if I want to place multiple separate images on the same flat surface?
Use multiple textures, maybe each with its own set of texture coordinates. Your OpenGL implementation will offer you a number of texture units. Each of them can supply a different texture.
glActiveTexture(GL_TEXTURE0 + i);          // note: GL_TEXTURE0, not GL_TEXTURE_0
glBindTexture(GL_TEXTURE_2D, textures[i]); // textures[i]: your texture object ids
glUniform1i(texturesampler[i], i); // texturesampler[i] contains the sampler uniform location of the bound program.
Or suppose it is just one image, but I don't want it to appear at the edges of the surface, where the vertices are, but rather somewhere small and in the middle of the surface.
That's where GL_CLAMP… texture wrap modes get their use.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER); // likewise GL_TEXTURE_WRAP_R for 3D textures; GL_CLAMP or GL_CLAMP_TO_EDGE are the other clamping variants
With those you can give the vertices texture coordinates outside the [0, 1] interval; instead of repeating, the image shows only once, with the edge (or border) pixels extended beyond it. If you make those pixels transparent, it's as if there was no image there.
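Concretely, a sketch (it reuses the clamp-to-border setup above; alpha blending must be enabled for the transparency to actually hide the border region):

/* A fully transparent border color makes everything outside [0, 1] invisible. */
const GLfloat transparentBorder[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, transparentBorder);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
/* Texture coordinates from -1 to 2 place the image in the middle third of the face. */
glBegin(GL_QUADS);
glTexCoord2f(-1.0f, -1.0f); glVertex2f(-1.0f, -1.0f);
glTexCoord2f( 2.0f, -1.0f); glVertex2f( 1.0f, -1.0f);
glTexCoord2f( 2.0f,  2.0f); glVertex2f( 1.0f,  1.0f);
glTexCoord2f(-1.0f,  2.0f); glVertex2f(-1.0f,  1.0f);
glEnd();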

Texturing OpenGL Terrain?

What is the best way to texture terrain made from quads in OpenGL? I have around 30 different textures I want to have for my terrains (1 texture per terrain type, so 30 terrain types) and would like to have smooth transitions between any two of the terrains.
I have been doing some browsing on the web and found that there are many different methods, including 3D texturing, alpha channels, blending, and shaders. However, which of these is the most efficient, and can it handle the number of textures I am looking to use? For example, this popular answer describes some of the techniques, but since the mixmap has only 4 channels (RGBA), it can only support 4 textures.
I should also note that I know nothing about shaders, so techniques that don't require shaders would be preferable.
Since you linked to an answer that describes texture splatting, and its question mentions the game Oblivion, I can provide some additional insight into that.
Basic texture splatting with an RGBA mixmap only supports four textures per terrain quad, but you can use different sets of textures for different quads. Oblivion divides its terrain into squares (called "cells") of 32 grid points (192 feet) per side, and each cell defines its own set of four terrain textures. So you can't have lots of texture diversity within a small area, but you can easily vary your textures over larger regions. If you prefer, you can define texture sets for smaller regions, even individual quads, at the expense of using more memory.
If you really need more than four textures in a quad, you can use multiple mixmaps. For each additional one, you just do another texture lookup to get four more blending factors, and blend in four more textures on top of the results from the previous mixmap. You can scale up to as many textures as you want, again at the expense of memory.
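A single splatting pass might look like the following fragment shader (a sketch embedded as a C string; the uniform names uMixmap and uTex0..uTex3 are illustrative, and the tiling factor is arbitrary). A second mixmap would simply add one more lookup and four more weighted terms on top of this result.

static const char *splatFragSrc =
    "uniform sampler2D uMixmap;                                        \n"
    "uniform sampler2D uTex0, uTex1, uTex2, uTex3;                     \n"
    "void main() {                                                     \n"
    "    vec2 uv = gl_TexCoord[0].st;                                  \n"
    "    vec4 w  = texture2D(uMixmap, uv); /* per-texture blend weights */\n"
    "    vec2 t  = uv * 16.0;              /* tile the detail textures  */\n"
    "    gl_FragColor = w.r * texture2D(uTex0, t)                      \n"
    "                 + w.g * texture2D(uTex1, t)                      \n"
    "                 + w.b * texture2D(uTex2, t)                      \n"
    "                 + w.a * texture2D(uTex3, t);                     \n"
    "}                                                                 \n";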
Texture splatting can be tricky to combine with LOD techniques on the height map, because when a single low-detail terrain quad represents a group of high-detail quads, you have to sample several different mixmaps for different regions of the big quad. Oblivion sidesteps that problem by using texture splatting only for full-detail terrain; distant cells, rendered at lower resolution, use precomputed textures produced by the editor, which does the splatting and downscaling in advance.
One alternative to texture splatting is to use a clipmap to render a "megatexture". With this approach, you have a single large texture that represents your entire terrain, and you avoid filling up your RAM by loading different parts of it with only as much detail as is actually needed to render it based on the viewer's current position. (Distant parts of the terrain can't be seen at full detail, so there's no need to load them at full detail.)
The advantage of this approach is its artistic freedom: you can place fine details anywhere you want in the texture, without regard to the vertex grid. The disadvantage is that it's rather complex to implement, and the entire clipmap has to be stored somewhere, probably in a big file on disk, so that you can load parts of it into RAM as needed.

Avoiding glBindTexture() calls?

My game renders lots of cubes, each of which randomly has 1 of 12 textures. I already Z-order the geometry, so I can't just render all the cubes with texture 1, then 2, then 3, and so on, because that would defeat the Z-ordering. I already keep track of the previously bound texture and skip glBindTexture when the next texture is the same, but that still leaves way too many calls. What else can I do?
Thanks
The ultimate and fastest way would be to have an array of textures (normal ones or cubemaps) and dynamically fetch the texture slice according to an id stored in each cube's instance data (or each cube face's data, if you want a different texture per face), using the GLSL built-ins gl_InstanceID or gl_PrimitiveID.
With this implementation you would bind your texture array just once.
This would of course require use of the gpu_shader4 and texture_array extensions:
http://developer.download.nvidia.com/opengl/specs/GL_EXT_gpu_shader4.txt
http://developer.download.nvidia.com/opengl/specs/GL_EXT_texture_array.txt
I have used this mechanism (with D3D10, but the principle applies to OpenGL too) and it worked very well.
I had to map different textures onto sprites (3D points with a constant screen size of 9x9 or 15x15 pixels, IIRC), each texture indicating a different meaning to the user.
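For illustration, the shader side might look roughly like this (a GLSL 1.30 sketch embedded as C strings; uTextures and vLayer are illustrative names). Bind the array texture once with glBindTexture(GL_TEXTURE_2D_ARRAY, ...) and draw all the cubes in one instanced call (e.g. glDrawArraysInstanced):

static const char *vsSrc =
    "#version 130\n"
    "flat out int vLayer;\n"
    "void main() {\n"
    "    vLayer = gl_InstanceID;  /* one texture layer per cube instance */\n"
    "    gl_TexCoord[0] = gl_MultiTexCoord0;\n"
    "    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
    "}\n";
static const char *fsSrc =
    "#version 130\n"
    "uniform sampler2DArray uTextures;\n"
    "flat in int vLayer;\n"
    "void main() {\n"
    "    gl_FragColor = texture(uTextures, vec3(gl_TexCoord[0].st, vLayer));\n"
    "}\n";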
Edit:
If you don't feel comfortable with all the shader stuff, I would simply sort the cubes by texture instead of Z-ordering the geometry, then measure the performance gains.
I would also try adding a pre-Z pass where you render all your cubes to the depth buffer only, then render the normal scene, and see if that speeds things up (if you are fragment bound, it could help).
You can pack your textures into one texture (an atlas) and offset the texture coordinates accordingly.
glMatrixMode(GL_TEXTURE) will also allow you to perform transformations in texture space (to avoid changing all the texture coordinates).
Also from NVIDIA:
Bindless Graphics