Texture mapping to different parts of a 3ds model in OpenGL?

I am trying to map the textures of a tree (in .3ds format) in OpenGL and C++. I am using vertex buffer objects, vertex array objects and shaders. The vertex coordinates, normals and texture coordinates are uploaded to the shader via glVertexAttribPointer. My question is: how can I select different textures for different parts of the model (i.e. bark and leaves)?

Two solutions:
1) Render a hierarchy of objects, where each object is rendered using its appropriate texture (a sketch follows below).
2) Pack all the required images into a single texture (a texture atlas). Each mesh triangle's texture coordinates then map to the correct portion of that image.
The first solution is simpler (apart from the hierarchy handling). The second one requires more complex texture mapping of the loaded object, but allows the use of a single texture (minimizing memory and texture binds). Game characters often use a single texture, which contains the textures of the face, hands and clothing.
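A minimal sketch of the first approach, assuming the loader has already split the .3ds model into per-material sub-meshes (SubMesh, its fields and drawModel are hypothetical names; an OpenGL loader header and <vector> are assumed to be included):

struct SubMesh {
    GLuint  vao;        // vertex array object of this part (bark, leaves, ...)
    GLuint  texture;    // texture object used by this part
    GLsizei indexCount; // number of indices in this part
};

void drawModel(const std::vector<SubMesh>& parts)
{
    glActiveTexture(GL_TEXTURE0);
    for (const SubMesh& part : parts) {
        glBindTexture(GL_TEXTURE_2D, part.texture); // switch texture per part
        glBindVertexArray(part.vao);
        glDrawElements(GL_TRIANGLES, part.indexCount, GL_UNSIGNED_INT, nullptr);
    }
}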

Related

How to bind multiple textures to primitives drawn using `glDrawRangeElements()`?

I am using glDrawRangeElements() to draw textured quads (as triangles). My problem is that I can only bind one texture before that function call, and so all quads are drawn using the same texture.
How to bind a different texture for each quad?
Is this possible when using the glDrawRangeElements() function? If not, what other OpenGL function should I look at?
First, you need to give your fragment shader access to multiple textures. To do this you can use:
Array textures - basically a 3D texture, where the 3rd dimension is the number of different 2D texture layers. The restriction is that all the textures in the array must be the same size. Cube map array textures (GL 4.0 and later) can also be used to stack multiple textures.
Bindless textures - these you can use on relatively new hardware only. For NVIDIA that's Kepler and later. Because a bindless texture is essentially a handle to texture memory on the GPU, you can fill an array or uniform buffer with thousands of them and then index into that array in the fragment shader, getting access to the sampler object directly.
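A minimal sketch of the bindless path, assuming the ARB_bindless_texture extension is available and tex is an already-created texture (names hypothetical):

// Get a 64-bit handle for the texture and make it resident on the GPU.
GLuint64 handle = glGetTextureHandleARB(tex);
glMakeTextureHandleResidentARB(handle);
// Store 'handle' in a uniform or shader storage buffer; in GLSL the block
// can then declare sampler members directly:
//   #extension GL_ARB_bindless_texture : require
//   layout(std140, binding = 0) uniform Textures { sampler2D texs[256]; };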
Now, how can you index into those arrays per primitive? There are a number of ways. First, you can use instanced drawing if you render the same primitives several times. There you have gl_InstanceID in GLSL to track which instance is currently being drawn.
In the case where you don't use instancing, and you try to texture different parts of the geometry in a single draw call, it is more complex. You should add texture index information on a per-vertex basis. That is, if your geometry has an interleaved per-vertex structure looking like this:
VTN,VTN,VTN... (where V - vertex position, T - texture coords, N - normal), you should add another set of data, let's call it I (texture index), so your vertex array will have the structure VTNI,VTNI,VTNI...
You can also use a separate vertex buffer containing only the texture indices, but for large geometry buffers that will probably be less efficient, as interleaving usually allows faster data access.
Once you have it, you can pass that texture index as a varying into the fragment shader (declared flat to make sure it is not interpolated) and use it to index into the specific texture layer. Yes, that means your vertex array will be larger and contain redundant data, but that's the downside of using multiple textures at the single-primitive level.
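A minimal sketch of that shader side, assuming the textures live in a 2D array texture on unit 0 and the index arrives as an integer vertex attribute (all names are hypothetical; the integer attribute must be set up with glVertexAttribIPointer on the C++ side):

const char* vsSrc = R"(
    #version 330 core
    layout(location = 0) in vec3 aPos;
    layout(location = 1) in vec2 aTexCoord;
    layout(location = 2) in int  aTexIndex; // the per-vertex I component
    flat out int vTexIndex;                 // flat: not interpolated
    out vec2 vTexCoord;
    void main() {
        vTexIndex   = aTexIndex;
        vTexCoord   = aTexCoord;
        gl_Position = vec4(aPos, 1.0);      // projection omitted for brevity
    }
)";

const char* fsSrc = R"(
    #version 330 core
    uniform sampler2DArray uTextures; // bound to texture unit 0
    flat in int vTexIndex;
    in vec2 vTexCoord;
    out vec4 fragColor;
    void main() {
        // The third coordinate selects the layer of the array texture.
        fragColor = texture(uTextures, vec3(vTexCoord, float(vTexIndex)));
    }
)";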
Hope it helps.

Initializing a cubemap with existing 2D textures and glTextureView

I am trying to create a cubemap from six existing textures. Texture views seem to be made for this sort of thing, and would also keep the cubemap updated when the original textures change. However, cubemaps can only be views of texture arrays (or other cubemaps), and I can't find any way to fill an array with several already existing 2D textures using glTextureView, since glTextureView can only be used with uninitialized textures.
Is there a way to do this or is drawing into the cubemap via an FBO the only way?
No, I do not believe this is supported. You can't "merge" multiple textures into a single texture without any copying.
I can think of a few options you have:
The ideal solution is of course that you place the data into cube map faces in the first place, instead of using separate textures. Note that glTextureView() supports the opposite direction: if you partly want to use a texture as a regular GL_TEXTURE_2D, and partly as the face of a cube map, you can store it in a cube map face and create a texture view to treat that face as a 2D texture where needed.
You copy the textures into the cube map faces. The most efficient approach should be using glBlitFramebuffer() (a sketch follows after these options). Of course copying data is always undesirable, but sometimes it's necessary.
This may be somewhat unconventional, but you could... not use a cube map. You could use 6 separate samplers in the shader, and bind the 6 textures you want to use as cube faces to those samplers. Then you can decide which of the six textures to sample, and what texture coordinates to use, in your shader code. This shouldn't be too difficult if you look up the logic/math that is used under the hood when you sample cube maps.
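A minimal sketch of the glBlitFramebuffer() copy from the second option, assuming six existing square color textures of side length size (srcTex, cubeTex and size are hypothetical names; completeness checks omitted for brevity):

GLuint fbos[2];
glGenFramebuffers(2, fbos);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbos[0]);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbos[1]);
for (int face = 0; face < 6; ++face)
{
    // Source: one of the six existing 2D textures.
    glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, srcTex[face], 0);
    // Destination: the matching cube map face.
    glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, cubeTex, 0);
    glBlitFramebuffer(0, 0, size, size, 0, 0, size, size,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glDeleteFramebuffers(2, fbos);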

How does vertex array work with materials?

My model structure contains a VertexList, a ColorList and a NormalList. I'm going to add a TextureCoordList, but I'm a little confused by this. Some polygons have a texture, some don't, and some use a different texture than the others. So how does it work? I render the model as one vertex buffer.
Texture coordinates don't have any information on what texture, if any, they are being used for. They're just numbers that specify where on the texture OpenGL should sample. You can render the same mesh with the same texture coordinates with different textures, or even no textures at all.
If you're using shaders, you don't even have to use the texture coordinates for texturing; you can use them for whatever you want (though in that case, consider giving the generic vertex attribute a more descriptive name).
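To handle "some polygons use one texture, some another, some none" while keeping a single vertex buffer, a common pattern is to sort the triangles by material and issue one draw call per material range. A minimal sketch (MaterialRange and the variable names are hypothetical):

struct MaterialRange {
    GLuint  texture;    // 0 can stand for "untextured"
    GLsizei indexCount; // number of indices in this range
    size_t  byteOffset; // byte offset into the index buffer
};

glBindVertexArray(vao); // one vertex buffer / VAO for the whole model
for (const MaterialRange& m : ranges) {
    glBindTexture(GL_TEXTURE_2D, m.texture);
    glDrawElements(GL_TRIANGLES, m.indexCount, GL_UNSIGNED_INT,
                   (const void*)m.byteOffset);
}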

How many depth textures can I bind to a framebuffer?

I am trying to create shadow maps of many objects in a sceneRoom, with their shadows being projected on the sceneRoom. Until now I've been able to project the shadows of the sceneRoom on itself, but I want to project the shadows of other objects in the sceneRoom on the sceneRoom's floor.
Is it possible to create multiple depth textures in one framebuffer? Or should I use several framebuffers, where each has one depth texture?
There is only one GL_DEPTH_ATTACHMENT point, so you can only have at most one attached depth buffer at any time. So you have to use some other method.
No, there is only one attachment point (well, technically two if you count GL_DEPTH_STENCIL_ATTACHMENT) for depth in an FBO. You can only attach one thing to the depth, but that does not mean you are limited to a single image.
You can use an array texture to store multiple depth images and then attach this array texture to GL_DEPTH_ATTACHMENT.
However, the only way to draw into an explicit array layer of this texture would be to use a geometry shader to do layered rendering. Since it sounds like each of the depth images you are interested in is actually a completely different set of geometry, this does not sound like the approach you want. If you used a geometry shader to do this, you would process the same set of geometry for each layer.
One thing you could consider is actually using a single depth buffer, but packing your shadow maps into an atlas. If each of your shadow maps is 512x512, you could store 4 of them in a single texture with dimensions 1024x1024 and adjust texture coordinates (and viewport when you draw into the atlas) appropriately. The reason you might consider doing this is because changing the render target (FBO state) tends to be the most expensive thing you would do between draw calls in a series of depth-only draws. You might change a few uniforms or vertex pointers, but those are dirt cheap to change.
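A minimal sketch of the atlas idea, assuming a depth-only FBO with a 1024x1024 depth texture holding four 512x512 shadow maps (shadowFbo and drawCasterDepth are hypothetical names):

glBindFramebuffer(GL_FRAMEBUFFER, shadowFbo); // bound once for all four maps
glClear(GL_DEPTH_BUFFER_BIT);
for (int i = 0; i < 4; ++i)
{
    // Tiles at (0,0), (512,0), (0,512), (512,512).
    glViewport((i % 2) * 512, (i / 2) * 512, 512, 512);
    drawCasterDepth(i); // render the i-th caster's depth-only pass
}
// When sampling, scale the shadow coordinates by 0.5 and offset them to
// the tile that belongs to the caster.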

Placing multiple images on a 3D surface

If I was to place a texture on the surface of a 3D object, for example a cube, I could use the vertices of that cube to describe the placement of this texture.
But what if I want to place multiple separate images on the same flat surface? Or suppose it is just one image, but I don't want it to appear at the edges of the surface, where the vertices are, but rather somewhere small and in the middle of the surface. I want the actual images to be chosen and placed dynamically at runtime, otherwise I could condense them offline as a single texture.
I have an approach but I want to seek advice as to whether there is a better method, or if this is perfectly acceptable:
My guess is to create multiple separate 2D quads (with a depth of 0), each with its own texture applied (they could of course share a texture atlas with different texture coordinates).
Then, I transform these quads such that they appear to be on the surface of a 3D object, such as a cube. Of course I'd have to maintain a matrix hierarchy so these quads are transformed appropriately whenever the cube is transformed, such that they appear to be attached to the cube.
While this isn't necessarily hard, I am new to texturing and would like to know if this is a normal practice for something like this.
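A minimal sketch of the matrix hierarchy described above, assuming the GLM math library (cubePosition and the offsets are hypothetical values):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// The decal quad is parented to the cube: its model matrix is the cube's
// model matrix times a fixed local transform onto one face.
glm::mat4 cubeModel  = glm::translate(glm::mat4(1.0f), cubePosition);
glm::mat4 decalLocal = glm::translate(glm::mat4(1.0f), glm::vec3(0.2f, 0.3f, 0.501f)) // just above the +Z face
                     * glm::scale(glm::mat4(1.0f), glm::vec3(0.25f));                  // small quad
glm::mat4 decalModel = cubeModel * decalLocal; // follows the cube automatically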
You could try rendering the scene and saving it as a texture, then use that texture on the surface.
Check out glCopyTexImage2D() or glCopyTexSubImage2D().
Or perhaps try using frame buffer objects.
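A minimal sketch of the copy approach, assuming the scene was just rendered to the currently bound framebuffer and tex is an existing texture with storage already allocated (names hypothetical):

glBindTexture(GL_TEXTURE_2D, tex);
// Copy a width x height region from the framebuffer origin into the
// texture at offset (0, 0), mip level 0.
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);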
But what if I want to place multiple separate images on the same flat surface?
Use multiple textures, maybe each with its own set of texture coordinates. Your OpenGL implementation will offer you a number of texture units. Each of them can supply a different texture.
glActiveTexture(GL_TEXTURE0 + i);
glBindTexture(GL_TEXTURE_2D, textures[i]); // textures[i]: the i-th texture object
glUniform1i(texturesampler[i], i); // texturesampler[i] contains the sampler uniform location of the bound program.
Or suppose it is just one image, but I don't want it to appear at the edges of the surface, where the vertices are, but rather somewhere small and in the middle of the surface.
That's where the GL_CLAMP_* texture wrap modes get their use.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER); // or GL_CLAMP_TO_EDGE
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
With those you specify texture coordinates at the vertices outside the [0, 1] interval, but instead of repeating, the image shows only once, with only the edge pixels (or the border color) repeated beyond it. If you make those pixels transparent, it's as if there was no image there.
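To get that transparent-outside behavior with GL_CLAMP_TO_BORDER, a minimal sketch setting a fully transparent border color (alpha blending is assumed to be enabled):

// Everything sampled outside [0, 1] returns this border color.
const float border[4] = { 0.0f, 0.0f, 0.0f, 0.0f }; // transparent black
glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, border);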