Are there sampler-less cubemaps in Vulkan? - glsl

In my game engine all the static textures are bound to one big descriptor set (I use dynamic indexing for that), and now I also want to bind a cube texture at an arbitrary index inside this descriptor array.
All my textures are bound separately, without any sampler (I am using the GLSL extension GL_EXT_samplerless_texture_functions for that).
Is there a way in Vulkan GLSL to implement cube sampling of a sampler-less texture?
Sorry about my English.

The texelFetch functions are intended for accessing a specific texel from a given mipmap level and layer of the image. The texture coordinates are explicitly in "texel space", not any other space. No translation is needed, not even denormalization.
By contrast, accessing a cubemap from a 3D direction requires work. The 3D direction must be converted into a 2D XY texel coordinate and a face within the cubemap.
Not doing translations of this sort is the entire point of texelFetch. This is what it means to access a texture without a sampler. So it doesn't make sense to ever texelFetch a cubemap (and only the texelFetch functions are available for accessing a texture without a sampler).
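To make that concrete, here is a minimal sketch of a samplerless fetch in Vulkan GLSL (the binding layout and names are hypothetical):
#version 450
#extension GL_EXT_samplerless_texture_functions : require
layout(set = 0, binding = 0) uniform texture2D uTex; // bound with no sampler
layout(location = 0) out vec4 outColor;
void main() {
    // texelFetch takes integer texel coordinates and an explicit mip level;
    // no filtering, wrapping, or coordinate normalization is involved.
    outColor = texelFetch(uTex, ivec2(gl_FragCoord.xy), 0);
}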
So if you want to access a cubemap, you want one of two things:
You want to pass a 3D direction, which gets converted into a 2D coordinate and face index.
You need a sampler for that. Period.
You have a 2D coordinate in texture space and a face index, and you want to access that texel from the texture.
In this case, you're not accessing a cubemap; you're accessing a 2D array texture. Indeed, there is no such thing in Vulkan as a "cubemap image"; there is merely a 2D array image which has a flag set on it that allows it to be associated with cubemap image views. That is, a "cubemap" is a special way of accessing a 2D array. But you can just as easily create a 2D array view of the same image and bind that to the descriptor.
So do that.
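As a sketch of that second option in Vulkan GLSL, assuming a 2D-array image view of the cube image is bound at a hypothetical binding:
#version 450
#extension GL_EXT_samplerless_texture_functions : require
layout(set = 0, binding = 1) uniform texture2DArray uFaces; // 2D array view of the cube image
vec4 fetchCubeTexel(ivec2 texel, int face) {
    // Layers 0..5 correspond to +X, -X, +Y, -Y, +Z, -Z in Vulkan's cube layout.
    return texelFetch(uFaces, ivec3(texel, face), 0);
}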

Related

Initializing a cubemap with existing 2D textures and glTextureView

I am trying to create a cubemap from six existing textures. Texture views seem to be made for this sort of thing, and would also keep the cubemap updated when the original textures change. However, cubemaps can only be views of texture arrays (or other cubemaps), and I can't find any way to fill an array with several already existing 2D textures using glTextureView, since glTextureView can only be used with uninitialized textures.
Is there a way to do this or is drawing into the cubemap via an FBO the only way?
No, I do not believe this is supported. You can't "merge" multiple textures into a single texture without any copying.
I can think of a few options you have:
The ideal solution is of course to place the data into cube map faces in the first place, instead of using separate textures. Note that glTextureView() supports the opposite direction. If you want to use a texture partly as a regular GL_TEXTURE_2D and partly as the face of a cube map, you can store it in a cube map face, and create a texture view to treat the face as a 2D texture where that's needed.
You copy the textures into the cube map faces. The most efficient approach should be using glBlitFramebuffer(). Of course copying data is always undesirable, but sometimes it's necessary.
This may be somewhat unconventional, but you could... not use a cube map. You could use 6 separate samplers in the shader, and bind the 6 textures you want to use as cube faces to those samplers. Then you can decide which of the six textures to sample, and what texture coordinates to use, in your shader code. This shouldn't be too difficult if you look up the logic/math that is used under the hood when you sample cube maps.
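For reference, this is roughly the face-selection math a cube map lookup performs, following the rules in the GL specification (the function name is hypothetical):
int selectCubeFace(vec3 dir, out vec2 uv) {
    vec3 a = abs(dir);
    int face;
    float ma, u, v;
    if (a.x >= a.y && a.x >= a.z) {          // +X or -X is the major axis
        face = dir.x > 0.0 ? 0 : 1;
        ma = a.x;
        u = dir.x > 0.0 ? -dir.z : dir.z;
        v = -dir.y;
    } else if (a.y >= a.z) {                 // +Y or -Y is the major axis
        face = dir.y > 0.0 ? 2 : 3;
        ma = a.y;
        u = dir.x;
        v = dir.y > 0.0 ? dir.z : -dir.z;
    } else {                                 // +Z or -Z is the major axis
        face = dir.z > 0.0 ? 4 : 5;
        ma = a.z;
        u = dir.z > 0.0 ? dir.x : -dir.x;
        v = -dir.y;
    }
    uv = vec2(u, v) / ma * 0.5 + 0.5;        // project from [-1, 1] to [0, 1]
    return face;
}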

How many depth textures can I bind to a framebuffer?

I am trying to create shadow maps of many objects in a sceneRoom, with their shadows being projected onto the sceneRoom. Until now I've been able to project the shadows of the sceneRoom onto itself, but I want to project the shadows of other objects in the sceneRoom onto the sceneRoom's floor.
Is it possible to create multiple depth textures in one framebuffer, or should I use several framebuffers, each with one depth texture?
There is only one GL_DEPTH_ATTACHMENT point, so you can only have at most one attached depth buffer at any time. So you have to use some other method.
No, there is only one attachment point (well, technically two if you count GL_DEPTH_STENCIL_ATTACHMENT) for depth in an FBO. You can only attach one thing to the depth, but that does not mean you are limited to a single image.
You can use an array texture to store multiple depth images and then attach this array texture to GL_DEPTH_ATTACHMENT.
However, the only way to draw into an explicit array layer of this texture would be to use a Geometry Shader to do layered rendering (sketched below). Since it sounds like each of these depth images you are interested in is actually a completely different set of geometry, this does not sound like the approach you want: if you used a Geometry Shader to do this, you would process the same set of geometry for each layer.
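For completeness, this is what the layered-rendering mechanism looks like; a minimal sketch (GLSL 1.50, the uniform name is hypothetical):
#version 150
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;
uniform int shadowLayer; // hypothetical: which array layer to render into
void main() {
    for (int i = 0; i < 3; ++i) {
        gl_Position = gl_in[i].gl_Position;
        gl_Layer = shadowLayer; // routes the triangle to this depth layer
        EmitVertex();
    }
    EndPrimitive();
}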
One thing you could consider is actually using a single depth buffer, but packing your shadow maps into an atlas. If each of your shadow maps is 512x512, you could store 4 of them in a single texture with dimensions 1024x1024 and adjust texture coordinates (and viewport when you draw into the atlas) appropriately. The reason you might consider doing this is because changing the render target (FBO state) tends to be the most expensive thing you would do between draw calls in a series of depth-only draws. You might change a few uniforms or vertex pointers, but those are dirt cheap to change.
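A sketch of the lookup side of such an atlas, assuming 512x512 tiles packed into a 1024x1024 texture (GLSL 1.30+ for the bitwise operators; the tile numbering is hypothetical):
vec2 atlasUV(vec2 shadowUV, int tile) {
    // tiles 0..3 occupy the quadrants (0,0), (1,0), (0,1), (1,1)
    vec2 offset = vec2(tile & 1, tile >> 1);
    return (shadowUV + offset) * 0.5; // each tile covers half the atlas per axis
}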

Placing multiple images on a 3D surface

If I was to place a texture on the surface of a 3D object, for example a cube, I could use the vertices of that cube to describe the placement of this texture.
But what if I want to place multiple separate images on the same flat surface? Or suppose it is just one image, but I don't want it to appear at the edges of the surface, where the vertices are, but rather somewhere small and in the middle of the surface. I want the actual images to be chosen and placed dynamically at runtime, otherwise I could condense them offline as a single texture.
I have an approach but I want to seek advice as to whether there is a better method, or if this is perfectly acceptable:
My guess is to create multiple separate 2D quads (with depth of 0), each with a texture associated with them and placed on them (they could of course be a texture atlas with different texture coordinates).
Then, I transform these quads such that they appear to be on the surface of a 3D object, such as a cube. Of course I'd have to maintain a matrix hierarchy so these quads are transformed appropriately whenever the cube is transformed, such that they appear to be attached to the cube.
While this isn't necessarily hard, I am new to texturing and would like to know if this is a normal practice for something like this.
You could try rendering a scene and saving that as a texture then use that texture on the surface.
Check out glCopyTexImage2D() or glCopyTexSubImage2D().
Or perhaps try using frame buffer objects.
But what if I want to place multiple separate images on the same flat surface?
Use multiple textures, maybe each with its own set of texture coordinates. Your OpenGL implementation will offer you a number of texture units. Each of them can supply a different texture.
glActiveTexture(GL_TEXTURE0 + i);
glBindTexture(GL_TEXTURE_2D, textures[i]); // textures[i] (hypothetical): the texture object for unit i
glUniform1i(texturesampler[i], i); // texturesampler[i] contains the sampler uniform location of the bound program.
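On the shader side this corresponds to something like the following fragment shader (a sketch in the same GLSL era as the calls above; all names hypothetical):
uniform sampler2D tex0; // texture unit 0
uniform sampler2D tex1; // texture unit 1
varying vec2 uv0;       // per-vertex coordinates for the first image
varying vec2 uv1;       // per-vertex coordinates for the second image
void main() {
    vec4 base  = texture2D(tex0, uv0);
    vec4 decal = texture2D(tex1, uv1);
    gl_FragColor = mix(base, decal, decal.a); // layer the second image on top
}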
Or suppose it is just one image, but I don't want it to appear at the edges of the surface, where the vertices are, but rather somewhere small and in the middle of the surface.
That's where GL_CLAMP… texture wrap modes get their use.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); // or GL_CLAMP_TO_BORDER
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
With those you specify texture coordinates at the vertices outside the [0, 1] interval; instead of repeating, the image will show only once, with only the edge pixels extended beyond it. If you make the edge pixels transparent, it's as if there was no image there.

OpenGL - Associate Texture Coordinates Array With Index Array Rather Than Vertex Array?

Whenever we use an index array to render textured polygons with glDrawElements, we can provide an array of vertices and an array of texture coordinates. Then each index in the index array refers to a vertex at some position in the vertex array and the corresponding texture coordinate at the same position in the texture array. Now, if for instance several separate primitives (like QUADS) share one vertex, but require different texture coordinates for that vertex, we have to duplicate that vertex in our array as many times as we have different texture coordinates for it. Therefore, it would be much more convenient if the texture coordinate array could be associated with the positions in the index array. That way no vertex duplication would be necessary to associate one specific vertex with different texture coordinates.
Is this possible? If yes, what syntax to use?
No. Not in a simple way.
You could use buffer textures and shader logic to implement it. But there is no simple API to make attributes index the way you want. All attributes are sampled from the same index (except when instanced array divisors are used, but that won't help you either).
Note that doing this will be a memory/performance tradeoff. Using buffer textures to access vertex data will take up less memory, but it will be significantly slower and more limiting than just using regular attributes. You won't have access to normalized vertex attributes, so compressing the vertex data will require explicit shader logic. And accessing buffer textures is just slower overall.
You should only do this if memory is at a premium.
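If you do go the buffer-texture route, the idea is to draw non-indexed and pull each position through the index array yourself, while per-index attributes like texture coordinates come in normally. A sketch of that vertex-pulling setup (all names hypothetical):
// draw with glDrawArrays(GL_TRIANGLES, 0, elementCount) instead of glDrawElements
#version 140
uniform isamplerBuffer indexBuf; // your "index array", one int per element drawn
uniform samplerBuffer posBuf;    // unique positions, one vec3 per entry
uniform mat4 mvp;
in vec2 texcoord;                // regular attribute: one coordinate per index
out vec2 uv;
void main() {
    int posIndex = texelFetch(indexBuf, gl_VertexID).r; // gl_VertexID counts elements
    vec3 pos = texelFetch(posBuf, posIndex).xyz;
    gl_Position = mvp * vec4(pos, 1.0);
    uv = texcoord;
}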
Now, if for instance several separate primitives (like QUADS) share one vertex, but require different texture coordinates for that vertex, we have to duplicate that vertex in our array as many times as we have different texture coordinates for it.
If the texture coordinates differ on primitives sharing a vertex position, then the vertices as a whole are not shared! A vertex is a single vector consisting of
position
normal
texture coordinate(s)
other attributes
If you alter any of these, you end up with a different vertex. Because of that, vertex sharing does not work the way you thought.
You can duplicate the vertices so that one has one texture coordinate and the other has the other. The only downside of that is if you need to morph the surface: you may move one vertex but not the other. Of course it is possible to do it in immediate mode, i.e. just run through a loop and supply a different texture coordinate as you go, but that would not be VBO-based and would be much slower.

Avoiding glBindTexture() calls?

My game renders lots of cubes, each of which randomly has 1 of 12 textures. I already Z-order the geometry, so I can't just render all the cubes with texture 1, then 2, then 3, etc., because that would defeat the Z ordering. I already keep track of the previous texture, and if they are equal I do not call glBindTexture(), but it's still way too many calls to this. What else can I do?
Thanks
The ultimate and fastest way would be to have an array texture (of normal 2D textures or cubemaps). Then dynamically fetch the texture layer according to an id stored in each cube's instance data (or per-face data, if you want a different texture per cube face) using the GLSL built-ins gl_InstanceID or gl_PrimitiveID.
With this implementation you would bind your texture array just once.
This would of course require the gpu_shader4 and texture_array extensions:
http://developer.download.nvidia.com/opengl/specs/GL_EXT_gpu_shader4.txt
http://developer.download.nvidia.com/opengl/specs/GL_EXT_texture_array.txt
I have used this mechanism (with D3D10, but the principle applies here too) and it worked very well.
I had to map different textures onto sprites (3D points of a constant screen size of 9x9 or 15x15 pixels, IIRC), each texture indicating a different meaning to the user.
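A sketch of what this looks like in shader code (GLSL 1.40-era, matching the core promotion of those extensions; all names hypothetical):
// vertex shader: pick this instance's texture layer
#version 140
uniform isamplerBuffer textureIds; // one texture id per cube instance
uniform mat4 mvp;
in vec3 position;
in vec2 texcoord;
flat out int layer;
out vec2 uv;
void main() {
    layer = texelFetch(textureIds, gl_InstanceID).r;
    gl_Position = mvp * vec4(position, 1.0);
    uv = texcoord;
}

// fragment shader: one sampler2DArray holds all 12 textures, bound once
#version 140
uniform sampler2DArray cubeTextures;
flat in int layer;
in vec2 uv;
out vec4 fragColor;
void main() {
    fragColor = texture(cubeTextures, vec3(uv, float(layer)));
}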
Edit:
If you don't feel comfortable with all the shader stuff, I would simply sort the cubes by texture and not Z-order the geometry, then measure the performance gains.
Also, I would try to add a pre-Z pass where you render all your cubes to the Z buffer only, then render the normal scene, and see if it speeds things up (if you are fragment bound, it could help).
You can pack your textures into one texture atlas and offset the texture coordinates accordingly.
glMatrixMode(GL_TEXTURE) will also allow you to perform transformations in texture space (to avoid changing all the texture coordinates by hand).
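In old-style GLSL the texture matrix set this way is visible as a built-in, so a vertex shader can honor it like this (a minimal sketch using the compatibility-profile built-ins):
void main() {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    // gl_TextureMatrix[0] is whatever was loaded via glMatrixMode(GL_TEXTURE)
    gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
}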
Also from NVIDIA:
Bindless Graphics