How to extract OpenGL Polygon Rasterization?

I have a triangle mesh that I am rendering in OpenGL. I need to get a mapping from each pixel in the resulting view to the index of the polygon associated with it. Is there an easy way to do that?
If it is possible to do that easily, what would be a reasonable data type for storing it?

Draw into a framebuffer with an integer texture backing it. In the fragment shader, write gl_PrimitiveID as the output value. This will give you a map from pixel to primitive index.
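For example, a minimal fragment shader along those lines might look like this (a sketch, assuming the FBO's color attachment 0 is an integer texture such as GL_R32I):

#version 330 core

// Write the index of the triangle covering this fragment.
// The framebuffer's color attachment must be an integer texture,
// so the output type is an integer as well.
out int primitiveIndex;

void main()
{
    primitiveIndex = gl_PrimitiveID;
}

On the CPU side you could then clear the attachment with glClearBufferiv (e.g. to -1 to mean "no primitive") and read it back with glReadPixels using GL_RED_INTEGER / GL_INT into a plain width*height array of 32-bit integers, which is also a reasonable data type for storing the map.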

Related

Are there sampler-less cubemaps in Vulkan?

In my game engine all the static textures are bound in one big descriptor set (I am using dynamic indexing for that), and now I also want to bind a cube texture at an arbitrary index inside this descriptor array.
All my textures are bound separately without any sampler (I am using the GLSL extension "GL_EXT_samplerless_texture_functions" for that).
Is there a way in Vulkan GLSL to implement cube sampling of a sampler-less texture?
Sorry about my English.
The texelFetch functions are intended for accessing a specific texel from a given mipmap level and layer of the image. The texture coordinates are explicitly in "texel space", not any other space. No translation is needed, not even denormalization.
By contrast, accessing a cubemap from a 3D direction requires work. The 3D direction must be converted into a 2D XY texel coordinate and a face within the cubemap.
Not doing translations of this sort is the entire point of texelFetch. This is what it means to access a texture without a sampler. So it doesn't make sense to ever texelFetch a cubemap (and only the texelFetch functions are available for accessing a texture without a sampler).
So if you want to access a cubemap, you want one of two things:
1. You want to pass a 3D direction, which gets converted into a 2D coordinate and face index. You need a sampler for that. Period.
2. You have a 2D coordinate in texture space and a face index, and you want to access that texel from the texture.
In this case, you're not accessing a cubemap; you're accessing a 2D array texture. Indeed, there is no such thing in Vulkan as a "cubemap image"; there is merely a 2D array image which has a flag set on it that allows it to be associated with cubemap image views. That is, a "cubemap" is a special way of accessing a 2D array. But you can just as easily create a 2D array view of the same image and bind that to the descriptor.
So do that.
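As a sketch of that second option in Vulkan GLSL (assuming the same image has been given a 2D-array view and bound at a hypothetical binding; the binding numbers, texel coordinates and face index are just for illustration):

#version 460
#extension GL_EXT_samplerless_texture_functions : require

// The cube image bound through a 2D-array view (6 layers, one per face),
// not through a cube view.
layout(set = 0, binding = 0) uniform texture2DArray uCubeFaces;

layout(location = 0) out vec4 outColor;

void main()
{
    ivec2 texel = ivec2(16, 16); // texel coordinates on the chosen face
    int   face  = 2;             // face index 0..5 (here: +Y, for example)
    outColor = texelFetch(uCubeFaces, ivec3(texel, face), 0);
}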

How does vertex array work with materials?

My model structure contains a VertexList, a ColorList and a NormalList. I'm going to add the TextureCoordList, but I'm a little confused by this. Some polygons have a texture, some don't, and some use a different texture than the others. So how does it work? I render the model as one vertex buffer.
Texture coordinates don't have any information on what texture, if any, they are being used for. They're just numbers that specify where on the texture OpenGL should sample. You can render the same mesh with the same texture coordinates with different textures, or even no textures at all.
If you're using shaders, you don't even have to use the texture coordinates for texturing; you can use them for whatever you want (though in that case, consider renaming them by using vertex attributes instead).
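As a small illustration of that point, here is a hedged fragment-shader sketch in which the same interpolated texture coordinates work with whatever texture is bound for the current material, or are ignored entirely (all names are made up):

#version 330 core

in vec2 vTexCoord;   // same coordinates regardless of which texture is bound
in vec4 vColor;

uniform sampler2D uTexture;    // bind a different texture per material/draw
uniform bool      uHasTexture; // false for untextured polygons

out vec4 fragColor;

void main()
{
    fragColor = uHasTexture ? texture(uTexture, vTexCoord) * vColor : vColor;
}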

Reverse triangle lookup from affected pixels?

Assume I have a 3D triangle mesh, and an OpenGL framebuffer to which I can render the mesh.
For each rendered pixel, I need to build a list of triangles that rendered to that pixel, even those that are occluded.
The only way I could think of doing this is to individually render each triangle from the mesh, then go through each pixel in the framebuffer to determine if it was affected by the triangle (using the depth buffer or a user-defined fragment shader output variable). I would then have to clear the framebuffer and do the same for the next triangle.
Is there a more efficient way to do this?
I considered, for each fragment in the fragment shader, writing out a triangle identifier, but GLSL doesn't allow outputting a list of integers.
For each rendered pixel, I need to build a list of triangles that rendered to that pixel, even those that are occluded.
You will not be able to do it for the entire scene. There is no structure that allows you to associate a "list" with every pixel.
You can get the list of primitives that affected a certain area using the selection buffer (see glRenderMode(GL_SELECT)).
You can get the scene's depth complexity using stencil buffer techniques.
If there are 8 triangles total, then you can get the list of triangles that affected every pixel using the stencil buffer (basically, assign a unique (1 << n) stencil value to each triangle, and OR it with the existing stencil buffer value for every stencil op).
But to solve it in the generic case, you'll need your own rasterizer and LOTS of memory to store per-pixel triangle lists. The problem is quite similar to a multi-layered depth buffer, after all.
Is there a more efficient way to do this?
Actually, yes, but it is not hardware accelerated and OpenGL has nothing to do with it. Store all of the triangles in an octree. Launch a "ray" through that octree for every pixel you want to test, and count the triangles the ray hits. That's a collision-detection problem.

OpenGL: Using shaders to create vertex lighting by using pre-calculated colormap?

First of all, I have very little knowledge of what shaders can do, and I am very interested in making vertex lighting. I am attempting to use a 3D colormap which would be used to calculate the vertex color at that position of the world, and also interpolate the color using the nearby colors from the colormap.
I can't use typical OpenGL lighting because it's probably too slow and there are a lot of lights I need to render. I am going to "render" the lights into the colormap first, and then I could manually map every vertex drawn with the corresponding color from the colormap.
...Or I could somehow automate this process, so I wouldn't have to change the color values of the vertices myself; perhaps a shader could do this for me?
Question is... is this possible, and if it is: what do I need to know to make it possible?
Edit: Note that I also need to update the lightmap efficiently, without caring about the size of the lightmap, so the update should be done only on the specific part of the lightmap I want to update.
It almost sounds like what you want to do is render the lights to your color map, then use your color map as a texture, but instead of decal mode set it to modulate mode, so it's multiplied with the existing color instead of just replacing it.
That is different in one way though: instead of just affecting the vertexes, it'll map to the individual fragments (pixels, in essence).
Edit: What I had in mind wasn't a 3D texture -- it was a cube map. Basically, create a virtual cube surrounding everything in your "world". Create a 2D texture for each face of that cube. Render your coloring to the cube map. Then, to color a vertex you (virtually) extend a ray outward from the center, through the vertex, to the cube. The pixel you hit on the cube map gives you the color of lighting for that vertex.
Updating should be relatively efficient -- you have normal 2D textures for the top, bottom, front, etc., and you update them as needed.
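A hedged sketch of that per-vertex lookup, assuming GLSL shaders are available, vertex positions are already in world space, and the lighting has been rendered into a cube map (all names are placeholders):

#version 330 core

layout(location = 0) in vec3 aPosition;   // assumed to be in world space

uniform mat4        uModelViewProjection;
uniform samplerCube uLightCube;    // the "lighting" cube map
uniform vec3        uWorldCenter;  // center of the virtual cube

out vec3 vLightColor;

void main()
{
    // Extend a ray from the cube's center through the vertex and sample
    // the cube map along that direction.
    vec3 dir    = normalize(aPosition - uWorldCenter);
    vLightColor = textureLod(uLightCube, dir, 0.0).rgb;

    gl_Position = uModelViewProjection * vec4(aPosition, 1.0);
}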
If you can't use the fixed-function pipeline functionality, the best way to do per-vertex lighting is to do all the lighting calculations per vertex in the vertex shader; when you then pass the result on to the fragment shader, it will be correctly interpolated across the face.
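For instance, a minimal per-vertex diffuse lighting sketch of that idea, with a single point light in eye space (uniform and attribute names are placeholders):

#version 330 core

layout(location = 0) in vec3 aPosition;
layout(location = 1) in vec3 aNormal;

uniform mat4 uModelView;
uniform mat4 uProjection;
uniform mat3 uNormalMatrix;
uniform vec3 uLightPosEye;   // light position in eye space
uniform vec3 uLightColor;
uniform vec3 uAmbient;

out vec3 vColor;   // interpolated across the face by the rasterizer

void main()
{
    vec4 posEye = uModelView * vec4(aPosition, 1.0);
    vec3 n      = normalize(uNormalMatrix * aNormal);
    vec3 l      = normalize(uLightPosEye - posEye.xyz);

    float diffuse = max(dot(n, l), 0.0);
    vColor = uAmbient + uLightColor * diffuse;

    gl_Position = uProjection * posEye;
}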
Another way to deal with performances issues when using a lot of light sources is to use deferred rendering as it will only do lighting calculation on the geometry that is actually visible.
That is possible, but will not be effective on current hardware.
You want to render light volumes into a 3D texture. The rasterizer works on a 2D surface, so your volumes have to be split along one of the axes. The split can be done in one of the following ways:
Different draw calls for each split
Instanced draw, with layer selection based on gl_InstanceID (will require a geometry shader)
Branch in geometry shader directly from a single draw call
In order to implement it, I would suggest reading the GL 3 specification and examples. It's not going to be easy, nor will it be fast enough for complex scenes. A geometry-shader sketch of the instanced-draw variant is shown below.
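A hedged sketch of that instanced-draw variant, assuming the 3D texture is attached as a layered framebuffer attachment (glFramebufferTexture) and the vertex shader forwards gl_InstanceID through an output named vInstance (names are placeholders):

#version 330 core

// Each instance of the draw call is routed to one slice of the layered
// 3D attachment by writing gl_Layer in the geometry shader.
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

in int vInstance[];   // vertex shader writes: vInstance = gl_InstanceID;

void main()
{
    for (int i = 0; i < 3; ++i)
    {
        gl_Position = gl_in[i].gl_Position;
        gl_Layer    = vInstance[i];   // select the 3D-texture slice
        EmitVertex();
    }
    EndPrimitive();
}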

Polygonal gradients with OpenGL

I'm wondering how I could create a gradient with multiple stops and a direction when I'm drawing polygons. Right now I'm creating gradients by changing the colors of the vertices, but this is limiting. Is there another way to do this?
Thanks
One option you may have is to render a simple polygon with a gradient to a texture, which you then use to texture your actual polygon.
Then you can rotate the source polygon and anything textured with its image will have its gradient rotate as well, without the actual geometry changing.
The most flexible way is probably to create a texture with the gradient you want, and then apply that to your geometry.
If you're using a shader, you can pass your vertex world positions into your vertex shader and they'll interpolate to your fragment shader, so for every fragment, you'll get where it is in world-space (of course you can use any space). Then it's just a matter of choosing whatever transfer function to change that value to a color. You can make any kind of elaborate algorithm using b-splines or whatever in your fragment shader.
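To make that concrete, here is a hedged fragment-shader sketch that projects the interpolated world position onto a gradient axis and maps it through a small list of stops (all names and stop values are placeholders; the stop offsets are assumed to be strictly increasing):

#version 330 core

in vec3 vWorldPos;   // world position passed through from the vertex shader

uniform vec3  uGradientStart;      // world-space point where the gradient begins
uniform vec3  uGradientDirection;  // normalized direction of the gradient
uniform float uGradientLength;     // distance over which the stops are spread

const int STOP_COUNT = 3;
uniform float uStopOffsets[STOP_COUNT]; // e.g. 0.0, 0.4, 1.0
uniform vec4  uStopColors[STOP_COUNT];

out vec4 fragColor;

void main()
{
    // 0..1 position of this fragment along the gradient axis
    float t = clamp(dot(vWorldPos - uGradientStart, uGradientDirection)
                    / uGradientLength, 0.0, 1.0);

    // Piecewise-linear interpolation between consecutive stops
    vec4 color = uStopColors[0];
    for (int i = 0; i < STOP_COUNT - 1; ++i)
    {
        float span = uStopOffsets[i + 1] - uStopOffsets[i];
        float f    = clamp((t - uStopOffsets[i]) / span, 0.0, 1.0);
        color      = mix(color, uStopColors[i + 1], f);
    }

    fragColor = color;
}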