How to render closest vertex in OpenGL fragment shader [duplicate] - opengl

This question already has an answer here:
Reflection and refraction impossible without recursive ray tracing?
(1 answer)
Closed 4 years ago.
I want to render a 2D image rendered from a 3D model with model, view and perspective projection transforms. However for each pixel/fragment in the output image, I want to store a representation of the index of the vertex in the original mesh which is physically closest to the point where the ray from the camera centre intersects the mesh.
I have a good understanding of the math involved in doing this and can build appropriate raytracing code 'longhand' to get this result but wanted to see if it was possible to achieve in OpenGL, via e.g. a fragment shader.
I'm not an OpenGL expert, but my initial reading suggests a possible approach: set a render target for the fragment shader that supports integer values (to store indices), pass the entire set of mesh coordinates to the fragment shader (e.g. as a uniform), and then search for the nearest coordinate after back-transforming gl_FragCoord to model space.
My concern is that this would perform hideously - my mesh has about 10,000 vertices.
My question is: does this seem like a poor use case for OpenGL? If not, is my approach reasonable? If not, what would you suggest instead?
Edit: While the indicated answer does contain the kernel of a solution to this question, it is not in any way a duplicate; it's a different question with a different answer that merely share common elements (ray tracing). Someone searching for an answer to this question is highly unlikely to find the proposed duplicate.

First, passing 10,000 vertices as a uniform is not good practice in GLSL. You can use a UBO, or create a data texture and upload it as a texture uniform; see the following post:
GLSL: Replace large uniform int array with buffer or texture
I am not familiar with ray tracing. However, I think that in most cases uploading a large amount of data to the GPU needs to go through a texture uniform.
https://www.opengl.org/discussion_boards/showthread.php/200487-Ray-intersection-with-GLSL?p=1292112&viewfull=1#post1292112
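For illustration only, here is a rough sketch of what the fragment shader could look like with the vertex positions in a buffer texture and an integer render target. All names are illustrative; the model-space position is passed in from the vertex shader rather than reconstructed from gl_FragCoord (which avoids the inverse transform), and the brute-force loop echoes the performance concern above.

    // Fragment shader source, embedded in the C++ host code as a raw string.
    // texelFetch uses integer indices and no filtering, so it suits raw vertex data.
    const char* fsSource = R"(
        #version 330 core
        uniform samplerBuffer meshPositions;   // one RGB32F texel per mesh vertex
        uniform int vertexCount;               // e.g. ~10000
        in vec3 modelPos;                      // model-space position interpolated from the VS
        out int nearestIndex;                  // written to an integer (GL_R32I) render target

        void main() {
            float bestDist = 1e30;
            int   best     = -1;
            for (int i = 0; i < vertexCount; ++i) {           // brute-force nearest-vertex search
                vec3 p = texelFetch(meshPositions, i).xyz;
                float d = dot(p - modelPos, p - modelPos);    // squared distance suffices for comparison
                if (d < bestDist) { bestDist = d; best = i; }
            }
            nearestIndex = best;
        }
    )";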

Related

multiple shadowmaps in deferred shading

I have a question about the usage of multiple shadow maps in deferred shading. I have implemented a single shadow map in forward shading.
In forward rendering, in the vertex shader of each object I calculated its position in light space and compared it to the shadow map in the fragment shader. I can see that working with multiple maps, with an array of projection matrices and an array of shadow maps as uniforms.
In the case of deferred shading I was wondering what is the common practice. The way I see it there are a few options:
In the deferred pass, for each pixel I calculate its position in each light space and compare it to the corresponding shadow map (a sketch of this option follows below). That way I do the calculation for each fragment and each matrix, which might be too expensive?
In forward rendering I calculate the position of each vertex in each light's projection and output a G-buffer target for each position. I then do the comparison in the deferred pass. (That way I compute the position only once per vertex instead of once per pixel, but I need a shadow map and a light-space position in the G-buffer for each shadow, which seems suboptimal.)
A bit like 2, but I do the shadow test in the forward pass. That way I can pack, for each shadow, a boolean saying whether the fragment is lit into a single integer texture. The problem is that I can't do soft shadows that way.
Maybe something better?
To sum up: 1 needs many matrix multiplications but is easy to implement; 2 needs few matrix multiplications but many textures and outputs (which are limited by the graphics card); and 3 needs few outputs and few calculations per pixel, but I can't get soft shadows because the result is an array of booleans.
I am not doing this really for better performance but mostly to learn new things. I'm open to suggestions. Maybe I'm misunderstanding something. Is there a standard way to do it?
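For reference, a minimal sketch of what option 1 might look like in the deferred lighting pass, assuming the G-buffer stores world-space position; all names here are illustrative:

    // Deferred lighting fragment shader, embedded as a raw string in the host code.
    // GLSL 4.00+ allows indexing a sampler array with a dynamically uniform loop counter.
    const char* deferredFsSource = R"(
        #version 400 core
        #define NUM_SHADOWS 4
        uniform sampler2D gPosition;                       // world-space position from the G-buffer
        uniform sampler2DShadow shadowMaps[NUM_SHADOWS];   // depth-comparison samplers
        uniform mat4 lightSpaceMatrices[NUM_SHADOWS];      // light projection * view, one per shadow map
        in vec2 uv;
        out vec4 fragColor;

        void main() {
            vec3 worldPos = texture(gPosition, uv).xyz;
            float lit = 1.0;
            for (int i = 0; i < NUM_SHADOWS; ++i) {
                vec4 ls = lightSpaceMatrices[i] * vec4(worldPos, 1.0);  // option 1: per-pixel transform
                vec3 proj = ls.xyz / ls.w * 0.5 + 0.5;                  // NDC -> [0,1] shadow-map coords
                lit *= texture(shadowMaps[i], proj);                    // hardware depth compare, PCF-friendly
            }
            fragColor = vec4(vec3(lit), 1.0);
        }
    )";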

How to use the hardware's 3D texture sampling with a flipbook style volume texture?

A question that sort of addresses the problem, and another question asking something related.
I have a 2D texture that has 12x12 slices of a volume layered in a grid like this:
What I am doing now is calculating the offset and doing the sampling based on the 3D coordinate inside the volume myself, in HLSL code. I have followed the descriptions found here and here, where the first link also talks about 3D sampling from a 2D sliced texture. I have also heard that modern hardware has the ability to sample 3D textures.
That being said, I have not found any description or example code that samples such a 3D texture. What HLSL, or OpenGL, function can I use to sample this flipbook type of texture? If you can, please add a small example snippet with explanations. If you can't, pointing me to one or to the documentation would be appreciated. I have found no sampler function where I can provide the number of layers in the U and V directions, so I don't see how it can sample correctly without knowing how many slices there are per axis.
If I am misunderstanding this completely I would also appreciate being told so.
Thank you for your help.
OpenGL has had support for true 3D textures for ages (3D texture support already appeared in OpenGL 1.2). With that you upload your volume not as a "flipbook" but simply as a stack of 2D images, using the function glTexImage3D. In GLSL you then just use the regular texture access function, but with a sampler3D and a 3-component texture coordinate vector (except in older versions of GLSL, i.e. before GLSL 1.50 / OpenGL 3, where you use texture3D).
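A minimal sketch of the upload and the sampling, assuming the slice data has already been rearranged into one contiguous width x height x depth array of RGBA8 texels (variable names are illustrative):

    // Upload the stack of 2D slices as one true 3D texture (requires an active GL context).
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_3D, tex);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8,
                 width, height, depth,          // depth = number of slices (144 for a 12x12 flipbook)
                 0, GL_RGBA, GL_UNSIGNED_BYTE,
                 voxelData);                    // width * height * depth RGBA8 texels

    // GLSL side: sample with a normalized 3D coordinate, no manual slice math needed.
    const char* samplingSnippet = R"(
        uniform sampler3D volumeTex;
        // ...
        vec4 voxel = texture(volumeTex, vec3(u, v, w));  // w selects along the slice axis
    )";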

How to find corresponding primitives or vertices for given pixels on the screen in OpenGL

Suppose a sphere was calculated and drawn using OpenGL, such as is done here, for example.
I am interested in learning how one would go about finding the specific vertex that corresponds to an arbitrary given pixel on the screen (x, y).
I've already read about ray casting and the selection buffer; however, since I need to iterate over a large number of pixels (let's say >10k) and find all the corresponding vertices, those solutions didn't seem suitable for this kind of task.
Ideally, I would like to find a way that is both fast and modern in the sense of modern OpenGL.
Is there an "out-of-the-box"-solution for this or do I need to write a shader? In either case, any details you could give would be highly appreciated!
Add an integer "face ID" vertex attribute, make it the same for each vertex of a given triangle
Pass the face ID through the VS to the FS & write it to an integer FBO
Read back the FBO pixels (use PBOs for async FBO readback) in the desired location(s), giving the face ID at each point
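A minimal sketch of that pipeline, assuming a per-triangle ID has already been baked into the vertex data (all names are illustrative; the FBO setup and shader compilation are omitted):

    // GLSL shaders: the ID must use the `flat` qualifier so it is never interpolated.
    const char* vsSource = R"(
        #version 330 core
        layout(location = 0) in vec3 position;
        layout(location = 1) in int  faceID;   // same value for all three vertices of a triangle
        flat out int vFaceID;
        uniform mat4 mvp;
        void main() {
            vFaceID = faceID;
            gl_Position = mvp * vec4(position, 1.0);
        }
    )";
    const char* fsSource = R"(
        #version 330 core
        flat in int vFaceID;
        out int outFaceID;                     // bound to a GL_R32I color attachment
        void main() { outFaceID = vFaceID; }
    )";

    // C++ side: the integer attribute must be set up with glVertexAttribIPointer,
    // and the ID is read back per pixel (wrap this in a PBO for asynchronous readback).
    GLint faceID = -1;
    glBindFramebuffer(GL_READ_FRAMEBUFFER, idFbo);
    glReadBuffer(GL_COLOR_ATTACHMENT0);
    glReadPixels(x, y, 1, 1, GL_RED_INTEGER, GL_INT, &faceID);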

How to multiply vertices with model matrix outside the vertex shader

I am using OpenGL ES2 to render a fairly large number of mostly 2d items and so far I have gotten away by sending a premultiplied model/view/projection matrix to the vertex shader as a uniform and then multiplying my vertices with the resulting MVP in there.
All items are batched using texture atlases and I use one MVP per batch. So all my vertices are relative to the translation of that MVP.
Now I want to have rotation and scaling for each of the separate items, which means I need a different model matrix for each of them. So I modified my vertex format to include the model matrix (16 floats!), added a mat4 attribute to my shader, and it all works well. But I'm kind of disappointed with this solution since it dramatically increased the vertex size.
So as I was staring at my screen trying to think of a different solution, I thought about transforming my vertices to world space before I send them over to the shader, or even to screen space if it's possible. The vertices I use are unnormalized coordinates in pixels.
So the question is, is such a thing possible? And if yes, how do you do it? I can't think why it shouldn't be, since it's just maths, but after a fairly long search on Google it doesn't look like a lot of people are actually doing this...
Strange, because if it is indeed possible, it would be quite a major optimization in cases like this one.
If the number of matrices per batch is limited, then you can pass all those matrices as uniforms (preferably in a UBO) and expand the vertex data with an index which specifies which matrix to use.
This is similar to GPU skinning used for skeletal animation.
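A minimal vertex-shader sketch of that idea, assuming a GL version with uniform blocks (ES 3.0 / desktop GL 3.1+) and at most 64 models per batch; the names and the limit are illustrative:

    // One matrix index per vertex (1 float) instead of a full mat4 attribute (16 floats).
    const char* vsSource = R"(
        #version 300 es
        layout(std140) uniform Models {
            mat4 model[64];          // all model matrices of the current batch
        };
        uniform mat4 viewProjection;
        in vec3 position;
        in float modelIndex;         // which matrix this vertex uses
        void main() {
            mat4 m = model[int(modelIndex)];
            gl_Position = viewProjection * m * vec4(position, 1.0);
        }
    )";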

How to use vertex, normal and texture indices together?

I'm writing a small rendering engine using C++ and DirectX 11. I've managed to render models using indexed vertices. Now I want to support textures and vertex normals as well. I'm using Wavefront OBJ files, which use indices to reference the correct texture and normal coordinates (much like indexed vertices). This is the first time I'm doing something like this with vertices, textures and normals combined, so this is all a bit new to me.
The problem I'm facing is that the number of indices for vertices, normals and texture coordinates are not the same and I'm trying to find a right way to use all these indices in a vertex shader. To illustrate the problem more clearly, I've made some images.
The left image is a wireframe of a simple pyramid object and the right image is its UV coordinate layout (with the bottom of the pyramid in the center). In the left image you can see that the pyramid has 4 vertices, so there are 4 vertex indices. The right UV layout has 6 UV coordinates, so there are 6 texture indices. Each face of the pyramid has a normal of its own, so there are only 4 normal indices.
I've found this question while searching on SO and it seems to work in theory (I haven't tried it yet). Is this a common way to solve a problem like this or are there better ways? What would you recommend I do?
Thanks :)
P.S.: my vertex, normal and texture data is stored in separate arrays on the cpu side.
I would recommend analyzing the problem by narrowing it down to the simplest case; it helped me to use only two squares (a flat surface, 4 triangles) and try to correctly map the texture onto both squares, as shown in the image.
Using this simple case you have this data:
6 vertices
6 UVs
6 normals (one for each vertex, even though some vertices might share the same normal)
12 indices
In theory this suffices to describe the whole mesh, but if you test it you will find that the texture won't be fully mapped onto one of the squares. That's because vertices 2 and 3 each need two different UV coordinates; it's impossible to describe that with so few vertices, you need more vertices!
To fix this problem, the data you gather from whatever exporter you choose has to duplicate vertices whenever clashes like this occur. For the example above, two more vertices are needed; this duplicates the normal data and adds two more elements to the index array, but now it correctly maps the UV coordinates.
Finally, for your pyramid, those 4 vertices can't carry all 6 UV coordinates, so some vertices will have to be duplicated to complete the UV layout, and the index array will grow.
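In practice this means building one combined vertex per unique position/UV/normal index triple and re-indexing the faces against that. A rough CPU-side sketch of that step, assuming the OBJ data has already been parsed into the separate arrays you describe (all names are illustrative):

    #include <cstdint>
    #include <map>
    #include <tuple>
    #include <vector>

    struct Float3 { float x, y, z; };
    struct Float2 { float u, v; };
    struct Vertex { Float3 pos; Float2 uv; Float3 normal; };  // must match your input layout

    // One (position, uv, normal) index triple per face corner, as read from the OBJ "f" lines.
    struct ObjCorner { int posIdx, uvIdx, normalIdx; };

    // Builds a single vertex buffer plus index buffer from the three separate OBJ arrays,
    // duplicating a vertex whenever the same position appears with a different UV or normal.
    void BuildBuffers(const std::vector<Float3>& positions,
                      const std::vector<Float2>& uvs,
                      const std::vector<Float3>& normals,
                      const std::vector<ObjCorner>& corners,
                      std::vector<Vertex>& outVertices,
                      std::vector<uint32_t>& outIndices)
    {
        std::map<std::tuple<int, int, int>, uint32_t> cache;  // index triple -> final vertex index
        for (const ObjCorner& c : corners)
        {
            const auto key = std::make_tuple(c.posIdx, c.uvIdx, c.normalIdx);
            auto it = cache.find(key);
            if (it == cache.end())
            {
                outVertices.push_back({ positions[c.posIdx], uvs[c.uvIdx], normals[c.normalIdx] });
                it = cache.emplace(key, static_cast<uint32_t>(outVertices.size() - 1)).first;
            }
            outIndices.push_back(it->second);
        }
    }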
The part I am not sure about is whether the OBJ exporter actually gives you the data ready to read in this form. You can either try to figure that out with your exporter, or (which I would rather recommend) try another format that exports data already in the right layout, ready to be pushed into the DirectX 11 graphics pipeline.
I hope the example that helped me understand this problem will help you as well.
Take a look at some basic DirectX 11 tutorials or, if you can, this book.
Ah, it seems to me that you need to visit: http://www.rastertek.com/tutdx11.html
See what they have there and give it a whirl.