I'm a newbie graphics programmer who wants to implement ray tracing on the GPU.
Since rasterization is faster at rendering primary visibility, I thought it would be better to render a G-buffer first and then compute ray-traced illumination from it in a compute shader.
However, I'm not sure how to approach ray tracing in a shader. Traversing a BVH for ray intersection tests is essential for ray tracing, but sending all of the scene's object data to VRAM doesn't seem very viable (for example, what if there are millions of triangles in the scene?). Is there a known implementation for this case? If not, how should I tackle the problem?
I am working on a project where I have to implement voxel cone tracing for indirect light in C++/OpenGL. I already have a deferred renderer set up, but most of the VCT examples I could find draw the scene once for voxelization and once more with the cone-tracing shaders. Is it possible to run cone tracing shaders over a fullscreen quad and sample vertex data from the GBuffer or is that generally a stupid idea? Do I lose accuracy because I only have per pixel vertex data?
Is it possible to run cone tracing shaders over a fullscreen quad and sample vertex data from the GBuffer or is that generally a stupid idea?
Yes, however that's not voxel cone tracing anymore. That's Screen-Space Global Illumination (SSGI) instead. You can think of the voxelized scene in VCT as a 3D GBuffer, which makes all the difference between "screen space" and "full scene".
Do I lose accuracy because I only have per pixel vertex data?
Absolutely. All screen space approximations suffer from the same set of artifacts. They do not account for surfaces that aren't directly visible on the screen (either out of frame or occluded by visible geometry). Most noticeably, when the camera moves and objects enter or exit the frame, the reflections on visible surfaces would also change unrealistically.
A question worth asking is: what's your motivation for trying to do both?
When you do voxel cone tracing you are trying to solve the exact same problem you would be solving with deferred rendering, and now you have the overhead of both techniques. If you are already willing to accept the overhead of voxel cone tracing, then it's better to fully commit to that technique.
The reason is simple: if you are doing voxel cone tracing then you already have a 3D structure of some sort (a sparse voxel octree, an actual 3D texture, or something else). That is essentially a 3D GBuffer.
If your idea is simply to eliminate the need for such a structure and use the existing planar GBuffer instead, then you are introducing artifacts that do not appear with traditional SSRT techniques.
In essence trying both at the same time is likely to give you the worst of both worlds rather than the best.
To draw a sphere, one does not need to know anything but its position and radius. Thus, rendering a sphere by passing a triangle mesh sounds very inefficient unless you need per-vertex colors or other such features. Despite googling, searching the D3D11 documentation, and reading Introduction to 3D Game Programming with DirectX 11, I have not been able to work out the following:
Is it possible to draw a sphere by passing only the position and radius of it to the GPU?
If not, what is the main principle I have misunderstood?
If yes, how to do it?
My ultimate goal is to pass more parameters later on which will be used by a shader effect.
You will need to implement a geometry shader. This shader should take the sphere's center and radius as input and emit a bunch of vertices for rasterization. In general this technique is called point sprites.
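As a rough illustration, a geometry shader can expand each incoming point into a camera-facing quad. This is a GLSL sketch (the HLSL version for D3D11 is structurally the same); the variable names are made up, and the vertex shader is assumed to output the sphere center in view space:

```glsl
#version 330 core
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

in float vRadius[];        // per-sphere radius, passed through by the vertex shader
uniform mat4 uProj;        // projection matrix; gl_in positions are in view space
out vec2 gQuadCoord;       // quad-local coordinate, handy for shading the sphere later

void main() {
    vec4 center = gl_in[0].gl_Position;   // view-space sphere center
    float r = vRadius[0];                 // pad slightly if you need a tight bound under perspective
    vec2 corners[4] = vec2[4](vec2(-1.0, -1.0), vec2(1.0, -1.0),
                              vec2(-1.0,  1.0), vec2(1.0,  1.0));
    for (int i = 0; i < 4; ++i) {
        gQuadCoord = corners[i];
        // offset in view space so the quad always faces the camera
        gl_Position = uProj * (center + vec4(corners[i] * r, 0.0, 0.0));
        EmitVertex();
    }
    EndPrimitive();
}
```

The fragment shader then typically ray-casts the analytic sphere inside that quad, discarding pixels that miss it and writing the correct depth; that is the usual "sphere impostor" approach.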
One option would be to use tessellation.
https://en.wikipedia.org/wiki/Tessellation_(computer_graphics)
Most of the mesh will be generated on the GPU side.
Note:
In the end you still have more data going through the shaders, because the sphere will be split into triangles that will each be rendered individually on the screen.
But the split is done on the GPU side.
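For completeness, here is a minimal GLSL sketch of what the two tessellation stages could look like (the D3D11 hull/domain shader pair is analogous). It assumes each sphere is submitted as a one-vertex patch (glPatchParameteri(GL_PATCH_VERTICES, 1)) whose vertex carries the center and radius; all names are illustrative:

```glsl
// --- tessellation control shader: one control point per patch (the sphere) ---
#version 400 core
layout(vertices = 1) out;

in vec4 vCenterRadius[];          // from the vertex shader: xyz = center, w = radius
patch out vec4 tcCenterRadius;

void main() {
    tcCenterRadius = vCenterRadius[gl_InvocationID];
    // fixed subdivision level; could be made distance-dependent instead
    gl_TessLevelOuter[0] = 16.0;
    gl_TessLevelOuter[1] = 16.0;
    gl_TessLevelOuter[2] = 16.0;
    gl_TessLevelOuter[3] = 16.0;
    gl_TessLevelInner[0] = 16.0;
    gl_TessLevelInner[1] = 16.0;
}

// --- tessellation evaluation shader (separate file): map the quad domain onto the sphere ---
#version 400 core
layout(quads, equal_spacing, ccw) in;

patch in vec4 tcCenterRadius;
uniform mat4 uViewProj;
out vec3 teNormal;

const float PI = 3.14159265359;

void main() {
    float phi   = gl_TessCoord.x * 2.0 * PI;   // longitude
    float theta = gl_TessCoord.y * PI;         // latitude
    vec3 n = vec3(sin(theta) * cos(phi), cos(theta), sin(theta) * sin(phi));
    teNormal = n;                              // unit normal; also the offset direction
    vec3 worldPos = tcCenterRadius.xyz + n * tcCenterRadius.w;
    gl_Position = uViewProj * vec4(worldPos, 1.0);
}
```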
While you can generate a sphere from a single point on the GPU, it's generally not very efficient. With higher-end GPUs you could use hardware tessellation, but even that would be better done a different way.
The better solution is to use instancing: render many instances of the same VB/IB of sphere geometry, translated and scaled to different positions and sizes.
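A minimal GLSL sketch of the vertex-shader side, assuming one vec4 per instance (xyz = center, w = radius) fed as an instanced attribute; the attribute locations and names are illustrative:

```glsl
#version 330 core
// per-vertex attributes of the shared unit-sphere mesh (one VB/IB reused for all spheres)
layout(location = 0) in vec3 aPos;
layout(location = 1) in vec3 aNormal;
// per-instance attribute: xyz = sphere center, w = radius
// (set glVertexAttribDivisor(2, 1) and draw with glDrawElementsInstanced)
layout(location = 2) in vec4 aCenterRadius;

uniform mat4 uViewProj;
out vec3 vNormal;

void main() {
    vec3 worldPos = aCenterRadius.xyz + aPos * aCenterRadius.w;
    vNormal = aNormal;   // uniform scale, so the normal is unchanged
    gl_Position = uViewProj * vec4(worldPos, 1.0);
}
```

In D3D11 the equivalent is a per-instance element in the input layout (D3D11_INPUT_PER_INSTANCE_DATA) plus DrawIndexedInstanced.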
I'm trying to develop a high-level understanding of the graphics pipeline. One thing that doesn't make much sense to me is why the geometry shader exists. Both the tessellation and geometry shaders seem to do the same thing to me. Can someone explain what the geometry shader does differently from the tessellation shaders that justifies its existence?
The tessellation shader is for variable subdivision. An important part is adjacency information, so you can do smoothing correctly and not wind up with gaps. You could do some limited subdivision with a geometry shader, but that's not really what it's for.
Geometry shaders operate per-primitive. For example, if you need to do stuff for each triangle (such as this), do it in a geometry shader. I've heard of shadow volume extrusion being done. There's also "conservative rasterization" where you might extend triangle borders so every intersected pixel gets a fragment. Examples are pretty application specific.
Yes, they can also generate more geometry than the input but they do not scale well. They work great if you want to draw particles and turn points into very simple geometry. I've implemented marching cubes a number of times using geometry shaders too. Works great with transform feedback to save the resulting mesh.
Transform feedback has also been used with the geometry shader to do more compute operations. One particularly useful mechanism is that it does stream compaction for you (packs its varying amount of output tightly so there are no gaps in the resulting array).
The other very important thing a geometry shader provides is routing to layered render targets (texture arrays, faces of a cube, multiple viewports), something which must be done per-primitive. For example, you can render cube shadow maps for point lights in a single pass by duplicating the geometry six times and projecting each copy onto one of the cube's faces.
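In GLSL that single-pass cube shadow map typically looks something like this sketch (the uniform and varying names are placeholders; the vertex shader is assumed to pass world-space positions through):

```glsl
#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 18) out;   // 6 faces * 3 vertices per triangle

uniform mat4 uShadowMatrices[6];   // one light view-projection matrix per cube face
out vec4 gWorldPos;                // the fragment shader uses this to write distance to the light

void main() {
    for (int face = 0; face < 6; ++face) {
        gl_Layer = face;                          // route this copy to the matching cube-map face
        for (int i = 0; i < 3; ++i) {
            gWorldPos = gl_in[i].gl_Position;     // world-space position from the vertex shader
            gl_Position = uShadowMatrices[face] * gWorldPos;
            EmitVertex();
        }
        EndPrimitive();
    }
}
```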
Not exactly a complete answer but hopefully gives the gist of the differences.
See Also:
http://rastergrid.com/blog/2010/09/history-of-hardware-tessellation/
I've been reading a lot of articles about ray marching in GLSL shaders (such as this one: http://www.iquilezles.org/www/articles/rmshadows/rmshadows.htm), and it raised some questions that I wanted to ask.
In my application, I am rendering a scene with a couple of meshes and I wanted to experiment with shadows. While I seem to somewhat understand the concept of how raymarching works, I don't quite understand how to properly implement this in GLSL. I know how to compute the intersection of a ray and a plane but how would this be handled through GLSL shaders?
This thread (https://gamedev.stackexchange.com/questions/67719/how-do-raymarch-shaders-work) mentions that you're measuring the distance between the start of the ray and the 'surface'. Is the surface it refers to the mesh? Do I need to send an array of planes/points that make up the mesh to the shader in order to compute the ray intersection test? Do I need to use the depth buffer to determine the distance to the surface?
It depends on what your shader does versus what your rendering engine does. In pure demo shaders like those on Shadertoy (see its shadow examples), the whole scene is encoded in the shader, typically as a signed distance function, so there is no problem shooting secondary rays or more (aside from performance).
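To make that concrete, a Shadertoy-style shader describes the scene as a distance function and marches it, along the lines of the article you linked. A minimal sketch with made-up names:

```glsl
// signed distance from point p to the whole scene: a unit sphere resting on a ground plane
float sceneSDF(vec3 p) {
    float sphere = length(p - vec3(0.0, 1.0, 0.0)) - 1.0;
    float ground = p.y;
    return min(sphere, ground);
}

// march from a surface point toward the light; k controls how soft the penumbra is
float softShadow(vec3 ro, vec3 rd, float k) {
    float res = 1.0;
    float t = 0.02;                       // start slightly off the surface to avoid self-hits
    for (int i = 0; i < 64; ++i) {
        float d = sceneSDF(ro + rd * t);
        if (d < 0.001) return 0.0;        // hit something: fully in shadow
        res = min(res, k * d / t);        // remember the closest near-miss
        t += d;                           // sphere tracing: stepping by the distance is always safe
        if (t > 20.0) break;              // treat the light as unoccluded beyond this range
    }
    return clamp(res, 0.0, 1.0);
}
```

Since your meshes are not described by such a function, this only works as-is for scenes you can express analytically; for arbitrary triangle meshes you need the engine-side cooperation described below.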
If the scene is not managed by your shader, then you need a bit of cooperation from your engine: at the very least, a shadow map produced in a first pass (many different algorithms exist).
Note that with an SVO representation, the scene is first converted into sparse voxels, which the shader can then march for secondary rays. You could even do this for primary rays, but you can just use the regular Z-buffer there and use voxel cone tracing (for instance) for all kinds of secondary rays; see *Interactive Indirect Illumination Using Voxel Cone Tracing* here: http://gigavoxels.imag.fr/publications.html (you might find it overkill for your simple application). For soft shadows and depth of field, see the seminal paper *GigaVoxels: Ray-Guided Streaming for Efficient and Detailed Voxel Rendering*. Note that the tree might even be a regular BSP of triangles instead of an octree of voxels, but then you lose many of the advantages of SVO (performance, especially for soft shadows).
I'm drawing a large number of cubes (100,000+) using glDrawElementsInstanced(). For performance reasons I'd like to implement frustum culling, but I'm not quite sure how to do that when I'm using instancing.
From what I know, the only way to access an individual instance is in the shaders, so I assume I have to do the culling there, but I'm not quite sure how.
Can anyone point me to any tutorials?
Trying to do culling in the vertex shader is way too late in the process. You already have to feed the cubes' transforms to the shaders somehow; take that same data and build a bounding volume hierarchy from it, then only draw the instances that pass the frustum test.
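Whether you do that culling on the CPU (walking the BVH) or in a compute pass that runs just before the draw, the per-instance test is the same bounding-sphere-versus-frustum-plane check. Here is a rough GLSL compute sketch of the GPU variant, assuming the per-instance data lives in SSBOs (the layouts, bindings and names are illustrative); the surviving count can feed the instance count of an indirect draw:

```glsl
#version 430 core
layout(local_size_x = 64) in;

struct Instance { vec4 centerRadius; };   // xyz = cube center, w = bounding-sphere radius

layout(std430, binding = 0) readonly  buffer AllInstances { Instance allInstances[]; };
layout(std430, binding = 1) writeonly buffer Visible      { Instance visible[]; };
layout(std430, binding = 2) buffer Counter                { uint visibleCount; };  // zero before dispatch

uniform vec4 uFrustumPlanes[6];   // xyz = inward-facing plane normal, w = plane offset
uniform uint uInstanceCount;

void main() {
    uint i = gl_GlobalInvocationID.x;
    if (i >= uInstanceCount) return;

    vec4 s = allInstances[i].centerRadius;
    for (int p = 0; p < 6; ++p) {
        // signed distance of the sphere center to the plane; reject if the sphere is fully outside
        if (dot(uFrustumPlanes[p].xyz, s.xyz) + uFrustumPlanes[p].w < -s.w)
            return;
    }
    uint slot = atomicAdd(visibleCount, 1u);   // compact the survivors
    visible[slot] = allInstances[i];
}
```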