Ray tracer that handles gradient refractive index?

I would like to find a ray tracing / synthetic imaging program that utilizes gradient refractive indices. I've looked online at a number of ray tracers, but I've yet to find one that specifically handles gradient indices, alongside regular materials. Any suggestions for finding a ray tracer that has this capability would be very much appreciated!
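For context on what such a feature involves: gradient-index (GRIN) ray tracing typically integrates the ray equation d/ds(n·dr/ds) = ∇n instead of applying Snell's law only at discrete interfaces. Below is a minimal sketch of that integration (my own illustration; the index field n(p) is a made-up example, not taken from any particular package):

```python
import numpy as np

def n(p):
    """Hypothetical gradient-index field: index decreases slowly with height."""
    return 1.0003 - 1e-5 * p[2]

def grad_n(p, eps=1e-4):
    """Numerical gradient of the index field."""
    g = np.zeros(3)
    for i in range(3):
        d = np.zeros(3)
        d[i] = eps
        g[i] = (n(p + d) - n(p - d)) / (2 * eps)
    return g

def trace_grin(pos, direction, ds=0.1, steps=1000):
    """Integrate d/ds(n * dr/ds) = grad(n) with simple Euler steps."""
    pos = np.asarray(pos, float)
    t = np.asarray(direction, float)
    t /= np.linalg.norm(t)          # unit tangent
    for _ in range(steps):
        v = n(pos) * t              # optical "momentum" v = n * t
        v += grad_n(pos) * ds       # dv/ds = grad(n)
        t = v / np.linalg.norm(v)   # renormalized tangent
        pos = pos + t * ds          # advance the ray
    return pos, t

p_end, t_end = trace_grin([0.0, 0.0, 0.0], [1.0, 0.0, 0.001])
print(p_end, t_end)
```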

Related

Rendering an atmosphere around a planet with shading

I have made a planet and want to add an atmosphere around it, so I was following an article on atmospheric scattering.
I don't understand this:
As with the lookup table proposed in Nishita et al. 1993, we can get the optical depth for the ray to the sun from any sample point in the atmosphere. All we need is the height of the sample point (x) and the angle from vertical to the sun (y), and we look up (x, y) in the table. This eliminates the need to calculate one of the out-scattering integrals. In addition, the optical depth for the ray to the camera can be figured out in the same way, right? Well, almost. It works the same way when the camera is in space, but not when the camera is in the atmosphere. That's because the sample rays used in the lookup table go from some point at height x all the way to the top of the atmosphere. They don't stop at some point in the middle of the atmosphere, as they would need to when the camera is inside the atmosphere.
Fortunately, the solution to this is very simple. First we do a lookup from sample point P to the camera to get the optical depth of the ray passing through the camera to the top of the atmosphere. Then we do a second lookup for the same ray, but starting at the camera instead of starting at P. This will give us the optical depth for the part of the ray that we don't want, and we can subtract it from the result of the first lookup. Examine the rays starting from the ground vertex (B1) in Figure 16-3 for a graphical representation of this.
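In code, the subtraction trick from the quoted passage looks roughly like this (a minimal sketch; the table contents, its resolution, and the height/angle parameterization are placeholders, not the article's actual code):

```python
import numpy as np

# Hypothetical precomputed table: rows = normalized height of the start point,
# columns = angle between the ray and the local vertical.
optical_depth_table = np.random.rand(64, 64)  # stand-in for the real precomputation

def lookup(height, angle):
    """Optical depth from a point at 'height' to the top of the atmosphere,
    along a ray making 'angle' with the local vertical (nearest-sample lookup)."""
    i = int(np.clip(height, 0.0, 1.0) * 63)
    j = int(np.clip(angle / np.pi, 0.0, 1.0) * 63)
    return optical_depth_table[i, j]

def optical_depth_sample_to_camera(h_sample, h_camera, angle):
    """Camera inside the atmosphere: both lookups follow the same ray direction,
    so subtracting the camera-to-top segment leaves the sample-to-camera segment."""
    full = lookup(h_sample, angle)   # sample point -> top of atmosphere
    tail = lookup(h_camera, angle)   # camera       -> top of atmosphere
    return full - tail
```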
First Question - isn't optical depth dependent on how you look at it, that is, on the viewing angle? If yes, the table only gives me the optical depth of rays going from the ground to the top of the atmosphere in a straight line. So what about the case where rays pierce the atmosphere at an angle to reach the camera? How do I get the optical depth in that case?
Second Question - what is the vertical angle it is talking about? Is it the same as the angle with the z-axis that we use in polar coordinates?
Third Question - the article talks about scattering of rays going to the sun. Shouldn't it be the other way around, i.e. rays coming from the sun to a point?
Any explanation of the article or of my questions would help a lot.
Thanks in advance!
I am no expert in the matter, but I have played with atmospheric scattering and various physical and optical simulations. I strongly recommend looking at this:
my VEEERRRYYY Simplified version of atmospheric scattering in GLSL
It does not do the full volume integration, just linear path integration along the ray, and it only does Rayleigh scattering with isotropic coefficients. As you can see, it is still good enough.
In real scattering the viewing angle does affect the scattering equation, because the scattering coefficients are different at different angles (relative to the main light source and the viewer). So the answer to your first question is: yes, it does.
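For a concrete example of that angular dependence, the standard Rayleigh phase function weights the scattered light by the angle between the incoming light and the viewing direction (the textbook formula, independent of the linked demo):

```python
import math

def rayleigh_phase(cos_theta):
    """Rayleigh phase function: relative amount of light scattered toward an
    observer whose direction makes angle theta with the incoming light."""
    return (3.0 / (16.0 * math.pi)) * (1.0 + cos_theta * cos_theta)

# Forward/backward scattering is twice as strong as scattering at 90 degrees:
print(rayleigh_phase(1.0), rayleigh_phase(0.0))
```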
I'm not sure what you are referring to in your second question. The scattering itself depends on the angle between the light source, the particle, and the camera, which lies in an arbitrary plane. However, if the Earth's surface is brought into the equation too, then it also depends on the horizontal and vertical angles relative to the terrain (azimuth and elevation), since more light is usually reflected when the camera is facing the sun (azimuth) and the reflected rays are closer to your elevation. So my guess is that the horizontal angle is about accounting for light reflected from the surface.
To answer your third question: this is called backward ray tracing. You can cast rays either way (from the camera or from the sun), but if you start from the light source you do not know which direction will hit a pixel on the camera screen, so you need to cast a huge number of rays to raise the hit probability enough to fill the screen, which is too slow and inaccurate (it produces holes). If you start from a screen pixel, you cast just a single ray (or one per wavelength) instead, which is much, much faster. The resulting color is the same.
[Edit1] vertical angle
OK, I read the linked topic a bit, and this is how I understand it:
So it's just the angle between the surface normal and the cast ray, scaled so that vert.angle = 0 means the ray and the normal point the same way and vert.angle = 1 means they point in opposite directions.
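In other words, something like this mapping (a sketch of that reading; the article's exact scaling may differ):

```python
import numpy as np

def vertical_angle(normal, ray_dir):
    """0 when the ray points along the surface normal, 1 when it points opposite."""
    n = normal / np.linalg.norm(normal)
    d = ray_dir / np.linalg.norm(ray_dir)
    return np.arccos(np.clip(np.dot(n, d), -1.0, 1.0)) / np.pi

print(vertical_angle(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, -1.0])))  # 1.0
```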

Light Propagation Volumes interpolation

I am currently implementing LPV in my engine.
As mentioned here, I am supposed to interpolate the values from the octree to obtain smooth colors.
For example, check out this picture: no interpolation (left) versus with interpolation (right).
Currently I only fetch the nearest values from the 3D texture (octree), and I obtain the pixelated result:
How could I do interpolation to have smooth colors?
I think there is no real issue here; a single LPV simply doesn't work over big distances.
For radiosity in large scenes, good options are:
Cascaded LPVs in combination with SSGI, as in CryEngine
Monte Carlo path tracing
Voxel cone tracing
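As for the blocky look itself, the usual fix is to sample the 3D texture with linear filtering, or to do the trilinear blend manually. A minimal sketch of the manual blend, assuming the LPV is held as a dense (nx, ny, nz, 3) array rather than a GPU texture:

```python
import numpy as np

def sample_trilinear(grid, p):
    """Trilinearly interpolate grid (shape nx, ny, nz, 3) at continuous coordinate p."""
    p = np.clip(p, 0, np.array(grid.shape[:3]) - 1.001)
    i0 = np.floor(p).astype(int)   # lower corner of the containing cell
    f = p - i0                     # fractional position inside the cell
    i1 = i0 + 1
    # Blend the eight surrounding cells, axis by axis.
    c00 = grid[i0[0], i0[1], i0[2]] * (1 - f[0]) + grid[i1[0], i0[1], i0[2]] * f[0]
    c10 = grid[i0[0], i1[1], i0[2]] * (1 - f[0]) + grid[i1[0], i1[1], i0[2]] * f[0]
    c01 = grid[i0[0], i0[1], i1[2]] * (1 - f[0]) + grid[i1[0], i0[1], i1[2]] * f[0]
    c11 = grid[i0[0], i1[1], i1[2]] * (1 - f[0]) + grid[i1[0], i1[1], i1[2]] * f[0]
    c0 = c00 * (1 - f[1]) + c10 * f[1]
    c1 = c01 * (1 - f[1]) + c11 * f[1]
    return c0 * (1 - f[2]) + c1 * f[2]

lpv = np.random.rand(32, 32, 32, 3)
print(sample_trilinear(lpv, np.array([10.3, 5.7, 20.1])))
```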

Implementation of raymarching surfaces in GLSL

I've been reading a lot of articles about ray-marching in GLSL shaders (such as this one: http://www.iquilezles.org/www/articles/rmshadows/rmshadows.htm), and it raised some questions that I wanted to ask.
In my application, I am rendering a scene with a couple of meshes and I wanted to experiment with shadows. While I think I understand the concept of how raymarching works, I don't quite understand how to properly implement it in GLSL. I know how to compute the intersection of a ray and a plane, but how would this be handled in GLSL shaders?
According to this thread (https://gamedev.stackexchange.com/questions/67719/how-do-raymarch-shaders-work), you're measuring the distance between the start of the ray and the 'surface'. Is the surface he's referring to the mesh? Do I need to send an array of planes/points that make up the mesh to the shader in order to compute the ray intersection test? Do I need to use the depth buffer to determine the distance to the surface?
It depends on what your shader does versus what your rendering engine does. In pure demo shaders like those on Shadertoy (see its shadow examples), the whole scene is encoded in the shader, so there is no problem shooting secondary rays or more (aside from performance).
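In those demo shaders the "distance to the surface" is a signed distance function (SDF), and marching means repeatedly stepping forward by that distance (sphere tracing). A minimal sketch with a made-up one-sphere scene; a GLSL fragment shader would use the same loop:

```python
import numpy as np

def scene_sdf(p):
    """Hypothetical scene: a single sphere of radius 1 at the origin.
    Returns the signed distance from p to the nearest surface."""
    return np.linalg.norm(p) - 1.0

def sphere_trace(origin, direction, max_steps=128, eps=1e-4, max_dist=100.0):
    """March along the ray, stepping by the SDF value each time.
    Returns the hit distance, or None if the ray escapes."""
    d = direction / np.linalg.norm(direction)
    t = 0.0
    for _ in range(max_steps):
        dist = scene_sdf(origin + t * d)
        if dist < eps:      # close enough to the surface: report a hit
            return t
        t += dist           # safe step: nothing can be closer than 'dist'
        if t > max_dist:
            break
    return None

print(sphere_trace(np.array([0.0, 0.0, -5.0]), np.array([0.0, 0.0, 1.0])))  # ~4.0
```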
If the scene is not managed by your shader, then you need a bit of cooperation from your engine, at the very least to produce a shadow map in a first pass (many different algorithms exist).
Note that with an SVO representation, the scene is first converted into sparse voxels, which can then be marched by the shader for secondary rays. It could even be used for primary rays, but you can just use the regular Z-buffer there and voxel cone tracing (for instance) for all kinds of secondary rays; see *Interactive Indirect Illumination Using Voxel Cone Tracing* here: http://gigavoxels.imag.fr/publications.html (though you might find it overkill for your simple application). For soft shadows and depth of field, see the seminal paper *GigaVoxels: Ray-Guided Streaming for Efficient and Detailed Voxel Rendering*. Note that the tree might even be a regular BSP of triangles instead of an octree of voxels, but then you lose many of the advantages of an SVO (performance, even more so for soft shadows).

Simulate Distortion of Spherical Billboard

I need to make spherical billboards (i.e., billboards that set depth correctly), taking perspective projection into account, ideally including off-center frusta.
I wasn't able to find any references to anyone succeeding at this, although there are plenty of explanations of why standard billboards don't show perspective distortion. Unfortunately, for my application the lack isn't just a cosmetic defect; it actually matters to the algorithm.
I did a bit of investigation on my own:
The math gets pretty messy rather quickly. The obvious approaches don't work: for example, you can't orient the billboard perpendicular to a viewing ray because tangential rays wouldn't intersect the billboard at right angles.
Probably the most promising approach I found was to render the billboard parallel to the near clipping plane, stretching it with a vertex shader into an ellipse. This only handles perturbations along one axis (so e.g. it won't handle spheres rendered in a corner of the view), but the main obstacle is calculating depth correctly; you can't compute it as you would for an undistorted sphere because the "sphere" is occluding itself.
In point of fact, I didn't find a good solution, and I couldn't find anyone who has. Does anyone have an idea?
While browsing around not even remotely working on this problem, I stumbled on http://iquilezles.org/www/articles/sphereproj/sphereproj.htm, which is pretty close. The linked tutorial shows how to compute a bounding ellipse for a rasterized sphere; getting the depth (at worst, using a raycast) should be fairly easy to derive.
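For the depth part, the "at worst, using a raycast" fallback is just a per-fragment ray/sphere intersection: the nearest hit point is what you project to get the value written to the depth buffer. A rough sketch (coordinate conventions and the projection itself are left to the host application):

```python
import numpy as np

def ray_sphere_nearest_hit(ray_origin, ray_dir, center, radius):
    """Nearest intersection of a ray with a sphere, or None if it misses.
    The hit point is what you would project to get the fragment's true depth."""
    d = ray_dir / np.linalg.norm(ray_dir)
    oc = ray_origin - center
    b = np.dot(oc, d)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return None                 # the ray misses the sphere
    t = -b - np.sqrt(disc)          # nearest of the two roots
    if t < 0.0:
        return None                 # sphere is behind the ray origin
    return ray_origin + t * d

hit = ray_sphere_nearest_hit(np.zeros(3), np.array([0.0, 0.0, -1.0]),
                             np.array([0.0, 0.0, -5.0]), 1.0)
print(hit)  # ~(0, 0, -4)
```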

Can someone describe the algorithm used by Ken Silverman's Voxlap engine?

From what I gathered, he used sparse voxel octrees and raycasting. It doesn't seem like he used OpenGL or Direct3D, and when I look at the game Voxelstein it appears that miniature cubes are actually being drawn instead of just a bunch of 2D squares, which caught me off guard; I'm not sure how he is doing that without OpenGL or Direct3D.
I tried to read through the source code, but it was difficult for me to understand what was going on. I would like to implement something similar and would like to know the algorithm for doing so.
I'm interested in how he performed rendering, culling, occlusion, and lighting. Any help is appreciated.
The algorithm is closer to ray-casting than ray-tracing. You can get an explanation from Ken Silverman himself here:
https://web.archive.org/web/20120321063223/http://www.jonof.id.au/forum/index.php?topic=30.0
In short: on a grid, store an RLE list of surface voxels for each (x, y) stack of voxels (where z means 'up'). Assuming 4 degrees of freedom, ray-cast across it for each vertical line on the screen and maintain a list of visible spans, which is clipped as each cube is drawn. For 6 degrees of freedom, do something similar but with scanlines that are tilted in screen space.
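A minimal sketch of that column storage (my own simplified layout, not Ken's actual format): each (x, y) column holds a run-length list of solid spans, and a solidity query just scans them:

```python
# Each column is a list of (z_lo, z_hi, color) solid spans, sorted by z.
# The air between spans is implicit, which is what keeps the representation sparse.
columns = {}  # (x, y) -> [(z_lo, z_hi, color), ...]

columns[(3, 7)] = [(0, 10, 0x808080),    # ground slab
                   (14, 15, 0xFF0000)]   # a floating red block

def is_solid(x, y, z):
    """True if voxel (x, y, z) lies inside one of the column's solid spans."""
    for z_lo, z_hi, _ in columns.get((x, y), []):
        if z_lo <= z <= z_hi:
            return True
    return False

print(is_solid(3, 7, 5), is_solid(3, 7, 12))  # True False
```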
I didn't look at the algorithm itself, but I can tell the following based on the screenshots:
it appears that miniature cubes are actually being drawn instead of just a bunch of 2D squares
Yep, that's how ray-tracing works: it doesn't draw 2D squares, it traces rays. If you trace your rays against many miniature cubes, you'll see many miniature cubes. The scene is represented by many miniature cubes (voxels), hence you see them when you look up close. It would be nice to actually smooth the data somehow (trace against a smoothed energy function) to make it look smoother.
I'm interested in how he performed rendering
by ray-tracing
culling
no need for culling when ray-tracing, particularly in a voxel scene. As you move along the ray you check only the voxels that the ray intersects.
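That walk along the ray is usually a 3D DDA in the style of Amanatides & Woo: step from voxel to voxel across whichever cell boundary the ray crosses next and stop at the first solid one. A rough sketch, assuming a dense boolean occupancy grid rather than Voxlap's RLE columns:

```python
import math

def raycast_voxels(grid, origin, direction, max_steps=256):
    """Step through grid cells along the ray (3D DDA) and return the first
    solid voxel hit, or None. grid[x][y][z] is True where a voxel exists."""
    x, y, z = (int(math.floor(c)) for c in origin)
    step, t_max, t_delta = [], [], []
    for i, d in enumerate(direction):
        if d > 0:
            step.append(1)
            t_max.append((math.floor(origin[i]) + 1 - origin[i]) / d)
            t_delta.append(1.0 / d)
        elif d < 0:
            step.append(-1)
            t_max.append((origin[i] - math.floor(origin[i])) / -d)
            t_delta.append(1.0 / -d)
        else:
            step.append(0)
            t_max.append(math.inf)
            t_delta.append(math.inf)
    voxel = [x, y, z]
    for _ in range(max_steps):
        vx, vy, vz = voxel
        if 0 <= vx < len(grid) and 0 <= vy < len(grid[0]) and 0 <= vz < len(grid[0][0]):
            if grid[vx][vy][vz]:
                return tuple(voxel)        # first solid voxel along the ray
        axis = t_max.index(min(t_max))     # axis whose cell boundary is crossed next
        voxel[axis] += step[axis]
        t_max[axis] += t_delta[axis]
    return None

# Tiny example: an 8x8x8 grid with one solid voxel at (5, 3, 2).
grid = [[[False] * 8 for _ in range(8)] for _ in range(8)]
grid[5][3][2] = True
print(raycast_voxels(grid, (0.5, 3.5, 2.5), (1.0, 0.0, 0.0)))  # (5, 3, 2)
```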
occlusion
voxel-voxel occlusion is handled naturally by ray-tracing; it would return the first voxel hit, which is the closest. If you draw sprites you can use a Z-buffer generated by the ray-tracer.
and lighting
It's possible to approximate the local normal by looking at nearby cells and checking which are occupied and which are not, then performing the lighting calculation. Alternatively, each voxel can store a normal along with its color or other material properties.
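One common way to do that neighbor-based estimate is to take the direction away from the occupied cells in a small neighborhood (effectively a negated gradient of occupancy). A rough sketch, with the occupancy test passed in as a callable:

```python
import numpy as np

def estimate_normal(occupied, x, y, z, r=2):
    """Approximate the surface normal at voxel (x, y, z) as the direction
    pointing away from the occupied mass in a (2r+1)^3 neighborhood."""
    n = np.zeros(3)
    for dx in range(-r, r + 1):
        for dy in range(-r, r + 1):
            for dz in range(-r, r + 1):
                if occupied(x + dx, y + dy, z + dz):
                    n -= np.array([dx, dy, dz], float)  # filled cells push the normal away
    length = np.linalg.norm(n)
    return n / length if length > 0 else np.array([0.0, 0.0, 1.0])

# Example: a flat floor below z = 0 yields a normal pointing up (+z).
print(estimate_normal(lambda x, y, z: z < 0, 0, 0, 0))
```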