Rasterization and Secondary Reflections - glsl

I have been developing a GPU-based underwater imaging sonar simulation for real-time applications (see more details in my last paper). The missing part is the reverberation phenomenon, which can be represented by a multipath algorithm.
This work uses information (normal, depth and angle) precomputed in the rasterization pipeline with shaders to calculate the simulated sonar data; however, this approach is restricted to primary reflections, so I need to take the secondary reflections into account. Could ray tracing be used for just this part, in a hybrid pipeline (rasterization plus ray tracing)?

I hope I can help!
With ray tracing, in order to calculate secondary reflections you normally first compute each ray's primary reflection and then recursively shoot another ray from that hit point. I guess you could skip the first bounce of the ray tracing if you can use your shader results to figure out where each ray starts and in which direction it should reflect. You could shoot your rays out of the pixels of the shader's result, using the depth information, pixel coordinates and camera parameters to figure out each ray's origin, and using the normal information to figure out which direction the ray should go in.
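To make that concrete, here is a minimal GLSL sketch of such a ray-setup pass, assuming the rasterization pass has left you with a normal texture and the depth buffer, plus the inverse view-projection matrix and the sensor position as uniforms (all names here are placeholders, not taken from your project):

#version 330 core
// Hypothetical G-buffer inputs from the rasterization pass.
uniform sampler2D uNormalTex;   // world-space normals, encoded in [0,1]
uniform sampler2D uDepthTex;    // hardware depth buffer values in [0,1]
uniform mat4 uInvViewProj;      // inverse of (projection * view)
uniform vec3 uCameraPos;        // camera / sonar origin in world space

in vec2 vTexCoord;              // full-screen quad UV in [0,1]
out vec4 outRay;                // e.g. pack the direction here, origin in a second target

void main()
{
    // Reconstruct the world-space position of this pixel from depth + UV.
    float depth = texture(uDepthTex, vTexCoord).r;
    vec4 ndc   = vec4(vTexCoord * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
    vec4 world = uInvViewProj * ndc;
    vec3 hitPos = world.xyz / world.w;           // primary hit = secondary ray origin

    // Secondary ray direction: reflect the incident direction about the normal.
    vec3 normal    = normalize(texture(uNormalTex, vTexCoord).xyz * 2.0 - 1.0);
    vec3 incident  = normalize(hitPos - uCameraPos);
    vec3 secondaryDir = reflect(incident, normal);

    outRay = vec4(secondaryDir, 1.0);
    // hitPos (and any per-ray bookkeeping) would be written to further
    // render targets and consumed by the ray-tracing stage.
}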
From looking at your project's paper, I think ray tracing would be a very useful tool for this project, and I wonder if it might be better to just go for a full ray-tracing approach to simplify the process. Why exactly do you want to do the primary reflections through shaders? I would recommend looking into NVIDIA OptiX, which performs ray tracing on the GPU, and into global illumination techniques in order to calculate reflections off of all objects in the scene. Global illumination techniques can also take into account the fact that surfaces are not perfectly smooth, as mentioned in your paper, by using Monte Carlo integration instead of normal maps.
I hope this helps! If you would like me to clarify anything or have any other questions, feel free to ask.

Related

Difference between SphericalMapping and CubeMapping for environmental reflection in OpenGL?

I'm working with an environmental reflection in OpenGL+GLSL.
I want to reflect the environment around an object in the most accurate way possible.
I found basically two ways to do this: one is called SphericalMapping and the other is CubeMapping.
They differ in the shader code, but I don't really understand what the difference between them is.
Obviously, for the cube-mapping shader I have 6 images printed on a cube, which the fragment shader needs in order to look up the right pixel, while for my spherical-mapping shader I have a single image, either distorted with photo-retouching software or obtained by taking a photo of a specular reflective sphere.
The drawbacks of spherical mapping seem to be that the camera (and the person holding it) always shows up in the image and that the sampling is non-uniform. What is meant by this last statement? And what is meant by the "black-hole" effect in spherical mapping?
I would like to find an interactive demonstration of the differences and drawbacks of the two approaches; it seems like cube mapping is the better one, but I don't know why.
Which of the two is best, in your opinion, especially for a real-time simulation with head tracking?
Spheremaps are usually used for small, low-quality stuff.
The drawbacks of spherical mapping seems to be that the camera (and the person which holds it) is always showed in the image
We're talking about computer graphics here; there is no real camera and no real person. Try image-googling "spheremap": you won't see anybody in the pictures.
the sampling is non-uniform
This means that the center of the spheremap has many pixels for a relatively small area, while near the border, you have few pixels for a relatively large area.
Cubemaps are almost always better: you can generate them at runtime easily, they are faster for the hardware to sample, and even though you have 6 textures instead of 1, you can use a lower resolution and still get the same quality.
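To make the difference concrete, here is a small fragment-shader sketch of both lookups (uniform and varying names are made up). The cube map is indexed directly with the reflected 3D direction, while the sphere map first collapses that direction into a 2D coordinate; that collapse is exactly where the non-uniform sampling and the "black hole" at the rim come from:

#version 330 core
uniform samplerCube uEnvCube;   // 6 faces
uniform sampler2D   uEnvSphere; // single probe image
uniform bool        uUseCube;

in vec3 vViewDir;   // view direction (eye space), normalized
in vec3 vNormal;    // surface normal (eye space), normalized
out vec4 fragColor;

void main()
{
    vec3 r = reflect(normalize(vViewDir), normalize(vNormal));

    if (uUseCube) {
        // Cube map: the 3D direction is used directly; sampling is nearly uniform.
        fragColor = texture(uEnvCube, r);
    } else {
        // Classic sphere-map lookup: directions pointing away from the viewer
        // all crowd towards the rim of the image, hence the non-uniform
        // sampling and the "black hole" artifact.
        float m = 2.0 * sqrt(r.x * r.x + r.y * r.y + (r.z + 1.0) * (r.z + 1.0));
        vec2 uv = r.xy / m + 0.5;
        fragColor = texture(uEnvSphere, uv);
    }
}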

OpenGL - selective world rendering

I'm building a miniature city with the basic minimum looks of a city (roads, buildings, trees etc.) where you can move around. I know that rendering the whole model set in each frame doesn't work...
So can anyone give me an insight into the standard (but easiest) procedure for selectively rendering only the visible parts of the scene? I mean displaying only the visible stuff (with respect to the camera position) and not rendering the unseen parts.
I'm using VC++ and the GLUT API.
Maybe this Wikipedia article provides a very basic introduction to the field of culling techniques.
A good starting point and one of the easiest techniques is view frustum culling. With this method you check, for each object in your scene, whether it is inside the viewing volume (viewing frustum). This basically amounts to checking whether some simplified bounding volume of the geometry (like a box or a sphere that completely contains the geometry) lies inside the viewing frustum, which is defined by six planes.
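For illustration, the core test is just a signed-distance check of a bounding sphere against the six planes. The sketch below is written in GLSL syntax to match the rest of this page (it could live in a compute-shader culling pass, for instance), but the same few lines translate directly to C++; the plane layout is an assumption:

// Frustum planes stored as vec4(nx, ny, nz, d) with normalized xyz,
// so that dot(plane.xyz, p) + plane.w is the signed distance of point p.
uniform vec4 uFrustumPlanes[6];

bool sphereInFrustum(vec3 center, float radius)
{
    for (int i = 0; i < 6; ++i) {
        // Completely on the negative side of one plane -> outside the frustum.
        if (dot(uFrustumPlanes[i].xyz, center) + uFrustumPlanes[i].w < -radius)
            return false;
    }
    return true;  // inside or intersecting (conservative)
}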
This can be optimized further by grouping objects by their position and creating a so-called bounding volume hierarchy: this way you, for example, first check whether a whole city block is inside the viewing volume (using a bounding volume that contains the whole block), and only if it is do you check the individual houses.
A more complicated technique is occlusion culling, which means checking whether an object is completely hidden behind another object. Because these techniques can get substantially more complicated, they should (if done at all) be applied after view frustum culling. OpenGL has hardware occlusion queries that can help you determine whether an object is actually visible, but they require some additional work to work well. Especially for cities there may be special two-dimensional occlusion culling techniques (I heard about that a long time ago, but don't know the details).
This is just a very broad overview, so feel free to google the individual keywords. It is always a good idea to weigh carefully whether the additional CPU overhead is worth it (especially with complicated occlusion culling techniques), considering that nowadays the trend is to batch as much geometry as possible into a single draw call (by the way, I hope you don't use immediate mode glBegin/glEnd; otherwise changing this to vertex arrays or, better, VBOs is the first point on your agenda). But view frustum culling might be a nice and easy starting point, especially if the city gets rather large.
Google "binary space partition trees".
BSP trees are a good means of determining what should be rendered from the camera's view angle and position. The old-school first-person shooters, i.e. Quake et al, used them (or at least some derivation of the principle).
Here is a good FAQ.
Other good resources:
link
link

OpenGL Picking from a large set

I'm trying, in JOGL, to pick from a large set of rendered quads (several thousand). Does anyone have any recommendations?
To give you more detail, I'm plotting a large set of data as billboards with procedurally created textures.
I've seen the post "OpenGL GL_SELECT or manual collision detection?" and found it helpful. However, it can take my program up to several minutes to complete a rendering of the full set, so I don't think drawing it twice (for color picking) is an option.
I'm currently drawing with calls to glBegin/glVertex.../glEnd. If I made the switch to batched rendering on the GPU with VAOs and VBOs, do you think I would get a speedup large enough to make color picking feasible?
If not, given all of the recommendations against using GL_SELECT, do you think it would be worth using it anyway?
I've also investigated multithreaded CPU approaches to picking these quads that sidestep OpenGL altogether. Do you think an OpenGL-less CPU solution is the way to go?
Sorry for all the questions. My main question remains: what is a good way to pick from a large set of quads using OpenGL (JOGL)?
The best way to pick from a large number of quads cannot be defined easily. I don't like color picking or similar techniques very much, because they seem too impractical for most situations. I never understood why so many tutorials aimed at people who are new to OpenGL or even to programming focus on picking techniques that are just useless for nearly everything. For example: try to get the pixel you clicked on in a heightmap: not possible. Try to locate the exact mesh you clicked on in a model: impractical.
If you have a large number of quads you will probably need good spatial partitioning or at least (better: also) a scene graph. OK, you don't strictly need this, but it helps A LOT. Look at some tutorials on scene graphs for further information; it's a good thing to know if you are starting with 3D programming, because you get to learn a lot of concepts and not only OpenGL code.
So what to do now to start with some picking? Take the inverse of your modelview matrix (IIRC with gluUnProject(...)) at the position where your mouse cursor is. With the orientation of your camera you can now cast a ray into your spatial structure (or your scene graph that holds a spatial structure) and check for collisions with your quads. I currently have no link, but if you search for "inverse modelview matrix" you should find some pages that explain this better and in more detail than would be practical here.
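Here is a rough sketch of that unproject-and-raycast idea, written in GLSL syntax only because that is the language used throughout this page; in a JOGL project you would do exactly the same math in Java on the CPU. All names are placeholders:

// Build a world-space ray from the mouse position in normalized device
// coordinates (x, y in [-1, 1]) and the inverse of projection * view.
void mouseRay(vec2 ndc, mat4 invViewProj, out vec3 origin, out vec3 dir)
{
    vec4 nearPt = invViewProj * vec4(ndc, -1.0, 1.0);
    vec4 farPt  = invViewProj * vec4(ndc,  1.0, 1.0);
    origin = nearPt.xyz / nearPt.w;
    dir    = normalize(farPt.xyz / farPt.w - origin);
}

// Ray vs. quad: intersect the quad's plane, then check that the hit point
// lies inside the quad. Corners c0..c3 are given in winding order.
bool rayQuad(vec3 origin, vec3 dir, vec3 c0, vec3 c1, vec3 c2, vec3 c3, out float t)
{
    vec3 n = normalize(cross(c1 - c0, c3 - c0));
    float denom = dot(n, dir);
    if (abs(denom) < 1e-6) return false;          // ray parallel to the quad
    t = dot(n, c0 - origin) / denom;
    if (t < 0.0) return false;                    // quad is behind the ray
    vec3 p = origin + t * dir;
    // The point is inside if it lies on the inner side of all four edges.
    vec3 e0 = cross(c1 - c0, p - c0);
    vec3 e1 = cross(c2 - c1, p - c1);
    vec3 e2 = cross(c3 - c2, p - c2);
    vec3 e3 = cross(c0 - c3, p - c3);
    return dot(e0, n) >= 0.0 && dot(e1, n) >= 0.0 &&
           dot(e2, n) >= 0.0 && dot(e3, n) >= 0.0;
}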
With this raycasting-based technique you will be able to find your quad in O(log n), where n is the number of quads you have. With some heuristics based on the exact layout of your application (your question is too generic to be more specific) you can improve this a lot for most cases.
An easy spatial structure for this is, for example, a quadtree. However, you should start with the raycasting first to fully understand the technique.
I've never faced such a problem, but in my opinion CPU-based picking is the best way to try.
If you have a large set of quads, maybe you can group the quads spatially to avoid testing all of them. For example, you can group the quads into two boxes and first test which box the pick ray hits, so that only the quads inside that box need to be tested.
I just implemented color picking, but glReadPixels is slow here (I've read somewhere that it can hurt the asynchronous behaviour between the GL and the CPU).
Another possibility seems to me to be using transform feedback and a geometry shader that does the scissor test. The GS can then discard all faces that do not contain the mouse position, and the transform feedback buffer then contains exactly the information about the hovered meshes.
You probably want to write the depth to the transform feedback buffer too, so that you can find the topmost hovered mesh.
This approach also works nicely with instancing (additionally write the instance ID to the buffer).
I haven't tried it yet, but I guess it will be a lot faster than using glReadPixels.
I only found this reference for this approach.
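For what it's worth, here is a hedged sketch of what such a geometry shader could look like (the in/out names and the averaged-depth choice are my own assumptions, not taken from any reference): it keeps only the triangles whose clip-space footprint contains the mouse position and emits their primitive ID and depth, which the host captures with transform feedback (rasterizer discard enabled) and then sorts by depth:

#version 330 core
layout(triangles) in;
layout(points, max_vertices = 1) out;

uniform vec2 uMouseNdc;         // mouse position in NDC, i.e. in [-1, 1]^2

out float tfPrimId;             // captured via transform feedback
out float tfDepth;

// 2D cross product (z component only), used for the point-in-triangle test.
float cross2(vec2 a, vec2 b) { return a.x * b.y - a.y * b.x; }

void main()
{
    // Project the triangle to NDC.
    vec3 p0 = gl_in[0].gl_Position.xyz / gl_in[0].gl_Position.w;
    vec3 p1 = gl_in[1].gl_Position.xyz / gl_in[1].gl_Position.w;
    vec3 p2 = gl_in[2].gl_Position.xyz / gl_in[2].gl_Position.w;

    // The mouse must lie on the same side of all three edges.
    float s0 = cross2(p1.xy - p0.xy, uMouseNdc - p0.xy);
    float s1 = cross2(p2.xy - p1.xy, uMouseNdc - p1.xy);
    float s2 = cross2(p0.xy - p2.xy, uMouseNdc - p2.xy);
    bool inside = (s0 >= 0.0 && s1 >= 0.0 && s2 >= 0.0) ||
                  (s0 <= 0.0 && s1 <= 0.0 && s2 <= 0.0);

    if (inside) {
        // Emit one record per hovered triangle; the host sorts by tfDepth
        // to find the topmost one.
        tfPrimId = float(gl_PrimitiveIDIn);
        tfDepth  = (p0.z + p1.z + p2.z) / 3.0;
        EmitVertex();
        EndPrimitive();
    }
    // Triangles that do not contain the mouse emit nothing, i.e. they are discarded.
}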
I'm using a solution that I've borrowed from the DirectX SDK; there is a nice example there of how to detect the selected polygon in a vertex buffer object.
The same algorithm works nicely with OpenGL.

IDEAs: how to interactively render large image series using GPU-based direct volume rendering

I'm looking for ideas on how to convert a 30+ GB series of 2000+ colored TIFF images into a dataset that can be visualized in real time (at interactive frame rates) using GPU-based volume rendering (using OpenCL / OpenGL / GLSL). I want to use a direct volume visualization approach instead of surface fitting (i.e. raycasting instead of marching cubes).
The problem is twofold: first, I need to convert my images into a 3D dataset. The first thing that came to my mind is to treat all images as 2D textures and simply stack them to create a 3D texture.
The second problem is the interactive frame rates. For this I will probably need some sort of downsampling in combination with "details-on-demand" loading of the high-res dataset when zooming, or something similar.
A first, point-wise approach I found is:
polygonization of the complete volume data through layer-by-layer processing and generating corresponding image texture;
carrying out all essential transformations through vertex processor operations;
dividing polygonal slices into smaller fragments, where the corresponding depth and texture coordinates are recorded;
in fragment processing, deploying the vertex shader programming technique to enhance the rendering of fragments.
But I have no concrete idea of how to start implementing this approach, so I would love to see some fresh ideas, or suggestions on how to get started with the approach shown above.
If anyone has any fresh ideas in this area, they're probably going to be trying to develop and publish them. It's an ongoing area of research.
In your "point-wise approach", it seems like you have outlined the basic method of slice-based volume rendering. This can give good results, but many people are switching to a hardware raycasting method. There is an example of this in the CUDA SDK if you are interested.
A good method for hierarchical volume rendering was detailed by Crassin et al. in their paper called GigaVoxels. It uses an octree-based approach and only loads the bricks into memory when they are needed.
A very good introductory book in this area is Real-Time Volume Graphics.
I've done a bit of volume rendering, though my code generated an isosurface using marching cubes and displayed that. However, in my modest self-education on volume rendering I did come across an interesting short paper, Volume Rendering on Common Computer Hardware, which comes with example source too. I never got around to checking it out, but it seemed promising. It is in DirectX though, not OpenGL. Maybe it can give you some ideas and a place to start.

How to create fast and easy scene-independent shadows w/o shaders in OpenGL

Say I have some mesh (e.g. a sphere) in the center of a room full of cubes, plus one light source. How can I do fast and easy shadow casting in OpenGL, using "standard" (fixed-function) calls only? Note: the result must contain the cubes' shadows as well as the sphere's.
If you can generate a silhouette of the sphere then you could use shadow volumes. NVIDIA hardware has also supported fixed-function shadow mapping for a fair while.
Shadow volumes have the disadvantage of very high fill rate requirements. Shadow maps can be better but require an extra pass.
If you are projecting onto a single plane, it may well be easier to just project the object onto that plane.
There is no fast and easy way. There are lots of different techniques, each with their own pros and cons. You can look at a project I host on GitHub that uses very simple code to create a shadow using the shadow volume technique (http://iuiz.github.com/VolumeShadow/). It is written in Java, but it should not be hard to port it to any other language.
The most important ways to create shadows are the so-called "shadow mapping" method, where you render your scene (with the camera at the light source, directed at each shadow-casting object) to a texture, and the shadow volume method (made famous by Doom 3).
I've found one way using stencil buffers. After being a little confused for a while, I finally got the idea: with this approach the hardest part is looping over each light source and projecting all scene objects. It looks prettier than texture shadowing and runs faster than volumetric shadows. Here and here are some resources which helped me to understand the matrix multiplication step (it confused me a bit when I was looking through the dino demo). For me, this method is the easiest to understand and use. The only question left to solve is how to calculate that projection matrix.
This method could also be changed a bit by using textures, as shown here.
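For completeness, since the one open question above is how to build that projection matrix: the standard planar-projection shadow matrix for a plane a*x + b*y + c*z + d = 0 and a homogeneous light position is dot(plane, light) times the identity, minus the outer product of light and plane. It is written here in GLSL syntax to match the rest of this page, although in the fixed-function path you would fill the same 16 values into a column-major float[16] and pass it to glMultMatrixf:

// S * v = dot(plane, light) * v - light * dot(plane, v),
// which drops every vertex onto the plane along the line towards the light.
// plane = (a, b, c, d) for the plane equation a*x + b*y + c*z + d = 0,
// light = (lx, ly, lz, 1.0) for a point light (w = 0.0 for a directional one).
mat4 planarShadowMatrix(vec4 plane, vec4 light)
{
    return dot(plane, light) * mat4(1.0) - outerProduct(light, plane);
}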
Thanks everybody! =)