Global illumination for static geometry - OpenGL

I have been trying to find a suitable global illumination technique, preferably based on OpenGL or GPGPU, to light an outdoor scene that has static objects and dynamic light sources (it is a city model). It does not need to be very detailed or accurate, but it should be rather simple and, if possible, iterative and refining (so I can display the intermediate results).
The best matches I have found on the internet are ray tracing, precomputed radiance transfer (PRT) and radiosity.
Ray tracing will be far too slow for my application. PRT seems too complex and has a huge precomputation step, and radiosity seems too slow, and I am not sure whether it can be implemented in a multithreaded way.
Does anyone know a better technique, or a workaround for the above problems?

For a more practical and usable approach than SVOGI (Crassin's voxel cone tracing technique), you might consider deferred irradiance volumes; there is a great WebGL demo with full source available here. It is based around spherical harmonics.
There are also older techniques like LPV (light propagation volumes), which you can check out here, here and here.
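Since the irradiance-volume approach above boils down to storing a handful of spherical-harmonics coefficients per probe and evaluating them per shaded point, here is a minimal sketch of that evaluation in C++, assuming 9 RGB coefficients per probe (2nd-order SH, following Ramamoorthi and Hanrahan's irradiance environment map formulation); the probe lookup and volume layout are up to you:

```cpp
#include <array>

struct Vec3 { float x, y, z; };

// sh[0..8]: RGB SH coefficients of the probe nearest to the shaded point,
// ordered L00, L1-1, L10, L11, L2-2, L2-1, L20, L21, L22.
Vec3 evalIrradiance(const std::array<Vec3, 9>& sh, Vec3 n) {
    // Constants folded from the clamped-cosine convolution.
    const float c1 = 0.429043f, c2 = 0.511664f,
                c3 = 0.743125f, c4 = 0.886227f, c5 = 0.247708f;
    auto mul = [](Vec3 a, float s) { return Vec3{a.x * s, a.y * s, a.z * s}; };
    auto add = [](Vec3 a, Vec3 b) { return Vec3{a.x + b.x, a.y + b.y, a.z + b.z}; };
    Vec3 e = mul(sh[0], c4);
    e = add(e, mul(sh[1], 2.f * c2 * n.y));
    e = add(e, mul(sh[2], 2.f * c2 * n.z));
    e = add(e, mul(sh[3], 2.f * c2 * n.x));
    e = add(e, mul(sh[4], 2.f * c1 * n.x * n.y));
    e = add(e, mul(sh[5], 2.f * c1 * n.y * n.z));
    e = add(e, mul(sh[6], c3 * n.z * n.z - c5));
    e = add(e, mul(sh[7], 2.f * c1 * n.x * n.z));
    e = add(e, mul(sh[8], c1 * (n.x * n.x - n.y * n.y)));
    return e; // irradiance arriving around unit normal n
}
```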

Yes, this question is old, but people might still stumble upon it.
How about "Voxel Cone Tracing"?
Unreal Engine 4 implements it, and the algorithm is also described in a presentation:
http://www.unrealengine.com/files/misc/The_Technology_Behind_the_Elemental_Demo_16x9_%282%29.pdf
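At its core the technique voxelizes the scene into a mip-mapped 3D texture and then, instead of tracing many rays, marches a few wide cones through the mips. Here is a hedged sketch of that cone-marching loop, written as plain C++ for readability; in a real renderer it lives in a fragment shader, and sampleVoxels is a stand-in for textureLod on the 3D voxel texture:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec4 { float r, g, b, a; };

// Stand-in for a trilinear mip sample of the voxelized scene.
Vec4 sampleVoxels(Vec3 /*pos*/, float /*lod*/) { return {0.1f, 0.1f, 0.1f, 0.05f}; }

Vec4 traceCone(Vec3 origin, Vec3 dir, float halfAngleTan, float maxDist,
               float voxelSize) {
    Vec4 accum{0, 0, 0, 0};
    float dist = voxelSize;                  // start one voxel out to avoid self-hits
    while (dist < maxDist && accum.a < 1.0f) {
        float diameter = 2.0f * halfAngleTan * dist;              // cone footprint
        float lod = std::log2(std::max(diameter / voxelSize, 1.0f));
        Vec3 p{origin.x + dir.x * dist, origin.y + dir.y * dist,
               origin.z + dir.z * dist};
        Vec4 s = sampleVoxels(p, lod);
        // Front-to-back alpha compositing.
        float w = (1.0f - accum.a) * s.a;
        accum.r += w * s.r; accum.g += w * s.g; accum.b += w * s.b;
        accum.a += w;
        dist += std::max(diameter * 0.5f, voxelSize * 0.5f);      // footprint-sized step
    }
    return accum;
}
```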

Related

Random voxel terrain on the GPU (DC vs. CMS)

I want to create random fractal terrain on the GPU (with a compute shader). I started by implementing marching cubes: Generating Complex Procedural Terrains Using the GPU, and it works really well: marching cubes on the gpu. However, marching cubes can't extract sharp features or use an adaptive resolution.
So I looked for a more advanced isosurface extraction algorithm and found Dual Contouring of Hermite Data. I implemented DC in Java to test the algorithm, and it looks great: dc on the cpu. (There are some holes in the mesh and no sharp features, but I was too lazy to implement/fix this because it's only a prototype.)
But I've noticed some negative aspects:
It is intercell-dependent. (I have no idea how to port this to the GPU; the only resource I've found is Dual Contouring with OpenCL.)
I don't know how to create a chunk system, because there are no "clear" borders: https://i.stack.imgur.com/62dy6.png
So I continued my search for a better algorithm and found Cubical Marching Squares: Adaptive Feature Preserving Surface Extraction from Volume Data. This seems to be the perfect algorithm for me: intercell-independent, adaptive, sharp features, primal structure and even manifold. Unfortunately, there are no resources on how to implement it except for this Cubical Marching Squares Implementation. I think I understand the two parts of the algorithm: create a grid; then, for each cell:
Subdivide until the maximum depth is reached or there is no need to do so.
Split each cell into 6 faces, extract their surface and stitch them together.
But I don't know how to connect those two parts (especially the part with transitional faces, page 38).
So does anybody know how to implement DC as a shader, how to implement CMS, or a better algorithm? (Maybe dual marching cubes; I think it has the same problem as DC, but I haven't tested it yet.)
As you have already mentioned, for the GPU you need no intercell dependencies... I'm sure people have come up with workarounds for that with DC, but CMS should be one of the best for all the things you need: it is intercell-independent (by definition), preserves sharp features, creates manifold geometry, and supports adaptive resolution.
In terms of resources on CMS I agree they are quite limited.
The original paper: http://graphics.csie.ntu.edu.tw/CMS/
This project by Matt Keeter has a 'C' CMS implementation in it (https://www.mattkeeter.com/projects/kokopelli/)
https://github.com/mkeeter/kokopelli <-- code
I used Matt Keeter's implementation as a reference for my partial 'C++' implementation (the thesis that you linked); here is the code for it in case you didn't find it:
https://bitbucket.org/GRassovsky/cubical-marching-squares
However, keep in mind that it is indeed partial: the main algorithm works (adaptive and manifold), but I haven't got around to implementing the sharp-feature preservation, the 2D and 3D disambiguation, etc. It is also currently a basic CPU implementation... I had good intentions to implement all those things and to make a GPU implementation, but so far I have had no time.
"But I don't know how to connect those two parts" - that only goes to show how badly I have written my thesis, because I do try to explain how I do this :D (Mind you, I am not sure exactly how it was done in the original paper... let's just say that some things are not very clear there and you have to use your imagination :))

OpenGL Picking from a large set

In JOGL, I'm trying to pick from a large set of rendered quads (several thousand). Does anyone have any recommendations?
To give you more detail, I'm plotting a large set of data as billboards with procedurally created textures.
I've seen this post OpenGL GL_SELECT or manual collision detection? and have found it helpful. However it can take my program up to several minutes to complete a rendering of the full set, so I don't think drawing 2x (for color picking) is an option.
I'm currently drawing with calls to glBegin/glVertex.../glEnd. If I made the switch to batch rendering on the GPU with VAOs and VBOs, do you think I would get a speedup large enough to make color picking viable?
If not, given all of the recommendations against using GL_SELECT, do you think it would be worth me using it?
I've investigated multithreaded CPU approaches to picking these quads that sidestep OpenGL altogether. Do you think an OpenGL-less CPU solution is the way to go?
Sorry for all the questions. My main question remains: what is a good way to pick from a large set of quads using OpenGL (JOGL)?
The best way to pick from a large number of quads cannot be easily defined. I don't like color picking or similar techniques very much, because they seem impractical for most situations. I never understood why so many tutorials aimed at people who are new to OpenGL (or even to programming) focus on picking, which is just about useless for nearly everything. For example: try to get the pixel you clicked on in a heightmap: not possible. Try to locate the exact mesh in a model you clicked on: impractical.
If you have a large number of quads you will probably need good spatial partitioning, or at least (better: also) a scene graph. OK, you don't strictly need this, but it helps a lot. Look at some scene graph tutorials for further information; it's a good thing to know when you start with 3D programming, because you learn a lot of concepts and not just OpenGL code.
So how do you start with some picking? Take the inverse of your modelview matrix (IIRC, gluUnProject(...) does this for you) at the position of your mouse cursor. With the orientation of your camera you can now cast a ray into your spatial structure (or into the scene graph that holds it). Then check for collisions with your quads. I currently have no link, but if you search for "inverse modelview matrix" you should find pages that explain this better and in more detail than is practical here.
With this raycasting-based technique you will be able to find your quad in O(log n), where n is the number of quads you have. With some heuristics based on the exact layout of your application (your question is too generic to be more specific) you can improve this a lot for most cases.
An easy spatial structure for this is, for example, a quadtree. However, you should start with the raycasting first to fully understand the technique.
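A hedged sketch of the unproject-and-raycast idea, using GLM for the matrix math (the question is JOGL/Java, but the approach is identical): two unprojected points through the cursor define a world-space ray, which is then tested against the candidate quads returned by the spatial structure. All names here are illustrative:

```cpp
#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

struct Ray { glm::vec3 origin, dir; };

Ray rayFromCursor(float mx, float my, const glm::mat4& view,
                  const glm::mat4& proj, const glm::vec4& viewport) {
    // Window y grows downwards; unProject expects it growing upwards.
    float wy = viewport.w - my;
    glm::vec3 nearPt = glm::unProject(glm::vec3(mx, wy, 0.0f), view, proj, viewport);
    glm::vec3 farPt  = glm::unProject(glm::vec3(mx, wy, 1.0f), view, proj, viewport);
    return {nearPt, glm::normalize(farPt - nearPt)};
}

// Ray vs. quad: intersect the quad's plane, then bound-check in quad space.
// center/right/up describe the billboard; halfW/halfH are its half extents.
bool hitQuad(const Ray& r, glm::vec3 center, glm::vec3 right, glm::vec3 up,
             float halfW, float halfH, float& tOut) {
    glm::vec3 n = glm::cross(right, up);
    float denom = glm::dot(n, r.dir);
    if (std::fabs(denom) < 1e-6f) return false;        // ray parallel to quad
    float t = glm::dot(n, center - r.origin) / denom;
    if (t < 0.0f) return false;                        // quad behind the camera
    glm::vec3 p = r.origin + t * r.dir - center;
    if (std::fabs(glm::dot(p, glm::normalize(right))) > halfW) return false;
    if (std::fabs(glm::dot(p, glm::normalize(up)))    > halfH) return false;
    tOut = t;                                          // keep the smallest t of all hits
    return true;
}
```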
I have never faced this exact problem, but in my opinion CPU-based picking is the best way to try.
If you have a large set of quads, maybe you can group the quads spatially to avoid testing all of them. For example, you can group the quads into two boxes and first test which box is hit, so that you only have to test the quads inside it.
I just implemented color picking, but glReadPixels is slow here (I've read somewhere that it can hurt the asynchronous behaviour between the GL and the CPU).
Another possibility seems to be using transform feedback and a geometry shader that does a scissor-like test. The GS can then discard all faces that do not contain the mouse position; the transform feedback buffer then contains exactly the information about the hovered meshes.
You probably want to write the depth to the transform feedback buffer too, so that you can find the topmost hovered mesh.
This approach also works nicely with instancing (additionally write the instance ID to the buffer).
I haven't tried it yet, but I guess it will be a lot faster than using glReadPixels.
I only found this reference for this approach.
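A hedged sketch of what such a geometry shader could look like, embedded as a GLSL string (the names uMouseNdc, vId, outHitId etc. are illustrative, and the host-side transform feedback setup is omitted):

```cpp
// Geometry shader implementing the "scissor test" picking idea: discard
// every triangle that does not contain the mouse position in NDC, and
// stream the IDs and depths of the hit triangles to transform feedback.
const char* pickGeometryShader = R"(
#version 330 core
layout(triangles) in;
layout(points, max_vertices = 1) out;

uniform vec2 uMouseNdc;     // mouse position in [-1,1]^2
flat in  int   vId[];       // per-vertex object/instance id from the VS
flat out int   outHitId;    // captured via transform feedback
out      float outHitDepth; // captured too, to sort for the topmost hit

void main() {
    // Perspective divide to NDC for the three corners.
    vec3 a = gl_in[0].gl_Position.xyz / gl_in[0].gl_Position.w;
    vec3 b = gl_in[1].gl_Position.xyz / gl_in[1].gl_Position.w;
    vec3 c = gl_in[2].gl_Position.xyz / gl_in[2].gl_Position.w;

    // 2D barycentric test: is the mouse inside this triangle?
    vec2 v0 = b.xy - a.xy, v1 = c.xy - a.xy, v2 = uMouseNdc - a.xy;
    float den = v0.x * v1.y - v1.x * v0.y;
    if (abs(den) < 1e-12) return;                   // degenerate triangle
    float u = (v2.x * v1.y - v1.x * v2.y) / den;
    float v = (v0.x * v2.y - v2.x * v0.y) / den;
    if (u < 0.0 || v < 0.0 || u + v > 1.0) return;  // mouse outside: discard

    outHitId    = vId[0];
    outHitDepth = a.z * (1.0 - u - v) + b.z * u + c.z * v;  // interpolated depth
    EmitVertex();
    EndPrimitive();
}
)";
```

On the host side you would enable GL_RASTERIZER_DISCARD during this pass, capture outHitId/outHitDepth with transform feedback, and use a GL_TRANSFORM_FEEDBACK_PRIMITIVES_WRITTEN query to learn how many hits were recorded.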
I'm using a solution that I borrowed from the DirectX SDK; there's a nice example there of how to detect the selected polygon in a vertex buffer object.
The same algorithm works nicely with OpenGL.

How to create fast and easy scene-independent shadows w/o shaders in OpenGL

Say I have some mesh (e.g. a sphere) in the center of a room that is full of cubes, plus one light source. How can I do fast and easy shadow casting in OpenGL, using only "standard" (fixed-function) calls? Note: the result must contain the cube shadows as well as the sphere's.
If you can generate a silhouette of the sphere then you could use shadow volumes (a sketch of the silhouette test follows below). nVidia hardware has also supported fixed-function shadow mapping for a fair while.
Shadow volumes have the disadvantage of very high fill rate requirements. Shadow maps can be better but require an extra pass.
If you are projecting onto a single plane, it may well be easier to just project the object onto that plane.
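Regarding the silhouette mentioned above: for a closed mesh, an edge is on the silhouette exactly when one of its two adjacent triangles faces the light and the other does not. A minimal sketch, with illustrative types and a precomputed edge-adjacency list assumed:

```cpp
#include <array>
#include <vector>

struct V3 { float x, y, z; };
static V3 sub(V3 a, V3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static V3 cross(V3 a, V3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static float dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Edge { int v0, v1, triA, triB; };  // an edge and its two adjacent triangles

static bool facesLight(const std::vector<V3>& v, const std::array<int, 3>& t,
                       V3 lightPos) {
    V3 n = cross(sub(v[t[1]], v[t[0]]), sub(v[t[2]], v[t[0]]));
    return dot(n, sub(lightPos, v[t[0]])) > 0.0f;  // light on the front side?
}

// Each silhouette edge is then extruded away from the light to form the
// side quads of the shadow volume.
std::vector<Edge> silhouette(const std::vector<V3>& verts,
                             const std::vector<std::array<int, 3>>& tris,
                             const std::vector<Edge>& edges, V3 lightPos) {
    std::vector<Edge> out;
    for (const Edge& e : edges)
        if (facesLight(verts, tris[e.triA], lightPos) !=
            facesLight(verts, tris[e.triB], lightPos))
            out.push_back(e);
    return out;
}
```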
There is no fast and easy way. There are lots of different techniques, each with its own pros and cons. You can look at a project I host on GitHub, which uses very simple code to create a shadow using the shadow volume technique (http://iuiz.github.com/VolumeShadow/). It is written in Java, but it should not be hard to port it to any other language.
The most important ways to create shadows are the so-called "shadow mapping" method, where you render your scene (with the camera at the light source, directed at each shadow-casting object) to a texture, and the shadow volume method (made famous by Doom 3).
I've found one way using stencil buffers. After being confused for a little while, I finally got the idea: the hardest part is looping through each light source and projecting all scene objects. This looks prettier than texture shadowing and works faster than volumetric shadows. Here and here are some resources which helped me understand the matrix multiplication step (it confused me a bit when I was looking through the dino demo). To me, this method is the easiest to understand and use. The only question left to solve is how to calculate the projection matrix.
This method could also be extended a bit using textures, as shown here.
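For reference, the matrix those demos compute is most likely the classic planar-shadow matrix M = dot(P, L) * I - L * P^T, which flattens geometry onto the plane P = (a, b, c, d) as seen from the homogeneous light position L. A hedged sketch in C-style fixed-function OpenGL (function name illustrative):

```cpp
// Builds the planar-shadow matrix for plane a*x + b*y + c*z + d = 0 and
// light position light = (lx, ly, lz, lw). Multiply it onto the modelview
// stack (glMultMatrixf) before redrawing the occluders in a dark color
// with the stencil test enabled.
void buildShadowMatrix(float m[16], const float plane[4], const float light[4]) {
    float dot = plane[0] * light[0] + plane[1] * light[1] +
                plane[2] * light[2] + plane[3] * light[3];
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row)
            // Column-major storage, as OpenGL expects.
            m[col * 4 + row] = (row == col ? dot : 0.0f)
                               - light[row] * plane[col];
    // light[3] = 1 for a point light, 0 for a directional light.
}
```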
Thanks everybody! =)

How to do ray tracing in modern OpenGL?

So I'm at the point where I should begin lighting my flatly colored models. The test application is a test case for implementing only the latest methods, so I figured that ideally it should implement ray tracing (since, theoretically, it might be ideal for real-time graphics in a few years).
But where do I start?
Assume I have never done lighting in old OpenGL, so I would be going directly to non-deprecated methods.
The application has currently properly set up vertex buffer objects, vertex, normal and color input and it correctly draws and transforms models in space, in a flat color.
Is there a source of information that would take one from flat colored vertices to all that is needed for a proper end result via GLSL? Ideally with any other additional lighting methods that might be required to complement it.
I would not advise trying actual ray tracing in OpenGL, because you need a lot of hacks and tricks for that and, if you ask me, there is not much point in doing so anymore.
If you want to do ray tracing on the GPU, you should go with a GPGPU language such as CUDA or OpenCL, because it makes things a lot easier (but still far from trivial).
To illustrate the problem a bit further:
For ray tracing, you need to trace secondary rays and test them for intersection with the geometry. You therefore need access to the geometry in some clever way inside your shader; however, inside a fragment shader you cannot access the geometry unless you store it encoded in some texture. The vertex shader does not natively provide this geometry information either, and geometry shaders only know their neighbors, so the trouble starts right there.
Next, you need acceleration data structures to get any reasonable frame rates. However, traversing e.g. a kd-tree inside a shader is quite difficult, and if I recall correctly there are several papers devoted solely to this problem.
If you really want to go this route, though, there are a lot of papers on the topic; it should not be too hard to find them.
A ray tracer requires extremely well-designed access patterns and caching to reach good performance, but you have only little control over these inside GLSL, and optimizing the performance can get really tough.
Another point to note is that, at least to my knowledge, real-time ray tracing on GPUs is mostly limited to static scenes, because e.g. kd-trees only work (well) for static scenes. If you want dynamic scenes, you need other data structures (e.g. BVHs, IIRC), but then you constantly need to maintain those. If I haven't missed anything, there is still a lot of research going on just on this issue.
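To make the "store it encoded in some texture" point concrete, here is a hedged host-side sketch that packs a triangle list into an OpenGL buffer texture so that a fragment shader can fetch arbitrary vertices with texelFetch (GL 3.1+; the layout and names are illustrative):

```cpp
#include <GL/glew.h>

// xyzw: 3 vec4 texels per triangle (xyz position + one float of padding each).
GLuint makeTriangleBufferTexture(const float* xyzw, int numTris) {
    GLuint buf, tex;
    glGenBuffers(1, &buf);
    glBindBuffer(GL_TEXTURE_BUFFER, buf);
    glBufferData(GL_TEXTURE_BUFFER, numTris * 3 * 4 * sizeof(float),
                 xyzw, GL_STATIC_DRAW);
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_BUFFER, tex);
    glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, buf);  // expose the buffer as texels
    return tex;
    // GLSL side:  uniform samplerBuffer uTriangles;
    //             vec3 v0 = texelFetch(uTriangles, 3 * i + 0).xyz;  // etc.
}
```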
You may be confusing some things.
OpenGL is a rasterizer. Forcing it to do raytracing is possible, but difficult. This is why raytracing is not "ideal for real time graphics in a few years". In a few years, only hybrid systems will be viable.
So, you have three possibilities.
Pure ray tracing. Render only a fullscreen quad, and in your fragment shader, read your scene description packed in a buffer (like a texture), traverse the hierarchy, and compute ray-triangle intersections (a minimal intersection routine is sketched after this list).
Hybrid ray tracing. Rasterize your scene the normal way, and use ray tracing in your shader only on the parts of the scene that really require it (refraction, etc.; though much of that can be simulated in rasterization).
Pure rasterization. The fragment shader does its normal job.
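For the "compute ray-triangle intersections" part of option 1, here is a minimal Möller-Trumbore intersection routine, written in C++ for readability; porting it to GLSL is mechanical (vec3, cross, dot):

```cpp
#include <cmath>

struct V3 { float x, y, z; };
static V3 sub(V3 a, V3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static V3 cross(V3 a, V3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static float dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns true and the ray parameter t if ray (orig, dir) hits triangle (v0, v1, v2).
bool intersect(V3 orig, V3 dir, V3 v0, V3 v1, V3 v2, float& t) {
    V3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    V3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < 1e-8f) return false;   // ray parallel to the triangle
    float inv = 1.0f / det;
    V3 s = sub(orig, v0);
    float u = dot(s, p) * inv;                  // first barycentric coordinate
    if (u < 0.0f || u > 1.0f) return false;
    V3 q = cross(s, e1);
    float v = dot(dir, q) * inv;                // second barycentric coordinate
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * inv;
    return t > 0.0f;                            // hit in front of the origin
}
```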
What exactly do you want to achieve? I can improve this answer depending on your needs.
Anyway, this SO question is highly related. Even if that particular implementation has a bug, it's definitely the way to go. Another possibility is OpenCL, but the concept is the same.
As of 2019, ray tracing is an option for real-time rendering, but it requires high-end GPUs that most users don't have.
Some of these GPUs are designed specifically for ray tracing.
OpenGL currently does not support hardware accelerated ray tracing.
DirectX 12 on Windows does have support for it. It is recommended to wait a few more years before creating a ray-tracing-only renderer, although it is possible using DirectX 12 on current desktop and laptop hardware. Support on mobile may take a while.
OpenGL (GLSL) can be used for ray (path) tracing; however, there are a few better options: NVIDIA OptiX (CUDA toolkit; cross-platform), DirectX 12 (with the NVIDIA ray tracing extension DXR; Windows only), Vulkan (with the NVIDIA ray tracing extension VKR; cross-platform and widely used), Metal (macOS only), Falcor (a framework based on DXR, VKR and OptiX), and Intel Embree (CPU ray tracing only).
I found some of the other answers verbose. For visual proof that functional ray tracers absolutely CAN be built using the OpenGL API, I highly recommend checking out some of the projects people are making on https://www.shadertoy.com/ (warning: lag).
To answer the topic: OpenGL has no RTX extension, but Vulkan has one, and interop is possible. Example here: https://github.com/nvpro-samples/gl_vk_raytrace_interop
As for the actual question: to light the triangles, there are tons of techniques; look up "forward", "forward+" or "deferred" renderers. The technique to use depends on your goal. The simplest and best-looking one these days is image-based lighting (IBL) with physically based shading (PBS). Basically, you use a cubemap and blur it more or less depending on the glossiness of the object. For a simple object viewer you don't need more.
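A hedged sketch of that cubemap idea as a GLSL fragment shader (in a C++ string): the environment map is assumed to be prefiltered so that higher mip levels are progressively blurrier, and roughness simply selects the mip. All uniform and varying names are illustrative, and the diffuse/specular split is deliberately crude:

```cpp
const char* iblFragmentShader = R"(
#version 330 core
uniform samplerCube uEnvMap;   // prefiltered environment map
uniform float uMaxLod;         // index of the blurriest mip level
in  vec3  vNormal;
in  vec3  vViewDir;            // surface-to-camera direction
in  float vRoughness;          // 0 = mirror, 1 = fully diffuse
out vec4  fragColor;

void main() {
    vec3 n = normalize(vNormal);
    vec3 r = reflect(-normalize(vViewDir), n);
    // Glossy part: blurrier reflection as roughness grows.
    vec3 specular = textureLod(uEnvMap, r, vRoughness * uMaxLod).rgb;
    // Diffuse part: the blurriest mip approximates irradiance.
    vec3 diffuse = textureLod(uEnvMap, n, uMaxLod).rgb;
    fragColor = vec4(mix(specular, diffuse, vRoughness), 1.0);
}
)";
```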

OpenGL: Easiest way to make shadow and light Volumes?

I want to ask what the easiest way is to make shadow and light volumes. How can I bring more realism to the scene? Do you know any nice tricks? I hear that to make shadows I must use the stencil buffer, but I don't know how :/ I can't find any super simple example of how to do it.
There's no super simple way to do shadows. Sorry to disappoint you but shadows are one of the more complex problems in computer graphics, especially if they have to look good.
Now with that said here are some maybe helpful links for further reading:
The Theory of Stencil Shadow Volumes
Shadow Mapping with Today's OpenGL Hardware
Real-time Shadow Algorithms and Techniques
There's a simple example of shadow mapping in the NVIDIA SDK 9 here (paper), which might be easy to adapt. There's also a section on shadows in every volume of GPU Gems, and a good overview in the Real-Time Rendering book (without code).
The Wolfire blog has had some good articles on shadows. Nothing too technical, no code samples, but to get a good overview of the concepts, they are great (and I love the pictures that always accompany the articles!).
Here is a full list of every article with "shadow" or "shadows" in the title. You may also choose to search their blog for "shadow|shadows" to see every possible article, but beyond this list you probably won't find much. You might also want to add "-alpha" so that you don't get hits from their weekly alpha updates, which wouldn't have any worthwhile content.
2006/05/10: Starting shadows
2006/05/18: More shadows
2008/11/24: High-detail terrain shadows
2008/12/02: Object shadows
2009/03/29: Environment shadows - step 1
2009/04/03: Environment shadows - step 2
2009/04/07: Environment shadows - step 3
2009/04/10: Environment shadows - step 4
2009/11/13: Character shadows
2010/03/17: Two-part shadow maps
2010/04/19: Catching baked shadows
(list gathered 2010/05/19 by a google search for site:blog.wolfire.com intitle:shadow|shadows)
These questions are not easy to answer here; they require some study and an understanding of how graphics primitives work. However, there are some good sites on the web you can take a look at, like NeHe and GameDev. There are lots of articles and tutorials there; just take some time to search and read them. There are also rendering engines you can use that will do a lot of nice things for you, like Ogre3D and Irrlicht, but if you can't understand the principles behind them (like shadows, illumination...), I recommend you try OpenGL first, learn it, and then use an engine to get the work done for you.
In addition to the other useful sources mentioned here, you should consider getting an introductory text on linear algebra, or Eric Lengyel's excellent Mathematics for 3D Game Programming and Computer Graphics, Second Edition. Computer graphics is made of math, and at some level it gets really hard to implement things out of a cookbook without some understanding of the underlying algebra.