How to do ray tracing in modern OpenGL?

So I'm at the point where I should begin lighting my flatly colored models. The test application is a test case for implementing only the latest methods, so I figured that ideally it should implement ray tracing (since, theoretically, it might be ideal for real-time graphics in a few years).
But where do I start?
Assume I have never done lighting in old OpenGL, so I would be going directly to non-deprecated methods.
The application currently has properly set up vertex buffer objects with vertex, normal and color input, and it correctly draws and transforms models in space, in a flat color.
Is there a source of information that would take one from flat colored vertices to all that is needed for a proper end result via GLSL? Ideally with any other additional lighting methods that might be required to complement it.

I would not advise trying actual ray tracing in OpenGL, because you need a lot of hacks and tricks for that and, if you ask me, there is no longer much point in doing so at all.
If you want to do ray tracing on the GPU, you should go with a GPGPU language such as CUDA or OpenCL, because it makes things a lot easier (though still far from trivial).
To illustrate the problem a bit further:
For ray tracing, you need to trace secondary rays and test them for intersection with the geometry. Therefore, you need access to the geometry in some clever way inside your shader; however, inside a fragment shader you cannot access the geometry unless you store it "encoded" in some texture. The vertex shader does not provide this geometry information natively either, and geometry shaders only know their neighbors, so the trouble already starts here.
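To give a feel for what "coding the geometry into a texture" means in practice, here is a minimal, illustrative GLSL sketch; the buffer layout, the uniform names and the brute-force loop are my own assumptions, not an established recipe. Three consecutive texels of a buffer texture are assumed to hold one triangle, and the fragment shader fetches and intersects them:

    #version 330 core

    // Illustrative only: every three consecutive texels of this buffer texture
    // are assumed to hold the three vertices (x,y,z) of one triangle.
    uniform samplerBuffer triangleVertices;
    uniform int triangleCount;

    // Moeller-Trumbore ray/triangle test; returns the hit distance t, or -1.0 on a miss.
    float intersectTriangle(vec3 orig, vec3 dir, vec3 v0, vec3 v1, vec3 v2)
    {
        vec3 e1 = v1 - v0;
        vec3 e2 = v2 - v0;
        vec3 p  = cross(dir, e2);
        float det = dot(e1, p);
        if (abs(det) < 1e-6) return -1.0;     // ray parallel to triangle
        float inv = 1.0 / det;
        vec3 t0 = orig - v0;
        float u = dot(t0, p) * inv;
        if (u < 0.0 || u > 1.0) return -1.0;
        vec3 q = cross(t0, e1);
        float v = dot(dir, q) * inv;
        if (v < 0.0 || u + v > 1.0) return -1.0;
        return dot(e2, q) * inv;
    }

    // Brute-force closest hit; a real tracer would replace this loop
    // with a BVH or kd-tree traversal (see below).
    float traceClosest(vec3 orig, vec3 dir)
    {
        float best = 1e30;
        for (int i = 0; i < triangleCount; ++i) {
            vec3 v0 = texelFetch(triangleVertices, 3 * i + 0).xyz;
            vec3 v1 = texelFetch(triangleVertices, 3 * i + 1).xyz;
            vec3 v2 = texelFetch(triangleVertices, 3 * i + 2).xyz;
            float t = intersectTriangle(orig, dir, v0, v1, v2);
            if (t > 0.0 && t < best) best = t;
        }
        return best;
    }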
Next, you need acceleration data structures to get any reasonable frame rates. However, traversing e.g. a kd-tree inside a shader is quite difficult, and if I recall correctly, there are several papers devoted solely to this problem.
If you really want to go this route, though, there are a lot of papers on this topic; they should not be too hard to find.
A ray tracer requires extremely well-designed access patterns and caching to reach good performance. However, you have very little control over these inside GLSL, and optimizing the performance can get really tough.
Another point to note is that, at least to my knowledge, real-time ray tracing on GPUs is mostly limited to static scenes, because e.g. kd-trees only work (well) for static scenes. If you want dynamic scenes, you need other data structures (e.g. BVHs, if I recall correctly), but then you constantly need to maintain those. If I haven't missed anything, there is still a lot of research going on just on this issue.

You may be confusing some things.
OpenGL is a rasterizer. Forcing it to do raytracing is possible, but difficult. This is why raytracing is not "ideal for real time graphics in a few years". In a few years, only hybrid systems will be viable.
So, you have three possibilities.
Pure raytracing. Render only a fullscreen quad, and in your fragment shader, read your scene description packed in a buffer (like a texture), traverse the hierarchy, and compute ray-triangle intersections (see the sketch after this list).
Hybrid raytracing. Rasterize your scene the normal way, and use raytracing in your shader only on the parts of the scene that really require it (refraction, ... though these can often be simulated in rasterization).
Pure rasterization. The fragment shader does its normal job.
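To make option 1 concrete, here is a bare-bones, hedged sketch of such a fullscreen-quad fragment shader. For brevity it intersects a single hard-coded sphere instead of reading a packed scene buffer and traversing a hierarchy, and the uniform name is made up:

    #version 330 core

    uniform vec2 resolution;   // viewport size in pixels (assumed uniform)
    out vec4 fragColor;

    // Ray/sphere intersection; returns the nearest positive hit distance or -1.0.
    float intersectSphere(vec3 ro, vec3 rd, vec3 center, float radius)
    {
        vec3 oc = ro - center;
        float b = dot(oc, rd);
        float c = dot(oc, oc) - radius * radius;
        float h = b * b - c;
        return (h < 0.0) ? -1.0 : -b - sqrt(h);
    }

    void main()
    {
        // Build a primary ray for this pixel from a pinhole camera at the origin.
        vec2 uv = (gl_FragCoord.xy / resolution) * 2.0 - 1.0;
        uv.x *= resolution.x / resolution.y;
        vec3 ro = vec3(0.0);
        vec3 rd = normalize(vec3(uv, -1.5));

        vec3 center = vec3(0.0, 0.0, -4.0);
        float t = intersectSphere(ro, rd, center, 1.0);
        if (t > 0.0) {
            // Trivial diffuse shading from a fixed light direction.
            vec3 n = normalize(ro + t * rd - center);
            float diff = max(dot(n, normalize(vec3(1.0, 1.0, 0.5))), 0.0);
            fragColor = vec4(vec3(diff), 1.0);
        } else {
            fragColor = vec4(0.1, 0.1, 0.15, 1.0);   // background
        }
    }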
What exactly do you want to achieve? I can improve the answer depending on your needs.
Anyway, this SO question is highly related. Even if this particular implementation has a bug, it's definitely the way to go. Another possibility is OpenCL, but the concept is the same.

As of 2019, ray tracing is an option for real-time rendering, but it requires high-end GPUs that most users don't have.
Some of these GPUs are designed specifically for ray tracing.
OpenGL currently does not support hardware-accelerated ray tracing.
DirectX 12 on Windows does have support for it. It is recommended to wait a few more years before creating a ray-tracing-only renderer, although it is possible using DirectX 12 with current desktop and laptop hardware. Support on mobile may take a while.

OpenGL (GLSL) can be used for ray (path) tracing. However, there are a few better options: Nvidia OptiX (CUDA toolkit -- cross-platform), DirectX 12 (with the DXR ray tracing extension -- Windows only), Vulkan (with Nvidia's ray tracing extension, often called VKR -- cross-platform and widely used), Metal (macOS only), Falcor (a framework built on DXR, VKR and OptiX), and Intel Embree (CPU ray tracing only).

I found some of the other answers to be verbose. For visual proof that yes, functional ray tracers absolutely can be built using the OpenGL API, I highly recommend checking out some of the projects people are making on https://www.shadertoy.com/ (Warning: lag)

To answer the topic: OpenGL has no RTX extension, but Vulkan does, and interop is possible. Example here: https://github.com/nvpro-samples/gl_vk_raytrace_interop
As for the actual question: to light the triangles, there are tons of techniques; look up "forward", "forward+" or "deferred" renderers. The technique to use depends on your goal. The simplest and best-looking these days is image-based lighting (IBL) with physically based shading (PBS). Basically, you use a cubemap and blur it more or less depending on the glossiness of the object. For a simple object viewer you don't need more.
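If it helps, here is a rough sketch of that cubemap idea in GLSL. It assumes the cubemap was prefiltered offline so that higher mip levels correspond to higher roughness; the uniform and varying names are invented, and the diffuse term is heavily simplified:

    #version 330 core

    uniform samplerCube environmentMap;
    uniform float roughness;      // 0 = mirror-like, 1 = very rough (assumed convention)
    uniform float maxMipLevel;    // index of the blurriest prefiltered mip

    in vec3 worldNormal;
    in vec3 viewDirection;        // surface-to-camera direction, world space
    out vec4 fragColor;

    void main()
    {
        vec3 n = normalize(worldNormal);
        vec3 r = reflect(-normalize(viewDirection), n);
        // Blurrier reflection lookup for rougher surfaces.
        vec3 specular = textureLod(environmentMap, r, roughness * maxMipLevel).rgb;
        // Very blurry lookup along the normal as a crude stand-in for diffuse irradiance.
        vec3 diffuse = textureLod(environmentMap, n, maxMipLevel).rgb;
        fragColor = vec4(mix(specular, diffuse, roughness), 1.0);
    }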

Related

C++ OpenGL wrapper: interface similar to fixed pipeline, can export .collada

I have OpenGL code that uses the fixed pipeline.
Killing two birds with one stone, I need a wrapper that can help me with the following tasks:
Convert the code to the new shader-based pipeline with minimal effort.
I have a class that calls opengl functions, such as: glBegin(triangles/lines), glVertex, glPushMatrix, glTranslate, glColor, gluSphere.
Ideally, I'd like it to derive from a class that supplies these functions in the base class. Behind the scenes, it would use the same high level logic as the fixed pipeline.
I'd like to export an opengl scene to .collada to load in an external renderer.
OpenGL is a low-level rendering API, and it doesn't have the concept of a scene. For example, this reddit post:
"You realize that you have to write a shim to capture all API calls
you are interested in to do that. Then, when finally, a draw call is
emitted you have to parse every single vertex and collect the data
from all over the memory from the buffers that you have recorded from
the APi calls that set up VAOs, VBOs and IBOs. Then you have to parse
the shader source code so that you can see which uniforms and vertex
attributes contribute to vertex clip coordinate generation. Then you
also have to synthesize/guess which outputs are normal, color, texture
coordinate and so on from the shader source if the resulting program
even have those in .obj file format-wise.
This gets even more complicated if Compute is used to generate data
inside the GPU for any of the buffers. If geometry or tessellator is
used then you also have to implement one of those so that you get
accurate outputs from the vertex processing. TL;DR - you have to write
your own OpenGL 4.5 driver that does exactly the same things a real
hardware driver would do. Good luck with that."
However, my scene is simple, using the fixed pipeline operations above.
I'd like the wrapper to keep track and construct a scene that can be exported.
--
EDIT: Since asking for recommendations is off-topic, I'll ask the following question instead.
What I need above seems like something obvious that many people should have found useful. Since I can't find a library that accomplishes it, I'm wondering whether my approach is unreasonable.
More specifically, how do people port their legacy OpenGL code: do they rewrite the relevant part from scratch, or does everyone implement their own wrapper as I suggested?
What about constructing a scene to export to COLLADA?
Posted also:
https://community.khronos.org/t/c-opengl-wrapper-interface-similar-to-fixed-pipeline-can-export-collada/105829
Although there are some parts in legacy OpenGL that are not optimized in current drivers (like glDrawPixels, the raster drawing operations and indexed color mode), between modern hardware and the modest requirements of legacy applications, legacy OpenGL stuff runs well enough on modern systems.
The main reason to "modernize" legacy OpenGL code is if one wants to make use of modern features. Any sort of "wrapper" will just run into the same kind of design problems that the OpenGL API ran into between OpenGL 1.5 and OpenGL 2.1: lots of built-in variables, default state, implicit actions, etc. This is difficult to document properly, and even more difficult to use reliably. Which is the reason you usually don't find these kinds of wrappers.
If you find yourself in the situation that you absolutely must port your legacy code to modern OpenGL, e.g. to be interoperable with core contexts, then your best course of action will be a proper rewrite. Replace immediate mode calls with filling vertex buffers, and replace calls to glTexEnv…, glMaterial…, glLight… with loading appropriate shaders and setting their uniforms.
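To sketch what the shader side of such a rewrite can look like: a minimal fragment shader that stands in for a single fixed-function directional light, with the state that glLight/glMaterial used to set implicitly now supplied as plain uniforms (the names are illustrative, not a fixed convention):

    #version 330 core

    uniform vec3 lightDirection;    // direction the light shines in, normalized
    uniform vec3 lightColor;        // roughly what GL_DIFFUSE of the light was
    uniform vec3 materialDiffuse;   // roughly glMaterial(GL_DIFFUSE)
    uniform vec3 materialAmbient;   // roughly glMaterial(GL_AMBIENT)

    in vec3 interpolatedNormal;     // written by the vertex shader
    out vec4 fragColor;

    void main()
    {
        float nDotL = max(dot(normalize(interpolatedNormal), -lightDirection), 0.0);
        fragColor = vec4(materialAmbient + materialDiffuse * lightColor * nDotL, 1.0);
    }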
Or, if you want a quick and dirty method: just create two contexts, a modern one and a legacy one, and switch between them; often you can establish "list" sharing between them.

Replicating Cathode retro terminal effect?

I'm trying to replicate the effect of Cathode, but I'm not really aware of any rendering effects in SDL. Does anyone know the technique used in Cathode? Are they maybe using OpenGL and shaders?
If you are still interested in the subject I'm working on a similar project. The effects were obtained by using GLSL shaders.
You can grab the source code here: https://github.com/Swordifish90/cool-old-term/
The shader strings might not be extremely readable due to the extensive use of ternary operators (needed to customize the appearance), but they should give you a really good idea.
If you poke around a bit in the application bundle, you'll find that the only relevant framework is GLKit which, according to Apple, will "reduce the effort required to create new shader-based apps".
There's also a bunch of ".fragdata", ".vertdata", and ".glsldata" files, which are encrypted.
Very unfortunate for you.
So I would say: Yes, it's OpenGL shaders all the way.
Unfortunately, since the shaders are encrypted, you're going to have to locate suitable algorithms elsewhere.
(Perhaps it's possible to use the OpenGL debugging and profiling tools to capture the shader source as it is compiled, but I doubt it.)
You may have realized that Android phones have (or had?) such an animation when you put them to sleep. That code is available in a file named ElectronBeam.java.
However, it is Java code and uses GLES 1.0 with GLES 1.1 extensions, but the algorithm for bending the screen should be understandable.
It seems to be based on GLTerminal, which uses OpenGL; it would have to use OpenGL and shaders for speed.
I guess the fastest approximation would be to render the text to buffers within OpenGL and use a deformed 2d grid to create the "rounded corners" radial distortion.
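As a rough, hedged starting point, the radial "screen curvature" part can be faked in a fragment shader by warping the texture coordinates before sampling the offscreen texture the text was rendered into (the texture and uniform names here are made up):

    #version 330 core

    uniform sampler2D terminalText;   // offscreen render of the terminal text (assumed)
    uniform float curvature;          // e.g. something around 0.1 - 0.3

    in vec2 texCoord;                 // 0..1 across the fullscreen quad
    out vec4 fragColor;

    void main()
    {
        // Re-center, push the coordinates outward more strongly near the edges,
        // then map back to 0..1. Samples that fall off the screen become a black border.
        vec2 centered = texCoord * 2.0 - 1.0;
        centered *= 1.0 + curvature * dot(centered, centered);
        vec2 uv = centered * 0.5 + 0.5;

        if (any(lessThan(uv, vec2(0.0))) || any(greaterThan(uv, vec2(1.0)))) {
            fragColor = vec4(0.0, 0.0, 0.0, 1.0);
        } else {
            fragColor = texture(terminalText, uv);
        }
    }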
But it would take a lot of work to add all the features that cathode has, not to mention to run them quickly.
I suspect emulating a CRT perfectly is a bit like emulating an analog synth perfectly - hard to impossible.
If you want it to run fast without killing the CPU, the GPU is the best solution, so: pixel shaders. Pixel shaders can do all of these effects. I once made such an application; I wrote it in Silverlight, but that doesn't matter, since I used pixel shaders.
My suggestion is to write this in Qt4 and add pixel shader effects to the QWidget.

(rendering particles) Should I learn shader or OpenCL?

I am trying to run 100,000 or more particles.
I've been watching many tutorials and other examples that demonstrate the power of shaders and OpenCL.
In one example that I watched, each particle's position was calculated based on the position of your mouse pointer (the physical device that you hold with one hand, and the cursor on the screen).
The position of each particle was stored as RGB, with R being x, G being y, and B being z, and passed to the pixel shader. Each color pixel was then drawn at the corresponding particle's position afterward.
However, this approach felt absurd to me.
Isn't this approach or coding style rather to be avoided?
Shouldn't I learn how to use OpenCL and use the power of the GPU's multithreading to directly state and pass my intended code?
Isn't this approach or coding style rather to be avoided?
Why?
The entire point of shaders is for you to be able to do what you want, to more effectively express what you want to do, and to allow yourself greater control over the hardware.
You should never, ever be afraid of re-purposing something for a different functionality. Textures do not store colors; they store data, which can be color, but it can also be other stuff. The sooner you stop thinking of textures as pictures, the better off you will be as a graphics programmer.
The GPU and API exist to be used. Use it as you see fit; do not allow how you think the API should be used to limit you.
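To make the "position stored as RGB" trick concrete, here is a hedged sketch of the read side: a vertex shader that fetches each particle's position from a floating-point texture and turns it into a point. The texture layout and names are assumptions, not the tutorial's actual code.

    #version 330 core

    uniform sampler2D positionTexture;   // assumed RGBA32F, one particle per texel
    uniform int textureWidth;
    uniform mat4 viewProjection;

    void main()
    {
        // gl_VertexID selects the texel that holds this particle's position.
        ivec2 texel = ivec2(gl_VertexID % textureWidth, gl_VertexID / textureWidth);
        vec3 particlePos = texelFetch(positionTexture, texel, 0).rgb;  // R = x, G = y, B = z
        gl_Position = viewProjection * vec4(particlePos, 1.0);
    }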
Shouldn't I learn how to use OpenCL and use the power of the GPU's multithreading to directly state and pass my intended code?
Yesterday, I would have said "yes". However, today this was released: OpenGL compute shaders.
The fact that the OpenGL ARB and Khronos created this shader type and so forth is a tacit admission that OpenCL/OpenGL interop is not the most efficient way to generate data for rendering purposes. After all, if it was, there would be no need for OpenGL to have generalized compute functionality. There were 3 versions of GL 4.x that didn't provide this. The fact that it's here now is basically the ARB saying, "Yeah, OK, we need this."
If the ARB, staffed by many people who make the hardware, think that CL/GL interop is not the fastest way to go, then it's pretty clear that you should use compute shaders.
Of course, if you're trying to do something right now, that won't help; only NVIDIA has compute shader support. And even that's only in beta drivers. It will take many months before AMD gets support for them, and many more before that support becomes solid and stable enough to use.
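For reference, a minimal compute-shader particle update looks roughly like the sketch below. This uses the GLSL 4.30 syntax that shipped shortly after this answer was written; the buffer layout and names are assumptions.

    #version 430

    layout(local_size_x = 256) in;

    // One invocation per particle; positions and velocities live in storage buffers.
    layout(std430, binding = 0) buffer Positions  { vec4 positions[];  };
    layout(std430, binding = 1) buffer Velocities { vec4 velocities[]; };

    uniform float deltaTime;
    uniform vec3  gravity;    // e.g. vec3(0.0, -9.81, 0.0)

    void main()
    {
        uint i = gl_GlobalInvocationID.x;
        if (i >= uint(positions.length())) return;

        velocities[i].xyz += gravity * deltaTime;
        positions[i].xyz  += velocities[i].xyz * deltaTime;
    }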
Even so, you don't need compute shaders to generate data. People have used transform feedback and geometry shaders to do LOD and frustum culling for instanced rendering. Do not be afraid to think outside of the "OpenGL draws stuff" box.
To simulate particles in OpenCL, you should try out "Yet Another Shader Editor" / http://yase.chnk.us/ - it takes away all the tricky parts and lets you get down to the meat of coding the particle control algorithms. IN YOUR BROWSER. Nothing to download, no accounts to create, just alter whatever examples you find. It's a blast.
https://lotsacode.wordpress.com/2013/04/16/fun-with-particles-yet-another-shader-editor/
I'm not affiliated with yase in any way.

OpenGL Picking from a large set

I'm trying to, in JOGL, pick from a large set of rendered quads (several thousands). Does anyone have any recommendations?
To give you more detail, I'm plotting a large set of data as billboards with procedurally created textures.
I've seen this post OpenGL GL_SELECT or manual collision detection? and have found it helpful. However it can take my program up to several minutes to complete a rendering of the full set, so I don't think drawing 2x (for color picking) is an option.
I'm currently drawing with calls to glBegin/glVertex.../glEnd. If I made the switch to batched rendering on the GPU with VAOs and VBOs, do you think I would get a speedup large enough to make color picking feasible?
If not, given all of the recommendations against using GL_SELECT, do you think it would be worth using it anyway?
I've investigated multithreaded CPU approaches to picking these quads that sidestep OpenGL altogether. Do you think an OpenGL-less CPU solution is the way to go?
Sorry for all the questions. My main question remains: what's a good way to pick from a large set of quads using OpenGL (JOGL)?
The best way to pick from a large number of quads cannot be easily defined. I don't like color picking or similar techniques very much, because they seem impractical for most situations. I never understood why so many tutorials aimed at people who are new to OpenGL or even to programming focus on a picking technique that is useless for nearly everything. For example: try to get the pixel you clicked on in a heightmap: not possible. Try to locate the exact mesh you clicked on in a model: impractical.
If you have a large number of quads you will probably need good spatial partitioning or at least (better: also) a scene graph. OK, you don't strictly need this, but it helps A LOT. Look at some tutorials on scene graphs for further information; it's a good thing to know if you are starting with 3D programming, because you get to learn a lot of concepts and not only OpenGL code.
So what to do now to start with some picking? Take the inverse of your modelview matrix (if I recall correctly, with gluUnProject(...)) at the position where your mouse cursor is. With the orientation of your camera you can now cast a ray into your spatial structure (or your scene graph that holds a spatial structure). Now check for collisions with your quads. I currently have no link, but if you search for "inverse modelview matrix" you should find some pages that explain this better and in more detail than would be practical here.
With this raycasting-based technique you will be able to find your quad in O(log n), where n is the number of quads you have. With some heuristics based on the exact layout of your application (your question is too generic to be more specific) you can improve this a lot for most cases.
An easy spatial structure for this is, for example, a quadtree. However, you should start with the raycasting first to fully understand the technique.
I have never faced such a problem, but in my opinion CPU-based picking is the best way to try.
If you have a large set of quads, maybe you can group the quads by space to avoid testing all of them. For example, you can group the quads into two boxes and first test which box is hit, so you only have to test the quads inside that box.
I just implemented color picking, but glReadPixels is slow here (I've read somewhere that it can hurt the asynchronous behaviour between GL and the CPU).
Another possibility, it seems to me, is using transform feedback and a geometry shader that does the scissor test. The GS can then discard all faces that do not contain the mouse position. The transform feedback buffer then contains exactly the information about the hovered meshes.
You probably want to write the depth to the transform feedback buffer too, so that you can find the topmost hovered mesh.
This approach works also nice with instancing (additionally write the instance id to the buffer)
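A rough sketch of what such a geometry shader could look like. The mouse position is assumed to be supplied already mapped to normalized device coordinates, the id comes from the vertex shader, and the triangle test is reduced to a bounding-box check for brevity:

    #version 330 core

    layout(triangles) in;
    layout(points, max_vertices = 1) out;

    uniform vec2 mouseNdc;             // mouse position mapped to -1..1 (assumed)

    flat in  int   objectId[];         // e.g. an object or instance id per vertex
    flat out int   pickedId;           // captured via transform feedback
    flat out float pickedDepth;        // lets the host find the topmost hit

    void main()
    {
        // Screen-space bounding box of the triangle in NDC (ignores clipping for brevity).
        vec2 p0 = gl_in[0].gl_Position.xy / gl_in[0].gl_Position.w;
        vec2 p1 = gl_in[1].gl_Position.xy / gl_in[1].gl_Position.w;
        vec2 p2 = gl_in[2].gl_Position.xy / gl_in[2].gl_Position.w;
        vec2 lo = min(p0, min(p1, p2));
        vec2 hi = max(p0, max(p1, p2));

        if (all(greaterThanEqual(mouseNdc, lo)) && all(lessThanEqual(mouseNdc, hi))) {
            pickedId    = objectId[0];
            pickedDepth = gl_in[0].gl_Position.z / gl_in[0].gl_Position.w;
            EmitVertex();
            EndPrimitive();
        }
    }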
I haven't tried it yet, but I guess it will be a lot faster than using glReadPixels.
I only found this reference for this approach.
I'm using a solution that I borrowed from the DirectX SDK; there's a nice example of how to detect the selected polygon in a vertex buffer object.
The same algorithm works nice with OpenGL.

Using shader for calculations

Is it possible to use a shader to calculate some values and then return them for further use?
For example, I send a mesh down to the GPU with some parameters about how it should be modified (changing the positions of vertices), and take back the resulting mesh. This seems rather impossible to me, because I haven't seen any variable for communication from shaders back to the CPU. I'm using GLSL, so there are just uniforms, attributes and varyings. Should I use an attribute or a uniform; would they still be valid after rendering? Can I change the values of those variables and read them back on the CPU? There are methods for mapping data on the GPU, but would those be changed and valid?
This is how I'm thinking about it, though there could be another way that is unknown to me. I would be glad if someone could explain this to me, as I've just read some books about GLSL and now I would like to program more complex shaders; I wouldn't like to rely on methods that are impossible at this time.
Thanks
Great question! Welcome to the brave new world of General-Purpose Computing on Graphics Processing Units (GPGPU).
What you want to do is possible with pixel shaders. You load a texture (that is: data), apply a shader (to do the desired computation) and then use Render to Texture to pass the resulting data from the GPU to the main memory (RAM).
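As a tiny, hedged illustration of that pattern: a fragment shader that treats the input texture as a plain array of numbers and writes a computed result for every texel of the render target. The "square every value" computation is just a placeholder, and the names are invented.

    #version 330 core

    uniform sampler2D inputData;   // assumed to be a floating-point (e.g. RGBA32F) texture
    out vec4 result;               // rendered into an FBO-attached texture, read back on the CPU

    void main()
    {
        // One fragment per texel: fetch the value at this pixel and write f(value).
        vec4 value = texelFetch(inputData, ivec2(gl_FragCoord.xy), 0);
        result = value * value;    // placeholder computation
    }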
There are tools created for this purpose, most notably OpenCL and CUDA. They greatly simplify GPGPU, so that this sort of programming looks almost like CPU programming.
They do not require any 3D graphics experience (although it is still preferred :) ). You don't need to do tricks with textures; you just load arrays into GPU memory. Processing algorithms are written in a slightly modified version of C. The latest version of CUDA supports C++.
I recommend to start with CUDA, since it is the most mature one: http://www.nvidia.com/object/cuda_home_new.html
This is easily possible on modern graphics cards using either OpenCL, Microsoft DirectCompute (part of DirectX 11) or CUDA. The normal shader languages are utilized (GLSL and HLSL, for example). The first two work on both Nvidia and ATI graphics cards; CUDA is Nvidia-exclusive.
These are special libraries for computing stuff on the graphics card. I wouldn't use a normal 3D API for this, although it is possible with some workarounds.
Nowadays you can also use shader storage buffer objects (SSBOs, core since OpenGL 4.3) to write values from shaders that can then be read back on the host.
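For example, assuming the driver exposes OpenGL 4.3+ and allows storage-buffer writes from the vertex stage, a vertex shader can record its modified positions in an SSBO that the application later reads back with glGetBufferSubData or glMapBufferRange (after a glMemoryBarrier). The names and the displacement here are purely illustrative:

    #version 430

    // The modified vertex positions written here can be read back by the application.
    layout(std430, binding = 0) buffer ModifiedPositions {
        vec4 modifiedPositions[];
    };

    layout(location = 0) in vec3 inPosition;
    uniform mat4  modelViewProjection;
    uniform float displacement;          // example modification parameter

    void main()
    {
        vec3 moved = inPosition + vec3(0.0, displacement, 0.0);
        modifiedPositions[gl_VertexID] = vec4(moved, 1.0);
        gl_Position = modelViewProjection * vec4(moved, 1.0);
    }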
My best guess would be to point you to BehaveRT, which is a library created to harness GPUs for behavioral models. I think that if you can formulate your modifications in terms of the library, you could benefit from its abstraction.
As for passing the data back and forth between your CPU and GPU, I'll let you browse the documentation; I'm not sure about that part.