I am currently trying to learn ray casting on a 3D texture using something like glTexImage3D. I was following this tutorial from the start. My ultimate goal is to produce a program which can work like this:
My understanding is that this was rendered using a raycasting method, with the model imported as a 3D texture. The raycasting and texture sampling are performed in the fragment shader. I hope I can replicate this program as practice. Could you kindly answer my questions?
What file format should be used to import the 3D texture?
Which GLSL functions should I use to detect the distance between my ray and the texture?
What are the differences between 3D texture sampling and volume rendering?
Are there any available online tutorials for me to follow?
How can I produce my own 3D texture? (Is it possible to make one using blender?)
1. What file format should be used to import the 3D texture?
It doesn't matter; OpenGL doesn't deal with file formats.
2. Which GLSL functions should I use to detect the distance between my ray and the texture?
There's no "ready to use" raycasting function. You have to implement the raycaster yourself, i.e. between a start and an end point, sample the texture along a line (the ray) and integrate the samples into a final color value.
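A rough sketch of such a fragment-shader raycaster could look like this (untested; it assumes the application supplies the ray's entry and exit points in [0,1]^3 texture space, e.g. from rasterizing the bounding box's front and back faces, and all the names are mine):

```glsl
#version 330 core

// Minimal front-to-back compositing raycaster sketch.
uniform sampler3D volumeTex;
in vec3 vEntry;     // ray entry point in texture coordinates
in vec3 vExit;      // ray exit point in texture coordinates
out vec4 fragColor;

void main()
{
    const int numSteps = 256;
    vec3 dir     = vExit - vEntry;
    vec3 stepVec = dir / float(numSteps);

    vec4 accum = vec4(0.0);   // accumulated color and opacity
    vec3 pos   = vEntry;
    for (int i = 0; i < numSteps; ++i)
    {
        float density = texture(volumeTex, pos).r;        // sample the volume
        vec4 src = vec4(vec3(density), density * 0.05);   // toy transfer function

        // front-to-back "over" compositing
        accum.rgb += (1.0 - accum.a) * src.a * src.rgb;
        accum.a   += (1.0 - accum.a) * src.a;

        if (accum.a > 0.99) break;   // early ray termination
        pos += stepVec;
    }
    fragColor = accum;
}
```

The entry/exit points are typically obtained by rasterizing the volume's bounding box and either computing the exit point analytically in the shader or reading it back from a back-face pass.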
3. What are the differences between 3D texture sampling and volume rendering?
Sampling a 3D texture is not much different from sampling a 2D, 1D, cubemap or any other texture topology. For a given coordinate vector A, a certain value B is returned: either the value of the sample closest to the location A points to (nearest sampling) or an interpolated value. Volume rendering, by contrast, takes many such samples along each view ray and composites them into a final color.
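As a tiny illustration of that single lookup (untested; the names are mine):

```glsl
#version 330 core

uniform sampler3D volumeTex;
in vec3 vCoord;                 // the "vector A": a location in [0,1]^3
out vec4 fragColor;

void main()
{
    // The "vector B": either the nearest texel or a trilinearly interpolated
    // value, depending on the sampler's GL_TEXTURE_MIN_FILTER / MAG_FILTER.
    fragColor = texture(volumeTex, vCoord);
}
```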
4. Are there any available online tutorials for me to follow?
http://www.real-time-volume-graphics.org/?page_id=28
5. How can I produce my own 3D texture? (Is it possible to make one using blender?)
You can certainly use Blender, e.g. by baking volumetric data like fog density. But the whole subject is too broad to be sufficiently covered here.
I have a 2D texture that has 12x12 slices of a volume layered in a grid like this:
What I am doing now is calculating the offset and sampling based on the 3D coordinate inside the volume using HLSL code myself. I have followed the descriptions found here and here, where the first link also talks about 3D sampling from a 2D sliced texture. I have also heard that modern hardware has the ability to sample 3D textures.
That being said, I have not found any description or example code that samples a 3D texture. What HLSL, or OpenGL, function can I use to sample this flipbook type of texture? If you can, please add a small example snippet with explanations. If you can't, pointing me to one or to the documentation would be appreciated. I have found no sampler function where I can provide the number of layers in the U and V directions, so I don't see how it can sample without knowing how many slices there are per axis.
If I am misunderstanding this completely I would also appreciate being told so.
Thank you for your help.
OpenGL has supported true 3D textures for ages (3D texture support already appeared in OpenGL 1.2). With those you upload your 3D texture not as a "flipbook" but simply as a stack of 2D images, using the function glTexImage3D. In GLSL you then just use the regular texture access function, but with a sampler3D and a 3-component texture coordinate vector (except in older versions of GLSL, i.e. before GLSL 1.30/OpenGL 3.0, where you use texture3D).
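For the flipbook in your question, a rough sketch of both options (untested; it assumes the 12x12 slice grid you described, and the uniform and function names are mine):

```glsl
#version 330 core

uniform sampler2D atlasTex;   // 12x12 grid of slices in one 2D texture
uniform sampler3D volumeTex;  // the same data uploaded as a true 3D texture
in vec3 vVolumeCoord;         // lookup position in [0,1]^3
out vec4 fragColor;

// Manual lookup into the 12x12 slice atlas; blends the two nearest slices
// to approximate trilinear filtering.
vec4 sampleAtlas(vec3 uvw)
{
    const float slicesPerRow = 12.0;
    const float numSlices    = slicesPerRow * slicesPerRow;  // 144 slices

    float slice  = uvw.z * (numSlices - 1.0);
    float sliceA = floor(slice);
    float sliceB = min(sliceA + 1.0, numSlices - 1.0);

    vec2 tileSize = vec2(1.0 / slicesPerRow);
    vec2 tileA = vec2(mod(sliceA, slicesPerRow), floor(sliceA / slicesPerRow)) * tileSize;
    vec2 tileB = vec2(mod(sliceB, slicesPerRow), floor(sliceB / slicesPerRow)) * tileSize;

    vec4 a = texture(atlasTex, tileA + uvw.xy * tileSize);
    vec4 b = texture(atlasTex, tileB + uvw.xy * tileSize);
    return mix(a, b, fract(slice));
}

// With a real 3D texture the hardware does all of this for you.
vec4 sampleVolume(vec3 uvw)
{
    return texture(volumeTex, uvw);   // texture3D(volumeTex, uvw) in old GLSL
}

void main()
{
    fragColor = sampleVolume(vVolumeCoord);   // or sampleAtlas(vVolumeCoord)
}
```

Note that with linear filtering the atlas version bleeds between neighboring tiles at slice borders unless you inset the per-tile UVs by half a texel; the real sampler3D avoids all of that.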
I've been reading various articles on ray marching in GLSL shaders (such as this one: http://www.iquilezles.org/www/articles/rmshadows/rmshadows.htm) and it raised some questions that I wanted to ask.
In my application, I am rendering a scene with a couple of meshes and I wanted to experiment with shadows. While I seem to somewhat understand the concept of how raymarching works, I don't quite understand how to properly implement this in GLSL. I know how to compute the intersection of a ray and a plane but how would this be handled through GLSL shaders?
This thread (https://gamedev.stackexchange.com/questions/67719/how-do-raymarch-shaders-work) mentions that you're measuring the distance between the start of the ray and the 'surface'. Is the surface he's referring to the mesh? Do I need to send an array of planes/points that make up the mesh to the shader in order to compute the ray intersection test? Do I need to use the depth buffer to determine the distance to the surface?
It depends on what your shader does versus what your rendering engine does. In pure demo shaders like those on Shadertoy (see its shadow examples), the whole scene is encoded in the shader, so there is no problem shooting secondary rays or more (besides performance).
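A minimal sketch of that Shadertoy-style approach, in the spirit of the raymarched-shadows article you linked (untested; the scene, the light and all names are mine):

```glsl
#version 330 core

in vec2 vUV;            // e.g. from a fullscreen quad, in [0,1]^2
out vec4 fragColor;

// The whole "scene" is a signed distance function hard-coded in the shader:
// one sphere of radius 1 at the origin and a ground plane at y = -1.
float sceneSDF(vec3 p)
{
    float sphere = length(p) - 1.0;
    float ground = p.y + 1.0;
    return min(sphere, ground);
}

// Soft shadow: march a secondary ray from ro along rd through the SDF.
float softShadow(vec3 ro, vec3 rd, float mint, float maxt, float k)
{
    float res = 1.0;
    float t   = mint;
    for (int i = 0; i < 64 && t < maxt; ++i)
    {
        float h = sceneSDF(ro + rd * t);   // distance to the nearest surface
        if (h < 0.001) return 0.0;         // the shadow ray hit geometry
        res = min(res, k * h / t);         // penumbra estimate
        t += h;                            // sphere-tracing step
    }
    return res;
}

void main()
{
    // Toy usage: shade points of the ground plane, shadowed by the sphere.
    vec3 p        = vec3(vUV.x * 8.0 - 4.0, -1.0, vUV.y * 8.0 - 4.0);
    vec3 lightPos = vec3(3.0, 4.0, 2.0);
    vec3 toLight  = normalize(lightPos - p);
    float shade   = softShadow(p, toLight, 0.02, length(lightPos - p), 8.0);
    fragColor     = vec4(vec3(shade), 1.0);
}
```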
If the scene is not managed by your shader, then you need a bit of cooperation from your engine: at the very least, to produce a shadow map in a first pass (many different algorithms exist).
Note that with an SVO (sparse voxel octree) representation, the scene is first converted into sparse voxels, which the shader can then march for secondary rays. This could even be done for primary rays, but you can use the regular Z-buffer there and voxel cone tracing (for instance) for all kinds of secondary rays; see *Interactive Indirect Illumination Using Voxel Cone Tracing* at http://gigavoxels.imag.fr/publications.html (you might find it overkill for your simple application). For soft shadows and depth of field, see the seminal paper *GigaVoxels: Ray-Guided Streaming for Efficient and Detailed Voxel Rendering*. Note that the tree might even be a regular BSP of triangles instead of an octree of voxels, but then you lose many of the advantages of SVO (performance, even more so for soft shadows).
I want to do a texture-based volume rendering of CT data. I have a stack of 2D CT images that I'd like to use as a 3D texture in OpenGL (JOGL, really). I have to do it the proxy-geometry way, with polygon slices that shift when the viewing parameters change. How can I convert the 2D images into one 3D texture? I have not been able to find anything about how OpenGL expects 3D images to be formatted. I saw this: https://stackoverflow.com/questions/13509191/how-to-convert-2d-image-into-3d-image , but I don't think it's the same.
Also, I am still confused about this volume rendering technique. Is it possible to take a 3D location in the 3D texture and map it to a 2D corner of a quad? I found this example: http://www.felixgers.de/teaching/jogl/texture3D.html but I don't know if it means you have to use 3D vertices. Does anyone know of more sources with explicit examples?
See http://www.opengl.org/registry/doc/glspec40.core.20100311.pdf, section 3.8.3, on defining 3D texture images.
This results in a 3d cube of texels, and yes, you can map a 3d location in this cube to a corner of a quad.
OpenGL does have a 3D texture format, where each texel is a small subvolume of a [0;1]^3 cube. When you texture a triangle or a quad with such a texture, it is as if you cut a thin slice out of this volume. If you want a volumetric rendering, you must write a volume raycaster. If you Google "GPU direct volume rendering" you should find plenty of tutorials.
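For the proxy-geometry/slicing approach you mentioned, a minimal sketch of the per-slice fragment shader (untested; the names are mine):

```glsl
#version 330 core

// Fragment shader for one proxy slice: the vertex shader passes each quad
// corner's 3D texture coordinate through, and every fragment samples the
// volume at the interpolated position.
uniform sampler3D volumeTex;
in vec3 vTexCoord3D;          // interpolated 3D texture coordinate of the slice
out vec4 fragColor;

void main()
{
    float density = texture(volumeTex, vTexCoord3D).r;
    fragColor = vec4(vec3(density), density);   // toy opacity mapping
}
```

Render many such slices back to front with blending enabled and you get the classic texture-based volume rendering; each quad corner simply carries a vec3 texture coordinate, which is exactly the "map a 3D location to a corner of a quad" idea from the question.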
Is there a way to extract a point cloud from a rendered 3D scene (using OpenGL)?
In detail:
The input should be a rendered 3D scene.
The output should be, e.g., a three-dimensional array of vertices (x, y, z).
Mission possible or impossible?
1. Render your scene using an orthographic view so that all of it fits on screen at once.
2. Use a g-buffer (search for this term or "fat pixel" or "deferred rendering") to capture (X, Y, Z, R, G, B, A) at each sample point in the framebuffer; a minimal shader sketch follows below.
3. Read back your framebuffer and put the (X, Y, Z, R, G, B, A) tuple at each sample point into a linear array.
You now have a point cloud sampled from your conventional geometry using OpenGL. Apart from the readback from the GPU to the host, this will be very fast.
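As a sketch of the capture step, a g-buffer fragment shader writing position and color to two render targets (untested; it assumes the vertex shader passes world-space position and a color through, and all names are mine):

```glsl
#version 330 core

// Minimal g-buffer capture: one attachment for position, one for color.
in vec3 vWorldPos;
in vec4 vColor;

layout(location = 0) out vec4 gPosition;  // X, Y, Z in a float render target
layout(location = 1) out vec4 gColor;     // R, G, B, A

void main()
{
    gPosition = vec4(vWorldPos, 1.0);
    gColor    = vColor;
}
```

Attach two floating-point color textures (e.g. GL_RGBA32F) to an FBO, render once, then read each attachment back with glReadPixels and append the (X, Y, Z, R, G, B, A) tuples to your linear array.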
Going further with this:
- Use depth peeling (search for this term) to generate samples on surfaces that are not nearest to the camera.
- Repeat the rendering from several viewpoints (or equivalently for several rotations of the scene) to be sure of capturing fragments from all the nooks and crannies of the scene, and append the points generated from each pass into one big linear array.
I think you should take your input data and manually multiply it by your transformation and modelview matrices. No need to use OpenGL for that, just some vector/matrix math.
If I understand correctly, you want to deconstruct a final rendering (2D) of a 3D scene. In general, there is no capability built-in to OpenGL that does this.
There are, however, many papers describing approaches to analyzing a 2D image to generate a 3D representation. This is, for example, what the Microsoft Kinect does to some extent. Look at the papers presented at previous editions of SIGGRAPH for a starting point. Many implementations probably make use of the GPU (OpenGL, DirectX, CUDA, etc.) to do their magic, but that's about it. For example, edge-detection filters that identify the visible edges of objects and histogram functions can run on the GPU.
Depending on your application domain, you might be in for something near impossible or there might be a shortcut you can use to identify shapes and vertices.
edit
I think you might have a misunderstanding of how OpenGL rendering works. The application produces and sends to OpenGL the vertices of triangles forming polygons and 3d objects. OpenGL then rasterizes (i.e. converts to pixels) these objects to form a 2d rendering of the 3d scene from a particular point of view with a particular field of view. When you say you want to retrieve a "point cloud" of the vertices, it's hard to understand what you want since you are responsible for producing these vertices in the first place!
I am trying to write optimized code that renders a 3D scene onto a sphere using OpenGL and then displays the unwrapped sphere on the screen, i.e. producing a planar map of a purely reflective sphere. In math terms, I would like to produce a projection map where the x axis is the polar angle and the y axis is the azimuth.
I am trying to do this by placing the camera at the center of the sphere probe and taking planar shots all around, so as to approximate spherical quads with planar tiles of the frustum. Then I can use these as textures to apply to a distorted planar patch.
This seems to me a pretty tedious approach, and I wonder if there is a way to tackle it using shaders or some GPU-smart method.
Thank you
S.
I can give you two solutions.
The first is to do a standard render-to-texture, but with a cubemap attached as the destination buffer. If your hardware is recent enough, it can be done in a single pass. This will handle all the needed math in hardware for you, but the data distribution of cubemaps isn't ideal (quite a lot of distortion near the corners). In most cases it should be enough, though.
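A sketch of that single-pass variant: a geometry shader that routes each triangle to all six faces of a layered cubemap FBO via gl_Layer (untested; it assumes the vertex shader passes world-space positions through in gl_Position, and the per-face view-projection array is supplied by the application; both names are mine):

```glsl
#version 330 core

layout(triangles) in;
layout(triangle_strip, max_vertices = 18) out;

// One view-projection matrix per cubemap face.
uniform mat4 faceViewProj[6];

void main()
{
    for (int face = 0; face < 6; ++face)
    {
        gl_Layer = face;                    // select the cubemap face
        for (int i = 0; i < 3; ++i)
        {
            gl_Position = faceViewProj[face] * gl_in[i].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }
}
```

For gl_Layer to select the face, attach the whole cubemap to the FBO as a layered attachment (glFramebufferTexture rather than glFramebufferTexture2D).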
After rendering into the cubemap, you render a quad to the screen, and in a shader you map your UV coordinates to xyz vectors using straightforward spherical mapping. The hardware will compute for you which side of the cubemap to fetch from, and at which UV.
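A minimal sketch of that unwrap pass, using the (x = polar angle, y = azimuth) layout from the question (untested; the names are mine):

```glsl
#version 330 core

// Fullscreen quad: interpret UV as spherical angles, build a direction and
// let the hardware pick the cubemap face and intra-face UV.
uniform samplerCube envCube;
in vec2 vUV;            // x in [0,1] -> polar angle, y in [0,1] -> azimuth
out vec4 fragColor;

void main()
{
    float polar   = vUV.x * 3.14159265;         // theta in [0, pi]
    float azimuth = vUV.y * 2.0 * 3.14159265;   // phi in [0, 2*pi]

    vec3 dir = vec3(sin(polar) * cos(azimuth),
                    cos(polar),
                    sin(polar) * sin(azimuth));

    fragColor = texture(envCube, dir);
}
```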
The second is more or less the same, but with a custom deformation and less hardware support: dual paraboloids. Two paraboloids may not be enough, but you are free to slightly modify the equations and make 6 passes. The rendering pass is the same, but this time you're on your own to choose the right texture and compute the UVs.
By the time you've bothered to build the model, take the planar shots, apply non-affine transformations and stitch the whole thing together, you've probably gained no performance and added considerable complexity. Just project the planar image mathematically and be done with it.
You seem to be asking for OpenGL's sphere mapping. NeHe has a tutorial on sphere mapping that might be useful.