As far as I know, a texture is just an image (strictly 2D), so why do we have GL_TEXTURE_3D? What does it mean, and how is it used?
A texture is not necessarily 2D. Most of the time it is, but you can also have 1D textures (a line of texels) and 3D textures (a volume). A 3D texture is accessed using three texture coordinates. One use case is clipping a 3D model with a plane: instead of seeing the far side of a hollow object on the cut face, you can use a 3D texture to make the object appear solid and show what the plane cut through. For example, if you model a cell phone and cut it in half, instead of seeing the hollow back of the shell, you see the circuitry inside.
Textures in OpenGL can be 1D, 2D, or 3D. 3D textures are, AFAIK, not used that much by games, but more by things like scientific visualization applications. E.g., you have a dataset with 3D coordinates (x, y, z) and some value (v) at each point. You can then upload the dataset (or, more likely, a reduced-size version of it due to memory constraints) to the GPU and visualize it in some way (e.g., creating a 2D slice from a 3D texture on the GPU is very fast compared to creating the slice as a texture on the CPU and uploading it).
Related
I am currently trying to learn ray casting on a 3D texture using something like glTexImage3D. I was following this tutorial from the start. My ultimate goal is to produce a program that can work like this:
My understanding is that this was rendered using a raycasting method and that the model is imported as a 3D texture. The raycasting and texture sampling are performed in the fragment shader. I hope to replicate this program as practice. Could you kindly answer my questions?
What file format should be used to import the 3D texture?
Which GLSL functions should I use to detect the distance between my ray and the texture?
What are the differences between 3D texture sampling and volume rendering?
Are there any available online tutorials for me to follow?
How can I produce my own 3D texture? (Is it possible to make one using Blender?)
1. What file format should be used to import the 3D texture?
Doesn't matter; OpenGL doesn't deal with file formats.
2. Which GLSL functions should I use to detect the distance between my ray and the texture?
There's no "ready to use" raycasting function. You have to implement a raycaster yourself, i.e., between a start and an end point, sample the texture along a line (the ray) and integrate the samples into a final color value.
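There is no ready-made raycasting call in GLSL; in the fragment shader you write that loop yourself, stepping from the ray's entry point to its exit point through the volume and calling texture() on a sampler3D at each step. As a language-neutral illustration, here is a minimal CPU-side sketch in C of the same front-to-back accumulation; the volume array, its dimensions, and the trivial grey transfer function are assumptions made up for the example, not part of any particular tutorial.

    /* Hypothetical volume: density values in [0,1], laid out x-fastest. */
    #define NX 256
    #define NY 256
    #define NZ 256
    extern float volume[NX * NY * NZ];

    /* Nearest-neighbour lookup at a normalized position p in [0,1]^3. */
    static float sample_volume(const float p[3])
    {
        int x = (int)(p[0] * (NX - 1) + 0.5f);
        int y = (int)(p[1] * (NY - 1) + 0.5f);
        int z = (int)(p[2] * (NZ - 1) + 0.5f);
        return volume[(z * NY + y) * NX + x];
    }

    /* Front-to-back integration between the ray's entry and exit points
       (both already clipped to the unit cube). Writes an RGBA color. */
    void raycast(const float entry[3], const float exit[3], float rgba_out[4])
    {
        const int steps = 256;
        float color[4] = { 0.0f, 0.0f, 0.0f, 0.0f };

        for (int i = 0; i < steps && color[3] < 0.99f; ++i) {
            float t = (float)i / (float)(steps - 1);
            float p[3] = {
                entry[0] + t * (exit[0] - entry[0]),
                entry[1] + t * (exit[1] - entry[1]),
                entry[2] + t * (exit[2] - entry[2]),
            };
            float density = sample_volume(p);

            /* Trivial transfer function: density -> grey, low opacity per step. */
            float src[4] = { density, density, density, density * 0.05f };

            /* Front-to-back "over" compositing, with early termination above. */
            float a = (1.0f - color[3]) * src[3];
            color[0] += a * src[0];
            color[1] += a * src[1];
            color[2] += a * src[2];
            color[3] += a;
        }
        rgba_out[0] = color[0]; rgba_out[1] = color[1];
        rgba_out[2] = color[2]; rgba_out[3] = color[3];
    }

In a real fragment shader the same loop samples a sampler3D and writes the accumulated color to the output; the entry and exit points usually come from rasterizing the volume's bounding box.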
3. What are the differences between 3D texture sampling and volume rendering?
Sampling a 3D texture is not much different from sampling a 2D, 1D, cube map, or any other texture topology. For a given coordinate vector A, a certain value B is returned: either the value of the sample closest to the location A points to (nearest sampling) or an interpolated value.
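To make the "nearest sample or an interpolated value" part concrete, here is a sketch in C of trilinear filtering over a 3D array, which is essentially what GL_LINEAR does for a sampler3D; the x-fastest memory layout is an assumption for the example.

    #include <stddef.h>

    static float texel(const float *vol, int nx, int ny, int x, int y, int z)
    {
        return vol[((size_t)z * ny + y) * nx + x];
    }

    /* p is a normalized texture coordinate in [0,1]^3, like the vec3 you
       would pass to texture(sampler3D, p) in GLSL. */
    float sample_trilinear(const float *vol, int nx, int ny, int nz,
                           const float p[3])
    {
        float fx = p[0] * (nx - 1), fy = p[1] * (ny - 1), fz = p[2] * (nz - 1);
        int   x0 = (int)fx, y0 = (int)fy, z0 = (int)fz;
        int   x1 = x0 + 1 < nx ? x0 + 1 : x0;
        int   y1 = y0 + 1 < ny ? y0 + 1 : y0;
        int   z1 = z0 + 1 < nz ? z0 + 1 : z0;
        float tx = fx - x0, ty = fy - y0, tz = fz - z0;

        /* Blend the 8 surrounding texels: along x, then y, then z. */
        float c00 = texel(vol, nx, ny, x0, y0, z0) * (1 - tx) + texel(vol, nx, ny, x1, y0, z0) * tx;
        float c10 = texel(vol, nx, ny, x0, y1, z0) * (1 - tx) + texel(vol, nx, ny, x1, y1, z0) * tx;
        float c01 = texel(vol, nx, ny, x0, y0, z1) * (1 - tx) + texel(vol, nx, ny, x1, y0, z1) * tx;
        float c11 = texel(vol, nx, ny, x0, y1, z1) * (1 - tx) + texel(vol, nx, ny, x1, y1, z1) * tx;
        float c0  = c00 * (1 - ty) + c10 * ty;
        float c1  = c01 * (1 - ty) + c11 * ty;
        return c0 * (1 - tz) + c1 * tz;
    }

Nearest sampling would simply round p to the closest texel instead of blending the eight neighbours.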
4. Are there any available online tutorials for me to follow?
http://www.real-time-volume-graphics.org/?page_id=28
5. How can I produce my own 3D texture? (Is it possible to make one using Blender?)
You can certainly use Blender, e.g. by baking volumetric data like fog density. But the whole subject is too broad to be covered sufficiently here.
I'm working on a scanline renderer for a class project. The renderer works so far: it reads in a model (mostly the Utah teapot), computes vertex/surface normals, and can do flat and Phong shading. I'm now working on adding texture mapping, which is where I'm running into problems (I cannot use any OpenGL methods other than actually drawing the points on the screen).
So, I read a texture into my app and have a 2D array of RGB values. I know that the concept is to map the texture from 2D texture space onto a simple 3D object (in my case, a cylinder), and I know that you then map the intermediate surface onto the object's surface.
However, I don't actually know how to do those things :). I've found some formulas for mapping a texture to a cylinder, but they always seem to leave out details such as which values to use. I also don't know how to take a vertex coordinate of my object and get the corresponding cylinder value for that point. There are some other StackOverflow posts about mapping to a cylinder, but they 1) deal with newer OpenGL with shaders and such, and 2) don't deal with intermediate surfaces, so I'm not sure how to translate the knowledge from them.
So, any help on pseudo code for mapping a texture onto a 3D object using a cylinder as an intermediate surface would be greatly appreciated.
You keep using the phrase "intermediate surface", which does not describe the process correctly, yet hints at what you have in your head.
Basically, you're asking for a way to map every point on the teapot's surface onto a cylinder (assuming that the texture will be "wrapped" on the cylinder).
Just convert your surface point into cylindrical coordinates (r, theta, height), then use theta as u and height as v (texcoords).
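Since you cannot use OpenGL for this, the conversion is plain math. A minimal sketch in C, assuming the cylinder's axis is the object's y axis and that the object is centered on that axis; the function name and the y_min/y_max height range are made up for the example.

    /* Map an object-space surface point to (u, v) texture coordinates using
       a cylinder around the y axis as the intermediate surface. y_min and
       y_max bound the object's height; set them from your model's extents. */
    void cylinder_uv(float x, float y, float z,
                     float y_min, float y_max,
                     float *u, float *v)
    {
        const float PI = 3.14159265358979f;

        /* Angle around the y axis, in [-pi, pi]. */
        float theta = atan2f(z, x);

        /* u: angle remapped to [0, 1); v: height remapped to [0, 1]. */
        *u = (theta + PI) / (2.0f * PI);
        *v = (y - y_min) / (y_max - y_min);
    }

This needs #include <math.h> for atan2f. With a W x H texel image you then look up image[(int)(v * (H - 1))][(int)(u * (W - 1))]. Interpolate u and v across each scanline span the same way you interpolate normals for Phong shading, and watch out for the wrap-around seam where u jumps from 1 back to 0.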
This is what you are trying to achieve:
I want to do a texture-based volume render of CT data. I have a stack of 2D CT images that I'd like to use as a 3D texture in OpenGL (JOGL, really). I have to do it the way that uses polygon proxy geometry which shifts when the viewing parameters change. How can I convert the 2D images into one 3D texture? I have not been able to find anything about how OpenGL expects 3D images to be formatted. I saw this: https://stackoverflow.com/questions/13509191/how-to-convert-2d-image-into-3d-image , but I don't think it's the same thing.
Also, I am still confused about this volume rendering technique. Is it possible to take a 3D location in the 3D texture and map it to a 2D corner of a quad? I found this example: http://www.felixgers.de/teaching/jogl/texture3D.html but I don't know if it means you have to use 3D vertices. Does anyone know of more sources with explicit examples?
See
http://www.opengl.org/registry/doc/glspec40.core.20100311.pdf
section 3.8.3, on defining 3D texture images.
This results in a 3D cube of texels, and yes, you can map a 3D location in this cube to a corner of a quad.
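For reference, once the slices are packed contiguously (slice 0 first, then slice 1, and so on), building the 3D texture is a single glTexImage3D call. A sketch with the C API (JOGL exposes the same calls on its GL object); the 8-bit luminance format and the slices buffer are assumptions, since CT data is often 12 or 16 bits per voxel.

    #include <GL/gl.h>

    /* `slices` holds `depth` CT slices of width x height texels each,
       stacked one after another: slices[z][y][x], 8-bit luminance here. */
    GLuint upload_volume(const unsigned char *slices,
                         int width, int height, int depth)
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_3D, tex);

        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

        /* Rows may not be 4-byte aligned for arbitrary widths. */
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

        /* All 2D images become one 3D block of texels; the third texture
           coordinate (r) then selects the position between the slices. */
        glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE8,
                     width, height, depth, 0,
                     GL_LUMINANCE, GL_UNSIGNED_BYTE, slices);
        return tex;
    }

glTexImage3D has been core since OpenGL 1.2; on some platforms the C API needs an extension loader to reach it, while JOGL exposes it directly.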
OpenGL does have a 3D texture type, where each texel is a small subvolume of a [0;1]^3 cube. When you texture a triangle or a quad with such a texture, it is as if you cut a thin slice out of this volume. If you want a true volumetric rendering, you must write a volume raycaster. If you Google "GPU direct volume rendering" you should find plenty of tutorials.
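To make the "thin slice" idea concrete with proxy geometry: with a 3D texture like the one uploaded above bound, drawing a single quad whose third texture coordinate picks the slice position looks roughly like this. The old-style immediate-mode calls match fixed-function tutorials such as the JOGL one linked in the question; the slice position r is just an example parameter.

    #include <GL/gl.h>

    /* Draw one quad textured with a single slice of the volume.
       `r` in [0,1] selects the depth of the slice inside the 3D texture. */
    void draw_slice(GLuint volume_tex, float r)
    {
        glEnable(GL_TEXTURE_3D);
        glBindTexture(GL_TEXTURE_3D, volume_tex);

        glBegin(GL_QUADS);
            glTexCoord3f(0.0f, 0.0f, r); glVertex2f(-1.0f, -1.0f);
            glTexCoord3f(1.0f, 0.0f, r); glVertex2f( 1.0f, -1.0f);
            glTexCoord3f(1.0f, 1.0f, r); glVertex2f( 1.0f,  1.0f);
            glTexCoord3f(0.0f, 1.0f, r); glVertex2f(-1.0f,  1.0f);
        glEnd();

        glDisable(GL_TEXTURE_3D);
    }

Texture-based volume rendering draws many such slices back to front with blending enabled, re-orienting them as the viewing parameters change; direct volume rendering replaces the slice stack with a raycaster.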
I've implemented a volume renderer using ray casting in CUDA. Now I need to add other 3D objects (like 3D terrain, in my case) to the scene and then have them interact with the volume-render result. For example, when I move the volume-render result so that it overlaps the terrain, I wish to modulate the volume-render result, such as clipping the overlapping part of it.
However, the volume-render result comes from a ray accumulating color, so it is a 2D picture with no depth, and how to implement the interaction confuses me. Can somebody give me a hint?
First you render your 3D rasterized objects. Then you take the depth buffer and use it as an additional data source in the volume raycaster, as a constraint on the integration limits.
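Sketching the second step in C (the idea is the same whether the raycaster runs in CUDA or GLSL): convert the stored depth value back to an eye-space distance using the near/far planes of the projection that produced it, and cut the ray's integration interval short at that distance. The function name and the per-pixel usage shown in the comments are illustrative, not from the question's code.

    /* Convert a depth-buffer value d in [0,1] back to a positive eye-space
       distance, given the near and far planes of a standard perspective
       projection. At d = 0 this returns z_near, at d = 1 it returns z_far. */
    float depth_to_eye_distance(float d, float z_near, float z_far)
    {
        float ndc_z = 2.0f * d - 1.0f;    /* window depth back to NDC [-1,1] */
        return (2.0f * z_near * z_far) /
               (z_far + z_near - ndc_z * (z_far - z_near));
    }

    /* Per pixel, inside the raycaster:
       - compute scene_z = depth_to_eye_distance(depth_at_pixel, znear, zfar);
       - at each sample along the ray, compare the sample's own eye-space
         depth against scene_z and stop integrating once it is exceeded.
       Opaque geometry in front of (or inside) the volume then correctly
       hides the part of the volume behind it. */

Note that this value is a depth along the viewing axis, not a distance along the ray, so compare it against each sample's eye-space depth rather than against the ray parameter t directly.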
Actually, I think the result of ray casting is a 2D image, so it cannot interact with other 3D objects in the usual way. My solution is to take the ray-casting 2D image as a texture and blend it into the 3D scene. If I can control the view position and direction, I can map the ray-casting result to the exact place in the 3D scene. I'm still trying to implement this solution, but I think the idea is all right!
Is there a way to extract a point cloud from a rendered 3D scene (using OpenGL)?
In detail:
The input should be a rendered 3D Scene.
The output should be e.g. a three-dimensional array of vertices (x, y, z).
Mission possible or impossible?
Render your scene using an orthographic view so that all of it fits on screen at once.
Use a g-buffer (search for this term, or "fat pixel", or "deferred rendering") to capture (X, Y, Z, R, G, B, A) at each sample point in the framebuffer.
Read back your framebuffer and put the (X, Y, Z, R, G, B, A) tuple at each sample point into a linear array.
You now have a point cloud sampled from your conventional geometry using OpenGL. Apart from the readback from the GPU to the host, this will be very fast.
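A simplified variant of the capture-and-readback steps in C: instead of a full g-buffer (which would need an FBO with a floating-point position attachment), this reads back the color and depth buffers and unprojects every covered pixel with gluUnProject. The function name, viewport handling, and output layout are assumptions for the sketch; the matrices you pass to gluUnProject determine which coordinate space the points come back in.

    #include <GL/gl.h>
    #include <GL/glu.h>
    #include <stdlib.h>

    /* One point sample: position plus color. */
    typedef struct { double x, y, z; unsigned char r, g, b, a; } PointSample;

    /* Read the current framebuffer back and unproject every covered pixel
       into a linear array of (X, Y, Z, R, G, B, A) samples. `out` must have
       room for width * height entries; returns the number actually written. */
    size_t read_point_cloud(int width, int height, PointSample *out)
    {
        float         *depth = malloc((size_t)width * height * sizeof *depth);
        unsigned char *color = malloc((size_t)width * height * 4);
        GLdouble model[16], proj[16];
        GLint    viewport[4];
        size_t   count = 0;

        glGetDoublev(GL_MODELVIEW_MATRIX, model);
        glGetDoublev(GL_PROJECTION_MATRIX, proj);
        glGetIntegerv(GL_VIEWPORT, viewport);

        glPixelStorei(GL_PACK_ALIGNMENT, 1);
        glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, depth);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, color);

        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                size_t i = (size_t)y * width + x;
                if (depth[i] >= 1.0f)       /* background: nothing was drawn */
                    continue;
                double wx, wy, wz;
                gluUnProject(x + 0.5, y + 0.5, depth[i],
                             model, proj, viewport, &wx, &wy, &wz);
                out[count].x = wx; out[count].y = wy; out[count].z = wz;
                out[count].r = color[4 * i + 0];
                out[count].g = color[4 * i + 1];
                out[count].b = color[4 * i + 2];
                out[count].a = color[4 * i + 3];
                ++count;
            }
        }
        free(depth);
        free(color);
        return count;
    }

If you render with shaders rather than the fixed-function matrix stack, pass your own view and projection matrices to gluUnProject instead of querying them with glGetDoublev.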
Going further with this:
Use depth peeling (search for this term) to generate samples on surfaces that are not nearest to the camera.
Repeat the rendering from several viewpoints (or, equivalently, for several rotations of the scene) to be sure of capturing fragments from all the nooks and crannies of the scene, and append the points generated from each pass into one big linear array.
I think you should take your input data and manually multiply it by your transformation and modelview matrices. No need to use OpenGL for that, just some vector/matrix math.
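If the vertices are already in your application, a sketch of that approach in C: multiply each point by the 4x4 matrix you would otherwise hand to OpenGL (column-major, the layout glLoadMatrixf and glGetFloatv use). The function name is made up; it assumes an affine matrix, so no perspective divide is needed.

    /* Transform a point by a 4x4 column-major matrix, e.g. the modelview
       matrix. Assumes w = 1 and an affine matrix (no perspective row). */
    void transform_point(const float m[16], const float in[3], float out[3])
    {
        float x = in[0], y = in[1], z = in[2];
        out[0] = m[0] * x + m[4] * y + m[8]  * z + m[12];
        out[1] = m[1] * x + m[5] * y + m[9]  * z + m[13];
        out[2] = m[2] * x + m[6] * y + m[10] * z + m[14];
    }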
If I understand correctly, you want to deconstruct a final rendering (2D) of a 3D scene. In general, there is no capability built into OpenGL that does this.
There are, however, many papers describing approaches to analyzing a 2D image to generate a 3D representation. This is, for example, what the Microsoft Kinect does to some extent. Look at the papers presented at previous editions of SIGGRAPH for a starting point. Many implementations probably make use of the GPU (OpenGL, DirectX, CUDA, etc.) to do their magic, but that's about it. For example, edge-detection filters to identify the visible edges of objects and histogram functions can run on the GPU.
Depending on your application domain, you might be in for something near impossible or there might be a shortcut you can use to identify shapes and vertices.
edit
I think you might have a misunderstanding of how OpenGL rendering works. The application produces and sends to OpenGL the vertices of the triangles that form the polygons and 3D objects. OpenGL then rasterizes (i.e., converts to pixels) these objects to form a 2D rendering of the 3D scene from a particular point of view with a particular field of view. When you say you want to retrieve a "point cloud" of the vertices, it's hard to understand what you want, since you are responsible for producing those vertices in the first place!