I want to do a texture-based volume render of CT data. I have a stack of 2D CT images that I'd like to use as a 3D texture in OpenGL (JOGL, really). I have to do it the way that uses polygon proxy geometry that shifts when the viewing parameters change. How can I convert the 2D images to one 3D texture? I have not been able to find anything about how OpenGL expects 3D images to be formatted. I saw this: https://stackoverflow.com/questions/13509191/how-to-convert-2d-image-into-3d-image , but I don't think it's the same.
Also, I am still confused about this volume rendering technique. Is it possible to take a 3D location in the 3D texture and map it to a 2D corner of a quad? I found this example: http://www.felixgers.de/teaching/jogl/texture3D.html but I don't know if it means you have to use 3D vertices. Does anyone know of more sources with explicit examples?
See http://www.opengl.org/registry/doc/glspec40.core.20100311.pdf, section 3.8.3, on defining 3D texture images.
This results in a 3D cube of texels, and yes, you can map a 3D location in this cube to a corner of a quad.
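For the "how do I convert the 2D images to one 3D texture" part: you upload the slices as contiguous z layers of a single 3D image. A minimal sketch in C (the JOGL bindings mirror these calls one-to-one); the dimensions and the 8-bit luminance format are assumptions:

    /* Sketch: build one 3D texture from a stack of 2D slices.
       Assumed: each slice is WIDTH x HEIGHT with one 8-bit luminance
       value per texel, and slices[] already holds the decoded CT images. */
    #include <GL/gl.h>

    #define WIDTH  256
    #define HEIGHT 256
    #define DEPTH  128

    GLuint uploadVolume(const unsigned char *slices[DEPTH])
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_3D, tex);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

        /* Allocate the whole volume once, then copy each 2D image
           into its z layer. */
        glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE8, WIDTH, HEIGHT, DEPTH,
                     0, GL_LUMINANCE, GL_UNSIGNED_BYTE, NULL);
        for (int z = 0; z < DEPTH; ++z)
            glTexSubImage3D(GL_TEXTURE_3D, 0, 0, 0, z, WIDTH, HEIGHT, 1,
                            GL_LUMINANCE, GL_UNSIGNED_BYTE, slices[z]);
        return tex;
    }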
OpenGL does know a 3D texture format, where each texel is a small subvolume of a [0;1]^3 cube. When you texture a triangle or a quad with this texture, it is as if you cut a thin slice out of this volume. If you want a true volumetric rendering, you must write a volume raycaster. If you Google "GPU direct volume rendering" you should find plenty of tutorials.
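The original question actually asks about the other classic technique, slice-based rendering with polygon proxy geometry, where many such thin slices are blended on top of each other. A minimal axis-aligned sketch in C (the slice count, the [-1,1]^3 cube, and a camera looking down -z are assumptions; a real implementation recomputes view-aligned slice polygons whenever the camera moves):

    /* Sketch: draw NUM_SLICES axis-aligned proxy quads, textured with
       the matching depth of the 3D texture and blended far-to-near
       (the camera is assumed to look down -z from +z). */
    #define NUM_SLICES 128

    void drawSliceStack(GLuint volumeTex)
    {
        glEnable(GL_TEXTURE_3D);
        glBindTexture(GL_TEXTURE_3D, volumeTex);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

        glBegin(GL_QUADS);
        for (int i = 0; i < NUM_SLICES; ++i) {
            float r = (i + 0.5f) / NUM_SLICES;  /* texture z in [0,1] */
            float z = 2.0f * r - 1.0f;          /* object z in [-1,1] */
            /* Each corner pairs a 3D texture coordinate with a quad
               corner -- the "3D location to quad corner" mapping
               asked about above. */
            glTexCoord3f(0.0f, 0.0f, r); glVertex3f(-1.0f, -1.0f, z);
            glTexCoord3f(1.0f, 0.0f, r); glVertex3f( 1.0f, -1.0f, z);
            glTexCoord3f(1.0f, 1.0f, r); glVertex3f( 1.0f,  1.0f, z);
            glTexCoord3f(0.0f, 1.0f, r); glVertex3f(-1.0f,  1.0f, z);
        }
        glEnd();
        glDisable(GL_BLEND);
        glDisable(GL_TEXTURE_3D);
    }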
I'm new to OpenGL and programming in general. I'm trying to apply a picture to all surfaces of a cube and a pyramid, so that the picture moves and rotates with them.
I have tried a lot of tutorials, but most of them focus on 2D or use other programming languages, like C#.
How can I apply a texture to my polygons?
[...] but most of them focus on 2d [...]
You have to wrap 2D textures around the 3D mesh by putting different parts of a 2D texture on the faces of the 3D mesh. Each face of the 3D object is two dimensional. You have to define the texture coordinate attributes for each vertex of a face (respectively, primitive) to select the area of the 2D texture to display on that primitive.
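A minimal fixed-function sketch of that idea for one cube face, in C (the texture id, the [0,1] coordinate range, and the face's corner positions are placeholder assumptions):

    /* Sketch: texture one face of a cube by giving each vertex a 2D
       texture coordinate. Here the whole texture covers the face; use
       sub-ranges of [0,1] to put different parts of an atlas on
       different faces. textureId is assumed to be loaded elsewhere. */
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, textureId);

    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 1.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, 1.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f,  1.0f, 1.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f, 1.0f);
    glEnd();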
See also How do opengl texture coordinates work?.
Note, 3D textures contain voxels; that is something completely different.
I'm working on a scanline renderer for a class project. The renderer works so far: it reads in a model (mostly using the Utah teapot), computes vertex/surface normals, and can do flat and Phong shading. I'm now working on adding texture mapping, which is where I'm running into problems (I cannot use any OpenGL methods other than actually drawing the points on the screen).
So, I read a texture into my app and have a 2D array of RGB values. I know that the concept is to map the texture from 2D texture space onto a simple 3D object (in my case, a cylinder). I know that you then map the intermediate surface onto the object surface.
However, I don't actually know how to do those things :). I've found some formulas for mapping a texture to a cylinder, but they always seem to leave out details, such as which values to use. I also don't know how to take a vertex coordinate of my object and get the cylinder value for that point. There are some other Stack Overflow posts about mapping to a cylinder, but they 1) deal with newer OpenGL with shaders and such, and 2) don't deal with intermediate surfaces, so I'm not sure how to translate the knowledge from them.
So, any help on pseudo code for mapping a texture onto a 3D object using a cylinder as an intermediate surface would be greatly appreciated.
You keep using the phrase "intermediate surface", which does not describe the process correctly, yet hints at what you have in your head.
Basically, you're asking for a way to map every point on the teapot's surface onto a cylinder (assuming that the texture will be "wrapped" on the cylinder).
Just convert your surface point into cylindrical coordinates (r, theta, height), then use theta as u and height as v (texcoords).
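For instance, a sketch of that conversion in C; the choice of y as the cylinder axis and the remapping of theta and height into [0,1] are assumptions:

    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    /* Sketch: map a surface point to cylinder-based texture coordinates.
       Assumed: the cylinder axis is the y axis through the origin and
       ymin/ymax bound the model in y. */
    void cylinderTexCoord(float x, float y, float z,
                          float ymin, float ymax,
                          float *u, float *v)
    {
        float theta = atan2f(z, x);                  /* angle around axis */
        *u = (theta + (float)M_PI) / (2.0f * (float)M_PI); /* -> [0,1]    */
        *v = (y - ymin) / (ymax - ymin);             /* normalized height */
    }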
I have already tried and succeeded at loading a texture from a BMP file and drawing quads and triangles with that texture. However, I need to apply the loaded texture to objects drawn with glutSolidDodecahedron and glutSolidSphere. How can I do this? Please include some code if possible.
Note: I HAVE to use those functions, I'm not allowed to draw them from scratch.
Neither glutSolidDodecahedron nor glutSolidSphere specifies texture coordinates, at least not according to any documentation that a quick web search turns up. I had a quick look at the FreeGLUT implementations, and they indeed do not specify texture coordinates.
If you can use shaders, you can derive 2D texture coordinates from the 3D locations of the vertices. Spheres and dodecahedrons are pretty regular shapes, so you can simply do a spherical projection (convert the vertex position to spherical coordinates and drop the radius component).
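For illustration, here is the math such a shader would implement, sketched as a plain C helper (a model centered at the origin is assumed; in GLSL you would compute the same thing per vertex from the position attribute):

    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    /* Sketch: spherical projection -- convert a vertex position to
       (theta, phi) and drop the radius. The model is assumed to be
       centered at the origin. */
    void sphereTexCoord(float x, float y, float z, float *u, float *v)
    {
        float r     = sqrtf(x * x + y * y + z * z);
        float theta = atan2f(z, x);      /* longitude, (-pi, pi] */
        float phi   = acosf(y / r);      /* colatitude, [0, pi]  */
        *u = (theta + (float)M_PI) / (2.0f * (float)M_PI);
        *v = phi / (float)M_PI;
    }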
I've implemented a volume renderer using ray-casting in CUDA. Now I need to add other 3D objects (like 3D terrain, in my case) to the scene, and then make them interact with the volume-render result. For example, when I move the volume-render result so it overlaps the terrain, I wish to modulate the volume-render result, such as clipping the overlapping part.
However, the volume-render result comes from a ray accumulating color, so it is a 2D picture with no depth. I am confused about how to implement this interaction. Can somebody give me a hint?
First you render your 3D rasterized objects. Then you take the depth buffer and use it as an additional data source in the volume raycaster, as a constraint on the integration limits.
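A sketch of the per-ray logic, written in plain C for illustration (the CUDA kernel is structurally the same); depthToT() and sampleAt() are hypothetical helpers standing in for the depth unprojection and the volume/transfer-function lookup:

    #include <math.h>

    typedef struct { float r, g, b, a; } Rgba;

    /* Hypothetical helpers, assumed to exist in the raycaster:
       depthToT()  unprojects this pixel's depth-buffer value into a
                   distance t along the ray;
       sampleAt()  fetches the volume and applies the transfer function. */
    float depthToT(float depthBufferValue);
    Rgba  sampleAt(float t);

    #define STEP_SIZE 0.005f

    Rgba castRay(float tNear, float tFar, float depthBufferValue)
    {
        /* Stop integrating where the rasterized geometry begins. */
        float tEnd = fminf(tFar, depthToT(depthBufferValue));

        Rgba acc = {0.0f, 0.0f, 0.0f, 0.0f};
        for (float t = tNear; t < tEnd && acc.a < 0.99f; t += STEP_SIZE) {
            Rgba s = sampleAt(t);
            /* front-to-back "over" compositing */
            acc.r += (1.0f - acc.a) * s.a * s.r;
            acc.g += (1.0f - acc.a) * s.a * s.g;
            acc.b += (1.0f - acc.a) * s.a * s.b;
            acc.a += (1.0f - acc.a) * s.a;
        }
        return acc;
    }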
Actually, I think the result of ray-casting is a 2D image, so it cannot interact with other 3D objects in the usual way. My solution is to use the ray-casting 2D image as a texture and blend it into the 3D scene. If I can control the view position and direction, I can map the ray-casting result to the exact place in the 3D scene. I'm still trying to implement this solution, but I think the idea is all right!
As far as I know, a texture is just an image (strictly 2D), so why do we have GL_TEXTURE_3D? What does it mean, and how is it used?
A texture is not necessarily 2D. Most of the time it is 2D, but you can also have 1D textures (a line) and 3D textures (a volume). A 3D texture is accessed using three texture coordinates. You can use one, for example, when your 3D model can be clipped by a plane: instead of seeing the other side of a hollow object past the cut, you can use a 3D texture to make the object appear solid and show what the plane cut through. So, for example, if you model a cell phone and cut it in half, instead of seeing the empty backside, you see the circuitry inside.
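A fixed-function sketch of that cross-section idea in C, assuming the volume fills a [0,1]^3 cube and the cut plane is z = zCut:

    /* Sketch: show the volume's interior on a cut plane by giving the
       quad 3D texture coordinates equal to its object-space positions.
       Assumed: the volume fills the [0,1]^3 cube, cut plane z = zCut. */
    void drawCrossSection(GLuint volumeTex, float zCut)
    {
        glEnable(GL_TEXTURE_3D);
        glBindTexture(GL_TEXTURE_3D, volumeTex);
        glBegin(GL_QUADS);
        glTexCoord3f(0.0f, 0.0f, zCut); glVertex3f(0.0f, 0.0f, zCut);
        glTexCoord3f(1.0f, 0.0f, zCut); glVertex3f(1.0f, 0.0f, zCut);
        glTexCoord3f(1.0f, 1.0f, zCut); glVertex3f(1.0f, 1.0f, zCut);
        glTexCoord3f(0.0f, 1.0f, zCut); glVertex3f(0.0f, 1.0f, zCut);
        glEnd();
        glDisable(GL_TEXTURE_3D);
    }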
Textures in OpenGL can be 1D, 2D, or 3D. 3D textures are, AFAIK, not used that much by games, but more by things like scientific visualization applications. E.g., you have a dataset with 3D coordinates (x, y, z) and some value (v). You can then upload the dataset (or, more likely, a reduced-size version of it, due to memory constraints) to the GPU and visualize it in some way (e.g. creating a 2D slice from a 3D texture is VERY fast compared to creating the 2D slice as a texture on the CPU and uploading it).