When we create a cubemap in OpenGL, we need six images, and with these we can create the cubemap (simply put, a GL_TEXTURE_CUBE_MAP).
At first, I guessed there was a similar target, such as GL_TEXTURE_SPHERE_MAP, but I couldn't find anything related to it.
How can we create a spherical environment map from 360-degree images?
(If you know of a site covering this, please link it for me.)
Load the image as a regular 2D texture and do spherical texture coordinate mapping in the shader.
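For example, if the 360-degree image is stored as an equirectangular panorama, the lookup in the fragment shader could look roughly like this minimal sketch (vDirection and uPanorama are placeholder names for whatever your pipeline provides):

```glsl
#version 330 core
in vec3 vDirection;           // per-fragment view or reflection direction
uniform sampler2D uPanorama;  // the 360-degree image bound as a normal 2D texture
out vec4 fragColor;

const float PI = 3.14159265359;

void main() {
    vec3 d = normalize(vDirection);
    // Longitude around the vertical axis becomes u, latitude becomes v.
    float u = atan(d.z, d.x) / (2.0 * PI) + 0.5;
    float v = asin(clamp(d.y, -1.0, 1.0)) / PI + 0.5;
    fragColor = texture(uPanorama, vec2(u, v));
}
```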
I am currently trying to learn ray casting on a 3D texture (created with something like glTexImage3D). I have been following this tutorial from the start. My ultimate goal is to produce a program that works like this:
My understanding is that this was rendered with a raycasting method, with the model imported as a 3D texture, and that the raycasting and texture sampling are done in the fragment shader. I hope to replicate this program as practice. Could you kindly answer my questions?
What file format should be used to import the 3D texture?
Which GLSL functions should I use to detect the distance between my ray and the texture?
What are the differences between 3D texture sampling and volume rendering?
Are there any available online tutorials for me to follow?
How can I produce my own 3D texture? (Is it possible to make one using Blender?)
1. What file format should be used to import the 3D texture?
It doesn't matter; OpenGL doesn't deal with file formats. What matters is that you can read the voxel data into memory and hand it to glTexImage3D.
2. Which GLSL functions should I use to detect the distance between my ray and the texture?
There's no "ready to use" raycasting function; you have to implement the raycaster yourself. That is, between a start and an end point, sample the texture along a line (the ray) and integrate the samples up to a final color value.
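As a rough sketch of what such a raycaster can look like, assuming an earlier pass supplies the ray's entry and exit points in texture space and a 1D transfer function maps scalar samples to color (all names here are placeholders):

```glsl
#version 330 core
in vec3 vRayStart;            // ray entry point into the volume, in [0,1]^3 texture space
in vec3 vRayEnd;              // ray exit point, in the same space
uniform sampler3D uVolume;    // the 3D texture holding the scalar data
uniform sampler1D uTransfer;  // transfer function: scalar value -> RGBA
out vec4 fragColor;

void main() {
    const int STEPS = 256;                            // samples along the ray
    vec3 step = (vRayEnd - vRayStart) / float(STEPS);
    vec3 pos  = vRayStart;
    vec4 accum = vec4(0.0);

    for (int i = 0; i < STEPS && accum.a < 0.99; ++i) {
        float s  = texture(uVolume, pos).r;           // sample the volume
        vec4 col = texture(uTransfer, s);             // classify the sample
        // Front-to-back "over" compositing.
        accum.rgb += (1.0 - accum.a) * col.a * col.rgb;
        accum.a   += (1.0 - accum.a) * col.a;
        pos += step;
    }
    fragColor = accum;
}
```

A real implementation would also scale the per-sample opacity by the step length, but the overall structure is the same.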
3. What are the differences between 3D texture sampling and volume rendering?
Sampling a 3D texture is not much different from sampling a 2D or 1D texture, a cubemap, or any other texture topology: for a given texture-coordinate vector A, a value B is returned, namely either the value of the texel closest to the location A points to (nearest filtering) or an interpolated value (linear filtering). Volume rendering is the process built on top of that: it takes many such samples along each viewing ray and integrates them into the final pixel color.
4. Are there any available online tutorials for me to follow?
http://www.real-time-volume-graphics.org/?page_id=28
5. How can I produce my own 3D texture? (Is it possible to make one using Blender?)
You can certainly use Blender, e.g. by baking volumetric data like fog density. But the whole subject is too broad to be sufficiently covered here.
I have a model of a skull loaded from an .obj file, based on this tutorial. While I understand texture mapping for a cube (make a triangle on the texture in the [0,1] range, select one of the six sides, select one of the two triangles on that side, and map it to your triangle from the texture), I can't think of any workable way to texture map my skull. There are a few thousand triangles on it, and texture mapping them manually seems completely wrong.
Is there any solution to this problem? I'd appreciate any piece of code, since it may tell me more than just a description of the solution.
You can generate your UV coordinates automatically (see the sketch after this answer for one such approach), but this will probably produce bad-looking output except for very simple textures.
For detailed textures that have eyes, ears, etc., you need to create your UV coordinates by hand in a 3D modeling tool such as Blender, 3ds Max, etc. There are a lot of tutorials all over the internet on how to do that. (https://www.youtube.com/watch?v=eCGGe4jLo3M)
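If you only want to see what automatic generation looks like, one simple (and seam-prone) option is a spherical projection around the model's center, sketched here as a vertex shader; uCenter, uMVP, and the attribute name are assumptions to adapt to your own setup:

```glsl
#version 330 core
in vec3 aPosition;
uniform mat4 uMVP;
uniform vec3 uCenter;         // rough center of the skull model

out vec2 vTexCoord;

const float PI = 3.14159265359;

void main() {
    // Project the vertex onto a sphere around the center and use
    // longitude/latitude as UV; expect stretching and a visible seam.
    vec3 d = normalize(aPosition - uCenter);
    vTexCoord = vec2(atan(d.z, d.x) / (2.0 * PI) + 0.5,
                     asin(clamp(d.y, -1.0, 1.0)) / PI + 0.5);
    gl_Position = uMVP * vec4(aPosition, 1.0);
}
```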
I'm working on a scanline renderer for a class project. The renderer works so far: it reads in a model (mostly the Utah teapot), computes vertex/surface normals, and can do flat and Phong shading. I'm now working on adding texture mapping, which is where I'm running into problems (I cannot use any OpenGL methods other than actually drawing the points on the screen).
So, I read a texture into my app and have a 2D array of RGB values. I know that the concept is to map the texture from 2D texture space onto a simple 3D object (in my case, a cylinder), and that you then map that intermediate surface onto the object's surface.
However, I don't actually know how to do those things :). I've found some formulas for mapping a texture to a cylinder, but they always seem to leave out details such as which values to use. I also don't know how to take a vertex coordinate of my object and get the corresponding cylinder value for that point. There are some other StackOverflow posts about mapping to a cylinder, but they 1) deal with newer OpenGL with shaders and such, and 2) don't deal with intermediate surfaces, so I'm not sure how to translate the knowledge from them.
So, any help on pseudo code for mapping a texture onto a 3D object using a cylinder as an intermediate surface would be greatly appreciated.
You keep using the phrase "intermediate surface", which does not describe the process correctly, yet hints at what you have in your head.
Basically, you're asking for a way to map every point on the teapot's surface onto a cylinder (assuming that the texture will be "wrapped" on the cylinder).
Just convert your surface point into cylindrical coordinates (r, theta, height), then use theta as u and height as v for your texture coordinates.
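In code, that mapping might look roughly like the following (written as a small GLSL-style function purely for illustration; in a software scanline renderer you would compute the same thing per vertex in your own language, and center, minY, and maxY describing the cylinder are values you choose yourself):

```glsl
const float PI = 3.14159265359;

// Map a surface point to (u, v) via a cylinder around the Y axis through `center`.
// The radius r is not needed for the texture coordinates.
vec2 cylinderUV(vec3 p, vec3 center, float minY, float maxY)
{
    vec3 d = p - center;
    float theta = atan(d.z, d.x);              // angle around the cylinder axis
    float u = theta / (2.0 * PI) + 0.5;        // wrap the angle into [0,1]
    float v = (p.y - minY) / (maxY - minY);    // height along the axis into [0,1]
    return vec2(u, v);
}
```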
I want to do a texture-based volume render of CT data. I have a stack of 2D CT images that I'd like to use as a 3D texture in OpenGL (JOGL, really). I have to do it with the approach that uses polygon proxy geometry which shifts when the viewing parameters change. How can I convert the 2D images into one 3D texture? I have not been able to find anything about how OpenGL expects 3D images to be formatted. I saw this: https://stackoverflow.com/questions/13509191/how-to-convert-2d-image-into-3d-image , but I don't think it's the same thing.
Also, I am still confused about this volume rendering technique. Is it possible to take a 3D location in the 3D texture and map it to a 2D corner of a quad? I found this example: http://www.felixgers.de/teaching/jogl/texture3D.html but I don't know if it means you have to use 3D vertices. Does anyone know of more sources with explicit examples?
See section 3.8.3 of the OpenGL 4.0 core specification, on defining 3D texture images: http://www.opengl.org/registry/doc/glspec40.core.20100311.pdf
This results in a 3D cube of texels, and yes, you can map a 3D location in this cube to a corner of a quad. In practice, you allocate the 3D texture with glTexImage3D and upload each CT image as one depth layer (for example with glTexSubImage3D), so the stack of 2D slices simply becomes consecutive z-layers of a single 3D texture.
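To make the quad question concrete: with the slice (proxy-geometry) approach, each slice quad carries a vec3 texture coordinate at every corner, and the fragment shader simply samples the 3D texture there. A minimal sketch, with placeholder names and an optional transfer-function texture:

```glsl
#version 330 core
in vec3 vTexCoord3D;          // interpolated from the 3D coordinates at the quad corners
uniform sampler3D uVolume;    // the CT slices uploaded as one 3D texture
uniform sampler1D uTransfer;  // maps density to color and opacity
out vec4 fragColor;

void main() {
    float density = texture(uVolume, vTexCoord3D).r;
    fragColor = texture(uTransfer, density);   // the many slices are blended via glBlendFunc
}
```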
OpenGL does know a 3D texture format, where each texel is a small subvolume of a [0,1]^3 cube. When you texture a triangle or a quad with such a texture, it is as if you cut a thin slice out of that volume. If you want a true volumetric rendering, you must write a volume raycaster. If you Google "GPU direct volume rendering" you should find plenty of tutorials.
I've implemented a volume renderer using ray casting in CUDA. Now I need to add other 3D objects (like 3D terrain, in my case) to the scene and have them interact with the volume-rendered result. For example, when I move the volume-rendered result so that it overlaps the terrain, I want to modulate it, for instance by clipping the overlapping part of the volume-rendered result.
However, the volume-rendered result comes from a ray accumulating color, so it is a 2D picture with no depth. That makes it very confusing to me how to implement the interaction. Can somebody give me a hint?
First render your 3D rasterized objects. Then take the depth buffer and use it as an additional data source in the volume raycaster, as an extra constraint on the integration limits.
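Sketched in GLSL, the test inside the raycaster's marching loop could look like this (uDepthTex is the depth buffer from the rasterized pass; uViewProj, uScreenSize, and the world-space sample position are placeholders for your own setup, and the same idea carries over to a CUDA raycaster):

```glsl
uniform sampler2D uDepthTex;   // depth buffer of the rasterized scene
uniform mat4      uViewProj;   // view-projection matrix used for that pass
uniform vec2      uScreenSize; // viewport size in pixels

// Returns true if a world-space sample point lies behind the rasterized geometry.
bool occludedByScene(vec3 worldPos)
{
    vec4 clip = uViewProj * vec4(worldPos, 1.0);
    vec3 ndc  = clip.xyz / clip.w;                           // normalized device coordinates
    float sampleDepth = ndc.z * 0.5 + 0.5;                   // into [0,1], like the depth buffer
    float sceneDepth  = texture(uDepthTex, gl_FragCoord.xy / uScreenSize).r;
    return sampleDepth > sceneDepth;
}

// Inside the marching loop:
//     if (occludedByScene(samplePos)) break;   // clips the part hidden by the terrain
```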
Actually, I think the result of ray casting is a 2D image, so it cannot interact with other 3D objects in the usual way. My solution is to take the ray-casting 2D image as a texture and blend it into the 3D scene. If I can control the view position and direction, I can map the ray-casting result to the exact place in the 3D scene. I'm still trying to implement this solution, but I think the idea is sound!