OpenGL eye position to texture space

I'm following this tutorial by nVidia for implementing a fluid simulation; however, I'm confused about this part in the ray marching algorithm section:
The ray direction is given by the vector from the eye to the entry point (both in texture space).
I already have the coordinates of the ray entry points, so that's not an issue, but I don't understand how I can get the eye position in texture space.
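For what it's worth, a common way to get there is to take the eye position into the volume's model space with the inverse model matrix and then remap the volume's bounds to [0,1]. A minimal sketch with glm; modelMatrix, boxMin and boxMax are my names for the volume's transform and bounds, not anything from the tutorial:

    #include <glm/glm.hpp>

    // Sketch: eye position from world space into the volume's texture space.
    // modelMatrix places the volume box (boxMin..boxMax) in the world;
    // these names are assumptions, not from the tutorial.
    glm::vec3 eyeToTextureSpace(const glm::vec3& eyeWorld,
                                const glm::mat4& modelMatrix,
                                const glm::vec3& boxMin,
                                const glm::vec3& boxMax)
    {
        // World space -> the volume's model space.
        glm::vec3 eyeModel =
            glm::vec3(glm::inverse(modelMatrix) * glm::vec4(eyeWorld, 1.0f));
        // Model space -> [0,1]^3 texture space of the volume box.
        return (eyeModel - boxMin) / (boxMax - boxMin);
    }

The ray direction in texture space is then just normalize(entryPoint - eyeTex), matching the sentence quoted above.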

Related

Measure cube volume without point cloud or depth

I would like to compute the volume of the cube in the figure without a point cloud or a depth map; I don't have access to them, but I do have access to the corners of the cube in screen-space coordinates.
I know the ground mesh is at (0,0,0). I then project a ray from the origin (0,0,0) to all the points, following this article on projecting a ray from the camera through the image plane: http://nghiaho.com/?page_id=363
My question is: how would I know which points are candidates for the cube and which points are not?
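A hedged sketch of the ray construction from the linked article, assuming a simple pinhole model; f, cx, cy (the intrinsics) and camToWorld are my names, not from the article:

    #include <glm/glm.hpp>

    // Sketch: cast a ray from the camera through a pixel (pinhole model).
    // f = focal length, (cx, cy) = principal point, camToWorld = the
    // camera-to-world rotation; all of these are assumed names.
    glm::vec3 pixelRayDir(float px, float py, float f, float cx, float cy,
                          const glm::mat3& camToWorld)
    {
        // Direction through the pixel in camera space.
        glm::vec3 dirCam((px - cx) / f, (py - cy) / f, 1.0f);
        return glm::normalize(camToWorld * dirCam);
    }

Intersecting these rays with the ground plane (the ground mesh is at (0,0,0)) gives candidate 3D points for the cube's base corners.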

OpenGL get 3D coordinates of nearest world 3D point to the current mouse Location

In an OpenGL context, I have seen that it is possible to convert mouse coordinates to 3D world coordinates (e.g. MFC with Opengl get 3d coordinate from 2d coordinate of mouse). However, this does not work when I simply have a set of GL_POINTS and lots of empty space: when I hover the mouse over empty space, the 3D coordinates have no meaning.
How can I get the coordinates of the nearest 3D point to my mouse position?
Recognize that "unprojecting" 2D mouse coordinates into a 3D world requires additional information. A 2D position on the screen corresponds to an infinite number of points along a line, from the near plane to the far plane in 3D. What this means is that to get a 3D point, you need to provide a depth value as well.
The gluUnProject function is a convenient way of doing this and provides a parameter for this depth, winZ: 0.0 finds the 3D point at the near plane and 1.0 finds it at the far plane.
gluUnProject(winX, winY, winZ, model, proj, view, &objX, &objY, &objZ);
Note: if gluUnProject is not available, you can compute the same thing fairly easily yourself by inverting the combined projection and modelview matrices.
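For example, a minimal glm sketch of that matrix route (assuming the default [0,1] depth range):

    #include <glm/glm.hpp>

    // Sketch of gluUnProject done by hand: undo the viewport transform to
    // get normalized device coordinates, then apply the inverse of
    // projection * modelview and divide by w.
    glm::vec3 unprojectManual(const glm::vec3& win, const glm::mat4& modelview,
                              const glm::mat4& proj, const glm::ivec4& viewport)
    {
        glm::vec4 ndc((win.x - viewport[0]) / viewport[2] * 2.0f - 1.0f,
                      (win.y - viewport[1]) / viewport[3] * 2.0f - 1.0f,
                      win.z * 2.0f - 1.0f,  // depth range [0,1] -> [-1,1]
                      1.0f);
        glm::vec4 obj = glm::inverse(proj * modelview) * ndc;
        return glm::vec3(obj) / obj.w;
    }

(glm also ships this ready-made as glm::unProject.)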
When the mouse is used to interact with 3D objects, this depth value is typically found by sampling the depth buffer, or by intersecting a ray with scene geometry or with a primitive (such as a sphere or box). If the objective is to interact with points, you could use a sphere around each point. If you just want a 3D point "somewhere out there", you could pick a depth value yourself (maybe 0.0 at the near plane, or 0.5, halfway between the near and far planes).
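A sketch of the depth-buffer variant, assuming a legacy OpenGL context where the fixed-function matrix stacks can still be queried:

    #include <GL/glu.h>

    // Sketch: unproject the mouse position using the depth buffer.
    bool mouseToWorld(int mouseX, int mouseY,
                      GLdouble& objX, GLdouble& objY, GLdouble& objZ)
    {
        GLdouble model[16], proj[16];
        GLint viewport[4];
        glGetDoublev(GL_MODELVIEW_MATRIX, model);
        glGetDoublev(GL_PROJECTION_MATRIX, proj);
        glGetIntegerv(GL_VIEWPORT, viewport);

        // Window coordinates put the origin at the bottom-left corner.
        GLdouble winX = mouseX;
        GLdouble winY = viewport[3] - mouseY - 1;

        GLfloat winZ;  // depth at the mouse position, in [0,1]
        glReadPixels((GLint)winX, (GLint)winY, 1, 1,
                     GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);

        return gluUnProject(winX, winY, winZ, model, proj, viewport,
                            &objX, &objY, &objZ) == GL_TRUE;
    }

A depth of 1.0 read back here means nothing was drawn under the mouse (empty space), which is exactly the case where you'd fall back to the nearest point or a fixed depth.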

OpenGL beam spotlight

After reading up on OpenGL and GLSL, I was wondering if there are examples out there for making something like this: http://i.stack.imgur.com/FtoBj.png
I am particularly interested in the beam and the intensity of the light (god rays?).
Does anybody have a good starting point?
OpenGL just draws points, lines and triangles to the screen. It doesn't maintain a scene and the "lights" of OpenGL are actually just a position, direction and color used in the drawing calculations of points, lines or triangles.
That being said, it is actually possible to implement an effect like yours with a fragment shader that implements a variant of the shadow-mapping method. The difference is that instead of determining whether a surface element of a primitive (point, line or triangle) lies in shadow, you cast rays into a volume and, for every sampling position along each ray, test whether that volume element (voxel) lies in shadow; if it is illuminated, you add its contribution to the ray's accumulator.
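A rough sketch of that accumulation loop, written as CPU-side C++/glm code mirroring what the fragment shader would do; inShadow stands in for your shadow-map lookup and is an assumption, not an existing API:

    #include <glm/glm.hpp>
    #include <functional>

    // Sketch of the ray-marching accumulation described above.
    float lightShaft(glm::vec3 p, const glm::vec3& rayDir,
                     float stepSize, int numSteps, float scattering,
                     const std::function<bool(const glm::vec3&)>& inShadow)
    {
        float accum = 0.0f;
        for (int i = 0; i < numSteps; ++i) {
            if (!inShadow(p))                    // this voxel sees the spotlight
                accum += scattering * stepSize;  // add in-scattered light
            p += rayDir * stepSize;              // march along the view ray
        }
        return accum;  // added on top of the normally shaded surface color
    }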

Mesh and cone intersection algorithm

I am looking for an efficient algorithm for intersecting a mesh (a set of triangles) with a cone (given by an origin, a direction and an angle from that direction). More precisely, I want to find the intersection point that is closest to the cone's origin. For now, all I can think of is intersecting the mesh with several rays from the cone's origin and taking the closest hit. (Of course some spatial structure would be built over the mesh to reject unnecessary intersection tests.)
I also found the following algorithm with a brief description:
"Cone to mesh intersection is computed on the GPU by drawing the cone geometry with the mesh and reading the minimum depth value marking the intersection point".
Unfortunately its implementation isn't obvious to me.
So can anyone suggest something more efficient than what I have, or explain in more detail how it can be done on the GPU using OpenGL?
On the GPU I would do it like this:
1. Set the view:
   - place the camera at the cone's origin,
   - looking outwards along the cone's axis,
   - with the frustum covering the cone's biggest circular slice;
   - for an infinite cone, use the max Z value of the mesh vertices in the view coordinate system.
2. Clear the buffers.
3. Draw the mesh, but in the fragment shader keep only the pixels intersecting the cone (see the sketch after these notes):
   |fragment.xy - screen_middle| = tan(cone_ang/2) * fragment.z
4. Read the z-buffer: read back the fragments and, from the valid (filled) ones, select the one closest to the cone's origin.
[notes]
If your gfx engine can also handle output values from your fragment shader, then you can skip bullet 4 and do the min-distance search inside bullet 3 instead of rendering; that will speed up the process considerably (you need just a single xyz vector).
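A minimal sketch of the bullet-3 test (my own illustration of the condition above, not the exact shader):

    #include <glm/glm.hpp>
    #include <cmath>

    // With the view placed at the cone's origin and looking along its axis
    // (bullet 1), a fragment at view-space position p lies on the cone's
    // surface when its radial distance from the axis matches
    // tan(cone_ang/2) * depth. tolerance is an assumed epsilon.
    bool onConeSurface(const glm::vec3& p, float coneAng, float tolerance)
    {
        float radial   = glm::length(glm::vec2(p.x, p.y)); // distance from axis
        float expected = std::tan(coneAng * 0.5f) * p.z;   // cone radius at depth p.z
        return p.z > 0.0f && std::abs(radial - expected) <= tolerance;
    }

In the real shader you would discard fragments that fail this test; the surviving depths are exactly the mesh points on the cone, and the minimum depth marks the intersection closest to the origin.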

Display voxels using shaders in OpenGL

I am working on voxelisation using the rendering pipeline and now I successfully voxelise the scene using vertex+geometry+fragment shaders. Now my voxels are stored in a 3D texture which has size, for example, 128x128x128.
My original model of the scene is centered at (0,0,0) and extends along both the positive and negative axes. The texture, however, is centered at (63,63,63) in texel coordinates.
I implemented a simple ray marching for visualization, but it doesn't take camera movement into account (I can render only from very fixed positions, because my rays have to be generated taking the different coordinates of the 3D texture into account).
My question is: how can I map my rays so that they are generated at a point Po with direction D in the coordinates of my 3D model, but intersect the voxels at the corresponding positions in texture coordinates, with every movement of the camera in the 3D world remapped into voxel coordinates?
Right now I generate the rays in this way:
create a quad in front of the camera at position (63,63,-20)
cast rays toward (63,63,3)
I think you should store your entire view transform matrix in your shader uniform parameters. Then, for each shader execution, you can use the pixel's screen coordinates and the view transform to compute the view ray direction for that particular pixel.
Having the ray direction and the camera position, you use them the same way as you do now.
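A sketch of that per-pixel ray reconstruction with glm; invViewProj (the inverse of projection * view, the matrix suggested above) and uv (the pixel position in [0,1]) are my names:

    #include <glm/glm.hpp>

    // Sketch: reconstruct the world-space view ray for a pixel from the
    // inverse of the combined projection * view matrix.
    glm::vec3 viewRayDir(const glm::vec2& uv, const glm::mat4& invViewProj,
                         const glm::vec3& cameraPos)
    {
        glm::vec2 ndc = uv * 2.0f - 1.0f;                           // [0,1] -> [-1,1]
        glm::vec4 farPt = invViewProj * glm::vec4(ndc, 1.0f, 1.0f); // on the far plane
        glm::vec3 farWorld = glm::vec3(farPt) / farPt.w;            // perspective divide
        return glm::normalize(farWorld - cameraPos);
    }

Both cameraPos and this direction can then be pushed through the same model-to-texture mapping as the voxels (e.g. a scale and offset into the 128x128x128 grid), so the march itself happens in texture coordinates.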
There's also another way to do this that you can try:
Let's say you have a cube (0,0,0)->(1,1,1) and for each corner you assign a color based on its position, like (1,0,0) is red, etc.
Now, every frame, you draw your cube's front faces to one texture and its back faces to a second texture.
In the final rendering you can use both textures to get the entry and exit 3D vectors, already in texture space, which makes your final shader much simpler.
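The ray setup in the final pass then reduces to two texture fetches; a sketch, where entry and exit are the colors fetched from the front-face and back-face textures at the current pixel, interpreted directly as texture-space positions:

    #include <glm/glm.hpp>

    // Sketch of the final pass's ray setup from the two position-as-color
    // textures.
    struct Ray { glm::vec3 origin, dir; float length; };

    Ray rayFromEntryExit(const glm::vec3& entry, const glm::vec3& exit)
    {
        Ray r;
        r.origin = entry;                        // start on the cube's front face
        r.dir    = glm::normalize(exit - entry); // toward the back face
        r.length = glm::length(exit - entry);    // stop at the exit point
        return r;
    }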
You can read better descriptions here:
http://graphicsrunner.blogspot.com/2009/01/volume-rendering-101.html
http://web.cse.ohio-state.edu/~tong/vr/