Associating a screen-space distance with a world-space distance - opengl

I am trying to implement a volumetric lines shader as shown here: http://prideout.net/blog/?p=61. Basically, a bounding volume mesh is generated for each line segment, then the segment's two end points are converted into screen space and passed into the fragment shader. In the fragment shader, the screen-space distance between the fragment and the line segment's end points is calculated. That distance is used to shade the line.
The part above works. The problem is that the distance calculated above is in screen space, so the rendered line radius is tied to its screen size: zooming the camera in or out does not make the rendered line appear thicker or thinner on screen.
How do I adjust the distance calculated above (or use any other method) to make the line appear bigger or smaller based on its distance to the camera?
Here is a picture to show the problem:
(image not included)
When the camera is zoomed far away, I should adjust the distance attenuation, but I do not know how to calculate the correct attenuation to use, for example, to draw a line with a radius of 1.0 in world space rather than 1.0 in screen space.
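One way to handle this (a sketch, not from the original post): convert the desired world-space radius into a pixel radius per fragment, using the projection matrix and the fragment's view-space depth, and compare the screen-space distance against that pixel radius. All uniform and varying names below are illustrative assumptions, not names from the linked shader.

    #version 330 core

    uniform mat4  uProj;          // projection matrix (assumed perspective)
    uniform vec2  uViewportSize;  // viewport size in pixels
    uniform float uWorldRadius;   // desired line radius in world units

    noperspective in vec2 vEndA;  // segment end point A in window coords (pixels)
    noperspective in vec2 vEndB;  // segment end point B in window coords (pixels)
    in float vViewDepth;          // positive view-space distance of this fragment

    out vec4 fragColor;

    // distance in pixels from point p to the segment a-b
    float distToSegment(vec2 p, vec2 a, vec2 b)
    {
        vec2 ab = b - a;
        float t = clamp(dot(p - a, ab) / dot(ab, ab), 0.0, 1.0);
        return length(p - a - t * ab);
    }

    void main()
    {
        // A world-space length r at view-space depth z covers roughly
        // r * proj[1][1] / z of the NDC range vertically, i.e. this many pixels:
        float pixelRadius = uWorldRadius * uProj[1][1] / vViewDepth * 0.5 * uViewportSize.y;

        float d = distToSegment(gl_FragCoord.xy, vEndA, vEndB);
        float alpha = 1.0 - smoothstep(pixelRadius - 1.0, pixelRadius + 1.0, d);
        if (alpha <= 0.0) discard;
        fragColor = vec4(1.0, 1.0, 1.0, alpha);
    }

With this, the line covers roughly uWorldRadius world units regardless of where the camera is, so zooming out makes it thinner on screen as expected.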

Related

Modifying a texture on a mesh at given world coordinate

I'm making an editor in which I want to build a terrain map. I want to use the mouse to increase/decrease terrain altitude to create mountains and lakes.
Technically, I have a heightmap I want to modify at a certain texcoord that I pick out with my mouse. To do this I first go from screen coordinates to world position - I have done that. The next step, going from world position to picking the right texture coordinate, puzzles me though. How do I do that?
If you are using a simple heightmap as a displacement map in, let's say, the y direction, the base mesh lies in the xz plane (y = 0).
You can discard the y coordinate of the world position you have calculated to get the point on the base mesh. From there you can map it to texture space the same way you map your texture.
I would not implement it that way.
I would render the scene to a framebuffer and, instead of rendering a texture onto the mesh, color-code the texture coordinates onto the mesh.
If I click somewhere in screen space, I can simply read the pixel value from the framebuffer and get the texture coordinate directly.
The rendering to the framebuffer should be very inexpensive anyway.
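A minimal sketch of that color-coded pre-pass (assuming a float color attachment such as GL_RG16F so the UVs are not quantized to 8 bits; variable names are illustrative):

    #version 330 core

    in vec2 vTexCoord;   // the mesh's regular texture coordinate
    out vec4 outColor;

    void main()
    {
        // Write the UV into the red/green channels; the pixel under the mouse
        // then holds the texture coordinate you want to paint at.
        outColor = vec4(vTexCoord, 0.0, 1.0);
    }

On the CPU side, a glReadPixels of the single pixel under the mouse from that attachment returns the UV directly.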
Assuming your terrain is a simple rectangle you first calculate the vector between the mouse world position and the origin of your terrain. (The vertex of your terrain quad where the top left corner of your height map is mapped to). E.g. mouse (50,25) - origin(-100,-100) = (150,125).
Now divide the x and y coordinates by the world space width and height of your terrain quad.
150 / 200 = 0.75 and 125 / 200 = 0.625. This gives you the texture coordinates; if you need pixel coordinates instead, simply multiply by the size of your texture.
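As a small sketch of that mapping (the terrain origin/size names are assumptions, not from the answer; the two coordinates are the ground-plane components of the mouse's world position):

    uniform vec2 uTerrainOrigin;  // world-space corner the heightmap's (0,0) maps to, e.g. (-100, -100)
    uniform vec2 uTerrainSize;    // world-space width/height of the terrain quad, e.g. (200, 200)
    uniform vec2 uTextureSize;    // heightmap resolution in pixels

    // e.g. ((50, 25) - (-100, -100)) / (200, 200) = (0.75, 0.625)
    vec2 worldToTexCoord(vec2 groundPos)
    {
        return (groundPos - uTerrainOrigin) / uTerrainSize;
    }

    vec2 worldToPixel(vec2 groundPos)
    {
        return worldToTexCoord(groundPos) * uTextureSize;
    }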
I assume the following:
The world coordinates you computed are those of the mouse pointer within the view frustum. I will call them mouseCoord
We also have the camera coordinates, camCoord
The world consists of triangles
Each triangle point has texture coordinates, and those are interpolated by barycentric coordinates
If so, the solution goes like this:
use camCoord as origin. Compute the direction of a ray as mouseCoord - camCoord.
Compute the point of intersection with a triangle. The naive variant is to check every triangle for an intersection; a more sophisticated approach would rule out most triangles first by some other means, such as partitioning the world into cubes, tracing the ray along the cubes, and only looking at the triangles that overlap those cubes. The intersection with a triangle can be computed as on this website: http://www.lighthouse3d.com/tutorials/maths/ray-triangle-intersection/
Compute the intersection point's barycentric coordinates with respect to that triangle, like this: https://www.scratchapixel.com/lessons/3d-basic-rendering/ray-tracing-rendering-a-triangle/barycentric-coordinates
Use the barycentric coordinates as weights for the texture coordinates of the corresponding triangle points. The result is the texture coordinates of the intersection point, i.e. what you want.
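A sketch of steps 2-4 combined, using the standard Möller-Trumbore intersection, which yields the barycentric coordinates directly (function and parameter names are illustrative):

    // Intersect a ray with one triangle and reuse the barycentric coordinates
    // to interpolate the triangle's texture coordinates.
    bool rayTriangleUV(vec3 orig, vec3 dir,
                       vec3 p0, vec3 p1, vec3 p2,
                       vec2 uv0, vec2 uv1, vec2 uv2,
                       out vec2 hitUV, out float hitT)
    {
        vec3 e1 = p1 - p0;
        vec3 e2 = p2 - p0;
        vec3 pvec = cross(dir, e2);
        float det = dot(e1, pvec);
        if (abs(det) < 1e-8) return false;        // ray parallel to the triangle

        float invDet = 1.0 / det;
        vec3 tvec = orig - p0;
        float u = dot(tvec, pvec) * invDet;
        if (u < 0.0 || u > 1.0) return false;

        vec3 qvec = cross(tvec, e1);
        float v = dot(dir, qvec) * invDet;
        if (v < 0.0 || u + v > 1.0) return false;

        hitT = dot(e2, qvec) * invDet;            // distance along the ray
        hitUV = (1.0 - u - v) * uv0 + u * uv1 + v * uv2;  // barycentric blend of UVs
        return hitT >= 0.0;
    }

Loop this over the candidate triangles and keep the hit with the smallest positive hitT.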
If I misunderstood what you wanted, please edit your question with additional information.
Another variant specific for a height map:
Assume the assumptions are changed as follows:
The world has ground tiles over x and y
The ground tiles have height values in their corners
For a point within the tile, the height value is interpolated somehow, like by bilinear interpolation.
The texture is interpolated in the same way, again with given texture coordinates for the corners
A feasible (approximate) algorithm for that:
Again, compute origin and direction.
Without loss of generality, we assume that the direction changes faster in x than in y. If not, exchange x and y in the algorithm.
Trace the ray with a given step length in x, that is, in each step the x coordinate changes by that step length. (Take the direction, multiply it by the step size divided by its x value, and add that scaled direction to the current position, starting at the origin.)
For your current coordinate, check whether its z value is below the current height (i.e. it has just collided with the ground).
If so, either finish, or decrease the step size and do a finer search in that vicinity: go backwards until you are above the height again, then go forwards in even finer steps, and so on. The result is the current x and y coordinates.
Compute the relative position of your x and y coordinates within the current tile. Use those as weights for the corner texture coordinates.
This algorithm can theoretically jump over very thin peaks; choose a small enough step size to counter that. I cannot give an exact algorithm without knowing what type of interpolation the height map uses. It might not be the worst idea to create triangles anyway, maybe out of bilinearly interpolated coordinates? In any case, the algorithm is good for finding the tile in which the ray collides.
Another variant would be to trace the ray over the points at which its x-y coordinates cross the tile grid and then check whether the z coordinate went below the height map; then we know that it collides in this tile. This could produce a false negative if the height can be bigger inside the tile than at its edges, as certain forms of interpolation can produce, especially those that consider the neighbour tiles. It works just fine with bilinear interpolation, though.
With bilinear interpolation, the exact intersection can be found like this: take the two (x, y) coordinates at which the ray crosses the grid, compute the heights there to get two (x, y, z) points, create a line from them, and compute the intersection of that line with the ray. That intersection is the intersection with the tile's height map.
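A rough sketch of the stepping search described above, assuming the height lives in a bilinearly filtered texture and z is the up axis, as in this variant; for brevity it steps along the ray parameter rather than strictly along x, and all names are assumptions for illustration:

    uniform sampler2D uHeightMap;
    uniform vec2 uTerrainOrigin;  // world-space x/y of the heightmap's first corner
    uniform vec2 uTerrainSize;    // world-space extent of the terrain in x and y

    float terrainHeight(vec2 groundPos)
    {
        vec2 uv = (groundPos - uTerrainOrigin) / uTerrainSize;
        return texture(uHeightMap, uv).r;          // bilinear lookup
    }

    // Returns true and the hit position if the ray hits the terrain within maxT.
    bool marchHeightmap(vec3 orig, vec3 dir, float maxT, out vec3 hit)
    {
        float stepLen = 1.0;   // coarse step; small enough not to skip thin peaks
        vec3 prev = orig;
        for (float t = stepLen; t < maxT; t += stepLen)
        {
            vec3 p = orig + dir * t;
            if (p.z < terrainHeight(p.xy))         // just went below the ground
            {
                for (int i = 0; i < 16; ++i)       // finer search by bisection
                {
                    vec3 mid = 0.5 * (prev + p);
                    if (mid.z < terrainHeight(mid.xy)) p = mid; else prev = mid;
                }
                hit = p;
                return true;
            }
            prev = p;
        }
        return false;
    }

The outer loop is the fixed-step trace; the inner loop is the finer backwards/forwards search, done here as a bisection between the last point above the ground and the first point below it.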
The simplest way is to render the mesh as a pre-pass with the UVs as the colour. No screen-to-world is needed; the UV is the value at the mouse position. Just be careful with mips/filtering etc.

opengl 3d object picking - raycast

I have a program displaying planes of cubes, like levels in a house. I have the planes displayed so that the display angle is consistent with the viewport projection plane, and I would like to allow the user to select them.
First I draw them relative to each other with the first square drawn at {0,0,0}
then I translate and rotate them; each plane has its own rotation and translation.
Thanks to this page I have code that can cast a ray using the user's last touch. If you notice in the picture above, there is a green square and a blue square; these are debug graphics displaying the ray intersecting the near and far planes of the projection after clicking in the centre (drawn with z of zero in order to display them), so it appears to be working.
I can get a bounding box of the cube, but its coordinates still act as if the cube were untransformed, up in the left corner.
My question is: how do I use my ray to check intersections with the objects after they have been rotated and translated? I'm very confused, as I once had this working when I was translating and rotating the whole grid as one; now that each plane is moved separately, I can't work out how to do it.
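One common way to handle this (a sketch, not from the original question): rather than transforming each bounding box, transform the ray into each plane's local space with the inverse of that plane's model (translate * rotate) matrix, and intersect it with the untransformed box there. The helper name and slab test below are illustrative:

    // Transform a world-space ray into an object's local space and test it
    // against the object's untransformed axis-aligned bounding box (slab method).
    bool rayHitsBox(vec3 rayOrig, vec3 rayDir, mat4 model, vec3 boxMin, vec3 boxMax)
    {
        mat4 invModel = inverse(model);
        vec3 o = (invModel * vec4(rayOrig, 1.0)).xyz;  // transform as a point (w = 1)
        vec3 d = (invModel * vec4(rayDir,  0.0)).xyz;  // transform as a direction (w = 0)

        // Slab test; assumes no component of d is exactly zero.
        vec3 t1 = (boxMin - o) / d;
        vec3 t2 = (boxMax - o) / d;
        vec3 tMin = min(t1, t2);
        vec3 tMax = max(t1, t2);
        float tNear = max(max(tMin.x, tMin.y), tMin.z);
        float tFar  = min(min(tMax.x, tMax.y), tMax.z);
        return tNear <= tFar && tFar >= 0.0;
    }

Run this test per plane with that plane's own model matrix; extend it to return tNear if you need the closest hit.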

Screen-space distance along line strip in GLSL

When rendering a line strip, how do I get the distance of a fragment to the start point of the whole strip along the line in pixels?
When rendering a single line segment between two points in 3D, the distance between those two points in screen space is simply the Euclidean distance between their 2D projections. If I render this segment, I can interpolate (layout qualifier noperspective in GLSL) the screen-space distance from the start point along the line for each fragment.
When rendering a line strip, however, this does not work, because in the geometry shader, I only have information about the start and end point of the current segment, not all previous segments. So what I can calculate with the method above is just the distance of each fragment to the start point of the line segment, not to the start point of the line strip. But this is what I want to achieve.
What do I need that for: stylized line rendering. E.g., coloring a polyline according to its screen coverage (length in pixels), adding a distance mark every 50 pixels, alternating multiple textures along the line strip, ...
What I currently do is:
project every point of the line beforehand on the CPU
calculate the lengths of all projected line segments in pixels
store the lengths in a buffer as vertex attribute (vertex 0 has distance 0, vertex 1 has the length of the segment 0->1, vertex 2 has the length 0->1 + 1->2, ...)
in the geometry shader, create the line segments and use the distances calculated on the CPU
interpolate the values without perspective correction for each fragment
This works, but there has to be a better way to do this. It's not feasible to project a few hundred or thousand line points on the CPU each frame. Is there a smart way to calculate this in the geometry shader? What I have is the world-space position of the start and end point of the current line segment; I have the world-space distance of both points to the start point of the line strip along the line (again as vertex attributes: 0, 0->1, 0->1 + 1->2, ...); and I can provide any other uniform data about the line strip (total length in world-space units, number of segments, ...).
Edit: I do not want to compute the Euclidean distance to the start point of the line strip, but the distance along the whole line, i.e. the sum of the lengths of all projected line segments up to the current fragment.
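For reference, a minimal sketch of the geometry-shader stage of this current approach: each vertex carries its CPU-computed cumulative screen-space distance, which is passed through with noperspective interpolation so the fragment shader sees a value that is linear in screen space (names are illustrative):

    #version 330 core
    layout(lines) in;
    layout(line_strip, max_vertices = 2) out;

    in float vDistanceAlongStrip[];               // pixels, accumulated on the CPU
    noperspective out float gDistanceAlongStrip;  // interpolated in screen space

    void main()
    {
        for (int i = 0; i < 2; ++i)
        {
            gl_Position = gl_in[i].gl_Position;
            gDistanceAlongStrip = vDistanceAlongStrip[i];
            EmitVertex();
        }
        EndPrimitive();
    }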
I see two ways:
You can use the vertex shader. Add the start coordinates of the line strip as an additional attribute of each vertex of the line. The vertex shader can then compute the distance to the start point and pass it, interpolated, to the fragment shader, where you rescale it to get pixel values.
Or, you can tell the vertex shader about the start coordinates of each line strip using uniforms. You would transform them the same way the vertex coordinates are transformed and pass them on to the fragment shader, where you would convert them to pixel coordinates and calculate the distance to the actual fragment.
I thought I was just missing something here and my task could be solved by simple perspective calculation magic. Thanks derhass for pointing out the pointlessness of my quest.
As always, formulating the problem was already half the solution. When you know what to look for ("continuous parameterization of a line"), you can stumble upon the paper
Forrester Cole and Adam Finkelstein. Two Fast Methods for High-Quality Line Visibility. IEEE Transactions on Visualization and Computer Graphics 16(5), February 2010.
which deals with this very problem. The solution is similar to what derhass already proposed. Cole and Finkelstein use a segment atlas which they calculate per frame on the GPU. It includes a list of all projected line points (among other attributes such as visibility) to keep track of the position along the line at each fragment. This segment atlas is computed in a framebuffer, as the paper (draft) dates back to 2009. Implementing the same method in a compute shader and storing the results in a buffer seems like the way to go.

Display voxels using shaders in openGL

I am working on voxelisation using the rendering pipeline, and I now successfully voxelise the scene using vertex + geometry + fragment shaders. My voxels are stored in a 3D texture with a size of, for example, 128x128x128.
My original model of the scene is centered at (0,0,0) and extends along both the positive and negative axes. The texture, however, is centered at (63,63,63) in texture coordinates.
I implemented simple ray marching for visualization, but it doesn't take camera movement into account (I can render only from very fixed positions, because my rays have to be generated taking into account the different coordinates of the 3D texture).
My question is: how can I map my rays so that they are generated at point Po with direction D in the coordinates of my 3D model, but intersect the voxels at the corresponding position in texture coordinates, and every movement of the camera in the 3D world is remapped into voxel coordinates?
Right now I generate the rays in this way:
create a quad in front of the camera at position (63,63,-20)
cast rays in direction towards (63,63,3)
I think you should store your entire view transform matrix in your shader uniform parameters. Then, for each shader execution, you can use its screen coordinates and the view transform to compute the view ray direction for your particular pixel.
Having the ray direction and camera position, you just use them the same way as you do now.
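A sketch of that per-pixel ray setup (the uniform names, the world-to-texture matrix, and the fixed step size are assumptions for illustration): unproject the fragment's NDC position with the inverse view-projection matrix, march from the camera position in model/world space, and map each sample point into [0,1]^3 texture space for the lookup.

    #version 330 core

    uniform mat4      uInvViewProj;   // inverse(projection * view)
    uniform vec3      uCameraPos;     // camera position in model/world space
    uniform vec2      uViewportSize;  // viewport size in pixels
    uniform mat4      uWorldToTex;    // maps model/world coordinates into [0,1]^3 texture space
    uniform sampler3D uVoxels;

    out vec4 fragColor;

    void main()
    {
        // NDC position of this fragment on the near plane
        vec2 ndc = gl_FragCoord.xy / uViewportSize * 2.0 - 1.0;
        vec4 nearPt = uInvViewProj * vec4(ndc, -1.0, 1.0);
        vec3 rayDir = normalize(nearPt.xyz / nearPt.w - uCameraPos);

        // March in model/world space, sample in texture space (front-to-back blending)
        vec3 p = uCameraPos;
        vec4 acc = vec4(0.0);
        for (int i = 0; i < 256 && acc.a < 0.99; ++i)
        {
            vec3 tc = (uWorldToTex * vec4(p, 1.0)).xyz;
            if (all(greaterThanEqual(tc, vec3(0.0))) && all(lessThanEqual(tc, vec3(1.0))))
            {
                vec4 s = texture(uVoxels, tc);
                acc.rgb += (1.0 - acc.a) * s.a * s.rgb;
                acc.a   += (1.0 - acc.a) * s.a;
            }
            p += rayDir * 0.05;   // hypothetical fixed step in world units
        }
        fragColor = acc;
    }

For a model spanning [-s, s]^3 around the origin, uWorldToTex is just a scale by 1/(2s) followed by a translation of 0.5 on every axis; that single matrix is what remaps every camera movement in the 3D world into voxel coordinates.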
There's also another way to do this that you can try:
Let's say you have a cube (0,0,0)->(1,1,1) and for each corner you assign a color based on its position, like (1,0,0) is red, etc.
Now, every frame, you draw your cube's front faces into one texture and its back faces into a second texture.
In the final rendering you can use both textures to get the entry and exit 3D vectors, already in texture space, which makes your final shader much simpler.
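A sketch of the position-as-colour pass (rendered twice, once with back-face culling for the entry points and once with front-face culling for the exit points; names are illustrative):

    // vertex shader
    #version 330 core
    layout(location = 0) in vec3 aPosition;   // unit cube vertices in [0,1]^3
    uniform mat4 uMVP;
    out vec3 vTexPos;
    void main()
    {
        vTexPos = aPosition;                  // the position doubles as a 3D texture coordinate
        gl_Position = uMVP * vec4(aPosition, 1.0);
    }

    // fragment shader
    #version 330 core
    in vec3 vTexPos;
    out vec4 outColor;
    void main()
    {
        outColor = vec4(vTexPos, 1.0);        // use a float attachment to avoid quantization
    }

The final ray-march pass then samples both textures at the fragment's screen position to get the entry and exit points and steps between them in texture space.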
You can read better descriptions here:
http://graphicsrunner.blogspot.com/2009/01/volume-rendering-101.html
http://web.cse.ohio-state.edu/~tong/vr/

Screen space bounding box computation in OpenGL

I'm trying to implement the tiled deferred rendering method and now I'm stuck. I'm computing min/max depth for each tile (32x32) and storing it in a texture. Then I want to compute a screen-space bounding box (bounding rectangle), represented by the lower-left and upper-right coordinates of a rectangle, for every point light (sphere) in my scene (see the picture from my app). This, together with the min/max depth, will be used to check whether a light affects the actual tile.
Problem is I have no idea how to do this. Any idea, source code or exact math?
Update
Screen-space is basically a 2D entity, so instead of a bounding box think of a bounding rectangle.
Here is a simple way to compute it:
Project 8 corner points of your world-space bounding box onto the screen using your ModelViewProjection matrix
Find a bounding rectangle of these points (which is just min/max X and Y coordinates of the points)
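A sketch of that simple method (the helper and its parameters are illustrative):

    // Project the 8 corners of a world-space AABB with the ModelViewProjection
    // matrix and take the min/max of the resulting window coordinates.
    void screenBoundingRect(vec3 boxMin, vec3 boxMax, mat4 mvp, vec2 viewportSize,
                            out vec2 rectMin, out vec2 rectMax)
    {
        rectMin = vec2( 1e30);
        rectMax = vec2(-1e30);
        for (int i = 0; i < 8; ++i)
        {
            // pick each min/max combination per axis
            vec3 corner = vec3((i & 1) != 0 ? boxMax.x : boxMin.x,
                               (i & 2) != 0 ? boxMax.y : boxMin.y,
                               (i & 4) != 0 ? boxMax.z : boxMin.z);
            vec4 clip = mvp * vec4(corner, 1.0);
            vec2 ndc  = clip.xy / clip.w;                 // assumes the corner is in front of the camera (w > 0)
            vec2 win  = (ndc * 0.5 + 0.5) * viewportSize; // NDC -> pixels
            rectMin = min(rectMin, win);
            rectMax = max(rectMax, win);
        }
    }

Note that this breaks down when a corner ends up behind the camera (w <= 0); the tangent-plane method described next handles a sphere light more robustly.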
A more sophisticated way can be used to compute a screen-space bounding rectangle for a point light source: calculate four planes that pass through the camera position and are tangent to the light's sphere of illumination (the light radius). The intersection of each tangent plane with the image plane gives us four lines on the image plane, and these lines define the resulting bounding rectangle.
Refer to this article for math details: http://www.altdevblogaday.com/2012/03/01/getting-the-projected-extent-of-a-sphere-to-the-near-plane/