Parallel plane in OpenGL

I'm working with OpenGL, and I need to draw a plane in front of a triangle in three-dimensional space, so that if one of the triangle's points changes, the plane changes with it.
I have the 3 points, and using the cross product I can get the normal vector. So, to draw the plane, I only need to translate the triangle to the origin of the world relative to one of the triangle's points, translate some distance along the normal, rotate by the normal's angles in X, Y and Z, and draw the plane.
I need to know how to translate along the normal and how to rotate the new plane, so that when one of the vertices changes, the normal changes and the plane changes too.
As I understand it, I can use the normal vector in glRotatef(angle, normal[x, y, z]) with angle = 0. But the plane doesn't change when I change one of the triangle's vertices.

OpenGL is not a scene graph. It will not deal with transforming objects for you. All OpenGL does is render what you tell it to render.
If you tell it to render a vertex (which YOU changed), and do not tell it to change the way it draws the plane, then of course the plane will not change.
Look into scene graphs, and how to do matrix and vector math. A simple scene graph is relatively easy to create.
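As a concrete illustration of the math involved, here is a minimal sketch in legacy immediate-mode OpenGL (to match the question's glRotatef style): rather than accumulating translations and rotations, recompute the plane from the current triangle vertices every frame, so moving a vertex automatically moves the plane. All helper names here are illustrative, not from any library.

    #include <GL/gl.h>
    #include <cmath>

    struct Vec3 { float x, y, z; };

    static Vec3 sub(Vec3 a, Vec3 b)  { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static Vec3 add(Vec3 a, Vec3 b)  { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
    static Vec3 cross(Vec3 a, Vec3 b) {
        return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
    }
    static Vec3 normalize(Vec3 v) {
        float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
        return mul(v, 1.0f / len);
    }

    // Draw a quad parallel to triangle (a, b, c), offset by 'dist' along its normal.
    void drawParallelPlane(Vec3 a, Vec3 b, Vec3 c, float dist, float halfSize)
    {
        Vec3 n = normalize(cross(sub(b, a), sub(c, a)));  // triangle normal
        Vec3 u = normalize(sub(b, a));                    // first in-plane axis
        Vec3 v = cross(n, u);                             // second in-plane axis
        Vec3 center = add(mul(add(add(a, b), c), 1.0f / 3.0f), mul(n, dist));

        Vec3 p0 = add(center, add(mul(u, -halfSize), mul(v, -halfSize)));
        Vec3 p1 = add(center, add(mul(u,  halfSize), mul(v, -halfSize)));
        Vec3 p2 = add(center, add(mul(u,  halfSize), mul(v,  halfSize)));
        Vec3 p3 = add(center, add(mul(u, -halfSize), mul(v,  halfSize)));

        glBegin(GL_QUADS);
        glNormal3f(n.x, n.y, n.z);
        glVertex3f(p0.x, p0.y, p0.z);
        glVertex3f(p1.x, p1.y, p1.z);
        glVertex3f(p2.x, p2.y, p2.z);
        glVertex3f(p3.x, p3.y, p3.z);
        glEnd();
    }

Call drawParallelPlane(A, B, C, distance, size) each frame with the current vertices; no glRotatef is needed, because the plane's orientation is encoded directly in its corner positions.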

Related

Modifying a texture on a mesh at a given world coordinate

I'm making an editor in which I want to build a terrain map. I want to use the mouse to increase/decrease the terrain altitude to create mountains and lakes.
Technically I have a heightmap that I want to modify at a certain texture coordinate that I pick out with my mouse. To do this I first go from screen coordinates to world position - I have done that. The next step, going from world position to picking the right texture coordinate, puzzles me though. How do I do that?
If you are using a simple heightmap as a displacement map in, let's say, the y direction, the base mesh lies in the xz plane (y = 0).
You can discard the y coordinate from the world coordinate you have calculated, and you get the point on the base mesh. From there you can map it to texture space the same way you map your texture.
I would not implement it that way.
I would render the scene to a framebuffer, and instead of rendering a texture onto the mesh, color-code the texture coordinates onto the mesh.
If I click somewhere in screen space, I can simply read the pixel value from the framebuffer and get the texture coordinate directly.
Rendering to the framebuffer should be very inexpensive anyway.
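A minimal sketch of that color-coding idea, assuming a GL 3+ context with a loader such as GLEW or glad (framebuffer and shader setup omitted; names like pickFBO are placeholders):

    #include <GL/glew.h>   // or your loader of choice

    // Fragment shader for the picking pass: write the interpolated UV as color.
    // Use a float color attachment so precision isn't lost to 8-bit quantization.
    const char* pickFragSrc =
        "#version 330 core\n"
        "in vec2 vUV;\n"
        "out vec4 fragColor;\n"
        "void main() { fragColor = vec4(vUV, 0.0, 1.0); }\n";

    // After rendering the terrain into pickFBO with that shader:
    void readUVUnderCursor(GLuint pickFBO, int mouseX, int mouseY,
                           int viewportHeight, float uvOut[2])
    {
        float px[4];
        glBindFramebuffer(GL_READ_FRAMEBUFFER, pickFBO);
        // Window y is flipped because OpenGL's origin is bottom-left.
        glReadPixels(mouseX, viewportHeight - mouseY, 1, 1, GL_RGBA, GL_FLOAT, px);
        uvOut[0] = px[0];   // texture coordinates under the cursor
        uvOut[1] = px[1];
    }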
Assuming your terrain is a simple rectangle, you first calculate the vector between the mouse's world position and the origin of your terrain (the vertex of your terrain quad that the top-left corner of your height map is mapped to), e.g. mouse (50, 25) - origin (-100, -100) = (150, 125).
Now divide the x and y coordinates by the world-space width and height of your terrain quad:
150 / 200 = 0.75 and 125 / 200 = 0.625. This gives you the texture coordinates; if you need them as pixel coordinates instead, simply multiply by the size of your texture.
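In code, that mapping is just a couple of divisions (variable names are illustrative):

    float u = (mouseWorldX - terrainOriginX) / terrainWidth;   // 150 / 200 = 0.75
    float v = (mouseWorldY - terrainOriginY) / terrainHeight;  // 125 / 200 = 0.625
    int texelX = (int)(u * textureWidth);    // only if you need pixel coordinates
    int texelY = (int)(v * textureHeight);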
I assume the following:
The world coordinates you computed are those of the mouse pointer within the view frustum. I name them mouseCoord.
We also have the camera coordinates, camCoord
The world consists of triangles
Each triangle point has texture coordinates; those are interpolated by barycentric coordinates
If so, the solution goes like this:
use camCoord as origin. Compute the direction of a ray as mouseCoord - camCoord.
Compute the point of intersection with a triangle. The naive variant is to check every triangle for an intersection; a more sophisticated approach would rule out most triangles first with some other algorithm, e.g. partitioning the world into cubes, tracing the ray along the cubes, and only looking at the triangles that overlap those cubes. Intersection with a triangle can be computed like on this website: http://www.lighthouse3d.com/tutorials/maths/ray-triangle-intersection/
Compute the intersection point's barycentric coordinates with respect to that triangle, like this: https://www.scratchapixel.com/lessons/3d-basic-rendering/ray-tracing-rendering-a-triangle/barycentric-coordinates
Use the barycentric coordinates as weights for the texture coordinates of the corresponding triangle points. The result is the texture coordinates of the intersection point, i.e. what you want.
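The Möller–Trumbore algorithm is a common way to do those last two steps at once, since it reports the hit distance together with the barycentric weights; a self-contained sketch (helper names are illustrative):

    #include <cmath>

    struct V3 { float x, y, z; };

    static V3    vsub(V3 a, V3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static V3    vcross(V3 a, V3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
    static float vdot(V3 a, V3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Returns true on a hit; t is the distance along the ray, and (u, v) are
    // the barycentric weights of vertices b and c (vertex a gets 1 - u - v).
    bool rayTriangle(V3 orig, V3 dir, V3 a, V3 b, V3 c,
                     float& t, float& u, float& v)
    {
        const float EPS = 1e-7f;
        V3 e1 = vsub(b, a), e2 = vsub(c, a);
        V3 p  = vcross(dir, e2);
        float det = vdot(e1, p);
        if (std::fabs(det) < EPS) return false;   // ray parallel to the triangle
        float inv = 1.0f / det;
        V3 s = vsub(orig, a);
        u = vdot(s, p) * inv;
        if (u < 0.0f || u > 1.0f) return false;
        V3 q = vcross(s, e1);
        v = vdot(dir, q) * inv;
        if (v < 0.0f || u + v > 1.0f) return false;
        t = vdot(e2, q) * inv;
        return t > EPS;                           // hit in front of the ray origin
    }

    // Texture coordinates at the hit: uvHit = (1 - u - v)*uvA + u*uvB + v*uvC.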
If I misunderstood what you wanted, please edit your question with additional information.
Another variant specific for a height map:
Assume the assumptions are changed like this:
The world has ground tiles over x and y
The ground tiles have height values in their corners
For a point within the tile, the height value is interpolated somehow, like by bilinear interpolation.
The texture is interpolated in the same way, again with given texture coordinates for the corners
A feasible (approximate) algorithm for that:
Again, compute origin and direction.
Without loss of generality, assume that the direction changes faster in x. If not, exchange x and y in the algorithm.
Trace the ray with a given step length in x; that is, in each step the x coordinate changes by that step length. (Take the direction, multiply it by the step size divided by its x value, and add that to the current position, starting at the origin.)
For your current coordinate, check whether its z value is below the current height (i.e. it has just collided with the ground).
If so, either finish, or decrease the step size and do a finer search in that vicinity, going backwards until you are above the height again, then going forwards in even finer steps, et cetera. The result is the current x and y coordinates.
Compute the relative position of your x and y coordinates within the current tile. Use that as weights for the corner texture coordinates.
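A rough sketch of that march; heightAt() is a hypothetical stand-in for however your height map is sampled and interpolated, and everything else is illustrative:

    #include <cmath>

    // Hypothetical sampler: interpolated terrain height at (x, y).
    float heightAt(float x, float y);

    // March along the ray in fixed x-steps until it dips below the terrain.
    bool marchRay(float ox, float oy, float oz,    // ray origin
                  float dx, float dy, float dz,    // ray direction
                  float step, float maxDist,
                  float& hitX, float& hitY)
    {
        // Scale the direction so each iteration advances 'step' in x
        // (x is assumed to be the dominant axis, as above).
        float k = step / std::fabs(dx);
        float x = ox, y = oy, z = oz;
        for (float traveled = 0.0f; traveled < maxDist; traveled += step) {
            x += dx * k;  y += dy * k;  z += dz * k;
            if (z < heightAt(x, y)) {              // just went below ground
                // Refine here if needed: step back and repeat with smaller steps.
                hitX = x;  hitY = y;
                return true;
            }
        }
        return false;
    }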
This algorithm can theoretically jump over very thin peaks. Choose a small enough step size to counter that. I cannot give an exact algorithm without knowing what type of interpolation the height map uses. It might not be the worst idea to create triangles anyway, maybe out of bilinearly interpolated coordinates? In any case, the algorithm is good for finding the tile in which the ray collides.
Another variant would be to trace the ray over the points at which its x-y coordinates cross the tile grid, and then check whether the z coordinate went below the height map; then we know that the ray collides in this tile. This could produce a false negative if the height inside a tile can be greater than at its edges, as certain forms of interpolation can produce, especially those that consider the neighbouring tiles. It works just fine with bilinear interpolation, though.
With bilinear interpolation, the exact intersection can be found like this: take the two (x, y) coordinates at which the grid is crossed by the ray. Compute the heights there to obtain two (x, y, z) coordinates, and create a line through them. The intersection of that line with the ray is the intersection with the tile's height map.
The simplest way is to render the mesh in a pre-pass with the UVs as the colour. No screen-to-world conversion is needed; the UV is the value at the mouse position. Just be careful with mips/filtering etc.

OpenGL - Convert 2D Texture Coordinates into 3D Coordinates

I've got a 2D texture on a 3D sphere, and I want to know how to transform a 2D coordinate on the texture into a 3D coordinate. I know it has to do with how the texture is mapped: I'm using OpenGL's automatic mapping function to put the texture on the sphere.
Edit:
To clarify the problem:
I have a 2D plane, which is an image containing borders drawn in red. Now I put objects on this plane that have a collision radius and move around wildly. Whenever the objects collide with the red border, they bounce back.
Now I take this 2D plane and wrap it around a 3D sphere. At the positions of the objects I want to put 3D models that move on the sphere. The problem is getting from the "simple" 2D coordinates on the plane to the more complicated 3D coordinates on the sphere, to position the 3D models correctly.
My first approach would be to map the 2D coordinates to spherical coordinates, which can easily be converted into 3D coordinates, but how would I do this?
You don't "convert" the 2D coordinate to a 3D coordinate. The 2D coordinates you have are UV coordinates (from 0 to 1) and they represent a position in the texture space. What you do is to map these UV coordinates to the vertices.
You can read more about UV mapping here.
In OpenGL, it depends on which version you are using. Either you use glTexCoord calls before the glVertex calls (for old versions of OpenGL), or you set the coordinates in a VBO to be interpolated and used in the fragment shader (on newer versions of OpenGL).
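For the asker's spherical-coordinates idea specifically, a minimal sketch under an equirectangular assumption, i.e. the plane's u maps to longitude and v to latitude (this must match how your texture is actually wrapped onto the sphere):

    #include <cmath>

    // Map plane coordinates (u, v) in [0,1]^2 to a point on a sphere of the
    // given radius, assuming an equirectangular wrap.
    void uvToSphere(float u, float v, float radius,
                    float& x, float& y, float& z)
    {
        const float PI = 3.14159265358979f;
        float theta = u * 2.0f * PI;   // longitude, 0..2*pi
        float phi   = v * PI;          // angle from the pole, 0..pi
        x = radius * std::sin(phi) * std::cos(theta);
        y = radius * std::cos(phi);
        z = radius * std::sin(phi) * std::sin(theta);
    }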
If you are planning to use the gluSphere() function, you don't need to worry about calculating UV texture coordinates, since OpenGL does it for you with the right functions.
Here you can check the gluSphere() documentation
Here is some example code
If you are planning to render your own sphere, check this question

Display voxels using shaders in OpenGL

I am working on voxelisation using the rendering pipeline, and I can now successfully voxelise the scene using vertex + geometry + fragment shaders. My voxels are stored in a 3D texture which has a size of, for example, 128x128x128.
My original model of the scene is centered at (0,0,0) and extends along both the positive and negative axes. The texture, however, is centered at (63,63,63) in texture coordinates.
I implemented a simple ray marching for visualizing, but it doesn't take camera movement into account (I can render only from very fixed positions, because my rays have to be generated taking the different coordinates of the 3D texture into account).
My question is: how can I map my rays so that they are generated at point Po with direction D in the coordinates of my 3D model, but intersect the voxels at the corresponding positions in texture coordinates, and every movement of the camera in the 3D world is remapped into voxel coordinates?
Right now I generate the rays in this way:
create a quad in front of the camera at position (63,63,-20)
cast rays in direction towards (63,63,3)
I think you should pass your entire view transform matrix to your shader as a uniform. Then, for each shader execution, you can use its screen coordinates and the view transform to compute the view-ray direction for your particular pixel.
Having the ray direction and the camera position, you just use them the same way as you do currently.
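A sketch of that unprojection, using GLM as the (assumed) math library; the resulting world-space ray can then be rescaled into 3D-texture space:

    #include <glm/glm.hpp>

    // Build a world-space ray through a given pixel from the view and
    // projection matrices (the same matrices you upload as uniforms).
    void pixelToRay(float px, float py, float width, float height,
                    const glm::mat4& view, const glm::mat4& proj,
                    glm::vec3& rayOrigin, glm::vec3& rayDir)
    {
        // Pixel -> normalized device coordinates in [-1, 1].
        glm::vec4 ndcNear(2.0f * px / width - 1.0f,
                          1.0f - 2.0f * py / height, -1.0f, 1.0f);
        glm::vec4 ndcFar(ndcNear.x, ndcNear.y, 1.0f, 1.0f);

        glm::mat4 invVP = glm::inverse(proj * view);
        glm::vec4 nearW = invVP * ndcNear;  nearW /= nearW.w;
        glm::vec4 farW  = invVP * ndcFar;   farW  /= farW.w;

        rayOrigin = glm::vec3(nearW);
        rayDir    = glm::normalize(glm::vec3(farW - nearW));
    }

    // To walk the 3D texture, rescale world positions into [0,1]^3:
    // texCoord = (worldPos - boxMin) / (boxMax - boxMin);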
There's also another way to do this that you can try:
Let's say you have a cube (0,0,0)->(1,1,1) and for each corner you assign a color based on its position, like (1,0,0) is red, etc.
Now, every frame, you draw your cube's front faces to one texture, and the cube's back faces to a second texture.
In the final rendering you can use both textures to get the entry and exit 3D vectors, already in texture space, which makes your final shader much simpler.
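A sketch of the color-coded pass (GLSL embedded as a string; variable names are illustrative). The same shader is rendered twice, once with back faces culled and once with front faces culled:

    // Fragment shader: write the cube-local position (in [0,1]^3) as the color.
    const char* cubeFragSrc =
        "#version 330 core\n"
        "in vec3 vLocalPos;\n"
        "out vec4 fragColor;\n"
        "void main() { fragColor = vec4(vLocalPos, 1.0); }\n";

    // In the final ray-casting shader, the two textures give the ray directly
    // in texture space (screenUV is the full-screen quad's coordinate):
    //   vec3 entry = texture(frontFaces, screenUV).xyz;
    //   vec3 exit  = texture(backFaces,  screenUV).xyz;
    //   vec3 dir   = normalize(exit - entry);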
You can read better descriptions here:
http://graphicsrunner.blogspot.com/2009/01/volume-rendering-101.html
http://web.cse.ohio-state.edu/~tong/vr/

How to draw a Cartesian plane via OpenGL?

I need to draw a Cartesian plane (standard OXYZ) in which I would construct planes from equations ax+by+cz+d=0 and some objects.
How can I do that in OpenGL?
You need to create a triangle or a quad. Calculate points in the plane using your equation, and construct the geometry from those points.
For rendering geometry, look for some tutorials. There are plenty of them around.
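A self-contained sketch of that construction for ax + by + cz + d = 0: take the plane point closest to the origin, build two in-plane axes, and span a quad (helper names are illustrative):

    #include <cmath>

    struct P3 { float x, y, z; };

    // Build a quad of half-size 'h' lying on the plane ax + by + cz + d = 0.
    void planeQuad(float a, float b, float c, float d, float h, P3 out[4])
    {
        float len = std::sqrt(a*a + b*b + c*c);
        P3 n  = {a/len, b/len, c/len};                       // unit plane normal
        P3 p0 = {n.x * -d/len, n.y * -d/len, n.z * -d/len};  // closest point to origin

        // Any vector not parallel to n gives in-plane axes via cross products.
        P3 ref = (std::fabs(n.x) < 0.9f) ? P3{1,0,0} : P3{0,1,0};
        P3 u = {n.y*ref.z - n.z*ref.y, n.z*ref.x - n.x*ref.z, n.x*ref.y - n.y*ref.x};
        float ul = std::sqrt(u.x*u.x + u.y*u.y + u.z*u.z);
        u = {u.x/ul, u.y/ul, u.z/ul};
        P3 v = {n.y*u.z - n.z*u.y, n.z*u.x - n.x*u.z, n.x*u.y - n.y*u.x};

        out[0] = {p0.x - h*(u.x+v.x), p0.y - h*(u.y+v.y), p0.z - h*(u.z+v.z)};
        out[1] = {p0.x + h*(u.x-v.x), p0.y + h*(u.y-v.y), p0.z + h*(u.z-v.z)};
        out[2] = {p0.x + h*(u.x+v.x), p0.y + h*(u.y+v.y), p0.z + h*(u.z+v.z)};
        out[3] = {p0.x - h*(u.x-v.x), p0.y - h*(u.y-v.y), p0.z - h*(u.z-v.z)};
    }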
If I am interpreting your question correctly, you just want to draw the axes of the Cartesian planes xy, xz, yz.
You can achieve this very easily by drawing a non-solid cube (glutWireCube should do the job), such that its bottom-front-left corner is at (0,0,0) (or bottom-back-left corner, based on the direction of positive depth).
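If you also want the axes themselves drawn explicitly, a few GL_LINES in the same fixed-function style are enough (the length 10 is arbitrary):

    glBegin(GL_LINES);
    glColor3f(1, 0, 0); glVertex3f(0, 0, 0); glVertex3f(10, 0, 0);  // X axis, red
    glColor3f(0, 1, 0); glVertex3f(0, 0, 0); glVertex3f(0, 10, 0);  // Y axis, green
    glColor3f(0, 0, 1); glVertex3f(0, 0, 0); glVertex3f(0, 0, 10);  // Z axis, blue
    glEnd();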

2d shadow mapping

I have been wondering about how to implement this with openGL:
I have a map with a flat floor and walls. Everything here is 2D; there is no 3D geometry, only 2D polygons that compose the map.
Using the vertices of the polygons, I cast shadows to define the viewable area.
The shadows define the field of view, but since the cells with walls obstruct the view, they are also darkened. I can draw the walls on top of the shadows, but doing so would show even the walls outside the field of view.
It has been suggested that I approach this problem with shadow mapping: I should render the 2D scene into 4 different 1D textures that hold the distance to the first colliding surface.
The problem is that I have no idea how to render the projection of the 2d scene into the 1D texture. If I use, for example:
gluLookAt(x, y, 0.0,   x, y + 1, 0.0,   0.0, 0.0, 1.0);
to render the top view, the result is still 2D. Also, nothing would be rendered, since all the vertices are in the same plane, so all surfaces would be orthogonal to the camera.
Do you have any tips or ideas on how to do these 2D-to-1D projections? I have been googling for scenarios like this one, but all of them are in 3D environments.
Shadow mapping assumes either a directional light or a spotlight, and you have a point light. But since you only need shadows on the floor, you could model it as a spotlight that hovers, say, 2 m above the floor and points downwards. All the walls would have to be at least 2 m high. In the first shadow-mapping pass, you could render the floor and all the walls into the shadow buffer.
However, I would not go with shadow mapping, but use volumetric shadows instead. If you go from 3D to 2D, a 3D volume becomes a 2D polygon.
Assuming that all the walls are on a regular grid, we can compute view rays from the player's position P to all the corners of walls. For each corner, store the adjacent walls, and ignore all the walls that face away from the player. Then cast rays from P to each corner, convert the rays to polar coordinates, and sort them by their angle, say counter-clockwise. Now go through this sorted list in a sweeping motion, and build the shadow polygon.
The shadow polygon consists either of corner points in this list, or of intersections between a) a line that is parallel to a wall and b) a line that goes through P and a corner. The only thing that makes this a bit complicated is that you have to find the wall that receives the shadow. Since the input is so small, I would probably start with brute force (check the corner against each wall), and see how slow it is. Note that only player-facing walls can cast shadows. Also note that the point closest to the player doesn't need to be visible.
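A sketch of that sweep with the brute-force wall test (Pt, Seg and all names are illustrative; the small epsilon side-rays are the usual trick so the sweep continues past a corner onto the wall behind it):

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Pt  { float x, y; };
    struct Seg { Pt a, b; };                       // one wall edge

    static float cross2(Pt u, Pt v) { return u.x * v.y - u.y * v.x; }

    // Closest hit of the ray from P in direction 'angle' against all walls
    // (falls back to a far point if nothing is hit).
    static Pt nearestHit(Pt P, float angle, const std::vector<Seg>& walls)
    {
        Pt d{std::cos(angle), std::sin(angle)};
        float bestT = 1e9f;
        for (const Seg& w : walls) {
            Pt e{w.b.x - w.a.x, w.b.y - w.a.y};
            Pt q{w.a.x - P.x, w.a.y - P.y};
            float den = cross2(d, e);
            if (std::fabs(den) < 1e-9f) continue;  // ray parallel to wall
            float t = cross2(q, e) / den;          // distance along the ray
            float s = cross2(q, d) / den;          // position along the wall
            if (t > 0.0f && s >= 0.0f && s <= 1.0f && t < bestT) bestT = t;
        }
        return {P.x + d.x * bestT, P.y + d.y * bestT};
    }

    // Cast rays at every corner (plus tiny offsets to either side), sorted by
    // angle: the hit points, in order, are the shadow/visibility polygon.
    std::vector<Pt> visibilityPolygon(Pt P, const std::vector<Pt>& corners,
                                      const std::vector<Seg>& walls)
    {
        std::vector<float> angles;
        for (const Pt& c : corners) {
            float a = std::atan2(c.y - P.y, c.x - P.x);
            angles.push_back(a - 1e-4f);
            angles.push_back(a);
            angles.push_back(a + 1e-4f);
        }
        std::sort(angles.begin(), angles.end());

        std::vector<Pt> polygon;
        for (float a : angles) polygon.push_back(nearestHit(P, a, walls));
        return polygon;
    }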
It's probably going to look really cool with a moving character.