OpenGL coordinates and scissor coordinates

I want to use the scissor test, but unfortunately scissor boxes are expressed in pixels (integers), not in OpenGL world coordinates (-1.0 to 1.0).
Can someone show me how to convert from world coordinates to integer x/y pixel coordinates?

The scissor test works in screen coordinates because it is a raster operation. You can convert a world coordinate to screen space by multiplying it by the MVP matrix (getting the -1 to 1 range, which you then map to pixels), but that will not let you clip anything other than the resulting screen-space rectangle (which will always be axis-aligned). Hence it doesn't make much sense to do a scissor test with world-space coordinates.
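Here is a minimal sketch of the usual mapping, assuming the viewport starts at (0, 0) and covers the whole window; the function name and parameters are illustrative:

    #include <GL/gl.h>

    // Map an axis-aligned rectangle given in NDC (x, y in [-1, 1]) to window
    // pixels and pass it to glScissor. (x0, y0) is the lower-left corner.
    void scissorFromNDC(float x0, float y0, float x1, float y1,
                        int viewportW, int viewportH)
    {
        int px = (int)((x0 * 0.5f + 0.5f) * viewportW);
        int py = (int)((y0 * 0.5f + 0.5f) * viewportH);
        int pw = (int)((x1 - x0) * 0.5f * viewportW);
        int ph = (int)((y1 - y0) * 0.5f * viewportH);
        glScissor(px, py, pw, ph);
    }

Note that this only makes sense for coordinates that are already in the -1 to 1 range after projection; for world coordinates you would first multiply by the MVP matrix and divide by w, as described above.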
Maybe tell us what problem you're trying to solve with the scissor test and we can find a more fitting solution.

Related

Modifying a texture on a mesh at given world coordinate

I'm making an editor in which I want to build a terrain map. I want to use the mouse to increase/decrease terrain altitude to create mountains and lakes.
Technically I have a heightmap I want to modify at a certain texcoord that I pick out with my mouse. To do this I first go from screen coordinates to world position - I have done that. The next step, going from world position to picking the right texture coordinate, puzzles me though. How do I do that?
If you are using a simple heightmap that you use as a displacement map in, let's say, the y direction, the base mesh lies in the xz plane (y=0).
You can discard the y coordinate from the world coordinate that you have calculated and you get the point on the base mesh. From there you can map it to texture space the same way you map your texture.
I would not implement it that way.
I would render the scene to a framebuffer and, instead of rendering a texture onto the mesh, color-code the texture coordinate onto the mesh.
If I click somewhere in screen space, I can simply read the pixel value from the framebuffer and get the texture coordinate directly.
The rendering to the framebuffer should be very inexpensive anyway.
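A minimal sketch of this color-coding idea, assuming a GLSL 3.3 pipeline where the mesh's UVs arrive in a varying named vUV and the pre-pass has already rendered into a bound framebuffer (ideally with a float color attachment for precision); all names here are illustrative:

    // Fragment shader for the picking pre-pass: write the interpolated
    // texture coordinate into the red/green channels instead of a texture.
    const char* uvPickFragSrc =
        "#version 330 core\n"
        "in vec2 vUV;\n"
        "out vec4 fragColor;\n"
        "void main() { fragColor = vec4(vUV, 0.0, 1.0); }\n";

    // After the pre-pass, read back the single pixel under the mouse.
    // Window coordinates have a bottom-left origin, hence the y flip.
    float picked[4];
    glReadPixels(mouseX, viewportH - mouseY, 1, 1, GL_RGBA, GL_FLOAT, picked);
    float u = picked[0], v = picked[1];   // texture coordinate under the cursor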
Assuming your terrain is a simple rectangle, you first calculate the vector between the mouse world position and the origin of your terrain (the vertex of your terrain quad that the top-left corner of your height map is mapped to). E.g. mouse (50,25) - origin (-100,-100) = (150,125).
Now divide the x and y coordinates by the world-space width and height of your terrain quad.
150 / 200 = 0.75 and 125 / 200 = 0.625. This gives you the texture coordinates; if you need them as pixel coordinates instead, simply multiply by the size of your texture.
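That arithmetic as a small helper, using GLM for the vector math (an assumption; the thread itself is plain OpenGL):

    #include <glm/glm.hpp>

    // terrainOrigin is the world position of the quad corner that the
    // top-left corner of the height map is mapped to.
    glm::vec2 worldToUV(glm::vec2 mouseWorld, glm::vec2 terrainOrigin,
                        float terrainW, float terrainH)
    {
        glm::vec2 rel = mouseWorld - terrainOrigin;           // e.g. (50,25) - (-100,-100) = (150,125)
        return glm::vec2(rel.x / terrainW, rel.y / terrainH); // (0.75, 0.625) for a 200x200 quad
    }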
I assume the following:
The world coordinates you computed are those of the mouse pointer within the view frustum. I name them mouseCoord
We also have the camera coordinates, camCoord
The world consists of triangles
Each triangle point has texture coordinates; those are interpolated by barycentric coordinates
If so, the solution goes like this:
Use camCoord as the origin. Compute the direction of the ray as mouseCoord - camCoord.
Compute the point of intersection with a triangle. The naive variant is to check every triangle for intersection; more sophisticated would be to rule out most triangles first by some other algorithm, like partitioning the world into cubes, tracing the ray along the cubes, and only looking at the triangles that overlap those cubes. Intersection with a triangle can be computed as on this website: http://www.lighthouse3d.com/tutorials/maths/ray-triangle-intersection/
Compute the intersection point's barycentric coordinates with respect to that triangle, like this: https://www.scratchapixel.com/lessons/3d-basic-rendering/ray-tracing-rendering-a-triangle/barycentric-coordinates
Use the barycentric coordinates as weights for the texture coordinates of the corresponding triangle points. The result is the texture coordinates of the intersection point, aka what you want (see the sketch below).
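A minimal sketch of the last three steps in one function, using the Möller-Trumbore intersection (one common way to implement the linked ray/triangle test) and GLM for the math; the names are illustrative:

    #include <glm/glm.hpp>
    #include <optional>
    #include <cmath>

    // Returns the interpolated texture coordinate at the ray/triangle hit,
    // or nothing if the ray misses. v holds the triangle corners, uv their
    // texture coordinates.
    std::optional<glm::vec2> intersectUV(glm::vec3 orig, glm::vec3 dir,
                                         const glm::vec3 v[3], const glm::vec2 uv[3])
    {
        glm::vec3 e1 = v[1] - v[0], e2 = v[2] - v[0];
        glm::vec3 p  = glm::cross(dir, e2);
        float det = glm::dot(e1, p);
        if (std::fabs(det) < 1e-8f) return std::nullopt;   // ray parallel to triangle
        float inv = 1.0f / det;
        glm::vec3 t = orig - v[0];
        float b1 = glm::dot(t, p) * inv;                   // barycentric weight of v[1]
        if (b1 < 0.0f || b1 > 1.0f) return std::nullopt;
        glm::vec3 q = glm::cross(t, e1);
        float b2 = glm::dot(dir, q) * inv;                 // barycentric weight of v[2]
        if (b2 < 0.0f || b1 + b2 > 1.0f) return std::nullopt;
        if (glm::dot(e2, q) * inv < 0.0f) return std::nullopt; // hit behind the origin
        return (1.0f - b1 - b2) * uv[0] + b1 * uv[1] + b2 * uv[2]; // barycentric blend
    }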
If I misunderstood what you wanted, please edit your question with additional information.
Another variant, specific to a height map:
Assume the assumptions above are changed like this:
The world has ground tiles over x and y
The ground tiles have height values in their corners
For a point within the tile, the height value is interpolated somehow, like by bilinear interpolation.
The texture is interpolated in the same way, again with given texture coordinates for the corners
A feasible (approximate) algorithm for that:
Again, compute origin and direction.
Without loss of generality, we assume that the direction changes more quickly in x. If not, exchange x and y in the algorithm.
Trace the ray with a given step length in x; that is, in each step, the x coordinate changes by that step length. (Take the direction, multiply it by the step size divided by its x value, and add that scaled direction to the current position, starting at the origin.)
For your current coordinate, check whether its z value is below the current height (i.e. it has just collided with the ground).
If so, either finish or decrease the step size and do a finer search in that vicinity, going backwards until you are above the height again, then perhaps go forwards in even finer steps, et cetera. The result is the current x and y coordinates.
Compute the relative position of your x and y coordinates within the current tile. Use that for weights for the corner texture coordinates.
This algorithm can theoretically jump over very thin peaks; choose a small enough step size to counter that. I cannot give an exact algorithm without knowing what type of interpolation the height map uses. It might not be the worst idea to create triangles anyway, maybe out of bilinearly interpolated coordinates? In any case, the algorithm is good for finding the tile in which the ray collides. A sketch follows.
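A minimal sketch of that stepped search, stepping along the ray rather than strictly in x and refining by bisection once the ray goes below the surface; heightAt is an assumed callback sampling the interpolated height map, and z is "up" as in the description above:

    #include <glm/glm.hpp>

    // Marches from 'pos' along 'dir' in fixed steps until the ray dips below
    // the terrain, then bisects to narrow down the hit. Returns false if
    // nothing is hit within maxDist (or a very thin peak was jumped over).
    bool traceHeightmap(glm::vec3 pos, glm::vec3 dir, float step, float maxDist,
                        float (*heightAt)(float x, float y), glm::vec2& hitXY)
    {
        dir = glm::normalize(dir);
        for (float d = 0.0f; d < maxDist; d += step) {
            glm::vec3 next = pos + dir * step;
            if (next.z < heightAt(next.x, next.y)) {   // just went underground
                for (int i = 0; i < 16; ++i) {         // finer search in that vicinity
                    glm::vec3 mid = 0.5f * (pos + next);
                    if (mid.z < heightAt(mid.x, mid.y)) next = mid; else pos = mid;
                }
                hitXY = glm::vec2(pos.x, pos.y);
                return true;
            }
            pos = next;
        }
        return false;
    }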
Another variant would be to trace the ray over the points at which its x-y coordinates cross the tile grid and then check whether the z coordinate has gone below the height map; then we know that it collides in this tile. This could produce a false negative if the height can be bigger inside the tile than at its edges, as certain forms of interpolation can produce, especially those that consider the neighbour tiles. It works just fine with bilinear interpolation, though.
With bilinear interpolation, the exact intersection can be found like this: take the two (x,y) coordinates at which the grid is crossed by the ray, and compute the height at each to get two (x,y,z) coordinates. Create a line out of them and compute the intersection of that line with the ray. That intersection is the intersection with the tile's height map.
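A minimal sketch of that last computation, reduced to one dimension: z1/z2 are the ray's heights at the two grid crossings and h1/h2 the terrain heights sampled there, with both treated as linear in between, as the description above suggests:

    #include <optional>
    #include <cmath>

    // Solve rayZ(s) == terrainZ(s) for s in [0, 1], where s blends between
    // the two grid-crossing points. Returns the blend factor of the hit.
    std::optional<float> intersectTileSegment(float z1, float z2, float h1, float h2)
    {
        float denom = (z2 - z1) - (h2 - h1);
        if (std::fabs(denom) < 1e-8f) return std::nullopt; // lines are parallel
        float s = (h1 - z1) / denom;
        if (s < 0.0f || s > 1.0f) return std::nullopt;     // no crossing inside the tile
        return s;  // hit position: lerp of the two crossing points by s
    }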
The simplest way is to render the mesh as a pre-pass with the UVs as the colour. No screen-to-world conversion is needed; the UV is the value at the mouse position. Just be careful with mips/filtering etc.

How should Z value be compared with depth value?

I'd like to know whether a different model has been drawn in front of a given (x, y, z) coordinate.
Comparing the depth-buffer value with the Z value of my world-transformed coordinate doesn't work.
The Z value seems to be normalized so that near = 0 and far = 1, but the depth value seems to reach 1 only at the point drawn deepest into the view frustum.
When I moved the far plane farther away, the Z value decreased, but the depth value didn't change.
Thank you.
I am not sure I understand your question correctly, but I will make a guess and provide an answer. Apologies in advance if this is not what you were asking.

In OpenGL you need to understand what the view frustum is. In it, you have an x and y coordinate and a depth value. The depth value represents how far from your eye the drawn object (pixel) is. This is so that you can avoid having objects in the background obscure objects that are closer in, giving a more realistic representation of reality.

You also have clipping planes: a near and a far clipping plane. Anything closer than the near clipping plane will not be drawn, and anything farther than the far clipping plane won't be drawn. If, for example, I am drawing an image of the Earth from space, I know I won't have to bother with anything that is on the other side of the Earth and can just clip it away, speeding things up.

Usually, the near clipping plane maps to depth = 0 and the far clipping plane to depth = 1. This interval is then subdivided (depending on your depth buffer's precision), and OpenGL, as said, will put each pixel in a slot and decide what is closer to your eye and what is not (along the same line drawn from the eye through the pixel x, y). If you are in 3D and have x, y, z, the z value from the scene won't match the depth value directly; you need to use the view and projection matrices to map things correctly.
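A minimal sketch of that mapping, assuming GLM for the matrix math and the default glDepthRange(0, 1); it computes the depth value OpenGL would store for a world-space point, which can then be compared against a value read back with glReadPixels:

    #include <glm/glm.hpp>

    // Depth-buffer value the pipeline would produce for worldPos.
    float expectedDepth(const glm::mat4& proj, const glm::mat4& view, glm::vec3 worldPos)
    {
        glm::vec4 clip = proj * view * glm::vec4(worldPos, 1.0f);
        float ndcZ = clip.z / clip.w;   // perspective divide -> [-1, 1]
        return ndcZ * 0.5f + 0.5f;      // default depth range -> [0, 1]
    }

    // Usage: something already drawn at pixel (x, y) is in front of worldPos
    // if the stored depth is smaller than the expected one.
    //   float stored;
    //   glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &stored);
    //   bool occluded = stored < expectedDepth(proj, view, worldPos);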
Hopefully this helps some.

Keeping OpenGL object a fixed size on screen

I'm trying to keep a simple cube a fixed size on screen no matter how far it translates into the scene (slides along the z axis).
I know that using an orthographic projection I could draw an object at a fixed size, but I'm not interested in just having it look right. I need to read out the x and z coordinates (in world space) at any given time.
So the further the cube translates along the -z axis, the larger its x and z values must get for the cube to remain a fixed pixel size on screen (let's say the cube should be 50x50x50 pixels).
I'm not sure how to begin tackling this, any suggestions?
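Not a full answer, but a minimal sketch of the relationship the question describes, assuming a symmetric perspective projection with a known vertical field of view; the function and parameter names are illustrative:

    #include <cmath>

    // World-space size that covers 'pixels' pixels vertically at 'distance'
    // in front of the camera: the visible height at that distance is
    // 2 * distance * tan(fovY / 2), spread over viewportH pixels.
    float worldSizeForPixels(float pixels, float distance,
                             float fovYRadians, float viewportH)
    {
        float worldPerPixel = 2.0f * distance * std::tan(fovYRadians * 0.5f) / viewportH;
        return pixels * worldPerPixel;
    }

Scaling the 50-unit cube by worldSizeForPixels(50, distance, fovY, viewportH) / 50 each frame would keep it a constant 50 pixels tall, and its world-space coordinates would remain readable at any time.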

How to find out how many units across the screen plane in OpenGL

How would one get the relative size of the viewing plane in OpenGL's own units? I need to find out the width and height in "OpenGL units". Is there a function which will retrieve this information?
I assume that one unit (let us say 1.0f) in Z is equivalent to one unit in X, even if a conversion to a real measurement system is meaningless.
I know I can get the screen size either by use of GetSystemMetrics(SM_CXSCREEN) or glutGet(GLUT_SCREEN_WIDTH), but this is in pixels.
To handle the graphical window calls, I am using freeglut on non-windows OSes and the WinAPI on Windows.
Assuming you want to draw something like a UI, set your projection matrix to an orthographic matrix with glOrtho; then you don't have any perspective and have a direct orthographic mapping between world coordinates and screen coordinates. The arguments to your glOrtho call determine how wide/high your viewport is in world coordinates.
If you want to draw both a UI and a 3D scene, draw the UI with glOrtho and draw the scene with gluPerspective, using a clipping mask to make sure you don't ruin your UI.
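A minimal sketch of such a UI setup in the same fixed-function style, making one world unit equal one pixel (the window size names are illustrative):

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, windowWidth, windowHeight, 0.0, -1.0, 1.0); // top-left origin
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    // Now the viewport is exactly windowWidth x windowHeight "OpenGL units".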
If, on the other hand, you want to know the width of the viewport in a 3D scene with perspective, so that you know how big to draw your object, then you'll have to deal with the perspective projection. You need to know at which Z coordinate you want the width/height of the viewport. You can use gluUnProject to calculate the world coordinate corresponding to a given screen coordinate and Z plane.
However, it would probably be better to do it the other way around: always draw your object at a given size and then calculate what your projection matrix should be for that object to appear properly in your viewport.
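For the perspective case, the viewport's extent in world units at a given eye-space distance follows directly from the gluPerspective parameters; a minimal sketch:

    #include <cmath>

    // Width/height of the view volume at 'distance' in front of the camera,
    // for a frustum set up with gluPerspective(fovYDegrees, aspect, ...).
    void planeSizeAtDistance(float fovYDegrees, float aspect, float distance,
                             float& width, float& height)
    {
        float fovY = fovYDegrees * 3.14159265f / 180.0f;
        height = 2.0f * distance * std::tan(fovY * 0.5f);
        width  = height * aspect;
    }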

How to render a plane of seemingly infinite size?

How can I render a textured plane at some z-position so that it is visible out to infinity?
I could achieve this by drawing a really huge plane, but if I move my camera off the ground to a higher altitude, I start to see the plane's edges, which I want to avoid.
If this is even possible, I would prefer a non-shader method.
Edit: I tried the 4D coordinate system as suggested, but it works horribly badly: my textures get distorted even at camera position 100, so I would have to draw multiple textured quads anyway. Perhaps I could do that, and draw only the farthest quads with the 4D coordinate system? Any better ideas?
Edit 2: For those who don't know what OpenGL texture distortion looks like, here's an example from the tests I did with 4D vertex coordinates:
(in case the image is not visible: http://img828.imageshack.us/img828/469/texturedistort.jpg )
Note that it only happens when the camera gets far enough away; in this case it's only 100.0 units from the middle (middle = (0,0), where my 4 triangles start to go towards infinity). Usually this happens at around 100000.0 or so, but with 4D vertices it seems to happen earlier for some reason.
You cannot render an object of infinite size.
You are more than likely confusing the concept of projection with rendering objects of infinite size. A 4D homogeneous coordinate whose W is 0 represents a 3D position that is at infinity relative to the projection. But that doesn't mean a point infinitely far from the camera; it means a point infinitely close to the camera. That is, it represents a point whose Z coordinate (before multiplication with the perspective projection matrix) was equal to the camera position (in camera space, this is 0).
See, under perspective projection, a point that is in the same plane as the camera is infinitely far away on the X and Y axes. That is the nature of the perspective projection. 4D homogeneous coordinates allow you to give them all finite numbers, and therefore you can do useful mathematics with them (like clipping).
4D homogeneous coordinates do not allow you to represent an infinitely large surface.
Drawing an infinitely large plane is easy - all you need is to compute the horizon line in screen coordinates. To do so, simply take two non-collinear 4D directions (say, [1, 0, 0, 0] and [0, 0, 1, 0]), then compute their positions on the screen (by multiplying them manually with the view matrix and the projection matrix, and then clipping into viewport coordinates). When you have these two points, you can compute a 2D line across the screen and clip the screen against it. There you have your infinite plane (the lower polygon). However, it is difficult to display a texture on this plane, because it would be infinitely large. But if your texture is simple (e.g. a grid), then you can compute it yourself with 4D coordinates, using the same scheme as above - computing points and their corresponding vanishing points and connecting them.
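A minimal sketch of projecting such a 4D direction to the screen, using GLM for the matrix math (an assumption); two of these points define the horizon line for a plane spanned by the X and Z axes:

    #include <glm/glm.hpp>

    // Project a direction (w = 0, i.e. a point at infinity) to window pixels.
    // Only meaningful when clip.w > 0, i.e. the vanishing point lies in front
    // of the camera; a full implementation must clip, as described above.
    glm::vec2 projectDirection(const glm::mat4& viewProj, glm::vec3 dir,
                               float viewportW, float viewportH)
    {
        glm::vec4 clip = viewProj * glm::vec4(dir, 0.0f);
        glm::vec2 ndc  = glm::vec2(clip.x, clip.y) / clip.w;  // perspective divide
        return glm::vec2((ndc.x * 0.5f + 0.5f) * viewportW,
                         (ndc.y * 0.5f + 0.5f) * viewportH);
    }

    // Horizon endpoints for the ground plane: projectDirection(vp, {1,0,0}, w, h)
    // and projectDirection(vp, {0,0,1}, w, h).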