OpenGL: 2D Vertex coordinates to 2D viewing coordinates?

I'm implementing a rasterizer for a class project, and currently I'm stuck on how I should convert vertex coordinates to viewing pane coordinates.
I'm given a list of vertices of 2D coordinates for a triangle, like
0 0 1
2 0 1
0 1 1
and I'm drawing in a viewing pane (using OpenGL and GLUT) of size 400x400 pixels, for example.
My question is: how do I decide where in the viewing pane to put these vertices, assuming
1) I want the coordinates to be centered around 0,0 at the center of the screen
2) I want to fill up most of the screen (let's say for this example, the screen is the maximum x coordinate + 1 lengths wide, etc.)
3) I have any and all of OpenGL's and GLUT's standard library functions at my disposal.
Thanks!

http://www.opengl.org/sdk/docs/man/xhtml/glOrtho.xml
To center around 0, use symmetric left/right and bottom/top. Beware the near/far values, which are somewhat arbitrary but are often chosen (in examples) as -1..+1, which might be a problem for your triangles at z=1.
If you care about the aspect ratio, make sure that right-left and bottom-top are proportional to the window's width/height.
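A minimal sketch of such a setup, assuming a GLUT window, the legacy matrix stack, and a view centered on the origin that is 3 world units wide (the question's largest x coordinate is 2, so maximum x + 1); the halfWidth value and the -2..2 near/far range are illustrative choices, not anything mandated by GLUT:

```cpp
#include <GL/glut.h>

void reshape(int width, int height)
{
    glViewport(0, 0, width, height);

    const double halfWidth  = 1.5;                              // view is 3 units wide
    // Keep the aspect ratio: derive the vertical extent from the window shape.
    const double halfHeight = halfWidth * (double)height / width;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    // Symmetric left/right and bottom/top center the view on (0,0).
    // Near/far chosen as -2..2 so geometry at z = 1 is not clipped.
    glOrtho(-halfWidth, halfWidth, -halfHeight, halfHeight, -2.0, 2.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}

// Registered with: glutReshapeFunc(reshape);
```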

You should consider the frustum, which is your volumetric view, and calculate the coordinates by transforming your objects to account for their position; this explains the theory quite thoroughly.
Basically you have to project the object using a projection matrix that is calculated based on the characteristics of your view:
scale them according to a z (depth) value: you scale both x and y inversely proportionally to z
you scale and shift coordinates in order to fit the width of your view

Related

Modifying a texture on a mesh at a given world coordinate

I'm making an editor in which I want to build a terrain map. I want to use the mouse to increase/decrease terrain altitude to create mountains and lakes.
Technically I have a heightmap I want to modify at a certain texcoord that I pick out with my mouse. To do this I first go from screen coordinates to world position - I have done that. The next step, going from world position to picking the right texture coordinate puzzles me though. How do I do that?
If you are using a simple heightmap that you use as a displacement map in, let's say, the y direction, the base mesh lies in the xz plane (y=0).
You can discard the y coordinate from the world coordinate that you have calculated and you get the point on the base mesh. From there you can map it to texture space the way you map your texture.
I would not implement it that way.
I would render the scene to a framebuffer and, instead of rendering a texture onto the mesh, color-code the texture coordinates onto the mesh.
If I click somewhere in screen space, I can simply read the pixel value from the framebuffer and get the texture coordinate directly.
The rendering to the framebuffer should be very inexpensive anyway.
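A minimal sketch of the read-back step, assuming the mesh has already been rendered into the currently bound framebuffer with its texture coordinates encoded in the red/green channels; the function and parameter names are illustrative:

```cpp
#include <GL/gl.h>

void pickTexCoord(int mouseX, int mouseY, int viewportHeight,
                  float* outU, float* outV)
{
    unsigned char pixel[4];
    // Window coordinates usually have their origin at the top left, OpenGL
    // reads from the bottom left, so flip y.
    glReadPixels(mouseX, viewportHeight - mouseY - 1, 1, 1,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixel);
    *outU = pixel[0] / 255.0f;   // red channel   -> u
    *outV = pixel[1] / 255.0f;   // green channel -> v
}
```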
Assuming your terrain is a simple rectangle, you first calculate the vector between the mouse world position and the origin of your terrain (the vertex of your terrain quad to which the top left corner of your height map is mapped), e.g. mouse (50,25) - origin (-100,-100) = (150,125).
Now divide the x and y coordinates by the world space width and height of your terrain quad.
150 / 200 = 0.75 and 125 / 200 = 0.625. This gives you the texture coordinates; if you need them as pixel coordinates instead, simply multiply by the size of your texture.
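A minimal sketch of that calculation, assuming an axis-aligned terrain quad whose origin corner maps to texture coordinate (0,0); all names are illustrative:

```cpp
struct TexCoord { float u, v; };

TexCoord worldToTexCoord(float worldX, float worldY,
                         float terrainOriginX, float terrainOriginY,
                         float terrainWidth, float terrainHeight)
{
    TexCoord tc;
    // Offset from the terrain origin, divided by the quad's world-space size.
    tc.u = (worldX - terrainOriginX) / terrainWidth;   // e.g. 150 / 200 = 0.75
    tc.v = (worldY - terrainOriginY) / terrainHeight;  // e.g. 125 / 200 = 0.625
    return tc;
}
```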
I assume the following:
The world coordinates you computed are those of the mouse pointer within the view frustum. I'll call them mouseCoord
We also have the camera coordinates, camCoord
The world consists of triangles
Each triangle point has texture coordinates, which are interpolated by barycentric coordinates
If so, the solution goes like this:
Use camCoord as origin. Compute the direction of a ray as mouseCoord - camCoord.
Compute the point of intersection with a triangle. The naive variant is to check every triangle for intersection; a more sophisticated approach would be to rule out most triangles first by some other algorithm, like partitioning the world into cubes, tracing the ray along the cubes and only looking at the triangles that overlap a cube. Intersection with a triangle can be computed like on this website: http://www.lighthouse3d.com/tutorials/maths/ray-triangle-intersection/
Compute the intersection point's barycentric coordinates with respect to that triangle, like this: https://www.scratchapixel.com/lessons/3d-basic-rendering/ray-tracing-rendering-a-triangle/barycentric-coordinates
Use the barycentric coordinates as weights for the texture coordinates of the corresponding triangle points. The result are the texture coordinates of the intersection point, aka what you want.
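A minimal sketch of steps 2-4 combined, assuming a tiny hand-rolled vector type; this uses the Moller-Trumbore ray/triangle test (one common choice, not the only one), which yields the barycentric weights as a by-product:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec2 { float x, y; };

static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns true on hit and writes the interpolated texture coordinate.
bool rayTriangleTexCoord(Vec3 orig, Vec3 dir,
                         Vec3 p0, Vec3 p1, Vec3 p2,
                         Vec2 t0, Vec2 t1, Vec2 t2,
                         Vec2* outTexCoord)
{
    const float eps = 1e-6f;
    Vec3 e1 = sub(p1, p0), e2 = sub(p2, p0);
    Vec3 pvec = cross(dir, e2);
    float det = dot(e1, pvec);
    if (std::fabs(det) < eps) return false;          // ray parallel to triangle
    float invDet = 1.0f / det;
    Vec3 tvec = sub(orig, p0);
    float u = dot(tvec, pvec) * invDet;              // barycentric weight for p1
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 qvec = cross(tvec, e1);
    float v = dot(dir, qvec) * invDet;               // barycentric weight for p2
    if (v < 0.0f || u + v > 1.0f) return false;
    float t = dot(e2, qvec) * invDet;                // distance along the ray
    if (t < 0.0f) return false;                      // intersection behind the origin
    float w = 1.0f - u - v;                          // barycentric weight for p0
    // Use the barycentric weights to interpolate the texture coordinates.
    outTexCoord->x = w*t0.x + u*t1.x + v*t2.x;
    outTexCoord->y = w*t0.y + u*t1.y + v*t2.y;
    return true;
}
```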
If I misunderstood what you wanted, please edit your question with additional information.
Another variant, specific to a height map:
Assume the assumptions are changed like this:
The world has ground tiles over x and y
The ground tiles have height values in their corners
For a point within the tile, the height value is interpolated somehow, like by bilinear interpolation.
The texture is interpolated in the same way, again with given texture coordinates for the corners
A feasible algorithm for that (approximate):
Again, compute origin and direction.
Wlog, we assume that the direction has a higher change in x-direction. If not, exchange x and y in the algorithm.
Trace the ray with a given step length for x, that is, in each step the x-coordinate changes by that step length. (Take the direction, multiply it by the step size divided by its x value, and add that scaled direction to the current position, starting at the origin.)
For your current coordinate, check whether its z value is below the current height (aka has just collided with the ground)
If so, either finish or decrease the step size and do a finer search in that vicinity, going backwards until you are above the height again, then maybe go forwards in even finer steps again, et cetera. The result is the current x and y coordinates
Compute the relative position of your x and y coordinates within the current tile. Use that for weights for the corner texture coordinates.
This algorithm can theoretically jump over very thin tops. Choose a small enough step size to counter that. I cannot give an exact algorithm without knowing what type of interpolation the height map uses. It might not be the worst idea to create triangles anyway, maybe out of bilinearly interpolated coordinates? In any case, the algorithm is good at finding the tile in which the ray collides.
Another variant would be to trace the ray over the points at which its x-y coordinates cross the tile grid and then check whether the z coordinate went below the height map. Then we know that it collides in this tile. This could produce a false negative if the height can be bigger inside the tile than at its edges, as certain forms of interpolation can produce, especially those that consider the neighbour tiles. It works just fine with bilinear interpolation, though.
With bilinear interpolation, the exact intersection can be found like this: take the two (x,y) coordinates at which the grid is crossed by the ray. Compute the height at those to retrieve two (x,y,z) coordinates. Create a line out of them. Compute the intersection of that line with the ray. That intersection is the intersection with the tile's height map.
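A minimal sketch of the coarse ray march described above, assuming the height lookup is supplied by the caller (heightAt is a hypothetical callback, not part of any library) and that the ray eventually dips below the surface; the refinement step is only indicated by a comment:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Marches in steps of 'stepX' along x until the ray dips below the terrain.
// 'heightAt(x, y)' is assumed to return the interpolated terrain height.
Vec3 traceHeightmap(Vec3 origin, Vec3 dir, float stepX, int maxSteps,
                    float (*heightAt)(float x, float y))
{
    // Scale the direction so each step advances the x coordinate by stepX.
    float s = stepX / std::fabs(dir.x);
    Vec3 step = { dir.x * s, dir.y * s, dir.z * s };
    Vec3 p = origin;
    for (int i = 0; i < maxSteps; ++i) {
        Vec3 next = { p.x + step.x, p.y + step.y, p.z + step.z };
        if (next.z < heightAt(next.x, next.y)) {
            // Crossed below the surface between p and next; refine here,
            // e.g. by bisection or by the line/ray intersection mentioned
            // above, then return the refined position instead.
            return next;
        }
        p = next;
    }
    return p; // no hit within maxSteps
}
```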
The simplest way is to render the mesh as a pre-pass with the uvs as the colour. No screen-to-world needed. The uv is the value at the mouse position. Just be careful though with mips/filtering etc.

Keeping OpenGL object a fixed size on screen

I'm trying to keep a simple cube a fixed size on screen no matter how far it translates into the scene (slides along the z axis).
I know that using an orthographic projection I could draw an object at a fixed size, but I'm not interested in just having it look right. I need to read out the x and z coordinates (in world space) at any given time.
So the further along the -z axis the cube translates, the larger its x and z values must get in order for the cube to remain a defined pixel size on screen (let's say the cube should be 50x50x50 pixels).
I'm not sure how to begin tackling this, any suggestions?
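One way to sketch the relationship described in the question, assuming a standard symmetric perspective projection; the field-of-view and viewport parameters are illustrative names, not anything from the question's code:

```cpp
#include <cmath>

// Returns the world-space edge length the cube needs so that it covers
// roughly desiredPixels pixels vertically on screen at the given distance
// from the camera along the view direction.
float worldSizeForPixelSize(float desiredPixels, float distance,
                            float fovYRadians, float viewportHeightPx)
{
    // The view frustum is 2 * distance * tan(fovY/2) world units tall
    // at that distance, and it maps onto viewportHeightPx pixels.
    float frustumHeight = 2.0f * distance * std::tan(fovYRadians * 0.5f);
    return desiredPixels * frustumHeight / viewportHeightPx;
}
```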

OpenGL 2D transformations without keeping aspect

I need to have a 2D layer in my OpenGL application. I first implemented it using a typical ortho projection like this:
Mat4 ortho = Glm.ortho(0, viewWidth, 0, viewHeight);
The 2D layer worked fine except for the fact that when running at different screen sizes the 2D shapes are scaled relative to the new aspect. That is not what I want (the opposite of what people usually need). I need the 2D shapes to get stretched or squeezed according to the new screen size.
I tried not using the ortho matrix but just an identity. That works, but in such a case I have to use numbers in the range 0-1 to manipulate the objects in the visible frustum area. And I need to use numbers in regular (not normalized) ranges. So it is sort of forcing me back to the ortho projection, which is problematic because of what I already said.
So the question is: how do I transform 2D objects without perspective while staying in the world coordinate system?
UPDATE:
The best example is 2D layers in Adobe After Effects. If one changes the composition dimensions, 2D layers don't get scaled according to the new dimensions. That is what I am after.
It's tricky to know how to answer this, because to some degree your requirements are mutually exclusive. You don't want normalised coordinates, you want to use screen coordinates. But by definition, screen coordinates are defined in pixels, and pixels are usually square... So I think you need some form of normalised coordinates, albeit maybe uniformly scaled.
Perhaps what you want is to fix the ratio for width and height in your ortho. That would allow you to address the screen in some kind of pseudo-pixel unit, where one axis is "real" pixels, but the other can be stretched. So instead of height, pass 3/4 of the width for a 4:3 display, or 9/16ths on a 16:9, etc. This will be in units of pixels if the display is the "right" dimension, but will stretch in one dimension only if it's not.
You may need to switch which dimension is "real" pixels depending on the ratio being less or greater than your "optimal" ratio, but it's tricky to know what you're really shooting for here.
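A minimal sketch of that pseudo-pixel ortho, assuming the legacy fixed-function matrix stack; designRatio is an illustrative parameter (e.g. 4.0f/3.0f for the ratio you authored the layer for):

```cpp
#include <GL/gl.h>

void setupFixedRatioOrtho(int windowWidth, int windowHeight, float designRatio)
{
    // Width stays in "real" pixel units; height is derived from the design
    // ratio, so the vertical axis stretches or squeezes with the window.
    float right = (float)windowWidth;
    float top   = (float)windowWidth / designRatio;   // e.g. 3/4 of the width for 4:3

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, right, 0.0, top, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}
```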

OpenGL: Size of a 3D bounding box on screen

I need a simple and fast way to find out how big a 3D bounding box appears on screen (for LOD calculation) by using OpenGL Modelview and Projection matrices and the OpenGL Viewport dimensions.
My first intention is to project all 8 box corners on screen by using gluProject() and calculate the area of the convex hull afterwards. This solution works only with bounding boxes that are fully within the view frustum.
But how can I get the covered area on screen for boxes that are not fully within the viewing volume? Imagine a box where 7 corners are behind the near plane and only one corner is in front of the near plane and thus within the view frustum.
I have found another very similar question Screen Projection and Culling united but it does not cover my problem.
What about using queries and getting the number of samples that pass rendering?
http://www.opengl.org/wiki/Query_Object and see GL_SAMPLES_PASSED,
That way you could measure how many fragments are rendered and compare it for proper LOD selection.
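A minimal sketch of such an occlusion-query measurement, assuming an OpenGL 1.5+ context with the query entry points loaded (GLEW is used here as an example loader) and a hypothetical drawBoundingBox() helper supplied by the caller; in practice you would render the box without color/depth writes and read the result a frame later to avoid stalling:

```cpp
#include <GL/glew.h>

GLuint countCoveredSamples(void (*drawBoundingBox)())
{
    GLuint query = 0;
    glGenQueries(1, &query);

    glBeginQuery(GL_SAMPLES_PASSED, query);
    drawBoundingBox();                       // issue the box geometry
    glEndQuery(GL_SAMPLES_PASSED);

    // Note: reading the result immediately stalls the pipeline.
    GLuint samples = 0;
    glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samples);
    glDeleteQueries(1, &query);
    return samples;   // number of samples (roughly pixels) the box covered
}
```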
Why not just manually multiply the world-view-projection matrix with the vertex positions? This will give you the vertices in "normalized device coordinates", where -1 is the bottom left of the screen and +1 is the top right.
The only thing is, if the projection is perspective, you have to divide your vertices by their 4th component, i.e. if the final vertex is (x,y,z,w) you would divide by w.
Take for example a position vector
v = {x, 0, -z, 1}
Given a vertical viewing angle 'a' and an aspect ratio 'r', the position of x' in normalized device coordinates (range -1 to 1) is this (this formula is taken directly out of a graphics programming book):
x' = x * cot(a/2) / ( r * z )
So a perspective projection matrix for the given parameters will be as follows (shown in row-major format):
cot(a/2) / r 0 0 0
0 cot(a/2) 0 0
0 0 z1 -1
0 0 z2 0
When you multiply your vector by the projection matrix (assuming the world and view matrices are identity in this example) you get the following (I'm only computing the new "x" and "w" values because only they matter in this example).
v' = { x * cot(a/2) / r, newY, newZ, z }
So finally when we divide the new vector by its fourth component we get
v' = { x * cot(a/2) / (r*z), newY/z, newZ/z, 1 }
So v'.x is now the screen-space (normalized device) coordinate of v.x. This is exactly what the graphics pipeline does to figure out where your vertex is on screen.
I've used this basic method before to figure out the size of geometry on screen. The nice part about it is that the math works regardless of whether the projection is perspective or orthographic, as long as you divide by the 4th component of the vector (for orthographic projections, the 4th component will be 1).
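A minimal sketch of that projection, assuming a column-major 4x4 matrix as OpenGL stores it (the math is the same regardless of the row/column convention used above) and mvp = projection * view * model. It estimates the on-screen footprint as the 2D bounding rectangle of the projected corners rather than the exact convex hull, and it does not handle corners behind the near plane, which would need clipping first as the question points out:

```cpp
#include <algorithm>
#include <cfloat>

struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[16]; };   // column-major, like OpenGL

static Vec4 mul(const Mat4& M, const Vec4& v)
{
    return {
        M.m[0]*v.x + M.m[4]*v.y + M.m[8]*v.z  + M.m[12]*v.w,
        M.m[1]*v.x + M.m[5]*v.y + M.m[9]*v.z  + M.m[13]*v.w,
        M.m[2]*v.x + M.m[6]*v.y + M.m[10]*v.z + M.m[14]*v.w,
        M.m[3]*v.x + M.m[7]*v.y + M.m[11]*v.z + M.m[15]*v.w
    };
}

// Approximate covered screen area in pixels for the 8 world-space corners.
float screenAreaOfBox(const Mat4& mvp, const Vec4 corners[8],
                      float viewportW, float viewportH)
{
    float minX = FLT_MAX, minY = FLT_MAX, maxX = -FLT_MAX, maxY = -FLT_MAX;
    for (int i = 0; i < 8; ++i) {
        Vec4 c = mul(mvp, corners[i]);
        float ndcX = c.x / c.w;            // perspective divide (w == 1 for ortho)
        float ndcY = c.y / c.w;
        minX = std::min(minX, ndcX); maxX = std::max(maxX, ndcX);
        minY = std::min(minY, ndcY); maxY = std::max(maxY, ndcY);
    }
    // Clamp to the visible NDC range and convert the extents to pixels.
    minX = std::max(minX, -1.0f); maxX = std::min(maxX, 1.0f);
    minY = std::max(minY, -1.0f); maxY = std::min(maxY, 1.0f);
    if (maxX <= minX || maxY <= minY) return 0.0f;
    return (maxX - minX) * 0.5f * viewportW *
           (maxY - minY) * 0.5f * viewportH;
}
```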

OpenGL Orthographic View App Windowed-to-Fullscreen

I'm working on a tile-based 2D OpenGL game (top down, 2D Zelda style), and I'm using an orthographic projection. I'd like to make this game both windowed and fullscreen compatible.
Am I better off creating scaling 2D drawing functions so that tile sizes can be scaled up when they're drawn in fullscreen mode, or should I just design the game fully around windowed mode and then scale up the ENTIRE drawing area whenever a player is in fullscreen mode?
In the end, I'm hoping to maintain the best looking texture quality when my tiles are re-scaled.
UPDATE/CLARIFICATION: Here's my concern (which may or may not even be a real problem): If I draw the entire windowed-view-sized canvas area and then scale it up, am I potentially scaling down originally 128x128 px textures to 64x64 px windowed-sized textures and then re-scaling them up again to 80x80 when I scale the entire game area up for a full screen view?
Other background info: I'm making this game using Java/LWJGL and I'm drawing with OpenGL 1.1 functions.
Don't scale tiles because your fullscreen window is larger than the normal one; instead, play with the projection matrix.
The window is getting larger? Enlarge the parameters used for constructing the projection matrix.
If you want to keep the tile size proportional to the window size, don't change the projection matrix depending on the window size!
The key is, if you haven't caught on yet, the projection matrix: thanks to it, vertices are projected onto the viewport; indeed, vertices are "scaled", letting you choose an appropriate unit system without worrying about scaling.
In a few words, the orthographic projection is specified by 4 parameters: left, right, bottom and top. Those parameters are nothing but the XY coordinates that get mapped onto the window boundaries (really the viewport).
Let's do some concrete examples.
Example 1
Window size: 400x300
OpenGL vertex position (2D): 25x25
Orthographic projection matrix:
- Left= 0
- Right= 400
- Bottom= 0
- Top= 300
Model view matrix set to identity
--> Point will be drawn at 25x25 in window coordinates
Example 2
Window size: 800x600 (doubled w.r.t example 1)
OpenGL vertex position (2D): 25x25
Orthographic projection matrix:
- Left= 0
- Right= 400
- Bottom= 0
- Top= 300
Model view matrix set to identity
--> Point will be drawn at 50x50 in window coordinates. It is doubled because the orthographic projection hasn't changed.
Example 3
Let's say I want to project the vertex from example 1 proportionally even if the aspect ratio of the window changes. The previous windows were 4:3. The window in this example is 16:9: it is wider! The trick here is to force the ratio between (right - left) and (top - bottom).
Window size: 1600x900 (a 16:9 window this time)
OpenGL vertex position (2D): 25x25
Orthographic projection matrix:
Left= 0
Right= 533 (300 * 16 / 9)
Bottom= 0
Top= 300
Model view matrix set to identity
--> Point will be drawn at 75x75 in window coordinates, i.e. at the same relative position as 25x25 in example 1 (with the projection matrix of example 1 it would end up at 100x75, i.e. stretched horizontally).
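A minimal sketch of a reshape handler following the rule of example 3, assuming a GLUT window and the legacy matrix stack (matching the question's OpenGL 1.1 setup); the fixed vertical extent of 300 world units is just the value used in the examples:

```cpp
#include <GL/glut.h>

void reshape(int width, int height)
{
    glViewport(0, 0, width, height);

    const double top   = 300.0;                           // fixed vertical extent
    const double right = top * (double)width / height;    // e.g. ~533 for 16:9

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, right, 0.0, top, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}

// Registered with: glutReshapeFunc(reshape);
```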