About OpenGL texture coordinates

I know that I must call one of the following before each call to glVertex:
glTexCoord2f(0, 0);
glTexCoord2f(0, 1);
glTexCoord2f(1, 1);
glTexCoord2f(1, 0);
But I have no idea what they mean. I do know that if I multiply (or is it divide?) the right-hand values (or is it all the ones?) by two, my texture expands, and if I do the opposite, it repeats twice. I managed to code a texture atlas by applying operations until it worked, but I have no proper idea of what is going on. Why does dividing these coordinates affect the image, and why does reversing them mirror it? How do texture coordinates work?

Texture coordinates specify the point in the texture image that will correspond to the vertex you are specifying them for. Think of a rectangular rubber sheet with your texture image printed on it, where the length of each side is normalized to the range 0-1. Now let's say you wanted to draw a triangle using that texture. You'd take 3 pins and place them in the rubber sheet at each of your desired texture coordinates (say [0, 0], [1, 0] and [1, 1]), then move those pins (without taking them out) to your desired vertex coordinates (say [0, 0], [0.5, 0] and [1, 1]), so that the rubber sheet is stretched out and the image is distorted. That's basically how texture coordinates work.
If you use texture coordinates greater than 1 and your texture is set to repeat, then it's as if the rubber sheet was infinite in size and the texture was tiled across it. Therefore if your texture coordinates for two vertices were 0, 0 and 4, 0, then the image would have to be repeated 4 times between those vertices.

OpenGL uses inverse texturing. It takes coordinates from world space (X, Y, Z) to texture space (X, Y) to discrete space (U, V), where the discrete space is in the [0, 1] domain.
Take a polygon, think of it as a sheet of paper. With this:
glTexCoord2f(0, 0);
glTexCoord2f(0, 1);
glTexCoord2f(1, 1);
glTexCoord2f(1, 0);
You tell OpenGL to draw on the whole sheet of paper. When you change the coordinates, the mapped region changes accordingly. That is why, for example, when you halve the coordinates only half of the sheet is mapped onto the polygon (so the image appears stretched), and when you double them the sheet is mapped twice (so, with repeat wrapping, the image tiles).

Chapter 9 of the Red Book explains this in detail and is available for free online.
http://www.glprogramming.com/red/chapter09.html
Texture coordinates map positions in the texture to the 0-1 range along its width and height. This 0-1 space is then stretched like a rubber sheet over the triangles. It is best explained with pictures, and the Red Book does exactly that.

For 2D image textures, 0,0 in texture coordinates corresponds to the bottom left corner of the image, and 1,1 in texture coordinates corresponds to the top right corner of the image. Note that "bottom left corner of the image" is not at the center of the bottom left pixel, but at the edge of the pixel.
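Because of that corner convention, sampling exactly at the center of a given texel uses the formula (i + 0.5) / size. A small sketch (the helper name is mine):

```c
/* UV coordinate of the CENTER of texel i in a texture n texels across.
   Texel 0's center is half a texel in from the edge, not at 0.0. */
double texel_center(int i, int n)
{
    return (i + 0.5) / n;
}
```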
Also interesting when uploading images:
8.5.3 Texture Image Structure
The texture image itself (referred to by data) is a sequence of groups of values. The first group is the lower left back corner of the texture image. Subsequent groups fill out rows of width width from left to right; height rows are stacked from bottom to top forming a single two-dimensional image slice; and depth slices are stacked from back to front.
Note that most image formats have the data start at the top, not at the bottom row.
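A common fix is to flip the rows in memory before handing the data to glTexImage2D. A minimal sketch, assuming tightly packed pixels (the function name is mine):

```c
#include <stdlib.h>
#include <string.h>

/* Flip an image's rows in place so a top-row-first file matches
   OpenGL's bottom-row-first layout. rowbytes = width * bytes per pixel. */
void flip_rows(unsigned char *pixels, int height, size_t rowbytes)
{
    unsigned char *tmp = malloc(rowbytes);
    for (int y = 0; y < height / 2; ++y) {
        unsigned char *a = pixels + (size_t)y * rowbytes;
        unsigned char *b = pixels + (size_t)(height - 1 - y) * rowbytes;
        memcpy(tmp, a, rowbytes);
        memcpy(a, b, rowbytes);
        memcpy(b, tmp, rowbytes);
    }
    free(tmp);
}
```

(The alternative is to leave the data as-is and flip the T coordinate instead.)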

Related


Modifying a texture on a mesh at given world coordinate

I'm making an editor in which I want to build a terrain map. I want to use the mouse to increase/decrease terrain altitude to create mountains and lakes.
Technically I have a heightmap that I want to modify at a certain texcoord that I pick out with my mouse. To do this I first go from screen coordinates to world position, which I have done. The next step, going from world position to the right texture coordinate, puzzles me though. How do I do that?
If you are using a simple heightmap as a displacement map in, let's say, the y direction, the base mesh lies in the xz plane (y = 0).
You can discard the y coordinate from the world coordinate you calculated, and you get the point on the base mesh. From there you can map it to texture space the same way you map your texture.
I would not implement it that way.
I would render the scene to a framebuffer and, instead of rendering a texture onto the mesh, color-code the texture coordinates onto the mesh.
If I click somewhere in screen space, I can simply read the pixel value from the framebuffer and get the texture coordinate directly.
Rendering to the framebuffer should be very inexpensive anyway.
Assuming your terrain is a simple rectangle, you first calculate the vector between the mouse world position and the origin of your terrain (the vertex of your terrain quad that the top left corner of your height map is mapped to). E.g. mouse (50, 25) - origin (-100, -100) = (150, 125).
Now divide the x and y coordinates by the world space width and height of your terrain quad.
150 / 200 = 0.75 and 125 / 200 = 0.625. This gives you the texture coordinates, if you need them as pixel coordinates instead simply multiply with the size of your texture.
I assume the following:
The world coordinates you computed are those of the mouse pointer within the view frustum. I name them mouseCoord
We also have the camera coordinates, camCoord
The world consists of triangles
Each triangle vertex has texture coordinates, which are interpolated by barycentric coordinates
If so, the solution goes like this:
use camCoord as origin. Compute the direction of a ray as mouseCoord - camCoord.
Compute the point of intersection with a triangle. The naive variant is to check every triangle for intersection; a more sophisticated one would first rule out several triangles by some other algorithm, like partitioning the world into cubes, tracing the ray along the cubes and only looking at the triangles that overlap each cube. Intersection with a triangle can be computed like on this website: http://www.lighthouse3d.com/tutorials/maths/ray-triangle-intersection/
Compute the intersection point's barycentric coordinates with respect to that triangle, like that: https://www.scratchapixel.com/lessons/3d-basic-rendering/ray-tracing-rendering-a-triangle/barycentric-coordinates
Use the barycentric coordinates as weights for the texture coordinates of the corresponding triangle points. The result are the texture coordinates of the intersection point, aka what you want.
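The last step is a plain weighted sum; a sketch, assuming the weights already sum to 1 (the names are mine):

```c
/* Interpolate per-vertex texture coordinates at a point in a triangle,
   given the point's barycentric weights (w0 + w1 + w2 == 1). */
void bary_interp_uv(const double uv0[2], const double uv1[2],
                    const double uv2[2],
                    double w0, double w1, double w2, double out[2])
{
    out[0] = w0 * uv0[0] + w1 * uv1[0] + w2 * uv2[0];
    out[1] = w0 * uv0[1] + w1 * uv1[1] + w2 * uv2[1];
}
```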
If I misunderstood what you wanted, please edit your question with additional information.
Another variant specific for a height map:
Assumed that the assumptions are changed like that:
The world has ground tiles over x and y
The ground tiles have height values in their corners
For a point within the tile, the height value is interpolated somehow, like by bilinear interpolation.
The texture is interpolated in the same way, again with given texture coordinates for the corners
A feasible algorithm for that (approximative):
Again, compute origin and direction.
Without loss of generality, we assume that the direction has a higher change in the x direction. If not, exchange x and y in the algorithm.
Trace the ray with a given step length in x, that is, in each step the x coordinate changes by that step length. (Take the direction, multiply it by the step size divided by its x value, and add that scaled direction to the current position, starting at the origin.)
For your current coordinate, check whether its z value is below the current height (i.e. the ray has just collided with the ground).
If so, either finish, or decrease the step size and do a finer search in that vicinity, going backwards until you are above the height again, then going forwards in even finer steps, et cetera. The result is the current x and y coordinates.
Compute the relative position of your x and y coordinates within the current tile. Use that for weights for the corner texture coordinates.
This algorithm can theoretically jump over very thin peaks; choose a small enough step size to counter that. I cannot give an exact algorithm without knowing what type of interpolation the height map uses. It might not be the worst idea to create triangles anyway, maybe out of bilinearly interpolated coordinates? In any case, the algorithm is good for finding the tile in which the ray collides.
Another variant would be to trace the ray over the points at which its x-y coordinates cross the tile grid and then check whether the z coordinate went below the height map; then we know the ray collides in this tile. This can produce a false negative if the height inside a tile can be greater than at its edges, as certain forms of interpolation (especially those that consider neighbouring tiles) can produce. It works just fine with bilinear interpolation, though.
With bilinear interpolation, the exact intersection can be found like this: take the two (x, y) coordinates at which the grid is crossed by the ray, compute the heights there to get two (x, y, z) points, create a line through them, and intersect that line with the ray. That intersection is the intersection with the tile's height map.
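For reference, the bilinear interpolation itself, with (fx, fy) the relative position inside the tile (a sketch; the names are mine):

```c
/* Bilinearly interpolate a tile's four corner heights.
   h00, h10 are the bottom corners, h01, h11 the top corners;
   fx, fy in [0, 1] give the relative position inside the tile. */
double bilinear_height(double h00, double h10, double h01, double h11,
                       double fx, double fy)
{
    double bottom = h00 + (h10 - h00) * fx;
    double top    = h01 + (h11 - h01) * fx;
    return bottom + (top - bottom) * fy;
}
```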
The simplest way is to render the mesh in a pre-pass with the UVs as the colour. No screen-to-world conversion is needed: the UV is the value at the mouse position. Just be careful with mips/filtering etc.

Antialiased GLSL impostors

If you draw a sphere using an impostor based ray-tracing approach as described for example here
http://www.arcsynthesis.org/gltut/Illumination/Tutorial%2013.html
you typically draw a quad and then use 'discard' to skip pixels whose distance from the quad center is larger than the sphere radius.
When you turn on anti-aliasing, GLSL will anti-alias the border of the primitive you draw - in this case the quad - but not the border between the drawn and discarded pixels.
I have attached two screenshots displaying the sphere and a blow-up of its border. Except for the topmost pixels, which lie on the quad border, the sphere border clearly has not been anti-aliased.
Is there any trick I can use to make the impostor spheres have a nice anti-aliased border?
Best regards,
Mads
Instead of just discarding the pixel, set your sphere to have inner and outer radius.
Everything inside the inner radius is fully opaque, everything outside the outer radius is discarded, and anything in between is linearly interpolated between 0 and 1 alpha values.
float alpha = (position - inner) / (outer - inner);
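The same falloff in C form, with clamping added and the ramp flipped so the inside is opaque (a sketch; the names are mine):

```c
/* Edge alpha for an impostor: 1 inside `inner`, 0 outside `outer`,
   a linear ramp in between. `dist` is the distance from the quad center. */
double edge_alpha(double dist, double inner, double outer)
{
    double t = (dist - inner) / (outer - inner);
    if (t < 0.0) t = 0.0;
    if (t > 1.0) t = 1.0;
    return 1.0 - t;
}
```

Remember to render these spheres with blending enabled and, ideally, back-to-front, since the rim is now translucent.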
A kneejerk reaction would be to multisample it yourself: render to a texture that is e.g. four times as large as your actual output, then generate mipmaps and render from that texture back onto your screen.
Alternatively do that directly in your shader and let OpenGL continue worrying about geometry edges: sample four rays per pixel and average them.

OpenGL Y texture coordinates behaving oddly

So basically I am making a 2D game with OpenGL/C++. I have a quad with a texture mapped onto it, and because I can't use non-power-of-two images (or at least I shouldn't), I have an image inside a power-of-two image and I wish to remove the excess image with texture mapping.
GLfloat quadTexcoords[] = {
    0.0,     0.0,
    0.78125, 0.0,
    0.78125, 0.8789,
    0.0,     0.8789
};
glGenBuffers(1, &VBO_texcoords);
glBindBuffer(GL_ARRAY_BUFFER, VBO_texcoords);
glBufferData(GL_ARRAY_BUFFER, sizeof(quadTexcoords), quadTexcoords, GL_DYNAMIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
This is my texture code, nothing special I know. The X coordinate (0.78125) works fine and removes the excess image on the right side. However the Y coordinate does not work. I have debugged my game and found that the correct coordinate is sent to the shader, but it won't map. It seems to be working in reverse sometimes, but it is very unclear.
If I give Y a coordinate like 20, the texture repeats multiple times but still leaves a little white line at the top of the quad. I haven't got the faintest idea what it could be.
Other details: The image I am trying to map is 800 by 450 and it is wrapped in an image of size 1024 by 512. I scale the quad by the aspect ratio of 800 by 450. I doubt this would make a difference but you never know!
Thanks for your time.
EDIT: here is an example of what's happening.
This is the full image mapped fully (0 to 1 in X and Y). The blue portion is 200 pixels high and the full image is 300 pixels high.
The second image is the image mapped to two thirds of the Y axis (i.e. 0 to 0.6666 in Y). This should remove the white at the top, but that is not what is happening. I don't think the coordinates are back to front, as I got the mapping from several tutorials online.
It seems to be working in reverse sometimes but it is very unclear.
OpenGL assumes the viewport origin in the lower left, and texture coordinates run "along with" the flat in-memory texel order, first in the S and then in the T direction. In essence this means that, with one of the usual mappings, textures have their origin in the lower left, contrary to the upper-left origin found in most image manipulation programs.
So in your case the white margin you see is simply the padding, which you probably applied to the texture image at the bottom instead of at the top, where you should put it. Why can't you use NPO2 textures anyway? They're widely supported.
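Either way, the sub-rectangle arithmetic for an 800x450 image inside a 1024x512 texture is simple. A sketch (the function name and flag are mine); the flag says which end of OpenGL's texture space the image rows landed in:

```c
/* UV sub-rectangle of a w x h image embedded in a pow2_w x pow2_h texture.
   u runs from 0 to *u1; v runs from *v0 to *v1.
   image_at_bottom: nonzero when the image's pixels ended up in the
   BOTTOM rows of OpenGL's texture space (padding above them). */
void subimage_uv(int w, int h, int pow2_w, int pow2_h, int image_at_bottom,
                 double *u1, double *v0, double *v1)
{
    *u1 = (double)w / pow2_w;
    if (image_at_bottom) {
        *v0 = 0.0;
        *v1 = (double)h / pow2_h;
    } else {                     /* image in the top rows, padding below */
        *v0 = 1.0 - (double)h / pow2_h;
        *v1 = 1.0;
    }
}
```

Note that 450/512 is exactly 0.87890625, so the questioner's hand-rounded 0.8789 also shaves off a sliver of the image.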
Not a real solution to your problem, but maybe a way to work around it:
You can scale the image up to 1024x1024 (which deforms the image) and use 0-to-1 texture coordinates. Because the aspect ratio of your quad is 800:450, the image will still be displayed correctly.

OpenGL: 2D Vertex coordinates to 2D viewing coordinates?

I'm implementing a rasterizer for a class project, and currently I'm stuck on how I should convert vertex coordinates to viewing-pane coordinates.
I'm given a list of vertices with 2D coordinates for a triangle, like
0 0 1
2 0 1
0 1 1
and I'm drawing in a viewing pane (using OpenGL and GLUT) of size 400x400 pixels, for example.
My question is how I decide where in the viewing pane to put these vertices, assuming
1) I want the coordinate's to be centered around 0,0 at the center of the screen
2) I want to fill up most of the screen (let's say, for this example, the screen is the maximum x coordinate + 1 lengths wide, etc.)
3) I have any and all of OpenGL's and GLUT's standard library functions at my disposal.
Thanks!
http://www.opengl.org/sdk/docs/man/xhtml/glOrtho.xml
To center around 0, use symmetric left/right and bottom/top. Beware of near/far, which are somewhat arbitrary but often chosen (in examples) as -1..+1; that might be a problem for your triangles at z=1.
If you care about the aspect ratio make sure that right-left and bottom-top are proportional to the window's width/height.
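A sketch of symmetric, aspect-preserving bounds for glOrtho (the names are mine); feed the results straight to glOrtho along with e.g. near = -10, far = 10:

```c
/* Symmetric, (0,0)-centered glOrtho bounds: show at least `extent`
   world units from center to edge on the shorter axis, and widen the
   longer axis to match the window's aspect ratio. */
void ortho_bounds(double extent, int win_w, int win_h,
                  double *left, double *right, double *bottom, double *top)
{
    double aspect = (double)win_w / win_h;
    if (aspect >= 1.0) {                 /* wide window: widen x */
        *bottom = -extent; *top = extent;
        *left = -extent * aspect; *right = extent * aspect;
    } else {                             /* tall window: heighten y */
        *left = -extent; *right = extent;
        *bottom = -extent / aspect; *top = extent / aspect;
    }
}
```

For the triangle above (x up to 2), an extent of 1.5 centered on (0,0) would comfortably contain it.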
You should consider the frustum, which is your view volume, and calculate the coordinates by transforming your objects according to their position; this explains the theory quite thoroughly.
Basically you have to project the objects using a projection matrix calculated from the characteristics of your view:
scale them according to a z (depth) value: you scale both x and y inversely proportionally to z
scale and shift the coordinates so they fit the width of your view
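The two steps above can be sketched as a minimal perspective projection (the focal length and names are mine, not part of the answer):

```c
/* Project a camera-space point to pixel coordinates: scale x and y
   inversely proportionally to depth z, then shift so that camera-space
   (0, 0) lands at the center of the viewport. */
void project_point(double x, double y, double z, double focal,
                   int vp_w, int vp_h, double *px, double *py)
{
    *px = vp_w * 0.5 + focal * x / z;
    *py = vp_h * 0.5 + focal * y / z;
}
```

Doubling z halves the point's offset from the viewport center, which is the "inversely proportional to depth" behaviour described above.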