Simple texture mapping for a generic triangle mesh - OpenGL

Suppose that we have a triangle mesh without information about normals and texture coordinates.
(Basically an OBJ file with only vertices and face elements).
The objective is to show something decent using OpenGL with a program written in C.
Calculating the normal of every triangle is easy...
But what about texture mapping?
Can anyone recommend a simple algorithm/documentation/resource for mapping the normalized UV coordinates of an image onto a generic mesh of triangles?
(For a mesh with a single triangle it is easy, e.g. (0,0), (1,0), (0,1).)
The result doesn't have to be perfect; even professional software can't do this without UV unwrapping and UV seams.

The only algorithm I know is for 2D screen coordinates (screen space):
I already answered a similar question here; focus on the algorithm (i.e., texturePos = (vPos - 0.5) * 2) for converting between texture coordinates and 2D vertex positions.
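For reference, a minimal sketch (my own, with hypothetical helper names; the linked answer may use a different convention) of that kind of affine remapping between the two ranges:

/* Hypothetical helpers illustrating an affine remapping between a [0, 1]
 * range (texture space) and a [-1, 1] range (NDC/screen space). Whether the
 * quoted formula maps texture -> screen or screen -> texture depends on the
 * convention of the linked answer; the math is the same either way. */
static float unit_to_signed(float v)   /* [0, 1] -> [-1, 1] */
{
    return (v - 0.5f) * 2.0f;
}

static float signed_to_unit(float v)   /* [-1, 1] -> [0, 1] */
{
    return v * 0.5f + 0.5f;
}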
EDIT:
Note: the following is only a theory.
There might be a method that works in 3D space. Eventually the transformations lead to the vertices being rendered in 2D screen coordinates:
local space --> world space --> view space --> NDC space --> screen coordinates
Using the general convention above and the 3 matrices (Model, View, Projection),
and since the vertices will end up in 2D space, you could devise some form of algorithm that back-tracks the texture coordinates through the inverse matrices into 3D space and continues from there.
This, by the way, is still not a well-defined algorithm (maybe there is one, and someone will edit this answer and add it here in the future...).
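For what it's worth, here is a minimal sketch (my own suggestion, not a standard algorithm from any of the answers) of one of the simplest automatic mappings, a spherical projection around the mesh centroid. It produces visible seams and stretching, but it gives "something decent" for a mesh without UVs:

#include <math.h>
#include <stddef.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

typedef struct { float x, y, z; } Vec3;
typedef struct { float u, v; } Vec2;

/* Assign a UV to every vertex by projecting it onto a sphere centered at the
 * mesh centroid. Simple, but expect a seam where u wraps from 1 back to 0. */
static void spherical_uvs(const Vec3 *verts, size_t count, Vec2 *uvs)
{
    Vec3 c = {0.0f, 0.0f, 0.0f};
    for (size_t i = 0; i < count; ++i) {
        c.x += verts[i].x; c.y += verts[i].y; c.z += verts[i].z;
    }
    c.x /= (float)count; c.y /= (float)count; c.z /= (float)count;

    for (size_t i = 0; i < count; ++i) {
        float dx = verts[i].x - c.x;
        float dy = verts[i].y - c.y;
        float dz = verts[i].z - c.z;
        float len = sqrtf(dx * dx + dy * dy + dz * dz);
        if (len > 0.0f) { dx /= len; dy /= len; dz /= len; }

        uvs[i].u = 0.5f + atan2f(dz, dx) / (2.0f * (float)M_PI);
        uvs[i].v = 0.5f - asinf(dy) / (float)M_PI;
    }
}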

Related

Why does OpenGL allow/use fractional values as the location of vertices?

As far as I understand, the location of a point/pixel cannot be a fraction, at least on a raster graphics system where the hardware uses pixels to display images.
Then, why and how does OpenGL use fractional values for plotting pixels?
For example, how is it possible: glVertex2f(0.15f, 0.51f); ?
This command does not plot any pixels. It merely defines the location of a point in 3D space (glVertex2f supplies only x and y; z defaults to 0, so the point still has 3 coordinates, while a pixel on the screen only needs 2). This is the starting point for the OpenGL pipeline. The point then goes through a lot of transformations before it ends up on the screen.
Also, the coordinates are unitless. For example, you can say that your viewport is between 0.0f and 1.0f, and then these coordinates make a lot of sense. Basically you have to think of these points in terms of mathematics, not pixels.
I would suggest some reading on how OpenGL transformations work, for example here, here or the tutorial here.
The vectors you pass into OpenGL are not viewport positions but arbitrary numbers in some vector space. Only after a chain of transformations are these numbers mapped to viewport pixel positions. With the old fixed-function pipeline this could be anything that can be represented by a vector–matrix multiplication.
These days, where everything is programmable (shaders), the mapping can very well be any kind of function you can think of. For example, the values you pass into glVertex (an immediate-mode call, but still available to shaders with OpenGL-2.1) may be interpreted as polar coordinates in the vertex shader:
This is a perfectly valid OpenGL-2.1 vertex shader that interprets the vertex position as being in polar coordinates. Note that because triangles and lines have straight edges while polar coordinates are curvilinear, this gives good visual results only for points or highly tessellated primitives.
#version 110
void main() {
    gl_Position =
        gl_ModelViewProjectionMatrix
        * vec4(gl_Vertex.y * vec2(sin(gl_Vertex.x), cos(gl_Vertex.x)), 0.0, 1.0);
}
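For illustration (my own usage sketch, not part of the original answer), feeding that shader through glVertex2f with x as the angle and y as the radius draws a ring of points:

#include <GL/gl.h>

/* Hypothetical usage of the polar-coordinate shader above: gl_Vertex.x is the
 * angle, gl_Vertex.y the radius, so this draws a ring of points of radius 0.5. */
static void draw_polar_ring(void)
{
    glBegin(GL_POINTS);
    for (int i = 0; i < 360; ++i) {
        float angle = (float)i * 3.14159265f / 180.0f;
        glVertex2f(angle, 0.5f);
    }
    glEnd();
}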
As you can see, the values passed to glVertex are actually arbitrary, unitless components of vectors in some vector space. Only by applying some transformation into viewport space do these vectors gain meaning. Hence it makes no sense to impose a particular value range on the values that go into a vertex attribute.
Vertex and pixel are very different things.
It's quite possible to have all your vertices within one pixel (although in this case you probably need help with LODing).
You might want to start here...
http://www.glprogramming.com/blue/ch01.html
Specifically...
Primitives are defined by a group of one or more vertices. A vertex defines a point, an endpoint of a line, or a corner of a polygon where two edges meet. Data (consisting of vertex coordinates, colors, normals, texture coordinates, and edge flags) is associated with a vertex, and each vertex and its associated data are processed independently, in order, and in the same way.
And...
Rasterization produces a series of frame buffer addresses and associated values using a two-dimensional description of a point, line segment, or polygon. Each fragment so produced is fed into the last stage, per-fragment operations, which performs the final operations on the data before it's stored as pixels in the frame buffer.
For your example, before glVertex2f(0.15f, 0.51f) ends up on the screen, many transforms have to be done. To make a complex thing crudely simple: after moving your vertex to view space (applying the camera position and direction), the magic is (1) the projection matrix and (2) the viewport settings.
Internally, OpenGL's "screen coordinates" (normalized device coordinates) live in a cube from (-1, -1, -1) to (1, 1, 1):
http://www.matrix44.net/cms/wp-content/uploads/2011/03/ogl_coord_object_space_cube.png
The projection matrix 'squeezes' the view frustum into this cube (which you do in the vertex shader), assuming a perspective transform; if the projection is orthogonal, the view volume is just a box limited by the near and far values (and, in both cases, scaling factors):
http://www.songho.ca/opengl/files/gl_projectionmatrix01.png
EDIT: Maybe better example here:
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/#The_Projection_matrix
(EDIT: the Z coordinate is used as the depth value.) When fragments are finally transferred to pixels on a texture/framebuffer/the screen, their coordinates are remapped according to the viewport settings:
https://www3.ntu.edu.sg/home/ehchua/programming/opengl/images/GL_2DViewportAspectRatio.png
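To make that last remapping concrete, here is a minimal sketch (my own illustration, following the standard glViewport/glDepthRange convention) of how a normalized device coordinate becomes a window coordinate:

/* Map a normalized-device-coordinate vertex (x, y, z each in [-1, 1]) to
 * window coordinates, following the usual glViewport(vx, vy, w, h) and
 * glDepthRange(n, f) convention. Illustrative sketch, not OpenGL source. */
typedef struct { float x, y, z; } NdcVertex;
typedef struct { float x, y, depth; } WindowCoord;

static WindowCoord ndc_to_window(NdcVertex v,
                                 int vx, int vy, int w, int h,
                                 float n, float f)
{
    WindowCoord out;
    out.x     = (v.x * 0.5f + 0.5f) * (float)w + (float)vx;
    out.y     = (v.y * 0.5f + 0.5f) * (float)h + (float)vy;
    out.depth = (v.z * 0.5f + 0.5f) * (f - n) + n;
    return out;
}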
Hope this helps!

Texture Mapping without OpenGL

So I'm supposed to texture map a specific model I've loaded into a scene (with a Framebuffer and a Planar Pinhole Camera); however, I'm not allowed to use OpenGL and I have no idea how to do it otherwise (we do use glDrawPixels for other functionality, but that's the only function we can use).
Is anyone here able enough to give me a run-through on how to texture map without OpenGL functionality?
I'm supposed to use these slides: https://www.cs.purdue.edu/cgvlab/courses/334/Fall_2014/Lectures/TMapping.pdf
But they make very little sense to me.
What I've gathered so far is the following:
You iterate over the model and assign each triangle "texture coordinates" (and I'm not sure what those are), and then use "model space interpolation" (again, I don't understand what that is) to apply the texture with the right perspective.
TL;DR:
1. What is model space interpolation/how do I do it?
2. What explicitly are texture coordinates?
3. How, on a high level (in layman's terms), do I texture map a model without using OpenGL?
OK, let's start by making sure we're both on the same page about how the color interpolation works. Lines 125 through 143 set up three vectors redABC, greenABC and blueABC that are used to interpolate the colors across the triangle. They work one color component at a time, and each of the three vectors helps interpolate one color component.
By convention, s,t coordinates are in source texture space. As provided in the mesh data, they specify the position within the texture of that particular vertex of the triangle. The crucial thing to understand is that s,t coordinates need to be interpolated across the triangle just like colors.
So, what you want to do is set up two more ABC vectors: sABC and tABC, exactly duplicating the logic used to set up redABC, but instead of using the color components of each vertex, you just use the s,t coordinates of each vertex. Then for each pixel, instead of computing ssiRed etc. as unsigned int values, you compute ssis and ssit as floats; they should be in the range 0.0f through 1.0f, assuming your source s,t values are well behaved.
Now that you have an interpolated s,t coordinate, multiply ssis by the texel width of the texture, and ssit by the texel height, and use those coordinates to fetch the texel. Then just put that on the screen.
Since you are not using OpenGL I assume you wrote your own software renderer to render that teapot?
A texture is simply an image. A texture coordinate is a 2D position in the texture. So (0,0) is bottom-left and (1,1) is top-right. For every vertex of your 3D model you should store a 2D position (u,v) in the texture. That means that at that vertex, you should use the colour the texture has at that point.
To know the UV texture coordinate of a pixel in between vertices you need to interpolate the texture coordinates of the vertices around it. Then you can use that UV to look up the colour in the texture.
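To make that concrete, here is a minimal sketch (my own, independent of the other answer's redABC/ssiRed variables) of interpolating per-vertex s,t coordinates with barycentric weights and fetching a texel:

#include <stddef.h>

typedef struct { float r, g, b; } Color;

typedef struct {
    const Color *texels;   /* row-major texel data */
    int width, height;
} Texture;

/* Interpolate the per-vertex (s, t) coordinates with the barycentric weights
 * (w0, w1, w2) of the current pixel, then do a nearest-texel lookup.
 * Sketch only: no perspective correction, wrapping, or filtering. */
static Color sample_triangle(const Texture *tex,
                             float s0, float t0,
                             float s1, float t1,
                             float s2, float t2,
                             float w0, float w1, float w2)
{
    float s = w0 * s0 + w1 * s1 + w2 * s2;   /* interpolated like a color */
    float t = w0 * t0 + w1 * t1 + w2 * t2;

    int x = (int)(s * (float)(tex->width  - 1) + 0.5f);
    int y = (int)(t * (float)(tex->height - 1) + 0.5f);
    if (x < 0) x = 0;
    if (x >= tex->width)  x = tex->width  - 1;
    if (y < 0) y = 0;
    if (y >= tex->height) y = tex->height - 1;

    return tex->texels[(size_t)y * (size_t)tex->width + (size_t)x];
}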

Math Behind Flash Vector Graphics?

I've been searching for vector graphics and flash for quite some time but I haven't really found what I was looking for. Can anyone tell me exactly what area of mathematics is required for building vector images in 3D space? Is this just vector math? I saw some C++ libraries for it but I wasn't sure if it was the sort of vectors meant to for smaller file size like flash images are. Thanks in advance.
If you want to do something from scratch (there are plenty of open-source libraries out there if you don't), keep in mind that "vector graphics" (a different idea from a 3D space vector) are typically based on parametric curves such as Bezier curves, which in their cubic form are 3rd-degree polynomials in x, y, and/or z, parameterized by a value t that goes from 0 to 1. Projecting the texture-map image you create with those curves (i.e., the so-called "vector graphics" image) onto a triangle polygon via UV coordinates then involves some interpolation, which is fairly straightforward linear algebra: you use the barycentric coordinates of the 3D point on the surface of the triangle to calculate the UV point to look up in the texture.
So essentially the steps are:
Create the parametric-curve-based image (i.e., the "vector graphic") and make a texture map out of it
That texture map will have uv coordinates
When you rasterize the 3D triangle polygon, you will get a barycentric coordinate on the surface of the triangle from the actual 3D points of the triangle polygon. Those points of the polygon should also have UV coordinates assigned to them.
Use the barycentric coordinates to calculate the uv coordinate on the texture map.
When you get that color from the texture map, shade the triangle (i.e., calculate lighting, etc., if that's what you're doing, or just store that color for the pixel if there is no lighting).
Please note I haven't gotten into antialiasing; that's a completely different beast. The best thing, if you don't know what you're doing there, is to simply brute-force antialias through supersampling (i.e., render a really big image and then average pixels to shrink it back to the desired size).
If you've taken multivariable calculus, the concepts behind parametric curves and surfaces should be familiar, and a basic understanding of linear algebra would be necessary in order to work with barycentric coordinates and linear interpolation from 3D vectors.
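To ground the parametric-curve part, here is a minimal sketch (my own example, not from the answer above) of evaluating a cubic Bezier curve at a parameter t in [0, 1]:

typedef struct { float x, y; } Point2;

/* Evaluate a cubic Bezier curve with control points p0..p3 at parameter t
 * in [0, 1]. Each coordinate is a 3rd-degree polynomial in t. */
static Point2 cubic_bezier(Point2 p0, Point2 p1, Point2 p2, Point2 p3, float t)
{
    float u  = 1.0f - t;
    float b0 = u * u * u;
    float b1 = 3.0f * u * u * t;
    float b2 = 3.0f * u * t * t;
    float b3 = t * t * t;

    Point2 p;
    p.x = b0 * p0.x + b1 * p1.x + b2 * p2.x + b3 * p3.x;
    p.y = b0 * p0.y + b1 * p1.y + b2 * p2.y + b3 * p3.y;
    return p;
}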

calculating normals for quad mesh

I have a struct QUAD that stores 4 pointers to 4 VECTOR3D (which contains 3 floats) so that I can draw the quad mesh.
From what I understand, whenever I draw a mesh I need normals as well to properly light/shade it, and that's relatively easy when the mesh lies on a plane, using one normal per face.
When I have a 2 by 2 quad mesh lying on the XZ plane and try to raise its centre (0,0,0) by a certain amount, say to (0, 4, 0), it starts to form a real 3D shape, and then I need to calculate the normals again. I'm having a hard time understanding how and what needs to be calculated for the normals. As expected, the 3D shape is shaded as if it were still a flat mesh, so it does not represent the real shape. One explanation says I need to calculate normals per vertex instead of per face.
Does that mean I need to calculate normals for all corners of the mesh? Once I have the normals, what would I do with them? I was still using the old glBegin/glEnd methods, but now I feel like I need to use the DrawArray method. I'm deeply confused and I'm pretty sure I don't make much sense, but I'd much appreciate your help.
If you need a flat-looking surface then your normals will be the normals of each quad's plane. If you need a "soft-looking" surface you need to blend (read this and watch this cool simple video) the normals of the faces that meet at each vertex - that will add a sort of gradient.
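A minimal sketch (my own, assuming the quads store four vertex indices; adapt it to your actual QUAD/VECTOR3D layout) of that blending: compute a face normal per quad, accumulate it on the quad's four vertices, then normalize the sums:

#include <math.h>
#include <stddef.h>

typedef struct { float x, y, z; } VECTOR3D;

static VECTOR3D cross(VECTOR3D a, VECTOR3D b)
{
    VECTOR3D r = { a.y * b.z - a.z * b.y,
                   a.z * b.x - a.x * b.z,
                   a.x * b.y - a.y * b.x };
    return r;
}

static VECTOR3D sub(VECTOR3D a, VECTOR3D b)
{
    VECTOR3D r = { a.x - b.x, a.y - b.y, a.z - b.z };
    return r;
}

/* quads[q][0..3] are indices into verts; normals[] receives one normal per
 * vertex. A per-vertex normal is the normalized sum of the face normals of
 * every quad touching that vertex, which gives the "soft" shading above. */
static void per_vertex_normals(const VECTOR3D *verts, size_t vert_count,
                               const unsigned (*quads)[4], size_t quad_count,
                               VECTOR3D *normals)
{
    for (size_t i = 0; i < vert_count; ++i) {
        normals[i].x = normals[i].y = normals[i].z = 0.0f;
    }
    for (size_t q = 0; q < quad_count; ++q) {
        VECTOR3D e1 = sub(verts[quads[q][1]], verts[quads[q][0]]);
        VECTOR3D e2 = sub(verts[quads[q][3]], verts[quads[q][0]]);
        VECTOR3D n  = cross(e1, e2);          /* face normal (unnormalized) */
        for (int k = 0; k < 4; ++k) {
            normals[quads[q][k]].x += n.x;
            normals[quads[q][k]].y += n.y;
            normals[quads[q][k]].z += n.z;
        }
    }
    for (size_t i = 0; i < vert_count; ++i) {
        float len = sqrtf(normals[i].x * normals[i].x +
                          normals[i].y * normals[i].y +
                          normals[i].z * normals[i].z);
        if (len > 0.0f) {
            normals[i].x /= len; normals[i].y /= len; normals[i].z /= len;
        }
    }
}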

Mapping from 2D projection back to 3D point cloud

I have a 3D model consisting of point vertices (XYZ) and, possibly, triangular faces.
Using OpenGL or a camera/view/projection matrix setup I can project the 3D model onto a 2D plane, i.e. a view window or an image with m*n resolution.
The question is how I can determine the correspondence between a pixel of the 2D projection and its corresponding vertex (or face) in the original 3D model.
Namely,
what is the closest vertex in the 3D model for a given pixel of the 2D projection?
It sounds like picking in OpenGL, or a ray-tracing problem. Is there, however, any easy solution?
With the ray-tracing idea it is really about finding the first vertex/face intersected by the ray from the viewpoint. Can someone show me a tutorial or some examples? I would like an algorithm that does not depend on OpenGL.
Hit testing in OpenGL usually is done without raytracing. Instead, as each primitive is rendered, a plane in the output is used to store the unique ID of the primitive. Hit testing is then as simple as reading the ID plane at the cursor location.
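As an illustration of that idea (my own sketch using legacy OpenGL calls for brevity; draw_primitive is a hypothetical callback), classic color-based picking renders each primitive with a unique ID color and reads back the pixel under the cursor:

#include <GL/gl.h>

/* Sketch of color-based picking: draw_primitive(i) is a hypothetical callback
 * that issues the geometry for primitive i. Lighting, texturing, blending and
 * anti-aliasing must be off so the ID colors reach the buffer unmodified. */
static int pick_primitive(int primitive_count,
                          void (*draw_primitive)(int i),
                          int cursor_x, int cursor_y)
{
    GLint viewport[4];
    GLubyte pixel[3];

    glGetIntegerv(GL_VIEWPORT, viewport);

    glClearColor(1.0f, 1.0f, 1.0f, 1.0f);   /* white = "nothing hit" */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    for (int i = 0; i < primitive_count; ++i) {
        /* Encode the primitive index in the RGB channels (up to ~2^24 IDs). */
        glColor3ub((GLubyte)(i & 0xFF),
                   (GLubyte)((i >> 8) & 0xFF),
                   (GLubyte)((i >> 16) & 0xFF));
        draw_primitive(i);
    }

    /* Window y grows downward in most windowing systems; OpenGL's grows upward. */
    glReadPixels(cursor_x, viewport[3] - cursor_y - 1, 1, 1,
                 GL_RGB, GL_UNSIGNED_BYTE, pixel);

    if (pixel[0] == 255 && pixel[1] == 255 && pixel[2] == 255)
        return -1;                            /* background */
    return pixel[0] | (pixel[1] << 8) | (pixel[2] << 16);
}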
My (possibly naive) thought would be to create an array of the vertices and then sort them by their distance (or squared distance, for speed) from your screen point once projected. The first item in the list will be the closest. It will be O(n log n) for n vertices because of the sort, but no worse.
Edit: Better for speed and memory: simply loop through all vertices and keep track of the vertex whose projection is closest (distance squared) to your viewport pixel. This assumes that you are able to perform the projection yourself, without relying on OpenGL.
For example, in pseudo-code:
function findPointFromViewPortXY( pointOnViewport )
    closestPoint = false
    bestDistance = false
    for (each point in points)
        projectedXY = projectOntoViewport(point)
        distanceSquared = distanceBetween(projectedXY, pointOnViewport)
        if bestDistance==false or distanceSquared<bestDistance
            closestPoint = point
            bestDistance = distanceSquared
    return closestPoint
In addition to Ben Voigt's answer:
If you do a separate pass over pickable objects, then you can set the viewport to contain only a single pixel that you will read.
You can also encode the triangle ID by using a geometry shader (gl_PrimitiveID).