When drawing a rectangle in OpenGL, if you give the corners texture coordinates of (0,0), (1,0), (1,1) and (0,1), you get the standard texture mapping.
However, if you distort it into something that's not rectangular, you get a weird stretching effect, like the following:
I saw from the page below that this can be fixed, but the solution given only works for trapezoids. Also, I need to do this over many such quads.
So the question is: what is the proper and most efficient way to get the right "4D" texture coordinates for drawing stretched quads?
Implementations are allowed to decompose quads into two triangles and if you visualize this as two triangles you can immediately see why it interpolates texture coordinates the way it does. That texture mapping is correct ... for two independent triangles.
That diagonal seam coincides with the edge of two independently interpolated triangles.
Projective texturing can help as you already know, but ultimately the real problem here is simply interpolation across two triangles instead of a single quad. You will find that while modifying the Q coordinate may help with mapping a texture onto your quadrilateral, interpolating other attributes such as colors will still have serious issues.
If you have access to fragment shaders and instanced vertex arrays (probably rules out OpenGL ES), there is a full implementation of quadrilateral vertex attribute interpolation here. (You can modify the shader to work without "instanced arrays", but it will require either 4x as much data in your vertex array or a geometry shader).
Incidentally, texture coordinates in OpenGL are always "4D". It just happens that if you use something like glTexCoord2f(s, t), r is assigned 0.0 and q is assigned 1.0. That behavior applies to all vertex attributes; vertex attributes are all 4D whether you explicitly define all 4 components or not.
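For example, if you do go the projective route mentioned above, a fragment shader can sample with the full 4D coordinate and let the hardware do the per-fragment division by q. A minimal GLSL sketch, with placeholder names (tex, texcoord) that are not from the question:

#version 120
uniform sampler2D tex;
varying vec4 texcoord;   // pass gl_MultiTexCoord0 through unchanged from the vertex shader
void main() {
    // texture2DProj divides texcoord.st by texcoord.q before the lookup
    gl_FragColor = texture2DProj(tex, texcoord);
}

The remaining work is choosing a q per vertex that matches your quad's shape, which is the part the linked page addresses for trapezoids.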
I've created a plane with six vertices per square that form a terrain.
I colour each vertex using the terrain height value in the pixel shader.
I'm looking for a way to colour the pixels between vertices black while keeping everything else the same, to create a grid effect. It's the same effect you get from wireframe mode, except without the diagonal line, and the parts that would be transparent should keep their normal colour.
My terrain, and how it looks in wireframe mode:
How would one go about doing this in a pixel shader, or otherwise?
See "Solid Wireframe" - NVIDIA paper from a few years ago.
The idea is basically this: include a geometry shader that generates barycentric coordinates as a varying for each vertex. In your fragment / pixel shader, check the value of the bary components. If they are below a certain threshold, you color the pixel however you'd like your wireframe to be colored. Otherwise, light it as you normally would.
Given a face with vertices A,B,C, you'd generate barycentric values of:
A: 1,0,0
B: 0,1,0
C: 0,0,1
In your fragment shader, see if any component of the bary for that fragment is less than, say, 0.1. If so, it means that it's close to one of the edges of the face. (Which component is below the threshold will also tell you which edge, if you want to get fancy.)
I'll see if I can find a link and update here in a few.
Note that the paper is also ~10 years old. There are ways to get bary coordinates without the geometry shader these days in some situations, and I'm sure there are other workarounds. (Geometry shaders have their place, but are not the greatest friend of performance.)
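To make the idea concrete, here is a minimal GLSL sketch of the approach; the threshold and the terrainColor uniform are illustrative, and a real version would also forward normals, texture coordinates, etc. through the geometry shader:

// geometry shader: tag each corner of the incoming triangle with a barycentric coordinate
#version 150
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;
out vec3 bary;
void main() {
    const vec3 corners[3] = vec3[3](vec3(1.0, 0.0, 0.0),
                                    vec3(0.0, 1.0, 0.0),
                                    vec3(0.0, 0.0, 1.0));
    for (int i = 0; i < 3; ++i) {
        gl_Position = gl_in[i].gl_Position;
        bary = corners[i];
        EmitVertex();
    }
    EndPrimitive();
}

// fragment shader: darken fragments that lie close to any edge of the triangle
#version 150
in vec3 bary;
out vec4 fragColor;
uniform vec4 terrainColor;   // stand-in for your usual height-based shading
void main() {
    float edgeDistance = min(bary.x, min(bary.y, bary.z));
    fragColor = (edgeDistance < 0.02) ? vec4(0.0, 0.0, 0.0, 1.0) : terrainColor;
}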
Also, while geom shaders come with a perf hit, they're significantly faster than a second pass to draw a wireframe. Drawing in wireframe mode in GL (or DX) carries a significant performance penalty because you're asking the rasterizer to simulate Bresenham's line algorithm. That's not how rasterizers work, and it is freaking slow.
This approach also solves any z-fighting issues that you may encounter with two passes.
If your mesh were a single triangle, you could skip the geometry shader and just pack the needed values into a vertex buffer. But, since vertices are shared between faces in any model other than a single triangle, things get a little complicated.
Or, for fun: do this as a post processing step. Look for high ddx()/ddy() (or dFdx()/dFdy(), depending on your API) values in your fragment shader. That also lets you make some interesting effects.
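If you try that post-processing route, a sketch of the idea in GLSL might look like the fragment shader below; heightTex, the alpha-channel packing and the 0.05 threshold are all assumptions for illustration, not something from your setup:

uniform sampler2D heightTex;   // hypothetical: the rendered scene, with terrain height stored in alpha
varying vec2 uv;
void main() {
    vec4 scene = texture2D(heightTex, uv);
    float h = scene.a;                          // assumed: height packed into the alpha channel
    float edge = abs(dFdx(h)) + abs(dFdy(h));   // screen-space rate of change between neighbouring pixels
    gl_FragColor = (edge > 0.05) ? vec4(0.0, 0.0, 0.0, 1.0) : vec4(scene.rgb, 1.0);
}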
Given that you have a vertex buffer containing all the vertices of your grid, make an index buffer that reuses that vertex buffer, but instead of groups of 3 for triangles, use pairs of 2 for line segments. This will be a Line List and should contain all the pairs that make up the squares of the grid. You can generate this list automatically in your program.
Rough algorithm for rendering:
Render your terrain as normal
Switch your primitive topology to Line List
Assign the new index buffer
Disable depth testing (or add a small height offset to each point in the vertex shader so the grid appears above the terrain; see the sketch below)
Render the Line List
This should produce the effect you are looking for of the terrain drawn and shaded with a square grid on top of it. You will need to put a switch (via a constant buffer) in your pixel shader that tells it when it is rendering the grid so it can draw the grid black instead of using the height values.
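For the vertex-shader offset variant mentioned in the list above, a small sketch (written here in GLSL even though the rest of this answer uses D3D terms; worldViewProj and gridLift are placeholder names) could look like this:

uniform mat4 worldViewProj;
uniform float gridLift;        // small bias, e.g. 0.05, tuned to your terrain scale
attribute vec3 position;
void main() {
    // lift the grid lines slightly above the terrain so the depth test does not hide them
    gl_Position = worldViewProj * vec4(position + vec3(0.0, gridLift, 0.0), 1.0);
}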
I have a texture of the earth which I want to map onto a sphere.
As it is a unit sphere and the model itself has no texture coordinates, the easiest thing I could think of is to just calculate spherical coordinates for each vertex and use them as texture coordinates.
textureCoordinatesVarying = vec2(atan(modelPositionVarying.y, modelPositionVarying.x)/(2*M_PI)+.5, acos(modelPositionVarying.z/sqrt(length(modelPositionVarying.xyz)))/M_PI);
When I do this in the fragment shader, it works fine, as I calculate the texture coordinates from the (interpolated) vertex positions.
But when I do this in the vertex shader, which I also would do if the model itself has texture coordinates, I get the result as shown in the image below. The vertices are shown as points and a texture coordinate (u) lower than 0.5 is red while all others are blue.
So it looks like the texture coordinate (u) of two adjacent red/blue vertices has values of (almost) 1.0 and 0.0. The varying is then smoothly interpolated and therefore yields values somewhere between 0.0 and 1.0. This of course is wrong, because the value should be either 1.0 or 0.0, but nothing in between.
Is there a way to work with spherical coordinates as texture coordinates without getting those effects shown above? (if possible, without changing the model)
This is a common problem. A seam, where you want the texture coordinate to wrap from 1.0 back to 0.0, requires the mesh to handle it explicitly. To do this, the mesh must duplicate every vertex along the seam. One of the duplicates will have a 0.0 texture coordinate and will be connected to the vertices coming from the right (in your example). The other will have a 1.0 texture coordinate and will be connected to the vertices coming from the left (in your example).
This is a mesh problem, and it is best to solve it in the mesh itself. The same position needs two different texture coordinates, so you must duplicate the position in question.
Alternatively, you could have the fragment shader generate the texture coordinate from an interpolated vertex normal. Of course, this is more computationally expensive, as it requires doing a conversion from a direction to a pair of angles (and then to the [0, 1] texture coordinate range).
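A minimal fragment-shader sketch of that alternative, reusing the formula from the question (earthTexture is a placeholder name; the interpolated object-space position works here because the sphere is centred at the origin, and an interpolated normal could be used the same way):

#define M_PI 3.1415926535897932
uniform sampler2D earthTexture;
varying vec3 modelPositionVarying;   // interpolated object-space position (or normal)
void main() {
    vec3 d = normalize(modelPositionVarying);
    // the angles are computed per fragment, so u is never interpolated across the seam
    vec2 uv = vec2(atan(d.y, d.x) / (2.0 * M_PI) + 0.5, acos(d.z) / M_PI);
    gl_FragColor = texture2D(earthTexture, uv);
}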
As far as I understand, the location of a point/pixel cannot be a fraction, at least on a raster graphics system where the hardware uses pixels to display images.
Then, why and how does OpenGL use fractional values for plotting pixels?
For example, how is it possible: glVertex2f(0.15f, 0.51f); ?
This command does not plot any pixels. It merely defines the location of a point in space (glVertex2f supplies only x and y, and z defaults to 0.0; a pixel on the screen, by contrast, is a 2D window position). This is the starting point for the OpenGL pipeline. This point then goes through a lot of transformations before it ends up on the screen.
Also, the coordinates are unitless. For example, you can say that your viewport spans from 0.0f to 1.0f, and then these coordinates make a lot of sense. Basically you have to think of these points in terms of mathematics, not pixels.
I would suggest some reading on how OpenGL transformations work, for example here, here or the tutorial here.
The vectors you pass into OpenGL are not viewport positions but arbitrary numbers in some vector space. Only after a chain of transformations are these numbers mapped into viewport pixel positions. With the old fixed-function pipeline this could be anything that could be represented by a vector-matrix multiplication.
These days, where everything is programmable (shaders), the mapping can be virtually any function you can think of. For example, the values you pass to glVertex (an immediate mode call, but its data is still available to shaders in OpenGL 2.1) may be interpreted as polar coordinates in the vertex shader:
#version 110
void main() {
    // interpret gl_Vertex.x as an angle and gl_Vertex.y as a radius
    gl_Position = gl_ModelViewProjectionMatrix
                * vec4(gl_Vertex.y * vec2(sin(gl_Vertex.x), cos(gl_Vertex.x)), 0.0, 1.0);
}
This is a perfectly valid OpenGL 2.1 vertex shader that interprets the vertex position as polar coordinates. Note that because triangles and lines have straight edges while polar coordinates are curvilinear, this gives good visual results only for points or highly tessellated primitives.
As you can see here, the values passed to glVertex are really just arbitrary, unitless components of vectors in some vector space. Only by applying some transformation into viewport space do these vectors gain meaning. Hence it makes no sense to impose a particular value range on the values that go into a vertex attribute.
Vertex and pixel are very different things.
It's quite possible to have all your vertices within one pixel (although in this case you probably need help with LODing).
You might want to start here...
http://www.glprogramming.com/blue/ch01.html
Specifically...
Primitives are defined by a group of one or more vertices. A vertex defines a point, an endpoint of a line, or a corner of a polygon where two edges meet. Data (consisting of vertex coordinates, colors, normals, texture coordinates, and edge flags) is associated with a vertex, and each vertex and its associated data are processed independently, in order, and in the same way.
And...
Rasterization produces a series of frame buffer addresses and associated values using a two-dimensional description of a point, line segment, or polygon. Each fragment so produced is fed into the last stage, per-fragment operations, which performs the final operations on the data before it's stored as pixels in the frame buffer.
For your example, before glVertex2f(0.15f, 0.51f) ends up on the screen, there are many transforms to be done. To simplify a complex process crudely: after moving your vertex to view space (applying the camera position and direction), the magic here is (1) the projection matrix, and (2) the viewport setting.
Internally, OpenGL "screen coordinates" live in a cube from (-1, -1, -1) to (1, 1, 1):
http://www.matrix44.net/cms/wp-content/uploads/2011/03/ogl_coord_object_space_cube.png
The projection matrix 'squeezes' the view frustum into this cube (which you do in the vertex shader), assuming you have a perspective transform; if the projection is orthographic, the view volume is just a box limited by the near and far values (and, in both cases, scaling factors):
http://www.songho.ca/opengl/files/gl_projectionmatrix01.png
EDIT: Maybe a better example here:
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/#The_Projection_matrix
(The Z coordinate is used as the depth value.) When fragments are finally mapped to pixels on the texture/framebuffer/screen, the normalized coordinates are scaled and offset by the viewport settings (roughly x_window = (x_ndc + 1) * width/2 + viewport_x, and similarly for y):
https://www3.ntu.edu.sg/home/ehchua/programming/opengl/images/GL_2DViewportAspectRatio.png
Hope this helps!
So I'm supposed to Texture Map a specific model I've loaded into a scene (with a Framebuffer and a Planar Pinhole Camera), however I'm not allowed to use OpenGL and I have no idea how to do it otherwise (we do use glDrawPixels for other functionality, but that's the only function we can use).
Is anyone here able enough to give me a run-through on how to texture map without OpenGL functionality?
I'm supposed to use these slides: https://www.cs.purdue.edu/cgvlab/courses/334/Fall_2014/Lectures/TMapping.pdf
But they make very little sense to me.
What I've gathered so far is the following:
You iterate over a model, and assign each triangle "texture coordinates" (which I'm not sure what those are), and then use "model space interpolation" (again, I don't understand what that is) to apply the texture with the right perspective.
I currently have my program doing the following:
TL;DR:
1. What is model space interpolation/how do I do it?
2. What explicitly are texture coordinates?
3. How, at a high level (in layman's terms), do I texture map a model without using OpenGL?
OK, let's start by making sure we're both on the same page about how the color interpolation works. Lines 125 through 143 set up three vectors redABC, greenABC and blueABC that are used to interpolate the colors across the triangle. They work one color component at a time, and each of the three vectors helps interpolate one color component.
By convention, s,t coordinates are in source texture space. As provided in the mesh data, they specify the position within the texture of that particular vertex of the triangle. The crucial thing to understand is that s,t coordinates need to be interpolated across the triangle just like colors.
So, what you want to do is set up two more ABC vectors: sABC and tABC, exactly duplicating the logic used to set up redABC, but instead of using the color components of each vertex, you use the s,t coordinates of each vertex. Then for each pixel, instead of computing ssiRed etc. as unsigned int values, you compute ssis and ssit as floats; they should be in the range 0.0f through 1.0f, assuming your source s,t values are well behaved.
Now that you have an interpolated s,t coordinate, multiply ssis by the texel width of the texture, and ssit by the texel height, and use those coordinates to fetch the texel. Then just put that on the screen.
Since you are not using OpenGL I assume you wrote your own software renderer to render that teapot?
A texture is simply an image. A texture coordinate is a 2D position in the texture. So (0,0) is bottom-left and (1,1) is top-right. For every vertex of your 3D model you should store a 2D position (u,v) in the texture. That means that at that vertex, you should use the colour the texture has at that point.
To know the UV texture coordinate of a pixel in between vertices you need to interpolate the texture coordinates of the vertices around it. Then you can use that UV to look up the colour in the texture.
Let there be a vertex which is part of a triangle, and of a quad.
To my best understanding, the normal of that vertex is the average of the normal of the quad and the normal of the triangle.
The triangle is drawn before the quad. When should I call glNormal and with what vector?
Should I call glNormal 2 times, each time with the same vector (the average normal vector)?
Should I call glNormal the last time the vertex is drawn, with the average normal vector?
To my best understanding, the normal of that vertex is the average of the normal of the quad and the normal of the triangle.
Ideally, the normal vector should be orthogonal to the surface you are rendering at any point. However, the GL supports rendering surfaces only as polygonal models (at least directly). So there are two principal possibilities:
The polygonal representation does exactly represent the object you want to visualize. A simple example would be a cube.
The polygonal representation is just a (piecewise linear) approximation of the surface you want to visualize. Think of smooth surfaces.
In case 1, you need one normal per triangle (as the normal is unchanging across a flat surface defined by a triangle). However, this means that for neighboring triangles that share an edge or corner, the normals will have to be different. From the GL's point of view, each of the triangles uses different vertices, even if those vertices share the same position in space. A vertex is the set of all attributes, not just the position. For the cube, that means you will need not just 8 different vertices, but 24, so you have 3 at each corner.
In case 2, you want to cover up the polygonal structure of the model as well as possible. One aspect of this is using smooth shading techniques. Averaging the normals of the adjacent triangles at each vertex is one heuristic for doing so. In this case, neighboring primitives actually can share vertices, as the normal and the position of a corner point are the same for any triangle connected to it.
This heuristic has some drawbacks, especially if your surface contains both smooth parts and "sharp edges" you want to preserve. There are improved heuristics which try to detect sharp edges and split vertices to allow different normals for the connected triangles, so as not to smooth over such edges. But all such heuristics can fail in some cases - ideally, the normals are provided when the model is created in the first place.
The triangle is drawn before the quad. When should I call glNormal and with what vector?
OpenGL is a state machine, meaning that things you set stay that way until you change them again - and setting normals is no exception. The second thing to note is that normals are a vertex attribute. So every attribute of every vertex always has some value (but depending on the rest of your GL state, not all of these attributes are used when rendering).
Since you use the fixed-function GL, normals are a built-in vertex attribute - so every vertex you issue in some way has some value as its normal attribute. In immediate mode rendering with glBegin()/glEnd(), it will be the one you set with the most recent glNormal() call (or the initial default value if you never called glNormal()).
So to answer your question:
You have to set that normal before you issue the glVertex() call for that particular vertex the first time, and you have to re-issue that glNormal() call before drawing with "this" vertex the second time (which technically is a different vertex anyway) if you changed the normal in between while specifying other vertices.
To my best understanding, the normal of that vertex is the average of the normal of the quad and the normal of the triangle.
No. The normal of a plane is a vector pointing 'out of' the plane at a 90 degree angle. In OpenGL, this is used in shading calculations, and to support various effects, OpenGL lets you specify whatever normal you want instead of calculating it from the primitive. For flat lighting, the normal should be set to the mathematical definition of the normal for each primitive, while for smooth lighting, the normal should be set to the average normal of all primitives that share the vertex.
glNormal sets a value in OpenGL that is read whenever you call glVertex, and is persistent until you call glNormal again. So this code
glBegin(GL_QUADS);
glNormal3d(0, 0, 1);
glVertex3d(1, 0, 0);
glVertex3d(1, 1, 0);
glVertex3d(0, 1, 0);
glVertex3d(0, 0, 0);
glEnd();
specifies 4 vertices, each with a normal of (0,0,1).