I'm trying to render a texture-mapped cube, as part of my self-taught (or rather SO taught) learning.
I found this example online that packs the vertex coords and texture coords into one array, like so:
Vertex Vertices[4] = { Vertex(Vector3f(-1.0f, -1.0f,  0.5773f),  Vector2f(0.0f, 0.0f)),
                       Vertex(Vector3f( 0.0f, -1.0f, -1.15475f), Vector2f(0.5f, 0.0f)),
                       Vertex(Vector3f( 1.0f, -1.0f,  0.5773f),  Vector2f(1.0f, 0.0f)),
                       Vertex(Vector3f( 0.0f,  1.0f,  0.0f),     Vector2f(0.5f, 1.0f)) };
I guess it works for a pyramid-shaped object, but it doesn't work so well for my cube. The problem is that I need to use a different texture coordinate for the same vertex which is shared with another face.
So I thought, "Oh I know! I'll just pack the texture coordinates with the indices instead!" and I merrily created my data structure mapping the indexes to texture coordinates, but now I've ran into a snag: Indices need to go into the GL_ELEMENT_ARRAY_BUFFER and texture coordinates need to go into the GL_ARRAY_BUFFER.
Does this mean that there's no way for me to pack this data into one buffer? I have to split out the index array and texture coordinate array into two separate structures?
Furthermore, I just realized that there would no longer be a 1:1 mapping between vertex positions and texture coordinates... I have no idea how I'd rewrite my vertex shader.
Or am I supposed to do it the way the tutorial does (pack the vertex positions and texture coords together) and just repeat vertices where necessary?
I thought the whole idea behind separating the indices and the vertex positions in the first place was to reduce data redundancy, but now I have to add that redundancy back in as soon as I want to use textures?
You fell for a common misconception: identifying a vertex with just its position. That is not what a vertex is, though.
In reality a vertex is the full combination of all its attributes, i.e. position, normal, texture coordinates, and so on. So if the texture coordinates differ, you have a very different vertex with its own index. You therefore have to duplicate the position, normal, etc. data, with everything identical except that one texture coordinate.
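A minimal sketch of what that looks like for a textured cube, using plain interleaved floats rather than the tutorial's Vertex/Vector3f classes (the struct, array, and buffer names below are placeholders): each face gets its own four vertices, so a corner position shared by three faces appears three times, once with each face's texture coordinates.
// Hypothetical layout: 24 vertices (4 per face) even though a cube only has 8
// distinct corner positions, because positions are shared but texture coordinates are not.
struct TexturedVertex {
    float pos[3];
    float uv[2];
};

// Front face only, for brevity; the other five faces repeat the same pattern
// with their own positions and their own (0,0)-(1,1) texture coordinates.
TexturedVertex cubeVertices[24] = {
    { { -1.0f, -1.0f,  1.0f }, { 0.0f, 0.0f } },
    { {  1.0f, -1.0f,  1.0f }, { 1.0f, 0.0f } },
    { {  1.0f,  1.0f,  1.0f }, { 1.0f, 1.0f } },
    { { -1.0f,  1.0f,  1.0f }, { 0.0f, 1.0f } },
    /* ...the remaining 5 faces... */
};

// Two triangles per face; each face only references its own four vertices.
unsigned int cubeIndices[36] = {
    0, 1, 2,   0, 2, 3,
    /* ...the remaining 5 faces... */
};

// Vertex data (positions + texture coordinates) still goes into GL_ARRAY_BUFFER,
// and the indices still go into GL_ELEMENT_ARRAY_BUFFER; vbo and ibo are assumed
// to have been created with glGenBuffers.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(cubeVertices), cubeVertices, GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(cubeIndices), cubeIndices, GL_STATIC_DRAW);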
Related
I have a texture already rendered and I'm mapping a quad/rectangle onto it. (The quad may be smaller than or equal to the total texture size.)
Once the quad is mapped, I want to remove the rest (whatever is drawn outside the quad).
So far I can map the quad and get my sub-texture (the part not to be removed), but I'm unable to delete the remaining region (outside the quad).
The following images show the procedure:
1. Original image
2. Original image with the quad in red
3. Everything removed except the quad (texture after cropping)
I don't know how you compute your texture coordinates in your code, but there aren't a million ways to do it, so I'll give a solution for the three easiest ways I have in mind:
1. You only have a vertex array containing the positions of your quad's vertices, and use them to compute your texture coordinates. In that case, just change the positions of your vertices to your crop area before drawing.
2. You have a vertex array containing both the positions and the texture coordinates (or two vertex arrays, one for each). You must change the covered area in both. For your specific use case I would advise computing the texture coordinates from the vertex positions in the vertex shader, for simplicity and efficiency.
3. You send your cropping area as a uniform to your fragment shader. This solution assumes you work in ortho space, as the picture will always fill the screen. In that case, from the input position you know where you are. With a simple if condition, you can check whether you are out of bounds; if so, set the pixel to black or use discard to cancel drawing the pixel (see the sketch below). Conditions are time consuming, so I would only advise this solution if you wish to set the cropped pixels to black. If you prefer them not displayed at all, solution 1 is the fastest.
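A rough sketch of option 3, under the assumption that the vertex shader passes the position through in normalized device coordinates; u_cropRect, u_texture, v_pos and program are made-up names for illustration:
// Fragment shader sketch: u_cropRect = (minX, minY, maxX, maxY) in the same
// space as v_pos, which the vertex shader is assumed to pass through unchanged.
const char* cropFragmentShader = R"(
    #version 330 core
    in vec2 v_pos;
    uniform sampler2D u_texture;
    uniform vec4 u_cropRect;          // (minX, minY, maxX, maxY)
    out vec4 fragColor;
    void main()
    {
        if (v_pos.x < u_cropRect.x || v_pos.x > u_cropRect.z ||
            v_pos.y < u_cropRect.y || v_pos.y > u_cropRect.w)
            discard;                  // or output black here instead of discarding
        fragColor = texture(u_texture, v_pos * 0.5 + 0.5);   // NDC -> [0,1] texcoords
    }
)";

// CPU side, before drawing the full-screen quad:
glUniform4f(glGetUniformLocation(program, "u_cropRect"),
            cropMinX, cropMinY, cropMaxX, cropMaxY);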
I have solved it using NeHe's Lesson 3. I used
glColor3f(0.0f,0.0f,0.0f); // Set The Color To Black
glBegin(GL_QUADS); // Start Drawing Quads
glVertex3f(-1.0f, 1.0f, 0.0f); // Left And Up 1 Unit (Top Left)
glVertex3f( 1.0f, 1.0f, 0.0f); // Right And Up 1 Unit (Top Right)
glVertex3f( 1.0f,-1.0f, 0.0f); // Right And Down One Unit(Bottom Right)
glVertex3f(-1.0f,-1.0f, 0.0f); // Left And Down One Unit (Bottom Left)
glEnd(); // Done Drawing A Quad
to draw 4 quads of black color, to crop the region outside my selected region.
Thanks to NeHe.
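For reference, a sketch of that approach in the same immediate-mode style: four black quads that cover everything outside an assumed crop rectangle (x0, y0)-(x1, y1), given in the same [-1, 1] coordinate space as the full quad.
float x0 = -0.5f, y0 = -0.5f, x1 = 0.5f, y1 = 0.5f;   // assumed crop rectangle

glColor3f(0.0f, 0.0f, 0.0f);                          // black
glBegin(GL_QUADS);
    // left strip
    glVertex2f(-1.0f, -1.0f); glVertex2f(x0, -1.0f); glVertex2f(x0, 1.0f); glVertex2f(-1.0f, 1.0f);
    // right strip
    glVertex2f(x1, -1.0f); glVertex2f(1.0f, -1.0f); glVertex2f(1.0f, 1.0f); glVertex2f(x1, 1.0f);
    // bottom strip (between the left and right strips)
    glVertex2f(x0, -1.0f); glVertex2f(x1, -1.0f); glVertex2f(x1, y0); glVertex2f(x0, y0);
    // top strip
    glVertex2f(x0, y1); glVertex2f(x1, y1); glVertex2f(x1, 1.0f); glVertex2f(x0, 1.0f);
glEnd();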
I recently learned that:
glBegin(GL_TRIANGLES);
glVertex3f( 0.0f, 1.0f, 0.0f);
glVertex3f(-1.0f,-1.0f, 0.0f);
glVertex3f( 1.0f,-1.0f, 0.0f);
glEnd();
is not really how it's done. I've been working through the tutorials at opengl-tutorial.org, which introduced me to VBOs. Now I'm migrating it all into a class and trying to refine my understanding.
My situation is similar to this. I understand how to use matrices for rotations, and I could do it all myself and then hand it over to GL and friends. But I'm sure that's far less efficient, and it would involve more communication with the graphics card. Tutorial 17 on that website shows how to rotate things; it uses:
// Send our transformation to the currently bound shader,
// in the "MVP" uniform
glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &MVP[0][0]);
glUniformMatrix4fv(ModelMatrixID, 1, GL_FALSE, &ModelMatrix[0][0]);
glUniformMatrix4fv(ViewMatrixID, 1, GL_FALSE, &ViewMatrix[0][0]);
to rotate objects. I assume this is more efficient than anything I could ever produce. What I want is to do something like this, but only multiply the matrix into some of the mesh, without breaking the mesh into pieces (because that would disrupt the triangle-vertex-index relationship and I'd end up stitching it back together manually).
Is there a separate function for that? Is there some higher-level library that handles meshes and bones that I should be using (as some of the replies to the other guy's post seem to suggest)? I don't want to get stuck using something outdated and inefficient again, only to end up redoing everything later.
Uniforms are so named because they are uniform: unchanging over the course of a render call. Shaders can only operate on input values (which are provided per invocation: per-vertex for vertex shaders, per-fragment for fragment shaders, etc.), uniforms (which are fixed for a single rendering call), and global variables (which are reset to their original values for every instantiation of a shader).
If you want to do different things to different parts of an object within a single rendering call, you must do it based on input variables, because only inputs change within a single rendering call. It sounds like you're trying to do something with matrix skinning or hierarchies of objects, so you probably want to give each vertex a matrix index or something similar as an input. You use this index to look up the actual matrix you want in a uniform array of matrices (see the sketch below).
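A minimal sketch of that idea, with made-up names and sizes (a_matrixIndex, u_partMatrices, partMatrices): each vertex carries an index as an attribute, and the vertex shader uses it to select one matrix from a uniform array.
// Vertex shader sketch for per-part transforms within one draw call.
const char* partVertexShader = R"(
    #version 330 core
    layout(location = 0) in vec3  a_position;
    layout(location = 1) in float a_matrixIndex;   // which part this vertex belongs to

    uniform mat4 u_viewProjection;
    uniform mat4 u_partMatrices[8];                // one model matrix per movable part

    void main()
    {
        mat4 model  = u_partMatrices[int(a_matrixIndex)];
        gl_Position = u_viewProjection * model * vec4(a_position, 1.0);
    }
)";

// CPU side: upload all part matrices (here assumed to be a glm::mat4 array)
// once per frame, then issue a single draw call for the whole mesh.
glUniformMatrix4fv(glGetUniformLocation(program, "u_partMatrices"),
                   8, GL_FALSE, &partMatrices[0][0][0]);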
OpenGL is not a scene graph. It doesn't think in meshes or geometry. When you specify a uniform, it won't get "applied" to the mesh. It merely sets a register to be accessed by a shader. Later when you draw primitives from a Vertex Array (maybe contained in a VBO), the call to glDraw… determines which parts of the VA are batched for drawing. It's perfectly possible and reasonable to glDraw… just a subset of the VA, then switch uniforms, and draw another subset.
In any case OpenGL will not change the data in the VA.
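A sketch of that pattern, assuming the index buffer is laid out so each part's triangles are contiguous (the VAO name, counts, and matrices are made-up; ModelMatrixID is the uniform location from the tutorial code above):
// One VAO/VBO/index buffer for the whole mesh; only the uniform changes between calls.
glBindVertexArray(meshVao);

// Draw the first part with its own model matrix...
glUniformMatrix4fv(ModelMatrixID, 1, GL_FALSE, &firstPartMatrix[0][0]);
glDrawElements(GL_TRIANGLES, firstPartIndexCount, GL_UNSIGNED_INT, (void*)0);

// ...then draw the rest of the mesh with a different matrix,
// without touching the vertex data at all.
glUniformMatrix4fv(ModelMatrixID, 1, GL_FALSE, &secondPartMatrix[0][0]);
glDrawElements(GL_TRIANGLES, secondPartIndexCount, GL_UNSIGNED_INT,
               (void*)(firstPartIndexCount * sizeof(unsigned int)));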
I've got a shader to procedurally generate geometric shapes inside a quad. Essentially, you render a quad with this fragment shader active, and it calculates which fragments are on the border of the shape and discards everything else.
The problem is the dimensions of the quad. At the moment, I have to pass in the vertex data twice, once to the VBO and a second time as uniform variables to the shader, so it knows how big of a shape it's supposed to be creating.
Is there any way to only have to do this once, by having some way to get the coordinates of the top-left and bottom-right vertices of the current quad when I'm inside the fragment shader, so that I could simply give the vertex data to OpenGL once and have the shader calculate the largest shape that will fit inside the quad?
I think you probably want to use a geometry shader. Each vertex would consist of the position of a corner of the quad (a vector of 2-4 values) and the size of the quad (which could be a single value or up to 9, depending on how general you need the quad to be).
The geometry shader would generate the additional vertices for the quad and pass the size through to the fragment shader.
Depending on what exactly you're doing you may also be able to use point sprites and use the implicit coordinates that they have (gl_PointCoord). However, point sprites have a maximum size (which can be queried via GL_POINT_SIZE_RANGE and GL_POINT_SIZE_GRANULARITY).
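For reference, the point-size limits mentioned above can be queried like this (sizeRange and sizeGranularity are just local names):
// Query the point sprite size limits before relying on a particular size.
GLfloat sizeRange[2], sizeGranularity;
glGetFloatv(GL_POINT_SIZE_RANGE, sizeRange);              // minimum and maximum point size
glGetFloatv(GL_POINT_SIZE_GRANULARITY, &sizeGranularity); // step between supported sizes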
You could pull the vertices yourself. You could create a Uniform Buffer or a Texture Buffer with the vertex data and just access this buffer in the fragment shader. In the vertex shader, in order to know which vertex to output, you could just use the built-in variable gl_VertexID.
I'd pass the top left and bottom right vertices of the quad as two extra input attributes for each vertex. The quads themselves get rendered as triangles.
In the vertex shader, declare two output attributes as flat (so they don't get interpolated) and copy the input attributes to these outputs.
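A sketch of that last approach with made-up attribute names: every vertex of a quad carries the quad's two corners as extra attributes, and the flat qualifier keeps them from being interpolated on the way to the fragment shader.
// Vertex shader sketch; a_topLeft / a_bottomRight hold the same two corners
// for all four vertices of a given quad.
const char* quadVertexShader = R"(
    #version 330 core
    layout(location = 0) in vec2 a_position;
    layout(location = 1) in vec2 a_topLeft;
    layout(location = 2) in vec2 a_bottomRight;

    flat out vec2 v_topLeft;        // 'flat': no interpolation across the triangle
    flat out vec2 v_bottomRight;

    void main()
    {
        v_topLeft     = a_topLeft;
        v_bottomRight = a_bottomRight;
        gl_Position   = vec4(a_position, 0.0, 1.0);
    }
)";

// In the fragment shader, declare the matching inputs as
//   flat in vec2 v_topLeft;  flat in vec2 v_bottomRight;
// and the quad's dimensions are simply (v_bottomRight - v_topLeft).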
So if I'm to draw a three-sided pyramid with GL_TRIANGLE_FAN, I provide one vertex for the center (apex) and three for the bottom (actually four, but you know what I mean, right?!).
I can calculate face normals for all three faces (sides) of the pyramid.
The question is: how can I assign a different normal to the first (center) vertex for every face (side) if I only specify that vertex once?
Basically I need to assign the same face normal to all three vertices that make up a triangle, and then do the same thing for the next two triangles.
But I don't know how to assign a normal to the first (center) vertex three times when I only call the draw function for that vertex once (is that even possible with GL_TRIANGLE_FAN?!).
Setting that vertex's normal to glNormal3f(0.0f, 0.0f, 1.0f) is no good (though it seems correct), because then the color interpolation between vertices is not correct.
It's a common misconception that a vertex is just the position. A vertex is the whole set of position, normal, texture coordinates, and so on. If you change only one attribute of the vertex vector, you get a very different vertex.
Hence it is not possible to have only one vertex but several normals; that contradicts the very definition of a vertex. If you want per-face normals, you have to submit the center (apex) vertex once per face, each time with that face's normal, which also means giving up GL_TRIANGLE_FAN in favour of GL_TRIANGLES.
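A sketch of what that means in the question's immediate-mode style; the positions are placeholders for a small pyramid, and the per-face normals were worked out for exactly those positions (in real code they come from your own face-normal calculation).
/* Per-face normals for the three sides of the pyramid below (normalized). */
float n0[3] = {  0.0f,  0.45f,  0.89f };
float n1[3] = {  0.87f, 0.22f, -0.44f };
float n2[3] = { -0.87f, 0.22f, -0.44f };

float apex[3]  = {  0.0f,  1.0f,  0.0f };
float base0[3] = { -1.0f, -1.0f,  1.0f };
float base1[3] = {  1.0f, -1.0f,  1.0f };
float base2[3] = {  0.0f, -1.0f, -1.0f };

glBegin(GL_TRIANGLES);
    /* side 0: the apex is sent with this side's normal */
    glNormal3fv(n0); glVertex3fv(apex); glVertex3fv(base0); glVertex3fv(base1);
    /* side 1: the same apex position, but a different normal */
    glNormal3fv(n1); glVertex3fv(apex); glVertex3fv(base1); glVertex3fv(base2);
    /* side 2 */
    glNormal3fv(n2); glVertex3fv(apex); glVertex3fv(base2); glVertex3fv(base0);
glEnd();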
I've been trying to render a GL_QUAD (which is shaped as a trapezoid) with a square texture. I'd like to try and use OpenGL only to pull this off. Right now the texture is getting heavily distorted and it's really annoying.
Normally, I would load the texture and compute a homography, but that means a lot of work and an additional linear algebra library / direct linear transform function. I'm under the impression OpenGL can simplify this process for me.
I've looked around the web and have seen "Perspective-Correct Texturing, Q Coordinates, and GLSL" and "Skewed/Sheared Texture Mapping in OpenGL".
These all seem to assume you'll do some type of homography computation or use some parts of OpenGL I'm ignorant of ... any advice?
Update:
I've been reading "Navigating Static Environments Using Image-Space Simplification and Morphing" [PDF] - page 9 appendix A.
It looks like they disable perspective correction by multiplying the (s, t, r, q) texture coordinates by the model's world-space z component at each vertex.
So for a given set of texture coordinates (s, t, r, q) for a quad that's shaped as a trapezoid, where the 4 components are:
(0.0f, 0.0f, 0.0f, 1.0f),
(0.0f, 1.0f, 0.0f, 1.0f),
(1.0f, 1.0f, 0.0f, 1.0f),
(1.0f, 0.0f, 0.0f, 1.0f)
This is as easy as glTexCoord4f(s*vert.z, t*vert.z, r, q*vert.z)? Or am I missing some step, like messing with the GL_TEXTURE glMatrixMode?
Update #2:
That did the trick! Keep it in mind folks, this problem is all over the web and there weren't any easy answers. Most involved directly recalculating the texture with a homography between the original shape and the transformed shape...aka lots of linear algebra and an external BLAS lib dependency.
Here is a good explanation of the issue & solution.
http://www.xyzw.us/~cass/qcoord/
working link: http://replay.web.archive.org/20080209130648/http://www.r3.nu/~cass/qcoord/
Partly copied and adapted from the above link, created by Cass:
One of the more interesting aspects of texture mapping is the space that texture coordinates live in. Most of us like to think of texture space as a simple 2D affine plane. In most cases this is perfectly acceptable, and very intuitive, but there are times when it becomes problematic.
For example, suppose you have a quad that is trapezoidal in its spatial coordinates but square in its texture coordinates.
OpenGL will divide the quad into triangles and compute the slopes of the texture coordinates (ds/dx, ds/dy, dt/dx, dt/dy) and use those to interpolate the texture coordinate over the interior of the polygon. For the lower left triangle, dx = 1 and ds = 1, but for the upper right triangle, dx < 1 while ds = 1. This makes ds/dx for the upper right triangle greater than ds/dx for the lower one. This produces an unpleasant image when texture mapped.
Texture space is not simply a 2D affine plane, even though we generally leave the r=0 and q=1 defaults alone. It's really a full-up projective space (P3)! This is good, because instead of specifying the texture coordinates for the upper vertices as (s,t) coordinates of (0, 1) and (1, 1), we can specify them as (s,t,r,q) coordinates of (0, width, 0, width) and (width, width, 0, width)! These coordinates correspond to the same location in the texture image, but LOOK at what happened to ds/dx - it's now the same for both triangles!! They both have the same dq/dx and dq/dy as well.
Note that it is still in the z=0 plane. It can become quite confusing when using this technique with a perspective camera projection because of the "false depth perception" that this produces. Still, it may be better than using only (s,t). That is for you to decide.
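To make that concrete, here is a sketch of an immediate-mode trapezoid whose top edge is topW times as wide as its bottom edge (topW is a made-up name); the upper texture coordinates are scaled by topW exactly as described above, so s/q and t/q still land on (0,1) and (1,1), but the interpolation becomes projective instead of affine.
const float topW = 0.5f;   /* assumed ratio of top edge width to bottom edge width */

glBegin(GL_QUADS);
    glTexCoord4f(0.0f, 0.0f, 0.0f, 1.0f);   glVertex2f(-1.0f, -1.0f);   /* bottom left  */
    glTexCoord4f(1.0f, 0.0f, 0.0f, 1.0f);   glVertex2f( 1.0f, -1.0f);   /* bottom right */
    glTexCoord4f(topW, topW, 0.0f, topW);   glVertex2f( topW,  1.0f);   /* top right: (1,1) scaled by topW */
    glTexCoord4f(0.0f, topW, 0.0f, topW);   glVertex2f(-topW,  1.0f);   /* top left:  (0,1) scaled by topW */
glEnd();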
I would guess that most people wanting to fit a rectangular texture on a trapezoid are thinking of one of two results:
perspective projection: the trapezoid looks like a rectangle seen from an oblique angle.
"stretchy" transformation: the trapezoid looks like a rectangular piece of rubber that has been stretched/shrunk into shape.
Most solutions here on SO fall into the first group, whereas I recently found myself in the second.
The easiest way I found to achieve effect 2. was to split the trapezoid into a rectangle and right triangles. In my case the trapezoid was regular, so a quad and two triangles solved the problem.
Hope this can help:
Quoted from the paper:
"At each pixel, a division is performed using the interpolated values of (s/w, t/w, r/w, q/w), yielding (s/q, t/q), which are the final texture coordinates. To disable this effect, which is not possible in OpenGL directly, ..."
In GLSL, (now at least) this is possible. You can add:
noperspective out vec4 v_TexCoord;
There's an explanation here:
https://www.geeks3d.com/20130514/opengl-interpolation-qualifiers-glsl-tutorial/