Problem
When using a vertex buffer object to draw textured primitives, we have to define a separate buffer that stores a texture coordinate for each vertex. This texture coordinate array has to map one-to-one onto the vertices in the VBO; thus, we need a large array if we are drawing many primitives.
However, if all primitives have the same texture coordinates, it is wasteful to repeat those coordinates for every primitive drawn with a glDraw* call.
Example
I have a large number of textured quads that need to be drawn. The quads share the same texture. All vertices are put in a VBO and a large texture coordinate array is created. The textured quads are drawn with a call to glDrawArrays.
4----3  8----7
|   /|  |   /|
|  / |  |  / |
| /  |  | /  |
|/   |  |/   |
1----2  5----6
vertex buffer: { 1, 2, 3, 4, 5, 6, 7, 8 }
Texture coordinate array: { 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1 }
                                                    <-------------------->
                                                    coordinates repeated
                                                    for second quad
The texture coordinate array repeats every four vertices. This buffer has to be recreated every time a batch of quads is drawn, since we do not know the number of quads beforehand, and the repeated texture coordinates waste memory.
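For illustration, a minimal sketch of that per-batch rebuild (quadCount and the vector are hypothetical names, not from the original code):

std::vector<GLfloat> texCoords;
texCoords.reserve(quadCount * 8);                 // 4 vertices * 2 floats per quad
for (int q = 0; q < quadCount; ++q)
{
    // the same four corners, appended once per quad
    const GLfloat corners[8] = { 0,0, 1,0, 1,1, 0,1 };
    texCoords.insert(texCoords.end(), corners, corners + 8);
}
glBufferData(GL_ARRAY_BUFFER, texCoords.size() * sizeof(GLfloat),
             texCoords.data(), GL_DYNAMIC_DRAW);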
Question
Is there a way to remove the repetition in texture coordinate arrays when the primitives being drawn all have the same texture coordinates?
Related
I have a cube made with a triangle strip, and I am trying to find the UV coordinates for it.
vert = new VBO<Vector3>(new Vector3[] {
    new Vector3(1, 1, 1),
    new Vector3(0, 1, 1),
    new Vector3(1, 1, 0),
    new Vector3(0, 1, 0),
    new Vector3(1, 0, 1),
    new Vector3(0, 0, 1),
    new Vector3(0, 0, 0),
    new Vector3(1, 0, 0)
});
ind = new VBO<uint>(new uint[] { 3, 2, 6, 7, 4, 2, 0, 3, 1, 6, 5, 4, 1, 0 }, BufferTarget.ElementArrayBuffer);
Does anyone know what they would be?
Short Answer: You can assign any value to the UV coordinates, even if they overlap (albeit that isn't usually desirable), so long as you create a UV coordinate for every vertex coordinate. If you're OK with overlaps, you could just declare 8 Vector2s as your UV coordinates and assign them any value between -1 and 1.
Long answer:
This all depends on the way you index your coordinates.
UV coordinates tell you how to map a 2D polygonal region of a 2D texture onto your 3D model geometry. There should be a UV coordinate for every vertex coordinate if your UVs and vertices use the same indices (which doesn't seem optimal for your vertex coordinates as they are).
Indices designate which of your coordinates (3 indices for triangles, 4 for quads) make up a 2D (texture) or 3D (model) polygon. The way your vertex coordinates are defined, unless you duplicate every vertex so that every 3 vertices define a triangle, you have to use indexing to indicate which of your 8 vertices form a polygon. For example, the indices {0, 1, 3} indicate the top-right-rear triangle (rear here meaning further along the positive Z axis) on top of your cube.
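For instance, with the question's vertex order, the whole top face could be written as two such triples (an illustration, not the question's actual index buffer):

// vertices 0-3 all lie on top of the cube (y == 1),
// so two triangles cover the entire top face
unsigned int topFace[] = { 0, 1, 3,   0, 3, 2 };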
An issue comes with using the same index array for your vertices and UVs. If you indexed your model as is, your model faces wouldn't have any problems, but some of your UV faces would overlap with previously defined UV faces. This is because the vertices of some faces are shared with vertices on the other side of your texture space. Think of your cube as if it were unfolded into a cross: to put it back together, you would wrap the base back around to the top. You can't do that if your cube's geometry only exists in 2 dimensions (as it does in your UV coordinates).
The best solution in this case would seem to be cube projection, which I don't know how to do yet. So I'll recommend what I understand to be the next best solution:
Duplicate any vertices that would cause the UV faces to wrap over one another (the base of the cross), and optionally any vertices that would cause too much distortion in how the texture is applied to the vertex coordinates; the two outer vertices of the head, hip, and arms of the cross would be spaced further apart, requiring distortion in the texture to produce the desired output.
Doing so should result in you having 10 vertex coordinates, 10 UV coordinates, and 36 indices, where every 3 indices defines a triangle ( 12 triangles ).
Keep in mind, there are multiple ways of achieving what you're asking, so deeper research is recommended.
A visual representation of the previously described coordinate and indexing alignment (fixed the Z axis).
This represents duplicating the vertex and UV coordinates at indices 0 and 1 to indices 8 and 9. Vertex coordinates 8 and 9 hold the same 3D location values as vertices 0 and 1, whereas UV coordinates 8 and 9 are located lower on the Y axis than coordinates 6 and 7.
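A rough sketch of that duplication (plain C++ arrays here, while the question uses C# VBOs; the actual UV values depend on the cross layout in the image):

// vertices 8 and 9 repeat the 3D positions of vertices 0 and 1,
// so they can carry their own UV values
float positions[10][3] = {
    {1,1,1}, {0,1,1}, {1,1,0}, {0,1,0},   // 0-3
    {1,0,1}, {0,0,1}, {0,0,0}, {1,0,0},   // 4-7
    {1,1,1}, {0,1,1},                     // 8, 9: duplicates of 0 and 1
};
// a matching float uvs[10][2] holds one UV per entry above; entries 8 and 9
// get their own values, placed lower on the V axis than entries 6 and 7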
I forgot to put this in the example image, but the indices in the example would be:
int indices[] = {
    0, 1, 2,
    1, 2, 3,
    2, 3, 4,
    3, 4, 5,
    0, 2, 4,
    0, 4, 6,
    1, 3, 5,
    1, 5, 7,
    4, 5, 6,
    5, 6, 7,
    6, 7, 8,
    7, 8, 9
};
This will give you 12 modelspace triangles and 12 UV triangles, where every 3 indices is a triangle.
EDIT: As per the link provided by @Rabbid76, 14 vertex and UV coordinates would be better, as you wouldn't get the distortion. The way I mentioned is just another way of doing it that has its ups and downs (more distortion, slightly less memory usage).
I'm trying to draw a point at every pixel on the screen, but I'm getting a weird anomaly: two red lines, one vertical, one horizontal:
My setup is like this:
I've got an ortho projection matrix like this:
glm::mat4 projection=glm::ortho(0.f, screen_width-1.f, 0.f, screen_height-1.f, 1.f, -1.f);
It's just setting the OpenGL coordinate system to be the same size as the screen's pixel dimensions.
Then I'm filling the data into a vector like this:
for (int i = 0; i < screen_height; ++i){
    for (int j = 0; j < screen_width; ++j){
        idata.push_back(j);
        idata.push_back(i);
        idata.push_back(1);
    }
}
I'm uploading integers into a VBO with this:
glBufferData(GL_ARRAY_BUFFER, sizeof(GLint)*idata.size(), &idata[0], GL_DYNAMIC_DRAW);
Setting up the attributes like so:
glVertexAttribIPointer(0, 3, GL_INT, 3*sizeof(GLint), (void *)(0*sizeof(GLint)));
glEnableVertexAttribArray(0);
This is the layout of my shader:
layout (location=0) in ivec3 position;
And this is my draw call:
glDrawArrays(GL_POINTS, 0, idata.size()/3);
That's a rounding issue, because you set up the projection in a very weird way:
glm::ortho(0.f,screen_width-1.f,0.f,screen_height-1.f,1.f,-1.f)
In OpenGL's window space, pixels cover an actual area (squares of size 1x1).
So, if your screen is for example 4 pixels wide, this is what you get in terms of pixels:
+--+--+--+--+
|0 |1 |2 |3 | integer pixel coords
...
However, the space you work in is continuous (in principle, at least), and OpenGL's window space will go from 0 to 4 (not 3), which makes perfect sense as there are four pixels, each pixel being one unit wide:
0  1  2  3  4   OpenGL window space
+--+--+--+--+
|0 |1 |2 |3 | integer pixel coords
....
Using OpenGL's window space conventions, the pixel centers lie at half-integer coordinates.
Now you're not drawing in OpenGL window space directly, but you can set up a transformation to basically undo the viewport transform (which goes from NDC to window space) to get a pixel-exact mapping. But you didn't do that. What you did instead is:
   3/4   9/4
0     6/4   3    your coordinate system
+--+--+--+--+
|0 |1 |2 |3 | integer pixel coords
....
And you're drawing at full integers in that weird system, which just does not map well to the actual pixels.
So you should do:
glm::ortho(0.f, screen_width, 0.f, screen_height, 1.f, -1.f)
However, since you seem to be drawing at integer positions, you end up drawing exactly at the pixel corners, and might still see some rounding issues (and the rounding direction will be up to the implementation, as per the spec).
So you'd better shift it so that integer coordinates lie at pixel centers:
glm::ortho(-0.5f, screen_width-0.5f, -0.5f, screen_height-0.5f, 1.f, -1.f)
    0.5   2.5
-0.5   1.5   3.5   your coordinate system
  +--+--+--+--+
  |0 |1 |2 |3 |    integer pixel coords
  ....
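Putting it together, a minimal sketch of the corrected projection (same variable names as the question):

// half-pixel offsets put integer coordinates at pixel centers, so the
// question's loop, which pushes points at integer (j, i), needs no change
glm::mat4 projection = glm::ortho(-0.5f, screen_width - 0.5f,
                                  -0.5f, screen_height - 0.5f,
                                  1.f, -1.f);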
//component
glRotatef((GLfloat)-90, 1, 0, 0);
gluCylinder(qObj,t_width/2,t_width/2,t_height+2*UDwall, 20, 20);
glRotatef((GLfloat)90, 1, 0, 0);
I want to draw a cylinder with only part of a texture applied to it.
glBindTexture(GL_TEXTURE_2D, texName[1]);
But unlike drawing with glVertex3f, once I bind the texture I can't select the region with glTexCoord, since gluCylinder generates its own coordinates (so the whole texture gets applied).
First: what can I do to apply only part of the texture?
Second: (someone suggested using a texture atlas) can I change the texture's maximum coordinate (0.0~1.0) to another number?
You could use the texture matrix to transform the texture coordinates so the desired rectangular region (from the texture atlas) ends up in the right position.
So let's say you want to texture a rect that has coordinates (x, y) and dimensions (a, b). What we want to achieve is for the texture coordinate to be (0,0) at (x, y) and (1,1) at (x+a, y+b).
Solution
Use the texture matrix:
translate by (-x, -y)
scale by (1.0 / a, 1.0 / b)
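A minimal sketch of those two steps with the fixed-function texture matrix (x, y, a, b as above; note that the call issued last is applied to the coordinate first):

glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glScalef(1.0f / a, 1.0f / b, 1.0f);  // applied second: scale the rect down to [0,1]
glTranslatef(-x, -y, 0.0f);          // applied first: move (x, y) to the origin
glMatrixMode(GL_MODELVIEW);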
I'm learning OpenGL right now and I'd like to draw some sprites to the screen from a sprite sheet. I'm not sure if I'm doing this the right way, though.
What I want to do is to build a world out of tiles à la Terraria. That means that all tiles that build my world are 1x1, but I might want things later like entities that are 2x1, 1x2, 2x2 etc.
What I do right now is that I have a class named "Tile" which contains the tile's transform matrix and a pointer to its buffer. Very simple:
Tile::Tile(glm::vec2 position, GLuint* vbo)
{
    transformMatrix = glm::translate(transformMatrix, glm::vec3(position, 0.0f));
    buffer = vbo;
}
Then when I draw the tile, I just bind the buffer and update the shader's UV coords and vertex positions. After that I pass the tile's transform matrix to the shader and draw it using glDrawElements:
glEnableVertexAttribArray(positionAttrib);
glEnableVertexAttribArray(textureAttrib);
for(int i = 0; i < 5; i++)
{
    glBindBuffer(GL_ARRAY_BUFFER, *tiles[i].buffer);
    glVertexAttribPointer(positionAttrib, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), 0);
    glVertexAttribPointer(textureAttrib, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), (void*)(2 * sizeof(GLfloat)));
    glUniformMatrix4fv(transformMatrixLoc, 1, GL_FALSE, value_ptr(tiles[i].transformMatrix));
    glDrawElements(GL_TRIANGLE_STRIP, 4, GL_UNSIGNED_INT, 0);
}
glDisableVertexAttribArray(positionAttrib);
glDisableVertexAttribArray(textureAttrib);
Could I do this more efficiently? I was thinking that I could have one buffer for 1x1 tiles, one buffer for 2x1 tiles, and so on, and then just have the Tile class contain UVpos and UVsize and send those to the shader, but I'm not sure how I'd do that.
I think what I described with one buffer for 1x1 and one for 2x1 sounds like it would be a lot faster.
Could I do this more efficiently?
I don't think you could do it less efficiently. You are binding a whole buffer object for each quad. You are then uploading a matrix. For each quad.
The way tilemap drawing (and only the map, not the entities) normally works is that you build a buffer object that contains some portion of the visible screen's tiles. Empty space is rendered as a transparent tile. You then render all of the tiles for that region of the screen, all in one drawing call. You provide one matrix for all of the tiles in that region.
Normally, you'll have some number of such visible regions, to make it easy to update the tiles for that region when they change. Regions that go off-screen are re-used for regions that come on-screen, so you fill them with new tile data.
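As a hedged sketch of that idea (the Tile fields and region* names below are illustrative, not from the question):

// build one interleaved buffer (x, y, u, v per vertex) for every tile in a
// region, then draw the whole region with a single call and a single matrix
std::vector<GLfloat> verts;
for (const Tile& t : regionTiles)
{
    const GLfloat x = t.x, y = t.y;            // tile position, in tile units
    const GLfloat u0 = t.u0, v0 = t.v0,        // tile's sprite-sheet rectangle
                  u1 = t.u1, v1 = t.v1;
    const GLfloat quad[] = {                   // two triangles per tile
        x,     y,     u0, v0,   x + 1, y,     u1, v0,   x + 1, y + 1, u1, v1,
        x,     y,     u0, v0,   x + 1, y + 1, u1, v1,   x,     y + 1, u0, v1,
    };
    verts.insert(verts.end(), std::begin(quad), std::end(quad));
}
glBindBuffer(GL_ARRAY_BUFFER, regionVBO);
glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(GLfloat),
             verts.data(), GL_DYNAMIC_DRAW);
glUniformMatrix4fv(transformMatrixLoc, 1, GL_FALSE, value_ptr(regionMatrix));
glDrawArrays(GL_TRIANGLES, 0, GLsizei(verts.size() / 4));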
I have a varray full of triangles that make up a large square. I'm trying to texture this large square, and all I'm getting (it seems) is a one-pixel column that is then stretched across the primitive.
I'm texturing each right-angled triangle with UV coords (0,1), (0,0), (1,0) and (0,1), (1,1), (1,0), depending on which way up the triangle is.
Rendering code:
glBindTexture(GL_TEXTURE_2D, m_texture.id());
//glEnableClientState(GL_INDEX_ARRAY);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
//glIndexPointer(GL_INT, 0, &m_indices[0]);
glVertexPointer(3, GL_FLOAT, sizeof(VertexData), &m_vertexData[0].x);
glNormalPointer(GL_FLOAT, sizeof(VertexData), &m_vertexData[0].nx);
glTexCoordPointer(2, GL_FLOAT, sizeof(UVData), &m_uvData[0].u);
glDrawElements(GL_TRIANGLES, m_indices.size(), GL_UNSIGNED_INT, &m_indices[0]);
glBindTexture(GL_TEXTURE_2D, 0);
The texture's wrap mode is GL_CLAMP_TO_EDGE.
Screenshot
EDIT
Outputting the first two triangles, I get these values:
Triangle 1:
Indices (0, 1, 31)
Vertex0 (-15, 0, -15), Vertex1 (-15, 0, -14), Vertex31 (-14, 0, -14)
UV (0, 1), (0, 0), (1, 0)
Triangle 2:
Indices (0, 30, 31)
Vertex0 (-15, 0, -15), Vertex30 (-14, 0, -15), Vertex31 (-14, 0, -14)
UV (0, 1), (1, 1), (1, 0)
For posterity, the full code is here.
The two triangles make up a square with the diagonal cut from top left to bottom right.
Don't use glIndexPointer; it's not what you think it is. It's used for color-index mode. To draw indexed meshes, just use glDrawElements.
Looks like you're associating the UV data with the indices, not with the vertices. UV data is part of the vertex data, so you should probably pack it into the VertexData object rather than keeping it separate. You can keep it separate, but you need one UV per vertex, not one per index.
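A minimal sketch of that packing (extending the question's VertexData; the u/v fields are an assumption about its layout):

struct VertexData
{
    float x, y, z;     // position
    float nx, ny, nz;  // normal
    float u, v;        // texture coordinates: one pair per vertex
};

// all three pointers now walk the same interleaved array, one UV per vertex
glVertexPointer(3, GL_FLOAT, sizeof(VertexData), &m_vertexData[0].x);
glNormalPointer(GL_FLOAT, sizeof(VertexData), &m_vertexData[0].nx);
glTexCoordPointer(2, GL_FLOAT, sizeof(VertexData), &m_vertexData[0].u);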