My problem is very simple: I build an array with the coordinates of two squares:
var vertices = [ -64, -32, 0.0,
                  64, -32, 0.0,
                  64,  32, 0.0,
                 -64, -32, 0.0,
                 -64,  32, 0.0,
                  64,  32, 0.0 ];
vertices.push( -64 + 200, -32, 0.0,
                64 + 200, -32, 0.0,
                64 + 200,  32, 0.0,
               -64 + 200, -32, 0.0,
               -64 + 200,  32, 0.0,
                64 + 200,  32, 0.0 );
But the resulting drawing is not what I expect: the expected result should be 2 separate rectangles with the black background between them, and I don't understand that behavior.
When using triangle strips, the vertices are implicitly indexed:
Draws a series of triangles (three-sided polygons) using vertices v0, v1, v2, then v2, v1, v3 (note the order), then v2, v3, v4, and so on. The ordering is to ensure that the triangles are all drawn with the same orientation so that the strip can correctly form part of a surface.
If one wants separate triangles within a single triangle strip, one needs to add "stop vertices" that generate degenerate triangles.
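For example, here is a sketch (in C, but the same numbers carry over to a WebGL Float32Array) of the two rectangles from the question laid out as one strip with stop vertices; note the vertices have to be reordered into strip order first:
/* rect 1 in strip order, two repeated "stop" vertices, then rect 2
   in strip order; the degenerate (zero-area) triangles in between
   are never rasterized, so the quads stay visually separate */
static const float strip[] = {
    -64.f, -32.f, 0.f,    64.f, -32.f, 0.f,   /* rect 1 */
    -64.f,  32.f, 0.f,    64.f,  32.f, 0.f,
     64.f,  32.f, 0.f,                        /* repeat last vertex of rect 1 */
    136.f, -32.f, 0.f,                        /* repeat first vertex of rect 2 */
    136.f, -32.f, 0.f,   264.f, -32.f, 0.f,   /* rect 2 (-64+200 .. 64+200) */
    136.f,  32.f, 0.f,   264.f,  32.f, 0.f,
};
/* 10 vertices: glDrawArrays(GL_TRIANGLE_STRIP, 0, 10); */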
Read this answer for an explanation (with images) of TRIANGLE_STRIP and TRIANGLE_FAN.
However, triangle strips (and fans) can be quite cumbersome to work with if you're not planning to explicitly build strips or fans. Given that, and looking at your vertex layout, I'm assuming you just want to use plain TRIANGLES to render your geometry.
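Your array is already laid out for that: six vertices per rectangle, two explicit triangles each. A C sketch of the draw call (the WebGL equivalent is gl.drawArrays(gl.TRIANGLES, 0, 12)):
/* 12 vertices = 4 independent triangles = 2 quads; nothing
   connects the quads because every triangle is spelled out */
glDrawArrays(GL_TRIANGLES, 0, 12);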
I'm trying to render a series of 2D shapes (rectangle, circle, etc.) in modern OpenGL, hopefully without the use of any transformation matrices. I would like to be able to specify the coordinates for, say, a rectangle of 2 triangles like so:
float V[] = { 20.0, 20.0, 0.0,
              40.0, 20.0, 0.0,
              40.0, 40.0, 0.0,
              40.0, 40.0, 0.0,
              20.0, 40.0, 0.0,
              20.0, 20.0, 0.0 };
You can see that the vertex coordinates are specified in viewport space (I believe that's what it's called). Now, when this gets rendered by OpenGL, it doesn't work, because clip space goes from -1.0 to 1.0 with the origin in the center.
What would be the correct way for me to handle this? I initially thought adjusting glClipControl to upper-left and 0-to-1 would work, but it didn't. With clip control set to upper-left and 0-to-1, the origin was still at the center, but it did allow the Y axis to increase as it moves downward (which is a start).
Ideally, I would love to get OpenGL to treat (0.0, 0.0) as the top left and (1.0, 1.0) as the bottom right; then I'd just normalise each vertex position. But I have no idea how to get OpenGL to use that kind of coordinate system.
One can easily do these transformations without matrices in the vertex shader:
// From pixels to 0-1
vPos.xy /= vResolution.xy;
// Flip Y so that 0 is top
vPos.y = (1.-vPos.y);
// Map to NDC -1,+1
vPos.xy = vPos.xy*2.-1.;
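Here vResolution is assumed to be a uniform vec2 carrying the viewport size in pixels (the name comes from the snippet above, it is not a GLSL built-in). A minimal C-side sketch of feeding it:
/* once, after linking the shader program */
GLint resLoc = glGetUniformLocation(program, "vResolution");
/* whenever the viewport size changes */
glUseProgram(program);
glUniform2f(resLoc, (float)width, (float)height);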
I'm trying to map a texture onto a cube which is basically a triangle strip with 8 vertices and 14 indices:
static const GLfloat vertices[8 * 3] =
{
    -1.f, -1.f, -1.f,
    -1.f, -1.f,  1.f,
    -1.f,  1.f, -1.f,
    -1.f,  1.f,  1.f,
     1.f, -1.f, -1.f,
     1.f, -1.f,  1.f,
     1.f,  1.f, -1.f,
     1.f,  1.f,  1.f
};
static const GLubyte indices[14] =
{
    2, 0, 6, 4, 5, 0, 1, 2, 3, 6, 7, 5, 3, 1
};
As you can see, it starts drawing the back with the 4 indices 2, 0, 6, 4, then the bottom with the indices 5, 0, 1, and then it continues with single triangles: 1, 2, 3 is a triangle on the left, 3, 6, 7 is a triangle on the top, and so on...
I'm a bit lost as to how to map a texture onto this cube. This is my texture (you get the idea):
I manage to get the back textured and can somehow add something to the front, but the other 4 faces are totally messed up, and I'm a bit confused about how the shader deals with the triangles with regard to the texture coordinates.
The best I could achieve is this:
You can clearly see the triangles on the sides. And these are my texture coordinates:
static const GLfloat texCoords[] = {
    0.5, 0.5,
    1.0, 0.5,
    0.5, 1.0,
    1.0, 1.0,
    0.5, 0.5,
    0.5, 1.0,
    1.0, 0.5,
    1.0, 1.0,
    // ... ?
};
But whenever I try to add more coordinates, it creates something totally different, and I can't really explain why. Any idea how to improve this?
The mental obstacle you're running into is assuming that your cube has only 8 vertices. Yes, there are only 8 corner positions. But each face adjacent to a corner shows a different part of the image and hence has a different texture coordinate at that corner.
Vertices are tuples of
position
texture coordinate
…
any other attribute you can come up with
As soon as one of those attributes changes, you're dealing with an entirely different vertex. For you this means that you're dealing with 8 corner positions, but 3 different vertices at each corner, because the faces meeting at that corner have different texture coordinates there. So you actually need 24 vertices, making up 6 faces which share no vertices at all.
To make things easier for you as a beginner, don't put vertex positions and texture coordinates into different arrays. Instead, write it like this:
struct vertex_pos3_tex2 {
float x,y,z;
float s,t;
} cube_vertices[24] =
{
/* 24 vertices of position and texture coordinate */
};
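For instance, filling in the +Z face might look like this (the texture coordinates below are placeholders; yours depend on how each face maps into your image):
/* 4 unique vertices for one face; draw it as two triangles
   with indices { 0, 1, 2,  0, 2, 3 } */
struct vertex_pos3_tex2 front_face[4] = {
    { -1.f, -1.f, 1.f,   0.f, 0.f },
    {  1.f, -1.f, 1.f,   1.f, 0.f },
    {  1.f,  1.f, 1.f,   1.f, 1.f },
    { -1.f,  1.f, 1.f,   0.f, 1.f },
};
With the interleaved layout, both attribute pointers share one buffer and one stride, sizeof(struct vertex_pos3_tex2), which makes it much harder for positions and texture coordinates to get out of sync.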
I am new to using textures in pyglet (and OpenGL generally), and I am stumped by something that is probably a dumb mistake: I am attempting to apply a texture, derived from a PNG image, to a square that is composed of two triangles. I can successfully use indexed vertex lists to define geometry, but when I specify texture coordinates (u, v) for each vertex of each triangle, I get:
Traceback (most recent call last):
  File "test_tex.py", line 37, in <module>
    ('t2f', texture_coords))
ValueError: Can only assign sequence of same size
suggesting that my list of texture coordinates is not the correct size. Does anyone see the problem? A related post that did not quite help me: Triangle texture mapping OpenGL.
Please check out my code below for details. Thanks!
import pyglet
config = pyglet.gl.Config(sample_buffers=1, samples=4,
                          depth_size=16, double_buffer=True)
window = pyglet.window.Window(resizable=True, config=config, vsync=True)
# create vertex data
num_verts = 4
side_length = 1.0
half_side = side_length / 2.0
# vertex positions of a square centered at the origin,
# ordered counter-clockwise, starting at lower right corner
vertex_positions = [ half_side, -half_side,
                     half_side,  half_side,
                    -half_side,  half_side,
                    -half_side, -half_side]
# six pairs of texture coords, one pair (u,v) for each vertex
# of each triangle
texture_coords = [1.0, 0.0,
                  1.0, 1.0,
                  0.0, 1.0,
                  0.0, 1.0,
                  0.0, 0.0,
                  1.0, 0.0]
# indices of the two triangles that make the square
# counter-clockwise orientation
triangle_indices = [0, 1, 2,
                    2, 3, 0]
# use indexed vertex list
vertex_array = pyglet.graphics.vertex_list_indexed(num_verts,
                                                   triangle_indices,
                                                   ('v2f', vertex_positions),
                                                   ('t2f', texture_coords))
# enable face culling, depth testing
pyglet.gl.glEnable(pyglet.gl.GL_CULL_FACE)
pyglet.gl.glEnable(pyglet.gl.GL_DEPTH_TEST)
# texture set up
pic = pyglet.image.load('test.png')
texture = pic.get_texture()
pyglet.gl.glEnable(texture.target)
pyglet.gl.glBindTexture(texture.target, texture.id)
# set modelview matrix
pyglet.gl.glMatrixMode(pyglet.gl.GL_MODELVIEW)
pyglet.gl.glLoadIdentity()
pyglet.gl.gluLookAt(0, 0, 5, 0, 0, 0, 0, 1, 0)
@window.event
def on_resize(width, height):
    pyglet.gl.glViewport(0, 0, width, height)
    pyglet.gl.glMatrixMode(pyglet.gl.GL_PROJECTION)
    pyglet.gl.glLoadIdentity()
    pyglet.gl.gluPerspective(45.0, width / float(height), 1.0, 100.0)
    return pyglet.event.EVENT_HANDLED

@window.event
def on_draw():
    window.clear()
    vertex_array.draw(pyglet.gl.GL_TRIANGLES)

pyglet.app.run()
It's probably complaining because you have 6 sets of texture coordinates, but only 4 vertices. You need texture coordinates for each vertex, so there should be 4 pairs of floats in your texture_coords array:
texture_coords = [1.0, 0.0,
                  1.0, 1.0,
                  0.0, 1.0,
                  0.0, 0.0]
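The triangle_indices then do the sharing for you: vertices 0 and 2 are reused by both triangles, so the four (u, v) pairs line up one-to-one with the four positions.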
I can't manage to texture a GLU quadric (gluSphere): instead of the texture, I get an average color of the texture.
gl.glEnable(GL.GL_DEPTH_TEST);
gl.glEnable(GL.GL_BLEND);
gl.glEnable(GL.GL_TEXTURE_GEN_S);
gl.glEnable(GL.GL_TEXTURE_GEN_T);
sunTexture = TextureIO.newTexture(new File("sun.jpg"),false);
float[] rgba = {1f, 1f, 1f};
gl.glMaterialfv(GL.GL_FRONT, GL.GL_AMBIENT, rgba, 0);
gl.glMaterialfv(GL.GL_FRONT, GL.GL_SPECULAR, rgba, 0);
gl.glMaterialf(GL.GL_FRONT, GL.GL_SHININESS, 0.5f);
sunTexture.enable();
sunTexture.bind();
GLUquadric sun = glu.gluNewQuadric();
glu.gluQuadricTexture(sun, true);
glu.gluSphere(sun, 5, DETAIL, DETAIL);
sunTexture.disable();
As GLU generates texture coordinates itself and submits them via glTexCoord, I think there is no need to enable texture-coordinate generation (GL_TEXTURE_GEN_S/T). I suppose the GLU-generated texture coordinates get overridden by the ones from texgen.
I also see that you submit an array of three floats to glMaterial, which expects RGBA (4 floats). But since I work with C++, I may be wrong and this works in JOGL.
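A sketch of both suggested changes, written as C here (the JOGL calls mirror these one-to-one, with JOGL's extra array-offset argument):
/* let GLU's own texture coordinates through: no texgen */
glDisable(GL_TEXTURE_GEN_S);
glDisable(GL_TEXTURE_GEN_T);
/* glMaterialfv reads four floats (RGBA), so pass all four */
GLfloat rgba[4] = { 1.f, 1.f, 1.f, 1.f };
glMaterialfv(GL_FRONT, GL_AMBIENT, rgba);
glMaterialfv(GL_FRONT, GL_SPECULAR, rgba);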
I found the problem: I had set
gl.glFrustum(-20, 20, -20, 20, 0.1, 400);
After changing it to
gl.glFrustum(-20, 20, -20, 20, 1, 400);
it appears OK.
I have the following code:
glNormal3f(0, 0, 1);
glColor3f(1, 0, 0);
glBegin(GL_POINTS);
glVertex3f(-45, 75, -5);
glVertex3f(-45, 90, -5);
glVertex3f(-30, 90, -5);
glVertex3f(-30, 80, -5);
glVertex3f(-35, 80, -5);
glVertex3f(-35, 75, -5);
glVertex3f(-45, 75, -5);
glEnd();
glColor3f(1, 1, 0);
glBegin(GL_POLYGON);
glVertex3f(-45, 75, -5);
glVertex3f(-45, 90, -5);
glVertex3f(-30, 90, -5);
glVertex3f(-30, 80, -5);
glVertex3f(-35, 80, -5);
glVertex3f(-35, 75, -5);
glVertex3f(-45, 75, -5);
glEnd();
Notice how the code between glBegin and glEnd in each instance is identical.
But the vertices of the GL_POLYGON (yellow) don't match up with the GL_POINTS (red).
Here is a screenshot:
The more I use OpenGL, the more I hate it. But I guess it's probably something I'm doing wrong... What is up?
That's because your polygon is not convex: the corner at (-35, 80) is concave (reflex). GL_POLYGON only works for convex polygons.
Try using GL_TRIANGLE_FAN instead: your polygon is star-shaped, so you can draw it with a single GL_TRIANGLE_FAN if you start from a point in its kernel (of your vertices, the one at (-45, 90) satisfies this condition).
If you expect more complicated polygons, you need to break them up into convex bits (triangles would be best) to render them.
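For your exact vertices, the fan could be a drop-in replacement for the GL_POLYGON block (a sketch; the first vertex of the fan must be a kernel vertex):
glColor3f(1, 1, 0);
glBegin(GL_TRIANGLE_FAN);
glVertex3f(-45, 90, -5);   /* fan center: kernel vertex, sees all others */
glVertex3f(-30, 90, -5);
glVertex3f(-30, 80, -5);
glVertex3f(-35, 80, -5);
glVertex3f(-35, 75, -5);
glVertex3f(-45, 75, -5);
glEnd();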
The specification says for GL_POLYGON:
Only convex polygons are guaranteed to be drawn correctly by the GL. If a specified polygon is nonconvex when projected onto the window, then the rendered polygon need only lie within the convex hull of the projected vertices defining its boundary.
Since you are defining a concave polygon, this is valid behaviour.
Try using a triangle strip instead of a polygon. It would be much faster because a polygon needs to be triangulated by the GL.
BTW: I haven't used GL_POLYGON so far. But I think you do not need to specify the last vertex (which equals the first one). As far as I know, it will be connected automatically.
The GLU Tessellator is a handy way to automatically convert concave (and otherwise complex) polygons into proper OpenGL-friendly ones. Check it out:
http://glprogramming.com/red/chapter11.html
http://www.songho.ca/opengl/gl_tessellation.html
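For reference, a minimal C sketch of feeding the polygon above through the tessellator (the callback casts are abbreviated, and GLU tessellation works on GLdouble coordinates):
GLdouble pts[6][3] = {
    { -45, 75, -5 }, { -45, 90, -5 }, { -30, 90, -5 },
    { -30, 80, -5 }, { -35, 80, -5 }, { -35, 75, -5 },
};
GLUtesselator *tess = gluNewTess();
/* classic red-book setup: route tessellator output straight to GL */
gluTessCallback(tess, GLU_TESS_BEGIN,  (void (*)()) glBegin);
gluTessCallback(tess, GLU_TESS_VERTEX, (void (*)()) glVertex3dv);
gluTessCallback(tess, GLU_TESS_END,    (void (*)()) glEnd);
gluTessBeginPolygon(tess, NULL);
gluTessBeginContour(tess);
for (int i = 0; i < 6; ++i)
    gluTessVertex(tess, pts[i], pts[i]);  /* pass coords as vertex data too */
gluTessEndContour(tess);
gluTessEndPolygon(tess);
gluDeleteTess(tess);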