I'm trying to map a texture onto a cube which is basically a triangle strip with 8 vertices and 14 indices:
static const GLfloat vertices[24] = /* 8 corners, 3 components each */
{
    -1.f, -1.f, -1.f,
    -1.f, -1.f,  1.f,
    -1.f,  1.f, -1.f,
    -1.f,  1.f,  1.f,
     1.f, -1.f, -1.f,
     1.f, -1.f,  1.f,
     1.f,  1.f, -1.f,
     1.f,  1.f,  1.f
};
static const GLubyte indices[14] =
{
    2, 0, 6, 4, 5, 0, 1, 2, 3, 6, 7, 5, 3, 1
};
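(A strip like this would typically be drawn with a single indexed call; this is a sketch, since the actual draw code isn't shown:)
glDrawElements(GL_TRIANGLE_STRIP, 14, GL_UNSIGNED_BYTE, indices); /* legacy client-side index array */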
As you can see, it starts drawing the back with 4 indices (2, 0, 6, 4), then the bottom with 3 more (5, 0, 1), and then continues with single triangles: 1, 2, 3 is a triangle on the left, 3, 6, 7 is a triangle on the top, and so on...
I'm a bit lost how to map a texture on this cube. This is my texture (you get the idea):
I managed to get the back textured and can somehow add something to the front, but the other 4 faces are totally messed up, and I'm a bit confused about how the shader deals with the triangles with regard to the texture coordinates.
The best I could achieve is this:
You can clearly see the triangles on the sides. And these are my texture coordinates:
static const GLfloat texCoords[] = {
    0.5, 0.5,
    1.0, 0.5,
    0.5, 1.0,
    1.0, 1.0,
    0.5, 0.5,
    0.5, 1.0,
    1.0, 0.5,
    1.0, 1.0,
    // ... ?
};
But whenever I try to add more coordinates, the result is something completely different that I can't really explain. Any idea how to improve this?
The mental obstacle you're running into is assuming that your cube has only 8 vertices. Yes, there are only 8 corner positions. But each face adjacent to that corner shows a different part of the image and hence has a different texture coordinate at that corner.
Vertices are tuples of
position
texture coordinate
…
any other attribute you can come up with
As soon as one of those attributes changes, you're dealing with an entirely different vertex. Which means, for you, that you have 8 corner positions, but 3 different vertices at each corner, because three faces with different texture coordinates meet there. So you actually need 24 vertices, making up 6 faces which share no vertices at all.
To make things easier for you as a beginner, don't put vertex positions and texture coordinates into different arrays. Instead write it like this:
struct vertex_pos3_tex2 {
    float x, y, z;
    float s, t;
} cube_vertices[24] =
{
    /* 24 vertices of position and texture coordinate */
};
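To make the pattern concrete, here is how one face could be written out in full. This is only a sketch: the face layout, the (s,t) window, the buffer name vbo and the attribute locations 0/1 are all assumptions, not your actual setup.
#include <stddef.h> /* offsetof */

/* The +Z "front" face, here mapped to the whole texture; the other five
   faces follow the same pattern with their own corner positions and their
   own sub-rectangle of the texture atlas. */
static const struct vertex_pos3_tex2 front_face[4] = {
    { -1.f, -1.f, 1.f,  0.f, 0.f }, /* bottom left  */
    {  1.f, -1.f, 1.f,  1.f, 0.f }, /* bottom right */
    {  1.f,  1.f, 1.f,  1.f, 1.f }, /* top right    */
    { -1.f,  1.f, 1.f,  0.f, 1.f }  /* top left     */
};

/* Since the six faces share no vertices, a single strip no longer pays off;
   draw two triangles per face instead (offset this pattern by 4 per face): */
static const GLubyte face_indices[6] = { 0, 1, 2,  2, 3, 0 };

/* ... then, in your init code: interleaved attributes from one buffer,
   both using the struct size as stride: */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(struct vertex_pos3_tex2),
                      (void *) offsetof(struct vertex_pos3_tex2, x));
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(struct vertex_pos3_tex2),
                      (void *) offsetof(struct vertex_pos3_tex2, s));
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);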
Related
The following is the part of the code that I am using to draw a rectangle.
I can see the rectangle on the display, but I am confused about the quadrants and coordinates on the display plane.
int position_loc = glGetAttribLocation(ProgramObject, "vertex");
int color_loc = glGetAttribLocation(ProgramObject, "color_a");
GLfloat Vertices[4][4] = {
    -0.8f, 0.6f, 0.0f, 1.0f,
    -0.1f, 0.6f, 0.0f, 1.0f,
    -0.8f, 0.8f, 0.0f, 1.0f,
    -0.1f, 0.8f, 0.0f, 1.0f
};
GLfloat red[4] = {1, 0, 1, 1};
glUniform4fv(glGetUniformLocation(ProgramObject, "color"), 1, red);
PrintGlError();
glEnableVertexAttribArray(position_loc);
PrintGlError();
printf("\nAfter Enable Vertex Attrib Array");
glBindBuffer(GL_ARRAY_BUFFER, VBO);
PrintGlError();
glVertexAttribPointer(position_loc, 4, GL_FLOAT, GL_FALSE, 0, 0);
PrintGlError();
glBufferData(GL_ARRAY_BUFFER, sizeof Vertices, Vertices, GL_DYNAMIC_DRAW);
PrintGlError();
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
PrintGlError();
So, keeping in mind the above vertices:
GLfloat Vertices[4][4] = {
    x,  y,  p,  q,
    x1, y1, p1, q1,
    x2, y2, p2, q2,
    x3, y3, p3, q3,
};
What are p, q .. p1, q1 ..? On what basis are these last two components determined?
And how do they affect x, y or x1, y1 .. and so on?
OpenGL works with a 3-dimensional coordinate system extended by a homogeneous coordinate. Usually the values are denoted [x, y, z, w], with w being the homogeneous part. Before any projection, [x, y, z] describe the position of the point in 3D space. w will usually be 1 for positions and 0 for directions.
During rendering, OpenGL handles transformations (vertex shader) resulting in a new point [x', y', z', w']. The w component is needed here because it allows us to describe all transformations, especially translations and (perspective) projections as 4x4 matrices. Have a look at 1 and 2 for details about transformations.
Afterwards clipping happens, and the resulting vector gets divided by its w component, giving so-called normalized device coordinates [x'/w', y'/w', z'/w', 1]. These NDC coordinates are what is actually used to draw to the screen. The first and second components (x'/w' and y'/w') are mapped to the viewport size to get the final pixel coordinates. The third component (z'/w', aka depth) is used to determine which points are in front during depth testing. The last coordinate has no purpose here anymore.
In your case, without using any transformations or projections, you are drawing directly in NDC space, thus z can be used to order triangles in depth and w always has to be 1.
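To make the divide concrete, here is a toy calculation in plain C (the numbers are made up and not from the question's code):
#include <stdio.h>

int main(void) {
    /* a clip-space point [x', y', z', w'], e.g. after a perspective projection */
    float x = 2.0f, y = 1.0f, z = 4.0f, w = 4.0f;

    /* perspective divide -> normalized device coordinates in [-1, 1] */
    float ndc_x = x / w;                        /* 0.5  */
    float ndc_y = y / w;                        /* 0.25 */
    float ndc_z = z / w;                        /* 1.0, exactly at the far plane */

    /* viewport transform for an 800x600 window with origin (0, 0) */
    float px = (ndc_x * 0.5f + 0.5f) * 800.0f;  /* 600 */
    float py = (ndc_y * 0.5f + 0.5f) * 600.0f;  /* 375 */

    printf("ndc = (%g, %g, %g), pixel = (%g, %g)\n", ndc_x, ndc_y, ndc_z, px, py);
    return 0;
}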
For practice I am setting up a 2D/orthographic rendering pipeline in OpenGL to be used for a simple game, but I am having issues related to the coordinate system.
In short, rotations distort 2D shapes, and I cannot seem to figure out why. I am also not entirely sure that my coordinate system is sound.
First I looked for previous answers, but the most relevant one (2D opengl rotation causes sprite distortion) indicates that the problem there was an incorrect ordering of transformations, whereas for now I am using just a view matrix and a projection matrix, multiplied in the correct order in the vertex shader:
gl_Position = projection * view * model * vec4(a_position, 1.0); // (The model is just the identity matrix.)
To summarize my setup so far:
- I am successfully uploading a quad that should stretch across the whole screen:
GLfloat vertices[] = {
    -wf,  hf, 0.0f, 0.0, 0.0, 1.0, 1.0, // top left
    -wf, -hf, 0.0f, 0.0, 0.0, 1.0, 1.0, // bottom left
     wf, -hf, 0.0f, 0.0, 0.0, 1.0, 1.0, // bottom right
     wf,  hf, 0.0f, 0.0, 0.0, 1.0, 1.0, // top right
};
GLuint indices[] = {
    0, 1, 2, // first triangle
    2, 3, 0, // second triangle
};
wf and hf are 1, and I am trying to use a -1 to 1 coordinate system so I don't need to scale by the resolution in shaders (though I am not sure that this is correct to do.)
My viewport and orthographic matrix:
glViewport(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT);
...
glm::mat4 mat_ident(1.0f);
glm::mat4 mat_projection = glm::ortho(-1.0f, 1.0f, -1.0f, 1.0f, -1.0f, 1.0f);
... though this clearly does not factor in the screen width and height. I have seen others use the width and height instead of 1s, but that seems to break the setup or display nothing.
I rotate with a static method that modifies a struct containing a glm::quat (dividing time by 1000 to get seconds):
main_cam.rotate((GLfloat)curr_time / TIME_UNIT_TO_SECONDS, 0.0f, 0.0f, 1.0f);
// which does: glm::angleAxis(angle, glm::vec3(x, y, z) * orientation)
Lastly, I pass the matrix as a uniform:
glUniformMatrix4fv(MAT_LOC, 1, GL_FALSE, glm::value_ptr(mat_projection * FreeCamera_calc_view_matrix(&main_cam) * mat_ident));
...and multiply in the vertex shader
gl_Position = u_matrix * vec4(a_position, 1.0);
v_position = a_position.xyz;
The full-screen quad rotates on its center (0, 0 as I wanted), but its length and width distort, which means that I didn't set something correctly.
My best guess is that I haven't created the right ortho matrix, but admittedly I have had trouble finding anything else on Stack Overflow or elsewhere that might help me debug. Most answers suggest that the matrix multiplication order is wrong, but that is not the case here.
A secondary question: should I not set my coordinates to 1/-1 in the context of a 2D game? I did so in order to make writing shaders easier. I am also concerned about character/object movement once I add model matrices.
What might be causing the issue? If I need to multiply the arguments to gl::ortho by width and height, then how do I transform coordinates so v_position (my "in"/"varying" interpolated version of the position attribute) works in -1 to 1 as it should in a shader? What are the implications of choosing a particular coordinates system when it comes to ease of placing entities? The game will use sprites and textures, so I was considering a pixel coordinate system, but that quickly became very challenging to reason about on the shader side. I would much rather have THIS working.
Thank you for your help.
EDIT: Is it possible that my varying/interpolated v_position should be set to the calculated gl_Position value instead of the attribute position?
Try accounting for the aspect ratio of the window you are displaying on in the first two parameters of glm::ortho:
GLfloat aspectRatio = (GLfloat)SCREEN_WIDTH / (GLfloat)SCREEN_HEIGHT; // cast, in case the screen dimensions are integers
glm::mat4 mat_projection = glm::ortho(-aspectRatio, aspectRatio, -1.0f, 1.0f, -1.0f, 1.0f);
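With this projection the visible x range becomes [-aspectRatio, aspectRatio], so a quad with wf = hf = 1 stays square under rotation but no longer spans the full width. If you want it to fill the screen again, one option is to scale the (currently identity) model matrix; a sketch, where mat_model is a made-up name:
glm::mat4 mat_model = glm::scale(glm::mat4(1.0f), glm::vec3(aspectRatio, 1.0f, 1.0f));
glUniformMatrix4fv(MAT_LOC, 1, GL_FALSE,
                   glm::value_ptr(mat_projection * FreeCamera_calc_view_matrix(&main_cam) * mat_model));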
I was working on learning OpenGL and I was tasked with creating the figure below:
This is what my intention was, but the first time I wrote it I buffered the colors as floats from 0 - 255 instead of 0.0 - 1.0. Clearly that was wrong, but this is what was displayed:
Only the center triangle is displayed, and only as an outline whose colors are the first three vertex colors. Why did this happen? What does OpenGL do when I buffer colors that aren't in the range [0.0, 1.0]? I couldn't find documentation on this.
My shaders are as follows:
vertex:
layout (location = 0) in vec3 Position;
layout (location = 2) in vec4 vertexColor;
out vec4 vertexColor0;
void main() {
    gl_Position = vec4(Position, 1.0f);
    vertexColor0 = vertexColor;
}
fragment:
in vec4 vertexColor0;
void main() {
    gl_FragColor = vertexColor0;
}
And here's the code I use for buffering data and drawing data:
static const int npoints = 9;
static const glm::vec3 points[npoints] = {
    glm::vec3(-0.5,  0.5, 0.0),
    glm::vec3(-0.7,  0.0, 0.0),
    glm::vec3(-0.3,  0.0, 0.0),
    glm::vec3( 0.2,  0.0, 0.0),
    glm::vec3(-0.2,  0.0, 0.0),
    glm::vec3( 0.0, -0.5, 0.0),
    glm::vec3( 0.5,  0.5, 0.0),
    glm::vec3( 0.3,  0.0, 0.0),
    glm::vec3( 0.7,  0.0, 0.0)
};
//the incorrect version, in the correct version 255 is replaced with 1.0f and 127 with 0.5f
static const glm::vec4 colors[npoints] = {
    glm::vec4(0, 255, 255, 255),
    glm::vec4(255, 0, 255, 255),
    glm::vec4(255, 255, 0, 255),
    glm::vec4(255, 0, 0, 255),
    glm::vec4(0, 255, 0, 255),
    glm::vec4(0, 0, 255, 255),
    glm::vec4(0, 0, 0, 255),
    glm::vec4(255, 255, 255, 255),
    glm::vec4(127, 127, 127, 255)
};
//Create the VAO and the buffer data
void Figure::initialize() {
    glUseProgram(shaderProgram); //shaderProgram is a member set to the built shader above
    glGenVertexArrays(1, &vertexArrayObject); //vertexArrayObject is also a member
    glBindVertexArray(vertexArrayObject);
    GLuint VBO;
    glGenBuffers(1, &VBO);
    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glBufferData(GL_ARRAY_BUFFER,
                 npoints * sizeof(glm::vec3),
                 points,
                 GL_STATIC_DRAW);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
    glEnableVertexAttribArray(0);
    GLuint CBO;
    glGenBuffers(1, &CBO);
    glBindBuffer(GL_ARRAY_BUFFER, CBO);
    glBufferData(GL_ARRAY_BUFFER,
                 npoints * sizeof(glm::vec4),
                 colors,
                 GL_STATIC_DRAW);
    glVertexAttribPointer(2, 4, GL_FLOAT, GL_FALSE, 0, 0);
    glEnableVertexAttribArray(2);
}
//draw the figure
void Figure::draw() {
    glUseProgram(shaderProgram);
    glBindVertexArray(vertexArrayObject);
    glDrawArrays(GL_TRIANGLES, 0, npoints);
}
Since your values are so high, any point inside the triangle will have a little bit of each corner's color. Each corner supplies one of the RGB components. Let's evaluate a pixel a small distance away from the red corner. This pixel might get the unclamped values vec3(253.0f, 1.2f, 1.9f). If we clamp this value to the range 0.0f - 1.0f, you will see that it becomes vec3(1.0f, 1.0f, 1.0f), or white.
The reason the edges are not white is that these are the only places where the interpolated contribution of the third color is low enough that the 0.0f - 255.0f range does not overflow 1.0f for one of the components. Anywhere on the line between the red and blue points there is so little green that it does not overflow 1.0, therefore we get a purple line. If you look closely at the corners, you also see that these are the only places showing only the corner's own color (or at least only a small amount from the other corners).
Anywhere else in the triangle will be clamped to vec3(1.0f, 1.0f, 1.0f), and you get a white triangle.
EDIT: The reason the triangle on the left does not have these edges is that its corners have full intensity (255.0f) on two of the RGB components (vec3(255.0f, 255.0f, 0.0f), vec3(0.0f, 255.0f, 255.0f) and vec3(255.0f, 0.0f, 255.0f)). On one of the edges it interpolates between vec3(255.0f, 255.0f, 0.0f) and vec3(255.0f, 0.0f, 255.0f). Going only slightly away from one of the corners, the only component that is 0.0f interpolates towards 255.0f, since the other corner always has full intensity on that particular RGB component. So as soon as you move slightly away from the corner you get a value like, for instance, vec3(255.0f, 253.7f, 1.3f). This clamps to white, so in this case the edges are also white. If you increase the resolution you might see that exactly at the corner there is one pixel that is not fully white, but I am not sure about that.
The triangle on the right has full intensity on all RGB components in all corners except the one that is black. As soon as you move slightly away from the black corner, the values will be something like vec3(1.3f, 1.3f, 1.3f), which clamps towards white, so the entire triangle appears white. Again, if you increase the resolution you might see a black dot at the black corner.
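To see the clamping numerically, here is a toy C calculation for a fragment just inside the red corner of the center triangle; the barycentric weights are made up to roughly reproduce the vec3(253.0f, 1.2f, 1.9f) example above:
#include <stdio.h>

static float clampf(float v) { return v < 0.0f ? 0.0f : (v > 1.0f ? 1.0f : v); }

int main(void) {
    /* barycentric weights for a point very close to the red corner */
    float wg = 0.0047f, wb = 0.0075f, wr = 1.0f - wg - wb;

    /* interpolated, unclamped components with 0-255 vertex colors */
    float r = wr * 255.0f;  /* ~252 */
    float g = wg * 255.0f;  /* ~1.2 */
    float b = wb * 255.0f;  /* ~1.9 */

    /* what effectively reaches the framebuffer after clamping to [0, 1]: */
    printf("(%.1f, %.1f, %.1f) -> (%g, %g, %g)\n",
           r, g, b, clampf(r), clampf(g), clampf(b)); /* -> (1, 1, 1): white */
    return 0;
}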
I am new to using textures in pyglet (and OpenGL generally), and I am stumped over something that is probably a dumb mistake: I am attempting to apply a texture, derived from a PNG image, to a square that is composed of two triangles. I can successfully use indexed vertex lists to define geometry, but when I specify texture coordinates (u, v) for each vertex of each triangle, I get:
Traceback (most recent call last):
File "test_tex.py", line 37, in module
('t2f', texture_coords))
ValueError: Can only assign sequence of same size
suggesting that my list of texture coordinates is not the correct size. Anyone see the problem? A related post that did not quite help me: Triangle texture mapping OpenGL
Please check out my code below for details, thanks!
import pyglet
config = pyglet.gl.Config(sample_buffers=1, samples=4,
depth_size=16, double_buffer=True)
window = pyglet.window.Window(resizable=True, config=config, vsync=True)
# create vertex data
num_verts = 4
side_length = 1.0
half_side = side_length / 2.0
# vertex positions of a square centered at the origin,
# ordered counter-clockwise, starting at lower right corner
vertex_positions = [ half_side, -half_side,
half_side, half_side,
-half_side, half_side,
-half_side, -half_side]
# six pairs of texture coords, one pair (u,v) for each vertex
# of each triangle
texture_coords = [1.0, 0.0,
1.0, 1.0,
0.0, 1.0,
0.0, 1.0,
0.0, 0.0,
1.0, 0.0]
# indices of the two triangles that make the square
# counter-clockwise orientation
triangle_indices = [0, 1, 2,
2, 3, 0]
# use indexed vertex list
vertex_array = pyglet.graphics.vertex_list_indexed(num_verts,
triangle_indices,
('v2f', vertex_positions),
('t2f', texture_coords))
# enable face culling, depth testing
pyglet.gl.glEnable(pyglet.gl.GL_CULL_FACE)
pyglet.gl.glEnable(pyglet.gl.GL_DEPTH_TEST)
# texture set up
pic = pyglet.image.load('test.png')
texture = pic.get_texture()
pyglet.gl.glEnable(texture.target)
pyglet.gl.glBindTexture(texture.target, texture.id)
# set modelview matrix
pyglet.gl.glMatrixMode(pyglet.gl.GL_MODELVIEW)
pyglet.gl.glLoadIdentity()
pyglet.gl.gluLookAt(0, 0, 5, 0, 0, 0, 0, 1, 0)
@window.event
def on_resize(width, height):
    pyglet.gl.glViewport(0, 0, width, height)
    pyglet.gl.glMatrixMode(pyglet.gl.GL_PROJECTION)
    pyglet.gl.glLoadIdentity()
    pyglet.gl.gluPerspective(45.0, width / float(height), 1.0, 100.0)
    return pyglet.event.EVENT_HANDLED

@window.event
def on_draw():
    window.clear()
    vertex_array.draw(pyglet.gl.GL_TRIANGLES)
pyglet.app.run()
It's probably complaining because you have 6 sets of texture coordinates, but only 4 vertices. With an indexed vertex list, attributes are specified once per vertex, and the index list reuses them for both triangles, so there should be 4 pairs of floats in your texture_coords array:
texture_coords = [1.0, 0.0,
1.0, 1.0,
0.0, 1.0,
0.0, 0.0]
I have a sphere. I would like to clip it with planes as in the picture below. I need more than 10 clipping planes, but the maximum glClipPlane limit is 6. How can I solve this problem?
My sample code is below:
double[] eqn = { 0.0, 1.0, 0.0, 0.72};
double[] eqn2 = { -1.0, 0.0, -0.5, 0.80 };
double[] eqnK = { 0.0, 0.0, 1.0, 0.40 };
/* */
Gl.glClipPlane(Gl.GL_CLIP_PLANE0, eqn);
Gl.glEnable(Gl.GL_CLIP_PLANE0);
/* */
Gl.glClipPlane(Gl.GL_CLIP_PLANE1, eqn2);
Gl.glEnable(Gl.GL_CLIP_PLANE1);
Gl.glClipPlane(Gl.GL_CLIP_PLANE2, eqnK);
Gl.glEnable(Gl.GL_CLIP_PLANE2);
//// draw sphere
Gl.glColor3f(0.5f, .5f, 0.5f);
Glu.gluSphere(quadratic, 0.8f, 50, 50);
Glu.gluDeleteQuadric(quadratic);
Gl.glDisable(Gl.GL_CLIP_PLANE0);
Gl.glDisable(Gl.GL_CLIP_PLANE1);
Gl.glDisable(Gl.GL_CLIP_PLANE2);
You should consider multi-pass rendering and the stencil buffer.
Say you need 10 user clip planes and you are limited to 6: you can set up the first 6, render the scene into the stencil buffer, and then do a second pass with the remaining 4 clip planes. You would then use the stencil buffer to reject parts of the screen that were clipped in the prior pass. This way you get the effect of 10 user clip planes even though the implementation only supports 6.
// In this example you want 10 clip planes but you can only do 6 per-pass,
// so you need 1 extra pass.
const int num_extra_clip_passes = 1;
glClear (GL_STENCIL_BUFFER_BIT);
// Disable color and depth writes for the extra clipping passes
glDepthMask (GL_FALSE);
glColorMask (GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
// Increment the stencil buffer value by 1 for every part of the sphere
// that is not clipped.
glStencilOp (GL_KEEP, GL_KEEP, GL_INCR);
glStencilFunc (GL_ALWAYS, 1, 0xFFFF);
// Setup Clip Planes: 0 through 5
// Draw Sphere
// Reject any part of the sphere that did not pass _all_ of the clipping passes
glStencilFunc (GL_EQUAL, num_extra_clip_passes, 0xFFFF);
// Re-enable color and depth writes
glDepthMask (GL_TRUE);
glColorMask (GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
// Setup Leftover Clip Planes
// DrawSphere
It is not perfect: it is quite fill-rate intensive and limits you to a total of 1536 clip planes (given an 8-bit stencil buffer, 256 passes of 6 planes each), but it will get the job done without resorting to features present only in GLSL 130+ (namely gl_ClipDistance[]).
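For completeness, the gl_ClipDistance route mentioned above looks roughly like this. A sketch only: u_planes and the attribute name are made up, and the usable plane count is capped by GL_MAX_CLIP_DISTANCES, which is only guaranteed to be at least 8.
/* vertex shader (GLSL 1.30+) as a C string; planes are assumed to be given
   in the same space as "position" */
static const char *vs_clip =
    "#version 130\n"
    "#define NUM_PLANES 8\n"
    "uniform vec4 u_planes[NUM_PLANES];\n"
    "out float gl_ClipDistance[NUM_PLANES];\n" /* size the built-in array */
    "in vec4 position;\n"
    "void main() {\n"
    "    gl_Position = position;\n"
    "    for (int i = 0; i < NUM_PLANES; ++i)\n"
    "        gl_ClipDistance[i] = dot(position, u_planes[i]);\n"
    "}\n";

/* every gl_ClipDistance element the shader writes must also be enabled: */
for (int i = 0; i < 8; ++i)
    glEnable(GL_CLIP_DISTANCE0 + i);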
You can also just reuse Gl.glEnable(Gl.GL_CLIP_PLANE1); for another plane in a later pass, because you disabled it afterwards ...