Texture coordinates using vertex_list_indexed in pyglet - OpenGL

I am new to using textures in pyglet (and OpenGL generally), and I am stumped by what is probably a dumb mistake: I am attempting to apply a texture, derived from a PNG image, to a square composed of two triangles. I can successfully use indexed vertex lists to define the geometry, but when I specify texture coordinates (u, v) for each vertex of each triangle, I get:
Traceback (most recent call last):
  File "test_tex.py", line 37, in <module>
    ('t2f', texture_coords))
ValueError: Can only assign sequence of same size
suggesting that my list of texture coordinates is not the correct size. Does anyone see the problem? A related post that did not quite help me: Triangle texture mapping OpenGL.
Please check out my code below for details, thanks!
import pyglet
config = pyglet.gl.Config(sample_buffers=1, samples=4,
                        depth_size=16, double_buffer=True)
window = pyglet.window.Window(resizable=True, config=config, vsync=True)
# create vertex data
num_verts = 4
side_length = 1.0
half_side = side_length / 2.0
# vertex positions of a square centered at the origin,
# ordered counter-clockwise, starting at lower right corner
vertex_positions = [ half_side, -half_side,
                     half_side,  half_side,
                    -half_side,  half_side,
                    -half_side, -half_side]
# six pairs of texture coords, one pair (u,v) for each vertex
# of each triangle
texture_coords = [1.0, 0.0,
                  1.0, 1.0,
                  0.0, 1.0,
                  0.0, 1.0,
                  0.0, 0.0,
                  1.0, 0.0]
# indices of the two triangles that make the square
# counter-clockwise orientation
triangle_indices = [0, 1, 2,
                    2, 3, 0]
# use indexed vertex list
vertex_array = pyglet.graphics.vertex_list_indexed(num_verts,
                                                   triangle_indices,
                                                   ('v2f', vertex_positions),
                                                   ('t2f', texture_coords))
# enable face culling, depth testing
pyglet.gl.glEnable(pyglet.gl.GL_CULL_FACE)
pyglet.gl.glEnable(pyglet.gl.GL_DEPTH_TEST)
# texture set up
pic = pyglet.image.load('test.png')
texture = pic.get_texture()
pyglet.gl.glEnable(texture.target)
pyglet.gl.glBindTexture(texture.target, texture.id)
# set modelview matrix
pyglet.gl.glMatrixMode(pyglet.gl.GL_MODELVIEW)
pyglet.gl.glLoadIdentity()
pyglet.gl.gluLookAt(0, 0, 5, 0, 0, 0, 0, 1, 0)
@window.event
def on_resize(width, height):
    pyglet.gl.glViewport(0, 0, width, height)
    pyglet.gl.glMatrixMode(pyglet.gl.GL_PROJECTION)
    pyglet.gl.glLoadIdentity()
    pyglet.gl.gluPerspective(45.0, width / float(height), 1.0, 100.0)
    return pyglet.event.EVENT_HANDLED
@window.event
def on_draw():
    window.clear()
    vertex_array.draw(pyglet.gl.GL_TRIANGLES)
pyglet.app.run()

It's probably complaining because you have 6 pairs of texture coordinates but only 4 vertices. You need one texture coordinate pair per vertex, so there should be 4 pairs of floats in your texture_coords array:
texture_coords = [1.0, 0.0,
                  1.0, 1.0,
                  0.0, 1.0,
                  0.0, 0.0]
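For reference, here is a minimal sketch of how the corrected list fits into the call from the question (the indices are what repeat across the two triangles, not the texture coordinates; the hidden window is created only so the example has a GL context of its own):

import pyglet

window = pyglet.window.Window(visible=False)  # just provides a GL context

num_verts = 4
half_side = 0.5
vertex_positions = [ half_side, -half_side,
                     half_side,  half_side,
                    -half_side,  half_side,
                    -half_side, -half_side]
# one (u, v) pair per vertex, in the same order as the positions above
texture_coords = [1.0, 0.0,
                  1.0, 1.0,
                  0.0, 1.0,
                  0.0, 0.0]
triangle_indices = [0, 1, 2,
                    2, 3, 0]
vertex_array = pyglet.graphics.vertex_list_indexed(
    num_verts,                  # 4 vertices -> 4 entries per attribute
    triangle_indices,           # 6 indices reusing those 4 vertices
    ('v2f', vertex_positions),  # 8 floats = 4 * (x, y)
    ('t2f', texture_coords))    # 8 floats = 4 * (u, v)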

Related

PyOpenGL gluLookAt() clarity

I'm trying to understand what I'm doing wrong displaying two different cubes with a grid through the x and z axis. I'm using gluLookAt() to view both cubes at the same angle. I'm very confused why the first viewport does not show the grid but the second one does. Here's my code and an example picture of why I'm confused.
def draw(c1, c2):
    glClearColor(0.7, 0.7, 0.7, 0)
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    glBegin(GL_LINES)
    for edge in grid_edges:
        for vertex in edge:
            glColor3fv((0.0, 0.0, 0.0))
            glVertex3fv(grid_vertices[vertex])
    glEnd()
    glViewport(0, 0, WIDTH // 2, HEIGHT)
    glLoadIdentity()
    gluPerspective(90, (display[0] / display[1]) / 2, 0.1, 50.0)
    gluLookAt(c1.center_pos[0], c1.center_pos[1], c1.center_pos[2] + 8, c1.center_pos[0], c1.center_pos[1], c1.center_pos[2], 0, 1, 0)
    glPushMatrix()
    glTranslatef(c1.center_pos[0], c1.center_pos[1], c1.center_pos[2])
    glRotatef(c1.rotation[0], c1.rotation[1], c1.rotation[2], c1.rotation[3])
    glTranslatef(-c1.center_pos[0], -c1.center_pos[1], -c1.center_pos[2])
    glBegin(GL_LINES)
    for edge in c1.edges:
        for vertex in edge:
            glColor3fv((0, 0, 0))
            glVertex3fv(c1.vertices[vertex])
    glEnd()
    glPopMatrix()
    glViewport(WIDTH // 2, 0, WIDTH // 2, HEIGHT)
    glLoadIdentity()
    gluPerspective(90, (display[0] / display[1]) / 2, 0.1, 50.0)
    gluLookAt(c2.center_pos[0], c2.center_pos[1], c2.center_pos[2] + 8, c2.center_pos[0], c2.center_pos[1], c2.center_pos[2], 0, 1, 0)
    glPushMatrix()
    glTranslatef(c2.center_pos[0], c2.center_pos[1], c2.center_pos[2])
    glRotatef(c2.rotation[0], c2.rotation[1], c2.rotation[2], c2.rotation[3])
    glTranslatef(-c2.center_pos[0], -c2.center_pos[1], -c2.center_pos[2])
    glBegin(GL_LINES)
    for edge in c2.edges:
        for vertex in edge:
            glColor3fv((0, 0, 0))
            glVertex3fv(c2.vertices[vertex])
    glEnd()
    glPopMatrix()
OpenGL is a state machine. Once a state is set, it persists even beyond frames: if you change the viewport or set a matrix, that viewport and matrix are still in effect at the beginning of the next frame. These states are not "reset" from one frame to the next. You need to set the viewport and load the identity matrix at the beginning of draw:
def draw(c1, c2):
    glClearColor(0.7, 0.7, 0.7, 0)
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    glViewport(0, 0, WIDTH, HEIGHT)
    glLoadIdentity()
    glBegin(GL_LINES)
    for edge in grid_edges:
        for vertex in edge:
            glColor3fv((0.0, 0.0, 0.0))
            glVertex3fv(grid_vertices[vertex])
    glEnd()
    # [...]

Translating a 3D model to 2D using Assimp

I'm using C++ to turn a 3D model, passed in via command-line arguments, into a 2D picture with Assimp. However, I'm not sure of the best way to go about it. I have a basic hard-coded version that creates a set object, but I need to redo it using vectors and loops. What's the best way to go about it?
void createSimpleQuad(Mesh &m) {
  // Clear out vertices and elements
  m.vertices.clear();
  m.indices.clear();
  // Create four corners
  Vertex upperLeft, upperRight;
  Vertex lowerLeft, lowerRight;
  Vertex upperMiddle;
  // Set positions of vertices
  // Note: glm::vec3(x, y, z)
  upperLeft.position = glm::vec3(-0.5, 0.5, 0.0);
  upperRight.position = glm::vec3(0.5, 0.5, 0.0);
  lowerLeft.position = glm::vec3(-0.5, -0.5, 0.0);
  lowerRight.position = glm::vec3(0.5, -0.5, 0.0);
  upperMiddle.position = glm::vec3(-0.9, 0.5, 0.0);
  // Set vertex colors (red, green, blue, white)
  // Note: glm::vec4(red, green, blue, alpha)
  upperLeft.color = glm::vec4(1.0, 0.0, 0.0, 1.0);
  upperRight.color = glm::vec4(0.0, 1.0, 0.0, 1.0);
  lowerLeft.color = glm::vec4(0.0, 0.0, 1.0, 1.0);
  lowerRight.color = glm::vec4(1.0, 1.0, 1.0, 1.0);
  upperMiddle.color = glm::vec4(0.5, 0.15, 0.979797979, 1.0);
  // Add to mesh's list of vertices
  m.vertices.push_back(upperLeft);
  m.vertices.push_back(upperRight);
  m.vertices.push_back(lowerLeft);
  m.vertices.push_back(lowerRight);
  m.vertices.push_back(upperMiddle);
  // Add indices for two triangles
  m.indices.push_back(0);
  m.indices.push_back(3);
  m.indices.push_back(1);
  m.indices.push_back(0);
  m.indices.push_back(2);
  m.indices.push_back(3);
  m.indices.push_back(0);
  m.indices.push_back(2);
  m.indices.push_back(4);
}
If you want to generate a 2D picture out of a 3D model you need to:
Import the model
Render it via a common render lib into a texture, or manually by using our viewer and taking a snapshot
At the moment there is no post-process in Assimp that generates a 2D view automatically.
But when you want to do this with your own render code it is not so hard to do. After importing your model you have to:
Get the bounding box for your imported asset; just check the OpenGL samples in the assimp repo for some tips
Calculate the diameter of this bounding box
Create a camera; for OpenGL you can use glm to calculate the view matrix
Place the asset at the (0|0|0) world coordinate
Move your camera back by the diameter and let it look at (0|0|0), as sketched below
Render the view into a 2D texture or just take a screenshot
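A rough sketch of the bounding-box and camera steps, assuming the imported vertex positions have already been gathered into an (N, 3) numpy array (the function name is hypothetical, and PyOpenGL's gluLookAt stands in for whatever camera/math library you actually use, e.g. glm in C++):

import numpy as np
from OpenGL.GLU import gluLookAt

def frame_model(all_vertices):
    """all_vertices: (N, 3) array of every vertex position in the asset."""
    bbox_min = all_vertices.min(axis=0)
    bbox_max = all_vertices.max(axis=0)
    center = (bbox_min + bbox_max) / 2.0
    diameter = float(np.linalg.norm(bbox_max - bbox_min))

    # Place the asset at (0|0|0): translate it by minus its bbox center.
    model_translation = -center

    # Back the camera off by the diameter and let it look at the origin.
    gluLookAt(0.0, 0.0, diameter,   # eye position
              0.0, 0.0, 0.0,        # look-at target
              0.0, 1.0, 0.0)        # up vector
    return model_translation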

2D Shape Coordinates with Modern OpenGL

I'm trying to render a series of 2D shapes (rectangle, circle, etc.) in modern OpenGL, hopefully without the use of any transformation matrices. I would like to be able to specify the coordinates for, say, a rectangle of 2 triangles like so:
float V[] = { 20.0, 20.0, 0.0,
              40.0, 20.0, 0.0,
              40.0, 40.0, 0.0,
              40.0, 40.0, 0.0,
              20.0, 40.0, 0.0,
              20.0, 20.0, 0.0 };
You can see that the vertex coordinates are specified in viewport? space (I believe that's what it's called). Now, when this gets rendered by OpenGL, it doesn't work, because clip space goes from -1.0 to 1.0 with the origin in the center.
What would be the correct way for me to handle this? I initially thought adjusting glClipControl to upper-left and 0-to-1 would work, but it didn't. With clip control set to upper-left and 0-to-1, the origin was still at the center, but it did allow the Y axis to increase as it moves downward (which is a start).
Ideally, I would love to get OpenGL to treat 0.0, 0.0 as the top left and 1.0, 1.0 as the bottom right; then I would just normalise each vertex position, but I have no idea how to get OpenGL to use that type of coordinate system.
One can easily do this transformation without matrices in the vertex shader:
// From pixels to 0-1
vPos.xy /= vResolution.xy;
// Flip Y so that 0 is top
vPos.y = (1.-vPos.y);
// Map to NDC -1,+1
vPos.xy = vPos.xy*2.-1.;
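The same mapping can also be checked on the CPU; a small numpy sketch (the 640x480 resolution is a made-up placeholder, and the vertices are the pixel-space ones from the question):

import numpy as np

width, height = 640.0, 480.0   # hypothetical viewport resolution

V = np.array([[20.0, 20.0, 0.0],
              [40.0, 20.0, 0.0],
              [40.0, 40.0, 0.0],
              [40.0, 40.0, 0.0],
              [20.0, 40.0, 0.0],
              [20.0, 20.0, 0.0]])

ndc = V.copy()
ndc[:, 0] = (V[:, 0] / width) * 2.0 - 1.0         # x: pixels -> [-1, 1]
ndc[:, 1] = (1.0 - V[:, 1] / height) * 2.0 - 1.0  # y: flip so y=0 is the top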

How to texture a "perfect cube" drawn with triangles?

I'm trying to map a texture onto a cube which is basically a triangle strip with 8 vertices and 14 indices:
static const GLfloat vertices[8] =
{
    -1.f, -1.f, -1.f,
    -1.f, -1.f,  1.f,
    -1.f,  1.f, -1.f,
    -1.f,  1.f,  1.f,
     1.f, -1.f, -1.f,
     1.f, -1.f,  1.f,
     1.f,  1.f, -1.f,
     1.f,  1.f,  1.f
};
static const GLubyte indices[14] =
{
    2, 0, 6, 4, 5, 0, 1, 2, 3, 6, 7, 5, 3, 1
};
As you can see, it starts drawing the back with 4 indices (2, 0, 6, 4), then the bottom with 3 more indices (5, 0, 1), and then continues with single triangles: 1, 2, 3 is a triangle on the left, 3, 6, 7 is a triangle on the top, and so on...
I'm a bit lost on how to map a texture onto this cube. This is my texture (you get the idea):
I manage to get the back textured and can somehow add something to the front, but the other 4 faces are totally messed up, and I'm a bit confused about how the shader deals with the triangles with regard to the texture coordinates.
The best I could achieve is this:
You can clearly see the triangles on the sides. And these are my texture coordinates:
static const GLfloat texCoords[] = {
    0.5, 0.5,
    1.0, 0.5,
    0.5, 1.0,
    1.0, 1.0,
    0.5, 0.5,
    0.5, 1.0,
    1.0, 0.5,
    1.0, 1.0,
    // ... ?
};
But whenever I try to add more coordinates it creates something totally different, and I can't really explain why. Any idea how to improve this?
The mental obstacle you're running into is assuming that your cube has only 8 vertices. Yes, there are only 8 corner positions. But each face adjacent to a corner shows a different part of the image and hence has a different texture coordinate at that corner.
Vertices are tuples of
position
texture coordinate
…
any other attribute you can come up with
As soon as one of those attributes changes, you're dealing with an entirely different vertex. Which means, for you, that you're dealing with 8 corner positions, but 3 different vertices at each corner, because three faces with different texture coordinates meet at that corner. So you actually need 24 vertices, making up 6 faces which share no vertices at all.
To make things easier for you as a beginner, don't put vertex positions and texture coordinates into different arrays. Instead write it like this:
struct vertex_pos3_tex2 {
    float x, y, z;
    float s, t;
} cube_vertices[24] =
{
    /* 24 vertices of position and texture coordinate */
};
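To illustrate how those 24 vertices come about, here is a small sketch that generates them by duplicating the 8 corner positions once per face. It simply assigns the full (0,0)-(1,1) range to every face; with an atlas like the one in the question you would substitute each face's sub-rectangle:

# 8 corner positions of a unit cube centered at the origin
corners = [(-1, -1, -1), (-1, -1,  1), (-1,  1, -1), (-1,  1,  1),
           ( 1, -1, -1), ( 1, -1,  1), ( 1,  1, -1), ( 1,  1,  1)]

# corner indices of each face, wound counter-clockwise as seen from outside
faces = [(1, 5, 7, 3),   # front  (+z)
         (4, 0, 2, 6),   # back   (-z)
         (0, 1, 3, 2),   # left   (-x)
         (5, 4, 6, 7),   # right  (+x)
         (3, 7, 6, 2),   # top    (+y)
         (0, 4, 5, 1)]   # bottom (-y)

uvs = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]

cube_vertices = []                       # 24 entries of (x, y, z, s, t)
for face in faces:
    for corner_index, (s, t) in zip(face, uvs):
        x, y, z = corners[corner_index]
        cube_vertices.append((x, y, z, s, t))

# Draw with GL_TRIANGLES and indices (0,1,2, 0,2,3), (4,5,6, 4,6,7), ...
# i.e. two triangles per face, with no vertex shared between faces.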

Light and shadow not working in OpenGL and C++

I am creating the solar system and I keep running into problems with the lighting. The first problem is that the moon casts no shadows on the earth and the earth casts no shadows on the moon.
The other problem is that the light shining on the earth and the moon is not coming from my sun, but from the center point of the orbit. I added the red lines in the picture below to show what I mean.
The picture below should illustrate what my two problems are.
Here is the code that is dealing with the lights and the planets.
glDisable(GL_LIGHTING);
drawCircle(800, 720, 1, 50);
//SUN
//Picture location, major radius, minor radius, major orbit, minor orbit, angle
Planet Sun ("/home/rodrtu/Desktop/SolarSystem/images/Sun.png",
            100, 99, 200.0, 0.0, 0.0);
double sunOrbS = 0;
double sunRotS = rotatSpeed/10;
cout << sunRotS << " Sun Rotation" << endl;
//orbit speed, rotation speed, moon reference coordinates (Parent planet's major and minor Axis)
Sun.displayPlanet(sunOrbS, sunRotS, 0.0, 0.0);
//Orbit path
//EARTH
GLfloat light_diffuse[] = { 1.5, 1.5, 1.5, 1.5 };
GLfloat pos[] = { 0.0, 0.0, 0.0, 200.0 };
glEnable(GL_LIGHTING);
glLightfv(GL_LIGHT0, GL_DIFFUSE, light_diffuse);
glLightfv(GL_LIGHT0, GL_POSITION, pos);
Planet Earth ("/home/rodrtu/Desktop/SolarSystem/images/EarthTopography.png",
              50, 49, 500.0, 450.0, 23.5);
double eaOrbS = orbitSpeed;
double eaRotS = rotatSpeed*3;
Earth.displayPlanet(eaOrbS, eaRotS, 0.0, 0.0);
//EARTH'S MOON
Planet Moon ("/home/rodrtu/Desktop/SolarSystem/images/moonTest.png",
             25, 23, 100.0, 100.0, 15);
double moOrbS = rotatSpeed*4;
double moRotS = eaOrbS;
Moon.displayPlanet(moOrbS, moRotS, Earth.getMajorAxis(), Earth.getMinorAxis());
orbitSpeed+=.9;
if (orbitSpeed > 359.0)
    orbitSpeed = 0.0;
rotatSpeed+=2.0;
if (rotatSpeed > 7190.0)
    rotatSpeed = 0.0;
These next functions are used to determine the orbit coordinates and location of each planet:
void Planet::setOrbit(double orbitSpeed, double rotationSpeed,
                      double moonOrbitX, double moonOrbitY)
{
    majorAxis = orbitSemiMajor * cos(orbitSpeed / 180.0 * Math::Constants<double>::pi);
    minorAxis = orbitSemiMinor * sin(orbitSpeed / 180.0 * Math::Constants<double>::pi);
    glTranslate(majorAxis+moonOrbitX, minorAxis+moonOrbitY, 0.0);
    glRotatef(orbitAngle, 0.0, 1.0, 1.0);
    glRotatef(rotationSpeed, 0.0, 0.0, 1.0);
}
void Planet::displayPlanet(double orbitSpeed, double rotationSpeed,
                           double moonOrbitX, double moonOrbitY)
{
    GLuint surf;
    Images::RGBImage surfaceImage;
    surfaceImage = Images::readImageFile(texture);
    glEnable(GL_TEXTURE_2D);
    glGenTextures(0, &surf);
    glBindTexture(GL_TEXTURE_2D, surf);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    surfaceImage.glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB);
    glPushMatrix();
    setOrbit(orbitSpeed, rotationSpeed, moonOrbitX, moonOrbitY);
    drawSolidPlanet(equatRadius, polarRadius, 1, 40, 40);
    glPopMatrix();
}
What am I doing wrong? I read up on the w component of GL_POSITION and I changed my position to be 200 (where the sun is centered), but the light source is still coming from the center of the orbit.
To make a proper reply to the light position issue:
[X, Y, Z, W] is called homogeneous coordinates.
A coordinate [X, Y, Z, W] in homogeneous space will be [X/W, Y/W, Z/W] in 3D space.
Now, consider the following W values:
W=1.0: [1.0, 1.0, 1.0, 1.0] is [1.0, 1.0, 1.0] in 3D space.
W=0.1: [1.0, 1.0, 1.0, 0.1] is [10.0, 10.0, 10.0] in 3D space.
W=0.001: [1.0, 1.0, 1.0, 0.001] is [1000.0, 1000.0, 1000.0] in 3D space.
As we keep moving towards W=0, the value [X/W, Y/W, Z/W] approaches a point at infinity. It's actually no longer a point, but a direction from [0,0,0] towards [X,Y,Z].
So when defining the light position we need to make sure to get this right.
W=0 defines a directional light, so x, y, z is a direction vector
W=1 defines a positional light, so x, y, z is a position in 3D space
You'll get to play around with this a lot once you dig deeper into matrix math. If you try to transform a direction (W=0) with a translation matrix, for example, it will not have any effect. This is very relevant here as well, since the light position will be transformed by the current modelview matrix.
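A quick numpy check of that last point (column-vector convention, matrix on the left; the numbers are just an arbitrary example):

import numpy as np

# Translation by (5, 0, 0)
T = np.array([[1, 0, 0, 5],
              [0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)

point     = np.array([1.0, 2.0, 3.0, 1.0])   # W = 1: a position
direction = np.array([1.0, 2.0, 3.0, 0.0])   # W = 0: a direction

print(T @ point)      # [6. 2. 3. 1.] -> moved by the translation
print(T @ direction)  # [1. 2. 3. 0.] -> unaffected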
Some easy to understand information here for further reading :
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/
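Applied to the code in the question, a minimal sketch of a positional light placed at the sun (PyOpenGL syntax for brevity, since the same calls exist in C; the (200, 0, 0) coordinates are a hypothetical stand-in for wherever the sun actually is, and the call must come after the modelview/camera matrix is set so the position gets transformed consistently with the planets):

from OpenGL.GL import (glEnable, glLightfv,
                       GL_LIGHTING, GL_LIGHT0, GL_POSITION)

# ... set up the modelview matrix / camera first ...

sun_position = (200.0, 0.0, 0.0, 1.0)   # W = 1.0 -> positional light at the sun
glEnable(GL_LIGHTING)
glEnable(GL_LIGHT0)
glLightfv(GL_LIGHT0, GL_POSITION, sun_position)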
If OpenGL doesn't have a "cast shadow" function, how could I accomplish this then?
What you must understand is that OpenGL has no concept of a "scene". All OpenGL does is draw points, lines or triangles to the screen, one at a time. Once something is drawn, it has no influence on the following drawing operations.
So to do something fancy like shadows, you must get, well, artistic. By that I mean: like an artist who paints a convincing picture with depth using "just" a brush and a palette of colours, you must use OpenGL in an artistic way to recreate the effects you desire. Drawing a shadow can be done in various ways, but the most popular one is known by the term Shadow Mapping.
Shadow Mapping is a two-step process. In the first step the scene is rendered into a "grayscale" picture as "seen" from the point of view of the light, where the distance from the light is drawn as the "gray" value. This is called a shadow depth map.
In the second step the scene is drawn as usual, and the light's shadow depth map(s) are projected into the scene, as if the light were a slide projector (where everything receives that image, because OpenGL by itself does not shadow). In a shader, the depth value in the shadow depth map is compared with the actual distance to the light source for each processed fragment. If the fragment's distance to the light is farther than the corresponding pixel in the shadow map, something got in front of the currently processed geometry fragment while the shadow map was rendered; the fragment hence lies in shadow and is drawn in a shadow colour (usually the ambient illumination colour). You might want to combine this with an ambient occlusion effect to simulate soft, self-shadowing ambient illumination.