I use OpenCTM to save 3D data; it also lets me save texture coordinates.
The texture coordinates are later loaded this way:
const CTMfloat * texCoords = ctm.GetFloatArray(CTM_UV_MAP_1);
for (CTMuint i = 0; i < numVertices; ++i)
{
    aMesh->mTexCoords[i].u = texCoords[i * 2];
    aMesh->mTexCoords[i].v = texCoords[i * 2 + 1];
}
texCoords is a float array (2 floats per vertex).
Later the texture coordinates are used this way:
glTexCoordPointer(2, GL_FLOAT, 0, &aMesh->mTexCoords[0]);
My problem is that I need to generate the texCoords array myself. What do u and v mean? Are they pixel positions? Do I have to scale them by 1/255?
While not strictly about programming, this tutorial gives a good introduction to what UV coordinates are: http://en.wikibooks.org/wiki/Blender_3D:_Noob_to_Pro/UV_Map_Basics In short, u and v are normalized texture coordinates in the range [0, 1]: (0, 0) is one corner of the texture image and (1, 1) is the opposite corner, regardless of the texture's pixel dimensions, so there is no scaling by 1/255 involved.
In general you don't "create" or "generate" the UV map programmatically. You have your 3D modeling artists create it as part of the modeling process.
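If you do end up filling the array yourself, for example with a simple planar projection, a minimal sketch could look like the following; vertices, numVertices and the bounding-box values minX/maxX/minZ/maxZ are assumed names, not part of the original code:
// Sketch: planar projection of each vertex's (x, z) onto normalized [0, 1] UV space.
// Assumes vertices holds x, y, z triples and texCoords holds u, v pairs per vertex.
std::vector<CTMfloat> texCoords(numVertices * 2);
for (CTMuint i = 0; i < numVertices; ++i)
{
    CTMfloat x = vertices[3 * i];
    CTMfloat z = vertices[3 * i + 2];
    texCoords[2 * i]     = (x - minX) / (maxX - minX); // u in [0, 1]
    texCoords[2 * i + 1] = (z - minZ) / (maxZ - minZ); // v in [0, 1]
}
With GL_REPEAT wrapping, values outside [0, 1] simply tile the texture, so you can also multiply the result by a repeat factor.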
I want to understand how to create loads of similar 2-D objects and then animate each one separately, using OpenGL.
I have a feeling that it will be done using this and glfwGetTime().
Can anyone here help point me in the right direction?
OK, so here is the general approach I have tried so far:
We have this array that holds the translations, created by the following code, which I have modified slightly to shift the locations based on time.
glm::vec2 translations[100];
int index = 0;
float offset = 0.1f;
float time = glfwGetTime(); // new code
for (int y = -10; y < 10; y += 2)
{
    for (int x = -10; x < 10; x += 2)
    {
        glm::vec2 translation;
        translation.x = (float)x / 10.0f + offset + time;        // new adjustment
        translation.y = (float)y / 10.0f + offset + time * time; // new adjustment
        translations[index++] = translation;
    }
}
Later, in the render loop,
while (!glfwWindowShouldClose(window))
{
    glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    shader.use();
    glBindVertexArray(quadVAO);
    glDrawArraysInstanced(GL_TRIANGLES, 0, 6, 100); // 100 instances of a 6-vertex quad (2 triangles each)
    glBindVertexArray(0);

    time = glfwGetTime(); // new adjustment
    glfwSwapBuffers(window);
    glfwPollEvents();
}
is what I have tried. I suppose I am misunderstanding the way the graphics pipeline works. As I mentioned earlier, my guess is that I need to use some glm matrices to make this work as I imagined it, but I am not sure...
The general direction would be, during initialization:
Allocate a buffer to hold the positions of your instances (glNamedBufferStorage).
Set up an instanced vertex attribute for your VAO that sources the data from that buffer (glVertexArrayBindingDivisor and others).
Update your vertex shader to apply the position of your instance (coming from the instanced attribute) to the total transformation calculated within the shader.
Then, once per frame (or when the position changes):
Calculate the positions of all your instances (the code you posted).
Submit those to the previously allocated buffer with glNamedBufferSubData.
So far you have shown the code that calculates the positions. From there, try to implement the rest, and ask a specific question if you have difficulties with any particular part of it.
I posted an example of using instancing with multidraw that you can use for reference. Note that in your case you don't need the multidraw part, just the instancing.
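A minimal sketch of those steps, assuming OpenGL 4.5 DSA functions are available, that the per-instance offset attribute sits at location 2, and that it is attached to binding index 1 of quadVAO (all of these are assumptions, not taken from your code):
// --- Initialization ---
GLuint instanceVBO;
glCreateBuffers(1, &instanceVBO);
// Immutable storage, but writable from the CPU later via glNamedBufferSubData.
glNamedBufferStorage(instanceVBO, sizeof(glm::vec2) * 100, nullptr, GL_DYNAMIC_STORAGE_BIT);

// Attach the buffer to binding index 1 of the quad's VAO and describe attribute 2.
glVertexArrayVertexBuffer(quadVAO, 1, instanceVBO, 0, sizeof(glm::vec2));
glEnableVertexArrayAttrib(quadVAO, 2);
glVertexArrayAttribFormat(quadVAO, 2, 2, GL_FLOAT, GL_FALSE, 0);
glVertexArrayAttribBinding(quadVAO, 2, 1);
glVertexArrayBindingDivisor(quadVAO, 1, 1); // advance once per instance, not per vertex

// --- Once per frame, before the draw call ---
// (recompute translations[] with the code you posted, then re-upload it)
glNamedBufferSubData(instanceVBO, 0, sizeof(translations), translations);
In the vertex shader the per-instance value (e.g. layout (location = 2) in vec2 aOffset;) is then added to the vertex position before writing gl_Position.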
CONTEXT:
I am trying to create an open-world, nature-like scene in OpenGL. An array terrainVertices of length 3*NUM_OF_VERTICES contains the vertex data for the terrain, which is a noise-generated heightmap. Each vertex has x, y, z coordinates, and based on the value of y a certain color is assigned to the vertex, producing a somewhat smooth transition between deep waters and mountain peaks.
PROBLEM:
The lakes form where a neighbourhood of vertices has y < 0. The colouring is performed as expected; however, the result is not realistic. It looks like a blue pit:
The way I decided to tackle this issue is to create a layer of vertices that appears over the lake, with y = 0 and a light blue color with a low alpha value, thus creating the illusion of a surface beneath which the actual lake lies.
The terrainVertices array is indexed by an element array elements[NUM_OF_ELEMENTS]. I iterate over the element array and try to find triplets of indices that correspond to vertices with y < 0. I gather every triplet that matches this condition into a vector, then create a new vertex array object with the same vertex data as the terrain, but with the new element buffer object I just described. The vertices that are "underwater" have their y value replaced by 0, so as to stick to the surface of the lake.
Here's the code I used to accomplish this:
std::vector<GLuint> waterElementsV;

// Iterating over every triangle:
// elements[3*i + 0, +1, +2] are the indices of one triangle,
// terrainVertices[index + 1] is the height (y) of that vertex.
for (int i = 0; i < NUM_OF_ELEMENTS / 3; i++) {
    // Check whether all three vertices of the triangle lie below zero.
    if (terrainVertices[3 * (elements[3 * i]) + 1] < 0 &&
        terrainVertices[3 * (elements[3 * i + 1]) + 1] < 0 &&
        terrainVertices[3 * (elements[3 * i + 2]) + 1] < 0) {
        // Since it's a valid triangle, add its indices to the water elements.
        waterElementsV.push_back(elements[3 * i]);
        waterElementsV.push_back(elements[3 * i + 1]);
        waterElementsV.push_back(elements[3 * i + 2]);
    }
}

// Iterate through the terrain vertices and set each appropriate
// vertex's y value to the water surface level y = 0.
for (unsigned int i = 0; i < waterElementsV.size(); i++) {
    currentIndex = waterElementsV[i];
    terrainVertices[3 * currentIndex + 1] = 0;
}

// Get the vector's underlying array.
waterElements = waterElementsV.data();

glGenVertexArrays(1, &waterVAO);
glBindVertexArray(waterVAO);

glGenBuffers(1, &waterVerticesVBO);
glBindBuffer(GL_ARRAY_BUFFER, waterVerticesVBO);
glBufferData(GL_ARRAY_BUFFER, NUM_OF_VERTICES, terrainVertices, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(0);

glGenBuffers(1, &waterEBO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, waterEBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(waterElements), waterElements, GL_STATIC_DRAW);
The shaders are very simple, as I only need to assign a position and a colour. No world view projection matrices are used, because every object is drawn relative to the terrain anyway.
Vertex Shader:
#version 330 core
layout (location = 0) in vec3 pos;
void main()
{
gl_Position = vec4(pos, 1.0);
}
Fragment Shader:
#version 330 core
out vec4 fragColor;
void main()
{
fragColor = vec4(0, 0, 0, 1);
}
The result is exactly the same as in the above picture. I've been racking my brain about what is going on, but I cannot seem to figure it out... Note that the lakes are spread out at random places and are not necessarily adjacent to each other.
Any help or tips are greatly appreciated :)
I am drawing orthographic representations in bulk, around one million, in my model drawing.
(I will draw these things only when some flag is set.)
A camera is also implemented; rotation etc. is possible.
All these orthographic representations change their positions when I rotate the model, so that they appear to stay in the same place on the model.
Now I would like to draw these orthographic things on the graphics card, because when they are huge in number, model rotation is very, very slow.
I feel like there would not be any advantage, because every time I would have to recompute the positions based on the projection matrix.
1) Am I correct?
2) Also, please let me know how to improve performance when I am drawing bulk orthographic representations using OpenGL.
3) I also feel instancing will not work here, because each orthographic rep is drawn from only 2 or 3 positions. Am I correct?
Usually, OpenGL does the projection calculation for you while drawing: the positions handed over to GL are world or model coordinates, and rendering uses the model-view-projection matrix to calculate the screen coordinates for the current projection etc. If the camera moves, the only thing that changes is the MVP matrix handed to GL.
This shouldn't really depend on the kind of projection you are using, so I don't think you need to (or should) update the positions in your array.
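As a minimal sketch of that idea (shaderProgram, width, height, cameraPos and model are assumed names, and the uniform is assumed to be called mvp), the per-frame work is only recomputing and re-uploading one matrix:
// Sketch: only the MVP uniform changes per frame; the vertex data is left untouched.
// Needs <glm/gtc/matrix_transform.hpp> and <glm/gtc/type_ptr.hpp>.
glm::mat4 projection = glm::perspective(glm::radians(45.0f),
                                        (float)width / (float)height, 0.1f, 100.0f);
glm::mat4 view = glm::lookAt(cameraPos, glm::vec3(0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
glm::mat4 mvp = projection * view * model;

glUseProgram(shaderProgram);
glUniformMatrix4fv(glGetUniformLocation(shaderProgram, "mvp"), 1, GL_FALSE, glm::value_ptr(mvp));
// ...then issue the same draw call as before; no vertex positions are recomputed on the CPU.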
Here is my approach:
You create a vertex buffer that contains each vertex position 6 times, plus 6 texture coordinates (which you need anyway if you want to draw your representations with textures), and build a quad from them in the vertex shader. In the shader you emulate the OpenGL projection and then offset each vertex by its texture coordinate to create a quad of constant size.
When constructing the model:
vector<vec3>* positionList = new vector<vec3>();
vector<vec2>* texCoordList = new vector<vec2>();

for (vector<vec3>::iterator it = originalPositions->begin(); it != originalPositions->end(); ++it) {
    for (int i = 0; i < 6; i++)              // each quad consists of 2 triangles, therefore 6 vertices
        positionList->push_back(vec3(*it));

    texCoordList->push_back(vec2(0, 0));     // corresponding texture coordinates
    texCoordList->push_back(vec2(1, 0));
    texCoordList->push_back(vec2(0, 1));
    texCoordList->push_back(vec2(1, 0));
    texCoordList->push_back(vec2(1, 1));
    texCoordList->push_back(vec2(0, 1));
}
vertexCount = positionList->size();

glGenBuffers(1, &VBO_Positions); // generate the buffer for the vertex positions
glBindBuffer(GL_ARRAY_BUFFER, VBO_Positions);
glBufferData(GL_ARRAY_BUFFER, positionList->size() * sizeof(vec3), positionList->data(), GL_STATIC_DRAW);

glGenBuffers(1, &VBO_texCoord); // generate the buffer for texture coordinates, which we also use as offset values
glBindBuffer(GL_ARRAY_BUFFER, VBO_texCoord);
glBufferData(GL_ARRAY_BUFFER, texCoordList->size() * sizeof(vec2), texCoordList->data(), GL_STATIC_DRAW);
Vertex Shader:
#version 330 core
// Inputs and uniforms as implied by the usage below (declarations assumed, names kept from the original):
in vec3 vs_position;
in vec2 vs_texCoord;
out vec2 fs_texCoord;
uniform mat4 transform;
uniform float offsetScale;
uniform float invAspectRatio;

void main() {
    fs_texCoord = vs_texCoord;
    vec4 transformed = transform * vec4(vs_position, 1);
    transformed.xyz /= transformed.w; // this is how the OpenGL pipeline does projection (perspective divide)
    // Map the texture coordinates from [0, 1] to [-offsetScale, offsetScale]
    vec2 offset = (vs_texCoord * 2 - 1) * offsetScale;
    offset.x *= invAspectRatio;
    // We pass the new position to the pipeline with w = 1 so OpenGL keeps the position we calculated
    gl_Position = vec4(transformed.xy + offset, 0, 1);
}
Note that you need to account for the aspect ratio yourself, since there is no actual orthographic projection matrix involved that would do it for you; that is what this line is for:
offset.x *= invAspectRatio;
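To connect those two buffers to the shader and draw, something along these lines could work; shaderProgram and the VAO are assumed names that don't appear in the original snippet:
// Sketch: bind the position and texcoord buffers to the shader's inputs and draw.
GLuint VAO;
glGenVertexArrays(1, &VAO);
glBindVertexArray(VAO);

glBindBuffer(GL_ARRAY_BUFFER, VBO_Positions);
GLint posLoc = glGetAttribLocation(shaderProgram, "vs_position");
glEnableVertexAttribArray(posLoc);
glVertexAttribPointer(posLoc, 3, GL_FLOAT, GL_FALSE, 0, NULL);

glBindBuffer(GL_ARRAY_BUFFER, VBO_texCoord);
GLint uvLoc = glGetAttribLocation(shaderProgram, "vs_texCoord");
glEnableVertexAttribArray(uvLoc);
glVertexAttribPointer(uvLoc, 2, GL_FLOAT, GL_FALSE, 0, NULL);

// Later, in the render loop: one draw call covers all quads.
glBindVertexArray(VAO);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);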
When you create a brush in a 3D map editor such as Valve's Hammer Editor, textures of the object are by default repeated and aligned to the world coordinates.
How can I implement this functionality using OpenGL?
Can glTexGen be used to achieve this?
Or do I have to use the texture matrix somehow?
If I create a 3x3 box, then it's easy:
Set GL_TEXTURE_WRAP_T to GL_REPEAT
Set the texture coords to 3,3 at the edges.
But if the object is not an axis-aligned convex hull it gets, uh, a bit complicated.
Basically I want to recreate the functionality of the Face Edit Sheet from Valve Hammer:
Technically you can use texture coordinate generation for this. But I recommend using a vertex shader that generates the texture coordinates from the transformed vertex coordinates. Could you be a bit more specific? (I don't know Hammer very well.)
After seeing the video I understand your confusion. You should know that Hammer/Source probably doesn't have the drawing API generate the texture coordinates, but produces them internally.
So what you see there are textures projected onto either the X, Y or Z plane, depending on which major direction the face is pointing in. It then uses the local vertex coordinates as texture coordinates.
You can implement this in the code loading a model to a Vertex Buffer Object (more efficient, since the computation is done only once), or in a GLSL vertex shader. I'll give you the pseudocode:
cross(v1, v2):
    return { x = v1.y * v2.z - v1.z * v2.y,
             y = v2.x * v1.z - v2.z * v1.x, // <- "swapped" order!
             z = v1.x * v2.y - v1.y * v2.x }

normal(face):
    return cross(face.position[1] - face.position[0], face.position[2] - face.position[0])

foreach face in geometry:
    n = normal(face) // you'd normally precompute the normals and store them
    if abs(n.x) > max(abs(n.y), abs(n.z)): // X major axis, project to YZ plane
        foreach (i, pos) in enumerate(face.position):
            face.texcoord[i] = { s = pos.y, t = pos.z }
    if abs(n.y) > max(abs(n.x), abs(n.z)): // Y major axis, project to XZ plane
        foreach (i, pos) in enumerate(face.position):
            face.texcoord[i] = { s = pos.x, t = pos.z }
    if abs(n.z) > max(abs(n.y), abs(n.x)): // Z major axis, project to XY plane
        foreach (i, pos) in enumerate(face.position):
            face.texcoord[i] = { s = pos.x, t = pos.y }
To make this work with glTexGen texture coordinate generation, you'd have to split your models into parts of each major axis. What glTexGen does is just the mapping step face.texcoord[i] = { s = pos.<>, t = pos.<> }. In a vertex shader you can do the branching directly.
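As a rough C++ sketch of the precompute-on-load variant, using glm and an assumed Face struct (neither of which is part of the original answer):
#include <glm/glm.hpp>
#include <vector>

// Assumed face type for this sketch.
struct Face {
    glm::vec3 position[3];
    glm::vec2 texcoord[3];
};

// Project each face onto the plane perpendicular to its normal's major axis,
// using the (world-space) vertex coordinates as texture coordinates.
void assignWorldAlignedUVs(std::vector<Face>& geometry)
{
    for (Face& face : geometry) {
        glm::vec3 n = glm::cross(face.position[1] - face.position[0],
                                 face.position[2] - face.position[0]);
        glm::vec3 a = glm::abs(n);
        for (int i = 0; i < 3; ++i) {
            const glm::vec3& pos = face.position[i];
            if (a.x >= a.y && a.x >= a.z)        // X major axis -> project to YZ plane
                face.texcoord[i] = glm::vec2(pos.y, pos.z);
            else if (a.y >= a.x && a.y >= a.z)   // Y major axis -> project to XZ plane
                face.texcoord[i] = glm::vec2(pos.x, pos.z);
            else                                 // Z major axis -> project to XY plane
                face.texcoord[i] = glm::vec2(pos.x, pos.y);
        }
    }
}
A texture-scale factor can be multiplied in afterwards if one world unit should not equal one texture repeat.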