I am trying to draw a grid of velocity vectors. I expect the velocity at each grid point to be a line with a slope of 1, i.e. a slanted line, but I always end up with a vertical line. I'm not sure what I'm doing wrong. Is there something I'm overlooking?
Here is how my vertex buffer looks:
float vel_pos[6*(N+2)*(N+2)];
int index1 = 0;
for (int i = 0; i < N+2; i++)
{
    for (int j = 0; j < N+2; j++)
    {
        vel_pos[index1]     = float(i);
        vel_pos[index1 + 1] = float(j);
        vel_pos[index1 + 2] = 0.0f;
        vel_pos[index1 + 3] = float(i) + 0.5f;
        vel_pos[index1 + 4] = float(j) + 0.5f;
        vel_pos[index1 + 5] = 0.0f;
        index1 += 6;
    }
}
Here is how I am creating my VBO and VAO:
unsigned int VBO, VAO;
glGenVertexArrays(1, &VAO);
glGenBuffers(1, &VBO);
// Bind vertex array object first and then bind the vertex buffer objects
glBindVertexArray(VAO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(vel_pos), vel_pos, GL_STREAM_DRAW);
GLint velAttrib = glGetAttribLocation(ourShader.ID, "aPos");
// interpreting data from buffer
glVertexAttribPointer(velAttrib, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);
Here is my vertex shader:
out vec4 vertexColor;
layout (location = 0) in vec3 aPos;
layout (location = 1) in float densitySource; /* source of density */
uniform mat4 transform;
uniform mat4 projection;
void main()
{
gl_Position = projection*transform * vec4(aPos, 1.0);
vertexColor = vec4(1, 0.0, 0.0, 1.0);
}
And here's my drawing code:
ourShader.use();
glm::mat4 trans = glm::mat4(1.0f);
trans = glm::translate(trans, glm::vec3(-0.5f, -0.5f, 0.0f));
unsigned int transformMatrixLocation = glGetUniformLocation(ourShader.ID, "transform");
glUniformMatrix4fv(transformMatrixLocation, 1, GL_FALSE, glm::value_ptr(trans));
glm::mat4 projection = glm::ortho(-10.0f, 110.0f, -1.0f, 110.0f, -1.0f, 100.0f);
unsigned int projectionMatrixLocation = glGetUniformLocation(ourShader.ID, "projection");
glUniformMatrix4fv(projectionMatrixLocation, 1, GL_FALSE, glm::value_ptr(projection));
glBindVertexArray(VAO);
glLineWidth(1.0f);
glDrawArrays(GL_LINES, 0, (N+2)*(N+2));
This is the image I get:
[resulting image]
The 5th parameter (stride) of glVertexAttribPointer is the byte offset between consecutive vertex coordinates, not between two primitives. Since your vertex coordinates have 3 components of type float, the stride has to be 3 * sizeof(float):
glVertexAttribPointer(velAttrib, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
Because you set a stride of 6 * sizeof(float), every second coordinate was skipped, and lines were drawn between the points of the grid instead.
But note that if the stride is 0, the generic vertex attributes are understood to be tightly packed in the array. That is the case here, so you can use a stride of 0:
glVertexAttribPointer(velAttrib, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
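For completeness, a minimal sketch of the corrected setup (same VBO, VAO and shader as above; enabling the queried location is equivalent to the hard-coded 0 here, since aPos is declared at location 0):
GLint velAttrib = glGetAttribLocation(ourShader.ID, "aPos");
glVertexAttribPointer(velAttrib, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);   // tightly packed vec3 positions
glEnableVertexAttribArray(velAttrib);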
I'm working on a project where I'm using OpenMesh to read STL and OBJ files and draw them on the screen using OpenGL.
I've been doing the following:
#include <OpenMesh/Core/Mesh/TriMesh_ArrayKernelT.hh>
#include <OpenMesh/Core/IO/MeshIO.hh>
OpenMesh::TriMesh_ArrayKernelT<> mesh;
std::vector<point> vertices;
std::vector<point> normals;
void readMesh(std::string file)
{
OpenMesh::IO::read_mesh(mesh, file);
mesh.request_face_normals();
mesh.request_vertex_normals();
mesh.update_normals();
vertices.clear();
normals.clear();
for (auto face : mesh.faces())
{
for (auto vertex : mesh.fv_range(face))
{
auto point = mesh.point(vertex);
auto normal = mesh.normal(face);
vertices.push_back(point);
normals.push_back(normal);
}
}
mesh.release_face_normals();
mesh.release_vertex_normals();
}
When drawing, I just pass the vertices and normals vectors to the vertex shader like this:
void paint()
{
glSetAttributeArray(0, vertices.data());
glSetAttributeArray(1, normals.data());
glDrawArrays(GL_TRIANGLES, 0, vertices.size());
}
where the vertex shader looks like this:
attribute vec3 position;
attribute vec3 normal;
uniform mat4 modelViewMatrix;
void main(void)
{
vec4 color = vec4(0.25, 0.25, 0.25, 0.0);
vec4 P = vec4(position, 0);
vec4 N = vec4(normal, 0);
vec3 L = vec3(20, 20, 20) - position;
vec3 V = -position;
N = normalize(N);
L = normalize(L);
V = normalize(V);
vec3 R = reflect(-L, vec3(N));
vec3 diffuse = max(dot(vec3(N), L), 0.0) * color.rgb;
vec3 specular = pow(max(dot(R, V), 0.0), 0.2) * vec3(0.1, 0.1, 0.1);
color = vec4(color.a * (ambient + diffuse + specular), color.a);
color = clamp(color, 0.0, 1.0);
gl_Color = color;
gl_Position = modelViewMatrix * P;
}
and the fragment shader is:
void main(void)
{
gl_FragColor = gl_Color;
}
This produces pretty good results, but the idea of keeping another copy of the vertices and normals in separate containers (the vertices and normals vectors) just to be able to draw the mesh seems very counter-intuitive.
I was wondering if I can use OpenGL buffers with OpenMesh to optimize this. I've been searching for anything on this topic for a while but found nothing.
See Vertex Specification. You can create 2 Vertex Buffer Objects, one for the vertex coordinates and one for the normal vectors:
GLuint vbos[2];
glGenBuffers(2, vbos);
glBindBuffer(GL_ARRAY_BUFFER, vbos[0]);
glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(vertices[0]), vertices.data(), GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, vbos[1]);
glBufferData(GL_ARRAY_BUFFER, normals.size() * sizeof(normals[0]), normals.data(), GL_STATIC_DRAW);
If you use OpenGL 3.0 or later, then you can create a Vertex Array Object and store the vertex specification in it:
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbos[0]);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
glBindBuffer(GL_ARRAY_BUFFER, vbos[1]);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
When you want to draw the mesh, it is sufficient to bind the VAO:
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, vertices.size());
If you use OpenGL 2.0, then you cannot create a VAO, so you have to specify the arrays of generic vertex attribute data before drawing the mesh:
glBindBuffer(GL_ARRAY_BUFFER, vbos[0]);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
glBindBuffer(GL_ARRAY_BUFFER, vbos[1]);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glDrawArrays(GL_TRIANGLES, 0, vertices.size());
Furthermore, note that the attribute indices are not guaranteed to be 0 and 1; they can be any arbitrary numbers.
If you used GLSL version 3.30, it would be possible to set the attribute indices in the shader code with a Layout Qualifier.
In any case, you can define the attribute indices with glBindAttribLocation before linking the program, or retrieve them with glGetAttribLocation after linking the program, as in the sketch below.
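A minimal sketch, assuming a program handle named program and the attribute names from the shader above:
// before glLinkProgram: force the locations to 0 and 1
glBindAttribLocation(program, 0, "position");
glBindAttribLocation(program, 1, "normal");
glLinkProgram(program);
// or after linking: query whatever locations the linker assigned
GLint positionLocation = glGetAttribLocation(program, "position");
GLint normalLocation   = glGetAttribLocation(program, "normal");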
I am sorry to bring this here, but I have spent around 7 hours on what is probably a really easy thing. Maybe some of you can identify the problem.
I am trying to render some pixel coordinates to the screen. The code is below.
It is for an 800x600 screen: I simply calculate the positions of the lines and then render them to the screen.
For example, point A(400, 300, 0) and point B(500, 300, 0) should give a simple black line from the center of the screen to the right.
Since I call this class's function inside my render loop, I thought I might be creating a separate rendering session. However, when I call something such as glClearColor, the background does change.
#include <GLFW/glfw3.h>
#include <GL/gl.h>
#include <GL/glew.h>
#include <iostream>
#include <vector>
Vertex shader:
const GLchar *vertex_shader =
"#version 410\n"
"layout (location = 0) in vec3 pos;\n"
"layout (location = 1) in vec4 col;\n"
"uniform mat4 projection;\n"
"out vec4 Frag_Color;\n"
"void main()\n"
"{\n"
" Frag_Color = col;\n"
" gl_Position =projection*vec4(pos.xyz,1);\n"
"}\n";
Fragment Shader:
const GLchar *fragment_shader =
"#version 410\n"
"in vec4 Frag_Color;\n"
"layout (location = 0) out vec4 Out_Color;\n"
"void main()\n"
"{\n"
" Out_Color = Frag_Color;\n"
"}\n";
Vertex structure:
struct Vrtx
{
float pos[3];
float col[4] = {0.0f, 0.0f, 0.0f, 1.0f};
};
Coordinate axis class:
class CoordinateAxis
{
public:
void coordinate_axis()
{
vertices.resize(6);
for (int i = 0; i < 3; i++)
{
vertices[2 * i].pos[0] = 400;
vertices[2 * i].pos[1] = 300;
vertices[2 * i].pos[2] = 0;
}
vertices[1].pos[0] = 500;
vertices[1].pos[1] = 300;
vertices[1].pos[2] = 0;
vertices[3].pos[0] = 400;
vertices[3].pos[1] = 400;
vertices[3].pos[2] = 0;
vertices[3].pos[0] = 400;
vertices[3].pos[1] = 430;
vertices[3].pos[2] = 100;
setupRender();
glBindVertexArray(VAO);
glDrawElements(GL_LINE, 6, GL_UNSIGNED_INT, 0);
glBindVertexArray(0);
glUseProgram(0);
}
CoordinateAxis()
{
initShaderProgram();
};
private:
void initShaderProgram()
{
// Vertex shader
GLuint vHandle = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vHandle, 1, &vertex_shader, NULL);
glCompileShader(vHandle);
// Fragment shader
GLuint fHandle = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fHandle, 1, &fragment_shader, NULL);
glCompileShader(fHandle);
// Create Program
handleProgram = glCreateProgram();
glAttachShader(handleProgram, vHandle);
glAttachShader(handleProgram, fHandle);
glLinkProgram(handleProgram);
attribLocationProj = glGetUniformLocation(handleProgram, "projection");
glGenVertexArrays(1, &VAO);
// CreateBuffers
glGenBuffers(1, &vboHandle);
glGenBuffers(1, &iboHandle);
}
void setupRender()
{
GLint last_viewport[4];
glGetIntegerv(GL_VIEWPORT, last_viewport);
float L = last_viewport[0];
float R = L + last_viewport[2];
float B = last_viewport[1];
float T = B + last_viewport[3];
const float ortho_projection[4][4] =
{
{2.0f / (R - L), 0.0f, 0.0f, 0.0f},
{0.0f, 2.0f / (T - B), 0.0f, 0.0f},
{0.0f, 0.0f, -1.0f, 0.0f},
{(R + L) / (L - R), (T + B) / (B - T), 0.0f, 1.0f},
};
glUseProgram(handleProgram);
glUniformMatrix4fv(attribLocationProj, 1, GL_FALSE, &ortho_projection[0][0]);
glBindVertexArray(VAO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, iboHandle);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(GLuint), indices.data(), GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, vboHandle);
glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(Vrtx), vertices.data(), GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, vboHandle);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, vertices.size() * sizeof(Vrtx), 0);
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, vertices.size() * sizeof(Vrtx), (void *)(3 * sizeof(float)));
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
}
std::vector<Vrtx> vertices;
std::vector<GLuint> indices;
GLuint handleProgram, VAO, vboHandle, iboHandle, attribLocationProj;
};
GL_LINE is not a valid primitive type. GL_LINE is a mode for glPolygonMode.
A valid line primitive type is GL_LINES:
glDrawElements(GL_LINE, 6, GL_UNSIGNED_INT, 0);    // invalid primitive type
glDrawElements(GL_LINES, 6, GL_UNSIGNED_INT, 0);   // correct
Furthermore, there is an issue when you set up the arrays of vertex attribute data with glVertexAttribPointer. The 5th parameter (stride) is the byte offset between consecutive attribute tuples, not the size of the buffer, so it has to be sizeof(Vrtx) rather than vertices.size() * sizeof(Vrtx):
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vrtx), 0);
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, sizeof(Vrtx), (void *)(3*sizeof(float)));
Note that the array of indices seems to be empty. Either initialize it:
indices = { 0, 1, 2, 3, 4, 5 };
or use glDrawArrays instead:
glDrawArrays(GL_LINES, 0, 6);
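Putting the fixes together, the draw path in coordinate_axis() could look like this (a sketch based on the code above, with the stride corrected inside setupRender()):
indices = { 0, 1, 2, 3, 4, 5 };   // must be filled before setupRender() uploads the index buffer
setupRender();
glBindVertexArray(VAO);
glDrawElements(GL_LINES, 6, GL_UNSIGNED_INT, 0);
glBindVertexArray(0);
glUseProgram(0);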
I cannot get texture coordinates to have any effect on what is rendered on screen. My texture always seems to render from (0,0) to (1,1).
The method I am using is to send the following buffer layout: x,y,u,v (x,y: vertex position; u,v: texture coordinates).
However, changing the u and v values has no effect on the rendered texture.
I've removed a lot of surrounding code to try and make it easier to read, but I can post it in full if the problem isn't immediately obvious to someone.
Vertex Shader:
#version 330 core
layout(location = 0) in vec2 position;
layout(location = 1) in vec2 mytextpos;
out vec2 v_TexCoord;
uniform mat4 u_Model;
void main()
{
gl_Position = u_Model * vec4(position, 0.0f, 1.0f);
v_TexCoord = mytextpos;
}
Fragment Shader:
#version 330 core
layout(location = 0) out vec4 color;
in vec2 v_TexCoord;
uniform sampler2D u_Texture;
void main()
{
color = texture(u_Texture, v_TexCoord);
}
My Rectangle Class Constructor:
// GENERATE BUFFERS
glGenVertexArrays(1, &VertexArrayObject);
glGenBuffers(1, &VertexBufferId);
glGenBuffers(1, &IndexBufferObjectId);
Indices = {
0, 1, 2,
2, 3, 0
};
unsigned int numberOfVertices = Vertices.size();
unsigned int numberOfIndices = Indices.size();
glBindVertexArray(VertexArrayObject);
glBindBuffer(GL_ARRAY_BUFFER, VertexBufferId);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IndexBufferObjectId);
Vertices = {
0.2f, 0.0f, 0.0f, 0.5f,
1.0f, 0.0f, 1.5f, 0.5f,
1.0f, 1.0f, 1.5f, 1.0f,
0.0f, 1.0f, 0.0f, 1.0f
};
// ADD BUFFER DATA
glBufferData( GL_ARRAY_BUFFER, Vertices.size() * sizeof(float), Vertices.data(), GL_STATIC_DRAW );
// ARRANGE ATTRIBUTES
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), nullptr);
glEnableVertexAttribArray(0);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), nullptr);
glEnableVertexAttribArray(1);
glBufferData( GL_ELEMENT_ARRAY_BUFFER, numberOfIndices * sizeof(unsigned int), Indices.data(), GL_STATIC_DRAW );
Render Function:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
GLint u_Texture = glGetUniformLocation( shader.getId(), "u_Texture" );
glUniform1i( u_Texture, 0 );
GLint u_Model = glGetUniformLocation( shader.getId(), "u_Model" );
glUniformMatrix4fv( u_Model, 1, GL_FALSE, glm::value_ptr( rect.TransformMatrix() ) );
glDrawElements(GL_TRIANGLES, rect.Indices.size(), GL_UNSIGNED_INT, nullptr);
// SWAP BUFFERS
glfwSwapBuffers( _window->WindowInstance );
glfwPollEvents();
My program runs, but my texture is mapped between (0,0) and (1,1) no matter what my vertex or u,v positions are. I would expect the texture to be interpolated between the u,v coordinates of each vertex.
Your attribute setup is wrong:
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), nullptr);
This tells OpenGL that it should attach the first two floats of each vertex to the mytextpos attribute. But you actually want it to read the 3rd and 4th float. Thus you have to set the offset such that it skips the first two floats:
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void*)(2 * sizeof(float)));
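Putting both attributes together for the interleaved x,y,u,v layout described above (a sketch of only the attribute setup; the rest of the constructor stays as it is):
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void*)0);                    // position: x, y
glEnableVertexAttribArray(0);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void*)(2 * sizeof(float)));  // texture coordinates: u, v
glEnableVertexAttribArray(1);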
I have been having difficulty texturing a plane I made. The first quad on the plane is textured properly, but the rest of the plane then only seems to use the first pixel of the texture so it all ends up as a solid color. It seems to work properly if I make one giant plane and just texture that, but when I try to break up the plane into sections I keep getting this issue. I’m assuming I am missing something as far as the coordinates go, but from my understanding I thought they were always supposed to be between 0 and 1? Any help is appreciated.
Texture coordinates
GLfloat grassTexCoords[]
{
0.0f, 0.0f,
0.0f, 1.0f,
1.0f, 0.0f,
1.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f
};
Setting up VAO
GLuint makePlane()
{
float size = 5;
for (int i = -5; i < 5; ++i)
{
for (int j = -5; j < 5; ++j)
{
verts.push_back({ glm::vec3((i * size), -11.f, (j * size)) });
verts.push_back({ glm::vec3((i * size), -11.f, (j * size) + size) });
verts.push_back({ glm::vec3((i * size) + size, -11.f, (j * size)) });
verts.push_back({ glm::vec3((i * size) + size, -11.f, (j * size)) });
verts.push_back({ glm::vec3((i * size), -11.f, (j * size) + size) });
verts.push_back({ glm::vec3((i * size) + size, -11.f, (j * size) + size) });
}
}
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(VertexPos), verts.data(), GL_STATIC_DRAW);
GLuint vboTex;
glGenBuffers(1, &vboTex);
glBindBuffer(GL_ARRAY_BUFFER, vboTex);
glBufferData(GL_ARRAY_BUFFER, 2 * 6 * sizeof(GLfloat), &grassTexCoords, GL_STATIC_DRAW);
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(1);
glBindBuffer(GL_ARRAY_BUFFER, vboTex);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, NULL);
return vao;
}
Rendering
void render()
{
glViewport(0, 0, window.getSize().x, window.getSize().y);
glClearColor(.4f, .4f, .4f, 1.f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
//1st program
glUseProgram(sphereProgram);
glEnable(GL_CULL_FACE);
glEnable(GL_DEPTH_TEST);
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, objPointCount);
//2nd program
glFrontFace(GL_CCW);
glDepthMask(GL_FALSE);
glUseProgram(cubeProgram);
glBindTexture(GL_TEXTURE_CUBE_MAP, textureID);
glBindVertexArray(cubeVao);
glDrawArrays(GL_TRIANGLES, 0, 36);
glDepthMask(GL_TRUE);
//3rd program
glFrontFace(GL_CCW);
glDisable(GL_CULL_FACE);
glEnable(GL_TEXTURE_2D);
sf::Texture::bind(&grassTex);
glUseProgram(planeProgram);
glBindVertexArray(planeVao);
glDrawArrays(GL_TRIANGLES, 0, verts.size());
//-----------------------
window.display();
//window.setFramerateLimit(FPS);
window.setVerticalSyncEnabled(true);
}
Vertex shader
#version 410
layout (location = 0) in vec3 vertexPos;
layout (location = 1) in vec2 texCoords;
uniform mat4 view, proj;
out vec3 posEye;
out vec2 coords;
void main()
{
coords = texCoords; //repeat texture over plane
gl_Position = proj * view * vec4(vertexPos, 1.0);
posEye = (view * vec4(vertexPos, 1.0)).xyz;
}
Fragment shader
#version 410
in vec3 posEye;
in vec2 coords;
out vec4 fragColor;
uniform sampler2D tex;
//fog
const vec3 fogColor = vec3(0.2, 0.2, 0.2);
const float minFogRad = 300;
const float maxFogRad = 900;
void main()
{
vec4 texture = texture2D(tex, coords);
fragColor = texture;
float distance = length(-posEye);
float fogFactor = (distance - minFogRad) / (maxFogRad - minFogRad);
fogFactor = clamp(fogFactor, 0.0, 1.0);
fragColor.rgb = mix(fragColor.rgb, fogColor, fogFactor);
}
The problem here is that texture coordinates are only supplied for the first quad (the first 6 vertices). All other vertices seem to get [0,0], which causes them to read only the top-left texel. The solution is to provide enough texture coordinates for all of the vertices.
Texture coordinates in general are not necessarily between 0 and 1. One can specify how values outside of [0,1] should be treated by setting GL_TEXTURE_WRAP_[RST].
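For instance, a minimal sketch, assuming the grass texture is currently bound to GL_TEXTURE_2D:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);   // wrap in the s (u) direction
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);   // wrap in the t (v) direction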
Note that not supplying enough data in a VBO can lead to crashes (depending on the driver) when OpenGL tries to read outside the buffer.
You are using a GL_ARRAY_BUFFER which stores per vertex data.
The 2 and GL_FLOAT arguments in glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, NULL); declare that vboTex holds 2 floats per vertex, but you haven't provided that: you only have 2 floats per vertex for the first quad.
In the same way, the 3 and GL_FLOAT arguments in glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL); declare that vbo holds 3 floats per vertex, which you have provided.
The easiest fix is to create a bigger GL_ARRAY_BUFFER that repeats the same texture coordinates for every quad, for example:
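A sketch of how the texture-coordinate buffer could be filled inside makePlane(), reusing the six UV pairs from grassTexCoords for every quad (the texCoords vector is an assumption, not part of the original code):
std::vector<GLfloat> texCoords;
for (int i = -5; i < 5; ++i)
{
    for (int j = -5; j < 5; ++j)
    {
        // the same 6 UV pairs for every quad of the plane
        texCoords.insert(texCoords.end(), std::begin(grassTexCoords), std::end(grassTexCoords));
    }
}
glBindBuffer(GL_ARRAY_BUFFER, vboTex);
glBufferData(GL_ARRAY_BUFFER, texCoords.size() * sizeof(GLfloat), texCoords.data(), GL_STATIC_DRAW);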
I'm trying to implement instancing in my 2D game engine so that it can support particle systems without losing any performance. My class, ISprite, is derived from a working Sprite class. I went through and removed all the functionality affecting single sprites and rewrote it with instancing in mind. Unfortunately, nothing is drawn on the screen.
Here is the relevant information:
Vertex Shader
#version 330 core
layout (location = 0) in vec3 position;
layout (location = 1) in vec2 texCoords;
layout (location = 2) in vec4 colorSource;
layout (location = 3) in mat4 transform;
out vec2 TexCoords;
out vec4 Color;
uniform mat4 uniformView;
uniform mat4 uniformProjection;
void main()
{
gl_Position = uniformProjection * uniformView * transform * vec4(position, 1.0f);
TexCoords = texCoords;
Color = colorSource;
}
Fragment Shader
#version 330 core
in vec2 TexCoords;
in vec4 Color;
out vec4 color;
uniform sampler2D Texture;
uniform vec4 uniformColor;
void main()
{
vec4 texColor = texture(Texture, TexCoords) * Color;
if(texColor.a < 0.1)
discard;
color = texColor;
}
Load - Prepares all sprites for drawing, called once.
void ISprite::Load(Shader spriteShader)
{
spriteShader.Use();
GLfloat vertices[] = {
//X Y Z
0.5f, -0.5f, 0.0f,
-0.5f, 0.5f, 0.0f,
-0.5f, -0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
0.5f, 0.5f, 0.0f,
-0.5f, 0.5f, 0.0f
};
glGenVertexArrays(1, &vertexArray);
glGenBuffers(1, &positionBuffer);
glGenBuffers(1, &texCoordsBuffer);
glGenBuffers(1, &colorBuffer);
glGenBuffers(1, &matrixBuffer);
glBindVertexArray(vertexArray);
//The vertex data will never change, so send that data now.
glBindBuffer(GL_ARRAY_BUFFER, positionBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
//For vertex Position
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3*sizeof(GLfloat), (GLvoid*)0);
//For texture coordinates
glBindBuffer(GL_ARRAY_BUFFER, texCoordsBuffer);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 2*sizeof(GLfloat), (GLvoid*)0);
//For Color
glBindBuffer(GL_ARRAY_BUFFER, colorBuffer);
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 4, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), (GLvoid*)0);
//For Transformation Matrix
glBindBuffer(GL_ARRAY_BUFFER, matrixBuffer);
for (int i = 0; i < 4; ++i)
{
glEnableVertexAttribArray(3 + i);
glVertexAttribPointer(3 + i, 4, GL_FLOAT, GL_FALSE,
4 * 4 * sizeof(GLfloat), (GLvoid*)(4 * i * sizeof(GLfloat)));
}
glBindBuffer(GL_ARRAY_BUFFER, positionBuffer);
glBindBuffer(GL_ARRAY_BUFFER, texCoordsBuffer);
glBindBuffer(GL_ARRAY_BUFFER, colorBuffer);
glBindBuffer(GL_ARRAY_BUFFER, matrixBuffer);
glBindVertexArray(0);
glVertexAttribDivisor(positionBuffer, 0);
glVertexAttribDivisor(texCoordsBuffer, 1);
glVertexAttribDivisor(colorBuffer, 1);
glVertexAttribDivisor(matrixBuffer, 1);
glVertexAttribDivisor(matrixBuffer + 1, 1);
glVertexAttribDivisor(matrixBuffer + 2, 1);
glVertexAttribDivisor(matrixBuffer + 3, 1);
ISprite::shader = &spriteShader;
}
Prepare Draw - called by each sprite, each frame. Sends data to static vectors
void ISprite::prepareDraw(void)
{
//Adds their personal data to vectors shared by class
glm::mat4 transform = calculateTransorm();
for (int i = 0; i < 4; ++i)
{
for (int j = 0; j < 4; ++j)
ISprite::transformMatrices.push_back(transform[i][j]);
}
texture.updateAnimation();
for (int i = 0; i < 12; ++i)
ISprite::textureCoordinatesAll.push_back(texture.textureCoordinates[i]);
ISprite::colorValues.push_back(color.x);
ISprite::colorValues.push_back(color.y);
ISprite::colorValues.push_back(color.z);
ISprite::colorValues.push_back(color.w);
}
Draw Sprites - called once each frame, actually draws the sprites
void ISprite::drawSprites(Texture testTexture)
{
shader->Use();
for (std::vector<ISprite*>::iterator it = Isprites.begin(); it != Isprites.end(); ++it)
(*it)->prepareDraw();
glBindVertexArray(vertexArray);
glBindTexture(GL_TEXTURE_2D, testTexture.ID);
//Bind texture here if you want textures to work. if not, a single texture atlas will be bound
glBindBuffer(GL_ARRAY_BUFFER, texCoordsBuffer);
glBufferData(GL_ARRAY_BUFFER, textureCoordinatesAll.size() * sizeof(GLfloat),
textureCoordinatesAll.data(), GL_STREAM_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, colorBuffer);
glBufferData(GL_ARRAY_BUFFER, colorValues.size() * sizeof(GLfloat),
colorValues.data(), GL_STREAM_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, matrixBuffer);
glBufferData(GL_ARRAY_BUFFER, transformMatrices.size() * sizeof(GLfloat),
transformMatrices.data(), GL_STREAM_DRAW);
glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 6, Isprites.size());
textureCoordinatesAll.clear();
colorValues.clear();
transformMatrices.clear();
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindTexture(GL_TEXTURE_2D, 0);
glBindVertexArray(0);
}
There could be a lot of reasons why nothing is rendered. Problems with the transformations, coordinates out of range, etc. But one thing related to instancing definitely looks wrong in the posted code:
glBindVertexArray(0);
glVertexAttribDivisor(positionBuffer, 0);
glVertexAttribDivisor(texCoordsBuffer, 1);
glVertexAttribDivisor(colorBuffer, 1);
...
The first argument to glVertexAttribDivisor() is the location of a vertex attribute, not the name of a buffer. Also, the state set by this call is part of the VAO state, so you should make these calls while the VAO is still bound.
So the calls should look like this:
glVertexAttribDivisor(0, 0);
glVertexAttribDivisor(1, 0);
glVertexAttribDivisor(2, 1);
...
glBindVertexArray(0);
where the first arguments to glVertexAttribDivisor() match the location values you also use as the first argument to glVertexAttribPointer() and glEnableVertexAttribArray().
The divisor value for the texture coordinates (attribute 1) should most likely be 0, since you want the texture coordinates to be set per vertex, just like the positions. For the colors and other remaining attributes, 1 is the correct value so that they are applied per instance.
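Applied to the attribute locations used in Load(), the full set could look like this (a sketch only; the divisor calls are made while the VAO is still bound):
glBindVertexArray(vertexArray);
glVertexAttribDivisor(0, 0);          // position: per vertex
glVertexAttribDivisor(1, 0);          // texture coordinates: per vertex
glVertexAttribDivisor(2, 1);          // color: per instance
for (int i = 0; i < 4; ++i)
    glVertexAttribDivisor(3 + i, 1);  // the four mat4 transform columns: per instance
glBindVertexArray(0);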
Also, as I mentioned in a comment, you may want to look into using point sprites. While they do not offer the same flexibility you get from drawing individual quads, they can often be used for sprites. With point sprites, you only need one vertex per sprite, and the texture coordinates are generated automatically. My answer here gives an outline of how point sprites are used, including how to apply textures to them: Render large circular points in modern OpenGL.