OpenGL Index Buffer Object element order incorrectly drawn

My program draws a single triangle, but what I am trying to express in code is a square. I believe my tri_indicies index buffer object orders the elements correctly so that a square should be drawn, but when I run the program the draw order defined in tri_indicies is not reflected in the window. I am not sure the error is rooted in tri_indicies; even though changing the element order there has no effect on the rendered output, I want to believe it is, but it is most likely somewhere else.
My program uses abstractions, notably VertexBuffer, VertexArray, and IndexBuffer, which are all used in the code below.
const int buffer_object_size = 8;
const int index_buffer_object_size = 6;
float tri_verticies[buffer_object_size] = {
-0.7f, -0.7f, // 0
0.7f, -0.7f, // 1
0.7f, 0.7f, // 2
-0.7f, 0.7f // 3
};
unsigned int tri_indicies[index_buffer_object_size] = {
0, 1, 2,
2, 3, 0
};
VertexArray vertexArray;
VertexBuffer vertexBuffer(tri_verticies, buffer_object_size * sizeof(float)); // no need to call vertexBuffer.bind(); the constructor does it
VertexBufferLayout vertexBufferLayout;
vertexBufferLayout.push<float>(3);
vertexArray.add_buffer(vertexBuffer, vertexBufferLayout);
IndexBuffer indexBuffer(tri_indicies, index_buffer_object_size);
ShaderManager shaderManager;
ShaderSource shaderSource = shaderManager.parse_shader("BasicUniform.shader"); // ensure debug working dir is relative to $(ProjectDir)
unsigned int shader = shaderManager.create_shader(shaderSource.vertex_source, shaderSource.fragment_source);
MyGLCall(glUseProgram(shader));
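For reference, the IndexBuffer is essentially a thin wrapper over an element array buffer; a condensed sketch (not the exact class) looks like this:
class IndexBuffer
{
public:
    IndexBuffer(const unsigned int* data, unsigned int count) : m_count(count)
    {
        glGenBuffers(1, &m_id);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_id);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, count * sizeof(unsigned int), data, GL_STATIC_DRAW);
    }
    void bind() const { glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_id); }
private:
    unsigned int m_id;
    unsigned int m_count;
};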
Later in main I have a loop that is supposed to draw my square to the screen and fade the blue color value between 1.0f and 0.0f.
while (!glfwWindowShouldClose(window))
{
MyGLCall(glClear(GL_COLOR_BUFFER_BIT));
vertexArray.bind();
indexBuffer.bind();
MyGLCall(glDrawElements(GL_TRIANGLES, index_buffer_object_size, GL_UNSIGNED_INT, nullptr)); // nullptr: the indices come from the bound GL_ELEMENT_ARRAY_BUFFER, not a client pointer
if (blue > 1.0f) {
increment_color = -0.05f;
}
else if (blue < 0.0f) {
increment_color = 0.05f;
}
blue += increment_color;
glfwSwapBuffers(window);
glfwPollEvents();
}

The array tri_verticies consists of vertex coordinates with 2 components (x, y). So the tuple size for the specification of the array of generic vertex attribute data has to be 2 rather than 3:
vertexBufferLayout.push<float>(3); // wrong: declares 3 floats per vertex
vertexBufferLayout.push<float>(2); // correct: 2 floats (x, y) per vertex
What you actually do is specify an array with the following coordinates:
-0.7, -0.7, 0.7 // 0
-0.7, 0.7, 0.7 // 1
-0.7, 0.7, ??? // 2
???, ???, ??? // 3
In general, out-of-bounds access to buffer objects has undefined results.
See OpenGL 4.6 API Core Profile Specification - 6.4 Effects of Accessing Outside Buffer Bounds, page 79
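If it helps to see it without the abstraction, the 2-component layout presumably boils down to an attribute specification like this (a sketch, assuming attribute index 0 and a tightly packed buffer):
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), (const void*)0);
With 2 components per vertex, the indices 0 to 3 address exactly the 4 vertices (8 floats) in tri_verticies, and glDrawElements stays within the buffer bounds.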

Related

change Vector<float> to heightmap coordinates OpenGL

I have a file which has terrain coordinates in it like this:
//The first value is how many rows and columns the map has (assuming its a square)
5
-0.9 -0.6 -0.4 -0.6 -0.9
-0.2 0.1 0.3 0.1 -0.3
0 0.4 0.8 0.4 0
-0.2 0.1 0.3 0.1 -0.3
0.5 -0.6 -0.4 -0.6 -0.9
I have put these values into a vector<float> and now need to format it so that I can render the vertices and make a terrain. How do I go about doing this?
std::fstream myfile("heights.csv", std::ios_base::in);
std::vector<float> numbers;
float a;
while (myfile >> a){/*printf("%f ", a);*/
numbers.push_back(a);
}
Just for reference, here is the rest of my code; it's basically code for rendering 2 triangles on the screen, which I plan to remove later (just trying to learn).
/**
* A typical program flow and methods for rendering simple polygons
* using freeglut and openGL + GLSL
*/
#include <stdio.h>
// GLEW loads OpenGL extensions. Required for all OpenGL programs.
#include <GL/glew.h>
#ifdef __APPLE__
#include <GLUT/glut.h>
#else
#include <GL/glut.h>
#endif
// Utility code to load and compile GLSL shader programs
#include "shader.hpp"
#include <iostream>
#include <fstream>
#include <vector>
#define WINDOW_WIDTH 400
#define WINDOW_HEIGHT 400
#define VALS_PER_VERT 3
#define VALS_PER_COLOUR 4
#define NUM_VERTS 3 // Total number of vertices to load/render
using namespace std;
// Handle to our VAO generated in setShaderData method
unsigned int vertexVaoHandle;
unsigned int vertexVaoHandle2;
// Handle to our shader program
unsigned int programID;
/**
* Sets the shader uniforms and vertex data
* This happens ONCE only, before any frames are rendered
* @param id Shader program object to use
* @returns 0 for success, error otherwise
*/
int setShaderData(const unsigned int &id)
{
/*
* What we want to draw
* Each set of 3 vertices (9 floats) defines one triangle
* You can define more triangles to draw here
*/
float vertices[ NUM_VERTS*VALS_PER_VERT ] = {
-0.5f, -0.5f, -0.0f, // Bottom left
0.5f, -0.5f, -0.0f, // Bottom right
0.0f, 0.5f, -0.0f // Top
};
float vertices2[ NUM_VERTS*VALS_PER_VERT ] = {
-0.9f, -0.9f, -0.0f, // Bottom left
0.9f, -0.9f, -0.0f, // Bottom right
0.9f, 0.5f, -0.0f // Top
};
float heightmap[ NUM_VERTS*VALS_PER_VERT ] = {
-0.9f, -0.9f, -0.0f, // Bottom left
0.9f, -0.9f, -0.0f, // Bottom right
0.9f, 0.5f, -0.0f
};
// Colours for each vertex; red, green, blue and alpha
// This data is indexed the same order as the vertex data, but reads 4 values
// Alpha will not be used directly in this example program
float colours[ NUM_VERTS*VALS_PER_COLOUR ] = {
0.8f, 0.7f, 0.5f, 1.0f,
0.3f, 0.7f, 0.1f, 1.0f,
0.8f, 0.2f, 0.5f, 1.0f,
};
float colours2[ NUM_VERTS*VALS_PER_COLOUR ] = {
0.8f, 0.7f, 0.5f, 1.0f,
0.3f, 0.7f, 0.1f, 1.0f,
0.8f, 0.2f, 0.5f, 1.0f,
};
// Generate storage on the GPU for our triangle and make it current.
// A VAO is a set of data buffers on the GPU
glGenVertexArrays(1, &vertexVaoHandle);
glBindVertexArray(vertexVaoHandle);
// Generate new buffers in our VAO
// A single data buffer store for generic, per-vertex attributes
unsigned int buffer[2];
glGenBuffers(2, buffer);
// Allocate GPU memory for our vertices and copy them over
glBindBuffer(GL_ARRAY_BUFFER, buffer[0]);
glBufferData(GL_ARRAY_BUFFER, sizeof(float)*NUM_VERTS*VALS_PER_VERT, vertices, GL_STATIC_DRAW);
// Do the same for our vertex colours
glBindBuffer(GL_ARRAY_BUFFER, buffer[1]);
glBufferData(GL_ARRAY_BUFFER, sizeof(float)*NUM_VERTS*VALS_PER_COLOUR, colours, GL_STATIC_DRAW);
// Now we tell OpenGL how to interpret the data we just gave it
// Tell OpenGL what shader variable it corresponds to
// Tell OpenGL how it's formatted (floating point, 3 values per vertex)
int vertLoc = glGetAttribLocation(id, "a_vertex");
glBindBuffer(GL_ARRAY_BUFFER, buffer[0]);
glEnableVertexAttribArray(vertLoc);
glVertexAttribPointer(vertLoc, VALS_PER_VERT, GL_FLOAT, GL_FALSE, 0, 0);
// Do the same for the vertex colours
int colourLoc = glGetAttribLocation(id, "a_colour");
glBindBuffer(GL_ARRAY_BUFFER, buffer[1]);
glEnableVertexAttribArray(colourLoc);
glVertexAttribPointer(colourLoc, VALS_PER_COLOUR, GL_FLOAT, GL_FALSE, 0, 0);
// An argument of zero un-binds all VAO's and stops us
// from accidentally changing the VAO state
glBindVertexArray(0);
// The same is true for buffers, so we un-bind it too
glBindBuffer(GL_ARRAY_BUFFER, 0);
// SECOND TRI
// Generate storage on the GPU for our triangle and make it current.
// A VAO is a set of data buffers on the GPU
glGenVertexArrays(1, &vertexVaoHandle2);
glBindVertexArray(vertexVaoHandle2);
// Generate new buffers in our VAO
// A single data buffer store for generic, per-vertex attributes
unsigned int buffer2[2];
glGenBuffers(2, buffer2);
// Allocate GPU memory for our vertices and copy them over
glBindBuffer(GL_ARRAY_BUFFER, buffer2[0]);
glBufferData(GL_ARRAY_BUFFER, sizeof(float)*NUM_VERTS*VALS_PER_VERT, vertices2, GL_STATIC_DRAW);
// Do the same for our vertex colours
glBindBuffer(GL_ARRAY_BUFFER, buffer2[1]);
glBufferData(GL_ARRAY_BUFFER, sizeof(float)*NUM_VERTS*VALS_PER_COLOUR, colours2, GL_STATIC_DRAW);
// Now we tell OpenGL how to interpret the data we just gave it
// Tell OpenGL what shader variable it corresponds to
// Tell OpenGL how it's formatted (floating point, 3 values per vertex)
int vertLoc2 = glGetAttribLocation(id, "a_vertex");
glBindBuffer(GL_ARRAY_BUFFER, buffer2[0]);
glEnableVertexAttribArray(vertLoc2);
glVertexAttribPointer(vertLoc2, VALS_PER_VERT, GL_FLOAT, GL_FALSE, 0, 0);
// Do the same for the vertex colours
int colourLoc2 = glGetAttribLocation(id, "a_colour");
glBindBuffer(GL_ARRAY_BUFFER, buffer2[1]);
glEnableVertexAttribArray(colourLoc2);
glVertexAttribPointer(colourLoc2, VALS_PER_COLOUR, GL_FLOAT, GL_FALSE, 0, 0);
// An argument of zero un-binds all VAO's and stops us
// from accidentally changing the VAO state
glBindVertexArray(0);
// The same is true for buffers, so we un-bind it too
glBindBuffer(GL_ARRAY_BUFFER, 0);
return 0; // return success
}
/**
* Renders a frame of the state and shaders we have set up to the window
* Executed each time a frame is to be drawn.
*/
void render()
{
// Clear the previous pixels we have drawn to the colour buffer (display buffer)
// Called each frame so we don't draw over the top of everything previous
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram(programID);
// Make the VAO with our vertex data buffer current
glBindVertexArray(vertexVaoHandle);
// Send command to GPU to draw the data in the current VAO as triangles
glDrawArrays(GL_TRIANGLES, 0, NUM_VERTS);
glBindVertexArray(0); // Un-bind the VAO
// SECOND TRIANGLE
// Make the VAO with our vertex data buffer current
glBindVertexArray(vertexVaoHandle2);
// Send command to GPU to draw the data in the current VAO as triangles
glDrawArrays(GL_TRIANGLES, 0, NUM_VERTS);
glBindVertexArray(0); // Un-bind the VAO
// XXXXXXXX
glutSwapBuffers(); // Swap the back buffer with the front buffer, showing what has been rendered
glFlush(); // Guarantees previous commands have been completed before continuing
}
/**
* Program entry. Sets up OpenGL state, GLSL Shaders and GLUT window and function call backs
* Takes no arguments
*/
int main(int argc, char **argv) {
//READ IN FILE//
std::fstream myfile("heights.csv", std::ios_base::in);
std::vector<float> numbers;
float a;
while (myfile >> a){/*printf("%f ", a);*/
numbers.push_back(a);
}
for (int i=0; i<numbers.size();i++){cout << numbers[i] << endl;}
getchar();
//READ IN FILE//
// Set up GLUT window
glutInit(&argc, argv); // Starts GLUT systems, passing in command line args
glutInitWindowPosition(100, 0); // Positions the window on the screen relative to top left
glutInitWindowSize(WINDOW_WIDTH, WINDOW_HEIGHT); // Size in pixels
// Display mode takes bit flags defining properties you want the window to have;
// GLUT_RGBA : Set the pixel format to have Red Green Blue and Alpha colour channels
// GLUT_DOUBLE : Each frame is drawn to a hidden back buffer hiding the image construction
// GLUT_DEPTH : A depth buffer is kept so that polygons can be drawn in-front/behind others (not used in this application)
#ifdef __APPLE__
glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_DEPTH | GLUT_3_2_CORE_PROFILE);
#else
glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_DEPTH );
#endif
glutCreateWindow("Hello World!"); // Makes the actual window and displays
// Initialize GLEW
glewExperimental = true; // Needed for core profile
if (glewInit() != GLEW_OK) {
fprintf(stderr, "Failed to initialize GLEW\n");
return -1;
}
// Sets the (background) colour for each time the frame-buffer (colour buffer) is cleared
glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
// Set up the shaders we are to use. 0 indicates error.
programID = LoadShaders("minimal.vert", "minimal.frag");
if (programID == 0)
return 1;
// Set this shader program in use
// This is an OpenGL state modification and persists unless changed
glUseProgram(programID);
// Set the vertex data for the program
if (setShaderData(programID) != 0)
return 1;
// Render call to a function we defined,
// that is called each time GLUT thinks we need to update
// the window contents, this method has our drawing logic
glutDisplayFunc(render);
// Start an infinite loop where GLUT calls methods (like render)
// set with glut*Func when needed.
// Runs until something kills the window
glutMainLoop();
return 0;
}
Actually, I could not quite understand what you want to do, but maybe this helps.
Read the first line and get the row/column size. Allocate a two-dimensional array. Then read the next lines, and put the values extracted from each line into the array.
std::ifstream file( "heights.csv" );
std::string line;
getline(file , line);
int ColRowSize;
line >> ColRowSize;
/// allocate map
float** data = malloc(ColRowSize * sizeof(float *));
for(i = 0; i < ColRowSize; i++)
{
array[i] = malloc(ColRowSize * sizeof(float));
}
/// read from file
for(i = 0; i < ColRowSize; i++)
{
getline(file , line);
for(j = 0; j < ColRowSize; j++)
line >> data[i][j];
}
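From there, one way to turn the grid into renderable vertex positions is to take x and z from the grid indices and y from the file values (a sketch; the 0.5 spacing is an arbitrary choice):
// flatten the 2D grid into x,y,z triples for a vertex buffer
std::vector<float> positions;
for (int i = 0; i < ColRowSize; i++)
    for (int j = 0; j < ColRowSize; j++)
    {
        positions.push_back(j * 0.5f);   // x from the column index
        positions.push_back(data[i][j]); // y is the height from the file
        positions.push_back(i * 0.5f);   // z from the row index
    }
// positions.data() can then be uploaded with glBufferData and described
// with a 3-component glVertexAttribPointer, as in the code above.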

OpenGL Height Map from text segmentation fault

I am trying to create a heightmap from 25 float values like so:
#define HEIGHT_VERTS 5
#define VALS_PER_VERT_HEIGHT 5
float heightmapVerts[ HEIGHT_VERTS*VALS_PER_VERT_HEIGHT ] = {
//5
-0.9, -0.6, -0.4, -0.6, -0.9,
-0.2, 0.1, 0.3, 0.1, -0.3,
0, 0.4, 0.8, 0.4, 0,
-0.2, 0.1, 0.3, 0.1, -0.3,
0.5, -0.6, -0.4, -0.6, -0.9,
};
I am getting a segmentation fault when calling:
glDrawArrays(GL_TRIANGLES, 0, HEIGHT_VERTS);
It has been suggested that this is because the size argument of glVertexAttribPointer() must be 1, 2, 3, or 4, while I pass 5 with:
glVertexAttribPointer(vertLocHeight, VALS_PER_VERT_HEIGHT, GL_FLOAT, GL_FALSE, 0, 0);
but if I make these values smaller (e.g. #define VALS_PER_VERT_HEIGHT 3), I get another error saying that I have too many initializers:
error: too many initializers for ‘float [15]’
I have attached the rest of my code for some context, I am very very new to OpenGL so I apologize if the code is messy.
#include <stdio.h>
// GLEW loads OpenGL extensions. Required for all OpenGL programs.
#include <GL/glew.h>
#ifdef __APPLE__
#include <GLUT/glut.h>
#else
#include <GL/glut.h>
#endif
// Utility code to load and compile GLSL shader programs
#include "shader.hpp"
#include <iostream>
#include <fstream>
#include <vector>
#include "glm/glm.hpp"
#define WINDOW_WIDTH 400
#define WINDOW_HEIGHT 400
#define VALS_PER_VERT_HEIGHT 5//5
#define VALS_PER_COLOUR_HEIGHT 4
#define HEIGHT_VERTS 5 //height map vertices per line
using namespace std;
// Handle to our VAO generated in setShaderData method
//heightmap
unsigned int vertexVaoHandleHeight;
// Handle to our shader program
unsigned int programID;
/**
* Sets the shader uniforms and vertex data
* This happens ONCE only, before any frames are rendered
* @param id Shader program object to use
* @returns 0 for success, error otherwise
*/
int setShaderData(const unsigned int &id)
{
float heightmapVerts[ HEIGHT_VERTS*VALS_PER_VERT_HEIGHT ] = {
//5
-0.9, -0.6, -0.4, -0.6, -0.9,
-0.2, 0.1, 0.3, 0.1, -0.3,
0, 0.4, 0.8, 0.4, 0,
-0.2, 0.1, 0.3, 0.1, -0.3,
0.5, -0.6, -0.4, -0.6, -0.9,
};
// Colours for each vertex; red, green, blue and alpha
// This data is indexed the same order as the vertex data, but reads 4 values
// Alpha will not be used directly in this example program
float heightColours[ HEIGHT_VERTS*VALS_PER_COLOUR_HEIGHT ] = {
0.8f, 0.7f, 0.5f, 1.0f,
0.3f, 0.7f, 0.1f, 1.0f,
0.8f, 0.2f, 0.5f, 1.0f,
};
// heightmap stuff ##################################################
// Generate storage on the GPU for our triangle and make it current.
// A VAO is a set of data buffers on the GPU
glGenVertexArrays(1, &vertexVaoHandleHeight);
glBindVertexArray(vertexVaoHandleHeight);
// Generate new buffers in our VAO
// A single data buffer store for generic, per-vertex attributes
unsigned int bufferHeight[2];
glGenBuffers(2, bufferHeight);
// Allocate GPU memory for our vertices and copy them over
glBindBuffer(GL_ARRAY_BUFFER, bufferHeight[0]);
glBufferData(GL_ARRAY_BUFFER, sizeof(float)*HEIGHT_VERTS*VALS_PER_VERT_HEIGHT, heightmapVerts, GL_STATIC_DRAW);
// Do the same for our vertex colours
glBindBuffer(GL_ARRAY_BUFFER, bufferHeight[1]);
glBufferData(GL_ARRAY_BUFFER, sizeof(float)*HEIGHT_VERTS*VALS_PER_COLOUR_HEIGHT, heightColours, GL_STATIC_DRAW);
// Now we tell OpenGL how to interpret the data we just gave it
// Tell OpenGL what shader variable it corresponds to
// Tell OpenGL how it's formatted (floating point, 3 values per vertex)
int vertLocHeight = glGetAttribLocation(id, "a_vertex");
glBindBuffer(GL_ARRAY_BUFFER, bufferHeight[0]);
glEnableVertexAttribArray(vertLocHeight);
glVertexAttribPointer(vertLocHeight, VALS_PER_VERT_HEIGHT, GL_FLOAT, GL_FALSE, 0, 0);
// Do the same for the vertex colours
int colourLocHeight = glGetAttribLocation(id, "a_colour");
glBindBuffer(GL_ARRAY_BUFFER, bufferHeight[1]);
glEnableVertexAttribArray(colourLocHeight);
glVertexAttribPointer(colourLocHeight, VALS_PER_COLOUR_HEIGHT, GL_FLOAT, GL_FALSE, 0, 0);
// heightmap stuff ##################################################
// An argument of zero un-binds all VAO's and stops us
// from accidentally changing the VAO state
glBindVertexArray(0);
// The same is true for buffers, so we un-bind it too
glBindBuffer(GL_ARRAY_BUFFER, 0);
return 0; // return success
}
/**
* Renders a frame of the state and shaders we have set up to the window
* Executed each time a frame is to be drawn.
*/
void render()
{
// Clear the previous pixels we have drawn to the colour buffer (display buffer)
// Called each frame so we don't draw over the top of everything previous
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram(programID);
// HEIGHT MAP STUFF ###################################
// Make the VAO with our vertex data buffer current
glBindVertexArray(vertexVaoHandleHeight);
// Send command to GPU to draw the data in the current VAO as triangles
//CRASHES HERE
glDrawArrays(GL_TRIANGLES, 0, HEIGHT_VERTS);
glBindVertexArray(0); // Un-bind the VAO
// HEIGHT MAP STUFF ###################################
glutSwapBuffers(); // Swap the back buffer with the front buffer, showing what has been rendered
glFlush(); // Guarantees previous commands have been completed before continuing
}
You are messing things up.
1) With glVertexAttribPointer() you set up vertex attributes, and those are almost always vectors of some kind. For vertex positions, if you need to draw the scene in 2D, pass size = 2 (each vertex has x and y coordinates); for 3D, pass 3 (x, y, z).
2) I think your interpretation of the heightmap is also incorrect. You filled the array only with height values (in 3D space, those are the Y coordinates). But where are X and Z? You need to render vertices, so you have to pass all of the x, y and z coordinates so OpenGL knows where each point should be rendered.
Your program crashes because you don't send enough data and OpenGL tries to read from memory that doesn't belong to you.
I assume that you want a heightmap which is a 5x5 grid? Initialize the data this way:
float heightmapVerts[25] =
{
//leave this as it is right now
};
vec3 vertices[5][5];
for(int z_num = 0; z_num < 5; ++z_num)
{
for(int x_num = 0; x_num < 5; ++x_num)
{
vertices[z_num][x_num].x = x_num * 0.5f;
vertices[z_num][x_num].z = z_num * 0.5f;
vertices[z_num][x_num].y = heightmapVerts[z_num * 5 + x_num];
}
}
Then you can call:
glVertexAttribPointer(vertLocHeight, 3, GL_FLOAT, GL_FALSE, 0, 0);
Update:
vec3 stands for a 3-dimensional vector. I wrote it as pseudocode to illustrate the concept, but for the sake of simplicity you may want to use this great library: OpenGL Mathematics (GLM).
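With GLM the pseudocode above maps almost one-to-one (a sketch; GLM is already among your includes):
#include "glm/glm.hpp"

glm::vec3 vertices[5][5];
for (int z_num = 0; z_num < 5; ++z_num)
{
    for (int x_num = 0; x_num < 5; ++x_num)
    {
        vertices[z_num][x_num].x = x_num * 0.5f;
        vertices[z_num][x_num].z = z_num * 0.5f;
        vertices[z_num][x_num].y = heightmapVerts[z_num * 5 + x_num];
    }
}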
Update 2:
One more thing: the color data is also set improperly. You probably want the color to be an RGB value, so every vertex needs three additional floats to represent its color. Also, position and color can be placed in a single VBO; there is no need to separate them.
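For example, an interleaved vertex could look like this (a sketch; the struct name is illustrative):
// one vertex: position (x, y, z) followed by its color (r, g, b)
struct ColoredVertex {
    float x, y, z;
    float r, g, b;
};

// both attributes read from the same VBO, sharing one stride
glVertexAttribPointer(vertLocHeight, 3, GL_FLOAT, GL_FALSE, sizeof(ColoredVertex), (void*)0);
glVertexAttribPointer(colourLocHeight, 3, GL_FLOAT, GL_FALSE, sizeof(ColoredVertex), (void*)(3 * sizeof(float)));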
I am not sure you have all the basics required to do even simple drawing yet. You may want to read these articles:
OpenGL Wiki
This nice tutorial
Lighthouse 3D

Modern equivalent of `gluOrtho2D`

What is the modern equivalent of the OpenGL function gluOrtho2D? clang is giving me deprecation warnings. I believe I need to write some kind of vertex shader? What should it look like?
I started off this answer thinking "It's not that different, you just have to...".
I started writing some code to prove myself right, and ended up not really doing so. Anyway, here are the fruits of my efforts: a minimal annotated example of "modern" OpenGL.
There's a good bit of code you'll need before modern OpenGL will start to act like old-school OpenGL. I'm not going to get into the reasons why you might like to do it the new way (or not) -- there are countless other answers that give a pretty good rundown. Instead I'll post some minimal code that can get you running if you're so inclined.
You should end up with this stunning piece of art: a single magenta triangle on a grey-teal background.
Basic Render Process
Part 1: Vertex buffers
void TestDraw(){
// create a vertex buffer (This is a buffer in video memory)
GLuint my_vertex_buffer;
glGenBuffers(1 /*ask for one buffer*/, &my_vertex_buffer);
const float a_2d_triangle[] =
{
200.0f, 10.0f,
10.0f, 200.0f,
400.0f, 200.0f
};
// GL_ARRAY_BUFFER indicates we're using this for
// vertex data (as opposed to things like feedback, index, or texture data)
// so this call says use my_vertex_buffer as the vertex data source
// this will become relevant as we make draw calls later
glBindBuffer(GL_ARRAY_BUFFER, my_vertex_buffer);
// allocate some space for our buffer
glBufferData(GL_ARRAY_BUFFER, 4096, NULL, GL_DYNAMIC_DRAW);
// we've been a bit optimistic, asking for 4k of space even
// though there is only one triangle.
// the NULL source indicates that we don't have any data
// to fill the buffer quite yet.
// GL_DYNAMIC_DRAW indicates that we intend to change the buffer
// data from frame-to-frame.
// the idea is that we can place more than 3(!) vertices in the
// buffer later as part of normal drawing activity
// now we actually put the vertices into the buffer.
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(a_2d_triangle), a_2d_triangle);
Part 2: Vertex Array Object:
We need to define how the data contained in my_vertex_buffer is structured. This state is contained in a vertex array object (VAO); in modern OpenGL there needs to be at least one of these.
GLuint my_vao;
glGenVertexArrays(1, &my_vao);
//lets use the VAO we created
glBindVertexArray(my_vao);
// now we need to tell the VAO how the vertices in my_vertex_buffer
// are structured
// our vertices are really simple: each one has 2 floats of position data
// they could have been more complicated (texture coordinates, color --
// whatever you want)
// enable the first attribute in our VAO
glEnableVertexAttribArray(0);
// describe what the data for this attribute is like
glVertexAttribPointer(0, // the index we just enabled
2, // the number of components (our two position floats)
GL_FLOAT, // the type of each component
false, // should the GL normalize this for us?
2 * sizeof(float), // number of bytes until the next component like this
(void*)0); // the offset into our vertex buffer where this element starts
Part 3: Shaders
OK, we have our source data all set up, now we can set up the shader which will transform it into pixels
// first create some ids
GLuint my_shader_program = glCreateProgram();
GLuint my_fragment_shader = glCreateShader(GL_FRAGMENT_SHADER);
GLuint my_vertex_shader = glCreateShader(GL_VERTEX_SHADER);
// we'll need to compile the vertex shader and fragment shader
// and then link them into a full "shader program"
// load one string from &my_fragment_source
// the NULL indicates that the string is null-terminated
const char* my_fragment_source = FragmentSourceFromSomewhere();
glShaderSource(my_fragment_shader, 1, &my_fragment_source, NULL);
// now compile it:
glCompileShader(my_fragment_shader);
// then check the result
GLint compiled_ok;
glGetShaderiv(my_fragment_shader, GL_COMPILE_STATUS, &compiled_ok);
if (!compiled_ok){ printf("Oh Noes, fragment shader didn't compile!\n"); }
else{
glAttachShader(my_shader_program, my_fragment_shader);
}
// and again for the vertex shader
const char* my_vertex_source = VertexSourceFromSomewhere();
glShaderSource(my_vertex_shader, 1, &my_vertex_source, NULL);
glCompileShader(my_vertex_shader);
glGetShaderiv(my_vertex_shader, GL_COMPILE_STATUS, &compiled_ok);
if (!compiled_ok){ printf("Oh Noes, vertex shader didn't compile!\n"); }
else{
glAttachShader(my_shader_program, my_vertex_shader);
}
//finally, link the program, and set it active
glLinkProgram(my_shader_program);
glUseProgram(my_shader_program);
Part 4: Drawing things on the screen
//get the screen size
float my_viewport[4];
glGetFloatv(GL_VIEWPORT, my_viewport);
//now create a projection matrix
float my_proj_matrix[16];
MyOrtho2D(my_proj_matrix, 0.0f, my_viewport[2], my_viewport[3], 0.0f);
//"uProjectionMatrix" refers directly to the variable of that name in
// shader source
GLuint my_projection_ref =
glGetUniformLocation(my_shader_program, "uProjectionMatrix");
// send our projection matrix to the shader
glUniformMatrix4fv(my_projection_ref, 1, GL_FALSE, my_proj_matrix );
//clear the background
glClearColor(0.3, 0.4, 0.4, 1.0);
glClear(GL_COLOR_BUFFER_BIT| GL_DEPTH_BUFFER_BIT);
// *now* after that tiny setup, we're ready to draw the best 24 bytes of
// vertex data ever.
// draw the 3 vertices starting at index 0, interpreting them as triangles
glDrawArrays(GL_TRIANGLES, 0, 3);
// now just swap buffers however your window manager lets you
}
And That's it!
... except for the actual
Shaders
I started to get a little tired at this point, so the comments are a bit lacking. Let me know if you'd like anything clarified.
const char* VertexSourceFromSomewhere()
{
return
"#version 330\n"
"layout(location = 0) in vec2 inCoord;\n"
"uniform mat4 uProjectionMatrix;\n"
"void main()\n"
"{\n"
" gl_Position = uProjectionMatrix*(vec4(inCoord, 0, 1.0));\n"
"}\n";
}
const char* FragmentSourceFromSomewhere()
{
return
"#version 330 \n"
"out vec4 outFragColor;\n"
"vec4 DebugMagenta(){ return vec4(1.0, 0.0, 1.0, 1.0); }\n"
"void main() \n"
"{\n"
" outFragColor = DebugMagenta();\n"
"}\n";
}
The Actual Question you asked: Orthographic Projection
As noted, the actual math is just directly from Wikipedia.
void MyOrtho2D(float* mat, float left, float right, float bottom, float top)
{
// this is basically from
// http://en.wikipedia.org/wiki/Orthographic_projection_(geometry)
const float zNear = -1.0f;
const float zFar = 1.0f;
const float inv_z = 1.0f / (zFar - zNear);
const float inv_y = 1.0f / (top - bottom);
const float inv_x = 1.0f / (right - left);
//first column
*mat++ = (2.0f*inv_x);
*mat++ = (0.0f);
*mat++ = (0.0f);
*mat++ = (0.0f);
//second
*mat++ = (0.0f);
*mat++ = (2.0f*inv_y);
*mat++ = (0.0f);
*mat++ = (0.0f);
//third
*mat++ = (0.0f);
*mat++ = (0.0f);
*mat++ = (-2.0f*inv_z);
*mat++ = (0.0f);
//fourth
*mat++ = (-(right + left)*inv_x);
*mat++ = (-(top + bottom)*inv_y);
*mat++ = (-(zFar + zNear)*inv_z);
*mat++ = (1.0f);
}
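As a quick sanity check, you can push the corner points through the matrix by hand (a small sketch; remember the matrix is column-major):
// multiply the column-major 4x4 by (x, y, 0, 1); only x and y outputs shown
void MulPoint2D(const float* m, float x, float y, float* out)
{
    out[0] = m[0]*x + m[4]*y + m[12];
    out[1] = m[1]*x + m[5]*y + m[13];
}
// With MyOrtho2D(m, 0, w, h, 0):
//   (0, 0) maps to (-1, +1), the top-left corner
//   (w, h) maps to (+1, -1), the bottom-right corner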
Modern OpenGL is significantly different. You won't be able to just drop in a new function. Read up...
http://duriansoftware.com/joe/An-intro-to-modern-OpenGL.-Chapter-1:-The-Graphics-Pipeline.html
http://www.arcsynthesis.org/gltut/index.html
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-2-the-first-triangle/

depth buffer got by glReadPixels is always 1

I'm using glReadPixels to get the depth value of a selected pixel, but I always get 1. How can I solve it? Here is the code:
glEnable(GL_DEPTH_TEST);
..
glReadPixels(x, viewport[3] - y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, z);
Am I missing anything? My rendering code is shown below. I use different shaders to draw different parts of the scene, so how should I read the depth value from the buffer correctly?
void onDisplay(void)
{
// Clear the window and the depth buffer
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// calculate the view matrix.
GLFrame eyeFrame;
eyeFrame.MoveUp(gb_eye_height);
eyeFrame.RotateWorld(gb_eye_theta * 3.1415926 / 180.0, 1.0, 0.0, 0.0);
eyeFrame.RotateWorld(gb_eye_phi * 3.1415926 / 180.0, 0.0, 1.0, 0.0);
eyeFrame.MoveForward(-gb_eye_radius);
eyeFrame.GetCameraMatrix(gb_hit_modelview);
gb_modelViewMatrix.PushMatrix(gb_hit_modelview);
// draw coordinate system
if(gb_bCoord)
{
DrawCoordinateAxis();
}
if(gb_bTexture)
{
GLfloat vEyeLight[] = { -100.0f, 100.0f, 150.0f };
GLfloat vAmbientColor[] = { 0.2f, 0.2f, 0.2f, 1.0f };
GLfloat vDiffuseColor[] = { 1.0f, 1.0f, 1.0f, 1.0f};
glUseProgram(normalMapShader);
glUniform4fv(locAmbient, 1, vAmbientColor);
glUniform4fv(locDiffuse, 1, vDiffuseColor);
glUniform3fv(locLight, 1, vEyeLight);
glUniform1i(locColorMap, 0);
glUniform1i(locNormalMap, 1);
gb_treeskl.Display(SetGeneralColor, SetSelectedColor, 0);
}
else
{
if(!gb_bOnlyVoxel)
{
if(gb_bPoints)
{
//GLfloat vPointColor[] = { 1.0, 1.0, 0.0, 0.6 };
GLfloat vPointColor[] = { 0.2, 0.0, 0.0, 0.9 };
gb_shaderManager.UseStockShader(GLT_SHADER_FLAT, gb_transformPipeline.GetModelViewProjectionMatrix(), vPointColor);
gb_treeskl.Display(NULL, NULL, 1);
}
if(gb_bSkeleton)
{
GLfloat vEyeLight[] = { -100.0f, 100.0f, 150.0f };
glUseProgram(adsPhongShader);
glUniform3fv(locLight, 1, vEyeLight);
gb_treeskl.Display(SetGeneralColor, SetSelectedColor, 0);
}
}
if(gb_bVoxel)
{
GLfloat vEyeLight[] = { -100.0f, 100.0f, 150.0f };
glUseProgram(adsPhongShader);
glUniform3fv(locLight, 1, vEyeLight);
SetVoxelColor();
glPolygonMode(GL_FRONT, GL_LINE);
glLineWidth(1.0f);
gb_treeskl.DisplayVoxel();
glPolygonMode(GL_FRONT, GL_FILL);
}
}
//glUniformMatrix4fv(locMVP, 1, GL_FALSE, gb_transformPipeline.GetModelViewProjectionMatrix());
//glUniformMatrix4fv(locMV, 1, GL_FALSE, gb_transformPipeline.GetModelViewMatrix());
//glUniformMatrix3fv(locNM, 1, GL_FALSE, gb_transformPipeline.GetNormalMatrix());
//gb_sphereBatch.Draw();
gb_modelViewMatrix.PopMatrix();
glutSwapBuffers();
}
I think you are reading correctly; the only problem is that you are not linearizing the depth from the buffer back into the <zNear, zFar> range, hence the ~1 value for the whole screen: because of the logarithmic dependence of depth, almost all the values are very close to 1.
I do it like this:
double glReadDepth(double x,double y,double *per=NULL) // x,y [pixels], per[16]
{
GLfloat _z=0.0; double m[16],z,zFar,zNear;
if (per==NULL){ per=m; glGetDoublev(GL_PROJECTION_MATRIX,per); } // use actual perspective matrix if not passed
zFar =0.5*per[14]*(1.0-((per[10]-1.0)/(per[10]+1.0))); // compute zFar from perspective matrix
zNear=zFar*(per[10]+1.0)/(per[10]-1.0); // compute zNear from perspective matrix
glReadPixels(x,y,1,1,GL_DEPTH_COMPONENT,GL_FLOAT,&_z); // read depth value
z=_z; // logarithmic
z=(2.0*z)-1.0; // logarithmic NDC
z=(2.0*zNear*zFar)/(zFar+zNear-(z*(zFar-zNear))); // linear <zNear,zFar>
return -z;
}
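Usage is then something like this (a sketch; mouse_x and mouse_y are hypothetical window coordinates with a top-left origin):
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);
// flip y because glReadPixels uses a bottom-left origin
double depth = glReadDepth(mouse_x, viewport[3] - mouse_y);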
Do not forget that x,y is in pixels and (0,0) is the bottom-left corner!!! The returned depth is in the range <zNear, zFar>. The function assumes you are using a perspective transform like this:
void glPerspective(double fovy,double aspect,double zNear,double zFar)
{
double per[16],f;
for (int i=0;i<16;i++) per[i]=0.0;
// original gluProjection
// f=divide(1.0,tan(0.5*fovy*deg))
// per[ 0]=f/aspect;
// per[ 5]=f;
// corrected gluProjection
f=divide(1.0,tan(0.5*fovy*deg*aspect));
per[ 0]=f;
per[ 5]=f*aspect;
// z range
per[10]=divide(zFar+zNear,zNear-zFar);
per[11]=-1.0;
per[14]=divide(2.0*zFar*zNear,zNear-zFar);
glLoadMatrixd(per);
}
Beware: without a linear depth buffer, the depth accuracy will be good only for objects close to the camera. For more info see:
How to correctly linearize depth in OpenGL ES in iOS?
If the problem persists, there might be another reason for it. Do you have a depth buffer in your pixel format? On Windows you can check like this:
Getting a window's pixel format
A missing depth buffer could explain why the value is always exactly 1 (and not something like ~0.997). In such a case you need to change the init of your window, enabling some bits for the depth buffer (16/24/32). See:
What is the proper OpenGL initialisation on Intel HD 3000?
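With GLUT, for example, requesting a depth buffer is just a display-mode flag at window creation (the same flag used in the code earlier on this page):
glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_DEPTH);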
For more detailed info about using this technique (with C++ example) see:
OpenGL 3D-raypicking with high poly meshes
Well, you missed pasting the really relevant parts of the code. Also, the status of the depth testing unit has no influence on what glReadPixels delivers. How about you post your rendering code as well?
Update
After a buffer swap (SwapBuffers) the contents of the back buffer are undefined, and the default state for framebuffer reads is to read from the back buffer. Technically, double buffering applies only to the color component, not the depth and stencil components, but you might run into a driver issue with that.
I suggest two tests to rule out those:
Do a read of the depth buffer with glReadBuffer(GL_BACK); right before the SwapBuffers.
Select the front buffer with glReadBuffer(GL_FRONT); for reading after SwapBuffers
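In code, the two tests look roughly like this (a sketch; SwapBuffers() stands in for whatever swap call your framework uses, e.g. glutSwapBuffers()):
// Test 1: read explicitly from the back buffer, before the swap
glReadBuffer(GL_BACK);
glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &z);
SwapBuffers();

// Test 2: swap first, then read explicitly from the front buffer
SwapBuffers();
glReadBuffer(GL_FRONT);
glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &z);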
Also, please specify in which context (your program's, and the OpenGL context too) you did your glReadPixels when this problem occurs, and check whether you can read color values correctly.

How to re-write 2D OpenGL app for OpenGL ES?

I am working on an OpenGL 2D game with sprite graphics. I was recently advised that I should use OpenGL ES calls, as it is a subset of OpenGL and would allow me to port the game more easily to mobile platforms. The majority of the code is just calls to a draw_img function, which is defined like so:
void draw_img(float x, float y, float w, float h, GLuint tex,float r=1,float g=1, float b=1) {
glColor3f(r,g,b);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f);
glVertex2f( x, y);
glTexCoord2f(1.0f, 0.0f);
glVertex2f(w+x, y);
glTexCoord2f(1.0f, 1.0f);
glVertex2f( w+x, h+y);
glTexCoord2f(0.0f, 1.0f);
glVertex2f( x, h+y);
glEnd();
}
What do I need to change to make this OpenGL ES compatible? Also, the reason I am using fixed-function rather than shaders is that I am developing on a machine which doesn't support GLSL.
In OpenGL ES 1.1 use the glVertexPointer(), glColorPointer(), glTexCoordPointer() and glDrawArrays() functions to draw a quad. In contrast to your OpenGL implementation, you will have to describe the structures (vectors, colors, texture coordinates) that your quad consists of instead of just using the built-in glTexCoord2f, glVertex2f and glColor3f methods.
Here is some example code that should do what you want. (I have used the argument names you used in your function definition, so it should be simple to port your code from the example.)
First, you need to define a structure for one vertex of your quad. This will hold the quad vertex positions, colors and texture coordinates.
// Define a simple 2D vector
typedef struct Vec2 {
float x,y;
} Vec2;
// Define a simple 4-byte color
typedef struct Color4B {
GLubyte r,g,b,a; // unsigned, to match GL_UNSIGNED_BYTE in glColorPointer below
} Color4B;
// Define a suitable quad vertex with a color and tex coords.
typedef struct QuadVertex {
Vec2 vect; // 8 bytes
Color4B color; // 4 bytes
Vec2 texCoords; // 8 bytes
} QuadVertex;
Then, you should define a structure describing the whole quad consisting of four vertices:
// Define a quad structure
typedef struct Quad {
QuadVertex tl;
QuadVertex bl;
QuadVertex tr;
QuadVertex br;
} Quad;
Now, instantiate your quad and assign quad vertex information (positions, colors, texture coordinates):
Quad quad;
quad.bl.vect = (Vec2){x,y};
quad.br.vect = (Vec2){w+x,y};
quad.tr.vect = (Vec2){w+x,h+y};
quad.tl.vect = (Vec2){x,h+y};
// scale the 0..1 float colors from draw_img up to 0..255 bytes
quad.tl.color = quad.tr.color = quad.bl.color = quad.br.color
= (Color4B){(GLubyte)(r*255), (GLubyte)(g*255), (GLubyte)(b*255), 255};
quad.tl.texCoords = (Vec2){0,0};
quad.tr.texCoords = (Vec2){1,0};
quad.br.texCoords = (Vec2){1,1};
quad.bl.texCoords = (Vec2){0,1};
Now tell OpenGL how to draw the quad. The calls to gl...Pointer provide OpenGL with the right offsets and sizes to your vertex structure's values, so it can later use that information for drawing the quad.
// "Explain" the quad structure to OpenGL ES
#define kQuadSize sizeof(quad.bl)
intptr_t offset = (intptr_t)&quad; // intptr_t (from <cstdint>) is pointer-sized; plain long is too narrow on 64-bit Windows
// vertex
int diff = offsetof(QuadVertex, vect);
glVertexPointer(2, GL_FLOAT, kQuadSize, (void*)(offset + diff));
// color
diff = offsetof(QuadVertex, color);
glColorPointer(4, GL_UNSIGNED_BYTE, kQuadSize, (void*)(offset + diff));
// texCoods
diff = offsetof(QuadVertex, texCoords);
glTexCoordPointer(2, GL_FLOAT, kQuadSize, (void*)(offset + diff));
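One detail worth adding (assuming a plain OpenGL ES 1.1 fixed-function context): the client-side arrays that the gl...Pointer calls describe also have to be switched on, or the draw call will not read them:
// enable the arrays described above, plus texturing itself
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnable(GL_TEXTURE_2D);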
Finally, assign the texture and draw the quad. glDrawArrays tells OpenGL to use the previously defined offsets together with the values contained in your Quad object to draw the shape defined by 4 vertices.
glBindTexture(GL_TEXTURE_2D, tex);
// Draw the quad
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glBindTexture(GL_TEXTURE_2D, 0);
Please also note that it is perfectly OK to use OpenGL ES 1 if you don't need shaders. The main difference between ES1 and ES2 is that, in ES2, there is no fixed pipeline, so you would need to implement a matrix stack plus shaders for the basic rendering on your own. If you are fine with the functionality offered by the fixed pipeline, just use OpenGL ES 1.