Why do I need glClear(GL_DEPTH_BUFFER_BIT)? - OpenGL

Here is my code:
#include <GL/glew.h> // include GLEW and new version of GL on Windows
#include <GLFW/glfw3.h> // GLFW helper library
#include <stdio.h>

int main () {
    // start GL context and O/S window using the GLFW helper library
    if (!glfwInit ()) {
        fprintf (stderr, "ERROR: could not start GLFW3\n");
        return 1;
    }
    // uncomment these lines if on Apple OS X
    glfwWindowHint (GLFW_CONTEXT_VERSION_MAJOR, 4);
    glfwWindowHint (GLFW_CONTEXT_VERSION_MINOR, 1);
    glfwWindowHint (GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
    glfwWindowHint (GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

    GLFWwindow* window = glfwCreateWindow (640, 480, "Hello Triangle", NULL, NULL);
    if (!window) {
        fprintf (stderr, "ERROR: could not open window with GLFW3\n");
        glfwTerminate();
        return 1;
    }
    glfwMakeContextCurrent (window);

    // start GLEW extension handler
    glewExperimental = GL_TRUE;
    glewInit ();

    // get version info
    const GLubyte* renderer = glGetString (GL_RENDERER); // get renderer string
    const GLubyte* version = glGetString (GL_VERSION); // version as a string
    printf ("Renderer: %s\n", renderer);
    printf ("OpenGL version supported %s\n", version);

    // tell GL to only draw onto a pixel if the shape is closer to the viewer
    glEnable (GL_DEPTH_TEST); // enable depth-testing
    glDepthFunc (GL_LESS); // depth-testing interprets a smaller value as "closer"

    /* OTHER STUFF GOES HERE NEXT */
    float points[] = {
         0.0f,  0.5f, 0.0f,
         0.5f, -0.5f, 0.0f,
        -0.5f, -0.5f, 0.0f
    };

    GLuint vbo = 0;
    glGenBuffers (1, &vbo);
    glBindBuffer (GL_ARRAY_BUFFER, vbo);
    glBufferData (GL_ARRAY_BUFFER, 9 * sizeof (float), points, GL_STATIC_DRAW);

    GLuint vao = 0;
    glGenVertexArrays (1, &vao);
    glBindVertexArray (vao);
    glEnableVertexAttribArray (0);
    glBindBuffer (GL_ARRAY_BUFFER, vbo);
    glVertexAttribPointer (0, 3, GL_FLOAT, GL_FALSE, 0, NULL);

    const char* vertex_shader =
        "#version 410\n"
        "layout(location = 0) in vec4 vPosition;"
        "void main () {"
        " gl_Position = vPosition;"
        "}";
    const char* fragment_shader =
        "#version 410\n"
        "out vec4 frag_colour;"
        "void main () {"
        " frag_colour = vec4 (0.5, 0.0, 0.5, 1.0);"
        "}";

    GLuint vs = glCreateShader (GL_VERTEX_SHADER);
    glShaderSource (vs, 1, &vertex_shader, NULL);
    glCompileShader (vs);
    GLuint fs = glCreateShader (GL_FRAGMENT_SHADER);
    glShaderSource (fs, 1, &fragment_shader, NULL);
    glCompileShader (fs);

    GLuint shader_programme = glCreateProgram ();
    glAttachShader (shader_programme, fs);
    glAttachShader (shader_programme, vs);
    glLinkProgram (shader_programme);

    while (!glfwWindowShouldClose (window)) {
        // wipe the drawing surface clear
        glClear (GL_DEPTH_BUFFER_BIT);
        const GLfloat color[] = {0.0, 0.2, 0.0, 1.0};
        //glClearBufferfv(GL_COLOR, 0, color);
        glUseProgram (shader_programme);
        glBindVertexArray (vao);
        // draw points 0-3 from the currently bound VAO with current in-use shader
        glDrawArrays (GL_TRIANGLES, 0, 3);
        // update other events like input handling
        glfwPollEvents ();
        // put the stuff we've been drawing onto the display
        glfwSwapBuffers (window);
    }

    // close GL context and any other GLFW resources
    glfwTerminate();
    return 0;
}
When I comment out the line glClear(GL_DEPTH_BUFFER_BIT), the window that shows up does not display anything. Does this call matter?
I am using Xcode on Mac OS X 10.1.2. Please help me with this, thanks.

The depth buffer is used to decide if geometry you render is closer to the viewer than geometry you rendered previously. This allows the elimination of hidden geometry.
This test is executed per fragment (pixel). Any time a fragment is rendered, its depth is compared to the corresponding value in the depth buffer. If the new depth is bigger, the fragment is eliminated by the depth test. Otherwise, the fragment is written to the color buffer, and the value in the depth buffer is updated with the depth of the new fragment. The functionality is controlled by these calls you make during setup:
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
This way, if a fragment is covered by multiple triangles, the color of the final pixels will be given by the geometry that was closest to the viewer.
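In pseudocode, the per-fragment test with these settings behaves roughly like this (a conceptual sketch, not actual API code):

// For each fragment at window position (x, y), with glDepthFunc(GL_LESS):
if (fragment_depth < depth_buffer[x][y]) {
    color_buffer[x][y] = fragment_color; // fragment passes the depth test
    depth_buffer[x][y] = fragment_depth; // depth buffer is updated
}
// otherwise the fragment is discarded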
Now, if you don't clear the depth buffer at the start of each frame, the comparison to the value in the depth buffer described above will use whatever value happens to be in the depth buffer. This could be a value from a previous frame, or an uninitialized garbage value. Therefore, fragments can be eliminated by the depth test even though no fragments in the current frame were drawn at the same position before. In the extreme case, all fragments are eliminated by the depth test, and you see nothing at all.
Unless you are certain that you will render something to all pixels in your window, you will also want to clear the color buffer at the start of the frame. So your clear call should be:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
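If you also want a specific background color and depth value, set the clear values before clearing; a minimal sketch (the values are illustrative):

glClearColor(0.0f, 0.2f, 0.0f, 1.0f); // color the color buffer is cleared to
glClearDepth(1.0);                    // the default: the farthest possible depth
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);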

// wipe the drawing surface clear
glClear (GL_DEPTH_BUFFER_BIT);
What the comment above that line means is that it clears the depth buffer. The depth buffer is the part of the framebuffer that makes objects be obstructed by other objects in front of them. Without clearing the depth buffer, you would draw into the depth structure of the previous frame.

Related

glVertexAttribPointer raises GL_INVALID_OPERATION, version 330

I have simplified my example to show the exact abnormal case. I am running this example on macOS 10.14.6, compiling with the clang LLVM compiler and using GLFW3. I also tried running the same example on Windows 10/64 using SFML and got the same error, so the problem is not in the environment.
OpenGL version 4.1 ATI-2.11.20
Shading language version 4.10
The exact code where the problem occurs:
glUseProgram(id_shader);
glEnableVertexAttribArray(param_Position);
//HERE IS ERROR "OpenGL ERROR: 0x00000502 GL_INVALID_OPERATION" RAISED
glVertexAttribPointer(param_Position, 3, GL_FLOAT, (GLboolean) false, 0, vertices);
Here is the full source code:
#include <stdlib.h>
#include <OpenGL/gl3.h>
#include <GLFW/glfw3.h>
#include "engine/Camera.h"

static const char *get_error_string_by_enum(GLenum err)
{
    switch (err) {
        case GL_INVALID_ENUM:
            return "GL_INVALID_ENUM";
        case GL_INVALID_VALUE:
            return "GL_INVALID_VALUE";
        case GL_INVALID_OPERATION:
            return "GL_INVALID_OPERATION";
        case GL_STACK_OVERFLOW:
            return "GL_STACK_OVERFLOW";
        case GL_STACK_UNDERFLOW:
            return "GL_STACK_UNDERFLOW";
        case GL_OUT_OF_MEMORY:
            return "GL_OUT_OF_MEMORY";
#ifdef GL_INVALID_FRAMEBUFFER_OPERATION
        case GL_INVALID_FRAMEBUFFER_OPERATION:
            return "GL_INVALID_FRAMEBUFFER_OPERATION";
#endif
        default:
            return "UNKNOWN";
    }
}

static void check_gl()
{
    char line[300];
    GLenum err = glGetError();
    if (err != GL_NO_ERROR) {
        sprintf(line, "OpenGL ERROR: 0x%.8X %s", err, get_error_string_by_enum(err));
        printf("%s\n", line);
        exit(-1);
    }
}

int main()
{
    char line[2000];
    unsigned int windowWidth = 1024;
    unsigned int windowHeight = 1024;
    GLFWwindow* window;

    //SETUP WINDOW AND CONTEXT
    if (!glfwInit()) {
        fprintf(stdout, "ERROR on glfwInit");
        return -1;
    }
    glfwWindowHint(GLFW_SAMPLES, 4);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    window = glfwCreateWindow(windowWidth, windowHeight, "OpenGL", NULL, NULL);
    if (!window)
    {
        fprintf(stderr, "Unable to create window.");
        glfwTerminate();
        return -1;
    }
    glfwMakeContextCurrent(window);
    sprintf(line, "OpenGL version %s\n", (const char *) glGetString(GL_VERSION));
    fprintf(stdout, line);
    sprintf(line, "Shading language version %s\n", (const char *) glGetString(GL_SHADING_LANGUAGE_VERSION));
    fprintf(stdout, line);

    //SETUP OPENGL
    glViewport(0, 0, windowWidth, windowHeight);
    glEnable(GL_BLEND);
    glEnable(GL_CULL_FACE);
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LEQUAL);
    glDepthMask(true);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    //SETUP SHADER PROGRAM
    GLint success;
    GLuint id_v = glCreateShader(GL_VERTEX_SHADER);
    GLuint id_f = glCreateShader(GL_FRAGMENT_SHADER);
    const char *vertex_shader_source = "#version 330\n"
        "precision mediump float;\n"
        "\n"
        "in vec4 Position;\n"
        "uniform mat4 MVPMatrix;\n"
        "\n"
        "void main()\n"
        "{\n"
        "\tgl_Position = MVPMatrix * Position;\n"
        "\tgl_PointSize = 10.0;\n"
        "}";
    const char *fragment_shader_source = "#version 330\n"
        "precision mediump float;\n"
        "\n"
        "uniform vec4 Color;\n"
        "out vec4 FragCoord;\n"
        "\n"
        "void main()\n"
        "{\n"
        " FragCoord = Color;\n"
        "}";
    glShaderSource(id_v, 1, &vertex_shader_source, NULL);
    glCompileShader(id_v);
    glGetShaderiv(id_v, GL_COMPILE_STATUS, &success);
    if (!success) {
        glGetShaderInfoLog(id_v, 2000, NULL, line);
        fprintf(stderr, line);
        exit(-1);
    }
    glShaderSource(id_f, 1, &fragment_shader_source, NULL);
    glCompileShader(id_f);
    glGetShaderiv(id_f, GL_COMPILE_STATUS, &success);
    if (!success) {
        glGetShaderInfoLog(id_f, 2000, NULL, line);
        fprintf(stderr, line);
        exit(-1);
    }
    GLuint id_shader = glCreateProgram();
    glAttachShader(id_shader, id_v);
    glAttachShader(id_shader, id_f);
    glLinkProgram(id_shader);
    glGetProgramiv(id_shader, GL_LINK_STATUS, &success);
    if (!success) {
        glGetProgramInfoLog(id_shader, 2000, NULL, line);
        fprintf(stderr, "program link error");
        fprintf(stderr, line);
        exit(-1);
    }
    GLuint param_Position = glGetAttribLocation(id_shader, "Position");
    GLuint param_MVPMatrix = glGetUniformLocation(id_shader, "MVPMatrix");
    GLuint param_Color = glGetUniformLocation(id_shader, "Color");
    sprintf(line, "Params: param_Position=%d param_MVPMatrix=%d param_Color=%d\n", param_Position, param_MVPMatrix, param_Color);
    fprintf(stdout, line);

    //SETUP MATRIX
    Camera *c = new Camera();
    c->setCameraType(CameraType::PERSPECTIVE);
    c->setWorldSize(100, 100);
    c->lookFrom(5, 5, 5);
    c->lookAt(0, 0, 0);
    c->setFOV(100);
    c->setUp(0, 0, 1);
    c->calc();
    c->getResultMatrix().dump();

    //SETUP TRIANGLE
    float vertices[] = {
        0, 0, 0,
        1, 0, 0,
        1, 1, 0
    };

    while (!glfwWindowShouldClose(window))
    {
        //CLEAR FRAME
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glClearColor(0.3f, 0.3f, 0.3f, 1.0f);
        //RENDER TRIANGLE
        glUseProgram(id_shader);
        glUniformMatrix4fv(param_MVPMatrix, 1, (GLboolean) false, c->getResultMatrix().data);
        glUniform4f(param_Color, 1.0f, 0.5f, 0.0f, 1.0f);
        glEnableVertexAttribArray(param_Position);
        check_gl();
        //HERE IS ERROR "OpenGL ERROR: 0x00000502 GL_INVALID_OPERATION" RAISED
        glVertexAttribPointer(param_Position, 3, GL_FLOAT, (GLboolean) false, 0, vertices);
        check_gl();
        glDrawArrays(GL_TRIANGLES, 0, 3);
        glfwSwapBuffers(window);
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}
Before the line

glVertexAttribPointer(param_Position, 3, GL_FLOAT, (GLboolean) false, 0, vertices);

no errors are detected; immediately after this line, GL_INVALID_OPERATION is raised.
Program output is:
Environment versions:
OpenGL version 4.1 ATI-2.11.20
Shading language version 4.10
Shader param names:
Params: param_Position=0 param_MVPMatrix=1 param_Color=0
Matrix
-0.593333 -0.342561 -0.577350 -0.577350
0.593333 -0.342561 -0.577350 -0.577350
0.000000 0.685122 -0.577350 -0.577350
0.000000 0.000000 8.640252 8.660253
Error
OpenGL ERROR: 0x00000502 GL_INVALID_OPERATION
I have already spent a few days on this problem and have run out of ideas. I would be grateful for any advice and clarification.
P.S. Here is the glfwinfo output for my system:
/glfwinfo -m3 -n2 --profile=compat
GLFW header version: 3.4.0
GLFW library version: 3.4.0
GLFW library version string: "3.4.0 Cocoa NSGL EGL OSMesa"
OpenGL context version string: "4.1 ATI-2.11.20"
OpenGL context version parsed by GLFW: 4.1.0
OpenGL context flags (0x00000001): forward-compatible
OpenGL context flags parsed by GLFW: forward-compatible
OpenGL profile mask (0x00000001): core
OpenGL profile mask parsed by GLFW: core
OpenGL context renderer string: "AMD Radeon R9 M370X OpenGL Engine"
OpenGL context vendor string: "ATI Technologies Inc."
OpenGL context shading language version: "4.10"
OpenGL framebuffer:
red: 8 green: 8 blue: 8 alpha: 8 depth: 24 stencil: 8
samples: 0 sample buffers: 0
Vulkan loader: missing
Since you use a core profile context (GLFW_OPENGL_CORE_PROFILE), the default Vertex Array Object 0 is not valid; furthermore, you have to use a Vertex Buffer Object.
When glVertexAttribPointer is called, the vertex array specification is stored in the state vector of the currently bound Vertex Array Object. The buffer currently bound to the GL_ARRAY_BUFFER target is associated with the attribute, and the name (value) of that buffer object is stored in the state vector of the VAO.
In a compatibility profile there exists the default vertex array object 0, which can be used at any time, but it is not valid in a core profile context. Furthermore, in a compatibility profile it is not necessary to use a VBO; the last parameter of glVertexAttribPointer can then be a pointer to the vertex data itself.
The easiest solution is to switch to a compatibility profile (GLFW_OPENGL_COMPAT_PROFILE):
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_COMPAT_PROFILE);
If you don't want to do that, or your system doesn't provide one, then you have to read about Vertex Specification. Create a vertex buffer object and a vertex array object before the render loop:
// vertex buffer object
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
// vertex array object
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, vbo); // this is not necessary, because "vbo" is still bound
glVertexAttribPointer(param_Position, 3, GL_FLOAT, (GLboolean) false, 0, nullptr);
// the following is not necessary, you can let them bound
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
And use it in the loop to draw the mesh:
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, 3);

How to set up shaders in OpenGL

I'm working on developing code in OpenGL, and I was completing one of the tutorials for a lesson. However, the code that I completed did not color the triangle. Based on the tutorial, my triangle should come out green, but it keeps turning out white. I think there is an error in the code for my shaders, but I can't seem to find it.
I tried altering the code a few times, and I even moved on to the next tutorial, which shades each vertex. However, my triangle still comes out white.
#include <iostream> //Includes C++ i/o stream
#include <GL/glew.h> //Includes glew header
#include <GL/freeglut.h> //Includes freeglut header

using namespace std; //Uses the standard namespace

#define WINDOW_TITLE "Modern OpenGL" //Macro for window title

//Vertex and Fragment Shader Source Macro
#ifndef GLSL
#define GLSL(Version, Source) "#version " #Version "\n" #Source
#endif

//Variables for window width and height
int WindowWidth = 800, WindowHeight = 600;

/* User-defined Function prototypes to:
 * initialize the program, set the window size,
 * redraw graphics on the window when resized,
 * and render graphics on the screen
 * */
void UInitialize(int, char*[]);
void UInitWindow(int, char*[]);
void UResizeWindow(int, int);
void URenderGraphics(void);
void UCreateVBO(void); //This step is missing from Tutorial 3-3
void UCreateShaders(void);

/*Vertex Shader Program Source Code*/
const GLchar * VertexShader = GLSL(440,
    in layout(location=0) vec4 vertex_Position; //Receive vertex coordinates from attribute 0. i.e. 2
    void main(){
        gl_Position = vertex_Position; //Sends vertex positions to gl_Position vec4
    }
);

/*Fragment Shader Program Source Code*/
const GLchar * FragmentShader = GLSL(440,
    void main(){
        gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0); //Sets the pixels / fragments of the triangle to green
    }
);

//main function. Entry point to the OpenGL Program
int main(int argc, char* argv[])
{
    UInitialize(argc, argv); //Initialize the OpenGL program
    glutMainLoop(); //Starts the OpenGL loop in the background
    exit(EXIT_SUCCESS); //Terminates the program successfully
}

//Implements the UInitialize function
void UInitialize(int argc, char* argv[])
{
    //glew status variable
    GLenum GlewInitResult;
    UInitWindow(argc, argv); //Creates the window
    //Checks glew status
    GlewInitResult = glewInit();
    if(GLEW_OK != GlewInitResult)
    {
        fprintf(stderr, "Error: %s\n", glewGetErrorString(GlewInitResult));
        exit(EXIT_FAILURE);
    }
    //Displays GPU OpenGL version
    fprintf(stdout, "INFO: OpenGL Version: %s\n", glGetString(GL_VERSION));
    UCreateVBO(); //Calls the function to create the Vertex Buffer Object
    UCreateShaders(); //Calls the function to create the Shader Program
    //Sets the background color of the window to black. Optional
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
}

//Implements the UInitWindow function
void UInitWindow(int argc, char* argv[])
{
    //Initializes freeglut
    glutInit(&argc, argv);
    //Sets the window size
    glutInitWindowSize(WindowWidth, WindowHeight);
    //Memory buffer setup for display
    glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
    //Creates a window with the macro placeholder title
    glutCreateWindow(WINDOW_TITLE);
    glutReshapeFunc(UResizeWindow); //Called when the window is resized
    glutDisplayFunc(URenderGraphics); //Renders graphics on the screen
}

//Implements the UResizeWindow function
void UResizeWindow(int Width, int Height)
{
    glViewport(0, 0, Width, Height);
}

//Implements the URenderGraphics function
void URenderGraphics(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); //Clears the screen
    /*Creates the triangle*/
    GLuint totalVertices = 3; //Specifies the number of vertices for the triangle i.e. 3
    glDrawArrays(GL_TRIANGLES, 0, totalVertices); //Draws the triangle
    glutSwapBuffers(); //Flips the back buffer with the front buffer every frame. Similar to GL Flush
}

//Implements the UCreateVBO function
void UCreateVBO(void)
{
    //Specifies coordinates for triangle vertices on x and y
    GLfloat verts[] =
    {
         0.0f,  1.0f, //top-center of the screen
        -1.0f, -1.0f, //bottom-left of the screen
         1.0f, -1.0f  //bottom-right of the screen
    };
    //Stores the size of the verts array, in bytes
    float numVertices = sizeof(verts);
    GLuint myBufferID; //Variable for vertex buffer object id
    glGenBuffers(1, &myBufferID); //Creates 1 buffer
    glBindBuffer(GL_ARRAY_BUFFER, myBufferID); //Activates the buffer
    glBufferData(GL_ARRAY_BUFFER, numVertices, verts, GL_STATIC_DRAW); //Sends vertex or coordinate data to GPU
    /*Creates the Vertex Attribute Pointer*/
    GLuint floatsPerVertex = 2; //Number of coordinates per vertex
    glEnableVertexAttribArray(0); //Specifies the initial position of the coordinates in the buffer
    /*Instructs the GPU on how to handle the vertex buffer object data.
     * Parameters: attribPointerPosition | coordinates per vertex | data type | deactivate normalization | 0 strides | 0 offset
     */
    glVertexAttribPointer(0, floatsPerVertex, GL_FLOAT, GL_FALSE, 0, 0);
}

//Implements the UCreateShaders function
void UCreateShaders(void)
{
    //Create a shader program object
    GLuint ProgramId = glCreateProgram();
    GLuint vertexShaderId = glCreateShader(GL_VERTEX_SHADER); //Create a Vertex Shader Object
    GLuint fragmentShaderId = glCreateShader(GL_FRAGMENT_SHADER); //Create a Fragment Shader Object
    glShaderSource(vertexShaderId, 1, &VertexShader, NULL); //Retrieves the vertex shader source code
    glShaderSource(fragmentShaderId, 1, &FragmentShader, NULL); //Retrieves the fragment shader source code
    glCompileShader(vertexShaderId); //Compile the vertex shader
    glCompileShader(fragmentShaderId); //Compile the fragment shader
    //Attaches the vertex and fragment shaders to the shader program
    glAttachShader(ProgramId, vertexShaderId);
    glAttachShader(ProgramId, fragmentShaderId);
    glLinkProgram(ProgramId); //Links the shader program
    glUseProgram(ProgramId); //Uses the shader program
}
When completed correctly, the code should result in a solid green triangle.
The variable gl_FragColor is unavailable in the GLSL 4.40 core profile, since it was deprecated and later removed from the core profile. Because you don't specify a profile, core is assumed by default. Either use
#version 440 compatibility
in your shaders, or, even better, declare your own output variable, which is the modern core-profile approach:
#version 440 core

layout(location = 0) out vec4 OUT;

void main(){
    OUT = vec4(0.0, 1.0, 0.0, 1.0);
}

OpenGL Triangles Benchmark

I am trying to test how many triangles I can draw on my laptop. I am doing the following on my system:
OS: Windows 10
CPU: Intel Core i5 5200U
GPU: NVIDIA Geforce 820M
Code:
glfwWindowHint(GLFW_SAMPLES, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE); // To make MacOS happy; should not be needed
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

// Open a window and create its OpenGL context
window = glfwCreateWindow( 1024, 768, "Tutorial 02 - Red triangle", NULL, NULL);
if( window == NULL ){
    fprintf( stderr, "Failed to open GLFW window. If you have an Intel GPU, they are not 3.3 compatible. Try the 2.1 version of the tutorials.\n" );
    getchar();
    glfwTerminate();
    return -1;
}
glfwMakeContextCurrent(window);

// Initialize GLEW
glewExperimental = true; // Needed for core profile
if (glewInit() != GLEW_OK) {
    fprintf(stderr, "Failed to initialize GLEW\n");
    getchar();
    glfwTerminate();
    return -1;
}

// Ensure we can capture the escape key being pressed below
glfwSetInputMode(window, GLFW_STICKY_KEYS, GL_TRUE);

// Dark blue background
glClearColor(0.0f, 0.0f, 0.4f, 0.0f);

GLuint VertexArrayID;
glGenVertexArrays(1, &VertexArrayID);
glBindVertexArray(VertexArrayID);

// Create and compile our GLSL program from the shaders
GLuint programID = LoadShaders( "SimpleVertexShader.vertexshader", "SimpleFragmentShader.fragmentshader" );

// Number of triangles
const int v = 200;
static GLfloat g_vertex_buffer_data[v*9] = {
    -1.0f, -1.0f, 1.0f,
     0.8f,  0.f,  0.0f,
     0.5f,  1.0f, 0.0f,
};

// fill buffer of triangles
for (int i = 9; i < v * 9; i += 9)
{
    g_vertex_buffer_data[i]   = g_vertex_buffer_data[0];
    g_vertex_buffer_data[i+1] = g_vertex_buffer_data[1];
    g_vertex_buffer_data[i+2] = g_vertex_buffer_data[2];
    g_vertex_buffer_data[i+3] = g_vertex_buffer_data[3];
    g_vertex_buffer_data[i+4] = g_vertex_buffer_data[4];
    g_vertex_buffer_data[i+5] = g_vertex_buffer_data[5];
    g_vertex_buffer_data[i+6] = g_vertex_buffer_data[6];
    g_vertex_buffer_data[i+7] = g_vertex_buffer_data[7];
    g_vertex_buffer_data[i+8] = g_vertex_buffer_data[8];
}

GLuint vertexbuffer;
glGenBuffers(1, &vertexbuffer);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW);

int frameNr = 0;
char text[100];
do{
    // Clear the screen
    glClear( GL_COLOR_BUFFER_BIT );
    // Use our shader
    glUseProgram(programID);
    // 1rst attribute buffer : vertices
    glEnableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
    glVertexAttribPointer(
        0,        // attribute 0. No particular reason for 0, but must match the layout in the shader.
        3,        // size
        GL_FLOAT, // type
        GL_FALSE, // normalized?
        0,        // stride
        (void*)0  // array buffer offset
    );
    // Draw the triangles!
    glDrawArrays(GL_TRIANGLES, 0, v*3); // v triangles, 3 vertices each
    glDisableVertexAttribArray(0);
    // Swap buffers
    glfwSwapBuffers(window);
    glfwPollEvents();
    //glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW);
    frameNr++;
    sprintf_s(text, "%d %d %d", frameNr, clock() / 1000, (frameNr * 1000) / (clock() + 1));
    glfwSetWindowTitle(window, text);
} // Check if the ESC key was pressed or the window was closed
while( glfwGetKey(window, GLFW_KEY_ESCAPE ) != GLFW_PRESS &&
       glfwWindowShouldClose(window) == 0 );

// Cleanup VBO
glDeleteBuffers(1, &vertexbuffer);
glDeleteVertexArrays(1, &VertexArrayID);
glDeleteProgram(programID);

// Close OpenGL window and terminate GLFW
glfwTerminate();
return 0;
What puzzles me is that I only get about 80 fps with v = 200 triangles.
That works out to about 16,000 triangles per second, which is pretty bad, isn't it?
What am I doing wrong in the code, or can my graphics card really only handle such a low number of triangles?
How many triangles can a modern GPU like a 1080 Ti handle? (I heard in theory 11 billion, although I know in reality it's much lower.)
Since I don't yet have enough reputation to comment, let me ask here: how large are your triangles? It's hard to tell without having seen the vertex shader, but assuming the coordinates in your code map directly to normalized device coordinates, your triangle covers a significant part of the screen. If I'm not mistaken, you basically draw the same triangle over and over on top of itself, so you will most likely be fill-rate limited. To get more meaningful results, you might rather draw a grid of non-overlapping triangles, or at least a random triangle soup, instead (see the sketch below). To further minimize fill-rate and framebuffer bandwidth requirements, you might also want to make sure that depth buffering and blending are turned off.
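For illustration, here is a rough sketch of such a grid fill. It reuses the question's g_vertex_buffer_data array and v = 200, and assumes the vertex shader passes positions straight through as normalized device coordinates; the grid dimensions are arbitrary as long as rows * cols == v:

// Replace the copy loop: fill the buffer with rows*cols small,
// non-overlapping triangles instead of v copies of the same large one.
const int cols = 20, rows = 10; // rows * cols == v == 200
const float w = 2.0f / cols, h = 2.0f / rows;
int i = 0;
for (int r = 0; r < rows; ++r) {
    for (int c = 0; c < cols; ++c) {
        const float x = -1.0f + c * w; // lower-left corner of this grid cell
        const float y = -1.0f + r * h;
        const GLfloat tri[9] = { x, y, 0.0f,  x + w, y, 0.0f,  x, y + h, 0.0f };
        for (int k = 0; k < 9; ++k)
            g_vertex_buffer_data[i++] = tri[k];
    }
}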
If you're interested in raw triangles per second, why do you enable MSAA? Doing so just artificially amplifies rasterizer load. As others have noted, V-Sync is most likely already switched off, since 80 Hz would be a rather odd refresh rate, but better make sure and explicitly disable it via glfwSwapInterval(0). Rather than estimating the total frame time as you do, you might also want to measure the actual drawing time on the GPU using a GL_TIME_ELAPSED query.
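A minimal sketch of both suggestions, assuming an OpenGL 3.3+ context (timer queries are core since 3.3); the query object would be created once at startup:

glfwSwapInterval(0); // disable V-Sync for the current context

GLuint query;
glGenQueries(1, &query);

// ... inside the render loop, around the draw call:
glBeginQuery(GL_TIME_ELAPSED, query);
glDrawArrays(GL_TRIANGLES, 0, v * 3);
glEndQuery(GL_TIME_ELAPSED);

// Blocks until the GPU has finished; acceptable for a benchmark.
GLuint64 elapsed_ns = 0;
glGetQueryObjectui64v(query, GL_QUERY_RESULT, &elapsed_ns);
printf("draw call took %.3f ms on the GPU\n", elapsed_ns / 1.0e6);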

GLFW - many errors in this code

So my university lecturer gave us this code and it doesn't work. It never has, and no one has been able to get it to work so far. Are we being stupid, or is our lecturer giving us broken material? I seriously can't figure this out and need help. I managed to get part way through fixing many mistakes, but after that the issues got harder and harder to solve, despite this being '100% working' code. Side note: all the directories are formatted correctly and additional dependencies have all been set up correctly, to the best of my knowledge.
//First Shader Handling Program
#include "stdafx.h"
#include "gl_core_4_3.hpp"
#include <GLFW/glfw3.h>
int _tmain(int argc, _TCHAR* argv[])
{
//Select the 4.3 core profile
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, TRUE);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
//Start the OpenGL context and open a window using the //GLFW helper library
if (!glfwInit()) {
fprintf(stderr, "ERROR: could not start GLFW3\n");
glfwTerminate();
return 1;
}
GLFWwindow* window = glfwCreateWindow(640, 480, "First GLSL Triangle", NULL, NULL);
if (!window) {
fprintf(stderr, "ERROR: could not open window with GLFW3\n");
glfwTerminate();
return 1;
}
glfwMakeContextCurrent(window);
//Load the OpenGL functions for C++ gl::exts::LoadTest didLoad = gl::sys::LoadFunctions(); if (!didLoad) {
//Load failed
fprintf(stderr, "ERROR: GLLoadGen failed to load functions\n");
glfwTerminate();
return 1;
}
printf("Number of functions that failed to load : %i.\n", didLoad.GetNumMissing());
//Tell OpenGL to only draw a pixel if its shape is closer to //the viewer
//i.e. Enable depth testing with smaller depth value //interpreted as being closer gl::Enable(gl::DEPTH_TEST); gl::DepthFunc(gl::LESS);
//Set up the vertices for a triangle
float points[] = {
0.0f, 0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
-0.5f, -0.5f, 0.0f
};
//Create a vertex buffer object to hold this data GLuint vbo=0;
gl::GenBuffers(1, &vbo);
gl::BindBuffer(gl::ARRAY_BUFFER, vbo);
gl::BufferData(gl::ARRAY_BUFFER, 9 * sizeof(float), points,
gl::STATIC_DRAW);
//Create a vertex array object
GLuint vao = 0;
gl::GenVertexArrays(1, &vao);
gl::BindVertexArray(vao);
gl::EnableVertexAttribArray(0);
gl::BindBuffer(gl::ARRAY_BUFFER, vbo);
gl::VertexAttribPointer(0, 3, gl::FLOAT, FALSE, 0, NULL);
//The shader code strings which later we will put in //separate files
//The Vertex Shader
const char* vertex_shader =
"#version 400\n"
"in vec3 vp;"
"void main() {"
" gl_Position = vec4(vp, 1.0);"
"}";
//The Fragment Shader
const char* fragment_shader =
"#version 400\n"
"out vec4 frag_colour;"
"void main() {"
" frag_colour = vec4(1.0, 0.5, 0.0, 1.0);"
"}";
//Load the strings into shader objects and compile GLuint vs = gl::CreateShader(gl::VERTEX_SHADER); gl::ShaderSource(vs, 1, &vertex_shader, NULL); gl::CompileShader(vs);
GLuint fs = gl::CreateShader(gl::FRAGMENT_SHADER); gl::ShaderSource(fs, 1, &fragment_shader, NULL); gl::CompileShader(fs);
//Compiled shaders must be compiled into a single executable //GPU shader program
//Create empty program and attach shaders GLuint shader_program = gl::CreateProgram(); gl::AttachShader(shader_program, fs); gl::AttachShader(shader_program, vs); gl::LinkProgram(shader_program);
//Now draw
while (!glfwWindowShouldClose(window)) {
//Clear the drawing surface
gl::Clear(gl::COLOR_BUFFER_BIT | gl::DEPTH_BUFFER_BIT);
gl::UseProgram(shader_program);
gl::BindVertexArray(vao);
//Draw point 0 to 3 from the currently bound VAO with
//current in-use shader
gl::DrawArrays(gl::TRIANGLES, 0, 3);
//update GLFW event handling
glfwPollEvents();
//Put the stuff we have been drawing onto the display glfwSwapBuffers(window);
}
//Close GLFW and end
glfwTerminate();
return 0;
}
Your line endings seem to have been mangled.
There are multiple lines in your code where the actual code was not broken onto a new line, so that code now sits on the same line as a comment and is therefore not being executed. This is your program with proper line endings:
//First Shader Handling Program
#include "stdafx.h"
#include "gl_core_4_3.hpp"
#include <GLFW/glfw3.h>

int _tmain(int argc, _TCHAR* argv[])
{
    //Select the 4.3 core profile
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, TRUE);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    //Start the OpenGL context and open a window using the
    //GLFW helper library
    if (!glfwInit()) {
        fprintf(stderr, "ERROR: could not start GLFW3\n");
        glfwTerminate();
        return 1;
    }
    GLFWwindow* window = glfwCreateWindow(640, 480, "First GLSL Triangle", NULL, NULL);
    if (!window) {
        fprintf(stderr, "ERROR: could not open window with GLFW3\n");
        glfwTerminate();
        return 1;
    }
    glfwMakeContextCurrent(window);
    //Load the OpenGL functions for C++
    gl::exts::LoadTest didLoad = gl::sys::LoadFunctions();
    if (!didLoad) {
        //Load failed
        fprintf(stderr, "ERROR: GLLoadGen failed to load functions\n");
        glfwTerminate();
        return 1;
    }
    printf("Number of functions that failed to load : %i.\n", didLoad.GetNumMissing());
    //Tell OpenGL to only draw a pixel if its shape is closer to
    //the viewer
    //i.e. Enable depth testing with smaller depth value
    //interpreted as being closer
    gl::Enable(gl::DEPTH_TEST);
    gl::DepthFunc(gl::LESS);
    //Set up the vertices for a triangle
    float points[] = {
         0.0f,  0.5f, 0.0f,
         0.5f, -0.5f, 0.0f,
        -0.5f, -0.5f, 0.0f
    };
    //Create a vertex buffer object to hold this data
    GLuint vbo = 0;
    gl::GenBuffers(1, &vbo);
    gl::BindBuffer(gl::ARRAY_BUFFER, vbo);
    gl::BufferData(gl::ARRAY_BUFFER, 9 * sizeof(float), points, gl::STATIC_DRAW);
    //Create a vertex array object
    GLuint vao = 0;
    gl::GenVertexArrays(1, &vao);
    gl::BindVertexArray(vao);
    gl::EnableVertexAttribArray(0);
    gl::BindBuffer(gl::ARRAY_BUFFER, vbo);
    gl::VertexAttribPointer(0, 3, gl::FLOAT, FALSE, 0, NULL);
    //The shader code strings which later we will put in
    //separate files
    //The Vertex Shader
    const char* vertex_shader =
        "#version 400\n"
        "in vec3 vp;"
        "void main() {"
        " gl_Position = vec4(vp, 1.0);"
        "}";
    //The Fragment Shader
    const char* fragment_shader =
        "#version 400\n"
        "out vec4 frag_colour;"
        "void main() {"
        " frag_colour = vec4(1.0, 0.5, 0.0, 1.0);"
        "}";
    //Load the strings into shader objects and compile
    GLuint vs = gl::CreateShader(gl::VERTEX_SHADER);
    gl::ShaderSource(vs, 1, &vertex_shader, NULL);
    gl::CompileShader(vs);
    GLuint fs = gl::CreateShader(gl::FRAGMENT_SHADER);
    gl::ShaderSource(fs, 1, &fragment_shader, NULL);
    gl::CompileShader(fs);
    //Compiled shaders must be compiled into a single executable
    //GPU shader program
    //Create empty program and attach shaders
    GLuint shader_program = gl::CreateProgram();
    gl::AttachShader(shader_program, fs);
    gl::AttachShader(shader_program, vs);
    gl::LinkProgram(shader_program);
    //Now draw
    while (!glfwWindowShouldClose(window)) {
        //Clear the drawing surface
        gl::Clear(gl::COLOR_BUFFER_BIT | gl::DEPTH_BUFFER_BIT);
        gl::UseProgram(shader_program);
        gl::BindVertexArray(vao);
        //Draw point 0 to 3 from the currently bound VAO with
        //current in-use shader
        gl::DrawArrays(gl::TRIANGLES, 0, 3);
        //update GLFW event handling
        glfwPollEvents();
        //Put the stuff we have been drawing onto the display
        glfwSwapBuffers(window);
    }
    //Close GLFW and end
    glfwTerminate();
    return 0;
}

OpenGL will not display anything

I'm trying to get a small triangle to display.
Here is my initialization code:
void PlayerInit(Player P1)
{
    glewExperimental = GL_TRUE;
    glewInit();
    //initialize buffers needed to draw the player
    GLuint vao;
    GLuint buffer;
    glGenVertexArrays(1, &vao);
    glGenBuffers(1, &buffer);
    //bind the buffer and vertex objects
    glBindVertexArray(vao);
    glBindBuffer(GL_ARRAY_BUFFER, buffer);
    //Set up a buffer to hold 6 floats for position, and 9 floats for color
    glBufferData(GL_ARRAY_BUFFER, sizeof(float)*18, NULL, GL_STATIC_DRAW);
    //push the vertices of the player into the buffer
    glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(float)*9, CalcPlayerPoints(P1.GetPosition()));
    //push the color of each player vertex into the buffer
    glBufferSubData(GL_ARRAY_BUFFER, sizeof(float)*9, sizeof(float)*9, CalcPlayerColor(1));
    //create and compile the vertex/fragment shader objects
    GLuint vs = create_shader("vshader.glsl", GL_VERTEX_SHADER);
    GLuint fs = create_shader("fshader.glsl", GL_FRAGMENT_SHADER);
    //create a program object and link the shaders
    GLuint program = glCreateProgram();
    glAttachShader(program, vs);
    glAttachShader(program, fs);
    glLinkProgram(program);
    //error checking for linking
    GLint linked;
    glGetProgramiv(program, GL_LINK_STATUS, &linked);
    if (!linked)
    {
        std::cerr << "Shader program failed to link" << std::endl;
        GLint logSize;
        glGetProgramiv(program, GL_INFO_LOG_LENGTH, &logSize);
        char* logMsg = new char[logSize];
        glGetProgramInfoLog(program, logSize, NULL, logMsg);
        std::cerr << logMsg << std::endl;
        delete [] logMsg;
        exit(EXIT_FAILURE);
    }
    glUseProgram(program);
    //create attributes for color and position to pass to shaders
    //enable each attribute
    GLuint Pos = glGetAttribLocation(program, "vPosition");
    glEnableVertexAttribArray(Pos);
    GLuint Col = glGetAttribLocation(program, "vColor");
    glEnableVertexAttribArray(Col);
    //set a pointer at the proper offset into the buffer for each attribute
    glVertexAttribPointer(Pos, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*) (sizeof(float)*0));
    glVertexAttribPointer(Col, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*) (sizeof(float)*9));
}
I don't have much experience writing shader linkers, so I think that is where the problem might be. I have some error checking in the shader loader, and nothing comes up. So I think that is fine.
Next I have my display and main function:
//display function for the game
void GameDisplay( void )
{
    //set the background color
    glClearColor(1.0, 0.0, 1.0, 1.0);
    //clear the screen
    glClear(GL_COLOR_BUFFER_BIT);
    //Draw the Player
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glutSwapBuffers();
}

//main function
int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);
    glutInitWindowSize(500, 500);
    glutCreateWindow("Asteroids");
    Player P1 = Player(0.0, 0.0);
    PlayerInit(P1);
    glutDisplayFunc(GameDisplay);
    glutMainLoop();
    return 0;
}
Vertex shader:
attribute vec3 vPosition;
attribute vec3 vColor;
varying vec4 color;

void main()
{
    color = vec4(vColor, 1.0);
    gl_Position = vec4(vPosition, 1.0);
}
Fragment shader:
varying vec4 color;

void main()
{
    gl_FragColor = color;
}
That's all of the relevant code. CalcPlayerPoints just returns a float array of size 9 to hold the triangle coordinates. CalcPlayerColor does something similar.
One last thing that may help with diagnosing the problem: whenever I try to exit the program by closing the application window, I hit a breakpoint in glutMainLoop; however, if I close the console window, it exits fine.
Edit: I added the shaders for reference.
Edit: I am using OpenGL version 3.1.
Without the shaders, we can't say whether the faulty part isn't the GLSL (bad vertex transformations, etc.).
Have you tried checking glGetError to see if the problem comes from your initialization code?
Maybe try to set the output of your fragment shader to, say, vec4(1.0, 0.0, 0.0, 1.0) to check whether its normal output is ill-formed.
Your last problem seems to point to undefined behavior, like a bad memory allocation/deallocation, which may take place in your Player class. (By the way, consider passing the object by reference in your initialization code, because at the moment it may trigger a shallow copy and then a double free of some pointer.)
It turns out there was something wrong with how I was returning the array of vertices from the function used to compute them. The rest of the code worked fine after that fix.
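For future readers, the classic mistake behind that symptom looks something like the sketch below. CalcPlayerPoints is not shown in the question, so this is a hypothetical reconstruction of the bug and one possible fix:

#include <vector>

// BROKEN (hypothetical): returning a pointer to a local array. The array is
// destroyed when the function returns, so glBufferSubData later reads a
// dangling pointer:
//
//     float* CalcPlayerPoints(float x, float y) {
//         float verts[9] = { /* ... */ };
//         return verts; // dangling!
//     }

// One fix: return the vertices by value in a container that owns its storage.
std::vector<float> CalcPlayerPoints(float x, float y)
{
    return { x,        y + 0.1f, 0.0f,    // top
             x - 0.1f, y - 0.1f, 0.0f,    // bottom-left
             x + 0.1f, y - 0.1f, 0.0f };  // bottom-right
}

// Caller side (note: size in bytes, data() for the raw pointer):
//     std::vector<float> pts = CalcPlayerPoints(px, py);
//     glBufferSubData(GL_ARRAY_BUFFER, 0, pts.size() * sizeof(float), pts.data());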