OpenGL Triangles Benchmark - C++

I am trying to test how many triangles I can draw on my laptop, so I am running the following on my system:
OS: Windows 10
CPU: Intel Core i5 5200U
GPU: NVIDIA Geforce 820M
Code:
glfwWindowHint(GLFW_SAMPLES, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE); // To make MacOS happy; should not be needed
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
// Open a window and create its OpenGL context
window = glfwCreateWindow( 1024, 768, "Tutorial 02 - Red triangle", NULL, NULL);
if( window == NULL ){
fprintf( stderr, "Failed to open GLFW window. If you have an Intel GPU, they are not 3.3 compatible. Try the 2.1 version of the tutorials.\n" );
getchar();
glfwTerminate();
return -1;
}
glfwMakeContextCurrent(window);
// Initialize GLEW
glewExperimental = true; // Needed for core profile
if (glewInit() != GLEW_OK) {
fprintf(stderr, "Failed to initialize GLEW\n");
getchar();
glfwTerminate();
return -1;
}
// Ensure we can capture the escape key being pressed below
glfwSetInputMode(window, GLFW_STICKY_KEYS, GL_TRUE);
// Dark blue background
glClearColor(0.0f, 0.0f, 0.4f, 0.0f);
GLuint VertexArrayID;
glGenVertexArrays(1, &VertexArrayID);
glBindVertexArray(VertexArrayID);
// Create and compile our GLSL program from the shaders
GLuint programID = LoadShaders( "SimpleVertexShader.vertexshader", "SimpleFragmentShader.fragmentshader" );
// Number of triangles
const int v = 200;
static GLfloat g_vertex_buffer_data[v*9] = {
-1.0f, -1.0f, 1.0f,
0.8f, 0.f, 0.0f,
0.5f, 1.0f, 0.0f,
};
// fill the buffer by duplicating the first triangle
for (int i = 9; i < v * 9; i += 9)
{
    for (int j = 0; j < 9; ++j)
        g_vertex_buffer_data[i + j] = g_vertex_buffer_data[j];
}
GLuint vertexbuffer;
glGenBuffers(1, &vertexbuffer);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW);
int frameNr = 0;
char text[100];
do{
// Clear the screen
glClear( GL_COLOR_BUFFER_BIT );
// Use our shader
glUseProgram(programID);
// 1st attribute buffer : vertices
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
glVertexAttribPointer(
0, // attribute 0. No particular reason for 0, but must match the layout in the shader.
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
// Draw the triangle !
glDrawArrays(GL_TRIANGLES, 0, v*3); // v*3 vertices starting at 0 -> v triangles
glDisableVertexAttribArray(0);
// Swap buffers
glfwSwapBuffers(window);
glfwPollEvents();
//glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW);
frameNr++;
sprintf_s(text, "%d %d %d", frameNr, clock() / 1000, (frameNr * 1000) / (clock() + 1));
glfwSetWindowTitle(window, text);
} // Check if the ESC key was pressed or the window was closed
while( glfwGetKey(window, GLFW_KEY_ESCAPE ) != GLFW_PRESS &&
glfwWindowShouldClose(window) == 0 );
// Cleanup VBO
glDeleteBuffers(1, &vertexbuffer);
glDeleteVertexArrays(1, &VertexArrayID);
glDeleteProgram(programID);
// Close OpenGL window and terminate GLFW
glfwTerminate();
return 0;
What puzzles me is that I only get about 80 fps with v = 200 triangles.
That works out to about 16,000 triangles per second, which seems pretty bad, doesn't it?
What am I doing wrong in the code, or can my graphics card really only handle such a small number of triangles?
How many triangles can a modern GPU like a 1080 Ti handle? (I have heard 11 billion in theory, although I know the real number is much lower.)

Since I don't yet have enough reputation to comment, let me ask here: how large are your triangles? It's hard to tell without having seen the vertex shader, but assuming the coordinates in your code map directly to normalized device coordinates, your triangle covers a significant part of the screen. If I'm not mistaken, you basically draw the same triangle over and over on top of itself, so you will most likely be fill-rate limited. To get more meaningful results, you may instead want to draw a grid of non-overlapping triangles, or at least a random triangle soup. To further minimize fill-rate and framebuffer bandwidth requirements, you may also want to make sure that depth buffering and blending are turned off.
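To make the first suggestion concrete, here is a minimal sketch (my own, not part of the original answer) of filling the same g_vertex_buffer_data array with a grid of small, non-overlapping triangles; it assumes the layout from the question (3 floats per vertex, 9 floats per triangle, v = 200 triangles) and splits clip space into a 20 x 10 grid:
// Hypothetical alternative fill: a cols x rows grid of small, non-overlapping
// triangles spanning clip space [-1, 1] x [-1, 1] (cols * rows must equal v).
const int cols = 20, rows = 10;
const float dx = 2.0f / cols, dy = 2.0f / rows;
for (int r = 0; r < rows; ++r)
{
    for (int c = 0; c < cols; ++c)
    {
        int i = (r * cols + c) * 9;
        float x = -1.0f + c * dx;
        float y = -1.0f + r * dy;
        // one small triangle in the lower-left half of this grid cell
        g_vertex_buffer_data[i + 0] = x;      g_vertex_buffer_data[i + 1] = y;      g_vertex_buffer_data[i + 2] = 0.0f;
        g_vertex_buffer_data[i + 3] = x + dx; g_vertex_buffer_data[i + 4] = y;      g_vertex_buffer_data[i + 5] = 0.0f;
        g_vertex_buffer_data[i + 6] = x;      g_vertex_buffer_data[i + 7] = y + dy; g_vertex_buffer_data[i + 8] = 0.0f;
    }
}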
If you're interested in raw triangles per second, why do you enable MSAA? Doing so just artificially amplifies rasterizer load. As others have noted too, V-Sync is likely switched off as 80 Hz would be a rather weird refresh rate, but better make sure and explicitly switch it off via glfwSwapInterval(0). Rather than estimating total frame time like you do, you might want to consider measuring the actual drawing time on the GPU using a GL_TIME_ELAPSED query.
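For reference, a rough sketch (my own, not from the answer) of both suggestions, assuming the OpenGL 3.3 core context created above; GL_TIME_ELAPSED queries are core in 3.3:
glfwSwapInterval(0); // explicitly disable V-Sync so the frame rate is not capped

GLuint timeQuery;
glGenQueries(1, &timeQuery);

// ... inside the render loop, wrap the draw call in the query:
glBeginQuery(GL_TIME_ELAPSED, timeQuery);
glDrawArrays(GL_TRIANGLES, 0, v * 3);
glEndQuery(GL_TIME_ELAPSED);

// Reading the result waits for the GPU to finish; acceptable for a benchmark.
GLuint64 elapsedNs = 0;
glGetQueryObjectui64v(timeQuery, GL_QUERY_RESULT, &elapsedNs);
double drawMs = elapsedNs / 1.0e6; // GPU time spent on the draw, in milliseconds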

Related

OpenGL 3 Rendering Problems Linux (Ubuntu Mate)

I'm unable to get OpenGL (with GLFW) to render content to the screen. I'm not even able to set a clear color and have it displayed when I run my application; I'm just consistently presented with a black screen.
I have installed the requisite dependencies on my system and set up the build environment such that I'm able to successfully compile my applications (and dependencies) without error. Here is a snippet of the problematic code... You will note that much of the rendering code has actually been commented out. For now it will be sufficient to just have the clear color I chose displayed, to verify that everything is set up correctly:
// Include standard headers
#include <stdio.h>
#include <stdlib.h>
//Include GLEW. Always include it before gl.h and glfw3.h, since it's a bit magic.
#include <GL/glew.h>
// Include GLFW
#include <GLFW/glfw3.h>
// Include GLM
#include <glm/glm.hpp>
#include <GL/glu.h>
#include<common/shader.h>
#include <iostream>
using namespace glm;
int main()
{
// Initialise GLFW
glewExperimental = true; // Needed for core profile
if( !glfwInit() )
{
fprintf( stderr, "Failed to initialize GLFW\n" );
return -1;
}
// Open a window and create its OpenGL context
GLFWwindow* window; // (In the accompanying source code, this variable is global for simplicity)
window = glfwCreateWindow( 1024, 768, "Tutorial 02", NULL, NULL);
if( window == NULL ){
fprintf( stderr, "Failed to open GLFW window. If you have an Intel GPU, they are not 3.3 compatible. Try the 2.1 version of the tutorials.\n" );
glfwTerminate();
return -1;
}
glfwMakeContextCurrent(window); // Initialize GLEW
//glewExperimental=true; // Needed in core profile
if (glewInit() != GLEW_OK) {
fprintf(stderr, "Failed to initialize GLEW\n");
return -1;
}
//INIT VERTEX ARRAY OBJECT (VAO)...
//create Vertex Array Object (VAO)
GLuint VertexArrayID;
//Generate 1 buffer, put the resulting identifier in our Vertex array identifier.
glGenVertexArrays(1, &VertexArrayID);
//Bind the Vertex Array Object (VAO) associated with the specified identifier.
glBindVertexArray(VertexArrayID);
// Create an array of 3 vectors which represents 3 vertices
static const GLfloat g_vertex_buffer_data[] = {
-1.0f, -1.0f, 0.0f,
1.0f, -1.0f, 0.0f,
0.0f, 1.0f, 0.0f,
};
//INIT VERTEX BUFFER OBJECT (VBO)...
// This will identify our vertex buffer
GLuint VertexBufferId;
// Generate 1 buffer, put the resulting identifier in VertexBufferId
glGenBuffers(1, &VertexBufferId);
//Bind the Vertex Buffer Object (VBO) associated with the specified identifier.
glBindBuffer(GL_ARRAY_BUFFER, VertexBufferId);
// Give our vertices to OpenGL.
glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW);
//Compile our Vertex and Fragment shaders into a shader program.
/**
GLuint programId = LoadShaders("../tutorial2-drawing-triangles/SimpleVertexShader.glsl","../tutorial2-drawing-triangles/SimpleFragmentShader.glsl");
if(programId == -1){
printf("An error occured whilst attempting to load one or more shaders. Exiting....");
exit(-1);
}
//glUseProgram(programId); //use our shader program
*/
// Ensure we can capture the escape key being pressed below
glfwSetInputMode(window, GLFW_STICKY_KEYS, GL_TRUE);
do{
// Clear the screen. It's not mentioned before Tutorial 02, but it can cause flickering, so it's there nonetheless.
glClearColor(8.0f, 0.0f, 0.0f, 0.3f);
//glClearColor(1.0f, 1.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
// DRAW OUR TRIANGE...
/**
glBindBuffer(GL_ARRAY_BUFFER, VertexBufferId);
glEnableVertexAttribArray(0); // 1st attribute buffer : vertices
glVertexAttribPointer(
0, // attribute 0. No particular reason for 0, but must match the layout in the shader.
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
// plot the triangle !
glDrawArrays(GL_TRIANGLES, 0, 3); // Starting from vertex 0; 3 vertices total -> 1 triangle
glDisableVertexAttribArray(0); //clean up attribute array
*/
// Swap buffers
glfwSwapBuffers(window);
//poll for and process events.
glfwPollEvents();
} // Check if the ESC key was pressed or the window was closed
while( glfwGetKey(window, GLFW_KEY_ESCAPE ) != GLFW_PRESS &&
glfwWindowShouldClose(window) == 0 );
}
Again, pretty straightforward as far as OpenGL goes; all rendering logic, loading of shaders, etc. has been commented out. I'm just trying to set a clear color and have it displayed to be sure my environment is configured correctly. To build the application I'm using Qt Creator with a custom CMake file. I can post the CMake file if you think it may help determine the problem.
So I managed to solve the problem. I'll attempt to succinctly outline the source of the problem and how I arrived at a resolution, in the hope that it may be useful to others who encounter the same issue:
In a nutshell, the source of the problem was a driver issue. I neglected to mention that I was actually running OpenGL inside an Ubuntu Mate 18.0 VM (via Parallels 16) on a MacBook Pro (with dedicated graphics), and therein lies the problem: until very recently, neither Parallels nor Ubuntu supported OpenGL 3.3 and upwards. I discovered this by adding the following lines to the posted code in order to force a specific OpenGL version:
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
On doing this, the application immediately began to crash, and glGetError() reported that I needed to downgrade to an earlier version of OpenGL, as 3.3 was not compatible with my system.
The solution was two-fold:
Update Parallels to version 17 which now includes a dedicated, third-party virtual GPU (virGL) that is capable of running OpenGL 3.3 code.
Update Ubuntu, or at the very least the kernel, as virGL only works with Linux kernel versions 5.10 and above. (Ubuntu Mate 18 only ships with kernel version 5.04.)
That's it; making the changes described above enabled me to run the code exactly as posted and successfully render a basic triangle to the screen.
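As an aside, a quick way to check which context and driver a program actually received (a small sketch of my own, usable right after glewInit() in the code above) is to print the version and renderer strings:
// Report the OpenGL version, GLSL version and renderer the context actually provides.
printf("OpenGL version: %s\n", (const char*)glGetString(GL_VERSION));
printf("GLSL version:   %s\n", (const char*)glGetString(GL_SHADING_LANGUAGE_VERSION));
printf("Renderer:       %s\n", (const char*)glGetString(GL_RENDERER));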

How can I make a triangle appear on the screen using OpenGL? [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Edit the question to include desired behavior, a specific problem or error, and the shortest code necessary to reproduce the problem. This will help others answer the question.
Closed 3 years ago.
I'm trying to make a triangle appear on the screen using OpenGL, but when I run the code, a black screen appears.
I am currently using a guide I found on the internet (here is the link: http://www.opengl-tutorial.org/beginners-tutorials/tutorial-2-the-first-triangle/).
This is the code:
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
int larghezza = 1024;
int altezza = 768;
int main (){
glewExperimental = true;
if (!glfwInit() )
{
fprintf (stderr, "could not initialize GLFW\n");
return -1;
}
glfwWindowHint(GLFW_SAMPLES, 4);// 4x antialiasing
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR,4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 1); // use OpenGL 4.1
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE); // we don't want old versions of OpenGL
GLFWwindow* window; // create a window
window = glfwCreateWindow(larghezza, altezza, "FINESTRA 1", NULL, NULL);
if (!window){
fprintf (stderr, "Could not open the window!\n");
glfwTerminate();
return -1;
}
glfwMakeContextCurrent(window); // initialize GLEW
glewExperimental = true;
if (glewInit() != GLEW_OK){
fprintf (stderr, "could not initialize GLEW");
return -1;
}
glfwSetInputMode(window, GLFW_STICKY_KEYS, GL_TRUE);
do {
glClear(GL_COLOR_BUFFER_BIT);
glfwSwapBuffers(window);
glfwPollEvents();
}
while( glfwGetKey(window, GLFW_KEY_ESCAPE ) != GLFW_PRESS &&
glfwWindowShouldClose(window) == 0 );
GLuint VertexArrayID;
glGenVertexArrays(1, &VertexArrayID);
glBindVertexArray(VertexArrayID);
static const GLfloat g_vertex_buffer_data[] = {
-1.0f, -1.0f, 0.0f,
1.0f, -1.0f, 0.0f,
0.0f, 1.0f, 0.0f,
};
GLuint vertexbuffer;
// Generate 1 buffer, put the resulting identifier in vertexbuffer
glGenBuffers(1, &vertexbuffer);
// The following commands will talk about our 'vertexbuffer' buffer
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
// Give our vertices to OpenGL.
glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
glVertexAttribPointer(
0, // attribute 0. No particular reason for 0, but must match the layout in the shader.
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
// Draw the triangle !
glDrawArrays(GL_TRIANGLES, 0, 3); // Starting from vertex 0; 3 vertices total -> 1 triangle
glDisableVertexAttribArray(0);
}
How can I fix it?
The glDrawArrays() call should be in the main rendering event loop, specifically inside the do {} block.
Ask yourself: why would this code render anything if your draw call is never reached while the loop is running?
As written, the drawing code only runs after you press the Escape key and the loop exits. The triangle is then drawn into the hidden back buffer, which only becomes visible once it is swapped to the screen; since you never swap buffers after the glDrawArrays() call, you will never see it.
I don't know the tutorial site you are using, but I suggest a proper tutorial such as learnopengl.com; even going through the Getting Started section is enough for a good intro.
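For illustration, a rough sketch (my own, reusing the names from the posted code) of the corrected structure: setup before the loop, draw and swap inside it. Note that in a core profile a shader program also has to be bound for the triangle to actually appear, as the linked tutorial shows.
// Setup once, before the render loop: VAO, VBO and vertex data upload.
GLuint VertexArrayID;
glGenVertexArrays(1, &VertexArrayID);
glBindVertexArray(VertexArrayID);

GLuint vertexbuffer;
glGenBuffers(1, &vertexbuffer);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW);

do {
    glClear(GL_COLOR_BUFFER_BIT);

    // Draw every frame, then swap buffers so the result becomes visible.
    glEnableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableVertexAttribArray(0);

    glfwSwapBuffers(window);
    glfwPollEvents();
} while (glfwGetKey(window, GLFW_KEY_ESCAPE) != GLFW_PRESS &&
         glfwWindowShouldClose(window) == 0);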

Why do I need glClear(GL_DEPTH_BUFFER_BIT)?

Here is my code:
#include <GL/glew.h> // include GLEW and new version of GL on Windows
#include <GLFW/glfw3.h> // GLFW helper library
#include <stdio.h>
int main () {
// start GL context and O/S window using the GLFW helper library
if (!glfwInit ()) {
fprintf (stderr, "ERROR: could not start GLFW3\n");
return 1;
}
// uncomment these lines if on Apple OS X
glfwWindowHint (GLFW_CONTEXT_VERSION_MAJOR, 4);
glfwWindowHint (GLFW_CONTEXT_VERSION_MINOR, 1);
glfwWindowHint (GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint (GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
GLFWwindow* window = glfwCreateWindow (640, 480, "Hello Triangle", NULL, NULL);
if (!window) {
fprintf (stderr, "ERROR: could not open window with GLFW3\n");
glfwTerminate();
return 1;
}
glfwMakeContextCurrent (window);
// start GLEW extension handler
glewExperimental = GL_TRUE;
glewInit ();
// get version info
const GLubyte* renderer = glGetString (GL_RENDERER); // get renderer string
const GLubyte* version = glGetString (GL_VERSION); // version as a string
printf ("Renderer: %s\n", renderer);
printf ("OpenGL version supported %s\n", version);
// tell GL to only draw onto a pixel if the shape is closer to the viewer
glEnable (GL_DEPTH_TEST); // enable depth-testing
glDepthFunc (GL_LESS); // depth-testing interprets a smaller value as "closer"
/* OTHER STUFF GOES HERE NEXT */
float points[] = {
0.0f, 0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
-0.5f, -0.5f, 0.0f
};
GLuint vbo = 0;
glGenBuffers (1, &vbo);
glBindBuffer (GL_ARRAY_BUFFER, vbo);
glBufferData (GL_ARRAY_BUFFER, 9 * sizeof (float), points, GL_STATIC_DRAW);
GLuint vao = 0;
glGenVertexArrays (1, &vao);
glBindVertexArray (vao);
glEnableVertexAttribArray (0);
glBindBuffer (GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer (0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
const char* vertex_shader =
"#version 410\n"
"layout(location = 0) in vec4 vPosition;"
"void main () {"
" gl_Position = vPosition;"
"}";
const char* fragment_shader =
"#version 410\n"
"out vec4 frag_colour;"
"void main () {"
" frag_colour = vec4 (0.5, 0.0, 0.5, 1.0);"
"}";
GLuint vs = glCreateShader (GL_VERTEX_SHADER);
glShaderSource (vs, 1, &vertex_shader, NULL);
glCompileShader (vs);
GLuint fs = glCreateShader (GL_FRAGMENT_SHADER);
glShaderSource (fs, 1, &fragment_shader, NULL);
glCompileShader (fs);
GLuint shader_programme = glCreateProgram ();
glAttachShader (shader_programme, fs);
glAttachShader (shader_programme, vs);
glLinkProgram (shader_programme);
while (!glfwWindowShouldClose (window)) {
// wipe the drawing surface clear
glClear (GL_DEPTH_BUFFER_BIT);
const GLfloat color[]={0.0,0.2,0.0,1.0};
//glClearBufferfv(GL_COLOR,0,color);
glUseProgram (shader_programme);
glBindVertexArray (vao);
// draw points 0-3 from the currently bound VAO with current in-use shader
glDrawArrays (GL_TRIANGLES, 0, 3);
// update other events like input handling
glfwPollEvents ();
// put the stuff we've been drawing onto the display
glfwSwapBuffers (window);
} // close GL context and any other GLFW resources
glfwTerminate();
return 0;
}
When I comment out the line glClear(GL_DEPTH_BUFFER_BIT), the window that shows up does not display anything. Does this call matter?
I am using Xcode on Mac OS X 10.1.2. Please help me with this, thanks.
The depth buffer is used to decide if geometry you render is closer to the viewer than geometry you rendered previously. This allows the elimination of hidden geometry.
This test is executed per fragment (pixel). Any time a fragment is rendered, its depth is compared to the corresponding value in the depth buffer. If the new depth is bigger, the fragment is eliminated by the depth test. Otherwise, the fragment is written to the color buffer, and the value in the depth buffer is updated with the depth of the new fragment. The functionality is controlled by these calls you make during setup:
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
This way, if a fragment is covered by multiple triangles, the color of the final pixels will be given by the geometry that was closest to the viewer.
Now, if you don't clear the depth buffer at the start of each frame, the comparison to the value in the depth buffer described above will use whatever value happens to be in the depth buffer. This could be a value from a previous frame, or an uninitialized garbage value. Therefore, fragments can be eliminated by the depth test even though no fragments in the current frame were drawn at the same position before. In the extreme case, all fragments are eliminated by the depth test, and you see nothing at all.
Unless you are certain that you will render something to all pixels in your window, you will also want to clear the color buffer at the start of the frame. So your clear call should be:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
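Applied to the loop from the question (a sketch reusing the names from the posted code; the clear color is just an example), that looks like:
while (!glfwWindowShouldClose(window)) {
    // Clear both the color and the depth buffer at the start of every frame.
    glClearColor(0.0f, 0.2f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glUseProgram(shader_programme);
    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLES, 0, 3);

    glfwPollEvents();
    glfwSwapBuffers(window);
}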
// wipe the drawing surface clear
glClear (GL_DEPTH_BUFFER_BIT);
What the comment above that call means is that it clears the depth buffer. The depth buffer is the part of the framebuffer that makes objects be obstructed by other objects in front of them. Without clearing the depth buffer, you would draw against the depth values left over from the previous frame.

XCode error at run-time: "address doesn't contain a section that points to a section in a object file"

I'm trying to learn OpenGL on Mac, by following the tutorials here. This involves setting up GLEW, GLFW, and GLM, which I did using homebrew. Being completely new to OpenGL, XCode, and C++, it took me a bit of googling, but I managed to figure out all the various header paths, library paths, and linker arguments to get set up.
Now I'm receiving the error "address doesn't contain a section that points to a section in a object file" when running this code. Being unfamiliar with the (frankly bizarre) XCode UI, I'm having trouble tracing the source of the problem. Google just points me at Objective-C related articles about ARC. Well, this isn't Obj-C, and I'm not using ARC, so no luck there.
Any ideas what might be causing it?
// Include standard headers
#include <stdio.h>
#include <stdlib.h>
// Include GLEW. Always include it before gl.h and glfw.h, since it's a bit magic.
#include <GL/glew.h>
// Include GLFW
#include <GL/glfw.h>
// Include GLM
#include <glm/glm.hpp>
using namespace glm;
//http://www.opengl-tutorial.org/beginners-tutorials/tutorial-1-opening-a-window/
int main(int argv, char ** argc){
// Initialise GLFW
if( !glfwInit() )
{
fprintf( stderr, "Failed to initialize GLFW\n" );
return -1;
}
glfwOpenWindowHint(GLFW_FSAA_SAMPLES, 4); // 4x antialiasing
//http://www.glfw.org/faq.html#42__how_do_i_create_an_opengl_30_context
glfwOpenWindowHint(GLFW_OPENGL_VERSION_MAJOR, 3);
glfwOpenWindowHint(GLFW_OPENGL_VERSION_MINOR, 2);
glfwOpenWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwOpenWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwOpenWindowHint(GLFW_OPENGL_PROFILE, 0); //We don't want the old OpenGL
// Open a window and create its OpenGL context
if( !glfwOpenWindow( 1024, 768, 0,0,0,0, 32,0, GLFW_WINDOW ) )
{
fprintf( stderr, "Failed to open GLFW window\n" );
glfwTerminate();
return -1;
}
//We can't call glGetString until the window is created (on mac).
//http://www.idevgames.com/forums/thread-4218.html
const GLubyte* v=glGetString(GL_VERSION);
printf("OpenGL version: %s\n", (char*)v);
/***** BAD CODE SOMEWHERE IN THIS BLOCK.*/
{
//http://www.opengl-tutorial.org/beginners-tutorials/tutorial-2-the-first-triangle/
GLuint vertexArrayID;
//generate a vertexArray, put the identifier in VertexArrayID
glGenVertexArrays(1, &vertexArrayID);
//bind it. (?)
glBindVertexArray(vertexArrayID);
// An array of 3 vectors which represents 3 vertices
static const GLfloat g_vertex_buffer_data[] = {
-1.0f, -1.0f, 0.0f,
1.0f, -1.0f, 0.0f,
0.0f, 1.0f, 0.0f,
};
// This will identify our vertex buffer
GLuint vertexBufferId;
// Generate 1 buffer, put the resulting identifier in vertexbuffer
glGenBuffers(1, &vertexBufferId);
// The following commands will talk about our 'vertexbuffer' buffer
glBindBuffer(GL_ARRAY_BUFFER, vertexBufferId);
// Give our vertices to OpenGL.
glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW);
/* END BAD CODE BLOCK */
}
// Initialize GLEW
glewExperimental=true; // Needed in core profile
if (glewInit() != GLEW_OK) {
fprintf(stderr, "Failed to initialize GLEW\n");
return -1;
}
glfwSetWindowTitle( "Tutorial 01" );
// Ensure we can capture the escape key being pressed below
glfwEnable( GLFW_STICKY_KEYS );
do{
// 1st attribute buffer : vertices
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, vertexBufferId);
glVertexAttribPointer(
0, // attribute 0. No particular reason for 0, but must match the layout in the shader.
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
// Draw the triangle !
glDrawArrays(GL_TRIANGLES, 0, 3); // Starting from vertex 0; 3 vertices total -> 1 triangle
glDisableVertexAttribArray(0);
// Swap buffers
glfwSwapBuffers();
} // Check if the ESC key was pressed or the window was closed
while( glfwGetKey( GLFW_KEY_ESC ) != GLFW_PRESS &&
glfwGetWindowParam( GLFW_OPENED ) );
}

glm::rotate() call fails to compile?

I can't get the call to glm::rotate(mat4, int, vec3(x,y,z)); to work. VS 2010 is telling me
Intellisense: no instance of function template "glm::gtc::matrix_transform::rotate" matches the argument
I'm using glm-0.9.1
I've seen this Stack Overflow question, but the solution still triggers an error with IntelliSense: glm rotate usage in Opengl
I can't really get it to accept any of the overloads. I might just be missing something obvious, though.
I tried to make the rotate call easy to find in the code; it is a bit of the way down.
Here is some code:
#include <GL/glew.h>
#include <GL/glfw.h>
#include <GL/glut.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include "loadShader.h"
#include <stdio.h>
#include <stdlib.h>
glm::mat4 Projection;
glm::mat4 View;
glm::mat4 Model;
glm::mat4 MVP;
glm::mat4 t;
glm::mat4 s;
glm::mat4 r;
GLuint programID;
int main()
{
// Initialise GLFW
if( !glfwInit() )
{
fprintf( stderr, "Failed to initialize GLFW\n" );
return -1;
}
glfwOpenWindowHint(GLFW_FSAA_SAMPLES, 4);
glfwOpenWindowHint(GLFW_OPENGL_VERSION_MAJOR, 3);
glfwOpenWindowHint(GLFW_OPENGL_VERSION_MINOR, 3);
glfwOpenWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
// Open a window and create its OpenGL context
if( !glfwOpenWindow( 1024, 768, 0,0,0,0, 32,0, GLFW_WINDOW ) )
{
fprintf( stderr, "Failed to open GLFW window. If you have an Intel GPU, they are not 3.3 compatible. Try the 2.1 version of the tutorials.\n" );
glfwTerminate();
return -1;
}
// Initialize GLEW
if (glewInit() != GLEW_OK) {
fprintf(stderr, "Failed to initialize GLEW\n");
return -1;
}
glfwSetWindowTitle( "Tutorial 02" );
// Ensure we can capture the escape key being pressed below
glfwEnable( GLFW_STICKY_KEYS );
glewExperimental = GL_TRUE;
glewInit();
// Dark blue background
glClearColor(0.0f, 0.0f, 0.3f, 0.0f);
GLuint VertexArrayID;
glGenVertexArrays(1, &VertexArrayID);
glBindVertexArray(VertexArrayID);
// Create and compile our GLSL program from the shaders
programID = LoadShaders( "vertexShader.glsl", "fragmentShader.glsl" );
//Projection matrix : 45 degree Field of View, 4:3 ratio, display range : 0.1 unit <-> 100 units
Projection = glm::perspective( 45.0f, 4.0f / 3.0f, 0.1f, 100.0f );
// Camera matrix
View = glm::lookAt(
glm::vec3(4,3,3), // Camera is at (4,3,3), in World Space
glm::vec3(0,0,0), // and looks at the origin
glm::vec3(0,1,0) // Head is up (set to 0, -1,0 to look upside down)
);
// Model matrix : an identity matrix (model will be at the origin)
Model = glm::mat4(1.0f); // Changes for each Model !
//INTELLISENSE ERROR ==================================================================>>>>
r = glm::rotate(Model, 45, glm::vec3(1,0,0));
// Our ModelViewProjection : multiplication of our 3 matrices
MVP = Projection * View * Model; // Remember matrix multiplication is the other way around
// Get a handle for our "MVP" uniform.
// Only at initialisation time.
GLuint MatrixID = glGetUniformLocation(programID, "MVP");
// Send our transformation to the currently bound shader,
// in the "MVP" uniform
// For each model you render, since the MVP will be different (at least the M part)
static const GLfloat g_vertex_buffer_data[] = {
-1.0f, -1.0f, 0.0f,
1.0f, -1.0f, 0.0f,
0.0f, 1.0f, 0.0f,
};
GLuint vertexbuffer;
glGenBuffers(1, &vertexbuffer);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW);
do{
// Clear the screen
glClear( GL_COLOR_BUFFER_BIT );
// Use our shader
glUseProgram(programID);
glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &MVP[0][0]);
// 1st attribute buffer : vertices
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
glVertexAttribPointer(
0, // attribute 0. No particular reason for 0, but must match the layout in the shader.
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
// Draw the triangle !
glDrawArrays(GL_TRIANGLES, 0, 3); // From index 0 to 3 -> 1 triangle
glDisableVertexAttribArray(0);
// Swap buffers
glfwSwapBuffers();
} // Check if the ESC key was pressed or the window was closed
while( glfwGetKey( GLFW_KEY_ESC ) != GLFW_PRESS &&
glfwGetWindowParam( GLFW_OPENED ) );
// Close OpenGL window and terminate GLFW
glfwTerminate();
// Cleanup VBO
glDeleteBuffers(1, &vertexbuffer);
glDeleteVertexArrays(1, &VertexArrayID);
return 0;
}
I don't know enough about templates to give you a great answer, but from my memory of GLM, it is very picky about types.
Can you try explicitly changing 45 to 45.f to see if it accepts that? I think you need to have consistent parameters (float matrix, float, float vector). I think the int confuses it somehow.
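For illustration, a minimal sketch (my own, not from the answer) of the call with an explicitly float-typed angle, using the Model matrix from the question:
// Pass the angle as a float literal so all arguments use the same float-based types.
// Note: glm 0.9.1 interprets this angle in degrees; newer GLM versions expect radians.
r = glm::rotate(Model, 45.0f, glm::vec3(1.0f, 0.0f, 0.0f));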
glm::rotate(Model, (glm::mediump_float)45, glm::vec3(1,0,0));
I found that this cast helps because of the type used as the template parameter in glm::vec3.
The function is defined as:
glm::detail::tmat4x4<glm::lowp_float> glm::rotate<glm::lowp_float> ( const glm::detail::tmat4x4<glm::lowp_float> &m, const glm::lowp_float &angle, const glm::detail::tvec3<glm::mediump_float> &axis)
So you must use an appropriate type for your angle value.
I think the problem is actually in the "glm.h" file. To be honest, I don't have enough technical knowledge to explain why, but when I tried this fix, it worked, so I just want to share my experience with anyone who needs it.
As you know, header files that we define ourselves should be (or need to be) included with double quotes (" ") rather than angle brackets (< >).
So when I right-clicked on #include "glm.h" (Go To Document "glm.h") to see what happened, it took me to that file. Inside it, I could see three other lines:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>
Then I changed the <> to " " as I said before:
#include "glm/glm.hpp"
#include "glm/gtc/matrix_transform.hpp"
#include "glm/gtc/type_ptr.hpp"
My glm::rotate() call now works!