I'm rewriting a large part of my texturing code. I would like to be able to specify certain internal formats: GL_RGB8I, GL_RGB8UI, GL_RGB16I, GL_RGB16UI, GL_RGB32I, and GL_RGB32UI. These tokens do not exist in OpenGL 2.
When specifying these internal formats as arguments to glTexImage2D, the texturing fails (the texture appears as white). When checking for errors, I get [EDIT:] 1282 ("invalid operation"). I take this to mean that OpenGL is still using the OpenGL 2 form of glTexImage2D, and so the call is failing. Obviously, it will need to use a newer version to succeed. Enums like GL_RGB, GL_RGBA, and (oddly) GL_RGB32F and GL_RGBA32F work as expected.
My build is configured to use GLEW or GLee for extensions. I can use OpenGL 4 calls with no problem elsewhere (e.g., glPatchParameteri, glBindFramebuffer, etc.), and the enums in question certainly exist. For completeness, glGetString(GL_VERSION) returns "4.2.0". My question: can I force one of these extension libraries to use the OpenGL 4.2 version of glTexImage2D? If so, how?
EDIT: The code is too complicated to post, but here is a simple, self-contained example using GLee that also demonstrates the problem:
#include <GLee5_4/GLee.h>
#include <GL/gl.h>
#include <GL/glu.h>
#include <gl/glut.h>
//For Windows
#pragma comment(lib,"GLee.lib")
#pragma comment(lib,"opengl32.lib")
#pragma comment(lib,"glu32.lib")
#pragma comment(lib,"glut32.lib")
#include <stdlib.h>
#include <stdio.h>
const int screen_size[2] = {512,512};
#define TEXTURE_SIZE 64
//Choose a selection. If you see black, then texturing is working. If you see red, then the quad isn't drawing. If you see white, texturing has failed.
#define TYPE 1
void error_check(void) {
GLenum error_code = glGetError();
const GLubyte* error_string = gluErrorString(error_code);
(error_string==NULL) ? printf("%d = (unrecognized error--an extension error?)\n",error_code) : printf("%d = \"%s\"\n",error_code,error_string);
}
#if TYPE==1 //############ 8-BIT TESTS ############
inline GLenum get_type(int which) { return (which==1)? GL_RGB8: GL_RGB; } //works
#elif TYPE==2
inline GLenum get_type(int which) { return (which==1)? GL_RGBA8:GL_RGBA; } //works
#elif TYPE==3
inline GLenum get_type(int which) { return (which==1)? GL_RGB8UI: GL_RGB; } //doesn't work (invalid op)
#elif TYPE==4
inline GLenum get_type(int which) { return (which==1)? GL_RGB8I: GL_RGB; } //doesn't work (invalid op)
#elif TYPE==5
inline GLenum get_type(int which) { return (which==1)? GL_RGBA8UI:GL_RGBA; } //doesn't work (invalid op)
#elif TYPE==6
inline GLenum get_type(int which) { return (which==1)? GL_RGBA8I:GL_RGBA; } //doesn't work (invalid op)
#elif TYPE==7 //############ 16-BIT TESTS ############
inline GLenum get_type(int which) { return (which==1)? GL_RGB16: GL_RGB; } //works
#elif TYPE==8
inline GLenum get_type(int which) { return (which==1)? GL_RGBA16:GL_RGBA; } //works
#elif TYPE==9
inline GLenum get_type(int which) { return (which==1)? GL_RGB16UI: GL_RGB; } //doesn't work (invalid op)
#elif TYPE==10
inline GLenum get_type(int which) { return (which==1)? GL_RGB16I: GL_RGB; } //doesn't work (invalid op)
#elif TYPE==11
inline GLenum get_type(int which) { return (which==1)?GL_RGBA16UI:GL_RGBA; } //doesn't work (invalid op)
#elif TYPE==12
inline GLenum get_type(int which) { return (which==1)? GL_RGBA16I:GL_RGBA; } //doesn't work (invalid op)
#elif TYPE==13 //############ 32-BIT TESTS ############
inline GLenum get_type(int which) { return (which==1)? GL_RGB32: GL_RGB; } //token doesn't exist
#elif TYPE==14
inline GLenum get_type(int which) { return (which==1)? GL_RGBA32:GL_RGBA; } //token doesn't exist
#elif TYPE==15
inline GLenum get_type(int which) { return (which==1)? GL_RGB32UI: GL_RGB; } //doesn't work (invalid op)
#elif TYPE==16
inline GLenum get_type(int which) { return (which==1)? GL_RGB32I: GL_RGB; } //doesn't work (invalid op)
#elif TYPE==17
inline GLenum get_type(int which) { return (which==1)?GL_RGBA32UI:GL_RGBA; } //doesn't work (invalid op)
#elif TYPE==18
inline GLenum get_type(int which) { return (which==1)? GL_RGBA32I:GL_RGBA; } //doesn't work (invalid op)
#elif TYPE==19 //############ 32-BIT FLOAT ############
inline GLenum get_type(int which) { return (which==1)? GL_RGB32F: GL_RGB; } //works
#elif TYPE==20
inline GLenum get_type(int which) { return (which==1)? GL_RGBA32F:GL_RGBA; } //works
#endif
GLuint texture;
void create_texture(void) {
printf(" Status before texture setup: "); error_check();
glGenTextures(1,&texture);
glBindTexture(GL_TEXTURE_2D,texture);
printf(" Status after texture created: "); error_check();
GLenum data_type = GL_UNSIGNED_BYTE;
int data_length = TEXTURE_SIZE*TEXTURE_SIZE*4; //maximum number of channels, so it will work for everything
unsigned char* data = new unsigned char[data_length];
for (int i=0;i<data_length;++i) {
data[i] = (unsigned char)(0);
};
glTexImage2D(GL_TEXTURE_2D,0,get_type(1), TEXTURE_SIZE,TEXTURE_SIZE, 0,get_type(2),data_type,data);
printf(" Status after glTexImage2D: "); error_check();
delete [] data;
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
printf(" Status after texture filters defined: "); error_check();
}
void keyboard(unsigned char key, int x, int y) {
switch (key) {
case 27: //esc
exit(0);
break;
}
}
void draw(void) {
glClearColor(1.0,0.0,0.0,1.0); //in case the quad doesn't draw
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
glViewport(0,0,screen_size[0],screen_size[1]);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0,screen_size[0],0,screen_size[1]);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glBegin(GL_QUADS);
glTexCoord2f(0,0); glVertex2f(0,0);
glTexCoord2f(2,0); glVertex2f(screen_size[0],0);
glTexCoord2f(2,2); glVertex2f(screen_size[0],screen_size[1]);
glTexCoord2f(0,2); glVertex2f(0,screen_size[1]);
glEnd();
glutSwapBuffers();
}
int main(int argc, char* argv[]) {
glutInit(&argc,argv);
glutInitWindowSize(screen_size[0],screen_size[1]);
glutInitDisplayMode(GLUT_RGB|GLUT_DOUBLE|GLUT_DEPTH);
glutCreateWindow("Texture Types - Ian Mallett");
glEnable(GL_DEPTH_TEST);
glEnable(GL_TEXTURE_2D);
printf("Status after OpenGL setup: "); error_check();
create_texture();
printf("Status after texture setup: "); error_check();
glutDisplayFunc(draw);
glutIdleFunc(draw);
glutKeyboardFunc(keyboard);
glutMainLoop();
return 0;
}
When checking for errors, I get [EDIT:] 1282 ("invalid operation"). I take this to mean that the OpenGL is still using OpenGL 2 for glTexImage2D, and so the call is failing.
OpenGL errors are not that complex to understand. GL_INVALID_ENUM/VALUE are thrown when you pass a function an enum or value that is unexpected, unsupported, or out of range. If you pass "17" as the internal format to glTexImage2D, you will get GL_INVALID_ENUM, because 17 is not a valid enum for an internal format. If you pass 103,422 as the width to glTexImage2D, you will get GL_INVALID_VALUE, because 103,422 is almost certainly larger than GL_MAX_TEXTURE_SIZE.
GL_INVALID_OPERATION is always used for combinations of state that go wrong. Either there is some context state previously set that doesn't mesh with the function you're calling, or two or more parameters combined are causing a problem. The latter is the case you have here.
If your implementation didn't support integer textures at all, then you would get INVALID_ENUM (because the internal format is not a valid format). Getting INVALID_OPERATION means that something else is wrong.
Namely, this:
glTexImage2D(GL_TEXTURE_2D,0,get_type(1), TEXTURE_SIZE,TEXTURE_SIZE, 0,get_type(2),data_type,data);
Your get_type(2) call returns GL_RGB or GL_RGBA in all cases. However, when using integral image formats, you must use a pixel transfer format with _INTEGER at the end.
So your get_type(2) needs to be this:
inline GLenum get_type(int which) { return (which==1)? GL_RGB16UI: GL_RGB_INTEGER; }
And similarly for other integral image formats.
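For example, here is a minimal sketch of a corrected upload for the GL_RGB16UI case (assuming a GL 3.0+ context; the zero-filled buffer mirrors the example above):
unsigned short* data = new unsigned short[TEXTURE_SIZE*TEXTURE_SIZE*3](); //zero-initialized
//Integral internal format: the pixel transfer format needs the _INTEGER
//suffix, and the data type must be an integer type matching the data.
glTexImage2D(GL_TEXTURE_2D,0,GL_RGB16UI, TEXTURE_SIZE,TEXTURE_SIZE, 0,GL_RGB_INTEGER,GL_UNSIGNED_SHORT,data);
delete [] data;
Note also that integer textures cannot be sampled by the fixed-function pipeline and must use GL_NEAREST filtering; in a shader they are read through a usampler2D (or isampler2D), so the example program above would still need a shader to display anything meaningful.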
Related
It seems that I can't use the glDebugMessageCallback function; it throws an access violation error on the very next line of code.
ERROR: Exception thrown at 0x0000000000000000 in DEBUG.exe:
0xC0000005: Access violation executing location 0x0000000000000000.
ErrorHandler.hpp
#define GLCall(x) x;\
if(isError) __debugbreak();
static bool isError{ false };
namespace ErrorHandler {
void APIENTRY GLDebugMessageCallback(GLenum source, GLenum type, GLuint id, GLenum severity, GLsizei length, const GLchar* message, const void* userParam);
}
ErrorHandler.cpp
void APIENTRY ErrorHandler::GLDebugMessageCallback(GLenum source, GLenum type, GLuint id, GLenum severity, GLsizei length, const GLchar* message, const void* userParam)
{
isError = true;
const char* _source;
const char* _type;
const char* _severity;
switch (source) ...
switch (type) ...
switch (severity) ...
if (_severity != "NOTIFICATION") {
fprintf(stderr, "OpenGL error [%d]: %s of %s severity, raised from %s: %s\n",
id, _type, _severity, _source, message);
}
}
Game.cpp
Game::Game(const char* title, uint16_t width, uint16_t height)
{
if (SDL_Init(SDL_INIT_VIDEO) < 0) ...
m_window = SDL_CreateWindow(title, SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, width, height, SDL_WINDOW_OPENGL);
if (!m_window) ...
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 3);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_FLAGS, SDL_GL_CONTEXT_DEBUG_FLAG);
SDL_GL_SetSwapInterval(1);
glewExperimental = GL_TRUE;
if (glewInit() != GLEW_OK) ...
m_context = SDL_GL_CreateContext(m_window);
if (!m_context) ...
printf("%s\n", glGetString(GL_VERSION));
#ifdef _DEBUG
glEnable(GL_DEBUG_OUTPUT);
glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);
glDebugMessageCallback(ErrorHandler::GLDebugMessageCallback, 0);
#endif
m_run();
}
I've tried:
Moving the glDebugMessageCallback call to different lines (directly after initializing GLEW, after creating the context).
I've tried to use another function as a callback.
I've tried to explicitly set the OpenGL version (4.6.0) and (4.4.0).
I've tried to remove all the SDL flags (the profile flag and version flags).
Everything gives the same result (Access violation).
You must call
SDL_GL_MakeCurrent(m_window, m_context);
to activate your OpenGL context prior to calling any OpenGL functions; otherwise the OpenGL functions do not know which context to operate on. Presumably a framework like GLFW did that for you automatically, but here you must do it yourself.
glewInit also expects a valid context to be current, and in your code it runs before the context is even created, so GLEW loads null function pointers - which is exactly why the call to glDebugMessageCallback jumps to address 0x0000000000000000. The order of operations should be as follows:
m_window = SDL_CreateWindow(title, SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, width, height, SDL_WINDOW_OPENGL);
// ...
m_context = SDL_GL_CreateContext(m_window);
SDL_GL_MakeCurrent(m_window, m_context);
glewExperimental = GL_TRUE;
if (glewInit() != GLEW_OK) // ...
printf("%s\n", glGetString(GL_VERSION));
// ...
glDebugMessageCallback(ErrorHandler::GLDebugMessageCallback, 0);
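Since the question requests only a 3.3 core context (where glDebugMessageCallback comes from KHR_debug rather than core GL, which requires 4.3), it may also be worth checking that a debug context was actually obtained before registering the callback. A minimal sketch, assuming GLEW initialized successfully:
GLint flags = 0;
glGetIntegerv(GL_CONTEXT_FLAGS, &flags);
if (flags & GL_CONTEXT_FLAG_DEBUG_BIT) { // set when SDL_GL_CONTEXT_DEBUG_FLAG took effect
    glEnable(GL_DEBUG_OUTPUT);
    glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);
    glDebugMessageCallback(ErrorHandler::GLDebugMessageCallback, nullptr);
}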
I am trying to move some OpenGL processing to a C++ class, which is wrapped in an Objective-C class for use with iOS. Most of it seems to work, but I'm not getting the rendering into the frame buffer. When I bracket every OpenGL call with glGetError() - both in the Objective-C wrapper and the C++ class - I get error 1281 (GL_INVALID_VALUE) upon calling glUseProgram (from within the C++ method renderTextures).
(FWIW, this is then followed by GL_INVALID_OPERATION (1282) on two subsequent calls: glUniform1i and glUniformMatrix4fv, which I suppose makes sense if these are associated with the shader program. P.S. I used a custom wrapper function on glGetError that loops until the return value is zero - these are the only three errors I get.)
I can set and retrieve arbitrary values from the frame buffer (using glClearColor and glClear to set them, and glReadPixels to retrieve them), so the frame buffer seems to be set up OK. But the rendering (via glDrawElements) seems to fail, and I am supposing this is related to the error I get on glUseProgram. Notice that the argument _program for glUseProgram gets passed in from the Objective-C wrapper, via the call to MyClass::renderTextures. The value is the same (it's just a handle, right?) but the call fails inside the C++ class.
So... any ideas why glUseProgram fails? Is it how I set up the argument _program? That I'm passing it from Objective-C to C++? (Something about losing access to the context from inside the C++?) Something else that anyone can see?
Code follows below (much based on boilerplate from Xcode)
Objective-C wrapper:
#import "MyClass.h"
// OBJECTIVE-C WRAPPER CLASS
@interface ObjCWrapperClass () {
MyClass *_myObject;
GLuint _program;
GLint _mvpUniform;
GLint _textureUniform;
GLKMatrix4 _modelViewProjectionMatrix;
}
@property EAGLContext *myContext;
@end
@implementation ObjCWrapperClass
-(id)init {
if (self = [super init]) {
self.myContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
_myObject = new MyClass();
BOOL result = [self loadShaders];
}
return self;
}
-(void)doRender {
// Has to be in Objective-C
[EAGLContext setCurrentContext:self.myContext];
// ---- Use C++ ------------------------------
// 1. Create frame buffer
_myObject->createFrameBuffer();
// 2. Get Texture List
_myObject->createTextureList();
// 3. Create the Texture Geometry
_myObject->createTextureGeometry();
// 4. Load textures
_myObject->loadTextures();
if ([NSThread isMainThread]) {
[self doRenderInCPP];
}
else {
dispatch_sync(dispatch_get_main_queue(), ^{
[self doRenderInCPP];
} );
}
_myObject->deleteTextures();
// ---- END C++ ------------------------------
}
-(void)doRenderInCPP
{
// Render textures into framebuffer
_myObject->renderTextures(_program, _mvpUniform, _textureUniform);
}
#pragma mark - OpenGL ES 2 shader compilation
- (BOOL)loadShaders
{
GLuint vertShader, fragShader;
NSString *vertShaderPathname, *fragShaderPathname;
// Create shader program.
_program = glCreateProgram();
// Create and compile vertex shader.
vertShaderPathname = [[NSBundle mainBundle] pathForResource:@"Shader" ofType:@"vsh"];
if (![self compileShader:&vertShader type:GL_VERTEX_SHADER file:vertShaderPathname]) {
NSLog(@"Failed to compile vertex shader");
return NO;
}
// Create and compile fragment shader.
fragShaderPathname = [[NSBundle mainBundle] pathForResource:@"Shader" ofType:@"fsh"];
if (![self compileShader:&fragShader type:GL_FRAGMENT_SHADER file:fragShaderPathname]) {
NSLog(@"Failed to compile fragment shader");
return NO;
}
// Attach vertex shader to program.
glAttachShader(_program, vertShader);
// Attach fragment shader to program.
glAttachShader(_program, fragShader);
// Bind attribute locations.
// This needs to be done prior to linking.
glBindAttribLocation(_program, GLKVertexAttribPosition, "position");
glBindAttribLocation(_program, GLKVertexAttribTexCoord0, "texCoord");
// Link program.
if (![self linkProgram:_program]) {
NSLog(#"Failed to link program: %d", _program);
if (vertShader) {
glDeleteShader(vertShader);
vertShader = 0;
}
if (fragShader) {
glDeleteShader(fragShader);
fragShader = 0;
}
if (_program) {
glDeleteProgram(_program);
_program = 0;
}
return NO;
}
// Get uniform locations.
_mvpUniform = glGetUniformLocation(_program, "modelViewProjectionMatrix");
_textureUniform = glGetUniformLocation(_program, "tileTexture");
// Release vertex and fragment shaders.
if (vertShader) {
glDetachShader(_program, vertShader);
glDeleteShader(vertShader);
}
if (fragShader) {
glDetachShader(_program, fragShader);
glDeleteShader(fragShader);
}
return YES;
}
- (BOOL)compileShader:(GLuint *)shader type:(GLenum)type file:(NSString *)file
{
GLint status;
const GLchar *source;
source = (GLchar *)[[NSString stringWithContentsOfFile:file encoding:NSUTF8StringEncoding error:nil] UTF8String];
if (!source) {
NSLog(#"Failed to load vertex shader");
return NO;
}
*shader = glCreateShader(type);
glShaderSource(*shader, 1, &source, NULL);
glCompileShader(*shader);
#if defined(DEBUG)
GLint logLength;
glGetShaderiv(*shader, GL_INFO_LOG_LENGTH, &logLength);
if (logLength > 0) {
GLchar *log = (GLchar *)malloc(logLength);
glGetShaderInfoLog(*shader, logLength, &logLength, log);
NSLog(#"Shader compile log:\n%s", log);
free(log);
}
#endif
glGetShaderiv(*shader, GL_COMPILE_STATUS, &status);
if (status == 0) {
glDeleteShader(*shader);
return NO;
}
return YES;
}
- (BOOL)linkProgram:(GLuint)prog
{
GLint status;
glLinkProgram(prog);
#if defined(DEBUG)
GLint logLength;
glGetProgramiv(prog, GL_INFO_LOG_LENGTH, &logLength);
if (logLength > 0) {
GLchar *log = (GLchar *)malloc(logLength);
glGetProgramInfoLog(prog, logLength, &logLength, log);
NSLog(#"Program link log:\n%s", log);
free(log);
}
#endif
glGetProgramiv(prog, GL_LINK_STATUS, &status);
if (status == 0) {
return NO;
}
return YES;
}
@end
C++ (Relevant bits):
//
// MyClass.cpp
//
#include "MyClass.h"
void MyClass::createFrameBuffer()
{
glGenFramebuffers(1, &_frameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, _frameBuffer);
// Create the texture:
glGenTextures(1, &_frameBufferTexture);
glBindTexture(GL_TEXTURE_2D, _frameBufferTexture);
glTexImage2D(GL_TEXTURE_2D, 0, _drawFormatEnum, _destinationSizeWidth, _destinationSizeHeight, 0, _drawFormatEnum, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, _frameBufferTexture, 0);
GLenum error = glGetError();
if (error != 0) {
printf("Error Creating Depth Buffer: %i (backing size: %i %i)\n", error, _destinationSizeWidth, _destinationSizeHeight);
}
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
{
printf("Failed to make complete framebuffer object %x\n", glCheckFramebufferStatus(GL_FRAMEBUFFER));
}
glClearColor(0.015625, 0.03125, 0.0, 1.0); // For testing - put distinctive values in to see if we find these in Framebuffer
glClear(GL_COLOR_BUFFER_BIT);
}
void MyClass::renderTextures(GLint program, GLint mvpUniform, GLint textureUniform)
{
// Clear the draw buffer
glClearColor(0.0, 0.0, 0.0625, 1.0); // TEST: clear to distinctive values
glClear(GL_COLOR_BUFFER_BIT);
// Draw each segment in a different area of frame buffer
for (int segment_index = 0; segment_index < _numSegments; segment_index++) {
// Set draw region
glScissor(segment_index*(_segmentWidthPixels), 0, _segmentWidthPixels, _segmentHeightPixels);
glEnable(GL_SCISSOR_TEST);
int segment_horz_offset = getSegmentHorzOffset(segment_index);
int segment_vert_offset = getSegmentVertOffset(segment_index);
FFGLKMatrix4 modelViewProjectionMatrix = createMVPmatrix(segment_horz_offset, segment_vert_offset);
// Render the object ES2
glUseProgram(program); // Error after glUseProgram:, GL_INVALID_VALUE (1281)
glUniform1i(textureUniform, 0); //GL_INVALID_OPERATION (1282)
glUniformMatrix4fv(mvpUniform, 1, 0, modelViewProjectionMatrix.m); //GL_INVALID_OPERATION (1282)
glEnableVertexAttribArray(FFGLKVertexAttribPosition);
glEnableVertexAttribArray(FFGLKVertexAttribTexCoord0);
glActiveTexture(GL_TEXTURE0);
for (auto &texture: _textures) {
uint8_t *data = (uint8_t *)texture.geometryData;
glVertexAttribPointer(FFGLKVertexAttribPosition, 2, GL_FLOAT, 0, sizeof(float)*4, data);
glVertexAttribPointer(FFGLKVertexAttribTexCoord0, 2, GL_FLOAT, 0, sizeof(float)*4, data+8);
glBindTexture(GL_TEXTURE_2D, texture.getTextureID());
glDrawElements(GL_TRIANGLE_STRIP, _textureVertexIndicesCount, GL_UNSIGNED_SHORT, _textureVertexIndices);
}
glDisable((GL_SCISSOR_TEST));
// Test - are correct values rendered into the frame buffer?
uint8_t *outdata = new uint8_t[100*4];
glReadPixels(0, 0, (GLsizei)2, (GLsizei)4, GL_RGBA, GL_UNSIGNED_BYTE, outdata);
for (int i=0; i < 8; i++) {
printf("render: Value: %i\n", outdata[i]); // Prints values as specified in glClearColor above (0,0,16,255)
}
printf("glGetError: %d\n", glGetError() );
delete [] outdata;
}
}
Error 1281 resolved (an OpenGL newbie mistake) - I needed to set the context:
(Still not rendering into frame buffer, but another hurdle cleared.)
-(id)init {
if (self = [super init]) {
self.myContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
[EAGLContext setCurrentContext:self.myContext]; // <-- ADDED
_myObject = new MyClass();
BOOL result = [self loadShaders];
}
return self;
}
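A cheap guard against this class of mistake (a hypothetical check, not part of the original code): glIsProgram reports whether a name refers to a program object in the current context, so it fails loudly when the wrong context (or none) is current:
// If loadShaders ran against a different EAGLContext, or no context
// is current here, 'program' will not name a program object.
if (!glIsProgram(program)) {
    printf("renderTextures: %d is not a program in the current context\n", program);
    return;
}
glUseProgram(program);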
I'm having problems using a uniform in a vertex shader. Here's the code:
// gcc main.c -o main `pkg-config --libs --cflags glfw3` -lGL -lm
#include <GLFW/glfw3.h>
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
void gluErrorString(const char* why,GLenum errorCode);
void checkShader(GLuint status, GLuint shader, const char* which);
float verts[] = {
-0.5f, 0.5f, 0.0f,
0.5f, 0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
-0.5f, -0.5f, 0.0f
};
const char* vertex_shader =
"#version 330\n"
"in vec3 vp;\n"
"uniform float u_time;\n"
"\n"
"void main () {\n"
" vec4 p = vec4(vp, 1.0);\n"
" p.x = p.x + u_time;\n"
" gl_Position = p;\n"
"}";
const char* fragment_shader =
"#version 330\n"
"out vec4 frag_colour;\n"
"void main () {\n"
" frag_colour = vec4 (0.5, 0.0, 0.5, 1.0);\n"
"}";
int main () {
if (!glfwInit ()) {
fprintf (stderr, "ERROR: could not start GLFW3\n");
return 1;
}
glfwWindowHint (GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint (GLFW_CONTEXT_VERSION_MINOR, 2);
//glfwWindowHint (GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint (GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
GLFWwindow* window = glfwCreateWindow (640, 480, "Hello Triangle", NULL, NULL);
if (!window) {
fprintf (stderr, "ERROR: could not open window with GLFW3\n");
glfwTerminate();
return 1;
}
glfwMakeContextCurrent (window);
// vert arrays group vert buffers together unlike GLES2 (no vert arrays)
// we *must* have one of these even if we only need 1 vert buffer
GLuint vao = 0;
glGenVertexArrays (1, &vao);
glBindVertexArray (vao);
GLuint vbo = 0;
glGenBuffers (1, &vbo);
glBindBuffer (GL_ARRAY_BUFFER, vbo);
// each vert takes 3 float * 4 verts in the fan = 12 floats
glBufferData (GL_ARRAY_BUFFER, 12 * sizeof (float), verts, GL_STATIC_DRAW);
gluErrorString("buffer data",glGetError());
glEnableVertexAttribArray (0);
glBindBuffer (GL_ARRAY_BUFFER, vbo);
// 3 components per vert
glVertexAttribPointer (0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
gluErrorString("attrib pointer",glGetError());
GLuint vs = glCreateShader (GL_VERTEX_SHADER);
glShaderSource (vs, 1, &vertex_shader, NULL);
glCompileShader (vs);
GLint success = 0;
glGetShaderiv(vs, GL_COMPILE_STATUS, &success);
checkShader(success, vs, "Vert Shader");
GLuint fs = glCreateShader (GL_FRAGMENT_SHADER);
glShaderSource (fs, 1, &fragment_shader, NULL);
glCompileShader (fs);
glGetShaderiv(fs, GL_COMPILE_STATUS, &success);
checkShader(success, fs, "Frag Shader");
GLuint shader_program = glCreateProgram ();
glAttachShader (shader_program, fs);
glAttachShader (shader_program, vs);
glLinkProgram (shader_program);
gluErrorString("Link prog",glGetError());
glUseProgram (shader_program);
gluErrorString("use prog",glGetError());
GLuint uniT = glGetUniformLocation(shader_program,"u_time"); // ask gl to assign uniform id
gluErrorString("get uniform location",glGetError());
printf("uniT=%i\n",uniT);
glEnable (GL_DEPTH_TEST);
glDepthFunc (GL_LESS);
float t=0;
while (!glfwWindowShouldClose (window)) {
glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
gluErrorString("clear",glGetError());
glUseProgram (shader_program);
gluErrorString("use prog",glGetError());
t=t+0.01f;
glUniform1f( uniT, (GLfloat)sin(t));
gluErrorString("set uniform",glGetError());
float val;
glGetUniformfv(shader_program, uniT, &val);
gluErrorString("get uniform",glGetError());
printf("val=%f ",val);
glBindVertexArray (vao);
gluErrorString("bind array",glGetError());
glDrawArrays (GL_TRIANGLE_FAN, 0, 4);
gluErrorString("draw arrays",glGetError());
glfwPollEvents ();
glfwSwapBuffers (window);
gluErrorString("swap buffers",glGetError());
}
glfwTerminate();
return 0;
}
void checkShader(GLuint status, GLuint shader, const char* which) {
if (status==GL_TRUE) return;
int length;
char buffer[1024];
glGetShaderInfoLog(shader, sizeof(buffer), &length, buffer);
fprintf (stderr,"%s Error: %s\n", which,buffer);
glfwTerminate();
exit(-1);
}
struct token_string
{
GLuint Token;
const char *String;
};
static const struct token_string Errors[] = {
{ GL_NO_ERROR, "no error" },
{ GL_INVALID_ENUM, "invalid enumerant" },
{ GL_INVALID_VALUE, "invalid value" },
{ GL_INVALID_OPERATION, "invalid operation" },
{ GL_STACK_OVERFLOW, "stack overflow" },
{ GL_STACK_UNDERFLOW, "stack underflow" },
{ GL_OUT_OF_MEMORY, "out of memory" },
{ GL_TABLE_TOO_LARGE, "table too large" },
#ifdef GL_EXT_framebuffer_object
{ GL_INVALID_FRAMEBUFFER_OPERATION_EXT, "invalid framebuffer operation" },
#endif
{ ~0, NULL } /* end of list indicator */
};
void gluErrorString(const char* why,GLenum errorCode)
{
if (errorCode== GL_NO_ERROR) return;
int i;
for (i = 0; Errors[i].String; i++) {
if (Errors[i].Token == errorCode) {
fprintf (stderr,"error: %s - %s\n",why,Errors[i].String);
glfwTerminate();
exit(-1);
}
}
}
When the code runs, the quad flickers as if the uniform is getting junk values. Reading the uniform back also shows odd values like 36893488147419103232.000000 where it should be just a simple sine value.
The problem with your code is only indirectly related to GL - your GL code is OK.
However, you are using modern OpenGL functions without loading the function pointers as an extension. This might work on some platforms, but not on others. MacOS does guarantee that these functions are exported in the system's GL libs. On Windows, Microsoft's opengl32.dll never contains functions beyond GL 1.1 - your code wouldn't even link there. On Linux, you're somewhere in between: there is only the old Linux OpenGL ABI document, which guarantees that OpenGL 1.2 functions must be exported by the library. In practice, the GL libs of most implementations on Linux export everything (but the fact that a function is there does not mean that it is supported). You should never link these functions directly, because nobody guarantees anything.
However, the story does not end here: you apparently did this on an implementation which does export the symbols. However, you did not include the correct headers, and you have set up your compiler very leniently. In C, it is valid (but poor style) to call a function which has not been declared before. The compiler will assume that it returns int and that all parameters are ints. In effect, you are calling these functions, but the compiler will convert the arguments to int.
You would have noticed that if you had set up your compiler to produce some warnings, like -Wall on gcc:
a.c: In function ‘main’:
a.c:74: warning: implicit declaration of function ‘glGenVertexArrays’
a.c:75: warning: implicit declaration of function ‘glBindVertexArray’
[...]
However, the code compiles and links, and I can reproduce the results you described (I'm using Linux/NVIDIA here).
To fix this, you should use an OpenGL loader library. For example, I got your code working by using GLEW. All I had to do was add the following at the very top of the file
#define GLEW_NO_GLU // because you re-implemented some glu-like functions with a different interface
#include <glew.h>
and calling
glewExperimental=GL_TRUE;
if (glewInit() != GLEW_OK) {
fprintf (stderr, "ERROR: failed to initialize GLEW\n");
glfwTerminate();
return 1;
}
glGetError(); // read away error generated by GLEW, it is broken in core profiles...
The GLEW headers include declarations for all the functions, so no implicit type conversions occur anymore. GLEW might not be the best choice for core profiles; I just used it because it's the loader I'm most familiar with.
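As a usage note, this whole class of bug can be turned into a hard compile error instead of a runtime mystery (standard gcc flags; adjust the libraries to your setup):
gcc -Wall -Werror=implicit-function-declaration main.c -o main `pkg-config --libs --cflags glfw3` -lGLEW -lGL -lm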
I'm experiencing a strange stutter in my simple OpenGL (via GLFW3) app. Although vsync is enabled (the frame rate is an almost steady 60 fps), the motion of the spinning triangle is not always smooth - it's almost as if some frames are skipped sometimes. I tried looking at the time difference between consecutive calls to glfwSwapBuffers(), but those seem pretty consistent.
Am I doing something wrong? Should I use some kind of motion blur filtering to make it appear smoother?
The code:
#include <cstdlib>
#include <cstdio>
#include <cmath>
#include <cfloat>
#include <cassert>
#include <minmax.h>
#include <string>
#include <iostream>
#include <fstream>
#include <vector>
#include <Windows.h>
#include <GL/glew.h>
#include <gl/GLU.h>
//#include <GL/GL.h>
#include <GLFW/glfw3.h>
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>
#ifdef _WIN32
#pragma warning(disable:4996)
#endif
static int swap_interval;
static double frame_rate;
GLuint LoadShaders(const char * vertex_file_path,const char * fragment_file_path){
// Create the shaders
GLuint VertexShaderID = glCreateShader(GL_VERTEX_SHADER);
GLuint FragmentShaderID = glCreateShader(GL_FRAGMENT_SHADER);
// Read the Vertex Shader code from the file
std::string VertexShaderCode;
std::ifstream VertexShaderStream(vertex_file_path, std::ios::in);
if(VertexShaderStream.is_open()){
std::string Line = "";
while(getline(VertexShaderStream, Line))
VertexShaderCode += "\n" + Line;
VertexShaderStream.close();
}else{
printf("Impossible to open %s. Are you in the right directory ? Don't forget to read the FAQ !\n", vertex_file_path);
return 0;
}
// Read the Fragment Shader code from the file
std::string FragmentShaderCode;
std::ifstream FragmentShaderStream(fragment_file_path, std::ios::in);
if(FragmentShaderStream.is_open()){
std::string Line = "";
while(getline(FragmentShaderStream, Line))
FragmentShaderCode += "\n" + Line;
FragmentShaderStream.close();
}
GLint Result = GL_FALSE;
int InfoLogLength;
// Compile Vertex Shader
printf("Compiling shader : %s\n", vertex_file_path);
char const * VertexSourcePointer = VertexShaderCode.c_str();
glShaderSource(VertexShaderID, 1, &VertexSourcePointer , NULL);
glCompileShader(VertexShaderID);
// Check Vertex Shader
glGetShaderiv(VertexShaderID, GL_COMPILE_STATUS, &Result);
if (Result != GL_TRUE)
{
glGetShaderiv(VertexShaderID, GL_INFO_LOG_LENGTH, &InfoLogLength);
if ( InfoLogLength > 0 ){
std::vector<char> VertexShaderErrorMessage(InfoLogLength+1);
glGetShaderInfoLog(VertexShaderID, InfoLogLength, NULL, &VertexShaderErrorMessage[0]);
printf("%s\n", &VertexShaderErrorMessage[0]);
}
}
// Compile Fragment Shader
printf("Compiling shader : %s\n", fragment_file_path);
char const * FragmentSourcePointer = FragmentShaderCode.c_str();
glShaderSource(FragmentShaderID, 1, &FragmentSourcePointer , NULL);
glCompileShader(FragmentShaderID);
// Check Fragment Shader
glGetShaderiv(FragmentShaderID, GL_COMPILE_STATUS, &Result);
if (Result != GL_TRUE)
{
glGetShaderiv(FragmentShaderID, GL_INFO_LOG_LENGTH, &InfoLogLength);
if ( InfoLogLength > 0 ){
std::vector<char> FragmentShaderErrorMessage(InfoLogLength+1);
glGetShaderInfoLog(FragmentShaderID, InfoLogLength, NULL, &FragmentShaderErrorMessage[0]);
printf("%s\n", &FragmentShaderErrorMessage[0]);
}
}
// Link the program
printf("Linking program\n");
GLuint ProgramID = glCreateProgram();
glAttachShader(ProgramID, VertexShaderID);
glAttachShader(ProgramID, FragmentShaderID);
glLinkProgram(ProgramID);
// Check the program
glGetProgramiv(ProgramID, GL_LINK_STATUS, &Result);
if (Result != GL_TRUE)
{
glGetProgramiv(ProgramID, GL_INFO_LOG_LENGTH, &InfoLogLength);
if ( InfoLogLength > 0 ){
std::vector<char> ProgramErrorMessage(InfoLogLength+1);
glGetProgramInfoLog(ProgramID, InfoLogLength, NULL, &ProgramErrorMessage[0]);
printf("%s\n", &ProgramErrorMessage[0]);
}
}
#ifdef _DEBUG
glValidateProgram(ProgramID);
#endif
glDeleteShader(VertexShaderID);
glDeleteShader(FragmentShaderID);
return ProgramID;
}
static void framebuffer_size_callback(GLFWwindow* window, int width, int height)
{
glViewport(0, 0, width, height);
}
static void set_swap_interval(GLFWwindow* window, int interval)
{
swap_interval = interval;
glfwSwapInterval(swap_interval);
}
static void key_callback(GLFWwindow* window, int key, int scancode, int action, int mods)
{
if (key == GLFW_KEY_SPACE && action == GLFW_PRESS)
set_swap_interval(window, 1 - swap_interval);
}
static bool init(GLFWwindow** win)
{
if (!glfwInit())
exit(EXIT_FAILURE);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_COMPAT_PROFILE);
// creating a window using the monitor param will open it full screen
const bool useFullScreen = false;
GLFWmonitor* monitor = useFullScreen ? glfwGetPrimaryMonitor() : NULL;
*win = glfwCreateWindow(640, 480, "", monitor, NULL);
if (!(*win))
{
glfwTerminate();
exit(EXIT_FAILURE);
}
glfwMakeContextCurrent(*win);
GLenum glewError = glewInit();
if( glewError != GLEW_OK )
{
printf( "Error initializing GLEW! %s\n", glewGetErrorString( glewError ) );
return false;
}
//Make sure OpenGL 2.1 is supported
if( !GLEW_VERSION_2_1 )
{
printf( "OpenGL 2.1 not supported!\n" );
return false;
}
glfwMakeContextCurrent(*win);
glfwSetFramebufferSizeCallback(*win, framebuffer_size_callback);
glfwSetKeyCallback(*win, key_callback);
// get version info
const GLubyte* renderer = glGetString (GL_RENDERER); // get renderer string
const GLubyte* version = glGetString (GL_VERSION); // version as a string
printf("Renderer: %s\n", renderer);
printf("OpenGL version supported %s\n", version);
return true;
}
std::string string_format(const std::string fmt, ...) {
int size = 100;
std::string str;
va_list ap;
while (1) {
str.resize(size);
va_start(ap, fmt);
int n = vsnprintf((char *)str.c_str(), size, fmt.c_str(), ap);
va_end(ap);
if (n > -1 && n < size) {
str.resize(n);
return str;
}
if (n > -1)
size = n + 1;
else
size *= 2;
}
return str;
}
int main(int argc, char* argv[])
{
srand(9); // constant seed, for deterministic results
unsigned long frame_count = 0;
GLFWwindow* window;
init(&window);
// An array of 3 vectors which represents 3 vertices
static const GLfloat g_vertex_buffer_data[] = {
-1.0f, -1.0f, 0.0f,
1.0f, -1.0f, 0.0f,
0.0f, 1.0f, 0.0f,
};
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
// allocate GPU memory and copy data
glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW);
unsigned int vao = 0;
glGenVertexArrays (1, &vao);
glBindVertexArray (vao);
glEnableVertexAttribArray (0);
glBindBuffer (GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer (0, 3, GL_FLOAT, GL_FALSE, 0, 0);
// Create and compile our GLSL program from the shaders
GLuint programID = LoadShaders( "1.vert", "1.frag" );
// Use our shader
glUseProgram(programID);
GLint locPosition = glGetAttribLocation(programID, "vertex");
assert(locPosition != -1);
glm::mat4 world(1.0f);
GLint locWorld = glGetUniformLocation(programID, "gWorld");
assert(locWorld != -1 && "Error getting address (was it optimized out?)!");
glUniformMatrix4fv(locWorld, 1, GL_FALSE, glm::value_ptr(world));
GLenum err = glGetError();
GLint loc = glGetUniformLocation(programID, "time");
assert(loc != -1 && "Error getting uniform address (was it optimized out?)!");
bool isRunning = true;
while (isRunning)
{
static float time = 0.0f;
static float oldTime = 0.0f;
static float fpsLastUpdateTime = 0.0f;
oldTime = time;
time = (float)glfwGetTime();
static std::string fps;
glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram (programID);
glUniform1f(loc, time);
glBindVertexArray (vao);
glDrawArrays (GL_TRIANGLES, 0, 3);
glfwSwapBuffers(window);
glfwPollEvents();
isRunning = !glfwWindowShouldClose(window);
float dT = time-oldTime;
if (time-fpsLastUpdateTime > 0.5)
{
static const char* fmt = "frame rate: %.1f frames per second";
glfwSetWindowTitle(window, string_format(fmt, 1.0f/(dT)).c_str());
fpsLastUpdateTime = time;
}
}
glfwDestroyWindow(window);
glfwTerminate();
return 0;
}
////////////////////////////////////////
// 1.frag
////////////////////////////////////////
#version 330 core
// Ouput data
out vec3 color;
void main()
{
// Output color = red
color = vec3(1,0,0);
}
//////////////////////////////////////////////
// 1.vert
//////////////////////////////////////////////
#version 330 core
// Input vertex data, different for all executions of this shader.
in vec3 vertex;
uniform mat4 gWorld;
uniform float time;
void main()
{
gl_Position = gWorld * vec4(vertex, 1.0f);
gl_Position.x += sin(time);
gl_Position.y += cos(time)/2.0f;
gl_Position.w = 1.0;
}
OK. I got home and did more testing.
First I tried to disable V-Sync, but I couldn't! I had to disable the Windows desktop effects (Aero) to be able to do so, and lo and behold - once Aero was disabled, the stutter disappeared (with V-Sync on).
Then I tested it with V-Sync off, and of course, I got much higher frame rate with the occasional expected tearing.
Then I tested it in full screen. The rendering was smooth with Aero and without it.
I couldn't find anyone else who shares this problem. Do you think it's a GLFW3 bug? A driver/hardware issue (I have a GTS450 with the latest drivers)?
Thank you all for your answers. I learned a lot, but my problem is still unsolved.
It's a strange interaction problem between the Windows DWM (Desktop Window Manager) composition mode and glfwSwapBuffers(). I haven't gotten to the root of the problem yet, but you can work around the stuttering by doing one of the following:
go fullscreen
disable dwm window composition (see my answer to Linear movement stutter)
enable multisampling: glfwWindowHint(GLFW_SAMPLES, 4); (see the sketch after this list)
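For the multisampling workaround, note that GLFW window hints only apply to windows created after they are set; a minimal sketch against the code above:
glfwWindowHint(GLFW_SAMPLES, 4); // request a 4x MSAA framebuffer
GLFWwindow* window = glfwCreateWindow(640, 480, "", NULL, NULL);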
Without seeing this stutter problem it is difficult to say what the problem is, but the first impression of your program is OK.
So I guess you observe that, once in a while, a frame is shown twice, leading to a very small stutter. This usually happens when you try to output 60 frames on a 60 Hz monitor with vsync.
In such a setup you must not miss one vsync period or you will see a stutter, because of the frame shown twice.
On the other hand, it is nearly impossible to guarantee this, because the scheduler on Windows platforms schedules threads in slices of about 15 ms (I don't know the exact value by heart).
So it is possible that a higher-priority thread will use the CPU and your presenting thread will not be able to swap the buffers for a new frame in time. If you increase the rates, e.g. 120 frames on a 120 Hz monitor, you will see those stutters even more often.
So I don't know of any way to prevent this on the Windows platform. But if someone else does, I would be happy to know it too.
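One way to confirm that a frame is occasionally shown twice (a hypothetical diagnostic, not from the original post) is to log any swap-to-swap interval that clearly exceeds one 60 Hz period:
double prev = glfwGetTime();
while (!glfwWindowShouldClose(window)) {
    // ... render, glfwSwapBuffers(window), glfwPollEvents() ...
    double now = glfwGetTime();
    if (now - prev > 1.5 / 60.0) // more than ~1.5 vsync periods
        fprintf(stderr, "missed vsync: frame took %.2f ms\n", (now - prev) * 1000.0);
    prev = now;
}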
It's hard to tell without seeing your problem, but unless we are talking about severe stuttering, it's rarely a rendering issue. The motion/physics in your program is processed by the CPU, and the way you are implementing your animation is solely dependent on the CPU.
What this means is that:
Say you are rotating your triangle by a fixed amount every CPU cycle. This is very dependent on the time a CPU cycle takes to complete. Things like CPU workload can have a huge impact on your on-screen result (though not necessarily), and it doesn't even take heavy CPU occupation to notice a difference. All it takes is a background process waking up and querying for updates. This can result in a 'spike' that is observed as a tiny pause in your animation flow (due to the small delay the CPU causes in your animation cycle), which can be interpreted as a stutter.
With the above understood, there are a few ways to solve your issue (though in my opinion it isn't worth the investment for what you are trying to do here). You need to find a way to take consistent animation steps (with a small margin for variation).
This is a great article to explore:
http://gafferongames.com/game-physics/fix-your-timestep/
Ultimately, most of the methods described there will result in a better rendering flow, but not all of them guarantee physics-rendering precision. Without having tried it myself yet, I would say you would have to go as far as implementing interpolation in your rendering process to guarantee drawing that is as smooth as possible.
What I wanted to explain most is that stuttering is usually caused by the CPU, because it interferes directly with your way of handling physics. Overall, using time to drive your physics and interpolating inside your rendering cycle is definitely a topic worth exploring.
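To make the article's core idea concrete, here is a rough fixed-timestep sketch (update() and render() are hypothetical stand-ins for a physics step and a draw call; the blending factor passed to render() is what enables the interpolation mentioned above):
const double dt = 1.0 / 60.0;   // fixed simulation step
double accumulator = 0.0;
double prev = glfwGetTime();
while (!glfwWindowShouldClose(window)) {
    double now = glfwGetTime();
    accumulator += now - prev;
    prev = now;
    while (accumulator >= dt) { // consume elapsed time in fixed steps
        update(dt);             // hypothetical: advance physics by exactly dt
        accumulator -= dt;
    }
    render(accumulator / dt);   // hypothetical: draw, blending between the last two states
    glfwSwapBuffers(window);
    glfwPollEvents();
}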