I have narrowed down the error to this OpenGL call:
glVertexAttribPointer(var.vertex, 4, GL_FLOAT, GL_FALSE, 0, 0);
there are no errors before it, and there is a GL_INVALID_OPERATION error after it.
The output to my console is:
Error in Shader.cpp : bindMesh : 294
OpenGL Error: Invalid Operation
(0x502)
Bound Buffer: 1
var.vertex = 1
According to this, the only GL_INVALID_OPERATION condition that could apply is a bound buffer ID of 0. But the bound buffer ID is 1.
I'm checking (and printing) the error with:
printErrors();
int i = 0;
glGetIntegerv(GL_ARRAY_BUFFER_BINDING, &i);
printf("\tBound Buffer: %i\n", i);
printf("\tvar.vertex = %i\n", var.vertex);
where printErrors() is defined as
void PrintErrors(const char* file, const char* func, int line);
#define printErrors() PrintErrors(__BASE_FILE__,__func__,__LINE__)
void PrintErrors(const char* file, const char* func, int line)
{
const char* errors[] = {                              // indexed by (error code - GL_INVALID_ENUM)
"OpenGL Error: Invalid Enumeration\n",            // 0x0500
"OpenGL Error: Invalid Value\n",                  // 0x0501
"OpenGL Error: Invalid Operation\n",              // 0x0502
"OpenGL Error: Stack Overflow\n",                 // 0x0503
"OpenGL Error: Stack Underflow\n",                // 0x0504
"OpenGL Error: Out of Memory\n",                  // 0x0505
"OpenGL Error: Invalid Frame-Buffer Operation\n", // 0x0506
};
uint32 i = glGetError();
while (i != GL_NO_ERROR)
{
printf("Error in %s : %s : %i\n\t%s\t(0x%x)\n", file, func, line, errors[i - GL_INVALID_ENUM], i);
i = glGetError();
}
}
What's going wrong?
Edit:
var.vertex is found with:
var.vertex = glGetAttribLocation(programID, "Vertex");
and there is a check to ensure that var.vertex was found in the bound shader by an if-statement:
if (var.vertex != INVALID_SHADER_VARIABLE) // INVALID_SHADER_VARIABLE = 0xFFFFFFFF (-1 if it were a signed int)
{
glBindBuffer(...);
glBufferData(...);
glEnableVertexAttribArray(var.vertex);
glVertexAttribPointer(var.vertex, 4, GL_FLOAT, GL_FALSE, 0, 0);
}
In an OpenGL 3.1 context without the GL_ARB_compatibility extension, or in a core profile context, you must have a non-zero Vertex Array Object bound. This is one of the hidden conditions that will generate GL_INVALID_OPERATION whenever you attempt to do anything related to vertex arrays (e.g. drawing, setting up vertex pointers).
The good news is this is a trivial issue to fix.
At minimum, all you need to do is generate (glGenVertexArrays (...)) and bind (glBindVertexArray (...)) a vertex array object when you initialize your program.
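A minimal sketch of that setup, assuming a core-profile context is already current and a function loader such as GLEW has been initialized:

GLuint vao = 0;
glGenVertexArrays(1, &vao); // create one vertex array object
glBindVertexArray(vao);     // bind it so vertex attribute state has somewhere to live

// ... the existing code then works unchanged:
// glBindBuffer(GL_ARRAY_BUFFER, ...);
// glEnableVertexAttribArray(var.vertex);
// glVertexAttribPointer(var.vertex, 4, GL_FLOAT, GL_FALSE, 0, 0);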
I'm trying to fill a CUDA Texture Object with some data, but the call to cudaCreateTextureObject fails with the following error (edit: on both a GTX 1080TI and an RTX 2080TI):
GPU ERROR! 'invalid argument' (err code 11)
It works if I put less data into my texture, so my guess is that my computation of how much data I can fit into a texture is off.
My thought process is as follows:
(executable code follows below)
My data comes in the form of (76,76) images where each pixel is a float. What I would like to do is to store a column of images in a Texture Object; as I understand it, cudaMallocPitch is the way to do this.
When computing the number of images I can store in one texture I'm using the following formula to determine how much space a single image needs:
GTX_1080TI_MEM_PITCH * img_dim_y * sizeof(float)
Where the first factor should be the memory pitch on a GTX 1080TI card (512 bytes). The number of bytes that I can store in a 1D texture is given as 2^27 here. When I divide the latter by the former I get 862.3, which I assume is the number of images I can store in one Texture Object. However, when I try to store more than 855 images in my buffer, the program crashes with the error above.
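For reference, that arithmetic works out to:
2^27 / (512 * 76 * 4) = 134217728 / 155648 ≈ 862.3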
Here's the code:
In the following the main function (a) sets up all the relevant parameters, (b) allocates the memory using cudaMallocPitch, and (c) configures and creates a CUDA Texture Object:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <cassert>
#define GTX_1080TI_MEM_PITCH 512
#define GTX_1080TI_1DTEX_WIDTH 134217728 // 2^27
//=====================================================================[ util ]
// CUDA error checking for library functions
#define CUDA_ERR_CHK(func){ cuda_assert( (func), __FILE__, __LINE__ ); }
inline void cuda_assert( const cudaError_t cu_err, const char* file, int line ){
if( cu_err != cudaSuccess ){
fprintf( stderr, "\nGPU ERROR! \'%s\' (err code %d) in file %s, line %d.\n\n", cudaGetErrorString(cu_err), cu_err, file, line );
exit( EXIT_FAILURE );
}
}
// CUDA generic error checking (used after kernel calls)
#define GPU_ERR_CHK(){ gpu_assert(__FILE__, __LINE__); }
inline void gpu_assert( const char* file, const int line ){
cudaError cu_err = cudaGetLastError();
if( cu_err != cudaSuccess ){
fprintf( stderr, "\nGPU KERNEL ERROR! \'%s\' (err code %d) in file %s, line %d.\n\n", cudaGetErrorString(cu_err), cu_err, file, line );
exit(EXIT_FAILURE);
}
}
//=====================================================================[ main ]
int main(){
// setup
unsigned int img_dim_x = 76;
unsigned int img_dim_y = 76;
unsigned int img_num = 856; // <-- NOTE: set this to 855 and it should work - but we should be able to put 862 here?
unsigned int pitched_img_size = GTX_1080TI_MEM_PITCH * img_dim_y * sizeof(float);
unsigned int img_num_per_tex = GTX_1080TI_1DTEX_WIDTH / pitched_img_size;
fprintf( stderr, "We should be able to stuff %d images into one texture.\n", img_num_per_tex );
fprintf( stderr, "We use %d (more than 855 leads to a crash).\n", img_num );
// allocate pitched memory
size_t img_tex_pitch;
float* d_img_tex_data;
CUDA_ERR_CHK( cudaMallocPitch( &d_img_tex_data, &img_tex_pitch, img_dim_x*sizeof(float), img_dim_y*img_num ) );
assert( img_tex_pitch == GTX_1080TI_MEM_PITCH );
fprintf( stderr, "Asking for %zd bytes allocates %zd bytes using pitch %zd. Available: %zd/%d\n",
img_num*img_dim_x*img_dim_y*sizeof(float),
img_num*img_tex_pitch*img_dim_y*sizeof(float),
img_tex_pitch,
GTX_1080TI_1DTEX_WIDTH - img_num*img_tex_pitch*img_dim_y*sizeof(float),
GTX_1080TI_1DTEX_WIDTH );
// generic resource descriptor
cudaResourceDesc res_desc;
memset(&res_desc, 0, sizeof(res_desc));
res_desc.resType = cudaResourceTypePitch2D;
res_desc.res.pitch2D.desc = cudaCreateChannelDesc<float>();
res_desc.res.pitch2D.devPtr = d_img_tex_data;
res_desc.res.pitch2D.width = img_dim_x;
res_desc.res.pitch2D.height = img_dim_y*img_num;
res_desc.res.pitch2D.pitchInBytes = img_tex_pitch;
// texture descriptor
cudaTextureDesc tex_desc;
memset(&tex_desc, 0, sizeof(tex_desc));
tex_desc.addressMode[0] = cudaAddressModeClamp;
tex_desc.addressMode[1] = cudaAddressModeClamp;
tex_desc.filterMode = cudaFilterModeLinear; // for linear interpolation (NOTE: this breaks normal integer indexing!)
tex_desc.readMode = cudaReadModeElementType;
tex_desc.normalizedCoords = false; // we want to index using [0;img_dim] rather than [0;1]
// make sure there are no lingering errors
GPU_ERR_CHK();
fprintf(stderr, "No CUDA error until now..\n");
// create texture object
cudaTextureObject_t img_tex_obj;
CUDA_ERR_CHK( cudaCreateTextureObject(&img_tex_obj, &res_desc, &tex_desc, NULL) );
fprintf(stderr, "bluppi\n");
}
This should crash when cudaCreateTextureObject is called. If the img_num parameter (at the start of main) is changed from 856 to 855, however, the code should execute successfully. (edit: The expected behavior would be that the code runs through with a value of 862 but fails with a value of 863 since that actually requires more bytes than the documented buffer size offers.)
Any help would be appreciated!
Since you're working with a 2D texture, the number of bytes you can store in a 1D texture (the "width") is not relevant here.
2D textures can have different characteristics depending on the type of memory that provides the backing for the texture. Two examples are linear memory and CUDA arrays. You have chosen a linear memory backing (the kind provided by cudaMalloc* operations other than cudaMallocArray).
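Both sets of limits can be queried at runtime; a short sketch (device 0 assumed, error checking omitted):

cudaDeviceProp prop;
cudaGetDeviceProperties(&prop, 0);
// CUDA-array-backed and pitched-linear-backed 2D textures have separate limits:
printf("max 2D texture (CUDA array backing):     %d x %d\n", prop.maxTexture2D[0], prop.maxTexture2D[1]);
printf("max 2D texture (pitched linear backing): %d x %d\n", prop.maxTexture2DLinear[0], prop.maxTexture2DLinear[1]);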
The primary problem you are running into is the maximum texture height. To discover what this is, we can refer to Table 14 in the programming guide, which lists:
Maximum width and height for a 2D texture reference bound to linear memory 65000 x 65000
You exceed this 65000 limit when going from 855 to 856 images at an image height of 76 rows: 856 * 76 = 65056, while 855 * 76 = 64980.
"But wait" you say, that table 14 entry says texture reference, and I am using a texture object.
You are correct, and table 14 doesn't explicitly list the corresponding limit for texture objects. In that case, we have to refer to the device properties readable from the device at runtime, using cudaGetDeviceProperties(). If we review the data available there, we see this readable item:
maxTexture2DLinear[3] contains the maximum 2D texture dimensions for 2D textures bound to pitch linear memory.
(The 3 is simply the array size; the third element is the maximum pitch, and we only need the first two values here.)
This is the limit we need to respect. If we modify your code to obey it, there are no problems:
$ cat t382.cu
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <cassert>
#define GTX_1080TI_MEM_PITCH 512
#define GTX_1080TI_1DTEX_WIDTH 134217728 // 2^27
//=====================================================================[ util ]
// CUDA error checking for library functions
#define CUDA_ERR_CHK(func){ cuda_assert( (func), __FILE__, __LINE__ ); }
inline void cuda_assert( const cudaError_t cu_err, const char* file, int line ){
if( cu_err != cudaSuccess ){
fprintf( stderr, "\nGPU ERROR! \'%s\' (err code %d) in file %s, line %d.\n\n", cudaGetErrorString(cu_err), cu_err, file, line );
exit( EXIT_FAILURE );
}
}
// CUDA generic error checking (used after kernel calls)
#define GPU_ERR_CHK(){ gpu_assert(__FILE__, __LINE__); }
inline void gpu_assert( const char* file, const int line ){
cudaError cu_err = cudaGetLastError();
if( cu_err != cudaSuccess ){
fprintf( stderr, "\nGPU KERNEL ERROR! \'%s\' (err code %d) in file %s, line %d.\n\n", cudaGetErrorString(cu_err), cu_err, file, line );
exit(EXIT_FAILURE);
}
}
//=====================================================================[ main ]
int main(){
cudaDeviceProp prop;
cudaGetDeviceProperties(&prop, 0);
size_t max2Dtexturelinearwidth = prop.maxTexture2DLinear[0]; // texture x dimension
size_t max2Dtexturelinearheight = prop.maxTexture2DLinear[1]; // texture y dimension
fprintf( stderr, "maximum 2D linear texture dimensions (width,height): %lu,%lu\n", max2Dtexturelinearwidth, max2Dtexturelinearheight);
// setup
unsigned int img_dim_x = 76;
unsigned int img_dim_y = 76;
//unsigned int img_num = 856; // <-- NOTE: set this to 855 and it should work - but we should be able to put 862 here?
unsigned int img_num = max2Dtexturelinearheight/img_dim_y;
fprintf( stderr, "maximum number of images per texture: %u\n", img_num);
unsigned int pitched_img_size = GTX_1080TI_MEM_PITCH * img_dim_y * sizeof(float);
unsigned int img_num_per_tex = GTX_1080TI_1DTEX_WIDTH / pitched_img_size;
fprintf( stderr, "We should be able to stuff %d images into one texture.\n", img_num_per_tex );
fprintf( stderr, "We use %d (more than 855 leads to a crash).\n", img_num );
// allocate pitched memory
size_t img_tex_pitch;
float* d_img_tex_data;
CUDA_ERR_CHK( cudaMallocPitch( &d_img_tex_data, &img_tex_pitch, img_dim_x*sizeof(float), img_dim_y*img_num ) );
assert( img_tex_pitch == GTX_1080TI_MEM_PITCH );
fprintf( stderr, "Asking for %zd bytes allocates %zd bytes using pitch %zd. Available: %zd/%d\n",
img_num*img_dim_x*img_dim_y*sizeof(float),
img_num*img_tex_pitch*img_dim_y*sizeof(float),
img_tex_pitch,
GTX_1080TI_1DTEX_WIDTH - img_num*img_tex_pitch*img_dim_y*sizeof(float),
GTX_1080TI_1DTEX_WIDTH );
// generic resource descriptor
cudaResourceDesc res_desc;
memset(&res_desc, 0, sizeof(res_desc));
res_desc.resType = cudaResourceTypePitch2D;
res_desc.res.pitch2D.desc = cudaCreateChannelDesc<float>();
res_desc.res.pitch2D.devPtr = d_img_tex_data;
res_desc.res.pitch2D.width = img_dim_x;
res_desc.res.pitch2D.height = img_dim_y*img_num;
res_desc.res.pitch2D.pitchInBytes = img_tex_pitch;
// texture descriptor
cudaTextureDesc tex_desc;
memset(&tex_desc, 0, sizeof(tex_desc));
tex_desc.addressMode[0] = cudaAddressModeClamp;
tex_desc.addressMode[1] = cudaAddressModeClamp;
tex_desc.filterMode = cudaFilterModeLinear; // for linear interpolation (NOTE: this breaks normal integer indexing!)
tex_desc.readMode = cudaReadModeElementType;
tex_desc.normalizedCoords = false; // we want to index using [0;img_dim] rather than [0;1]
// make sure there are no lingering errors
GPU_ERR_CHK();
fprintf(stderr, "No CUDA error until now..\n");
// create texture object
cudaTextureObject_t img_tex_obj;
CUDA_ERR_CHK( cudaCreateTextureObject(&img_tex_obj, &res_desc, &tex_desc, NULL) );
fprintf(stderr, "bluppi\n");
}
$ nvcc -o t382 t382.cu
$ cuda-memcheck ./t382
========= CUDA-MEMCHECK
maximum 2D linear texture dimensions (width,height): 131072,65000
maximum number of images per texture: 855
We should be able to stuff 862 images into one texture.
We use 855 (more than 855 leads to a crash).
Asking for 19753920 bytes allocates 133079040 bytes using pitch 512. Available: 1138688/134217728
No CUDA error until now..
bluppi
========= ERROR SUMMARY: 0 errors
$
I'm trying to step into using shaders with OpenGL, but it seems I can't compile the shader. More frustratingly, the error log also appears to be empty. I've searched this site extensively and checked many different tutorials, but there doesn't seem to be an explanation for why it fails. I even bought a book dealing with shaders, but it doesn't account for vanishing logs.
My feeling is that there must be an error with how I am linking the shader source.
//Create shader
GLuint vertObject;
vertObject = glCreateShader(GL_VERTEX_SHADER);
//Stream
ifstream vertfile;
//Try for open - vertex
try
{
//Open
vertfile.open(vertex.c_str());
if(!vertfile)
{
// file couldn't be opened
cout << "Open " << vertex << " failed.";
}
else
{
getline(vertfile, verttext, '\0');
//Test file read worked
cout << verttext;
}
//Close
vertfile.close();
}
catch(const std::ifstream::failure &e)
{
cout << "Failed to open " << vertex;
}
//Link source
GLint const shader_length = verttext.size();
GLchar const *shader_source = verttext.c_str();
glShaderSource(vertObject, 1, &shader_source, &shader_length);
//Compile
glCompileShader(vertObject);
//Check for compilation result
GLint isCompiled = 0;
glGetShaderiv(vertObject, GL_COMPILE_STATUS, &isCompiled);
if (!isCompiled)
{
//Did not compile, why?
std::cout << "The shader could not be compiled\n" << std::endl;
char errorlog[500] = {'\0'};
glGetShaderInfoLog(vertObject, 500, 0, errorlog);
string failed(errorlog);
printf("Log size is %d", failed.size());
}
printf("Compiled state = %d", isCompiled);
The shader code is as trivial as can be.
void main()
{
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
I can't get either my fragment or my vertex shader to compile. If I can get the error log to show, then at least I will be able to start error checking my own work. For now, though, I can't even get the errors.
It seems that the reason glCompileShader was failing without a log in this case is that I attempted to compile before the OpenGL context had been initialised (glutCreateWindow etc.).
If anybody else gets this problem in the future, try printing your OpenGL version just before you create any GLSL objects.
printf("OpenGL version is (%s)\n", glGetString(GL_VERSION));
If you get "OpenGL version is (null)", then you don't have a valid context to create your shaders with. Find where you create your context, and make sure your shader creation comes afterwards.
So I am making a sort of tower defense game. I shared a build with a friend so I could check whether everything performs as it should on another host.
What actually happens is that while everything renders perfectly on my side (both on my Mac/Xcode and my Windows/Visual Studio 2012 builds), on my friend's side the geometry seems to be messed up. Each object on my screen is represented by a VBO which I reuse to render at different locations. But it looks like my VBOs contain the geometry imported from all the models at once (hence the tower with the tree on the side).
Here's the result:
[screenshot: my computer] [screenshot: my friend's computer]
By now I've managed to debug this issue up to a certain point. I can tell it's not the way I'm importing my models, because I write a debug.txt file with all the vectors before I send them to the GPU as VBOs, and on both computers it contains the same values. So the vectors are not getting messed up by memory issues or anything like that, which makes me think it is the way I am setting up or rendering my VBOs.
What strikes me the most, though, is why things work on my computer but not on my friend's. One difference I know for sure is that my computer is a developer station (whatever that means) while my friend's computer is not.
Below are my VBO loading function and my VBO drawing function. I use GLFW to create my window and context and include the GLEW headers to enable some of the newer OpenGL functions.
void G4::Renderer::LoadObject(
G4::TILE_TYPES aType,
std::vector<float> &v_data,
std::vector<float> &n_data,
std::vector<float> &t_data,
float scale,
bool has_texture,
unsigned int texture_id
)
{
unsigned int vertices_id, vertices_size, normals_id, texturecoordinates_id;
vertices_size = static_cast<unsigned int>(v_data.size());
glGenBuffers(1, &vertices_id);
glGenBuffers(1, &normals_id);
//::->Vertex array buffer upload.
glBindBuffer(GL_ARRAY_BUFFER, vertices_id);
glBufferData(GL_ARRAY_BUFFER, sizeof(float)*v_data.size(), &v_data.front(), GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
//::->Normal Array buffer upload.
glBindBuffer(GL_ARRAY_BUFFER, normals_id);
glBufferData(GL_ARRAY_BUFFER, sizeof(float)*n_data.size(), &n_data.front(), GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
if (has_texture)
{
glGenBuffers(1, &texturecoordinates_id);
glBindBuffer(GL_ARRAY_BUFFER, texturecoordinates_id);
glBufferData(GL_ARRAY_BUFFER, sizeof(float)*t_data.size(), &(t_data[0]), GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
}
this->vbos[aType].Update(vertices_id, vertices_size, normals_id, texture_id, texturecoordinates_id, scale, has_texture);
}
Draw code:
void G4::Renderer::DrawUnit(G4::VBO aVBO, bool drawWithColor, float r, float g, float b, float a)
{
bool model_has_texture = aVBO.HasTexture();
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
if (model_has_texture && !drawWithColor) {
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnable(GL_TEXTURE_2D);
}
if (drawWithColor)
{
glColor4f(r, g, b, a);
}
glScalef(aVBO.GetScaleValue(), aVBO.GetScaleValue(), aVBO.GetScaleValue());
glBindBuffer(GL_ARRAY_BUFFER, aVBO.GetVerticesID());
glVertexPointer(3, GL_FLOAT, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ARRAY_BUFFER, aVBO.GetNormalsID());
glNormalPointer(GL_FLOAT, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
if (model_has_texture && !drawWithColor)
{
glBindTexture(GL_TEXTURE_2D, aVBO.GetTextureID());
glBindBuffer(GL_ARRAY_BUFFER, aVBO.GetTextureCoordsID());
glTexCoordPointer(2, GL_FLOAT, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
}
glDrawArrays(GL_TRIANGLES, 0, aVBO.GetVerticesSize());
if (model_has_texture && !drawWithColor) {
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisable(GL_TEXTURE_2D);
}
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
}
I'm out of ideas. I hope someone can direct me on how to debug this further.
The OpenGL spec does not specify the exact behaviour that should occur when you issue a draw call with more vertices than your buffer stores. The reason this may work correctly on one machine and not on another comes down to implementation. Each vendor is free to do whatever they want in this situation, so the render artifacts might show up on AMD hardware but not at all on NVIDIA or Intel. Making matters worse, glDrawArrays (...) generates no error at all when it is asked to draw too many vertices. You definitely need to test your software on hardware from multiple vendors to catch these sorts of errors; who manufactures the GPU in your computer, and the driver version, is just as important as the operating system and compiler.
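One cheap manual sanity check is to compare the count you pass to glDrawArrays against the size of the currently bound buffer; a rough sketch, assuming 3 floats per vertex as in your glVertexPointer call:

GLint bufferSizeBytes = 0;
glBindBuffer(GL_ARRAY_BUFFER, aVBO.GetVerticesID());
glGetBufferParameteriv(GL_ARRAY_BUFFER, GL_BUFFER_SIZE, &bufferSizeBytes);
GLint maxVertexCount = bufferSizeBytes / (GLint)(3 * sizeof(float)); // 3 floats per vertex
assert(aVBO.GetVerticesSize() <= (unsigned int)maxVertexCount);      // needs <cassert>; fires when the draw call asks for too much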
Beyond manual checks like that, there are also tools that can catch these silly mistakes. gDEBugger is one, and there is also a newer OpenGL extension I will discuss below. I prefer to use the extension because, in my experience, in addition to deprecated API calls and errors (which gDEBugger can be configured to monitor), it can also warn you about inefficiently aligned data structures and various other portability and performance issues.
I wanted to add some code I use to enable OpenGL Debug Output in my software, since this is an example of errant behaviour that does not actually generate an error you can catch with glGetError (...). Sometimes you can catch these mistakes with Debug Output (though, I just tested it, and this is not one of those situations). You will need an OpenGL Debug Context for this to work (the process of setting this up is highly platform dependent), but it is a context flag just like forward/backward compatible and core (GLFW should make this easy for you).
Automatic breakpoint macro for x86 platforms
// Breakpoints that should ALWAYS trigger (EVEN IN RELEASE BUILDS) [x86]!
#ifdef _MSC_VER
# define eTB_CriticalBreakPoint() if (IsDebuggerPresent ()) __debugbreak ();
#else
# define eTB_CriticalBreakPoint() asm (" int $3");
#endif
Enable OpenGL Debug Output (requires a Debug Context and a relatively recent driver, OpenGL 4.x era)
// SUPER VERBOSE DEBUGGING!
if (glDebugMessageControlARB != NULL) {
glEnable (GL_DEBUG_OUTPUT_SYNCHRONOUS_ARB);
glDebugMessageControlARB (GL_DONT_CARE, GL_DONT_CARE, GL_DONT_CARE, 0, NULL, GL_TRUE);
glDebugMessageCallbackARB ((GLDEBUGPROCARB)ETB_GL_ERROR_CALLBACK, NULL);
}
Some important utility functions to replace enumerant values with more meaningful text
const char*
ETB_GL_DEBUG_SOURCE_STR (GLenum source)
{
static const char* sources [] = {
"API", "Window System", "Shader Compiler", "Third Party", "Application",
"Other", "Unknown"
};
int str_idx =
min ( source - GL_DEBUG_SOURCE_API,
sizeof (sources) / sizeof (const char *) - 1 ); // clamp to the final "Unknown" entry
return sources [str_idx];
}
const char*
ETB_GL_DEBUG_TYPE_STR (GLenum type)
{
static const char* types [] = {
"Error", "Deprecated Behavior", "Undefined Behavior", "Portability",
"Performance", "Other", "Unknown"
};
int str_idx =
min ( type - GL_DEBUG_TYPE_ERROR,
sizeof (types) / sizeof (const char *) - 1 ); // clamp to the final "Unknown" entry
return types [str_idx];
}
const char*
ETB_GL_DEBUG_SEVERITY_STR (GLenum severity)
{
static const char* severities [] = {
"High", "Medium", "Low", "Unknown"
};
int str_idx =
min ( severity - GL_DEBUG_SEVERITY_HIGH,
sizeof (severities) / sizeof (const char *) - 1 ); // clamp to the final "Unknown" entry
return severities [str_idx];
}
DWORD
ETB_GL_DEBUG_SEVERITY_COLOR (GLenum severity)
{
static DWORD severities [] = {
0xff0000ff, // High (Red)
0xff00ffff, // Med (Yellow)
0xff00ff00, // Low (Green)
0xffffffff // ??? (White)
};
int col_idx =
min ( severity - GL_DEBUG_SEVERITY_HIGH,
sizeof (severities) / sizeof (DWORD) - 1 ); // clamp to the final "???" (White) entry
return severities [col_idx];
}
My Debug Output Callback (somewhat messy, because it prints each field in a different color in my software)
void
ETB_GL_ERROR_CALLBACK (GLenum source,
GLenum type,
GLuint id,
GLenum severity,
GLsizei length,
const GLchar* message,
GLvoid* userParam)
{
eTB_ColorPrintf (0xff00ffff, "OpenGL Error:\n");
eTB_ColorPrintf (0xff808080, "=============\n");
eTB_ColorPrintf (0xff6060ff, " Object ID: ");
eTB_ColorPrintf (0xff0080ff, "%d\n", id);
eTB_ColorPrintf (0xff60ff60, " Severity: ");
eTB_ColorPrintf ( ETB_GL_DEBUG_SEVERITY_COLOR (severity),
"%s\n",
ETB_GL_DEBUG_SEVERITY_STR (severity) );
eTB_ColorPrintf (0xffddff80, " Type: ");
eTB_ColorPrintf (0xffccaa80, "%s\n", ETB_GL_DEBUG_TYPE_STR (type));
eTB_ColorPrintf (0xffddff80, " Source: ");
eTB_ColorPrintf (0xffccaa80, "%s\n", ETB_GL_DEBUG_SOURCE_STR (source));
eTB_ColorPrintf (0xffff6060, " Message: ");
eTB_ColorPrintf (0xff0000ff, "%s\n\n", message);
// Force the console to flush its contents before executing a breakpoint
eTB_FlushConsole ();
// Trigger a breakpoint in gDEBugger...
glFinish ();
// Trigger a breakpoint in traditional debuggers...
eTB_CriticalBreakPoint ();
}
Since I could not actually get your situation to trigger a debug output event, I figured I would at least show an example of an event I was able to trigger. This is not an error that you can catch with glGetError (...), or an error at all for that matter. But it is certainly a draw call issue that you might be completely oblivious to for the duration of your project without using this extension :)
OpenGL Error:
=============
Object ID: 102
Severity: Medium
Type: Performance
Source: API
Message: glDrawElements uses element index type 'GL_UNSIGNED_BYTE' that is not optimal for the current hardware configuration; consider using 'GL_UNSIGNED_SHORT' instead.
After further debugging sessions with my friend and many tryouts, I managed to find the problem. It took me two solid days to figure out, and really it was just a silly mistake.
glDrawArrays(GL_TRIANGLES, 0, aVBO.GetVerticesSize());
The above call does not pass the number of vertices (points) but the total number of floats stored in the buffer, so the count is three times too large. Dividing by 3 solved it.
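In code, the one-line fix looks like this (assuming GetVerticesSize() still returns the raw float count):

glDrawArrays(GL_TRIANGLES, 0, aVBO.GetVerticesSize() / 3); // 3 floats per vertex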
So I assume that, because the vertex count was three times too large, the VBO "stole" data from other VBOs stored on the GPU (hence the tree model stuck to my tower).
What I can't figure out yet, though, and would like an answer to, is why everything rendered fine on my computer but not on other computers. As I state in my original question, a hint might be that my computer is actually a developer station while my friend's is not.
If anyone is kind enough to explain why this effect doesn't reproduce on my machine, I will gladly accept their answer as the solution to my problem.
Thank you
After rearranging some OpenCL/GL interop code it stopped working and gave the error in the title.
I have already read many threads that discuss this issue and blame the (NVIDIA) driver for it. But since the code ran before, this should not be my problem.
Code
As I already said, I have too much code to post all of it. Also, I have not been able to reproduce the behaviour in a minimal example so far, so I am going to describe most of the program in pseudo code and parts of it as excerpts from my real code.
I use Qt, so the following functions are all members of my subclass of QGLWidget:
void initializeGL()
{
glewInit();
cl::Platform::get(&clPlatforms);
cl_context_properties props[] = {
CL_GL_CONTEXT_KHR, (cl_context_properties)glXGetCurrentContext(),
CL_GLX_DISPLAY_KHR, (cl_context_properties)glXGetCurrentDisplay(),
CL_CONTEXT_PLATFORM, (cl_context_properties)clPlatforms[0](),
0
};
clContext = cl::Context(CL_DEVICE_TYPE_GPU, props, cl_notify, &err);
std::vector<cl::Device> devices = clContext.getInfo<CL_CONTEXT_DEVICES>();
clDevice = devices[0];
clQueue = cl::CommandQueue(clContext, clDevice, 0, 0);
clProgram = cl::Program(clContext, sources);
clProgram.build();
clKernel = cl::Kernel(clProgram, "main");
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, bufferSize, buffer, GL_DYNAMIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
vboBuffer = cl::BufferGL(clContext, CL_MEM_READ_WRITE, vbo);
clKernel.setArg(0, vboBuffer);
}
void paintGL()
{
simulate();
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
//camera transformations ...
glEnableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexPointer(3, GL_FLOAT, sizeof(cl_float3), 0);
glDrawElements(GL_TRIANGLES, 3 * numTriangles, GL_UNSIGNED_INT, triangleIdicesArray);
glBindBuffer(GL_ARRAY_BUFFER, 0);
}
void simulate()
{
std::vector<cl::Memory> sharedObjects;
sharedObjects.push_back(vboBuffer);
clQueue.enqueueAcquireGLObjects(&sharedObjects); //this triggers the error
clQueue.enqueueNDRangeKernel(clKernel, cl::NullRange, cl::NDRange(numVertices), cl::NullRange);
clQueue.enqueueReleaseGLObjects(&sharedObjects);
}
Some further investigation into clEnqueueAcquireGLObjects turned up (in the notes section of its documentation):
Prior to calling clEnqueueAcquireGLObjects, the application must ensure that any pending GL operations which access the objects specified in mem_objects have completed. This may be accomplished portably by issuing and waiting for completion of a glFinish command on all GL contexts with pending references to these objects.
Adding a finish on the OpenCL command queue before and after the acquire/release got rid of these errors.
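For reference, a rough sketch of the adjusted simulate() (the glFinish() call reflects the spec note quoted above; error handling omitted):

void simulate()
{
    std::vector<cl::Memory> sharedObjects;
    sharedObjects.push_back(vboBuffer);

    glFinish();       // per the spec note: wait for pending GL work on the shared VBO
    clQueue.finish(); // finish the CL queue before acquiring

    clQueue.enqueueAcquireGLObjects(&sharedObjects);
    clQueue.enqueueNDRangeKernel(clKernel, cl::NullRange, cl::NDRange(numVertices), cl::NullRange);
    clQueue.enqueueReleaseGLObjects(&sharedObjects);

    clQueue.finish(); // finish again so GL does not touch the buffer while CL is still writing it
}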
As with other errors, I find the error code misleading, and the notify function did not add any valuable information (it just said the same thing).
I must admit this is my first time implementing shaders; previously I have only worked with the fixed-function pipeline. Even though I am fairly certain everything I did is correct, there must be an error somewhere.
glLinkProgram(program) returns GL_FALSE when queried for GL_LINK_STATUS. In addition, the info log is empty (when I query the log length it is 1, which is just the null terminator per the docs, so that checks out). So: linker errors and no logs. I have also just discovered that the linker problem occurs as soon as I make any use of the gl_Position variable in the vertex shader, both during assignment and if I use it in calculations. I have tried all sorts of shader variations; it errors out but fails to produce a log - it just seems to return GL_FALSE any time gl_Position is touched. Interestingly enough, the fragment shader doesn't cause any problems.
Both the fragment and vertex shaders compile fine with no errors. When I introduce syntax errors, they are detected, printed, and the process is aborted prior to program creation, so that part seems to work. I debugged and made sure the files get loaded properly, the source is null terminated, and the sizes are correct; I also checked the number of attached shaders after attachment and it is 2 (correct). Error checking is removed below for clarity, but I have a checkForErrors() method that checks and prints OpenGL errors - none are detected.
I am stumped, please someone help! I've been losing sleep over this for 2 days now...
This is the code to load the shader:
FILE *file = fopen(fileName.c_str(), "rb");
if(file == NULL)
{
Log::error("file does not exist: \"" + fileName + "\"");
return NULL;
}
// Calculate file size
fseek(file , 0 , SEEK_END);
int size = ftell(file);
rewind(file);
// If file size is 0
if(size == 0)
{
Log::error("file is empty: \"" + fileName + "\"");
return NULL;
}
char **source = new char*[1];
source[0] = new char[size+1];
source[0][size] = '\0';
// If we weren't able to read the entire file into memory
int readSize = fread(source[0], sizeof(char), size, file);
if(size != readSize)
{
int fileError = ferror(file);
// Free the source, log an error and return false
delete [] source[0];
delete [] source;
fclose(file);
Log::error("couldn't load file into memory [ferror(" + toString<int>(fileError) + ")]: \"" + fileName + "\" (Size: " + toString<int>(readSize) + "/" + toString<int>(size) + " bytes)");
return NULL;
}
// Close the file
fclose(file);
// Create the shader object
// shaderType is GLenum that is set based on the file extension. I assure you it is correctly set to either GL_VERTEX_SHADER or GL_FRAGMENT_SHADER
GLuint shaderID = glCreateShader(shaderType);
// If we could not create the shader object, check for OpenGL errors and return false
if(shaderID == 0)
{
checkForErrors();
Log::error("error creating shader \"" + name);
delete [] source[0];
delete [] source;
return NULL;
}
// Load shader source and compile it
glShaderSource(shaderID, 1, (const GLchar**)source, NULL);
glCompileShader(shaderID);
GLint success;
glGetShaderiv(shaderID, GL_COMPILE_STATUS, &success);
if(!success)
{
GLchar error[1024];
glGetShaderInfoLog(shaderID, 1024, NULL, error);
Log::error("error compiling shader \"" + name + "\"\n Log:\n" + error);
delete [] source[0];
delete [] source;
return NULL;
}
else
{
Log::debug("success! Loaded shader \"" + name + "\"");
}
// Clean up
delete [] source[0];
delete [] source;
Quick note on glShaderSource: I cast to (const GLchar**) because GCC complains about the char** to const char** conversion; otherwise, I believe I'm entirely compliant.
No errors there, by the way. The shaders (below) compile without any errors.
Vertex Shader:
void main()
{
// If I comment the line below, the link status is GL_TRUE.
gl_Position = vec4(1.0, 1.0, 1.0, 1.0);
}
Fragment Shader:
void main()
{
gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
Below is the code that creates the shader program, attaches objects and links etc.:
// Create a new shader program
GLuint program = glCreateProgram();
if(program == 0)
{
Log::error("RenderSystem::loadShaderProgram() - could not create OpenGL shader program; This one is fatal, sorry.");
getContext()->fatalErrorIn("RenderSystem::loadShaderProgram(shader1, shader2)", "OpenGL failed to create object using glCreateProgram()");
return NULL;
}
// Attach shader objects to program
glAttachShader(program, vertexShaderID);
glAttachShader(program, fragmentShaderID);
// Link the program
GLint success = GL_FALSE;
glLinkProgram(program);
checkForErrors();
glGetProgramiv(program, GL_LINK_STATUS, &success);
if(success == GL_FALSE)
{
GLchar errorLog[1024] = {0};
glGetProgramInfoLog(program, 1024, NULL, errorLog);
Log::error(std::string() + "error linking program: " + errorLog);
return NULL;
}
success = GL_FALSE;
glValidateProgram(program);
glGetProgramiv(program, GL_VALIDATE_STATUS, &success);
if(success == GL_FALSE)
{
GLchar errorLog[1024] = {0};
glGetProgramInfoLog(program, 1024, NULL, errorLog);
checkForErrors();
Log::error(std::string() + "error validating shader program; Details: " + errorLog);
return NULL;
}
Usually it doesn't even get as far as validating the program... I'm so frustrated with this, it's very hard not to be vulgar.
Your help is needed and any assistance is genuinely appreciated!
EDIT: I'm running everything on Intel HD 3000 (with support for OpenGL up to 3.1). My target OpenGL version is 2.0.
EDIT2: I would also like to note that I had some issues reading the shaders from text files when I used the "r" or "rt" flags in fopen - the read size was smaller than the actual size by about 10 bytes (consistently on all files) and feof() would return true. (In text mode on Windows, CRLF line endings get translated, so the byte count from ftell doesn't match what fread returns.) When I switched to reading in binary ("rb") the problem went away and the files were read in fully. I did try several alternative implementations and they all produced the same error during linking (and I print the shader source to the console right after reading the file to make sure it looks correct; it does).
OK, I knew when I was posting that this was going to be embarrassing, but it is quite bad: the Intel graphics configuration application that usually accompanies an Intel driver has a 3D settings tab. The "custom settings" check box was unchecked, but the grayed-out option under the "Vertex Processing" setting spelled out "Enable Software Processing". Even though it was unchecked and everything was grayed out, I went into the custom settings and set everything to "Application Settings".
That fixed it! Everything links like it should. I don't know who thought of that option or why it would ever be useful; it's the only option that looks stunningly out of place.
I went through several driver re-installs, intense debugging and configuration research; I had to speculate and second-guess absolutely everything. Terrible - I wouldn't wish it on my worst enemies.