OpenGL error 1281 when trying to glUseProgram

Any ideas how I could debug this OpenGL error (1281) further?
I'm loading source from files, compiling, linking, and then checking for errors after glUseProgram.
In my object's draw method ..
log.info(gl2.glIsProgram(shaderProgram)); // true
gl2.glUseProgram(shaderProgram);
int error;
while ((error = gl2.glGetError()) != GL2.GL_NO_ERROR) {
    throw new RuntimeException("glUseProgram" + ": glError " + error);
}
Output ..
[13:38:08] INFO (IARectangle.java:99) - true
java.lang.RuntimeException: glUseProgram: glError 1281
This is how I load my shader source, from .glsl files ..
Vector<Integer> shaders = new Vector<Integer>();
try {
    shaders.add(compileSource(
            loadSource("shaders/vertexShader.glsl"),
            loadSource("shaders/fragmentShader.glsl")));
    return shaders;
} catch (Exception e) {
    e.printStackTrace();
    return shaders;
}
public String[] loadSource(String filename) {
    StringBuilder sb = new StringBuilder();
    try {
        InputStream is = getClass().getClassLoader().getResourceAsStream(filename);
        BufferedReader br = new BufferedReader(new InputStreamReader(is, "UTF-8"));
        String line;
        while ((line = br.readLine()) != null) {
            sb.append(line);
            sb.append('\n');
        }
        is.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
    return new String[] { sb.toString() };
}
public final int compileSource(final String[] vertexSource, final String[] fragmentSource) throws Exception {
    int vertexShaderProgram;
    int fragmentShaderProgram;
    int shaderProgram;

    // load vertexShader source, compile and verify
    vertexShaderProgram = gl2.glCreateShader(GL2.GL_VERTEX_SHADER);
    gl2.glShaderSource(vertexShaderProgram, 1, vertexSource, null, 0);
    gl2.glCompileShader(vertexShaderProgram);
    verifyCompile(gl2, vertexShaderProgram);

    // load fragmentShader source, compile and verify
    fragmentShaderProgram = gl2.glCreateShader(GL2.GL_FRAGMENT_SHADER);
    gl2.glShaderSource(fragmentShaderProgram, 1, fragmentSource, null, 0);
    gl2.glCompileShader(fragmentShaderProgram);
    verifyCompile(gl2, fragmentShaderProgram);

    shaderProgram = gl2.glCreateProgram();
    gl2.glAttachShader(shaderProgram, vertexShaderProgram);
    gl2.glAttachShader(shaderProgram, fragmentShaderProgram);
    gl2.glLinkProgram(shaderProgram);

    IntBuffer intBuffer = IntBuffer.allocate(1);
    gl2.glGetProgramiv(shaderProgram, GL2.GL_LINK_STATUS, intBuffer);
    if (intBuffer.get(0) != 1) {
        String infoLog = null;
        gl2.glGetProgramiv(shaderProgram, GL2.GL_INFO_LOG_LENGTH, intBuffer);
        int size = intBuffer.get(0);
        log.error("Program link error: ");
        if (size > 0) {
            ByteBuffer byteBuffer = ByteBuffer.allocate(size);
            gl2.getGL2().glGetProgramInfoLog(shaderProgram, size, intBuffer, byteBuffer);
            byte[] sizeBytes = new byte[size];
            byteBuffer.get(sizeBytes, 0, size);
            infoLog = new String(sizeBytes);
            log.error("info: " + infoLog);
        } else {
            log.error("Unknown");
        }
        System.exit(1);
        return shaderProgram;
    } else {
        return shaderProgram;
    }
}
Vertex shader source ..
#version 120
uniform mat4 uMVPMatrix;
attribute vec4 vPosition;
void main() {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    //gl_Position = uMVPMatrix * vPosition;
}
Fragment shader source ..
#version 120
uniform vec4 vColor;
void main() {
    gl_FragColor = vColor;
}

At first glance, I'm not sure why glGetError is returning an error code. But to answer your specific question of 'How can I debug this error further?', I do have a suggestion.
Change your draw code to this:
// Log any errors already recorded before the call to glUseProgram
int error;
while ((error = gl2.glGetError()) != GL2.GL_NO_ERROR) {
    log.info(error);
}

log.info(gl2.glIsProgram(shaderProgram)); // true
gl2.glUseProgram(shaderProgram);

// Check again after the call, as before ('error' must not be redeclared here)
while ((error = gl2.glGetError()) != GL2.GL_NO_ERROR) {
    throw new RuntimeException("glUseProgram" + ": glError " + error);
}
Note that the difference here is that we've added a block of code to log any errors returned by glGetError before the call to glUseProgram. The reason is that the error is not necessarily originating in your call to glUseProgram. If you see the 1281 error logged by the code above, you know the error is actually coming from an OpenGL call made before the glUseProgram call.
Take a look at the documentation for glGetError:
glGetError returns the value of the error flag. Each detectable error
is assigned a numeric code and symbolic name. When an error occurs,
the error flag is set to the appropriate error code value. No other
errors are recorded until glGetError is called, the error code is
returned, and the flag is reset to GL_NO_ERROR.
So, if one of your earlier OpenGL calls (perhaps something in your compileSource function, for example) recorded the 1281 error, and you did not call glGetError anywhere between that point and your call to glUseProgram, you cannot reasonably assume that the error is actually originating from the glUseProgram call.
In summary, glGetError does not return the most recent error recorded by an OpenGL call. You need to call glGetError with more granularity in order to pinpoint where the error is originating. This will allow you to troubleshoot the issue you're having and determine exactly which OpenGL call is recording the error.
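To make that granular checking less tedious, it can be wrapped in a small helper that drains the error queue and tags the call site. Below is a minimal sketch; the helper name is my own, and it is written in C++ against the raw OpenGL API, which JOGL's GL2 binds one-to-one, so it translates directly to the Java code above.

#include <GL/gl.h>
#include <cstdio>

// Drain the error queue, printing every pending error code together with a
// caller-supplied tag, so the first offending call can be pinpointed.
static void checkGlError(const char* tag)
{
    for (GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError()) {
        std::fprintf(stderr, "%s: glError %u\n", tag, err);
    }
}

Sprinkling calls such as checkGlError("glCompileShader") and checkGlError("glLinkProgram") after each step during bring-up narrows the 1281 down to a single call; the checks can be stripped once things work. If I remember correctly, JOGL also ships a composable DebugGL2 pipeline that performs this kind of check after every GL call automatically.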

Related

GLSL: Force error if any Uniform is not bound

I am using GLSL 1.0 with WebGL 1.0 and 2.0, and I just spent hours troubleshooting an issue that, in my opinion, should have thrown an error before things got started.
I have uniforms and sampler2Ds in my fragment shader. I had changed one line of code, and that change caused no input textures or arrays to be bound to the locations of the shader uniforms. The program nevertheless runs with no issues, but produces zeros wherever those uniforms are read. For example, a call to texture2D(MyTexture, vec2(x,y)) does not throw any errors but rather just returns 0.
Is there any way for me to force this as an error before or during rendering?
There is no way to make WebGL itself check your errors. You can write your own wrappers if you want to check for errors. As one example there's the webgl-debug context wrapper that calls gl.getError after every single WebGL command.
Following a similar pattern, you could try to check for unset uniforms either by wrapping all the functions related to drawing, programs, uniforms, attributes, etc., or by just making functions you call yourself:
function myUseProgram(..args..) {
  checkUseProgramStuff();
  gl.useProgram(..);
}

function myDrawArrays(..args..) {
  checkDrawArraysStuff();
  gl.drawArrays(..args..);
}
For uniforms, you'd need to track when a program links successfully, then loop over all of its uniforms (which you can query). Track what the current program is. Track calls to gl.uniform to see whether uniforms are set.
Here's an example:
(function() {
  const gl = null;  // just to make sure we don't see global gl
  const progDB = new Map();
  let currentUniformMap;
  const limits = {};
  const origGetUniformLocationFn = WebGLRenderingContext.prototype.getUniformLocation;

  function init(gl) {
    [gl.MAX_COMBINED_TEXTURE_IMAGE_UNITS].forEach((pname) => {
      limits[pname] = gl.getParameter(pname);
    });
  }

  function isBuiltIn(info) {
    const name = info.name;
    return name.startsWith("gl_") || name.startsWith("webgl_");
  }

  function addProgramToDB(gl, prg) {
    const uniformMap = new Map();
    const numUniforms = gl.getProgramParameter(prg, gl.ACTIVE_UNIFORMS);
    for (let ii = 0; ii < numUniforms; ++ii) {
      const uniformInfo = gl.getActiveUniform(prg, ii);
      if (isBuiltIn(uniformInfo)) {
        continue;
      }
      const location = origGetUniformLocationFn.call(gl, prg, uniformInfo.name);
      uniformMap.set(location, {set: false, name: uniformInfo.name, type: uniformInfo.type, size: uniformInfo.size});
    }
    progDB.set(prg, uniformMap);
  }

  HTMLCanvasElement.prototype.getContext = function(origFn) {
    return function(type, ...args) {
      const ctx = origFn.call(this, type, ...args);
      if (ctx && type === 'webgl') {
        init(ctx);
      }
      return ctx;
    };
  }(HTMLCanvasElement.prototype.getContext);

  // getUniformLocation does not return the same location object
  // for the same location so mapping a location to uniform data
  // would be a PITA. So, let's make it return the same location objects.
  WebGLRenderingContext.prototype.getUniformLocation = function(origFn) {
    return function(prg, name) {
      const uniformMap = progDB.get(prg);
      for (const [location, uniformInfo] of uniformMap.entries()) {
        // note: not handling names like foo[0] vs foo
        if (uniformInfo.name === name) {
          return location;
        }
      }
      return null;
    };
  }(WebGLRenderingContext.prototype.getUniformLocation);

  WebGLRenderingContext.prototype.linkProgram = function(origFn) {
    return function(prg) {
      origFn.call(this, prg);
      const success = this.getProgramParameter(prg, this.LINK_STATUS);
      if (success) {
        addProgramToDB(this, prg);
      }
    };
  }(WebGLRenderingContext.prototype.linkProgram);

  WebGLRenderingContext.prototype.useProgram = function(origFn) {
    return function(prg) {
      origFn.call(this, prg);
      currentUniformMap = progDB.get(prg);
    };
  }(WebGLRenderingContext.prototype.useProgram);

  WebGLRenderingContext.prototype.uniform1i = function(origFn) {
    return function(location, v) {
      const uniformInfo = currentUniformMap.get(location);
      if (v === undefined) {
        throw new Error(`bad value for uniform: ${uniformInfo.name}`);  // do you care? undefined will get converted to 0
      }
      const val = parseFloat(v);
      if (isNaN(val) || !isFinite(val)) {
        throw new Error(`bad value NaN or Infinity for uniform: ${uniformInfo.name}`);  // do you care?
      }
      switch (uniformInfo.type) {
        case this.SAMPLER_2D:
        case this.SAMPLER_CUBE:
          if (val < 0 || val > limits[this.MAX_COMBINED_TEXTURE_IMAGE_UNITS]) {
            throw new Error(`texture unit out of range for uniform: ${uniformInfo.name}`);
          }
          break;
        default:
          break;
      }
      uniformInfo.set = true;
      origFn.call(this, location, v);
    };
  }(WebGLRenderingContext.prototype.uniform1i);

  WebGLRenderingContext.prototype.drawArrays = function(origFn) {
    return function(...args) {
      const unsetUniforms = [...currentUniformMap.values()].filter(u => !u.set);
      if (unsetUniforms.length) {
        throw new Error(`unset uniforms: ${unsetUniforms.map(u => u.name).join(', ')}`);
      }
      origFn.call(this, ...args);
    };
  }(WebGLRenderingContext.prototype.drawArrays);
}());
// ------------------- above is wrapper ------------------------
// ------------------- below is test ---------------------------
const gl = document.createElement('canvas').getContext('webgl');
const vs = `
uniform float foo;
uniform float bar;
void main() {
  gl_PointSize = 1.;
  gl_Position = vec4(foo, bar, 0, 1);
}
`;
const fs = `
precision mediump float;
uniform sampler2D tex;
void main() {
  gl_FragColor = texture2D(tex, vec2(0));
}
`;
const prg = twgl.createProgram(gl, [vs, fs]);
const fooLoc = gl.getUniformLocation(prg, 'foo');
const barLoc = gl.getUniformLocation(prg, 'bar');
const texLoc = gl.getUniformLocation(prg, 'tex');
gl.useProgram(prg);

test('fails with undefined', () => {
  gl.uniform1i(fooLoc, undefined);
});
test('fails with non number string', () => {
  gl.uniform1i(barLoc, 'abc');
});
test('fails with NaN', () => {
  gl.uniform1i(barLoc, 1/0);
});
test('fails with too large texture unit', () => {
  gl.uniform1i(texLoc, 1000);
});
test('fails with not all uniforms set', () => {
  gl.drawArrays(gl.POINTS, 0, 1);
});
test('fails with not all uniforms set', () => {
  gl.uniform1i(fooLoc, 0);
  gl.uniform1i(barLoc, 0);
  gl.drawArrays(gl.POINTS, 0, 1);
});
test('passes with all uniforms set', () => {
  gl.uniform1i(fooLoc, 0);
  gl.uniform1i(barLoc, 0);
  gl.uniform1i(texLoc, 0);
  gl.drawArrays(gl.POINTS, 0, 1);  // note there is no texture so will actually generate warning
});

function test(msg, fn) {
  const expectFail = msg.startsWith('fails');
  let result = 'success';
  let fail = false;
  try {
    fn();
  } catch (e) {
    result = e;
    fail = true;
  }
  log('black', msg);
  log(expectFail === fail ? 'green' : 'red', ' ', result);
}

function log(color, ...args) {
  const elem = document.createElement('pre');
  elem.textContent = [...args].join(' ');
  elem.style.color = color;
  document.body.appendChild(elem);
}
pre { margin: 0; }
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
The code above only wraps gl.uniform1i. It doesn't handle arrays of uniforms nor does it handle individual array element locations. It does show one way to track uniforms and whether or not they've been set.
Following a similar pattern, you could check that each texture unit has a texture assigned, that each attribute is turned on, and so on.
Of course, you could also write your own WebGL framework that tracks all that stuff instead of hacking the WebGL context itself. In other words, three.js, for example, could track that all its uniforms are set at a higher level than the WebGL level, and your own code could do something similar.
As for why WebGL doesn't emit errors there are lots of reasons. For one, not setting a uniform is not an error. Uniforms have default values and it's perfectly fine to use the default.
The browser does catch some problems, but because WebGL is pipelined it can't give you the error at the moment you issue the command without a huge slowdown in performance (the debug context mentioned above will do that for you). So the browser can sometimes give a warning in the console, but it can't stop your JavaScript at the point you issued the command. That might not be that helpful anyway, since often the only place it can give an error is at draw time: the 30-100 commands issued beforehand to set up WebGL state are not errors until you draw, because you could fix that state at any time before you draw. So you get the error on draw, but that doesn't tell you which of the 30-100 previous commands caused the issue.
Finally, there's the philosophical issue of trying to support native ports from OpenGL/OpenGL ES via emscripten or WebAssembly. Many native apps ignore lots of GL errors and yet still run. This is one reason why WebGL doesn't throw: to stay compatible with OpenGL ES (as well as for the reasons above). It's also why most WebGL implementations only show a few errors and then print "no more webgl errors will be shown", since browsers didn't want programs that ignore their WebGL errors to fill memory with logging messages.
Fortunately if you really want it you can write your own checking.

Cannot retrieve shader binary

I'm having a problem retrieving a shader program's binary.
I made a class named "Building" that has the following static members:
static GLenum binary_format;
static unsigned char * program_binary;
static GLsizei binary_size;
static GLsizei binary_length;
I use program_binary to store the shader program's binary. I made these members static in order to retrieve their values from any instance of the class, once the first instance fills them with proper values.
This class also has the following member method, which I use as a routine to set up the shader program:
void Building::setShaders()
{
    if (program_binary == nullptr)
    {
        shader_prog = glCreateProgram();
        GLchar const *vss[] =
        {
            *smanager.getVShaderSource(0),
            *smanager.getVShaderSource(1)
        };
        GLuint vs = glCreateShader(GL_VERTEX_SHADER);
        glShaderSource(vs, 2, vss, NULL);
        glCompileShader(vs);
        GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(fs, 1, smanager.getFShaderSource(0), NULL);
        glCompileShader(fs);

        // CHECKING COMPILE STATUS
        GLint vs_status;
        glGetShaderiv(vs, GL_COMPILE_STATUS, &vs_status);
        GLint fs_status;
        glGetShaderiv(fs, GL_COMPILE_STATUS, &fs_status);
        if (vs_status == 0 || fs_status == 0)
        {
            if (vs_status == 0)
            {
                GLint log_length;
                glGetShaderiv(vs, GL_INFO_LOG_LENGTH, &log_length);
                char log[log_length];
                glGetShaderInfoLog(vs, log_length, NULL, log);
                printVShaderLog(log);
            }
            if (fs_status == 0)
            {
                GLint log_length;
                glGetShaderiv(fs, GL_INFO_LOG_LENGTH, &log_length);
                char log[log_length];
                glGetShaderInfoLog(fs, log_length, NULL, log);
                printFShaderLog(log);
            }
        }
        else
        {
            glAttachShader(shader_prog, vs);
            glAttachShader(shader_prog, fs);
            glProgramParameteri(shader_prog, GL_PROGRAM_BINARY_RETRIEVABLE_HINT, GL_TRUE);
            glLinkProgram(shader_prog);
            GLint param;
            glGetProgramiv(shader_prog, GL_LINK_STATUS, &param);
            if (param)
            {
                printShaderLinkState(param, NULL);
                glGetProgramiv(shader_prog, GL_PROGRAM_BINARY_LENGTH, &binary_size);
                std::cerr << "binary_size is " << binary_size << std::endl;
                // binary_size is zero!
                program_binary = new unsigned char[binary_size];
                glGetProgramBinary(shader_prog, binary_size, &binary_length, &binary_format, program_binary);
            }
            else
            {
                GLint logLength;
                glGetProgramiv(shader_prog, GL_INFO_LOG_LENGTH, &logLength);
                char log[logLength];
                glGetProgramInfoLog(shader_prog, logLength, NULL, log);
                printShaderLinkState(param, log);
            }
        }
        glDeleteShader(vs);
        glDeleteShader(fs);
    } // if (program_binary == nullptr)
    else
        glProgramBinary(shader_prog, binary_format, program_binary, binary_length);
}
The problem is that the length returned by glGetProgramiv(shader_prog, GL_PROGRAM_BINARY_LENGTH, &binary_size);
is zero. However, I tried setting binary_size = 100 just to see if something gets written through the program_binary pointer, but the call glGetProgramBinary(shader_prog, binary_size, &binary_length, &binary_format, program_binary); isn't working either; in fact, program_binary remains empty. I have no clue why this is happening.
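One thing worth ruling out first (a sketch of a diagnostic, not a confirmed diagnosis of the code above): glGetProgramBinary requires OpenGL 4.1 or ARB_get_program_binary, and a driver may expose the entry points while advertising zero supported binary formats, in which case a GL_PROGRAM_BINARY_LENGTH of zero is the expected answer rather than a bug. The function name below is my own.

#include <GL/glew.h> // or whichever loader provides the GL 4.1+ entry points
#include <cstdio>

// Report how many program binary formats the driver advertises.
// Zero formats means no binary can be retrieved on this driver.
bool programBinarySupported()
{
    GLint numFormats = 0;
    glGetIntegerv(GL_NUM_PROGRAM_BINARY_FORMATS, &numFormats);
    std::fprintf(stderr, "driver advertises %d program binary format(s)\n", numFormats);
    return numFormats > 0;
}

It is also worth calling glGetError right after the GL_PROGRAM_BINARY_LENGTH query: on a context that doesn't support that pname, the query records GL_INVALID_ENUM and leaves binary_size untouched, so a zero-initialized static member would then still read as zero.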

Failed to load model with assimp, access violation

I've got a problem importing a simple model using Assimp: whenever I run the code it throws:
0xC0000005: Access violation reading location 0x00000000.
I know this is something about a null pointer but I just can't find it, the code goes as follows:
Model::Model(GLchar* path)
{
    loadModel(path);
}

void Model::loadModel(std::string path)
{
    Assimp::Importer import;
    const aiScene* scene = import.ReadFile(
        path,
        aiProcess_Triangulate |
        aiProcess_FlipUVs);
    if (!scene || scene->mFlags == AI_SCENE_FLAGS_INCOMPLETE || !scene->mRootNode) {
        std::cout << "ERROR::ASSIMP::" << import.GetErrorString() << std::endl;
        return;
    }
    directory = path.substr(0, path.find_last_of('/'));
    aiNode* node = scene->mRootNode;
    for (GLuint i = 0; i < node->mNumChildren; i++) {
        aiMesh* mesh = scene->mMeshes[node->mMeshes[i]];
        meshes.push_back(processMesh(mesh, scene));
    }
    for (GLuint i = 0; i < node->mNumChildren; i++) {
        processNode(node->mChildren[i], scene);
    }
}
I use this Model class as a global variable:
//include stuff
//other global variables
Model mymodel("D:/Project/xxx/xxx.obj");

int main() {
    //...
    return 0;
}
The thing is that the error happens somewhere between the line directory = path.substr(0, path.find_last_of('/')); and the line aiNode* node = scene->mRootNode;, so I don't know how to debug it. Could anyone tell me how to fix this? I'm using Visual Studio 2013 (64-bit) and assimp-3.1.1.
Thank you very much.
I think the problem could be in this part of the code:
Model::Model(GLchar* path)
{
    loadModel(path); // path is declared as a string in the loadModel function - the type conversion might not be happening as expected.
}
Check whether you are getting a correct/valid value in the path variable on the line:
directory = path.substr(0, path.find_last_of('/'));
This link might be helpful:
GLchar could not be resolved
Note: I am not familiar with OpenGL, but looking at the error you are getting, this is the first place I would check.

OpenGL showing blank screen. Maybe due to shaders?

My OpenGL ES application isn't working. I'm using SDL for window management, and it holds the context. After looking around, I noticed that the vertex shader and the fragment shader both showed up as 0 in the debugger. Even the program was 0. Could this be the reason? I based my shader compiling and linking code on a template that was made previously.
If it is, what is wrong? Here is the code:
GLuint ShaderHelper::compileShader(GLenum type, std::string fileName) {
    std::string fileContents;
    std::ifstream fin;
    std::string path;
    // Getting the necessary path...
    // These are abstractions for getting file contents
    // from the main bundle.
    if (type == GL_VERTEX_SHADER) {
        FileOpener opener;
        path = opener.retriveFileFromBundle(fileName, "vsh");
    } else if (type == GL_FRAGMENT_SHADER) {
        FileOpener opener;
        path = opener.retriveFileFromBundle(fileName, "fsh");
    } else {
        std::cout << "ERROR: Invalid shader type at filename " << fileName << std::endl;
        exit(1);
    }
    fin.open(path);
    if (!fin.is_open()) {
        std::cout << "ERROR: Failed to open file " << fileName << std::endl;
        exit(1);
    }
    // Retrieving the string from the file...
    while (!fin.eof()) {
        char CLine[255];
        fin.getline(CLine, 255);
        std::string line = CLine;
        fileContents = fileContents + line;
        fileContents = fileContents + " \n";
    }
    fin.close();
    // I'm creating these variables because a pointer is needed for
    // glShaderSource
    GLuint shaderHandle = glCreateShader(type);
    const GLint shaderStringLength = (GLint)fileContents.size();
    const GLchar *shaderCString = fileContents.c_str();
    glShaderSource(shaderHandle, 1, &shaderCString, &shaderStringLength);
    glCompileShader(shaderHandle);
    return shaderHandle;
}

void ShaderHelper::linkProgram(std::vector<GLuint *> shaderArray) {
    program = glCreateProgram();
    for (int i = 0; i < shaderArray.size(); i++) {
        glAttachShader(program, *shaderArray[i]);
    }
    glLinkProgram(program);
}

void ShaderHelper::addUniform(uniform_t uniform) {
    std::string name = uniform.name;
    uniforms[name] = uniform;
    // With that step done, we need to assign the location...
    uniforms[name].location = glGetUniformLocation(program, uniforms[name].name.c_str());
}
EDIT: After the suggestions, I ran my code through glGetError(). I fixed one error, but I still get a blank screen. I'm no longer getting 0 as my shader values. I set glClearColor to white, and the screen appears pure white. I adjusted numbers in the modelview and projection matrices, but there's still nothing at all. I disabled face culling, but still nothing. Also, the shaders compile and link fine. So now what?
The dreaded blank screen can be caused by a variety of problems:
Your context is not created correctly or you're not presenting your scene properly. Does changing your clear color to something other than black show anything?
Your transformation matrix is wrong. How are you setting up the position of the "camera"? Are you using something like GLM to set up a matrix? If so, have a look at glm::perspective() and glm::lookAt(). Make sure you're passing the matrix to the shader and that you're using it to set gl_Position in your vertex shader.
The geometry you're trying to display is facing away from the viewer. Try glDisable(GL_CULL_FACE). If it works, reverse the order of your vertices.
An OpenGL call is failing somewhere. Make sure you check glGetError() after every call. I usually have something like the following:
struct gl_checker
{
    ~gl_checker()
    {
        const auto e = glGetError();
        assert(e == GL_NO_ERROR);
    }
};

template <class F>
inline auto gl_call(F f) -> decltype(f())
{
    gl_checker gc;
    return f();
}
which can be used like this:
gl_call([]{ glDisable(GL_CULL_FACE); });
Your shaders fail to compile or link. Have a look at glGetShaderiv() with GL_COMPILE_STATUS and glGetProgramiv() with GL_LINK_STATUS. If they report an error, have a look at glGetShaderInfoLog() and glGetProgramInfoLog(). A sketch of this check follows at the end of this answer.
As for the partial code you provided, I see nothing strictly wrong with it. Providing the shaders and a smaller, complete program might help with finding the problem.
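To expand on that last point, here is a minimal sketch of such a compile-status check; the helper name is my own, and the same shape works for link status with glGetProgramiv(), GL_LINK_STATUS, and glGetProgramInfoLog().

#include <GL/glew.h> // or whichever loader provides the GL 2.0+ entry points
#include <cstdio>
#include <string>

// Returns true if the shader compiled; otherwise prints the driver's
// info log, which usually names the offending line in the shader source.
bool compiledOk(GLuint shader)
{
    GLint status = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
    if (status == GL_TRUE)
        return true;

    GLint logLength = 0;
    glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &logLength);
    std::string log(logLength > 0 ? logLength : 1, '\0');
    glGetShaderInfoLog(shader, (GLsizei)log.size(), nullptr, &log[0]);
    std::fprintf(stderr, "shader compile failed: %s\n", log.c_str());
    return false;
}

Note that a shader handle of 0, as in the question, points at glCreateShader() itself failing rather than at a compile error; that is often a sign that no GL context was current on the calling thread when the shader was created.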

Unity shader: Parser error: syntax error at line 19

I was making a shader script in Unity and I got a syntax error.
I don't see any mistake in there, though.
Does anyone else see the troublemaker?
Shader "Custom/Toon" {
Properties {
...
_OutlineColor ("Outline Color", Color) = (0,0,0,1)
...
}
SubShader {
Pass {
Tags { "LightMode" = "ForwardBase" }
GLSLPROGRAM
...
uniform vec4 _OutlineColor; //Line 19
...
Problem solved.
All I had to do was change "GLSLPROGRAM" and "ENDGLSL" to "CGPROGRAM" and "ENDCG".