The chunk of code below works correctly as-is. But if I uncomment the call to glGetUniformLocation() it crashes at that line:
GLuint pipeline;
glGenProgramPipelines(1, &pipeline);
glUseProgram(0);
glBindProgramPipeline(pipeline);
GLuint vert_pgm = glCreateShaderProgramv(...);
glUseProgramStages(pipeline, GL_VERTEX_SHADER_BIT, vert_pgm);
GLuint frag_pgm = glCreateShaderProgramv(...);
glUseProgramStages(pipeline, GL_FRAGMENT_SHADER_BIT, frag_pgm);
// If I uncomment the line below, it crashes:
//
// GLint myArg_loc = glGetUniformLocation(frag_pgm, "material_id");
glBindBufferBase(GL_UNIFORM_BUFFER, ubo_index, mp_data->ubo);
glBindVertexArray(vao);
glEnableVertexAttribArray(0);
...
glEnableVertexAttribArray(10);
glBindVertexBuffer(...);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ...);
glDrawElements(GL_TRIANGLES, ...);
The inputs for the fragment shader are declared as follows:
layout( binding = 0, std140, column_major )
uniform uniform_data
{
    transform_data transform;
    material_data  material;
    lighting_data  lighting;
    vec4           select;
    vec4           normal;
} uniforms;

uniform int material_id;

in attribute_data
{
    smooth vec4  pos;
    smooth vec4  colour;
    smooth float radius;
    smooth vec3  tangent;
    smooth vec3  normal;
    flat   int   face_id;
} inputs;
Any ideas why it's crashing?
Is it not legal to use interface blocks and individual uniforms together in the same shader?
Check for: NULL pointer AND recursion.
My wrapper function for glGetUniformLocation was calling itself rather than the function pointer it was supposed to wrap. Ironically, the wrapper function is there for my own protection: it checks that the function pointer isn't null and throws an error if it is.
typedef GLint (*PFN_glGetUniformLocation)(GLuint program, const GLchar* name);

PFN_glGetUniformLocation pfn_glGetUniformLocation;

GLint glGetUniformLocation(GLuint program, const GLchar* name)
{
    // My mistake:
    //   Exists(glGetUniformLocation);
    //   return glGetUniformLocation(program, name);

    // My fix:
    Exists(pfn_glGetUniformLocation);
    return pfn_glGetUniformLocation(program, name);
}
"Exists" throws an error if the function pointer is NULL.
"pfn" is a prefix for "Pointer to a FuNction"
I'm working from the OpenGL SuperBible and using its framework to create my own program. I wanted to do something with an interface block (specifically a uniform block). If I call
glGetActiveUniformsiv(program, 1, uniformIndices, GL_UNIFORM_OFFSET, uniformOffsets);
I get an error, namely GL_INVALID_VALUE.
But if I call the same function with a 0 instead of the 1, the error doesn't occur. I assumed, then, that I have no active uniforms, although I should have three of them.
How do I activate them? Here's my shader:
#version 450 core
layout (location = 0) in vec4 position;
layout (location = 1) in vec4 color;
out vec4 vs_color;
uniform TransformBlock {
    mat4 translation;
    mat4 rotation;
    mat4 projection_matrix;
};

void main(void)
{
    mat4 mvp = projection_matrix * translation * rotation;
    gl_Position = mvp * position;
    vs_color = color;
}
Here is some code from the startup method:
static const GLchar* uniformNames[3] = {
"TransformBlock.translation",
"TransformBlock.rotation",
"TransformBlock.projection_matrix",
};
GLuint uniformIndices[3];
glUseProgram(program);
glGetUniformIndices(program, 3, uniformNames, uniformIndices);
GLint uniformOffsets[3];
GLint matrixStrides[3];
glGetActiveUniformsiv(program, 3, uniformIndices, GL_UNIFORM_OFFSET, uniformOffsets);
glGetActiveUniformsiv(program, 3, uniformIndices, GL_UNIFORM_MATRIX_STRIDE, matrixStrides);
unsigned char* buffer1 = (unsigned char*)malloc(4096);
//fill buffer1 in a for-loop
GLuint block_index = glGetUniformBlockIndex(program, "TransformBlock");
glUniformBlockBinding(program, block_index, 0);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, (GLuint)buffer1);
free(buffer1);
However, as a consequence of the function returning GL_INVALID_VALUE, there's an error in the calls of the form
*((float *)(buffer1 + offset)) = ...
and the whole program crashes. Without adding the offset I don't get an error here, so I think the second error is a consequence of the first.
I think it goes wrong at glGetUniformIndices, because you prefixed your uniform names with TransformBlock. You don't access the uniforms with that prefix in the GLSL code either. If you wanted that, you would have to give the uniform block an instance name; the block name is not relevant for accessing or naming the uniforms at all. It is only used for matching interfaces when you link together multiple shaders that access the same interface block.
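A minimal sketch of the corrected lookup (assuming the block keeps no instance name, so the members are queried by their bare names):

// Members of a named uniform block *without* an instance name are queried by their bare names.
static const GLchar* uniformNames[3] = {
    "translation",
    "rotation",
    "projection_matrix",
};
GLuint uniformIndices[3];
glGetUniformIndices(program, 3, uniformNames, uniformIndices);
// With valid names, none of the indices should be GL_INVALID_INDEX, and the
// subsequent glGetActiveUniformsiv calls should no longer raise GL_INVALID_VALUE.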
I am currently trying to render text in OpenGL using bitmap files. When it's by itself, the font looks as expected.
Exhibit A:
When adding a separate texture (a picture) in a separate VAO OR more text in the same VAO, "This engine can render text!" still looks the same.
However, when adding both the texture in a separate VAO AND more text in the same VAO, the texture of "This engine can render text!" gets modified.
Exhibit B:
What's really strange to me is that the textures seem to be blended, and that it only affects a few vertices rather than the entire VBO.
Is this a problem with OpenGL/poor drivers, or is it something else? I double-checked the vertices, and the 'his' isn't being rendered while the picture texture is active. I am using OSX, which is notorious for poor OpenGL support, if that helps.
My rendering loop:
//scene is just a class that bundles all the necessary information for rendering.
//when rendering text, it is all batched inside of one scene,
//so independent textures of text characters should be impossible
glUseProgram(prgmid);
for(auto& it : scene->getTextures() )
{
    //load textures
    const Texture::Data* data = static_cast<const Texture::Data*>(it->getData() );
    glActiveTexture(GL_TEXTURE0 + it->getID() );
    glBindTexture(GL_TEXTURE_2D, it->getID() );
    glUniform1i(glGetUniformLocation(prgmid, data->name), it->getID() );
}
for(auto& it : scene->getUniforms() )
{
    processUniforms(scene, it, prgmid);
}
glBindVertexArray(scene->getMesh()->getVAO() );
glDrawElements(GL_TRIANGLES, scene->getMesh()->getDrawCount(), GL_UNSIGNED_INT, 0);
glBindVertexArray(0);
Shaders of the text:
//Vertex
#version 330 core
layout (location = 0) in vec4 pos;
layout (location = 1) in vec2 fontTexCoord;
out vec2 fontTexCoords;
uniform mat4 __projection;
void main()
{
fontTexCoords = vec2(fontTexCoord.x, fontTexCoord.y);
gl_Position = __projection * pos;
}
//Frag
#version 330 core
in vec2 fontTexCoords;
out vec4 color;
uniform sampler2D fontbmp;
void main()
{
color = texture(fontbmp, fontTexCoords);
if(color.rgb == vec3(0.0, 0.0, 0.0) ) discard;
}
Shaders of the picture:
//vert
#version 330 core
layout (location = 0) in vec4 pos;
layout (location = 1) in vec2 texCoord;
out vec2 TexCoords;
uniform mat4 __projection;
uniform float __spriteFrameRatio;
uniform float __spriteFramePos;
uniform float __flipXMult;
uniform float __flipYMult;
void main()
{
TexCoords = vec2(((texCoord.x + __spriteFramePos) * __spriteFrameRatio) * __flipXMult, texCoord.y * __flipYMult);
gl_Position = __projection * pos;
}
//frag
#version 330 core
in vec2 TexCoords;
out vec4 color;
uniform sampler2D __image;
uniform vec4 __spriteColor;
uniform bool __is_texture;
void main()
{
if(__is_texture)
{
color = __spriteColor * texture(__image, TexCoords);
}
else
{
color = __spriteColor;
}
}
EDIT:
I believe the code that is causing the problem has to do with generating the buffers. It's called every time a scene (VAO, VBO, EBO, texture) object is rendered.
if(!REALLOCATE_BUFFER && !ATTRIBUTE_ADDED) return;
glBindVertexArray(_vao);
glBindBuffer(GL_ARRAY_BUFFER, _vbo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _ebo);
if(REALLOCATE_BUFFER)
{
    size_t vsize = _vert.size() * sizeof(decltype(_vert)::value_type);
    size_t isize = _indc.size() * sizeof(decltype(_indc)::value_type);
    if(_prevsize != vsize)
    {
        _prevsize = vsize;
        glBufferData(GL_ARRAY_BUFFER, vsize, &_vert[0], _mode);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, isize, &_indc[0], _mode);
    }
    else
    {
        glBufferSubData(GL_ARRAY_BUFFER, 0, vsize, &_vert[0]);
        glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0, isize, &_indc[0]);
    }
}
if(ATTRIBUTE_ADDED)
{
    for(auto& itt : _attrib)
    {
        glVertexAttribPointer(itt.index, itt.size, GL_FLOAT, itt.normalized, _currstride * sizeof(GLfloat), (GLvoid*)(itt.pointer * sizeof(GLfloat) ) );
        glEnableVertexAttribArray(itt.index);
    }
}
glBindVertexArray(0);
When we comment out glBufferSubData so that glBufferData is always called, the problem area flickers and iterates through all textures, including the ones in other VAOs.
EDIT 2:
For some reason, everything works as expected when the text is rendered with a different mode than the picture, say GL_STREAM_DRAW and GL_DYNAMIC_DRAW, for instance. How can this be?
So the thing I messed up was that getDrawCount() was returning the number of VERTICES rather than the number of indices. Astonishingly, this didn't cause OpenGL to throw any errors, and fixing it solved the flickering.
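In other words, the count handed to glDrawElements has to be the number of indices. A minimal sketch of the fix, assuming the mesh stores its index data in a container like the _indc vector from the buffer code above (the class and member names are placeholders):

// Hypothetical accessor: glDrawElements expects the number of indices to draw,
// not the number of vertices in the vertex buffer.
GLsizei Mesh::getDrawCount() const
{
    return static_cast<GLsizei>(_indc.size());
}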
This code throws an exception in nvoglv32.dll. I think there is an error in glShaderSource somewhere, but I can't find it.
ifstream ifs("vertexShader.txt");
string vertexShadersSource((istreambuf_iterator<char>(ifs)),
(std::istreambuf_iterator<char>()));
ifs.close();
ifs.open("fragmentShader.txt");
string fragmentShadersSource((istreambuf_iterator<char>(ifs)),
(std::istreambuf_iterator<char>()));
cout << fragmentShadersSource.c_str() << endl;
cout << vertexShadersSource.c_str() << endl;
GLuint shaderProgram;
GLuint fragmentShader, vertexShader;
vertexShader = glCreateShader(GL_VERTEX_SHADER);
fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
const char *data = vertexShadersSource.c_str();
glShaderSource(vertexShader, 1, &data, (GLint*)vertexShadersSource.size());
data = fragmentShadersSource.c_str();
glShaderSource(fragmentShader, 1, &data, (GLint*)fragmentShadersSource.size());
EDIT:
Although I think the shaders are correct, here is the shader code.
VertexShader:
#version 150
// in_Position was bound to attribute index 0 and in_Color was bound to attribute index 1
//in vec2 in_Position;
//in vec3 in_Color;
// We output the ex_Color variable to the next shader in the chain
out vec3 ex_Color;
void main(void) {
// Since we are using flat lines, our input only had two points: x and y.
// Set the Z coordinate to 0 and W coordinate to 1
//gl_Position = vec4(in_Position.x, in_Position.y, 0.0, 1.0);
// GLSL allows shorthand use of vectors too, the following is also valid:
// gl_Position = vec4(in_Position, 0.0, 1.0);
// We're simply passing the color through unmodified
ex_Color = vec3(1.0, 1.0, 0.0);
}
FragmentShader:
#version 150
// It was expressed that some drivers required this next line to function properly
precision highp float;
in vec3 ex_Color;
out vec4 gl_FragColor;
void main(void) {
// Pass through our original color with full opacity.
gl_FragColor = vec4(ex_Color,1.0);
}
Your call is wrong:
glShaderSource(vertexShader, 1, &data, (GLint*)vertexShadersSource.size());
The cast from a size_t to a pointer type should have raised some red flags when you wrote it.
glShaderSource() expects a pointer to an array of string lengths, one element per separate string. Since you only pass 1 string, it will try to access length[0]. This means it treats your string size as an address, and that address is very likely not to belong to your process.
Since you already use null-terminated C strings, you can simply pass NULL as the length parameter. Or, if you absolutely want to use it, you just have to pass a pointer to a GLint:
GLint len=vertexShadersSource.size();
glShaderSource(..., 1, ..., &len);
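For completeness, a sketch of the two corrected calls using the simpler NULL variant (variable names taken from the question):

// With a NULL length array, GL reads each string up to its terminating '\0'.
const char *vdata = vertexShadersSource.c_str();
glShaderSource(vertexShader, 1, &vdata, NULL);

const char *fdata = fragmentShadersSource.c_str();
glShaderSource(fragmentShader, 1, &fdata, NULL);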
This is the stack trace ..
com.jogamp.opengl.GLException: Thread[AWT-EventQueue-0,6,main] glGetError() returned the following error codes after a call to glEnableVertexAttribArray(<int> 0xFFFFFFFF): GL_INVALID_VALUE ( 1281 0x501),
at com.jogamp.opengl.DebugGL4bc.writeGLError(DebugGL4bc.java:30672)
at com.jogamp.opengl.DebugGL4bc.glEnableVertexAttribArray(DebugGL4bc.java:4921)
In object's draw() ..
float[] color = {1.0f, 0.0f, 0.0f, 1.0f};
// enable glsl
gl2.glUseProgram(shaderProgram);
// enable alpha
gl2.glEnable(GL.GL_BLEND);
gl2.glBlendFunc(GL.GL_SRC_ALPHA, GL.GL_ONE_MINUS_SRC_ALPHA);
// Set color for drawing
setmColorHandle(gl2.glGetUniformLocation(shaderProgram, "vColor"));
gl2.glUniform4fv(getmColorHandle(), 1, color, 0);
// get handle to vertex shader's vPosition member
mPositionHandle = gl2.glGetAttribLocation(shaderProgram, "vPosition");
// Enable a handle to the triangle vertices
gl2.glEnableVertexAttribArray(mPositionHandle);
vertex shader ..
#version 120
uniform mat4 uMVPMatrix;
attribute vec4 vPosition;
void main() {
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
fragment shader ..
#version 120
uniform vec4 vColor;
void main() {
gl_FragColor = vColor;
}
Inside your vertex shader you don't use the vPosition attribute anywhere, so the driver will most likely optimize it away when compiling. That means glGetAttribLocation returns -1 (the 0xFFFFFFFF in your error message), and passing that to glEnableVertexAttribArray fails with GL_INVALID_VALUE. Change your vertex shader to actually use the uMVPMatrix and vPosition variables that you are declaring, i.e.:
uniform mat4 uMVPMatrix;
attribute vec4 vPosition;
void main() {
gl_Position = uMVPMatrix * vPosition;
}
Make sure you actually pass in a value for uMVPMatrix (it's not clear whether you do so later in your code).
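As a defensive measure, it can also help to verify the location before enabling the array; a minimal C-style sketch (my own addition; the same check works with the corresponding gl2.* calls in JOGL):

// An attribute that was optimized away (or whose name is misspelled) reports location -1.
GLint positionLoc = glGetAttribLocation(shaderProgram, "vPosition");
if (positionLoc < 0) {
    // handle the error instead of passing -1 (0xFFFFFFFF) to glEnableVertexAttribArray
} else {
    glEnableVertexAttribArray((GLuint)positionLoc);
}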
Once again I'm turning to your considerable help with a problem. This is my code:
float ctex[] = {0.0,0.0,
0.0,1.0,
1.0,1.0,
1.0,0.0};
float data[] = {1.0, 1.0,-5.0,
-1.0,-1.0,-5.0,
1.0,-1.0,-5.0,
-1.0, 1.0,-5.0};
GLuint ind[] = {0,1,2,0,3,1};
LoadTexture();
glGenBuffers(1,&triangleVBO);
glBindBuffer(GL_ARRAY_BUFFER,triangleVBO);
glBufferData(GL_ARRAY_BUFFER,sizeof(data),data,GL_STATIC_DRAW);
glGenBuffers(1,&triangleIND);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,triangleIND);
glBufferData(GL_ELEMENT_ARRAY_BUFFER,sizeof(ind),ind,GL_STATIC_DRAW);
glVertexAttribPointer(0,3,GL_FLOAT,GL_FALSE,0,0);
glGenBuffers(1,&triangleT[0]);
glBindBuffer(GL_ARRAY_BUFFER,triangleT[0]);
glBufferData(GL_ARRAY_BUFFER,sizeof(ctex),ctex,GL_STATIC_DRAW);
glVertexAttribPointer(1,2,GL_FLOAT,GL_FALSE,0,0);
GLuint v,f,p;
v = glCreateShader(GL_VERTEX_SHADER);
f = glCreateShader(GL_FRAGMENT_SHADER);
p = glCreateProgram();
char *vsFuente = LeeShader("shaders/shader.vert");
char *fsFuente = LeeShader("shaders/shader.frag");
const char *vs = vsFuente;
const char *fs = fsFuente;
glShaderSource(v,1,&vs,NULL);
glShaderSource(f,1,&fs,NULL);
free(vsFuente);free(fsFuente);
glCompileShader(v);
glCompileShader(f);
glAttachShader(p,v);
glAttachShader(p,f);
glLinkProgram(p);
//main loop
while(1){
    ...etc.
    glUseProgram(p);
    glEnableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER,triangleVBO);
    glDrawElements(GL_TRIANGLES,6,GL_UNSIGNED_INT,0);
    glDisableVertexAttribArray(0);
    glEnableVertexAttribArray(1);
    glBindBuffer(GL_ARRAY_BUFFER,triangleTex);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D,idtextere);
    glEnable(GL_TEXTURE_2D);
    glDisableVertexAttribArray(1);
    glUseProgram(0);
    ...etc.
}
This is my Vertex Shader:
void main(){
gl_TexCoord[0] = gl_MultiTexCoord0;
gl_Position = ftransform();
}
And this my Fragment Shader:
uniform sampler2D tex;
void main(){
vec4 color = texture2D(tex,gl_TexCoord[0].st);
gl_FragColor = color;
}
The problem is that the texture does not appear, nor does anything else. Can you tell me what the problem is?
Thank you very much in advance.
You are using the old fixed-function attributes in your shader (like gl_MultiTexCoord0, or gl_Vertex as used by ftransform). But in your application code you try to feed them through the generic attribute interface (glVertexAttribPointer and glEnableVertexAttribArray). This won't work (it might happen to work for attribute 0, which aliases gl_Vertex on some implementations, but that's counter-intuitive and not something to rely on).
There are two ways to fix this. Either don't use the generic attribute API but the old fixed-function attribute functions: replace glVertexAttribPointer(0, ...) with glVertexPointer, glVertexAttribPointer(1, ...) with glTexCoordPointer, and likewise glEnableVertexAttribArray with glEnableClientState(GL_VERTEX_ARRAY) and glEnableClientState(GL_TEXTURE_COORD_ARRAY), as sketched below.
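A minimal sketch of that first option, reusing the buffer names from the question and keeping the shaders unchanged:

// Positions: 3 floats per vertex, tightly packed, feeding gl_Vertex / ftransform().
glBindBuffer(GL_ARRAY_BUFFER, triangleVBO);
glVertexPointer(3, GL_FLOAT, 0, 0);
glEnableClientState(GL_VERTEX_ARRAY);

// Texture coordinates: 2 floats per vertex, feeding gl_MultiTexCoord0.
glBindBuffer(GL_ARRAY_BUFFER, triangleT[0]);
glTexCoordPointer(2, GL_FLOAT, 0, 0);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, triangleIND);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);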
Or, the more modern and future-proof approach: drop the use of the old fixed-function inputs inside your shaders and feed your attributes in as generic attributes:
attribute vec4 position;
attribute vec2 texCoord;
void main() {
gl_TexCoord[0] = vec4(texCoord, 0.0, 0.0);
gl_Position = gl_ModelViewProjectionMatrix * position;
}
And don't forget to call glBindAttribLocation(p, 0, "position") and glBindAttribLocation(p, 1, "texCoord") before linking the program in order to assign the attribute indices to the correct attributes.
But the second approach, even if preferred, might be a bit too heavy a change for you right now, since it should really be accompanied by dropping every other use of the old fixed-function stuff inside your shaders as well, like the modelview-projection matrix (which should become a custom uniform) and the gl_TexCoord[0] varying (which should become a custom varying).
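For reference, a sketch of what that fully generic version might look like; the uniform, attribute, and varying names here are my own placeholders, not something your code already defines:

// Vertex shader: custom mvp uniform and custom varying instead of fixed-function state.
const char *vsModern =
    "uniform mat4 mvp;\n"
    "attribute vec4 position;\n"
    "attribute vec2 texCoord;\n"
    "varying vec2 uv;\n"
    "void main() {\n"
    "    uv = texCoord;\n"
    "    gl_Position = mvp * position;\n"
    "}\n";

// Fragment shader: reads the custom varying instead of gl_TexCoord[0].
const char *fsModern =
    "uniform sampler2D tex;\n"
    "varying vec2 uv;\n"
    "void main() {\n"
    "    gl_FragColor = texture2D(tex, uv);\n"
    "}\n";

// Bind the generic attribute indices used by glVertexAttribPointer(0, ...) and (1, ...)
// before linking, then supply the matrix yourself once the program is in use.
glBindAttribLocation(p, 0, "position");
glBindAttribLocation(p, 1, "texCoord");
glLinkProgram(p);

glUseProgram(p);
GLint mvpLoc = glGetUniformLocation(p, "mvp");
glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, modelViewProjection); // modelViewProjection: a float[16] you compute yourself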