Vertex shaders 512 bytes or longer not supported - C++

I'm working on a simple game based on shaders, and as my vertex shader file grew, its size reached 512 bytes and I'm now unable to load it in C++.
I don't think this is a common issue, but I guess it comes from my custom shader loader and not from an OpenGL limitation.
Here is the code of my simple vertex shader (it is supposed to map Cartesian coordinates to spherical ones; its size is 583 bytes):
#version 330 core
uniform mat4 projection;
uniform mat4 camera;
layout (location=0) in vec3 vertex;
layout (location=1) in vec4 color;
in mat4 object;
out vec4 vColor;
void main()
{
float x=sqrt(max(1.0-0.5*pow(vertex.y,2)-0.5*pow(vertex.z,2)+pow(vertex.y*vertex.z,2)/3.0,0.0));
float y=sqrt(max(1.0-0.5*pow(vertex.x,2)-0.5*pow(vertex.z,2)+pow(vertex.x*vertex.z,2)/3.0,0.0));
float z=sqrt(max(1.0-0.5*pow(vertex.x,2)-0.5*pow(vertex.y,2)+pow(vertex.x*vertex.y,2)/3.0,0.0));
gl_Position=projection*camera*object*vec4(x,y,z,1.0);
vColor=color;
}
And the code of the loader:
GLuint vs;
vs=glCreateShader(GL_VERTEX_SHADER);
std::ifstream vertexShaderStream(_vsPath);
vertexShaderStream.seekg(0,std::ios::end);
unsigned int vsSourceLen=(unsigned int)vertexShaderStream.tellg();
vertexShaderStream.seekg(0,std::ios::beg);
char vertexShaderSource[vsSourceLen];
vertexShaderStream.read(vertexShaderSource,vsSourceLen);
vertexShaderStream.close();
const char *vsConstSource(vertexShaderSource);
glShaderSource(vs,1,&vsConstSource,NULL);
glCompileShader(vs);
int status;
glGetShaderiv(vs,GL_COMPILE_STATUS,&status);
if(!status)
{
char log[256];
glGetShaderInfoLog(vs,sizeof log,NULL,log);
std::cout << log << std::endl;
return -1;
}
When I reduce the size to below 512 bytes (2^9...), i.e. 511 bytes or less, it works well.
I'm using GLFW3 to load OpenGL.
Have you ever seen a problem like this?

One issue is that you're reading your shader source into a char[] buffer with read, without ever adding a NUL terminator, and then you call glShaderSource with NULL for the length array, so it will look for a NUL terminator to figure out the length of the string. I would expect that to fail randomly, depending on what happens to be in memory after the string (i.e. whether it luckily appears NUL-terminated because the next byte happens to be 0, or not).
You need to either properly NUL-terminate your string, or pass a pointer to the length of the string as the 4th argument to glShaderSource.
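For example, a minimal sketch of the second option, reusing the variable names from the loader above; passing the length explicitly means glShaderSource never has to search for a NUL terminator:
GLint vsLength = (GLint)vsSourceLen;
const char *vsConstSource = vertexShaderSource;
glShaderSource(vs, 1, &vsConstSource, &vsLength); // length supplied, no NUL terminator required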

Adding std::ifstream::binary as a second parameter to the ifstream constructor solved the issue. I don't exactly understand why the ifstream, which by default treats files as text, stops counting end-of-line characters once the length of the file reaches 512 bytes and adds some random bytes to match the reported size. Anyway, using ifstream is certainly not the best way to load files, but as long as it is just to deal with shaders I would say this is OK (with the appropriate ios::binary flag).
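One plausible explanation (an assumption, not something confirmed in this thread) is that in text mode read() can return fewer bytes than tellg() reported because of newline translation, so the tail of the buffer stays uninitialized. For reference, a sketch of a loader that reads in binary mode into a std::string (needs <fstream>, <string> and <iterator>); c_str() then also yields a NUL-terminated buffer, so passing NULL for the length stays safe:
std::ifstream file(_vsPath, std::ios::binary);
std::string source((std::istreambuf_iterator<char>(file)), std::istreambuf_iterator<char>());
const char *src = source.c_str(); // NUL-terminated copy of the whole file
glShaderSource(vs, 1, &src, NULL);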

Related

Undefined token error when shader file saved in unicode

Why do I get an undefined token error when the shader text file is saved in Unicode? When I save it back as ANSI, the error goes away.
#version 330 core
layout( location = 0 ) in vec2 aPos;
layout( location = 1) in vec2 aTexCoord;
void main()
{
gl_Position = vec4(aPos , 0.0 , 1.0);
};
OpenGL takes a char* array as the source text to compile. A char is usually one byte, so it can hold at most 256 distinct values, which covers ASCII.
If you give it a string that uses a wider character encoding (e.g. UTF-16), it will still interpret the data as a char*, which means the compiler now reads each byte of the wider representation as if it were its own, separate character.
This is of course undefined behaviour and unlikely to work.
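If the file happens to be saved as UTF-8 with a byte-order mark, one workaround (just a sketch under that assumption, where source is a std::string holding the raw file contents) is to strip the three BOM bytes before compiling; a UTF-16 file, however, really has to be re-saved or converted to an 8-bit encoding first:
// Strip a UTF-8 BOM (EF BB BF) if present.
if (source.size() >= 3 &&
    (unsigned char)source[0] == 0xEF &&
    (unsigned char)source[1] == 0xBB &&
    (unsigned char)source[2] == 0xBF)
{
    source.erase(0, 3);
}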

Wrong alignment for floats array

I'm passing a uniform buffer to a compute shader in Vulkan. The buffer contains an array of 49 floating-point numbers (a Gaussian kernel). Everything is fine, but when I read the array in the shader, it gives only 13 values; the others are 0 or garbage, and they correspond to elements 0, 4, 8, etc. of the initial array. I think it's some kind of alignment problem.
Shader layouts are
struct Pixel
{
vec4 value;
};
layout(push_constant) uniform params_t
{
int width;
int height;
} params;
layout(std140, binding = 0) buffer buf
{
Pixel imageData[];
};
layout (binding = 1) uniform sampler2D inputTex;
layout (binding = 2) uniform unf_t
{
float gauss[SAMPLE_SIZE*SAMPLE_SIZE];
};
Could binding 0 be influencing binding 2? And if so, how can I copy the array to the buffer with the needed alignment? Currently I use
vkCmdUpdateBuffer(a_cmdBuff, a_uniform, 0, a_gaussSize, (const uint32_t *)gauss)
Or would it be better to split them across different descriptor sets?
Edit: by expanding the buffer and the array I managed to pass it with an alignment of 16 and everything works, but it looks like a waste of memory. How can I align the floats by 4?
Uniform blocks require that array elements are aligned to vec4 (16 bytes).
To work around this, use an array of vec4 instead: pack the 49 floats into 13 vec4s (52 floats) and then pick the correct component based on index/4 and index%4.
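As a rough sketch of the CPU-side packing (assuming SAMPLE_SIZE is 7, i.e. 49 coefficients, and that gauss is the float array from the question): pad the kernel to a multiple of four floats so it maps onto vec4 gauss[13] in the shader and is read there as gauss[i/4][i%4].
#include <array>
#include <cstring>

constexpr int SAMPLE_SIZE = 7;
constexpr int kCoeffs = SAMPLE_SIZE * SAMPLE_SIZE;   // 49
constexpr int kPadded  = ((kCoeffs + 3) / 4) * 4;    // 52 floats = 13 vec4s

std::array<float, kPadded> packed{};                 // zero-filled padding at the end
std::memcpy(packed.data(), gauss, kCoeffs * sizeof(float));
// Upload packed.data() (kPadded * sizeof(float) bytes) with vkCmdUpdateBuffer as before.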

Get the size of compiled glsl shader uniform parameter from C++ code

I am trying to get the size of a uniform parameter in an already compiled GLSL shader program. I have found some functions that do this for uniforms of built-in types only. But is there a way to do it for uniform parameters with a custom type?
For example:
struct Sphere
{
vec3 position;
float radius;
};
#define SPHERES 10
uniform Sphere spheres[SPHERES];
I'm assuming that your end goal is basically spheres.length, with the result being 10.
The most practical way would be to have that length stored elsewhere, as it isn't possible to change the size after the shader has been compiled anyway.
There's no simple way to get the length of the array, because there isn't any array per se. When compiled, each element of the array (as well as each member of the struct) ends up becoming its own individual uniform, which is evident from the need to do:
glGetUniformLocation(program, "spheres[4].position")
The thing is that if your shader only uses spheres[4].position and spheres[8].position, then all the other spheres[x].position are likely to be optimized away and thus won't exist.
So how do you get the uniform array length?
You could accomplish this by utilizing glGetActiveUniform() together with a regex or sscanf(). Say you want to check how many spheres[x].position are available; then you could do:
GLint count;
glGetProgramiv(program, GL_ACTIVE_UNIFORMS, &count);
const GLsizei NAME_MAX_LENGTH = 64;
GLint location, size;
GLenum type;
GLchar name[NAME_MAX_LENGTH];
GLsizei nameLength;
int index, charsRead;
for (GLint i = 0; i < count; ++i)
{
glGetActiveUniform(program, (GLuint)i, NAME_MAX_LENGTH, &nameLength, &size, &type, name);
if (sscanf(name, "spheres[%d].position%n", &index, &charsRead) && (charsRead == nameLength))
{
// Now we know spheres[index].position is available
}
}
You can additionally compare type to GL_FLOAT or GL_FLOAT_VEC3 to figure out which data type it is.
Remember that if you add an int count and increment it for each match, then even if count is 3 at the end, that doesn't mean elements 0, 1, and 2 are the ones available. It could easily be elements 0, 5, and 8.
Additional notes:
name is a null-terminated string
%n is the number of characters read so far

OpenGL - Calling glBindBufferBase with index = 1 breaks rendering (Pitch black)

There's an array of uniform blocks in my shader which is defined as such:
layout (std140) uniform LightSourceBlock
{
int shadowMapID;
int type;
vec3 position;
vec4 color;
float dist;
vec3 direction;
float cutoffOuter;
float cutoffInner;
float attenuation;
} LightSources[12];
To be able to bind my buffer objects to each LightSource, I've assigned each uniform block index to a binding point:
for(unsigned int i=0;i<12;i++)
glUniformBlockBinding(program,locLightSourceBlock[i],i); // locLightSourceBlock contains the locations of each element in LightSources[]
When rendering, I'm binding my buffers to the respective index using:
glBindBufferBase(GL_UNIFORM_BUFFER,i,buffer);
This works fine as long as I only bind a single buffer to binding index 0. As soon as there are more, everything is pitch black, even things that use entirely different shaders (glGetError returns no errors).
If I change the block indices range from 0-11 to 2-13 (Skipping index 1), everything works as it should. I figured if I use index 1, I'm overwriting something, but I don't have any other uniform blocks in my shader, and I'm not using glUniformBlockBinding or glBindBufferBase anywhere else in my code, so I'm not sure.
What could be causing such behavior? Is the index 1 reserved for something?
1) Don't use multiple blocks. Use one block with an array, something like this:
struct Light {
...
};
layout(std140, binding = 0) uniform lightBuffer {
Light lights[42];
};
Skip glUniformBlockBinding and only call glBindBufferBase with the binding index specified in the shader.
2) Read up on the alignment rules for std140 and std430. In short, buffer variables are aligned so that they don't cross 128-bit boundaries, so in your case position would start at byte 16 (not 8). This results in a mismatch between CPU-side and GPU-side access. (Reorder the variables or add padding.)
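For point 2, a sketch of what a matching CPU-side struct for the std140 layout of LightSourceBlock could look like; the offsets follow from the std140 rules, but it is worth verifying them with glGetActiveUniformsiv and GL_UNIFORM_OFFSET rather than trusting the hand calculation:
#include <cstdint>

struct LightSourceStd140 {
    int32_t shadowMapID;   // offset 0
    int32_t type;          // offset 4
    float   _pad0[2];      // vec3 must start on a 16-byte boundary
    float   position[3];   // offset 16
    float   _pad1;         // vec4 must start on a 16-byte boundary
    float   color[4];      // offset 32
    float   dist;          // offset 48
    float   _pad2[3];      // next vec3 starts on a 16-byte boundary
    float   direction[3];  // offset 64
    float   cutoffOuter;   // offset 76 (fits in the vec3's trailing padding)
    float   cutoffInner;   // offset 80
    float   attenuation;   // offset 84
};
static_assert(sizeof(LightSourceStd140) == 88, "unexpected std140 mirror size");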

QGLShaderProgram::setAttributeArray(0, ...) VERSUS QGLShaderProgram::setAttributeArray("position", ...)

I have a vertex shader:
#version 430
in vec4 position;
void main(void)
{
//gl_Position = position; => works in ALL cases
gl_Position = vec4(0,0,0,1);
}
if I do:
m_program.setAttributeArray(0, m_vertices.constData());
m_program.enableAttributeArray(0);
everything works fine. However, if I do:
m_program.setAttributeArray("position", m_vertices.constData());
m_program.enableAttributeArray("position");
NOTE: m_program.attributeLocation("position"); returns -1.
then I get an empty window.
Qt help pages state:
void QGLShaderProgram::setAttributeArray(int location, const QVector3D * values, int stride = 0)

Sets an array of 3D vertex values on the attribute at location in this shader program. The stride indicates the number of bytes between vertices. A default stride value of zero indicates that the vertices are densely packed in values.

The array will become active when enableAttributeArray() is called on the location. Otherwise the value specified with setAttributeValue() for location will be used.

and

void QGLShaderProgram::setAttributeArray(const char * name, const QVector3D * values, int stride = 0)

This is an overloaded function.

Sets an array of 3D vertex values on the attribute called name in this shader program. The stride indicates the number of bytes between vertices. A default stride value of zero indicates that the vertices are densely packed in values.

The array will become active when enableAttributeArray() is called on name. Otherwise the value specified with setAttributeValue() for name will be used.
So why is it working when using the "int version" and not when using the "const char * version"?
It returns -1 because you commented out the only line in your shader that actually uses position.
This is not an error; it is a consequence of misunderstanding how attribute locations are assigned. Uniforms and attributes are only assigned locations after all shader stages are compiled and linked. If a uniform or attribute is not used in an active code path, it will not be assigned a location, even if you use the variable to do something like this:
#version 130
in vec4 dead_pos; // Location: N/A
in vec4 live_pos; // Location: Probably 0
void main (void)
{
vec4 not_used = dead_pos; // Not used for vertex shader output, so this is dead.
gl_Position = live_pos;
}
It actually goes even farther than this. If something is output from a vertex shader but not used in a geometry, tessellation or fragment shader, then its code path is considered inactive.
Vertex attribute location 0 is implicitly the vertex position, by the way. It is the only vertex attribute that the GLSL spec allows to alias a fixed-function attribute pointer (e.g. glVertexPointer (...) == glVertexAttribPointer (0, ...)).
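As a small illustration (a sketch reusing the names from the question), the name-based path can be made robust by querying the location first and skipping the attribute when the compiler has optimized it away:
int loc = m_program.attributeLocation("position");
if (loc != -1) { // -1 means the attribute is inactive or was optimized away
    m_program.setAttributeArray(loc, m_vertices.constData());
    m_program.enableAttributeArray(loc);
}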