Undefined token error when shader file is saved in Unicode - OpenGL

Why do I get an "undefined token" error when the shader text file is saved in Unicode? When I save it back in ANSI, the error goes away.
#version 330 core
layout (location = 0) in vec2 aPos;
layout (location = 1) in vec2 aTexCoord;

void main()
{
    gl_Position = vec4(aPos, 0.0, 1.0);
}

glShaderSource takes the source as an array of GLchar (plain char) pointers. A char is a single byte, so each element can hold at most 256 distinct values, and the GLSL compiler expects plain single-byte text (ASCII).
If you give it a file saved in a wider encoding such as UTF-16, every character occupies two bytes (plus a byte-order mark at the start of the file), and the compiler reads each of those bytes as if it were its own, unique character. The very first thing it sees is the BOM, which is not a valid GLSL token.
That is where the "undefined token" error comes from; saving the file as ANSI (single-byte) text makes the bytes line up with what the compiler expects.
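If you cannot control how the files are saved, you can detect the encoding when loading. A minimal sketch, assuming a plain binary loader (the helper name loadShaderSource is made up for illustration):

#include <fstream>
#include <iterator>
#include <stdexcept>
#include <string>

// Hypothetical helper: reads a shader file as raw bytes, rejects UTF-16
// files outright, and strips a UTF-8 BOM if one is present.
std::string loadShaderSource(const char *path)
{
    std::ifstream file(path, std::ios::binary);
    std::string src((std::istreambuf_iterator<char>(file)),
                    std::istreambuf_iterator<char>());

    // UTF-16 BOMs: FF FE (little-endian) or FE FF (big-endian).
    if (src.size() >= 2 &&
        (((unsigned char)src[0] == 0xFF && (unsigned char)src[1] == 0xFE) ||
         ((unsigned char)src[0] == 0xFE && (unsigned char)src[1] == 0xFF)))
        throw std::runtime_error("shader file is UTF-16; save it as ANSI/UTF-8");

    // UTF-8 BOM: EF BB BF -- invisible in most editors, but garbage to
    // many GLSL compilers, so drop it.
    if (src.size() >= 3 && (unsigned char)src[0] == 0xEF &&
        (unsigned char)src[1] == 0xBB && (unsigned char)src[2] == 0xBF)
        src.erase(0, 3);

    return src;
}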

Related

When do I need GL_EXT_nonuniform_qualifier?

I want to compile the following code to SPIR-V:
#version 450 core

#define BATCH_ID (PushConstants.Indices.x >> 16)
#define MATERIAL_ID (PushConstants.Indices.x & 0xFFFF)

layout (push_constant) uniform constants {
    ivec2 Indices;
} PushConstants;

layout (constant_id = 1) const int MATERIAL_SIZE = 32;

in Vertex_Fragment {
    layout(location = 0) vec4 VertexColor;
    layout(location = 1) vec2 TexCoord;
} inData;

struct ParameterFrequence_3 {
    int ColorMap;
};

layout (set = 3, binding = 0, std140) uniform ParameterFrequence_3 {
    ParameterFrequence_3[MATERIAL_SIZE] data;
} Frequence_3;

layout (location = 0) out vec4 out_Color;

layout (set = 2, binding = 0) uniform sampler2D[] Sampler2DResources;

void main(void) {
    vec4 color = vec4(1.0);
    color *= texture(Sampler2DResources[Frequence_3.data[MATERIAL_ID].ColorMap], inData.TexCoord);
    color *= inData.VertexColor;
    out_Color = color;
}
(The code is generated by a program I am developing which is why the code might look a little strange, but it should make the problem clear)
When trying to do so, I am told
error: 'variable index' : required extension not requested: GL_EXT_nonuniform_qualifier
(for the line near the end where the texture lookup happens)
After following a lot of discussion around how "dynamically uniform" is specified, and how the shading language spec basically says the scope is defined by the API while neither OpenGL nor Vulkan really do so (maybe that has changed), I am confused about why I get that error.
Initially I wanted to use instanced vertex attributes for the indices; those, however, are not dynamically uniform, which is what I thought the PushConstants would be.
So if PushConstants are constant during the draw call (which is the largest scope the dynamically-uniform requirement can demand), how can the above shader end up in any dynamically non-uniform state?
Edit: Does it have to do with the fact that the buffer backing the storage for "ColorMap" could be aliased by another buffer through which the contents might be modified during the invocation? Or is there a way to tell the compiler this storage is "restrict"ed, so it knows it is constant?
Thanks!
It is 3 a.m. over here, I should just go to sleep.
The chances that anyone ends up having the same problem are small, but I'd still rather answer it myself than delete the question:
I simply had to add a specialization constant to set the size of the sampler2D array; now it works without requiring any extension.
Good night
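For anyone landing here: the trick is to declare the array with a specialization-constant size instead of leaving it unsized, so the index stays within a statically sized array and no indexing extension is needed. A hedged sketch of both sides (the constant_id of 2 and the count of 16 are illustrative, not from the question):

// GLSL side: a sized array needs no GL_EXT_nonuniform_qualifier.
//   layout (constant_id = 2) const int SAMPLER_COUNT = 16;
//   layout (set = 2, binding = 0) uniform sampler2D Sampler2DResources[SAMPLER_COUNT];

#include <vulkan/vulkan.h>
#include <cstdint>

// Fills out the specialization data for constant_id = 2; both out-parameters
// must stay alive until vkCreateGraphicsPipelines has been called.
VkSpecializationInfo makeSamplerCountSpecialization(const int32_t &samplerCount,
                                                    VkSpecializationMapEntry &entry)
{
    entry = {};
    entry.constantID = 2;                 // matches constant_id in the shader
    entry.offset = 0;
    entry.size = sizeof(samplerCount);

    VkSpecializationInfo info{};
    info.mapEntryCount = 1;
    info.pMapEntries = &entry;
    info.dataSize = sizeof(samplerCount);
    info.pData = &samplerCount;           // consumed via VkPipelineShaderStageCreateInfo::pSpecializationInfo
    return info;
}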

512 bytes length vertex shaders not supported

I'm working on a simple game based on shaders, and as my vertex shader file grew, its size reached 512 bytes and I'm now unable to load it in C++.
I don't think this is a common issue, but I guess it comes from my custom shader loader rather than from OpenGL limitations.
Here is the code of my simple vertex shader (it is supposed to map Cartesian coordinates to spherical ones; its size is 583 bytes):
#version 330 core

uniform mat4 projection;
uniform mat4 camera;

layout (location = 0) in vec3 vertex;
layout (location = 1) in vec4 color;
in mat4 object;

out vec4 vColor;

void main()
{
    float x = sqrt(max(1.0 - 0.5*pow(vertex.y,2) - 0.5*pow(vertex.z,2) + pow(vertex.y*vertex.z,2)/3.0, 0.0));
    float y = sqrt(max(1.0 - 0.5*pow(vertex.x,2) - 0.5*pow(vertex.z,2) + pow(vertex.x*vertex.z,2)/3.0, 0.0));
    float z = sqrt(max(1.0 - 0.5*pow(vertex.x,2) - 0.5*pow(vertex.y,2) + pow(vertex.x*vertex.y,2)/3.0, 0.0));
    gl_Position = projection * camera * object * vec4(x, y, z, 1.0);
    vColor = color;
}
And the code of the loader:
GLuint vs;
vs = glCreateShader(GL_VERTEX_SHADER);
std::ifstream vertexShaderStream(_vsPath);
vertexShaderStream.seekg(0, std::ios::end);
unsigned int vsSourceLen = (unsigned int)vertexShaderStream.tellg();
vertexShaderStream.seekg(0, std::ios::beg);
char vertexShaderSource[vsSourceLen];
vertexShaderStream.read(vertexShaderSource, vsSourceLen);
vertexShaderStream.close();
const char *vsConstSource(vertexShaderSource);
glShaderSource(vs, 1, &vsConstSource, NULL);
glCompileShader(vs);
int status;
glGetShaderiv(vs, GL_COMPILE_STATUS, &status);
if (!status)
{
    char log[256];
    glGetShaderInfoLog(vs, sizeof log, NULL, log);
    std::cout << log << std::endl;
    return -1;
}
When I reduce the size below 512 bytes (2^9...), i.e. to 511 bytes or less, it works well.
I'm using GLFW3 to load OpenGL.
Have you ever seen a problem like this?
One issue is that you're reading your shader source into a char[] buffer with read, without ever adding a NUL terminator, and then you call glShaderSource with NULL for the length vector, so it will look for a NUL terminator to figure out the length of the string. I would expect that to fail more or less randomly, depending on what happens to be in memory after the string (whether it luckily appears NUL-terminated because the next byte happens to be 0, or not).
You need to either properly NUL-terminate your string, or pass a pointer to the length of the string as the 4th argument to glShaderSource.
Adding std::ifstream::binary as a second parameter to the ifstream constructor solved the issue. The reason the length was wrong: in text mode (the default, on Windows), every "\r\n" in the file is translated to a single '\n' during read, so read() returns fewer bytes than the tellg-based file size reported, and the tail of the buffer stays uninitialized; combined with the missing NUL terminator above, that made the failure appear size-dependent. Opening the stream with ios::binary disables the translation so the byte counts match. Using ifstream is certainly not the best way to load files, but as far as it is just to deal with shaders I would say this is OK (with the appropriate ios::binary flag).
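Putting both fixes together, a corrected loader might look like this sketch (assuming an OpenGL loader header is already included; reading into a std::string gives us both the bytes and the length):

#include <fstream>
#include <iostream>
#include <iterator>
#include <string>

GLuint loadVertexShader(const char *path)
{
    // Binary mode: no CRLF translation, so the byte count matches the file.
    std::ifstream stream(path, std::ios::binary);
    std::string source((std::istreambuf_iterator<char>(stream)),
                       std::istreambuf_iterator<char>());

    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    const char *src = source.c_str();
    GLint len = (GLint)source.size();
    glShaderSource(vs, 1, &src, &len); // explicit length: no NUL terminator needed
    glCompileShader(vs);

    GLint status;
    glGetShaderiv(vs, GL_COMPILE_STATUS, &status);
    if (!status)
    {
        char log[256];
        glGetShaderInfoLog(vs, sizeof log, NULL, log);
        std::cout << log << std::endl;
    }
    return vs;
}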

OpenGL - Calling glBindBufferBase with index = 1 breaks rendering (Pitch black)

There's an array of uniform blocks in my shader which is defined as such:
layout (std140) uniform LightSourceBlock
{
    int shadowMapID;
    int type;
    vec3 position;
    vec4 color;
    float dist;
    vec3 direction;
    float cutoffOuter;
    float cutoffInner;
    float attenuation;
} LightSources[12];
To be able to bind my buffer objects to each LightSource, I've bound each uniform to a uniform block index:
for (unsigned int i = 0; i < 12; i++)
    glUniformBlockBinding(program, locLightSourceBlock[i], i); // locLightSourceBlock contains the block indices of each element in LightSources[]
When rendering, I'm binding my buffers to the respective index using:
glBindBufferBase(GL_UNIFORM_BUFFER,i,buffer);
This works fine as long as I only bind a single buffer to binding index 0. As soon as there are more, everything is pitch black, even things that use entirely different shaders. (glGetError returns no errors.)
If I change the block indices' range from 0-11 to 2-13 (skipping index 1), everything works as it should. I figured that by using index 1 I was overwriting something, but I don't have any other uniform blocks in my shader, and I'm not using glUniformBlockBinding or glBindBufferBase anywhere else in my code, so I'm not sure what.
What could be causing such behavior? Is the index 1 reserved for something?
1) Don't use multiple blocks. Use one block with an array, something like this:

struct Light {
    ...
};

layout (std140, binding = 0) uniform lightBuffer {
    Light lights[42];
};

Skip glUniformBlockBinding and only call glBindBufferBase with the binding index specified in the shader.
2) Read up on the alignment rules for std140 (and std430, which is only available for shader storage blocks). In short, block members are aligned so they don't cross 128-bit (16-byte) boundaries; in your case, position starts at byte 16 (not 8). This results in a mismatch between the CPU-side and GPU-side layouts. (Reorder the variables or add explicit padding.)
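To make point 2 concrete, here is what a CPU-side mirror of the question's LightSourceBlock could look like under std140 rules; this is a sketch with hand-computed offsets, so verify them against your driver with glGetActiveUniformsiv (GL_UNIFORM_OFFSET) before relying on it:

#include <cstdint>

// std140 mirror of LightSourceBlock; each vec3/vec4 is 16-byte aligned,
// which is why explicit padding is needed after the two leading ints.
struct LightSourceStd140 {
    int32_t shadowMapID;   // offset  0
    int32_t type;          // offset  4
    float   _pad0[2];      // pad so the vec3 below lands on a 16-byte boundary
    float   position[3];   // offset 16
    float   _pad1;         // pad so the vec4 below lands on a 16-byte boundary
    float   color[4];      // offset 32
    float   dist;          // offset 48
    float   _pad2[3];      // pad: the next vec3 must start at offset 64
    float   direction[3];  // offset 64
    float   cutoffOuter;   // offset 76 (a lone float may follow a vec3 directly)
    float   cutoffInner;   // offset 80
    float   attenuation;   // offset 84
};
static_assert(sizeof(LightSourceStd140) == 88, "layout must match std140");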

QGLShaderProgram::setAttributeArray(0, ...) VERSUS QGLShaderProgram::setAttributeArray("position", ...)

I have a vertex shader:
#version 430

in vec4 position;

void main(void)
{
    //gl_Position = position; => works in ALL cases
    gl_Position = vec4(0, 0, 0, 1);
}
if I do:
m_program.setAttributeArray(0, m_vertices.constData());
m_program.enableAttributeArray(0);
everything works fine. However, if I do:
m_program.setAttributeArray("position", m_vertices.constData());
m_program.enableAttributeArray("position");
NOTE: m_program.attributeLocation("position"); returns -1.
then, I get an empty window.
Qt help pages state:

void QGLShaderProgram::setAttributeArray(int location, const QVector3D *values, int stride = 0)

Sets an array of 3D vertex values on the attribute at location in this shader program. The stride indicates the number of bytes between vertices. A default stride value of zero indicates that the vertices are densely packed in values.
The array will become active when enableAttributeArray() is called on the location. Otherwise the value specified with setAttributeValue() for location will be used.

and

void QGLShaderProgram::setAttributeArray(const char *name, const QVector3D *values, int stride = 0)

This is an overloaded function.
Sets an array of 3D vertex values on the attribute called name in this shader program. The stride indicates the number of bytes between vertices. A default stride value of zero indicates that the vertices are densely packed in values.
The array will become active when enableAttributeArray() is called on name. Otherwise the value specified with setAttributeValue() for name will be used.
So why does it work when using the "int version" and not when using the "const char * version"?
It returns -1 because you commented out the only line in your shader that actually uses position.
This is not an error; it is a consequence of a misunderstanding of how attribute locations are assigned. Uniforms and attributes are only assigned locations after all shader stages are compiled and linked. If a uniform or attribute is not used in an active code path, it will not be assigned a location. This holds even if you use the variable to do something like this:
#version 130

in vec4 dead_pos; // Location: N/A
in vec4 live_pos; // Location: Probably 0

void main (void)
{
    vec4 not_used = dead_pos; // Not used for vertex shader output, so this is dead.
    gl_Position = live_pos;
}
It actually goes even further than this. If something is output from a vertex shader but not used in the geometry, tessellation, or fragment shader, then its code path is considered inactive.
Vertex attribute location 0 is implicitly the vertex position, by the way. It is the only vertex attribute the GLSL spec allows to alias a fixed-function pointer (e.g. glVertexPointer (...) behaves like glVertexAttribPointer (0, ...)).
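In practice this means the name-based Qt overloads only work for attributes that survive linking. A defensive pattern with the question's QGLShaderProgram members (a sketch, not the only fix) is to resolve the name once and check for -1:

// Resolve the attribute by name after linking; -1 means the linker
// considered it inactive and optimized it out.
int loc = m_program.attributeLocation("position");
if (loc == -1) {
    qWarning("attribute 'position' is inactive; is it used in the shader?");
} else {
    m_program.setAttributeArray(loc, m_vertices.constData());
    m_program.enableAttributeArray(loc);
}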

GLSL, adding a single line makes nothing draw

I have the following GLSL vertex shader:
attribute vec3 a_vTangent;
attribute vec3 a_vBinormal;
attribute vec2 a_vCustomParams;
varying vec3 i_vTangent;
varying vec3 i_vBinormal;
varying vec4 i_vColor;
varying vec2 i_vCustomParams;
void main()
{
i_vTangent = a_vTangent;
i_vBinormal = a_vBinormal;
i_vColor = gl_Color;
//i_vCustomParams = a_vCustomParams;
gl_Position = gl_Vertex;
}
If I uncomment the line i_vCustomParams = a_vCustomParams;, nothing rendered with this shader draws anymore, with no GL errors and no shader compile or link errors. This surprises me, as the geometry shader doesn't even use i_vCustomParams yet; furthermore, the other two vertex attributes (a_vTangent and a_vBinormal) work perfectly fine.
I know it's correct, but here is my vertex setup anyway:
layout = new VertexLayout(new VertexElement[]
{
    new VertexPosition(VertexPointerType.Short, 3),
    new VertexAttribute(VertexAttribPointerType.UnsignedByte, 2, 3), // 2 elements, location 3
    new VertexColor(ColorPointerType.UnsignedByte, 4),
    new VertexAttribute(VertexAttribPointerType.Byte, 4, 1),         // 4 elements, location 1
    new VertexAttribute(VertexAttribPointerType.Byte, 4, 2),         // 4 elements, location 2
});
for this vertex struct:
struct CubeVertex : IVertex
{
    public ushort3 Position;
    public byte2 Custom;
    public byte4 Color;
    public sbyte4 Tangent;
    public sbyte4 Binormal;
}
Any ideas why this happens?
When you uncomment the line, your vertex shader starts to consider the custom attribute needed (because its output now depends on that attribute), and hence the shader program treats the attribute as used. This, in consequence, may change the assignment of the input attributes to indices (if you are not forcing the locations by calling glBindAttribLocation). You end up passing your data into the wrong attribute slots, resulting in trashy (or empty) output.
Solution:
Either force the attribute locations before linking the program, or query GL for the auto-assigned locations before passing your data.
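Both options in code, using the attribute names from the shader above (a sketch; the forced location numbers mirror the ones in the vertex layout, and program is assumed to be the shader program handle):

// Option 1: force the locations to match the vertex layout, then (re)link.
glBindAttribLocation(program, 1, "a_vTangent");
glBindAttribLocation(program, 2, "a_vBinormal");
glBindAttribLocation(program, 3, "a_vCustomParams");
glLinkProgram(program);

// Option 2: let the linker assign locations, then build the vertex layout
// around the queried results (-1 means the attribute is inactive).
GLint tangentLoc  = glGetAttribLocation(program, "a_vTangent");
GLint binormalLoc = glGetAttribLocation(program, "a_vBinormal");
GLint customLoc   = glGetAttribLocation(program, "a_vCustomParams");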