I want to compile the following code into SPIR-V
#version 450 core
#define BATCH_ID (PushConstants.Indices.x >> 16)
#define MATERIAL_ID (PushConstants.Indices.x & 0xFFFF)
layout (push_constant) uniform constants {
ivec2 Indices;
} PushConstants;
layout (constant_id = 1) const int MATERIAL_SIZE = 32;
in Vertex_Fragment {
layout(location = 0) vec4 VertexColor;
layout(location = 1) vec2 TexCoord;
} inData;
struct ParameterFrequence_3 {
int ColorMap;
};
layout (set = 3, binding = 0, std140) uniform ParameterFrequence_3 {
ParameterFrequence_3[MATERIAL_SIZE] data;
} Frequence_3;
layout (location = 0) out vec4 out_Color;
layout (set = 2, binding = 0) uniform sampler2D[] Sampler2DResources;
void main(void) {
vec4 color = vec4(1.0);
color *= texture(Sampler2DResources[Frequence_3.data[MATERIAL_ID].ColorMap], inData.TexCoord);
color *= inData.VertexColor;
out_Color = color;
}
(The code is generated by a program I am developing, which is why it might look a little strange, but it should make the problem clear.)
When trying to do so, I am told
error: 'variable index' : required extension not requested: GL_EXT_nonuniform_qualifier
(for the third-to-last line, where the texture lookup also happens)
After following a lot of discussion about how "dynamically uniform" is specified, and how the shading language spec basically says the scope is defined by the API while neither OpenGL nor Vulkan really does so (maybe that has changed), I am confused about why I get that error.
Initially I wanted to use instanced vertex attributes for the indices; those, however, are not dynamically uniform, which is what I thought the push constants would be.
So if push constants are constant during the draw call (which is the maximum scope required for being dynamically uniform), how can the above shader end up in any dynamically non-uniform state?
Edit: Does it have to do with the fact that the buffer backing the storage for "ColorMap" could be aliased by another buffer, through which the contents might be modified during the invocation? Or is there a way to tell the compiler this storage is "restricted", so it knows it is constant?
Thanks
It is 3 am over here; I should just go to sleep.
The chances that anyone else ends up having the same problem are small, but I'd still rather answer it myself than delete it:
I simply had to add a specialization constant to set the size of the sampler2D array; now it compiles without requiring any extension.
Good night
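For anyone landing on the same error, the fix described above might look like the following sketch (SAMPLER_COUNT and its constant_id are assumed names/values, not from the original code). Giving the previously unsized sampler2D array an explicit size driven by a specialization constant means the index is into a sized array, so the compiler no longer requires GL_EXT_nonuniform_qualifier:

```glsl
// Sketch of the fix; SAMPLER_COUNT and constant_id = 2 are assumed.
layout (constant_id = 2) const int SAMPLER_COUNT = 16;
layout (set = 2, binding = 0) uniform sampler2D Sampler2DResources[SAMPLER_COUNT];
```

The concrete array size can then be supplied per-pipeline through VkSpecializationInfo at pipeline creation time.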
I'm trying to send a variable from my vertex shader to my fragment shader, but when I use one specific "in" variable in an if statement, nothing shows up. Removing the if statement makes everything appear and work normally. What's weird is that the if statement isn't actually doing anything, and no errors are generated by the fragment shader.
I send several other variables from my vertex shader to my fragment shader, and this one specifically is the only one causing issues. I know type is being set correctly because I'm using it for something else that works correctly.
vertex shader
#version 150
in float type;
in vec2 texcoords;  // declaration omitted in the original post
out vec2 textureXY; // declaration omitted in the original post
out int roofBool;
void main(void)
{
textureXY = texcoords;
roofBool = 0;
if(type == 2){
roofBool = 1;
}
}
fragment shader
#version 150
in int roofBool;
// The output. Always a color
out vec4 fragColor;
void main()
{
int a = 0;
if(roofBool == 1){ //removing this causes everything to work
a = 2;
}
}
int variables cannot be interpolated by the GL. You must declare both the output and the corresponding input with the flat qualifier.
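Applied to the shaders above, only the two declarations of the integer varying need to change:

```glsl
// Vertex shader: integer outputs must be declared flat.
flat out int roofBool;

// Fragment shader: the matching input must be flat as well.
flat in int roofBool;
```

With matching flat qualifiers, the value written by the provoking vertex is passed through unchanged instead of being (impossibly) interpolated.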
From the behavior you describe, it seems you are not checking the compile and link status of your shaders/programs, and are not retrieving the compiler/linker info log. You would very likely have gotten a useful error message if you did.
I have some problems with binding a uniform buffer object to several shaders.
The execution of the following code:
for(auto& shaderIter : shaderHandler.getShaderPrograms()){
shaderIter.second->bind();
GLuint programID = shaderIter.second->programId();
GLuint index = glFuncs->glGetUniformBlockIndex(programID, "MatrixUBO");
glFuncs->glUniformBlockBinding(programID, index, UBO_MATRICES_BINDING_POINT);
shaderIter.second->release();
}
causes the error message
QOpenGLDebugMessage("APISource", 1281, "GL_INVALID_VALUE error generated. Uniform block index exceeds the maximum supported uniform buffers.", "HighSeverity", "ErrorType")
The type of the shader programs is QOpenGLShaderProgram. I use vertex, geometry, fragment and compute shaders with these shader programs.
The value of GL_MAX_{VERTEX, FRAGMENT, GEOMETRY}_UNIFORM_BLOCKS is 14. The output of index is 0 for each program, except for one where it is 4294967295.
The binding fails for the shader program containing the compute shader: that program has no active uniform block named MatrixUBO, so the output of index is 4294967295 for it.
EDIT: 4294967295 is the value of GL_INVALID_INDEX.
In my opinion there are two possible solutions:
Divide the list of shader programs into two parts: one for all rendering shader programs and one for all compute shader programs. After that, just iterate over the rendering ones.
Ask each shader program whether it contains a compute shader and only do the binding for the rendering programs. At first I didn't know whether there is a way to access that information.
EDIT: It is possible to get a QList of all shaders attached to a shader program and to check the type of each shader. So I changed the code to the following, which works for me.
for(auto& shaderProgramIter : shaderHandler.getShaderPrograms()){
bool isComputeShader = false;
for(auto& shaderIter : shaderProgramIter.second->shaders())
{
if(shaderIter->shaderType() == QOpenGLShader::Compute)
isComputeShader = true;
}
if(!isComputeShader)
{
shaderProgramIter.second->bind();
GLuint programID = shaderProgramIter.second->programId();
GLuint index = glFuncs->glGetUniformBlockIndex(programID, "MatrixUBO");
glFuncs->glUniformBlockBinding(programID, index, UBO_MATRICES_BINDING_POINT);
shaderProgramIter.second->release();
}
}
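An alternative sketch that avoids inspecting shader types at all: glGetUniformBlockIndex returns GL_INVALID_INDEX for any program without an active MatrixUBO block, so the binding call can simply be guarded on that. The snippet below mocks the two GL entry points so it is self-contained and runnable; in real code they come from your GL function loader (bindMatrixUBO is an assumed helper name):

```cpp
#include <cstdint>
#include <map>
#include <string>

using GLuint = std::uint32_t;
static const GLuint GL_INVALID_INDEX = 0xFFFFFFFFu;

// --- Mocked GL calls (stand-ins for the real loader functions) ---
static std::map<GLuint, GLuint> g_boundBlocks; // programID -> binding point

static GLuint mockGetUniformBlockIndex(GLuint programID, const std::string& /*name*/) {
    // Pretend program 42 is the compute-only program without the block.
    return (programID == 42) ? GL_INVALID_INDEX : 0;
}

static void mockUniformBlockBinding(GLuint programID, GLuint /*index*/, GLuint binding) {
    g_boundBlocks[programID] = binding;
}

// The actual guard: only bind when the block is active in this program.
void bindMatrixUBO(GLuint programID, GLuint bindingPoint) {
    GLuint index = mockGetUniformBlockIndex(programID, "MatrixUBO");
    if (index != GL_INVALID_INDEX)
        mockUniformBlockBinding(programID, index, bindingPoint);
}
```

This way the loop over shader programs stays flat, and any future program that happens to omit the block (not just compute-only ones) is handled for free.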
I'm currently trying to get a triangle to render using OpenGL 3.3 and C++ with the GLM, GLFW3 and GLEW libraries, but I get an error when trying to create my shader program.
Vertex info
(0) : error C5145: must write to gl_Position
I already tried to find out why this happens and asked on other forums, but no one knew the reason. There are three possible places where this error could originate: in my main.cpp, where I create the window, the context, the program, the vao, etc. ...
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <glm/glm.hpp>
#include <iostream>
#include <string>
#include "util/shaderutil.hpp"
#define WIDTH 800
#define HEIGHT 600
using namespace std;
using namespace glm;
GLuint vao;
GLuint shaderprogram;
void initialize() {
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glClearColor(0.5, 0.7, 0.9, 1.0);
string vShaderPath = "shaders/shader.vert";
string fShaderPath = "shaders/shader.frag";
shaderprogram = ShaderUtil::createProgram(vShaderPath.c_str(), fShaderPath.c_str());
}
void render() {
glClear(GL_COLOR_BUFFER_BIT);
glUseProgram(shaderprogram);
glDrawArrays(GL_TRIANGLES, 0, 3);
}
void clean() {
glDeleteProgram(shaderprogram);
}
int main(int argc, char** argv) {
if (!glfwInit()) {
cerr << "GLFW ERROR!" << endl;
return -1;
}
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
GLFWwindow* win = glfwCreateWindow(WIDTH, HEIGHT, "Rendering a triangle!", NULL, NULL);
glfwMakeContextCurrent(win);
glewExperimental = GL_TRUE;
if (glewInit() != GLEW_OK) {
cerr << "GLEW ERROR!" << endl;
return -1;
} else {
glGetError();
//GLEW BUG: SETTING THE ERRORFLAG TO INVALID_ENUM; THEREFORE RESET
}
initialize();
while (!glfwWindowShouldClose(win)) {
render();
glfwPollEvents();
glfwSwapBuffers(win);
}
clean();
glfwDestroyWindow(win);
glfwTerminate();
return 0;
}
...the ShaderUtil class, where I read in the shader files, compile them, do error checking and return a final program...
#include "shaderutil.hpp"
#include <iostream>
#include <string>
#include <fstream>
#include <vector>
using namespace std;
GLuint ShaderUtil::createProgram(const char* vShaderPath, const char* fShaderPath) {
/*VARIABLES*/
GLuint vertexShader;
GLuint fragmentShader;
GLuint program;
ifstream vSStream(vShaderPath);
ifstream fSStream(fShaderPath);
string vSCode, fSCode;
/*CREATING THE SHADER AND PROGRAM OBJECTS*/
vertexShader = glCreateShader(GL_VERTEX_SHADER);
fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
program = glCreateProgram();
/*READING THE SHADERCODE*/
/*CONVERTING THE SHADERCODE TO CHAR POINTERS*/
while (vSStream.is_open()) {
string line = "";
while (getline(vSStream, line)) {
vSCode += "\n" + line;
}
vSStream.close();
}
const char* vSCodePointer = vSCode.c_str();
while (fSStream.is_open()) {
string line = "";
while (getline(fSStream, line)) {
fSCode += "\n" + line;
}
fSStream.close();
}
const char* fSCodePointer = fSCode.c_str();
/*COMPILING THE VERTEXSHADER*/
glShaderSource(vertexShader, 1, &vSCodePointer, NULL);
glCompileShader(vertexShader);
/*VERTEXSHADER ERROR CHECKING*/
GLint vInfoLogLength;
glGetShaderiv(vertexShader, GL_INFO_LOG_LENGTH, &vInfoLogLength);
if (vInfoLogLength > 0) {
vector<char> vInfoLog(vInfoLogLength + 1);
glGetShaderInfoLog(vertexShader, vInfoLogLength, &vInfoLogLength, &vInfoLog[0]);
for(int i = 0; i < vInfoLogLength; i++) {
cerr << vInfoLog[i];
}
}
/*COMPILING THE FRAGMENTSHADER*/
glShaderSource(fragmentShader, 1, &fSCodePointer, NULL);
glCompileShader(fragmentShader);
/*FRAGMENTSHADER ERROR CHECKING*/
GLint fInfoLogLength;
glGetShaderiv(fragmentShader, GL_INFO_LOG_LENGTH, &fInfoLogLength);
if (fInfoLogLength > 0) {
vector<char> fInfoLog(fInfoLogLength + 1);
glGetShaderInfoLog(fragmentShader, fInfoLogLength, &fInfoLogLength, &fInfoLog[0]);
for(int i = 0; i < fInfoLogLength; i++) {
cerr << fInfoLog[i];
}
}
/*LINKING THE PROGRAM*/
glAttachShader(program, vertexShader);
glAttachShader(program, fragmentShader);
glLinkProgram(program);
//glValidateProgram(program);
/*SHADERPROGRAM ERROR CHECKING*/
GLint programInfoLogLength;
glGetProgramiv(program, GL_INFO_LOG_LENGTH, &programInfoLogLength);
if (programInfoLogLength > 0) {
vector<char> programInfoLog(programInfoLogLength + 1);
glGetProgramInfoLog(program, programInfoLogLength, &programInfoLogLength, &programInfoLog[0]);
for(int i = 0; i < programInfoLogLength; i++) {
cerr << programInfoLog[i];
}
}
/*CLEANUP & RETURNING THE PROGRAM*/
glDeleteShader(vertexShader);
glDeleteShader(fragmentShader);
return program;
}
...and the vertex shader itself, which is nothing special. I just create an array of vertices and push them into gl_Position.
#version 330 core
void main() {
const vec3 VERTICES[3] = vec3[3] {
0.0, 0.5, 0.5,
0.5,-0.5, 0.5,
-0.5,-0.5, 0.5
};
gl_Position.xyz = VERTICES;
gl_Position.w = 1.0;
}
The fragment shader just outputs a vec4 called color, which is set to (1.0, 0.0, 0.0, 1.0). The compiler doesn't show me any errors, but when I execute the program, I just get a window without the triangle, plus the error message shown above.
There are a few things I already tried to solve this problem, but none of them worked:
I tried creating the vertices inside my main.cpp and pushing them into the vertex shader via a vertex buffer object. With some code inspired by opengl-tutorials.org I finally got a triangle to show up, but the shaders weren't applied; only the vertices from my main.cpp showed up on screen, and the "must write to gl_Position" problem remained.
I tried using glGetError() in different places and got two different error codes: 1280 (GL_INVALID_ENUM) and 1282 (GL_INVALID_OPERATION). The first was caused by a known GLEW quirk that leaves GL_INVALID_ENUM in the error state after initialization; I was told to ignore it and clear the state with a single glGetError() call after initializing GLEW. The other error code appeared after glUseProgram() in the render function. I wanted to get more information out of it, but gluErrorString() is deprecated in OpenGL 3.3 and I couldn't find an alternative provided by any of my libraries.
I tried validating my program via glValidateProgram() after linking it. When I did, the gl_Position error message no longer showed up, but the triangle didn't either, so I assume this call just replaced the info log with information about the validation process.
So right now, I have no idea what causes this error.
The problem got solved! I printed the source that OpenGL was trying to compile and saw that the ifstream had loaded no source at all. Things I had to change:
Change the "while (vSStream.is_open())" to "if (vSStream.is_open())".
Add an error check for whether the stream actually opened: add "else { cerr << "OH NOES!" << endl; }".
Add a second parameter to the ifstreams I'm creating: change "ifstream(path)" to "ifstream(path, ios::in)"
Change the path I'm passing from a relative path (e.g. "../shaders/shader.vert") to an absolute path (e.g. "/home/USERNAME/Desktop/project/src/shaders/shader.vert"). This was somehow necessary because the relative path wasn't resolved. Using an absolute path isn't a permanent solution, but it fixes the problem of not finding the shader.
Now it actually loads and compiles the shaders. There are still some errors to fix, but if someone has the same "must write to gl_Position" problem: double, no, triple-check that the source you're trying to compile is actually loaded and that the ifstream is actually open.
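The loading steps described above can be condensed into a loader that fails loudly instead of handing an empty string to glShaderSource. This is a sketch (loadShaderSource is an assumed helper name, not from the original code):

```cpp
#include <fstream>
#include <sstream>
#include <stdexcept>
#include <string>

// Read an entire shader file; throw instead of silently returning "".
std::string loadShaderSource(const std::string& path) {
    std::ifstream stream(path, std::ios::in);
    if (!stream.is_open())
        throw std::runtime_error("Cannot open shader file: " + path);
    std::ostringstream buffer;
    buffer << stream.rdbuf();   // slurp the whole file in one go
    return buffer.str();
}
```

Throwing (or at least logging and aborting) on a missing file would have surfaced the real problem immediately, instead of letting the driver report a confusing "must write to gl_Position" error for an empty shader.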
I thank everyone who tried to help me, especially @MtRoad. This problem almost made me go bald.
Vertex shaders run on each vertex individually, and gl_Position is the output position of the single vertex currently being processed, after whatever transforms you wish to apply. Trying to emit multiple vertices from one invocation therefore doesn't make sense. Geometry shaders can emit additional geometry on the fly and can be used for that, for example to create motion blur.
For typical drawing, you bind a vertex array object like you did, but put the data into buffers called vertex buffer objects and tell OpenGL how to interpret the data's "attributes" using glVertexAttribPointer, which you can then read in your shaders.
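Alternatively, the buffer-less approach from the question can be made to work by indexing the constant array with gl_VertexID, so each invocation writes exactly one position. A sketch (this also fixes the array constructor, which takes parentheses rather than braces):

```glsl
#version 330 core
const vec3 VERTICES[3] = vec3[3](
    vec3( 0.0,  0.5, 0.5),
    vec3( 0.5, -0.5, 0.5),
    vec3(-0.5, -0.5, 0.5)
);
void main() {
    // One vertex per invocation, selected by the built-in vertex index.
    gl_Position = vec4(VERTICES[gl_VertexID], 1.0);
}
```

With this shader, a plain glDrawArrays(GL_TRIANGLES, 0, 3) on a bound (even empty) VAO is enough to produce the triangle.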
I recently encountered this issue, and I suspect the cause may be the same as yours.
I'm not familiar with g++; however, on VS the build environment and the location your *.exe runs from when debugging can have an impact on this. For example, one such setting:
Project Properties -> General -> Output directory ->
Visual Studios Express - change debug output directory
Another similar issue is described in "The system cannot find the file specified" when running C++ program.
If you've changed the build environment and are debugging from a different output directory, you need to make sure that all relevant files are reachable relative to where the *.exe is being executed.
This would explain why you had to resort to "if (vSStream.is_open())" (which I suspect fails) and then use the full file path of the shaders: the originally referenced files are not found relative to the executable.
My issue was exactly like yours, but only in release mode. Once I copied my shaders into the release folder where the *.exe could access them, the problem went away.
I have a vertex shader:
#version 430
in vec4 position;
void main(void)
{
//gl_Position = position; => works in ALL cases
gl_Position = vec4(0,0,0,1);
}
if I do:
m_program.setAttributeArray(0, m_vertices.constData());
m_program.enableAttributeArray(0);
everything works fine. However, if I do:
m_program.setAttributeArray("position", m_vertices.constData());
m_program.enableAttributeArray("position");
NOTE: m_program.attributeLocation("position"); returns -1.
then, I get an empty window.
The Qt help pages state:
void QGLShaderProgram::setAttributeArray(int location, const QVector3D * values, int stride = 0)
Sets an array of 3D vertex values on the attribute at location in this shader program. The stride indicates the number of bytes between vertices. A default stride value of zero indicates that the vertices are densely packed in values.
The array will become active when enableAttributeArray() is called on the location. Otherwise the value specified with setAttributeValue() for location will be used.
and
void QGLShaderProgram::setAttributeArray(const char * name, const QVector3D * values, int stride = 0)
This is an overloaded function.
Sets an array of 3D vertex values on the attribute called name in this shader program. The stride indicates the number of bytes between vertices. A default stride value of zero indicates that the vertices are densely packed in values.
The array will become active when enableAttributeArray() is called on name. Otherwise the value specified with setAttributeValue() for name will be used.
So why does it work when using the "int version" but not when using the "const char * version"?
It returns -1 because you commented out the only line in your shader that actually uses position.
This is not an error; it is a consequence of a misunderstanding of how attribute locations are assigned. Uniforms and attributes are only assigned locations after all shader stages are compiled and linked. If a uniform or attribute is not used in an active code path, it will not be assigned a location. A location is not assigned even if you use the variable in dead code like this:
#version 130
in vec4 dead_pos; // Location: N/A
in vec4 live_pos; // Location: Probably 0
void main (void)
{
vec4 not_used = dead_pos; // Not used for vertex shader output, so this is dead.
gl_Position = live_pos;
}
It actually goes even further than this. If something is output from a vertex shader but not used in the geometry, tessellation or fragment shader, then its code path is considered inactive.
Vertex attribute location 0 is implicitly the vertex position, by the way. It is the only vertex attribute that the GLSL spec allows to alias a fixed-function pointer function (e.g. glVertexPointer (...) == glVertexAttribPointer (0, ...)).
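If you want a predictable location rather than relying on the linker's automatic assignment, you can pin it explicitly in the shader (supported in the asker's #version 430):

```glsl
// Explicit location; attributeLocation("position") will then return 0
// whenever the attribute is active.
layout(location = 0) in vec4 position;
```

Note this does not resurrect an optimized-away attribute: a variable that is unused in any active code path remains inactive, and queries for it still return -1 regardless of the layout qualifier.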