GLSL shader file not compiling - OpenGL

I am writing my first program using OpenGL, and I have gotten to the point where I am trying to get it to compile my extremely simple shader program. I always get the error that it failed to compile the shader. I use the following code to compile the shader:
struct Shader
{
    const char* filename;
    GLenum type;
    GLchar* source;
};
...
static char* readShaderSource(const char* shaderFile)
{
    FILE* fp = fopen(shaderFile, "r");
    if ( fp == NULL ) { return NULL; }
    fseek(fp, 0L, SEEK_END);
    long size = ftell(fp);
    fseek(fp, 0L, SEEK_SET);
    char* buf = new char[size + 1];
    fread(buf, 1, size, fp);
    buf[size] = '\0';
    fclose(fp);
    return buf;
}
...
Shader s;
s.filename = "<name of shader file>";
s.type = GL_VERTEX_SHADER;
s.source = readShaderSource( s.filename );
GLuint shader = glCreateShader( s.type );
glShaderSource( shader, 1, (const GLchar**) &s.source, NULL );
glCompileShader( shader );
And my shader file source is as follows:
#version 150
in vec4 vPosition;
void main()
{
    gl_Position = vPosition;
}
I have also tried replacing "in" with "attribute" as well as deleting the version line. Nothing compiles.
Note: My actual C program compiles and runs. The shader program that runs on the GPU is what is failing to compile.
I have also made sure to download my graphics card's latest driver. I have an NVIDIA 8800 GTS 512.
Any ideas on how to get my shader program (written in GLSL) to compile?

As mentioned in the comments, does the compile step output anything to the console? To my surprise, while I was using an ATI card I got a message that the shader program compiled successfully, but when I first switched to NVIDIA I was staring at the screen because nothing was printed at all; the shaders were working, however. So maybe you are compiling successfully and just aren't retrieving the result? And if the shaders do nothing in the context where you try to use the program, I think you are missing the link step (it may be further along in your code, however). Google has some good answers on how to perform every step correctly; you can compare your code to this example.

I also wrote an interface for working with shaders; you can take a look at my UniShader. The project lacks English documentation and is mainly used for GPGPU, but you can easily load any shader, and the code itself uses English naming, so it should be quite comfortable. Look in the UniShader folder in that zip for the source code. There are also a few examples; the one named "Ukazkovy program na GPGPU" includes its source, so you can see how to use those classes. Good luck!
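For reference, a minimal sketch of how to check the compile status and fetch the compile log after glCompileShader (using the shader variable from the question's code; this is standard GL, nothing vendor-specific):

GLint status = GL_FALSE;
glGetShaderiv( shader, GL_COMPILE_STATUS, &status );
if ( status == GL_FALSE )
{
    GLint logLen = 0;
    glGetShaderiv( shader, GL_INFO_LOG_LENGTH, &logLen );
    char* log = new char[logLen > 0 ? logLen : 1];
    log[0] = '\0';
    glGetShaderInfoLog( shader, logLen, NULL, log );  // fills log with the driver's error text
    fprintf( stderr, "Shader compile failed:\n%s\n", log );
    delete [] log;
}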

Related

Why doesn't Vulkan recognize my SPIR-V shaders?

I'm writing a simple Vulkan application to get familiar with the API. When I call vkCreateGraphicsPipelines my program prints "LLVM ERROR: Invalid SPIR-V magic number" to stderr and exits.
The SPIR-V spec (https://www.khronos.org/registry/spir-v/specs/1.2/SPIRV.pdf, chapter 3 is relevant here I think) states that shader modules are assumed to be a stream of words, not bytes, and my SPIR-V files were a stream of bytes.
So I byteswapped the first two words of my SPIR-V files, and it did recognize the magic number, but vkCreateGraphicsPipelines exited with error code -1000012000 (the definition of VK_ERROR_INVALID_SHADER_NV) meaning the shader stage failed to compile (see https://www.khronos.org/registry/vulkan/specs/1.2-extensions/man/html/vkCreateShaderModule.html). The exact same thing happens when I byteswap the entire SPIR-V files (with "dd conv=swab").
I'm not sure what the issue is in the first place, since https://www.khronos.org/registry/vulkan/specs/1.2-extensions/man/html/VkShaderModuleCreateInfo.html states that the format of the SPIR-V code is automatically determined. If anyone can recommend a fix, even if it's a hack, I would appreciate it.
I'm generating SPIR-V with glslangValidator, if that matters.
The code that loads the shader module:
std::vector<char> readFile(const std::string& filename) {
    std::ifstream file(filename, std::ios::ate | std::ios::binary);
    size_t fileSize = (size_t) file.tellg();
    std::vector<char> buffer(fileSize);
    file.seekg(0);
    file.read(buffer.data(), fileSize);
    file.close();
    return buffer;
}
VkShaderModule getShadMod(VkDevice dev, const std::string shadFileName) {
    std::vector<char> shader = readFile(shadFileName);
    VkShaderModuleCreateInfo smci;
    smci.sType = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO;
    smci.pNext = NULL;
    smci.flags = 0;
    smci.codeSize = shader.size();
    smci.pCode = reinterpret_cast<uint32_t *>(shader.data());
    VkShaderModule shadMod;
    asr(vkCreateShaderModule(dev, &smci, NULL, &shadMod),
        "create shader module error");
    return shadMod;
}

Reading from file in C++ results in empty string "sometimes"

I know that this "sometimes" is a little bit against the rules of Stack Overflow, but I don't know how to describe my problem in a better way:
I have this code:
static const std::string parseShader(const std::string &fileName){
    std::ifstream ifs(fileName);
    std::stringstream buffer;
    buffer << ifs.rdbuf();
    std::string s = buffer.str();
    return s;
}
And in my main function I have this code:
const GLuint vertex_shader = glCreateShader(GL_VERTEX_SHADER);
const char* vertex_shader_text = parseShader("res/shaders/basic.vertex.glsl").c_str();
std::cout << "Vertex shader length is " << strlen(vertex_shader_text) << std::endl;
glShaderSource(vertex_shader, 1, &vertex_shader_text, NULL);
glCompileShader(vertex_shader);
const GLuint fragment_shader = glCreateShader(GL_FRAGMENT_SHADER);
const char* fragment_shader_text = parseShader("res/shaders/basic.fragment.glsl").c_str();
std::cout << "Fragment shader length is " << strlen(fragment_shader_text) << std::endl;
glShaderSource(fragment_shader, 1, &fragment_shader_text, NULL);
glCompileShader(fragment_shader);
So, if I execute my program, without moving files or changing any code, sometimes I get:
Vertex shader length is 160
Fragment shader length is 90
And sometimes:
Vertex shader length is 0
Fragment shader length is 0
And even, most of the time:
Vertex shader length is 160
Fragment shader length is 0
So it seems as though some part of the file reading is asynchronous and slower than the rest of the program. As additional information, I'm working with C++17, CLion and macOS Mojave.
Also, when both files are read correctly, the image in OpenGL (a triangle) is painted correctly, but when either file is read incorrectly, nothing is shown.
The problem is that the string object that the function returns is temporary and will end its lifetime almost immediately. That will leave you with a pointer to a string that no longer exists.
Use a std::string as the destination instead, and only use the c_str member function to get a pointer when absolutely needed.
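Applied to the snippet above, the vertex-shader part would look something like this (the fragment shader is handled the same way):

const GLuint vertex_shader = glCreateShader(GL_VERTEX_SHADER);
// Keep the std::string alive for as long as the char pointer is in use
const std::string vertex_shader_source = parseShader("res/shaders/basic.vertex.glsl");
const char* vertex_shader_text = vertex_shader_source.c_str();
std::cout << "Vertex shader length is " << vertex_shader_source.length() << std::endl;
glShaderSource(vertex_shader, 1, &vertex_shader_text, NULL);
glCompileShader(vertex_shader);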

Loading text file from zip archive with minizip/zlib, garbage characters at end of file

I am trying to load shader source from inside a zip file; the shader is a plain text file created with Notepad. The loading code is as follows (error-checking code removed from the snippet below):
std::string retrievestringfromarchive(std::string filename)
{
    //data is the zip resource attached elsewhere
    unz_file_info info;
    Uint8* rwh;
    unzLocateFile(data, filename.c_str(), NULL);
    unzOpenCurrentFile(data);
    unzGetCurrentFileInfo(data, &info, NULL, 0, NULL, 0, NULL, 0);
    rwh = (Uint8*)malloc(info.uncompressed_size);
    unzReadCurrentFile( data, rwh, info.uncompressed_size );
    //garbage at end of file
    const char* rwh1 = reinterpret_cast<char*>(rwh);
    std::stringstream tempstream(rwh1);
    std::string tempstring = tempstream.str();
    free(rwh);
    return tempstring;
}
The output of the string returned is as follows:
//FRAGMENT SHADER
#version 120
//in from vertex shader
varying vec2 f_texcoord;
varying vec4 f_color;
uniform sampler2D mytexture;
void main(void)
{
gl_FragColor = texture2D(mytexture, f_texcoord) * f_color;
}
//endfile««««««««îþîþ
Notes:
I checked the info struct; both compressed and uncompressed sizes match the information from 7-Zip.
The buffer "rwh" itself has the garbage characters at the end when inspected with gdb.
I am on Win7 64-bit, using Code::Blocks and TDM-GCC-32 4.8.1 to compile.
The "//endfile" comment neatly avoids the GL shader compile issue, but that has got to go.
rwh = (Uint8*)malloc(info.uncompressed_size);
unzReadCurrentFile( data, rwh, info.uncompressed_size );
I highly doubt that unzReadCurrentFile adds a 0 terminator to the buffer - there would be no space for it anyway - and you are using the pointer as a 0-terminated string.
In case it really makes sense to interpret the buffer as a string, you can do it like so:
std::string tempstring(rwh1, info.uncompressed_size);
The decompressor gives you a block of decompressed data, but it doesn't know that the data is plain text and that you are planning to use it as a C-language string. So it doesn't append a terminating NUL (zero) character at the end. That's all. Simply copy the given number of characters and do not assume the data block is zero-terminated.
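Put together, the tail end of retrievestringfromarchive would then read:

unzReadCurrentFile( data, rwh, info.uncompressed_size );
const char* rwh1 = reinterpret_cast<char*>(rwh);
// Construct the string with an explicit length; the buffer is not 0-terminated
std::string tempstring(rwh1, info.uncompressed_size);
free(rwh);
return tempstring;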

Is there anything wrong with my glsl shader loading code?

The problem is that shaders (pretty simple ones, as I'm learning OpenGL) fail to compile in a seemingly random manner, and give random error messages (shown below).
The same shaders, however, compile after about 3 or 4 tries.
Here is the code:
Shader::Shader(GLenum Type, std::string filename)
{
    shader_type = Type;
    std::ifstream ifs(filename);
    if(!ifs)
        throw(std::runtime_error("File:"+filename+" not opened."));
    std::ostringstream stream;
    stream << ifs.rdbuf();
    const GLchar* data = stream.str().c_str();
    handle = glCreateShader(shader_type);
    glShaderSource(handle, 1, static_cast<const GLchar**>(&data), 0);
    glCompileShader(handle);
    int status;
    glGetShaderiv(handle, GL_COMPILE_STATUS, &status);
    if(status == GL_FALSE)
    {
        int loglength;
        glGetShaderiv(handle, GL_INFO_LOG_LENGTH, &loglength);
        auto data = new char[loglength];
        glGetShaderInfoLog(handle, loglength, &loglength, data);
        std::string strdata(data);
        delete [] data;
        throw(std::runtime_error(strdata));
    }
}
Note that the shaders aren't missing newlines at the end, have an extra space after the last semicolon, and use tabs instead of spaces (as suggested in various old posts around the internet!).
Here are two error messages produced by the same vertex shader (shown first), not at the same time:
#version 330
in vec2 Position;
uniform mat4 transform;
void main()
{
    gl_Position = transform*vec4(Position, 0.0f, 1.0f);
}
Errors:
0(1) : error C0000: syntax error, unexpected $undefined at token "<undefined>"
0(6) : error C0000: syntax error, unexpected '!', expecting ',' or ')' at token "!"
And sometimes it just works!
Is it a problem with my drivers?
(I'm using the recent 302.x stable NVIDIA binary drivers on Arch Linux 64-bit, with an aged 9600 GSO card.)
P.S.: The code works as expected whenever the shader compiles correctly, so I think it should be correct.
I'll be happy to post a working (sometimes!) example as a zip file if the problem can't be found from this, and someone wants to take a look.
const GLchar* data = stream.str().c_str();
This is bad. If you want the string's data, you need to store it. str will return a copy of the buffer, which you then get a pointer to with c_str. Once that temporary is destroyed (at the end of this line), that pointer will point to memory you no longer have access to.
The correct code is this:
std::string dataString = stream.str();
const GLchar *data = dataString.c_str(); // no cast needed: GLchar is char, and c_str() is already const
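Plugged back into the constructor above, the compile sequence then becomes something like:

std::ostringstream stream;
stream << ifs.rdbuf();
std::string dataString = stream.str();   // keep the copy alive
const GLchar* data = dataString.c_str(); // pointer stays valid while dataString lives
handle = glCreateShader(shader_type);
glShaderSource(handle, 1, &data, 0);
glCompileShader(handle);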

glsl shader compilation issue at runtime

I'm working on a project that uses OpenGL 4.0 shaders.
I have to supply the call to glShaderSource() with an array of char arrays, which represents the source of the shader.
The shader compilation is failing, with the following errors:
(0) : error C0206: invalid token "<null atom>" in version line
(0) : error C0000: syntax error, unexpected $end at token "<EOF>"
Here's my (hello world) shader, straight from the OpenGL 4.0 Shading Language Cookbook:
#version 400
in vec3 VertexPosition;
in vec3 VertexColor;
out vec3 Color;
void main()
{
    Color = VertexColor;
    gl_Position = vec4( VertexColor, 1.0 );
}
And here's my code to read the shader file into my C++ code, and compile the shader at runtime:
const int nMaxLineSize = 1024;
char sLineBuffer[nMaxLineSize];
ifstream stream;
vector<string> vsLines;
GLchar** ppSrc;
GLint* pnSrcLineLen;
int nNumLines;
stream.open( m_sShaderFile.c_str(), std::ios::in );
while( (stream.good()) && (stream.getline(sLineBuffer, nMaxLineSize)) )
{
    if( strlen(sLineBuffer) > 0 )
        vsLines.push_back( string(sLineBuffer) );
}
stream.close();
nNumLines = vsLines.size();
pnSrcLineLen = new GLint[nNumLines];
ppSrc = new GLchar*[nNumLines];
for( int n = 0; n < nNumLines; n ++ )
{
    string & sLine = vsLines.at(n);
    int nLineLen = sLine.length();
    char * pNext = new char[nLineLen+1];
    memcpy( (void*)pNext, sLine.c_str(), nLineLen );
    pNext[nLineLen] = '\0';
    ppSrc[n] = pNext;
    pnSrcLineLen[n] = nLineLen+1;
}
vsLines.clear();
// just for debugging purposes (lines print out just fine..)
for( int n = 0; n < nNumLines; n ++ )
    ATLTRACE( "line %d: %s\r\n", n, ppSrc[n] );
// Create the shader
m_nShaderId = glCreateShader( m_nShaderType );
// Compile the shader
glShaderSource( m_nShaderId, nNumLines, (const GLchar**)ppSrc, (GLint*) pnSrcLineLen );
glCompileShader( m_nShaderId );
// Determine compile status
GLint nResult = GL_FALSE;
glGetShaderiv( m_nShaderId, GL_COMPILE_STATUS, &nResult );
The C++ code executes as expected, but the shader compilation fails. Can anyone spot what I might be doing wrong?
I have a feeling that this may be to do with end of line characters somehow, but as this is my first attempt at shader compilation, I'm stuck!
I've read other SO answers on shader compilation, but they seem specific to Java / other languages, not C++. If it helps, I'm on the win32 platform.
You have made a mistake that others have made. This is the definition of glShaderSource:
void glShaderSource(GLuint shader, GLsizei count, const GLchar **string, const GLint *length);
The string is an array of strings. It is not intended to be an array of lines in your shader. The way the compiler will interpret this array of strings is by concatenating them together, one after another. Without newlines.
Since stream.getline will not put the \n character in the string, each of the shader strings you generate will not have a newline at the end. Therefore, when glShaderSource goes to compile them, your shader will look like this:
#version 400in vec3 VertexPosition;in vec3 VertexColor;out vec3 Color;...
That's not legal GLSL.
The proper way to do this is to load the file as a string.
std::ifstream shaderFile(m_sShaderFile.c_str());
if(!shaderFile)
    throw std::runtime_error("could not open shader file"); //Error out here.
std::stringstream shaderData;
shaderData << shaderFile.rdbuf(); //Loads the entire file into the string stream.
shaderFile.close();
const std::string &shaderString = shaderData.str(); //Get the string stream as a std::string.
Then you can just pass that along to glShaderSource easily enough:
m_nShaderId = glCreateShader( m_nShaderType );
const char *strShaderVar = shaderString.c_str();
GLint iShaderLen = shaderString.size();
glShaderSource( m_nShaderId, 1, (const GLchar**)&strShaderVar, (GLint*)&iShaderLen );
glCompileShader( m_nShaderId );
If you copied this loading code from somewhere, then I strongly suggest you find a different place to learn about OpenGL. Because that's terrible coding.
glShaderSource( m_nShaderId, nNumLines, (const GLchar**)ppSrc, (GLint*) pnSrcLineLen );
I know the signature of glShaderSource makes it tempting to send each line of the shader separately. But that's not what it's meant for. The point of being able to send multiple strings is so that one can mix multiple primitive shader sources into a single shader, kind of like include files. Understanding this makes it much simpler to read in a shader file – and avoids such nasty bugs.
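For illustration (a hypothetical snippet, not taken from the question's code; the shader handle and the mainShaderSource string are assumed): the multi-string form lets you prepend a shared block of #defines to a shader without touching the file on disk:

const GLchar* sources[2] = {
    "#version 400\n#define MAX_LIGHTS 4\n", // shared "header"; note the explicit newlines
    mainShaderSource                        // the whole file, loaded as a single string
};
glShaderSource( shader, 2, sources, NULL ); // NULL: each string is zero-terminated
glCompileShader( shader );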
Using C++ you can do it much nicer and cleaner. I already wrote the following in Getting garbage chars when reading GLSL files:
You're using C++, so I suggest you leverage that. Instead of reading into a self allocated char array I suggest you read into a std::string:
#include <string>
#include <fstream>
std::string loadFileToString(char const * const fname)
{
    std::ifstream ifile(fname);
    std::string filetext;
    while( ifile.good() ) {
        std::string line;
        std::getline(ifile, line);
        filetext.append(line + "\n");
    }
    return filetext;
}
That automatically takes care of all memory allocation and proper delimiting -- the keyword is RAII: Resource Acquisition Is Initialization. Later on you can upload the shader source with something like
void glcppShaderSource(GLuint shader, std::string const &shader_string)
{
    GLchar const *shader_source = shader_string.c_str();
    GLint const shader_length = shader_string.size();
    glShaderSource(shader, 1, &shader_source, &shader_length);
}
You can use those two functions together like this:
void load_shader(GLuint shaderobject, char * const shadersourcefilename)
{
    glcppShaderSource(shaderobject, loadFileToString(shadersourcefilename));
}
Just a quick hunch:
Have you tried calling glShaderSource with NULL as length parameter? In that case OpenGL will assume your code to be null-terminated.
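That is, something like the following, assuming shaderString holds the whole file:

const GLchar* src = shaderString.c_str();     // c_str() guarantees a 0 terminator
glShaderSource( m_nShaderId, 1, &src, NULL ); // NULL length: read up to the terminator
glCompileShader( m_nShaderId );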