Why doesn't Vulkan recognize my SPIR-V shaders? - glsl

I'm writing a simple Vulkan application to get familiar with the API. When I call vkCreateGraphicsPipelines my program prints "LLVM ERROR: Invalid SPIR-V magic number" to stderr and exits.
The SPIR-V spec (https://www.khronos.org/registry/spir-v/specs/1.2/SPIRV.pdf, chapter 3 is relevant here I think) states that a module is assumed to be a stream of 32-bit words, not bytes, and my SPIR-V files are read in as a stream of bytes.
So I byteswapped the first two words of my SPIR-V files, and the magic number was then recognized, but vkCreateGraphicsPipelines exited with error code -1000012000 (the value of VK_ERROR_INVALID_SHADER_NV), meaning the shader stage failed to compile (see https://www.khronos.org/registry/vulkan/specs/1.2-extensions/man/html/vkCreateShaderModule.html). The exact same thing happens when I byteswap the entire SPIR-V files (with "dd conv=swab").
I'm not sure what the issue is in the first place, since https://www.khronos.org/registry/vulkan/specs/1.2-extensions/man/html/VkShaderModuleCreateInfo.html states that the format of the SPIR-V code is automatically determined. If anyone can recommend a fix, even if it's a hack, I would appreciate it.
I'm generating SPIR-V with glslangValidator, if that matters.
The code that loads the shader module:
std::vector<char> readFile(const std::string& filename) {
    std::ifstream file(filename, std::ios::ate | std::ios::binary);
    size_t fileSize = (size_t) file.tellg();
    std::vector<char> buffer(fileSize);
    file.seekg(0);
    file.read(buffer.data(), fileSize);
    file.close();
    return buffer;
}
VkShaderModule getShadMod(VkDevice dev, const std::string shadFileName) {
    std::vector<char> shader = readFile(shadFileName);
    VkShaderModuleCreateInfo smci;
    smci.sType = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO;
    smci.pNext = NULL;
    smci.flags = 0;
    smci.codeSize = shader.size();
    smci.pCode = reinterpret_cast<uint32_t *>(shader.data());
    VkShaderModule shadMod;
    asr(vkCreateShaderModule(dev, &smci, NULL, &shadMod),
        "create shader module error");
    return shadMod;
}
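For reference, a minimal sketch (untested) of the word-oriented loading I have in mind, with readFileWords as a hypothetical replacement for readFile; pCode gets a uint32_t pointer while codeSize stays a byte count:
std::vector<uint32_t> readFileWords(const std::string& filename) {
    // Read the SPIR-V file straight into 32-bit words so the pointer handed
    // to pCode is properly aligned for uint32_t.
    std::ifstream file(filename, std::ios::ate | std::ios::binary);
    size_t fileSize = (size_t) file.tellg();
    std::vector<uint32_t> buffer(fileSize / sizeof(uint32_t));
    file.seekg(0);
    file.read(reinterpret_cast<char*>(buffer.data()), fileSize);
    return buffer;
}
// ...and then:
// smci.codeSize = buffer.size() * sizeof(uint32_t);  // still a byte count
// smci.pCode    = buffer.data();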

Related

C++, OpenCV: Fastest way to read a file containing non-ASCII characters on windows

I am writing a program using OpenCV that shall work on Windows as well as on Linux. Now the problem with OpenCV is that its cv::imread function cannot handle file paths that contain non-ASCII characters on Windows. A workaround is to first read the file into a buffer using other libraries (for example the standard library or Qt) and then decode the file from that buffer using the cv::imdecode function. This is what I currently do. However, it's not very fast and much slower than just using cv::imread. I have a TIF image that is about 1 GB in size. Reading it with cv::imread takes approx. 1 s, reading it with the buffer method takes about 14 s. I assume that imread just reads those parts of the TIF that are necessary for displaying the image (no layers etc.). Either that, or my code for reading a file into a buffer is bad.
Now my question is if there is a better way to do it. Either a better way with regard to OpenCV or a better way with regard to reading a file into a buffer.
I tried two different methods for the buffering, one using the standard library and one using Qt (actually they both use Qt for some things). They are both equally slow:
Method 1
std::shared_ptr<std::vector<char>> readFileIntoBuffer(QString const& path) {
#ifdef Q_OS_WIN
    std::ifstream file(path.toStdWString(), std::iostream::binary);
#else
    std::ifstream file(path.toStdString(), std::iostream::binary);
#endif
    if (!file.good()) {
        return std::shared_ptr<std::vector<char>>(new std::vector<char>());
    }
    file.exceptions(std::ifstream::badbit | std::ifstream::failbit | std::ifstream::eofbit);
    file.seekg(0, std::ios::end);
    std::streampos length(file.tellg());
    std::shared_ptr<std::vector<char>> buffer(new std::vector<char>(static_cast<std::size_t>(length)));
    if (static_cast<std::size_t>(length) == 0) {
        return std::shared_ptr<std::vector<char>>(new std::vector<char>());
    }
    file.seekg(0, std::ios::beg);
    try {
        file.read(buffer->data(), static_cast<std::size_t>(length));
    } catch (...) {
        return std::shared_ptr<std::vector<char>>(new std::vector<char>());
    }
    file.close();
    return buffer;
}
And then for reading the image from the buffer:
std::shared_ptr<std::vector<char>> buffer = utility::readFileIntoBuffer(path);
cv::Mat image = cv::imdecode(*buffer, cv::IMREAD_UNCHANGED);
Method 2
QByteArray readFileIntoBuffer(QString const & path) {
    QFile file(path);
    if (!file.open(QIODevice::ReadOnly)) {
        return QByteArray();
    }
    return file.readAll();
}
And for decoding the image:
QByteArray buffer = utility::readFileIntoBuffer(path);
cv::Mat matBuffer(1, buffer.size(), CV_8U, buffer.data());
cv::Mat image = cv::imdecode(matBuffer, cv::IMREAD_UNCHANGED);
UPDATE
Method 3
This method maps the file into memory using QFileDevice::map and then uses cv::imdecode.
QFile file(path);
file.open(QIODevice::ReadOnly);
unsigned char * fileContent = file.map(0, file.size(), QFileDevice::MapPrivateOption);
cv::Mat matBuffer(1, file.size(), CV_8U, fileContent);
cv::Mat image = cv::imdecode(matBuffer, cv::IMREAD_UNCHANGED);
However, this approach also didn't turn out to be any faster than the other two. I also did some time measurements and found that reading the file into memory or mapping it into memory is actually not the bottleneck. The operation that takes the majority of the time is cv::imdecode. I don't know why this is the case, since using cv::imread with the same image only takes a fraction of the time.
Potential Workaround
I tried obtaining an 8.3 pathname on Windows for files that contain non-ASCII characters, using the following code:
QString getShortPathname(QString const & path) {
#ifndef Q_OS_WIN
    return QString();
#else
    long length = 0;
    WCHAR* buffer = nullptr;
    length = GetShortPathNameW(path.toStdWString().c_str(), nullptr, 0);
    if (length == 0) return QString();
    buffer = new WCHAR[length];
    length = GetShortPathNameW(path.toStdWString().c_str(), buffer, length);
    if (length == 0) {
        delete[] buffer;
        return QString();
    }
    QString result = QString::fromWCharArray(buffer);
    delete[] buffer;
    return result;
#endif
}
However, I found out that 8.3 pathname generation is disabled on my machine, so it potentially is on others as well. I therefore wasn't able to test this yet, and it does not seem to provide a reliable workaround. Another problem is that the function doesn't tell me that 8.3 pathname generation is disabled.
There is an open ticket on this in OpenCV GitHub: https://github.com/opencv/opencv/issues/4292
One of the comments there suggests a workaround that avoids reading the whole file into memory, by using a memory-mapped file (with help from Boost):
mapped_file map(path(L"filename"), ios::in);
Mat file(1, numeric_cast<int>(map.size()), CV_8S, const_cast<char*>(map.const_data()), CV_AUTOSTEP);
Mat image(imdecode(file, 1));
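Filling in the includes, that snippet amounts to roughly the following (a sketch only, decodeViaMemoryMap is a hypothetical name; whether mapped_file_source accepts a boost::filesystem::path directly, which is what makes wide/non-ASCII paths work, depends on the Boost version):
#include <boost/filesystem/path.hpp>
#include <boost/iostreams/device/mapped_file.hpp>
#include <opencv2/opencv.hpp>

cv::Mat decodeViaMemoryMap(const boost::filesystem::path& p) {
    // Map the file read-only; no copy of the file contents is made.
    boost::iostreams::mapped_file_source map(p);
    cv::Mat buf(1, static_cast<int>(map.size()), CV_8U,
                const_cast<char*>(map.data()));
    return cv::imdecode(buf, cv::IMREAD_UNCHANGED);
}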

OpenGL program works only in Debug mode in Visual Studio 2013

My small OpenGL application works fine in Debug mode, but if I build it in Release mode I often get this error:
Shader Creation Error:
- Vertex shader failed to compile with the following errors:
ERROR: 0:22: error(#132) Syntax error: "]" parse error
ERROR: error(#273) 1 compilation errors. No code generated
The strange thing is that the error occurs most of the time, but sometimes the program works fine. I think it has something to do with the file stream, but I cannot figure out what it is.
This is the corresponding part of my code:
std::ifstream file(fp);
if(!file) crit_error("Shader Loading", ("file "+fp+" doesn't exist").c_str());
file.seekg(0, file.end);
GLint len = GLint(file.tellg());
file.seekg(0, file.beg);
GLchar* buf = new GLchar[len];
file.read(buf, len);
file.close();
std::string type = fp.substr(fp.size()-4, 4);
if(type == ".vsh")
id = glCreateShader(GL_VERTEX_SHADER);
else if(type == ".fsh")
id = glCreateShader(GL_FRAGMENT_SHADER);
else if(type == ".csh")
id = glCreateShader(GL_COMPUTE_SHADER);
glShaderSource(id, 1, (const GLchar**)&buf, &len);
glCompileShader(id);
delete[] buf;
Your problem lies here:
file.seekg(0, file.end);
GLint len = GLint(file.tellg());
file.seekg(0, file.beg);
GLchar* buf = new GLchar[len];
file.read(buf, len);
file.close();
This code reads exactly the reported length of the file and nothing more. Unfortunately, file sizes don't actually tell you how much there is to read; if the read comes up short, the rest of the buffer keeps whatever garbage was in the memory buf points to before it was allocated to your program. This also explains why it works in debug mode: in debug mode, buffers are usually allocated a little bit larger to allow for out-of-bounds access detection, and variables and buffers left uninitialized by the programmer are set to zero. While useful for some debugging, this may turn regular bugs into Heisenbugs.
Furthermore, ifstream::read may return less than the requested number of bytes, for example if it runs into an end-of-file situation, and it leaves the rest of the buffer untouched. As it happens, ifstream::get will return NUL when you hit the end of file, so it fills up the rest of your buffer with terminating NUL bytes.
The proper way to read a file that is going to be passed to C string processing functions is this:
file.seekg(0, file.end);
GLint len = GLint(file.tellg());
file.seekg(0, file.beg);
GLchar* buf = new GLchar[len + 1];
buf[len] = 0;
file.read(buf, len);
streamsize rb = file.gcount();
if( rb < len ) {
    /* file read short */
    /* either way zero out the remainder of
     * the buffer untouched by the read. */
    memset(buf + rb, 0, len - rb);
    /* should also log some warning message here. */
}
file.close();
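If you'd rather not track the length by hand at all, a small alternative sketch (not part of the original answer, just standard C++) reads the whole file into a std::string, whose size then reflects what was actually read and which is already NUL-terminated via c_str():
#include <fstream>
#include <iterator>
#include <string>

std::string readWholeFile(const std::string& fp) {
    std::ifstream file(fp, std::ios::binary);   // binary mode, no newline translation
    // The string is sized by the bytes actually read, not by the reported file size.
    return std::string(std::istreambuf_iterator<char>(file),
                       std::istreambuf_iterator<char>());
}
// Usage idea: keep the source in a std::string and pass source.c_str() to
// glShaderSource with a NULL length array, since the string is NUL-terminated.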
OK. I simply tried a different reading method and it worked.
GLchar* buf = new GLchar[len];
for(int i = 0; i < len; ++i)
    buf[i] = file.get();
file.close();
I think this is a bug in the Visual C++ compiler.

Loading text file from zip archive with minizip/zlib, garbage characters at end of file

I am trying to load shader source from inside a zip file; the shader is a plain text file created with Notepad. The loading code is as follows (error-checking code removed from the snippet below):
std::string retrievestringfromarchive(std::string filename)
{
    //data is the zip resource attached elsewhere
    unz_file_info info;
    Uint8* rwh;
    unzLocateFile(data, filename.c_str(), NULL);
    unzOpenCurrentFile(data);
    unzGetCurrentFileInfo(data, &info, NULL, 0, NULL, 0, NULL, 0);
    rwh = (Uint8*)malloc(info.uncompressed_size);
    unzReadCurrentFile( data, rwh, info.uncompressed_size );
    //garbage at end of file
    const char* rwh1 = reinterpret_cast<char*>(rwh);
    std::stringstream tempstream(rwh1);
    std::string tempstring = tempstream.str();
    free(rwh);
    return tempstring;
}
The output of the string returned is as follows:
//FRAGMENT SHADER
#version 120
//in from vertex shader
varying vec2 f_texcoord;
varying vec4 f_color;
uniform sampler2D mytexture;
void main(void)
{
gl_FragColor = texture2D(mytexture, f_texcoord) * f_color;
}
//endfile««««««««îþîþ
Notes:
I checked the info struct; both the compressed and uncompressed sizes match the information from 7zip
the buffer "rwh" itself has the garbage characters at the end when inspected with gdb
I am on Win7 64-bit, using Code::Blocks and TDM-GCC-32 4.8.1 to compile
the "//endfile" comment neatly avoids the GL shader compile issue, but that has got to go.
rwh = (Uint8*)malloc(info.uncompressed_size);
unzReadCurrentFile( data, rwh, info.uncompressed_size );
I highly doubt that unzReadCurrentFile adds a 0 terminator to the buffer - there would be no space for it anyway - and you are using the pointer as a 0-terminated string.
In case it really makes sense to interpret the buffer as a string, you can do it like so:
std::string tempstring(rwh1, info.uncompressed_size);
The decompressor gives you a block of decompressed data, but it doesn't know the data is plain text and that you are planning to use it as a C-language string. So it doesn't append a terminating NUL (zero) character at the end. That's all. Simply copy the given number of characters and do not assume the data block is zero-terminated.
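A sketch of how the tail of retrievestringfromarchive could look with that fix applied (the return-value check is an addition of mine; unzReadCurrentFile returns the number of bytes read, or a negative value on error):
int bytesRead = unzReadCurrentFile(data, rwh, info.uncompressed_size);
// Construct the string with an explicit length instead of assuming a NUL
// terminator at the end of the decompressed block.
std::string tempstring(reinterpret_cast<char*>(rwh),
                       bytesRead > 0 ? static_cast<size_t>(bytesRead) : 0);
free(rwh);
return tempstring;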

Error creating shader from memory

Because d3dcompiler*.dll has become undesirable, I'm trying to remove the dependency on it from my app. And there is something wrong with my new Compiled Shader Object (.cso) file loading code.
ifstream fstream;
fstream.open (vsfile);
if(fstream.fail())
    return false;
fstream.seekg( 0, ios::end );
size_t size = size_t(fstream.tellg());
char* data = new char[size];
fstream.seekg(0, ios::beg);
fstream.read( data, size);
fstream.close();
XTRACE2(pDevice->CreateVertexShader(&data, size, 0, &m_pVertexShader))
The problem: CreateVertexShader() returning E_INVALIDARG error.
The old code with D3DReadFileToBlob() works fine. The blob returns a buffer of the same size as my char* or std::vector<char>, and it's equal to the .cso file size.
I know there are new Windows 8 examples on MSDN, but they use some new Metro stuff. I want to do it with plain C++.
XTRACE2 is just DirectX error checking macro.
Thanks in advance!
The error was caused by reading the binary file in text mode. It works as intended if the binary flag is added when opening the file. Something like this:
std::ifstream fstream;
fstream.open (filename, std::ifstream::in | std::ifstream::binary);
if(fstream.fail())
    return false;
fstream.seekg( 0, std::ios::end );
size_t size = size_t(fstream.tellg());
data.resize(size);
fstream.seekg(0, std::ios::beg);
fstream.read( &data[0], size);
fstream.close();
Shame on me =\
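For completeness, a minimal sketch of the create call that goes with the corrected loader, assuming data is a std::vector<char> (the resize/read lines suggest something like that) and pDevice/m_pVertexShader are as in the question:
// Pass the raw bytecode pointer and its size in bytes. Note it's data.data()
// (or &data[0]), not &data as in the question's snippet.
HRESULT hr = pDevice->CreateVertexShader(data.data(), data.size(),
                                         nullptr, &m_pVertexShader);
if (FAILED(hr))
    return false;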

Shader GLSL file Not compiling

I am writing my first program using OpenGL, and I have gotten to the point where I am trying to get it to compile my extremely simple shader program. I always get the error that it failed to compile the shader. I use the following code to compile the shader:
struct Shader
{
    const char* filename;
    GLenum type;
    GLchar* source;
};
...
static char* readShaderSource(const char* shaderFile)
{
    FILE* fp = fopen(shaderFile, "r");
    if ( fp == NULL ) { return NULL; }
    fseek(fp, 0L, SEEK_END);
    long size = ftell(fp);
    fseek(fp, 0L, SEEK_SET);
    char* buf = new char[size + 1];
    fread(buf, 1, size, fp);
    buf[size] = '\0';
    fclose(fp);
    return buf;
}
...
Shader s;
s.filename = "<name of shader file>";
s.type = GL_VERTEX_SHADER;
s.source = readShaderSource( s.filename );
GLuint shader = glCreateShader( s.type );
glShaderSource( shader, 1, (const GLchar**) &s.source, NULL );
glCompileShader( shader );
And my shader file source is as follows:
#version 150
in vec4 vPosition;
void main()
{
    gl_Position = vPosition;
}
I have also tried replacing "in" with "attribute" as well as deleting the version line. Nothing compiles.
Note:My actual C program compiles and runs. The shader program that runs on the GPU is what is failing to compile.
I have also made sure to download my graphics card's latest driver. I have an NVIDIA 8800 GTS 512.
Any ideas on how to get my shader program (written in GLSL) to compile?
As noted in the comments, does compiling the shader output anything to the console? To my surprise, while I was using ATI I got a message that the shader program compiled successfully, whereas when I started using Nvidia I was staring at the screen at first because nothing was output; the shaders were working, however. So maybe you are compiling successfully and just not retrieving the result? And if the shaders are not working in the context where you try to use the shader program and nothing happens, I think you're missing the linking of the shader (it may be further down in your code, though). Google has some good answers on how to correctly perform every step; you can compare your code to this example.
I also made an interface for working with shaders; you can take a look at my UniShader. The project lacks English documentation and is mainly used for GPGPU, but you can easily load any shader, and the code itself uses English naming, so it should be quite comfortable. Look in the UniShader folder in that zip for the source code. There are also a few examples; the one named "Ukazkovy program na GPGPU" also has its source included, so you can see how to use those classes. Good luck!
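For reference, the standard way to retrieve the compile status and the driver's error log after glCompileShader looks roughly like this (checkShaderCompile is just a hypothetical helper name; include whichever GL loader/header the project already uses for GLuint, glGetShaderiv, etc.):
#include <cstdio>
#include <vector>

bool checkShaderCompile(GLuint shader) {
    GLint status = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &status);

    GLint logLen = 0;
    glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &logLen);
    if (logLen > 1) {
        std::vector<GLchar> log(logLen);
        glGetShaderInfoLog(shader, logLen, NULL, log.data());
        std::fprintf(stderr, "shader info log:\n%s\n", log.data());
    }
    return status == GL_TRUE;
}
// Call it right after glCompileShader( shader ); to see the actual error text.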