My small OpenGL application works fine in Debug mode, but if I build it in Release mode I often get this error:
Shader Creation Error:
- Vertex shader failed to compile with the following errors:
ERROR: 0:22: error(#132) Syntax error: "]" parse error
ERROR: error(#273) 1 compilation errors. No code generated
The strange thing is that the error occurs most of the time, but sometimes the program works fine. I think it has something to do with the file stream, but I cannot figure out what it is.
This is the corresponding part of my code:
std::ifstream file(fp);
if(!file) crit_error("Shader Loading", ("file "+fp+" doesn't exist").c_str());
file.seekg(0, file.end);
GLint len = GLint(file.tellg());
file.seekg(0, file.beg);
GLchar* buf = new GLchar[len];
file.read(buf, len);
file.close();
std::string type = fp.substr(fp.size()-4, 4);
if(type == ".vsh")
    id = glCreateShader(GL_VERTEX_SHADER);
else if(type == ".fsh")
    id = glCreateShader(GL_FRAGMENT_SHADER);
else if(type == ".csh")
    id = glCreateShader(GL_COMPUTE_SHADER);
glShaderSource(id, 1, (const GLchar**)&buf, &len);
glCompileShader(id);
delete[] buf;
Your problem lies here:
file.seekg(0, file.end);
GLint len = GLint(file.tellg());
file.seekg(0, file.beg);
GLchar* buf = new GLchar[len];
file.read(buf, len);
file.close();
This code reads at most the length of the file and nothing more. Unfortunately the file size does not tell you how much there actually is to read: the stream is opened in text mode, and on Windows the "\r\n" to "\n" translation means a text-mode read delivers fewer characters than tellg() reported. If the read comes up short, the tail of buf keeps whatever garbage happened to be in that memory before it was handed to your program. This also explains why it works in Debug mode: debug builds usually allocate buffers a little larger to allow out-of-bounds access detection, and memory left uninitialized by the programmer is filled with a known pattern. While useful for some debugging, this can turn regular bugs into Heisenbugs.
Furthermore, ifstream::read may read fewer characters than requested, for example when it runs into end of file, and it leaves the rest of the buffer untouched. A per-character get() loop at least writes a defined value into every element (once end of file is reached, get() returns EOF), so the buffer never contains stale memory, which makes the behaviour reproducible.
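To see the mismatch for yourself, here is a minimal standalone sketch (the "shader.vsh" path is just a placeholder) that prints the length reported by tellg() next to the number of characters the text-mode read actually delivered:
#include <fstream>
#include <iostream>

int main() {
    std::ifstream file("shader.vsh");       // text mode: "\r\n" comes out as "\n"
    file.seekg(0, file.end);
    std::streamsize len = file.tellg();     // size on disk
    file.seekg(0, file.beg);

    char* buf = new char[len];
    file.read(buf, len);
    std::cout << "tellg said " << len
              << ", read delivered " << file.gcount() << "\n";
    // On Windows the second number is typically smaller by one per CRLF line,
    // and buf[file.gcount()] .. buf[len - 1] are never written.
    delete[] buf;
}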
The proper way to read a file being passed into C string processing functions is this:
file.seekg(0, file.end);
GLint len = GLint(file.tellg());
file.seekg(0, file.beg);

GLchar* buf = new GLchar[len + 1];
buf[len] = 0;                        /* always NUL-terminate */
file.read(buf, len);
std::streamsize rb = file.gcount();  /* how much was actually read */

if( rb < len ) {
    /* The read came up short: zero out the remainder of
     * the buffer that the read left untouched.
     * (memset needs <cstring>; a warning should also be logged here.) */
    memset(buf + rb, 0, len - rb);
}
file.close();
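Since buf is now guaranteed to be NUL-terminated, you can also let OpenGL find the end of the string itself by passing NULL for the length array (this is standard glShaderSource behaviour, shown here only as an option):
glShaderSource(id, 1, (const GLchar**)&buf, NULL);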
OK. I simply tried a different reading method and it worked.
GLchar* buf = new GLchar[len];
for(int i = 0; i < len; ++i)
buf[i] = file.get();
file.close();
I think this is a bug in the Visual C++ compiler.
Related
I did a sample project to read a file into a buffer.
When I use the tellg() function it gives me a larger value than the
read function actually reads from the file. I think there is a bug.
here is my code:
EDIT:
void read_file(const char* name, int* size, char*& buffer)
{
    ifstream file;
    file.open(name, ios::in | ios::binary);
    *size = 0;
    if (file.is_open())
    {
        // get length of file
        file.seekg(0, std::ios_base::end);
        int length = *size = file.tellg();
        file.seekg(0, std::ios_base::beg);
        // allocate buffer in size of file
        buffer = new char[length];
        // read
        file.read(buffer, length);
        cout << file.gcount() << endl;
    }
    file.close();
}
main:
void main()
{
    int size = 0;
    char* buffer = NULL;
    read_file("File.txt", &size, buffer);
    for (int i = 0; i < size; i++)
        cout << buffer[i];
    cout << endl;
}
tellg does not report the size of the file, nor the offset
from the beginning in bytes. It reports a token value which can
later be used to seek to the same place, and nothing more.
(It's not even guaranteed that you can convert the type to an
integral type.)
At least according to the language specification: in practice,
on Unix systems, the value returned will be the offset in bytes
from the beginning of the file, and under Windows, it will be
the offset from the beginning of the file for files opened in
binary mode. For Windows (and most non-Unix systems), in text
mode, there is no direct and immediate mapping between what
tellg returns and the number of bytes you must read to get to
that position. Under Windows, all you can really count on is
that the value will be no less than the number of bytes you have
to read (and in most real cases, won't be too much greater,
although it can be up to two times more).
If it is important to know exactly how many bytes you can read,
the only way of reliably doing so is by reading. You should be
able to do this with something like:
#include <limits>
file.ignore( std::numeric_limits<std::streamsize>::max() );
std::streamsize length = file.gcount();
file.clear(); // Since ignore will have set eof.
file.seekg( 0, std::ios_base::beg );
Finally, two other remarks concerning your code:
First, the line:
*buffer = new char[length];
shouldn't compile: you have declared buffer to be a char*,
so *buffer has type char, and is not a pointer. Given what
you seem to be doing, you probably want to declare buffer as
a char**. But a much better solution would be to declare it
as a std::vector<char>& or a std::string&. (That way, you
don't have to return the size as well, and you won't leak memory
if there is an exception.)
Second, the loop condition at the end is wrong. If you really
want to read one character at a time,
while ( file.get( buffer[i] ) ) {
    ++i;
}
should do the trick. A better solution would probably be to
read blocks of data:
while ( file.read( buffer + i, N ) || file.gcount() != 0 ) {
    i += file.gcount();
}
or even:
file.read( buffer, size );
size = file.gcount();
EDIT: I just noticed a third error: if you fail to open the
file, you don't tell the caller. At the very least, you should
set the size to 0 (but some sort of more precise error
handling is probably better).
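Putting the three remarks together, here is a sketch (not your exact interface, just an illustration) of what read_file could look like with a std::vector<char>, using the ignore()/gcount() trick from above to learn how many bytes are really readable:
#include <fstream>
#include <limits>
#include <vector>

// Returns false if the file could not be opened; on success `buffer`
// holds exactly the bytes that were actually read.
bool read_file(const char* name, std::vector<char>& buffer)
{
    std::ifstream file(name, std::ios::in | std::ios::binary);
    if (!file.is_open())
        return false;

    // Count the readable bytes instead of trusting tellg().
    file.ignore(std::numeric_limits<std::streamsize>::max());
    std::streamsize length = file.gcount();
    file.clear();                   // ignore() set eofbit
    file.seekg(0, std::ios_base::beg);

    buffer.resize(length);
    file.read(buffer.data(), length);
    buffer.resize(file.gcount());   // shrink if the read still came up short
    return true;
}
The caller then only needs std::vector<char> data; read_file("File.txt", data); there is no size out-parameter to keep in sync and nothing to delete.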
In C++17 there are std::filesystem::file_size functions and methods, which can streamline the whole task.
std::filesystem::file_size - cppreference.com
std::filesystem::directory_entry::file_size - cppreference.com
With those functions/methods there is a chance of not opening the file at all and instead using cached directory data (especially with the std::filesystem::directory_entry::file_size method).
Those functions also require only directory read permission, not file read permission as the tellg() approach does.
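For illustration, a sketch of the C++17 route (the helper name is my own; open in binary mode so the size on disk matches what read() delivers):
#include <filesystem>
#include <fstream>
#include <vector>

std::vector<char> read_whole_file(const std::filesystem::path& p)
{
    auto size = std::filesystem::file_size(p);   // no open + seek needed
    std::vector<char> buffer(size);
    std::ifstream file(p, std::ios::binary);     // binary, so bytes match the size
    file.read(buffer.data(), buffer.size());
    return buffer;
}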
void read_file (int *size, char* name,char* buffer)
*buffer = new char[length];
These lines do look like a bug: you allocate a char array, but the assignment goes through *buffer, i.e. into the single char the pointer refers to, and the caller's buffer pointer itself is never updated. You then read the file into a buffer that is still uninitialized.
You need to pass buffer by pointer:
void read_file (int *size, char* name,char** buffer)
*buffer = new char[length];
Or by reference, which is the C++ way and is less error-prone:
void read_file (int *size, char* name,char*& buffer)
buffer = new char[length];
...
fseek(fptr, 0L, SEEK_END);
filesz = ftell(fptr);
will give the file size if the file was opened with fopen. Using an ifstream,
in.seekg(0, ifstream::end);
filesz = in.tellg();
does much the same thing.
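A minimal, self-contained version of that C-style route (the "data.bin" path is just a placeholder), opened in binary mode so ftell matches what fread returns:
#include <cstdio>
#include <vector>

int main() {
    std::FILE* fptr = std::fopen("data.bin", "rb");
    if (!fptr) return 1;

    std::fseek(fptr, 0L, SEEK_END);
    long filesz = std::ftell(fptr);
    std::fseek(fptr, 0L, SEEK_SET);
    if (filesz <= 0) { std::fclose(fptr); return 1; }

    std::vector<char> buf(filesz);
    std::size_t got = std::fread(buf.data(), 1, filesz, fptr);
    std::printf("size %ld, read %zu\n", filesz, got);   // got can still come up short

    std::fclose(fptr);
    return 0;
}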
I am still learning C++, so please advise if I am misunderstanding something here.
Using an ESP32, I am trying to read / write files to Flash / FFat. This is the method I have created which should read a file from flash and load it into PSRAM:
unsigned char* storage_read(char* path) {
    File file = FFat.open(path);
    if (!file) {
        Serial.println("no file");
        return 0x00;
    }

    int count = file.size();
    unsigned char* buffer = (unsigned char*)ps_malloc(count);

    Serial.printf("Bytes: %d\n", count);
    Serial.printf("Count: %d\n", sizeof(buffer));

    for (int i = 0; i < count; i++) {
        buffer[i] = (unsigned char)file.read();
    }

    file.close();
    return buffer;
}
The problem is that I get the contents of my b64 data file, with the addition of several extra bytes of data globbed on the end.
Calling the method with:
Serial.printf("Got: %s", storage_read("/frame/testframe-000.b64"));
I get the output:
Bytes: 684
Count: 4
Got: <myb64string> + <68B of garbage>
Why would sizeof not be returning the proper size?
What would be the proper way of loading this string into a buffer?
Why would sizeof not be returning the proper size?
That's because sizeof() has a very specific (and not very intuitive) function: it queries, at compile time, the size of the type passed to it. Calling sizeof(buffer) returns the size, in bytes, of the type of the variable buffer. That type is unsigned char*, a pointer, which on the ESP32 is a 4-byte memory address. So that's what you get.
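A plain C++ illustration of the difference (nothing ESP32-specific here):
#include <cstdio>

int main() {
    unsigned char stack_array[684];
    unsigned char* heap_buffer = new unsigned char[684];

    std::printf("%zu\n", sizeof(stack_array));  // 684: the array type carries its size
    std::printf("%zu\n", sizeof(heap_buffer));  // 4 (or 8 on 64-bit): just the pointer
    // For heap allocations you have to carry the element count yourself,
    // as the `count` variable in storage_read() already does.

    delete[] heap_buffer;
    return 0;
}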
What would be the proper way of loading this string into a buffer?
What I noticed is that you're expecting to load string data from your file, but you don't explicitly terminate it with a zero byte. As you probably know, all C strings must be terminated with a zero byte. Data that you load from the file most likely doesn't have one (unless you took extra care to add it while saving). So when you read a string from a file sized N bytes, allocate a buffer of N+1 bytes, load the file into it and terminate it with a zero. Something like this:
unsigned char* storage_read(char* path) {
    File file = FFat.open(path);
    if (!file) {
        Serial.println("no file");
        return 0x00;
    }

    int count = file.size();
    unsigned char* buffer = (unsigned char*)ps_malloc(count + 1); //< Updated
    Serial.printf("Bytes: %d\n", count);
    Serial.printf("Count: %d\n", sizeof(buffer));

    for (int i = 0; i < count; i++) {
        buffer[i] = (unsigned char)file.read();
    }
    buffer[count] = 0; //< Added

    file.close();
    return buffer;
}
And since you're returning a heap-allocated buffer from your function, take extra care to remember to release it in the caller when finished (it came from ps_malloc, so free() it). This line in your code will leak the memory:
Serial.printf("Got: %s", storage_read("/frame/testframe-000.b64"));
I'm writing a simple Vulkan application to get familiar with the API. When I call vkCreateGraphicsPipelines my program prints "LLVM ERROR: Invalid SPIR-V magic number" to stderr and exits.
The SPIR-V spec (https://www.khronos.org/registry/spir-v/specs/1.2/SPIRV.pdf, chapter 3 is relevant here I think) states that shader modules are assumed to be a stream of words, not bytes, and my SPIR-V files were a stream of bytes.
So I byteswapped the first two words of my SPIR-V files, and it did recognize the magic number, but vkCreateGraphicsPipelines exited with error code -1000012000 (the definition of VK_ERROR_INVALID_SHADER_NV) meaning the shader stage failed to compile (see https://www.khronos.org/registry/vulkan/specs/1.2-extensions/man/html/vkCreateShaderModule.html). The exact same thing happens when I byteswap the entire SPIR-V files (with "dd conv=swab").
I'm not sure what the issue is in the first place, since https://www.khronos.org/registry/vulkan/specs/1.2-extensions/man/html/VkShaderModuleCreateInfo.html states that the format of the SPIR-V code is automatically determined. If anyone can recommend a fix, even if it's a hack, I would appreciate it.
I'm generating SPIR-V with glslangValidator, if that matters.
The code that loads the shader module:
std::vector<char> readFile(const std::string& filename) {
    std::ifstream file(filename, std::ios::ate | std::ios::binary);
    size_t fileSize = (size_t) file.tellg();
    std::vector<char> buffer(fileSize);
    file.seekg(0);
    file.read(buffer.data(), fileSize);
    file.close();
    return buffer;
}
VkShaderModule getShadMod(VkDevice dev, const std::string shadFileName) {
    std::vector<char> shader = readFile(shadFileName);

    VkShaderModuleCreateInfo smci;
    smci.sType = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO;
    smci.pNext = NULL;
    smci.flags = 0;
    smci.codeSize = shader.size();
    smci.pCode = reinterpret_cast<uint32_t *>(shader.data());

    VkShaderModule shadMod;
    asr(vkCreateShaderModule(dev, &smci, NULL, &shadMod),
        "create shader module error");
    return shadMod;
}
In the following code the read method doesn't seem to fill the given buffer:
ifstream pkcs7_file(file_name, std::ios::binary);
if (pkcs7_file.fail())
{
    std::cout << "File failed before reading!\n";
}
pkcs7_file.seekg(0, pkcs7_file.end);
size_t len = pkcs7_file.tellg();
char* buffer = new char[len];
pkcs7_file.read(buffer, len);
pkcs7_file.close();
When debugging with VS 2012 and printing, the len variable is as expected (and not zero), but the buffer doesn't change after the read call - it still holds the same value as before the read.
What am I doing wrong?
You seek to end-of-file, and then try to read. Of course it fails - the file is positioned at EOF, there's no data to read.
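The missing step, as a sketch (file_name as in the question): seek back to the beginning after measuring the length, and check gcount() afterwards to see how much really arrived:
std::ifstream pkcs7_file(file_name, std::ios::binary);
pkcs7_file.seekg(0, pkcs7_file.end);
size_t len = pkcs7_file.tellg();
pkcs7_file.seekg(0, pkcs7_file.beg);          // rewind before reading

char* buffer = new char[len];
pkcs7_file.read(buffer, len);
std::streamsize got = pkcs7_file.gcount();    // bytes actually read
pkcs7_file.close();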
Because d3dcompiler*.dll has become undesirable, I'm trying to remove the dependencies on it from my app. And there is something wrong with my new Compiled Shader Object (.cso) file loading code.
ifstream fstream;
fstream.open(vsfile);
if (fstream.fail())
    return false;
fstream.seekg(0, ios::end);
size_t size = size_t(fstream.tellg());
char* data = new char[size];
fstream.seekg(0, ios::beg);
fstream.read(data, size);
fstream.close();
XTRACE2(pDevice->CreateVertexShader(&data, size, 0, &m_pVertexShader))
The problem: CreateVertexShader() returning E_INVALIDARG error.
The old code using D3DReadFileToBlob() works fine. The blob returns a buffer of the same size as my char* / std::vector<char> buffer, and it is equal to the .cso file size.
I know there are new Windows 8 examples on MSDN, but they use some new Metro stuff. I want to do it with plain C++.
XTRACE2 is just DirectX error checking macro.
Thanks in advance!
The error was caused by reading a binary file in text mode. It works as intended after adding the binary flag when opening the file. Something like this:
std::ifstream fstream;
fstream.open(filename, std::ifstream::in | std::ifstream::binary);
if (fstream.fail())
    return false;

std::vector<char> data;              // declared here for completeness
fstream.seekg(0, std::ios::end);
size_t size = size_t(fstream.tellg());
data.resize(size);
fstream.seekg(0, std::ios::beg);
fstream.read(&data[0], size);
fstream.close();
Shame on me =\