Unable to read the entire file correctly using fseek() and fread() - C++

I have a file with shader source that I want to read; it looks like this:
#version 460 core
layout(location = 0) in vec2 pos;
layout(location = 1) in vec3 color;
layout(location = 0) out vec3 fragColor;
uniform float rand;
out gl_PerVertex
{
vec4 gl_Position;
float gl_PointSize;
float gl_ClipDistance[];
};
void main()
{
fragColor = color * rand;
gl_Position = vec4(pos.x + 0.5 * gl_InstanceID, pos.y, 0, 1);
}
First I find out the size of my file:
fseek(m_file, 0L, SEEK_END);
size_t size = ftell(m_file);
This returns 364. The problem is that if I just copy-paste the file content into a R"()" string and take its strlen, it returns 347.
After getting the file size I try reading the whole file:
fseek(m_file, 0, SEEK_SET);
size_t count = fread(buffer, 1, size, m_file);
where buffer is allocated with 364 bytes. But fread returns 347 and feof(m_file) returns true. So as a result I get this:
(File explorer also shows that the file size is 364).
However, when I read the same file into a string using std::ifstream, everything works properly:
std::ifstream ifs(filename);
std::string content((std::istreambuf_iterator<char>(ifs)),
(std::istreambuf_iterator<char>()));
auto size = content.size();
and size is equal to 347.
The question is: am I doing something wrong, or do Windows/fseek just report the file size wrongly?

The difference between the two sizes is 19, which coincidentally is the number of lines in your shader.
My guess is this has something to do with line ending conversions. Open the file in binary mode and the discrepancy should go away.
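For example, the stdio calls from the question might become something like this (a minimal sketch, assuming a filename and an appropriately sized buffer; error checks omitted):
FILE* m_file = fopen(filename, "rb"); // "rb" instead of "r": no CRLF-to-LF translation
fseek(m_file, 0L, SEEK_END);
long size = ftell(m_file);
fseek(m_file, 0L, SEEK_SET);
size_t count = fread(buffer, 1, (size_t)size, m_file); // count should now equal size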

GLSL uint_fast64_t type

How can I get an input to the vertex shader of type uint_fast64_t?
There is no such type available in the language, so how can I pass it differently?
My code is this:
#version 330 core
#define CHUNK_SIZE 16
#define BLOCK_SIZE_X 0.1
#define BLOCK_SIZE_Y 0.1
#define BLOCK_SIZE_Z 0.1
// input vertex and UV coordinates, different for all executions of this shader
layout(location = 0) in uint_fast64_t vertexPosition_modelspace;
layout(location = 1) in vec2 vertexUV;
// Output data ; will be interpolated for each fragment.
out vec2 UV;
// model view projection matrix
uniform mat4 MVP;
int getAxis(uint_fast64_t p, int choice) { // axis: 0=x 1=y 2=z 3=index_x 4=index_z
switch (choice) {
case 0:
return (int)((p>>59 ) & 0xF); //extract the x axis int but i only want 4bits
case 1:
return (int)((p>>23 ) & 0xFF);//extract the y axis int but i only want 8bits
case 2:
return (int)((p>>55 ) & 0xF);//extract the z axis int but i only want 4bits
case 3:
return (int)(p & 0x807FFFFF);//extract the index_x 24bits
case 4:
return (int)((p>>32) & 0x807FFFFF);//extract the index_z 24bits
}
}
void main()
{
// assign vertex position
float x = (getAxis(vertexPosition_modelspace,0) + getAxis(vertexPosition_modelspace,3)*CHUNK_SIZE)*BLOCK_SIZE_X;
float y = getAxis(vertexPosition_modelspace,1)*BLOCK_SIZE_Y;
float z = (getAxis(vertexPosition_modelspace,2) + getAxis(vertexPosition_modelspace,3)*CHUNK_SIZE)*BLOCK_SIZE_Z;
gl_Position = MVP * vec4(x,y,z, 1.0);
// UV of the vertex. No special space for this one.
UV = vertexUV;
}
The error message I am getting is:
I tried to use uint64_t instead, but I get the same problem.
Unextended GLSL for OpenGL does not have the ability to directly use 64-bit integer values. And even the fairly widely supported ARB extension that allows for the use of 64-bit integers within shaders doesn't actually allow you to use them as vertex shader attributes. That requires an NVIDIA extension supported only by... NVIDIA.
However, you can send 32-bit integers, and a 64-bit integer is just two 32-bit integers. You can put 64-bit integers into the buffer and pass them as two 32-bit unsigned integers in your vertex attribute format:
glVertexAttribIFormat(0, 2, GL_UNSIGNED_INT, <byte_offset>);
Your shader will retrieve them as a uvec2 input:
layout(location = 0) in uvec2 vertexPosition_modelspace;
The x component of the vector will hold the first 4 bytes and the y component will hold the second 4 bytes. But since "first" and "second" are determined by your CPU's endianness, you'll need to know whether your CPU is little endian or big endian to be able to use them. Since most desktop GL implementations are paired with little endian CPUs, we'll assume that is the case.
In this case, vertexPosition_modelspace.x contains the low 4 bytes of the 64-bit integer, and vertexPosition_modelspace.y contains the high 4 bytes.
So your code could be adjusted as follows (with some cleanup):
const vec3 BLOCK_SIZE = vec3(0.1, 0.1, 0.1);
//Get the three axes all at once.
uvec3 getAxes(in uvec2 p)
{
return uvec3(
(p.y >> 27) & 0xF,
(p.x >> 23) & 0xFF,
(p.y >> 23) & 0xF
);
}
//Get the indices
uvec2 getIndices(in uvec2 p)
{
return p & 0x807FFFFF; //Performs component-wise bitwise &
}
void main()
{
uvec3 iPos = getAxes(vertexPosition_modelspace);
uvec2 indices = getIndices(vertexPosition_modelspace);
vec3 pos = vec3(
iPos.x + (indices.x * CHUNK_SIZE),
iPos.y,
iPos.z + (indices.x * CHUNK_SIZE) //You used index 3 in your code, so I used .x here, but I think you meant index 4.
);
pos *= BLOCK_SIZE;
...
}
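On the CPU side, a matching attribute setup might look roughly like this (a sketch only; vertexCount, packedData and the buffer name are illustrative, and it uses the classic glVertexAttribIPointer path rather than the separate-format call quoted above):
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, vertexCount * sizeof(uint64_t), packedData, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
// Integer attribute: two unsigned 32-bit components per vertex, read without conversion to float.
glVertexAttribIPointer(0, 2, GL_UNSIGNED_INT, sizeof(uint64_t), (void*)0);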

Find the maximum float in the array

I have a compute shader program which looks for the maximum value in a float array. It uses reduction (compare two values and save the bigger one to the output buffer).
Now I am not quite sure how to run this program from the Java code (using jogamp). In the display() method I run the program once (every time with the halved array in the input SSBO = the result from the previous iteration) and finish when the array of results has only one item - the maximum.
Is this the correct approach - creating and binding the input and output SSBOs in the display() method every time, running the shader program, and then checking how many items were returned?
Java code:
FloatBuffer inBuffer = Buffers.newDirectFloatBuffer(array);
gl.glBindBuffer(GL3ES3.GL_SHADER_STORAGE_BUFFER, buffersNames.get(1));
gl.glBufferData(GL3ES3.GL_SHADER_STORAGE_BUFFER, itemsCount * Buffers.SIZEOF_FLOAT, inBuffer,
GL3ES3.GL_STREAM_DRAW);
gl.glBindBufferBase(GL3ES3.GL_SHADER_STORAGE_BUFFER, 1, buffersNames.get(1));
gl.glDispatchComputeGroupSizeARB(groupsCount, 1, 1, groupSize, 1, 1);
gl.glMemoryBarrier(GL3ES3.GL_SHADER_STORAGE_BARRIER_BIT);
ByteBuffer output = gl.glMapNamedBuffer(buffersNames.get(1), GL3ES3.GL_READ_ONLY);
Shader code:
#version 430
#extension GL_ARB_compute_variable_group_size : enable
layout (local_size_variable) in;
layout(std430, binding = 1) buffer MyData {
vec4 elements[];
} data;
void main() {
uint index = gl_GlobalInvocationID.x;
float n1 = data.elements[index].x;
float n2 = data.elements[index].y;
float n3 = data.elements[index].z;
float n4 = data.elements[index].w;
data.elements[index].x = max(max(n1, n2), max(n3, n4));
}

Extra characters sometimes getting tacked on to the end of a character array containing characters that have been extracted from a file

I'm having trouble reading in shader files for OpenGL. Sometimes the files are read correctly and the shaders can be correctly initialized. Other times, they are not read correctly.
The way I read these files (containing the code for the shaders) is simple. I extract each character from the file into an array. That array then effectively becomes the code for the shader and can be passed to OpenGL for use.
The problem is, sometimes extra characters are tacked onto the end of the array when I read the shader file.
So instead of this:
#version 410 core
in vec4 fragmentColor;
out vec4 color;
void main(){
color = fragmentColor;
}
I get this:
#version 410 core
in vec4 fragmentColor;
out vec4 color;
void main(){
color = fragmentColor;
}\377
And I end up with a black screen.
Again, this doesn't happen all the time; it happens seemingly at random.
This is my function that actually reads the shader files and extracts the characters into an array. (Heavy with comments since I'm assuming posting long code on here is a sin. But the function itself really isn't all that complicated)
const GLchar ** ReadShaders(std::string vertex, std::string fragment){
char tempVert[10000];
char tempFrag[10000];
//Open shader files
std::ifstream vert(vertex);
std::ifstream frag(fragment);
//Don't skip white lines
vert >> std::noskipws;
frag >> std::noskipws;
//Read in vertex shader file by extracting each character into "tempVert"
int vCount = 0;
while( !vert.eof() ){
vert >> tempVert[vCount];
//Break if there are no more characters to read (I know this is redundant, will fix later)
if(vert.eof()){
break;
}
else{
vCount++;
}
}
//Debug
std::cout << vCount << std::endl;
//Create new char array the size of the number of characters extracted from shader file
char * tempVertexSource = new (std::nothrow) GLchar[vCount];
//Copy extracted characters into new "tempVertexSource" array
for(int i = 0; i < vCount; i++){
tempVertexSource[i] = tempVert[i];
}
//Debug
std::cout << tempVertexSource << std::endl;
//Read in fragment shader file by extracting each character into "tempFrag"
int fCount = 0;
while( !frag.eof() ){
frag >> tempFrag[fCount];
//Break if there are no more characters to read (I know this is redundant, will fix later)
if(frag.eof()){
break;
}
else{
fCount++;
}
}
//Debug
std::cout << fCount << std::endl;
//Create new char array the size of the number of characters extracted from shader file
GLchar * tempFragmentSource = new (std::nothrow) GLchar[fCount];
//Copy extracted characters into new "tempFragmentSource" array
for(int i = 0; i < fCount; i++){
tempFragmentSource[i] = tempFrag[i];
}
//Debug
std::cout << tempFragmentSource << std::endl;
//Create an array to hold both shader arrays (for return)
const GLchar ** returnShaders = new (std::nothrow) const GLchar*[2];
//Assign shaders
returnShaders[0] = tempVertexSource;
returnShaders[1] = tempFragmentSource;
//Close files
vert.close();
frag.close();
//Return
return returnShaders;
}
Now, in my function, the number of characters that are extracted from a file is counted. So, this next part is where I'm confused (other than the fact that sometimes the error happens, and sometimes it doesn't).
When outputting (checking) the number of characters that have been extracted from a file I'll get this for a correct run:
Number of characters: 100
#version 410 core
in vec4 fragmentColor;
out vec4 color;
void main(){
color = fragmentColor;
}
However, I'll get the same number of characters when an incorrect reading of the file happens as well:
Number of characters: 100
#version 410 core
in vec4 fragmentColor;
out vec4 color;
void main(){
color = fragmentColor;
}\377
This doesn't make sense, since:
1) The random extra characters seemingly come out of nowhere and at random
2) The number of characters that have been extracted from the file is used for the size of the new array that will contain those characters, so this shouldn't even be possible unless those extra characters are all part of the last character in the array.
So, for example, going off of what I know: for this particular shader file that contains 100 characters, if I hardcode 99 for the size of the char array and for the number of characters copied into the new array, sure, at times I don't get any of the random characters at the end, but it also deletes the "}" character.
Making it:
#version 410 core
in vec4 fragmentColor;
out vec4 color;
void main(){
color = fragmentColor;
I'm very confused. Thanks for any help.
After the while loop, explicitly add a zero terminator (post-incrementing the count so the terminator is included when you size and copy the new array):
tempVert[vCount++] = 0;
It would be way better to (1) get the file size first, (2) allocate memory for it, plus 1 for the zero terminator, then (3) read the entire file into memory at once. As it is, you need to make sure your files are smaller than the static buffers you allocate here.
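A minimal sketch of that approach (get the size, allocate it up front, read everything in one call; error handling omitted):
#include <fstream>
#include <string>

std::string ReadWholeFile(const std::string& path)
{
    std::ifstream in(path, std::ios::binary);
    in.seekg(0, std::ios::end);
    std::string content(static_cast<size_t>(in.tellg()), '\0'); // size known up front
    in.seekg(0, std::ios::beg);
    in.read(&content[0], content.size());                        // one read for the whole file
    return content; // content.c_str() is zero-terminated, ready for glShaderSource
}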

Strange behaviour using in/out block data with OpenGL/GLSL

I have implemented a normal mapping shader in my OpenGL/GLSL application. To compute the bump and shadow factor in the fragment shader I need to send from the vertex shader some data, like the light direction in tangent space and the vertex position in light space, for each light of my scene. So to do the job I need to declare 2 output variables like below (vertex shader):
#define MAX_LIGHT_COUNT 5
[...]
out vec4 ShadowCoords[MAX_LIGHT_COUNT]; //Vertex position in light space
out vec3 lightDir_TS[MAX_LIGHT_COUNT]; //light direction in tangent space
uniform int LightCount;
[...]
for (int idx = 0; idx < LightCount; idx++)
{
[...]
lightDir_TS[idx] = TBN * lightDir_CS;
ShadowCoords[idx] = ShadowInfos[idx].ShadowMatrix * VertexPosition;
[...]
}
And in the fragment shader I recover these variables with the following input declarations:
in vec3 lightDir_TS[MAX_LIGHT_COUNT];
in vec4 ShadowCoords[MAX_LIGHT_COUNT];
The rest of the code is not important to explain my problem.
So now here's the result as an image:
As you can see, up to here all is OK!
But now, for the sake of simplicity, I want to use a single output declaration rather than 2! So the logical choice is to use an input/output data block like below:
#define MAX_LIGHT_COUNT 5
[...]
out LightData_VS
{
vec3 lightDir_TS;
vec4 ShadowCoords;
} LightData_OUT[MAX_LIGHT_COUNT];
uniform int LightCount;
[...]
for (int idx = 0; idx < LightCount; idx++)
{
[...]
LightData_OUT[idx].lightDir_TS = TBN * lightDir_CS;
LightData_OUT[idx].ShadowCoords = ShadowInfos[idx].ShadowMatrix * VertexPosition;
[...]
}
And in the fragment shader the input data block:
in LightData_VS
{
vec3 lightDir_TS;
vec4 ShadowCoords;
} LightData_IN[MAX_LIGHT_COUNT];
But this time when I execute my program I have the following display:
As you can see, the specular light is not the same as in the first case above!
However I noticed if I replace the line:
for (int idx = 0; idx < LightCount; idx++) //Use 'LightCount' uniform variable
by the following one:
for (int idx = 0; idx < 1; idx++) //'1' value hard coded
or
int count = 1;
for (int idx = 0; idx < count; idx++)
the shading result is correct!
The problem seems to come from the fact that I use a uniform variable in the 'for' condition. However, this works when I use separate output variables like in the first case!
I checked: the uniform variable 'LightCount' is correct and equal to '1'. (I tried an unsigned int data type without success, and it's the same thing using a 'while' loop.)
How can you explain such a result?
I use:
OpenGL: 4.4.0 NVIDIA driver 344.75
GLSL: 4.40 NVIDIA via Cg compiler
I already used input/output data block without problem but it was not arrays but just simple blocks like below:
[in/out] VertexData_VS
{
vec3 viewDir_TS;
vec4 Position_CS;
vec3 Normal_CS;
vec2 TexCoords;
} VertexData_[IN/OUT];
Do you think it's not possible to use input/output data blocks as arrays in a loop that uses a uniform variable in the 'for' condition?
UPDATE
I tried using 2 vec4s in the data block (for the sake of data alignment, as for a uniform block, where data needs to be aligned on a vec4), like below:
[in/out] LightData_VS
{
vec4 lightDir_TS; //vec4((TBN * lightDir_CS), 0.0f);
vec4 ShadowCoords;
} LightData_[IN/OUT][MAX_LIGHT_COUNT];
without success...
UPDATE 2
Here's the code concerning the shader compilation log:
core::FileSystem file(filename);
std::ifstream ifs(file.GetFullName());
if (ifs)
{
GLint compilationError = 0;
std::string fileContent, line;
char const *sourceCode;
while (std::getline(ifs, line, '\n'))
fileContent.append(line + '\n');
sourceCode = fileContent.c_str();
ifs.close();
this->m_Handle = glCreateShader(this->m_Type);
glShaderSource(this->m_Handle, 1, &sourceCode, 0);
glCompileShader(this->m_Handle);
glGetShaderiv(this->m_Handle, GL_COMPILE_STATUS, &compilationError);
if (compilationError != GL_TRUE)
{
GLint errorSize = 0;
glGetShaderiv(this->m_Handle, GL_INFO_LOG_LENGTH, &errorSize);
char *errorStr = new char[errorSize + 1];
glGetShaderInfoLog(this->m_Handle, errorSize, &errorSize, errorStr);
errorStr[errorSize] = '\0';
std::cout << errorStr << std::endl;
delete[] errorStr;
glDeleteShader(this->m_Handle);
}
}
And the code concerning the program log:
GLint errorLink = 0;
glGetProgramiv(this->m_Handle, GL_LINK_STATUS, &errorLink);
if (errorLink != GL_TRUE)
{
GLint sizeError = 0;
glGetProgramiv(this->m_Handle, GL_INFO_LOG_LENGTH, &sizeError);
char *error = new char[sizeError + 1];
glGetShaderInfoLog(this->m_Handle, sizeError, &sizeError, error);
error[sizeError] = '\0';
std::cerr << error << std::endl;
glDeleteProgram(this->m_Handle);
delete[] error;
}
Unfortunately, I don't have any error log!

Order independent transparency with MSAA

I have implemented OIT based on the demo in the "OpenGL Programming Guide", 8th edition (the red book). Now I need to add MSAA. Just enabling MSAA screws up the transparency, as the layered pixels are resolved x times, equal to the number of sample levels. I have read this article on how it is done with DirectX, where they say the pixel shader should be run per sample and not per pixel. How is it done in OpenGL?
I won't post the whole implementation here, just the fragment shader chunk in which the final resolution of the layered pixels occurs:
vec4 final_color = vec4(0,0,0,0);
for (i = 0; i < fragment_count; i++)
{
/// Retrieving the next fragment from the stack:
vec4 modulator = unpackUnorm4x8(fragment_list[i].y) ;
/// Perform alpha blending:
final_color = mix(final_color, modulator, modulator.a);
}
color = final_color ;
Update:
I have tried the solution proposed here but it still doesn't work. Here are the full fragment shaders for the list build and resolve passes:
List build pass:
#version 420 core
layout (early_fragment_tests) in;
layout (binding = 0, r32ui) uniform uimage2D head_pointer_image;
layout (binding = 1, rgba32ui) uniform writeonly uimageBuffer list_buffer;
layout (binding = 0, offset = 0) uniform atomic_uint list_counter;
layout (location = 0) out vec4 color;//dummy output
in vec3 frag_position;
in vec3 frag_normal;
in vec4 surface_color;
in int gl_SampleMaskIn[];
uniform vec3 light_position = vec3(40.0, 20.0, 100.0);
void main(void)
{
uint index;
uint old_head;
uvec4 item;
vec4 frag_color;
index = atomicCounterIncrement(list_counter);
old_head = imageAtomicExchange(head_pointer_image, ivec2(gl_FragCoord.xy), uint(index));
vec4 modulator =surface_color;
item.x = old_head;
item.y = packUnorm4x8(modulator);
item.z = floatBitsToUint(gl_FragCoord.z);
item.w = int(gl_SampleMaskIn[0]);
imageStore(list_buffer, int(index), item);
frag_color = modulator;
color = frag_color;
}
List resolve:
#version 420 core
// The per-pixel image containing the head pointers
layout (binding = 0, r32ui) uniform uimage2D head_pointer_image;
// Buffer containing linked lists of fragments
layout (binding = 1, rgba32ui) uniform uimageBuffer list_buffer;
// This is the output color
layout (location = 0) out vec4 color;
// This is the maximum number of overlapping fragments allowed
#define MAX_FRAGMENTS 40
// Temporary array used for sorting fragments
uvec4 fragment_list[MAX_FRAGMENTS];
void main(void)
{
uint current_index;
uint fragment_count = 0;
current_index = imageLoad(head_pointer_image, ivec2(gl_FragCoord).xy).x;
while (current_index != 0 && fragment_count < MAX_FRAGMENTS )
{
uvec4 fragment = imageLoad(list_buffer, int(current_index));
int coverage = int(fragment.w);
//if((coverage &(1 << gl_SampleID))!=0) {
fragment_list[fragment_count] = fragment;
current_index = fragment.x;
//}
fragment_count++;
}
uint i, j;
if (fragment_count > 1)
{
for (i = 0; i < fragment_count - 1; i++)
{
for (j = i + 1; j < fragment_count; j++)
{
uvec4 fragment1 = fragment_list[i];
uvec4 fragment2 = fragment_list[j];
float depth1 = uintBitsToFloat(fragment1.z);
float depth2 = uintBitsToFloat(fragment2.z);
if (depth1 < depth2)
{
fragment_list[i] = fragment2;
fragment_list[j] = fragment1;
}
}
}
}
vec4 final_color = vec4(0,0,0,0);
for (i = 0; i < fragment_count; i++)
{
vec4 modulator = unpackUnorm4x8(fragment_list[i].y);
final_color = mix(final_color, modulator, modulator.a);
}
color = final_color;
}
Without knowing how your code actually works, you can do it very much the same way that your linked DX11 demo does, since OpenGL provides the same features needed.
So in the first shader that just stores all the rendered fragments, you also store the sample coverage mask for each fragment (along with the color and depth, of course). This is given as the fragment shader input variable int gl_SampleMaskIn[], and for each sample with id 32*i+j, bit j of gl_SampleMaskIn[i] is set if the fragment covers that sample (since you probably won't use >32xMSAA, you can usually just use gl_SampleMaskIn[0] and only need to store a single int as coverage mask).
...
fragment.color = inColor;
fragment.depth = gl_FragCoord.z;
fragment.coverage = gl_SampleMaskIn[0];
...
Then the final sort and render shader is run for each sample instead of just for each fragment. This is achieved implicitly by making use of the input variable int gl_SampleID, which gives us the ID of the current sample. So what we do in this shader (in addition to the non-MSAA version) is that the sorting step just accounts for the sample, by only adding a fragment to the final (to be sorted) fragment list if the current sample is actually covered by this fragment:
What was something like (beware, pseudocode extrapolated from your small snippet and the DX-link):
while(fragment.next != 0xFFFFFFFF)
{
fragment_list[count++] = vec2(fragment.depth, fragment.color);
fragment = fragments[fragment.next];
}
is now
while(fragment.next != 0xFFFFFFFF)
{
if(fragment.coverage & (1 << gl_SampleID))
fragment_list[count++] = vec2(fragment.depth, fragment.color);
fragment = fragments[fragment.next];
}
Or something along those lines.
EDIT: Regarding your updated code: you have to increment fragment_count only inside the if(covered) block, since we don't want to add the fragment to the list if the sample is not covered. Incrementing it unconditionally will likely result in the artifacts you see at the edges, which are the regions where the MSAA (and thus the coverage) comes into play.
On the other hand the list pointer has to be forwarded (current_index = fragment.x) in each loop iteration and not only if the sample is covered, as otherwise it can result in an infinite loop, like in your case. So your code should look like:
while (current_index != 0 && fragment_count < MAX_FRAGMENTS )
{
uvec4 fragment = imageLoad(list_buffer, int(current_index));
uint coverage = fragment.w;
if((coverage &(1 << gl_SampleID))!=0)
fragment_list[fragment_count++] = fragment;
current_index = fragment.x;
}
The OpenGL 4.3 Spec states in 7.1 about the gl_SampleID builtin variable:
Any static use of this variable in a fragment shader causes the entire shader to be evaluated per-sample.
(This was already the case in ARB_sample_shading and is also the case for gl_SamplePosition or a custom variable declared with the sample qualifier.)
Therefore it is quite automatic, because you will probably need the SampleID anyway.
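If you ever need to force per-sample evaluation explicitly, independent of which built-ins the shader reads, the API side offers that as well (a sketch, assuming a GL 4.0+ context):
glEnable(GL_SAMPLE_SHADING);  // request per-sample fragment shader execution
glMinSampleShading(1.0f);     // run the shader for every covered sample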