GLSL cast input int to output float [duplicate] - casting

This question already has an answer here:
Can I pack both floats and ints into the same array buffer?
This doesn't work and everything turns black (this is the vertex shader):
#version 330
layout (location=0) in vec3 pos;
layout (location=1) in vec2 tex;
layout (location=2) in int lighting;
out vec2 texCoord;
out float color;
uniform mat4 cam_mat;
void main() {
    float light = float(lighting);
    color = (1.0f - (light / 3.0f)) / 2.0 + .5f;
However, this does work:
layout (location=2) in float lighting;
When I make it a float, it works and everything is fine.
Later I would like to pack all the info into a single integer, and casting that int to a float is pivotal to reducing the amount of VRAM I use (I'm making a voxel game engine).
Also, I tried making an array of floats and using the int as an index, but that didn't work either! (I used some if statements, and it seems the ints I give it, {0, 1, 2, 3}, come out as {0, >3, >3, >3}, where >3 means the number was greater than 3.) So what is happening?

Wait, solved:
glVertexAttribIPointer(2, 1, GL_INT, 0, 0);
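For anyone hitting the same thing: glVertexAttribPointer feeds floating-point attributes (integer data is converted to float), while glVertexAttribIPointer feeds genuine integer attributes, so an input declared as int has to be set up with the I variant. A minimal sketch of the difference, assuming attribute location 2 and a tightly packed buffer of GLints (illustrative setup, not the exact code from this project):
// Problematic for "layout (location=2) in int lighting;":
// the data is converted to float and no longer matches the int declaration.
glVertexAttribPointer(2, 1, GL_INT, GL_FALSE, 0, 0);
// Correct: the I variant keeps the values as integers.
glVertexAttribIPointer(2, 1, GL_INT, 0, 0);
glEnableVertexAttribArray(2);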

Related

Banding Problem in Multi Step Shader with Ping Pong Buffers, does not happen in ShaderToy

I am trying to implement a Streak shader, which is described here:
http://www.chrisoat.com/papers/Oat-SteerableStreakFilter.pdf
Short explanation: it samples points with a 1D kernel in a given direction. The kernel size grows exponentially in each step. Color values are weighted based on distance to the sampled point and summed. The result is a smooth tail/smear/light streak effect in that direction. Here is the frag shader:
precision highp float;
uniform sampler2D u_texture;
varying vec2 v_texCoord;
uniform float u_Pass;
const float kernelSize = 4.0;
const float atten = 0.95;
vec4 streak(in float pass, in vec2 texCoord, in vec2 dir, in vec2 pixelStep) {
    float kernelStep = pow(kernelSize, pass - 1.0);
    vec4 color = vec4(0.0);
    for(int i = 0; i < 4; i++) {
        float sampleNum = float(i);
        float weight = pow(atten, kernelStep * sampleNum);
        vec2 sampleTexCoord = texCoord + ((sampleNum * kernelStep) * (dir * pixelStep));
        vec4 texColor = texture2D(u_texture, sampleTexCoord) * weight;
        color += texColor;
    }
    return color;
}
void main() {
    vec2 iResolution = vec2(512.0, 512.0);
    vec2 pixelStep = vec2(1.0, 1.0) / iResolution.xy;
    vec2 dir = vec2(1.0, 0.0);
    float pass = u_Pass;
    vec4 streakColor = streak(pass, v_texCoord, dir, pixelStep);
    gl_FragColor = vec4(streakColor.rgb, 1.0);
}
It was going to be used for a starfield type of effect. And here is the implementation on ShaderToy which works fine:
https://www.shadertoy.com/view/ll2BRG
(Note: disregard the first shader in Buffer A; it just filters out the dim colors in the input texture to emulate a star field, since as far as I know ShaderToy doesn't allow uploading custom textures)
But when I use the same shader in my own code and render using ping-pong FrameBuffers, it looks different. Here is my own implementation ported over to WebGL:
https://jsfiddle.net/1b68eLdr/87755/
I basically create 2 512x512 buffers, ping-pong the shader 4 times increasing kernel size at each iteration according to the algorithm and render the final iteration on the screen.
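The ping-pong pass loop described above looks roughly like the following sketch in desktop OpenGL (names like fbo, tex, quadVAO and prog are placeholders rather than the actual code from the JSFiddle):
GLuint fbo[2], tex[2];   // two 512x512 render targets, assumed already created
int src = 0, dst = 1;
glUseProgram(prog);
glBindVertexArray(quadVAO);                      // full-screen quad
glActiveTexture(GL_TEXTURE0);
glUniform1i(glGetUniformLocation(prog, "u_texture"), 0);
for (int pass = 1; pass <= 4; ++pass) {
    glBindFramebuffer(GL_FRAMEBUFFER, fbo[dst]); // write to one target...
    glViewport(0, 0, 512, 512);
    glBindTexture(GL_TEXTURE_2D, tex[src]);      // ...while reading from the other
    glUniform1f(glGetUniformLocation(prog, "u_Pass"), (float)pass);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    int tmp = src; src = dst; dst = tmp;         // swap read/write targets
}
// tex[src] now holds the result of the final pass and gets drawn to the screen.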
The problem is visible banding, and my streaks/tails seem to be losing brightness a lot faster. (Note: the image is somewhat inaccurate; the lengths of the streaks are the same/correct, it's the color values that are wrong.)
I have been struggling with this for a while in desktop OpenGL / LWJGL, so I ported it over to WebGL/JavaScript and uploaded it on JSFiddle in hopes someone can spot what the problem is. I suspect it's either the texture coordinates or the FrameBuffer configuration, since the shaders are exactly the same.
The reason it works on ShaderToy is that it uses a floating-point render target.
Simply use gl.FLOAT as the type of your framebuffer texture and the issue is fixed (I verified it with that modification on your JSFiddle).
So do this in your createBackingTexture():
// Just request the extension (MUST be done).
gl.getExtension('OES_texture_float');
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, this._width, this._height, 0, gl.RGBA, gl.FLOAT, null);
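For anyone hitting the same banding in the original desktop OpenGL / LWJGL setup rather than WebGL, the equivalent fix is to allocate the ping-pong textures with a floating-point internal format. A sketch of the texture allocation only, assuming a core profile and a texture handle named tex (hypothetical names):
// Use a 16-bit float color attachment so the intermediate streak sums
// are not quantized to 8 bits between passes.
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0, GL_RGBA, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);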

GLSL - unable to access second index of a SSBO array for multiple lights

In my application I add two lights: one at (0, 0, 2) and the second at (2, 0, 0). Here's what I get (the x, y, z axes are represented respectively by the red, green and blue lines):
Notice how only the first light is working and the second is not. I made my application core-profile compliant so I could inspect the buffers with tools like RenderDoc and NSight, and both show me that the second light's data is present in the buffer (picture taken while running Nsight):
The positions seem to be correctly transferred to the GPU memory buffer. Here's the implementation of my fragment shader, which uses an SSBO to handle multiple lights in my application:
#version 430
struct Light {
    vec3 position;
    vec3 color;
    float intensity;
    float attenuation;
    float radius;
};
layout (std140, binding = 0) uniform CameraInfo {
    mat4 ProjectionView;
    vec3 eye;
};
layout (std430, binding = 1) readonly buffer LightsData {
    Light lights[];
};
uniform vec3 ambient_light_color;
uniform float ambient_light_intensity;
in vec3 ex_FragPos;
in vec4 ex_Color;
in vec3 ex_Normal;
out vec4 out_Color;
void main(void)
{
    // Basic ambient light
    vec3 ambient_light = ambient_light_color * ambient_light_intensity;
    int i;
    vec3 diffuse = vec3(0.0,0.0,0.0);
    vec3 specular = vec3(0.0,0.0,0.0);
    for (i = 0; i < lights.length(); ++i) {
        Light wLight = lights[i];
        // Basic diffuse light
        vec3 norm = normalize(ex_Normal); // in this project the normals are all normalized anyway...
        vec3 lightDir = normalize(wLight.position - ex_FragPos);
        float diff = max(dot(norm, lightDir), 0.0);
        diffuse += diff * wLight.color;
        // Basic specular light
        vec3 viewDir = normalize(eye - ex_FragPos);
        vec3 reflectDir = reflect(-lightDir, norm);
        float spec = pow(max(dot(viewDir, reflectDir), 0.0), 32);
        specular += wLight.intensity * spec * wLight.color;
    }
    out_Color = ex_Color * vec4(specular + diffuse + ambient_light,1.0);
}
Note that I've read section 7.6.2.2 of the OpenGL 4.5 spec and, if I understood correctly, my alignment should follow the size of the biggest member of my struct, which is a vec3, and my struct size is 36 bytes, so everything should be fine here. I also tried a different std layout (e.g. std140) and adding some padding, but nothing fixes the issue with the second light. In my C++ code, I have these definitions to add the lights in my application:
light_module.h/.cc:
struct Light {
    glm::f32vec3 position;
    glm::f32vec3 color;
    float intensity;
    float attenuation;
    float radius;
};
...
constexpr GLuint LIGHTS_SSBO_BINDING_POINT = 1U;
std::vector<Light> _Lights;
...
void AddLight(const Light &light) {
    // Add to _Lights
    _Lights.push_back(light);
    UpdateSSBOBlockData(
        LIGHTS_SSBO_BINDING_POINT, _Lights.size() * sizeof(Light),
        static_cast<void*>(_Lights.data()), GL_DYNAMIC_DRAW);
}
shader_module.h/.cc:
using SSBOCapacity = GLuint;
using BindingPoint = GLuint;
using ID = GLuint;
std::map<BindingPoint, std::pair<ID, SSBOCapacity> > SSBO_list;
...
void UpdateSSBOBlockData(GLuint a_unBindingPoint,
                         GLuint a_unSSBOSize, void* a_pData, GLenum a_eUsage) {
    auto SSBO = SSBO_list.find(a_unBindingPoint);
    if (SSBO != SSBO_list.end()) {
        GLuint unSSBOID = SSBO->second.first;
        glBindBuffer(GL_SHADER_STORAGE_BUFFER, unSSBOID);
        glBufferData(GL_SHADER_STORAGE_BUFFER, a_unSSBOSize, a_pData, a_eUsage);
        glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0); // unbind
    } else {
        // error handling...
    }
}
Basically, I'm trying to update/reallocate the SSBO size with glBufferData each time a light is added in my app.
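(For completeness: the attachment of the buffer to binding point 1 is not shown here; presumably something along these lines happens when the SSBO is first created. This is a sketch of the usual pattern, not the actual code from the project.)
GLuint ssbo = 0;
glGenBuffers(1, &ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
glBufferData(GL_SHADER_STORAGE_BUFFER, 0, nullptr, GL_DYNAMIC_DRAW); // starts empty
// Attach the buffer to the binding point used by the shader:
// layout (std430, binding = 1) readonly buffer LightsData { ... };
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, LIGHTS_SSBO_BINDING_POINT, ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);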
Now, since I'm having issues processing the second light data, I changed my fragment shader code to only execute the second light in my SSBO array by forcing i = 1 and looping until i < 2, but I get the following errors:
(50) : error C1068: ... or possible array index out of bounds
(50) : error C5025: lvalue in field access too complex
(56) : error C1068: ... or possible array index out of bounds
(56) : error C5025: lvalue in field access too complex
Lines 50 and 56 refer to diffuse += diff * wLight.color; and specular += wLight.intensity * spec * wLight.color; respectively. Is there really an out of bounds access even if I add my lights before the first draw call? Why is the shader compiling correctly when I'm using lights.length() instead of 2?
Finally, I added a simple if (i == 1) in my for-loop to see if lights.length() is equal to 2, but it never goes into it. Yet the initial size of my buffer is 0, then I add a light that sets the buffer size to 36 bytes, and we can see that the first light works fine. Why is the update/reallocation not working the second time?
So what I did was add some padding at the end of the declaration of my struct, on the C++ side only. The padding required was float[3], or 12 bytes, which brings the total to 48 bytes. I'm still not sure why this is required, since the specification states (as highlighted in this post):
If the member is a structure, the base alignment of the structure is N, where N is the largest base alignment value of any of its members, and rounded up to the base alignment of a vec4. The individual members of this sub-structure are then assigned offsets by applying this set of rules recursively, where the base offset of the first member of the sub-structure is equal to the aligned offset of the structure. The structure may have padding at the end; the base offset of the member following the sub-structure is rounded up to the next multiple of the base alignment of the structure.
[...]
When using the std430 storage layout, shader storage blocks will be laid out in buffer storage identically to uniform and shader storage blocks using the std140 layout, except that the base alignment and stride of arrays of scalars and vectors in rule 4 and of structures in rule 9 are not rounded up a multiple of the base alignment of a vec4.
My guess is that structures such as vec3 and glm::f32vec3 defined by glm are recursively rounded up to vec4 when using std430 and therefore my struct must follow the alignment of a vec4. If anyone can confirm this, it would be interesting since the linked post above deals with vec4 directly and not vec3.
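To make the size mismatch concrete, here is (to the best of my understanding) how std430 actually lays out the Light struct declared in the shader, mirrored as a C++ struct with explicit padding; the resulting 48-byte array stride is what the original 36-byte glm struct could not match for the second element:
#include <cstddef>

// CPU-side mirror of the std430 layout of the shader struct:
//   struct Light { vec3 position; vec3 color; float intensity; float attenuation; float radius; };
struct LightStd430 {
    float position[3];   // offset 0  (a vec3 has a base alignment of 16)
    float _pad0;         // offset 12: color must start on a 16-byte boundary
    float color[3];      // offset 16
    float intensity;     // offset 28 (a lone float may sit right after a vec3)
    float attenuation;   // offset 32
    float radius;        // offset 36
    float _pad1[2];      // offsets 40-47: the struct is padded out to its 16-byte alignment
};
static_assert(sizeof(LightStd430) == 48, "array stride of Light[] is 48 bytes, not 36");
static_assert(offsetof(LightStd430, intensity) == 28, "intensity packs into the slot after color");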
Picture with both lights working:
EDIT:
After more investigation, it turns out that the last 3 fields of the Light struct (intensity, attenuation and radius) were not usable. I fixed this by changing the position and color from glm::f32vec3 to glm::vec4 instead. More information can be found in a similar post. I also left a single float for padding, because of the alignment mentioned earlier.

Strange dynamic iteration error in OpenGL shader

I'm trying to turn a float array into a vec2 array, since a float array is the only kind of array I can pass over from Java code to the shader. I'm turning, for example, the float 3.8 into vec2(3, 8). But that's pretty irrelevant. The real problem is that the shader stops working when I add a certain line in the for loop. Here's my code:
attribute vec4 a_color;
attribute vec3 a_position;
attribute vec2 a_texCoord0;
uniform float a_tileCoords[144];
uniform mat4 u_projTrans;
varying vec4 v_color;
varying vec2 v_texCoord0;
varying vec2 v_tileCoords[144];
int length = 144;
void main(){
    v_color = a_color;
    v_texCoord0 = a_texCoord0;
    for(int i = 0; i < length; i++){
        int x = int(a_tileCoords[i]);
        int y = int((a_tileCoords[i] - int(a_tileCoords[i])) * 10);
        v_tileCoords[i] = vec2(x,y);
    }
    gl_Position = u_projTrans * vec4(a_position + vec3(0, 1, 0), 1.);
}
It's about this line:
v_tileCoords[i] = vec2(x,y);
When I remove that line from the for loop, the shader works again (the texture gets drawn). But of course I want this line to work. So the question is: what could I have done wrong, or is what I want to achieve even possible?
The maximum number of varying vectors is given by MAX_VARYING_VECTORS and limits the number of vectors you can pass to the next shader stage.
I'm very sure you exceeded the limit.
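You can check the limit on a given device by querying it at runtime. A small sketch (desktop GL 4.1+ and GL ES expose GL_MAX_VARYING_VECTORS; WebGL exposes gl.MAX_VARYING_VECTORS):
// v_tileCoords[144] alone needs 144 varying vectors; the limit is often only 8-32.
GLint maxVaryingVectors = 0;
glGetIntegerv(GL_MAX_VARYING_VECTORS, &maxVaryingVectors);
printf("GL_MAX_VARYING_VECTORS = %d\n", maxVaryingVectors);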
see also the following questions:
WebGL shaders: maximum number of varying variables
Max Varying Vectors and Floats for GLES 2.0

Can I pack both floats and ints into the same array buffer?

...because the floats seem to be coming out fine, but there's something wrong with the ints.
Essentially I have a struct called "BlockInstance" which holds a vec3 and an int. I've got an array of these BlockInstances which I buffer like so (translating from C# to C for clarity):
glBindBuffer(GL_ARRAY_BUFFER, bufferHandle);
glBufferData(GL_ARRAY_BUFFER, sizeof(BlockInstance)*numBlocks, blockData, GL_DYNAMIC_DRAW);
glVertexAttribPointer(3,3,GL_FLOAT,false,16,0);
glVertexAttribPointer(4,1,GL_INT,false,16,12);
glVertexAttribDivisor(3,1);
glVertexAttribDivisor(4,1);
And my vertex shader looks like this:
#version 330
layout (location = 0) in vec3 Position;
layout (location = 1) in vec2 TexCoord;
layout (location = 2) in vec3 Normal;
layout (location = 3) in vec3 Translation;
layout (location = 4) in int TexIndex;
uniform mat4 ProjectionMatrix;
out vec2 TexCoord0;
void main()
{
    mat4 trans = mat4(
        1,0,0,0,
        0,1,0,0,
        0,0,1,0,
        Translation.x,Translation.y,Translation.z,1);
    gl_Position = ProjectionMatrix * trans * vec4(Position, 1.0);
    TexCoord0 = vec2(TexCoord.x+TexIndex,TexCoord.y)/16;
}
When I replace TexIndex on the last line of my GLSL shader with a constant like 0, 1, or 2, my textures come out fine, but if I leave it as it is, they come out all mangled, so there must be something wrong with the number, right? But I don't know what it's coming out as, so it's hard to debug.
I've looked at my array of BlockInstances, and they're all set to 1, 2, or 19, so I don't think my input is wrong...
What else could it be?
Note that I'm using a sprite map texture where each tile is 16x16 px, but my TexCoords are in the range 0-1, so I add a whole number to choose which tile and then divide by 16 (the map is also 16x16 tiles) to put it back into the proper range. The idea is that I'll replace that last line with
TexCoord0 = vec2(TexCoord.x+(TexIndex%16),TexCoord.y+(TexIndex/16))/16;
-- GLSL does integer math, right? An int divided by an int will come out as a whole number?
If I try this:
TexCoord0 = vec2(TexCoord.x+(TexIndex%16),TexCoord.y)/16;
The texture looks fine, but it's not using the right sprite. (Looks to be using the first sprite)
If I do this:
TexCoord0 = vec2(TexCoord.x+(TexIndex%16),TexCoord.y+(TexIndex/16))/16;
It comes out all white. This leads me to believe that TexIndex is coming out as a very large number (bigger than 256, anyway) and that it's probably a multiple of 16.
layout (location = 4) in int TexIndex;
There's your problem.
glVertexAttribPointer is used to send data that will be converted to floating-point values. It's used to feed floating-point attributes. Passing integers is possible, but those integers are converted to floats, because that's what glVertexAttribPointer is for.
What you need is glVertexAttribIPointer (notice the I). This is used for providing signed and unsigned integer data.
So if you declare a vertex shader input as a float or some non-prefixed vec, you use glVertexAttribPointer to feed it. If you declare the input as int, uint, ivec or uvec, then you use glVertexAttribIPointer.
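Applied to the BlockInstance layout from the question (a vec3 at offset 0 and an int at offset 12, 16-byte stride), the corrected setup would look roughly like this; a sketch only, reusing the attribute locations from above:
glBindBuffer(GL_ARRAY_BUFFER, bufferHandle);
glBufferData(GL_ARRAY_BUFFER, sizeof(BlockInstance) * numBlocks, blockData, GL_DYNAMIC_DRAW);
// Location 3: per-instance translation, fed as floats.
glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, 16, (void*)0);
// Location 4: per-instance texture index, fed as a real integer via the I variant.
glVertexAttribIPointer(4, 1, GL_INT, 16, (void*)12);
glVertexAttribDivisor(3, 1);
glVertexAttribDivisor(4, 1);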

Why won't this GLSL Vertex Shader compile?

I'm writing my own shader with OpenGL, and I am stumped as to why this shader won't compile. Could anyone have a look at it?
What I'm passing in as a vertex is 2 floats (separated as bytes) in this format:
Float 1:
Byte 1: Position X
Byte 2: Position Y
Byte 3: Position Z
Byte 4: Texture Coordinate X
Float 2:
Byte 1: Color R
Byte 2: Color G
Byte 3: Color B
Byte 4: Texture Coordinate Y
And this is my shader:
in vec2 Data;
varying vec3 Color;
varying vec2 TextureCoords;
uniform mat4 projection_mat;
uniform mat4 view_mat;
uniform mat4 world_mat;
void main()
{
    vec4 dataPosition = UnpackValues(Data.x);
    vec4 dataColor = UnpackValues(Data.y);
    vec4 position = dataPosition * vec4(1.0, 1.0, 1.0, 0.0);
    Color = dataColor.xyz;
    TextureCoords = vec2(dataPosition.w, dataColor.w)
    gl_Position = projection_mat * view_mat * world_mat * position;
}
vec4 UnpackValues(float value)
{
    return vec4(value % 255, (value >> 8) % 255, (value >> 16) % 255, value >> 24);
}
If you need any more information, I'd be happy to comply.
You need to declare UnpackValues before you call it. GLSL is like C and C++; names must have a declaration before they can be used.
BTW: What you're trying to do will not work. Floats are floats; unless you're working with GLSL 4.00 (and since you continue to use old terms like "varying", I'm guessing not), you cannot extract bits out of a float. Indeed, the right-shift operator is only defined for integers; attempting to use it on floats will fail with a compiler error.
GLSL is not C or C++ (ironically).
If you want to pack your data, use OpenGL to pack it for you. Send two vec4 attributes that contain normalized unsigned bytes:
glVertexAttribPointer(X, 4, GL_UNSIGNED_BYTE, GL_TRUE, 8, *);
glVertexAttribPointer(Y, 4, GL_UNSIGNED_BYTE, GL_TRUE, 8, * + 4);
Your vertex shader would take two vec4 values as inputs. Since you are not using glVertexAttribIPointer, OpenGL knows that you're passing values that are to be interpreted as floats.
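A sketch of what that looks like end to end, with hypothetical attribute locations 0 and 1 and the 8-byte vertex described in the question (position xyz + texcoord x in the first four bytes, color rgb + texcoord y in the next four); the names vbo, posAndU and colorAndV are placeholders:
// 8 bytes per vertex: [posX, posY, posZ, texU][colR, colG, colB, texV], all unsigned bytes.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer(0, 4, GL_UNSIGNED_BYTE, GL_TRUE, 8, (void*)0); // -> "in vec4 posAndU;", values in [0,1]
glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 8, (void*)4); // -> "in vec4 colorAndV;"
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
// In the shader there is nothing to unpack: rescale posAndU.xyz from [0,1] as needed,
// and build the texture coordinate as vec2(posAndU.w, colorAndV.w).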