From the Vulkan Tutorial:
#version 450
#extension GL_ARB_separate_shader_objects : enable
out gl_PerVertex {
    vec4 gl_Position;
};

vec2 positions[3] = vec2[](
    vec2(0.0, -0.5),
    vec2(0.5, 0.5),
    vec2(-0.5, 0.5)
);

void main() {
    gl_Position = vec4(positions[gl_VertexIndex], 0.0, 1.0);
}
Question 1: What does this designate?
out gl_PerVertex {
    vec4 gl_Position;
};
Question 2: What explains the syntax vec2 positions[3] = vec2[](...)? To initialize the array, shouldn't the syntax be

vec2 positions[3] = {
    vec2(0.0, -0.5),
    vec2(0.5, 0.5),
    vec2(-0.5, 0.5)
};

Is this shader-specific syntax, or can arrayType[](...) be used as a constructor in C++ as well?
Question 1: What does this designate?
This

out gl_PerVertex {
    vec4 gl_Position;
};

is an Output Interface Block. (See the GLSL Specification - 4.3.9 Interface Blocks.)
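An interface block groups a set of shader outputs (or inputs) under one block name; the gl_PerVertex block above redeclares the built-in per-vertex outputs. For comparison, a minimal sketch of a user-defined output interface block (the block, member, and instance names are made up for illustration):

out VertexData {
    vec3 color;
    vec2 uv;
} vOut;              // optional instance name

void main() {
    vOut.color = vec3(1.0);
    vOut.uv    = vec2(0.0);
    gl_Position = vec4(0.0, 0.0, 0.0, 1.0);
}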
Question 2: What explains the syntax vec2 positions[3] = vec2[](...)? To initialize the array, shouldn't the syntax be ...
No.
See Array constructors:
Arrays can be constructed using array constructor syntax. In this case, the
type also contains the [] array notation:
const float array[3] = float[3](2.5, 7.0, 1.5);
See GLSL Specification - 4.1.11 Initializers:
If an initializer is a list of initializers enclosed in curly braces, the variable being declared must be a
vector, a matrix, an array, or a structure.
int i = { 1 }; // illegal, i is not a composite
....
All of the following declarations result in a compile-time error.
float a[2] = { 3.4, 4.2, 5.0 }; // illegal
....
If an initializer (of either form) is provided for an unsized array, the size of the array is determined by the
number of top-level (non-nested) initializers within the initializer. All of the following declarations create
arrays explicitly sized with five elements:
float a[] = float[](3.4, 4.2, 5.0, 5.2, 1.1);
float b[] = { 3.4, 4.2, 5.0, 5.2, 1.1 };
float c[] = a; // c is explicitly size 5
float d[5] = b; // means the same thing
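Applied to the question's array, both forms therefore compile in a Vulkan-style #version 450 shader (the brace-initializer form requires GLSL 4.20 or later, or GL_ARB_shading_language_420pack); a minimal sketch:

#version 450

// constructor syntax: the array type, including [], is used like a function call
vec2 positionsA[3] = vec2[](
    vec2(0.0, -0.5),
    vec2(0.5, 0.5),
    vec2(-0.5, 0.5)
);

// brace-initializer syntax, as asked about in the question
vec2 positionsB[3] = {
    vec2(0.0, -0.5),
    vec2(0.5, 0.5),
    vec2(-0.5, 0.5)
};

void main() {
    // both arrays contain the same three vertices
    gl_Position = vec4(positionsA[gl_VertexIndex], 0.0, 1.0);
}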
Related
In GLSL I can do
vec4 data = vec4(1.0, 2.0, 3.0, 4.0);
gl_FragColor = data.xxyy;
But can I somehow do the following?
vec4 data = vec4(1.0, 2.0, 3.0, 4.0);
ivec4 indices = ivec4(0, 0, 1, 1);
gl_FragColor = data[indices];
No, you cannot. What you are asking for is effectively a swizzle with runtime indices (something like data.indices), which GLSL does not support.
You have to construct a vec4:
gl_FragColor =
vec4(data[indices[0]], data[indices[1]], data[indices[2]], data[indices[3]]);
Alternatively you can use matrix multiplication (note that a separate mat4 is needed here, since indices is an ivec4):

mat4 swizzle = mat4(vec4(1, 1, 0, 0), vec4(0, 0, 1, 1), vec4(0, 0, 0, 0), vec4(0, 0, 0, 0));
gl_FragColor = swizzle * data;
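If the indices are only known at run time, the selection matrix can be built from the ivec4 in a loop. A minimal sketch of that idea, meant for the body of a fragment shader's main() and assuming a GLSL version that allows dynamically indexing a matrix (desktop GLSL; GLSL ES 1.00 restricts this):

vec4 data = vec4(1.0, 2.0, 3.0, 4.0);
ivec4 indices = ivec4(0, 0, 1, 1);

// column indices[k], row k gets a 1.0, so (M * data)[k] == data[indices[k]]
mat4 M = mat4(0.0);
for (int k = 0; k < 4; ++k) {
    M[indices[k]][k] = 1.0;
}
gl_FragColor = M * data;   // (1.0, 1.0, 2.0, 2.0), i.e. data.xxyy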
This doesn't work and everything turns black (this is the vertex shader):
#version 330
layout (location=0) in vec3 pos;
layout (location=1) in vec2 tex;
layout (location=2) in int lighting;
out vec2 texCoord;
out float color;
uniform mat4 cam_mat;
void main() {
    float light = float(lighting);
    color = (1.0f - (light / 3.0f)) / 2.0 + .5f;
However, this does work:
layout (location=2) in float lighting;
When I make it a float it works and everything is fine.
Later I would like to pack all the info into a single integer, and casting that int to a float is pivotal to reducing the amount of VRAM I use (I'm making a voxel game engine).
Also, I tried making an array of floats and using the int as an index, but that didn't work either! (I used some if statements, and it seems all the ints I give it, {0, 1, 2, 3}, come out as {0, >3, >3, >3}, where >3 means the number was greater than 3.) So what is happening?
Wait, solved: the attribute has to be set up with glVertexAttribIPointer (glVertexAttribPointer converts the data to floating point, so an in int attribute never sees the actual integer values):

glVertexAttribIPointer(2, 1, GL_INT, 0, 0);
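For the later goal of packing everything into one integer, the fields can be unpacked with bit operations in the shader (integer bitwise operators exist since GLSL 1.30). A minimal sketch; the packed layout, bit widths, and attribute name are made-up assumptions, and the attribute must still be supplied with glVertexAttribIPointer:

#version 330

layout (location=0) in vec3 pos;
layout (location=2) in int packedData;   // hypothetical packed attribute

out float color;

uniform mat4 cam_mat;

void main() {
    // assumed layout: bits 0-1 = lighting, bits 2-9 = block id (illustration only)
    int lighting = packedData & 0x3;
    int blockId  = (packedData >> 2) & 0xFF;   // would feed further logic

    float light = float(lighting);
    color = (1.0 - (light / 3.0)) / 2.0 + 0.5;

    gl_Position = cam_mat * vec4(pos, 1.0);
}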
What obvious thing am I missing? I'm trying to compile/run this vertex shader:
// an attribute will receive data from a buffer
attribute vec2 a_position;
uniform vec2 u_resolution;
varying vec4 v_color;
// all shaders have a main function
void main() {
    // convert the position from pixels to 0.0 to 1.0
    vec2 zeroToOne = a_position / u_resolution;
    // convert from 0->1 to 0->2
    vec2 zeroToTwo = zeroToOne * 2.0;
    // convert from 0->2 to -1->+1 (clip space)
    vec2 clipSpace = zeroToTwo - 1.0;
    gl_Position = vec4(clipSpace, 0, 1);
    vec3 c = hsv2rgb(vec3(0.5, 0.5, 0.5));
    /*temporary*/ v_color = gl_Position * 0.5 + 0.5;
    gl_PointSize = 1.0;
}

// All components are in the range [0…1], including hue.
vec3 hsv2rgb(vec3 c)
{
    vec4 K = vec4(1.0, 2.0 / 3.0, 1.0 / 3.0, 3.0);
    vec3 p = abs(fract(c.xxx + K.xyz) * 6.0 - K.www);
    return c.z * mix(K.xxx, clamp(p - K.xxx, 0.0, 1.0), c.y);
}
using code I found at "From RGB to HSV in OpenGL GLSL" (together with the boilerplate from https://webglfundamentals.org/webgl/lessons/webgl-boilerplate.html), but I get the error:
webgl-utils.js:66 *** Error compiling shader '[object WebGLShader]':ERROR: 0:19: 'hsv2rgb' : no matching overloaded function found
ERROR: 0:19: '=' : dimension mismatch
ERROR: 0:19: '=' : cannot convert from 'const mediump float' to 'highp 3-component vector of float'
The error is specific to the hsv2rgb call. I have tried a number of things, including making the parameter a variable (i.e. adding vec3 v = vec3(0.5, 0.5, 0.5) and passing v into hsv2rgb), and making a skeleton hsv2rgb that simply returns its parameter. In the referenced SO post, I saw that another user seemed to have exactly the same problem, but I am properly passing in a vec3 rather than 3 floats.
If it makes any difference, here is the fragment shader as well:
// fragment shaders don't have a default precision so we need
// to pick one. mediump is a good default
precision mediump float;
varying vec4 v_color;
void main() {
    // gl_FragColor is a special variable a fragment shader
    // is responsible for setting
    gl_FragColor = v_color;
}
From OpenGL ES Shading Language 1.00 Specification - 6.1 Function Definitions:
All functions must be either declared with a prototype or defined with a body before they are called.
Hence, you have to define the function hsv2rgb before it is used for the first time, or you have to declare a function prototype:
vec3 hsv2rgb(vec3 c);
void main() {
    // [...]
    vec3 c = hsv2rgb(vec3(0.5, 0.5, 0.5));
    // [...]
}

vec3 hsv2rgb(vec3 c)
{
    vec4 K = vec4(1.0, 2.0 / 3.0, 1.0 / 3.0, 3.0);
    vec3 p = abs(fract(c.xxx + K.xyz) * 6.0 - K.www);
    return c.z * mix(K.xxx, clamp(p - K.xxx, 0.0, 1.0), c.y);
}
So I have this fragment shader, which was working great until I refactored some logic out into a separate function. I want to be able to call it multiple times to layer different versions of the effect on top of each other.
However, as soon as I created this custom function, the shader starts throwing the error:
ERROR: 0:33: 'grid' : no matching overloaded function found
Which is weird, because it appears to be compiling it as a function. If I remove the return from grid() I get this error too:
ERROR: 0:36: '' : function does not return a value: grid
So what am I missing here about declaring functions?
Full shader here:
uniform float brightness;
uniform float shiftX;
uniform float shiftY;
uniform vec4 color;
varying vec3 vPos;
void main() {
    gl_FragColor = vec4( grid(200.0), 0.0, 0.0, 1.0 );
}

float grid(float size) {
    float x = pow(abs(0.5 - mod(vPos.x + shiftX, 200.0) / 200.0), 4.0);
    float y = pow(abs(0.5 - mod(vPos.y + shiftY, 200.0) / 200.0), 4.0);
    return (x+y) * 5.0 * pow(brightness, 2.0);
}
You either have to put the grid function before main or forward-declare it, as you would in C.
Such as:
float grid(float size);
before the main method.
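Applied to the shader above, a minimal sketch (the grid definition itself stays exactly as in the question):

uniform float brightness;
uniform float shiftX;
uniform float shiftY;
uniform vec4 color;

varying vec3 vPos;

// forward declaration, so main() can call grid() before its definition
float grid(float size);

void main() {
    gl_FragColor = vec4( grid(200.0), 0.0, 0.0, 1.0 );
}

// ... grid(float size) defined below, unchanged ...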
I'm writing my own shader with OpenGL, and I am stumped why this shader won't compile. Could anyone else have a look at it?
What I'm passing in as a vertex is 2 floats (separated as bytes) in this format:
Float 1:
Byte 1: Position X
Byte 2: Position Y
Byte 3: Position Z
Byte 4: Texture Coordinate X
Float 2:
Byte 1: Color R
Byte 2: Color G
Byte 3: Color B
Byte 4: Texture Coordinate Y
And this is my shader:
in vec2 Data;
varying vec3 Color;
varying vec2 TextureCoords;
uniform mat4 projection_mat;
uniform mat4 view_mat;
uniform mat4 world_mat;
void main()
{
    vec4 dataPosition = UnpackValues(Data.x);
    vec4 dataColor = UnpackValues(Data.y);
    vec4 position = dataPosition * vec4(1.0, 1.0, 1.0, 0.0);
    Color = dataColor.xyz;
    TextureCoords = vec2(dataPosition.w, dataColor.w)
    gl_Position = projection_mat * view_mat * world_mat * position;
}

vec4 UnpackValues(float value)
{
    return vec4(value % 255, (value >> 8) % 255, (value >> 16) % 255, value >> 24);
}
If you need any more information, I'd be happy to comply.
You need to declare UnpackValues before you call it. GLSL is like C and C++; names must have a declaration before they can be used.
BTW: What you're trying to do will not work. Floats are floats; unless you're working with GLSL 4.00 (and since you continue to use old terms like "varying", I'm guessing not), you cannot extract bits out of a float. Indeed, the right-shift operator is only defined for integers; attempting to use it on floats will fail with a compiler error.
GLSL is not C or C++ (ironically).
If you want to pack your data, use OpenGL to pack it for you. Send two vec4 attributes that contain normalized unsigned bytes:
glVertexAttribPointer(X, 4, GL_UNSIGNED_BYTE, GL_TRUE, 8, *);
glVertexAttribPointer(Y, 4, GL_UNSIGNED_BYTE, GL_TRUE, 8, * + 4);
Your vertex shader would take two vec4 values as inputs. Since you are not using glVertexAttribIPointer, OpenGL knows that you're passing values that are to be interpreted as floats.
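A minimal sketch of the matching vertex shader, assuming the attributes are set up as above; the attribute names and the choice to keep each texture coordinate in the .w component mirror the original packing and are otherwise illustrative:

// each attribute arrives as 4 normalized unsigned bytes, converted by OpenGL to floats in [0, 1]
attribute vec4 DataPosition;   // xyz = position, w = texture coordinate x
attribute vec4 DataColor;      // xyz = color,    w = texture coordinate y

varying vec3 Color;
varying vec2 TextureCoords;

uniform mat4 projection_mat;
uniform mat4 view_mat;
uniform mat4 world_mat;

void main()
{
    Color = DataColor.xyz;
    TextureCoords = vec2(DataPosition.w, DataColor.w);
    gl_Position = projection_mat * view_mat * world_mat * vec4(DataPosition.xyz, 1.0);
}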