I'm trying to test colour values and change them if they're greater than 0.5.
I started out with this test, which didn't compile:
if (colourIn.b > 0.5) {
    colourIn.b = 0.0;
}
I read through some posts on here and found this post, which explains that relational operators only work on scalar integer and scalar floating-point expressions.
So, after looking for an efficient way to test fragment values with the built-in functions, I changed it to:
float mixValue = clamp(ceil(colourIn.b * 2.0) - 1.0, 0.0, 1.0);
if (greaterThan(colourIn.b, 0.5)) {
    colourIn.b = mix(colourIn.b, 0.0, mixValue);
}
Unfortunately it still doesn't compile; it gives the following errors:
ERROR: 0:15 error(202) No matching overloaded function found greaterThan
ERROR: 0:16 error(164) l-value required assigned "colourIn" (can't modify an input)
ERROR: 0:15 error(179) Boolean expression expected
From this I gather that the greaterThan function is being used incorrectly (I can't find an example in similar circumstances) and that the colourIn value cannot be modified?
I may be wrong... Please help me figure this out.
Basically, I want to take any pixel with a blue value greater than 0.5 and set that blue value to 0.0.
Yes, it is true that relational operators only work on scalars... but what on Earth is colourIn declared as (a bvec)? Since boolean colours do not make much sense, colourIn.b would usually be a scalar component of a vecN or ivecN type. Please include the actual body of the shader you are trying to compile.
Additionally, greaterThan (...) does not work on scalar types, only vectors. It returns a boolean vector containing the result of the test v1 > v2 for each component of v1 and v2.
So, for instance, consider the following pseudo-code:
vec3 v1 = vec3 (1,2,3);
vec3 v2 = vec3 (3,2,1);
bvec3 gt = greaterThan (v1, v2);
Then the boolean vector gt would have the following form:
gt.x = false;
gt.y = false;
gt.z = true;
However, the biggest problem you have is that you are trying to modify an input value. You cannot do this: fragment shader inputs are interpolated during rasterization from the outputs of the vertex transform stages (vertex shader, geometry shader, tessellation shader) and are read-only. Vertex shader inputs come from your vertex buffer and are also read-only. The only thing a shader can do is compute the output for the next stage in the pipeline.
In a fragment shader, the next stage is blending for final pixel output. In a vertex shader, it is tessellation (GL4+), primitive assembly (geometry shader), and rasterization (fragment shader).
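Putting that together, the fix is to copy the input into a local variable (or write straight to your output) and compare the scalar with plain >. Here is a minimal sketch, assuming colourIn is a vec4 input; the output name fragColour is mine, adapt it to your shader:
#version 150

in vec4 colourIn;    // interpolated input: read-only
out vec4 fragColour; // declared fragment output: writable

void main (void)
{
    vec4 colour = colourIn; // copy the input into a modifiable local

    if (colour.b > 0.5)     // plain > is fine on a scalar float
        colour.b = 0.0;

    fragColour = colour;
}
No greaterThan (...) is needed here at all, because colour.b is a scalar float.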
Related
I am writing a shader and I would like to pass a vec3 along as input.
However, everything I could find passes either a single float, a vec4, a texture, or a number range. Is it possible to send a plain vec3 to a shader in Unity?
Properties
{
offset ("formula Offset", Vector) = (0, 0, 0)
}
Doesn't seem to work as I hoped.
To get it to compile I have been doing this:
Properties
{
offset ("formula Offset", Vector) = (0, 0, 0, 0)
}
// offset.xyz //Extract relevant data from vector
This just doesn't feel right. Is there a better way?
Looks like when you mark a property as Vector it has to have 4 components. Even the documentation says: "Vector properties are displayed as four number fields."
This really isn't as bad as it looks; just set the last component to zero.
Note that, annoyingly, the matching variable is NOT "vector", it's "float4".
Full list:
https://stackoverflow.com/a/37749687/294884
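For illustration, a minimal sketch of a complete shader wiring this up (the shader name, the pass body, and the UnityObjectToClipPos usage are my assumptions; the point is only the Properties/float4 pairing):
Shader "Custom/OffsetExample"
{
    Properties
    {
        offset ("formula Offset", Vector) = (0, 0, 0, 0)
    }
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            float4 offset; // matching variable is float4, NOT "vector"

            float4 vert (float4 vertex : POSITION) : SV_POSITION
            {
                // Only offset.xyz is meaningful; w is just padding.
                return UnityObjectToClipPos(vertex + float4(offset.xyz, 0));
            }

            fixed4 frag () : SV_Target
            {
                return fixed4(1, 1, 1, 1);
            }
            ENDCG
        }
    }
}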
I have a vertex shader:
#version 430

in vec4 position;

void main(void)
{
    //gl_Position = position; => works in ALL cases
    gl_Position = vec4(0,0,0,1);
}
if I do:
m_program.setAttributeArray(0, m_vertices.constData());
m_program.enableAttributeArray(0);
everything works fine. However, if I do:
m_program.setAttributeArray("position", m_vertices.constData());
m_program.enableAttributeArray("position");
NOTE: m_program.attributeLocation("position"); returns -1.
then, I get an empty window.
Qt help pages state:
void QGLShaderProgram::setAttributeArray(int location, const QVector3D *values, int stride = 0)
Sets an array of 3D vertex values on the attribute at location in this shader program. The stride indicates the number of bytes between vertices. A default stride value of zero indicates that the vertices are densely packed in values.
The array will become active when enableAttributeArray() is called on the location. Otherwise the value specified with setAttributeValue() for location will be used.
and
void QGLShaderProgram::setAttributeArray(const char *name, const QVector3D *values, int stride = 0)
This is an overloaded function.
Sets an array of 3D vertex values on the attribute called name in this shader program. The stride indicates the number of bytes between vertices. A default stride value of zero indicates that the vertices are densely packed in values.
The array will become active when enableAttributeArray() is called on name. Otherwise the value specified with setAttributeValue() for name will be used.
So why is it working when using the "int version" and not when using the "const char * version"?
It returns -1 because you commented out the only line in your shader that actually uses position.
This is not an error; it is a consequence of misunderstanding how attribute locations are assigned. Uniforms and attributes are only assigned locations after all shader stages are compiled and linked. If a uniform or attribute is not used in an active code path, it will not be assigned a location. That holds even if you use the variable to do something like this:
#version 130

in vec4 dead_pos; // Location: N/A
in vec4 live_pos; // Location: Probably 0

void main (void)
{
    vec4 not_used = dead_pos; // Not used for vertex shader output, so this is dead.
    gl_Position = live_pos;
}
It actually goes even further than this. If something is output from a vertex shader but not used in the geometry, tessellation, or fragment shader, then its code path is considered inactive.
Vertex attribute location 0 is implicitly the vertex position, by the way. It is the only vertex attribute that the GLSL spec allows to alias a fixed-function pointer function (e.g. glVertexPointer (...) == glVertexAttribPointer (0, ...)).
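As a side note, if you want the name-based overloads to work predictably, you can pin the attribute to a known location yourself before linking. A sketch using QGLShaderProgram::bindAttributeLocation, assuming the shader actually reads position (an unused attribute is optimized out and still reports -1):
// Bind "position" to location 0 before the program is linked, so the
// index-based and name-based overloads address the same attribute.
m_program.bindAttributeLocation("position", 0);
m_program.link();

m_program.setAttributeArray("position", m_vertices.constData());
m_program.enableAttributeArray("position");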
My goal was to color the vertices according to their order.
EDIT: long-term goal: access the preceding and following vertices to simulate gravity-like behavior.
I've used the following code:
#version 120
#extension GL_EXT_geometry_shader4 : enable

void main( void ) {
    for( int i = 0; i < gl_VerticesIn; i++ ) {
        gl_FrontColor = vec4( float(i) / float(gl_VerticesIn), 0.0, 0.0, 1.0 );
        gl_Position = gl_PositionIn[i];
        EmitVertex();
    }
}
but all vertices are drawn black; it seems that i is always evaluated as 0. Am I missing something or doing it wrong?
EDIT: figured out the meta-problem: how to feed all my model geometry into a single geometry shader call, so that the main loop iterates over all the vertices, not once per triangle.
You can't have a single geometry shader invocation iterate over all your vertices; it is called once for every input primitive (point, line, triangle, ...).
The solution is much easier: in the vertex shader (which actually is called for every vertex) you can read the special variable gl_VertexID, which contains the vertex's index. That index is either a counter incremented for every vertex and reset by every draw call (if using glDrawArrays), or the index from the index array (if using glDrawElements).
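For example, here is a sketch of a vertex shader in the same spirit as your geometry shader; vertexCount is a hypothetical uniform you would set from the application:
#version 120
#extension GL_EXT_gpu_shader4 : enable // provides gl_VertexID in GLSL 1.20

uniform int vertexCount; // hypothetical: total vertex count, set by the app

void main( void ) {
    // The red channel encodes this vertex's position in the draw order.
    gl_FrontColor = vec4( float(gl_VertexID) / float(vertexCount), 0.0, 0.0, 1.0 );
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}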
EDIT: Regarding the long-term goal: not directly, but you might use a texture buffer for that. It basically gives you direct linear array access to a buffer object (in your case the vertex buffer), which you can then index with this vertex index. But there might also be other ways to accomplish that, which may suffice for another question.
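A sketch of that texture buffer idea (GLSL 1.40 so texelFetch works on a samplerBuffer; the positions buffer texture and the 0.1 blend factor are assumptions):
#version 140

uniform samplerBuffer positions; // hypothetical TBO: one RGB32F texel per vertex

void main (void)
{
    int last = textureSize(positions) - 1;

    vec3 curr = texelFetch(positions, gl_VertexID).xyz;
    vec3 prev = texelFetch(positions, max(gl_VertexID - 1, 0)).xyz;
    vec3 next = texelFetch(positions, min(gl_VertexID + 1, last)).xyz;

    // Toy "gravity": pull each vertex towards the average of its neighbours.
    vec3 displaced = mix(curr, 0.5 * (prev + next), 0.1);

    gl_Position = vec4(displaced, 1.0); // apply your MVP transform as needed
}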
I am using Nvidia CG and Direct3D9 and have a question about the following code.
It compiles, but doesn't "load" (using a cgLoadProgram wrapper), and the resulting failure is described simply as "D3D failure happened".
It's part of a pixel shader compiled with the shader model set to 3.0.
What may be interesting is that this shader loads fine in the following cases:
1) Manually unrolling the while statement (into many if { } statements).
2) Removing the line with the tex2D function in the loop.
3) Switching to shader model 2_X and manually unrolling the loop.
Problem part of the shader code:
float2 tex = float2(1, 1);
float2 dtex = float2(0.01, 0.01);
float h = 1.0 - tex2D(height_texture1, tex);
float height = 1.00;

while ( h < height )
{
    height -= 0.1;
    tex += dtex;
    // Remove the next line and it works (not as expected,
    // of course)
    h = tex2D( height_texture1, tex );
}
If someone knows why this can happen, or could test similar code in a non-CG environment, or could help me in some other way, I'm waiting for you ;)
Thanks.
I think you need to determine the gradients before the loop, using ddx/ddy on the texture coordinates, and then use tex2D(sampler2D samp, float2 s, float2 dx, float2 dy).
The GPU always renders quads, not single pixels (even on pixel borders; the superfluous pixels are discarded by the render backend). This is done because it allows the GPU to always calculate the screen-space texture derivatives, even when you use computed texture coordinates: it just takes the difference between the values at the pixel centers.
But this doesn't work with dynamic branching like in the code in the question, because the shader processors at the individual pixels could diverge in control flow. So you need to calculate the derivatives manually via ddx/ddy before the program flow can diverge.
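Applied to the code from the question, a sketch might look like this (the .r swizzle is my addition, assuming a single-channel height map):
float2 tex = float2(1, 1);
float2 dtex = float2(0.01, 0.01);

// Compute the screen-space gradients ONCE, while control flow is still
// uniform across the 2x2 pixel quad.
float2 dx = ddx(tex);
float2 dy = ddy(tex);

float h = 1.0 - tex2D(height_texture1, tex).r;
float height = 1.00;

while ( h < height )
{
    height -= 0.1;
    tex += dtex;
    // The gradient overload needs no implicit derivatives, so it is
    // legal inside a dynamic branch.
    h = tex2D( height_texture1, tex, dx, dy ).r;
}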
Reading the GLSL 1.40 specification:
Fragment outputs can only be float, floating-point vectors, signed or unsigned integers or integer vectors, or arrays of any of these. Matrices and structures cannot be output. Fragment outputs are declared as in the following examples:
out vec4 FragmentColor;
out uint Luminosity;
The fragment color is defined by writing gl_FragColor... is that right? Could somebody clear up my ideas about these outputs? May I write only 'FragmentColor' from the example to determine the fragment color? May I read them back ('Luminosity', for example)?
The global output variable gl_FragColor is deprecated after GLSL version 120.
Now you have to give the output a name and type yourself, as in your example.
Regarding several outputs,
this link gives you information about the mapping: http://www.opengl.org/wiki/GLSL_Objects#Program_linking
(And I found that link at: http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=270999 )
Hope this helps! :D
Ooops! I see that kvark gave the relevant information. Anyway, maybe you got something out of my text too.
Your example has 2 outputs. They have corresponding FBO slots associated after GLSL program linking. You can redirect them using glBindFragDataLocation.
Once you have activated the shader and bound the FBO, it all depends on the draw mask set by glDrawBuffers. For example, if you passed GL_COLOR_ATTACHMENT0 and GL_COLOR_ATTACHMENT2 there, output index 0 would go to attachment 0, and output index 1 would go to color attachment 2.
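In code, a sketch of that setup might be (program creation and FBO setup omitted; the attachment choices are just examples):
/* Before linking: assign the output indices explicitly. */
glBindFragDataLocation(program, 0, "FragmentColor");
glBindFragDataLocation(program, 1, "Luminosity");
glLinkProgram(program);

/* With the FBO bound: route output 0 -> attachment 0,
   output 1 -> attachment 2. */
GLenum bufs[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT2 };
glDrawBuffers(2, bufs);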
I want to give some examples:
void add(in float a, in float b, out float c)
{
    // c cannot be read here, only written.
    // Changing the values of a and b does not change anything outside.
    c = a + b;
}
void getColor(out float r, out float g, out float b)
{
    // r, g and b cannot be read here, only written.
    r = gl_FragColor.r;
    g = gl_FragColor.g;
    b = gl_FragColor.b;
}
void amplify(inout vec4 pixelColor, in float value)
{
    // inout behaves like a reference: readable and writable.
    pixelColor = pixelColor * value;
}
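And a sketch of how these functions would be called from main:
void main (void)
{
    float sum;
    add(1.0, 2.0, sum);  // sum is 3.0 after the call

    vec4 pixel = vec4(0.5, 0.5, 0.5, 1.0);
    amplify(pixel, 2.0); // pixel is doubled in place via inout

    gl_FragColor = pixel;
}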