I pass a 1D array as sampler1D and a 2D array (basically a matrix) as sampler2D to my vertex shader.
Everything works fine - I checked the values, and every value is where it should be.
BUT - I can't seem to multiply two values of those samplers with each other.
float pos=0.0;
vec4 f = texture1D(xk,ki);
vec4 H = texture2D(er,vec2(0,i));
pos=f[0]*H[0];
colorcheck=pos;
I pass colorcheck to my fragment shader, but it won't render my object; instead everything is just black (passing colorcheck=1.0 works fine). I checked both vectors after the lookup - both have valid values in all fields.
I've tried f.x*H.x and every other combination I can think of. I even tried multiplying in the fragment shader - that doesn't work either.
EDIT
Simplified vertex shader (it doesn't work either; it does work when I pass colorcheck=1.0, f.x, H.x... anything):
uniform sampler1D xk;
uniform sampler2D eigenraum;
varying float colorcheck;
void main(){
vec4 f = texture1D(xk,0);
vec4 H = texture2D(eigenraum,vec2(0,0));
colorcheck=f[0]*H[0];
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
and fragment shader:
varying float colorcheck;
void main()
{
gl_FragColor=vec4(colorcheck,1,1,1.0);
}
Thanks for your help!
EDIT2 - it turns out I can't subtract/add them either.
I found the solution - it's not in the shader code. I forgot the glActiveTexture call when binding the arrays to the textures!
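For anyone hitting the same thing, here is a minimal sketch of the missing host-side setup, assuming the two textures are texXk and texEigen and the linked program is prog (those names are mine, not from the question):
/* Each sampler uniform reads from a texture *unit*, not from a texture
   object. Without glActiveTexture both glBindTexture calls target unit 0,
   so one binding overwrites the other and the lookups return garbage. */
glUseProgram(prog);

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_1D, texXk);
glUniform1i(glGetUniformLocation(prog, "xk"), 0);        /* sampler1D xk -> unit 0 */

glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texEigen);
glUniform1i(glGetUniformLocation(prog, "eigenraum"), 1); /* sampler2D eigenraum -> unit 1 */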
I'm able to translate the code and run it, but it behaves differently from the original fork.
https://www.shadertoy.com/view/llS3zc - original
https://editor.p5js.org/jorgeavav/sketches/i9cd4lE7H - translation
Here is the code:
uniform vec2 resolution;
uniform float time;
uniform float mouse;
uniform sampler2D texture;
uniform sampler2D texture2;
void main() {
vec2 uv = gl_FragCoord.xy / resolution.xy;
vec4 texCol = vec4(texture2D(texture, uv+time/10.0));
mat3 tfm;
tfm[0] = vec3(texCol.z,0.0,0);
tfm[1] = vec3(0.0,texCol.y,0);
tfm[2] = vec3(0,0,1.0);
vec2 muv = (vec3(uv,1.0)*tfm).xy - 0.1*time;
texCol = vec4(texture2D(texture2, muv));
gl_FragColor = texCol;
}
You have two issues:
Your textures are a lot larger than the Shadertoy ones; either use smaller textures or scale down the uv coordinates (uv *= 0.1 gives a similar scale).
Your textures are not wrapping, and their dimensions are not a power of two (which is required to enable wrapping in WebGL1). You need to either resize the textures and enable wrapping with textureWrap(REPEAT), or wrap in the shader yourself, for example by applying fract() to the lookup coordinates in your texture2D calls (see the sketch below).
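A minimal sketch of both fixes applied to the posted fragment shader body (the 0.1 scale factor is only an approximation of the Shadertoy look, so treat it as an assumption):
vec2 uv = gl_FragCoord.xy / resolution.xy;
uv *= 0.1;                                                // point 1: scale down to roughly match the Shadertoy scale
vec4 texCol = texture2D(texture, fract(uv + time/10.0));  // point 2: wrap the lookup in the shader
mat3 tfm;
tfm[0] = vec3(texCol.z, 0.0, 0.0);
tfm[1] = vec3(0.0, texCol.y, 0.0);
tfm[2] = vec3(0.0, 0.0, 1.0);
vec2 muv = (vec3(uv, 1.0) * tfm).xy - 0.1 * time;
gl_FragColor = texture2D(texture2, fract(muv));           // wrap here as well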
I'm drawing a simple textured quad (2 triangles) using a one-dimensional texture that holds 512 values ranging from 0 to 1. I'm using RGBA_32F on an NVIDIA GeForce GT 750M, with GL_LINEAR interpolation and GL_CLAMP_TO_EDGE.
I draw the quad using the following shader:
varying vec2 v_texcoord;
uniform sampler1D u_texture;
void main()
{
float x = v_texcoord.x;
float v = 512.0 * abs(texture1D(u_texture,x).r - x);
gl_FragColor = vec4(v,v,v,1);
}
Basically, I'm displaying the difference between texture values and frag coordinates. I was hoping to get a black quad (no difference) but here is what I get instead:
To narrow down the problem, I tried generating a one-dimensional texture with only two values (0 and 1) and displaying it using:
varying vec2 v_texcoord;
uniform sampler1D u_texture;
void main()
{
float x = v_texcoord.x;
float v = texture1D(u_texture,x).r;
gl_FragColor = vec4(v,v,v,1);
}
and then I get:
Obviously this is not a linear interpolation from 0 to 1. The result seems to be split into 3 areas: black, interpolated, and white. I tried different wrapping modes with no success (but different results). Any idea what I'm doing wrong here?
After searching a bit more, it seems the texture coordinate needs a small adjustment depending on the texture size:
varying vec2 v_texcoord;
uniform sampler1D u_texture;
uniform float u_texture_shape;
void main()
{
float epsilon = 1.0/u_texture_shape;
float x = epsilon/2.0 +(1.0-epsilon)*v_texcoord.x;
float v = 512.0*abs(texture1D(u_texture,x).r -v_texcoord.x);
gl_FragColor = vec4(v,v,v,1);
}
I guess this is related to the wrapping mode, but I did not find information on how wrapping is enforced at the GPU level.
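For what it's worth (this explanation is mine, not part of the original answer): with GL_LINEAR, the value stored in texel i sits at coordinate (i + 0.5)/N, so interpolation only happens between 0.5/N and 1 - 0.5/N; outside that range GL_CLAMP_TO_EDGE simply returns the edge texel. For the 2-texel test texture (N = 2) that gives:
x < 0.25            -> 0.0 (flat black, clamped to the first texel)
0.25 <= x <= 0.75   -> linear ramp between the two texel centers
x > 0.75            -> 1.0 (flat white, clamped to the last texel)
The epsilon/2.0 + (1.0 - epsilon) * x remapping above simply maps [0, 1] onto [0.5/N, 1 - 0.5/N], i.e. onto the span between the first and last texel centers.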
My vertex shader looks as follows:
#version 120
uniform float m_thresh;
varying vec2 texCoord;
void main(void)
{
gl_Position = ftransform();
texCoord = gl_TexCoord[0].xy;
}
and my fragment shader:
#version 120
uniform float m_thresh;
uniform sampler2D grabTexture;
varying vec2 texCoord;
void main(void)
{
vec4 grab = vec4(texture2D(grabTexture, texCoord.xy));
vec3 colour = vec3(grab.xyz * m_thresh);
gl_FragColor = vec4( colour, 0.5 );
}
Basically, I am getting the error message "Error in shader -842150451 - 0<9> : error C7565: assignment to varying 'texCoord'".
But I have another shader which does the exact same thing, and it compiles without errors and works!
Any ideas what could be happening?
For starters, there is no sensible reason to construct a vec4 from texture2D (...). Texture functions in GLSL always return a vec4. Likewise, grab.xyz * m_thresh is always a vec3, because a scalar multiplied by a vector does not change the dimensions of the vector.
Now, here is where things get interesting... the gl_TexCoord [n] GLSL built-in you are using is actually a pre-declared varying. You should not be reading from this in a vertex shader, because it defines a vertex shader output / fragment shader input.
The appropriate vertex shader built-in variable in GLSL 1.2 for getting the texture coordinates for texture unit N is actually gl_MultiTexCoord<N>.
Thus, your vertex and fragment shaders should look like this:
Vertex Shader:
#version 120
//varying vec2 texCoord; // You actually do not need this
void main(void)
{
gl_Position = ftransform();
//texCoord = gl_MultiTexCoord0.st; // Same as comment above
gl_TexCoord [0] = gl_MultiTexCoord0;
}
Fragment Shader:
#version 120
uniform float m_thresh;
uniform sampler2D grabTexture;
//varying vec2 texCoord;
void main(void)
{
//vec4 grab = texture2D (grabTexture, texCoord.st);
vec4 grab = texture2D (grabTexture, gl_TexCoord [0].st);
vec3 colour = grab.xyz * m_thresh;
gl_FragColor = vec4( colour, 0.5 );
}
Remember how I said gl_TexCoord [n] is a built-in varying? You can read/write to this instead of creating your own custom varying vec2 texCoord; in GLSL 1.2. I commented out the lines that used a custom varying to show you what I meant.
The OpenGLĀ® Shading Language (1.2) - 7.6 Varying Variables - pp. 53
The following built-in varying variables are available to write to in a vertex shader. A particular one should be written to if any functionality in a corresponding fragment shader or fixed pipeline uses it or state derived from it.
[...]
varying vec4 gl_TexCoord[]; // at most will be gl_MaxTextureCoords
The OpenGLĀ® Shading Language (1.2) - 7.3 Vertex Shader Built-In Attributes - pp. 49
The following attribute names are built into the OpenGL vertex language and can be used from within a vertex shader to access the current values of attributes declared by OpenGL.
[...]
attribute vec4 gl_MultiTexCoord0;
The bottom line is that gl_MultiTexCoord<N> defines vertex attributes (vertex shader input), gl_TexCoord [n] defines a varying (vertex shader output, fragment shader input). It is also worth mentioning that these are not available in newer (core) versions of GLSL.
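For completeness, here is a minimal sketch of what the same pair of shaders could look like in core-profile GLSL, where the built-ins above no longer exist; the attribute names, the mvp uniform, and the fragColour output are placeholders I picked, not anything from the question:
Vertex shader:
#version 150
// Generic attributes replace gl_Vertex / gl_MultiTexCoord0, a user-declared
// out replaces gl_TexCoord[0], and a uniform replaces ftransform().
uniform mat4 mvp;
in vec4 in_position;
in vec2 in_texcoord;
out vec2 texCoord;
void main(void)
{
    gl_Position = mvp * in_position;
    texCoord = in_texcoord;
}
Fragment shader:
#version 150
// texture() replaces texture2D(), and a user-declared out replaces gl_FragColor.
uniform float m_thresh;
uniform sampler2D grabTexture;
in vec2 texCoord;
out vec4 fragColour;
void main(void)
{
    vec4 grab = texture(grabTexture, texCoord);
    fragColour = vec4(grab.xyz * m_thresh, 0.5);
}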
I've been trying to do simple texture mapping with SOIL, and I've been getting bizarre output.
The output ONLY displays when a PNG texture is loaded (via SOIL_load_OGL_texture).
Other textures appear as grey or white.
vertices passed as:
struct Vertex {
float position[3];
float texturepos[2];
};
Vertex vertices[] = { // w and h are width and height of square.
0,0,0,0,0,
w,0,0,1,0,
0,h,0,0,1,
w,0,0,1,0,
w,h,0,1,1,
0,h,0,0,1
};
vertex shader:
attribute vec2 texpos;
attribute vec3 position;
uniform mat4 transform;
varying vec2 pixel_texcoord;
void main(void) {
gl_Position=transform * vec4(position,1.0);
pixel_texcoord=texpos;
}
fragment shader:
varying vec2 pixel_texcoord;
uniform sampler2D texture;
void main(void) {
gl_FragColor=texture2D(texture,pixel_texcoord);
}
All of the uniforms and attributes are validated.
Texture trying to render:
(It is 128x128, a power of 2.)
Output [with normal shaders]:
However, I think the problem lies entirely in something really bizarre that happened when I tried to debug it.
I changed the fragment shader to:
varying vec2 pixel_texcoord;
uniform sampler2D texture;
void main(void) {
gl_FragColor=vec4(pixel_texcoord.x,0,pixel_texcoord.y,1);
}
And got this result:
Something is very wrong with the texture coordinates, as according to the shader, Y is now X, and X no longer exists.
Can anyone explain this?
If my texture coordinates are correctly positioned, then I'll start looking at another image library.
[EDIT] I tried loading an image through raw GIMP-generated data, and it had the same problem.
It's as if the texture coordinates were 1-dimensional.
Found the problem! Thanks to starmole's advice, I took another look at the glVertexAttribPointer calls, which looked like this:
glVertexAttribPointer(attribute_vertex,3,GL_FLOAT,GL_FALSE,sizeof(Vertex),0);
glVertexAttribPointer(attribute_texture_coordinate,2,GL_FLOAT,GL_FALSE,sizeof(Vertex),(void*) (sizeof(GLfloat) * 2));
The 2 in (void*)(sizeof(GLfloat) * 2) should have been a 3, since there are 3 position floats before the texture coordinates.
Everything works perfectly now.
It's amazing how such a small typo can break it so badly.
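For reference, a sketch of the corrected calls; computing the offset with offsetof instead of counting floats by hand is my suggestion, not something from the original post:
#include <stddef.h>  /* for offsetof */

glVertexAttribPointer(attribute_vertex, 3, GL_FLOAT, GL_FALSE,
                      sizeof(Vertex), (void*) offsetof(Vertex, position));
glVertexAttribPointer(attribute_texture_coordinate, 2, GL_FLOAT, GL_FALSE,
                      sizeof(Vertex), (void*) offsetof(Vertex, texturepos));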
This is somewhat similar to a problem I had and posted about before: I'm trying to get normals to display correctly in my GLSL app.
For the purposes of my explanation, I'm using the ninjaHead.obj model provided with RenderMonkey for testing purposes (you can grab it here). Now in the preview window in RenderMonkey, everything looks great:
and the vertex and fragment code generated respectively is:
Vertex:
uniform vec4 view_position;
varying vec3 vNormal;
varying vec3 vViewVec;
void main(void)
{
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
// World-space lighting
vNormal = gl_Normal;
vViewVec = view_position.xyz - gl_Vertex.xyz;
}
Fragment:
uniform vec4 color;
varying vec3 vNormal;
varying vec3 vViewVec;
void main(void)
{
float v = 0.5 * (1.0 + dot(normalize(vViewVec), vNormal));
gl_FragColor = v* color;
}
I based my GLSL code on this but I'm not quite getting the expected results...
My vertex shader code:
uniform mat4 P;
uniform mat4 modelRotationMatrix;
uniform mat4 modelScaleMatrix;
uniform mat4 modelTranslationMatrix;
uniform vec3 cameraPosition;
varying vec4 vNormal;
varying vec4 vViewVec;
void main()
{
vec4 pos = gl_ProjectionMatrix * P * modelTranslationMatrix * modelRotationMatrix * modelScaleMatrix * gl_Vertex;
gl_Position = pos;
gl_TexCoord[0] = gl_MultiTexCoord0;
gl_FrontColor = gl_Color;
vec4 normal4 = vec4(gl_Normal.x,gl_Normal.y,gl_Normal.z,0);
// World-space lighting
vNormal = normal4*modelRotationMatrix;
vec4 tempCameraPos = vec4(cameraPosition.x,cameraPosition.y,cameraPosition.z,0);
//vViewVec = cameraPosition.xyz - pos.xyz;
vViewVec = tempCameraPos - pos;
}
My fragment shader code:
varying vec4 vNormal;
varying vec4 vViewVec;
void main()
{
//gl_FragColor = gl_Color;
float v = 0.5 * (1.0 + dot(normalize(vViewVec), vNormal));
gl_FragColor = v * gl_Color;
}
However my render produces this...
Does anyone know what might be causing this and/or how to make it work?
EDIT
In response to kvark's comments, here is the model rendered without any normal/lighting calculations to show all triangles being rendered.
And here is the model shaded with the normals used as colors. I believe the problem has been found! Now the question is why it is being rendered like this and how to solve it. Suggestions are welcome!
SOLUTION
Well everyone, the problem has been solved! Thanks to kvark for all his helpful insight, which has definitely helped my programming practice, but I'm afraid the answer comes from me being a MASSIVE tit... I had an error in the display() function of my code that set the glNormalPointer offset to a random value. It used to be this:
gl.glEnableClientState(GL.GL_NORMAL_ARRAY);
gl.glBindBuffer(GL.GL_ARRAY_BUFFER, getNormalsBufferObject());
gl.glNormalPointer(GL.GL_FLOAT, 0, getNormalsBufferObject());
But should have been this:
gl.glEnableClientState(GL.GL_NORMAL_ARRAY);
gl.glBindBuffer(GL.GL_ARRAY_BUFFER, getNormalsBufferObject());
gl.glNormalPointer(GL.GL_FLOAT, 0, 0);
So I guess this is a lesson: never mindlessly Ctrl+C and Ctrl+V code to save time on a Friday afternoon, and when you're sure the part of the code you're looking at is right, the problem is probably somewhere else!
What is your P matrix? (I suppose it's a world->camera view transform).
vNormal = normal4*modelRotationMatrix; Why did you change the order of the arguments? Doing that, you are multiplying the normal by the inverse rotation, which you don't really want. Use the standard order instead (modelRotationMatrix * normal4).
vViewVec = tempCameraPos - pos. This is entirely incorrect. pos is your vertex in homogeneous clip space, while tempCameraPos is in world space (I suppose). The result needs to be in the same space as your normal (world space), so use the world-space vertex position (modelTranslationMatrix * modelRotationMatrix * modelScaleMatrix * gl_Vertex) in this equation.
You seem to be mixing GL versions a bit: you are passing the matrices manually via uniforms, but using fixed-function attributes for the vertex data. Hm. Anyway...
I sincerely don't like what you're doing to your normals. Have a look:
vec4 normal4 = vec4(gl_Normal.x,gl_Normal.y,gl_Normal.z,0);
vNormal = normal4*modelRotationMatrix;
A normal only stores directional data, so why use a vec4 for it? I believe it's more elegant to just use a vec3. Furthermore, look at what happens next: you multiply the normal by the 4x4 model rotation matrix... And additionally your normal's fourth coordinate is equal to 0, so it's not a correct vector in homogeneous coordinates. I'm not sure that's the main problem here, but I wouldn't be surprised if that multiplication gave you rubbish.
The standard way to transform normals is to multiply a vec3 by the 3x3 submatrix of the model-view matrix (since you're only interested in the orientation, not the translation). Strictly speaking, the correct approach is to use the inverse transpose of that 3x3 submatrix (this becomes important once you have non-uniform scaling). In old OpenGL versions you had it precalculated as gl_NormalMatrix.
So instead of the above, you should use something like
// (...)
varying vec3 vNormal;
// (...)
mat3 normalMatrix = transpose(inverse(mat3(modelRotationMatrix))); // inverse() needs GLSL 1.40+; precompute it on the CPU otherwise
// or, if you don't have any scaling, this one works too:
mat3 normalMatrix = mat3(modelRotationMatrix);
vNormal = normalMatrix * gl_Normal; // standard order, matching the modelRotationMatrix * normal4 point above
That's certainly one thing to fix in your code - I hope it solves your problem.
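Putting those points together, here is a minimal sketch of what the vertex shader could look like (still built on the question's uniforms and fixed-function attributes, so treat it as an outline rather than a verified drop-in fix):
uniform mat4 P;
uniform mat4 modelRotationMatrix;
uniform mat4 modelScaleMatrix;
uniform mat4 modelTranslationMatrix;
uniform vec3 cameraPosition;
varying vec3 vNormal;
varying vec3 vViewVec;
void main()
{
    mat4 modelMatrix = modelTranslationMatrix * modelRotationMatrix * modelScaleMatrix;
    vec4 worldPos = modelMatrix * gl_Vertex;           // world-space position
    gl_Position = gl_ProjectionMatrix * P * worldPos;  // clip-space position, as in the question
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_FrontColor = gl_Color;
    // rotation only, so the 3x3 submatrix is enough for the normal
    vNormal = mat3(modelRotationMatrix) * gl_Normal;
    // view vector in the same (world) space as the normal
    vViewVec = cameraPosition - worldPos.xyz;
}
The fragment shader's varyings would have to become vec3 to match, and it is usually worth normalizing vNormal there as well, since interpolation shortens it.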