I have a 2D texture that I loaded with
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, gs.width(), gs.height(), 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, gs.buffer());
where gs is an object with methods that return the appropriate types.
In the fragment shader I sample from the texture and attempt to use that value as the alpha channel of the resulting color. If I use the sampled value for the other channels of the output it produces what I would expect, but any value I use for the alpha channel appears to be ignored: it always draws Color at full opacity.
I am clearing the screen using:
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
Can anyone suggest what I might be doing wrong? I am getting an OpenGL 4.0 context with 8 red, 8 green, 8 blue, and 8 alpha bits.
Vertex Shader:
#version 150
in vec2 position;
in vec3 color;
in vec2 texcoord;
out vec3 Color;
out vec2 Texcoord;
void main()
{
Texcoord = texcoord;
Color = color;
gl_Position = vec4(position, 0.0, 1.0);
}
Fragment Shader:
#version 150
in vec3 Color;
in vec2 Texcoord;
out vec4 outColor;
uniform sampler2D tex;
void main()
{
float t = texture(tex, Texcoord);
outColor = vec4(Color, t);
}
Frankly, I am surprised this actually works. texture (...) returns a vec4 (unless you are using a shadow/integer sampler, which you are not). You really ought to be swizzling that texture down to just a single component if you intend to store it in a float.
I am guessing you want the alpha component of your texture, but who honestly knows -- try this instead:
float t = texture (tex, Texcoord).a; // Get the alpha channel of your texture
A half-way decent GLSL compiler would warn or error on what you are trying to do right now. I suspect yours does too, but you are not checking the shader info log when you compile your shaders.
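For reference, here is a minimal sketch of reading the info log after compiling (assuming shader holds the fragment shader object name):
GLint status = 0, logLength = 0;
glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &logLength);
if (logLength > 1) {
    /* Warnings are reported here even when compilation succeeds. */
    char *log = (char *)malloc(logLength);
    glGetShaderInfoLog(shader, logLength, NULL, log);
    fprintf(stderr, "shader info log: %s\n", log);
    free(log);
}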
Update:
The original answer did not even begin to address the madness you are doing with your GL_DEPTH_COMPONENT internal format texture. I completely missed that because the code did not fit on screen.
Why are you using gs.rgba() to pass data to a texture whose internal format and pixel transfer format are exactly one component? Also, if you intend to use a depth texture in your shader, then the reason it always returns a=1.0 is actually very simple:
Beginning with GLSL 1.30, when sampled using texture (...), depth textures are automatically set up to return the following vec4:
vec4 (r, r, r, 1.0).
The RGB components are replaced with the value of R (the floating-point depth), and A is replaced with a constant value of 1.0.
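So if that single channel is what you are after, read it from the red component; or skip the depth format entirely and upload the data as a one-channel red texture (a sketch, assuming gs.buffer() returns one byte per texel):
float t = texture(tex, Texcoord).r; // the depth value lives in r

// Alternatively, upload as a regular single-channel texture:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, gs.width(), gs.height(),
             0, GL_RED, GL_UNSIGNED_BYTE, gs.buffer());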
Your issue is that you're only passing in a vec3 when you need a vec4. RGBA has four components, not just three.
I've encoded some data into a 44487x1 luminance texture.
Now I would like to "scrub" this data across my shader, so that a slice of the texture equal in width to the pixel width of my canvas is displayed. So if the canvas is 500px wide, then 500 pixels from the texture will be shown. The texture is then translated by some offset value so that different values within the texture can be displayed.
//vertex shader
export const vs = GLSL`
#version 300 es
in vec4 position;
void main() {
gl_Position = position;
}
`;
//fragment shader
#version 300 es
#ifdef GL_ES
precision highp float;
#endif
uniform vec2 u_resolution;
uniform float u_time;
uniform sampler2D u_texture_7; //data texture
out vec4 fragColor;
void main(){
//data texture dimensions
vec2 dims = vec2(44487., 1.0);
//amount by which to translate the data texture
vec2 offset = vec2(u_time*.5, 0.);
//canvas coords
vec2 uv = gl_FragCoord.xy/u_resolution.xy;
//texture aspect ratio, w/h
float textureAspect = 44487. / 1.;
vec3 col = vec3(0.);
//the texture is 44487x wider than uv, I guess?
vec2 textCoords = vec2((uv.x/textureAspect)+offset.x, uv.y);
//get texture values
vec3 text = texture(u_texture_7, textCoords).rgb;
//output
fragColor = vec4(text, 1.);
}
However, this doesn't seem to work. All I get is a black screen. Is using a wide texture like this a good way to go about getting the array values into the shader? The texture is very small in size, but I'm wondering if the dimensions might still be causing an issue.
As an alternative to providing one large texture, I could provide a smaller texture but update the texture uniform values via JS?
After trying several different approaches, the workaround I ended up using was uploading the 44487x1 image to a separate 2D canvas, performing the transformations of the texture in the 2D canvas rather than in the shader, and then sending that canvas to the shader as a texture.
Might not be the most efficient solution, but it avoids having to mess around with the texture too much in the shader.
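A rough sketch of that workaround, in case it helps someone (names like img, dataTex, and offset are illustrative; it assumes a WebGL2 context gl and that the data image is already loaded):
const scratch = document.createElement('canvas');
scratch.width = gl.canvas.width; // e.g. 500
scratch.height = 1;
const ctx = scratch.getContext('2d');
// Copy a canvas-wide slice of the 44487x1 image, starting at the current offset.
ctx.drawImage(img, offset, 0, scratch.width, 1, 0, 0, scratch.width, 1);
// Re-upload the scratch canvas as the data texture whenever the offset changes.
gl.bindTexture(gl.TEXTURE_2D, dataTex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, scratch);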
I have several objects without texture coordinates (UV) passed into the fragment shader, and only two other objects with texture coordinates. The objects without textures were still visible, but with a dull color; after plugging in the lighting equations they become black and invisible. How do I draw the non-textured objects without changing their original color while also keeping the lighting equation? (I have already created color arrays for them and passed them into the vertex shader.) I've tried this, but my fragment shader wouldn't compile:
#version 330
in vec3 fragmentColor;
in vec3 fragmentNormal;
in vec2 UV;
in vec4 Position;
uniform vec4 lighteye;
uniform float intensityh;
uniform float intensityd;
uniform float objectd;
uniform vec4 worldCoord;
// Data for the texture
uniform sampler2D texture_Colors;
if(UV.x >= 0.0)
color = intensityh * texture2D( texture_Colors, UV ).rgb * diffuse + (intensityd * texture2D( texture_Colors, UV ).rgb * something) ;
else
color = vec4(fragmentColor,1.0);
As far as I understood, you could do something like this:
color = vec4(intensityh * texture2D(texture_Colors, worldCoord.xy).rgb * diffuse + intensityd * texture2D(texture_Colors, worldCoord.xy).rgb * something, 1.0);
This should texture your objects based on their position in 3D space. Don't forget, however, to enable texture repeating in your CPU-side code for each texture, like this:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
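As for the compile failure itself: the branch has to live inside main() and both assignments must have the same type. A minimal sketch of a version that should compile (diffuse and something are assumed to come from your lighting code, and #version 330 core uses texture rather than texture2D):
out vec4 color;
void main()
{
    if (UV.x >= 0.0) {
        vec3 texColor = texture(texture_Colors, UV).rgb;
        color = vec4(intensityh * texColor * diffuse + intensityd * texColor * something, 1.0);
    } else {
        color = vec4(fragmentColor, 1.0);
    }
}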
Recently I read an article about a sun shader (XNA Sun Shader) and decided to implement it using OpenGL ES 2.0, but I ran into a problem with the shader:
I have two textures: a fire gradient texture, and a texture in which every white part must be colored by the first one. The goal is a result that combines the two (do not pay attention to the fact that the result texture is rendered on a sphere mesh).
I really hope that somebody knows how to implement this shader.
You can first sample the original texture; if the color is white, then sample the gradient texture.
uniform sampler2D Texture0; // original texture
uniform sampler2D Texture1; // gradient texture
varying vec2 texCoord;
void main(void)
{
gl_FragColor = texture2D( Texture0, texCoord );
// If the color in original texture is white
// use the color in gradient texture.
if (gl_FragColor == vec4(1.0, 1.0, 1.0,1.0)) {
gl_FragColor = texture2D( Texture1, texCoord );
}
}
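One caveat (my addition, not part of the original answer): comparing for exact equality against pure white is fragile once texture filtering or compression is involved, so a thresholded test is safer:
// Treat "nearly white" as white instead of requiring exact 1.0 values.
if (all(greaterThanEqual(gl_FragColor.rgb, vec3(0.99)))) {
    gl_FragColor = texture2D( Texture1, texCoord );
}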
I recently started learning GLSL, and now I have a problem with texturing. I've read all the topics about it and found the same solid-color problem, but there it had a different cause. I have a simple quadrilateral (the ground) and I simply want to render a grass texture on it. Shaders:
Fragment:
#version 330
uniform sampler2D color_texture;
in vec4 color;
out vec2 texCoord0;
void main()
{
gl_FragColor = color+texture(color_texture,texCoord0.st);
}
Vertex:
#version 330
uniform mat4 projection_matrix;
uniform mat4 modelview_matrix;
in vec3 a_Vertex;
in vec3 a_Color;
in vec2 a_texCoord0;
out vec4 color;
out vec2 texCoord0;
void main()
{
texCoord0 = a_texCoord0;
gl_Position = (projection_matrix * modelview_matrix) * vec4(a_Vertex, 1.0);
color = vec4(a_Color,0.3);
}
My texture and primitive coords:
static GLint m_primcoords[12]=
{0,0,0,
0,0,100,
100,0,100,
100,0,0};
static GLfloat m_texcoords[8]=
{0.0f,0.0f,
0.0f,1.0f,
1.0f,1.0f,
1.0f,0.0f};
Buffers:
glBindBuffer(GL_ARRAY_BUFFER,vertexcBuffer);
glBufferData(GL_ARRAY_BUFFER,sizeof(GLint)*12,m_primcoords,GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER,colorBuffer);
glBufferData(GL_ARRAY_BUFFER,sizeof(GLfloat)*12,m_colcoords,GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER,textureBuffer);
glBufferData(GL_ARRAY_BUFFER,sizeof(GLfloat)*8,m_texcoords,GL_STATIC_DRAW);
and my rendering method:
GLfloat modelviewMatrix[16];
GLfloat projectionMatrix[16];
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
cameraMove();
GLuint texturegrass = ploadtexture("grass.BMP");
glBindTexture(GL_TEXTURE_2D, texturegrass);
glGetFloatv(GL_MODELVIEW_MATRIX,modelviewMatrix);
glGetFloatv(GL_PROJECTION_MATRIX,projectionMatrix);
shaderProgram->sendUniform4x4("modelview_matrix",modelviewMatrix);
shaderProgram->sendUniform4x4("projection_matrix",projectionMatrix);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);
glActiveTexture(GL_TEXTURE0);
shaderProgram->sendUniform("color_texture",0);
glBindBuffer(GL_ARRAY_BUFFER,colorBuffer);
glVertexAttribPointer((GLint)1,3,GL_FLOAT,GL_FALSE,0,0);
glBindBuffer(GL_ARRAY_BUFFER,textureBuffer);
glVertexAttribPointer((GLint)2,2,GL_FLOAT,GL_FALSE,0,(GLvoid*)m_texcoords);
glBindBuffer(GL_ARRAY_BUFFER,vertexcBuffer);
glVertexAttribPointer((GLint)0,3,GL_INT,GL_FALSE,0,0);
glDrawArrays(GL_QUADS,0,12);
So it looks like the code only reads 4 pixels from my texture (the corners) and the output color becomes outColor = ctopleft + ctopright + cbotleft + cbotright.
I can send more code if you want, but I think the problem lies in these lines.
I tried different coordinates, orderings, everything. I also read almost all the topics about problems like this. I'm using Beginning OpenGL Game Programming, 2nd ed., but I don't have the CD, so I can't check whether I'm coding it correctly, because only parts of the code are printed in the book.
There are a couple of problems with your code.
In the fragment shader you have declared texCoord0 as out; it should be in in the fragment shader and out in the vertex shader, since it is passed from one to the other.
You are binding your texture before you set the "active" texture unit. It defaults to GL_TEXTURE0, but this is still bad practice.
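Putting both fixes together, a minimal corrected version might look like this (note also that #version 330 core has no built-in gl_FragColor, so the fragment shader needs its own output variable):
// Fragment shader
#version 330
uniform sampler2D color_texture;
in vec4 color;
in vec2 texCoord0;  // was out; inputs arrive from the vertex shader
out vec4 fragColor; // replaces gl_FragColor in the core profile
void main()
{
    fragColor = color + texture(color_texture, texCoord0.st);
}

// CPU side: select the texture unit before binding
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texturegrass);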
I have two geometry passes. In the first pass, I write the fragment's depth value to a float texture with glBlendEquation(GL_MIN), similar to dual depth peeling. In the second pass I use it for early depth testing in the fragment shader.
However, for some fragments the depth test fails unless I slightly offset the min depth value (eps below):
Setting up the texture:
glBindTexture(GL_TEXTURE_2D, offscreenDepthMapTextureId);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RG32F, screenWidth, screenHeight);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, offscreenDepthMapFboId);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
offscreenDepthMapTextureId, 0);
Note that the texture is used as a color attachment, not the depth attachment. I have disabled GL_DEPTH_TEST in both passes, since I can't use it in the final pass.
Vertex shader, used in both passes:
#version 430
layout(location = 0) in vec4 position;
uniform mat4 mvp;
invariant gl_Position;
void main()
{
gl_Position = mvp * position;
}
First pass fragment shader, with offscreenDepthMapFboId bound, so it writes the depth as color to the texture. Blending makes sure that only the min value ends up in the red component.
#version 430
out vec4 outputColor;
void main()
{
outputColor.rg = vec2(gl_FragCoord.z, -gl_FragCoord.z);
}
Second pass, writing to the default framebuffer. The texture is used as depthTex.
#version 430
out vec4 outputColor;
uniform sampler2D depthTex;
void main()
{
vec2 zwMinMax = texelFetch(depthTex, ivec2(gl_FragCoord.xy), 0).rg;
float zwMin = zwMinMax.r;
float zwMax = -zwMinMax.g;
float eps = 0.0; // doesn't work
//float eps = 0.0000001; // works
if (gl_FragCoord.z > zwMin + eps)
discard;
outputColor = vec4(1, 0, 0, 1);
}
The invariant qualifier in the vertex shader didn't help. Using eps = 0.0000001 seems like a crude workaround as I can't be sure that this particular value will always work. How do I get the two shaders to produce the exact same gl_FragCoord.z? Or is the depth texture the problem? Wrong format? Are there any conversions happening that I'm not aware of?
Have you tried glDepthFunc(GL_EQUAL) instead of your in-shader comparison approach?
Are you reading and writing to the same depth texture in the same pass?
I'm not sure whether gl_FragCoord's format and precision match those of the depth buffer. It might also be a driver glitch, but glDepthFunc(GL_EQUAL) should work as expected.
Do not use gl_FragCoord.z as the depth value.
In vertex shader:
out float depth;
void main()
{
...
depth = ((gl_DepthRange.diff * (gl_Position.z / gl_Position.w)) +
gl_DepthRange.near + gl_DepthRange.far) / 2.0;
}