I was wondering if there is a way to process OpenGL texture buffers so that a buffer of grayscale values is converted to RGB values on the fly through some formula of my choosing.
I already have a function like the following, which works well but outputs a 3-component color vector.
rgb convertToColor(float value);
I am very new to OpenGL and was wondering what kind of shader I should use and where it should go. My program currently cycles frames as such:
program1.bind();
program1.setUniformValue("texture", 0);           // sampler2D "texture" reads texture unit 0
program1.enableAttributeArray(vertexAttr1);
program1.enableAttributeArray(vertexTexr1);
program1.setAttributeArray(vertexAttr1, vertices.constData());
program1.setAttributeArray(vertexTexr1, texCoords.constData());
glBindTexture(GL_TEXTURE_2D, textures[0]);
// upload the new grayscale frame into the bound texture
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, widthGL, heightGL,
                GL_LUMINANCE, GL_UNSIGNED_BYTE, &displayBuff[displayStart]);
glDrawArrays(GL_TRIANGLES, 0, vertices.size());
glBindTexture(GL_TEXTURE_2D, 0);
//...
Shader
QGLShader *fshader1 = new QGLShader(QGLShader::Fragment, this);
const char *fsrc1 =
"uniform sampler2D texture;\n"
"varying mediump vec4 texc;\n"
"void main(void)\n"
"{\n"
" gl_FragColor = texture2D(texture, texc.st);\n"
"}\n";
I am trying to recreate effects in MATLAB like imagesc, as seen for example in the image below:
Something like this would work:
"uniform sampler2D texture;\n"
"uniform sampler1D mappingTexture;\n"
"varying mediump vec4 texc;\n"
"void main(void)\n"
"{\n"
" gl_FragColor = texture1D(mappingTexture, texture2D(texture, texc.st).s);\n"
"}\n";
where mappingTexture is a 1D texture that maps grayscale to color.
Of course, you could also write a function that computes an RGB color from the grayscale value, but (depending on your hardware) it may be faster to just do a texture lookup.
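For instance, an in-shader ramp might look like this rough jet-style approximation (a sketch, not MATLAB's exact imagesc colormap):
vec3 grayToColor(float v)
{
    // piecewise-linear blue -> cyan -> yellow -> red ramp, v in [0, 1]
    return clamp(vec3(1.5 - abs(4.0 * v - 3.0),
                      1.5 - abs(4.0 * v - 2.0),
                      1.5 - abs(4.0 * v - 1.0)), 0.0, 1.0);
}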
It is not clear how to bind two textures to an OpenGL program at once.
By binding each texture to a different texture unit and pointing each sampler uniform at the matching unit.
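A minimal sketch of the host side, using the Qt API from the question (mappingTexture here is a hypothetical handle for the already-created 1D colormap texture):
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textures[0]);       // grayscale data on unit 0
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_1D, mappingTexture);    // colormap on unit 1
program1.setUniformValue("texture", 0);          // sampler2D reads unit 0
program1.setUniformValue("mappingTexture", 1);   // sampler1D reads unit 1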
It looks like you already have a program loaded and set up to read the texture (program1 in your code). Assuming the vertex shader already passes texture coordinates to the fragment shader (called "texcoord" below), you should be able to change the fragment shader to something like this:
uniform sampler2D texture; // this is the greyscale texture
varying vec2 texcoord;     // passed from your vertex shader
void main()
{
    float luminance = texture2D(texture, texcoord).r; // sample the luminance texture
    gl_FragColor = convertToColor(luminance);         // run your function
}
This reads the luminance texture and calls your function, which converts a luminance value to a color. If your function returns only a 3-component RGB vector, you can change the last line to:
gl_FragColor = vec4(convertToColor(luminance), 1.0);
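Note that convertToColor must be defined in GLSL inside the fragment shader source; the C++ version from the question cannot be called from a shader. A hypothetical GLSL port, with a placeholder ramp standing in for your formula:
vec3 convertToColor(float value)
{
    // placeholder ramp: blue at 0.0 to red at 1.0; substitute your own formula
    return mix(vec3(0.0, 0.0, 1.0), vec3(1.0, 0.0, 0.0), clamp(value, 0.0, 1.0));
}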
Related
I've encoded some data into a 44487x1 luminance texture:
Now I would like to "scrub" this data across my shader, so that a slice of the texture equal in width to the pixel width of my canvas is displayed. So if the canvas is 500px wide, then 500 pixels from the texture will be shown. The texture is then translated by some offset value so that different values within the texture can be displayed.
//vertex shader
export const vs = GLSL`
#version 300 es
in vec4 position;
void main() {
gl_Position = position;
}
`;
//fragment shader
#version 300 es
#ifdef GL_ES
precision highp float;
#endif
uniform vec2 u_resolution;
uniform float u_time;
uniform sampler2D u_texture_7; //data texture
out vec4 fragColor;
void main(){
//data texture dimensions
vec2 dims = vec2(44487., 1.0);
//amount by which to translate the data texture
vec2 offset = vec2(u_time*.5, 0.);
//canvas coords
vec2 uv = gl_FragCoord.xy/u_resolution.xy;
//texture aspect ratio, w/h
float textureAspect = 44487. / 1.;
vec3 col = vec3(0.);
//texture width is 44487*larger than uv, I guess?
vec2 textCoords = vec2((uv.x/textureAspect)+offset.x, uv.y);
//get texture values
vec3 text = texture(u_texture_7, textCoords).rgb;
//output
fragColor = vec4(text, 1.);
}
However, this doesn't seem to work. All I get is a black screen. Is using a wide texture like this a good way to go about getting the array values into the shader? The texture is very small in size, but I'm wondering if the dimensions might still be causing an issue.
Alternatively, instead of providing one large texture, could I provide a smaller texture and update its values via JS?
After trying several different approaches, the workaround I ended up using was uploading the 44487x1 image to a separate 2D canvas, performing the transformations of the texture in that canvas rather than in the shader, and then sending the canvas to the shader as a texture.
It might not be the most efficient solution, but it avoids having to mess around with the texture too much in the shader.
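For what it's worth, the black screen with the original shader may simply be a hardware limit: 44487 is wider than the common maximum texture size of 8192 or 16384, and an over-sized texture has no storage and samples as black. In WebGL the limit can be queried with gl.getParameter(gl.MAX_TEXTURE_SIZE); a desktop GL sketch of the same check:
GLint maxSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize);
// any texture dimension beyond maxSize fails to allocate and samples as black
printf("max texture size: %d\n", maxSize);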
I have several objects without UV texture coordinates passed into the fragment shader, and only two objects with UV texture coordinates. The objects without textures were still visible, but with a dull color; after plugging in the lighting equations they become black and invisible. How do I draw the non-textured objects without changing their original color, while keeping the lighting equation? (I've already created color arrays for them and passed them into the vertex shader.) I've tried this, but my fragment shader wouldn't compile:
#version 330
in vec3 fragmentColor;
in vec3 fragmentNormal;
in vec2 UV;
in vec4 Position;
uniform vec4 lighteye;
uniform float intensityh;
uniform float intensityd;
uniform float objectd;
uniform vec4 worldCoord;
// Data for the texture
uniform sampler2D texture_Colors;
if(UV.x >= 0.0)
color = intensityh * texture2D( texture_Colors, UV ).rgb * diffuse + (intensityd * texture2D( texture_Colors, UV ).rgb * something) ;
else
color = vec4(fragmentColor,1.0);
As far as I understood, you could do something like this:
color = intensityh * texture2D(texture_Colors, worldCoord.xy).rgb * diffuse + (intensityd * texture2D(texture_Colors, worldCoord.xy).rgb * something);
This should texture your objects based on their position in 3D space. Don't forget, however, to enable texture repeating from CPU code for each texture, like this:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
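As an aside, the compile failure in the question's snippet is most likely the branch type mismatch: one branch assigns a vec3 (the .rgb result) and the other a vec4. A minimal type-consistent sketch, assuming color is declared vec4 and using the question's own variable names:
vec3 texColor = texture2D(texture_Colors, UV).rgb;
vec4 color;
if (UV.x >= 0.0)
    color = vec4(intensityh * texColor * diffuse + intensityd * texColor * something, 1.0);
else
    color = vec4(fragmentColor, 1.0);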
I've got two shaders like this:
const char* vertexShaderData =
"#version 450 \n"
"in vec3 vp;"
"in vec3 color;\n"
"out vec3 Color;\n"
"void main(){"
"Color=color;"
"gl_Position = vec4(vp, 1.0);"
"}";
const char* fragShaderData =
"#version 410\n"
"uniform vec4 incolor;\n"
"in vec3 Color;"
"out vec4 outColor;"
"void main(){"
"outColor = vec4(Color, 1.0);"
"}";
I understand that each shader is called for each vertex.
Where do the in parameters in my vertexShaderData get their values? At no point in the code do I specify what vp or color is. In the second shader, I get that the in value comes from the first shader's out value, but where do those initial ins come from?
About the out value of fragShaderData: how is this value used? In other words, how does OpenGL know that this is an RGB color value and know to paint the triangle with this color?
For the vertex shader,
you can use glGetAttribLocation in C++ to query the driver-assigned location for an attribute, or set it manually in GLSL with layout (location = 0) in vec3 vp;. Then you upload the data in C++ like this:
// (Vertex buffer must be bound at this point)
glEnableVertexAttribArray( a ); // 'a' would be 0 if you did the latter
glVertexAttribPointer( a, 3, GL_FLOAT, GL_FALSE, sizeof( your vertex ), nullptr );
For the fragment shader,
'in' variables must match the vertex shader's 'out' variables, as in your sample code: out vec3 Color; -> in vec3 Color;
gl_Position (computed in the vertex shader) controls where the triangle lands on screen, and therefore which pixels outColor gets painted to.
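Putting the two halves together, a hypothetical interleaved setup for both attributes (the names come from the shaders above; program is a placeholder for the linked program object):
struct Vertex { float pos[3]; float color[3]; };

GLint vpLoc    = glGetAttribLocation(program, "vp");
GLint colorLoc = glGetAttribLocation(program, "color");

// (vertex buffer must be bound at this point)
glEnableVertexAttribArray(vpLoc);
glVertexAttribPointer(vpLoc, 3, GL_FLOAT, GL_FALSE,
                      sizeof(Vertex), (void*)offsetof(Vertex, pos));
glEnableVertexAttribArray(colorLoc);
glVertexAttribPointer(colorLoc, 3, GL_FLOAT, GL_FALSE,
                      sizeof(Vertex), (void*)offsetof(Vertex, color));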
You feed the data to the vertex shader from your OpenGL calls (on the CPU). Once you have compiled and linked the program (vertex shader + fragment shader), you feed it the vertices you want.
Unlike the vertex shader, the fragment shader runs once for EVERY pixel inside the triangle you are rendering. outColor is a vec4 (R,G,B,A) that "goes to your framebuffer". As for the color: in theory, it is abstract to OpenGL. The channels are called RGBA for convenience; you can even access the same data as XYZW (an alias for RGBA). OpenGL just outputs numbers to the framebuffer you choose (according to the rules of color attachments, etc.); those four channels happen to be used by the monitor to output RGB, with A used for transparency. In other words, you can use GL programs to draw triangles that output one or two channels, depending on your needs, and those channels can mean anything you want. For example, you could interpolate a YUV image, or a UV plane (2 channels). If you output these to the monitor the colors won't be correct, since the monitor expects RGB, but the OpenGL concept is broader than RGB: it interpolates numbers for every pixel inside the triangle. That's it.
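For example, a two-channel render target (say, for that UV plane) could be set up like this, assuming a framebuffer object is already bound; the shader's output values land there as plain numbers with no inherent color meaning:
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RG16F, width, height, 0, GL_RG, GL_FLOAT, nullptr);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);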
Recently I read an article about a sun shader (XNA Sun Shader) and decided to implement it using OpenGL ES 2.0, but I ran into a problem with the shader:
I have two textures. One of them is a fire gradient texture:
The other is a texture whose white parts must be colored by the first texture:
So I'm going for a result like the one below (never mind that the result texture is rendered on a sphere mesh):
I really hope that somebody knows how to implement this shader.
You can first sample the original texture; if the color is white, then sample the gradient texture.
uniform sampler2D Texture0; // original texture
uniform sampler2D Texture1; // gradient texture
varying vec2 texCoord;
void main(void)
{
gl_FragColor = texture2D( Texture0, texCoord );
// If the color in original texture is white
// use the color in gradient texture.
if (gl_FragColor == vec4(1.0, 1.0, 1.0,1.0)) {
gl_FragColor = texture2D( Texture1, texCoord );
}
}
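One caveat not in the answer above: exact equality against pure white can fail once texture filtering or compression nudges the texels, so a threshold test may be more robust:
// treat "almost white" as white to survive filtering artifacts
if (all(greaterThan(gl_FragColor.rgb, vec3(0.99)))) {
    gl_FragColor = texture2D(Texture1, texCoord);
}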
I have a 2d texture that I loaded with
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, gs.width(), gs.height(), 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, gs.buffer());
where gs is an object with methods that return the proper types.
In the fragment shader I sample from the texture and attempt to use that as the alpha channel for the resultant color. If I use the sampled value for other channels in the output texture it produces what I would expect. Any value that I use for the alpha channel appears to be ignored, because it always draws Color.
I am clearing the screen using:
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
Can anyone suggest what I might be doing wrong? I am getting an OpenGL 4.0 context with 8 red, 8 green, 8 blue, and 8 alpha bits.
Vertex Shader:
#version 150
in vec2 position;
in vec3 color;
in vec2 texcoord;
out vec3 Color;
out vec2 Texcoord;
void main()
{
Texcoord = texcoord;
Color = color;
gl_Position = vec4(position, 0.0, 1.0);
}
Fragment Shader:
#version 150
in vec3 Color;
in vec2 Texcoord;
out vec4 outColor;
uniform sampler2D tex;
void main()
{
float t = texture(tex, Texcoord);
outColor = vec4(Color, t);
}
Frankly, I am surprised this actually works. texture (...) returns a vec4 (unless you are using a shadow/integer sampler, which you are not). You really ought to be swizzling that texture down to just a single component if you intend to store it in a float.
I am guessing you want the alpha component of your texture, but who honestly knows -- try this instead:
float t = texture (tex, Texcoord).a; // Get the alpha channel of your texture
A half-way decent GLSL compiler would warn or error on what you are trying to do right now. I suspect yours does too, but you are not checking the shader info log when you compile your shader.
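For reference, a minimal sketch of that check using plain GL calls:
GLint ok = GL_FALSE;
glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
if (ok != GL_TRUE) {
    GLchar log[1024];
    GLsizei len = 0;
    glGetShaderInfoLog(shader, sizeof(log), &len, log);
    fprintf(stderr, "shader compile failed: %.*s\n", (int)len, log);
}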
Update:
The original answer did not even begin to address the madness you are doing with your GL_DEPTH_COMPONENT internal format texture. I completely missed that because the code did not fit on screen.
Why are you using gs.buffer() to pass data to a texture whose internal and pixel transfer format is exactly 1 component? Also, if you intend to use a depth texture in your shader, then the reason it is always returning a=1.0 is actually very simple:
Beginning with GLSL 1.30, when sampled using texture (...), depth textures are automatically setup to return the following vec4:
vec4 (r, r, r, 1.0).
The RGB components are replaced with the value of R (the floating-point depth), and A is replaced with a constant value of 1.0.
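So if the goal is to use the depth value as the output alpha, sample the red channel explicitly and rebuild the vec4:
float t = texture(tex, Texcoord).r; // depth is replicated into r/g/b; a is always 1.0
outColor = vec4(Color, t);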
Your issue is that you're only passing in a vec3 when you need a vec4. RGBA - 4 components, not just three.