Set GLSL pixels of texture2D

I want to create a 2D plotter in GLSL (with SFML for window handling). I import an empty texture into the fragment shader via uniform sampler2D texture (which works). Then I try iterating over gl_TexCoord and setting each pixel to a colour.
Doing this changes the colour to red
vec4 pixel = texture2D(texture, gl_TexCoord[0].xy);
pixel = vec4(1.0, 0.0, 0.0, 1.0);
gl_FragColor = pixel * gl_Color;
However, this turns the whole thing red as well:
for (int j = 0; j < gl_TexCoord[0].y; j++)
    for (int i = 0; i < gl_TexCoord[0].x; i++)
    {
        vec4 pixel = texture2D(texture, vec2(i, j));
        if (i * 2 == j) // y = 2x
        {
            pixel = vec4(1.0, 0.0, 0.0, 1.0);
        }
        else
        {
            pixel = vec4(0.0, 0.0, 0.0, 1.0);
        }
        gl_FragColor = pixel * gl_Color;
    }
This is supposed to only colour the pixels that have coordinates where y = 2x.
I am not very sure whether I have understood the idea of texture2D correctly. If this is not the way to do it, then how do you change the pixels of an empty texture?

texture2D(texture, vec2(u,v))
texture2D samples a texel from the texture bound to the sampler at texture coordinate (u, v).
The texture is an input parameter: you don't write into the texture, you write into the framebuffer.
The fragment shader's main task is just to provide a colour (RGBA) that will be written to the framebuffer's colour buffer (by default, the one you see on the screen).
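Applied to your plotter, this means you don't iterate over pixels at all: the fragment shader already runs once per pixel, and each invocation only decides the colour of its own fragment. A minimal sketch of that idea (the resolution uniform is an assumption here: you would set it yourself from the SFML side, e.g. via sf::Shader::setParameter, and the empty texture doesn't even need to be sampled):
uniform vec2 resolution; // assumed: texture size in pixels, set from the application

void main()
{
    // Position of this fragment in pixels.
    vec2 pixelCoord = gl_TexCoord[0].xy * resolution;
    // Red if the fragment lies (within one pixel) on the line y = 2x, black otherwise.
    if (abs(pixelCoord.y - 2.0 * pixelCoord.x) < 1.0)
        gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
    else
        gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
}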
If you want to create a texture using GLSL, you can render into a framebuffer object that has an (empty) colour texture attached. When you draw, your texture will be filled. See details here:
Framebuffers
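A minimal render-to-texture sketch with plain OpenGL calls (fbo and targetTex are placeholder names; error handling and completeness checks are omitted):
// Create an empty colour texture to render into.
GLuint targetTex;
glGenTextures(1, &targetTex);
glBindTexture(GL_TEXTURE_2D, targetTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// Attach it to a framebuffer object.
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, targetTex, 0);

// While this FBO is bound, whatever you draw is written into targetTex.
// Draw here, then switch back to the default framebuffer:
glBindFramebuffer(GL_FRAMEBUFFER, 0);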
The other option is to write into the texture directly from the CPU, without GLSL. See glTexSubImage2D to modify a texture that has already been uploaded to the GPU.
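For example, to overwrite a single pixel of a texture that already exists on the GPU (a sketch; x, y and tex stand for your own values):
unsigned char red[4] = { 255, 0, 0, 255 }; // RGBA
glBindTexture(GL_TEXTURE_2D, tex);
// Replace the 1x1 region at pixel (x, y) with red.
glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, red);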
In your example, it could be easier to create an array with the colours on the CPU and use it to fill the texture with glTexImage2D:
unsigned char* vertexColors = new unsigned char[width * height * 4];
// fill vertexColors ...
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, vertexColors);
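And since you are using SFML anyway, the same CPU-side approach works without raw OpenGL calls, via sf::Texture::update. A sketch (assuming SFML 2.x):
#include <SFML/Graphics.hpp>
#include <vector>

// Plot y = 2x into a CPU-side RGBA buffer and upload it to the texture.
sf::Texture makePlotTexture(unsigned width, unsigned height)
{
    sf::Texture texture;
    texture.create(width, height);
    std::vector<sf::Uint8> pixels(width * height * 4, 0); // transparent black
    for (unsigned x = 0; x < width; ++x)
    {
        unsigned y = 2 * x; // y = 2x
        if (y < height)
        {
            std::size_t i = (y * width + x) * 4;
            pixels[i + 0] = 255; // red
            pixels[i + 3] = 255; // opaque
        }
    }
    texture.update(pixels.data());
    return texture;
}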

Related

sampling GL_DEPTH_COMPONENTs of type GL_UNSIGNED_SHORT in GLSL shader

I have access to a depth camera's output. I want to visualise this in OpenGL using a compute shader.
The depth feed is given as a frame, and I know the width and height ahead of time. How do I sample the texture and retrieve the depth value in the shader? Is this possible? I've read through the OpenGL types here and can't find anything on unsigned shorts, so I am starting to worry. Are there any workarounds?
My current compute shader
#version 430
layout(local_size_x = 1, local_size_y = 1) in;
layout(rgba32f, binding = 0) uniform image2D img_output;
uniform float width;
uniform float height;
uniform sampler2D depth_feed;
void main() {
    // get index in global work group, i.e. the x,y position
    vec2 sample_coords = ivec2(gl_GlobalInvocationID.xy) / vec2(width, height);
    float visibility = texture(depth_feed, sample_coords).r;
    vec4 pixel = vec4(1.0, 1.0, 0.0, visibility);
    // output to a specific pixel in the image
    imageStore(img_output, ivec2(gl_GlobalInvocationID.xy), pixel);
}
The depth texture definition is as follows:
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16, width, height, 0,GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, nullptr);
Currently my code produces a plain yellow screen.
If you use perspective projection, then the depth value is not linear. See LearnOpenGL - Depth testing.
If all the depth values are near 0.0, and you use the following expression:
vec4 pixel = vec4(vec3(visibility), 1.0);
then all the pixels appear almost black. Actually the pixels are not completely black, but the difference is barely noticeable.
This happens when the far plane is "too" far away. To verify that, you can raise 1.0 - visibility to a power to make the different depth values recognizable. For instance:
float exponent = 5.0;
vec4 pixel = vec4(vec3(pow(1.0-visibility, exponent)), 1.0);
If you want a more sophisticated solution, you can linearize the depth values as explained in the answer to How to render depth linearly in modern OpenGL with gl_FragCoord.z in fragment shader?.
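A sketch of that linearization (near and far are assumed uniforms holding your camera's projection planes):
uniform float near; // assumed: near plane distance
uniform float far;  // assumed: far plane distance

float linearizeDepth(float depth)
{
    float z_ndc = depth * 2.0 - 1.0; // back to normalized device coordinates [-1, 1]
    return (2.0 * near * far) / (far + near - z_ndc * (far - near));
}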
Please note that for a satisfactory visualization you should use the entire range of the depth buffer ([0.0, 1.0]). The geometry must be between the near and far planes, but try to move the near and far planes as close to the geometry as possible.
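One more thing worth ruling out, since the question doesn't show the texture parameter calls (so this is an assumption about your setup): with the default minification filter, GL_NEAREST_MIPMAP_LINEAR, a texture without mipmaps is incomplete, and sampling it returns zero. Setting the filters explicitly avoids that:
glBindTexture(GL_TEXTURE_2D, depthTex); // depthTex: your depth texture's name
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);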

Display Part of Texture in GLSL

I'm using GLSL to draw sprites from a sprite sheet. I'm using jME 3, but there are only small differences, and only with regard to deprecated functions.
The most important part of drawing a sprite from a sprite sheet is to draw only a subset/range of pixels, for example the range from (100, 0) to (200, 100). In the following test case sprite-sheet, and using the previous bounds, only the green part of the sprite-sheet would be drawn.
[test sprite-sheet image: a 300x100 sheet whose middle third, from (100, 0) to (200, 100), is green]
This is what I have so far:
Definition:
MaterialDef Solid Color {
    // This is the list of user-defined variables to be used in the shader
    MaterialParameters {
        Vector4 Color
        Texture2D ColorMap
    }
    Technique {
        VertexShader GLSL100:   Shaders/tc_s1.vert
        FragmentShader GLSL100: Shaders/tc_s1.frag
        WorldParameters {
            WorldViewProjectionMatrix
        }
    }
}
.vert file:
uniform mat4 g_WorldViewProjectionMatrix;
attribute vec3 inPosition;
attribute vec4 inTexCoord;
varying vec4 texture_coordinate;
void main(){
    gl_Position = g_WorldViewProjectionMatrix * vec4(inPosition, 1.0);
    texture_coordinate = vec4(inTexCoord);
}
.frag:
uniform vec4 m_Color;
uniform sampler2D m_ColorMap;
varying vec4 texture_coordinate;
void main(){
    vec4 color = vec4(m_Color);
    vec4 tex = texture2D(m_ColorMap, texture_coordinate.xy); // texture2D takes a vec2; use the first two components
    color *= tex;
    gl_FragColor = color;
}
In jME 3, inTexCoord refers to gl_MultiTexCoord0, and inPosition refers to gl_Vertex.
As you can see, I tried to give the texture_coordinate a vec4 type, rather than a vec2, so as to be able to reference its p and q values (texture_coordinate.p and texture_coordinate.q). Modifying them only resulted in different hues.
m_Color refers to the color, inputted by the user, and serves the purpose of altering the hue. In this case, it should be disregarded.
So far, the shader works as expected and the texture displays correctly.
I've been using resources and tutorials from NeHe (http://nehe.gamedev.net/article/glsl_an_introduction/25007/) and Lighthouse3D (http://www.lighthouse3d.com/tutorials/glsl-tutorial/simple-texture/).
Which functions/values should I alter to get the desired effect of displaying only part of the texture?
Generally, if you want to only display part of a texture, then you change the texture coordinates associated with each vertex. Since you don't show your code for how you're telling OpenGL about your vertices, I'm not sure what to suggest. But in general, if you're using older deprecated functions, instead of doing this:
// Lower Left of triangle
glTexCoord2f(0,0);
glVertex3f(x0,y0,z0);
// Lower Right of triangle
glTexCoord2f(1,0);
glVertex3f(x1,y1,z1);
// Upper Right of triangle
glTexCoord2f(1,1);
glVertex3f(x2,y2,z2);
You could do this:
// Lower Left of triangle
glTexCoord2f(1.0 / 3.0, 0.0);
glVertex3f(x0,y0,z0);
// Lower Right of triangle
glTexCoord2f(2.0 / 3.0, 0.0);
glVertex3f(x1,y1,z1);
// Upper Right of triangle
glTexCoord2f(2.0 / 3.0, 1.0);
glVertex3f(x2,y2,z2);
If you're using VBOs, then you need to modify your array of texture coordinates to access the appropriate section of your texture in a similar manner.
For the sampler2D the texture coordinates are normalized so that the leftmost and bottom-most coordinates are 0, and the rightmost and topmost are 1. So for your example of a 300-pixel-wide texture, the green section would be between 1/3rd and 2/3rds the width of the texture.
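For example, a small helper that converts a pixel rectangle on the sheet into the normalized coordinates to store in your texture-coordinate array (a sketch; the struct and function names are made up for illustration):
struct UVRect { float u0, v0, u1, v1; };

// Convert a pixel rectangle on the sprite sheet to normalized [0, 1] coordinates.
UVRect spriteUVs(int x0, int y0, int x1, int y1, int sheetW, int sheetH)
{
    UVRect r;
    r.u0 = (float)x0 / sheetW;
    r.v0 = (float)y0 / sheetH;
    r.u1 = (float)x1 / sheetW;
    r.v1 = (float)y1 / sheetH;
    return r;
}

// For the example above: spriteUVs(100, 0, 200, 100, 300, 100)
// gives u in [1/3, 2/3] and v in [0, 1] -- the green part of the sheet.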

OpenGL 3D texture issue

I'm trying to use a 3D texture in OpenGL to implement volume rendering. Each voxel has an RGBA colour value and is currently rendered as a screen-facing quad (for testing purposes). I just can't seem to get the sampler to give me a colour value in the shader; the quads always end up black. When I change the shader to generate a colour (based on xyz coords), it works fine. I'm loading the texture with the following code:
glGenTextures(1, &tex3D);
glBindTexture(GL_TEXTURE_3D, tex3D);
unsigned int colours[8];
colours[0] = Colour::AsBytes<unsigned int>(Colour::Blue);
colours[1] = Colour::AsBytes<unsigned int>(Colour::Red);
colours[2] = Colour::AsBytes<unsigned int>(Colour::Green);
colours[3] = Colour::AsBytes<unsigned int>(Colour::Magenta);
colours[4] = Colour::AsBytes<unsigned int>(Colour::Cyan);
colours[5] = Colour::AsBytes<unsigned int>(Colour::Yellow);
colours[6] = Colour::AsBytes<unsigned int>(Colour::White);
colours[7] = Colour::AsBytes<unsigned int>(Colour::Black);
glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA, 2, 2, 2, 0, GL_RGBA, GL_UNSIGNED_BYTE, colours);
The colours array contains the correct data, i.e. the first four bytes have values 0, 0, 255, 255 for blue. Before rendering I bind the texture to the 2nd texture unit like so:
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_3D, tex3D);
And render with the following code:
shaders["DVR"]->Use();
shaders["DVR"]->Uniforms["volTex"].SetValue(1);
shaders["DVR"]->Uniforms["World"].SetValue(Mat4(vl_one));
shaders["DVR"]->Uniforms["viewProj"].SetValue(cam->GetViewTransform() * cam->GetProjectionMatrix());
QuadDrawer::DrawQuads(8);
I have used these classes for setting shader params before, and they work fine. The QuadDrawer draws eight instanced quads. The vertex shader code looks like this:
#version 330
layout(location = 0) in vec2 position;
layout(location = 1) in vec2 texCoord;
uniform sampler3D volTex;
ivec3 size = ivec3(2, 2, 2);
uniform mat4 World;
uniform mat4 viewProj;
smooth out vec4 colour;
void main()
{
    vec3 texCoord3D;
    int num = gl_InstanceID;
    texCoord3D.x = num % size.x;
    texCoord3D.y = (num / size.x) % size.y;
    texCoord3D.z = (num / (size.x * size.y));
    texCoord3D /= size;
    texCoord3D *= 2.0;
    texCoord3D -= 1.0;
    colour = texture(volTex, texCoord3D);
    //colour = vec4(texCoord3D, 1.0);
    gl_Position = viewProj * World * vec4(texCoord3D, 1.0) + (vec4(position.x, position.y, 0.0, 0.0) * 0.05);
}
Uncommenting the line where I set the colour value equal to the texcoord works fine and makes the quads coloured. The fragment shader is simply:
#version 330
smooth in vec4 colour;
out vec4 outColour;
void main()
{
    outColour = colour;
}
So my question is: what am I doing wrong? Why is the sampler not getting any colour values from the 3D texture?
[EDIT]
Figured it out, but can't self-answer (new user):
As soon as I posted this I figured it out, so I'll put the answer up to help anyone else (it's not specifically a 3D-texture issue, and I've fallen afoul of it before, d'oh!). I didn't generate mipmaps for the texture, and I hadn't set the minification/magnification filters to either GL_LINEAR or GL_NEAREST. Boom! No textures. The same thing happens with 2D textures.
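In code, the fix looks something like this (a sketch; pick one of the two options):
glBindTexture(GL_TEXTURE_3D, tex3D);
// Option 1: no mipmaps, so use filters that don't reference mipmap levels.
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// Option 2: keep the default mipmapped filter and generate the mipmaps instead.
// glGenerateMipmap(GL_TEXTURE_3D);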

Volume rendering (using GLSL) with the ray casting algorithm

I am learning volume rendering using the ray casting algorithm. I found a good demo and tutorial here. The problem is that I have an ATI graphics card instead of an NVIDIA one, so I can't use the Cg shaders from the demo, and I want to convert them to GLSL. I have gone through the Red Book (7th edition) of OpenGL, but I am not familiar with GLSL or Cg.
Can anyone help me change the Cg shaders in the demo to GLSL? Or is there any material on the simplest possible demo of volume rendering using ray casting (in GLSL, of course)?
Here is the Cg shader of the demo; it works on my friend's NVIDIA graphics card. What confuses me most is that I don't know how to translate the entry part of the Cg code to GLSL, for example:
struct vertex_fragment
{
    float4 Position : POSITION; // For the rasterizer
    float4 TexCoord : TEXCOORD0;
    float4 Color    : TEXCOORD1;
    float4 Pos      : TEXCOORD2;
};
What's more, I can write a program that binds two texture objects, on two texture units, to the shader, provided that I assign two texture coordinates when drawing, for example:
glMultiTexCoord2f(GL_TEXTURE0, 1.0, 0.0);
glMultiTexCoord2f(GL_TEXTURE1, 1.0, 0.0);
In the demo, the program binds two textures (a 2D one for backface_buffer and a 3D one for the volume texture), but issues only one texture coordinate, like glMultiTexCoord3f(GL_TEXTURE1, x, y, z). I think the GL_TEXTURE1 unit is for the volume texture, but which texture unit is for the backface_buffer? As far as I know, in order to bind a texture object in a shader, I must assign it a texture unit, for example:
glLinkProgram(p);
texloc = glGetUniformLocation(p, "tex");
volume_texloc = glGetUniformLocation(p, "volume_tex");
stepsizeloc = glGetUniformLocation(p, "stepsize");
glUseProgram(p);
glUniform1i(texloc, 0);
glUniform1i(volume_texloc, 1);
glUniform1f(stepsizeloc, stepsize);
//When rendering an object with this program.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, backface_buffer);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_3D, volume_texture);
The program compiles fine and links OK, but I get -1 for all three locations (texloc, volume_texloc and stepsizeloc). I know they may have been optimized out.
Can anyone help me translate the Cg shader to GLSL?
Edit: if you are interested in a modern OpenGL API implementation (C++ source code) with GLSL: Volume_Rendering_Using_GLSL
Problem solved. The GLSL version of the demo:
vertex shader
void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    //gl_FrontColor = gl_Color;
    gl_TexCoord[2] = gl_Position;
    gl_TexCoord[0] = gl_MultiTexCoord1;
    gl_TexCoord[1] = gl_Color;
}
fragment shader
uniform sampler2D tex;
uniform sampler3D volume_tex;
uniform float stepsize;
void main()
{
    vec2 texc = ((gl_TexCoord[2].xy / gl_TexCoord[2].w) + 1.0) / 2.0;
    vec4 start = gl_TexCoord[0];
    vec4 back_position = texture2D(tex, texc);
    vec3 dir = vec3(0.0);
    dir.x = back_position.x - start.x;
    dir.y = back_position.y - start.y;
    dir.z = back_position.z - start.z;
    float len = length(dir.xyz); // the length from front to back is calculated and used to terminate the ray
    vec3 norm_dir = normalize(dir);
    float delta = stepsize;
    vec3 delta_dir = norm_dir * delta;
    float delta_dir_len = length(delta_dir);
    vec3 vect = start.xyz;
    vec4 col_acc = vec4(0.0, 0.0, 0.0, 0.0); // The dest color
    float alpha_acc = 0.0;                   // The dest alpha for blending
    float length_acc = 0.0;
    vec4 color_sample;                       // The src color
    float alpha_sample;                      // The src alpha
    for (int i = 0; i < 450; i++)
    {
        color_sample = texture3D(volume_tex, vect);
        // why multiply by the stepsize?
        alpha_sample = color_sample.a * stepsize;
        // why multiply by 3?
        col_acc += (1.0 - alpha_acc) * color_sample * alpha_sample * 3.0;
        alpha_acc += alpha_sample;
        vect += delta_dir;
        length_acc += delta_dir_len;
        if (length_acc >= len || alpha_acc > 1.0)
            break; // terminate if opacity > 1 or the ray is outside the volume
    }
    gl_FragColor = col_acc;
}
If you look at the original Cg shader, you'll see there is only a small difference between Cg and GLSL. The most difficult part of translating the demo to the GLSL version is that the Cg calls in the OpenGL program, such as:
param = cgGetNamedParameter(program, par);
cgGLSetTextureParameter(param, tex);
cgGLEnableTextureParameter(param);
encapsulate the activation and deactivation of texture units and multitexturing (via glActiveTexture), which is very important in this demo because it uses the fixed pipeline as well as the programmable pipeline. Here is the key segment changed in the function void raycasting_pass() of main.cpp in Peter Trier's GPU raycasting tutorial:
function raycasting_pass
void raycasting_pass()
{
    // specify which texture to bind
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, final_image, 0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glUseProgram(p);
    glUniform1f(stepsizeIndex, stepsize);
    glActiveTexture(GL_TEXTURE1);
    glEnable(GL_TEXTURE_3D);
    glBindTexture(GL_TEXTURE_3D, volume_texture);
    glUniform1i(volume_tex, 1);
    glActiveTexture(GL_TEXTURE0);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, backface_buffer);
    glUniform1i(tex, 0);
    glUseProgram(p);
    glEnable(GL_CULL_FACE);
    glCullFace(GL_BACK);
    drawQuads(1.0, 1.0, 1.0); // Draw a cube
    glDisable(GL_CULL_FACE);
    glUseProgram(0);
    // revert to using only one texture unit, as for the fixed pipeline
    glActiveTexture(GL_TEXTURE1);
    glDisable(GL_TEXTURE_3D);
    glActiveTexture(GL_TEXTURE0);
}
That's it.

How do I use a GLSL shader to apply a radial blur to an entire scene?

I have a radial blur shader in GLSL, which takes a texture, applies a radial blur to it and renders the result to the screen. This works very well, so far.
The problem is, that this applies the radial blur to the first texture in the scene. But what I actually want to do, is to apply this blur to the whole scene.
What is the best way to achieve this functionality? Can I do this with only shaders, or do I have to render the scene to a texture first (in OpenGL) and then pass this texture to the shader for further processing?
// Vertex shader
varying vec2 uv;
void main(void)
{
    gl_Position = vec4(gl_Vertex.xy, 0.0, 1.0);
    gl_Position = sign(gl_Position);
    uv = (vec2(gl_Position.x, -gl_Position.y) + vec2(1.0)) / vec2(2.0);
}
// Fragment shader
uniform sampler2D tex;
varying vec2 uv;
const float sampleDist = 1.0;
const float sampleStrength = 2.2;
void main(void)
{
    float samples[10];
    samples[0] = -0.08;
    samples[1] = -0.05;
    samples[2] = -0.03;
    samples[3] = -0.02;
    samples[4] = -0.01;
    samples[5] =  0.01;
    samples[6] =  0.02;
    samples[7] =  0.03;
    samples[8] =  0.05;
    samples[9] =  0.08;
    vec2 dir = 0.5 - uv;
    float dist = sqrt(dir.x * dir.x + dir.y * dir.y);
    dir = dir / dist;
    vec4 color = texture2D(tex, uv);
    vec4 sum = color;
    for (int i = 0; i < 10; i++)
        sum += texture2D(tex, uv + dir * samples[i] * sampleDist);
    sum *= 1.0 / 11.0;
    float t = dist * sampleStrength;
    t = clamp(t, 0.0, 1.0);
    gl_FragColor = mix(color, sum, t);
}
This is basically called "post-processing", because you're applying an effect (here: radial blur) to the whole scene after it has been rendered.
So yes, you're right: the usual way to do post-processing is to:
create a screen-sized NPOT texture (GL_TEXTURE_RECTANGLE),
create a FBO, attach the texture to it
set this FBO to active, render the scene
disable the FBO and draw a full-screen quad textured with the FBO's texture (see the sketch after this list).
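Per frame, the two stages then look roughly like this (a sketch; sceneFbo, sceneTex, blurShader, renderScene and drawFullScreenQuad are placeholders for your own objects and functions):
// Stage 1: render the scene into the FBO's colour texture.
glBindFramebuffer(GL_FRAMEBUFFER, sceneFbo);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
renderScene();

// Stage 2: draw a full-screen quad to the default framebuffer,
// sampling the scene texture through the radial-blur shader.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glUseProgram(blurShader);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, sceneTex);
glUniform1i(glGetUniformLocation(blurShader, "tex"), 0);
drawFullScreenQuad();
glUseProgram(0);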
As for the "why", the reason is simple: the scene is rendered in parallel (the fragment shader is executed independently for many pixels). In order to do radial blur for pixel (x,y), you first need to know the pre-blur pixel values of the surrounding pixels. And those are not available in the first pass, because they are only being rendered in the meantime.
Therefore, you must apply the radial blur only after the whole scene is rendered, when the fragment shader for fragment (x,y) can read any pixel from the scene. This is the reason why you need two rendering stages for it.