I am working on a simple OpenGL application based on the standard triangle tutorial. For fun, I decided to add a third value, so the data format is (data, x_axis, y_axis). data acts as a z-axis value, but the z-axis itself can be kept at zero; I just want to draw the data value at the point in the 2D plane that x_axis and y_axis describe. For example, if (x, y) = (0.23, 0.2) and the data value is 0.23, I want to place this data value at the marked (x, y). Moreover, the (fairly basic) shaders I have written are not working properly, or I am missing something.
Every x_axis and y_axis has data scanned over a range of angles: x_axis lies between 30 and 120, whereas y_axis lies between 30 and 110.
I am using vec4 since I am working with 4 components, of which z remains 0; I will remove the z-axis completely later.
Can I draw the data separately, after the vertices are drawn?
Vertex Shader:
#version 460 core
uniform mat4 model;
uniform mat4 projection;
in vec4 pos;
out vec4 cvColor;
void main()
{
    if (pos.z <= 0.10)
    {
        cvColor = vec4(0.323, 0.242, 0.22, 1.0);
    }
    else if (pos.z > 0.11 && pos.z <= 0.20)
    {
        cvColor = vec4(0.322, 0.241, 0.3, 1.0);
    }
    else if (pos.z > 0.21 && pos.z <= 0.30)
    {
        cvColor = vec4(0.453, 0.245, 0.33, 1.0);
    }
    else
    {
        cvColor = vec4(1.0, 1.0, 1.0, 1.0);
    }
    gl_Position = projection * model * vec4(pos.x, pos.y, pos.z, 1.0);
}
Fragment Shader:
#version 460 core
uniform vec4 cvColor;
out vec4 color;
void main()
{
    color = cvColor;
}
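Note that the vertex shader above declares cvColor as an out variable, while this fragment shader declares it as a uniform, so the interpolated per-vertex color never reaches the fragment stage. A minimal sketch of a matching fragment shader (same names as above; everything else assumed unchanged):
#version 460 core
in vec4 cvColor; // must match the vertex shader's "out vec4 cvColor"
out vec4 color;
void main()
{
    color = cvColor;
}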
Related
I am currently trying to draw a 2D grid on a single quad using only shaders. I am using SFML as the graphics library and sf::View to control the camera. So far I have been able to draw an anti-aliased multi level grid. The first level (blue) outlines a chunk and the second level (grey) outlines the tiles within a chunk.
I would now like to fade grid levels based on the distance from the camera. For example, the chunk grid should fade in as the camera zooms in. The same should be done for the tile grid after the chunk grid has been completely faded in.
I am not sure how this could be implemented as I am still new to OpenGL and GLSL. If anybody has any pointers on how this functionality can be implemented, please let me know.
Vertex Shader
#version 130
out vec2 texCoords;
void main() {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    texCoords = (gl_TextureMatrix[0] * gl_MultiTexCoord0).xy;
}
Fragment Shader
#version 130
uniform vec2 chunkSize = vec2(64.0, 64.0);
uniform vec2 tileSize = vec2(16.0, 16.0);
uniform vec3 chunkBorderColor = vec3(0.0, 0.0, 1.0);
uniform vec3 tileBorderColor = vec3(0.5, 0.5, 0.5);
uniform bool drawGrid = true;
in vec2 texCoords;
void main() {
    vec2 uv = texCoords.xy * chunkSize;
    vec3 color = vec3(1.0, 1.0, 1.0);
    if (drawGrid) {
        float aa = length(fwidth(uv));
        vec2 halfChunkSize = chunkSize / 2.0;
        vec2 halfTileSize = tileSize / 2.0;
        vec2 a = abs(mod(uv - halfChunkSize, chunkSize) - halfChunkSize);
        vec2 b = abs(mod(uv - halfTileSize, tileSize) - halfTileSize);
        color = mix(
            color,
            tileBorderColor,
            smoothstep(aa, 0.0, min(b.x, b.y))
        );
        color = mix(
            color,
            chunkBorderColor,
            smoothstep(aa, 0.0, min(a.x, a.y))
        );
    }
    gl_FragColor.rgb = color;
    gl_FragColor.a = 1.0;
}
You need to split the multiplication in your vertex shader into two parts:
// have a variable to be interpolated per fragment
out vec2 vertex_coordinate;
...
{
    // this stores the coordinates of the vertex before it is projected
    // (i.e. its "world" coordinates); kept as a vec4 so both lines below type-check
    vec4 world_position = gl_ModelViewMatrix * gl_Vertex;
    vertex_coordinate = world_position.xy;
    // get your projected vertex position as before
    gl_Position = gl_ProjectionMatrix * world_position;
    ...
}
Then in the fragment shader you change the color based on the world vertex coordinate and the camera position:
in vec2 vertex_coordinate;
// has to be updated every time your camera changes its position
uniform vec2 camera_world_position = vec2(64.0, 64.0);
...
{
    ...
    // calculate the distance from the fragment in world coordinates to the camera
    float fade_factor = length(camera_world_position - vertex_coordinate);
    // make it 1 near the camera and 0 once it is more than 100 units away
    fade_factor = clamp(1.0 - fade_factor / 100.0, 0.0, 1.0);
    // update your final color with this factor
    gl_FragColor.rgb = color * fade_factor;
    ...
}
The second way to do it is to use the projected coordinate's w component; I personally prefer to calculate the distance in world-space units. I did not test this code and it might have some trivial syntax errors, but if you understand the idea you can apply it in any other way.
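On the application side, a hedged SFML sketch of keeping that uniform in sync with the camera each frame (SFML 2.4+ setUniform API; the window, quad and shader names are assumptions about the surrounding code, and using the sf::View centre as the camera position is likewise an assumption):
#include <SFML/Graphics.hpp>

// Hypothetical helper: draw the grid quad with the fade uniform kept in sync with the camera.
void drawGrid(sf::RenderWindow& window, const sf::Drawable& quad, sf::Shader& shader)
{
    const sf::View& view = window.getView();
    // matches "uniform vec2 camera_world_position" in the fragment shader above
    shader.setUniform("camera_world_position", sf::Glsl::Vec2(view.getCenter()));
    window.draw(quad, &shader);
}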
I have a simple OpenGL application in which I want to draw a cube, but it seems that the depth buffer is somehow working incorrectly. I am using SFML for the windowing system and GLEW as the extension loading library.
The main function looks like the following: https://pastebin.com/tq5t0TJN
...code...
The mesh class looks like the following: https://pastebin.com/YiWH8dWH
...code...
The loading functions look like the following: https://pastebin.com/vUNLn0gu - but they all seem to work correctly
...code...
Now I have a vertex shader: https://pastebin.com/EeP4RXHy
#version 330 core
layout (location = 0) in vec3 location_attrib;
layout (location = 1) in vec2 texcoord_attrib;
layout (location = 2) in vec3 normal_attrib;
out vec3 location;
out vec3 normal;
void main()
{
    normal = normal_attrib;
    gl_Position = vec4(location_attrib.x / 2, location_attrib.y / 2, location_attrib.z / 2, 1.0);
}
And a fragment shader: https://pastebin.com/E9krMFZq
#version 330 core
out vec4 gl_FragColor;
//out float gl_FragDepth ;
in vec3 location;
in vec3 normal;
void main()
{
    vec3 normal_normalized = normalize(normal);
    vec3 light_dir = vec3(0, -1, 0);
    float light_amount = dot(normal_normalized, light_dir);
    vec3 color = vec3(1.0f, 0.5f, 0.2f);
    color = color * light_amount;
    color = clamp(color, vec3(0, 0, 0), vec3(1, 1, 1));
    color.r += 0.1;
    color.g += 0.1;
    color.b += 0.1;
    //gl_FragColor = vec4(color.r, color.g, color.b, 1.0);
    gl_FragDepth = location.z / 1000;
    gl_FragColor = vec4(vec3(gl_FragCoord.z), 1.0);
}
But the image I get when doing it this way looks like so:
Can anybody tell me what the issue is?
I tried to figure out what is wrong with the depth test, tried different combinations, but it all seems to not work.
The depth of the fragment is explicitly set in the fragment shader:
gl_FragDepth = location.z / 1000;
The depth buffer can represent values in the range [0.0, 1.0] and the accuracy of the depth buffer is limited. In your case the depth buffer has 24 bits (settings.depthBits = 24;).
If the values are outside this range or if the difference between the values is so small that it cannot be represented with the accuracy of the depth buffer, then the depth test will not work correctly.
Anyway, the vertex shader does not write to the output variable location. Hence location.z is 0.0 for all fragments.
If you want to map the depth values to a subrange of [0.0, 1.0], then you can use glDepthRange to specify a mapping.
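A minimal sketch of the missing write, assuming location is meant to carry the vertex position (these lines are not in the original shaders):
// in the vertex shader's main(), forward the attribute so the fragment
// shader's "in vec3 location" is actually defined:
location = location_attrib;

// in the fragment shader, either drop the explicit depth write (so the
// regular gl_FragCoord.z is used), or keep the written value inside [0.0, 1.0]:
// gl_FragDepth = clamp(location.z / 1000.0, 0.0, 1.0);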
I'm trying to make a kind of polar clock in Quartz Composer with GLSL shaders. The problem is that I have no idea about this programming language. However, I've been searching and found this code as a good start:
Vertex Shader:
#version 120
void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_TexCoord[0] = gl_MultiTexCoord0;
}
Fragment Shader:
#version 120
uniform sampler2D tex0;
uniform float border; // 0.01
uniform float circle_radius; // 0.5
uniform vec4 circle_color; // vec4(1.0, 1.0, 1.0, 1.0)
uniform vec2 circle_center; // vec2(0.5, 0.5)
void main(void)
{
    vec2 uv = gl_TexCoord[0].xy;
    vec4 bkg_color = texture2D(tex0, uv * vec2(1.0, -1.0));
    // Offset uv with the center of the circle.
    uv -= circle_center;
    float dist = sqrt(dot(uv, uv));
    if ((dist > (circle_radius + border)) || (dist < (circle_radius - border)))
        gl_FragColor = bkg_color;
    else
        gl_FragColor = circle_color;
}
Now I'd like to know where in this code I can make the circle be drawn only up to a certain number of degrees, depending on an input variable.
Thank you in advance.
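A hedged sketch of where that could go: after the line uv -= circle_center; in the fragment shader above, convert the offset into an angle with atan and compare it against a progress input (the progress uniform, in degrees, is an assumed Quartz Composer input, not part of the original code):
uniform float progress; // assumed input: how many degrees of the ring to show (0-360)
...
// inside main(), after "uv -= circle_center;" and the dist calculation:
float angle = degrees(atan(uv.y, uv.x));   // -180 .. 180
angle = mod(angle + 360.0, 360.0);         // 0 .. 360, counter-clockwise from +x
bool onRing  = (dist <= circle_radius + border) && (dist >= circle_radius - border);
bool inSweep = angle <= progress;
gl_FragColor = (onRing && inSweep) ? circle_color : bkg_color;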
I've got a shader that implements shadow mapping like this:
#version 430 core
out vec4 color;
in VS_OUT {
    vec3 N;
    vec3 L;
    vec3 V;
    vec4 shadow_coord;
} fs_in;
layout(binding = 0) uniform sampler2DShadow shadow_tex;
uniform vec3 light_ambient_albedo = vec3(1.0);
uniform vec3 light_diffuse_albedo = vec3(1.0);
uniform vec3 light_specular_albedo = vec3(1.0);
uniform vec3 ambient_albedo = vec3(0.1, 0.1, 0.2);
uniform vec3 diffuse_albedo = vec3(0.4, 0.4, 0.8);
uniform vec3 specular_albedo = vec3(0.0, 0.0, 0.0);
uniform float specular_power = 128.0;
void main(void) {
    //color = vec4(0.4, 0.4, 0.8, 1.0);
    //normalize
    vec3 N = normalize(fs_in.N);
    vec3 L = normalize(fs_in.L);
    vec3 V = normalize(fs_in.V);
    //calculate R
    vec3 R = reflect(-L, N);
    //calculate ambient
    vec3 ambient = ambient_albedo * light_ambient_albedo;
    //calculate diffuse
    vec3 diffuse = max(dot(N, L), 0.0) * diffuse_albedo * light_diffuse_albedo;
    //calculate specular
    vec3 specular = pow(max(dot(R, V), 0.0), specular_power) * specular_albedo * light_specular_albedo;
    //write color
    color = textureProj(shadow_tex, fs_in.shadow_coord) * vec4(ambient + diffuse + specular, 0.5);
    //if in shadow, then multiply color by 0.5 ^^, except alpha
}
What I want to do is first check whether the fragment is actually in shadow, and only then change the color (halve it, so that it becomes halfway between fully black and the original color).
However, how do I check whether the textureProj(...) result means the fragment is in shadow? As far as I know it returns a normalized float value.
Would something like textureProj(...) > 0.9999 already suffice? I know it can return values other than zero or one if you are using multisampling, and I'd like behaviour that will not just break at some point.
The outputting vertex shader:
#version 430 core
layout(location = 0) in vec4 position;
layout(location = 0) uniform mat4 model_matrix;
layout(location = 1) uniform mat4 view_matrix;
layout(location = 2) uniform mat4 proj_matrix;
layout(location = 3) uniform mat4 shadow_matrix;
out VS_OUT {
    vec3 N;
    vec3 L;
    vec3 V;
    vec4 shadow_coord;
} vs_out;
uniform vec4 light_pos = vec4(-20.0, 7.5, -20.0, 1.0);
void main(void) {
    vec4 local_light_pos = view_matrix * light_pos;
    vec4 p = view_matrix * model_matrix * position;
    //normal
    vs_out.N = vec3(0.0, 1.0, 0.0);
    //light vector
    vs_out.L = local_light_pos.xyz - p.xyz;
    //view vector
    vs_out.V = -p.xyz;
    //light space coordinates
    vs_out.shadow_coord = shadow_matrix * position;
    gl_Position = proj_matrix * p;
}
Note that the fragment shader is for the terrain and the vertex shader is for the floor, so there might be minor inconsistencies between the two, but they should not be relevant.
shadow_matrix is a uniform passed in as bias_matrix * light_projection_matrix * light_view_matrix * light_model_matrix.
textureProj (...) does not return a normalized floating-point value. It does return a single float if you use it on a sampler<1D|2D|2DRect>Shadow, but this value represents the result of a depth test. 1.0 = pass, 0.0 = fail.
Now, the interesting thing to note here, and the reason returning a float for a shadow sampler is meaningful at all has to do with filtering the shadow map. If you use a GL_LINEAR filter mode on the shadow map together with a shadow sampler, GL will actually pick the 4 closest texels in the shadow map and perform 4 independent depth tests.
Each depth test still has a binary result, but GL will return a weighted average of the result of all 4 tests (based on distance from the ideal sample location). So if you use GL_LINEAR in conjunction with a shadow sampler, you will have a value that lies somewhere in-between 0.0 and 1.0 representing the average occlusion for the 4 nearest depth samples.
I should point out that your use of textureProj (...) looks potentially wrong to me. The coordinate it uses is a 4D vector consisting of (s,t,r) [projected coordinates] and (q) [depth value to test]. I do not see anywhere in your code where you are assigning q a depth value. If you could edit your question to include the vertex/geometry shader that is outputting shadow_coord, that would help.
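For reference, a hedged sketch of the texture setup that produces this filtered comparison (standard OpenGL calls on the depth texture bound to shadow_tex; the shadow_depth_tex handle is a hypothetical name from the application side):
// configure the shadow-map depth texture so a sampler2DShadow performs the
// depth comparison, and GL_LINEAR yields the averaged 4-tap result described above
glBindTexture(GL_TEXTURE_2D, shadow_depth_tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);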
Try the following:
1. Get the distance from each vertex of your model to the light.
2. Send this distance to your fragment shader.
3. Compare the distance to the value stored in your shadow map sampler (I assume this texture stores the depth values of your scene from the light's point of view?).
4. If the distance is greater than the sampled value, the point is in shadow. Otherwise, it is not.
A sketch of this comparison is shown after the tutorial links below.
If this is confusing, here's a pair of tutorials that should help:
http://ogldev.atspace.co.uk/www/tutorial23/tutorial23.html
http://ogldev.atspace.co.uk/www/tutorial24/tutorial24.html
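A minimal GLSL sketch of that comparison, using the more common variant that compares the projected light-space depth instead of the raw distance (shadow_map, shadow_coord and the 0.005 bias are assumed names and values, not taken from the question):
#version 430 core
uniform sampler2D shadow_map; // assumed: plain depth texture rendered from the light's point of view
in vec4 shadow_coord;         // light-space position, as produced by the question's vertex shader
out vec4 color;
void main(void) {
    vec3 proj = shadow_coord.xyz / shadow_coord.w;        // perspective divide
    float stored_depth = texture(shadow_map, proj.xy).r;  // closest depth seen by the light
    // in shadow if this fragment is further from the light than the stored depth
    float lit = (proj.z > stored_depth + 0.005) ? 0.5 : 1.0;
    color = vec4(vec3(lit), 1.0); // or multiply your lighting result by "lit"
}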
I want to shade the quad with a checkerboard pattern:
f(P) = (floor(Px) + floor(Py)) mod 2.
My quad is:
glBegin(GL_QUADS);
    glVertex3f(0, 0, 0.0);
    glVertex3f(4, 0, 0.0);
    glVertex3f(4, 4, 0.0);
    glVertex3f(0, 4, 0.0);
glEnd();
The vertex shader file:
varying float factor;
float x,y;
void main() {
    x = floor(gl_Position.x);
    y = floor(gl_Position.y);
    factor = mod((x + y), 2.0);
}
And the fragment shader file is:
varying float factor;
void main() {
    gl_FragColor = vec4(factor, factor, factor, 1.0);
}
But I'm getting this:
It seems that the mod function doesn't work, or maybe it's something else...
Any help?
It is better to calculate this effect in the fragment shader, something like this:
vertex program =>
varying vec2 texCoord;
void main(void)
{
    gl_Position = vec4(gl_Vertex.xy, 0.0, 1.0);
    gl_Position = sign(gl_Position);
    texCoord = (vec2(gl_Position.x, gl_Position.y) + vec2(1.0)) / vec2(2.0);
}
fragment program =>
#extension GL_EXT_gpu_shader4 : enable
uniform sampler2D Texture0;
varying vec2 texCoord;
void main(void)
{
    ivec2 size = textureSize2D(Texture0, 0);
    float total = floor(texCoord.x * float(size.x)) +
                  floor(texCoord.y * float(size.y));
    bool isEven = mod(total, 2.0) == 0.0;
    vec4 col1 = vec4(0.0, 0.0, 0.0, 1.0);
    vec4 col2 = vec4(1.0, 1.0, 1.0, 1.0);
    gl_FragColor = (isEven) ? col1 : col2;
}
Output =>
Good luck!
Try this function in your fragment shader:
vec3 checker(in float u, in float v)
{
    float checkSize = 2.0;
    float fmodResult = mod(floor(checkSize * u) + floor(checkSize * v), 2.0);
    float fin = max(sign(fmodResult), 0.0);
    return vec3(fin, fin, fin);
}
Then in main you can call it using:
vec3 check = checker(fs_vertex_texture.x, fs_vertex_texture.y);
Simply pass in the x and y you are getting from the vertex shader. All you have to do after that is include it when calculating your vFragColor.
Keep in mind that you can change the checker size simply by modifying the checkSize value.
What your code does is calculate the factor four times (once for each vertex, since it is vertex shader code), interpolate those values (because they are written into a varying variable), and then output that variable as the color in the fragment shader.
So it doesn't work that way. You need to do that calculation directly in the fragment shader. You can get the fragment position using the gl_FragCoord built-in variable in the fragment shader.
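A minimal sketch of that idea (the 32-pixel cell size is an arbitrary assumption):
#version 120
void main() {
    float cell = 32.0; // assumed checker cell size in pixels
    // window-space checkerboard: alternate per cell of gl_FragCoord
    float factor = mod(floor(gl_FragCoord.x / cell) + floor(gl_FragCoord.y / cell), 2.0);
    gl_FragColor = vec4(vec3(factor), 1.0);
}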
May I suggest the following:
float result = mod(dot(vec2(1.0), step(vec2(0.5), fract(v_uv * u_repeat))), 2.0);
v_uv is a vec2 of UV values, and u_repeat is a vec2 giving how many times the pattern should be repeated along each axis.
result is 0 or 1; you can use it in the mix function to choose between two colors, for example:
gl_FragColor = mix(vec4(1.0, 1.0, 1.0, 1.0), vec4(0.0, 0.0, 0.0, 1.0), result);
Another nice way to do it is by just tiling a known pattern (zooming out). Assuming that you have a square canvas:
void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    // Normalized pixel coordinates (from 0 to 1)
    vec2 uv = fragCoord / iResolution.xy;
    uv -= 0.5; // move the coordinate system to the middle of the screen
    // Output to screen
    fragColor = vec4(vec3(step(uv.x * uv.y, 0.)), 1.);
}
The code above gives you this kind of pattern.
The code below zooms in 4.5 times and takes the fractional part, which repeats the pattern 4.5 times and results in 9 squares per row.
void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    // Normalized pixel coordinates (from 0 to 1)
    vec2 uv = fract(fragCoord / iResolution.xy * 4.5);
    uv -= 0.5; // move the coordinate system to the middle of the screen
    // Output to screen
    fragColor = vec4(vec3(step(uv.x * uv.y, 0.)), 1.);
}