Adding an image over another image - OpenTK - OpenGL

I can combine two images using the code below. But is there any way to place the second image at the bottom left of the first image?
vec4 colorFirstImg = texture2D(sTexture_1, vec2(vTexCoord.x, vTexCoord.y));
vec4 colorSecondImg = texture2D(sTexture_2, vec2(vTexCoord.x, vTexCoord.y));
vec4 result = mix(colorFirstImg, colorSecondImg, colorSecondImg.a);
gl_FragColor = result;

Yes, of course: you just have to scale the texture coordinates. If any component of the scaled coordinates is > 1.0, skip the second image by passing 0.0 as the third argument of mix:
vec4 colorFirstImg = texture2D(sTexture_1, vTexCoord.xy);
vec2 uv2 = vTexCoord.xy * 2.0;
vec4 colorSecondImg = texture2D(sTexture_2, uv2);
float a = (uv2.x <= 1.0 && uv2.y <= 1.0) ? colorSecondImg.a : 0.0;
vec4 result = mix(colorFirstImg, colorSecondImg, a);
gl_FragColor = result;
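A more general variant of the same idea (a sketch; uvScale and uvOffset are hypothetical uniforms not present in the original code) lets you place the inset at any position and size:
uniform vec2 uvScale;  // e.g. vec2(2.0) shows the inset at half size
uniform vec2 uvOffset; // e.g. vec2(0.0) anchors it at the bottom-left corner
vec4 colorFirstImg = texture2D(sTexture_1, vTexCoord.xy);
vec2 uv2 = (vTexCoord.xy - uvOffset) * uvScale;
vec4 colorSecondImg = texture2D(sTexture_2, uv2);
// use the second image's alpha only while uv2 stays inside [0,1] on both axes
float a = (all(greaterThanEqual(uv2, vec2(0.0))) && all(lessThanEqual(uv2, vec2(1.0))))
          ? colorSecondImg.a : 0.0;
gl_FragColor = mix(colorFirstImg, colorSecondImg, a);
With uvScale = vec2(2.0) and uvOffset = vec2(0.0) this reproduces the bottom-left placement above.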

Related

How do I align the raytraced spheres from my fragment shader with GL_POINTS?

I have a very simple shader program that takes in a bunch of position data as GL_POINTS, which generate screen-aligned squares of fragments as usual, with a size depending on depth. In the fragment shader I wanted to draw a very simple ray-traced sphere for each one, with just the shadow on the side of the sphere opposite the light. I went to this shadertoy to try to figure it out on my own. I used the sphIntersect function for ray-sphere intersection, and sphNormal to get the normal vectors on the sphere for lighting. The problem is that the spheres do not align with the squares of fragments, causing them to be cut off. This is because I am not sure how to match the projections of the spheres and the vertex positions so that they line up. Can I have an explanation of how to do this?
Here is a picture for reference.
Here are my vertex and fragment shaders for reference:
//vertex shader:
#version 460
layout(location = 0) in vec4 position; // position of each point in space
layout(location = 1) in vec4 color;    // color of each point in space
layout(location = 2) uniform mat4 view_matrix; // projection * camera matrix
layout(location = 6) uniform mat4 cam_matrix;  // just the camera matrix
out vec4 col;  // color of vertex
out vec4 posi; // position of vertex
void main() {
    vec4 p = view_matrix * vec4(position.xyz, 1.0);
    gl_PointSize = clamp(1024.0 * position.w / p.z, 0.0, 4000.0);
    gl_Position = p;
    col = color;
    posi = cam_matrix * position;
}
//fragment shader:
#version 460
in vec4 col;  // color of vertex associated with this fragment
in vec4 posi; // position of the vertex associated with this fragment relative to camera
out vec4 f_color;
layout (depth_less) out float gl_FragDepth;
float sphIntersect( in vec3 ro, in vec3 rd, in vec4 sph )
{
    vec3 oc = ro - sph.xyz;
    float b = dot( oc, rd );
    float c = dot( oc, oc ) - sph.w*sph.w;
    float h = b*b - c;
    if( h<0.0 ) return -1.0;
    return -b - sqrt( h );
}
vec3 sphNormal( in vec3 pos, in vec4 sph )
{
    return normalize(pos-sph.xyz);
}
void main() {
    vec4 c = clamp(col, 0.0, 1.0);
    vec2 p = ((2.0*gl_FragCoord.xy)-vec2(1920.0, 1080.0)) / 2.0;
    vec3 ro = vec3(0.0, 0.0, -960.0);
    vec3 rd = normalize(vec3(p.x, p.y, 960.0));
    vec3 lig = normalize(vec3(0.6,0.3,0.1));
    vec4 k = vec4(posi.x, posi.y, -posi.z, 2.0*posi.w);
    float t = sphIntersect(ro, rd, k);
    vec3 ps = ro + (t * rd);
    vec3 nor = sphNormal(ps, k);
    if(t < 0.0) c = vec4(1.0);
    else c.xyz *= clamp(dot(nor,lig), 0.0, 1.0);
    f_color = c;
    gl_FragDepth = t * 0.0001;
}
Looks like you have many spheres so I would do this:
Input data
I would have a VBO containing x,y,z,r describing your spheres. You will also need your view transform (uniform) so that you can create a ray direction and start position for each fragment. Something like my vertex shader here:
Reflection and refraction impossible without recursive ray tracing?
Create a BBOX in the geometry shader and convert your POINT to a QUAD or POLYGON
Note that you have to account for perspective. If you are not familiar with geometry shaders, see:
rendering cubics in GLSL
where I emit a sequence of OBBs from input lines...
In the fragment shader, raytrace the sphere
You have to compute the intersection between sphere and ray, choose the closer intersection, and compute its depth and normal (for lighting). In case of no intersection you have to discard the fragment!
From what I can see in your images, your QUADs do not correspond to your spheres, hence the clipping. You also do not discard fragments with no intersection, so you overwrite already-rendered content around the last rendered sphere with the background color; as a result only a single sphere is left per QUAD, regardless of how many spheres are really there...
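As a minimal illustration of the discard point, the question's own fragment shader main() could be changed like this (same ro/rd/k setup as above; only the miss handling differs):
void main() {
    vec4 c = clamp(col, 0.0, 1.0);
    vec2 p = ((2.0 * gl_FragCoord.xy) - vec2(1920.0, 1080.0)) / 2.0;
    vec3 ro = vec3(0.0, 0.0, -960.0);
    vec3 rd = normalize(vec3(p.x, p.y, 960.0));
    vec3 lig = normalize(vec3(0.6, 0.3, 0.1));
    vec4 k = vec4(posi.x, posi.y, -posi.z, 2.0 * posi.w);
    float t = sphIntersect(ro, rd, k);
    if (t < 0.0) discard; // miss: keep whatever was already rendered here
    vec3 nor = sphNormal(ro + t * rd, k);
    c.xyz *= clamp(dot(nor, lig), 0.0, 1.0);
    f_color = c;
    gl_FragDepth = t * 0.0001;
}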
To create a ray direction that matches a perspective matrix from screen space, the following ray direction formula can be used:
vec3 rd = normalize(vec3(((2.0 / screenWidth) * gl_FragCoord.xy) - vec2(aspectRatio, 1.0), -proj_matrix[1][1]));
The value of 2.0 / screenWidth can be pre-computed on the CPU and passed in as a uniform.
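For example (a sketch; the uniform names here are hypothetical):
uniform float twoOverWidth; // 2.0 / screenWidth, computed once on the CPU
uniform float aspectRatio;  // screenWidth / screenHeight
uniform mat4 proj_matrix;
vec3 rd = normalize(vec3(twoOverWidth * gl_FragCoord.xy - vec2(aspectRatio, 1.0),
                         -proj_matrix[1][1]));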
To get a bounding box or other shape for your spheres, it is very important to use camera-facing shapes, and not camera-plane-facing shapes. Use the following process where position is the incoming VBO position data, and the w-component of position is the radius:
vec4 p = vec4((cam_matrix * vec4(position.xyz, 1.0)).xyz, position.w);
o.vpos = p;
float l2 = dot(p.xyz, p.xyz);
float r2 = p.w * p.w;
float k = 1.0 - (r2 / l2);
float radius = p.w * sqrt(k);
if (l2 < r2) {
    p = vec4(0.0, 0.0, -p.w * 0.49, p.w);
    radius = p.w;
    k = 0.0;
}
vec3 hx = radius * normalize(vec3(-p.z, 0.0, p.x));
vec3 hy = radius * normalize(vec3(-p.x * p.y, p.z * p.z + p.x * p.x, -p.z * p.y));
p.xyz *= k;
Then use hx and hy as basis vectors for any 2D shape that you want the billboard to be shaped like for the vertices. Don't forget later to multiply each vertex by a perspective matrix to get the final position of each vertex. Here is a visualization of the billboarding on desmos using a hexagon shape: https://www.desmos.com/calculator/yeeew6tqwx
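For illustration, a geometry-shader version of the billboard expansion might look like this (a sketch under assumptions: the vertex stage forwards the sphere center as vpos, the scaled billboard center as vcenter, and the hx/hy basis from above; none of these interface names come from the original code):
#version 460
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;
in vec4 vpos[];    // camera-space sphere center + radius (o.vpos above, for the raytrace)
in vec3 vcenter[]; // billboard plane center (p.xyz after the *= k step above)
in vec3 vhx[];     // horizontal billboard basis, length = billboard radius
in vec3 vhy[];     // vertical billboard basis, length = billboard radius
out vec4 posi;     // passed through to the fragment-shader raytrace
uniform mat4 proj_matrix;
void main() {
    const vec2 corners[4] = vec2[](vec2(-1.0, -1.0), vec2(1.0, -1.0),
                                   vec2(-1.0, 1.0), vec2(1.0, 1.0));
    for (int i = 0; i < 4; ++i) {
        vec3 v = vcenter[0] + corners[i].x * vhx[0] + corners[i].y * vhy[0];
        posi = vpos[0];
        gl_Position = proj_matrix * vec4(v, 1.0);
        EmitVertex();
    }
    EndPrimitive();
}
The triangle-strip corner order produces a camera-facing quad spanning ±hx, ±hy around the billboard center.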

How can I render a textured quad so that I fade different corners?

I'm drawing textured quads to the screen in a 2D environment. The quads are used as a tile map. In order to "blend" some of the tiles together, I had an idea along these lines:
A single "grass" tile drawn on top of dirt would render as a faded circle of grass, fading from roughly the quarter point.
If there were a larger area of grass tiles, the edges would gradually fade from the quarter point at the edge of the grass.
So if the entire left edge of the quad were to be faded, it would have 0 opacity at the left edge and full opacity at one quarter of the width of the quad. A right-edge fade would have full opacity at three quarters of the width and fade down to 0 opacity at the right-most edge.
I figured that setting 4 corners as "on" or "off" would be enough to have the fragment shader work it out. However, I can't work it out.
If corner0 were 0 the result should be something like this for the quad:
If both corner0 and corner1 were 0 then it would look like this:
This is what I have so far:
#version 330
layout(location=0) in vec3 inVertexPosition;
layout(location=1) in vec2 inTexelCoords;
layout(location=2) in vec2 inElementPosition;
layout(location=3) in vec2 inElementSize;
layout(location=4) in uint inCorner0;
layout(location=5) in uint inCorner1;
layout(location=6) in uint inCorner2;
layout(location=7) in uint inCorner3;
smooth out vec2 texelCoords;
flat out vec2 elementPosition;
flat out vec2 elementSize;
flat out uint corner0;
flat out uint corner1;
flat out uint corner2;
flat out uint corner3;
void main()
{
    gl_Position = vec4(inVertexPosition.x, -inVertexPosition.y, inVertexPosition.z, 1.0);
    texelCoords = vec2(inTexelCoords.x, 1.0 - inTexelCoords.y);
    elementPosition.x = (inElementPosition.x + 1.0) / 2.0;
    elementPosition.y = -((inElementPosition.y + 1.0) / 2.0);
    elementSize.x = inElementSize.x / 2.0;
    elementSize.y = -(inElementSize.y / 2.0);
    corner0 = inCorner0;
    corner1 = inCorner1;
    corner2 = inCorner2;
    corner3 = inCorner3;
}
The element position is provided in the range of [-1,1]; the corner variables are all either 0 or 1. These are provided per instance, whereas the vertex position and texel coords are provided per vertex. The vertex y-coordinate is inverted because I work in reverse and just flip it here for convenience. ElementSize is on the scale of [0,2], so I'm just converting it to the [0,1] range.
The UV coords could be any values, not necessarily [0,1].
Here's the fragment shader:
#version 330
precision highp float;
layout(location=0) out vec4 frag_colour;
smooth in vec2 texelCoords;
flat in vec2 elementPosition;
flat in vec2 elementSize;
flat in uint corner0;
flat in uint corner1;
flat in uint corner2;
flat in uint corner3;
uniform sampler2D uTexture;
const vec2 uScreenDimensions = vec2(600, 600);
void main()
{
    vec2 uv = texelCoords;
    vec4 c = texture(uTexture, uv);
    frag_colour = c;
    vec2 fragPos = gl_FragCoord.xy / uScreenDimensions;
    // What can I do using the fragPos, elementPos??
}
Basically, I'm not sure what I can do using the fragPos and elementPosition to fade pixels toward a corner if that corner is 0 instead of 1. I kind of understand that it should be based on the distance of the frag from the corner position... but I can't work it out. I added elementSize because I think it's needed to determine how far from the corner the given frag is...
To achieve a fading effect, you have to use blending: set the alpha channel of the fragment color dependent on a scale factor:
frag_colour = vec4(c.rgb, c.a * scale);
scale has to be computed dependent on the texture coordinates (uv). If a coordinate is in the range [0.0, 0.25] or [0.75, 1.0], then the texture has to be faded dependent on the corresponding cornerX variable. In the following, the variable uv is assumed to be a 2-dimensional vector in the range [0, 1].
Compute linear gradients for the left, right, bottom and top sides, dependent on uv:
float gradL = min(1.0, uv.x * 4.0);
float gradR = min(1.0, (1.0 - uv.x) * 4.0);
float gradT = min(1.0, uv.y * 4.0);
float gradB = min(1.0, (1.0 - uv.y) * 4.0);
Or compute Hermite gradients by using smoothstep:
float gradL = smoothstep(0.0, 0.25, uv.x);
float gradR = 1.0 - smoothstep(0.75, 1.0, uv.x);
float gradT = smoothstep(0.0, 0.25, uv.y);
float gradB = 1.0 - smoothstep(0.75, 1.0, uv.y);
Compute the fade factors for the 4 corners and the 4 sides, dependent on gradL, gradR, gradT, gradB and the corresponding cornerX variables. Finally, compute the maximum fade factor:
float fade0 = float(corner0) * max(0.0, 1.0 - dot(vec2(0.707), vec2(gradL, gradT)));
float fade1 = float(corner1) * max(0.0, 1.0 - dot(vec2(0.707), vec2(gradL, gradB)));
float fade2 = float(corner2) * max(0.0, 1.0 - dot(vec2(0.707), vec2(gradR, gradB)));
float fade3 = float(corner3) * max(0.0, 1.0 - dot(vec2(0.707), vec2(gradR, gradT)));
float fadeL = float(corner0) * float(corner1) * (1.0 - gradL);
float fadeB = float(corner1) * float(corner2) * (1.0 - gradB);
float fadeR = float(corner2) * float(corner3) * (1.0 - gradR);
float fadeT = float(corner3) * float(corner0) * (1.0 - gradT);
float fade = max(
max(max(fade0, fade1), max(fade2, fade3)),
max(max(fadeL, fadeR), max(fadeB, fadeT)));
At the end compute the scale and set the fragment color:
float scale = 1.0 - fade;
frag_colour = vec4(c.rgb, c.a * scale);
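Putting the pieces together, a complete main() might look like this (a sketch using the smoothstep variant; remember that blending also has to be enabled on the application side, e.g. with glEnable(GL_BLEND) and an appropriate glBlendFunc):
void main()
{
    vec2 uv = texelCoords;
    vec4 c = texture(uTexture, uv);
    // per-side gradients
    float gradL = smoothstep(0.0, 0.25, uv.x);
    float gradR = 1.0 - smoothstep(0.75, 1.0, uv.x);
    float gradT = smoothstep(0.0, 0.25, uv.y);
    float gradB = 1.0 - smoothstep(0.75, 1.0, uv.y);
    // corner and side fade factors
    float fade0 = float(corner0) * max(0.0, 1.0 - dot(vec2(0.707), vec2(gradL, gradT)));
    float fade1 = float(corner1) * max(0.0, 1.0 - dot(vec2(0.707), vec2(gradL, gradB)));
    float fade2 = float(corner2) * max(0.0, 1.0 - dot(vec2(0.707), vec2(gradR, gradB)));
    float fade3 = float(corner3) * max(0.0, 1.0 - dot(vec2(0.707), vec2(gradR, gradT)));
    float fadeL = float(corner0) * float(corner1) * (1.0 - gradL);
    float fadeB = float(corner1) * float(corner2) * (1.0 - gradB);
    float fadeR = float(corner2) * float(corner3) * (1.0 - gradR);
    float fadeT = float(corner3) * float(corner0) * (1.0 - gradT);
    float fade = max(max(max(fade0, fade1), max(fade2, fade3)),
                     max(max(fadeL, fadeR), max(fadeB, fadeT)));
    frag_colour = vec4(c.rgb, c.a * (1.0 - fade));
}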

Downsample and upsample texture offsets, OpenGL GLSL

Let's say that I want to downsample a 4x4 texture to 2x2 texels, do some fancy stuff, and upsample it again from 2x2 to 4x4. How do I calculate the correct neighbor texel offsets? I can't use bilinear or nearest filtering. I need to pick 4 samples for each fragment execution and take the maximum one before downsampling. The same holds for the upsampling pass, i.e., I need to pick 4 samples for each fragment execution.
Have I calculated the neighbor offsets correctly (I'm using a fullscreen quad)?
//Downsample: 1.0 / 2.0, Upsample: 1.0 / 4.0.
vec2 texelSize = vec2(1.0 / textureWidth, 1.0 / textureHeight);
const vec2 DOWNSAMPLE_OFFSETS[4] = vec2[]
(
vec2(-0.5, -0.5) * texelSize,
vec2(-0.5, 0.5) * texelSize,
vec2(0.5, -0.5) * texelSize,
vec2(0.5, 0.5) * texelSize
);
const vec2 UPSAMPLE_OFFSETS[4] = vec2[]
(
vec2(-1.0, -1.0) * texelSize,
vec2(-1.0, 1.0) * texelSize,
vec2(1.0, -1.0) * texelSize,
vec2(1.0, 1.0) * texelSize
);
//Fragment shader.
#version 400 core
uniform sampler2D mainTexture;
in vec2 texCoord;
out vec4 fragColor;
void main(void)
{
#if defined(DOWNSAMPLE)
    vec2 uv0 = texCoord + DOWNSAMPLE_OFFSETS[0];
    vec2 uv1 = texCoord + DOWNSAMPLE_OFFSETS[1];
    vec2 uv2 = texCoord + DOWNSAMPLE_OFFSETS[2];
    vec2 uv3 = texCoord + DOWNSAMPLE_OFFSETS[3];
#else
    vec2 uv0 = texCoord + UPSAMPLE_OFFSETS[0];
    vec2 uv1 = texCoord + UPSAMPLE_OFFSETS[1];
    vec2 uv2 = texCoord + UPSAMPLE_OFFSETS[2];
    vec2 uv3 = texCoord + UPSAMPLE_OFFSETS[3];
#endif
    float val0 = texture(mainTexture, uv0).r;
    float val1 = texture(mainTexture, uv1).r;
    float val2 = texture(mainTexture, uv2).r;
    float val3 = texture(mainTexture, uv3).r;
    //Do some stuff...
    fragColor = ...;
}
The offsets look correct, assuming texelSize is in both cases the texel size of the render target, i.e. twice as big for the downsampling pass as for the upsampling pass. When upsampling, you do not hit the source texel centers exactly, but you come close enough that nearest-neighbor filtering snaps the samples to the intended texels.
A more efficient option is the textureGather instruction, specified in the ARB_texture_gather extension and part of core OpenGL since version 4.0. When used to sample a texture, it returns the same four texels that would be used for bilinear filtering. It only returns a single component of each texel, packed into a vec4, but given that you only care about the red component, it's an ideal solution if it is available. The code would then be the same for both the downsampling and the upsampling pass:
#define GATHER_RED_COMPONENT 0
vec4 vals = textureGather(mainTexture, texCoord, GATHER_RED_COMPONENT);
// Output the maximum value (replicated to all channels, since fragColor is a vec4)
fragColor = vec4(max(max(vals.x, vals.y), max(vals.z, vals.w)));

OpenGL GLSL blend two textures by arbitrary shape

I have a full-screen quad with two textures.
I want to blend the two textures in an arbitrary shape according to the user's selection.
For example, the quad is at first 100% texture0, while texture1 is transparent.
If the user selects a region, for example a circle, by dragging the mouse on the quad, then the circled region should display both texture0 and texture1 as translucent.
The region not enclosed by the circle should still be texture0.
Please see the example image; textures are simplified as colors.
For now I have achieved blending the two textures on the quad, but the blending region can only be vertical slices, because I use the step() function.
My fragment shader:
uniform sampler2D Texture0;
uniform sampler2D Texture1;
uniform float alpha;
uniform float leftBlend;
uniform float rightBlend;
varying vec4 oColor;
varying vec2 oTexCoord;
void main()
{
    vec4 first_sample = texture2D(Texture0, oTexCoord);
    vec4 second_sample = texture2D(Texture1, oTexCoord);
    float stepLeft = step(leftBlend, oTexCoord.x);
    float stepRight = step(rightBlend, 1.0 - oTexCoord.x);
    if (stepLeft == 1.0 && stepRight == 1.0)
        gl_FragColor = oColor * first_sample;
    else
        gl_FragColor = oColor * (first_sample * alpha + second_sample * (1.0 - alpha));
    if (gl_FragColor.a < 0.4)
        discard;
}
To achieve an arbitrary shape, I assume I need to create an alpha-mask texture that is the same size as texture0 and texture1?
Then I pass that texture to the fragment shader and check its values: if the value is 0, show texture0; if the value is 1, blend texture0 and texture1.
Is my approach correct? Can you point me to any samples?
I want an effect such as OpenGL - mask with multiple textures, but I want to create the mask texture dynamically in my program, and I want to implement the blending in GLSL.
I have got blending working with a black-and-white mask texture:
uniform sampler2D TextureMask;
vec4 mask_sample = texture2D(TextureMask, oTexCoord);
// treat the mask as binary; < 0.5 avoids an exact floating-point == comparison
if (mask_sample.r < 0.5)
    gl_FragColor = first_sample;
else
    gl_FragColor = first_sample * alpha + second_sample * (1.0 - alpha);
Now the mask texture is loaded statically from an image on disk; I just need to create the mask texture dynamically in OpenGL.
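One way to create the mask dynamically (a sketch, one option among several): render a tiny mask shader into a texture attached to an FBO whenever the selection changes. Here center and radius are hypothetical uniforms driven by the mouse:
uniform vec2 center;  // circle center in the mask's UV space (from the mouse)
uniform float radius; // circle radius in UV units
varying vec2 oTexCoord;
void main()
{
    vec2 d = oTexCoord - center;
    // white (1.0) inside the circle, black (0.0) outside
    gl_FragColor = vec4(vec3(step(dot(d, d), radius * radius)), 1.0);
}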
Here's one approach and sample.
Create a boolean test for whether you want to blend.
In my sample, I use the equation of a circle centered on the screen.
Then blend (I blended by a weighted addition of the two colors).
(Note: I didn't have texture coordinates to work with in this sample, so I used the screen resolution to determine the circle position.)
uniform vec2 resolution;
void main( void ) {
    vec2 position = gl_FragCoord.xy / resolution;
    // test if we're "in" or "out" of the blended region;
    // here a circle of radius 0.5, but you can make this more complex
    // and/or pass the shape in from the user
    bool isBlended = (position.x - 0.5) * (position.x - 0.5) +
                     (position.y - 0.5) * (position.y - 0.5) > 0.25;
    vec4 color1 = vec4(1, 0, 0, 1); // this could come from texture 1
    vec4 color2 = vec4(0, 1, 0, 1); // this could come from texture 2
    vec4 finalColor;
    if (isBlended)
    {
        // blend
        finalColor = color1 * 0.5 + color2 * 0.5;
    }
    else
    {
        // don't blend
        finalColor = color1;
    }
    gl_FragColor = finalColor;
}
See the sample running here: http://glsl.heroku.com/e#18231.0
Update:
Here's another sample, using the mouse position to determine the position of the blended area.
To run it, paste the code into this sandbox site: https://www.shadertoy.com/new
This one should work with regions of any shape, as long as you have the mouse data set up correctly.
void main(void)
{
    vec2 position = gl_FragCoord.xy;
    // test if we're "in" or "out" of the blended region;
    // here a circle of radius 10px, but you can make this more complex
    // and/or pass the shape in from the user
    float diffX = position.x - iMouse.x;
    float diffY = position.y - iMouse.y;
    bool isBlended = (diffX * diffX) + (diffY * diffY) < 100.0;
    vec4 color1 = vec4(1, 0, 0, 1); // this could come from texture 1
    vec4 color2 = vec4(0, 1, 0, 1); // this could come from texture 2
    vec4 finalColor;
    if (isBlended)
    {
        // blend
        finalColor = color1 * 0.5 + color2 * 0.5;
    }
    else
    {
        // don't blend
        finalColor = color1;
    }
    gl_FragColor = finalColor;
}
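To tie this back to the question's setup, the same test can drive the actual textures instead of flat colors (a sketch; mousePos is a hypothetical uniform in window pixels that the application would supply):
uniform sampler2D Texture0;
uniform sampler2D Texture1;
uniform float alpha;
uniform vec2 mousePos; // selection center in window pixels
varying vec2 oTexCoord;
void main()
{
    vec4 first_sample = texture2D(Texture0, oTexCoord);
    vec4 second_sample = texture2D(Texture1, oTexCoord);
    vec2 diff = gl_FragCoord.xy - mousePos;
    float inCircle = step(dot(diff, diff), 100.0); // 1.0 inside a 10 px radius circle
    gl_FragColor = mix(first_sample,
                       first_sample * alpha + second_sample * (1.0 - alpha),
                       inCircle);
}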

GLSL Checkerboard Pattern

I want to shade the quad with a checkerboard pattern:
f(P) = (floor(Px) + floor(Py)) mod 2
My quad is:
glBegin(GL_QUADS);
    glVertex3f(0, 0, 0.0);
    glVertex3f(4, 0, 0.0);
    glVertex3f(4, 4, 0.0);
    glVertex3f(0, 4, 0.0);
glEnd();
The vertex shader file:
varying float factor;
float x, y;
void main(){
    x = floor(gl_Position.x);
    y = floor(gl_Position.y);
    factor = mod((x + y), 2.0);
}
And the fragment shader file is:
varying float factor;
void main(){
    gl_FragColor = vec4(factor, factor, factor, 1.0);
}
But I'm getting this:
It seems that the mod function doesn't work, or maybe something else...
Any help?
It is better to calculate this effect in the fragment shader, something like this:
vertex program =>
varying vec2 texCoord;
void main(void)
{
    gl_Position = vec4(gl_Vertex.xy, 0.0, 1.0);
    gl_Position = sign(gl_Position);
    texCoord = (vec2(gl_Position.x, gl_Position.y) + vec2(1.0)) / vec2(2.0);
}
fragment program =>
#extension GL_EXT_gpu_shader4 : enable
uniform sampler2D Texture0;
varying vec2 texCoord;
void main(void)
{
    ivec2 size = textureSize2D(Texture0, 0);
    float total = floor(texCoord.x * float(size.x)) +
                  floor(texCoord.y * float(size.y));
    bool isEven = mod(total, 2.0) == 0.0;
    vec4 col1 = vec4(0.0, 0.0, 0.0, 1.0);
    vec4 col2 = vec4(1.0, 1.0, 1.0, 1.0);
    gl_FragColor = (isEven) ? col1 : col2;
}
Output =>
Good luck!
Try this function in your fragment shader:
vec3 checker(in float u, in float v)
{
    float checkSize = 2.0;
    float fmodResult = mod(floor(checkSize * u) + floor(checkSize * v), 2.0);
    float fin = max(sign(fmodResult), 0.0);
    return vec3(fin, fin, fin);
}
Then in main you can call it using:
vec3 check = checker(fs_vertex_texture.x, fs_vertex_texture.y);
And simply pass the x and y you are getting from the vertex shader. All you have to do after that is include it when calculating your vFragColor.
Keep in mind that you can change the checker size simply by modifying the checkSize value.
What your code does is calculate the factor four times (once for each vertex, since it's vertex shader code), interpolate those values (because the result is written into a varying variable), and then output that variable as the color in the fragment shader.
So it doesn't work that way. You need to do the calculation directly in the fragment shader. You can get the fragment position using the gl_FragCoord built-in variable in the fragment shader.
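A minimal corrected pair of shaders along those lines (a sketch: the quad's untransformed XY is forwarded through a varying and the question's formula is evaluated per fragment):
// vertex shader
varying vec2 pos;
void main() {
    pos = gl_Vertex.xy; // the quad's own coordinates, 0..4 in the question
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
// fragment shader
varying vec2 pos;
void main() {
    float factor = mod(floor(pos.x) + floor(pos.y), 2.0);
    gl_FragColor = vec4(factor, factor, factor, 1.0);
}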
May I suggest the following:
float result = mod(dot(vec2(1.0), step(vec2(0.5), fract(v_uv * u_repeat))), 2.0);
v_uv is a vec2 of UV values,
u_repeat is a vec2 of how many times the pattern should be repeated on each axis.
result is 0 or 1; you can use it in the mix function to select between colors, for example:
gl_FragColor = mix(vec4(1.0, 1.0, 1.0, 1.0), vec4(0.0, 0.0, 0.0, 1.0), result);
Another nice way to do it is by just tiling a known pattern (zooming out). Assuming that you have a square canvas:
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // Normalized pixel coordinates (from 0 to 1)
    vec2 uv = fragCoord / iResolution.xy;
    uv -= 0.5; // moving the coordinate system to the middle of the screen
    // Output to screen
    fragColor = vec4(vec3(step(uv.x * uv.y, 0.)), 1.);
}
The code above gives you this kind of pattern.
The code below, by just zooming 4.5 times and taking the fractional part, repeats the pattern 4.5 times, resulting in 9 squares per row.
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // Normalized pixel coordinates (from 0 to 1)
    vec2 uv = fract(fragCoord / iResolution.xy * 4.5);
    uv -= 0.5; // moving the coordinate system to the middle of the screen
    // Output to screen
    fragColor = vec4(vec3(step(uv.x * uv.y, 0.)), 1.);
}