Sobel operator - false edges in natural images - C++

I'm having a problem with edge detection using the Sobel operator: it produces too many false edges; the effect is shown in the pictures below.
I'm using a 3x3 Sobel operator, first extracting vertical and then horizontal edges; the final output is the magnitude of the two filter outputs.
Edges in synthetic images are extracted properly, but natural images produce too many false edges or "noise", even if the image is preprocessed with a blur or median filter.
What might be the cause of this? Is it an implementation problem (if so, why are synthetic images fine?), or do I need to do more preprocessing?
Original:
Output:
code:
void imageOp::filter(image8* image, int maskSize, int16_t *mask)
{
    if((image == NULL) || (maskSize/2 == 0) || maskSize < 1)
    {
        if(image == NULL)
        {
            printf("filter: image pointer == NULL \n");
        }
        else if(maskSize < 1)
        {
            printf("filter: maskSize must be greater than 1\n");
        }
        else
        {
            printf("filter: maskSize must be odd number\n");
        }
        return;
    }
    image8* fImage = new image8(image->getHeight(), image->getWidth());
    uint16_t sum = 0;
    int d = maskSize/2;
    int ty, tx;
    for(int x = 0; x < image->getHeight(); x++) // loop over image
    {
        for(int y = 0; y < image->getWidth(); y++)
        {
            for(int xm = -d; xm <= d; xm++)
            {
                for(int ym = -d; ym <= d; ym++)
                {
                    ty = y + ym;
                    if(ty < 0) // edge conditions
                    {
                        ty = (-1)*ym - 1;
                    }
                    else if(ty >= image->getWidth())
                    {
                        ty = image->getWidth() - ym;
                    }
                    tx = x + xm;
                    if(tx < 0) // edge conditions
                    {
                        tx = (-1)*xm - 1;
                    }
                    else if(tx >= image->getHeight())
                    {
                        tx = image->getHeight() - xm;
                    }
                    sum += image->img[tx][ty] * mask[((xm+d)*maskSize) + ym + d];
                }
            }
            if(sum > 255)
            {
                fImage->img[x][y] = 255;
            }
            else if(sum < 0)
            {
                fImage->img[x][y] = 0;
            }
            else
            {
                fImage->img[x][y] = (uint8_t)sum;
            }
            sum = 0;
        }
    }
    for(int x = 0; x < image->getHeight(); x++)
    {
        for(int y = 0; y < image->getWidth(); y++)
        {
            image->img[x][y] = fImage->img[x][y];
        }
    }
    delete fImage;
}

This appears to be due to a math error somewhere in your code. To follow on my comment, this is what I get when I run your image through a Sobel operator here (edge strength is indicated by brightness of the output image):
I used a GLSL fragment shader to produce this:
precision mediump float;
varying vec2 textureCoordinate;
varying vec2 leftTextureCoordinate;
varying vec2 rightTextureCoordinate;
varying vec2 topTextureCoordinate;
varying vec2 topLeftTextureCoordinate;
varying vec2 topRightTextureCoordinate;
varying vec2 bottomTextureCoordinate;
varying vec2 bottomLeftTextureCoordinate;
varying vec2 bottomRightTextureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
    float bottomLeftIntensity = texture2D(inputImageTexture, bottomLeftTextureCoordinate).r;
    float topRightIntensity = texture2D(inputImageTexture, topRightTextureCoordinate).r;
    float topLeftIntensity = texture2D(inputImageTexture, topLeftTextureCoordinate).r;
    float bottomRightIntensity = texture2D(inputImageTexture, bottomRightTextureCoordinate).r;
    float leftIntensity = texture2D(inputImageTexture, leftTextureCoordinate).r;
    float rightIntensity = texture2D(inputImageTexture, rightTextureCoordinate).r;
    float bottomIntensity = texture2D(inputImageTexture, bottomTextureCoordinate).r;
    float topIntensity = texture2D(inputImageTexture, topTextureCoordinate).r;
    float h = -topLeftIntensity - 2.0 * topIntensity - topRightIntensity + bottomLeftIntensity + 2.0 * bottomIntensity + bottomRightIntensity;
    float v = -bottomLeftIntensity - 2.0 * leftIntensity - topLeftIntensity + bottomRightIntensity + 2.0 * rightIntensity + topRightIntensity;
    float mag = length(vec2(h, v));
    gl_FragColor = vec4(vec3(mag), 1.0);
}
You don't show your mask values, which I assume contain the Sobel kernels. In the code above, I've hardcoded the calculations performed against the red channel of each pixel for a 3x3 Sobel kernel; this is purely for performance on my platform.
One thing I don't see in your code (again, I may be missing it, as I did with the sum being reset to 0) is the determination of the magnitude of the vector formed by the two portions of the Sobel operator. I'd expect to see a square root operation in there somewhere.
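To make the signed-arithmetic point concrete, here is a minimal C++ sketch (standalone, operating on a plain vector-of-rows grayscale image rather than the asker's image8 class, which is an assumption on my part) that accumulates both kernel responses in signed ints and only clamps the final magnitude:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Apply both 3x3 Sobel kernels at interior pixel (x, y) and return the
// gradient magnitude. Accumulation is done in signed ints so negative
// responses are not lost; only the final magnitude is clamped to [0, 255].
uint8_t sobelMagnitude(const std::vector<std::vector<uint8_t>>& img, int x, int y)
{
    static const int gx[3][3] = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
    static const int gy[3][3] = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};
    int sx = 0, sy = 0;
    for (int i = -1; i <= 1; ++i)
        for (int j = -1; j <= 1; ++j) {
            int p = img[x + i][y + j];
            sx += p * gx[i + 1][j + 1];
            sy += p * gy[i + 1][j + 1];
        }
    int mag = static_cast<int>(std::lround(std::sqrt(double(sx) * sx + double(sy) * sy)));
    return static_cast<uint8_t>(std::min(mag, 255));
}
```

With an unsigned accumulator, the negative half of each kernel response wraps around instead of cancelling, which shows up as exactly the kind of speckle noise described in the question.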


Breaking out of GLSL loops via a uniform

I want to calculate the per-row minimum of a matrix of floats in GLSL in the browser, for a matrix of about 1000 rows and 4000 columns.
Building on previous answers (see this) I used a for loop. However, I would like to use a uniform for the upper bound, which is not possible in WebGL GLSL ES 1.0: the row length is only known after the fragment shader has been compiled, and I'd like to avoid messing with #DEFINEs.
So I found out that this workaround - a fixed cycle length with an if/break controlled by a uniform - works OK:
#define MAX_INT 65536
void main(void) {
    float m = 0.0;
    float k = -1.0;
    int r = 40;
    for(int i = 0; i < MAX_INT; ++i){
        float ndx = floor(gl_FragCoord.y) * float(r) + float(i);
        float a = getPoint(values, dimensions, ndx).x;
        m = m > a ? m : a;
        if (i >= r) { break; }
    };
}
Now the question: does this have big drawbacks? Is there something weird in what I'm doing that I'm missing?
I believe, but am not entirely sure, that the only risk is that some driver/GPU will still execute the full loop.
As an example imagine this loop
uniform int limit;
void main() {
    float sum = 0.0;
    for (int i = 0; i < 3; ++i) {
        sum += texture2D(tex, vec2(float(i) / 3.0, 0.0)).r;
        if (i >= limit) {
            break;
        }
    }
    gl_FragColor = vec4(sum);
}
that can be re-written by the driver like this
uniform int limit;
void main() {
    float sum = 0.0;
    for (int i = 0; i < 3; ++i) {
        float temp = texture2D(tex, vec2(float(i) / 3.0, 0.0)).r;
        sum += temp * step(float(i), float(limit));
    }
    gl_FragColor = vec4(sum);
}
no branches. I don't know whether any drivers/GPUs without conditionals still exist, but the idea of requiring a constant integer expression for a loop bound is that the branches can be removed and/or the loop unrolled at compile time, if the driver/GPU decides to do either. Fully unrolled, the loop above becomes:
uniform int limit;
void main() {
    float sum = 0.0;
    sum += step(float(0), float(limit)) * texture2D(tex, vec2(float(0) / 3.0, 0.0)).r;
    sum += step(float(1), float(limit)) * texture2D(tex, vec2(float(1) / 3.0, 0.0)).r;
    sum += step(float(2), float(limit)) * texture2D(tex, vec2(float(2) / 3.0, 0.0)).r;
    gl_FragColor = vec4(sum);
}
Also, as an aside, the specific example you have above doesn't output anything, so most drivers would turn the entire shader into a no-op.
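Whether the driver keeps the branch, masks it, or unrolls it, all forms compute the same value, which is easy to sanity-check on the CPU. A small C++ sketch (the array stands in for the texture, and stepMask mimics GLSL's step()):

```cpp
// Sum the first (limit + 1) samples two ways: with an early break, and
// branchlessly with a step() mask, as a driver might rewrite the loop.
float stepMask(float edge, float x) { return x >= edge ? 1.0f : 0.0f; } // GLSL step()

float sumWithBreak(const float* tex, int n, int limit) {
    float sum = 0.0f;
    for (int i = 0; i < n; ++i) {
        sum += tex[i];          // sample is added before the break test,
        if (i >= limit) break;  // so indices 0..limit are included
    }
    return sum;
}

float sumWithMask(const float* tex, int n, int limit) {
    float sum = 0.0f;
    for (int i = 0; i < n; ++i)
        sum += tex[i] * stepMask(float(i), float(limit)); // 1 while i <= limit
    return sum;
}
```

Both functions include exactly the samples at indices 0 through limit, so the masked rewrite is observationally identical to the break.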

Why are my ray march fragment shader reflection texture lookups slowing my frame rate?

I've written a fragment shader in GLSL, using Shadertoy.
Link: https://www.shadertoy.com/view/wtGSzy
Most of it works, but when I enable texture lookups in the reflection function, the performance drops from 60 FPS to ~5 FPS.
The code in question is on lines 173 - 176
if(SDFObjectToDraw.texChannelID == 0)
    col = texture(iChannel0, uv);
if(SDFObjectToDraw.texChannelID == 1)
    col = texture(iChannel1, uv);
This same code can be seen in my rayMarch function (lines 274-277), where it works fine for colouring my objects. It only causes issues in the reflection function.
My question is: why are my texture lookups in the reflection code dropping my performance this much, and what can I do to improve it?
/**
 * Return the normalized direction to march in from the eye point for a single pixel.
 *
 * fieldOfView: vertical field of view in degrees
 * size: resolution of the output image
 * fragCoord: the x,y coordinate of the pixel in the output image
 */
vec3 rayDirection(float fieldOfView, vec2 size, vec2 fragCoord) {
    vec2 xy = fragCoord - size / 2.0;
    float z = size.y / tan(radians(fieldOfView) / 2.0);
    return normalize(vec3(xy, -z));
}
float start = 0.0;
vec3 eye = vec3(0,0,5);
int MAX_MARCHING_STEPS = 255;
float EPSILON = 0.00001;
float end = 10.0;
const uint Shpere = 1u;
const uint Box = 2u;
const uint Plane = 4u;
vec3 lightPos = vec3(-10,0,5);
#define M_PI 3.1415926535897932384626433832795
const int SDF_OBJECT_COUNT = 4;
struct SDFObject
{
    uint Shape;
    vec3 Position;
    float Radius;
    int texChannelID;
    float Ambiant;
    float Spec;
    float Diff;
    vec3 BoxSize;
    bool isMirror; // quick hack to get reflections working
};
SDFObject SDFObjects[SDF_OBJECT_COUNT] = SDFObject[SDF_OBJECT_COUNT](
    SDFObject(Shpere, vec3(2,0,-3),1.0,0,0.2,0.2,0.8, vec3(0,0,0),true)
    ,SDFObject(Shpere, vec3(-2,0,-3),1.0,0,0.1,1.0,1.0, vec3(0,0,0),false)
    ,SDFObject(Box, vec3(0,0,-6),0.2,1,0.2,0.2,0.8, vec3(1.0,0.5,0.5),false)
    ,SDFObject(Plane, vec3(0,0,0),1.0,1,0.2,0.2,0.8, vec3(0.0,1.0,0.0),false)
);
float shereSDF(vec3 p, SDFObject o)
{
    return length(p-o.Position)-o.Radius;
}
float boxSDF(vec3 pointToTest, vec3 boxBoundery, float radius, vec3 boxPos)
{
    vec3 q = abs(pointToTest - boxPos) - boxBoundery;
    return length(max(q,0.0)) + min(max(q.x, max(q.y,q.z)), 0.0) - radius;
}
float planeSDF(vec3 p, vec4 n, vec3 Pos)
{
    return dot(p-Pos, n.xyz) + n.w;
}
bool IsShadow(vec3 LightPos, vec3 HitPos)
{
    bool isShadow = false;
    vec3 viewRayDirection = normalize(lightPos - HitPos);
    float depth = start;
    vec3 hitpoint;
    for(int i=0; i<MAX_MARCHING_STEPS; i++)
    {
        hitpoint = (HitPos + depth * viewRayDirection);
        float dist = end;
        for(int j=0; j<SDF_OBJECT_COUNT; j++)
        {
            float distToObjectBeingConsidered;
            if(SDFObjects[j].Shape == Shpere)
                distToObjectBeingConsidered = shereSDF(hitpoint, SDFObjects[j]);
            if(SDFObjects[j].Shape == Box)
                distToObjectBeingConsidered = boxSDF(hitpoint, SDFObjects[j].BoxSize, SDFObjects[j].Radius, SDFObjects[j].Position);
            if(SDFObjects[j].Shape == Plane)
                distToObjectBeingConsidered = planeSDF(hitpoint, vec4(SDFObjects[j].BoxSize, SDFObjects[j].Radius), SDFObjects[j].Position);
            if(distToObjectBeingConsidered < dist)
            {
                dist = distToObjectBeingConsidered;
            }
        }
        if(dist < EPSILON)
        {
            isShadow = true;
        }
        depth += dist;
        if(depth >= end)
        {
            isShadow = false;
        }
    }
    return isShadow;
}
vec3 MirrorReflection(vec3 inComingRay, vec3 surfNormal, vec3 HitPos, int objectIndexToIgnore)
{
    vec3 returnCol;
    vec3 reflectedRay = reflect(inComingRay, surfNormal);
    vec3 RayDirection = normalize(reflectedRay);
    float depth = start;
    vec3 hitpoint;
    int i;
    for(i=0; i<MAX_MARCHING_STEPS; i++)
    {
        hitpoint = (HitPos + depth * RayDirection);
        SDFObject SDFObjectToDraw;
        float dist = end;
        for(int j=0; j<SDF_OBJECT_COUNT; j++)
        {
            float distToObjectBeingConsidered;
            if(SDFObjects[j].Shape == Shpere)
                distToObjectBeingConsidered = shereSDF(hitpoint, SDFObjects[j]);
            if(SDFObjects[j].Shape == Box)
                distToObjectBeingConsidered = boxSDF(hitpoint, SDFObjects[j].BoxSize, SDFObjects[j].Radius, SDFObjects[j].Position);
            if(SDFObjects[j].Shape == Plane)
                distToObjectBeingConsidered = planeSDF(hitpoint, vec4(SDFObjects[j].BoxSize, SDFObjects[j].Radius), SDFObjects[j].Position);
            if(distToObjectBeingConsidered < dist && j != objectIndexToIgnore) // D > 0.0)
            {
                dist = distToObjectBeingConsidered;
                SDFObjectToDraw = SDFObjects[j];
            }
        }
        if(dist < EPSILON)
        {
            vec3 normal = normalize(hitpoint - SDFObjectToDraw.Position);
            float u = 0.5 + (atan(normal.z, normal.x)/(2.0*M_PI));
            float v = 0.5 + (asin(normal.y)/(M_PI));
            vec2 uv = vec2(u,v);
            vec4 col = vec4(0,0.5,0.5,0);
            ///>>>>>>>>>>>> THESE LINES ARE broken, WHY?
            //if(SDFObjectToDraw.texChannelID == 0)
            //    col = texture(iChannel0, uv);
            //if(SDFObjectToDraw.texChannelID == 1)
            //    col = texture(iChannel1, uv);
            vec3 NormalizedDirToLight = normalize(lightPos - SDFObjectToDraw.Position);
            float theta = dot(normal, NormalizedDirToLight);
            vec3 reflectionOfLight = reflect(NormalizedDirToLight, normal);
            vec3 viewDir = normalize(SDFObjectToDraw.Position);
            float Spec = dot(reflectionOfLight, viewDir);
            if(IsShadow(lightPos, hitpoint))
            {
                returnCol = (col.xyz*SDFObjectToDraw.Ambiant);
            }
            else
            {
                returnCol = (col.xyz*SDFObjectToDraw.Ambiant)
                    + (col.xyz * max(theta*SDFObjectToDraw.Diff, SDFObjectToDraw.Ambiant));
            }
            break;
        }
        depth += dist;
        if(depth >= end)
        {
            //should look up bg texture here but cant be assed right now
            returnCol = vec3(1.0,0.0,0.0);
            break;
        }
    }
    return returnCol; //*= (vec3(i+1)/vec3(MAX_MARCHING_STEPS));
}
vec3 rayMarch(vec2 fragCoord)
{
    vec3 viewRayDirection = rayDirection(45.0, iResolution.xy, fragCoord);
    float depth = start;
    vec3 hitpoint;
    vec3 ReturnColour = vec3(0,0,0);
    for(int i=0; i<MAX_MARCHING_STEPS; i++)
    {
        hitpoint = (eye + depth * viewRayDirection);
        float dist = end;
        SDFObject SDFObjectToDraw;
        int objectInDexToIgnore = -1;
        //find closest object to current point
        for(int j=0; j<SDF_OBJECT_COUNT; j++)
        {
            float distToObjectBeingConsidered;
            if(SDFObjects[j].Shape == Shpere)
                distToObjectBeingConsidered = shereSDF(hitpoint, SDFObjects[j]);
            if(SDFObjects[j].Shape == Box)
                distToObjectBeingConsidered = boxSDF(hitpoint, SDFObjects[j].BoxSize, SDFObjects[j].Radius, SDFObjects[j].Position);
            if(SDFObjects[j].Shape == Plane)
                distToObjectBeingConsidered = planeSDF(hitpoint, vec4(SDFObjects[j].BoxSize, SDFObjects[j].Radius), SDFObjects[j].Position);
            if(distToObjectBeingConsidered < dist)
            {
                dist = distToObjectBeingConsidered;
                SDFObjectToDraw = SDFObjects[j];
                objectInDexToIgnore = j;
            }
        }
        //if we are close enough to an object to hit it
        if(dist < EPSILON)
        {
            vec3 normal = normalize(hitpoint - SDFObjectToDraw.Position);
            if(SDFObjectToDraw.isMirror)
            {
                ReturnColour = MirrorReflection(viewRayDirection, normal, hitpoint, objectInDexToIgnore);
            }
            else
            {
                float u = 0.5 + (atan(normal.z, normal.x)/(2.0*M_PI));
                float v = 0.5 + (asin(normal.y)/(M_PI));
                vec2 uv = vec2(u,v);
                vec4 col;
                if(SDFObjectToDraw.texChannelID == 0)
                    col = texture(iChannel0, uv);
                if(SDFObjectToDraw.texChannelID == 1)
                    col = texture(iChannel1, uv);
                vec3 NormalizedDirToLight = normalize(lightPos - SDFObjectToDraw.Position);
                float theta = dot(normal, NormalizedDirToLight);
                vec3 reflectionOfLight = reflect(NormalizedDirToLight, normal);
                vec3 viewDir = normalize(SDFObjectToDraw.Position);
                float Spec = dot(reflectionOfLight, viewDir);
                if(IsShadow(lightPos, hitpoint))
                {
                    ReturnColour = (col.xyz*SDFObjectToDraw.Ambiant);
                }
                else
                {
                    ReturnColour = (col.xyz*SDFObjectToDraw.Ambiant)
                        + (col.xyz * max(theta*SDFObjectToDraw.Diff, SDFObjectToDraw.Ambiant));
                    //+ (col.xyz * Spec * SDFObjectToDraw.Spec);
                }
            }
            return ReturnColour;
        }
        depth += dist;
        if(depth >= end)
        {
            float u = fragCoord.x / iResolution.x;
            float v = fragCoord.y / iResolution.y;
            vec4 col = texture(iChannel2, vec2(u,v));
            ReturnColour = col.xyz;
        }
    }
    return ReturnColour;
}
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // Normalized pixel coordinates (from 0 to 1)
    //vec2 uv = fragCoord/iResolution.xy;
    // Time varying pixel color
    //vec3 col = 0.5 + 0.5*cos(iTime+uv.xyx+vec3(0,2,4));
    // Output to screen
    lightPos *= cos(iTime+vec3(1.5,2,2));
    //lightPos = vec3(cos(iTime)*2.0,0,0);
    vec3 SDFCol = rayMarch(fragCoord);
    vec3 col = vec3(0);
    //if(SDFVal <= 1.0)
    //    col = vec3(1,0,0);
    //col = vec3(SDFVal,0,0);
    col = vec3(0.5,0,0);
    col = SDFCol;
    fragColor = vec4(col,1.0);
}
[...] This same code can be seen in my rayMarch function (lines 274-277) and works fine for colouring my objects. [...]
The "working" texture lookup is executed in a loop in rayMarch. MAX_MARCHING_STEPS is 255, so the lookup is done at most 255 times.
vec3 rayMarch(vec2 fragCoord)
{
    // [...]
    for(int i=0; i<MAX_MARCHING_STEPS; i++)
    {
        // [...]
        if(SDFObjectToDraw.texChannelID == 0)
            col = texture(iChannel0, uv);
        if(SDFObjectToDraw.texChannelID == 1)
            col = texture(iChannel1, uv);
        // [...]
    }
    // [...]
}
When you do the lookup in MirrorReflection, the performance breaks down, because there the lookup sits in a loop in MirrorReflection, and MirrorReflection is itself called from a loop in rayMarch. In this case the lookup is done up to 255*255 = 65025 times.
~65,000 texture lookups per fragment is far too much and causes the performance breakdown.
vec3 MirrorReflection(vec3 inComingRay, vec3 surfNormal, vec3 HitPos, int objectIndexToIgnore)
{
    // [...]
    for(i=0; i<MAX_MARCHING_STEPS; i++)
    {
        // [...]
        if(SDFObjectToDraw.texChannelID == 0)
            col = texture(iChannel0, uv);
        if(SDFObjectToDraw.texChannelID == 1)
            col = texture(iChannel1, uv);
        // [...]
    }
    // [...]
}
vec3 rayMarch(vec2 fragCoord)
{
    // [...]
    for(int i=0; i<MAX_MARCHING_STEPS; i++)
    {
        // [...]
        ReturnColour = MirrorReflection(viewRayDirection, normal, hitpoint, objectInDexToIgnore);
        // [...]
    }
    // [...]
}
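The multiplication of iteration counts is easy to reproduce outside a shader. A C++ sketch with a counter standing in for texture fetches (worst case, with both loops running to MAX_MARCHING_STEPS):

```cpp
// Count simulated texture fetches: one per step of the primary march,
// plus a full secondary march on every step when a mirror is involved.
const int MAX_STEPS = 255;

int fetches = 0;

void reflectionMarch() {
    for (int i = 0; i < MAX_STEPS; ++i)
        ++fetches;                 // one lookup per reflection step
}

int primaryMarch(bool hitsMirror) {
    fetches = 0;
    for (int i = 0; i < MAX_STEPS; ++i) {
        ++fetches;                 // one lookup per primary step
        if (hitsMirror)
            reflectionMarch();     // nested march: 255 more lookups
    }
    return fetches;
}
```

The flat march costs 255 fetches; nesting the reflection march inside it multiplies that to 255 + 255*255 fetches in the worst case, which matches the order of magnitude described above.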

Octree generation going wrong at the last step

NOTE: THIS QUESTION HAS BEEN DRASTICALLY EDITED FROM ITS ORIGINAL FORM
I am attempting to create a logarithmic raytracer by implementing an octree data structure combined with voxelization to achieve fast ray tracing.
Currently I am having issues with the ray collision detection.
The expected output should be the voxelized stanford dragon with its normal map.
Currently the issue is that some regions are transparent:
The full dragon:
Transparent regions:
From these images it should be clear that the geometry is correct, but the collision checks are wrong.
There are 2 fragment shaders involved in this process:
The voxelizer fragment shader:
#version 430
in vec3 f_pos;
in vec3 f_norm;
in vec2 f_uv;
out vec4 f_color;
struct Voxel
{
    vec4 position;
    vec4 normal;
    vec4 color;
};
struct Node
{
    int children[8];
};
layout(std430, binding = 0) buffer voxel_buffer
{
    Voxel voxels[];
};
layout(std430, binding = 1) buffer buffer_index
{
    uint index;
};
layout(std430, binding = 2) buffer tree_buffer
{
    Node tree[];
};
layout(std430, binding = 3) buffer tree_index
{
    uint t_index;
};
out vec4 fragment_color;
uniform int voxel_resolution;
uniform int cube_dim;
int getVIndex(vec3 position, int level)
{
    float size = cube_dim / pow(2,level);
    int bit2 = int(position.x > size);
    int bit1 = int(position.y > size);
    int bit0 = int(position.z > size);
    return 4*bit2 + 2*bit1 + bit0;
}
void main()
{
    uint m_index = atomicAdd(index, 1);
    voxels[m_index].position = vec4(f_pos*cube_dim,1);
    voxels[m_index].normal = vec4(f_norm,1);
    voxels[m_index].color = vec4(f_norm,1);
    int max_level = int(log2(voxel_resolution));
    int node = 0;
    vec3 corner = vec3(-cube_dim);
    int child;
    for(int level=0; level<max_level-1; level++)
    {
        float size = cube_dim / pow(2,level);
        vec3 corners[] =
            {corner, corner+vec3(0,0,size),
             corner+vec3(0,size,0), corner+vec3(0,size,size),
             corner+vec3(size,0,0), corner+vec3(size,0,size),
             corner+vec3(size,size,0), corner+vec3(size,size,size)};
        vec3 offsetPos = (vec3(voxels[m_index].position));
        child = getVIndex(offsetPos-corner, level);
        int mrun = 500;
        while((tree[node].children[child] <= 0) && (mrun > 0)){
            mrun--;
            if((atomicCompSwap(tree[node].children[child], 0, -1) == 0))
            {
                tree[node].children[child] = int(atomicAdd(t_index, 1));
            }
        }
        if(mrun < 1)
            discard;
        if(level==max_level-2)
            break;
        node = tree[node].children[child];
        corner = corners[child];
    }
    tree[node].children[child] = int(m_index);
}
I understand the logic may not be clear so let me explain:
We start with a 3D position, voxels[m_index].position = vec4(f_pos*cube_dim,1); and we know there is a cube spanning (-cube_dim,-cube_dim,-cube_dim) to (cube_dim,cube_dim,cube_dim).
So: a cube whose diagonals intersect at the origin, with side length 2*cube_dim, that has been divided into little cubes with side length 2*cube_dim/voxel_resolution. Basically this is just a cube subdivided n times to make a cartesian grid.
Using this coordinate we start at the big cube, subdividing it into 8 equal-sized subspaces and detecting which of these subspaces contains the coordinate.
We do this until we find the smallest box containing the position.
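The octant-selection step of that descent can be sketched outside GLSL. A hypothetical C++ version (mirroring getVIndex: the position is taken relative to the current cube's minimum corner, and each axis is compared against half the cube's side length):

```cpp
// Given a position relative to the cube's minimum corner and the cube's
// side length, return the octant index in [0, 7]:
// bit 2 = upper x half, bit 1 = upper y half, bit 0 = upper z half.
int childIndex(double px, double py, double pz, double side) {
    double half = side / 2.0;      // midpoint along each axis
    int bit2 = px > half ? 1 : 0;
    int bit1 = py > half ? 1 : 0;
    int bit0 = pz > half ? 1 : 0;
    return 4 * bit2 + 2 * bit1 + bit0;
}
```

Descending one level then means halving the side length and subtracting the chosen child's corner offset, exactly as the shader's corners[] table does.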
The raytracer
#version 430
in vec2 f_coord;
out vec4 fragment_color;
struct Voxel
{
    vec4 position;
    vec4 normal;
    vec4 color;
};
struct Node
{
    int children[8];
};
layout(std430, binding = 0) buffer voxel_buffer
{
    Voxel voxels[];
};
layout(std430, binding = 1) buffer buffer_index
{
    uint index;
};
layout(std430, binding = 2) buffer tree_buffer
{
    Node tree[];
};
layout(std430, binding = 3) buffer tree_index
{
    uint t_index;
};
uniform vec3 camera_pos;
uniform float aspect_ratio;
uniform float cube_dim;
uniform int voxel_resolution;
float planeIntersection(vec3 origin, vec3 ray, vec3 pNormal, vec3 pPoint)
{
    pNormal = normalize(pNormal);
    return (dot(pPoint,pNormal)-dot(pNormal,origin))/dot(ray,pNormal);
}
#define EPSILON 0.001
bool inBoxBounds(vec3 corner, float size, vec3 position)
{
    bool inside = true;
    position -= corner;
    for(int i=0; i<3; i++)
    {
        inside = inside && (position[i] > -EPSILON);
        inside = inside && (position[i] < size+EPSILON);
    }
    return inside;
}
float boxIntersection(vec3 origin, vec3 dir, vec3 corner0, float size)
{
    dir = normalize(dir);
    vec3 corner1 = corner0 + vec3(size,size,size);
    vec3 normals[6] =
        { vec3(-1,0,0), vec3(0,-1,0), vec3(0,0,-1), vec3(1,0,0), vec3(0,1,0), vec3(0,0,1) };
    float coeffs[6];
    for(uint i=0; i<3; i++)
        coeffs[i] = planeIntersection(origin, dir, normals[i], corner0);
    for(uint i=3; i<6; i++)
        coeffs[i] = planeIntersection(origin, dir, normals[i], corner1);
    float t = 1.f/0.f;
    for(uint i=0; i<6; i++){
        coeffs[i] = coeffs[i] < 0 ? 1.f/0.f : coeffs[i];
        t = inBoxBounds(corner0,size,origin+dir*coeffs[i]) ? min(coeffs[i],t) : t;
    }
    return t;
}
void sort(float elements[8], int indices[8], vec3 vectors[8])
{
    for(uint i=0; i<8; i++)
    {
        for(uint j=i; j<8; j++)
        {
            if(elements[j] < elements[i])
            {
                float swap = elements[i];
                elements[i] = elements[j];
                elements[j] = swap;
                int iSwap = indices[i];
                indices[i] = indices[j];
                indices[j] = iSwap;
                vec3 vSwap = vectors[i];
                vectors[i] = vectors[j];
                vectors[j] = vSwap;
            }
        }
    }
}
int getVIndex(vec3 position, int level)
{
    float size = cube_dim / pow(2,level);
    int bit2 = int(position.x > size);
    int bit1 = int(position.y > size);
    int bit0 = int(position.z > size);
    return 4*bit2 + 2*bit1 + bit0;
}
#define MAX_TREE_HEIGHT 11
int nodes[8*MAX_TREE_HEIGHT];
int levels[8*MAX_TREE_HEIGHT];
vec3 positions[8*MAX_TREE_HEIGHT];
int sp = 0;
void push(int node, int level, vec3 corner)
{
    nodes[sp] = node;
    levels[sp] = level;
    positions[sp] = corner;
    sp++;
}
void main()
{
    vec3 r = vec3(f_coord.x, f_coord.y, 1.f/tan(radians(40)));
    r.y /= aspect_ratio;
    vec3 dir = r;
    r += vec3(0,0,-1.f/tan(radians(40))) + camera_pos;
    fragment_color = vec4(0);
    //int level = 0;
    int max_level = int(log2(voxel_resolution));
    push(0,0,vec3(-cube_dim));
    float tc = 1.f;
    int level = 0;
    int node = 0;
    do
    {
        sp--;
        node = nodes[sp];
        level = levels[sp];
        vec3 corner = positions[sp];
        float size = cube_dim / pow(2,level);
        vec3 corners[] =
            {corner, corner+vec3(0,0,size),
             corner+vec3(0,size,0), corner+vec3(0,size,size),
             corner+vec3(size,0,0), corner+vec3(size,0,size),
             corner+vec3(size,size,0), corner+vec3(size,size,size)};
        float t = boxIntersection(r, dir, corner, size*2);
        if(!isinf(t))
            tc *= 0.9f;
        float coeffs[8];
        for(int child=0; child<8; child++)
        {
            if(tree[node].children[child] > 0)
                coeffs[child] = boxIntersection(r, dir, corners[child], size);
            else
                coeffs[child] = 1.f/0.f;
        }
        int indices[8] = {0,1,2,3,4,5,6,7};
        sort(coeffs, indices, corners);
        for(uint i=7; i>=0; i--)
        {
            if(!isinf(coeffs[i]))
            {
                push(tree[node].children[indices[i]],
                     level+1, corners[i]);
            }
        }
    } while(level < (max_level-1) && sp > 0);
    if(level == max_level-1)
    {
        fragment_color = abs(voxels[node].normal);
    }
    else
    {
        fragment_color = vec4(tc);
    }
}
Here we start at the biggest cube, testing intersections with each set of 8 children (the 8 cubes resulting from subdividing a cube). Each time we successfully detect a collision, we move down the tree, until we reach the lowest level, which describes the actual geometry, and we color the scene based on that.
Debugging and Problem
The important part is that there are 2 buffers, one to store the tree except the leafs, and one to store the leafs.
So in both the voxelization and the ray tracing, the last layer needs to be treated differently.
The issues I have noticed about the transparency are as follows:
It happens only on planes aligned with the cartesian grid.
It seems to happen when the ray moves in a negative direction (down or to the left). (At least that's my impression, but it's not 100% certain.)
I am not sure what I am doing wrong.
EDIT:
The original issue seems to have been fixed; however, the raytracer is still bugged. I have edited the question to reflect the current state of the problem.
The error comes from the sorting function, as someone in the comments mentioned, although not for the reasons they gave.
What happened is that I thought the sort function would modify the arrays passed to it, but GLSL function parameters are copied in by value (the default in qualifier), so the function sorts local copies and returns nothing to the caller.
In other words:
void sort(float elements[8], int indices[8], vec3 vectors[8])
{
    for(uint i=0; i<8; i++)
    {
        for(uint j=i; j<8; j++)
        {
            if((elements[j] < elements[i]))
            {
                float swap = elements[i];
                elements[i] = elements[j];
                elements[j] = swap;
                int iSwap = indices[i];
                indices[i] = indices[j];
                indices[j] = iSwap;
                vec3 vSwap = vectors[i];
                vectors[i] = vectors[j];
                vectors[j] = vSwap;
            }
        }
    }
}
This does not leave the correct values in elements, indices and vectors, so calling this function does nothing but waste computation cycles.
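GLSL function parameters default to the in qualifier, so the arguments are copied and the caller's arrays stay untouched; declaring them inout makes the writes visible. The same trap can be reproduced in C++ with std::array passed by value versus by reference (std::array is used deliberately, since C-style arrays would decay to pointers and hide the effect):

```cpp
#include <algorithm>
#include <array>

// Passed by value: sorts a copy; the caller's array is unchanged.
// This mirrors GLSL's default `in` parameter semantics.
void sortByValue(std::array<float, 4> a) { std::sort(a.begin(), a.end()); }

// Passed by reference: the caller's array is actually sorted,
// like a GLSL `inout` parameter.
void sortByRef(std::array<float, 4>& a) { std::sort(a.begin(), a.end()); }
```

In the shader, changing the signature to sort(inout float elements[8], inout int indices[8], inout vec3 vectors[8]) would make the swaps visible to the caller.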

colourful Noise in luminance filter opengl

My final goal is to detect a laser line in my picture. To do so, I first convert my RGB colour space to HSV (in order to examine brightness). Then I select only the pixels which have certain values of H, S and V (the brightest pixels of a certain colour, red in my case).
For the pixels which satisfy these conditions I write their luminance to all 3 RGB channels, and if they don't satisfy them I set them to black.
Here comes my problem and question:
As I mentioned, I should get either a black pixel or a grey (luminance) pixel. I don't understand how these purple or green pixels get into my picture. They are like noise, and they are constantly changing!
At first I thought I got these colours because of values bigger than 1. But I think OpenGL clamps the values to 0-1 (I even tried it myself, but no success).
Does anyone know what causes this effect?
Any help or idea is appreciated.
Here is my fragment shader:
precision highp float;
varying vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
const highp vec3 W = vec3(0.299, 0.587, 0.114);
void main()
{
    lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
    float luminance = dot(textureColor.rgb, W);
    float r = textureColor.r;
    float g = textureColor.g;
    float b = textureColor.b;
    float h;
    float s;
    float v;
    float min = 0.0;
    float max = 0.0;
    float delta = 0.0;
    if(r >= g) {
        if(r >= b) {
            max = r;
        }
        else {
            max = b;
        }
    }
    else {
        if(g >= b) {
            max = g;
        }
        else {
            max = b;
        }
    }
    // max = MAX( r, g, b );
    if(r < g) {
        if(r < b) {
            min = r;
        }
        else {
            min = b;
        }
    }
    else {
        if(g < b) {
            min = g;
        }
        else {
            min = b;
        }
    }
    v = max; // v
    delta = max - min;
    if (delta == 0.0) {
        h = 0.0;
        s = 0.0;
        return;
    }
    else if( max != 0.0 )
        s = delta / max; // s
    else {
        // r = g = b = 0 // s = 0, v is undefined
        s = 0.0;
        h = -1.0;
        return;
    }
    if( r == max ){
        h = ( g - b ) / delta; // between yellow & magenta
        h = mod(h, 6.0);
    }
    else if( g == max )
        h = 2.0 + ( b - r ) / delta; // between cyan & yellow
    else
        h = 4.0 + ( r - g ) / delta; // between magenta & cyan
    h = h * 60.0; // degrees
    if( h < 0.0 )
        h = h + 360.0;
    //HSV transformation ends here
    if(v > 0.8){
        if(s > 0.7){
            if( h > 320.0 && h < 360.0){
                if(luminance > 1.0)
                    gl_FragColor = vec4(vec3(1.0), textureColor.a);
                else
                    gl_FragColor = vec4(vec3(luminance), textureColor.a);
            }
        }
    }else{
        gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
    }
}
I should mention that the big white spot is sunlight; it's so bright that it passes my HSV conditions at the end. That's not my problem right now; my problem is the purple noise-like pixels and the green noise around the picture.
The purple and green pixels are probably uninitialized memory.
gl_FragColor is not guaranteed to be initialized at the start of a shader, so initializing it before the return statements on lines 70 and 78 should fix the issue.
If it does not, and an uninitialized framebuffer is used, then it may be because of a different (and less likely) cause:
It may be because of blending. Try disabling it or ensuring gl_FragColor.a is 1.
It may be because the stencil buffer is exposing it. Try disabling stencil testing or ensuring that all pixels pass the stencil test.
Alternatively, if it is caused by blending or the stencil test, you could initialize the framebuffer with something like glClear().
Your fragment shader "returns" without setting a fragment color in places like this:
else {
    // r = g = b = 0 // s = 0, v is undefined
    s = 0.0;
    h = -1.0;
    return; //<<<<==== NO gl_FragColor set
I think this code lacks an else-path for the cases s <= 0.7 or h <= 320.0:
if(v > 0.8){
    if(s > 0.7){
        if( h > 320.0 && h < 360.0){
            if(luminance > 1.0)
                gl_FragColor = vec4(vec3(1.0), textureColor.a);
            else
                gl_FragColor = vec4(vec3(luminance), textureColor.a);
        }
        //<<<<===== NO 'else' for h<320
    }
    ///<<<<====== NO 'else' for s<0.7
}else{
    gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
}
Thus, there are many cases where the fragment (the current pixel of the image) remains untouched.
So my mistake was that in some places I returned without setting gl_FragColor. In my case, anywhere the process fails or the conditions aren't met, I have to set:
gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
The corrected code should look like:
precision highp float;
varying vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
const highp vec3 W = vec3(0.299, 0.587, 0.114);
void main()
{
    lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
    float luminance = dot(textureColor.rgb, W);
    //RF: test
    float r = textureColor.r;
    float g = textureColor.g;
    float b = textureColor.b;
    float h;
    float s;
    float v;
    float min = 0.0;
    float max = 0.0;
    float delta = 0.0;
    // min = MIN( r, g, b );
    if(r >= g) {
        if(r >= b) {
            max = r;
        }
        else {
            max = b;
        }
    }
    else {
        if(g >= b) {
            max = g;
        }
        else {
            max = b;
        }
    }
    if(r < g) {
        if(r < b) {
            min = r;
        }
        else {
            min = b;
        }
    }
    else {
        if(g < b) {
            min = g;
        }
        else {
            min = b;
        }
    }
    v = max; // v
    delta = max - min;
    if (delta == 0.0) {
        h = 0.0;
        s = 0.0;
        gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
        return;
    }
    else if( max != 0.0 )
        s = delta / max; // s
    else {
        s = 0.0;
        h = -1.0;
        gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
        return;
    }
    if( r == max ){
        h = ( g - b ) / delta; // between yellow & magenta
        h = mod(h, 6.0);
    }
    else if( g == max )
        h = 2.0 + ( b - r ) / delta; // between cyan & yellow
    else
        h = 4.0 + ( r - g ) / delta; // between magenta & cyan
    h = h * 60.0; // degrees
    if( h < 0.0 )
        h = h + 360.0;
    //---------------------------------------------------
    if(v > 0.8){
        if(s > 0.2){
            if( h > 320.0 && h < 360.0){
                gl_FragColor = vec4(vec3(luminance), textureColor.a);
                return;
            }
        }
    }
    gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
}
and then it works great:
Resulting Image
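For reference, the min/max ladder in the shader is the textbook RGB-to-HSV conversion. A compact C++ transcription (an illustrative sketch, independent of any GL code) behaves the same way and is easier to unit-test:

```cpp
#include <algorithm>
#include <cmath>

struct HSV { float h, s, v; };

// Convert RGB in [0,1] to HSV with h in degrees [0,360), matching the
// shader's formulas (including mod 6 on the red branch).
HSV rgbToHsv(float r, float g, float b) {
    float mx = std::max({r, g, b});
    float mn = std::min({r, g, b});
    float delta = mx - mn;
    HSV out{0.0f, 0.0f, mx};
    if (delta == 0.0f) return out;          // achromatic: h and s stay 0
    out.s = delta / mx;
    if (mx == r)      out.h = std::fmod((g - b) / delta, 6.0f);
    else if (mx == g) out.h = 2.0f + (b - r) / delta;
    else              out.h = 4.0f + (r - g) / delta;
    out.h *= 60.0f;
    if (out.h < 0.0f) out.h += 360.0f;
    return out;
}
```

Note that pure red lands at h = 0, not near 360, which is worth keeping in mind when the filter window is h > 320: reds slightly on the orange side of pure red fall outside it.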

(Cocos2d-x) Is there any action can blur a Sprite?

I added a Sprite as a background.
Now I wish my Sprite could gradually become blurred.
I thought I might modify the Texture2D to do the job, but it seems that Texture2D cannot be modified.
So, what should I do?
You can use a shader for that. You can get a simple blur shader from the cocos test project, like this one:
#ifdef GL_ES
precision mediump float;
#endif
varying vec4 v_fragmentColor;
varying vec2 v_texCoord;
uniform vec2 resolution;
uniform float blurRadius;
uniform float sampleNum;
vec4 blur(vec2);
void main(void)
{
    vec4 col = blur(v_texCoord); //* v_fragmentColor.rgb;
    gl_FragColor = vec4(col) * v_fragmentColor;
}
vec4 blur(vec2 p)
{
    if (blurRadius > 0.0 && sampleNum > 1.0)
    {
        vec4 col = vec4(0);
        vec2 unit = 1.0 / resolution.xy;
        float r = blurRadius;
        float sampleStep = r / sampleNum;
        float count = 0.0;
        for(float x = -r; x < r; x += sampleStep)
        {
            for(float y = -r; y < r; y += sampleStep)
            {
                float weight = (r - abs(x)) * (r - abs(y));
                col += texture2D(CC_Texture0, p + vec2(x * unit.x, y * unit.y)) * weight;
                count += weight;
            }
        }
        return col / count;
    }
    return texture2D(CC_Texture0, p);
}
If you don't know how to add a custom shader to your sprite, here is an example!
You extend the Sprite class:
class MySpriteBlur : public Sprite {
public:
    ~MySpriteBlur();
    bool initWithTexture(Texture2D* texture, const Rect& rect);
    void initGLProgram();
    static MySpriteBlur *create(const char *pszFileName);
    void setBlurRadius(float radius);
    void setBlurSampleNum(float num);
protected:
    float _blurRadius;
    float _blurSampleNum;
};
And then implement it:
MySpriteBlur::~MySpriteBlur() {
}
MySpriteBlur* MySpriteBlur::create(const char *pszFileName) {
    MySpriteBlur* pRet = new (std::nothrow) MySpriteBlur();
    if (pRet && pRet->initWithFile(pszFileName)) {
        pRet->autorelease();
    } else {
        CC_SAFE_DELETE(pRet);
    }
    return pRet;
}
bool MySpriteBlur::initWithTexture(Texture2D* texture, const Rect& rect) {
    _blurRadius = 0;
    if (Sprite::initWithTexture(texture, rect)) {
#if CC_ENABLE_CACHE_TEXTURE_DATA
        auto listener = EventListenerCustom::create(EVENT_RENDERER_RECREATED, [this](EventCustom* event) {
            initGLProgram();
        });
        _eventDispatcher->addEventListenerWithSceneGraphPriority(listener, this);
#endif
        initGLProgram();
        return true;
    }
    return false;
}
void MySpriteBlur::initGLProgram() {
    std::string fragSource = FileUtils::getInstance()->getStringFromFile(
        FileUtils::getInstance()->fullPathForFilename("shaders/example_blur.fsh"));
    auto program = GLProgram::createWithByteArrays(ccPositionTextureColor_noMVP_vert, fragSource.data());
    auto glProgramState = GLProgramState::getOrCreateWithGLProgram(program);
    setGLProgramState(glProgramState);
    auto size = getTexture()->getContentSizeInPixels();
    getGLProgramState()->setUniformVec2("resolution", size);
    getGLProgramState()->setUniformFloat("blurRadius", _blurRadius);
    getGLProgramState()->setUniformFloat("sampleNum", 7.0f);
}
void MySpriteBlur::setBlurRadius(float radius) {
    _blurRadius = radius;
    getGLProgramState()->setUniformFloat("blurRadius", _blurRadius);
}
void MySpriteBlur::setBlurSampleNum(float num) {
    _blurSampleNum = num;
    getGLProgramState()->setUniformFloat("sampleNum", _blurSampleNum);
}
Hope that will help!
You have three options:
1) make a blurred background in Photoshop (quick and simple, but extra size),
2) use a shader (not that simple, and blur is a heavy operation),
3) redraw (on the fly) your background, making it a new texture.
Here's my post about how to draw on a texture:
http://discuss.cocos2d-x.org/t/is-it-possible-to-erase-some-pixels-from-a-sprite/34460/5?u=piotrros
Knowing this, here's a function from my project which blurs one image (a data array) into another:
void Sample::blur(unsigned char* inputData, unsigned char* outputData, float r) {
    int R2 = pow(r + 2, 2);
    for(int i = 0; i < canvasHeight; i++){
        for(int j = 0; j < canvasWidth; j++) {
            int val1 = 0;
            int val2 = 0;
            int val3 = 0;
            int val4 = 0;
            int index2 = (j + (canvasHeight - i - 1) * canvasWidth) * 4;
            for(int iy = i - r; iy < i + r + 1; iy++){
                for(int ix = j - r; ix < j + r + 1; ix++) {
                    int x = CLAMP(ix, 0, canvasWidth - 1);
                    int y = CLAMP(iy, 0, canvasHeight - 1);
                    int index = (x + (canvasHeight - y - 1) * canvasWidth) * 4;
                    val1 += inputData[index];
                    val2 += inputData[index + 1];
                    val3 += inputData[index + 2];
                    val4 += inputData[index + 3];
                }
            }
            outputData[index2] = val1 / R2;
            outputData[index2 + 1] = val2 / R2;
            outputData[index2 + 2] = val3 / R2;
            outputData[index2 + 3] = val4 / R2;
        }
    }
}
Just remember that blur is a heavy, long operation, so if you have a big image it may take a while.
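One detail worth checking in any box blur is the normalizer: a window from -r to r samples (2r+1)^2 pixels, so dividing by (r+2)^2 as above only matches at r = 1. A minimal single-channel C++ sketch (a hypothetical helper, not the project code above) that normalizes by the actual sample count instead:

```cpp
#include <algorithm>
#include <vector>

// Unweighted box blur of a single-channel image stored row-major in a
// flat vector; each output pixel is the average of the (2r+1) x (2r+1)
// window, with sample coordinates clamped at the borders.
std::vector<int> boxBlur(const std::vector<int>& img, int w, int h, int r) {
    std::vector<int> out(img.size());
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            long sum = 0;
            int count = 0;                     // normalize by samples taken
            for (int dy = -r; dy <= r; ++dy)
                for (int dx = -r; dx <= r; ++dx) {
                    int sx = std::clamp(x + dx, 0, w - 1);
                    int sy = std::clamp(y + dy, 0, h - 1);
                    sum += img[sy * w + sx];
                    ++count;
                }
            out[y * w + x] = int(sum / count);
        }
    return out;
}
```

Normalizing by the real count keeps overall brightness unchanged for any radius; a constant image stays exactly constant after blurring.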