Stable values in vertex shader, unstable in fragment shader when rendered with WebGL - GLSL

I'm working with an unfamiliar scene and trying to render an array of points.
In the vertex shader I have this if/else block that resizes every 6th point and the last 3. This works as expected: points render at different sizes, and as I navigate the scene they remain consistent.
However, when I run the same if/else tests in the fragment shader to color points differently, the result is unstable:
Vertex shader block:
gl_PointSize = uPointSize;
if (int(vIndex) == uTrajectoryCount - 1) {
    gl_PointSize = uPointSize * 2.0;
} else if (int(vIndex) == uTrajectoryCount - 2) {
    gl_PointSize = uPointSize * 1.6;
} else if (int(vIndex) == uTrajectoryCount - 3) {
    gl_PointSize = uPointSize * 1.3;
} else if (abs(mod(vIndex, 6.0)) < 0.01) {
    gl_PointSize = uPointSize * 2.0;
}
Fragment shader block:
if (int(vIndex) == uTrajectoryCount - 1) {
    gl_FragColor = vec4(1., 0., 0., 1.);
} else if (int(vIndex) == uTrajectoryCount - 2) {
    gl_FragColor = vec4(0., 1., 0., 1.);
} else if (int(vIndex) == uTrajectoryCount - 3) {
    gl_FragColor = vec4(0., 0., 1., 1.);
} else if (abs(mod(vIndex, 6.0)) < 0.01) {
    gl_FragColor = vec4(1., 1., 0., 1.);
}
What could I look for that would make this stable? I expect the bigger points to stay yellow rather than flicker, and the last three to be red, green, and blue.

This appears to be a rounding issue caused by floating-point imprecision. When you construct an int from a float, the value is truncated, so an interpolated index like 4.9999 becomes 4. Round the value with floor(vIndex + 0.5) or round(vIndex) first, instead of using a bare int(vIndex).
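A minimal sketch of the fragment block with the index rounded before the comparisons (assuming vIndex and uTrajectoryCount are declared exactly as in the shaders above):

// Round the interpolated index to the nearest whole number first;
// int() alone truncates, so 4.9999 would silently become 4.
float roundedIndex = floor(vIndex + 0.5);
int index = int(roundedIndex);
if (index == uTrajectoryCount - 1) {
    gl_FragColor = vec4(1., 0., 0., 1.); // last point: red
} else if (index == uTrajectoryCount - 2) {
    gl_FragColor = vec4(0., 1., 0., 1.); // second-to-last: green
} else if (index == uTrajectoryCount - 3) {
    gl_FragColor = vec4(0., 0., 1., 1.); // third-to-last: blue
} else if (mod(roundedIndex, 6.0) < 0.01) {
    gl_FragColor = vec4(1., 1., 0., 1.); // every 6th point: yellow
}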

Related

RayTracing, refraction code produces weird results

This is the relevant code for refraction. I'm getting weird results when I change the index of refraction. Ideally, the sphere is supposed to show the distorted purple sphere and background due to refraction, but instead it looks like there's a red sphere inside the transparent one.
if(depth < MaxRecursion) {
    if(shape[pos]->transparent() == 1) {
        vec normal = N;
        // etam = index of refraction of the medium, etao = index of refraction outside
        float eta, etao = 1.0, etam = 1.1;
        float cosi = dot(N, unit_vector(r.dir()));
        if(cosi < 0.0) {
            cosi = -cosi;
        }
        else {
            normal = -normal;
            swap(etao, etam);
        }
        eta = etao/etam;
        float c2 = 1.0 - eta*eta*(1.0 - cosi * cosi);
        if(c2 > 0.0) {
            vec dir = unit_vector(eta*unit_vector(r.dir()) - (eta*cosi - sqrtf(c2))*normal);
            ray refraction(r.p_at_par(t_near) + normal*1e-4, dir);
            total += color(refraction, shape, lighting, depth + 1);
            // color function is the current function
        }
        else { // total internal reflection
            ray reflect = reflection(unit_vector(r.dir()), normal, r.p_at_par(t_near));
            total += color(reflect, shape, lighting, depth + 1);
        }
    }
}
Also, for higher indices of refraction (e.g. etam = 1.9), I get a completely white sphere.
I know that the error lies in this part of the code alone, because everything else (reflection, shadows, etc.) works fine without the refraction code.
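For reference, the refracted direction that Snell's law gives (and that GLSL's built-in refract() computes), with \(\eta = \eta_o/\eta_m\), incident direction \(\mathbf{d}\), and the normal \(\mathbf{n}\) oriented against \(\mathbf{d}\), is

\[
\mathbf{t} = \eta\,\mathbf{d} + \left(\eta\cos\theta_i - \sqrt{1 - \eta^2\left(1 - \cos^2\theta_i\right)}\right)\mathbf{n},
\qquad \cos\theta_i = -\,\mathbf{n}\cdot\mathbf{d},
\]

with total internal reflection when the term under the square root goes negative; this is the expression to compare the dir computation above against.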

Weird Raytracing Artifacts

I am trying to create a ray tracer using Qt, but I have some really weird artifacts going on.
Before I implemented shading, I just had 4 spheres, 3 triangles and 2 bounded planes in my scene. They all showed up as expected and in the expected colors; however, on my planes I would see dots the same color as the background. These dots stayed static relative to my view position, so if I moved the camera around, the dots moved with it. They only affected the planes and triangles and never appeared on the spheres.
Once I implemented shading, the issue got worse. The dots now also appear on the spheres wherever the light source hits them, i.e. on any part affected by the diffuse term.
Also, my one plane of pure blue (RGB 0,0,255) has gone straight black. Since I have two planes, I switched their colors and again the blue one went black, so it's a color issue and not a plane issue.
If anyone has any suggestions as to what the problem could be or wants to see any particular code let me know.
#include "plane.h"
#include "intersection.h"
#include <math.h>
#include <iostream>
Plane::Plane(QVector3D bottomLeftVertex, QVector3D topRightVertex, QVector3D normal, QVector3D point, Material *material)
{
minCoords_.setX(qMin(bottomLeftVertex.x(),topRightVertex.x()));
minCoords_.setY(qMin(bottomLeftVertex.y(),topRightVertex.y()));
minCoords_.setZ(qMin(bottomLeftVertex.z(),topRightVertex.z()));
maxCoords_.setX(qMax(bottomLeftVertex.x(),topRightVertex.x()));
maxCoords_.setY(qMax(bottomLeftVertex.y(),topRightVertex.y()));
maxCoords_.setZ(qMax(bottomLeftVertex.z(),topRightVertex.z()));
normal_ = normal;
normal_.normalize();
point_ = point;
material_ = material;
}
Plane::~Plane()
{
}
void Plane::intersect(QVector3D rayOrigin, QVector3D rayDirection, Intersection* result)
{
if(normal_ == QVector3D(0,0,0)) //plane is degenerate
{
cout << "degenerate plane" << endl;
return;
}
float minT;
//t = -Normal*(Origin-Point) / Normal*direction
float numerator = (-1)*QVector3D::dotProduct(normal_, (rayOrigin - point_));
float denominator = QVector3D::dotProduct(normal_, rayDirection);
if (fabs(denominator) < 0.0000001) //plane orthogonal to view
{
return;
}
minT = numerator / denominator;
if (minT < 0.0)
{
return;
}
QVector3D intersectPoint = rayOrigin + (rayDirection * minT);
//check inside plane dimensions
if(intersectPoint.x() < minCoords_.x() || intersectPoint.x() > maxCoords_.x() ||
intersectPoint.y() < minCoords_.y() || intersectPoint.y() > maxCoords_.y() ||
intersectPoint.z() < minCoords_.z() || intersectPoint.z() > maxCoords_.z())
{
return;
}
//only update if closest object
if(result->distance_ > minT)
{
result->hit_ = true;
result->intersectPoint_ = intersectPoint;
result->normalAtIntersect_ = normal_;
result->distance_ = minT;
result->material_ = material_;
}
}
QVector3D MainWindow::traceRay(QVector3D rayOrigin, QVector3D rayDirection, int depth)
{
if(depth > maxDepth)
{
return backgroundColour;
}
Intersection* rayResult = new Intersection();
foreach (Shape* shape, shapeList)
{
shape->intersect(rayOrigin, rayDirection, rayResult);
}
if(rayResult->hit_ == false)
{
return backgroundColour;
}
else
{
QVector3D intensity = QVector3D(0,0,0);
QVector3D shadowRay = pointLight - rayResult->intersectPoint_;
shadowRay.normalize();
Intersection* shadowResult = new Intersection();
foreach (Shape* shape, shapeList)
{
shape->intersect(rayResult->intersectPoint_, shadowRay, shadowResult);
}
if(shadowResult->hit_ == true)
{
intensity += shadowResult->material_->diffuse_ * intensityAmbient;
}
else
{
intensity += rayResult->material_->ambient_ * intensityAmbient;
// Diffuse
intensity += rayResult->material_->diffuse_ * intensityLight * qMax(QVector3D::dotProduct(rayResult->normalAtIntersect_,shadowRay), 0.0f);
// Specular
QVector3D R = ((2*(QVector3D::dotProduct(rayResult->normalAtIntersect_,shadowRay))* rayResult->normalAtIntersect_) - shadowRay);
R.normalize();
QVector3D V = rayOrigin - rayResult->intersectPoint_;
V.normalize();
intensity += rayResult->material_->specular_ * intensityLight * pow(qMax(QVector3D::dotProduct(R,V), 0.0f), rayResult->material_->specularExponent_);
}
return intensity;
}
}
So I figured out my issues. They were due to float's limited precision: any check for < 0.0 would intermittently fail, because distances that should be exactly zero (a ray re-intersecting the surface it starts on) come back as tiny positive values. I had to add an offset to all my checks so that I was checking for < 0.001 instead.

Compute shaders: error in the initialization of textures

I have an image2DArray with 7 slices in my compute shader.
I can write to it with imageStore without problems, and I can also display these textures.
My problem is with initialization: I try to initialize my textures but can't. I use a loop for the initialization:
for(int i=0; i<N; i++){
imageStore( outputTexture , ivec3(texel, i), vec4(0));
}
When N = 7, nothing is displayed, but when N < 7 everything works and my textures are initialized.
Can someone explain why I can't correctly initialize my image2DArray?
Edit:
Here is how I tested this: I write to all slices of the texture and display them. That works fine, but data from the previous frame persists if I don't initialize the texture. So I initialize all pixels of all slices to 0, but then nothing is displayed anymore when N = 7.
Some code:
#version 430 compatibility
layout(rgba8) coherent uniform image2DArray outputTexture;
...
void main(){
ivec2 texel = ivec2(gl_GlobalInvocationID.xy);
ivec2 outSize = imageSize( outputTexture ).xy;
if( texel.x >= outSize.x || texel.y >= outSize.y )
return;
initializeMeshSet( meshSet );
vec4 pWorld = texelFetch(gBuffer[0],texel,0);
pWorld /= pWorld.w;
vec4 nWorld = texelFetch(gBuffer[1],texel,0);
nWorld /= nWorld.w;
if( length(nWorld.xyz) < 0.1 ){
for(int i=0; i<4; i++){
imageStore( outputTexture , ivec3(texel, i), vec4(0));
}
return;
}
if(nbFrame == 0){
float value = treatment(texel, pWorld, nWorld.xyz, outSize.x);
imageStore( outputTexture, ivec3(texel, 0), vec4(vec3(value),1.0));
imageStore( outputTexture, ivec3(texel, 1), vec4(0.0,0.0,0.0, 1.0));
}
else if(nbFrame == 1){
float value = treatment2(texel, pWorld, nWorld.xyz, outSize.x);
vec3 previousValue = imageLoad(outputTexture, ivec3(texel, 1)).xyz * (nbFrame - 1);
value += previousValue;
value /= nbFrame;
imageStore( outputTexture, ivec3(texel, 1), vec4(vec3(value), 1.0));
}
}

What is this vertex shader doing?

I recently took over a project that was left stagnant after a team member quit a few months ago. While trying to get myself up to speed, I came across this vertex shader and I'm having a hard time understanding what it's doing:
uniform int axes;
varying vec4 passcolor;
void main()
{
// transform the vertex
vec4 point;
if (axes == 0) {
point = gl_Vertex;
} else if (axes == 1) {
point = gl_Vertex.xzyw;
} else if (axes == 2) {
point = gl_Vertex.xwzy;
} else if (axes == 3) {
point = gl_Vertex.yzxw;
} else if (axes == 4) {
point = gl_Vertex.ywxz;
} else if (axes == 5) {
point = gl_Vertex.zwxy;
}
point.z = 0.0;
point.w = 1.0;
// eliminate w point
gl_Position = gl_ModelViewProjectionMatrix * point;
passcolor = gl_Color;
}
The lines I'd like to better understand are the lines like this one:
point = gl_Vertex.xwzy;
I can't seem to find documentation that explains this.
Can someone give a quick explanation of what this shader is doing?
I can't seem to find documentation that explains this.
The GLSL specification is pretty clear about how swizzle selection works.
What the shader is basically doing is an obtuse way of picking two axes from a vec4. Notice that the Z and W of the point are overwritten after the selection. By all rights, it could be rewritten as:
vec2 point;
if (axes == 0) {
point = gl_Vertex.xy;
} else if (axes == 1) {
point = gl_Vertex.xz;
} else if (axes == 2) {
point = gl_Vertex.xw;
} else if (axes == 3) {
point = gl_Vertex.yz;
} else if (axes == 4) {
point = gl_Vertex.yw;
} else if (axes == 5) {
point = gl_Vertex.zw;
}
gl_Position = gl_ModelViewProjectionMatrix * vec4(point.xy, 0.0, 1.0);
So it is just selecting two coordinates from the input vertex. Why it needs to do this, I can't say.
The order of the x, y, z, and w after the . determines a mapping from the x, y, z and w of gl_Vertex to the x, y, z, and w of point.
This is called swizzling. point = gl_Vertex is equivalent to point = gl_Vertex.xyzw. point = gl_Vertex.yxzw results in a point equivalent to gl_Vertex with x and y values swapped.
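A small illustration of how the mask after the . remaps components (the values here are just an example):

vec4 v = vec4(1.0, 2.0, 3.0, 4.0); // x=1, y=2, z=3, w=4
vec4 a = v.xwzy; // (1.0, 4.0, 3.0, 2.0): y and w swapped
vec4 b = v.yxzw; // (2.0, 1.0, 3.0, 4.0): x and y swapped
vec2 c = v.zw;   // (3.0, 4.0): pick just two components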

How to implement grayscale rendering in OpenGL?

When rendering a scene of textured polygons, I'd like to be able to switch between rendering in the original colors and a "grayscale" mode. I've been trying to achieve this using blending and color matrix operations; none of it worked (with blending I couldn't find a glBlendFunc() that achieved anything remotely resembling what I wanted, and color matrix operations ...are discussed here).
A solution that comes to mind (but is also rather expensive) is to capture the screen every frame, convert the resulting texture to a grayscale one, and display that instead... (Where I said grayscale I actually meant anything with low saturation, but I'm guessing that for most of the possible solutions it won't differ all that much from grayscale.)
What other options do I have?
The default OpenGL framebuffer uses the RGB colour space, which doesn't store an explicit saturation. You need an approach for extracting the saturation, modifying it, and changing it back again.
My previous suggestion, which simply used the RGB vector length to represent luminance, was incorrect, as it didn't take scaling into account. I apologize.
Credit for the new short snippet goes to the regular user "RTFM_FTW" from ##opengl and ##opengl3 on FreeNode/IRC. It lets you modify the saturation directly without computing the costly RGB->HSV->RGB conversion, which is exactly what you want (in the snippet, S is the rectangle-texture sampler and T is the mix factor: 0 gives full grayscale, 1 the original colour). Though the HSV code below is inferior with respect to your question, I let it stay.
void main(void)
{
    vec3 R0 = texture2DRect(S, gl_TexCoord[0].st).rgb;
    gl_FragColor = vec4(mix(vec3(dot(R0, vec3(0.2125, 0.7154, 0.0721))),
                            R0, T), gl_Color.a);
}
If you want more control than just the saturation, you need to convert to HSL or HSV colour-space. As shown below by using a GLSL fragment shader.
Read the OpenGL 3.0 and GLSL 1.30 specification available on http://www.opengl.org/registry to learn how to use GLSL v1.30 functionality.
#version 130
#define RED 0
#define GREEN 1
#define BLUE 2
in vec4 vertexIn;
in vec4 colorIn;
in vec2 tcoordIn;
out vec4 pixel;
uniform sampler2D tex;
vec4 texel;
const float epsilon = 1e-6;
/* GLSL has no built-in swap(); small helper used by the index sort in RGBtoHSV below */
void swap(inout int a, inout int b)
{
    int tmp = a;
    a = b;
    b = tmp;
}
vec3 RGBtoHSV(vec3 color)
{
/* hue, saturation and value are all in the range [0,1> here, as opposed to their
normal ranges of: hue: [0,360>, sat: [0, 100] and value: [0, 256> */
int sortindex[3] = int[3](RED, GREEN, BLUE);
float rgbArr[3] = float[3](color.r, color.g, color.b);
float hue, saturation, value, diff;
float minCol, maxCol;
int minIndex, maxIndex;
if(color.g < color.r)
swap(sortindex[0], sortindex[1]);
if(color.b < color.g)
swap(sortindex[1], sortindex[2]);
if(color.r < color.b)
swap(sortindex[2], sortindex[0]);
minIndex = sortindex[0];
maxIndex = sortindex[2];
minCol = rgbArr[minIndex];
maxCol = rgbArr[maxIndex];
diff = maxCol - minCol;
/* Hue */
if( diff < epsilon){
hue = 0.0;
}
else if(maxIndex == RED){
hue = ((1.0/6.0) * ( (color.g - color.b) / diff )) + 1.0;
hue = fract(hue);
}
else if(maxIndex == GREEN){
hue = ((1.0/6.0) * ( (color.b - color.r) / diff )) + (1.0/3.0);
}
else if(maxIndex == BLUE){
hue = ((1.0/6.0) * ( (color.r - color.g) / diff )) + (2.0/3.0);
}
/* Saturation */
if(maxCol < epsilon)
saturation = 0;
else
saturation = (maxCol - minCol) / maxCol;
/* Value */
value = maxCol;
return vec3(hue, saturation, value);
}
vec3 HSVtoRGB(vec3 color)
{
float f,p,q,t, hueRound;
int hueIndex;
float hue, saturation, value;
vec3 result;
/* just for clarity */
hue = color.r;
saturation = color.g;
value = color.b;
hueRound = floor(hue * 6.0);
hueIndex = int(hueRound) % 6;
f = (hue * 6.0) - hueRound;
p = value * (1.0 - saturation);
q = value * (1.0 - f*saturation);
t = value * (1.0 - (1.0 - f)*saturation);
switch(hueIndex)
{
case 0:
result = vec3(value,t,p);
break;
case 1:
result = vec3(q,value,p);
break;
case 2:
result = vec3(p,value,t);
break;
case 3:
result = vec3(p,q,value);
break;
case 4:
result = vec3(t,p,value);
break;
default:
result = vec3(value,p,q);
break;
}
return result;
}
void main(void)
{
vec4 srcColor;
vec3 hsvColor;
vec3 rgbColor;
texel = texture(tex, tcoordIn);
srcColor = texel*colorIn;
hsvColor = RGBtoHSV(srcColor.rgb);
/* You can do further changes here, if you want. */
hsvColor.g = 0; /* Set saturation to zero */
rgbColor = HSVtoRGB(hsvColor);
pixel = vec4(rgbColor.r, rgbColor.g, rgbColor.b, srcColor.a);
}
If you're working with a modern-enough OpenGL, I would say pixel shaders are a very suitable solution here: either by hooking into each polygon's shading as it renders, or by doing a single full-screen quad in a second pass that just reads each pixel, converts it to grayscale, and writes it back. Unless your resolution, graphics hardware, and target framerate are somehow "extreme", that should be doable these days in most cases.
For most desktops, render-to-texture isn't that expensive anymore; compiz, Aero, etc., and effects like bloom or depth of field seen in recent titles all depend on it.
Actually, you don't convert the screen texture per se to grayscale; you would draw a screen-sized quad with that texture and a fragment shader transforming the values to grayscale, as in the sketch below.
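A minimal sketch of such a second-pass fragment shader (assuming the scene was first rendered into a texture bound to a sampler named sceneTex; the name and the exact luminance weights are just illustrative):

uniform sampler2D sceneTex; // scene rendered to a texture in the first pass

void main(void)
{
    vec3 color = texture2D(sceneTex, gl_TexCoord[0].st).rgb;
    // Rec. 709 luminance weights, much like the earlier snippet
    float luma = dot(color, vec3(0.2126, 0.7152, 0.0722));
    // For partial desaturation, use mix(color, vec3(luma), amount) with a uniform instead
    gl_FragColor = vec4(vec3(luma), 1.0);
}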
Another option is to have two sets of fragment shaders for your triangles: one that just copies the gl_FrontColor attribute as the fixed-function pipeline would, and another that writes grayscale values to the screen buffer.
A third option might be indexed color modes, if you set up a grayscale palette, but that mode might be deprecated and poorly supported by now; plus you lose a lot of functionality like blending, if I remember correctly.