normal for analytical surface - opengl

I'm calculating surface normals for my analytical surface.
Some of the normals I'm getting are correct, but not all.
The code is:
SurfaceVertices3f[pos] = i;
SurfaceVertices3f[pos+1] = j;
SurfaceVertices3f[pos+2] = (cos(i)*sin(j));
/* a and b hold the output of the partial differentiation of the vertices from the three lines above; a is w.r.t. i and b is w.r.t. j */
a[0]=1;
a[1]=0;
a[2]=-sin(i)*sin(j);
b[0]=0;
b[1]=1;
b[2]=cos(i)*cos(j);
normal_var=Vec3Df::crossProduct( a, b);
normal_var.normalize();
My output looks like this; the right image is mine and the left one is the reference I'm using:
http://tinypic.com/view.php?pic=73l9co&s=5
Could anyone tell me what mistake I'm making?

Your normal calculation is correct. The reference image just uses a different way of mapping normals to colors.
If you have a look at the green ground color, you will see that the color's norm is not 1. But normals should have a norm of 1. If we assume another common mapping from normal to color, like this one:
color.rgb = normal.xyz / 2 + 0.5
we see that the decoded vector is not a unit vector either. So either they used yet another mapping, or they simply don't have unit-length normals.
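As a quick illustration (a small C++ sketch using glm; the sampled color value is just a made-up example), you can test a mapping by decoding a sampled color and checking whether the result has unit length:

#include <cstdio>
#include <glm/glm.hpp>

int main()
{
    // Hypothetical color sampled from the green ground in the reference image.
    glm::vec3 color(0.0f, 0.5f, 0.0f);

    // Mapping 1: color = normal, i.e. the color itself should already be unit length.
    glm::vec3 direct = color;

    // Mapping 2: color = normal / 2 + 0.5, so decode with the inverse.
    glm::vec3 remapped = color * 2.0f - 1.0f;

    // If neither length is ~1, the image does not store unit-length normals
    // (or uses yet another mapping).
    std::printf("|direct| = %f, |remapped| = %f\n",
                glm::length(direct), glm::length(remapped));
    return 0;
}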

Related

Packing the normal vector and tangent vector

In the deferred shading engine I'm working on, I currently store the normal vector in a buffer with the internal format GL_RGBA16F.
I was always aware that this could not be the best solution, but I had no time to deal with it.
Recently I read "Survey of Efficient Representations for Independent Unit Vectors", which inspired me to use Octahedral Normal Vectors (ONV) and to change the buffer to GL_RG16_SNORM:
Encode the normal vector (vec3 to vec2):
// Returns +/- 1
vec2 signNotZero( vec2 v )
{
    return vec2((v.x >= 0.0) ? +1.0 : -1.0, (v.y >= 0.0) ? +1.0 : -1.0);
}

// Assume normalized input. Output is on [-1, 1] for each component.
vec2 float32x3_to_oct( in vec3 v )
{
    // Project the sphere onto the octahedron, and then onto the xy plane
    vec2 p = v.xy * (1.0 / (abs(v.x) + abs(v.y) + abs(v.z)));
    // Reflect the folds of the lower hemisphere over the diagonals
    return (v.z <= 0.0) ? ((1.0 - abs(p.yx)) * signNotZero(p)) : p;
}
Decode the normal vector (vec2 to vec3):
vec3 oct_to_float32x3( vec2 e )
{
    vec3 v = vec3(e.xy, 1.0 - abs(e.x) - abs(e.y));
    if (v.z < 0) v.xy = (1.0 - abs(v.yx)) * signNotZero(v.xy);
    return normalize(v);
}
Since I have now implemented an anisotropic lighting model, it is necessary to store the tangent vector as well as the normal vector. I want to store both vectors in one and the same color attachment of the framebuffer. That brings me to my question: what is an efficient compromise for packing a unit normal vector and a tangent vector into a buffer?
Of course it would be easy, with the algorithms from the paper, to store the normal vector in the RG channels and the tangent vector in the BA channels of a GL_RGBA16_SNORM buffer, and this is my current implementation too.
But since the normal vector and the tangent vector are always orthogonal, there must be a more elegant way that either increases accuracy or saves memory.
So the real question is: how can I take advantage of the fact that I know the two vectors are orthogonal? Can I store both vectors in a GL_RGB16_SNORM buffer, and if not, can I improve the accuracy when I pack them into a GL_RGBA16_SNORM buffer?
The following considerations are purely mathematical and I have no experience with their practicality. However, I think that especially Option 2 might be a viable candidate.
Both of the following options state the problem in the same way: given a normal (that you can reconstruct using ONV), how can one encode the tangent with a single number?
Option 1
The first option is very close to what meowgoesthedog suggested. Define an arbitrary reference vector (e.g. (0, 0, 1)). Then encode the tangent as the angle (normalized to the [-1, 1] range) that you need to rotate this vector about the normal to match the tangent direction (after projecting on the tangent plane, of course). You will need two different reference vectors (or even three) and choose the correct one depending on the normal. You don't want the reference vector to be parallel to the normal. I assume that this is computationally more expensive than the second option but that would need measuring. But you would get a uniform error distribution in return.
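A rough C++ sketch of this scheme, using glm for the vector math (the reference-vector selection and the function names are only illustrative assumptions):

#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/constants.hpp>

// Pick a reference vector that is guaranteed not to be (nearly) parallel to n.
glm::vec3 referenceAxis(const glm::vec3& n)
{
    return (std::abs(n.z) < 0.9f) ? glm::vec3(0, 0, 1) : glm::vec3(1, 0, 0);
}

// Encode: signed angle (normalized to [-1, 1]) from the reference vector,
// projected onto the tangent plane, to the tangent, measured around the normal.
float encodeTangentAngle(const glm::vec3& n, const glm::vec3& t)
{
    glm::vec3 ref = referenceAxis(n);
    glm::vec3 refT = glm::normalize(ref - n * glm::dot(ref, n));
    float angle = std::atan2(glm::dot(glm::cross(refT, t), n), glm::dot(refT, t));
    return angle / glm::pi<float>();
}

// Decode: rotate the projected reference vector by the stored angle around the normal.
glm::vec3 decodeTangentAngle(const glm::vec3& n, float encoded)
{
    glm::vec3 ref = referenceAxis(n);
    glm::vec3 refT = glm::normalize(ref - n * glm::dot(ref, n));
    float angle = encoded * glm::pi<float>();
    return refT * std::cos(angle) + glm::cross(n, refT) * std::sin(angle);
}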
Option 2
Let's consider the plane orthogonal to the tangent. This plane can be defined either by the tangent or by two vectors that lie in the plane. We know one vector: the surface normal. If we know a second vector v, we can calculate the tangent as t = normalize(cross(normal, v)). To encode this vector, we can prescribe two components and solve for the remaining one. E.g. let our vector be (1, 1, x). Then, to encode the vector, we need to find x, such that cross((1, 1, x), normal) is parallel to the tangent. This can be done with some simple arithmetic. Again, you would need a few different vector templates to account for all scenarios. In the end, you have a scheme whose encoder is more complex but whose decoder couldn't be simpler. The error distribution will not be as uniform as in Option 1, but should be ok for a reasonable choice of vector templates.
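Again as a rough C++/glm sketch (illustrative names; only the single template (1, 1, x) is shown):

#include <glm/glm.hpp>

// Encode: find x so that (1, 1, x) is orthogonal to the tangent, i.e. it lies
// in the plane spanned by the normal and the bitangent:
//   (1, 1, x) . t = 0   =>   x = -(t.x + t.y) / t.z
// This template only works while |t.z| is not too small; otherwise switch to
// a template such as (1, x, 1) or (x, 1, 1).
float encodeTangentX(const glm::vec3& t)
{
    return -(t.x + t.y) / t.z;
}

// Decode: the tangent is orthogonal to both the normal and the encoded vector.
glm::vec3 decodeTangentX(const glm::vec3& n, float x)
{
    return glm::normalize(glm::cross(n, glm::vec3(1.0f, 1.0f, x)));
}

Note that cross(n, v) only determines the tangent up to sign, so in practice the choice of template (or a spare bit) also has to encode the orientation.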

Getting the Tangent for an Object Space to Texture Space

A university assignment requires me to use the vertex coordinates I have to calculate the normals, and then the tangent from the normal values, so that I can create an Object Space to Texture Space matrix.
I have the code needed to make the matrix and the binormal, but I don't have the code for calculating the tangent. I tried to look online, but the answers usually confuse me. Can you explain clearly how it works?
EDIT: I have corrected what I wrote previously as clearly I misunderstood the assignment. Thank you everyone for helping me see that.
A tangent in the mathematical sense is a property of a geometric object, not of the normal map. In the case of normal mapping, we are additionally looking for one very specific tangent (there are infinitely many at each point; basically every vector in the plane defined by the normal is a tangent).
But let's go one step back: we want a space where the u-direction of the texture is mapped onto the tangent direction, the v-direction onto the bitangent/binormal, and the up-vector of the normal map onto the normal of the object. Thus the tangent for a triangle (v0, v1, v2) with uv-coordinates (uv0, uv1, uv2) can be calculated as:
dv1 = v1-v0
dv2 = v2-v0
duv1 = uv1-uv0
duv2 = uv2-uv0
r = 1.0f / (duv1.x * duv2.y - duv1.y * duv2.x);
tangent = (dv1 * duv2.y - dv2 * duv1.y) * r;
bitangent = (dv2 * duv1.x - dv1 * duv2.x) * r;
Once this has been done for all triangles, we have to smooth the tangents at shared vertices (quite similar to what happens with the normals). There are several algorithms for doing this, depending on what you need. One can, for example, weight the tangents by the surface area of the adjacent triangles or by their incident angle.
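A rough C++ sketch of the simplest (unweighted) variant of that smoothing step, using glm (the container layout and the function name are assumptions for illustration):

#include <cstddef>
#include <vector>
#include <glm/glm.hpp>

// Average the per-triangle tangents at shared vertices, then re-orthogonalize
// each result against the vertex normal (Gram-Schmidt) and normalize it.
std::vector<glm::vec3> smoothTangents(const std::vector<glm::vec3>& vertexNormals,
                                      const std::vector<glm::uvec3>& triangles,
                                      const std::vector<glm::vec3>& triangleTangents)
{
    std::vector<glm::vec3> tangents(vertexNormals.size(), glm::vec3(0.0f));

    // Sum the tangent of every triangle into its three vertices.
    // (A weighted variant would scale the contribution by area or angle here.)
    for (std::size_t i = 0; i < triangles.size(); ++i)
        for (int k = 0; k < 3; ++k)
            tangents[triangles[i][k]] += triangleTangents[i];

    // Make each tangent orthogonal to the vertex normal and unit length.
    for (std::size_t i = 0; i < tangents.size(); ++i)
    {
        const glm::vec3& n = vertexNormals[i];
        tangents[i] = glm::normalize(tangents[i] - n * glm::dot(n, tangents[i]));
    }
    return tangents;
}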
An implementation of this whole calculation, along with a more detailed explanation, can be found here: http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-13-normal-mapping/

Compare intensity pixel value Vec3b in OpenCV

I have a 3-channel Mat image of type CV_8UC3.
I want to compare, in a loop, the intensity value of a pixel with its neighbours, and then set 0 or 1 depending on whether the neighbour is greater or not.
I can get the intensity by calling Img.at<Vec3b>(x,y).
But my question is: how can I compare two Vec3b?
Should I compare pixels value for every channel (BGR or Vec3b[0], Vec3b[1] and Vec3b[2]), and then merge the three channels results into a single Mat object?
Me again :)
If you want to compare (greater or less) two RGB values, you need to project the 3-dimensional RGB space onto a plane or an axis.
Of course, there are many possibilities to do this, but an easy way would be to use the HSV color space. The hue (H), however, is not appropriate as a linear ordering function because it is circular (i.e. the value 1.0 is identical to 0.0, so you cannot decide whether 0.5 > 0.0 or 0.5 < 0.0). The saturation (S) or the value (V), however, are appropriate projection functions for your purpose:
If you want to have colored pixels "larger" than monochrome pixels, you will prefer S.
If you want to have lighter pixels larger than darker pixels, you will probably prefer V.
Also any combination of S and V would be a valid projection function, e.g. S+V.
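As a minimal OpenCV sketch of such a comparison (the function name is mine; channel 2 of 8-bit HSV is V and channel 1 is S):

#include <opencv2/opencv.hpp>

// Compare two BGR pixels by projecting them onto the V channel of HSV.
// Use channel 1 instead for S, or the sum of both for S + V.
bool greaterByValue(const cv::Vec3b& a, const cv::Vec3b& b)
{
    cv::Mat bgr(1, 2, CV_8UC3);
    bgr.at<cv::Vec3b>(0, 0) = a;
    bgr.at<cv::Vec3b>(0, 1) = b;

    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);

    return hsv.at<cv::Vec3b>(0, 0)[2] > hsv.at<cv::Vec3b>(0, 1)[2];
}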
As far as I understand, you want a measure of the distance/similarity between two Vec3b pixels. This maps to the general problem of finding the distance between two vectors in an n-dimensional space.
One of the famous measures (and I think this is what you're asking for), is the Euclidean distance.
If you are using OpenCV then you can simply use:
cv::Vec3b a(1, 1, 1);
cv::Vec3b b(5, 5, 5);
double dist = cv::norm(a, b, CV_L2);
You can refer to this for reading about cv::norm and its options.
Edit: If you are doing this to measure color similarity, it's recommended to use the Lab color space, as Euclidean distance in Lab space is a good approximation of how humans perceive color differences.
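A small sketch of that idea, reusing cv::norm (the helper name is mine):

#include <opencv2/opencv.hpp>

// Perceptual distance between two BGR pixels: convert to Lab, then take the
// Euclidean (L2) distance there.
double labDistance(const cv::Vec3b& a, const cv::Vec3b& b)
{
    cv::Mat bgr(1, 2, CV_8UC3);
    bgr.at<cv::Vec3b>(0, 0) = a;
    bgr.at<cv::Vec3b>(0, 1) = b;

    cv::Mat lab;
    cv::cvtColor(bgr, lab, cv::COLOR_BGR2Lab);

    return cv::norm(lab.at<cv::Vec3b>(0, 0), lab.at<cv::Vec3b>(0, 1), cv::NORM_L2);
}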
Edit 2: I see what you mean; in that case you can get the magnitude of each vector and then compare them, something like this:
double a_magnitude = cv::norm(a, CV_L2);
double b_magnitude = cv::norm(b, CV_L2);
if(a_magnitude > b_magnitude)
// do something
else
// do something else.

OpenCV undistortPoints and triangulatePoint give odd results (stereo)

I'm trying to get 3D coordinates of several points in space, but I'm getting odd results from both undistortPoints() and triangulatePoints().
Since both cameras have different resolutions, I've calibrated them separately and got RMS errors of 0.34 and 0.43, then used stereoCalibrate() to get more matrices, got an RMS of 0.708, and then used stereoRectify() to get the remaining matrices. With those in hand I started working on the gathered coordinates, but I get weird results.
For example, the input is (935, 262) and the undistortPoints() output is (1228.709125, 342.79841) for one point, while for another it's (934, 176) and (1227.9016, 292.4686) respectively. That is weird, because both of these points are very close to the middle of the frame, where distortion is smallest. I didn't expect them to be moved by 300 pixels.
When passed to triangulatePoints(), the results get even stranger: I've measured the distance between three points in real life (with a ruler) and calculated the distance between the corresponding pixels in each picture. Because this time the points were on a fairly flat plane, these two lengths (pixel and real) matched, in that |AB|/|BC| was around 4/9 in both cases. However, triangulatePoints() gives me results that are way off, with |AB|/|BC| being 3/2 or 4/2.
This is my code:
double pointsBok[2] = { bokList[j].toFloat()+xBok/2, bokList[j+1].toFloat()+yBok/2 };
cv::Mat imgPointsBokProper = cv::Mat(1,1, CV_64FC2, pointsBok);
double pointsTyl[2] = { tylList[j].toFloat()+xTyl/2, tylList[j+1].toFloat()+yTyl/2 };
//cv::Mat imgPointsTyl = cv::Mat(2,1, CV_64FC1, pointsTyl);
cv::Mat imgPointsTylProper = cv::Mat(1,1, CV_64FC2, pointsTyl);
cv::undistortPoints(imgPointsBokProper, imgPointsBokProper,
intrinsicOne, distCoeffsOne, R1, P1);
cv::undistortPoints(imgPointsTylProper, imgPointsTylProper,
intrinsicTwo, distCoeffsTwo, R2, P2);
cv::triangulatePoints(P1, P2, imgWutBok, imgWutTyl, point4D);
double wResult = point4D.at<double>(3,0);
double realX = point4D.at<double>(0,0)/wResult;
double realY = point4D.at<double>(1,0)/wResult;
double realZ = point4D.at<double>(2,0)/wResult;
The angles between the points are sometimes roughly right, but usually not:
`7.16816 168.389 4.44275` vs `5.85232 170.422 3.72561` (degrees)
`8.44743 166.835 4.71715` vs `12.4064 158.132 9.46158`
`9.34182 165.388 5.26994` vs `19.0785 150.883 10.0389`
I've tried to use undistort() on the entire frame, but got results that were just as odd. The distance between points B and C should be pretty much unchanged at all times, and yet this is what I get:
7502.42
4876.46
3230.13
2740.67
2239.95
Frame by frame.
Pixel distance (bottom) vs real distance (top) - should be very similar:
Angle:
Also, shouldn't both undistortPoints() and undistort() give the same results (another set of videos here)?
The function cv::undistortPoints does undistortion and reprojection in one go. It performs the following sequence of operations:
undo camera projection (multiplication with the inverse of the camera matrix)
apply the distortion model to undo the distortion
rotate by the provided Rotation matrix R1/R2
project points to image using the provided Projection matrix P1/P2
If you pass the matrices R1, P1 resp. R2, P2 from cv::stereoRectify(), the input points will be undistorted and rectified. Rectification means that the images are transformed in such a way that corresponding points have the same y-coordinate. There is no unique solution for image rectification, as you can apply any translation or scaling to both images without changing the alignment of corresponding points.
That being said, cv::stereoRectify() can shift the center of projection quite a bit (e.g. by 300 pixels). If you want pure undistortion, you can pass an identity matrix (instead of R1) and the original camera matrix K (instead of P1). This should lead to pixel coordinates similar to the original ones.
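Reusing the variable names from the code in the question, the pure-undistortion variant of the first call would look roughly like this (a sketch, not tested against your data):

// Pure undistortion: identity rotation and the original camera matrix,
// so the points stay close to their original pixel coordinates.
cv::Mat identity = cv::Mat::eye(3, 3, CV_64F);
cv::undistortPoints(imgPointsBokProper, imgPointsBokProper,
                    intrinsicOne, distCoeffsOne,
                    identity, intrinsicOne);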

Advanced moiré pattern reduction in HLSL / GLSL procedural texture shaders - antialiasing

I am working on a procedural texture. It looks fine, except that very far away the small texture pixels disintegrate into noise and moiré patterns.
I have set out to find a solution that averages and quantises the scale of the pattern both far away and close up, so that nearby it is in full detail, and far away it is rounded off so that one pixel of a distant mountain represents only one colour found there, and not 10 or 20 colours at that point.
It is easy to do this by rounding the World_Position that the volumetric texture is based on, using an if statement, i.e.:
if( camera-pixel_distance > 1200 meters ) {wpos = round(wpos/3)*3;}//---round far away pixels
return texturefunction(wpos);
The result of rounding far-away textures is that they will look like this, except very far away:
The trouble with this is that I have to write about 5 if conditions for the various distances, and I have to estimate a good rounding value for each.
I tried to make a function that cuts the distance of the pixel into distance steps and applies an LOD divider to the pixel world-position value to make it progressively rounder with distance, but I got nonsense results; the HLSL was totally flipping out. Here is the attempt:
float cmra= floor(_WorldSpaceCameraPos/500)*500; //round camera distance by steps of 500m
float dst= (1-distance(cmra,pos)/4500)*1000 ; //maximum faraway view is 4500 meters
pos= floor(pos/dst)*dst;//close pixels are rounded by 1000, far ones rounded by 20,30 etc
It returned nonsense patterns that I could not understand.
Are there any well-documented algorithms for smoothing and rounding distant texture artifacts? Can I use the screen pixel resolution, combined with the distance of the pixel, to round each pixel to a single, stable color?
Are you familiar with the GLSL (and I would assume HLSL) functions dFdx() and dFdy() or fwidth()? They were made specifically to solve this problem. From the GLSL Spec:
genType dFdy (genType p)
Returns the derivative in y using local differencing for the input argument p.
These two functions are commonly used to estimate the filter width used to anti-alias procedural textures.
and
genType fwidth (genType p)
Returns the sum of the absolute derivative in x and y using local differencing for the input argument p, i.e.: abs (dFdx (p)) + abs (dFdy (p));
OK, I found some great code and a tutorial for the solution; it's simple code that can be tweaked by distance and many other parameters.
From this tutorial:
http://www.yaldex.com/open-gl/ch17lev1sec4.html#ch17fig04
half4 frag (v2f i) : COLOR
{
    float Frequency = 0.020;
    float3 pos = mul (_Object2World, i.uv).xyz;
    float V = pos.z;
    float sawtooth = frac(V * Frequency);
    float triangle = (abs(2.0 * sawtooth - 1.0));
    //return triangle;
    float dp = length(float2(ddx(V), ddy(V)));
    float edge = dp * Frequency * 8.0;
    float square = smoothstep(0.5 - edge, 0.5 + edge, triangle);
    // gl_FragColor = vec4(vec3(square), 1.0);
    if (pos.x > 0.) { return float4(float3(square), 1.0); }
    if (pos.x < 0.) { return float4(float3(triangle), 1.0); }
}