GLSL fragment shader - draw simple thick curve - opengl

I am trying to draw a very simple curve in just a fragment shader where there is a horizontal section, a transition section, then another horizontal section. It looks like the following:
My approach:
Rather than using Bézier curves (which would make handling thickness more complicated), I tried to take a shortcut. Basically, I just use one smoothstep to transition between the horizontal segments, which gives a decent curve. To compute the thickness of the curve, for any given fragment x I compute the y, and ultimately the coordinate (x, y) of where on the line we should be. Unfortunately, this isn't computing the shortest distance to the curve, as seen below.
Below is a diagram to help perhaps understand the function I am having trouble with.
// Start is a 2D point where the line will start
// End is a 2d point where the line will end
// transition_x is the "x" position where we use a smoothstep to transition between points
float CurvedLine(vec2 start, vec2 end, float transition_x) {
// Setup variables for positioning the line
float curve_width_frac = bendWidth; // How wide should we make the S bend
float thickness = abs(end.x - start.x) * curve_width_frac; // normalize
float start_blend = transition_x - thickness;
float end_blend = transition_x + thickness;
// for the current fragment, if you draw a line straight up, what's the first point it hits?
float progress_along_line = smoothstep(start_blend, end_blend, frag_coord.x);
vec2 point_on_line_from_x = vec2(frag_coord.x, mix(start.y,end.y, progress_along_line)); // given an x, this is the Y
// Convert to application specific stuff since units are a little odd
vec2 nearest_coord = point_on_line_from_x * dimensions;
vec2 rad_as_coord = rad * dimensions;
// return pseudo distance function where 1 is inside and 0 is outside
return 1.0 - smoothstep(lineWidth * dimensions.y, lineWidth * 1.2 * dimensions.y, distance(nearest_coord, rad_as_coord));
// return mix(vec4(1.0), vec4(0.0), s));
}
I am familiar with computing the shortest distance to a line or line segment, but I am not sure how to tackle it for this curved segment. Any suggestions would be greatly appreciated.

I would do this in 2 passes:
render a thin curve
Do not yet use the target colors but BW/grayscale instead ... a black background with white lines will make the next step easier.
smooth the original image and threshold
So simply use any FIR smoothing or Gaussian blur that bleeds the colors up to half of your thickness distance. After this, just threshold the result against the background and recolor to the wanted colors. The smoothing needs the rendered image from #1 as input. You can use a simple convolution with a circular mask:
0 0 0 1 1 1 0 0 0
0 0 1 1 1 1 1 0 0
0 1 1 1 1 1 1 1 0
1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1
0 1 1 1 1 1 1 1 0
0 0 1 1 1 1 1 0 0
0 0 0 1 1 1 0 0 0
By the way, the color intensity after a convolution like this is a function of distance from the center, so it can be used as a texture coordinate or a shading parameter if you want ...
Instead of a convolution matrix you can also use two nested for loops:
// convolution over a circular mask of radius r (in pixels)
vec4 col = vec4(0.0, 0.0, 0.0, 0.0);
for (int y = -r; y <= r; y++)
    for (int x = -r; x <= r; x++)
        if ((x*x) + (y*y) <= r*r)
            col += texture2D(sampler, vec2(x0 + float(x)*mx, y0 + float(y)*my));
// threshold & recolor
if (col.r > threshold) col = col_curve;   // assuming the 1st pass used the red channel
else                   col = col_background;
where x0,y0 is your fragment position in texture coordinates and mx,my scale from pixels to texture-coordinate scale. You also need to handle edge cases, as x0+x and y0+y can fall outside your texture.
Beware: the thicker the curve, the slower this gets ... For higher thicknesses it is faster to apply a smaller-radius smoothing a few times (more passes).
Here are some related QAs that cover some of the steps:
OpenGL Scale Single Pixel Line for multi pass (old api)
How to implement 2D raycasting light effect in GLSL scanning input texture
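The smooth-and-threshold pass described above can also be sketched outside GLSL; here is a minimal Python/numpy version of the circular-mask convolution followed by the threshold and recolor step (function and parameter names are assumptions; `img` stands for the grayscale result of pass #1):

```python
import numpy as np

def smooth_and_threshold(img, r, threshold, col_curve=1.0, col_background=0.0):
    """Convolve a grayscale image with a circular mask of radius r (in pixels),
    then threshold the accumulated intensity into two flat colors."""
    h, w = img.shape
    out = np.full((h, w), col_background, dtype=float)
    for cy in range(h):
        for cx in range(w):
            acc = 0.0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    if dx * dx + dy * dy <= r * r:
                        y, x = cy + dy, cx + dx
                        if 0 <= y < h and 0 <= x < w:  # edge handling
                            acc += img[y, x]
            if acc > threshold:
                out[cy, cx] = col_curve
    return out
```

For example, a single white pixel bleeds into a filled disc of radius r, which is exactly how the thin curve from pass #1 gains thickness.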

Related

How do I calculate a COLLADA file's parent and children joint transforms?

Trying to implement rigging:
Created a simple rigged snake test with Blender and exported a COLLADA file.
I've loaded vertex positions, weights and joint IDs correctly.
I've loaded the skeleton joint hierarchy and these transforms for each bone (I load each matrix by reading all the floats into a float[16], then use glm::make_mat4(tmpFloatArray) and transpose it; I'm not sure if this is the correct way):
<visual_scene id="Scene" name="Scene">
<node id="Armature" name="Armature" type="NODE">
<matrix sid="transform">1 0 0 0 0 0 1 0 0 -1 0 0 0 0 0 1</matrix>
<node id="Armature_Bone" name="Bone" sid="Bone" type="JOINT">
<matrix sid="transform">0.3299372 0.944003 -1.78814e-7 0 -4.76837e-7 0 -1 0 -0.944003 0.3299374 3.8743e-7 0 0 0 0 1</matrix>
<node id="Armature_Bone_001" name="Bone.001" sid="Bone_001" type="JOINT">
<matrix sid="transform">0.886344 -0.4630275 3.31894e-7 2.98023e-8 0.4630274 0.886344 -1.86307e-7 1.239941 -2.07907e-7 3.18808e-7 1 -2.84217e-14 0 0 0 1</matrix>
<node id="Armature_Bone_002" name="Bone.002" sid="Bone_002" type="JOINT">
<matrix sid="transform">0.9669114 0.2551119 -1.83038e-7 -1.19209e-7 -0.2551119 0.9669115 1.29195e-7 1.219687 2.09941e-7 -7.82246e-8 1 0 0 0 0 1</matrix>
<node id="Armature_Bone_003" name="Bone.003" sid="Bone_003" type="JOINT">
<matrix sid="transform">0.8538353 0.5205433 1.0139e-7 -1.19209e-7 -0.5205433 0.8538353 2.4693e-7 1.815649 4.19671e-8 -2.63615e-7 1 5.68434e-14 0 0 0 1</matrix>
Now if I set each bone's matrix to glm::mat4(1), I get this:
But if I try to multiply by a joints parent transform, like in the Thin Matrix rigging tutorial, I get very weird results:
void SkelManager::setTposeTransforms(std::vector<Joint>& _reference)
{
    for (int child = 0; child < _reference.size(); child++)
    {
        if (_reference[child].parent == -1)
        {
            //_reference[child].tPose = glm::mat4(1);
            _reference[child].tPose = _reference[child].transform;
        }
        for (int parent = 0; parent < _reference.size(); parent++)
            if (_reference[child].parent == parent)
            {
                //_reference[child].tPose = glm::mat4(1);
                _reference[child].tPose = _reference[parent].tPose * _reference[child].transform;
            }
    }
}
Please help; I've been stuck on this for a couple of weeks with no success, and no matter how hard I search the web I can't find anything that works. Any ideas on what I could be doing wrong?
I use glm::make_mat4(tmpFloatArray), and then I transpose it, not sure if this is the correct way
See the COLLADA spec about <matrix>:
Matrices in COLLADA are column matrices in the mathematical sense. These matrices are written in row-major order to aid the human reader. See the example.
So yes, you need to transpose it.
It is not so hard to load COLLADA's skeleton animations. Follow these steps:
Importer side:
Load the whole joint node hierarchy and multiply each joint transform with its parent up to the root node, as you do for normal/other nodes (scene graph). It is better to redo the multiplication only when transforms change for each frame...
Load Controller->Skin element with joint IDs, weights... also bind_shape_matrix and INV_BIND_MATRIX
Load Animation object[s] to animate joints
Load instance_controller; it stores the material and the <skeleton> element, which indicates where the root node of the joint hierarchy is. This is important because you need to start resolving SIDs from that element, not from the entire document or the top nodes in the scene...
Render side:
Prepare all joint transforms for each frame if needed; multiply joint transforms with their parents
Create this matrix for each joint:
FinalJointTrans4x4 = JointTransform * InvBindPose * BindShapeMatrix
JointTransform is the transform already multiplied with its parents...
InvBindPose (or InvBindMatrix) is the transform you read from skin->joints->INV_BIND_MATRIX for each joint
BindShapeMatrix is the transform you read from skin->bind_shape_matrix
Send these FinalJointTrans4x4 matrices and weights to shader (a uniform buffer would be good to store matrices)
Use this information in the shader and render.
Maybe (from http://github.com/recp/gk):
...
mat4 skinMat;
skinMat = uJoints[JOINTS.x] * WEIGHTS.x
+ uJoints[JOINTS.y] * WEIGHTS.y
+ uJoints[JOINTS.z] * WEIGHTS.z
+ uJoints[JOINTS.w] * WEIGHTS.w;
pos4 = skinMat * pos4;
norm4 = skinMat * norm4;
...
#ifdef JOINT_COUNT
gl_Position = VP * pos4;
#else
gl_Position = MVP * pos4;
#endif
...
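The CPU side of the steps above (multiply each joint with its parent, then build FinalJointTrans4x4 = JointTransform * InvBindPose * BindShapeMatrix) can be sketched in Python with numpy; this is only a sketch, and it assumes joints are ordered so parents precede children with `parent == -1` marking the root:

```python
import numpy as np

def final_joint_transforms(local, parents, inv_bind, bind_shape):
    """Build FinalJointTrans4x4 = JointTransform * InvBindPose * BindShapeMatrix
    for each joint, where JointTransform is the local transform multiplied
    up through the parent hierarchy."""
    world = [None] * len(local)
    final = []
    for i, (m, p) in enumerate(zip(local, parents)):
        # root joint keeps its local transform; others accumulate the parent's
        world[i] = m if p == -1 else world[p] @ m
        final.append(world[i] @ inv_bind[i] @ bind_shape)
    return final
```

These final matrices are what get uploaded as `uJoints` to the shader above.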
There may be other details I forgot to mention (I may edit the answer later), but this should help a lot.
PS: There is a library called AssetKit (http://github.com/recp/assetkit); you can use it to load COLLADA files if you like.

Jagged lines while handling UV values out of range [0,1]

I am parsing an OBJ which has texture coordinates greater than 1 and less than 0 as well. I then write it back with the UV values wrapped into the range [0,1]. Based on my understanding from another question on SO, I'm doing the conversion to the range [0,1] as follows.
if (oldU > 1.0) or (oldU < 0.0):
    oldU = math.modf(oldU)[0]  # Returns the fractional part
    if oldU < 0.0:
        oldU = 1 + oldU
if (oldV > 1.0) or (oldV < 0.0):
    oldV = math.modf(oldV)[0]  # Returns the fractional part
    if oldV < 0.0:
        oldV = 1 + oldV
But I see some jagged lines in my output obj file and the original obj file when rendered in some software:
Original
Restricted to [0,1]
This may not work as you expected.
Given some triangle edge that starts at U=0.9 and ends at U=1.1, after your UV wrapping it will start at 0.9 but end at 0.1, so the triangle will use a different part of your texture. I believe this happens at the bottom of your mesh.
In general there's no problem with using UVs outside of the [0,1] range, so first try to render the mesh as it is and see if you have any problems.
If you really want to move the UVs into the [0,1] range, then scale and translate them instead of wrapping per vertex: iterate over all vertices and store the min and max values for U and V, then scale the UVs of every vertex so that min becomes 0 and max becomes 1.
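A minimal Python sketch of that scale-and-translate normalization (a hypothetical helper, assuming the UVs are available as a list of (u, v) pairs):

```python
def normalize_uvs(uvs):
    """Affinely remap all UVs so per axis the minimum maps to 0 and the
    maximum maps to 1, preserving the relative layout (no per-vertex wrapping)."""
    us = [u for u, _ in uvs]
    vs = [v for _, v in uvs]
    min_u, max_u = min(us), max(us)
    min_v, max_v = min(vs), max(vs)
    su = (max_u - min_u) or 1.0  # avoid division by zero for a flat axis
    sv = (max_v - min_v) or 1.0
    return [((u - min_u) / su, (v - min_v) / sv) for u, v in uvs]
```

Unlike the per-vertex modf wrapping, an edge from U=0.5 to U=1.5 stays continuous after this remapping; it just occupies a smaller span of the texture.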

Taking very large screenshot from scene

I want to take very large screenshots of my application in OpenGL, like 20000x20000, for printing on a banner. First of all, I am not able to create such big framebuffers because of the maximum GPU texture size limitation. Can anyone help me capture the framebuffer in different chunks?
As you already noted, capturing it in multiple passes is the way to go. In the simplest form, you can use multiple passes, each rendering a part of the scene.
To get the proper image sub-region, all you need to do is apply another transformation to the clip-space positions of your vertices. This boils down to simple translations and scalings in x and y:
When considering the Euclidean interpretation of clip space - normalized device coordinates - the viewing volume is represented by a cube [-1,1] in all 3 dimensions.
To render only an axis-aligned sub-region of that cube, we have to upscale it so that only the sub-region fits into the [-1,1] area, and we have to translate it properly.
Assuming we want to divide the image into a uniform grid of m times n tiles, we could do the following transformations when rendering tile i,j:
Move the bottom left of the tile to the origin. That tile position will be at (-1 + 2*i/m, -1 + 2*j/n), so we have to translate with the negated value:
x' = x + 1 - 2*i/m,
y' = y + 1 - 2*j/n
This is only a helper step to make the final translation easier.
Scale by factors m and n along x and y directions:
x'' = m * x' = x * m + m - 2*i,
y'' = y' * n = y * n + n - 2*j
The tile is now aligned such that its bottom-left corner is (still) at the origin and its upper-right corner is at (2,2), so just translate it back by (-1, -1) so that we end up in the viewing volume again:
x''' = x'' - 1 = x * m + m - 2*i - 1,
y''' = y'' - 1 = y * n + n - 2*j - 1
This can of course be represented as a simple affine transformation matrix:
( m 0 0 m - 2*i - 1)
( 0 n 0 n - 2*j - 1)
( 0 0 1 0 )
( 0 0 0 1 )
In most cases, you can simply pre-multiply that matrix to the projection-matrix (or whatever matrices you use), and won't have to change anything else during rendering.
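The derivation above can be sketched and sanity-checked in Python with numpy; the matrix below is the one to pre-multiply onto your projection matrix, and the test maps the corners of tile (i, j) onto the full [-1, 1] clip range:

```python
import numpy as np

def tile_matrix(i, j, m, n):
    """Affine matrix mapping tile (i, j) of an m x n grid in NDC
    onto the full [-1, 1] x [-1, 1] viewing square:
    x''' = m*x + m - 2*i - 1,  y''' = n*y + n - 2*j - 1."""
    return np.array([
        [m, 0, 0, m - 2 * i - 1],
        [0, n, 0, n - 2 * j - 1],
        [0, 0, 1, 0],
        [0, 0, 0, 1],
    ], dtype=float)
```

For a 2x2 grid, tile (0, 0) spans NDC x,y in [-1, 0]; the matrix stretches exactly that quadrant over the whole clip square, so rendering once per tile and stitching the outputs yields the full oversized image.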

Draw camera position using OpenCV

I am quite new to OpenCV so please forgive me if I am asking something obvious.
I have a program that gives me the position and rotation of a moving camera. But to be sure my program works correctly, I want to draw those results in a 3D coordinate system.
I also have camera projection matrix
Camera matrix: [ 1135.52     0      1139.49
                    0     1023.50    543.50
                    0        0          1   ]
example how my result looks (calculated camera position):
Position = [  0.92725    0.041710  -0.372177   0.0803997
             -0.0279857 -0.983288  -0.1179896 -0.0466907
              0.373459  -0.177219   0.910561   1.19969
              0          0          0          1         ]

2D Matrix to 3D Matrix

I have 2D transformations stored in a plain ol' 3x3 Matrix. How can I re-format that one to a Matrix I can shove over to OpenGL in order to transform orthogonal shapes, polygons and suchlike.
How do I have to put the values so the transformations are preserved?
(On an unrelated note, is there a fast way to invert a 3x3 Matrix?)
Some explanation about transformation matrices: All the columns, except the last one, describe the orientation of a new coordinate system in the basis of the current coordinate system. So the first column is the X vector of the new coordinate system as seen from the current one, the second is the new Y vector, and the third is the new Z. So far this only covers the rotation. The last column is used for the relative offset (translation). The last row and the bottom-right value are used for homogeneous transformations. It's best to leave the last row as 0, ..., 0, 1.
In your case you're missing the Z values, so we just insert an identity transform there, so that incoming values are left as they are.
Say this is your original matrix:
xx xy tx
yx yy ty
0 0 1
This matrix is missing the Z transformation. Inserting identity means: leave Z as it is, and don't mix it with the rest. So every ·z and z· entry is 0, except zz = 1. This gives you the following matrix:
↓
xx xy 0 tx
yx yy 0 ty
0 0 1 0 ←
0 0 0 1
You can apply that onto the current OpenGL matrix stack with glMultMatrix if your OpenGL version is below the 3.x core profile. Be aware that OpenGL numbers the matrix in column-major order, i.e. the indices in the array go like this (hexadecimal digits):
0 4 8 c
1 5 9 d
2 6 a e
3 7 b f
This is contrary to the usual C notation, which is
0 1 2 3
4 5 6 7
8 9 a b
c d e f
With OpenGL 3 core and later you have to do matrix management and manipulation yourself anyway.
EDIT for second part of question
If by inverting one means finding the matrix M^-1 for a given matrix M, so that M^-1 * M = M * M^-1 = I: for 3×3 matrices the determinant (adjugate) inversion method requires fewer operations than Gauss-Jordan elimination and is thus the most efficient way to do it. Already for 4×4 matrices, determinant inversion is slower than every other method. http://www.sosmath.com/matrix/inverse/inverse.html
If you know that the upper-left part of your matrix is orthonormal (a pure rotation R plus a translation column t), you can just transpose the upper-left part and replace the translation column t with -Rᵀt. This exploits the fact that for an orthonormal matrix R^-1 = R^T.
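That orthonormal shortcut can be sketched and verified in Python with numpy; for a rigid 4x4 transform [R|t] the inverse is [Rᵀ|-Rᵀt] (a sketch, not tied to any particular math library):

```python
import numpy as np

def rigid_inverse(M):
    """Invert a 4x4 rigid transform [R|t] without general elimination:
    M^-1 = [R^T | -R^T t], exploiting R^-1 = R^T for rotation matrices."""
    R = M[:3, :3]
    t = M[:3, 3]
    inv = np.eye(4)
    inv[:3, :3] = R.T          # transpose the rotation part
    inv[:3, 3] = -R.T @ t      # translation becomes -R^T t, not just -t
    return inv
```

Note that simply negating t is not enough: the translation must also be rotated back into the new frame, which is why the last column is -Rᵀt.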
Just add the fourth row and column. For example given
2 3 3
3 2 4
0 0 1
Create the following
2 3 3 0
3 2 4 0
0 0 1 0
0 0 0 1
The transformation still occurs on the x-y plane even though it is now in three-space.
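The embedding described above can be sketched in Python with numpy; `flatten(order='F')` then produces the column-major array that glMultMatrixf expects (the helper name is an assumption for illustration):

```python
import numpy as np

def embed_2d_in_3d(M3):
    """Insert an identity Z row/column into a 3x3 2D affine matrix,
    producing the 4x4 matrix OpenGL expects."""
    M3 = np.asarray(M3, dtype=float)
    M4 = np.eye(4)
    M4[:2, :2] = M3[:2, :2]  # 2x2 rotation/scale block stays in place
    M4[:2, 3] = M3[:2, 2]    # translation moves out to the last column
    return M4
```

Because Z passes through the inserted identity untouched, points on the x-y plane transform exactly as they did under the original 3x3 matrix.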