Drawing inner shadow for Bezier curves in OpenGL / GLSL

I'm trying to draw an inner glow/shadow for an object consisting of four cubic Bezier curves. To draw a single Bezier curve I split it into segments, calculate the distance from the current pixel to each line segment, and finally blend the distances with min:
// No GL_MAX blending mode in OpenGL ES 2.0
d_min = min(d_min, d)
where d is the distance for each segment. A zoomed-in example of blending two line segments:
It works reasonably well when you have ~25 segments representing a short Bezier curve, except for "gutters" in the places where the Bezier curves and their respective gradients combine.
Q: How can I avoid these artifacts? Is there a better method for drawing an inner glow/shadow?
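For reference, a minimal OpenGL ES 2.0 fragment-shader sketch of the segment-distance min-blend described above; the uniform names, segment count and the 10-pixel falloff are illustrative placeholders, not taken from the question:

precision mediump float;

const int SEG_COUNT = 25;
uniform vec2 segA[SEG_COUNT]; // segment start points (pixels)
uniform vec2 segB[SEG_COUNT]; // segment end points (pixels)

float distToSegment(vec2 p, vec2 a, vec2 b)
{
    vec2 ab = b - a;
    float t = clamp(dot(p - a, ab) / dot(ab, ab), 0.0, 1.0);
    return length(p - (a + t * ab));
}

void main()
{
    float d_min = 1.0e4; // larger than any on-screen distance
    for (int i = 0; i < SEG_COUNT; i++)
        d_min = min(d_min, distToSegment(gl_FragCoord.xy, segA[i], segB[i]));
    float glow = clamp(d_min / 10.0, 0.0, 1.0); // 10 px falloff, arbitrary
    gl_FragColor = vec4(vec3(glow), 1.0);
}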

What about doing a blur on your curves and then masking the result to the inside area?

see:
GLSL rendering 2D cubic BEZIER curves
It basically converts the input 4-vertex geometry chunks into BEZIER cubics with a defined curve width d.
In the fragment shader you get the distance to the curve, ll, so you can use it directly to shade your stuff. For your case, using a white background, black color, and these points:
double pnt[]= // cubic curve control points
{
+0.9,-0.8,0.0,
+0.5,+0.8,0.0,
-0.5,+0.8,0.0,
-0.9,-0.8,0.0,
};
I got this output (d=0.05):
I just changed the final coloring line in the fragment shader to:
l=ll/d; // distance from center of curve 0..1
col=vec4(l,l,l,1.0);
You can un-linearize the l by sqrt or pow to enhance the shading effect...
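For example, the suggested non-linearity could look like this (a sketch; ll and d are the variables from the linked answer):

float l = clamp(ll / d, 0.0, 1.0); // 0.0 at the curve center, 1.0 at the edge
l = sqrt(l);                       // or pow(l, 0.5) / pow(l, 2.0) for a different falloff
col = vec4(l, l, l, 1.0);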

Related

Is my normal interpolation perspective-correct?

I am trying to implement a software renderer
It looks like this; it seems my interpolated normals are not perspective-correct.
I use scanline conversion and calculate the normals with the following steps:
Assume we are now drawing line AB (A and B have the same y value in screen space).
Calculate the normal of B by interpolating the normals of the top vertex and the bottom vertex (the alpha and beta values are retrieved from the top and bottom in screen space).
Calculating A is similar.
Draw line AB, calculating the normals of the fragments by interpolating the normals of A and B.
Calculate the light contribution.
If I am doing it wrong, how do I do the interpolation correctly?
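For reference, perspective-correct interpolation divides each attribute by the vertex's clip-space w, interpolates linearly in screen space, and then divides the result by the interpolated 1/w. A minimal sketch in GLSL-style notation (the names are illustrative, not from the post):

vec3 interpNormal(vec3 nA, float wA, vec3 nB, float wB, float t)
{
    vec3  num   = mix(nA / wA, nB / wB, t);   // interpolate attribute / w linearly in screen space
    float denom = mix(1.0 / wA, 1.0 / wB, t); // interpolate 1 / w the same way
    return normalize(num / denom);            // recover the attribute and renormalize
}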

Mesh and cone intersection algorithm

I am looking for an efficient algorithm for mesh (set of triangles) and cone (given by origin, direction and angle from that direction) intersection. More precisely, I want to find the intersection point that is closest to the cone's origin. For now, all I can think of is to intersect the mesh with several rays from the cone origin and take the closest point. (Of course, some spatial structure will be constructed for the mesh to reject unnecessary intersections.)
I also found the following algorithm with a brief description:
"Cone to mesh intersection is computed on the GPU by drawing the cone geometry with the mesh and reading the minimum depth value marking the intersection point".
Unfortunately, its implementation isn't obvious to me.
So can anyone suggest something more efficient than what I have, or explain in more detail how it can be done on the GPU using OpenGL?
On the GPU I would do it like this:
1. set the view
- to the cone's origin, directed outwards, covering the biggest circle slice
- for an infinite cone, use the max Z value of the mesh vertices in the view coordinate system
2. clear the buffers
3. draw the mesh
- but in the fragment shader draw only the pixels intersecting the cone:
|fragment.xyz - screen_middle| <= tan(cone_ang/2) * fragment.z
4. read the z-buffer
- read the fragments and, from the valid (filled) ones, select the one closest to the cone's origin

[notes]
If your gfx engine can also handle output values from your fragment shader, then you can skip bullet 4 and do the min-distance search inside bullet 3 instead of rendering...
That will speed up the process considerably (you need just a single xyz vector).
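A minimal fragment-shader sketch of the cone test in bullet 3, treating the cone as solid and assuming the fragment position is passed in the cone's view space (the varying/uniform names are placeholders):

precision mediump float;
varying vec3 viewPos;  // fragment position; cone apex at origin, cone axis = +z
uniform float coneAng; // full opening angle of the cone (radians)

void main()
{
    float radial = length(viewPos.xy);           // distance from the cone axis
    if (radial > tan(coneAng * 0.5) * viewPos.z) // outside the cone (or behind the apex)
        discard;
    gl_FragColor = vec4(1.0);                    // color is irrelevant; the z-buffer holds the result
}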

How to draw a Cartesian plane via OpenGL?

I need to draw a Cartesian plane (standard OXYZ), where I would construct planes from equations ax + by + cz + d = 0 and some objects.
How can I do that via OpenGL? Anybody?
You need to create a triangle or quad. Calculate points in the plane using your equation and construct the geometry from those points.
For rendering the geometry, look for some tutorials; there are plenty of them around.
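A GLSL-style sketch of that construction (the same math would normally run on the CPU when filling the vertex buffer; the function and parameter names are illustrative). It builds a quad of half-size s lying in the plane ax + by + cz + d = 0:

void planeQuad(float a, float b, float c, float d, float s, out vec3 corners[4])
{
    vec3 abc = vec3(a, b, c);
    vec3 n   = normalize(abc);           // plane normal
    vec3 p0  = -d * abc / dot(abc, abc); // point of the plane closest to the origin
    // pick any vector not parallel to n, then build two in-plane axes
    vec3 h = (abs(n.x) < 0.9) ? vec3(1.0, 0.0, 0.0) : vec3(0.0, 1.0, 0.0);
    vec3 u = normalize(cross(n, h));
    vec3 v = cross(n, u);
    corners[0] = p0 + s * ( u + v);
    corners[1] = p0 + s * ( u - v);
    corners[2] = p0 + s * (-u - v);
    corners[3] = p0 + s * (-u + v);
}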
If I am interpreting your question correctly, you just want to draw the axes of the Cartesian planes xy, xz, yz.
You can achieve this very easily by drawing a non-solid cube (glutWireCube should do the job), such that its bottom-front-left corner is at (0,0,0) (or bottom-back-left corner, based on the direction of positive depth).

Screen space bounding box computation in OpenGL

I'm trying to implement the tiled deferred rendering method and now I'm stuck. I'm computing the min/max depth for each tile (32x32) and storing it in a texture. Then I want to compute a screen-space bounding box (bounding square), represented by the bottom-left and top-right coordinates of a rectangle, for every point light (sphere) in my scene (see the pic from my app). This, together with the min/max depth, will be used to check whether a light affects the actual tile.
The problem is I have no idea how to do this. Any ideas, source code, or exact math?
Screen-space is basically a 2D entity, so instead of a bounding box think of a bounding rectangle.
Here is a simple way to compute it:
Project the 8 corner points of your world-space bounding box onto the screen using your ModelViewProjection matrix.
Find the bounding rectangle of these points (which is just the min/max X and Y coordinates of the projected points).
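A GLSL-style sketch of those two steps (names illustrative; the same loop is usually done on the CPU or in a culling shader):

void screenRect(mat4 mvp, vec3 corners[8], out vec2 rectMin, out vec2 rectMax)
{
    rectMin = vec2( 1.0e6);
    rectMax = vec2(-1.0e6);
    for (int i = 0; i < 8; i++)
    {
        vec4 clip = mvp * vec4(corners[i], 1.0);
        vec2 ndc  = clip.xy / clip.w; // note: corners behind the camera must be clipped first
        rectMin = min(rectMin, ndc);
        rectMax = max(rectMax, ndc);
    }
}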
A more sophisticated way can be used to compute a screen-space bounding rect for a point light source. We calculate four planes that pass through the camera position and are tangent to the light's sphere of illumination (the light radius). The intersection of each tangent plane with the image plane gives us a line on the image plane, and these 4 lines define the resulting bounding rectangle.
Refer to this article for math details: http://www.altdevblogaday.com/2012/03/01/getting-the-projected-extent-of-a-sphere-to-the-near-plane/

OpenGL/GLUT - Project ModelView Coordinate to Texture Matrix

Is there a way using OpenGL or GLUT to project a point from the model-view matrix into an associated texture matrix? If not, is there a commonly used library that achieves this? I want to modify the texture of an object according to a ray cast in 3D space.
The simplest case would be:
A ray is cast which intersects a quad, mapped with a single texture.
The point of intersection is converted to a value in texture space, clamped to [0.0, 1.0] on the x and y axes.
A 3x3 patch of pixels centered around the rounded value of the resulting texture point is set to an alpha value of 0 (or another RGBA value convenient for the desired effect).
To illustrate, here is a more complex version of the question using a sphere; the pink box shows the replaced pixels.
I just specify texture points for mapping in OpenGL; I don't actually know how the pixels are projected onto the sphere. Basically I need to do the inverse of that projection, but I don't quite know how to do that math, especially on more complex shapes like a sphere or an arbitrary convex hull. I assume that you can somehow find a planar polygon that makes up the shape which the ray is intersecting, and from there the inverse projection of a quad or triangle would be trivial.
Some equations, articles and/or example code would be nice.
There are a few ways you could accomplish what you're trying to do:
Project a world-coordinate point into normalized device coordinates (NDCs) by doing the model-view and projection transformation matrix multiplications yourself (or, if you're using old-style OpenGL, call gluProject), and perform the perspective division step. If you use a depth coordinate of zero, this corresponds to intersecting your ray at the imaging plane. The only other correction you'd need is to map from NDCs (which are in the range [-1,1] in x and y) into texture space by dividing the resulting coordinate by two and then shifting by 0.5.
Skip the ray tracing altogether, bind your texture as a framebuffer attachment to a framebuffer object, and then render a big point (or sprite) that modifies the colors in the neighborhood of the intersection as you want. You could use the same model-view and projection matrices, and will (probably) only need to update the viewport to match the texture resolution.
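A sketch of the first option in GLSL-style notation (names illustrative): map a world-space point to [0,1] texture coordinates with the same matrices used for rendering:

vec2 worldToTexCoord(mat4 modelView, mat4 projection, vec3 worldPoint)
{
    vec4 clip = projection * modelView * vec4(worldPoint, 1.0);
    vec2 ndc  = clip.xy / clip.w; // perspective division, range [-1,1]
    return ndc * 0.5 + 0.5;       // divide by two and shift by 0.5 -> [0,1]
}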
So I found a solution that is a little complicated, but does the trick.
For complex geometry you must determine which quad or triangle was intersected and use this as the plane. The quad must be planar (obviously).
Draw a plane at the identity matrix with dimensions 1x1x0, and map the texture onto points identical to the model geometry.
Transform the plane, and store the inverse of each transform matrix in a stack.
Find the point at which the plane is intersected.
Transform this point using the inverse matrix stack until it returns to the identity matrix (at that point it should have no depth).
Convert this point from 1x1 space into pixel space by multiplying it by the number of pixels and rounding. Or start your 2D combining logic here.
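A GLSL-style sketch of those last steps, collapsing the matrix stack into a single model matrix for brevity (inverse() needs GLSL 1.40+; on the CPU you would multiply by the stored inverse matrices one by one). It assumes the unit plane spans [-0.5, 0.5] in x and y; adjust the offset if yours spans [0, 1]:

vec2 intersectionToPixel(mat4 modelMatrix, vec3 hitPoint, vec2 textureSize)
{
    vec4 local = inverse(modelMatrix) * vec4(hitPoint, 1.0); // back onto the 1x1x0 plane (z ~ 0)
    vec2 uv    = local.xy + 0.5;                             // unit-plane point -> [0,1] texture space
    return floor(uv * textureSize + 0.5);                    // multiply by the pixel count and round
}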