OpenGL - Create a border over a textured polygon

I'm working with cocos2d-x 2.0.4. I illustrate what I am trying to do through these two images.
What I want to do is to create a blurred border, or a border with a gradient on it, programmatically. I have two ideas for how to do that, but I'm not sure they are the right way. The first solution would be to triangulate the polygon containing only the blurred color (a concave polygon with a hole, in this case) and render a color gradient on it: vertices on the outside of the polygon would be full alpha and vertices on the inside zero alpha, so the interpolation would produce the gradient.
The second solution would be to do it inside the shader itself. All I need is to calculate the distance from a pixel to the closest edge of the polygon. Then, below a certain threshold, I give the pixel a white color with an alpha value that depends on that distance (the shorter the distance, the higher the alpha).
Anyway, I am very new to OpenGL and I am afraid that the second solution will end up with a big processing time, as I have to calculate the distance for every pixel of the polygon. What do you think? Does anything confirm my guesses, or am I completely wrong on this?
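A minimal GLSL sketch of that second idea (the uniform layout, the MAX_EDGES bound, and all names here are assumptions, not from the original):
// Fragment-shader sketch: white border whose alpha falls off with the
// distance from the nearest polygon edge.
const int MAX_EDGES = 8;             // compile-time loop bound
uniform vec2 u_edges[2 * MAX_EDGES]; // edge endpoints in polygon space
uniform int u_edgeCount;
uniform float u_threshold;           // border width
varying vec2 v_pos;                  // fragment position in polygon space

float distToSegment(vec2 p, vec2 a, vec2 b) {
    vec2 ab = b - a;
    float t = clamp(dot(p - a, ab) / dot(ab, ab), 0.0, 1.0);
    return length(p - (a + t * ab));
}

void main() {
    float d = 1e6;
    for (int i = 0; i < MAX_EDGES; ++i)
        if (i < u_edgeCount)
            d = min(d, distToSegment(v_pos, u_edges[2 * i], u_edges[2 * i + 1]));
    float alpha = 1.0 - clamp(d / u_threshold, 0.0, 1.0); // shorter distance => higher alpha
    gl_FragColor = vec4(1.0, 1.0, 1.0, alpha);
}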
EDIT:
The solution I finally chose was to use the bisector of every angle of the polygon (easy to calculate from 3 consecutive vertices) and take a point on that bisector, which becomes a vertex of an inner polygon. Then I take alternately an outer polygon vertex and an inner polygon vertex to build an array of vertices that can be drawn with GL_TRIANGLE_STRIP. The image below illustrates this; a sketch of the construction follows.
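A minimal C++ sketch of that construction (the Vec2 type, inset distance, and convex-polygon assumption are mine; reflex angles would need extra care):
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

static Vec2 normalized(Vec2 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y);
    Vec2 r = { v.x / len, v.y / len };
    return r;
}

// Interleave each outer vertex with a point moved inward along the
// angle bisector, producing a GL_TRIANGLE_STRIP-ready vertex array.
void buildBorderStrip(const std::vector<Vec2>& outer, float inset,
                      std::vector<Vec2>& strip) {
    size_t n = outer.size();
    for (size_t i = 0; i <= n; ++i) { // <= n closes the loop
        Vec2 prev = outer[(i + n - 1) % n];
        Vec2 cur  = outer[i % n];
        Vec2 next = outer[(i + 1) % n];
        Vec2 toPrev = normalized({ prev.x - cur.x, prev.y - cur.y });
        Vec2 toNext = normalized({ next.x - cur.x, next.y - cur.y });
        // The bisector direction is the sum of the two edge directions.
        Vec2 bisector = normalized({ toPrev.x + toNext.x, toPrev.y + toNext.y });
        strip.push_back(cur);                            // outer vertex: full alpha
        strip.push_back({ cur.x + bisector.x * inset,
                          cur.y + bisector.y * inset }); // inner vertex: zero alpha
    }
}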

Will a rim lighting shader do what you want? Link to an example
Example code for a GLSL rim lighting shader:
const float rimStart = 0.5f;
const float rimEnd = 1.0f;
const float rimMultiplier = 1.0f; // note: 0.0f here would disable the rim entirely
vec3 rimColor = vec3(1.0f, 1.0f, 1.0f);
// 0 where the normal faces the camera, 1 where it is perpendicular to the view.
float NormalToCam = 1.0 - dot(normalize(outNormal), normalize(camPos - vertexWorldPos.xyz));
float rim = smoothstep(rimStart, rimEnd, NormalToCam) * rimMultiplier;
outColor.rgb += (rimColor * rim);

In order to make this look right from any viewpoint in a 3D scene, you will need to perform some silhouetting. This essentially involves using a geometry shader to determine which edges of an object have one adjacent face that is facing the screen and one adjacent face that is not. I believe this can be achieved by testing whether the dot product between one adjacent face normal and your camera direction is <= 0 while the dot product of the other adjacent face normal and your camera direction is > 0.
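A rough GLSL sketch of that test (names are illustrative; in practice this would sit in a geometry shader fed with adjacency primitives):
// True when one face adjacent to the edge faces the camera and the other does not.
// n0, n1: the two adjacent face normals; camDir: direction from the camera to the edge.
bool isSilhouetteEdge(vec3 n0, vec3 n1, vec3 camDir) {
    float d0 = dot(n0, camDir);
    float d1 = dot(n1, camDir);
    return (d0 <= 0.0 && d1 > 0.0) || (d0 > 0.0 && d1 <= 0.0);
}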
Once you know all the edges that outline your polygon at a certain angle, you can tessellate the polygon defined by that border into triangle strips (still in the geometry shader). Then you pass a color per vertex to your fragment shader: vertices lying on the border pass the border color at full alpha, and non-border vertices pass a color at zero alpha. The fragment shader will interpolate from the border color to the zero-alpha center color at intermediate fragments, giving you the gradient you want. Your overall approach should be something like this:
Draw object with non-border shader program as the background color.
Enable alpha blending.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA);
Draw the object with the silhouetting shader program, drawing the edges that make up the borders with the border color, and non-border points at zero alpha.
glDisable(GL_BLEND);

Related

Discard fragments of vertices drawn with gl_PointSize of 100, depending on distance to center

In a strict GLES 3.0 environment I draw vertices as GL_POINTS and set their gl_PointSize to 100, which renders nice 100x100 px points. But they are flat shaded:
Instead I want to render them as (perfect) circles in my shader.
For GL_TRIANGLE_STRIP I did this by calculating the distance between the quad center and the interpolated (between vertices) position, then discarding the fragment when that distance is bigger than the wanted radius.
That works fine for GL_TRIANGLE_STRIP, but not for GL_POINTS, because there is only one vertex; I would need two vertices to interpolate between. What I need instead is the fragment's position, so I could discard the fragment depending on its distance to the point's center.
Any idea how I could do this with GL_POINTS?
Switching to GL_TRIANGLE_STRIP or other primitives is not possible. Geometry shaders are also not available.
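For reference, a minimal GLSL ES 3.00 sketch of the same distance-based discard for points: the built-in gl_PointCoord gives the fragment's 0..1 position within the rasterized point, which can stand in for the interpolated position that GL_POINTS lacks:
#version 300 es
precision mediump float;
out vec4 fragColor;
void main() {
    // gl_PointCoord runs from (0,0) to (1,1) across the point.
    vec2 fromCenter = gl_PointCoord - vec2(0.5);
    if (length(fromCenter) > 0.5) // outside the inscribed circle
        discard;
    fragColor = vec4(1.0);        // flat white disc; shade as needed
}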

Can we input random values to render color on a cube using PyOpenGL?

I have a cube that can be rotated using mouse navigation in PyOpenGL. I want to create small sections on each face of the cube and render the different sections with different colors/illumination. It is like having a certain light source, with the cube considered as a room being illuminated by it. How do I set my desired values for each section? Is it possible to do so?
Extend your fragment shader to expect some kind of interpolated value (either add "face coordinates" to your vertices that go from 0 to 1 across each face, or transform your texture UV coordinates if you have them). Then you can use if-else inside the fragment shader.
i.e.
if (coords.x < 0.5 && coords.y < 0.5) { // one quarter of the face
    Lighting = ...;
    FragColor = ...;
} else if (...) {
    ...
}
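A concrete sketch of that idea (the quadrant split and the colors are placeholders, not values from the question):
#version 330 core
// Fragment shader: give each quarter of a face its own color.
// faceCoords is assumed to be a varying running 0..1 across the face.
in vec2 faceCoords;
out vec4 FragColor;
void main() {
    vec3 color;
    if (faceCoords.x < 0.5 && faceCoords.y < 0.5)
        color = vec3(1.0, 0.0, 0.0); // one quarter of the face
    else if (faceCoords.x >= 0.5 && faceCoords.y < 0.5)
        color = vec3(0.0, 1.0, 0.0);
    else if (faceCoords.x < 0.5)
        color = vec3(0.0, 0.0, 1.0);
    else
        color = vec3(1.0, 1.0, 0.0);
    FragColor = vec4(color, 1.0);
}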
You could use a geometry shader and some exponential function to distribute the colors across each face. If I understand correctly, you want something like this:
In this case you pass the colors you want as vertex attributes for the face, and in the geometry shader you compute the distance from each face vertex to the face's center. You then pass that distance to the fragment shader, where the interpolated value is available per fragment. A sketch follows.
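A sketch of such a geometry shader (assuming plain triangles; all names are mine):
#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;
in vec3 vColor[];        // per-vertex color attribute, passed through
in vec3 vWorldPos[];     // world-space positions from the vertex shader
out vec3 gColor;
out float gDistToCenter; // interpolated per fragment
void main() {
    // The face center is the average of the triangle's three vertices.
    vec3 center = (vWorldPos[0] + vWorldPos[1] + vWorldPos[2]) / 3.0;
    for (int i = 0; i < 3; ++i) {
        gl_Position = gl_in[i].gl_Position;
        gColor = vColor[i];
        gDistToCenter = distance(vWorldPos[i], center);
        EmitVertex();
    }
    EndPrimitive();
}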

Bypass classical deferred shading light volumes

I would like to "bypass" the classical light volume approach of deferred lighting.
Usually, when you want to affect pixels within a pointlight volume, you can simply render a sphere mesh.
I would like to try another way to do that. The idea is to render a cube that encompasses the sphere; the cube circumscribes the sphere, so the center of each face is a point on the sphere. Then you only have to know, from your point of view, which fragments would be part of the circle (the sphere as seen on your screen) if you had rendered the sphere instead.
So the main problem is to know which fragment will have to be discarded.
How could I do that?
In the fragment shader, I have my "camera" world coordinates, my fragment world coordinates, my sphere world center, and my sphere radius.
Thus I have the straight line whose direction vector is defined by the camera and fragment world positions.
And I can build my sphere equation.
Finally, I can know if the line intersects the sphere.
Is it correct to say that, from my point of view, if the line intersects the sphere, this fragment must be considered a highlighted fragment (a fragment that would have been rendered had I rendered a sphere instead)?
Also, the check "length(fragment - sphereCenter) <= sphereRadius" doesn't really mean anything here, because the fragment is not on the sphere.
So what?
The standard deferred shading solution for lights is to render a full-screen quad. The purpose of rendering a sphere instead is to avoid doing a bunch of per-fragment calculations for fragments which are outside of the light source's effect. This means that the center of that sphere is the light source, and its radius represents the maximum distance for which the source has an effect.
So the length from the fragment (that is, reconstructed from your g-buffer data, not the fragment produced by the cube) to the sphere's center is very much relevant. That's the length between the fragment and the light source. If that is larger than the sphere radius (AKA: maximum reach of the light), then you can cull the fragment.
Or you can just let your light attenuation calculations do the same job. After all, in order for lights to not look like they are being cropped, that sphere radius must also be used with some form of light attenuation. That is, when a fragment is at that distance, the attenuation of the light must be either 0 or otherwise negligibly small.
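For instance, one common attenuation shape that reaches exactly zero at the sphere radius (a sketch, not the answer's own formula):
// Falls smoothly from 1 at the light to 0 at 'radius', and stays 0 beyond it.
float falloff(float dist, float radius) {
    float x = clamp(1.0 - (dist * dist) / (radius * radius), 0.0, 1.0);
    return x * x;
}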
As such... it doesn't matter if you're rendering a sphere, cube, or a full-screen quad. You can either cull the fragment or let the light attenuation do its job.
However, if you want to possibly save performance by discarding the fragment before reading any of the g-buffers, you can do this. Assuming you have access to the camera-space position of the sphere/cube's center in the FS:
Convert the position of the cube's fragment into camera-space. You can do this by reverse-transforming gl_FragCoord, but it'd probably be faster to just pass the camera-space position to the fragment shader. It's not like your VS is doing a lot of work or anything.
Because the camera-space position is in camera space, it already represents a direction from the camera into the scene. So now, use this direction to perform part of ray/sphere intersection. Namely, you stop once you compute the discriminant (to avoid an expensive square-root). The discriminant is:
// Ray from the eye through the fragment: P(t) = t * cam_position.
// Solve |P(t) - cam_sphere_center|^2 = radius^2 for t; only the
// discriminant is needed to know whether an intersection exists.
float A = dot(cam_position, cam_position);
float B = -2.0 * dot(cam_position, cam_sphere_center);
float C = dot(cam_sphere_center, cam_sphere_center) - (radius * radius);
float Discriminant = (B * B) - 4.0 * A * C;
If the discriminant is negative, discard the fragment. Otherwise, do your usual stuff.

Smooth Normals On Pyramid Corners

So, these are my normals for a generated mesh, contrast boosted in gimp to make them easier to see:
The mesh is a pyramid with a flat top. All of the normals are smoothed appropriately by averaging them with the weighted surrounding face normals, and that works as expected.
However, as you can see, there are very noticeable seams wherever there are flat surfaces. With only diffuse lighting these are barely noticeable, but with specular they look hideous.
How can I get rid of these? My first thought was to replace all of the 6-vertex tiles with 12-vertex tiles so that they would all be the same. However, that would of course double the size of the mesh. Is there any other way to do what I'm after?
EDIT: All of the corners have their triangles laid out to fit over their respective corners; all flat surfaces are split along the NE/SW diagonal.
Draw the normals as lines from their vertices to see what is actually happening.
Just draw a line for each vertex V and its corresponding normal N:
double V[3],N[3],tmp[3];
for (int i=0;i<3;i++) tmp[i]=V[i]+0.3*N[i]; // 0.3 scales the line length ...
glColor3f(0.0,0.5,0.0);
glBegin(GL_LINES);
glVertex3dv(V);
glVertex3dv(tmp); // endpoint is vertex + scaled normal, not the raw normal
glEnd();
This way you can easily check the correctness of the normals visually:
there should be a single normal line per vertex in smooth areas;
if there are more, that is your problem.
For example, this is how it should look:
the green lines are the normals;
the triangle surface is generated from a Bezier surface;
the normals are computed by cross product, then smoothed by averaging;
the left image is wireframe + normals,
the middle image is surface + normals,
and the right is just the surface.
I use this normal averaging
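For illustration, a hedged C++ sketch of such averaging (a uniform average of the normals of the faces sharing each vertex; a real implementation might weight by face area or angle):
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

// Average the normals of all faces that share each vertex.
// faces: index triples into the vertex array; faceNormals: one normal per face.
std::vector<Vec3> averageNormals(size_t vertexCount,
                                 const std::vector<unsigned>& faces,
                                 const std::vector<Vec3>& faceNormals) {
    std::vector<Vec3> out(vertexCount, Vec3{0, 0, 0});
    for (size_t f = 0; f < faces.size() / 3; ++f)
        for (int k = 0; k < 3; ++k) {
            Vec3& n = out[faces[3 * f + k]];
            n.x += faceNormals[f].x;
            n.y += faceNormals[f].y;
            n.z += faceNormals[f].z;
        }
    for (Vec3& n : out) { // renormalize the accumulated sums
        double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        if (len > 0) { n.x /= len; n.y /= len; n.z /= len; }
    }
    return out;
}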

Antialiased GLSL impostors

If you draw a sphere using an impostor based ray-tracing approach as described for example here
http://www.arcsynthesis.org/gltut/Illumination/Tutorial%2013.html
you typically draw a quad and then use 'discard' to skip pixels whose distance from the quad center is larger than the sphere radius.
When you turn on anti-aliasing, multisampling will anti-alias the border of the primitive you draw - in this case the quad - but not the border between drawn and discarded pixels.
I have attached two screenshots displaying the sphere and a blow-up of its border. Except for the top-most pixels, which lie on the quad border, the sphere border has clearly not been anti-aliased.
Is there any trick I can use to make the impostor spheres have a nice anti-aliased border?
Instead of just discarding the pixel, give your sphere an inner and an outer radius.
Everything inside the inner radius is fully opaque, everything outside the outer radius is discarded, and anything in between is linearly interpolated between alpha 1 and 0:
float alpha = 1.0 - clamp((dist - inner) / (outer - inner), 0.0, 1.0); // dist: distance from the quad center
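Put together as a fragment-shader sketch (distFromCenter, the radii, and sphereColor are assumed names; smoothstep is used here in place of the linear ramp):
// Soft-edged impostor: opaque inside 'inner', faded to zero by 'outer'.
float distFromCenter = length(quadPos); // quadPos: interpolated position on the quad
if (distFromCenter > outer)
    discard;
float alpha = 1.0 - smoothstep(inner, outer, distFromCenter);
gl_FragColor = vec4(sphereColor, alpha);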
Kneejerk reaction would be to multisample it yourself: render to a texture that is e.g. four times as large as your actual output, then generate mipmaps and render from that texture back onto your screen.
Alternatively do that directly in your shader and let OpenGL continue worrying about geometry edges: sample four rays per pixel and average them.
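A sketch of that in-shader averaging (four coverage samples per fragment; quadPos, pixelSize, and the offsets are assumptions):
// Average the inside/outside sphere test over four sub-pixel offsets.
float coverage = 0.0;
vec2 offsets[4];
offsets[0] = vec2(-0.25, -0.25);
offsets[1] = vec2( 0.25, -0.25);
offsets[2] = vec2(-0.25,  0.25);
offsets[3] = vec2( 0.25,  0.25);
for (int i = 0; i < 4; ++i) {
    vec2 p = quadPos + offsets[i] * pixelSize; // pixelSize: quad units per screen pixel
    if (length(p) <= sphereRadius)
        coverage += 0.25;
}
gl_FragColor = vec4(sphereColor, coverage);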