Discard fragments of vertices drawn with gl_PointSize of 100, depending on distance to center - glsl

In a strict GLES 3.0 environment I draw vertices as GL_POINTS and set their gl_PointSize to 100, which renders nice 100x100 px points. But they are flat shaded:
Instead I want to render them as (perfect) circles in my shader.
For GL_TRIANGLE_STRIP I did this by calculating the distance between the flat-shaded quad center and the position interpolated between the vertices, and then discarding the fragment when that distance is bigger than the wanted radius.
This works fine for GL_TRIANGLE_STRIP, but not for GL_POINTS, because there is only one vertex and I would need two vertices to interpolate between. What I would need instead is the fragment's position, so I could discard the fragment depending on its distance to the point's center position.
Any idea how I could do this with GL_POINTS?
Switching to GL_TRIANGLE_STRIP or other primitives is not possible. Geometry shaders are also not available.
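For reference, a minimal sketch of the distance-based discard described above for the GL_TRIANGLE_STRIP case (the varying, uniform, and output names are illustrative assumptions, not actual code):
#version 300 es
precision mediump float;
// Assumed inputs: vPosition is the interpolated position, vCenter is the
// quad center passed unchanged from every vertex, uRadius is the circle radius.
in vec2 vPosition;
in vec2 vCenter;
uniform float uRadius;
out vec4 fragColor;
void main() {
    // Discard everything outside the circle inscribed in the quad.
    if (distance(vPosition, vCenter) > uRadius) discard;
    fragColor = vec4(1.0); // flat white circle; replace with your shading
}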

How do I detect triangle edge and access the two vertices that form it?

I've seen other questions about drawing fragments only on triangle edges using barycentric coordinates, but I need more than that, and I wonder whether there is another approach.
This is basically a shadow map render and I want to write some additional results to the FBO color attachment (namely the plane equation formed by the light origin and the edge vertices).
I can easily do this via a geometry shader converting triangles to lines, but it's not pixel-exact with the triangle edge, and it also causes depth fighting that I can't accept.
I was hoping for a fragment shader trick that would let me render triangles and still access the edge vertex coordinates there.
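For reference, the barycentric-coordinate edge test mentioned above is usually done roughly like this (a sketch only; it assumes every triangle vertex carries a barycentric attribute of (1,0,0), (0,1,0) or (0,0,1)):
#version 300 es
precision mediump float;
in vec3 vBarycentric;     // interpolated barycentric coordinate of the fragment
uniform float uEdgeWidth; // edge thickness in barycentric units
out vec4 fragColor;
void main() {
    // A fragment lies near a triangle edge when any barycentric component is near zero.
    float minBary = min(min(vBarycentric.x, vBarycentric.y), vBarycentric.z);
    if (minBary > uEdgeWidth) discard; // keep only fragments close to an edge
    fragColor = vec4(1.0);
}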

OpenGL: Mapping texture on a sphere using spherical coordinates

I have a texture of the earth which I want to map onto a sphere.
As it is a unit sphere and the model itself has no texture coordinates, the easiest thing I could think of is to just calculate spherical coordinates for each vertex and use them as texture coordinates.
textureCoordinatesVarying = vec2(atan(modelPositionVarying.y, modelPositionVarying.x) / (2.0 * M_PI) + 0.5, acos(modelPositionVarying.z / length(modelPositionVarying.xyz)) / M_PI);
When doing this in the fragment shader, this works fine, as I calculate the texture coordinates from the (interpolated) vertex positions.
But when I do this in the vertex shader, which I would also do if the model itself had texture coordinates, I get the result shown in the image below. The vertices are shown as points; vertices with a texture coordinate (u) lower than 0.5 are red, while all others are blue.
So it looks like the texture coordinates (u) of two adjacent red/blue vertices have values of (almost) 1.0 and 0.0. The varying is then smoothly interpolated and therefore yields values somewhere between 0.0 and 1.0. This of course is wrong, because the value should be either 1.0 or 0.0, but nothing in between.
Is there a way to work with spherical coordinates as texture coordinates without getting those effects shown above? (if possible, without changing the model)
This is a common problem. At the seam between two texture coordinate topologies, where you want the texture coordinate to wrap seamlessly from 1.0 to 0.0, the mesh has to handle this explicitly. To do this, the mesh must duplicate every vertex along the seam. One of the duplicates will have a 0.0 texture coordinate and will be connected to the vertices coming from the right (in your example). The other will have a 1.0 texture coordinate and will be connected to the vertices coming from the left (in your example).
This is a mesh problem, and it is best to solve it in the mesh itself. The same position needs two different texture coordinates, so you must duplicate the position in question.
Alternatively, you could have the fragment shader generate the texture coordinate from an interpolated vertex normal. Of course, this is more computationally expensive, as it requires doing a conversion from a direction to a pair of angles (and then to the [0, 1] texture coordinate range).
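A rough sketch of that alternative (assuming an interpolated normal varying and a texture sampler; this is not the poster's actual code):
#version 300 es
precision highp float;
#define M_PI 3.14159265358979
in vec3 vNormal;              // interpolated normal; equals the position on a unit sphere
uniform sampler2D uEarthTex;
out vec4 fragColor;
void main() {
    vec3 n = normalize(vNormal);
    // Direction -> spherical angles -> [0, 1] texture coordinates, per fragment.
    vec2 uv = vec2(atan(n.y, n.x) / (2.0 * M_PI) + 0.5, acos(n.z) / M_PI);
    fragColor = texture(uEarthTex, uv);
}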

Smooth Normals On Pyramid Corners

So, these are my normals for a generated mesh, contrast boosted in gimp to make them easier to see:
The mesh is a pyramid with a flat top. All of the normals are smoothed appropriately by averaging them with the weighted surrounding face normals, and that works as expected.
However, as you can see, there are very noticeable seams wherever there are flat surfaces. With only diffuse lighting these are barely noticeable, but with specular they look hideous.
How can I get rid of these? My first thought was to replace all of the 6-vertex tiles with 12-vertex tiles, so that they would all be the same. However, that would of course double the size of the mesh. Is there any other way to do what I'm after?
EDIT: All of the corners have the triangles laid out to fit over their respective corners; all flat surfaces are split along the NE/SW diagonal.
Draw the normals as lines from their vertices to actually see what is happening.
Just draw a line for each vertex V and its corresponding normal N:
double V[3], N[3], tmp[3];                              // vertex position, unit normal, line end point
for (int i = 0; i < 3; i++) tmp[i] = V[i] + 0.3 * N[i]; // 0.3 is the line length ...
glColor3f(0.0, 0.5, 0.0);
glBegin(GL_LINES);
glVertex3dv(V);
glVertex3dv(tmp);                                       // end point = vertex + scaled normal
glEnd();
This way you can easily check visually whether the normals are correct.
There should be a single normal line per vertex on smooth areas.
If there are more, then there is your problem.
For example, this is how it should look:
Green lines are the normals.
The triangle surface is generated by a Bezier surface.
Normals are computed by cross product and then smoothed (as in bullet 2).
The left image is wireframe + normals,
the middle image is surface + normals,
and the right image is just the surface.
I use this normal averaging

Antialiased GLSL impostors

If you draw a sphere using an impostor based ray-tracing approach as described for example here
http://www.arcsynthesis.org/gltut/Illumination/Tutorial%2013.html
you typically draw a quad and then use 'discard' to skip pixels that have a distance from the quad center larger than the sphere radius.
When you turn on anti-aliasing, OpenGL will anti-alias the border of the primitive you draw - in this case the quad - but not the border between the drawn and discarded pixels.
I have attached two screenshots displaying the sphere and a blow-up of its border. Except for the top-most pixels, which lie on the quad border, the sphere border has clearly not been anti-aliased.
Is there any trick I can use to make the impostor spheres have a nice anti-aliased border?
Best regards,
Mads
Instead of just discarding the pixel, give your sphere an inner and an outer radius.
Everything inside the inner radius is fully opaque, everything outside the outer radius is discarded, and anything in between is linearly interpolated between 0 and 1 alpha values.
float alpha = clamp((outer - dist) / (outer - inner), 0.0, 1.0); // dist = distance from the impostor center; 1.0 at the inner radius, 0.0 at the outer radius
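Putting that together, a fragment-shader sketch could look like this (dist, quadCoord, center, inner, outer, sphereColor and fragColor are assumed names):
float dist = length(quadCoord - center); // fragment's distance from the impostor center
if (dist > outer) discard;               // fully outside the sphere
// 1.0 inside the inner radius, fading linearly to 0.0 at the outer radius.
float alpha = clamp((outer - dist) / (outer - inner), 0.0, 1.0);
fragColor = vec4(sphereColor, alpha);    // requires alpha blending to be enabled
Keeping the fade region roughly one pixel wide makes it read as anti-aliasing rather than as a blurred edge.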
A kneejerk reaction would be to multisample it yourself: render to a texture that is, e.g., four times as large as your actual output, then make sure you generate mipmaps and render from that texture back onto the screen.
Alternatively, do that directly in your shader and let OpenGL keep worrying about geometry edges: sample four rays per pixel and average them.
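A sketch of that in-shader variant, assuming a hypothetical sphereCoverage() helper that returns 1.0 where a sub-sample ray hits the sphere and 0.0 where it misses, and a uPixelSize uniform holding the size of one pixel in quad space:
float coverage = 0.0;
coverage += sphereCoverage(quadCoord + uPixelSize * vec2(-0.25, -0.25));
coverage += sphereCoverage(quadCoord + uPixelSize * vec2( 0.25, -0.25));
coverage += sphereCoverage(quadCoord + uPixelSize * vec2(-0.25,  0.25));
coverage += sphereCoverage(quadCoord + uPixelSize * vec2( 0.25,  0.25));
fragColor = vec4(sphereColor, coverage * 0.25); // average of four sub-pixel samples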

OpenGL - Create a border over a textured polygon

I'm working with cocos2d-x 2.0.4. I illustrate what I am trying to do through these two images.
What I want to do is create a blurred border, or a border with a gradient, programmatically. I have two ideas for this, but I'm not sure if they are the correct way to do it. The first solution would be to triangulate the polygon containing only the blurred color (a concave polygon with a hole in this case) and render a color gradient on it: vertices on the outside of the polygon would be full alpha and vertices on the inside zero alpha. The interpolation would then do the job of the gradient.
The second solution would be to do it inside the shader itself. All I need is to calculate the distance between a pixel and the closest edge of the polygon. Then, under a certain threshold, I tint the pixel white with an alpha value that depends on that distance (the shorter the distance, the higher the alpha).
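In shader terms, the second idea boils down to roughly this (a sketch only; edgeDist, texColor and uThreshold are assumed inputs, e.g. an edge distance baked into a vertex attribute or a texture):
// uThreshold is the border width, edgeDist the fragment's distance to the closest edge.
float alpha = 1.0 - clamp(edgeDist / uThreshold, 0.0, 1.0); // shorter distance => larger alpha
fragColor = vec4(mix(texColor.rgb, vec3(1.0), alpha), texColor.a); // blend white over the texture near the edge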
Anyway, I am very new to OpenGL and I am afraid that the second solution will end up costing a lot of processing time, as I have to calculate the distance for every pixel of the polygon. What do you think? Any ideas that confirm my guesses, or am I completely wrong about this?
EDIT:
The solution I finally chose was to use the bisector of every angle in the polygon (easy to calculate with 3 consecutive vertices) and take a point on that bisector that becomes a vertex of the inner polygon. Then I take alternately an outer polygon vertex and an inner polygon vertex to build an array of vertices suitable for GL_TRIANGLE_STRIP. I put the image below to make it easier to understand.
Will a rim lighting shader do what you want? Link to an example
Example code for a GLSL rim lighting shader:
const float rimStart = 0.5;
const float rimEnd = 1.0;
const float rimMultiplier = 1.0;   // 0.0 disables the rim entirely; tune to taste
vec3 rimColor = vec3(1.0, 1.0, 1.0);
// The rim term grows as the surface normal turns away from the camera.
float NormalToCam = 1.0 - dot(normalize(outNormal), normalize(camPos - vertexWorldPos.xyz));
float rim = smoothstep(rimStart, rimEnd, NormalToCam) * rimMultiplier;
outColor.rgb += rimColor * rim;
In order to make this look right from any viewpoint in a 3D scene you will need to perform some silhouetting. This essentially involves using a geometry shader to determine what edges of an object have an adjacent face that is facing the screen and an adjacent face that is not facing the screen. I believe this can be achieved by testing if the dot product between one adjacent face normal and your camera direction is <= 0 while the dot product of the other adjacent face normal and your camera direction is > 0.
Once you know all the edges that outline your polygon at a certain angle, you can tessellate the polygon defined by that border into triangle strips (still in the geometry shader). Then you pass a color per vertex to your fragment shader, where all vertices lying on the border pass the border color at full alpha and non-border points pass a color at zero alpha. The fragment shader will interpolate from the border color to the zero-alpha center color at intermediate fragments, giving you the gradient you want. Your overall approach should be something like this:
Draw the object with the non-border shader program as the background color.
Enable alpha blending:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Draw the object with the silhouetting program, drawing the edges that make up the border in the border color and non-border points with zero alpha.
glDisable(GL_BLEND);