opengl calculate normal for face - c++

I am attempting to load models exported from Blender into OpenGL. In particular, I followed the source code from this tutorial to help me get started. Because the loader is fairly simple, it only reads in the vertex coordinates and face indices, ignoring the normals and texture coordinates.
It then calculates the normal for each face:
float coord1[3] = { Faces_Triangles[triangle_index], Faces_Triangles[triangle_index+1],Faces_Triangles[triangle_index+2]};
float coord2[3] = {Faces_Triangles[triangle_index+3],Faces_Triangles[triangle_index+4],Faces_Triangles[triangle_index+5]};
float coord3[3] = {Faces_Triangles[triangle_index+6],Faces_Triangles[triangle_index+7],Faces_Triangles[triangle_index+8]};
float *norm = this->calculateNormal( coord1, coord2, coord3 );
float* Model_OBJ::calculateNormal( float *coord1, float *coord2, float *coord3 )
{
/* calculate Vector1 and Vector2 */
float va[3], vb[3], vr[3], val;
va[0] = coord1[0] - coord2[0];
va[1] = coord1[1] - coord2[1];
va[2] = coord1[2] - coord2[2];
vb[0] = coord1[0] - coord3[0];
vb[1] = coord1[1] - coord3[1];
vb[2] = coord1[2] - coord3[2];
/* cross product */
vr[0] = va[1] * vb[2] - vb[1] * va[2];
vr[1] = vb[0] * va[2] - va[0] * vb[2];
vr[2] = va[0] * vb[1] - vb[0] * va[1];
/* normalization factor */
val = sqrt( vr[0]*vr[0] + vr[1]*vr[1] + vr[2]*vr[2] );
float *norm = new float[3]; /* heap-allocated; returning a local array would leave a dangling pointer (the caller must delete[] it) */
norm[0] = vr[0]/val;
norm[1] = vr[1]/val;
norm[2] = vr[2]/val;
return norm;
}
I have 2 questions.
How do I know if the normal is facing inwards or outwards? Is there some ordering of the vertices in each row in the .obj file that gives indication how to calculate the normal?
In the initialization function, he uses GL_SMOOTH. Is this incorrect, since I need to provide normals for each vertex if using GL_SMOOTH instead of GL_FLAT?

Question 1
glFrontFace determines winding order
Winding order means the order in which a set of vertices must appear for a face to be considered front-facing. Consider the triangle below. Its vertices are defined clockwise. If we told OpenGL glFrontFace(GL_CW) (i.e. that clockwise means front face), then its normal would essentially have to stick right out of the screen towards you in order to be considered "outward".
On a side note, counter-clockwise is the default and what you should stick with.
No matter what, you should really define normals, especially if you want to do any lighting in your scene, since they are used in the lighting calculations. glFrontFace just lets you tell OpenGL how to interpret which side of a polygon is the front.
In the above example, if we told OpenGL that we define faces counter-clockwise, and we also enabled face culling (glEnable(GL_CULL_FACE)) with glCullFace set to GL_BACK, then our triangle wouldn't show up, because we would be looking at the back of it and we told OpenGL not to show the backs of polygons.
You can read more about face culling here: Face Culling.
Wavefront .obj has support for declaring normals in a file if you don't want to create them yourself. Just make sure your exporter adds them.
Additionally, Wavefront wants each vertex to have a normal defined:
f v1//vn1 v2//vn2 v3//vn3 ...
Where vN is the vertex of the f face and vnN is the normal of the vertex. By providing a normal for each vertex, you achieve a smoother looking surface than you would by defining a normal per face or by setting all of the normals of vertexes of the same face to be the same. Take a look at this question to see the difference you can make on a sphere: OpenGL: why do I have to set a normal with glNormal?
If your .obj file doesn't have normals defined, I would use the face definition order and cross two edges of the defined face. Consider the method used here: Calculating a Surface Normal
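If you end up computing them yourself, here is a minimal sketch of that cross-product approach (assuming counter-clockwise winding as seen from the outside; the function name is mine):
#include <cmath>
// Outward unit normal of a triangle whose vertices v1, v2, v3 are listed
// counter-clockwise when viewed from the front/outside (right-hand rule).
void faceNormal(const float v1[3], const float v2[3], const float v3[3], float out[3])
{
    float a[3] = { v2[0] - v1[0], v2[1] - v1[1], v2[2] - v1[2] }; // edge v1 -> v2
    float b[3] = { v3[0] - v1[0], v3[1] - v1[1], v3[2] - v1[2] }; // edge v1 -> v3
    out[0] = a[1] * b[2] - a[2] * b[1];                            // cross(a, b)
    out[1] = a[2] * b[0] - a[0] * b[2];
    out[2] = a[0] * b[1] - a[1] * b[0];
    float len = std::sqrt(out[0] * out[0] + out[1] * out[1] + out[2] * out[2]);
    if (len > 0.0f) { out[0] /= len; out[1] /= len; out[2] /= len; } // skip degenerate triangles
}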
Edit
I think I may have been a little confusing. The front face of a polygon is only loosely related to its normals. Normals are only really used for lighting calculations. You don't have to have them, but they are one of the main inputs used in calculating how lit your object is.
I am explaining the "front-faced-ness" of a polygon at the same time because it sort of makes sense, when talking about convex polygons, that your normal would stick out of the "front" of your triangle with respect to the shape you are making.
If you created a huge cave, or if your camera were to mostly reside inside of some concave shape, then it would make sense to have your normals point inwards since your light sources are probably going to want to bounce off of the inside of your shape.
Question 2
GL_SMOOTH is one of the shading models you select with glShadeModel
GL_SMOOTH means smooth shading, where color is interpolated between the vertices, while GL_FLAT means flat shading, where only one color is used for the whole primitive. Typically, you'll use the default value, GL_SMOOTH.
You don't have to define normals for each vertex in either case. However, if you want GL_SMOOTH to look "good", you'll probably want to, since the lit color is then interpolated across the primitive from every vertex rather than taken from a single one.
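As a rough fixed-function sketch (it assumes a current GL context and lighting already enabled; the helper name is mine) of feeding one normal per vertex so GL_SMOOTH has something to interpolate:
#include <GL/gl.h>
// With GL_SMOOTH the lit color is interpolated across the triangle, so each
// vertex can carry its own (e.g. averaged) normal. With GL_FLAT only one
// color per primitive is used, so per-vertex normals gain you little.
void drawSmoothTriangle(const float v[3][3], const float n[3][3])
{
    glShadeModel(GL_SMOOTH);
    glBegin(GL_TRIANGLES);
    for (int i = 0; i < 3; ++i) {
        glNormal3fv(n[i]); // per-vertex normal
        glVertex3fv(v[i]);
    }
    glEnd();
}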
Also, bear in mind that all of this goes out the window whenever you leave the fixed-function pipeline and start using shaders.

Related

Converting an equiangular cubemap to an equirectangular one

I am making a retro-style game with OpenGL, and I want to draw my own cubemaps for it. Here is an example of one:
As you can tell, there is no perspective warping anywhere; each face is fully equiangular. When using this as a cubemap, the result is this:
As you can see, it looks box-y, and not spherical at all. I know of a solution to this, which is to remap each point on the cubemap to a sphere position. I have done this manually by creating a sphere mesh and mapping the cubemap texture onto it (and then rendering that to an environment map), but this is time-consuming and complicated.
I seek a different solution: in my fragment shader, I hope to remap the sampling ray to a sphere position, instead of a cube position. Here is my original fragment shader, without any changes:
#version 400 core
in vec3 cube_edge;
out vec3 color;
uniform samplerCube skybox_sampler;
void main(void) {
color = texture(skybox_sampler, cube_edge).rgb;
}
I can get a ray that maps to the sphere by just normalizing cube_edge, but that doesn't change anything, for some reason. After messing around a bit, I tried this mapping, which almost works, but not quite:
vec3 sphere_edge = vec3(cube_edge.x, normalize(cube_edge).y, cube_edge.z);
As you can see, some faces become spherical in nature, whereas the top face warps inwards, instead of outwards.
I also tried the results from this site: http://mathproofs.blogspot.com/2005/07/mapping-cube-to-sphere.html, but the faces were not curved outwards enough.
I have been stuck on this for so long now - if you know how I can change my cube to sphere mapping in my fragment shader, or if that's even possible, please let me know!
As you can tell, there is no perspective warping anywhere; each face is fully equiangular.
This premise is incorrect. You hand-drew some images; this doesn't make them equiangular.
'Equiangular cubemap' (EAC) specifically means a cubemap remapped by this formula (section 2.4):
u = 4/pi * atan(u)
v = 4/pi * atan(v)
Let's recognize first that the term is misleading, because even though EAC aims at reducing the variation in sampling rate, the sampling rate is not constant. In fact no 2d projection of any part of a sphere can truly be equi-angular; this is a mathematical fact.
Nonetheless, we can try to apply this correction. Implemented in GLSL fragment shader as:
d /= max(abs(d.x), max(abs(d.y), abs(d.z)));
d = atan(d)/atan(1);
gives the following result:
Compare it with the uncorrected d:
As you can see the EAC projection shrinks the pixels in the middle by a little bit, and expands them near the corners, so that they cover more equal area.
Instead, it appears that you want a cylindrical projection around the horizon. It can be implemented like so:
d /= length(d.xy);
d.xy /= max(abs(d.x), abs(d.y));
d.xy = atan(d.xy)/atan(1);
Which gives the following result:
However there's no artifact-free way to fit the top/bottom square faces of the cube onto the circular faces of the cylinder -- which is why you see the artifacts there.
Bottom-line: you cannot fit the image that you drew onto a sphere in a visually pleasing way. You should instead re-focus your effort on alternative ways of authoring your environment map. I recommend you try using an equidistant cylindrical projection for the horizon, cap it with solid colors above/below a fixed latitude, and use billboards for objects that cannot be represented in that projection.
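If you do go with an equidistant cylindrical (equirectangular) panorama for the horizon, the lookup from a view direction is just longitude/latitude; here is a minimal CPU-side sketch of that mapping under an axis convention of my choosing (a fragment-shader version is a direct transliteration):
#include <cmath>
// Maps a (not necessarily normalized) view direction to UVs in an
// equidistant cylindrical (equirectangular) panorama: u = longitude, v = latitude.
void dirToEquirectUV(float x, float y, float z, float& u, float& v)
{
    const float pi = 3.14159265358979f;
    float lon = std::atan2(x, -z);                       // [-pi, pi]; the reference axis is arbitrary
    float lat = std::atan2(y, std::sqrt(x * x + z * z)); // [-pi/2, pi/2]
    u = lon / (2.0f * pi) + 0.5f;                        // [0, 1]
    v = lat / pi + 0.5f;                                 // [0, 1]
}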
Your problem is that the geometry on which the environment is placed is too small. You are not looking at the environment but at the inside of a small cube in which you are sitting. The environment map should behave as if you are always in the center of the map and the environment is infinitely far away.

I suggest drawing the environment map on the far plane of the viewing frustum. You can do this by setting the z component of the clip-space position equal to the w component in the vertex shader. If you set z to w, you guarantee that the final z value of the position will be 1.0, which is the z value of the far plane. (You can do that with swizzling: gl_Position = clipPos.xyww.)

It is quite sufficient to draw a cube and wrap the environment onto it by looking up the map with the interpolated vertex positions of the cube. In the case of a samplerCube, the 3-dimensional texture coordinate is treated as a direction vector, so you can use the vertex coordinates of the cube directly to look up the texture.
Vertex shader:
cube_edge = inVertex.xyz;
vec4 clipPos = projection * view * vec4(inVertex.xyz, 1.0);
gl_Position = clipPos.xyww;
Fragment shader:
color = texture(skybox_sampler, cube_edge).rgb;
The solution is also explained in detail at LearnOpenGL - Cubemap.
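On the C++ side, the only extra state this trick needs is a depth test that accepts the far plane. A hedged sketch with assumed handle names (program, cubemap texture and unit-cube VAO are created elsewhere):
#include <GL/glew.h>
// Draw the skybox after the scene: its clip-space z was forced to w, so its
// depth is exactly 1.0 and GL_LEQUAL lets it pass only where nothing closer
// has been drawn.
void drawSkybox(GLuint program, GLuint cubemapTexture, GLuint cubeVao)
{
    glDepthFunc(GL_LEQUAL);
    glUseProgram(program);
    glBindVertexArray(cubeVao);
    glBindTexture(GL_TEXTURE_CUBE_MAP, cubemapTexture);
    glDrawArrays(GL_TRIANGLES, 0, 36); // the 12 triangles of a unit cube
    glDepthFunc(GL_LESS);              // restore the default
}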

Matrix calculations for gpu skinning

I'm trying to do skeletal animation in OpenGL using Assimp as my model import library.
What exactly do I need to do with the bones' offsetMatrix variable? What do I need to multiply it by?
Let's take for instance this code, which I used to animate characters in a game I worked on. I used Assimp too to load the bone information, and I read the OGL tutorial already pointed out by Nico myself.
glm::mat4 getParentTransform()
{
if (this->parent)
return parent->nodeTransform;
else
return glm::mat4(1.0f);
}
void updateSkeleton(Bone* bone) // call this with the root bone; it recurses through the children
{
bone->nodeTransform = bone->getParentTransform() // this retrieves the transformation one level above in the tree
* bone->transform //bone->transform is the assimp matrix assimp_node->mTransformation
* bone->localTransform; //this is your T * R matrix
bone->finalTransform = inverseGlobal // which is scene->mRootNode->mTransformation from assimp
* bone->nodeTransform //defined above
* bone->boneOffset; //which is ai_mesh->mBones[i]->mOffsetMatrix
for (int i = 0; i < bone->children.size(); i++) {
updateSkeleton (&bone->children[i]);
}
}
Essentially the GlobalTransform, as it is referred to in the tutorial Skeletal Animation with Assimp, or more properly the transform of the root node scene->mRootNode->mTransformation, is the transformation from local space to global space. To give you an example: when you create your mesh or load your character in a 3D modeler (let's pick Blender, for instance), it is usually positioned (by default) at the origin of the Cartesian coordinate system and its rotation is set to the identity quaternion.
However, you can translate/rotate your mesh/character away from the origin (0,0,0), and a single scene can even contain multiple meshes with different positions. When you load them, especially if you do skeletal animation, it is mandatory to transform them back into local space (i.e. back to the origin (0,0,0)), and this is the reason why you have to multiply everything by the InverseGlobal (which brings your mesh back into local space).
After that you need to multiply it by the node transform, which is the product of the parentTransform (the transformation one level up in the tree, i.e. the accumulated transform so far), the transform (the assimp_node->mTransformation, which is just the transformation of the bone relative to the node's parent) and your local transformation (any T * R you want to apply to do forward kinematics, inverse kinematics or key-frame interpolation).
Finally there is the boneOffset (ai_mesh->mBones[i]->mOffsetMatrix), which transforms from mesh space to bone space in the bind pose, as stated in the documentation.
Here there is a link to GitHub if you want to look at the whole code for my Skeleton class.
Hope it helps.
The offset matrix defines the transform (translation, scale, rotation) that takes a vertex in mesh space and converts it to "bone" space. As an example, consider the following vertex and a bone with the following properties:
Vertex Position<0, 1, 2>
Bone Position<10, 2, 4>
Bone Rotation<0,0,0,1> // Note - no rotation
Bone Scale<1, 1, 1>
If we multiply a vertex by the offset matrix in this case, we would get a vertex position of <-10, -1, -2>.
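A tiny glm-based sketch (glm is an assumption here, not something the answer uses) that reproduces those numbers by treating the offset matrix as the inverse of the bone's bind-pose transform:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
int main()
{
    // Bind pose of the bone above: translation only, no rotation, unit scale.
    glm::mat4 bindPose = glm::translate(glm::mat4(1.0f), glm::vec3(10.0f, 2.0f, 4.0f));
    glm::mat4 offset   = glm::inverse(bindPose);  // mesh space -> bone space
    glm::vec4 boneSpaceVertex = offset * glm::vec4(0.0f, 1.0f, 2.0f, 1.0f);
    // boneSpaceVertex is now (-10, -1, -2, 1)
    return 0;
}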
How do we use this? You have two options for how to use this matrix, which comes down to how we store the vertex data in the vertex buffers. The options are:
1) Store the mesh vertices in mesh space
2) Store the mesh vertices in bone space
In the case of #1, we would take the offsetMatrix and apply it to the vertices that are influenced by the bone as we build the vertex buffer. And then when we animate the mesh, we later apply the animated matrix for that bone.
In the case of #2, we would use the offsetMatrix in combination with the animation matrix for that bone when transforming the vertices stored in the vertex buffer. So it would be something like (Note: you may have to switch the matrix concatenations around here);
anim_vertex = (offset_matrix * anim_matrix) * mesh_vertex
Does this help?
As I already assumed, the mOffsetMatrix is the inverse bind pose matrix. This tutorial states the correct transformations that you need for linear blend skinning:
You first need to evaluate your animation state. This will give you a system transform from animated bone space to world space for every bone (GlobalTransformation in the tutorial). The mOffsetMatrix is the system transform from world space to bind pose bone space. Therefore, what you do for skinning is the following (assuming that a specific vertex is influenced by a single bone): Transform the vertex to bone space with mOffsetMatrix. Now assume an animated bone and transform the intermediate result back from animated bone space to world space. So:
boneMatrix[i] = animationMatrix[i] * mOffsetMatrix[i]
If the vertex is influenced by multiple bones, LBS simply averages the results. That's where the weights come into play. Skinning is usually implemented in a vertex shader:
vec4 result = vec4(0);
for each influencing bone i
result += weight[i] * boneMatrix[i] * vertexPos;
Usually, the maximum number of influencing bones is fixed and you can unroll the for loop.
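On the CPU side this typically means filling one matrix per bone each frame and uploading the whole array; a hedged sketch with names of my own choosing (globalTransform comes from your animation evaluation, offset from the converted mOffsetMatrix, and the uniform name must match your shader):
#include <GL/glew.h>
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>
#include <vector>
// Per frame: boneMatrix[i] = animated global transform of bone i * its offset
// (inverse bind pose) matrix, then upload the whole array in one call.
void uploadBoneMatrices(GLuint program,
                        const std::vector<glm::mat4>& globalTransform,
                        const std::vector<glm::mat4>& offset)
{
    std::vector<glm::mat4> boneMatrix(globalTransform.size());
    for (size_t i = 0; i < boneMatrix.size(); ++i)
        boneMatrix[i] = globalTransform[i] * offset[i];
    GLint loc = glGetUniformLocation(program, "boneMatrix"); // assumed uniform name
    glUniformMatrix4fv(loc, (GLsizei)boneMatrix.size(), GL_FALSE,
                       glm::value_ptr(boneMatrix[0]));
}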
The tutorial uses an additional m_GlobalInverseTransform for the boneMatrix. However, I have no clue why they do that. Basically, this undoes the overall transformation of the entire scene. Probably it is used to center the model in the view.

texture mapping over many triangles in a circle

After some help, I want to map a texture onto a circle, as you can see below.
I want to do it in such a way that the centre of the circle starts on the shared point of the triangles.
The triangles can change in size and number and will cover varying angles, e.g. 45, 68 or 250 degrees, so only the part of the texture that falls inside the triangles can be seen.
It's basically a one-to-one mapping: shift the image to the left and you see only the part where there are triangles.
I'm not sure what this is called or what to google for; can anyone make some suggestions or point me to relevant information?
I was thinking I would have to generate the texture coordinates on the fly to select the relevant part, but it feels like I should be able to do a one-to-one mapping, which would be simpler than calculating triangles on the texture to map to the OpenGL triangles.
Generating texture coordinates for this isn't difficult. Each point of the polygon corresponds to a certain angle, so the i-th point's angle will be i*2*pi/N, where N is the order of the regular polygon (the number of sides). Then you can use the following to evaluate each point's texture coordinates:
texX = (cos(i*2*pi/N)+1)/2
texY = (sin(i*2*pi/N)+1)/2
Well, and the center point has (0.5, 0.5).
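A CPU-side sketch of generating such a fan with those texture coordinates (the struct and function names are mine; sweepDegrees is the angle you actually want covered):
#include <cmath>
#include <vector>
struct FanVertex { float x, y, u, v; };
// Builds a triangle-fan slice of a regular N-gon covering sweepDegrees,
// with the texture center pinned to the shared center point.
std::vector<FanVertex> buildFan(int N, float sweepDegrees, float radius)
{
    const float pi = 3.14159265358979f;
    std::vector<FanVertex> verts;
    verts.push_back({ 0.0f, 0.0f, 0.5f, 0.5f });                   // shared center
    int slices = (int)std::ceil(N * sweepDegrees / 360.0f);
    for (int i = 0; i <= slices; ++i) {
        float a = i * 2.0f * pi / N;
        float c = std::cos(a), s = std::sin(a);
        verts.push_back({ radius * c, radius * s,
                          (c + 1.0f) * 0.5f, (s + 1.0f) * 0.5f }); // texX, texY from above
    }
    return verts; // draw as GL_TRIANGLE_FAN
}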
It may be even simpler to generate coordinates in the shader, if you have one specially for this:
I assume you get the vertex position as pos. It depends on how you store the polygon vertices, but let the center be (0,0) and the other points range from (-1,-1) to (1,1). Then pos can simply be used as the texture coordinate, with an offset:
vec2 texCoords = (pos + vec2(1,1))*0.5;
and pos itself should then be passed on to the vector-matrix multiplication as usual.

OpenGL first person camera rotation and translation

I'm trying to write a simple maze game, without using any deprecated OpenGL API (i.e. no immediate mode). I'm using one Vertex Buffer Object for each tile I have in my maze, which is essentially a combination of four Vertex objects:
class Vertex {
public:
GLfloat x, y, z; // coords
GLfloat tx, ty; // texture coords
Vertex();
};
and are stored in VBOs like this:
void initVBO()
{
Vertex vertices[4];
vertices[0].x = -0.5;
vertices[0].y = -0.5;
vertices[0].z = 0.0;
vertices[0].tx = 0.0;
vertices[0].ty = 1.0;
vertices[1].x = -0.5;
vertices[1].y = 0.5;
vertices[1].z = 0.0;
vertices[1].tx = 0.0;
vertices[1].ty = 0.0;
vertices[2].x = 0.5;
vertices[2].y = 0.5;
vertices[2].z = 0.0;
vertices[2].tx = 1.0;
vertices[2].ty = 0.0;
vertices[3].x = 0.5;
vertices[3].y = -0.5;
vertices[3].z = 0.0;
vertices[3].tx = 1.0;
vertices[3].ty = 1.0;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex)*4, &vertices[0].x, GL_STATIC_DRAW);
ushort indices[4];
indices[0] = 0;
indices[1] = 1;
indices[2] = 2;
indices[3] = 3;
glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(ushort) * 4, indices, GL_STATIC_DRAW);
}
Now, I'm stuck on the camera movement. In a previous version of my project, I used glRotatef and glTranslatef to translate and rotate the scene, and then I rendered every tile using glBegin()/glEnd() mode. But these two functions are now deprecated, and I didn't find any tutorial about creating a camera in a context using only VBOs. What is the correct way to proceed? Should I loop over every tile, modifying the position of the vertices according to the new camera position?
But these two functions are now deprecated, and I didn't find any tutorial about creating a camera in a context using only VBOs.
VBOs have nothing to do with this.
Immediate mode and the matrix stack are two different pairs of shoes. VBOs deal with getting geometry data to the renderer, the matrix stack deals with getting the transformation there. It's only geometry data that's affected by VBOs.
As for your question: you calculate the matrices yourself and pass them to the shader as uniforms. It's also important to understand that OpenGL's matrix functions were never GPU accelerated (except on one single machine, SGI's Onyx), so they never offered a performance gain anyway. Actually, using OpenGL's matrix stack had a negative impact on overall performance, due to carrying out redundant operations that have to be done somewhere else in the program as well.
For a simple matrix math library look at my linmath.h http://github.com/datenwolf/linmath.h
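As an illustration, here is a hedged sketch that builds projection * view for an FPS-style camera and hands it to the shader each frame; it uses glm purely for brevity (linmath.h works the same way), and the uniform name is an assumption that must match your vertex shader:
#include <GL/glew.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>
// The tile geometry in the VBOs is never touched; only this uniform changes
// as the player moves and turns.
void uploadCamera(GLuint program, glm::vec3 camPos, float yawRadians, float aspect)
{
    glm::mat4 projection = glm::perspective(glm::radians(60.0f), aspect, 0.1f, 100.0f);
    glm::mat4 view = glm::rotate(glm::mat4(1.0f), -yawRadians, glm::vec3(0, 1, 0))
                   * glm::translate(glm::mat4(1.0f), -camPos); // inverse of the camera's transform
    glm::mat4 viewProjection = projection * view;
    GLint loc = glGetUniformLocation(program, "u_viewProjection"); // assumed uniform name
    glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(viewProjection));
}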
I will add to datenwolf's answer. I am assuming that only the shader pipeline is available to you.
Requirements
In OpenGL 4.0+, OpenGL does not do any rendering for you whatsoever, as it has moved away from the fixed-function pipeline. If you are rendering your geometry without a shader right now, you are using the deprecated pipeline. Getting up and running without some base framework will be difficult (not impossible, but I would recommend using a base framework). I would recommend, as a start, using GLUT (this will create a window for you and has basic callbacks for the idle function and input), GLEW (to set up the rendering context) and GLTools (matrix stack, generic shaders and a shader manager for a quick setup so that you can at least start rendering).
Setup
I will be giving the important pieces here which you can then piece together. At this point I am assuming you have GLUT set up properly (search for how to set it up) and you are able to register the update loop with it and create a window (that is, the loop which calls one of your selected functions [note, this cannot be a method] every frame). Refer to the link above for help on this.
First, initialize glew (by calling glewInit())
Set up your scene. This includes using the GLBatch class (from GLTools) to create a set of vertices to render as triangles, and initializing the GLShaderManager class (also from GLTools) and its stock shaders by calling its InitializeStockShaders() function.
In your idle loop, call UseStockShader() function of the shader manager to start a new batch. Call the draw() function on your vertex batch. For a complete overview of glTools, go here.
Don't forget to clear the window before rendering and swap the buffers after rendering, by calling glClear() and glutSwapBuffers() respectively.
Note that most of the functions I gave above accept arguments. You should be able to figure those out by looking at the respective library's documentations.
MVP Matrix (EDIT: Forgot to add this section)
OpenGL renders everything that lies within the -1 to 1 coordinate range, looking down the z-axis. It has no notion of a camera and does not care about anything that falls outside these coordinates. The model-view-projection matrix is what transforms your scene to fit these coordinates.
As a starting point, don't worry about this until you have something rendered on the screen (make sure all the co-ordinates that you give your vertex batch are less than 1). Once you do, then setup your projection matrix (default projection is orthographic) by using the GLFrustum class in glTools. You will get your projection matrix from this class which you will multiply with your model view matrix. The model-view matrix is a combination of the model's transformation matrix and your camera's transformation (remember, there is no camera, so essentially, you are moving the scene instead). Once you have multiplied all of them to make one matrix, you pass it on to the shader using the UseStockShader() function.
Use a stock shader in GLTools (e.g. GLT_SHADER_FLAT) then start creating your own.
Reference
Lastly, I would highly recommend getting this book: OpenGL SuperBible, Comprehensive Tutorial and Reference (Fifth edition - make sure it is this edition)
If you really want to stick with the newest OpenGL API, where lots of features were removed in favor of a programmable pipeline (OpenGL 4 and OpenGL ES 2), you will have to write the vertex and fragment shaders yourself and implement the transformation stuff there. You will have to manually create all the attributes you use in the shader, specifically the coords and texture coords you use in your example. You will also need two uniform variables, one for the model-view matrix and another for the projection matrix, if you want to mimic the behavior of the old fixed-function OpenGL.
The rotations/translations you are used to doing are matrix operations. During the vertex transformation stage of the pipeline, now performed by the vertex shader you supply, you must multiply a 4x4 transformation matrix by the vertex position (4 coordinates, interpreted as a 4x1 matrix, where the 4th coordinate is usually 1 if you are not doing anything too fancy). The resulting vector will be at the correct relative position according to that transformation. Then you multiply the projection matrix by that vector and output the result to the fragment shader.
You can learn how all those matrices are built by looking at the documentation of glRotate, glTranslate and gluPerspective. Remember that matrix multiplication is non-commutative, so the order in which you multiply them matters (this is exactly why the order in which you call glRotate and glTranslate also matters).
As for learning GLSL and how to use shaders, I learned from here, but those tutorials relate to OpenGL 1.4 and 2 and are now very old. The main difference is that the predefined input variables of the vertex shaders, such as gl_Vertex and gl_ModelViewMatrix, no longer exist, and you have to create them yourself.
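For example, the attribute setup that replaces the old gl_Vertex for the question's Vertex layout might look like this sketch (attribute locations 0 and 1 are an assumption and must match the shader's declarations):
#include <GL/glew.h>
#include <cstddef>   // offsetof
struct Vertex { GLfloat x, y, z; GLfloat tx, ty; }; // same layout as in the question
// Describes the interleaved Vertex layout to the shader's "in" attributes.
void bindVertexLayout(GLuint vbo)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableVertexAttribArray(0);   // position attribute
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                          (const void*)offsetof(Vertex, x));
    glEnableVertexAttribArray(1);   // texture-coordinate attribute
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                          (const void*)offsetof(Vertex, tx));
}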

Is it possible to thicken a quadratic Bézier curve using the GPU only?

I draw lots of quadratic Bézier curves in my OpenGL program. Right now, the curves are one-pixel thin and software-generated, because I'm at a rather early stage, and it is enough to see what works.
Simply enough, given 3 control points (P0 to P2), I evaluate the following equation with t varying from 0 to 1 (with steps of 1/8) in software and use GL_LINE_STRIP to link them together:
B(t) = (1 - t)²P0 + 2(1 - t)t·P1 + t²P2
Where B, obviously enough, results in a 2-dimensional vector.
This approach worked 'well enough', since even my largest curves don't need much more than 8 steps to look curved. Still, one pixel thin curves are ugly.
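(For reference, the CPU-side evaluation described above amounts to no more than this sketch; the types and names are mine.)
#include <vector>
struct Vec2 { float x, y; };
// Evaluates the quadratic Bézier at steps + 1 parameter values; the points
// are then drawn with GL_LINE_STRIP exactly as described above.
std::vector<Vec2> evalQuadraticBezier(Vec2 p0, Vec2 p1, Vec2 p2, int steps = 8)
{
    std::vector<Vec2> pts;
    for (int i = 0; i <= steps; ++i) {
        float t = (float)i / steps, u = 1.0f - t;
        pts.push_back({ u * u * p0.x + 2.0f * u * t * p1.x + t * t * p2.x,
                        u * u * p0.y + 2.0f * u * t * p1.y + t * t * p2.y });
    }
    return pts;
}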
I wanted to write a GLSL shader that would accept control points and a uniform thickness variable to, well, make the curves thicker. At first I thought about making a pixel shader only, that would color only pixels within a thickness / 2 distance of the curve, but doing so requires solving a third degree polynomial, and choosing between three solutions inside a shader doesn't look like the best idea ever.
I then tried to look up whether other people had already done it. I stumbled upon a white paper by Loop and Blinn from Microsoft Research where they show an easy way of filling the area under a curve. While it works well to that extent, I'm having trouble adapting the idea to drawing between two bounding curves.
Finding bounding curves that match a single curve is rather easy with a geometry shader. The problems come with the fragment shader that should fill the whole thing. Their approach uses the interpolated texture coordinates to determine if a fragment falls over or under the curve; but I couldn't figure a way to do it with two curves (I'm pretty new to shaders and not a maths expert, so the fact I didn't figure out how to do it certainly doesn't mean it's impossible).
My next idea was to separate the filled curve into triangles and only use the Bézier fragment shader on the outer parts. But for that I need to split the inner and outer curves at variable spots, and that means again that I have to solve the equation, which isn't really an option.
Are there viable algorithms for stroking quadratic Bézier curves with a shader?
This partly continues my previous answer, but is actually quite different since I got a couple of central things wrong in that answer.
To allow the fragment shader to only shade between two curves, two sets of "texture" coordinates are supplied as varying variables, to which the technique of Loop-Blinn is applied.
varying vec2 texCoord1,texCoord2;
varying float insideOutside;
varying vec4 col;
void main()
{
float f1 = texCoord1[0] * texCoord1[0] - texCoord1[1];
float f2 = texCoord2[0] * texCoord2[0] - texCoord2[1];
float alpha = (sign(insideOutside*f1) + 1) * (sign(-insideOutside*f2) + 1) * 0.25;
gl_FragColor = vec4(col.rgb, col.a * alpha);
}
So far, easy. The hard part is setting up the texture coordinates in the geometry shader. Loop-Blinn specifies them for the three vertices of the control triangle, and they are interpolated appropriately across the triangle. But, here we need to have the same interpolated values available while actually rendering a different triangle.
The solution to this is to find the linear function mapping from (x,y) coordinates to the interpolated/extrapolated values. Then, these values can be set for each vertex while rendering a triangle. Here's the key part of my code for this part.
vec2[3] tex = vec2[3]( vec2(0,0), vec2(0.5,0), vec2(1,1) );
mat3 uvmat;
uvmat[0] = vec3(pos2[0].x, pos2[1].x, pos2[2].x);
uvmat[1] = vec3(pos2[0].y, pos2[1].y, pos2[2].y);
uvmat[2] = vec3(1, 1, 1);
mat3 uvInv = inverse(transpose(uvmat));
vec3 uCoeffs = vec3(tex[0][0],tex[1][0],tex[2][0]) * uvInv;
vec3 vCoeffs = vec3(tex[0][1],tex[1][1],tex[2][1]) * uvInv;
float[3] uOther, vOther;
for(int i=0; i<3; i++) {
uOther[i] = dot(uCoeffs,vec3(pos1[i].xy,1));
vOther[i] = dot(vCoeffs,vec3(pos1[i].xy,1));
}
insideOutside = 1;
for(int i=0; i< gl_VerticesIn; i++){
gl_Position = gl_ModelViewProjectionMatrix * pos1[i];
texCoord1 = tex[i];
texCoord2 = vec2(uOther[i], vOther[i]);
EmitVertex();
}
EndPrimitive();
Here pos1 and pos2 contain the coordinates of the two control triangles. This part renders the triangle defined by pos1, but with texCoord2 set to the translated values from the pos2 triangle. Then the pos2 triangle needs to be rendered, similarly. Then the gap between these two triangles at each end needs to filled, with both sets of coordinates translated appropriately.
The calculation of the matrix inverse requires either GLSL 1.50 or it needs to be coded manually. It would be better to solve the equation for the translation without calculating the inverse. Either way, I don't expect this part to be particularly fast in the geometry shader.
You should be able to use the technique of Loop and Blinn from the paper you mentioned.
Basically you'll need to offset each control point in the normal direction, both ways, to get the control points for two curves (inner and outer). Then follow the technique in Section 3.1 of Loop and Blinn - this breaks up sections of the curve to avoid triangle overlaps, and then triangulates the main part of the interior (note that this part requires the CPU). Finally, these triangles are filled, and the small curved parts outside of them are rendered on the GPU using Loop and Blinn's technique (at the start and end of Section 3).
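A naïve sketch of that control-point offsetting (each point shifted along an estimated normal by half the thickness; note this is only an approximation of a true offset curve, and all names are mine):
#include <cmath>
struct Vec2f { float x, y; };
static Vec2f segmentNormal(Vec2f a, Vec2f b)   // unit normal of the segment a -> b
{
    float dx = b.x - a.x, dy = b.y - a.y;
    float len = std::sqrt(dx * dx + dy * dy);
    return { -dy / len, dx / len };
}
// Shifts the three control points of a quadratic Bézier outwards/inwards to
// approximate the control polygons of the outer and inner boundary curves.
void offsetControlPoints(const Vec2f p[3], float thickness, Vec2f outer[3], Vec2f inner[3])
{
    Vec2f n[3] = { segmentNormal(p[0], p[1]),  // normal at the start (tangent is p1 - p0)
                   segmentNormal(p[0], p[2]),  // rough normal for the middle control point
                   segmentNormal(p[1], p[2]) };// normal at the end (tangent is p2 - p1)
    for (int i = 0; i < 3; ++i) {
        outer[i] = { p[i].x + n[i].x * thickness * 0.5f, p[i].y + n[i].y * thickness * 0.5f };
        inner[i] = { p[i].x - n[i].x * thickness * 0.5f, p[i].y - n[i].y * thickness * 0.5f };
    }
}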
An alternative technique that may work for you is described here:
Thick Bezier Curves in OpenGL
EDIT:
Ah, you want to avoid even the CPU triangulation - I should have read more closely.
One issue you have is the interface between the geometry shader and the fragment shader - the geometry shader will need to generate primitives (most likely triangles) that are then individually rasterized and filled via the fragment program.
In your case, with constant thickness, I think quite a simple triangulation will work - using Loop and Blinn for all the "curved bits". When the two control triangles don't intersect it's easy. When they do, the part outside the intersection is easy. So the only hard part is within the intersection (which should be a triangle).
Within the intersection you want to shade a pixel only if both control triangles lead to it being shaded via Loop and Blinn. So the fragment shader needs to be able to do texture lookups for both triangles. One can be as standard, and you'll need to add a vec2 varying variable for the second set of texture coordinates, which you'll need to set appropriately for each vertex of the triangle. You'll also need a uniform "sampler2D" variable for the texture, which you can then sample via texture2D. Then you just shade fragments that satisfy the checks for both control triangles (within the intersection).
I think this works in every case, but it's possible I've missed something.
I don't know how to exactly solve this, but it's very interesting. I think you need every different processing unit in the GPU:
Vertex shader
Throw a normal line of points at your vertex shader. Let the vertex shader displace the points onto the Bézier curve.
Geometry shader
Let your geometry shader create an extra point per vertex.
foreach (point p in bezierCurve)
new point(p+(0,thickness,0)) // in tangent with p1-p2
Fragment shader
To stroke your Bézier with a patterned stroke, you can use a texture with an alpha channel. Check the alpha channel's value: if it's zero, discard the pixel. This way, you can still make the system think it is a solid line instead of a half-transparent one. You could apply some patterns via your alpha channel.
I hope this will help you on your way. You will have to figure out a lot of things yourself, but I think that the geometry shader will speed your Bézier up.
Still, for the stroking I would stick with my choice of creating a GL_QUAD_STRIP and an alpha-channel texture.