GLSL Shader: Half a colour value in var A, other half in var B, without conditional statement

I'm working on a texture / terrain splatting GLSL shader that will support 8 textures.
The first version I made uses RGBA as input - each channel is the intensity for each texture.
In the second version I tried to double the "channels" by splitting each one in half. For example:
R0 = nothing on texture 1
R64 = half on texture 1
R127 = full texture 1
R128 = nothing on texture 5
R196 = half on texture 5
R254 = full texture 5
and so forth.
I've come to understand that if-statements in shaders are very bad practice, and it also felt like I was doing something wrong.
The thing is, I don't know the mathematical functions to take half of each channel and use it as a full variable.
Here's the fragment shader code:
varying vec2 v_vTexcoord;
varying vec4 v_vColour;
uniform sampler2D texture; //Texture
uniform float surfWidth;
uniform float surfHeight;
uniform float xpos;
uniform float ypos;
void main()
{
    // Set the main texture position
    vec2 texpos = (v_vTexcoord * vec2(surfWidth, surfHeight)) + vec2(xpos, ypos);
    // Modulo 1 for repeat, divide by 4 because it's a 4x4 texture sheet
    float tpx = mod(texpos.x, 1.0) / 4.0;
    float tpy = mod(texpos.y, 1.0) / 4.0;
    // Set up terrain "intensities"
    float t1 = 0.0;
    float t2 = 0.0;
    float t3 = 0.0;
    float t4 = 0.0;
    float t5 = 0.0;
    float t6 = 0.0;
    float t7 = 0.0;
    float t8 = 0.0;
    // Load from the surface (splatmap); sample once and reuse all four channels
    vec4 splat = v_vColour * texture2D(gm_BaseTexture, v_vTexcoord);
    float btr = splat.r;
    float btg = splat.g;
    float btb = splat.b;
    float bta = splat.a;
    // Calculate which texture to use (also dividing each channel in two)
    if (btr <= 0.5) {
        t1 = btr * 2.0;
    } else {
        t5 = (btr - 0.5) * 2.0;
    }
    if (btg <= 0.5) {
        t2 = btg * 2.0;
    } else {
        t6 = (btg - 0.5) * 2.0;
    }
    if (btb <= 0.5) {
        t3 = btb * 2.0;
    } else {
        t7 = (btb - 0.5) * 2.0;
    }
    if (bta <= 0.5) {
        t4 = bta * 2.0;
    } else {
        t8 = (bta - 0.5) * 2.0;
    }
    // Get terrain pixels at the proper positions
    vec4 ter1 = texture2D(texture, vec2(tpx, tpy));
    ter1.a = t1;
    vec4 ter2 = texture2D(texture, vec2(tpx + 0.25, tpy));
    ter2.a = t2;
    vec4 ter3 = texture2D(texture, vec2(tpx + 0.50, tpy));
    ter3.a = t3;
    vec4 ter4 = texture2D(texture, vec2(tpx + 0.75, tpy));
    ter4.a = t4;
    vec4 ter5 = texture2D(texture, vec2(tpx, tpy + 0.25));
    ter5.a = t5;
    vec4 ter6 = texture2D(texture, vec2(tpx + 0.25, tpy + 0.25));
    ter6.a = t6;
    vec4 ter7 = texture2D(texture, vec2(tpx + 0.50, tpy + 0.25));
    ter7.a = t7;
    vec4 ter8 = texture2D(texture, vec2(tpx + 0.75, tpy + 0.25));
    ter8.a = t8;
    // Output to screen
    gl_FragColor =
        ter1 * vec4(t1) +
        ter2 * vec4(t2) +
        ter3 * vec4(t3) +
        ter4 * vec4(t4) +
        ter5 * vec4(t5) +
        ter6 * vec4(t6) +
        ter7 * vec4(t7) +
        ter8 * vec4(t8);
}
I'm quite sure that I'm going to need something like clamp() or mix() (GLSL's lerp), but I can't wrap my head around it...
Also, when textures overlap, they get "brighter", because both textures are simply added in the last statement. I have no idea how I could prevent that from happening and always output a "maximum" of the texture itself (so that it doesn't "light up"). Excuse me if I sound dumb, this is my first real shader :)

Branches are not that bad on modern hardware, assuming they span multiple fragments and potentially save quite a bunch of work. Writing your logic without branches would look like this:
btr *= 2.0;
t1 = min(btr, 1.0) * step(btr, 1.0); // low half: btr in [0,1]; step() zeroes t1 once btr exceeds 1.0
t5 = max(btr - 1.0, 0.0);            // high half: btr in (1,2], remapped to [0,1]
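Applied to all four channels, the same unpacking could be wrapped in a helper; a sketch (the unpack name is mine, not from your code):
// Unpack one splatmap channel into its low and high halves, branch-free
vec2 unpack(float c) {
    c *= 2.0;
    return vec2(min(c, 1.0) * step(c, 1.0), max(c - 1.0, 0.0));
}
// In main():
vec2 r = unpack(btr); t1 = r.x; t5 = r.y;
vec2 g = unpack(btg); t2 = g.x; t6 = g.y;
vec2 b = unpack(btb); t3 = b.x; t7 = b.y;
vec2 a = unpack(bta); t4 = a.x; t8 = a.y;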
Note that with your approach you cannot blend two "splatting channels" that are packed into the same color channel (i.e. blending t1 and t5). It would be simpler (and probably more efficient) to just sample another splatmap.
As for blending the final output, assuming you want to blend linearly, you'd divide the individual weights by the sum of all weights.
float sum = t1+t2+t3+t4+t5+t6+t7+t8;
gl_FragColor = ter1*(t1/sum) + ter2*(t2/sum) + ter3*(t3/sum) + ...
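In full, the normalized blend might look like this (a sketch; the max() guard against an all-zero splat pixel is my addition, not part of the answer above):
float sum = max(t1 + t2 + t3 + t4 + t5 + t6 + t7 + t8, 0.0001); // avoid division by zero
gl_FragColor = (ter1 * t1 + ter2 * t2 + ter3 * t3 + ter4 * t4 +
                ter5 * t5 + ter6 * t6 + ter7 * t7 + ter8 * t8) / sum;
Because the weights now sum to one, overlapping textures are averaged rather than added, which removes the "lighting up" effect.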

Related

Shader Flipping Faces

I'm trying to construct a render engine using OpenGL and C++, but I can't seem to get past this problem. The same model is rendered 5 different times using different shaders; in 4 out of the 5 shaders the backface culling works properly. In the tessellation shader, however, it does not: any outward faces are invisible, so you can see straight through to the rear ones. Does anyone know why this shader flips the faces?
Vertex Shader
void main()
{
    worldVertexPosition_cs = (transformationMatrix * vec4(position_vs, 1.0)).xyz;
    worldTextureCoords_cs = textureCoords_vs;
    worldNormal_cs = mat3(transpose(inverse(transformationMatrix))) * normal_vs;
}
Control Shader
float getTessLevel(float distance0, float distance1)
{
    float avgDistance = (distance0 + distance1) / 2.0;
    avgDistance = (100.0 - avgDistance) / 20.0;
    if (avgDistance < 1.0) {
        avgDistance = 1.0;
    }
    return avgDistance;
}
void main()
{
    worldTextureCoords_es[gl_InvocationID] = worldTextureCoords_cs[gl_InvocationID];
    worldNormal_es[gl_InvocationID] = worldNormal_cs[gl_InvocationID];
    worldVertexPosition_es[gl_InvocationID] = worldVertexPosition_cs[gl_InvocationID];
    float eyeToVertexDistance0 = distance(eyePos, worldVertexPosition_es[0]);
    float eyeToVertexDistance1 = distance(eyePos, worldVertexPosition_es[1]);
    float eyeToVertexDistance2 = distance(eyePos, worldVertexPosition_es[2]);
    gl_TessLevelOuter[0] = getTessLevel(eyeToVertexDistance1, eyeToVertexDistance2);
    gl_TessLevelOuter[1] = getTessLevel(eyeToVertexDistance2, eyeToVertexDistance0);
    gl_TessLevelOuter[2] = getTessLevel(eyeToVertexDistance0, eyeToVertexDistance1);
    gl_TessLevelInner[0] = gl_TessLevelOuter[2];
}
Evaluation Shader
vec2 interpolate2D(vec2 v0, vec2 v1, vec2 v2)
{
    return vec2(gl_TessCoord.x) * v0 + vec2(gl_TessCoord.y) * v1 + vec2(gl_TessCoord.z) * v2;
}
vec3 interpolate3D(vec3 v0, vec3 v1, vec3 v2)
{
    return vec3(gl_TessCoord.x) * v0 + vec3(gl_TessCoord.y) * v1 + vec3(gl_TessCoord.z) * v2;
}
void main()
{
    worldTextureCoords_fs = interpolate2D(worldTextureCoords_es[0], worldTextureCoords_es[1], worldTextureCoords_es[2]);
    worldNormal_fs = interpolate3D(worldNormal_es[0], worldNormal_es[1], worldNormal_es[2]);
    worldNormal_fs = normalize(worldNormal_fs);
    worldVertexPosition_fs = interpolate3D(worldVertexPosition_es[0], worldVertexPosition_es[1], worldVertexPosition_es[2]);
    float displacement = texture(texture_displacement0, worldTextureCoords_fs.xy).x;
    worldVertexPosition_fs += worldNormal_fs * (displacement / 1.0f);
    gl_Position = projectionMatrix * viewMatrix * vec4(worldVertexPosition_fs.xyz, 1.0);
}
Fragment Shader
void main()
{
    vec3 unitNormal = normalize(worldNormal_fs);
    vec3 unitLightVector = normalize(lightPosition - worldVertexPosition_fs);
    float dotResult = dot(unitNormal, unitLightVector);
    float brightness = max(dotResult, blackPoint);
    vec3 diffuse = brightness * lightColor;
    FragColor = vec4(diffuse, 1.0) * texture(texture_diffuse0, worldTextureCoords_fs);
    FragColor.rgb = pow(FragColor.rgb, vec3(1.0/gamma));
}
In the Tessellation Evaluation Shader you have to define the winding order of the generated triangles.
This is done via the cw and ccw input layout qualifiers. The default is ccw.
Either generate clockwise primitives:
layout(triangles, cw) in;
Or generate counterclockwise primitives:
layout(triangles, ccw) in;
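In the question's evaluation shader, that means changing its input declaration, e.g. (a sketch; the equal_spacing mode is an assumption, keep whatever spacing mode your original declaration uses):
layout(triangles, equal_spacing, cw) in;
Flipping the winding makes the generated triangles face the same way as in the other four shaders, so backface culling behaves consistently.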

How to get a smooth result with RSM (Reflective Shadow Mapping)?

I'm trying to implement a Reflective Shadow Mapping program with Vulkan.
The problem is that I get a bad result:
As you can see, the result is not smooth.
In a first pass, I render the position, normal, and flux from the light's point of view into 3 textures with a resolution of 512 * 512.
In a second pass, I compute the indirect illumination from the first-pass textures according to this paper (http://www.klayge.org/material/3_12/GI/rsm.pdf):
for(int i = 0; i < 151; i++)
{
    vec4 rsmProjCoords = projCoords + vec4(rsmDiskSampling[i] * 0.09, 0.0, 0.0);
    vec3 indirectLightPos = texture(rsmPosition, rsmProjCoords.xy).rgb;
    vec3 indirectLightNorm = texture(rsmNormal, rsmProjCoords.xy).rgb;
    vec3 indirectLightFlux = texture(rsmFlux, rsmProjCoords.xy).rgb;
    vec3 r = worldPos - indirectLightPos;
    float distP2 = dot(r, r);
    vec3 emission = indirectLightFlux * (max(0.0, dot(indirectLightNorm, r)) * max(0.0, dot(N, -r)));
    emission *= rsmDiskSampling[i].x * rsmDiskSampling[i].x / (distP2 * distP2);
    indirectRSM += emission;
}
The problem is fixed.
The main problem was the sampling: I was using linear sampling instead of nearest sampling:
samplerInfo.magFilter = VK_FILTER_NEAREST;
samplerInfo.minFilter = VK_FILTER_NEAREST;
The other problems were the number of VPLs used and the distance between them.
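Both of those knobs live in the sampling loop of the second pass; a sketch based on the question's own code (NUM_SAMPLES and sampleRadius are hypothetical names for the hard-coded 151 and 0.09):
const int NUM_SAMPLES = 151;     // number of VPLs: more samples give a smoother result but cost more
const float sampleRadius = 0.09; // spread of the sampling disk: controls the distance between VPLs
for(int i = 0; i < NUM_SAMPLES; i++)
{
    vec4 rsmProjCoords = projCoords + vec4(rsmDiskSampling[i] * sampleRadius, 0.0, 0.0);
    // ...fetch position/normal/flux through NEAREST-filtered samplers and accumulate as above
}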

Billboarding using Qt3D 2.0

I am looking for the best way to create a billboard in Qt3D: a plane which faces the camera wherever it is and does not change size when the camera dollies forward or back. I have read how to do this using GLSL vertex and geometry shaders, but I am looking for the Qt3D way, unless custom shaders are the most efficient and best way of billboarding.
I have looked, and it appears I can set the matrix on a QTransform via properties, but it isn't clear to me how I would manipulate the matrix, or perhaps there is a better way? I am using the C++ API, but a QML answer would do; I could port it to C++.
If you want to draw just one billboard, you can add a plane and rotate it whenever the camera moves. However, if you want to do this efficiently with thousands or millions of billboards, I recommend using custom shaders. We did this to draw impostor spheres in Qt3D.
However, we didn't use a geometry shader because we were targeting systems that don't support geometry shaders. Instead, we used only the vertex shader, placing four vertices at the origin and moving them in the shader. To create many copies, we used instanced drawing and moved each set of four vertices according to the positions of the spheres. Finally, we offset each of the four vertices of each sphere so that they form a billboard that always faces the camera.
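The core idea fits in a few lines of GLSL (a sketch only; the viewMatrix uniform and the center, offset, and size names are hypothetical, and the full shaders we actually used follow below):
// The camera's right and up axes are the first two rows of the view matrix's rotation part.
vec3 right = vec3(viewMatrix[0][0], viewMatrix[1][0], viewMatrix[2][0]);
vec3 up    = vec3(viewMatrix[0][1], viewMatrix[1][1], viewMatrix[2][1]);
// Push each corner out along those axes so the quad always faces the camera.
vec3 corner = center + (offset.x * right + offset.y * up) * size;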
Start out by subclassing QGeometry and creating a buffer functor that creates four points, all at the origin (see spherespointgeometry.cpp). Give each point an ID that we can use later. If you use geometry shaders, the ID is not needed and you can get away with creating only one vertex.
class SpheresPointVertexDataFunctor : public Qt3DRender::QBufferDataGenerator
{
public:
    SpheresPointVertexDataFunctor()
    {
    }
    QByteArray operator ()() Q_DECL_OVERRIDE
    {
        const int verticesCount = 4;
        // vec3 pos + float vertexId
        const quint32 vertexSize = (3+1) * sizeof(float);
        QByteArray verticesData;
        verticesData.resize(vertexSize*verticesCount);
        float *verticesPtr = reinterpret_cast<float*>(verticesData.data());
        // Vertex 1
        *verticesPtr++ = 0.0;
        *verticesPtr++ = 0.0;
        *verticesPtr++ = 0.0;
        // VertexID 1
        *verticesPtr++ = 0.0;
        // Vertex 2
        *verticesPtr++ = 0.0;
        *verticesPtr++ = 0.0;
        *verticesPtr++ = 0.0;
        // VertexID 2
        *verticesPtr++ = 1.0;
        // Vertex 3
        *verticesPtr++ = 0.0;
        *verticesPtr++ = 0.0;
        *verticesPtr++ = 0.0;
        // VertexID 3
        *verticesPtr++ = 2.0;
        // Vertex 4
        *verticesPtr++ = 0.0;
        *verticesPtr++ = 0.0;
        *verticesPtr++ = 0.0;
        // VertexID 4
        *verticesPtr++ = 3.0;
        return verticesData;
    }
    bool operator ==(const QBufferDataGenerator &other) const Q_DECL_OVERRIDE
    {
        Q_UNUSED(other);
        return true;
    }
    QT3D_FUNCTOR(SpheresPointVertexDataFunctor)
};
For the real positions, we used a separate QBuffer. We also set color and scale, but I have omitted those here (see spheredata.cpp):
void SphereData::setPositions(QVector<QVector3D> positions, QVector3D color, float scale)
{
    QByteArray ba;
    ba.resize(positions.size() * sizeof(QVector3D));
    // Cast to QVector3D* to match the data we copy in below
    QVector3D *vboData = reinterpret_cast<QVector3D *>(ba.data());
    for(int i = 0; i < positions.size(); i++) {
        vboData[i] = positions[i];
    }
    m_buffer->setData(ba);
    m_count = positions.count();
}
Then, in QML, we connected the geometry with the buffer in a QGeometryRenderer. This can also be done in C++, if you prefer (see Spheres.qml):
GeometryRenderer {
    id: spheresMeshInstanced
    primitiveType: GeometryRenderer.TriangleStrip
    enabled: instanceCount != 0
    instanceCount: sphereData.count
    geometry: SpheresPointGeometry {
        attributes: [
            Attribute {
                name: "pos"
                attributeType: Attribute.VertexAttribute
                vertexBaseType: Attribute.Float
                vertexSize: 3
                byteOffset: 0
                byteStride: (3 + 3 + 1) * 4
                divisor: 1
                buffer: sphereData ? sphereData.buffer : null
            }
        ]
    }
}
Finally, we created custom shaders to draw the billboards. Note that because we were drawing impostor spheres, the billboard size was increased to handle raytracing in the fragment shader from awkward angles. You likely do not need the 2.0*0.6 factor in general.
Vertex shader:
#version 330
in vec3 vertexPosition;
in float vertexId;
in vec3 pos;
in vec3 col;
in float scale;
uniform vec3 eyePosition = vec3(0.0, 0.0, 0.0);
uniform mat4 modelMatrix;
uniform mat4 mvp;
out vec3 modelSpherePosition;
out vec3 modelPosition;
out vec3 color;
out vec2 planePosition;
out float radius;
vec3 makePerpendicular(vec3 v) {
    if(v.x == 0.0 && v.y == 0.0) {
        if(v.z == 0.0) {
            return vec3(0.0, 0.0, 0.0);
        }
        return vec3(0.0, 1.0, 0.0);
    }
    return vec3(-v.y, v.x, 0.0);
}
void main() {
    vec3 position = vertexPosition + pos;
    color = col;
    radius = scale;
    modelSpherePosition = (modelMatrix * vec4(position, 1.0)).xyz;
    vec3 view = normalize(position - eyePosition);
    vec3 right = normalize(makePerpendicular(view));
    vec3 up = cross(right, view);
    float texCoordX = 1.0 - 2.0*(float(vertexId==0.0) + float(vertexId==2.0));
    float texCoordY = 1.0 - 2.0*(float(vertexId==0.0) + float(vertexId==1.0));
    planePosition = vec2(texCoordX, texCoordY);
    position += 2.0*0.6*(-up - right)*(scale*float(vertexId==0.0));
    position += 2.0*0.6*(-up + right)*(scale*float(vertexId==1.0));
    position += 2.0*0.6*(up - right)*(scale*float(vertexId==2.0));
    position += 2.0*0.6*(up + right)*(scale*float(vertexId==3.0));
    vec4 modelPositionTmp = modelMatrix * vec4(position, 1.0);
    modelPosition = modelPositionTmp.xyz;
    gl_Position = mvp*vec4(position, 1.0);
}
Fragment shader:
#version 330
in vec3 modelPosition;
in vec3 modelSpherePosition;
in vec3 color;
in vec2 planePosition;
in float radius;
out vec4 fragColor;
uniform mat4 modelView;
uniform mat4 inverseModelView;
uniform mat4 inverseViewMatrix;
uniform vec3 eyePosition;
uniform vec3 viewVector;
void main(void) {
    vec3 rayDirection = eyePosition - modelPosition;
    vec3 rayOrigin = modelPosition - modelSpherePosition;
    vec3 E = rayOrigin;
    vec3 D = rayDirection;
    // Sphere equation:
    //     x^2 + y^2 + z^2 = r^2
    // Ray equation:
    //     P(t) = E + t*D
    // We substitute the ray into the sphere equation to get
    //     (Ex + Dx*t)^2 + (Ey + Dy*t)^2 + (Ez + Dz*t)^2 = r^2
    float r2 = radius*radius;
    float a = D.x*D.x + D.y*D.y + D.z*D.z;
    float b = 2.0*E.x*D.x + 2.0*E.y*D.y + 2.0*E.z*D.z;
    float c = E.x*E.x + E.y*E.y + E.z*E.z - r2;
    // Discriminant of the sphere equation
    float d = b*b - 4.0*a*c;
    if(d < 0.0) {
        discard;
    }
    float t = (-b + sqrt(d))/(2.0*a);
    vec3 sphereIntersection = rayOrigin + t * rayDirection;
    vec3 normal = normalize(sphereIntersection);
    vec3 normalDotCamera = color*dot(normal, normalize(rayDirection));
    float pi = 3.1415926535897932384626433832795;
    vec3 position = modelSpherePosition + sphereIntersection;
    // Flat red
    fragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
It has been some time since we first implemented this, and there might be easier ways to do it now, but this should give you an idea of the pieces you need.

Updating Attribute variables in Vertex Shader (glVertexAttrib3f) working as glVertex

I am having trouble using attribute variables to get a value into the vertex shader. I want to provide the geometry shader with one of the points from the previous primitive (line) for some calculation. I provide this point using a vec3 attribute variable (Ppoint) in the vertex shader, and pass it to the geometry shader via an out variable in the vertex shader and an in variable in the geometry shader (pointPass).
The problem is that when I update the attribute variable in the glBegin()/glEnd() block while drawing the lines, the values in glVertexAttrib3f are taken as vertices, and a line is also rendered to those points. This causes some extra lines to be displayed, and all the geometry shader functionality is disturbed.
Here is my code for all the shaders and my OpenGL program that draws the lines.
Vertex Shader
#version 330 compatibility
out vec3 pointPass;
attribute vec3 Ppoint;
void main()
{
    pointPass = Ppoint;
    gl_Position = gl_Vertex;
}
Geometry Shader
#version 330 compatibility
in vec3 pointPass[];
out vec4 colorFrag;
layout(lines) in;
// 100 vertices are not actually required; specified more for trial
layout(triangle_strip, max_vertices=100) out;
vec3 getA(vec3 axis){
    vec3 a;
    a.x = 1.0;
    a.y = 1.0;
    a.z = -(axis.x + axis.y)/axis.z;
    a = normalize(a);
    return a;
}
vec3 getB(vec3 axis, vec3 a){
    vec3 b;
    b.x = (a.y*axis.z - a.z*axis.y);
    b.y = (a.z*axis.x - a.x*axis.z);
    b.z = (a.x*axis.y - a.y*axis.x);
    b = normalize(b);
    return b;
}
void main()
{
    vec3 axis0, axis1, v0, v1, v2;
    float radius = 0.5;
    float rotation = 0.0;
    float pi = 3.1416;
    int numPoints = 15;
    vec3 p1, p2, p3, p4;
    int count = 0, i;
    float increment = 2.0*pi/numPoints;
    v0 = pointPass[0];
    v1 = gl_in[0].gl_Position.xyz;
    v2 = gl_in[1].gl_Position.xyz;
    axis1 = v1 - v2;
    axis1 = normalize(axis1);
    vec3 a1 = getA(axis1);
    vec3 b1 = getB(axis1, a1);
    axis0 = v0 - v2;
    axis0 = normalize(axis0);
    vec3 a0 = getA(axis0);
    vec3 b0 = getB(axis0, a0);
    // Rotation with theta
    for(rotation = 0.0; rotation <= 2.0*pi; rotation += increment){
        p1 = v1 + radius*cos(rotation)*a0 + radius*sin(rotation)*b0;
        p2 = v1 + radius*cos(rotation + increment)*a0 + radius*sin(rotation + increment)*b0;
        p3 = v2 + radius*cos(rotation)*a1 + radius*sin(rotation)*b1;
        p4 = v2 + radius*cos(rotation + increment)*a1 + radius*sin(rotation + increment)*b1;
        // First triangle
        gl_Position = gl_ModelViewProjectionMatrix*vec4(p3, 1.0);
        EmitVertex();
        gl_Position = gl_ModelViewProjectionMatrix*vec4(p1, 1.0);
        EmitVertex();
        gl_Position = gl_ModelViewProjectionMatrix*vec4(p4, 1.0);
        EmitVertex();
        // Second triangle (completed by the strip)
        gl_Position = gl_ModelViewProjectionMatrix*vec4(p2, 1.0);
        EmitVertex();
    }
    EndPrimitive();
}
Fragment Shader
#version 330 compatibility
in vec4 colorFrag;
void main()
{
    gl_FragColor = colorFrag;
}
OpenGL program for drawing lines
// vPoints is a std::vector of a 3D vector class I created.
void drawLines(){
    float angle = 0.0f;
    int numLines = 30;
    int count = 0;
    float disp = 0.30f;
    float radius_x = 5.0;
    float radius_y = 5.0;
    vPoints.resize(numLines+2);
    // Loop around in a circle and specify evenly spaced points along the spiral
    float increment = (float)(2*GL_PI/numLines);
    for(angle = 0.0f; angle < (2.0f*GL_PI); angle += increment)
    {
        // Calculate the x and y position of the next vertex
        float x1 = radius_x*sin(angle);
        float y1 = radius_y*cos(angle);
        float z1 = count*disp;
        vPoints[count].SetVector(x1, y1, z1);
        count++;
    }
    // Drawing only the first two line segments for testing
    glBegin(GL_LINES);
    int pointPassLocation = glGetAttribLocation(programID, "Ppoint");
    // This is also considered a vertex, and a line is drawn from this point to vPoints[1]
    glVertexAttrib3f(pointPassLocation, vPoints[0].GetX(), vPoints[0].GetY(), vPoints[0].GetZ());
    glVertex3d(vPoints[1].GetX(), vPoints[1].GetY(), vPoints[1].GetZ());
    glVertex3d(vPoints[2].GetX(), vPoints[2].GetY(), vPoints[2].GetZ());
    // Again, this is also considered a point, and a line is drawn from vPoints[2] to this point
    glVertexAttrib3f(pointPassLocation, vPoints[1].GetX(), vPoints[1].GetY(), vPoints[1].GetZ());
    glVertex3d(vPoints[2].GetX(), vPoints[2].GetY(), vPoints[2].GetZ());
    glVertex3d(vPoints[3].GetX(), vPoints[3].GetY(), vPoints[3].GetZ());
    glEnd();
}
So instead of the 2 lines I wanted to draw (from vPoints[1] to vPoints[2] and from vPoints[2] to vPoints[3]), I am getting 3 lines with 6 vertices, with the two glVertexAttrib3f calls treated as vertices.
Am I doing this correctly, or is there a better or different way to do it?

Sphere Shading OpenGL

I am trying to shade a sphere. I have no idea where to start from. I calculated the vertices and connected them using GL_TRIANGLE_FAN, and I also drew the normals at each vertex. The problem is that I have no idea how to even start doing some shading/lighting. I am using OpenGL 3+. Here is some of my code:
Sphere vertex calculations (which I found online and implemented):
void CreateUnitSphere(int dtheta, int dphi) // dtheta, dphi: angle steps in degrees
{
    GLdouble x, y, z;
    GLdouble magnitude = 0;
    int no_vertice = -1;
    int n;
    int k;
    int theta, phi;
    const double PI = 3.1415926535897;
    GLdouble DTOR = (PI/180); // degrees to radians
    // Setting the color to white
    for (k = 0; k < 10296*3; k += 1)
    {
        sphere_vertices[k].color[0] = 1.0f;
        sphere_vertices[k].color[1] = 1.0f;
        sphere_vertices[k].color[2] = 1.0f;
    }
    for (theta = -90; theta <= 90-dtheta; theta += dtheta) {
        for (phi = 0; phi <= 360-dphi; phi += dphi) {
            x = cos(theta*DTOR) * cos(phi*DTOR);
            y = cos(theta*DTOR) * sin(phi*DTOR);
            z = sin(theta*DTOR);
            // Calculating vertex 1
            no_vertice += 1;
            sphere_vertices[no_vertice].position[0] = x;
            sphere_vertices[no_vertice].position[1] = y;
            sphere_vertices[no_vertice].position[2] = z;
            x = cos((theta+dtheta)*DTOR) * cos(phi*DTOR);
            y = cos((theta+dtheta)*DTOR) * sin(phi*DTOR);
            z = sin((theta+dtheta)*DTOR);
            // Calculating vertex 2
            no_vertice += 1;
            sphere_vertices[no_vertice].position[0] = x;
            sphere_vertices[no_vertice].position[1] = y;
            sphere_vertices[no_vertice].position[2] = z;
            x = cos((theta+dtheta)*DTOR) * cos((phi+dphi)*DTOR);
            y = cos((theta+dtheta)*DTOR) * sin((phi+dphi)*DTOR);
            z = sin((theta+dtheta)*DTOR);
            // Calculating vertex 3
            no_vertice += 1;
            sphere_vertices[no_vertice].position[0] = x;
            sphere_vertices[no_vertice].position[1] = y;
            sphere_vertices[no_vertice].position[2] = z;
            if (theta > -90 && theta < 90) {
                x = cos(theta*DTOR) * cos((phi+dphi)*DTOR);
                y = cos(theta*DTOR) * sin((phi+dphi)*DTOR);
                z = sin(theta*DTOR);
                // Calculating vertex 4
                no_vertice += 1;
                sphere_vertices[no_vertice].position[0] = x;
                sphere_vertices[no_vertice].position[1] = y;
                sphere_vertices[no_vertice].position[2] = z;
            }
        }
    }
    no_vertice = -1;
    int no_index = 10296;
    // Calculate normals and add them to the array of vertices
    for (no_vertice = 0; no_vertice <= 10296; no_vertice += 1) {
        no_index += 1;
        // Getting the sphere's vertices
        x = sphere_vertices[no_vertice].position[0];
        y = sphere_vertices[no_vertice].position[1];
        z = sphere_vertices[no_vertice].position[2];
        // Normalising the vector "norm(Vertex - Center)"
        magnitude = sqrt((x*x) + (y*y) + (z*z));
        // Adding the new vector (the one divided by the magnitude)
        sphere_vertices[no_index].position[0] = (x/magnitude)/0.8;
        sphere_vertices[no_index].position[1] = (y/magnitude)/0.8;
        sphere_vertices[no_index].position[2] = (z/magnitude)/0.8;
        // Adding the vertex's normal (line drawing issue)
        no_index += 1;
        sphere_vertices[no_index].position[0] = sphere_vertices[no_vertice].position[0];
        sphere_vertices[no_index].position[1] = sphere_vertices[no_vertice].position[1];
        sphere_vertices[no_index].position[2] = sphere_vertices[no_vertice].position[2];
    }
}
Here is my sphere without GL_TRIANGLE_FAN, just GL_LINE_STRIP, and this is how I use glDrawArrays:
glDrawArrays(GL_LINE_STRIP, 0, 10296);
glDrawArrays(GL_LINES, 10297, 30888);
Indices 0-10296 are the sphere's vertices.
Indices 10297-30888 are the sphere's normal vertices.
Here is my vertex file:
#version 330
precision highp float;
in vec3 in_Position; // declare position
in vec3 in_Color;
// mvpmatrix is the result of multiplying the model, view, and projection matrices
uniform mat4 mvpmatrix;
out vec3 ex_Color;
void main(void) {
    // Multiply the mvp matrix by the vertex to obtain the final vertex position (mvp was created in *.cpp)
    gl_Position = mvpmatrix * vec4(in_Position, 1.0);
    ex_Color = in_Color;
}
and my fragment file:
#version 330
precision highp float;
in vec3 ex_Color;
out vec4 gl_FragColor;
void main(void) {
    gl_FragColor = vec4(ex_Color, 1.0);
}
Now I know that I need to pass the normals to the vertex and fragment shaders, but how do I do that, and how/where do I implement the lighting calculations and linear interpolation?
Thanks
Basically, you need to calculate the lighting in the vertex shader and pass the vertex color to the fragment shader if you want per-vertex lighting, or pass the normal and light direction as varying variables and calculate everything there for per-pixel lighting.
The main trick here is that when you pass the normal to the fragment shader, it is interpolated between vertices for each fragment, and as a result the shading is very smooth, but also slower.
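A minimal sketch of the per-pixel variant, built on the question's shaders (the in_Normal attribute and the modelmatrix/lightPosition uniforms are assumptions I am adding for illustration, not code from the question):
// Vertex shader: forward the normal and world-space position instead of a baked color
#version 330
in vec3 in_Position;
in vec3 in_Normal;
uniform mat4 mvpmatrix;
uniform mat4 modelmatrix;
out vec3 ex_Normal;
out vec3 ex_WorldPos;
void main(void) {
    gl_Position = mvpmatrix * vec4(in_Position, 1.0);
    ex_WorldPos = (modelmatrix * vec4(in_Position, 1.0)).xyz;
    ex_Normal = mat3(modelmatrix) * in_Normal; // fine as long as the model matrix has uniform scale
}

// Fragment shader: evaluate the Lambert (diffuse) term per fragment
#version 330
in vec3 ex_Normal;
in vec3 ex_WorldPos;
uniform vec3 lightPosition;
out vec4 fragColor;
void main(void) {
    vec3 N = normalize(ex_Normal); // re-normalize: interpolation shortens the vector
    vec3 L = normalize(lightPosition - ex_WorldPos);
    float diffuse = max(dot(N, L), 0.0);
    fragColor = vec4(vec3(diffuse), 1.0);
}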
Here is a very nice article to start with.