Sphere Shading OpenGL

I am trying to shade a sphere. I have no idea where to start. I calculated the vertices, connected them using GL_TRIANGLE_FAN, and also drew the normals at each vertex. The problem is that I have no idea how to even start with shading/lighting. I am using OpenGL 3+. Here is some of my code:
Sphere vertex calculation (which I found online and implemented):
void CreateUnitSphere(int dtheta, int dphi) //dtheta, dphi angle
{
    GLdouble x,y,z;
    GLdouble magnitude=0;
    int no_vertice=-1;
    int n;
    int k;
    int theta,phi;
    const double PI = 3.1415926535897;
    GLdouble DTOR = (PI/180); //degrees to radians

    //setting the color to white
    for (k=0; k<10296*3; k+=1)
    {
        sphere_vertices[k].color[0] = 1.0f;
        sphere_vertices[k].color[1] = 1.0f;
        sphere_vertices[k].color[2] = 1.0f;
    }

    for (theta=-90;theta<=90-dtheta;theta+=dtheta) {
        for (phi=0;phi<=360-dphi;phi+=dphi) {
            x = cos(theta*DTOR) * cos(phi*DTOR);
            y = cos(theta*DTOR) * sin(phi*DTOR);
            z = sin(theta*DTOR);
            //calculating Vertex 1
            no_vertice+=1;
            sphere_vertices[no_vertice].position[0] = x;
            sphere_vertices[no_vertice].position[1] = y;
            sphere_vertices[no_vertice].position[2] = z;

            x = cos((theta+dtheta)*DTOR) * cos(phi*DTOR);
            y = cos((theta+dtheta)*DTOR) * sin(phi*DTOR);
            z = sin((theta+dtheta)*DTOR);
            //calculating Vertex 2
            no_vertice+=1;
            sphere_vertices[no_vertice].position[0] = x;
            sphere_vertices[no_vertice].position[1] = y;
            sphere_vertices[no_vertice].position[2] = z;

            x = cos((theta+dtheta)*DTOR) * cos((phi+dphi)*DTOR);
            y = cos((theta+dtheta)*DTOR) * sin((phi+dphi)*DTOR);
            z = sin((theta+dtheta)*DTOR);
            //calculating Vertex 3
            no_vertice+=1;
            sphere_vertices[no_vertice].position[0] = x;
            sphere_vertices[no_vertice].position[1] = y;
            sphere_vertices[no_vertice].position[2] = z;

            if (theta > -90 && theta < 90) {
                x = cos(theta*DTOR) * cos((phi+dphi)*DTOR);
                y = cos(theta*DTOR) * sin((phi+dphi)*DTOR);
                z = sin(theta*DTOR);
                //calculating Vertex 4
                no_vertice+=1;
                sphere_vertices[no_vertice].position[0] = x;
                sphere_vertices[no_vertice].position[1] = y;
                sphere_vertices[no_vertice].position[2] = z;
            }
        }
    }

    no_vertice = -1;
    int no_index=10296;
    //calculate normals and add them to the array of vertices
    for (no_vertice=0; no_vertice<=10296; no_vertice+=1) {
        no_index+=1;
        //getting the sphere's vertices
        x=sphere_vertices[no_vertice].position[0];
        y=sphere_vertices[no_vertice].position[1];
        z=sphere_vertices[no_vertice].position[2];
        //normalising vector "norm(Vertex - Center)"
        magnitude = sqrt((x*x) + (y*y) + (z*z));
        //adding the new vector (the one divided by the magnitude)
        sphere_vertices[no_index].position[0] = (x/magnitude)/0.8;
        sphere_vertices[no_index].position[1] = (y/magnitude)/0.8;
        sphere_vertices[no_index].position[2] = (z/magnitude)/0.8;
        //adding the vertex's normal (line drawing issue)
        no_index+=1;
        sphere_vertices[no_index].position[0] = sphere_vertices[no_vertice].position[0];
        sphere_vertices[no_index].position[1] = sphere_vertices[no_vertice].position[1];
        sphere_vertices[no_index].position[2] = sphere_vertices[no_vertice].position[2];
    }
}
Here is my sphere drawn with GL_LINE_STRIP only (no GL_TRIANGLE_FAN), and this is how I use glDrawArrays:
glDrawArrays(GL_LINE_STRIP, 0, 10296);
glDrawArrays(GL_LINES, 10297, 30888);
Indices 0–10296 are the sphere's vertices, and indices 10297–30888 are the vertices of the normal lines.
Here is my vertex shader:
precision highp float;
in vec3 in_Position; //declare position
in vec3 in_Color;
// mvpmatrix is the result of multiplying the model, view, and projection matrices
uniform mat4 mvpmatrix;
out vec3 ex_Color;

void main(void) {
    // Multiply the mvp matrix by the vertex to obtain our final vertex position (mvp was created in *.cpp)
    gl_Position = mvpmatrix * vec4(in_Position, 1.0);
    ex_Color = in_Color;
}
and here is my fragment shader:
#version 330
precision highp float;
in vec3 ex_Color;
out vec4 gl_FragColor;

void main(void) {
    gl_FragColor = vec4(ex_Color, 1.0);
}
Now I know that I need to pass the normals to the vertex and fragment shaders, but how do I do that, and where do I implement the lighting calculations and the interpolation?
Thanks

Basically, you either calculate the lighting in the vertex shader and pass the resulting vertex color to the fragment shader (per-vertex lighting), or you pass the normal and light direction as varying variables and do the whole calculation in the fragment shader (per-pixel lighting).
The main trick is that when you pass the normal to the fragment shader it gets interpolated between the vertices for each fragment, so the resulting shading is very smooth, but also slower.
Here is a very nice article to start with.
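For example, a minimal per-pixel diffuse (Lambertian) setup could look roughly like the sketch below. The attribute in_Normal, the uniforms normalmatrix and lightDir, and the output out_Color are assumed names that you would have to wire up yourself on the C++ side (bind the normal attribute with glVertexAttribPointer just like the position, and upload the uniforms with glUniformMatrix3fv / glUniform3f). For a unit sphere centered at the origin, the per-vertex normal is simply the normalized vertex position.

Vertex shader sketch:

#version 330
in vec3 in_Position;
in vec3 in_Normal;           // per-vertex normal, uploaded like the position
uniform mat4 mvpmatrix;
uniform mat3 normalmatrix;   // e.g. inverse-transpose of the model-view matrix
out vec3 ex_Normal;

void main(void) {
    gl_Position = mvpmatrix * vec4(in_Position, 1.0);
    ex_Normal = normalize(normalmatrix * in_Normal);  // gets interpolated per fragment
}

Fragment shader sketch:

#version 330
in vec3 ex_Normal;
uniform vec3 lightDir;       // direction towards the light, in the same space as ex_Normal
out vec4 out_Color;

void main(void) {
    float diff = max(dot(normalize(ex_Normal), normalize(lightDir)), 0.0);
    vec3 ambient = vec3(0.1);                            // small constant ambient term
    out_Color = vec4(ambient + diff * vec3(1.0), 1.0);   // white diffuse material
}

For per-vertex (Gouraud) shading you would instead compute diff in the vertex shader and pass the resulting color down to the fragment shader, exactly as described above.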

Related

How do I align the raytraced spheres from my fragment shader with GL_POINTS?

I have a very simple shader program that takes in a bunch of position data as GL_POINTS, which generate screen-aligned squares of fragments as usual, with a size depending on depth. In the fragment shader I wanted to draw a very simple ray-traced sphere for each one, with just the shadow on the side of the sphere opposite the light. I went to this shadertoy to try to figure it out on my own. I used the sphIntersect function for ray-sphere intersection, and sphNormal to get the normal vectors on the sphere for lighting. The problem is that the spheres do not align with the squares of fragments, which causes them to be cut off. This is because I am not sure how to match the projection of the spheres with the vertex positions so that they line up. Can I have an explanation of how to do this?
Here is a picture for reference.
Here are my vertex and fragment shaders for reference:
//vertex shader:
#version 460
layout(location = 0) in vec4 position; // position of each point in space
layout(location = 1) in vec4 color;    // color of each point in space
layout(location = 2) uniform mat4 view_matrix; // projection * camera matrix
layout(location = 6) uniform mat4 cam_matrix;  // just the camera matrix
out vec4 col;  // color of vertex
out vec4 posi; // position of vertex

void main() {
    vec4 p = view_matrix * vec4(position.xyz, 1.0);
    gl_PointSize = clamp(1024.0 * position.w / p.z, 0.0, 4000.0);
    gl_Position = p;
    col = color;
    posi = cam_matrix * position;
}
//fragment shader:
#version 460
in vec4 col;  // color of vertex associated with this fragment
in vec4 posi; // position of the vertex associated with this fragment relative to camera
out vec4 f_color;
layout (depth_less) out float gl_FragDepth;

float sphIntersect( in vec3 ro, in vec3 rd, in vec4 sph )
{
    vec3 oc = ro - sph.xyz;
    float b = dot( oc, rd );
    float c = dot( oc, oc ) - sph.w*sph.w;
    float h = b*b - c;
    if( h<0.0 ) return -1.0;
    return -b - sqrt( h );
}

vec3 sphNormal( in vec3 pos, in vec4 sph )
{
    return normalize(pos-sph.xyz);
}

void main() {
    vec4 c = clamp(col, 0.0, 1.0);
    vec2 p = ((2.0*gl_FragCoord.xy)-vec2(1920.0, 1080.0)) / 2.0;
    vec3 ro = vec3(0.0, 0.0, -960.0);
    vec3 rd = normalize(vec3(p.x, p.y, 960.0));
    vec3 lig = normalize(vec3(0.6,0.3,0.1));
    vec4 k = vec4(posi.x, posi.y, -posi.z, 2.0*posi.w);
    float t = sphIntersect(ro, rd, k);
    vec3 ps = ro + (t * rd);
    vec3 nor = sphNormal(ps, k);
    if(t < 0.0) c = vec4(1.0);
    else c.xyz *= clamp(dot(nor,lig), 0.0, 1.0);
    f_color = c;
    gl_FragDepth = t * 0.0001;
}
Looks like you have many spheres so I would do this:
Input data
I would have a VBO containing x,y,z,r describing your spheres. You will also need your view transform (uniform) so that you can create a ray direction and start position for each fragment. Something like my vertex shader here:
Reflection and refraction impossible without recursive ray tracing?
Create a BBOX in the geometry shader and convert your POINT into a QUAD or POLYGON
Note that you have to account for perspective. If you are not familiar with geometry shaders, see:
rendering cubics in GLSL
where I emit a sequence of OBBs from input lines...
In the fragment shader, raytrace the sphere
You have to compute the intersection between the sphere and the ray, choose the closer intersection, and compute its depth and normal (for lighting). If there is no intersection, you have to discard the fragment!
From what I can see in your images, your QUADs do not correspond to your spheres, hence the clipping. You also do not discard fragments with no intersection, so you overwrite already-rendered content around the last rendered spheres with the background color, and only a single sphere is left per QUAD regardless of how many spheres are really there...
To create a ray direction that matches a perspective matrix from screen space, the following ray direction formula can be used:
vec3 rd = normalize(vec3(((2.0 / screenWidth) * gl_FragCoord.xy) - vec2(aspectRatio, 1.0), -proj_matrix[1][1]));
The value of 2.0 / screenWidth can be pre-computed, or the OpenGL built-in uniform structs can be used.
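Putting this together with the sphIntersect/sphNormal helpers from the question, a rough sketch of the relevant part of main() could look as follows. This is not a drop-in replacement: screenWidth, aspectRatio and proj_matrix are assumed uniforms, and posi is assumed to hold the sphere center in camera space (negative z in front of the camera) with the radius in posi.w.

vec3 ro = vec3(0.0);                                       // camera-space ray origin
vec3 rd = normalize(vec3(((2.0 / screenWidth) * gl_FragCoord.xy)
                         - vec2(aspectRatio, 1.0),
                         -proj_matrix[1][1]));             // ray matching the projection
float t = sphIntersect(ro, rd, vec4(posi.xyz, posi.w));
if (t < 0.0)
    discard;                                               // miss: keep what was rendered behind
vec3 ps  = ro + t * rd;                                    // camera-space hit point
vec3 nor = sphNormal(ps, vec4(posi.xyz, posi.w));
f_color  = vec4(col.rgb * clamp(dot(nor, lig), 0.0, 1.0), col.a);
vec4 clip = proj_matrix * vec4(ps, 1.0);                   // write a proper depth for the hit
gl_FragDepth = clip.z / clip.w * 0.5 + 0.5;                // NDC depth mapped to [0,1]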
To get a bounding box or other shape for your spheres, it is very important to use camera-facing shapes, and not camera-plane-facing shapes. Use the following process where position is the incoming VBO position data, and the w-component of position is the radius:
vec4 p = vec4((cam_matrix * vec4(position.xyz, 1.0)).xyz, position.w);
o.vpos = p;
float l2 = dot(p.xyz, p.xyz);
float r2 = p.w * p.w;
float k = 1.0 - (r2/l2);
float radius = p.w * sqrt(k);
if(l2 < r2) {
    p = vec4(0.0, 0.0, -p.w * 0.49, p.w);
    radius = p.w;
    k = 0.0;
}
vec3 hx = radius * normalize(vec3(-p.z, 0.0, p.x));
vec3 hy = radius * normalize(vec3(-p.x * p.y, p.z * p.z + p.x * p.x, -p.z * p.y));
p.xyz *= k;
Then use hx and hy as basis vectors for whatever 2D shape you want the billboard to have. Don't forget to multiply each final vertex by the perspective matrix to get its clip-space position. Here is a visualization of the billboarding on Desmos using a hexagon shape: https://www.desmos.com/calculator/yeeew6tqwx
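As a small hedged sketch, emitting one corner of a camera-facing quad with those basis vectors might look like this, where corner is one of (-1,-1), (1,-1), (-1,1), (1,1) and proj_matrix is an assumed projection uniform:

// p.xyz is the pushed-in, camera-space billboard center from the code above;
// hx and hy are the camera-facing basis vectors, already scaled by the bounding radius
vec2 corner = vec2(-1.0, 1.0);                        // pick the corner for this vertex
vec3 camPos = p.xyz + corner.x * hx + corner.y * hy;  // offset the center along the basis
gl_Position = proj_matrix * vec4(camPos, 1.0);        // project to clip space last

In a geometry shader you would emit the four corners as a triangle strip; with instanced rendering you could instead select the corner per vertex from gl_VertexID.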

World To Screenspace for Mode7 effect (fragment shader)

I have a fragment shader that transforms the view into something resembling Mode 7.
I want to know the screen-space x,y coordinates for a given world position.
Since the transformation happens in the fragment shader, I can't simply invert a matrix. This is the fragment shader code:
uniform float Fov; //1.4
uniform float Horizon; //0.6
uniform float Scaling; //0.8
void main() {
    vec2 pos = uv.xy - vec2(0.5, Horizon);
    vec3 p = vec3(pos.x, pos.y, pos.y + Fov);
    vec2 s = vec2(p.x/p.z, p.y/p.z) * Scaling;
    s.x += 0.5;
    s.y += screenRatio;
    gl_FragColor = texture2D(ColorTexture, s);
}
It transforms pixels in a pseudo-3D way.
What I want to do is get a screen-space coordinate for a given world position (in normal code, not shaders).
How do I reverse the order of operations above?
This is what I have right now:
(GAME_WIDTH and GAME_HEIGHT are constants and hold pixel values, e.g. 320x240)
vec2 WorldToScreenspace(float x, float y) {
    // normalize coordinates 0..1, as x,y are in pixels
    x = x/GAME_WIDTH - 0.5;
    y = y/GAME_HEIGHT - Horizon;
    // as z depends on a y value I have yet to calculate, how can I calc it?
    float z = ??;
    // invert: vec2 s = vec2(p.x/p.z, p.y/p.z) * Scaling;
    float sx = x*z / Scaling;
    float sy = y*z / Scaling;
    // invert: pos = uv.xy - vec2(0.5, Horizon);
    sx += 0.5;
    sy += screenRatio;
    // convert back to screen space
    return new vec2(sx * GAME_WIDTH, sy * GAME_HEIGHT);
}

Billboarding using Qt3D 2.0

I am looking for the best way to create a billboard in Qt3D. I would like a plane which faces the camera wherever it is and does not change size when the camera dollies forward or back. I have read how to do this using GLSL vertex and geometry shaders, but I am looking for the Qt3D way, unless custom shaders are the most efficient and best way of billboarding.
I have looked, and it appears I can set the matrix on a QTransform via properties, but it isn't clear to me how I would manipulate the matrix, or perhaps there is a better way? I am using the C++ API, but a QML answer would do; I could port it to C++.
If you want to draw just one billboard, you can add a plane and rotate it whenever the camera moves. However, if you want to do this efficiently with thousands or millions of billboards, I recommend using custom shaders. We did this to draw impostor spheres in Qt3D.
However, we didn't use a geometry shader because we were targeting systems that didn't support geometry shaders. Instead, we used only the vertex shader, placing four vertices at the origin and moving them in the shader. To create many copies, we used instanced drawing, moving each set of four vertices according to the position of its sphere. Finally, we moved each of the four vertices of each sphere so that they form a billboard that always faces the camera.
Start by subclassing QGeometry and creating a buffer functor that creates four points, all at the origin (see spherespointgeometry.cpp). Give each point an ID that we can use later. If you use geometry shaders, the ID is not needed and you can get away with creating only one vertex.
class SpheresPointVertexDataFunctor : public Qt3DRender::QBufferDataGenerator
{
public:
    SpheresPointVertexDataFunctor()
    {
    }

    QByteArray operator ()() Q_DECL_OVERRIDE
    {
        const int verticesCount = 4;
        // vec3 pos
        const quint32 vertexSize = (3+1) * sizeof(float);

        QByteArray verticesData;
        verticesData.resize(vertexSize*verticesCount);
        float *verticesPtr = reinterpret_cast<float*>(verticesData.data());

        // Vertex 1
        *verticesPtr++ = 0.0;
        *verticesPtr++ = 0.0;
        *verticesPtr++ = 0.0;
        // VertexID 1
        *verticesPtr++ = 0.0;

        // Vertex 2
        *verticesPtr++ = 0.0;
        *verticesPtr++ = 0.0;
        *verticesPtr++ = 0.0;
        // VertexID 2
        *verticesPtr++ = 1.0;

        // Vertex 3
        *verticesPtr++ = 0.0;
        *verticesPtr++ = 0.0;
        *verticesPtr++ = 0.0;
        // VertexID 3
        *verticesPtr++ = 2.0;

        // Vertex 4
        *verticesPtr++ = 0.0;
        *verticesPtr++ = 0.0;
        *verticesPtr++ = 0.0;
        // VertexID 4
        *verticesPtr++ = 3.0;

        return verticesData;
    }

    bool operator ==(const QBufferDataGenerator &other) const Q_DECL_OVERRIDE
    {
        Q_UNUSED(other);
        return true;
    }

    QT3D_FUNCTOR(SpheresPointVertexDataFunctor)
};
For the real positions, we used a separate QBuffer. We also set color and scale, but I have omitted those here (see spheredata.cpp):
void SphereData::setPositions(QVector<QVector3D> positions, QVector3D color, float scale)
{
    QByteArray ba;
    ba.resize(positions.size() * sizeof(QVector3D));
    QVector3D *vboData = reinterpret_cast<QVector3D *>(ba.data());
    for(int i=0; i<positions.size(); i++) {
        QVector3D &position = vboData[i];
        position = positions[i];
    }
    m_buffer->setData(ba);
    m_count = positions.count();
}
Then, in QML, we connected the geometry with the buffer in a QGeometryRenderer. This can also be done in C++, if you prefer (see Spheres.qml):
GeometryRenderer {
    id: spheresMeshInstanced
    primitiveType: GeometryRenderer.TriangleStrip
    enabled: instanceCount != 0
    instanceCount: sphereData.count
    geometry: SpheresPointGeometry {
        attributes: [
            Attribute {
                name: "pos"
                attributeType: Attribute.VertexAttribute
                vertexBaseType: Attribute.Float
                vertexSize: 3
                byteOffset: 0
                byteStride: (3 + 3 + 1) * 4
                divisor: 1
                buffer: sphereData ? sphereData.buffer : null
            }
        ]
    }
}
Finally, we created custom shaders to draw the billboards. Note that because we were drawing impostor spheres, the billboard size was increased to handle raytracing in the fragment shader from awkward angles. You likely do not need the 2.0*0.6 factor in general.
Vertex shader:
#version 330

in vec3 vertexPosition;
in float vertexId;
in vec3 pos;
in vec3 col;
in float scale;

uniform vec3 eyePosition = vec3(0.0, 0.0, 0.0);
uniform mat4 modelMatrix;
uniform mat4 mvp;

out vec3 modelSpherePosition;
out vec3 modelPosition;
out vec3 color;
out vec2 planePosition;
out float radius;

vec3 makePerpendicular(vec3 v) {
    if(v.x == 0.0 && v.y == 0.0) {
        if(v.z == 0.0) {
            return vec3(0.0, 0.0, 0.0);
        }
        return vec3(0.0, 1.0, 0.0);
    }
    return vec3(-v.y, v.x, 0.0);
}

void main() {
    vec3 position = vertexPosition + pos;
    color = col;
    radius = scale;
    modelSpherePosition = (modelMatrix * vec4(position, 1.0)).xyz;
    vec3 view = normalize(position - eyePosition);
    vec3 right = normalize(makePerpendicular(view));
    vec3 up = cross(right, view);
    float texCoordX = 1.0 - 2.0*(float(vertexId==0.0) + float(vertexId==2.0));
    float texCoordY = 1.0 - 2.0*(float(vertexId==0.0) + float(vertexId==1.0));
    planePosition = vec2(texCoordX, texCoordY);
    position += 2*0.6*(-up - right)*(scale*float(vertexId==0.0));
    position += 2*0.6*(-up + right)*(scale*float(vertexId==1.0));
    position += 2*0.6*(up - right)*(scale*float(vertexId==2.0));
    position += 2*0.6*(up + right)*(scale*float(vertexId==3.0));
    vec4 modelPositionTmp = modelMatrix * vec4(position, 1.0);
    modelPosition = modelPositionTmp.xyz;
    gl_Position = mvp*vec4(position, 1.0);
}
Fragment shader:
#version 330

in vec3 modelPosition;
in vec3 modelSpherePosition;
in vec3 color;
in vec2 planePosition;
in float radius;

out vec4 fragColor;

uniform mat4 modelView;
uniform mat4 inverseModelView;
uniform mat4 inverseViewMatrix;
uniform vec3 eyePosition;
uniform vec3 viewVector;

void main(void) {
    vec3 rayDirection = eyePosition - modelPosition;
    vec3 rayOrigin = modelPosition - modelSpherePosition;
    vec3 E = rayOrigin;
    vec3 D = rayDirection;

    // Sphere equation
    //     x^2 + y^2 + z^2 = r^2
    // Ray equation is
    //     P(t) = E + t*D
    // We substitute ray into sphere equation to get
    //     (Ex + Dx * t)^2 + (Ey + Dy * t)^2 + (Ez + Dz * t)^2 = r^2
    float r2 = radius*radius;
    float a = D.x*D.x + D.y*D.y + D.z*D.z;
    float b = 2.0*E.x*D.x + 2.0*E.y*D.y + 2.0*E.z*D.z;
    float c = E.x*E.x + E.y*E.y + E.z*E.z - r2;

    // discriminant of sphere equation
    float d = b*b - 4.0*a*c;
    if(d < 0.0) {
        discard;
    }

    float t = (-b + sqrt(d))/(2.0*a);
    vec3 sphereIntersection = rayOrigin + t * rayDirection;
    vec3 normal = normalize(sphereIntersection);
    vec3 normalDotCamera = color*dot(normal, normalize(rayDirection));
    float pi = 3.1415926535897932384626433832795;
    vec3 position = modelSpherePosition + sphereIntersection;

    // flat red
    fragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
It has been some time since we first implemented this, and there might be easier ways to do it now, but this should give you an idea of the pieces you need.

Recreating scene from depth encoded image in Vulkan vs OpenGL

I'm in the process of translating a piece of OpenGL code to Vulkan. The code recreates a rendered scene from an image (on a hemisphere projection) with depth information encoded. Note that I also load the model view matrix used for the projection to recreate the scene. The translation has been pretty straightforward but I'm running into issues due to the new Vulkan coordinate system.
The original OpenGL shader with comments follows:
#version 430
layout (triangles) in;
layout (triangle_strip, max_vertices = 3) out;

in vec2 posGeom[];
out vec2 texCoord;

uniform mat4 view;
uniform mat4 projection;
uniform float threshold;
uniform vec3 quantization;
uniform mat4 inverseStaticView;
uniform sampler2D rgbdTexture;

//get the image space for each pixel of our hemisphere image
vec3 getSphereRay(const vec2 coord) {
    //get length of ray from camera to point on image plane
    float len = 1 - coord.x * coord.x - coord.y * coord.y;
    if (len > 0)
        return vec3(coord, -sqrt(len)); //scale to unit length vector as viewing ray
    else
        return vec3(0);
}

vec4 getPosition(const in vec2 inCoord, const in float depth) {
    vec2 coord = inCoord;
    //reverse the stretching from sphere to quad (based on y-coordinate)
    float percent = sqrt(1.0 - coord.y * coord.y);
    coord.x = coord.x * percent;
    //scale ray with corresponding depth
    vec3 normal = getSphereRay(coord) * depth;
    //move from image space to world space by inverse view matrix
    return inverseStaticView * vec4(normal, 1);
}

bool hasZeroDepth = false;

//get the real depth from quantized and packed depth by inverting the gamma correction and inverting min, max
float getDepth(int idx) {
    float depth = texture(rgbdTexture, posGeom[idx] * 0.5 + 0.5).w;
    if(depth == 0)
        hasZeroDepth = true;
    float minDepth = quantization.x;
    float maxDepth = quantization.y;
    float gamma = quantization.z;
    depth = pow(depth, gamma);
    depth = depth * (maxDepth - minDepth) + minDepth;
    return depth;
}

//emit the position and texcoord
void emitPosition(int idx, float depth) {
    texCoord = posGeom[idx] * 0.5 + 0.5;
    gl_Position = projection * view * getPosition(posGeom[idx], depth);
    EmitVertex();
}

void main() {
    float d0 = getDepth(0);
    float d1 = getDepth(1);
    float d2 = getDepth(2);
    //do not emit tris with zero (invalid) depth
    if(!hasZeroDepth) {
        float minDepth = min(d0, min(d1, d2));
        float maxDepth = max(d0, max(d1, d2));
        float minDist = maxDepth - minDepth;
        float avgDepth = (d0 + d1 + d2) / 3;
        float thres = threshold;
        //look at tri stretching factor
        if(minDist / avgDepth < thres) {
            //emit original tri
            emitPosition(0, d0);
            emitPosition(1, d1);
            emitPosition(2, d2);
        } else {
            //emit tri with maxDepth to only show background
            emitPosition(0, maxDepth);
            emitPosition(1, maxDepth);
            emitPosition(2, maxDepth);
        }
    }
}
In the Vulkan shader, I account for the Vulkan coordinate system by inverting the y value. I also must normalize the world values for reasons that are unclear to me (otherwise what's rendered is completely nonsense). The shader code follows:
#version 450
layout (triangles) in;
layout (triangle_strip, max_vertices = 3) out;

layout(binding = 0) uniform UniformBufferObject {
    mat4 modelView;
    mat4 inverseStaticModelView;
    float quantization;
} ubo;
layout(binding = 1) uniform sampler2D texSampler;

layout(location = 0) in vec2 posGeom[];
layout(location = 0) out vec2 texCoord;

bool hasZeroDepth = false;
float minDepth = 0;
float maxDepth = 1.0;

vec3 unproject(vec2 win) {
    float scale = 1 - win.y * win.y;
    // Invert y to account for Vulkan coordinate system.
    float y = win.y * -1;
    // Scale x to account for hemisphere projection.
    float x = win.x * scale;
    float z = -sqrt(1 - x * x - y * y);
    if(z < 0){
        vec4 outVals = ubo.inverseStaticModelView * vec4(x, y, z, 1.0);
        return vec3(outVals[0], outVals[1], outVals[2]) / outVals.w;
    } else
        return vec3(0);
}

vec3 reconstructWorldPosition(vec2 ndc, float depth) {
    vec3 pos = unproject(ndc);
    return depth * normalize(pos);
}

float getDepth(int idx) {
    float depth = texture(texSampler, posGeom[idx] * 0.5 + 0.5).w;
    if(depth == 0)
        hasZeroDepth = true;
    depth = pow(depth, ubo.quantization);
    return depth;
}

void emitPosition(int idx, float depth) {
    vec2 pos = posGeom[idx].xy;
    texCoord = pos * 0.5 + 0.5;
    vec3 positionFromDepth = reconstructWorldPosition(pos, depth);
    gl_Position = ubo.modelView * vec4(positionFromDepth, 1);
    EmitVertex();
}

void main() {
    float d0 = getDepth(0);
    float d1 = getDepth(1);
    float d2 = getDepth(2);
    if(!hasZeroDepth) {
        float minDepth = min(d0, min(d1, d2));
        float maxDepth = max(d0, max(d1, d2));
        float minDist = maxDepth - minDepth;
        float avgDepth = (d0 + d1 + d2) / 3.0;
        float thres = 0.1;
        if(minDist / avgDepth < thres ) {
            emitPosition(0, d0);
            emitPosition(1, d1);
            emitPosition(2, d2);
        } else {
            emitPosition(0, maxDepth);
            emitPosition(1, maxDepth);
            emitPosition(2, maxDepth);
        }
    }
}
Images of the output of the two programs are contained in this album: http://imgur.com/a/KUl57
The Vulkan output appears to be almost correct, except for some odd artifacts in the lower-left part of the scene. My suspicion is that the scaling of the x coordinate to account for the hemisphere projection is causing the issue. I've played around with the scaling and other parts of the shader, but I can't seem to get it right. Am I overlooking something else that differs between Vulkan and OpenGL, especially with regard to the coordinate system?

Updating Attribute variables in Vertex Shader (glVertexAttrib3f) working as glVertex

I am having trouble using attribute variables to get a value into the vertex shader. I want to provide the geometry shader with one of the points from the previous primitive (line) for some calculation. I am providing this point via a vec3 attribute variable (Ppoint) in the vertex shader, and then passing it to the geometry shader through an out variable in the vertex shader and an in variable in the geometry shader (pointPass).
The problem is that when I update the attribute variable inside the glBegin()/glEnd() block while drawing the lines, the values passed to glVertexAttrib3f are treated as vertices, and a line is also rendered to those points. This causes some extra lines to be displayed and disturbs all the geometry shader functionality.
Here is my code for all the shaders and my OpenGL program that draws the lines.
Vertex Shader
#version 330 compatibility
out vec3 pointPass;
attribute vec3 Ppoint;

void main()
{
    pointPass = Ppoint;
    gl_Position = gl_Vertex;
}
Geometry Shader
#version 330 compatibility
in vec3 pointPass[];
out vec4 colorFrag;
layout(lines) in;
// 100 vertices are not actually required, specified more for trial
layout(triangle_strip, max_vertices=100) out;

vec3 getA(vec3 axis){
    vec3 a;
    a.x = 1.0;
    a.y = 1.0;
    a.z = -(axis.x + axis.y)/axis.z;
    a = normalize(a);
    return a;
}

vec3 getB(vec3 axis, vec3 a){
    vec3 b;
    b.x = (a.y*axis.z - a.z*axis.y);
    b.y = (a.z*axis.x - a.x*axis.z);
    b.z = (a.x*axis.y - a.y*axis.x);
    b = normalize(b);
    return b;
}

void main()
{
    vec3 axis0, axis1, v0, v1, v2;
    float radius = 0.5;
    float rotation = 0.0f;
    float pi = 3.1416;
    int numPoints = 15;
    vec3 p1, p2, p3, p4;
    int count = 0, i;
    float increment = 2*pi/numPoints;

    v0 = pointPass[0];
    v1 = gl_in[0].gl_Position.xyz;
    v2 = gl_in[1].gl_Position.xyz;

    axis1 = v1 - v2;
    axis1 = normalize(axis1);
    vec3 a1 = getA(axis1);
    vec3 b1 = getB(axis1, a1);

    axis0 = v0 - v2;
    axis0 = normalize(axis0);
    vec3 a0 = getA(axis0);
    vec3 b0 = getB(axis0, a0);

    // Rotation with theta
    for(rotation = 0; rotation <= 2*pi; rotation += increment){
        p1 = v1 + radius*cos(rotation)*a0 + radius*sin(rotation)*b0;
        p2 = v1 + radius*cos(rotation + increment)*a0 + radius*sin(rotation + increment)*b0;
        p3 = v2 + radius*cos(rotation)*a1 + radius*sin(rotation)*b1;
        p4 = v2 + radius*cos(rotation + increment)*a1 + radius*sin(rotation + increment)*b1;

        // FIRST Triangle
        // FIRST vertex
        gl_Position = (gl_ModelViewProjectionMatrix*vec4(p3, 1.0));
        EmitVertex();
        // SECOND vertex
        gl_Position = (gl_ModelViewProjectionMatrix*vec4(p1, 1.0));
        EmitVertex();
        // THIRD vertex
        gl_Position = (gl_ModelViewProjectionMatrix*vec4(p4, 1.0));
        EmitVertex();
        // SECOND Triangle
        // FIRST vertex
        gl_Position = (gl_ModelViewProjectionMatrix*vec4(p2, 1.0));
        EmitVertex();
    }
    EndPrimitive();
}
Fragment Shader
#version 330 compatibility
in vec4 colorFrag;

void main()
{
    gl_FragColor = colorFrag;
}
OpenGL program for drawing lines
// vPoints is a std::vector of a 3D vector class created by me.
void drawLines(){
    float angle = 0.0f;
    int numLines = 30;
    int count = 0;
    float disp = 0.30f;
    float radius_x = 5.0;
    float radius_y = 5.0;
    vPoints.resize(numLines+2);

    // Loop around in a circle and specify even points along the spiral
    float increment = (float)(2*GL_PI/numLines);
    for(angle = 0.0f; angle < (2.0f*GL_PI); angle += increment)
    {
        // Calculate x and y position of the next vertex
        float x1 = radius_x*sin(angle);
        float y1 = radius_y*cos(angle);
        float z1 = count*disp;
        vPoints[count].SetVector(x1, y1, z1);
        count++;
    }

    // Drawing only first two line segments for testing
    glBegin(GL_LINES);
    int pointPassLocation = glGetAttribLocation(programID, "Ppoint");

    // This is also considered as a vertex and a line is drawn from this point to vPoints[1]
    glVertexAttrib3f(pointPassLocation, vPoints[0].GetX(), vPoints[0].GetY(), vPoints[0].GetZ());
    glVertex3d(vPoints[1].GetX(), vPoints[1].GetY(), vPoints[1].GetZ());
    glVertex3d(vPoints[2].GetX(), vPoints[2].GetY(), vPoints[2].GetZ());

    // Again this is also considered as a point and a line is drawn from vPoints[2] to this point.
    glVertexAttrib3f(pointPassLocation, vPoints[1].GetX(), vPoints[1].GetY(), vPoints[1].GetZ());
    glVertex3d(vPoints[2].GetX(), vPoints[2].GetY(), vPoints[2].GetZ());
    glVertex3d(vPoints[3].GetX(), vPoints[3].GetY(), vPoints[3].GetZ());
    glEnd();
}
So instead of the 2 lines I wanted to draw, from vPoints[1] to vPoints[2] and from vPoints[2] to vPoints[3], I am getting 3 lines with 6 vertices, with the two glVertexAttrib3f calls treated as vertices.
Am I doing this correctly, or is there a better or different way to do it?