Is it possible to use OpenGL point sprites to simulate billboard sprites? - opengl

I was trying to set up point sprites in OpenGL so that they change size with distance just as a billboarded sprite would, but I can't get the values in GL_POINT_DISTANCE_ATTENUATION_ARB to do anything useful. Is there a combination of values that would match a given projection? Is what I'm trying to do even possible?
Render code being used:
glPointParameterfARB = (PFNGLPOINTPARAMETERFARBPROC)wglGetProcAddress("glPointParameterfARB");
glPointParameterfvARB = (PFNGLPOINTPARAMETERFVARBPROC)wglGetProcAddress("glPointParameterfvARB");
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
gluPerspective(100.0, 800.0/600.0, 0.1, 10.0);
float quadratic[] = { 5.0f, 0.1f, 10.0f };
glPointParameterfvARB( GL_POINT_DISTANCE_ATTENUATION_ARB, quadratic );
float maxSize = 0.0f;
glGetFloatv( GL_POINT_SIZE_MAX_ARB, &maxSize );
if( maxSize > 100.0f ) maxSize = 100.0f;
glPointSize( maxSize );
glPointParameterfARB( GL_POINT_FADE_THRESHOLD_SIZE_ARB, 0.1f );
glPointParameterfARB( GL_POINT_SIZE_MIN_ARB, 0.1f );
glPointParameterfARB( GL_POINT_SIZE_MAX_ARB, maxSize );
glTexEnvf( GL_POINT_SPRITE_ARB, GL_COORD_REPLACE_ARB, GL_TRUE );
glEnable( GL_POINT_SPRITE_ARB );
glScalef(0.75,1,1);
glTranslatef(0.00,0.0,-1.0);
glScalef(0.5,0.5,0.5);
glRotatef(counter*0.1+0.5,1.0,1.0,0.0);
glBegin( GL_POINTS );
for( int i = 0; i < 100; ++i )
{
    glColor4f( i%10*0.1, i/10*0.1, 0.5, 1.0f );
    glVertex3f( i%10*0.2-1.0, i/10*0.2-1.0,
                ((i%10-5)*(i%10-5)+(i/10-5)*(i/10-5))*0.01 );
}
glEnd();
glDisable( GL_POINT_SPRITE_ARB );

Here's how I make my poor man's approach to scaling the point size:
void render() {
    glEnable(GL_VERTEX_PROGRAM_POINT_SIZE_ARB);
    glHint(GL_POINT_SMOOTH_HINT, GL_NICEST);
    glEnable(GL_POINT_SPRITE);
    glActiveTexture(GL_TEXTURE0);
    glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE);
    /* Activate shader program here */
    /* Send pointSize to shader program */
    glBegin(GL_POINTS);
    /* Render points here */
    glVertex3f(...);
    glEnd(); // glEnd() takes no arguments
}
Vertex shader:
uniform float pointSize;
void main() {
    gl_Position = ftransform();
    gl_PointSize = pointSize / gl_Position.w;
}
You can do whatever you want in the fragment shader, but you'll have to compute the color, lighting and texturing yourself.
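For example, a minimal texturing-only fragment shader for this setup might look like the sketch below; it assumes a texture is bound to unit 0 and GL_COORD_REPLACE is enabled as in render() above, and the sampler name spriteTex is made up here:
uniform sampler2D spriteTex; // hypothetical sampler on texture unit 0
void main() {
    // With GL_COORD_REPLACE enabled, gl_TexCoord[0].st is generated per
    // fragment across the point sprite (0..1 in both directions).
    gl_FragColor = gl_Color * texture2D(spriteTex, gl_TexCoord[0].st);
}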

GLSL aside, doing what you want is pretty simple with distance attenuation. The fixed-function attenuation scales the point size by sqrt(1/(a + b·d + c·d²)) for a point at eye distance d, so if you use only the quadratic factor the point diameter falls off as 1/d, which is exactly how projected size behaves in a perspective projection.
If you want to use the point size you manually set at a distance of, say, 150 units from the eye, just use 1/(150^2) as the quadratic factor (and zero for the constant and linear factors -- if anything, you may want to use some small number like 0.01 for the constant factor just to avoid potential divisions by zero).
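As a rough sketch (reusing the ARB entry points already loaded in the question, with 150 as the example reference distance):
const float refDist = 150.0f;
float atten[3] = { 0.01f,                        // tiny constant term, avoids division by zero
                   0.0f,                         // no linear term
                   1.0f / (refDist * refDist) }; // quadratic term: unscaled size at refDist
glPointParameterfvARB( GL_POINT_DISTANCE_ATTENUATION_ARB, atten );
glPointSize( 32.0f ); // the size you want the sprite to have at refDist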

In my experience point size attenuation isn't worth the trouble. You're much better off writing a very simple GLSL vertex shader that sets the point size manually according to some calculation you perform on your own. It took me about half a day to learn from scratch all the GLSL I needed to make this happen.
The GLSL code may be as simple as these few lines:
attribute float psize;
void main()
{
    gl_FrontColor = gl_Color;
    gl_PointSize = psize;
    gl_Position = ftransform();
}
Where psize is the point size parameter the user chooses.
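On the application side, something along these lines should feed it (a sketch only; prog, numParticles, particleSize and particlePos are placeholder names, not taken from the answer):
glEnable(GL_VERTEX_PROGRAM_POINT_SIZE); // let the vertex shader set gl_PointSize
glEnable(GL_POINT_SPRITE);
glUseProgram(prog);
GLint psizeLoc = glGetAttribLocation(prog, "psize");
glBegin(GL_POINTS);
for (int i = 0; i < numParticles; ++i)
{
    glVertexAttrib1f(psizeLoc, particleSize[i]); // per-point size attribute
    glVertex3fv(particlePos[i]);                 // position last; this issues the vertex
}
glEnd();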

Just have a look at pmviewer.sourceforge.net: the code uses point sprites, and each point has its own color and size to simulate volume rendering.
The vertex shader is:
// with ATI hardware, uniform variable MUST be used by output
// variables. That's why win_height is used by gl_FrontColor
attribute float a_hsml1;
uniform float win_height;
uniform vec4 cameralocin;
void main()
{
    vec4 position = gl_ModelViewMatrix * gl_Vertex;
    vec4 cameraloc = gl_ModelViewMatrix * cameralocin;
    float d = distance(vec3(cameraloc), vec3(position));
    float a_hsml = gl_Normal.x;
    float pointSize = win_height * a_hsml / d; // point diameter in pixels (drops like sqrt(1/r^2))
    gl_PointSize = pointSize;
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = ftransform();
    gl_FrontColor = vec4(gl_Color.r, gl_Color.g, gl_Color.b, gl_Color.a);
}
The fragment shader is:
uniform sampler2D splatTexture;
void main()
{
    vec4 color = gl_Color * texture2D(splatTexture, gl_TexCoord[0].st);
    gl_FragColor = color;
}
To send the particles to the GPU:
bool PutOneArrayToGPU(unsigned int m_vbo, float *hArray, unsigned int num)
{
    glBindBuffer(GL_ARRAY_BUFFER, m_vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(float) * num, hArray, GL_STATIC_DRAW);
    int size = 0;
    glGetBufferParameteriv(GL_ARRAY_BUFFER, GL_BUFFER_SIZE, &size);
    if ((unsigned)size != (sizeof(float) * num))
    {
        fprintf(stderr, "WARNING: buffer object allocation failed!\n");
        fprintf(stderr, "Turning off the GPU accelerated rendering\n");
        flag_GpuRender = false; // flag_GpuRender is a flag defined elsewhere
    }
    return flag_GpuRender;
}
Then render them:
void DrawPointsByGPU()
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, m_vboPos);
    glVertexPointer(3, GL_FLOAT, 0, 0);

    glEnableClientState(GL_COLOR_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, m_vboColor);
    glColorPointer(4, GL_FLOAT, 0, 0);

    glEnableClientState(GL_NORMAL_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, m_vboHSML);
    glNormalPointer(GL_FLOAT, 3*sizeof(float), 0);

    glDrawArrays(GL_POINTS, 0, m_numParticles);

    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glDisableClientState(GL_NORMAL_ARRAY);
    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}
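For completeness, the uniforms that the vertex shader above expects (win_height, cameralocin, splatTexture) have to be set before drawing. A rough sketch of that setup, where prog, winH and camPos are assumed names and not taken from pmviewer:
glUseProgram(prog);
glUniform1f(glGetUniformLocation(prog, "win_height"), (float)winH);
glUniform4f(glGetUniformLocation(prog, "cameralocin"), camPos[0], camPos[1], camPos[2], 1.0f);
glUniform1i(glGetUniformLocation(prog, "splatTexture"), 0); // sampler on texture unit 0
glEnable(GL_VERTEX_PROGRAM_POINT_SIZE); // the shader writes gl_PointSize
glEnable(GL_POINT_SPRITE);
glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE);
DrawPointsByGPU();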

Related

Unable to plot multiple elements using FBO

I'm trying to edit this tutorial to render multiple circles within the FBO. I simplified the tutorial to reduce the amount of memory that I'm sending through the buffer: I'm only sending the x and y coordinates, along with a float that determines the colour of the node. This information is read from this text file. Even though I'm trying to plot ~660 nodes, my code does not display all of them. My application should scale up and be able to plot any number of nodes read from the input.
I provide a graphical illustration of what I would expect to obtain via a plot made in R:
library(ggplot2)
t <-read.table("pastebin_file.txt", header = T)
ggplot(t, aes(x, y)) + geom_point(aes(colour = factor(col)))
In OpenGL, I get fewer vertices than that (I know, the colors are inverted, but that is not my concern):
I guess that the problem might be with the VBO, or that I forgot to set all the parameters properly. At this stage, I don't know what the problem is. How can I fix this so as to replicate R's output in OpenGL? I provide an MWE with all the shaders in the last part of the question:
main.cpp
#include <stdio.h>
#include <stdlib.h>
#include <GL/glew.h>
#include <GL/glut.h>
#include "utils/shaders.h"
size_t n = 0;
void render(void)
{
// Clear the screen to black
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
// I want to render exactly all the vertices that were loaded on the VBO.
glDrawArrays(GL_POINTS, 0, n);
glutSwapBuffers(); // Update the rendering
}
program programma;
void set_shader()
{
// Loading the shaders using a custom class. Nevertheless, the code is exactly the same as the one in https://open.gl/content/code/c7_final.txt, that is loading and compiling the three shaders, and then linking them together in one single program
programma.add_shader(shader_t::vertex, "shaders/vertexShader3.txt");
programma.add_shader(shader_t::fragment, "shaders/fragmentShader3.txt");
programma.add_shader(shader_t::geometry, "shaders/geometryShader3.txt");
programma.compile();
}
GLuint vbo;
GLuint vao;
#include <regex>
#include <iostream>
size_t fbo(const std::string& filename) {
// Create VBO with point coordinates
glGenBuffers(1, &vbo);
std::fstream name{filename};
std::string line;
std::getline(name, line); // Skipping the first line, that just contains the header
std::vector<GLfloat> points; // Storage for all the coordinates
n = 0;
std::regex rgx ("\\s+");
while (std::getline(name, line)) {
std::sregex_token_iterator iter(line.begin(), line.end(), rgx, -1);
std::sregex_token_iterator end;
points.emplace_back(std::stof(*iter++)/20); // x, rescaled, so it can fit into screen
points.emplace_back(std::stof(*iter++)/20); // y, rescaled, so it can fit into screen
int i = std::stoi(*iter++);
points.emplace_back(i); // determining the color
n++;
}
std::cout << n << std::endl; // number of vertices
std::cout << sizeof(float) * 3 * n << std::endl; // expected size in B = 7992
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, points.size(), points.data(), GL_STATIC_DRAW);
// Create VAO
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
// Specify the layout of the node data: just two floats for the (x,y) pairs
GLint posAttrib = glGetAttribLocation(programma.id, "pos");
glEnableVertexAttribArray(posAttrib);
glVertexAttribPointer(posAttrib, 2, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), 0);
// Determining the color of the circle with one single float parameter
GLint sidesAttrib = glGetAttribLocation(programma.id, "sides");
glEnableVertexAttribArray(sidesAttrib);
glVertexAttribPointer(sidesAttrib, 1, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), (void*) (2 * sizeof(GLfloat)));
return points.size()/3;
}
int main(int argc, char **argv)
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
glutInitWindowSize(200, 200);
glutCreateWindow("Stuff");
glutIdleFunc(render);
glewInit();
if (!glewIsSupported("GL_VERSION_2_0")) {
fprintf(stderr, "GL 2.0 unsupported\n");
return 1;
}
set_shader();
fbo("pastebin_file.txt");
glutMainLoop();
glDeleteBuffers(1, &vbo);
glDeleteVertexArrays(1, &vao);
return 0;
}
shaders/vertexShader3.txt
#version 150 core
in vec2 pos; // input vertex position
in float sides; // determines the output color
out vec3 vColor;
void main() {
gl_Position = vec4(pos, 0.0, 1.0);
if (sides == 1.0) { // determining the color
vColor = vec3(1.0,0.0,0.0);
} else {
vColor = vec3(0.0,1.0,0.0);
}
}
shaders/geometryShader3.txt
#version 150 core
layout(points) in;
layout(line_strip, max_vertices = 640) out;
in vec3 vColor[];
out vec3 fColor;
const float PI = 3.1415926;
const float lati = 10;
void main() {
fColor = vColor[0];
// Safe, GLfloats can represent small integers exactly
for (int i = 0; i <= lati; i++) {
// Angle between each side in radians
float ang = PI * 2.0 / lati * i;
// Offset from center of point
vec4 offset = vec4(cos(ang) * 0.3/20, -sin(ang) * 0.4/20, 0.0, 0.0);
gl_Position = gl_in[0].gl_Position + offset;
EmitVertex();
}
EndPrimitive();
}
shaders/fragmentShader3.txt
#version 150 core
in vec3 fColor;
out vec4 outColor;
void main() {
outColor = vec4(fColor, 1.0); // Simply returning the color
}
The 2nd argument of glBufferData has to be the size of the buffer in bytes. Change
glBufferData(GL_ARRAY_BUFFER, points.size(), points.data(), GL_STATIC_DRAW);
to
glBufferData(GL_ARRAY_BUFFER,
             points.size() * sizeof(points[0]), points.data(), GL_STATIC_DRAW);
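A small helper along these lines avoids mixing up element counts and byte counts (the name uploadPoints is just for illustration; GL/GLEW headers are assumed to be included):
#include <vector>

void uploadPoints(GLuint vbo, const std::vector<GLfloat>& points)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER,
                 points.size() * sizeof(GLfloat), // size in bytes, not in elements
                 points.data(),
                 GL_STATIC_DRAW);
}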

draw sphere in OpenGL 4.0

My OpenGL version is 4.0. I would like to draw a sphere using latitude and longitude, with this parametrization:
x = ρ sin ϕ cos θ
y = ρ sin ϕ sin θ
z = ρ cos ϕ
This is a part of my code:
glm::vec3 buffer[1000];
glm::vec3 outer;
buffercount = 1000;
float section = 10.0f;
GLfloat alpha, beta;
int index = 0;
for (alpha = 0.0 ; alpha <= PI; alpha += PI/section)
{
for (beta = 0.0 ; beta <= 2* PI; beta += PI/section)
{
outer.x = radius*cos(beta)*sin(alpha);
outer.y = radius*sin(beta)*sin(alpha);
outer.z = radius*cos(alpha);
buffer[index] = outer;
index = index +1;
}
}
GLuint sphereVBO, sphereVAO;
glGenVertexArrays(1, &sphereVAO);
glGenBuffers(1,&sphereVBO);
glBindVertexArray(sphereVAO);
glBindBuffer(GL_ARRAY_BUFFER,sphereVBO);
glBufferData(GL_ARRAY_BUFFER,sizeof(glm::vec3) *buffercount ,&buffer[0], GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), (GLvoid*)0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
...
while (!glfwWindowShouldClose(window))
{
...
...
for (GLuint i = 0; i < buffercount; i++)
{
...
...
glm::mat4 model;
model = glm::translate(model, buffer[i]);
GLfloat angle = 10.0f * i;
model = glm::rotate(model, angle, glm::vec3(1.0f, 0.3f, 0.5f));
glUniformMatrix4fv(modelMat, 1, GL_FALSE, glm::value_ptr(model));
}
glDrawArrays(GL_TRIANGLE_FAN, 0, 900);
glfwSwapBuffers(window);
}
If section = 5, the result looks like this (screenshot omitted):
If section = 20, the result looks like this (screenshot omitted):
I think that I might have a logic problem in my code. I am struggling with this problem...
-----update-----
I edited my code. It doesn't produce any errors, but I get a blank screen. I guess something is wrong in my vertex shader; I might be passing the wrong variables to it. Please help me.
gluPerspective is deprecated in my OpenGL 4.1, so I switched to:
float aspect=float(4.0f)/float(3.0f);
glm::mat4 projection_matrix = glm::perspective(60.0f/aspect,aspect,0.1f,100.0f);
It shows this error: constant expression evaluates to -1 which cannot be narrowed to type 'GLuint' (aka 'unsigned int')
GLuint sphere_vbo[4]={-1,-1,-1,-1};
GLuint sphere_vao[4]={-1,-1,-1,-1};
I'm not sure how to revise it... I switched to:
GLuint sphere_vbo[4]={1,1,1,1};
GLuint sphere_vao[4]={1,1,1,1};
I put Spektre's code in the spherer.h file.
This is a part of my main.cpp file:
...
...
Shader shader("basic.vert", "basic.frag");
sphere_init();
while (!glfwWindowShouldClose(window))
{
glfwPollEvents();
glClearColor(0.2f, 0.3f, 0.3f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
shader.Use();
GLuint MatrixID = glGetUniformLocation(shader.Program, "MVP");
GLfloat radius = 10.0f;
GLfloat camX = sin(glfwGetTime()) * radius;
GLfloat camZ = cos(glfwGetTime()) * radius;
// view matrix
glm::mat4 view;
view = glm::lookAt(glm::vec3(camX, 0.0, camZ), glm::vec3(0.0, 0.0, 0.0), glm::vec3(0.0, 1.0, 0.0));
glm::mat4 view_matrix = view;
// projection matrix
float aspect=float(4.0f)/float(3.0f);
glm::mat4 projection_matrix = glm::perspective(60.0f/aspect,aspect,0.1f,100.0f);
// model matrix
glm::mat4 model_matrix = glm::mat4(1.0f);// identity
//ModelViewProjection
glm::mat4 model_view_projection = projection_matrix * view_matrix * model_matrix;
glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &model_view_projection[0][0]);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0,0.0,-10.0);
glEnable(GL_DEPTH_TEST);
glDisable(GL_TEXTURE_2D);
sphere_draw();
glFlush();
glfwSwapBuffers(window);
}
sphere_exit();
glfwTerminate();
return 0;
}
This is my vertex shader file:
#version 410 core
uniform mat4 MVP;
layout(location = 0) in vec3 vertexPosition_modelspace;
out vec4 vertexColor;
void main()
{
gl_Position = MVP * vec4(vertexPosition_modelspace,1);
vertexColor = vec4(0, 1, 0, 1.0);
}
I added an error-check function, get_log, to my shader.h file.
...
...
vertex = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vertex, 1, &vShaderCode, NULL);
glCompileShader(vertex);
checkCompileErrors(vertex, "VERTEX");
get_log(vertex);
...
...
void get_log(GLuint shader){
GLint isCompiled = 0;
GLchar infoLog[1024];
glGetShaderiv(shader, GL_COMPILE_STATUS, &isCompiled);
if(isCompiled == GL_FALSE)
{
printf("----error--- \n");
GLint maxLength = 0;
glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &maxLength);
glGetShaderInfoLog(shader, 1024, NULL, infoLog);
std::cout << "| ERROR::::" << infoLog << "\n| -- ------------------ --------------------------------- -- |" << std::endl; // print infoLog, not &infoLog
glDeleteShader(shader); // Don't leak the shader.
}else{
printf("---no error --- \n");
}
}
I tested both the fragment shader and the vertex shader; both showed ---no error---.
As I mentioned in the comments, you need to add indices to your mesh VAO/VBO. I'm not sure why GL_QUADS is not implemented on your machine; that makes no sense, as it is a basic primitive. To keep this easy to handle I use only GL_TRIANGLES, which is far from ideal, but what the heck ... Try this:
//---------------------------------------------------------------------------
const int na=36; // vertex grid size
const int nb=18;
const int na3=na*3; // line in grid size
const int nn=nb*na3; // whole grid size
GLfloat sphere_pos[nn]; // vertex
GLfloat sphere_nor[nn]; // normal
//GLfloat sphere_col[nn]; // color
GLuint sphere_ix [na*(nb-1)*6]; // indices
GLuint sphere_vbo[4]={-1,-1,-1,-1};
GLuint sphere_vao[4]={-1,-1,-1,-1};
void sphere_init()
{
// generate the sphere data
GLfloat x,y,z,a,b,da,db,r=3.5;
int ia,ib,ix,iy;
da=2.0*M_PI/GLfloat(na);
db= M_PI/GLfloat(nb-1);
// [Generate sphere point data]
// spherical angles a,b covering whole sphere surface
for (ix=0,b=-0.5*M_PI,ib=0;ib<nb;ib++,b+=db)
for (a=0.0,ia=0;ia<na;ia++,a+=da,ix+=3)
{
// unit sphere
x=cos(b)*cos(a);
y=cos(b)*sin(a);
z=sin(b);
sphere_pos[ix+0]=x*r;
sphere_pos[ix+1]=y*r;
sphere_pos[ix+2]=z*r;
sphere_nor[ix+0]=x;
sphere_nor[ix+1]=y;
sphere_nor[ix+2]=z;
}
// [Generate GL_TRIANGLE indices]
for (ix=0,iy=0,ib=1;ib<nb;ib++)
{
for (ia=1;ia<na;ia++,iy++)
{
// first half of QUAD
sphere_ix[ix]=iy; ix++;
sphere_ix[ix]=iy+1; ix++;
sphere_ix[ix]=iy+na; ix++;
// second half of QUAD
sphere_ix[ix]=iy+na; ix++;
sphere_ix[ix]=iy+1; ix++;
sphere_ix[ix]=iy+na+1; ix++;
}
// first half of QUAD
sphere_ix[ix]=iy; ix++;
sphere_ix[ix]=iy+1-na; ix++;
sphere_ix[ix]=iy+na; ix++;
// second half of QUAD
sphere_ix[ix]=iy+na; ix++;
sphere_ix[ix]=iy-na+1; ix++;
sphere_ix[ix]=iy+1; ix++;
iy++;
}
// [VAO/VBO stuff]
GLuint i;
glGenVertexArrays(4,sphere_vao);
glGenBuffers(4,sphere_vbo);
glBindVertexArray(sphere_vao[0]);
i=0; // vertex
glBindBuffer(GL_ARRAY_BUFFER,sphere_vbo[i]);
glBufferData(GL_ARRAY_BUFFER,sizeof(sphere_pos),sphere_pos,GL_STATIC_DRAW);
glEnableVertexAttribArray(i);
glVertexAttribPointer(i,3,GL_FLOAT,GL_FALSE,0,0);
i=1; // indices
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,sphere_vbo[i]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER,sizeof(sphere_ix),sphere_ix,GL_STATIC_DRAW);
    // no glEnableVertexAttribArray/glVertexAttribPointer here: the element
    // array buffer holds indices, not a vertex attribute
i=2; // normal
glBindBuffer(GL_ARRAY_BUFFER,sphere_vbo[i]);
glBufferData(GL_ARRAY_BUFFER,sizeof(sphere_nor),sphere_nor,GL_STATIC_DRAW);
glEnableVertexAttribArray(i);
glVertexAttribPointer(i,3,GL_FLOAT,GL_FALSE,0,0);
/*
i=3; // color
glBindBuffer(GL_ARRAY_BUFFER,sphere_vbo[i]);
glBufferData(GL_ARRAY_BUFFER,sizeof(sphere_col),sphere_col,GL_STATIC_DRAW);
glEnableVertexAttribArray(i);
glVertexAttribPointer(i,3,GL_FLOAT,GL_FALSE,0,0);
*/
glBindVertexArray(0);
glBindBuffer(GL_ARRAY_BUFFER,0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,0);
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
glDisableVertexAttribArray(2);
glDisableVertexAttribArray(3);
}
void sphere_exit()
{
glDeleteVertexArrays(4,sphere_vao);
glDeleteBuffers(4,sphere_vbo);
}
void sphere_draw()
{
glEnable(GL_CULL_FACE);
glFrontFace(GL_CCW);
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glBindVertexArray(sphere_vao[0]);
// glDrawArrays(GL_POINTS,0,sizeof(sphere_pos)/sizeof(GLfloat)); // POINTS ... no indices for debug
glDrawElements(GL_TRIANGLES,sizeof(sphere_ix)/sizeof(GLuint),GL_UNSIGNED_INT,0); // indices (choose just one line not both !!!)
glBindVertexArray(0);
}
void gl_draw()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
float aspect=float(xs)/float(ys);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0/aspect,aspect,0.1,100.0);
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0,0.0,-10.0);
glEnable(GL_DEPTH_TEST);
glDisable(GL_TEXTURE_2D);
sphere_draw();
glFlush();
SwapBuffers(hdc);
}
//---------------------------------------------------------------------------
Usage is simple: after the OpenGL context is created and extensions are loaded, call sphere_init(); before closing the app, call sphere_exit() (while the OpenGL context is still running); and when you want to render, call sphere_draw(). I made a gl_draw() example with some settings; here is a preview of it (screenshot omitted):
The point is to create a 2D grid of points covering the whole surface of the sphere (via the spherical longitude/latitude angles a,b) and then just create triangles covering the whole grid...

How to get keyboard navigation in OpenGL

I'm trying to create a solar system in OpenGL. I have the basic code for the Earth spinning on its axis, and I'm trying to set the camera to move with the arrow keys.
using namespace std;
using namespace glm;
const int windowWidth = 1024;
const int windowHeight = 768;
GLuint VBO;
int NUMVERTS = 0;
bool* keyStates = new bool[256]; //Create an array of boolean values of length 256 (0-255)
float fraction = 0.1f; //Fraction for navigation speed using keys
// Transform uniforms location
GLuint gModelToWorldTransformLoc;
GLuint gWorldToViewToProjectionTransformLoc;
// Lighting uniforms location
GLuint gAmbientLightIntensityLoc;
GLuint gDirectionalLightIntensityLoc;
GLuint gDirectionalLightDirectionLoc;
// Materials uniform location
GLuint gKaLoc;
GLuint gKdLoc;
// TextureSampler uniform location
GLuint gTextureSamplerLoc;
// Texture ID
GLuint gTextureObject[11];
//Navigation variables
float posX;
float posY;
float posZ;
float viewX = 0.0f;
float viewY = 0.0f;
float viewZ = 0.0f;
float dirX;
float dirY;
float dirZ;
vec3 cameraPos = vec3(0.0f,0.0f,5.0f);
vec3 cameraView = vec3(viewX,viewY,viewZ);
vec3 cameraDir = vec3(0.0f,1.0f,0.0f);
These are all the variables that I'm using to control the camera.
static void renderSceneCallBack()
{
// Clear the back buffer and the z-buffer
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Create our world space to view space transformation matrix
mat4 worldToViewTransform = lookAt(
cameraPos, // The position of your camera, in world space
cameraView, // where you want to look at, in world space
cameraDir // Camera up direction (set to 0,-1,0 to look upside-down)
);
// Create out projection transform
mat4 projectionTransform = perspective(45.0f, (float)windowWidth / (float)windowHeight, 1.0f, 100.0f);
// Combine the world space to view space transformation matrix and the projection transformation matrix
mat4 worldToViewToProjectionTransform = projectionTransform * worldToViewTransform;
// Update the transforms in the shader program on the GPU
glUniformMatrix4fv(gWorldToViewToProjectionTransformLoc, 1, GL_FALSE, &worldToViewToProjectionTransform[0][0]);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(aitVertex), 0);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(aitVertex), (const GLvoid*)12);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(aitVertex), (const GLvoid*)24);
// Set the material properties
glUniform1f(gKaLoc, 0.8f);
glUniform1f(gKdLoc, 0.8f);
// Bind the texture to the texture unit 0
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, gTextureObject[0]);
// Set our sampler to user Texture Unit 0
glUniform1i(gTextureSamplerLoc, 0);
// Draw triangle
mat4 modelToWorldTransform = mat4(1.0f);
static float angle = 0.0f;
angle+=1.0f;
modelToWorldTransform = rotate(modelToWorldTransform, angle, vec3(0.0f, 1.0f, 0.0f));
glUniformMatrix4fv(gModelToWorldTransformLoc, 1, GL_FALSE, &modelToWorldTransform[0][0]);
glDrawArrays(GL_TRIANGLES, 0, NUMVERTS);
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
glDisableVertexAttribArray(2);
glutSwapBuffers();
}
This is the function that draws the Earth onto the screen and determines where the camera is.
void keyPressed (unsigned char key, int x, int y)
{
keyStates[key] = true; //Set the state of the current key to pressed
cout<<"keyPressed ";
}
void keyUp(unsigned char key, int x, int y)
{
keyStates[key] = false; //Set the state of the current key to released
cout<<"keyUp ";
}
void keyOperations (void)
{
if(keyStates['a'])
{
viewX += 0.5f;
}
cout<<"keyOperations ";
}
These are the functions I'm trying to use to edit the camera variables dynamically.
// Create a vertex buffer
createVertexBuffer();
glutKeyboardFunc(keyPressed); //Tell Glut to use the method "keyPressed" for key events
glutKeyboardUpFunc(keyUp); //Tell Glut to use the method "keyUp" for key events
keyOperations();
glutMainLoop();
Finally, here are the few lines in my main method where I'm trying to call the key press functions. In the console I can see that it detects I'm pressing them, but the planet doesn't move at all. I think I may be calling keyOperations in the wrong place, but I'm not sure.
You are correct: keyOperations is being called in the wrong place. Where it is now, it is called once and then never again. It needs to go in your update code, where you update the rotation of the planet, so that it is called at least once per frame.
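A minimal sketch of that change, reusing the names from the question (only the first few lines of renderSceneCallBack are shown):
static void renderSceneCallBack()
{
    keyOperations();                        // poll the key states every frame
    cameraView = vec3(viewX, viewY, viewZ); // rebuild the look-at target from the updated values
    // Clear the back buffer and the z-buffer
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ... the rest of the existing drawing code stays the same ...
}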

OpenGL: Geometry Shader performance with a lot of cubes

So I wrote a really simple OpenGL program to draw 100x100x100 points as cubes using the geometry shader. I wanted to benchmark it against what I can currently do using DirectX 11.
With DirectX11, I can easily render these cubes at 60fps (vsync). However, with OpenGL I'm stuck at 40fps.
In both applications, I am:
Using a point topology to represent just the position of each cube (stride = 12 bytes).
Only mapping to the Vertex Buffer in the initialise function, only ever once.
Using only two draw calls in total: one to render the cubes, one to render frametime.
Using back-face culling, and depth testing.
Limiting state changes to the minimum I need to draw the cubes (VBO's/Shader Program).
Here is my draw call:
GLboolean CCubeApplication::Draw()
{
auto program = m_ppBatches[0]->GetShaders()->GetProgram(0);
program->Bind();
{
glUniformMatrix4fv(program->GetUniform("g_uWVP"), 1, false, glm::value_ptr(m_matMatrices[MATRIX_WVP]));
glDrawArrays(GL_POINTS, 0, m_uiTotal);
}
return true;
}
This function calls glBindVertexArray and glUseProgram
program->Bind();
And the rest is straightforward. My Update function does nothing but update the camera's position and view matrix, and is identical in the DirectX and OpenGL versions.
My vertex shader is a pass-through, and my fragment shader returns a constant colour. This is my geometry shader:
#version 440 core
// GS_LAYOUT
layout(points) in;
layout(triangle_strip, max_vertices = 36) out;
// GS_IN
in vec4 vOut_pos[];
// GS_OUT
// UNIFORMS
uniform mat4 g_uWVP;
const float f = 0.1f;
const int elements[] = int[]
(
0,2,1,
2,3,1,
1,3,5,
3,7,5,
5,7,4,
7,6,4,
4,6,0,
6,2,0,
3,2,7,
2,6,7,
5,4,1,
4,0,1
);
// GS
void main()
{
vec4 vertices[] = vec4[]
(
g_uWVP * (vOut_pos[0] + vec4(-f,-f,-f, 0)),
g_uWVP * (vOut_pos[0] + vec4(-f,-f,+f, 0)),
g_uWVP * (vOut_pos[0] + vec4(-f,+f,-f, 0)),
g_uWVP * (vOut_pos[0] + vec4(-f,+f,+f, 0)),
g_uWVP * (vOut_pos[0] + vec4(+f,-f,-f, 0)),
g_uWVP * (vOut_pos[0] + vec4(+f,-f,+f, 0)),
g_uWVP * (vOut_pos[0] + vec4(+f,+f,-f, 0)),
g_uWVP * (vOut_pos[0] + vec4(+f,+f,+f, 0))
);
uint uiIndex = 0;
for(uint uiTri = 0; uiTri < 12; ++uiTri)
{
for(uint uiVert = 0; uiVert < 3; ++uiVert)
{
gl_Position = vertices[elements[uiIndex++]];
EmitVertex();
}
EndPrimitive();
}
}
I've seen people talk about instancing and other such rendering methods, but I'm primarily interested in understanding why I can't get at least the same performance from OpenGL as I do with DirectX, seeing as the way I do it in both seems virtually identical to me. Identical data, identical shaders. Help?
UPDATE
So I downloaded gDEBugger, and here is my call stack for one frame:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
// Drawing cubes
glBindVertexArray(1)
glUseProgram(1)
glUniformMatrix4fv(0, 1, FALSE, {matrixData})
glDrawArrays(GL_POINTS, 0, 1000000)
// Drawing text
glBindVertexArray(2);
glUseProgram(5);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, 2);
glBindBuffer(GL_ARRAY_BUFFER, 2);
glBufferData(GL_ARRAY_BUFFER, 212992, {textData}, GL_DYNAMIC_DRAW);
glDrawArrays(GL_POINTS, 0, 34);
// Swap buffers
wglSwapBuffers();

OpenGL 3.1 lighting messed up, using phong shading

After many painful hours of attempting to figure out why my lighting is messed up I am still at a loss.
The OpenGL normals are correct (backface culling does not cause any of my triangles to disappear)
I calculate my normals in order to interpolate for lighting; all the triangles on the same face also have the same normals.
If anyone has any thoughts, that would be appreciated.
I am definitely new to OpenGL, so that is a bit obvious in my code.
here are my shaders:
vertex shader
#version 330 core
layout(location = 0) in vec3 Position;
layout(location = 1) in vec3 vertexColor;
in vec3 vNormal;
out vec3 fragmentColor; // Output data ; will be interpolated for each fragment.
uniform mat4 MVP;
uniform mat4 transformMatrix;
uniform vec4 LightPosition;
// output values that will be interpretated per-fragment
out vec3 fN;
out vec3 fE;
out vec3 fL;
void main()
{
fN = vNormal;
fE = Position.xyz;
fL = LightPosition.xyz;
if( LightPosition.w != 0.0 ) {
fL = LightPosition.xyz - Position.xyz;
}
// Output position of the vertex, in clip space : MVP * position
vec4 v = vec4(Position,1); // Transform in homoneneous 4D vector
gl_Position = MVP * v;
//gl_Position = MVP * v;
// The color of each vertex will be interpolated
// to produce the color of each fragment
//fragmentColor = vertexColor; // take out at some point
}
and the fragmentShader, using phong shading
#version 330
//out vec3 color;
// per-fragment interpolated values from the vertex shader
in vec3 fN;
in vec3 fL;
in vec3 fE;
out vec4 fColor;
uniform vec4 AmbientProduct, DiffuseProduct, SpecularProduct;
uniform mat4 ModelView;
uniform vec4 LightPosition;
uniform float Shininess;
in vec3 fragmentColor; // Interpolated values from the vertex shaders
void main()
{
// Normalize the input lighting vectors
vec3 N = normalize(fN);
vec3 E = normalize(fE);
vec3 L = normalize(fL);
vec3 H = normalize( L + E );
vec4 ambient = AmbientProduct;
float Kd = max(dot(L, N), 0.0);
vec4 diffuse = Kd*DiffuseProduct;
float Ks = pow(max(dot(N, H), 0.0), Shininess);
vec4 specular = Ks*SpecularProduct;
// discard the specular highlight if the light's behind the vertex
if( dot(L, N) < 0.0 ) {
specular = vec4(0.0, 0.0, 0.0, 1.0);
}
fColor = ambient + diffuse + specular;
fColor.a = 1.0;
//color = vec3(1,0,0);
// Output color = color specified in the vertex shader,
// interpolated between all 3 surrounding vertices
//color = fragmentColor;
}
void setMatrices()
{
GLfloat FoV = 45; // the zoom of the camera
glm::vec3 cameraPosition(4,3,3), // the position of your camera, in world space // change to see what happends
cameraTarget(0,0,0), // where you want to look at, in world space
upVector(0,-1,0);
// Projection matrix : 45° Field of View, 4:3 ratio, display range : 0.1 unit <-> 100 units
glm::mat4 Projection = glm::perspective(FoV, 3.0f / 3.0f, 0.001f, 100.0f); // ratio needs to change here when the screen size/ratio changes
// Camera matrix
glm::mat4 View = glm::lookAt(
cameraPosition, // Camera is at (4,3,3), in World Space
cameraTarget, // and looks at the origin
upVector // Head is up (set to 0,-1,0 to look upside-down)
);
// Model matrix : an identity matrix (model will be at the origin)
glm::mat4 Model = glm::mat4(1.0f); // Changes for each model !
// Our ModelViewProjection : multiplication of our 3 matrices
glm::mat4 MVP = Projection * View * Model * transformMatrix; //matrix multiplication is the other way around
// Get a handle for our "MVP" uniform.
// Only at initialisation time.
GLuint MatrixID = glGetUniformLocation(programID, "MVP");
// Send our transformation to the currently bound shader,
// in the "MVP" uniform
// For each model you render, since the MVP will be different (at least the M part)
glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &MVP[0][0]);
RotationID = glGetUniformLocation(programID,"transformMatrix");
//lighting
cubeNormal = glGetAttribLocation( programID, "vNormal" );
}
void setBuffers()
{
// Get a vertex array object
GLuint VAO;
glGenVertexArrays(1, &VAO);
glBindVertexArray(VAO);
glUseProgram(programID);
// cube buffer objects
glGenBuffers(1, &CubeVertexbuffer); // Generate 1 buffer, put the resulting identifier in vertexbuffer
glBindBuffer(GL_ARRAY_BUFFER, CubeVertexbuffer); // The following commands will talk about our 'vertexbuffer' buffer
glBufferData(GL_ARRAY_BUFFER, sizeof(CubeBufferData), CubeBufferData, GL_STATIC_DRAW); // Give our vertices to OpenGL.
// cube normal objects
glGenBuffers(1, &CubeNormalbuffer); // Generate 1 buffer, put the resulting identifier in vertexbuffer
glBindBuffer(GL_ARRAY_BUFFER, CubeNormalbuffer); // The following commands will talk about our 'vertexbuffer' buffer
glBufferData(GL_ARRAY_BUFFER, sizeof(CubeNormalBufferData), CubeNormalBufferData, GL_STATIC_DRAW); // Give our vertices to OpenGL.
//octahedron buffer objects
glGenBuffers(1, &OctaVertexbuffer); // Generate 1 buffer, put the resulting identifier in vertexbuffer
glBindBuffer(GL_ARRAY_BUFFER, OctaVertexbuffer); // The following commands will talk about our 'vertexbuffer' buffer
glBufferData(GL_ARRAY_BUFFER, sizeof(octahedronBufData), octahedronBufData, GL_STATIC_DRAW); // Give our vertices to OpenGL.
//tetrahedron buffer objects
glGenBuffers(1, &TetraVertexbuffer); // Generate 1 buffer, put the resulting identifier in vertexbuffer
glBindBuffer(GL_ARRAY_BUFFER, TetraVertexbuffer); // The following commands will talk about our 'vertexbuffer' buffer
glBufferData(GL_ARRAY_BUFFER, sizeof(tetrahedronBufData), tetrahedronBufData, GL_STATIC_DRAW); // Give our vertices to OpenGL.
//dodecahedron buffer objects
glGenBuffers(1, &DodecaVertexbuffer); // Generate 1 buffer, put the resulting identifier in vertexbuffer
glBindBuffer(GL_ARRAY_BUFFER, DodecaVertexbuffer); // The following commands will talk about our 'vertexbuffer' buffer
glBufferData(GL_ARRAY_BUFFER, sizeof(dodecahedronBufData), dodecahedronBufData, GL_STATIC_DRAW); // Give our vertices to OpenGL.
//icosahedron buffer objects
glGenBuffers(1, &icosaVertexbuffer); // Generate 1 buffer, put the resulting identifier in vertexbuffer
glBindBuffer(GL_ARRAY_BUFFER, icosaVertexbuffer); // The following commands will talk about our 'vertexbuffer' buffer
glBufferData(GL_ARRAY_BUFFER, sizeof(icosahedronBufData), icosahedronBufData, GL_STATIC_DRAW); // Give our vertices to OpenGL.
//sphere buffer objects
glGenBuffers(1, &sphereVertexbuffer); // Generate 1 buffer, put the resulting identifier in vertexbuffer
glBindBuffer(GL_ARRAY_BUFFER, sphereVertexbuffer); // The following commands will talk about our 'vertexbuffer' buffer
glBufferData(GL_ARRAY_BUFFER, sizeof(sphereBufData), sphereBufData, GL_STATIC_DRAW); // Give our vertices to OpenGL.
glGenBuffers(1, &colorbuffer);
glBindBuffer(GL_ARRAY_BUFFER, colorbuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(g_color_buffer_data), g_color_buffer_data, GL_STATIC_DRAW);
// lighting stuff
// Initialize shader lighting parameters
point4 light_position= { 0.0, 20.0, -10.0, 0.0 };
color4 light_ambient ={ 0.2, 0.2, 0.2, 1.0 };
color4 light_diffuse ={ 1.0, 1.0, 1.0, 1.0 };
color4 light_specular ={ 1.0, 1.0, 1.0, 1.0 };
color4 material_ambient ={ 1.0, 0.0, 1.0, 1.0 };
color4 material_diffuse ={ 1.0, 0.8, 0.0, 1.0 };
color4 material_specular ={ 1.0, 0.8, 0.0, 1.0 };
float material_shininess = 20.0;
color4 ambient_product;
color4 diffuse_product;
color4 specular_product;
int i;
for (i = 0; i < 3; i++) {
ambient_product[i] = light_ambient[i] * material_ambient[i];
diffuse_product[i] = light_diffuse[i] * material_diffuse[i];
specular_product[i] = light_specular[i] * material_specular[i];
}
//printColor("diffuse", diffuse_product);
//printColor("specular", specular_product);
glUniform4fv( glGetUniformLocation(programID, "AmbientProduct"),
1, ambient_product );
glUniform4fv( glGetUniformLocation(programID, "DiffuseProduct"),
1, diffuse_product );
glUniform4fv( glGetUniformLocation(programID, "SpecularProduct"),
1, specular_product );
glUniform4fv( glGetUniformLocation(programID, "LightPosition"),
1, light_position );
glUniform1f( glGetUniformLocation(programID, "Shininess"),
material_shininess );
}
and some more....
void display()
{
setMatrices(); // initilize Matrices
// Use our shader
//glUseProgram(programID);
glClearColor(0.0f, 0.0f, 0.3f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// 2nd attribute buffer : colors
glEnableVertexAttribArray(1);
glBindBuffer(GL_ARRAY_BUFFER, colorbuffer);
glVertexAttribPointer(
1, // attribute. No particular reason for 1, but must match the layout in the shader.
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
glEnableVertexAttribArray(0); // 1rst attribute buffer : vertices
// enum platosShapes{tet, cube, octah, dodec, icos};
switch(shapeInUse)
{
case tet:
{
glBindBuffer(GL_ARRAY_BUFFER, TetraVertexbuffer);
glVertexAttribPointer(
0, // attribute 0. No particular reason for 0, but must match the layout in the shader.
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
glDrawArrays(GL_TRIANGLES, 0, 4*3); // Starting from vertex 0; 3 vertices total -> 1 triangle // need to know amount of vertices here // and change to triangle strips accordingly
}
break;
case cube:
{
//GLuint cubeNormal = glGetAttribLocation( programID, "vNormal" );
glEnableVertexAttribArray( cubeNormal );
glVertexAttribPointer( cubeNormal, 3, GL_FLOAT, GL_FALSE, 0,
(const GLvoid *) (sizeof(CubeNormalBufferData)) );
//glDisableVertexAttribArray( cubeNormal );
glBindBuffer(GL_ARRAY_BUFFER, CubeVertexbuffer);
glVertexAttribPointer(
0, // attribute 0. No particular reason for 0, but must match the layout in the shader.
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
glDrawArrays(GL_TRIANGLES, 0, 12*3); // Starting from vertex 0; 3 vertices total -> 1 triangle // need to know amount of vertices here // and change to triangle strips accordingly
}
break;
case octah:
{
glBindBuffer(GL_ARRAY_BUFFER, OctaVertexbuffer);
glVertexAttribPointer(
0, // attribute 0. No particular reason for 0, but must match the layout in the shader.
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
glDrawArrays(GL_TRIANGLES, 0, 8*3); // Starting from vertex 0; 3 vertices total -> 1 triangle // need to know amount of vertices here // and change to triangle strips accordingly
}
break;
case dodec:
{
glBindBuffer(GL_ARRAY_BUFFER, DodecaVertexbuffer);
glVertexAttribPointer(
0, // attribute 0. No particular reason for 0, but must match the layout in the shader.
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
glDrawArrays(GL_TRIANGLE_FAN, 0, 5 * 6); // Starting from vertex 0; 3 vertices total -> 1 triangle // need to know amount of vertices here // and change to triangle strips accordingly
glDrawArrays(GL_TRIANGLE_FAN, (5 * 6) + 1, 30);
//glutSolidDodecahedron();
//glDrawArrays(GL_TRIANGLE_STRIP,0,5*12);
}
break;
case icos:
{
glBindBuffer(GL_ARRAY_BUFFER, icosaVertexbuffer);
glVertexAttribPointer(
0, // attribute 0. No particular reason for 0, but must match the layout in the shader.
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
glDrawArrays(GL_TRIANGLES, 0, 3*20); // Starting from vertex 0; 3 vertices total -> 1 triangle // need to know amount of vertices here // and change to triangle strips accordingly
}
break;
case sphere:
{
glBindBuffer(GL_ARRAY_BUFFER, sphereVertexbuffer);
glVertexAttribPointer(
0, // attribute 0. No particular reason for 0, but must match the layout in the shader.
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
//glDrawElements(GL_TRIANGLES, cnt2, GL_UNSIGNED_INT, 0)
glDrawArrays(GL_TRIANGLE_FAN, 0, 100);
}
}
glDisableVertexAttribArray(0);
glFlush();
}
and some more........
void calculateNormals(GLfloat bufData[], GLfloat normBufData[], int size) // probably works
{
int count = 0;
GLfloat temp[9];
for(int i = 0; i < size; i++)
{
temp[count] = bufData[i];
count++;
if((i+1) % 9 == 0)
{
count = 0;
//for(int i = 0; i < 9; i++)
//{
// cout << temp[i] << "!,";
// if((i + 1) % 3 == 0)
// cout << "\n";
//}
calculateCross(temp, normBufData);
}
}
printNormals(normBufData, size);
}
void calculateCross(GLfloat bufData[], GLfloat normBufData[]) // probably works
{
static int counter = 0; // need to reset in bettween new buffers
glm::vec3 C1;
glm::vec3 C2;
glm::vec3 normal;
//cout << bufData[0] << "," << bufData[1] << "," << bufData[2] << " buf 1 \n";
//cout << bufData[3] << "," << bufData[4] << "," << bufData[5] << " buf 2 \n";
//cout << bufData[6] << "," << bufData[7] << "," << bufData[8] << " buf 3 \n\n";
//C1.x = bufData[3] - bufData[0];
//C1.y = bufData[4] - bufData[1];
//C1.z = bufData[5] - bufData[2];
//C2.x = bufData[6] - bufData[0];
//C2.y = bufData[7] - bufData[1];
//C2.z = bufData[8] - bufData[2];
C1.x = bufData[0] - bufData[3];
C1.y = bufData[1] - bufData[4];
C1.z = bufData[2] - bufData[5];
C2.x = bufData[0] - bufData[6];
C2.y = bufData[1] - bufData[7];
C2.z = bufData[2] - bufData[8];
//C2.x = bufData[6] - bufData[0];
//C2.y = bufData[7] - bufData[1];
//C2.z = bufData[8] - bufData[2];
//cout << C1.x << " 1x \n";
//cout << C1.y << " 1y \n";
//cout << C1.z << " 1z \n";
//cout << C2.x << " 2x \n";
//cout << C2.y << " 2y \n";
//cout << C2.z << " 2z \n";
normal = glm::cross(C1, C2);
//cout << "\nNORMAL : " << normal.x << "," << normal.y << "," << normal.z << " counter = " << counter << "\n";
for(int j = 0; j < 3; j++)
{
for(int i = 0; i < 3; i++)
{
normBufData[counter] = normal.x;
normBufData[counter + 1] = normal.y;
normBufData[counter + 2] = normal.z;
}
counter+=3;
}
}
and main.....
int main(int argc, char **argv)
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
glutInitWindowSize(700, 700); // Window Size
glutCreateWindow("Michael - Lab 3");
glutDisplayFunc(display);
glutTimerFunc(10, timeFucn, 10);
glutIdleFunc(Idle);
glutKeyboardFunc(keyboard);
glewExperimental = GL_TRUE;
glewInit();
glEnable(GL_CULL_FACE);
glEnable(GL_DEPTH_TEST); // Enable depth test
glDepthFunc(GL_LESS); // Accept fragment if it closer to the camera than the former one
GenerateSphere(); // this function generates points for the sphere
programID = LoadShader( "VertexShader.glsl", "FragmentShader.glsl" ); // Create and compile our GLSL program from the shaders
setBuffers(); // initilize buffers
calculateNormals(CubeBufferData,CubeNormalBufferData,108); // calculate norms
//printNormals(CubeNormalBufferData);
glutMainLoop();
}
You forgot to bind the buffer object with the normals before calling glVertexAttribPointer( cubeNormal, 3, ... ). Therefore, the actual data for the normals is taken from the color buffer, which causes the weird Phong evaluation results.
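A minimal sketch of the fix, using the names from the question; note that the byte offset into the normal buffer should be 0, not sizeof(CubeNormalBufferData):
glBindBuffer(GL_ARRAY_BUFFER, CubeNormalbuffer);   // this bind was missing
glEnableVertexAttribArray(cubeNormal);
glVertexAttribPointer(cubeNormal, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);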
BTW, nice coding style :)
Phong and Gouraud shading don't gain you anything over flat shading on objects whose surfaces are all planar, e.g. a cube.