I am using GL_TRIANGLE_FAN to draw a circle. When I use other triangle primitives, I get some triangles, but when I use GL_TRIANGLE_FAN, I get a blank screen. I am new to this and cannot see where I am going wrong.
#include <string>
#include <fstream>
#include <iostream>
#include <sstream>
#include <vector>
//Include GLEW
#include <GL/glew.h>
//Include GLFW
#include <glfw3.h>
#include <GL/glut.h>
#include <GL/gl.h>
#include <math.h>
int width;
int height;
float r;
float theta;
GLuint vboHandle[1];
GLuint indexVBO;
struct vertex
{
double x, y;
double u, v;
double r, g, b;
}temp;
std::vector<vertex> vertices;
std::vector<GLuint64> indeces;
void initNormal(){
float a=0;
int value1 = 1;
double radius = 0.3;
double centerX = 0;
double centerY = 0;
double theta = 0;
//u,v,r,g,b are dummy for now
temp.x = 0;
temp.y = 0;
temp.u = a;
temp.v = a;
temp.r = a;
temp.g = a;
temp.b = a;
vertices.push_back(temp);
indeces.push_back(0);
for (int i = 1; i <= 72; i++){
a = a+0.10;
temp.x = radius*cos(((22 / 7.0) / 180.0)*theta);
temp.y = radius*sin(((22 / 7.0) / 180.0)*theta);
temp.u = a;
temp.v = a;//value1 / (i * 2);
temp.r = a;//value1 / i;
temp.g = a; //value1 / (i * 2);
temp.b = a;//value1 / i;
std::ofstream ofs;
vertices.push_back(temp);
indeces.push_back(i);
theta = theta + 10;
}
}
void initVbo(){
GLenum err = glewInit();
if (err != GLEW_OK) {
fprintf(stderr, "Error: %s\n", glewGetErrorString(err));
//return -1;
}
glPointSize(10);
glGenBuffers(1, &vboHandle[0]); // create a VBO handle
glBindBuffer(GL_ARRAY_BUFFER, vboHandle[0]); // bind the handle to the current VBO
glBufferData(GL_ARRAY_BUFFER, sizeof(vertex)* vertices.size(), &vertices[0], GL_DYNAMIC_DRAW); // allocate space and copy the data over
glBindBuffer(GL_ARRAY_BUFFER, 0); // clean up
glGenBuffers(1, &indexVBO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexVBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(GLuint64)*indeces.size(), &indeces[0], GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0); //clean up
}
void display(){
glClearColor(0, 0, 0, 1);
glClear(GL_COLOR_BUFFER_BIT);
glColor4f(1, 1, 1, 1);
glEnableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, vboHandle[0]);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexVBO);
glVertexPointer(2, GL_DOUBLE, sizeof(vertex), 0);
glDrawElements(GL_TRIANGLES, indeces.size(), GL_UNSIGNED_INT, (char*)NULL + 0);//2 indeces needed to make one line
glFlush();
}
void initializeGlut(int argc, char** argv){
std::cout << "entered";
glutInit(&argc, argv);
width = 800;
height = 800;
glutInitDisplayMode(GLUT_RGB | GLUT_SINGLE);
glutInitWindowSize(width, height);
glutCreateWindow("Bhavya's Program");
glutDisplayFunc(display);
}
void main(int argc, char** argv){
initializeGlut(argc, argv);
initNormal();
initVbo();
//glutReshapeFunc(reshape);
glutMainLoop();
}
The main problem in your code is that you're using the wrong type for the index values:
std::vector<GLuint64> indeces;
GLuint64 is not a valid index type, and it certainly does not match the index type specified in the draw command:
glDrawElements(GL_TRIANGLES, indeces.size(), GL_UNSIGNED_INT, ...);
Replace all occurrences of GLuint64 with the correct type, which is GLuint, and you should start seeing something.
The reason you're not seeing anything at all when drawing with GL_TRIANGLE_FAN becomes clearer if you picture the memory layout of the index buffer with the wrong type. If you write a sequence of 64-bit values, which are then interpreted as 32-bit values, every second value will be read as value 0.
With GL_TRIANGLE_FAN, all triangles are formed from the first index (which you set to 0) and two sequential indices from the array. With every second index read as 0, this means that every triangle has two indices of value 0. Which in turn means that all triangles are degenerate, and will not light up any pixels.
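To see this concretely, here is a small stand-alone sketch (not part of the original program, just an illustration) that writes 64-bit values and then reads the same bytes back as 32-bit values, assuming a little-endian machine:
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <vector>

int main() {
    std::vector<std::uint64_t> written = {0, 1, 2, 3};      // what the code stores in the index buffer
    std::vector<std::uint32_t> read(written.size() * 2);    // how GL_UNSIGNED_INT interprets those bytes
    std::memcpy(read.data(), written.data(), written.size() * sizeof(std::uint64_t));
    for (std::uint32_t v : read)
        std::printf("%u ", v);   // prints: 0 0 1 0 2 0 3 0 -- every second index is 0
    std::printf("\n");
}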
The circle drawing code will need some improvement as well. Right now you're looping from 0 to 720 degrees, which will go around the circle twice. Also, 22/7 is a very rough approximation of pi. You may want to use a more precise constant definition from a math header file instead.
While it's not a correctness problem, I would also avoid using double values for vertex attributes. OpenGL implementations internally use floats. If you specify the attributes as doubles, you will only use extra memory and add overhead to convert the values from double to float.
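For reference, here is a minimal sketch of how the data setup could look after those fixes (float attributes, GLuint indices, a real pi constant, and a single 360° sweep), assuming the rest of the program stays structurally the same:
#define _USE_MATH_DEFINES   // for M_PI on MSVC
#include <cmath>

struct vertex {
    float x, y;
    float u, v;
    float r, g, b;
};
std::vector<vertex> vertices;
std::vector<GLuint> indeces;   // GLuint matches GL_UNSIGNED_INT in glDrawElements

void initNormal() {
    const float radius = 0.3f;
    vertices.push_back({0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f});  // fan center
    indeces.push_back(0);
    for (int i = 1; i <= 37; i++) {   // 36 slices; vertex 37 repeats angle 0 to close the circle
        float theta = (i - 1) * 10.0f * static_cast<float>(M_PI) / 180.0f;
        vertices.push_back({radius * std::cos(theta), radius * std::sin(theta),
                            0.0f, 0.0f, 1.0f, 1.0f, 1.0f});
        indeces.push_back(i);
    }
}
// In initVbo(), the index buffer size becomes sizeof(GLuint) * indeces.size().
// In display(): glVertexPointer(2, GL_FLOAT, sizeof(vertex), 0);
//               glDrawElements(GL_TRIANGLE_FAN, indeces.size(), GL_UNSIGNED_INT, 0);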
I created a minimal setup with a fragment shader that just sets the color to white, so it doesn't even take a parameter.
The vertex shader passes in a matrix and transforms the points. We can see the sphere, but only part of it.
I hesitate to post the whole code; I'm trying as hard as possible for a minimal working example, but it's about 300 lines including the shader loading code. I will post just the core pieces, and if people want more I will post it all.
Here is the code for the demo, including a stripped-down Sphere class and glmain. Not shown is main(), which does a try..catch and calls glmain.
#include <GL/glew.h>
#include "common/common.hh"
#include <glm/glm.hpp>
#include <glm/ext.hpp>
#include <numbers>
#include <iostream>
#include <iomanip>
#include <cstdint>
#include <string>
using namespace std;
using namespace glm;
using namespace std::numbers;
class Sphere {
private:
uint32_t progid; // handle to the shader code
uint32_t vao; // array object container for vbo and indices
uint32_t vbo; // handle to the point data on the graphics card
uint32_t lbo; // handle to buffer of indices for lines for wireframe sphere
uint32_t latRes, lonRes;
uint32_t resolution;
public:
/**
* @brief Construct a sphere
*
* @param r radius of the sphere
* @param latRes resolution of the grid in latitude
* @param lonRes resolution of the grid in longitude
*/
Sphere(double r, uint32_t latRes, uint32_t lonRes);
~Sphere() { cleanup(); }
void render(mat4& trans);
void cleanup();
};
Sphere::Sphere(double r, uint32_t latRes, uint32_t lonRes) : latRes(latRes), lonRes(lonRes),
resolution((2*latRes-2)*lonRes + 2) {
progid = loadShaders( "05_3d.vert", "02simple.frag" );
double dlon = 2.0*numbers::pi / lonRes, dlat = numbers::pi / latRes;
double z;
double lat = -numbers::pi/2 + dlat; // latitude in radians
double rcircle;
float vert[resolution*3]; // x,y,z
uint32_t c = 0;
for (uint32_t j = 0; j < 2*latRes-2; j++, lat += dlat) {
//what is the radius of the circle at that height?
rcircle = r* cos(lat); // size of the circle at this latitude
z = r * sin(lat); // height of each circle
double t = 0;
for (uint32_t i = 0; i < lonRes; i++, t += dlon) {
vert[c++] = rcircle * cos(t);
vert[c++] = rcircle * sin(t);
vert[c++] = z;
}
cout << endl;
}
// south pole
vert[c++] = 0;
vert[c++] = 0;
vert[c++] = -r;
// north pole
vert[c++] = 0;
vert[c++] = 0;
vert[c++] = r;
cout << "resolution: " << resolution << endl;
cout << "predicted num vert components: " << resolution*3 << endl;
cout << "actual num vert components: " << c << endl;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, resolution, vert, GL_STATIC_DRAW);
glBindVertexArray(0);
}
void Sphere::render(mat4& trans) {
glUseProgram(progid); // Use the shader
uint32_t matrixID = glGetUniformLocation(progid, "trans");
glUniformMatrix4fv(matrixID, 1, GL_FALSE, &trans[0][0]);
glBindVertexArray(vao);
glVertexAttribPointer(
0, // first parameter to shader, numbered 0
3, // 3 floating point numbers (x,y,z)
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // this is the entire set of data, move on
(void*)0 // array buffer offset
);
glEnableVertexAttribArray(0); // pass x,y to shader
glEnable(GL_PROGRAM_POINT_SIZE);
//points don't work, why not? And how to set the size of the points?
glPointSize(5);
glDrawArrays(GL_POINT, 0, resolution);
// line strips work, but incomplete (see screen shot)
glDrawArrays(GL_LINE_STRIP, 0, resolution);
glDisableVertexAttribArray(0);
}
void Sphere::cleanup() {
glDeleteBuffers(1, &vbo); // remove vbo memory from graphics card
glDeleteVertexArrays(1, &vao); // remove vao from graphics card
glDeleteProgram(progid);
}
using namespace std;
void glmain() {
win = createWindow(800, 800, "Sphere demo");
glClearColor(0.0f, 0.0f, 0.4f, 0.0f); // Dark blue background
Sphere sphere(1.0, 30, 15);
mat4 trans= lookAt(vec3(0,0,0), vec3(10,5,10), vec3(0,1,0));
do {
glClear( GL_COLOR_BUFFER_BIT ); // Clear the screen
//glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glDisable(GL_DEPTH_TEST);
//glDepthFunc(GL_LESS);
sphere.render(trans);
glfwSwapBuffers(win); // double buffer
glfwPollEvents();
} while( glfwGetKey(win, GLFW_KEY_ESCAPE ) != GLFW_PRESS &&
glfwWindowShouldClose(win) == 0 );
}
Points did not display at all, so that call is commented out; we drew a line strip instead. That works somewhat, but why is it truncated? Why doesn't it at least finish the current layer of the sphere?
The shaders are shown below:
#version 330 core
// Input vertex data, different for all executions of this shader.
layout(location = 0) in vec3 v;
uniform mat4 trans;
void main(){
gl_PointSize = 5;
gl_Position = trans * vec4(v,1.0);
gl_Position.w = 1.0;
}
fragment shader:
#version 330 core
out vec4 color;
void main()
{
color = vec4(1,1,1,1);
}
The size argument of glBufferData specifies the size in bytes of the buffer object's new data store, but you are passing the number of vertices:
glBufferData(GL_ARRAY_BUFFER, resolution, vert, GL_STATIC_DRAW);
It needs to be the total byte count of the vertex data:
glBufferData(GL_ARRAY_BUFFER, resolution * 3 * sizeof(float), vert, GL_STATIC_DRAW);
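As a side note (an optional restructuring, not required by the fix), using std::vector instead of the variable-length array keeps the byte count tied to the container:
#include <vector>

std::vector<float> vert(resolution * 3);   // x, y, z per vertex
// ... fill vert[c++] exactly as before ...
glBufferData(GL_ARRAY_BUFFER,
             vert.size() * sizeof(float),  // size in bytes, not an element count
             vert.data(), GL_STATIC_DRAW);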
500x500 grid with 1000 subdivisions:
Just one question: why is this happening?
#include <iostream>
#include <sstream>
#include <vector>
#define GLEW_STATIC
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include "glm/glm.hpp"
#include "glm/gtc/matrix_transform.hpp"
#include "GameEngine.hpp"
#include "ShaderProgram.h"
#include "Camera.h"
#include "Mesh.h"
const char *title = "Terrain";
GameEngine engine;
OrbitCamera orbitCamera;
float gYaw = 0.0f;
float gPitch = 1.0f;
float gRadius = 200.0f;
const float MOUSE_SENSTIVITY = 0.25f;
bool gWireFrame = false;
void glfw_onKey(GLFWwindow *window, int key, int scancode, int action, int mode);
void glfw_onMouseMove(GLFWwindow *window, double posX, double posY);
void glfw_onMouseScroll(GLFWwindow *window, double deltaX, double deltaY);
int main()
{
if (!engine.init(1024, 768, title))
{
std::cerr << "OpenGL init failed" << std::endl;
std::cin.get();
return -1;
}
//set callbacks
glfwSetKeyCallback(engine.getWindow(), glfw_onKey);
glfwSetCursorPosCallback(engine.getWindow(), glfw_onMouseMove);
std::vector<Vertex> VER;
std::vector<glm::vec3> verts;
std::vector<unsigned int> indices;
std::vector<glm::vec3> norms;
int subDiv = 1000;
int width = 500;
int height = 500;
int size = 0;
for (int row = 0; row < subDiv; row++)
{
for (int col = 0; col < subDiv; col++)
{
float x = (float)((col * width) / subDiv - (width / 2.0));
float z = ((subDiv - row) * height) / subDiv - (height / 2.0);
glm::vec3 pos = glm::vec3(x, 0, z);
verts.push_back(pos);
}
}
size = subDiv * subDiv;
size = verts.size();
for (int row = 0; row < subDiv -1 ; row++)
{
for (int col = 0; col < subDiv -1; col++)
{
int row1 = row * (subDiv);
int row2 = (row+1) * (subDiv);
indices.push_back(row1+col);
indices.push_back(row1+col+1);
indices.push_back( row2+col+1);
indices.push_back(row1+col);
indices.push_back( row2+col+1);
indices.push_back(row2+col);
}
}
for (int i = 0; i < verts.size(); i++)
{
Vertex vertex;
vertex.position = verts[i];
vertex.normal = glm::vec3(0, 0, 0);
vertex.texCoords = glm::vec2(0, 0);
VER.push_back(vertex);
}
VER.begin();
for (int i = 0; i < indices.size(); i += 3)
{
Vertex a = VER[indices[i]];
Vertex b = VER[indices[i + 1]];
Vertex c = VER[indices[i + 2]];
glm::vec3 p = glm::cross(b.position - a.position, c.position - a.position);
VER[indices[i]].normal += p;
VER[indices[i + 1]].normal += p;
VER[indices[i + 2]].normal += p;
}
for (int i = 0; i < VER.size(); i++)
{
VER[i].normal = glm::normalize(VER[i].normal);
}
glm::vec3 cubePos = glm::vec3(0.0f, 0.0f, -5.0f);
GLuint vbo, vao, ibo;
glGenVertexArrays(1, &vao);
glGenBuffers(1, &vbo);
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, VER.size() * sizeof(Vertex), &VER[0], GL_STATIC_DRAW);
// Vertex Positions
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*)0);
glEnableVertexAttribArray(0);
// Normals attribute
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*)(3 * sizeof(GLfloat)));
glEnableVertexAttribArray(1);
// Vertex Texture Coords
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*)(6 * sizeof(GLfloat)));
glEnableVertexAttribArray(2);
int n = indices.size() * sizeof(unsigned int);
glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(unsigned int), &indices[0], GL_STATIC_DRAW);
glBindVertexArray(0);
ShaderProgram shaderProgram;
shaderProgram.loadShaders("shaders/vert.glsl", "shaders/frag.glsl");
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
while (!glfwWindowShouldClose(engine.getWindow()))
{
glfwPollEvents();
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glm::mat4 model, view, projection;
model = glm::mat4(1.0f);
orbitCamera.setLookAt(glm::vec3(0, 0, 0));
orbitCamera.rotate(gYaw, gPitch);
orbitCamera.setRadius(gRadius);
model = glm::translate(model, glm::vec3(0, 0, 0));
//model = glm::scale(model, glm::vec3(1, 0, 1));
//model = scaleMat;
projection = glm::perspective(glm::radians(45.0f), (float)engine.getWidth() / (float)engine.getHeight(), 0.00001f, 100.0f);
shaderProgram.use();
glm::vec3 viewPos;
viewPos.x = orbitCamera.getPosition().x;
viewPos.y = orbitCamera.getPosition().y;
viewPos.z = orbitCamera.getPosition().z;
shaderProgram.setUniform("projection", projection);
shaderProgram.setUniform("view", orbitCamera.getViewMatrix());
shaderProgram.setUniform("model", model);
shaderProgram.setUniform("lightPos", glm::vec3(5, 10, 10));
shaderProgram.setUniform("viewPos", viewPos);
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES,indices.size(), GL_UNSIGNED_INT, 0);
//glDrawArrays(GL_TRIANGLES, 0, VER.size());
glBindVertexArray(0);
glfwSwapBuffers(engine.getWindow());
}
//cleanup
glDeleteVertexArrays(1, &vao);
glDeleteBuffers(1, &vbo);
glfwTerminate();
return 0;
}
void glfw_onKey(GLFWwindow *window, int key, int scancode, int action, int mode)
{
if (key == GLFW_KEY_ESCAPE && action == GLFW_PRESS)
{
glfwSetWindowShouldClose(window, GL_TRUE);
}
if (key == GLFW_KEY_E && action == GLFW_PRESS)
{
gWireFrame = !gWireFrame;
if (gWireFrame)
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
else
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
}
}
void glfw_onMouseMove(GLFWwindow *window, double posX, double posY)
{
static glm::vec2 lastMousePos = glm::vec2(0, 0);
if (glfwGetMouseButton(engine.getWindow(), GLFW_MOUSE_BUTTON_LEFT) == 1)
{
gYaw -= ((float)posX - lastMousePos.x) * MOUSE_SENSTIVITY;
gPitch += ((float)posY - lastMousePos.y) * MOUSE_SENSTIVITY;
}
if (glfwGetMouseButton(engine.getWindow(), GLFW_MOUSE_BUTTON_RIGHT) == 1)
{
float dx = 0.01f * ((float)posX - lastMousePos.x);
float dy = 0.01f * ((float)posY - lastMousePos.y);
gRadius += dx - dy;
}
lastMousePos.x = (float)posX;
lastMousePos.y = (float)posY;
}
This is the main code; the rest is just basic initialization, nothing fancy.
I've tried changing the swap interval, but that doesn't seem to be the problem.
I can share the code for the other classes if anyone wants to take a look. I've also tried lowering the subdivisions.
Edit: after increasing the far plane value to 8000:
Still not crisp.
The edit with the second image is telling you what is happening: if tampering with znear/zfar changes the output like that, it means your depth buffer has too few bits for the range you want to use. (Increasing zfar should actually make things worse; you just don't see it for some reason, maybe it is cut off or it hits some odd math-accuracy singularity.)
My usual rule for selecting the planes is:
zfar/znear < (2^depth_buffer_bitwidth)/2
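To put numbers on it: with 16 depth bits the ratio should stay below 2^16/2 = 32768, and with 24 bits below 2^24/2 ≈ 8.4 million. The posted projection uses znear = 0.00001 and zfar = 100, a ratio of 10^7, and the edit's far plane of 8000 pushes it to 8×10^8, so both are outside either budget.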
Check your depth buffer bit width.
Try to use 24 bits (you might have 16 bits right now); that should work on all graphics cards these days. You can try 32 bits too, but that will only work on newer cards. I am using the code from What is the proper OpenGL initialisation on Intel HD 3000? to get the maximum I can. However, you are using GLFW, so you need to find out how to request this there; there should be a window hint for it.
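For illustration, a minimal sketch of what that could look like with GLFW 3 (in your code this would live inside GameEngine::init, which isn't shown):
glfwWindowHint(GLFW_DEPTH_BITS, 24);   // request a 24-bit depth buffer before creating the window
GLFWwindow* window = glfwCreateWindow(1024, 768, "Terrain", NULL, NULL);
glfwMakeContextCurrent(window);

// after context creation, check what was actually allocated
// (legacy query, valid in a compatibility profile):
GLint depthBits = 0;
glGetIntegerv(GL_DEPTH_BITS, &depthBits);
std::cout << "depth bits: " << depthBits << std::endl;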
Increase znear as much as you can.
Changing znear has much, much more impact than changing zfar.
Use a linear depth buffer.
This is the best option for large-depth-range views like terrains, which cover stuff across the whole depth range. See: How to correctly linearize depth in OpenGL ES in iOS? However, you need shaders and the new API for this; I do not think it is doable with the old API, but luckily you are on the new API already.
If none of the above is enough, you can stack multiple frustums together at the cost of rendering the same geometry several times. For more info see: Is it possible to make realistic n-body solar system simulation in matter of size and mass?
How do you initialize OpenGL?
Are you using GL_BLEND?
Using blending is nice to get anti-aliased polygon edges, however it also means your z-buffer gets updated even when a very translucent fragment is drawn. This prevents other opaque fragments with the same z-depth from being drawn, which might be what is causing those holes. You could try disabling GL_BLEND to see if the issue goes away.
What depth function are you using?
By default it is set to GL_LESS. You might want to try glDepthFunc(GL_LEQUAL), so fragments with the same z-depth will be drawn. However, due to rounding errors this might not solve your problem entirely.
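A minimal sketch of both experiments together, placed in the GL state setup (plain GL calls, nothing specific to your classes):
glDisable(GL_BLEND);      // rule out translucent fragments writing depth and masking later ones
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);   // accept fragments that tie on depth instead of rejecting them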
I have succeeded in extracting a point cloud using a Kinect, but I cannot go further to save it or to add the next captured frame to it. Here is what I have so far, and I would love to enhance it so that I can store many point clouds in one file and build a big 3D map.
#include "main.h"
#include "glut.h"
#include <cmath>
#include <cstdio>
#include <Windows.h>
#include <Ole2.h>
#include <Kinect.h>
// We'll be using buffer objects to store the kinect point cloud
GLuint vboId;
GLuint cboId;
// Intermediate Buffers
unsigned char rgbimage[colorwidth*colorheight*4]; // Stores RGB color image
ColorSpacePoint depth2rgb[width*height]; // Maps depth pixels to rgb pixels
CameraSpacePoint depth2xyz[width*height]; // Maps depth pixels to 3d coordinates
// Kinect Variables
IKinectSensor* sensor; // Kinect sensor
IMultiSourceFrameReader* reader; // Kinect data source
ICoordinateMapper* mapper; // Converts between depth, color, and 3d coordinates
bool initKinect() {
if (FAILED(GetDefaultKinectSensor(&sensor))) {
return false;
}
if (sensor) {
sensor->get_CoordinateMapper(&mapper);
sensor->Open();
sensor->OpenMultiSourceFrameReader(
FrameSourceTypes::FrameSourceTypes_Depth | FrameSourceTypes::FrameSourceTypes_Color,
&reader);
return reader;
} else {
return false;
}
}
void getDepthData(IMultiSourceFrame* frame, GLubyte* dest) {
IDepthFrame* depthframe;
IDepthFrameReference* frameref = NULL;
frame->get_DepthFrameReference(&frameref);
frameref->AcquireFrame(&depthframe);
if (frameref) frameref->Release();
if (!depthframe) return;
// Get data from frame
unsigned int sz;
unsigned short* buf;
depthframe->AccessUnderlyingBuffer(&sz, &buf);
// Write vertex coordinates
mapper->MapDepthFrameToCameraSpace(width*height, buf, width*height, depth2xyz);
float* fdest = (float*)dest;
for (int i = 0; i < sz; i++) {
*fdest++ = depth2xyz[i].X;
*fdest++ = depth2xyz[i].Y;
*fdest++ = depth2xyz[i].Z;
}
// Fill in depth2rgb map
mapper->MapDepthFrameToColorSpace(width*height, buf, width*height, depth2rgb);
if (depthframe) depthframe->Release();
}
void getRgbData(IMultiSourceFrame* frame, GLubyte* dest) {
IColorFrame* colorframe;
IColorFrameReference* frameref = NULL;
frame->get_ColorFrameReference(&frameref);
frameref->AcquireFrame(&colorframe);
if (frameref) frameref->Release();
if (!colorframe) return;
// Get data from frame
colorframe->CopyConvertedFrameDataToArray(colorwidth*colorheight*4, rgbimage, ColorImageFormat_Rgba);
// Write color array for vertices
float* fdest = (float*)dest;
for (int i = 0; i < width*height; i++) {
ColorSpacePoint p = depth2rgb[i];
// Check if color pixel coordinates are in bounds
if (p.X < 0 || p.Y < 0 || p.X > colorwidth || p.Y > colorheight) {
*fdest++ = 0;
*fdest++ = 0;
*fdest++ = 0;
}
else {
int idx = (int)p.X + colorwidth*(int)p.Y;
*fdest++ = rgbimage[4*idx + 0]/255.;
*fdest++ = rgbimage[4*idx + 1]/255.;
*fdest++ = rgbimage[4*idx + 2]/255.;
}
// Don't copy alpha channel
}
if (colorframe) colorframe->Release();
}
void getKinectData() {
IMultiSourceFrame* frame = NULL;
if (SUCCEEDED(reader->AcquireLatestFrame(&frame))) {
GLubyte* ptr;
glBindBuffer(GL_ARRAY_BUFFER, vboId);
ptr = (GLubyte*)glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
if (ptr) {
getDepthData(frame, ptr);
}
glUnmapBuffer(GL_ARRAY_BUFFER);
glBindBuffer(GL_ARRAY_BUFFER, cboId);
ptr = (GLubyte*)glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
if (ptr) {
getRgbData(frame, ptr);
}
glUnmapBuffer(GL_ARRAY_BUFFER);
}
if (frame) frame->Release();
}
void rotateCamera() {
static double angle = 0.;
static double radius = 3.;
double x = radius*sin(angle);
double z = radius*(1-cos(angle)) - radius/2;
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(x,0,z,0,0,radius/2,0,1,0);
angle += 0.002;
}
void drawKinectData() {
getKinectData();
rotateCamera();
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glVertexPointer(3, GL_FLOAT, 0, NULL);
glBindBuffer(GL_ARRAY_BUFFER, cboId);
glColorPointer(3, GL_FLOAT, 0, NULL);
glPointSize(1.f);
glDrawArrays(GL_POINTS, 0, width*height);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
}
int main(int argc, char* argv[]) {
if (!init(argc, argv)) return 1;
if (!initKinect()) return 1;
// OpenGL setup
glClearColor(0,0,0,0);
glClearDepth(1.0f);
// Set up array buffers
const int dataSize = width*height * 3 * 4;
glGenBuffers(1, &vboId);
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferData(GL_ARRAY_BUFFER, dataSize, 0, GL_DYNAMIC_DRAW);
glGenBuffers(1, &cboId);
glBindBuffer(GL_ARRAY_BUFFER, cboId);
glBufferData(GL_ARRAY_BUFFER, dataSize, 0, GL_DYNAMIC_DRAW);
// Camera setup
glViewport(0, 0, width, height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45, width /(GLdouble) height, 0.1, 1000);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0,0,0,0,0,1,0,1,0);
// Main loop
execute();
return 0;
}
If you're only looking at concatenating point clouds, this could be easily achieved with PCL using the += operator between point clouds. There's a small tutorial about that on the PCL website, that you can find here.
On the other hand, if you're looking for a way to build a big map by merging and stitching different point clouds, you would need to find the set of intersecting features between the point clouds and transform them so that they overlap in the right region. You can do that by building an algorithm based on Iterative Closest Point (ICP). It might be interesting to look at KinFu, which does that in real time and produces a mesh from the scanned clouds. The source code is available on the PCL Project GitHub.
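A minimal sketch of the += route (the point type, variable names, and file name are illustrative assumptions):
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/io/pcd_io.h>

pcl::PointCloud<pcl::PointXYZRGB> map;     // accumulated 3D map
pcl::PointCloud<pcl::PointXYZRGB> frame;   // cloud built from the current Kinect frame
                                           // (filled from depth2xyz / rgbimage each capture)

map += frame;                                // concatenate; both clouds must have the same fields
pcl::io::savePCDFileBinary("map.pcd", map);  // persist the growing map to a single file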
I want to draw a circle at a specific position using the coordinates of its centre and its radius. All the methods I found use GLUT, and none of them position the circle at a specific point.
I want to mention that I'm new to these things, so if I'm doing something wrong, I would be happy to know.
This is what I have done so far:
class Constructor
Mesh::Mesh(Vertex * vertices, unsigned int numVertices) {
m_drawCont = numVertices;
glGenVertexArrays(1, &m_vertexArrayObject);
glBindVertexArray(m_vertexArrayObject);
glGenBuffers(NUM_BUFFERS, m_vertexArrayBuffers);
glBindBuffer(GL_ARRAY_BUFFER, m_vertexArrayBuffers[POSITION_VB]);
//PUT ALL OF OUR VERTEX DATA IN THE ARRAY
glBufferData(GL_ARRAY_BUFFER, numVertices * sizeof(vertices[0]), vertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glBindVertexArray(0);
}
Draw Circle Method
void Mesh::DrawCircle() {
glBindVertexArray(m_vertexArrayObject);
glDrawArrays(GL_LINE_LOOP, 0, m_drawCont);
glBindVertexArray(0);
}
Main method
int main(int argc, char **argv) {
Display display(800, 600, "Window1");
Shader shader("./res/basicShader");
Vertex vertices2[3000];
for (int i = 0; i < 3000; i++) {
vertices2[i] = Vertex(glm::vec3(cos(2 * 3.14159*i / 1000.0), sin(2 * 3.14159*i / 1000.0), 0));
}
Mesh mesh3(vertices2, sizeof(vertices2) / sizeof(vertices2[0]));
while (!display.IsClosed()) {
display.Clear(0.0f, 0.15f, 0.3f, 1.0f);
shader.Bind();
mesh3.DrawCircle();
display.Update();
}
}
And this is the output image:
The code below actually creates the circle vertices. Since cos(x) and sin(x) return values in [-1..1], multiplying them by some value gives a circle with a radius of that value, and adding or subtracting offsets to the x and y values moves the center of the circle to a specific position. The fragments value specifies the level of detail of the circle; the greater, the better.
std::vector<Vertex> CreateCircleArray(float radius, float x, float y, int fragments)
{
const float PI = 3.1415926f;
std::vector<Vertex> result;
float increment = 2.0f * PI / fragments;
for (float currAngle = 0.0f; currAngle <= 2.0f * PI; currAngle += increment)
{
result.push_back(glm::vec3(radius * cos(currAngle) + x, radius * sin(currAngle) + y, 0));
}
return result;
}
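For example (a sketch reusing the Display, Shader, and Mesh classes from your question), a circle of radius 0.25 centred at (0.5, -0.3) could be built and drawn like this:
std::vector<Vertex> circleVerts = CreateCircleArray(0.25f, 0.5f, -0.3f, 64);
Mesh circleMesh(circleVerts.data(), (unsigned int)circleVerts.size());

while (!display.IsClosed()) {
    display.Clear(0.0f, 0.15f, 0.3f, 1.0f);
    shader.Bind();
    circleMesh.DrawCircle();   // GL_LINE_LOOP over the circle's vertices
    display.Update();
}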
I have a simple OpenGL program in which I am trying to use Vertex Buffer Objects for rendering instead of the old glBegin()/glEnd(). Basically, the user clicks on the window to indicate a starting point, and then presses a key to generate subsequent points, which OpenGL draws as a line.
I've implemented this using glBegin() and glEnd(), but have not been successful using a VBO. I am wondering if the problem is that after I initialize the VBO, I'm adding more vertices for which it doesn't have memory allocated, and thus it doesn't display them.
Edit: Also, I'm a bit confused as to how it knows exactly which values in the vertex struct to use for x and y, as well as for r, g, b. I haven't been able to find a clear example of this.
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <Math.h>
#include <iostream>
#include <vector>
#include <GL/glew.h>
#include <GL/glut.h>
struct vertex {
float x, y, u, v, r, g, b;
};
const int D = 10; // distance
const int A = 10; // angle
const int WINDOW_WIDTH = 500, WINDOW_HEIGHT = 500;
std::vector<vertex> vertices;
boolean start = false;
GLuint vboId;
void update_line_point() {
vertex temp;
temp.x = vertices.back().x + D * vertices.back().u;
temp.y = vertices.back().y + D * vertices.back().v;
temp.u = vertices.back().u;
temp.v = vertices.back().v;
vertices.push_back(temp);
}
void update_line_angle() {
float u_prime, v_prime;
u_prime = vertices.back().u * cos(A) - vertices.back().v * sin(A);
v_prime = vertices.back().u * sin(A) + vertices.back().v * cos(A);
vertices.back().u = u_prime;
vertices.back().v = v_prime;
}
void initVertexBuffer() {
glGenBuffers(1, &vboId);
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertex) * vertices.size(), &vertices[0], GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
}
void displayCB() {
glClear(GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0, WINDOW_WIDTH, 0, WINDOW_HEIGHT);
if (start) {
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glVertexPointer(2, GL_FLOAT, sizeof(vertex), &vertices[0]);
glColorPointer(3, GL_FLOAT, sizeof(vertex), &vertices[0]);
glDrawArrays(GL_LINE_STRIP, 0, vertices.size());
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, 0);
}
/***** this is what I'm trying to achieve
glColor3f(1, 0, 0);
glBegin(GL_LINE_STRIP);
for (std::vector<vertex>::size_type i = 0; i < vertices.size(); i++) {
glVertex2f(vertices[i].x, vertices[i].y);
}
glEnd();
*****/
glFlush();
glutSwapBuffers();
}
void mouseCB(int button, int state, int x, int y) {
if (state == GLUT_DOWN) {
vertices.clear();
vertex temp = {x, WINDOW_HEIGHT - y, 1, 0, 1, 0, 0}; // default red color
vertices.push_back(temp);
start = true;
initVertexBuffer();
}
glutPostRedisplay();
}
void keyboardCB(unsigned char key, int x, int y) {
switch(key) {
case 'f':
if (start) {
update_line_point();
}
break;
case 't':
if (start) {
update_line_angle();
}
break;
}
glutPostRedisplay();
}
void initCallbackFunc() {
glutDisplayFunc(displayCB);
glutMouseFunc(mouseCB);
glutKeyboardFunc(keyboardCB);
}
int main(int argc, char** argv) {
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_RGB|GLUT_DOUBLE|GLUT_DEPTH);
glutInitWindowSize(WINDOW_WIDTH, WINDOW_HEIGHT);
glutInitWindowPosition(100, 100);
glutCreateWindow("Test");
initCallbackFunc();
// initialize glew
GLenum glewInitResult;
glewExperimental = GL_TRUE;
glewInitResult = glewInit();
if (GLEW_OK != glewInitResult) {
std::cerr << "Error initializing glew." << std::endl;
return 1;
}
glClearColor(1, 1, 1, 0);
glutMainLoop();
return 0;
}
If you have a VBO bound then the pointer argument to the gl*Pointer() calls is interpreted as a byte offset from the beginning of the VBO, not an actual pointer. Your usage is consistent with vertex array usage though.
So for your vertex struct x starts at byte zero and r starts at byte sizeof(float) * 4.
Also, your mouse callback resets your vertex vector on every call, so you would never be able to have more than one vertex in it at any given time. It also leaked VBO names via the glGenBuffers() call in initVertexBuffer().
Give this a shot:
#include <GL/glew.h>
#include <GL/glut.h>
#include <iostream>
#include <vector>
struct vertex
{
float x, y;
float u, v;
float r, g, b;
};
GLuint vboId;
std::vector<vertex> vertices;
void mouseCB(int button, int state, int x, int y)
{
y = glutGet( GLUT_WINDOW_HEIGHT ) - y;
if (state == GLUT_DOWN)
{
vertex temp = {x, y, 1, 0, 1, 0, 0}; // default red color
vertices.push_back(temp);
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertex) * vertices.size(), &vertices[0], GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
}
glutPostRedisplay();
}
void displayCB()
{
glClearColor(1, 1, 1, 0);
glClear(GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
double w = glutGet( GLUT_WINDOW_WIDTH );
double h = glutGet( GLUT_WINDOW_HEIGHT );
glOrtho( 0, w, 0, h, -1, 1 );
glMatrixMode( GL_MODELVIEW );
glLoadIdentity();
if ( vertices.size() > 1 )
{
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glVertexPointer(2, GL_FLOAT, sizeof(vertex), (void*)(sizeof( float ) * 0));
glColorPointer(3, GL_FLOAT, sizeof(vertex), (void*)(sizeof( float ) * 4));
glDrawArrays(GL_LINE_STRIP, 0, vertices.size());
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, 0);
}
glutSwapBuffers();
}
int main(int argc, char** argv)
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_RGB|GLUT_DOUBLE|GLUT_DEPTH);
glutInitWindowSize(500, 500);
glutInitWindowPosition(100, 100);
glutCreateWindow("Test");
// initialize glew
glewExperimental = GL_TRUE;
GLenum glewInitResult = glewInit();
if (GLEW_OK != glewInitResult) {
std::cerr << "Error initializing glew." << std::endl;
return 1;
}
glGenBuffers(1, &vboId);
glutDisplayFunc(displayCB);
glutMouseFunc(mouseCB);
glutMainLoop();
return 0;
}
A VBO is a buffer located somewhere in memory (almost always in dedicated GPU memory - VRAM) of a fixed size. You specify this size in glBufferData, and you also simultaneously give the GL a pointer to copy from. The key word here is copy. Everything you do to the vector after glBufferData isn't reflected in the VBO.
You should be binding and doing another glBufferData call after changing the vector. You will also probably get better performance from glBufferSubData or glMapBuffer if the VBO is already large enough to handle the new data, but in a small application like this the performance hit of calling glBufferData every time is basically non-existent.
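If you do want to go the glBufferSubData route, a sketch could look like this (MAX_VERTICES is an assumed capacity, not something from your code):
const size_t MAX_VERTICES = 1024;   // assumed upper bound on line points

// once, at startup: allocate the whole buffer up front
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferData(GL_ARRAY_BUFFER, MAX_VERTICES * sizeof(vertex), NULL, GL_DYNAMIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);

// every time one vertex is appended to `vertices`: patch in just the new one
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferSubData(GL_ARRAY_BUFFER,
                (vertices.size() - 1) * sizeof(vertex),  // byte offset of the new vertex
                sizeof(vertex),                          // number of bytes to copy
                &vertices.back());
glBindBuffer(GL_ARRAY_BUFFER, 0);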
Also, to address your other question about how OpenGL knows which values to use for x, y and for r, g, b: the way your VBO is set up, the values are interleaved, so in memory your vertices will look like this:
+-------------------------------------------------
| x | y | u | v | r | g | b | x | y | u | v | ...
+-------------------------------------------------
You tell OpenGL where your vertices and colors are with the glVertexPointer and glColorPointer functions respectively.
The size parameter specifies how many elements there are for each vertex. In this case, it's 2 for vertices, and 3 for colors.
The type parameter specifies what type each element is. In your case it's GL_FLOAT for both.
The stride parameter is how many bytes you need to skip from the start of one vertex to the start of the next. With an interleaved setup like yours, this is simply sizeof(vertex) for both.
The last parameter, pointer, isn't actually a pointer to your vector in this case. When a VBO is bound, pointer becomes a byte offset into the VBO. For vertices, this should be 0, since the first vertex starts at the very first byte of the VBO. For colors, this should be 4 * sizeof(float), since the first color is preceded by 4 floats.
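An equivalent and slightly more self-documenting way to spell those two offsets (a common idiom, not something the code above requires) is offsetof:
#include <cstddef>   // offsetof

glVertexPointer(2, GL_FLOAT, sizeof(vertex), (void*)offsetof(vertex, x));  // byte offset 0
glColorPointer (3, GL_FLOAT, sizeof(vertex), (void*)offsetof(vertex, r));  // byte offset 4 * sizeof(float)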